December 5, 2019

"Wonder Material" Graphene: Will it Change or Break the Game?

As the thinnest yet strongest material on Earth, graphene boasts a plethora of other amazing properties. Widely considered a “wonder material,” how will it impact the physical world we know once it’s incorporated into everything from batteries and medical sensors to windows and condoms?

What is graphene?

Graphene is an allotrope (a given physical form) of the element carbon. You likely own or have encountered other allotropes of carbon, such as the graphite in pencils, charcoal-cooked yakitori or the diamonds you might find in a set of grills. Yet graphene is a “new arrival” that has actually been produced by accident for centuries through applications of graphite. It was observed in 1962 before being rediscovered, isolated and characterized in 2004. There are so many special properties packed into such a relatively “simple” composition, most notably:

  • Thin: At one atom “thick,” graphene is essentially a super-thin sheet of linked carbon atoms and is currently the thinnest known material.
  • Strong: It’s also the strongest material known to exist relative to its thickness, at roughly 100 times the strength of the strongest steel.
  • Low Density: Again, compared to steel, the material is significantly less dense.
  • Conductive: It’s an excellent conductor of both heat and electricity.
  • Permittivity: High permittivity means it stores electric potential energy in an electric field. Combined with graphene’s thinness and high surface area, this means the potential for better batteries.
  • Semi-Permeable: It’s still porous enough to allow water through while filtering other substances.

There are, of course, many other properties of graphene that science can’t yet fully explain, which makes it suitable for a lot of yet-undiscovered uses.

How could it be used?

  • Sex: Among other organizations, the Bill and Melinda Gates Foundation has looked into using graphene to make even thinner but stronger condoms, offering a double whammy of both pleasure and protection.
  • Military and Law Enforcement: The material can absorb twice the amount of force as Kevlar, the current most commonly used material in bulletproof vests.
  • Fashion: The material’s properties make it a no-brainer for techwear (such as with Volleback’s Graphene Jacket), but we’re curious to see other fashion contexts where it could be used.
  • Medicine: The material’s thinness and conductivity pave the way for wearable dermal sensors that help us discreetly track our health and fitness.
  • Sports: With such a high strength to weight ratio, the material has been used professionally as early as the 2018 Winter Olympics in Pyeongchang, South Korea, when it was used to construct a medal-winning sled.
  • Desalination: The fineness of the structure might let water through while filtering out salt, which could potentially revolutionize desalination and increase freshwater supplies.
  • Hair Dye: While not as seemingly game-changing, graphene offers a comparable and non-toxic alternative to hair dyes, while giving hair anti-static and thermal resistance properties.

Are there drawbacks?

With so many potential uses for graphene, it’s not hard to see why it’s hailed as a wonder material, alongside eco-friendly favorites like fungal mycelium. For all this potential, though, are there costs to this wonder material beyond the current financial constraints of producing it?

The risks surrounding graphene tend to start with its potential to harm us simply because our bodies don’t know what to do with such a “novel material.” For one, it’s brittle, and being thin and strong makes it extremely sharp when fractured — sharp enough to pierce cell membranes and interfere with their function. And as with once-celebrated materials like fireproofing asbestos, further research suggests graphene has the potential to be toxic when inhaled in large quantities, since the body can’t get rid of it.

Then there’s of course, the uncertainty of how the needs of scientific progress, commerce, creativity and industry will combine to produce unpredictable results — especially when it comes to drawing inspiration from nature, playing with genes and involving other living things aside from us.

For example, researchers added graphene to a spider’s drinking water, allowing it to produce silk strands that could hold the weight of a human. This makes it significantly stronger than BioSteel, developed in the early 2000s, which comes from goats genetically modified to produce silk from Orb Weaver spiders in their milk.

The Takeaway

We’re always excited to hear about new technology especially when that tech takes the form of a substance that can be applied to different contexts.

Graphene represents one of those materials we imagine when we think of a fantastical future where everything is functionally efficient to the point of otherworldliness. Note that this isn’t the same as how non-stick Teflon became the trendy material in cookware before falling into disrepute for its toxicity, and heat-resistant silicone became popular in its place.

Graphene incorporates so many desirable traits into one tiny material that one day, when it becomes easy enough to create (even, say, in our own homes), there’s a high chance it will quickly be incorporated into just about everything, seamlessly and in ways both functional and artful.

November 25, 2019

You Have a Problem: Reframing Gear Acquisition Syndrome

We take a much-needed look at G.A.S., what causes it and how to pass it. We promise that this will likely be the last double entendre involving the word “gas” in this article.

What is G.A.S?

It’s not quite the common cold, but it can make us just as miserable. G.A.S stands for “gear acquisition syndrome” and is a strain of addictive retail therapy commonly associated with photographers. It involves purchasing gear at a rate that’s higher than needed and often distracts from the activity the gear’s intended for.

Yet this type of acquisitive behavior can easily affect non-photographers as well, such as people who work with audio. Rob Power and Matt Parker of Music Radar outline the 7 signs of G.A.S., which just as accurately represent its phases. We’ve listed them here with examples from our own experience with G.A.S. (aggregated so we don’t single anyone out, Nate).

  1. Dissatisfaction: you’re dissatisfied with your current equipment.
  2. Desire: you see a new piece of equipment that will “complete” you.
  3. Research: you suffer hours of paralysis by analysis wading through options.
  4. Purchase: you break the deadlock with a rapid series of smaller purchases or a single big buy.
  5. Guilt: the gaping hole in the credit card or bank account leaves you pondering your decision, which may take you back to #3 to confirm if you made the right decision.
  6. Acceptance: you come to terms with what you’ve done, and might be filled with newfound and unbridled optimism toward your creative output in the vein of “Oh, The Places You’ll Go!”
  7. Relapse: your unresolved dissatisfaction quickly returns to attack your new creative implement, potentially as you discover that one missing feature that will completely upend your career.

What causes G.A.S?

Photographer, neuroscientist and writer Joshua Sariñana gives a highly detailed breakdown for G.A.S. and also explains the neurochemical mechanisms for how stressors trigger impulsive behavior and how purchases tap into our brain’s reward center. But of the many possible causes for those stressors, he proposes the most likely culprit in creatives: the fear of creativity itself.

Uncertainty: The creative process is already fraught with uncertainty and this uncertainty gives rise to fear of failure, criticism or even critique.

Catastrophizing: This is a common behavior where we always imagine the worst-case scenario. Combined with an existing cognitive bias against ourselves, this behavior repeats and small challenges seem insurmountable.

Avoidant Behavior: Like most living things, we tend to avoid discomforting things, even if that very thing is beneficial to us.

Buying Gear to Ease the Pain: Sariñana notes the potential for buying new gear to resemble drug abuse in the sense that we quickly acclimate to the ‘hit’ that comes with our new purchase, only to seek out bigger and better rewards.

How to get past gas

If you or someone you know has G.A.S., here are some ways to tackle the problem. Some involve dealing with the physical objects themselves while others focus on the mindset that leads to G.A.S.:

Realize you may have it: Even if you’re not a “gear head,” you might be acquiring services, plugins, memberships and subscriptions just as you would physical tools.

Validate yourself: Remind yourself that you are enough, even if your tools were to magically become primitive tomorrow. You have the talent to create something good with what essentials you have right now and the resourcefulness to improve on it later in the polishing phase.

Unplug: Our constant exposure to iconic, famous and professional-level content (or simply content we love) constantly reminds us of how painfully inadequate our work can feel. At the same time, the democratization of creative tools means we’ve become a fresh market to be targeted with marketing.

Be deliberate: Whether it’s gathering references only in the planning phase or searching for gear only at designated times, be intentional about when you browse. If you find yourself stressing more about gear than creating, it might be time to step away from your media exposure.

Each item becomes a promise: Realize that each piece isn’t just an obligation to use: you’ll also have to maintain it, and some items require further purchases to keep them in good condition. If you have too many promises to keep, KonMari (Marie Kondo’s Shinto-based tidying methodology) your gear, digitally if you must: gather all your tools in one place (or start with one category of them if you’ve got that much) and notice how much you have. Keep the essentials, followed only by the pieces that stir positive emotions. Take everything else out of play.

Get creative: this doesn’t just mean actually going out or staying in and doing the thing you bought the gear for. This means finding workarounds for limitations in the entire creative setup that includes gear, you personally and your situation. Consider using creative constraints to your advantage.

Borrow or rent: This might help you to let go of the idea that you need to have (as in own) a given tool to validate your creative title, and be comfortable with the fact you just need to use it for that project — especially true if you need to beef up your tiny mirrorless camera just so a client takes you seriously on that day. Likewise, borrowing or renting lets you “try before you buy.”

Co-buy or own: Or, if you’ve thought ahead and are sure you want something and will use it for the long-term: commit to making a few key purchases, either yourself or with someone, and then commit to using them for many years. Once you’ve committed, you’ll come to appreciate and acknowledge their limitations in conveying what you put into it. Assuming you’ve been using this gear this whole time, you’ll come to love it so much you won’t want to lend it out or replace it.

Make shit: Learn to be comfortable with making highly flawed and imperfect work with no intention of sharing it (or the possibility that nothing will come of it). The obsession with constantly making work for display to reinforce a given title may lead you to want to always “put your best foot forward” and buying new tools can add that polish.

The Takeaway

It’s okay. Everyone has suffered a bad case of G.A.S. or several relapses over the years (we’re pretty sure we’ve had a few). What matters is that you catch yourself early, or that you tweak your rate of acquisition to match your growing skill level or the actual demands of your job or career aspirations.

Whatever stage you might be in, if you think you might be catching it, we highly recommend this detailed account by “gear addict turned photography addict” Olivier Duong.

In writing this, we found a disproportionate amount of literature connecting G.A.S. to photographers and to a lesser degree, musicians. But this problem extends far beyond those two fields and even beyond physical “gear” as we know it to include subscriptions, plugins, services and software.

Whether you’re an artist who’s bought maybe a hundred Copic markers too many or a hobbyist sewist who’s filled their basement with more bolts of fabric than they have projects for, we’d like to hear any experiences with G.A.S. you’d like to share, or suggestions for a broader term that captures this insecurity-driven acquisitive behavior in creatives.

October 25, 2019

The Coming Age of Fake Faces and Voices

As AI and machine learning become better at reproducing human likenesses and speech, we wonder how society and the creative industries will cope once the technology becomes widespread. We look at the possible ramifications of Deepfakes and the lesser-known Adobe speech engine VoCo, dubbed “Photoshop for the voice”.

Deepfakes and VoCo

By now, the Internet is no stranger to Deepfakes, whether through hearing about their baser use cases or laughing our way through “re-cast” scenes from iconic films. The technology uses multiple images or footage of a person’s face to create an animated model that can be superimposed atop the original. But few seem to be aware of a similar and, arguably, more powerful technology: fake voices. When it was announced in 2016, VoCo was touted as Adobe’s “Photoshop for voice,” and while updates have been sparse since, other similar platforms have stepped in, such as LyreBird.

To get a feel for what VoCo can do, check out this video of the technology’s debut at Adobe MAX 2016. It shows the speech engine replicating the voice of actor and director Jordan Peele (who co-hosted) to make him say some funny but embarrassing things he never actually said — all from only 20 minutes of his recorded speech. Coincidentally, Peele also made a PSA in which he provided the voice of a deepfaked President Obama in an effort to underscore a renewed need for media literacy in the age of Deepfakes.

Misinformation, Echo Chambers and Social Fallout

We’re continuing to keep a pulse on the potential for big data to amplify narratives, sway conversations and change culture for better or worse. Unfortunately, in the age of fake news, fact-checking is playing a losing game of cat and mouse with dubiously factual content or straight-up misinformation.

We’ve always used a combination of technology and creativity — well-intentioned or malicious — to shape reality, whether that means “cheating” shots to get a certain look on a budget or doctoring media for libelous reasons. Yet every generation has also had experts who keep us informed of how these things are done. What’s most worrying is that the tech is improving and we’re not listening anymore: even when shown evidence against their beliefs, people dig in their heels and defend them.

Social media’s information silos and echo chambers threaten to become even worse once the average tech-savvy netizen is able to Deepfake and VoCo-lize with ease. When we lose the ability to trust our senses that much more (something we’ve already been losing as of late), it makes even the most engaged of us despondent about the state of the world and eager to just shut everything off.

The Potential Creative Outcomes

All said, it would be cynical to conclude that the only uses for these technologies are nefarious ones. “Hate the player, not the game,” as they say, and we see a lot of potential for Deepfakes and VoCo to assist artists and creative workers.

For creatives providing their likenesses or voices and the people processing them, we see this new dynamic going one of several ways:

  • Quick Fixes: Not unlike Photoshop’s content-aware tools, Deepfake and VoCo-like technology can help patch up more severe mistakes that can’t be fixed by conventionally editing the source material. This should lower the cost of reshoots and other production expenses, as Adobe originally stated for VoCo.

As always, getting things done right the first time will prevail, and there will still be someone thankful for not having to Deepfake or VoCo-correct hours of poorly captured footage. Not to mention the result still might not replace the real thing (which is why practical film effects still have an edge over CGI in many cases).

  • Updated Terms: We imagine contracts will need updating down the line to prevent someone from creating derivative content from the images provided for a given project. For instance, an agency could create advertising materials out of video footage of us from, say, a music video — so long as we’ve signed off on it.

But as the legal stance on Deepfakes and similar content catches up, we could see the addition of key clauses stipulating something to the effect of: “the client shall not create new material generated by AI taught using the artist’s likeness, voice or previous work.” Or, if we allowed it, we could negotiate compensation based on how much content is generated, set against a portion of our day rate (we’re going to assume Susan Bennett, the original voice of Siri, was paid handsomely for her efforts).

  • Composite People: If Generated Photos’ 100,000 Faces project (which generated as many portraits through machine learning) has taught us anything, it’s that AI is getting better and better at generating realistic likenesses of people (albeit portraits of them). We can and should protect the rights to our unique selves and content generated from them, but what if we become less than a thousandth of a generated person in body or voice? Perhaps we could be entitled to a thousandth of the royalties, depending on the platform!

The Takeaway: A Re-Shuffling of the Creative Landscape

All in all, we still don’t know how much machine-generated personalities will change the creative landscape just yet, but we doubt it will be a clear-cut net positive or negative. Take our previous example of digital clothing collections made for the gram: in cases like these, the designer keeps their job, the pattern maker loses theirs, and the 3D modeler posing outfits onto customer photos gains a new one.

Even once we get to the stage where we’re using fully-posable photorealistic models of digital people using text-to-speech that nails personality, we predict the most-respected work and their creators will continue to pride themselves on employing, connecting with and working with real humans that can think for themselves, versus simply doing or saying what they’re programmed to do.

September 13, 2019

AI-assisted News and Its Future in the Attention Economy

We’re constantly hearing about new apps that aim to be the destination for our valuable attention. But can a news aggregator without a social component do the same? ByteDance’s TopBuzz shows potential as a contender, but goes beyond just bringing us the news we want.

What is TopBuzz?

TopBuzz is the English-language version of Toutiao, the flagship Chinese entertainment and news aggregator app made by ByteDance, which also owns TikTok. The app uses machine and deep learning algorithms to create personalized feeds of news and videos based on users’ interests.

  • User profiles: This is initially built on the app’s understanding of the user’s demographics (age, location, gender, and socio-economic status).
  • Content: The system uses natural language processing to determine if an article is trending, whether it’s long or short, and its timeliness (evergreen or time-bound).
  • Context: It also accounts for location-related data like geography, weather, local news, etc.

By the Numbers

  • 23M: Monthly active TopBuzz users as of October 2, 2018, up from 1.8M in November 2017.
  • 36x: Increase in pageviews (34M) referred across Chartbeat publishers worldwide from 2017 to 2018.
  • 24 hours: The time it takes for Toutiao (and likely TopBuzz) to figure out a reader.
  • 200,000: Officially partnered publishers and independent creators, including Reuters, CNN, the New York Times and BuzzFeed. YouTube creators can also sync their channels, while bloggers can publish directly or deliver content via RSS.

The Extra Mile

What makes Toutiao stand out among aggregators is that it doesn’t just serve content: it creates it too. During the 2016 Olympics, Toutiao debuted Xiaomingbot to create original news coverage, publishing stories on major events faster than traditional outlets — as in, seconds after an event ended.

For an article about a tennis match between Andy Murray and Juan Martin Del Potro, the bot pulled real-time score updates from the Olympics organization, took images from a recently acquired image-gathering company, and monitored live text commentary on the game.

During the Olympics, the bot published 450 stories of 500–1,000 words each, achieving read rates (reads per impression) on par with those of a human writer.

ByteDance used the same AI content creation in a bot that generates fake news to train the app’s content filter. However, it’s not clear at the moment whether TopBuzz publishes AI-generated content in English as well.

The Potential for TopBuzz

While Facebook and Twitter also use machine learning to refine recommendations, they rely more heavily on a user’s social connections. TopBuzz, like Feedly or Flipboard, is strictly a news aggregator with no social component.

But what makes TopBuzz and Toutiao (and future would-be competitors) unique is how hard they’ve doubled down on using AI to win the content game. We’ve all experienced the 20-or-so-minute Netflix sift through recommendations based purely on our viewing history; because Toutiao analyzes so many other factors, it has reduced this lag in the consumption cycle to virtually nothing (once it’s figured out the user’s habits).

This combination of AI-fueled curation and creation could set the standard for apps to come — and there are likely to be more. ByteDance’s success with TikTok (which hit one billion users this year) was enough to prompt Facebook to make Lasso in response, and there are bound to be competitors after the same level of stickiness that Toutiao and TopBuzz have achieved.

We’ve always been hungry for knowledge, no question there. But as our attention continues to be commodified and audiences become pickier about what they consume, demand for high-quality information (regardless of who or what created it) will increase too. The result? We get to “upgrade” to a cleaner, albeit more addictive, information diet served buffet-style. Users spend an average of 74 minutes on Toutiao. Will that eventually be the “sweet spot” for our news consumption?

But Not So Fast

In our experience with TopBuzz so far, we don’t doubt the app’s learning approach, but the quality of its publications often means a fair bit of clickbait from questionable outlets. A catchy headline that gets us to click does not equate to a great experience. Most tech companies are notoriously opaque about their algorithms, so we’re naturally a bit skeptical as to what defines a piece of content as personally compelling. It’s only been a few weeks; in six months, it’d be a worthwhile exploration to see how the experience has improved.

August 13, 2019

Residuals for everyone—selling our data to teach AI

As more companies and organizations start relying on AI, more and more data will be needed to feed (and train) these powerful programs, but not all data is created equal. While some might be valuable, we might not be so ready to share it. But if there were a means of securing our data and earning for every time it was used, would we be more willing to part with it?

Medical researchers start dabbling in AI, but hit a wall

Medical professionals are starting to tap into machine learning as a means of furthering their work, especially to find patterns that can help interpret their patients’ test results. Stanford ophthalmologist Robert Chang hopes to use eye scans to track conditions like glaucoma as part of this ongoing tech rush.

The problem, however, is that doctors and researchers have trouble gathering enough data from either their own patients or others because of the way those patients’ data is handled. Indeed, there’s a great deal of medical data that’s siloed due to differing policies on sharing patient information. This makes it challenging to share patient metrics between institutions, and subsequently to reach critical data mass.

Kara and Differential Privacy

Oasis Labs, founded by UC Berkeley professor Dawn Song, securely stores patient data using blockchain technology that encrypts and anonymizes the data while preventing it from being reverse engineered. It also provides monetary incentives to encourage participants, who could be compensated for each time their data is used to train artificial intelligence.

It’s not just the promise of money that could make patients more willing to submit their data. Song and Chang are trialling Kara, a system that uses differential privacy to ensure the AI gets trained on data (stored on Oasis’ platform) while the data remains invisible to researchers.

Quality Matters

For the medical industry, having access to quality data will become increasingly important as the reliance on AI increases. Quality doesn’t just mean individual data points (a grainy eye scan could throw off the machine’s learning) but the entire data set.

In order to prevent biases, which AI systems are prone to depending on what data sets they are fed, a system will need particular segments of the population to contribute data to round out its “training.” For this to happen, incentives will need to be carefully weighed and valued. Training a medical AI designed for a general population, for instance, would require samples from a diverse group of individuals including those with less common profiles. To incentivize participation, compensation might be higher for this group.

Otherwise, the designers of the AI could simply choose not to include certain groups, as has happened in the past, thus creating a discriminatory AI. In this case, it’s less a matter of the machine doing the learning and more of the people doing the teaching. That said, the resulting discriminatory AI has very real power to change the course of people’s lives, such as by filtering out their job applications.

Data ownership, Dividends and Industries

Despite these drawbacks, a combination of monetization and secure storage of personal data could signal the beginning of a new market where individuals can earn a fee for sharing data that wouldn’t have been shared in the first place; in essence, royalties for being ourselves, assuming we’re “valuable,” that is.

For the creative industry, the consensus is that for all its strides, AI has yet to evolve beyond being a very powerful assistant in the creative process. At present, it can create derivative work that resembles the art it’s been fed, but still lacks the ability to absorb what we know as inspiration. For example, IBM used its Watson AI to create a movie trailer for a horror movie after feeding it 100 trailers from films of that genre with each scene segmented and analyzed for visual and audio traits.

For now, the emergence of a data market doesn’t seem lucrative enough to birth a new class of workers (lest we all quit today to become walking data mines), but supposing the incentives were enticing and a company like Oasis could guarantee data privacy, could we see more creators willing to give up some of their work? Perhaps even unpublished work that would never be seen? Would quick file uploads, coupled with a hassle-free “for machine learning only” license, mean an influx of would-be creators hoping to make data dividends off work they could license elsewhere too?

On one hand, it would provide a way for creatives to earn residuals off their work given that AI needs thousands if not millions of samples and other sources (such as websites for stock creative assets) might not be as lucrative. That said, just as different data sets are needed for different purposes, we might see the emergence of a metrics-based classification system to objectively grade subjective work and assign value to it.

And if those works can be graded, so too can their creators with all the opportunities that follow a “quality data” distinction. Maybe one day when a program like Watson reaches celebrity artist status, we can brag to our peers, “yeah, I taught it that.”

August 1, 2019

Still Sounds About Right—The Need for Audio Feedback from Devices

If a tree falls in the forest and no one is around to hear it, does it make a sound? Put another way, if a phone receives a call and the phone is set to silent, does it make an action? Of course it does—whether we can hear it or not. Device sounds, although we still might not pay them much attention, have been giving us the feedback we need for decades, even as tech becomes less and less mechanical.

Thomas McMullan spoke with developers and musicians to understand how the sounds our machines make to signal activity have evolved and where they’re going.

Interviewees

  • Jim Reekes: Behind some of Apple’s most iconic audio effects. Used a recording of his old camera for the screenshot sound on Macs. The association between that sound and cameras persists today—even for people who only use digital ones.
  • Ken Kato: Composed the Windows 98 theme, sound designer for Halo 4 with 343 Industries, and current audio director for the VR studio Drifter Entertainment.
  • Steve Milton: Co-founder of Listen, a “sensory experience” company responsible for the sound design of apps including Skype and Tinder.
  • Becoming Real: London-based electronic musician.
  • Lindsay Corstorphine: Music facilitator and band member of Sauna Youth.

Their Quotes

  • Jim Reekes: “Audio is still ignored for the most part. Part of the problem is how good design is invisible.”
  • Ken Kato: “When I made (the Windows 98 bootup sound), Microsoft started out with about 20 sound designers, and there was a little contest, like a league competition. We went up against each other making sounds, and then a committee would choose which sound they liked.”
  • Steve Milton: “The biggest and most obvious is the shift away from skeuomorphic sound. Early sounds would attempt to mimic or sample the real world — quacks, pianos, trash, etc. But as the visual design moved away from skeuomorphism, we also start to hear more abstract expressions, sonically.”
  • Becoming Real: “Machines have become quieter, smaller, less noticeable, as the importance isn’t so much what the technology looks like — it’s how it can perform for us.”
  • Lindsay Corstorphine: “Recently, I’d say sound design has become less ostentatious and more functional, but with a hint of sentimentality for a mechanical past.”

Why this matters

In a previous analysis on usability, we referenced Jakob Nielsen’s 10 Usability Heuristics for Interface Design, which were created in 1994 but remain relevant to this day. The first on the list is “visibility of system status,” where the system gives users feedback on what is going on. Next is “match between system and the real world,” which means: “The system should speak the users’ language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.”

We can apply this principle to why we still need very tangible signs from machines that, as they get more advanced, produce fewer and fewer sounds when they function. Custom tones help us differentiate messages from different contacts.

Haptic feedback, like vibration, does the same, or simply lets us know when we fail to unlock our phones with our fingerprints. These are the machine-made “words” we rely on for context, but rather than invent completely original sounds or icons, it can be easier to reuse designs that draw on associations built up over time. We don’t use floppy disks anymore, but the image still serves as the ‘save’ icon in many apps, and the one-dot-to-two-dot ‘share’ icon has now joined that collective memory.

In the same way, we associate non-verbal sounds like the shutter with photo taking, rapid beeping with timers or alarms, honking with cars, and the sad trombone with failure or disappointment. For now, the associations persist because we still have a “match between system and the real world.”

What sounds right now might not tomorrow

Reekes raises the interesting prospect of customizing the sounds of our silent electric cars of the future, much as we would a ringtone. And as Milton notes, the sounds being produced are no longer based on real-world counterparts but are created from scratch in a digital space.

As machines shed their mechanical parts and their sonic signatures change, we may see a greater need for new sounds to stand in and give feedback, and users and sound designers will become less bound by a longing for the past or any obligation to stick to history. We might eventually retire the iconic shutter sound and come to recognize some new, never-before-heard tone instead.

July 15, 2019

What are the immediate benefits and challenges of remote and distributed teams?

Now that it’s easier than ever to assemble teams of talented people across the world—without having to even share an office—what are some of the benefits and challenges faced by these technologically-enabled work arrangements?

The new normal

In 2013, Scott Berkun authored a book called The Year Without Pants, in which he shared his experience working remotely for WordPress. Since then, these non-traditional work arrangements have become the norm at many companies. They fall into three broad groups:

  • Fully Distributed: Where team members rarely come into the office and work almost exclusively through the Internet, such as WordPress when it first started.
  • Semi-Distributed: Where some roles, such as leadership or management, are staffed at a headquarters that manages the distributed team or teams (HashiCorp, Mattermost).
  • Small Offices: Where new offices are created to start and host functional teams such as support or sales development.

The challenges

While versatile, these arrangements certainly come with challenges. These include maintaining good communication across geographies, especially when the team is widely distributed. It’s also important to share valuable knowledge and decisions made in person by one part of the team with the rest of the network. Finally, the largest challenges often center around hiring and compensating contractors and employees on these teams, especially ensuring that a company’s practices comply with local laws.

Speaking from experience

While technology and global connectivity have made previously unheard of work arrangements possible, the versatility for both the company and the individuals involved (who often enjoy flexible schedules) does come at a price. For distributed teams in creative companies especially, one of the biggest challenges is creating and maintaining a passionate work culture despite a lack of in-person face time with which to exchange ideas on the fly.

What’s more, where dedicated operational personnel are lacking, this chemistry and synergy needs to be maintained through reliable systems that can account for complex, detail-oriented creative work. This doesn’t just mean individual programs (such as a shared Adobe CC license) but how the myriad programs in a team’s chosen tech stack play together. In short, for these distributed creative companies to thrive, they must make proper use of location-freeing technologies, and that tech must have as few energy-sapping snags as possible to keep the creative juices flowing.

– Nate Kan

July 11, 2019

How will immersive new media push the evolution of usability?

Jakob Nielsen’s 10 Usability Heuristics for Interface Design (1994) remain relevant today even for UI in modern software, websites, apps and even video games. We’re no stranger to these guidelines being bent or broken for artistic or commercial merit, but how will the playing field change when the interfaces they were designed for eventually evolve to become us?

The Original Heuristics

For reference, heuristics are “any approach to problem solving or self-discovery that employs a practical method, not guaranteed to be optimal, perfect, logical, or rational, but instead sufficient for reaching an immediate goal.” These can also be used to decrease the cognitive load on a person to speed decision-making. Here is a brief summary of Nielsen’s original 10:

  • Visibility of system status: The system gives users feedback about what is going on.
  • Match between system and the real world: The system favors language and concepts familiar to the user and real-world conventions.
  • User control and freedom: Users have the freedom to undo or exit functions executed by mistake.
  • Consistency and standards: No guesswork as to whether different words, situations, or actions mean the same thing.
  • Error prevention: Careful design that eliminates the potential for errors.
  • Recognition rather than recall: Visuals are used extensively and instructions accessible to minimize the user’s memory load.
  • Flexibility and efficiency of use: Expert users can access accelerators, unseen to novices, that speed up interaction.
  • Aesthetic and minimalist design: Information provided is relevant, necessary and presented efficiently.
  • Help users recognize, diagnose, and recover from errors: Errors identified in plain language (no codes), and constructive solutions are offered.
  • Help and documentation: Easy to search, focused on completing the user’s task and of appropriate length.

The spectrum of immersiveness and user agency

While the above guidelines make perfect sense, developers have always interpreted or flouted them for commercial, artistic or other intentions in social media, video games, apps, websites or any other kind of interactive software.

For one, some games such as Wild West-themed Red Dead Redemption 2 offer the option of switching off the heads-up display (HUD) that includes the map, meaning players have to rely on landmarks and directions from non-player characters to find their way (just like we used to).

If you’ve mistakenly clicked through to a third-party site while simply trying to close a pop-up ad, there’s a good chance you’ve gotten a taste of Dark UX, to use a less colorful term. Not all of these patterns are downright manipulative attempts to squeeze a one-time action out of you; some are combinations of subtler interface design decisions meant to loosen our purse strings or keep us engaged with—or dependent on—a given digital medium.

Depending on the creator’s intentions, we the users will find ourselves falling somewhere on a spectrum with every digital medium we experience, where total unconscious immersion lies at one end and complete freedom and control at the other.

Tomorrow’s interfaces and the blurring of reality

As we get closer to developing better and better media forms that involve the user on a deeper level, many of the above heuristics may become locked to certain benchmarks and inseparably merged as part of a new standard for user experience: total immersion.

Nielsen’s original usability heuristics were created in 1994 and certainly remain relevant beyond the software they were originally intended for. Today, the boundaries between software, apps and websites are constantly blurred depending on how, and how much, the user can interact with them. And though we’ve come a long way from a time when the only input devices were the mouse and keyboard, and we’re still busy exploring the potential of capacitive surfaces beyond the touch screen, the interface and user remain separated at the hands.

But because the 10 heuristics have always favored the user anyway—their end goal is to reduce cognitive load and ease decision making—the interface will eventually do away with separate peripherals and the user will become the input device.

When eye movements, speech, and even thought become industry-standard inputs for interfaces, we’re going to reach a point where the usability of all apps, sites and software is evaluated against by-then heightened user expectations (for example, “the program responds quickly to my gestures in the air, moves the displayed area with my eye movements, or pauses when my mind is focused elsewhere”).

By this time, anything that delays input or responds in a non-intuitive way will effectively “break” the immersion, violating several heuristics in one fell swoop and thus hurting a program’s usability—what we’d currently call “buggy” or laggy controls.

Art and industry

Regardless of whether an app, program, game or simulation is made for commercial or artistic purposes, a creator’s goal is always strong user engagement, whether that’s measured in how often users revisit it or how long they spend with it. As long-form journalism, feature-length films and perhaps eventually even podcasts decline in popularity, creators need to keep asking themselves how their respective arts and industries might change as attention spans shift and shrink and the path of least resistance shortens.

When VR and other yet-to-be-defined new mediums reach a high enough standard to become widespread and normalized in our everyday lives (we’re getting there), we will have to figure out how to address the divide between this world and the creator’s. Photographer, artist and VR filmmaker Julia Leeb uses VR so that her audiences can experience the terror of war in an uncomfortable but physically safe manner. What are the implications of future artists removing user agency to execute their visions? How will industry standards evolve to address Dark UX in commercial VR apps? Should we establish upper limits—a “no go” zone—for how immersive something can be? Or will we simply treat new mediums as just another field of rabbit holes, each of which we impressionable humans can get lost in, as we have tended to do?

You might say we’ve watched one episode of Black Mirror too many, but it never hurts to be mindful not only of how we use the new things we create, but also of how we can be used by them in return.

June 10, 2019

Hollywood is quietly using AI to help decide which movies to make

Los Angeles-based startup Cinelytic licenses historical data about movie performance over time and cross-references it with information about films’ themes and key talent, using machine learning to tease out hidden patterns in the data. Its software then lets clients swap out variables such as the lead actor to see how the movie might perform across different territories.
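The core idea, fitting a model on historical film data and then re-predicting with one variable swapped, can be sketched in a few lines. The features, the tiny dataset and the closed-form linear fit below are all illustrative assumptions, not Cinelytic’s actual system:

```python
# Illustrative sketch only: a two-feature linear model fit by least squares.
# A real system would use far richer features and more capable models.

def fit_two_feature_model(rows, targets):
    """Least-squares fit of y ~ w1*x1 + w2*x2 via the 2x2 normal equations."""
    s11 = sum(x1 * x1 for x1, _ in rows)
    s12 = sum(x1 * x2 for x1, x2 in rows)
    s22 = sum(x2 * x2 for _, x2 in rows)
    b1 = sum(x1 * y for (x1, _), y in zip(rows, targets))
    b2 = sum(x2 * y for (_, x2), y in zip(rows, targets))
    det = s11 * s22 - s12 * s12  # Cramer's rule on (X^T X) w = X^T y
    return ((b1 * s22 - s12 * b2) / det, (s11 * b2 - b1 * s12) / det)

# Hypothetical training data: (budget in $M, lead actor's avg past gross in $M)
films = [(10, 20), (20, 10), (30, 30)]
grosses = [50, 55, 105]  # hypothetical box office results in $M
w_budget, w_actor = fit_two_feature_model(films, grosses)

def predict(budget, actor_avg_gross):
    """Predicted gross for a film with the given budget and lead actor."""
    return w_budget * budget + w_actor * actor_avg_gross

# "Swap out the lead actor" for the same $25M film and compare predictions.
with_star = predict(25, 40)      # established star with a strong track record
with_newcomer = predict(25, 12)  # relative unknown
```

Swapping one input and re-predicting is the whole trick; the real sophistication lies in the feature engineering and the historical data, which is exactly what Cinelytic licenses.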

Why the industry is overdue for the AI treatment

Cinelytic’s key talent comes from outside Hollywood. Co-founder and CEO Tobias Queisser comes from the world of finance, where machine learning has been embraced for everything from high-speed trading to calculating credit risk. Co-founder and CTO Dev Sen, meanwhile, used to build risk assessment models for NASA, another field where getting risk right is especially critical.

However, Queisser says that the business side of the film industry is 20 years behind the production side, still relying on Excel and Word while productions use all kinds of high tech to make movie magic.

We’ve been here before

Cinelytic isn’t the first company to attempt to harness AI to predict and improve box office returns:

  • ScriptBook: Belgian company founded in 2015, promised to use algorithms to predict a movie’s success by just analyzing the script.
  • Epagogix: Similarly, UK-based company founded in 2003, uses data from an archive of films to make box office estimates and recommend script changes.
  • Vault: Israeli startup, founded the same year, that promised it could predict which demographics would watch a film by tracking metrics such as online reception to trailers.
  • 20th Century Fox: Uses a system called Merlin to analyze shots from trailers for content and duration (among other things) and see what other movies people will watch based on preferences.
  • Pilot: Centers its machine-learning process around audience analytics to make box office predictions.

Why this might be good and bad for film

Good: AI saves people the effort of doing some of the work they hate most, such as sifting through scripts the way a hiring manager sifts through a mountain of resumes. This effectively separates the wheat from the chaff.

Bad: For aspiring writers seeking to enter the big leagues, even talented ones, their work might never see the light of day, much less a set of human eyes, if their story, original as it may be, isn’t attractive to the algorithm evaluating it.

Good: If the prediction models become accurate enough, studios big and small can breathe easier and be more confident with their investments in films, ensuring higher returns on investments and confidence from shareholders, meaning they stay in business longer.

Bad: If an AI-assisted selection process becomes widespread, we’ll see a drop in the diversity of big-budget films being produced as studios seek to cater to the whims of as many demographics as possible, potentially complicating or watering down movies.

Good: Because box office numbers among other end metrics reflect audience choices, AI recommendations from cold hard data could override human prejudices that prevent certain stories from being told or certain people from being involved in a production.

Bad: If followed too closely, those recommendations might mean a studio will get an actor with the biggest box office market draw they can pay for without regard to whether or not the actor fits the story.

At the end of the day

Big studios have to regularly produce films to keep the money coming in year-round, pushing the right audience buttons at the right time of year, from summer blockbusters to holiday movies. Movies released to coincide with certain attendance patterns (say, a horror movie in time for Halloween) are usually designed to be enticing and entertaining, if forgettable, and for these there are plenty of ways AI can help set big studios up for the best possible numbers.

These film types are formulaic enough to allow for this kind of “drag-and-drop” production. One example is the family-friendly comedy starring a “big dude with a big heart,” which has appeared regularly over the past twenty-plus years: Arnold Schwarzenegger (Jingle All the Way), Vin Diesel (The Pacifier), The Rock (The Game Plan and The Tooth Fairy) and, most recently, John Cena (Playing with Fire).

That said, it remains to be seen if studios will ever let AI override decisions from accomplished writers, producers, and directors whose track record and reputation give them the authority to choose a given actor or other artist to work with. Further, AI can only make predictions based on what has already been made and not how tastes and popularity will shift in the future—the result of many factors outside of a strictly cinema-centric data set. That’s something that requires insight and instinct, something that humans are still valued for.

May 17, 2019

Adobe Tells Users They Can Get Sued for Using Old Versions of Photoshop

In a move that shocked—or didn’t shock—Adobe users, the company announced that customers could face legal consequences for using old discontinued versions of Photoshop, warning them that they were “no longer licensed to use them.”

This week, Adobe began sending some users of its Lightroom Classic, Photoshop, Premiere, Animate, and Media Director programs letters warning that they weren’t legally allowed to use software they had previously purchased.

“We have recently discontinued certain older versions of Creative Cloud applications and as a result, under the terms of our agreement, you are no longer licensed to use them,” Adobe said in the email. “Please be aware that should you continue to use the discontinued version(s), you may be at risk of potential claims of infringement by third parties.”

How we got here

In 2013, Adobe moved away from its original business model, whereby users could purchase hard copies—and continue to use them regardless of later versions being released. The new subscription-based service Adobe Creative Cloud resulted in notably higher revenues due to the constant stream of monthly fees from users. Naturally, requiring users to regularly sign in online to confirm their paid subscription also curtailed the use of pirated or cracked versions of Adobe’s software that became widespread with the previous business model.

Everyone’s gone insane with subscriptions

Adobe’s transition to subscriptions is neither new nor unusual. From video games and mobile apps to delivery-based health food plans and even clothing, subscription-based business models are now extremely common, ensuring the companies behind them have regular cash flow. But the issue with software and other tech is that they are by nature modifiable via online updates, meaning that features can be added, disabled or removed remotely. Worse, some companies make these modifications mandatory, and in Adobe’s case, accepting them is a condition of continuing to use the software or service at all.
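In code terms, a subscription client typically asks a licensing server what the user is entitled to and enables features accordingly, and that is what makes remote removal possible. Here is a minimal hypothetical sketch; the field names and logic are assumptions for illustration, not Adobe’s actual licensing protocol:

```python
# Hypothetical sketch of server-side feature gating in subscription software.
from datetime import date

def entitled_features(license_response, today):
    """Decide which features to enable from a (mock) licensing-server reply.

    The server's reply, not the installed binary, is the source of truth,
    so features can vanish after purchase simply by changing the reply.
    """
    if date.fromisoformat(license_response["expires"]) < today:
        return set()  # lapsed subscription: everything switches off
    return set(license_response["enabled_features"])

# Yesterday's reply enabled three features; today's reply quietly drops one.
old_reply = {"expires": "2020-01-31",
             "enabled_features": ["edit", "export", "legacy_raw"]}
new_reply = {"expires": "2020-01-31",
             "enabled_features": ["edit", "export"]}
today = date(2019, 5, 17)

removed = entitled_features(old_reply, today) - entitled_features(new_reply, today)
```

Nothing on the user’s machine changed between the two replies; the entitlement simply shrank on the server, which is the mechanism that lets a vendor retire “discontinued” versions or features after the fact.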

And as you might have guessed, we’re also complicit in continuing this behavior because the stipulation that the company is free to do so is buried but clearly written in those End User License Agreements we unashamedly skip through and agree to. This gives companies—pardon the pun—free license to modify products that we don’t own in any concrete way and this most recent move extends to products people thought they owned in perpetuity.

What’s to be done?

Seeing as Adobe’s software serves as the industry standard for many creative fields (Premiere for video, Photoshop and Illustrator for graphic design, and InDesign for publishing, among others), it’s hard to peel away from software we trained on, are used to, or that the rest of our collaborators are using. For established creatives, that’s a business expense that can certainly pay for itself (assuming you earn over $60 USD a month on projects), but it’s a big recurring cost for artists who are just starting out or looking to go digital.

The solution for them? Draw a line in the sand and stick to open-source or free programs that are actively maintained and have a community behind them. There’s no Photoshop-equivalent industry standard for creative writing (as much as Microsoft Office wishes there were) because people care more about the end result than the software it was produced on. If you have the talent and willingness to create good work, even if it means a few extra steps to do with free software what some of Adobe CC’s cutting-edge technologies could do better and quicker, the portfolio will speak for itself.

For those tired of getting nickel-and-dimed? You might need to take stock of and start Marie Kondo-ing your monthly and yearly subscriptions and decide whether you could put those savings towards more permanent solutions you own (such as making your own NAS in place of regularly paying for cloud storage).
