September 13, 2019

AI-assisted News and Its Future in the Attention Economy

Source:

We’re constantly hearing about new apps that aim to be the destination for our valuable attention. But can a news aggregator without a social component compete? ByteDance’s TopBuzz shows potential as a contender, and it goes beyond just bringing us the news we want.

What is TopBuzz?

TopBuzz is the English-language version of Toutiao, the flagship Chinese entertainment and news aggregator app made by ByteDance, which also owns TikTok. The app uses machine and deep learning algorithms to create personalized feeds of news and videos based on users’ interests.

  • User profiles: A profile is initially built on the app’s understanding of the user’s demographics (age, location, gender, and socio-economic status).
  • Content: The system uses natural language processing to determine if an article is trending, whether it’s long or short, and its timeliness (evergreen or time-bound).
  • Context: It also accounts for location-related data like geography, weather, local news, etc.
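
How these signals combine is not something ByteDance discloses, but a toy ranking function gives the flavor: score each article by its overlap with the user’s inferred interests, decay the score with age (unless the piece is evergreen), and boost locally relevant stories. All weights and names below are invented for illustration, not Toutiao’s actual model.

```python
def score_article(user_interests, article_topics, hours_old, is_local,
                  evergreen=False, half_life=24.0):
    """Toy personalized-feed score; not Toutiao's actual (unpublished) model."""
    # Interest match: overlap between the user's inferred interests and the
    # article's topics (both are {topic: weight} dictionaries).
    match = sum(user_interests.get(t, 0.0) * w for t, w in article_topics.items())
    # Freshness: time-bound stories lose half their value every half_life hours;
    # evergreen pieces don't decay.
    freshness = 1.0 if evergreen else 0.5 ** (hours_old / half_life)
    # Context: a small boost for geographically relevant stories.
    context = 1.2 if is_local else 1.0
    return match * freshness * context

user = {"tennis": 0.9, "tech": 0.4}
fresh_local = score_article(user, {"tennis": 1.0}, hours_old=2, is_local=True)
stale_remote = score_article(user, {"tennis": 1.0}, hours_old=48, is_local=False)
# A fresh, local story on a favorite topic outranks a stale, distant one.
```

The point of the sketch is that freshness and context multiply the interest match, which is why a feed built this way can feel current and local at once.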

By the Numbers

  • 23M: Monthly active TopBuzz users as of October 2018, up from 1.8M in November 2017
  • 36x: Increase in pageviews (34M) referred across Chartbeat publishers worldwide from 2017 to 2018.
  • 24 hours: The time it takes for Toutiao (and likely TopBuzz) to figure out a reader.
  • 200,000: Officially partnered publishers and independent creators, including Reuters, CNN, the New York Times and BuzzFeed. YouTube creators can sync their channels, while bloggers can publish directly or deliver content via RSS.

The Extra Mile

What makes Toutiao stand out among aggregators is that it doesn’t just serve content: it creates it too. During the 2016 Olympics, Toutiao debuted Xiaomingbot to create original news coverage, publishing stories on major events faster than traditional outlets—as in seconds after the event ended.

For an article about a tennis match between Andy Murray and Juan Martin Del Potro, the bot pulled real-time score updates from the Olympics organization, took images from a recently acquired image-gathering company, and monitored live text commentary on the match.

During the Olympics, the bot published 450 stories of 500–1,000 words each, achieving read rates and impressions on par with those of a human writer.

ByteDance used the same AI content-creation technology in a bot that generates fake news, which it uses to train the app’s content filter. However, it’s not clear at the moment whether TopBuzz publishes AI-generated content in English as well.

The Potential for TopBuzz

While Facebook and Twitter also use machine learning to refine recommendations, they rely more heavily on a user’s social connections. TopBuzz, like Feedly or Flipboard, is strictly a news aggregator with no social component.

But what makes TopBuzz and Toutiao (and future would-be competitors) unique is how hard they’ve doubled down on using AI to win the content game. We’ve all experienced the twenty-odd minutes of sifting through Netflix for content recommendations based purely on our viewing history, but because Toutiao analyzes so many other factors, it’s reduced this lag in the consumption cycle to virtually nothing (once it’s figured out the user’s habits).

This combination of AI-fueled curation and creation could set the standard for apps to come — and there are likely to be more. ByteDance’s success with TikTok (which hit one billion users this year) was enough to prompt Facebook to make Lasso in response, and there are bound to be competitors after the same level of stickiness that Toutiao and TopBuzz have achieved.

We’ve always been hungry for knowledge, no question there. But as our attention continues to be commodified and audiences become pickier about what they consume, demand for high-quality information (regardless of who or what created it) will increase too. The result? We get to “upgrade” to a cleaner, albeit more addictive, information diet served buffet style. Users spend an average of 74 minutes a day on Toutiao. Will that eventually be the “sweet spot” for our news consumption?

But Not So Fast

In our experience with TopBuzz, we don’t doubt the app’s learning approach. But the uneven quality of its publications means there’s a fair bit of clickbait from questionable outlets, and a catchy headline that gets us to click in does not equate to a great experience. Most tech companies are notoriously opaque about their algorithms, so we’re naturally a bit skeptical as to how the app decides what counts as content you personally find compelling. It’s only been a few weeks, but in six months it would be worthwhile to revisit the app and see how the experience has improved.

August 13, 2019

Residuals for everyone—selling our data to teach AI

Source:

As more companies and organizations start relying on AI, more and more data will be needed to feed (and train) these powerful programs, but not all data is created equal. Some of it might be valuable, but we might not be so ready to share it. If there were a means of securing our data and earning a fee each time it was used, would we be more willing to part with it?

Medical researchers start dabbling in AI, but hit a wall

Medical professionals are starting to tap into machine learning as a means of furthering their work, especially to find patterns that can help interpret their patients’ test results. Stanford ophthalmologist Robert Chang hopes to use eye scans to track conditions like glaucoma as part of this ongoing tech rush.

The problem, however, is that doctors and researchers have trouble gathering enough data from their own patients or others because of the way patient data is handled. Indeed, a great deal of medical data is siloed due to differing policies on sharing patient information. This makes it challenging to share patient metrics between institutions, and subsequently to reach critical data mass.

Kara and Differential Privacy

Oasis Labs, founded by UC Berkeley professor Dawn Song, securely stores patient data using blockchain technology that encrypts and anonymizes the data while preventing it from being reverse engineered. It also provides monetary incentives to encourage participants, who could be compensated for each time their data is used to train artificial intelligence.

It’s not just the promise of money that’s making patients more willing to submit their data. Song and Chang are trialling Kara, a system that uses differential privacy to ensure the AI gets trained on data (stored on Oasis’ platform) while that data remains invisible to researchers.
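
Differential privacy itself is a well-defined mathematical guarantee. Here is a minimal sketch of the idea using the classic Laplace mechanism on a simple aggregate query; this is textbook differential privacy, not Kara’s actual protocol, whose internals are not public.

```python
import math
import random

def dp_mean(values, lower, upper, epsilon):
    """Differentially private mean of values clipped to [lower, upper].

    Classic Laplace mechanism: one person's record can shift the clipped
    mean by at most (upper - lower) / n, so adding noise of that scale
    divided by epsilon hides any individual's contribution. Smaller
    epsilon = stronger privacy, noisier answer.
    """
    n = len(values)
    clipped = [min(max(v, lower), upper) for v in values]
    true_mean = sum(clipped) / n
    sensitivity = (upper - lower) / n
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) noise via the inverse-CDF transform.
    u = random.random() - 0.5
    noise = -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))
    return true_mean + noise

# A researcher sees only the noisy aggregate, never the raw records.
avg_pressure = dp_mean([14.5, 16.0, 21.5, 12.0], lower=10.0, upper=25.0, epsilon=0.5)
```

The appeal for medicine is exactly this asymmetry: the model learns population-level patterns from the aggregate, while no single patient’s reading can be reverse-engineered from the output.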

Quality Matters

For the medical industry, having access to quality data will become increasingly important as the reliance on AI increases. Quality doesn’t mean individual data points (a grainy eye scan could throw off the machine’s learning) but rather the entire data set.

In order to prevent biases, which AI systems are prone to depending on what data sets they are fed, a system will need particular segments of the population to contribute data to round out its “training.” For this to happen, incentives will need to be carefully weighed and valued. Training a medical AI designed for a general population, for instance, would require samples from a diverse group of individuals including those with less common profiles. To incentivize participation, compensation might be higher for this group.
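
One way such weighing could be operationalized is to scale the per-sample fee inversely with a group’s representation in the data set. The formula and dollar figures below are invented for illustration; neither the source nor Oasis Labs describes a concrete payout scheme.

```python
def payout(group_share, base_fee=1.00, floor=0.01):
    """Pay more per sample for groups under-represented in the data set.

    group_share: the group's current fraction of the training data (0..1).
    The fee scales inversely with representation, so rare profiles earn
    more; the floor keeps a vanishing share from producing an unbounded fee.
    """
    return base_fee / max(group_share, floor)

common = payout(0.40)  # a group making up 40% of the data earns $2.50/sample
rare = payout(0.02)    # a 2% group earns $50.00/sample
```

A schedule like this makes the incentive explicit: the scarcer a profile is in the training data, the more each contribution from it is worth.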

Otherwise, the designers of the AI could simply choose not to include certain groups, as has happened in the past, thus creating a discriminatory AI. In this case, it’s less a matter of the machine that’s learning and more of the people initiating the teaching. That said, the resulting discriminatory AI has the very real power to change the course of people’s lives, such as by filtering out their job applications.

Data ownership, Dividends and Industries

Despite these drawbacks, a combination of monetization and secure storage of personal data could signal the beginning of a new market where individuals earn a fee for sharing data that otherwise wouldn’t have been shared at all; in essence, royalties for being ourselves, assuming we’re “valuable,” that is.

For the creative industry, the consensus is that for all its strides, AI has yet to evolve beyond being a very powerful assistant in the creative process. At present, it can create derivative work that resembles the art it’s been fed, but still lacks the ability to absorb what we know as inspiration. For example, IBM used its Watson AI to create a movie trailer for a horror movie after feeding it 100 trailers from films of that genre with each scene segmented and analyzed for visual and audio traits.

For now, the emergence of a data market doesn’t seem lucrative enough to birth a new class of workers (lest we all quit today to become walking data mines), but supposing the incentives were enticing and a company like Oasis could guarantee data privacy, could we see more creators willing to give up some of their work? Perhaps even unpublished work that would otherwise never be seen? Would quick file uploads coupled with a hassle-free “for machine learning only” license mean an influx of would-be creators hoping to make data dividends off work they could license elsewhere too?

On one hand, it would provide a way for creatives to earn residuals off their work given that AI needs thousands if not millions of samples and other sources (such as websites for stock creative assets) might not be as lucrative. That said, just as different data sets are needed for different purposes, we might see the emergence of a metrics-based classification system to objectively grade subjective work and assign value to it.

And if those works can be graded, so too can their creators with all the opportunities that follow a “quality data” distinction. Maybe one day when a program like Watson reaches celebrity artist status, we can brag to our peers, “yeah, I taught it that.”

August 1, 2019

Still Sounds About Right—The Need for Audio Feedback from Devices

Source:

If a tree falls in the forest and no one is around to hear it, does it make a sound? Put another way, if a phone receives a call and the phone is set to silent, does it make an action? Of course it does—whether we can hear it or not. Device sounds, although we still might not pay them much attention, have been giving us the feedback we need for decades, even as tech becomes less and less mechanical.

Thomas McMullan spoke with developers and musicians to understand how the sounds of our machines (those made to represent activity) evolved and where they’re going.

Interviewees

  • Jim Reekes: Behind some of Apple’s most iconic audio effects. Used a recording of his old camera for the screenshot sound on Macs. The association between that sound and cameras persists today—even for people who only use digital ones.
  • Ken Kato: Composed the Windows 98 theme, sound designer for Halo 4 with 343 Industries, and current audio director for the VR studio Drifter Entertainment.
  • Steve Milton: Co-founder of Listen, a “sensory experience” company responsible for the sound design of apps including Skype and Tinder.
  • Becoming Real: London-based electronic musician.
  • Lindsay Corstorphine: Music facilitator and band member of Sauna Youth.

Their Quotes

  • Jim Reekes: “Audio is still ignored for the most part. Part of the problem is how good design is invisible.”
  • Ken Kato: “When I made (the Windows 98 bootup sound), Microsoft started out with about 20 sound designers, and there was a little contest, like a league competition. We went up against each other making sounds, and then a committee would choose which sound they liked.”
  • Steve Milton: “The biggest and most obvious is the shift away from skeuomorphic sound. Early sounds would attempt to mimic or sample the real world — quacks, pianos, trash, etc. But as the visual design moved away from skeuomorphism, we also start to hear more abstract expressions, sonically.”
  • Becoming Real: “Machines have become quieter, smaller, less noticeable, as the importance isn’t so much what the technology looks like — it’s how it can perform for us.”
  • Lindsay Corstorphine: “Recently, I’d say sound design has become less ostentatious and more functional, but with a hint of sentimentality for a mechanical past.”

Why this matters

In a previous analysis on usability, we referenced Jakob Nielsen’s 10 Usability Heuristics for Interface Design, which were created in 1994 but remain relevant to this day. The first on the list is “visibility of system status,” where the system gives users feedback on what is going on. Following that is “match between system and the real world,” which means, “The system should speak the users’ language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.”

We can apply this principle to why we still need very tangible signals from machines that, as they get more advanced, produce fewer and fewer sounds when they function. Custom tones help us differentiate messages from different contacts, and haptic feedback, like vibration, can do the same, or simply let us know when we fail to unlock our phone with our fingerprints.

These are the machine-made “words” we rely on for context, but rather than come up with completely original sounds or icons, it can be easier to re-use designs that draw on existing associations developed over time. We don’t use floppy disks anymore, but the image is often used as the icon for the ‘save’ button in apps, and the one-dot-to-two-dot ‘share’ icon has now joined that collective memory.

In the same way, we associate non-verbal sounds like the shutter with photo taking, rapid beeping with timers or alarms, honking with cars, and the sad trombone with failure or disappointment. For now, the associations persist because we still have a “match between system and the real world.”

What sounds right now might not tomorrow

Reekes raises the interesting prospect of customizing the sounds of our silent electric cars of the future, much like we would a ringtone. As Milton said, the sounds being produced are no longer based on real-world counterparts but are created from zeroes and ones in a digital space.

As machines shed mechanical parts and their sonic signatures change, we might see a greater need for new sounds to stand in and give feedback, and users and sound designers will become less bound by a longing for the past or any obligation to stick to history. We might eventually retire the iconic shutter sound and come to recognize some new, never-before-heard tone instead.

July 15, 2019

What are the immediate benefits and challenges of remote and distributed teams?

Source:

Now that it’s easier than ever to assemble teams of talented people across the world—without having to even share an office—what are some of the benefits and challenges faced by these technologically-enabled work arrangements?

The new normal

In 2013, Scott Berkun authored a book called The Year Without Pants in which he shared his experience working remotely for WordPress. Since then, these non-traditional work arrangements have become the norm at many companies. They can be categorized into three broad groups:

  • Fully Distributed: Where team members rarely come into the office and work almost exclusively through the Internet, such as WordPress when it first started.
  • Semi-Distributed: Where some roles, such as leadership or management, are staffed at a headquarters that manages one or more distributed teams (HashiCorp, Mattermost).
  • Small Offices: Where new offices are created to start and host functional teams such as support or sales development.

The challenges

While versatile, arrangements like these certainly come with challenges. These include ensuring good communication across geographies, especially when the team is fully distributed. In addition, it’s important to share valuable knowledge and decisions made in person by one part of the team across the network. Finally, the largest challenges often center on hiring and compensating contractors and employees on these teams, especially ensuring that a company’s practices comply with local laws.

Speaking from experience

While technology and global connectivity have made previously unheard of work arrangements possible, the versatility for both the company and the individuals involved (who often enjoy flexible schedules) does come at a price. For distributed teams in creative companies especially, one of the biggest challenges is creating and maintaining a passionate work culture despite a lack of in-person face time with which to exchange ideas on the fly.

What’s more, where dedicated operational personnel are lacking, this chemistry and synergy needs to be maintained through reliable systems that can account for complex, detail-oriented creative work. This doesn’t just mean individual programs (such as a shared Adobe CC license) but how the myriad programs in a team’s chosen tech stack play together. In short, for these distributed creative companies to thrive, they must properly use location-freeing technologies, with few energy-sapping snags to interrupt the flow of creative juices.

– Nate Kan

July 11, 2019

How will immersive new media push the evolution of usability?

Jakob Nielsen’s 10 Usability Heuristics for Interface Design (1994) remain relevant today even for UI in modern software, websites, apps and even video games. We’re no stranger to these guidelines being bent or broken for artistic or commercial merit, but how will the playing field change when the interfaces they were designed for eventually evolve to become us?

The Original Heuristics

For reference, heuristics are “any approach to problem solving or self-discovery that employs a practical method, not guaranteed to be optimal, perfect, logical, or rational, but instead sufficient for reaching an immediate goal.” These can also be used to decrease the cognitive load on a person to speed decision-making. Here is a brief summary of Nielsen’s original 10:

  • Visibility of system status: The system gives users feedback about what is going on.
  • Match between system and the real world: The system favors language and concepts familiar to the user and real-world conventions.
  • User control and freedom: Users have the freedom to undo or exit functions executed by mistake.
  • Consistency and standards: No guesswork as to whether different words, situations, or actions mean the same thing.
  • Error prevention: Careful design that eliminates the potential for errors.
  • Recognition rather than recall: Visuals are used extensively and instructions accessible to minimize the user’s memory load.
  • Flexibility and efficiency of use: Expert users can access accelerators, unseen to novices, that speed up interaction.
  • Aesthetic and minimalist design: Information provided is inconspicuous, relevant and efficient.
  • Help users recognize, diagnose, and recover from errors: Errors identified in plain language (no codes), and constructive solutions are offered.
  • Help and documentation: Easy to search, focused on completing the user’s task and of appropriate length.

The spectrum of immersiveness and user agency

While the above guidelines make perfect sense, developers have always interpreted or flouted them for commercial, artistic or other intentions in social media, video games, apps, websites or any other kind of interactive software.

For one, some games such as Wild West-themed Red Dead Redemption 2 offer the option of switching off the heads-up display (HUD) that includes the map, meaning players have to rely on landmarks and directions from non-player characters to find their way (just like we used to).

If you’ve mistakenly clicked through to a third-party site when simply trying to clear a pop-up ad, there’s a good chance you’ve gotten a taste of Dark UX, to use a less colorful term. Not all of it is downright manipulative, designed to squeeze that one-time action out of you; some of it is a combination of subtler interface design decisions meant to loosen our purse strings or keep us engaged with—or dependent on—a given digital medium.

Depending on the creator’s intentions, we the users will find ourselves falling somewhere on a spectrum with every digital medium we experience, where total unconscious immersion lies at one end and complete freedom and control at the other.

Tomorrow’s interfaces and the blurring of reality

As we get closer to developing better and better media forms that involve the user on a deeper level, many of the above heuristics may become locked to certain benchmarks and inseparably merged as part of a new standard for user experience: total immersion.

Nielsen’s original usability heuristics were created in 1994 and certainly remain relevant beyond the software they were intended for originally. Today, the boundaries between software, apps and websites are constantly being blurred depending on how and how much the user can interact with them. Even though we’ve come a long way from a time when the only input devices were mouse and keyboard, and we’re still busy exploring the potential of capacitive surfaces beyond the touch screen, the interface and user remain separated at the hands.

But because the 10 heuristics have always favored the user anyway—their end goal is to reduce cognitive load and ease decision making—the interface will eventually do away with separate peripherals and the user will become the input device.

When eye movements, speech, and even thought become an industry standard input for interfaces, we’re going to reach a point where the usability of all apps, sites and software is going to be evaluated on the by-then increased user expectations (for example “the program responds quickly to my gestures in the air, moves the displayed area with my eye movements or pauses when my mind is focused elsewhere.”)

By this time, anything that delays this or responds in a non-intuitive way will effectively “break” the immersion, violating several heuristics in one fell swoop and thus hurting a program’s usability—what we’d currently call “buggy” or laggy controls.

Art and industry

Regardless of whether an app, program, game or simulation is made for commercial or artistic purposes, a creator’s goal is always strong user engagement, whether that’s measured by how often users revisit it or how long they spend with it. As long-form journalism, feature-length films and perhaps eventually even podcasts decline in popularity, creators need to keep asking themselves how their respective arts and industries might change as attention spans shift and shrink and the path of least resistance shortens.

When VR and other yet-to-be-defined new mediums reach a high enough standard to become widespread and normalized in our everyday lives (we’re getting there), we will have to figure out how we address the divide between this world and the creator’s. Photographer, artist and VR filmmaker Julia Leeb uses VR so that her audiences can experience the terror of war in an uncomfortable but physically safe manner. What are the implications of future artists removing user agency to execute their visions? How will industry standards evolve to address Dark UX in commercial VR apps? Should we establish upper limits—a “no go” zone—for how immersive something can be? Or will we simply treat new mediums as just another field of rabbit holes, each of which we impressionable humans can get lost in, as we have tended to do?

You might say we’ve watched one episode of Black Mirror too many, but it never hurts to be mindful of how we use the new things we create, and how we can be used by them in return.

June 10, 2019

Hollywood is quietly using AI to help decide which movies to make

Los Angeles-based startup Cinelytic licenses historical data about movie performances over time and cross-references it with information about films’ themes and key talent, using machine learning to tease out hidden patterns in the data. Its software then lets clients swap out parts such as lead actors to see how the movie would perform across different territories.
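
Cinelytic hasn’t published its model, but the “swap out a lead actor and re-forecast” workflow can be illustrated with a deliberately tiny stand-in: blending historical average revenues by attribute. The catalog, numbers, and blending weights below are all invented; a real system would learn its weights from licensed performance data.

```python
# Invented mini-catalog of historical films (revenue in $M).
FILMS = [
    {"genre": "action", "lead": "Star A", "revenue": 320.0},
    {"genre": "action", "lead": "Star A", "revenue": 280.0},
    {"genre": "action", "lead": "Star B", "revenue": 150.0},
    {"genre": "comedy", "lead": "Star A", "revenue": 200.0},
    {"genre": "comedy", "lead": "Star B", "revenue": 120.0},
]

def avg_revenue(key, value):
    """Historical average revenue over films matching one attribute."""
    hits = [f["revenue"] for f in FILMS if f[key] == value]
    return sum(hits) / len(hits)

def forecast(genre, lead):
    """Blend genre and lead-actor averages; a real system learns these weights."""
    return 0.5 * avg_revenue("genre", genre) + 0.5 * avg_revenue("lead", lead)

# The "what-if": same action project, two candidate leads.
with_a = forecast("action", "Star A")  # Star A's track record lifts the forecast
with_b = forecast("action", "Star B")
```

Swapping the lead changes only one input, so the client can read the difference between the two forecasts as that actor’s estimated effect on the project, which is the core of the what-if pitch.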

Why the industry is overdue for the AI treatment

Cinelytic’s key talent comes from outside Hollywood. Co-founder and CEO Tobias Queisser comes from the world of finance, where machine learning has been embraced for everything from high-speed trading to calculating credit risk. Co-founder and CTO Dev Sen, meanwhile, used to build risk assessment models for NASA, another field where risk assessment is especially critical.

However, Queisser says the business side of the film industry is 20 years behind the production side: still relying on Excel and Word while production uses all kinds of high tech to make movie magic.

We’ve been here before

Cinelytic isn’t the first company to attempt to harness AI to predict and improve box office returns:

  • ScriptBook: Belgian company founded in 2015 that promised to use algorithms to predict a movie’s success by analyzing the script alone.
  • Epagogix: Similarly, UK-based company founded in 2003, uses data from an archive of films to make box office estimates and recommend script changes.
  • Vault: Israeli startup founded the same year that promised it could predict which demographics would watch a film by tracking metrics such as online reception to trailers.
  • 20th Century Fox: Uses a system called Merlin to analyze shots from trailers for content and duration (among other things) and see what other movies people will watch based on preferences.
  • Pilot: Centers its machine-learning process around audience analytics to make box office predictions.

Why this might be good and bad for film

Good: AI saves people the effort of doing some of the things they hate most, such as sifting through scripts the way a manager sifts through a mountain of resumes, effectively separating the wheat from the chaff.

Bad: For aspiring writers seeking to enter the big leagues, even talented ones, their work might never see the light of day, much less a set of human eyes, if their story, original as it may be, isn’t attractive to the algorithm evaluating it.

Good: If the prediction models become accurate enough, studios big and small can breathe easier and be more confident with their investments in films, ensuring higher returns on investments and confidence from shareholders, meaning they stay in business longer.

Bad: If an AI-assisted selection process becomes widespread, we’ll see a drop in the diversity of big-budget films being produced as studios seek to cater to the whims of as many demographics as possible, potentially complicating or watering down movies.

Good: Because box office numbers among other end metrics reflect audience choices, AI recommendations from cold hard data could override human prejudices that prevent certain stories from being told or certain people from being involved in a production.

Bad: If followed too closely, those recommendations might mean a studio will get an actor with the biggest box office market draw they can pay for without regard to whether or not the actor fits the story.

At the end of the day

Big studios have to produce films regularly to keep the money coming in year-round, pushing the right audience buttons at the right time of year, from summer blockbusters to holiday movies. Movies released to coincide with certain attendance patterns (say, a horror movie in time for Halloween) are usually designed to be enticing and entertaining, if forgettable, and for these there are plenty of ways AI can help set big studios up for the best numbers possible.

These film types are formulaic enough to allow for this kind of “drag-and-drop” production. An example would be family-friendly comedies starring a “big dude with a big heart,” which have appeared regularly throughout the past twenty-plus years: Arnold Schwarzenegger (Jingle All the Way), Vin Diesel (The Pacifier), The Rock (The Game Plan and The Tooth Fairy) and most recently, John Cena (Playing with Fire).

That said, it remains to be seen if studios will ever let AI override decisions from accomplished writers, producers, and directors whose track record and reputation give them the authority to choose a given actor or other artist to work with. Further, AI can only make predictions based on what has already been made and not how tastes and popularity will shift in the future—the result of many factors outside of a strictly cinema-centric data set. That’s something that requires insight and instinct, something that humans are still valued for.

May 17, 2019

Adobe Tells Users They Can Get Sued for Using Old Versions of Photoshop

Source:

In a move that shocked—or didn’t shock—Adobe users, the company announced that customers could face legal consequences for using old discontinued versions of Photoshop, warning them that they were “no longer licensed to use them.”

This week, Adobe began sending some users of its Lightroom Classic, Photoshop, Premiere, Animate, and Media Director programs letters warning that they weren’t legally allowed to use software they had previously purchased.

“We have recently discontinued certain older versions of Creative Cloud applications and as a result, under the terms of our agreement, you are no longer licensed to use them,” Adobe said in the email. “Please be aware that should you continue to use the discontinued version(s), you may be at risk of potential claims of infringement by third parties.”

How we got here

In 2013, Adobe moved away from its original business model, whereby users could purchase hard copies—and continue to use them regardless of later versions being released. The new subscription-based service Adobe Creative Cloud resulted in notably higher revenues due to the constant stream of monthly fees from users. Naturally, requiring users to regularly sign in online to confirm their paid subscription also curtailed the use of pirated or cracked versions of Adobe’s software that became widespread with the previous business model.

Everyone’s gone insane with subscriptions

Adobe’s transition to subscriptions is neither new nor unusual. From video games and mobile apps to delivery-based health food plans and even clothing, subscription-based business models are extremely common now, ensuring the companies behind them have regular cash flow. But the issue with software and other tech is that they are by nature modifiable via wireless updates, meaning features can be added, disabled or removed simply by going online. Worse, some companies can make these modifications mandatory; in Adobe’s case, accepting them is a condition of continuing to use the software or service.

And as you might have guessed, we’re complicit in continuing this behavior, because the stipulation that the company is free to do so is buried, but clearly written, in those End User License Agreements we unashamedly skip through and agree to. This gives companies—pardon the pun—free license to modify products we don’t own in any concrete way, and this most recent move extends to products people thought they owned in perpetuity.

What’s to be done?

Seeing as Adobe’s software serves as the industry standard for many creative fields (Premiere for video, Photoshop and Illustrator for graphic design, and InDesign for publishing, among others), it’s hard to peel away from software we might have trained on, are used to, or that the rest of our collaborators are using. For established creatives, that’s a business expense that can certainly pay for itself (assuming you earn over $60 USD a month on projects), but it’s a big upfront cost for artists who are starting out or looking to go digital.

The solution for them? Draw a line in the sand and stick to open-source or free programs that are actively maintained and have a community behind them. There’s no Photoshop-like industry standard for creative writing (as much as Microsoft Office wishes there were) because people care more about the end result than about the software it was produced with. If you have the talent or willingness to create good work, even if that means a few extra steps to do with free software what some of Adobe CC’s cutting-edge features could do better and quicker, the portfolio will speak for itself.

For those tired of getting nickel-and-dimed? You might need to take stock of and start Marie Kondo-ing your monthly and yearly subscriptions and decide whether you could put those savings towards more permanent solutions you own (such as making your own NAS in place of regularly paying for cloud storage).
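To make that comparison concrete, here’s a minimal back-of-the-envelope sketch. All the prices are hypothetical examples, not quotes from any vendor; the point is only how quickly a one-time purchase you own can overtake a recurring subscription:

```python
import math

# Hypothetical prices for illustration only.
cloud_monthly = 10.0  # assumed cloud-storage subscription, USD per month
nas_upfront = 400.0   # assumed one-time cost of a DIY NAS, USD

# Months until cumulative subscription fees exceed the one-time cost.
break_even_months = math.ceil(nas_upfront / cloud_monthly)
print(f"The NAS pays for itself after {break_even_months} months")  # 40 months
```

After that break-even point, every month of the subscription is money that could have gone toward something you keep.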

May 3, 2019

How Slack and the open office layout ruined productivity

Source:

Advanced workplace communication technologies and a return to humanistic office design were meant to make companies more innovative, making us both happy and productive. They did neither. Where did we go wrong and how do we come away from it?

Productivity Software

Tools like Slack, Workplace and Microsoft Teams were meant to facilitate communication and get us away from the dreaded overflowing email inbox, but they also brought something just as bad in their wake:

  • The low barrier to entry of communication software means that people share information more frequently, but at significantly lower quality.
  • Work communication becomes its own sort of social media, complete with its distracting allure as a time sink.
  • A rise in performative messaging and information sharing from remote workers looking to show that they are, in fact, working at their desks.

Open Offices

The open layout of offices where workers are seated within eyeshot of each other or visible behind glass partitions allowed companies to both save on their leases by reducing square footage allotted per employee and give the impression that they were forward thinking and innovative. But this lack of barriers produced some unfavorable results as well:

  • There is no separation from other people’s in-person interactions: you’re within earshot of every conversation, and even with earbuds in, you aren’t completely focused when movement is constantly catching your eye.
  • 65% of creative people need quiet or absolute silence to do their best work, an environment open offices can’t provide, especially if workers are required to be in office at all times.
  • At many companies, anxiety increased among women, who not only felt more pressure from “being on display” all the time, but also found male coworkers evaluating female colleagues on their attractiveness, some of the factors that prompted women to seek out “hiding spots” where they could find privacy.

How It All Adds Up

When the time lost to distractions in both digital and physical spaces adds up, it subtracts from normal working hours, which prompts people to multitask in the hope of recovering that lost time. The issue is that the length of a distraction grows with the co-worker(s) involved, and the number of distractions scales with the size of a given team or department. This far outweighs a single worker’s capacity to work faster and multitask, which isn’t actually a thing: it’s just inefficient switching between tasks.
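As a rough illustration of how this scales, consider a simple model (with entirely made-up figures): each interruption costs not just its own duration but also the time needed to regain focus, and the number of interruptions grows with team size:

```python
# Rough model of daily time lost to interruptions.
# All figures are illustrative assumptions, not measurements.
def minutes_lost_per_day(team_size, interruptions_per_coworker=2,
                         interruption_minutes=2, refocus_minutes=10):
    """Each coworker triggers a few interruptions per day; each one
    costs its own duration plus the time needed to refocus afterward."""
    interruptions = team_size * interruptions_per_coworker
    return interruptions * (interruption_minutes + refocus_minutes)

print(minutes_lost_per_day(5))   # 120 minutes: two hours a day
print(minutes_lost_per_day(10))  # 240 minutes: doubling the team doubles the loss
```

Even with conservative assumptions, the refocusing overhead, not the interruptions themselves, dominates the loss.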

Regardless of the nature of the job (9-to-5 versus flexible work hours), this almost always results in work being done far beyond regular working hours. While this is certainly fine for the occasional sprint, over time it means an eroded barrier between work and life. And the less productive our best hours of the day are, when we’re at peak focus and energy, the worse our work becomes in our off hours.

How We Can Learn From This

Where the individual box-shaped cubicles of the 20th century were derided for isolating and alienating us from our coworkers, making mindless drones of us, open offices have made machines of us in different ways: constantly online, aware, engaged and performing. That said, both styles of work have inherent advantages that can be leveraged to make our time in an office as rewarding as it is productive, so we can all go home on time.

  • People are not gears, but we absolutely have them. We need to be able to switch from being inaccessibly focused on our tasks to produce the best work in a given period of time while also being open and attentive to interactions with other people. But we can’t do both at once and we can’t remain in one gear indefinitely. If the space can’t be physically structured to allow us to switch gears, the daily or weekly schedule must.
  • While the romantic notion of the grind and the hustle has simply changed vocabulary and style, at its core it means working harder, but not necessarily smarter. Systems and metrics like KPIs can be oppressive if improperly or unfairly implemented, but these unsexy methods have their place in keeping us focused on what’s important as much as what’s urgent, so that we have a work “diet” that is healthily balanced without too much junk (making office memes, anyone?).
  • For leaders and managers: provided there are specific, measurable goals and deadlines agreed on in advance to measure everyone against, recognize that the quieter, seemingly distant workers might not be less committed to or involved in a project, and that the more vocal and active communicators might not necessarily be the most productive either.

April 16, 2019

AI is improving, not taking journalists' jobs

While the fear of AI replacing human jobs in certain sectors might be warranted, those in the journalism field can breathe a sigh of relief and even rejoice, according to journalists from the Wall Street Journal, Washington Post, WIRED, Dogtown Media and Graphika.

Five journalists from those publications met with and spoke to over 1,000 students across the Missouri School of Journalism, the Trulaske College of Business, the College of Engineering, and the College of Arts and Science on March 18-19 as part of the Reynolds Journalism Institute’s Innovation Series.

The overall message was that AI is bringing positive change to the news field such as through customized content, improved user relationships, moderating comment sections, and creating more efficient workflows.

Some key takeaways

  • Artificial intelligence is a tool to allow journalists to better understand readers. “Can we make a story more personally relevant to a user, to the reader, watcher or listener? If we can do that, that’s what makes people establish trust. Not just that the information is believable, but the information is believable AND it matters to ME.” (Jeremy Gilbert, Director of Strategic Initiatives, Washington Post)
  • We must recognize what AI is and is not capable of in order to make it work for us. It is certainly an imperfect tool, but it has allowed Condé Nast publications to make more strategic advertising spending decisions thanks to data that provides a better understanding of their users. (Jahna Berry, Head of Content Operations, WIRED)
  • AI offers a means for journalists to re-imagine and better leverage their skill sets. The issue for journalists has always been that “there’s always been more data than (journalists) can sift through. You just have to know how to ask the right questions (of) the data, the records, to get (to the) relevant story.” (Nick Monaco, Disinformation Analyst, Graphika)
  • Journalists need to know who writes the algorithms behind AI, understand their intentions and ultimately hold them accountable. (Steve Rosenbush, Enterprise Technology Editor, The Wall Street Journal)
  • AI is useful to journalists in allowing them to work smarter, faster and more efficiently. This frees up more time for journalists and other knowledge workers to think creatively about problem solving and apply themselves to what they do best. (Marc Fischer, CEO and Co-founder, Dogtown Media)

We’ve been here before

In an episode of MAEKAN It Up, we discussed the potential impact of AI on the creative industries and the workers in them. As with journalism, the importance of human intuition will remain a key factor that prevents the complete replacement of human creative jobs, but it will at least replace a number of human tasks—hopefully the least desirable ones.

Regardless of the nature of the job, there will always be tasks that are important but time-consuming and that require significantly less creative thinking. It’s these tasks that we would be happy to let AI “have” so that we’re freed up to do other things that utilize more of our skill set. Or better yet, AI can produce several rough but usable first passes or concepts that we can then tweak or iterate on.

Overall, like Monaco and Fischer above, we remain confident that for now, AI has a welcome role to play in taking over some of our tasks, but it won’t be taking our entire jobs.
