March 2, 2020

It's not Game Over yet — The need for utopian narratives

As popular fictional narratives about a bleak future seem to leap off screens and pages and into reality, it’s never been more important to consider that all is not lost. We look at why dystopian narratives are popular and why we need to re-invest our imaginations in utopian ones.

The primer on dystopia

Dystopia isn’t necessarily about the nightmarish doomsday scenarios that dominate popular media, where AI, infectious disease or natural disasters have spun out of control (that’s apocalyptic fiction). Instead, the word simply describes “a bad place,” a society that’s still largely functioning but has lost its way so badly that it sucks to live in. Most of the time, this is brought about by a society’s ruling powers exerting unshakeable influence over:

  • Economics: where rigid controls on the economy create huge class divisions.
  • Family: where entire families, and even the definition of “family,” are systematically controlled.
  • Religion: where faith becomes either a basis for persecution or for oppression through a theocratic government.
  • Identity: the expression and actualization of an individual’s essential being are strictly controlled.
  • Violence: as an institutionalized means of maintaining control or “balance” à la Battle Royale or The Hunger Games.
  • Nature: where nature, including animals (and even human instincts), is centrally controlled.

Strip away the more visible (and easier to dismiss) high-concept apocalyptic imagery we see in film, TV, and games, and it’s easy to spot alarming developments in some societies that resemble what dystopian literature of the past two centuries depicted.

Why we’ve been here before

If you do find yourself looking at dystopian or speculative fiction, you’re going to find prolonged conflict — an essential element for most stories and arguably what helps them sell. And this hasn’t just been the case recently; it’s happened several times in the last century. Yvonne Shiau details the motivations that drove several successive phases of dystopian fiction in Electric Lit:

  • The genre is defined (’20s–’40s): George Orwell and Aldous Huxley explored themes that would regularly reappear in the genre, each touching on a different fear. Orwell feared our destruction at the hands of the powers that be, Huxley at the hands of our own comfort and complacency.
  • The threat of war and technology (’50s–’60s): The end of World War II inevitably fueled speculation about World War III and worse outcomes. Major technological advances like the first satellite, early computers and the Turing test for machine intelligence stoked authors’ suspicions about technology.
  • Corporate impact on our bodies (’70s–’90s): Public concerns shifted from war to other issues such as the environment, economics and the impact of private corporations on how we value and perceive our bodies. Works from this period include The Handmaid’s Tale and Neuromancer (which gave rise to cyberpunk).
  • Youth against the world (’00s–’10s): The genre becomes heavily associated with youth and their conflict with a cruel world gone awry, possibly in the wake of events such as 9/11. Works like The Hunger Games are adapted for film and continue to fuel renewed interest in the genre.

While dystopian fiction has regularly given us a way to explore our fears about the future, the issue is that those narratives can become pervasive enough to turn into self-fulfilling prophecies when people accept them as inevitable. And when they dominate media discourse and thus the mainstream public consciousness, they inevitably seep into our own thinking as well, despite all our attempts to think critically — or optimistically.

Why we need to imagine utopias

All of these factors point to one overarching angle: it’s mentally expedient and commercially viable to imagine more pessimistic outcomes, especially when those themes are already present around us. And as Eleanor Tremeer points out in her Gizmodo article, we never get to see “what happens after the victory,” when our protagonists triumph over evil and a new age begins. Arguably, doing that takes as much effort as it does imagination. But it’s not impossible.

One of our biggest tools in fighting the urge to worry lies in the fact that yes, “we’ve been here before.” Our recent Editor’s Letter touched on history’s capacity to repeat itself (or at least “rhyme”), with our emotional reactions following suit. This is true in the sense that every generation has feared the “end of the world,” only for that fear to give way to another generation whose ensuing prosperity (or at least improvement) makes us forget what happened last time.

As creatives and artists, it’s often our job to tell stories and tell them in certain ways. But we also have the ability to tell powerful new stories and, through our work, create new realities that don’t yet have a perceivable basis in the present. We just have to ask ourselves, “what does the best-case scenario look like,” and then temper that with reality and the relevant constraints.

The Takeaway

As humans, we’re naturally susceptible to worry — and there’s certainly a lot to worry about these days. But history has shown that every generation has confronted fears of a world headed for an untimely end, and eventually recovered. Without having lived through those phases of history and seen the ups and downs for ourselves, it takes a lot to resist the urge to internalize our current dread that no, “it’s for real this time. We’re screwed.”

So then, what’s the cure for despair? Some of the solutions we once dreamed of may no longer be feasible in their original form given current concerns about sustainability or ethics, but that’s no reason to abandon those trains of thought outright.

Instead, it might be helpful to view utopia and dystopia as push-pull forces that lead us forward: we might not be able to avoid all of the bad along the way, but we can absolutely imagine a future that isn’t just ‘okay’ but, in fact, great. As Sarah Connor (played by Linda Hamilton) famously put it in Terminator 2: Judgment Day: “There is no fate but what we make for ourselves.”

February 17, 2020

The art of storytelling through data

Data visualization helps us make sense of complex and difficult topics, and in the process it can also produce some aesthetically pleasing images. How do we approach its use as a form of storytelling as it becomes more popular?

What is data visualization?

In his comprehensive guide “The Art and Science of Data Visualization,” analyst Michael Mahoney defines it simply as “the graphical display of data.” It’s how you take different data points and find ways to differentiate them according to certain variables (such as size, color and shape), and arrange them in a way that can be understood. In his words, “Visualizations are often the main way complicated problems are explained to decision makers.”

In his guide, he provides mantras that underscore four key factors in creating good data visualizations:

  • Effective: “A good graphic tells a story.” Because of the size and breadth of modern data sets, the visualization should include only the elements that make patterns and trends easier to identify.
  • Simple: “Everything should be made as simple as possible — but no simpler.” Practices like using only 2D or cutting down on extraneous visual elements keep it lightweight.
  • Efficient: “Use the right tool for the job.” This means choosing methods that depict the data correctly.
  • Digital: “Ink is cheap. Electrons are even cheaper.” Make more than one graph when splitting the data by category or grouping makes the differences easier to see (as in the small-multiples sketch below).

Beyond informing decision makers, Mahoney says visualizations are used to help identify patterns in a data set or to explain those patterns to wider audiences. To this end, they have to be both truthful and easy to decode.
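
To make the “simple” and “digital” mantras concrete, here is a minimal small-multiples sketch, assuming a pandas/matplotlib workflow and invented readership numbers (our choices for illustration, not Mahoney’s examples):

```python
# A minimal small-multiples sketch: one simple 2D line chart per category
# instead of one crowded chart. Library choice (pandas/matplotlib) and the
# synthetic data are our own assumptions, not Mahoney's examples.
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical data: monthly readership for three content categories.
data = pd.DataFrame({
    "month": list(range(1, 13)) * 3,
    "category": ["essays"] * 12 + ["interviews"] * 12 + ["news"] * 12,
    "readers": [120, 130, 125, 140, 150, 160, 155, 170, 180, 175, 190, 200,
                80, 85, 90, 88, 95, 100, 105, 110, 108, 115, 120, 125,
                200, 195, 210, 205, 220, 215, 230, 225, 240, 235, 250, 245],
})

# "Electrons are cheap": draw one small chart per category so each trend
# reads on its own, rather than piling every line onto a single axis.
categories = data["category"].unique()
fig, axes = plt.subplots(1, len(categories), figsize=(9, 3), sharey=True)
for ax, cat in zip(axes, categories):
    subset = data[data["category"] == cat]
    ax.plot(subset["month"], subset["readers"])   # plain 2D line, no extras
    ax.set_title(cat)
    ax.set_xlabel("month")
axes[0].set_ylabel("readers")

fig.tight_layout()
plt.show()
```

Splitting the data into one plain 2D panel per category keeps each trend legible instead of piling every line onto a single, busier chart.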

There is no “one chart fits all”

Dear Data’s hand-drawn approach, however, challenges the expectation of how a data story should be told. The project is the brainchild of Giorgia Lupi and Stefanie Posavec, who began it as a form of visual correspondence, using hand-made postcards to share information about their lives over the course of a year. The first week of their exchange tracked the number of times they checked the time, while subsequent weeks tracked everything from smartphone addiction to the number of times they saw their own reflections.

Looking at them, it’s not always apparent that the drawing is in fact a representation of data points (especially to the untrained eye), much less a picture of someone’s life. Still, “data is a way to filter reality in a way that words cannot,” Lupi says. This consideration adds a layer of nuance and complexity to data visualization that is at odds with its transparent, scientific side.

Does data visualization need to be easy for everyone to understand? After all, the deep perspectives we can gain from examining our own very personal data might, like many codified narratives, be better reserved for ourselves or a select few to interpret. Put another way, some stories are told for specific audiences. Could data stories be told in a similar way, deemed ‘art’ by the initiated but comparatively opaque to others?

Our reaction to data visualization

Not unlike how editorial illustrations contextualize a much longer and larger story, data visualizations offer us the perception-altering perspectives that accompany drawings. However, our treatment of them might differ because they lie at the intersection of our current preoccupations with:

  • Metrics: Facts that emanate from the scientific or academic community give us a grounding in reality that helps us pierce through conflicting opinions and misinformation, and that we can use to get ahead in life. For better or worse, metrics and figures give us standards to measure our lives against and improve.
  • Visuals: The shift in audience preference for video means visual representations of a topic will rank higher than a few thousand words of even well-written text (and likely even more than charts full of raw data points).
  • Digestibility: Like infographics, data visualizations give people with significantly less experience with a subject the ability to digest a much larger, more complex story.

But beyond helping us understand topics we already want to know more about, data visualizations can also pique our curiosity about ones we otherwise wouldn’t explore. A look at some of the visualizations from Nathan Yau’s Flowingdata could put us onto important topics like saving for retirement (or decidedly less urgent matters like burger rankings).

The point is that the role of data visualizations in the diverse media landscape will only become more pronounced so long as audiences recognize their potential to broaden understanding and fight misinformation.

The Takeaway

Data visualizations provide a way to contextualize phenomena and make sense of complex and difficult topics. However, they are only as honest as the people who design them. As with all stories, it’s important that we think critically about the larger themes of the topics themselves, and especially about simplifications of them. When data sets become condensed into comparatively small but pleasing gold “nuggets” of information, those nuggets become as easy to misconstrue and abuse as they are to share.

In short, the facts might be embedded into the image and one story told through their arrangement, but any truths are still unfortunately up to us to figure out.

February 3, 2020

Do culture and nature evolve in the same way?

A recent study argues that culture actually evolves very slowly — at almost the same rate as nature. But does cultural evolution work so neatly in our current culture?

The Study

In a recently published study titled The pace of modern culture, a group of British researchers used metrics designed by evolutionary biologists to compare the rates of change in a species of bird, two kinds of moth and a snail to:

  • Popular songs: They reviewed Billboard Hot 100 songs from 1960 to 2010.
  • Cars: They tracked changes in the traits of cars sold in the States between 1950 and 2010.
  • Literature: They also looked at American, Irish and English novels published between 1840 and 1890.
  • Clinical articles: They examined articles from the British Medical Journal published between 1960 and 2008.

Their conclusion? The two evolve at about the same rate. As Armand Leroi, an evolutionary biologist at Imperial College London and one of the researchers on the study, puts it: “We are surprisingly conservative about our choices, and what we like changes very slowly.”

He also likens cultural artifacts to organisms in that they evolve and survive according to whether conditions are hospitable to change or not: “When we make something new, be it a scientific paper or an artwork, we take that thing and throw it into the world and it either lives or dies,” Leroi says. “Its success depends on whether people want it or not, just like natural selection.”

Despite the results, however, Leroi is less interested in the speed of evolution itself than in demonstrating the potential to use tools from one field (in this case, evolutionary biology) to study and track changes in another, such as culture (one such metric is sketched below).
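
The paper’s exact statistics aren’t reproduced here, but one standard rate measure from evolutionary biology, the “haldane,” expresses the change in a log-transformed trait mean in pooled standard deviations per generation. Here is a rough sketch of that idea with invented numbers, including our own assumption about what counts as a cultural “generation”:

```python
# Rough sketch of a "haldane"-style rate: change in a log-transformed trait
# mean, in units of pooled standard deviations per generation. The numbers
# below are invented for illustration; they are not the study's data.
import math

def haldane_rate(mean1, sd1, mean2, sd2, generations):
    """Rate of change between two samples of a log-transformed trait."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean2 - mean1) / (pooled_sd * generations)

# Hypothetical "trait": ln(track length in seconds) for charting pop songs,
# sampled in 1960 and 2010, treating ~25 years as one cultural "generation".
rate = haldane_rate(mean1=5.30, sd1=0.20,   # ln-seconds, 1960 sample
                    mean2=5.45, sd2=0.22,   # ln-seconds, 2010 sample
                    generations=2)
print(f"{rate:.3f} haldanes")  # roughly 0.36 standard deviations per generation
```

The appeal of borrowing the metric is that the same calculation can be pointed at a beak measurement or a chorus length and the resulting rates compared directly.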

The other side

Charles Perreault, a professor of human and cultural evolution at Arizona State University, holds a different view. He concluded in 2012 that human culture actually moves 50 percent faster than biological evolution. This applies even when controlling for the phenotypic plasticity (the ability of an organism to change in response to its environment) of species with shorter lifespans. Basically, these sorts of species can “iterate” faster and more often over those shorter intervals, but Perreault argues that our cultures still evolve faster even relative to our longer generation times (measured in spans of roughly 20 years).

In his abstract, he also contrasts the biological “vertical” sharing of genetic information (through reproduction) with the transmission of information in culture:

“While cultural information can be transmitted from parents to offspring, it is also transmitted obliquely, between non-parents from a previous generation, and horizontally, between contemporaries. This transmission mode gives cultural evolution the potential to spread rapidly in a population, much like an epidemic disease.”

The Takeaway

Looking at how culture evolves through this lens certainly draws some interesting parallels that support both arguments: for one, the culture of a given society can be slow to adopt change even when it’s constantly exposed to different stimuli, and yet it also has the potential to disproportionately influence one or more other societies.

We’ve seen the rise of “strong” and “viral” cultures exerting undue influence throughout the world, which lends some troubling weight to the problematic ideas about cultural evolution that emerged shortly after Darwinism. Is it really the loudest, most popular (and best-funded) culture that survives?

One thing we’ve discussed at length is the idea and impact of media fragmentation. It’s harder than ever to get people on the same wavelength because there’s a practically infinite number of wavelengths to tune into, which makes the consolidated sharing of ideas more difficult than ever.

While history has shown many unfortunate examples of this, technology has the power both to perpetuate the trend and to amplify smaller, lesser-known cultural products and ideas so they exert disproportionate influence (so they can “reproduce”) elsewhere. If these dynamics could be reduced to a science, should we be trying to hack the formula to get the results we want, or should we willfully “devolve” and let cultural nature run its course?


January 24, 2020

Can next-gen fake meat become a hit in China?

At a time when China’s economic progress has meant more disposable income, the country’s population now tops the world for meat consumption. But with our uncertain resource future, the rise of meat analogues continues to gain momentum. Is there room for fake meats in China (and, frankly, globally) despite its traditionally pork-loving culture?

Fake meats new and old

While the debate on the health aspects of meat consumption rages on, there is less question about the large impact of livestock production on greenhouse gas emissions. Together, these two major factors form the drive to reduce the consumption of meat — real meat, that is. Enter meat analogues, or “fake” meats. These substitutes are actually nothing new and have existed for centuries, especially to abide by the dietary laws of different religions, but they are taking on new meaning in the 21st century:

  • Tofu: made from soybeans and likely the most widely known meat substitute.
  • Tempeh: a traditional Indonesian soy product made from fermented soybeans pressed into cake form.
  • Wheat Gluten: used to make most of the fake meats (such as duck and BBQ pork) derived from the Chinese Buddhist culinary tradition. Also called seitan.
  • Almonds: Used as a meat and dairy substitute, especially during Lent in Medieval Europe.

As for the comparatively new kids on the block, different companies use different proteins and methods to produce their meat:

  • Beyond Meat: uses proteins from peas, mung beans and rice.
  • Impossible Foods: uses heme (soy leghemoglobin) produced by genetically engineered yeast as its signature ingredient.
  • Quorn: the UK-based company uses fungal protein with egg albumen as a binder.
  • Zhenmeat: the Beijing-based startup uses 3D printers to produce products that include elements like bones, which Chinese consumers are used to eating meat off of.
  • Green Monday: Based in Hong Kong, its breakout ground meat product, OmniPork, uses shiitake mushroom, peas, rice and non-GMO soy.
  • Whole Perfect Food: Founded in 1993 in Shenzhen, the manufacturer of over 300 products uses different methods for different products, such as extracting seaweed protein for vegan seafood.

While the historical meat analogues were created for vegetarians and vegans, the companies behind both the old and new analogues are hoping to sway hardline carnivores and omnivores alike with the promise of equal or better taste, better nutrition and the chance to help the planet.

Breaking into the Chinese market

A New York Times article by David Yaffe-Bellany outlines the efforts of American companies Beyond Meat and Impossible Foods to break into the Chinese market. They have a few hurdles to overcome:

  • Regulations: there’s a spiderweb of regulatory bodies to get through before their products can even be approved.
  • Culture: even though meat-free Buddhist cuisine is thought to have originated in China, the culture still largely favors tradition and the status associated with meat consumption. That consumption (especially of pork) is expected to rise over the next six years.
  • Local Competition: there are already China-based fake meat companies that are ahead of the game in terms of being integrated and localized for that market.

Like all companies trying to enter a tough market, those two will likely have to make serious adjustments to their game plan to integrate successfully.

The KFC Case Study

This isn’t to say they’re doomed to repeat the failures of many US brands in China. For one, KFC had to largely abandon its US model before becoming the most popular US fast food chain in China:

  • Larger stores: where KFCs in the US favored takeout, those in China were doubled in size to welcome groups, extended families and longer stays.
  • Larger menus: the larger menus and rotation of seasonal items are meant to aggressively cater to local tastes (which also required hiring more food prep staff).
  • Smaller cities: instead of competing with McDonald’s in the largest cities, KFC opted for ones with smaller populations, where incomes were rising and the brand’s appeal would still be novel.

In short, success did follow this particular brand, but it meant extensive localization along with a strong business strategy.

The Takeaway

While at least one fast food company managed to find a way to sell fried chicken to a culture that already consumes chicken, selling fake meat to an unabashedly carnivorous population is going to be a far greater challenge. Not unlike the prosperity (and subsequent rise in meat consumption) America enjoyed after World War II, China’s transition through that war, its own civil war and its sharp economic rise mean its appetite for meat will continue to increase.

But it also presents an interesting conundrum with respect to the interaction between established cultures and brands that are new to them. Fake meat spreading into tough markets isn’t going to happen overnight, but it may come sooner against the backdrop of wider pushes to change our diets as the planet faces a future with scarcer resources and more mouths to feed, with the population growing roughly 1.05%, or 81 million people, per year.
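
As a rough sanity check on that figure (assuming a world population of about 7.7 billion, our own approximation for early 2020): 7.7 billion × 1.05% ≈ 81 million additional people per year.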

It’s an interesting if uneasy intersection of culture, economics and the environment: on one hand, we could say that because one group is disproportionately impacting the globe with its massive population, it should change. On the other, we risk dictating that a group should move away from a culture or lifestyle (and toward the product we’re selling) just as it’s starting to enjoy the benefits, simply because we, having deemed that lifestyle more advanced or “better” than the one before it, are now reckoning with its consequences.

What has to happen first to catalyze change? Does a culture have to be catered to above all else, or does it depend on the brand’s global recognition? Does the brand forcibly re-write the local culture a bit with strong localized imagery and marketing to gain traction — or will more dire circumstances take care of the re-writing?


November 4, 2019

The Importance of the Daily "Brain Shower" aka Sleep

Wired’s Sara Harrison breaks down the results of a study by Dr. Laura Lewis on the importance of sleep (as if you needed another reminder). Lewis and her team at Boston University have revealed how our bodies clear toxins out of our brains as we sleep.

The Study

The study aimed to test the role of non-REM (deep) sleep in clearing toxins from the brain by examining sleep cycles that were as realistic as possible. Here’s how they did that:

  • Late nights: to ensure the sleep cycles were as realistic as possible, subjects were instructed to stay up late the night before so that they could drift off easily at midnight, when the tests were run.
  • Non-invasive: Participants had to lie down and fall asleep inside an MRI machine and were fitted with an EEG cap to measure the currents flowing through their brains. The test was as non-invasive as possible, even forgoing the use of injected dye commonly used for mapping out the body during MRIs.
  • Isolating Metrics: the MRI measured the levels of oxygen and cerebrospinal fluid — the clear liquid found in the brain and spinal cord — in the brain. This was to better understand their relationship during sleep.

Harrison notes how Dr. Lewis sacrificed her own sleep for science to conduct the late night study, running tests until 3am before sleeping in the next day: “It’s this great irony of sleep research,” Lewis says. “You’re constrained by when people sleep.”

The Findings

Lewis found that during non-REM sleep (also known as deep sleep), the following took place:

  • Neurons synchronize: These specialized cells that transmit impulses start to switch on and off at the same time.
  • Blood flow decreases: When they switch off or “go quiet,” they have less need for oxygen and, as a result, blood flow to the brain decreases.
  • CSF fills in: When this happens, cerebrospinal fluid rushes into the added space and washes over the brain in large, slow waves.

The results build off a previous 2013 study led by neuroscientist Maiken Nedergaard that showed toxins like beta amyloid, a potential contributor to Alzheimer’s disease, were cleared out in mice during sleep. Suffice it to say, sleep is as important for humans as it is for mice. “[The paper is] telling you sleep is not just to relax,” says Nedergaard. “Sleep is actually a very distinct function.”

The Implications

Harrison notes that this study only focused on non-REM sleep in healthy young adults and not on other sleep cycles and in older people, which means more research is needed. Still, the findings might help improve treatment for conditions such as Alzheimer’s: where previous medications just focused on targeting certain molecules like beta amyloid, understanding the important role cerebrospinal fluid plays means a new path towards other treatments. For one, Nedergaard says, future treatments might emphasize increasing the amount of cerebrospinal fluid washing over the brain.

Where we’re going with this

Even if we don’t have Alzheimer’s, we can’t ignore the important relationship between sleep, healthy brain function and mental health; problems in any of these areas are exacerbated by poor sleep, especially in those with pre-existing conditions. To add to that, workers in the creative industry are more susceptible to certain issues such as depression.

Even for those who don’t suffer from said issues, it’s unlikely we’ll hear the conclusive end of the debate on how much sleep is good for creativity, with some arguments broadly praising the benefits of less sleep and others suggesting it depends on the type of creative. For that reason, we’re not going to flat-out suggest you get more sleep than you need to feel good nor will we be penning an Analysis titled “Why You Don’t Actually Need to Sleep That Much” anytime soon.

Rather, we take this study as proof of the overall importance of a good night’s sleep, which, now that we see how cerebrospinal fluid is involved, could be affectionately (and accurately) called our daily “brain shower.” Just like a real shower, the exact length varies by individual: you take as long as you need to start or end your day feeling clean, refreshed and creative.

May 23, 2019

Breaking down the science of beauty and how it influences creativity

In his 2013 book, The Aesthetic Brain, neuroscientist Anjan Chatterjee discusses our brain’s ability to have aesthetic experiences—those deep “magical” moments that leave us in awe. While these responses have evolved out of the same chemical and emotional pathway that helped us survive, they now help us understand why we like the things we like.

What makes an experience aesthetic?

According to Chatterjee, there are certain configurations of sensations and of objects in the world that produce an experience that is qualitatively different than just straight perception.

But what differentiates “aesthetic experiences” from pleasurable ones such as having a good meal or seeing something or someone attractive?

He believes one way aesthetic experiences distinguish themselves is by being self-contained: the experience doesn’t go beyond your own immersion and engagement with it, and it doesn’t come with an impulse to act, such as a desire to purchase the object or show it to a friend.

Liking over wanting

Neuroscientist Kent Berridge refers to two systems that work together as part of our brain’s reward system: that of liking and that of wanting. In short, we tend to like the things we want and we want the things we like. Chemically and anatomically, however, they work differently in our brains.

Dopamine’s role in learning is key to helping us get what we want, whereas the “liking” system is purely about pleasure and is mediated by our opioid and cannabinoid receptors. These two systems can be dissociated, however, as in the case of addiction, where we want something we don’t necessarily like anymore.

As far as aesthetic experiences go, the liking aspect takes precedence over the wanting aspect; we like for the sake of liking.

The aesthetic triad

Chatterjee and his contemporaries in neuroaesthetics believe there are three systems in the brain through which we can have an aesthetic experience, and that these can help us understand how our brains are being engaged.

  • Sensorimotor circuitry: traditional beauty and the scientific sense of “pleasing” aesthetics
  • Emotional and reward circuitry: the wanting and liking system
  • Semantic conceptual circuitry: refers to messages, cultural background, and contextual knowledge

This means someone could have an aesthetic experience with a piece of art that isn’t “beautiful” in the traditional sense, by way of their knowledge of the nuances and concepts behind it, and vice versa.

Context and culture

What we consider art and even what is likable to us changes with time and context. Despite our brains having remained largely the same for 150 years, our perceptions of objects that could move us toward an aesthetic experience are fluid and susceptible to influence.

In one Danish study, people were shown abstract images. In one condition, they were told the images were computer-generated by an algorithm; in another, they were told the same images were hanging in a gallery. Both the subjects’ verbal responses and imaging of their brain activity suggested they liked the images they thought were in a gallery more.

Our thoughts

Chatterjee views creativity as essentially reconfiguring a problem and seeing it in a different way. He says our current culture emphasizes productivity and a brute-force analytic approach to creative solutions without allowing for unstructured downtime—periods of low arousal such as showering or winding down before bed—for organic creative insights to emerge.

For creative people, our perceptions can become our references and our aesthetic experiences our inspirations. But because they can both be shaped and triggered by external contexts, our creativity—that is, our internally derived original thoughts—may very well depend on allocating more of our lives to said downtime. Otherwise, we risk investing more in consuming prevailing narratives instead of writing them ourselves.

April 4, 2019

Instantly Forgotten — How Social Media is Ruining Our Memories

When we record moments in our lives for the sole purpose of publishing on social media, we risk ruining our memories—the moments themselves and the mental faculty—in the process.

By this point, we’ve seen how social media, with the addictive allure of instant gratification, has ruined social gatherings: the dozens of dinner-spread photos before chowing down, live concerts viewed and streamed through a tiny screen, and random annoying disruptions from someone staging a moment for the ’gram.

But while commemorating life and social moments has always been a part of our desire to record, document and pass down our stories, the way we’re doing it now is not just ruining our memories, the moments, but also our memories, the mental faculty.

The scientific explanation

Neuroscientist James L. McGaugh noted in his 2013 paper “Making Lasting Memories: Remembering the Significant” that emotional arousal during an experience triggers the release of stress hormones that activate your amygdala (the part of your brain involved in emotions, survival instincts, and memory), making it more likely for those experiences to be encoded as long-term memories.

Building on that understanding, University of California researchers Julia Soares and Benjamin Storm released research last March suggesting that people disengage from the moment when taking photos on camera phones. Their groundbreaking paper, “Forget in a Flash: A Further Investigation of the Photo-Taking Impairment Effect,” compared participants’ memory in three scenarios:

  • after pure observation
  • after documentation with the Camera app
  • after documentation in a condition where their photos wouldn’t be saved, as on Snapchat.

Now, previous research had already established that recording moments with devices such as a camera hinders people’s memory of them—a phenomenon dubbed the “photo-taking impairment effect.”

Where researchers had previously associated this effect with “cognitive offloading,” the process by which people store memories onto an external memory source instead of personally retaining them, Soares and her team went one step further by testing if this offloading was the true cause.

It actually wasn’t. Soares’ new hypothesis, dubbed “attentional disengagement,” suggests using a camera or camera phone takes people out of the present moment and impairs memory formation even after they put the device down.

What’s more, participants using Snapchat were found to have even greater memory impairment than in photo-taking alone, possibly because of all the other distractions like filters and other effects.

The bro science explanation

Even if you don’t care much for the science, there’s a very real threat to our memories that should be concerning if not alarming. By now, we’re no strangers to the idea that we have never been more distracted from our lives at any time in history. If we were to just take something as seemingly innocent as the selfie:

  • you’re finding the right angle for yourself and seeing what you want in the background. I guess you could call that ‘composition.’
  • you’re concerned with nailing your shout-out, gesture or pose; performing the best your life could be.
  • you’re then occupied with editing, polishing and publishing that moment to be experienced a certain way by someone else.

Obviously, it’s not just that we’re missing out on great moments because we’re pulled in another direction (the agenda of curating the best version of ourselves)—it’s that this distraction is combined with our tendency not to consciously smell, touch and feel things.

It’s like taking uncompressed RAW camera files or WAV audio and converting them into lossy JPEGs or MP3s that contain less information.

In short, by capturing what could otherwise be our life’s greatest moments into a few seconds of compressed video with crap audio that is then obscured by filters and other add-ons, we’re converting those precious seconds full of densely-coded images, sounds, emotions, scents, tactile sensation and connections into something comparatively devoid of meaning. Something utterly forgettable.

February 19, 2019

Is the world running out of people? This duo believes it might be

No, the title is not a typo: we might get there sooner than you think. That’s if you ask Canadian journalist John Ibbitson and political scientist Darrell Bricker, who believe world population will decline in the next few decades. The two writers, who were promoting their new book “Empty Planet” in an interview, conclude that within 30 years we’ll begin to see a long-term demographic shift. Both men believe that once the population decline kicks in, it’ll never stop, challenging assumptions held by many people across the world.

What’s really going on?

The book challenges existing UN population models through new studies and number crunching. Unlike the UN, which estimates world population will reach 11 billion by 2100, the authors believe old assumptions no longer apply due to tech’s growing impact. In fact, tech affects people in many different ways, including population growth. Because tech can alleviate certain pain points, fertility rates in some developing countries don’t need to be as high. As the authors outlined:

  • Every woman living within an Indian slum had a smartphone
  • This gave them access to unlimited knowledge via the Internet
  • For context, India is one of the world’s largest countries, with a population well above 1 billion people.

As many economists will attest, economic growth is turbocharged through women’s education, as women are often the caretakers of future generations. More educated women mean smarter kids and ultimately better underlying economics for a nation and its population. Education also tends to lower fertility rates, which is where the authors’ population argument comes in.


Who gets the final say?

Although prior forecasts were fairly accurate, the past 100 years won’t be a good predictor of our future. The advent of technology and its capabilities changes the dynamics of our world, as well as its people and demographics. Perhaps the counter-analysis will be correct, but the fact remains that we are depleting our planet at alarming rates. Smaller populations don’t necessarily mean that this will stop, but harnessing technology properly can help address these problems. The hard thing for us is to bite the bullet so that future generations can thrive.

February 4, 2019

Turns out the science saying screen time is bad isn’t accurate science

A paper by Oxford scientists Amy Orben and Andrew Przybylski questions the quality of the science used to determine whether or not screen time is good or bad for us.

Their concern was that the large data sets and statistical methods employed by researchers looking into the question—for example, thousands and thousands of survey responses interacting with weeks of tracking data for each respondent—allowed for anomalies or false positives to be claimed as significant conclusions.

So what’s wrong with the science being used?

Suppose there was a study on a group of kids that concluded kids who use Instagram for more than two hours a day were three times as likely to suffer depressive episodes or suicidal ideation. The problem is that this (made-up) study doesn’t point out that the bottom quartile of users is far more likely to suffer from ADHD, or that the top five percent reported feeling they had a strong support network.

In short, the methods being questioned by the Oxford paper don’t bring up and compare all the statistically significant results that come out of those data sets. Similar to the danger of assuming “correlation equals causation,” some slight links in the data might be put forward as the most significant and presented as the main conclusion of the study, ignoring all the other links.
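
To see how easily that happens, here is a toy simulation (not the Oxford paper’s actual method) in which a fake “screen time” variable is tested against hundreds of fake, completely unrelated survey outcomes. Roughly five percent of the correlations will look statistically significant by chance alone:

```python
# Toy illustration (not the Oxford paper's actual analysis): with enough
# unrelated variables, ordinary significance testing will hand you "findings"
# by chance alone. All data here is pure noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_respondents = 5000
n_outcomes = 200          # e.g. 200 survey items about well-being

screen_time = rng.normal(size=n_respondents)             # fake predictor
outcomes = rng.normal(size=(n_respondents, n_outcomes))   # fake, unrelated outcomes

false_positives = 0
for i in range(n_outcomes):
    r, p = stats.pearsonr(screen_time, outcomes[:, i])
    if p < 0.05:
        false_positives += 1

# Expect roughly 5% of the 200 tests (about 10) to come out "significant"
# even though screen_time has no relationship to anything here.
print(f"{false_positives} of {n_outcomes} correlations significant at p < 0.05")
```

Report only the handful of “hits” and you have a scary-sounding finding built on pure noise; report every link, and the picture deflates.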

The key takeaway

The Oxford study examined a few example behaviors that have more or less of an effect on well-being. It found that there is no consistent good or bad effect, and that the slightly negative effect of technology use wasn’t as bad as, say, having a single parent or needing to wear glasses.

The point is that we often take the conclusions of studies, especially those based on flawed methods that exaggerate certain results, and use them as throwaway ammunition to win arguments and influence other people. An example could be a parent citing such a study to convince their teenager that technology is bad for them.

The reality is researchers need to point out all the significant links in the data set—whether or not they support the initial hypothesis—to show they haven’t missed any glaring details in the studies. We, in turn, need to acknowledge that science is a work in progress and that we should think critically about findings before rushing to an application.
