February 4, 2019

Turns out the science saying screen time is bad isn’t accurate science

A paper by Oxford scientists Amy Orben and Andrew Przybylski questions the quality of the science used to determine whether screen time is good or bad for us.

Their concern was that the large data sets and statistical methods employed by researchers looking into the question (for example, thousands upon thousands of survey responses combined with weeks of tracking data for each respondent) allowed anomalies or false positives to be claimed as significant conclusions.

So what’s wrong with the science being used?

Suppose there were a study on a group of kids that concluded kids who use Instagram for more than two hours a day are three times as likely to suffer depressive episodes or suicidal ideation. The problem is that this (made-up) study doesn't point out that the bottom quartile of users is far more likely to suffer from ADHD, or that the top five percent reported feeling they had a strong support network.

In short, the methods being questioned by the Oxford paper don't surface and compare all the statistically significant results that come out of those studies. Similar to the danger of mistaking correlation for causation, a few slight links in the data set can be put forth as the most significant finding and become the main conclusion of the study, while all the other links are ignored.
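To see why this matters, here is a minimal simulation, not taken from the paper, that shows how testing many outcomes against a large survey can turn up "significant" links by chance alone. The sample size, the number of outcomes, and the variable names are all made up for illustration.

```python
# Hypothetical demonstration: many tests on pure noise still yield "significant" results.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)

n_respondents = 5000      # made-up survey size
n_outcomes = 40           # made-up number of well-being measures tested

# Screen time and every outcome are random noise: no real relationship exists.
screen_time = rng.normal(size=n_respondents)
outcomes = rng.normal(size=(n_outcomes, n_respondents))

significant = []
for i, outcome in enumerate(outcomes):
    r, p = pearsonr(screen_time, outcome)
    if p < 0.05:          # the conventional significance threshold
        significant.append((i, round(r, 3), round(p, 4)))

# With 40 independent tests at the 5% level, roughly two spurious "links"
# are expected even though the data contain no real effect.
print(f"{len(significant)} of {n_outcomes} outcomes look 'significant':")
for idx, r, p in significant:
    print(f"  outcome {idx}: r={r}, p={p}")
```

Reporting only the handful of outcomes that happen to clear the bar, and staying quiet about the dozens that don't, is exactly the kind of selective conclusion the Oxford paper is worried about.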

The key takeaway

The Oxford study examined a few example behaviors with larger or smaller effects on well-being. It found that there is no consistent good or bad effect and that the slightly negative effect of technology use wasn't as bad as, say, having a single parent or needing to wear glasses.

The point is that we often take the conclusions of studies, especially those based on flawed methods that exaggerate certain results, and use them as throwaway ammunition to win arguments and influence other people. An example could be a parent citing such a study to convince their teenager that technology is harming them.

The reality is that researchers need to point out all the significant links in the data set, whether or not they support the initial hypothesis, to show they haven't missed any glaring details. We, in turn, need to acknowledge that science is a work in progress and think critically about findings before rushing to act on them.
