
The Terrible Costs of a Phone-Based Childhood


Photographs by Maggie Shannon

Something went suddenly and horribly wrong for adolescents in the early 2010s. By now you’ve likely seen the statistics: Rates of depression and anxiety in the United States—fairly stable in the 2000s—rose by more than 50 percent in many studies from 2010 to 2019. The suicide rate rose 48 percent for adolescents ages 10 to 19. For girls ages 10 to 14, it rose 131 percent.

The problem was not limited to the U.S.: Similar patterns emerged around the same time in Canada, the U.K., Australia, New Zealand, the Nordic countries, and beyond. By a variety of measures and in a variety of countries, the members of Generation Z (born in and after 1996) are suffering from anxiety, depression, self-harm, and related disorders at levels higher than any other generation for which we have data.

The decline in mental health is just one of many signs that something went awry. Loneliness and friendlessness among American teens began to surge around 2012. Academic achievement went down, too. According to “The Nation’s Report Card,” scores in reading and math began to decline for U.S. students after 2012, reversing decades of slow but generally steady increase. PISA, the major international measure of educational trends, shows that declines in math, reading, and science happened globally, also beginning in the early 2010s.

[Read: It sure looks like phones are making students dumber]

As the oldest members of Gen Z reach their late 20s, their troubles are carrying over into adulthood. Young adults are dating less, having less sex, and showing less interest in ever having children than prior generations. They are more likely to live with their parents. They were less likely to get jobs as teens, and managers say they are harder to work with. Many of these trends began with earlier generations, but most of them accelerated with Gen Z.

Surveys show that members of Gen Z are shyer and more risk averse than previous generations, too, and risk aversion may make them less ambitious. In an interview last May, OpenAI co-founder Sam Altman and Stripe co-founder Patrick Collison noted that, for the first time since the 1970s, none of Silicon Valley’s preeminent entrepreneurs are under 30. “Something has really gone wrong,” Altman said. In a famously young industry, he was baffled by the sudden absence of great founders in their 20s.

Generations are not monolithic, of course. Many young people are flourishing. Taken as a whole, however, Gen Z is in poor mental health and is lagging behind previous generations on many important metrics. And if a generation is doing poorly––if it is more anxious and depressed and is starting families, careers, and important companies at a substantially lower rate than previous generations––then the sociological and economic consequences will be profound for the entire society.

[Graph: Number of emergency-department visits for nonfatal self-harm per 100,000 children. Source: Centers for Disease Control and Prevention]

What happened in the early 2010s that altered adolescent development and worsened mental health? Theories abound, but the fact that similar trends are found in many countries worldwide means that events and trends that are specific to the United States cannot be the main story.

I think the answer can be stated simply, although the underlying psychology is complex: Those were the years when adolescents in rich countries traded in their flip phones for smartphones and moved much more of their social lives online—particularly onto social-media platforms designed for virality and addiction. Once young people began carrying the entire internet in their pockets, available to them day and night, it altered their daily experiences and developmental pathways across the board. Friendship, dating, sexuality, exercise, sleep, academics, politics, family dynamics, identity—all were affected. Life changed rapidly for younger children, too, as they began to get access to their parents’ smartphones and, later, got their own iPads, laptops, and even smartphones during elementary school.

[Jonathan Haidt: Get phones out of schools now]

As a social psychologist who has long studied social and moral development, I have been involved in debates about the effects of digital technology for years. Typically, the scientific questions have been framed somewhat narrowly, to make them easier to address with data. For example, do adolescents who consume more social media have higher levels of depression? Does using a smartphone just before bedtime interfere with sleep? The answer to these questions is usually found to be yes, although the size of the relationship is often statistically small, which has led some researchers to conclude that these new technologies are not responsible for the gigantic increases in mental illness that began in the early 2010s.

But before we can evaluate the evidence on any one potential avenue of harm, we need to step back and ask a broader question: What is childhood––including adolescence––and how did it change when smartphones moved to the center of it? If we take a more holistic view of what childhood is and what young children, tweens, and teens need to do to mature into competent adults, the picture becomes much clearer. Smartphone-based life, it turns out, alters or interferes with a great number of developmental processes.

The intrusion of smartphones and social media is not the only change that has deformed childhood. There’s an important backstory, beginning as long ago as the 1980s, when we started systematically depriving children and adolescents of freedom, unsupervised play, responsibility, and opportunities for risk taking, all of which promote competence, maturity, and mental health. But the change in childhood accelerated in the early 2010s, when an already independence-deprived generation was lured into a new virtual universe that seemed safe to parents but in fact is more dangerous, in many respects, than the physical world.

My claim is that the new phone-based childhood that took shape roughly 12 years ago is making young people sick and blocking their progress to flourishing in adulthood. We need a dramatic cultural correction, and we need it now.

1. The Decline of Play and Independence

Human brains are extraordinarily large compared with those of other primates, and human childhoods are extraordinarily long, too, to give those large brains time to wire up within a particular culture. A child’s brain is already 90 percent of its adult size by about age 6. The next 10 or 15 years are about learning norms and mastering skills—physical, analytical, creative, and social. As children and adolescents seek out experiences and practice a wide variety of behaviors, the synapses and neurons that are used frequently are retained while those that are used less often disappear. Neurons that fire together wire together, as brain researchers say.

Brain development is sometimes said to be “experience-expectant,” because specific parts of the brain show increased plasticity during periods of life when an animal’s brain can “expect” to have certain kinds of experiences. You can see this with baby geese, who will imprint on whatever mother-sized object moves in their vicinity just after they hatch. You can see it with human children, who are able to learn languages quickly and take on the local accent, but only through early puberty; after that, it’s hard to learn a language and sound like a native speaker. There is also some evidence of a sensitive period for cultural learning more generally. Japanese children who spent a few years in California in the 1970s came to feel “American” in their identity and ways of interacting only if they attended American schools for a few years between ages 9 and 15. If they left before age 9, there was no lasting impact. If they didn’t arrive until they were 15, it was too late; they didn’t come to feel American.

Human childhood is an extended cultural apprenticeship with different tasks at different ages all the way through puberty. Once we see it this way, we can identify factors that promote or impede the right kinds of learning at each age. For children of all ages, one of the most powerful drivers of learning is the strong motivation to play. Play is the work of childhood, and all young mammals have the same job: to wire up their brains by playing vigorously and often, practicing the moves and skills they’ll need as adults. Kittens will play-pounce on anything that looks like a mouse tail. Human children will play games such as Tag and Sharks and Minnows, which let them practice both their predator skills and their escaping-from-predator skills. Adolescents will play sports with greater intensity, and will incorporate playfulness into their social interactions—flirting, teasing, and developing inside jokes that bond friends together. Hundreds of studies on young rats, monkeys, and humans show that young mammals want to play, need to play, and end up socially, cognitively, and emotionally impaired when they are deprived of play.

One crucial aspect of play is physical risk taking. Children and adolescents must take risks and fail—often—in environments in which failure is not very costly. This is how they extend their abilities, overcome their fears, learn to estimate risk, and learn to cooperate in order to take on larger challenges later. The ever-present possibility of getting hurt while running around, exploring, play-fighting, or getting into a real conflict with another group adds an element of thrill, and thrilling play appears to be the most effective kind for overcoming childhood anxieties and building social, emotional, and physical competence. The desire for risk and thrill increases in the teen years, when failure might carry more serious consequences. Children of all ages need to choose the risk they are ready for at a given moment. Young people who are deprived of opportunities for risk taking and independent exploration will, on average, develop into more anxious and risk-averse adults.

[From the April 2014 issue: The overprotected kid]

Human childhood and adolescence evolved outdoors, in a physical world full of dangers and opportunities. Its central activities––play, exploration, and intense socializing––were largely unsupervised by adults, allowing children to make their own choices, resolve their own conflicts, and take care of one another. Shared adventures and shared adversity bound young people together into strong friendship clusters within which they mastered the social dynamics of small groups, which prepared them to master bigger challenges and larger groups later on.

And then we changed childhood.

The changes started slowly in the late 1970s and ’80s, before the arrival of the internet, as many parents in the U.S. grew fearful that their children would be harmed or abducted if left unsupervised. Such crimes have always been extremely rare, but they loomed larger in parents’ minds thanks in part to rising levels of street crime combined with the arrival of cable TV, which enabled round-the-clock coverage of missing-children cases. A general decline in social capital––the degree to which people knew and trusted their neighbors and institutions––exacerbated parental fears. Meanwhile, rising competition for college admissions encouraged more intensive forms of parenting. In the 1990s, American parents began pulling their children indoors or insisting that afternoons be spent in adult-run enrichment activities. Free play, independent exploration, and teen-hangout time declined.

In recent decades, seeing unchaperoned children outdoors has become so novel that when one is spotted in the wild, some adults feel it is their duty to call the police. In 2015, the Pew Research Center found that parents, on average, believed that children should be at least 10 years old to play unsupervised in front of their house, and that kids should be 14 before being allowed to go unsupervised to a public park. Most of these same parents had themselves enjoyed unsupervised outdoor play by the age of 7 or 8.

But overprotection is only part of the story. The transition away from a more independent childhood was facilitated by steady improvements in digital technology, which made it easier and more inviting for young people to spend a lot more time at home, indoors, and alone in their rooms. Eventually, tech companies got access to children 24/7. They developed exciting virtual activities, engineered for “engagement,” that are nothing like the real-world experiences young brains evolved to expect.


2. The Virtual World Arrives in Two Waves

The internet, which now dominates the lives of young people, arrived in two waves of linked technologies. The first one did little harm to Millennials. The second one swallowed Gen Z whole.

The first wave came ashore in the 1990s with the arrival of dial-up internet access, which made personal computers good for something beyond word processing and basic games. By 2003, 55 percent of American households had a computer with (slow) internet access. Rates of adolescent depression, loneliness, and other measures of poor mental health did not rise in this first wave. If anything, they went down a bit. Millennial teens (born 1981 through 1995), who were the first to go through puberty with access to the internet, were psychologically healthier and happier, on average, than their older siblings or parents in Generation X (born 1965 through 1980).

The second wave began to rise in the 2000s, though its full force didn’t hit until the early 2010s. It began rather innocently with the introduction of social-media platforms that helped people connect with their friends. Posting and sharing content became much easier with sites such as Friendster (launched in 2003), Myspace (2003), and Facebook (2004).

Teens embraced social media soon after it came out, but the time they could spend on these sites was limited in those early years because the sites could only be accessed from a computer, often the family computer in the living room. Young people couldn’t access social media (and the rest of the internet) from the school bus, during class time, or while hanging out with friends outdoors. Many teens in the early-to-mid-2000s had cellphones, but these were basic phones (many of them flip phones) that had no internet access. Typing on them was difficult––they had only number keys. Basic phones were tools that helped Millennials meet up with one another in person or talk with each other one-on-one. I have seen no evidence to suggest that basic cellphones harmed the mental health of Millennials.

It was not until the introduction of the iPhone (2007), the App Store (2008), and high-speed internet (which reached 50 percent of American homes in 2007)—and the corresponding pivot to mobile made by many providers of social media, video games, and porn—that it became possible for adolescents to spend nearly every waking moment online. The extraordinary synergy among these innovations was what powered the second technological wave. In 2011, only 23 percent of teens had a smartphone. By 2015, that number had risen to 73 percent, and a quarter of teens said they were online “almost constantly.” Their younger siblings in elementary school didn’t usually have their own smartphones, but after its release in 2010, the iPad quickly became a staple of young children’s daily lives. It was in this brief period, from 2010 to 2015, that childhood in America (and many other countries) was rewired into a form that was more sedentary, solitary, virtual, and incompatible with healthy human development.

3. Techno-optimism and the Birth of the Phone-Based Childhood

The phone-based childhood created by that second wave—including not just smartphones themselves, but all manner of internet-connected devices, such as tablets, laptops, video-game consoles, and smartwatches—arrived near the end of a period of enormous optimism about digital technology. The internet came into our lives in the mid-1990s, soon after the fall of the Soviet Union. By the end of that decade, it was widely thought that the web would be an ally of democracy and a slayer of tyrants. When people are connected to each other, and to all the information in the world, how could any dictator keep them down?

In the 2000s, Silicon Valley and its world-changing inventions were a source of pride and excitement in America. Smart and ambitious young people around the world wanted to move to the West Coast to be part of the digital revolution. Tech-company founders such as Steve Jobs and Sergey Brin were lauded as gods, or at least as modern Prometheans, bringing humans godlike powers. The Arab Spring bloomed in 2011 with the help of decentralized social platforms, including Twitter and Facebook. When pundits and entrepreneurs talked about the power of social media to transform society, it didn’t sound like a dark prophecy.

You have to put yourself back in this heady time to understand why adults acquiesced so readily to the rapid transformation of childhood. Many parents had concerns, even then, about what their children were doing online, especially because of the internet’s ability to put children in contact with strangers. But there was also a lot of excitement about the upsides of this new digital world. If computers and the internet were the vanguards of progress, and if young people––widely referred to as “digital natives”––were going to live their lives entwined with these technologies, then why not give them a head start? I remember how exciting it was to see my 2-year-old son master the touch-and-swipe interface of my first iPhone in 2008. I thought I could see his neurons being woven together faster as a result of the stimulation it brought to his brain, compared to the passivity of watching television or the slowness of building a block tower. I thought I could see his future job prospects improving.

Touchscreen devices were also a godsend for harried parents. Many of us discovered that we could have peace at a restaurant, on a long car trip, or at home while making dinner or replying to emails if we just gave our children what they most wanted: our smartphones and tablets. We saw that everyone else was doing it and figured it must be okay.

It was the same for older children, desperate to join their friends on social-media platforms, where the minimum age to open an account was set by law to 13, even though no research had been done to establish the safety of these products for minors. Because the platforms did nothing (and still do nothing) to verify the stated age of new-account applicants, any 10-year-old could open multiple accounts without parental permission or knowledge, and many did. Facebook and later Instagram became places where many sixth and seventh graders were hanging out and socializing. If parents did find out about these accounts, it was too late. Nobody wanted their child to be isolated and alone, so parents rarely forced their children to shut down their accounts.

We had no idea what we were doing.

4. The High Cost of a Phone-Based Childhood

In Walden, his 1854 reflection on simple living, Henry David Thoreau wrote, “The cost of a thing is the amount of … life which is required to be exchanged for it, immediately or in the long run.” It’s an elegant formulation of what economists would later call the opportunity cost of any choice—all of the things you can no longer do with your money and time once you’ve committed them to something else. So it’s important that we grasp just how much of a young person’s day is now taken up by their devices.

The numbers are hard to believe. The most recent Gallup data show that American teens spend about five hours a day just on social-media platforms (including watching videos on TikTok and YouTube). Add in all the other phone- and screen-based activities, and the number rises to somewhere between seven and nine hours a day, on average. The numbers are even higher in single-parent and low-income families, and among Black, Hispanic, and Native American families.

These very high numbers do not include time spent in front of screens for school or homework, nor do they include all the time adolescents spend paying only partial attention to events in the real world while thinking about what they’re missing on social media or waiting for their phones to ping. Pew reports that in 2022, one-third of teens said they were on one of the major social-media sites “almost constantly,” and nearly half said the same of the internet in general. For these heavy users, nearly every waking hour is an hour absorbed, in full or in part, by their devices.
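For readers who want the proportion spelled out, here is a minimal back-of-the-envelope sketch in Python. The screen-time hours are the averages cited above; the 16-hour waking day is my assumption, not a figure from the surveys.

```python
# A rough sketch of how much of a teen's waking life the cited averages
# consume. The 16-hour waking day is an assumption, not survey data.
WAKING_HOURS = 16

social_media_hours = 5       # Gallup: social platforms alone
total_screen_hours = (7, 9)  # all phone- and screen-based activities

for hours in total_screen_hours:
    share = hours / WAKING_HOURS
    print(f"{hours} hours/day is {share:.0%} of a {WAKING_HOURS}-hour waking day")

# Output:
# 7 hours/day is 44% of a 16-hour waking day
# 9 hours/day is 56% of a 16-hour waking day
```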


In Thoreau’s terms, how much of life is exchanged for all this screen time? Arguably, most of it. Everything else in an adolescent’s day must get squeezed down or eliminated entirely to make room for the vast amount of content that is consumed, and for the hundreds of “friends,” “followers,” and other network connections that must be serviced with texts, posts, comments, likes, snaps, and direct messages. I recently surveyed my students at NYU, and most of them reported that the very first thing they do when they open their eyes in the morning is check their texts, direct messages, and social-media feeds. It’s also the last thing they do before they close their eyes at night. And it’s a lot of what they do in between.

The amount of time that adolescents spend sleeping declined in the early 2010s, and many studies tie sleep loss directly to the use of devices around bedtime, particularly when they’re used to scroll through social media. Exercise declined, too, which is unfortunate because exercise, like sleep, improves both mental and physical health. Book reading has been declining for decades, pushed aside by digital alternatives, but the decline, like so much else, sped up in the early 2010s. With passive entertainment always available, adolescent minds likely wander less than they used to; contemplation and imagination might be placed on the list of things winnowed down or crowded out.

But perhaps the most devastating cost of the new phone-based childhood was the collapse of time spent interacting with other people face-to-face. A study of how Americans spend their time found that, before 2010, young people (ages 15 to 24) reported spending far more time with their friends (about two hours a day, on average, not counting time together at school) than did older people (who spent just 30 to 60 minutes with friends). Time with friends began decreasing for young people in the 2000s, but the drop accelerated in the 2010s, while it barely changed for older people. By 2019, young people’s time with friends had dropped to just 67 minutes a day. It turns out that Gen Z had been socially distancing for many years and had mostly completed the project by the time COVID-19 struck.

[Read: What happens when kids don’t see their peers for months]

You might question the importance of this decline. After all, isn’t much of this online time spent interacting with friends through texting, social media, and multiplayer video games? Isn’t that just as good?

Some of it surely is, and virtual interactions offer unique benefits too, especially for young people who are geographically or socially isolated. But in general, the virtual world lacks many of the features that make human interactions in the real world nutritious, as we might say, for physical, social, and emotional development. In particular, real-world relationships and social interactions are characterized by four features—typical for hundreds of thousands of years—that online interactions either distort or erase.

First, real-world interactions are embodied, meaning that we use our hands and facial expressions to communicate, and we learn to respond to the body language of others. Virtual interactions, in contrast, mostly rely on language alone. No matter how many emojis are offered as compensation, the elimination of communication channels for which we have eons of evolutionary programming is likely to produce adults who are less comfortable and less skilled at interacting in person.

Second, real-world interactions are synchronous; they happen at the same time. As a result, we learn subtle cues about timing and conversational turn taking. Synchronous interactions make us feel closer to the other person because that’s what getting “in sync” does. Texts, posts, and many other virtual interactions lack synchrony. There is less real laughter, more room for misinterpretation, and more stress after a comment that gets no immediate response.

Third, real-world interactions primarily involve one-to-one communication, or sometimes one-to-several. But many virtual communications are broadcast to a potentially huge audience. Online, each person can engage in dozens of asynchronous interactions in parallel, which interferes with the depth achieved in all of them. The sender’s motivations are different, too: With a large audience, one’s reputation is always on the line; an error or poor performance can damage social standing with large numbers of peers. These communications thus tend to be more performative and anxiety-inducing than one-to-one conversations.

Finally, real-world interactions usually take place within communities that have a high bar for entry and exit, so people are strongly motivated to invest in relationships and repair rifts when they happen. But in many virtual networks, people can easily block others or quit when they are displeased. Relationships within such networks are usually more disposable.

[From the September 2015 issue: The coddling of the American mind]

These unsatisfying and anxiety-producing features of life online should be recognizable to most adults. Online interactions can bring out antisocial behavior that people would never display in their offline communities. But if life online takes a toll on adults, just imagine what it does to adolescents in the early years of puberty, when their “experience expectant” brains are rewiring based on feedback from their social interactions.

Kids going through puberty online are likely to experience far more social comparison, self-consciousness, public shaming, and chronic anxiety than adolescents in previous generations, which could potentially set developing brains into a habitual state of defensiveness. The brain contains systems that are specialized for approach (when opportunities beckon) and withdrawal (when threats appear or seem likely). People can be in what we might call “discover mode” or “defend mode” at any moment, but generally not both. The two systems together form a mechanism for quickly adapting to changing conditions, like a thermostat that can activate either a heating system or a cooling system as the temperature fluctuates. Some people’s internal thermostats are generally set to discover mode, and they flip into defend mode only when clear threats arise. These people tend to see the world as full of opportunities. They are happier and less anxious. Other people’s internal thermostats are generally set to defend mode, and they flip into discover mode only when they feel unusually safe. They tend to see the world as full of threats and are more prone to anxiety and depressive disorders.

[Graph: Percentage of U.S. college freshmen reporting various kinds of disabilities and disorders. Source: Higher Education Research Institute]

A simple way to understand the differences between Gen Z and previous generations is that people born in and after 1996 have internal thermostats that were shifted toward defend mode. This is why life on college campuses changed so suddenly when Gen Z arrived, beginning around 2014. Students began requesting “safe spaces” and trigger warnings. They were highly sensitive to “microaggressions” and sometimes claimed that words were “violence.” These trends mystified those of us in older generations at the time, but in hindsight, it all makes sense. Gen Z students found words, ideas, and ambiguous social encounters more threatening than had previous generations of students because we had fundamentally altered their psychological development.

5. So Many Harms

The debate around adolescents’ use of smartphones and social media typically revolves around mental health, and understandably so. But the harms that have resulted from transforming childhood so suddenly and heedlessly go far beyond mental health. I’ve touched on some of them—social awkwardness, reduced self-confidence, and a more sedentary childhood. Here are three additional harms.

Fragmented Attention, Disrupted Learning

Staying on task while sitting at a computer is hard enough for an adult with a fully developed prefrontal cortex. It is far more difficult for adolescents in front of their laptops trying to do homework. They are probably less intrinsically motivated to stay on task. They’re certainly less able, given their undeveloped prefrontal cortex, and hence it’s easy for any company with an app to lure them away with an offer of social validation or entertainment. Their phones are pinging constantly—one study found that the typical adolescent now gets 237 notifications a day, roughly 15 every waking hour. Sustained attention is essential for doing almost anything big, creative, or valuable, yet young people find their attention chopped up into little bits by notifications offering the possibility of high-pleasure, low-effort digital experiences.
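To make that pace of interruption concrete, here is a small sketch of the arithmetic, again assuming a 16-hour waking day (my assumption; the study reports only the daily total):

```python
# How often do 237 daily notifications arrive during waking hours?
NOTIFICATIONS_PER_DAY = 237  # figure from the study cited above
WAKING_HOURS = 16            # assumption, not from the study

per_hour = NOTIFICATIONS_PER_DAY / WAKING_HOURS
minutes_between = 60 / per_hour

print(f"about {per_hour:.0f} notifications per waking hour")
print(f"about one interruption every {minutes_between:.0f} minutes")

# about 15 notifications per waking hour
# about one interruption every 4 minutes
```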

It even happens in the classroom. Studies confirm that when students have access to their phones during class time, they use them, especially for texting and checking social media, and their grades and learning suffer. This might explain why benchmark test scores began to decline in the U.S. and around the world in the early 2010s—well before the pandemic hit.

Addiction and Social Withdrawal

The neural basis of behavioral addiction to social media or video games is not exactly the same as chemical addiction to cocaine or opioids. Nonetheless, they all involve abnormally heavy and sustained activation of dopamine neurons and reward pathways. Over time, the brain adapts to these high levels of dopamine; when the child is not engaged in digital activity, their brain doesn’t have enough dopamine, and the child experiences withdrawal symptoms. These generally include anxiety, insomnia, and intense irritability. Kids with these kinds of behavioral addictions often become surly and aggressive, and withdraw from their families into their bedrooms and devices.

Social-media and gaming platforms were designed to hook users. How successful are they? How many kids suffer from digital addictions?

The main addiction risks for boys seem to be video games and porn. “Internet gaming disorder,” which was added to the main diagnosis manual of psychiatry in 2013 as a condition for further study, describes “significant impairment or distress” in several aspects of life, along with many hallmarks of addiction, including an inability to reduce usage despite attempts to do so. Estimates for the prevalence of IGD range from 7 to 15 percent among adolescent boys and young men. As for porn, a nationally representative survey of American adults published in 2019 found that 7 percent of American men agreed or strongly agreed with the statement “I am addicted to pornography”—and the rates were higher for the youngest men.

Girls have much lower rates of addiction to video games and porn, but they use social media more intensely than boys do. A study of teens in 29 nations found that between 5 and 15 percent of adolescents engage in what is called “problematic social media use,” which includes symptoms such as preoccupation, withdrawal symptoms, neglect of other areas of life, and lying to parents and friends about time spent on social media. That study did not break down results by gender, but many others have found that rates of “problematic use” are higher for girls.

[Jonathan Haidt: The dangerous experiment on teen girls]

I don’t want to overstate the risks: Most teens do not become addicted to their phones and video games. But across multiple studies and across genders, rates of problematic use come out in the ballpark of 5 to 15 percent. Is there any other consumer product that parents would let their children use relatively freely if they knew that something like one in 10 kids would end up with a pattern of habitual and compulsive use that disrupted various domains of life and looked a lot like an addiction?

The Decay of Wisdom and the Loss of Meaning

During that crucial sensitive period for cultural learning, from roughly ages 9 through 15, we should be especially thoughtful about who is socializing our children for adulthood. Instead, that’s when most kids get their first smartphone and sign themselves up (with or without parental permission) to consume rivers of content from random strangers. Much of that content is produced by other adolescents, in blocks of a few minutes or a few seconds.

This rerouting of enculturating content has created a generation that is largely cut off from older generations and, to some extent, from the accumulated wisdom of humankind, including knowledge about how to live a flourishing life. Adolescents spend less time steeped in their local or national culture. They are coming of age in a confusing, placeless, ahistorical maelstrom of 30-second stories curated by algorithms designed to mesmerize them. Without solid knowledge of the past and the filtering of good ideas from bad––a process that plays out over many generations––young people will be more prone to believe whatever terrible ideas become popular around them, which might explain why videos showing young people reacting positively to Osama bin Laden’s thoughts about America were trending on TikTok last fall.  

All this is made worse by the fact that so much of digital public life is an unending supply of micro dramas about somebody somewhere in our country of 340 million people who did something that can fuel an outrage cycle, only to be pushed aside by the next. It doesn’t add up to anything and leaves behind only a distorted sense of human nature and affairs.

When our public life becomes fragmented, ephemeral, and incomprehensible, it is a recipe for anomie, or normlessness. The great French sociologist Émile Durkheim showed long ago that a society that fails to bind its people together with some shared sense of sacredness and common respect for rules and norms is not a society of great individual freedom; it is, rather, a place where disoriented individuals have difficulty setting goals and exerting themselves to achieve them. Durkheim argued that anomie was a major driver of suicide rates in European countries. Modern scholars continue to draw on his work to understand suicide rates today.

[Graph: Percentage of U.S. high-school seniors who agreed with the statement “Life often seems meaningless.” Source: Monitoring the Future]

Durkheim’s observations are crucial for understanding what happened in the early 2010s. A long-running survey of American teens found that, from 1990 to 2010, high-school seniors became slightly less likely to agree with statements such as “Life often feels meaningless.” But as soon as they adopted a phone-based life and many began to live in the whirlpool of social media, where no stability can be found, every measure of despair increased. From 2010 to 2019, the number who agreed that their lives felt “meaningless” increased by about 70 percent, to more than one in five.
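Working backwards from those two figures gives a rough sense of the starting point. This is a sketch only; the survey’s exact percentages are not given here, and “more than one in five” is approximated as 21 percent.

```python
# Implied 2010 baseline, reconstructed from the two figures in the text.
final_share = 0.21  # "more than one in five" (approximation)
increase = 0.70     # "increased by about 70 percent"

baseline = final_share / (1 + increase)
print(f"implied 2010 baseline: about {baseline:.0%} of high-school seniors")
# implied 2010 baseline: about 12% of high-school seniors
```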

6. Young People Don’t Like Their Phone-Based Lives

How can I be confident that the epidemic of adolescent mental illness was kicked off by the arrival of the phone-based childhood? Skeptics point to other events as possible culprits, including the 2008 global financial crisis, global warming, the 2012 Sandy Hook school shooting and the subsequent active-shooter drills, rising academic pressures, and the opioid epidemic. But while these events might have been contributing factors in some countries, none can explain both the timing and international scope of the disaster.

An additional source of evidence comes from Gen Z itself. With all the talk of regulating social media, raising age limits, and getting phones out of schools, you might expect to find many members of Gen Z writing and speaking out in opposition. I’ve looked for such arguments and found hardly any. In contrast, many young adults tell stories of devastation.

Freya India, a 24-year-old British essayist who writes about girls, explains how social-media sites carry girls off to unhealthy places: “It seems like your child is simply watching some makeup tutorials, following some mental health influencers, or experimenting with their identity. But let me tell you: they are on a conveyor belt to someplace bad. Whatever insecurity or vulnerability they are struggling with, they will be pushed further and further into it.” She continues:

Gen Z were the guinea pigs in this uncontrolled global social experiment. We were the first to have our vulnerabilities and insecurities fed into a machine that magnified and refracted them back at us, all the time, before we had any sense of who we were. We didn’t just grow up with algorithms. They raised us. They rearranged our faces. Shaped our identities. Convinced us we were sick.

Rikki Schlott, a 23-year-old American journalist and co-author of The Canceling of the American Mind, writes,

The day-to-day life of a typical teen or tween today would be unrecognizable to someone who came of age before the smartphone arrived. Zoomers are spending an average of 9 hours daily in this screen-time doom loop—desperate to forget the gaping holes they’re bleeding out of, even if just for … 9 hours a day. Uncomfortable silence could be time to ponder why they’re so miserable in the first place. Drowning it out with algorithmic white noise is far easier.

A 27-year-old man who spent his adolescent years addicted (his word) to video games and pornography sent me this reflection on what that did to him:

I missed out on a lot of stuff in life—a lot of socialization. I feel the effects now: meeting new people, talking to people. I feel that my interactions are not as smooth and fluid as I want. My knowledge of the world (geography, politics, etc.) is lacking. I didn’t spend time having conversations or learning about sports. I often feel like a hollow operating system.

Or consider what Facebook found in a research project involving focus groups of young people, revealed in 2021 by the whistleblower Frances Haugen: “Teens blame Instagram for increases in the rates of anxiety and depression among teens,” an internal document said. “This reaction was unprompted and consistent across all groups.”

How can it be that an entire generation is hooked on consumer products that so few praise and so many ultimately regret using? Because smartphones and especially social media have put members of Gen Z and their parents into a series of collective-action traps. Once you understand the dynamics of these traps, the escape routes become clear.


7. Collective-Action Problems

Social-media companies such as Meta, TikTok, and Snap are often compared to tobacco companies, but that’s not really fair to the tobacco industry. It’s true that companies in both industries marketed harmful products to children and tweaked their products for maximum customer retention (that is, addiction), but there’s a big difference: Teens could and did choose, in large numbers, not to smoke. Even at the peak of teen cigarette use, in 1997, nearly two-thirds of high-school students did not smoke.

Social media, in contrast, applies a lot more pressure on nonusers, at a much younger age and in a more insidious way. Once a few students in any middle school lie about their age and open accounts at age 11 or 12, they start posting photos and comments about themselves and other students. Drama ensues. The pressure on everyone else to join becomes intense. Even a girl who knows, consciously, that Instagram can foster beauty obsession, anxiety, and eating disorders might sooner take those risks than accept the seeming certainty of being out of the loop, clueless, and excluded. And indeed, if she resists while most of her classmates do not, she might, in fact, be marginalized, which puts her at risk for anxiety and depression, though via a different pathway than the one taken by those who use social media heavily. In this way, social media accomplishes a remarkable feat: It even harms adolescents who do not use it.

[From the May 2022 issue: Jonathan Haidt on why the past 10 years of American life have been uniquely stupid]

A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of the social-media trap precisely. The researchers recruited more than 1,000 college students and asked them how much they’d need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That’s a standard economist’s question to try to compute the net value of a product to society. On average, students said they’d need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.

Social media is all about network effects. Most students are only on it because everyone else is too. Most of them would prefer that nobody be on these platforms. Later in the study, students were asked directly, “Would you prefer to live in a world without Instagram [or TikTok]?” A majority of students said yes––58 percent for each app.

This is the textbook definition of what social scientists call a collective-action problem. It’s what happens when a group would be better off if everyone in the group took a particular action, but each actor is deterred from acting, because unless the others do the same, the personal cost outweighs the benefit. Fishermen considering limiting their catch to avoid wiping out the local fish population are caught in this same kind of trap. If no one else does it too, they just lose profit.
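The logic of such a trap can be sketched with a toy payoff model. The numbers below are invented for illustration (they are not from the Bursztyn study); what matters is their ordering, which makes joining the platform each individual’s best move even though everyone would prefer a world where no one joins.

```python
# A toy payoff model of the social-media collective-action trap.
# The payoff values are illustrative assumptions, not study data.

def payoff(on_platform: bool, peers_on: bool) -> int:
    """Net well-being of one adolescent, given their choice and the norm."""
    if on_platform:
        return -2  # costs of heavy use: anxiety, lost sleep and time
    if peers_on:
        return -5  # off while peers are on: excluded from social life
    return 3       # everyone off: real-world socializing resumes

for peers_on in (True, False):
    best_choice = max((True, False), key=lambda c: payoff(c, peers_on))
    print(f"peers on platform: {peers_on} -> best individual choice: join={best_choice}")

# peers on platform: True -> best individual choice: join=True
# peers on platform: False -> best individual choice: join=False
# All-off (+3) beats all-on (-2), but no one can get there alone.
```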

Cigarettes trapped individual smokers with a biological addiction. Social media has trapped an entire generation in a collective-action problem. Early app developers deliberately and knowingly exploited the psychological weaknesses and insecurities of young people to pressure them to consume a product that, upon reflection, many wish they could use less, or not at all.

8. Four Norms to Break Four Traps

Young people and their parents are stuck in at least four collective-action traps. Each is hard to escape for an individual family, but escape becomes much easier if families, schools, and communities coordinate and act together. Here are four norms that would roll back the phone-based childhood. I believe that any community that adopts all four will see substantial improvements in youth mental health within two years.

No smartphones before high school 

The trap here is that each child thinks they need a smartphone because “everyone else” has one, and many parents give in because they don’t want their child to feel excluded. But if no one else had a smartphone—or even if, say, only half of the child’s sixth-grade class had one—parents would feel more comfortable providing a basic flip phone (or no phone at all). Delaying round-the-clock internet access until ninth grade (around age 14) as a national or community norm would help to protect adolescents during the very vulnerable first few years of puberty. According to a 2022 British study, these are the years when social-media use is most correlated with poor mental health. Family policies about tablets, laptops, and video-game consoles should be aligned with smartphone restrictions to prevent overuse of other screen activities.

No social media before 16

The trap here, as with smartphones, is that each adolescent feels a strong need to open accounts on TikTok, Instagram, Snapchat, and other platforms primarily because that’s where most of their peers are posting and gossiping. But if the majority of adolescents were not on these accounts until they were 16, families and adolescents could more easily resist the pressure to sign up. The delay would not mean that kids younger than 16 could never watch videos on TikTok or YouTube—only that they could not open accounts, give away their data, post their own content, and let algorithms get to know them and their preferences.

Phone-free schools

Most schools claim that they ban phones, but this usually just means that students aren’t supposed to take their phone out of their pocket during class. Research shows that most students do use their phones during class time. They also use them during lunchtime, free periods, and breaks between classes––times when students could and should be interacting with their classmates face-to-face. The only way to get students’ minds off their phones during the school day is to require all students to put their phones (and other devices that can send or receive texts) into a phone locker or locked pouch at the start of the day. Schools that have gone phone-free always seem to report that it has improved the culture, making students more attentive in class and more interactive with one another. Published studies back them up.

More independence, free play, and responsibility in the real world

Many parents are afraid to give their children the level of independence and responsibility they themselves enjoyed when they were young, even though rates of homicide, drunk driving, and other physical threats to children are way down in recent decades. Part of the fear comes from the fact that parents look at each other to determine what is normal and therefore safe, and they see few examples of families acting as if a 9-year-old can be trusted to walk to a store without a chaperone. But if many parents started sending their children out to play or run errands, then the norms of what is safe and accepted would change quickly. So would ideas about what constitutes “good parenting.” And if more parents trusted their children with more responsibility––for example, by asking their kids to do more to help out, or to care for others––then the pervasive sense of uselessness now found in surveys of high-school students might begin to dissipate.

It would be a mistake to overlook this fourth norm. If parents don’t replace screen time with real-world experiences involving friends and independent activity, then banning devices will feel like deprivation, not the opening up of a world of opportunities.

The phone-based childhood is so harmful mainly because it pushes aside everything else. Smartphones are experience blockers. Our ultimate goal should not be to remove screens entirely, nor should it be to return childhood to exactly the way it was in 1960. Rather, it should be to create a version of childhood and adolescence that keeps young people anchored in the real world while flourishing in the digital age.

9. What Are We Waiting For?

An essential function of government is to solve collective-action problems. Congress could solve or help solve the ones I’ve highlighted—for instance, by raising the age of “internet adulthood” to 16 and requiring tech companies to keep underage children off their sites.

In recent decades, however, Congress has not been good at addressing public concerns when the solutions would displease a powerful and deep-pocketed industry. Governors and state legislators have been much more effective, and their successes might let us evaluate how well various reforms work. But the bottom line is that to change norms, we’re going to need to do most of the work ourselves, in neighborhood groups, schools, and other communities.

[Read: Why Congress keeps failing to protect kids online]

There are now hundreds of organizations––most of them started by mothers who saw what smartphones had done to their children––that are working to roll back the phone-based childhood or promote a more independent, real-world childhood. (I have assembled a list of many of them.) One that I co-founded, at LetGrow.org, suggests a variety of simple programs for parents or schools, such as play club (schools keep the playground open at least one day a week before or after school, and kids sign up for phone-free, mixed-age, unstructured play as a regular weekly activity) and the Let Grow Experience (a series of homework assignments in which students––with their parents’ consent––choose something to do on their own that they’ve never done before, such as walk the dog, climb a tree, walk to a store, or cook dinner).

Even without the help of organizations, parents could break their families out of collective-action traps if they coordinated with the parents of their children’s friends. Together they could create common smartphone rules and organize unsupervised play sessions or encourage hangouts at a home, park, or shopping mall.


Parents are fed up with what childhood has become. Many are tired of having daily arguments about technologies that were designed to grab hold of their children’s attention and not let go. But the phone-based childhood is not inevitable.

The four norms I have proposed cost almost nothing to implement, they cause no clear harm to anyone, and while they could be supported by new legislation, they can be instilled even without it. We can begin implementing all of them right away, this year, especially in communities with good cooperation between schools and parents. A single memo from a principal asking parents to delay smartphones and social media, in support of the school’s effort to improve mental health by going phone-free, would catalyze collective action and reset the community’s norms.

We didn’t know what we were doing in the early 2010s. Now we do. It’s time to end the phone-based childhood.


This article is adapted from Jonathan Haidt’s forthcoming book, The Anxious Generation: How the Great Rewiring of Childhood Is Causing an Epidemic of Mental Illness.

Who Paid For Sarah Sanders’ Six-Figure Super Bowl Extravaganza?


Taylor Swift was not the only high-profile Kansas City Chiefs fan to enjoy incredible access to the Super Bowl in Las Vegas earlier this month. Like the pop megastar, Arkansas Gov. Sarah Huckabee Sanders (R) watched the game from a luxury suite and celebrated on the field. However, unlike Swift, who is dating one of the team’s star players and broke all kinds of records with her ongoing multibillion-dollar tour, it’s not quite clear how (or if) Sanders and her family paid for tickets to the most expensive football game of all time.

So, TPM set out to figure out just how the governor ended up with such exclusive access to such an exclusive event. Chasing Sanders’ splashy Super Bowl trip was a confounding journey. A combination of brazen spending, stonewalling from the governor’s office, and Sanders’ successful efforts to erode transparency laws left us sure of nothing except the fact that a state official somehow managed to enjoy a big night out that almost certainly cost more than her annual salary.

On one level, Sanders was not at all shy about her very lavish evening with the victorious Chiefs. She posted a series of smiling pictures on Instagram that suggested an extraordinary amount of access to the game. But when her selfies provoked real questions about the obvious ethical implications of a civil servant enjoying virtually the same amenities as a billionaire pop star, Sanders and her team shut down.

The easiest way to find out how the Sanders clan ended up on the field at the big game would be simply to ask the governor. However, for nearly two weeks, Sanders’ office has not responded to questions from TPM about how much her tickets cost, whether she bought them on her own, or whether she received them as a gift. Her staff also won’t say whether any state resources, such as a plane or security detail, were used for the trip. Sanders also did not say what official business, if any, she accomplished for the people of Arkansas by cheering on a team from another state.

These things are important because this was, quite literally, a big-ticket item. Getting into this year’s Super Bowl was almost cartoonishly expensive, and, as governor, Sanders is subject to rules that regulate far smaller gifts in order to prevent bribery and corruption. And, a little over a year after taking office, she’s already come under fire multiple times for her questionable use of taxpayer funds.

Amid that scrutiny, Sanders has tried to wrap the governor’s mansion in a veil of secrecy. Last year, Sanders called a special session of the legislature as she pushed for changes that would gut the state’s Freedom of Information Act. The moves were so radical that, even with her party enjoying large majorities in both houses, Sanders’ first two attempts to change the open-records law were rejected. The assault on transparency inspired a broad coalition of opponents to speak out, including Nate Bell, a former Arkansas legislator who was a Republican before becoming an independent as the party embraced Donald Trump in 2015.

“We had the most eclectic and diverse group of Arkansans show up to oppose that bill that I have ever witnessed in 40 years of political involvement,” Bell said, adding, “We had people who hated each other, who were not on speaking terms, who in some cases had sued each other within the very recent past, sitting in a room together aligned to defeat that bill.”

The motley crew helped thwart Sanders’ most drastic proposed changes to FOIA. However, Sanders was able to get enough support to restrict access to records related to her travel and security — the exact information that could help confirm what went on with her Super Bowl extravaganza. 

As the FOIA fight raged, Bell helped found an organization called Arkansas Citizens for Transparency. The group is currently working to put a constitutional amendment on the ballot in November that would block attempts to restrict public access to government records. Naturally, Bell has some questions of his own about the Sanders clan’s time at the big game. 

“It’s great for a family to be able to do something like that. The question is, obviously, who paid for it and does it have any influence on the decisions that are being made in state government?” Bell asked in an interview with TPM. “And I think, at this point, given the level of opacity that this governor has forced into place, it’s difficult to know.” 

One thing is clear: The Super Bowl trip was almost absurdly expensive. Beyond that, there are no easy answers. 

Sanders, who served as White House press secretary under former President Donald Trump, didn’t just take in the game. She and her family enjoyed incredible access to nearly every aspect of the NFL championship. They even got to meet the mother of Swift’s boyfriend, star Chiefs tight end Travis Kelce. 

On Feb. 12, the day after the Super Bowl, Sanders posted a set of photos showing her and her husband enjoying the Chiefs victory with their three young children. Sanders’ husband, Bryan, is a Kansas City native and the governor has previously been vocal about the family’s ardent Chiefs fandom. 

Over the course of Super Bowl weekend, they were pictured at a pre-game party posing with “Mama Kelce,” who has become something of a star in her own right, particularly since the beginning of her son’s relationship with Swift. During the game, the family took selfies next to the field and watched the action from a luxury suite. They were on the field for the halftime show featuring R&B crooner Usher. Sanders and her family also made it to Chiefs events where they posed with Kelce’s brother, Jason, a Philadelphia Eagles star weighing retirement. 

Though Sanders and her team might be dodging questions about her Super Bowl spending spree, a number of factors helped TPM calculate how much it might have cost for Sanders, or for anyone who might have given her the tickets. First, luckily for us, she couldn’t resist indulging in the time-honored millennial pastime of bragging on social media. Her Instagram activity provided plenty of clues about the family’s access, including the lanyards they wore with their halftime-show tickets and field passes.

Furthermore, the game, in a way, happened on my home turf, since it was played in the Las Vegas Raiders’ stadium. Your humble TPM correspondent, much like Hunter S. Thompson, Ice Cube, and the legendary “Violator,” is a proud member of Raider Nation. That means I come to the table with a healthy distaste for the Chiefs, and I know exactly who to call about tickets in Sin City.

Ken Solky is president of LasVegasTickets.com, which bills itself as “the number one agency for the most sought-after tickets in Las Vegas and around the world.” Solky has been buying and selling tickets in the city for about thirty years and was described by Forbes as “one of the most influential and called-upon resources in the sports and entertainment world for tickets to major events.” We showed him the pictures of Sanders’ big night, and he provided an informed estimate of how much cash was involved.

“It definitely looks like a suite … not just a suite but a pretty damn good suite,” Solky said when he saw the Instagram video of Sanders and her family watching the game. 

Allegiant Stadium, or, as it’s known to us in Raider Nation, “The Death Star,” has a reputation for having some of the most expensive tickets in the NFL. The high prices come thanks to Vegas’ status as a tourist destination and, obviously, the sick Raider vibes. And they were even higher for the Super Bowl. 

According to Solky, suites for the Super Bowl went for a minimum of $750,000. Some, with better positioning along the field, sold for upwards of $1 million. Each suite contains seats for 20 fans, meaning that, at a minimum, the face value of each of Sanders’ suite tickets was $37,500. And, of course, Sanders’ family needed at least five tickets: for her, her husband, and their three children. That puts Sanders’ minimum tab for the tickets to the game at $187,500 — and those suite seats were just one part of the family’s lavish Super Bowl experience.

But even those six-figure suite tickets don’t come with the kind of amenities Sanders and her family enjoyed. Most people who aren’t Taylor Swift are not able to get near the grass or party with the Kelces. Solky said the pictures showed off a “mixture of access” and that each pass Sanders and her clan displayed had a price. According to him, that combined figure was likely $3,000 to $10,000 per person.

Based on that estimate, the Sanders family’s Super Bowl trip would conservatively be worth a cool $202,500. 
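For readers who want to check the math, here is a minimal sketch of that back-of-the-envelope estimate. The dollar figures are Solky’s, as reported above; the variable names and the script itself are ours:

```python
# Back-of-the-envelope estimate of the Sanders family's Super Bowl tab,
# built from the figures ticket broker Ken Solky gave TPM.

SUITE_PRICE_MIN = 750_000   # minimum price of a Super Bowl suite, per Solky
SEATS_PER_SUITE = 20        # seats in each suite
FAMILY_SIZE = 5             # Sanders, her husband, and their three children
PASS_PRICE_LOW = 3_000      # low end of Solky's per-person estimate for passes
PASS_PRICE_HIGH = 10_000    # high end of that estimate

per_seat = SUITE_PRICE_MIN // SEATS_PER_SUITE              # $37,500 per seat
seats_total = per_seat * FAMILY_SIZE                       # $187,500 for five seats
low_total = seats_total + PASS_PRICE_LOW * FAMILY_SIZE     # conservative: $202,500
high_total = seats_total + PASS_PRICE_HIGH * FAMILY_SIZE   # upper end: $237,500

print(f"Per seat: ${per_seat:,}")
print(f"Five seats: ${seats_total:,}")
print(f"Conservative total: ${low_total:,}")
print(f"Upper-end total: ${high_total:,}")
```

Run as written, it reproduces the conservative $202,500 figure, and it shows the tab could plausibly reach $237,500 at the top of Solky’s range for the passes alone.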

Of course, most Super Bowl tickets and passes were sold at a steep premium on the secondary market — and there are indications that Sanders and her family had some that were not available on the open market. 

How do we know? Earlier this month, CBS reported that, for the Super Bowl, “face-value tickets, which are expensive to begin with, are rarely made available to the general public.” Instead, the majority of seats at the game went to NFL players, coaches, league insiders, and their corporate partners. Many of those tickets were then sold via ticket brokers with steep markups. Other tickets — including access to the pre-game parties, halftime show, and field — were sold to the public through a company founded by the NFL called “On Location.” However, based on photos, the field passes displayed by Sanders and her family were not those made available through that service. Solky said the Sanders clan’s passes, which bore the Chiefs logo, were those given to the team. That means Sanders and her family likely obtained their passes through the Chiefs, one of the team’s partners, or via a broker, which would add substantial cost. 

“It’s not free and it definitely has value,” Solky explained. 

The Chiefs communications team did not respond to a request for comment about how much the tickets cost or whether Sanders and her family were guests of the organization. 

There are several glaringly obvious reasons why Sanders’ expenses as a public official matter. First, if her bill for the game was footed by someone else, it would be a major gift that could put the giver in a position to influence the governor or curry favor with her. For this reason, Sanders and other public officials in her state are subject to rules laid out by the Arkansas Ethics Commission that prohibit them from receiving gifts in excess of $100. Anything above this amount would need to be reimbursed — that includes the tickets for other members of Sanders’ immediate family. 

With Sanders seemingly holding passes that were available only through the Chiefs or via brokers, she either got them at a steep markup or from an insider. Even if the tickets came from someone who got them for free rather than paying a broker’s premium, they would still count as an extremely expensive gift for regulatory purposes.

The Arkansas ethics rules specifically include children and spouses, and they note that “tickets to sporting events and shows” are considered gifts that “are valued at their face price.” There is an exception for suites that are leased. Super Bowl suites, which were sold separately from regular-season tickets, are unlikely to fall under this category. But even if they did, the rules dictate that “the value of a ticket obtained pursuant to a lease shall be the price of the highest individually priced ticket for the event.” In the case of the Super Bowl, that would be $9,500, which would mean, under the most generous possible calculation, that Sanders’ seats at the game cost, for the purpose of state ethics rules, about $47,500 — not necessarily including the passes.
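As a sanity check, here is a minimal sketch of that ethics-rules floor, assuming (generously) that the leased-suite provision applied. The constants come from the figures reported in this story; the script is ours:

```python
# Floor valuation of the family's seats under the Arkansas ethics rules,
# assuming (generously) that the leased-suite provision applied.

HIGHEST_FACE_PRICE = 9_500  # highest individually priced Super Bowl ticket
FAMILY_SIZE = 5             # the rules count spouses and children, too
GIFT_LIMIT = 100            # Arkansas cap on gifts to public officials, per the story

ethics_value = HIGHEST_FACE_PRICE * FAMILY_SIZE  # $47,500

print(f"Ethics-rules value of the seats: ${ethics_value:,}")
print(f"Multiple of the ${GIFT_LIMIT} gift limit: {ethics_value // GIFT_LIMIT}x")
```

Even by that most generous accounting, the seats alone would be worth 475 times the state’s gift limit.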

Sanders theoretically could have paid for the big game herself, but she doesn’t appear to be super rich. 

Her salary as governor is approximately $160,000. She made a little more than that as White House press secretary, a position she held until 2019. White House financial disclosures she filed show that, before joining Trump’s team in 2017, Sanders made about $550,000 as a political consultant, including an approximately $205,000 salary from the Arkansas firm Second Street Strategies, where she was a partner. While her White House disclosure displayed a multimillion-dollar IRA account, she described that to me at the time as an error, and the same document listed only minimal interest coming from the account. Sanders’ financial disclosures in Arkansas provide far less detail, but they show that she maintains investments in Second Street Strategies while her husband, Bryan, earns income from a local communications firm and a real estate company.

It’s all a nice chunk of change, but none of it screams six-figure Super Bowl suites. And even if Sanders paid her own way, there are still ethical questions about her enjoyment of such an extravagant evening.

If she received tickets that were not available to the general public, they would need to be disclosed as a gift even if she paid whoever gave them to her. And if Sanders used state resources to travel somewhere like Vegas, taxpayers might want to know how much of the bill they footed. At every turn, despite flaunting her extraordinary football spending, Sanders has been equally brazen about dodging scrutiny of her expenses. 

Another thing TPM noticed in the course of our investigation: The Super Bowl isn’t the first time Sanders and her family have enjoyed a VIP experience at a Chiefs game. A past game they attended is even more interesting to unpack, since it theoretically should have made it onto disclosure forms filed earlier this month — and yet is nowhere to be found.

There is one exemption to the prohibitions on gifts for public officials in Arkansas. Regulations note “a public servant may accept a gift conferred on account of a bona fide personal, professional, or business relationship independent of his or her official status.” In those cases, regulators would evaluate “such factors as when the relationship began (i.e., before or after the public servant obtained his or her office or position), the prior history of gift giving between the individuals, whether the gift was given in connection with a holiday or other special occasion, and whether the same gift was given to other public servants.”

Graham Sloan is the director of the Arkansas Ethics Commission, which is responsible for evaluating complaints about potential violations of the rules on gifts. He explained that gifts from close friends and relations are permitted “unless they’re acting as an intermediary for a third party.” 

“If it was just your college roommate who had, you know, struck oil and you all had given gifts over the years, so now it can be conferred,” Sloan said. 

Of course, in Sanders’ case, it might be uniquely difficult to determine whether her relationships — even prior to taking office — are entirely independent of the political realm. Sanders has spent virtually her entire life ensconced in Republican politics. As the daughter of former Arkansas Gov. Mike Huckabee, she grew up inside the governor’s mansion. Before she was elected to follow in his footsteps in 2022, Sanders had an extensive career in politics: she worked as a strategist for her father and other politicians, and took stints as a staffer in the administration of President George W. Bush and behind the White House podium for Trump.

However, even in cases where Sanders received a gift from someone with whom she had a bona fide independent and personal relationship, Sloan said, she would be required to publicly disclose it.

“Gifts are reported on your statement of financial interest,” he explained.

That annual disclosure document is already highly anticipated among some of Sanders’ critics in light of the Super Bowl trip. After the governor posted the photos of her family at the game, Bell shared them on the site once called Twitter.

“This will make for an interesting SFI next January,” he wrote. 

However, in the past, Sanders’ family football fun has not made it onto her annual financial disclosures. 

On Nov. 26, 2023, Sanders posted another series of pictures with her family at a Chiefs game in Kansas City. The pictures were from earlier in the season since, that day, the Chiefs played the Raiders in Vegas. (While the Raiders were not victorious in that contest, I am pleased to report they beat the Chiefs roughly one month later as head coach Antonio Pierce made progress on cleaning up the mess that was the Josh McDaniels era. But I digress.)

Sanders and her family seemed to have extraordinary access for this earlier game as well. The kids even managed to have a moment with Swift’s boyfriend, Travis Kelce.  

The photos the governor shared on social media showed her family posing on the field and in a luxury suite. One picture shows Sanders’ children giving Kelce a high five as he stepped out in his gear. In another photo, Sanders posed with Tavia Hunt, the wife of Chiefs owner Clark Hunt, who is a frequent donor to Republican politicians. Based on the outfit Tavia Hunt was wearing and a post she made on her own account, the earlier game Sanders and her family attended took place on Nov. 20, when the Chiefs played the Eagles.

The distinctive windows and arched entranceways in Sanders’ pictures appear to match Hunt’s “opulent” two-level owner’s box rather than any of the standard suites at Kansas City’s Arrowhead Stadium. Based on the photos Tavia Hunt posted on her own Instagram from the game, she and her husband were both in the suite with Sanders. The governor commented on the pictures of the Hunts in the suite with three heart emojis. The Hunts did not respond to requests for comment about whether Sanders and her family were their guests at that game or the Super Bowl.

If Sanders took in a game from a private suite where tickets were not sold on the open market, then, according to ethics regulations, her seats were a gift that should have been valued at the game’s highest ticket price. Yet Sanders’ 2023 statement of financial interest does not include any mention of her being gifted tickets to the Chiefs game where she sat in the owner’s box.

Sloan, the director of the Arkansas Ethics Commission, said a public official would be required to disclose tickets they received from a sports team owner — even if the owner did not pay for the individual ticket — as a gift on their annual forms. 

“If you’re in a luxury suite, it would be the highest face price ticket to the event,” Sloan said of the scenario.

In other words, unless Sanders somehow managed to secure seats inside the Chiefs’ private owner’s suite on the open market, the tickets were a gift that had to be disclosed. While there is no mention of any gifted Chiefs tickets on Sanders’ statement of financial interest for last year, which was filed earlier this month, the document does include far smaller items, like a $150 turkey call and a $109.99 hunting vest received by her husband. Sanders’ office did not answer questions about why the Chiefs tickets from last year were not identified as a gift on her statement of financial interest.

And any tickets received by Sanders and her family are just one part of the calculation. There are also travel, lodging, and security costs. With Sanders declining to answer questions about the game, it is unclear whether she brought along a security detail that would have also needed tickets or whether she used government vehicles to travel to the game. 

In other states, TPM would be able to use open records laws to obtain information about a public official’s use of resources. However, the Freedom of Information Act in Arkansas includes a relatively unusual provision that only allows a “citizen of the State of Arkansas” to make requests for records under the law. And, thanks to the exemptions to the law that were obtained by Sanders, even Arkansans can’t obtain records related to her travel. Bell, the transparency advocate in the state, said it’s a case of Sanders making “what was already a quite bad situation much worse.” 

“In terms of, what do I see as the direction of FOIA in Arkansas? It’s been getting worse progressively,” Bell said. “The governor just stepped on the throttle and said, we’re going to make this worse and we’re going to do it much more quickly.” 

Sanders’ push to curb FOIA and shield her records from scrutiny comes as there have been multiple controversies over her spending. It’s an issue that her father also faced during his time in the governor’s mansion. In Sanders’ case, the expenses that have previously raised eyebrows have been related to her travel — and to football. 

Last September, as Sanders was pushing for the changes to the Freedom of Information Act, a local reporter named Matt Campbell broke the news that records showed she had spent over $19,000 in state funds on a custom-made lectern with a travel case before embarking on a trip to Europe. While Sanders’ team tried to attribute the use of public money for the speech accessory to an “accounting error,” Campbell obtained records via FOIA that called that explanation into question. The state Republican Party reimbursed the cost, but only after a public uproar over what became known as “Podiumgate” (or “Lecterngate,” using, for the insufferable among us, the less common but apparently more proper term). And then, in November, Campbell obtained public records and published a story in the Arkansas Times showing Sanders spent over $13,000 in public funds to throw a kickoff party for the University of Arkansas football team at the governor’s mansion.

For his part, Bell said the biggest issue with the governor’s spending and gifts is the lack of transparency.

“The secrecy itself opens the door to corruption,” Bell said, before adding, “If someone wants to give lavish gifts to the governor, fine, but everybody in the world should know who it is, and know why, and the cost and all the parameters associated with it.”

Other Arkansans have expressed concerns about Sanders’ Super Bowl trip. The local site Magnolia Reporter conducted a poll earlier this month asking readers whether they were troubled by Sanders’ presence at the big game. Magnolia Reporter found that 37 percent of respondents had some level of concern, with many wanting “a full accounting” of potential uses of state money or gifts to the governor. While most readers who responded said they were not concerned about Sanders’ Vegas jaunt, that came with a caveat: it wasn’t an issue as long as the governor “paid for this trip out of her own pocket.”

At the moment, with Sanders staying silent, it’s impossible to say whether or not that’s the case. 

Correction: An earlier version of this post misidentified the teams that were playing when Sanders was seemingly photographed in the Chiefs owner’s box, as her Instagram post was made on the day of a different game.


one of the best academic paper titles

5 Shares

derinthescarletpescatarian:

imightbeobsessedwithsocks:

derinthescarletpescatarian:

rlyehtaxidermist:

rlyehtaxidermist:

Neural correlates of interspecies perspective taking in the post-mortem Atlantic Salmon: an argument for multiple comparisons correction

one of the best academic paper titles

for those who don’t speak academia: “according to our MRI machine, dead fish can recognise human emotions. this suggests we probably should look at the results of our MRI machine a bit more carefully”

I hope everyone realises how incredibly important this dead fish study is. This was SO fucking important.

I still don’t understand

So basically, in the psych and social science fields, researchers would (I don’t know if they still do this, I’ve been out of science for a while) sling around MRIs like microbiologists sling around metagenomic analyses. MRIs can measure a lot, but people would use them to measure ‘activity’ in the brain, which is like… it’s basically the machine doing a fuckload of statistics on brain images of your blood vessels while you do or think about stuff. So you throw a dude in the machine and take a scan, then give him a piece of chocolate cake and throw him back in, and the pleasure centres light up. Bam! Eating chocolate makes you happy, proven with MRI! Simple!

These tests get used for all kinds of stuff, and they get used by a lot of people who don’t actually know what they’re doing, how to interpret the data, or whether there’s any real link between what they’re measuring and what they’re claiming. It’s why you see shit going around like “men think of women as objects because when they look at a woman, the same part of their brain is active as when they look at a tool!” and “if you play Mozart for your baby for twenty minutes then their imagination improves, we imaged the brain to prove it!” and “we found where God is in the brain! Christians have more brain activity in this region than atheists!”

There are numerous problems with this kind of science, but the most pressing issue is the validity of the scans themselves. As I said, there’s a fair bit of stats to turn an MRI image into ‘brain activity’, and then you do even more stats on that to get your results. Bennett et al.’s work ran one of these sorts of experiments, with one difference – they used a dead salmon instead of living human subjects. And they got positive results. The same sort of experiment, the same methodology, the same results that people were bandying about as positive results. According to the methodology in common use, dead salmon can distinguish human facial expressions. Meaning one of two things:

  • Dead salmon can recognise human facial expressions. OR
  • Everyone else’s results are garbage also, none of you have data for any of this junk.

I cannot overstate just how many papers were completely fucking destroyed by this experiment. Entire careers of particularly lazy scientists were built on these sorts of experiments. A decent chunk of modern experimental neuropsychology was resting on it. Which shows that science is like everything else – the best advances are motivated by spite.
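For anyone who wants to see the multiple comparisons problem in action, here is a minimal sketch (ours, not from the post or from Bennett et al.’s paper). It runs a separate t-test on thousands of pure-noise “voxels,” the way a naive voxel-wise fMRI analysis would, and shows that an uncorrected threshold always produces false positives while a Bonferroni correction does not:

```python
# Why a "dead salmon" can appear to recognise human emotions: run enough
# uncorrected significance tests on pure noise and some will pass by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_voxels = 10_000   # stand-ins for voxels in the salmon's brain
n_scans = 20        # measurements per voxel, all pure Gaussian noise

noise = rng.normal(size=(n_voxels, n_scans))
# One-sample t-test per voxel against zero; there is no real signal anywhere.
t_stat, p_val = stats.ttest_1samp(noise, popmean=0.0, axis=1)

alpha = 0.05
print("Uncorrected 'active' voxels:", int(np.sum(p_val < alpha)))      # roughly 500
print("Bonferroni-corrected:", int(np.sum(p_val < alpha / n_voxels)))  # almost always 0
```

With 10,000 tests at p < 0.05, you expect around 500 false positives from noise alone — exactly the trap the salmon paper was dramatizing, and why corrections like Bonferroni or false-discovery-rate control are now standard.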


Even the Oppressed Have Obligations

1 Comment and 2 Shares

After the Hamas attack on Israel on October 7, an old, bad argument resurfaced. In the streets of New York, London, and Paris, and on American college campuses, protesters who consider themselves leftists took the position that oppressed people—Palestinians in this case, but oppressed people more generally—can do no wrong. Any act of “resistance” is justified, however cruel, however barbaric, however much these protesters would rage against it if it were committed by someone else.

I remember the same argument from the days of the Algerian struggle for independence from France, when the National Liberation Front (FLN) launched terrorist attacks against European civilians. The movie The Battle of Algiers shows a bomb being planted in a café where teenagers met to drink and dance. This really happened, and figures as eminent as Jean-Paul Sartre defended such attacks. Killing a European, any European, the famous writer announced, was an act of liberation: “There remains a dead man”—the victim—“and a free man”—the killer.

By this same logic, the murder of young and old Israelis has been justified, even celebrated, by people who, again, consider themselves leftists. For them, the Hamas murderers are not ordinary mortals, responsible for what they do; they are agents of resistance, doing what must be done in the name of liberation.

[Simon Sebag Montefiore: The decolonization narrative is dangerous and false]

Framed this way, the issue is simple: Oppressed people have a right to resist; the Palestinians have a right to struggle against the Israeli occupation. But rights come with obligations. What are the obligations of the oppressed and, most immediately, of those who act in their name? This may not seem like an urgent question, given the horrors of the war now unfolding. But it is a question for all time; it is about the moral and political health of all those who fight for liberation—and of everyone who wants to support them.

The Hamas terrorists claim to be acting on behalf of the Palestinian people. At the same time, Hamas is the government of the Gaza Strip—a strange situation: a terrorist organization that also rules a territory. The anomaly explains why Hamas terror leads to actual conventional wars, whereas Irish Republican Army or FLN violence against civilians never did. Hamas’s government is substantial, the real thing, with a civil service and a system of social provision that includes welfare and schooling. It has, therefore, the same obligations that any government has to look after its citizens or, as in Hamas’s case, its subjects. It must secure their rights and protect their lives.

But much evidence suggests that the government of Gaza does not meet these basic obligations. Despite the large funds that Hamas has accumulated, chiefly for its military wing, some 80 percent of Gazans live in poverty. Hamas rejects the very idea of civil rights and liberties; it imposes a harsh religious discipline (though short of the Iranian version), and it does not seem overly concerned with Gazans’ general well-being. Instead of protecting the lives of its people, it exposes them to attack by embedding its military communication and storage centers in the civilian population and firing its missiles from schoolyards and hospital parking lots. It spends much of its money on the manufacture of rockets and the construction of an elaborate network of tunnels for military use. Knowing the wars it plans, it doesn’t build shelters for its people.

Insofar as anyone genuinely cares about Gazans’ well-being, it is the foreign governments that send money (Qatar pays the salaries of the civil service) and the United Nations agencies and other humanitarian organizations working on the ground. One might also mention the state of Israel, which, until October 8, supplied half of Gaza’s electricity. (Cutting off the electricity was, I believe, morally wrong and politically stupid, but those who call it so should acknowledge the years of electric service, even while rockets were fired at Israel.)

What would Gaza look like if Hamas were a normal government? That’s hard to say, because normality is hard to come by in the Middle East today. But when Israel withdrew from the Strip (taking Jewish settlers with it) in 2005, there was excited talk of a Palestinian Hong Kong, with a seaport, an international airport, water-desalination plants, and much else—all of this funded with investment from abroad, chiefly from Western Europe and the Persian Gulf states. Hamas was not interested in anything like that, and all these projects faded with the first major barrage of rockets aimed at Israel in 2006. The definitive end came a year later when Hamas, having won a narrow election victory (the last election in Gaza), seized total power and murdered its opponents. What it wanted was not a prosperous Gaza but a base for a long-term war against Israel—and, later on, against Egypt’s control of the Sinai. Hamas’s rise to power, coupled with the group’s Islamist ideology, is what led to the Israeli-Egyptian blockade, designed (not very successfully) to prevent Hamas from bringing weapons into the Strip.

In light of all this, to cast Hamas solely as an agent of resistance is to overlook a lot. It is a government that has failed its people. It is also a movement for Palestinian national liberation with a significant, but probably minority, following in Gaza and considerable influence throughout the Arab world. It is, finally, a movement that has chosen terror as its means of struggle—not as a last resort but as a matter of policy from its beginning. What are the obligations of a movement like that? I should say right now: Its first obligation is to reject terrorism.

Let’s pause here and look at a classic argument first worked out in a different liberation struggle—the class war of Europe’s and America’s workers. Lenin famously distinguished between “revolutionary” and “trade union” consciousness among the workers, the first directed toward the distant achievement of a communist society, the second aimed right now at higher wages, better working conditions, and the end of the factory foreman’s tyranny. Lenin favored the first and worried that any advance along trade-union lines would make revolution more difficult. Most workers, it turned out, favored the second approach. Revolutionary consciousness ended in dictatorship and terror or in defeat and sectarian isolation; trade-union consciousness led to the successes of social democracy.

That old distinction holds for national liberation too. In the case of Palestine and Israel, revolutionary consciousness aims at a radical triumph: Greater Palestine or Greater Israel “from the river to the sea.” That aim is often expressed in messianic language—the religious version of revolution. By contrast, trade-union consciousness is represented by those who work for a division of the land—two states, sovereign or federated or confederated. That may seem utopian right now, but it isn’t messianic. One can imagine it as a human contrivance, worked out by Palestinians and Jews who are committed concretely to the well-being of their people. We should judge Hamas, I would argue, by the standard of trade unionism because that kind of politics is genuinely responsive to the needs and aspirations of the people it aims to liberate.

Hamas has never been interested in the kinds of political work that follow from a “trade union” commitment. Begin with the obvious: Hamas should be making Gaza into a model of what a liberated Palestine would look like (perhaps, sadly, that’s what it has done). And then it should be organizing on the West Bank to achieve a Palestinian state alongside Israel. It should be working with Israeli opponents of the occupation and with other Palestinian groups for that version of liberation—which is achievable short of war and revolution. Two states (with whatever qualifications on their sovereignty) would be the most beneficial outcome for both Palestinians and Israelis. So Hamas should be building a mass movement with that end in view, a movement that would stand behind or, better yet, replace its revolutionary vanguard. It should be educating people for civil disobedience and planning marches, demonstrations, and general strikes. It should be working to strengthen Palestinian civil society and create the institutions of a future state.

Of course, Israel will make this work difficult; the current Israeli government will make it extremely difficult, because it includes religious messianists and ultranationalist settlers. Settler thugs regularly attack Palestinians living on the West Bank. Against the thugs, self-defense is required—force against force. But the goal of Palestinian “trade unionists,” a state of their own alongside Israel, requires a mass movement. Fatah, Hamas’s rival, produced something like that in the First Intifada, from 1987 to 1993; it wasn’t entirely nonviolent, but in some ways it resembled the nationalist version of a union strike. It played a large role in making the Oslo Accords possible. Hamas can’t claim any similar achievement; indeed, rockets from Gaza helped undermine Oslo.

There are people on the ground in the West Bank committed to nonviolent resistance and to constructive work of exactly the sort I’ve just described, but Hamas does not look at them as allies. Nor does it regard Palestinians in and around the Palestinian Authority who support the idea of two states as allies. It is committed to a revolutionary, totalizing politics. It insists not only on the replacement of the state of Israel by a Palestinian state but also—equally important to Hamas—on the end of any Jewish presence on what it regards as Arab land.

Hamas is not doing anything in a “trade union” way to build a liberation movement with more limited goals—a movement that might actually succeed. That kind of political work requires an organization less Bolshevik-like, less repressive, less rigidly ideological, more inclusive than Hamas has ever been. Hamas is a vanguard that isn’t looking for an organized rear guard. It is an elite of ready-to-be-martyrs who plan to liberate Palestine and eliminate Israel—not by themselves but only with those allies who won’t challenge their supremacy. They seek the help of the Arab street, excited by Hamas’s violence but not capable of replacing Hamas’s rule—and the help of movements and states that share Hamas’s zealotry and will never question its authoritarianism. The resort to terror follows. It is the natural expression of this kind of politics.

The most succinct argument against terror as a strategy for liberation comes from the Russian revolutionary Leon Trotsky. Although he wrote the essay “Their Morals and Ours”—one of the earliest versions of the bad argument that I began with—Trotsky also wrote critically about terrorism, arguing, accurately, that terrorists “want to make the people happy without the participation of the people.” The terrorists, Trotsky continues, mean to “substitute themselves for the masses.” Some on the left view that ambition as heroic and admire terrorists for that reason. But the politics of substitution is an authoritarian politics, not a leftist politics, precisely because it does not look for popular participation. Its end cannot be a democratic state: Algeria, long dominated by authoritarian FLN leaders, is a useful example of how things are likely to turn out. So is Gaza itself.

[Michael Ignatieff: Why Israel should obey Geneva even when its enemies do not]

Terrorism is a betrayal of the oppressed men and women whom its protagonists claim to defend—and plan to rule. Because they substitute themselves for the people, they will, if victorious in their struggle, simply replace the oppressors they defeat. But this is only part of the story. What about the people the terrorists kill? Terrorism is the random killing of innocent men, women, and children for a political purpose. But its worst and most common form is not random in a general way but random within a group: the killing of Black people in the United States by police or by white men with guns, or of Europeans in Algeria, Muslims in India, or, as in the recent attacks, Jews in Israel. This kind of directed terror needs to be called out—as American activists did with the slogan “Black Lives Matter.” Remember the counter-slogan, “All Lives Matter,” that many people—including me—took to be a denial of the specific politics, the racial hatred, that drives the killing.

For similar reasons, we should give the attack of October 7 its right name: It was a pogrom, a massacre undertaken for the purpose of murdering Jews. People who refuse the term, saying instead that all killing of civilians is wrong, are right in the general way that “All Lives Matter” is right, but they are avoiding the crucial moral and political point.

Still, precisely because all lives do matter, we must also draw universal moral lines. What about you and me, random individuals, who are sitting in a café or attending a music festival and are suddenly blown up or machine-gunned by attackers who are deliberately trying to kill us? I can’t understand anyone on the left or the right who, when thinking of themselves in the café or at the festival, would say that such violence is all right. Surely we are all innocent: ordinary folk, going about daily business, thinking of politics only occasionally, worrying about money, looking after kids—or just being kids.

But aren’t men, women, and children just like these also the victims of war? Yes, and terrorism—the deliberate killing of innocent people—is often enough a military strategy, as it was, I believe, in the firebombing of Dresden and the atomic bombing of Hiroshima. But that is not always true: Many armies and many soldiers aim only at military targets and do what they can to avoid or minimize civilian injury and death. That is especially difficult when the enemy deliberately exposes its civilian population to the risks of combat.

Civilian casualties are obviously much easier to avoid in the course of a political struggle. Those who resist oppression can focus and therefore have to focus narrowly on the oppressors. No good society, no liberated state, can be produced by denying life and liberty to the ordinary folk I have described. No good society without them. No good society without you and me! That is the fundamental principle of a decent politics. Terrorism is a deliberate, overt denial of that principle, and so the defenders of terrorism are the betrayers first of the oppressed and then of the rest of us. Like the terrorists, they may think that they are advancing the cause of liberation, but they have forgotten their obligations to you and me.

SimonHova (Greenlawn, NY): Like many progressive Jews living in the US, I have long had reservations about the Palestinian occupation, but I am luckily old enough to heed my parents’ warnings that peace is a partnership, and not to assume that your partners are as willing as you.

reddit is having a glitch where it puts the wrong captions over photos and it’s the only thing i…

1 Comment and 2 Shares

adz:

reddit is having a glitch where it puts the wrong captions over photos and it’s the only thing i care about right now

SimonHova (Greenlawn, NY): I don’t know when these are going to stop being funny, but it’s certainly not now!

The Automat Cinematic Universe

jwz
1 Comment and 3 Shares
I find it soothing that, Legion being a Marvel property, the Waffle Boats Cafeteria and the Key Lime Pie Cafeteria are technically a part of the same multiverse.

SimonHova (Greenlawn, NY): I love reminders that the criminally underrated show Legion is part of the larger MCU.