Article Commentary

From TV to social media to “ambient” AI: Insights from 30 years of children’s media policy in the United States

Amy Jordan & Nikhila Natarajan

ABSTRACT

This essay explores the elements that have historically contributed to a climate in which policymakers feel compelled to regulate media in the United States. It then examines which of these elements are currently in place as lawmakers consider social media and AI regulation. We argue, based on observations of children’s media policy over the past 30 years, that legislative action in the US is almost inevitable. The remainder of the essay lays out the legislative “corrections” that have been proposed and what they reveal about concerns over children’s social media use. We conclude by considering the challenging road ahead: creating meaningful policy, holding media companies accountable for implementation, and assessing whether and how new regulations make a difference for young people.

Policymakers in the US tend to invoke legislation as a last resort when concerns are raised about media’s impact on young audiences. Instead, government agencies and elected officials have preferred that media companies self-regulate, in part because those companies operate in a capitalist economy in which the marketplace rules and in part because they exist within a democratic society that protects their First Amendment rights (Kunkel & Watkins, Citation1987). But there have been moments when public pressure to “do something” has resulted in the passage of laws designed to improve children’s experiences with media. We may now be at such a moment, when pressure to regulate social media companies is strong enough to result in the enactment of public policy.

Nearly 30 years ago, the US Congress overhauled media policy with the passage of the Telecommunications Act of 1996. Today’s media landscape has shifted dramatically, and the kinds of concerns addressed by that Act now seem almost quaint in comparison. Three decades ago, the internet was still new for most families, smartphones were not yet widely available, and social media was in its nascent stage. Indeed, one of the most controversial elements of the Telecommunications Act was the requirement that programs carry ratings and that television sets contain a computer chip (known as a V-Chip, the “V” standing for “violence”) that could read the ratings so parents could identify and block programs they didn’t want their children to see (Timmer, Citation2013). Today, of course, children are more likely to “watch TV” on the internet or via a streaming service (Nielsen.com, Citation2023), making the V-Chip irrelevant and the ratings an afterthought. Yet even in its day, the ratings/V-Chip policy was largely ineffective at reducing children’s exposure to problematic content (A. B. Jordan, Citation2008). Indeed, most media scholars would have predicted its failure; there had been no real attempt to establish the feasibility or potential efficacy of the “tools” provided to parents in real-life settings. Some even labeled the legislative efforts “low-hanging fruit”: something that policymakers could point to without actually doing anything that would affect the freedoms or economic structure of the media industry.

This brief historical lesson on a flawed media policy can offer insight into the conditions that are necessary for policymakers to act. Evidence-based policy requires evidence, and by the 1990s, hundreds of academic studies (many funded with government grants) had established a relationship between exposure to television violence and aggressive thoughts and behaviors in children and adolescents. Despite the evidence, the amount of violence on television continued to rise (Smith et al., Citation2002), as did parental concern over the bleak television landscape (Bushman & Cantor, Citation2003). It was an opportune moment for policy action: the Clinton/Gore administration had shown an openness to media activists by supporting the mandate for broadcasters to increase their educational offerings, and the Federal Communications Commission had become more hands-on (in contrast to the Reagan-era FCC’s push toward deregulation).

Public sentiment played a crucial role. At the time, polls showed Americans were increasingly frustrated with what they were seeing on television, and more than 80% of Americans thought television programming was especially harmful to children (Hundt & Kornbluh, Citation1996, p. 13). The FCC received thousands of letters demanding more educational TV. Responding to the public mood, the FCC began pursuing more stringent benchmarks for interpreting the Children’s Television Act of 1990 (Hundt, Citation1997). These new rules included a minimum standard of three hours per week of children’s educational programming on each station, a more precise definition of what content qualifies as “educational,” and a mandate that all educational shows be clearly labeled as such at the time they are broadcast (Federal Communications Commission, Citation1996). The harmful effects of television on children became a bipartisan issue, with support for change from both Republicans and Democrats. Subsequent annual assessments of children’s programming (e.g., A. Jordan et al., Citation2001) showed that the policy changes had moved the needle: the proportion of “highly educational” programs climbed from 29% in 1998 to 49% in 1999. Still, one in five so-called educational programs had no discernible educational value.

The same constellation of elements that built the will to take legislative action on TV programming in the 1990s seems to be emerging around social media and adolescent well-being today. Social media use is at population scale among youth; YouTube, TikTok, Snapchat, and Instagram are the most widely used apps among teens (Vogels et al., Citation2022). Concerns over problematic social media use are escalating, and CEOs of the world’s largest social media companies have testified before Congress more frequently in the last five years than in the two decades prior. Social scientists are castigating social media for damaging the mental health of youth, for nudging them into a virtual world animated by dopamine baits, and for utilizing design features that are incompatible with the adolescent brain’s developmental trajectory (Alter, Citation2017; Haidt, Citation2021). Layered into these conversations are concerns over the engine that powers the social media feeds that shape what young people see: artificial intelligence (AI).

Social media represent society’s first large-scale encounter with AI. AI algorithms power social media’s personalized recommendations, ad targeting, content ranking, news feeds, and friend suggestions, while erasing stopping cues, among other things. Whistleblowers and former insiders from large social media companies have testified that these AI systems are becoming too complex for human engineers to rein in after the fact.

Such design exacerbates the tension between time well spent and time wasted for adolescents and exploits cognitive fault lines at a sensitive period of brain development (Blakemore, Citation2018). Research on brain development shows that self-regulation develops gradually throughout adolescence and plateaus only in the mid-20s (Steinberg et al., Citation2015). Straightaway, a striking asymmetry surfaces. Social media design introduces an always-on contest against youth decision-making at a time when executive functioning (the ability to plan, prioritize, and organize) is still developing. Whistleblowers like former Facebook data scientist Frances Haugen have provided evidence that the company knows about and exploits adolescent vulnerabilities (Haugen, Citation2021), while others have revealed that these apps are designed for addiction (Harris, Citation2019). In the absence of federal action, parents, school districts, and governments have sued social media companies over harms they allege children have suffered from such addictive design (Johnson, Citation2023).

Are these actions warranted? Probably. Directionality is elusive and longitudinal research is rare, but evidence of associations between social media use and negative outcomes is growing (Office of the Surgeon General [OSG], Citation2023). Teens who spend more than three hours a day on social media are at twice the risk of mental health problems, including depression and anxiety (Riehm et al., Citation2019). Twice in the span of two years, the Surgeon General issued public health advisories calling “urgent” attention to “significant public health challenges” brought on by youth social media use. Surveys suggest that a majority of parents of adolescents struggle to make sense of the role of persuasive technology design in the lives of their children (Livingstone & Blum-Ross, Citation2020).

Today’s concerns about young people’s involvement with social media are not dissimilar from the concerns observers once had about young people’s use of the internet more generally: fears over bullying, exposure to developmentally inappropriate content, exploitation and predatory behavior, and privacy violations. But as young people’s online activities have become more personalized through a blend of “searching” and “suggestions,” media companies like X, Meta, Snapchat, and TikTok are seen as creating (toxic) media environments that serve corporate profitability goals rather than nurturing community and connection.

We have also now reached a point where media coverage of the challenges of social media and AI has made the topic salient for the broader public. More than 100 million people have watched The Social Dilemma (Orlowski, Citation2020), an Emmy-winning Netflix documentary about social media companies’ advertising-driven business model. In the film, defectors from big technology companies offer an insider view of how social media platforms’ manipulation of human behavior for profit is a feature, not a bug. As noted earlier, a Facebook whistleblower leaked tens of thousands of pages of internal documents showing that the company knew that apps such as Instagram were harming teens, leading to front-page treatment in major newspapers (Wells et al., Citation2021) and a string of highly charged Congressional hearings in the fall of 2021 in which lawmakers on both sides of the aisle publicly scolded tech companies for prioritizing profits over users’ well-being. Adding to the cacophony of complaints was President Biden, who declared in his 2022 State of the Union Address: “It’s time to strengthen privacy protections, ban targeted advertising to children, demand tech companies stop collecting personal data on our children.”

Legislative action is almost inevitable when a particular constellation of elements reaches a tipping point. In the early phase of the internet’s diffusion, the U.S. Congress enacted the 1998 Children’s Online Privacy Protection Act (COPPA) in response to fears surrounding data collection and privacy during the rise of electronic commerce. Website operators had to obtain “verifiable parental consent” for the collection and use of information on children under 13 years old (U.S. Congress, 1998). In response, social media firms’ Terms of Service (ToS) forbade children under the age of 13 from creating an account. Despite COPPA’s mandate, children under 13 participate in social media in large numbers, often with considerable help from their parents (Boyd et al., Citation2011). Some estimates have shown that nearly 4 in 10 children under 13 are active social media users (Common Sense Media, Citation2022).

Two decades after the first of the large social media platforms arrived on the scene, lawmakers are getting behind the argument that algorithms don’t have First Amendment rights to enter children’s worlds. Dozens of state attorneys general have banded together and sued the biggest names in social media (Johnson, Citation2023). In their filing, they point to data such as the doubling of suicide rates for 10- to 14-year-old girls in the seven years after addictive feeds were introduced. The national conversation around social media harms is increasingly focused on how these apps are being used by sexual predators to groom, abduct, abuse, and blackmail children (Isaac, Citation2024). During more than three hours of testimony in March 2024, CEOs of Snap, Meta, and Discord apologized to parents who held up photographs of their dead children, all of them victims of online child abuse. Some of those children died after overdosing on illegal drugs they bought on Snapchat; others died by suicide after being sexually exploited on apps such as Meta’s Facebook Messenger, Instagram, and Discord. Although TikTok, which accounts for 170 million users in the US, is also under fire for similar dangers, lawmakers have focused much more on its national security risks because of the app’s Chinese ownership. However, the consensus around platform-agnostic harms is solidifying. States (e.g., Utah, Arkansas) are pushing back on the pervasive underlying AI-powered design, rather than on specific brand names, and demanding that social media companies verify users’ ages and disable autoplay and push notifications. At the federal level, a gamut of legislative “corrections” has been proposed: stop all children under the age of 13 from using social media; require permission from a guardian for users under 18 to create an account; prohibit social media companies from using algorithms to recommend content to users under 18; impose a “duty of care”; force social media companies to give minors the option to disable addictive product features; and ban targeted advertising to children and teens.

Pressure is building from many flanks, and there is political will to put some guardrails in place around social media. But creating meaningful policy, whether in the television era or later, has proved challenging. How does Congress ensure that, in seeking to solve a problem, it is not solving yesterday’s problem and forever playing catch-up? Is there some way of knowing that one starting point is better than another?

The lessons from the V-Chip and COPPA policy efforts are instructive. Companies tend to align with regulatory measures in ways that allow them to point to something without dramatically altering their business model. For example, soon after the Facebook whistleblower testified in Congress, many social media apps, including Instagram, introduced digital “nudges” that would pop up and remind users to take a break. In other words, they selected low-hanging fruit, just as in the case of TV-era rulemaking and subsequent industry self-regulation. These sorts of responses have worked better for social media companies than for adolescent users, who say they feel “stressed” by social media (Natarajan, Citation2024). Based on the current legislative trajectory, we can expect debates over social media to persist and escalate as new controversies and revelations emerge. As one Senator fumed, “we’re done talking!” (Social Media Company CEOs Testify on Online Child Sexual Exploitation, Citation2024).

That brings us to the question: if we’re done talking, what changes might actually make a difference? First, stakeholders need to get on the same page regarding what needs regulation. When lawmakers place social media and AI in different buckets, as they do now, it gives the tech industry leverage that emerges from the cracks in between. Even the US Supreme Court is debating what a YouTube “recommendation” really is (Gonzalez v. Google, Citation2023). Does recommending or suggesting the next piece of content make a platform a publisher? Lawyers flounder, cases break down, social media companies escape. It is important to recognize that content is downstream from the design features that are intended to keep users hooked, and AI fuels such design across a multiplicity of media, including but not limited to social media. Many harms attributed to social media spill over to messaging apps (e.g., Discord and WhatsApp), conversational chatbots, and streaming services. Trying to regulate social media as if it exists in a silo is problematic. AI’s “ambient” presence across apps enables personalized prediction at scale, such that each individual user is served up content designed to keep them coming back to whatever is “next up.” Such design puts the onus of decision-making squarely on users.

Second, it is an opportune moment to revisit internet regulation more generally. Lawmakers lament the inability to sue online platforms because of the “safe harbor” that companies enjoy under Section 230 of the Communications Decency Act of 1996 (United States Code, Citation1996). Section 230 is a statutory provision that shields online platforms from liability for content posted by third parties. Policymakers argue that allowing users to sue online platforms would be transformative for user rights in the digital era.

Third, young people have a right to know how social media companies and large online platforms profit from their use, namely through the advertising-supported business model. This information is no secret; it is available in companies’ public documents, earnings calls, and annual reports. However, social media companies don’t disclose where the advertising dollars are coming from and on what basis advertisers are being charged. The public must demand this information. Users, especially young ones, must be made aware of how their attention is bundled and sold to advertisers for financial gain, and of what their customer lifetime value adds up to. Prior research has highlighted that young people can benefit a great deal when they are aware of commercial influences and of how persuasion operates in the attention economy (Buckingham, Citation2017).

Of the federal bills currently pending in the US, the Kids Online Safety Act (KOSA) appears to have gained the most traction. It is unclear whether it will get the votes to pass, but if it does, there are elements that have the potential to create a safer and more positive social media experience for young people. Platforms would be required to enable the strongest privacy settings for teen users by default. They would also be required to provide minors with options to protect their information, disable addictive product features, and opt out of personalized algorithmic recommendations. The Act would require a dedicated channel to report harmful behavior, a component that could be a valuable resource for parents. In theory, these tools could be useful. But if we have learned anything from media policies built on technical solutions (e.g., the V-Chip) and parental involvement (e.g., television ratings), it is that such policies must be assessed for feasibility and efficacy before they are rolled out untested. Research that explores how young people themselves would adopt “disabling” strategies and whether parents would use a dedicated reporting channel must be conducted before, rather than after, new media policies are implemented. The community of scholars focused on children, adolescents, and the media must be ready to participate in this effort.

Disclosure statement

No potential conflict of interest was reported by the author(s).

Additional information

Notes on contributors

Amy Jordan

Amy Jordan is Professor of Journalism and Media Studies at Rutgers University. She is former co-editor of the Journal of Children and Media and co-editor (with Dafna Lemish and Vicky Rideout) of Children, adolescents, and media (Routledge, 2017). Corresponding author: [email protected]

Nikhila Natarajan

Nikhila Natarajan is a doctoral student in Media Studies in the School of Communication and Information at Rutgers University. She studies adolescent media use with a focus on artificial intelligence (AI).

References