Kids & Internet: Education Over Regulation, and Many Options to Fit Different Families’ Needs

Jennifer Huddleston

In 2026, the discussion about the impact of technology and social media on kids and teens continues. New technologies such as AI chatbots have only increased the fervor of this debate. Experts, parents, and policymakers alike are asking what should be done to help kids and teens have beneficial online experiences while limiting potentially negative or harmful ones.

In January 2026, the American Academy of Pediatrics (AAP) issued new guidance on screen time and social media. Among the notable elements of this new report and guidance was the idea that limiting screen time alone is insufficient to prevent potential harm in the digital ecosystem. The Federal Trade Commission (FTC) also hosted a workshop on age verification that I participated in, which could reasonably indicate that the FTC is considering how these concerns intersect with its existing authority under the Children’s Online Privacy Protection Act (COPPA) or other statutes.

Globally and in many US states, the debate about what, if any, policy actions should be undertaken has also been growing. The last year has seen significant restrictions on online content in the UK in the name of protecting children, a social media ban for under-16s in Australia, and many attempts to impose age-verification or age-appropriate design requirements at the state and federal levels in the US. Even in the relatively early days of such laws, their unintended consequences for speech and for the privacy of all users have been significant.

Any caring, rational adult wants to protect the next generation from harm, but policy is a one-size-fits-all solution to a problem that is far more individualized, varying from family to family.

Parents Remain the Best Decision Makers

Each child and each family is different, and concerns about young people and technology are not always the same, even among children in the same family. Some parents are concerned about the amount of time a child spends on devices. Others are seriously concerned about potential exposure to harmful content, such as pornography or material promoting eating disorders or self-harm. Still others have concerns similar to those of the offline world, such as who is contacting their children or whether they are being bullied.

Given the wide array of concerns, policies are unlikely to address all of them, nor can they make the nuanced decisions about exceptions that a parent might make. Policy takes this decision-making away from parents and, in many ways, implies they cannot be trusted to make these choices themselves. As Abundance Institute’s Christopher Koopman, a dad of seven, wrote in a recent X post, “I’m uneasy with how quickly some people are willing to move from concern about kids online to the conclusion that parents can’t be trusted. When parental consent is described as a ‘loophole’ or a ‘collective action problem,’ what’s really being said is that our judgment is the obstacle. That because some parents struggle to draw or enforce boundaries, the rest of us should lose the ability to make those choices for our own kids.”

Parents are managing a lot, but that does not negate the fact that they are typically in the best position to make these decisions. Rather than focusing on limiting their choices, policymakers and companies should empower parents to understand which tools are available and how to use them. Civil society groups like the Family Online Safety Institute (FOSI) also provide templates to help parents have conversations about technology with their children. Finally, we should not forget about kids and teens themselves. Companies, educators, and parents should help them understand what to do if they encounter a problem, unwanted content, or unwanted contact.

Best Practices vs. Policy Mandates

Many platforms have rolled out parental controls at different levels of the online experience, from the device itself to app stores to individual apps. These controls range from defaults on teen accounts that limit who can contact a user, to defaults that limit inappropriate search results, to many other features that can be useful to users of all ages. Many of these elements may be seen as best practices, and industry often develops formal or informal standards around such issues.

Additionally, specialized “safe” options and tools that provide additional information to parents have emerged separately from mainstream apps and products. We should applaud companies for responding to users’ concerns and market demands. But just because something may be a best practice does not mean it can or should become a legislative mandate.

If such practices are legislated rather than adopted voluntarily in response to market expectations, they may not be beneficial and could even prevent better solutions in the future. Legislation is static while innovation remains dynamic. What is seen as a best practice for verifying an individual’s age today could become outdated tomorrow. Law can rarely evolve as fast as technology, and regulations can lock in what was the best option at the time while preventing services from adopting better options in the future.

Additionally, mandating best practices assumes all technologies work the same way and target the same users. Expectations around parental controls, and their availability, may vary dramatically depending on the intended audience of a website. In some cases, a website may never have been intended for or targeted to those under 18, yet it may still have children as users due to a particular family’s need or interest. Unlike COPPA, many age-verification proposals are much broader in scope, risking verification requirements far beyond what is necessary. For example, following the implementation of a UK law, Spotify and Discord required users to verify their age to access full content because of potentially “harmful” material.

Parents, young people, and industry should work together to figure out what options may respond to concerns, but these voluntary and market-driven best practices can be more tailored to the particular needs of any platform than a top-down legislative approach that would eliminate choice.

Kids’ Online Safety Laws Have Questionable Impact and Significant Consequences

There are always tradeoffs, and we can already look to the enforcement of existing online safety laws to see the consequences these laws can have.

First, there is the question of whether these laws are even effective to begin with. Some early evidence suggests they are not. When Australia’s “social media ban” went into effect, Australian kids rushed to apps not covered by the law, many of which have fewer security or parental controls than traditional apps. In the UK and in US states where age verification has gone into effect, searches for, usage of, and downloads of VPNs have often spiked.

The most observable risk of verification mandates is the cost to privacy. Data breaches happen even with good cybersecurity practices, and age verification requires the collection of more sensitive data. From a breach of Discord’s age-verification records in the UK that compromised over 70,000 IDs to the breach of the dating-safety app Tea, it is not surprising that many users feel uncomfortable with or concerned about the potential leak of such sensitive information. These risks only increase as more places online require identity verification.

Concerningly, some policymakers seem willing to embrace increasingly restrictive policies to achieve a result. Where these laws have not been fully successful, some have responded by proposing additional policies to limit encryption or VPN use, as has been seen in places like the UK. These privacy-enhancing technologies protect many good actors, including businesses and whistleblowers, yet they could find themselves under threat from policymakers, which would make us all less safe.

While protecting kids and teens online is a noble goal, the realities of these laws’ consequences should be carefully considered. In addition to the concerns about ineffectiveness and privacy discussed above, there are, especially in the US, significant First Amendment concerns, as I have discussed in prior pieces.

Conclusion

Debates over how best to keep young people safe online are likely to only intensify, especially as new technologies like AI chatbots amplify the uncertainty and concerns of parents and policymakers. A wide range of groups, like the AAP, will try to provide guidance to parents, but each family will still have to make its own decisions, weighing the risks tied to its particular concerns against the benefits that can be gained from appropriate technology use.

Our best solutions will be those that prioritize education over regulation, enabling a wide range of options that fit different families’ needs. We must also consider that questions about youth online safety affect not only the next generation but all internet users.
