California protects children online in a way every state should follow

On September 15, California Governor Gavin Newsom (D) signed into law the Age-Appropriate Design Code Act, which passed unanimously in the state Senate in late August despite protests from the tech industry.

Inspired by the UK Children’s Code that came into effect last year, the California law protects the privacy and well-being of children online by requiring companies to assess the impact of any product or service designed for children or “likely to be accessed by children.”

The law will take effect on July 1, 2024, after which companies found in violation could face fines of up to $7,500 per affected child. While that may seem like a small sum, similar legislation in the European Union allowed the Irish Data Protection Commission to fine Meta $400 million over Instagram’s handling of children’s data. (Under the new law, fines would be imposed by the California Attorney General.)

California’s Age-Appropriate Design Code Act defines a child as anyone under the age of 18, in contrast to the Children’s Online Privacy Protection Act (COPPA) of 1998, which sets the cutoff at 13.

COPPA codified children’s data protections, prohibiting “unfair or deceptive acts or practices in connection with the collection, use, and/or disclosure of personal information from and about children on the Internet.”

The new California law goes further. It demands that the highest privacy settings be the default for young users and that companies “provide a clear signal” to let children know when their location is being tracked.

Jim Steyer, founder and CEO of Common Sense Media, one of the bill’s main sponsors, told HuffPost, “This is a very important victory for children and families.”

The law comes down firmly on the side of child safety over profit, stating: “If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.”

In a 2019 interview with The New York Times, Baroness Beeban Kidron, chief architect of the UK Children’s Code, explained her encounters with tech executives.

“The main thing they ask me is, ‘Do you really expect companies to give up their profits by restricting the data they collect on children?’” Her answer? “Of course I do! Of course everyone should.”

“If a conflict arises between commercial interests and the best interests of children, companies should prioritize the privacy, safety, and well-being of children over commercial interests.”

– California Age-Appropriate Design Code Act

How will the Age-Appropriate Design Code Act protect children online?

The internet’s dangers to children go beyond contact from strangers online (although by making high privacy settings the default, the California law tries to prevent such interactions).

Parents are increasingly concerned about the excessive time children spend online, the pull of platforms with autoplay and other addictive features, and children’s exposure to content that promotes dangerous behaviors like self-harm and eating disorders.

The Age-Appropriate Design Code Act requires companies to complete a “data protection impact assessment” for every new product or service, detailing how children’s data may be used and whether harm may result from that use.

“Fundamentally, [companies] must consider whether their product design exposes children and teens to harmful content, enables harmful contact with others, or uses harmful algorithms,” Steyer said.

Under the law, Steyer explained, YouTube, for example, would still be able to make video recommendations. The difference is that it would have less data to draw on when making those recommendations. Companies would also be responsible for assessing whether their algorithms amplify harmful content, and for taking action if they do.

Haley Hinkle, policy counsel at Fairplay, an organization “dedicated to ending marketing to children,” told HuffPost that by mandating an impact assessment, big tech companies “will be required to assess the impact their algorithms will have on children” before offering a product or a new design feature to the public.

Hinkle continued, “This is critical to shifting responsibility for the safety of digital platforms onto the platforms themselves, and away from families who don’t have the time or resources to decode endless pages of privacy policies and settings options.”

Under the law, a company cannot “collect, sell, share or retain” a young person’s information unless it is necessary for the app or platform to provide its service. The law directs companies to “estimate the age of child users with a reasonable level of certainty,” or else extend the same data protections to all users.

“You cannot profile a child or teen by default unless the company has appropriate safeguards in place,” Steyer said. “And you can’t collect precise geolocation information by default.”

Hinkle explained companies’ motivation to collect such data: “Online platforms are designed to capture as much of children’s time and attention as possible. The more data a platform collects on a child or teen, the more effectively it can target them with content and design features to keep them online.”

Although the law’s scope is limited to California, the hope is that it will spur broader reform, as some companies changed their practices worldwide ahead of the Children’s Code taking effect in the UK. Instagram, for example, made teen accounts private by default and disabled direct messages between children and adults they do not follow. However, the age that defines an “adult” varies by country: it is 18 in the UK and “certain countries,” but 16 in the rest of the world, according to Instagram’s statement announcing the changes.

While it’s unclear whether Instagram will raise that age limit to 18 in California, the Age-Appropriate Design Code Act requires companies to consider “the unique needs of different age groups,” with developmental stages defined by the law as follows: “0 to 5 years of age or ‘preliterate and early literacy,’ 6 to 9 years of age or ‘core primary school years,’ 10 to 12 years of age or ‘transition years,’ 13 to 15 years of age or ‘early teens,’ and 16 to 17 years of age or ‘approaching adulthood.’”

“Child development and social media are not optimally aligned.”

– Devorah Heitner

What are the biggest threats to children online?

Some threats to children come from large, impersonal companies that collect data in order to subject them to targeted advertisements or profile them with targeted content that may promote dangerous behaviors, such as eating disorders.

Other threats come from people your child knows in real life, or even from your child themselves.

Devorah Heitner, author of “Screenwise: Helping Kids Thrive (and Survive) in Their Digital World,” told HuffPost that in addition to “interpersonal harm from people they know,” such as cyberbullying, there are ways children can jeopardize their own reputations.

“What you share when you’re 12 could live with you for a very long time,” Heitner explained.

While no law can stop a child from posting something they probably shouldn’t, the Age-Appropriate Design Code Act requires companies to “consider the unique needs of different age groups,” setting the precedent that children and teens are at different developmental stages than adults and require different protections.

“Child development and social media are not optimally aligned,” Heitner noted.

What can parents do now to protect their children’s privacy and safety?

Parents don’t have to wait for big tech companies to change their practices before California’s new law takes effect. There are things you can do now to increase your child’s online privacy and safety.

Hinkle suggests keeping children off social media until at least age 13. To do this, she says, it can be helpful to connect with the parents of your child’s friends, since the presence of peers is the biggest draw of most social networks for kids.

Once they have social media accounts, Hinkle suggests “reviewing the settings with your child and explaining why you want the most protective settings enabled.” These include turning off location data, opting into private accounts, and turning off contact with strangers.

Heitner advocates an approach she calls “mentoring rather than monitoring.” Because safety settings can only do so much, and because kids are adept at finding workarounds, she argues your best defense is an ongoing conversation with your child about their online habits and the impact their actions may have, both on themselves and on others.

Your children will encounter harmful content at some point during their time online. You want them to feel comfortable telling you about it or, when appropriate, reporting it.

When it comes to examining their own behavior, children need to know that you are open to discussion and won’t be quick to judge. Heitner suggests using phrases such as, “I know you’re a good friend, but if you post this, it might not come across that way.”

Children need to understand how what they post can be misinterpreted and why they should always think before posting, especially when they feel angry.

It’s a delicate balance: respecting the importance of your child’s online life to them while teaching them that social media “can make you feel bad, and that [companies] are profiting from your time there,” Heitner said.

Parents’ goal should be to educate kids about these issues and “get kids to accept a healthy skepticism” about big tech, Heitner said.

In addition to the resources available on Common Sense Media, Steyer recommends parents take advantage of Apple’s privacy settings, which Common Sense Media helped develop.

He also suggested that parents be role models in their own media consumption.

“If you spend all your time [there] yourself, what message does that send to your child?”



