In September 2021, U.S. Senators Edward Markey (D-MA) and Richard Blumenthal (D-CT) reintroduced the Kids Internet Design and Safety (KIDS) Act – legislation that would prohibit child-directed commercial online platforms from engaging in certain practices, including implementing features that encourage additional engagement with the platform, promoting particular types of content, and using specific advertising methods.
Online platforms have operated for years with little to no oversight or regulation, but Congress has begun to take up measures to fill those gaps and strengthen privacy protections – particularly for children. President Biden named children’s privacy as a priority for his administration in his March 2022 State of the Union address and called for a ban on targeted advertising to children online.
The KIDS Act proposes:
Requiring social media companies to give “Covered Users” under age 16 (raising the threshold from age 13 under COPPA) the option to protect their information, disable addictive product features, and opt out of algorithmic recommendations.
Giving parents more control over their child’s social media usage.
Requiring social media platforms to conduct a yearly independent audit to assess their risk to minors.
Allowing researchers to access data from tech companies to investigate potential harms to children*.
COPPA vs. KIDS Act:
The KIDS Act expands on certain provisions included in the Children’s Online Privacy Protection Act (COPPA), enacted in 1998. These expanded provisions include:
Expanding application to any child-directed platform, meaning one that targets "Covered Users."
This determination is based on factors such as subject matter, visual content, use of multimedia (including animated characters, music, or other audio content), and other verifiable evidence relating to the actual and intended composition of the audience, among others.
Child-directed platforms would have to comply with these measures whether or not they have actual knowledge that children under 16 are using their platforms (actual knowledge being the standard under COPPA).
Once a platform is designated as being “child-directed,” the KIDS Act would ban features like:
Auto-play without user input.
Push alerts that notify the user when they are not actively engaging with the platform.
“Likes,” or other methods of displaying the quantity of positive engagement or feedback that a Covered User receives from other users.
Badges or other visual rewards acquired from increased levels of engagement with the platform.
Any feature or setting that “unfairly” encourages a Covered User, due to their age, to make purchases, submit content, or spend more time engaging with the platform.**
* Silberling, Amanda. “Senators propose the Kids Online Safety Act after FB leaks – TechCrunch.” TechCrunch, 16 February 2022, https://techcrunch.com/2022/02/16/senators-propose-the-kids-online-safety-act-after-facebook-haugen-leaks/. Accessed 31 March 2022.
** Mintz. “KIDS Act Would Expand Existing Federal Protections Under COPPA and Require Significant Changes to Existing Business Models.” The National Law Review, vol. XII, no. 95, 2020, p. 211. https://www.natlawreview.com/article/kids-act-would-expand-existing-federal-protections-under-coppa-and-require. Accessed 5 April 2022.
Disclaimer: The content of this page reflects Pixalate’s opinions with respect to the factors that Pixalate believes can be useful to the digital media industry. Any proprietary data shared is grounded in Pixalate’s proprietary technology and analytics, which Pixalate is continuously evaluating and updating. Any references to outside sources should not be construed as endorsements. Pixalate’s opinions are just that - opinion, not facts or guarantees.