Two House Democrats are asking privacy organizations for answers about a program aimed at protecting children online. In a letter intended to ensure the organizations are doing what they’re supposed to under the Children’s Online Privacy Protection Act (COPPA) Safe Harbor program, the lawmakers said, “Parents do not have confidence that their children’s privacy is sufficiently protected online and do not have the time or resources to read through complicated and convoluted privacy policies.”
Between the leaked Facebook documents exposing research on the detrimental effects of social media on children and a pandemic in which the Omicron variant has caused schools across the country to shift back to virtual learning, it seems as if our kids are currently living their whole lives online, and that’s a terrifying prospect for parents. Wouldn’t it be ideal if there were a stamp of approval that could be placed on an app or website indicating that it was safe for kids to use?
In 2013, the Federal Trade Commission (FTC) had that very idea and amended the COPPA rule to create a Safe Harbor program. The program allowed industry groups to apply to be certified as self-regulators. Companies that join an FTC-approved Safe Harbor program and adhere to its guidelines are deemed to be in compliance with COPPA. Enrollment in a Safe Harbor program provides a shield (or a “safe harbor”) for online operators from a potential FTC enforcement action. Win-win, right? Parents can rely on the Safe Harbor program’s stamp of approval, companies get help with their compliance obligations, and the privacy organizations make money while fulfilling the noble mission of protecting children’s privacy online.
Unfortunately, the Safe Harbor programs have not been immune to criticism. One of the most vocal critics has been former Democratic FTC Commissioner Rohit Chopra (who is now the Director of the Consumer Financial Protection Bureau). In a statement he issued when the FTC settled claims with the mobile gaming app developer Miniclip, Chopra criticized these privacy policing programs for resting on their “lifetime approval” from the FTC and failing to adequately monitor the participants in their programs. In that case, Miniclip was terminated from the Children’s Advertising Review Unit’s (CARU) Safe Harbor program for violating its guidelines, but Miniclip continued to publicly state that it had CARU’s stamp of approval. Chopra noted that although CARU did the right thing in terminating Miniclip, terminations for violating guidelines are too rare. He also argued that none of this information is public, and it should be: the public should know when a participant is terminated and what it did wrong, and the FTC should have this information so it can investigate and bring an enforcement action if appropriate.
So what are the key takeaways from this Congressional inquiry to the Safe Harbor programs?
The Safe Harbor programs are going to be forced to adopt more of a white-box approach going forward. As an initial matter, they will need to open up the curtains and disclose their performance data (including complaints handled and disciplinary actions taken) to respond to the legislators’ questions. If they continue to try to keep this information confidential as a matter of practice, Congress may ask the FTC to beef up its oversight of the COPPA Safe Harbor program, including by requiring the programs to make this type of complaint data public.
This letter will likely prompt the FTC to conduct its own inquiry into the Safe Harbor programs, if a non-public investigation is not already underway. The FTC has authority under Section 6(b) of the FTC Act to require entities to respond to specific questions. Chopra called on the Commission to issue 6(b) orders to the Safe Harbor programs as part of a strategy to “modernize” the agency’s rules and enforcement approach to root out children’s privacy violations. While they overlapped at the Commission, Chopra had the new Chair’s ear, and Chair Lina Khan has indicated in public statements that she is committed to prioritizing enforcement in the area of children’s privacy using all of the tools at the agency’s disposal.
The Safe Harbor programs’ responses to this Congressional inquiry will be used to inform any amendments to the COPPA rule. The FTC is currently engaged in a review of the COPPA rule, which was last updated in 2013. The public comment period has closed; the next step in the rule review process is for the FTC to issue proposed amendments, which will likely include changes intended to strengthen the Safe Harbor program. The “lifetime approval” for Safe Harbors will likely be modified in a way that subjects the organizations to routine reviews in order to maintain FTC accreditation.
The FTC will continue to terminate Safe Harbors that do not fulfill their oversight requirements and will be on the lookout for potential conflicts of interest. Last year, the FTC removed Aristotle from the list of authorized Safe Harbor organizations. In announcing that case, the Director of the FTC’s Bureau of Consumer Protection indicated that the FTC views it as a conflict of interest when children’s privacy organizations are funded by the website operators and app developers they are supposed to be regulating. Aristotle was the first FTC-approved Safe Harbor organization to have its approval revoked, but it may not be the last.
This inquiry is part of a growing tidal wave of concern from lawmakers on both sides of the aisle about children’s safety and privacy online. There are a number of proposals to strengthen COPPA and this inquiry will likely lead to even more.