On November 17th, the UK’s Information Commissioner’s Office (ICO) sent information requests to Apple and Google after receiving a complaint from 5Rights Foundation, a digital children’s safety organization, alleging that the app stores are violating the UK’s Age Appropriate Design Code. The code came into effect in September of last year and requires digital services that are “likely to be accessed by children under 18” to take steps to protect children’s safety and privacy online and to ensure that their services are designed with “the best interests of the child” in mind. The ICO allowed a one-year grace period for tech companies to come into compliance before it began enforcing the code in September of this year. In its letter to the ICO, 5Rights summarized 12 common issues that it alleged are indicative of widespread violations of the code by a number of tech companies, including the use of dark patterns and nudges, low default privacy settings, and excessive data sharing with third parties.
In response to the letter, then-Information Commissioner Elizabeth Denham stated that the regulator formally asked Google and Apple about the extent to which the risks associated with the processing of personal data factor into their determinations of the appropriate age rating for an app. If both companies respond that these risks are not factored into age ratings, it remains to be seen what the ICO plans to do about it. The code is a set of 15 standards reflecting a risk-based approach. It does not have penalties baked into it, but the ICO has made clear that companies that violate the code may be in violation of the General Data Protection Regulation (GDPR) or the Privacy and Electronic Communications Regulations (PECR), both of which provide for financial penalties.
Age Ratings in the Google Play and Apple App Stores
Both Google and Apple use age ratings for apps and games to inform parents and users of potentially objectionable content. Age ratings focus exclusively on content; they factor in neither the risks associated with processing personal data nor the target audience for an app. Google explains it this way: “Content ratings are used to describe the minimum maturity level of content in apps. However, content ratings don’t tell you whether an app is designed for users of a specific age.” Apple’s age ratings are likewise designed solely to describe the content of apps. While the age rating is essential information about app content, it does not tell parents everything they need to know. Parents also need to know the target age range for the app and whether it collects personal information. Although Google and Apple both ask developers to specify a target age range when they publish an app, neither includes that information in its app store listings. Instead, Google and Apple put the burden on parents to somehow figure it out. While some developers provide the target age range in the app store description, the vast majority do not. A content rating alone is simply not sufficient to determine whether an app targets children.
The Path Forward
Apple and Google’s responses to the ICO are due at the end of the year. A new Information Commissioner, John Edwards, steps into the role on January 3rd. Given Edwards’ laudable efforts to take the big tech platforms to task in his previous capacity as the New Zealand Privacy Commissioner and his call for more regulation of the big digital platforms, I am optimistic that the ICO will flex its regulatory muscles.
The ICO has a unique opportunity here to require Google and Apple to make significant changes to the app stores so that parents get the information they need about apps prior to download. Standard 1 of the code requires design choices that are in the best interests of the child. Standard 4 of the code requires transparency. With regard to transparency, the code requires that the privacy information provided to users, and other published terms, be concise, prominent, and in clear language. Companies must provide “bite-sized” explanations about how they use personal data at the point that the use is activated.
Disclaimer: The content of this page reflects Pixalate’s opinions with respect to the factors that Pixalate believes can be useful to the digital media industry. Any proprietary data shared is grounded in Pixalate’s proprietary technology and analytics, which Pixalate is continuously evaluating and updating. Any references to outside sources should not be construed as endorsements. Pixalate’s opinions are just that - opinion, not facts or guarantees.
Per the MRC, “‘Fraud’ is not intended to represent fraud as defined in various laws, statutes and ordinances or as conventionally used in U.S. Court or other legal proceedings, but rather a custom definition strictly for advertising measurement purposes.”

Also per the MRC, “‘Invalid Traffic’ is defined generally as traffic that does not meet certain ad serving quality or completeness criteria, or otherwise does not represent legitimate ad traffic that should be included in measurement counts. Among the reasons why ad traffic may be deemed invalid is it is a result of non-human traffic (spiders, bots, etc.), or activity designed to produce fraudulent traffic.”