Age-Appropriate Design Code

 

Summary

  • Open Rights Group supports higher default privacy settings for children, in particular a restriction on behavioral advertising and data processing that leads to extended user engagement.
  • Open Rights Group raises concerns about the application of the age brackets in practice, in particular whether they would create a need for age verification on sites, which would have negative outcomes for children. We call for purpose limitation and data minimisation standards to be clearly set out.
  • Open Rights Group encourages child data impact assessments, in which online services consult children on the clarity of the information presented to explain what the service does with their personal data.
  • Open Rights Group calls for the Code of Practice to set out a clear set of principles for improving information given to children and parents about the level of data processing the service undertakes.
  • Open Rights Group calls for the code to also seek to build the capacity of children to understand their rights, by building in parental consent counter-signing, and investing in education.

Introduction

Recital 38 of the General Data Protection Regulation recognizes that children merit “specific protection with regard to their personal data, as they may be less aware of the risks, consequences and safeguards concerned and their rights in relation to the processing of personal data.” The recital goes on to make specific reference to the collection of personal data for marketing or user profiles or for services offered directly to a child.

The recital lays out that children should be considered differently from adults online. It is an uncontroversial observation that the distinction between an adult and a child is rarely made online in a meaningful way. This is particularly true when it comes to privacy. Services targeted at children process data in similar ways to services that are mixed or targeted at adults.

The Age-Appropriate Design Code of Practice provides an opportunity to fix that imbalance. It can address the relationship a child has with online services by creating stronger default settings and working towards better provision of information to children about terms and conditions and privacy notices. It can also operate as a learning experience, preparing children for adulthood as effective participants online with agency and confidence in their rights.

Achieving both of these outcomes requires different approaches. For the former, tighter controls on the data processing of online services, and a duty to conduct child data impact assessments that include consultation with children, would seem reasonable. For the latter, we need a system that does not create a completely unrealistic digital life for under-18s that is quickly stripped away and replaced with the online experience of an adult. There are steps, in terms of capacity building, that the Code has to actively pursue so that children can become effective participants online and can understand and exercise their rights as they move into adulthood.

It is also important to make clear what data protection, and this Code of Practice, can achieve. It addresses the relationship between an individual data subject (a child) and a data controller (an online service), the processing that takes place of the data subject’s personal data, the basis for that processing, and the responsibility a controller has to that data subject and their rights.

It does not specifically address marketing or content regulation, nor should it. These other areas may be discussed in terms of the forthcoming ePrivacy Regulation. The Code of Practice should not be used as a vehicle for either of these aims as the GDPR is simply not designed for these discussions.

Ages to be included

Open Rights Group does not have any specific concerns regarding the appropriateness of the age ranges to be included from a developmental needs perspective. We would, however, question how practical these age ranges will be for services to deliver. The age ranges could be illustrative, but making them too prescriptive may backfire, creating further data processing without suitable protections in place or removing the opportunity for children to use a service altogether.

If the code requires data controllers to know which of their users fall within these brackets, how would they do so proportionately while meeting the data minimisation standards that are important to uphold?

Granular age verification would require the data controller to collect and process specific data, perhaps giving more insight for behavioral advertising unless that is explicitly restricted. It would be an ironic twist if a code of practice that seeks to improve privacy standards undermined them by requiring granular age verification that leads to granular profiling.

Verification of age, if it were to be meaningful, requires attributes to verify against. Before they receive a driving licence, children typically hold only a birth certificate or a passport as identity attributes. Requiring these to be provided before accessing an online service is a disproportionate burden and, in the case of passports, which are expensive, leads to digital exclusion based on family income.

Additionally, if the burdens on online services were too great, they may decide to remove or restrict access to a service they previously offered. This has been seen with the General Data Protection Regulation in other areas. While these services may be acting in error, the effect is the same: a restriction of access to information. It should be recognised that disproportionate rules in this sector can provoke extreme responses.

Recommendations on age brackets:

  • Data minimisation and purpose limitation design standards in the code of practice should clearly set out that the age of a user is only to be used for verification purposes, and not for further processing for tracking or profiling purposes (a sketch of this pattern follows this list).
  • Users should not be required to verify themselves at the granular level of the age brackets.
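As a purely illustrative sketch of this data minimisation pattern (the function and field names below are hypothetical, drawn neither from the code of practice nor from any existing service), a service could derive only the coarse fact it needs, such as whether parental consent is required, and discard the raw date of birth rather than retaining it:

```python
from datetime import date

AGE_OF_CONSENT = 13  # UK threshold for information society services

def requires_parental_consent(date_of_birth: date, today: date) -> bool:
    """Derive the single coarse attribute the service needs.

    The caller discards the raw date of birth after this check, so no
    granular age data is retained for tracking or profiling.
    """
    age = today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )
    return age < AGE_OF_CONSENT

# Only the boolean outcome is stored, never the date of birth itself.
needs_consent = requires_parental_consent(date(2010, 6, 1), date.today())
```

The design choice the sketch illustrates is that verification yields a coarse, single-purpose attribute rather than a precise age that could feed a profile.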

Principles to consider from the Convention on the Rights of the Child

The principles contained within the Convention on the Rights of the Child that should be considered when designing the Code of Practice are:

Article 12 – The right of a child to express their views in all matters affecting them

The Code of Practice could touch on a number of areas addressing Article 12:

Child data impact assessments

Where online services specifically target children, or are likely to, the data controller should undertake an additional impact assessment, beyond the mandatory Data Protection Impact Assessment, that includes seeking the views of children. The assessment could cover the clarity of the consent framework the service operates and the ability of the child to understand and exercise their rights.

Counter-signing in parental consent

Research shows that children have an instinct towards their privacy, with younger children seeking greater privacy than older ones. This could be reflected in their own consent to processing. While parental consent is a legal requirement for children under the age of consent, those children should still be given the opportunity to counter-sign, having had the information presented to them in an intelligible form. If the child objects to processing, the parent, if they had previously consented, should be notified of the wishes of the child.

Article 13 – Freedom to seek, receive and impart information and ideas of all kinds

The Code of Practice should consider sensitively the effect a disproportionate regulatory regime may have on a child’s right to seek, receive and impart information. Creating an environment where online services do not offer their services, which some have done since the General Data Protection Regulation came into force, would be a negative outcome for the child’s right to seek, receive and impart information.

Article 16 – The right to privacy

Behavioral advertising

Behavioral advertising has been shown to be particularly persuasive to children. The basis of targeted advertising is increased processing of personal data. The Code of Practice could restrict the opportunity for data controllers to perform behavioral advertising using children’s personal data.

Recital 38 also suggests that children “merit special protection”, in particular regarding “the use of personal data of children for the purposes of marketing or creating personality or user profiles.” The Code of Practice could make definitive statements about what this “special protection” may include. Practically, we should see greater respect for Article 16 as a result of the Code of Practice.

Verification, not profiling

In seeking to identify a child online the Code of Practice must deal delicately with verification requirements. Verification may be necessary to confirm a child is under the age of consent, or under 18, but it should not be used for further profiling of the user. If such a practice were to occur this would have a negative impact on Article 16 rights.

Article 17 – The right of access to information and material from a diversity of national and international sources

Accessing information from a variety of sources speaks to the Code’s need to deal delicately with the proportionality of the design standards it imposes. Online services restricting access in response to a heavy-handed design regime would be a negative outcome for Article 17 and should be avoided.

Article 27 – The right to a standard of living adequate for the child’s physical, mental, spiritual, moral and social development.

The right to social development includes the right to access and experience the Internet. The Code should operate to give children the tools to enjoy social development and be in control of their personal data. It should also help them develop their skills online by identifying learning and capacity-building opportunities to improve their understanding of their rights.

In addition, the right to social development applies to all children equally. Any measure that would require identity attributes so granular or specific that a child is unable to provide them for lack of resources (for instance, requiring a passport or identity verification) would be a negative outcome for Article 27 and should be avoided by the code.

Open Rights Group would also recommend considering the work of the Article 29 Working Party, now known as the European Data Protection Board.

On aspects of the design of the code of practice

Default privacy settings for children should be set at a higher level.

It is a stated intention of the General Data Protection Regulation, in Recital 38, that children merit special protection. On a practical level, this means their privacy settings should differ from those of adults. Specifically, their default privacy settings should be set at a higher level.

The wish for high default privacy settings is also reflected in research that shows that children of younger ages believe they should have higher default privacy settings.

Restriction on behavioral advertising and data processing for extended user engagement

Efforts to extend user engagement have an outsized effect on children. There is already a social incentive to be connected and interacting with peers; adding a layer of technological incentivisation on top has led to concerning outcomes. Research conducted by YoungScot last year showed that the majority of children agreed they couldn’t live without their devices, with an even higher number believing that some products have been designed to be addictive, and a significant portion feeling that these factors contribute to sleeplessness.

Restricting data processing for extending user engagement could remove one of the driving factors contributing to a perceived need to have devices available and an “always on” mentality.

Behavioral advertising is a tool to develop targeted advertising based on the browsing habits of individuals. The United States Federal Trade Commission has taken the approach of requiring affirmative parental consent before behavioral advertising using children’s data can be conducted. The Age Appropriate Code of Practice should go further.

It is difficult for parents, let alone children, to understand behavioral advertising. Parental consent may not be an appropriate model for seeking to process this data; further, the processing of this data fails to meet the Recital 38 aim of “specific protection” for children. As the Article 29 Working Party suggested in an opinion from 2013, data controllers should not process children’s data for behavioral advertising purposes. The Code of Practice should make that recommendation a standard.

Consult with children (not just parents)

The parental consent model operates only when a child does not have the capacity to give consent on their own terms; this means anyone under the age of 13 in the UK (12 in Scotland). That model requires a parent to sign off on the data processing that is to take place. However, the appreciation of privacy may differ drastically between parent and child, and purely parental consent may miss the learning opportunity that is presented.

Parental consent is a model that operates on the basis that parents (1) have a good grasp of privacy notices (2) are sensitive to their children’s development needs online and (3) have a realistic assessment of risk for their child in using these services.

Arguably, parents continually fall short in all three of these areas. In particular, research has shown that parents do not understand how children use online services, can overreact to misunderstood contexts, and are at risk of ‘consent fatigue’ leading to clicking without thinking.

To achieve the learning and capacity building outcome that Open Rights Group supports for the Code of Practice, there should be a requirement for joint consent between the parent and the child. This gives the child responsibility and an insight into agency that is waiting for them in adulthood, while also providing the necessary guardian approval for children under the age of 13.

The Dutch Data Protection Authority, in guidelines from 2007, pointed to a social responsibility of website owners to explain the rights and obligations to their users under the age of 16 in clear and understandable language. This was despite the fact that the legal requirement extended only to receiving the consent of the parents.

If the parent consents, a notice should be sent to the child, in language the child can understand, giving them the opportunity to also consent and to exercise their right to express their views. If the child does not consent, a further notice should be sent to the parent.
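As a minimal sketch of how such a joint consent flow might be modelled (all names below are hypothetical illustrations, not anything prescribed by the GDPR or the draft code), a service could record the two linked decisions and notify the parent if the child objects:

```python
from dataclasses import dataclass
from enum import Enum

class Decision(Enum):
    PENDING = "pending"
    CONSENTED = "consented"
    OBJECTED = "objected"

def notify_child_in_plain_language() -> None:
    # Hypothetical: sends the child a notice in age-appropriate language.
    print("Notice sent to child in language they can understand.")

def notify_parent_of_objection() -> None:
    # Hypothetical: tells the consenting parent about the child's objection.
    print("Notice sent to parent: the child objects to the processing.")

@dataclass
class JointConsent:
    """Processing proceeds only while both decisions permit it."""
    parent: Decision = Decision.PENDING
    child: Decision = Decision.PENDING

    def record_parent_consent(self) -> None:
        self.parent = Decision.CONSENTED
        notify_child_in_plain_language()

    def record_child_decision(self, consented: bool) -> None:
        self.child = Decision.CONSENTED if consented else Decision.OBJECTED
        if self.child is Decision.OBJECTED and self.parent is Decision.CONSENTED:
            notify_parent_of_objection()

    @property
    def processing_permitted(self) -> bool:
        return self.parent is Decision.CONSENTED and self.child is Decision.CONSENTED
```

The point of the sketch is that the child’s counter-signature is a first-class part of the consent record rather than an afterthought, and an objection automatically triggers the notice back to the parent.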

Parental consent does not build children’s confidence in asserting their rights and may not protect their privacy. Encouraging counter-signing, joint consent or parallel consent achieves both aims of addressing the relationship between a data subject and a data controller and improving the agency of younger internet users.

Accessing and exercising rights

Article 13 of the General Data Protection Regulation sets out the information to be provided where personal data are collected from the data subject. In particular the controller shall provide the data subject with information about exercising their rights.

This means that, regardless of the basis for processing, controllers dealing with the personal data of children need to provide this information in a language the child understands.

There is a question about processing that relies on parental consent. In such circumstances, should the child still be capable of exercising their data protection rights independently of parental consent?

The Code of Practice should seek clarity on this question: how should a controller respond to a request made directly by a child whose data is being processed on the basis of parental consent?

Providing clear information

There is a clear need to improve the clarity of information provided to consumers, whether they are adults, children, or the parents of those children. Much research has gone into the length of notices and how adults fail to engage fully with the consent process. Creating a standards or ratings framework for clearer information for parents and children would go a long way towards meeting the aims of the Convention on the Rights of the Child.

The Global Privacy Enforcement Network privacy sweep from 2015 looked at data processing and children, including the provision of information. The conclusions were concerning. They found that 78% of sites assessed failed to use simple language or failed to present warnings that children could easily read and understand. This is an unacceptable rate and shows a disregard for responsibilities that the websites assessed have towards children.

Just as nutrition information has undergone a revolution in clarity, data processing is due one. Efforts like Data Rights Finder that try to summarise and highlight relevant information, or ranking systems like Who Has Your Back, offer a template that the Commissioner’s code of practice could seek to emulate. Additionally, establishing child data impact assessments should include a question on the clarity of information provided, including consultation with children on the language used.

A rating system could seek to articulate the level of data processing that takes place on the site, a level of privacy potentially, which can be easily understood and compared against other online services. It should also seek to show how the level of privacy is affected by the data subject choosing to restrict permissions for data processing.
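To make the idea concrete, here is a minimal sketch of what such a machine-readable rating could contain (the fields are illustrative assumptions, not a proposed standard):

```python
from dataclasses import dataclass

@dataclass
class PrivacyRating:
    """Illustrative summary a service could publish for easy comparison."""
    service: str
    baseline_level: int          # e.g. 1 (minimal processing) to 5 (extensive)
    behavioral_ads: bool         # does the service target ads at its users?
    engagement_profiling: bool   # is data processed to extend engagement?
    level_if_restricted: int     # level once the user restricts permissions

rating = PrivacyRating(
    service="example-service",   # hypothetical service name
    baseline_level=4,
    behavioral_ads=True,
    engagement_profiling=True,
    level_if_restricted=2,       # shows the effect of restricting permissions
)
```

Publishing both the baseline level and the level after restricting permissions would let a child, or their parent, see at a glance how much control the service actually offers.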

Other examples of design

Friction in technology can drive people to very different outcomes. Deployed correctly with good usability, it can make people think about what a controller is doing with their data, and that is a good thing. Deployed incorrectly, it can result in people circumventing controls, leading to negative outcomes, or disabling controls altogether.

For example, Apple’s parental controls on Macs block all https websites. The basis is that, due to encryption, the content filter cannot examine the content of a page, so encrypted websites must be explicitly allowed. Considering the prevalence of https, and the benefit of SSL encryption, the friction created by the parental controls here is counterproductive.

It is against best practice and leads to friction and a constant need for parents to unblock ordinary websites. Further, it could easily lead to parents disabling parental controls altogether to save themselves, and their kids, the unnecessary friction. This is bad design.

Additionally, we see where hurdles are placed to try and benefit children but are too low and not correctly constituted. Research shows that millions of parents in America avoid age bans on social media sites like Facebook by letting, or assisting, their child lie about their age.

This is due to the binary decision online services have taken on their responsibility to data subjects. Too often the sites are built on the premise that you are either over 13, and thus can consent and be treated as an adult, or under 13, and thus cannot consent and carry too many regulatory burdens. The result is under-13s lying to get onto Facebook and having their data processed as though they were adults.

This is not an argument for establishing mandatory age verification requiring hard identity attributes. That would lead to negative outcomes we have laid out above such as digital exclusion and unfair burdens. This is an articulation of the problem that the code of practice should seek to solve: give children and their parents the opportunity to access online services honestly, resulting in age-appropriate experiences and information designed to enhance safety, exercise rights, and prepare the child for engaging online when they reach adulthood.

Beyond standards for data controllers

While better information provision and consultation with children on the wording of terms and conditions and privacy notices would be useful in building up children’s agency, it would mean very little without proper investment in children’s ability to be competent, confident online actors.

This is achieved by doing more than setting standards for data controllers. It requires education at a proper level, from an early age and continuing throughout school years. Beyond the code of practice there should be a call for curriculum development that would achieve this.

The problems for children begin not with opaque wording in privacy policies but with the very existence of privacy policies: younger people appear unaware of what privacy policies are or where to find them. Given this, it matters little how much work is put into making a privacy policy clear for multiple reading ages if a child does not know where to find it, or does not even know to expect one on a service they visit.

The wish for greater education is also evidenced by children themselves. Consultations have shown an interest from children in learning more about how the internet, and the companies on it, work. These wishes should not be set aside.

Placing greater burdens on data controllers to operate with regard to children is one thing, and a laudable outcome. Investing in educating children to gain a better understanding than their parents of the internet, the internet economy, and their rights online has the potential to change society.