Psychological pollution, algorithms and children.

Lara Mikocki


Who has the moral responsibility to ensure a child’s right to protection against attention-mining algorithms?

Keywords: Cyberpsychology, Behaviour, Social Networking, Ethics, Algorithms, Children

Introduction

Your next video: “Killer Daddy Pig Kills and Eats Peppa Pig”. This is the reality of videos on YouTube Kids autoplay — a space where popular children’s characters like Peppa Pig are eaten by their fathers, drink bottles of bleach, and far worse. The topic of children’s online media garnered attention when covered in James Bridle’s widely read 2017 essay, ‘Something is wrong on the Internet’, which surveys the swathe of children’s videos that lean towards extreme violence, sexuality and fear. For brevity, I will not argue over which media is immoral but, as per the above case, assume that there is content online, fed through attention-hungry advertisement revenue algorithms, that can be morally problematic for children. This raises the question: whose responsibility is it to ensure a child’s right to protection against attention-mining algorithms online?

Although I cannot cover all aspects of this question, I have narrowed the topic by drawing a parallel between ecological pollution and psychological pollution. I use widely known climate ethics frameworks to help answer the responsibility question of algorithm-led psychological pollution. I begin by suggesting that psychological resources, like ecological resources, are a global resource, and that distributing the burdens of their degradation is likewise a matter of distributive justice.

From these parallels, I propose adapting Simon Caney’s hybrid model of moral responsibility — currently used in climate ethics — to the task of mitigating, and adapting to, algorithmic damage to children. I recast Caney’s Polluter Pays Principle and Ability to Pay Principle as the Polluter Protects Principle and the Ability to Protect Principle.

Within the framework’s constraints, I assess three key responsibility actors with a moral duty to protect children from algorithmic attention-mining: corporations, government, and parents. I determine, through Caney’s hybrid model, that corporations, in collaboration with government, have first- and second-order responsibility, and that parents have a second-order responsibility. This gives shape to some policy recommendations for algorithmic governance.

Part 1: Psychological Pollution and Children

1a. The parallel between psychological and ecological responsibility.

We usually associate pollution with the natural environment — that is, the earth’s land, atmospheric and ocean ecosystems. However, we can also experience pollution in our psychological ecosystem. There is no clear definition of psychological pollution; the only openly available one, from child abuse academic Mary Garret Bodel in a 1994 op-ed1, frames it as a form of “brainwashing perpetuated by a culture that glorifies violence and destruction as fun”. I see this definition as extreme, and propose one in line with ecological degradation: psychological pollution is the exhaustion and degradation of available psychological resources, with symptoms that can include worry, distress, fatigue and fear. I suggest that some attention-mining algorithms — particularly autoplay and recommended-video ad revenue tools — are determinants of such symptoms.

From the above notion, it could be claimed that psychological health is needed to sustain wellbeing, much as a planet needs ecological health. Psychological pollution is a global issue, as it stems from a global structure — the Internet. When an algorithm from a corporation in one country determines the online health of a child across the globe, there is, arguably, a new global responsibility paradigm.

1b. A child’s right to protection from online exploitation.

I have argued there is a psychological pollution problem, sustained by a global resource — the Internet. Now I will explore children’s rights within the current online ecosystem, to understand whether we need to mitigate and adapt to psychological pollution burdens. I will cover two ways in which children are vulnerable to algorithms: firstly, they are still developing agency and autonomy; and secondly, this phase of development means they can be exploited by attention-mining algorithms.

The Definition of Children, Agency and Autonomy

The discussion of agency and autonomy is extensive and important, and particularly tricky in the case of children. It is not the key topic of this paper, so I will cover it only briefly. I use the UNCRC3 definition of children as persons under 18 years of age, bound to the legal guardianship of an adult. Childhood, and particularly early childhood, is a phase in which the human being is more vulnerable because they have not yet fully developed physically or mentally4. In this sense, the capacity for autonomy and agency could also still be vulnerable.

The concept of autonomy is the focus of much debate5; for simplicity I will use the Stanford Encyclopedia of Philosophy (SEP), which describes it as the “capacity to live one’s life according to reasons and motives that are taken as one’s own”6. As for agency, the SEP defines it as the “capacity to act, [where] ‘agency’ denotes the exercise or manifestation of this capacity”7. Under both definitions, a child’s capacity for agency and autonomy could be considered vulnerable during developmental stages when confronted with algorithms. Further, algorithmic mechanisms arguably deplete agency by making choices on the child’s behalf — for example, autoplaying the next video. Merely being able to stop the video does not give a child enough power, as self-control is still developing. Mischel and Metzner (1962)8, of the famed marshmallow experiment, found evidence that the ability to resist temptation is positively related to age: as children grow older, they increasingly discover how to employ self-control strategies9. Policy can therefore recommend disabling autoplay for minors. In Korea, for example, legislation stops children from playing online games that require a resident registration number between midnight and 6am without parental permission10.
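To make such a policy concrete, here is a minimal sketch of how a platform might enforce it, assuming a user’s age has already been verified elsewhere. The function names are hypothetical, not any real platform’s API; the 18-year threshold follows the UNCRC definition above, and the curfew window mirrors the Korean legislation.

```python
from datetime import datetime, time

# Illustrative values only: the curfew window mirrors the Korean
# shutdown law described above; nothing here is an existing API.
CURFEW_START = time(0, 0)  # midnight
CURFEW_END = time(6, 0)    # 6 a.m.

def autoplay_enabled(age: int) -> bool:
    """Disable autoplay entirely for minors (under 18, per the UNCRC)."""
    return age >= 18

def within_curfew(now: datetime) -> bool:
    """True if the local time falls inside the restricted window."""
    return CURFEW_START <= now.time() < CURFEW_END

def may_play(age: int, now: datetime, parental_permission: bool = False) -> bool:
    """Adults may always play; minors may play outside curfew hours,
    or inside them only with explicit parental permission."""
    if age >= 18:
        return True
    return not within_curfew(now) or parental_permission
```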

A Right To Protection

Having defined children as vulnerable to some extent, I will now discuss how algorithms pose a threat. According to the Convention on the Rights of the Child11, a child has a right to various freedoms, such as a quality education, a name, and protection. For algorithmic governance, ‘a right to be protected from being hurt and mistreated, in body or mind’12 is the most compelling with respect to algorithms and psychological pollution. This protection right has been further elaborated to include abuse, neglect, exploitation and discrimination13. Returning to the opening Peppa Pig case, with algorithms leading to sexual and violent content, it would seem a child is consistently and blatantly abused and exploited in every similar case.

One might ask: what about the content creators? Are they not to blame? Yes, they play a role; however, creators are usually seeking revenue from the advertisement model, capitalising on widely used keywords. The model incentivises the content, and that is where exploitation can be tackled most effectively, as opposed to chasing rogue creators scattered across the globe.

Part 2: Distributive Justice Theories & Psychological Pollution

2a. Conventional theories of justice are insufficient for psychological pollution

Having established that there is a global pollution problem alongside ecological degradation — psychological pollution — what framework can distribute its burdens? Conventional distributive theories of justice, such as Rawls’ theory of justice, are insufficient in two key ways: (i) they distribute within a state, whereas Internet-based concerns — such as psychological pollution — are global; and (ii) they do not account for the intergenerational dimensions of global psychological pollution. These concerns call for Caney’s hybrid model: a combination of the Polluter Pays Principle and the Ability to Pay Principle14. The hybrid account, used for the global distribution of environmental burdens and benefits, has a place for Internet-based psychological pollution.

2b. Caney’s Hybrid Account

Caney’s distributive approach arises in response to the internationally recognised Polluter Pays Principle (PPP) of 197215, which mandates that each polluting actor pay for environmental damage in proportion to its polluting activities. PPP suggests polluters “foot the bill” for both mitigation and adaptation of environmental damage. Mitigation refers to “actions that reduce net carbon emissions and limit long-term climate change,” and adaptation to “actions that help human and natural systems adjust to climate change”16. However, Caney notices that many polluters (such as developing countries) are not able to pay for their polluting activities, so he supplements the PPP with the Ability to Pay Principle (APP). Those that have the ability to pay (often wealthy countries and businesses) also become responsible for absorbing climate costs, even if they are not the biggest emitters.

For my purposes I draw from this hybrid framework, but translate environmental pollution into psychological pollution. Mitigation, in the psychological pollution account, covers actions that reduce and limit psychological degradation from online mechanisms, such as distress, worry and fear. Adaptation covers actions that help children and their guardians manage the psychological burdens of algorithms. Caney also distinguishes between first-order responsibilities — to mitigate and enable adaptation and compensation — and second-order responsibilities — to ensure that agents comply with their first-order responsibilities. If we apply Caney’s approach where children are the resource mined for attention, and therefore to be protected, three responsibility actors become apparent: government, guardian/parent and corporation. I argue all three have moral responsibility in varying degrees.

Part 3: Responsibility Model of Protection, the PPP and the APP

To orient the language towards protecting children, the Polluter Pays Principle is hereafter adapted as the Polluter Protects Principle (PPP), and the Ability to Pay Principle as the Ability to Protect Principle (APP).

3a. Corporation and Governmental Responsibility

Corporations and PPP

The Polluter Protects Principle (PPP) places the protective responsibility for mitigation and adaptation costs most heavily on the corporation. Just as the clean-up costs of an oil spill are the responsibility of the oil company, so the protective burden of psychological contamination falls on the polluting corporation — and a corporation’s psychological contamination is arguably more deliberate than an oil spill. Corporations have a moral responsibility to provide protection through mitigation and adaptation for two reasons. Firstly, the unprecedented socio-psychological harm of the ‘move fast and break things’ attitude of technology corporations has too large an impact on children to go unchecked. Secondly, corporations’ exploitative algorithmic models have created vast and unprecedented wealth, which makes resources available for protection costs (such as online tools and education), enabling them to address the very social and protective failings their products create. This makes corporations responsible under both the PPP and APP conditions.

To the first point, ‘in the predominantly privately owned and run world of the Internet, Apple, Google, Huawei, Microsoft […] are forces to be reckoned with. It is they who largely decide what our online lives look like and what new directions the information society will take’17. Though some Internet use can be healthy, digital media use is correlated with socio-psychological symptoms ranging from mental illness, eating disorders and cyberbullying to sleep deprivation18. Moreover, half of all mental illnesses begin by the age of 14, with anxiety and personality disorders sometimes beginning around age 1119. With children impressionable and vulnerable to mental conditions, corporations have a responsibility to manage these risks, not exacerbate them through algorithms that survive on these vulnerabilities.

To the second point, the wealth born of Internet revenue models is concentrated and extensive. The technology companies on the 2019 Forbes Global 2000 list20 account for more than $9 trillion in market value, $4 trillion in assets and nearly $3 trillion in sales globally. This wealth provides the resources to develop the tools and systems necessary to improve children’s online protection. However, this would need greater governmental oversight and collaboration, as the following section describes.

Government as the APP and Collaborator

As for governmental responsibility, policy already plays a significant role in children’s lives, from education to healthcare. In Internet governance, however, the market is too large and fast-moving an industry for government to manage all responsibility costs and measures effectively. The government, though arguably not a direct polluter, does have an Ability to Protect as an incentivising and regulatory body. It can also curb the commercial interests of businesses, steering their extensive resources towards tools that respond to psychological pollution. Whether through education or tools, these regulatory measures can help shape a child’s healthy relationship with the online world.

Next to this, market competition sometimes needs government regulation to correct for market failures21. The tandem protective responsibility of corporation and government can be further supported by Joseph Heath’s market failures model22, which describes the benefits of cooperation. For brevity’s sake, I will not expand on this further than to note that cooperation can increase beneficial social outcomes for all.

On this account, corporations and government have first-order responsibilities — to mitigate and enable adaptation — as well as second-order responsibilities — to encourage agents to comply. This collaborative role suggests policy must find ways to unite business and government more meaningfully, since their joint responsibility could produce more effective outcomes than either working alone.

3b. Parents’ and Guardians’ Responsibility

An ability to protect is a key part of a guardian’s role. Parenting has become increasingly complex; however, the parent has the most meaningful connection to the child, and therefore one of the greatest abilities to protect in the Internet age. For parents, the APP does not imply developing the methods of online protection (as corporations and governmental bodies can), but rather acting as ambassadors in implementing them, completing the triad of child protection across the three responsibility actors.

As described earlier, children’s moral status has a special, developmental character, and parents have the responsibility of fostering that development in a healthy way. Parental authority should be exercised with a view to assisting children to acquire the capacities that facilitate their transition from ‘special agents’ to full and functioning members of the moral community23. Parents and guardians can fulfil some first- and second-order responsibilities superficially by monitoring online activities. However, with algorithmic manipulation embedded in ad revenue techniques, managing online behaviour is difficult even where opt-outs exist. I suggest a parent’s ability to protect is significantly improved by tools that can shield a child from attention-hungry algorithms. Such tools could include biometric age recognition that keeps children on a specific ad-free platform, much as the marketing of some high-sugar foods to children can be restricted24. Once government and business put such tools in place, parents can fulfil second-order responsibilities, much as they already do by not allowing a child to drive or drink alcohol.
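As an illustration of what such a tool could look like downstream of age verification, here is a minimal sketch in which a verified minor is routed to an ad-free, autoplay-free experience. All names are hypothetical, and the age-estimation step itself (biometric or otherwise) is assumed rather than implemented.

```python
from dataclasses import dataclass

# Illustrative only: a real platform would need a privacy-preserving,
# legally compliant age-verification step; the estimated age here is
# assumed to come from such a step, which this sketch does not model.
@dataclass(frozen=True)
class Experience:
    ads_enabled: bool
    autoplay_enabled: bool
    algorithmic_recommendations: bool

ADULT = Experience(ads_enabled=True, autoplay_enabled=True,
                   algorithmic_recommendations=True)
CHILD = Experience(ads_enabled=False, autoplay_enabled=False,
                   algorithmic_recommendations=False)

def route_user(estimated_age: int) -> Experience:
    """Route verified minors to an ad-free, non-algorithmic experience."""
    return CHILD if estimated_age < 18 else ADULT
```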

Part 4: Responsibility Policy Implementations

The proposed responsibility framework has a multitude of policy implications for algorithmic governance. A few policy recommendations have appeared throughout the paper, but the key advice from the above analysis is that government should make greater legislative efforts to collaborate with corporations on solutions that mitigate and adapt to global psychological pollution in the case of children. What these mitigation and adaptation policies look like is not within the scope of this paper, but they can begin with tackling attention-mining techniques, such as big tech’s ad revenue algorithms. Ultimately, solutions should mitigate and adapt to the effects of psychological pollution through regulation and tools.

Next to this, implementing business responsibility is difficult, but such measures can borrow from the original Polluter Pays Principle, whose implementation sits under three main categories: command and control law, economic instruments and “soft law”25. Command and control law uses licensing procedures, prohibitions, emission limits and sanctions; economic instruments use subsidies, certificates, tax alleviations and liability rules; and soft law centres on voluntary agreements, management systems and labelling26. All of these instruments could be adopted for the regulation of psychological pollution, as the sketch below illustrates.
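To illustrate how these three categories might carry over, here is a minimal sketch mapping each instrument to psychological pollution analogues. The category names follow Grossman’s taxonomy cited above; every listed measure is an illustrative suggestion, not existing policy.

```python
from enum import Enum

class Instrument(Enum):
    """The three PPP implementation categories named above."""
    COMMAND_AND_CONTROL = "command and control law"
    ECONOMIC = "economic instruments"
    SOFT_LAW = "soft law"

# Hypothetical translations of each category to psychological pollution;
# none of these are existing policies.
PSYCHOLOGICAL_PPP = {
    Instrument.COMMAND_AND_CONTROL: [
        "licensing of recommendation systems on child-facing platforms",
        "prohibition of autoplay for minors",
        "sanctions for serving flagged content to children",
    ],
    Instrument.ECONOMIC: [
        "tax alleviation for ad-free children's platforms",
        "liability rules for attention-mining harms",
    ],
    Instrument.SOFT_LAW: [
        "voluntary child-safety labelling of content",
        "industry management standards for children's media",
    ],
}
```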

Conclusion

In this paper, I have determined that children are at risk of psychological pollution, and identified three key responsibility actors for algorithmic governance via an adapted version of Simon Caney’s hybrid account: government, business and parents. According to the Polluter Protects Principle, corporations are most liable and have a first-order responsibility, in collaboration with government, to mitigate and adapt to the effects of psychological pollution through regulation, policies and tools. Having an ability to protect, parents and guardians have a second-order responsibility and can implement protection, either through home monitoring or by applying the mitigation and adaptation techniques developed through business–government collaboration. This analysis demonstrates the need for action, and that policy can start managing the growing psychological pollution problem of the Internet reaching developing minds, before the next billion people come online.

  1. Bodel, M. (1994, November 1). Psychological Pollution. Retrieved October 27, 2019, from https://academic.oup.com/sw/article-abstract/39/6/632/1912665?redirectedFrom=fulltext

  2. Broeders, D. (2017, April 5). The public core of the internet: an international agenda for internet governance. Retrieved October 27, 2019, from https://english.wrr.nl/publications/policy-briefs/2015/04/10/the-public-core-of-the-internet-an-international-agenda-for-internet-governance

  3. UNCRC. (2019, October 21). UN Convention on the Rights of the Child (UNCRC) — Unicef UK. Retrieved October 27, 2019, from https://www.unicef.org.uk/what-we-do/un-convention-child-rights/

  4. Humanium. (2017, October 4). Understanding Children’s Right to Protection — Humanium. Retrieved October 27, 2019, from https://www.humanium.org/en/protection/

  5. Christman, J. (2015, January 9). Autonomy in Moral and Political Philosophy (Stanford Encyclopedia of Philosophy). Retrieved October 27, 2019, from https://plato.stanford.edu/entries/autonomy-moral/

  6. Christman, J. (2015, January 9). Autonomy in Moral and Political Philosophy (Stanford Encyclopedia of Philosophy). Retrieved October 27, 2019, from https://plato.stanford.edu/entries/autonomy-moral/

  7. Schlosser, M. (2015, August 10). Agency (Stanford Encyclopedia of Philosophy). Retrieved October 27, 2019, from https://plato.stanford.edu/entries/agency/

  8. Mischel, W., & Metzner, R. (1962). Preference for delayed reward as a function of age, intelligence, and length of delay interval. The Journal of Abnormal and Social Psychology, 64(6), 425–431.

  9. Bucciol, A., Houser, D., & Piovesan, M. (2010). Willpower in children and adults: a survey of results and economic implications. International Review of Economics, 57(3), 259–267. https://doi.org/10.1007/s12232-010-0103-8

  10. OECD. (2018). Children & Young People’s Mental Health in the Digital Age. Retrieved from https://www.oecd.org/els/health-systems/Children-and-Young-People-Mental-Health-in-the-Digital-Age.pdf

  11. General Assembly. (1990). Convention on the Rights of the Child. Retrieved from https://www.ohchr.org/en/professionalinterest/pages/crc.aspx

  12. General Assembly. (1990). Convention on the Rights of the Child. Retrieved from https://www.ohchr.org/en/professionalinterest/pages/crc.aspx

  13. UNICEF. (2013). Child Protection form Violence, Exploitation and Abuse. Retrieved from https://www.unicef.org/publicpartnerships/files/Child_Protection_from_Violence_Exploitation_and_Abuse_2013_Thematic_Report.pdf

  14. Caney, S. (2005). Cosmopolitan Justice, Responsibility, and Global Climate Change. Leiden Journal of International Law, 18(4), 747–775. https://doi.org/10.1017/S0922156505002992

  15. OECD. (1992). The Polluter Pays Principle Analyses and Recommendations. Retrieved from http://www.oecd.org/officialdocuments/publicdisplaydocumentpdf/?cote=OCDE/GD(92)81&docLanguage=En

  16. Caney, S. (2005). Cosmopolitan Justice, Responsibility, and Global Climate Change. Leiden Journal of International Law, 18(4), 747–775. https://doi.org/10.1017/S0922156505002992

  17. Broeders, D. (2017, April 5). The public core of the internet: an international agenda for internet governance. Retrieved October 27, 2019, from https://english.wrr.nl/publications/policy-briefs/2015/04/10/the-public-core-of-the-internet-an-international-agenda-for-internet-governance

  18. OECD. (2018). Children & Young People’s Mental Health in the Digital Age. Retrieved from https://www.oecd.org/els/health-systems/Children-and-Young-People-Mental-Health-in-the-Digital-Age.pdf

  19. OECD. (2018). Children & Young People’s Mental Health in the Digital Age. Retrieved from https://www.oecd.org/els/health-systems/Children-and-Young-People-Mental-Health-in-the-Digital-Age.pdf

  20. Forbes. (2019). GLOBAL 2000 The World’s Largest Public Companies. Retrieved October 27, 2019, from https://www.forbes.com/global2000/

  21. Brennan, J. (2015, November 11). Joseph Heath, Morality, Competition, and the Firm: The Market Failures Approach to Business Ethics, Oxford University Press, 2014. Retrieved from https://kiej.georgetown.edu/joseph-heath-morality-competition-and-the-firm-the-market-failures-approach-to-business-ethics-oxford-university-press-2014/

  22. Heath, J. (2014). Morality, Competition, and the Firm: The Market Failures Approach to Business Ethics. United Kingdom: Oxford University Press.

  23. Archard, D., & Macleod, C. M. (2002). The Moral and Political Status of Children. United Kingdom: Oxford University Press.

  24. Hawkes, C. (2007). [Report on the regulatory environment for marketing food to children]. World Health Organization. Retrieved from https://www.who.int/dietphysicalactivity/regulatory_environment_CHawkes07.pdf

  25. Grossman, M. R. (2007, December). Agriculture and the Polluter Pays Principle. Electronic Journal of Comparative Law, 11(3). Retrieved from https://www.ejcl.org/113/article113-15.pdf

  26. Grossman, M. R. (2007, December). Agriculture and the Polluter Pays Principle. Electronic Journal of Comparative Law, 11(3). Retrieved from https://www.ejcl.org/113/article113-15.pdf
