Risky sharenting occurs when parents and guardians routinely share sensitive, identifying information about children on social media platforms. The practice fuels the datafication of children’s lives, exposing them to cyber harms whilst potentially contaminating their digital identities.
This paper unravels the infrastructural and structural barriers impeding ongoing efforts to disrupt legal but harmful digital parenting cultures, of which risky sharenting is an example. To achieve its objectives, the paper draws on insights from zemiology (the study of social harms) to analyse the policies instituted by social media platforms, together with data from a passive digital ethnography of a Facebook group of parents practising sharenting.
Drawing on the documentary analysis and ethnography, which form part of an interdisciplinary study of sharenting funded by the Economic and Social Research Council (ESRC), the paper reveals that whilst the infrastructural barriers to harm prevention are posed by the design logics and rationalities of social media platforms, the structural obstacles stem from regulatory gaps in contemporary AI governance. Together, these enable the designers of the main social media platforms to embed in their technologies visible and invisible affordances capable of inviting and facilitating harmful forms of use.
The paper then draws on the findings of the documentary analysis and digital ethnography to develop a remedial framework that outlines the process of harm causation enabled by regulatory gaps and technology affordances, and identifies the points at which preventative policies should be introduced to disrupt that process. Through its analysis of the nexus between regulatory gaps, technology affordances, and harms, the paper advances interdisciplinary scholarship on AI ethics and governance. More specifically, by providing the empirical example of risky sharenting, it expands the nascent literature on the harms of emerging digital parenting cultures facilitated by AI technologies.