<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Jeremy Malcolm on Medium]]></title>
        <description><![CDATA[Stories by Jeremy Malcolm on Medium]]></description>
        <link>https://medium.com/@jmalcolm?source=rss-e617981bb386------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*Hn1Ai99K-Ta47JodxST93A.gif</url>
            <title>Stories by Jeremy Malcolm on Medium</title>
            <link>https://medium.com/@jmalcolm?source=rss-e617981bb386------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Thu, 09 Apr 2026 09:06:07 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@jmalcolm/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Deepfakes, Fiction, and the Future of CSAM Law]]></title>
            <link>https://medium.com/@jmalcolm/deepfakes-fiction-and-the-future-of-csam-law-cecfa69801dd?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/cecfa69801dd</guid>
            <category><![CDATA[bdsm]]></category>
            <category><![CDATA[age-play]]></category>
            <category><![CDATA[authors]]></category>
            <category><![CDATA[unicef]]></category>
            <category><![CDATA[united-nations]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Wed, 18 Feb 2026 01:21:23 GMT</pubDate>
            <atom:updated>2026-02-18T11:51:05.712Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*AUbmgqbaGPnyWYLc.png" /></figure><p>On February 10, 2026, Australian author <a href="https://www.theguardian.com/australia-news/2026/feb/10/sydney-author-lauren-mastrosa-tori-woods-guilty-child-abuse-daddys-little-toy-ntwnfb">Lauren Ashley Matrosa was convicted</a> of possessing and distributing child abuse material. However, no children were involved or harmed. The conviction was based on an erotic novel that she had written, in which the only sexual activity takes place between adults play-acting a “Daddy Dom/little girl” (DD/lg) fantasy. Nevertheless, the judge ruled that Australia’s definition of “child abuse material” (which in other jurisdictions is known as CSAM, CSAEM, or child pornography) is broad enough to include even fictional representations of such roleplay.</p><h3>A kinky canary in the coal mine</h3><p>It may seem paradoxical that representations of DD/lg (or ageplay, the broader term for age-based <a href="https://prostasia.org/blog/online-ageplay-safety-tips/">BDSM roleplay</a>) should be illegal while the actual practice of ageplay remains legal. BDSM-related <a href="https://www.tandfonline.com/doi/abs/10.1080/00224499.2016.1139034">fantasies</a> are <a href="https://www.researchgate.net/publication/318505712_Fifty_Shades_of_Belgian_Gray_The_Prevalence_of_BDSM-Related_Fantasies_and_Activities_in_the_General_Population">widely practiced</a> as a means of exploring power, identity, or even <a href="https://www.mirror.co.uk/news/uk-news/woman-24-who-dresses-up-25098024">processing past experiences</a>. Yet despite this, ageplay communities are frequently stigmatised as inherently abusive or as proxies for child sexual abuse (CSA), even though no minors are involved.</p><p>A prominent UK child safety advisor who supports banning DD/lg content has described it as the <a href="https://johncarr.blog/2019/07/30/the-new-currency-of-predatory-paedophiles/">currency of predatory paedophiles</a>, precipitating a <a href="https://www.telegraph.co.uk/news/2019/08/03/lolita-phenomenon-allowing-men-groom-teenage-girls-without-fear/">ban of the topic from Facebook</a>. Today, ageplay-related search terms are shadowbanned on major platforms or trigger deterrence messages warning users that they may be seeking illegal content.</p><p>My argument has always been that this is the wrong approach. It’s true that BDSM content is offensive to many and that people should not be exposed to it without their consent. But this is why “trigger warnings” in a book, or equivalent <a href="https://jere.my/three-guidelines-for-child-exploitation-policies/">tags and filters for online content</a>, are a better approach than blanket censorship or even criminalization. Matrosa’s book had <a href="https://www.theguardian.com/australia-news/2026/feb/10/sydney-author-lauren-mastrosa-tori-woods-guilty-child-abuse-daddys-little-toy-ntwnfb">a myriad of trigger warnings</a>, but this was judged irrelevant by the court, who assessed the legality of the book not by the standards of the BDSM community that it was written for, but rather by the standards of a hypothetical offended observer.</p><h3>UN moves to loosen the CSAM definition</h3><p>The Matrosa case is not an anomaly. 
It is the domestic manifestation of a broader shift in how institutions define and regulate “child abuse material.” Just one week prior to Tesolin-Mastrosa’s conviction, the United Nations agency for children, <a href="https://www.unicef.org/press-releases/deepfake-abuse-is-abuse">UNICEF, issued a statement</a> urging states to loosen their legal definitions of CSAM to include AI-generated content “even without an identifiable victim”.</p><p>While this statement does not expressly address the case of novels, a 2019 UN draft proposal had recommended that states extend the definition of CSAM to include “written materials in print or online”. When this draft recommendation encountered significant opposition, including objections from over 17,000 signatories to a petition, the <a href="https://www.ohchr.org/sites/default/files/Documents/HRBodies/CRC/CRC.C.156_OPSC_Guidelines.pdf">final document</a> settled on a less specific recommendation that “representations of non-existing children or of persons appearing to be children” should be covered.</p><p>The problem is this: CSAM is a term that doesn’t get stronger the more you pack into it. Rather, it gets weaker. The term “child sexual abuse material” was expressly coined to replace the term “child pornography” because the latter fails to convey that those depicted in such material are victims of child sexual abuse. The moral gravity of the term, and the justification for associating it with lengthy criminal penalties of imprisonment, are weakened when it is loosened to include victimless materials within its scope.</p><h3>Deepfakes are a problem we can solve</h3><p>With all that said, the UNICEF statement does call attention to a very real and legitimate concern: generative AI is being used to create sexually explicit deepfake images and videos of real victims. Across 11 countries studied by UNICEF, ECPAT, and Interpol, as many as 1.2 million children had their images manipulated into deepfakes in the past year, <a href="https://www.techpolicy.press/minors-are-on-the-frontlines-of-the-sexual-deepfake-epidemic-heres-why-thats-a-problem/">most commonly by their own peers</a>. Despite the virtual nature of the images, this constitutes a form of image-based sexual abuse, causing direct and profound harm.</p><p>Clearly, something must be done. But loosening the definition of CSAM to include victimless content is neither necessary nor sufficient to address this problem. The harm of deepfakes arises from the non-consensual use of a real person’s image — not from the abstract existence of offensive synthetic imagery. Expanding criminal categories to cover all AI-generated content risks <a href="https://drawingthelineprinciples.org/watchlist">diverting attention and resources away</a> from identifying victims, removing abusive material, and holding perpetrators accountable.</p><p>Here are three better solutions:</p><ul><li><strong>Education:</strong> If, as UNICEF reports, most deepfakes are created by peers within schools, then our response must account for the reality that many perpetrators are themselves minors. <a href="https://www.techpolicy.press/minors-are-on-the-frontlines-of-the-sexual-deepfake-epidemic-heres-why-thats-a-problem/">Indiscriminate prosecution can exacerbate harm</a>, entrench stigma, and disrupt rehabilitation among minors who may not fully appreciate the consequences. Instead, policy should prioritize education, digital literacy programs, age-appropriate interventions, and restorative justice approaches. 
Criminal penalties may be justified in egregious cases involving intent to harass or exploit, but proportionality must prevail: prevention and education outperform punishment in addressing youth-driven behavior.</li><li><strong>NCII laws:</strong> Targeted, victim-centered alternatives to CSAM laws already exist and work better in many contexts. Numerous jurisdictions have enacted laws on non-consensual intimate imagery (popularly called revenge porn, and a subset of <a href="https://c4osl.org/beyond-the-filter-tech-facilitated-gender-based-violence/">tech-facilitated gender-based violence</a> or TFGBV), some explicitly extending protections to AI-generated content featuring identifiable individuals. These frameworks offer practical tools: rapid content takedown mechanisms, civil damages for victims, and platform liability for failing to remove clearly unlawful material.</li><li><strong>Data protection laws:</strong> In other jurisdictions, particularly within Europe, existing data protection frameworks offer another, more targeted solution to the problem of AI deepfakes. The EU’s General Data Protection Regulation (GDPR) treats biometric data — including facial recognition data and image embeddings used in AI systems — as a special category that may be processed only under narrow exceptions. The scraping, storage, and manipulation of children’s images to generate deepfakes may already violate these provisions. Rather than expanding criminal definitions of CSAM to encompass all synthetic imagery, enforcement efforts could focus on unlawful data processing and misappropriation of likeness.</li></ul><h3>Conclusion</h3><p>When it comes to generated or artistic content, the decisive question is simple: does it exploit and harm a real, non-consenting person? If it does — as in the case of deepfakes of real children — it is a form of abuse and demands a firm, targeted response. If it does not, then however offensive it may be, it does not belong in the same criminal category.</p><p>Edge cases like AI-generated deepfakes have led some to argue for collapsing all depictions of minors, real or imagined, into the definition of CSAM. But conflating fiction with victimization weakens both enforcement and principle. Criminal law loses clarity. Resources are misdirected. And the moral gravity of the term “child sexual abuse material” is diluted.</p><p>The case of Lauren Tesolin-Mastrosa illustrates where this path leads: criminal liability imposed not for harm, but for offense. A free society does not protect only inoffensive art. It protects art and literature precisely because criminal penalties must be necessary and proportionate, imposed only to prevent or punish conduct that causes real and identifiable harm. This is not a radical proposition; it is a <a href="https://policyreview.info/articles/news/drawing-the-line-child-safety-laws/2058">cornerstone of international human rights law</a>. Offensive art shouldn’t be distributed without safeguards, but it should be allowed to exist.</p><p>Real CSAM, on the other hand — including deepfakes of real children — is not merely offensive. It is abusive. Our response to it demands precision, enforcement, and support for victims. 
Therefore, the solution is to target the harm directly, through measures such as preventative education, data privacy frameworks, and targeted image abuse laws — not to expand the existing criminal category of CSAM until it loses its meaning.</p><p>This May, the Center for Online Safety and Liberty (COSL) will convene a session at <a href="https://rightscon.net">RightsCon 2026</a> to examine these sensitive and complex issues directly. RightsCon is the world’s leading summit on human rights in the Internet age, attended by Internet companies, intergovernmental bodies such as UNICEF, and civil society activists.</p><p>Our session will explore how policymakers can address AI-enabled abuse without collapsing fiction into criminality, how tools and frameworks for addressing the issue can be deployed more effectively, and how freedom of expression principles can be preserved while strengthening protection for real victims. The discussion will include lawyer and activist Mar Diez, Professor K S Park of Open Net Korea, Shambhawi Paudel of ILGA Asia, and Emma Shapiro of the Don’t Delete Art project — each bringing expertise on digital rights, platform governance, gender-based violence, and artistic censorship. We invite you to <a href="https://www.rightscon.org/registration/">join us there</a>.</p><p><em>Originally published at </em><a href="https://c4osl.org/deepfakes-fiction-csam-law/"><em>https://c4osl.org</em></a><em> on February 18, 2026.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[False Positives, Real Harm: When Child Safety Systems Get It Wrong]]></title>
            <link>https://medium.com/@jmalcolm/false-positives-real-harm-when-child-safety-systems-get-it-wrong-e5b194a5d5e6?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/e5b194a5d5e6</guid>
            <category><![CDATA[trust-and-safety]]></category>
            <category><![CDATA[csam]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Sun, 11 Jan 2026 22:45:50 GMT</pubDate>
            <atom:updated>2026-01-12T04:12:10.262Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*UhUMYlVbXtwXy-iv.png" /></figure><p>When Jonas (not his real name) posted a photo of himself in sports clothes to his own Instagram account, the last thing that he expected was for his account to be suspended for suspected child exploitation. Although Jonas is in his 20s, and his sports clothes do not sexualize him at all, evidently an AI image classifier used in automated content moderation had falsely flagged his upload as possible child sexual abuse material (CSAM). Jonas initially reacted with disbelief when this happened — but this was soon followed by a mounting sense of fear about what being under investigation for child exploitation might mean for him.</p><p>Adriana (not her real name) was also shocked when she was banned from a popular adult-only platform after its AI moderation system incorrectly flagged her use of common terms within BDSM communities. Despite the platform claiming to use human review, no one assessed her case before the ban was enforced, revealing a significant operational failure in how moderation systems are applied. She writes:</p><blockquote><em>As a survivor of CSA, being banned for safe, sane, and consensual kink practices was deeply triggering. It feels hypocritical to punish adults for sexual expression while simultaneously failing to build stronger safeguards against child exploitation. Human review should be mandatory whenever content is flagged — both to protect children and prevent false positives.</em></blockquote><h3>Wrongful arrests and lost memories</h3><p>Jonas and Adriana are both right to be concerned. When innocent content is reported as child exploitation, innocent lives can be ruined. In 2024, a grandmother was <a href="https://www.abc.net.au/news/2024-04-30/act-grandmother-filming-child-abuse-material-tiktok-post/103786002">reported to Australian police</a> by TikTok over a non-sexual massage video of her infant granddaughter, sent to the child’s mother; media coverage at the time falsely labeled her as a child abuser. In 2022, <a href="https://www.nytimes.com/2022/08/21/technology/google-surveillance-toddler-photo.html">Google reported a father to police</a> because he used Gmail to send medical photos of his toddler to the child’s doctor. Both were eventually cleared of wrongdoing, but have received no redress over being falsely accused.</p><p>There are many similar stories. Thousands of users not only of Instagram, but also <a href="https://www.abc.net.au/news/2025-08-21/meta-falsely-banned-users-lose-sentimental-photos-memories/105671940">Facebook</a> and <a href="https://www.nytimes.com/2023/11/27/technology/google-youtube-abuse-mistake.html">YouTube</a>, are facing platform bans and the prospect of police investigation. Some victims of these bans talk of losing <a href="https://www.abc.net.au/news/2025-08-21/meta-falsely-banned-users-lose-sentimental-photos-memories/105671940">thousands of sentimental photos</a>, while others have had <a href="https://www.abc.net.au/news/2025-07-17/meta-wrongly-accuses-user-of-breaching-child-sex-abuse-rules/105540896">online businesses ruined</a>. For others, the bone-chilling fear of being reported to authorities weighs on them more heavily than losing their account. 
When a young gay man engaged in consensual sexual role-play with another adult was arrested by police on child exploitation charges in 2021, he couldn’t face the shame that the false charges brought upon him, and <a href="https://dallasvoice.com/texas-mans-family-sues-montgomery-county-city-of-conroe-over-sting-operation/">took his own life</a>.</p><p>Internet platforms bear a heavy responsibility to keep their services free of real child sexual abuse, and some are not discharging this responsibility very well, or are even <a href="https://www.theguardian.com/technology/2026/jan/02/elon-musk-grok-ai-children-photos">contributing to the problem</a>. But shortcomings in online child-safety reporting systems cut both ways. While it is unacceptable when real CSAM remains online without being taken down, falsely accusing innocent people of serious crimes isn’t a trade-off that we should have to accept.</p><h3>AI is worsening the problem</h3><p>This problem isn’t going away; it’s getting worse. One reason is increasing reliance on inaccurate AI classifiers, with even photos of family pets being <a href="https://petapixel.com/2025/07/21/meta-bans-instagram-user-for-posting-video-of-her-dogs-that-violated-nudity-rules/">flagged as child abuse</a>, apparently without ever undergoing human review. Platforms must report genuine CSAM, and the initial use of AI systems to help flag it in public uploads is justified. But this should never result in an immediate account ban without manual review by a human moderator.</p><p>Another factor contributing to the rise in innocent people being flagged for child abuse is that platforms are becoming increasingly risk-averse, under pressure from lawmakers and an electorate of concerned parents who aim to hold them responsible for online child sexual abuse. Laws both <a href="https://www.gov.uk/government/collections/online-safety-act">current</a> and <a href="https://www.badinternetbills.com/">threatened</a> are shifting ever-greater liability onto platforms for getting it wrong — with the predictable result that they are taking fewer chances, no matter the cost to users who are wrongly accused.</p><p>One option to address this problem is for victims of over-reporting to use the same tactic — hitting platforms in the hip pocket when they get it wrong. That’s what William Lawshe did, after being wrongly reported to authorities by Verizon over what were plainly 18+ erotic images that even bore adult-site watermarks. <a href="https://perkinscoie.com/insights/blog/can-providers-be-sued-mistaken-csam-reports-maybe-says-new-ruling-0">Lawshe sued Verizon</a> and its CSAM-scanning service provider to seek compensation for the disgrace and health problems that he suffered after being carelessly and wrongly reported to authorities. A final decision is yet to be handed down, but the court has already allowed key claims to proceed.</p><h3>How platforms can do better</h3><p>Nobody should have to resort to a lawsuit simply to keep their name and their online presence clear of false child exploitation allegations: prevention is, as usual, better than cure. 
A large part of that responsibility falls on platforms simply to <a href="https://prostasia.org/project/sexual-content-moderation-principles/">do a better job</a>: to draw their child exploitation policies narrowly and precisely, and to involve humans before taking actions such as account bans or referrals to law enforcement.</p><p>For those that consistently fail to live up to their responsibilities towards innocent users, sunlight may help. One of the first projects of the Center for Online Safety and Liberty (COSL) was the launch of the <a href="https://harmfultominors.org/">Harmful to Minors</a> transparency archive, where false child exploitation takedowns are published for public critique.</p><p>Later in 2026, COSL will also be publishing a second edition of our <a href="https://prostasia.org/project/sexual-content-moderation-principles/">Drawing the Line Watchlist</a>. The first edition of the Watchlist evaluated ten countries around the world for how accurately their laws draw the line between personal expression and lived abuse. The second will extend this analysis to the policies and enforcement practices of Internet platforms, including how well they safeguard users against false positives, provide appeal mechanisms, and limit automated escalation.</p><p>Over the longer term, COSL is also pursuing a vision to foster and establish alternative platforms and tools that hold safety and liberty in better balance, allowing a diverse range of creative content and personal expression to flourish, without sacrificing safety. These projects include our privacy-first offshore hosting service <a href="https://liberato.io/">Liberato</a>, our upcoming fan community <a href="https://c4osl.org/fanrefuge/">Fan Refuge</a>, and our open source content warning system, <a href="https://c4osl.org/dead-dove-beyond-blanket-censorship/">Dead Dove</a>.</p><h3>Conclusion</h3><p>The fight against child sexual abuse online is too important to be undermined by blunt, unaccountable systems that punish the innocent. When platforms treat false positives as an acceptable cost of doing business, they shift the burden of their own errors onto ordinary users, who are left to face fear, stigma, and lasting harm alone.</p><p>Child safety and civil liberties are not opposing values, and we should reject any approach that claims otherwise. The solution is not less vigilance, but better vigilance: narrower rules, human judgment, transparency, and accountability. Until platforms adopt those principles, innocent people will continue to pay the price for mistakes they did not make.</p><p><em>Originally published at </em><a href="https://c4osl.org/false-positives-real-harm-when-child-safety-systems-get-it-wrong/"><em>https://c4osl.org</em></a><em> on January 11, 2026.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Fiction or Felony? The Blurring of Art and Abuse]]></title>
            <link>https://medium.com/@jmalcolm/fiction-or-felony-the-blurring-of-art-and-abuse-5d81079b2783?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/5d81079b2783</guid>
            <category><![CDATA[csam]]></category>
            <category><![CDATA[ai-art]]></category>
            <category><![CDATA[censorship]]></category>
            <category><![CDATA[pornography]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Thu, 12 Jun 2025 19:49:27 GMT</pubDate>
            <atom:updated>2025-06-12T22:01:24.143Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*HW8JBQbZ24XG97Wz.jpg" /></figure><p>When Danish AI artist “Barry Coty” was arrested in 2023, it came as a surprise. He had believed that the fantasy AI porn images he was creating were a harmless outlet that might reduce demand for real child abuse material. Instead, he found himself at the center of a global Interpol operation and facing criminal charges that would help reshape laws across multiple countries.</p><p>For prosecutors, the case was clear-cut: Coty had created and distributed sexual imagery depicting minors, regardless of whether those minors were real or AI-generated. His case became a catalyst for an upcoming <a href="https://www.ft.dk/samling/20241/lovforslag/l184/index.htm">2025 law</a> in Denmark banning AI-generated sexual content involving minors, part of a global wave of legislation that has swept through <a href="https://enoughabuse.org/get-vocal/laws-by-state/state-laws-criminalizing-ai-generated-or-computer-edited-child-sexual-abuse-material-csam/">39 U.S. states</a> and the European Parliament, which <a href="https://www.europarl.europa.eu/news/en/press-room/20250512IPR28357/child-sexual-abuse-updated-rules-to-address-new-technological-risks">approved a directive</a> to criminalize AI systems used to generate such content. Globally, <a href="https://rm.coe.int/declaration-on-protecting-children-against-sexual-exploitation-and-sex/1680b25a78">pressure is rising</a> for countries to criminalize more virtual sex offenses.</p><p>This movement reflects a seemingly “common sense” principle: content depicting anyone under 18 sexually should be illegal, whether the subjects are real or virtual. But scratch beneath this consensus, and a more complex picture emerges-one that encompasses far more than AI-generated images. As researcher Aurélie Petit <a href="https://c4osl.org/beyond-the-filter-aurelie-petit/">recently discussed with me</a>, while AI deepfakes may grab the headlines, the end result of a zero-tolerance approach is that AI images are treated alongside fan fiction, art, memoirs, and more, principally from queer creators and women, all within the thought-terminating category of child sexual abuse material (CSAM).</p><p>This article explores the views of a growing number of experts, including lawyers and psychologists, who challenge this approach, arguing that it drives over-criminalization, stifles artistic expression, disproportionately harms marginalized communities like LGBTQ+ individuals, and even obstructs effective sex abuse prevention efforts. Through these insights, we examine whether the rush to criminalize AI-generated content (and more) oversimplifies a complex issue-and what’s at stake when nuance is ignored.</p><h3>From protecting victims to policing fiction</h3><p>In response to a proposal in the 2026 New York budget to redefine felony sex offenses to include AI-generated content, the New York City Bar Association, a 23,000-member organization of prosecutors, defense attorneys, and judges, <a href="https://www.nycbar.org/reports/report-on-legislation-by-the-mass-incarceration-task-force-and-sex-offense-working-group/">voiced strong opposition</a>, alongside other groups such as New York Legal Aid. The City Bar argued:</p><blockquote><em>The purpose of criminalization is not to punish the distasteful proclivities of adults who view it, but rather to disincentivize recorded abuse and thereby prevent future victimization. 
Part L’s proposed amendments move away from this rationale of protecting children, as no actual children are involved — simply computer-generated representations — and embrace punishing people for their sexual fantasies. This would be a sea change in the justification for laws against child pornography.</em></blockquote><p>While this shift toward criminalizing fantasies may seem novel in the U.S., other countries have embraced it for decades. In 1993, Canadian lawmakers <a href="https://www.wired.com/1995/03/canada/">confidently declared</a> that “it is wrong to have these fantasies and wrong to write them down. Period.” Within a year of passing a law to criminalize such content, police were raiding art galleries and <a href="https://www.cbc.ca/arts/today-in-1993-artist-eli-langer-arrested-for-paintings-deemed-child-pornography-1.3374663">arresting artists</a> and <a href="https://www.freedomtoread.ca/newsbytes/author-and-publisher-of-hansel-et-gretel-acquitted-in-quebec-court/">authors</a> for works deemed child pornography. In Australia, where a <a href="https://www.smh.com.au/national/simpsons-cartoon-ripoff-is-child-porn-judge-20081208-6tmk.html">parody Simpsons cartoon</a> was prosecuted as “child abuse material”, author Lauren Tesolin-Mastrosa faces similar charges in 2025 over a <a href="https://www.criminaldefencelawyers.com.au/blog/what-is-the-meaning-of-child-abuse-material">fictional erotic novel depicting adults</a>. Authorities there not only use the same laws against real and fictional crimes, but <a href="https://jere.my/australia-versus-human-rights-online/">don’t even keep track</a> of the difference in official statistics.</p><p>Now, the U.S. is following Canada’s lead, with police also literally <a href="https://jere.my/how-lgbtq-content-could-become-illegal/">raiding art galleries</a> over works mislabeled as child pornography. Legal and human rights experts are pushing back, arguing that the distinction is critical. In the Australian <em>Simpsons</em> case, Justice Adams emphasized:</p><blockquote><em>At the outset it is necessary to appreciate, as I think, that there is a fundamental difference in kind between a depiction of an actual human being and the depiction of an imaginary person. … There was a tendency in the arguments before me to suggest that the distinction is merely one of degree. This is quite wrong. Such an approach would trivialize pornography that utilized real children and make far too culpable the possession of representations that did not.</em></blockquote><h3>Challenging the link between fictional content and harm</h3><p>This blurring of lines between fictional and real abuse is fueling a broader wave of U.S. laws that use similar logic to target sexual content more generally. The same unproven assumptions that drive AI-specific legislation — that consuming certain content leads to harmful behavior — now appear in laws targeting pornography broadly. 
California’s <a href="https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240AB1831">AB 1831</a> asserts, without evidence, that “pornography may increase sexually aggressive thoughts and behaviors” and that AI-generated content “normalizes and validates the sexual exploitation of children.” <a href="https://c4osl.org/age-verification-laws-trading-privacy-for-protection-that-doesnt-work/">Age-verification laws</a> like Texas’s HB 1181 apply this reasoning to restrict access to content deemed “harmful to minors,” sweeping in <a href="https://c4osl.org/?mailpoet_router&amp;endpoint=view_in_browser&amp;action=view&amp;data=WzE1LCI1MTA3MDc0OTlkZmQiLDAsMCwxMSwxXQ">manga and anime</a> alongside actual pornography. A proposed federal <a href="https://c4osl.org/fight-the-interstate-obscenity-definition-act/">Interstate Obscenity Definition Act</a> would further expand criminalization of sexual content based on these same theoretical harms. The common thread: laws justified by the claim that consuming sexual content promotes abuse — but does this “slippery slope” argument hold up?</p><p>The <a href="https://www.nycbar.org/reports/report-on-legislation-by-the-mass-incarceration-task-force-and-sex-offense-working-group/">New York City Bar Association’s submission</a> opposing the redefinition of felony sex offenses to include AI-generated content cites numerous studies that challenge that assumption. A 2010 study found sex crimes, including child sex offenses, <a href="https://pubmed.ncbi.nlm.nih.gov/19665229/">declined during periods of unregulated pornography access</a>. A 2023 meta-study on fictional sexual materials (FSM), including depictions of minors, <a href="https://link.springer.com/article/10.1007/s11920-023-01435-7">found no link to sexual aggression</a> and suggested FSM use might reduce harmful impulses in high-risk individuals through a “cathartic effect,” proposing it as a harm reduction outlet.</p><p>A <a href="https://journals.sagepub.com/doi/10.1177/0093650207309359">2008 study</a> examined claims made during the passage of an earlier U.S. law, the 2003 PROTECT Act, which equated penalties for virtual image-based crimes with those involving real children. It found no evidence that FSM increases acceptance of child sexual abuse, directly rebutting the Act’s rationale. In 2012, a <a href="https://www.information.dk/indland/2012/07/sexologisk-klinik-tegnet-boerneporno-skadeligt">Danish sexological clinic</a> also advised against banning FSM, citing no clear harm. Yet as the political winds changed, Denmark reversed course with its 2025 law criminalizing AI-generated content, spurred by a Europol operation targeting synthetic CSAM networks. And this brings us back to Barry Coty.</p><h3>An agenda to raise penalties for fictional sex crimes</h3><p>Danish AI artist Barry Coty (a pseudonym) was arrested in 2023 for creating and distributing AI-generated depictions of minors in sexual contexts via a paid subscription platform, as part of a Europol-coordinated sting operation called <a href="https://www.europol.europa.eu/media-press/newsroom/news/25-arrested-in-global-hit-against-ai-generated-child-sexual-abuse-material">Operation Cumberland</a>, which also netted a string of other arrests around the world. Coty’s case, one of the first targeting an AI porn creator, highlights the global push to equate fictional content with real abuse. 
He contacted me, unsolicited, to tell his story.</p><p>In January 2025, Coty pled guilty to the charges and originally received a sentence of one year and three months, partly suspended, and 200 hours of community service. But the prosecution appealed the verdict, hoping to establish a stricter precedent around synthetic material. Today (June 12, 2025), they succeeded, increasing Coty’s sentence to eighteen months of actual jail time. Coty plans to appeal that decision to Denmark’s Supreme Court. He explains:</p><blockquote><em>My intentions for making and distributing these images have always been to reduce the amount of real CSAM material being sought out and shared on the internet, with a less demoralizing replacement to the individual consuming them, and of course to reduce the suffering being continuously done to children that are victims of real CSAM by their abusive and non-consensual images being spread around the internet.</em></blockquote><p>Policymakers face genuine challenges here. The rapid emergence of AI technology has outpaced existing legal frameworks, creating uncertainty about how to protect children while preserving legitimate rights. The visceral public reaction to any content involving minors — even fictional — creates enormous political pressure to act decisively.</p><p>There also seems to be no question — going purely from descriptions of them — that the images Coty produced would be perceived as confronting and offensive by most. There’s little question why authorities target these images as a starting point to broaden the criminalization of fictional content — it’s unsettling to consider their existence online, even in obscure sex forums, and to acknowledge that they fulfill a sexual interest for some. For many, criminalizing these images serves as a convenient stand-in for criminalizing that interest itself.</p><h3>The wrong tool for a misunderstood problem</h3><p>But the law is a blunt tool, and the wrong one, for managing the existence of paraphilic sexual interests within the community, especially among those, like Coty, who have gone out of their way to find harmless artistic outlets for them. Criminalization advocates <a href="https://www.iwf.org.uk/news-media/news/white-house-roundtable-is-important-moment-in-recognising-threat-of-ai-child-sexual-abuse-imagery/">have suggested</a> that the distribution of such images in niche online sex forums carries a “terrifying potential to flood the internet with a tsunami of abuse imagery”, or even that this could trigger the “conversion” of those who unwittingly view them into pedophiles.</p><p>But both of these are far-fetched suggestions. The reality is that almost all online platforms strictly prohibit AI-generated sexual content with characters resembling minors, and there are already <a href="https://jere.my/generative-ai-and-children-prioritizing-harm-prevention/">effective tools to weed it out and to distinguish it from real abuse imagery</a>. If such imagery were to be distributed on a mainstream platform, it would doubtless be reported to authorities, who have <a href="https://www.washingtonpost.com/technology/2023/06/19/artificial-intelligence-child-sex-abuse-images/">clearly affirmed</a> that they already possess the legal authority to prosecute.</p><p>As to the possibility that unwitting viewers of these images could be transformed into pedophiles, this smacks even more strongly of fearmongering. 
It is <a href="https://pubmed.ncbi.nlm.nih.gov/16866601/">widely accepted by experts</a> that accessing a particular type of sexual content is a sign that a person already has an interest in it, rather than being the catalyst for them to develop a new sexual interest, especially in something that they previously found disgusting. As Coty stated to me, “most people feel an innate repulsion towards such imagery”.</p><p>There are, and always will be, Internet users who reject child sexual abuse, while at the same time being drawn towards representations of underage sexuality. Their reasons for doing so, and the representations that they seek out, both <a href="https://c4osl.org/beyond-the-filter-aurelie-petit/">exist on a spectrum</a>. Within that spectrum are many legitimate artistic works, disproportionately created and consumed by sexual abuse survivors, LGBTQ+ people, young people, and women. Some may even escape misclassification as CSAM and enjoy critical acclaim. In a <a href="https://www.infrastructure.gov.au/sites/default/files/documents/mancs-jeremy-malcolm.pdf">2024 submission</a> to the Australian government urging reforms to its classification system, I wrote:</p><blockquote><em>Mainstream TV series such as </em>Euphoria<em> (depicting characters represented as children having sex) and </em>Game of Thrones<em> (representing characters engaged in incest) are routinely passed with MA 15+ or R 18+ ratings… But while mainstream Hollywood TV and movies can be classified quite leniently, it is no exaggeration to say that if a person enters the country with Japanese cartoons that depict exactly the same subjects as </em>Euphoria<em> or </em>Game of Thrones<em>, they stand a very real risk of being arrested.</em></blockquote><p>It will never be possible, nor would it be desirable, to erase all such representations from the Internet and to criminalize those who seek them out. There will always be those who create or consume representations of minors that make us uncomfortable, or who do so for reasons we find uncomfortable. In cases in which the consumption of such content crosses the line into fueling problematic behaviors, the approach that professionals recommend is one of <a href="https://www.tandfonline.com/doi/abs/10.1080/10538712.2024.2356194">harm reduction and prevention</a>, not criminalization. So the real solution may simply be for us to make peace with this, and allow those professionals to do their jobs.</p><h3>Fictional material is not CSAM</h3><p>The conflation of fictional and real sexual abuse material represents a calculated political strategy <a href="https://prostasia.org/blog/blowing-the-whistle-on-ecpat/">decades in the making</a>. 
Advocacy organizations and their allies in government have systematically engineered linguistic shifts to expand the scope of what constitutes sexual crimes, often in ways that serve interests beyond actual child protection.</p><p>For example, the 2016 <a href="https://ecpat.org/luxembourg-guidelines/">Luxembourg Terminology Guidelines</a> began to establish a new international norm that the term “child sexual abuse material” should be used in preference to “child pornography” when referring to “material that depicts and/or that documents acts that are sexually abusive and/or exploitative to a child.” While this makes sense, the devil in the details — made more explicit in a <a href="https://ecpat.org/wp-content/uploads/2025/04/Second-Edition-Terminology-Guidelines-final.pdf">2025 Revision</a> — was to include fictional content as well, a move completely at odds with the stated rationale, and one that undermines the term’s gravity. Even the original term “child pornography” <a href="https://c4osl.org/beyond-the-filter-aurelie-petit/">is a more accurate label</a> for sexually arousing images that don’t record abuse, as former FBI Special Agent Kenneth Lanning observes, <a href="https://www.missingkids.org/content/dam/missingkids/pdfs/publications/nc70.pdf">writing</a>:</p><blockquote><em>The efforts to encourage use of this new term is a good example of well-intentioned people trying to solve a problem by emotionally exaggerating the problem… It is interesting to note some of those advocating for use of the term child-abuse images also advocate for criminalizing as child pornography visual images that do not even portray actual children. You cannot have it both ways.</em></blockquote><p><a href="https://info.thorn.org/hubfs/thorn-safety-by-design-for-generative-AI.pdf">Common justifications</a> given for the criminalization of AI-generated images depicting minors are that they may be made in the image of real individuals, be generated using models that were trained on real CSAM, or be used in grooming children. Such cases involve specific abuses that should be prosecuted directly, not used to criminalize all fictional works. Barry Coty insists that none of those justifications apply in his case. No real CSAM was ever found in his possession, and he has no history of sex offending. While small volumes of illicit content have been inadvertently included in the training data of <a href="https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse">mainstream generative AI models</a>, Coty insists he never used any unlawful content in editing his creations, and he described to me in some detail the technical process that he followed to achieve this.</p><p>In such cases, whether AI tools are involved or not, equating fictional content to photos and videos recorded at actual crime scenes trivializes the real suffering of the victims of those crimes. When policymakers obscure this distinction with linguistic sleight-of-hand, it should be called out as dishonest, and their real political motivations exposed: inflating “child abuse material” statistics, providing moral cover for broader censorship campaigns, and creating new categories of criminals to justify expanded enforcement budgets and powers. Coty writes:</p><blockquote><em>Why are the lawmakers so keen to include fictional abuse into real abuse statistics? I think there is some incentive to do so in order to argue why the state should have more oversight into private messages through online surveillance. 
In this way, fictive child pornography becomes a catalyst/scapegoat to finally get better tools to go after drug dealers and terrorists.</em></blockquote><p>Calling this out is fraught, as honest discourse on this topic is often ruthlessly punished. Yet one cannot claim moral authority while blurring actual abuse with offensive fiction for broader political ends.</p><h3>The fight back begins with Fan Refuge</h3><p>Censorship of fictional sexual materials is an issue that sits at the very nexus of the four priority areas of the Center for Online Safety and Liberty (COSL): promoting safer hosting, supporting fans, combating cyberbullying and abuse, and engaging in legal advocacy. At the very core of our mission — and even our name — is the firm belief that it is neither acceptable nor necessary to sacrifice online liberty for the sake of safety.</p><p>That includes upholding the liberty for creators and fans to express themselves without fear of prosecution over fictional content, while at the same time ensuring that nobody is exposed to potentially offensive content, even if it is fictional, without their consent.</p><p>Here’s how we’re putting that into practice, starting right at home. First, this month we are launching a crowdfunding campaign for a new creator platform called <a href="https://c4osl.org/fan-refuge/">Fan Refuge</a>, which will serve as a testbed for some open source trust and safety tools that we’ve been developing. Fan Refuge won’t be an adult content platform, and it won’t allow AI-generated content at all. But it will prioritize empowering its users to curate their own experiences, rather than imposing site-wide censorship on arbitrary moral grounds.</p><h3>Justice for Real Survivors</h3><p>Second, we’re launching a major new advocacy project titled <a href="https://c4osl.org/project/justice-for-real-survivors/">Justice for Real Survivors</a>, directed at the problem that politicians and policymakers are intentionally blurring the lines between fictional and non-fictional sex crimes. The project’s aim is to begin to reshape laws, policies, social norms, and language to prioritize real sex crimes with real victims, and to clearly distinguish them from crimes under obscenity or censorship laws.</p><p>To kick off the Justice for Real Survivors project, we are convening a diverse advisory board that will develop a statement of principles setting out the harms of conflating sex crimes with fictional, artistic, and educational texts. These principles will be opened for broader sign-on, and will create a framework for other activities under the project, including research, coalition-building, policy advisory, public campaigns, and strategic litigation support. (The opinions expressed in this article are my own, not those of the advisory board.)</p><p><a href="https://thehill.com/opinion/criminal-justice/599189-were-still-not-spending-enough-to-prevent-child-sexual-abuse/">Only minuscule funding</a> is made available for the prevention of sexual abuse. So we’re grateful to have already secured the interest of a philanthropic donor in supporting the project’s first research output. This will be a major legal review which will provide a comparative analysis of the treatment of fictional sexual materials across ten countries, and assess the compatibility of these legal regimes with human rights standards. 
We hope to begin this survey in the third quarter of 2025.</p><h3>Conclusion</h3><p>The rush to criminalize fictional content — from AI-generated images to erotic novels — promises safety but delivers censorship, punishing creators while diverting resources from real victims. Policymakers, swayed by unproven claims that niche fantasies will “flood” the mainstream or “convert” viewers into predators, employ linguistic shifts that equate offensive art with heinous crimes. Yet, as science shows, porn consumption reflects pre-existing interests rather than creating new ones, and banning fictional outlets will only push consumers into darker corners and obstruct harm reduction.</p><p>The conflation of art and abuse isn’t just misguided — it’s harmful. With fewer than <a href="https://www.nbcnews.com/specials/sex-assault-convictions/">4% of real sexual assaults</a> leading to convictions and only <a href="https://www.comparitech.com/blog/vpn-privacy/child-abuse-online-statistics/">3.5% of CSAM reports investigated</a>, survivors are sidelined as authorities pursue victimless prosecutions. Marginalized creators — <a href="https://www.vice.com/da/article/ten-years-ago-in-vice-daddys-little-slut-7/">often survivors themselves</a>, along with LGBTQ+ artists, young people, and women — face censorship or prosecution for works that challenge norms, while honest discourse is stifled by <a href="https://onlinelibrary.wiley.com/doi/abs/10.1111/1745-9125.12355">stigmatizing attacks</a>. The solution lies not in erasing uncomfortable content but in embracing nuance: prioritizing real victims, empowering creators, and trusting professionals to prevent abuse without sacrificing liberty.</p><p>Barry Coty’s case represents one end of the spectrum — his AI-generated content would be deeply offensive to most people, and few are likely to rush to his defense. But the legal principles established through his prosecution won’t stop with creators like him. The same frameworks now being used to criminalize his work will also expand to target fan fiction writers, manga artists, abuse survivors processing trauma through art, and LGBTQ+ creators exploring identity and sexuality. When we normalize prosecuting people for offensive but victimless content, we create precedents that reach far beyond the most unsympathetic cases.</p><p>Through Fan Refuge and Justice for Real Survivors, we’re forging a path forward — building platforms that respect user choice, reshaping laws to focus on actual harm, and amplifying survivors’ voices. But change demands courage. Will we confront the uncomfortable truth that fictional content isn’t abuse, or cling to moral panic at the cost of justice? The choice is ours, and the stakes are high.</p><p><em>Originally published at </em><a href="https://c4osl.org/fiction-or-felony/"><em>https://c4osl.org</em></a><em> on June 12, 2025.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Limits of Zero Tolerance — Animated Pornography, Platform Moderation, and Free Expression]]></title>
            <link>https://medium.com/@jmalcolm/the-limits-of-zero-tolerance-animated-pornography-platform-moderation-and-free-expression-9cd77c1918f8?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/9cd77c1918f8</guid>
            <category><![CDATA[animation]]></category>
            <category><![CDATA[generative-ai]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Thu, 29 May 2025 19:23:14 GMT</pubDate>
            <atom:updated>2025-05-29T21:46:35.110Z</atom:updated>
<content:encoded><![CDATA[<h3>The Limits of Zero Tolerance — Animated Pornography, Platform Moderation, and Free Expression</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*TvGO0Xu3Fu1ObSS2.png" /></figure><p><em>This is a transcript of a </em><a href="https://c4osl.org/beyond-the-filter-aurelie-petit/"><em>conversation</em></a><em> between Jeremy Malcolm, Brandy Brightman and Aurélie Petit held on May 23, 2025 for the podcast </em><a href="https://c4osl.org/subscribe-to-podcast"><em>Beyond the Filter</em></a><em>, which is presented here edited for length and clarity.</em></p><p><strong>Brandy</strong>: Hello and welcome to Beyond the Filter, a podcast about censorship. My name’s Brandy…</p><p><strong>Jeremy</strong>: And my name’s Jeremy. In this episode, we’ll be talking with a special guest, Aurélie Petit, who is a social and visual researcher of animation and technology and a PhD candidate in the Film Studies Department at Concordia University in Montreal. She’s just published an article in the journal <em>Porn Studies</em> titled “The Limits of Zero Tolerance Policies for Animated Pornographic Media.” Welcome to the podcast, Aurélie.</p><p><strong>Aurélie</strong>: Hi, thank you for having me.</p><p><strong>Jeremy</strong>: So congratulations on the article. … First, I wondered if you would relate the story that you open your article with about what happened on Twitch in 2023.</p><p><strong>Aurélie</strong>: Yes, Twitch, you know, as a response to fans’ criticism that the platform was targeting sexuality where they felt it was unjustified — and a lot of queer people were being targeted, a lot of women, everybody who was showing too much nudity, like, if you could see your cleavage on Twitch, you would be considered basically doing porn — and so the platform started to really soften its policies over adult content. And especially when it was about animation, you know. And they were saying, if you were drawing, sculpting, or making animation on Twitch, and it depicted naked people, it was fine. And in one day, so many people made porn, like animated characters in porn, that the day after, the platform decided to cancel its policies. In one day, which is crazy for a platform like Twitch.</p><p><strong>Jeremy</strong>: Well, I think that’s not the only example of something like that happening. Didn’t OnlyFans also turn their policies around in a couple of days or so? So, yeah, I think one of the themes of your paper is that these platforms maybe don’t have very well thought-through policies on animated content, versus real content that depicts the same thing. So, what did you discover in the platforms that you looked at? I think there were 30 platforms, right?</p><p><strong>Aurélie</strong>: Yeah, so, you know, this example of Twitch, I used it to open the question of, as you say, that platforms don’t know what to do with animation. And it ended up with them writing policies that are totally weird or go against common sense. And so, I was curious about this question of, how do you moderate animation? And especially how do you moderate animation on a pornographic platform that actually has to be like, is this “good sex” or is it “bad sex”? And so, I decided to find 30 pornographic websites and I took their policy documents and, you know, I was focusing on this question of age. 
So, I wanted to see, when they talk about age and, you know, child pornography, how do they include animation in this discussion?</p><p><strong>Jeremy</strong>: Yeah, so I noticed it’s interesting that you used the term “animated child pornography” because child pornography as a term is not really used anymore, is it? We’ve started to move to child sexual abuse material. But speaking for myself, I always feel uncomfortable using the term child sexual abuse material when there’s no child sexual abuse, right? When there’s no child. And in that way, even though child pornography is like an antiquated term, it’s more appropriate in some ways.</p><blockquote><em>I also feel uncomfortable talking about child sexual abuse materials, you know, and talking about child exploitation, knowing that, of course, it’s a spectrum.</em></blockquote><p><strong>Aurélie</strong>: I also feel uncomfortable talking about child sexual abuse materials, you know, and talking about child exploitation, knowing that, of course, it’s a spectrum. Like, you know, computer-generated imagery, sometimes that is AI, where the child may be used as a model, you know, and it’s maybe part of an exploitation circle or trafficking, you know. But then you also have the Bart Simpsons of the world, as I call them, you know, and it’s like, it’s a spectrum. And what I was able to see in a lot of those policies is like, they don’t make a lot of space for this spectrum, you know? A very problematic, hyper-realistic animated image of a child in a sexual context was, at least on paper, put at the same level as a very cartoonish representation of a child.</p><p><strong>Brandy</strong>: And I feel like animation is just fundamentally ambiguous because with live action, you’ve got an actor involved or a person or whatever, and they’ve got an unambiguous age. No matter what they look like, no matter what age they’re playing, they have a true age, which you can use to ultimately judge the content. But with animation, they don’t have a true age. They just have however old the creator decides to say they are on that day, and however old they look. And the age of the character may change on the creator’s whim, and you can’t really judge purely on appearance, because even in reality, you can’t really judge purely on appearance either. And the stylization of animation makes things even more ambiguous. A creator may have a character that they say is usually 15, but whenever they appear in pornography, they may just claim that they’re 18, even though they draw them exactly the same way.</p><p><strong>Jeremy</strong>: Aging up.</p><p><strong>Aurélie</strong>: Yeah. That’s why I’m more interested in talking about problematic representation, you know, than about age and gender. Like, we can have a conversation about, you know, representing young-looking characters in pornography. And it’s a better conversation than trying to argue what age a character is. But, as a community we can wonder, do we want the imagery of pornography to be mostly young-looking people? Like, do we want it to be the <em>main</em> representation of animation? Knowing that it’s not always the case, but it’s also often made by, you know, sexist animation studios, sexist creators. Like, it’s a more interesting conversation for me.</p><p><strong>Jeremy</strong>: Isn’t there also a cultural history behind, particularly, as you mentioned, lolicon and shotacon? Those are particular genres that have a stylized look for their characters that isn’t directly related to age, right? 
And I think you found that only two of the platforms you looked at actually used that fandom terminology, referring to shotacon and lolicon content by those names.</p><p><strong>Aurélie</strong>: Well, again, if we link it to this idea of problematic media representations, the history we have for the beginning of lolicon is that it was a genre that kind of always existed. You can trace it back to the golden age of manga, but then it really solidified when a group of men in Japan started to realize that yaoi [gay male porn made for women] was taking up a lot of space in the fandom and the conventions, so as an answer, they decided to really popularize this representation of young girls. This woman-exclusionary, queer-exclusionary story is very much part of it. So, for me, that’s the history of it that I see. And then, because those representations fit perfectly within a patriarchal society, they became the norm. But at the origin, they were made in a sexual setting just as a way to make women feel uncomfortable, which they succeeded at, you know. And loli has a complicated history now because, yes, it’s very present in pornography, but this idea of the young, moe, cute, kawaii girl is also just very popular now. And of course, you also have lesbian and queer creators in Japan who are making those kinds of manga, and I don’t want to make them totally disappear from the conversation.</p><p><strong>Jeremy</strong>: That’s part of the problem with a zero tolerance approach, isn’t it: you can’t really draw those lines. And one of the main risks of a zero tolerance approach, which you mentioned in your paper, is that violators can actually be reported to authorities over this sort of content. So, what consequences can platform users suffer if they’re reported to authorities over animated content?</p><p><strong>Aurélie</strong>: Well, you know, the language that platforms use in their policies reveals a lot about the kind of moderation they want to perform for a reader or for online users. To say “zero tolerance” is very stern legal language: we have zero tolerance for this, we are in compliance with the authorities, we even collaborate with them. It means that they can provide the authorities with all of the data they have about a user. And the problem comes when it’s animation or a cartoon, because they’re going to use the same language, and they’re going to group animation with child sexual abuse materials.</p><p><strong>Brandy</strong>: So I find it crazy to think that you could have someone posting a picture of a real child being abused, and then on the other hand, you could have someone posting a racy picture of, like, Summer from <em>Rick and Morty</em>, and they could get treated exactly the same. That just sounds insane to me.</p><p><strong>Aurélie</strong>: Yeah, I was talking with a content creator who used Patreon. Patreon is actually interesting for adult content creators, because their policies leave a lot of room for animation and pornography. While they explicitly state “we do not want live action pornography,” they do allow animation, cartoons, and illustrations. 
But I was talking to this creator who had had his entire account deleted in one day, without being able to negotiate, because suddenly they decided he was doing bestiality, or it was incest, or something, and he was like, “I was not doing anything that I had not done before.” But because they’re using those big terms, bestiality, child sexual abuse materials, it’s very dangerous. All of these are paraphilic sexualities that we have fair reason to believe should be banned and illegal [in real life]. But in animation, you know, he was doing 3D monster porn.</p><p><strong>Jeremy</strong>: Yeah, I’ve heard that Patreon treats monster porn as bestiality. And I believe that actually comes not from Patreon themselves, but from their payment processor, which enforces that rule. And of course, that throws up a whole lot of questions about furry fandom and their right to express themselves in visual form.</p><p><strong>Brandy</strong>: Yeah, I feel like we’re entering another spectrum of ambiguity. We’ve got the age spectrum of ambiguity, and then we’ve got the sentience spectrum of ambiguity, where you’ve got animals on one side, humans on the other, and then in the middle, you’ve got these anthropomorphized animals and bestial aliens and things. Where do you draw the line there if you’re talking about bestiality?</p><p><strong>Aurélie</strong>: Yeah, and that’s a problem for a platform. That’s why I end up not even advocating for better tools, because I don’t think the answer is there, you know. The answer is not to have tools that are going to be more efficient at flagging, because it’s a cultural problem.</p><p><strong>Jeremy</strong>: Yeah, there is something else that you refer to in your paper, about more reliance being placed on AI for moderation. It’s not going to solve the problems, right? It’s just going to push them to a different level, where we have to assess whether these tools are going to do a better job at making the important differentiations between different sorts of content. Are they going to do a better job than humans?</p><p><strong>Aurélie</strong>: Mm-hmm. Which leaves even less room for negotiation. It’s exactly what happened with Twitch, you know, to people who had hundreds of thousands of followers. And it’s the same for the person I was talking to who was a Patreon content creator. That’s how he pays his rent, and he lost his platform in one day. And I feel like people are going to assume AI moderators are more objective when they’re really not.</p><p><strong>Brandy</strong>: Like, I read an article the other day saying that an AI couldn’t even tell that a stick bug wasn’t a stick, whereas humans clearly could. I mean, it’s going to get better, but… I think we’ve brought up before that AI has its own biases. 
Was it in your article that you mentioned that it has a harder time recognizing dark faces than pale faces?</p><blockquote>If animation is always going to be wrongly flagged, then this myth of the efficiency of those automated AI detection tools doesn’t hold anymore.</blockquote><p><strong>Aurélie</strong>: Yeah… But then if it doesn’t work, if animation is always going to be wrongly flagged, then this myth of the efficiency of those automated AI detection tools doesn’t hold anymore.</p><p><strong>Jeremy</strong>: So just to change the topic slightly, I want to ask you what we should be doing to reduce the harms of content that people may not want to be exposed to, other than reporting users to the authorities, which is clearly disproportionate and harmful in itself. What else can we do about people being exposed to content that they don’t want to see?</p><p><strong>Aurélie</strong>: Well, you know, there are some things that really surprised me when I started to think through this research. I went on Pornhub and I typed “lolicon.” And then I got an automated pop-up that told me, “if you’re a pedophile and you need help, you can go to this website.” And I felt like I started to see the contours of the problem. I was doing it as a researcher, but I was like, if I am a consumer and I see this, my first reaction is going to be, “Pornhub doesn’t know what’s happening. They don’t know what they’re talking about. They don’t understand that I’m just looking for cartoons, just looking for an image.” I think I would feel very defensive. And there’s a very libertarian approach to animation and anime porn that has always existed in the anime fandom community, because for a long time, pornographic animation was unfairly targeted. So people became very protective of it, and it created a lot of discourse that is dismissive of concerns over age and gender that we can legitimately have. Again, saying “it’s kind of crazy that all of those girls look so young” is not saying “let’s ban all animated pornography.” It’s a conversation that can be nuanced and still happen. So for me, it would be, first of all, for those platforms to start using the terms used by fans, like shotacon and lolicon, and actually understand and define them: being able to explain in their policies where they’re using them, and why this content is actually banned. Instead of just saying, “we ban all child pornography (real, virtual, drawings, cartoons),” it’s to say, “we ban all child sexual abuse materials; within those child sexual abuse materials, we include animation, and here’s why.” I think that would be the beginning of an answer.</p><p><strong>Jeremy</strong>: I think you’ve acknowledged that there is a spectrum, right? There is definitely material that is perhaps AI-generated, trained on real humans, and we don’t want to see it. And then there’s also really creative, artistic content that does reference characters who may be of indeterminate age. And the creators of that content may themselves be queer. They may have legitimate artistic points to make, and yet it’s all bundled together under the same category. Now, my fear is that if we do that, then people who have legitimate content are going to be forced to post it in dark spaces, in encrypted channels, maybe on the Tor network, places like that, where we also find real child sexual abuse material. 
So I feel like there has to be some middle ground where we can post content that fits within that part of the spectrum on the clear web, rather than pushing it into the dark web, which is only going to associate it more closely with real abuse content.</p><p><strong>Aurélie</strong>: It’s actually interesting, because today I was translating another article I wrote for <em>Porn Studies</em>, called “The Hentai Streaming Platform Wars,” which I can send you afterwards if you want. So I happened to be reading it again today. It’s an article I published in December on the ecosystem of pornographic animation online, and all of those streaming platforms that exist. A lot of them are total black boxes: we don’t know where they come from, we don’t know who is running them, we don’t know exactly how they make money, but they have a lot of content. And a lot of the content is on the margin of legality, and probably illegal in a lot of countries. So I think first we would need to have a pornographic platform that is dedicated to animation, because that kind of platform would allow for much more nuance: they would only be dealing with animation, and would not even have to think about the question of “but what do you do if it’s a real child?” I think it would solve a lot of problems.</p><p><strong>Brandy</strong>: As someone who consumes a lot of fantasy content, I find it muddies the water even more. Like, I was watching an anime the other day, and one of the characters looks like a child and is often mistaken for a child by a lot of the other characters, but they’re actually basically a hobbit; they’re like a middle-aged man with a family of three. So it just made me start wondering, how would you classify it if someone decided to make hentai of this character? He’s got the maturity and the chronological age of a 35-year-old, but he looks about 11.</p><p><strong>Jeremy</strong>: So how much of this is about media criticism and media education, rather than about censorship, as it is at present?</p><p><strong>Aurélie</strong>: Zahra Stardust, an Australian activist, tech writer and academic, said in her recent book <em>Indie Porn</em> that we insist so much on telling people, especially teenagers, that porn is not real. But maybe we should start to tell people that porn actually is real, you know; it is an industry. And it’s the same with these representations: instead of saying “oh, but it’s not real,” we can say “actually, it is real; <em>someone</em> is making it.” It’s a question I used to put to my students, because I was teaching animation and we had a week on pornography. I would ask, are these characters sex workers? And of course they would all say, no, they’re not. And I would say, okay, so where are the workers here? Where are the people? What do we think is behind this short animation? Someone made it. There was maybe a voice actor involved. Making the industry appear, I think that’s part of education. And once you start to think of the industry, you start to think also about the political economy of it, and the cultural economy. And, like, what are the politics of the creators? What kind of content do you want to consume? 
And what kind of content do we want to distribute?</p><p><strong>Brandy</strong>: Are you saying that the thing we should use to judge a piece of content is the character of the creator who made it?</p><p><strong>Aurélie</strong>: More to understand the context of the production. Because if you come to understand that the Bart Simpson [porn] is a parody, a commentary on how those comics present as super family-friendly when they’re actually not, and that they’re not supposed to be pornographic, that gives you some hints to understand them.</p><blockquote>I think framing these works as problematic rather than as illegal, and not using terms that impute illegality, is a lot more helpful.</blockquote><p><strong>Jeremy</strong>: Yeah, I think framing these works as problematic rather than as illegal, and not using terms that impute illegality, is a lot more helpful. It allows a lot more space for a conversation around these works, rather than shutting that conversation down by applying blanket categories of censorship to them.</p><p><strong>Aurélie</strong>: Can I give you another example that is maybe more related to the question of age? In my thesis right now, I’m looking at this anime called <em>Kite</em>, which was distributed in the US in the 2000s. But the first time it was distributed, it was heavily edited, and some fans were mad, because they were like, “oh, they took out all of the pornographic parts.” But among the pornographic parts that the editors decided to take out during importation, there were a lot of scenes of rape against a child. It was animation, but you had a fan movement to bring back the unedited version. And again, it was a very libertarian attitude, super resistant to the idea of any kind of editing being done. But then you put it in perspective and you’re like, that was a very problematic media representation. How do you think the younger women in anime fandom felt to see fans being like, “it is actually super important for us to be able to see the rape of this child”? Obviously, a fictional child, but still. Sometimes it is about creative freedom, which is of course super important. But sometimes it’s like, let’s take a step back and actually wonder what we are fighting for. And, again, how does it make other people feel when we do this?</p><p><strong>Jeremy</strong>: So, you’ve mentioned your PhD research. Is there anything else in your research that you’re planning to publish articles about? Or is there any other research that is on the horizon for you?</p><p><strong>Aurélie</strong>: I have another article that is about the promise that a lot of AI porn platforms make, that “you can do whatever you want,” when actually there are policies and rules. A lot of those rules are good, but what does it mean when they’re pretending that there are no rules? And then how do you actually apply those rules to content that is not realistic? So I’m working on this, and I’m hopefully finishing my thesis in the next couple of months.</p><p><strong>Brandy</strong>: I feel like, yeah, AI is going to be a big fly in the ointment, because we’ve got live action, and then we’ve got animated. 
But then AI is going to create this whole other subgenre of things that look like live action but don’t actually involve people, which I think is going to have to have its own completely individual set of rules.</p><p><strong>Jeremy</strong>: Yeah, I mean, one of the problems is that if there are only two categories, real or fictional, then AI is always going to be the thin end of the wedge to regulate all fictional media. And so I think there is some merit in saying maybe there should be a third category of content in the way that we regulate it. And that is going to be incredibly contentious, because as you may know, just in the last few days, the US has put a moratorium on new state-level regulation of AI. So we are kind of stuck in an unregulated state for a while.</p><p><strong>Aurélie</strong>: If this is the part where I can give some last advice, it’s to look at how other communities who have always dealt with those questions have been dealing with them. Actually get interested in this history of the moderation of non-realistic content, because animation has always existed, and animation has always been regulated. Look at what worked, and look at what didn’t work. Talk to those people who are fans, who will actually be the consumers being impacted. And get curious. That’s also why I defend using terms like lolicon, which actually speak to this community; they understand what you mean.</p><p><em>To listen to the full, </em><a href="https://c4osl.org/beyond-the-filter-aurelie-petit/"><em>unedited version of this interview</em></a><em> and read the episode notes with sources, search for “</em><a href="https://c4osl.org/subscribe-to-podcast"><em>Beyond the Filter</em></a><em>” in your favorite podcasting app.</em></p><p><em>Originally published at </em><a href="https://jere.my/the-limits-of-zero-tolerance-animated-pornography/"><em>https://jere.my</em></a><em> on May 29, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=9cd77c1918f8" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How LGBTQ+ content could become illegal]]></title>
            <link>https://medium.com/@jmalcolm/how-lgbtq-content-could-become-illegal-ee8ca9d19993?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/ee8ca9d19993</guid>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Mon, 24 Mar 2025 00:31:17 GMT</pubDate>
            <atom:updated>2025-03-25T18:58:12.335Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*00ZtPn88jMdm_mkRSuGDlg.jpeg" /></figure><p>In 2020, Yulia Tsvetkova, a 26-year-old Russian theater director and artist, <a href="https://hyperallergic.com/537146/russian-artist-faces-six-years-in-jail-for-pro-lgbtq-social-media-posts/">found herself under house arrest</a>, facing up to six years in prison. Her crime? Sharing feminist and LGBTQ+-friendly artwork on social media, which authorities branded as “propaganda of non-traditional sexual relations among minors.” Just a year earlier, Michelle, a 53-year-old transgender woman, was <a href="https://meduza.io/en/feature/2019/12/02/russian-trans-woman-sentenced-to-potentially-fatal-three-years-in-prison-for-posting-manga-on-social-media">sentenced to three years</a> in a Russian men’s prison, where she faced potential violence and medical neglect, for posting erotic manga illustrations online. Prosecutors twisted her art into evidence of child abuse, ignoring her identity and the context of her work.</p><p>These aren’t isolated incidents. They’re stark reminders of how governments can wield vague laws to silence queer voices, turning self-expression into a criminal act. Russia’s crackdown on “non-traditional” content is a chilling blueprint, one that feels uncomfortably relevant as the United States veers toward conservative authoritarianism.</p><p>With cultural tides shifting, how confident can we be that LGBTQ+ individuals, and the platforms hosting their voices, won’t face similar criminalization here? History offers little comfort: from <a href="https://www.theguardian.com/books/2017/mar/24/refuge-and-rebellion-how-queer-artists-worked-in-the-shadow-of-the-law">Joe Orton’s 1962 arrest</a> in the UK for defacing library books to the censorship battles over <a href="https://archive.nytimes.com/www.nytimes.com/books/01/04/08/specials/ginsberg-controversy.html">Allen Ginsberg’s <em>Howl</em></a> in 1950s America, queer art has long been a target. Today, as book bans surge and political rhetoric sharpens, the question isn’t if this could happen; it’s when. More urgently, what can human rights activists and trust and safety professionals do to protect these freedoms before they’re lost to us all?</p><h3>A swing against LGBTQ+ content online</h3><p>Not long ago, Big Tech platforms <a href="https://www.nbcbayarea.com/on-air/as-seen-on/google_-facebook-floats-poised-for-pride-parade-in-san-francisco_bay-area/1972766/">waved rainbow flags</a> and touted inclusivity. Now, hate speech targeting transgender people is openly allowed on platforms like <a href="https://www.them.us/story/elon-musk-x-ban-users-cisgender-slur">X</a> and <a href="https://apnews.com/article/meta-facebook-hate-speech-trump-immigrant-transgender-41191638cd7c720b950c05f9395a2b49">Facebook</a>, echoing the U.S.’s <a href="https://thehill.com/homenews/state-watch/5186943-lgbtq-groups-call-on-democrats-to-do-more-to-protect-their-rights/">rollback of transgender rights</a>. Are we nearing a day when hosting LGBTQ+ content becomes a crime in itself?</p><p>Though it sounded far-fetched a year ago, the pieces of a ban are falling into place. 
A slick rhetorical trick equates queer content (think inclusive sex ed, heartfelt memoirs, or a kids’ book about two penguins) with hardcore porn, obscenity, and even child sex abuse material (CSAM), all swept under the vague label of “content harmful to minors.” With this sleight of hand, laws that look reasonable at first glance, like age verification for adult sites or holding platforms accountable for harmful content, threaten to silence queer kids seeking community and trans adults sharing their truth online.</p><p>Below, I unpack three tactics that could underpin an LGBTQ+ content ban: two already in motion, one looming on the horizon. I’ll make the case that anyone who values internet freedom and LGBTQ+ rights must act now to counter these attacks before they lock us out of a free web. I’ll share how I’m fighting back, and I invite you to join me, whether by speaking out, coding solutions, or amplifying this fight.</p><h3>Threat 1: Age verification laws</h3><p>The first tactic is already rolling out: laws that cloak censorship as child protection, starting with age verification. The far-right Heritage Foundation’s <a href="https://static.project2025.org/2025_MandateForLeadership_FULL.pdf">Project 2025 agenda</a> for Trump’s second term lays it bare:</p><blockquote><em>Pornography should be outlawed. The people who produce and distribute it should be imprisoned. Educators and public librarians who purvey it should be classed as registered sex offenders. And telecommunications and technology firms that facilitate its spread should be shuttered.</em></blockquote><p>This isn’t <a href="https://www.techdirt.com/2024/09/16/heritage-foundation-admits-kosa-will-be-useful-for-removing-pro-abortion-content-if-trump-wins/">just about porn</a>: it equates LGBTQ+ sex education, dismissed as “gender ideology,” with an existential threat to conservative values, <a href="https://www.heritage.org/gender/commentary/how-big-tech-turns-kids-trans">blaming Big Tech</a> for enabling both. Offline, this conflation first hit libraries. Across the U.S., lawmakers are <a href="https://iowacapitaldispatch.com/2025/02/17/bill-proposes-removing-obscenity-law-exemptions-for-libraries-schools/">proposing</a> and <a href="https://www.savannahnow.com/story/news/politics/state/2025/03/04/is-senate-bill-74-to-protect-children-or-about-censorship-and-fear/81258645007/">passing</a> bills to jail librarians for lending books deemed “harmful to minors,” a label <a href="https://www.marshall.edu/library/bannedbooks/gender-queer/">slapped on LGBTQ+ works</a> like the award-winning <em>Gender Queer</em>.</p><p>Now, this library crackdown is fueling a wave of state age verification laws online. These rules force platforms to verify users’ ages before granting access to anything vaguely “harmful to minors,” a net that could catch everything from memoirs to queer teen forums. Texas’s law is under <a href="https://www.aclu.org/cases/free-speech-coalition-inc-v-paxton">Supreme Court review</a> right now, and its fate could decide whether dozens of similar measures survive or collapse.</p><h3>Threat 2: Section 230 repeal</h3><p>If age verification falters, there’s another play in the works: unraveling the law that keeps platforms safe from lawsuits. 
Whatever the Supreme Court decides on age laws, Section 230 of the 1996 Communications Decency Act stands as a key shield for sites hosting user-generated LGBTQ+ content labeled “harmful to minors.” It’s the rule that protects platforms from liability for what users post while letting them moderate freely: basically, the internet’s free-speech backbone.</p><p>Gutting Section 230 could force platforms to preemptively scrub LGBTQ+ content, no age checks needed. Bipartisan bills like the EARN IT Act (discussed <a href="https://circleid.com/posts/20220208-the-earn-it-act-the-wrong-solution-to-a-complex-problem">here</a> and <a href="https://jere.my/four-proposed-child-safety-laws-four-approaches/">here</a>) are pushing this, tying immunity to aggressive crackdowns on CSAM. The result? Over-censorship on steroids. After FOSTA-SESTA passed, platforms didn’t just target sex trafficking; they <a href="https://prostasia.org/blog/from-the-newsletter-why-we-still-oppose-the-earn-it-act/">axed legal content overnight</a>. Tumblr’s 2018 porn ban, meant to dodge liability, <a href="https://prostasia.org/blog/tumblrs-adult-content-ban-admission-defeat/">ended up nuking queer art</a> and support forums too. This could wipe out trans creators’ posts or queer teen support groups in a heartbeat, all to avoid a lawsuit.</p><p>Here’s the twist: while the Trump-aligned Heritage Foundation pushes deregulation elsewhere, bipartisan calls to dismantle Section 230 keep growing. Some Democrats, <a href="https://www.techdirt.com/2025/02/21/while-democracy-burns-democrats-prioritize-demolishing-section-230/">oddly</a>, lead the charge, ignoring its role as a free-speech lifeline. A second Trump term might not prioritize this; even Elon Musk, who’s tweaked X with 230 in mind, calls full repeal a “<a href="https://x.com/cb_doge/status/1852522550519033895">disaster</a>” after leaning on it in court. Still, that’s cold comfort when queer voices hang in the balance, one policy shift from being silenced.</p><h3>Threat 3: Quasi-official censorship</h3><p>Beyond age laws and Section 230, a third threat looms, one that’s more speculative but already weaponizable. The Internet has a built-in takedown machine: the <a href="https://jere.my/how-your-platform-can-find-report-csam/">CSAM hash list</a> run by the National Center for Missing and Exploited Children (NCMEC). This quasi-governmental group maintains a database of real child abuse images, hashed so that platforms can scan for and remove matches, often triggering criminal probes. It’s meant for a narrow purpose: stopping actual child pornography, as U.S. law defines it. So why isn’t it censoring queer content yet? Because until now it’s been tightly controlled.</p><p>To date, NCMEC has assiduously limited its hash list to real child abuse images; a <a href="https://www.missingkids.org/content/dam/missingkids/pdfs/Concentrix-NCMEC-document.pdf">2024 audit</a> helped to ensure that this was so, backed by Supreme Court rulings <a href="https://prostasia.org/blog/defending-lolita-from-censorship/">tying the legal definition of CSAM</a> to harm against actual kids. But pressure’s mounting from both <a href="https://www.missingkids.org/blog/2024/generative-ai-csam-is-csam">within</a> and <a href="https://www.aei.org/technology-and-innovation/ai-revolution-raises-terrifying-questions-about-virtual-child-pornography/">outside</a> NCMEC to stretch that definition.</p>
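<p>To see why that definitional pressure matters, it helps to picture the platform side of the machinery. What follows is a minimal sketch of hash-list scanning, not NCMEC’s actual system: it uses exact cryptographic hashes for simplicity, whereas production deployments rely on perceptual hashes (such as PhotoDNA, shared under NDA) so that resized or re-encoded copies still match, and the downstream handler here is a hypothetical stand-in.</p><pre>import hashlib

# Illustrative only: real hash lists arrive from a clearinghouse as opaque
# digests. The platform never sees the underlying images, only the hashes.
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",  # placeholder
}

def quarantine_and_report(data: bytes) -> None:
    # Hypothetical downstream step: remove the content and file a report
    # (in the U.S., a CyberTipline report to NCMEC).
    print("match: content removed and reported")

def scan_upload(data: bytes) -> bool:
    """Remove and report an upload if its digest appears on the list."""
    if hashlib.sha256(data).hexdigest() in KNOWN_HASHES:
        quarantine_and_report(data)
        return True
    return False</pre><p>The key property of this pipeline is that it is content-neutral: whatever digests are added to the list get removed automatically, with no independent review by the platform of what the list actually contains. That is exactly why the definition behind the list matters so much.</p>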
<a href="https://jere.my/generative-ai-and-children-prioritizing-harm-prevention/">Generative AI art</a> is the thin edge-uncontroversial to some, but opening the door to other artwork being included. In 2025, <a href="https://reason.com/2025/02/21/texas-cops-seized-photographs-from-a-museum-and-launched-child-pornography-investigation/">Texas police raided an art gallery</a> over non-sexualized pieces misidentified as child porn, while lawmakers have <a href="https://www.npr.org/2021/11/02/1051471236/texas-governor-abbott-calls-for-removal-of-obscene-school-library-books">slapped that same loaded term</a> on <em>Gender Queer</em>. Redefining “CSAM” to include LGBTQ+ content isn’t a leap-it’s a step.</p><p>Look abroad, and the warning signs flash brighter. The <a href="https://prostasia.org/blog/from-the-newsletter-cybertip/">Canadian Center for Child Protection</a> has overreached its NCMEC-like blocklist, demanding takedowns of frames from a kids’ movie and ethnographic photos-once even <a href="https://www.lateja.cr/sucesos/video-joven-de-17-anos-difundio-por-internet-14/66NV7C4YABAOPKTAU3YJ43BC2M/story/">reporting a teen for her blog art</a>. <a href="https://jere.my/ai-and-victimless-content-under-europes-csa-regulation/">Europe</a> and the <a href="https://jere.my/online-safety-bill-privacy-invasion/">United Kingdom</a> push AI to flag broad swaths of text and graphics, while <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">Australia</a> prosecutes over Simpsons parodies and <a href="https://www.news.com.au/national/nsw-act/crime/sydney-author-lauren-tesolinmastrosa-arrested-over-pedophilia-book/news-story/5babb82438d7adc5ca699c877b07641a">erotic novels for women</a>. These aren’t hypotheticals-they’re blueprints for abuse.</p><p>As a trust-and-safety professional, I’ve always said <a href="https://jere.my/three-guidelines-for-child-exploitation-policies/">we should tackle distasteful art</a> without diluting the horror of real CSAM-abuse of actual kids. But NCMEC’s already leaning right. Post-Trump’s reelection, it <a href="https://www.thehandbasket.co/p/ncmec-doj-lgbtqia-executive-order">scrubbed transgender victims from its site</a>. How much would it take to flip it into a censorship tool, hashing queer content as “illegal”? With lawmakers <a href="https://jere.my/child-protection-and-civil-liberties-in-the-balance/">long smearing LGBTQ+ folks as predators</a>, the groundwork’s there. This isn’t just crystal-ball gazing-it’s a threat we can’t ignore.</p><h3>Fighting back: a new Center for Online Safety and Liberty</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*aiq8QHdKVmCZuZSR.jpg" /></figure><p>This is a collective effort. COSL acts as an incubator for independent projects tied to a core mission: empowering individuals and communities to thrive online by building safer spaces, fostering creativity, combating harm, and championing digital rights and freedom. We’re confronting threats like age verification, Section 230 rollbacks, encryption battles, and content-scanning overreach. We’re crafting free, open-source trust-and-safety tools-starting with my own <a href="https://jere.my/dead-dove-content-warning-plugin-wordpress/">Dead Dove</a> and <a href="https://jere.my/modtools-image/">Modtools:Image</a>, with more ambitious ones ahead. 
And we’re fostering safe, inclusive communities, beginning with fan spaces (yes, I’m letting my geek flag fly).</p><p>Our first highlighted project, <a href="https://liberato.io">Liberato</a>, is a nonprofit hosting service, where I’m Head of Trust and Safety, that sets out to serve marginalized communities facing the highest risks of censorship and surveillance. It scans content against NCMEC’s CSAM hash list and removes matches; no compromises there. But if anyone demands we axe artistic or LGBTQ+ content, we’ll log it in a transparency archive and push back hard.</p><p>Liberato is only the beginning. Each month, COSL will unveil new efforts: podcasts, fundraisers, petitions, software, social platforms, all driving our cause forward.</p><p>But COSL needs you to succeed. These threats strike deep: silencing queer voices isn’t a future risk, it’s happening. If that stirs you, join us; there are plenty of ways that you can help.</p><p>However you contribute, COSL amplifies your impact. A freer, safer Internet starts with us. Let’s build it now.</p><p><em>Originally published at </em><a href="https://jere.my/how-lgbtq-content-could-become-illegal/"><em>https://jere.my</em></a><em> on March 24, 2025.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ee8ca9d19993" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Announcing AskLex.ai]]></title>
            <link>https://medium.com/@jmalcolm/announcing-asklex-ai-55273a6ed71b?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/55273a6ed71b</guid>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Thu, 30 Jan 2025 23:40:24 GMT</pubDate>
            <atom:updated>2025-01-30T23:40:24.665Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*-sWQoVhtt0TpnT8a-5y2aQ.jpeg" /></figure><p>Navigating legal challenges, whether business or personal, can feel overwhelming and time-consuming. It’s no wonder many people are now turning to generative AI chatbots such as ChatGPT to answer their legal questions, with greater convenience and far lower cost than traditional legal services.</p><p>But despite the advances made in generative AI over the past several years, relying on a chatbot’s advice on legal matters would be foolish. AI chatbots often return outdated, inaccurate, or even completely fictitious information. They’re also not great at identifying jurisdiction-specific nuances. Chatbots themselves will tell you that it’s a good idea to seek the advice of a real local lawyer before relying on AI answers.</p><p>So today I’m launching a new legal advice website, <a href="https://asklex.ai">AskLex.ai</a>, that makes that easy. Like other chatbots, AskLex.ai’s chatbot Lex is trained on publicly available legal texts, court decisions, educational resources, and legal articles, with some modifications specifically designed to prevent her from straying outside of her competence or drawing on information from foreign jurisdictions.</p><p>But unlike other chatbots, AskLex.ai allows you to upgrade your chat to include a real lawyer, licensed to practice in your area. I have some experience with this. More than 20 years ago, I launched <a href="https://web.archive.org/web/20020122115748/http://www.ilaw.com.au/">iLaw</a>, Australia’s first full-service, fully on-line legal practice, which presented a unique offer: “You can ask a question on any topic related to Australian law, and for only $50 a qualified iLaw consultant in your locality will contact you with the answer.”</p><p>Through AskLex.ai, I’m bringing that same offer back today — even at the same price! — with a few AI improvements. This video shows how it works:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2F7h1M6A8ZOEQ%3Ffeature%3Doembed&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3D7h1M6A8ZOEQ&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2F7h1M6A8ZOEQ%2Fhqdefault.jpg&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/900a00851a0d82f72289c65fff120c50/href">https://medium.com/media/900a00851a0d82f72289c65fff120c50/href</a></iframe><p>How accurate is Lex? Well, while developing AskLex.ai (and moonlighting on a legal advice line), I’ve spent about 400 hours testing her on topics as diverse as drafting a lease agreement, understanding your rights in a workplace dispute, licensing an invention, and negotiating a divorce. A few times, I was able to catch Lex out. But she caught me out a few times too. Overall, I would say that Lex and a real lawyer make a good team together.</p><p>One of the most exciting improvements over my previous online legal advice service iLaw is that AskLex.ai also offers fixed-price phone consultations (also just $50 during the alpha release phase) and simple legal document drafting (currently $100). Additionally, the site offers a $100 membership that provides clients with unlimited chat access both to Lex and to a real local lawyer.</p><p>AskLex.ai is currently in an alpha public release. During this phase, Lex’s advice is limited to U.S. and Australian law, since those are the jurisdictions in which I am licensed, and in which I have been able to review and tune her accuracy.</p>
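<p>AskLex.ai’s internals aren’t public beyond this description, so purely as an illustration, here is a minimal sketch of how a jurisdictional guardrail of this kind can be layered ahead of a language model. Everything in it is hypothetical (the names, the prompt, and the <code>llm</code> callable); it shows the general technique rather than Lex’s actual implementation:</p><pre>SUPPORTED_JURISDICTIONS = {"us", "au"}  # alpha phase: U.S. and Australian law only

SYSTEM_PROMPT = (
    "You are Lex, a legal information assistant. Answer only questions about "
    "U.S. or Australian law. If a question concerns another jurisdiction, or "
    "calls for advice beyond general legal information, decline and offer to "
    "connect the user with a licensed local lawyer."
)

def ask_lex(question: str, jurisdiction: str, llm) -> str:
    """llm is any chat-completion callable taking (system, user) strings and
    returning a string. The hard gate below runs before the model is called,
    so the scope limit doesn't rely on the prompt alone."""
    if jurisdiction.lower() not in SUPPORTED_JURISDICTIONS:
        return ("Lex is currently limited to U.S. and Australian law. "
                "Would you like to upgrade to a consultation with a lawyer?")
    return llm(SYSTEM_PROMPT, question)</pre><p>The value of layering the two is that a system prompt alone can be talked around, while a deterministic pre-check on the user’s declared jurisdiction cannot.</p>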
<p>The availability of local legal advice from a real lawyer is currently limited to New York State and Western Australia (the latter for pro bono cases only); however, I plan to expand the service soon.</p><p>AI chatbots offer incredible convenience and efficiency, but they are not a substitute for the expertise of a licensed lawyer when it comes to serious legal matters. AskLex.ai bridges the gap by providing users with the best of both worlds: AI-driven insights and the option to upgrade to professional legal advice at an affordable price. <a href="https://asklex.ai">Try AskLex.ai out yourself</a> — I’d love to hear your comments!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=55273a6ed71b" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Bluesky melts down over Jesse Singal]]></title>
            <link>https://medium.com/@jmalcolm/bluesky-melts-down-over-jesse-singal-1ab4bd42d414?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/1ab4bd42d414</guid>
            <category><![CDATA[bluesky-social]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Mon, 16 Dec 2024 03:30:45 GMT</pubDate>
            <atom:updated>2024-12-16T04:35:03.195Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*H4msWVmmtQamtKhe.jpg" /></figure><p>In the first week of December 2024, controversial journalist Jesse Singal joined upstart social network Bluesky. Bluesky had been experiencing massive user growth since the election result had been called for Donald Trump in November, as many users of X (formerly Twitter) looked to jump ship for a vessel not helmed by Trump ally Elon Musk.</p><p>Due to Singal’s record of journalism critiquing the case for youth gender transition, and of aggressively clashing with his critics, his presence on Bluesky created immediate tension within Bluesky’s existing user base, which skewed transgender. Many trans users had adopted Bluesky early, while Musk was already making them feel unwelcome on X, such as by <a href="https://x.com/elonmusk/status/1671370284102819841">labelling the term “cisgender” as hate speech</a>.</p><p>Singal soon became Bluesky’s most blocked user — but many didn’t simply want him blocked, they wanted him gone, and were prepared to raise the stakes until they got their way. One of the tactics employed was to throw additional mud, accusing Singal not only of misconduct in his reporting on trans issues, but also of being a pedophile.</p><p>On 13 December, Bluesky’s trust and safety team <a href="https://bsky.app/profile/safety.bsky.app/post/3ld7e2hsd322r">made a ruling</a>. Ignoring the pedophilia slurs (which his more level-headed critics on Bluesky <a href="https://bsky.app/profile/sirosenbaum.bsky.social/post/3lcxvi6g2ks2c">recognized as risible and false</a>), and focusing on more credible allegations that Singal had infringed the privacy of Bluesky users, the team ultimately decided that despite the outcry, he had done nothing to violate the Bluesky terms of service and could stay. But was this the right decision?</p><h3>The case against Singal</h3><p>I am the father of a wonderful trans daughter, who began her medical transition while she was underage. The policies that Singal’s journalism advocates for would have made it more difficult for her to begin her transition. While I cannot speak to the experiences of other trans children, I do know that any such delay to my daughter’s transition would not have been what she needed.</p><p>While I accept Singal’s right to conduct journalism that I disagree with, I ultimately supported those who called for Singal to be banned from Bluesky, on the ground that he had stepped over the line by violating a rule against promoting material from hate groups when he posted a screenshot from the deplorable <a href="https://www.nbcnews.com/tech/internet/cloudflare-kiwi-farms-keffals-anti-trans-rcna44834">doxxing and cyberstalking community, Kiwifarms</a>.</p><p>When <a href="https://blog.cloudflare.com/kiwifarms-blocked/">Cloudflare banned Kiwifarms in 2022</a>, it identified the website as an exceptional case in which censorship was justified due to its promotion of real-world violence. Because Singal has been allowed to post screenshots from Kiwifarms, other Bluesky users are now emboldened to do the same, which can only lead to more doxxing and harassment of trans people.</p><p>With that said, Singal and I also have a history. 
In 2021, I spoke to Singal’s colleague Katie Herzog on background for <a href="https://www.blockedandreported.org/p/because-god-hates-us-or-is-dead-heres-90c">an episode of their podcast</a>, Blocked &amp; Reported, in which the two journalists investigated one of my earliest trust and safety clients, <a href="https://mapsupport.club/">MAP Support Club</a> (MSC) — a support group for teenagers and adults who identified as experiencing sexual attraction towards younger children, which they had chosen never to act upon.</p><p>The sensitivity and the importance of this group were both immediately apparent to me when I first assumed its trust and safety role, and I took that role very seriously:</p><ul><li>I consulted child sexual abuse (CSA) prevention professionals about what safeguards would be needed, and raised funding from the <a href="https://justbeginnings.org/">Just Beginnings Collaborative</a> to support those measures.</li><li>I secured a partnership with survivor-led CSA prevention group <a href="https://www.stopitnow.org/">Stop It Now</a>, to ensure that its helpline operators were available in the group to provide regular guided group support sessions.</li><li>I <a href="https://github.com/prostasia/rocketchatcsam">commissioned the development of software</a> that would ensure that no illicit content was uploaded to the club’s chat forum, and collaborated with MSC’s administrators on the strengthening of the group’s safeguarding rules.</li><li>I engaged an independent team from Nottingham Trent University to conduct an evaluation of the group’s safety and effectiveness.</li></ul><p>Singal and Herzog conducted their investigation of the group in the face of a <a href="https://jere.my/child-protection-and-civil-liberties-in-the-balance/">fierce social media backlash</a> against it and its fiscal sponsor, from those wrongly convinced of a conspiracy theory that it was a front for a grooming operation.</p><p>But the journalists found otherwise, concluding that “They are genuinely trying to make life better, to reduce the likelihood of children being harmed. And they’re also trying to save the lives of people who have a… shitty lot in life.” Independent academic experts have since <a href="https://digitalcommons.wcl.american.edu/research/80/">reached the same conclusion</a>, and the group remains active today, in partnership with professional clinicians.</p><p>Beyond this, the only evidence offered to support the slurs against Singal was a series of articles that he wrote, which his critics perceived as being <a href="https://www.thecut.com/2016/09/britain-has-a-hotline-for-pedophiles.html#_ga=2.73832657.2138172432.1630631362-1425867110.1621528509">too sympathetic to pedophiles</a>, in that they recommended “treating people with pedophilic interests like human beings who can be reasoned and empathized with.” Such a sympathetic framing can be uncomfortable to read, but it does align with the approach taken by <a href="https://time.com/6253908/america-child-sex-abuse-prevention/">mainstream public health professionals</a>.</p><h3>The case against Rodericks</h3><p>The movement to have Singal expelled from Bluesky didn’t stop with him, but extended to Bluesky Head of Trust &amp; Safety, Aaron Rodericks, over his inaction on the matter and his perceived favouritism towards Singal. 
And in echoes of an attack previously levied against <a href="https://jere.my/2022-year-of-the-groomer/">Twitter Head of Trust &amp; Safety Yoel Roth</a> by Elon Musk himself, this included spreading false allegations that Rodericks too was a pedophile.</p><p>As far as I know, I have never met Rodericks personally. However, although my Bluesky presence was then (and still is) pretty small, he and I had been mutual followers since about early 2024, due to our shared industry connection. At that time, another brouhaha was brewing on Bluesky over how it was enforcing its guidelines against child abuse. While Bluesky was banning users who promoted or excused abuse, many argued that it ought to also ban those who liked or shared suggestive-but-legal artwork of fictional characters resembling children or animals, and those who admitted to struggling with pedophilic impulses.</p><p>One user who argued that Bluesky had the balance right and shouldn’t be engaging in a broader crackdown was a queer artist named Terra Wilder, who wrote (from a now-deactivated account):</p><blockquote><em>Ok, what alternative should they have as a space to be social because they’re still human beings. Your idea is entirely unreasonable, they’ll never stop trying to enter spaces. Or should we just kill them all. That’ll definitely stop them from emerging forever. … Nobody wants to handle this as a realistic problem it is just gut disgust and no solution.</em></blockquote><p>This prompted a pile-on of abuse against her in which she herself was accused of pedophilia, with the ultimate outcome that she attempted to take her own life and was admitted to hospital. I expressed outrage at this in an exchange of my own, directed to the prominent 35-thousand-follower account that had been leading the pile-on. Referring to Terra and to another user, Jamie, who had been subjected to a similar pile-on, I wrote:</p><blockquote><em>Jamie is right. There is no credible case to be made that either they or the person who was hospitalised were pro-abuse. You just don’t want queer people like them to have community. But they correctly call you on your bullshit: this isn’t about abuse. It’s about you taking offense and lashing out.</em></blockquote><p>In retaliation for me making these comments, the user in question posted false smears against me based on an article written by <a href="https://rationalwiki.org/wiki/Reduxx">Anna Slatz</a>, a notorious transphobic far-right journalist and Kiwifarms user who had once published a Nazi manifesto. Her secondary source was a smear website from a disgruntled former volunteer colleague of mine at the Internet Corporation for Assigned Names and Numbers (ICANN), who was <a href="https://web.archive.org/web/20230322165150/atlarge-lists.icann.org/pipermail/at-large/2020q1/006795.html">dismissed from that organisation</a> after stalking and abusing several other colleagues.</p><p>Rodericks was then drawn into the dispute by association, simply because he followed me. Rather than ignoring this bullying, or at least investigating the sources behind it as he should have done, Rodericks capitulated and unfollowed me.</p><p>Predictably, this wasn’t enough to satisfy these extremists, who continued to push for a mass crackdown on accounts associated (in their minds) with pedophilia or zoophilia, which Bluesky eventually implemented in November. 
One message from a Bluesky moderator (or perhaps an AI) to a user banned over furry art provided a <a href="https://bsky.app/profile/deadlytwisted.bsky.social/post/3lbx4lvll5s2b">sweeping justification for art censorship</a>:</p><blockquote><em>Depicting sexual acts between humans and animals, even as art, is deeply problematic. Animals cannot consent, and such depictions promote the exploitation and abuse of animals. Art has a powerful influence and can normalize harmful behaviors, unintentionally endorsing or promoting these acts. Therefore, it’s crucial to avoid representing such.</em></blockquote><p>But predictably again, this crackdown resulted in the accounts of many innocent trans people being targeted, which <a href="https://bsky.app/profile/safety.bsky.app/post/3lbsqm7kfns23">Bluesky itself acknowledged,</a> reversing many of the bans by late November. While attempting to placate one faction of users, Rodericks had outraged another. So it goes in this profession.</p><p>Six months after Terra Wilder’s hospitalisation, many remain convinced that Rodericks, myself, and Jesse Singal are all engaged in a joint conspiracy against them aimed at promoting pedophilia and undermining trans people. It would be laughable if the real-world consequences of such misinformation and bullying campaigns weren’t so serious.</p><h3>Lessons for Bluesky and other platforms</h3><p>I have always insisted that platforms have a social as well as a legal responsibility to avoid hosting sexual abuse content or facilitating grooming. In other blog articles, I have outlined some of the practical approaches that they can implement for <a href="https://jere.my/how-your-platform-can-find-report-csam/">finding and reporting CSAM</a>, and for making their platforms <a href="https://jere.my/how-your-platform-can-protect-young-people-from-online-harms/">safe for younger users</a>.</p><p>But at the same time, not everything that causes users to cry “pedophile” should be actioned. In fact, much of the time when this word is uttered, it is used purely for its rhetorical effect, by users who are themselves engaged in antisocial behaviours such as targeted harassment.</p><p>The users who engage in such pedojacketing abuse may honestly feel that they are justified in doing so. Jesse Singal has, undoubtedly, made it more difficult for transitioners, and his reporting on pedophilia, while scientifically accurate, may be legitimately triggering for abuse survivors, who may feel that he expresses greater sympathy for pedophiles than for them.</p><p>Associating professionals with stigmatised populations that they report on or work with is so common that there’s even a term for it: <a href="https://journals.sagepub.com/doi/full/10.1177/10790632221146496">courtesy stigma</a>. But that <a href="https://jere.my/2022-year-of-the-groomer/">doesn’t make it OK</a>. Too often, the journalists and trust and safety professionals who are targeted are <a href="https://www.psychologytoday.com/us/blog/prevention-now/202312/encountering-hatred-in-child-sexual-abuse-prevention-work">themselves marginalised</a>. 
It is no coincidence that pile-on campaigns targeting these individuals attract participants from far-right groups such as Kiwifarms, even when they may have been initiated by progressives with good intentions.</p><p>As <a href="https://jere.my/child-protection-censorship-on-wikipedia/">other platforms have also discovered</a>, a small but very vocal faction of users who fling pedophilia smears can have an outsized negative influence over a platform’s moderation practices. Bluesky must learn to resist their influence. Its <a href="https://bsky.social/about/blog/03-12-2024-stackable-moderation">composable moderation</a> architecture already provides effective mechanisms for users who are triggered by content that offends them. Bending further to those who use ugly false smears to get their way is not in the longer-term interests of Bluesky as a healthy community.</p><p><em>Originally published at </em><a href="https://jere.my/bluesky-melts-down-over-jesse-singal/"><em>https://jere.my</em></a><em> on December 16, 2024.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1ab4bd42d414" width="1" height="1" alt="">]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Australia Versus Human Rights Online]]></title>
            <link>https://medium.com/@jmalcolm/australia-versus-human-rights-online-a0e495135211?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/a0e495135211</guid>
            <category><![CDATA[australia]]></category>
            <category><![CDATA[misinformation]]></category>
            <category><![CDATA[age-assurance]]></category>
            <category><![CDATA[human-rights]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Thu, 28 Nov 2024 06:47:30 GMT</pubDate>
            <atom:updated>2024-11-28T09:04:29.668Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*gs9tyza7GgVtq4T1nO9gzQ.png" /></figure><p>Australia is one of the few countries of its size, democratic or otherwise, that doesn’t protect human rights in its constitution. Although the High Court has judicially constructed a few constitutionally implied human rights, these are very narrow. In particular, the implied freedom of speech right recognised by the High Court only extends to political speech, leaving the vast majority of other speech acts unprotected.</p><p>Whether recognised in domestic law or not, Australia is still obliged to uphold the human rights recognised in the treaties to which it is a party, such as the International Covenant on Civil and Political Rights (ICCPR) and the Convention on the Rights of the Child (CRC). There is a Parliamentary Joint Committee on Human Rights (PJCHR) to review bills and legislative instruments for human rights compatibility based on Australia’s international obligations.</p><p>However, the PJCHR does not have the power to block legislation, and the recommendations it does make are both politically influenced and time constrained. What this means is that if a person believes that their human rights are infringed by an Australian law, and those rights aren’t reflected in domestic legislation, their practical options for obtaining recourse are very limited.</p><h3>Efforts towards a Human Rights Act</h3><p>In my recent submission on <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">Modernising Australia’s National Classification Scheme</a>, I referred to a rare case in which the United Nations Human Rights Committee (UNHRC) ruled directly in favour of an Australian whose human rights were infringed by a Tasmanian anti-homosexuality law, which led to the passage of the Human Rights (Sexual Conduct) Act 1994 to ensure that those rights were reflected in Commonwealth law, so that they could be directly enforced.</p><p>This points in the direction of a possible compromise solution: if amending the constitution to add a Bill of Rights is improbable (the last attempt at enshrining new rights in the Constitution, the proposal to establish an Indigenous Voice to Parliament, <a href="https://www.aec.gov.au/Elections/referendums/2023.htm">failed spectacularly in 2023</a>), why couldn’t the Parliament at least pass a Human Rights Act? While weaker than a constitutional instrument, this is, after all, the approach that New Zealand took with the passage of its New Zealand Bill of Rights Act 1990 (NZBORA).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*f-ETgOKuNCnW9F6C.jpg" /></figure><p>Well, there have been proposals for Australia to do just this. Over 2008–2009, a National Human Rights Consultation recommended a federal Human Rights Act. But despite widespread public support, the proposal was never implemented, due to lack of political will. Today, the passage of a federal Human Rights Act remains the subject of advocacy by various groups including <a href="https://alhr.org.au/time-australia-walk-talk-human-rights-experts-welcome-australian-human-rights-commission-proposal-federal-human-rights-act/">Australian Lawyers for Human Rights</a> and <a href="https://alhr.org.au/time-australia-walk-talk-human-rights-experts-welcome-australian-human-rights-commission-proposal-federal-human-rights-act/">Amnesty Australia</a>, but is no closer to realisation.</p><p>The same is mostly true at a State level. 
In Western Australia, an independent consultation committee convened by the government in 2007 strongly recommended the introduction of a Human Rights Act. But no action was taken then, nor has any been taken since. Yesterday, I attended a convening of the coalition <a href="https://www.wa4hra.com.au/">Western Australia for a Human Rights Act</a>, which is still advocating for the adoption of a WA Human Rights Act, following the lead of Queensland, the Australian Capital Territory, and Victoria, which have already adopted similar laws.</p><p>Exactly what this Act could do to help those who are subjected to government human rights violations will be considered towards the conclusion of this article. But to set some context, what evidence exists that such laws are needed in the first place? Are Australians actually more vulnerable to governmental overreach affecting their rights, compared with other countries that explicitly recognise human rights in domestic law? We need look no further than the current Federal Parliamentary session to find an answer to those questions.</p><h3>New laws: misinformation versus freedom of expression</h3><p>The absence of constitutional or even statutory protection for human rights has offered Australian governments the latitude to pass some of the world’s most invasive restrictions on Internet users, and to grant some of the most sweeping law enforcement powers.</p><p>The most effective lever for convincing the public that such authoritarian laws are justified is frequently to invoke child safety. Although experts recommend that child safety be treated as a <a href="https://time.com/6253908/america-child-sex-abuse-prevention/">public health issue</a>, it is more expedient for politicians to <a href="https://prostasia.org/blog/how-the-politics-of-child-protection-gives-us-laws-that-harm-children/">“securitise” the issue</a>, since this enables them to push through extreme measures of surveillance, censorship, and criminalisation that the public otherwise wouldn’t accept.</p><p>This year alone, two federal initiatives fit that bill. The first was the Combatting Misinformation and Disinformation Bill 2024, which was promoted as being necessary to safeguard young people from harmful content, particularly misinformation related to health, safety, and social issues.</p><p>But the <a href="https://humanrights.gov.au/our-work/legal/submission/combatting-misinformation-and-disinformation-bill-2024">Australian Human Rights Commission</a>, <a href="https://www.acl.org.au/media/acl-celebrates-defeat-of-misinformation-bill/">faith-based organisations</a>, and <a href="https://www.youtube.com/watch?v=ZAL1ZkSPuZ0">opposition political parties</a> were among those expressing apprehension about the government’s power to define and regulate “truth,” raising alarms about potential censorship and government overreach. In the face of these widespread concerns, the Bill was ultimately <a href="https://www.abc.net.au/news/2024-11-24/laws-to-regulate-misinformation-online-abandoned/104640488">abandoned in November 2024</a>.</p><h3>New laws: privacy versus age assurance</h3><p>The second such measure is the Online Safety Amendment (Social Media Minimum Age) Bill, which is expected to pass this week, given its bipartisan support. 
Nevertheless, this populist bill, which would require social media companies to enforce a minimum age of 16, has also been <a href="https://www.abc.net.au/news/2024-11-27/social-media-ban-legislative-enshittification-cultural-moment/104648512">broadly criticised</a> by experts for its impact on the human rights of young people to engage in the online world.</p><p>During the scant 24 hours that a rushed consultation remained open, over 15,000 submissions were received. In my own response, I wrote:</p><blockquote><em>The bill undermines the crucial role of parents, who are better equipped than the government to assess the risks and benefits of their children’s participation in online society. By imposing a blanket ban, the bill incentivizes children and parents to circumvent the law, fostering disrespect for both legal authority and the institutions that enact such measures.</em></blockquote><blockquote><em>Moreover, the bill presents significant risks to privacy and freedom of expression. In their effort to comply, social media platforms may collect additional personal information, increasing the potential for misuse or data breaches. Requiring identification could also chill free speech, including political discourse, raising constitutional concerns and violating fundamental rights.</em></blockquote><blockquote><em>Many children are ready to engage with social media under parental supervision well before the age of 16. For marginalized groups or children with disabilities, social media offers essential support and connection. Denying access to these platforms could have profoundly negative effects, stunting their social development and leaving them ill-prepared to navigate adulthood when they eventually gain legal access.</em></blockquote><h3>Enforcement creep: free expression</h3><p>It’s not only new laws that have the potential to encroach upon the human rights of Australian Internet users, but also the “enforcement creep” that occurs when regulators test the use of their powers under existing laws. The best example of that this year occurred during a stand-off between Julie Inman-Grant, Australia’s eSafety Commissioner, and Elon Musk over demands for the global removal of a violent video from X (formerly Twitter) showing a stabbing attack on a bishop during a livestreamed sermon in April 2024.</p><p>Inman-Grant issued a takedown order under Australia’s Online Safety Act, aiming to prevent the graphic content from being accessible to children. She declined to accept X’s compromise of blocking the video for Australian users only, arguing (in fairness, correctly) that a Virtual Private Network (VPN) could be used to bypass a national-level block. While other platforms complied, X refused, arguing that the video did not violate its policies and raising concerns about the implications for global censorship. Inman-Grant took X to Australia’s Federal Court to enforce the removal but ultimately <a href="https://www.abc.net.au/news/2024-06-05/esafety-elon-musk-x-church-stabbing-videos-court-case/103937152">dropped the case in June 2024</a>.</p><p>Elon Musk is a poor champion of freedom of expression, given his censorship of simple words like “cisgender” on X, and his penchant for using pedophilia smears against critics, competitors, and <a href="https://jere.my/2022-year-of-the-groomer/">even his own former Head of Trust &amp; Safety</a>. 
Nonetheless, Musk was on the right side of this issue: nobody elected Julie Inman-Grant to any position at all, least of all to the position of global Internet censor.</p><h3>Enforcement creep: encryption</h3><p>A second example of Australian regulators foreshadowing their intention to exercise their existing powers in ways that threaten human rights came in September 2024, when the head of the Australian Security Intelligence Organisation (ASIO) said that the organisation <a href="https://www.abc.net.au/news/2024-09-05/asio-chief-mike-burgess-warns-tech-companies-encrypted-chats/104308374">may start forcing technology companies</a> to provide access to encrypted chats during certain security investigations.</p><p>The following month, the encrypted <a href="https://www.404media.co/encrypted-chat-app-session-leaves-australia-after-visit-from-police-2/">messenger company Session left Australia</a> to reestablish itself in Switzerland, after police visited an employee’s residence to ask questions about the app’s operation and seek details about a particular user.</p><p>The interception and decryption of online communications is not a new power of Australian regulators, but it has lain mostly dormant since the passage of the <a href="https://www.legislation.gov.au/C2018A00148/latest/text">Assistance and Access Act</a> in 2018. Then-Prime Minister Malcolm Turnbull was rightly ridiculed the previous year for blustering that “The laws of mathematics are very commendable, but the only law that applies in Australia is the law of Australia.” But with executives of encrypted Internet platforms actually coming under police investigation, who is laughing now?</p><h3>Violating children’s rights: privacy</h3><p>It’s notable that Queensland is one of three states and territories, along with Victoria and the Australian Capital Territory, that have so far adopted legislation designed to prevent the government from infringing human rights, because, as I’ve <a href="https://jere.my/child-protection-and-civil-liberties-in-the-balance/">previously described</a>, it was also Queensland that hosted perhaps the most egregious violations of children’s rights ever committed in a law enforcement operation.</p><p>The operation, in which authorities directly uploaded abuse images of non-consenting minors for the consumption of online abusers, was subsequently declared a “clear violation of the UN children’s convention” by UNICEF, and <a href="https://www.vg.no/nyheter/i/L8ly4/unicef-clear-violation-of-un-childrens-convention">also faced condemnation</a> from Amnesty International. <a href="https://www.vg.no/nyheter/i/9jz75/police-acting-as-judges">Forum shopping</a> by cooperating law enforcement agencies had settled on Queensland as a base for the operation precisely because it lacked the legal safeguards that would have made the operation unlawful elsewhere.</p><h3>Violating children’s rights: real CSAM versus obscenity</h3><p>Another area in which children’s rights are infringed, and one that I have <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">devoted a lot of attention to on this blog</a> due to its intersection with my professional work, is the way in which image-based abuse crimes against children are conflated with obscenity prosecutions over offensive art or literature. 
I have argued that treating real child abuse content as if it were nothing more than a risqué anime cartoon is bad policy, and indeed a <a href="https://jere.my/child-protection-and-civil-liberties-in-the-balance/">disgraceful manifestation of rape culture</a>.</p><p>Furthermore, as I recently uncovered through FOI applications to the Australian Federal Police (AFP) and the Commonwealth Director of Public Prosecutions (CDPP), neither agency even tracks the distinction between real abuse crimes and fictional or fantasy sexual materials. Seeking to learn more, I made a further FOI request for the CDPP’s internal prosecution guidelines for such cases. The CDPP has refused to comply, and I have requested a review from the Office of the Australian Information Commissioner (OAIC), which remains pending as at the date of this article.</p><p>Meanwhile, I am pursuing law reform through other avenues. In a letter on this topic to the Attorney-General, I recently wrote:</p><blockquote><em>The current approach risks sending the message that federal criminal law is less about preventing harm to children and more about policing perceived deviant desires. This is a misstep, as it trivialises the horrific nature of real image-based abuse and conflates it with offenses better addressed through obscenity laws. It is vital to recognise that the exploitation of real children is a fundamentally different and far more serious issue than fictional or symbolic representations… A legislative adjustment to address this disparity would represent a significant step toward ensuring that the law’s primary focus remains on protecting real children from harm.</em></blockquote><h3>How far would a Human Rights Act go to help?</h3><p>What has been covered so far suffices to show that the human rights of Australians are under constant threat from overreaching and misguided government policies aimed at controlling our behaviour online. The biggest remaining open question is what, if anything, a Human Rights Act could do to fix that. In short, the answer is that while a Human Rights Act offers significant potential benefits, it is not a panacea.</p><p>One of its key limitations is that, unlike constitutional rights, statutory rights can be amended or overridden by subsequent legislation. This means that while the Act could provide robust protections on paper, it would remain vulnerable to political shifts. For instance, a future government could weaken or dismantle its protections, especially if widespread public support were lacking. With that said, strong public support and alignment with international human rights norms can create a political cost for governments considering regressive changes.</p><p>An additional limitation is that courts under such an Act likely would not have the power to strike down incompatible laws. Instead, they could only issue declarations of incompatibility, leaving the final decision to Parliament. This structure maintains legislative supremacy, but could limit the Act’s practical impact if political will is lacking. The experience of other countries shows that strong advocacy and public engagement are crucial for these frameworks to function effectively.</p><p>Finally, there is no immediate prospect of a federal Human Rights Act, and the benefits of a patchwork of State-based laws are limited. 
Yesterday’s seminar on a potential WA Human Rights Act included case studies of how Australians in States that do have a human rights law have benefited from access to a formalised process for asserting their human rights in response to government actions that violate them.</p><p>But there is little that can be done at a State level to protect the rights of those who are affected by Commonwealth legislation regulating Internet usage nationwide. A federal Human Rights Act would not render State laws redundant; instead, it would create a baseline standard, allowing States to build on these protections with region-specific legislation. Both, in other words, are needed.</p><h3>Potential benefits of implementing a Human Rights Act</h3><p>Despite these inherent challenges, the experience of other countries demonstrates that the benefits of even a statutory Human Rights Act can be significant, especially when paired with active public engagement and advocacy. A Human Rights Act would provide a useful first step towards protecting civil liberties and promoting governmental accountability, by serving as a legal benchmark against which all new legislation is assessed, similar to frameworks in New Zealand and the United Kingdom.</p><p>Australians could also invoke the Act directly in legal disputes, enhancing their ability to challenge governmental decisions. Currently, legal recourse depends largely on domestic laws aligning with international treaties, which often lack enforceability. Under a Human Rights Act, courts would be empowered to scrutinise government actions and legislation for human rights compliance. While it may not overturn laws, this process could compel Parliament to reconsider measures found incompatible with international human rights standards.</p><p>In practical terms, a federal Human Rights Act could serve as a safeguard against overreach in areas like surveillance and online regulation. For example, debates around the Online Safety Act or proposed misinformation laws might have seen more balanced outcomes if guided by clear statutory human rights protections.</p><h3>Conclusion</h3><p>My own career in digital rights activism has taken me around the world, from the <a href="https://jere.my/why-the-eu-will-lose-its-battle-for-chat-control/">United Nations</a> to <a href="https://www.youtube.com/watch?v=oAAk0EC4JIg&amp;t=277s">Silicon Valley</a>, but it began in Australia during the early 2000s, going head to head against the government on behalf of organisations such as the Internet Society of Australia, the WA Internet Association, and Electronic Frontiers Australia. The year that I’ve spent back in Australia has taken me full circle, bringing my advocacy back to its local roots.</p><p>That’s timely, because for a country of its size, Australia has increasingly become a world leader in bad Internet policy: from age assurance mandates, to demands for encryption backdoors, to global content takedowns and the criminalisation of speech. Australia’s lack of a constitutional, or even statutory, bill of rights plays a big part in its holding that dubious honour.</p><p>It should go without saying that the government should not be able to violate international human rights law, even when it claims child protection as the justification. Yet thanks to fearmongering and misinformation propagated from both right and left, the public has become all too credulous when government overreach is framed as being necessary to protect child safety. 
Human rights are one of the few safeguards that we have to stand firm against such encroachments.</p><p>Although I don’t harbour any illusions that a Human Rights Act would operate as a cure-all, any new weapon in the armoury of a civil rights defender is to be welcomed. I strongly support current Western Australian advocacy efforts to protect civil liberties and bring accountability and redress to victims of State government overreach and misfeasance.</p><p>But even more important, in the context of Commonwealth laws affecting the Internet, will be the passage of a federal Human Rights Act, to extend human rights protections to all Australians on a uniform basis and lay a stronger foundation for a more principled approach to Internet policymaking going forward.</p><p><em>Originally published at </em><a href="https://jere.my/australia-versus-human-rights-online/"><em>https://jere.my</em></a><em> on November 28, 2024.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Dead Dove: a content warning plugin for WordPress]]></title>
            <link>https://medium.com/@jmalcolm/dead-dove-a-content-warning-plugin-for-wordpress-8c14d0d75821?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/8c14d0d75821</guid>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[content-warning]]></category>
            <category><![CDATA[safetybydesign]]></category>
            <category><![CDATA[trust-and-safety]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Wed, 30 Oct 2024 01:45:41 GMT</pubDate>
            <atom:updated>2024-10-30T01:59:11.533Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*sH13r0qwdpRCPqVG7TUR0Q.jpeg" /></figure><p>What should a social media platform do about content that is lawful and complies with the platform’s terms of service, but could be offensive to some of its users? Expecting users to deal with such content by proactively blocking it will burden them and drive them away. On the other hand, burying such content leads to a milquetoast experience, especially for minorities who rely on these platforms to express their identities, share lived experiences, and challenge societal norms. Suppressing edgy content risks silencing these voices and reinforcing mainstream narratives that marginalise them.</p><p>A delicate balance is needed: one that empowers users to control their experience without erasing the diversity of thought and expression that makes social media vibrant and inclusive. The question, then, is how platforms can foster respectful engagement while maintaining space for controversial or challenging ideas. As someone passionate about free expression and safety online, the solution that I have always recommended has been the use of <a href="https://jere.my/three-guidelines-for-child-exploitation-policies/">tags and content warnings</a> to allow users to curate their own experience and avoid unwanted exposure to potentially offensive content online, without resorting to heavy-handed censorship.</p><h3>Announcing Dead Dove</h3><p>So today, I’m announcing my release of <em>Dead Dove</em>, a new content moderation plugin for WordPress that is designed to make this easy for the 43.5% of all websites globally that use this incredibly popular publishing platform. Simply by installing this plugin, administrators of WordPress-based sites, which typically lack a dedicated engineering team, gain access to a flexible system that allows them to define a default set of tags that will gate potentially offensive content behind an informative warning, while also allowing users to override those defaults and establish their own content preferences.</p><p>It works by leveraging WordPress’s existing tags feature. To add a content warning, the administrator simply adds a tag to identify the type of content that requires a warning, including an extended description which will be shown in the warning box. Below is an example of the tags screen from one of my trust and safety clients that is developing a WordPress-based site. There is a separate preference screen (not shown here) where the administrator marks which of these tags should generate a warning by default.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*aJXEQpHZh8CQ_JqT.png" /></figure><p>(Oh, and if you’re wondering where the “dead dove” phraseology comes from: it’s a term used in fandom circles to describe a content warning that can be given about potentially triggering content, without spoiling exactly <em>what</em> you’re about to encounter if you proceed. Check out <a href="https://www.youtube.com/watch?v=EbpK7uHiM_8">this clip from Arrested Development</a> to see where it came from.)</p><p>Once defined by the administrator, the tag can be applied to an entire post, a block such as a paragraph or image, or even just an excerpt of text such as a single sentence.</p>
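<p>To make the mechanics concrete, here is a minimal sketch of the general technique, written as my own illustration rather than Dead Dove’s actual source code: a filter on WordPress’s standard <em>the_content</em> hook checks whether a post carries any tag from an administrator-defined warning list, and if so gates the rendered content behind a notice built from the tag’s extended description. The “dd_warning_tags” option name and the output markup are hypothetical.</p><pre>&lt;?php
// Illustrative sketch only: gate posts that carry a warning tag.
// The "dd_warning_tags" option name and the markup are assumptions,
// not taken from the actual plugin source.
function dd_sketch_gate_content( $content ) {
    // Tag slugs the administrator has flagged as needing a warning.
    $warning_slugs = get_option( 'dd_warning_tags', array( 'nsfw' ) );

    foreach ( get_the_tags() ?: array() as $tag ) {
        if ( in_array( $tag->slug, $warning_slugs, true ) ) {
            // The extended tag description becomes the warning text.
            $notice = esc_html( $tag->description ?: 'This content may be sensitive.' );
            return '&lt;details class="dd-warning">&lt;summary>' . $notice .
                   ' (click to show)&lt;/summary>' . $content . '&lt;/details>';
        }
    }
    return $content;
}
add_filter( 'the_content', 'dd_sketch_gate_content' );</pre><p>A real implementation would also blur rather than simply collapse the content, respect per-user preferences (a second sketch appears below), and handle block- and excerpt-level tags, but the flow is the same: look up the post’s tags, compare them against the warning list, and gate the output accordingly.</p>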
<p>When the end user views the content to which the tag has been applied, they will see it blurred out, and a dialogue box similar to the following will be displayed:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*MSdUwbInnh8TSlsZ.png" /></figure><h3>Customising the warnings</h3><p>Note the “Modify your content warning settings” link at the bottom of the dialogue. This is provided so that the user can modify the default set of tags that generate content warnings, if they don’t agree with the choices made by the site administrator. Below is an example of the preferences screen that the user will see when they click that link.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/532/0*Kfab4HYKMWG-6Oja.png" /></figure><p>Of course, the exact tags that the user sees will be those that their site administrator has defined. For the examples above, I’ve used the major warning tags from the long-running fandom website <a href="https://archiveofourown.org/">Archive of Our Own</a>, and added a general “Not Safe For Work” tag for 18+ adult content. Note that it’s possible both to deselect tags that are set by default, and to select tags that aren’t, which is especially useful for users customising around their own icks, triggers, and phobias.</p><h3>The future</h3><p>Of course, a content moderation system based around <em>Dead Dove</em> can only be effective if the site ensures that users tag their content appropriately. For now, although Dead Dove works with both the modern WordPress block editor (Gutenberg) and the classic editor, tagging content does require access to the WordPress dashboard. Therefore, it’s currently most useful in an editorial environment, such as a private company or organisation website with a limited number of trained users.</p><p>But I’m also working on making it more useful for social websites that allow arbitrary numbers of end users to add their own blogs or images to the site. This will involve extending a future version of the plugin to support the simplified blog editor provided by <a href="https://www.buddyboss.com/integrations/buddypress-user-blog/">BuddyBoss</a>, a social community platform for WordPress.</p><p>Other enhancements planned for future versions include hiding content previews on category pages, which are currently not hidden from view, and allowing the smart application of content warnings based on the user’s physical location. Specifically, there are categories of pornography, artwork, and even prose that, although legal in the United States, are <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">illegal in some other countries</a>. The plugin could provide a measure of additional safety to users from countries with such repressive laws.</p><p>A platform that intends to use <em>Dead Dove</em> at scale will need to educate its community about how to correctly tag their own content, not only for content discovery but also for content warnings. We have examples of how this can work in practice, such as Archive of Our Own, already mentioned above. But we’ve also seen what goes wrong when platforms unwisely <a href="https://jere.my/child-protection-censorship-on-wikipedia/">entrust too much editorial power</a> to untrained volunteers.</p><p>Enabling platforms to strike a better balance between user empowerment and safety is part of the service that I provide in my own trust and safety practice. For me, helping my clients develop communities that care about their own safety and freedom is just as important as helping them develop or deploy software to protect their platforms.</p>
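<p>For the technically curious, the sketch below shows one way that the per-user overrides described under “Customising the warnings” might be layered on top of the site-wide defaults, by storing the user’s chosen tag slugs in WordPress user meta. Again, this is a hypothetical illustration under assumed names (the “dd_user_tag_prefs” meta key and the “dd_warning_tags” option), not the plugin’s actual code.</p><pre>&lt;?php
// Illustrative sketch only: per-user overrides of the site-wide
// warning tags. The meta key and option name are assumed, not
// taken from the actual plugin source.
function dd_sketch_effective_warning_tags() {
    $defaults = get_option( 'dd_warning_tags', array() );
    if ( ! is_user_logged_in() ) {
        return $defaults; // Anonymous visitors get the site defaults.
    }
    $prefs = get_user_meta( get_current_user_id(), 'dd_user_tag_prefs', true );
    // An empty value means the user has never customised anything.
    return is_array( $prefs ) ? $prefs : $defaults;
}

// Called when the user saves the preferences form shown above.
function dd_sketch_save_tag_prefs( array $slugs ) {
    update_user_meta(
        get_current_user_id(),
        'dd_user_tag_prefs',
        array_map( 'sanitize_title', $slugs )
    );
}</pre><p>The content filter in the earlier sketch would then consult dd_sketch_effective_warning_tags() rather than reading the option directly, so that a logged-in user’s choices always win over the administrator’s defaults.</p>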
<h3>Conclusion</h3><p>Ultimately, the goal of <em>Dead Dove</em> is to empower both site administrators and users, fostering a safer, more inclusive online environment without compromising freedom of expression. By offering thoughtful content warnings and customisable preferences, this plugin ensures that platforms can accommodate diverse perspectives while respecting individual boundaries. As the Internet continues to evolve, tools like <em>Dead Dove</em> offer a practical way to strike the delicate balance between expression and sensitivity, without sacrificing either.</p><p>Try <em>Dead Dove</em> on your site today and let me know what you think! You can download it for free from the <a href="https://wordpress.org/plugins/dead-dove/">official WordPress plugins registry</a>.</p><p><em>Originally published at </em><a href="https://jere.my/dead-dove-content-warning-plugin-wordpress/"><em>https://jere.my</em></a><em> on October 30, 2024.</em></p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Cybersecurity for Trust and Safety Professionals Handling CSAM]]></title>
            <link>https://medium.com/@jmalcolm/cybersecurity-for-trust-and-safety-professionals-handling-csam-4a9855d2d4b9?source=rss-e617981bb386------2</link>
            <guid isPermaLink="false">https://medium.com/p/4a9855d2d4b9</guid>
            <category><![CDATA[cybersecurity]]></category>
            <category><![CDATA[trust-and-safety]]></category>
            <dc:creator><![CDATA[Jeremy Malcolm]]></dc:creator>
            <pubDate>Fri, 07 Jun 2024 07:45:04 GMT</pubDate>
            <atom:updated>2024-07-04T00:54:54.403Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wwYxynYnpAkbRPKX0hGwlg.png" /></figure><p>Following five years working in trust and safety in the United States, this year I moved back to Australia. While here, I received a notification from Cloudflare’s excellent CSAM Scanning Tool (<a href="https://jere.my/how-your-platform-can-find-report-csam/">reviewed here</a>) that a forum post uploaded to a website of one of my clients had been identified as suspected CSAM (child sexual abuse material). No problem, right? Simply review the image, determine if it really is CSAM, and then complete the usual <a href="https://jere.my/how-your-platform-can-find-report-csam/">removal, archival, and reporting procedures</a>.</p><p>Well, it turns out that that’s easier said than done…</p><h3>Law Enforcement and Trust &amp; Safety at Odds</h3><p>It’s surprising how often police and trust and safety professionals are not on the same team. Some years ago, a client of mine, an adult website, received a request from the police that we <em>not</em> ban certain accounts that we had identified as apparently engaged in grooming or solicitation, because those were actually bait or honeypot accounts being run by the police. For the safety of our users, we refused the request and banned the abusive accounts.</p><p>If that wasn’t bad enough, in both the United States and Australia, individual trust and safety professionals have been targeted by police over possession offences. For example, in 2022 <a href="https://reason.com/2022/12/09/this-principal-investigated-a-sexting-incident-so-the-police-charged-him-with-possessing-child-porn/">a school principal was arrested</a> for maintaining possession of child abuse images shared by students, solely for the purposes of reporting and discipline — there was no allegation that the images were used or intended for any other purpose.</p><p>The problem of overzealous enforcement isn’t limited to trust and safety teams and professionals. As described in depth in my <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">recent submission</a> on the review of Australia’s content classification laws, other odd choices of target for prosecution have included a grandmother over innocent footage of her grandchild, and a man who shared a sexual Simpsons meme.</p><p>So when I receive a report of CSAM, I’m not only worried for the safety of whoever might be a victim of that material; I’m also worried about the safety of the person who reported it, and about my own safety. Simply put, I no longer trust Australian law enforcement to allow me to do my job without hindrance.</p><p>This article will outline the legal risks that trust and safety professionals face (mostly in Australia, though also comparing with U.S. law), the insufficient legal protections that they enjoy, and some of the cybersecurity precautions that they may be advised to take in order to protect themselves.</p><h3>Insufficient Legal Protection</h3><p>Nineteenth-century legal philosopher John Austin maintained that laws do not necessarily have any moral basis, but rather simply express the will of the sovereign authority, backed by the threat of sanctions. There is no better illustration of this than the case of CSAM. 
It might be lawful (but is never morally right) for <a href="https://www.vg.no/nyheter/utenriks/i/L8ly4/unicef-clear-violation-of-un-childrens-convention">police to widely distribute child abuse images</a>, while conversely it is morally right (but might not always be lawful) for trust and safety professionals to handle CSAM as part of their duties to remove and report it.</p><p>If an ambulance driver exceeds the legal speed limit, they could raise a legal defence of necessity if they were ever charged over doing so, because the law recognises that their otherwise illegal actions were justified. So too, there are some circumstances in which trust and safety professionals are able to raise a defence to a charge of CSAM possession.</p><p>The problem for the profession is that these circumstances are very narrow. In the United States, handling as few as three items of CSAM, even if the images in question were promptly deleted and/or reported to authorities, can land a trust and safety professional with a possession charge to which they can raise no defence. This limit is very low for professionals who may uncover large amounts of CSAM all at once, all from a single user of their platform.</p><p>Under Australian law there is no such two-item amnesty, but there are defences for those engaged in enforcing, monitoring compliance with, or investigating a contravention of Australian or foreign law. Unless such a defence can be established, handling such content at all amounts to a strict liability crime, which depending on the circumstances could see charges brought under the Criminal Code, the Customs Act, and/or State law.</p><p>It’s also worth noting that this Australian defence isn’t available at all in response to charges brought under the Customs Act following a search at the border. In other words, trust and safety professionals who travel with work devices are liable to having these searched at the border without a warrant, and have no defence if sensitive content is found on those devices, perhaps even in cache or <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">deleted space</a>.</p><p>Another problem is that both the U.S. two-item amnesty and the Australian Criminal Code defences are what are called affirmative defences. This means that you can still be charged with a crime, and possibly imprisoned without bail, before having the opportunity to raise the defence. You are then effectively required to prove your own innocence at trial (possibly waiting years), or to cop a guilty plea.</p><p>While you might not think that this would be an issue for you, because you don’t store such content on your work device anyway, does your company have any moderation guides that include samples that might be illegal (hopefully not, <a href="https://www.forbes.com/sites/alexandralevine/2022/08/04/tiktok-is-storing-uncensored-images-of-child-sexual-abuse-and-using-them-to-train-moderators/?sh=582ec1b45acb">and yet…</a>)? Does your web browser cache contain images that passed through your platform’s moderation dashboard? How sure are you of your answer?</p><p>Also keep the destination’s law in mind. 
In Australia, illegal content includes much <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">consensual 18+ pornography, artwork, fiction, and non-fiction</a> that might be entirely permissible under your platform’s terms of service and the platform’s local law — posing an especially high risk for professionals who work for adult, LGBTQ+, or <a href="https://prostasia.org/blog/what-purity-policing-fans-get-wrong/">fan platforms</a> and who might routinely deal with such content.</p><h3>Threat Modelling</h3><p>This being so, it remains incumbent upon trust and safety professionals to take care of their own safety by exercising sensible cybersecurity practices. This doesn’t mean that they should ever intentionally break the law — but it does mean that they should avoid ever being put in a situation where they risk being arrested by overzealous law enforcement authorities, and having to affirmatively prove their own innocence.</p><p>The starting point in <a href="https://www.nonviolent-conflict.org/blog_post/practitioners-civil-resistance-assess-cybersecurity-threat-modeling/">threat modelling for cybersecurity</a> involves asking four questions:</p><ul><li><strong>Who are you?</strong> If your work includes receiving, triaging, investigating, or acting on reports of illicit platform content, then the channels through which you receive these reports are of interest to law enforcement. If you’re travelling with electronics, you’re also automatically placed under suspicion.</li><li><strong>Who is your adversary?</strong> While our ultimate adversaries are online abusers, as explained above it is unfortunately necessary to treat state, federal, and border law enforcement agencies as potential adversaries also.</li><li><strong>What do they want?</strong> Law enforcement’s priority is simply making arrests and convictions. To support these convictions, what is needed is evidence that their target dealt with illicit material in some way, such as possession, importation, or sharing.</li><li><strong>How will they try to get it?</strong> There are three main ways:<ul><li><strong>Reporting:</strong> Often charges begin with a report from a platform, and sometimes those reports can be false. For example, Google once reported a man to police over <a href="https://www.theguardian.com/technology/2022/aug/22/google-csam-account-blocked">medical photos of his child</a> that had been stored on Google’s cloud, and it was TikTok that informed on the <a href="https://www.abc.net.au/news/2024-04-30/act-grandmother-filming-child-abuse-material-tiktok-post/103786002">Australian grandmother</a>.</li><li><strong>Surveillance:</strong> Under both U.S. and Australian law, telecommunications providers and online platforms have obligations to assist law enforcement, usually under warrant or similar order. These obligations are broader in Australia, where the communications regulator even possesses the power to <a href="https://policyreview.info/articles/analysis/regulatory-arbitrage-and-transnational-surveillance-australias-extraterritorial">compel providers to grant access to encrypted communications</a>.</li><li><strong>Border search:</strong> Both U.S. and Australian border police can also search personal electronics at the border without a warrant. In the U.S., a suspect <a href="https://arstechnica.com/tech-policy/2024/04/cops-can-force-suspect-to-unlock-phone-with-thumbprint-us-court-rules/">can be forced to use biometrics</a> to unlock their device. 
In Australia, a suspect can be <a href="https://www5.austlii.edu.au/au/legis/cth/consol_act/ca191482/s3la.html">forced to divulge a PIN or password</a> in some circumstances.</li></ul></li></ul><p>In short, law enforcement agencies may wrongly treat the channels through which trust and safety professionals receive reports as potential evidence of criminal activity, and may select them as targets for investigation. This can include engaging in upstream online surveillance, physical attacks, and legal coercion.</p><p>This doesn’t necessarily come with a polite request or warning. In 2021, a client of mine was reported to authorities by Google and had its entire cloud account suspended without notice, simply because a single user had misused the platform for CSAM. I’ve had other clients whose sites have been taken down in similar circumstances, before anyone thought to talk with the client’s own trust and safety team.</p><p>In some cases, the authorities literally come in guns blazing. In 2019, the owner of a website that published taboo sex stories was the subject of an over-the-top paramilitary-style raid on his property that uncovered nothing (though the publisher was eventually sentenced to an astonishing 40 years’ imprisonment). <a href="https://x.com/SammiSteeleNews/status/1194775803059421184">Neighbours filmed explosions</a> at the scene.</p><p>Law enforcement do not care about the difference between fantasy and reality, between art and abuse, between a family photo and a crime scene: to them it is all one and the same. Frankly, most members of the public hold pretty much the same view.</p><p>Today, supposedly progressive commentators, arm in arm with <a href="https://x.com/scarlettrfranks/status/1795621606624207292">sex work abolitionists</a>, baldly put forth the view that it is <a href="https://x.com/jason_koebler/status/1795497290763002309">crazy and wrong</a> to say that <a href="https://jere.my/generative-ai-and-children-prioritizing-harm-prevention/">AI pornography shouldn’t be regulated</a> by the same legal standards as CSAM. In this environment, law enforcement operates with significant impunity for overstepping, and <a href="https://jere.my/child-protection-and-civil-liberties-in-the-balance/">few are willing to even talk about it</a>.</p><h3>Travelling with a Work Device</h3><p>A comprehensive personal cybersecurity tutorial is beyond the scope of this article. For that, I recommend <a href="https://digital-defense.io/">DigitalDefense.io</a>. Instead, here I will focus on some measures that trust and safety professionals should consider in one particular situation in which they are the most vulnerable — when they are travelling. Many of these tips are equally applicable to those working remotely from a home office.</p><p>To protect against physical attacks at the border, the simplest advice is never to travel with a device that you have used for accessing material that might be unlawful in any country that you are visiting or transiting through. There is no simple way for you to ensure that traces of that content do not remain on your device, possibly in forms that are invisible to you and that you cannot easily remove. If you can, keep separate devices for travel, and only use them to access known safe content while you are away.</p><p>If that isn’t possible and you do need to work while travelling, then there are a few next-best options. 
The one that I would recommend is to work only from a temporary operating system such as <a href="https://tails.net/">Tails</a> or <a href="https://www.whonix.org/">Whonix</a>, which won’t store anything to your device. Both options also automatically use the <a href="https://prostasia.org/blog/should-the-tor-network-be-shut-down/">Tor network</a> to avoid network-based surveillance. The main difference between them is that Tails can run directly from a removable device such as a USB flash drive, DVD, or SD card, while Whonix requires virtualisation software such as <a href="https://www.virtualbox.org/">VirtualBox</a> running on the host machine.</p><p>If your tolerance for risk is a little higher, you could travel and work with a Chromebook. This would require that before crossing a border, you perform a <a href="https://support.google.com/chromebook/answer/183084?hl=en">powerwash</a> and then sign in again with a separate, second account that hasn’t ever been used for work, in case you are selected for a search. This isn’t quite as safe as using Tails or Whonix, as it is possible that advanced forensics techniques may still be able to recover traces of the previous account’s data from the device’s storage or from Google.</p><p>A Chromebook does not come with a built-in VPN, but it does support many third-party VPN services and corporate VPNs. For power users, enabling Chrome OS’s <a href="https://support.google.com/chromebook/answer/9145439?hl=en">support for Debian GNU/Linux</a> makes a range of other security software, including the <a href="https://prostasia.org/blog/should-the-tor-network-be-shut-down/">Tor browser</a>, available.</p><h3>Storage</h3><p>I would not advise ever travelling with a device that has previously been used to access unsafe content that may have been cached or stored to a local filesystem. Even deleting such content that you are aware of is not guaranteed to render it irretrievable. Secure deletion utilities that were once relatively reliable are no longer effective when used with certain filesystems and storage technologies, <a href="https://kb.iu.edu/d/aiut">including SSDs</a>.</p><p>With that said, a situation could conceivably arise in which a trust and safety professional could be required to store CSAM while travelling, e.g. under the REPORT Act mentioned below. In this case, the best options are to keep that content in an <a href="https://drive.proton.me/">end-to-end encrypted file storage service</a> or in a separate locally <a href="https://www.veracrypt.fr/">encrypted filesystem</a> — never, it should go without saying, on Google Drive, OneDrive, or similar. Note that relying on your device’s full-disk encryption is not enough. Although important in case your device is lost or stolen, full-disk encryption is less useful if the device is seized at the border and you are forced to power up and unlock it.</p><h3>Passwords</h3><p>It is also important to store strong and unique passwords for each device and service in a password manager such as <a href="https://bitwarden.com/">Bitwarden</a> or <a href="https://www.1password.com/">1Password</a>, so that they won’t all be compromised if you are required to unlock a device. Consider also using unique usernames for sensitive online services such as VPNs, and storing these in the password manager too.</p><p>If you use a PIN as a quick unlock mechanism for your device or your password manager — generally a bad idea — be sure that it is not reused. 
Also ensure that your password manager does not auto-unlock on login, that it auto-locks when inactive, and (of course) that you commit the unlock password to memory rather than writing it down. Ideally, temporarily uninstall it from your device while travelling.</p><p>Do realise that if you use a phone or phone-based authenticator app as a second-factor authentication device for any online service and that phone is seized, you will lose your second factor — and this is aside from the fact that SMS authentication is <a href="https://techcommunity.microsoft.com/t5/microsoft-entra-blog/it-s-time-to-hang-up-on-phone-transports-for-authentication/ba-p/1751752">insecure anyway</a>. A better option would be to discreetly travel with a 2FA device such as a <a href="https://www.yubico.com/">YubiKey</a>.</p><h3>Holding Law Enforcement Accountable</h3><p>If you are disturbed that it has become necessary for trust and safety professionals to jump through such hoops simply to protect themselves from overzealous enforcement authorities, then you’re right to be concerned. In the longer term, law enforcement does need to be held accountable for the misuse of its powers. But this will be a long fight that few have shown themselves willing to take on.</p><p>I have <a href="https://jere.my/child-protection-and-civil-liberties-in-the-balance/">experienced first-hand</a> the manner in which law enforcement has colonised the establishment child protection sector, crowding out and seeking to discredit dissenting voices such as sex worker collectives and human rights advocates, or simply making them feel unwelcome by partnering with <a href="https://x.com/ProstasiaInc/status/1450932862404808707">sex work abolitionist groups</a> or <a href="https://www.thecut.com/article/ashton-kutcher-thorn-spotlight-rekognition-surveillance.html">surveillance tech vendors</a> that threaten the rights and safety of minorities.</p><p>It’s fair to say, however, that in the broader trust and safety field the terrain remains contested. While law enforcement perspectives remain influential, their advocates share mindspace with those promoting <a href="https://jere.my/2022-year-of-the-groomer/">public health based approaches</a>, <a href="https://digitalmedusa.org/risking-human-rights-is-risking-digital-trust-and-safety/">human rights assessments</a>, and <a href="https://glaad.org/smsi/report-meta-fails-to-moderate-extreme-anti-trans-hate-across-facebook-instagram-and-threads/">LGBTQ+ rights</a>. Although law enforcement-friendly viewpoints and vendors still dominate, sufficient noise is being made on the sidelines that the field as a whole has not yet been ceded to law enforcement, nor ever should it be.</p><p>In fact, I’ve <a href="https://jere.my/online-safety-bill-privacy-invasion/">written before</a> that it is incumbent upon trust and safety professionals who situate their work within a rights framework to push back against government proposals that flout human rights norms, rather than acquiescing to a dystopian future of blanket surveillance that imperils our communities and ourselves. 
This means opposing measures such as those that would require private communications to be <a href="https://jere.my/why-the-eu-will-lose-its-battle-for-chat-control/">vetted by AI robots</a>, or that would <a href="https://jere.my/generative-ai-and-children-prioritizing-harm-prevention/">further criminalise speech</a>.</p><p>Not only should new authoritarian law enforcement powers be on our radar, but it’s just as important to continue applying scrutiny to the use of existing powers by agencies such as the eSafety Commissioner, which has <a href="https://www.youtube.com/watch?v=9NHY3gGaITE">platformed hate groups</a>, and the Australian Border Force (ABF), which has a record of <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">systematically misusing</a> its coercive powers of search and arrest for corrupt and illegal purposes.</p><h3>Fighting for Better Protections for the Profession</h3><p>Finally, as a profession we ought to be actively advocating for new protections to be enacted for trust and safety professionals performing their duties in good faith. While working in the United States, I routinely dealt with lawful content that could nevertheless be judged obscene under Australia’s <a href="https://jere.my/drawing-the-line-australias-misguided-war-on-comics/">puritan censorship laws</a>. Even doing that, <em>outside the country</em>, is technically an offence under Australian law.</p><p>The existing protections in both U.S. and Australian law are in urgent need of strengthening, so that trust and safety professionals acting reasonably and in good faith have immunity for dealings with content that they are required to access as a <em>bona fide</em> requirement of their profession.</p><p>To begin with, in Australia, the existing trust and safety defences under the Criminal Code should be extended to importation offences, and Australians should not be criminalised over web browsing that they lawfully do overseas. In the United States, there should be a higher threshold number of images before a defendant loses the possible defence that they were reporting, or had deleted, the images.</p><p>Unfortunately, however, lawmakers have shown little interest in supporting a safe enabling environment for trust and safety professionals. The <a href="https://www.congress.gov/bill/118th-congress/senate-bill/474">Revising Existing Procedures On Reporting via Technology Act</a> (or the REPORT Act), which passed into U.S. law in May 2024, does now authorise platforms to retain illicit content for longer after they have reported it to NCMEC — for one year, rather than 90 days. But this change was made for the benefit of law enforcement, not the trust and safety profession.</p><h3>Conclusion</h3><p>Trust and safety professionals who receive reports of objectionable material already bear a heavy mental and emotional burden by being exposed to this content. They shouldn’t also have to worry about being arrested simply for doing their jobs. Yet these professionals frequently work at great personal risk, especially when they travel.</p><p>The current legal landscape in both Australia and the United States presents significant risks to professionals in this field that cannot be ignored. 
For as long as this remains the case, it will be crucial for such professionals to adopt stringent cybersecurity practices and to stay informed about the legal implications of their work.</p><p>Furthermore, there is a pressing need for legal reforms that recognise and protect the essential work of trust and safety professionals. This means pushing for amendments that offer comprehensive immunity for actions taken in good faith, and expanding current defences to cover all aspects of their duties. Only through such measures can we create a safer and more supportive environment for those on the front lines of online safety.</p>]]></content:encoded>
        </item>
    </channel>
</rss>