Press Release: Mu's response to the Crime and Policing Bill
![](/user/pages/01.blog/press-release-mus-response-to-the-crime-and-policing-bill/mublack.jpeg)
Numerous media outlets are reporting a plan by the UK government to pass laws against tools that can be used to make criminalized AI-generated images of children. Accounts are somewhat vague, and even contradictory, on the details. According to The Guardian, under the upcoming Crime and Policing Bill:
- "It will become illegal to possess, create or distribute AI tools designed to generate child sexual abuse material."
- "It will also become illegal for anyone to possess manuals that teach potential offenders how to use AI tools to either make abusive imagery or to help them abuse children."
- "A stringent new law targeting those who run or moderate websites designed for the sharing of images or advice to other offenders will be put in place."
- "Extra powers will also be handed to the Border Force, which will be able to compel anyone who it suspects of posing a sexual risk to children to unlock their digital devices for inspection."
Quite how these provisions will be worded is unclear, as illustrated by the slightly differing interpretations offered up by Reuters and The Independent.
Mu co-founder Brian Ribbon wrote a detailed analysis of the proposal to criminalize AI tools prior to the official announcement in parliament, and his concern about it representing an excuse to snoop on the public looks chillingly accurate in light of the 'extra powers' to be granted to the Border Force.
Mu makes the following comments in response to the proposals:
- Contrary to claims made by the government, there is no evidence that viewing AI images of children causes contact offending. Rather, AI images can serve as an outlet for people who are only attracted to those under the age of consent. To help prevent contact offending, the UK needs to radically improve safe access to mental health support for individuals who seek it, and to stop using threatening and demeaning language that makes people afraid to reach out for help.
- While government press releases focus on the horror of photos of real children being used to make nude or sexual images which are then distributed online, community sources familiar with such matters inform us that those images do not represent the majority. Most AI images are made without using the likeness of any real child; that is, they are generated entirely by a computer.
- Many AI tools can ultimately be used to make sexual images of children. The UK's new law will therefore either criminalize the many users of AI tools who have no intention of making sexual images of children, or be virtually unenforceable by requiring clear evidence of intent.
- British authorities have a history of justifying 'new powers' by claiming a need to protect children, and then abusing those new powers to snoop on citizens en masse. The British public are likely to see the same scenario yet again, with the proposed new Border Force powers conveniently announced alongside the upcoming AI legislation. The Border Force already has the right to inspect devices at the border, so we are left wondering exactly how far the government plans to go, and what its true intentions may be.
Please contact b.ribbon@map-union.org for further comment.