Town Crier: The Clash of the Titans (Fans Edition)
Hear Ye, Hear Ye
The global showdown between Australia's e-Safety Commissioner and the godlike powers of Elon Musk draws the two most contested principles of the internet onto a single battleground.
On one hand, the internet's foundational promise of the liberating power of free-flowing information, emancipating humanity from the tyranny of central power and control.
On the other, the simple expectation that governments should keep their citizens safe from dangerous situations, whether in the real or the virtual world.
The catalyst for this face-off was the live-streamed stabbing of a TikTok bishop, which sparked a real-life riot outside the church. But it landed in a febrile environment, book-ended by the awful stabbings at Bondi Westfield and yet more individual attacks on women by lonely, angry men untethered to reality.
The e-Safety Commissioner issued take-down orders to the digital platforms for content available in Australia but also, more controversially, globally, citing the ability of citizens to access the material through VPNs.
Musk has challenged the order, doing what he loves best: using his personal megaphone to make cheap jokes, share basic memes and preach his freedom theology, drawing out Australian leaders from the PM down to return serve with interest.
Behind the bad takes, Musk probably has a point: a nation-state presuming to set the terms for the World Wide Web may constitute imperial over-reach. But conversely, our elected representatives are right to act on dangerous content, whether it is online or in the real world. They do it all the time.
The harm principle has always been a limit on liberty, so the debate really becomes one of balance between the two, and the way this is resolved is a product of each nation's particular political and cultural values.
In Australia, the Online Safety Act assumes the general free flow of content but grants specific powers to protect vulnerable groups, such as children, from harmful material; these powers were initially conceived as targeted and proportionate interventions.
It's also important to remember the e-Safety Commissioner was an initiative of the last Coalition government, after the Rudd Government tried but failed to establish a broader 'filter' to prevent the distribution of child pornography, sparking a massive backlash against government 'censorship'.
As Musk fights to keep his stabbing images circulating, the debate is moving towards more defined limits for children, with an unlikely unity ticket between Opposition leader Peter Dutton and women's safety advocates like Jess Hill.
Both have suggested online age verification as one approach to protecting children: Dutton to prevent underage access to platforms, Hill specifically to block boys from adult pornography.
The push-back here is about whether we have systems in place where information as sensitive as age verification can be entrusted to either Pornhub or Bet365, with secure government ID still a pipedream-in-progress.
But I think there are bigger issues that get papered over by a legal framework designed to deal with individual harm perpetrated on a digital infrastructure which, by its very design, is harmful.
I don’t think this is hyperbole: we know the business models of the platforms are to maximise virality, we know that the content that works is divisive and emotional, we know the impact of these sorts of ecosystems is to distort truth and maximise anger.
These structures, based on US principles of free speech that are not reflected in our constitution, have freed platforms of any semblance of responsibility for the information they serve. Unlike restaurants or water companies, they face no enforceable health standards.
So how do we ensure safety on a system that is engineered to be unsafe? Surely a regulator with a big hammer is condemned to a lifetime of chasing the next bad thing, without making the environment any safer.
As the platforms not just embrace AI but actually buy up the technology to dominate it, surely the bigger project is to build alternative places that aren't inherently dangerous.
At least that’s the project of the Centre of the Public Square – and we suspect that Elon Musk won’t be a fan of our work either.
Live Event
If you are in Melbourne next Friday (May 10), join us to launch 'Fringe to Famous', a new book looking at Australian cultural industries.
The book - a collaboration between academics Tony Moore, Mark Gibson, Chris McAuliffe and Maura Edmond - will be launched by Peter Garrett.
We'll be hosting a panel looking at the impact of AI on culture - and deploying our terrific Civility feedback tool.
6pm Friday 10 May
Kaleide Theatre, RMIT City Campus, 360 Swanston Street, Melbourne
Free event, registration required: https://events.humanitix.com/fringe-to-famous-book-launch
Burning Platforms
In the latest episode of Burning Platforms, we were joined by author Tim Dunlop to kick around the media (literally): what is their mission in an age of mass information, and can they be trusted to do the work?
We also look at:
· Google's sacking of staff over protests against 'cloud apartheid'.
· A new AI that remembers everything you’ve ever said – with thoughts sure to follow.
Listen to Burning Platforms here or watch the show here.
Policy updates:
· A wide-ranging Senate inquiry into AI is calling for public submissions – we are doing one; what about you?
· Government review of the powers of the e-Safety Commissioner - why should Elon Musk have all the fun?
Worth Clicking:
· OK, Computer: this discussion between the NYT's Ezra Klein and The Verge editor Nilay Patel is a mind-expanding dive into what happens when and if AI takes over the internet (spoiler: it might not be all bad).
· The Internet's Philosopher – the New Yorker looks at the work of reclusive academic Byung-Chul Han, who argues 'the smart-phone is a mobile labour camp.'
· Not So Fast – Meta's AI chief calls BS on AGI.