The Age of Intelligence?

Hear Ye! Hear Ye!

I think it was Spinal Tap’s lead singer David St Hubbins who observed that "there’s a fine line between stupid and clever." The same could be said of the overlords of AI who wax lyrical about the omniscient powers of their tech.

We’ve just lived through a week in which OpenAI’s Sam Altman released the latest version of his large language model tool, which gives users the unmissable opportunity to merge predictive text with voice to manufacture their own social interactions.

The new tool, as the promo material breathlessly tells us (raising the question of whether it was written by a real person or an AI), not only takes more time to "think" but can outperform PhD students in physics, maths and biology, all without the need for lifelong learning and self-reflection!

It was also the week it became official that the one-time not-for-profit would join the Big Technocracy as a fully-fledged corporation dedicated to raising capital and maximising profit; that is, as Altman warned, as long as the march of technology doesn’t eradicate the need for money in the process!

And it was the week where Altman published the latest iteration of his vision splendid, a sweeping mini manifesto grandiosely titled "The Intelligence Age." Regular readers know I have nothing against manifestos, but Altman’s offering has left me ruminating on St Hubbins' defining observation.

Here’s a sample:

“In the next couple of decades, we will be able to do things that would have seemed like magic to our grandparents. This phenomenon is not new, but it will be newly accelerated. People have become dramatically more capable over time; we can already accomplish things now that our predecessors would have believed to be impossible.”

Worth noting that Altman is not referring here to our ability to fry the planet through climate change, nor to the contribution made by generative AI’s massive thirst for energy and computing power. Indeed, he goes on to assert that, far from exacerbating that problem, AI will solve it:

Although it will happen incrementally, astounding triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace. With nearly-limitless intelligence and abundant energy – the ability to generate great ideas, and the ability to make them happen – we can do quite a lot. 

Indeed, name a negative consequence and Altman is confident AI will come up with the answers, even if it is still a work in progress:

There are a lot of details we still have to figure out, but it’s a mistake to get distracted by any particular challenge. Deep learning works, and we will solve the remaining problems. We can say a lot of things about what may happen next, but the main one is that AI is going to get better with scale, and that will lead to meaningful improvements to the lives of people around the world.

It all sounds brilliant, at least until you line up the hype job with the reality. What the latest version of this vision splendid does is basically automate the human voice and trivialise the notion of lifelong learning.

Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter. If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable.

Rather than lamp-lighting, this feels like gaslighting. The reality is that Altman has launched a commercial product rushed to market so fast that his initial sales pitch, that "AI needed to be harnessed for good", itself needed to be wound back before it undermined his whole business model. But now he is confident that the way to save the world is to give the tool with the potential to destroy it the power to tame itself.

But here’s the catch: having been sacked by his board and then reinstated once the financial implications of standing on principle became apparent, Altman now finds these aspirations sit outside his set of KPIs. His vision of an "Age of Intelligence" is predicated on the development of machine learning that cannot be driven by the market impulses that Altman now embraces.

In fact, he is repudiating his mission before the ink has even dried.

Burning Platforms

We dig deeper into these contradictions in this week’s Burning Platforms with Lee Schofield from "Future is Now", alongside our regular panellists, Digital Rights Watch chair Lizzie O’Shea and Health Engine CEO Dan Stinton.

We also cover:

· The fallout from 7-Eleven’s privacy breach, when it ‘accidentally’ collected facial recognition data of staff and customers through a customer satisfaction rating system (what could go wrong there?).

· Misinformation and disinformation laws: platform responsibility or an attack on free speech?

· And Dan’s take on Mark Zuckerberg’s new Metaverse glasses.

Download the podcast here.

Watch the video here:

Policy Updates:

Privacy: Submissions on the government’s modest first tranche of reforms close Friday. We will be making one and if you want to add your voice go to Privacy Now, now.

AI Regulation: Our policy director Jordan Guiao has pulled together this submission on the proposed mandatory AI guidelines.

Misinformation: More new legislation, which we discussed in this week’s podcast. Jordan also had this to say on ABC News Online.

What We’re Clicking

What’s in a Name? Lizzie shared with me this fascinating piece questioning the utility of "Artificial Intelligence" as a construct, arguing it is better seen as the same old tech stack.

Complexity theory: A fascinating discussion with microbiologist Neil Theise about his book "Notes on Complexity", a totally original take on the nature of our reality.

Doing it like Kevin Bacon: And loath as I am to reference myself, I was pleased to get a few Footloose references into my Guardian column on why I’m not supporting a ban on social media for kids.