AI could save the world – should it really be regulated by the people who don’t care?

The beginning of November was marked by an international AI Safety Summit, intended to open "urgent talks on the risks and opportunities posed by rapid advances in frontier AI". Based on reports emerging from the event, the tone was a mixture of collective enthusiasm for containment and nation-specific posturing.

Many eyes will have been caught by photos of a staged discussion in which British-PM-for-now Rishi Sunak and techno-bully Elon Musk appear to be having a pretty good time talking about the perils and opportunities presented by AI. Sunak has been particularly keen in recent months to broadcast his enthusiasm for AI, and in particular for the ways it might grow the UK's tech industry. Musk, for his part, has been vocal about his concerns over the existential risks posed by AI, and in his comments to Sunak covered both the opportunities (a "future of abundance") and the threats (substantial economic disruption) of forthcoming technology.

A theme that runs through the safety summit, the Prime Minister's positioning, and the tech entrepreneur's grandstanding is the idea that AI needs to be regulated, and it is pretty self-evident that all the people involved think the technology should be regulated by themselves and the institutions they control. So let's think for just a minute about who is actually claiming to speak for what's best for the rest of us. In Sunak we have a politician who recently pledged, with no evident sense of equivocation, to "max out" the North Sea, by which he can presumably only mean completely exhausting one of the largest fossil fuel reserves on the planet for whatever short-term economic benefit he imagines that would bring. Musk, meanwhile, is a man who has contributed substantially to disinformation about the climate crisis, both indirectly under the guise of supporting "free speech" and directly through his own claims.

These are the people telling us to trust them to protect the rest of us from the potential harms of AI. But can we really have confidence in them to handle this technology responsibly, when they've been so unconcerned with the collective good in other life-and-death matters? And as a counterpoint, if AI does have a chance of causing catastrophic harm to our societies, it must also have the capacity to offer solutions to the most dire challenges facing the planet. What exactly that means in the context of, say, climate change – perhaps including the combination of world-saving strategising and messaging that AI might enable – remains to be discovered, but it seems calamitous to consider placing regulation of the technology that might save us from the most fundamental crisis of our times in the hands of precisely the people who would rather ignore that crisis in the first place.

