Riposte to the 23

In late October 2023, a number of well-known figures co-authored a short note on Managing AI Risks in an Era of Rapid Progress. It is the latest instalment in what has become something of a genre: thoughtful messages from thoughtful people communicating concern about what might go wrong with artificial intelligence. A sentence from the third page of the document is typical of the tone and scope of what’s being expressed: "This unchecked AI advancement could culminate in a large-scale loss of life and the biosphere, and the marginalization or even extinction of humanity."

A couple of interesting trends in the discussion around AI are evident in this high-level treatment:

  • There is a belief that AIs are positioned to be expert operators in essentially human matters. Early in the document the authors state that "companies are engaged in a race to create generalist AI systems that match or exceed human abilities in most cognitive work".

  • The notion that AIs are going to become "systems that can plan, act in the world, and pursue goals" – that they will, in a word (and one which comes up a number of times in the document), be "autonomous" – looms large in anxieties about the fairly short-term future of this technology.

There are some points worth picking apart here. For starters, what exactly counts as "cognitive work" isn’t really specified. But we can get a pretty good idea of what the authors must mean: things like making scientific discoveries, concocting complicated plans, persuading people to do things – generally, the types of activities that for humans involve hard thinking informed by deep domain-specific knowledge. It’s interesting, then, that the authors suggest AIs will do well at "most" of the "cognitive work" humans do, since most humans probably spend most of their time solving mundane problems (tying your shoes on a train platform) and making simple plans (getting across the road without being hit by a car).

Now this might seem beside the point: just because an AI can’t brush your teeth for you doesn’t mean it can’t be involved in a plot to ruin your day, and much worse. But it’s worth considering that, when it comes to the things that motivate and indeed allow humans to be experts, there is a cognitive lattice connecting the most sublime accomplishments of the mind to the momentary banality of simply persisting – a network of all the things that matter to us, from knowing ourselves and our place in the universe, to doing things that bring value and pleasure to our lives, to simply not falling over all the time. In fact there is a rich field of research in cognitive science and linguistics suggesting that human expertise in conceptually tricky areas relies on making well-structured analogies between abstract domains and the concrete experiences of day-to-day life. Think, for instance, of the thought experiments involving things like elevators and trains that informed Albert Einstein’s deep insight into the nature of the physical universe. Our engagement with the true nature of reality proceeds by proximity, through reflection and projection, by scaling up from the things that matter to each of us to the things that matter to all of us. How, without any anchoring in the basic world of physical objects, biological processes, and social interactions that humans need to navigate moment by moment, can an AI ever really be considered to know anything about our universe at all?

Doubts about what "knowledge" really means in the context of a totally disembodied, purely informational entity raise questions about what it would mean for an AI to actually plan to do things, as well. To begin with, by virtue of what do humans, and things somewhat similar to humans, have goals in the world? It is certainly the case that the behaviour of a number of systems that are very unlike humans – the economy, the weather, biological evolution, to name a few – is best understood, from a human perspective, as directed at somewhat figurative "goals" such as market equilibrium or natural selection, but we don’t seriously consider those goals to be underwritten by things like the desires, ambitions, and anxieties that all humans feel, and that we understand when we consider the motives of any other human in acting the way they do. Systems like these are undeniably complex and hard to predict, and at the same time they are worth predicting and indeed influencing, because they deeply impact the things we humans value in our lives.

But there is something else, something more essentially human, involved in hoping, wanting, and, eventually, meaning for something to happen. So it is unproductive to talk about the economy or the weather actually "planning" or "intending" any of the consequences that we associate with their mechanisms. To do so would suggest that we could engage with those systems on the basis of the same things that ground us in the world, and thus that we could somehow talk not just about them but also to them, reason with them, rather than simply grapple with their causes and their outcomes. It is similarly distracting, at best, to talk about how "we may irreversibly lose control of autonomous AI systems, rendering human intervention ineffective", because "control" of these systems implies that there is some basis for a top-down view of what the systems themselves mean to do in the first place. What could it possibly mean, for instance, for an AI to "form coalitions with human actors and other AI systems"? Surely the basis of a "coalition" is some sense of shared mission in the world, but an AI just isn’t in the world like that.

Yes, AIs can no doubt be pushed out into the world to do things like browse the web and design chemistry experiments (to pick two examples mentioned by the authors of the letter), but in the end they are acting as abstract factotums, unconcerned with any sense of purpose behind the way they map from one pattern to another. They only "know" things and "plan" things by virtue of our interpreting what they do as corresponding to knowledge and intent; they assign no value to any of it, simply because they have no notion of value to begin with.

None of this uncertainty about the nature of the knowledge and agency of AI should lead us to simply write off concerns about what AI might end up facilitating, and in particular the bad things that could be done not exactly by AI but with AI – and the authors of the recent letter are certainly aware of this "malicious actor" factor. If AI has the potential to change the world in a radically positive way, then it simply must be the case that it could also change the world in a negative way; raw power is morally unaligned. So, should the use of AI be regulated? Yes, certainly: if bad or irresponsible people are using AI to do bad and irresponsible things, they should be stopped, as would be the case with the abuse of any technology.

But the emergence of fears regarding the expertise and animus possessed by AI should also make us pause to consider another angle on all this. There is arguably a risk that the focus on the ability of AI to become "autonomous" lends itself to the belief that the right AI can be trusted to express wisdom and make decisions that are necessarily aligned with the best interests of human beings. But is this really the case? One of the most remarkable things about us humans is our ability to project our own sense of being in the world onto all sorts of other things, including but hardly limited to other people. Yet an overcommitment to the notion of well-regulated AI, premised on its having any sort of desire to make things happen in a certain way, could leave us with a blindspot to the purely procedural, and so abstract, nature of the technology.

Perhaps, in the end, what we need to consider "regulating" first is not the technology itself but the way we communicate about it. One area of particular significance may turn out to be the way that letters like the one considered here, written by thoughtful people with evident concern for humanity at large, could nonetheless become instruments for actors intent primarily on preserving their own position through exclusive control over a technology that, with the right orientation, could serve to improve and maybe even save life on our planet and in our times.

