The AI-framing trap

Just a quick thought on something that's become apparent in recent conversations about AI with smart people, and where those conversations go wrong.

It’s not so much that they don’t understand the abilities or inner workings of AI, because they do. It’s that they have adopted the framing offered by AI technology and apply it too widely. Let me explain.

AI technology is computational. It takes input and produces output, which we want to evaluate. This fits a certain type of reasoning problem: retrieving information, answering customer queries, producing an image, and so on. There’s input, there’s output, and in the middle is a reasoning black box. Now we just have to make the innards of the box better and better. It’s a computer science worldview. It has its place.

Problem is, lots of problems humans deal with aren’t like that; in fact most problems aren’t like that. Making a cup of coffee the first morning in an AirBNB isn’t a black-box-reasoning-problem. Designing a new building isn’t a black-box-reasoning-problem. Developing a new spelling method for the classroom isn’t a black-box-reasoning-problem. Deciding whether or not to invest isn’t a black-box-reasoning-problem. Why not? Because they are ill-defined problems requiring a design process in constant interaction with what the environment affords, materially and socially.

The black-box-reasoning framing, when applied inappropriately, steers you in the wrong direction. No, it won’t help to make better reasoning agents. No, there aren’t any benchmarks for these problems. No, you won’t get to AGI just by learning better black boxes from ever larger datasets.

Reasoning is important, but it’s far from the only aspect of human intelligence. If ‘reasoning’ and ‘intelligence’ are synonymous to you, you might have fallen into the AI-framing trap.
