The writer is a former editor-in-chief of Wired magazine and writes Futurepolis, a newsletter on the future of democracy.
Imagine you’re a machine superintelligence that wants a body to move around in. Would you choose a human form?
Probably not. Biologists have no settled theory on how our bipedalism evolved, but — like everything else in biology — it’s a kludge rather than an optimal design. Moving and balancing on two limbs is an impressive engineering feat, but it is dicey on rough terrain and puts terrible pressure on a spine first evolved for quadrupeds. If we hadn’t started from four limbs, might not a centaur-like shape be better?
Yet in the tech sector, the humanoid robot is the cultural handmaiden to another holy grail: artificial general intelligence.
This week, Meta was reported to be following Tesla, Apple and Nvidia in planning a big investment in AI-powered humanoid robots, with an initial focus on household chores.
Companies like Figure AI and Agility Robotics are already building electronic bipeds for warehouse work such as moving goods around. But outside of those narrow applications, there is not much real need for these machines.
It’s no accident that humanoid robots and AGI are the fever dreams of nerds raised on science fiction (I say this as one of them). Both assume that the human form, whether physical or mental, is the pinnacle of evolution.
The human mind, however, far from being a “general” intelligence, evolved for the very specific job of operating a human body, which itself is the outcome of evolutionary compromises. Simple computers can outperform us at any number of brute-force tasks. Even our powers of abstract reasoning are constrained. Just try visualising a nine-dimensional mathematical object or synthesising the arguments of 100 philosophy books at once. AI can already do a better job.
And yet, we are extraordinary in other ways. Think of simple but delicate tasks like washing a wine glass, changing a nappy or placing a flower in an arrangement. Now imagine doing them with your eyes closed. Notice how little the task depends on seeing and how much it relies on your skin's astonishingly fine-grained sensitivity, as well as on your ability to gauge tension, pressure and weight through the muscles of your fingers, hands and arms. Embedding a tactile sensorium in every cubic millimetre of the body is an engineering challenge orders of magnitude harder than vision, which is itself not yet a fully solved problem in robotics.
Could a home-care robot ever be that sensitive? And even if it could, why confine it to two legs and two arms? Why not four legs and six arms, so it can pick up an elderly or sick person comfortably and move them without fear of tripping?
One argument is that such machines would look scary. The other is that human-shaped robots suit environments designed for humans.
Both points have some merit. Regardless, no robot, humanoid or otherwise, will be as versatile as we are, and certainly not at a price anyone could afford to pay.
Just as there is a “jagged frontier” in AI, where systems that excel at one task can be abysmal at a closely related one, a robot’s capabilities will probably be wildly inconsistent. Flower arranging might be possible — after all, the worst that can happen is a crushed flower — but reliable nappy changing and elder care might be forever out of reach.
In short, expect robots to remain specialised.
As for that machine superintelligence we started with: it almost certainly wouldn't pick a single body design to carry its mind around in. Why limit computational capacity to what fits inside one skull? Or constrain mobility to what one set of limbs can do?
Instead, it would sit in its server farm and connect wirelessly to whichever robot shell suited its needs at that moment. It might even try out a human form from time to time — just for the novelty.