London Herald

Big Tech’s push into military AI is troubling

By Jaxon Bennett | June 25, 2025 | Tech



The writer is programme director of the Institute for Global Affairs at Eurasia Group

When OpenAI and Mattel announced a partnership earlier this month, there was an implicit recognition of the risks. The first toys powered by artificial intelligence would not be for children under 13.

Another partnership last week came with seemingly fewer caveats. OpenAI separately revealed that it had won its first Pentagon contract. It would pilot a $200mn programme to “develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains,” according to the US Department of Defense. 

That a major tech company could launch military work with so little public scrutiny epitomises a shift. The national security application of everyday apps has in effect become a given. Armed with narratives about how they’ve supercharged Israel and Ukraine in their wars, some tech companies have framed this as the new patriotism, without having a conversation about whether it should be happening in the first place, let alone how to ensure that ethics and safety are prioritised.

Silicon Valley and the Pentagon have always been intertwined, but this is OpenAI’s first step into military contracting. The company has been building a national security team with alumni of the Biden administration, and only last year did it quietly remove a ban on using its apps for such things as weapons development and “warfare.” By the end of 2024, OpenAI had partnered with Anduril, the Maga-aligned mega-startup headed by Palmer Luckey.

Big Tech has changed dramatically since 2018, when Google staffers protested against a secret Pentagon effort called Project Maven over ethical concerns, which led the tech giant to let the contract expire. Now, Google has totally revised its approach.

Google Cloud is collaborating with Lockheed Martin on generative AI. Meta, too, changed its policies so that the military can use Llama AI. Big Tech stalwarts Amazon and Microsoft are all in. And Anthropic has partnered with Palantir to get Claude to the US military.

It’s easy to imagine AI’s advantages here, but what’s missing from public view is a conversation about the risks. It’s now well-documented that AI sometimes hallucinates, or takes on a life of its own. On a more structural level, consumer tech may not be secure enough for national security uses, experts have warned.

Many Americans and western Europeans share this scepticism. My organisation’s recent survey of the US, UK, France and Germany found that majorities support stricter regulations when it comes to military AI. People worry it could be weaponised by adversaries — or used by their own governments to surveil citizens.

Respondents were offered eight statements, half emphasising AI’s benefits to their country’s military and half emphasising the risks. In the UK, less than half (43 per cent) said that AI would help their country’s military improve its workflow, while a large majority (80 per cent) said that these new technologies needed to be more tightly regulated to protect people’s rights and freedoms.

Using AI for war could, at its most extreme, mean entrusting questions of life or death to a flawed algorithm. And that’s already happening in the Middle East.

The Israeli news outlet +972 Magazine has investigated Israel’s military AI in its targeting of Hamas leaders in Gaza and reported that “thousands of Palestinians — most of them women and children or people who were not involved in the fighting — were wiped out by Israeli air strikes, especially during the first weeks of the war, because of the AI program’s decisions”.

The US military, for its part, has used AI for selecting targets in the Middle East, but a senior Pentagon official told Bloomberg last year that it wasn’t reliable enough to act on its own.

An open conversation about what it means for tech giants to work with militaries is overdue. As Miles Brundage, a former OpenAI researcher, has warned: “AI companies should be more transparent than they currently are about which national security, law enforcement and immigration related use cases they do and don’t support, with which countries/agencies, and how they enforce these rules.”

At a time of war and instability around the world, the public is clamouring for a conversation about what it really means for the military to use AI. They deserve some answers.


