London Herald
Sunday, July 13

A more intelligent approach to AI regulation

By Jaxon Bennett | July 13, 2025 | Tech | 3 Mins Read

One of the biggest policy challenges of our times will be how to regulate artificial intelligence appropriately. As the powerful general purpose technology is rapidly adopted across society and the economy, the task will be to maximise its upsides while minimising its downsides. AI is already proving a helpful boost to productivity in sectors such as software, marketing and administration. But its widespread use also raises real concerns about its more harmful impacts, ranging from algorithmic discrimination to deepfakes and disinformation. The Grok chatbot’s praise for Adolf Hitler last week underlined the myriad issues that will emerge.

To date, regulators and lawmakers have failed to grasp the full dimensions of the challenge. Since 2016, more than 30 governments have enacted some form of AI regulation, according to Unesco. But few of these initiatives match the fast-evolving scale or complexity of the issue. A better approach is possible.

In the US, the Trump administration has prioritised innovation over regulation. AI is seen as critical to help the US maintain its technological edge over China. But even though Washington has failed to pass any federal legislation concerning AI, many states have been rushing to fill the void. At least 45 have introduced 550 bills this year that focus on AI, according to the National Conference of State Legislatures, covering privacy, cyber security, employment, education and public safety.

So alarmed are some big AI companies about this piecemeal regulation that they lobbied the US Congress to impose a 10-year moratorium on all state legislation in the field. Rightly, the Senate rejected this rash idea, which had been included in the “big beautiful bill”, by 99 to one. The next logical step, though, is for Congress itself to hammer out federal legislation forestalling the need for such state activism. It makes no sense for individual states to adopt different rules on, say, autonomous vehicles. National, or ideally international, standards should apply.

If Washington is in danger of under-regulating AI, the EU risks over-regulating the technology through its AI Act, which is gradually coming into force. European start-up and industry associations have warned that the act’s overly broad provisions impose an excessive burden on smaller companies and will entrench the power of bigger incumbents. The EU last week pressed ahead with unveiling its code of practice for general purpose AI despite fierce lobbying against it.

Other technologists highlight the practical difficulties of trying to regulate the base technology itself, rather than just focusing on its applications. The intent of EU legislators may be admirable, but the AI Act risks hobbling European companies trying to exploit the technology’s potential. Start-ups fear they may end up spending more on lawyers than on software engineers to comply with the law.

Rather than seeking to regulate AI as a category in its own right, it makes more sense to focus on the technology’s applications and modify existing legislation accordingly. Competition policy should be used to check the concentration of corporate power among the big AI companies. Existing consumer, finance and employment regulations should be modified to protect rights that are long enshrined in legislation.

Instead of adopting sweeping laws that are hard to comply with and enforce, it would be smarter to concentrate on mitigating specific real-world harms and ensuring real accountability for those deploying the technology. Polling in many western countries shows users are understandably wary of the indiscriminate introduction of AI. Narrower, clearer, enforceable rules would help deepen consumer trust and accelerate its beneficial deployment.


© 2025 London Herald.