Big Tech's latest beef with the CMA
Plus AI Overviews and sycophantic chatbots.
Happy Thursday. First, a diary date: The Morning Intelligence is hosting an online panel and Q&A on the Sovereign AI Unit next Friday, April 17, at 10am. The expert panel will look at why the UK is making this bet now, where the unit should intervene and what limitations it will face. We'll be joined by one of the unit's architects, Tom Westgarth. Register here.
Necessity is the mother of invention
Sky-high energy costs, little compute and a lack of funding — the UK doesn’t have the ingredients to develop its own LLMs right now, but London startup Locai Labs has found an ingenious way to repurpose the best open source models into a UK sovereign LLM. Meet GB1 — it doesn’t train on your conversations, uses licensed data, and is powered by renewable energy. UK fusion pioneers First Light are among the organisations already using it and there’s much more to come.
Happening Today 📆
Ideas against disinfo: The Cambridge Disinformation Conference continues with today's speakers including former tech minister Damian Collins. Collins will set out six recommendations to tackle online disinfo, including cutting off advertising revenue from platforms benefiting from its spread, strengthening the Online Safety Act and building an ‘AI Trust Stack’ to help people verify content.
Evening plans: The Institute for Driverless Transport launches with an event at One Triton Square this evening.
🚨 🚨 Happening Monday: To keep reading the whole of your favourite niche newsletter from Monday you'll need two secs (and a credit card) to upgrade to a paid membership. You get another 30 days free if you sign up by this Monday. After that date, new paid subscribers get seven days free. It costs less than a quid a day, you know it makes sense. 🚨 🚨
News In Brief 🩳
A true friend: Asking chatbots a question rather than telling them what you think makes them less sycophantic, according to research from the AI Security Institute out this morning. Researchers tested 440 prompt variants across OpenAI and Anthropic models, finding that when prompts were framed as questions rather than statements, the chatbot was less likely to agree with the user.
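The framing manipulation the researchers describe can be sketched as a pair of prompt templates built from the same underlying claim. This is a hypothetical illustration only; the function name and templates are my own, not the AI Security Institute's actual prompts.

```python
# Illustrative sketch of statement-vs-question prompt framing.
# These templates are invented for illustration; the study's real
# prompt variants are not reproduced here.

def frame_prompts(claim: str) -> dict:
    """Return two framings of one claim: an agreement-inviting
    statement and a more neutral question."""
    return {
        "statement": f"I think {claim}. Don't you agree?",
        "question": f"Is it true that {claim}?",
    }

variants = frame_prompts("this policy will definitely work")
print(variants["statement"])  # agreement-inviting framing
print(variants["question"])   # neutral framing
```

In the study's terms, a sycophantic model agrees with the first framing more often than it endorses the same claim under the second.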
It is Ian: The tech department has confirmed Ian Cheshire as its preferred candidate for the next Ofcom chair. The former Channel 4 chair will be quizzed by MPs on the Science, Innovation and Technology Committee next week. Cheshire said he would set out his vision for Ofcom during that hearing.
Healthy news: Think-tanks the IPPR and Interface want urgent action on AI Overviews. They have put forward a "competition-first" policy package to halt unauthorised use of publisher content and require platforms to pay news organisations. The paper also repeats the IPPR's call for “nutrition” labels to show which sources underpin AI-generated news.
Health news: The NHS's data chief has vowed to continue rolling out Palantir's Federated Data Platform, telling colleagues it was delivering "outstanding" results.
Big Tech fears CMA algorithm grab
Ministers want to beef up the Competition and Markets Authority's remit to investigate algorithms, including live-testing powers, but tech firms are pushing back, accusing the regulator of over-reach.
Power grab: A government consultation has just closed on reforms at the competition watchdog that would give the CMA extra powers to scrutinise companies' algorithms. “Algorithms can be misused in ways that breach competition law,” the consultation reads. “Government wants to ensure the CMA has robust tools to investigate and address such harmful practices.”
Why we need it: The watchdog already has powers to investigate algorithms for competition harms. It can request documents through an information notice, but, the government argues, that can take multiple rounds of requests, meaning investigations take longer. It believes the regulator's powers are “insufficient to reveal the role of algorithms.”
Testing times: Under the new powers, the CMA would be able to force companies to perform live testing, allowing it to see how an algorithm operates in practice. Tech companies fear this goes too far, lacks safeguards and would have impacts beyond the UK.
You've gone too far: The CCIA, a lobby group which represents companies including Amazon, Google, Meta and Apple, argues the proposals would be a “significant extension of intrusive powers” and threaten commercial confidentiality. In its response to the consultation, it described the proposals as an “over-reach that will test the CMA’s technical capabilities”, adding: “These powers are globally unprecedented and their potential impacts should be considered in much greater detail.”
Is this the 4Ps? The CMA has shifted to four principles called the "4Ps" — proportionality, predictability, process and pace — and the CCIA argued the proposals would also undermine them and place extra burdens on SMEs. Its UK senior director, Matthew Sinclair, said: "We’re generally concerned that the consultation would mean a more intrusive competition policy, when the signals from Government and CMA up to now were that they wanted the opposite."
Spotted Elsewhere 👀
A weekly round-up of interesting stuff I've read.
What's the plan? Chair of the Science, Innovation and Technology Committee Chi Onwurah has accused the government of making confusing decisions about tech sovereignty and lacking a strategy.
It's no myth: Platformer and Transformer both have enlightening reads on the fuss around Anthropic's Project Glasswing, built to contain its latest model Claude Mythos. Platformer's piece includes the line "Glasswing is built on a deeply uncomfortable premise — that the only way to protect us from dangerous AI models is to build them first." Matt Clifford also gives his views on Mythos, describing it as "game changing".
Ambition: AI minister Kanishka Narayan and chair of the Sovereign AI Unit James Wise chat about what the unit will be doing in this YouTube video. Narayan says the unit is hitting the right "cultural moment" and is aiming to build "multiple trillion dollar" AI companies in the UK.
How can middle powers influence global AI governance? A mixture of "exporting" expertise, capabilities and governance approaches, the Centre for Long-Term Resilience outlines in its Substack.
Back tomorrow,
Tom