The UK's £2 billion AI bill just arrived (Almost)
The AI Truth Nobody Wants You to Know
Your government is about to make a £2 billion mistake. And you're paying for it.
Here's what happened last week while you were scrolling TikTok:
The UK almost bought ChatGPT for every single person in Britain.
Not just you. Your nan. Your neighbour who still uses Internet Explorer. All 69.6 million of us.
The price tag? £2 billion of your money.
The Deal That Almost Happened
Peter Kyle sits across from Sam Altman in San Francisco. Kyle runs UK tech policy. Altman runs OpenAI.
They're talking about the biggest AI deal in history.
"What if we gave ChatGPT Plus to everyone?" Altman asks.
Kyle probably choked on his coffee.
ChatGPT Plus normally costs £20 a month. At that retail price, covering everyone in Britain would run well past £16 billion a year; the figure actually floated was a bulk deal worth roughly £2 billion annually. Straight from the Treasury.
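The gap between the retail price and the reported deal figure is worth making explicit. Here is a back-of-envelope check using the article's own numbers (the constant names are mine; the per-person price is derived arithmetic, not a reported figure):

```python
# Back-of-envelope check on the figures in the article.
UK_POPULATION = 69_600_000       # ~69.6 million people
RETAIL_MONTHLY_GBP = 20          # ChatGPT Plus retail price, GBP/month
DEAL_ANNUAL_GBP = 2_000_000_000  # reported deal figure, GBP/year

# At retail, covering everyone would cost far more than GBP 2bn a year:
retail_annual = UK_POPULATION * RETAIL_MONTHLY_GBP * 12
print(f"Retail cost for all of Britain: £{retail_annual / 1e9:.1f}bn/year")
# → Retail cost for all of Britain: £16.7bn/year

# So the GBP 2bn figure implies a steep bulk discount per person:
per_person_monthly = DEAL_ANNUAL_GBP / UK_POPULATION / 12
print(f"Implied bulk price: £{per_person_monthly:.2f} per person per month")
# → Implied bulk price: £2.39 per person per month
```

In other words, the deal on the table priced ChatGPT Plus at roughly a tenth of what you'd pay for it yourself.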
But here's the thing that should terrify you...
Kyle "never really took it seriously" because of the cost.
Not because it's a bad idea. Not because of privacy concerns. Not because maybe people should choose for themselves.
Just the money.
What They Actually Signed Instead
They did sign something. A "memorandum of understanding."
Fancy words for "let's test AI everywhere in government."
Schools. Hospitals. Courts. Defense systems.
Kyle already uses ChatGPT to write government policy. He asks it for business advice. The civil servants have their own AI helper called "Humphrey."
Your taxes are funding AI to make decisions about your life.
And you had zero say in it.
Meanwhile, 300 Miles Away
Robert Dillon is eating breakfast when police knock on his door.
"You're under arrest for trying to lure a child."
The crime happened 300 miles from his house. He's never been to Jacksonville Beach.
But the AI said he did it.
93% confident.
How the Algorithm Destroyed a Life
Here's what actually happened:
1. Someone commits a crime near a McDonald's.
2. Police feed security footage into facial recognition software.
3. The computer spits out Robert's face.
4. Police put his photo in a lineup.
5. Witnesses point at him.
6. He gets arrested.
No alibi check. No investigation. Just trust the machine.
Nine months later, charges dropped. His life? Already ruined.
The computer was wrong.
This Isn't Rare
At least seven people wrongfully arrested because AI thought it knew better.
Six were Black. One wasn't.
Spot the pattern?
These systems work great on perfect photos in perfect lighting. Real life isn't perfect.
But police trust them anyway.
Detroit got sued. Now they can't arrest someone just because an algorithm says so.
20+ American cities banned police facial recognition entirely.
The UK is still "creating governance frameworks."
Translation: they haven't figured it out yet.
The Real Problem Nobody Talks About
Your government wants AI everywhere. Fast.
They're spending £14 billion on "AI growth." Creating "AI zones." Hiring AI companies as consultants.
But they're not asking the right questions:
What happens when it's wrong?
Who pays when lives get destroyed?
Why should a computer decide guilt or innocence?
They're asking: "How do we get more AI?"
That's backwards.
What This Means for You
Two stories. Same problem.
Story 1: The UK wants to buy AI for everyone with your money.
Story 2: AI destroys innocent lives when police trust it too much.
Both show governments rushing toward AI without thinking it through.
They see efficiency. Cost savings. Headlines about being "world leaders."
You see the bill. And the consequences when it goes wrong.
The Question Nobody's Asking
If the UK had bought ChatGPT for everyone, what happens when it gives dangerous medical advice to your elderly parent?
When it helps write discriminatory housing policies?
When it recommends harsh sentences in court cases?
"It was just the AI" isn't good enough.
But that's exactly what they'll say.
Here's What You Need to Know
Next time you hear about a new AI initiative, ask three questions:
1. Who profits from this?
2. Who pays when it fails?
3. Did anyone ask if we actually need it?
The answers will tell you everything.
The Bottom Line
Your government is betting your money and your future on systems that get basic facts wrong.
They're doing it because AI companies promise the moon.
And because admitting they don't understand the technology would look bad in headlines.
But when the algorithm flags you as a criminal...
When the AI advisor gives your doctor wrong information...
When the computer decides you don't qualify for benefits...
You won't care about headlines.
You'll want a human who can think.
Remember that next time someone promises AI will solve everything.
P.S. - The facial recognition system that wrongly arrested Robert Dillon? It searches through 11 million mugshots and 22 million driver's license photos. Every month. Your photo might already be in there.
Here’s a list of sources used for research in the article above:
Times of India: “Sam Altman may have a $2.5 billion 'plan' to offer access to ChatGPT's premium plan for free to all users of this country”
Euro Weekly News: “UK and OpenAI discussed ChatGPT for the entire population” (https://euroweeklynews.com/2025/08/24/uk-and-openai-discussed-chatgpt-for-the-entire-population/)
Independent: “Minister discussed £2bn deal to give ChatGPT Plus to all UK ...” (https://www.independent.co.uk/news/uk/politics/peter-kyle-chatgpt-open-ai-sam-altmann-b2813232.html; also via Yahoo News: https://uk.news.yahoo.com/minister-discussed-2bn-deal-chatgpt-131147262.html)
Reuters: “UK and ChatGPT maker OpenAI sign new strategic partnership” (https://www.reuters.com/world/uk/uk-chatgpt-maker-openai-sign-new-strategic-partnership-2025-07-21/)
BBC News: “OpenAI and UK sign deal to use AI in public services”
GOV.UK: “OpenAI to expand UK office and work with government departments ...”; “Memorandum of Understanding between UK and OpenAI ...”
OpenAI: “OpenAI and UK Government announce strategic partnership ...” (https://openai.com/global-affairs/openai-and-uk-government-partnership/)
Economic Times: “Sam Altman offered ChatGPT Plus for all UK citizens in £2-billion deal”
Digital Trends: “OpenAI deal could bring ChatGPT Plus to an entire country” (https://www.digitaltrends.com/computing/openai-deal-could-bring-chatgpt-plus-to-an-entire-country/)
Action News Jax: “AI, Wrong Guy: Investigating the use and dangers of artificial ...” (also via Yahoo News: https://www.yahoo.com/news/ai-wrong-guy-investigating-dangers-202312086.html)
Biometric Update: “Florida man's facial recognition match not probable cause for ...”
PetaPixel: “Man is Wrongfully Jailed For Heinous Crime Due To Facial ...”
Quadrangle (Michigan Law): “Flawed Facial Recognition Technology Leads to Wrongful Arrest ...”
ACLU: “The Untold Number of People Implicated in Crimes They Didn't ...”; “Police Say a Simple Warning Will Prevent Face Recognition ...”; “Williams v. City of Detroit” (https://www.aclu.org/cases/williams-v-city-of-detroit-face-recognition-false-arrest); “Civil Rights Advocates Achieve the Nation's Strongest Police ...”
Innocence Project: “Artificial Intelligence is Putting Innocent People at Risk of Being ...”
NPR: “Facial Recognition Leads To False Arrest Of Black Man In Detroit”
CBS News Detroit: “Detroit police to change use of facial recognition technology after ...”
New York Times: “Facial Recognition Led to Wrongful Arrests. So Detroit Is Making ...” (https://www.nytimes.com/2024/06/29/technology/detroit-facial-recognition-false-arrests.html)
Security Industry Association: “U.S. States and Cities Rethinking Bans, Setting Rules for Law ...”
Biometrics Institute: “Biometrics Institute Report Calls for Unified UK Policy on Police ...”
MIT Technology Review: “AI is sending people to jail—and getting it wrong” (https://www.technologyreview.com/2019/01/21/137783/algorithms-criminal-justice-ai/)
Amnesty International: “Racial bias in facial recognition algorithms” (https://amnesty.ca/features/racial-bias-in-facial-recognition-algorithms/)
ScienceDirect, Sage Journals, PMC, arXiv: various studies on demographic bias and accuracy in facial recognition (https://www.sciencedirect.com/science/article/pii/S1071581925001053)
College of Policing UK: “Live facial recognition technology guidance published” (https://www.college.police.uk/article/live-facial-recognition-technology-guidance-published)
GOV.UK: “Live Facial Recognition technology to catch high-harm offenders” (https://www.gov.uk/government/news/live-facial-recognition-technology-to-catch-high-harm-offenders)
CBC: “How a New Jersey man was wrongly arrested through facial ...” (https://www.cbc.ca/news/canada/facial-recognition-technology-police-1.7228253)