
Did Google and Yale Just Cure Cancer?


Big Tech's Classroom Gambit & The AI That Actually Found Something

Listen, we need to talk about what's happening in American classrooms right now—and I'm not talking about the usual funding crisis or standardized testing nightmare. I'm talking about Microsoft, OpenAI, and Anthropic dropping $23 million to train 400,000 teachers on AI. And while that's going down, Google's AI just casually discovered a potential cancer therapy that scientists then validated in actual living cells.

Yeah. What the actual heck is happening?

The $23 Million Question

Here's the setup: In July 2025, the American Federation of Teachers—that's 1.8 million educators, the second-largest teachers union in the country—announced they're partnering with the biggest names in AI to launch something called the National Academy for AI Instruction. Microsoft's in for $12.5 million over five years. OpenAI's throwing in $8 million cash plus another $2 million in API credits and computing power. Anthropic kicked in $500,000 to get the party started.

The goal? Train one in ten U.S. teachers—400,000 people—on how to actually use AI in classrooms.

Now, before you start screaming about corporate takeover of education (valid impulse, hold that thought), here's the kicker: AFT President Randi Weingarten says they went to these companies, not the other way around. "There is no one else who is helping us with this," she explained. "That's why we felt we needed to work with the largest corporations in the world."

It's a fascinating admission. The education system is so starved for resources and expertise on this stuff that the unions had to go hat-in-hand to Big Tech. And they tried to structure the deal with some guardrails—teachers and unions own the IP developed through the program, and the training is supposed to be "tool-agnostic," meaning they're not just teaching teachers to shill ChatGPT.

Here's why this matters: AI adoption in schools is already happening at breakneck speed, with or without training, with or without policy, with or without anyone really knowing what the hell they're doing.

The Numbers Are Bananas

Let's break down what's actually going on in classrooms right now:

Teacher adoption jumped from 46% to 60% in just one school year (2023-24 to 2024-25). But here's the thing: 58% of K-12 teachers still have zero formal AI training. That's nearly two years after ChatGPT launched and became the fastest-growing consumer app in history.

Student use is even wilder: 70% of teens have used generative AI. Another study found 89% of students admit to using ChatGPT for homework. Homework! The thing we've been assigning since the dawn of formal education is now being outsourced to a chatbot, and only 31% of schools have policies about it.

One high school English teacher at a training session asked the question everyone's thinking: "Are we going to be replaced with AI?"

The trainer, Kathleen Torregrossa, opened her workshop in San Antonio with a reality check: "We all know, when we talk about AI, teachers say, 'Nah, I'm not doing that.' But we are preparing kids for the future. That is our primary job. And AI, like it or not, is part of our world."

She's right. But that doesn't make the anxiety about job displacement—or the wholesale transformation of what teaching even means—any less real.

What Could Possibly Go Wrong?

The critics are not holding back. Matt Miller, an Indiana high school Spanish teacher who's written six books for educators, put it bluntly: "The AFT/OpenAI/Anthropic partnership scares the crap out of me. Whenever you get that marriage between an organization and big companies, we just keep asking ourselves, 'Oh, yeah, what could go wrong?'"

His concern? That despite claims of tool-agnostic training, the whole thing will eventually "funnel back to their product." Because that's how these partnerships always work, right? The training sessions feel neutral, but somehow you end up in their ecosystem.

Alex Kotran, CEO of the AI Education Project, called the $23 million "a bit of a drop in the bucket" considering these companies' valuations. "Symbolic at best," he said. Which, fair point—OpenAI alone is valued at over $150 billion. This is pocket change for them, but it buys influence in every classroom where these teachers work.

Even Microsoft President Brad Smith acknowledged teachers should maintain "a healthy dose of skepticism" about tech company involvement. "While it's easy to see the benefits right now, we should always be mindful of the potential for unintended consequences," Smith said, pointing to concerns about AI's impact on critical thinking. "We have to be careful. It's early days."

When the president of the company funding the initiative tells you to be skeptical, maybe listen?

The Federal Government Finally Shows Up

Recognizing this is all spiraling out of control, President Trump signed Executive Order 14277 in April 2025, establishing the White House Task Force on Artificial Intelligence Education. It's a whole alphabet soup of agencies—Education, Labor, Agriculture, Energy—all coordinating to figure out how to handle AI in schools.

The task force is supposed to foster "appropriate integration" of AI into education (whatever that means), provide comprehensive training for educators, and develop an "AI-ready workforce" starting in K-12. They're setting up public-private partnerships with AI companies, universities, and nonprofits to create online resources.

So now we've got Big Tech training teachers directly through union partnerships, AND the federal government creating task forces to coordinate with Big Tech on education policy. The thing is, both of these efforts are racing to catch up with something that's already happening in every classroom in America.

Meanwhile, at the Lab...

Okay, let's pivot to something that's actually kind of mind-blowing. On October 15, Google DeepMind and Yale University announced that their AI model successfully generated a novel hypothesis about cancer—and then scientists validated it experimentally.

Not "analyzed existing data." Not "predicted protein structures." Generated an original scientific hypothesis that turned out to be correct.

The model is called C2S-Scale 27B (Cell2Sentence-Scale), a 27-billion-parameter beast built on Google's open-source Gemma architecture. It's designed to understand single-cell biology by treating cellular data like language. Train it on enough cells, and apparently it starts having ideas.
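If "treating cellular data like language" sounds abstract, here's a minimal Python sketch of the general Cell2Sentence idea as I understand it from the published work: rank a cell's genes by expression and read the top gene symbols off as a sentence a language model can ingest. The gene names and counts below are invented for illustration, not taken from the actual training data or model interface.

```python
# Minimal sketch of the "cells as sentences" idea: rank genes by expression,
# keep the top-k gene symbols, and join them into a space-separated sentence.
# Gene names and expression values are made up for illustration.

def cell_to_sentence(expression: dict[str, float], k: int = 5) -> str:
    """Convert one cell's gene-expression profile into a rank-ordered sentence."""
    ranked = sorted(expression.items(), key=lambda item: item[1], reverse=True)
    return " ".join(gene for gene, count in ranked[:k] if count > 0)

fake_cell = {"ACTB": 120.0, "CD3E": 42.0, "GZMB": 17.0, "IL7R": 9.0, "MKI67": 0.0}
print(cell_to_sentence(fake_cell, k=4))  # -> "ACTB CD3E GZMB IL7R"
```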

Sundar Pichai announced it on social media with the kind of restrained excitement CEOs deploy when they think they might have stumbled into something huge: "An exciting milestone for AI in science... With more preclinical and clinical tests, this discovery may reveal a promising new pathway for developing therapies to fight cancer."

Cold Tumors and Hot Leads

Here's the cancer challenge the AI was trying to solve: Many tumors are "cold"—invisible to your immune system. They don't display enough antigens (immune-triggering signals) on their surface, so your immune cells just cruise right past them like they're not even there.

The goal is to make these cold tumors "hot" by forcing them to display those signals through a process called antigen presentation. But you can't just blast every cell with immune activation—that would cause massive inflammation and potentially kill the patient. You need something smart: a therapy that only boosts the immune signal when there's already a little bit of immune context present, but not enough to work on its own.

This is sophisticated, conditional reasoning. You need a drug that acts as an amplifier, but only in a specific environment.

The Yale and Google teams tasked C2S-Scale 27B with finding exactly that: a "conditional amplifier" that would boost antigen presentation only in immune-context-positive environments (real patient tumor samples with low-level interferon signaling) but do nothing in immune-context-neutral environments (isolated cell cultures with no immune context).

The model virtually screened over 4,000 drugs across both contexts. It found some known candidates—about 10-30% of its hits were already in the literature. But the rest? "Surprising hits" with no prior known link to antigen presentation.
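To make the "conditional amplifier" logic concrete, here's a toy sketch of that dual-context screen: score every drug's predicted boost to antigen presentation in both contexts, then keep only the ones that light up with immune context and do nothing without it. The predict_boost function and its random scores are stand-ins of my own, not the actual C2S-Scale interface or its numbers.

```python
# Toy version of the dual-context screen: keep drugs predicted to boost antigen
# presentation only when low-level immune (interferon) context is present.
# predict_boost is a placeholder returning random scores, not a real model call.
import random

random.seed(0)
drugs = [f"drug_{i}" for i in range(4000)]  # stand-in for the ~4,000 screened drugs

def predict_boost(drug: str, immune_context: bool) -> float:
    """Placeholder for a model's predicted antigen-presentation boost."""
    return random.random() if immune_context else random.random() * 0.3

conditional_amplifiers = [
    d for d in drugs
    if predict_boost(d, immune_context=True) > 0.8    # strong effect with immune context
    and predict_boost(d, immune_context=False) < 0.1  # near-zero effect without it
]
print(len(conditional_amplifiers), "candidate conditional amplifiers")
```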

The Silmitasertib Surprise

The big discovery was silmitasertib (also known as CX-4945), a CK2 kinase inhibitor that had never been reported to enhance antigen presentation. The AI predicted something specific and weird: this drug would dramatically increase antigen presentation when combined with low-dose interferon, but would do basically nothing on its own or in the wrong context.

Yale researchers tested it using human neuroendocrine cell models that weren't in the AI's training data. The results:

  • Silmitasertib alone: no effect

  • Low-dose interferon alone: modest effect

  • Silmitasertib + low-dose interferon: roughly 50% increase in antigen presentation

The AI was right. It had discovered a genuine synergistic effect—a way to make cold tumors more visible to the immune system, but only under the right conditions. This isn't just data crunching; it's hypothesis generation that held up to real-world experimental validation.
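For what "synergistic" means here in plain arithmetic: the combination beat what you'd expect from simply adding the two individual effects. The numbers below are rough stand-ins for the reported pattern (the interferon-alone figure is invented; the result is only described as "modest").

```python
# Rough illustration of synergy vs. additivity using stand-in numbers.
effect_silmitasertib = 0.0   # alone: no measurable boost
effect_interferon = 0.10     # alone: "modest" boost (value invented for illustration)
effect_combination = 0.50    # together: roughly 50% increase, per the reported result

additive_expectation = effect_silmitasertib + effect_interferon
print(f"expected if merely additive: {additive_expectation:.0%}")  # 10%
print(f"observed in combination:     {effect_combination:.0%}")    # 50%
print("synergistic" if effect_combination > additive_expectation else "additive")
```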

David van Dijk, the Yale professor who led the collaboration, was pumped: "Working with Google Research and DeepMind has been truly exhilarating. I've come to realize that collaborations like this—uniting Yale's world-class School of Medicine and Department of Computer Science with industry leaders—are absolutely instrumental for projects of this scale. Their engineering expertise and massive compute resources make possible what simply wouldn't be achievable otherwise."

What This Actually Means

The C2S-Scale model represents an evolution from Google's earlier AI breakthrough, AlphaFold, which predicted protein structures from sequences. AlphaFold was transformative—it solved a 50-year-old problem in biology. But it was fundamentally analyzing patterns in existing data.

This new model is doing something different: generating novel, testable hypotheses. It's moving from pattern recognition to scientific creativity. And because they trained it on over 57 million cells from 800+ public datasets, it has a genuinely comprehensive understanding of cellular behavior.

The model is now available open-source on Hugging Face and GitHub. The research team published their findings on bioRxiv, which means other scientists can start building on this work immediately.
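If you want to poke at it yourself, loading an open Gemma-based checkpoint from Hugging Face usually looks something like the sketch below, using the transformers library. The repo id here is a placeholder and the prompt format is a guess; check the official C2S-Scale release pages for the real model name and usage instructions.

```python
# Hedged sketch of loading an open Gemma-based checkpoint with transformers.
# The repo id is a placeholder, NOT the confirmed C2S-Scale model name.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "some-org/c2s-scale-27b"  # placeholder; find the real id on Hugging Face
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

prompt = "CD3E GZMB IL7R"  # a "cell sentence" style prompt; the actual format may differ
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Fair warning: a 27-billion-parameter model needs serious GPU memory, so most people will want a quantized or hosted version rather than running it locally.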

Is this the "moonshot moment" for AI in medicine that people have been predicting? Maybe. It's early—this needs way more preclinical and clinical validation before it becomes a real therapy. But it's a hell of a proof of concept.

The Uncomfortable Parallel

So here's where these two stories intersect in a weird way: Both involve Big Tech deploying AI systems into domains where humans are supposed to be the experts—education and scientific research—and both raise profound questions about who benefits and who gets left behind.

In classrooms, we're training teachers to use AI tools built by companies that have a financial interest in making those tools indispensable. The training might be "tool-agnostic," but the companies funding it sure aren't. And even with the best intentions, there's a power imbalance when cash-strapped public institutions partner with the richest corporations in the world.

In cancer research, we're watching AI systems make genuine scientific contributions—which is amazing!—but the computational resources required are so massive that only a handful of organizations can afford to build these models. Google has the compute. Yale has the medical expertise. Together they can do things that would be impossible for 99% of research institutions.

This is regulatory capture and resource concentration playing out in real time, dressed up as innovation and partnership.

What's Next?

For AI in education, the immediate future looks like more partnerships, more task forces, and way more chaos. District-provided training for teachers increased from 23% to 48% in just one year, which sounds good until you realize that still leaves half of school districts providing zero guidance. Meanwhile, 89% of students are already using ChatGPT for homework.

The $23 million from Microsoft, OpenAI, and Anthropic will train 400,000 teachers over five years. That's meaningful. But there are about 3.7 million teachers in U.S. public schools. Even if this program hits every target, that's about 10% coverage. The other 90% are on their own, figuring it out as they go, with students who are already three steps ahead of them.

For AI in medical research, we're looking at an acceleration of discovery that could be genuinely transformative—if we can figure out how to democratize access to these tools. The C2S-Scale model is open-source, which is huge. But training these massive models requires computational resources that most academic institutions simply don't have. The gap between what Google/Microsoft/Meta can do with AI and what everyone else can do is growing, not shrinking.

President Trump's Executive Order 14277 establishing the AI education task force is the kind of thing that sounds important but might end up being just another interagency committee that issues reports nobody reads. The real action is happening in individual districts where superintendents and principals are making it up as they go.

The National Education Association (NEA)—the bigger teachers union with 3 million members—announced their own separate partnership with Microsoft in September 2025. Microsoft gave them $325,000 to develop "microcredentials" for AI training. That's... not a lot of money for 3 million people. But it shows that even the organizations theoretically positioned to negotiate on behalf of educators don't have much leverage here.

RAND Corporation researcher Christopher Doss nailed the fundamental ambiguity: "There's a lot of gray area about what a student would use AI for. So, for example, if they're writing an essay, and they ask the AI to critique something that they wrote themselves, is that cheating? And then if they edit it off of that, is that cheating?"

Nobody knows. We're all just making it up.

The silmitasertib discovery is now in the hands of Yale researchers who are exploring the mechanism and testing additional predictions in other immune contexts. If it pans out—and that's a big if—we're looking at years of additional research before it becomes an actual therapy. But the blueprint is there: build bigger models, train them on comprehensive datasets, and let them generate hypotheses that humans can test.

The scary part? Both of these stories—AI in classrooms and AI in cancer research—are about systems being deployed faster than we can understand their implications. We're figuring out the rules while the game is already in progress.

And the companies building these tools? They're not waiting around for us to catch up.

The C2S-Scale 27B model and resources are available at Hugging Face and GitHub. The full research preprint is published on bioRxiv. If you're a teacher looking for AI training resources, check out the AFT's National Academy for AI Instruction, or your district's professional development programs. If they don't have one, well, you're in the majority.
