
The AI Race Just Went Nuclear — Own the Rails.

Meta, Google, and Microsoft just reported record profits — and record AI infrastructure spending:

  • Meta boosted its AI budget to as much as $72 billion this year.

  • Google raised its estimate to $93 billion for 2025.

  • Microsoft is following suit, investing heavily in AI data centers and decision layers.

While Wall Street reacts, the message is clear: AI infrastructure is the next trillion-dollar frontier.

RAD Intel already builds that infrastructure — the AI decision layer powering marketing performance for Fortune 1000 brands. Backed by Adobe, Fidelity Ventures, and insiders from Google, Meta, and Amazon, the company has raised $50M+, grown valuation 4,900%, and doubled sales contracts in 2025 with seven-figure contracts secured.

Shares remain $0.81 until Nov 20, then the price changes.

👉 Invest in RAD Intel before the next share-price move.

This is a paid advertisement for RAD Intel made pursuant to a Regulation A+ offering and involves risk, including the possible loss of principal. The valuation is set by the Company, and there is currently no public market for the Company's Common Stock. Nasdaq ticker “RADI” has been reserved by RAD Intel, and any potential listing is subject to future regulatory approval and market conditions. Investor references reflect factual individual or institutional participation and do not imply endorsement or sponsorship by the referenced companies. Please read the offering circular and related risks at invest.radintel.ai.

Hey, this story from Google is crazy.

Malware Just Learned to Rewrite Itself (Welcome to the Worst Timeline)

So here's a fun development: Russian military intelligence just deployed malware that asks AI to write its own code in real time. Not theoretical. Not a research paper. Active operations, right now, targeting Ukrainian government networks.

Google caught two separate malware families—PROMPTFLUX and PROMPTSTEAL—literally querying LLMs during execution to generate new attack code on the fly. PROMPTSTEAL, attributed to Russia's APT28, masquerades as an innocent image generator while secretly asking an open-source AI model "hey, how do I steal data from this Windows system?" Then it just... executes whatever the AI suggests.

The mechanism is genuinely clever: instead of hard-coding attack commands that antivirus can fingerprint, the malware queries Gemini or Hugging Face APIs every hour, requests fresh obfuscation techniques, saves the mutated version, and keeps going. One variant literally prompts the AI with "act as an expert VBScript obfuscator"—and it works.

Here's why this matters: Traditional malware defense relies on signatures and known patterns. But if malware rewrites itself hourly using AI? Those signatures become useless almost immediately. It's like trying to catch someone who changes their face every 60 minutes.
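To see how brittle signature matching is, here's a toy sketch (mine, not from Google's report — the snippets and the `signature` helper are invented for illustration). It hashes two functionally identical scripts, where the second is a trivially "mutated" rewrite, and shows the stored signature no longer matches:

```python
import hashlib

# Two functionally identical scripts; the second is a trivial
# mutation (renamed variable, stripped whitespace).
variant_a = "total = 1 + 2\nprint(total)"
variant_b = "t=1+2\nprint(t)"

def signature(code: str) -> str:
    """A toy 'antivirus signature': just a SHA-256 of the raw bytes."""
    return hashlib.sha256(code.encode()).hexdigest()

# The signature recorded for variant A is useless against variant B,
# even though both scripts do exactly the same thing.
print(signature(variant_a) == signature(variant_b))  # → False
```

Real antivirus signatures are smarter than a raw hash, but the underlying problem is the same: they fingerprint *form*, and an LLM can produce a new form on every query while preserving behavior.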

Yes, current samples are still detectable and somewhat clunky—security researcher Marcus Hutchins correctly notes they're not operationally perfect yet. But that's missing the point entirely. These are proofs of concept from June 2025 that are already deployed. The five-month gap between discovery and disclosure means we're seeing yesterday's experiments, not today's capabilities.

The defenders are now racing an opponent that literally evolves faster than their detection systems update.
