This Week in AI
This week showed AI moving decisively into its commercial phase: a record-scale funding round, ads entering chatbots, and enterprises pushing AI deeper into production workflows. At the same time, the tech stack is shifting fast, with new ultra-fast coding models, hardware plays beyond NVIDIA, and rapid model retirements signalling how quickly the frontier is evolving. Security and governance concerns are rising in parallel, as cyber-misuse, model extraction, and questions of trust become as important as capability gains.
Anthropic lands a $30B mega-round, hitting a $380B valuation
Anthropic announced a $30B Series G that values the company at $380B post-money, with the funds earmarked for frontier research, product work, and infrastructure expansion. The size of the round underscores just how aggressively capital is still chasing “frontier” model labs—especially those with strong enterprise/coding traction.
Read more:
OpenAI starts testing ads in ChatGPT
OpenAI has begun a phased test of ads in ChatGPT for logged-in adult users on the Free and Go tiers in the U.S., while Plus/Pro/Business/Enterprise/Education remain ad-free during the test. OpenAI says ads won’t affect answers and that conversations remain private from advertisers, framing ads as a way to fund broader access to more capable features.
Read more:
https://openai.com/index/testing-ads-in-chatgpt/
https://help.openai.com/en/articles/20001047-ads-in-chatgpt
GPT-4o is retired from ChatGPT
OpenAI has retired GPT-4o (plus GPT-4.1, GPT-4.1 mini, and o4-mini) from ChatGPT as of February 13, 2026, while noting there are no API changes at this time. The move formalises the ongoing shift toward newer model families—and, inevitably, forces users and teams to adapt workflows that were tuned to GPT-4o’s specific “feel.”
Read more:
https://openai.com/index/retiring-gpt-4o-and-older-models/
https://help.openai.com/en/articles/20001051-retiring-gpt-4o-and-other-chatgpt-models
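For teams whose API integrations are still pinned to GPT-4o (which OpenAI says is unaffected "at this time"), one defensive pattern is to treat the model ID as configuration with a fallback. A minimal sketch, assuming the official openai Python SDK (v1.x); the fallback model name used here is a hypothetical placeholder, not anything announced in the retirement notice:

```python
# Minimal sketch: try a pinned model, fall back if it has been retired.
# Assumes the official `openai` Python SDK (v1.x). The fallback model name
# "gpt-5.2" is a hypothetical placeholder, not confirmed by the announcement.
from openai import OpenAI, NotFoundError

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PREFERRED_MODEL = "gpt-4o"   # still available via the API, per OpenAI
FALLBACK_MODEL = "gpt-5.2"   # placeholder: set to whatever your team migrates to

def ask(prompt: str) -> str:
    """Send a prompt, retrying on the fallback model if the pinned one is gone."""
    for model in (PREFERRED_MODEL, FALLBACK_MODEL):
        try:
            response = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
            )
            return response.choices[0].message.content
        except NotFoundError:
            # Raised when the model ID no longer exists; try the next one.
            continue
    raise RuntimeError("No configured model is available")

if __name__ == "__main__":
    print(ask("Summarise this week's AI news in one sentence."))
```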
OpenAI + Cerebras launch GPT-5.3-Codex-Spark for ultra-low-latency coding
OpenAI released a research preview of GPT-5.3-Codex-Spark, positioning it as a real-time coding model optimised for “near-instant” interaction, including claims of >1,000 tokens/sec when served on ultra-low-latency hardware. Cerebras says the model is powered by its Wafer-Scale Engine, marking a notable “non-NVIDIA” hardware partnership milestone for an OpenAI model release.
Read more:
https://openai.com/index/introducing-gpt-5-3-codex-spark/
https://www.cerebras.ai/blog/openai-codexspark
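The >1,000 tokens/sec figure is a vendor claim, but streaming throughput is easy to sanity-check from the client side by timing streamed chunks. A rough sketch, assuming the official openai Python SDK (v1.x) and assuming the preview is exposed under a model ID like "gpt-5.3-codex-spark" (the exact identifier is an assumption):

```python
# Rough streaming-throughput check: count streamed chunks per second.
# Assumes the `openai` Python SDK (v1.x); the model ID below is an assumption,
# check the research-preview docs for the real identifier. A chunk can carry
# more than one token, so treat the result as a rough lower bound.
import time
from openai import OpenAI

client = OpenAI()

def estimate_tokens_per_second(prompt: str, model: str = "gpt-5.3-codex-spark") -> float:
    """Stream a completion and return chunks/sec as a proxy for tokens/sec."""
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    chunks = 0
    first_chunk_time = last_chunk_time = None
    for event in stream:
        # Count only events that actually carry a text delta.
        if event.choices and event.choices[0].delta.content:
            now = time.perf_counter()
            if first_chunk_time is None:
                first_chunk_time = now  # excludes time-to-first-token
            last_chunk_time = now
            chunks += 1
    if chunks < 2:
        return 0.0
    return (chunks - 1) / (last_chunk_time - first_chunk_time)

if __name__ == "__main__":
    tps = estimate_tokens_per_second("Write a quicksort in Python.")
    print(f"~{tps:.0f} chunks/sec (proxy for tokens/sec)")
```

Because the measurement excludes time-to-first-token and undercounts multi-token chunks, it will typically understate the served rate rather than overstate it.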
Google warns of AI misuse in cyberattacks—and large-scale attempts to extract/clone models
Google’s Threat Intelligence reporting says nation-state and other threat actors are already using LLMs across parts of the intrusion lifecycle (research, targeting, lure generation, etc.). In parallel, Google describes “model extraction” behaviour, including a campaign involving 100,000+ prompts, highlighting that frontier model providers are treating distillation/extraction as a first-class security and IP threat.
Read more:
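The 100,000+-prompt figure hints at why providers now watch per-client query volume. Purely as an illustration (this is not Google's detection method or any provider's actual system), a naive sliding-window counter that flags accounts whose prompt volume looks extraction-scale:

```python
# Illustrative only: a naive sliding-window counter that flags clients whose
# prompt volume over 24 hours crosses a threshold. Real extraction detection
# also considers prompt diversity, output-space coverage, and distillation
# patterns; this sketch is not how Google or any provider actually does it.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 60 * 60   # rolling 24-hour window
THRESHOLD = 100_000             # echoes the campaign size cited in the report

class ExtractionVolumeMonitor:
    def __init__(self) -> None:
        self._events: dict[str, deque[float]] = defaultdict(deque)

    def record_prompt(self, client_id: str, now: float | None = None) -> bool:
        """Record one prompt; return True if this client now looks extraction-like."""
        now = time.time() if now is None else now
        events = self._events[client_id]
        events.append(now)
        # Drop timestamps that have fallen out of the rolling window.
        while events and events[0] < now - WINDOW_SECONDS:
            events.popleft()
        return len(events) >= THRESHOLD

# Usage: call record_prompt() on every inference request and alert on True.
monitor = ExtractionVolumeMonitor()
if monitor.record_prompt("client-42"):
    print("flag client-42 for review")
```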