AI News Today: Microsoft Phi-4 Vision, Pentagon Blacklists Anthropic, OpenAI Building GitHub Rival (Updated 06:41)


Thursday’s AI news cycle opens with a trio of significant developments — a new efficient reasoning model from Microsoft, a major geopolitical blow to Anthropic, and OpenAI quietly moving to compete with one of Microsoft’s own products.

Microsoft Launches Phi-4-Reasoning-Vision-15B — The Model That Knows When Not to Think

Microsoft released Phi-4-Reasoning-Vision-15B, a 15-billion-parameter multimodal model designed around a surprisingly practical insight: not every task needs deep reasoning, and burning compute on overthinking is wasteful.


The model processes both images and text, handles complex math and science problems, reads charts and documents, navigates GUIs, and performs everyday visual tasks like captioning photos and reading receipts. It’s available immediately through Microsoft Foundry, HuggingFace, and GitHub under a permissive license.

What makes this notable is the positioning: rather than competing on raw benchmark scores, Microsoft is explicitly targeting the gap between frontier model capability and real-world deployment economics. Big models are expensive, slow, and energy-hungry — Phi-4 is designed for teams that need reliable multimodal AI without the infrastructure overhead of GPT-5 or Claude 3.7.

Pentagon Designates Anthropic a “Supply Chain Risk” — Defense Contractors Bail on Claude

In one of the stranger AI news stories of the week, defense companies are preemptively abandoning Anthropic’s Claude after Defense Secretary Pete Hegseth designated the company a “supply chain risk.” While Anthropic can still challenge the designation in court, contractors aren’t waiting — they’re switching models “out of an abundance of caution,” according to CNBC.

The move reflects the growing entanglement of AI companies with national security policy. Anthropic, which has positioned Claude as a safety-focused enterprise AI, now finds itself politically blacklisted from one of the most lucrative government contract sectors. The long-term implications for Anthropic’s government business — and its valuation — remain to be seen.

OpenAI Is Building a GitHub Rival

Prompted by recent GitHub outages, OpenAI is quietly developing its own code repository platform. The project is still months from completion, but the implications are significant: it would put OpenAI in direct competition with Microsoft, which holds a substantial stake in the company and owns GitHub outright.

The move signals that OpenAI is thinking beyond AI models toward full developer infrastructure — a strategy that would make it a platform company rather than just a model provider. Whether Microsoft views this as a threat or an expected evolution of its investment remains unclear.

Also: AI Hallucinations Hit Wikipedia

A non-profit called Open Knowledge Association has been using AI to translate Wikipedia articles at scale — and the results have been messy. Hallucinated citations, fabricated sources, and unrelated references have crept into translated articles across multiple languages. Wikipedia editors are now imposing restrictions on OKA contributors, with blocks for repeat offenders.

It’s a useful reminder that AI at scale without human review creates compounding trust problems — especially on platforms where accuracy is the entire value proposition.

The Takeaway

Today’s AI news reflects the industry’s growing complexity: technical progress (Phi-4), political interference (Anthropic/Pentagon), competitive realignment (OpenAI vs GitHub), and the ongoing quality debt from AI-at-scale deployments (Wikipedia). The pace isn’t slowing — if anything, the stakes are getting higher faster.

This article was produced with the assistance of AI tools and reviewed by the AIStackDigest editorial team.
