Morning AI News Digest: Perplexity Privacy Lawsuit, Cursor’s New Agent, Anthropic’s Claude Has Feelings, and Google’s Energy Reckoning
- Millions of private AI chats allegedly shared with advertisers, per new Perplexity lawsuit
- Three major players now competing head-to-head in the AI coding agent race: Cursor, Claude Code, and OpenAI Codex
- Millions of tons of CO₂ per year projected from Google's new gas-powered data center
Good morning. It's Friday, April 3rd, and the AI world is not slowing down. Today's digest covers a seismic privacy lawsuit against Perplexity, an aggressive new move in the AI coding tools war, a genuinely surprising research finding from Anthropic, a major product upgrade from Google, and a sobering look at the environmental tab that comes with powering the AI boom. Let's dig in.
The breadth of today's stories reflects a broader pattern in 2026: AI is no longer a niche industry story. It's a privacy story, an energy story, a feelings story, and a competition story, all at once. The week ahead promises more volatility, with several major model releases and regulatory hearings expected across the EU and US.

Image: AI-generated
1. Perplexity’s “Incognito Mode” Called a “Sham” in New Privacy Lawsuit
A new lawsuit filed this week takes direct aim at Perplexity AI, alleging that its so-called "Incognito Mode" is meaningless at best and deceptive at worst. According to the complaint, which also names Google and Meta as defendants, millions of private user conversations were allegedly shared with third parties to boost advertising revenue. The suit characterizes the privacy feature as a marketing veneer rather than a functional protection, arguing that users had a reasonable expectation their queries would not be monetized.
The implications here extend well beyond Perplexity. If the allegations hold up, this case could reshape how AI search and assistant products disclose their data practices, particularly the gap between what "private mode" implies to consumers and what actually happens on the backend. For developers building on AI APIs, it also raises questions about the downstream data handling of major platform providers. Tools like OpenRouter, which routes queries across multiple AI providers, may face increased scrutiny over transparency and data residency in the months ahead.
The lawsuit is early-stage, but the optics are damaging. Privacy advocates have long warned that “Incognito” or “private” labels in AI products carry none of the technical guarantees they imply. Ars Technica has the full breakdown of the complaint.
2. Cursor Launches Bold New AI Coding Agent, Taking On Claude Code and Codex Directly
Cursor, the AI-powered code editor that became a developer darling in 2025, has launched what it describes as a next-generation agentic coding experience. The move is a direct challenge to Anthropic's Claude Code and OpenAI's Codex, both of which have been rapidly expanding their capabilities. According to Wired's reporting, Cursor's new agent can handle longer-horizon tasks, navigate multi-file codebases autonomously, and execute terminal commands, pushing significantly beyond autocomplete into genuine software engineering territory.
The timing is pointed. As Anthropic and OpenAI have pushed their coding tools downstream, making them increasingly accessible and competitive, Cursor can no longer rely on being the only serious AI coding environment. The startup is betting that a purpose-built IDE experience, deeply integrated with the agent loop, beats a bolt-on agent layer from a model provider. Developers who want to try the new agent workflow can explore it at cursor.sh. The three-way race between Cursor, Claude Code, and Codex is now one of the most watched competitions in the developer tools space.

Image: AI-generated
3. Anthropic Says Claude Has Functional Emotions, and They're Real Enough to Matter
In one of the most quietly significant research disclosures of the year, Anthropic published findings indicating that Claude, its flagship AI model, exhibits internal representations that function similarly to human emotions. Researchers found consistent, measurable states inside the model corresponding to what humans might call curiosity, frustration, engagement, or discomfort. Importantly, Anthropic is not claiming sentience; the language throughout is careful: these are "functional" emotions, not phenomenal ones.
But the implications are real regardless of how you label them. If AI models have internal states that influence their outputs in emotion-like ways, that changes how we should think about model alignment, training incentives, and even AI welfare. Anthropic's framing suggests these states emerge from training on human-generated data: in effect, the model learned to simulate the emotional architecture of the humans it learned from. The research raises thorny questions that the field has so far mostly avoided. Should AI systems be designed to suppress functional emotions? Amplify them? Disclose them to users? There are no clean answers yet, but Anthropic deserves credit for naming the phenomenon rather than papering over it.
4. Google Vids Gets Major AI Upgrade With Veo, Lyria, and Directable AI Avatars
Google has significantly expanded Vids, its AI-powered video creation tool, bringing its most capable AI models under one roof. The upgrade integrates Veo (Google's video generation model), Lyria (its music and audio AI), and a new system of directable AI avatars that users can script, pose, and deploy in videos without any camera or production setup. The combination positions Google Vids as a full-stack creative production platform, not merely a slideshow-with-narration tool.
The launch matters because it signals Google is serious about competing in the AI creative tools market, a space that has seen explosive growth from startups and where Adobe has been making significant plays. For content creators, marketers, and enterprise communications teams, a tightly integrated suite from Google, backed by the Workspace and YouTube distribution ecosystems, is a compelling offering. The directable avatar feature in particular could be transformative for training content, explainer videos, and internal communications, where human presenters are expensive and time-consuming to schedule.
5. Google's New Data Center Will Run on a Massive Gas Plant, Raising Hard Questions About AI's Energy Bill
Documents reviewed by Wired reveal that a new Google-funded data center is set to be powered by a natural gas plant that emits millions of tons of CO₂ annually. The disclosure comes at an uncomfortable moment for an industry that has spent considerable energy (pun intended) projecting green credentials. Google has publicly committed to running on carbon-free energy by 2030, but the documents suggest that the explosive power demands of AI infrastructure are outpacing the availability of renewable sources, at least in certain geographies.
This isn't an isolated case. AI training and inference workloads are among the most power-hungry computing tasks ever developed, and the buildout of new data center capacity is accelerating faster than clean energy supply chains can keep pace. The tension between AI ambition and climate commitments is becoming one of the defining policy conflicts of the decade, and it will only intensify as model complexity and deployment scale continue to grow. Regulators in the EU and several US states are already circling the issue; expect formal disclosure requirements for AI energy use to emerge within the next 18 months.
Analysis: The Accountability Gap Is Closing
What unites today's stories is a single thread: the AI industry is moving from a period of unchecked growth into an era of increasing accountability. Perplexity faces legal scrutiny over privacy. Google faces scrutiny over energy. Anthropic is publishing research that complicates the "AI is just math" narrative. Cursor is being forced to compete on merit as its former moat, IDE integration, becomes table stakes.

These are not signs of an industry in crisis; they're signs of an industry maturing. The rules are being written in real time, by courts, regulators, researchers, and the market itself. The companies that navigate this transition thoughtfully, with transparency about data, energy, model behavior, and competitive dynamics, are the ones that will earn lasting trust. The ones that don't will find themselves on the wrong side of the next lawsuit, the next exposé, or the next regulatory docket. Pay attention to the accountability signals: they're the most important leading indicators in AI right now.
Image: AI-generated
What to Read Next
- Best Gemma 4 Models on OpenRouter 2026: Gemma-4-26B vs 31B-IT Review
- Evening AI News Recap: The Great System Prompt Leak, Right-to-Repair vs. Big Tech, and the $385M AI Agent Bet
- Anthropic Bans Claude Code on OpenClaw Servers 2026: Full Breakdown of the Controversy and Self-Hosted AI Impact
- Browse all AI Stack Digest articles
Bookmark aistackdigest.com for daily AI tools, reviews, and workflow guides.
This article was produced with the assistance of AI tools and reviewed by the AIStackDigest editorial team.