What Is Going On
Microsoft’s AI receipts. Satya Nadella took the stand in the Musk v. Altman trial and delivered a pretty blunt message: Elon Musk never personally raised concerns to him that Microsoft’s OpenAI investment violated any special commitments. That matters because Musk’s lawsuit argues OpenAI abandoned its nonprofit mission, with Microsoft allegedly helping that shift along through its multibillion-dollar backing. Nadella, for his part, framed the relationship as commercial from day one, not charity. He said Microsoft offered OpenAI steep compute discounts early on with the expectation of business and branding upside later. And later definitely arrived: a Microsoft executive said the partnership has generated about $9.5 billion in recognized revenue as of March 2025. The broader backdrop here is Microsoft’s $13 billion-plus bet on OpenAI, which Nadella said he’s “very proud” of, while Musk sees that scale as the moment things went off the rails. More here.
AI Corner
Package trap spreads. The Mini Shai-Hulud supply-chain mess is still making waves, and this time the spotlight is on five malicious NuGet packages posing as legitimate Chinese .NET UI libraries. The playbook is classic but nasty: look trustworthy, get installed, then quietly siphon off valuable data. According to researchers, these fake packages are built to steal browser credentials, crypto wallet data, SSH keys, and local files. That's basically a greatest-hits list for attackers looking to cash in fast. It's another reminder that open-source ecosystems remain a prime target, especially when attackers can hide behind convincing package names and familiar branding. For developers, the lesson is painfully clear: package trust is fragile, and dependency hygiene matters more than ever. Vetting publishers, checking package behavior, and tightening software supply-chain controls are no longer optional. More here.
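For teams that want something concrete to act on, NuGet ships two built-in controls that blunt exactly this kind of attack: package source mapping, which pins every package ID to an explicit trusted feed, and lock files, which make restores fail if a dependency's contents change. A minimal nuget.config sketch (the feed setup is illustrative; you'd adapt the patterns to your own internal feeds):

```xml
<!-- nuget.config: only allow packages to resolve from feeds you trust -->
<configuration>
  <packageSources>
    <clear />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
  <packageSourceMapping>
    <!-- Everything resolves from nuget.org; an internal feed would get
         its own, narrower pattern (e.g. "MyCompany.*"). -->
    <packageSource key="nuget.org">
      <package pattern="*" />
    </packageSource>
  </packageSourceMapping>
</configuration>
```

Pair that with `<RestorePackagesWithLockFile>true</RestorePackagesWithLockFile>` in the project file and `dotnet restore --locked-mode` in CI, so a swapped or typosquatted dependency breaks the build instead of shipping.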
AI that actually talks. Thinking Machines is pitching a different vision for human-AI collaboration: not more agent scaffolding, but models built for interaction from the ground up. Its new research preview, called interaction models, is designed to process audio, video, and text continuously, so the AI can respond in real time instead of waiting for rigid turn-taking cues. The big idea is simple: humans do better work when they can interrupt, clarify, show, and react on the fly, and today’s chat-style interfaces make that weirdly hard. By baking interactivity into the model itself, Thinking Machines says it can unlock smoother dialogue, natural interjections, simultaneous speech, time awareness, and even concurrent tool use like search or UI generation while the conversation is still happening. In short, the company wants AI that collaborates more like a coworker and less like a ticketing system. More here.
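Thinking Machines hasn't published an API for this, so here's a deliberately toy Python sketch of the interaction pattern being described, full-duplex instead of turn-based: the model streams a reply, and new user input can cancel and redirect it mid-sentence. Every name below is invented for illustration:

```python
import asyncio

async def stream_reply(text: str) -> None:
    """Emit a reply word by word, standing in for token-level streaming."""
    for word in text.split():
        print(word, end=" ", flush=True)
        await asyncio.sleep(0.2)  # simulated generation latency
    print()

async def full_duplex_demo() -> None:
    # The model starts answering...
    reply = asyncio.create_task(
        stream_reply("Here is a long and careful answer to your question about")
    )
    await asyncio.sleep(0.7)  # ...and the user barges in partway through
    reply.cancel()            # interruption: stop generating immediately
    try:
        await reply
    except asyncio.CancelledError:
        print("\n[interrupted: fold the new input into the plan and continue]")

asyncio.run(full_duplex_demo())
```

The point of the sketch is the shape, not the details: in a turn-based chat UI the user's interjection would queue behind the full reply, whereas an interaction-native model treats it as a first-class event.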
Digg pivots again. Digg is back, but this time it's ditching the Reddit-clone playbook for something closer to its roots: a news aggregator, starting with AI news. Kevin Rose's reboot now pulls in real-time signals from X, using sentiment analysis, clustering, and engagement tracking to rank what's actually breaking through the noise. The homepage highlights things like the most-viewed story, the fastest-rising topic, and the headline you probably missed, while also surfacing top AI people, companies, and politicians. It's a clever idea, especially for data nerds or anyone who wants an AI news pulse without living on X all day. The catch: it's still buggy, there's no real community discussion on Digg itself, and it's not obvious why mainstream users would choose it over X, RSS, or a news app. Still, if it works, publishers could benefit from the extra traffic. More here.
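Digg hasn't shared its ranking math, but a toy version of the "fastest-rising topic" signal could score each story cluster by how sharply engagement is accelerating against its own recent baseline, nudged by sentiment intensity. All fields and weights here are invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class Cluster:
    name: str
    views_last_hour: int
    views_prev_hour: int
    sentiment: float  # -1.0 (negative) .. 1.0 (positive), from upstream analysis

def rising_score(c: Cluster) -> float:
    """Rank by engagement acceleration, boosted for strongly felt stories."""
    baseline = max(c.views_prev_hour, 1)     # avoid division by zero
    velocity = c.views_last_hour / baseline  # >1.0 means it's picking up speed
    return velocity * (1.0 + 0.25 * abs(c.sentiment))

clusters = [
    Cluster("gpu-shortage", 1200, 1100, 0.1),  # big but flat
    Cluster("model-launch", 900, 150, 0.8),    # smaller but spiking hard
]
for c in sorted(clusters, key=rising_score, reverse=True):
    print(f"{c.name}: {rising_score(c):.2f}")
```

A real system would layer deduplication, decay, and spam filtering on top, but the core idea (rank by acceleration, not raw volume) is what lets a small, fast-moving story beat a big, flat one.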
AI metrics gone wild. Amazon appears to have a very modern workplace problem: employees reportedly using internal AI tools for tasks that don’t really need AI, just to juice their usage numbers. The alleged behavior points to a classic corporate incentive mess, where staff optimize for the metric instead of the mission. In other words, if leadership is tracking AI adoption closely, some workers may be treating the tool less like a productivity boost and more like a box to tick. It’s a neat snapshot of the broader generative AI moment inside big companies, where pressure to show uptake can create weird habits, fuzzy ROI, and a lot of performative experimentation. The bigger takeaway is simple: measuring AI success by raw usage may tell executives less about real value than they think. More here.
News You Can Use
Daybreak goes defensive. OpenAI is pushing deeper into enterprise security with Daybreak, a new cybersecurity initiative built around its frontier models, Codex Security, and a heavyweight roster of partners like Cloudflare, Cisco, CrowdStrike, Palo Alto Networks, and Snyk. The pitch is simple: find and fix software vulnerabilities earlier, before they turn into bigger headaches. Daybreak folds secure code review, threat modeling, patch validation, dependency analysis, and remediation guidance into Codex Security, turning it from a coding helper into more of an operational security layer. OpenAI is also splitting access by trust level, with standard GPT-5.5 for general use, Trusted Access for verified defenders, and a more permissive GPT-5.5-Cyber preview for tightly controlled red teaming and testing. It’s not a full public launch yet, but the bigger message is clear: OpenAI wants Codex to become a governed AI security platform, not just a developer tool. More here.
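OpenAI hasn't published Daybreak's API surface, so treat this as a purely hypothetical sketch using the call shape of today's standard OpenAI Python SDK; the model name comes from the article, and everything else is an assumption about what a secure code review request might look like:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SNIPPET = """
import subprocess

def run(cmd: str):
    return subprocess.run(cmd, shell=True)  # shell=True with untrusted input?
"""

# Hypothetical: a review request against the standard GPT-5.5 tier the
# article describes; the real Codex Security interface may differ entirely.
review = client.chat.completions.create(
    model="gpt-5.5",
    messages=[
        {
            "role": "system",
            "content": "You are a secure code reviewer. Flag each "
                       "vulnerability with severity and a suggested fix.",
        },
        {"role": "user", "content": SNIPPET},
    ],
)
print(review.choices[0].message.content)
```

The interesting design choice is the tiering itself: the same underlying capability, gated behind verification for defenders and a tightly controlled preview for offensive testing, which looks more like how dual-use tooling is usually governed than like a typical model launch.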