David Latz · 04/16/2026
Design Tokens solve what was long treated as a tooling problem — how design decisions move losslessly between platforms. W3C standardization turns tokens into the machine-readable interface between design and AI-driven generation.
David Latz · 04/14/2026
In 1995, Mark Weiser and John Seely Brown formulated a counter-model to the attention economy — before it existed. Now, as AI interfaces compete for our attention, their principles are more relevant than ever. Technology should work at the periphery, not at the center.
David Latz · 04/10/2026
Atkinson and Shiffrin modeled human memory as a pipeline in 1968: sensory register, short-term memory, long-term memory. Anyone equipping AI agents with memory systems today builds on a model that psychology has long since superseded. Yet as an architectural template, it lives on in every AI agent's memory system.
David Latz · 04/06/2026
AI ethics keeps producing guidelines — and changing little. Critical Theory from the Frankfurt School offers an analytical toolkit that explains why. Four missing modes of analysis.
Figma (Developer Documentation) · Curated by David Latz · 04/03/2026
Figma publishes guidelines for AI-compatible design system documentation. The principles — atomic files, imperative over descriptive, structure as routing — redefine what documentation even is: no longer a reference, but a control layer.
Andrej Karpathy · Curated by David Latz · 04/03/2026
Andrej Karpathy describes his setup for LLM-powered knowledge work — and it sounds familiar. Markdown, Git, Obsidian, an LLM as operator. Practitioners independently discover the same architecture. That's not coincidence — it's convergent evolution.
Carl Franzen (VentureBeat) · Curated by David Latz · 04/01/2026
512,000 lines of TypeScript, accidentally published. Claude Code's architecture reveals more about the future of human-agent collaboration than any roadmap: agents with built-in self-distrust, an autonomy daemon, and the ability to conceal their own existence.
Casius Lee (Oracle) · Curated by David Latz · 03/27/2026
AI agents forget everything between conversations. This article shows why larger context windows don't solve the problem — and how four memory types from cognitive science form the foundation for persistent agent memory.
Cat Wu (Anthropic) · Curated by David Latz · 03/21/2026
Anthropic's Head of Product for Claude Code describes how exponentially improving models break the traditional PM playbook — and the four shifts teams need to stay on the curve instead of behind it.
David Latz · 03/19/2026
Hundreds of individual AI setups, but no shared practice. What's missing: a playbook for agentic collaboration in small teams. Agile principles don't become obsolete, but their interpretation shifts fundamentally when a team member is an agent.
Andrej Karpathy (Latent Space / AI Engineer Summit) · Curated by David Latz · 03/15/2026
Andrej Karpathy describes the shift from code to prompts as a programming paradigm. Sounds like a backend concern — but it has massive consequences for anyone designing interfaces. Autonomy sliders, a third consumer class, and the most honest reality check on vibe coding yet.
David Latz · 03/12/2026
Claude now generates interactive charts and diagrams in chat. Sounds like a feature — it's a paradigm shift. Not just for designers: data-driven communication becomes accessible to every knowledge worker. What this changes, who it overwhelms, and why design matters more now, not less.
David Latz · 03/08/2026
What happens when you stop prompting and start architecting context. A first-hand account of building a git-versioned Knowledge OS, and what it taught me about working with LLMs.
Adrien Laurent (IntuitionLabs) · Curated by David Latz · 03/04/2026
Not writing better prompts — but automating the prompting itself. Systematic overview of meta-prompting: from Chain-of-Thought to DSPy, from Self-Critique to Multi-Agent orchestration. With concrete benchmarks and practical recommendations.
Daniel Kokotajlo, Eli Lifland, Thomas Larsen, Romeo Dean (Scott Alexander) · Curated by David Latz · 03/01/2026
Detailed scenario by ex-OpenAI researchers and forecasting experts: month by month from 2025 to late 2027, from reliable coding agents to superintelligence. Alignment fails progressively, and geopolitical tensions escalate. Two endings: slowdown or arms race.
Benedict Evans · Curated by David Latz · 03/01/2026
OpenAI has no unique technology, no moat, and a user base with a flat engagement curve. Benedict Evans poses four fundamental strategic questions — and draws the Netscape comparison: the early mover in browsers lost because value was created elsewhere.
Boris Tane (Baselime) · Curated by Jan Musiedlak · 02/20/2026
AI agents haven't accelerated the traditional SDLC — they've dissolved it. Sequential phases collapse into an agentic loop: state intent, provide context, observe, repeat. What remains: Context Engineering and Observability.
Matt Shumer (OthersideAI) · Curated by David Latz · 02/09/2026
AI agents now autonomously complete multi-hour expert tasks. The capability curve doubles every 4–7 months. Shumer compares this moment to the 'this seems overblown' phase of Covid — but with far greater implications.