✳︎ Panoptia Labs

Field Notes

Observations, assessments, and ideas.

David Latz · 04/14/2026

Calm Technology: A 1995 Design Principle Becomes Relevant Again

In 1995, Mark Weiser and John Seely Brown formulated a counter-model to the attention economy — before it existed. Now, as AI interfaces compete for our attention, their principles are more relevant than ever. Technology should work at the periphery, not at the center.

David Latz · 04/10/2026

Three-Store Model: The Blueprint Behind Every AI Memory Architecture

Atkinson and Shiffrin modeled human memory as a pipeline in 1968 — sensory register, short-term memory, long-term memory. Anyone equipping AI agents with memory systems today builds on a model that psychology has long since superseded. As an architectural template, it lives on in every AI agent's memory.
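The three-store pipeline maps naturally onto code. Here is a minimal sketch under simple assumptions (a capacity-limited short-term buffer and rehearsal-based promotion to long-term storage); the class and method names are illustrative, not drawn from the article:

```python
from collections import deque

class ThreeStoreMemory:
    """Toy Atkinson–Shiffrin pipeline: sensory register -> STM -> LTM."""

    def __init__(self, stm_capacity=7, rehearsal_threshold=3):
        self.sensory = []                      # raw, unattended input
        self.stm = deque(maxlen=stm_capacity)  # capacity-limited buffer
        self.ltm = {}                          # durable store
        self.rehearsals = {}                   # rehearsal counts per item
        self.rehearsal_threshold = rehearsal_threshold

    def perceive(self, item):
        """New input lands in the sensory register."""
        self.sensory.append(item)

    def attend(self):
        """Attention moves sensory items into short-term memory."""
        for item in self.sensory:
            self.stm.append(item)
        self.sensory.clear()

    def rehearse(self, item):
        """Repeated rehearsal consolidates an STM item into LTM."""
        if item in self.stm:
            self.rehearsals[item] = self.rehearsals.get(item, 0) + 1
            if self.rehearsals[item] >= self.rehearsal_threshold:
                self.ltm[item] = True

mem = ThreeStoreMemory()
mem.perceive("user prefers dark mode")
mem.attend()
for _ in range(3):
    mem.rehearse("user prefers dark mode")
```

The `maxlen` on the deque is doing the psychological work here: items that are never rehearsed simply fall out of the buffer, which is also roughly how context windows behave.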

Andrej Karpathy · Curated by David Latz · 04/03/2026

LLM Knowledge Bases: Why Everyone Lands on the Same Stack

Andrej Karpathy describes his setup for LLM-powered knowledge work — and it sounds familiar. Markdown, Git, Obsidian, an LLM as operator. Practitioners independently discover the same architecture. That's not coincidence — it's convergent evolution.

Casius Lee (Oracle) · Curated by David Latz · 03/27/2026

Agent Memory: Why Your AI Has Amnesia and How to Fix It

AI agents forget everything between conversations. This article shows why larger context windows don't solve the problem — and how four memory types from cognitive science form the foundation for persistent agent memory.
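The teaser does not name the four memory types; assuming the taxonomy commonly borrowed from cognitive science for agent memory (working, episodic, semantic, procedural), a skeleton might look like this — field names and the persistence rule are assumptions, not the article's design:

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Sketch of a four-store agent memory (assumed taxonomy)."""
    working: list = field(default_factory=list)     # current-task context
    episodic: list = field(default_factory=list)    # records of past interactions
    semantic: dict = field(default_factory=dict)    # facts about the world/user
    procedural: dict = field(default_factory=dict)  # learned skills and how-tos

    def end_conversation(self):
        """Persist what matters, then clear the volatile working store."""
        self.episodic.append(list(self.working))
        self.working.clear()

mem = AgentMemory()
mem.working.append("user asked for a Rust refactor")
mem.semantic["preferred_language"] = "Rust"
mem.end_conversation()
```

The point of the split: only `working` is volatile. A bigger context window only enlarges `working`; it never fills the other three stores, which is the article's core argument.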

Cat Wu (Anthropic) · Curated by David Latz · 03/21/2026

Product Management on the AI Exponential

Anthropic's Head of Product for Claude Code describes how exponentially improving models break the traditional PM playbook — and the four shifts teams need to stay on the curve instead of behind it.

David Latz · 03/19/2026

The Agile Manifesto Needs an Update — for Working with AI Agents

Hundreds of individual AI setups, but no shared conventions. What's missing: a playbook for agentic collaboration in small teams. Agile principles don't become obsolete — but their interpretation shifts fundamentally when a team member is an agent.

Andrej Karpathy (Latent Space / AI Engineer Summit) · Curated by David Latz · 03/15/2026

Software 3.0 — What Karpathy's Theses Mean for Interface Design

Andrej Karpathy describes the shift from code to prompts as a programming paradigm. Sounds like a backend concern — but it has massive consequences for anyone designing interfaces. Autonomy sliders, a third consumer class, and the most honest reality check on vibe coding yet.

David Latz · 03/12/2026

When Visualization Becomes Cheap, Clarity Becomes Expensive

Claude now generates interactive charts and diagrams in chat. Sounds like a feature — it's a paradigm shift. Not just for designers: data-driven communication becomes accessible to every knowledge worker. What this changes, who it overwhelms, and why design matters more now, not less.

Adrien Laurent (IntuitionLabs) · Curated by David Latz · 03/04/2026

Meta-Prompting: LLMs Crafting & Enhancing Their Own Prompts

Not writing better prompts — but automating the prompting itself. Systematic overview of meta-prompting: from Chain-of-Thought to DSPy, from Self-Critique to Multi-Agent orchestration. With concrete benchmarks and practical recommendations.
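One of the simplest variants the article surveys, self-critique, can be sketched as a loop in which the model improves its own prompt. The model call is stubbed so the sketch runs standalone; all names are illustrative, and real setups (DSPy and the like) are far richer:

```python
def call_llm(prompt):
    # Stub standing in for a real model API call.
    # Returns a canned critique so the loop below is runnable.
    if prompt.startswith("Critique"):
        return "Add the expected output format and one worked example."
    return f"[response to: {prompt}]"

def meta_prompt(task_prompt, rounds=2):
    """Self-critique loop: ask the model to critique its own prompt,
    then fold the critique back in — prompting about prompting."""
    prompt = task_prompt
    for _ in range(rounds):
        critique = call_llm(f"Critique this prompt for clarity: {prompt}")
        prompt = f"{prompt}\n(Revision note: {critique})"
    return prompt

improved = meta_prompt("Summarize the quarterly report.")
```

Swap the stub for a real client and the structure stays the same: the human writes the task once, and the prompting itself is automated.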

Daniel Kokotajlo, Eli Lifland, Thomas Larsen, Romeo Dean (Scott Alexander) · Curated by David Latz · 03/01/2026

AI 2027: A Scenario

Detailed scenario by ex-OpenAI researchers and forecasting experts: month by month from 2025 to late 2027, from reliable coding agents to superintelligence. Alignment fails progressively, and geopolitical tensions escalate. Two endings: slowdown or arms race.

Benedict Evans · Curated by David Latz · 03/01/2026

How Will OpenAI Compete?

OpenAI has no unique technology, no moat, and a user base with a flat engagement curve. Benedict Evans poses four fundamental strategic questions — and draws the Netscape comparison: the early mover in browsers lost because value was created elsewhere.

Boris Tane (Baselime) · Curated by Jan Musiedlak · 02/20/2026

The Software Development Lifecycle Is Dead

AI agents haven't accelerated the traditional SDLC — they've dissolved it. Sequential phases collapse into an agentic loop: state intent, provide context, observe, repeat. What remains: Context Engineering and Observability.
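The agentic loop named in the teaser (state intent, provide context, observe, repeat) can be sketched with toy stand-ins; a real system would wire in an LLM agent, a context store, and telemetry, and every name here is illustrative rather than the article's API:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    done: bool
    notes: str

class ToyAgent:
    """Stand-in agent that converges on the second attempt."""
    def __init__(self):
        self.calls = 0
    def act(self, intent, context):
        self.calls += 1
        return f"attempt {self.calls} at: {intent} (context: {context})"

def observe(result):
    # In practice: tests, traces, metrics. Here: succeed on attempt 2.
    return Observation(done="attempt 2" in result, notes=result)

def agentic_loop(intent, agent, max_iterations=10):
    """State intent, provide context, observe, repeat."""
    context = "repo state, prior attempts"        # context engineering
    result = None
    for _ in range(max_iterations):
        result = agent.act(intent, context)       # the agent does the work
        obs = observe(result)                     # observability closes the loop
        if obs.done:
            return result
        context = f"{context}; learned: {obs.notes}"
    return result

agent = ToyAgent()
outcome = agentic_loop("refactor the auth module", agent)
```

Note that the loop has no phases: requirements, implementation, and review all happen inside each iteration, which is exactly what "the SDLC dissolved" means.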

Matt Shumer (OthersideAI) · Curated by David Latz · 02/09/2026

Something Big Is Happening

AI agents now autonomously complete multi-hour expert tasks. The capability curve doubles every 4–7 months. Shumer compares this moment to the 'this seems overblown' phase of Covid — but with far greater implications.