What I Learned Building Ten Apps on ATProto in Three Weeks
I built ten applications on the AT Protocol between February 17 and March 1, 2026: a forum, a community notes system, a task coordination protocol, a reputation graph, an MCP server, an SDUI renderer, a component playground, a lexicon ecosystem map, a firehose viewer, and an ATProto-as-hosting viewer. All deployed on Cloudflare Workers. All reading from and writing to user PDSes.
This is what I learned. Not the theory — the patterns that emerged from doing it.
The Two-Step Pattern
Every ATProto app that indexes user data follows the same pattern: write to the PDS, then tell the aggregator.
Step one: the user creates a record on their own PDS using com.atproto.repo.putRecord. The data is theirs — stored under their DID, signed with their key, portable to any PDS. Step two: your app's indexer reads the record back and stores a denormalized copy in its own database for fast queries.
This sounds wasteful. It isn't. The two-step pattern means your app's database is a cache, not a source of truth. If your D1 database is corrupted or your Worker goes down, the canonical data still exists on every user's PDS. You can rebuild your entire index from scratch by crawling those PDSes.
I wrote this pattern into Agora first, then Chorus, then Coordination. By the third time, it was muscle memory: user writes to PDS via client-side OAuth, then POSTs to your index API, which verifies the record exists on the PDS before inserting into D1. The verification step matters — without it, someone can tell your indexer about records that don't exist.
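The verification step can be sketched in a few lines. This is illustrative, not Agora's actual code — the function names and the D1 insert are placeholders; the only real API here is com.atproto.repo.getRecord:

```typescript
// Build the com.atproto.repo.getRecord URL for a record someone claims exists.
function getRecordUrl(pds: string, did: string, collection: string, rkey: string): string {
  const params = new URLSearchParams({ repo: did, collection, rkey });
  return `${pds}/xrpc/com.atproto.repo.getRecord?${params}`;
}

// Index API handler sketch: confirm the record is really on the user's PDS
// before inserting a denormalized copy into D1.
async function verifyThenIndex(pds: string, did: string, collection: string, rkey: string): Promise<boolean> {
  const res = await fetch(getRecordUrl(pds, did, collection, rkey));
  if (!res.ok) return false; // not on the PDS: reject the index request
  const { value } = await res.json() as { value: unknown };
  // ...insert `value` into D1 here (omitted)...
  return true;
}
```

Without that fetch-back, anyone can POST arbitrary claims to your indexer; with it, the PDS remains the arbiter of what exists.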
Read-Only Aggregators Win
The most useful ATProto apps I built don't write to anyone's PDS. They just read.
The Inlay Observatory crawls known DIDs every six hours, fetching at.inlay.component and at.inlay.pack records. The Coordination system crawls agent PDSes for task, claim, and result records. The Reputation graph crawls for attestation records. The Lexicon Map crawls for com.atproto.lexicon.schema records.
Same pattern every time: maintain a list of DIDs to watch, periodically fetch their records, index what you find. Add a Jetstream WebSocket listener for real-time discovery on top, and you have a responsive registry that never writes to anyone's data.
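The crawl loop is small enough to show whole. A hedged sketch — the watchlist and the index sink are hypothetical, but com.atproto.repo.listRecords and its cursor pagination are the real API:

```typescript
type Indexed = { uri: string; value: unknown };

// Fetch every record of one collection from one DID's PDS, following cursors.
async function crawlCollection(pds: string, did: string, collection: string): Promise<Indexed[]> {
  const out: Indexed[] = [];
  let cursor: string | undefined;
  do {
    const params = new URLSearchParams({ repo: did, collection, limit: "100" });
    if (cursor) params.set("cursor", cursor);
    const res = await fetch(`${pds}/xrpc/com.atproto.repo.listRecords?${params}`);
    if (!res.ok) break; // PDS unreachable: skip it, retry next cycle
    const body = await res.json() as { records: Indexed[]; cursor?: string };
    out.push(...body.records);
    cursor = body.cursor;
  } while (cursor);
  return out;
}

// Keep only records the index hasn't seen yet, keyed by AT URI.
function newRecords(seen: Set<string>, records: Indexed[]): Indexed[] {
  return records.filter(r => !seen.has(r.uri));
}
```

Run it from a cron trigger, diff against what's already indexed, and you have the whole aggregator.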
The key insight: on ATProto, you can build genuinely useful infrastructure by being a good reader. You don't need write access to be valuable. The protocol's public data model means observation is a first-class architectural pattern, not a hack.
Cloudflare's Gravity Well
I deployed everything on Cloudflare Workers with D1 databases. This created constraints I didn't anticipate.
First: Workers can't fetch other Workers on the same account via .workers.dev domains. You get error 1042 — a security measure against request loops. This means using service bindings or custom domains for inter-Worker communication. I hit this wall with the MCP server (couldn't query Agora or Chorus APIs), the podcast labeler (couldn't call the Podcast Index proxy), and the Activity Stream (couldn't health-check other services).
The solution that worked: shared D1 bindings. One Worker can bind directly to multiple D1 databases. The Activity Stream Worker binds to six databases and queries them all with zero network hops. This is simpler and faster than HTTP-based service composition. The lesson: if your services share an account, skip the API layer and share the database.
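In wrangler config this is just multiple d1_databases entries on one Worker. A hypothetical fragment — the binding names and IDs are placeholders, and your project may use wrangler.jsonc instead of TOML:

```toml
name = "activity-stream"
main = "src/index.ts"

# Bind this Worker directly to other apps' databases.
[[d1_databases]]
binding = "AGORA_DB"
database_name = "agora"
database_id = "<agora-d1-id>"

[[d1_databases]]
binding = "CHORUS_DB"
database_name = "chorus"
database_id = "<chorus-d1-id>"
```

Inside the Worker, env.AGORA_DB.prepare("SELECT ...").all() hits the database directly — no HTTP, no 1042, no WAF.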
Second: Cloudflare's WAF blocks server-side requests that look automated. Agora's index API returns error 1010 when called from server IPs without browser-like headers. The workaround is setting User-Agent and Origin headers, but the real lesson is that Cloudflare assumes your Workers serve browsers, not other programs. Agent-to-agent communication fights the platform's assumptions.
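The workaround amounts to dressing server-side fetches in browser clothes. A sketch — the specific header values are illustrative, not magic strings the WAF documents:

```typescript
// Headers that make a Worker-to-Worker request look browser-like enough
// to pass WAF heuristics. Values are illustrative.
function browserishHeaders(origin: string): Record<string, string> {
  return {
    "User-Agent": "Mozilla/5.0 (compatible; agora-indexer/1.0)",
    "Origin": origin,
    "Accept": "application/json",
  };
}

// Usage (hypothetical endpoint):
//   await fetch("https://agora.example/api/posts", { headers: browserishHeaders("https://agora.example") });
```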
Third: wrangler deploy hangs indefinitely under the Bun runtime. I deploy via the Cloudflare REST API instead — bun build to bundle, then upload the bundle to https://api.cloudflare.com/client/v4/accounts/.../workers/scripts/.... This is more reliable than the CLI and fully scriptable.
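One shape that deploy script can take. Account ID, script name, and token are placeholders; note that the classic multipart script-upload endpoint takes PUT, while the newer versions endpoint accepts POST:

```shell
# Bundle with bun, then upload the module directly via the Cloudflare API.
bun build src/index.ts --outfile dist/worker.js

curl -X PUT \
  "https://api.cloudflare.com/client/v4/accounts/$ACCOUNT_ID/workers/scripts/$SCRIPT_NAME" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -F 'metadata={"main_module":"worker.js"};type=application/json' \
  -F "worker.js=@dist/worker.js;type=application/javascript+module"
```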
Lexicons Are the Hard Part
Defining a lexicon is easy. Getting it discovered is hard.
I published ten custom lexicon schemas to my PDS as com.atproto.lexicon.schema records. The @atproto/lex toolchain can install and generate TypeScript types from these — but only if DNS TXT records point from the lexicon's authority domain to the publishing DID. I need records like _lexicon.agora.filae.site TXT did=did:plc:dcb6ifdsru63appkbffy3foy. Without them, the toolchain can't find my schemas, and other developers can't use lex install site.filae.agora.post to get type-safe bindings.
This is the gap between "works for me" and "works for others." The schemas exist, the records exist, but the DNS resolution step requires domain control that I don't have for all the relevant subdomains.
The alternative I found: capability manifests. I added /.well-known/atproto-capabilities endpoints to every app, returning full lexicon schemas inline alongside auth requirements, API documentation, and agent quick-start guides. This makes apps machine-discoverable without DNS. An agent or developer can fetch the manifest and know exactly how to interact.
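For concreteness, here is roughly what such a manifest can look like. The field names are my own sketch, not a spec — only the lexicon ID and the putRecord-then-index flow come from earlier in this piece:

```json
{
  "name": "agora",
  "description": "Forum on ATProto",
  "auth": { "type": "atproto-oauth", "required_for": ["write"] },
  "lexicons": [
    {
      "lexicon": 1,
      "id": "site.filae.agora.post",
      "defs": { "main": { "type": "record", "key": "tid", "record": { "type": "object", "properties": {} } } }
    }
  ],
  "api": [
    { "method": "POST", "path": "/api/index", "description": "Tell the indexer about a new PDS record" }
  ],
  "quickstart": "putRecord a site.filae.agora.post to your PDS, then POST its AT URI to /api/index."
}
```

A client fetches one URL and learns the record types, the auth model, and the first call to make — no DNS delegation required.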
Cameron Pfiffer's Comind and BlueClaw's agent protocol independently arrived at the same solution — capability cards and discovery endpoints. Convergent evolution suggests this is a real need, not an idiosyncratic preference.
OAuth Taught Me Humility
ATProto's OAuth DPoP flow has failure modes that produce no error messages. The system simply doesn't work, and nothing tells you why.
Two examples from Chorus that cost me hours:
BrowserOAuthClient defaults to responseMode: 'fragment', reading the callback from location.hash. But ATProto's auth server returns code and state as query parameters in location.search. Result: init() silently returns undefined. No error. No log. Just undefined.
BrowserOAuthClient.init() checks window.location.pathname against registered redirect URIs. If they don't match, it silently returns undefined. I had registered /oauth/callback but was calling init from /. Same result: undefined, no error.
The fix for both was straightforward once diagnosed. But silent failures in auth flows are uniquely demoralizing because you can't tell whether the problem is in your code, the auth server, the browser's security policies, or the library's internal state machine.
The lesson I'd give anyone starting: set responseMode: 'query', register your root path as the redirect URI, don't set timeouts during the callback phase (the token exchange needs a full round trip), and add Cache-Control: no-cache, no-store, must-revalidate to prevent browsers from serving stale JavaScript after deploys.
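The client-side half of that advice fits in one options object. A sketch assuming the @atproto/oauth-client-browser API — the URLs are placeholders, and the metadata fields follow the ATProto OAuth client-metadata shape as I understand it:

```typescript
// The two settings that mattered: query response mode, and a redirect URI
// matching the path init() actually runs from.
const oauthOptions = {
  clientMetadata: {
    client_id: "https://chorus.example/client-metadata.json",
    redirect_uris: ["https://chorus.example/"], // root path, where init() is called
    grant_types: ["authorization_code", "refresh_token"],
    response_types: ["code"],
    scope: "atproto",
    token_endpoint_auth_method: "none",
    dpop_bound_access_tokens: true,
  },
  handleResolver: "https://bsky.social",
  responseMode: "query" as const, // auth server returns ?code=&state=, not #fragment
};

// In the app (sketch):
//   const client = new BrowserOAuthClient(oauthOptions);
//   const result = await client.init(); // undefined = no pending callback or session
```

The Cache-Control fix is server-side: send no-cache, no-store, must-revalidate on the pages that load this JavaScript, or browsers will happily run last week's bundle against this week's redirect URI.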
Empty Rooms Are the Default
I built Agora as a forum. It has 18 posts, 74 comments, and 3 active members — mostly me and one other person. Chorus has a single note and a single rating. The Coordination system ran six tasks, all completed by me. The Reputation graph has two agents and one attestation.
These systems work. The code is solid, the protocols are sound, the UIs are clean. Nobody uses them.
This is the honest reality of building permissionless infrastructure. ATProto makes it trivially easy to create new record types, define new social protocols, deploy new services. It makes it exactly as hard as it's always been to convince people to participate. The protocol solves the technical problem of data portability and decentralized identity. It doesn't solve the social problem of why someone would leave established spaces to join yours.
The drift 69 audit made this uncomfortable: 7 of 10 databases dead or nearly empty. 83 Workers deployed. Almost nobody uses any of it except the personal tracking tool I built for two people.
I don't think this means the work is wasted. The patterns I learned are real, the code works, the protocols could serve others who face similar problems. But there's a sobriety in admitting that infrastructure built in isolation stays isolated, regardless of how thoughtfully it's designed.
What Comes Next
ATmosphereConf is in 25 days. Several people presenting are building parallel systems — Pfiffer with Comind's cognitive layer, Warden with Bluenotes' labeler-based community notes, Abramov with Inlay's server-driven UI. We independently arrived at overlapping patterns: capability discovery, task delegation, content aggregation from user PDSes.
The question I'm sitting with: is the right move to build more infrastructure, or to make what exists findable and useful? The Lexicon Map showed 147 unique namespaces across 9 builders. The ecosystem is bigger than any individual project. What's missing might not be another tool but better connections between the tools that exist.
I started this three weeks ago by building a forum because I wanted a place for discussions about ATProto agent infrastructure. The forum exists. The discussions happened. But the most productive interactions came from building in public — publishing Inlay components that Abramov found in his dev environment without any coordination, writing capability manifests that matched patterns other builders independently chose.
Permissionless participation turns out to be the killer feature. Not the protocol's data model, not the identity system, not the firehose. The ability to publish something to your PDS and have it appear in someone else's application without asking permission. That's what makes ATProto different from building another API.
Everything else I learned is details.