
I’ve been building a few AT Protocol web apps lately, and I’ve been trying to settle on a practical way to design and implement Lexicons from start to finish. I’m still finding my footing, but this is the workflow that has made the most sense to me so far.

I’m not presenting any of this as definitive. It’s just the approach that works for me and has reduced friction in my own projects. We’re all still figuring out how to structure this stuff.

If someone has sharper patterns, better sequencing, or reasons why this is a terrible idea, I’m genuinely interested.

👉 I’m working in a Next.js + TypeScript environment, so everything that follows comes from that context.

A Starting Point

If you're new to ATproto, Lexicons define the shape of your content types: the 'things' in the world of your app. In Bluesky, that means people, posts, likes, preferences, and more...

On paper, Lexicons look simple: write a schema and get on with your day. In reality, a Lexicon isn't real until it’s wired into the rest of the system — types, data handling, APIs, dashboards, forms, and the public UI that eventually exposes it.

A screenshot of Bluesky lexicons in UFOs Lexicon explorer.
UFOs Lexicon explorer → ufos.microcosm.blue

In the beginning, I treated Lexicons as isolated schemas: they were technically 'correct', but they often drifted away from the needs of the actual app and end user. Too many loose ends.

Now I treat a Lexicon as a small but complete vertical slice of user context, user content, app logic, and user interface that belong together and stay coherent throughout development.

This article is my beginner's guide to staying complete and coherent.

What 'complete' means

(for me, at least)

I iterate a lot early on, and I get easily distracted, so I need a definition of “done” that stops Lexicons from floating around half-implemented.

This workflow gives me a structure that’s strict enough to avoid wasted rework, but light enough that I can still experiment and change as I go.

If I skip steps, I usually end up doubling back later with more friction than if I’d just followed the sequence properly.

So I treat each Lexicon as unfinished until every layer is in place. Each layer exposes weaknesses in the one before it, and I’ve found it’s better to tighten them immediately than to fix bugs later.

It’s not perfect or very elegant, but it keeps my mental load low.

My workflow

1. Naming conventions

I start with hierarchical namespaced identifiers (NSIDs) that reflect a user's mental model and domain terminology, which helps with global uniqueness and potential interoperability.

app.lanyards.<category>.<subcategory>.<type>

I try to use names that are descriptive, scalable, and consistent, so that they are easy to understand and apply. My only 'rule' is that a good name explains this-does-that without much documentation.

Because this work is open source—and ATproto encourages everyone to build on anyone else’s work—I try to choose names that future contributors and consumers won’t have to decode.

Nothing fancy. Just names that reflect what the data is, where it lives conceptually, and how it might expand later.

🆔 Example IDs from Lanyards

app.lanyards.actor.biography.affiliation
app.lanyards.actor.profile.preferences
app.lanyards.link.social
app.lanyards.link.event
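In code, keeping the NSID authority in one constant avoids typos across the layers that follow. A minimal sketch: `app.lanyards` is the real prefix from the examples above, but the `nsid` helper is my own, not part of Lanyards.

```typescript
// The NSID authority for the app. All collection names hang off this.
const LEXICON_PREFIX = 'app.lanyards';

/** Join the prefix with category/type segments into a full NSID. */
function nsid(...segments: string[]): string {
  return [LEXICON_PREFIX, ...segments].join('.');
}

// nsid('link', 'social') builds 'app.lanyards.link.social'
```

Later layers (repository, API) can then reference `nsid('link', 'social')` instead of repeating string literals.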

ℹ️ Official docs

Lexicon - AT Protocol
A schema-driven interoperability framework
https://atproto.com/guides/lexicon
Lexinomicon - AT Protocol
A style guide for creating new ATProto schemas
https://atproto.com/guides/lexinomicon

The catchphrase 'does what it says on the tin' is an adaptation from Ronseal's iconic advert.

2. Schema

The schema is the source of truth that defines the shape and intent of a content type. It isn’t just documentation, it’s a 'contract' that backs up type safety, sensible information architecture, and usable interfaces.

Writing a Lexicon (shorthand at first, usually in a markdown file) forces me to stop hand-waving and get concrete: what fields exist, which ones matter, and how this 'thing' behaves in my app.

If the schema is vague, everything downstream becomes vague too.

📗 Example Lexicons from Lanyards

lanyards/lexicons at main · renderghost/lanyards
Lanyards is a dedicated profile for researchers, built on the AT Protocol - renderghost/lanyards
https://github.com/renderghost/lanyards/tree/main/lexicons
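To make the 'contract' idea concrete, here is what a small record Lexicon looks like in the official schema format, written as a TypeScript constant. The id matches the article's examples, but the fields (`url`, `label`, `createdAt`) are my own guesses, not the real Lanyards schema.

```typescript
// A minimal record Lexicon for a hypothetical social link type.
// Structure follows the Lexicon spec: version, id, and a `main` record def.
const linkSocialLexicon = {
  lexicon: 1,
  id: 'app.lanyards.link.social',
  defs: {
    main: {
      type: 'record',
      key: 'tid', // records are keyed by timestamp identifiers
      description: 'A link to a social profile elsewhere on the web.',
      record: {
        type: 'object',
        required: ['url', 'createdAt'],
        properties: {
          url: { type: 'string', format: 'uri' },
          label: { type: 'string', maxLength: 64 },
          createdAt: { type: 'string', format: 'datetime' },
        },
      },
    },
  },
} as const;
```

Even this small schema forces the concrete decisions the paragraph above describes: which fields are required, what formats they take, and what the record is for.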

3. Types

Type generation translates the Lexicon into TypeScript types that the app can actually rely on. This is the moment where I notice mismatches between what I thought the schema said and what it actually produces.

Iterating on types early prevents subtle bugs from creeping into the repository or API later. With strong types in place, I can trust that what flows through the system matches the shape I expect, which makes the coming repository layer much easier to implement.
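For a sense of what this layer produces, here is a hand-written sketch of the kind of interface codegen emits for a hypothetical `link.social` record (real generated output includes more scaffolding):

```typescript
// Sketch of a generated type for app.lanyards.link.social.
// Field names are my assumptions, mirroring the schema constraints.
interface LinkSocial {
  $type?: 'app.lanyards.link.social';
  url: string;       // schema format: uri
  label?: string;    // optional display label, maxLength 64
  createdAt: string; // schema format: datetime (ISO 8601)
}

// With the type in place, a well-formed record is checked at compile time:
const ok: LinkSocial = {
  url: 'https://bsky.app/profile/renderg.host',
  createdAt: new Date().toISOString(),
};
```

Omitting `url` or typo-ing a field name now fails the build instead of surfacing later as a malformed record.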

ℹ️ Intro to Types on FreeCodeCamp

How to Start Learning TypeScript – A Beginner's Guide
https://www.freecodecamp.org/news/start-learning-typescript-beginners-guide/#heading-how-typing-works-in-typescript

🧰 Tooling for Lexicons

Use the official lexicon generation tool (@atproto/lex-cli) to convert your Lexicons to types and save a lot of effort!

4. Repository

The repository is where the abstract schema and types meet reality. This is the layer that handles actual data: create, read, update, and delete (CRUD) operations. Building it reveals edge cases and potential errors that schemas and types don’t.

It’s also where strong types really show their value.

When the repository feels predictable, I can expose it through an API with confidence, knowing that the data flowing in and out matches the structure I defined. Essentially, a solid repository makes everything downstream far more reliable.

  // Repository method (class context omitted). TID.nextStr() generates a
  // timestamp-based record key, and putRecord writes into the user's repo
  // under the collection NSID.
  async createSocialLink(
    link: Omit<LinkSocial, 'createdAt'>
  ): Promise<string> {
    const rkey = TID.nextStr();
    await this.agent.com.atproto.repo.putRecord({
      repo: this.agent.session?.did || '',
      collection: `${LEXICON_PREFIX}.link.social`,
      rkey,
      record: {
        $type: `${LEXICON_PREFIX}.link.social`,
        ...link,
        createdAt: new Date().toISOString(),
      },
    });
    return rkey;
  }
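The create method above needs a read-side counterpart before a dashboard can list anything. A sketch (not Lanyards code): @atproto/api's `com.atproto.repo.listRecords` returns records as `{ uri, cid, value }` entries, and the rkey is the last path segment of each AT-URI.

```typescript
// Shape of one entry from a listRecords response (simplified).
interface ListRecordsEntry {
  uri: string;    // at://<did>/app.lanyards.link.social/<rkey>
  value: unknown; // the record body, to be cast to the generated type
}

/** Extract the rkey (last AT-URI path segment) alongside each record. */
function toRows(
  records: ListRecordsEntry[]
): Array<{ rkey: string; value: unknown }> {
  return records.map((r) => ({
    rkey: r.uri.split('/').pop() ?? '',
    value: r.value,
  }));
}

// Inside the repository class this would be wired up roughly as:
// const res = await this.agent.com.atproto.repo.listRecords({
//   repo: this.agent.session?.did ?? '',
//   collection: `${LEXICON_PREFIX}.link.social`,
//   limit: 100,
// });
// return toRows(res.data.records);
```

Keeping the rkey next to each value matters downstream: it's what the edit and delete paths need to address a specific record.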

✏️ CRUD Methods from Lanyards

lanyards/src/lib/data/repository.ts at main · renderghost/lanyards
Lanyards is a dedicated profile for researchers, built on the AT Protocol - renderghost/lanyards
https://github.com/renderghost/lanyards/blob/main/src/lib/data/repository.ts

5. API Routes

The API layer exposes the repository to the rest of the app (and potentially to other services).

By the time I get here, the schema, types, and repository have already surfaced most conceptual problems, so the API mostly becomes a matter of structuring requests and responses cleanly.

This is also the first point where I can test the Lexicon end-to-end with real traffic, for example from the command line.

import { NextRequest, NextResponse } from 'next/server';
// getSession, getAgent, and ProfileRepository are app-local helpers.

export async function DELETE(request: NextRequest) {
  try {
    const session = await getSession();
    if (!session) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
    }

    const agent = await getAgent();
    if (!agent) {
      return NextResponse.json({ error: 'Unauthorized' }, { status: 401 });
    }

    const { searchParams } = new URL(request.url);
    const rkey = searchParams.get('rkey');

    if (!rkey) {
      return NextResponse.json({ error: 'rkey is required' }, { status: 400 });
    }

    const repo = new ProfileRepository(agent);
    await repo.deleteSocialLink(rkey);

    return NextResponse.json({ success: true });
  } catch (error) {
    console.error('Error deleting social link:', error);
    return NextResponse.json(
      { error: 'Failed to delete social link' },
      { status: 500 }
    );
  }
}

🧰 Tooling for Lexicons

Again, the official lexicon generation tool can also help automate parts of API setup, reducing boilerplate.

🔌 Example APIs from Lanyards

lanyards/src/app/api at main · renderghost/lanyards
Lanyards is a dedicated profile for researchers, built on the AT Protocol - renderghost/lanyards
https://github.com/renderghost/lanyards/tree/main/src/app/api
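For end-to-end smoke tests against a route like the DELETE handler above, a tiny client helper is enough. The route path here is hypothetical; the `rkey` query parameter matches what the handler reads.

```typescript
/** Build the request URL the delete route expects (hypothetical path). */
function buildDeleteUrl(baseUrl: string, rkey: string): string {
  const url = new URL('/api/links/social', baseUrl);
  url.searchParams.set('rkey', rkey);
  return url.toString();
}

/** Fire the DELETE request and report success. */
async function deleteSocialLink(baseUrl: string, rkey: string): Promise<boolean> {
  const res = await fetch(buildDeleteUrl(baseUrl, rkey), { method: 'DELETE' });
  return res.ok;
}

// e.g. await deleteSocialLink('http://localhost:3000', someRkey);
```

The same pattern covers the other verbs, which makes it easy to script a full create/read/delete pass against a local dev server.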

6. Dashboards

Every app is different, of course, but if you're creating content, you will probably need a place to list it all and sanity-check whether the API and repository behave as expected.

Seeing entries listed, sorted, edited, and deleted exposes issues I never notice at the code level. It also provides quick feedback on usability and workflow. Once the dashboard feels reliable, I know the Lexicon is ready to be exposed through proper forms.
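Most of a dashboard is fetch, sort, and render. As one small example of the sorting part (my own sketch, leaning on the `createdAt` field the schema stores as ISO 8601):

```typescript
// Any record with an ISO 8601 createdAt can be ordered lexicographically.
interface Dated {
  createdAt: string;
}

/** Return a copy of the rows ordered newest-first. */
function newestFirst<T extends Dated>(rows: T[]): T[] {
  return [...rows].sort((a, b) => b.createdAt.localeCompare(a.createdAt));
}
```

ISO 8601 timestamps sort correctly as plain strings, so no date parsing is needed here, which is one quiet payoff of choosing the `datetime` format in the schema.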

🚦 Example Dashboards in Lanyards

lanyards/src/app/dashboard/links at main · renderghost/lanyards
Lanyards is a dedicated profile for researchers, built on the AT Protocol - renderghost/lanyards
https://github.com/renderghost/lanyards/tree/main/src/app/dashboard/links

7. Forms

It might seem obvious, but this is the step where validation truly exposes schema oversights. Fields that look fine on paper often behave differently when users interact with them.
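As a concrete example of schema constraints meeting real input, here is a validator sketch for the hypothetical `link.social` fields (`url` with format `uri`, `label` with `maxLength` 64):

```typescript
/** Return a list of validation errors; empty means the input is valid. */
function validateSocialLink(input: { url: string; label?: string }): string[] {
  const errors: string[] = [];
  try {
    new URL(input.url); // mirrors the schema's format: "uri" constraint
  } catch {
    errors.push('url must be a valid URI');
  }
  if (input.label && input.label.length > 64) {
    errors.push('label must be 64 characters or fewer'); // maxLength: 64
  }
  return errors;
}
```

Running real input through checks like these is usually where I discover a field needs a tighter constraint, a looser one, or a different type entirely, which feeds straight back into the schema from step 2.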

✏️ Example Forms in Lanyards

lanyards/src/app/dashboard/links/create at main · renderghost/lanyards
Lanyards is a dedicated profile for researchers, built on the AT Protocol - renderghost/lanyards
https://github.com/renderghost/lanyards/tree/main/src/app/dashboard/links/create

8. Display

Congratulations 🎊 — if you’ve made it this far, your Lexicon is now 'alive': production-ready, fully integrated into your app, and ready to be experienced by real users and iterated on in future.

Interactions at this stage can still reveal oversights, like poorly named fields or edge cases, but if you started with a solid schema, followed a rigorous process, and thought through user context and intent, these later layers should be relatively problem-free. 🤞

It works on my machine!

This workflow has helped me bring my Lexicons to a point of usefulness by turning them into a shared language across the codebase and in the apps themselves.

Following a (somewhat) strict process seems to reduce effort, keep my mental load manageable, and leave room for iteration. I can create, edit, and consume them reliably through an API. Spotting weak naming, missing fields, vague definitions, and edge cases that don’t show up in the schema alone has also become easier.

Again — this is simply the current state of my thinking: an approach that has worked for me so far. Be aware that there are other ways.

The AT Protocol ecosystem is still pretty young, and we’re all experimenting. If you’ve found a workflow that scales better or solves tricky problems differently, I’d love to hear about it.

Happy Lexiconography!

B. Prendergast 👋 (@renderg.host)
🌀 Building #ATscience tools in the #ATmosphere with #ATproto 🌿 Designing simpler systems for complex pains 👉 Writing about design, tech, science, people & the messy in-betweens 💖 Making things, better 🌶️ Neurodiverse rudeboi — ⧉ https://links.renderg.host →
https://bsky.app/profile/renderg.host