Three years ago, if I were going to build out a side project, I would almost always start with the UI. I wanted to get an MVP out fast to see if the idea was worth pursuing, so I would grab data however I could, wire it into a React app, and spend most of my time making the interface usable. That approach worked well enough then, but it does not work nearly as well when LLMs are part of the development process.
If you want an LLM to help you build, refine, and extend a system, you have to design the system so the model can actually use it. A UI-first architecture makes this nearly impossible. A data-first and API-first approach, with a small CLI on top, gives the LLM everything it needs to reason about your idea and test it. That shift completely changes how fast you can iterate.
Here is how I now think about building something when I know an LLM will be involved.
Why UI-first Falls Apart With LLMs
Starting with the UI worked when everything was built by hand. But a UI is a messy, blended layer that hides the real structure of your system behind markup, component state, and whatever framework patterns the page depends on. An LLM cannot reliably interact with that. It cannot easily tease apart domain boundaries, guess which backend action maps to which form, or infer why the page behaves the way it does.
So when you try to use an LLM to extend or validate the system, everything slows down. It has no clean handle to grab. It either resorts to browser automation, which is brittle and token-hungry, or it has to guess at private server actions that were never designed to be testable in isolation.
If you begin with the UI, you make the system opaque to the one tool that could help you build it faster.
Build the System So the LLM Can Use It
Instead of starting with the interface, focus on the single workflow that would prove your idea has legs. Then design that workflow around the data. The majority of apps are really just data pipelines that take in information, organize it, and return something meaningful.
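As a sketch of what "design the workflow around the data" can look like, here is a minimal example in TypeScript. The expense-tracker domain and every name in it are illustrative assumptions, not something from a real project; the point is that the core workflow is a pure function from input data to organized output, with no UI anywhere.

```typescript
// Hypothetical domain: an expense tracker. Types and names are illustrative.
type Expense = { category: string; amount: number };

// The core workflow as a pure function: take information in,
// organize it, return something meaningful (totals per category).
function summarize(expenses: Expense[]): Record<string, number> {
  const totals: Record<string, number> = {};
  for (const e of expenses) {
    totals[e.category] = (totals[e.category] ?? 0) + e.amount;
  }
  return totals;
}
```

Because the workflow is just data in, data out, an LLM can exercise it with sample inputs and check the results without ever touching a browser.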
If you build that workflow into a clear API, an LLM can hit it directly. It can send sample inputs, verify outcomes, and give you feedback. Once that API exists, you can have the model generate a small CLI that wraps those endpoints. That CLI then becomes:
- something you can use interactively
- a simple, domain-shaped vocabulary for your system
- a fully introspectable surface the LLM can explore with help flags
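Here is a minimal sketch of what such a CLI dispatcher might look like in TypeScript. The command names and endpoint paths are assumptions for illustration; in a real project each handler would call fetch() against your API instead of returning a stub string.

```typescript
// A tiny CLI wrapping hypothetical API endpoints. All names are illustrative.
type Handler = (args: string[]) => string;

const commands: Record<string, { help: string; run: Handler }> = {
  add: {
    help: "add <category> <amount>  record an expense",
    // In a real CLI this would fetch() the API; here it returns a stub.
    run: ([category, amount]) => `POST /expenses ${category} ${amount}`,
  },
  summary: {
    help: "summary                  totals per category",
    run: () => "GET /expenses/summary",
  },
};

// A help flag makes the whole surface introspectable: an LLM (or a human)
// can discover every operation without reading the source.
function dispatch(argv: string[]): string {
  const [cmd, ...rest] = argv;
  if (!cmd || cmd === "--help") {
    return Object.values(commands).map((c) => c.help).join("\n");
  }
  const entry = commands[cmd];
  if (!entry) return `unknown command: ${cmd}`;
  return entry.run(rest);
}

console.log(dispatch(process.argv.slice(2)));
```

The command table doubles as documentation: the domain vocabulary lives in one place, and adding an endpoint means adding one entry.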
At that point you have not only built the start of your app, you have also built the tools the LLM needs to help you expand it. Every endpoint becomes a new building block the model can use to add features, test invariants, or refine your design.
Save the UI for Last
Once your data layer, API, and CLI are solid, building the UI becomes easy. More importantly, it becomes optional in the early stages. You already know the core of the idea works. You have a workflow the LLM can interact with, test, and extend. And you have a reliable architecture your future UI can sit on top of without introducing confusion.
This is very different from something like Next.js or similar frameworks, where the domain logic often ends up blended into loaders, actions, and page components. That setup is fine for a human, but it is almost impossible for an LLM to reason about cleanly. You lose the ability to test small, isolated workflows, and you make iteration much harder.
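To make the contrast concrete, here is a sketch of keeping domain logic out of the framework layer. The discount calculation is an invented example, and the commented-out route is illustrative pseudocode, not a real handler; the point is that a plain function can be called by a route, a CLI command, and a test alike.

```typescript
// Domain logic as a plain function, importable from anywhere.
// (Names and the discount domain are illustrative.)
function applyDiscount(subtotal: number, percent: number): number {
  if (percent < 0 || percent > 100) throw new Error("invalid percent");
  return subtotal * (1 - percent / 100);
}

// A route handler or server action then becomes a thin adapter, roughly:
//
//   export async function POST(req) {
//     const { subtotal, percent } = await req.json();
//     return Response.json({ total: applyDiscount(subtotal, percent) });
//   }
//
// The logic stays testable in isolation; the framework only does plumbing.
```

The same function can back a CLI subcommand, so the LLM never has to dig through page components to find out what the system actually does.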
If you want LLMs to meaningfully accelerate your development, begin with the data problem, build the API, wrap it in a simple CLI, and let the model help you from there. By the time you get around to building the UI, the hard part is already done.