One of the most useful shifts I've made in working with AI didn't come from a better prompt. It came from changing the assignment.
Early on, like a lot of developers, I would describe the kind of application I wanted and let the AI start building. Sometimes that meant a desktop app, sometimes a web app, sometimes some kind of hybrid tool with a certain feel. I might guide it along the way, and I might know roughly what I wanted the screens to look like, but the assignment was still mostly open-ended. Build me one of these. Make it look something like this. Put this here. Add that there.
And the AI would respond the way AI usually responds. It would take the shortest path.
It would hand-code layouts. It would improvise structure. It would drop in CSS. It would create one-off spacing fixes. It would make visual decisions that looked reasonable in the moment. And because it could do that quickly, it often felt productive. You could see the product taking shape. You could click around. You could tell yourself you were making progress.
In a narrow sense, you were.
But over time, I started to notice a pattern. We were spending too much time tweaking the surface and too little time building the real mechanics that made the project valuable. Worse, the AI was constantly making up local rules as it went. It wasn't malicious. It wasn't even wrong in the moment. It was simply doing what an unconstrained system does. It was solving the immediate problem using whatever seemed fastest and most plausible.
That's when I started thinking about boats.
The world is full of boats that float. There are plenty of builders who can make something attractive, comfortable, and perfectly enjoyable for a day on the water. There's nothing wrong with that. A daysailer has its place. So does the sleek production boat people sometimes jokingly call a "plastic fantastic." It may be fast. It may be attractive. It may be exactly what someone wants for a certain kind of use.
But nobody serious confuses that with a true blue water design.
A blue water boat is built for a different purpose from the beginning. It's not just built to float. It's built to endure. It's built with changing conditions in mind. It's built with consequences in mind. It's built to keep going when the water stops being friendly.
That's the difference I started to see in software built with AI.
Left to itself, AI is very good at building the software equivalent of a daysailer. It can get you something that floats. It can get you something that looks good in screenshots. It can get you something you can click through and demonstrate. It can even get you something that feels surprisingly polished for a while.
What it doesn't reliably do on its own is produce a blue water design.
That takes intent.
The shift for me came when I stopped asking the AI to invent the surface and the structure at the same time. Instead of treating every new project like a fresh chance for improvisation, I started thinking much harder about the rules the work needed to live inside.
That meant choosing the underlying tools and patterns on purpose. It meant deciding what kind of layout behavior belonged across projects. It meant deciding what kinds of controls, navigation patterns, and interaction rules should be considered normal. It meant drawing some boundaries and saying, no, you don't get to solve every little local problem by quietly inventing a new standard.
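To make that concrete, here's a minimal sketch of what one such rule might look like in code. Everything in it is illustrative rather than taken from any real project or library (the `SPACING` scale, the `spacing` helper, the values themselves): the point is that the AI gets exactly one sanctioned way to pick a spacing value, instead of quietly inventing pixel counts screen by screen.

```typescript
// A hypothetical shared "design rules" module. The AI is told to import
// from here and is not allowed to write raw pixel values in layouts.
const SPACING = { xs: 4, sm: 8, md: 16, lg: 24, xl: 40 } as const;

// The type system enforces the rule: only named tokens are accepted,
// so an invented value like spacing(13) fails to compile.
type SpacingToken = keyof typeof SPACING;

function spacing(token: SpacingToken): string {
  return `${SPACING[token]}px`;
}

console.log(spacing("md")); // "16px"
```

The mechanism matters less than the constraint: whether it's spacing tokens, a component library, or a navigation pattern, the rule lives in one place, and every local decision has to go through it.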
That was a big change.
Before that, the AI was doing what AI naturally does when the assignment is loose. It wasn't just building features. It was also inventing the visual language, the structural language, and the maintenance burden one answer at a time.
That's where a lot of drift comes from.
If the AI is free to improvise the interface, the layout rules, the interaction style, and the fallback behavior every time you ask for a change, then even a decent result today can quietly make the overall system worse tomorrow. One screen gets a special treatment. One panel gets custom behavior. One workflow gets its own exception. None of those decisions look catastrophic by themselves. But they accumulate.
That's why I don't think the right question is "Can the AI build this?"
Usually it can.
The better question is "What rules is the AI building under?"
That's the blue water question.
If you're building a quick prototype, a limited internal tool, or a throwaway experiment, a daysailer may be fine. A little AI improvisation may even be ideal. There are times when speed matters more than structure. There are times when getting something afloat is exactly the right call.
But if you're building a serious application, a product family, or a system that's going to evolve over time, then you can't afford to let the AI define the architecture one local decision at a time.
You need blue water thinking.
You need to choose the hull before painting the cabin.
You need to decide what belongs in the framework and what belongs in the product. You need to decide what can be reused. You need to decide what interaction patterns are normal and what should count as exceptions. You need to decide when the AI is allowed to step outside the system and, just as importantly, when it needs to stop and ask first.
That may sound restrictive. In practice, it's liberating.
Once the AI is no longer improvising the foundation, the work tends to get better. The product looks more coherent. The interface decisions feel more intentional. The amount of visual fiddling goes down. And the attention goes back where it belongs: to the mechanics of the product, the workflow, the information model, and the experience that actually makes the thing worth building.
That's what I mean by the blue water design.
It's not about making software fancy. It's not about overengineering. It's not about freezing every decision before work begins.
It's about deciding what the vessel is supposed to survive.
A good framework does for AI what a good development shop does for human programmers. It reduces accidental variation. It preserves standards. It makes handoffs easier. It increases the odds that local work contributes to a coherent whole.
And it changes the conversation from:
Can you build me something like this?
to:
Here is the system. Build the next part without violating it.
That's a far better assignment.
It also says something important about experience. One of the biggest advantages experienced developers bring to AI work isn't just that they know more syntax or more tools. It's that they know when the model is solving the wrong problem. They know the difference between a shortcut and a standard. They know when a local success is creating a long-term liability.
They know the difference between something that floats and something that is seaworthy.
That's why this matters beyond interface work. The same pattern shows up in architecture, testing, deployment, documentation, content systems, and workflow design. If you leave the rules open, the AI will fill the vacuum. It will create structure because structure is required to move forward. The question is whether it's creating the structure you would have chosen if you had stopped to think about it first.
Too often, the answer is no.
That doesn't mean the AI failed. It means the assignment failed.
The wrong lesson to take from that is that AI can't help build serious software. The right lesson is that serious software needs serious boundaries. If you want a blue water product, you have to provide the blue water design.
Anyone can help build something that gets around the bay.
If you want to cross oceans, don't let the AI make up the rules.
Where this comes from
This guide grew out of a pattern I kept running into while working with AI on software design. The more I dealt with systems that drifted, one local decision at a time, the more I realized the real problem wasn't just implementation quality. It was that the rules themselves were being invented too late.
You can read the more personal side of that in the companion Field Note, "The day I realized AI was building daysailers."
-- Charles