The AI Junk Drawer Problem
by Charles Edmonds
One of the easiest ways to make AI less effective is to keep helping it without ever cleaning up.
A rule here.
A reminder there.
A style note.
A project file.
A lessons-learned document.
A copied block from an earlier session because it worked once.
Another warning because the AI missed something last time.
Each addition feels sensible.
That’s exactly why the problem is easy to miss.
Before long, you are no longer giving the AI a clean working context. You’re giving it a junk drawer.
Everybody knows what a junk drawer is. It is full of things that might matter. Most of them got there for a good reason. But when you need one thing right now, the drawer does not help you think. It makes you dig.
A lot of AI workflows end up looking exactly like that.
The issue isn’t that rules, notes, or reference files are bad. Complex work often needs all three. The issue is that useful context can quietly turn into accumulated clutter. Once that happens, the AI is not just doing the task. It is also sorting through the pile around the task.
That hidden sorting has a cost.
It costs focus.
It costs clarity.
It costs precision.
And because the AI can often push through the clutter well enough to stay useful, the cost remains invisible for a while. The work still gets done. The answers still come back. The process still feels productive.
But the environment is getting noisier.
That matters.
When AI goes off course, the instinctive response is often to add another instruction. If it misses a formatting rule, you add a note. If it forgets a constraint, you add a warning. If it drifts, you add a stronger reminder. If it repeats a mistake, you add another example.
That can work.
It can even work impressively well for a while.
But if every solved problem becomes permanent background guidance, the environment gradually fills with stale fixes, overlapping rules, and old decisions that no longer deserve equal weight.
That is the AI junk drawer problem.
More context is not always better context
The usual advice around AI tends to focus on giving the model more context. Sometimes that is right. Thin context causes mistakes too.
But there is a point where added context stops helping and starts competing.
Some instructions are foundational.
Some belong only to one project.
Some belong only to the task in front of you.
Some were useful once and should have been retired.
When those all live together without enough structure, the model has to decide what matters most right now. That interpretation burden is real. It doesn’t always show up as a dramatic failure. Often it shows up as softer drift: a fuzzier answer, a missed priority, a local rule treated like a global one, or a response that feels almost right but not fully sharp.
This is why prompt quality is not just a writing problem.
It is an architecture problem.
Build a workbench, not a drawer
The better model is not the junk drawer. It is the workbench.
A workbench is arranged for the current job. The tools you need are present. The distractions are not. The setup reflects intention.
That is how strong AI workflows behave.
Foundational guidance should be small, stable, and truly foundational.
Project guidance should stay with the project.
Task instructions should stay with the task.
Temporary cautions should not become permanent background noise unless they have earned that place.
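The layering above can be sketched in code. This is a minimal illustration of the idea, not any particular tool's API; the layer names, the `cautions` structure, and the session-based expiry are all invented for the example.

```python
def build_context(foundational, project, task, cautions, session=0):
    """Assemble a staged prompt: stable layers first, then only
    the temporary cautions that have not yet expired.

    Each caution is a (text, expires_after_session) pair, so a
    one-time warning retires itself instead of becoming
    permanent background noise."""
    live_cautions = [text for (text, expires) in cautions if session < expires]
    return foundational + project + task + live_cautions

# Foundational guidance: small, stable, truly foundational.
foundational = ["Prefer clarity over cleverness."]
# Project guidance stays with the project.
project = ["This repo uses tabs, not spaces."]
# Task instructions stay with the task in front of you.
task = ["Refactor the parser; do not touch the lexer."]
# A temporary caution that expires after session 2.
cautions = [("Last run you renamed a public symbol; don't.", 2)]

print(build_context(foundational, project, task, cautions, session=1))
print(build_context(foundational, project, task, cautions, session=5))
```

In session 1 the caution is still staged; by session 5 it has been retired automatically, which is the opposite of the junk-drawer default where every fix becomes permanent.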
The goal is not to make every prompt short.
The goal is to make the signal strong.
Sometimes that will still require a fair amount of context. The issue is not length by itself. The issue is whether the context is relevant, ordered, and proportional.
A long workbench can still be organized.
A short junk drawer is still a junk drawer.
A practical check
If you think your AI workflow may be getting cluttered, ask yourself a few blunt questions.
- Can you explain why each major instruction or file is still there?
- Do you know which rules are foundational and which came from one past mishap?
- Have you removed anything recently, or do you only add?
- Is the AI working from a staged setup, or rummaging through a pile?
If those questions are hard to answer, the next improvement may not be another note or another rule.
It may be cleanup.
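The questions above can even be turned into a lightweight audit. A sketch, assuming you track each instruction with a stated reason and an origin; the field names and categories here are hypothetical, invented for illustration.

```python
def audit(instructions):
    """Flag instructions that cannot justify their place.

    Each entry is a dict with 'text', 'reason', and 'origin'
    ('foundational', 'project', 'task', or 'mishap')."""
    flagged = []
    for item in instructions:
        if not item.get("reason"):
            # Can't explain why it's still there? Review it.
            flagged.append((item["text"], "no stated reason"))
        elif item.get("origin") == "mishap":
            # Born from one past mistake, not a real rule.
            flagged.append((item["text"], "came from one past mishap"))
    return flagged

rules = [
    {"text": "Prefer clarity over cleverness.",
     "reason": "house style", "origin": "foundational"},
    {"text": "Never use recursion.",
     "reason": "", "origin": "mishap"},
]
for text, why in audit(rules):
    print(f"review: {text} ({why})")
```

The point is not the script itself but the habit it encodes: every instruction owes you a reason, and anything that can't produce one is a cleanup candidate rather than permanent furniture.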
Where this goes next
This is one of the workflow patterns I explore more deeply in Real Programmers Use AI, especially the difference between accumulated context and deliberately staged context.
For a project-grounded example of how this shows up in real work, see When our AI workflow became a junk drawer.