You spend your career fighting information overload. Too many notes, too many tools, too many inputs competing for limited attention. You build systems to tame the chaos. You classify, organize, retrieve.
Then you turn around and give your AI assistant 4,527 lines of instructions in a single file and wonder why it chokes.
That is not a hypothetical number. That is the exact size my AI project instructions file reached before it started failing on every conversation. 4,527 lines. 220 kilobytes. Loaded into every single interaction, consuming most of the available context window before any real work could begin.
I had given my AI the same disease I teach people to cure.
How Good Intentions Build a 4,527-Line Monster
Nobody creates a 4,527-line instructions file on purpose. It grows the same way your inbox grows, the same way your notes app accumulates 3,000 entries you cannot find, the same way your Notion workspace becomes a graveyard of half-finished pages.
You add one useful rule. Then another. A database schema here. An integration workflow there. A style guide, a naming convention, a decision log, a contact list, an agent architecture, a quarterly planning protocol. Each addition makes sense in isolation. Each one makes the AI slightly smarter for that specific task.
"Clutter and confusion are not attributes of information, they are failures of design." — Edward Tufte
Tufte nailed it decades before AI existed. The problem was never the information itself. Every single one of those 4,527 lines contained something valuable. The problem was that I had dumped all of it into a single file with zero classification, zero hierarchy, and zero sense of what was needed when.
I had built a library with one shelf.
Every book crammed onto the same surface. Fiction next to tax law next to recipes next to technical manuals. When you need the recipe, you have to wade through everything else first. And a shelf that can only hold so many books before it collapses is exactly what a context window is: eventually you cannot fit the book you actually came for.
That is precisely what was happening. My AI was spending its entire cognitive budget reading instructions it did not need for the task at hand. A simple journal entry required loading database schemas, agent architectures, frontend standards, and CRM protocols. A content review session dragged in quarterly planning rules, chess configurations, and Tana integration details.
The context window is not infinite. Everything you force the AI to read is something it cannot think about.
The Wall: When Your Fix Keeps Breaking the Fixer
Here is where the story gets interesting. And by interesting, I mean infuriating.
I recognized the problem. I asked Claude to analyze the file and propose a restructuring plan. Reasonable request. Except the task of analyzing a 4,527-line file required loading that same 4,527-line file into the context window. The very problem I was trying to solve was preventing me from solving it.
First session: hit "prompt too long" halfway through the analysis.
Second session: got further, hit compaction. Lost context. Had to restart.
Third session. Fourth session. Fifth session. Same wall. Different angle. Same crash.
Six consecutive sessions. Each one hitting the context limit, each one losing everything it had figured out so far. Like trying to empty a flooded basement with a bucket that keeps developing holes.
Here is what I did that most people do not: I kept going.
Not through stubbornness. Through a technique that turns AI friction into progress instead of frustration. When a session hit the wall, I copied the blocked prompt, started a fresh session, pasted it in, and let Claude continue from where it crashed. Each restart preserved the strategic direction. Each new session picked up what the previous one had built.
Six sessions. Same problem. Same persistence. The AI handled the architecture. I handled the continuity.
That is not a workaround. That is a partnership.
The Fix: Hub and Spoke
The solution, once we fought through the wall to reach it, was elegant in the way that good information architecture always is.
Stop making the AI carry every book every time it walks into the library. Give it a map, and let it grab what it needs.
The old model: one massive file (4,527 lines), loaded into every conversation, every turn.
The new model: a lean hub (640 lines) containing only the rules that apply to EVERY interaction, plus 9 specialized spoke files that Claude reads on demand, only when the task requires them.
The hub contains: workflow principles, journal rules, data integrity checks, quick references. The universal knowledge.
The spokes contain:
- Database Schema (1,138 lines). Loaded only when working with data.
- Tana Integration (526 lines). Loaded only when syncing to Tana.
- Business Context (619 lines). Loaded only when discussing PM or company operations.
- Content Rules (386 lines). Loaded only when creating articles or handling quotes.
- Agent Architecture (223 lines). Loaded only when building or modifying agents.
- CRM and Meetings (145 lines). Loaded only during people or meeting operations.
- Frontend Standards (134 lines). Loaded only when building UI.
- Quarterly Planning (110 lines). Loaded only during goal setting.
- Glossary (47 lines). Loaded only when terminology questions arise.
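The loading model above can be sketched in a few lines of Python. This is a toy illustration, not the actual project setup: the file names and task tags are hypothetical, though the line counts are the real ones from the restructuring.

```python
# Toy sketch of hub-and-spoke context loading.
# File names and task tags are hypothetical; line counts are real.

HUB_LINES = 640  # universal rules, loaded into every conversation

# spoke file -> (line count, task tags that trigger loading it)
SPOKES = {
    "database-schema.md":    (1138, {"data"}),
    "tana-integration.md":   (526,  {"tana"}),
    "business-context.md":   (619,  {"pm", "company"}),
    "content-rules.md":      (386,  {"articles", "quotes"}),
    "agent-architecture.md": (223,  {"agents"}),
    "crm-meetings.md":       (145,  {"people", "meetings"}),
    "frontend-standards.md": (134,  {"ui"}),
    "quarterly-planning.md": (110,  {"goals"}),
    "glossary.md":           (47,   {"terminology"}),
}

MONOLITH_LINES = 4_527  # the old single-file model, loaded every time

def context_cost(task_tags: set) -> int:
    """Lines the AI must read for a task: the hub, plus only the
    spokes whose tags overlap the task."""
    return HUB_LINES + sum(
        lines for lines, tags in SPOKES.values() if tags & task_tags
    )

# A journal entry touches no spoke: hub only, 640 lines instead of 4,527.
journal = context_cost({"journal"})
# A data task pulls in exactly one spoke: 640 + 1,138 = 1,778 lines.
data_task = context_cost({"data"})
# Always-loaded reduction versus the monolith: 1 - 640/4527, about 86%.
reduction = 1 - HUB_LINES / MONOLITH_LINES
```

The point of the sketch: the cost of any single task is the hub plus a handful of spokes, never the whole library.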
4,527 lines became 640. An 86% reduction in always-loaded context. Total system knowledge? Preserved completely. Actually better organized than before, because the classification process revealed duplications, contradictions, and outdated rules that had been hiding in the monolith.
"Perfection is achieved, not when there is nothing more to add, but when there is nothing left to take away." — Antoine de Saint-Exupéry
The result was immediate. No more "prompt too long" failures. No more mid-task compaction crashes. Claude could breathe. It had room to think, to reason, to hold the actual conversation in its working memory instead of burning 80% of its capacity on instructions it did not need.
Less input. More intelligence. That is not a paradox. That is information architecture.
The Principle Behind the Fix (And Why It Applies to Everything You Build)
Here is what matters more than my specific file structure: the principle is universal.
The Library With One Shelf is not an AI problem. It is an information architecture problem that shows up everywhere.
- Your notes app with 3,000 unclassified entries? Library with one shelf.
- Your project management tool with every task in a single board? Library with one shelf.
- Your email inbox with zero filters, zero folders, zero rules? Library with one shelf.
And now, your AI instructions file with 4,527 lines of everything-all-at-once? Same failure. Same root cause. Same fix.
The fix is always the same three moves:
- Classify. What categories of knowledge exist here? Not "what topics" but "what domains does this information serve?"
- Separate. Move each domain into its own container. Give it boundaries. Give it a clear purpose.
- Retrieve on demand. Build the map that tells you (or your AI) which container to open for which task.
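The three moves can be shown with a deliberately tiny example. Everything here is illustrative: the notes, domains, and keyword rules are invented to demonstrate the pattern, not taken from any real system.

```python
# Toy illustration of the three moves: classify, separate, retrieve on demand.
# Notes, domains, and keyword rules are all made up for the example.

notes = [
    "customers table has a unique email column",
    "invite attendees 24h before every meeting",
    "buttons use the primary color token",
]

# Classify: decide which domain each piece of knowledge serves.
RULES = {"table": "database", "meeting": "crm", "button": "frontend"}

def classify(note: str) -> str:
    for keyword, domain in RULES.items():
        if keyword in note:
            return domain
    return "hub"  # anything universal stays in the hub

# Separate: each domain gets its own container with a clear boundary.
containers = {}
for note in notes:
    containers.setdefault(classify(note), []).append(note)

# Retrieve on demand: a task opens only the container it needs.
def retrieve(domain: str) -> list:
    return containers.get(domain, [])
```

A frontend task retrieves only the frontend container; the database and CRM knowledge stays on its own shelf until something asks for it.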
This is the same principle that drives Input, the first stage of ICOR: capture with intention, not with volume. And the same principle behind Control, the second stage: process raw information into classified, retrievable knowledge. The methodology does not change because the "user" is an AI instead of a human brain. Good information architecture works for any intelligence, biological or artificial.
"I would have written a shorter letter, but I did not have the time." — Blaise Pascal
Pascal understood something most people still miss: compression is harder than accumulation. Cutting 4,527 lines to 640 required more thinking, more discipline, and more clarity than writing the original 4,527 lines ever did. Every line in the hub earned its place. Every spoke file has a clear boundary.
That is the real work of building a productivity system. Not adding more. Deciding what to remove.
The Craft Lesson: What Six Failed Sessions Taught Me About Working With AI
The technical fix matters. But the human behavior behind it matters more.
Most people would have stopped after the second crashed session. "This is too hard. I will live with the slow performance." And they would have spent the next six months fighting the same friction, losing context mid-task, restarting conversations, wasting hours re-explaining what the AI should already know.
Here is what persistence with AI actually looks like:
- Session crashes are data, not failures. Each crashed session revealed exactly where the bottleneck lived. The crashes were the diagnosis.
- Copy, paste, continue. When a session hits the wall, your strategic direction does not die. Extract it. Transplant it. Keep building.
- Trust the process, not the individual session. No single conversation could solve this. But six conversations, each building on the last, could.
- Let the AI do what it does best. I set the direction. Claude handled the analysis, the file splitting, the cross-referencing, the consistency checks. The human brought persistence. The AI brought processing power. Neither could have done it alone.
This is how productive AI partnerships work. Not one magical prompt. Not one perfect session. Iterative progress through friction, with both human and AI contributing what they do best.
Refine, the fourth stage of ICOR, teaches exactly this: continuous improvement through iteration. The system does not need to be perfect on the first pass. It needs to get better on every pass. Six sessions of imperfect progress produced a result that one "perfect" session never could have reached.
Your AI has information overload because you built its knowledge the same way you build everything else: one useful addition at a time, with no architecture underneath.
The fix is not to know less. The fix is to organize what you know.
Build your AI a library with more than one shelf. Then watch what it can do when it can finally think.