2026-04-05
On Context Windows
Why context length changes everything about how we use LLMs.
The jump from 4K to 1M context tokens isn’t just a quantitative improvement — it’s a qualitative shift in what’s possible.
With small context, you have to be clever about what you feed the model. You summarize, you chunk, you build RAG pipelines. With large context, you can just… give it the whole thing.
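The small-context workflow above can be sketched as chunking plus retrieval. A toy Python sketch, where the function names, chunk size, and keyword-overlap scoring are illustrative assumptions standing in for a real RAG stack, not anything from a specific library:

```python
def chunk(text, size=50):
    # Small-context workflow: split the document into fixed-size word chunks.
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, query, k=2):
    # Naive keyword-overlap scoring stands in for a real retriever here.
    q = set(query.lower().split())
    return sorted(chunks, key=lambda c: len(q & set(c.lower().split())), reverse=True)[:k]

def build_prompt(question, context):
    return f"Context:\n{context}\n\nQuestion: {question}"

# Toy stand-in for "the whole thing" (a codebase, a paper, a log dump).
doc = "boring preamble " * 60 + "the auth module handles login"

# Small context: chunk, retrieve, then prompt with only the best chunks.
small = build_prompt("Where is auth handled?",
                     "\n---\n".join(retrieve(chunk(doc), "auth login", k=1)))

# Large context: skip all of that and pass the entire document.
large = build_prompt("Where is auth handled?", doc)
```

The large-context version deletes the two fiddliest functions; the remaining engineering is just deciding what question to ask, which is the post's point.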
What changes
- Code review: Feed the entire codebase, not just a diff
- Research: Load full papers, not abstracts
- Debugging: Include the full stack trace, logs, and surrounding code
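Whether "the whole thing" actually fits is still worth a sanity check before sending it. A rough sketch, assuming the common (and inexact) four-characters-per-token heuristic; a real tokenizer gives exact counts:

```python
def estimate_tokens(text):
    # Rough heuristic: ~4 characters per token for English-ish text.
    # This is an approximation, not a real tokenizer.
    return len(text) // 4

def fits_in_context(parts, window=1_000_000):
    # parts might be [stack_trace, logs, surrounding_code] from the list above.
    return sum(estimate_tokens(p) for p in parts) <= window

fits_in_context(["x" * 400_000, "y" * 400_000])  # two big blobs, well under 1M tokens
```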
What stays the same
The model still needs clear instructions. More context doesn’t fix vague prompts. Garbage in, garbage out — just more of it.
The real skill isn’t managing context anymore. It’s knowing what question to ask.