It's incredible that people can feed up to one million tokens to LLMs and yet, most of the time, fail to take advantage of that enormous context window. No wonder people say the output LLMs generate is always crap... I mean, they're not great, but they can manage to do a pretty good job, and that's only IF you teach them well... Beyond that, everyone has their own tolerance for the effort-and-time-to-results ratio.
"Engineers are finding out that writing, that long-shunned soft skill, is now key to their efforts. In Claude Code: Best Practices for Agentic Coding, one of the key steps is creating a CLAUDE.md file that contains instructions and guidelines on how to develop the project, like which commands to run. But that's only the beginning. Folks now suggest maintaining elaborate context folders.
A context curator, in this sense, is a technical writer who is able to orchestrate and execute a content strategy around both human and AI needs, or even focused on AI alone. Context is so much better than content (a much abused word that means little) because it’s tied to meaning. Context is situational, relevant, necessarily limited. AI needs context to shape its thoughts.
(...)
Tech writers become context writers when they put on the art gallery curator hat, eager to show visitors the way and help them understand what they’re seeing. It’s yet another hat, but that’s both the curse and the blessing of our craft: like bards in DnD, we’re the jacks of all trades that save the day (and the campaign)."
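To make the CLAUDE.md idea concrete, here is a minimal sketch of what such a file might contain. The project name, commands, and conventions below are all invented for illustration; an actual CLAUDE.md should reflect the real project's tooling.

```markdown
# CLAUDE.md

## Project overview
Acme API: a hypothetical REST service written in TypeScript.

## Commands
- `npm run build`: compile TypeScript to `dist/`
- `npm test`: run the unit test suite
- `npm run lint`: check style before committing

## Conventions
- Use `async/await`, not raw Promise chains.
- New endpoints go in `src/routes/`, each with a matching test in `test/routes/`.
- Never commit directly to `main`; open a pull request.
```

The point is exactly the curation work described above: a short, situational document that tells the agent what matters in this repository, rather than dumping everything into the context window.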
