This repository documents an innovative approach to technical writing: the collaborative co-authorship of a detailed academic-style report by both a human expert and a large language model. The report, titled The Anatomy of Context-Generic Programming, explores the nuanced concepts of context-generic programming patterns and fission-driven development methodologies within the Rust programming language ecosystem. Rather than presenting only the final polished report, we have chosen to preserve the entire authorship journey, including the human-written original content, the structured instructions provided to the LLM, multiple AI-generated drafts, and the iterative refinement process that led to the finished work.
This transparency serves an important purpose: it demonstrates that high-quality technical documentation can emerge from a carefully orchestrated human-AI partnership, where the human provides intellectual direction, domain expertise, and critical judgment, while the AI contributes substantial writing capacity, detailed elaboration, and tireless revision. This repository thus functions both as a case study in collaborative AI-assisted authorship and as a resource for others seeking to employ similar methodologies for their own technical documentation projects.
The creation of this report followed a structured, iterative methodology designed to leverage the strengths of both human expertise and artificial intelligence while mitigating the weaknesses inherent in each. The following sections describe each phase of this process in detail, explaining not only what was done but why each step was necessary to produce a high-quality, comprehensive technical report.
The authorship process began with a human expert developing an original draft containing the foundational ideas, key arguments, and critical technical insights that needed to appear in the final report. This draft, preserved in the human-original directory, serves as the intellectual backbone of the entire project. The human author's role at this stage was not to write the complete report in finished form, but rather to establish the core structure, identify the major themes to be explored, provide specific technical examples, and articulate the key distinctions and concepts that readers needed to understand. By establishing this human-authored foundation, the project ensures that the report remains grounded in authentic expertise rather than allowing the LLM to generate content from inference alone. This initial draft becomes the reference document against which all subsequent AI-generated versions are measured.
With the human-written foundation in place, the first AI draft was generated and saved in the ai-draft directory. Rather than simply asking the LLM to write the report once, the project employed a specific instruction strategy: we provided detailed prompts asking the AI to produce a hyper-detailed report that would explore every facet of the subject matter with maximum depth and comprehensiveness. This initial AI draft begins with an executive summary and a complete table of contents, providing a roadmap for the LLM before diving into the detailed chapters.
Given the context window limitations of large language models, each chapter was generated through a separate prompt rather than attempting to produce the entire report in a single pass. This approach offered a crucial advantage: it allowed each chapter to receive the full attention of the model's context window, meaning more of the human-written source material, the original instructions, and the developing narrative could be kept in focus. For every chapter prompt, the original human instructions and the executive summary were re-attached to ensure that the AI did not lose sight of the project's overall vision or contradict decisions made in earlier chapters.
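To make this re-attachment pattern concrete, the sketch below shows how a single chapter prompt might be assembled from the persistent project materials. The file paths and the ask_llm helper are hypothetical placeholders, not artifacts from this repository; in this project the equivalent assembly was performed by hand in the chat interface.

```python
from pathlib import Path

def build_chapter_prompt(chapter_title: str) -> str:
    """Assemble one chapter prompt, re-attaching the persistent project context.

    The file paths are illustrative; adapt them to your own repository layout.
    """
    instructions = Path("human-original/instructions.md").read_text()
    human_draft = Path("human-original/draft.md").read_text()
    exec_summary = Path("ai-draft/executive-summary.md").read_text()

    return "\n\n".join([
        "## Project instructions (re-attached with every chapter prompt)",
        instructions,
        "## Human-written source material",
        human_draft,
        "## Executive summary and table of contents",
        exec_summary,
        f"## Task\nWrite the chapter titled '{chapter_title}' as hyper-detailed, "
        "flowing prose, consistent with the executive summary above and with the "
        "decisions made in earlier chapters.",
    ])

# ask_llm stands in for whichever chat interface is used; here each assembled
# prompt was pasted into the chat window by hand and the reply copied back out.
# chapter_text = ask_llm(build_chapter_prompt("Chapter 1"))
```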
The AI draft, though thorough and detailed, inevitably contains elements typical of first-draft writing: some repetition where similar concepts are explained from multiple angles, occasional tangents on secondary points, and the kind of verbosity that comes from an explicit instruction to write at maximum length and detail. This verbosity, however, was intentional rather than a flaw to be immediately corrected, as it served an important purpose in the subsequent revision phase.
After the complete AI draft was assembled, we initiated a critical review phase by asking the LLM to read through the entire draft with fresh attention and produce a detailed analysis of its strengths and weaknesses. This review, preserved in ai-revision-1, identifies redundancies, suggests restructuring opportunities, flags unclear passages, and recommends content improvements. Rather than viewing the review as criticism, we understood it as the AI contributing analytical capacity to the editing process. The AI, having just written the draft, could identify sections where it had belabored a point unnecessarily, areas where the argument could be more tightly constructed, and passages where clarity could be improved through reorganization.
This review step served a dual purpose: it provided valuable guidance for the human author's subsequent review, and it demonstrated to the AI model what quality revision looks like, priming it to produce better work in the next phase. By asking the LLM to review its own work before revising it, we created a natural checkpoint where verbosity and redundancy could be identified and targeted for improvement.
The fourth phase involved passing both the original draft report and the detailed AI-generated review to the LLM, with new instructions to produce a substantially revised version that addressed the identified issues. To ensure this phase produced coherent, high-quality work, we used a structured approach: first, the AI was asked to produce a new executive summary and revised table of contents incorporating the insights from the review. This task forced the model to synthesize what it had learned and to think strategically about the report's overall organization before embarking on detailed revisions.
Each chapter of the revised report was then generated through a separate prompt, working around the context window constraint while maintaining narrative continuity. For every chapter prompt, the original human-written source material, the complete review, and the original project instructions were re-attached. This redundancy in context provision ensures that the AI does not regress to earlier, less refined approaches or lose sight of the project's objectives. Critically, we instructed the AI to begin its work on each chapter by explicitly documenting the specific action items derived from the review that applied to that chapter, and to provide a detailed outline of how it intended to address those items before proceeding to write the revised chapter prose. This intermediate step of outlining ensures that the revision is thoughtful and targeted rather than superficial.
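Concretely, a revision-phase chapter prompt differs from a drafting prompt mainly in what it re-attaches and in the action-item step it demands before any prose. The template below is an illustrative reconstruction of that structure, not the project's exact wording.

```python
# Illustrative revision-prompt template; the headings and phrasing approximate the
# structure described above rather than quoting the project's actual prompts.
REVISION_PROMPT_TEMPLATE = """\
## Project instructions (re-attached)
{instructions}

## Human-written source material for this chapter
{human_source}

## Review of the first draft (ai-revision-1)
{review}

## Task
Revise the chapter below. Before writing any prose:
1. List the specific action items from the review that apply to this chapter.
2. Outline how you intend to address each item.
Only then write the revised chapter. Do not remove any material that appears
in the human-written source above.

## Current chapter draft
{chapter_draft}
"""

# revised_chapter = ask_llm(REVISION_PROMPT_TEMPLATE.format(
#     instructions=instructions, human_source=human_source,
#     review=review, chapter_draft=chapter_draft))
```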
Following the AI's formal revision phase, the human author conducted a comprehensive reading of the entire revised report, identifying specific sections requiring further improvement. Rather than making all corrections unilaterally, the human author used a structured amendment instruction approach, directing the AI to revise particular passages with specific guidance such as "replace this code example with one that demonstrates X instead of Y" or "expand this section to better explain the connection between A and B." This human-directed refinement ensures that the final report reflects human judgment about what matters most, while still allowing the AI to contribute its writing and elaboration capabilities.
The project then entered a cyclical phase of additional AI revisions, conducted chapter by chapter without requiring a complete review of the entire report each time. This focused approach allows for efficient iteration: the AI can concentrate its effort on specific improvement targets without the overhead of reviewing material that is already satisfactory. After several rounds of manual amendments and targeted revisions, the report converged on its final form, which was then saved as report.md.
The process of creating this report revealed several important insights about how to work effectively with large language models when producing detailed, high-quality technical documentation. These insights challenge some common intuitions about how LLMs work and offer practical guidance for others undertaking similar projects.
Large language models exhibit a consistent tendency toward brevity and stylistic economy in their baseline operation. They are trained on vast amounts of internet text where conciseness is rewarded and verbose explanations are often truncated or summarized. This training and subsequent fine-tuning create a default mode in which LLMs prefer to present information as bullet points, lists, tables, and short declarative sentences rather than as flowing prose and detailed explanation. When tasked with producing technical documentation, this tendency must be actively counteracted through careful prompting strategies.
We discovered that the most effective approach to compelling LLMs to write at greater length and depth involves using explicit modifiers such as "hyper detailed," "deep dive," and "comprehensive" in the prompt instructions. Additionally, framing the target audience as readers with advanced domain expertise—such as academics, PhD-level researchers, or industry practitioners—triggers the model to adopt a more substantive and detailed writing voice. The model infers, reasonably, that such an audience would find brief or superficial treatment unsatisfying, and it adjusts accordingly.
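As an illustration of both signals working together, an instruction preamble might read roughly as follows; the wording is an example, not a quotation from the project's prompts.

```python
# Example preamble combining explicit depth modifiers with expert-audience framing.
# The wording is illustrative, not the project's actual prompt text.
STYLE_PREAMBLE = (
    "Write a hyper-detailed, comprehensive deep dive into this chapter's topic. "
    "Assume the reader is a PhD-level researcher or an experienced industry "
    "practitioner: do not summarize, avoid bullet points where flowing prose "
    "would serve better, and elaborate on every technical point in full."
)
```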
Different LLM providers and models exhibit varying responsiveness to these prompting techniques. In our experience, Claude has proven particularly responsive to requests for hyper-detailed writing and maintains length and detail across multiple turns of conversation. Other models, such as ChatGPT, sometimes continue to optimize for brevity even when explicitly instructed otherwise, requiring more aggressive and creative prompting strategies to achieve comparable depth.
An unexpectedly effective technique involves asking the LLM to produce a detailed outline of what it plans to write before actually writing the content. Outlines themselves tend to be relatively brief by nature, yet the existence of a detailed outline creates a constraint on subsequent writing: the model becomes committed to elaborating on each point in the outline and feels compelled to flesh out the outline's structure in prose form. This indirect approach circumvents the model's natural laziness and forces more substantive output than directly requesting longer writing would produce.
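This outline-first technique can be expressed as a two-step exchange, sketched below with ask_llm again standing in for whichever chat interface is used; the second prompt feeds the model's own outline back to it as a commitment to be elaborated.

```python
def outline_then_write(chapter_prompt: str, ask_llm) -> str:
    """Two-pass generation: request a detailed outline, then hold the model to it."""
    outline = ask_llm(
        chapter_prompt
        + "\n\nBefore writing any prose, produce a detailed outline of every "
          "section and sub-point you intend to cover in this chapter."
    )
    # Feeding the outline back creates the commitment described above: the model
    # is now expected to expand every outlined point into full prose.
    return ask_llm(
        chapter_prompt
        + "\n\nHere is the outline you committed to:\n\n" + outline
        + "\n\nNow write the full chapter, elaborating on every point in the outline."
    )
```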
A counterintuitive but valuable discovery emerged from this project: it is often better to ask LLMs to write with maximum verbosity in early drafts rather than requesting efficiency or concision from the beginning. The reasoning behind this approach is subtle but important. When an LLM is explicitly instructed to be verbose—to explore tangents, elaborate on secondary points, and think out loud—it often generates more interesting, nuanced, and substantive content than it would otherwise produce. The verbosity forces the model to engage more deeply with each topic, to consider multiple angles and implications, and to articulate connections that it might otherwise leave implicit.
This verbose first draft then serves a different purpose than a traditionally concise draft would. Rather than representing the final product ready for minor polish, it functions as rich source material from which a human editor can extract, refine, and reorganize the most valuable insights. During subsequent revision phases, the human editor and the AI working in revision mode can identify the signal within the verbose noise, preserving the substantive content and interesting insights while eliminating repetition and unnecessary tangents. This approach to drafting acknowledges a fundamental reality about LLM writing: more exploration leads to better final products than premature optimization for brevity.
One of the most important lessons from this project contradicts a tempting but flawed approach to LLM-assisted content generation: do not ask LLMs to reorganize or restructure large bodies of content, and do not delegate decisions about which content is important to the model. While LLMs can assist with many aspects of writing and revision, they are surprisingly poor at determining which ideas are central to a report's purpose and which are peripheral or expendable. When given authority to reorganize chapters, merge sections, or eliminate redundancy on their own initiative, LLMs tend to simplify and homogenize content, removing nuance and the distinctive voice or perspective of the original human author.
Instead, the most effective approach is to maintain strict human authority over the overall structure and content scope of the report. The human author establishes the chapter organization, decides what ideas must be included, and defines the report's narrative arc. The LLM is then tasked with elaborating within that human-defined structure, writing and rewriting within the bounds set by the human, but not reorganizing or making unilateral decisions about importance. When content needs to be shortened or streamlined, the human author identifies what to cut or refine rather than delegating that decision to the model.
This principle extends to the human-written original draft: it should be kept in sync with the report's evolving chapter organization, and the LLM should be explicitly instructed never to remove material that appeared in the human original. While this approach requires more human involvement than a fully automated content generation system would demand, it ensures that the final report preserves the intellectual substance and perspective of the human expert rather than devolving into an LLM's average-case interpretation of the subject matter.
A practical pitfall we encountered involved attempting to improve specific aspects of LLM-generated content through vague or general revision instructions. For example, asking the model to "use a database query example instead of an HTTP request" might seem like clear guidance, but in practice such instructions often fail to produce the desired result. The model may interpret the instruction differently than intended, or may produce an example that is technically correct but poorly integrated with the surrounding prose or not appropriately complex for the context.
The most reliable way to ensure that specific passages meet your requirements is to provide detailed, concrete examples or to write particularly critical content by hand and then ask the LLM to write the surrounding content that integrates with your hand-authored sections. If you require a specific code example to appear in the report, write it yourself or find an existing example that precisely matches your needs, rather than asking the LLM to generate one and then revising it repeatedly. This approach is more efficient than an iterative cycle of vague requests and partial corrections.
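A targeted amendment built around a hand-authored example might look like the sketch below. The helper and its wording are illustrative; the hand-written example itself is whatever code the human author wants to appear verbatim in the report.

```python
def build_amendment_prompt(section_ref: str, hand_written_example: str,
                           chapter_text: str) -> str:
    """Targeted amendment: the human supplies the exact example, the LLM writes around it."""
    return (
        f"In {section_ref} of the chapter below, replace the current code example "
        "with the following hand-authored example, reproduced verbatim:\n\n"
        f"{hand_written_example}\n\n"
        "Rewrite only the paragraphs immediately surrounding the example so that "
        "the prose introduces and explains it; leave the rest of the chapter "
        "unchanged.\n\n"
        "## Chapter\n" + chapter_text
    )
```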
Perhaps the most practically important discovery from this project concerns how to manage the LLM's context window to maximize the quality and coherence of generated content. While many contemporary AI tools offer "agent mode" or automated systems that manage context internally and attempt to handle large tasks without human oversight, we found that manually managing the context window—deliberately controlling what information is included with each prompt and what is excluded—produces substantially better results for lengthy technical writing projects.
This project was primarily written using Copilot in "ask mode", with outputs copied manually between turns. This manual approach, though requiring more user effort, offered significant advantages. By carefully curating what information is included in each prompt's context window, we could fit more relevant source material, instructions, and previously written content into the available context than would typically fit if the AI were managing context automatically. For example, when revising a specific chapter, we could simultaneously include the human-written source material for that chapter, the AI's review comments that specifically addressed that chapter, the project instructions, the complete executive summary, and the revised table of contents, ensuring the AI had maximum relevant information to draw upon.
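In practice, deliberately controlling what goes into each prompt amounts to a simple budgeting exercise before every turn. The sketch below uses a crude characters-per-token estimate; the budget figures are assumptions that depend entirely on the model being used, and in this project the check was done informally rather than in code.

```python
# Rough pre-flight check for a manually curated prompt. The ~4 characters-per-token
# estimate is crude and model-dependent; treat the numbers as illustrative only.
CONTEXT_BUDGET_TOKENS = 128_000   # assumed context window; varies by model
OUTPUT_RESERVE_TOKENS = 8_000     # leave room for the chapter the model will write

def fits_in_context(*materials: str) -> bool:
    estimated_tokens = sum(len(m) for m in materials) // 4
    return estimated_tokens + OUTPUT_RESERVE_TOKENS <= CONTEXT_BUDGET_TOKENS

# Example: everything relevant to one chapter revision, and nothing else.
# fits_in_context(human_source, chapter_review, instructions, exec_summary, revised_toc)
```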
When generating a single chapter, it is common for the context window to be fully saturated with the relevant source material, prior instructions, and project context. This saturation is not a failure but rather a success: it indicates that the AI is operating with maximum available information about the project's goals and constraints. Writing technical content with such rich context typically produces better results than writing with sparse context, because the model can maintain fine-grained consistency with earlier chapters, better integrate with the established narrative, and make more informed decisions about emphasis and depth.
Throughout the entire methodology, one pattern proves consistent: human involvement and expertise at each phase improves the quality of the final product. The human provides the initial vision and structure, reviews and critiques AI-generated content, guides revisions toward specific improvements, and makes final judgments about what the report should emphasize. The AI, meanwhile, contributes writing capacity, the ability to elaborate at length on technical material, responsiveness to iterative refinement, and untiring revision capabilities. Neither partner is simply "better" at the task; rather, their contributions are complementary. The human ensures the report says something true and important; the AI ensures it is said thoroughly, clearly, and without undue brevity.
The methodology developed through creating this report can be summarized into a set of actionable best practices for others undertaking similar LLM-assisted technical writing projects. First, establish a clear human-written foundation that articulates the core ideas and structure before asking the LLM to elaborate. Second, explicitly instruct the LLM to write with maximum depth and detail, using language that signals you want substantive exploration rather than concise summary. Third, do not delegate structural decisions or decisions about content importance to the LLM; maintain human authority over these critical choices. Fourth, manage the context window manually and deliberately, including all relevant source material with each prompt. Fifth, plan for multiple revision phases rather than expecting a single pass to produce publication-ready work. Finally, recognize that this process requires more initial setup and human involvement than naive approaches to LLM assistance, but produces correspondingly higher-quality results that justify the additional effort.