The rise of AI coding assistants like GitHub Copilot and Claude Code has created a new challenge: how do you give an AI agent enough context to write production-quality code that follows your team’s conventions, architecture patterns, and business rules? The answer is Instruction Driven Development (IDD) — a methodology where markdown documentation becomes the executable specification layer that guides AI agents through your codebase.
What Is Instruction Driven Development
IDD is a development approach where carefully structured markdown files serve as the primary interface between human developers and AI coding agents. Instead of relying on the AI to infer patterns from code alone, you provide explicit, layered instructions that describe your project’s architecture, coding standards, testing expectations, and domain-specific rules.
The key insight is that AI agents are dramatically more effective when they have access to well-organized, contextual documentation that mirrors how a senior engineer would onboard a new team member. IDD formalizes this into a repeatable, maintainable system.
The Three-Layer Instruction Architecture
- Global Instructions — Repository-wide rules that apply everywhere. These live in files like `.github/copilot-instructions.md` or `AGENTS.md` and define the project's tech stack, coding conventions, commit message formats, and architectural principles. Think of these as the constitution of your codebase.
- Localized Instructions — Path-specific guidance placed near the code it describes. Files in `.github/instructions/` with glob patterns (e.g., `*.angular.instructions.md` for frontend code, `*.api.instructions.md` for backend routes) provide focused context without polluting the global scope.
- Prompt Templates — Reusable task templates in `.github/prompts/` that encode common workflows like creating a new API endpoint, adding a database migration, or writing integration tests. These are the playbooks that turn one-off instructions into repeatable processes.
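Taken together, the three layers can sit in a repository like this — a sketch in which the specific filenames under `instructions/` and `prompts/` are illustrative, not prescribed:

```
repo/
├── .github/
│   ├── copilot-instructions.md       # global: tech stack, conventions, commit format
│   ├── instructions/
│   │   ├── angular.instructions.md   # localized: scoped to frontend files via glob
│   │   └── api.instructions.md       # localized: scoped to backend routes via glob
│   └── prompts/
│       ├── new-endpoint.prompt.md    # template: scaffold a new API endpoint
│       └── add-migration.prompt.md   # template: add a database migration
└── src/
```

Each localized file declares which paths it applies to, so the agent loads only the guidance relevant to the files it is editing.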
Why IDD Works
- Separation of Concerns — Requirements and conventions live in documentation, not scattered across code comments. AI agents get the full picture without scanning thousands of lines of code.
- Version-Controlled Context — Instructions evolve with the codebase through git. When architecture changes, the instructions change in the same commit, keeping AI agents current.
- Team Alignment — The same instructions that guide AI agents also serve as living documentation for human developers. Onboarding a new team member becomes as simple as pointing them to the instruction files.
- Reduced Hallucination — When AI agents have explicit rules to follow, they generate fewer incorrect assumptions. The specificity of IDD instructions acts as a natural guardrail against model confabulation.
- Tool Agnosticism — IDD instructions work with GitHub Copilot, Claude Code, Cursor, and any future AI assistant. The methodology is about structuring knowledge, not about a specific tool.
Getting Started with IDD
Start by creating a `.github/copilot-instructions.md` file in your repository. Document your tech stack, folder structure, naming conventions, and testing requirements. Then identify your three most common development tasks and create prompt templates for them. Within a week, you should see AI-generated code that requires fewer corrections and follows your team's standards more consistently.
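As a starting point, a minimal global instructions file might look like the following — the stack and paths here are placeholders to adapt to your own project:

```markdown
# Project Instructions

## Tech Stack
- Frontend: Angular (apps/web)
- Backend: Node.js + Express (apps/api)
- Database: MongoDB via Mongoose

## Conventions
- TypeScript strict mode everywhere; no `any` without a justifying comment.
- Commit messages follow Conventional Commits (feat:, fix:, chore:).
- New API routes require an integration test in apps/api/tests.

## Architecture
- Frontend talks to the backend only through the shared API client in libs/api-client.
```

Keep it short and declarative: the file is read on every agent invocation, so every line should earn its place.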
I have built a complete reference implementation demonstrating IDD in practice: a Task Management monorepo with Angular, Node.js, and MongoDB that uses layered instruction files at every level. You can explore it at github.com/blitznihar/task-tracker-copilot-md.
IDD represents a shift from treating AI assistants as autocomplete engines to treating them as junior developers who need well-structured onboarding materials. The teams that master this paradigm will ship faster, with fewer bugs, and with code that is more consistent than anything achievable through manual review alone.