The primitives in this ecosystem are the building blocks you compose to turn a generic model into a team‑specific coding assistant: instruction files, skills, prompts, custom agents (and sub‑agents), and hooks. Think of them as layers: always‑on rules at the bottom, on‑demand capabilities on top, and automation wrapped around the lifecycle.


1. Instruction files: Persistent rules and context

Instruction files are Markdown configurations that Copilot automatically includes in context whenever it works in your repo, or in files matching a configured pattern.

  • They live alongside your code (for example a repo‑wide copilot‑instructions.md or scoped .instructions.md files) and often use glob patterns to target languages or folders.
  • You capture architecture decisions, coding standards, naming conventions, security constraints, and “how this codebase works” so the agent doesn’t guess.
  • File‑ or pattern‑scoped instructions let you tune behavior per domain (e.g., frontend vs. backend vs. infra scripts).
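A pattern‑scoped instructions file might look like the sketch below. The applyTo glob in the frontmatter is how these files target specific paths; the folder names and rules here are hypothetical examples, not part of any standard.

```markdown
---
applyTo: "src/frontend/**/*.tsx"
---

# Frontend conventions

- Use function components with hooks; never class components.
- All user-facing strings go through the i18n helper in src/i18n.
- Styling uses our shared Tailwind preset; do not add inline styles.
```

Because the glob scopes these rules to frontend files, a backend or infra change never pays the context cost of frontend conventions.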

Rationale: This is your “always‑on brain” for the codebase; you remove prompt repetition and make the agent opinionated in the same way your senior engineers are.


2. Skills: On‑demand specialized capabilities

Skills are folders (with SKILL.md) that define how to perform a specialized task, plus any helper scripts or examples.

  • SKILL.md contains YAML frontmatter (metadata) and instructions describing when and how to use the skill.
  • Copilot decides when to inject a skill into context based on the user’s request and the skill description—for example “debug input handling for this game” or “migrate legacy API calls.”
  • Skills are ideal for repeatable domain tasks: debugging patterns, migration playbooks, data‑access rules, or company‑specific frameworks.
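As a sketch, a SKILL.md for one of those migration playbooks could look like this. The name and description frontmatter fields are what the agent matches against when deciding to load the skill; the skill itself (and the mappings.md helper it references) is a hypothetical example.

```markdown
---
name: legacy-api-migration
description: Migrate calls from the v1 REST client to the v2 SDK.
  Use when the user asks to modernize, migrate, or replace legacy API calls.
---

# Legacy API migration

1. Find call sites of the v1 client (search for `legacyClient.`).
2. Map each endpoint to its v2 SDK method using mappings.md in this folder.
3. Preserve existing retry and timeout behavior; the v2 defaults differ.
4. Run the affected tests before reporting the migration as complete.
```

The description doubles as the trigger condition: it tells the model not just what the skill does, but when to reach for it.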

Rationale: Instructions describe global rules, while skills encode detailed procedures that are only loaded when relevant, keeping the context window efficient.


3. Prompts: Reusable slash‑command workflows

Prompt files define named prompts that appear as slash commands (e.g., /test, /document, /refactor) inside Copilot chat.

  • They bundle a task pattern, guidance, and sometimes specific tools into a reusable command your team can trigger instantly.
  • Typical uses: generate tests for the current file, summarize a diff, propose a refactor plan, or scaffold a feature implementation outline.
  • Prompts can be tailored per repo so their behavior reflects local conventions and dependencies.
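A minimal prompt file for the "generate tests" case might look like the sketch below, saved as something like test.prompt.md so it surfaces as /test in chat. The exact body is illustrative; the conventions it cites would be your own.

```markdown
---
description: Generate unit tests for the current file
---

Generate unit tests for the file currently open in the editor.

- Use the test framework and fixtures already present in tests/; mirror its naming.
- Cover the happy path, one edge case per public function, and error handling.
- Do not add new dependencies.
```

The payoff is consistency: everyone on the team invokes the same /test, so test generation reflects one agreed‑upon recipe instead of a dozen ad‑hoc phrasings.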

Rationale: Prompts are UX primitives for humans: they standardize how people ask for common operations, reducing prompt variability and making outcomes more predictable.


4. Custom agents and sub‑agents: Role‑based specialization

Custom agents are defined via agent config files (for example .agent.md under .github/agents) that describe a persona, its tools, and its behavior.

  • The frontmatter configures name, description, tools (built‑in tools and MCP servers), model, and where the agent is available.
  • The Markdown body defines its role, expertise, boundaries, and how it should respond—for example “Solution Architect,” “Security Reviewer,” or “Test‑first Implementer.”
  • These agents appear in the chat agent dropdown and can be invoked directly for tasks that match their specialization.
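Sketching the "Security Reviewer" example as an agent file: the frontmatter keys below (name, description, tools) follow the shape described above, but treat the exact field names and tool identifiers as assumptions to check against your platform's agent config reference.

```markdown
---
name: security-reviewer
description: Reviews proposed changes for security issues before merge
tools: ["read", "search"]
---

You are a security reviewer. Examine diffs for injection risks, hardcoded
secrets, unsafe deserialization, and missing input validation. You never
edit files; you report findings with file and line references and a
severity rating, and you say explicitly when a change looks safe.
```

Note how the body encodes boundaries ("you never edit files") as well as expertise; restricting the tool list enforces the same boundary mechanically.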

Sub‑agents are agents that run under an orchestrator agent to handle subtasks in parallel.

  • The orchestrator can delegate subtasks like planning, implementation, accessibility review, and cleanup to different agents, each working in its own context.
  • Only distilled results return to the orchestrator, preventing its context from being flooded with every intermediate step.

Rationale: This mirrors a real engineering team: you encode roles and responsibilities into agents, then let them collaborate while preserving clear separation of concerns and cleaner context windows.


5. Hooks: Lifecycle automation and policy enforcement

Hooks are shell commands that run at key lifecycle points of an agent session, registered in hook configuration files.

  • They can trigger on events like session start/stop, agent or sub‑agent start/stop, before or after a tool call, or before/after edits are applied.
  • Hooks receive JSON input describing what the agent is doing, and can decide to log, transform, veto, or augment actions (for example enforce formatting, run linters, or perform security checks before committing changes).
  • Output from hooks can influence whether the agent continues, rolls back, or adjusts its plan.
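The veto pattern above can be sketched as a small pre‑edit hook script. This is a minimal illustration, not the actual hook API: the JSON payload shape (a "file" key naming the edit target) and the nonzero‑exit‑means‑block convention are assumptions to verify against your runtime's hook reference.

```python
import json

# Paths the agent must never modify without human review (hypothetical policy).
PROTECTED = ("infra/prod/", ".github/workflows/")

def check_edit(event: dict) -> tuple[int, str]:
    """Decide whether a pending edit may proceed.

    Returns (exit_code, message); a nonzero exit code vetoes the action
    under the assumed hook convention.
    """
    path = event.get("file", "")
    if any(path.startswith(prefix) for prefix in PROTECTED):
        return 1, f"blocked: edits to {path} require human review"
    return 0, "ok"

# A hook runner would pipe the event JSON on stdin; here we parse a
# sample payload directly (its exact shape is an assumption).
event = json.loads('{"file": "infra/prod/main.tf", "tool": "edit"}')
code, message = check_edit(event)
print(message)  # prints: blocked: edits to infra/prod/main.tf require human review
```

Because the script only inspects the event and signals via exit code, the same shape extends naturally to the other uses mentioned above, such as running a linter after an edit and reporting failures back to the agent.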

Rationale: Hooks move important practices (lint, tests, security, approvals) from “please remember” into enforced automation, embedding your governance into the agent runtime itself.


6. How the primitives fit together

Taken together, these primitives give you a layered design:

  • Instruction files: stable background knowledge and guardrails.
  • Skills: contextual, task‑specific playbooks the agent loads when needed.
  • Prompts: ergonomic entry points for common user workflows.
  • Custom agents and sub‑agents: specialized roles and multi‑agent orchestration.
  • Hooks: lifecycle glue for automation, quality, and compliance.