The Linux kernel has merged its first official rules for AI-assisted code contributions — redefining the line between tool and author in ways every developer using AI agents needs to understand.
Two rules sit at the core of the policy:

- No AI-added Signed-off-by tag: the Developer Certificate of Origin is a legal statement, and only a human can make it.
- An Assisted-by tag naming the AI agent, the model version, and any specialized analysis tools like coccinelle or sparse.
coding-assistants.rst is the kernel's first
explicit policy document on AI tooling. It went through the normal RFC and review process —
proposals by Dave Hansen and Sasha Levin preceded the final merge —
and is now part of torvalds/linux main.
It governs every AI-assisted patch submitted to the Linux kernel going forward.
"AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin."
The human submitter reviews all AI-generated code, ensures license compliance, and adds their own
Signed-off-by to certify the DCO.
Contributions should include an Assisted-by tag
listing the agent name, model version, and specialized tools used.
The kernel doc defines a new commit trailer tag. Its format is precise and follows the same
Key: Value convention as other
kernel commit trailers like Reviewed-by
and Tested-by.
```
Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]

# Real examples from the documentation:
Assisted-by: Claude:claude-3-opus coccinelle sparse
Assisted-by: Copilot:gpt-4o clang-tidy
Assisted-by: Cursor:claude-sonnet-4-6 smatch sparse
```
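The format is mechanical enough to generate and sanity-check with a few lines of code. The helper and regex below are a hypothetical sketch (not part of any kernel tooling) that follow the documented `AGENT_NAME:MODEL_VERSION [TOOL...]` shape:

```python
import re

# Hypothetical helper: build an Assisted-by trailer from its parts,
# following the documented "AGENT_NAME:MODEL_VERSION [TOOL...]" format.
def format_assisted_by(agent, model, tools=None):
    tag = f"Assisted-by: {agent}:{model}"
    if tools:
        tag += " " + " ".join(tools)
    return tag

# Loose sanity check for the same shape: a key, an AGENT:MODEL pair,
# then zero or more whitespace-separated tool names.
ASSISTED_BY_RE = re.compile(r"^Assisted-by: \S+:\S+( \S+)*$")

print(format_assisted_by("Claude", "claude-sonnet-4-6", ["smatch", "sparse"]))
# Assisted-by: Claude:claude-sonnet-4-6 smatch sparse
```

This is only a convenience sketch; the authoritative format is whatever the kernel documentation specifies.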
The document draws a clear line between specialized analysis tools that meaningfully contribute to code correctness — and the normal development environment every developer uses anyway.
| Tool | List in Assisted-by? | Why |
|---|---|---|
| coccinelle | Yes | Semantic patch transformation — specialized static analysis |
| sparse | Yes | Kernel-specific static analysis and type checking |
| smatch | Yes | Source code pattern matching for bugs |
| clang-tidy | Yes | Clang-based linting with kernel-relevant checks |
| git | No | Standard version control — baseline dev tooling |
| gcc / make | No | Compiler and build system — implicit in any kernel work |
| Text editors (vim, emacs, VSCode) | No | Normal editing environment, not analysis tools |
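The distinction in the table can be expressed as a simple classifier. The tool sets below are taken from the table, not from the kernel documentation itself, so treat this as an illustrative sketch; anything not listed should be decided by a human:

```python
# Sets taken from the table above (illustrative, not exhaustive).
SPECIALIZED_ANALYSIS_TOOLS = {"coccinelle", "sparse", "smatch", "clang-tidy"}
BASELINE_TOOLING = {"git", "gcc", "make", "vim", "emacs", "vscode"}

def should_list_in_assisted_by(tool):
    """True = list it, False = omit it, None = unknown, decide manually."""
    name = tool.lower()
    if name in SPECIALIZED_ANALYSIS_TOOLS:
        return True
    if name in BASELINE_TOOLING:
        return False
    return None  # not covered by the table: a human should decide
```

Returning `None` for unknown tools, rather than guessing, mirrors the policy's overall posture: when in doubt, a human makes the call.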
Why only a human can add a Signed-off-by.
The Developer Certificate of Origin is a lightweight legal mechanism used by the Linux kernel
and hundreds of other open-source projects. When you add Signed-off-by: Your Name <email>
to a commit, you are certifying, under your own identity, one of the following (paraphrasing DCO 1.1):

- (a) you created the contribution and have the right to submit it under the project's open-source license; or
- (b) it is based on previous, appropriately licensed work, and you have the right to submit it with your modifications under that license; or
- (c) it was provided to you by someone who certified (a), (b), or (c), and you have not modified it.
- In every case, (d): you understand that the contribution and your sign-off are public and will be maintained indefinitely.
An AI model cannot make any of these certifications. It has no legal identity, cannot own IP, cannot represent that it has the right to license its output, and cannot be held accountable for license violations. The entire DCO framework depends on a traceable, responsible human being at the end of the chain.
This is not a philosophical choice by the kernel community — it is a legal necessity. The moment an AI
agent adds a Signed-off-by, the certification chain
breaks. No one with legal standing has vouched for the code. The patch is, in a strict reading of the DCO,
uncertified regardless of the AI agent's claimed confidence.
The kernel doc establishes a clear sequence that must occur before any AI-assisted contribution can be submitted. The AI is a tool — the human is the author, reviewer, certifier, and owner.
1. Generate. The AI tool generates, suggests, or significantly modifies code. At this point, the work has no legal status — it is raw output from a tool, not a certifiable contribution.
2. Review. The human developer reviews all AI-generated code. Not a spot-check — a thorough review. Correctness, style, security, kernel-subsystem conventions. The reviewer is now accountable for what they read and accepted.
3. Verify licensing. Ensure the AI-generated code is GPL-2.0-only compatible (for the kernel). No permissive-license fragments, no copyleft traps, correct SPDX identifiers. AI tools are trained on vast corpora — provenance is not guaranteed.
4. Disclose. Document which AI agent and model version were used, and which specialized analysis tools were involved. This creates a transparent audit trail of AI involvement without obscuring human authorship.
5. Certify. Only after review and verification does the human add their Signed-off-by. This is the legal certification — your name, your responsibility, your email address on the record forever.
6. Submit. The contribution is submitted. From this point forward, the human is the legally responsible author. Bugs, regressions, security issues, license violations — they are yours. AI is not a defense.
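The ordering constraint in this sequence can be made concrete: the Signed-off-by line is only produced after the human steps are complete. The sketch below is illustrative, not real kernel tooling; the `Patch` type and field names are invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Patch:
    assisted_by: str            # e.g. "Claude:claude-sonnet-4-6 sparse"
    human_reviewed: bool = False
    license_verified: bool = False

def certify(patch, name, email):
    """Emit the trailer block only once the human gates have been passed."""
    if not (patch.human_reviewed and patch.license_verified):
        raise RuntimeError("human review and license check must precede Signed-off-by")
    return [
        f"Assisted-by: {patch.assisted_by}",
        f"Signed-off-by: {name} <{email}>",  # the human certification comes last
    ]
```

The point of the gate is structural: no code path produces a Signed-off-by without the review and license flags set, which is exactly the sequencing the kernel doc demands.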
The Linux kernel's policy is not just a rule for kernel contributors. It is the world's most important open-source project formally codifying a position on AI agency — and it draws a sharp, non-negotiable line: autonomous agents cannot certify contributions to projects that use DCO.
This matters because the current trajectory of AI tooling is toward greater autonomy — agents that open pull requests, submit patches, create commits, and iterate without human intervention per-step. The kernel policy says: that autonomy stops at the DCO boundary. A human must be in the loop before any legally binding act of contribution.
[Diagram: Assisted-by disclosure workflow]

There is a fundamental mismatch between what AI agents can do (generate correct, compilable, well-structured code) and what they cannot be (a legally responsible author). The kernel policy formalizes this gap. As agents become more capable, the gap does not close — it widens, because the stakes of autonomous contribution rise. This is the new architectural constraint for agent AI systems operating in open-source ecosystems.
This also sets a precedent for how open-source governance bodies will respond to AI contributions. Rather than banning AI tooling — which would be unenforceable — the kernel community has chosen disclosure and human accountability as its governing framework. That is almost certainly the model other major projects will follow. Expect similar language in CNCF projects, Apache Foundation guidelines, and major package registry policies within the next 12–18 months.
The mechanics are simple. The discipline is not. Here is what a fully compliant AI-assisted kernel commit looks like:
```
mm/vmalloc: fix off-by-one in vmap_area boundary check

When calculating the end boundary for a vmap area, the check used a
strict less-than comparison where less-than-or-equal was required.
This could cause the last page of the vmalloc range to be incorrectly
considered unavailable under high-memory-pressure conditions.

Identified with sparse and reviewed manually against the allocator
invariants in include/linux/vmalloc.h.

Assisted-by: Claude:claude-sonnet-4-6 sparse
Signed-off-by: Your Name <your.email@example.com>
```
A few details matter:

- Assisted-by comes before your Signed-off-by in the trailer block.
- Do not list git, gcc, or your editor. Only specialized analysis tools belong in Assisted-by.
- Use the exact model identifier, e.g. claude-sonnet-4-6, not "Claude Sonnet".
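These pitfalls are easy to lint for mechanically. The checker below is a hypothetical sketch, not an official kernel script; the baseline-tool list is taken from the table earlier in this article:

```python
# Illustrative baseline tools that should never appear in Assisted-by.
BASELINE = {"git", "gcc", "make", "vim", "emacs", "vscode"}

def check_trailers(message):
    """Return a list of problems found in a commit message's trailers."""
    problems = []
    lines = message.splitlines()
    assisted = [i for i, l in enumerate(lines) if l.startswith("Assisted-by:")]
    signed = [i for i, l in enumerate(lines) if l.startswith("Signed-off-by:")]
    # Assisted-by must precede Signed-off-by in the trailer block.
    if assisted and signed and min(signed) < max(assisted):
        problems.append("Assisted-by must come before Signed-off-by")
    for i in assisted:
        parts = lines[i].split()[1:]      # drop the "Assisted-by:" key
        if not parts:
            problems.append("empty Assisted-by trailer")
            continue
        if ":" not in parts[0]:
            problems.append("expected AGENT_NAME:MODEL_VERSION")
        for tool in parts[1:]:            # everything after AGENT:MODEL
            if tool.lower() in BASELINE:
                problems.append(f"baseline tool listed: {tool}")
    return problems
```

Running it over the example commit above should return an empty list; swapping the trailer order or adding `git` to the tool list would not.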
The Linux kernel's AI policy is deceptively simple: use whatever tools help you write better code, but stand behind
what you submit. The Signed-off-by is not a formality —
it is the legal foundation of open-source contribution, and it requires a human being who can be held accountable.
For the agent AI world, this is a clarifying constraint. The question was never whether AI could write good kernel code — clearly it can help. The question was always who owns the output. The kernel community has answered: you do. Every time. Fully. The AI is a power tool, not a coauthor.
As AI agents become more autonomous and capable, this human-accountability requirement becomes more important, not less. The more an AI can do unsupervised, the more critical it is that a human with full understanding reviews and certifies the result. That is not a limitation of AI — it is the correct architecture for trusted, legally sound open-source development.