Linux Kernel Policy — Official

AI Writes the Code.
You Own Everything.

The Linux kernel has merged its first official rules for AI-assisted code contributions — redefining the line between tool and author in ways every developer using AI agents needs to understand.

April 2026 / Documentation/process/coding-assistants.rst / DCO · Signed-off-by · Assisted-by
In Plain English — The Three Rules

What Actually Merged Into the Kernel

coding-assistants.rst is the kernel's first explicit policy document on AI tooling. It went through the normal RFC and review process — proposals by Dave Hansen and Sasha Levin preceded the final merge — and is now part of torvalds/linux main. It governs every AI-assisted patch submitted to the Linux kernel going forward.

Prohibited

AI Cannot Sign Off

"AI agents MUST NOT add Signed-off-by tags. Only humans can legally certify the Developer Certificate of Origin."

Required

Human Takes Full Ownership

The human submitter reviews all AI-generated code, ensures license compliance, and adds their own Signed-off-by to certify the DCO.

Should Include

Disclose with Assisted-by

Contributions should include an Assisted-by tag listing the agent name, model version, and specialized tools used.

The Assisted-by Tag — Deep Dive

The kernel doc defines a new commit trailer tag. Its format is precise and follows the same Key: Value convention as other kernel commit trailers like Reviewed-by and Tested-by.

Assisted-by Format (Documentation/process/coding-assistants.rst)
Assisted-by: AGENT_NAME:MODEL_VERSION [TOOL1] [TOOL2]

# Real examples from the documentation:
Assisted-by: Claude:claude-3-opus coccinelle sparse
Assisted-by: Copilot:gpt-4o clang-tidy
Assisted-by: Cursor:claude-sonnet-4-6 smatch sparse

What Gets Listed vs. What Doesn't

The document draws a clear line between specialized analysis tools that meaningfully contribute to code correctness — and the normal development environment every developer uses anyway.

Tool                                 In Assisted-by?   Why
coccinelle                           Yes               Semantic patch transformation — specialized static analysis
sparse                               Yes               Kernel-specific static analysis and type checking
smatch                               Yes               Source code pattern matching for bugs
clang-tidy                           Yes               Clang-based linting with kernel-relevant checks
git                                  No                Standard version control — baseline dev tooling
gcc / make                           No                Compiler and build system — implicit in any kernel work
Text editors (vim, emacs, VSCode)    No                Normal editing environment, not analysis tools

Assisted-by Tag Builder

A minimal tag, for a single agent with no specialized analysis tools, looks like this:

Assisted-by: Claude:claude-sonnet-4-6
Paste this into your commit message's trailer section — after the blank line following the patch description, alongside your own Signed-off-by.
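Composing the tag is plain string assembly; a minimal shell sketch (the agent, model, and tool names here are example values, not requirements):

```shell
# Compose an Assisted-by trailer from its parts (example values).
agent="Cursor"
model="claude-sonnet-4-6"
tools="smatch sparse"   # space-separated specialized tools; may be empty

# ${tools:+ ${tools}} expands to " ${tools}" only when tools is non-empty,
# so a tool-less tag carries no trailing space.
tag="Assisted-by: ${agent}:${model}${tools:+ ${tools}}"
echo "$tag"   # → Assisted-by: Cursor:claude-sonnet-4-6 smatch sparse
```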

The DCO and Why AI Can't Sign It

The Developer Certificate of Origin is a lightweight legal mechanism used by the Linux kernel and hundreds of other open-source projects. When you add Signed-off-by: Your Name <email> to a commit, you are certifying — under your own identity — one of the following (summarizing DCO 1.1):

  • (a) you created the contribution yourself and have the right to submit it under the project's open-source license; or
  • (b) it is based on previous work under an appropriate license, and you have the right to submit your modifications under that license; or
  • (c) it was provided to you by someone who certified (a), (b), or (c), and you have not modified it.

In every case you also certify (d): that you understand the contribution and your sign-off are public, and that a record of them is maintained indefinitely.

The Core Problem

An AI model cannot make any of these certifications. It has no legal identity, cannot own IP, cannot represent that it has the right to license its output, and cannot be held accountable for license violations. The entire DCO framework depends on a traceable, responsible human being at the end of the chain.

This is not a philosophical choice by the kernel community — it is a legal necessity. The moment an AI agent adds a Signed-off-by, the certification chain breaks. No one with legal standing has vouched for the code. The patch is, in a strict reading of the DCO, uncertified regardless of the AI agent's claimed confidence.

The Responsibility Chain

The kernel doc establishes a clear sequence that must occur before any AI-assisted contribution can be submitted. The AI is a tool — the human is the author, reviewer, certifier, and owner.

1. AI Agent: Code Generation

The AI tool generates, suggests, or significantly modifies code. At this point, the work has no legal status — it is raw output from a tool, not a certifiable contribution.

2. Human: Review Every Line

The human developer reviews all AI-generated code. Not a spot-check — a thorough review. Correctness, style, security, kernel-subsystem conventions. The reviewer is now accountable for what they read and accepted.

3. Human: Verify License Compatibility

Ensure the AI-generated code is GPL-2.0-only compatible (for the kernel). No permissive-license fragments, no copyleft traps, correct SPDX identifiers. AI tools are trained on vast corpora — provenance is not guaranteed.

4. Human: Add the Assisted-by Tag

Document which AI agent and model version were used, and which specialized analysis tools were involved. This creates a transparent audit trail of AI involvement without obscuring human authorship.

5. Human (Legal): Add Your Signed-off-by

Only after review and verification does the human add their Signed-off-by. This is the legal certification — your name, your responsibility, your email address on the record forever.

6. Human (Owner): Submit — and Own All Consequences

The contribution is submitted. From this point forward, the human is the legally responsible author. Bugs, regressions, security issues, license violations — they are yours. AI is not a defense.

What This Means for the Agent AI World

The Linux kernel's policy is not just a rule for kernel contributors. It is the world's most important open-source project formally codifying a position on AI agency — and it draws a sharp, non-negotiable line: autonomous agents cannot certify contributions to projects that use DCO.

This matters because the current trajectory of AI tooling is toward greater autonomy — agents that open pull requests, submit patches, create commits, and iterate without human intervention per-step. The kernel policy says: that autonomy stops at the DCO boundary. A human must be in the loop before any legally binding act of contribution.

Immediate Impact

  • Fully autonomous patch agents cannot submit to the kernel without human certification
  • Every AI-assisted commit requires a human review pass, not just a merge approval
  • AI coding tools need an explicit Assisted-by disclosure workflow
  • Existing AI-generated commits without disclosure are technically non-compliant
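A disclosure workflow like this can also be enforced mechanically. A hypothetical sketch, written as a shell function that a commit-msg hook or CI step might call (the function name and messages are invented, not from the kernel doc):

```shell
# Hypothetical pre-submit check: a message that discloses AI assistance
# must also carry a human Signed-off-by before it may be submitted.
check_ai_disclosure() {
    # $1: path to a commit message file
    if grep -q '^Assisted-by: ' "$1" && ! grep -q '^Signed-off-by: ' "$1"; then
        echo "error: AI-assisted commit lacks a human Signed-off-by" >&2
        return 1
    fi
    return 0
}

# Example: a message that discloses AI help but was never signed off.
msg=$(mktemp)
printf 'fix: example\n\nAssisted-by: Copilot:gpt-4o clang-tidy\n' > "$msg"
check_ai_disclosure "$msg" 2>/dev/null || echo "rejected"   # → rejected
```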

Watch This Space

  • Will npm, PyPI, and crates.io adopt analogous AI disclosure policies?
  • Can SBOM (Software Bill of Materials) standards evolve to track AI involvement?
  • How do MCP-based autonomous agents fit when they open PRs on behalf of humans?
  • Will DCO itself evolve to accommodate verified human-in-the-loop AI workflows?

The Authorship Gap

There is a fundamental mismatch between what AI agents can do (generate correct, compilable, well-structured code) and what they cannot be (a legally responsible author). The kernel policy formalizes this gap. As agents become more capable, the gap does not close — it widens, because the stakes of autonomous contribution rise. This is the new architectural constraint for agent AI systems operating in open-source ecosystems.

This also sets a precedent for how open-source governance bodies will respond to AI contributions. Rather than banning AI tooling — which would be unenforceable — the kernel community has chosen disclosure and human accountability as its governing framework. That is almost certainly the model other major projects will follow. Expect similar language in CNCF projects, Apache Foundation guidelines, and major package registry policies within the next 12–18 months.

Is My Contribution Compliant?

Four questions determine which obligations apply to your contribution.


1. Does the target project use DCO (Developer Certificate of Origin)? (Yes / No / Not sure)
2. Did an AI tool generate or significantly modify any part of this contribution? (Yes / No)
3. Have you personally reviewed every line of code that will be submitted? (Fully reviewed / Partially / Not yet)
4. Is this a contribution to the Linux kernel specifically? (Yes / No)
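How the four answers combine follows from the policy itself; a hypothetical shell sketch of the decision logic (the function name and messages are invented for illustration):

```shell
# Answers encoded as 1 = yes, 0 = no, in question order:
# uses_dco, ai_assisted, fully_reviewed, is_kernel.
compliance_advice() {
    uses_dco=$1 ai_assisted=$2 fully_reviewed=$3 is_kernel=$4
    if [ "$ai_assisted" -eq 0 ]; then
        echo "No AI involvement: normal contribution rules apply."
    elif [ "$fully_reviewed" -eq 0 ]; then
        echo "Stop: review every line yourself before submitting."
    elif [ "$uses_dco" -eq 1 ]; then
        echo "Add your own Signed-off-by; the agent must not sign."
        [ "$is_kernel" -eq 1 ] && echo "Disclose the agent with an Assisted-by tag."
    else
        echo "No DCO here, but disclosure is still good practice."
    fi
}

# Example: DCO project, AI-assisted, fully reviewed, kernel target.
compliance_advice 1 1 1 1
```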

How to Stay Compliant

The mechanics are simple. The discipline is not. Here is what a fully compliant AI-assisted kernel commit looks like:

Complete Compliant Commit Message

mm/vmalloc: fix off-by-one in vmap_area boundary check

When calculating the end boundary for a vmap area, the check used
a strict less-than comparison where less-than-or-equal was required.
This could cause the last page of the vmalloc range to be incorrectly
considered unavailable under high-memory-pressure conditions.

Identified with sparse and reviewed manually against the allocator
invariants in include/linux/vmalloc.h.

Assisted-by: Claude:claude-sonnet-4-6 sparse
Signed-off-by: Your Name <your.email@example.com>
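Before mailing a patch, the trailer block can be sanity-checked with git's own trailer parser; a small sketch (the msg.txt file is hypothetical):

```shell
# Work in a scratch directory and write the example message to a file.
cd "$(mktemp -d)" && git init -q
cat > msg.txt <<'EOF'
mm/vmalloc: fix off-by-one in vmap_area boundary check

When calculating the end boundary for a vmap area, the check used
a strict less-than comparison where less-than-or-equal was required.

Assisted-by: Claude:claude-sonnet-4-6 sparse
Signed-off-by: Your Name <your.email@example.com>
EOF

# --parse prints only the trailer block, one trailer per line,
# so both required trailers can be inspected at a glance.
git interpret-trailers --parse msg.txt
```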

Common Mistakes to Avoid

  • Letting the AI agent add a Signed-off-by tag; only a human can certify the DCO
  • Omitting the Assisted-by tag when an agent generated or significantly modified the code
  • Listing baseline tooling (git, gcc, your editor) in Assisted-by instead of specialized analysis tools
  • Spot-checking the AI's output instead of reviewing every line before submission
  • Skipping the GPL-2.0-only license-compatibility check on generated code
The Human Is Not Optional

The Linux kernel's AI policy is deceptively simple: use whatever tools help you write better code, but stand behind what you submit. The Signed-off-by is not a formality — it is the legal foundation of open-source contribution, and it requires a human being who can be held accountable.

For the agent AI world, this is a clarifying constraint. The question was never whether AI could write good kernel code — clearly it can help. The question was always who owns the output. The kernel community has answered: you do. Every time. Fully. The AI is a power tool, not a coauthor.

As AI agents become more autonomous and capable, this human-accountability requirement becomes more important, not less. The more an AI can do unsupervised, the more critical it is that a human with full understanding reviews and certifies the result. That is not a limitation of AI — it is the correct architecture for trusted, legally sound open-source development.