
Add an LLM policy for rust-lang/rust#1040

Open
jyn514 wants to merge 49 commits into rust-lang:master from jyn514:llm-policy

Conversation

@jyn514
Member

@jyn514 jyn514 commented Apr 17, 2026


FCP link: this comment

Summary

This document establishes a policy for how LLMs can be used when contributing to rust-lang/rust. Subtrees, submodules, and dependencies from crates.io are not in scope. Other repositories in the rust-lang organization are not in scope.

This policy is intended to live in Forge as a living document, not as a dead RFC. It will be linked from CONTRIBUTING.md in rust-lang/rust as well as from the rustc- and std-dev-guides.

Moderation guidelines

This PR is preceded by an enormous amount of discussion on Zulip. Almost every conceivable angle has been discussed to death; there have been upwards of 3000 messages, not even counting discussion on GitHub. We initially doubted whether we could reach consensus at all.

Therefore, we ask that the scope of this PR be bounded to the policy itself. In particular, we mark several topics as out of scope below. We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

So, the following are considered off topic for this PR specifically:

  • Long-term social or economic impact of LLMs
  • The environmental impact of LLMs
  • Anything to do with the copyright status of LLM output. We have gotten initial confirmation from US lawyers that LLM output likely doesn't cause issues under US law. We still need to confirm this with EU lawyers and in other parts of the world.
  • Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules. For an extended rationale, please see this comment.

Feedback guidelines

We are aware that parts of this policy will make some people very unhappy. As you are reading, we ask you to consider the following.

  • Can you think of a concrete improvement to the policy that addresses your concern? Consider:
    • Whether your change will make the policy harder to moderate
    • Whether your change will make it harder to come to a consensus
  • Does your concern need to be addressed before merging or can it be addressed in a follow-up?
    • Keep in mind the cost of not creating a policy.

If your concern is for yourself or for your team

  • What are the specific parts of your workflow that will be disrupted?
    • In particular we are only interested in workflows involving rust-lang/rust. Other repositories are not affected by this policy and are therefore not in scope.
  • Can you live with the disruption? Is it worth blocking the policy over?

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

Motivation

  • Many people find LLM-generated code and writing deeply unpleasant to read or review.
  • Many people find LLMs to be a significant aid to learning and discovery.
  • rust-lang/rust is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
    • Having a policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is not intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs. It is only intended to set out the future policy of rust-lang/rust itself.

Ethical issues

See this thread.

Drawbacks

  • This bans some valid usages of LLMs. We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
  • This intentionally does not address the moral, social, and environmental impacts of LLMs. These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
  • This intentionally does not attempt to set a project-wide policy. We have attempted to come to a consensus for upwards of a month without significant progress. We are cutting our losses so we can have something rather than ad hoc moderation decisions.
  • This intentionally does not apply to subtrees of rust-lang/rust. We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

Rationale and alternatives

  • We could create a project-wide policy, rather than scoping it to rust-lang/rust. This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date. It has the disadvantage that we think it is nigh-impossible to get everyone to agree. There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
  • We could have different standards for people in the Rust project than for new contributors. That would make moderation much easier, and allow us to experiment with additional LLM use. However, it reinforces existing power structures, creates more of a gap between authors and reviewers, and feels "unfriendly" to new contributors.
  • We could have a more lenient policy that allows "responsible and appropriate" use of LLMs. This raises the question of what "responsible and appropriate" means. The usual suggestion is "self-review, and judging the change by the same standard as any other change"; but this neglects the reputational and social harm of work that "feels" LLM generated. It also makes our moderation policy much harder to understand, and increases the likelihood of re-litigating each moderation decision.
  • We could have a more strict policy that removes the threshold of originality condition. This has the advantage that our policy becomes easier to moderate and understand. It has the disadvantage that it becomes easy for people to intend to follow the policy, but be put in a position where their only choices are to either discard the PR altogether, rewrite it from scratch, or tell "white lies" about whether an LLM was involved.
  • We could have a more strict policy that bans LLMs altogether. It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.
  • We could have no policy at all. This avoids banning valid use cases; avoids implicitly legitimizing the use of LLMs by setting a policy; and saves all of us a great deal of time and effort. However, it greatly increases the baseline level of distrust within the project; makes review assignment even more of a dice roll than it is already; wastes a great deal of contributor and moderator time dealing with LLM-authored PRs on a case-by-case basis; and gives no guidance for people who really do want to use an LLM in a responsible way.

Prior art

This prior art section is taken almost entirely from Jane Lusby's summary of her research, although we have taken the liberty of moving the Rust project's prior art to the top. We thank her for her help.

Rust

Other organizations

These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.

  • full ban
    • postmarketOS - also explicitly bans encouraging others to use AI for solving problems related to postmarketOS - multi-point, ethics-based rationale with citations included
    • zig
      • philosophical, cites Profession (novella)
      • rooted in concerns around the construction and origins of original thought
    • servo
      • more pragmatic; directly lists concerns around AI; fairly concise
    • qemu
      • pragmatic, focuses on copyright and licensing concerns
      • explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it in any way
    • forgejo
      • bans AI for review, code, documentation, and communication
      • mentions "legal uncertainties" as a motivating factor
      • explicitly excludes machine translation
  • allowed with supervision, human is ultimately responsible
    • scipy
      • strict attribution policy including name of model
    • llvm
    • blender
    • linux kernel
      • quite concise but otherwise seems the same as many in this category
    • mesa
      • framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies: the author must understand the code they contribute. Seems to leave room for partial understanding from new contributors.

        Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.

    • firefox
    • ghostty
      • pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
    • fedora
      • clearly inspired many of the above policies and is cited by them, but is definitely framed more pro-AI than the derived policies tend to be
  • curl
    • does not explicitly require that humans understand contributions; otherwise the policy is similar to those above
  • linux foundation
    • encourages usage, focuses on legal liability, mentions that tooling exists to help automate managing legal liability, does not mention specific tools
  • In progress
    • NixOS (NixOS/nixpkgs#410741)

Unresolved questions

See the "Moderation guidelines" and "Drawbacks" section for a list of topics that are out of scope.

Rendered

@rustbot rustbot added the S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. label Apr 17, 2026
@rustbot
Collaborator

rustbot commented Apr 17, 2026

r? @jieyouxu

rustbot has assigned @jieyouxu.
They will have a look at your PR within the next two weeks and either review your PR or reassign to another reviewer.

Use r? to explicitly pick a reviewer

Why was this reviewer chosen?

The reviewer was selected based on:

  • Fallback group: @Mark-Simulacrum, internal-sites
  • @Mark-Simulacrum, internal-sites expanded to Mark-Simulacrum, Urgau, ehuss, jieyouxu
  • Random selection from Mark-Simulacrum, Urgau, ehuss, jieyouxu

@jyn514
Member Author

jyn514 commented Apr 17, 2026

@rustbot label T-libs T-compiler T-rustdoc T-bootstrap

@rustbot rustbot added T-bootstrap Team: Bootstrap T-compiler Team: Compiler T-libs Team: Library / libs T-rustdoc Team: rustdoc labels Apr 17, 2026
Comment thread src/policies/llm-usage.md Outdated
## Summary
[summary]: #summary

This document establishes a policy for how LLMs can be used when contributing to `rust-lang/rust`.
Subtrees, submodules, and dependencies from crates.io are not in scope.
Other repositories in the `rust-lang` organization are not in scope.

This policy is intended to live in [Forge](https://forge.rust-lang.org/) as a living document, not as a dead RFC.
It will be linked from `CONTRIBUTING.md` in rust-lang/rust as well as from the rustc- and std-dev-guides.

## Moderation guidelines

This PR is preceded by [an enormous amount of discussion on Zulip](https://rust-lang.zulipchat.com/#narrow/channel/588130-project-llm-policy).
Almost every conceivable angle has been discussed to death;
there have been upwards of 3000 messages, not even counting discussion on GitHub.
We initially doubted whether we could reach consensus at all.

Therefore, we ask to bound the scope of this PR specifically to the policy itself.
In particular, we mark several topics as out of scope below.
We still consider these topics to be important; we simply do not believe this is the right place to discuss them.

No comment on this PR may mention the following topics:

- Long-term social or economic impact of LLMs
- The environmental impact of LLMs
- Anything to do with the copyright status of LLM output
- Moral judgements about people who use LLMs

We have asked the moderation team to help us enforce these rules.

## Feedback guidelines

We are aware that parts of this policy will make some people very unhappy.
As you are reading, we ask you to consider the following.

- Can you think of a *concrete* improvement to the policy that addresses your concern? Consider:
  - Whether your change will make the policy harder to moderate
  - Whether your change will make it harder to come to a consensus
- Does your concern need to be addressed before merging or can it be addressed in a follow-up?
  - Keep in mind the cost of *not* creating a policy.

### If your concern is for yourself or for your team
- What are the *specific* parts of your workflow that will be disrupted?
  - In particular we are *only* interested in workflows involving `rust-lang/rust`.
    Other repositories are not affected by this policy and are therefore not in scope.
- Can you live with the disruption? Is it worth blocking the policy over?

---

Previous versions of this document were discussed on Zulip, and we have made edits in response to suggestions there.

## Motivation
[motivation]: #motivation

- Many people find LLM-generated code and writing deeply unpleasant to read or review.
- Many people find LLMs to be a significant aid to learning and discovery.
- `rust-lang/rust` is currently dealing with a deluge of low-effort "slop" PRs primarily authored by LLMs.
  - Having *a* policy makes these easier to moderate, without having to take every single instance on a case-by-case basis.

This policy is *not* intended as a debate over whether LLMs are a good or bad idea, nor over the long-term impact of LLMs.
It is only intended to set out the future policy of `rust-lang/rust` itself.

## Drawbacks
[drawbacks]: #drawbacks

- This bans some valid usages of LLMs.
  We intentionally err on the side of banning too much rather than too little in order to make the policy easy to understand and moderate.
- This intentionally does not address the moral, social, and environmental impacts of LLMs.
  These topics have been extensively discussed on Zulip without reaching consensus, but this policy is relevant regardless of the outcome of these discussions.
- This intentionally does not attempt to set a project-wide policy.
  We have attempted to come to a consensus for upwards of a month without significant progress.
  We are cutting our losses so we can have *something* rather than ad hoc moderation decisions.
- This intentionally does not apply to subtrees of rust-lang/rust.
  We don't have the same moderation issues there, so we don't have time pressure to set a policy in the same way.

## Rationale and alternatives
[rationale-and-alternatives]: #rationale-and-alternatives

- We could create a project-wide policy, rather than scoping it to `rust-lang/rust`.
  This has the advantage that everyone knows what the policy is everywhere, and that it's easy to make things part of the mono-repo at a later date.
  It has the disadvantage that we think it is nigh-impossible to get everyone to agree.
  There are also reasons for teams to have different policies; for example, the standard for correctness is much higher within the compiler than within Clippy.
- We could have a more strict policy that removes the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html) condition.
  This has the advantage that our policy becomes easier to moderate and understand.
  It has the disadvantage that it becomes easy for people to intend to
  follow the policy, but be put in a position where their only choices
  are to either discard the PR altogether, rewrite it from scratch, or
  tell "white lies" about whether an LLM was involved.
- We could have a more strict policy that bans LLMs altogether.
  It seems unlikely we will be able to agree on this, and we believe attempting it will cause many people to leave the project.

## Prior art
[prior-art]: #prior-art

This prior art section is taken almost entirely from [Jane Lusby's summary of her research](rust-lang/leadership-council#273 (comment)),
although we have taken the liberty of moving the Rust project's prior art to the top.
We thank her for her help.

### Rust
- [Moderation team's spam policy](https://github.com/rust-lang/moderation-team/blob/main/policies/spam.md/#fully-or-partially-automated-contribs)
- [Compiler team's "burdensome PRs" policy](rust-lang/compiler-team#893)
### Other organizations
These are organized along a spectrum of AI friendliness, where top is least friendly, and bottom is most friendly.
- full ban
  - [postmarketOS](https://docs.postmarketos.org/policies-and-processes/development/ai-policy.html)
    - also explicitly bans encouraging others to use AI for solving problems related to postmarketOS
    - multi-point, ethics-based rationale with citations included
  - [zig](https://ziglang.org/code-of-conduct/)
    - philosophical, cites [Profession (novella)](https://en.wikipedia.org/wiki/Profession_(novella))
    - rooted in concerns around the construction and origins of original thought
  - [servo](https://book.servo.org/contributing/getting-started.html#ai-contributions)
    - more pragmatic; directly lists concerns around AI; fairly concise
  - [qemu](https://www.qemu.org/docs/master/devel/code-provenance.html#use-of-ai-content-generators)
    - pragmatic, focuses on copyright and licensing concerns
    - explicitly allows AI for exploring APIs, debugging, and other non-generative assistance; other policies do not explicitly ban this or mention it in any way
- allowed with supervision, human is ultimately responsible
  - [scipy](https://github.com/scipy/scipy/pull/24583/changes)
    - strict attribution policy including name of model
  - [llvm](https://llvm.org/docs/AIToolPolicy.html)
  - [blender](https://devtalk.blender.org/t/ai-contributions-policy/44202)
  - [linux kernel](https://kernel.org/doc/html/next/process/coding-assistants.html)
    - quite concise but otherwise seems the same as many in this category
  - [mesa](https://gitlab.freedesktop.org/mesa/mesa/-/blob/main/docs/submittingpatches.rst)
    - framed as a contribution policy rather than an AI policy; AI is listed as a tool that can be used, but the same requirement applies: the author must understand the code they contribute. Seems to leave room for partial understanding from new contributors.
        > Understand the code you write at least well enough to be able to explain why your changes are beneficial to the project.
  - [forgejo](https://codeberg.org/forgejo/governance/src/branch/main/AIAgreement.md)
    - bans AI for review; does not explicitly require contributors to understand code generated by AI.
      One could interpret the "accountability for contribution lies with contributor even if AI is used" line as implying this requirement, though their version seems poorly worded in my opinion.
  - [firefox](https://firefox-source-docs.mozilla.org/contributing/ai-coding.html)
  - [ghostty](https://github.com/ghostty-org/ghostty/blob/main/AI_POLICY.md)
    - pro-AI but views "bad users" as the source of issues with it and the only reason for what ghostty considers a "strict AI policy"
  - [fedora](https://communityblog.fedoraproject.org/council-policy-proposal-policy-on-ai-assisted-contributions/)
    - clearly inspired many of the above policies and is cited by them, but is definitely framed more pro-AI than the derived policies tend to be
- [curl](https://curl.se/dev/contribute.html#on-ai-use-in-curl)
  - does not explicitly require that humans understand contributions; otherwise the policy is similar to those above
- [linux foundation](https://www.linuxfoundation.org/legal/generative-ai)
  - encourages usage, focuses on legal liability, mentions that tooling exists to help automate managing legal liability, does not mention specific tools
- In progress
  - NixOS
    - NixOS/nixpkgs#410741

## Unresolved questions
[unresolved-questions]: #unresolved-questions

See the "Moderation guidelines" and "Drawbacks" section for a list of topics that are out of scope.
Member

@jieyouxu jieyouxu left a comment


I really like this version, and thanks a ton for working on it. Specifically:

  • It doesn't try to dump entire walls of text, which is unfortunately a good way to ensure nobody reads it. Instead, it gives concrete examples and a guiding rule of thumb for uncovered scenarios, and acknowledges upfront that it surely cannot be exhaustive.
  • I also like where it points out the nuance and recognizes the uncertainties.
  • I like that it covers both "producers" and "consumers" (with nuance that reviewers can also technically use LLMs in ways that are frustrating to the PR authors!)

I left a few suggestions / nits, but even without them this is still a very good start IMO.

(Will not leave an explicit approval until we establish wider consensus, which likely will take the form of 4-team joint FCP.)


@ChayimFriedman2

The links to Zulip are project-private, FWIW.

@jyn514
Member Author

jyn514 commented Apr 17, 2026

The links to Zulip are project-private, FWIW.

I'm aware. This PR is targeted towards Rust project members moreso than the broad community.

Member

@davidtwco davidtwco left a comment


I'm happy with this as an initial policy for the rust-lang/rust repository.


@rust-lang rust-lang locked as too heated and limited conversation to collaborators May 15, 2026
@jyn514
Member Author

jyn514 commented May 15, 2026

I'm about to close my browser for the day because I'm absolutely exhausted, but I would like to say one last thing.

I recognize that there is a power imbalance here. I recognize that people outside the project feel helpless to influence decisions because they don't have commit rights, and that you're all commenting here in violation of the moderation policy because you don't have any idea how else to make your voice heard. I'm sorry about that. I don't like the way that software is built and published, and that includes Rust.

I don't have any solutions here. All I can do is offer to listen. You can find my email on my website, and I promise if you email me there I will hear you out.

@jackh726
Member

jackh726 commented May 15, 2026

@rust-rfcbot resolve restrictive-but-not-time-limited

The additional section on experimental LLM usage resolves my concern. Although there is no "time limit" or other automatic criteria for this policy expiring, I also think that we're going to be hard-pressed to find a better policy anytime soon. I am fairly confident that the discussion is not over after this policy is merged, and that discussion will only get more refined as we gain experience and evidence.


On some specific points raised since my last comments:

re. "committing the Project": yes, I do think it does in some sense, but I also think this is well-balanced enough with enough stakeholders that introducing extra process to this (by bringing in the LC) isn't going to help anything. I would encourage the LC to talk about this and give thoughts about any concrete ways to improve the policy - I'm sure jyn would do a great job incorporating them.

re. the policy being long and hard to follow (Niko's comment): Yeah, I understand this. I think it is a bit long, and can be hard to follow. I also think that's just the state of the conversation right now and the nuance we have to draw. I would love to find ways to summarize this better, give easier-to-grasp guidelines, etc. after this is merged (and I think that's okay).

re. experimental usage: I think we can figure out in practice whether the conditions/process here make sense or not. I personally would love for us to have a better intake system for LLM-generated code than "you need to find a reviewer first" - but I practically just don't know how to make that happen right now, both due to limited review capacity and the technical logistics of it. I think the condition of "no soundness-critical code" is largely left to the discretion of the reviewer here - but I think the guidelines here are good. It'll help, in my opinion, that the changes should be talked about ahead of time with the reviewer.

@nikomatsakis
Contributor

nikomatsakis commented May 15, 2026

I've been thinking over this policy. What I like about it is that it moves slowly and carefully, leaving room for LLM-authored content while making the default negative.

However, I think what really bothers me about it is that it signals such a low trust in our fellow reviewers. I don't like these complex lists of rules about when you can and can't use an LLM, what kind of code can and can't be written with it, etc. I think if we are going to allow some folks to review LLM-authored content, let's just do it, and assume that we can all talk to one another if we find the standard of review doesn't seem high enough.

I spent a few minutes that I should've been spending preparing for RustWeek banging out a draft of an alternative framing of this policy that I think captures its spirit in a higher-level, simpler way:

https://hedgedoc.nikomatsakis.com/s/uQAxCxrrS

I don't know how I feel 100% about that policy as I wrote it, but I think I could live with it.

Clarification: I don't know what the "details" of the experimentation policy should be for how LLM-content gets reviewed. I just think that there should be a group of people who are experimenting with the best option. It might be that initially unsolicited LLM-authored PRs are closed and the author is referred to Zulip. It might be that they are assigned someone from a separate pool. I don't think we have to overspecify it here.

@jyn514
Member Author

jyn514 commented May 16, 2026

No comment on this PR may mention the following topics:

Why is that not allowed?

Because we've already talked about it over and over and over and over. Believe me, every point about those that's been raised on this PR and on Fedi has been already discussed at length and in exhausting detail. We came to the conclusion that the people with ethical concerns will not be satisfied by anything less than a full ban, and we do not have a consensus to pass a full ban. So reiterating the concerns just makes people mad at each other without helping anything.

I've seen the sentiment a few times "I would rather the Rust project lose people than start using LLMs". I don't agree with it but I can understand it. I ask you this though: Would you rather have no Rust project than a Rust project that uses LLMs? Would you rather have no policy at all?

@rust-lang rust-lang unlocked this conversation May 16, 2026
@dballesg

"people with ethical concerns will not be satisfied by anything less than a full ban"

Obvious. Those ethical concerns come from people who have strong ethical values, and who share deep concerns about the inequality created and amplified by these "technologies" worldwide.

Not talking about the elephant in the room doesn't make it disappear.

I would prefer Rust to disappear if it is going to allow non-working, non-functional code that will plague it with thousands of bugs.

BTW wasn't one of the reasons to create Rust to have a safe, almost bugless programming language? 🤔

Whoever says LLMs work to generate working code is lying and they know it. So full ban or nothing.

@anotak

anotak commented May 16, 2026

Hi, longtime Rust user here, not frequently interacting with the development, but I have read along from time to time over the years. I read the LLM policy and the feedback guidelines.

I think the very fact that it was felt necessary to have the moderation policy in place for this discussion should give serious pause on allowing any LLM contributions at all, even with the caveats in the policy. Putting aside all the actual banned topics, the fact that it comes with so much baggage itself creates so much social friction. That friction will harm the larger Rust community. The fact that folks who have never interacted with Rust development are here and upset should indicate how deeply serious this is for the stability of the community. I think that friction will be much more costly than the benefits of allowing LLM-authored code, even with the caveats in place in the current PR.

The decisions here have a top-down effect on the larger culture of Rust. They should not be viewed in isolation of just the language itself. The language and compiler development team sets a lot of the cultural norms.

@ismay-tue

No comment on this PR may mention the following topics:

Why is that not allowed?

Because we've already talked about it over and over and over and over. Believe me, every point about those that's been raised on this PR and on Fedi has been already discussed at length and in exhausting detail. We came to the conclusion that the people with ethical concerns will not be satisfied by anything less than a full ban, and we do not have a consensus to pass a full ban. So reiterating the concerns just makes people mad at each other without helping anything.

I've seen the sentiment a few times "I would rather the Rust project lose people than start using LLMs". I don't agree with it but I can understand it. I ask you this though: Would you rather have no Rust project than a Rust project that uses LLMs? Would you rather have no policy at all?

Thank you for explaining, because I was genuinely interested. So if I understand you correctly, people had strong opposing views wrt. LLMs and the debate got rather intense. That can be difficult.

I would personally recommend not shying away from including all aspects of this debate in your decision-making process, though. I think that the only way you'll get people to understand each other, and be able to work with each other, is to make everyone involved feel (and be) heard. That is a difficult process to get right.

If such a process is causing conflict, then I would recommend involving people who are experienced at guiding such processes. What you are dealing with is not just an engineering problem, but a social process as well. So rather than whittling it down to a pure engineering issue, engage with the social aspect as well. If that is not going well, then please involve people experienced in doing so, because otherwise this debate will keep having an effect on your community.

I'll bow out of this issue, but I hope that that helps and I sincerely hope that you'll be able to continue this process in a productive, inclusive manner. I hope you didn't mind me commenting, but I've personally seen a lot of organisations affected by engineers who are uncomfortable with tackling difficult social dynamics. That is understandable, but know that there are entire professions dedicated to dealing with these types of situations in a productive, professional manner (i.e. organisational psychologists, etc.).

@jyn514
Member Author

jyn514 commented May 16, 2026

I would personally recommend not shying away from including all aspects of this debate in your decision-making process, though. I think the only way you'll get people to understand each other, and be able to work with each other, is to make everyone involved feel (and be) heard.

The problem here is who counts as "everyone". I think everyone in the project does feel at least heard at this point, even if they don't like the policy or don't feel represented. But coming to a consensus between everyone in the world is simply not possible. At some point you have to pick and choose who gets a say.

We chose (we have chosen, we will continue to choose) people in the project as the ones who get a say. We want to hear input from the outside community to make sure we aren't isolating ourselves, but that's not the same as letting them make decisions for us. @alice-i-cecile says this very well in this comment.

The moderation guidelines were written with project members in mind, and I checked with anti-AI people like @jdonszelmann beforehand that they were ok with introducing those restrictions. Because it was discussed beforehand, I didn't think to include a rationale for it. I have now done so, you can read it here.

@TheLastZombie

Would you rather have no Rust project than a Rust project that uses LLMs?

I mean, I'd kinda prefer the former. If the only way for a project to exist was by using an inherently harmful and fascist technology, it's worth taking a step back and asking whether the pros really still outweigh the (innumerable) cons.

@nnethercote

Some thoughts I have had while reading these comments.

  • The Rust project currently has almost no restrictions on LLM use at all.
  • This policy would put significant restrictions on LLM use.
  • The Rust project has members with a wide range of opinions on LLM usage, from very negative to very positive.
  • The Rust project has a decision making process that is heavy on consensus. There is no benevolent dictator.
  • "Politics is the art of the possible."

Comment thread src/policies/llm-usage.md
Comment on lines +198 to +201
### The meaning of "originally created"

This document uses the phrase "originally created" to mean "text that was generated by an LLM (and then possibly edited by a human)".
No amount of editing can change how it was originally created; how it was generated sets the initial style and it's very hard to change once it's set.
Contributor

@traviscross commented May 16, 2026


I notice that the policy does not specifically mention LLM-backed autocompletion anywhere. I suspect this may lead to confusion later. Editors are now willing to fill in entire functions and files (and safety comments, other documentation, etc.), but people seem to mentally bucket this in a different category.

Given the intent of the policy, I'd expect it might want to say something along the lines of:

This includes use of editor autocompletion when that feature is implemented using an LLM (check the documentation). Once you accept a nontrivial autocompletion, the work is "originally created" by an LLM and no amount of editing (other than deleting it entirely) can remove that.

I'm not sure where you'd want to put this.


Member


I had a different conclusion: "autocompletion" usually doesn't meet the bar of originality and is thus usually allowed.

If it's "nontrivial", sure. But at least in my experience, I almost never get those.



Yeah, I think it's fair to clarify that the main bar is whether any actual functional code was written, not just syntax. The example I like to use is implementing Iterator for &mut I; it's a massive wall of text, but it's not exactly intellectually thrilling work, and honestly it should probably even be autocompletable by R-A and not even require an LLM to figure that out.

"Threshold of originality" is an okay thing to defer to since it's well-understood for copyright, but ultimately however that gets worded, I think it's where we should be aiming for. Anything else isn't worth litigating.



I have added comments elsewhere suggesting we should avoid relying on copyright terminology such as "threshold of originality." The concern here is that it risks a circular conclusion by treating all LLM-generated contributions as trivial by definition.



I don't think that's the case since all of this is going to be relative to moderator arbitration anyway, and the mods know what the rule actually means. But we can discuss that particular point in the other thread you opened about it.

For here, I think that the point about "auto-completed" code, from the context of code that is mostly syntax boilerplate and thus would be exactly the same regardless of who wrote it, is reasonable to allow and not litigate regardless of the tool being used to do it.

Comment thread src/policies/llm-usage.md
Comment on lines +218 to +223
- A simple majority of members of teams using rust-lang/rust.
- No outstanding concerns from those members.

This policy can be dissolved in a few ways:

- An accepted FCP by teams using rust-lang/rust.
Contributor

@traviscross commented May 16, 2026


Suggested change (current text → suggested text):

Current:
- A simple majority of members of teams using rust-lang/rust.
- No outstanding concerns from those members.
This policy can be dissolved in a few ways:
- An accepted FCP by teams using rust-lang/rust.

Suggested:
- A simple majority of members of teams using rust-lang/rust and that have ratified the policy.
- No outstanding concerns from those members.
This policy can be dissolved in a few ways:
- An accepted FCP by teams using rust-lang/rust and that have ratified the policy.

Something like this might help in being more clear about which teams need to be involved in these later actions.


@The-personified-devil

As a Rust user I very much appreciate this step in the right direction, and I just wanted to make sure it's not only negative voices being heard, even though I don't have much to say on policy specifics.

Though I sincerely hope that those in this thread saying Rust shouldn't exist if it can't be fully ethical apply the same moral rigor to most other things in their lives.

@SoNick

This comment was marked as low quality.

Comment thread src/policies/llm-usage.md
- Using machine-translation (e.g. Google Translate) from your native language without posting your original message.
Doing so can introduce new miscommunications that weren't there originally, and prevents someone who speaks the language from providing a better translation.
- ℹ️ Posting both your original message and the translated version is always ok, but you must still disclose that machine-translation was used.
- "Trivial" code changes that do not meet the [threshold of originality](https://fsfe.org/news/2025/news-20250515-01.en.html).

@xtqqczze commented May 16, 2026


I do not think "threshold of originality" is the right concept here. That standard concerns whether a work reflects the author’s own free and creative choices, which cannot meaningfully apply to LLM output. If the intent is to exclude mechanically trivial changes, the policy should describe that directly instead of relying on copyright terminology.



@clarfonthey commented May 16, 2026


I disagree that this is the case, since I've pretty much always interpreted "threshold of originality" to additionally mean that something is unique and substantial enough to be copyrightable.

But would be interested if you have any particular examples of it being used in this context, since if this is a particularly decided matter and that's just not clear, it would be helpful to know.

Side note: I thought that the Foundation's policy used the term, but no, what they actually use is this:

*Materially, in this case, refers to incidences where the AI fundamentally generated, structured, and/or substantively influenced the output, beyond just editorial.

I feel like at least one of the policy examples I sifted through did use the term, but I would be more willing to assume the legal definition was used correctly by someone like the Foundation, which almost certainly had legal counsel review it before publishing. Since they didn't use the term, you might be right; I just don't know what the true case is here.

Edit: a quick ctrl-F for "originality" in all the policies that were mentioned turns up nothing, so, maybe this was the only place where it was mentioned.

Comment thread src/policies/llm-usage.md
Your contributions are your responsibility; you cannot place any blame on an LLM.
- ℹ️ This includes when asking people to address review comments originally created by an LLM. See "review bots" under ⚠️ above.

### The meaning of "originally created"

@xtqqczze commented May 16, 2026


Suggested change (current heading → suggested heading):

Current: `### The meaning of "originally created"`
Suggested: `### The meaning of "originally generated"`

I am still not comfortable with the use of "originally created" throughout this document, as it seems to imply that LLM output can itself be a creative work or meet a threshold of originality.

I think "originally generated" would be more precise and neutral terminology, and would avoid that implication.


@reillypascal

LLMs are non-deterministic; the issue of “hallucinations” is not solved, and it does not look like it will be solved anytime soon, if ever. If we don't trust humans to be perfectly vigilant with manual memory management, it doesn't make sense to expect them to catch every single machine-generated error either.

@clarfonthey

clarfonthey commented May 16, 2026

Just wanna add some additional context to the folks who are coming here because of the various places that have uncritically shared quotes from this thread without context:

This thread was meant to be mostly kept among team members to get a policy out to fix the status quo of nobody knowing what to do with LLM contributions. You might think that the default without a policy is to accept all LLM contributions, or reject all LLM contributions, but no, the answer is the third option that everyone hates: having absolutely no idea what to do with anything.

(If you think there are other problems: I agree! But this thread isn't about them.)

We all agree that objective slop is bad, but there are plenty of things that are explicitly not slop, and there have also been several cases where contributors don't necessarily know what they're doing and we want to help. The current situation is very confusing because the popularity of LLMs and the lack of communication about them have amplified the difficulty of working with new contributors, and while some cases are extremely easy to call out as bad, there are many others where we have absolutely no idea what to do, and that is very difficult to deal with.

Just want to let everyone know that, hey, if you're worried about things, this is not the place to discuss it.

I've personally objected to some of the wording that people have been pointing out as bad; ultimately, it was not meant to be shared so widely outside of the project. If you got here from one of the various places doing that, you should maybe talk to the people who shared it and call them out on it.

If you don't know what the situation is, please feel free to ask about it in any number of places that is not here. Similarly, simply stating your opinions on LLMs here is entirely unhelpful and will not affect anything. I hesitate to direct everyone to alternative avenues because I think that most of the people participating were not actually interested in the boring stuff that's actually going on and just wanted to say something to stop something they thought was happening, but actually isn't.

Many team members are happy to individually discuss things outside of this thread to help clarify the situation for people who don't know it, and we would very strongly appreciate it if others who have made themselves aware of the situation would help out too. The uncritical sharing of quotes from this thread out of context resulted in some truly horrendous stuff happening yesterday. You should know that the internet seeks out the lowest common denominator, so maybe try to promote a critical analysis of the situation instead. Because even if you're not the kind of person who would do something awful because of a poor understanding of the situation, someone else might be, and we want to foster a community that explicitly rejects that.

Thanks. Sorry this is so vague; I feel like adding additional details would just hurt the situation. Those who know where to find out more can, but this isn't the place.


Labels

I-council-nominated Nominated for discussion during a council meeting. needs-fcp This change is insta-stable, so needs a completed FCP to proceed. S-waiting-on-review Status: Awaiting review from the assignee but also interested parties. T-bootstrap Team: Bootstrap T-compiler Team: Compiler T-libs Team: Library / libs T-rustdoc Team: rustdoc T-types Team: Types
