
Project-wide LLM policy #3959

Open

clarfonthey wants to merge 9 commits into rust-lang:master from clarfonthey:llm-policy

Conversation

Contributor

@clarfonthey clarfonthey commented May 4, 2026


Preface

A lot of discussion has occurred in private about the topic of LLM policy, and while some of that context has been included in the prior art, most of it is intentionally omitted here.

To keep things focused on policy, there are two broad categories of comments we'd like to request you avoid:

  • Simply stating your viewpoint on LLMs, even if you provide reasons. While these arguments can be useful for the RFC, they are better worded as explicit suggestions to specific areas of the RFC, rather than as just general comments.
  • Debating others' viewpoints on LLMs, or calling others out on their viewpoints. Again, we would rather these be focused on specific policy points and the merits of those points, than just general bickering.

In general, defer to the code of conduct.

This RFC is long, and most of it is the sections surrounding and justifying the actual policy, rather than the policy itself. You are both welcome to and encouraged to skip around using the outline feature on GitHub. (In the rendered view, this is the bulleted list button on the top-right of the file view.)

While I have received a lot of feedback that the RFC is perhaps too long, many of the sections that might be considered for removal constitute important arguments that have already come up. If you want to comment on length alone, I would appreciate it if that feedback were directed at simplifying the text of the policy itself, rather than at removing sections of motivation. Similarly, if you feel a particular argument could be expanded or added, feel free to mention that as well.

In general, I know that this RFC is going to be exhausting, and the discussion before it has been too. Removing details is not really going to help, although I'd be happy to accept any feedback on revising the text of these sections to be shorter without losing any meaning.

Important

Since RFCs involve many conversations at once that can be difficult to follow, please use review comment threads on the text changes instead of direct comments on the RFC.

If you don't have a particular section of the RFC to comment on, you can click on the "Comment on this file" button on the top-right corner of the diff, to the right of the "Viewed" checkbox. This will create a separate thread even if others have commented on the file too.

Existing policies

Right now, there are a few concurrent policy decisions:

This RFC intends to supersede the two existing policy RFCs, but it intentionally does not supersede the current policy scoped to rust-lang/rust; that policy is free to be merged before this one, as its original goal was to put a policy in place before an RFC was accepted. In fact, even after an RFC is accepted, that policy can still be merged, since updating the policies everywhere takes time, and getting a policy out immediately is still a net benefit.

Once an RFC is accepted, things can be adjusted for consistency.

Summary

This RFC proposes a strict policy regarding generative Artificial Intelligence (AI) models, specifically Large Language Models (LLMs), and their use within the rust-lang organization. It aims to minimize the harm done by LLMs by reducing both the extent they're used and the control they're given over the Rust project. The policy can be summarized in the following checklist with terms that will be defined throughout the RFC:

  1. If the LLM usage is trivial, it is completely ignored by the policy and always allowed. Generally, this means that changes made by LLMs are indistinguishable from those made by humans, where the LLM didn't have any creative input into the change.
  2. If the LLM usage is slop, it is considered spam and moderated accordingly. Generally, this means submitting changes made by LLMs with minimal human intervention.
  3. Any potentially non-trivial LLM usage must be disclosed, ideally in as detailed a manner as possible. This may necessitate additional tooling to notify new contributors about the policy and explain how disclosure works.
  4. If a contributor does not fully understand the code they submit, their contribution may be rejected for that reason alone. Note that such usage is not always considered slop, and is considered separately. (For example, they may understand a large portion, but not all of it, which shows that they still put in a lot of effort.)
  5. If a user is found to be repeatedly lying about LLM usage (using LLMs in a non-trivial way without disclosing that usage), this is a COC violation that will be moderated accordingly.
  6. As long as users are honest, follow the COC otherwise, and respect boundaries, the worst punishment for non-trivial LLM usage is the rejection of a contribution.

In terms of additional tooling for disclosure, this RFC encourages the creation of a bot that automatically replies to contributions from new users informing them of the LLM policy and what constitutes sufficient disclosure. As mentioned, in general, going into as much detail as possible (e.g. prompts used, etc.) is preferred, but not always required. The RFC leaves the exact details of such implementation unspecified and up for revision later.

Rendered

@clarfonthey clarfonthey added the T-leadership-council Relevant to the Leadership Council, which will review and decide on this RFC. label May 4, 2026
Comment thread text/3959-llm-policy.md
Comment on lines +611 to +612
* Neurodivergent authors tend to replicate the "terseness" of many LLMs, and often show up as false positives in LLM detection
* Kenyan authors, many of whom helped filter the data for LLMs, often show up as false positives in LLM detection
Contributor Author

@clarfonthey clarfonthey May 4, 2026


FWIW, while I've tried to cite everything across the RFC, this section was added after the first draft and I knew both of these had citations, but had a lot of trouble finding explicit articles about it. So, if you happen to find sources for these, I'd be happy to update the RFC to include them.


Contributor


When reading that section, I thought back to this article: https://marcusolang.substack.com/p/im-kenyan-i-dont-write-like-chatgpt.

Contributor Author


(To be clear, I do think this blog post is sufficient evidence for the second point, although I'm mostly leaving this open to remind myself to look for other sources too.)

Comment thread text/3959-llm-policy.md Outdated

1. If the LLM usage is *trivial*, it is completely ignored by the policy and always allowed. Generally, this means that changes made by LLMs are indistinguishable from those made by humans, where the LLM didn't have any creative input into the change.
2. If the LLM usage is *slop*, it is considered spam and moderated accordingly. Generally, this means submitting changes made by LLMs with minimal human intervention.
3. *Nontrivial* LLM usage must be *disclosed* in ideally as detailed as a manner as possible. This may necessitate additional tooling to notify new contributors about the policy and explain how disclosure works.

@bushrat011899 bushrat011899 May 4, 2026


In PRs I've submitted to rust and cargo, I include a single sentence at the end usually of the form:

No AI tooling of any kind was used during the creation of this PR.

I think including an AI disclosure prompt/template in the GitHub PR template is a simple way to cover off disclosure while allowing the author the freedom to go into as much or as little detail as they feel is appropriate.
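
As a concrete illustration, a hypothetical disclosure section in `.github/PULL_REQUEST_TEMPLATE.md` might look something like the sketch below. The section name and wording here are purely illustrative, not text from the RFC or an existing template:

```markdown
<!-- Hypothetical sketch of an LLM disclosure section for
     .github/PULL_REQUEST_TEMPLATE.md; wording is illustrative only. -->
## LLM / AI tooling disclosure

- [ ] No AI tooling of any kind was used during the creation of this PR.
- [ ] AI tooling was used only trivially (autocomplete, spell-checking, etc.).
- [ ] AI tooling was used in a potentially non-trivial way. Describe below which
      tools were used, for which parts of the change, and ideally the prompts involved.

(Details of your disclosure go here.)
```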


Contributor Author

@clarfonthey clarfonthey May 4, 2026


Yeah, having a section on a PR template just to fill in LLM disclosure is a good idea, and that's one of the multiple options that could be used for tooling.

This is actually the one difference between the summary for the RFC and the one proposed for inclusion in the actual policy: for the RFC, the tooling is an important point since it doesn't exist yet, but for the final policy, the goal is for that tooling to already exist.

I mostly leave the details out of the RFC since they're almost certainly going to be done via experimentation, but this is a pretty easy one to start with.

Comment thread text/3959-llm-policy.md

While there are arguments to be made about some models having less power consumption, it ultimately doesn't matter if they fundamentally require operation based upon brute force. As hopefully any programmer even vaguely educated on complexity knows, brute force is *the worst* way to solve any problem, and should always be used as a last resort. LLMs put brute force front and center as the best option.

With code, we have methods to bound operation: for example, a famous case is sorting algorithms based upon quicksort, a quadratically-bounded algorithm, which fall back to some explicitly-optimal method like heapsort or merge-sort. Rust uses these algorithms in its standard library.
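
To make the quoted point concrete, here is a minimal, self-contained sketch of that fallback pattern. This is illustrative only, not the standard library's actual implementation (which is considerably more sophisticated): quicksort runs until a depth budget is exhausted, and any remaining slice is handed to heapsort, whose worst case is bounded at O(n log n).

```rust
// Illustrative introsort-style sketch: quicksort with a depth budget, falling
// back to heapsort once the budget runs out, so no input can trigger the
// quadratic worst case.

fn introsort(v: &mut [i32]) {
    // Roughly 2 * log2(len) levels of quicksort are allowed.
    let depth_limit = 2 * (usize::BITS - v.len().leading_zeros()) as usize;
    introsort_rec(v, depth_limit);
}

fn introsort_rec(v: &mut [i32], depth_limit: usize) {
    if v.len() <= 1 {
        return;
    }
    if depth_limit == 0 {
        heapsort(v); // bounded fallback: no quadratic blow-up possible
        return;
    }
    let p = partition(v);
    let (lo, hi) = v.split_at_mut(p);
    introsort_rec(lo, depth_limit - 1);
    introsort_rec(&mut hi[1..], depth_limit - 1);
}

fn partition(v: &mut [i32]) -> usize {
    // Lomuto partition with the last element as pivot.
    let pivot = v[v.len() - 1];
    let mut i = 0;
    for j in 0..v.len() - 1 {
        if v[j] <= pivot {
            v.swap(i, j);
            i += 1;
        }
    }
    v.swap(i, v.len() - 1);
    i
}

fn heapsort(v: &mut [i32]) {
    // Build a max-heap, then repeatedly move the maximum to the end.
    for start in (0..v.len() / 2).rev() {
        sift_down(v, start);
    }
    for end in (1..v.len()).rev() {
        v.swap(0, end);
        sift_down(&mut v[..end], 0);
    }
}

fn sift_down(v: &mut [i32], mut root: usize) {
    loop {
        let child = 2 * root + 1;
        if child >= v.len() {
            return;
        }
        let mut largest = child;
        if child + 1 < v.len() && v[child + 1] > v[child] {
            largest = child + 1;
        }
        if v[root] >= v[largest] {
            return;
        }
        v.swap(root, largest);
        root = largest;
    }
}

fn main() {
    let mut data = vec![5, 1, 4, 1, 5, 9, 2, 6];
    introsort(&mut data);
    assert_eq!(data, vec![1, 1, 2, 4, 5, 5, 6, 9]);
}
```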

@comex comex May 5, 2026


The argument in this section could be clarified a bit.

You seem to be saying that the idea of an unbounded loop where the model picks some action (writing code, running a tool), receives the result, and uses it to inform future actions, is inherently "brute force" and a "feedback loop" and bad.

But that can't be true, because that also describes how humans work.

Of course, LLMs are considerably worse at it than humans. And I can think of ways in which those terms apply uniquely to LLMs. "Brute force" applies uniquely to LLMs in that one strategy LLMs use is to make up for a lack of intelligence with sheer persistence. "Feedback loop" applies uniquely to LLMs in that LLMs are much more vulnerable than humans to getting stuck with a bad assumption and spending a long time barking up the wrong tree. Or perhaps more generally in that LLM performance tends to fall off drastically past certain complexity levels, which could be analogized to an audio feedback loop amplifying noise.

However, you don't really explain that. As written, the argument could apply just as well to humans. And I could respond: sure, it would be nice if there were a way to solve arbitrary programming problems with a bounded amount of work, but that just doesn't exist.


Contributor Author


Hmm, I feel like I did technically cover this point, but I do think you have a point here. I'm mostly replying to point out that I do want to consider what you've written, but am going to think about it a bit to figure out how to properly articulate that.

Contributor Author


Also for some additional context, this piece is probably the closest to being directly inspired by this massive essay I've been writing for years going more in-depth on this. Like, there, I spend a lot more time explaining the specifics of complexity theory, incompleteness proofs, and other things to make a point, but like, I both haven't written that fully and am not going to write that fully here. But I feel like there's some nugget of information that could be gleaned without going past the surface level.

Contributor Author


So, this section has been reworked a bit; let me know what you think.

Co-authored-by: Arhan Chaudhary <arhan.ch@gmail.com>
@traviscross traviscross added the I-council-nominated Indicates that an issue has been nominated for prioritizing at the next council meeting. label May 8, 2026
Contributor Author

@clarfonthey clarfonthey May 14, 2026


Since this change is big enough to be worth noting in a separate thread:

In short, non-trivial usage is now disallowed. Disclosure now acts as a way for contributors to justify that LLM usage did not affect the output, i.e., that it was trivial.

The main motivation for this was the recent news that the Colossus data centre in Memphis, explicitly cited by this policy as an outlier because the industry has not been so overtly evil, now powers Claude Code, the model that basically everyone on GitHub is using. Now, it's clear that most LLM users on GitHub are directly doing harm harder and faster than before.

Sure, there are other models, but I doubt many people will switch. The arguments laid out here do explicitly support a more restrictive model regardless of this news, and this is only additional justification for going all-in.

It's impossible to look at all the things the AI industry does in a good light. The only possible way is to argue that the ends justify the means, and even then, I would say that anyone who isn't a climate denialist and a racist would at least consider switching away from Claude Code toward a different model.

Overall, the reception of this RFC has been overwhelmingly positive, and I'd like to thank the numerous people who've personally reached out to thank me for what I've done so far. You're the folks who make this worth it.

There may still be a few parts that feel weird to read in light of these revisions. As always, please feel free to point them out on the diff.



I believe the summary may need to be updated to state that non-trivial usage is not allowed. Currently that is not very clear to me.

Love how the RFC turned out.

Contributor Author


Right, you're totally right. There's also another comment on that point which I can address too.

Co-authored-by: +merlan #flirora <flirora@flirora.xyz>
@clarfonthey clarfonthey dismissed ArhanChaudhary’s stale review May 14, 2026 22:53

GitHub is being weird. I already merged these changes, so, it's weird it's showing them as unaddressed.

Comment thread text/3959-llm-policy.md
Contributor Author

@clarfonthey clarfonthey May 15, 2026


Going to also add this in a separate thread because I'm frustrated and don't feel like I have any other outlet to speak about it than this. And I think this is important context for this policy specifically.

The discussion on policy has been absolutely, utterly vitriolic. The people who have been downvoting my comments in Zulip and on GitHub to show dissent instead of just directly insulting me have been really grating. I'm tired of having to be the one to point out the obvious bad faith in the people arguing against reasonable policy and being treated like the villain for it. Honestly, if I were only starting to write a policy now with the amount of hate I've experienced thus far, I would have just fucking given up; I'm only continuing by momentum and spite at this point.

(Maybe that's not true, but I dunno. I love Rust too much.)

But of course, because said hate is only implicit, and not explicit, nobody will ultimately receive any consequences from it. I'll just have to deal with it all. I won't even get any fucking apologies.

I can't help but point out how obviously punching down it is for people to pick on the person who's been thrown in the dirt by the AI industry and to claim that really, they're the one who's at fault for trying to limit the harm of LLMs. Really, I'm being insensitive and unreasonable by asking a community that has made me enjoy programming more than I ever have to grow a fucking spine and not destroy the thing I love.

Ultimately, this discussion has destroyed our mental health for the past month and I know that the same people who are completely unwilling to admit that LLM usage has consequences are similarly unwilling to admit that their actions have taken a toll on us too. This discussion in particular has been a bit of motivation to focus on more healthy plurality, which ironically has been less healthy because the person who wants to just tell everyone how rude they're being is currently not allowed to do so. If you wanna read more, you can check out our blog, but I don't feel like making it easy for the people who've been effectively harassing us to read something so emotionally vulnerable.

I hope you're all happy for what you've done, and I hope you enjoy your time at Rust Week without me. I could have figured out how to stay with a friend in Utrecht, but I genuinely don't think I could look some of you in the face for how foul you've been these past few weeks. And, I'd rather any resources we do manage to get for international travel get spent on trying to reunite with our partner that we haven't seen in 7 years.

You know, something that hasn't been helped by being thrown under the bus by the AI industry and not having a full-time job for 4 years.

Anyway, I've been expecting this to not have any more traction due to Rust Week distractions, but I actually initially messed up my timing and thought that it was this week, not next. So, I get to be even more stressed out about the discussions happening that I won't be participating in. I've wanted to potentially get more involved with the actual team despite only being a meagre helper on the hashbrown repo, but this situation has been just too stressful to work on anything else too seriously.

Have fun, I guess.




Whatever it means to you to hear this from the outside: I sympathize greatly with your frustrations here and I appreciate your work more than you can imagine. Please take care, please feel better.



Thank you for all the work you've done. I also really appreciate it.

Take a break if you need to, take care of yourself. If you wish to vent, feel free to reach out.

Wishing you the best.

Comment thread text/3959-llm-policy.md

[esolang]: https://arxiv.org/abs/2603.09678v1

Claude Code, often cited as one of the best tools, has a great example in its source code which it accidentally leaked. Whenever the model is asked to output valid JSON in "non-interactive" mode, i.e. triggered as part of an internal loop, it just repeatedly feeds the model the error in the generated JSON until it correctly generates valid JSON. Originally, the genuine code for this was cited, but it's just way too confusing even with access to the full source code to cite here.

@leo60228 leo60228 May 15, 2026


This is, to my knowledge, factually inaccurate. While I have not looked at the leaked Claude Code source code, restricting LLMs to valid outputs has been a common feature for several years, including in Claude. llama.cpp has an open-source implementation of this technique as evidence that this is actually filtering output tokens and not just brute-force.
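
For readers unfamiliar with the technique being referenced, here is a toy, illustrative sketch (not llama.cpp's actual API or grammar format): at each decoding step, candidate tokens are filtered against a validity check derived from the grammar, so only tokens that keep the output well-formed can ever be sampled, with no generate-and-retry loop involved.

```rust
// Toy illustration of grammar-constrained decoding: instead of letting the model
// emit anything and retrying on failure, the sampler masks out every candidate
// token that would take the output outside the grammar, so only well-formed
// output can ever be produced.

/// Toy "grammar": output must remain a prefix of a balanced-brace string.
fn keeps_output_valid(prefix: &str, candidate: char) -> bool {
    let depth: i32 = prefix
        .chars()
        .map(|c| match c {
            '{' => 1,
            '}' => -1,
            _ => 0,
        })
        .sum();
    match candidate {
        '{' => true,
        '}' => depth > 0, // never close more braces than were opened
        _ => depth > 0,   // ordinary content only allowed inside braces
    }
}

/// One decoding step: drop disallowed tokens, then take the highest-scoring one.
fn constrained_step(prefix: &str, scored: &[(char, f32)]) -> Option<char> {
    scored
        .iter()
        .filter(|(tok, _)| keeps_output_valid(prefix, *tok))
        .max_by(|a, b| a.1.total_cmp(&b.1))
        .map(|(tok, _)| *tok)
}

fn main() {
    // Pretend these are the model's raw per-step token scores; in the first step
    // the model "prefers" plain content, which the mask forbids.
    let steps = [
        [('a', 0.9), ('{', 0.5), ('}', 0.1)],
        [('a', 0.8), ('{', 0.2), ('}', 0.3)],
        [('}', 0.9), ('a', 0.4), ('{', 0.1)],
    ];
    let mut out = String::new();
    for scores in &steps {
        if let Some(tok) = constrained_step(&out, scores) {
            out.push(tok);
        }
    }
    assert_eq!(out, "{a}"); // always grammatical, with no retry loop
}
```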


Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

So, this actually makes a lot of sense and should be the way things are done, but at least my best reading of the Claude Code source is that it's not doing that, and I didn't really have any other examples to go off of. And while JSON validation is simple, I expect the story to lean in favour of brute force when building code, since you can't easily determine whether a particular set of tokens will compile.



but at least my best reading of the Claude Code source is it's not doing that

Masked decoding is implemented in the inference backend as an FSM that triggers on the tool call token. It wouldn't be visible in frontend glue ("harness") like CC. There is some error-looping code in the CC leak, but without the backend code it's hard to tell whether that is necessary or just functionally dead code that a clanker inserted. I don't think we can draw factual conclusions from the CC source -- with clanker code the thing it "looks like" it's doing is misleading.

Contributor Author


Definitely agree that gleaning anything from the source code is probably a worthless endeavour.

Had already trimmed a lot of this section but probably should have trimmed it more.

@ahicks92

Ok, so firstly I know that we are supposed to reply inline but I am blind and I don't use GitHub PR review stuff well enough to do that quickly, and this will likely be my only comment for my sanity unless someone comes up with a very good reason for me to reply. For that reason I hope you all can be patient with me. But I am a former contributor (I did struct field reordering), I am disabled (I share at least the job-related concerns, because man are blind people going to get hit first), and I've thought about whether I should say anything and what that should be for a week or so now. I want to make a few points.

First: as of last week blind people can play Civilization V. This was done by a friend of mine with zero coding knowledge who has produced 3 quality accessibility mods with Claude since around Christmas. Factorio Access, my project, also only supports Factorio 2.0 because of LLMs. Otherwise it just wouldn't have been feasible. If nothing else, I think we can agree that "blind people can now make games accessible in a matter of weeks" (or even sometimes days, as happened with STS 2) is an unambiguous net good. But on top of that Mozilla themselves have admitted that Mythos is, if not human level, then at least a major accelerator for security research. I also have a friend who happens to be in a very un-supportive environment who uses Claude specifically and LLMs in general when he feels suicidal, because it's what he has, and that helps. I recently built an audio description pipeline for videos in a day. It should have taken a week or two to try but I just asked for it and Claude was all "done" and it worked and I had an investor and a new company within 48 hours. We should be launching next month. If we take my struct field reordering contribution--6 months at the time--I would say that I could do it in 3 now without an LLM, probably much less with one. Any "LLM is ineffective" narrative needs to somehow cope with at least evidence of this quality and it also needs to cope with the models we are getting in 6 months or so, which some people, almost certainly including people involved with Rust, have access to. When used responsibly they can be used to produce good code. Even when not used responsibly, which is what seems to be part of the claims here for LLM use on Claude Code itself, they produce working code that is mostly bug free. Given the sheer volume of users for CC, if CC didn't work for most people in the sense of internal code quality issues in the binary itself, we would know. There is an argument here that LLMs are not good enough for compiler dev, but that argument needs to be made specifically, because arguing that they are ineffective in general is trivially countered.

Now it may seem that I got away from the ethical point there too, but I didn't. It's just that these are personal examples I can comment on for both arguments. From the ethical side it is simply true that most things do x harm and y good. You have to weigh them off against each other and that's complicated. I also believe it to be the case that a lot of the LLM harms are overblown in context. I can construct arguments that Twitter, cars, eating meat, etc. are all equally harmful at minimum, and that's only because I'm being generous. But I won't because that's not the point I want to get at. The point I want to get at is that Rust is a programming language, not a social justice project, and if you want to make social arguments then why does it support Loong? Why are Rust contributors not banned from driving cars? Taking a social stance on social issues is one thing, but saying "x technology is harmful so we should ban it" doesn't really work when a ton of other things are just as harmful. I just took a shower. That used way more resources than my Claude usage will use today, probably this week. But no one is going to come to me and say "don't take showers". You can make an argument that Rust should have a say over what tools I use because they are tech and not over whether I take a shower because it has nothing to do with tech, but the argument that you would be very hard pressed to make is that they aren't equally harmful from a utilitarian perspective.

I will make no secret of it that I am mostly pro-AI, though "pro-AI" is close but not quite right. I follow the Yudkowsky-type "we might all die by 2035" alignment concerns. I agree that it's not clear cut what the right answers are. I agree that some harm is done. But I could talk at length about whose fault the harms are and start a whole argument and no one would agree on anything by the end of it. Possibly not even me.

So what's the point. The points are:

  • You can't just say "AI harmful, should ban". That's not really an argument for anything.
  • You can't just say "AI dangerously ineffective, should ban", because it's not and anyone can trivially prove that by just trying it in good faith, or asking the people with access to Mythos.
  • You (in specific @clarfonthey , not general you for this one) are heavily anti-LLM. Trying to write the LLM policy for the community when the person doing it is anti-LLM all the way can't work. It is unfair to the counterarguments in the first place and unfair to the people on the other side of the argument such as myself who can easily point at the valuable things we have done with AI, in my case specifically helping minority groups.
  • LLMs are rapidly getting more effective. By the time this policy even passes, should it be accepted, there may be a model that obsoletes most of the effectiveness/quality arguments. Even if not, we know such models are coming and we can put rough bars on the timelines.

Now at a personal level let me say that I sympathize with not being able to find a job. It sucks. I also sympathize with watching my career die. But the thing is that it's not "LLM out to get disabled people", it really is probably a career death situation. I have specifically been rearranging my life to deal with this stuff for a year, and consider myself incredibly fortunate to be able to do so. When it comes to people who are out of a job, blind people are right up there with you in general. It's not great to say the least. As a society, we are going to have to navigate a very complex set of social issues as a whole on a very short timeline.

But the problem is that Rust is tech with a lot of stakeholders. And I for one would be put off by not being able to use LLMs if I contributed today, because even if they are causing chaos, they are also causing great efficiency wins for those who know how to use them. I agree with policies that say you're responsible for your code; I agree that there is definitely a problem with people doing fly-by contributions. But honestly my reaction to this is that if Rust as a whole banned LLMs, I would probably start scaling down my Rust usage in new projects simply because Rust will get left behind if it goes this hard: both because contributors won't join, and because LLMs are already superhuman in lots of ways. If you really want to go there, you have to justify lots of other things. So, sure, try for it, but the reason that there's so many visceral reactions on the other side is that this is in effect saying "the social issue is so important that we should risk the project's future". Maybe you all think that it is. I am divorced enough from Rust contribution as a whole that I wouldn't know. But, that's what you're effectively saying. And you have to address it like that. Things like pronouns and the other places Rust takes a stance are not existential in the same way. Respecting someone's identity isn't "we should go x times slower than everyone else and not adopt the newest tech". Those are really different and if you want to make these arguments you need to account for it.

I could honestly go on for a while. But I think this says the part I think needs to be said.

Comment thread text/3959-llm-policy.md

@olivia-fl olivia-fl May 15, 2026


So I understand that my voice does not have very much weight here, since I have only made fairly minor contributions to rust. I do not currently intend to make any future contributions due to the project's external appearance of support of LLMs. I am very strongly in support of this policy, especially now that it has been changed to ban non-trivial LLM use entirely. If it were accepted I would consider contributing to the project again. This is both because of the requirements it would impose but also, more importantly, because it takes the extensive ethical issues seriously rather than pretending that they do not exist or are somehow irrelevant to the practice of software-making. Seeing this gives me hope.




I also have been thinking of contributing for a while, and having a policy like this would be very encouraging for me. I want to contribute to projects that care about ethical issues.

Contributor

@jplatte jplatte May 15, 2026


As another contributor who hasn't done that much¹, I am in the same boat.

¹ a couple refactors towards const generics during libs blitz, some std API improvements / additions



Thirded; I find pro-LLM stances extremely off-putting, have made minor contributions to Rust, rust-analyzer etc. in the past, and am absolutely not motivated to further engage with an ecosystem that is welcoming towards this technology.

Contributor Author


Also just clarifying, I explicitly mentioned I appreciated this being posted despite it potentially falling under:

  • Simply stating your viewpoint on LLMs, even if you provide reasons. While these arguments can be useful for the RFC, they are better worded as explicit suggestions to specific areas of the RFC, rather than as just general comments.

I think that pointing out the sheer number of people who have been outright turned away from this project because of its inability to reject LLMs and their harm is important. So, as long as people keep their comments on this specific point confined to this thread, I think it's fine.



I'm in much the same position. I won't contribute until a policy at least as strict as this one is adopted, and honestly even then, I'd still be hesitant to so long as pro-LLM voices are in positions of power in the project.

Comment thread text/3959-llm-policy.md
3. Any *potentially non-trivial* LLM usage must be *disclosed* in ideally as detailed as a manner as possible. This may necessitate additional tooling to notify new contributors about the policy and explain how disclosure works.
4. If a contributor does not fully understand the code they submit, their contribution may be rejected for that reason alone. Note that such usage is not always considered *slop*, and is considered separately. (For example, they may understand a large portion, but not all of it, which shows that they still put in a lot of effort.)
5. If a user is found to be repeatedly lying about LLM usage (using LLMs in a non-trivial way without disclosing that usage), this is a COC violation that will be moderated accordingly.
6. As long as users are honest, follow the COC otherwise, and respect boundaries, the worst punishment for *non-trivial* LLM usage is the rejection of a contribution.

@olivia-fl olivia-fl May 15, 2026


Suggested change
6. As long as users are honest, follow the COC otherwise, and respect boundaries, the worst punishment for *non-trivial* LLM usage is the rejection of a contribution.
6. *Non-trivial* LLM usage is disallowed with a few exceptions, but as long as users are honest, follow the COC otherwise, and respect boundaries, the worst punishment for non-trivial LLM usage is the rejection of a contribution.

nit: the original wording I think implies that the policy is looser than it actually is. if the only thing we're saying about non-trivial usage that follows other requirements is that the "worst punishment" is X, it reads to me like there is a wide range of outcomes for typical non-trivial usage. this is weaker than what we're actually doing: banning non-trivial usage by default with a few specific exceptions.


Contributor Author


Right, I actually didn't properly fix this when rewording the summary. Admittedly it's the last bit I edited, and I tried to make the edits minimal, but they ended up too minimal.

Comment thread text/3959-llm-policy.md
* Basic auto-completion of syntax, spell-checking, and copy-editing. In this case, the model is simply accelerating what a user already intended to do, rather than deciding what to do.
* Even the writing of certain code or text can be considered trivial, if little creative input is required to write it. "Boilerplate" code is a good example of this.

Note that this LLM usage being allowed *does not* constitute an explicit endorsement; it simply represents a pragmatic approach to enforcement, since it is difficult to distinguish. This policy does not try to distinguish between generative AI, LLMs, and other forms of machine learning, since the category of "trivial usage" covers broadly enough to avoid needing that distinction.

@olivia-fl olivia-fl May 15, 2026


A large fraction of the ethical issues with LLM use discussed above apply equally to trivial and nontrivial usage. It is perhaps worth strengthening "not explicitly endorsing" trivial usage to explicitly condemning it, while admitting that enforcing a ban on trivial usage is less practical than a ban on nontrivial usage and so we may focus on nontrivial usage out of pragmatism.

Personally I think it may be a good idea to go farther and require disclosure for both trivial and nontrivial usage. This makes things simpler for contributors, who must always disclose LLM use rather than being asked to distinguish between trivial and possibly-non-trivial use privately. It's much more difficult to enforce disclosure for trivial use, and we would likely want to err on the side of caution when identifying undisclosed trivial use. However, I believe this is okay. It is okay to ask people for things without an effective mechanism to force them to comply. This is the basis of ideas like trust and consent, and if we are abandoning trust and consent we have bigger problems.

I do completely understand if you don't want to strengthen the requirements around trivial usage because it would make the policy less likely to be accepted.


Contributor Author


So, I think this is interesting to interpret in light of this comment: #3959 (comment)

and my response: #3959 (comment)

since, ultimately, even though I think it's an absolute condemnation of the industry for not making anything better, these do constitute accessibility tools that people use, and we want to make sure that we're specifically condemning the industry and not endorsing the usage, rather than condemning the users.

Also, in private I've made the analogy that we're asking a large industry to not donate half of their salaries to the evilest part of that industry, and not just chastising individuals for their usage of these things for one reason or another. I personally think that the ethical issues make the things revolting to use, but that ultimately doesn't change that they have helped people in one way or another. Rather than just telling everyone that they're being wrong for using them, I think we should try and get them to join our side and fight against the industry, often by replacing the terrible tool of LLMs with a better one, instead of just chastising them.


@olivia-fl olivia-fl May 16, 2026


That's a fair argument that explicitly condemning use (as opposed to just disallowing it) is not useful in this context. I think I am now in agreement.

LLM/"AI" use in the context of accessibility is genuinely really complicated and yeah, I don't really have any interest in condemning or disallowing accessibility use-cases. Where you draw the line is complicated though. It would be very bad for the policy to ban machine transcription use in the development process, but I do think we should disallow the kinds of examples @ahicks92 described in their comment where LLMs were used to generate accessibility software rather than as an accessibility tool itself. A nontrivially LLM-generated PR to add screen-reader-friendly error reporting to rust should be rejected just the same as any other nontrivially LLM-generated PR.

I think a disclosure requirement for trivial accessibility use would be harmful because it forces disabled contributors to mark themselves as disabled. Trivial use for non-accessibility reasons (boilerplate generation or autocomplete) is a lot less complicated and I am straightforwardly ethically opposed to it for mostly the same reasons I am straightforwardly ethically opposed to nontrivial use. Ideally I would like the policy to ban trivial non-accessibility use, or at least require disclosure for it. However, I can see the problem that trying to draw a box around trivial accessibility use at the policy level is much more fraught than drawing a box around trivial use in general. A common kind of harm done by accessibility policies that draw a tight box around accessibility vs non-accessibility use-cases is that they require disabled people to "prove" that they are sufficiently disabled to be able to use a thing for accessibility purposes to a system that is both structurally and individually ableist. We should be very careful to avoid producing this kind of outcome here. I am not sure where I stand on balancing these issues, and when in doubt would prefer to err on the side of not making an ableist system.

Contributor Author

@clarfonthey clarfonthey May 16, 2026


Right, and to be clear, I'm still saying that we should disallow ultimately letting the LLM write any code that isn't just shortcutting syntax or something to that effect. Ultimately, the idea here is that if we really taper what kinds of things people are allowed to use LLMs for, the people who are just using them to shortcut actually writing code will either give up or acknowledge that the tool isn't helping them actually write code, just doing it for them, and the people who still continue to use it are the people who feel that it is actually helping for accessibility concerns. Kinda hoping that the current formulation is self-correcting for that, but as usual, no idea what would happen in practice.

Like, I mention trait implementations as a common example in Rust because they can be genuinely lengthy but pretty much have exactly one way to be written in a lot of cases. Personally, I think that we should just get rust-analyzer to the point where it's able to autocomplete these trivial cases for us, and then an LLM is not involved, but in cases where it's not possible, I'll relent and just let people do it.

Like, working with code is atrocious for a blind person even with the best tools, so, I'm more than willing to give people the benefit of the doubt that these things actually help them make it tolerable.

Comment thread text/3959-llm-policy.md
Contributor Author

@clarfonthey clarfonthey May 15, 2026


Going to start a separate thread to respond to @ahicks92's comment because I think it's a reasonable comment to be discussed and want at least the folks who can manage with GitHub's UI to keep the thread organised. Link here: #3959 (comment)

Sorry for the extremely soapboxy ramble. I'm terrible at writing anything that isn't a small essay because I think there's always a lot of context to explain and I tried to strip a lot of it from the RFC even though it's so long as-is.

First, and it goes without saying: the tech industry has done an absolutely abysmal job with accessibility. The fact that GitHub, despite claiming to have various accessibility options, is still horrendous to use even for a sighted person, is just one example of this. (And to anyone, including Austin, who reads this and doesn't want to deal with the threaded UI due to various accessibility issues, please feel free to similarly respond directly in the thread. We can cope.)

I think that Rust itself should do way better to support blind people. One obvious example is that the compiler currently relies a lot on outputting graphical characters to point out spans and such when, well, this is pretty meaningless to a blind person unless you have a very expensive and big braille display capable of helping you understand said markings. I've seen loads of projects attempt to create "accessible" CLIs that always word things so they're read properly by screen readers, and try to reduce noise, but ultimately I'm not 100% sure what the best solution is. Right now, I assume that the best way to interface with Rust as a Blind user is just to use rust-analyzer and an IDE that you like, since that will act like a proper UI that you can navigate. Even just being able to go through compiler warnings as a proper list instead of just a bunch of lines on a terminal is an improvement.

One of the explicit accessibility carve-outs I make here is also for stuff like dictation software and text-to-speech, since even though I think it's extremely frustrating that the best possible tools we've designed have been with LLMs, I still think that people should be allowed to use the best possible tools and it's not on them that the best tools are shit.

And a good example of what I mean by this, by the way, is carykh's project from six years ago where he managed to make automatic lip-syncing software with some very simple code: https://www.youtube.com/watch?v=y3B8YqeLCpY

While there are some papers that demonstrate that we know so little about glyph recognition that terrible neural networks beat the best hand-built models in performance, this is absolutely not true when it comes to audio. We have an astonishingly deep understanding of how human speech works and just need to tap into that linguistic knowledge. The same is true for lexical analysis, and one argument that I left entirely out of this policy because it was off-topic is how current AI models flagrantly ignore all this research and do the shitty things. Old-school lexical analysis is able to perfectly tokenise text in basically every language, whereas OpenAI's tokeniser, at least last I checked, splits Chinese characters across UTF-8 byte boundaries. A proper Chinese tokeniser would at least try and split up characters into radicals using the data Unicode provides, but this just converts Chinese into byte gibberish that isn't even valid UTF-8.
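
To illustrate the failure mode being described with a toy sketch (this is not OpenAI's tokenizer, just an illustration of the byte-splitting problem): a CJK character occupies several bytes in UTF-8, so a tokenizer that is free to cut anywhere in the byte stream can produce "tokens" that are not valid UTF-8 on their own, whereas character-aware segmentation cannot.

```rust
// Toy illustration: byte-level splitting can cut a CJK character's three-byte
// UTF-8 encoding in the middle, while character-aware segmentation never does.
fn main() {
    let s = "汉字"; // two CJK characters, three UTF-8 bytes each
    let bytes = s.as_bytes();
    assert_eq!(bytes.len(), 6);

    // A byte-level vocabulary is free to split anywhere, e.g. after the first
    // two bytes, leaving a fragment that is not valid UTF-8 by itself:
    let fragment = &bytes[..2];
    assert!(std::str::from_utf8(fragment).is_err());

    // Character-aware segmentation always yields whole code points:
    for c in s.chars() {
        println!("{c} = U+{:04X} ({} bytes)", c as u32, c.len_utf8());
    }
}
```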

Basically, it's an absolute condemnation of the tech industry that these are some of the best tools we have available, since the evidence shows that we just aren't trying hard enough. We decided to let hyperscaling and mass surveillance solve the problem instead of our brains.

Anyway, my point here is that I absolutely think Austin and Blind folks have the right to continue using LLMs to make accessibility tools despite the fact that they are an objectively terrible tool for the reasons mentioned in this RFC. It doesn't matter if the best tool for the job is a bad tool; it's a tool, and you can't build without tools. Also, while I think it's a disgrace on the industry that Blind folks are not a large part of it, I don't think that we should consider their financial contribution to the LLM industry as anything worth worrying about.

But, importantly, I think that most of us should not only do better, but actively work to replace these tools with ones that are better and more ethical. Like, one thing that constantly rubs me the wrong way is the insistence that LLMs are a tool like any other, when my entire point is that I very strongly disagree with that framing. No other tool has managed to so successfully make it difficult to get a job, double the cost of power, make it impossible to buy a computer, and pollute black communities with such efficacy. The fact that we are literally building tools and are considering these things as equally useful is a condemnation of us, because we are the ones responsible for building tools.

So, my opinion is that while I'm disappointed to see how many things that LLMs are doing for Blind accessibility, this only makes me want to double down on my position that we should continue doing work to reduce LLM usage as much as possible.

The main goal behind the trivial usage provision is twofold, and I discussed this a bit on fedi earlier when summarising the newer policy. The first is hoping that some of the most uncritical LLM users will just let us know who they are so we don't have to ask them first. The second is keeping people as accountable as possible to the code they write to ensure that the LLMs are not writing the code, but they are. Rather than try and guess this, we ask people to tell us explicitly.

I don't know whether something like a chatbot is the most accessible thing for Blind people to use longer-term, but I certainly don't think that LLMs are the best long-term option. The trivial usage clause, to me, is inherently in favour of these kinds of uses, as long as the LLMs aren't writing non-trivial amounts of code. Syntax boilerplate, trait implementations, etc., I think are all acceptable usages of this thing, and an old example I got rid of to avoid link rot was the implementation of Iterator for &mut I: this is sure a lot of text, but it's not exactly intellectually thrilling activity, and I am fine with a machine writing it and checking it for obvious mistakes. What I don't trust a machine to write are all the actually important bits, like real iterator implementations. I do acknowledge this is fundamentally less accessible than allowing an LLM to do it, but I think that we should be aiming to reduce the net harm to everyone done by the AI industry and continue to improve accessibility without it.
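
For context, the kind of forwarding boilerplate being referred to looks roughly like the sketch below. It uses a local wrapper type rather than std's actual `impl Iterator for &mut I`, since re-declaring that impl in user code would conflict with the standard library; the point is that every method simply delegates to the inner iterator, so there is essentially one way to write it.

```rust
// Sketch of mechanical "forwarding" boilerplate: a wrapper whose Iterator impl
// just delegates every method to the iterator it holds.
struct ByRef<'a, I: ?Sized>(&'a mut I);

impl<'a, I: Iterator + ?Sized> Iterator for ByRef<'a, I> {
    type Item = I::Item;

    fn next(&mut self) -> Option<Self::Item> {
        self.0.next() // just forward to the wrapped iterator
    }

    fn size_hint(&self) -> (usize, Option<usize>) {
        self.0.size_hint()
    }
}

fn main() {
    let mut it = [1, 2, 3].into_iter();
    let first_two: Vec<i32> = ByRef(&mut it).take(2).collect();
    assert_eq!(first_two, vec![1, 2]);
    assert_eq!(it.next(), Some(3)); // the original iterator is still usable afterwards
}
```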

Ultimately, that means that some people are going to be unable or unwilling to contribute to Rust, and that's not an insignificant cost. So, we need to pick up our accessibility game and make better tools that don't use LLMs too.


Contributor Author


Also just to comment: I think that the original post's comments on a few topics, like pronouns, were unacceptable, but I think that the issues for Blind Rust users are very real and wanted to address them seriously. Also, since the user said they likely weren't going to comment again, I don't think it's worth trying to litigate those points, since you can't change your behaviour if you don't plan to behave… again? This analogy doesn't work at all.

Just wanted to point this out since I admittedly missed this point on first reading, and think it's worth clarifying that the COC is still relevant during this discussion.


Labels

I-council-nominated Indicates that an issue has been nominated for prioritizing at the next council meeting. T-leadership-council Relevant to the Leadership Council, which will review and decide on this RFC.
