Pull requests: lm-sys/FastChat

Fix off-by-one in is_partial_stop causing false positives
#3848 opened Apr 12, 2026 by Chessing234
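The first entry concerns FastChat's `is_partial_stop` helper, which decides whether the tail of a streamed output could be the beginning of a stop string (so the streamer can hold that text back). A minimal sketch of such a check (hypothetical code, not the PR's actual diff) shows where an off-by-one creeps in: in Python, `output[-0:]` is the *whole* string, so a loop that starts at 0 compares the stop string against the entire output instead of a one-character tail.

```python
def is_partial_stop(output: str, stop_str: str) -> bool:
    """Return True if the tail of `output` could be the start of `stop_str`.

    Starting the loop at 1 matters: `output[-0:]` equals `output`, so a
    range starting at 0 would test the entire output, not its tail.
    """
    for i in range(1, min(len(output), len(stop_str)) + 1):
        if stop_str.startswith(output[-i:]):
            return True
    return False
```

For example, `is_partial_stop("Hello <|", "<|end|>")` is true (the tail `"<|"` begins the stop string), while `is_partial_stop("Hello", "<|end|>")` is false.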
Fix bitwise NOT (~) on boolean in tokenizer fallback
#3845 opened Apr 10, 2026 by Chessing234
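The `~`-on-boolean fix points at a well-known Python pitfall: `bool` is a subclass of `int`, so `~` performs integer bitwise NOT rather than logical negation, and the result is always a truthy nonzero integer. A standalone illustration (not FastChat's code):

```python
# `~True` is integer bitwise NOT: ~1 == -2, which is truthy,
# so `if ~flag:` takes the branch regardless of the flag's value.
flag = True
print(~flag)       # -2 (truthy)
print(not flag)    # False, the intended logical negation
```

The fix is simply to use `not flag` wherever logical negation was intended.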
Fix off-by-one in completion_tokens count in generate_stream
#3843 opened Apr 9, 2026 by Chessing234
Fix typos in judge prompts: "You evaluation" → "Your evaluation"
#3841 opened Apr 7, 2026 by Chessing234
Fix batch embedding averaging for batch_size > 1
#3839 opened Apr 6, 2026 by Chessing234
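For the batch-embedding fix, the usual shape of this computation is masked mean pooling: sum the token embeddings of each sequence, then divide by that sequence's own token count. A hypothetical NumPy sketch (names and shapes are assumptions, not the PR's code); the classic batch_size > 1 bug is pooling across the whole batch instead of per row:

```python
import numpy as np

def mean_pool(token_embs: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Per-sequence mean of token embeddings, ignoring padding.

    token_embs: (batch, seq_len, dim) float array
    mask:       (batch, seq_len) array of 0/1, 1 = real token
    returns:    (batch, dim) one pooled vector per sequence
    """
    summed = (token_embs * mask[:, :, None]).sum(axis=1)  # (batch, dim)
    counts = mask.sum(axis=1, keepdims=True)              # (batch, 1)
    return summed / np.maximum(counts, 1)                 # avoid divide-by-zero
```

Keeping the sum and the count per row (`axis=1`, not a global mean) is what makes the result correct for every batch size, not just batch_size == 1.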
feat: add fastllm worker for high-performance inference
#3828 opened Apr 1, 2026 by crawfordxx
feat: add GGUF/GGML model worker using llama-cpp-python
#3827 opened Apr 1, 2026 by crawfordxx
fix: type annotations and exception handling
#3817 opened Mar 18, 2026 by LincolnBurrows2017
fix: correct stderr logger level from ERROR to WARNING
#3797 opened Mar 13, 2026 by gambletan
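The logger-level fix follows from Python's numeric logging levels: WARNING is 30 and ERROR is 40, so a stderr handler set to ERROR silently drops warning records. A minimal standalone sketch (not FastChat's actual logging setup):

```python
import logging
import sys

# A handler's level is a threshold: records below it are dropped.
# With ERROR (40) here, logger.warning() output never reaches stderr;
# WARNING (30) lets both warnings and errors through.
handler = logging.StreamHandler(sys.stderr)
handler.setLevel(logging.WARNING)

logger = logging.getLogger("demo")
logger.setLevel(logging.WARNING)
logger.addHandler(handler)

logger.warning("visible on stderr with the corrected level")
```

Since each handler filters independently of the logger, the threshold has to be lowered on the handler itself, not just the logger.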