Conversation
Review Summary by Qodo
Add RecordOnly message type and exclude from LLM context
Walkthrough

Description
• Add new RecordOnly message type for audit-only messages
• Exclude RecordOnly messages from LLM dialog history
• Filter conversation content to skip record-only messages

Diagram (mermaid)
flowchart LR
    A["MessageTypeName enum"] -->|"adds RecordOnly constant"| B["New message type"]
    C["GetConversationContent method"] -->|"filters using RecordOnly"| D["Optimized dialog history"]
    B -->|"excluded from"| D
File Changes
1. src/Infrastructure/BotSharp.Abstraction/Conversations/Enums/MessageTypeName.cs
Code Review by Qodo
1. RecordOnly leaks to LLM
/// <summary>
/// Persisted for record/audit but excluded from default LLM dialog history.
/// </summary>
public const string RecordOnly = "record_only";
1. RecordOnly leaks to LLM 🐞 Bug ⛨ Security
MessageTypeName.RecordOnly is documented as “excluded from default LLM dialog history”, but ConversationService.GetConversationSummary and EvaluatingService still build LLM inputs from _storage.GetDialogs without filtering RecordOnly. This can send record/audit-only content to LLM providers through summarization/evaluation flows.
Agent Prompt
### Issue description
`RecordOnly` dialogs are intended to be persisted for audit/record, but excluded from **default** LLM dialog history. Some LLM-facing flows (conversation summarization + evaluation) currently include dialogs fetched directly from storage without filtering out `MessageTypeName.RecordOnly`, so audit-only content can be sent to LLM providers.
### Issue Context
- `MessageTypeName.RecordOnly`’s XML doc sets an explicit contract.
- `ConversationService.GetConversationSummary` builds a text prompt from stored dialogs and sends it via `GetChatCompletions`.
- `EvaluatingService` converts stored dialogs into `ref_conversation` strings passed into `instructService.Instruct`.
### Fix Focus Areas
- src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationService.Summary.cs[23-29]
- src/Infrastructure/BotSharp.Core/Conversations/Services/ConversationService.Summary.cs[101-119]
- src/Infrastructure/BotSharp.Core/Evaluations/Services/EvaluatingService.Evaluate.cs[19-28]
- src/Infrastructure/BotSharp.Core/Evaluations/Services/EvaluatingService.Evaluate.cs[144-159]
### Suggested fix
Apply a consistent “LLM history” filter (at least excluding `RecordOnly`, and ideally using the same comparer conventions as elsewhere) before building prompt content in these flows. Consider centralizing the filter into a shared helper to avoid future drift.
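A minimal sketch of such a centralized helper. The stub types below stand in for BotSharp's real `RoleDialogModel` and `MessageTypeName` (which have more members than shown), and the helper name `FilterForLlmHistory` is hypothetical; only the filtering idea comes from the review above:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Stub types standing in for BotSharp's real abstractions (assumptions
// for this sketch; the real RoleDialogModel carries many more members).
public class RoleDialogModel
{
    public string Role { get; set; }
    public string MessageType { get; set; }
    public string Content { get; set; }
}

public static class MessageTypeName
{
    public const string Plain = "plain";
    public const string RecordOnly = "record_only";
}

public static class DialogHistoryFilter
{
    // Shared "LLM history" filter: drops record/audit-only dialogs before
    // they are turned into prompt content. The case-insensitive comparison
    // guards against casing drift in stored message types.
    public static IEnumerable<RoleDialogModel> FilterForLlmHistory(
        this IEnumerable<RoleDialogModel> dialogs)
        => dialogs.Where(d => !string.Equals(
            d.MessageType, MessageTypeName.RecordOnly,
            StringComparison.OrdinalIgnoreCase));
}
```

With a helper like this, both `GetConversationSummary` and the `EvaluatingService` flows could call it on the result of `_storage.GetDialogs(...)` before building prompt text, so the two call sites cannot drift apart.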