fix: Truncate AI reply to 200 chars in memory extraction to prevent doc pollution
The AI reply often contains full document content (passport details, etc.), which the memory-extraction LLM incorrectly stores as user facts. Limiting the reply to 200 chars avoids including document content while keeping the gist.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
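A minimal sketch of the truncation this commit describes, assuming the prompt is assembled roughly as in the hunk below; the helper name `build_extraction_messages` and the limit parameters are hypothetical, not part of bot.py:

```python
# Hypothetical sketch: cap the user message at 500 chars and the AI reply at
# 200 chars before they reach the memory-extraction LLM, so full documents
# pasted into a reply (passport details, etc.) are never stored as user facts.
def build_extraction_messages(existing_text, user_message, ai_reply,
                              user_limit=500, reply_limit=200):
    return [
        {"role": "user", "content": (
            f"Existing memories:\n{existing_text}\n\n"
            f"User message: {user_message[:user_limit]}\n"
            f"AI reply (summary only): {ai_reply[:reply_limit]}\n\n"
            "New facts to remember (JSON array of strings):"
        )},
    ]

# A reply padded with 1000 filler chars is cut off at 200 chars total.
msgs = build_extraction_messages("none", "My name is Ada.",
                                 "Nice to meet you, Ada! " + "X" * 1000)
```

Slicing with `[:200]` is safe even when the reply is shorter than the limit, so no length check is needed.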
bot.py (1 deletion, 2 additions)
@@ -1396,7 +1396,8 @@ class Bot:
                 {"role": "user", "content": (
                     f"Existing memories:\n{existing_text}\n\n"
                     f"User message: {user_message[:500]}\n"
-                    f"AI reply: {ai_reply[:500]}\n\n"
+                    # Only include first 200 chars of AI reply to avoid document content pollution
+                    f"AI reply (summary only): {ai_reply[:200]}\n\n"
                     "New facts to remember (JSON array of strings):"
                 )},
             ],