- Extract and store memorable facts (name, language, preferences) per user
- Inject memories into system prompt for personalized responses
- LLM-based extraction after each response, deduplication against existing
- JSON files on Docker volume (/data/memories), capped at 50 per user
- System prompt updated: respond in user's language, use memories
- Commands: !ai memories (view), !ai forget (delete all)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
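The per-user memory store described above can be sketched as follows. The `/data/memories` path and the 50-memory cap come from the commit message; the function names, the one-JSON-file-per-user layout, and the exact-match deduplication are assumptions for illustration (the real extraction and dedup are LLM-based).

```python
# Minimal sketch of a capped, deduplicated per-user memory store.
# Paths and the cap follow the commit message; names and the JSON
# schema are hypothetical.
import json
from pathlib import Path

MEMORY_DIR = Path("/data/memories")  # Docker volume mount point
MAX_MEMORIES = 50                    # cap per user

def _memory_file(user_id: str) -> Path:
    return MEMORY_DIR / f"{user_id}.json"

def load_memories(user_id: str) -> list[str]:
    path = _memory_file(user_id)
    if path.exists():
        return json.loads(path.read_text())
    return []

def save_memory(user_id: str, fact: str) -> bool:
    """Store a fact, deduplicating against existing memories."""
    memories = load_memories(user_id)
    if fact in memories:  # naive exact-match dedup stand-in
        return False
    memories.append(fact)
    memories = memories[-MAX_MEMORIES:]  # drop oldest beyond the cap
    MEMORY_DIR.mkdir(parents=True, exist_ok=True)
    _memory_file(user_id).write_text(json.dumps(memories))
    return True

def forget(user_id: str) -> None:
    """Delete all memories for a user (the `!ai forget` command)."""
    _memory_file(user_id).unlink(missing_ok=True)
```

Loaded memories would then be injected into the system prompt before each response.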
services:
  agent:
    build: .
    command: python agent.py start
    env_file: .env
    restart: unless-stopped
    network_mode: host

  bot:
    build: .
    command: python bot.py
    env_file: .env
    restart: unless-stopped
    environment:
      - LITELLM_BASE_URL
      - LITELLM_API_KEY
      - DEFAULT_MODEL
      - WILDFILES_BASE_URL
      - WILDFILES_ORG
    volumes:
      - bot-crypto:/data/crypto_store
      - bot-memories:/data/memories

volumes:
  bot-crypto:
  bot-memories:
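With the named volumes above, stored memories survive container restarts and rebuilds. A typical workflow might look like this (standard Docker Compose commands; the service name `bot` matches the compose file):

```shell
# Build images and start both services in the background
docker compose up -d --build

# Inspect the per-user memory files inside the running bot container
docker compose exec bot ls /data/memories
```

Deleting the `bot-memories` volume (e.g. via `docker volume rm`) would wipe all stored memories for every user.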