LiteLLM doesn't expose mxbai-embed-large; text-embedding-3-small with the
`dimensions` param produces 1024-dim vectors that fit the
translation_memory.embedding column.
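A minimal sketch of the swap, assuming an OpenAI-compatible LiteLLM proxy on localhost:4000 and a `LITELLM_API_KEY` env var (both hypothetical here); the `dimensions` parameter is a real text-embedding-3-small option that truncates its native 1536-dim output:

```javascript
// Build the embeddings request; dimensions must match the
// translation_memory.embedding vector(1024) column.
function buildEmbeddingRequest(input) {
  return {
    model: "text-embedding-3-small",
    input,
    dimensions: 1024,
  };
}

// Assumed LiteLLM proxy endpoint (OpenAI-compatible /v1/embeddings).
async function embed(input) {
  const res = await fetch("http://localhost:4000/v1/embeddings", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.LITELLM_API_KEY}`,
    },
    body: JSON.stringify(buildEmbeddingRequest(input)),
  });
  const { data } = await res.json();
  return data[0].embedding; // number[] of length 1024
}
```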
Refs: CF-3125
Node.js MCP server exposing translate, search_tm, upsert_glossary, and
record_correction tools over Streamable HTTP on :9222. Translation calls are
routed to gemini-2.5-flash via LiteLLM and augmented with the per-tenant TM,
glossary, and tone profile from the SmartTranslate pgvector DB.
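A sketch of the per-tenant TM lookup behind search_tm, assuming a tenant_id column and source_text/target_text columns on translation_memory (the embedding column is from the commit above; the rest is hypothetical). It only builds the query; executing it via node-postgres is left as a comment:

```javascript
// Build a pgvector nearest-neighbour query for one tenant's TM entries.
// `<=>` is pgvector's cosine-distance operator; the literal format
// '[0.1,0.2,...]' is what the ::vector cast expects.
function buildTmSearchQuery(tenantId, queryEmbedding, limit = 5) {
  const sql = `
    SELECT source_text, target_text,
           1 - (embedding <=> $1::vector) AS similarity
    FROM translation_memory
    WHERE tenant_id = $2
    ORDER BY embedding <=> $1::vector
    LIMIT $3`;
  const params = [`[${queryEmbedding.join(",")}]`, tenantId, limit];
  return { sql, params };
  // Execute with e.g. pg: const { rows } = await pool.query(sql, params);
}
```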
The schema migration in sql/001_schema.sql is already applied to the
smarttranslate DB. Fleet registration lives in Infrastructure/litellm/config.yaml.
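The registration entry in Infrastructure/litellm/config.yaml would follow LiteLLM's documented `model_list` shape; the alias and key env var below are assumptions, not the actual fleet entry:

```yaml
model_list:
  - model_name: gemini-2.5-flash        # alias the MCP server requests
    litellm_params:
      model: gemini/gemini-2.5-flash    # provider/model route in LiteLLM
      api_key: os.environ/GEMINI_API_KEY  # assumed env var name
```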
Refs: CF-3122 CF-3123 CF-3124 CF-3125