---
title: overview
type: note
permalink: voyage/knowledge/overview
---
# Architecture Overview

## API Proxy Pattern
Frontend never calls Django directly. All API calls go through `src/routes/api/[...path]/+server.ts` → Django at `http://server:8000`. Frontend uses relative URLs like `/api/locations/`.
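A minimal sketch of what that catch-all proxy route can look like. The helper name `buildBackendUrl` and the handler shape are illustrative assumptions, not the actual `+server.ts` code:

```typescript
// Hypothetical sketch of the catch-all proxy in src/routes/api/[...path]/+server.ts.
const BACKEND = "http://server:8000"; // Django service name inside Docker

// Rebuild the Django URL from SvelteKit's catch-all segment plus query string.
function buildBackendUrl(path: string, search = ""): string {
  return `${BACKEND}/api/${path}${search}`;
}

// Minimal forwarding handler shape (GET shown; other verbs proxy the same way).
export async function GET({ params, url, request }: {
  params: { path: string };
  url: URL;
  request: Request;
}): Promise<Response> {
  return fetch(buildBackendUrl(params.path, url.search), {
    headers: request.headers, // forwards session cookie / X-CSRFToken
  });
}
```

Because the browser only ever sees relative `/api/*` URLs, the Django origin stays private to the Docker network.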
## AI Chat (Collections → Recommendations)
- AI travel chat is embedded in Collections → Recommendations via `AITravelChat.svelte` (no standalone `/chat` route).
- Provider selector loads dynamically from `GET /api/chat/providers/` (backed by `litellm.provider_list` + `CHAT_PROVIDER_CONFIG` in `backend/server/chat/llm_client.py`).
- Supported configured providers: OpenAI, Anthropic, Gemini, Ollama, Groq, Mistral, GitHub Models, OpenRouter, OpenCode Zen (`opencode_zen`, `api_base=https://opencode.ai/zen/v1`, default model `openai/gpt-5-nano`).
- Chat conversations stream via SSE through `/api/chat/conversations/`. `ChatViewSet.send_message()` accepts optional context fields (`collection_id`, `collection_name`, `start_date`, `end_date`, `destination`) and appends a `## Trip Context` section to the system prompt when provided. When a `collection_id` is present, it also injects `Itinerary stops:` from `collection.locations` (up to 8 unique stops) and the collection UUID with explicit `get_trip_details`/`add_to_itinerary` grounding. See patterns/chat-and-llm.md.
- Chat composer supports per-provider model override (persisted in browser `localStorage` key `voyage_chat_model_prefs`). The DB-saved default provider/model (`UserAISettings`) is authoritative on initialization; localStorage is a write-only sync target. Backend `send_message` accepts an optional `model` param and falls back to DB defaults → instance defaults → `"openai"`.
- Chat agent tools (`get_trip_details`, `add_to_itinerary`) authorize using `Q(user=user) | Q(shared_with=user)` — both owners and shared members can use them. `list_trips` remains owner-only. See patterns/chat-and-llm.md.
- Invalid required-argument tool calls are detected and short-circuited: the stream terminates with a `tool_validation_error` SSE event + `[DONE]`, and invalid tool results are not replayed into conversation history. See patterns/chat-and-llm.md.
- LiteLLM errors are mapped to sanitized, user-safe messages via `_safe_error_payload()` (never exposes raw exception text). See patterns/chat-and-llm.md.
- Tool outputs display as concise summaries (not raw JSON) via `getToolSummary()`. Persisted `role=tool` messages are hidden from display; on conversation reload, `rebuildConversationMessages()` reconstructs `tool_results` on assistant messages. See patterns/chat-and-llm.md.
- Embedded chat uses a compact header (provider/model selectors in a settings dropdown with outside-click/Escape close), bounded height, sidebar closed by default, a visible streaming indicator, and i18n aria-labels. See patterns/chat-and-llm.md.
- Frontend type: `ChatProviderCatalogEntry` in `src/lib/types.ts`.
- Reference: Plan: AI travel agent, Plan: AI travel agent redesign — WS4
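The provider fallback chain described above (explicit request param → DB-saved `UserAISettings` defaults → instance defaults → `"openai"`) can be sketched as a pure resolution function. The names `ModelPrefs` and `resolveProvider` are illustrative, not the actual backend code:

```typescript
// Illustrative sketch of the provider fallback chain, assuming this shape
// for saved preferences (the real UserAISettings model may differ).
interface ModelPrefs {
  provider?: string;
  model?: string;
}

function resolveProvider(
  requested: string | undefined,   // optional param on send_message
  dbDefaults: ModelPrefs,          // DB-saved defaults (authoritative at init)
  instanceDefaults: ModelPrefs,    // instance-level config
): string {
  // First non-empty value wins; "openai" is the final fallback.
  return requested ?? dbDefaults.provider ?? instanceDefaults.provider ?? "openai";
}
```

The same nullish-coalescing chain applies to the per-provider model override, with the browser `localStorage` key `voyage_chat_model_prefs` only ever written to, never treated as the source of truth.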
## Services (Docker Compose)

| Service | Container | Port |
|---|---|---|
| Frontend | web | :8015 |
| Backend | server | :8016 |
| Database | db | :5432 |
| Cache | cache | internal |
## Authentication

Session-based via django-allauth. CSRF tokens come from `/auth/csrf/` and are passed as the `X-CSRFToken` header. Mobile clients use the `X-Session-Token` header.
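A hedged client-side sketch of that flow. The `/auth/csrf/` response shape (`{ csrfToken }`) and the helper names are assumptions, not the actual frontend code:

```typescript
// Hypothetical helpers for the auth flow above.

// Web clients send the CSRF token; mobile clients additionally send
// X-Session-Token instead of relying on the session cookie.
function buildAuthHeaders(csrfToken: string, sessionToken?: string): Record<string, string> {
  const headers: Record<string, string> = {
    "Content-Type": "application/json",
    "X-CSRFToken": csrfToken,
  };
  if (sessionToken) headers["X-Session-Token"] = sessionToken;
  return headers;
}

// Typical write request: fetch a token first, then POST with it.
async function postJson(path: string, body: unknown, sessionToken?: string): Promise<Response> {
  const { csrfToken } = await (await fetch("/auth/csrf/")).json();
  return fetch(path, {
    method: "POST",
    headers: buildAuthHeaders(csrfToken, sessionToken),
    body: JSON.stringify(body),
  });
}
```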
## Key File Locations

- Frontend source: `frontend/src/`
- Backend source: `backend/server/`
- Django apps: `adventures/`, `users/`, `worldtravel/`, `integrations/`, `achievements/`, `chat/`
- Chat LLM config: `backend/server/chat/llm_client.py` (`CHAT_PROVIDER_CONFIG`)
- AI Chat component: `frontend/src/lib/components/AITravelChat.svelte`
- Types: `frontend/src/lib/types.ts`
- API proxy: `frontend/src/routes/api/[...path]/+server.ts`
- i18n: `frontend/src/locales/`
- Docker config: `docker-compose.yml`, `docker-compose.dev.yml`
- CI/CD: `.github/workflows/`
- Public docs: `documentation/` (VitePress)