fix(chat): add saved AI defaults and harden suggestions
20
.memory/knowledge/conventions.md
Normal file
@@ -0,0 +1,20 @@
# Coding Conventions & Patterns

## Frontend Patterns

- **i18n**: Use `$t('key')` from `svelte-i18n`; add keys to locale files
- **API calls**: Always use `credentials: 'include'` and the `X-CSRFToken` header
- **Svelte reactivity**: Reassign to trigger updates: `items[i] = updated; items = items`
- **Styling**: DaisyUI semantic classes + Tailwind utilities; use `bg-primary`, `text-base-content`, not raw colors
- **Maps**: `svelte-maplibre` with MapLibre GL; GeoJSON data

## Backend Patterns

- **Views**: DRF `ModelViewSet` subclasses; `get_queryset()` filters by `user=self.request.user`
- **Money**: `djmoney` `MoneyField`
- **Geo**: PostGIS via `django-geojson`
- **Chat providers**: Dynamic catalog from `GET /api/chat/providers/`; configured in `CHAT_PROVIDER_CONFIG`

## Workflow Conventions

- Do **not** attempt to fix known test/configuration issues as part of feature work
- Use `bun` for frontend commands and `uv` for local Python tooling where applicable
- Commit and merge completed feature branches promptly once validation passes (avoid leaving finished work unmerged)
- See [decisions.md](../decisions.md#workflow-preference-commit--merge-when-done) for rationale
44
.memory/knowledge/domain/ai-configuration.md
Normal file
@@ -0,0 +1,44 @@
# AI Configuration Domain

## WS1 Configuration Infrastructure

### WS1-F1: Instance-level env vars and key fallback

- `settings.py`: `VOYAGE_AI_PROVIDER`, `VOYAGE_AI_MODEL`, `VOYAGE_AI_API_KEY`
- `get_llm_api_key(user, provider)` falls back to the instance key only when the provider matches `VOYAGE_AI_PROVIDER`
- Fallback chain: user key -> matching-provider instance key -> error
- See [tech-stack.md](../tech-stack.md#server-side-env-vars-from-settingspy), [decisions.md](../../decisions.md#ws1-configuration-infrastructure-backend-review)

### WS1-F2: UserAISettings model

- `integrations/models.py`: `UserAISettings` (`OneToOneField` to user) with `preferred_provider` and `preferred_model`
- Endpoint: `/api/integrations/ai-settings/` (upsert pattern)
- Migration: `0008_useraisettings.py`

### WS1-F3: Provider catalog enhancement

- `get_provider_catalog(user=None)` adds `instance_configured` and `user_configured` booleans
- User API keys are prefetched once per request (no N+1)
- `ChatProviderCatalogEntry` TypeScript type updated with both fields
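The WS1-F1 fallback chain can be sketched in plain Python (dict stand-ins replace the `UserAPIKey` lookup; names and the error type are illustrative, not the actual `llm_client` code):

```python
# Stand-in for the instance-level settings (VOYAGE_AI_PROVIDER / VOYAGE_AI_API_KEY).
INSTANCE_DEFAULTS = {"provider": "openai", "api_key": "sk-instance"}


def get_llm_api_key(user_keys, provider, instance=INSTANCE_DEFAULTS):
    # 1. Per-user key wins.
    key = user_keys.get(provider)
    if key:
        return key
    # 2. Instance key applies only when the provider matches VOYAGE_AI_PROVIDER.
    if provider == instance["provider"] and instance["api_key"]:
        return instance["api_key"]
    # 3. Otherwise: error.
    raise LookupError(f"no API key configured for provider {provider!r}")
```

A user key always shadows the instance key, and the instance key never leaks to a mismatched provider.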
### Frontend Provider Selection (Fixed)

- No longer hardcodes `selectedProvider = 'openai'`; auto-selects the first usable provider
- Filtered to configured, usable entries only (`available_for_chat && (user_configured || instance_configured)`)
- Warning alert plus a Settings link when no providers are configured
- Model selection uses a dropdown populated from `GET /api/chat/providers/{provider}/models/`
## Known Frontend Gaps

### Root Cause of User-Facing LLM Errors

Five compounding issues (four resolved; one environment-dependent):

1. ~~Hardcoded `'openai'` default~~ (fixed: auto-selects first usable provider)
2. ~~No provider status feedback~~ (fixed: catalog fields consumed)
3. ~~`UserAISettings.preferred_provider` never loaded~~ (fixed: Settings UI saves/loads DB defaults; chat initializes from saved prefs)
4. An unset `FIELD_ENCRYPTION_KEY` disables key storage (environment-dependent)
5. ~~TypeScript type missing fields~~ (fixed)
## Key Edit Reference Points

| Feature | File | Location |
|---|---|---|
| AI env vars | `backend/server/main/settings.py` | after `FIELD_ENCRYPTION_KEY` |
| Fallback key | `backend/server/chat/llm_client.py` | `get_llm_api_key()` |
| UserAISettings model | `backend/server/integrations/models.py` | after `UserAPIKey` |
| Catalog user flags | `backend/server/chat/llm_client.py` | `get_provider_catalog()` |
| Provider view | `backend/server/chat/views/__init__.py` | `ChatProviderCatalogViewSet` |
62
.memory/knowledge/domain/collections-and-sharing.md
Normal file
@@ -0,0 +1,62 @@
# Collections & Sharing Domain

## Collection Sharing Architecture

### Data Model

- `Collection.shared_with` - `ManyToManyField(User, related_name='shared_with', blank=True)` - the primary access grant
- `CollectionInvite` - staging table for pending invites: `(collection FK, invited_user FK, unique_together)`; prevents self-invites and double invites
- Both models live in `backend/server/adventures/models.py`
### Access Flow (Invite -> Accept -> Membership)

1. Owner calls `POST /api/collections/{id}/share/{uuid}/` -> creates a `CollectionInvite`
2. Invited user sees pending invites via `GET /api/collections/invites/`
3. Invited user calls `POST /api/collections/{id}/accept-invite/` -> adds them to `shared_with`, deletes the invite
4. Owner revokes: `POST /api/collections/{id}/revoke-invite/{uuid}/`
5. User declines: `POST /api/collections/{id}/decline-invite/`
6. Owner removes: `POST /api/collections/{id}/unshare/{uuid}/` - removes the user's locations from the collection
7. User self-removes: `POST /api/collections/{id}/leave/` - removes their locations
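The invite lifecycle above can be sketched as a plain-Python state machine. Class and attribute names mirror the Django models, but this is an illustrative stand-in, not the actual ORM code:

```python
class Collection:
    """Toy model of Collection + CollectionInvite membership state."""

    def __init__(self, owner):
        self.owner = owner
        self.shared_with = set()       # accepted members (M2M in the real model)
        self.pending_invites = set()   # staged CollectionInvite rows

    def share(self, actor, invited_user):
        # Owner-only; mirrors the self-invite and unique_together guards.
        if actor != self.owner:
            raise PermissionError("only the owner can invite")
        if invited_user == self.owner or invited_user in self.pending_invites:
            raise ValueError("invalid or duplicate invite")
        self.pending_invites.add(invited_user)

    def accept_invite(self, user):
        # Accepting converts the staged invite into shared_with membership.
        self.pending_invites.remove(user)
        self.shared_with.add(user)

    def leave(self, user):
        # Self-removal drops membership; in the real code the departing
        # user's locations are removed from the collection (not deleted).
        self.shared_with.discard(user)
```

Decline/revoke are just `pending_invites.remove(...)` without the membership grant.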
### Permission Classes

- `CollectionShared` - full access for the owner and `shared_with` members; invite actions for invitees; anonymous read when `is_public`
- `IsOwnerOrSharedWithFullAccess` - child-object viewsets; traverses `obj.collections`/`obj.collection` to check `shared_with`
- `ContentImagePermission` - delegates to `IsOwnerOrSharedWithFullAccess` on `content_object`

### Key Design Constraints

- No role differentiation - all shared users have the same write access
- On unshare/leave, the departing user's locations are removed from the collection (not deleted)
- The `duplicate` action creates a private copy with no `shared_with` transfer
## Itinerary Architecture

### Primary Component

`CollectionItineraryPlanner.svelte` (~3818 lines) - a monolith rendering the entire itinerary view and all modals.

### The "Add" Button

DaisyUI dropdown at the bottom of each day card with "Link existing item" and a "Create new" submenu (Location, Lodging, Transportation, Note, Checklist).

### Day Suggestions Modal (WS3)

- Component: `ItinerarySuggestionModal.svelte`
- Props: `collection`, `user`, `targetDate`, `displayDate`
- Events: `close`, `addItem { type: 'location', itemId, updateDate }`
- 3-step UX: category selection -> filters -> results from `POST /api/chat/suggestions/day/`
## User Recommendation Preference Profile

Backend-only feature: the model, API, and system-prompt integration exist, but **no frontend UI** has been built yet.

### Auto-learned preference updates

- `backend/server/integrations/utils/auto_profile.py` derives the profile from user history
- Fields: `interests` (from activities + locations), `trip_style` (from lodging), `notes` (frequently visited regions)
- `ChatViewSet.send_message()` invokes `update_auto_preference_profile(request.user)` at the start of the method

### Model

`UserRecommendationPreferenceProfile` (OneToOne -> `CustomUser`): `cuisines`, `interests` (JSONField), `trip_style`, `notes`, timestamps.

### System Prompt Integration

- Single-user: labeled as auto-inferred, with a structured markdown section
- Shared collections: `get_aggregated_preferences(collection)` appends per-participant preferences
- Missing profiles are skipped gracefully

### Frontend Gap

- No settings tab for manual preference editing
- TypeScript type available as `UserRecommendationPreferenceProfile` in `src/lib/types.ts`
- See [Plan: AI travel agent redesign](../../plans/ai-travel-agent-redesign.md#ws2-user-preference-learning)
40
.memory/knowledge/overview.md
Normal file
@@ -0,0 +1,40 @@
# Architecture Overview

## API Proxy Pattern

The frontend never calls Django directly. All API calls go through `src/routes/api/[...path]/+server.ts` → Django at `http://server:8000`. The frontend uses relative URLs like `/api/locations/`.

## AI Chat (Collections → Recommendations)

- AI travel chat is embedded in **Collections → Recommendations** via `AITravelChat.svelte` (no standalone `/chat` route).
- The provider selector loads dynamically from `GET /api/chat/providers/` (backed by `litellm.provider_list` + `CHAT_PROVIDER_CONFIG` in `backend/server/chat/llm_client.py`).
- Configured providers: OpenAI, Anthropic, Gemini, Ollama, Groq, Mistral, GitHub Models, OpenRouter, OpenCode Zen (`opencode_zen`, `api_base=https://opencode.ai/zen/v1`, default model `openai/gpt-5-nano`).
- Chat conversations stream via SSE through `/api/chat/conversations/`.
- `ChatViewSet.send_message()` accepts optional context fields (`collection_id`, `collection_name`, `start_date`, `end_date`, `destination`) and appends a `## Trip Context` section to the system prompt when provided. When a `collection_id` is present, it also injects `Itinerary stops:` from `collection.locations` (up to 8 unique stops). See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#multi-stop-context-derivation).
- The chat composer supports a per-provider model override (persisted in the browser `localStorage` key `voyage_chat_model_prefs`). The DB-saved default provider/model (`UserAISettings`) is authoritative on initialization; localStorage is a write-only sync target. Backend `send_message` accepts an optional `model` param and falls back to DB defaults → instance defaults → `"openai"`.
- Invalid required-argument tool calls are detected and short-circuited: the stream terminates with a `tool_validation_error` SSE event plus `[DONE]`, and invalid tool results are not replayed into conversation history. See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#tool-call-error-handling-chat-loop-hardening).
- LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#sanitized-llm-error-mapping).
- Frontend type: `ChatProviderCatalogEntry` in `src/lib/types.ts`.
- Reference: [Plan: AI travel agent](../plans/ai-travel-agent-collections-integration.md), [Plan: AI travel agent redesign — WS4](../plans/ai-travel-agent-redesign.md#ws4-collection-level-chat-improvements)
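The `## Trip Context` enrichment described above can be sketched as a small prompt-building helper. The field labels and exact wording here are illustrative assumptions; only the field names and the section heading come from the source:

```python
def build_trip_context(payload):
    """Return the '## Trip Context' block for provided fields, or ''."""
    fields = [
        ("collection_name", "Collection"),
        ("destination", "Destination"),
        ("start_date", "Start date"),
        ("end_date", "End date"),
    ]
    # Keep only fields actually supplied in the request payload.
    lines = [f"{label}: {payload[key]}" for key, label in fields if payload.get(key)]
    if not lines:
        return ""
    return "## Trip Context\n" + "\n".join(lines)
```

The real `send_message()` appends this section to the system prompt and, when a `collection_id` is present, adds an `Itinerary stops:` line as well.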
## Services (Docker Compose)

| Service | Container | Port |
|---------|-----------|------|
| Frontend | `web` | :8015 |
| Backend | `server` | :8016 |
| Database | `db` | :5432 |
| Cache | `cache` | internal |

## Authentication

Session-based via `django-allauth`. CSRF tokens come from `/auth/csrf/` and are passed as the `X-CSRFToken` header. Mobile clients use the `X-Session-Token` header.
## Key File Locations

- Frontend source: `frontend/src/`
- Backend source: `backend/server/`
- Django apps: `adventures/`, `users/`, `worldtravel/`, `integrations/`, `achievements/`, `chat/`
- Chat LLM config: `backend/server/chat/llm_client.py` (`CHAT_PROVIDER_CONFIG`)
- AI Chat component: `frontend/src/lib/components/AITravelChat.svelte`
- Types: `frontend/src/lib/types.ts`
- API proxy: `frontend/src/routes/api/[...path]/+server.ts`
- i18n: `frontend/src/locales/`
- Docker config: `docker-compose.yml`, `docker-compose.dev.yml`
- CI/CD: `.github/workflows/`
- Public docs: `documentation/` (VitePress)
133
.memory/knowledge/patterns/chat-and-llm.md
Normal file
@@ -0,0 +1,133 @@
# Chat & LLM Patterns

## Default AI Settings & Model Override

### DB-backed defaults (authoritative)

- **Model**: `UserAISettings` (OneToOneField, `integrations/models.py`) stores `preferred_provider` and `preferred_model` per user.
- **Endpoint**: `GET/POST /api/integrations/ai-settings/` — upsert pattern (OneToOneField + `perform_create` update-or-create).
- **Settings UI**: `settings/+page.svelte` loads/saves the default provider and model. The provider dropdown is filtered to configured providers; the model dropdown is populated from `GET /api/chat/providers/{provider}/models/`.
- **Chat initialization**: `AITravelChat.svelte` `loadUserAISettings()` fetches saved defaults on mount and applies them as the authoritative initial provider/model. Direction is DB → localStorage (not the reverse).
- **Backend fallback precedence** in `send_message()`:
  1. Explicit request payload (`provider`, `model`)
  2. `UserAISettings.preferred_provider` / `preferred_model` (only when the provider matches)
  3. Instance defaults (`VOYAGE_AI_PROVIDER`, `VOYAGE_AI_MODEL`)
  4. `"openai"` hardcoded fallback
- **Cross-provider guard**: `preferred_model` is only applied when the resolved provider equals `preferred_provider` (prevents e.g. `gpt-5-nano` leaking to Anthropic).
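The precedence and cross-provider guard can be sketched in plain Python. One assumption beyond the source: the instance model is also only applied when the resolved provider matches the instance provider, mirroring the guard on saved settings:

```python
def resolve_provider_and_model(payload, saved, instance):
    """Resolve (provider, model) per the documented fallback precedence.

    payload  - explicit request fields, e.g. {"provider": ..., "model": ...}
    saved    - UserAISettings stand-in: {"preferred_provider", "preferred_model"}
    instance - instance defaults: {"provider", "model"}
    """
    # 1. payload -> 2. saved defaults -> 3. instance defaults -> 4. "openai"
    provider = (
        payload.get("provider")
        or saved.get("preferred_provider")
        or instance.get("provider")
        or "openai"
    )
    model = payload.get("model")
    if model is None and provider == saved.get("preferred_provider"):
        # Cross-provider guard: the saved model never leaks to another provider.
        model = saved.get("preferred_model")
    if model is None and provider == instance.get("provider"):
        # Assumed symmetric guard for the instance-level model.
        model = instance.get("model")
    return provider, model
```

So an explicit `provider: "anthropic"` in the payload yields no model from OpenAI-scoped saved or instance settings.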
### Per-session model override (browser-only)

- **Frontend**: model dropdown next to the provider selector, populated by `GET /api/chat/providers/{provider}/models/`.
- **Persistence**: `localStorage` key `voyage_chat_model_prefs` — written on selection, but never overrides DB defaults on initialization (DB wins).
- **Compatibility guard**: `_is_model_override_compatible()` validates the model prefix for standard providers; the check is skipped for `api_base` gateways (e.g. `opencode_zen`).
- **i18n keys**: `chat.model_label`, `chat.model_placeholder`, `default_ai_settings_title`, `default_ai_settings_desc`, `default_ai_save`, `default_ai_settings_saved`, `default_ai_settings_error`, `default_ai_provider_required`, `default_ai_no_providers`.
## Sanitized LLM Error Mapping

- `_safe_error_payload()` in `backend/server/chat/llm_client.py` maps LiteLLM exception classes to hardcoded user-safe strings with an `error_category` field.
- Exception classes mapped: `NotFoundError` -> "model not found", `AuthenticationError` -> "authentication", `RateLimitError` -> "rate limit", `BadRequestError` -> "bad request", `Timeout` -> "timeout", `APIConnectionError` -> "connection".
- Raw `exc.message`, `str(exc)`, and `exc.args` are **never** forwarded to the client. Server-side `logger.exception()` logs the full details.
- Uses `getattr(litellm.exceptions, "ClassName", tuple())` for resilient class lookup.
- Security guardrail from the critic gate: [decisions.md](../../decisions.md#critic-gate-opencode-zen-connection-error-fix).
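A minimal sketch of the mapping idea, matching on exception class names (the class names come from the list above; the exact user-facing strings and the name-based dispatch are illustrative — the real code resolves the classes via `getattr(litellm.exceptions, ...)` and uses `isinstance` checks):

```python
# Hardcoded user-safe strings keyed by LiteLLM exception class name.
SAFE_MESSAGES = {
    "NotFoundError": ("model_not_found", "The selected model was not found."),
    "AuthenticationError": ("authentication", "Authentication with the provider failed."),
    "RateLimitError": ("rate_limit", "The provider rate limit was hit. Try again shortly."),
    "BadRequestError": ("bad_request", "The provider rejected the request."),
    "Timeout": ("timeout", "The request to the provider timed out."),
    "APIConnectionError": ("connection", "Could not connect to the provider."),
}


def safe_error_payload(exc):
    # Never echo str(exc) / exc.args back to the client; log server-side instead.
    category, message = SAFE_MESSAGES.get(
        type(exc).__name__, ("unknown", "An unexpected error occurred.")
    )
    return {"error": message, "error_category": category}
```

The key property: the payload is built entirely from hardcoded strings, so raw provider error text can never leak.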
## Tool Call Error Handling (Chat Loop Hardening)

- **Required-arg detection**: `_is_required_param_tool_error()` matches tool results containing `"is required"` / `"are required"` patterns via regex. It detects errors like `"location is required"`, `"query is required"`, and `"collection_id, name, latitude, and longitude are required"`.
- **Short-circuit on invalid tool calls**: When a tool call returns a required-param error, `send_message()` yields an SSE error event with `error_category: "tool_validation_error"` and immediately terminates the stream with `[DONE]`. No further LLM turns are attempted.
- **Persistence skip**: Invalid tool call results (and the `tool_call` entry itself) are NOT persisted to the database, preventing replay into future conversation turns.
- **Historical cleanup**: `_build_llm_messages()` filters persisted tool-role messages containing required-param errors AND trims the corresponding assistant `tool_calls` array to only the IDs that have non-filtered tool messages. Empty `tool_calls` arrays are omitted entirely.
- **Multi-tool partial success**: When the model returns N tool calls and call K fails, calls 1..K-1 (the successful prefix) are persisted normally. Only the failed call and subsequent calls are dropped.
- **Tool iteration guard**: `MAX_TOOL_ITERATIONS = 10` with a correctly incremented counter prevents unbounded loops from other error classes (e.g. `"dates must be a non-empty list"` from `get_weather` does NOT match the required-arg regex but is bounded by the iteration limit).
- **Known gap**: the `get_weather` error `"dates must be a non-empty list"` does not trigger the short-circuit — mitigated by `MAX_TOOL_ITERATIONS`.
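The required-arg detector can be sketched as a single regex over the tool result text (the exact pattern in `_is_required_param_tool_error()` may differ; this sketch only covers the `"is required"` / `"are required"` cases the section names):

```python
import re

# Matches "<something> is required" / "<things> are required" in tool results.
_REQUIRED_PARAM_RE = re.compile(r"\b(?:is|are) required\b")


def is_required_param_tool_error(tool_result: str) -> bool:
    """True when a tool result reports a missing required argument."""
    return bool(_REQUIRED_PARAM_RE.search(tool_result))
```

Note how the known-gap example from above falls through: `"dates must be a non-empty list"` contains neither phrase, so only the iteration cap bounds that case.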
## OpenCode Zen Provider

- Provider ID: `opencode_zen`
- `api_base`: `https://opencode.ai/zen/v1`
- Default model: `openai/gpt-5-nano` (changed from `openai/gpt-4o-mini`, which was invalid on Zen)
- GPT models on Zen use the `/chat/completions` endpoint (OpenAI-compatible)
- The LiteLLM `openai/` prefix routes through the OpenAI client to the custom `api_base`
- The model dropdown exposes 5 curated options (reasoning models excluded). See [decisions.md](../../decisions.md#critic-gate-travel-agent-context--models-follow-up).
## Multi-Stop Context Derivation

Chat context derives from the **full collection itinerary**, not just the first location.

### Frontend - `deriveCollectionDestination()`

- Located in `frontend/src/routes/collections/[id]/+page.svelte`.
- Extracts unique city/country pairs from `collection.locations`.
- Capped at 4 stops, semicolon-joined, with a `+N more` overflow suffix.
- Passed to `AITravelChat` as the `destination` prop.

### Backend - `send_message()` itinerary enrichment

- `backend/server/chat/views/__init__.py` `send_message()` reads `collection.locations` and injects `Itinerary stops:` into the system prompt's `## Trip Context` section.
- Up to 8 unique stops; deduplication and blank-entry filtering applied.

### System prompt - trip-level reasoning

- `get_system_prompt()` includes guidance to treat collection chats as itinerary-wide and to call `get_trip_details` before `search_places`.
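The backend enrichment above (dedupe, blank filtering, 8-stop cap) can be sketched as (case-insensitive deduplication is an assumption; the real code reads richer `Location` objects, not bare strings):

```python
def derive_itinerary_stops(location_names, cap=8):
    """Build the 'Itinerary stops:' line from location names, or ''."""
    stops, seen = [], set()
    for name in location_names:
        cleaned = (name or "").strip()
        # Skip blanks and duplicates (compared case-insensitively).
        if not cleaned or cleaned.lower() in seen:
            continue
        seen.add(cleaned.lower())
        stops.append(cleaned)
        if len(stops) == cap:
            break
    return f"Itinerary stops: {'; '.join(stops)}" if stops else ""
```

The frontend's `deriveCollectionDestination()` follows the same shape with a cap of 4 and a `+N more` suffix.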
## Itinerary-Centric Quick Prompts

- Quick-action buttons use `promptTripContext` (reactive: `collectionName || destination || ''`) instead of raw `destination`.
- The guard changed from `{#if destination}` to `{#if promptTripContext}`.
- Prompt wording uses `across my ${promptTripContext} itinerary?`.
## search_places Tool Output Key Convention

- Backend `agent_tools.py` `search_places()` returns `{"location": ..., "category": ..., "results": [...]}`.
- The frontend must use the `.results` key (not `.places`).
- **Historical bug**: prior code used `.places`, causing place cards to never render. Fixed 2026-03-09.
## Agent Tools Architecture

### Registered Tools

| Tool name | Purpose | Required params |
|---|---|---|
| `search_places` | Nominatim geocode -> Overpass PoI search | `location` |
| `web_search` | DuckDuckGo web search for current travel info | `query` |
| `list_trips` | List user's collections | (none) |
| `get_trip_details` | Full collection detail with itinerary | `collection_id` |
| `add_to_itinerary` | Create Location + CollectionItineraryItem | `collection_id`, `name`, `lat`, `lon` |
| `get_weather` | Open-Meteo archive + forecast | `latitude`, `longitude`, `dates` |

### Registry pattern

- The `@agent_tool(name, description, parameters)` decorator registers function references and generates OpenAI/LiteLLM-compatible tool schemas.
- `execute_tool(tool_name, user, **kwargs)` resolves from the registry and filters kwargs via `inspect.signature(...)`.
- Extensibility: adding a new tool only requires defining a decorated function.
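A minimal sketch of the registry pattern, assuming explicit tool parameters so the `inspect.signature` filtering is visible (the real `agent_tools.py` decorator, schemas, and tool bodies are richer than this):

```python
import inspect

TOOL_REGISTRY = {}


def agent_tool(name, description, parameters):
    """Register a tool function and its OpenAI/LiteLLM-compatible schema."""
    def decorator(func):
        TOOL_REGISTRY[name] = {
            "func": func,
            "schema": {
                "type": "function",
                "function": {
                    "name": name,
                    "description": description,
                    "parameters": parameters,
                },
            },
        }
        return func
    return decorator


def execute_tool(tool_name, user, **kwargs):
    entry = TOOL_REGISTRY.get(tool_name)
    if entry is None:
        return {"error": f"unknown tool: {tool_name}"}
    # Filter kwargs to parameters the tool actually declares.
    accepted = set(inspect.signature(entry["func"]).parameters)
    return entry["func"](user, **{k: v for k, v in kwargs.items() if k in accepted})


@agent_tool(
    "search_places",
    "Nominatim geocode -> Overpass PoI search",
    {"type": "object", "properties": {"location": {"type": "string"}},
     "required": ["location"]},
)
def search_places(user, location=None, category=None):
    # Per the convention below: return {"error": ...} on failure, never raise.
    if not location:
        return {"error": "location is required"}
    return {"location": location, "category": category, "results": []}
```

Adding another tool is just another decorated function; the LLM tool schemas are collected from `TOOL_REGISTRY[...]["schema"]`.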
### Function signature convention

All tool functions: `def tool_name(user, **kwargs) -> dict`. Return `{"error": "..."}` on failure; never raise.

### Web Search Tool

- Uses `duckduckgo_search.DDGS().text(..., max_results=5)`.
- Error handling includes an import fallback, a rate-limit guard, and generic failure logging.
- Dependency: `duckduckgo-search>=4.0.0` in `backend/server/requirements.txt`.
## Backend Chat Endpoint Architecture

### URL Routing

- `backend/server/main/urls.py`: `path("api/chat/", include("chat.urls"))`
- `backend/server/chat/urls.py`: a DRF `DefaultRouter` registers `conversations/` -> `ChatViewSet` and `providers/` -> `ChatProviderCatalogViewSet`
- Manual paths: `POST /api/chat/suggestions/day/` -> `DaySuggestionsView`, `GET /api/chat/capabilities/` -> `CapabilitiesView`

### ChatViewSet Pattern

- All actions: `permission_classes = [IsAuthenticated]`
- Streaming responses use `StreamingHttpResponse(content_type="text/event-stream")`
- SSE chunk format: `data: {json}\n\n`; terminal `data: [DONE]\n\n`
- Tool loop: up to `MAX_TOOL_ITERATIONS = 10` rounds
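The SSE framing above can be sketched as a plain generator; in Django the view would wrap it in `StreamingHttpResponse(sse_stream(...), content_type="text/event-stream")`:

```python
import json


def sse_stream(chunks):
    """Yield SSE frames: `data: {json}\\n\\n` per chunk, then `data: [DONE]\\n\\n`."""
    for chunk in chunks:
        yield f"data: {json.dumps(chunk)}\n\n"
    yield "data: [DONE]\n\n"
```

The `[DONE]` sentinel is what the frontend watches for to close the stream, including in the `tool_validation_error` short-circuit path.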
### Day Suggestions Endpoint

- `POST /api/chat/suggestions/day/` via `chat/views/day_suggestions.py`
- Non-streaming JSON response
- Inputs: `collection_id`, `date`, `category`, `filters`, `location_context`
- Provider/model resolution via `_resolve_provider_and_model()`: request payload → `UserAISettings` defaults → instance defaults (`VOYAGE_AI_PROVIDER`/`VOYAGE_AI_MODEL`) → provider config default. No hardcoded OpenAI fallback.
- Cross-provider model guard: `preferred_model` is only applied when the provider matches `preferred_provider`.
- LLM call via `litellm.completion` with a regex JSON-extraction fallback
- Suggestion normalization: the frontend's `normalizeSuggestionItem()` handles LLM response variants (title/place_name/venue, summary/details, address/neighborhood). Items without a resolvable name are dropped.
- Add-to-itinerary: `buildLocationPayload()` constructs a `LocationSerializer`-compatible payload (name/location/description/rating/collections/is_public) from the normalized suggestion.
### Capabilities Endpoint

- `GET /api/chat/capabilities/` returns `{ "tools": [{ "name", "description" }, ...] }` from the registry

## WS4-F4 Chat UI Rendering

- Travel-themed header (icon: airplane; title: `Travel Assistant` with an optional collection name suffix)
- The `ChatMessage` type supports `tool_results?: Array<{ name, result }>` for inline tool output
- SSE handling appends to the current assistant message's `tool_results` array
- Renderer: `search_places` -> place cards, `web_search` -> linked cards, fallback -> JSON `<pre>`

## WS4-F3 Add-to-itinerary from Chat

- `search_places` card results can be added directly to the itinerary when collection context exists
- Flow: date selector modal -> `POST /api/locations/` -> `POST /api/itineraries/` -> `itemAdded` event
- Coordinate guard (`hasPlaceCoordinates`) required
65
.memory/knowledge/tech-stack.md
Normal file
@@ -0,0 +1,65 @@
# Tech Stack & Development

## Stack

- **Frontend**: SvelteKit 2, TypeScript, Bun (package manager), DaisyUI + Tailwind CSS, svelte-i18n, svelte-maplibre
- **Backend**: Django REST Framework, Python, django-allauth, djmoney, django-geojson, LiteLLM, duckduckgo-search
- **Database**: PostgreSQL + PostGIS
- **Cache**: Memcached
- **Infrastructure**: Docker, Docker Compose
- **Repo**: github.com/Alex-Wiesner/voyage
- **License**: GNU GPL v3.0
## Development Commands

### Frontend (prefer Bun)

- `cd frontend && bun run format` — fix formatting (6s)
- `cd frontend && bun run lint` — check formatting (6s)
- `cd frontend && bun run check` — Svelte type checking (12s; 0 errors, 6 warnings expected)
- `cd frontend && bun run build` — build (32s)
- `cd frontend && bun install` — install deps (45s)

### Backend (Docker required; uv for local Python tooling)

- `docker compose exec server python3 manage.py test` — run tests (7s; 6/30 pre-existing failures expected)
- `docker compose exec server python3 manage.py migrate` — run migrations

### Pre-Commit Checklist

1. `cd frontend && bun run format`
2. `cd frontend && bun run lint`
3. `cd frontend && bun run check`
4. `cd frontend && bun run build`
## Environment & Configuration

### .env Loading

- **Library**: `python-dotenv==1.2.2` (in `backend/server/requirements.txt`)
- **Entry point**: `backend/server/main/settings.py` calls `load_dotenv()` at the top of the module
- **Docker**: `docker-compose.yml` sets `env_file: .env` on all services — a single root `.env` file is shared
- **Root `.env`**: `/home/alex/projects/voyage/.env` — canonical for Docker Compose setups

### Settings File

- **Single file**: `backend/server/main/settings.py` (no split or environment-specific settings files)

### Server-side Env Vars (from `settings.py`)

| Var | Default | Purpose |
|---|---|---|
| `SECRET_KEY` | (required) | Django secret key |
| `GOOGLE_MAPS_API_KEY` | `""` | Google Maps integration |
| `STRAVA_CLIENT_ID` / `STRAVA_CLIENT_SECRET` | `""` | Strava OAuth |
| `FIELD_ENCRYPTION_KEY` | `""` | Fernet key for `UserAPIKey` encryption |
| `OSRM_BASE_URL` | `"https://router.project-osrm.org"` | Routing service |
| `VOYAGE_AI_PROVIDER` | `"openai"` | Instance-level default AI provider |
| `VOYAGE_AI_MODEL` | `"gpt-4o-mini"` | Instance-level default AI model |
| `VOYAGE_AI_API_KEY` | `""` | Instance-level AI API key |
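The AI-related rows of the table above translate to the usual `os.environ.get` pattern after `load_dotenv()` runs; pulled into a helper here so the defaults are easy to see and test (the real `settings.py` assigns these as module-level names):

```python
import os


def read_ai_env(env=os.environ):
    """Read the AI-related settings with their documented defaults."""
    return {
        "VOYAGE_AI_PROVIDER": env.get("VOYAGE_AI_PROVIDER", "openai"),
        "VOYAGE_AI_MODEL": env.get("VOYAGE_AI_MODEL", "gpt-4o-mini"),
        "VOYAGE_AI_API_KEY": env.get("VOYAGE_AI_API_KEY", ""),
        "FIELD_ENCRYPTION_KEY": env.get("FIELD_ENCRYPTION_KEY", ""),
    }
```

An empty `FIELD_ENCRYPTION_KEY` is the environment-dependent gap noted in the AI-configuration doc: without it, per-user key storage is disabled.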
### Per-User LLM API Key Pattern

LLM provider keys are stored per-user in the DB (`UserAPIKey` model, `integrations/models.py`):

- `UserAPIKey` table: `(user, provider)` unique pair → `encrypted_api_key` (Fernet-encrypted text field)
- The `FIELD_ENCRYPTION_KEY` env var is required for encrypt/decrypt
- `llm_client.get_llm_api_key(user, provider)` → user key → instance key fallback (matching provider only) → `None`
- No global server-side LLM API keys — every user must configure their own per-provider key via the Settings UI (or the instance admin configures the fallback)
## Known Issues

- The Docker dev setup has frontend-backend communication issues (500 errors beyond the homepage)
- Frontend check: 0 errors, 6 warnings expected (pre-existing in `CollectionRecommendationView.svelte` + `RegionCard.svelte`)
- Backend tests: 6/30 pre-existing failures (2 user email key errors + 4 geocoding API mocks)
- Local Python pip install fails (network timeouts) — use Docker