diff --git a/.cursorrules b/.cursorrules
new file mode 100644
index 00000000..7901b148
--- /dev/null
+++ b/.cursorrules
@@ -0,0 +1,71 @@
+# Voyage Cursor Rules (local-only, gitignored)
+
+## Project Summary
+- Voyage is a self-hosted travel companion app (AdventureLog fork).
+- Stack: SvelteKit 2 + TypeScript frontend, Django REST Framework backend, PostgreSQL/PostGIS, Memcached, Docker, Bun.
+
+## Pre-Release Policy
+Voyage is **pre-release** — not yet in production use. During pre-release:
+- Architecture-level changes are allowed, including replacing core libraries (e.g. LiteLLM).
+- Prioritize correctness, simplicity, and maintainability over backward compatibility.
+- Before launch, this policy must be revisited and tightened for production stability.
+
+## Architecture Essentials
+- Frontend never calls Django directly.
+- Route all API requests through `frontend/src/routes/api/[...path]/+server.ts` (proxy to `http://server:8000`).
+- Services: `web:8015`, `server:8016`, `db:5432`, `cache` internal.
+- Auth: session-based (`django-allauth`), CSRF from `/auth/csrf/`, send `X-CSRFToken` for mutating requests.
+
+## Key Locations
+- Frontend: `frontend/src/`
+- Backend: `backend/server/`
+- Django apps: `adventures/`, `users/`, `worldtravel/`, `integrations/`, `achievements/`, `chat/`
+- Types: `frontend/src/lib/types.ts`
+- i18n: `frontend/src/locales/`
+
+## Commands
+- Frontend:
+  - `cd frontend && bun run format`
+  - `cd frontend && bun run lint`
+  - `cd frontend && bun run check`
+  - `cd frontend && bun run build`
+  - `cd frontend && bun install`
+- Backend:
+  - `docker compose exec server python3 manage.py test`
+  - `docker compose exec server python3 manage.py migrate`
+  - Use `uv` for local Python tooling when applicable
+- Docker:
+  - `docker compose up -d`
+  - `docker compose down`
+
+## Pre-Commit
+Run in order: format → lint → check → build.
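Editorial aside (not part of the patch): the CSRF convention above — token from `/auth/csrf/`, `X-CSRFToken` header on mutating requests — can be sketched in a few lines. `build_headers` and `MUTATING_METHODS` are hypothetical illustration names, not Voyage code.

```python
# Illustrative sketch (not Voyage code): session auth requires a CSRF token,
# fetched once from /auth/csrf/, to be sent as X-CSRFToken on every mutating
# request. build_headers and MUTATING_METHODS are hypothetical names.

MUTATING_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def build_headers(method: str, csrf_token: str) -> dict:
    """Build request headers, attaching the CSRF header only when mutating."""
    headers = {"Content-Type": "application/json"}
    if method.upper() in MUTATING_METHODS:
        headers["X-CSRFToken"] = csrf_token
    return headers

# Safe methods carry no CSRF header; mutating methods do.
print(build_headers("GET", "tok"))
print(build_headers("POST", "tok"))
```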
+
+## Known Issues
+- `bun run check`: 0 errors + 6 warnings expected (pre-existing in `CollectionRecommendationView.svelte` + `RegionCard.svelte`).
+- Backend tests: 6/39 pre-existing failures expected (9 new chat tests all pass).
+- Docker dev setup may show frontend-backend 500 errors beyond homepage.
+
+## Conventions
+- Use `$t('key')` for user-facing strings.
+- Use DaisyUI semantic colors/classes (`bg-primary`, `text-base-content`).
+- Chat providers: dynamic catalog from `GET /api/chat/providers/`; configured in `CHAT_PROVIDER_CONFIG`.
+- Chat model override: dropdown selector fed by `GET /api/chat/providers/{provider}/models/`; per-provider persistence via `localStorage` key `voyage_chat_model_prefs`; backend `send_message` accepts optional `model`.
+- Chat context: collection chats inject collection UUID + multi-stop itinerary context; system prompt guides `get_trip_details`-first reasoning and confirms only before first `add_to_itinerary`.
+- Chat tool output: `role=tool` messages hidden from display; tool outputs render as concise summaries; persisted tool rows reconstructed on reload via `rebuildConversationMessages()`.
+- Chat errors: `_safe_error_payload()` maps LiteLLM exceptions to sanitized user-safe categories (never raw `exc.message`).
+- Invalid tool calls (missing required args) are detected and short-circuited with a user-visible error — not replayed into history.
+- Chat agent tools (`get_trip_details`, `add_to_itinerary`) respect collection sharing — both owners and `shared_with` members can use them; `list_trips` remains owner-only.
+- Do **not** attempt to fix known test/config issues during feature work.
+- Commit and merge completed feature branches promptly once validation passes (avoid leaving finished work unmerged).
+
+## .memory Files
+- At the start of any task, read `.memory/manifest.yaml` to discover available files, then read `system.md` and relevant `knowledge/` files for project context.
+- Read `.memory/decisions.md` for architectural decisions and review verdicts.
+- Check `.memory/plans/` and `.memory/research/` for prior work on related topics.
+- These files capture decisions, review verdicts, security findings, and plans from prior sessions.
+- Do **not** duplicate this info into code comments — `.memory/` is the source of truth for project history.
+
+## Instruction File Sync
+- `AGENTS.md`, `CLAUDE.md`, `.cursorrules`, and the Copilot CLI custom instructions must always be kept in sync.
+- Whenever any one is updated, apply the equivalent change to all the others.
diff --git a/.github/copilot-instructions.md b/.github/copilot-instructions.md
index 9aada32e..03124984 100644
--- a/.github/copilot-instructions.md
+++ b/.github/copilot-instructions.md
@@ -16,7 +16,7 @@ Voyage is **pre-release** — not yet in production use. During pre-release:
 
 **Key architectural pattern — API Proxy**: The frontend never calls the Django backend directly. All API calls go to `src/routes/api/[...path]/+server.ts`, which proxies requests to the Django server (`http://server:8000`), injecting CSRF tokens and managing session cookies. This means frontend fetches use relative URLs like `/api/locations/`.
 
-**AI Chat**: The AI travel chat assistant is embedded in Collections → Recommendations (component: `AITravelChat.svelte`). There is no standalone `/chat` route. Chat providers are loaded dynamically from `GET /api/chat/providers/` (backed by LiteLLM runtime list + custom entries like `opencode_zen`). Chat conversations stream via SSE through `/api/chat/conversations/`. Provider config lives in `backend/server/chat/llm_client.py` (`CHAT_PROVIDER_CONFIG`). Default AI provider/model saved via `UserAISettings` in DB (authoritative over browser localStorage). Chat composer supports per-provider model override via dropdown selector fed by `GET /api/chat/providers/{provider}/models/` (persisted in browser `localStorage` key `voyage_chat_model_prefs`). Collection chats inject collection UUID + multi-stop itinerary context; system prompt guides `get_trip_details`-first reasoning and confirms only before first `add_to_itinerary`. LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). Invalid tool calls (missing required args) are detected and short-circuited with a user-visible error — not replayed into history. Tool outputs render as concise summaries (not raw JSON); `role=tool` messages are hidden from display and reconstructed on reload via `rebuildConversationMessages()`.
+**AI Chat**: The AI travel chat assistant is embedded in Collections → Recommendations (component: `AITravelChat.svelte`). There is no standalone `/chat` route. Chat providers are loaded dynamically from `GET /api/chat/providers/` (backed by LiteLLM runtime list + custom entries like `opencode_zen`). Chat conversations stream via SSE through `/api/chat/conversations/`. Provider config lives in `backend/server/chat/llm_client.py` (`CHAT_PROVIDER_CONFIG`). Default AI provider/model saved via `UserAISettings` in DB (authoritative over browser localStorage). Chat composer supports per-provider model override via dropdown selector fed by `GET /api/chat/providers/{provider}/models/` (persisted in browser `localStorage` key `voyage_chat_model_prefs`). Collection chats inject collection UUID + multi-stop itinerary context; system prompt guides `get_trip_details`-first reasoning and confirms only before first `add_to_itinerary`. LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). Invalid tool calls (missing required args) are detected and short-circuited with a user-visible error — not replayed into history. Chat agent tools (`get_trip_details`, `add_to_itinerary`) respect collection sharing — both owners and `shared_with` members can use them; `list_trips` remains owner-only. Tool outputs render as concise summaries (not raw JSON); `role=tool` messages are hidden from display and reconstructed on reload via `rebuildConversationMessages()`.
 
 **Services** (docker-compose):
 - `web` → SvelteKit frontend at `:8015`
@@ -75,7 +75,7 @@ Run these commands in order:
 **Backend (Django with Python — prefer uv for local tooling):**
 
 - Backend development requires Docker - local Python pip install fails due to network timeouts
-- `docker compose exec server python3 manage.py test` - **7 seconds** - Run tests (6/30 pre-existing failures expected)
+- `docker compose exec server python3 manage.py test` - **7 seconds** - Run tests (6/39 pre-existing failures expected; 9 chat tests all pass)
 - `docker compose exec server python3 manage.py help` - View Django commands
 - `docker compose exec server python3 manage.py migrate` - Run database migrations
 - Use `uv` for local Python dependency/tooling commands when applicable
@@ -120,7 +120,7 @@ Run these commands in order:
 ### Expected Test Failures
 
 - Frontend check: 0 errors and 6 warnings expected (pre-existing in `CollectionRecommendationView.svelte` + `RegionCard.svelte`)
-- Backend tests: 6 out of 30 Django tests fail (pre-existing: 2 user email key errors + 4 geocoding API mocks) - **DO NOT fix unrelated test failures**
+- Backend tests: 6 out of 39 Django tests fail (pre-existing: 2 user email key errors + 4 geocoding API mocks; 9 new chat tests all pass) - **DO NOT fix unrelated test failures**
 
 ### Build Timing (NEVER CANCEL)
 - **Docker first startup**: 25+ minutes (image downloads)
diff --git a/.memory/knowledge/conventions.md b/.memory/knowledge/conventions.md
index 605aa9b6..a548599e 100644
--- a/.memory/knowledge/conventions.md
+++ b/.memory/knowledge/conventions.md
@@ -9,6 +9,7 @@
 
 ## Backend Patterns
 - **Views**: DRF `ModelViewSet` subclasses; `get_queryset()` filters by `user=self.request.user`
+- **Shared-access queries**: Use `(Q(user=user) | Q(shared_with=user)).distinct()` for collection lookups that should include shared members (e.g. chat agent tools). Always `.distinct()` to avoid `MultipleObjectsReturned` when owner is also in `shared_with`.
 - **Money**: `djmoney` MoneyField
 - **Geo**: PostGIS via `django-geojson`
 - **Chat providers**: Dynamic catalog from `GET /api/chat/providers/`; configured in `CHAT_PROVIDER_CONFIG`
diff --git a/.memory/knowledge/domain/collections-and-sharing.md b/.memory/knowledge/domain/collections-and-sharing.md
index 11d6ece2..f0e25589 100644
--- a/.memory/knowledge/domain/collections-and-sharing.md
+++ b/.memory/knowledge/domain/collections-and-sharing.md
@@ -26,6 +26,12 @@
 - On unshare/leave, departing user's locations are removed from collection (not deleted)
 - `duplicate` action creates a private copy with no `shared_with` transfer
 
+### Chat Agent Tool Access
+- `get_trip_details` and `add_to_itinerary` tools authorize using `Q(user=user) | Q(shared_with=user)` — shared members can use AI chat tools on shared collections.
+- `list_trips` remains owner-only (shared collections not listed).
+- `add_to_itinerary` assigns `Location.user = shared_user` (shared users own their contributed locations), consistent with REST API behavior.
+- See [patterns/chat-and-llm.md](../patterns/chat-and-llm.md#shared-trip-tool-access).
+
 ## Itinerary Architecture
 
 ### Primary Component
diff --git a/.memory/knowledge/overview.md b/.memory/knowledge/overview.md
index 51cb9053..29d4fd6a 100644
--- a/.memory/knowledge/overview.md
+++ b/.memory/knowledge/overview.md
@@ -10,10 +10,11 @@ Frontend never calls Django directly. All API calls go through `src/routes/api/[
 - Chat conversations stream via SSE through `/api/chat/conversations/`.
 - `ChatViewSet.send_message()` accepts optional context fields (`collection_id`, `collection_name`, `start_date`, `end_date`, `destination`) and appends a `## Trip Context` section to the system prompt when provided. When a `collection_id` is present, also injects `Itinerary stops:` from `collection.locations` (up to 8 unique stops) and the collection UUID with explicit `get_trip_details`/`add_to_itinerary` grounding. See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#trip-context-uuid-grounding) and [patterns/chat-and-llm.md](patterns/chat-and-llm.md#multi-stop-context-derivation).
 - Chat composer supports per-provider model override (persisted in browser `localStorage` key `voyage_chat_model_prefs`). DB-saved default provider/model (`UserAISettings`) is authoritative on initialization; localStorage is write-only sync target. Backend `send_message` accepts optional `model` param; falls back to DB defaults → instance defaults → `"openai"`.
+- Chat agent tools (`get_trip_details`, `add_to_itinerary`) authorize using `Q(user=user) | Q(shared_with=user)` — both owners and shared members can use them. `list_trips` remains owner-only. See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#shared-trip-tool-access).
 - Invalid required-argument tool calls are detected and short-circuited: stream terminates with `tool_validation_error` SSE event + `[DONE]` and invalid tool results are not replayed into conversation history. See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#tool-call-error-handling-chat-loop-hardening).
 - LiteLLM errors mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#sanitized-llm-error-mapping).
 - Tool outputs display as concise summaries (not raw JSON) via `getToolSummary()`. Persisted `role=tool` messages are hidden from display; on conversation reload, `rebuildConversationMessages()` reconstructs `tool_results` on assistant messages. See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#tool-output-rendering).
-- Embedded chat uses compact header (provider/model selectors in settings dropdown), bounded height, sidebar-closed-by-default, and visible streaming indicator. See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#embedded-chat-ux).
+- Embedded chat uses compact header (provider/model selectors in settings dropdown with outside-click/Escape close), bounded height, sidebar-closed-by-default, visible streaming indicator, and i18n aria-labels. See [patterns/chat-and-llm.md](patterns/chat-and-llm.md#embedded-chat-ux).
 - Frontend type: `ChatProviderCatalogEntry` in `src/lib/types.ts`.
 - Reference: [Plan: AI travel agent](../plans/ai-travel-agent-collections-integration.md), [Plan: AI travel agent redesign — WS4](../plans/ai-travel-agent-redesign.md#ws4-collection-level-chat-improvements)
 
diff --git a/.memory/knowledge/patterns/chat-and-llm.md b/.memory/knowledge/patterns/chat-and-llm.md
index 4f7859dc..0630a0b8 100644
--- a/.memory/knowledge/patterns/chat-and-llm.md
+++ b/.memory/knowledge/patterns/chat-and-llm.md
@@ -41,7 +41,12 @@
 - UUID injection only occurs when collection lookup succeeds AND user is owner or `shared_with` member (authorization gate).
 - System prompt includes two-phase confirmation guidance: confirm only before the **first** `add_to_itinerary` action; after explicit user approval phrases ("yes", "go ahead", "add them"), proceed directly without re-confirming.
 - `get_trip_details` DoesNotExist returns `"collection_id is required and must reference a trip you can access"` (does NOT match short-circuit regex due to `fullmatch` — correct, this is an invalid-value error, not missing-param).
-- Known pre-existing: `get_trip_details` filters `user=user` only — shared-collection members get UUID context but tool returns DoesNotExist. Low severity.
+
+### Shared-Trip Tool Access
+- `get_trip_details` and `add_to_itinerary` authorize collections using `Q(user=user) | Q(shared_with=user)` with `.distinct()` — both owners and shared members can access.
+- `list_trips` remains owner-only by design.
+- `.distinct()` prevents `MultipleObjectsReturned` when the owner is also present in `shared_with`.
+- Non-members receive `DoesNotExist` errors through existing error paths.
 
 ## Tool Output Rendering
 - Frontend `AITravelChat.svelte` hides raw `role=tool` messages via `visibleMessages` filter (`messages.filter(msg => msg.role !== 'tool')`).
@@ -62,7 +67,8 @@
 - Sidebar defaults to closed in embedded mode (`let sidebarOpen = !embedded;`); `lg:flex` ensures always-visible on desktop.
 - Quick-action chips use `btn-xs` + `overflow-x-auto` for compact embedded fit.
 - Streaming indicator visible inside last assistant bubble throughout entire generation (conditioned on `isStreaming && msg.id === lastVisibleMessageId`).
-- Known low-priority: `aria-label` values on sidebar toggle and settings button are hardcoded English (should use `$t()`). Settings dropdown does not auto-close on outside click.
+- Aria-label values on sidebar toggle and settings button use i18n keys (`chat_a11y.show_conversations_aria`, `chat_a11y.hide_conversations_aria`, `chat_a11y.ai_settings_aria`); key parity across all 20 locale files.
+- Settings dropdown closes on outside click (`pointerdown`/`mousedown`/`touchstart` listeners) and `Escape` keypress, with mount-time listener cleanup.
 
 ## OpenCode Zen Provider
 - Provider ID: `opencode_zen`
diff --git a/AGENTS.md b/AGENTS.md
index 43f37362..95b040e2 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -13,7 +13,7 @@ Voyage is **pre-release** — not yet in production use. During pre-release:
 ## Architecture Overview
 
 - **API proxy pattern**: Frontend never calls Django directly. All API calls go through `frontend/src/routes/api/[...path]/+server.ts`, which proxies to `http://server:8000`, handles cookies, and injects CSRF behavior.
-- **AI chat**: Embedded in Collections → Recommendations via `AITravelChat.svelte` component. No standalone `/chat` route. Provider list is dynamic from backend `GET /api/chat/providers/` (sourced from LiteLLM runtime + custom entries like `opencode_zen`). Chat conversations use SSE streaming via `/api/chat/conversations/`. Default AI provider/model saved via `UserAISettings` in DB (authoritative over browser localStorage). LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). Invalid tool calls (missing required args) are detected and short-circuited with a user-visible error — not replayed into history.
+- **AI chat**: Embedded in Collections → Recommendations via `AITravelChat.svelte` component. No standalone `/chat` route. Provider list is dynamic from backend `GET /api/chat/providers/` (sourced from LiteLLM runtime + custom entries like `opencode_zen`). Chat conversations use SSE streaming via `/api/chat/conversations/`. Default AI provider/model saved via `UserAISettings` in DB (authoritative over browser localStorage). LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). Invalid tool calls (missing required args) are detected and short-circuited with a user-visible error — not replayed into history. Chat agent tools (`get_trip_details`, `add_to_itinerary`) respect collection sharing — both owners and `shared_with` members can use them; `list_trips` remains owner-only.
 - **Service ports**:
   - `web` → `:8015`
   - `server` → `:8016`
@@ -57,7 +57,7 @@ Run in this order:
 ## Known Issues (Expected)
 
 - Frontend `bun run check`: **0 errors + 6 warnings** expected (pre-existing in `CollectionRecommendationView.svelte` + `RegionCard.svelte`)
-- Backend tests: **6/30 fail** (pre-existing: 2 user email key errors + 4 geocoding API mocks)
+- Backend tests: **6/39 fail** (pre-existing: 2 user email key errors + 4 geocoding API mocks; 9 new chat tests all pass)
 - Docker dev setup has frontend-backend communication issues (500 errors beyond homepage)
 
 ## Key Patterns
diff --git a/CLAUDE.md b/CLAUDE.md
index efc60a4d..5f0e4d42 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -15,7 +15,7 @@ Voyage is **pre-release** — not yet in production use. During pre-release:
 - Use the API proxy pattern: never call Django directly from frontend components.
 - Route all frontend API calls through `frontend/src/routes/api/[...path]/+server.ts`.
 - Proxy target is `http://server:8000`; preserve session cookies and CSRF behavior.
-- AI chat is embedded in Collections → Recommendations via `AITravelChat.svelte`. There is no standalone `/chat` route. Chat providers are loaded dynamically from `GET /api/chat/providers/` (backed by LiteLLM runtime providers + custom entries like `opencode_zen`). Chat conversations stream via SSE through `/api/chat/conversations/`. Default AI provider/model saved via `UserAISettings` in DB (authoritative over browser localStorage). LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). Invalid tool calls (missing required args) are detected and short-circuited with a user-visible error — not replayed into history.
+- AI chat is embedded in Collections → Recommendations via `AITravelChat.svelte`. There is no standalone `/chat` route. Chat providers are loaded dynamically from `GET /api/chat/providers/` (backed by LiteLLM runtime providers + custom entries like `opencode_zen`). Chat conversations stream via SSE through `/api/chat/conversations/`. Default AI provider/model saved via `UserAISettings` in DB (authoritative over browser localStorage). LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text). Invalid tool calls (missing required args) are detected and short-circuited with a user-visible error — not replayed into history. Chat agent tools (`get_trip_details`, `add_to_itinerary`) respect collection sharing — both owners and `shared_with` members can use them; `list_trips` remains owner-only.
 - Service ports:
   - `web` → `:8015`
   - `server` → `:8016`
@@ -65,7 +65,7 @@ Run in this exact order:
 ## Known Issues (Expected)
 
 - Frontend `bun run check`: **0 errors + 6 warnings** expected (pre-existing in `CollectionRecommendationView.svelte` + `RegionCard.svelte`)
-- Backend tests: **6/30 fail** (pre-existing: 2 user email key errors + 4 geocoding API mocks)
+- Backend tests: **6/39 fail** (pre-existing: 2 user email key errors + 4 geocoding API mocks; 9 new chat tests all pass)
 - Docker dev setup has frontend-backend communication issues (500 errors beyond homepage)
 
 ## Key Patterns
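Editorial aside on the shared-access pattern that recurs throughout this diff: the reason `.distinct()` is required can be seen in a plain-Python simulation. The real code uses Django's `Q` objects against PostgreSQL; the dict-based model and `shared_access_lookup` helper below are purely illustrative, not Voyage code.

```python
# Plain-Python simulation of why the shared-access lookup needs .distinct().
# The actual pattern is Django ORM:
#     Collection.objects.filter(Q(user=user) | Q(shared_with=user)).distinct()
# An OR across the shared_with many-to-many join can yield the same row more
# than once when the owner also appears in shared_with; .distinct() collapses
# the duplicates, avoiding MultipleObjectsReturned on .get()-style lookups.

def shared_access_lookup(collections, user):
    """Return collections the user owns or is shared on, deduplicated by id."""
    matches = []
    for c in collections:
        if c["user"] == user:          # Q(user=user)
            matches.append(c)
        if user in c["shared_with"]:   # Q(shared_with=user), one row per join hit
            matches.append(c)
    seen, unique = set(), []           # .distinct() equivalent
    for c in matches:
        if c["id"] not in seen:
            seen.add(c["id"])
            unique.append(c)
    return unique

trip = {"id": "abc-123", "user": "alice", "shared_with": ["alice", "bob"]}
print(len(shared_access_lookup([trip], "alice")))  # 1 — owner matched twice, deduped
print(len(shared_access_lookup([trip], "bob")))    # 1 — shared member
print(len(shared_access_lookup([trip], "carol")))  # 0 — non-member
```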