diff --git a/AGENTS.md b/AGENTS.md
index 8ca20a32..c3931855 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -7,7 +7,7 @@
 ## Architecture Overview
 - **API proxy pattern**: Frontend never calls Django directly. All API calls go through `frontend/src/routes/api/[...path]/+server.ts`, which proxies to `http://server:8000`, handles cookies, and injects CSRF behavior.
-- **AI chat**: Embedded in Collections → Recommendations via `AITravelChat.svelte` component. No standalone `/chat` route. Provider list is dynamic from backend `GET /api/chat/providers/` (sourced from LiteLLM runtime + custom entries like `opencode_zen`). Chat conversations use SSE streaming via `/api/chat/conversations/`.
+- **AI chat**: Embedded in Collections → Recommendations via `AITravelChat.svelte` component. No standalone `/chat` route. Provider list is dynamic from backend `GET /api/chat/providers/` (sourced from LiteLLM runtime + custom entries like `opencode_zen`). Chat conversations use SSE streaming via `/api/chat/conversations/`. Chat composer supports per-provider model override (persisted in browser `localStorage`). LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text).
 - **Service ports**:
   - `web` → `:8015`
   - `server` → `:8016`
@@ -50,8 +50,8 @@ Run in this order:
 4. `cd frontend && bun run build`
 
 ## Known Issues (Expected)
-- Frontend `bun run check`: **3 type errors + 19 warnings** expected
-- Backend tests: **2/3 fail** (expected)
+- Frontend `bun run check`: **0 errors + 6 warnings** expected (pre-existing in `CollectionRecommendationView.svelte` + `RegionCard.svelte`)
+- Backend tests: **6/30 fail** (pre-existing: 2 user email key errors + 4 geocoding API mocks)
 - Docker dev setup has frontend-backend communication issues (500 errors beyond homepage)
 
 ## Key Patterns
@@ -60,8 +60,20 @@ Run in this order:
 - Styling: use DaisyUI semantic colors/classes (`bg-primary`, `text-base-content`, etc.)
 - Security: handle CSRF tokens via `/auth/csrf/` and `X-CSRFToken`
 - Chat providers: dynamic catalog from `GET /api/chat/providers/`; configured in `CHAT_PROVIDER_CONFIG`
+- Chat model override: composer text input for per-provider model selection; persisted in `localStorage` key `voyage_chat_model_prefs`; backend accepts optional `model` param in `send_message`
+- Chat error surfacing: `_safe_error_payload()` maps LiteLLM exceptions to sanitized user-safe categories (never forwards raw `exc.message`)
 
 ## Conventions
 - Do **not** attempt to fix known test/configuration issues as part of feature work.
 - Use `bun` for frontend commands, `uv` for local Python tooling where applicable.
 - Commit and merge completed feature branches promptly once validation passes (avoid leaving finished work unmerged).
+
+## .memory Files
+- At the start of any task, read `.memory/knowledge.md` and `.memory/decisions.md` for project context.
+- Check relevant files in `.memory/plans/` and `.memory/research/` for prior work on related topics.
+- These files capture architectural decisions, code review verdicts, security findings, and implementation plans from prior sessions.
+- Do **not** duplicate information from `.memory/` into code comments — keep `.memory/` as the single source of truth for project history.
+
+## Instruction File Sync
+- `AGENTS.md` (OpenCode), `CLAUDE.md` (Claude Code), `.cursorrules` (Cursor), and the Copilot CLI custom instructions must always be kept in sync.
+- Whenever any of these files is updated (new convention, new decision, new workflow rule), apply the equivalent change to all the others.
diff --git a/CLAUDE.md b/CLAUDE.md
index 3547e6dd..2565c0b0 100644
--- a/CLAUDE.md
+++ b/CLAUDE.md
@@ -9,7 +9,7 @@
 - Use the API proxy pattern: never call Django directly from frontend components.
 - Route all frontend API calls through `frontend/src/routes/api/[...path]/+server.ts`.
 - Proxy target is `http://server:8000`; preserve session cookies and CSRF behavior.
-- AI chat is embedded in Collections → Recommendations via `AITravelChat.svelte`. There is no standalone `/chat` route. Chat providers are loaded dynamically from `GET /api/chat/providers/` (backed by LiteLLM runtime providers + custom entries like `opencode_zen`). Chat conversations stream via SSE through `/api/chat/conversations/`.
+- AI chat is embedded in Collections → Recommendations via `AITravelChat.svelte`. There is no standalone `/chat` route. Chat providers are loaded dynamically from `GET /api/chat/providers/` (backed by LiteLLM runtime providers + custom entries like `opencode_zen`). Chat conversations stream via SSE through `/api/chat/conversations/`. Chat composer supports per-provider model override (persisted in browser `localStorage`). LiteLLM errors are mapped to sanitized user-safe messages via `_safe_error_payload()` (never exposes raw exception text).
 - Service ports:
   - `web` → `:8015`
   - `server` → `:8016`
@@ -58,8 +58,8 @@ Run in this exact order:
 **ALWAYS run format before committing.**
 
 ## Known Issues (Expected)
-- Frontend `bun run check`: **3 type errors + 19 warnings** expected
-- Backend tests: **2/3 fail** (expected)
+- Frontend `bun run check`: **0 errors + 6 warnings** expected (pre-existing in `CollectionRecommendationView.svelte` + `RegionCard.svelte`)
+- Backend tests: **6/30 fail** (pre-existing: 2 user email key errors + 4 geocoding API mocks)
 - Docker dev setup has frontend-backend communication issues (500 errors beyond homepage)
 
 ## Key Patterns
@@ -68,8 +68,20 @@ Run in this exact order:
 - Styling: prefer DaisyUI semantic classes (`bg-primary`, `text-base-content`)
 - CSRF handling: use `/auth/csrf/` + `X-CSRFToken`
 - Chat providers: dynamic catalog from `GET /api/chat/providers/`; configured in `CHAT_PROVIDER_CONFIG`
+- Chat model override: composer text input for per-provider model selection; persisted in `localStorage` key `voyage_chat_model_prefs`; backend accepts optional `model` param in `send_message`
+- Chat error surfacing: `_safe_error_payload()` maps LiteLLM exceptions to sanitized user-safe categories (never forwards raw `exc.message`)
 
 ## Conventions
 - Do **not** attempt to fix known test/configuration issues as part of feature work.
 - Use `bun` for frontend commands, `uv` for local Python tooling where applicable.
 - Commit and merge completed feature branches promptly once validation passes (avoid leaving finished work unmerged).
+
+## .memory Files
+- At the start of any task, read `.memory/knowledge.md` and `.memory/decisions.md` for project context.
+- Check relevant files in `.memory/plans/` and `.memory/research/` for prior work on related topics.
+- These files capture architectural decisions, code review verdicts, security findings, and implementation plans from prior sessions.
+- Do **not** duplicate information from `.memory/` into code comments — keep `.memory/` as the single source of truth for project history.
+
+## Instruction File Sync
+- `AGENTS.md` (OpenCode), `CLAUDE.md` (Claude Code), `.cursorrules` (Cursor), and the Copilot CLI custom instructions must always be kept in sync.
+- Whenever any of these files is updated (new convention, new decision, new workflow rule), apply the equivalent change to all the others.
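A minimal sketch of the per-provider model override persistence that both files document. Only the `localStorage` key `voyage_chat_model_prefs` comes from the instructions above; the helper names, the provider→model record shape, and the in-memory store used for the usage example are illustrative assumptions, not the actual `AITravelChat.svelte` implementation:

```typescript
// Hypothetical sketch of the model-override persistence. Only the storage
// key "voyage_chat_model_prefs" is taken from the docs; everything else
// (helper names, record shape) is an assumed design, not the real component.

const STORAGE_KEY = "voyage_chat_model_prefs";

// Anything with localStorage's getItem/setItem shape works here, which keeps
// the logic testable outside a browser.
interface StringStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Assumed stored shape: provider id -> model override string.
type ModelPrefs = Record<string, string>;

function loadModelPrefs(store: StringStore): ModelPrefs {
  try {
    return JSON.parse(store.getItem(STORAGE_KEY) ?? "{}") as ModelPrefs;
  } catch {
    return {}; // corrupt JSON falls back to "no overrides"
  }
}

function saveModelPref(store: StringStore, provider: string, model: string): void {
  const prefs = loadModelPrefs(store);
  const trimmed = model.trim();
  if (trimmed === "") {
    delete prefs[provider]; // clearing the composer input removes the override
  } else {
    prefs[provider] = trimmed;
  }
  store.setItem(STORAGE_KEY, JSON.stringify(prefs));
}

// Usage: an in-memory map stands in for window.localStorage here.
const mem = new Map<string, string>();
const fakeStore: StringStore = {
  getItem: (k) => mem.get(k) ?? null,
  setItem: (k, v) => { mem.set(k, v); },
};

saveModelPref(fakeStore, "opencode_zen", "some-model-id");
console.log(loadModelPrefs(fakeStore)["opencode_zen"]); // → "some-model-id"
```

Passing the store as a parameter is just a testability convenience for the sketch; in the browser the composer would presumably read and write `window.localStorage` directly, and the saved override would be sent as the optional `model` param to `send_message`.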