The conversations sidebar was force-shown via 'lg:flex' at large
breakpoints even when sidebarOpen was false, causing a broken split-
layout in the w-96 drawer panel (~144px left for actual chat).
Fixes:
- Sidebar no longer force-applies lg:flex in panelMode
- In panelMode the sidebar goes full-width and hides the chat area
entirely (stacked panel pattern instead of split-column)
- Hamburger toggle is always visible in panelMode (was lg:hidden)
- Selecting a conversation or creating a new one auto-closes the
sidebar in panelMode, returning to chat view
- Welcome screen and header title use compact sizing in panelMode
- search_places: detect HTTP 429 and mark retryable=False to stop the
retry loop immediately instead of spiraling until MAX_ITERATIONS
- get_weather: extract collection coordinates (lat/lng from first
location with coords) and retry when LLM omits required params;
uses sync_to_async for the DB query in the async view
- AITravelChat: deduplicate context-only tools (get_trip_details,
get_weather) in the render pipeline to prevent duplicate place cards
from appearing when the retry loop causes multiple get_trip_details calls
- Tests: 5 new tests covering 429 non-retryable path and weather
coord fallback; all 39 chat tests pass
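The 429 handling above can be sketched as follows — a minimal, hedged illustration; `ToolResult` and `classify_http_error` are made-up names for this sketch, not the project's actual API:

```python
from dataclasses import dataclass

@dataclass
class ToolResult:
    ok: bool
    retryable: bool
    message: str

def classify_http_error(status_code: int) -> ToolResult:
    """Map an upstream HTTP status to a tool result (illustrative names)."""
    if status_code == 429:
        # Rate limited: retrying immediately only burns MAX_ITERATIONS,
        # so surface a terminal, non-retryable failure instead.
        return ToolResult(ok=False, retryable=False,
                          message="Rate limited by places API; try again later.")
    if 500 <= status_code < 600:
        # Transient upstream errors remain retryable.
        return ToolResult(ok=False, retryable=True,
                          message=f"Upstream error {status_code}")
    return ToolResult(ok=True, retryable=False, message="")
```

The key design point is that 429 is classified as permanent for the duration of the tool loop, so one rate-limit response ends the loop instead of feeding it.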
- Add _COMMAND_VERBS guard to _is_likely_location_reply() so messages
starting with imperative verbs (find, search, show, get, ...) are not
mistakenly treated as user location replies. Previously 'Find good
places' was picked up as a retry location, so the clarification path
never fired and the tool loop exhausted MAX_ALL_FAILURE_ROUNDS
instead.
- Extract city from comma-delimited fallback address strings when
city/country FKs are absent, e.g. 'Little Turnstile 6, London'
→ 'London', so context-based location retry works for manually-
entered itinerary stops without geocoded FK data.
- Add attempted_location_retry flag: if a retry was attempted and every
attempt failed, convert the result to an execution failure rather than
emitting a clarification prompt (the user already provided context via
their itinerary).
- Fix test assertion ordering in test_collection_context_retry_extracts_
city_from_fallback_address: streaming_content must be consumed before
checking mock call counts since StreamingHttpResponse is lazy.
Replaced `node:22-alpine` + `npm install -g bun@1.2.22` with
`oven/bun:1.2.22-alpine` as the builder stage base image.
The npm-based bun install was failing on linux/arm64 with:
"Failed to find package @oven/bun-linux-aarch64"
The official oven/bun Docker image supports both linux/amd64 and
linux/arm64 natively, eliminating the need to install bun via npm.
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
- Fetch models dynamically from OpenCode Zen API (36 models vs 5 hardcoded)
- Add function calling support check before using tools
- Add retry logic (num_retries=2) for transient failures
- Improve logging for debugging API calls and errors
- Update system prompt for multi-stop itinerary context
- Clean up unused imports in frontend components
- Remove obsolete views.py (its contents moved to views/__init__.py)
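The num_retries=2 behavior above, written out as a generic wrapper — a sketch under the assumption that only connection-level errors count as transient; the project most likely passes num_retries straight to its LLM client rather than using a helper like this:

```python
import time

def with_retries(fn, num_retries: int = 2, backoff: float = 0.0):
    """Call fn(), retrying transient failures up to num_retries extra times."""
    last_exc = None
    for attempt in range(num_retries + 1):
        try:
            return fn()
        except ConnectionError as exc:  # treated as transient (assumption)
            last_exc = exc
            if backoff:
                # Exponential backoff between attempts when requested.
                time.sleep(backoff * (2 ** attempt))
    raise last_exc
```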
- Connection error: wrap sync get_llm_api_key() in sync_to_async in
stream_chat_completion() to fix SynchronousOnlyOperation raised when
the async SSE generator calls a synchronous Django ORM function
- Models not loading: add opencode_zen handler to models endpoint
returning its default model; fix frontend to show 'Default' instead
of 'Loading...' indefinitely when no model list is returned
- Location in header: remove destination subtitle from Travel Assistant
header — collection-wide chat has no single meaningful location
Fix 1: Provider/Model Selection (Critical - unblocks LLM)
- Add /api/chat/providers/{id}/models/ endpoint to fetch available models
- Auto-select first configured provider instead of hardcoded 'openai'
- Add model dropdown populated from provider API
- Filter provider list to only show configured providers
- Show helpful error when no providers configured
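The models endpoint in Fix 1 might reduce to something like this — the URL shape matches the commit text, but the view logic, registry, and names are illustrative assumptions (the real view would query each provider's API):

```python
import json
from http import HTTPStatus

# Stand-in registry; the real endpoint fetches models from the provider.
_PROVIDER_MODELS = {
    "openai": ["gpt-4o-mini", "gpt-4o"],
    "opencode_zen": ["default"],
}

def provider_models(provider_id: str) -> tuple[int, str]:
    """Return (status, JSON body) for /api/chat/providers/{id}/models/."""
    models = _PROVIDER_MODELS.get(provider_id)
    if models is None:
        return HTTPStatus.NOT_FOUND, json.dumps({"error": "unknown provider"})
    return HTTPStatus.OK, json.dumps({"models": models})
```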
Fix 2: Auto-Learn Preferences (Replaces manual input)
- Create auto_profile.py utility to infer preferences from user data
- Learn interests from Activity sport types and Location categories
- Learn trip style from Lodging types (hostel=budget, resort=luxury, etc.)
- Learn geographic preferences from VisitedRegion/VisitedCity
- Call auto-learn on every chat start (send_message)
- System prompt now indicates preferences are auto-inferred
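The lodging-based trip-style inference above (hostel=budget, resort=luxury) could be sketched like this — the mapping table and function name are illustrative, not the contents of auto_profile.py:

```python
from collections import Counter

# Assumed lodging-type -> trip-style mapping; the real table may differ.
_LODGING_STYLE = {
    "hostel": "budget",
    "guesthouse": "budget",
    "hotel": "mid-range",
    "resort": "luxury",
    "villa": "luxury",
}

def infer_trip_style(lodging_types: list[str]) -> str:
    """Pick the most frequent style implied by the user's lodging history."""
    styles = [_LODGING_STYLE.get(t.lower()) for t in lodging_types]
    styles = [s for s in styles if s]
    if not styles:
        return "unspecified"
    return Counter(styles).most_common(1)[0][0]
```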
Fix 3: Remove Manual Preference UI
- Remove travel_preferences section from Settings
- Remove preference form fields and save logic
- Remove preference fetch from server-side load
- Keep UserRecommendationPreferenceProfile type for backend use
The LLM should now work correctly:
- Users with any configured provider will have it auto-selected
- Model list is fetched dynamically from provider API
- Preferences are learned from actual travel history
Add two new sections to AGENTS.md and CLAUDE.md:
- .memory Files: consult knowledge.md, decisions.md, plans/, research/ at task start
- Instruction File Sync: keep AGENTS.md, CLAUDE.md, .cursorrules, and Copilot CLI instructions in sync when any is updated
Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
The optimize function always started nearest-neighbor from the first
array element, ignoring where the traveler actually is after preceding
anchors (flights, lodging). It now receives the preceding anchor's exit
coordinates (the destination for transportation legs) so the algorithm
picks the spatially nearest item as the starting point.
Prevent API key and sensitive info leakage through exception messages:
- Replace str(exc) with generic error messages in all catch-all handlers
- Add server-side exception logging via logger.exception()
- Add ALLOWED_KWARGS per-tool allowlist to filter untrusted LLM kwargs
- Bound tool execution loop to MAX_TOOL_ITERATIONS=10
- Fix tool_call delta merge to use tool_call index
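Two of the hardening steps above — the per-tool kwargs allowlist and the index-keyed tool_call delta merge — can be sketched as follows; the delta shape is modeled on OpenAI-style streaming chunks, and the allowlist contents are assumptions, not the project's actual tables:

```python
# Assumed per-tool allowlists; untrusted LLM-supplied kwargs outside
# these sets are silently dropped.
ALLOWED_KWARGS = {
    "search_places": {"query", "location", "category"},
    "get_weather": {"lat", "lng", "date"},
}

def filter_tool_kwargs(tool: str, kwargs: dict) -> dict:
    """Keep only allowlisted kwargs for the given tool."""
    allowed = ALLOWED_KWARGS.get(tool, set())
    return {k: v for k, v in kwargs.items() if k in allowed}

def merge_tool_call_deltas(deltas):
    """Accumulate streamed {'index', 'name', 'arguments'} fragments keyed
    by each delta's index, so parallel tool calls don't interleave."""
    calls: dict[int, dict] = {}
    for d in deltas:
        slot = calls.setdefault(d["index"], {"name": "", "arguments": ""})
        slot["name"] += d.get("name", "")
        slot["arguments"] += d.get("arguments", "")
    return [calls[i] for i in sorted(calls)]
```

Merging by index (rather than by arrival order) is what prevents argument fragments from two concurrent tool calls being concatenated into one malformed JSON string.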