feat(chat): add LLM-powered travel agent with multi-provider support
Implement a full chat-based travel agent using LiteLLM for multi-provider LLM support (OpenAI, Anthropic, Gemini, Ollama, Groq, Mistral, etc.).

Backend:
- New 'chat' Django app with ChatConversation and ChatMessage models
- Streaming SSE endpoint via StreamingHttpResponse
- 5 agent tools: search_places, list_trips, get_trip_details, add_to_itinerary, get_weather
- LiteLLM client wrapper with per-user API key retrieval
- System prompt with user preference context injection

Frontend:
- New /chat route with full-page chat UI (DaisyUI + Tailwind)
- Collapsible conversation sidebar with CRUD
- SSE streaming response display with tool call visualization
- Provider selector dropdown
- SSE proxy fix to stream text/event-stream without buffering
- Navbar link and i18n keys
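The "SSE proxy fix" listed under Frontend can be sketched as a standalone helper. This is a minimal sketch, not the commit's actual code: the real change lives inside `handleRequest`, and the `buildProxyResponse` name is invented for illustration.

```typescript
// Illustrative sketch of the SSE pass-through logic; the helper name
// buildProxyResponse is invented, not taken from the commit.
async function buildProxyResponse(upstream: Response): Promise<Response> {
  const contentType = upstream.headers.get("content-type") ?? "";

  // Copy headers but drop 'set-cookie', as the existing proxy does.
  const cleanHeaders = new Headers(upstream.headers);
  cleanHeaders.delete("set-cookie");

  // SSE must not be buffered: hand the upstream ReadableStream straight
  // to the client so each event is forwarded as it arrives.
  if (contentType.includes("text/event-stream")) {
    return new Response(upstream.body, {
      status: upstream.status,
      headers: cleanHeaders,
    });
  }

  // Non-streaming responses keep the buffer-then-forward path.
  const buffered = await upstream.arrayBuffer();
  return new Response(buffered, {
    status: upstream.status,
    headers: cleanHeaders,
  });
}
```

Buffering with `arrayBuffer()` would otherwise hold the entire SSE body until the upstream closes the connection, so the client would see nothing until the whole LLM response finished.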
@@ -86,11 +86,20 @@ async function handleRequest(
   });
 }
 
-const responseData = await response.arrayBuffer();
 // Create a new Headers object without the 'set-cookie' header
+const contentType = response.headers.get('content-type') || '';
 const cleanHeaders = new Headers(response.headers);
 cleanHeaders.delete('set-cookie');
 
+// Stream SSE responses through without buffering
+if (contentType.includes('text/event-stream')) {
+  return new Response(response.body, {
+    status: response.status,
+    headers: cleanHeaders
+  });
+}
+
+const responseData = await response.arrayBuffer();
 return new Response(responseData, {
   status: response.status,
   headers: cleanHeaders
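On the client side, the "SSE streaming response display" implies reading the proxied stream incrementally. A minimal sketch, assuming plain `data:`-only events; the `readSseStream` helper does not appear in the commit.

```typescript
// Hypothetical consumer for the proxied text/event-stream response.
// The event format (bare "data:" lines, no "event:"/"id:" fields) is
// an assumption for illustration.
async function readSseStream(
  stream: ReadableStream<Uint8Array>,
  onEvent: (data: string) => void,
): Promise<void> {
  const reader = stream.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    // SSE events are delimited by a blank line.
    let sep: number;
    while ((sep = buffer.indexOf("\n\n")) !== -1) {
      const rawEvent = buffer.slice(0, sep);
      buffer = buffer.slice(sep + 2);
      for (const line of rawEvent.split("\n")) {
        if (line.startsWith("data: ")) onEvent(line.slice("data: ".length));
      }
    }
  }
}
```

Because the proxy now forwards `response.body` unbuffered, each `reader.read()` resolves as soon as the backend flushes a chunk, which is what lets the chat UI render tokens as they arrive.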