[v0.12.0 pre] Planning & Itinerary Overhaul, Recommendation Engine, OIDC Enhancements, and More (#931)

* Fixes [REQUEST] Email-based auto-linking for OIDC
Fixes #921

* Add ClusterMap integration for regions and cities with fit-to-bounds functionality

* Update COUNTRY_REGION_JSON_VERSION to v3.0 and modify state ID generation to use ISO2 code

* fix: handle email verification required case during signup

Updated the signup action to return a specific message when the backend responds with a 401 status, indicating that the signup succeeded but email verification is required. This allows the frontend to display the appropriate message using an i18n key.
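A minimal, framework-free sketch of the status mapping described above (the i18n key names and return shape are illustrative assumptions; the real SvelteKit action differs):

```python
def signup_result(status_code: int) -> dict:
    """Translate a backend signup response status into a frontend result.

    A 401 here means the account was created but the email address still
    needs verification, so we surface an i18n message key rather than
    treating the response as an error. (Key names are hypothetical.)
    """
    if status_code in (200, 201):
        return {"ok": True}
    if status_code == 401:
        # Signup succeeded; email verification is required before login.
        return {"ok": True, "message_key": "auth.email_verification_required"}
    return {"ok": False, "message_key": "auth.signup_failed"}
```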

* feat: add Advanced Configuration documentation with optional environment variables

* Fixes #511

* fix: update appVersion to v0.11.0-main-121425 and enhance socialProviders handling in settings page

* feat: implement social signup controls and update documentation for new environment variables

* fix: update LocationCard props and enhance restore data functionality

- Changed the user prop to null in LocationCard component on the dashboard page.
- Added isRestoring state to manage loading state during data restoration in settings.
- Updated the restore button to show a loading spinner when a restore operation is in progress.

* fix: update appVersion to v0.12.0-pre-dev-121625

* feat: implement itinerary planning feature with CollectionItineraryPlanner component and related updates

* feat: add overnight lodging indicator and functionality to CollectionItineraryPlanner

* feat: add compact display option to LocationCard and enhance lodging filtering in CollectionItineraryPlanner

* feat(itinerary): add itinerary management features and link modal

- Introduced ItineraryViewSet for managing itinerary items with create and reorder functionalities.
- Added itinerary linking capabilities in CollectionModal and CollectionItineraryPlanner components.
- Implemented new ItineraryLinkModal for linking existing items to specific dates.
- Enhanced the frontend with new modals for creating locations, lodging, transportation, notes, and checklists.
- Updated the backend to handle itinerary item creation and reordering with appropriate permissions.
- Improved data handling for unscheduled items and their association with the itinerary.
- Added new dependencies to the frontend for enhanced functionality.
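The reorder functionality above can be sketched as a plain function, independent of the actual ItineraryViewSet (the `id`/`order` field names are assumptions):

```python
def reorder_items(items: list[dict], ordered_ids: list[str]) -> list[dict]:
    """Assign a fresh, gapless `order` index to each itinerary item based
    on the client-supplied id sequence, then return the items sorted.

    Items whose id is not in `ordered_ids` keep their existing order value.
    """
    position = {item_id: i for i, item_id in enumerate(ordered_ids)}
    for item in items:
        if item["id"] in position:
            item["order"] = position[item["id"]]
    return sorted(items, key=lambda item: item.get("order", 0))
```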

* feat(itinerary): implement auto-generate functionality for itinerary items based on dated records

* feat(collection): enhance collection sharing logic and improve data handling on invite acceptance

* fix: update appVersion to correct pre-dev version

* feat(wikipedia): implement image selection from Wikipedia with enhanced results display

* Refactor code structure for improved readability and maintainability

* feat: add CollectionRecommendationView component for displaying location recommendations

- Implemented CollectionRecommendationView.svelte to handle location recommendations based on user input and selected categories.
- Added Recommendation and RecommendationResponse types to types.ts for better type safety and structure.
- Updated collections/[id]/+page.svelte to include a new view for recommendations, allowing users to switch between different views seamlessly.

* fix: update appVersion and improve button accessibility in collection views

* feat: add canModify prop to collection components for user permission handling

* feat: add itinerary removal functionality to various cards and update UI components

- Implemented `removeFromItinerary` function in `LodgingCard`, `NoteCard`, and `TransportationCard` to allow users to remove items from their itinerary.
- Replaced the trash icon with a calendar remove icon in `LocationCard`, `LodgingCard`, `NoteCard`, and `TransportationCard` for better visual representation.
- Updated the dropdown menus in `LodgingCard`, `NoteCard`, and `TransportationCard` to include the new remove from itinerary option.
- Enhanced `CollectionItineraryPlanner` to pass itinerary items to the respective cards.
- Removed `PointSelectionModal.svelte` as it is no longer needed.
- Refactored `LocationMedia.svelte` to integrate `ImageManagement` component and clean up unused code related to image handling.

* feat: enhance itinerary management with deduplication and initial visit date handling

* feat: add FullMap component for enhanced map functionality with clustering support

- Introduced FullMap.svelte to handle map rendering, clustering, and marker management.
- Updated map page to utilize FullMap component, replacing direct MapLibre usage.
- Implemented clustering options and marker properties handling in FullMap.
- Added utility functions for resolving theme colors and managing marker states.
- Enhanced user experience with hover popups and improved loading states for location details.
- Updated app version to v0.12.0-pre-dev-122225.

* feat: enhance map interaction for touch devices with custom popup handling

* feat: add progress tracker for folder views to display visited and planned locations

* feat: add map center and zoom state management with URL synchronization

* feat: add status and days until start fields to collections with filtering options

* Component folder structure changes

* feat: add LodgingMedia and LodgingModal components for managing lodging details and media attachments

feat: implement LocationSearchMap component for interactive location searching and mapping functionality

* fix: update contentType in ImageManagement component to 'lodging' for correct media handling

* feat: enhance lodging management with date validation and update messages

* feat: implement lodging detail page with server-side loading and image modal functionality

- Added a new server-side load function to fetch lodging details by ID.
- Created a new Svelte component for the lodging detail page, including image carousel and map integration.
- Implemented a modal for displaying images with navigation.
- Enhanced URL handling in the locations page to only read parameters.

* feat: add Transportation modal component and related routes

- Implemented TransportationModal component for creating and editing transportation entries.
- Added server-side loading for transportation details in the new route [id]/+page.server.ts.
- Created a new Svelte page for displaying transportation details with image and attachment handling.
- Integrated modal for editing transportation in the transportation details page.
- Updated lodging routes to include a modal for editing lodging entries.
- Removed unused delete action from lodging server-side logic.

* feat: add start_code and end_code fields to Transportation model and update related components

* feat: implement date validation for itinerary items and add day picker modal for scheduling

* Reorder town and county checks in geocoding.py

Fix detection when only a town exists for a location and the county is not a city name

* Use address keys only if city is found

* Make sure reverse geocoding uses correct key for cities (#938)

* Reorder town and county checks in geocoding.py

Fix detection when only a town exists for a location and the county is not a city name

* Use address keys only if city is found

* Refactor code structure for improved readability and maintainability

* Enhance collection management with modal updates and item handling

* feat: integrate CollectionMap component in collections page and update map titles in lodging and transportation pages

- Replaced inline map implementation with CollectionMap component in collections/[id]/+page.svelte for better modularity.
- Updated the map title in lodging/[id]/+page.svelte to reflect lodging context.
- Updated the map title in transportations/[id]/+page.svelte to reflect transportation context.
- Added functionality to collect and render GeoJSON data from transportation attachments in transportations/[id]/+page.svelte.

* chore: update copyright year to 2026 in various files

* feat: enhance backup export functionality with itinerary items and export IDs

* fix: improve dropdown close behavior by handling multiple event types

* fix: remove unnecessary cache decorator from globespin function

* feat: add initial visit date support in ChecklistModal and NoteModal, with UI suggestions for prefilled dates

* feat: add details view for checklist and note cards with edit functionality

* feat: add travel duration and GPX distance calculation to Transportation model and UI

* feat: add primary image support to Collection model, serializers, and UI components

* Refactor calendar components and enhance event detail handling

- Replaced direct calendar implementation with a reusable CalendarComponent in the calendar route.
- Introduced EventDetailsModal for displaying event details, improving modularity and readability.
- Added functionality to fetch event details asynchronously when an event is clicked.
- Implemented ICS calendar download functionality with loading state management.
- Enhanced collections page to support calendar view, integrating event handling and timezone management.
- Improved lodging and transportation pages to display local time for stays and trips, including timezone badges.
- Cleaned up unused code and comments for better maintainability.

* feat: enhance hero image handling in collection view by prioritizing primary image

* chore: update .env.example to include account email verification configuration

* feat: enhance LodgingCard and TransportationCard components with expandable details and improved layout

* feat: add price and currency fields to locations, lodging, and transportation components

- Introduced price and price_currency fields in LocationModal, LodgingDetails, LodgingModal, TransportationDetails, and TransportationModal components.
- Implemented MoneyInput and CurrencyDropdown components for handling monetary values and currency selection.
- Updated data structures and types to accommodate new price and currency fields across various models.
- Enhanced cost summary calculations in collections and routes to display total costs by currency.
- Added user preference for default currency in settings, affecting new item forms.
- Updated UI to display price information in relevant components, ensuring consistent formatting and user experience.
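Grouping cost totals by currency, as described above, might look like this stdlib-only sketch (the `price` and `price_currency` field names come from the bullets above; everything else is assumed):

```python
from collections import defaultdict
from decimal import Decimal

def totals_by_currency(items: list[dict]) -> dict[str, Decimal]:
    """Sum item prices grouped by currency code.

    Items without a price or currency are skipped; Decimal avoids
    floating-point drift when summing monetary values.
    """
    totals: dict[str, Decimal] = defaultdict(Decimal)
    for item in items:
        price = item.get("price")
        currency = item.get("price_currency")
        if price is not None and currency:
            totals[currency] += Decimal(str(price))
    return dict(totals)
```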

* feat: add Development Timeline link to overview and create timeline documentation

* feat: enhance map functionality with search and zoom features

- Updated availableViews in collection page to include map view based on lodging and transportation locations.
- Added search functionality to the map page, allowing users to filter pins by name and category.
- Implemented auto-zoom feature to adjust the map view based on filtered search results.
- Introduced a search bar with a clear button for better user experience.

* feat: enhance ISO code extraction and region matching logic in extractIsoCode function

* feat: enhance extractIsoCode function with normalization for locality matching

* feat: update extractIsoCode function to include additional ISO3166 levels for improved region matching

* feat: enhance extractIsoCode function to handle cases without city information and update CollectionMap to bind user data

* feat: add cron job for syncing visited regions and cities, enhance Docker and supervisord configurations

* feat: add CollectionItineraryDay model and related functionality for itinerary day metadata management

* feat: implement cleanup of out-of-range itinerary items and notify users of potential impacts on itinerary when dates change

* Refactor collection page for improved localization and code clarity

- Removed unused imports and consolidated cost category labels to be reactive.
- Updated cost summary function to accept localized labels.
- Enhanced localization for various UI elements, including buttons, headings, and statistics.
- Improved user feedback messages for better clarity and consistency.
- Ensured all relevant text is translatable using the i18n library.

* feat: add collaborator serialization and display in collections

- Implemented `_build_profile_pic_url` and `_serialize_collaborator` functions for user profile picture URLs and serialization.
- Updated `CollectionSerializer` and `UltraSlimCollectionSerializer` to include collaborators in the serialized output.
- Enhanced `CollectionViewSet` to prefetch shared_with users for optimized queries.
- Modified frontend components to display collaborators in collection details, including profile pictures and initials.
- Added new localization strings for collaborators.
- Refactored map and location components to improve usability and functionality.
- Updated app version to reflect new changes.

* feat: add dynamic lodging icons based on type in CollectionMap component

* feat: add CollectionStats component for detailed trip statistics

- Implemented CollectionStats.svelte to display various statistics related to the collection, including distances, activities, and locations visited.
- Enhanced CollectionMap.svelte to filter activities based on date range using new getActivityDate function.
- Updated LocationSearchMap.svelte to handle airport mode for start and end locations.
- Modified types.ts to include is_global property in CollectionItineraryItem for trip-wide items.
- Updated +page.svelte to integrate the new stats view and manage view state accordingly.

* feat: enhance itinerary management by removing old items on date change for notes and checklists; normalize date handling in CollectionMap

* feat: add functionality to change day and move items to trip-wide itinerary

- Implemented changeDay function in ChecklistCard, LocationCard, LodgingCard, NoteCard, and TransportationCard components to allow users to change the scheduled day of items.
- Added a button to move items to the global (trip-wide) itinerary in the aforementioned components, with appropriate dispatch events.
- Enhanced CollectionItineraryPlanner to handle moving items to the global itinerary and added UI elements for unscheduled items.
- Updated ItineraryDayPickModal to support the deletion of source visits when moving locations.
- Added new translations for "Change Day" and "Move Trip Wide" in the English locale.

* fix: specify full path for python3 in cron job and add shell and path variables

* fix: update appVersion to v0.12.0-pre-dev-010726

* feat: enhance CollectionItineraryPlanner and CollectionStats with dynamic links and transport type normalization

* Add Dev Container + WSL install docs and link in install guide (#944) (#951)

* feat: enhance internationalization support in CollectionMap and CollectionStats components

- Added translation support for various labels and messages in CollectionMap.svelte and CollectionStats.svelte using svelte-i18n.
- Updated English and Chinese locale files to include new translation keys for improved user experience.
- Simplified the rendering of recommendation views in the collections page.

* Refactor itinerary management and UI components

- Updated ItineraryViewSet to handle visit updates and creations more efficiently, preserving visit IDs when moving between days.
- Enhanced ChecklistCard, LodgingCard, TransportationCard, and NoteCard to include a new "Change Day" option in the actions menu.
- Improved user experience in CollectionItineraryPlanner by tracking specific itinerary items being moved and ensuring only the relevant entries are deleted.
- Added new location sharing options in LodgingCard and TransportationCard for Apple Maps, Google Maps, and OpenStreetMap.
- Updated translations in en.json for consistency and clarity.
- Minor UI adjustments for better accessibility and usability across various components.

* feat: implement action menus and close event handling in card components

* feat: refactor Dockerfile and supervisord configuration to remove cron and add periodic sync script

* feat: enhance LocationSearchMap and TransportationDetails components with initialization handling and airport mode logic

* feat: add airport and location search mode labels to localization file

* feat: enhance periodic sync logging and improve airport mode handling in LocationSearchMap

* feat: enhance unscheduled items display with improved card interactions and accessibility

* Add dev compose for hot reload and update WSL dev container docs (#958)

* feat: enhance localization for itinerary linking and transportation components

* Localization: update localization files with new keys and values

* fix: improve error messages for Overpass API responses

* chore: update dependencies in frontend package.json and pnpm-lock.yaml

- Updated @sveltejs/adapter-node from ^5.2.12 to ^5.4.0
- Updated @sveltejs/adapter-vercel from ^5.7.0 to ^6.3.0
- Updated tailwindcss from ^3.4.17 to ^3.4.19
- Updated typescript from ^5.8.3 to ^5.9.3
- Updated vite from ^5.4.19 to ^5.4.21

* chore: update dependencies in pnpm-lock.yaml to latest versions

* Refactor code structure for improved readability and maintainability

* Refactor code structure for improved readability and maintainability

* fix: update package dependencies to resolve compatibility issues

* Add "worldtravel" translations to multiple locale files

- Added "worldtravel" key with translations for Spanish, French, Hungarian, Italian, Japanese, Korean, Dutch, Norwegian, Polish, Brazilian Portuguese, Russian, Slovak, Swedish, Turkish, Ukrainian, and Chinese.
- Updated the navigation section in each locale file to include the new "worldtravel" entry.

* Add new screenshots and update email verification message in locale file

* feat: Implement data restoration functionality with file import

- Added a new action `restoreData` in `+page.server.ts` to handle file uploads for restoring collections.
- Enhanced the UI in `+page.svelte` to include an import button and a modal for import progress.
- Integrated file input handling to trigger form submission upon file selection.
- Removed unused GSAP animations from the login, profile, and signup pages for cleaner code.

* feat: Add modals for creating locations and lodging from recommendations, enhance image import functionality

* fix: Adjust styles to prevent horizontal scroll and enhance floating action button visibility

* feat: Enhance error handling and messaging for Google Maps and OpenStreetMap geocoding functions

* fix: Enhance error messaging for Google Maps access forbidden response

* feat: Add User-Agent header to Google Maps API requests and refine error messaging for access forbidden response

* fix: Update User-Agent header in Google Maps API requests for improved compatibility

* fix: Disable proxy settings in Google Maps API request to prevent connection issues

* fix: Update Trivy security scan configuration and add .trivyignore for known false positives

* fix: Refactor update method to handle is_public cascading for related items

* feat: Integrate django-invitations for user invitation management and update settings

* feat: Add Tailwind CSS and DaisyUI plugin for styling

* feat: Add Tailwind CSS and DaisyUI plugin for styling

* feat: Add "Invite a User" guide and update navigation links

* docs: Update "Invite a User" guide to include email configuration tip

* feat: Update email invitation template for improved styling and clarity

* fix: Remove trailing backslash from installation note in Unraid documentation

* feat: Add export/import messages and user email verification prompts in multiple languages

* Squashed commit of the following:

commit a993a15b93ebb7521ae2e5cc31596b98b29fcd6c
Author: Alex <div@alexe.at>
Date:   Mon Jan 12 20:44:47 2026 +0100

    Translated using Weblate (German)

    Currently translated at 100.0% (1048 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/de/

commit fdc455d9424fbb0f6b72179d9eb1340411700773
Author: Ettore Atalan <atalanttore@googlemail.com>
Date:   Sat Jan 10 23:24:23 2026 +0100

    Translated using Weblate (German)

    Currently translated at 100.0% (1048 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/de/

commit 5942129c55e89dd999a13d4df9c40e6e3189355c
Author: Orhun <orhunavcu@gmail.com>
Date:   Sun Jan 11 13:05:31 2026 +0100

    Translated using Weblate (Turkish)

    Currently translated at 100.0% (1048 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/tr/

commit 8712e43d8ba4a7e7fe163fb454d6577187f9a375
Author: Henrique Fonseca Veloso <henriquefv@tutamail.com>
Date:   Fri Jan 9 22:53:11 2026 +0100

    Translated using Weblate (Portuguese (Brazil))

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/pt_BR/

commit 18ee56653470413afe8d71ecd2b5028f6e4cf118
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:57 2026 +0100

    Translated using Weblate (Dutch)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/nl/

commit 57783c544e583c035c8b57b5c10ca320f25f399e
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:14 2026 +0100

    Translated using Weblate (Arabic)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/ar/

commit fb09edfd85bc85234b1c1ba7dd499f2915093fff
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:26 2026 +0100

    Translated using Weblate (Spanish)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/es/

commit 554a207d8e454a1f7ae826e2a40d389b94be5512
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:21 2026 +0100

    Translated using Weblate (German)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/de/

commit b70b9db27fb8607beefeb288185601c8f5eae28d
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:53:02 2026 +0100

    Translated using Weblate (Norwegian Bokmål)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/nb_NO/

commit 3b467caa9007c553e4ae7de97f53b6e462161ea3
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:53:07 2026 +0100

    Translated using Weblate (Polish)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/pl/

commit 30fbbfba3572c8f78ec7c7e1a231e363aca1ef10
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:53:17 2026 +0100

    Translated using Weblate (Russian)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/ru/

commit 8cecb492cfcac0a1f93ee8919f7b41d978d331ee
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:42 2026 +0100

    Translated using Weblate (Italian)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/it/

commit f0d3d41029c89bfa83d5891ee7af70241f27b7be
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:38 2026 +0100

    Translated using Weblate (Hungarian)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/hu/

commit 102e0f1912d010d38755a1713abb2a7f7564aafb
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:53:21 2026 +0100

    Translated using Weblate (Slovak)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/sk/

commit 428b8f18cf6195a96b55109e0221413d82415a2f
Author: Максим Горпиніч <gorpinicmaksim0@gmail.com>
Date:   Sat Jan 10 08:55:28 2026 +0100

    Translated using Weblate (Ukrainian)

    Currently translated at 100.0% (1048 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/uk/

commit 1a71aaf279ecab26c0c1fede05025732e6dcfa5e
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:53:27 2026 +0100

    Translated using Weblate (Swedish)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/sv/

commit 36ec3701f3a1a904e7c42ac4ffbe6a050dc6d1ed
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:53:43 2026 +0100

    Translated using Weblate (Chinese (Simplified Han script))

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/zh_Hans/

commit 65d8b74b340c877cad2028b7142c783a1b568d49
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:48 2026 +0100

    Translated using Weblate (Japanese)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/ja/

commit 4d11d1d31022583657e93aee70301a8ffcde1340
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:52 2026 +0100

    Translated using Weblate (Korean)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/ko/

commit bd1135bcb965ad73cf493771b15081cc97cf513a
Author: Orhun <orhunavcu@gmail.com>
Date:   Fri Jan 9 22:53:33 2026 +0100

    Translated using Weblate (Turkish)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/tr/

commit 2c3d814119f4cf2dabd20933699f5b991f20f3e6
Author: Anonymous <noreply@weblate.org>
Date:   Fri Jan 9 22:52:32 2026 +0100

    Translated using Weblate (French)

    Currently translated at 99.9% (1047 of 1048 strings)

    Translation: AdventureLog/Web App
    Translate-URL: https://hosted.weblate.org/projects/adventurelog/web-app/fr/

* Refactor code structure and remove redundant code blocks for improved readability and maintainability

* fix: Correct appVersion to match the latest pre-release version

* fix: Add missing vulnerability reference for jaraco.context in .trivyignore

---------

Co-authored-by: Lars Lehmann <33843261+larsl-net@users.noreply.github.com>
Co-authored-by: Lars Lehmann <lars@lmail.eu>
Co-authored-by: Nick Petrushin <n.a.petrushin@gmail.com>
Commit c39ff2449e (parent d954ac057d), authored by Sean Morley, committed via GitHub on 2026-01-14 10:22:14 -05:00.
206 changed files with 33,830 additions and 14,254 deletions.


@@ -17,4 +17,5 @@ from .recommendations_view import *
 from .import_export_view import *
 from .trail_view import *
 from .activity_view import *
-from .visit_view import *
+from .visit_view import *
+from .itinerary_view import *


@@ -2,11 +2,13 @@ from rest_framework import viewsets, status
 from rest_framework.decorators import action
 from rest_framework.response import Response
 from django.db.models import Q
-from adventures.models import Checklist
+from adventures.models import Checklist, CollectionItineraryItem
 from adventures.serializers import ChecklistSerializer
 from rest_framework.exceptions import PermissionDenied
 from adventures.permissions import IsOwnerOrSharedWithFullAccess
 from rest_framework.permissions import IsAuthenticated
+from django.contrib.contenttypes.models import ContentType
+from django.db import transaction

 class ChecklistViewSet(viewsets.ModelViewSet):
     serializer_class = ChecklistSerializer
@@ -43,15 +45,18 @@ class ChecklistViewSet(viewsets.ModelViewSet):
         # Retrieve the current object
         instance = self.get_object()
+        # Store the old date before updating
+        old_date = instance.date
         # Partially update the instance with the request data
         serializer = self.get_serializer(instance, data=request.data, partial=True)
         serializer.is_valid(raise_exception=True)
         # Retrieve the collection from the validated data
         new_collection = serializer.validated_data.get('collection')
+        new_date = serializer.validated_data.get('date', old_date)
         user = request.user
         print(new_collection)
         if new_collection is not None and new_collection!=instance.collection:
             # Check if the user is the owner of the new collection
@@ -62,34 +67,15 @@ class ChecklistViewSet(viewsets.ModelViewSet):
             if instance.collection is not None and instance.collection.user != user:
                 raise PermissionDenied("You cannot remove the collection as you are not the owner.")
-        # Perform the update
-        self.perform_update(serializer)
-        # Return the updated instance
-        return Response(serializer.data)
-
-    def partial_update(self, request, *args, **kwargs):
-        # Retrieve the current object
-        instance = self.get_object()
-        # Partially update the instance with the request data
-        serializer = self.get_serializer(instance, data=request.data, partial=True)
-        serializer.is_valid(raise_exception=True)
-        # Retrieve the collection from the validated data
-        new_collection = serializer.validated_data.get('collection')
-        user = request.user
-        print(new_collection)
-        if new_collection is not None and new_collection!=instance.collection:
-            # Check if the user is the owner of the new collection
-            if new_collection.user != user or instance.user != user:
-                raise PermissionDenied("You do not have permission to use this collection.")
-        elif new_collection is None:
-            # Handle the case where the user is trying to set the collection to None
-            if instance.collection is not None and instance.collection.user != user:
-                raise PermissionDenied("You cannot remove the collection as you are not the owner.")
+        # If the date changed, remove old itinerary items for this checklist on the old date
+        if old_date and new_date and old_date != new_date:
+            checklist_ct = ContentType.objects.get_for_model(Checklist)
+            old_itinerary_items = CollectionItineraryItem.objects.filter(
+                content_type=checklist_ct,
+                object_id=str(instance.id),
+                date=old_date
+            )
+            old_itinerary_items.delete()
         # Perform the update
         self.perform_update(serializer)


@@ -4,9 +4,18 @@ from django.db import transaction
 from rest_framework import viewsets
 from rest_framework.decorators import action
 from rest_framework.response import Response
-from adventures.models import Collection, Location, Transportation, Note, Checklist, CollectionInvite, ContentImage
 from rest_framework.parsers import MultiPartParser
 from rest_framework import status
 from django.http import HttpResponse
 from django.conf import settings
+import io
+import os
+import json
+import zipfile
+import tempfile
+from adventures.models import Collection, Location, Transportation, Note, Checklist, CollectionInvite, ContentImage, CollectionItineraryItem, Lodging, CollectionItineraryDay, ContentAttachment, Category
 from adventures.permissions import CollectionShared
-from adventures.serializers import CollectionSerializer, CollectionInviteSerializer, UltraSlimCollectionSerializer
+from adventures.serializers import CollectionSerializer, CollectionInviteSerializer, UltraSlimCollectionSerializer, CollectionItineraryItemSerializer, CollectionItineraryDaySerializer
 from users.models import CustomUser as User
 from adventures.utils import pagination
 from users.serializers import CustomUserDetailsSerializer as UserSerializer
@@ -38,22 +47,47 @@ class CollectionViewSet(viewsets.ModelViewSet):
if order_by == 'name':
queryset = queryset.annotate(lower_name=Lower('name'))
ordering = 'lower_name'
if order_direction == 'asc':
ordering = f'-{ordering}'
elif order_by == 'start_date':
ordering = 'start_date'
if order_direction == 'desc':
ordering = 'start_date'
else:
ordering = '-start_date'
else:
# Default to ordering by updated_at
ordering = 'updated_at'
if order_direction == 'desc':
ordering = '-updated_at'
return queryset.order_by(ordering)
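For reference, Django's `order_by()` convention is ascending by default, with a leading `-` for descending. A standalone sketch of a conventional direction resolver (the helper name and field map are illustrative, not part of this PR, and the handlers above intentionally differ for some fields):

```python
def resolve_ordering(order_by, order_direction):
    """Map an (order_by, order_direction) pair to a Django-style
    ordering string, where a leading '-' means descending."""
    field = {'name': 'lower_name', 'start_date': 'start_date'}.get(order_by, 'updated_at')
    # order_by('field') sorts ascending; order_by('-field') sorts descending.
    return f'-{field}' if order_direction == 'desc' else field
```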
def apply_status_filter(self, queryset):
"""Apply status filtering based on query parameter"""
from datetime import date
status_filter = self.request.query_params.get('status', None)
if not status_filter:
return queryset
today = date.today()
if status_filter == 'folder':
# Collections without dates
return queryset.filter(Q(start_date__isnull=True) | Q(end_date__isnull=True))
elif status_filter == 'upcoming':
# Start date in the future
return queryset.filter(start_date__gt=today)
elif status_filter == 'in_progress':
# Currently ongoing
return queryset.filter(start_date__lte=today, end_date__gte=today)
elif status_filter == 'completed':
# End date in the past
return queryset.filter(end_date__lt=today)
return queryset
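The four status branches map onto a pure function; a sketch of the same classification (the helper name is illustrative), handy for reasoning about boundary dates, since a collection whose range includes today counts as in progress:

```python
from datetime import date

def classify_status(start_date, end_date, today=None):
    """Classify a collection the same way the status query parameter
    filters do: folder, upcoming, in_progress, or completed."""
    today = today or date.today()
    if start_date is None or end_date is None:
        return 'folder'      # collections without a full date range
    if start_date > today:
        return 'upcoming'    # starts in the future
    if end_date < today:
        return 'completed'   # already ended
    return 'in_progress'     # today falls inside the range
```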
def get_serializer_context(self):
"""Override to add nested and exclusion contexts based on query parameters"""
context = super().get_serializer_context()
@@ -80,45 +114,49 @@ class CollectionViewSet(viewsets.ModelViewSet):
def get_optimized_queryset_for_listing(self):
"""Get optimized queryset for list actions with prefetching"""
return self.get_base_queryset().select_related('user', 'primary_image').prefetch_related(
Prefetch(
'locations__images',
queryset=ContentImage.objects.filter(is_primary=True).select_related('user'),
to_attr='primary_images'
)
),
'shared_with'
)
def get_base_queryset(self):
"""Base queryset logic extracted for reuse"""
if self.action == 'destroy':
queryset = Collection.objects.filter(user=self.request.user.id)
elif self.action in ['update', 'partial_update', 'leave']:
queryset = Collection.objects.filter(
Q(user=self.request.user.id) | Q(shared_with=self.request.user)
).distinct()
# Allow access to collections with pending invites for accept/decline actions
elif self.action in ['accept_invite', 'decline_invite']:
if not self.request.user.is_authenticated:
queryset = Collection.objects.none()
else:
queryset = Collection.objects.filter(
Q(user=self.request.user.id)
| Q(shared_with=self.request.user)
| Q(invites__invited_user=self.request.user)
).distinct()
elif self.action == 'retrieve':
if not self.request.user.is_authenticated:
queryset = Collection.objects.filter(is_public=True)
else:
queryset = Collection.objects.filter(
Q(is_public=True)
| Q(user=self.request.user.id)
| Q(shared_with=self.request.user)
).distinct()
else:
# For list action and default base queryset, return collections owned by the user (exclude shared)
queryset = Collection.objects.filter(
Q(user=self.request.user.id) & Q(is_archived=False)
).distinct()
return queryset.select_related('primary_image').prefetch_related('shared_with')
def get_queryset(self):
"""Get queryset with optimizations for list actions"""
@@ -131,9 +169,11 @@ class CollectionViewSet(viewsets.ModelViewSet):
if not request.user.is_authenticated:
return Response({"error": "User is not authenticated"}, status=400)
# List should only return collections owned by the requesting user (shared collections are available
# via the `shared` action).
queryset = Collection.objects.filter(
Q(user=request.user.id) & Q(is_archived=False)
).distinct().select_related('user', 'primary_image').prefetch_related(
Prefetch(
'locations__images',
queryset=ContentImage.objects.filter(is_primary=True).select_related('user'),
@@ -141,6 +181,7 @@ class CollectionViewSet(viewsets.ModelViewSet):
)
)
queryset = self.apply_status_filter(queryset)
queryset = self.apply_sorting(queryset)
return self.paginate_and_respond(queryset, request)
@@ -151,7 +192,7 @@ class CollectionViewSet(viewsets.ModelViewSet):
queryset = Collection.objects.filter(
Q(user=request.user)
).select_related('user', 'primary_image').prefetch_related(
Prefetch(
'locations__images',
queryset=ContentImage.objects.filter(is_primary=True).select_related('user'),
@@ -171,7 +212,7 @@ class CollectionViewSet(viewsets.ModelViewSet):
queryset = Collection.objects.filter(
Q(user=request.user.id) & Q(is_archived=True)
).select_related('user', 'primary_image').prefetch_related(
Prefetch(
'locations__images',
queryset=ContentImage.objects.filter(is_primary=True).select_related('user'),
@@ -184,67 +225,23 @@ class CollectionViewSet(viewsets.ModelViewSet):
return Response(serializer.data)
def retrieve(self, request, pk=None):
"""Retrieve a collection and include itinerary items and day metadata in the response."""
collection = self.get_object()
serializer = self.get_serializer(collection)
data = serializer.data
# Include itinerary items inline with collection details
itinerary_items = CollectionItineraryItem.objects.filter(collection=collection)
itinerary_serializer = CollectionItineraryItemSerializer(itinerary_items, many=True)
data['itinerary'] = itinerary_serializer.data
# Include itinerary day metadata
itinerary_days = CollectionItineraryDay.objects.filter(collection=collection)
days_serializer = CollectionItineraryDaySerializer(itinerary_days, many=True)
data['itinerary_days'] = days_serializer.data
return Response(data)
# Action to retrieve all collections that are shared with the user
@action(detail=False, methods=['get'])
@@ -477,10 +474,433 @@ class CollectionViewSet(viewsets.ModelViewSet):
return Response({"success": success_message})
@action(detail=True, methods=['get'], url_path='export')
def export_collection(self, request, pk=None):
"""Export a single collection and its related content as a ZIP file."""
collection = self.get_object()
export_data = {
'version': getattr(settings, 'ADVENTURELOG_RELEASE_VERSION', 'unknown'),
# Omit export_date to keep template-friendly exports (no dates)
'collection': {
'id': str(collection.id),
'name': collection.name,
'description': collection.description,
'is_public': collection.is_public,
# Omit start/end dates
'link': collection.link,
},
'locations': [],
'transportation': [],
'notes': [],
'checklists': [],
'lodging': [],
# Omit itinerary_items entirely
'images': [],
'attachments': [],
'primary_image_ref': None,
}
image_export_map = {}
for loc in collection.locations.all().select_related('city', 'region', 'country'):
loc_entry = {
'id': str(loc.id),
'name': loc.name,
'description': loc.description,
'location': loc.location,
'tags': loc.tags or [],
'rating': loc.rating,
'link': loc.link,
'is_public': loc.is_public,
'longitude': float(loc.longitude) if loc.longitude is not None else None,
'latitude': float(loc.latitude) if loc.latitude is not None else None,
'city': loc.city.name if loc.city else None,
'region': loc.region.name if loc.region else None,
'country': loc.country.name if loc.country else None,
'images': [],
'attachments': [],
}
for img in loc.images.all():
img_export_id = f"img_{len(export_data['images'])}"
image_export_map[str(img.id)] = img_export_id
export_data['images'].append({
'export_id': img_export_id,
'id': str(img.id),
'name': os.path.basename(getattr(img.image, 'name', 'image')),
'is_primary': getattr(img, 'is_primary', False),
})
loc_entry['images'].append(img_export_id)
for att in loc.attachments.all():
att_export_id = f"att_{len(export_data['attachments'])}"
export_data['attachments'].append({
'export_id': att_export_id,
'id': str(att.id),
'name': os.path.basename(getattr(att.file, 'name', 'attachment')),
})
loc_entry['attachments'].append(att_export_id)
export_data['locations'].append(loc_entry)
if collection.primary_image:
export_data['primary_image_ref'] = image_export_map.get(str(collection.primary_image.id))
# Related content (if models have FK to collection)
for t in Transportation.objects.filter(collection=collection):
export_data['transportation'].append({
'id': str(t.id),
'type': getattr(t, 'transportation_type', None),
'name': getattr(t, 'name', None),
# Omit date
'notes': getattr(t, 'notes', None),
})
for n in Note.objects.filter(collection=collection):
export_data['notes'].append({
'id': str(n.id),
'title': getattr(n, 'title', None),
'content': getattr(n, 'content', ''),
# Omit created_at
})
for c in Checklist.objects.filter(collection=collection):
items = []
if hasattr(c, 'items'):
items = [
{
'name': getattr(item, 'name', None),
'completed': getattr(item, 'completed', False),
} for item in c.items.all()
]
export_data['checklists'].append({
'id': str(c.id),
'name': getattr(c, 'name', None),
'items': items,
})
for l in Lodging.objects.filter(collection=collection):
export_data['lodging'].append({
'id': str(l.id),
'type': getattr(l, 'lodging_type', None),
'name': getattr(l, 'name', None),
# Omit start_date/end_date
'notes': getattr(l, 'notes', None),
})
# Intentionally omit itinerary_items from export
# Create ZIP in temp file
with tempfile.NamedTemporaryFile(delete=False, suffix='.zip') as tmp_file:
with zipfile.ZipFile(tmp_file, 'w', zipfile.ZIP_DEFLATED) as zipf:
zipf.writestr('metadata.json', json.dumps(export_data, indent=2))
# Write image files
for loc in collection.locations.all():
for img in loc.images.all():
export_id = image_export_map.get(str(img.id))
if not export_id:
continue
try:
file_name = os.path.basename(getattr(img.image, 'name', 'image'))
storage = getattr(img.image, 'storage', None)
if storage:
with storage.open(img.image.name, 'rb') as f:
zipf.writestr(f'images/{export_id}-{file_name}', f.read())
elif hasattr(img.image, 'path'):
with open(img.image.path, 'rb') as f:
zipf.writestr(f'images/{export_id}-{file_name}', f.read())
except Exception:
continue
# Write attachment files
for loc in collection.locations.all():
for att in loc.attachments.all():
try:
file_name = os.path.basename(getattr(att.file, 'name', 'attachment'))
storage = getattr(att.file, 'storage', None)
if storage:
with storage.open(att.file.name, 'rb') as f:
zipf.writestr(f'attachments/{file_name}', f.read())
elif hasattr(att.file, 'path'):
with open(att.file.path, 'rb') as f:
zipf.writestr(f'attachments/{file_name}', f.read())
except Exception:
continue
with open(tmp_file.name, 'rb') as fh:
data = fh.read()
os.unlink(tmp_file.name)
filename = f"collection-{collection.name.replace(' ', '_')}.zip"
response = HttpResponse(data, content_type='application/zip')
response['Content-Disposition'] = f'attachment; filename="{filename}"'
return response
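The export writes a `metadata.json` plus `images/` and `attachments/` members into the archive. A small stdlib round-trip sketch of that layout (helper names are illustrative, not part of this PR):

```python
import io
import json
import zipfile

def build_export_zip(metadata, files):
    """Write metadata.json plus arbitrary member files into an
    in-memory ZIP, mirroring the export layout used above."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, 'w', zipfile.ZIP_DEFLATED) as zipf:
        zipf.writestr('metadata.json', json.dumps(metadata, indent=2))
        for name, payload in files.items():
            zipf.writestr(name, payload)  # payload may be str or bytes
    return buf.getvalue()

def read_export_metadata(zip_bytes):
    """Read metadata.json back out of the archive, as the importer does."""
    with zipfile.ZipFile(io.BytesIO(zip_bytes), 'r') as zipf:
        return json.loads(zipf.read('metadata.json').decode('utf-8'))
```

Building the archive in memory avoids the temp-file cleanup the view does with `os.unlink`; the view's approach trades memory for disk, which may matter for large exports.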
@action(detail=False, methods=['post'], url_path='import', parser_classes=[MultiPartParser])
def import_collection(self, request):
"""Import a single collection from a ZIP file. Handles name conflicts by appending (n)."""
upload = request.FILES.get('file')
if not upload:
return Response({'detail': 'No file provided'}, status=status.HTTP_400_BAD_REQUEST)
# Read zip
file_bytes = upload.read()
with zipfile.ZipFile(io.BytesIO(file_bytes), 'r') as zipf:
try:
metadata = json.loads(zipf.read('metadata.json').decode('utf-8'))
except KeyError:
return Response({'detail': 'metadata.json missing'}, status=status.HTTP_400_BAD_REQUEST)
base_name = (metadata.get('collection') or {}).get('name') or 'Imported Collection'
# Ensure unique name per user
existing_names = set(request.user.collection_set.values_list('name', flat=True))
unique_name = base_name
if unique_name in existing_names:
i = 1
while True:
candidate = f"{base_name} ({i})"
if candidate not in existing_names:
unique_name = candidate
break
i += 1
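The conflict-resolution loop above can be expressed as a standalone helper (the name is illustrative, not from this PR): append the smallest `(n)` suffix that avoids a clash.

```python
def unique_collection_name(base_name, existing_names):
    """Return base_name, or 'base_name (n)' with the smallest n that
    avoids a clash, matching the importer's conflict policy."""
    if base_name not in existing_names:
        return base_name
    i = 1
    while f"{base_name} ({i})" in existing_names:
        i += 1
    return f"{base_name} ({i})"
```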
meta_collection = metadata.get('collection') or {}
from datetime import date
new_collection = Collection.objects.create(
user=request.user,
name=unique_name,
description=meta_collection.get('description'),
is_public=meta_collection.get('is_public', False),
start_date=date.fromisoformat(meta_collection['start_date']) if meta_collection.get('start_date') else None,
end_date=date.fromisoformat(meta_collection['end_date']) if meta_collection.get('end_date') else None,
link=meta_collection.get('link'),
)
image_export_map = {img['export_id']: img for img in metadata.get('images', [])}
attachment_export_map = {att['export_id']: att for att in metadata.get('attachments', [])}
# Import locations
for loc_data in metadata.get('locations', []):
cat_obj = None
if loc_data.get('category'):
cat_obj, _ = Category.objects.get_or_create(user=request.user, name=loc_data['category'])
# Attempt to find a very similar existing location for this user
from difflib import SequenceMatcher
def _ratio(a, b):
a = (a or '').strip().lower()
b = (b or '').strip().lower()
if not a and not b:
return 1.0
return SequenceMatcher(None, a, b).ratio()
def _coords_close(lat1, lon1, lat2, lon2, threshold=0.02):
try:
if lat1 is None or lon1 is None or lat2 is None or lon2 is None:
return False
return abs(float(lat1) - float(lat2)) <= threshold and abs(float(lon1) - float(lon2)) <= threshold
except Exception:
return False
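`_ratio` normalizes case and whitespace before comparing with `difflib.SequenceMatcher`; a self-contained equivalent (renamed for illustration) shows why the 0.85 to 0.92 thresholds above correspond to near-identical names:

```python
from difflib import SequenceMatcher

def name_similarity(a, b):
    """Case- and whitespace-insensitive similarity ratio in [0, 1],
    mirroring the _ratio helper used for duplicate detection."""
    a = (a or '').strip().lower()
    b = (b or '').strip().lower()
    if not a and not b:
        return 1.0  # two empty values are treated as identical
    return SequenceMatcher(None, a, b).ratio()
```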
incoming_name = loc_data.get('name') or 'Untitled'
incoming_location_text = loc_data.get('location')
incoming_lat = loc_data.get('latitude')
incoming_lon = loc_data.get('longitude')
existing_loc = None
best_score = 0.0
for cand in Location.objects.filter(user=request.user):
name_score = _ratio(incoming_name, cand.name)
loc_text_score = _ratio(incoming_location_text, getattr(cand, 'location', None))
close_coords = _coords_close(incoming_lat, incoming_lon, cand.latitude, cand.longitude)
# Define "very similar": strong name match OR decent name with location/coords match
combined_score = max(name_score, (name_score + loc_text_score) / 2.0)
if close_coords:
combined_score = max(combined_score, name_score + 0.1) # small boost for coord proximity
if combined_score > best_score and (
name_score >= 0.92 or (name_score >= 0.85 and (loc_text_score >= 0.85 or close_coords))
):
best_score = combined_score
existing_loc = cand
if existing_loc:
# Link existing location to the new collection, skip creating a duplicate
loc = existing_loc
loc.collections.add(new_collection)
created_new_loc = False
else:
# Create a brand-new location
loc = Location.objects.create(
user=request.user,
name=incoming_name,
description=loc_data.get('description'),
location=incoming_location_text,
tags=loc_data.get('tags') or [],
rating=loc_data.get('rating'),
link=loc_data.get('link'),
is_public=bool(loc_data.get('is_public', False)),
longitude=incoming_lon,
latitude=incoming_lat,
category=cat_obj,
)
loc.collections.add(new_collection)
created_new_loc = True
# Images
# Only import images for newly created locations to avoid duplicating user content
if created_new_loc:
for export_id in loc_data.get('images', []):
img_meta = image_export_map.get(export_id)
if not img_meta:
continue
prefix = f"images/{export_id}-"
member = next((m for m in zipf.namelist() if m.startswith(prefix)), None)
if not member:
continue
file_bytes_img = zipf.read(member)
file_name_img = os.path.basename(member)
from django.core.files.base import ContentFile
image_obj = ContentImage(
user=request.user,
image=ContentFile(file_bytes_img, name=file_name_img),
)
# Assign to the generic relation for Location
image_obj.content_object = loc
image_obj.save()
if img_meta.get('is_primary'):
new_collection.primary_image = image_obj
new_collection.save(update_fields=['primary_image'])
# Attachments
if created_new_loc:
for export_id in loc_data.get('attachments', []):
att_meta = attachment_export_map.get(export_id)
if not att_meta:
continue
file_name_att = att_meta.get('name', '')
member = next((m for m in zipf.namelist() if m == f"attachments/{file_name_att}"), None)
if not member:
continue
file_bytes_att = zipf.read(member)
from django.core.files.base import ContentFile
attachment_obj = ContentAttachment(
user=request.user,
file=ContentFile(file_bytes_att, name=file_name_att),
)
# Assign to the generic relation for Location
attachment_obj.content_object = loc
attachment_obj.save()
serializer = self.get_serializer(new_collection)
return Response(serializer.data, status=status.HTTP_201_CREATED)
def perform_create(self, serializer):
# This is ok because you cannot share a collection when creating it
serializer.save(user=self.request.user)
def _cleanup_out_of_range_itinerary_items(self, collection):
"""Delete itinerary items and day metadata outside the collection's date range."""
if not collection.start_date or not collection.end_date:
# If no date range is set, don't delete anything
return
# Delete itinerary items outside the date range
deleted_items = CollectionItineraryItem.objects.filter(
collection=collection
).exclude(
date__range=[collection.start_date, collection.end_date]
).delete()
# Delete day metadata outside the date range
deleted_days = CollectionItineraryDay.objects.filter(
collection=collection
).exclude(
date__range=[collection.start_date, collection.end_date]
).delete()
return deleted_items, deleted_days
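The cleanup's `exclude(date__range=[start, end])` semantics, sketched in pure Python with items as `(id, date)` pairs (helper name illustrative): entries strictly outside the inclusive range are the ones that would be deleted, and no range means nothing is touched.

```python
from datetime import date

def out_of_range(items, start_date, end_date):
    """Return ids of dated entries outside [start_date, end_date],
    i.e. what the cleanup above would delete."""
    if start_date is None or end_date is None:
        return []  # no range set: nothing is deleted
    return [item_id for item_id, d in items if not (start_date <= d <= end_date)]
```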
@transaction.atomic
def update(self, request, *args, **kwargs):
"""Override update to handle is_public cascading and clean up out-of-range itinerary items when dates change."""
instance = self.get_object()
old_is_public = instance.is_public
old_start_date = instance.start_date
old_end_date = instance.end_date
# Perform the standard update
partial = kwargs.pop('partial', False)
serializer = self.get_serializer(instance, data=request.data, partial=partial)
serializer.is_valid(raise_exception=True)
self.perform_update(serializer)
# Check if is_public changed
new_is_public = serializer.instance.is_public
is_public_changed = old_is_public != new_is_public
# Handle is_public cascading
if is_public_changed:
if new_is_public:
# Collection is being made public, update all linked items to public
serializer.instance.locations.filter(is_public=False).update(is_public=True)
serializer.instance.transportation_set.filter(is_public=False).update(is_public=True)
serializer.instance.note_set.filter(is_public=False).update(is_public=True)
serializer.instance.checklist_set.filter(is_public=False).update(is_public=True)
serializer.instance.lodging_set.filter(is_public=False).update(is_public=True)
else:
# Collection is being made private, check each linked item
# Only set an item to private if it doesn't belong to any other public collection
# Handle locations (many-to-many relationship)
locations_in_collection = serializer.instance.locations.filter(is_public=True)
for location in locations_in_collection:
# Check if this location belongs to any other public collection
has_other_public_collection = location.collections.filter(
is_public=True
).exclude(id=serializer.instance.id).exists()
if not has_other_public_collection:
location.is_public = False
location.save(update_fields=['is_public'])
# Handle transportations, notes, checklists, lodging (foreign key relationships)
# Transportation
transportations_to_check = serializer.instance.transportation_set.filter(is_public=True)
for transportation in transportations_to_check:
transportation.is_public = False
transportation.save(update_fields=['is_public'])
# Notes
notes_to_check = serializer.instance.note_set.filter(is_public=True)
for note in notes_to_check:
note.is_public = False
note.save(update_fields=['is_public'])
# Checklists
checklists_to_check = serializer.instance.checklist_set.filter(is_public=True)
for checklist in checklists_to_check:
checklist.is_public = False
checklist.save(update_fields=['is_public'])
# Lodging
lodging_to_check = serializer.instance.lodging_set.filter(is_public=True)
for lodging in lodging_to_check:
lodging.is_public = False
lodging.save(update_fields=['is_public'])
# Check if dates changed
new_start_date = serializer.instance.start_date
new_end_date = serializer.instance.end_date
dates_changed = (old_start_date != new_start_date or old_end_date != new_end_date)
# Clean up out-of-range items if dates changed
if dates_changed:
self._cleanup_out_of_range_itinerary_items(serializer.instance)
if getattr(instance, '_prefetched_objects_cache', None):
# If 'prefetch_related' has been applied to a queryset, we need to
# forcibly invalidate the prefetch cache on the instance.
instance._prefetched_objects_cache = {}
return Response(serializer.data)
def paginate_and_respond(self, queryset, request):
paginator = self.pagination_class()
page = paginator.paginate_queryset(queryset, request)


@@ -91,8 +91,13 @@ class GenerateDescription(viewsets.ViewSet):
try:
candidates = self.get_candidate_pages(name, lang)
found_images = []
for candidate in candidates:
# Stop after collecting 8 candidate images
if len(found_images) >= 8:
break
page_data = self.fetch_page(
lang=lang,
candidate=candidate,
@@ -113,13 +118,29 @@ class GenerateDescription(viewsets.ViewSet):
# Try original image first
original_image = page_data.get('original')
if original_image and self.is_valid_image(original_image.get('source')):
found_images.append({
'source': original_image.get('source'),
'width': original_image.get('width'),
'height': original_image.get('height'),
'title': page_data.get('title'),
'type': 'original'
})
continue
# Fall back to thumbnail
thumbnail_image = page_data.get('thumbnail')
if thumbnail_image and self.is_valid_image(thumbnail_image.get('source')):
found_images.append({
'source': thumbnail_image.get('source'),
'width': thumbnail_image.get('width'),
'height': thumbnail_image.get('height'),
'title': page_data.get('title'),
'type': 'thumbnail'
})
if found_images:
return Response({"images": found_images})
return Response({"error": "No image found"}, status=404)
except requests.exceptions.RequestException:

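The image-gathering loop above follows a collect-up-to-N pattern: prefer the original image, fall back to the thumbnail, and take at most one image per page. A standalone sketch under those assumptions (the function, its parameters, and the page shape are illustrative, not the view's actual API):

```python
def collect_candidates(pages, is_valid, limit=8):
    """Gather up to `limit` image candidates across pages, preferring a
    page's original image and falling back to its thumbnail."""
    found = []
    for page in pages:
        if len(found) >= limit:
            break
        for kind in ('original', 'thumbnail'):
            image = page.get(kind)
            if image and is_valid(image.get('source')):
                found.append({'source': image['source'], 'type': kind})
                break  # at most one image per page
    return found
```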

@@ -4,64 +4,89 @@ from rest_framework.decorators import action
from rest_framework.permissions import IsAuthenticated
from icalendar import Calendar, Event, vText, vCalAddress
from datetime import datetime, timedelta
from django.db.models import Prefetch
from adventures.models import Location, Visit
class IcsCalendarGeneratorViewSet(viewsets.ViewSet):
permission_classes = [IsAuthenticated]
@action(detail=False, methods=['get'])
def generate(self, request):
locations = (
Location.objects.filter(user=request.user)
.prefetch_related(
Prefetch(
'visits',
queryset=Visit.objects.only('id', 'start_date', 'end_date')
)
)
.only('id', 'name', 'description', 'location', 'link')
)
user = request.user
name = f"{user.first_name} {user.last_name}"
cal = Calendar()
cal.add('prodid', '-//My Adventure Calendar//example.com//')
cal.add('version', '2.0')
for location in locations:
visits = list(location.visits.all())
if not visits:
continue
# Parse start_date and handle end_date
for visit in visits:
start_value = getattr(visit, 'start_date', None)
if not start_value:
continue
# Normalize start date
if isinstance(start_value, str):
try:
start_dt = datetime.fromisoformat(start_value.replace('Z', '+00:00'))
except ValueError:
continue
else:
start_dt = start_value
start_date = start_dt.date() if hasattr(start_dt, 'date') else start_dt
# Normalize end date (inclusive of final day)
end_value = getattr(visit, 'end_date', None) or start_dt
if isinstance(end_value, str):
try:
end_dt = datetime.fromisoformat(end_value.replace('Z', '+00:00'))
except ValueError:
end_dt = start_dt
else:
end_dt = end_value
end_date = end_dt.date() if hasattr(end_dt, 'date') else end_dt
end_date = end_date + timedelta(days=1)
event = Event()
event.add('summary', location.name)
event.add('dtstart', start_date)
event.add('dtend', end_date)
event.add('dtstamp', datetime.now())
event.add('transp', 'TRANSPARENT')
event.add('class', 'PUBLIC')
event.add('created', datetime.now())
event.add('last-modified', datetime.now())
event.add('description', getattr(location, 'description', '') or '')
if getattr(location, 'location', None):
event.add('location', location.location)
if getattr(location, 'link', None):
event.add('url', location.link)
organizer = vCalAddress(f'MAILTO:{user.email}')
organizer.params['cn'] = vText(name)
event.add('organizer', organizer)
cal.add_component(event)
response = HttpResponse(cal.to_ical(), content_type='text/calendar')
response['Content-Disposition'] = 'attachment; filename=adventures.ics'
return response
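The `end_date + timedelta(days=1)` above follows the iCalendar convention that DTEND is exclusive for all-day events, so including the final day of a visit requires adding one day. A minimal sketch of just that date arithmetic (helper name illustrative):

```python
from datetime import date, timedelta

def ical_all_day_range(start_date, end_date=None):
    """Compute (DTSTART, DTEND) for an all-day event. iCalendar's DTEND
    is exclusive, so the final day is included by adding one day."""
    end = (end_date or start_date) + timedelta(days=1)
    return start_date, end
```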


@@ -19,7 +19,8 @@ from django.contrib.contenttypes.models import ContentType
from adventures.models import (
Location, Collection, Transportation, Note, Checklist, ChecklistItem,
ContentImage, ContentAttachment, Category, Lodging, Visit, Trail, Activity,
CollectionItineraryItem
)
from worldtravel.models import VisitedCity, VisitedRegion, City, Region, Country
@@ -52,7 +53,8 @@ class BackupViewSet(viewsets.ViewSet):
'checklists': [],
'lodging': [],
'visited_cities': [],
'visited_regions': [],
'itinerary_items': []
}
# Export Visited Cities
@@ -75,6 +77,9 @@ class BackupViewSet(viewsets.ViewSet):
'icon': category.icon,
})
# Track images so we can reference them for collection primary images
image_export_map = {}
# Export Collections
for idx, collection in enumerate(user.collection_set.all()):
export_data['collections'].append({
@@ -175,7 +180,7 @@ class BackupViewSet(viewsets.ViewSet):
location_data['trails'].append(trail_data)
# Add images
for image_index, image in enumerate(location.images.all()):
image_data = {
'immich_id': image.immich_id,
'is_primary': image.is_primary,
@@ -184,6 +189,13 @@ class BackupViewSet(viewsets.ViewSet):
if image.image:
image_data['filename'] = image.image.name.split('/')[-1]
location_data['images'].append(image_data)
image_export_map[image.id] = {
'location_export_id': idx,
'image_index': image_index,
'immich_id': image.immich_id,
'filename': image_data['filename'],
}
# Add attachments
for attachment in location.attachments.all():
@@ -196,14 +208,21 @@ class BackupViewSet(viewsets.ViewSet):
location_data['attachments'].append(attachment_data)
export_data['locations'].append(location_data)
# Attach collection primary image references (if any)
for idx, collection in enumerate(user.collection_set.all()):
primary = collection.primary_image
if primary and primary.id in image_export_map:
export_data['collections'][idx]['primary_image'] = image_export_map[primary.id]
# Export Transportation
for idx, transport in enumerate(user.transportation_set.all()):
collection_export_id = None
if transport.collection:
collection_export_id = collection_name_to_id.get(transport.collection.name)
export_data['transportation'].append({
'export_id': idx,
'type': transport.type,
'name': transport.name,
'description': transport.description,
@@ -225,12 +244,13 @@ class BackupViewSet(viewsets.ViewSet):
})
# Export Notes
for idx, note in enumerate(user.note_set.all()):
collection_export_id = None
if note.collection:
collection_export_id = collection_name_to_id.get(note.collection.name)
export_data['notes'].append({
'export_id': idx,
'name': note.name,
'content': note.content,
'links': note.links,
@@ -240,12 +260,13 @@ class BackupViewSet(viewsets.ViewSet):
})
# Export Checklists
for idx, checklist in enumerate(user.checklist_set.all()):
collection_export_id = None
if checklist.collection:
collection_export_id = collection_name_to_id.get(checklist.collection.name)
checklist_data = {
'export_id': idx,
'name': checklist.name,
'date': checklist.date.isoformat() if checklist.date else None,
'is_public': checklist.is_public,
@@ -263,12 +284,13 @@ class BackupViewSet(viewsets.ViewSet):
export_data['checklists'].append(checklist_data)
# Export Lodging
for idx, lodging in enumerate(user.lodging_set.all()):
collection_export_id = None
if lodging.collection:
collection_export_id = collection_name_to_id.get(lodging.collection.name)
export_data['lodging'].append({
'export_id': idx,
'name': lodging.name,
'type': lodging.type,
'description': lodging.description,
@@ -286,6 +308,41 @@ class BackupViewSet(viewsets.ViewSet):
'collection_export_id': collection_export_id
})
# Export Itinerary Items
# Create export_id mappings for all content types
location_id_to_export_id = {loc.id: idx for idx, loc in enumerate(user.location_set.all())}
transportation_id_to_export_id = {t.id: idx for idx, t in enumerate(user.transportation_set.all())}
note_id_to_export_id = {n.id: idx for idx, n in enumerate(user.note_set.all())}
lodging_id_to_export_id = {l.id: idx for idx, l in enumerate(user.lodging_set.all())}
checklist_id_to_export_id = {c.id: idx for idx, c in enumerate(user.checklist_set.all())}
for collection_idx, collection in enumerate(user.collection_set.all()):
for itinerary_item in collection.itinerary_items.all():
content_type_str = itinerary_item.content_type.model
item_reference = None
# Determine how to reference the item based on content type using export_ids
if content_type_str == 'location':
item_reference = location_id_to_export_id.get(itinerary_item.object_id)
elif content_type_str == 'transportation':
item_reference = transportation_id_to_export_id.get(itinerary_item.object_id)
elif content_type_str == 'note':
item_reference = note_id_to_export_id.get(itinerary_item.object_id)
elif content_type_str == 'lodging':
item_reference = lodging_id_to_export_id.get(itinerary_item.object_id)
elif content_type_str == 'checklist':
item_reference = checklist_id_to_export_id.get(itinerary_item.object_id)
if item_reference is not None:
export_data['itinerary_items'].append({
'collection_export_id': collection_idx,
'content_type': content_type_str,
'item_reference': item_reference,
'date': itinerary_item.date.isoformat() if itinerary_item.date else None,
'is_global': itinerary_item.is_global,
'order': itinerary_item.order
})
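Database UUIDs do not survive a backup/restore cycle, so the export above references rows by their position in each queryset (the `export_id`). A minimal standalone sketch of that mapping scheme, with plain dicts standing in for the Django querysets:

```python
# Stand-ins for database rows; the real export iterates Django querysets.
locations = [{"id": "uuid-a"}, {"id": "uuid-b"}, {"id": "uuid-c"}]

# Position within the export becomes the stable reference (export_id).
location_id_to_export_id = {loc["id"]: idx for idx, loc in enumerate(locations)}

# An itinerary item is exported by export_id, not by database primary key,
# so references survive the UUID regeneration that happens on import.
itinerary_item = {"content_type": "location", "object_id": "uuid-b"}
item_reference = location_id_to_export_id.get(itinerary_item["object_id"])

# Dangling references resolve to None and the item is skipped, mirroring
# the `if item_reference is not None` guard in the export loop.
missing_reference = location_id_to_export_id.get("uuid-gone")
```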
# Create ZIP file
with tempfile.NamedTemporaryFile(delete=False, suffix='.zip') as tmp_file:
with zipfile.ZipFile(tmp_file.name, 'w', zipfile.ZIP_DEFLATED) as zip_file:
@@ -402,6 +459,9 @@ class BackupViewSet(viewsets.ViewSet):
def _clear_user_data(self, user):
"""Clear all existing user data before import"""
# Delete itinerary items first (they reference collections and content)
CollectionItineraryItem.objects.filter(collection__user=user).delete()
# Delete in reverse order of dependencies
user.activity_set.all().delete() # Delete activities first
user.trail_set.all().delete() # Delete trails
@@ -439,7 +499,7 @@ class BackupViewSet(viewsets.ViewSet):
'transportation': 0, 'notes': 0, 'checklists': 0,
'checklist_items': 0, 'lodging': 0, 'images': 0,
'attachments': 0, 'visited_cities': 0, 'visited_regions': 0,
'trails': 0, 'activities': 0, 'gpx_files': 0, 'itinerary_items': 0
}
# Import Visited Cities
@@ -475,6 +535,9 @@ class BackupViewSet(viewsets.ViewSet):
category_map[cat_data['name']] = category
summary['categories'] += 1
pending_primary_images = []
location_images_map = {}
# Import Collections
for col_data in backup_data.get('collections', []):
collection = Collection.objects.create(
@@ -498,6 +561,13 @@ class BackupViewSet(viewsets.ViewSet):
collection.shared_with.add(shared_user)
except User.DoesNotExist:
pass
# Defer primary image assignment until images are created
if col_data.get('primary_image'):
pending_primary_images.append({
'collection_export_id': col_data['export_id'],
'data': col_data['primary_image'],
})
# Import Locations
for adv_data in backup_data.get('locations', []):
@@ -541,6 +611,7 @@ class BackupViewSet(viewsets.ViewSet):
)
location.save(_skip_geocode=True) # Skip geocoding for now
location_map[adv_data['export_id']] = location
location_images_map.setdefault(adv_data['export_id'], [])
# Add to collections using export_ids - MUST be done after save()
for collection_export_id in adv_data.get('collection_export_ids', []):
@@ -638,13 +709,14 @@ class BackupViewSet(viewsets.ViewSet):
for img_data in adv_data.get('images', []):
immich_id = img_data.get('immich_id')
if immich_id:
new_img = ContentImage.objects.create(
user=user,
immich_id=immich_id,
is_primary=img_data.get('is_primary', False),
content_type=content_type,
object_id=location.id
)
location_images_map[adv_data['export_id']].append(new_img)
summary['images'] += 1
else:
filename = img_data.get('filename')
@@ -652,13 +724,14 @@ class BackupViewSet(viewsets.ViewSet):
try:
img_content = zip_file.read(f'images/{filename}')
img_file = ContentFile(img_content, name=filename)
new_img = ContentImage.objects.create(
user=user,
image=img_file,
is_primary=img_data.get('is_primary', False),
content_type=content_type,
object_id=location.id
)
location_images_map[adv_data['export_id']].append(new_img)
summary['images'] += 1
except KeyError:
pass
@@ -682,14 +755,32 @@ class BackupViewSet(viewsets.ViewSet):
pass
summary['locations'] += 1
# Apply primary image selections now that images exist
for entry in pending_primary_images:
collection = collection_map.get(entry['collection_export_id'])
data = entry.get('data', {}) or {}
if not collection:
continue
loc_export_id = data.get('location_export_id')
img_index = data.get('image_index')
if loc_export_id is None or img_index is None:
continue
images_for_location = location_images_map.get(loc_export_id, [])
if 0 <= img_index < len(images_for_location):
collection.primary_image = images_for_location[img_index]
collection.save(update_fields=['primary_image'])
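Primary images cannot be linked while collections are imported, because the referenced `ContentImage` rows are only created later during location import; hence the two-phase deferral above. The resolution step reduces to an index lookup, sketched here with plain lists in place of model instances (names hypothetical):

```python
# Images created during location import, keyed by the location's export_id.
location_images_map = {0: ["img-a", "img-b"], 1: ["img-c"]}

def resolve_primary(entry):
    """Return the image referenced by a deferred primary-image entry, or None."""
    data = entry.get("data") or {}
    loc_export_id = data.get("location_export_id")
    img_index = data.get("image_index")
    if loc_export_id is None or img_index is None:
        return None
    images = location_images_map.get(loc_export_id, [])
    # Bounds check mirrors `0 <= img_index < len(images_for_location)` above.
    if 0 <= img_index < len(images):
        return images[img_index]
    return None

found = resolve_primary({"data": {"location_export_id": 0, "image_index": 1}})
out_of_range = resolve_primary({"data": {"location_export_id": 1, "image_index": 5}})
```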
# Import Transportation
transportation_map = {} # Map export_id to actual transportation object
for trans_data in backup_data.get('transportation', []):
collection = None
if trans_data.get('collection_export_id') is not None:
collection = collection_map.get(trans_data['collection_export_id'])
transportation = Transportation.objects.create(
user=user,
type=trans_data['type'],
name=trans_data['name'],
@@ -710,15 +801,19 @@ class BackupViewSet(viewsets.ViewSet):
is_public=trans_data.get('is_public', False),
collection=collection
)
# Only add to map if export_id exists (for backward compatibility with old backups)
if 'export_id' in trans_data:
transportation_map[trans_data['export_id']] = transportation
summary['transportation'] += 1
# Import Notes
note_map = {} # Map export_id to actual note object
for note_data in backup_data.get('notes', []):
collection = None
if note_data.get('collection_export_id') is not None:
collection = collection_map.get(note_data['collection_export_id'])
note = Note.objects.create(
user=user,
name=note_data['name'],
content=note_data.get('content'),
@@ -727,9 +822,13 @@ class BackupViewSet(viewsets.ViewSet):
is_public=note_data.get('is_public', False),
collection=collection
)
# Only add to map if export_id exists (for backward compatibility with old backups)
if 'export_id' in note_data:
note_map[note_data['export_id']] = note
summary['notes'] += 1
# Import Checklists
checklist_map = {} # Map export_id to actual checklist object
for check_data in backup_data.get('checklists', []):
collection = None
if check_data.get('collection_export_id') is not None:
@@ -753,15 +852,19 @@ class BackupViewSet(viewsets.ViewSet):
)
summary['checklist_items'] += 1
# Only add to map if export_id exists (for backward compatibility with old backups)
if 'export_id' in check_data:
checklist_map[check_data['export_id']] = checklist
summary['checklists'] += 1
# Import Lodging
lodging_map = {} # Map export_id to actual lodging object
for lodg_data in backup_data.get('lodging', []):
collection = None
if lodg_data.get('collection_export_id') is not None:
collection = collection_map.get(lodg_data['collection_export_id'])
lodging = Lodging.objects.create(
user=user,
name=lodg_data['name'],
type=lodg_data.get('type', 'other'),
@@ -779,6 +882,51 @@ class BackupViewSet(viewsets.ViewSet):
is_public=lodg_data.get('is_public', False),
collection=collection
)
# Only add to map if export_id exists (for backward compatibility with old backups)
if 'export_id' in lodg_data:
lodging_map[lodg_data['export_id']] = lodging
summary['lodging'] += 1
# Import Itinerary Items
# Maps already created during import of each content type
for itinerary_data in backup_data.get('itinerary_items', []):
collection = collection_map.get(itinerary_data['collection_export_id'])
if not collection:
continue
content_type_str = itinerary_data['content_type']
item_reference = itinerary_data['item_reference']
# Get the actual object based on content type
content_object = None
content_type = None
if content_type_str == 'location':
content_object = location_map.get(item_reference) # item_reference is export_id
content_type = ContentType.objects.get(model='location')
elif content_type_str == 'transportation':
content_object = transportation_map.get(item_reference) # item_reference is export_id
content_type = ContentType.objects.get(model='transportation')
elif content_type_str == 'note':
content_object = note_map.get(item_reference) # item_reference is export_id
content_type = ContentType.objects.get(model='note')
elif content_type_str == 'lodging':
content_object = lodging_map.get(item_reference) # item_reference is export_id
content_type = ContentType.objects.get(model='lodging')
elif content_type_str == 'checklist':
content_object = checklist_map.get(item_reference) # item_reference is export_id
content_type = ContentType.objects.get(model='checklist')
if content_object and content_type:
CollectionItineraryItem.objects.create(
collection=collection,
content_type=content_type,
object_id=content_object.id,
date=itinerary_data.get('date') if not itinerary_data.get('is_global') else None,
is_global=bool(itinerary_data.get('is_global', False)),
order=itinerary_data['order']
)
summary['itinerary_items'] += 1
return summary

View File

@@ -0,0 +1,518 @@
from adventures.models import Location, Collection, CollectionItineraryItem, Transportation, Note, Lodging, Visit, Checklist, CollectionItineraryDay
import datetime
from django.utils.dateparse import parse_date, parse_datetime
from django.contrib.contenttypes.models import ContentType
from django.db import models
from adventures.serializers import CollectionItineraryItemSerializer, CollectionItineraryDaySerializer
from adventures.utils.itinerary import reorder_itinerary_items
from adventures.utils.autogenerate_itinerary import auto_generate_itinerary
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
from rest_framework.exceptions import ValidationError, PermissionDenied
from adventures.permissions import IsOwnerOrSharedWithFullAccess
from django.db.models import Q
from django.db import transaction
from django.utils import timezone
class ItineraryViewSet(viewsets.ModelViewSet):
serializer_class = CollectionItineraryItemSerializer
permission_classes = [IsOwnerOrSharedWithFullAccess]
def get_queryset(self):
user = self.request.user
if not user.is_authenticated:
return CollectionItineraryItem.objects.none()
# Return itinerary items from collections the user owns or is shared with
return CollectionItineraryItem.objects.filter(
Q(collection__user=user) | Q(collection__shared_with=user)
).distinct().select_related('collection', 'collection__user').order_by('date', 'order')
def create(self, request, *args, **kwargs):
"""
Accept 'content_type' as either a ContentType PK or a model name string
(e.g. 'location', 'lodging', 'transportation', 'note', 'visit'). If a
string is provided we resolve it to the appropriate ContentType PK and
validate the referenced object exists and the user has permission to
access it.
Optional parameter 'update_item_date': if True, update the actual item's
date field to match the itinerary date.
"""
if not request.user.is_authenticated:
return Response({"error": "User is not authenticated"}, status=status.HTTP_401_UNAUTHORIZED)
data = request.data.copy()
content_type_val = data.get('content_type')
object_id = data.get('object_id')
update_item_date = data.get('update_item_date', False)
if isinstance(update_item_date, str):
    update_item_date = update_item_date.lower() in ['1', 'true', 'yes']
target_date = data.get('date')
is_global = data.get('is_global', False)
# Normalize is_global to boolean
if isinstance(is_global, str):
is_global = is_global.lower() in ['1', 'true', 'yes']
data['is_global'] = is_global
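Form-encoded and query-string requests deliver booleans as strings, so `"false"` would be truthy without the normalization above. The same idea as a small reusable helper (name hypothetical):

```python
def as_bool(value):
    """Normalize booleans that may arrive as strings from form-encoded requests."""
    if isinstance(value, str):
        # Case-insensitive match against the accepted truthy spellings.
        return value.lower() in ("1", "true", "yes")
    return bool(value)
```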
# Support legacy field 'location' -> treat as content_type='location'
if not content_type_val and data.get('location'):
content_type_val = 'location'
object_id = object_id or data.get('location')
data['content_type'] = content_type_val
data['object_id'] = object_id
# If content_type is provided as a string model name, map to ContentType PK
if content_type_val and isinstance(content_type_val, str):
# If it's already numeric-like, leave it
if not content_type_val.isdigit():
content_map = {
'location': Location,
'transportation': Transportation,
'note': Note,
'lodging': Lodging,
'visit': Visit,
'checklist': Checklist,
}
if content_type_val not in content_map:
return Response({
'error': f"Invalid content_type. Must be one of: {', '.join(content_map.keys())}"
}, status=status.HTTP_400_BAD_REQUEST)
model_class = content_map[content_type_val]
# Validate referenced object exists
try:
content_object = model_class.objects.get(id=object_id)
except (ValueError, model_class.DoesNotExist):
return Response({'error': f"{content_type_val} not found"}, status=status.HTTP_404_NOT_FOUND)
# Permission check
permission_checker = IsOwnerOrSharedWithFullAccess()
if not permission_checker.has_object_permission(request, self, content_object):
return Response({'error': 'User does not have permission to access this content'}, status=status.HTTP_403_FORBIDDEN)
ct = ContentType.objects.get_for_model(model_class)
data['content_type'] = ct.pk
# If update_item_date is True and target_date is provided, update the item's date
if update_item_date and target_date and content_object:
# Extract just the date part if target_date is datetime
clean_date = str(target_date).split('T')[0] if 'T' in str(target_date) else str(target_date)
# For locations, create an all-day visit instead of updating a date field
if content_type_val == 'location':
# Determine start/end bounds. Support single date or optional start_date/end_date in payload.
# Prefer explicit start_date/end_date if provided, otherwise use the single target date.
start_input = data.get('start_date') or clean_date
end_input = data.get('end_date') or clean_date
def parse_bounds(val):
if not val:
return None
s = str(val)
# If datetime string provided, parse directly
if 'T' in s:
dt = parse_datetime(s)
return dt
# Otherwise parse as date and convert to datetime at start/end of day
d = parse_date(s)
if d:
return d
return None
# Normalize to date or datetime values
parsed_start = parse_bounds(start_input)
parsed_end = parse_bounds(end_input)
# If both are plain dates, convert to datetimes spanning the day
if isinstance(parsed_start, datetime.date) and not isinstance(parsed_start, datetime.datetime):
new_start = datetime.datetime.combine(parsed_start, datetime.time.min)
elif isinstance(parsed_start, datetime.datetime):
new_start = parsed_start
else:
new_start = None
if isinstance(parsed_end, datetime.date) and not isinstance(parsed_end, datetime.datetime):
new_end = datetime.datetime.combine(parsed_end, datetime.time.max)
elif isinstance(parsed_end, datetime.datetime):
new_end = parsed_end
else:
new_end = None
# If we couldn't parse bounds, fallback to the all-day target date
if not new_start or not new_end:
try:
d = parse_date(clean_date)
new_start = datetime.datetime.combine(d, datetime.time.min)
new_end = datetime.datetime.combine(d, datetime.time.max)
except Exception:
new_start = None
new_end = None
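The all-day fallback expands a plain date into a datetime range covering the whole day via `datetime.time.min`/`time.max`. Isolated as a sketch:

```python
import datetime

def day_bounds(d):
    """Expand a date into datetimes spanning the whole day (00:00:00 to 23:59:59.999999)."""
    start = datetime.datetime.combine(d, datetime.time.min)
    end = datetime.datetime.combine(d, datetime.time.max)
    return start, end

start, end = day_bounds(datetime.date(2024, 1, 5))
```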
# Update existing visit or create new one
# When moving between days, update the existing visit to preserve visit ID and data
if new_start and new_end:
source_visit_id = data.get('source_visit_id')
# If source visit provided, update it
if source_visit_id:
try:
source_visit = Visit.objects.get(id=source_visit_id, location=content_object)
source_visit.start_date = new_start
source_visit.end_date = new_end
source_visit.save(update_fields=['start_date', 'end_date'])
except Visit.DoesNotExist:
# Fall back to create logic below
pass
# If no source visit was provided (or the update above fell through), check for overlapping visits
if not source_visit_id or not Visit.objects.filter(id=source_visit_id, location=content_object).exists():
# Check for exact match to avoid duplicates
exact_match = Visit.objects.filter(
location=content_object,
start_date=new_start,
end_date=new_end
).exists()
if not exact_match:
# Check for any overlapping visits
overlap_q = Q(start_date__lte=new_end) & Q(end_date__gte=new_start)
existing = Visit.objects.filter(location=content_object).filter(overlap_q).first()
if existing:
# Update existing overlapping visit
existing.start_date = new_start
existing.end_date = new_end
existing.save(update_fields=['start_date', 'end_date'])
else:
# Create new visit
Visit.objects.create(
location=content_object,
start_date=new_start,
end_date=new_end,
notes="Created from itinerary planning"
)
else:
# For other item types, update their date field and preserve duration
if content_type_val == 'transportation':
# For transportation: update date and end_date, preserving duration and times
if hasattr(content_object, 'date') and hasattr(content_object, 'end_date'):
old_date = content_object.date
old_end_date = content_object.end_date
if old_date and old_end_date:
# Extract time from original start date
original_time = old_date.time()
# Create new_date with the new date but preserve the original time
new_date = datetime.datetime.combine(parse_date(clean_date), original_time)
# Duration = end_date - date
duration = old_end_date - old_date
# Apply same duration to new date
new_end_date = new_date + duration
else:
# No original end date, set to same as start date
new_date = datetime.datetime.combine(parse_date(clean_date), datetime.time.min)
new_end_date = new_date
content_object.date = new_date
content_object.end_date = new_end_date
content_object.save(update_fields=['date', 'end_date'])
elif content_type_val == 'lodging':
# For lodging: update check_in and check_out, preserving duration and times
if hasattr(content_object, 'check_in') and hasattr(content_object, 'check_out'):
old_check_in = content_object.check_in
old_check_out = content_object.check_out
if old_check_in and old_check_out:
# Extract time from original check_in
original_time = old_check_in.time()
# Create new_check_in with the new date but preserve the original time
new_check_in = datetime.datetime.combine(parse_date(clean_date), original_time)
# Duration = check_out - check_in
duration = old_check_out - old_check_in
# Apply same duration to new check_in
new_check_out = new_check_in + duration
else:
# No original dates: check_in at midnight on selected day, check_out at midnight next day
new_check_in = datetime.datetime.combine(parse_date(clean_date), datetime.time.min)
new_check_out = new_check_in + datetime.timedelta(days=1)
content_object.check_in = new_check_in
content_object.check_out = new_check_out
content_object.save(update_fields=['check_in', 'check_out'])
else:
# For note, checklist, etc. - just update the date field
date_field = None
if hasattr(content_object, 'date'):
date_field = 'date'
elif hasattr(content_object, 'start_date'):
date_field = 'start_date'
if date_field:
setattr(content_object, date_field, clean_date)
content_object.save(update_fields=[date_field])
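Both the transportation and lodging branches above apply the same duration-preserving move: keep the original start time-of-day, shift to the new date, and re-add the old span. A minimal sketch of that arithmetic (helper name hypothetical):

```python
import datetime

def reschedule(start, end, new_day):
    """Move a datetime range to a new day, keeping the start time and total duration."""
    new_start = datetime.datetime.combine(new_day, start.time())
    # Duration = end - start, re-applied from the new start.
    return new_start, new_start + (end - start)

old_in = datetime.datetime(2024, 1, 1, 15, 0)   # 3 pm check-in
old_out = datetime.datetime(2024, 1, 3, 11, 0)  # 11 am check-out, two days later
new_in, new_out = reschedule(old_in, old_out, datetime.date(2024, 2, 10))
```

The same stay length (1 day, 20 hours) lands on the new dates with the check-in and check-out times intact.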
# Ensure order is unique for this collection+group combination (day or global)
collection_id = data.get('collection')
item_date = data.get('date')
item_order = data.get('order', 0)
# Basic XOR validation between date and is_global
if is_global and item_date:
return Response({'error': 'Global itinerary items must not include a date.'}, status=status.HTTP_400_BAD_REQUEST)
if (not is_global) and not item_date:
return Response({'error': 'Dated itinerary items must include a date.'}, status=status.HTTP_400_BAD_REQUEST)
# Validate that the itinerary date (if provided) falls within the
# collection's start_date/end_date range (if those bounds are set).
if collection_id and item_date and not is_global:
# Try parse date or datetime-like values
parsed_date = None
try:
parsed_date = parse_date(str(item_date))
except Exception:
parsed_date = None
if parsed_date is None:
try:
dt = parse_datetime(str(item_date))
if dt:
parsed_date = dt.date()
except Exception:
parsed_date = None
if parsed_date is not None:
try:
collection_obj = Collection.objects.get(id=collection_id)
except Collection.DoesNotExist:
return Response({'error': 'Collection not found'}, status=status.HTTP_404_NOT_FOUND)
if collection_obj.start_date and parsed_date < collection_obj.start_date:
return Response({'error': 'Itinerary item date is before the collection start_date'}, status=status.HTTP_400_BAD_REQUEST)
if collection_obj.end_date and parsed_date > collection_obj.end_date:
return Response({'error': 'Itinerary item date is after the collection end_date'}, status=status.HTTP_400_BAD_REQUEST)
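The bounds check above only rejects a date when the corresponding collection bound is actually set, so open-ended collections accept any date. Reduced to a sketch:

```python
import datetime

def within_bounds(item_date, start=None, end=None):
    """True if item_date falls inside the optional [start, end] range; unset bounds pass."""
    if start and item_date < start:
        return False
    if end and item_date > end:
        return False
    return True

ok = within_bounds(datetime.date(2024, 6, 5),
                   start=datetime.date(2024, 6, 1),
                   end=datetime.date(2024, 6, 10))
too_early = within_bounds(datetime.date(2024, 5, 30), start=datetime.date(2024, 6, 1))
open_ended = within_bounds(datetime.date(2030, 1, 1))  # no bounds set
```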
if collection_id:
if is_global:
# Max order within global group
existing_max = CollectionItineraryItem.objects.filter(
collection_id=collection_id,
is_global=True
).aggregate(max_order=models.Max('order'))['max_order']
if existing_max is None:
existing_max = -1
if item_order is None or item_order <= existing_max:
data['order'] = existing_max + 1
elif item_date:
# Find the maximum order for this collection+date
existing_max = CollectionItineraryItem.objects.filter(
collection_id=collection_id,
date=item_date,
is_global=False
).aggregate(max_order=models.Max('order'))['max_order']
# Check if the requested order conflicts with existing items
if existing_max is not None and item_order <= existing_max:
# Assign next available order
data['order'] = existing_max + 1
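Both the global and dated branches above resolve order collisions the same way: a requested order at or below the group's current maximum is bumped to max + 1, which appends the item to the end of its group. As a standalone sketch:

```python
def next_order(existing_orders, requested):
    """Keep the requested order if it is free, otherwise append at max + 1."""
    existing_max = max(existing_orders) if existing_orders else -1
    if requested is None or requested <= existing_max:
        return existing_max + 1
    return requested

bumped = next_order([0, 1, 2], requested=1)   # collides -> appended at the end
kept = next_order([0, 1, 2], requested=7)     # free slot -> kept as requested
first = next_order([], requested=None)        # empty group -> 0
```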
# Proceed with normal serializer flow using modified data
serializer = self.get_serializer(data=data)
serializer.is_valid(raise_exception=True)
self.perform_create(serializer)
# If we updated the item's date, include the updated object in response for frontend sync
response_data = serializer.data
if update_item_date and content_type_val and object_id:
if content_type_val == 'transportation':
try:
t = Transportation.objects.get(id=object_id)
from adventures.serializers import TransportationSerializer
response_data['updated_object'] = TransportationSerializer(t).data
except Transportation.DoesNotExist:
pass
elif content_type_val == 'lodging':
try:
l = Lodging.objects.get(id=object_id)
from adventures.serializers import LodgingSerializer
response_data['updated_object'] = LodgingSerializer(l).data
except Lodging.DoesNotExist:
pass
headers = self.get_success_headers(serializer.data)
return Response(response_data, status=status.HTTP_201_CREATED, headers=headers)
@transaction.atomic
def destroy(self, request, *args, **kwargs):
"""
Override destroy to remove associated visits when deleting a location itinerary item.
When removing a location from the itinerary, any PLANNED visits (future visits) at
that location on the same date as the itinerary item should also be removed.
If preserve_visits=true query parameter is provided, visits will NOT be deleted.
This is useful when moving items to global/trip context where we want to keep the visits.
"""
instance = self.get_object()
preserve_visits = request.query_params.get('preserve_visits', 'false').lower() == 'true'
# Check if this is a location type itinerary item
location_ct = ContentType.objects.get_for_model(Location)
if instance.content_type == location_ct and instance.object_id and not preserve_visits:
try:
location = Location.objects.get(id=instance.object_id)
itinerary_date = instance.date
if itinerary_date:
# Convert itinerary date to datetime for comparison
if isinstance(itinerary_date, str):
itinerary_date = parse_date(itinerary_date)
# Find and delete visits at this location on this date
# When removing from itinerary, we remove the associated visit
visits_to_delete = Visit.objects.filter(
location=location,
start_date__date=itinerary_date
)
deleted_count = visits_to_delete.count()
if deleted_count > 0:
visits_to_delete.delete()
except Location.DoesNotExist:
# Location doesn't exist, just proceed with deleting the itinerary item
pass
# Call parent destroy to delete the itinerary item
return super().destroy(request, *args, **kwargs)
@action(detail=False, methods=['post'], url_path='reorder')
@transaction.atomic
def reorder(self, request):
"""
Reorder itinerary items in bulk.
Expected payload:
{
"items": [
{"id": "uuid", "date": "2024-01-01", "order": 0},
{"id": "uuid", "date": "2024-01-01", "order": 1},
...
]
}
"""
items_data = request.data.get('items', [])
# Delegate to reusable helper which handles validation, permission checks
# and the two-phase update to avoid unique constraint races.
updated_items = reorder_itinerary_items(request.user, items_data)
serializer = self.get_serializer(updated_items, many=True)
return Response(serializer.data, status=status.HTTP_200_OK)
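The comment above notes that `reorder_itinerary_items` uses a two-phase update to avoid unique-constraint races: with a uniqueness constraint on (collection, date, order), swapping two orders directly would collide mid-update. The idea, sketched with a dict standing in for the rows (helper name hypothetical):

```python
def two_phase_reorder(items, new_orders):
    """items: {item_id: order}. Park rows at unused orders, then write the finals."""
    # Phase 1: shift every affected row past the current maximum so no two
    # rows ever share an order while the update is in flight.
    offset = max(items.values(), default=0) + len(items) + 1
    for item_id in new_orders:
        items[item_id] = items[item_id] + offset
    # Phase 2: write the final positions.
    for item_id, order in new_orders.items():
        items[item_id] = order
    return items

# Swapping two adjacent items would violate uniqueness in a single pass.
result = two_phase_reorder({"a": 0, "b": 1}, {"a": 1, "b": 0})
```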
@action(detail=False, methods=['post'], url_path='auto-generate')
@transaction.atomic
def auto_generate(self, request):
"""
Auto-generate itinerary items for a collection based on dated records.
Only works when:
- Collection has zero itinerary items
- Collection has dated records (visits, lodging, transportation, notes, checklists)
Expected payload:
{
"collection_id": "uuid"
}
Returns: List of created itinerary items
"""
collection_id = request.data.get('collection_id')
if not collection_id:
return Response(
{"error": "collection_id is required"},
status=status.HTTP_400_BAD_REQUEST
)
# Get collection and check permissions
try:
collection = Collection.objects.get(id=collection_id)
except Collection.DoesNotExist:
return Response(
{"error": "Collection not found"},
status=status.HTTP_404_NOT_FOUND
)
# Permission check: user must be collection owner or in shared_with
if not (collection.user == request.user or collection.shared_with.filter(id=request.user.id).exists()):
return Response(
{"error": "You do not have permission to modify this collection"},
status=status.HTTP_403_FORBIDDEN
)
try:
created_items = auto_generate_itinerary(collection)
serializer = self.get_serializer(created_items, many=True)
return Response({
"message": f"Successfully generated {len(created_items)} itinerary items",
"items": serializer.data
}, status=status.HTTP_201_CREATED)
except ValidationError as e:
return Response(e.detail, status=status.HTTP_400_BAD_REQUEST)
class ItineraryDayViewSet(viewsets.ModelViewSet):
"""ViewSet for managing itinerary day metadata (names and descriptions)"""
serializer_class = CollectionItineraryDaySerializer
permission_classes = [IsOwnerOrSharedWithFullAccess]
def get_queryset(self):
user = self.request.user
if not user.is_authenticated:
return CollectionItineraryDay.objects.none()
# Return day metadata from collections the user owns or is shared with
return CollectionItineraryDay.objects.filter(
Q(collection__user=user) | Q(collection__shared_with=user)
).distinct().select_related('collection', 'collection__user').order_by('date')
def perform_create(self, serializer):
"""Ensure the user has permission to modify the collection"""
collection = serializer.validated_data.get('collection')
if not collection:
raise ValidationError("Collection is required")
# Check if user has permission to modify this collection
if not (collection.user == self.request.user or
collection.shared_with.filter(id=self.request.user.id).exists()):
raise PermissionDenied("You do not have permission to modify this collection")
serializer.save()
def perform_update(self, serializer):
"""Ensure the user has permission to modify the collection"""
instance = self.get_object()
collection = instance.collection
# Check if user has permission to modify this collection
if not (collection.user == self.request.user or
collection.shared_with.filter(id=self.request.user.id).exists()):
raise PermissionDenied("You do not have permission to modify this collection")
serializer.save()
def perform_destroy(self, instance):
"""Ensure the user has permission to modify the collection"""
collection = instance.collection
# Check if user has permission to modify this collection
if not (collection.user == self.request.user or
collection.shared_with.filter(id=self.request.user.id).exists()):
raise PermissionDenied("You do not have permission to modify this collection")
instance.delete()

View File

@@ -1,15 +1,16 @@
from django.utils import timezone
from django.db import transaction
from django.core.exceptions import PermissionDenied
from django.db.models import Q, Max, Prefetch
from django.db.models.functions import Lower
from rest_framework import viewsets, status
from rest_framework.decorators import action
from rest_framework.response import Response
import requests
from adventures.models import Location, Category, CollectionItineraryItem, Visit
from django.contrib.contenttypes.models import ContentType
from adventures.permissions import IsOwnerOrSharedWithFullAccess
from adventures.serializers import LocationSerializer, MapPinSerializer, CalendarLocationSerializer
from adventures.utils import pagination
class LocationViewSet(viewsets.ModelViewSet):
@@ -208,6 +209,29 @@ class LocationViewSet(viewsets.ModelViewSet):
serializer = self.get_serializer(queryset, many=True, context={'nested': nested, 'allowed_nested_fields': allowedNestedFields})
return Response(serializer.data)
@action(detail=False, methods=['get'])
def calendar(self, request):
"""Return a lightweight payload for calendar rendering."""
if not request.user.is_authenticated:
return Response({"error": "User is not authenticated"}, status=status.HTTP_401_UNAUTHORIZED)
queryset = (
self.get_queryset()
.filter(visits__isnull=False)
.select_related('category')
.prefetch_related(
Prefetch(
'visits',
queryset=Visit.objects.only('id', 'start_date', 'end_date', 'timezone')
)
)
.only('id', 'name', 'location', 'category__name', 'category__icon')
.distinct()
)
serializer = CalendarLocationSerializer(queryset, many=True)
return Response(serializer.data)
@action(detail=True, methods=['get'], url_path='additional-info')
def additional_info(self, request, pk=None):
"""Get adventure with additional sunrise/sunset information."""
@@ -277,6 +301,25 @@ class LocationViewSet(viewsets.ModelViewSet):
raise PermissionDenied(
f"You don't have permission to remove this location from one of the collections it's linked to."
)
else:
# If the removal is permitted, also remove any itinerary items
# in this collection that reference this Location instance.
try:
ct = ContentType.objects.get_for_model(instance.__class__)
# Try deleting by native PK type first, then by string.
qs = CollectionItineraryItem.objects.filter(
collection=collection, content_type=ct, object_id=instance.pk
)
if qs.exists():
qs.delete()
else:
CollectionItineraryItem.objects.filter(
collection=collection, content_type=ct, object_id=str(instance.pk)
).delete()
except Exception:
# Don't raise on cleanup failures; deletion of itinerary items
# is best-effort and shouldn't block the update operation.
pass
def _validate_collection_permissions(self, collections):
"""Validate permissions for all collections (used in create)."""

View File

@@ -1,11 +1,13 @@
from rest_framework import viewsets, status
from rest_framework.response import Response
from django.db.models import Q
from adventures.models import Note, CollectionItineraryItem
from adventures.serializers import NoteSerializer
from rest_framework.exceptions import PermissionDenied
from adventures.permissions import IsOwnerOrSharedWithFullAccess
from rest_framework.decorators import action
from django.contrib.contenttypes.models import ContentType
from django.db import transaction
class NoteViewSet(viewsets.ModelViewSet):
queryset = Note.objects.all()
@@ -53,15 +55,18 @@ class NoteViewSet(viewsets.ModelViewSet):
# Retrieve the current object
instance = self.get_object()
# Store the old date before updating
old_date = instance.date
# Partially update the instance with the request data
serializer = self.get_serializer(instance, data=request.data, partial=True)
serializer.is_valid(raise_exception=True)
# Retrieve the collection from the validated data
new_collection = serializer.validated_data.get('collection')
new_date = serializer.validated_data.get('date', old_date)
user = request.user
if new_collection is not None and new_collection != instance.collection:
# Check if the user is the owner of the new collection
@@ -72,34 +77,15 @@ class NoteViewSet(viewsets.ModelViewSet):
if instance.collection is not None and instance.collection.user != user:
raise PermissionDenied("You cannot remove the collection as you are not the owner.")
# If the date changed, remove old itinerary items for this note on the old date
if old_date and new_date and old_date != new_date:
note_ct = ContentType.objects.get_for_model(Note)
old_itinerary_items = CollectionItineraryItem.objects.filter(
content_type=note_ct,
object_id=str(instance.id),
date=old_date
)
old_itinerary_items.delete()
# Perform the update
self.perform_update(serializer)
# Return the updated instance
return Response(serializer.data)

View File

@@ -5,197 +5,398 @@ from rest_framework.response import Response
from django.conf import settings
import requests
from geopy.distance import geodesic
import logging
from ..geocoding import search_google, search_osm
logger = logging.getLogger(__name__)
class RecommendationsViewSet(viewsets.ViewSet):
permission_classes = [IsAuthenticated]
OVERPASS_URL = "https://overpass-api.de/api/interpreter"
NOMINATIM_URL = "https://nominatim.openstreetmap.org/search"
HEADERS = {'User-Agent': 'AdventureLog Server'}
# Quality thresholds
MIN_GOOGLE_RATING = 3.0 # Minimum rating to include
MIN_GOOGLE_REVIEWS = 5 # Minimum number of reviews
MAX_RESULTS = 50 # Maximum results to return
def calculate_quality_score(self, place_data):
"""
Calculate a quality score based on multiple factors.
Higher score = better quality recommendation.
"""
import math
score = 0.0
# Rating contribution (0-50 points)
rating = place_data.get('rating')
if rating is not None and rating > 0:
score += (rating / 5.0) * 50
# Review count contribution (0-30 points, logarithmic scale)
review_count = place_data.get('review_count')
if review_count is not None and review_count > 0:
# Logarithmic scale: 10 reviews = ~10 pts, 100 = ~20 pts, 1000 = ~30 pts
score += min(30, math.log10(review_count) * 10)
# Distance penalty (0-20 points, closer is better)
distance_km = place_data.get('distance_km')
if distance_km is not None:
if distance_km < 1:
score += 20
elif distance_km < 5:
score += 15
elif distance_km < 10:
score += 10
elif distance_km < 20:
score += 5
# Verified/business status bonus (0-10 points)
if place_data.get('is_verified') or place_data.get('business_status') == 'OPERATIONAL':
score += 10
# Has photos bonus (0-5 points)
photos = place_data.get('photos')
if photos and len(photos) > 0:
score += 5
# Has opening hours bonus (0-5 points)
opening_hours = place_data.get('opening_hours')
if opening_hours and len(opening_hours) > 0:
score += 5
return round(score, 2)
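As a quick sanity check, the weighting described above can be reproduced standalone; the sketch below mirrors the scoring rules, and the sample place is hypothetical, not real API output.

```python
import math

# Standalone sketch of the quality-score weights; sample data is made up.
def quality_score(place):
    score = 0.0
    rating = place.get('rating')
    if rating:                        # rating contribution (0-50 points)
        score += (rating / 5.0) * 50
    reviews = place.get('review_count')
    if reviews:                       # review count, logarithmic (0-30 points)
        score += min(30, math.log10(reviews) * 10)
    dist = place.get('distance_km')
    if dist is not None:              # closer is better (0-20 points)
        score += 20 if dist < 1 else 15 if dist < 5 else 10 if dist < 10 else 5 if dist < 20 else 0
    if place.get('is_verified') or place.get('business_status') == 'OPERATIONAL':
        score += 10                   # operational/verified bonus
    if place.get('photos'):
        score += 5                    # has photos
    if place.get('opening_hours'):
        score += 5                    # has opening hours
    return round(score, 2)

sample_place = {
    'rating': 4.5, 'review_count': 100, 'distance_km': 3.2,
    'business_status': 'OPERATIONAL', 'photos': ['p1', 'p2'],
    'opening_hours': ['Mon: 9-5'],
}
print(quality_score(sample_place))  # 45 + 20 + 15 + 10 + 5 + 5 = 100.0
```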
def parse_google_places(self, places, origin):
"""
Parse Google Places API results into unified format.
Enhanced with quality filtering and comprehensive data extraction.
"""
locations = []
api_key = getattr(settings, 'GOOGLE_MAPS_API_KEY', None)
for place in places:
location = place.get('location', {})
types = place.get('types', [])
# Extract display name
display_name = place.get("displayName", {})
name = display_name.get("text") if isinstance(display_name, dict) else display_name
# Extract coordinates
lat = location.get('latitude')
lon = location.get('longitude')
if not name or not lat or not lon:
continue
# Extract rating information
rating = place.get('rating')
review_count = place.get('userRatingCount', 0)
# Quality filter: Skip low-rated or unreviewed places
if rating and rating < self.MIN_GOOGLE_RATING:
continue
if review_count < self.MIN_GOOGLE_REVIEWS:
continue
# Calculate distance
distance_km = geodesic(origin, (lat, lon)).km
# Extract address information
formatted_address = place.get("formattedAddress") or place.get("shortFormattedAddress")
# Extract business status
business_status = place.get('businessStatus')
is_operational = business_status == 'OPERATIONAL'
# Extract opening hours
opening_hours = place.get('regularOpeningHours', {})
current_opening_hours = place.get('currentOpeningHours', {})
is_open_now = current_opening_hours.get('openNow')
# Extract photos and construct URLs
photos = place.get('photos', [])
photo_urls = []
if photos and api_key:
# Get first 5 photos and construct full URLs
for photo in photos[:5]:
photo_name = photo.get('name', '')
if photo_name:
# Construct Google Places Photo API URL
# Format: https://places.googleapis.com/v1/{name}/media?key={key}&maxHeightPx=800&maxWidthPx=800
photo_url = f"https://places.googleapis.com/v1/{photo_name}/media?key={api_key}&maxHeightPx=800&maxWidthPx=800"
photo_urls.append(photo_url)
# Extract contact information
phone_number = place.get('nationalPhoneNumber') or place.get('internationalPhoneNumber')
website = place.get('websiteUri')
google_maps_uri = place.get('googleMapsUri')
# Extract price level
price_level = place.get('priceLevel')
# Extract editorial summary/description
editorial_summary = place.get('editorialSummary', {})
description = editorial_summary.get('text') if isinstance(editorial_summary, dict) else None
# Filter out unwanted types (generic categories)
filtered_types = [t for t in types if t not in ['point_of_interest', 'establishment']]
# Build unified response
place_data = {
"id": f"google:{place.get('id')}",
"external_id": place.get('id'),
"source": "google",
"name": name,
"description": description,
"latitude": lat,
"longitude": lon,
"address": formatted_address,
"distance_km": round(distance_km, 2),
"rating": rating,
"review_count": review_count,
"price_level": price_level,
"types": filtered_types,
"primary_type": filtered_types[0] if filtered_types else None,
"business_status": business_status,
"is_open_now": is_open_now,
"opening_hours": opening_hours.get('weekdayDescriptions', []) if opening_hours else None,
"phone_number": phone_number,
"website": website,
"google_maps_url": google_maps_uri,
"photos": photo_urls,
"is_verified": is_operational,
}
# Calculate quality score
place_data['quality_score'] = self.calculate_quality_score(place_data)
locations.append(place_data)
return locations
def parse_overpass_response(self, data, request, origin):
"""
Parse Overpass API (OSM) results into unified format.
Enhanced with quality filtering and comprehensive data extraction.
"""
nodes = data.get('elements', [])
locations = []
for node in nodes:
if node.get('type') not in ['node', 'way', 'relation']:
continue
tags = node.get('tags', {})
# Get coordinates (for ways/relations, use center)
lat = node.get('lat') or node.get('center', {}).get('lat')
lon = node.get('lon') or node.get('center', {}).get('lon')
# Extract name (with fallbacks)
name = tags.get('name') or tags.get('official_name') or tags.get('alt_name')
if not name or lat is None or lon is None:
continue
# Calculate distance if possible
distance_km = None
if origin:
distance_km = round(geodesic(origin, (lat, lon)).km, 2)
# Extract address information
address_parts = [
tags.get('addr:housenumber'),
tags.get('addr:street'),
tags.get('addr:suburb') or tags.get('addr:neighbourhood'),
tags.get('addr:city'),
tags.get('addr:state'),
tags.get('addr:postcode'),
tags.get('addr:country')
]
formatted_address = ", ".join(filter(None, address_parts)) or None
# Extract contact information
phone = tags.get('phone') or tags.get('contact:phone')
website = tags.get('website') or tags.get('contact:website') or tags.get('url')
# Extract opening hours
opening_hours = tags.get('opening_hours')
# Extract rating/stars (if available)
stars = tags.get('stars')
# Determine category/type hierarchy
category_keys = ['tourism', 'leisure', 'amenity', 'natural', 'historic', 'attraction', 'shop', 'sport']
types = [tags.get(key) for key in category_keys if key in tags]
primary_type = types[0] if types else None
# Extract description and additional info
description = tags.get('description') or tags.get('note')
wikipedia = tags.get('wikipedia') or tags.get('wikidata')
# Extract image if available
image = tags.get('image') or tags.get('wikimedia_commons')
# Quality filters for OSM data
# Skip if it's just a generic POI without specific category
if not primary_type:
continue
# Skip construction or disused places
if tags.get('disused') or tags.get('construction'):
continue
# Build unified response
place_data = {
"id": f"osm:{node.get('type')}:{node.get('id')}",
"external_id": str(node.get('id')),
"source": "osm",
"name": name,
"description": description,
"latitude": lat,
"longitude": lon,
"address": formatted_address,
"distance_km": distance_km,
"rating": None, # OSM doesn't have ratings
"review_count": None,
"price_level": None,
"types": types,
"primary_type": primary_type,
"business_status": None,
"is_open_now": None,
"opening_hours": [opening_hours] if opening_hours else None,
"phone_number": phone,
"website": website,
"google_maps_url": None,
"photos": [image] if image else [],
"is_verified": bool(wikipedia), # Has Wikipedia = more verified
"osm_type": node.get('type'),
"wikipedia": wikipedia,
"stars": stars,
}
# Calculate quality score (will be lower without ratings)
place_data['quality_score'] = self.calculate_quality_score(place_data)
locations.append(place_data)
return locations
def query_overpass(self, lat, lon, radius, category, request):
"""
Query Overpass API (OpenStreetMap) for nearby places.
Enhanced with better queries and error handling.
"""
# Limit radius for OSM to prevent timeouts (max 5km for OSM due to server limits)
osm_radius = min(radius, 5000)
# Build optimized query - use simpler queries and limit results
# Reduced timeout and simplified queries to prevent 504 errors
if category == 'tourism':
query = f"""
[out:json][timeout:25];
(
nwr["tourism"~"attraction|viewpoint|museum|gallery|zoo|aquarium"](around:{osm_radius},{lat},{lon});
nwr["historic"~"monument|castle|memorial"](around:{osm_radius},{lat},{lon});
nwr["leisure"~"park|garden|nature_reserve"](around:{osm_radius},{lat},{lon});
);
out center tags 50;
"""
elif category == 'lodging':
query = f"""
[out:json][timeout:25];
nwr["tourism"~"hotel|motel|guest_house|hostel"](around:{osm_radius},{lat},{lon});
out center tags 50;
"""
elif category == 'food':
query = f"""
[out:json][timeout:25];
nwr["amenity"~"restaurant|cafe|bar|pub"](around:{osm_radius},{lat},{lon});
out center tags 50;
"""
else:
logger.error(f"Invalid category requested: {category}")
return {"error": "Invalid category.", "results": []}
try:
response = requests.post(
self.OVERPASS_URL,
data=query,
headers=self.HEADERS,
timeout=30
)
response.raise_for_status()
data = response.json()
except requests.exceptions.Timeout:
logger.warning(f"Overpass API timeout for {category} at ({lat}, {lon}) with radius {osm_radius}m")
return {"error": f"OpenStreetMap query timed out. The service is overloaded. Radius limited to {int(osm_radius)}m.", "results": []}
except requests.exceptions.HTTPError as e:
if e.response.status_code == 504:
logger.warning(f"Overpass API 504 Gateway Timeout for {category}")
return {"error": "OpenStreetMap server is overloaded. Try again later or use Google source.", "results": []}
logger.warning(f"Overpass API HTTP error: {e}")
return {"error": "OpenStreetMap error: please try again later.", "results": []}
except requests.exceptions.RequestException as e:
logger.warning(f"Overpass API error: {e}")
return {"error": "OpenStreetMap temporarily unavailable: please try again later.", "results": []}
except ValueError as e:
logger.error(f"Invalid JSON response from Overpass: {e}")
return {"error": "Invalid response from OpenStreetMap.", "results": []}
origin = (float(lat), float(lon))
locations = self.parse_overpass_response(data, request, origin)
logger.info(f"Overpass returned {len(locations)} results")
return {"error": None, "results": locations}
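The rewritten queries replace the long lists of `node(...)` clauses with a single `nwr` statement per tag and a regex value filter. A minimal builder sketch of the tourism query shape (query text only, no HTTP call; coordinates are examples):

```python
# Sketch of the compact Overpass QL shape used above; values are examples.
def build_tourism_query(lat, lon, radius_m):
    radius_m = min(radius_m, 5000)  # OSM radius capped to avoid timeouts
    return f"""
[out:json][timeout:25];
(
  nwr["tourism"~"attraction|viewpoint|museum|gallery|zoo|aquarium"](around:{radius_m},{lat},{lon});
  nwr["historic"~"monument|castle|memorial"](around:{radius_m},{lat},{lon});
  nwr["leisure"~"park|garden|nature_reserve"](around:{radius_m},{lat},{lon});
);
out center tags 50;
"""

q = build_tourism_query(48.8584, 2.2945, 20000)
```

`out center` makes ways and relations report a `center` coordinate, which is what `parse_overpass_response` falls back to when a top-level `lat`/`lon` is absent.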
def query_google_nearby(self, lat, lon, radius, category, request):
"""
Query Google Places API (New) for nearby places.
Enhanced with comprehensive field masks and better error handling.
"""
api_key = settings.GOOGLE_MAPS_API_KEY
# Updated to use new Places API endpoint
url = "https://places.googleapis.com/v1/places:searchNearby"
# Comprehensive field mask to get all useful information
headers = {
'Content-Type': 'application/json',
'X-Goog-Api-Key': api_key,
'X-Goog-FieldMask': (
'places.id,'
'places.displayName,'
'places.formattedAddress,'
'places.shortFormattedAddress,'
'places.location,'
'places.types,'
'places.rating,'
'places.userRatingCount,'
'places.businessStatus,'
'places.priceLevel,'
'places.websiteUri,'
'places.googleMapsUri,'
'places.nationalPhoneNumber,'
'places.internationalPhoneNumber,'
'places.editorialSummary,'
'places.photos,'
'places.currentOpeningHours,'
'places.regularOpeningHours'
)
}
# Map categories to place types - use multiple types for better coverage
type_mapping = {
'lodging': ['lodging', 'hotel', 'hostel', 'resort_hotel', 'extended_stay_hotel'],
'food': ['restaurant', 'cafe', 'bar', 'bakery', 'meal_takeaway', 'meal_delivery'],
'tourism': ['tourist_attraction', 'museum', 'art_gallery', 'aquarium', 'zoo', 'amusement_park', 'park', 'natural_feature'],
}
payload = {
"includedTypes": type_mapping.get(category, ['tourist_attraction']),
"maxResultCount": 20,
"rankPreference": "DISTANCE", # Sort by distance first
"locationRestriction": {
"circle": {
"center": {
@@ -208,7 +409,7 @@ class RecommendationsViewSet(viewsets.ViewSet):
}
try:
response = requests.post(url, json=payload, headers=headers, timeout=15)
response.raise_for_status()
data = response.json()
@@ -216,41 +417,265 @@ class RecommendationsViewSet(viewsets.ViewSet):
origin = (float(lat), float(lon))
locations = self.parse_google_places(places, origin)
logger.info(f"Google Places returned {len(locations)} quality results for category '{category}'")
return Response(self._prepare_final_results(locations))
except requests.exceptions.Timeout:
logger.warning("Google Places API timeout, falling back to OSM")
return self.query_overpass(lat, lon, radius, category, request)
except requests.exceptions.RequestException as e:
logger.warning(f"Google Places API error: {e}, falling back to OSM")
return self.query_overpass(lat, lon, radius, category, request)
except Exception as e:
logger.error(f"Unexpected error with Google Places API: {e}")
return self.query_overpass(lat, lon, radius, category, request)
def _prepare_final_results(self, locations):
"""
Prepare final results: sort by quality score and limit results.
"""
# Sort by quality score (highest first)
locations.sort(key=lambda x: x.get('quality_score', 0), reverse=True)
# Limit to MAX_RESULTS
locations = locations[:self.MAX_RESULTS]
return locations
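Stripped of the view plumbing, the final ranking step is just a reverse sort on `quality_score` plus a cap; the scores below are made up for illustration:

```python
# Minimal illustration of the sort-and-cap step in _prepare_final_results;
# names and scores are invented.
MAX_RESULTS = 2
places = [
    {'name': 'a', 'quality_score': 40.0},
    {'name': 'b', 'quality_score': 95.5},
    {'name': 'c', 'quality_score': 70.0},
]
places.sort(key=lambda p: p.get('quality_score', 0), reverse=True)
top = places[:MAX_RESULTS]
print([p['name'] for p in top])  # ['b', 'c']
```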
def _deduplicate_results(self, google_results, osm_results):
"""
Deduplicate results from both sources based on name and proximity.
Prioritize Google results when duplicates are found.
"""
from difflib import SequenceMatcher
def is_similar(name1, name2, threshold=0.85):
"""Check if two names are similar using fuzzy matching."""
return SequenceMatcher(None, name1.lower(), name2.lower()).ratio() > threshold
def is_nearby(loc1, loc2, max_distance_m=50):
"""Check if two locations are within max_distance_m meters."""
dist = geodesic(
(loc1['latitude'], loc1['longitude']),
(loc2['latitude'], loc2['longitude'])
).meters
return dist < max_distance_m
# Start with all Google results (higher quality)
deduplicated = list(google_results)
# Add OSM results that don't match Google results
for osm_loc in osm_results:
is_duplicate = False
for google_loc in google_results:
if (is_similar(osm_loc['name'], google_loc['name']) and
is_nearby(osm_loc, google_loc)):
is_duplicate = True
break
if not is_duplicate:
deduplicated.append(osm_loc)
return deduplicated
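The duplicate rule above (fuzzy name match and physical proximity) can be exercised standalone. In this sketch geopy's `geodesic` is replaced with a haversine approximation so there is no third-party dependency, and the names and coordinates are made up:

```python
from difflib import SequenceMatcher
import math

# Self-contained sketch of the dedup rule; haversine stands in for
# geopy.distance.geodesic, and the sample places are invented.
def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def is_duplicate(a, b, name_threshold=0.85, max_distance_m=50):
    similar = SequenceMatcher(None, a['name'].lower(), b['name'].lower()).ratio() > name_threshold
    close = haversine_m(a['latitude'], a['longitude'],
                        b['latitude'], b['longitude']) < max_distance_m
    return similar and close

google = [{'name': 'Eiffel Tower', 'latitude': 48.8584, 'longitude': 2.2945}]
osm = [
    {'name': 'Eiffel tower', 'latitude': 48.85841, 'longitude': 2.29451},  # same place
    {'name': 'Louvre Museum', 'latitude': 48.8606, 'longitude': 2.3376},   # distinct
]
# Google results win; only non-matching OSM results are appended.
merged = list(google) + [o for o in osm if not any(is_duplicate(o, g) for g in google)]
print([p['name'] for p in merged])  # ['Eiffel Tower', 'Louvre Museum']
```

Requiring both conditions keeps distinct venues that share a name (chains) and co-located venues with different names (a cafe inside a museum).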
@action(detail=False, methods=['get'])
def query(self, request):
"""
Query both Google Places and OSM for recommendations.
Returns unified, high-quality results sorted by quality score.
Query Parameters:
- lat (required): Latitude
- lon (required): Longitude
- radius (optional): Search radius in meters (default: 5000, max: 50000)
- category (required): Category - 'tourism', 'food', or 'lodging'
- sources (optional): Comma-separated sources - 'google', 'osm', or 'both' (default: 'both')
"""
lat = request.query_params.get('lat')
lon = request.query_params.get('lon')
# Allow a free-text `location` parameter which will be geocoded
location_param = request.query_params.get('location')
radius = request.query_params.get('radius', '5000')
category = request.query_params.get('category')
sources = request.query_params.get('sources', 'both').lower()
# If lat/lon not supplied, try geocoding the free-text location param
if (not lat or not lon) and location_param:
geocode_results = None
# Try Google first if API key configured
if getattr(settings, 'GOOGLE_MAPS_API_KEY', None):
try:
geocode_results = search_google(location_param)
except Exception:
logger.warning("Google geocoding failed; falling back to OSM")
geocode_results = None
# Fallback to OSM Nominatim
if not geocode_results:
try:
geocode_results = search_osm(location_param)
except Exception:
logger.warning("OSM geocoding failed")
geocode_results = None
# Validate geocode results
if isinstance(geocode_results, dict) and geocode_results.get('error'):
# Log internal geocoding error but avoid exposing sensitive details
logger.warning("Geocoding helper returned an internal error")
return Response({"error": "Geocoding failed. Please try a different location or contact support."}, status=400)
if not geocode_results:
return Response({"error": "Could not geocode provided location."}, status=400)
# geocode_results expected to be a list of results; pick the best (first)
best = None
if isinstance(geocode_results, list) and len(geocode_results) > 0:
best = geocode_results[0]
elif isinstance(geocode_results, dict):
# Some helpers might return a dict when only one result found
best = geocode_results
if not best:
return Response({"error": "No geocoding results found."}, status=400)
try:
lat = float(best.get('lat') or best.get('latitude'))
lon = float(best.get('lon') or best.get('longitude'))
except Exception:
return Response({"error": "Geocoding result missing coordinates."}, status=400)
# Replace location_param with display name when available for logging/debug
location_param = best.get('display_name') or best.get('name') or location_param
# Validation: require lat and lon at this point
if not lat or not lon:
return Response({
"error": "Latitude and longitude parameters are required (or provide a 'location' parameter to geocode)."
}, status=400)
try:
lat = float(lat)
lon = float(lon)
radius = min(float(radius), 50000) # Max 50km radius
except ValueError:
return Response({
"error": "Invalid latitude, longitude, or radius value."
}, status=400)
valid_categories = ['lodging', 'food', 'tourism']
if category not in valid_categories:
return Response({
"error": f"Invalid category. Valid categories: {', '.join(valid_categories)}"
}, status=400)
valid_sources = ['google', 'osm', 'both']
if sources not in valid_sources:
return Response({
"error": f"Invalid sources. Valid options: {', '.join(valid_sources)}"
}, status=400)
api_key = getattr(settings, 'GOOGLE_MAPS_API_KEY', None)
google_results = []
osm_results = []
# Query Google Places if available and requested
if api_key and sources in ['google', 'both']:
try:
url = "https://places.googleapis.com/v1/places:searchNearby"
headers = {
'Content-Type': 'application/json',
'X-Goog-Api-Key': api_key,
'X-Goog-FieldMask': (
'places.id,places.displayName,places.formattedAddress,'
'places.shortFormattedAddress,places.location,places.types,'
'places.rating,places.userRatingCount,places.businessStatus,'
'places.priceLevel,places.websiteUri,places.googleMapsUri,'
'places.nationalPhoneNumber,places.internationalPhoneNumber,'
'places.editorialSummary,places.photos,'
'places.currentOpeningHours,places.regularOpeningHours'
)
}
type_mapping = {
'lodging': ['lodging', 'hotel', 'hostel', 'resort_hotel'],
'food': ['restaurant', 'cafe', 'bar', 'bakery'],
'tourism': ['tourist_attraction', 'museum', 'art_gallery', 'aquarium', 'zoo', 'park'],
}
payload = {
"includedTypes": type_mapping.get(category, ['tourist_attraction']),
"maxResultCount": 20,
"rankPreference": "DISTANCE",
"locationRestriction": {
"circle": {
"center": {"latitude": lat, "longitude": lon},
"radius": radius
}
}
}
response = requests.post(url, json=payload, headers=headers, timeout=15)
response.raise_for_status()
data = response.json()
places = data.get('places', [])
origin = (lat, lon)
google_results = self.parse_google_places(places, origin)
logger.info(f"Google Places: {len(google_results)} quality results")
except Exception as e:
logger.warning(f"Google Places failed: {e}")
# Query OSM if requested or as fallback
osm_error = None
if sources in ['osm', 'both'] or (sources == 'google' and not google_results):
osm_response = self.query_overpass(lat, lon, radius, category, request)
osm_results = osm_response.get('results', [])
osm_error = osm_response.get('error')
if osm_error:
logger.warning(f"OSM query had issues: {osm_error}")
# Combine and deduplicate if using both sources
if sources == 'both' and google_results and osm_results:
all_results = self._deduplicate_results(google_results, osm_results)
else:
all_results = google_results + osm_results
# Prepare final results
final_results = self._prepare_final_results(all_results)
logger.info(f"Returning {len(final_results)} total recommendations")
# Build response with metadata
response_data = {
"count": len(final_results),
"results": final_results,
"sources_used": {
"google": len(google_results),
"osm": len(osm_results),
"total_before_dedup": len(google_results) + len(osm_results)
}
}
# Add warnings if there were errors but we still have some results
warnings = []
if osm_error and len(osm_results) == 0:
warnings.append(osm_error)
if warnings:
response_data["warnings"] = warnings
# If no results at all and user requested only OSM, return error status
if len(final_results) == 0 and sources == 'osm' and osm_error:
# Log internal error notice for investigation but do not expose details to clients
logger.debug("OSM query error (internal)")
return Response({
"error": "OpenStreetMap service temporarily unavailable. Please try again later.",
"count": 0,
"results": [],
"sources_used": response_data["sources_used"]
}, status=503)
return Response(response_data)

View File

@@ -45,41 +45,90 @@ class ReverseGeocodeViewSet(viewsets.ViewSet):
@action(detail=False, methods=['post'])
def mark_visited_region(self, request):
"""
Marks regions and cities as visited based on user's visited locations.
Uses the pre-stored region/city data on locations to avoid expensive reverse geocoding.
"""
new_region_count = 0
new_regions = {}
new_city_count = 0
new_cities = {}
# Get all visited locations with their region and city data
visited_locations = Location.objects.filter(
user=self.request.user
).select_related('region', 'city')
# Track unique regions and cities to create VisitedRegion/VisitedCity entries
regions_to_mark = set()
cities_to_mark = set()
for location in visited_locations:
# Only process locations that are marked as visited
if not location.is_visited_status():
continue
# Collect regions
if location.region:
regions_to_mark.add(location.region.id)
# Collect cities
if location.city:
cities_to_mark.add(location.city.id)
# Get existing visited regions for this user
existing_visited_regions = set(
VisitedRegion.objects.filter(
user=self.request.user,
region_id__in=regions_to_mark
).values_list('region_id', flat=True)
)
# Create new VisitedRegion entries
new_visited_regions = []
for region_id in regions_to_mark:
if region_id not in existing_visited_regions:
new_visited_regions.append(
VisitedRegion(region_id=region_id, user=self.request.user)
)
if new_visited_regions:
VisitedRegion.objects.bulk_create(new_visited_regions)
new_region_count = len(new_visited_regions)
# Get region names for response
regions = Region.objects.filter(
id__in=[vr.region_id for vr in new_visited_regions]
)
new_regions = {r.id: r.name for r in regions}
# Get existing visited cities for this user
existing_visited_cities = set(
VisitedCity.objects.filter(
user=self.request.user,
city_id__in=cities_to_mark
).values_list('city_id', flat=True)
)
# Create new VisitedCity entries
new_visited_cities = []
for city_id in cities_to_mark:
if city_id not in existing_visited_cities:
new_visited_cities.append(
VisitedCity(city_id=city_id, user=self.request.user)
)
if new_visited_cities:
VisitedCity.objects.bulk_create(new_visited_cities)
new_city_count = len(new_visited_cities)
# Get city names for response
cities = City.objects.filter(
id__in=[vc.city_id for vc in new_visited_cities]
)
new_cities = {c.id: c.name for c in cities}
return Response({
"new_regions": new_region_count,
"regions": new_regions,
"new_cities": new_city_count,
"cities": new_cities
})
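The pattern above (collect candidate IDs, subtract the set already recorded, bulk-create the remainder) replaces one `filter().first()` round trip per location with two queries per model. Stripped of the ORM it reduces to a set difference; the IDs below are illustrative:

```python
# ORM-free sketch of the set-difference + bulk-create pattern above;
# region IDs are invented examples.
regions_to_mark = {'US-CA', 'US-NY', 'FR-IDF'}   # from the user's visited locations
existing_visited = {'US-NY'}                     # what VisitedRegion already holds

to_create = sorted(regions_to_mark - existing_visited)
print(to_create)  # ['FR-IDF', 'US-CA']
# In the view these IDs become VisitedRegion objects passed to bulk_create(),
# issuing a single INSERT for the whole batch instead of one query per region.
```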