An intelligent image analysis dashboard that scans your local photo library, extracts EXIF metadata, generates AI-powered descriptions and embeddings, and enables semantic search and geospatial exploration. Everything runs locally on your machine.
This application empowers you to explore your image collection like never before. Instead of relying solely on filenames or folders, you can search for images using natural language (e.g., "A dog playing in the snow"), filter by camera settings, view photos on a global map, and analyze your collection's statistics.
Key Features:
- Dashboard: View collection stats and recently added photos.
- Semantic Search: Find images by description using AI embeddings.
- Expert Search: Filter by camera make/model, ISO, aperture, and more.
- Advanced Filters: Narrow down results by Country, File Type, Size, and Year.
- Interactive Map: Visualize your photo locations.
- Automated Scanning: Extracts EXIF data and geolocation (City/Country) locally.
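Semantic search works by embedding the query text into the same vector space as the stored image descriptions, then ranking images by similarity. The sketch below illustrates the ranking step with toy 3-dimensional vectors standing in for real 768-dimensional embeddings; the function names are illustrative, not from the project's source:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def rank_images(query_vec, image_vecs):
    """Return image ids sorted by descending similarity to the query."""
    scored = [(cosine_similarity(query_vec, v), img_id)
              for img_id, v in image_vecs.items()]
    return [img_id for _, img_id in sorted(scored, reverse=True)]

# Toy embeddings standing in for model output
images = {"dog_snow.jpg": [0.9, 0.1, 0.0],
          "beach.jpg": [0.1, 0.9, 0.2]}
print(rank_images([0.8, 0.2, 0.1], images))  # → ['dog_snow.jpg', 'beach.jpg']
```

In the real application the database (via pgvector) performs this ranking, so the full embedding set never has to be loaded into Python.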
Tech Stack:
- Backend: Python (FastAPI)
- Database: PostgreSQL with `pgvector` (for vector similarity search)
- AI/ML: Ollama (local LLM inference)
  - Vision Model: LLaVA (default) for image description
  - Embedding Model: Nomic Embed Text (default) for search vectors
- Frontend: Native HTML5, CSS3, JavaScript (no framework bloat)
- Geolocation: `reverse_geocoder` + `pycountry` (offline/local)
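Before coordinates can be reverse geocoded, the EXIF GPS tags (stored as degrees, minutes, and seconds plus a hemisphere reference) must be converted to signed decimal degrees. A minimal conversion sketch; the helper name is a hypothetical, not taken from the project's source:

```python
def dms_to_decimal(degrees, minutes, seconds, ref):
    """Convert EXIF GPS degrees/minutes/seconds to signed decimal degrees.

    `ref` is the EXIF hemisphere tag: 'N'/'E' give positive values,
    'S'/'W' give negative values.
    """
    value = degrees + minutes / 60.0 + seconds / 3600.0
    return -value if ref in ("S", "W") else value
```

The resulting latitude/longitude pair is what an offline reverse geocoder resolves to the nearest known city and country code.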
Architecture:
- Scanner (`scanner.py`, `main.py`): Iterates through the configured image directory, reads EXIF tags, generates AI metadata using Ollama, and stores everything in the DB.
- API Server (`src/server.py`): A FastAPI application that serves the frontend and provides endpoints for search, stats, and data retrieval.
- Database: Stores image paths, metadata, and 768-dimensional vector embeddings.
- Frontend: A responsive dashboard to interact with the system.
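The scanner's directory walk can be sketched as below; `find_images` and the extension set are illustrative assumptions, not the project's actual code:

```python
from pathlib import Path

# Extensions treated as images (an assumption; adjust to taste)
IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".tiff", ".webp"}

def find_images(root: str):
    """Yield image files under `root`, matched case-insensitively by extension."""
    for path in sorted(Path(root).rglob("*")):
        if path.suffix.lower() in IMAGE_EXTS:
            yield path
```

Each yielded path would then go through the EXIF reader, the Ollama vision model, and the embedding model before being written to the database.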
Prerequisites:
- Docker & Docker Compose: Installed and running.
- Ollama: Installed and running locally (default: `http://localhost:11434`).
- Pull required models: `ollama pull llava` and `ollama pull nomic-embed-text`
Installation:
- Clone the repository.
- Configuration:
  - Review `docker-compose.yml`.
  - By default, it mounts `./test_images` to `/app/test_images`. Update the volume mapping to point to your actual photo library:

    ```yaml
    volumes:
      - /path/to/my/photos:/app/test_images
    ```

  - Environment variables can be set in `.env` (optional; defaults are in the compose file):
    - `OLLAMA_HOST`: URL of your Ollama instance.
    - `IMAGE_PATH`: Path inside the container to scan.
- Run with Docker:

  ```shell
  docker compose up --build -d
  ```

- Access the Dashboard: Open your browser and navigate to `http://localhost:8666`.
Configuration:
You can adjust the following in `.env` or `docker-compose.yml`:
- `OLLAMA_VISION_MODEL`: Model for describing images (default: `llava`).
- `OLLAMA_EMBED_MODEL`: Model for embeddings (default: `nomic-embed-text`).
- `DB_DSN`: Database connection string.
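Pulling these together, a sample `.env` might look like the sketch below. Every value is illustrative; the `DB_DSN` credentials in particular are an assumption, not the project's defaults, so check `docker-compose.yml` before relying on any of them:

```ini
# Example .env (all values illustrative; see docker-compose.yml for real defaults)
OLLAMA_HOST=http://host.docker.internal:11434
OLLAMA_VISION_MODEL=llava
OLLAMA_EMBED_MODEL=nomic-embed-text
IMAGE_PATH=/app/test_images
DB_DSN=postgresql://postgres:postgres@db:5432/images
```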
Usage:
- Scanning: Go to the Admin tab in the dashboard and click "Start Scan" to index new images.
- Resetting: Use the "Reset Database" button in the Admin tab to clear all metadata (images remain on disk).
Troubleshooting:
- Ollama Status is "Offline": Ensure Ollama is running on your host machine. If running Docker on Windows/Mac, use `host.docker.internal` as the host (default config). On Linux, you may need `--network host` or specific IP configuration.
- No Search Results: Ensure you have scanned your library. Check the logs (`docker compose logs -f app`) to see if the scanner is processing images.
- Map is Empty: Only photos with GPS data in their EXIF tags will appear on the map.
- Slow Scanning: AI processing is resource-intensive. Using a GPU for Ollama significantly speeds up description and embedding generation.
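For the Linux networking case above, one common alternative to `--network host` is mapping `host.docker.internal` to the host gateway in `docker-compose.yml`. This uses standard Docker Compose syntax (`extra_hosts` with `host-gateway`); the service name `app` is an assumption:

```yaml
services:
  app:
    extra_hosts:
      - "host.docker.internal:host-gateway"
```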
Application created by Antigravity, with a little help from me.
Built with:
- FastAPI for the backend framework.
- Ollama for democratizing local AI.
- Leaflet.js for the interactive maps.
- PostgreSQL & pgvector for the robust vector engine.
- Google Fonts (Outfit) and Feather Icons (via SVG) for the UI.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.