Nextcloud with a few addons. Now this might look like overkill for your use case but I get the impression that you might want to go further in future.
Stock NC gets you a very solid general purpose document management system and with a few addons, you basically get self hosted SharePoint and OneDrive without the baggage. The images/pictures side of things has seen quite a lot of development and with some addons you get image classification with fairly minimal effort.
The system as a whole will quite happily handle many hundreds of thousands of files on pretty rubbish hardware, if you are happy to wait for the batch jobs to run, or you can throw more hardware at it and speed up the job schedules.
NC has a stock phone app which works very well these days, including camera folder uploads. There are several more apps that integrate with the main one to add optional functionality, for example notes and VoIP.
It is a very large and mature setup with loads of documentation and hence extensible by a determined hacker if something is missing.
I think Immich checks a lot of these boxes
https://immich.app/
Immich is what I'm using right now. I'm running it in a Docker container on my Synology. It was very advantageous to spin up another docker container on my laptop to do the face recognition work because the Synology was going to take forever on it.
We no longer are auto uploading to Google or Apple.
So far, I really like it. I haven't quite gone 100%, as we're still uploading with Synology's photo app, but Immich provides a much more refined, full-featured interface.
May I ask: why not use Synology's own photo stack? The web UI is pretty good, the iPhone app is great, it runs locally without depending on Synology servers, and does have face recognition and all other features.
If you want a solid "just upload the photos" experience, PhotoSync on iOS is really great.
I think you can use Immich to just look at a folder and not use the backup from phone bits.
> We no longer are auto uploading to Google or Apple.
May I ask why? Just curious as the main reason I use Immich is for the auto upload
Edit: Ugh. Can’t read. I somehow read don’t auto upload to Immich.
because you don't want your data being held by Google or Apple?
Self hosting and owning your own data
This. It's a fascinating project; it's hard to believe a FLOSS project can be this high quality. In my book it's on the level of Postgres (although it's a smaller project, probably).
Their frontend is amazing, their apps are not as performant, and the backend is (IMHO) the worst of them all.
No hate here, I'm really grateful for what they've achieved so far, but I think there's a lot of room for improvement (e.g: proper R/W query split, native S3 integration, faster endpoints, ...). I already mentioned it in their channel (they're a really welcoming community!) and I'm working on an alternative drop-in replacement backend (written in Go) [1] that will hopefully bring all the needed improvements.
TL;DR: It's definitely good, especially for an open-source project, and the team is very dedicated - but it's definitely not Postgres-good
[1]: https://github.com/denysvitali/immich-go-backend
Why the focus on S3 for a self-hosted app? Anyway, kudos for the effort. I'm not experiencing performance issues in my locally self-hosted Immich installation, but more performant software is always welcome.
I have and love my self-hosted Immich install. If the self-hosted version could also use S3 storage, that would let me use Garage (https://git.deuxfleurs.fr/Deuxfleurs/garage), which in turn lets me play games with growable/redundant storage on a pile of second-hand hard drives. IIRC it can only use a mounted block device at the moment (unless there is an NFS-exposed S3 translator ...)
A lot of existing tooling supports the s3 protocol, so it would simplify the storage picture (no pun intended).
I'm wondering the same thing. He had me until he said "S3".
Likely means S3 compatibility so it can be used with anything, be it a cloud provider or a locally hosted solution like minio
S3-compatible storage. In my case, Backblaze B2. The idea is to make the backend compatible with rclone, so that one can pick whatever storage they want (including B2 / S3 and others)
Looking at the world around me, so much of it is driven by open source. In fact, I can't name a single piece of electronics around me that isn't using it.
Been running immich on my home server for about a year now.
Near zero maintenance stack, incredibly easy to update, the client mobile apps even notify you (unobtrusively) when your server has an update available. The UI is just so polished & features so stable it's hard to believe it's open source.
This may not interest you, but Ente checks most of these boxes for me. It has face recognition and AI-based object search out of the box, and you can self-host their open-source server without any restrictions. The models they used might be useful for your project.
Ente looks like a tremendous offering. I don't know why I hadn't heard of it before. I don't think it meets what I'm looking for, but the fact that the software is completely open is impressive.
The Ente self-hosting proposition seems strange. Why would I want to e2e encrypt my photos that I self-host? Sounds like it will only make life more difficult.
You may want to self-host for your family or close friends while guaranteeing them privacy.
1. "Self-hosted" doesn't always mean "on your own hardware." Some people rent VPSes. This helps keep their data safe.
2. The software is provided without modification; I think it would be stranger to remove the encryption.
> Some people rent VPSes. This helps keep their data safe.
This is exactly how I self-host Ente and it has been great.
Machine learning for image detection has worked really well for me, especially facial recognition for family members (it's easy to find that photo to share).
I have the client on my Android mobile, Fire tablet (via F-Droid), and my Windows laptop.
My initial motivation was to replace "cloud" storage for getting photos copied off the phone as soon as possible.
Their pricing page doesn't say anything as far as I can find, but do you still pay Ente if you self-host the server as well as the photos ("S3-compatible object storage")?
> do you still pay Ente if you self-host the server as well as the photos ("S3-compatible object storage")?
No. (I self-host Ente and use their published ios app.)
I don't know about the photo-management aspects. However, I've had very good experiences running gemma3 (4b and 12b) locally via ollama
I've used Gemma to process pictures and get descriptions, and also to answer questions about the pictures (e.g. is there a bicycle in the picture?). I haven't tried it for face recognition, but if you have already identified someone in one photo, it can probably tell you whether the person in that photo is also in another photo.
Just one caveat: if you are processing thousands of pictures, it will take a while to process them all (depending on your hardware and picture size). You could also try creating a processing pipeline: first extract faces or bounding boxes of the faces with something like OpenCV, then pass those crops to gemma3 (see the sketch below).
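Not my actual code, just a rough sketch of that kind of pipeline, assuming the ollama Python client and a pulled gemma3:4b model (the prompt, paths, and model tag are all placeholders):

```python
# Sketch: crop faces with OpenCV's bundled Haar cascade, then ask a local
# gemma3 model (via the ollama Python client) to describe each crop.
# Assumes `pip install opencv-python ollama` and `ollama pull gemma3:4b`.
import cv2
import ollama

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def describe_faces(image_path: str) -> list[str]:
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    descriptions = []
    for i, (x, y, w, h) in enumerate(faces):
        crop_path = f"/tmp/face_{i}.jpg"
        cv2.imwrite(crop_path, img[y:y + h, x:x + w])  # save the face crop
        resp = ollama.chat(
            model="gemma3:4b",
            messages=[{
                "role": "user",
                "content": "Briefly describe the person in this photo.",
                "images": [crop_path],  # path to the cropped face
            }],
        )
        descriptions.append(resp["message"]["content"])
    return descriptions

print(describe_faces("holiday.jpg"))
```

The Haar cascade is crude but cheap; swapping in a better detector only changes the cropping step.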
Please post repo link if you ever decide to open source
Thanks nico for sharing your experience! That's really helpful. The idea of using OpenCV to create a processing pipeline for face detection before passing crops to Gemma is brilliant; I hadn't thought of that. I'll definitely look into using Gemma with Ollama.
And for sure, if I get this to a point where it's open-source, I'll post the link here!
I currently use PhotoPrism, but it's moving rather slowly. Facial recognition misses a lot of faces, and while the automatic clustering works fine at first, once you've tagged a few thousand faces the implementation grinds to a halt and the background worker runs for hours pegging a single CPU core.
The dev is really reluctant to accept external contributions, which has driven away a lot of curious folks willing to contribute.
Immich seems to be the other extreme: moving really fast with a lot of contributors, but stuff occasionally breaks and the setup is fiddly. On the other hand, the AI features are 100x more powerful. I just don't like the UI as much as PhotoPrism's. I wish there were some kind of blend of the two, a middle ground between their dev philosophies.
While Immich releases versions every 2-3 weeks on average, and a breaking one every 4-6 months, they are approaching the stable release, so the pace should also slow down a bit. The setup, to be honest, is pretty standard IMO.
I have been building something like this but for personal use.
As of now, I use a SentenceTransformer model to chunk files, BLIP for captioning (“Family vacation in Banff, February 2025”), and MTCNN with InsightFace for face detection. My index stores captions, face embeddings, and EXIF metadata (date, GPS) for queries like “show photos of us in Banff last winter.” I'm working on integrating ChromaDB for faster searches.
Eventually, I aim to store indexes as structured records combining the caption, face embeddings, and EXIF metadata (roughly like the sketch below). I also built a UI (like Spotlight Search) to search through these indexes.
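For illustration, a single index record along those lines might look roughly like this; the field names are my guesses from the description above, not the actual schema in the repo:

```python
# Hypothetical shape of one index record, based on the description above
# (caption, face embeddings, EXIF date/GPS). Not the actual smart-search schema.
photo_record = {
    "path": "photos/2025/02/banff_042.jpg",
    "caption": "Family vacation in Banff, February 2025",    # from BLIP
    "caption_embedding": [0.12, -0.03, ...],                 # SentenceTransformer vector
    "faces": [
        {"bbox": [412, 180, 96, 96], "embedding": [0.44, 0.01, ...]},  # InsightFace
    ],
    "exif": {"datetime": "2025-02-14T15:32:00", "gps": [51.1784, -115.5708]},
}
```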
Code (in progress): https://github.com/neberej/smart-search
I've been running Nextcloud in Docker with the Recognize and Memories apps for about a year and a half now, on an off-lease refurbished Dell Precision tower from 2018.
I'm using docker compose to include some supporting containers like go-vod (for hardware transcoding), another nextcloud instance to handle push notifications to the clients, and redis (for caching). I can share some more details, foibles and pitfalls if you'd like.
I initiated a rescan last week, which stacks background jobs in a queue that gets called by cron 2 or 3 times a day. Recognize has been cranking through 10k-20k photos per day, with good results.
I've installed a desktop client on my dad's laptop so he can dump all of the family hard drives we've accumulated over the years. The client does a good job of clearing up disk space after uploading, which is a huge advantage in my setup. My dad has used the OneDrive client before, so he was able to pick up this process very quickly.
Nextcloud also has a decent mobile client that can auto-upload photos and videos, which I recently used to help my mother-in-law upload media from her 7-year-old iPhone.
I run a pretty similar configuration on a Pi 4 with an external hard drive mounted, which I offload to other hard drives from time to time. The mobile app auto-syncs specific folders when my phone is connected to the home network. It's not flying performance-wise, but I mainly need a backup solution.
Gonna check the apps that you mentioned. Feel free to share more details of your setup. Why are you running 2 instances? Edit: I see, probably for the Memories app.
It's not self-hosted, but https://ente.io/ is an independent commercial solution with E2E encrypted cloud storage and local AI (EDIT: apparently you can also self-host)
You can, in fact, self host it.
https://help.ente.io/self-hosting/
i swear the single best feature for me would be:
Take my photo catalog stored in Google Photos, Apple Photos, OneDrive, and Amazon Photos; collate it into a single store and dedupe. Then build a proper timeline and geo/map view for all the photos.
Take a look at something like rclone and it immediately becomes clear that the photo app vendors you listed have no interest in allowing their users to easily access their data programmatically from their services in any meaningful way.
Example: https://rclone.org/googlephotos/#limitations
Glaring example:
> The current google API does not allow photos to be downloaded at original resolution. This is very important if you are, for example, relying on "Google Photos" as a backup of your photos. You will not be able to use rclone to redownload original images. You could use 'google takeout' to recover the original photos as a last resort
(and semantically index/search, face recognition... what else does AI get us these days?)
iPhoto used to do this. The Mac photos app that has replaced it since is nowhere near as good.
In fact I would go so far as to say my personal photo management never really recovered from the transition.
This is my dream. I started building something that would upload all my photos from my phone to my desktop, back them up somewhere, and then present them 6 at a time on a local website, solely so you could look at them again and decide if you wanted to keep them. Heart any you wanted to keep, favorite some, delete the rest, and then it shows you 6 more.
The addition of an AI tool is a great idea.
In addition to all of that I want an AI solution that pre-selects good images for me, so I do not have to go through all of them manually. Similar to Apple Memories or Featured Photos. Is there anything self-hosted like that?
Are any of these systems doing true image-based entity resolution? It seems like it's only pairwise similarity checking. If you are trying to index, say, 20 years of family photos, how do they link kindergarteners to their adult images?
Haven’t tried it yet (I’d love to find something like this too) but I saw a conference talk on https://docs.voxel51.com/ that looked pretty interesting. It is kind of a data frame for images with a GUI for exploring them. They make it pretty easy to rip various models over your images to add tags, and to evaluate the results.
There are some spectacular local models for generating text descriptions of images now. I suggest starting with Mistral Small 3.2, Gemma 3 and Qwen 2.5VL - all available via Ollama.
I expect we will see a Qwen 3VL soon.
I have used https://www.photoprism.app/ and have found the face recognition to work quite well.
PhotoPrism is OK, but the AI features of Immich are far superior
https://immich.app/
https://ente.io/
https://photonix.org/
https://github.com/LibrePhotos/librephotos
https://github.com/photoprism/photoprism
I wanted to like Photoprism because unlike Ente and Immich, it supports SQLite databases and doesn't require postgres (I want to keep home lab maintenance to a minimum) but the UI was difficult to like and I couldn't get hardware encoding working on my Intel N100 GPU.
Have you tried all of these? How are they with very large photo collections?
I've used PhotoPrism and Immich. Everyone's definition of "very large" is different; I have about 100k photos and videos, which are a bit over 1 TiB (original data, not thumbnails and previews). Neither had any performance issues, with a few minor exceptions on Immich (I don't recall anything from PhotoPrism, but it has been a while now since I switched):
1. The Immich app's performance is awful. It is a well known problem and their current focus. I have pretty high confidence that it will be fixed within a few months. Web app is totally fine though.
2. Some background operations such as AI indexing, face detection and video conversion don't restart gracefully from scratch. They all basically delete the old data first, then start processing assets. So for many days (depending on your parallelism settings and server performance) you may be completely missing some assets from search or some converted videos. But you only need to do this very rarely (e.g. you change encoding settings and want to apply them to the back catalog, or you switch the AI search model). I don't upload at a particularly high rate, but my server can very easily handle the steady state.
1 is pretty major, but it's being worked on, and you can work around it by just opening the website. 2 is less important, but I don't think there is any work on it.
The gallery I use has an "internals" page in their docs: https://docs.home-gallery.org/internals/
It gives a sort of high level system overview that might provide some useful insights or inspiration for you.
It looks as if you are primarily using a phone to view and share? We often share (visually) via our living room TV (via an attached computer). Is that something you're looking to incorporate?
I'm still old school: Syncthing + PhotoPrism. Perhaps I should give Immich a better look.
https://www.digikam.org/ does a lot of what you're looking for.
Not web-based, and it's really starting to show its age.
I'm also curious as to the best local high-quality background removal, such as for graduation images where people are wearing tassels.
Stable Diffusion (Web UI or whatever) has add-ons (e.g. rembg) that were really good at this, last time I checked.
I would try the Qwen models before LLaVa
Do you need the embeddings to be private? Or just the photos?
For photo indexing I'd run CLIP directly and save on compute, no need to use a whole language model.
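For example, something along these lines, using the CLIP checkpoint exposed through sentence-transformers. This is a rough sketch rather than a tested recipe; the model name and paths are just placeholders:

```python
# Index photos with CLIP image embeddings and search them with text queries,
# no language model involved. Assumes `pip install sentence-transformers pillow`.
from pathlib import Path
from PIL import Image
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("clip-ViT-B-32")  # CLIP via sentence-transformers

paths = sorted(Path("photos").glob("*.jpg"))
image_embs = model.encode([Image.open(p) for p in paths], convert_to_tensor=True)

def search(query: str, top_k: int = 5):
    query_emb = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_emb, image_embs, top_k=top_k)[0]
    return [(paths[h["corpus_id"]], h["score"]) for h in hits]

print(search("a bicycle leaning against a wall"))
```

For large libraries you would persist the embeddings and put them in a vector index instead of re-encoding every run.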
I believe Ente supports all of this, and can be self-hosted. All of the AI stuff is done locally.
I pay them for service/storage as it’s e2ee and it doesn’t matter to me if they or I store the encrypted blobs.
They also have a CLI tool you can run from cron on your NAS or whatever to make sure you have a complete local copy of your data, too.
https://ente.io - if you use the referral code SNEAK we both get additional free storage.
I built this same solution for myself last year using Hugging Face's "SmolVLM". It works surprisingly well. I use the model to generate verbose descriptions of each image, then embed the descriptions with another model, which I also use for the query embedding.
The stack is hacky, since it was mostly for myself...
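Not the parent's actual stack, but the "embed the descriptions, embed the query" half can be sketched like this, assuming the verbose descriptions have already been generated by a VLM such as SmolVLM (the embedding model name and data below are placeholders):

```python
# Sketch of caption-embedding search: embed pre-generated image descriptions
# and the user's query with the same text model, rank by cosine similarity.
# Assumes `pip install sentence-transformers`.
from sentence_transformers import SentenceTransformer, util

descriptions = {
    "img_001.jpg": "Two kids building a sandcastle on a crowded beach at sunset.",
    "img_002.jpg": "A ginger cat asleep on a laptop keyboard in a home office.",
}  # placeholder data; in practice this comes from the captioning model

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # any text embedding model
paths = list(descriptions)
doc_embs = embedder.encode(list(descriptions.values()), convert_to_tensor=True)

def search(query: str, top_k: int = 3):
    query_emb = embedder.encode(query, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, doc_embs)[0]          # similarity per photo
    best = scores.argsort(descending=True)[:top_k]
    return [(paths[int(i)], float(scores[i])) for i in best]

print(search("cat sleeping"))
```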
Photoprism and Immich
From all the comments I've been reading, this combination seems solid. I'll definitely be checking it out thoroughly.
The browser. Just pure JavaScript, HTML, CSS and WebGPU running in a bulletproof sandbox.