I've been using Nextcloud now for 4+ years. The latest major versions pretty much have no features that benefit regular home users. They are now chasing government contracts and AI hype.
Nextcloud can't even get Notes done right. I lost the entire contents of a note at random not long ago. And the mobile Notes app sometimes refuses to load the editor.
That being said, most of the time, Nextcloud works ok. I don't want to replace Nextcloud with another jack of all trades, master of none. Instead, I'm slowly migrating to good alternatives that do one thing well: Immich for photos, Obsidian for notes.
> That being said, most of the time, Nextcloud works ok.
Most of the time isn't enough when dealing with data these days, unfortunately. I've been using Google Docs since 2009, and I have lost zero data in that time. I still have my student essays from back then. Things need to be this reliable to compete.
Ok, I hope you take regular backups. Depending on Google risks sudden account lockout w/ no recourse. Everything suddenly gone, "poof". Others in this thread can do a better job than me describing other tradeoffs for self-hosting vs dependence on Google.
Others in the thread also do a good job of describing the risks of self-hosting - these tools are buggy, have synchronisation issues, and have workflow quirks and sharp edges. For all their pain points, Google Docs works and it's reliable.
I've had way more issues with unrecoverable data in self-managed tools than in the major cloud-hosted tools. There are risks with everything, and I only have so much time in the day to spend on these things.
Thanks for the Immich suggestion.
How have you liked Obsidian? I was going to use it but realized it paywalled sharing notes between devices. Looking into this again - are you self-hosting Obsidian sync via the LiveSync plugin?
Thanks!
I'm self-hosting Obsidian sync. I mostly followed the tutorial here: https://www.reddit.com/r/selfhosted/comments/1eo7knj/guide_o...
Except I wanted more security and multiple users. Instead of using the default admin user, I created one user for each person in the "_users" database, then one database for each person. Assign each user as a "Member" (not admin) of their respective database. Now each person has their own credentials and can access only their own database.
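Not the OP's exact setup, but a rough sketch of those CouchDB calls using the stock REST API; the server address, usernames, passwords and database name below are placeholders:

```go
// Sketch: create a per-person CouchDB user and vault database for Obsidian
// LiveSync, then grant only that user "member" access (standard CouchDB API).
package main

import (
	"bytes"
	"fmt"
	"net/http"
)

const couch = "http://admin:adminpass@localhost:5984" // placeholder admin credentials

// put issues a PUT with a JSON body and prints the resulting status.
func put(path, body string) error {
	req, err := http.NewRequest(http.MethodPut, couch+path, bytes.NewBufferString(body))
	if err != nil {
		return err
	}
	req.Header.Set("Content-Type", "application/json")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	fmt.Println(path, "->", resp.Status)
	return nil
}

func main() {
	// 1. Create a user document in the _users database.
	put("/_users/org.couchdb.user:alice",
		`{"name":"alice","password":"s3cret","roles":[],"type":"user"}`)
	// 2. Create that person's vault database.
	put("/alice-vault", "")
	// 3. Make alice a member (not admin) of her own database only.
	put("/alice-vault/_security",
		`{"admins":{"names":[],"roles":[]},"members":{"names":["alice"],"roles":[]}}`)
}
```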
Not OP but I'm a very happy longtime Obsidian user. Maybe a little unfair to characterize as "paywalled" given their sync service is optional (and works very well), and a directory of markdown files is about as portable and flexible as it gets for self-hosting.
You can simply put your vault in a cloud folder if you don’t want to pay for Livesync
OCIS made use of the Go Micro framework, which I wrote, which means the fork by OpenCloud does as well. OSS is funny that way: you can write something used by all sorts of other users and profited from in all sorts of different ways, yet see no help or contribution back. I hope OpenCloud does better for their community and the software they depend on than ownCloud or others did. Not a knock on them or anyone else, just the realities of open source.
It seems like they, too, have no calendar application?
Google Calendar is the single most indispensable feature of the entire Google suite for me (apart from Mail, of course), so I can't see myself switching to something without, and yet Nextcloud continues being the seemingly only self-hosted alternative that has it (including the web interface: I don't want to have to run a second web browser like Thunderbird to edit calendar entries on my computer).
What is it about JS calendar shells that makes them so seemingly hard to implement? Even the big-name open source CalDAV servers like Baikal that flirt with corporate adoption never seem to implement them.
They have integrated with Radicale since May of this year for CardDAV and CalDAV:
https://opencloud.eu/en/news/opencloud-calendar-and-contact-...
There are dozens of open source Google Calendar clones in JS; picking a random non-React example off the front page of an npm search: https://fullcalendar.io/
Was this done because the performance of Nextcloud is poor? I have been using Nextcloud for a few years on my own (admittedly overkill) hardware and haven't had performance issues, but I am the only user.
IIRC this stems from ownCloud; a programmer from there gave an interview on a German FOSS podcast. Yes, it was performance related. They had high-profile clients (CERN?) for whom they felt the old code base was not sufficient.
It's nice to see the Heinlein Group's activities get some more publicity.
They have been continuously chipping away at making OSS more suitable for business and government use-cases (from Big Blue Button to NextCloud). OpenCloud and OpenTalk are some of their current in-house developed efforts in this sector.
We really need an owncloud/nextcloud alternative with zero PHP.
What about PHP merits this statement? It's 2025, PHP has become modern, performant, and has good frameworks that are battle-tested and reliable. Yikes. (And this from a former PHP and now Node developer.)
It is not really suited for software which needs to maintain long-lived connections, and that's the case with Nextcloud when uploading or downloading large files.
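For contrast, a minimal, hypothetical Go handler that streams a large file over a long-lived connection; the path and port are made up, and it is only meant to illustrate why a goroutine-per-transfer model copes better with such workloads than a worker-per-request PHP setup:

```go
// Sketch: streaming a large file download without buffering it in memory.
// A long-lived connection costs one goroutine, not a whole interpreter worker.
package main

import (
	"io"
	"log"
	"net/http"
	"os"
)

func main() {
	http.HandleFunc("/download", func(w http.ResponseWriter, r *http.Request) {
		f, err := os.Open("/srv/data/big-archive.tar") // placeholder path
		if err != nil {
			http.Error(w, "not found", http.StatusNotFound)
			return
		}
		defer f.Close()
		w.Header().Set("Content-Type", "application/octet-stream")
		// io.Copy streams in chunks; the connection can stay open for minutes
		// without pinning more than a small fixed buffer per transfer.
		if _, err := io.Copy(w, f); err != nil {
			log.Printf("download aborted: %v", err)
		}
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```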
ownCloud already has zero PHP with ownCloud Infinite Scale (written in Go), introduced in 2022.
I see php code in that repo
You mean you want a Javascript variant?
Anything with types. And that doesn't have "wat" videos about its behaviour.
I'd prefer something in Rust or C#
If you are okay with Rust or C#, idk why Go is an issue. Go powers most of the infrastructure on the internet that we rely on: Docker, Kubernetes, Hugo, Caddy, MinIO. A lot of these are used in backend services, so you would never even realize you are using them, but you are. I think Go is fine.
I'd prefer Rust as well, but anything with a decent type system is okay. I have bad associations with Java software, but idk if that's the language itself or enterprise brainworms.
Rust, C#, Java, Go and maybe even Erlang would suffice for a modern stack and for IO-heavy, performant workloads.
This was a nice surprise. It looks simple but it does the job. It lets me put my pics and videos on my private cloud, and my PDFs too, and search is fun. It lets me create markdown docs as well. A simple docker install and you have your private cloud. Top! It is not Nextcloud, nor does it pretend to be. I like it; I might keep this running and maybe put it to work. Thanks.
Filesystem as data backend? Does it do caldav? Carddav? Photo sync? A kanban board? So many questions…
I am intrigued… for sure Nextcloud is too slow at times and too often in maintenance mode after an update (but I’m still on the single container!)
Is there a simple list of features, and their completeness/ maturity?
I've not dug, but on first look https://opencloud.eu/en is a vague brochure with no real information.
The GitHub org page gives a better overview: https://github.com/opencloud-eu/
It's file sharing with organizational features (shared directories, authz) and some value-adds. They have integrated Collabora Online[0] for web office tasks and provide search on some indexable files (full-text + metadata).
[0]: https://www.collaboraonline.com/
There's a list of features here (no mention of completeness/maturity though) https://opencloud.eu/en/features
It would be nice to include a feature comparison chart between Nextcloud and OpenCloud.
https://opencloud.eu/sites/default/files/media/documents/202...
> OpenCloud is based on a fork of the open source software ‘ownCloud Infinite Scale' (OCIS), whose components were co-developed by developers from the science organisation CERN and other active contributors. OpenCloud is now being further developed.. clear focus on data protection, interoperability..
https://owncloud.dev/ocis/ | https://github.com/owncloud/ocis
> modern file-sync and share platform.. oCIS breaks down the old [PHP] ownCloud 10 user specific namespace.. makes the individual parts accessible to clients as storage spaces and storage space registries.. WebDAV based oc sync protocol to manage files and folders, ocs to manage shares and TUS to upload files in a resumable way. On the server side REVA is the reference implementation of the CS3 apis which is defined using protobuf. By embedding libregraph/idm, oCIS provides a LDAP interface to make accounts, including guests available to firewalls and other systems.
2021, https://owncloud.com/news/owncloud-infinite-scale-live-at-ce... | https://www.youtube.com/watch?v=1oBQfD9QrCs
> [oCIS] first production deployment.. CERN IT department‘s storage team has engaged in UI, API and backend development with ownCloud for many years.. users have access to the underlying data repository containing 1.4 billion files and 12 petabytes of data.
2025 fork, https://github.com/orgs/opencloud-eu/discussions/262 | https://www.youtube.com/watch?v=6cZKzpEw62M | https://www.heise.de/en/news/Ex-ownCloud-devs-seek-new-start...
> OpenCloud discusses the challenges that arise when proprietary products are discontinued or acquired by competitors.. examines a current case.. new company is likely to include over a dozen employees who previously worked on Infinite Scale for ownCloud.
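The TUS mentioned above is the open resumable-upload protocol (tus.io): a POST with an Upload-Length header creates the upload, then PATCH requests send bytes from a known Upload-Offset. A rough client-side sketch, assuming a generic TUS endpoint (the URL, file name and absolute Location are assumptions here, not oCIS specifics):

```go
// Sketch of the TUS 1.0 resumable upload flow: POST creates the upload and
// returns a Location; PATCH sends bytes from a known offset and can be
// retried after HEAD-ing the upload URL for the current offset.
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"os"
	"strconv"
)

func main() {
	endpoint := "https://cloud.example.org/tus" // placeholder TUS endpoint
	data, err := os.ReadFile("backup.tar")      // small file, kept in memory for brevity
	if err != nil {
		panic(err)
	}

	// 1. Create the upload; the server answers 201 with a Location header
	//    (assumed absolute here for simplicity).
	create, _ := http.NewRequest(http.MethodPost, endpoint, nil)
	create.Header.Set("Tus-Resumable", "1.0.0")
	create.Header.Set("Upload-Length", strconv.Itoa(len(data)))
	resp, err := http.DefaultClient.Do(create)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	uploadURL := resp.Header.Get("Location")

	// 2. Send the bytes; on interruption a client would HEAD the upload URL,
	//    read Upload-Offset and PATCH again from there.
	patch, _ := http.NewRequest(http.MethodPatch, uploadURL, bytes.NewReader(data))
	patch.Header.Set("Tus-Resumable", "1.0.0")
	patch.Header.Set("Upload-Offset", "0")
	patch.Header.Set("Content-Type", "application/offset+octet-stream")
	resp, err = http.DefaultClient.Do(patch)
	if err != nil {
		panic(err)
	}
	resp.Body.Close()
	fmt.Println("server offset after PATCH:", resp.Header.Get("Upload-Offset"))
}
```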
There was a recent "why Nextcloud feels slow" submission. The submission itself seemed off-course and didn't really analyze the problem well (it picked a pretty conventional a-priori whipping boy: bundle sizes). To me the clear winner in the comments was the one pointing out that there are massive waterfalls of data fetching to the client. https://ounapuu.ee/posts/2025/11/03/nextcloud-slow/ https://news.ycombinator.com/item?id=45799860
This data architecture problem burns you whether you are native or mobile; it is the secret boss lurking in most application development. I wonder if OpenCloud is doing better than Nextcloud with their middle layers!
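To make the waterfall point concrete, here is a toy Go sketch with made-up endpoints: fetching N resources one after another pays N round-trips of latency before anything can render, while issuing them concurrently pays roughly one.

```go
// Sketch: a request "waterfall" vs. concurrent fetches. With ~50 ms of latency
// per round-trip, twenty sequential requests cost about a second before the UI
// has its data; fired concurrently they cost roughly one round-trip.
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

var urls = []string{ // placeholder endpoints
	"https://cloud.example.org/api/files",
	"https://cloud.example.org/api/shares",
	"https://cloud.example.org/api/activity",
}

func fetchSequential() time.Duration {
	start := time.Now()
	for _, u := range urls {
		if resp, err := http.Get(u); err == nil {
			resp.Body.Close()
		}
	}
	return time.Since(start)
}

func fetchConcurrent() time.Duration {
	start := time.Now()
	var wg sync.WaitGroup
	for _, u := range urls {
		wg.Add(1)
		go func(u string) {
			defer wg.Done()
			if resp, err := http.Get(u); err == nil {
				resp.Body.Close()
			}
		}(u)
	}
	wg.Wait()
	return time.Since(start)
}

func main() {
	fmt.Println("waterfall: ", fetchSequential())
	fmt.Println("concurrent:", fetchConcurrent())
}
```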
If you need Docker to run what is essentially a webserver, I'm not surprised it runs slow.
In my stack, software shipped with Docker is the most unreliable, opaque mess, whereas bare-metal installs are usually light, lean and sustainably maintainable, especially single executables.
Hmm, Docker is resource isolation, not a VM. Strange, unless you're running it on Windows or Mac or something cross-architecture.
It's not Docker itself. It's the mindset of some of its users: just throw in all sorts of garbage and don't bother with maintenance.
It's also often used by users shying away from maintenance -- often due to a demanding schedule -- sometimes without grasping the level of investment needed to produce a good Dockerfile. I've seen too many self-managed installations where the user mapping is utter crap and the processes are running as random system users or random end-user UIDs.
If you have to do many round-trips to display a page after a UI action, and most round-trips involve hitting the storage on the server, the UI will sometimes feel slow.
Docker makes things slow now? What?
I understood that their point was not that Docker makes it slow, but that if it needs Docker to run, it probably needs a complicated environment which makes it slow.
Which makes it an inaccurate and incorrect point.
I’m not using docker to deploy things because the things I’m deploying need a complicated environment, I’m using it because it’s an incredibly easy and consistent way to deploy things. It’s an immutable image that is highly convenient to distribute, update, and manage, which is pretty much the opposite experience of installing software on virtual machines.
By dropping a docker compose file into Portainer, I'm up and running with a new service in a few seconds. I've removed the overhead and spin-up time of VMs: there's no more running Chef/Ansible to do basic VM management for every single service I'm running, no more cookbooks/playbooks or manual SSHing to get software updated, no more minutes to hours of fixing configuration management that never seems to work the first time, no more bad in-place upgrade states, etc.
> it’s an incredibly easy and consistent way to deploy things
I keep seeing people saying this but my experience has always been otherwise.
Docker makes it really difficult to tinker with the internals of the container. They call it a development environment, but you can't easily edit a file and restart a service. There are bind mounts, but the IO performance is terrible, necessitating the use of volumes. Every base image is opinionated about how things are done and where things are stored, even for the same software.
Since it's so difficult to tinker with the internals, most vendors will provide a web interface abstraction on top of their software (like NPM for nginx), and if you so much as veer off the happy path by an inch, the abstraction can no longer track the state of things and breaks, necessitating a full reinstall or editing the config manually.
Of course this is in the context of self hosting. If you’re paid in your day job to maintain a tower of babel then by all means fire up all those dynos.
I don't think of it as a "development environment" so much as a "deployment environment." Yes, it is more difficult to "tinker" with a running container. And for services that are just supposed to run and not be tinkered with, that's wonderful. I've deployed services that run on literally tens of thousands of containers, and needing to tinker with the innards at this point is kind of a smell, like if you said "yeah, but I need to be able to add oil to my car while driving down the street because you never know when it's all gonna suddenly leak out."
Also, wasn’t this comment about perf?
Why do I need to tinker with the internals of the container?
Even if I need to do that, the existence of a Docker image doesn’t stop me from making my own implementation as long as the application in question provides some kind of alternate distribution.
E.g., if there’s an RPM/DEB package, binary executable, JAR file, source code, etc, I can just make my own docker container with my own implementation and mess around with the internals as much as I want.
Nextcloud isn't lighter and leaner outside of docker. It isn't faster either
You wouldn’t deploy NextCloud if not for docker.
On the other hand, docker is just an installation system, so why even care.
100%. This person is a very specific kind of anti-* hater, of a sort whose hatred was rampantly popular 10 years ago. But the FUDites rarely bother with bona fides, with real argument.
We should feel bad for them, those decoupled folks who need help. It's sad, pathetic and remarkable how these weird software enmities crop up, are allowed to grow and are never addressed. The time when their outrage was popular and hip fades, but the disdain-without-argument sticks around.
Thankfully container hatred is a pretty tiny frakking force of very disparate, widely scattered eccentrics these days. But there are so many weird FUD proclivities folks can opt into, can find to stoke their lifelong hatreds against. There are just so few warnings: such audience acuity is required to parse it, realize the windmill-tilting, and move along.
Good job to the authors. I have been waiting for something like that for years.
I just dislike scripting languages, as they are a mess to handle, while docker is a waste of resources; not to mention Go's single statically compiled binary and speed of execution.
Authors, please think really well about:
- upgrade strategies (owncloud/nextcloud were a huge mess for a long time; currently it looks like Nextcloud is handling it well - I have upgraded it across 2 versions and it didn't break anything)
- what external dependencies you are using: add an extra layer of OS abstraction to avoid incompatibilities between the various Linux distributions, FreeBSD and Windows. There isn't a lot that has to be handled differently, but once you tie yourself to Linux only, it is hard to add support afterwards (try not to call external binaries that you haven't installed yourself; if you must, put it behind a compatibility layer). If you do this one right, people will port it to different environments; if you blow it, you will have to - or you won't.
- do not rely on a docker "installation"; presume that it is installed directly on the system and you won't go far wrong. Treat docker as just another system. Docker is going to make you "lazy" and stop thinking about vital details while developing.
- do check how to handle reverse proxies gracefully; this is something everyone forgets, yet for any serious environment there will be an nginx frontend
- don't support all the databases; pick one and stick to it, and support it really well, including backups, upgrades and versions
- sooner or later redis is going to be a must; think about it upfront
- make a backup system: back up before upgrades and be sure you can restore it if something goes wrong, including binaries, database,...
- make an installation/upgrade layer that doesn't depend on "run this sql script"; have a well-versioned database revision system that can take the database from version "0.1" to "2.0" without breaking anything and migrate the data (see the sketch after this list). There are hardly any database changes whose upgrade can't be handled with SQL statements.
- think really well about external dependencies; don't pick one just because it is popular and you need one piece of functionality. As an example, I recently benchmarked 15 concurrent maps in Go and the differences were huge: the fastest was one you can hardly find by searching, whose author did things like aligning structures to CPU cache lines and using plenty of unsafe pointers, yet it beat the first "popular" pick by 2x and the worst by 15x+. Don't trust an author's self-promotion; measure it.
- try not to make it confusingly strange: the whole usage/administration side is well done in Nextcloud, so stick to it and don't reinvent what works (as, for instance, sftpgo did, and I hate every second of using it)
- if something needs to be documented, think about how to implement it in a way that doesn't need to be documented. Over time those documented features become a huge burden for you and for users.
- please, if you dethrone Nextcloud, don't become evil like projects normally do. Get the money from where the money is (SMBs, corporations) and spare the home users. :)
Good luck!
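On the migrations point above, here is a bare-bones sketch of what such a versioned revision layer can look like; the table name, driver choice and example SQL are assumptions for illustration, not anything OpenCloud actually ships:

```go
// Sketch of a versioned migration runner: each migration runs once, in order,
// inside a transaction, and the applied version is recorded so an install can
// go from any old schema to the current one without manual SQL scripts.
package main

import (
	"database/sql"
	"log"

	_ "github.com/lib/pq" // example driver; use whichever database you standardize on
)

type migration struct {
	version int
	stmt    string
}

var migrations = []migration{ // placeholder schema changes
	{1, `CREATE TABLE files (id SERIAL PRIMARY KEY, path TEXT NOT NULL)`},
	{2, `ALTER TABLE files ADD COLUMN size BIGINT NOT NULL DEFAULT 0`},
}

func migrate(db *sql.DB) error {
	if _, err := db.Exec(`CREATE TABLE IF NOT EXISTS schema_version (version INT PRIMARY KEY)`); err != nil {
		return err
	}
	var current int
	_ = db.QueryRow(`SELECT COALESCE(MAX(version), 0) FROM schema_version`).Scan(&current)

	for _, m := range migrations {
		if m.version <= current {
			continue // already applied
		}
		tx, err := db.Begin()
		if err != nil {
			return err
		}
		if _, err := tx.Exec(m.stmt); err != nil {
			tx.Rollback()
			return err
		}
		if _, err := tx.Exec(`INSERT INTO schema_version (version) VALUES ($1)`, m.version); err != nil {
			tx.Rollback()
			return err
		}
		if err := tx.Commit(); err != nil {
			return err
		}
		log.Printf("applied migration %d", m.version)
	}
	return nil
}

func main() {
	db, err := sql.Open("postgres", "postgres://opencloud@localhost/opencloud?sslmode=disable")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()
	if err := migrate(db); err != nil {
		log.Fatal(err)
	}
}
```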
Nice to read these points/warnings;)
The product seems more 'focused' than Nextcloud, for sure.
But their docker choices are quite opinionated: just yesterday I tried (once again!) to make it run, and the fact that I have Caddy + Authelia in front of it seems to be rather detrimental. I dropped the ball and will try again in a few weeks or months.
One word for you and for developers: Nix.
Curious about the concurrent map benchmark, any more info?
Really great list to consider for the project!
Cool. Once they have notes, tasks, calendar, passwords and an otp manager, I'm dropping nextcloud in a heartbeat!
If it does webdav, it automatically does passwords. My password manager has otp anyway. Calendar's important, but I'm pretty sure that it (and contacts) are also just webdav.
Notes though... ouch. I can't find anything for that, there's no decent Notes client that does webdav natively.
Calendar and tasks use CalDAV and contacts are CardDAV. They are very similar to WebDAV but have their own idiosyncrasies (see the sketch below).
The biggest issue is the web interfaces; there are a ton of edge cases that have taken Nextcloud years to work through.
Not to mention the exploration of WebDAV push by the DAVx5 team: https://manual.davx5.com/webdav_push.html
For notes I currently use obsidian with the remotely-save plugin https://github.com/remotely-save/remotely-save
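To illustrate one of those idiosyncrasies: CalDAV queries go over WebDAV's REPORT method with an XML body rather than plain GETs. A rough sketch against a generic CalDAV collection (the URL and credentials are placeholders):

```go
// Sketch: fetch all VEVENTs from a CalDAV collection with a calendar-query
// REPORT (RFC 4791). CalDAV rides on WebDAV but adds extra methods and
// XML request bodies like this one.
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

const query = `<?xml version="1.0" encoding="utf-8"?>
<c:calendar-query xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:caldav">
  <d:prop><d:getetag/><c:calendar-data/></d:prop>
  <c:filter>
    <c:comp-filter name="VCALENDAR"><c:comp-filter name="VEVENT"/></c:comp-filter>
  </c:filter>
</c:calendar-query>`

func main() {
	url := "https://cloud.example.org/caldav/alice/personal/" // placeholder collection
	req, err := http.NewRequest("REPORT", url, strings.NewReader(query))
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("alice", "app-password") // placeholder credentials
	req.Header.Set("Depth", "1")
	req.Header.Set("Content-Type", "application/xml; charset=utf-8")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)  // expect 207 Multi-Status
	fmt.Println(string(body)) // multistatus XML containing iCalendar payloads
}
```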
Obsidian looks interesting, I will have to see if I can get the plugin installed and tested. At one point the guy writing Notebooks did webdav, but Apple yanked the rug out from under him so that webdav no longer worked well and he just decided it was no longer a feature. And my notes have been a mess for years afterward. Joplin looked like it would be a good replacement, but it spams up the md files, so that if you ever switch away from it you'd spend months cleaning them up. So basically I've just been using an open Sublime window and syncing by hand... no fun.
For passwords, I can make a compromise and use another option. But for everything else I would prefer everything in one place.
I've been using Enpass for years. Its webdav functionality is sufficient, and it's available on every platform except my kids' Xbox (I wish they built one for that; some passwords I have to keep short because they need them for the games).
It's not perfect: for the last year or so they keep trying to shove some ads into it, but nothing too obnoxious yet. And if you have any spaces in your webdav path where it saves the passwords, it takes a little thought to work through.
jtx ? https://jtx.techbee.at/
Available on F-Droid
Looked interesting... then I noticed it was Android only. I need something for iOS and desktop, ideally.