Hey HN,
I created PixSpeed to optimize images for my own websites. While tools like TinyPNG are fantastic, I wanted a more custom solution. PixSpeed compresses PNG, JPEG, and WebP images efficiently, helping to improve site loading times without compromising quality.
I've been using it myself and hope it can be useful for others as well. The tool is completely free.
I'd love to get feedback on what could make PixSpeed even better.
Thanks!
> The tool is completely free
Long term, how do you intend to sustain the service if users become dependent on it and need to use it at scale?
How do you intend to maintain reliability and uptime for users whose business depends on access to the service?
What happens to users if/when you lose interest/ability to continue the project?
Don't get me wrong, you don't owe other people anything. On the other hand, free is often a way of avoiding really hard (and very interesting) engineering problems. People are the hard part of engineering.
I'm considering offering some additional features as paid options. Currently, the system only optimizes images from the URL you provide, but one potential premium feature could be to allow the system to crawl all internal pages of a website, optimizing images across the entire site. This would make it easier for users who want to optimize all their content by simply submitting the homepage URL.
However, this is a double-edged sword: I might encounter sites with a large number of internal pages, which would drive up the resources needed to provide an efficient and sustainable service.
Another option I've thought about is offering a CDN service to directly serve optimized images, improving performance without the need for users to manually download and upload images. It does become more complex when charging users, as reliability, support, and infrastructure take on a much greater role.
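[Editor's note: a minimal sketch of what the whole-site crawl described above might look like. It assumes the requests and beautifulsoup4 packages; the collect_image_urls helper and the page cap are made up for illustration, not PixSpeed's actual code.]

```python
# Hypothetical sketch of the "crawl the whole site" idea, not PixSpeed's code.
# Assumes the requests and beautifulsoup4 packages are installed.
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup


def collect_image_urls(start_url, max_pages=50):
    """Breadth-first crawl of internal pages, collecting <img> sources."""
    domain = urlparse(start_url).netloc
    queue, seen, images = [start_url], set(), set()

    while queue and len(seen) < max_pages:
        page = queue.pop(0)
        if page in seen:
            continue
        seen.add(page)

        soup = BeautifulSoup(requests.get(page, timeout=10).text, "html.parser")

        # Record every image referenced on this page.
        for img in soup.find_all("img", src=True):
            images.add(urljoin(page, img["src"]))

        # Only follow links that stay on the same domain.
        for link in soup.find_all("a", href=True):
            target = urljoin(page, link["href"])
            if urlparse(target).netloc == domain:
                queue.append(target)

    return images
```

A cap like `max_pages` is one way to keep the "site with a huge number of internal pages" case from blowing up the resource budget.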
You may also want to consider offering an API. I currently use (and pay for) the TinyPNG API.
If people aren't paying, none of those are problems. The second you charge, that's when the headaches come.
"All images are deleted after 1 hour" - so they're using a server and it's a valid ask.
My question is - why does this require a server?
Write it in something that compiles to LLVM / WASM and just make a static page. Infinitely scalable, and you just pay for the domain (and a CDN provider, if applicable), both likely nominal costs.
Or why is it not a simple shell call to ImageMagick?
Then it needs a server.
Not if you port the shell and OS and ImageMagick to WASM first!
ImageMagick in Wasm lets you do lots of nifty stuff client-side. Here's a photocopy simulation thingie I made for instance: https://photocopy.fuglede.dk/
That is fun indeed :) Good effect!
It can pull images from a domain.
That would be difficult (since for client-side rendered sites it would require loading the site in a sandboxed environment) or impossible (if the site actively prevents that using CORS or similar, or if it happens to include scripts which expect to be run in a normal environment).
There is also the ability to run things like Playwright on the edge, a la Cloudflare Workers. You can fall back to rendering sites that require JS through that, and do client-side ImageMagick WASM shenanigans on the client once the worker returns direct links to all the images.
Yup, totally fair - would likely require you to upload the images manually for pure client-side.
You might still be able to offload a portion of the compute by figuring out the image URLs on the back-end and sending that list to the client, which would then download the images and do the optimization / resizing.
> Write it in...
I look forward to your version of this service, also to be provided for free, right? And for your sake, hopefully you won't have people in the comments demanding that you completely redo your project in the special way that suits them.
> Long term, how do you intend to sustain the service if users become dependent on it and need to use it at scale? ...
Is what I was responding to. Wasn't criticizing the posted product implementation choices or demanding a rewrite. I see how it could be read that way, which is unfortunate.
Parent comment asked a number of questions that all suggested a server was a requirement. If scale becomes any issue for a service like this, there are approaches other than shutting down the service due to server / maintenance costs.
> By clicking the Submit button you automatically accept our policies. Please take a momento to read them.
You can make it easier for your users to read the policies by making "policies" a link to them. Also, I think you mean "moment" rather than "momento."
Awesome stuff! I would love it if there were a total tally at the bottom showing the before and after sizes and the percent difference, e.g. before: 10MB, after: 2MB, change: -80%.
The best feature of my purpose-built static site generator is that it automatically builds (mostly) optimized WEBPs from any source image [1]. Not only does it reduce the image size, but it outputs many sizes of the image so that I can use an image `srcset`. The browser then automatically downloads the optimally sized image for the element.
It's a game changer to be able to copy photos directly from my Google Photos and not worry about it bloating my web pages.
[1] https://github.com/JosephNaberhaus/naberhausj.com/blob/05846...
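[Editor's note: for readers unfamiliar with the pattern, a minimal sketch of multi-size WebP output plus an srcset string, using Pillow. The widths, quality, and naming scheme are assumptions, not what the linked generator does.]

```python
# Minimal sketch of multi-size WebP generation for srcset, using Pillow.
# Widths, quality, and file naming are illustrative assumptions.
from pathlib import Path

from PIL import Image

WIDTHS = (480, 960, 1920)


def build_srcset(source, out_dir):
    """Write resized WebP copies of `source` and return an HTML srcset value."""
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    img = Image.open(source)
    entries = []

    for width in WIDTHS:
        if width > img.width:
            continue  # never upscale
        height = round(img.height * width / img.width)
        out_path = out_dir / f"{Path(source).stem}-{width}w.webp"
        img.resize((width, height), Image.LANCZOS).save(out_path, "WEBP", quality=80)
        entries.append(f"{out_path.name} {width}w")

    return ", ".join(entries)
```

The returned string drops straight into an `<img srcset="...">`, and the browser, given a `sizes` attribute, picks the smallest candidate that covers the element.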
Since I'm committing my blog into git, I found it better to pre-optimize my images so that my repo won't balloon. At first, I wrote a script that simply `find`s my non-WebP images and turns them into WebP, then looks for those image references in my *.md files and replaces the extension with .webp (roughly the sketch below).
But now I have a better workflow where I use the Image Converter plugin[^1] in Obsidian, since that's how I primarily compose my notes. As I paste images into the markdown file, they get converted and embedded. There's also Clop[^2] for macOS, which auto-converts any image/file I copy to my clipboard into my desired format and quality.
Quite happy with this setup for the time being.
[^1]: https://github.com/xRyul/obsidian-image-converter [^2]: https://lowtechguys.com/clop/
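[Editor's note: a rough reconstruction of the kind of find-and-convert script described at the top of this comment. The `content/` root and quality value are made up, and Pillow is assumed for the conversion; this is a sketch, not the commenter's actual script.]

```python
# Sketch of a find-and-convert pre-optimization pass; the content/ root and
# quality value are assumptions, and Pillow performs the conversion.
from pathlib import Path

from PIL import Image

ROOT = Path("content")  # hypothetical blog source directory


def convert_and_rewrite():
    converted = []

    # Convert every non-WebP raster image, writing the .webp next to it.
    for path in ROOT.rglob("*"):
        if path.suffix.lower() in {".png", ".jpg", ".jpeg"}:
            Image.open(path).save(path.with_suffix(".webp"), "WEBP", quality=80)
            converted.append(path)

    # Point markdown image references at the new .webp files.
    for md in ROOT.rglob("*.md"):
        text = md.read_text()
        for old in converted:
            text = text.replace(old.name, old.with_suffix(".webp").name)
        md.write_text(text)
```

Running the conversion before committing keeps the large originals out of the git history entirely, which is the point of pre-optimizing.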
If you're using Jekyll, Jekyll Picture Tag (https://github.com/rbuchberger/jekyll_picture_tag) does the same thing. I use that in my pipeline for my blog, and I agree, it's very nice to just not have to worry about stuffing a 5MB image down someone's tiny cell phone connection.
Though I'm not sure many browsers actually use anything other than the largest image option...
I've debated encoding to AVIF as well, but that's such a painfully long process on a blog with somewhere around 8700 images.
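[Editor's note: the AVIF encode parallelizes cleanly per image, which helps with a backlog that size. A sketch, assuming Pillow with AVIF support via the pillow-avif-plugin package; the assets/ layout and quality are made up.]

```python
# Sketch of fanning an AVIF encode out across CPU cores. Assumes Pillow plus
# the pillow-avif-plugin package; the assets/ layout and quality are made up.
from multiprocessing import Pool
from pathlib import Path

import pillow_avif  # noqa: F401  (importing registers the AVIF codec with Pillow)
from PIL import Image


def encode_one(path):
    # Write foo.jpg -> foo.avif next to the original.
    Image.open(path).save(path.with_suffix(".avif"), quality=60)


if __name__ == "__main__":
    sources = [p for ext in ("*.jpg", "*.png") for p in Path("assets").rglob(ext)]
    with Pool() as pool:  # defaults to one worker per CPU core
        pool.map(encode_one, sources)
```

It is still CPU-bound, so the wall-clock win is roughly the core count, but that can turn a day-long encode into a few hours.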
I've done a similar thing[^1] as well, but as a post-processing step after Hugo builds my site, because I haven't written my own static site generator (yet).
[^1]: https://github.com/PowerSnail/PowerSnail.github.io/blob/mast...
This is also pretty easy in Hugo, albeit maybe less automatic [0].
[0] https://gohugo.io/content-management/image-processing/
I tried this a couple of years ago with Netlify and Hugo, targeting 1x, 2x and 3x sized images. The processing time was too slow with more than around 500 images, resulting in build timeouts.
I switched to checking in the alternate image sizes and using a Hugo image render hook that checks for the existence of the alternative images and generates the appropriate srcset.
I have long felt that the best way to optimize photograph JPEGs was to scale them to the right size and then have a human turn the quality down until just before the eye notices a decrease in quality, and this has worked well for me. But I have to say, you beat me on the two images I tested, one by 30% in file size, and I could not discern a quality difference. Very nice!
Or maybe I have just been clinging to that notion for too long?
FYI, your tester did miss an image that my site was including via the style sheet.
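[Editor's note: a small sketch of that manual workflow with Pillow, for anyone who wants to replicate it: resize to the display width, then write a few quality levels to compare by eye. The output names and quality ladder are arbitrary.]

```python
# Sketch of the manual "scale, then lower quality until it shows" workflow.
# Output names and the quality ladder are arbitrary choices.
from PIL import Image


def quality_ladder(source, target_width, qualities=(85, 75, 65, 55)):
    img = Image.open(source).convert("RGB")
    height = round(img.height * target_width / img.width)
    resized = img.resize((target_width, height), Image.LANCZOS)

    for q in qualities:
        # Compare these by eye and keep the smallest one that still looks right.
        resized.save(f"preview-q{q}.jpg", "JPEG", quality=q, optimize=True)
```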
This is super cool: https://flyimg.io
Especially when you cannot control what images users upload.
nit: "Please take a momento to read them."
Should say "Please take a moment to read them."