For as long as emulators have supported shaders, I've gotten into the habit of configuring them to scale output 4x nearest neighbor and then downscale that to the display resolution using bilinear, which gives roughly the same result: it gets rid of shimmering without blurring everything into a smudge. On any 1080p display with lower-resolution content it looks great, but the method starts to fall apart once you try to scale anything higher than 480p.
With a 4K display the pixel density is high enough that virtually everything looks good scaled this way, though once you go beyond SD content you're usually dealing with 720p or 1080p, both of which divide evenly into 2160p anyway.
It's surprising how often I see bad pixel art scaling given how easy it is to fix.
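To make that two-step trick concrete, here's a minimal sketch using Pillow (my own helper name and filenames; assumes Pillow >= 9.1 for the Image.Resampling enum):

```python
from PIL import Image

def sharp_scale(img: Image.Image, target: tuple[int, int], factor: int = 4) -> Image.Image:
    """Integer nearest-neighbor upscale, then bilinear downscale to the target size."""
    # Step 1: blow the frame up by an integer factor so every source pixel
    # becomes a crisp block of identical pixels (no filtering happens here).
    fat = img.resize((img.width * factor, img.height * factor), Image.Resampling.NEAREST)
    # Step 2: bilinear downscale to the display resolution; the averaging only
    # touches the edges of those blocks, so they soften just enough to stop
    # shimmering without smearing the whole image.
    return fat.resize(target, Image.Resampling.BILINEAR)

# e.g. a 320x240 frame shown 4:3 on a 1080p display
# out = sharp_scale(Image.open("frame.png"), (1440, 1080))
```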
> that to the display resolution using bilinear
On that topic, Pillow's so-called bilinear isn't actually bilinear interpolation [1][2]; same with ImageMagick IIRC (but Magick at least gives you -define filter:blur=<value> to counteract this).
[1] https://pillow.readthedocs.io/en/stable/releasenotes/2.7.0.h...
[2] https://github.com/python-pillow/Pillow/blob/main/src/libIma...
Sounds like exactly the same thing since bilinear filtering in the upscaled image only has an effect near the edges of the fat pixels.
Downscaling using bilinear interpolation doesn't really make sense, since what you want is a weighted average of pixels to make one new pixel at the lower resolution.
Single bilinear samples can lose information and leave out pixels of the higher res image, it's essentially a worse triangle filter.
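A quick way to see that information loss: in the 1-D toy code below (my own sketch, not taken from any library), a single bright pixel in a dark row disappears entirely under 2-tap bilinear sampling at a 4:1 downscale, while a plain box average still picks it up.

```python
import numpy as np

def bilinear_sample_1d(src: np.ndarray, new_len: int) -> np.ndarray:
    """One 2-tap bilinear lookup per output pixel (pixel-center mapping, clamp-to-edge)."""
    out = np.empty(new_len)
    for j in range(new_len):
        x = (j + 0.5) * len(src) / new_len - 0.5   # output pixel center in input coordinates
        x = min(max(x, 0.0), len(src) - 1.0)       # clamp to edge
        x0 = int(x)
        x1 = min(x0 + 1, len(src) - 1)
        t = x - x0
        out[j] = (1 - t) * src[x0] + t * src[x1]
    return out

def box_average_1d(src: np.ndarray, new_len: int) -> np.ndarray:
    """Area average: every source pixel contributes to exactly one output pixel."""
    return src.reshape(new_len, -1).mean(axis=1)

row = np.zeros(16)
row[7] = 255.0                          # one bright pixel in an otherwise dark row
print(bilinear_sample_1d(row, 4))       # [0. 0. 0. 0.]   -- pixel 7 is never sampled
print(box_average_1d(row, 4))           # [0. 63.75 0. 0.] -- pixel 7 still contributes
```

IIRC that skipped-pixel problem is also why Pillow's BILINEAR grew a wider, scale-aware kernel for downscaling (the release notes linked above).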
> Single bilinear samples can lose information and leave out pixels of the higher res image, it's essentially a worse triangle filter.
Can you do [A B] -> [A 0.5*(A+B) B] 1.5x upscaling with a triangle filter? (I think this is not possible, but I might be wrong).
Also, a triangle filter samples too many pixels and makes a blurry mess of pixel-art images/sprites/...
Linear downscaling, under the assumptions of pixel-center mapping and clamp-to-edge, always simplifies into a polyphase filter with position-independent coefficients that uses at most the current input pixel and the previous one; integer upscaling obviously reduces to one as well.
Therefore any form of "sharp bilinear" that does not use bilinear upscaling reduces to such a polyphase filter. [A B] -> [A 0.5*(A+B) B] is equivalent to 2x integer upscale -> 0.75 bilinear scale (= 1.5x of the input), and works on GPUs without fragment shaders too.
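For what it's worth, the claimed equivalence is easy to check numerically. In this tiny 1-D sketch, np.interp with explicit pixel-center mapping stands in for the GPU's bilinear sampler (values for A and B are arbitrary):

```python
import numpy as np

A, B = 10.0, 90.0
fat = np.repeat([A, B], 2)                      # 2x integer (nearest-neighbor) upscale: [A, A, B, B]
x = (np.arange(3) + 0.5) * len(fat) / 3 - 0.5   # centers of the 3 output pixels, mapped into input space
print(np.interp(x, np.arange(len(fat)), fat))   # [10. 50. 90.] == [A, (A+B)/2, B]
```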
I think you're confusing a few things.
First, upscaling with a filter kernel (weighted average) doesn't make as much sense, because you aren't weighting multiple pixels to make a single pixel; you are interpolating. So "upscaling with a triangle filter" isn't really a practical notion.
Second, lots of signal-processing techniques that can technically be applied to pixels row by row don't work well visually and don't make much sense when you're trying to get useful results. This is why things like the Fourier transform are not the backbone of image processing.
Polyphase filtering doesn't make any sense here: you have access to all the data verbatim, and you want to use all of it when you upscale or downsample. There is no compression and no analog signal that needs to be sampled.
Third, any filter kernel is going to use the pixels under its width/support. Using 'too many pixels' isn't a meaningful complaint and isn't the problem; how the pixels are weighted when scaling an image down is what matters. If you want a sharper filter you can always use one. What I actually said was that linearly interpolating samples to downsample an image doesn't make sense and is like using a triangle filter, or half of a triangle filter.
This all seems to be a workaround for what people probably actually want if they're after some sharpness, which is something like a bilateral filter that weights similar pixels more.
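For readers who haven't met it, a bilateral filter weights neighbors both by spatial distance and by how similar their values are, which is what lets hard pixel-art edges survive. The 1-D sketch below is just an illustration of that idea (the sigma values are arbitrary, not a recommendation):

```python
import numpy as np

def bilateral_1d(src: np.ndarray, sigma_space: float = 1.5, sigma_range: float = 30.0) -> np.ndarray:
    """Each output pixel is a weighted average whose weights combine
    spatial closeness AND value similarity, so sharp edges are preserved."""
    idx = np.arange(len(src))
    out = np.empty(len(src))
    for i in range(len(src)):
        spatial = np.exp(-((idx - i) ** 2) / (2 * sigma_space ** 2))          # nearby pixels count more
        similarity = np.exp(-((src - src[i]) ** 2) / (2 * sigma_range ** 2))  # similar values count more
        w = spatial * similarity
        out[i] = np.sum(w * src) / np.sum(w)
    return out

# A hard edge survives: pixels on the far side of the step get near-zero weight.
edge = np.array([0.0] * 8 + [255.0] * 8)
print(bilateral_1d(edge).round(1))
```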
Huh, interesting approach (having skimmed rather than read the post in detail). I'm not that well-versed in this area (as a game developer, I tend to jump straight to nearest-neighbour), but I hadn't come across this before. I love the pathological example of a checkerboard pattern - a very pleasing worst-case scenario, where I suspect it would just be a grey blur. However, the developer doesn't show us the equivalent for the suggested filter - systematically showing side-by-side comparisons of different filters would be useful. I suspect the resulting artefacts would be randomly blurry lines, which could also stand out. But nice to see people thinking about these things...
Here's a related discussion on what 'pixelated' should mean, from the CSS working group:
https://github.com/w3c/csswg-drafts/issues/5837
(Every so often browsers break/rename how nearest-neighbour filtering works. I hope at some point it stabilizes lol - I note that in the linked discussion nobody else cares about backwards compatibility...)
An easy way to do this that I've used when resizing images in Photoshop is to first scale the image by the smallest integer factor that meets or exceeds the target size using nearest neighbor, and then scale that down to the final size with bilinear or bicubic.
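If you want to script that recipe outside Photoshop, a rough Pillow equivalent (the function name and filenames are mine) just takes the ceiling of the scale ratio for the nearest-neighbor step:

```python
import math
from PIL import Image

def integer_then_bicubic(img: Image.Image, target: tuple[int, int]) -> Image.Image:
    """Nearest-neighbor up to the smallest integer multiple covering the target, then bicubic down."""
    factor = max(math.ceil(target[0] / img.width), math.ceil(target[1] / img.height), 1)
    fat = img.resize((img.width * factor, img.height * factor), Image.Resampling.NEAREST)
    return fat.resize(target, Image.Resampling.BICUBIC)

# e.g. 320x240 pixel art to 700x525: factor = 3, so 960x720 is downscaled to 700x525
# out = integer_then_bicubic(Image.open("sprite.png"), (700, 525))
```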
Ok but what does that image at the top look like with this new filter applied?