[ home / rules / faq ] [ overboard / sfw / alt ] [ leftypol / siberia / edu / hobby / tech / games / anime / music / draw / AKM ] [ meta / roulette ] [ wiki / twitter / cytube / git ] [ GET / ref / marx / booru ]

/hobby/ - Hobby

"Our hands pass down the skills of the last generation to the next"


File: 1660200226016-0.png (425.04 KB, 1432x1945, 24 color.png)

File: 1660200226016-1.png (1.39 MB, 1432x1945, fullcolor.png)


people usually only think about format when compressing images

they never think about indexed coloring

consider: same pic, same pixel size. one uses only 24 colors and is 435 kb; the other uses 254 colors and is 1.5mb. The difference isn't dramatic here because the photo is grayscale and doesn't use many colors in the first place. However…


However what?


File: 1660200480545-0.png (7.72 MB, 2200x1462, ClipboardImage.png)

File: 1660200480545-1.png (1.01 MB, 2200x1462, 24 color 2.png)

this image has 532,693 colors and is 7.4mb

but the indexed image has only 24 colors, and looks almost as vibrant. that is a 22,195-fold decrease in the number of colors used. at 1.1mb it is also a 6.7-fold decrease in file size
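For anyone curious what "indexed" means mechanically: every pixel gets replaced by the closest entry in a small palette, and the file stores palette indices instead of full colors. A toy Python sketch (the 4-color palette here is made up for illustration; real tools like GIMP build the palette from the image itself, e.g. with median-cut):

```python
# Hypothetical 4-color palette, just for illustration.
PALETTE = [(0, 0, 0), (255, 255, 255), (200, 40, 40), (40, 80, 200)]

def nearest(color, palette=PALETTE):
    """Return the palette entry closest to `color` (squared RGB distance)."""
    return min(palette, key=lambda p: sum((a - b) ** 2 for a, b in zip(color, p)))

def index_image(pixels, palette=PALETTE):
    """Replace every pixel with its nearest palette color.
    `pixels` is a flat list of (r, g, b) tuples."""
    return [nearest(p, palette) for p in pixels]
```

With only nearest-color mapping like this you get banding; dithering (below in the thread) is what hides it.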


File: 1660200608484.png (559.95 KB, 727x444, ClipboardImage.png)

don't believe me that the 2nd image only has 24 colors? It's thanks to an old technique used for image compression on 90s computers: Floyd-Steinberg dithering. It's not used very often anymore, but it preserves the vibrancy of a photo despite many thousands of colors being dropped


File: 1660201179547-0.png (2.47 MB, 1200x1600, stencil 1.png)

File: 1660201179547-1.png (8.88 KB, 1200x1600, stencil 2.png)

where this really comes in handy compression-wise is images that don't really *need* as many colors as they have. consider this stencil. 38,529 colors. 2.6 MB

It only needs two colors. With indexed coloring we drop 38,527 unnecessary colors. No dithering technique even required. 9.1kb. the image is now hundreds of times smaller in file size while keeping its pixel dimensions and everything important about its detail.
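A quick back-of-the-envelope for why the stencil shrinks so much: an indexed pixel only needs enough bits to name a palette entry, versus 24 bits for full RGB. (This sketch ignores the palette table itself and PNG's deflate pass, which squeezes the flat regions even further.)

```python
import math

def indexed_bits_per_pixel(num_colors):
    """Bits needed to store one palette index."""
    return max(1, math.ceil(math.log2(num_colors)))

def raw_size_bytes(width, height, num_colors=None):
    """Uncompressed pixel-data size: 24-bit RGB if num_colors is None,
    otherwise palette indices (palette table overhead ignored)."""
    bpp = 24 if num_colors is None else indexed_bits_per_pixel(num_colors)
    return width * height * bpp // 8
```

For a 1200x1600 two-color image that's 240,000 bytes of raw indices versus 5,760,000 bytes of raw RGB, before lossless compression does the rest.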


Based, I am unironically interested in this subject.


File: 1660201496159.png (61.16 KB, 648x309, ClipboardImage.png)

How to do this? GIMP (which is free) can convert to indexed color via Image → Mode → Indexed. It even lets you choose the number of colors in the palette.


File: 1660201619629.png (816.81 KB, 883x903, ClipboardImage.png)

in GIMP, to find out the number of colors in an existing image, use Colors → Info → Colorcube Analysis.


now it's quite astonishing how much detail survives Floyd-Steinberg dithering. both these images have been reduced to only two colors, and both are based on the photo with 532,693 colors. Obviously all the vibrant colors are lost, but you can make out nearly every face, the sky, and the building, with only 2 colors! One has dithering, the other has no dithering. see file names.


File: 1660202200555.png (590.34 KB, 1675x738, ClipboardImage.png)

this does not count as "grayscale", since grayscale images use all the 0-saturation shades of gray between white and black. These images use only 2 arbitrary colors. No in-between shades whatsoever. The dithering uses different ratios of "spackling" of the two colors to create the illusion of a gradient when zoomed out.

Floyd-Steinberg dithering was used the most in the 90s, but was invented all the way back in 1976. That's 46 years ago!


File: 1660202575462-0.png (66.68 KB, 1920x1080, dithered gradient.png)

File: 1660202575462-1.png (12.85 KB, 749x705, ClipboardImage.png)

consider this dithered gradient. this image is indexed to use only 2 colors, and yet the gradient is perfectly visible, with all the "in between" shades! To see how this is done we can zoom in. pic 2 shows the zoomed-in view.

Dithering pushes the "debt" of an individual pixel being inaccurate in its color onto its neighbors. Through the collective action of the pixels, the inaccuracy of their individual colors is compensated through rearrangement!
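That "debt-pushing" can be sketched in a few lines of Python. This is a from-scratch toy version of Floyd-Steinberg for a flat grayscale pixel list, not GIMP's implementation; the 7/16, 3/16, 5/16, 1/16 weights are the classic ones from the 1976 paper.

```python
def floyd_steinberg(gray, width, height, levels=(0, 255)):
    """Dither a grayscale image (flat row-major list, values 0-255)
    down to the given output levels, diffusing each pixel's rounding
    error onto its unvisited neighbors: right 7/16, down-left 3/16,
    down 5/16, down-right 1/16."""
    img = [float(v) for v in gray]
    out = [0] * len(img)
    for y in range(height):
        for x in range(width):
            i = y * width + x
            old = img[i]
            # snap to the nearest allowed output level
            new = min(levels, key=lambda l: abs(l - old))
            out[i] = new
            err = old - new  # the color "debt" this pixel couldn't pay
            if x + 1 < width:
                img[i + 1] += err * 7 / 16
            if y + 1 < height:
                if x > 0:
                    img[i + width - 1] += err * 3 / 16
                img[i + width] += err * 5 / 16
                if x + 1 < width:
                    img[i + width + 1] += err * 1 / 16
    return out
```

Run it on a flat 50% gray field and roughly half the output pixels come out white, half black, so the average brightness survives even though every individual pixel is "wrong".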


Thanks! If this reaches even one person, it's worth it. :)


You can see the difference in this one in the thumb even.


File: 1660203298441-0.png (99.75 KB, 559x488, ClipboardImage.png)

File: 1660203298441-1.png (56.82 KB, 743x241, ClipboardImage.png)

File: 1660203298441-2.png (69.56 KB, 924x243, ClipboardImage.png)

Some nice lad has also put together a bunch of side by side examples of all the dithering algos


Also one downside I just noticed is I have leftypol zoomed out 10% by default and it makes these look all fucked up lol


The indexed looks like it was taken in dense smoke. Get your eyes checked.


thanks for sharing!
my vision is good. i got my eyes checked when I got my license renewed. I appreciate your opinion.


Jarvis, Stucki, Sierra, and Atkinson algos all strike me as having particularly strong edges compared to Floyd-Steinberg


>The indexed looks like it was taken in dense smoke
the image looks good not on its own merits, but considering it has 532,669 fewer colors than the original. Going from 532,693 to only 24 colors while maintaining that level of detail is impressive! Feel free to disagree; I get what you mean about the smokey look. A nearly sevenfold drop in size is also pretty impressive. For a lot of images posted on the internet (especially non-photographs) it could be useful. You know how many anime pictures only really use 4 or 5 hues, yet without indexing come out to thousands of colors?


tfw no retrofuturism




so what's the lesson here


size and number of colors go way down with very little essential detail being lost


that's really based


OP needs to go to the optometrist


File: 1660284999314-0.jpg (322.84 KB, 1432x1945, lenin.jpg)

File: 1660284999314-1.jpg (1019.17 KB, 2200x1462, image.jpg)

Protip: use JPEG for photos


These both look better than OP's compressed pictures and have smaller file size. OP BTFO


bro the smaller size images look like shit in both of these examples. data is cheap nowadays, why the hell do we need to cut the size so much, fidelity is more important


see >>27791
>why the hell do we need to cut the size so much
so we don't have to wait an hour for your image to load


bro is trying to emulate lossy compression with a lossless format


File: 1660383045879-0.jpg (101.42 KB, 590x729, stalin.jpg)

File: 1660383045879-1.gif (77.19 KB, 590x729, stalin.gif)

You are like a little baby… watch this!


File: 1660384060014-0.jpg (69.65 KB, 590x729, stalin-70.jpg)

File: 1660384060014-1.jpg (51.43 KB, 590x729, stalin-50.jpg)

File: 1660384060014-2.jpg (31.26 KB, 590x729, stalin-30.jpg)

File: 1660384060014-3.jpg (15.44 KB, 590x729, stalin-15.jpg)

File: 1660384060014-4.jpg (5.29 KB, 590x729, stalin-05.jpg)


File: 1660384519122-1.pdf (16.8 MB, 153x255, inferno00dant_2.pdf)

File: 1660384519122-2.pdf (33.95 MB, 153x255, inferno00dant_2_jpg.pdf)

Popular image formats usually have very good compression. Compare that to scanned documents, where PDF has proliferated despite DJVU having objectively better compression and faster rendering by a wide margin.
Look at this random book from archive.org (https://archive.org/details/inferno00dant_2/). The DJVU looks much clearer than the native PDF, which is more than double its size, and the PDF containing JPEGs has only marginally more detailed illustrations at four times the size. The text of the DJVU looks crisper than that of both PDFs.
Most books on archive.org don't have DJVUs anymore, so I usually losslessly convert a directory of JPEG2K files to a PDF and transcode it to DJVU with this script:
t=`mktemp`; d=`mktemp -d`
for i in "$1"/*.[Jj][Pp]2; do
  jpeg2ktopam "$i" | pamtotiff > "$t" &&
  tiff2pdf -o "$d/`basename "$i" | cut -d. -f1`.pdf" "$t"
done
pdfunite "$d"/*.pdf "$t" &&
pdf2djvu -o "$2" "$t"
rm "$t"; rm -r "$d"

This requires netpbm (jpeg2ktopam, pamtotiff), libtiff's tiff2pdf, poppler's pdfunite, and pdf2djvu.


I just did it because I thought it looked cool lol, I had already made it before the thread. I like how jpeg improves in some spots until it suddenly goes downhill massively, although that's mostly because the image has a weird quality already. jpeg wins on compression, but it doesn't give you any of the aesthetics of dithering or color reduction.
Although .avif is probably the best overall


I think what would help with aesthetics more than anything is an AI step when compressing. Reducing colors often leads to objects blending into each other or into the background. Fixing this by hand takes a small eternity, so an AI that finds the borders of objects and picks a setting where a given color reduction causes little blending would be good.

The AI should also do a bit of phony coloring (not taking the nearest color from the reduced set) if that helps a lot against blending.

Oh, and the AI should distinguish between things like machines and buildings on the one hand and organic structures on the other, and apply something like the Bayer dither to the former and a less orderly dither to the latter.
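The "Bayer dither" mentioned here is ordered dithering: each pixel is compared against a fixed, tiled threshold matrix, which produces the regular crosshatch look that suits hard-edged structures (as opposed to the noisier texture of error diffusion). A minimal grayscale, two-color sketch in Python:

```python
# Classic 4x4 Bayer threshold matrix (values 0..15).
BAYER4 = [
    [ 0,  8,  2, 10],
    [12,  4, 14,  6],
    [ 3, 11,  1,  9],
    [15,  7, 13,  5],
]

def bayer_dither(gray, width, height):
    """Threshold each 0-255 grayscale pixel against the tiled Bayer
    matrix. Unlike error diffusion, each pixel is decided independently,
    so the pattern is perfectly regular."""
    out = []
    for y in range(height):
        for x in range(width):
            threshold = (BAYER4[y % 4][x % 4] + 0.5) * 255 / 16
            out.append(255 if gray[y * width + x] > threshold else 0)
    return out
```

Because every pixel only looks at its own matrix cell, this also parallelizes trivially, which is partly why it was popular on weak 90s hardware.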


That's because dithering looks like shit, and we have better displays now than we did in the 90s, when color quantization was still a passable compression solution. But now we have jpeg.

Use jpeg instead, with an encoder like mozjpeg, for your Lenin portraits

Unique IPs: 14
