So I've discovered some weird output values after drawing
some text. The destination alpha would become 0xFE even
when the back buffer had a background with 0xFF alpha.
Example:
Dest is 0xff00ff00 (green).
Color is 0xffffffff (white).
Current font alpha is 170 (0xaa).
--> Output was 0xFEaaFEaa instead of 0xFFaaFFaa.
This is caused by a subtly incorrect calculation in the font
masking code (the mtab[v] = 0x55 case above): MUL_256 takes
alpha values in the range [1-256], not [0-256] as was assumed.
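To illustrate (a sketch in the style of the blend macro involved;
the exact macro body is an assumption here, and the 0xffffffff
white is just the color from the example above):

  #include <stdio.h>

  typedef unsigned int DATA32;

  /* MUL_256 scales all four channels of c by a; a must be in
   * [1-256], where 256 means fully opaque. */
  #define MUL_256(a, c) \
   ( (((((c) >> 8) & 0x00ff00ff) * (a)) & 0xff00ff00) + \
     (((((c) & 0x00ff00ff) * (a)) >> 8) & 0x00ff00ff) )

  int
  main(void)
  {
     DATA32 white = 0xffffffff;
     unsigned int m = 0xff; /* fully opaque 8-bit mask value */

     /* wrong: 8-bit alpha used directly; each channel becomes
      * (0xff * 0xff) >> 8 = 0xfe -> 0xfefefefe (the 0xFE bug) */
     printf("%08x\n", MUL_256(m, white));

     /* right: map [0-255] into [1-256] first -> 0xffffffff */
     printf("%08x\n", MUL_256(m + 1, white));
     return 0;
  }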
This should ensure that the difference between the original
pixel value and the rle4-encoded one is <= 8.
The previous fix was a bit naive, as it did not take into
account the a4-to-a8 conversion (which is a8 = (a4 << 4) | a4).
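For reference, a sketch of the rounding that achieves this
(helper names are hypothetical): since decoding expands each
nibble as a4 * 17, encoding should round to the nearest multiple
of 17 rather than just shifting right by 4.

  typedef unsigned char DATA8;

  /* encode: round(a8 / 17) keeps the decode error <= 8; a plain
   * (a8 >> 4) would allow errors up to 15 (e.g. 0x0f -> 0x00) */
  static inline DATA8
  _a8_to_a4(DATA8 a8)
  {
     return (DATA8)((a8 + 8) / 17);
  }

  /* decode: a8 = (a4 << 4) | a4, i.e. a4 * 17
   * (0x0 -> 0x00, 0xf -> 0xff) */
  static inline DATA8
  _a4_to_a8(DATA8 a4)
  {
     return (DATA8)((a4 << 4) | a4);
  }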
The structure layout should not change, despite the union
modification. I am renaming for consistency with older branches
that had a mask field in RGBA_Image. Also, mask.data vs. data8
is really just a way to avoid casting between DATA8 and DATA32
pointers (and it shows clearly what kind of data you are
dealing with).
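Roughly, the idea is something like this (a sketch; the
surrounding fields and exact layout are assumptions, not the
real header):

  typedef unsigned int  DATA32;
  typedef unsigned char DATA8;

  struct _RGBA_Image
  {
     /* ... other fields ... */
     union {
        DATA32 *data;   /* view the buffer as 32-bit ARGB pixels */
        DATA8  *data8;  /* view the same buffer as 8-bit mask bytes */
     } mask;
  };

Because both members alias the same storage, the struct still
holds a single pointer and its size is unchanged.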
Well, raster did a great job of optimizing font drawing... but only
for RGBA32 targets. For this font effects case, we also want to
render text into ALPHA buffers.
For now, reuse the existing alpha blending & glyph decompression
functions. It's MUCH easier, and it works. It is definitely slower
than decompressing on the fly with everything optimized, but for
now this will not even be the performance bottleneck in an effect
(blur will be a lot slower).
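The simple path looks roughly like this (a sketch with
hypothetical names; the real code reuses evas' existing blend
functions rather than an open-coded loop):

  typedef unsigned char DATA8;

  /* draw a fully decompressed 8-bit glyph into an alpha-only
   * buffer: a standard "over" blend, d = s + d * (255 - s) / 255 */
  static void
  _glyph_draw_to_alpha(DATA8 *dst, int dst_stride,
                       const DATA8 *glyph_a8, int w, int h,
                       int x, int y)
  {
     for (int j = 0; j < h; j++)
       {
          DATA8 *d = dst + ((y + j) * dst_stride) + x;
          const DATA8 *s = glyph_a8 + (j * w);

          for (int i = 0; i < w; i++)
            d[i] = (DATA8)(s[i] + ((d[i] * (255 - s[i])) / 255));
       }
  }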
this changes the internal encoding of font glyphs in evas to use 4bit
uncompressed if small, or 4bit rle (run length encoded) if larger.
this saves at least 50% of the memory used for glyphs - and more for
bigger fonts. with large fonts (40-80 pixel sizes) we can save in the
region of 80% of the memory used for glyphs. this also happens to
allow speedups in rendering too.
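for illustration, a decoder for a simple nibble-packed rle4 layout
could look like this (the run/value packing here is an assumption,
not evas' exact wire format):

  #include <stddef.h>

  /* each encoded byte: run length in the high nibble (1..16
   * pixels), 4-bit alpha in the low nibble, expanded back to
   * 8 bits on decode via (a4 << 4) | a4 */
  static size_t
  rle4_decode(const unsigned char *src, size_t src_len,
              unsigned char *dst, size_t dst_len)
  {
     size_t out = 0;

     for (size_t i = 0; i < src_len; i++)
       {
          unsigned run = (unsigned)(src[i] >> 4) + 1;
          unsigned char a4 = src[i] & 0x0f;
          unsigned char a8 = (unsigned char)((a4 << 4) | a4);

          while (run-- && out < dst_len) dst[out++] = a8;
       }
     return out; /* number of 8-bit alpha pixels produced */
  }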