This flag should be set if the string passed is to be executed
rather than assigned. This is used to pass complex arguments
as data, such as tables (e.g. the color class).
Make sure that buffers don't override already existing
global vars such as 'mask' (a function name). Yeah, it happened
to me.
CC support is a little bit hackish. Need to find a better way.
This should preserve ABI stability with earlier versions of
edje_cc while still providing more advanced control over
proxy bindings for evas filters from EDC.
Also fix proxy binding for filters.
@feature
Reuse the previous buffer code. This keeps API stability.
The new class "color" is here for a more convenient color
representation. This way, colors can be represented in more
natural ways like: {r,g,b[,a]}, 0xaarrggbb, "red", "#rrggbb"
Class color is implemented in pure Lua, and adds a .lua file
to Evas' share folder.
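As a sketch of what this enables, the forms listed above could be used like
so in a filter script (note: 'blend' taking a color parameter is an
assumption for illustration, not confirmed by this commit):

```
-- Illustrative only; the color forms are those listed above.
blend (color = {255, 128, 0})    -- {r,g,b[,a]} table
blend (color = 0xffff8800)       -- 0xaarrggbb integer
blend (color = "red")            -- named color
blend (color = "#ff8800")        -- "#rrggbb" string
```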
This will improve the debug output of evas and specifically
allow setting "evas_filter" log level to a higher or lower
value depending on what you are debugging :)
Now we're ready to implement runtime changes to the filters'
state (color classes, edje state, etc.), as the Lua function
will be run whenever required.
This is to prepare the changeable states (animation, color, scale...)
- Remove use of Eina_Value (simplifies code)
- Use proper Lua type for buffers (with pretty __tostring)
This adds the buffer methods: width, height, type, name, source
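A minimal sketch of how these methods might be queried from inside a
script; the exact call sites are illustrative assumptions based only on
the method names above:

```
-- Illustrative only: inspect a buffer's properties from the script.
buffer : a (alpha);
print (a:name(), a:type())      -- buffer name and colorspace
print (a:width(), a:height())   -- buffer dimensions
```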
This will allow changing the state of the filter and re-run it
without re-creating the Lua_State object. This is to handle size,
color, animation state and scale changes (amongst other things).
make check would even fail on 32-bit machines because of this:
Lua tables are not arrays, and lua_next doesn't guarantee the
order of the elements, as I wrongly assumed.
@fix
Fixes T1615
This adds filter support to Image objects as well.
The exact same filters can run on Text and on Images
(provided some colorspace limitations are respected).
This basically adds:
- Support for RGBA input buffer
- Eo entry points for Image filter support
- Basic filter support in Evas_Image
It was a pretty stupid idea to write a parser for a custom language
when we already have Lua as a dependency and it's so beautiful and
easy.
There is a fallback function to allow for compatibility with legacy
filters. But that broken syntax is not recommended. I'll probably
remove it soon.
All the test cases I have in my example app work fine with this
compatibility layer.
During padding calculation, ox and oy should be taken into account
only when the blend operation is neither repeating nor stretching.
Otherwise, the buffer will grow unnecessarily.
This will allow forcing a specific value for the filter padding,
instead of relying on auto calculation.
Two advantages:
- Auto calculation can't be perfect, since it will add as much
padding as required for the full blur effect
- This prepares the path for animations with effects, where the
object size does not change over time
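A sketch of how forcing the padding might look in a script; the
function name 'padding_set' and its argument form are assumptions
here, used only to illustrate the idea described above:

```
-- Illustrative only: force a fixed padding instead of auto-calculation.
padding_set (15)        -- hypothetical call, 15px on all sides
blur (10, type = box)   -- the effect must fit within the forced padding
```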
BOX blur is a lot faster (and easier to optimize, too)
than GAUSSIAN blur. Repeating a BOX blur 2x or 3x also
gives results similar to GAUSSIAN blur (very smooth), but
in much less time.
Add a count parameter to the BOX blur instruction.
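The new parameter might be used like this; the exact syntax is an
illustrative assumption based on the description above:

```
-- Illustrative only: repeat the box blur to approximate a gaussian.
blur (20, type = box, count = 3)   -- three box passes, very smooth result
```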
A CRItical message was always displayed when setting a filter
on a text object, saying that proxy rendering is not supported on GL.
Reduce CRI to ERR and skip proxy rendering altogether if there are
no proxy sources.
This @fix needs to be backported.
Thanks zmike for reporting this.
Signed-off-by: Jean-Philippe Andre <jp.andre@samsung.com>
The documentation said color was used as a multiplier, but in
reality the image drawing functions don't use the context's
color when drawing. So the color is only defined for Alpha -> RGBA
operations.
The Windows build (mingw) does not know about strtok_r.
So, let's use the non-safe variant strtok instead.
Currently, this function is called from the main thread only,
so this should be fine :)
In the future it would be nice to stop using strtok and use
strtok_r everywhere, adding it to Evil. Considering the
release is coming soon, I'm not going to change something like that
now.
Test case was:
buffer : a (alpha);
blur (20, dst = a);
blend (src = a, ox = 30);
In that case, padding was 20, 30, 20, 20.
So the blurred buffer was clipped on screen.
In Doxygen format, write the reference documentation for the filters.
It will contain only a few examples and should serve as a reference,
much like edcref.
This is for the script language itself, not for the Eo APIs or the
internal APIs (those are already documented).
Since the transform operation is (for now) a very simple tool,
it only works when src and dst have the same colorspace.
This commit forces users to specify dst, since "input" and "output"
have different colorspaces.
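For illustration, a transform call with an explicit dst of the same
colorspace as its source might look like this; the operation and
parameter names are assumptions, not confirmed by this commit:

```
-- Illustrative only: dst must be given explicitly and must match
-- the source's colorspace (both alpha here).
buffer : flipped (alpha);
transform (flipped, vflip, src = input);   -- hypothetical parameters
blend (src = flipped);
```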
Also, remove globals A, R, G, B from parser.c... these are
temp variables used in a macro.
My CFLAGS didn't include -Wshadow, so I missed those.
Thanks Tom for spotting :)
If source_set was called after program_set, then parsing would fail.
It used to work because the program was re-parsed at source_set.
Now, save the code, mark the filter as changed, and reparse again
if the source changed (keep track of invalid programs to avoid
excessive parsing).
Proxy sources & objects were not properly unset.
This results either in crashes (especially in the Edje tests)
or dangling objects with tons of references.
Remove the refcount increase/decrease, as it is redundant.
Store pairs proxy+source instead of just the source in all hashes,
so we can unset the is_proxy flag on the proxy when there are no
sources anymore.
Remove compilation warnings: we don't really need cubic
interpolation at this point, we can still add it back
later if wanted.
Also, make it clear that buffer #2 is the output buffer.
Remove meaningless FIXME.
It is not possible to logically handle padding and offset at the same
time for a proper mirror effect, unless this is handled directly at the
transformation level.
Also, add support for blend() operation padding computation.
Add parameters l, r, t, b to clip the fill area.
While l=x and t=y, the width and height of the clip are determined
at filter run-time, since we don't know the buffer size before.
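A sketch of the new parameters in use; the dst and color arguments are
illustrative assumptions alongside the l, r, t, b parameters described
above:

```
-- Illustrative only: fill the buffer, clipped by 4px on each side.
fill (dst = a, color = "#00000080", l = 4, r = 4, t = 4, b = 4);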
Syntax was: buffer(name=bla,alpha=bool);
Changed to: buffer:bla(alpha);
There's a colon between buffer and its name because ALL whitespace
is discarded. This might prove useful sometime in the future, so let's
keep it this way for now :)
Padding was brutally calculated by summing ALL the filters'
individual paddings. Now we try to be a bit smarter and propagate
the padding between buffers in the filter chain.
So, the (font) effects will be described by a string. It's
basically a new language (yeah yeah sorry), VERY simple, based
on function calls a la Python, with sequential and named arguments.
This string is intended to be passed directly to an evas text object
and embedded into the evas textblock's markup tags.
This file implements the basic parsing functions, the
compilation of instructions into a queue of commands, and the glue
code for the rest of the filter infrastructure.