Wiki pages threading_pg created: 1 main + 5 PG + index + 3 images

Signed-off-by: Clément Bénier <clement.benier@openwide.fr>
Clément Bénier 2015-09-07 12:21:29 +02:00 committed by Cedric BAIL
parent c23ca903cd
commit 43f6bc0db9
12 changed files with 374 additions and 0 deletions

Binary files not shown (3 new images: 29 KiB, 33 KiB, 30 KiB).

View File

@ -68,6 +68,7 @@ Go check the current available version of EFL on each distro/platform:
* [[program_guide/focus_ui_pg|Managing UI Component Focus PG]]
* [[program_guide/customizing_ui_pg|Customizing UI Components PG]]
* [[program_guide/main_loop_pg|Main Loop PG]]
* [[program_guide/threading_pg|Threading PG]]
=== Samples ===

View File

@ -11,4 +11,5 @@
* [[program_guide/customizing_ui_pg|Customizing UI Components PG]]
* [[program_guide/focus_ui_pg|Managing UI Component Focus PG]]
* [[program_guide/main_loop_pg|Main Loop PG]]
* [[program_guide/threading_pg|Threading PG]]
++++

View File

@ -0,0 +1,5 @@
++++ Threading Menu|
^ [[/program_guide/threading_pg|Threading PG]] ^^^^^
| [[/program_guide/threading/thread_safety|Thread Safety]] | [[/program_guide/threading/thread_pools|Thread Pools]] | [[/program_guide/threading/thread_management_with_ecore|Thread Management with Ecore]] | [[/program_guide/threading/low-level_functions|Low-level Functions]] | [[/program_guide/threading/thread_use_example|Thread Use Example]] |
++++

View File

@ -0,0 +1,71 @@
{{page>index}}
-------
===== Low-level Functions =====
Eina offers low-level functions that are portable across operating systems,
such as locks, conditions, semaphores, barriers, and spinlocks. These functions
closely follow the logic of pthreads.
While useful, these functions are building blocks: in EFL applications, the
higher-level functions available in Ecore are usually the better choice.
For an introduction to threads and pthreads in particular, see:
* [[http://www.ibm.com/developerworks/library/l-pthred/index.html|Basic use of pthreads]] (IBM developerWorks)
* [[https://computing.llnl.gov/tutorials/pthreads/|POSIX Threads Programming]] (Lawrence Livermore National Laboratory)
* [[http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/pthread.h.html|POSIX specification of pthread.h]] (The Open Group)
If you are already familiar with threads, see the standard pthreads
documentation and the Eina reference documentation, or the following function
lists. Remember that the Eina functions map very closely to the pthreads
functions.
^ Lock (mutual exclusions) ^^
| **pthreads function** | **eina equivalent** |
|''pthread_mutex_init()''|''eina_lock_new()''|
|''pthread_mutex_destroy()''|''eina_lock_free()''|
|''pthread_mutex_lock()''|''eina_lock_take()''|
|''pthread_mutex_trylock()''|''eina_lock_take_try()''|
|''pthread_mutex_unlock()''|''eina_lock_release()''|
|none (prints debug information on the lock)|''eina_lock_debug()''|
^ Conditions (notifications when condition objects change) ^^
| **pthreads function** | **eina equivalent** |
|''pthread_cond_init()'' |''eina_condition_new()'' |
|''pthread_cond_destroy()'' |''eina_condition_free()'' |
|''pthread_cond_wait()'' |''eina_condition_wait()'' |
|''pthread_cond_timedwait()'' |''eina_condition_timedwait()'' |
|''pthread_cond_broadcast()'' |''eina_condition_broadcast()'' |
|''pthread_cond_signal()'' |''eina_condition_signal()'' |
^ RWLocks (Read-write locks, for multiple-readers/single-writer scenarios) ^^
| **pthreads function** | **eina equivalent** |
|''pthread_rwlock_init()'' |''eina_rwlock_new()'' |
|''pthread_rwlock_destroy()'' |''eina_rwlock_free()'' |
|''pthread_rwlock_rdlock()'' |''eina_rwlock_take_read()'' |
|''pthread_rwlock_wrlock()'' |''eina_rwlock_take_write()'' |
|''pthread_rwlock_unlock()'' |''eina_rwlock_release()'' |
^ TLS (Thread-Local Storage) ^^
| **pthreads function** | **eina equivalent** |
|''pthread_key_create()'' |''eina_tls_new()'' |
|''pthread_key_delete()'' |''eina_tls_free()'' |
|''pthread_getspecific()'' |''eina_tls_get()'' |
|''pthread_setspecific()'' |''eina_tls_set()'' |
^ Semaphores (access restrictions for a set of resources) ^^
| **pthreads function** | **eina equivalent** |
|''sem_init()'' |''eina_semaphore_new()'' |
|''sem_destroy()'' |''eina_semaphore_free()'' |
|''sem_wait()'' |''eina_semaphore_lock()'' |
|''sem_post()'' |''eina_semaphore_release()'' |
^ Barriers ^^
| **pthreads function** | **eina equivalent** |
|''pthread_barrier_init()'' |''eina_barrier_new()'' |
|''pthread_barrier_destroy()''|''eina_barrier_free()'' |
|''pthread_barrier_wait()'' |''eina_barrier_wait()'' |
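As a quick illustration of how the lock and condition functions above fit together, here is a minimal sketch; the ''data_ready'' flag and the ''_producer()''/''_consumer()'' helpers are hypothetical names used only for this example.
<code c>
#include <Eina.h>

static Eina_Lock lock;
static Eina_Condition cond;
static Eina_Bool data_ready = EINA_FALSE;

// Worker side: publish a result and wake up the waiter
static void
_producer(void)
{
   eina_lock_take(&lock);
   data_ready = EINA_TRUE;
   eina_condition_signal(&cond);
   eina_lock_release(&lock);
}

// Consumer side: wait (with the lock held) until the worker signals
static void
_consumer(void)
{
   eina_lock_take(&lock);
   while (!data_ready)
     eina_condition_wait(&cond); // atomically releases the lock while waiting
   eina_lock_release(&lock);
}

int
main(void)
{
   eina_init();
   eina_lock_new(&lock);
   eina_condition_new(&cond, &lock);
   // ... spawn threads that call _producer() and _consumer() ...
   eina_condition_free(&cond);
   eina_lock_free(&lock);
   eina_shutdown();
   return 0;
}
</code>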
-------
{{page>index}}

View File

@ -0,0 +1,61 @@
{{page>index}}
===== Thread Management with Ecore =====
Ecore offers a simplified API for managing threads in EFL applications. The
Ecore API covers the typical scenario where the main thread creates a worker
thread that needs to send data back to the main thread or to trigger
GUI-related calls. Because GUI-related functions are not thread-safe, such
calls must go through the main loop.
==== Creating Threads with Ecore ====
The threads created with Ecore are by default integrated with the thread pool
and offer simple callback-based ways to interact with the main loop. New
threads are created as needed until the maximum capacity of the thread pool is
reached.
== To return values to the main thread ==
Use the ''ecore_thread_feedback_run()'' function to send intermediate feedback
from the thread to the main loop.
== To return only the final value to the main thread ==
To create and run a thread, use the ''ecore_thread_run()'' function. It runs a
function inside a thread from the thread pool and takes care of all the
low-level work. It returns the corresponding thread handle or ''NULL'' on
failure.
The most common way to return data to the main thread is to store a pointer to
it in the ''data'' argument. When the thread is aborted or finishes,
''func_cancel()'' or ''func_end()'' respectively is called from the main loop.
Because these callbacks run from the main loop, only one of them executes at a
time, which avoids race conditions.
The data pointer approach only works when the data is shared between a single
thread and the main thread. However, this does not prevent you from using the
''func_end()'' callback to merge the results into a single data structure. For
example, you can add all the values computed by the threads to an
''Eina_List'': all the operations on the list then happen from a single
thread, one after the other rather than concurrently.
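The following is a minimal sketch of the data pointer approach described above; the ''Work_Data'' structure and the helper function names are hypothetical and used only for illustration.
<code c>
#include <stdio.h>
#include <stdlib.h>
#include <Ecore.h>

// Shared between the worker thread and the main loop
typedef struct
{
   int input;
   int result;
} Work_Data;

static void
_heavy_work(void *data, Ecore_Thread *thread EINA_UNUSED)
{
   Work_Data *wd = data;

   // Long computation running in the worker thread: no GUI calls here
   wd->result = wd->input * wd->input;
}

static void
_work_end(void *data, Ecore_Thread *thread EINA_UNUSED)
{
   Work_Data *wd = data;

   // Called from the main loop once the thread has finished: safe to use the result
   printf("Result: %d\n", wd->result);
   free(wd);
}

static void
_start_work(void)
{
   Work_Data *wd = malloc(sizeof(Work_Data));

   wd->input = 7;
   ecore_thread_run(_heavy_work, _work_end, NULL, wd);
}
</code>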
==== Running Callbacks from the Main Loop ====
If you are performing operations in another thread and want to update a
progress bar, the update operation must be done from the main thread. The
simplest way is to use the ''ecore_main_loop_thread_safe_call_async()''
function, which takes a function and some data as parameters and instructs the
main loop to execute the given function with the given data.
Depending on the kind of thread the function is called from, the process
differs:
* If the function is called from a thread other than the main one, it sends a message to the main loop and returns quickly. The message is processed in order, like any other message.
* If the function is called from the main thread, the callback is invoked immediately, as in a direct call.
If you want to wait until the callback is called and returns, use the
''ecore_main_loop_thread_safe_call_sync()'' function, which is similar but
synchronous. Since it is synchronous, it can also return the value returned by
the callback.
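For example, here is a minimal sketch of the asynchronous variant updating a progress bar; the ''progress'' object and the helper names are hypothetical and assume the progress bar was created in the main thread during GUI setup.
<code c>
#include <stdlib.h>
#include <Ecore.h>
#include <Elementary.h>

// Hypothetical progress bar created in the main thread during GUI setup
static Evas_Object *progress;

// Runs in the main loop, so touching the GUI is safe
static void
_update_progress(void *data)
{
   double *value = data;

   elm_progressbar_value_set(progress, *value);
   free(value);
}

// Called from a worker thread whenever a chunk of work is done
static void
_report_progress(double fraction)
{
   double *value = malloc(sizeof(double));

   *value = fraction;
   ecore_main_loop_thread_safe_call_async(_update_progress, value);
}
</code>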
-----
{{page>index}}

View File

@ -0,0 +1,52 @@
{{page>index}}
-----------
===== Thread Pools =====
Threads are operating system resources: while much lighter than processes,
they still have a cost. Moreover, spawning a thousand threads means that each
of them only gets 1/1000th of the total CPU time: each thread progresses
slowly and, in the worst case, the system wastes all of its time switching
between threads without doing any actual work.
Thread pools solve this problem. In a thread pool, up to a maximum number of
threads are created on demand and used to execute tasks. When their tasks are
finished, the threads are kept alive but sleeping, which avoids the cost of
creating and destroying them.
In EFL, the thread pool is controlled by a ''thread_max'' parameter, which
defines the maximum number of threads running at the same time. Another
control feature is the ''func_end()'' callback that runs from the main loop
thread after a task has completed and is typically used to extract the data
from the finished task and make it available to the main loop.
To manage the maximum number of threads:
* To retrieve the current value, use the ''ecore_thread_max_get()'' function.
* To set the value, use the ''ecore_thread_max_set()'' function. The value has a maximum of 16 times the CPU count.
* To reset the maximum number of threads, use the ''ecore_thread_max_reset()'' function.
* To get the number of available threads in the pool, use the ''ecore_thread_available_get()'' function. It returns the current maximum number of threads minus the number of running threads; the number can be negative if the maximum number of threads has been lowered (a short sketch of these functions follows the list).
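Here is a minimal sketch combining these functions; the sizing policy (one thread per CPU core) is only an example and assumes CPU-bound tasks.
<code c>
#include <stdio.h>
#include <Eina.h>
#include <Ecore.h>

static void
_tune_thread_pool(void)
{
   int cores = eina_cpu_count();

   // Allow one concurrent task per CPU core for CPU-bound work
   if (ecore_thread_max_get() < cores)
     ecore_thread_max_set(cores);

   printf("max threads: %d, available: %d\n",
          ecore_thread_max_get(), ecore_thread_available_get());
}
</code>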
The following figures illustrate the thread pool. The first figure shows the
occupancy of a hypothetical thread pool: several tasks exist, of which 4 are
running. The ''thread_max'' parameter of the pool is 4, so the other tasks are
waiting. No thread currently has its ''func_end()'' callback being called.
{{ :threading_pool_lifecycle_1.png }}
When a task (applying the sepia filter to image1) finishes, the corresponding
''func_end()'' function is invoked from the main loop.
{{ :threading_pool_lifecycle_2.png }}
With that task done, one of the threads in the pool becomes available, and
another task (adding the reverberation effect to audio3) can run in it.
{{ :threading_pool_lifecycle_3.png }}
As long as there are tasks to be done, the thread pool continues the same way,
running tasks in its threads whenever a thread is available.
-----
{{page>index}}

View File

@ -0,0 +1,30 @@
{{page>index}}
-------
===== Thread Safety =====
If several threads have to work on the same resources, conflicts can happen
because the threads run in parallel. For example, if thread A modifies several
values while thread B is reading them, some of the values read by B are likely
to be outdated. Similar issues can happen if both threads modify the data
concurrently.
These kinds of conflicts are called race conditions: depending on which thread
is faster, the output changes and can be incorrect. Avoiding such issues is
called thread safety. Thread safety revolves around critical sections: blocks
of code that operate on shared resources and must not be executed concurrently
by more than one thread.
The usual solution for ensuring exclusive access to shared resources is mutual
exclusion: only 1 thread can operate on the data at any given time. Mutual
exclusion is often implemented through locks. Before attempting to operate on
a shared resource, the thread waits until it can lock something called a mutex
(short for mutual exclusion), then operates on the resource, and finally
unlocks the mutex. Operating systems guarantee that only 1 thread can lock a
given mutex at a time, which ensures that only 1 thread operates on the shared
resource at a time.
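Here is a minimal sketch of a critical section protected by a mutex, using the Eina lock API described under [[/program_guide/threading/low-level_functions|Low-level Functions]]; the ''counter'' variable and the ''_increment()'' helper are hypothetical.
<code c>
#include <Eina.h>

static Eina_Lock counter_lock; // created once with eina_lock_new(&counter_lock)
static int counter = 0;        // shared resource

static void
_increment(void)
{
   // Critical section: only one thread at a time can hold the lock,
   // so the read-modify-write below cannot race with other threads
   eina_lock_take(&counter_lock);
   counter++;
   eina_lock_release(&counter_lock);
}
</code>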
For more information on thread safety, see
[[/program_guide/threading/low-level_functions|Low-level Functions]].
------
{{page>index}}

View File

@ -0,0 +1,124 @@
{{page>index}}
-------
===== Thread Use Example =====
The following examples display a window with a label. An auxiliary thread
semi-regularly changes the text of the label. If you want to display a regular
animation, use the Ecore animators described in the
[[/program_guide/main_loop_pg|Main Loop guide]].
To use the ''ecore_thread_feedback()'' function:
**__1__**. Implement the GUI function that sets the text of a label and can be
called from the main thread.
<code c>
static void
_set_label_text(void *data, Ecore_Thread *thread __UNUSED__, void *msgdata)
{
   char buf[64];
   Evas_Object *label = data;

   snprintf(buf, sizeof(buf), "Tick %d", (int)(uintptr_t)msgdata);
   elm_object_text_set(label, buf);
}
</code>
**__2__**. Send the feedback from the other thread using the ''ecore_thread_feedback()''
function. The following function does nothing besides sending the feedback and
sleeping.
<code c>
static void
_long_function(void *data __UNUSED__, Ecore_Thread *thread)
{
   int iteration;

   // Change the text roughly every second. This is only an example; if you
   // want regular animations, use Ecore animators!
   for (iteration = 0; ; iteration++)
     {
        // Since this code runs in another thread, it must not touch the GUI
        // directly; instead, send the data to the main thread, which runs the
        // feedback function given when the thread was created
        ecore_thread_feedback(thread, (void *)(uintptr_t)iteration);
        // Sleep for roughly one second
        sleep(1);
     }
}
</code>
**__3__**. Create an end function that is called when the thread exits. In
this example, the end function is called only right before the application
exits, because the blocking function never returns on its own; with a blocking
function that eventually returns, the end function is triggered as soon as it
completes.
<code c>
static void
_end_func(void *data, Ecore_Thread *thread __UNUSED__)
{
   Evas_Object *label = data;

   elm_object_text_set(label, "Ticks over");
}
</code>
**__4__**. Call the ''ecore_thread_feedback_run()'' function to start the thread:
<code c>
ecore_thread_feedback_run(_long_function, _set_label_text, _end_func, NULL, label, EINA_FALSE);
</code>
To use the ''ecore_main_loop_thread_safe_call_sync()'' function:
**__1__**. Implement the GUI function that sets the text of a label and can be
called from the main thread. The function receives its data as a structure and
alternately displays ''Tick %d'' or ''Tock %d''.
<code c>
struct thd
{
   Evas_Object *label;
   Eina_Bool tick_not_tock;
   int iteration;
};

static void *
_set_label_text_tick_tock(void *data)
{
   char buf[64];
   struct thd *thd = data;

   snprintf(buf, sizeof(buf), "%s %d", (thd->tick_not_tock ? "Tick" : "Tock"), thd->iteration);
   elm_object_text_set(thd->label, buf);
   return NULL;
}
</code>
**__2__**. Use the ''ecore_main_loop_thread_safe_call_sync()'' function to
call the GUI function. Differentiate between the ticks and the tocks:
<code c>
static void
_long_function_tick_tock(void *data, Ecore_Thread *thread __UNUSED__)
{
   struct thd *thd = malloc(sizeof(struct thd));

   thd->label = data;
   for (thd->iteration = 0; ; (thd->iteration)++)
     {
        thd->tick_not_tock = EINA_TRUE;
        // Synchronous call: blocks until the GUI function has run in the main loop
        ecore_main_loop_thread_safe_call_sync(_set_label_text_tick_tock, thd);
        sleep(1);
        thd->tick_not_tock = EINA_FALSE;
        ecore_main_loop_thread_safe_call_sync(_set_label_text_tick_tock, thd);
        sleep(1);
     }
   // Never reached: the loop above runs until the application exits
   free(thd);
}
</code>
**__3__**. Start the thread through the ''ecore_thread_run()'' function:
<code c>
ecore_thread_run(_long_function_tick_tock, _end_func, NULL, label);
</code>
-------
{{page>index}}

View File

@ -0,0 +1,29 @@
{{page>index}}
===== Threading Programming Guide =====
Threads are concurrent execution environments that are lighter than full-blown
processes because they share some operating system resources. Threads make it
possible to do several things at the same time while using fewer resources
than processes and offering simpler synchronization and data exchange.
Moving a blocking operation to a separate thread keeps it from blocking the
event loop and keeps the user interface responsive. If the event loop is
blocked by a long-running callback, the application cannot update its
graphical user interface.
While threads can be useful, they are not always the best choice:
* The first rule of using threads is to avoid them as much as possible, as there are often better tools and approaches. For example, for network transfers, use ''Ecore_Con'', which integrates with the event loop and provides a callback-based API. Being able to use such an API means that specific support work has been done in the libraries; when no such support exists, threads are required.
* Use threads for CPU-intensive tasks and disk I/O. For example, a thread is the appropriate way to apply filters to an image or a video without blocking the interface.
* Use threads to take advantage of multiple available CPU cores, if the workload can be split into several units of work and spread across the cores. A typical example for an image processing application on a quad-core CPU is to process 4 images at once, each on 1 thread. For such tasks, the thread pool helps with the creation and scheduling of the threads, handling all the grunt work.
=== Table of Contents ===
* [[/program_guide/threading/thread_safety|Thread Safety]]
* [[/program_guide/threading/thread_pools|Thread Pools]]
* [[/program_guide/threading/thread_management_with_ecore|Thread Management with Ecore]]
* [[/program_guide/threading/low-level_functions|Low-level Functions]]
* [[/program_guide/threading/thread_use_example|Thread Use Example]]
--------
{{page>index}}