Intel(R) Threading Building Blocks Doxygen Documentation  version 4.2.3
tbb::interface9::internal::start_reduce< Range, Body, Partitioner > Class Template Reference

Task type used to split the work of parallel_reduce. More...

#include <parallel_reduce.h>

Inheritance diagram for tbb::interface9::internal::start_reduce< Range, Body, Partitioner >:
Collaboration diagram for tbb::interface9::internal::start_reduce< Range, Body, Partitioner >:

Public Member Functions

 start_reduce (const Range &range, Body *body, Partitioner &partitioner)
 Constructor used for root task. More...
 
 start_reduce (start_reduce &parent_, typename Partitioner::split_type &split_obj)
 Splitting constructor used to generate children. More...
 
 start_reduce (start_reduce &parent_, const Range &r, depth_t d)
 Construct right child from the given range as response to the demand. More...
 
void run_body (Range &r)
 Run body for range. More...
 
void offer_work (typename Partitioner::split_type &split_obj)
 spawn right task, serves as callback for partitioner More...
 
void offer_work (const Range &r, depth_t d=0)
 spawn right task, serves as callback for partitioner More...
 
- Public Member Functions inherited from tbb::task
virtual ~task ()
 Destructor. More...
 
internal::allocate_continuation_proxy & allocate_continuation ()
 Returns proxy for overloaded new that allocates a continuation task of *this. More...
 
internal::allocate_child_proxy & allocate_child ()
 Returns proxy for overloaded new that allocates a child task of *this. More...
 
void recycle_as_continuation ()
 Change this to be a continuation of its former self. More...
 
void recycle_as_safe_continuation ()
 Recommended to use, safe variant of recycle_as_continuation. More...
 
void recycle_as_child_of (task &new_parent)
 Change this to be a child of new_parent. More...
 
void recycle_to_reexecute ()
 Schedule this for reexecution after current execute() returns. More...
 
void set_ref_count (int count)
 Set reference count. More...
 
void increment_ref_count ()
 Atomically increment reference count. More...
 
int add_ref_count (int count)
 Atomically adds to reference count and returns its new value. More...
 
int decrement_ref_count ()
 Atomically decrement reference count and returns its new value. More...
 
void spawn_and_wait_for_all (task &child)
 Similar to spawn followed by wait_for_all, but more efficient. More...
 
void __TBB_EXPORTED_METHOD spawn_and_wait_for_all (task_list &list)
 Similar to spawn followed by wait_for_all, but more efficient. More...
 
void wait_for_all ()
 Wait for reference count to become one, and set reference count to zero. More...
 
task * parent () const
 task on whose behalf this task is working, or NULL if this is a root. More...
 
void set_parent (task *p)
 sets parent task pointer to specified value More...
 
task_group_context * context ()
 This method is deprecated and will be removed in the future. More...
 
task_group_context * group ()
 Pointer to the task group descriptor. More...
 
bool is_stolen_task () const
 True if task was stolen from the task pool of another thread. More...
 
state_type state () const
 Current execution state. More...
 
int ref_count () const
 The internal reference count. More...
 
bool __TBB_EXPORTED_METHOD is_owned_by_current_thread () const
 Obsolete, and only retained for the sake of backward compatibility. Always returns true. More...
 
void set_affinity (affinity_id id)
 Set affinity for this task. More...
 
affinity_id affinity () const
 Current affinity of this task. More...
 
void __TBB_EXPORTED_METHOD change_group (task_group_context &ctx)
 Moves this task from its current group into another one. More...
 
bool cancel_group_execution ()
 Initiates cancellation of all tasks in this cancellation group and its subordinate groups. More...
 
bool is_cancelled () const
 Returns true if the context has received cancellation request. More...
 
void set_group_priority (priority_t p)
 Changes priority of the task group this task belongs to. More...
 
priority_t group_priority () const
 Retrieves current priority of the task group this task belongs to. More...
 

Static Public Member Functions

static void run (const Range &range, Body &body, Partitioner &partitioner)
 
static void run (const Range &range, Body &body, Partitioner &partitioner, task_group_context &context)
 
- Static Public Member Functions inherited from tbb::task
static internal::allocate_root_proxy allocate_root ()
 Returns proxy for overloaded new that allocates a root task. More...
 
static internal::allocate_root_with_context_proxy allocate_root (task_group_context &ctx)
 Returns proxy for overloaded new that allocates a root task associated with user supplied context. More...
 
static void spawn_root_and_wait (task &root)
 Spawn task allocated by allocate_root, wait for it to complete, and deallocate it. More...
 
static void spawn_root_and_wait (task_list &root_list)
 Spawn root tasks on list and wait for all of them to finish. More...
 
static void enqueue (task &t)
 Enqueue task for starvation-resistant execution. More...
 
static void enqueue (task &t, priority_t p)
 Enqueue task for starvation-resistant execution on the specified priority level. More...
 
static void enqueue (task &t, task_arena &arena, priority_t p=priority_t(0))
 Enqueue task in task_arena. More...
 
static task &__TBB_EXPORTED_FUNC self ()
 The innermost task being executed or destroyed by the current thread at the moment. More...
 

Private Types

typedef finish_reduce< Body > finish_type
 

Private Member Functions

task * execute () __TBB_override
 Should be overridden by derived classes. More...
 
void note_affinity (affinity_id id) __TBB_override
 Update affinity info, if any. More...
 

Private Attributes

Body * my_body
 
Range my_range
 
Partitioner::task_partition_type my_partition
 
reduction_context my_context
 

Friends

template<typename Body_ >
class finish_reduce
 

Additional Inherited Members

- Public Types inherited from tbb::task
enum  state_type {
  executing, reexecute, ready, allocated,
  freed, recycle
}
 Enumeration of task states that the scheduler considers. More...
 
typedef internal::affinity_id affinity_id
 An id as used for specifying affinity. More...
 
- Protected Member Functions inherited from tbb::task
 task ()
 Default constructor. More...
 

Detailed Description

template<typename Range, typename Body, typename Partitioner>
class tbb::interface9::internal::start_reduce< Range, Body, Partitioner >

Task type used to split the work of parallel_reduce.

Definition at line 82 of file parallel_reduce.h.

Member Typedef Documentation

◆ finish_type

template<typename Range , typename Body , typename Partitioner >
typedef finish_reduce<Body> tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::finish_type
private

Definition at line 83 of file parallel_reduce.h.

Constructor & Destructor Documentation

◆ start_reduce() [1/3]

template<typename Range , typename Body , typename Partitioner >
tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::start_reduce ( const Range & range, Body * body, Partitioner & partitioner )
inline

Constructor used for root task.

Definition at line 98 of file parallel_reduce.h.

◆ start_reduce() [2/3]

template<typename Range , typename Body , typename Partitioner >
tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::start_reduce ( start_reduce< Range, Body, Partitioner > & parent_, typename Partitioner::split_type & split_obj )
inline

Splitting constructor used to generate children.

parent_ becomes left child. Newly constructed object is right child.

Definition at line 107 of file parallel_reduce.h.

    :   my_body(parent_.my_body),
        my_range(parent_.my_range, split_obj),
        my_partition(parent_.my_partition, split_obj),
        my_context(right_child)
    {
        my_partition.set_affinity(*this);
        parent_.my_context = left_child;
    }

References tbb::interface9::internal::left_child, and tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_context.

◆ start_reduce() [3/3]

template<typename Range , typename Body , typename Partitioner >
tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::start_reduce ( start_reduce< Range, Body, Partitioner > & parent_, const Range & r, depth_t d )
inline

Construct right child from the given range as response to the demand.

parent_ remains left child. Newly constructed object is right child.

Definition at line 118 of file parallel_reduce.h.

    :   my_body(parent_.my_body),
        my_range(r),
        my_partition(parent_.my_partition, split()),
        my_context(right_child)
    {
        my_partition.set_affinity(*this);
        my_partition.align_depth( d ); // TODO: move into constructor of partitioner
        parent_.my_context = left_child;
    }

References tbb::interface9::internal::left_child, and tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_context.

Member Function Documentation

◆ execute()

template<typename Range , typename Body , typename Partitioner >
task * tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::execute ( )
private virtual

Should be overridden by derived classes.

Implements tbb::task.

Definition at line 178 of file parallel_reduce.h.

    {
        my_partition.check_being_stolen( *this );
        if( my_context==right_child ) {
            finish_type* parent_ptr = static_cast<finish_type*>(parent());
            if( !itt_load_word_with_acquire(parent_ptr->my_body) ) { // TODO: replace by is_stolen_task() or by parent_ptr->ref_count() == 2???
                my_body = new( parent_ptr->zombie_space.begin() ) Body(*my_body,split());
                parent_ptr->has_right_zombie = true;
            }
        } else __TBB_ASSERT(my_context==root_task,NULL); // because left leaf spawns right leafs without recycling
        my_partition.execute(*this, my_range);
        if( my_context==left_child ) {
            finish_type* parent_ptr = static_cast<finish_type*>(parent());
            __TBB_ASSERT(my_body!=parent_ptr->zombie_space.begin(),NULL);
            itt_store_word_with_release(parent_ptr->my_body, my_body );
        }
        return NULL;
    }

References __TBB_ASSERT, tbb::aligned_space< T, N >::begin(), tbb::interface9::internal::finish_reduce< Body >::has_right_zombie, tbb::internal::itt_load_word_with_acquire(), tbb::internal::itt_store_word_with_release(), tbb::interface9::internal::left_child, tbb::interface9::internal::finish_reduce< Body >::my_body, parent, tbb::interface9::internal::right_child, tbb::interface9::internal::root_task, and tbb::interface9::internal::finish_reduce< Body >::zombie_space.


◆ note_affinity()

template<typename Range , typename Body , typename Partitioner >
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::note_affinity ( affinity_id id )
inline private virtual

Update affinity info, if any.

Reimplemented from tbb::task.

Definition at line 90 of file parallel_reduce.h.

    {
        my_partition.note_affinity( id );
    }

◆ offer_work() [1/2]

template<typename Range , typename Body , typename Partitioner >
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::offer_work ( typename Partitioner::split_type &  split_obj)
inline

spawn right task, serves as callback for partitioner

Definition at line 151 of file parallel_reduce.h.

    {
        task *tasks[2];
        allocate_sibling(static_cast<task*>(this), tasks, sizeof(start_reduce), sizeof(finish_type));
        new((void*)tasks[0]) finish_type(my_context);
        new((void*)tasks[1]) start_reduce(*this, split_obj);
        spawn(*tasks[1]);
    }

References tbb::interface9::internal::allocate_sibling().


◆ offer_work() [2/2]

template<typename Range , typename Body , typename Partitioner >
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::offer_work ( const Range & r, depth_t d = 0 )
inline

spawn right task, serves as callback for partitioner

Definition at line 159 of file parallel_reduce.h.

    {
        task *tasks[2];
        allocate_sibling(static_cast<task*>(this), tasks, sizeof(start_reduce), sizeof(finish_type));
        new((void*)tasks[0]) finish_type(my_context);
        new((void*)tasks[1]) start_reduce(*this, r, d);
        spawn(*tasks[1]);
    }

References tbb::interface9::internal::allocate_sibling().


◆ run() [1/2]

template<typename Range , typename Body , typename Partitioner >
static void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::run ( const Range & range, Body & body, Partitioner & partitioner )
inline static

Definition at line 128 of file parallel_reduce.h.

    {
        if( !range.empty() ) {
    #if !__TBB_TASK_GROUP_CONTEXT || TBB_JOIN_OUTER_TASK_GROUP
            task::spawn_root_and_wait( *new(task::allocate_root()) start_reduce(range,&body,partitioner) );
    #else
            // Bound context prevents exceptions from body to affect nesting or sibling algorithms,
            // and allows users to handle exceptions safely by wrapping parallel_for in the try-block.
            task_group_context context(PARALLEL_REDUCE);
            task::spawn_root_and_wait( *new(task::allocate_root(context)) start_reduce(range,&body,partitioner) );
    #endif /* __TBB_TASK_GROUP_CONTEXT && !TBB_JOIN_OUTER_TASK_GROUP */
        }
    }

References tbb::task::allocate_root(), and tbb::task::spawn_root_and_wait().

Referenced by tbb::parallel_reduce().


◆ run() [2/2]

template<typename Range , typename Body , typename Partitioner >
static void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::run ( const Range & range, Body & body, Partitioner & partitioner, task_group_context & context )
inline static

Definition at line 141 of file parallel_reduce.h.

    {
        if( !range.empty() )
            task::spawn_root_and_wait( *new(task::allocate_root(context)) start_reduce(range,&body,partitioner) );
    }

References tbb::task::allocate_root(), and tbb::task::spawn_root_and_wait().


◆ run_body()

template<typename Range , typename Body , typename Partitioner >
void tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::run_body ( Range &  r)
inline

Run body for range.

Definition at line 147 of file parallel_reduce.h.

    { (*my_body)( r ); }

Friends And Related Function Documentation

◆ finish_reduce

template<typename Range , typename Body , typename Partitioner >
template<typename Body_ >
friend class finish_reduce
friend

Definition at line 94 of file parallel_reduce.h.

Member Data Documentation

◆ my_body

template<typename Range , typename Body , typename Partitioner >
Body* tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_body
private

Definition at line 84 of file parallel_reduce.h.

◆ my_context

template<typename Range , typename Body , typename Partitioner >
reduction_context tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_context
private

◆ my_partition

template<typename Range , typename Body , typename Partitioner >
Partitioner::task_partition_type tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_partition
private

Definition at line 86 of file parallel_reduce.h.

◆ my_range

template<typename Range , typename Body , typename Partitioner >
Range tbb::interface9::internal::start_reduce< Range, Body, Partitioner >::my_range
private

Definition at line 85 of file parallel_reduce.h.


The documentation for this class was generated from the following file: parallel_reduce.h

Copyright © 2005-2019 Intel Corporation. All Rights Reserved.

Intel, Pentium, Intel Xeon, Itanium, Intel XScale and VTune are registered trademarks or trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

* Other names and brands may be claimed as the property of others.