Serving the Quantitative Finance Community

 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 25th, 2015, 9:30 am

Quote, originally posted by outrun: this is an interesting idea: job stealing. It looks like the Peer (Boss/Worker) model.

Good to give it a name. Here's a nice description in pthreads. Porting it to C++11 is a nice challenge. And Microsoft makes it even easier.
Last edited by Cuchulainn on August 24th, 2015, 10:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 26th, 2015, 4:12 am

The goals are increased throughput and load balancing. There is a central repository, but each worker has its own copy, hence no mutex is needed. It's like a bunch of workers cleaning an office: each works on his own piece of data without needing a specific Master. When they finish job 1 they can start immediately on job 2, etc. The workers can synchronise with a condition variable.
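A minimal C++11 sketch of this peer model, with names of my own choosing: each worker owns its partition of the data (so the data itself needs no mutex), and a condition variable serves as the rendezvous point.

```cpp
#include <algorithm>
#include <cassert>
#include <condition_variable>
#include <mutex>
#include <thread>
#include <vector>

// The lock protects only the "how many workers are done" counter,
// never the data: each worker owns its partition outright.
std::mutex m;
std::condition_variable cv;
int done = 0;

void worker(std::vector<int>& myData)
{
    std::sort(myData.begin(), myData.end()); // private data: no locking needed
    {
        std::lock_guard<std::mutex> lk(m);
        ++done;
    }
    cv.notify_one();
}

int run_workers(std::vector<std::vector<int>>& partitions)
{
    std::vector<std::thread> pool;
    for (auto& p : partitions)
        pool.emplace_back(worker, std::ref(p));

    { // rendezvous: wait until every worker has reported completion
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [&] { return done == (int)partitions.size(); });
    }
    for (auto& t : pool) t.join();
    return done;
}
```

Here `join()` alone would also suffice; the condition variable is shown because it generalises to cases where the boss wants to react as soon as *any* worker finishes.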
Last edited by Cuchulainn on August 25th, 2015, 10:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 26th, 2015, 8:24 am

Quote, originally posted by outrun: yes exactly, so they don't have to wait in line in front of the elevator to the management floor and bother them with questions about what to do next.

LOL, yes :D Like the A-Team.
Last edited by Cuchulainn on August 25th, 2015, 10:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 26th, 2015, 7:54 pm

PPL and C++11 support for futures
 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 27th, 2015, 9:01 am

Some initial tests/experiments on a number of C++ thread, concurrency and parallel libraries. Task: sort 4 vectors, each of size 10^8, on a 4-core machine (fastest = 1 response-time unit):

Sequential: 2
OMP loops: 1
OMP parallel sections: 2
PPL loops: 1
PPL tasks: 1
PPL futures: 2 (intuitive code)
C++11 threads: 1
C++11 futures: 2
C++11 packaged_task: (not so intuitive; could not get the code working...)

First impressions:
a. PPL supports many kinds of patterns and is well documented.
b. C++11 is for systems programming (e.g. Boost.Asio?); writing parallel algorithms will be a lot of work and non-scalable IMO.
c. PPL can do everything OMP can do, but is more scalable.
d. Don't yet understand _why_ the futures implementations don't speed up on this example.
e. C++11 futures need more.

If you can create a task graph of your algorithm, then it is relatively easy to implement it in PPL futures.
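For reference, the "C++11 threads" variant that scored 1 here is essentially one thread per vector. A minimal sketch (small vectors for illustration; the function name is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <thread>
#include <vector>

// One thread per vector; each thread sorts its own container, so there is
// no shared mutable state and no locking -- then the boss joins them all.
void parallel_sort_all(std::vector<std::vector<double>>& vs)
{
    std::vector<std::thread> pool;
    for (auto& v : vs)
        pool.emplace_back([&v] { std::sort(v.begin(), v.end()); });
    for (auto& t : pool)
        t.join();
}
```

On a 4-core machine with 4 vectors this should approach the ideal 4x speedup, since the sorts are independent and memory-bandwidth permitting.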
Last edited by Cuchulainn on August 26th, 2015, 10:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 27th, 2015, 9:19 am

Here is the C++11 futures code that is slow. Maybe it is being used wrongly?
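The original code is not preserved in the thread. Based on the diagnosis in the replies, the slow version most likely called `get()` inside the launch loop, which serialises the tasks. A hypothetical reconstruction (names are mine, not the original code):

```cpp
#include <algorithm>
#include <cassert>
#include <future>
#include <vector>

// Likely shape of the slow version: get() is called inside the launch
// loop, so each future is waited on before the next task is launched --
// the "parallel" version degenerates into sequential execution.
void sort_sequentially_disguised(std::vector<std::vector<int>>& vs)
{
    for (auto& v : vs)
    {
        auto fut = std::async(std::launch::async,
                              [&v] { std::sort(v.begin(), v.end()); });
        fut.get(); // blocks here: the next task cannot start until this one finishes
    }
}
```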
 
User avatar
Polter
Posts: 1
Joined: April 29th, 2008, 4:55 pm

C++ 11 Concurrency

August 27th, 2015, 10:29 am

> Here is the C++11 futures code that is slow. Maybe it is being used wrong??

Yeah, when you call the `get` member function you're forcing it to wait: http://en.cppreference.com/w/cpp/thread ... get

Instead, first launch the threads -- and then call `get` on all of them (I'd do both stages using a loop).

// Also -- remember about the launch policy: http://www.justsoftwaresolutions.co.uk/ ... mises.html
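The two-loop version described above could be sketched like this (the function name is mine):

```cpp
#include <algorithm>
#include <cassert>
#include <future>
#include <vector>

// Two-stage version: launch every task first, then get() them all.
// All sorts run concurrently; the get() loop is just the rendezvous.
void sort_in_parallel(std::vector<std::vector<int>>& vs)
{
    std::vector<std::future<void>> futs;
    for (auto& v : vs)                       // stage 1: launch everything
        futs.push_back(std::async(std::launch::async,
                       [&v] { std::sort(v.begin(), v.end()); }));
    for (auto& f : futs)                     // stage 2: wait for all results
        f.get();
}
```

Note the explicit `std::launch::async` policy: without it, the implementation may defer the task and run it lazily on the calling thread at `get()` time, which again kills the parallelism.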
 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 27th, 2015, 12:13 pm

Quote, originally posted by outrun: Yes, futi.get() blocks until the i-th thread has finished. First launch all threads with std::future, then call futi.get() on all of them; when the first thread finishes, the first get will stop blocking, and by that time the other 3 threads should be ready too.

Indeed. In this case, since there is no shared data, we can define the rendezvous point after firing up all the futures. Now I get the expected results, both for C++11 and PPL.
 
User avatar
Polter
Posts: 1
Joined: April 29th, 2008, 4:55 pm

C++ 11 Concurrency

August 27th, 2015, 2:59 pm

Futures make sense when you need to easily communicate results / synchronize operations (which includes expressing a possible dependency among them) between different threads (in particular, futures created using std::async can be used for returning results, where "returning" is a special case of "communication").

If you just want to launch a function, you may as well use threads, but that's a different use case. And you can still communicate results using threads, but it's less convenient and more error-prone (right tool, right job, and all that).

In short, from Bjarne's "Tour":

Quote: 13.7 Communicating Tasks
The standard library provides a few facilities to allow programmers to operate at the conceptual level of tasks (work to potentially be done concurrently) rather than directly at the lower level of threads and locks:
[1] future and promise for returning a value from a task spawned on a separate thread
[2] packaged_task to help launch tasks and connect up the mechanisms for returning a result
[3] async() for launching of a task in a manner very similar to calling a function.
These facilities are found in <future>
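Since packaged_task was the one facility reported as "could not get the code working" earlier in the thread, here is a minimal sketch of how the three pieces fit together (function names are mine):

```cpp
#include <cassert>
#include <future>
#include <thread>

int square(int x) { return x * x; }

// packaged_task wraps a callable and wires up the future for its result;
// unlike async(), *we* decide where and when the task actually runs
// (here: on a separate thread we manage ourselves).
int run_packaged(int x)
{
    std::packaged_task<int(int)> task(square);
    std::future<int> fut = task.get_future(); // grab the future before running
    std::thread t(std::move(task), x);        // execute the task on another thread
    int result = fut.get();                   // blocks until square(x) is ready
    t.join();
    return result;
}
```

The common pitfall is forgetting that the task must be moved (it is move-only) and that `get_future()` must be called before the task object is moved away.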
 
User avatar
Cuchulainn
Topic Author
Posts: 20253
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

C++ 11 Concurrency

August 27th, 2015, 4:55 pm

Quote, originally posted by Polter: Futures make sense when you need to easily communicate results / synchronize operations between different threads ... right tool, right job, and all that.

PPL has quantified many of these rules of thumb as parallel design patterns.
Last edited by Cuchulainn on August 26th, 2015, 10:00 pm, edited 1 time in total.