kevin08
Topic Author
Posts: 0
Joined: October 24th, 2008, 11:42 pm

Boost mutex types

December 23rd, 2009, 11:10 am

According to the TBB manual, the library supports scalable, fair, spin, recursive and yield-or-block mutex types. Is there a similar description of the Boost mutex types? Thank you.
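For concreteness, the mutex types I can find in the Boost.Thread documentation are roughly the ones below; the type names are Boost's, but the scoped-lock snippet is only my own sketch of usage, and it says nothing about how each one waits, which is what I am really after:

#include <boost/thread/mutex.hpp>
#include <boost/thread/recursive_mutex.hpp>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

boost::mutex m;              // plain exclusive lock
boost::timed_mutex tm;       // exclusive lock that also supports timed_lock()
boost::recursive_mutex rm;   // may be re-locked by the thread that already owns it
boost::shared_mutex sm;      // multiple readers / single writer

void examples()
{
    boost::lock_guard<boost::mutex> g(m);              // scoped exclusive lock
    boost::unique_lock<boost::recursive_mutex> u(rm);  // movable scoped lock
    boost::shared_lock<boost::shared_mutex> r(sm);     // reader (shared) lock
    // a writer would take boost::unique_lock<boost::shared_mutex> instead
}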
 
Cuchulainn
Posts: 22937
Joined: July 16th, 2004, 7:38 am

Boost mutex types

December 23rd, 2009, 11:14 am

If you are looking for a 20-page PDF on how to use them, I think not. I reckon that boost.org is the only choice at this moment. It's sparse, to be sure.
 
kevin08
Topic Author
Posts: 0
Joined: October 24th, 2008, 11:42 pm

Boost mutex types

December 23rd, 2009, 3:54 pm

According to the manual, tbb::mutex sleeps while it is waiting, but tbb::spin_mutex doesn't sleep. I only need to know how the Boost mutex types behave while waiting to acquire a lock.
 
Cuchulainn
Posts: 22937
Joined: July 16th, 2004, 7:38 am

Boost mutex types

December 23rd, 2009, 4:53 pm

Quote, originally posted by kevin08: "According to the manual, tbb::mutex sleeps while it is waiting, but tbb::spin_mutex doesn't sleep. I only need to know how the Boost mutex types behave while waiting to acquire a lock."

I have not yet found a spin wait in Boost, but it could be emulated with sleep(0), which preempts the running thread and puts it into a wait state. A spin wait, in contrast, means that you expect the thread to get the lock 'very soon'. I am not sure how many resources it uses while spinning.
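To make the emulation idea concrete, something along the following lines is what I have in mind. This is a hand-rolled sketch (it assumes C++11 <atomic> plus Boost.Thread for yield()), not a facility that Boost itself ships:

#include <atomic>
#include <boost/thread/thread.hpp>

// Busy-waits on a flag; yields the remainder of the time slice between
// attempts, which is the sleep(0)-style back-off discussed above.
class spin_lock
{
    std::atomic<bool> locked_;
public:
    spin_lock() : locked_(false) {}

    void lock()
    {
        while (locked_.exchange(true, std::memory_order_acquire))
            boost::this_thread::yield();   // give up the time slice and retry
    }

    void unlock()
    {
        locked_.store(false, std::memory_order_release);
    }
};

A 'pure' spin would simply loop without the yield(); whether the yielding or the pure version burns more CPU under contention is exactly the kind of thing to measure rather than guess.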
Last edited by Cuchulainn on December 22nd, 2009, 11:00 pm, edited 1 time in total.
 
bojan
Posts: 0
Joined: August 8th, 2008, 5:35 am

Boost mutex types

December 23rd, 2009, 5:20 pm

See http://www.open-std.org/jtc1/sc22/wg21/ ... Boost.Thread should basically implement what is presented in this paper.
 
Cuchulainn
Posts: 22937
Joined: July 16th, 2004, 7:38 am

Boost mutex types

December 23rd, 2009, 6:54 pm

I had a look at the full doc on mutex (it's one option for locking), but there is nothing on spin locking (as in C# or TBB, as you state). Is your question general, or is there a specific problem you want to solve? I reckon TBB is closer to Intel hardware, while Boost.Thread needs to be portable.
Last edited by Cuchulainn on December 22nd, 2009, 11:00 pm, edited 1 time in total.
 
kevin08
Topic Author
Posts: 0
Joined: October 24th, 2008, 11:42 pm

Boost mutex types

December 24th, 2009, 10:42 pm

I am interested in the locking behavior of each Boost mutex type. The difference between spin and sleep locking can be big. Also, the TBB manual states that some mutex types are scalable; I am not 100% sure what that means, though. Can we say anything about the Boost mutex types?
Last edited by kevin08 on December 23rd, 2009, 11:00 pm, edited 1 time in total.
 
kevin08
Topic Author
Posts: 0
Joined: October 24th, 2008, 11:42 pm

Boost mutex types

December 25th, 2009, 8:19 am

AFAIK, TBB is a template library which works on any platform, as long as your compiler supports it.
 
kevin08
Topic Author
Posts: 0
Joined: October 24th, 2008, 11:42 pm

Boost mutex types

December 25th, 2009, 12:07 pm

By the way, I am trying to marry QuantLib and TBB. My early experience with TBB is very positive. Do Java or C# have similar high-level constructs for concurrent programming?
 
Cuchulainn
Posts: 22937
Joined: July 16th, 2004, 7:38 am

Boost mutex types

December 26th, 2009, 11:22 am

Quote, originally posted by kevin08: "By the way, I am trying to marry QuantLib and TBB. My early experience with TBB is very positive. Do Java or C# have similar high-level constructs for concurrent programming?"

Will you adopt an incremental approach, or do you feel parallel design (patterns) is needed? C# has a Thread class and notification/synchronisation mechanisms. At a higher level, the new TPL (Task Parallel Library) looks very promising; I would not rule it out. See the example on 'parallel for' below. The book by Doug Lea on concurrent programming in Java is very good, especially for design issues.

Quote: "The TBB manual states that some mutex types are scalable. I am not 100% sure what that means."

It refers to the ability to increase performance as the number of processors increases (see the isoefficiency metric).

Quote: "I am interested in the locking behavior of each Boost mutex type. The difference between spin and sleep locking can be big."

Even with a sleep(0) or a yield()? Is there anything (anecdotal or otherwise) documented on this? Whatever the case, experimentation is advisable.
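To put a formula on the 'scalable' remark: the usual back-of-envelope bound is Amdahl's law (not specific to TBB or Boost), with s the serial fraction of the program and N the number of processors:

S(N) = \frac{1}{s + (1 - s)/N}, \qquad \lim_{N \to \infty} S(N) = \frac{1}{s}

For example, s = 0.10 gives a maximal speedup of 1/0.10 = 10 no matter how many processors are added; the Gustafson-Barsis law and the isoefficiency metric refine this picture when the problem size grows with N.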
Last edited by Cuchulainn on December 25th, 2009, 11:00 pm, edited 1 time in total.
 
kevin08
Topic Author
Posts: 0
Joined: October 24th, 2008, 11:42 pm

Boost mutex types

December 26th, 2009, 5:16 pm

What is an incremental approach? No patterns yet. The TBB parallel algorithms seem to be adequate so far. If possible, I will try not to use threads explicitly.
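For what it is worth, the kind of TBB usage I mean is the classic parallel_for with a body object, along the lines of the sketch below; priceOne() is just a hypothetical stand-in for a per-instrument QuantLib pricing call:

#include <cstddef>
#include <vector>
#include <tbb/parallel_for.h>
#include <tbb/blocked_range.h>

double priceOne(std::size_t i);   // hypothetical per-instrument pricer

// Body object that TBB's scheduler applies to sub-ranges of the index space.
struct PriceBody
{
    std::vector<double>* out;
    void operator()(const tbb::blocked_range<std::size_t>& r) const
    {
        for (std::size_t i = r.begin(); i != r.end(); ++i)
            (*out)[i] = priceOne(i);   // each index is written by exactly one task
    }
};

void priceAll(std::vector<double>& results)
{
    PriceBody body;
    body.out = &results;
    tbb::parallel_for(tbb::blocked_range<std::size_t>(0, results.size()), body);
}

No explicit threads, locks or partitioning appear in the calling code; TBB decides how to split the range.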
 
Cuchulainn
Posts: 22937
Joined: July 16th, 2004, 7:38 am

Boost mutex types

December 27th, 2009, 1:01 pm

Quote, originally posted by kevin08: "What is an incremental approach?"

One particular scenario:

// Some Guidelines on Parallelising C++ Code (v 0.1)

When developing multi-threaded applications, one option is to rewrite single-threaded code to take advantage of the available multi-core and multi-processor hardware. To this end, we decide on a multi-threaded version of the original sequential code. The *goals* of this migration project are:

. Improved speedup: the program should run faster on a CPU with N processors than on a CPU with one processor. Linear speedup on a CPU with N processors means that the program runs N times faster than the same program on a single-processor CPU. A program whose serial fraction is greater than 10% has a maximal speedup of 10 in the limit as the number N of processors tends to infinity!

. Serial equivalence: does the multi-threaded program give the same results and output as the (original) single-threaded program? This can be a hard requirement, for example in option pricing code.

How do we proceed, and how long should the porting process take? First of all, we need to decide whether it is *feasible* to port our sequential code. There are some issues to address before we embark on this project:

. The current application code may be so badly documented and/or designed that there is no point trying to port it. It will take too long to understand what the intent is, in which case it is better to design the solution again using parallel design techniques and threading code.

. What is the program's serial fraction? This is defined as the fraction of the program's execution time taken up by the parts that must execute serially. Knowing the serial fraction allows us to determine (or at least put an upper bound on) the speedup, using the Amdahl and Gustafson-Barsis laws. You should use these laws as a back-of-envelope estimate for the project GO/NO-GO decision.

. Micro-optimisation: in some situations we may find ourselves trying to parallelise code that runs almost as fast in the single-threaded case. Multi-threaded code is suitable for compute-intensive algorithms and problems with large data sets. There is an application size threshold below which single-threaded code has better speedup than multi-threaded code!

Having determined that the effort of producing a multi-threaded solution is worth the returns, we need to define a process that will realise the dual goals of speedup and serial equivalence. We adopt an *incremental* approach by step-wise migration of sequential code to multi-threaded code. There are no golden rules here, but the following guidelines and checks may be useful:

. Create a back-of-envelope data dependency graph. Do you know what the main tasks in the program are and what their dependencies are?

. What are the main algorithms in the program? What are the 'glue' data structures that connect these algorithms?

. Which patterns are being used, for example dataflow, object-oriented (hierarchical), event-driven?

Answering these questions will give an indication of where to begin with code migration. Some issues are:

1) Many technical applications contain for-loops. These are ideal candidates for parallelisation (in principle). The caveats are:

a) Silent data corruption: this corresponds to shared data being accessed by multiple threads, leading to non-deterministic behaviour. Data needs to be protected.

b) Cache problems: the data in a for-loop may not fit into cache. Consider loop optimisation methods, for example loop fusion, fission and unrolling.

c) Avoid critical sections in loops; use reduction variables (see the sketch after this list).

2) Non-thread-safe code and libraries: for example, a Monte Carlo option pricer that employs non-thread-safe random-number generators will give different results in the single-threaded and multi-threaded cases when different threads are used to generate these numbers. It is possible to place the code in a critical block, but this has disastrous performance implications in double for-loops. A possible solution is some form of loop fission, or a dataflow pattern such as Producer-Consumer.

3) Round-off errors and reduction variables: when parallelising reductions on floating-point types, the result in the multi-threaded case may not be the same as in the single-threaded case.

4) Last but not least, knowledge of the underlying hardware is essential if good speedup is wanted, especially L1, L2 and L3 cache behaviour, pinning threads to processors, etc.
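As a concrete illustration of points 1c and 3, here is a sketch of a sum done with tbb::parallel_reduce instead of a critical section inside the loop; payoff() is a hypothetical per-path function:

#include <cstddef>
#include <tbb/parallel_reduce.h>
#include <tbb/blocked_range.h>

double payoff(std::size_t path);   // hypothetical per-path payoff

// Reduction body: each task accumulates into its own 'sum' (no locking),
// and partial sums are combined in join().
struct SumPayoffs
{
    double sum;
    SumPayoffs() : sum(0.0) {}
    SumPayoffs(SumPayoffs&, tbb::split) : sum(0.0) {}    // splitting constructor
    void operator()(const tbb::blocked_range<std::size_t>& r)
    {
        for (std::size_t i = r.begin(); i != r.end(); ++i)
            sum += payoff(i);                            // thread-local partial sum
    }
    void join(const SumPayoffs& rhs) { sum += rhs.sum; } // combine partial results
};

double sumAllPayoffs(std::size_t nPaths)
{
    SumPayoffs body;
    tbb::parallel_reduce(tbb::blocked_range<std::size_t>(0, nPaths), body);
    return body.sum;
}

Note the point 3 caveat in action: the grouping of the additions depends on how the range is split, so the floating-point result can differ in the last bits from the serial sum, and potentially from run to run.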
Last edited by Cuchulainn on December 26th, 2009, 11:00 pm, edited 1 time in total.
 
kevin08
Topic Author
Posts: 0
Joined: October 24th, 2008, 11:42 pm

Boost mutex types

December 29th, 2009, 9:15 am

Thanks Cuchulainn, that was very illuminating. Happy New Year to you all, Wilmotters.