
 
User avatar
quantmeh
Topic Author
Posts: 0
Joined: April 6th, 2007, 1:39 pm

LMM?

December 16th, 2011, 1:25 am

guys, how far are you in this project in terms of working term structure models? do you have LMM code?
 
User avatar
ZhuLiAn
Posts: 0
Joined: June 9th, 2011, 7:21 am

LMM?

December 16th, 2011, 9:14 am

yeah
 
User avatar
rmax
Posts: 374
Joined: December 8th, 2005, 9:31 am

LMM?

December 16th, 2011, 9:36 am

ZhuLiAn - does this mean you will contribute it to the framework?
 
User avatar
renorm
Posts: 1
Joined: February 11th, 2010, 10:20 pm

LMM?

December 17th, 2011, 2:22 pm

I wrote some LMM code (both plain and Longstaff-Schwartz) that can run almost as fast as CUDA (or even faster, depending on hardware), but can't publish it due to lack of time and some other reasons.
 
User avatar
Polter
Posts: 1
Joined: April 29th, 2008, 4:55 pm

LMM?

December 17th, 2011, 3:43 pm

Originally posted by renorm: "I wrote some LMM code (both plain and Longstaff-Schwartz) that can run almost as fast as CUDA (or even faster, depending on hardware)" :-)

Originally posted by renorm: "but can't publish it due to lack of time and some other reasons." :-(
 
User avatar
Cuchulainn
Posts: 22928
Joined: July 16th, 2004, 7:38 am

LMM?

December 17th, 2011, 3:56 pm

Originally posted by renorm (via Polter): "I wrote some LMM code (both plain and Longstaff-Schwartz) that can run almost as fast as CUDA (or even faster, depending on hardware), but can't publish it due to lack of time and some other reasons."

The product is available but not shipping?
 
User avatar
quantmeh
Topic Author
Posts: 0
Joined: April 6th, 2007, 1:39 pm

LMM?

December 17th, 2011, 9:09 pm

Originally posted by renorm: "I wrote some LMM code (both plain and Longstaff-Schwartz) that can run almost as fast as CUDA (or even faster, depending on hardware), but can't publish it due to lack of time and some other reasons."

If you can't share it with us, then it doesn't count.
 
User avatar
renorm
Posts: 1
Joined: February 11th, 2010, 10:20 pm

LMM?

December 18th, 2011, 7:18 am

I should have said "...can't publish it yet". Once this project gets started I will contribute my code.

In the context of this project, I am interested only in things that can't be done easily without C/C++. Right now there are several issues that need to be resolved:
1. An AVX-capable CPU/compiler/OS.
2. A C-API for LAPACK/BLAS (see the sketch below).
3. GSL.
4. TBB.

My code requires AVX, but many people don't have it yet. Let's wait a bit (6-12 months?). The C-API for LAPACK/BLAS is problematic for Windows users; the best option is to buy the Windows version of Intel Parallel Studio. Only a small subset of GSL is needed, and I am sure Cuch and team will come up with a good replacement. TBB shouldn't be a problem, but it is still another dependency.
Last edited by renorm on December 17th, 2011, 11:00 pm, edited 1 time in total.
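For reference, a minimal sketch of what that LAPACK/BLAS C-API looks like from C++, assuming lapacke.h (shipped with MKL 10.3+ or the reference LAPACKE build) is on the include path and a LAPACK/BLAS library is linked; the 2x2 system is just a made-up example.

    // Solve A x = b through the LAPACK C-API (LAPACKE).
    #include <lapacke.h>
    #include <cstdio>

    int main()
    {
        double A[4] = { 4.0, 1.0,     // 2x2 matrix, row-major storage
                        1.0, 3.0 };
        double b[2] = { 1.0, 2.0 };   // right-hand side, overwritten with the solution
        lapack_int ipiv[2];           // pivot indices from the LU factorisation

        lapack_int info = LAPACKE_dgesv(LAPACK_ROW_MAJOR, 2, 1, A, 2, ipiv, b, 1);
        if (info != 0) { std::printf("dgesv failed: %d\n", (int)info); return 1; }

        std::printf("x = (%f, %f)\n", b[0], b[1]);
        return 0;
    }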
 
User avatar
Cuchulainn
Posts: 22928
Joined: July 16th, 2004, 7:38 am

LMM?

December 18th, 2011, 10:45 am

Quote: "Only a small subset of GSL is needed."

A nice, compact library with a standardised interface for doing algebra is possible. Even now, the Boost Math Toolkit has a lot of overlap with GSL, and building matrix algorithms on top of uBLAS is easy and transparent.

What are the most important functions you use in GSL?
Last edited by Cuchulainn on December 17th, 2011, 11:00 pm, edited 1 time in total.
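To make the GSL-overlap point concrete, a small sketch (assuming only a Boost install) of special functions that both GSL and the header-only Boost Math Toolkit provide:

    #include <boost/math/special_functions/erf.hpp>
    #include <boost/math/special_functions/gamma.hpp>
    #include <boost/math/special_functions/bessel.hpp>
    #include <iostream>

    int main()
    {
        std::cout << boost::math::erf(1.0) << "\n";             // error function
        std::cout << boost::math::tgamma(4.5) << "\n";          // gamma function
        std::cout << boost::math::cyl_bessel_j(0, 2.0) << "\n"; // Bessel function J_0(2)
        return 0;
    }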
 
User avatar
Cuchulainn
Posts: 22928
Joined: July 16th, 2004, 7:38 am

LMM?

December 18th, 2011, 10:45 am

double.
Last edited by Cuchulainn on December 17th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
renorm
Posts: 1
Joined: February 11th, 2010, 10:20 pm

LMM?

December 18th, 2011, 1:14 pm

Not sure about the nice part, but LAPACK/BLAS and the corresponding C-API (as found in MKL 10.3 and newer) are the standardised interface for linear algebra.

uBLAS is no replacement for LAPACK/BLAS. As far as I know it doesn't work well with Intel/AMD/ATLAS BLAS. uBLAS is OK for small matrices, but too slow for large ones: it is about 50 times slower than Intel/AMD/ATLAS BLAS and doesn't provide LAPACK functionality. It would be a very bad idea to reinvent LU/QR/SVD/OLS etc. instead of using LAPACK. C++ wrappers over LAPACK/BLAS are OK, but reinventing the core functionality is a really, really bad idea.

Quote: "What are the most important functions you use in GSL?"

Quadratures and some special functions.

Btw, the Boost Math Toolkit isn't designed for high performance. For example, the inverse Gaussian CDF from Boost is several times slower than Acklam's method found in QuantLib. Acklam's relative error is about 1e-9, which is more than enough for MC. IIRC, MKL's implementation is about 50 times faster than Boost. I think that is a lot, especially in LMM-type simulations. The same could be said about other distributions (Poisson, discrete distributions, etc.).
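For context, a sketch of the operation being benchmarked: turning uniforms into normal draws through the inverse CDF. Here boost::math::quantile stands in for the inverse normal CDF; Acklam's rational approximation (as used in QuantLib) is not reproduced, and the helper name and seed handling are illustrative only.

    #include <boost/math/distributions/normal.hpp>
    #include <limits>
    #include <random>
    #include <vector>

    // Draw n standard normals by inverting the normal CDF at uniform variates.
    std::vector<double> draw_normals(std::size_t n, unsigned long seed)
    {
        boost::math::normal_distribution<> stdnorm(0.0, 1.0);
        std::mt19937_64 gen(seed);
        // open interval (0, 1) keeps the quantile finite
        std::uniform_real_distribution<double> u01(std::numeric_limits<double>::min(), 1.0);

        std::vector<double> z(n);
        for (std::size_t i = 0; i < n; ++i)
            z[i] = boost::math::quantile(stdnorm, u01(gen)); // inverse normal CDF
        return z;
    }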
 
User avatar
Cuchulainn
Posts: 22928
Joined: July 16th, 2004, 7:38 am

LMM?

December 18th, 2011, 1:56 pm

Originally posted by renorm: "uBLAS is no replacement for LAPACK/BLAS. [...] Btw, the Boost Math Toolkit isn't designed for high performance."

We should then provide a choice for the user:
1. Portable, easy-to-install libraries, not necessarily optimised. BTW, the Math Toolkit's focus was accuracy, initially.
2. Specific, possibly platform-dependent libraries (doesn't MKL cost $?) with a bigger footprint and learning curve (e.g. the big challenge of LAPACK on Windows).
3. There is a real need for lightweight libraries that don't drag in the kitchen sink in order to be used (e.g. for novice C++ users).

For MC and speed you need 2. But not always. More speedup <==> more effort.

Quote: "It would be a very bad idea to reinvent LU/QR/SVD/OLS etc. instead of using LAPACK."

Why a bad idea? LU is about 8 lines of code using uBLAS. Here is a scenario: a casual C++ user creates an accurate POC app using local LU and then hands the code to C++ system integrators for acceptance testing, just by switching the interfaces.

Quote: "uBLAS is OK for small matrices."

300x300?

Quote: "Quadratures and some special functions."

Outrun has some remarks on Boost Quadrature.
Last edited by Cuchulainn on December 17th, 2011, 11:00 pm, edited 1 time in total.
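A sketch of the "LU is about 8 lines of uBLAS" claim, assuming only Boost.uBLAS (the helper name is illustrative): convenient and portable, though, as renorm notes, not competitive with an optimised LAPACK on large matrices.

    #include <boost/numeric/ublas/matrix.hpp>
    #include <boost/numeric/ublas/vector.hpp>
    #include <boost/numeric/ublas/lu.hpp>

    namespace ublas = boost::numeric::ublas;

    // Factorise a copy of A in place and overwrite b with the solution of A x = b.
    bool lu_solve(ublas::matrix<double> A, ublas::vector<double>& b)
    {
        ublas::permutation_matrix<std::size_t> pm(A.size1());
        if (ublas::lu_factorize(A, pm) != 0)   // non-zero return means A is singular
            return false;
        ublas::lu_substitute(A, pm, b);        // forward/back substitution
        return true;
    }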
 
User avatar
renorm
Posts: 1
Joined: February 11th, 2010, 10:20 pm

LMM?

December 18th, 2011, 3:59 pm

Quote: "Portable, easy-to-install libraries, not necessarily optimised. BTW, the Math Toolkit's focus was accuracy, initially."

Is anyone planning to run this project on anything but x86? And what do you mean by "easy to install"? The LAPACK/BLAS interface must be available to developers. All serious linear algebra packages can link to the standard BLAS interface, so "not necessarily optimized" doesn't expand our choices. uBLAS can't link to an optimized BLAS, but that shouldn't prevent anyone from using it, since uBLAS is included with Boost.

Quote: "Specific, possibly platform-dependent libraries (doesn't MKL cost $?) with a bigger footprint and learning curve (e.g. the big challenge of LAPACK on Windows)."

Let's leave buzzwords alone and keep things straight. MKL is x86, and so is Windows. MKL, ATLAS and AMD's library all provide the standard LAPACK/BLAS interface.

Quote: "There is a real need for lightweight libraries that don't drag in the kitchen sink in order to be used (e.g. for novice C++ users)."

If novice C++ users can learn Boost, they should have no problem with a basic C-API such as LAPACK/BLAS.

Quote: "LU is about 8 lines of code using uBLAS."

How about QR, OLS, SVD, etc.? Why would one want to re-implement all that? With all due respect, I don't think you can get it right the first time, or the second, or the tenth. LAPACK/BLAS is the de facto industry standard; it took many years for LAPACK to get to its current state.

Quote: "300x300?"

Smaller than that.
Last edited by renorm on December 17th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
Cuchulainn
Posts: 22928
Joined: July 16th, 2004, 7:38 am

LMM?

December 18th, 2011, 4:35 pm

Well, I have programmed QR, SVD etc. before (in Fortran), and it is part of graduate numerical analysis studies. No problem, and we only need a small subset.

Quote: "Is anyone planning to run this project on anything but x86? And what do you mean by 'easy to install'?"

For MFE/MSc students. Can we have this Style? I think x86 on Windows is popular.

Quote: "If novice C++ users can learn Boost, they should have no problem with a basic C-API such as LAPACK/BLAS."

Maybe I am missing something, *but* a LAPACK install on Windows is NOT easy.

Quote: "Specific, possibly platform-dependent libraries (doesn't MKL cost $?) with a bigger footprint and learning curve (e.g. the big challenge of LAPACK on Windows)."

This is still an issue. 'Footprint' ==> we need the full package; 'learning curve' is known to each cost centre. If it takes 2 weeks to learn something just to use QR, then you can use PLAN B.
Last edited by Cuchulainn on December 17th, 2011, 11:00 pm, edited 1 time in total.
 
User avatar
renorm
Posts: 1
Joined: February 11th, 2010, 10:20 pm

LMM?

December 18th, 2011, 6:26 pm

Quote: "Well, I have programmed QR, SVD etc. before (in Fortran), and it is part of graduate numerical analysis studies. No problem, and we only need a small subset."

Sorry, but that isn't very convincing. Can you deliver an SVD implementation that is not 50 times slower than MKL? And who decides which subset we need?

Quote: "Maybe I am missing something, *but* a LAPACK install on Windows is NOT easy."

Learning curve and installation are different things; let's keep them separate. LAPACK on Windows is easy if you use the Intel or AMD libs. Both run well on the competitor's CPUs, and the AMD libs are free and just as good as Intel's for basic stuff such as LAPACK.

Quote: "This is still an issue. 'Footprint' ==> we need the full package; 'learning curve' is known to each cost centre."

Let's reinvent LAPACK just because there are some functions that won't be needed?

Quote: "If it takes 2 weeks to learn something just to use QR, then you can use PLAN B."

Again, not convincing. If someone needs 2 weeks to learn a simple C API such as LAPACK, then he has no business developing a quant library. As far as the learning curve is concerned, I would worry more about dragging certain things from Boost or fancy design patterns into the numerical layer.
Last edited by renorm on December 17th, 2011, 11:00 pm, edited 1 time in total.
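For comparison, the kind of "simple C API" call being discussed, sketched under the same assumption as before (lapacke.h from MKL 10.3+ or the reference LAPACKE build): an SVD of a made-up 3x2 matrix via LAPACKE_dgesvd.

    #include <lapacke.h>
    #include <cstdio>

    int main()
    {
        double A[6]  = { 1.0, 0.0,    // 3x2 matrix, row-major storage
                         0.0, 2.0,
                         0.0, 0.0 };
        double s[2];                  // singular values
        double U[9], Vt[4];           // U is 3x3, V^T is 2x2 ('A' = all columns/rows)
        double superb[1];             // workspace for unconverged superdiagonal elements

        lapack_int info = LAPACKE_dgesvd(LAPACK_ROW_MAJOR, 'A', 'A',
                                         3, 2, A, 2, s, U, 3, Vt, 2, superb);
        if (info != 0) { std::printf("dgesvd failed: %d\n", (int)info); return 1; }

        std::printf("singular values: %f %f\n", s[0], s[1]);
        return 0;
    }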