Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

October 29th, 2011, 4:23 pm

Quote (originally posted by outrun):
"I have no strong opinion on it. Typically the developer of the internals will write the tests? He then might as well *also* test internal structures. Black box is very good from a functional point of view: does it function as it is defined to? That's essential. However, you can't be sure without looking at the internals that your black-box test cases cover all internal code paths. E.g. I can write a complicated sorting routine with 175 internal if statements, and you will never know if the tests cover all paths."

I agree. That's why there are 3 kinds of tests, based on 'users' and developers, as documented in my link that explains each one. Most of the work/energy is in inter-system interface compliance and interoperability (e.g. Apollo 11 ==> 75%, and remember the infamous Mars lander s/w error, wrong units in that case). Of course, we are going to specify the interfaces *before* we write each module, or at the least give it our best shot, I hope. Will save much iteration.
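
To make outrun's point about hidden paths concrete, here is a minimal sketch (the function and tests are hypothetical, not from the thread): a black-box test checks only the contract, so it can pass while an internal branch goes unexecuted; only white-box knowledge, or a coverage tool such as gcov, exposes the gap.

#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

// Hypothetical sort with an internal fast path for small inputs.
void mySort(std::vector<int>& v)
{
    if (v.size() < 16)
    {   // Insertion-sort branch: never reached by the test below.
        for (std::size_t i = 1; i < v.size(); ++i)
            for (std::size_t j = i; j > 0 && v[j - 1] > v[j]; --j)
                std::swap(v[j - 1], v[j]);
    }
    else
    {
        std::sort(v.begin(), v.end());
    }
}

int main()
{
    // Black-box test: checks only the contract (output is sorted).
    std::vector<int> big(100);
    for (int i = 0; i < 100; ++i) big[i] = 100 - i;
    mySort(big);
    assert(std::is_sorted(big.begin(), big.end()));
    // The test passes, yet the size < 16 branch was never executed.
    return 0;
}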
Last edited by Cuchulainn on October 28th, 2011, 10:00 pm, edited 1 time in total.
 
Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

October 29th, 2011, 4:28 pm

While white-box testing can be applied at the unit, integration and system levels of the software testing process, it is usually done at the unit level. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it might not detect unimplemented parts of the specification or missing requirements.

Black-box testing is a method of software testing that tests the functionality of an application as opposed to its internal structures or workings (see white-box testing). Specific knowledge of the application's code/internal structure, and programming knowledge in general, is not required. Test cases are built around specifications and requirements, i.e., what the application is supposed to do. It uses external descriptions of the software, including specifications, requirements, and designs, to derive test cases. These tests can be functional or non-functional, though usually functional. The test designer selects valid and invalid inputs and determines the correct output. There is no knowledge of the test object's internal structure.

Grey-box testing (American spelling: gray-box testing) involves having knowledge of internal data structures and algorithms for purposes of designing the test cases, but testing at the user, or black-box, level. The tester is not required to have full access to the software's source code. Manipulating input data and formatting output do not qualify as grey-box, because the input and output are clearly outside of the "black box" that we are calling the system under test. This distinction is particularly important when conducting integration testing between two modules of code written by two different developers, where only the interfaces are exposed for test. However, modifying a data repository does qualify as grey-box, as the user would not normally be able to change the data outside of the system under test. Grey-box testing may also include reverse engineering to determine, for instance, boundary values or error messages.
Last edited by Cuchulainn on October 28th, 2011, 10:00 pm, edited 1 time in total.
 
Traden4Alpha
Posts: 23951
Joined: September 20th, 2002, 8:30 pm

Testing

October 29th, 2011, 11:18 pm

Requirements NFR04 and NFR05, and some of the user populations in BR01, would seem to imply that high performance is one of the long-term goals of this library. Does that imply that some form of performance testing needs to be part of the latter stages of the testing program for the computationally intensive components of the library?

Obviously correct, well-documented, functional code is priority #1, but some of the more professional and demanding usage scenarios will also need latency, throughput, and memory performance if the library is to be widely adopted beyond student and prototyping applications.
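
As a rough illustration of what such a performance test could look like (the kernel, run count, and 10 ms budget below are placeholders, not part of the requirements), a timing harness can report latency and throughput and fail the build when a component regresses past its budget:

#include <chrono>
#include <cstdio>
#include <numeric>
#include <vector>

// Hypothetical computational kernel standing in for a library component.
double kernel(const std::vector<double>& x)
{
    return std::accumulate(x.begin(), x.end(), 0.0);
}

int main()
{
    std::vector<double> data(1000000, 1.0);
    const int runs = 100;
    volatile double sink = 0.0;  // keeps the optimiser from removing the call

    std::chrono::steady_clock::time_point t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < runs; ++i) sink = kernel(data);
    const double ms = std::chrono::duration<double, std::milli>(
                          std::chrono::steady_clock::now() - t0).count() / runs;

    std::printf("mean latency: %.3f ms, throughput: %.1f Melem/s\n",
                ms, data.size() / (ms * 1000.0));
    return ms < 10.0 ? 0 : 1;  // hypothetical 10 ms budget: non-zero exit = regression
}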
 
Alan
Posts: 9783
Joined: December 19th, 2001, 4:01 am
Location: California

Testing

October 29th, 2011, 11:38 pm

Quote (originally posted by outrun):
"Thanks! Just to get a clear understanding: GBM is thus *not* defined by the SDE? Because I can't see why we can't set S(0) to -1 and get dS_t = -mu dt - sigma dW_t. It's probably that GBM is defined by more than the SDE: it satisfies the SDE, but so do other, non-GBM processes (negative GBM). GBM is defined as 'GBM is such that log(GBM) -> BM', right?"

Well, there's no official definition you can look up; by everybody's convention but yours, GBM is a solution of that SDE that starts on the positive axis (and so stays there). Sure, call the other one negative GBM if you want. The point is that, whatever half-line it starts on, it's going to stay there.
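
For the record, the half-line point follows directly from the closed-form solution of the SDE (a standard result, added here for completeness):

\[
dS_t = \mu S_t\,dt + \sigma S_t\,dW_t
\quad\Longrightarrow\quad
S_t = S_0 \exp\!\left(\left(\mu - \tfrac{1}{2}\sigma^2\right)t + \sigma W_t\right).
\]

Since the exponential factor is strictly positive, \(\operatorname{sign}(S_t) = \operatorname{sign}(S_0)\) for all \(t\): a path started at \(S_0 = -1\) stays on the negative half-line ("negative GBM"), and one started on the positive axis stays positive.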
 
quartz
Posts: 424
Joined: June 28th, 2005, 12:33 pm

Testing

November 1st, 2011, 3:07 pm

Quote: "Does that imply that some form of performance testing needs to be part of the latter stages of the testing program for the computationally-intensive components of the library?"

Definitely (isn't everybody doing it already, at least to some extent?). Here again we should decide how to handle automatic comparisons to previous versions: inside the normal tests, or as external ex-post checks? Just as for accuracy changes... (Maybe also keep accuracy information in the outputs, instead of just spitting out passed/not-passed when the error is below a threshold.) It'd be nice to finally have a standard finance HPC benchmark, just like any other field!
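
One sketch of the "keep the accuracy information" idea (the reference value and tolerance below are placeholders): log the achieved error next to the verdict, so accuracy drift between versions can be tracked ex post from the test output.

#include <cmath>
#include <cstdio>

int main()
{
    const double reference = 10.45058;  // hypothetical baseline from a previous release
    const double current   = 10.45061;  // hypothetical value produced by this build
    const double tol       = 1.0e-4;    // placeholder accuracy budget

    const double err = std::fabs(current - reference);
    // Report the magnitude, not just pass/fail, so the logs carry history.
    std::printf("abs error = %.3e (tol = %.1e): %s\n",
                err, tol, err <= tol ? "PASS" : "FAIL");
    return err <= tol ? 0 : 1;
}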
 
Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

February 5th, 2012, 4:13 pm

A feature some people wanted to have... I have now ported the 2-factor FDM/ADE to uBLAS, which has major advantages. Now we can use uBLAS and STL algorithms to perform numerical post-processing (NPP): error analysis and extrapolated solutions. The processing can be done in parallel and offline, and it avoids us having to look at numbers and graphs on the screen.

Input: 2 matrices, one for target values (e.g. exact), the other for actual (e.g. ADE) values. I can do:

1. Error at a hotspot.
2. Error in a region around the hotspot.
3. Error for the full matrix.
4. Adaptive mesh methods.

And display in Excel if you wish. Will post when PDE101 (2-factor models) is documented. This NPP will save lots of time.

// uBLAS is cool. STL-compatible and much more.
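
A minimal sketch of such an NPP comparison (the function names and stand-in data are mine, not from the ported code): given a target matrix and an actual matrix, compute the max-norm error at a hotspot, in a region around it, and over the full grid.

#include <boost/numeric/ublas/matrix.hpp>
#include <algorithm>
#include <cmath>
#include <cstdio>

namespace ublas = boost::numeric::ublas;

// Max |target - actual| over rows [r1, r2) x columns [c1, c2).
double regionError(const ublas::matrix<double>& target,
                   const ublas::matrix<double>& actual,
                   std::size_t r1, std::size_t r2,
                   std::size_t c1, std::size_t c2)
{
    double err = 0.0;
    for (std::size_t i = r1; i < r2; ++i)
        for (std::size_t j = c1; j < c2; ++j)
            err = std::max(err, std::fabs(target(i, j) - actual(i, j)));
    return err;
}

int main()
{
    ublas::matrix<double> exact(100, 100), ade(100, 100);
    for (std::size_t i = 0; i < exact.size1(); ++i)
        for (std::size_t j = 0; j < exact.size2(); ++j)
        {
            exact(i, j) = 1.0;           // stand-in for the exact solution
            ade(i, j)   = 1.0 + 1.0e-5;  // stand-in for the ADE solution
        }

    std::printf("hotspot: %.3e\n", regionError(exact, ade, 50, 51, 50, 51)); // 1. hotspot
    std::printf("region : %.3e\n", regionError(exact, ade, 40, 60, 40, 60)); // 2. around it
    std::printf("full   : %.3e\n", regionError(exact, ade, 0, 100, 0, 100)); // 3. full matrix
    return 0;
}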
Last edited by Cuchulainn on February 4th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

February 5th, 2012, 4:23 pm

The NPP idea can also be applied to 1d and 3d PDE. What about a feature list for MC? Or other features?
 
Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

February 7th, 2012, 5:47 pm

I have tested 4 types of 2-factor options using ADE and NPP. Here is the complete error report in each region. Now time to write up this PDE101 and NPP101. The idea is that it will work on other methods too. Saves a lot of debugging time.

edit: file too big! (NX = 800) Now NX = 100 etc. x = y ~ 1 is almost at S1, S2 = infinity. Next is automated testing, and have some coffee.
Attachments
npp2.zip
(155.85 KiB) Downloaded 20 times
Last edited by Cuchulainn on February 6th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

February 7th, 2012, 8:32 pm

Nice! Since we are working on a square, we have to look at the compatibility at these corners (some are far away as well). I took NX = NY = NT = 100, which is awfully conservative (normally NX >= 200 etc., but that makes a big file for Wilmott). This one is a spread option, I think. Might need to smooth the payoff, but let's see as we proceed. The geometry of x - y - K is important here: different regions. NPP could be a very generic post-processor utility.

(1,1) is infinity^2; I suppose that is the BIG error in your diagram? The payoff is like x/(1-x) with x ~ 1. x, y >= 0.95 is the EMPTY QUARTER.

At (0,0): lim_{x->0} (x - y - K) = lim_{y->0} (x - y - K), so the hunch is V(0,0) = K for the boundary condition at (0,0). I reckon this error will then disappear.
Last edited by Cuchulainn on February 6th, 2012, 11:00 pm, edited 1 time in total.
 
Alan
Posts: 9783
Joined: December 19th, 2001, 4:01 am
Location: California

Testing

February 7th, 2012, 11:48 pm

Axis labels, a scale, and a payoff definition would be helpful.
Last edited by Alan on February 7th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

February 8th, 2012, 6:25 am

Quote: "Interesting results, to be able to analyze errors. Very informative, might give new ideas about different transforms, right?"

Absolutely; different kinds of everything can be visualised (e.g. ADI vs ADE vs Soviet splitting, Janenko). And time-saving. As I mentioned, this approach replaces myriads of plots and tables. And, more importantly, it can be run as a batch job, offline. At x = y = 1 I truncate the domain, so a bit of smoothing can be done. I don't truncate the exact solution; if I had done that, the two solutions would be aligned. Another big test is ADI with domain truncation and linearity BC.
Last edited by Cuchulainn on February 7th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn
Posts: 59665
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam

Testing

February 8th, 2012, 6:28 am

Quote (originally posted by Alan): "Axis labels, a scale, and a payoff definition would be helpful."

I will be posting the full code PDE101 in the PDE thread. The payoffs I tested were those from Espen's book:

1. quotient
2. product
3. spread
4. exchange

Axes: the box (0,1)^2, with

x = S1/(S1 + c1)
y = S2/(S2 + c2)

Hotspot: (1/2, 1/2)
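
For concreteness, a tiny sketch of this transform and its inverse (the constant c1 = 100 below is a placeholder; the thread does not give the values of c1, c2). Note that S = c maps to x = 1/2, which is what puts the hotspot at (1/2, 1/2).

#include <cstdio>

// Map the half-line S in [0, infinity) onto x in [0, 1).
double toUnit(double S, double c)   { return S / (S + c); }

// Inverse map: recover S from x in [0, 1).
double fromUnit(double x, double c) { return c * x / (1.0 - x); }

int main()
{
    const double c1 = 100.0;           // placeholder scale constant
    const double S1 = 100.0;           // a spot equal to c1 lands on the hotspot
    const double x  = toUnit(S1, c1);  // = 1/2
    std::printf("x = %g, S recovered = %g\n", x, fromUnit(x, c1));
    return 0;
}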
Last edited by Cuchulainn on February 7th, 2012, 11:00 pm, edited 1 time in total.