Serving the Quantitative Finance Community

 
Cuchulainn
Posts: 20255
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

The ultimate Monte Carlo framework

January 9th, 2012, 6:59 pm

Quote: "1. How do you provide SDE info? An Ito SDE with two functions (drift function, diffusion function)? If so, we need to add a third for the jumps."

I would reuse the code in the GBM SDE by defining a composition of the former with code/classes for pure jumps. A tried and tested method in general.

Quote: "2. We should have general result aggregators. If we estimate means, then we can assume independence and use the central limit theorem to aggregate means based on the relative number of samples in each batch."

Instead of shovelling only the value at t = T over to MCPostProcess, shovel the whole path array over to several (multi-threaded) MCPostProcess instances. Just add new slots. The right slots (e.g. Asian) must be coupled with the correct payoff class (how?).

Quote: "3. Interesting, how would we validate a constantly changing result? With a likelihood or distribution test?"

One scenario: take the idea in Kloeden et al. (create a new MCMIS component, again a slot) for finding a confidence interval for the L1 error between the approximate solution and the exact one: M batches, each batch having N simulations, then use Student's t-distribution to estimate the variance of the batch averages.

This is a MIS system and the analogy is obvious (batch == division, simulation == department, path == individual resource). I would use loop-level parallelism, ideally with TRNG, and just let it run to completion. In all cases the code is non-intrusive.

Quote: "4. ?" // Code change!
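The batch-means confidence interval described above can be sketched as follows. This is a minimal sketch, not framework code: BatchStats and batchConfidenceInterval are hypothetical names, and the Student's t critical value is passed in rather than computed.

```cpp
#include <cmath>
#include <numeric>
#include <vector>

struct BatchStats { double mean; double halfWidth; };

// M batches of N simulations each produce M batch averages; treating
// them as (approximately) iid normal gives a Student's t confidence
// interval for the true mean: mean +/- tQuantile * s / sqrt(M),
// where s is the sample standard deviation of the batch averages.
BatchStats batchConfidenceInterval(const std::vector<double>& batchMeans,
                                   double tQuantile)
{
    const double M = static_cast<double>(batchMeans.size());
    const double mean =
        std::accumulate(batchMeans.begin(), batchMeans.end(), 0.0) / M;
    double ss = 0.0;
    for (double b : batchMeans) ss += (b - mean) * (b - mean);
    const double stdErr = std::sqrt(ss / (M - 1.0)) / std::sqrt(M);
    return { mean, tQuantile * stdErr };
}
```

For M = 10 batches at 95% confidence one would pass tQuantile = 2.262, the two-sided t critical value for 9 degrees of freedom.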
Last edited by Cuchulainn on January 8th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn

January 10th, 2012, 9:19 am

Quote (outrun): "Is this how it goes?
1) Generate a sample path.
2) Send the path to MCPostProcess, and convert the path into a derivative price sample.
3) Send that 'derivative price sample' (a float) to some aggregator, in general one that computes the mean.
4) Goto 1.
1) and 2) can be done in parallel using independent seeds of the RNG; 3) can be done per computation unit, and then at the end we can merge all the aggregates into a single aggregate. Something like that?"

Sounds very good to me.

In fact, I think a given single path can be used to generate payoffs for multiple option types, which improves performance. One issue, as mentioned, is the worry that RN sequences are interrelated.

At this stage, I think MCMediator is a candidate for the concepts approach?
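The "merge all the aggregates into a single aggregate" step can be sketched with a mergeable accumulator. MeanAggregator is a hypothetical name, not an existing framework class:

```cpp
// Each computation unit owns one MeanAggregator; at the end the partial
// aggregates are merged. Merging is exact for the mean because only the
// running sum and the sample count need to be combined.
struct MeanAggregator {
    long long n = 0;
    double sum = 0.0;
    void add(double sample)               { ++n; sum += sample; }
    void merge(const MeanAggregator& rhs) { n += rhs.n; sum += rhs.sum; }
    double mean() const                   { return sum / static_cast<double>(n); }
};
```

The variance (and hence a standard error) can be merged the same way by also carrying a running sum of squares.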
Last edited by Cuchulainn on January 9th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn

January 17th, 2012, 11:02 am

Quote: "Another option is that we can have 2^D orthogonal antithetic paths for any given D-dimensional problem. I have a 1d example in MC102."

What's a good definitive source in the n-factor case?
 
Cuchulainn

January 17th, 2012, 11:36 am

Quote (outrun): "Factors -n- are number of time-steps * number of underlying assets, right? What do you mean by definitive source? Is that the random generator used to draw n random numbers every time you generate a single (multi-factor) scenario?"

n-factor is the number of underlying assets (maybe 'D' is better?).

By source, I mean a text or article on which to base my algorithms, with something like:

d == underlying factors
m == underlying GBM (m <= d)
NT
NSIM
NBATCH
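The 2^D antithetic construction itself can be sketched like this (antitheticVariants is my name for it, not from MC102): take one D-dimensional Gaussian draw and enumerate all 2^D sign-flip combinations of its coordinates.

```cpp
#include <cstddef>
#include <vector>

// Given one D-dimensional standard normal draw z, enumerate all 2^D
// sign-flip combinations; bit d of mask decides the sign of coordinate d.
// Only practical for small D, since the number of variants grows as 2^D.
std::vector<std::vector<double>>
antitheticVariants(const std::vector<double>& z)
{
    const std::size_t D = z.size();
    std::vector<std::vector<double>> out;
    for (std::size_t mask = 0; mask < (std::size_t(1) << D); ++mask) {
        std::vector<double> v(D);
        for (std::size_t d = 0; d < D; ++d)
            v[d] = (mask & (std::size_t(1) << d)) ? -z[d] : z[d];
        out.push_back(std::move(v));
    }
    return out;
}
```

For D = 1 this reduces to the classic antithetic pair {z, -z}.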
Last edited by Cuchulainn on January 16th, 2012, 11:00 pm, edited 1 time in total.
 
Cuchulainn

January 17th, 2012, 1:23 pm

Quote (outrun): "OK... unfortunately I don't have any good ideas about algo sources, maybe someone else here? I suppose it's process dependent, e.g. for GBM one typically uses Cholesky on the covariance matrix to correlate the random numbers, and then makes steps forward in time, but for stoch vol it would be completely different (also double the number of factors?). Is it going well?"

There is MC discussion on the other forums, indeed. At the moment MC103 is in preparation, more generic than MC102.
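The Cholesky step outrun mentions for correlated GBM drivers can be sketched as follows (assuming a symmetric positive-definite covariance matrix; cholesky and correlate are hypothetical names): factor C = L Lᵀ once, then map each vector of iid standard normals z to correlated draws x = L z.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

using Matrix = std::vector<std::vector<double>>;

// Standard Cholesky factorisation C = L * L^T, L lower triangular.
// Assumes C is symmetric positive definite; no pivoting or error checks.
Matrix cholesky(const Matrix& C)
{
    const std::size_t n = C.size();
    Matrix L(n, std::vector<double>(n, 0.0));
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j <= i; ++j) {
            double s = C[i][j];
            for (std::size_t k = 0; k < j; ++k) s -= L[i][k] * L[j][k];
            L[i][j] = (i == j) ? std::sqrt(s) : s / L[j][j];
        }
    return L;
}

// Map iid standard normals z to correlated normals x = L * z.
std::vector<double> correlate(const Matrix& L, const std::vector<double>& z)
{
    std::vector<double> x(L.size(), 0.0);
    for (std::size_t i = 0; i < L.size(); ++i)
        for (std::size_t j = 0; j <= i; ++j) x[i] += L[i][j] * z[j];
    return x;
}
```

The factorisation is done once per covariance matrix; only the matrix-vector product is paid per time step and per path.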
 
Traden4Alpha
Posts: 3300
Joined: September 20th, 2002, 8:30 pm

January 17th, 2012, 2:07 pm

Quote (outrun): "Good idea about attaching multiple payoffs to a single path!"

This is a very good idea for some applications, but a very bad one for others. For snapshot pricing of instruments, this can be OK. But this approach will create correlation in the MC errors between the different instruments that were co-simulated. If, for example, a particular run of RNs accidentally contains an over-abundance of high paths, then all call-like payoffs will have a higher-than-true price and all put-like payoffs will have a lower-than-true price. In general, any study of correlations in the prices of different instruments will find more extreme values of correlation than is true (higher positive correlations among instruments with the same sign of delta, and lower negative correlations among instruments with opposite signs of delta).

This issue is just another example of the non-trivial coupling between the methods used inside a black box and the spurious errors or pathologies created when that black box is used in certain ways.

Quote (outrun): "I've thought about independent RNs before in a distributed project. One approach we wanted to take is to let the RNG run once for a long time, and then log the state/seed every 1M or 1B draws. You can then use that list of seeds to draw independent chunks (of max size 1M/1B). Also, some RNGs have random-access seeding/forward stepping to the n-th draw, e.g. with Sobol you can easily (constant time) skip forward 1M samples. Can we come up with something like this? Or something better? Should we investigate random stepping for e.g. the Mersenne Twister?"
A number of years ago, we talked about creating a buffered serial RNG that created and buffered RNs for use by parallel path/price calculators. One processor core would spend up to 100% of its time creating RNs and storing them. A second dispatch process would deal the FIFO buffer of RNs to the N-1 cores that consume RNs. As long as one core can create enough RNs to feed N-1 cores of path, price, and aggregation calculations, the approach would be efficient.
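A rough sketch of that buffered RNG idea, reduced to one producer and one consumer for brevity (all names are hypothetical; a real version would deal chunks to the N-1 consumers and avoid contention on a single lock):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <queue>
#include <random>
#include <thread>

// Bounded FIFO: the producer blocks when the buffer is full, the
// consumer blocks when it is empty.
class RngBuffer {
    std::queue<double> q_;
    std::mutex m_;
    std::condition_variable notFull_, notEmpty_;
    const std::size_t cap_;
public:
    explicit RngBuffer(std::size_t cap) : cap_(cap) {}
    void push(double x) {
        std::unique_lock<std::mutex> lk(m_);
        notFull_.wait(lk, [&]{ return q_.size() < cap_; });
        q_.push(x);
        notEmpty_.notify_one();
    }
    double pop() {
        std::unique_lock<std::mutex> lk(m_);
        notEmpty_.wait(lk, [&]{ return !q_.empty(); });
        double x = q_.front();
        q_.pop();
        notFull_.notify_one();
        return x;
    }
};

// One core produces `count` normal variates, another consumes them;
// returns the number of variates actually consumed.
std::size_t runBufferedRngDemo(std::size_t count)
{
    RngBuffer buf(128);
    std::thread producer([&] {
        std::mt19937 gen(42);                    // fixed seed for the demo
        std::normal_distribution<double> n01;
        for (std::size_t i = 0; i < count; ++i) buf.push(n01(gen));
    });
    std::size_t consumed = 0;
    for (std::size_t i = 0; i < count; ++i) { buf.pop(); ++consumed; }
    producer.join();
    return consumed;
}
```

Whether this wins in practice depends on the producer really being able to generate variates faster than the N-1 cores consume them; otherwise the buffer itself becomes the bottleneck.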