- Cuchulainn
**Posts:** 61593 **Joined:** **Location:** Amsterdam
**Contact:**

A new dedicated IV thread on Student (or Numerical/Technical?) would indeed not do any harm, I suppose.

Last edited by Cuchulainn on November 30th, 2014, 11:00 pm, edited 1 time in total.

http://www.datasimfinancial.com

http://www.datasim.nl

Every Time We Teach a Child Something, We Keep Him from Inventing It Himself

Jean Piaget


- theotherguy
**Posts:** 1 **Joined:**

Hi all, thanks Collector for this book - it is a great resource. I have a couple of questions about the discrete arithmetic Asian option section for the cases where [$]m>0[$].

First, on pages 193 and 196 I think there are (the same) typos in the formulas for the adjusted strike price, i.e. they should read [$] X = \frac{nX - mS_a}{n-m} [$].

Second, if we write (as in the Wilmott article you reference in the text) [$]Payoff = \frac{n-m}{n} \left( A_{n-m}^{*} - \frac{nX - mS_a}{n-m} \right)[$], should we be using [$]n-m[$] in place of [$]n[$] in your pricing formulas for [$]E(A_T)[$] and [$]E(A_{T}^{2})[$] for both the Levy and Curran approximations? I see that you use [$]n[$] in the code - is it correct as written and is my understanding wrong?

Last, a related question: when [$]m>0[$], what is the definition of [$]t_1[$]? Is it the time to the next print, or zero (i.e. max(time to first print, 0), where the time to the first print can be negative if [$]m>0[$])?

Thanks again for fielding these questions and again for the great book. Cheers
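For what it is worth, the corrected adjustment is a one-liner. A minimal C++ sketch (the function name and the assertion are mine, not from the book):

```cpp
#include <cassert>

// Adjusted strike for a discrete arithmetic Asian option when m of the
// n averaging points have already been fixed with running average S_a.
// This is the corrected formula from the post: X* = (n*X - m*S_a)/(n - m).
double adjustedStrike(int n, int m, double X, double Sa) {
    assert(m < n);   // at least one fixing must remain
    return (n * X - m * Sa) / (n - m);
}
```

With n = 4, m = 1, X = 100 and a running average S_a = 80, this gives (400 - 80)/3 ≈ 106.67: the remaining fixings must average more than the original strike to compensate for the low fixings already observed.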

QuoteOriginally posted by: theotherguy
[questions about the adjusted strike price on pages 193 and 196, about using [$]n[$] vs [$]n-m[$] in the Levy and Curran approximations, and about the definition of [$]t_1[$] when [$]m>0[$]]

Thank you, I am not sure of the answer; I will try to look into it before 2015. Now that I am "finished" with my physics book I will likely work on updating my option pricing formula collection, including fixing errors/bugs in the existing one.

Last edited by Collector on December 22nd, 2014, 11:00 pm, edited 1 time in total.

Last edited by Collector on December 26th, 2014, 11:00 pm, edited 1 time in total.

QuoteOriginally posted by: theotherguy
First, on pages 193 and 196 I think there are (the same) typos in the formulas for the adjusted strike price, i.e. they should read [$] X = \frac{nX - mS_a}{n-m} [$].

I looked into this now: what you say above is what I use in the VBA code "DiscreteAsianHHM", and yes, it makes more sense. I will add this to my typo list, thank you. I have not looked into problem 2 much yet.

Last edited by Collector on December 26th, 2014, 11:00 pm, edited 1 time in total.

- DominicConnor
**Posts:** 11684 **Joined:**

Is the typo list published anywhere?

QuoteOriginally posted by: DominicConnor
Is the typo list published anywhere?

Links to the typo-list PDF and some code changes are just below the book in question on the following page: http://www.espenhaug.com/books.html

- Cuchulainn
**Posts:** 61593 **Joined:** **Location:** Amsterdam
**Contact:**

Stupid question regarding equation (1.1) on page 2: what happens (or should happen) when you plug S = 0 into the formula? In fairness, there's nothing on page 2 to say that I can't. The bottom line is that the formula and its related qualifications (or lack thereof) become ambiguous, so different developers will interpret the formula in different ways. In mathematics, one says:

if S <> 0 then C = BS(S,T,...)
if S == 0 then C = something else

I could ask the same question about page 108, eq. 3.3 and 3.4, i.e. what happens if y1 = 1 or y2 = 1?
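To make the point concrete, here is a minimal C++ sketch of a Black-Scholes call that handles the S = 0 case explicitly (in the limit S → 0 a call is worthless, so we return 0 rather than evaluating log(S/X)). Function names are mine, not the book's:

```cpp
#include <cmath>

// Standard normal CDF via the complementary error function.
double N(double x) { return 0.5 * std::erfc(-x / std::sqrt(2.0)); }

// Black-Scholes call with the S = 0 boundary case made explicit,
// as the post suggests the book should do.
double bsCall(double S, double X, double r, double sigma, double T) {
    if (S == 0.0) return 0.0;   // limit case: the call is worthless
    double d1 = (std::log(S / X) + (r + 0.5 * sigma * sigma) * T)
                / (sigma * std::sqrt(T));
    double d2 = d1 - sigma * std::sqrt(T);
    return S * N(d1) - X * std::exp(-r * T) * N(d2);
}
```

Without the guard, S = 0 produces log(0) = -infinity and the result depends on how the platform's CDF handles infinities, which is exactly the ambiguity the post complains about.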

Last edited by Cuchulainn on March 25th, 2015, 11:00 pm, edited 1 time in total.


- Cuchulainn
**Posts:** 61593 **Joined:** **Location:** Amsterdam
**Contact:**

A remark on the option pricing formulae on page 281: the computation can be sped up when we realize it is really using Bernstein polynomials, which can be computed efficiently (and reused for various payoffs) in a Pascal-style triangle using the de Casteljau recursive algorithm that is used in CAD (I use it for holography/optical work). The accuracy is similar to the normal binomial method, i.e. O(1/n), where n is the degree of the polynomial. And it can be done in 2 factors, e.g. a Bernstein copula. Convergence is uniform and there is no Gibbs phenomenon. de Casteljau also avoids expensive calls to pow() and the like.
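A minimal sketch of the de Casteljau recursion mentioned above (names are mine): it evaluates a Bernstein-form sum without computing binomial coefficients or calling pow():

```cpp
#include <cstddef>
#include <vector>

// Evaluate  sum_k C(n,k) p^k (1-p)^(n-k) f[k]  by de Casteljau's
// recursion: repeatedly blend adjacent coefficients with weights
// (1-p, p). O(n^2) work, numerically stable, no pow() and no
// factorials. f is taken by value because it is overwritten.
double deCasteljau(std::vector<double> f, double p) {
    if (f.empty()) return 0.0;
    for (std::size_t m = f.size() - 1; m > 0; --m)
        for (std::size_t k = 0; k < m; ++k)
            f[k] = (1.0 - p) * f[k] + p * f[k + 1];
    return f[0];
}
```

For option pricing, f[k] would hold the payoff at the k-th terminal lattice node, and the same triangle can be rerun with a different f for each payoff.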

Last edited by Cuchulainn on April 30th, 2016, 10:00 pm, edited 1 time in total.


Cox & Rubinstein (1985, p. 178) give their own, even better, option pricing formula that needs just two applications of the binomial cumulative distribution function.

It is May Day today. Quote: it is also a traditional spring holiday in many cultures. Dances, singing, and cake are usually part of the celebrations that the day includes. Let us celebrate with binomial-cake and the trinomial-song and then the option(tional) de-copula-dance!

Last edited by Collector on April 30th, 2016, 10:00 pm, edited 1 time in total.

- Cuchulainn
**Posts:** 61593 **Joined:** **Location:** Amsterdam
**Contact:**

QuoteCox & Rubinstein (1985, p178) give their own even better option pricing formula that needs just two applications of the binomial cumulative distribution function

It is a more convenient formula, but it will be roughly twice as slow, so I'm not so sure how much better it is. The formula does make it look like the BS formula in the limit.

Actually, Cox and Rubinstein call it the complementary binomial distribution, not a cdf. It seems (e.g. in Boost C++) the regularized incomplete beta function is used to compute the cdf, which is fine. But it can also be computed using the de Casteljau algorithm with a Pascal-like triangle/lattice (the up and down weights are p and 1-p, respectively). I have no idea which algorithm is more efficient.
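As a sketch of the summation route (names and the log-gamma trick are mine, not Cox & Rubinstein's; a library such as Boost would instead use the incomplete-beta identity noted in the comment):

```cpp
#include <cmath>

// Complementary binomial distribution Phi[a; n, p] = P[X >= a] for
// X ~ Bin(n, p): the quantity Cox & Rubinstein's two-term formula needs.
// Direct summation with lgamma to avoid factorial overflow; assumes
// 0 < p < 1. A library would use the equivalent identity
// P[X >= a] = I_p(a, n - a + 1), with I the regularized incomplete beta.
double complBinomial(int a, int n, double p) {
    if (a <= 0) return 1.0;   // the whole probability mass
    if (a > n)  return 0.0;
    double sum = 0.0;
    for (int j = a; j <= n; ++j) {
        double logTerm = std::lgamma(n + 1.0) - std::lgamma(j + 1.0)
                       - std::lgamma(n - j + 1.0)
                       + j * std::log(p) + (n - j) * std::log(1.0 - p);
        sum += std::exp(logTerm);
    }
    return sum;
}
```

The two-application formula then reads price = S * complBinomial(a, n, p') - X * exp(-r*T) * complBinomial(a, n, p), where a is the smallest number of up-moves that puts the call in the money.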

Last edited by Cuchulainn on May 1st, 2016, 10:00 pm, edited 1 time in total.


- Cuchulainn
**Posts:** 61593 **Joined:** **Location:** Amsterdam
**Contact:**

The formulae on pages 282 (1 factor) and 319 (2 factors) of the Collector's book (2nd edition) are interesting IMO for a number of reasons:

1. They give exactly the same results as the binomial methods (checked in 1 factor).

2. Very easy to program and can be parallelized (parallel loops) more easily than the normal binomial method.

3. I tested 2 factor problems as well and results agree with book.

4. Without any optimization we get 2-digit accuracy in less than 0.01 seconds for 2 factors. So the formula looks intimidating, but the C++ libraries seem to be doing their work well.

*5. Collector mentions the formulae and I would be interested in his expert view on using them with various payoffs.*

6. Is there a corresponding formula in 3 factors S1, S2, S3?

Now ... my question is: why are these formulae seemingly not used, or am I missing something?

If nothing else, you can generate prices to test against when using other methods.

// In fact, the formulae are Bernstein polynomials (a binomial distribution approximating a normal distribution) and have been used in CAD for years. I have not yet tried de Casteljau's algorithm on the formulae, but it would be a big performance improvement.
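To illustrate points 1 and 2, here is a one-factor sketch (CRR parameterization; the names and parameters are mine, not the book's notation): the price is a single discounted sum over terminal lattice nodes, and since the loop iterations are independent it parallelizes trivially.

```cpp
#include <cmath>
#include <functional>

// One-factor "binomial formula" price for an arbitrary European payoff:
// a single discounted Bernstein-style sum over terminal nodes instead
// of backward induction. CRR up/down moves; lgamma avoids factorial
// overflow. A sketch under stated assumptions, not the book's code.
double binomialFormulaPrice(double S, double r, double sigma, double T, int n,
                            const std::function<double(double)>& payoff) {
    double dt = T / n;
    double u = std::exp(sigma * std::sqrt(dt));
    double d = 1.0 / u;
    double p = (std::exp(r * dt) - d) / (u - d);   // risk-neutral up probability
    double sum = 0.0;
    for (int k = 0; k <= n; ++k) {                 // independent terms: easy to parallelize
        double logProb = std::lgamma(n + 1.0) - std::lgamma(k + 1.0)
                       - std::lgamma(n - k + 1.0)
                       + k * std::log(p) + (n - k) * std::log(1.0 - p);
        sum += std::exp(logProb) * payoff(S * std::pow(u, k) * std::pow(d, n - k));
    }
    return std::exp(-r * T) * sum;
}
```

A quick sanity check: pricing the identity payoff must return S exactly (the discounted expectation of S_T is S under the risk-neutral measure), and an at-the-money call converges toward the Black-Scholes value as n grows. The pow() calls here are exactly what the de Casteljau route would eliminate.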



Yes, these are very effective and very flexible.

"my question is why these formulate are seemingly not used, or am I missing something?"

Not sure why so few use them. In general, one possibility is, as a quant friend (with very many years of experience) told me a few years back, that many quant programmers educated these days do not care (or do not know) so much about efficient algorithms and methods, because computers are so fast. That could explain why some software seems slower than ten years ago, even with today's much faster computers.

"pages 282 (1 factor) and 319 (2 factors) "

People are so much on Twitter, fat-books etc. these days that they have evolved to be excellent at reading and writing many short messages (flicker brains); few of them can possibly get past page 100 in a book. But if enlightened like P, one knows one can simply skip the first 281 pages to get to page 282. I forgot to write in the preface that my book actually starts on page 282.

"my question is why these formulate are seemingly not used, or am I missing something?"

not sure why so few uses them. In general, one possibility is as quant friend (with very many years experience) told me a few years back, that many quants programmers educated these days do not care (or do not know) so much about efficient algorithms and methods because computers are so fast. Could explain why some software seems slower than ten years ago, even with todays much faster computers.

"pages 282 (1 factor) and 319 (2 factors) "

people so much on twitter, fat-books etc these days, they evolved into excellent at reading and writing many short messages (flicker brains), few of them can possibly get past page 100 in a book But if enlightened like P, one know one simply can skip the first 281 pages to get to page 282. I forgot to write in the preface that my book actually starts on page 282

- Cuchulainn
**Posts:** 61593 **Joined:** **Location:** Amsterdam
**Contact:**

Life begins at 282.

Maybe the reason is that the formulae look intimidating and factorials have to be integer-valued. And just trying them will tell us whether the formulae are feasible or not.

Looking back at pages 281 to 1, we can use the 1-factor and 2-factor formulae for a wide range of payoffs instead of:

1. boiler-plate (grunge) binomial code

2. cases for which no explicit solution is known (Margrabe, Stulz)

3. No need for the bivariate normal CDF M(a,b;rho). BTW, I compute a whole grid of CDF values in one sweep by solving a Goursat hyperbolic PDE; it is an order of magnitude faster (10 times, AFAIR) than Genz.

//

4. Could we use the formulae as rough-and-ready initial estimates in other algorithms? e.g. Boundary conditions.

5. If you smooth the payoff can you use the same formulae to compute the greeks?

(6. Inverse problems//)

7. You can probably use Shanks' method to accelerate convergence of the series: create a sequence of partial sums (a Cauchy sequence) and stop when two successive values are within a given tolerance. Haven't tried it yet...
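Point 7 can be sketched as follows (a generic Shanks transformation; this is my illustration, not code from any of the books discussed):

```cpp
#include <cstddef>
#include <vector>

// Shanks transformation of a sequence of partial sums s[0..N]:
//   S(s)_n = (s[n+1]*s[n-1] - s[n]^2) / (s[n+1] + s[n-1] - 2*s[n]).
// Accelerates many linearly convergent series; the transformed sequence
// is what one would monitor for the stopping test in point 7.
std::vector<double> shanks(const std::vector<double>& s) {
    std::vector<double> out;
    for (std::size_t n = 1; n + 1 < s.size(); ++n) {
        double denom = s[n + 1] + s[n - 1] - 2.0 * s[n];
        out.push_back(denom == 0.0 ? s[n]   // already converged at this n
                      : (s[n + 1] * s[n - 1] - s[n] * s[n]) / denom);
    }
    return out;
}
```

On the Leibniz series for pi/4 (1 - 1/3 + 1/5 - ...), the third raw partial sum times 4 is about 3.467, while one Shanks pass on the same three terms already gives about 3.167: a large gain for no extra series terms.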


