I didn't want to completely discourage the OP from clicking on it.

I've seen worse, e.g. my [$]e^5[$] and the horrendous Heston benchmark.

Statistics: Posted by Cuchulainn — 7 minutes ago


Brenda Spencer's song (and 40 years later the penny hasn't dropped over there)

www.youtube.com/watch?v=-Kobdb37Cwc

Statistics: Posted by Cuchulainn — 17 minutes ago


So she, nomen omen, concluded exactly in the run-up to the election, to keep everyone in maximum suspense?

------

@"I checked the broad informal public opinion poll of Twitter and that is always interesting. (I disagree with people who say it is just for idiots - a wide array is represented there.)"

Be careful though: Twitter polls and sentiment analyses failed to predict Brexit and (miserably) the UK election result.

Statistics: Posted by katastrofa — 33 minutes ago


Also, this old thread, unfortunately somewhat corrupted, has a nice tricky case for testing putative SABR MCs or other approaches.

'somewhat'.

Statistics: Posted by Cuchulainn — 38 minutes ago


Statistics: Posted by Alan — 44 minutes ago


1. Machine Learning

https://www.datasim.nl/application/file ... 423101.pdf

2. PDE approach using ADI, for example. I think Joerg Kienitz is active here.

3. If SABR is like Heston, then standard FD and MC will have a lot of bias (I guess). You could try a basic Euler scheme to see if it is good enough...

Joerg and I did a bunch of FDM for Heston in our C# MC book, and QE2 was best. Here's the same idea for SABR:

https://papers.ssrn.com/sol3/papers.cfm ... id=2575539
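A "basic Euler" test along these lines is quick to sketch. The snippet below is only an illustrative sketch, not anyone's production scheme: the parameter values are made up, the forward is absorbed at zero when an Euler step overshoots, and the volatility (which is lognormal in SABR) is stepped with its exact solution.

```python
import numpy as np

# Illustrative SABR dynamics and made-up parameters:
# dF = sigma * F^beta dW1,  dsigma = alpha * sigma dW2,  corr(dW1, dW2) = rho
F0, sigma0 = 100.0, 0.25
alpha, beta, rho = 0.3, 0.5, -0.4
T, n_steps, n_paths = 1.0, 200, 50_000
dt = T / n_steps

rng = np.random.default_rng(0)
F = np.full(n_paths, F0)
sig = np.full(n_paths, sigma0)
for _ in range(n_steps):
    dw1 = rng.normal(0.0, np.sqrt(dt), n_paths)
    dw2 = rho * dw1 + np.sqrt(1.0 - rho**2) * rng.normal(0.0, np.sqrt(dt), n_paths)
    # Euler step for the forward, absorbed at zero so F**beta stays well-defined
    F = np.maximum(F + sig * F**beta * dw1, 0.0)
    # the vol SDE is lognormal, so it can be stepped exactly
    sig *= np.exp(alpha * dw2 - 0.5 * alpha**2 * dt)

print(F.mean())   # the forward is a martingale, so this should sit near F0
```

Comparing the smile implied by such paths against a benchmark (e.g. the paper linked above) is one way to see how much bias plain Euler carries at a given step size.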

Statistics: Posted by Cuchulainn — 49 minutes ago


Maybe Brexit killing the financial services industry in London would not be such a bad thing after all: https://www.bbc.co.uk/news/uk-54226107

BBC tomorrow 19.00 BST.

Statistics: Posted by Cuchulainn — 57 minutes ago


While my derivations are complicated, you can just trust me that the tables are correct -- at least until you've found an MC source that you like and can code yourself. As a rule of thumb, everything in finance should be calculated 3 different ways anyway!

Statistics: Posted by Alan — Today, 6:43 pm


In my current work, I have to deal with a lot of iterated integrals of the form [$]\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} ds[$], [$]\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} dz(s)[$] and [$]\int_{t_1}^{t_2} t dt \int_{t_1}^{t} dz(s)[$], and even thrice or more iterated integrals like [$]\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv[$]. These integrals would not be difficult if the lower limit of every integral involved were zero, but once the lower limit differs, as with the [$]t_1[$] above, they become harder. However, using a simple trick, we can solve all of them easily. Since we will be dealing with a lot of these integrals in further path-integral work, I decided to share the method with friends in this post. Using this technique, you will be able to solve a large number of difficult stochastic integrals.
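For the simplest of these building blocks, [$]\int_{t_1}^{t_2} t dz(t)[$], the Itô isometry already gives the variance [$]\int_{t_1}^{t_2} t^2 dt = ({t_2}^3-{t_1}^3)/3[$], and a crude Monte Carlo sketch (with made-up values [$]t_1=0.5[$], [$]t_2=1[$]) confirms it:

```python
import numpy as np

rng = np.random.default_rng(42)
t1, t2 = 0.5, 1.0
n_steps, n_paths = 200, 50_000
dt = (t2 - t1) / n_steps
t = t1 + np.arange(n_steps) * dt                 # left endpoints of the time grid
dz = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))

I = dz @ t                                       # int_{t1}^{t2} t dz(t), one value per path
print(I.var(), (t2**3 - t1**3) / 3.0)            # sample variance vs the Ito isometry value
```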

Basically, all we have to do is break the iterated integral down into a larger number of simpler integrals in which the lower limit of every integral is zero. Let us take the thrice-iterated integral [$]\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv[$]

We start breaking it down into integrals with lower limit zero as

[$]\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv[$] Integral(1)

[$]=\Big[\int_{0}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv [$] Integral(2)

[$]- \int_{0}^{t_1} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv \Big][$] Integral(3)

So we have changed the lower limit of the outer integral to zero by splitting it into two integrals. Now taking the first of the two, Integral(2), we have

[$]\int_{0}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv[$]

[$]=\Big[\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{t_1}^{s} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{t_1}^{s} dv\Big][$]

which is broken down further into four integrals when we also change the lower limits of the inner integrals to zero:

[$]\Big[\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{t_1}^{s} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{t_1}^{s} dv \Big][$]

[$]=\Big[\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{s} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{t_1} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{s} dv[$]

[$]+\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{t_1} dv\Big][$]

So Integral(2) can be written as

[$]\int_{0}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv [$]

[$]=\Big[\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{s} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{t_1} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{s} dv[$]

[$]+\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{t_1} dv\Big][$]

Similarly, Integral(3) can be written as

[$]\int_{0}^{t_1} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv [$]

[$]=\Big[\int_{0}^{t_1} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{s} dv[$]

[$]-\int_{0}^{t_1} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{t_1} dv[$]

[$]-\int_{0}^{t_1} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{s} dv[$]

[$]+\int_{0}^{t_1} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{t_1} dv\Big][$]

So our main Integral(1), which is the difference of Integral(2) and Integral(3), becomes

[$]\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv[$]

[$]=\Big[\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{s} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{t_1} dv[$]

[$]-\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{s} dv[$]

[$]+\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{t_1} dv[$]

[$]-\int_{0}^{t_1} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{s} dv[$]

[$]+\int_{0}^{t_1} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{t_1} dv[$]

[$]+\int_{0}^{t_1} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{s} dv[$]

[$]-\int_{0}^{t_1} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{t_1} dv\Big][$]

Fortunately, variances of integrals whose lower limit is zero are easy to calculate. I will calculate them here one by one, with the sign each term carries in the sum above.

[$]Var[\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{s} dv]= {t_2}^6/18 [$]

[$]-Var[\int_{0}^{t_2} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{t_1} dv]= -{t_2}^4 {t_1}^2/4[$]

[$]-Var[\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{s} dv]= -{t_2}^3 {t_1}^3/9[$]

[$]+Var[\int_{0}^{t_2} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{t_1} dv]={t_2}^3 {t_1}^3/3[$]

[$]-Var[\int_{0}^{t_1} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{s} dv]= -{t_1}^6/18[$]

[$]+Var[\int_{0}^{t_1} t dz(t) \int_{0}^{t} dz(s) \int_{0}^{t_1} dv]={t_1}^6/4[$]

[$]+Var[\int_{0}^{t_1} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{s} dv]={t_1}^6/9[$]

[$]-Var[\int_{0}^{t_1} t dz(t) \int_{0}^{t_1} dz(s) \int_{0}^{t_1} dv]=- {t_1}^6/3[$]

So summing up the variances and re-arranging, we get the total variance as

[$]({t_2}^6-{t_1}^6)/18 -({t_2}^4 {t_1}^2-{t_1}^6)/4-({t_2}^3 {t_1}^3-{t_1}^6)/9+({t_2}^3 {t_1}^3-{t_1}^6)/3[$]

or we have

[$]Var[\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv][$]

[$]=({t_2}^6-{t_1}^6)/18 -({t_2}^4 {t_1}^2-{t_1}^6)/4-({t_2}^3 {t_1}^3-{t_1}^6)/9+({t_2}^3 {t_1}^3-{t_1}^6)/3[$]

and we can represent our integral as

[$]\int_{t_1}^{t_2} t dz(t) \int_{t_1}^{t} dz(s) \int_{t_1}^{s} dv[$]

[$]=\sqrt{(({t_2}^6-{t_1}^6)/18 -({t_2}^4 {t_1}^2-{t_1}^6)/4-({t_2}^3 {t_1}^3-{t_1}^6)/9+({t_2}^3 {t_1}^3-{t_1}^6)/3)} \frac{H_2(N)}{\sqrt{2!}}[$]

where [$]H_2(N)=N^2-1[$] is the second Hermite polynomial and [$]N[$] is a standard Gaussian random variable.
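The normalization can be checked numerically: [$]H_2(N)/\sqrt{2!}[$] has mean zero and unit variance, so scaling it by the square root of the target variance reproduces that variance (matching the variance is, of course, weaker than matching the full distribution):

```python
import numpy as np

rng = np.random.default_rng(7)
N = rng.standard_normal(1_000_000)
X = (N**2 - 1.0) / np.sqrt(2.0)     # H_2(N) / sqrt(2!)

print(X.mean(), X.var())            # ~0 and ~1, so sqrt(V)*X has variance ~V
```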

Please pardon any inadvertent mistakes. I think I have done the calculations right.


Very sorry, friends, for this error. I calculated the diagonal variances correctly in the above post but did not include the covariances between terms. The formula for the variance of a sum of two terms is [$]Var(a+b)=Var(a) + Var(b) + 2 Cov(a,b)[$] and [$]Var(a-b)=Var(a) + Var(b) - 2 Cov(a,b)[$], and this has to be applied across all the terms: add the individual variances of each term and then account for the covariance between every pair of terms. I will fix this error when I come back to transition-probabilities-based stochastic integrals.
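The point is easy to demonstrate numerically with two correlated samples (arbitrary toy numbers, unrelated to the integrals above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
a = rng.standard_normal(n)
b = 0.6 * a + 0.8 * rng.standard_normal(n)    # Var(b) = 1, Cov(a, b) = 0.6

cov = np.cov(a, b)[0, 1]
print((a + b).var(), a.var() + b.var() + 2 * cov)   # both ~3.2, not 2.0
print((a - b).var(), a.var() + b.var() - 2 * cov)   # both ~0.8, not 2.0
```

Ignoring the covariance would give 2.0 in both cases, which is exactly the kind of error made above.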

Of course, I knew the variance formulas but for some reason I was totally blank about them when I wrote the above copied post.

My work on the new project is going well and I hope to come back with results in another two to three days.

Statistics: Posted by Amin — Today, 6:33 pm


Funny that you ask about rare events in RL!

https://papers.nips.cc/paper/5249-weighted-importance-sampling-for-off-policy-learning-with-linear-function-approximation.pdf

https://sites.ualberta.ca/~szepesva/papers/ICLR2019-Risk.pdf

It's just using weighted sampling, isn't it? I even sense some *-armed bandits in it - not a good idea for high-cost risk problems, imho. Furthermore, from what I understand, they don't look for rare events in the data, but rather in the algorithm's performance (quoted below in bold). I would expect that in general this approach makes the agents even more fallible on the actual rare occasions. After all, the more they train on the same data, the worse they generalise. You can't beat statistics. Or am I missing something?

"The overarching idea is thus to screen out situations that are unlikely to be problematic, and focus evaluation on the most difficult situations. The difficulty arises in identifying these situations – since failures are rare, there is little signal to drive optimization. To address this problem, we introduce a continuation approach, learning a failure probability predictor (AVF), which estimates the probability the agent fails given some initial conditions."
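For what it's worth, the weighted (self-normalized) importance sampling estimator is only a few lines; the toy target and proposal below are my own, not from either paper:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
# target p = N(0, 1); proposal (behaviour) q = N(0, 2^2), heavier-tailed
x = rng.normal(0.0, 2.0, n)
log_w = -0.5 * x**2 - (-0.5 * (x / 2.0)**2 - np.log(2.0))   # log p(x) - log q(x)
w = np.exp(log_w)

f = x**2                                   # estimate E_p[x^2] = 1
ois = np.mean(w * f)                       # ordinary importance sampling
wis = np.sum(w * f) / np.sum(w)            # weighted (self-normalized) IS

print(ois, wis)                            # both should land close to 1
```

WIS trades a little bias for much lower variance when the weights are heavy-tailed, which is exactly the rare-event regime.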

Having said that, self-driving cars are a cool idea - it will probably work if we build the infrastructure.

I will miss London driving, though

It seems that RL is now being hyped up as the way to the ultimate AI ("AGI"), which will solve problems at which humans were failing... and it will fail for the exact same reasons: one lockdown evening I played with multi-agent RL magic a bit and wrote a toy model of contagion. A bunch of players were walking around the screen: young guys and old guys, for whom the infection was lethal. They could wash hands for protection or go to the pub. The model trained pretty fast (if you don't count catastrophic forgetting when I tried too hard). The outcome was that young players went straight to the pub, whereas the old players waited at a safe distance for the youth to infect each other and recover or die, and then they safely entered the pub.

Anyway, I'm curious in what form the hype will rebound - for now it has gone quiet because there's an actual problem that needs to be solved: the Covid-19 crisis.

Another reality check for DeepMind's AlphaStar (you won't read about it in Nature):

Statistics: Posted by katastrofa — Today, 5:42 pm


Statistics: Posted by ISayMoo — Today, 5:41 pm


Here are some plums I bought at a local farm/orchard yesterday - I have a whole bag of them and will make a plum tart this afternoon.

The little placemat is one of the latest completed items here.

And that's all she wrote; enjoy the fall, people. Cold nights here mean deep sleep and plenty of energy! : )

Statistics: Posted by trackstar — Today, 2:11 pm


But I'll go back to my sabbatical now - I am doing research on US and international environmental law and human rights, and I am also rowing, cycling, hiking, and pumping iron.

Whatever happens this fall, if I have to drive, walk, bike, or row to Cape Breton any time in the months to come, I am super-fit and ready for the journey! ; )

PS: I checked to see if the invitation from Canada still stands after 4 years and it does!

If... (Canadian Humor, eh?)

Welcome to Cape Breton

So, they are writing an American call, at least for now.

Statistics: Posted by trackstar — Today, 1:34 pm
