Cuchulainn
Topic Author
Posts: 63748
Joined: July 16th, 2004, 7:38 am
Location: Amsterdam
Contact:

### Re: Silly questions

It is BM, indeed. Maybe it is better to try to describe the problem (already discussed in the "ODE asymptotic .." thread (Aluffi, Gemam et al.)).

Recall

$d \vec{x}_t = -\nabla f(\vec{x}_t)\, dt + \sqrt{2T}\, d \vec{W}_t$ (1)

When there is no random fluctuation I can solve constrained optimisation as a gradient system. Now I want to do hill-climbing (basin hopping) by solving (1).

My issues are

A) Solving (1) directly as an SDE is not ideal (?): we lose the benefit of good ODE solvers.
B) What I want is to use Strang splitting into a deterministic leg and a random leg, and which form the latter leg should take is really the trigger for my question. So, what would be the 'best' approach for the stochastic part?

Here is an idea for the stochastic Burgers equation:

https://arxiv.org/pdf/1907.12747.pdf
My C++ Boost code gives
262537412640768743.999999999999250072597198185688879353856337336990862707537410378210647910118607313

http://www.datasimfinancial.com
http://www.datasim.nl

Alan
Posts: 10493
Joined: December 19th, 2001, 4:01 am
Location: California
Contact:

### Re: Silly questions

I may be missing something, but since the time-step $\Delta t$ is small, I would just do the usual simplest possible Monte Carlo for the 'random part':

At each step, draw a std-normally distributed n-vector $\vec{Z}$ and take
$\vec{x}_{t + \Delta t} = \vec{x}_t + c\, \vec{Z} \sqrt{\Delta t}$, where $c = \sqrt{2T}$.

Cuchulainn

### Re: Silly questions

> I may be missing something, but since the time-step $\Delta t$ is small, I would just do the usual simplest possible Monte Carlo for the 'random part':
>
> At each step, draw a std-normally distributed n-vector $\vec{Z}$ and take $\vec{x}_{t + \Delta t} = \vec{x}_t + c\, \vec{Z} \sqrt{\Delta t}$, where $c = \sqrt{2T}$.
Yes, this would be one acceptable approach, I reckon.

Let me explain the full context, using the equations here as a baseline:

https://arxiv.org/abs/1707.06618

The objective is to solve the SDE (1.2) by Algorithm 1 (GLD). No surprises really as it is essentially just Euler's method.
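For concreteness, one GLD step is plain Euler-Maruyama applied to (1). A minimal C++ sketch; the names (`gld_step`, `Grad`, `Vec`) are my own, not from the paper:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

using Vec  = std::vector<double>;
using Grad = Vec (*)(const Vec&);

// One step of (unadjusted) Gradient Langevin Dynamics:
// x <- x - grad f(x) * dt + sqrt(2 T dt) * Z,  Z ~ N(0, I).
void gld_step(Vec& x, Grad grad_f, double dt, double T, std::mt19937& rng)
{
    std::normal_distribution<double> z(0.0, 1.0);
    const Vec g = grad_f(x);
    const double c = std::sqrt(2.0 * T * dt);
    for (std::size_t i = 0; i < x.size(); ++i)
        x[i] += -g[i] * dt + c * z(rng);   // drift + diffusion in one Euler-Maruyama step
}
```

With $T = 0$ this collapses to plain gradient descent, which is a handy sanity check.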

Now, here's the nub:

1. Euler for the deterministic part is shaky/brittle.
2. What I want (heresy: it works in practice, but does it work in theory?) is to split (2.1) into an ODE part and an SDE part.
3. Proposal: use Lie-Trotter/Strang operator splitting on the ODE/SDE parts from point 2.

Interested in the mathematical justification for this approach. If it works then we can find a global minimum (and we can send Gradient Descent out to pasture).
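The splitting in point 3 might look as follows: a half-step of the gradient-flow ODE, a full stochastic leg (which here is pure diffusion, so it can be solved exactly by a Gaussian kick), then the second ODE half-step. A hedged sketch only; the single Euler substep in the ODE leg stands in for whatever stiff ODE solver one prefers, which is the whole point of splitting, and the names are mine:

```cpp
#include <cmath>
#include <cstddef>
#include <random>
#include <vector>

using Vec  = std::vector<double>;
using Grad = Vec (*)(const Vec&);

// Strang step for dx = -grad f(x) dt + sqrt(2T) dW:
// half ODE leg, full stochastic leg (exact), half ODE leg.
void strang_step(Vec& x, Grad grad_f, double dt, double T, std::mt19937& rng)
{
    std::normal_distribution<double> z(0.0, 1.0);
    auto ode_half = [&](Vec& y) {             // y' = -grad f(y), one Euler substep of dt/2
        const Vec g = grad_f(y);
        for (std::size_t i = 0; i < y.size(); ++i)
            y[i] -= 0.5 * dt * g[i];
    };
    ode_half(x);
    const double c = std::sqrt(2.0 * T * dt);
    for (std::size_t i = 0; i < x.size(); ++i)
        x[i] += c * z(rng);                   // dx = sqrt(2T) dW solved exactly: a Gaussian kick
    ode_half(x);
}
```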

Alan

### Re: Silly questions

Can you show "it works in practice" in an example?

Cuchulainn

### Re: Silly questions

> Can you show "it works in practice" in an example?
Yes! I even programmed the SGLD (Algorithm 2), which is what the ML people do.
For the ODE part it works very well, even with Lagrangians and KKT conditions (equality and inequality constraints).

So, I need to resurrect and clean up my C++ code from last year (I built it from the ground up). The algorithm is easy:

1. GLD (the full SDE).
2. Split step 1 into an ODE leg and an SDE leg.

I'll go back to the bunker and get back soon! I think it will be a very useful exercise.

A test case is Rastrigin: many local minima, one global minimum.

https://en.wikipedia.org/wiki/Rastrigin_function
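For reference, the Rastrigin function from the linked page, $f(x) = A n + \sum_i \left( x_i^2 - A \cos(2 \pi x_i) \right)$ with $A = 10$ and global minimum $f(0) = 0$, and its gradient are easy to code up; a small sketch (function names are mine):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

constexpr double A  = 10.0;
constexpr double PI = 3.14159265358979323846;

// Rastrigin: f(x) = A n + sum_i (x_i^2 - A cos(2 pi x_i)),
// many local minima, one global minimum f(0) = 0.
double rastrigin(const std::vector<double>& x)
{
    double s = A * x.size();
    for (double xi : x)
        s += xi * xi - A * std::cos(2.0 * PI * xi);
    return s;
}

// Gradient: df/dx_i = 2 x_i + 2 pi A sin(2 pi x_i).
std::vector<double> rastrigin_grad(const std::vector<double>& x)
{
    std::vector<double> g(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        g[i] = 2.0 * x[i] + 2.0 * PI * A * std::sin(2.0 * PI * x[i]);
    return g;
}
```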