
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: If you are bored with Deep Networks

December 19th, 2019, 9:05 pm

Test code
// Interval strategy on (0, inf):
//
// 1. Truncation to a large finite T
// 2. Transform (0, inf) to (0,1)
//
// Option 2 tends to give more accurate rounded results.
//
// (C) Datasim Education BV 2020
//


#include "LinearSystemGradientDescent.hpp"
#include <boost/numeric/ublas/vector.hpp>
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/io.hpp>
namespace ublas = boost::numeric::ublas;

#include <boost/numeric/odeint.hpp>
namespace Bode = boost::numeric::odeint;

int main()
{
 
 // 2X2 matrices
 //
 // A1 == symmetric and positive definite (pd)
 // A2 == symmetric and NOT positive definite (pd)
 // A3 == NOT symmetric and positive definite(pd)
 // A4 == NOT symmetric and NOT positive definite(pd)
 std::size_t r = 2; std::size_t c = 2;

 ublas::matrix<double> A1(r, c);
 A1(0, 0) = 2; A1(0, 1) = 1; A1(1, 0) = 1; A1(1, 1) = 2;
 
 
 ublas::vector<double> b1(r); 
 b1[0] = 4;  b1[1] = 5;
 

 
 // ODE solver; x = (1, 2) is the solution in all cases
 ublas::vector<double> x(r); x[0] = x[1] = 0.0;
 ublas::vector<double> x2(r);  x2[0] = x2[1] = 0.0;

 // Integrate on [L, T]
 // Exercise: try T = 0.1, 0.25, 0.5, 0.75, 0.95, 0.9999, etc.
 double L = 0.0; double T = 0.99584959825795;
 double dt = 1.0e-5;


 LinearSystemGradientDescentOde ode(A1, b1);

 // integrate() with odeint's default error-controlled stepper (the Ford Cortina option)
 std::size_t steps = Bode::integrate(ode, x, L, T, dt);

 std::cout << "Number of steps " << steps << '\n';
 std::cout << "Solution " << x << '\n';

 // Bulirsch-Stoer (the "BS upmarket model"); aliases assumed consistent
 // with those used by LinearSystemGradientDescent.hpp
 using value_type = double;
 using state_type = ublas::vector<value_type>;
 Bode::bulirsch_stoer<state_type, value_type> bsStepper;
 
 std::size_t steps3 = Bode::integrate_adaptive(bsStepper, ode, x2, L, T, dt);

 std::cout << "Number of steps, Bulirsch-Stoer " << steps3 << '\n';
 std::cout << "Solution II " << x2 << '\n';
 

 return 0;
}
Thanks for sharing the boost::odeint example; I was not aware of this Boost library.
By the way, reading the posts, I am not sure I understand your point. Isn't SGD an optimisation method, like all those present in boost::odeint? Isn't this discussion equivalent to comparing, for instance, a Godunov scheme to a Glimm scheme in the numerical analysis of PDEs?
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

December 21st, 2019, 2:10 pm

By the way, reading the posts, I am not sure I understand your point. Isn't SGD an optimisation method, like all those present in boost::odeint?

Boost odeint is a generic ODE solver. It can be used to solve optimisation problems posed as dynamical systems, as my example above shows.
The authors of odeint do have examples of Hamiltonian systems, which are a special case of dynamical systems.
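
For reference, the functor behind the example is essentially a gradient flow. A sketch (the exact contents of LinearSystemGradientDescent.hpp, in particular the choice of (0, inf) -> (0, 1) map, may differ):

// Sketch only. Gradient flow for f(x) = 0.5 x'Ax - b'x:  dx/dt = -(A x - b),
// whose steady state solves A x = b. Assuming the map t = s/(1-s) from (0,1)
// to (0,inf), the right-hand side picks up a factor dt/ds = 1/(1-s)^2.
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/vector.hpp>

using value_type = double;
using state_type = boost::numeric::ublas::vector<value_type>;

class LinearSystemGradientDescentOde
{
public:
    LinearSystemGradientDescentOde(const boost::numeric::ublas::matrix<value_type>& A,
                                   const state_type& b) : A_(A), b_(b) {}

    // odeint system signature: sys(x, dxdt, t)
    void operator()(const state_type& x, state_type& dxdt, double s) const
    {
        const value_type scale = 1.0 / ((1.0 - s) * (1.0 - s));   // dt/ds for t = s/(1-s)
        dxdt = -scale * (boost::numeric::ublas::prod(A_, x) - b_);
    }

private:
    boost::numeric::ublas::matrix<value_type> A_;
    state_type b_;
};

With that map, integrating up to a T just below 1 corresponds to a long horizon on (0, inf), which would explain the T = 0.99584959825795 in the test code.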

Isn't this discussion equivalent to comparing, for instance, a Godunov scheme to a Glimm scheme in the numerical analysis of PDEs?

In which aspects is it equivalent? AFAIK Godunov is for conservation laws(?) I don't see much overlap.
 
User avatar
JohnLeM
Posts: 379
Joined: September 16th, 2008, 7:15 pm

Re: If you are bored with Deep Networks

December 23rd, 2019, 8:08 am

By the way, reading the posts, I am not sure I understand your point. Isn't SGD an optimisation method, like all those present in boost::odeint?

Boost odeint is a generic ODE solver. It can be used to solve optimisation problems posed as dynamical systems, as my example above shows.
The authors of odeint do have examples of Hamiltonian systems, which are a special case of dynamical systems.

Isn't this discussion equivalent to comparing, for instance, a Godunov scheme to a Glimm scheme in the numerical analysis of PDEs?

In which aspects is it equivalent? AFAIK Godunov is for conservation laws(?) I don't see much overlap.
You are right: Godunov schemes, like Glimm schemes, are numerical schemes designed to solve a particular class of problems, namely conservation laws, which can be described by Hamiltonians. For me, the connection with the ODE minimisation problem is that both methods essentially use gradient-descent algorithms, energy-minimisation based (more precisely, entropy based), to design an ODE, that is a dynamical system, consistent with the conservation law under study. Both amount to solving optimisation problems. The difference is that

- Godunov schemes are gradient-based minimisation methods, as are a number of other finite-difference-type methods; they try to minimise the total entropy of the system.
- Glimm schemes are also gradient-minimisation methods, but they use random selection to achieve this.

To try to clarify the connection: Glimm schemes are stochastic-gradient-descent-based algorithms, whereas Godunov corresponds to a particular finite-difference-based minimisation method. I guess I could use Boost odeint to describe both resulting numerical schemes? A sketch of what I have in mind is below.
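
Something like this, for instance (just a sketch; the names and the least-squares setting are mine): a deterministic gradient flow versus one that picks a random equation at each evaluation, both usable as odeint systems.

// Sketch: deterministic vs randomly-sampled gradient flow for min 0.5*||Ax - b||^2.
// Only the way the descent direction is assembled differs.
#include <boost/numeric/ublas/matrix.hpp>
#include <boost/numeric/ublas/matrix_proxy.hpp>
#include <boost/numeric/ublas/vector.hpp>
#include <random>

namespace ublas = boost::numeric::ublas;
using state_type = ublas::vector<double>;

// "Godunov-like": deterministic direction, the full gradient
struct FullGradientFlow
{
    ublas::matrix<double> A;
    state_type b;

    void operator()(const state_type& x, state_type& dxdt, double) const
    {
        const state_type residual = ublas::prod(A, x) - b;
        dxdt = -ublas::prod(ublas::trans(A), residual);   // dx/dt = -A^T (A x - b)
    }
};

// "Glimm-like": one randomly selected equation per evaluation, in the spirit of SGD
struct SampledGradientFlow
{
    ublas::matrix<double> A;
    state_type b;
    mutable std::mt19937 gen{42};

    void operator()(const state_type& x, state_type& dxdt, double) const
    {
        std::uniform_int_distribution<std::size_t> pick(0, A.size1() - 1);
        const std::size_t i = pick(gen);                   // random selection
        const auto ai = ublas::row(A, i);
        const double r = ublas::inner_prod(ai, x) - b[i];
        dxdt = -r * ai;                                    // descend along that equation only
    }
};

The deterministic version can be fed to Bode::integrate as in your example; for the sampled one a fixed-step Euler loop is closer to plain SGD, since adaptive error control assumes a smooth right-hand side.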
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

January 8th, 2020, 1:38 pm

A Review on Neural Network Models of Schizophrenia and Autism Spectrum Disorder

https://arxiv.org/pdf/1906.10015.pdf
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

January 13th, 2020, 3:44 pm

Learning guarantees for Stochastic Gradient Descent

In a wide range of problems, SGD is superior to GD.
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

January 14th, 2020, 3:06 pm

Image
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

January 23rd, 2020, 8:13 pm

Despite some evidence for top-down connections in the brain, there does not appear to be a global objective that is optimized by backpropagating error signals. Instead, the biological brain is highly modular and learns predominantly based on local information. 

https://arxiv.org/pdf/1905.11786.pdf

In addition to lacking a natural counterpart, the supervised training of neural networks with end-to-end backpropagation suffers from practical disadvantages as well. Supervised learning requires labeled inputs, which are expensive to obtain. As a result, it is not applicable to the majority of available data, and suffers from a higher risk of overfitting, as the number of parameters required for a deep model often exceeds the number of labeled datapoints at hand. At the same time, end-to-end backpropagation creates a substantial memory overhead in a naïve implementation, as the entire computational graph, including all parameters, activations and gradients, needs to fit in a processing unit’s working memory. Current approaches to prevent this require either the recomputation of intermediate outputs [Salimans and Bulatov, 2017] or expensive reversible layers [Jacobsen et al., 2018]. This inhibits the application of deep learning models to high-dimensional input data that surpass current memory constraints. This problem is perpetuated as end-to-end training does not allow for an exact way of asynchronously optimizing individual layers [Jaderberg et al., 2017]. In a globally optimized network, every layer needs to wait for its predecessors to provide its inputs, as well as for its successors to provide gradients.
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

July 12th, 2020, 1:14 pm

Algorithmic AI Decolonialism

https://arxiv.org/pdf/2007.04068.pdf

Artificial Intelligence (AI) is viewed as amongst the technological advances that will reshape modern societies and their relations

That's also what they said in 1965.
 
User avatar
ISayMoo
Topic Author
Posts: 2332
Joined: September 30th, 2015, 8:30 pm

Re: If you are bored with Deep Networks

July 13th, 2020, 8:55 am

Do you mean colonialism or AI?
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

July 13th, 2020, 9:25 am

I have read half of that paper; it's all gobbledygook. 
“It's a beautiful thing, the destruction of words.”   
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

August 6th, 2020, 12:10 pm

So, AlphaGo is using reinforcement learning. And reinforcement learning works for games; it works for situations where you have a small number of discrete actions, and it works because it requires many, many, many trials to run anything complex. AlphaGo Zero [the latest version of AlphaGo] has played millions of games over the course of a few days or few weeks, which is possibly more than humanity has played at a master level since Go was invented thousands of years ago. This is possible because Go is a very simple environment and you can simulate it at thousands of frames per second on multiple computers. [...] But this doesn’t work in the real world because you cannot run the real world faster than real time.

The only way to get out of this is to have machines that can build, through learning, their own internal models of the world, so they can simulate the world faster than real time. The crucial piece of science and technology we don’t have is how we get machines to build models of the world.
The example I use is when a person learns to drive, they have a model of world that lets them realize that if they get off the road or run into a tree, something bad is going to happen, and it’s not a good idea. We have a good enough model of the whole system that even when we start driving, we know we need to keep the car on the street, and not run off a cliff or into a tree. But if you use a pure reinforcement learning technique, and train a system to drive a car with a simulator, it’s going to have to crash into a tree 40,000 times before it figures out it’s a bad idea. So claiming that somehow just reinforcement learning is going to be the key to intelligence is wrong.
 
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

September 19th, 2020, 11:20 am

Maybe high-level plumbing? Like in the old days of Fortran libraries, system designs and UML. So says Grady Booch.

https://en.wikipedia.org/wiki/Grady_Booch

Better to get away from monolithic ML architectures. History repeats itself.
Design patterns for AI systems might enjoy a renaissance.


Image
 
User avatar
katastrofa
Posts: 7440
Joined: August 16th, 2007, 5:36 am
Location: Alpha Centauri

Re: If you are bored with Deep Networks

September 20th, 2020, 5:42 pm

It's just using weighted sampling, isn't it? I even sense some *-armed bandits in it - not a good idea for high-cost risk problems, imho. Furthermore, from what I understand, they don't look for rare events in the data, but rather in the algorithm's performance (quoted below). I would expect that in general this approach makes the agents even more fallible on the actual rare occasions. After all, the more they train on the same data, the worse they generalise. You can't beat statistics. Or am I missing something?
"The overarching idea is thus to screen out situations that are unlikely to be problematic, and focus evaluation on the most difficult situations. The difficulty arises in identifying these situations – since failures are rare, there is lit-tle signal to drive optimization. To address this problem, we introduce a continuation approach learning a failure probability predictor(AVF), which estimates the probability the agent fails given some initial conditions. The idea is to leverage data from less robust agents, which fail more fre-quently, to provide a stronger learning signal. In our implementation, this also allows the algorithm to reuse data gathered for training the agent, saving time and resources during evaluation."

Having said that, self-driving cars are a cool idea - they will probably work if we build the infrastructure.
Image
I will miss London driving, though :twisted:

It seems that RL is now being hyped up as the way to the ultimate AI ("AGI"), which will solve problems at which humans have been failing... and it will fail for exactly the same reasons: one lockdown evening I played with multi-agent RL magic a bit and wrote a toy model of contagion. A bunch of players were walking around the screen: young guys and old guys, for whom the infection was lethal. They could wash their hands for protection or go to the pub. The model trained pretty fast (if you don't count the catastrophic forgetting when I tried too hard). The outcome was that the young players went straight to the pub, whereas the old players waited at a safe distance for the youth to infect each other and recover or die, and then they safely entered the pub.
Anyway, I'm curious in what form the hype will rebound - for now it has gone quiet because there's an actual problem that needs to be solved: the Covid-19 crisis.

Another reality check of DeepMind's AlphaStar (you won't read about it in Nature):
 
User avatar
Cuchulainn
Posts: 20250
Joined: July 16th, 2004, 7:38 am
Location: 20, 000

Re: If you are bored with Deep Networks

December 18th, 2020, 5:55 pm

AI godfather Geoff Hinton: “Deep learning is going to be able to do everything”
https://itcareersholland.nl/ai-godfathe ... verything/

It's entering the realm of religion; in the After Life?

I pointed out that faith is not a legitimate basis for science. And the fact that renowned scholars are now openly relying on faith to salvage deep learning's promise indicates that we've crossed over from science and into religion. His response - the math says it's possible and the math is unquestionable.