Is physics "better" than maths in this context?
Some believe that PINNs (physics informed NNs) are the new wave.
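For anyone new to the buzzword, a PINN is just a network u_theta whose training loss penalises the residual of the governing PDE (plus boundary/data terms) at sampled collocation points; schematically, and only as a paraphrase of the usual formulation:

L(\theta) = \frac{1}{N_r}\sum_{i=1}^{N_r}\big|\mathcal{N}[u_\theta](x_i)\big|^2 + \frac{1}{N_b}\sum_{j=1}^{N_b}\big|\mathcal{B}[u_\theta](x_j)\big|^2

In other words, the "physics" enters as a soft penalty in an otherwise ordinary least-squares fit.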
AI scientists love claiming that they invented this. More word inflation. BTW, IHTLA. EBMs are 20 years old: https://cs.nyu.edu/~yann/research/ebm/
#include <cmath>
#include <iomanip>
#include <iostream>
#include <vector>
#include <boost/numeric/odeint.hpp>
using value_type = double;
using state_type = std::vector<value_type>;
struct FiaccoOde
{ // Boost C++ odeint library used
FiaccoOde() {}
void operator()(const state_type& input, state_type& dxdt, value_type t)
{ // ODE based on minimisation problem
value_type x = input[0]; value_type y = input[1]; value_type z = input[2];
value_type lambda1 = input[3]; value_type lambda2 = input[4];
// Transform t in (0, infinity) to (0, 1); the 1/(1 - t)^2 factor below is the Jacobian of that change of variable
double tmp = (1.0 - t);
auto g1 = [](double x, double y, double z) {return x - y * y - z * z;};
auto g2 = [](double x, double y, double z) {return x* x + y * y - 1.0;};
// Compute gradients of f = abs(x-2) + abs(y-2)
double fx = 1.0; double fy = 1.0; double fz = 0.0;
if (x < 2.0) fx = -1.0; if (y < 2.0) fy = -1.0;
// Exact gradient of constraint functions
// g1(x,y,z) = x - y^2 - z^2 = 0 // z is slack variable
double g1x = 1.0; double g1y = -2.0 * y; double g1z = -2.0 * z;
// g2(x, y, z) = x ^ 2 + y ^ 2 - 1 = 0
double g2x = 2.0 * x; double g2y = 2.0 * y; double g2z = 0.0;
// Equation (12) of Barr and Platt; notice the signs
dxdt[0] = -(fx + lambda1 * g1x + lambda2 * g2x) / (tmp * tmp);
dxdt[1] = -(fy + lambda1 * g1y + lambda2 * g2y) / (tmp * tmp);
dxdt[2] = -(fz + lambda1 * g1z + lambda2 * g2z) / (tmp * tmp);
dxdt[3] = g1(x, y, z) / (tmp * tmp);
dxdt[4] = g2(x, y, z) / (tmp * tmp);
}
};
value_type f(value_type x, value_type y)
{ // f() is not a function of the slack variable z
return (x - 2) * (x - 2) + (y - 2) * (y - 2);
}
int main()
{
namespace Bode = boost::numeric::odeint;
// Integration range in the transformed time variable and the initial step size
value_type L = 0.0;
value_type T = 0.9995;
value_type dt = 0.1051001001;
FiaccoOde ode;
// 5-d space: x,y,z and 2 Lagrange multipliers
state_type x = { 1,1,1,1,1 }; // initial state: x, y, z and the two Lagrange multipliers
std::size_t steps = Bode::integrate(ode, x, L, T, dt);
std::cout << "Number of steps Cash Karp 54: " << std::setprecision(16) << steps
<< "\napproximate: " << x[0] << ", " << x[1] << ", " << x[2] << '\n';
std::cout << "Function value: " << std::setprecision(8) << f(x[0], x[1]) << '\n';
x = { 1,1,1,1,1 };
Bode::bulirsch_stoer<state_type, value_type> bsStepper; // O(variable), controlled stepper
steps = Bode::integrate_const(bsStepper, ode, x, L, T, dt);
std::cout << "Number of steps Bulirsch-Stoer: " << steps << ", exact: " << ", approximate: " <<
x[0] << ", " << x[1] << "," << x[2] << ", " << x[3] << ", " << x[4] << '\n';
std::cout << "Function value: " << std::setprecision(8) << f(x[0], x[1]) << '\n';
}
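For readers without the Barr and Platt paper to hand: as far as I can tell from the code itself, the right-hand sides implement the differential-multiplier (gradient-flow) system

\frac{dx}{dt} = -\Big(\nabla f(x) + \lambda_1 \nabla g_1(x) + \lambda_2 \nabla g_2(x)\Big), \qquad \frac{d\lambda_i}{dt} = g_i(x),

whose stationary points satisfy the first-order conditions \nabla f + \sum_i \lambda_i \nabla g_i = 0 and g_i = 0. The division by (1 - t)^2 is the Jacobian of the substitution s = t/(1+t), which maps t in (0, infinity) onto s in (0, 1); that is why the integration runs to T = 0.9995 rather than out to a large time horizon.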
Look, if I argue with you, I must take a contrary position. You got me fooled here! For a moment I thought you were actually interested in the subject.
Lagrange, Hamilton et al did this stuff for breakfast.
AI Rulez: every so often, come up with a new buzzword.
Not necessarily. I could be arguing in my spare time. Full-on contrarianism is boring. So predictable.
I'm sorry, but I'm not allowed to argue anymore. Still boring.
So is mine. Due to Covid I developed a minimal epidemic model which, of course, doesn't grow exponentially, i.e. it doesn't break the laws of maths. My maths background is as a modeller.
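The post doesn't say which model, but for concreteness here is a minimal sketch, assuming a logistic-type growth curve (the simplest thing that saturates rather than growing exponentially) and reusing the odeint machinery from above; the rate r, capacity K and initial value are illustrative numbers of mine, not the author's:

#include <iostream>
#include <vector>
#include <boost/numeric/odeint.hpp>
int main()
{ // Logistic growth dI/dt = r * I * (1 - I/K): saturates at K, no exponential blow-up
    namespace Bode = boost::numeric::odeint;
    using state_type = std::vector<double>;
    const double r = 0.25;   // growth rate (assumed, for illustration only)
    const double K = 1.0e6;  // carrying capacity (assumed)
    auto logistic = [=](const state_type& I, state_type& dIdt, double) {
        dIdt[0] = r * I[0] * (1.0 - I[0] / K);
    };
    state_type I = { 10.0 };  // initial count (assumed)
    Bode::integrate(logistic, I, 0.0, 120.0, 0.1);
    std::cout << "I(120) = " << I[0] << '\n';
}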
I found those complaints informative and had a think. I asked myself "What about affine returns?" And obviously I've complained about the accuracy of these models as well.
Have you ever seen the ugly swan that is the logit-normal distribution? With affine returns I get a mixture distribution that is the sum of the log-normal and what looks like a scaled logit-normal. It certainly appears to be fat tailed. Click here for a slide show of growth curves fitted to the noise of S&P 500 stocks and a link to the paper.

What maths do we see? Martingales, Monte Carlo simulations, and the same linear diffusion equation, with the occasional trivial integral equation. Ad nauseam.
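For reference, the ugly swan in question: if X ~ N(\mu, \sigma^2) and Y = 1/(1 + e^{-X}), then Y is logit-normal on (0, 1) with density

f_Y(y) = \frac{1}{\sigma\sqrt{2\pi}} \, \frac{1}{y(1-y)} \, \exp\!\left(-\frac{(\operatorname{logit}(y) - \mu)^2}{2\sigma^2}\right), \qquad \operatorname{logit}(y) = \ln\frac{y}{1-y}.

Its moments have no closed form and the density can be bimodal for large \sigma, which is presumably what makes it such an awkward building block.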