- katastrofa
**Posts:** 6387 **Joined:** **Location:** Alpha Centauri

I have a very general question (for the beginning, by an absolute beginner): what are the most common proofs of convergence of numerical algorithms? (I've encountered proofs based on energy minimisation so far.)

That's difficult. In practice you see empirical demonstrations of convergence to *some* value for *some* cases.

Do you/can you know the true value?

In maximum likelihood you can sometimes compute analytical derivatives of the likelihood function given your data, find the values where the derivative is zero, prove that you have found all the zeros, and then conclude that the maximum must be at one of those zeros.
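A minimal sketch of that route, with a toy exponential sample (the data, sample size, and true rate are all made up for illustration): the log-likelihood is [$]l(\lambda) = n\log\lambda - \lambda\sum x_i[$], its derivative has a single zero, and the second derivative is negative there, so that zero is the maximum.

```python
import numpy as np

# Hypothetical data drawn from an exponential distribution (true rate 0.5 is an assumption)
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=10_000)   # scale = 1/rate

# Log-likelihood: l(lam) = n*log(lam) - lam*sum(x).
# Its derivative n/lam - sum(x) has exactly one zero, at lam = n/sum(x),
# and the second derivative -n/lam**2 < 0 there, so that zero is the maximum.
lam_hat = len(x) / x.sum()   # closed-form MLE from the zero of l'
print(lam_hat)               # close to the true rate 0.5
```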


.. and for e.g. solving ODEs numerically you have methods like Euler, Runge-Kutta, .. and for those you have a "region of stability" that tells you what step size to pick and which methods will work.
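For instance, on the standard test equation [$]y' = \lambda y[$] explicit Euler is stable only when [$]|1 + h\lambda| \le 1[$]; a quick sketch (the value of [$]\lambda[$] and the step sizes are illustrative choices):

```python
def euler(f, y0, h, n):
    """Explicit Euler for y' = f(y): take n steps of size h from y0."""
    y = y0
    for _ in range(n):
        y = y + h * f(y)
    return y

lam = -10.0                  # test equation y' = lam*y; exact solution decays to 0
f = lambda y: lam * y

# Stability region of explicit Euler: |1 + h*lam| <= 1, i.e. h <= 0.2 here.
stable   = euler(f, 1.0, 0.15, 200)   # |1 + 0.15*(-10)| = 0.5 -> decays
unstable = euler(f, 1.0, 0.30, 200)   # |1 + 0.30*(-10)| = 2.0 -> blows up
print(stable, unstable)
```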

- Cuchulainn
**Posts:** 57041 **Joined:** **Location:** Amsterdam
**Contact:**

It depends on the problem (my very general answer ::). But in many cases you want to solve a problem [$]F(x) = 0[$] in some Banach space by replacing it by a computable discretized problem [$]G(x^h,h) = 0[$] where [$]h[$] is a parameter. Then all convergence proofs centre around computing the norm of [$]x^h - x[$] as a function of [$]h[$].

In all cases a problem in uncountable space must be posed in countable space and then projected to a finite-dimensional problem. You don't need to know the exact solution (and in most cases it's not feasible).
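When a reference solution happens to be available, the norm [$]\|x^h - x\|[$] can also be measured empirically as [$]h[$] shrinks; a sketch using a central difference as the discretized problem (the test function and step sizes are arbitrary smooth choices):

```python
import numpy as np

# Measure the error of a discretized problem as h -> 0: here the central
# difference (f(x+h) - f(x-h)) / (2h) approximating f'(x), with f = sin.
f, df = np.sin, np.cos
x0 = 1.0
hs = [0.1 / 2**k for k in range(5)]
errs = [abs((f(x0 + h) - f(x0 - h)) / (2 * h) - df(x0)) for h in hs]

# Halving h should divide the error by ~4 for a second-order method,
# so the observed order log2(err[k] / err[k+1]) should approach 2.
orders = [float(np.log2(errs[k] / errs[k + 1])) for k in range(len(errs) - 1)]
print(orders)
```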

The methods for proving convergence are part of numerical analysis:

1. Taylor expansions

2. Polynomial and rational function approximation.

3. Divided differences to approximate derivatives.

4. Iterative processes that depend on an integer parameter [$]n[$], e.g. Newton.

5. etc. etc.
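As a sketch of such an iterative process in the integer parameter [$]n[$], Newton's method for a scalar root (the test function and tolerance are illustrative):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton iteration x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for n in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, n + 1
    raise RuntimeError("no convergence")

# sqrt(2) as the root of x**2 - 2; quadratic convergence roughly
# doubles the number of correct digits at each step.
root, iters = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.0)
print(root, iters)
```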

Classic intros are Dahlquist/Bjorck 1974 and esp. Conte/de Boor 1980. A good 101 case is numerical quadrature or [$]e^A[$].
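For the quadrature 101 case, a sketch of the composite trapezoid rule and its [$]O(h^2)[$] error (the test integrand is an arbitrary smooth choice):

```python
import numpy as np

def trapezoid(f, a, b, n):
    """Composite trapezoid rule on [a, b] with n subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

exact = 1.0 - np.cos(1.0)   # integral of sin over [0, 1]
errs = [abs(trapezoid(np.sin, 0.0, 1.0, n) - exact) for n in (8, 16, 32)]

# The error is O(h^2) for smooth f: doubling n should cut it by ~4.
print(errs[0] / errs[1], errs[1] / errs[2])
```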

Many numerical analysts in academia spend time establishing convergence results for numerical processes.

http://web.mit.edu/10.001/Web/Course_No ... ration.pdf

http://citeseerx.ist.psu.edu/viewdoc/do ... 1&type=pdf

Do you have a specific problem in mind?

Last edited by Cuchulainn on December 11th, 2017, 8:54 am, edited 3 times in total.

- Cuchulainn
**Posts:** 57041 **Joined:** **Location:** Amsterdam
**Contact:**

.. and for PDE/FDM use the famous Peter Lax theorem to prove stability and convergence.

https://people.maths.ox.ac.uk/trefethen/4all.pdf
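As an illustration of the stability half of Lax's equivalence theorem (for a consistent scheme, stability is equivalent to convergence), here is a sketch of FTCS for the heat equation [$]u_t = u_{xx}[$], which is stable iff [$]r = \Delta t/\Delta x^2 \le 1/2[$] (the grid sizes and initial condition are arbitrary choices):

```python
import numpy as np

def ftcs_max(r, nx=50, steps=200):
    """Run FTCS for u_t = u_xx with zero BCs; return max |u| at the end."""
    u = np.sin(np.pi * np.linspace(0.0, 1.0, nx + 1))   # smooth initial data
    for _ in range(steps):
        u[1:-1] = u[1:-1] + r * (u[2:] - 2 * u[1:-1] + u[:-2])
    return float(np.abs(u).max())

print(ftcs_max(0.4))   # r <= 1/2: the solution decays, as the PDE's does
print(ftcs_max(0.6))   # r > 1/2: the highest mode grows, the scheme blows up
```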


- katastrofa
**Posts:** 6387 **Joined:** **Location:** Alpha Centauri

Thanks!

Cuchulainn, I will start from the info and sources you've suggested in your YACHAIC (yet another Cuchulainn helpful and informative comment).

Some time ago I needed to do a simple classification of data for my simulations. People commonly use k-means for this task, but I noticed that the method was not particularly suitable given the character of the data, so I typed in some formulas off the top of my head. Since the new method turned out to be better and performed well in tests, I gave my word that I would write it up, but I didn't realise that doing it properly requires more experience in algorithmics than I've gained during a course at the institute.


- Cuchulainn
**Posts:** 57041 **Joined:** **Location:** Amsterdam
**Contact:**

Any progress/insights?

- Cuchulainn
**Posts:** 57041 **Joined:** **Location:** Amsterdam
**Contact:**

Are you referring to "probabilistic numerics" perchance? (Robbins and Monro)
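For context, Robbins-Monro stochastic approximation finds a root of a function that can only be observed with noise; a minimal sketch (the toy function, noise level, and step schedule [$]a_n = 1/n[$] are all illustrative):

```python
import random

# Robbins-Monro: find x* with E[F(x, noise)] = 0 from noisy evaluations only.
# Toy case: f(x) = x - 2 observed with additive Gaussian noise; root x* = 2.
random.seed(0)
x = 0.0
for n in range(1, 20_001):
    noisy = (x - 2.0) + random.gauss(0.0, 1.0)   # noisy evaluation of f(x)
    # Steps a_n = 1/n: sum(a_n) diverges, sum(a_n**2) converges,
    # the classical conditions for almost-sure convergence.
    x -= noisy / n
print(x)   # close to the root 2
```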

Well explained, great paper.
