I have a very general question (for the beginning, by an absolute beginner): what are the most common proofs of convergence of numerical algorithms? (I've encountered proofs based on energy minimisation so far.)
It depends on the problem (my very general answer :)). But in many cases you want to solve a problem [$]F(x) = 0[$] in some Banach space by replacing it with a computable discretized problem [$]G(x^h, h) = 0[$], where [$]h[$] is a discretization parameter. Convergence proofs then centre on estimating the norm of [$]x^h - x[$] as a function of [$]h[$].
In all cases a problem posed in an infinite-dimensional space must be reduced to a finite-dimensional one that a computer can handle. You don't need to know the exact solution to prove convergence (and in most cases computing it isn't feasible anyway).
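To make the [$]x^h - x[$] estimate concrete, here is a minimal sketch (my example, not from any particular textbook): a standard second-order finite-difference discretization of the boundary value problem [$]u'' = -\pi^2 \sin(\pi x)[$], [$]u(0)=u(1)=0[$], whose exact solution [$]u = \sin(\pi x)[$] is known, so we can watch the error shrink like [$]h^2[$] as [$]h \to 0[$]:

```python
import numpy as np

def solve_bvp(n):
    """Solve u'' = -pi^2 sin(pi x), u(0)=u(1)=0, on n interior grid points.

    Returns the grid spacing h and the max-norm error against the
    exact solution u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Tridiagonal second-difference matrix: (1/h^2) * tridiag(1, -2, 1)
    A = (np.diag(-2.0 * np.ones(n))
         + np.diag(np.ones(n - 1), 1)
         + np.diag(np.ones(n - 1), -1)) / h**2
    f = -np.pi**2 * np.sin(np.pi * x)
    u_h = np.linalg.solve(A, f)          # the discretized problem G(x^h, h) = 0
    u_exact = np.sin(np.pi * x)
    return h, np.max(np.abs(u_h - u_exact))

for n in (9, 19, 39, 79):
    h, err = solve_bvp(n)
    print(f"h = {h:.4f}   max error = {err:.2e}")
```

Halving [$]h[$] cuts the error by roughly a factor of 4, i.e. [$]\|u^h - u\|_\infty = O(h^2)[$], which is exactly the kind of statement a convergence proof establishes (typically via Taylor expansion of the truncation error).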
The methods for proving convergence are part of numerical analysis; common tools include:
1. Taylor expansions.
2. Polynomial and rational function approximation.
3. Divided differences to approximate derivatives.
4. Analysis of iterative processes that depend on an integer parameter [$]n[$], e.g. Newton's method.
5. Etc., etc.
Classic introductions are Dahlquist/Björck 1974 and especially Conte/de Boor 1980. A good first case study is numerical quadrature or computing [$]e^A[$].
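Taking up the quadrature suggestion, here is a hedged sketch (again my example, not from either book) of the composite trapezoid rule on [$]\int_0^1 e^x\,dx = e - 1[$]; its well-known error bound [$]|E| \le \frac{(b-a)h^2}{12}\max|f''|[$] predicts the observed second-order convergence:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals of width h = (b-a)/n."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

exact = math.e - 1.0
for n in (8, 16, 32):
    err = abs(trapezoid(math.exp, 0.0, 1.0, n) - exact)
    print(f"n = {n:3d}   error = {err:.3e}")
```

Doubling [$]n[$] (halving [$]h[$]) reduces the error by about a factor of 4, confirming [$]O(h^2)[$] convergence.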
Many academic numerical analysts spend much of their time establishing convergence results for numerical processes.
http://web.mit.edu/10.001/Web/Course_No ... ration.pdf
http://citeseerx.ist.psu.edu/viewdoc/do ... 1&type=pdf
Do you have a specific problem in mind?