Are all ill-posed numerical processes caused by floating-point finite precision?
Hmmmm... It depends on how you define ill-posed.
1. Some iterative methods can produce gross inaccuracies due to poorly chosen stopping conditions. Just because the most recent terms fall below some tolerance does not imply the accumulated result is within that tolerance of the true answer.
2. Measurement inaccuracy (as distinct from measurement precision) can certainly propagate poorly. Even if one uses infinite-precision representations of the numbers, noise in those numbers can be amplified by an ill-conditioned computation, so the output error dwarfs the input error.
3. Processes linked to empirical data can be ill-posed if the historical sample is too small to resolve the quantity being estimated (e.g., predicting 1-in-a-million occurrences from only 1,000 data points).
4. Finally, there's the issue of model error, especially for models that contain singularities or restricted ranges. Is the log-normal model "ill-posed" for interest rates that may be negative?
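Point 1 can be sketched with a classic cautionary example (my own illustration, not from the original question): a naive "stop when the latest term is tiny" rule happily declares convergence for the harmonic series, which diverges.

```python
# A naive stopping rule on the harmonic series: stop when the current
# term drops below a tolerance. The loop terminates, but only because
# the term got small -- the partial sums grow without bound as tol -> 0.
tol = 1e-6
total, k = 0.0, 1
while 1.0 / k >= tol:
    total += 1.0 / k
    k += 1
# total is roughly ln(1/tol) + 0.5772 (Euler-Mascheroni constant),
# and shrinking tol makes it arbitrarily large: small terms said
# nothing about nearness to a (nonexistent) limit.
```

Shrink `tol` by a factor of 10 and `total` grows by roughly `ln(10)`, so no tolerance on the term size ever bounds the error of the "answer".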
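Point 2 can be made concrete with exact rational arithmetic (the specific numbers below are invented for illustration): even with infinite precision, subtracting two nearly equal noisy measurements amplifies the inputs' relative error enormously.

```python
from fractions import Fraction

# Two true values that differ only in the 7th significant digit,
# measured with a small absolute error. All arithmetic is exact.
true_a = Fraction(1000001, 1000)
true_b = Fraction(1000000, 1000)
noise = Fraction(1, 10000)            # absolute measurement error of 1e-4
meas_a, meas_b = true_a + noise, true_b - noise

true_diff = true_a - true_b           # exactly 1/1000
meas_diff = meas_a - meas_b           # exactly 12/10000

rel_err_input = noise / true_a                            # ~1e-7 per input
rel_err_output = abs(meas_diff - true_diff) / true_diff   # exactly 1/5, i.e. 20%
```

The representation was perfect; the 20% output error comes entirely from the conditioning of the subtraction acting on the measurement noise.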
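For point 3, a one-line calculation shows why 1,000 data points cannot see a 1-in-a-million event: the sample almost certainly contains zero occurrences, so the naive empirical estimate is exactly zero.

```python
# Probability that a 1-in-a-million event appears zero times
# in a sample of 1,000 independent draws.
p_true = 1e-6
n = 1000
p_no_hits = (1 - p_true) ** n   # ~0.999

# With ~99.9% probability the empirical frequency is 0/1000 = 0,
# declaring the event impossible. The data simply cannot resolve
# probabilities far below 1/n.
```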
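And for point 4, the log-normal breakdown is mechanical: the model lives on log(r), so it assigns zero probability to any negative rate, and a calibration step fails outright the moment the data contains one. (The rates below are hypothetical.)

```python
import math

# Hypothetical observed interest rates; one is negative, as short
# rates in several markets have been in practice.
rates = [0.021, 0.015, -0.004, 0.012]

try:
    log_rates = [math.log(r) for r in rates]
except ValueError:
    # math.log(-0.004) raises ValueError: the log-normal model has
    # no way to represent this observation at all.
    log_rates = None
```

This isn't a precision problem; it's a restricted-range model applied to data outside that range.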