Hi everyone, and sorry for the long delay in my answer! First, let me thank you for all your responses. OK, let me try to answer you all.

Quote: It sounds technically very interesting, have you written papers?

Yes, during my PhD I wrote several papers; you can find them on my university webpage here: http://perso.univ-perp.fr/arnault.ioual ... .php Mainly, in my thesis I focused on expressions in synchronous programs, but we have since extended the approach to other types of languages.

Quote: Are current methods not good enough?

Well, that is a wide question. In some industries, the current method is simply to switch to double or wider precision (but then performance can become an issue). Others change their algorithms, trying new ones with better convergence properties (whenever these actually exist). Others buy well-written libraries, hoping that accuracy issues are indeed resolved there. Finally, some switch to a fully equipped environment like Matlab or Simulink, but then they cannot really know what will happen when the model is implemented in C++, for example.

From my perspective, very few people give the answer we propose, which is: write the code differently so that mathematically it is the same, but for the computer it is more accurate. What we also provide is a methodology to find the right trade-off between accuracy and performance for the programmer's needs.

Quote: I do think that from a startup perspective you're looking at a very narrow business opportunity.

That is one way to put it. On the other hand, there are more and more computations and simulations everywhere, so at some point accuracy and performance could become a hot topic. But yes, it is currently a topic for "initiates"; not many people know about it.

Quote: If I had a financial company that has numerous stability issues due to low-level floating-point truncation then my first reaction would be to check if we can increase the number of bits inside the compiler framework, or else start using an arbitrary-precision library.

Indeed, you could do such a thing, but if your calculations were already slow and intensive, this would make them even slower. I have seen places where, for the performance gain, programmers were happy to use only single-precision numbers, for example.
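A tiny toy example of my own (not output from our tool) to illustrate the "rewrite the expression" idea: computing 1 - cos(x) for small x loses every significant digit to cancellation in IEEE 754 double precision, while the mathematically identical form 2*sin(x/2)^2 keeps full accuracy.

```python
import math

x = 1.0e-8  # a small angle, as appears in many numerical kernels

# Naive form: cos(1e-8) rounds to exactly 1.0 in double precision,
# so the subtraction cancels every significant digit.
naive = 1.0 - math.cos(x)

# Mathematically identical rewriting: no cancellation occurs.
rewritten = 2.0 * math.sin(x / 2.0) ** 2

print(naive)      # 0.0 -- no correct digit left
print(rewritten)  # ~5e-17, close to the true value x**2/2
```

The same behaviour occurs with C++ `double`: both forms are equal as real-valued functions, but only the second one is accurate as a floating-point program.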
So I am not sure this approach would scale very far.

Quote: Maybe look at large scale supercomputer simulations?

That is an interesting take indeed! I will look closer at that, thanks! From a business perspective, we first thought about quants in investment banking; then we were approached by some military contractors about the safety of critical pieces of software. But I do think simulations are also a possibility for us. I was not initially planning on supercomputer-scale simulations, but rather on geological simulations and the like (for oil drilling, for example). Do you know people working on this kind of very large supercomputer simulation?

Quote: I have a problem I encountered; a C++ application that creates a DLL that I call from Excel. Sometimes the app crashed in Excel but why? Is it VBA, C++, Excel, or is the maths wrong? For example, a Newton-Raphson that is not converging, how can I detect it in IEEE 754? I want to trace the variable across process and language boundaries. Or calling log(1.0/0.0) by accident leads to sNaN.

That's interesting; do you still have a way to reproduce such bugs? Concerning Newton-Raphson, one of my associates published an interesting paper showing that even when your algorithm converges, it still loses iterations due to IEEE 754 rounding errors. So improving accuracy can actually make algorithms more convergent. You can look it up here if you're interested: http://perso.univ-perp.fr/mmartel/lopstr15.pdf

For us, any algorithm can be qualified as stable or unstable. If you give me some code, and some data to run it on, then I can tell you which it is.

Quote: Couple broken links (e.g. documentation and pdf links).

Thanks for the heads-up; we launched this version not very long ago and have not yet reviewed every page. I will correct that.

Quote: Numerical accuracy and performance IMO is determined by the quality of the numerical method being used. Does your approach address this or is it more like machine accuracy and reliability?

This is a key question indeed. We focus on the accuracy of the machine, not the accuracy of the method. What we want is to make the software behave the same as the mathematical function it implements. If the maths is broken, I cannot help with that; but even if the maths is correct, that does not mean the software is correct too. That is where we come in.

Quote: Or maybe it has potential relations to communication over noisy channels, error correcting codes and cryptography?

On this one I am not sure. Maybe we could improve DSPs performing calculations in fixed-point arithmetic, but cryptography seems to be more of an "integer world" than a floating-point one.

Quote: Here is an example Crank-Nicolson for BS. The accuracy is O(ds^2 + dt^2) and that's as good as it gets IMO.

Okay, thank you very much for that material. I will let you know how it turns out using our tools. It will probably take me some time, as I do not know much about this algorithm and will have to make it work first.

Quote: Speaking as a numerical analyst, what is interesting is to analyse machine accuracy and robustness of the answer and avoiding NaNs, under/overflow nasties.

Sure, those are the reasons a calculation actually crashes. But is numerical drift not important too? You can end up with a result that has no correct digit at all, without any overflow or underflow at all, for example.
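To make that last point concrete, here is the classic Muller recurrence (a textbook example, not one of our case studies). In exact arithmetic it converges to 6, but in IEEE 754 double precision it drifts to 100 with no overflow, no underflow, and no NaN anywhere along the way, so the final result does not have a single correct digit.

```python
# Muller's recurrence: x(n+1) = 111 - 1130/x(n) + 3000/(x(n)*x(n-1)).
# With x0 = 11/2 and x1 = 61/11 the exact limit is 6; the exact start
# keeps the coefficient of the 100-branch of the general solution at
# zero, but the first rounding error re-activates it and 100 dominates.
prev, cur = 11.0 / 2.0, 61.0 / 11.0
for _ in range(40):
    prev, cur = cur, 111.0 - 1130.0 / cur + 3000.0 / (cur * prev)

print(cur)  # very close to 100.0, nowhere near the true limit 6
```

Every intermediate value stays comfortably inside the double range, so no over/underflow check would ever fire; only an accuracy analysis reveals the problem.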
I am not sure over/underflow can be the only operational risk in the business.

I hope I have not forgotten anyone's question, but there is still much to talk about, I guess.
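One practical aside on the NaN-tracing question above: in C/C++ the usual trick is to enable floating-point traps (e.g. feenableexcept from fenv.h on glibc) so the first invalid operation faults at its source instead of propagating silently into Excel. As a minimal, language-neutral sketch of the same idea (in Python, whose math functions mostly raise rather than return NaN), here is a "tripwire" wrapper of my own devising that names the first non-finite intermediate, which is one cheap way to trace a value across boundaries:

```python
import math

def checked(value, label):
    """Fail fast: raise at the first non-finite intermediate so the
    offending expression is named, instead of crashing much later."""
    if math.isnan(value) or math.isinf(value):
        raise FloatingPointError(f"non-finite value in '{label}': {value!r}")
    return float(value)

# inf - inf is one of the few ways Python silently produces a NaN:
big = float("inf")
try:
    checked(big - big, "accumulator update")
except FloatingPointError as exc:
    print(exc)  # non-finite value in 'accumulator update': nan
```

Wrapping the suspect updates of a Newton-Raphson loop this way pinpoints whether the NaN is born in the C++ DLL or only appears later on the VBA/Excel side.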