February 10th, 2006, 2:46 pm
Hi all,

Thanks for the interest and comments so far. Let me reply to Russel's points first.

1. The .Net 2.0 requirement can be a problem, certainly in the short term. You should find that versions 2.0 and 1.1 coexist quite happily on a machine, but an application like Excel can only load one version at a time. So there is a problem if you are also using add-ins that require 1.1. By default Excel will still load version 1.1 of the runtime if it is present, and the ExcelDna add-ins will not work. You need some fiddling to ensure that 2.0 is loaded, as explained in the Getting Started guide (a sketch of the general mechanism is below).

2. Which brings me to your second point. I made peace with the 2.0 requirement because of the much improved interop performance. Performance via ExcelDna is excellent, and compares favourably with pure C/C++ code using the .xll interface. Code from the text files is compiled behind the scenes, and then works like a normal assembly. I use reflection to hook up the functions initially, but the functions are JIT compiled and the call goes through a native (.Net generated) stub that does the native-to-runtime transition. If I recall correctly, the performance overhead per call from Excel to a UDF is in the region of 60 instructions. This is noticeable for really simple functions - checking the 'add two numbers' function is a great test as it represents the worst case (I sketch such a function below). So there is some inevitable transition cost. Once on the .Net side of the function call, you should find the performance to be outstanding, including for numerical routines. The JIT compiler generates very good code, and can optimise for the particular processor at runtime. I suspect you need the Intel optimising compilers to do much better. (You should also be careful when comparing with VBA performance, especially for UDFs, which are slow when recalculation is triggered from the sheet, but pretty fast when recalculation is triggered from within VBA.)

3. As to the concern about using the library in some institutions, I hope that by making the software open source with few strings attached, institutions of the type you describe can look at the library source and convince themselves that everything is OK, or even develop their own 'in-house' version if need be.

Mark asks what the advantages are over creating automation add-ins. Well, the Excel version issue is a big one for me: I need to support Excel 2000, which does not support automation add-ins. But mostly, automation add-ins seem like a distant cousin to Excel - integration into the function wizard is poor, and the whole story is basically undocumented. Versioning, installation and registration of automation add-ins are tricky. Also, I could figure out how to do the dynamic code generation and hook-up through the .xll, whereas for automation add-ins you need a class and interface to be on disk and registered before they can be hooked up into Excel. (Maybe I've just had enough of COM for a while.) ExcelDna uses the well documented, supported and widely used (albeit not often fixed) .xll interface, which will also be updated for Excel 12, and glues into Excel in the right way. The downside: making .xll add-ins in managed code is discouraged and unsupported by Microsoft. However, I think it can be made to work very well.

A very useful contribution to the project would be a set of benchmarks of raw C/C++ vs. ExcelDna vs. automation add-ins vs. VBA, for some different kinds of calculations and caller scenarios. I think many people here will be surprised by the results.
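As a rough sketch of the usual mechanism for point 1 (not a substitute for the Getting Started guide, and the details there take precedence): you can place a config file next to Excel.exe that lists the runtime versions in order of preference, so that whichever add-in loads the CLR first pulls in 2.0 rather than 1.1. Something like:

<?xml version="1.0"?>
<!-- Excel.exe.config, placed in the same directory as Excel.exe.
     Lists the runtimes in order of preference, so that .Net 2.0 is
     loaded when the first add-in brings the CLR into the process. -->
<configuration>
  <startup>
    <supportedRuntime version="v2.0.50727" />
    <supportedRuntime version="v1.1.4322" />
  </startup>
</configuration>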
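To make the 'add two numbers' test concrete: under ExcelDna such a worst-case UDF is nothing more than a public static method in the add-in's text file (the names here are just for illustration), roughly:

// Compiled behind the scenes by ExcelDna and exposed to Excel as a
// worksheet function, callable from a cell as =AddTwo(1,2).
public class TestFunctions
{
    // Deliberately trivial: the work is negligible, so the per-call
    // native-to-runtime transition dominates - the worst case for
    // measuring the call overhead.
    public static double AddTwo(double x, double y)
    {
        return x + y;
    }
}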
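And for those wondering about the reflection remark: reflection is only used once, at load time, to find the functions and register them with Excel through the .xll's registration call (xlfRegister); the calls themselves then go through the JIT-compiled stub. Very roughly, and simplified well beyond the actual add-in code:

using System;
using System.Reflection;

class Loader
{
    // Simplified sketch: scan the freshly compiled assembly for public
    // static methods, each of which would then be registered with Excel
    // (via xlfRegister in the real .xll add-in).
    static void RegisterExports(Assembly compiled)
    {
        foreach (Type type in compiled.GetTypes())
        {
            foreach (MethodInfo method in
                     type.GetMethods(BindingFlags.Public | BindingFlags.Static))
            {
                Console.WriteLine("Would register: " + method.Name);
            }
        }
    }
}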
Why do I think it is compelling? C# is much, much nicer than C++, VB.NET is nicer than VBA, and new .Net languages (think F#) and libraries are fun to play with.

Regards,
Govert