With a bit of guessing (random = uniform in [0,1], and timing = elapsed time [not CPU time]): on my medium-sized PC Maple needs about 10 times more; 90% seems to be spent on computing the exponentials (working with hardware precision, no symbolics intended). Edited: forgot to say it needs ~2 seconds.

Last edited by AVt on August 25th, 2007, 10:00 pm, edited 1 time in total.

Yes, you guessed right on the random numbers. The timings are simply the standard timings for each software system. According to Mathematica, Timing[] "includes only CPU time spent in the Mathematica kernel". I don't have Matlab, so not sure there. My newest desktop is a couple of years old, and I get 2.5 secs for the Mathematica problem. But user tonyd reports the times I posted on a 2.66 GHz Intel PC, and we don't know the rest of his setup. My desktop is a 2.99 GHz Intel PC running Mathematica v5.0 (v6.0.1 is current as of this post). I would like to learn what it takes to improve my setup to get tonyd's timings -- simply v6, or new hardware, or ? -- I just made a similar query to the MM newsgroup.

Last edited by Alan on August 26th, 2007, 10:00 pm, edited 1 time in total.

My timing was for concurrent Maple 11 on Win XP using an Athlon 2.2 GHz & 1 GB memory (so 64-bit is not supported). So MMA does not give the time for symbolics outside the kernel - same as Maple. But it should be small; it is almost pure floating-point computation. Almost: looking into the help and the code, there should be some symbolics involved; it uses a general MatrixFunction(A, F) where F is analytic (so it works for F = sin as well as for exp).

Quote: The MatrixFunction(A) command returns the Matrix obtained by interpolating [lambda, F(lambda)] for each of the eigenvalues lambda of A, including multiplicities. Here the Matrix polynomial is r(lambda) = F(lambda) - p(lambda)*q(lambda), where p is the characteristic polynomial, q is the quotient, and r is the remainder.

The latter only makes sense to me by switching to rationals for the characteristic polynomial. BTW: the determinant of such a beast is quite large, like 10^146, so I have doubts about the reliability of the result, but I forgot the command for numerical stability concerning matrices (i.e. I think there is one).

---
u := RandomTools:-Generate(float(range=0.0 .. 1.0, digits=14, method=uniform), makeproc=true);
M := evalf(RandomMatrix(d, generator=u));
st := time():
MatrixInverse(M).(MatrixExponential(-t*M) - MatrixExponential(-T*M));
time()-st;
---

results in 2 seconds.

- spacemonkey

Tried this on the free Matlab clone Octave:

octave:26> M = rand(300,300);
octave:27> t = 0.25; T = 2;
octave:28> tic; inv(M) * (expm(-t*M) - expm(-T*M)); toc
ans = 3.2403 (seconds)

Not bad when you consider my computer - 1.7 GHz P4, 256 MB. I also tried Maxima, but I couldn't get it to calculate the answer at all. Maxima has a lot of potential, but it seems pretty difficult to use. Still, it beats the hell out of Mathematica on price.

Quote, originally posted by Alan: Can anyone match or beat either of these running times? If so, what's your setup? (Matlab/Mathematica version #, CPU, brand, model, OS, memory)

Matlab
---------------------------
M = rand(300,300);
t=0.25; T=2;
tic; inv(M) * (expm(-t*M) - expm(-T*M)); toc;
Elapsed time is 0.288171 seconds

Mathematica
---------------------------
M = Table[Random[], {i, 1, 300}, {j, 1, 300}];
t = 0.25; T = 2;
Timing[Inverse[M].(MatrixExp[-M t] - MatrixExp[-M T]);]
{0.375 Second, Null}

Hi Alan,
My setup: Windows XP, AMD Opteron P250, 2.39 GHz, 4 GB RAM.
Mathematica v5.0 -> {1.852 Second, Null}
Mathematica v6.0 -> {0.422 Second, Null}
When I ran the Mathematica benchmarks (Help -> About Mathematica -> System Information -> Kernel) it 'rated me' 1.37, the worst being 0.21 and the best being 2.84.
My conclusion: simply v6 [+ Intel-oriented hardware being better than AMD (for Mathematica at least)].
Personal comment: I also tried to do my own MatrixExp. I computed the eigensystem and did the exponentiation with exp(M) = P.exp(D).P^(-1). This should be less efficient than the native MatrixExp, because there are faster algorithms for matrix exponentiation, as noted in their help. Anyway, with v5.0 the home-made exponentiation is faster, which is NOT normal. With v6.0, that is no longer the case.
I'll try to get a Matlab licence.
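As a sketch of the "home-made exponentiation" exp(M) = P.exp(D).P^(-1) described above, here is a minimal NumPy version. This is my own illustration (the function name expm_eig is made up), and it assumes M is diagonalizable, which a random dense matrix almost surely is:

```python
import numpy as np

def expm_eig(A):
    # exp(A) = P diag(exp(lambda_i)) P^(-1), valid when A is diagonalizable.
    w, P = np.linalg.eig(A)
    # P * np.exp(w) scales column j of P by exp(lambda_j), i.e. P @ diag(exp(w)).
    E = (P * np.exp(w)) @ np.linalg.inv(P)
    # For real A the exact result is real; drop the rounding-noise imaginary part.
    return E.real if np.isrealobj(A) else E

# Sanity check: exp(A) and exp(-A) commute, so their product is the identity.
A = np.random.rand(4, 4)
I_approx = expm_eig(A) @ expm_eig(-A)
```

As the post notes, production routines (Matlab's expm, Mathematica's MatrixExp) use different algorithms, in part because the eigendecomposition route is numerically fragile when the eigenvector matrix is ill-conditioned.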

Quote, originally posted by Alan: Can anyone match or beat either of these running times? If so, what's your setup? (Matlab/Mathematica version #, CPU, brand, model, OS, memory)

Matlab
---------------------------
M = rand(300,300);
t=0.25; T=2;
tic; inv(M) * (expm(-t*M) - expm(-T*M)); toc;
Elapsed time is 0.288171 seconds

Mathematica
---------------------------
M = Table[Random[], {i, 1, 300}, {j, 1, 300}];
t = 0.25; T = 2;
Timing[Inverse[M].(MatrixExp[-M t] - MatrixExp[-M T]);]
{0.375 Second, Null}

And interestingly,

>> M = rand(300,300);
>> t=0.25; T=2.;
>> tic; inv(M) * (expm(-t*M) - expm(-T*M)); toc;
Elapsed time is 0.698117 seconds.

which means that with my setup, on this problem, Matlab version 7.2.0.232 is slower than Mathematica v6.0.

Thanks, blondie -- exactly what I was looking for.

Nice! Maxima is also free and pretty nice to work with. Otherwise I would vote for Maple (because I know it fairly well). However, I have noted bugs regarding Legendre functions of the 1st and 2nd kind in both Mathematica and Maple.

Hi. Sometimes you can cut the elapsed time from 100% to 10% with Real-Time Workshop, Real-Time Workshop Embedded Coder, or 'Embedded MATLAB'.

- exneratunrisk

People argue that Mathematica is "esoteric" because it forces you to "structure" (nested functional configurations) - or rather, you can exploit its full power only when you structure. If you think of multi-core high-performance computing, this "weakness" becomes an unprecedented "advantage"? Mathematica's Parallel Computing Toolkit provides symbolic parallelisation techniques. We have applied this: in a few lines we support the distribution of single valuations to cores, with close to linear speed-up (instead of sequential evaluation of "tables of functions" on one core, they are evaluated in parallel on the cores available). The analytics of portfolios in scenarios is well suited to such coarse-grained parallelism. It works in simple Microsoft-based compute cluster set-ups (a mix of PCs with different numbers of cores in a LAN). In a few years we will get hundreds of cores in a "cube of 1 m^3"? And the software?

With regard to Matlab's parallel toolkit: I've been using it in some form or another for over a decade, first at Cornell's Theory Center, where parallel Matlab was first created as a neat hack on its SP2. Definitely interesting, but not ready for prime time. Much later, I started using the parallel Matlab again. Hey, parfor is a great idea: think OpenMP, with Matlab parallelizing your loops. Easy, but not terribly stable. My impression is that it's not ready for prime time - or Windows, or Java, or (...) isn't. Parallelizing is a big deal. It's THE big deal going forward in a world of diminishing CPU speed gains. Did you know that recent versions of Matlab 2007 support multithreaded computation? Look under general preferences. That's where you see the ominous warning under [ ] Enable multithreading: "Note: Upon encountering a fatal condition when multithreading is enabled, MATLAB cannot attempt to return control to the Command Window and exits instead." I'm not sure who's winning the parallel race, but perhaps it's time I looked at Mathematica again.
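As a rough analogue of what parfor does (a Python stand-in, not Matlab; the worker function and pool size here are made up for illustration), a loop whose iterations are independent can be farmed out to a pool while results still come back in loop order:

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Stand-in for one independent loop iteration.
    return i * i

# Serial version:      results = [body(i) for i in range(8)]
# parfor-style version: the pool runs iterations concurrently,
# but map still returns results in iteration order.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(body, range(8)))
```

As with parfor, this only pays off when the iterations are truly independent and each one does enough work to cover the dispatch overhead.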

- Cuchulainn

Quote: Think OpenMP and Matlab parallelizing your loops. Easy. But not terribly stable. My impression is that it's not ready for prime time. Or Windows or Java or (...) isn't.

Windows just doesn't feel right for real number crunching... but it's nice for the desktop.

Quote: Parallelizing is a big deal. It's THE big deal going forward in a world of diminishing CPU speed gains.

It's the only way, perhaps. What do we do with legacy/sequential code? You don't mention MPI! (?)

Last edited by Cuchulainn on November 19th, 2007, 11:00 pm, edited 1 time in total.

"Compatibility means deliberately repeating other people's mistakes."

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl


The original parallel Matlab was MultiMatlab at Cornell. It used .mexrs6k add-ins compiled with MPI to do the dirty work, IIRC. Of course, the syntax for parallel Matlab is really a wrapper for distributed (not shared) computing, so there is data being flipped around and shared. It's really doing something like MPI, not OpenMP. On today's hardware you have the luxury of multiple cores, so something like OpenMP can give you some very easy wins without the burden of non-local data. Non-local data is a problem with MPI, of course, since there are fewer easy wins. Because of communication requirements, the distributed code can easily be slower than the serial code. Also, once you move away from data locality, things get complicated to debug. I tuned my algorithm in terms of flop counts, O(n^3), and calculated an "equivalent" flop effort for flipping one number across the network. Depending on whether that number is 1,000 or 1,000,000, you will have to tune your algorithm differently for optimal performance. It may even make sense not to communicate, but rather to redundantly compute some things locally. And of course, of course, use profiling tools to decide what to parallelize first. A good rule of thumb is that the upper bound on your speedup is the reciprocal of the fraction of the algorithm you leave serialized. With MC you can parallelize pretty much 100%; it's "embarrassingly" parallel. If you have an 80% parallelizable algorithm, then the best you can ever do is 5x faster on a wall clock. And that's an upper bound.
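The rule of thumb in the last paragraph is Amdahl's law. A quick plain-Python sketch (function name mine) reproduces the 5x figure:

```python
def amdahl_speedup(p, n):
    # Upper bound on speedup when a fraction p of the work is
    # parallelizable across n workers; the (1 - p) part stays serial.
    return 1.0 / ((1.0 - p) + p / n)

# 80% parallelizable: even with a huge number of workers the bound
# approaches 1 / (1 - 0.8) = 5x, matching the post.
bound = amdahl_speedup(0.8, 10**9)
```

For embarrassingly parallel Monte Carlo, p is essentially 1 and the bound degenerates to n, which is why MC scales so well.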

Last edited by eredhuin on November 20th, 2007, 11:00 pm, edited 1 time in total.

- Cuchulainn
**Posts:**64397**Joined:****Location:**Drosophila melanogaster-
**Contact:**

Good post, eredhuin.

"Compatibility means deliberately repeating other people's mistakes."

David Wheeler

http://www.datasimfinancial.com

http://www.datasim.nl


Quote, originally posted by Alan: Can anyone match or beat either of these running times? If so, what's your setup? (Matlab/Mathematica version #, CPU, brand, model, OS, memory)

Matlab
---------------------------
M = rand(300,300);
t=0.25; T=2;
tic; inv(M) * (expm(-t*M) - expm(-T*M)); toc;
Elapsed time is 0.288171 seconds

Mathematica
---------------------------
M = Table[Random[], {i, 1, 300}, {j, 1, 300}];
t = 0.25; T = 2;
Timing[Inverse[M].(MatrixExp[-M t] - MatrixExp[-M T]);]
{0.375 Second, Null}

Matlab 7.1: Elapsed time is 0.377443 seconds.
Mathematica 4.2: {6.672 Second, Null}
CPU: P4 3.6, XP, 2 GB