June 26th, 2014, 10:30 pm
> Originally posted by: Cuchulainn
>
> > Originally posted by: outrun
> >
> > > Originally posted by: Cuchulainn
> > >
> > > > Originally posted by: outrun
> > > >
> > > > and I'm sure there is need for complex<double, long> somewhere
> > >
> > > and Matrix<complex<double>, complex<long> > and complex<Matrix<double>, Matrix<long> > could both be relevant for their different memory layouts.
> >
> > Indeed. The polar form for complex numbers is complex<double, Degree>, and in 3D we have curvilinear coordinates Point<T1, T2, T3>, e.g. Point<double, Degree, Degree>, etc.
> >
> > Also:
> > Polynomial<Matrix>, e.g. the Cayley-Hamilton theorem
> > Matrix<Polynomial>, aka matrix polynomials
> >
> > I wonder if anyone has developed efficient data structures for these in C++?
>
> "Efficient" could be defined in the context of the algorithms that act on them, ... and that could be expressed as performance measures of those algorithms (speed, memory usage, parallel-friendliness).

Time: how long the algorithm takes to complete.

Space: how much working memory (typically RAM) the algorithm needs. This has two aspects: the memory needed by the code, and the memory needed for the data on which the code operates.

What about bandwidth? The time and space performance dimensions of an algorithm are linked to the processing speed and memory capacity dimensions of the platform. But the platform also has an intrinsic bandwidth between memory and CPU, and different algorithms may call for different levels of utilization of that bandwidth. To a first approximation, an algorithm's bandwidth requirement is lower-bounded by its space requirement (the algorithm presumably reads/writes each data location at least once) and upper-bounded by its time requirement (an algorithm may spend all of its time in memory operations).
Yet this interval can be quite large, and different platforms offer different bandwidth levels (which is especially salient if we run the algorithm on big data, where memory access may go to secondary storage over IP channels).