January 28th, 2012, 11:32 am
QuoteOriginally posted by: Alan
I am not saying use GPUs for everything. I am also not saying: don't use multi-cores. If you can distribute a PDE problem across multi-cores, wonderful! I am just saying: for MC, given the existence of GPUs, why bother with MC on multi-cores? Sure, GPU is a niche -- agree. But MC is also a niche, and these two niches are made for each other. Also, because of the 1/sqrt(N) error scaling for MC, 1/sqrt(100) is interesting, while IMO, 1/sqrt(6) simply is not. I agree that the h/w dependence is highly irritating, both in having to buy cards and in worrying so much about the hardware details in the programming. That's why I would like to use the hardware remotely, if possible. Anyway, everybody has their own agenda. Maybe later in the year, I can explore this one and contribute something to the library.

I would hardly consider MC a niche market. The issues are how many stakeholders/users use CPU versus GPU. Another issue is the software design of MC and accurate schemes, as well as the PRNG. Something that tends not to get mentioned is the amount of developer effort needed to get an MC/GPU engine up and running. Is that 1 week, 2, 6, 30 weeks? Anyone?

Quote
I think it's fun to learn about it (developer push instead of client demand), but I also think that the results can be quite surprising; the MC might run 100x faster than without that E250 card. It's very plausible that someone who actually needs MC for production would invest in GPU. For me it's about acquiring new skills: algorithms for large-scale parallel coding.

This will appeal to a very small stakeholder group, but you know that already. GPU is still too proprietary for my taste, and I have no available time at the moment to play with this (hardware) technology. Used to do this kind of h/w stuff a lot before, so it's yet another kid on the block. I am playing the hurler in the ditch.
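
To put a rough number on Alan's 1/sqrt(N) argument, here is a minimal multi-core MC sketch (C++11, std::async; the function name, seeds and Black-Scholes parameters are purely illustrative, not taken from any library):

// Minimal sketch: price a European call under GBM, one MC worker per core.
// Illustrative only; names, seeds and parameters are placeholders.
#include <algorithm>
#include <cmath>
#include <future>
#include <iostream>
#include <random>
#include <thread>
#include <vector>

double mcCallPartial(long nPaths, unsigned seed,
                     double S0, double K, double r, double sigma, double T)
{
    std::mt19937_64 rng(seed);                        // per-worker PRNG stream
    std::normal_distribution<double> Z(0.0, 1.0);
    double sum = 0.0;
    for (long i = 0; i < nPaths; ++i)
    {   // exact one-step GBM terminal value
        double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T
                                  + sigma * std::sqrt(T) * Z(rng));
        sum += std::max(ST - K, 0.0);                 // call payoff
    }
    return std::exp(-r * T) * sum / nPaths;           // discounted partial mean
}

int main()
{
    const long nPathsPerWorker = 1000000;
    const unsigned nWorkers = std::max(1u, std::thread::hardware_concurrency());

    std::vector<std::future<double>> partials;
    for (unsigned w = 0; w < nWorkers; ++w)           // one task per core
        partials.push_back(std::async(std::launch::async, mcCallPartial,
                                      nPathsPerWorker, 12345u + w,
                                      100.0, 100.0, 0.05, 0.2, 1.0));

    double price = 0.0;
    for (auto& f : partials) price += f.get();
    price /= nWorkers;                                // average the partial means

    std::cout << "MC call price on " << nWorkers << " cores: " << price << "\n";
}

The point of the sketch: with P workers you run roughly P times as many paths in the same wall-clock time, so the standard error drops by 1/sqrt(P), i.e. about 10x for a 100-way GPU-style speedup versus about 2.4x for 6 cores. The awkward part in practice, and part of the developer-effort question above, is giving each worker a genuinely independent PRNG stream; the naive seed offset here is only a placeholder.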
Last edited by Cuchulainn on January 27th, 2012, 11:00 pm, edited 1 time in total.