

"Was just wondering if there are any plans to implement GPU rendering and particle saving?"

What improvements over the current system would you expect from GPU rendering and particle saving?

The current limitation of GPU rendering is the amount of memory you can fill up with particles. Krakatoa requires ALL particles to be present in memory to perform sorting, lighting and drawing. With a typical 1 GB card today, the number of particles one could process would be somewhere between 15 and 40 megapoints. But the portions of the code that the GPU could potentially make faster are already pretty fast on the CPU.
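To put the "15 to 40 megapoints" range in perspective, here is a rough back-of-envelope calculation. The two per-particle channel layouts below are illustrative assumptions (the real channel set depends on what the scene and the lighting need), but they bracket the range:

```python
# Back-of-envelope particle counts for 1 GB of GPU memory.
# The channel layouts are illustrative assumptions, not Krakatoa's
# actual internal storage format.

GPU_MEMORY_BYTES = 1 * 1024**3  # a typical 1 GB card

layouts = {
    # "heavy": every channel stored as 32-bit floats (sizes in bytes)
    "heavy": {"Position": 12, "Velocity": 12, "Normal": 12,
              "Color": 12, "Density": 4, "Lighting": 12},
    # "light": Position kept in float32, the rest in float16
    "light": {"Position": 12, "Color": 6, "Density": 2, "Lighting": 6},
}

for name, channels in layouts.items():
    bytes_per_particle = sum(channels.values())
    megapoints = GPU_MEMORY_BYTES / bytes_per_particle / 1e6
    print(f"{name}: {bytes_per_particle} bytes/particle -> ~{megapoints:.0f} megapoints")

# heavy: 64 bytes/particle -> ~17 megapoints
# light: 26 bytes/particle -> ~41 megapoints
```

Either way, the whole cloud has to fit in card memory at once, so memory, not compute, is the first wall you hit.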


For example, drawing particles as points is currently single-threaded. The portions that are slow on the CPU wouldn't get much faster on the GPU. Making it multi-threaded on the CPU or GPU is quite a challenge because the process itself is not well suited for parallelization. If we could multi-thread it, we would rather go with 8 cores and 16 or 32 GB RAM instead of hundreds of cores and only 1 GB RAM (one possible order-preserving scheme, and its memory cost, is sketched at the end of this reply).

Running 3ds Max materials and maps on the GPU might be a problem, since the Max materials we support are currently not designed to run on the GPU (Autodesk and mental images are doing something in that direction, but MetaSL might not be the right answer for Krakatoa). The KCMs run like hell on the CPU (the overhead of typical KCMs is rather negligible). Making KCMs execute on the GPU would probably make them a bit faster, but since that is already one of the fastest portions of the system, the effort wouldn't pay off much.

Saving particles depends mostly on the speed of the ZIP library. We could theoretically speed it up by writing multiple streams to the same file to make better use of the available cores, but the GPU wouldn't change anything in this case.
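A minimal sketch of that multi-stream idea, assuming a hypothetical chunked container (this is not the PRT layout): compress blocks of raw particle data on several cores, then write the compressed streams back to back with a small index so each one can be located and inflated independently.

```python
# Sketch only: parallel compression of particle data into one file as
# multiple independent zlib streams. The header/index layout here is a
# made-up example, not an actual Krakatoa file format.
import struct
import zlib
from concurrent.futures import ThreadPoolExecutor

def _compress_chunk(raw_bytes):
    # zlib releases the GIL while deflating, so plain threads can keep
    # several cores busy on the compression itself.
    return zlib.compress(raw_bytes, 6)

def save_particle_chunks(path, raw_chunks, workers=8):
    """raw_chunks: list of bytes objects, each one a block of uncompressed
    particle records (splitting the particle array into blocks is assumed
    to happen elsewhere)."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        streams = list(pool.map(_compress_chunk, raw_chunks))
    with open(path, "wb") as f:
        # Tiny index: stream count, then each stream's compressed size,
        # so a reader can seek to and inflate any stream on its own.
        f.write(struct.pack("<I", len(streams)))
        for s in streams:
            f.write(struct.pack("<Q", len(s)))
        for s in streams:
            f.write(s)
```

A side benefit of independent streams would be that a reader could decompress them in parallel as well.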
One area where GPU rendering could be interesting would be fast previews of fractions of the particle cloud in a real-time environment (like the nVidia smoke fluid simulation demos). This would probably require an external viewer or waiting for XBR to deliver the new Max viewports. In both cases, I am not sure the R&D effort would be worth it.

Keep in mind that Krakatoa is being developed mainly to serve our internal VFX needs. It was not designed primarily as a commercial product; that was a side effect, and we are happy people are embracing it. But in order to get Krakatoa on the GPU, we would first need new graphics cards in our workstations (read: $$$), time to learn CUDA or DirectCompute or whatever (time=$$$), and people to do the port (people=more $$$).
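As promised above, here is one possible order-preserving drawing scheme, purely as an illustration and not how Krakatoa's renderer works: each thread splats a contiguous slab of the depth-sorted particle list into its own premultiplied-RGBA buffer, and the slab buffers are then "over"-composited back to front. The catch is that every extra thread costs another full floating-point framebuffer, which is exactly the cores-versus-RAM trade-off mentioned earlier.

```python
# Illustration only -- not Krakatoa's renderer. Python threads won't run
# the splat loop in parallel (GIL); native code would, but the structure
# and the per-thread framebuffer cost are the point here.
import numpy as np
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 1920, 1080

def draw_slab(particles):
    """particles: rows of (x, y, r, g, b, a) in screen space,
    sorted back to front."""
    buf = np.zeros((HEIGHT, WIDTH, 4), dtype=np.float32)  # premultiplied RGBA
    for x, y, r, g, b, a in particles:
        px, py = int(x), int(y)
        src = np.array([r * a, g * a, b * a, a], dtype=np.float32)
        buf[py, px] = src + (1.0 - a) * buf[py, px]  # nearer over farther
    return buf

def composite(slab_buffers):
    """slab_buffers ordered back to front; 'over' is associative, so
    compositing whole slabs in order gives the same image as one
    sequential pass over the sorted list."""
    out = np.zeros((HEIGHT, WIDTH, 4), dtype=np.float32)
    for buf in slab_buffers:
        out = buf + (1.0 - buf[..., 3:4]) * out  # slab over the result so far
    return out

def render_points(sorted_particles, threads=8):
    # Each slab is a contiguous, depth-ordered piece of the list, and
    # each worker needs its own full framebuffer -- the memory cost.
    slabs = np.array_split(sorted_particles, threads)
    with ThreadPoolExecutor(max_workers=threads) as pool:
        return composite(list(pool.map(draw_slab, slabs)))
```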