
They must do them heavily in parallel then; otherwise it would take 170 years to render the movie.


Haha, yes, yes they do. You can see a few pictures of Pixar's render farm in [1]. According to [2] (which is where that 11.5-hour figure comes from), they had 12,500 CPU cores for rendering Cars 2.

[1] http://www.slashfilm.com/cool-stuff-a-look-at-pixar-and-luca... [2] http://jalopnik.com/5813587/12500-cpu-cores-were-required-to...


Just for comparison, Weta Digital had 35,000 cores for rendering Avatar.


Well, that still raises the question: how many cores did they use per frame? Did they just render 12,500 frames in parallel?


Yep! Rendering is very parallelizable, thank goodness. And at more than one level: You can assign different regions of a single frame to different processors, and you can also assign different frames to different processors. It's one of those special computing problems that really can be solved by throwing more silicon at it. Which is a real blessing, considering how unworkably slow it would be otherwise.
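The two levels of parallelism described above can be sketched in Python. This is a toy example, not Pixar's actual pipeline: the `shade` function, image size, and tile size are all made up, and threads stand in for farm machines.

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 48, 16  # toy image carved into 16x16 tiles

def shade(x, y):
    # Stand-in for a real per-pixel shading computation.
    return (x * 31 + y * 17) % 256

def render_tile(tile_origin):
    # Each tile depends only on the scene and its own pixels, so tiles
    # can be rendered in any order, on any processor.
    x0, y0 = tile_origin
    return [(x, y, shade(x, y))
            for y in range(y0, min(y0 + TILE, HEIGHT))
            for x in range(x0, min(x0 + TILE, WIDTH))]

tiles = [(x, y) for y in range(0, HEIGHT, TILE)
                for x in range(0, WIDTH, TILE)]
with ThreadPoolExecutor() as pool:
    rendered = list(pool.map(render_tile, tiles))  # tiles in parallel

pixel_count = sum(len(t) for t in rendered)
print(pixel_count)  # 64 * 48 = 3072
```

The same `map` pattern applies one level up: hand whole frames, instead of tiles, to the workers.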


I understand how you would break up an individual frame if you are using the 'local illumination' described in the ancestor post, but if the 'global illumination' has interactions across the entire frame, how is that compatible with parallel processing?

It sounds a bit like the n-body problem, which has parallel approximation algorithms, but nothing terribly straightforward.


In global illumination there are lighting interactions between objects in the scene, but not between pixels. Each pixel is independent of the others, and so can (in theory) be processed in parallel.

In another way of thinking about it, raytracing simulates photons. Photons don't interact with each other, so the problem of simulating photons is massively parallel.
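That independence can be shown with a toy example. The `radiance` function below is a made-up stand-in for a real ray tracer; the point is only that it reads the scene and the pixel coordinates, never another pixel's value, so any evaluation order (or partition across processors) produces the same image.

```python
SCENE = {"light": 1.0, "albedo": 0.5}  # toy scene description

def radiance(scene, px, py):
    # Stand-in: real code would trace a ray through pixel (px, py).
    # Note it depends only on the scene and this pixel, not on others.
    return scene["light"] * scene["albedo"] / (1 + px + py)

serial = [radiance(SCENE, x, y) for y in range(4) for x in range(4)]

# Evaluating the pixels in a completely different order gives an
# identical image, because no pixel reads another pixel's result.
computed = {(x, y): radiance(SCENE, x, y)
            for y in reversed(range(4)) for x in reversed(range(4))}
parallel = [computed[(x, y)] for y in range(4) for x in range(4)]
assert parallel == serial
```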


I'm not an expert on how the GI algorithms are parallelized. First, it's worth noting there are a lot of them, so the strategy probably varies. But here's a guess for a popular one known as photon mapping. With that technique, you bounce around virtual photons, and they contribute to the lighting of each point they hit (to simplify a bit). AFAIK, each photon's path is only a function of the light source, the scene geometry, and the scene materials. I.e. it's not a function of what other photons are doing. So I think you can parallelize individual photons bouncing around. As a final step, you have to fold all the light contributions together, which I believe could in turn be parallelized per polygon.

But I'm just guessing about all of this.
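That guess can be sketched as two phases. Everything here is invented for illustration (the surface list, the 50% energy falloff, the bounce count): phase 1 traces each photon independently, which is what makes it parallelizable, and phase 2 folds the contributions into a per-surface map.

```python
import random
from collections import defaultdict

SURFACES = ["floor", "wall", "ceiling"]  # toy scene geometry

def trace_photon(seed):
    # A photon's path depends only on the light, geometry, and materials
    # (modeled here by a per-photon RNG), never on other photons.
    rng = random.Random(seed)
    hits, energy = [], 1.0
    for _ in range(3):               # up to 3 bounces
        hits.append((rng.choice(SURFACES), energy))
        energy *= 0.5                # half the energy lost per bounce
    return hits

# Phase 1: embarrassingly parallel (shown serially here).
all_hits = [trace_photon(seed) for seed in range(1000)]

# Phase 2: fold contributions together, parallelizable per surface/polygon.
photon_map = defaultdict(float)
for hits in all_hits:
    for surface, energy in hits:
        photon_map[surface] += energy
```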


Generally you're using some kind of sampling algorithm to randomly sample from the light distribution in the scene, so it's easy to calculate N different images of the same scene, and then average them together. If it's not the whole image that's sampled at once, then you can still parallelize the sampling step.
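The sample-and-average idea looks roughly like this (toy numbers throughout; `render_pass` stands in for a real Monte Carlo renderer): each pass is seeded differently and is independent of the others, so the passes can run on different machines and be averaged at the end.

```python
import random

PIXELS, N_PASSES = 16, 8

def render_pass(seed):
    rng = random.Random(seed)
    # Each pixel gets one noisy sample of a true value of 0.5.
    return [0.5 + rng.uniform(-0.1, 0.1) for _ in range(PIXELS)]

# Passes are independent, so this list comprehension could just as well
# be a pool.map across N machines.
passes = [render_pass(seed) for seed in range(N_PASSES)]
image = [sum(p[i] for p in passes) / N_PASSES for i in range(PIXELS)]
```

Averaging more passes drives the noise down, which is why throwing more cores at the problem directly improves either speed or quality.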


I believe the photons are simulated backwards; they emanate from the camera.

So you don't have to simulate the same photons lots of times for different parts of the frame. You just simulate the photons that will eventually end up in a certain part of the image.


Based on the two numbers, 170 years and 11.5 hours, that works out to roughly 129,582 tasks in parallel over 11.5 hours.
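Checking that arithmetic (assuming 365.2425 days per year), the implied task count lands almost exactly on the film's frame count:

```python
HOURS_PER_YEAR = 365.2425 * 24
tasks = 170 * HOURS_PER_YEAR / 11.5   # 170 years of work in 11.5 hours
frames = 90 * 60 * 24                 # a 90-minute film at 24 fps
print(round(tasks), frames)           # ~129582 vs 129600
```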


I don't know for sure, but I would kind of doubt they are rendering all 130,000 frames in parallel. I would think it's more likely that each scene is rendered as soon as it's finished and ready.

There are a lot of other things that go into each scene besides the rendering (storyboarding, animating, lighting, etc.), so it would be silly to wait for the entire film to be done and then render it all at once.

Also, the 11.5 hours is an average: single frames could take up to 80 or 90 hours to render, according to that article.

As for how many they do in parallel, that would be very interesting to find out. My guess is all the frames devoted to a single scene, so around 5,000-10,000 frames at once, which seems reasonable considering they have 12,500 cores.


Other than grading, compositing, and sound, rendering is generally the final step: the modelling, lighting, animation, and texturing definitely have to be done before rendering can start.


Or alternatively, 90 minutes * 60 seconds * 24 fps = 129,600 ;)



