
For a bit of background: The article talks a lot about global illumination. Here's what that means.

First, you have to understand the cheaper alternative, which is called local illumination. With local illumination, for each pixel, you figure out what object you're looking at, and where on that object. You take into account the normal (direction of the surface at that point) and the optical properties of the object at that point. You also take into account the position, intensity, color, etc. of any light sources in the scene. Optionally, you may also take into account any shadow casting. That's it.
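To make that concrete, here's a toy Python sketch of local-illumination shading for a single point, using only a Lambert diffuse term. All the names and values are illustrative, not any real renderer's API - the point is that nothing but the point itself, its material, and the lights ever enters the calculation:

```python
# Toy local-illumination shading of one surface point (Lambert diffuse only).
# Everything here is illustrative, not a real renderer's API.

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def shade_point(point, normal, albedo, lights):
    """The color at `point` depends only on its normal, its material, and the
    light sources - no other surface in the scene is ever consulted."""
    n = normalize(normal)
    color = [0.0, 0.0, 0.0]
    for light_pos, light_color in lights:
        to_light = normalize(tuple(l - p for l, p in zip(light_pos, point)))
        # Lambert's cosine law: facing the light -> bright, edge-on -> dark.
        cos_theta = max(0.0, sum(a * b for a, b in zip(n, to_light)))
        for i in range(3):
            color[i] += albedo[i] * light_color[i] * cos_theta
    return tuple(color)

# A reddish point facing straight up, lit by a white light directly overhead:
c = shade_point((0, 0, 0), (0, 1, 0), (0.8, 0.2, 0.2), [((0, 10, 0), (1, 1, 1))])
```

Shadow casting would add a visibility test per light, but there's still no bounced light from other surfaces anywhere in this function.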

What's missing from that list? It's a big one: You're not taking into account the way other objects in the scene affect that little point. In the real world, light bounces all around. Each little point is affected by pretty much each other little point. All the points are interdependent.

But with local illumination, you ignore the way other surfaces contribute to the point's illumination. You're just looking at that one point and the light sources. That's why it's called local.

Global illumination, by contrast, does take into account the interplay between different points in the scene. Its main purpose is to simulate light bouncing between polygons.

As you can imagine, managing the complexity of all those interactions is a tall order. We have quite a few algorithms for this; all are approximations. It's worth noting that some of these approximations can converge towards a provably physically correct result if you let them run long enough.

In any case, running global illumination often causes a major increase in rendering time. So it's understandable that Pixar, which has to render a huge number of frames at huge resolutions, did not traditionally use it much.




There's also another factor at play, which is directability. Physical correctness is not usually a priority except as far as it advances the artistic goals of the people making the movie. If the director says, "can you make the right side of that table look less red?", you need to have some way for the artist to achieve that goal, even if that's not how the scene would "really" look. I expect that the development of new tools and processes to allow precise manipulation of the lighting in globally illuminated scenes was just as much, if not more, of a barrier than the additional cost in rendering time.


For an interesting parallel, this is analogous to my experience with emergent gameplay when I was in the game industry. Everyone really likes the idea of emergent gameplay and the open-ended-ness and flexibility that gives you. But you sacrifice a lot of control when you go that way. This can leave game designers and producers feeling like their hands are tied when the game doesn't play the way they want.

Less flexible, more scripted behavior is often the smarter choice when you want to be able to ensure a certain gameplay experience.


And less flexible, more scripted behaviour is one of the biggest things driving me away from gaming these days. Most games seem to end up as a sequence of action bubbles punctuated by cut-scenes, often with super-heavy hints about the "correct" way to handle the situation - sometimes even unwinnable (through e.g. infinitely spawning enemies) until you do things the "right" way.

And the resulting primary gameplay experience is boredom; I felt it most heavily recently with BioShock Infinite.

The other type of game is the open world formula, featured in Assassin's Creed and GTA, and to a certain extent Fallout, Skyrim etc. But these become boring in another way; they rely on making navigating the territory interesting, but eventually the novelty wears off and you just want to enable the "instant teleport" function.

I still miss games like Thief, where navigating the territory was the main challenge of the game, but the territory was carefully enough designed, yet still very open, and not seen repeatedly enough to become boring. Dishonored came within 60%, but the player character was too powerful.


>I still miss games like Thief

To that list I'll add System Shock 2 and Deus Ex 1.


I disagree.


Saying "I disagree" is a pretty useless comment. Say why you disagree, or don't bother saying anything at all.


'Dark Souls' and 'Demon's Souls' do this extremely well, I feel.

Team Ico games come close too.


This is something that is often overlooked in any analysis of global vs. local illumination. Local illumination gives you perfect control, and allows you to "paint with light", which is the cornerstone of the Pixar lighting process.

We used GI at Pixar when it was appropriate, even at the expense of long render times - that is to say, only when it made the final product look better. How you get to the result doesn't matter, only what it looks like on screen.


I appreciate the clear description of local vs global illumination. This isn't quite what the article is discussing though.

Pixar have had a global illumination system in place at least since Up, and maybe earlier [1]. However, it was one that integrated with their rasterizer.

The article is now claiming that Pixar have switched to ray tracing exclusively, which really is a HUGE change, as RenderMan only introduced ray tracing at all with Cars 2. Every prior Pixar movie exclusively used a micropolygon rasterizer for rendering.

The article also claims:

> ray tracing is a relatively advanced CG lighting technique

Well, not really. Ray tracing - at least Whitted-style ray tracing - is about as simple as physically-based rendering gets. It's making it fast that gets complex, but it's possible to write a basic ray tracer in a few hours if you know what you're doing.

[1] http://graphics.pixar.com/library/PointBasedGlobalIlluminati...
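To back up the "few hours" claim: the core of a Whitted-style tracer really is tiny. Here's an illustrative Python sketch - one sphere primitive, one directional light, and none of the reflection/refraction recursion a full Whitted tracer would add. All names are invented for the example:

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere intersection, or None.
    Assumes `direction` is normalized (so the quadratic's 'a' term is 1)."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * x for d, x in zip(direction, oc))
    c = sum(x * x for x in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0 else None

def trace(origin, direction, spheres, light_dir):
    # Find the nearest hit among all spheres.
    nearest_t, hit = None, None
    for s in spheres:
        t = ray_sphere(origin, direction, s["center"], s["radius"])
        if t is not None and (nearest_t is None or t < nearest_t):
            nearest_t, hit = t, s
    if hit is None:
        return (0.0, 0.0, 0.0)  # ray escaped the scene: background color
    point = [o + nearest_t * d for o, d in zip(origin, direction)]
    normal = [(p - c) / hit["radius"] for p, c in zip(point, hit["center"])]
    diffuse = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(diffuse * c for c in hit["albedo"])

# One red sphere straight ahead, lit from behind the camera:
spheres = [{"center": (0, 0, -5), "radius": 1.0, "albedo": (1.0, 0.0, 0.0)}]
color = trace((0, 0, 0), (0, 0, -1), spheres, (0, 0, 1))
```

Loop that over every pixel's camera ray and you have a renderer; mirrors and glass are a handful more recursive lines. The hard part, as noted, is making it fast.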


Wait a sec, I thought that customers were asking for ray tracing in RenderMan before then and they used the first Cars as a testbed for those capabilities.



> So it's understandable that Pixar, which has to render a huge number of frames at huge resolutions, did not traditionally use it much.

Do we have any ball-park estimate of how long it takes Pixar to render a single frame of a movie like Monsters U?

EDIT: Many people are mentioning it's done massively parallel, which I meant to include in my question. So, what I mean is, how long does it take to render a whole Pixar movie?



They must do them heavily in parallel then; otherwise it would take 170 years to render the movie.


Haha, yes, yes they do. You can see a few pictures of Pixar's render farm in [1]. According to [2] (which is where that 11.5 hours comes from) for Cars 2, they had 12,500 CPU cores for rendering.

[1] http://www.slashfilm.com/cool-stuff-a-look-at-pixar-and-luca... [2] http://jalopnik.com/5813587/12500-cpu-cores-were-required-to...


Just for comparison Weta Digital had 35,000 cores for rendering Avatar.


Well, that still raises the question: "how many cores did they use per frame?" Did they just render 12,500 frames in parallel?


Yep! Rendering is very parallelizable, thank goodness. And at more than one level: You can assign different regions of a single frame to different processors, and you can also assign different frames to different processors. It's one of those special computing problems that really can be solved by throwing more silicon at it. Which is a real blessing, considering how unworkably slow it would be otherwise.
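As a toy illustration of the first kind of parallelism (regions of one frame on different workers), here's a hypothetical Python sketch; the per-pixel function is a stand-in for the actual shading work, and the frame/tile sizes are made up:

```python
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 8, 8, 4  # tiny frame, 4x4 tiles

def render_pixel(x, y):
    # Stand-in for the real (expensive) shading computation at pixel (x, y).
    return (x + y) % 256

def render_tile(tile_origin):
    # A tile reads only scene data and its own coordinates - never another
    # tile's output - which is exactly why tiles can run on different workers.
    ox, oy = tile_origin
    return tile_origin, [[render_pixel(ox + x, oy + y) for x in range(TILE)]
                         for y in range(TILE)]

tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
framebuffer = [[0] * WIDTH for _ in range(HEIGHT)]

with ThreadPoolExecutor(max_workers=4) as pool:
    for (ox, oy), pixels in pool.map(render_tile, tiles):
        for y, row in enumerate(pixels):        # stitch tiles back together
            framebuffer[oy + y][ox:ox + TILE] = row
```

The same shape works at the other level too: replace "tile" with "frame" and "worker threads" with "machines" and you have the render-farm version.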


I understand how you would break up an individual frame if you are using the 'local illumination' described in the ancestor post, but if the 'global illumination' has interactions across the entire frame, how is that compatible with parallel processing?

It sounds a bit like the n-body problem, which has parallel approximation algorithms, but nothing terribly straightforward.


In global illumination there are lighting interactions between objects in the scene, but not between pixels. Each pixel is independent of the others, and so can (in theory) be processed in parallel.

In another way of thinking about it, raytracing simulates photons. Photons don't interact with each other, so the problem of simulating photons is massively parallel.


I'm not an expert on how the GI algorithms are parallelized. First, it's worth noting there are a lot of them, so the strategy probably varies. But here's a guess for a popular one known as photon mapping. With that technique, you bounce around virtual photons, and they contribute to the lighting of each point they hit (to simplify a bit). AFAIK, each photon's path is only a function of the light source, the scene geometry, and the scene materials. I.e. it's not a function of what other photons are doing. So I think you can parallelize individual photons bouncing around. As a final step, you have to fold all the light contributions together, which I believe could in turn be parallelized per polygon.

But I'm just guessing about all this.
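That per-photon independence can be sketched in a few lines. This toy Python version - a 1-D "scene" of ten patches with a fixed absorption rule, all invented for illustration - traces photons on a worker pool and folds their energy deposits together at the end:

```python
import random
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

SURFACES = range(10)  # toy "scene": ten surface patches

def trace_photon(seed):
    # Each photon's random walk reads only scene data (and its own RNG),
    # never another photon's state - so photons parallelize trivially.
    rng = random.Random(seed)
    deposits = Counter()
    patch = rng.choice(SURFACES)
    energy = 1.0
    while energy > 0.1:          # bounce until (almost) absorbed
        deposits[patch] += energy
        energy *= 0.5            # lose half the energy at each bounce
        patch = rng.choice(SURFACES)
    return deposits

with ThreadPoolExecutor() as pool:
    photon_map = Counter()
    for d in pool.map(trace_photon, range(1000)):
        photon_map.update(d)     # the final "fold" step: merge contributions
```

The merge at the end is the only serial-looking part, and even that could be done as a parallel reduction per patch (or per polygon, as guessed above).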


Generally you're using some kind of sampling algorithm to randomly sample from the light distribution in the scene, so it's easy to calculate N different images of the same scene, and then average them together. If it's not the whole image that's sampled at once, then you can still parallelize the sampling step.
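For instance, in this toy Python model (Gaussian noise standing in for Monte Carlo sampling variance), each pass is an independent noisy estimate of the same pixel, so passes computed on different machines can simply be averaged:

```python
import random

random.seed(0)
TRUE_VALUE = 0.5  # the "correct" brightness of one pixel (invented)

def render_pass():
    # One independent, unbiased-but-noisy estimate; no pass depends on
    # any other, so they can run anywhere and be merged afterwards.
    return TRUE_VALUE + random.gauss(0, 0.1)

passes = [render_pass() for _ in range(10000)]
estimate = sum(passes) / len(passes)  # noise shrinks like 1/sqrt(N)
```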


I believe the photons are simulated backwards; they emanate from the camera.

So you don't have to simulate the same photons lots of times for different parts of the frame. You just simulate the photons that will eventually end up in a certain part of the image.


Based on the two numbers (170 years and 11.5 hours), that works out to roughly 129,600 tasks in parallel over 11.5 hours.


I don't know for sure, but I would kind of doubt they are rendering all 130,000 frames in parallel. I would think it is more likely that when a scene is done and ready then that scene would be rendered and completed.

There are a lot of other things going into each scene other than just the rendering (storyboarding, animating, lighting, etc) so it would be silly to wait for the entire film to be done and then render all at once.

Also the 11.5 hours is the average time. Single frames could take up to 80 or 90 hours to render according to that article.

So as for how many they do in parallel that would be very interesting to find out, but I would guess possibly all the frames devoted to a single scene so that would be around 5,000-10,000 frames at once which seems somewhat reasonable considering they have 12,500 cores.


Other than grading, compositing and sound, rendering's generally the final step - definitely, the modelling, lighting, animation and texturing have to be done before the rendering can be started.


Or alternatively, 90 minutes * 60 seconds * 24 fps = 129,600 ;)


That is the most clear description I've ever seen of local vs global illumination. Thank you.


I'm still learning on this subject, but I find local and global illumination easier to grasp when using the term direct and indirect illumination.


I wonder what would happen if you offloaded the rendering of a movie like Monsters University to something like Google Compute Engine. Would it cost an arm and a leg? Or would it solve a lot of scaling/cost/time challenges?


I think Pixar will do low-resolution runs all through the day (so people can get immediate feedback on what they are doing), and a high resolution run overnight (so they can see the current product). If you're at close to 100% utilisation, and have a large cluster, renting hardware is very cost inefficient.


The cost would be completely prohibitive. Pixar is at the point where they have to micromanage the electricity cost of their rendering clusters. (At one point they did a multimillion dollar hardware upgrade on the grounds that they would actually save money thanks to new hardware being more power efficient.)


It would cost an arm and a leg... it would be interesting to run the calculation though. There are a lot of bottlenecks in running a render farm like that, but one of them is just moving the data around. A typical scene uses upwards of 1TB of data (textures, etc) and every scene has a different set of data. So there is a lot of local caching on a fast SAN. Since the cost of hardware is amortized over many simultaneous productions and the farm runs at pretty close to 100% 24/7, I think the hardware investment is paid for quickly.


I once tried offloading rendering (Blender to be exact) to Amazon EC2. It turned out to be extremely costly. Rendering a 3-minute video at 1080 would have cost me $60. Yeah, peanuts for Pixar, but remember this is 3 minutes at 1080, not the ungodly amount of pixels they have to render. Also it was a pretty simple scene; a more complex one would have cost more.


This came up in a discussion I had with a tech at Disney Animation. The biggest issue is actually getting the assets from the workstation over to the rendering farm. Since the assets for a single scene can be many gigabytes using someone else's far away render farm would be prohibitively slow.


I actually wonder if Google, Amazon etc have enough idling servers to suddenly handle rendering a 90 minute movie (at approx. 11.5 hours / frame, if not more with these new lighting technologies).


Here's a nice refresher, "Global Illumination in a Nutshell": http://www.thepolygoners.com/tutorials/GIIntro/GIIntro.htm



