Pixar's Chris Horne Sheds New Light on Monsters University (thisanimatedlife.blogspot.com)
321 points by randall on May 28, 2013 | 115 comments


For a bit of background: The article talks a lot about global illumination. Here's what that means.

First, you have to understand the cheaper alternative, which is called local illumination. With local illumination, for each pixel, you figure out what object you're looking at, and where on that object. You take into account the normal (direction of the surface at that point) and the optical properties of the object at that point. You also take into account the position, intensity, color, etc. of any light sources in the scene. Optionally, you may also take into account any shadow casting. That's it.

What's missing from that list? It's a big one: You're not taking into account the way other objects in the scene affect that little point. In the real world, light bounces all around. Each little point is affected by pretty much each other little point. All the points are interdependent.

But with local illumination, you ignore the way other surfaces contribute to the point's illumination. You're just looking at that one point and the light sources. That's why it's called local.
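In rough Python, the local model described above boils down to something like this. This is a minimal Lambertian (diffuse-only) sketch; the function names and the light format are illustrative, not any real renderer's API:

```python
# Local illumination: shade one point using only its normal, its material,
# and the scene's light sources -- no contribution from other surfaces.

def normalize(v):
    n = sum(c * c for c in v) ** 0.5
    return tuple(c / n for c in v)

def shade_local(point, normal, albedo, lights):
    """Lambertian shading: sum over lights of albedo * max(0, N.L) * light color."""
    r = g = b = 0.0
    for light in lights:  # each light: {"pos": (x, y, z), "color": (r, g, b)}
        to_light = normalize(tuple(lp - p for lp, p in zip(light["pos"], point)))
        n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, to_light)))
        r += albedo[0] * light["color"][0] * n_dot_l
        g += albedo[1] * light["color"][1] * n_dot_l
        b += albedo[2] * light["color"][2] * n_dot_l
    return (r, g, b)
```

Note that nothing in the function looks at any other surface in the scene, which is exactly why it's cheap and exactly what global illumination adds back.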

Global illumination, by contrast, does take into account the interplay between different points in the scene. Its main purpose is to simulate light bouncing between polygons.

As you can imagine, managing the complexity of all those interactions is a tall order. We have quite a few algorithms for this; all are approximations. It's worth noting that some of these approximations can converge towards a provably physically correct result if you let them run long enough.

In any case, running global illumination often causes a major increase in rendering time. So it's understandable that Pixar, which has to render a huge number of frames at huge resolutions, did not traditionally use it much.



There's also another factor at play, which is directability. Physical correctness is not usually a priority except as far as it advances the artistic goals of the people making the movie. If the director says, "can you make the right side of that table look less red?", you need to have some way for the artist to achieve that goal, even if that's not how the scene would "really" look. I expect that the development of new tools and processes to allow precise manipulation of the lighting in globally illuminated scenes was just as much, if not more, of a barrier than the additional cost in rendering time.


For an interesting parallel, this is analogous to my experience with emergent gameplay when I was in the game industry. Everyone really likes the idea of emergent gameplay and the open-ended-ness and flexibility that gives you. But you sacrifice a lot of control when you go that way. This can leave game designers and producers feeling like their hands are tied when the game doesn't play the way they want.

Less flexible, more scripted behavior is often the smarter choice when you want to be able to ensure a certain gameplay experience.


And less flexible, more scripted behaviour is one of the biggest things driving me away from gaming these days. Most seem to end up as a sequence of action bubbles punctuated by cut-scenes, often with super-heavy hints about the "correct" way to handle the situation - sometimes even unwinnable (through e.g. infinitely spawning enemies) until you do things the "right" way.

And the resulting primary gameplay experience is boredom; felt most heavily recently with Bioshock Infinite.

The other type of game is the open world formula, featured in Assassin's Creed and GTA, and to a certain extent Fallout, Skyrim etc. But these become boring in another way; they rely on making navigating the territory interesting, but eventually the novelty wears off and you just want to enable the "instant teleport" function.

I still miss games like Thief, where navigating the territory was the main challenge of the game, but the territory was carefully enough designed, yet still very open, and not seen repeatedly enough to become boring. Dishonored came within 60%, but the player character was too powerful.


>I still miss games like Thief

To that list I'll add System Shock 2 and Deus Ex 1.


I disagree.


Saying "I disagree" is a pretty useless comment. Say why you disagree, or don't bother saying anything at all.


'Dark Souls' and 'Demon's Souls' do this extremely well, I feel.

Team Ico games come close too.


This is something that is often overlooked in any analysis of global vs. local illumination. Local illumination gives you perfect control, and allows you to "paint with light", which is the cornerstone of the Pixar lighting process.

We used GI at Pixar when it was appropriate, even at the expense of long render times - that is to say, only when it made the final product look better. How you get to the result doesn't matter, only what it looks like on screen.


I appreciate the clear description of local vs global illumination. This isn't quite what the article is discussing though.

Pixar have had a global illumination system in place at least since Up, and maybe earlier [1]. However, it was one that integrated with their rasterizer.

The article is now claiming that Pixar have switched to ray tracing exclusively, which really is a huge change, as RenderMan only introduced ray tracing at all with Cars 2. Every prior Pixar movie exclusively used a micropolygon rasterizer for rendering.

The article also claims:

> ray tracing is a relatively advanced CG lighting technique

Well, not really. Ray tracing - at least Whitted-style ray tracing - is about as simple as physically-based rendering gets. It's making it fast that gets complex, but it's possible to write a basic ray tracer in a few hours if you know what you're doing.

[1] http://graphics.pixar.com/library/PointBasedGlobalIlluminati...
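To illustrate how little a basic Whitted-style tracer actually needs, here's a toy core: sphere intersection, a hard-coded light, and one recursive mirror bounce. Everything here (the scene format, the fixed light direction) is illustrative, not RenderMan's or anyone's real code:

```python
# Toy Whitted-style tracer core: intersect, shade locally, recurse on reflection.
import math

def hit_sphere(origin, direction, center, radius):
    """Distance t along a normalized ray to the nearest sphere hit, or None."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c          # direction assumed normalized, so a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None

def trace(origin, direction, spheres, depth=0):
    """Grey value for one ray: local shading plus one mirror bounce per hit."""
    if depth > 3:
        return 0.0
    nearest = None
    for center, radius, reflectivity in spheres:
        t = hit_sphere(origin, direction, center, radius)
        if t is not None and (nearest is None or t < nearest[0]):
            nearest = (t, center, reflectivity)
    if nearest is None:
        return 0.2                  # background
    t, center, reflectivity = nearest
    p = [o + t * d for o, d in zip(origin, direction)]
    n = [pi - ci for pi, ci in zip(p, center)]
    nlen = math.sqrt(sum(x * x for x in n))
    n = [x / nlen for x in n]
    local = max(0.0, n[2])          # fixed "light" along +z, purely illustrative
    d_dot_n = sum(d * x for d, x in zip(direction, n))
    reflected = [d - 2.0 * d_dot_n * x for d, x in zip(direction, n)]
    return (1.0 - reflectivity) * local + \
           reflectivity * trace(p, reflected, spheres, depth + 1)
```

That's essentially the whole algorithm; everything else in a production tracer is acceleration structures, sampling, and shading sophistication.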


Wait a sec, I thought that customers were asking for ray tracing in RenderMan before then and they used the first Cars as a testbed for those capabilities.



> So it's understandable that Pixar, which has to render a huge number of frames at huge resolutions, did not traditionally use it much.

Do we have any ball-park estimate of how long it takes Pixar to render a single frame of a movie like Monsters U?

EDIT: Many people are mentioning it's done massively parallel, which I meant to include in my question. So, what I mean is, how long does it take to render a whole Pixar movie?



They must do them heavily in parallel then; otherwise it would take 170 years to render the movie.


Haha, yes, yes they do. You can see a few pictures of Pixar's render farm in [1]. According to [2] (which is where that 11.5 hours comes from) for Cars 2, they had 12,500 CPU cores for rendering.

[1] http://www.slashfilm.com/cool-stuff-a-look-at-pixar-and-luca... [2] http://jalopnik.com/5813587/12500-cpu-cores-were-required-to...


Just for comparison Weta Digital had 35,000 cores for rendering Avatar.


Well, that still raises the question: "how many cores did they use per frame?" Did they just render 12,500 frames in parallel?


Yep! Rendering is very parallelizable, thank goodness. And at more than one level: You can assign different regions of a single frame to different processors, and you can also assign different frames to different processors. It's one of those special computing problems that really can be solved by throwing more silicon at it. Which is a real blessing, considering how unworkably slow it would be otherwise.
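The per-frame decomposition can be sketched like this. Threads are used only to keep the example small and self-contained; a real farm would split tiles across processes and machines, and `render_tile` is a stand-in for the actual shading work:

```python
# Split a frame into tiles, render tiles in parallel, merge the results.
from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT, TILE = 64, 48, 16

def render_tile(tile_origin):
    """Render one tile; each 'pixel' here is a placeholder function of its coords."""
    x0, y0 = tile_origin
    return {(x, y): float(x + y)          # stand-in for real shading work
            for y in range(y0, min(y0 + TILE, HEIGHT))
            for x in range(x0, min(x0 + TILE, WIDTH))}

def render_frame():
    tiles = [(x, y) for y in range(0, HEIGHT, TILE) for x in range(0, WIDTH, TILE)]
    frame = {}
    with ThreadPoolExecutor(max_workers=4) as pool:
        for tile_pixels in pool.map(render_tile, tiles):
            frame.update(tile_pixels)     # tiles are disjoint, so merging is trivial
    return frame
```

Because no tile reads another tile's output, the merge step is the only coordination needed, which is what makes the problem so pleasantly parallel.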


I understand how you would break up an individual frame if you are using the 'local illumination' described in the ancestor post, but if the 'global illumination' has interactions across the entire frame, how is that compatible with parallel processing?

It sounds a bit like the n-body problem, which has parallel approximation algorithms, but nothing terribly straightforward.


In global illumination there are lighting interactions between objects in the scene, but not between pixels. Each pixel is independent of the others, and so can (in theory) be processed in parallel.

In another way of thinking about it, raytracing simulates photons. Photons don't interact with each other, so the problem of simulating photons is massively parallel.


I'm not an expert on how the GI algorithms are parallelized. First, it's worth noting there are a lot of them, so the strategy probably varies. But here's a guess for a popular one known as photon mapping. With that technique, you bounce around virtual photons, and they contribute to the lighting of each point they hit (to simplify a bit). AFAIK, each photon's path is only a function of the light source, the scene geometry, and the scene materials. I.e. it's not a function of what other photons are doing. So I think you can parallelize individual photons bouncing around. As a final step, you have to fold all the light contributions together, which I believe could in turn be parallelized per polygon.

But I'm just guessing about all this.
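Continuing that guess, the independence of photon paths can be sketched like this. `trace_photon` is a toy stand-in for a real photon walk (no actual geometry), and the per-photon loop in `gather` is the part a farm would parallelize:

```python
# Each photon's walk depends only on the light, the geometry, and its own RNG
# seed, so photons can be traced independently and their deposits merged after.
import random

def trace_photon(seed):
    """Toy photon walk: returns {surface_id: deposited_energy}."""
    rng = random.Random(seed)          # per-photon RNG -> no shared state
    deposits = {}
    energy = 1.0
    for _bounce in range(3):
        surface = rng.randrange(5)     # pretend the photon hit one of 5 surfaces
        deposits[surface] = deposits.get(surface, 0.0) + energy
        energy *= 0.5                  # absorb half the energy each bounce
    return deposits

def gather(photon_count):
    """Merge per-photon deposits; the merge is the only sequential step."""
    total = {}
    for seed in range(photon_count):   # this loop is what you'd parallelize
        for surface, e in trace_photon(seed).items():
            total[surface] = total.get(surface, 0.0) + e
    return total
```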


Generally you're using some kind of sampling algorithm to randomly sample from the light distribution in the scene, so it's easy to calculate N different images of the same scene, and then average them together. If it's not the whole image that's sampled at once, then you can still parallelize the sampling step.
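A tiny sketch of that averaging step, with `noisy_render` as a stand-in for one random sample of a pixel (the names are made up for illustration):

```python
# Average N independent noisy samples of the same pixel; the mean converges
# to the true value as N grows, which is why sampling parallelizes so well.
import random

def noisy_render(true_value, rng):
    """One random sample of a pixel whose true value is `true_value`."""
    return true_value + rng.uniform(-0.5, 0.5)

def averaged(true_value, n, seed=0):
    rng = random.Random(seed)
    return sum(noisy_render(true_value, rng) for _ in range(n)) / n
```

Each of the N samples could come from a different machine; averaging them at the end is cheap.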


I believe the photons are simulated backwards; they emanate from the camera.

So you don't have to simulate the same photons lots of times for different parts of the frame. You just simulate the photons that will eventually end up in a certain part of the image.


Based on the two numbers, 170 years and 11.5 hours, that means 129,581.58 tasks in parallel over 11.5 hours.
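For what it's worth, the round numbers in this thread check out as a back-of-envelope (assuming 24 fps and a plain 365-day year):

```python
# Rough sanity check of the thread's figures.
frames = 90 * 60 * 24                  # 90 min * 60 s * 24 fps
assert frames == 129_600

serial_hours = frames * 11.5           # 11.5 h average per frame
serial_years = serial_hours / (365 * 24)
# serial_years comes out around 170 -- i.e. rendering one frame after
# another really would take about 170 years.
```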


I don't know for sure, but I would kind of doubt they are rendering all 130,000 frames in parallel. I would think it is more likely that when a scene is done and ready then that scene would be rendered and completed.

There are a lot of other things going into each scene other than just the rendering (storyboarding, animating, lighting, etc) so it would be silly to wait for the entire film to be done and then render all at once.

Also the 11.5 hours is the average time. Single frames could take up to 80 or 90 hours to render according to that article.

So as for how many they do in parallel that would be very interesting to find out, but I would guess possibly all the frames devoted to a single scene so that would be around 5,000-10,000 frames at once which seems somewhat reasonable considering they have 12,500 cores.


Other than grading, compositing and sound, rendering's generally the final step - definitely, the modelling, lighting, animation and texturing have to be done before the rendering can be started.


Or alternatively, 90 minutes * 60 seconds * 24 fps = 129,600 ;)


That is the most clear description I've ever seen of local vs global illumination. Thank you.


I'm still learning on this subject, but I find local and global illumination easier to grasp when using the terms direct and indirect illumination.


I wonder what would happen if you offloaded the rendering of a movie like Monsters University to something like Google Compute Engine. Would it cost an arm and a leg? Or would it solve a lot of scaling/cost/time challenges?


I think Pixar will do low-resolution runs all through the day (so people can get immediate feedback on what they are doing), and a high resolution run overnight (so they can see the current product). If you're at close to 100% utilisation, and have a large cluster, renting hardware is very cost inefficient.


The cost would be completely prohibitive. Pixar is at the point where they have to micromanage the electricity cost of their rendering clusters. (At one point they did a multimillion dollar hardware upgrade on the grounds that they would actually save money thanks to new hardware being more power efficient.)


It would cost an arm and a leg... it would be interesting to run the calculation though. There are a lot of bottlenecks in running a render farm like that, but one of them is just moving the data around. A typical scene uses upwards of 1TB of data (textures, etc) and every scene has a different set of data. So there is a lot of local caching on a fast SAN. Since the cost of hardware is amortized over many simultaneous productions and the farm runs at pretty close to 100% 24/7, I think the hardware investment is paid for quickly.


I once tried offloading rendering (Blender to be exact) to Amazon EC2. It turned out to be extremely costly. Rendering a 3-minute video at 1080 would have cost me $60. Yeah, peanuts for Pixar, but remember this is 3 minutes at 1080, not the ungodly amount of pixels they have to render. Also it was a pretty simple scene; a more complex one would have cost more.
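Scaling that figure naively to feature length (assuming the clip was 24 fps, which the comment doesn't actually state) gives a lower bound that's still wildly optimistic:

```python
# Extrapolating the commenter's $60 / 3-minute EC2 figure to feature length.
clip_frames = 3 * 60 * 24              # 4,320 frames at an assumed 24 fps
cost_per_frame = 60 / clip_frames      # ~$0.014 per simple 1080p frame
feature_frames = 90 * 60 * 24          # 129,600 frames
naive_cost = feature_frames * cost_per_frame   # ~$1,800
# ...which is wildly optimistic, since a Pixar frame takes hours of compute,
# not the seconds a simple Blender scene does.
```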


This came up in a discussion I had with a tech at Disney Animation. The biggest issue is actually getting the assets from the workstation over to the rendering farm. Since the assets for a single scene can be many gigabytes using someone else's far away render farm would be prohibitively slow.


I actually wonder if Google, Amazon etc have enough idling servers to suddenly handle rendering a 90 minute movie (at approx. 11.5 hours / frame, if not more with these new lighting technologies).


Here's a nice refresher, "Global Illumination in a Nutshell": http://www.thepolygoners.com/tutorials/GIIntro/GIIntro.htm


> ray tracing is a relatively advanced CG lighting technique

Not really correct. Ray tracing is more like the simplest CG lighting technique you could imagine, but so incredibly computationally expensive that people have mostly been waiting for the hardware to be good enough for the last 50 years.

And in the meantime, they've been using an incredible pile of hacks and tricks to try and approach levels of visual quality and complexity that are trivial on a ray tracer, except said pile of hacks could actually be computed before the heat death of the universe.


    I was surprised that ray tracing in Pixar
    was historically a clunky, haphazard process.
    I always thought of it as this smooth, polished
    machine like something you would see at an Apple store.
Life inside the sausage factory never quite looks like what outsiders would expect.


A lot of the tools at Pixar are wonderfully clunky.

They went through a phase of hiring hip young things fresh out of MIT to write tools. Instead of nice friendly tools that play well together, they got a lot of domain specific languages.


It's been this way in the visual effects industry for decades. Most of the tools are used through a Posix layer with Perl to keep things tidy. Going into the VFX forums even a few years ago I was amazed at illustrators knowing enough Perl and Python to get the job done, because, "Designers can't code good!"

Even the commercial stuff looks like some graduate student's thesis work. A basic Java GUI and 50+ command-line arguments.


Yeah, they're starting to use off-the-shelf software a lot more recently...


So very true! In fact, having worked at an Apple store, and later on the products sold at the Apple store, I'd venture to say that Pixar's process is probably precisely like the process behind something you would see at an Apple store!


I think it's awesome how they thought something was too hard, and so they faked it to be "good enough."

I'm going to have to look more critically at Pixar movies now, knowing that the old ones didn't have actual calculated light sources.


Actually, light sources have always been the only thing 'traced'. We used to use them to trick ray tracing in the early days of RenderMan (put an image in the light source so you can 'reflect' it). It's surface-to-surface and other GI effects that were late to the party.

Nevertheless, we often faked that with massive light counts. For the opening shot of Armageddon with the asteroid hitting Earth, I used 20,000 point lights to represent secondary debris reentering the atmosphere. In 1998, or whenever that was, it still only took RenderMan a few minutes to render each frame.


OT: (fanboi alert) I think it's really cool that you're posting here. Pixar is one of those places that I think would be really really cool to work for but that I don't have the skillset for. For the record, I have an original print of the RenderMan book that I read cover-to-cover when I was a kid.


Even if it renders everything you ever saw.


Most people probably think that Pixar uses Apple computers (e.g. the Apple references in Wall-E). In fact, Pixar is a Linux shop.


Dreamworks is also Linux based (RedHat).


This is wrong actually - they've been using ray tracing for ages (all ambient occlusion in PRMan is done with raytracing, by sending out occlusion rays in a hemisphere around the shading point).

What's new with MU is they're using both physically-plausible shading (where the shading is based off physically-based BRDF lighting algorithms, which gives much more realistic results), and global illumination path-tracing for the entire light transfer equation.


They explain the extent to which they used to use raytracing in the article - is it misrepresented? It's in a direct quote so I doubt it.


Yes, because the article (and the person in it) doesn't seem to understand the difference between path tracing (ray tracing with global illumination - multiple bounces even with diffuse surfaces), and ray tracing = sending rays around a scene and bouncing them off specular reflective/refractive surfaces - which Pixar have been doing for years. It's been possible to write raytraced shaders in PRMan for over 12 years now.


Cars was in production about 10-12 years ago. Maybe you're thinking Cars 2?


No, the guy in the article was wrong - it's been possible to do in PRMan since at least 2000, but it was very slow (they didn't have any decent acceleration structures for the ray intersection), so it generally wasn't used that much. But it was possible.

For PRMan 13 (which Pixar used for Cars in 2006), they added semi-decent acceleration structures which sped up raytracing a bit. But you still had to use custom shaders to cast rays.

With PRMan 17, ray tracing is now a first-class citizen in PRMan, and it can also trace rays from the camera instead of doing the traditional (pre 17) REYES rasterization of the surface and then shading that surface for reflection based on ray tracing.


The reflections on Buzz's helmet in Toy Story (the original) looked raytraced to me, although it might have been very well faked.

I used RenderMan in the mid-90s and it allowed selective ray tracing per shader.

Certainly the guys at CGSociety think there was ray tracing in both Toy Story and A Bug's Life.

http://forums.cgsociety.org/archive/index.php/t-60329.html


Actually there's some incorrect information here: Pixar started using ray tracing in films as far back as A Bug's Life. I can't find a picture online but you can see it in the scene with the glass bottle. That was done by integration with a separate renderer, but since then RenderMan has added support for GI and other ray tracing features.

I'm sure this update is significant, and it sounds like a ground-up reworking of the engine, perhaps replacing REYES? But it's not at all accurate to say that Pixar is moving to ray tracing. Pixar has been in that neighborhood for more than a decade.


Yep. A "Ray Server" technique where PRMan farmed out trace() calls to BMRT was used in "A Bug's Life" for the glass bottles at the grasshopper's hideout. All other reflections and refractions were done using 'standard-issue' environment maps.

Source: Apodaca,Gritz, Advanced Renderman -- MK 2000.


I was wondering what Toy Story would look like if they re-rendered it with today's technology. Well, it turns out they already did! It was part of the theatrical re-releases of Toy Story and Toy Story 2 in 3D.

http://www.bigscreenanimation.com/2008/09/toy-story-re-relea...

Were these re-rendered versions released in 2D on Blu-ray?


I wonder if you really see a difference.


Well, unless they changed the sources (models, lighting, etc.), probably not. However, re-rendering would allow for higher resolution (1080p or cinema resolutions) and higher frame rates (not sure what Blu-ray does in terms of frames), as well as tiny adjustments for display on modern cinema screens and TVs.


You won't see it, but for sure you will perceive it! ;)


Well put!


An article about a new dimension in rendering quality, and they're demonstrating that quality with two images the size of a stamp...WAT?



Wow! There's actually a huge difference! Not that I'm really surprised but still.


You also have to consider that Monsters Inc. was one of the earlier Pixar movies.

The contrast to Brave isn't that extreme.


Since the movie isn't actually out, this is more of a promotional piece. Also the first image is a bit bigger http://pixartimes.com/wp-content/uploads/2012/08/Monsters-Un... but is fairly low-quality and for some reason is scaled down to stamp-size on the page.


Comparing raster graphics from 12 years ago to ray-traced graphics from today isn't fair anyway. Part of why raster has persisted against ray tracing for so long (despite numerous predictions to the contrary) is that raster graphics techniques are constantly improving.


From what I gathered from John Carmack, ray tracing is done much more efficiently with voxels than with triangles, so hopefully this will push game engine companies to incorporate voxels sooner into their engines, too.
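For context on why rays and regular grids pair nicely: marching a ray through a uniform voxel grid visits candidate cells in front-to-back order using only adds and compares. This is the Amanatides & Woo "3D-DDA" idea, sketched here in 2D on a unit grid for brevity:

```python
# Step a 2D ray through a uniform unit grid, yielding cells in visit order.
import math

def grid_march(origin, direction, max_steps=64):
    """Yield the grid cells a 2D ray passes through, in order."""
    x, y = int(math.floor(origin[0])), int(math.floor(origin[1]))
    step_x = 1 if direction[0] > 0 else -1
    step_y = 1 if direction[1] > 0 else -1

    def first_crossing(o, d, cell, step):
        """Ray parameter t at which we first cross the next grid line."""
        if d == 0:
            return math.inf
        boundary = cell + (1 if step > 0 else 0)
        return (boundary - o) / d

    t_max_x = first_crossing(origin[0], direction[0], x, step_x)
    t_max_y = first_crossing(origin[1], direction[1], y, step_y)
    t_delta_x = abs(1 / direction[0]) if direction[0] else math.inf
    t_delta_y = abs(1 / direction[1]) if direction[1] else math.inf

    for _ in range(max_steps):
        yield (x, y)
        if t_max_x < t_max_y:       # next grid line crossed is vertical
            x += step_x
            t_max_x += t_delta_x
        else:                        # next grid line crossed is horizontal
            y += step_y
            t_max_y += t_delta_y
```

Each step is constant time, with no per-triangle intersection tests until a candidate cell is reached, which is the property Carmack's argument leans on.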


"Let's add voxels!!!111one" is one of the most irritating refrains of gamers.

There are a lot of very nice mathematical properties of triangles, and the trend towards graphical fidelity has really only served to balloon budgets in the gaming space. We don't need voxels, and honestly neither do we need ray tracing.

It's not even new tech--games going back to Outcast, some Build engine games, Novalogic stuff, and so on have used voxels. Ray tracing has been used in a handful of nifty tech demos, but otherwise nobody cares.

Ray tracing and voxel tech is of marginal utility for games, and things have moved on--it'd be like switching the US construction industry over to metric; too late to make a difference and too minor to matter.

EDIT: In spite of all this, voxel cone tracing looks sexy as hell though.


Why do people assume that voxels and polygons don't play well together?

There are voxel engines now that store world data using voxels while what the player actually sees is polygons based on that voxel data.

I'm reminded of one guy's side project[1] that accomplishes exactly that.

[1]http://procworld.blogspot.com/2012/12/videos-of-caves-and-bu...


There's an awesome library that goes by the name of Polyvox that does exactly this. It's worth a look if you're interested in voxel-y things.


They're already on it. The Unreal 4 engine will use voxel cone tracing: http://www.unrealengine.com/files/misc/The_Technology_Behind...


An invisible voxelized-representation of polygonal scene geometry is used there for approximating global illumination of visibly rendered polygonal geometry.

Hence, it's not a realtime "voxel engine" as far as visual rendering goes.


Unfortunately pre-rendering and real-time rendering are worlds apart.

Pixar probably didn't need to do a voxel -> triangle transformation to their data set if they were using voxels, but any real-time renderer does.


What's the point of incorporating it in engines if the hardware is not made for it? By the way, I remember Carmack talking about accelerated ray casting. That would be a killer feature...

Also don't forget voxels take an order of magnitude more memory. Voxels are hyped because of Minecraft, and in terms of acceleration they're really not trivial at all.

I remember some nifty demos with voxels, though. But honestly, wait for the real-time graphics API mess to smooth out a little before going voxels. You can still do a big lot with triangles.


What sort of raytracing are they using?

Are they going all the way to an unbiased global light transport algorithm (like LuxRender) or just using basic raytracing (like PovRay)?

Are they using an existing renderer? If not, are they releasing their own like they did with RenderMan?

Are they rendering with CPUs or with GPUs?

How much time per frame does it take them with how many cores of what sort?


Interesting questions. I don't have the answers, but I can say that I can't imagine that Pixar would ever 'go all the way' to an 'unbiased' GI renderer.

Our entire function in the filmmaking business is to tell a story visually, and for that you need complete control and directability of the image. This is the opposite goal of unbiased renderers. Nevertheless more tools in the tool box is always good.


Unbiased actually isn't slow - Arnold has proved this - what's slow is using hundreds of bounces per path and bi-directional path tracing (like Maxwell, Indigo, LuxRender) which takes a lot longer.

Biased generally means it's interpolated with a point or irradiance cache.


They're using uni-directional path tracing - if they're using PRMan 17 (Which I think they were), it'll probably be unbiased like with Arnold. So it's unbiased, but unidirectional, unlike LuxRender's MLT (two-way path tracing).

PRMan 17's a fairly good raytracer now (Arnold was giving PRMan a bit of a kicking in this department over the last two years).

All on the CPU - there's no way GPUs can cope with the size of the textures and geometry feature films require (up to 200 GB of textures and geometry in some of the complicated scenes) - there's no way that's fitting on a GPU.

I don't know what Pixar have, but SPI (who use their own version of Arnold) used to have quad socket i7s, so 64-thread machines with 96 GB of RAM two years ago - some of the more complex frames were taking +30 hours at 2k.


Combining bi-directional path tracing and ray differentials (needed for texture filtering and geometry subdivision) doesn't really work well at the moment, sadly.


Ray differentials are just two extra rays one pixel up and right of the main ray to give the ray width. It's pretty trivial to keep them up to date with the main ray at surface intersections (it's technically more work, sure), but it's doable, so I don't see a problem with it.

BDPT is more concerned with the surface area of meshes and solid angles of hits, so that the light path vertex weightings can be accurate.
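A toy version of the footprint idea described above, using a flat ground plane so the offset-ray hit points are trivial to compute; the function names are made up for illustration:

```python
# The "two extra rays" idea: offset rays one pixel right and up give the
# main ray's footprint on a surface, used to pick a texture filter width.

def hit_ground_plane(origin, direction):
    """Intersection of a ray with the plane y == 0 (assumes direction[1] < 0)."""
    t = -origin[1] / direction[1]
    return tuple(o + t * d for o, d in zip(origin, direction))

def footprint(origin, d_main, d_right, d_up):
    p = hit_ground_plane(origin, d_main)
    pr = hit_ground_plane(origin, d_right)    # one pixel to the right
    pu = hit_ground_plane(origin, d_up)       # one pixel up
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return max(dist(p, pr), dist(p, pu))      # conservative filter width
```

Keeping `pr` and `pu` consistent with `p` through reflections and refractions is the "more work, but doable" part.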


1) http://renderman.pixar.com/view/raytracing-fundamentals

2) Unbiased renderers like Maxwell and Arnold are very different beasts. Unbiased renderers have only become practical with Moore's law. RenderMan is so prevalent because it's both fast and flexible. Maxwell specifically has a very distinct look.

3) It's RenderMan. They eat their own dog food.

4) CPU, with GPU bits for specialist stuff.

5) AAAAAAGGGEESS. 24 hours a frame in some cases, even more if there is lots of fur/water.


Wow, I had always assumed Pixar was doing GI/ray tracing by now. Looking forward to seeing MU to check out the graphics porn.


Ironically, RenderMan has had global illumination and ray tracing for over ten years.

Having said that, what RenderMan is and what Pixar Animation do/use are rather orthogonal. For example, Pixar made heavy use of subsurface scattering in The Incredibles - something that was, at the time, rather time-intensive.

The title is a bit misleading, as they've been using ray tracing for years [1].

[1]http://graphics.pixar.com/library/RayTracingCars/paper.pdf


They were doing GI before, but using an approximate point-based solution. The point-based stuff isn't as accurate but it's very fast and still looks pretty good.


Btw, there's a pretty impressive real-time ray tracer in development: http://raytracey.blogspot.cz/


Here's a paper from last summer on how Pixar is doing the computations for global illumination more efficiently with a "multiresolution radiosity caching method." http://graphics.pixar.com/library/RadiosityCaching/paper.pdf

I don't know much about graphics but maybe some of you will find it illuminating (no pun intended....)


I would love them to re-generate Brave with ray tracing. I'm guessing, though, that would take too many resources, even for a Blu-ray edition.


I think the worst part is the amount of man-hours it would take, not compute hours (but maybe that's what you meant?). If the change is as deep as it sounds like, the entire tooling support changed, so all scenes would have to be re-lit by artists, not-entirely-from-scratch but worse than you think.

REYES-rendered scenes that fake global illumination are pretty arcanely hacked together. Just making a legacy scene ray-traced would make it look worse, not better.


Absolutely, which is why it won't happen. You'd have to actually place lights, remove the fakes, and render check every scene. Basically the only part of the movie you wouldn't have to do is come up with dialog, sound, and geometry. (not to mention I have never met a movie person who, given a chance to reshoot a scene says "Yeah, it was perfect when I shot it, the only changes here are mechanical." :-)


I'm guessing the disco ball scene is Pixar flexing their muscles with the new tech. It's pretty impressive.


Does anybody know how other companies, like Universal's animation department (Ice Age, Despicable Me, ...), stand technology-wise? From the visuals, I always assumed Pixar was setting the standards, but now, knowing that they are only just starting to use unified ray tracing, the gap might not be that big...


Pixar's generally ahead in terms of story, animation (they hand-animate everything) and look (shading and lighting), but in terms of pure tech, other companies like Weta, ILM and SPI are generally ahead of them as they work on multiple shows at once and per year.

SPI have been using full GI pathtracing with Arnold renderer for the last 5 years, and Blue Sky (studio which did Ice Age) has their own GI raytracer they use as well.

Also, Pixar don't actually have that big a renderfarm - they don't need it. Other places like Weta and ILM have renderfarms that are much bigger, but are used for multiple productions at once, and for doing things like compositing and fluid/cloth/physics sims.


Pixar have a lot of technology, but they come from an animation tradition rather than a studio that is concerned with photoreal CGI. Renderman has always been a system for "painting" the scene you want - it's an artist's toolbox.

It's only recently that advances in processing power have made physically-based ray tracing practical for film production - particularly with the take-up of the Arnold renderer by various other studios. Suddenly lighting becomes a matter of placing lights and letting the computer do the work rather than needing to carefully set up the correct impression of light in the way a painter might. So it requires quite a bit of change of approach from the artist, and you can imagine why there'd be a bit of a cultural problem introducing this.


Why does it matter that much what technology is in the backend? You lead the industry with results, not with the means to get to those results.

If I can make my webapp better (however you define "better") than my competitors' using PHP and MySQL, while they're making theirs using Ruby on Rails, MongoDB, etc., does the tech stack in the background matter, aside from making a nice article?


Yes, it makes a lot of difference.

There's the obvious render time, but actually render time isn't that important - studios are happy to wait up to 30 hours for a 4k frame on the farm if that's what it takes for a shot. But they don't want artists waiting around, so they want very quick iterations and previews of what the artists are doing, as it's the artists who cost money.

This is why global illumination has taken off over the last 5-6 years (thanks largely to SPI and Blue Sky showing it could be done): although the render times are slower, lighting the scene with physically-based shading is much quicker, and you don't need as many hacks as you did with PRMan (light maps, shadow maps, reflection maps, point caches, etc). You can literally model scenes with lights as they are in the real world.
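To make "physically-based shading" a bit more concrete, here's a toy Python sketch (my own illustration, nothing to do with PRMan or Arnold internals). Local illumination is basically a single dot product against the light; one diffuse GI bounce is a Monte Carlo average of whatever the rest of the scene sends back over the hemisphere. The `incoming_radiance` callback is a stand-in for the scene - a real path tracer would recurse there.

```python
import math
import random

def shade_local(albedo, cos_theta_light, light_intensity):
    # Local illumination: the point only sees the light source (Lambert).
    return albedo * light_intensity * max(0.0, cos_theta_light)

def shade_indirect(albedo, incoming_radiance, samples=10000, seed=1):
    # One diffuse GI bounce, estimated by Monte Carlo over the hemisphere.
    # `incoming_radiance(cos_theta, phi)` stands in for "what the rest of
    # the scene reflects toward this point" - a real renderer recurses here.
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        cos_theta = rng.random()           # uniform in solid angle on hemisphere
        phi = 2.0 * math.pi * rng.random()
        li = incoming_radiance(cos_theta, phi)
        # estimator: (albedo/pi) * L_i * cos_theta / pdf, with pdf = 1/(2*pi)
        total += 2.0 * albedo * li * cos_theta
    return total / samples
```

With a constant surrounding radiance of 1, the estimate converges to the albedo itself - which is the "let the computer do the work" part: no light maps or point caches, just sampling.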

On top of this, there's how easy it is to do very complex shots and change just bits of it - tools like Katana allow hugely complex scenes to be managed and rendered very efficiently, with very little work from artists. Studios who don't have similar tools often duplicate and waste a lot of time doing things that should be easy.

For example, Weta on Iron Man 3 wasted a lot of time doing all the different suits, as they didn't have a decent asset-based pipeline that would have allowed them to re-use a lot of shaders and assets between suits.


> Does the tech stack in the background matter

I think it does, because the tech stack in the background allows for things that might not be possible for other tech stacks.

You can duplicate somebody else's webapp in your backend of choice, but you can't have true GI if your rendering engine doesn't support it, and while you can fake some of the effects, they ultimately won't look as good as the real thing (unless you're aiming for a different 'good').


When the output of the tech stack is the product, the means that achieve it and the level of accuracy reached matter a great deal.


Pretty bad analogy. In the realm of making 'pretty pictures that look eerily realistic', the 'realistic' part is pretty significant... and usually it is Pixar itself that starts the promotional pieces about what new technology they have whenever they have a big new movie coming out...


Better tech could theoretically allow you to develop more movies at once by reducing the amount of specialization required, or simply develop movies faster or on a smaller budget for the same results. If you are getting results that match your competitor's but have to spend 10x as much time rendering them because you are doing it "the old way", then you are at a disadvantage even if both your movies do well.


Many of us are here because we like knowing how things work, not just their end result.


Sorry, but the comparison is incredibly flawed. Your PHP or Rails code will still generate HTML in the end.


Pixar's focus has always been on renderer efficiency for cinematic storytelling, not really on accuracy or 'simulation'.

So while I'd say they are still way ahead in efficiency, they are on par or behind with respect to light simulation.


At least historically, Pixar wasn't just a studio but also sold their renderer to others as software.

http://en.wikipedia.org/wiki/RenderMan


At least super-historically, Pixar wasn't just a software company but a hardware company as well:

http://en.wikipedia.org/wiki/Pixar_Image_Computer


Ice Age / Rio / Epic are Blue Sky, not Universal.


I think it's James Cameron that sets the standards


Now that Pixar is only ray tracing, can real time move to scanline? I'd really like to see a lot more smooth shapes and complicated geometry in games if possible.


That's pretty much what games already use. The only difference is polygon count--doubling the polygons tends to halve the framerate.

That said, a lot of the magic comes from a toolchain that lets artists work with curved surfaces (NURBS or similar) and converts those surfaces to polygons at the last minute. We finally started getting hardware support for that sort of thing with DirectX 11 and OpenGL 4.
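As a toy illustration of that "convert to polygons at the last minute" step (my own sketch, not any particular toolchain's pipeline), here's a cubic Bezier curve flattened into line segments with de Casteljau's algorithm. Hardware tessellation does the same kind of thing for surface patches on the GPU, so the smoothness/polygon-count trade-off is just the `segments` parameter:

```python
def decasteljau(p0, p1, p2, p3, t):
    # Evaluate a cubic Bezier curve at parameter t by repeated interpolation.
    lerp = lambda a, b: tuple(ai + (bi - ai) * t for ai, bi in zip(a, b))
    a, b, c = lerp(p0, p1), lerp(p1, p2), lerp(p2, p3)
    d, e = lerp(a, b), lerp(b, c)
    return lerp(d, e)

def tessellate(p0, p1, p2, p3, segments):
    # "At the last minute", flatten the smooth curve into straight segments.
    # Doubling `segments` gives a smoother silhouette at roughly double the cost.
    return [decasteljau(p0, p1, p2, p3, i / segments) for i in range(segments + 1)]
```

For example, `tessellate((0, 0), (0, 1), (1, 1), (1, 0), 8)` gives 9 points approximating the arch, with the endpoints hit exactly.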


I suppose the holy grail is when ray tracing can be done in real time. At that point things would look so real we might as well call it quits from real life.


sigh WHY has he locked the font size down on his blog such that CMD + and - only change the text area width and image sizes?


It works just fine in Chrome and Firefox here. Are you perhaps using an old and/or awful browser? He's using a px unit on his font size, but Firefox and Chrome have done full-page scaling for ages now.


I'm using the latest chrome for mac. I tested on a different blogspot blog and it works perfectly there.



