How long does Pixar take to render?

This video shows the final stage of production for a scene from the film Luca. In the clip, Giulia rides her bike into the piazza and jumps off.

She grabs a fish from the wagon on the back of the bike and waves it in the face of Ercole. They argue and Giulia rides away. The sets, surfaces, animation, simulation, and lighting are all in place. And the images that were pixelated in the Lighting video are now sharp and clear. This is the version of the film as it was shown in theaters.

Rendering turns a virtual 3D scene into a 2D image.

Actors start coming in to voice the script. Tom Hanks takes his turn at the Pixar recording studio to lay down his vocal tracks.

Hanks reads every line dozens of times, varying his interpretations and emphasis. The sessions are also filmed, so animators can use the actor's expressions as reference when they start animating the characters' faces.

The shaders add colour and texture to characters' bodies and other surfaces. One issue is the fact that Woody and Buzz are made of plastic: some plastics are slightly translucent, so light penetrates the surface and scatters around inside before leaving. The shaders use a subsurface scattering algorithm to simulate this effect, which makes the toys look more realistic.
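
As a toy illustration of the idea (not Pixar's actual shader), light entering a slightly translucent material is attenuated roughly exponentially with the distance it travels inside; the extinction value below is an assumed number:

```python
import math

def subsurface_transmission(depth_mm, sigma_t=0.8):
    """Fraction of light surviving after travelling depth_mm inside the
    material (Beer-Lambert falloff); sigma_t is an assumed extinction
    coefficient per millimetre, not a measured plastic value."""
    return math.exp(-sigma_t * depth_mm)

for depth in (0.5, 1.0, 2.0, 4.0):
    print(f"{depth} mm deep: {subsurface_transmission(depth):.2f} of the light remains")
```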

The pictures now move. Each character is defined by up to a thousand or so avars -- points of possible movement -- that the animators can manipulate like strings on a puppet. Each morning, the team gathers to review the second or two of film from the day before. The frames are ripped apart as the team searches for ways to make the sequences more expressive. The technical challenges start to pile up -- simulating a wet bear is especially complex. The animators work late into the night in their highly personalised offices.

Each has been decorated in a variety of themes, from Polynesian tiki to 70s-era love lounge. The animators even have their own working bars, complete with beer on tap and a collection of single malt whiskies.

Rendering -- using computer algorithms to generate a final frame -- is under way. The average frame (a movie has 24 frames per second) takes about seven hours to render, although some can take nearly 39 hours of computing time.

The Pixar building houses two massive render farms, each of which contains hundreds of servers running 24 hours a day. The film is mostly finished. The team has completed 25 of the film's sequences and is putting the finishing touches to an elaborate action scene that involves a runaway model train, smoke, dust clouds, force fields, lasers, mountainous terrain and a massive bridge explosion.
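
Some rough arithmetic on those figures (the farm size below is an assumption, not a number from the article): at about seven hours per frame and 24 frames per second, a feature-length film represents on the order of a million machine-hours.

```python
frames = 100 * 60 * 24        # a ~100-minute film at 24 fps = 144,000 frames
hours_per_frame = 7           # average quoted above
machines = 1000               # hypothetical number of render nodes

machine_hours = frames * hours_per_frame              # ~1,000,000 machine-hours
wall_clock_days = machine_hours / machines / 24       # ~42 days for one full pass
print(machine_hours, round(wall_clock_days, 1))
```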

It has taken 27 technical artists four months to perfect the scene. With only weeks to go before the film is released, the audio mixers at Skywalker Sound combine dialogue, music and sound effects. After a four-year production process, it can be hard to let go of Woody, Lotso, Buzz and the rest of the characters.

We're just forced to release it. There are some 49,000 of these sketches in the movie's story reel, which is used as a sort of rough draft of the film.

Not only that, they are massive and kick out a whole bunch of heat in new and interesting ways. Does anyone know why there's such a difference? Training neural nets is somewhat analogous to starting at the top of a mountain and looking for the lowest of the low points of the valley below. But instead of being in normal 3D space you might have thousands of dimensions determining your altitude, so you can't see where you're going, and you have to iterate and check.
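
A minimal sketch of that iterate-and-check loop, with a made-up one-dimensional "valley" standing in for the real thousands of dimensions:

```python
def grad(x):
    """Gradient of the toy loss f(x) = (x - 3)^2."""
    return 2 * (x - 3)

x, step = 10.0, 0.1
for _ in range(200):          # the same cheap operation, over and over
    x -= step * grad(x)       # take a small step downhill
print(round(x, 3))            # ends up near the valley floor at x = 3
```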

But ultimately you just calculate the same chain of the same type of functions over and over until you've reached a pretty low point in the hypothetical valley. VFX rendering, on the other hand, involves a varying scene with moving light sources, cameras, objects, textures, and physics -- much more dynamic interactions. This is a gross simplification, but I hope it helps.

Pixar renders their movies with their commercially available software, RenderMan.

In the past they have partnered with Intel [1] and Nvidia [2] on optimizations. I'd imagine another reason is that Pixar uses off-the-shelf Digital Content Creation apps (DCCs) like Houdini and Maya in addition to their proprietary software, so while they could optimize some portions of their pipeline, it's probably better to develop for more general computing tasks. Other than that, most of the tooling that modern studios use is off the shelf, for example Autodesk Maya for modelling or SideFX Houdini for simulations.

I'm assuming these 1TiB textures are procedurally generated or composites? Where do textures this large come from?

So it's not uncommon for hero assets in large-scale VFX movies to have more than 10 different sets of texture files that represent different portions of a shading model. For large assets, it may take more than fifty 4K images to adequately cover the entire model such that if you were to render it from any angle, you wouldn't see the pixelation.

And these textures are often stored as mipmapped 16-bit images so the renderer can choose the most optimal resolution at render time. So that can easily end up being several hundred gigabytes of source image data.
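
A back-of-envelope of how that adds up, using assumed counts rather than any real asset's numbers:

```python
# one 4K RGB map at 16 bits per channel, uncompressed
base_bytes = 4096 * 4096 * 3 * 2          # ~100 MB for the top mip level
with_mips = base_bytes * 4 / 3            # a full mip chain adds roughly a third
maps_on_asset = 500                       # hypothetical map count for a hero asset

print(f"{with_mips / 1e6:.0f} MB per map, "
      f"{with_mips * maps_on_asset / 1e9:.0f} GB for the asset")
# -> roughly 134 MB per map, ~67 GB for the asset; more maps or UDIM tiles
#    push this into the hundreds of gigabytes mentioned above
```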

At render time, only the textures that are needed to render what's visible in the camera are loaded into memory and utilized, which typically ends up being a fraction of the source data.

Large-scale terrains and environments typically make more use of procedural textures, and they may be cached temporarily in memory while the rendering process happens to speed up calculations.

CyberDildonics: I would take that with a huge grain of salt.

Typically the only thing that would be a full terabyte is a full-resolution water simulation for an entire shot. I'm unconvinced that is actually necessary, but it does happen. An entire movie at 2K, uncompressed floating-point RGB, would be about 4 terabytes.

Can be either. You usually have digital artists creating them.

CyberDildonics: Texture artists aren't painting 1 terabyte textures, dude.

The largest texture sets are heading towards 1TB in size, or at least they were when I was last involved in production support. I saw Mari projects well into the hundreds of gigabytes, and that was 5 years ago. Disclaimer: I wrote Mari, the VFX industry standard painting system. Some large robots in particular had 65k 4K textures if you count the layers.

I think we both realize that it's a bit silly to have so much data in textures that you have many times the pixel data of a 5-second shot at 4K with 32-bit float RGB. Even a single copy of the textures to freeze an iteration would cost thousands in expensive disk space. I know it doesn't make sense to tell your clients that what they are doing is nonsense, but if I saw something like that going on, the first thing I would do is chase down why it happened.

Massive waste like that is extremely problematic, while needing to make a sharper texture for some tiny piece that gets close to the camera is not a big deal. Texture caching in modern renderers tends to be on demand and paged, so it is very unlikely the full texture set is ever pulled from the filers.
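
A toy sketch of that on-demand, paged behaviour (illustrative only; production renderers use mature systems such as OpenImageIO's texture cache): tiles are read from disk only when shading actually touches them, and the least recently used tiles are evicted once a budget is reached.

```python
from collections import OrderedDict

class TileCache:
    def __init__(self, max_tiles=4096):
        self.tiles = OrderedDict()     # (path, tile_xy) -> pixel data
        self.max_tiles = max_tiles

    def _read_tile_from_disk(self, path, tile_xy):
        # stand-in for the real mip-aware file read
        return b"...tile pixels..."

    def get(self, path, tile_xy):
        key = (path, tile_xy)
        if key in self.tiles:
            self.tiles.move_to_end(key)            # mark as recently used
        else:
            self.tiles[key] = self._read_tile_from_disk(path, tile_xy)
            if len(self.tiles) > self.max_tiles:
                self.tiles.popitem(last=False)     # evict least recently used
        return self.tiles[key]
```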

Over-texturing like this can be a good decision depending on the production. Asset creation often starts a long time before shots or cameras are locked down. This was easier to rely on in the days before ray tracing, when texture filtering was consistent because everything was sampled from the camera. Ray differentials from incoherent rays aren't quite as forgiving.

That's not the same as putting 65,000 4K textures on something because each little part is given its own 4K texture.

I know that you know this, but I'm not sure why you would conflate those two things. There isn't a fine line between these things; there is a giant gap between that much excessive texture resolution and needing to up-res some piece because it gets close to the camera.

This is actually fairly rare. That's rarely how the timeline fits together.

It's irrelevant though, because there is no world where 65,000 4K textures on a single asset makes sense.

It's multiple orders of magnitude out of bounds of reality. I am glad that you have that insane amount of scalability as a focus, since you are making tools that people rely on heavily, and I wish way more people on the tools end thought like this. Still, it is many times what would set off red flags in my mind. I apologize on behalf of whoever told you that was necessary, because they need to learn how to work within reasonable resources (which is not difficult given modern computers), no matter what project or organization they are attached to.

Mari was designed in production at Weta, based off the lessons learned from, well, everything that Weta does. Take, for example, a large hero asset like King Kong.

Kong look development started many months before a script was locked down, and all of it will have to match shot plates in detail.

We could address each of these as the shots turn up and tell the director (who owns the company) he needs to wait a few days for his new shot, or you can break Kong into patches and create a texture for each of the diffuse, 3 spec, 3 subsurface, 4 bump, dirt, blood, dust, scratch, fur, flow, etc. inputs to our shaders.

When working, the artist uses 6 paint layers for each channel (6 is a massive underestimate for most interesting texture work). Not all of these will need to be at 4K, however. Heli-carriers, oil rigs, elven great halls, space ships, giant robots...

Things like 3 4K subsurface maps on patches on a black-skinned creature that is mostly covered by fur are grossly unnecessary no matter who tells you it's needed.

We both know that stuff isn't showing up on film and that the excess becomes a CYA case of the emperor's new clothes, where no one wants to be the one to say it's ridiculous.

This is intermediary and not what is being talked about.

Maybe some day I'll know what I'm talking about.

Which part specifically do you think is wrong?

Focusing on the technical steps and what might be technically feasible or not, versus the existing world and artists' workflows.

Also, speaking as an authority that knows best while patronizing someone who actually works in the industry.

I would say it's the opposite. There is nothing necessary about 10,000 4K maps and definitely nothing typical. Workflows trade a certain amount of optimization for consistency, but not like this.

I don't think I was patronizing. This person is valuable in that they are trying to make completely excessive situations work. Telling people or demonstrating to them that they are being ridiculous is not his responsibility, and it is a tightrope to walk in his position. This is an exercise in appeal to authority. This person and myself aren't even contradicting each other very much. He is saying the extremes that he has seen; I'm saying that 10,000 pixels of texture data for each pixel in a frame is enormous excess.

The only contradiction is that he seems to think that because someone did it, it must be a necessity.

Instead of confronting what I'm actually saying, you are trying to rationalize why you don't need to.

The artist's job is getting the shot done regardless of technology, and they have very short deadlines. They usually push the limits. Digital artists are not very tech savvy in a lot of disciplines, and it is not feasible to have a TD on hand within the delivery deadlines of the shots for a show.

The person at Weta also told you how Weta actually worked on Kong, which is very typical. You don't know upfront what you need. And you dismissed it as something unnecessary; still, it is how every big VFX studio works. If that is the case, you might have a business opportunity for a more efficient VFX studio!

Your post is an actual example of being patronizing. Before, I was just trying to explain what the person I replied to probably already knew intuitively. What has been typical when rendering at 2K is a set of 2K maps for face, torso, arms and legs. Maybe a single arm and leg can be painted and the UVs can be mirrored, though mostly texture painters will lay out the UVs separately and duplicate the texture themselves to leave room for variations.

Actually, most of the people working on shots are considered TDs.

Specific asset work for some sequence with a hero asset is actually very common, which makes sense if you think about it from a story point of view of needing a visual change to communicate a change of circumstances.

Maps like diffuse, specular and subsurface albedo are also just multiplicative, so there is no reason to have multiple maps unless they need to be rebalanced against each other per shot, such as for variations. You still never actually explained a problem or inconsistency with anything I've said.

An interesting exercise might be working out a texture budget for this asset. Set replacements, destruction, the works. Once in production you will be the only texture artist supporting 30 TDs.

How do you spend your 6 months to make sure production runs smoothly? You clearly understand many of the issues involved, but downplay the complexity of running high-end assets in a less-than-perfect production. Render time and storage is one factor, as is individual artist iteration, but the real productivity killer is inter-discipline iteration. I would trade a whole bunch of CPU and storage to avoid that. Computers are cheap, people are expensive, and people coordinating are even more so.

I do not think you said anything wrong; it is much less about what you're saying and more about how you're saying it, as if it were a simple thing to get right and people were dumb for not doing it in an optimal way.
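
For what it's worth, here is one way to rough out the texture-budget exercise proposed a few comments up; every number is an assumption, not a real production figure.

```python
patches = 60                                   # UV patches covering the hero asset
maps_per_patch = {"diffuse": 1, "spec": 3, "subsurface": 3, "bump": 4,
                  "dirt": 1, "blood": 1, "dust": 1, "scratch": 1, "flow": 1}
res, channels, bytes_per_channel = 4096, 3, 2  # 4K, RGB, 16-bit
mip_overhead = 4 / 3                           # mip chain adds roughly a third

bytes_per_map = res * res * channels * bytes_per_channel * mip_overhead
total_maps = patches * sum(maps_per_patch.values())
print(f"{total_maps} maps, {total_maps * bytes_per_map / 1e12:.2f} TB")
# -> 960 maps, ~0.13 TB before shot variations, versions, or higher-res patches
```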

That's not true in the studios I've been in. TD is usually reserved for folks closer to the pipeline who aren't doing shot work (as in, delivering shots); they're supporting the folks who are. For the record, I haven't downvoted you at all.

This is probably true today, but leaves the wrong impression IMHO. The clear trend is moving toward GPUs, and surprisingly quickly.

RenderMan is releasing a GPU renderer this year. At the time, RenderMan was still mostly rasterization-based. Transitioning that multi-decade code and hardware tradition over to GPU would be a huge project that I think they just wanted to put off as long as possible. Getting good quality results per ray is very difficult; it's much easier to throw hardware at the problem and just cast a whole lot more rays.

So, they moved over to being heavily GPU-based around that time. I do not know the specifics.

So while you can get a huge speed-up at rendering time, you get a huge slowdown creating the assets in the first place, because you have new issues: using normal maps and displacement maps instead of millions of polygons.

Keeping textures to the minimal size that will get the job done, etc.

Is any of that true? Not for the ILM use case; I expect they would stick to finely tessellated geometry.

It would require a very intelligent streaming solution. I imagine their farm usage after this could change.

Moru: Aren't CPUs sturdier too?

Amusingly, Pixar did build the "Pixar Image Computer" [1] in the 80s, and they keep one inside their renderfarm room in Emeryville as a reminder. Basically, though, Pixar doesn't have the scale to make custom chips; the entire Pixar, and even "Disney all up", scale is pretty small compared to, say, a single Google or Amazon cluster.

Until recently GPUs also didn't have enough memory to handle production film rendering, particularly the amount of texture data used per frame, which even on CPUs is handled out-of-core with a texture cache rather than "read it all in up front somehow".

Even still, however, those GPUs are extremely expensive. You'll probably start seeing more and more studios using GPUs, particularly with RTX, for shot work, especially in VFX or shorts or simpler films, but it will take more memory per card (which is finally arriving now!).

That wikipedia article could be its own story!

ArtWomb: At the highest, most practical level of abstraction, it's all software. De-coupling the artistic pipeline from underlying dependence on proprietary hardware or graphics APIs is probably the only way to do it.

On a personal note, I had a pretty visceral "anti-" reaction to the movie Soul.

I just felt it was too trite in its handling of themes that humankind has wrestled with since the dawn of time. And jazz is probably the most cinematic of musical tastes, but it felt generic here. That said, the physically based rendering is state of the art! If you've ever taken the LIE toward the Queensborough Bridge as the sun sets across the skyscraper canyons of the city, you know it is one of the most surreal tableaus in modern life. It's just incredible to see a pixel-perfect, globally illuminated rendering of it in an animated film, if only for the briefest of seconds ;)

Relative to the price of a standard node, FPGAs aren't magic: you have to find the parallelism in order to exploit it. As for custom silicon, anything on a close-to-modern process costs millions in NRE alone.

From a different perspective, think about supercomputers - many supercomputers do indeed do relatively specific things (and I would assume some do run custom hardware), but the magic is in the interconnects: getting the data around effectively is where the black magic is. Also, if you aren't particularly time-bound, why bother? FPGAs require completely different types of engineers, and are generally a bit of a pain to program for even ignoring how horrific some vendor tools are (your GPU code won't fail timing, for example).

And probably needing a complete rewrite of all the tooling they use.

Arelius: Because they have a big legacy in CPU code, and because of mostly political reasons, they haven't invested in making their GPU realtime preview renderer production-ready until very recently.

There are some serious technical challenges to solve, with not having GPUs with tons of RAM among them, but the investment to solve them hasn't really been there yet.

I'm "anyone", since I know very little about the subject, but I'd speculate that they've done a cost-benefit analysis and figured that custom hardware would be overkill and tie them to proprietary hardware, so that they couldn't easily adapt and take advantage of advances in commodity hardware.

Probably because CPU times fall within acceptable windows. That would be my guess. You can go faster with FPGAs or custom silicon, but it also has a very high cost, on the order of 10 times as expensive or more.

You can get a lot of hardware for that. In addition to what others have said, I remember reading somewhere that CPUs give more reliably accurate results, and that that's part of why they're still preferred for pre-rendered content. Ah, that makes sense. One of the things they mentioned briefly in a little documentary on the making of Soul is that all of the animators work on fairly dumb terminals connected to a back end instance.

I can appreciate that working well when people are in the office, but I'm amazed that it worked out for them when people moved to working from home. I have trouble getting some of my engineers to have a connection stable enough for VS Code's remote mode.

I can't imagine trying to use a modern GUI over these connections.

The entire studio is VDI-based (except for the Mac stations; unsure about Windows), utilizing the Teradici PCoIP protocol, 10Zig zero clients, and, at the time (not sure if they've since started testing the graphical agent), Teradici host cards for the workstations.

We finally figured out a quirk with the VPN terminal we sent home with people that was throttling connections, but the experience after that fix is actually really good. The vast majority of sessions have low latency; there are some outliers that end up higher, but we still need to investigate those.

I used to work at Teradici. It was always interesting that Pixar went with VDI, because it meant the CPUs that were being used as desktops during the day could be used for rendering at night.

Roughly speaking, the economics made a lot of sense. A guy from Pixar came to Teradici and gave a talk all about it. Amazing stuff. Interesting contrast with other companies that switched to VDI where it made very little sense. But often there is some other factor that tips things in VDI's favour.

Each VM had dedicated memory but no ownership of the cores it was pinned to, so overnight, if the "workstation" VMs were idle, another VM (also with dedicated memory) would spin up, the workstation VMs would be backgrounded, and the new VM would consume the available cores and add itself to the render farm. An artist could then log in and suspend the job to get their performance back (I believe this was one of the reasons behind the checkpointing feature in RenderMan).

The Teradici stuff was great, and from an admin perspective, having everything located in the DC made maintenance SO much better.

Switching over to VDI is a long-term goal for us at Blue Sky as well, but it'll take a lot more time and planning.

That's one reason for the checkpoint feature, yes, but there are others.

A few years ago I spoke to some ILM people about their VDI setup, which at the time was cobbled together out of Mesos and a bunch of Xorg hacks to get VDI server scheduling working on a pool of remote machines with GPUs (I think they might even have used AWS initially, but I'm not sure - this is going back a fair few years now).

I was doing a lot of work with Mesos at the time, and we chatted a bit about this as our work overlapped a fair bit. Are you still using a similar sort of setup to orchestrate the back end of this, and if so, have you published anything about it?

I've had a few people ask me about this sort of problem lately, and there aren't too many great resources out there I can point people new to this sort of tech towards.

JustinGarrison: My team at WDAS mirrored pretty closely what Pixar did with VDI, although we didn't fully switch to it for different reasons (power and heat constraints in the datacenter, and price).

There was no dynamic orchestration for us because we only had 60-ish users on full VDI VMs, but even our plan for hundreds of users was still to use Teradici and standard VMs on each host. We rendered differently than Pixar, which also made our system a bit more static: we didn't have a separate render VM and instead rendered directly on the workstation VM when users were idle or disconnected.

I wish I could answer this, but I really can't. Not because of any NDA, just that I don't know.

There are a lot of tools that Pixar uses that I don't have the full picture of how they work together.

Here at Blue Sky we are in our infancy with thin-client-based work.

Remote terminals aren't too new, as they were used for contract workers and artists who needed to WFH on the prior show, but we don't have VDI, as we still use deskside workstations. For COVID, the workstations have been retrofitted with Teradici remote workstation host cards and we send the artists home with a VPN client and zero client, utilizing direct connect.

It was enough to get us going, but we have a long road ahead in optimizing this stack and eventually, if our datacenters can handle it, switching over to VDI.

That is correct. Not to mention that the entirety of the movie is stored on NFS and can approach many hundreds of terabytes. When you're talking about that amount of power and data, it makes more sense to connect into the on-site datacenter.

Do you have any idea what storage system render farms tend to use?

Everywhere I've worked has been traditional NFS, and I've seen more than 3 times the figure you quoted working well.

How do studios manage disaster recovery? What happens when a multi-petabyte NFS server keels over? Are there tape drive backups? It seems risky to have such a critical system serviced by only a single node.

At Weta we divided up the NFS servers into "src" and "dat" - "src" was everything made by artists, and "dat" was the output from the renderwall.

We backed up "src" every night, but "dat" was never backed up. Every once in a while there would be some mass deletion event but it was always faster to re-render the lost data than to restore from backups.

Also, none of the high-end commercial filers are single-node - they're all clusters of varying sizes.

They're serviced by multiple nodes and have very strong backup policies. At Rhythm and Hues, you could request footage all the way back from the founding of the studio, for example. CG work is fairly IO-intensive for tasks like rendering, where you're reading hundreds or even thousands of geometry caches per frame.

JustinGarrison: Same here.

I work as a compositor at a visual effects studio that had to adapt, and I can say that I'm impressed too! The studio internally uses PCoIP boxes, which I don't like due to the added tiny delay (I'm a bit like those developers who complain about milliseconds of latency in their text editors). Anyway, for the work-from-home setup we are using NoMachine, which doesn't feel any different from the PCoIP boxes - unless you're using the macOS client, which is much laggier than the Windows or Linux versions.

I've tried a few remote desktop systems. The last one I tried was Parsec, which works well, but always made me feel queasy since it requires you to trust their connection service.

To be clear, I know of no security issues there, I just don't like relying on a third party for my security. NoMachine looks like a good answer for people like me. Thanks for the pointer, I'll check it out.

They're pretty great overall, and the bandwidth requirements aren't crazy high, but it does max out your data usage pretty quickly if you're capped. The faster your connection, the better the experience. Some studios like Imageworks don't even have the back-end data center in the same location.

So the thin clients connect to a center in Washington state while the studios are in LA and Vancouver. I think most connections could be massively improved with a VPN that supports Forward Error Correction, but there don't seem to be any that do.
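
For illustration, the simplest form of forward error correction is an XOR parity packet per group: any single lost packet in the group can be rebuilt without waiting for a retransmit (a hypothetical toy, not any particular VPN's scheme).

```python
from functools import reduce

def xor_parity(packets):
    """XOR all packets of equal length into one parity packet."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_missing(received, parity_pkt):
    """Rebuild the single missing packet (marked None) from the rest + parity."""
    present = [p for p in received if p is not None]
    return xor_parity(present + [parity_pkt])

group = [b"aaaa", b"bbbb", b"cccc"]
parity = xor_parity(group)
print(recover_missing([b"aaaa", None, b"cccc"], parity))   # b'bbbb'
```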

Seems very strange to me.

Where can I watch the documentary?

It's very much not technical, more focused on the emotional impact of WFH.

I just checked it out. So interesting that they use Linux for non-developer staff!

Most of the big animation and visual effects studios are Linux-based. I worked at Disney Animation on the Linux engineering team for a few years. The flexibility of Linux was a key enabler for us being able to produce movies the way we did.

Artists overall seemed to love the power of the Linux desktop setup we provided.

My understanding (I am not an authority) is that for a long time, it has taken Pixar roughly the same amount of time to render one frame of film, no matter the year: something on the order of 24 hours.

Edit: When I say "simple wall clock", I'm talking about the elapsed time from start to finish for rendering one frame, disregarding how many other frames might be rendering at the same time.

The shot just needs to be rendered before dailies in the morning.

If a frame takes more than a certain number of hours, it risks not being done by the next day, and that means each iteration with the director takes two days instead of one. The shots that took 24 hours or more usually caused people to investigate whether something was wrong. I worked on one such shot that was taking more than 24 hours: a scene in the film Madagascar where the boat blows its horn and all the trees on the island blow over.

The trees and plants were modeled for close-ups, including flowers with stamens and pistils, but the shot was a view of the whole island. One of my co-workers wrote a pre-render filter with only a few lines of code, to check if pieces of the geometry were smaller than a pixel, and if so just discard them.
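
I don't have the original code, but the idea can be sketched in a few lines (all names and numbers here are hypothetical): estimate each primitive's projected size under a pinhole camera and drop anything under a pixel.

```python
def projected_size_px(world_diameter, distance, focal_length_px=2000.0):
    """Approximate on-screen diameter, in pixels, of an object seen by a
    pinhole camera with the given focal length (expressed in pixels)."""
    return world_diameter * focal_length_px / max(distance, 1e-6)

def cull_subpixel(prims, threshold_px=1.0):
    """Keep only primitives whose projected bounds are at least a pixel wide."""
    return [p for p in prims
            if projected_size_px(p["diameter"], p["distance"]) >= threshold_px]

# a 2 mm stamen seen from 50 m away projects to ~0.08 px -> safely discarded
print(projected_size_px(0.002, 50.0))
```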

IIRC, render times immediately dropped from 24 hours to 8 hours.

It's been roughly true for 30-ish years. And it hurts. But man, are the images gorgeous!

ChuckNorris89: Wait, what? At the standard 24 fps, that's 24 days of compute per second of film, which works out to hundreds of years for the average 2-hour film, which can't be right.

In high-end VFX, a wall-clock time of some hours per frame is roughly accurate for a final 2K frame at final quality.

Hi Berkut, I'd love to get in touch with you; unfortunately I couldn't find any contact info in your profile.

You can find my email in my profile.

And each frame might have parallelizable processes to get it under 1 day of wall time, but it still costs 1 "day" of core-hours. Each and every asset, animation, lighting, texturing, sim and final comp will go through a number of revisions before being accepted. It's probably now in the thousands of CPU-hours.

It's definitely not 24 hours per frame outside of gargantuan shots, at least by wall time. If you're going by core time, then it assumes you're serial, which is never the case.

That also doesn't include rendering multiple shots at once. It's all about parallelism. Finally, those frame counts for a film only assume the final render. There's a whole slew of work-in-progress renders too, so a given shot may be rendered many times over. Often they'll render every other frame to spot-check, and render at lower resolutions to get it back quickly.

Maybe they mean 24 hours per frame per core.
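
The arithmetic behind that distinction, with an assumed farm size: the per-frame "cost" can be 24 hours while the wall-clock schedule stays sane because thousands of frames render at once.

```python
frames = 2 * 60 * 60 * 24      # a 2-hour film at 24 fps = 172,800 frames
hours_per_frame = 24           # cost per frame, counted serially
farm_machines = 2000           # hypothetical farm size

serial_years = frames * hours_per_frame / 24 / 365             # ~473 years on one machine
parallel_days = frames * hours_per_frame / farm_machines / 24  # ~86 days across the farm
print(round(serial_years), "years serial vs", round(parallel_days), "days on the farm")
```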

Again, I'm not sure whether this is core-hours, machine-hours, or wall clock. And to be clear, when I say "wall clock", what I'm talking about is latency between when someone clicks "render" and when they see the final result.

My experience running massive pipelines is that there's a limited amount of parallelization you can do. It's not like you can just slice the frame into rectangles and farm them out.

Funny thing, you sure can!

Distributed rendering of single frames has been a thing for a long time already.

What about GI? You can't just slice GI into pieces.

The way I've seen it work in the past, it totally works with GI and, more generally, ray tracing. Normally this would happen locally: if you have 8 CPU cores, each one of them gets responsible for a small region of the frame.
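
A minimal sketch of that bucket/tile split (the tile size and resolution are arbitrary): carve the image plane into small tiles and hand them to workers, whether those are local cores or whole machines. Small tiles spread the expensive regions across many workers, which is where the rough load balancing mentioned below comes from.

```python
from concurrent.futures import ProcessPoolExecutor
from itertools import product

WIDTH, HEIGHT, TILE = 1920, 1080, 32

def render_tile(origin):
    x0, y0 = origin
    # stand-in for the real per-tile work (raster or path tracing alike)
    return (x0, y0, f"pixels for tile at {x0},{y0}")

def tile_origins():
    return product(range(0, WIDTH, TILE), range(0, HEIGHT, TILE))

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:   # swap cores for whole machines to distribute
        results = list(pool.map(render_tile, tile_origins()))
    print(len(results), "tiles rendered")
```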

Now if you're doing distributed rendering, replace "CPU core" with a full machine and you have the same principle. Slicing the image plane pretty much works for parallelizing GI just as well as it does for raster. It does help to use smallish tiles; that way you get some degree of automatic load balancing.

CyberDildonics: Not every place talks about frame rendering times the same way.

Some talk about the time it takes to render one frame of every pass sequentially; some talk more about the time of the hero render or the longest dependency chain, since that is the latency to turn around a single frame. Core-hours are usually tracked separately, because most of the time you want to know if something will be done overnight or if broken frames can be re-rendered during the day. If you can't render reliably overnight, your iterations slow down to molasses, and the more iterations you can do, the better something will look.


