scripted zdepth renders
Your workflow quest hinges on the word ‘working’, though.
Anti-aliasing is a problem you can’t easily get around… there are two basic options there…
A. Don’t anti-alias. Problem completely solved, but you need to render a larger image (and scale it down afterwards to get your edges back) and you don’t get to take advantage of any adaptive anti-aliasing routines.
B. Render everything in layers (render layers, that is). 3ds Max does support render layers, and with per-layer z-buffer data, a post-app that takes advantage of this would have a ‘correct’ z-depth value for each pixel regardless of anti-aliasing (as the anti-aliasing becomes a mask for the layer’s pixels, rather than pixels getting blended together within each layer).
I say ‘correct’ in single quotes because it’s still the z-depth as measured, typically, along the ray through the center of the pixel… there’s no integration over the entire surface underneath that pixel to get a more accurate value. Unlikely to have a huge impact, but there ya go.
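To make option B concrete, here’s a minimal sketch of the compositing arithmetic in Python/NumPy. It’s purely illustrative: the layer dictionaries, names and values are my own assumptions, not a real compositor’s or 3ds Max’s data format. The point is just that color gets blended through the coverage mask while each layer’s z stays exact.

```python
import numpy as np

H, W = 4, 4

def make_layer(rgb, coverage, z):
    # One render layer: flat color, an anti-aliased coverage mask, and a single
    # un-blended z value per pixel (the depth of that layer's own hit).
    return {
        "rgb": np.full((H, W, 3), rgb, dtype=np.float32),
        "coverage": np.full((H, W), coverage, dtype=np.float32),
        "z": np.full((H, W), z, dtype=np.float32),
    }

fg = make_layer(rgb=1.0, coverage=0.5, z=10.0)   # a foreground edge pixel, half covered
bg = make_layer(rgb=0.0, coverage=1.0, z=100.0)  # the backdrop layer

# Color gets blended through the anti-aliased coverage, exactly as usual.
cov = fg["coverage"][..., None]
rgb_out = fg["rgb"] * cov + bg["rgb"] * (1.0 - cov)

# Depth is selected, not blended: each layer still carries its own exact hit
# depth, and the coverage mask only says how much of the pixel each layer owns.
z_fg, z_bg = fg["z"], bg["z"]            # both stay available to the post-app

# A single flat render would instead bake a mixed value into the edge pixels:
z_flat = fg["coverage"] * fg["z"] + (1.0 - fg["coverage"]) * bg["z"]

print(rgb_out[0, 0])           # [0.5 0.5 0.5], the normal anti-aliased color
print(z_fg[0, 0], z_bg[0, 0])  # 10.0 100.0, the per-layer depths remain exact
print(z_flat[0, 0])            # 55.0, a depth that belongs to neither surface
```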
Transparency is relatively easy, as long as it isn’t refractive transparency; the ray passes straight through. A plain z-buffer ignores transparency, so you’re out of luck on that one, but a material-replacement routine could preserve such simple transparency… the result would then work with both Falloff:Distance and Fog type solutions (as long as the fog plays nice with transparency).
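Here’s a rough Python sketch of the arithmetic such a material-replacement pass would produce for that simple-transparency case. This is illustrative only, not 3ds Max shader or MAXScript code, and the near/far parameters and function names are assumptions: depth becomes a luminance ramp, and a non-refractive transparent surface lets the ramp value of whatever sits behind it blend through via its opacity, which a plain z-buffer can’t do.

```python
# Illustrative arithmetic only, not 3ds Max shader or MAXScript code.
# Every material is imagined replaced by a self-illuminated shade that maps
# camera distance to a 0..1 ramp (the Falloff:Distance idea); simple,
# non-refractive transparency just lets the shade behind blend through.

def depth_shade(distance, near, far):
    # White at 'near', black at 'far' (parameter names are assumptions).
    t = (distance - near) / (far - near)
    return max(0.0, min(1.0, 1.0 - t))

def shade_along_ray(hits, near, far):
    # hits: (distance, opacity) pairs along one camera ray, front to back.
    # Because the transparency is non-refractive, the ray continues straight,
    # so a simple front-to-back alpha blend of the depth shades is enough.
    result, transmitted = 0.0, 1.0
    for distance, opacity in hits:
        result += transmitted * opacity * depth_shade(distance, near, far)
        transmitted *= (1.0 - opacity)
        if transmitted <= 0.0:
            break
    return result

# A half-transparent pane 20 units away in front of an opaque wall at 80 units:
print(shade_along_ray([(20.0, 0.5), (80.0, 1.0)], near=0.0, far=100.0))
# -> 0.5  (half pane-shade 0.8, half wall-shade 0.2)
# A plain z-buffer would just report the pane's depth (shade 0.8) and ignore
# everything seen through it.
```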
The above should work for the vast majority of scenes.
–
But now you’ve got something as simple as a mirror in the scene… all of a sudden you don’t just need the z-depth value for the hit on the mirror, but also for the hit of the reflected ray. You can’t get both in one go with a material-replacement method.
Worse still is having a glass object… the surface hit, the reflected ray hit, the refracted ray hit – perhaps another refracted ray hit if it hits an inner surface first, and so on.
Before long, you realize that for a true panacea you would need z-depth values per layer, per element, per ray depth, with ray-association data, and probably other stuff I’m forgetting… hellish on the renderer, and I’d imagine not a simple task to write a compositor for either.
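Just to show how quickly that bookkeeping balloons, here’s a hypothetical Python sketch of what a ‘deep’ per-pixel depth record would have to carry. Every field name is invented for illustration and doesn’t correspond to any real renderer’s or compositor’s output format.

```python
# Hypothetical data layout only: field names are invented for illustration and
# do not correspond to any real renderer or compositor format.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DepthSample:
    layer: str              # which render layer produced the hit
    element: str            # which object / render element was hit
    ray_depth: int          # 0 = camera ray, 1 = first bounce, 2 = second, ...
    parent: Optional[int]   # index of the sample whose ray spawned this one
    z: float                # distance along that particular ray

@dataclass
class DeepPixel:
    samples: list = field(default_factory=list)

# One pixel looking through a glass object that in turn reflects off a mirror:
px = DeepPixel(samples=[
    DepthSample("fg", "glass_outer", 0, None, 12.0),  # camera ray hits the glass
    DepthSample("fg", "glass_inner", 1, 0,    12.4),  # refracted ray hits the inner surface
    DepthSample("fg", "mirror",      2, 1,    30.0),  # then continues on to the mirror
    DepthSample("bg", "wall",        3, 2,    55.0),  # reflected ray finally hits the wall
])

# Even this single pixel needs four tagged samples, and a compositor has to walk
# the parent chain to decide which z applies to the effect it is building.
camera_hit = min(s.z for s in px.samples if s.ray_depth == 0)
print(camera_hit)  # 12.0
```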
So instead, for these special cases, studios tend to send the output to the 2D guy and ask them to paint things to specification, and they’ll have it done in no time; it might not be mathematically accurate, but it’ll be done, and it’ll probably look just as good as, or even better than, the mathematically accurate result would have… usually this is part of the regular post-work process anyway. They can still benefit greatly from the z-depth data output described in the earlier two sections, of course.
Hmmmm… actually, a lot of people are using this stuff nowadays… PRMan, for example…