These are the original MODO Rendering Notes from the hellomodo blog I ran for a while (back in 2013). I extracted all of these posts by Allen Hastings (developer of the MODO render engine) from the Luxology forums at that time. Keep in mind there have been quite a few changes to MODO since then, so some of the stuff here may be out of date.

As a bonus I throw in the MODO Shading Rate Cheat Sheet:

[Image: modo_Cheat_Sheet_The_Shading_Rate_001]

—- Materials:

The exit color appears when all available bounces are used up. In this case, it’s happening when rays experience total internal reflection more than 32 times. Basically the light is trapped inside the glass between the parallel surfaces, bouncing in a zig zag pattern. This is the same principle that optical fibers use. As you discovered, if the glass is thick enough, the zig zag pattern will be larger and there will be fewer than 32 bounces before the ray reaches the end of the object, allowing it to exit naturally. My recommendation is to pick an exit color similar to the average background color but a bit darker and less saturated, simulating absorption and scattering (which affects those ray paths more since they’re inside the glass for a greater distance and glass isn’t perfectly transparent).

Reflections can occur whenever the refractive index changes, so you’ll get them from the outer (air to glass) and inner (glass to air or glass to liquid) surfaces of the walls of the bottle.

It looks to me like your modo metal has too much diffuse shading and too high of a roughness. My recommendation is to set up the material in a physically based way, which means turning on Conserve Energy and Match Specular (so there’s no need for separate reflection textures — specular textures control both). The material should be primarily blurry reflection, with little or no diffuse except in the painted areas. Specular Fresnel would be close to 100%, and Specular Amount would come from your image map. To give the texture a stronger effect (like the Octane renders) you can lower the Gamma in the texture properties. After doing this, a roughness texture might not even be necessary, but if you do use one you may need to lower the High Value of the texture.

You’re right, specular highlights are simply blurry reflections of direct light sources. The purpose of the Match Specular feature in 501 is to control both phenomena together, so that they have the same intensity, color, and Fresnel effect. They already had matching roughness and anisotropy. Match Specular should always be turned on for physically based rendering (along with Conserve Energy and Blurry Reflections).

I played with the scene a bit in 401 and I think the pattern may be a consequence of the way bump mapping is calculated for procedural solid textures. Basically for every point being shaded, modo evaluates the texture function at six nearby points (offset by a small amount in the positive and negative XYZ directions) to determine how to perturb the surface normal vector. In this case I think there may be a correlation in the value of the cellular texture at the six evaluation points (in other words, the cellular function may be somewhat periodic in a way that interacts with the spacing of those points). I don’t remember ever seeing this before so I suspect it’s pretty rare. It could probably be avoided by changing the texture scale or using a different kind of projection (but it sounds like you’ve solved it by using noise instead). I also tried the scene in the latest build and didn’t see the problem, but that’s not surprising since bump mapping uses a different algorithm in 501.
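
To make the six-point scheme concrete, here is a rough Python sketch of bump perturbation for a procedural solid texture. It assumes a scalar texture function tex(p) and numpy arrays for points and normals; the function name, the strength factor, and the offset size are my own illustration, not modo's internals.

```python
import numpy as np

def bump_normal(p, n, tex, strength=0.1, eps=1e-3):
    """Perturb the unit normal n at point p using central differences of a
    scalar solid texture tex(p), evaluated at six nearby points."""
    offs = np.eye(3) * eps
    # Texture gradient from the +/- X, Y, Z sample pairs
    grad = np.array([(tex(p + offs[i]) - tex(p - offs[i])) / (2 * eps)
                     for i in range(3)])
    # Only the component tangent to the surface should tilt the normal
    g_tan = grad - np.dot(grad, n) * n
    n_new = n - strength * g_tan
    return n_new / np.linalg.norm(n_new)
```

Seen this way, the artifact described above makes sense: if the cellular function happens to repeat at something close to the sample spacing, the six evaluations become correlated and the perturbation picks up a regular pattern.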

In areas where you see the background through a transparency mapped surface, the transparency is part of the surface’s shading (computed by firing refraction rays) and therefore it’s affected by the Shading Rate. The result may be fewer samples and lower DOF quality than when you see the background directly. There are two things you can do to improve the situation. You can specify a finer Shading Rate in the shader item (0.1 is best), causing more refraction rays to be fired per pixel in the transparent areas. Or you can use a stencil map instead of a transparency map, causing the initial camera rays to no longer hit the stenciled-out areas. The second method is most likely faster.

If the noise is coming from recursive blurry reflection (i.e. if the post is reflecting another blurry reflective surface), then dropping the Ray Threshold to zero should help. If the noise is due to the post reflecting complex geometry like the grass, then making the grass hidden to reflection rays and inserting a simple green surface that is only visible to reflection rays should help. The number of AA samples shouldn’t have a big effect on blurry reflections. More AA samples is not necessarily better, and might even make blurry reflections worse if the shading rate is low. This is because shading a pixel once with 2000 reflection rays is better than shading it 256 times with 8 rays each time (2000 divided by 256), since the ray distribution will be more even in the first case. I think a lot of that is due to the Ray Threshold. With 2048 indirect rays, each one will have an importance below the default threshold, so some of the second bounce rays will be terminated (and those that survive will be boosted to keep the results unbiased). Dropping the threshold to zero reduces the noise.

By “self-illuminating” I assume you mean having a nonzero Luminous Amount? In that case be aware that a Transparency Amount texture map will not affect the luminosity. In other words, a fully transparent surface (one that passes through all light from behind it) can still emit its own light. So you may want to set transparency to 100% and apply a Luminous Amount texture map instead, or you can apply maps to both amounts (and one map can be an instanced version of the other so that their settings can be shared).

Basically I would ask myself what would the label look like if it was fully lit on the front side and there was an opaque black surface just behind it — that’s what the diffuse color map should look like. The second question is what would the label look like if there was no front lighting but instead there was a bright white surface just behind it — that’s the transparency color map. If the maps were defined that way, then the corresponding diffuse and transparency amounts would be 100% (because the two color maps fully account for those two aspects of shading). Edit: Actually that’s not quite right since modo ensures energy conservation by reducing diffuse shading based on the transparency amount. So the transparency amount would have to be less than 100% in areas where you want some diffuse shading. But personally I think this label would look good with no diffuse shading in the blue letters, just colored transparency.

Lots of transparent surfaces can cause the number of rays (and thus render time) to grow exponentially. Imagine a single ray from the camera hitting a glass surface. To shade the hit point, a refraction ray and a reflection ray have to be fired. If one of those new rays hits glass, at least two more rays are fired, and so on, until the maximum reflection or refraction depth is reached. After eight bounces that initial camera ray may have generated 256 new rays. Perhaps even more than that, since shadow rays may also have to be fired from each hit point to each direct light. Blurry reflection, blurry refraction, or dispersion can also multiply the ray count. Some things that can reduce the ray count are to lower the maximum ray depths, raise the Ray Threshold, and make sure the glass surfaces have a diffuse amount of exactly 0%.
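
The growth is easy to quantify. A tiny sketch of the arithmetic, assuming every hit spawns exactly one reflection and one refraction ray and ignoring shadow rays:

```python
def rays_at_depth(depth, branching=2):
    """Number of rays spawned at a given bounce depth when every hit
    fires both a reflection and a refraction ray."""
    return branching ** depth

for depth in range(1, 9):
    print(depth, rays_at_depth(depth))   # depth 8 -> 256 rays per camera ray
```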

The rainbow effect seen in real diamonds is due to dispersion, a phenomenon in which the refractive index of a material varies with wavelength. The modo 401 renderer takes dispersion into account.

Bump mapping is part of shading, so the antialiasing of the bumps is determined by the Shading Rate (as opposed to geometric edges, whose antialiasing is determined by the AA Samples setting). Using a finer rate in the shader properties of the hull would help. Shading refinement does this automatically in many cases, but the thin panel lines can leave gaps which are too large for refinement to handle. For image-based bump maps, the Antialiasing controls in the image map properties can also prove useful.

To make an image-mapped surface that is a constant brightness regardless of scene lighting, just set the material amounts (diffuse, specular, etc) to 0%, set Luminous Intensity to 1.0, and set the effect of the image map to Luminous Color. If you’re rendering with gamma correction and the image map is already gamma corrected, set its Gamma (in the image map’s properties) to one divided by the output gamma. For example, enter “1/1.6” if using an output gamma of 1.6. Now you have an image-mapped surface that looks exactly like the original image in any lighting conditions.

Actually solid projections are great for procedural wood if you want something to appear to be carved from a solid block, since you can see the tree rings (concentric cylinders really) cut through the object in a realistic way. UV mapping would be more like placing a veneer on the surface. If you don’t like how the center of the tree trunk runs down through the head and neck, you could always offset the texture locator or rotate it.

You are correct, the Specular Color is multiplied by the Specular Amount to give the specular coefficient actually used for shading. For example, if the amount is 20% and the color is 1 1 1, that’ll have exactly the same effect as an amount of 100% and a color of 0.2 0.2 0.2, or an amount of 50% and a color of 0.4 0.4 0.4, etc. This same logic applies to the Diffuse Color and Diffuse Amount and the Reflection Color and Reflection Amount. As far as whether to apply an image map to the color or the amount, if the map is grayscale then it doesn’t matter, you can apply it to either effect. If the map is a color image then it should be applied to the color.

One of the advantages of procedural textures is that you often don’t need any UVs. The Wood texture in particular works well with a Solid projection, which makes objects appear to be carved out of a block of wood, as in this modo render by Alex Rooth.

By the way, normally lightbox materials should have zero diffuse, and just rely on their luminosity to emit light. Otherwise things like shadows and GI have to be computed on their surface.

White in an alpha channel does mean opaque. But for texturing purposes, the alpha channel is considered a grayscale image like any other. When applying grayscale images to a material amount (such as a Transparency Amount texture), black means 0% and white means 100% of the named effect. By inverting the alpha image, areas of white alpha become black and thus 0% transparent, and areas of black alpha become white and thus 100% transparent.

You didn’t say how the texture was applied. If its effect is Diffuse Color, then in the white alpha areas, the leaf will use the colors of the image map, and in the black alpha areas, the underlying diffuse color of the material will show through. If you want the black alpha areas to be transparent, then you’ll need to duplicate the texture and give the copy an effect of Transparent Amount or Stencil. Specify Alpha Only to make the new texture use the alpha channel of the image, and invert it as needed.

Do you mean your light source is a luminous polygon? In that case the image map should be used as a Reflection Amount texture instead of (or in addition to) a Specular Amount texture. The Specular Amount only controls highlights from direct light sources, while the Reflection Amount controls highlights from geometry (such as luminous polygons) or the environment. Whether you’re using direct lights or luminous polygons, in order to see a highlight you’ll have to position the light so that it bounces off the surface and into the camera. Increasing the Roughness can help you find the “sweet spot” by making the highlight bigger.

Just add a gradient inside the group that you want to fade in or out, set the gradient’s Input Parameter to Incidence Angle, and set its Effect to Group Mask.

Turning up the number of indirect rays or blurry reflection rays means that each ray will have a lower importance, a measure of how much a particular ray can affect the color of a pixel (up to a maximum of 100%). When these rays hit other surfaces, the resulting secondary reflection or refraction rays are more likely to be skipped if their importance falls below the ray threshold. So your findings are not surprising for multiple bounce situations. However in single bounce situations, such as non-cached indirect illumination from the environment (other than caustics), or blurry reflections of the environment, the ray threshold doesn’t really matter, since there are no secondary rays. The quality in those cases is determined simply by the number of rays.
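
As an illustration of how an importance threshold with unbiased boosting can work (the same idea as Russian roulette in path tracing), here is a small sketch; the function and the exact survival rule are my assumptions, not modo's implementation.

```python
import random

def maybe_trace(importance, ray_threshold):
    """Decide whether a secondary ray is traced, and how much to boost the
    survivors so the average result stays unbiased."""
    if importance >= ray_threshold:
        return True, 1.0                      # important rays are always traced
    survival = importance / ray_threshold     # weak rays survive with low odds
    if random.random() < survival:
        return True, 1.0 / survival           # the boost compensates for the skips
    return False, 0.0                         # ray terminated

# With 2048 indirect rays, each one carries only a tiny share of the pixel's
# importance, so secondary bounces are frequently skipped; the boosted
# survivors are what shows up as extra noise.
```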

Fine scale grids can be tough from a sampling standpoint. Here are some things you can try:

– If the fabric is an image map with a transparency effect, you can make sure Antialiasing is on in the image map’s properties and increase the Antialiasing Strength as needed.
– If the fabric is a procedural grid with a transparency effect, you can manually antialias it by blurring it for more distant shots. For example, instead of a Line Width of 20% and a Line Value of 0%, you could use a Line Width of 100% and a Line Value of 80%, which gives the same average effect but without moiré patterns. Or you could increase shading density by upping the overall Antialiasing setting to 64 samples and using a finer Shading Rate (like 0.1) in the fabric’s shader item.
– If the fabric is an image map or procedural grid with a stencil effect, it will act just like a grid modeled as actual geometry. In that case, texture antialiasing and shading rates are irrelevant, and it may be best to just boost Antialiasing to 64 samples.

modo computes the effect of luminous polygons in a physically based manner, so for a particular point on a surface being shaded, the amount of illumination received from a luminous polygon is a function of three things:

1. The radiance of the luminous polygon’s material. This is basically the “luminous amount” of the material expressed in physical units.

2. The apparent size of the luminous polygon as seen by the point being shaded. For example, if you double the width and height of the luminous polygon, the amount of illumination would increase by a factor of four. Or if you kept the size the same but moved the polygon to be twice as far from the point being shaded, the amount of illumination would go down by a factor of four, because it would appear to have only one quarter as much area as seen by the point being shaded. This is the cause of the well known inverse square law for light falloff.

3. The incidence angle of light from the luminous polygon to the surface. If light from the luminous polygon is hitting the surface at a glancing angle, it will have less effect than if it is shining directly down onto the surface. Specifically, the illumination will be proportional to the cosine of the angle between the surface normal and the direction to the light. (The sketch below puts these three factors together.)
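
A compact way to combine the three factors, for an unoccluded emitter that is small relative to its distance; a rough illustration assuming numpy arrays and made-up names, not modo's sampling code.

```python
import numpy as np

def irradiance_from_luminous_poly(radiance, area, emitter_normal,
                                  emitter_center, shade_point, surface_normal):
    """radiance * apparent size (solid angle) * cosine of the incidence angle."""
    to_light = emitter_center - shade_point
    dist_sq = np.dot(to_light, to_light)           # the inverse square law lives here
    to_light = to_light / np.sqrt(dist_sq)
    # Projected area of the emitter as seen from the shaded point
    solid_angle = area * max(np.dot(emitter_normal, -to_light), 0.0) / dist_sq
    cos_incidence = max(np.dot(surface_normal, to_light), 0.0)
    return radiance * solid_angle * cos_incidence
```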

I’m guessing that the variation you’re seeing may be due to the second factor I mentioned. The amount of illumination is very sensitive to the size of the luminous polygon and its distance. Another point is that the luminous polygon must be big enough to be reliably sampled by the semi-random indirect rays. If it’s too small, it might be missed by those rays, in which case either the number of rays should be increased or the luminous polygon should be replaced with an equivalent area light source.

Turning on the Conserve Energy check box replaces the traditional Lambert/Blinn/Phong shading model (which doesn’t obey physical restrictions) with a more complex model based on the work of Ashikhmin and Shirley. Here’s the abstract from their 2000 JGT paper: “We present a BRDF model that combines several advantages of the various empirical models currently in use. In particular, it has intuitive parameters, is anisotropic, conserves energy, is reciprocal, has an appropriate non-Lambertian diffuse term, and is well-suited for use in Monte Carlo renderers.” The behavior you noticed is due to the non-Lambertian diffuse term. One of the nice effects it captures is the way diffuse reflection goes to zero at glancing incidence. Note that to properly use any physically plausible BRDF, the specular and reflection amounts should be the same.

A simple way to think about the specular component of shading is that it represents blurry reflections of direct light sources (directional, point, spot, etc). This can be computed analytically in many cases, making it cheap in terms of rendering time. The reflection component represents reflections of everything else (geometry and the environment). Blurriness is optional in this case, since it can’t be computed analytically — it requires tracing lots of rays and is therefore expensive. In the real world these two components are the same thing, so the distinction is artificial, but it’s traditional in computer graphics to specify them separately.

Two things that can smooth out blurry reflections are reducing the material’s roughness or increasing the number of reflection rays. Sometimes thousands of rays are needed if the roughness is high.

Everything behind the transparent polygon is being seen via refraction rays, and the number of those per pixel is affected by the Shading Rate. I would try using a finer rate (like 0.1) in the properties of the shader applied to the transparent polygon.

Ambient lighting shouldn’t affect metals directly since it’s part of a surface’s diffuse shading (which for metals is zero). But it will affect diffuse surfaces seen reflected in the metal.

From a physically based standpoint I would recommend not using a reflection color map. Most materials do not tint the color of their specular and mirror reflections, the exceptions being bare metals (like gold or copper) and certain clear-coated materials (like red or blue anodized aluminum). In this case I would just leave the reflection color white.

Conserve Energy uses a more physically based BRDF that enforces energy conservation and reciprocity. For example, when used with Fresnel, it accounts for the fact that less energy is available for diffuse reflection at glancing angles because of the increase in specular reflection, an effect missed by most reflection models. Another difference is that the Specular Amount is the actual fraction of specularly reflected light (at normal incidence), rather than just the height of the specular peak as in the Blinn-Phong model. So realistic values should be used when CE is on. By the way, it’s easy to layer different shading models in the Shader Tree if you want. You can combine a Lambert diffuse with anisotropic specular or whatever using a variety of blend modes, and there’s no limit to how many materials, textures, and shaders can be applied to a surface.

Bump maps basically work by comparing the brightness of the image map at the current pixel with the brightness one pixel over in the U direction and one pixel over in the V direction. These two differences deflect the surface normal along the U and V tangent vectors. Since they only consider a tiny area of the image at a time, bump maps are really more for small scale details as opposed to large scale dents. You’re probably getting more of the dented effect in the low res render because texture antialiasing is blurring the small scale details of the map. I bet you could get a similar look at higher resolutions by increasing the AA Strength of the map along with the material’s Bump Strength. If that blurs the details too much, then you could add a second bump map with low or no AA Strength and a lower amplitude (which you can do by making the texture’s Low Value and High Value closer together, for example 40% and 60% instead of 0% and 100%). So essentially you’d have a high amplitude blurry bump map for big features and a low amplitude sharp bump map for details, with both referencing the same image.

The higher the AA Strength of the texture, the blurrier it will be. In theory, 100% should be just the right amount to suppress texture aliasing without being too blurry, but this depends on some internal calculations which determine how many texture pixels are found within one screen pixel. These calculations are different for each projection type. A strength of 200% would make the texture twice as blurry as the default. For more sharpness you might try something like 50%, or turn it off. You should probably leave Pixel Blending on all the time. It determines what happens when the camera is close enough that a single texture pixel covers multiple screen pixels (texture magnification), sort of the opposite of texture antialiasing (or “minification”).

I’ve been pondering your label problem. If you’ve double-checked that all three gamma settings are 1.0, and it’s not an image loading issue, then I can think of three other possibilities.

1. Other shading components: The six basic components of shading (diffuse, specular, mirror reflection, transparency, subsurface scattering, and luminosity) are additive. Adding a constant value to the RGB of the diffuse image map would have the effect of desaturating it. One way this could happen is if there was any reflection of a grayish environment (like the one in your F8 Preview screen shot). Even if the material’s Reflection Amount was 0%, it could become nonzero due to Fresnel or Reflection Amount texture layers.

2. Color clamping: The RGB values of the label diffuse image map are approximately 0.9, 0.6, 0.3, but the RGB values resulting from diffuse shading can be higher if the total illumination is strong enough. When viewed on a monitor or saved in a non-HDR image, each value will be limited to 1.0 (corresponding to 255 in a 24-bit image), and the result will be a desaturated color. As an extreme example, if the amount of light caused the diffuse shading to be double the map color, the result would be 1.8, 1.2, 0.6, which would be clamped to 1.0, 1.0, 0.6, which has a different hue and saturation than the original color.

3. Colored lighting: If the average color of the light illuminating the label was not white or gray, that would change the apparent hue and saturation. Even if the direct light’s color was 1.0, 1.0, 1.0, indirect illumination (especially from the environment) can still tint the overall color. For instance, if the average light color was proportional to 1.0, 1.5, 3.0, then the 0.9, 0.6, 0.3 label color would turn perfectly gray!

One of the effects of the Conserve Energy option is to make the Specular Amount be the actual fraction of light energy that is specularly reflected from the surface. This means that the peak brightness of specular highlights will go up as the Roughness comes down (think of a bell curve — as it gets narrower, it must become taller to preserve the area under the curve). These peaks can be very high compared with the non-energy-conserving BRDFs that are traditional in computer graphics. So with CE, it’s important to use real-world Specular Amounts, which are typically less than 5%. Anyway my guess is that the specks in your image were reflections or refractions of overly bright specular highlights, and turning off CE fixed the problem by greatly reducing the peak brightness of the highlights. Turning down the Specular Amount would have done the same thing.

The Shader Tree made setting up this scene really easy. I didn’t even have to assign different material names to groups of polygons in the traditional way, but instead used item masks (since each ball was in its own mesh layer). If I wanted to add, say, a dusting of snow on top of all the objects, I could just insert a slope-based gradient above the masks. You might say the Shader Tree provides a more holistic approach to surfacing.

Unlike LightWave, modo considers back-facing polygons when tracing rays, so it can determine when a ray is leaving a material without the need for “air polygons” (although those should still work too).

The texture antialiasing feature makes your “average surrounding pixels” idea efficient by precomputing smaller versions of the image (half size, quarter size, etc), which are often called mipmap levels. You can get smoothly varying levels of blur by using trilinear or tricubic interpolation among pairs of these levels. The trick is to determine the correct amount of blurring, which should be based on the solid angle of each ray. “Thin” rays, like when the environment is seen directly by the camera, call for little or no blurring, whereas “thick” rays, like those used when sampling indirect lighting or soft reflections, can benefit from more blurring to reduce noise.
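
A sketch of the standard level selection and blending, assuming a precomputed list of mip images with a bilinear sample(u, v) method (that interface, and the function names, are hypothetical here):

```python
import math

def mip_level(texels_per_screen_pixel):
    """The usual choice: log2 of how many texture pixels fall inside one
    screen pixel, clamped at the sharpest level."""
    return max(0.0, math.log2(max(texels_per_screen_pixel, 1e-6)))

def trilinear_sample(mip_chain, u, v, level):
    """Blend the two nearest mip levels so the blur varies smoothly
    instead of popping between precomputed images."""
    lo = min(int(level), len(mip_chain) - 1)
    hi = min(lo + 1, len(mip_chain) - 1)
    t = min(max(level - lo, 0.0), 1.0)
    return (1 - t) * mip_chain[lo].sample(u, v) + t * mip_chain[hi].sample(u, v)
```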

—- AO:

The lack of bump mapping when rendering only an AO output was due to an overzealous optimization. There are several render outputs that non-displacement textures do not affect (like Depth and Surface ID) so to save time, modo skips such textures if those are the only kinds of outputs being rendered. Ambient Occlusion was mistakenly included in that group. But if any outputs not in that group (such as Final Color) are also being rendered, then bump mapping will be evaluated and AO outputs will account for it.

Ambient occlusion is basically the fraction of the environment visible to a point on a surface. It’s computed by sending many rays from each point and counting how many of them reach the infinitely distant environment and how many are blocked by geometry. If they all reach the environment, the result is white, and if they all hit geometry, the result is black. The computation is independent of any lighting in the scene. With that in mind, the result inside an enclosed space is naturally going to be black (since each point is fully occluded). However it’s possible to make the occlusion rays ignore geometry past a certain distance by specifying a maximum occlusion range. If no geometry is hit within that distance, the ray is assumed to reach the environment. So if the range is set smaller than the size of an enclosed space, non-black results will be possible within that space.
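
A minimal estimator along those lines, assuming a scene object with an intersect(origin, direction, max_dist) predicate; that interface, and the cosine weighting of the rays, are my assumptions rather than modo's exact method.

```python
import math, random

def cosine_direction(normal):
    """Random hemisphere direction around the normal (cosine weighted),
    using the uniform-point-on-sphere-plus-normal trick."""
    while True:
        v = [random.uniform(-1, 1) for _ in range(3)]
        r2 = sum(x * x for x in v)
        if 1e-8 < r2 <= 1.0:
            s = 1.0 / math.sqrt(r2)
            d = [v[i] * s + normal[i] for i in range(3)]
            length = math.sqrt(sum(x * x for x in d))
            if length > 1e-8:
                return [x / length for x in d]

def ambient_occlusion(point, normal, scene, num_rays=64, max_range=None):
    """Fraction of rays that reach the environment: 1.0 renders white,
    0.0 renders black. max_range=None means an infinite occlusion range."""
    clear = sum(1 for _ in range(num_rays)
                if not scene.intersect(point, cosine_direction(normal), max_range))
    return clear / num_rays
```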

Infinite range allows the rays to hit the walls of the room, which stop them from seeing the environment. When the range is limited to one meter, the rays ignore any geometry beyond that distance (like the walls) and just return the environment color, which is probably much brighter than what the walls would have been. This can be useful for ambient occlusion baking but is very bad for GI renders!

—- Displacement:

The main mechanism controlling the dicing of surfaces into micropolygons is the Displacement Rate, which is adaptive based on how big the resulting micropolygons will appear given the render resolution, distance from the camera, zoom factor, etc. The Minimum Edge Length is a secondary mechanism originally intended to prevent too much dicing in areas outside the camera view, but it’s rarely needed and can be safely set to zero in almost all cases.

It looks like the displacement map has been saved in a standard 8-bit image format, which would only give 256 possible levels of displacement. In that case those steps probably represent single bit differences in pixel values. If GeoControl can save images in a higher fidelity format like 16-bit TIFF or OpenEXR, try one of those.

If you’re set on using displacement, I would recommend a Minimum Edge Length of zero and as small a Displacement Rate as your memory can handle. If the back side won’t be visible, make sure it has an undisplaced material. Also make sure the displacement texture image is a bit blurred so that you don’t have sharp edges cutting across the “grain” of the micropolygon grid. Another option to consider is using a bump map instead of displacement. There would be no memory issues, and the results can look great as shown in this modo render by lightshock_studio:

The micropolygons generated by the renderer are as “lightweight” as possible and lack the adjacency data that would be needed to compute average vertex normal vectors for smoothing. The idea is to make the micropolygons small enough (roughly pixel sized) so that normal vector interpolation isn’t necessary. But if the displacement rate is too coarse, then facets may be visible as you discovered. Instead of increasing the displacement rate to save memory, it’s usually possible to trim the displaced areas to approximately fit the camera view, since micropolygons outside of the view are often the cause of high memory consumption. If you really want LW-style displacement (in other words, larger micropolygons that have smoothing), you can freeze the displaced subdivision surfaces into triangles before rendering, and then the displacement texture can be turned off.

Was there micropolygon displacement on any surfaces? Displacement can increase memory usage proportional to the number of pixels in the render. For example, doubling the width and height in pixels can result in four times as many micropolygons. To prevent this you’ll need to increase the Displacement Rate. Normally modo can free micropolygons that are no longer needed from memory during rendering, but they may all be needed at the same time if GI is on. The numbered frame storage buttons work by saving and loading full sized frame buffers, so they don’t work for Write Buckets to Disk renders. It’s best to save the rendered image before recalling a previous frame, or let modo automatically save the image at the end of the render (by rendering a one frame animation for example).

modo’s displacement is similar to PRMan’s in many ways. Both programs “dice” segments of objects into micropolygons as those segments become visible, and both can free the memory used by micropolygons in segments that are no longer needed. However modo scenes generally rely on ray tracing for things like shadows, reflections, and global illumination, which means that most or all of the micropolygons in the entire scene may be needed at any time during the render. PRMan traditionally uses non-ray traced techniques like shadow mapping, reflection mapping, and baked global illumination, which means that only the micropolygons in the buckets currently being rendered need to be in memory. PRMan is also better at not displacing parts of surfaces that are offscreen, saving more time and memory. And of course the Pixar guys have been optimizing their displacement for a few decades now… Actually I think modo’s displacement does all right. As a test just now I added a default sphere, fit the camera to it, added a fractal noise displacement, and hit F9. With full antialiasing, ray traced shadows, and close to a million polygons it rendered in five seconds on my old laptop.

What you’re thinking of is the tessellation of subdivision surfaces, in which modo picks a single subdivision level for an entire surface if Adaptive Subdivision is turned on. It’s true that the subdivision level it chooses can be more than enough for small subdiv polygons if the same surface also contains much larger subdiv polys. However the way modo dices surfaces into micropolygons for displacement is much different and does not have this limitation. The dicing is completely adaptive, making roughly pixel-sized micropolygons everywhere. So a large original polygon will automatically be diced into a lot more micropolygons than a small one. Your example is just a consequence of the recursive nature of the dicing, in which polygon edges that are too long (relative to the Displacement Rate) are cut in half, and then those that are still too long are cut in half again, and so on. What happened in your second image is that an additional round of splitting occurred within some or all of the edges inside the larger polygons. An example may make this more clear. Let’s say you have a triangle whose edges are 8 pixels long as seen by the camera. Its edges will need to be cut in half three times to make all final edges no longer than a pixel (in fact they’ll end up exactly one pixel long). But if the initial triangle had edges that were 10 pixels long, then after three rounds of splitting they’ll be 1.25 pixels long, so an additional round of splitting is necessary. The final edges will be 0.625 pixels long and the number of micropolygons will naturally be higher.

At each step, the decision to split or not is made on an edge-by-edge basis, so the amount of subdivision will often vary even within a single initial polygon (critical for things like terrain), making displacement dicing much more adaptive than the way subdivision surfaces are tessellated. Even if you start with a mix of polygon sizes, all micropolygon edges will end up close to the Displacement Rate in length. This is different than adaptive subdivision, where a mix of polygon sizes may cause some of the resulting final edges to end up much smaller than the Subdivision Rate.
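
The splitting rule boils down to counting halvings; a one-function sketch that reproduces the 8-pixel and 10-pixel examples above:

```python
import math

def splitting_rounds(edge_pixels, displacement_rate=1.0):
    """How many times an edge must be cut in half before it is no longer
    than the Displacement Rate (both measured in pixels)."""
    if edge_pixels <= displacement_rate:
        return 0
    return math.ceil(math.log2(edge_pixels / displacement_rate))

for edge in (8.0, 10.0):
    rounds = splitting_rounds(edge)
    print(edge, rounds, edge / 2 ** rounds)   # 8 px -> 3 rounds (1.0 px),
                                              # 10 px -> 4 rounds (0.625 px)
```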

modo’s displacement works similarly to that of PRMan (RenderMan). Chunks of geometry called segments are diced into micropolygons at the time they are first needed. If memory consumption reaches the size limit specified in Preferences, old micropolygons in segments that are no longer needed (from completed buckets for example) are automatically flushed from memory to make room for new ones. This works best when global illumination has been baked (as it usually is in PRMan), because otherwise the indirect rays will be frequently hitting segments scattered all over the scene, so all the micropolygons may have to be present at the same time. Another case that can cause trouble is when the initial geometry isn’t sufficiently subdivided into segments to begin with. For example, making a displaced landscape out of a single polygon means it all gets diced at once, but this can be easily fixed by using Shift-D a few times first.

My guess is that the curved boundary is where micropolygons are getting close enough to the camera that they need to be split again to keep their edge length within the displacement rate. In other words, the closer they are, the bigger they appear in screen space, and at some point they get big enough to need further subdivision. This is normally not noticeable, since the feature size of most textures is larger than the size of a micropolygon, but in this case the texture appears to have a much higher frequency. One thing you could try is to use a tiny displacement rate (like 0.2) and rely on the Minimum Edge Length to control the size of the micropolygons. This removes the “adaptiveness” of the displacement and makes the edges a constant size in world space. Start with a big edge length and lower it carefully, keeping in mind that the number of micropolygons will quadruple every time the length drops in half. The faint checkerboard is probably due to the initial triangulation of the surface having alternating diagonals. You might try triangulating it in advance (using the Triple command) and making sure all the diagonals are oriented the same way.

The problem is that the image map doesn’t have a high enough resolution for such extreme closeups. The facets visible in your render are not micropolygons but rather image map pixels. Your displacement settings are actually OK, as you can verify by replacing the image map with a high frequency noise displacement (for example, a solid projection with a 10 mm texture size).

For displacement, the low and high values of the image map should be -100% and 100%. The material’s displacement distance then specifies the amount that the surface should be raised wherever the image map is white. The surface will be lowered by that amount wherever the image map is black, and it will remain undisplaced wherever the image map is 50% gray.

modo does support floating point displacement maps (.hdr and .exr formats for example), but 16 bits should be enough in most cases, providing over 65,000 steps of displacement for each micropolygon vertex. And for showing detail, the precision of the in-and-out dimension is not as important as the rate of variation between one micropolygon vertex and its neighbors (which depends on the width and height of the image). One reason the details are more apparent in the ZBrush image is that it seems to have some specular reflection. Since specular shading varies more rapidly with incidence angle than diffuse shading, it emphasizes small changes in the surface normal.

Micropolygon displacement is done in two stages. First the surfaces are “diced” into micropolygons such that the length of each micropolygon edge in pixels is roughly equal to the Displacement Rate. If the original surface has Smoothing turned on, the diced (but not yet displaced) surface is affected by the average vertex normals in order to make the shading look roughly the same as on the original undiced object. That’s why the faces of the green cube are bulging out — the vertex normals at the corners of the faces are slightly diverging. The second stage involves actually evaluating the displacement texture, resulting in a value from -100% to 100% at each micropolygon vertex. Each vertex is moved along its normal by that value times the surface’s Displacement Distance. If the normals on two sides of an edge are different, then a crack can open up if the displacement texture is nonzero along that edge. Using a sufficiently large Smoothing Angle can ensure that the normals are the same on both sides of an edge. Cuts can be made close to an edge (parallel to it) to prevent adjoining surfaces from curving.
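
A sketch of just the second stage, assuming the dicing has already produced micropolygon vertices with smoothed normals; the texture callable and names are illustrative.

```python
def displace_vertices(vertices, displacement_texture, displacement_distance):
    """Move each diced vertex along its normal by the texture value
    (expected in -1..+1) times the material's Displacement Distance."""
    result = []
    for position, normal in vertices:        # each entry: (xyz tuple, unit normal)
        value = displacement_texture(position)
        result.append(tuple(p + n * value * displacement_distance
                            for p, n in zip(position, normal)))
    return result
```

Written this way, the crack problem is easy to see: if the normals on the two sides of a shared edge differ, the same texture value pushes the two copies of that edge to different positions.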

It’s kind of ironic that LightWave-style displacement would be considered the regular kind. Displacement mapping was originally introduced by the folks at Pixar in the 1980s and they’ve always used the micropolygon method, the results of which you can see in just about anything rendered with their PRMan product (Jurassic Park dinosaurs for example). When I was developing LightWave Layout, I considered that method too difficult at the time, so I settled for an approach that would simply move existing vertices according to a texture, requiring the user to make sure their objects were subdivided in advance. Later (in LW 6 I believe) we added the ability for subpatches to be tessellated at render time, which helped make the LW version a little more like “true” displacement. Now in modo we finally have something very much like PRMan’s displacement.

Displacement interpolates positions based on average vertex normals. In other words, smoothing affects the shape. For example, if you apply displacement to a faceted sphere (even a Constant texture with a value of zero), it’ll become a smooth sphere, no subdivs needed. Anyway, one solution in your case would be to turn off smoothing for the polygons affected by the displacement. Or cut and paste back that section of the screwdriver (without merging) so that the average vertex normals at each end aren’t affected by the adjacent sections.

In my experience, the most common cause of running out of memory with displacement happens when displaced surfaces extend outside of the camera’s field of view. This is common when the camera is hovering over a landscape, because parts of the surface directly below the camera may be finely diced into vast numbers of micropolygons even if the camera is pointed straight ahead. Dicing is based on the size of a pixel projected onto the surface, so the closer the surface is to the camera, the smaller the micropolygons will be. There are several solutions to this problem which can be used individually or in combination:

– Trim the surface to better fit the camera’s field of view. A good example of this is the 2PolyCanyon.lxo scene which you can find in the 201 content’s Landscape folder.
– Increase the Displacement Rate, which is the approximate edge length of micropolygons measured in pixels. Each time the rate is doubled, the number of micropolygons should drop by about a factor of four. The drawback is that micropolygons bigger than a pixel can sometimes be seen as facets.
– Increase the Minimum Edge Length for micropolygons. This can be better than increasing the rate since it can limit the density of the micropolygons right under the camera (outside its field of view) without affecting those parts seen by the camera.
– Divide up the surface rather than using one big polygon. Micropolygon dicing is performed on a geometry segment (a subset of the mesh typically containing tens of polygons) whenever a ray hits the segment’s bounding box. If the mesh is just one big polygon, it will be one segment so the whole thing will get diced all at once. But if it is divided into enough polygons in advance, it’ll consist of multiple segments, and the segments outside the camera’s field of view may never need to be diced at all (unless a shadow, reflection, or GI ray happens to hit them). By the way, you can visualize the segmentation of your models by setting Color Output to Segment ID.

modo knows the maximum displacement distance of each material, so it can determine if the displaced surface can possibly appear in the current bucket. If so, the micropolygons are generated. Even if they end up outside the current bucket, they’ll probably be needed soon by another nearby bucket. This is one reason that the Hilbert bucket order is beneficial, because buckets close to each other on the screen are rendered close together in time. The more typical left to right, top to bottom bucket order is not as good because of the big leaps after each row.

LW doesn’t have micropolygon displacement, so you’d have to subdivide that single polygon in Modeler or make it a subpatch with an extremely high level. But micropolygons are still better because they’re automatically generated based on image space (in other words, their size in world coordinates is smaller in the foreground where more detail is needed and larger in the background). The funny thing, as Brad just pointed out to me, is that whenever I take a shot at LightWave’s renderer, I’m really criticizing my (younger) self! Anyway, kidcodea is right about buckets facilitating the rendering of huge scenes by only keeping in memory the geometry within the current bucket, a technique made famous by Photorealistic RenderMan. modo has this capability (which we call geometry caching), but the use of global illumination means rays are being fired all over the place, in which case a lot of geometry outside the current bucket is also needed. PRMan is typically used in situations where all shading can be done locally, using shortcuts like precomputed shadow maps, reflection maps, and ambient occlusion instead of real global illumination.


—- Caustics:

The Indirect Caustics option determines whether reflection or refraction should be computed when an indirect ray hits something. It looks like it might be turned off in your scene, or set to Reflection Only, preventing indirect rays from passing through the glass.

It’s not only true for renderers but also in the real world. You can think of caustics as basically a series of overlapping images of the light sources, so spread out sources (such as area lights or luminous polygons) are going to make blurrier caustics than tight concentrated sources (such as the sun, or the overhead spotlights used in jewelry stores).

Previous versions of modo could render caustics due to indirect light sources such as luminous polygons. The old shot glass scene is an example of that. To get the best results out of 401’s new direct caustics, use concentrated light sources like spotlights rather than big sources like area lights. Directional lights work too (preferably with a spread angle of zero), but you may need to adjust the light’s position and photon emitter size (the dotted square drawn around the light in OpenGL views) so that the column of photons will just encompass the reflective or refractive surfaces.

When Direct Caustics are on, the photons account for all light passing through transparent objects, so shadow rays can immediately stop when they hit one. Without caustics, modo continues tracing shadow rays until they hit the light, passing through multiple layers of transparency if necessary to determine how dark the shadow should be. That extra ray tracing might be more expensive than caustics in some scenes.

The amount of noise in the caustics will depend primarily on two things — the number of indirect rays, and the nature of the environment image. The example I posted worked well partly because the bright areas of the environment image were relatively large. If you use images containing small concentrated light sources, then the number of indirect rays required to get smooth results could be much higher. This can be improved by blurring the image (increasing the Minimum Spot setting for example). Also, you may have already tried this, but the reflection and refraction noise in your renders could probably be improved by reducing the Ray Threshold setting.

Caustics are a natural result of the indirect lighting calculation, in which rays are traced to sample the hemisphere above the point being shaded. If some of those rays happen to refract through transparent surfaces and then hit something bright (like luminous geometry or an HDR environment), then that point is in a caustic. Generating caustics from direct light sources usually requires a preprocess in which rays are traced in the other direction (from lights to surfaces), and the results stored in a “photon map” which can be consulted during the main rendering process.

—- Anisotropy:

Anisotropic specular highlights and blurry reflections simulate the effect of microscopic scratches on a surface. If there is no anisotropy texture map, then the scratches will be parallel to dPdu, the direction in which the U coordinate increases in the UV map specified in the material’s properties. Anisotropy texture maps modify the direction of the scratches relative to dPdu. Basically the red and blue components encode the cosine and sine of the rotation angle, where no red or blue means -1 and full red or blue means +1. A neutral anisotropy map would thus be full red and half blue, since the cosine and sine of zero degrees are +1 and 0. Full blue and half red would be perpendicular to this, etc. We chose this method because it was already an established standard used by XSI, and because it has a significant advantage over the first method we tried, which was a grayscale map that directly encoded the rotation angle itself. The problem with that method was that circular patterns had artifacts where the rotation angle wrapped around, at the border between black and white (which would end up gray due to texture filtering, and gray encodes an unwanted direction). The XSI method has no discontinuities at these wraparound points.
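
The encoding decodes like this (a small sketch; the function name is mine):

```python
import math

def anisotropy_angle(red, blue):
    """Map channel values in 0..1 to cos/sin in -1..+1 and recover the
    scratch rotation angle relative to dPdu, in degrees."""
    cos_a = 2.0 * red - 1.0
    sin_a = 2.0 * blue - 1.0
    return math.degrees(math.atan2(sin_a, cos_a))

print(anisotropy_angle(1.0, 0.5))   # neutral map: 0 degrees
print(anisotropy_angle(0.5, 1.0))   # perpendicular: 90 degrees
```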

Normally there shouldn’t be any render differences between operating systems or CPUs, but certain conditions like uninitialized data might cause different results. This is just a wild guess, but if the metal material had a nonzero Anisotropy value and no UV map was specified (or if a map was specified but some vertices in the metal surface were not members of that map), that might do it, since the direction information that anisotropy relies on would be uninitialized.

Anisotropic highlights and reflections definitely require a UV vertex map (no texture is needed though). The reason is that the anisotropy effect is based on a direction on the surface (technically called a tangent vector), and that’s supplied by either the U or V axis of the vertex map. You can think of this as the direction of microscopic scratches on the surface, like brushed metal. When the Anisotropy setting is positive, the scratches lie parallel to the U axis, and when it’s negative, they’re parallel to the V axis. The magnitude of the Anisotropy value (how far it is from zero) determines how strong the effect is.

Even without supersampling, there’s still more noise in the shadows of your last render than I would expect for 6000 indirect rays. What I would do is drag out a small render region in one of the noisy areas of the image and run some experiments. For example, if the noise is triggering shading refinement in a particular pixel, then instead of a single 6000 ray indirect evaluation, you might end up with something like eight separate 750 ray evaluations, which is not as good because the rays are not as well stratified. So one experiment would be to disable refinement by setting the Refinement Shading Rate to 1.0. Another experiment would be to set the Ray Threshold to 0.0% and see if that improves the noise. Using a small render region allows these kinds of tests to be conducted more quickly.

Anisotropy is highly dependent on UV coordinates, so you might want to select the affected polygons and make sure their UVs look good in the UV view. For example, they shouldn’t appear to be collapsed into a point or line.

Anisotropy requires a UV map to define the “brushing” direction. If no UV map is chosen, modo will try to synthesize a direction based on the cross product of the normal vector and the world “up” vector (the positive Y direction). This works pretty well on spheres and upright lathed items (chess pawns for example), but the result is undefined on surfaces that face straight up as in this case. We should probably change it to just ignore the Anisotropy setting if no UV map is chosen.

Anisotropy is designed to work with a UV map. Positive anisotropy values simulate the effects of microscopic scratches along the direction of increasing U, and negative values simulate scratches along V. The specular highlights or blurry reflections will be elongated perpendicular to the scratches, as seen on brushed metal surfaces.

—- Camera:

One technique you might consider is to make the camera level (no X rotation) and instead use Film Offset Y to frame the shot. That way all the vertical lines of the building will be perfectly vertical in the image.

Front projected textures are only defined within the frustum of the camera, as if the camera was acting as a slide projector. But reflection rays will most likely be hitting the environment in places outside of that frustum, so what happens there? That would depend on the texture’s repeat settings, so probably the texture is just being tiled. More useful results could be obtained by reflecting an environment in which the texture image completely surrounds the camera, such as a spherical or lightprobe projection.

Incidentally the reason for the film size and resolution channels being independent is that back in 2003 when modo was being designed, it seemed that most 3D users were being trained in Maya, and that’s how they did it. The film size was considered part of the camera attributes, and the resolution was part of what they called Render Globals. One way to think about this is to imagine that the film size and focal length define an optical system that records frames on “analog” film, which are then digitized at a certain resolution on a film scanner. If these two rectangles have different aspect ratios, then the Film Fit channel determines how they should align with each other.

Set the Projection Type in the texture locator to Front Projection, then pick the camera from which you want to project in the Projection Camera popup. The other thing you’ll want to do is make sure that the film size of the camera is the same shape as the image. The default film size is 36mm x 24mm, which is a 3:2 aspect ratio (matching a standard film SLR). If you’re projecting an image with a 4:3 aspect ratio, I recommend setting the film width to 32mm.

Orthographic cameras are designed to match perspective cameras at the focus distance. This was done to make it easy to switch between the two types — if your subject is at the focus distance, the same portion of it will be visible in either projection type. As far as the change in AO goes, my guess is that it’s due to the near clip distance of the AO rays increasing as the camera gets farther away. The near clip distance of AO rays (actually any kind of ray) depends on the length of the incoming ray in order to reduce the chances of self-intersection due to the limited precision of floating-point numbers. In other words, the potential error in the position of a ray hit point depends on the length of the ray, and any new rays spawned from that hit point have to ignore surfaces within a certain distance or else they risk hitting the same surface they’re supposed to be leaving.


—- Lights:

By default, the shadows cast by opaque objects are the physically correct darkness. In other words, the light source will have no direct effect on the shadowed areas, which will be as dark as if the light didn’t even exist. I can think of several ways to make shadows darker than this. First I would take a look at what else is adding light to the shadowed areas. This will typically be indirect illumination, or else the constant ambient light if indirect illumination is turned off. It’s easy to dim the ambient light, and indirect illumination can be reduced by dimming indirect light sources (typically the environment), or by reducing the Indirect Illum Multiplier in the shader item.

Another approach is to give the light source a negative shadow color, so that it actually makes the shadowed areas darker than if the light didn’t exist. This is a little tricky because most (all?) color pickers don’t let you set negative colors, but you can still do so using the Channels viewport. Select the light material and look for the lines Shadow Color R, Shadow Color G, and Shadow Color B. Those values will be 0% by default, but you can click on them and enter -50% or something like that. Finally you could lower the output gamma, which will have a nonlinear darkening effect (it will darken shadows relatively more than bright areas).

In a vacuum the radiance from a light isn’t attenuated at all. It isn’t attenuated much in an atmosphere either (although a little bit may be scattered or absorbed). However in both cases the illumination of a surface due to the light would fall off proportional to the inverse distance squared, simply because the apparent size of the light source gets smaller by that much. Here’s another way to think about it: Imagine you’re in space and you take a digital photo of the sun. Let’s say the sun occupies four pixels of the sensor. If you move twice as far away from the sun and take another picture, the sun will appear half as wide and will only fill one pixel of the sensor. That’s where the inverse square law comes from. But that one pixel will be just as bright as any of the four original pixels because radiance isn’t attenuated in a vacuum.

If Simple Shading is on, soft shadows will be computed using the entire cylinder but diffuse and specular shading will be computed using the center of the cylinder. Also I think OpenGL uses only the center, so don’t judge the light’s effect based on OpenGL (turn on Ray GL to see it).

Normally lights obey the “eyeball” to determine whether or not to participate in rendering, but this behavior can be overridden. Check the Render popup in the light’s properties. If it reads “Yes” then the light will affect the render regardless of the eyeball. Set it to “Default” to make it obey the eyeball.

In 302, the number of samples used for area light shading was the full number specified in the light’s properties unless the importance of the shading evaluation was below a certain threshold, in which case only one sample would be used. (Importance is a measure of how much a particular calculation will affect a pixel.) In 401, the number of samples used for area light evaluation varies continuously with the importance of the shading evaluation, so in some cases fewer samples will be used (speeding up rendering). If there is noise, increasing the number in the light’s properties will help. Also make sure Simple Shading is turned on, and that the Shading Rate is not too low.

The simplest way is to greatly increase the number of samples in the properties of whichever light is causing the grain. An alternative would be to replace the area light with a luminous polygon of the same size and intensity, which should provide grain-free results assuming Irradiance Caching is enabled.

Shadow softness is going to depend on the Radius setting of the spotlight and the distance between the shadow casting and shadow receiving objects. Increase those two things to increase the width of the soft edges, and if necessary increase the light’s Samples setting to reduce noise.

The intensity falloff of lights in modo is physically based, following the inverse square law. The Radius setting for point lights is not a falloff radius. What it does is make the point light cast soft shadows as if it were a sphere of the specified radius. It will increase the render time of the light proportional to the Samples setting, so I would leave the radius at zero unless you really need soft shadows.

when modo loads an IES or EULUMDAT file, the maximum luminous intensity from the table of angles is placed in the light item’s intensity channel, and the intensities in the table are then remapped to range from 0 to 1. During rendering, the appropriate value from the table is looked up (based on the direction from the light to the point being shaded) and multiplied by the light’s intensity channel, reconstructing the original intensity specified in the file. This makes the new photometric lights consistent with modo’s existing point and spotlights, in which the intensity channel in the light’s properties represents the maximum intensity (at the center of a spotlight cone, for example). It also means you can read the maximum intensity numerically after loading an IES file and directly edit that value if desired.
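
A rough Python sketch of that normalization (the helper names and the intensity table are invented for illustration; this is not modo's actual loader):

```python
def load_photometric(intensities):
    # Mimics the loading behavior described above: the peak goes into the
    # light's intensity channel and the table is remapped to the 0..1 range.
    peak = max(intensities)
    table = [value / peak for value in intensities]
    return peak, table

def sample(peak, table, index):
    # At render time the table entry for the light-to-surface direction is
    # multiplied by the intensity channel, restoring the original value.
    return peak * table[index]

peak, table = load_photometric([120.0, 300.0, 750.0, 500.0])
print(peak)                    # 750.0 ends up in the intensity channel
print(sample(peak, table, 2))  # 750.0, the value from the file
```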

Spot lights are just a special kind of point light, and point light sources are subject to inverse square falloff. To understand why this happens, think about how the photons emitted by a point source spread out. The density of photons at two meters from the source is four times less than the density at one meter. In contrast, the photons from a directional source are all following parallel paths. Their density does not change as they travel and thus there is no falloff. The default directional light has a radiant exitance of 3.0 W/m2, and its effect on a surface does not change as the surface is moved toward or away from the light. A point or spot light with a radiant intensity of 3.0 W/sr will have the same effect as the directional light when the surface is one meter away from it. If the surface is moved to be two meters away, the effect will be the same as a directional light with a radiant exitance of only 0.75 W/m2 (four times less). At ten meters, the effect will be 100 times less than at one meter.
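
The same arithmetic as a tiny Python sketch (assuming the surface directly faces the light):

```python
def point_light_irradiance(radiant_intensity, distance):
    # Irradiance (W/m^2) on a surface facing a point source of the given
    # radiant intensity (W/sr) at the given distance in meters.
    return radiant_intensity / distance ** 2

print(point_light_irradiance(3.0, 1.0))   # 3.0, same as the default directional light
print(point_light_irradiance(3.0, 2.0))   # 0.75, four times less
print(point_light_irradiance(3.0, 10.0))  # 0.03, 100 times less than at one meter
```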

Shadow rays (actually all rays) have a near clip distance, a tiny distance they are allowed to travel before any hit testing occurs. This is necessary to prevent unwanted self shadowing, which would occur if the shadow ray happened to hit the same surface it was leaving (which is possible due to the limited precision of floating point numbers). I think the speckle in the box scene is a point on the wall which is so close to the ceiling that the near clip distance allows the shadow ray to skip over the ceiling and hit the light. If there were another polygon slightly above the ceiling (like a roof) then the shadow ray would hit that and the light leak would be gone.
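
A generic sketch of the idea in Python (this is how ray tracers commonly handle it, not modo's exact implementation, and the epsilon value is arbitrary):

```python
RAY_EPSILON = 1e-4  # the "near clip" distance for rays

def in_shadow(intersect, origin, direction, light_distance):
    # intersect() is assumed to return the distance to the nearest hit along
    # the ray, or None if nothing is hit. Hits closer than the epsilon are
    # ignored, which prevents a shadow ray from re-hitting the surface it
    # just left, but as described above it can also let the ray skip over
    # real geometry that happens to be that close.
    t = intersect(origin, direction)
    return t is not None and RAY_EPSILON < t < light_distance
```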

The principle behind soft shadows is the same for all light types. Basically a bunch of shadow rays are fired from each surface point toward various sample locations on the light source (or in various directions for a spread directional light). The fraction of rays that hit something before reaching the light determines the amount of shadow. The penumbra (partial shadow) regions are where some of the rays made it through and others were blocked. So with that in mind, a point light with a particular radius and a circular area light with the same size (twice the radius in width and height) and with the same number of samples should take about the same amount of time. Personally I would prefer the area light since its samples will be distributed a bit more evenly, giving slightly smoother results.
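
Roughly, in Python (a sketch of the principle only; the blocked() test and the light sample positions are placeholders):

```python
import random

def soft_shadow_factor(point, light_samples, blocked):
    # blocked(a, b) is assumed to return True if any geometry lies between
    # surface point a and light sample position b.
    unblocked = sum(0 if blocked(point, s) else 1 for s in light_samples)
    return unblocked / len(light_samples)  # 1.0 = fully lit, 0.0 = fully shadowed

# Jittered sample positions across a square area light, 0.2 m on a side,
# sitting 2 m above the origin:
samples = [(random.uniform(-0.1, 0.1), 2.0, random.uniform(-0.1, 0.1))
           for _ in range(64)]
```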

They’re not necessarily better, but are just a different way of lighting a scene. Luminous polygons affect the scene as part of indirect illumination, so they can take advantage of irradiance caching (see ShotGlass.lxo in the Food directory for a good example). Also they look nice in mirror reflections. On the other hand, if they’re small it may take a lot of indirect rays to find them, resulting in noise or blotchiness. Area lights are part of direct illumination, so they don’t require indirect illumination to be enabled. The renderer has no problem finding them because it knows exactly where they are in advance, and the smoothness of their shading is easy to control using the Samples setting. From an energy standpoint, a luminous polygon with a particular radiance will have the same effect as an area light with the same size and radiance.

The larger the light (as seen by the surface being shaded), the more samples will be necessary to get smooth shading. Instead of dome lights, I recommend Indirect Illumination, which will gather light from the environment and can take advantage of irradiance caching.

—- SDS:

If the bounding box of a geometry cache surface (which is similar to an item) is completely offscreen, its subdivs won’t be tessellated, and if a segment’s bounding box is completely offscreen, its polygons won’t be diced into micropolygons, because they’re not being hit by any camera rays. However, unseen surfaces or segments may still need to be tessellated or diced if they are hit by other types of rays. For example, if there is a mirrored ball in the frame, it’s going to be casting reflection rays in lots of directions that will probably hit offscreen segments (even those behind the camera), causing them to be diced and displaced. Likewise if GI is on, the indirect rays will be bouncing around hitting segments all over the scene. That’s why when you start an irradiance cache render, the first few “dots” of the first pre-pass are slow — the geometry cache is being populated due to all the indirect ray hits.

Brad was talking specifically about subdivision surface tessellation, in which each mesh item is assigned a subdivision level based on the screen size of its largest edge and the Subdivision Rate. In the future we’d like to allow the subdivision level to vary within a single item, which is a bit tricky since adjacent subdivs can have different levels. However the terrain issue John brings up is a separate matter, because modo’s micropolygon displacement works in a different way and is not based on items. Instead, smaller pieces of geometry that we call segments (typically about 16 polygons) are diced into micropolygons only when needed during rendering (when a ray hits the segment’s bounding box). Unlike subdivision surface tessellation, the density of this dicing can vary greatly within a single item or even within a single original polygon, since each micropolygon is individually sized according to the Displacement Rate. One thing you can do to reduce unnecessary dicing in areas outside of the camera’s field of view is to make sure that the original undisplaced surface is broken into enough segments. For terrains, this can often be done by just hitting Shift-D a few times. You can visualize how big the resulting segments are by rendering a Segment ID output. The advantage of PRMan is that their segmentation is smarter (based on a recursive process they call splitting). This is another area we’re planning to work on.

I’ll give you an example. Andrew Brown’s radio car had a single mesh layer containing highly detailed wheels and tires with knobby tread patterns, all modeled out of tiny subdivision surfaces. That wouldn’t be a problem if that’s all there was in the layer — the subdivs were small as seen by the camera, so the mesh layer would automatically get a low subdivision level, keeping the final polygon count reasonable. However, the mesh layer also contained a full length axle between two of the wheels, basically a cylinder made of long thin subdivs extending across many pixels as seen by the camera. Adaptive subdivision therefore was choosing a high subdivision level for the mesh layer. Since all subdivs in a layer use the same level, the wheels and tires were being very finely tessellated, causing the final polygon count to explode. Possible solutions include cutting and pasting the axle into a different mesh layer, or chopping up the axle into smaller pieces, or turning off Adaptive Subdivision. In a future release we plan to solve this by allowing the subdivision level to vary within a single mesh layer.

The way modo’s adaptive SDS rendering works is that a subdivision level is chosen for each mesh item so that the largest edge in the final subdivided geometry appears no longer than the Subdivision Rate (which defaults to 10 pixels). It determines this based on the distance from the center of the mesh item to the camera, the frame size, and the camera’s field of view. For example, items close to the camera will automatically be subdivided more finely than distant objects, and zooming in or increasing the resolution will also cause things to be subdivided more finely. If a mesh item contains a mix of large and small subdiv polys, the largest one is used in determining the subdivision level for the entire item. This may cause a high subdivision level to be chosen which can be overkill for the smaller subdivs, resulting in a huge number of polygons, possibly too many to fit in memory. The easiest solution is to raise the Subdivision Rate (doubling it can result in up to four times fewer polygons), but it can also be helpful to put large subdivs into their own layers, separate from layers containing lots of small subdivs.
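
Here's a rough Python sketch of how a per-item level might fall out of that rule (the halving-per-level assumption and the helper itself are illustrative, not modo's actual code):

```python
def subdivision_level(largest_edge_pixels, subdivision_rate=10.0, max_level=8):
    # Each subdivision level roughly halves edge length, so keep going until
    # the largest edge projects to no more than the Subdivision Rate (pixels).
    level = 0
    edge = largest_edge_pixels
    while edge > subdivision_rate and level < max_level:
        edge *= 0.5
        level += 1
    return level

print(subdivision_level(300.0))        # one long edge (an axle, say) forces level 5
print(subdivision_level(12.0))         # tiny subdivs would only need level 1
print(subdivision_level(300.0, 20.0))  # doubling the rate drops the level to 4
```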
—- General Notes:

It sounds like the geometry cache was “thrashing” in your SSS experiment. Displacement plus high resolution means there will be a lot of micropolygons. If the geometry cache size preference is exceeded when a bucket finishes, modo will start freeing chunks of these micropolygons to make room for new ones in the next bucket. This can be helpful for large scenes in which polygons in one bucket don’t affect other buckets, but things like SSS or global illumination throw around so many rays that it isn’t really practical — the geometry in old buckets is still needed, so it has to be recreated. If this happens frequently it’s known as thrashing and can greatly increase render time. The best solution is increasing the cache size preference. It can even be set higher than the system RAM amount. The only downside is the possibility that if all the RAM does get filled up, virtual memory might be used which would again be slow.

If I wanted to simulate more bounces I wouldn’t change the gamma, but would instead adjust the ambient light settings, which are used when the bounce limit is reached. The color of the ambient light should be roughly an average of the scene (so a slightly orange white in this case).

Colored surfaces that are brighter than the white level of the render can appear white. Let’s say one of the stripes has a luminous color of 0.5 0.5 1.0 (light blue) and a luminous intensity of 3.0. The indirect illumination on the ground will have a nice blue tint, but when you see the stripe directly it will have a final color of 1.5 1.5 3.0. Since the red, green, and blue components are all above the white level, the stripe will simply appear white. The red stripes are probably a bit more saturated, so when you see them slightly dimmed in a reflection, they appear pink. If you want the directly viewed stripes not to appear white, you can give them a highly saturated color. For example, if the luminous color was 0.0 0.0 1.0 (pure blue), the stripe would still appear blue no matter how bright it was. Another option would be to turn off Clamp Colors and turn up the Tone Mapping percentage, which tends to preserve the colors of very bright surfaces.
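
The numbers above, worked through in a quick Python sketch (simple clamping, ignoring tone mapping):

```python
def displayed_color(rgb, white_level=1.0):
    # Naive clamp: any channel above the white level gets clipped, so a
    # bright but desaturated color ends up displaying as pure white.
    return tuple(min(channel, white_level) for channel in rgb)

luminous_color = (0.5, 0.5, 1.0)  # light blue
intensity = 3.0
final = tuple(c * intensity for c in luminous_color)  # (1.5, 1.5, 3.0)
print(displayed_color(final))            # (1.0, 1.0, 1.0) -- appears white
print(displayed_color((0.0, 0.0, 3.0)))  # (0.0, 0.0, 1.0) -- pure blue stays blue
```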

Black or white pixel blocks and/or magenta IC values both indicate that NaNs (invalid floating point numbers) are being generated during shading. They’re usually caused by either illegal math operations or uninitialized variables. For example, there was a bug in the initial 401 release where some fur shading code was taking the inverse cosine of a number that in rare cases was very slightly more than one. Cosines can never be more than one so this caused a NaN. We added some “bulletproofing” to prevent this case in one of the 401 service packs. I tried loading the robot scene in my copy of 401 but my test renders did not show any NaN symptoms. I know James was getting them this morning so I’ll have to try rendering on his machine tomorrow. However I did notice something strange about the scene — some of the surfaces have textures which reference UV maps that are not defined on that surface. For example, the arms have some textures using the Arms UV map (no problem there), but they also have a texture using the Legs UV map. Looking at the Info viewport, I saw that the arm vertices do not define any Legs UV values. Likewise the torso has a texture that uses the Head UV map. So perhaps NaNs are arising not due to math operations but rather due to uninitialized variables. That would also explain the seemingly random nature of the artifacts, since they would vary based on whatever was previously in memory.

I believe the PSD saver plug-in requires a full size buffer while saving (thus undoing the benefits of Write Buckets to Disk), making it increasingly unlikely that a single chunk of memory that big can be found as the resolution goes up. Most of the other savers should be able to work line by line and avoid this problem. OpenEXR might be a good option, or the built-in Targa saver.

A couple comments: The Film Fit setting is an idea from Maya to resolve cases where the film size has a different aspect ratio than the rendered image (as determined by the resolution). If you make the aspect ratio of your film match that of your rendered image, then you needn’t worry about Film Fit — it won’t matter. Borders when compositing can be caused by having the alpha affect the foreground image. Ideally the alpha should only affect the background plate, cutting a hole in it into which the foreground should be added at full strength. That’s because foreground elements rendered against black are already “premultiplied” by alpha thanks to antialiasing. Also the gamma should be 1.0 for any elements used in compositing.

Both images show the effect of bad shading normals, which are sometimes written by other applications when saving object files. In the first image, the shading normals are so different from the geometric normals that the irradiance gradients become very confused. As Greg suggested, the solution is to delete the bad normals in the Vertex Maps list, which will let modo compute correct normals.

Magenta is a special color that indicates a NaN value in the irradiance cache (NaN means “Not a Number”, an illegal floating point value). These are usually caused by a particular surface whose shading sometimes evaluates as a NaN when the surface is hit by indirect rays, causing the resulting irradiance value to also be a NaN (which modo detects and changes to magenta). Sometimes the surface will have bad texture settings or UVs for example. Also there was a bug in the original 401 where fur could sometimes cause NaNs, although that was fixed in SP1 or SP2. A good way to track down these issues is to turn off various material groups until the problem goes away, or alternatively turn them all off first (so everything is plain white), then turn them back on until the problem reoccurs. Once you know which material or texture causes the problem, you can look at its settings or UVs to see if there is anything strange.

The projection type should be set based on the image. If the image looks like a mirrored ball contained in a black square, use Light Probe. If it looks like a panorama (usually rectangular), then use Spherical. An environment may not appear to produce much light if it is dark above the horizon. Be sure the projection axis is correct (for example, it should be Y for a Y-up scene), and increase the environment’s intensity if necessary. And of course make sure Indirect Illumination is on.

We don’t know enough about your scene to determine what is causing the out of memory condition. If the cause is displacement, then increasing the Displacement Rate should help. If it’s adaptive subdivision, then increasing the Subdivision Rate should help. If it’s the frame buffer size, then turning on Write Buckets to Disk should help.

Actually Preview is primarily the work of our man in France, Gregory Duquesne. Glad you like it!

One change is the way shading refinement works. In previous releases, the Refinement Threshold was interpreted as a simple difference in pixel color values. For example, the default threshold of 10% corresponded to a difference of 0.1. If two neighboring pixels had values of 0.1 and 0.21, they would be reshaded (using the more expensive Refinement Shading Rate) because their difference was 0.11, greater than the threshold. But if their values were 0.1 and 0.19, they wouldn’t be reshaded, because the difference was only 0.09, less than the threshold. This worked well in bright areas but often left “jaggies” along reflection edges or hard shadow edges in dark areas, often making it necessary to reduce the threshold value quite a bit. In 401, the Refinement Threshold is interpreted as a measure of contrast rather than an absolute difference, so the difference in pixel values required to trigger reshading is now much smaller in dark areas and larger in bright areas. Darker scenes will get more refinement than before, especially if their threshold had been reduced to avoid aliasing in previous releases. Increasing the threshold should help speed up such scenes.
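
To make the difference concrete, here's a small Python comparison (the contrast formula in the second function is an assumption, not necessarily the exact one modo uses):

```python
def needs_refinement_old(a, b, threshold=0.10):
    # Pre-401 behavior as described above: a fixed difference in pixel value.
    return abs(a - b) > threshold

def needs_refinement_new(a, b, threshold=0.10):
    # 401-style behavior: treat the threshold as a contrast measure.
    # Relative difference is one common choice for such a measure.
    return abs(a - b) / max(a, b, 1e-6) > threshold

print(needs_refinement_old(0.10, 0.16), needs_refinement_new(0.10, 0.16))
# False True  -> the same small difference now triggers reshading in dark areas
print(needs_refinement_old(0.80, 0.86), needs_refinement_new(0.80, 0.86))
# False False -> bright areas need a larger difference before reshading
```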

Keep in mind that the true geometry, the polygonal surface actually hit by traced rays, is faceted. The way renderers create the illusion of smoothness on such surfaces is by computing average surface normals at the vertices and interpolating these normals across polygons (a technique often called Phong smoothing). This works very well in most cases but there are situations where it can cause artifacts, especially if the facets are too big. In this case the problem may be due to the two closely spaced glass layers and the linear nature of the normal interpolation. The smoothed normals are continuous across polygon edges, but the rate of change of the normals is not, and the edges where the rate of change varies on the front glass surface are slightly offset from where they vary on the back glass surface. Since refraction is extremely sensitive to surface normals, this can result in discontinuities in the directions of the refracted rays leaving the back surface. My recommendation is to run smooth subdivide (with a smaller angle than the default of 89.5 degrees) a few times on the glass. This makes the true geometry much closer to the curved surface that the Phong normal interpolation is simulating, and doesn’t seem to affect render times very much.
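
A minimal Python sketch of that normal interpolation on a single triangle (the general technique, not modo's code; the example normals are arbitrary):

```python
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def phong_normal(n0, n1, n2, u, v):
    # Interpolate the three vertex normals across a triangle using
    # barycentric coordinates (u, v), then renormalize. The blend is
    # linear, which is why its rate of change can jump at polygon edges.
    w = 1.0 - u - v
    blended = tuple(w * a + u * b + v * c for a, b, c in zip(n0, n1, n2))
    return normalize(blended)

print(phong_normal((0.0, 0.0, 1.0), (0.3, 0.0, 0.95), (0.0, 0.3, 0.95), 0.25, 0.25))
```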

Because fog is based on exponential decay, you can get equally good fog effects at any scale. It all depends on the fog density (the decay constant), which determines how quickly fog increases with distance. For example, a full scale scene with a density of 0.25% will look the same in terms of fog as a 1/100 scale scene with a density of 25%.
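
A quick Python check of that scale equivalence (assuming the density acts as a per-unit-distance decay constant):

```python
import math

def fog_transmission(density, distance):
    # Exponential decay: the fraction of the background still visible
    # through 'distance' of fog with the given density.
    return math.exp(-density * distance)

# A full scale scene at 0.25% density vs. the same shot at 1/100 scale with 25%:
print(fog_transmission(0.0025, 400.0))  # 400 m through fog in the full scale scene
print(fog_transmission(0.25, 4.0))      # identical result at 1/100 scale
```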

What I would do is define a small render region in the problem area, then turn off the various features that are sometimes slow, such as indirect illumination, reflection, refraction, and subsurface scattering. Test renders should be fast at this point. Then you can selectively turn those things back on, doing a test render each time to find out which feature was the cause. Another thing to keep in mind is that when there is only one bucket remaining in a render, then only one CPU core is working. If you have a multi-core machine, it can help to make the bucket size smaller (perhaps 20 x 20 instead of the default 40 x 40), so that all the cores can help out in the problem area.

You could set up a small render region in the noisy area to do some quick tests. My guess is that many of the reflection rays traveling from the floor through the glass table are being killed by the Ray Threshold, so I would reduce that value (0.0% is best but may be slow). Also the low Refinement Threshold may be hurting things. Finally it looks like the image has had some sharpening applied to it, which will make noise look worse.

The main benefit is reduced memory consumption. Instancing can be used to render scenes with literally billions of polygons for example.

Interesting discovery about the speedup. My guess is that it’s due to a lack of shading refinement, which happens where contrasting pixels are found in the Final Color output. Without a Final Color output, there won’t be any refinement. This theory could be tested by turning refinement off (by setting the Refinement Shading Rate to 1.0) and seeing if that gives the same speedup with the default outputs. All the shading-related outputs get the benefit of any refinement that happens, so it may be beneficial in some cases to include final color even if it won’t be saved. Outputs not related to shading (such as alpha, depth, or surface ID) wouldn’t be helped by it though.

You can tell Write Buckets to Disk is on because the frame memory usage is very low. But geometry memory usage is very high, so my recommendation is to turn off Adaptive Subdivision.

Stuart is the primary architect of modo, and as such he’s probably the busiest of us all. I wouldn’t want to see him spend as much time here as I have been.

When you double the width and height of the render, the number of pixels goes up by a factor of four. Because micropolygons are about the size of a pixel, their number (and the memory required to store them) also goes up by a factor of four. As a result, the memory usage might be exceeding the Geometry Cache Size limit set in Preferences, in which case modo will try to flush unused micropolygons and regenerate them when they’re needed again (which is all the time for that particular scene). Or perhaps the memory usage is exceeding your available RAM, in which case the OS will start using the hard drive. Either of these cases will greatly slow down rendering. My recommendation is to increase the Displacement Rate. If you double it, then at 1600 x 800 you should get about the same number of micropolygons as you did at 800 x 400 with the original rate.

Rendering issues can often be debugged using the “turn things off” method. To see where the bright specks on the floor are coming from, you could set up a small limited region in the problem area, then start doing test renders after turning various things off. You could try turning off individual lights, setting the environment intensity to zero, setting various specular or reflection amounts to zero, etc. Once you know where the problem is coming from, you can fix it by changing related settings. For example, maybe the bright specks are a blurry reflection of a bright specular highlight on the glass. Then they could be improved by lowering the Affect Specular of the light’s material, lowering the specular amount of the glass (it should be about 4% for real glass by the way), increasing the number of blurry reflection rays on the floor, or some combination of those. I would also recommend using a higher antialiasing setting. Unless Depth of Field is on, using a higher number of AA samples often doesn’t affect render times in modo as much as in other renderers.

If you were using Write Buckets to Disk, the buckets should still be on disk, since it sounds like modo crashed before it got to the bucket file deletion phase. In that case, you can recover the render by starting it again with Skip Existing Buckets turned on (and Irradiance Caching turned off to skip the pre-passes). It’ll look like it’s starting over, but then the buckets will all be skipped, it’ll finish very quickly, and it can then be saved. If you weren’t using Write Buckets to Disk, then you should be able to recover the render from the appropriate saved frame slot in the render window. modo wouldn’t have had a chance to write which slot was most recently used (that happens when it quits normally) so you’ll have to be careful not to do any renders and accidentally overwrite your big one. You need to hit F9 and then abort immediately, then find the slot where the big render was saved. Again, this method only works if Write Buckets to Disk was off.

In general, when investigating a rendering issue I start by lowering the resolution and AA samples (to speed up test renders) and then start turning things off. Some of the typical things I turn off include indirect illumination, micropolygon displacement, and adaptive subdivision, which all have convenient check boxes. Some other potentially time consuming options don’t have a single check box but are still easy to turn off, including blurry reflections and subsurface scattering. For those, you can use the Filter popup at the top of the Shader Tree to show only material items, then multi-select the materials (by shift-clicking) and edit the appropriate property. Lights can be turned off by toggling their eyeballs in the Item List, and textures by the eyeballs in the Shader Tree. Ray traced reflection and refraction can be disabled by setting the corresponding depths to 0. I think you get the idea… At some point you’ll notice whatever problem you were having (slow rendering in this case) has gone away, and whatever you changed to make that happen should be very useful in determining how to fix the scene.

There are two potential issues with rendering huge sizes. One is that the frame buffer (the memory needed to store the image) becomes very large. That’s easily solved by turning on the Write Buckets to Disk option, which eliminates the need for a full size frame buffer. The other issue has to do with geometry, specifically the fact that the final number of polygons can go up a lot as the resolution is increased if Adaptive Subdivision or Micropoly Displacement are active. For subdivs, one solution is to increase the Subdivision Rate. For displacement, increasing the Displacement Rate helps, as does deleting parts of displaced surfaces that are outside of the camera’s field of view.

The geometry cache size setting in Preferences is just an upper limit. The actual size of the geometry cache will be just enough to hold all the vertices, polygons, and associated data needed by a particular render. This will generally be less than the size limit.

In general, render times should be roughly proportional to the number of pixels multiplied by some function of the scene complexity. By “complexity” I mean things like the number of polygons and the number of irradiance cache values. Because modo is adaptive (increasing the level of detail as the frame size increases), those numbers go up along with the number of pixels. For example, if you double the width and height of the image, you’ll get four times as many pixels, and probably also four times as many polygons (if you’re using subdivision surfaces or micropolygon displacement), and four times as many irradiance cache values. So you might expect the render time to go up by a factor of 16. Fortunately, render times aren’t proportional to scene complexity directly, but rather to some function of it that grows at a less than linear rate, so the actual increase will be somewhere between four and 16. The rise in complexity with resolution is intentional. For example, it helps prevent the old problem of seeing faceted silhouettes on curved surfaces when you blow up your render to print size. But if you want scene complexity to remain constant, you can force modo to be non-adaptive by turning off Adaptive Subdivision and by increasing the Displacement Rate and Irradiance Rate by the same factor that you increase the frame width or height.
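
That last point as a tiny Python sketch (the helper is hypothetical and the rate values in the example call are arbitrary):

```python
def rescaled_render_settings(scale, width, height, displacement_rate, irradiance_rate):
    # If the frame width and height are multiplied by 'scale', multiplying the
    # pixel-based rates by the same factor keeps the micropolygon and
    # irradiance cache detail roughly constant (turn off Adaptive Subdivision
    # separately to keep the polygon count fixed too).
    return {
        "width": width * scale,
        "height": height * scale,
        "pixels": width * height * scale * scale,  # scale^2 times as many pixels
        "displacement_rate": displacement_rate * scale,
        "irradiance_rate": irradiance_rate * scale,
    }

print(rescaled_render_settings(2, 800, 400, displacement_rate=1.0, irradiance_rate=2.5))
```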

We designed the renderer to share as much data as possible between threads, so very little extra memory is needed to support more cores. Mainly it’s the size of the extra buckets, which you can see in the render window statistics display (about 10.7 MB per thread for the default 40 x 40 buckets).

The various rates in modo are all distances measured in pixels. So you can think of large values as “coarse” (bigger micropolygons in the case of the Displacement Rate, and sparser shading in the case of the Shading Rate), and small values as “fine” (smaller micropolygons and denser shading).

modo’s renderer is indeed physically based, and it does compute quantitative results — the pixel colors in modo renders are actually radiance values measured in Watts per steradian per square meter (assuming an exposure multiplier of 100% and no gamma correction). Of course the results are only meaningful if reasonable settings and input values are used. When you start talking about isoline renders and pseudocolor lighting level images, you’re no longer talking about radiance (the amount of light leaving a surface in a particular direction, such as toward the camera), but rather irradiance (the total amount of incoming light at a surface arriving from all directions). Irradiance values are useful for things like determining if there’s enough light in some part of a building interior to allow people to read comfortably, for example. modo computes irradiance as a necessary part of global illumination rendering, but it does not yet have a way of showing it directly.

When you say “render boxes” are you talking about buckets? At the default size of 40 x 40, they only use 10.7 MB per thread (as you can see in the render window stats display), which is tiny compared to the typical amounts needed for geometry or image maps. I’ve never encountered a situation where I’ve had to change the bucket size (although I sometimes do it just for fun). Likewise irradiance caches are usually not that big, on the order of a few megabytes. The main things that can lead to running out of memory are displacement mapping (especially when applied to surfaces that extend outside of the camera view) and situations where adaptive subdivision picks too high of a subdivision level. Very high resolution rendering can also use a lot of memory. For example, at 6400 x 4800, the frame buffer will use 491 MB (as you can see in the render window stats display). The solution is turning on Write Buckets to Disk, which drops the frame buffer memory usage to only 28 KB! Pixel-based settings (things with “rate” in their name) will also automatically generate more detail at higher resolutions. If this is not desired, then the rates can be increased. For example, if you’ve rendered a scene at 640 x 480 and want to try it at 6400 x 4800 but without increasing the amount of detail (i.e. you want to keep the geometry memory usage the same), you can multiply the subdivision and displacement rates by 10.

With the right settings, modo is capable of unbiased physically based rendering. Or you can use it more like a traditional RenderMan-style renderer. By default, it acts as a kind of hybrid between those two extremes. I’ve been meaning to write up a document about using modo for quantitatively accurate lighting simulations. This would include restrictions on settings, such as keeping the Direct Illum Multiplier and Indirect Illum Multiplier at 100%, and keeping the shadow color of each light black. Material settings must also be plausible, for example the specular color and amount of each material should be the same as its reflection color and amount, since those actually refer to the same real world phenomena (it’s just CG tradition to divide them into direct and indirect components). Irradiance caching is slightly biased, although the professional lighting simulator Radiance is based on it so it can’t be too bad, and of course it can always be turned off. One might think that the Ray Threshold feature is biased, but actually it’s not — those rays which are not terminated are strengthened by the right amount so that the overall expected value remains correct. Using it too heavily will increase noise though.

Here’s a little explanation I wrote for the 202 beta testers about the new geometry memory recycling system. Note that the term segment used below refers to groups of polygons (and is not related to LightWave-style frame buffer segments). At the start of each bucket, the memory used by the geometry cache is checked. If the amount is over the limit set in Preferences, older geometry segments will be flushed from memory to make room for new geometry that might need to be generated for the bucket. This means it’s now possible to render many scenes (especially those involving micropolygon displacement) that 201 would have given up on. There are a few things to keep in mind though. First, the system can’t help unless the geometry occupies multiple segments. Segments are determined automatically, but you can influence them by subdividing your geometry (so hit “d” a few times before attempting a “one polygon landscape”), and you can see their boundaries by rendering in Segment ID mode. Second, the system won’t work if most or all of the scene’s geometry is needed for rendering an individual bucket, which can occur when indirect illumination or blurry reflections are on, since those types of rays are likely to spray all over the place, hitting almost everything. In the future we may solve this problem by maintaining lower resolution versions of the geometry for use by such spread out rays, a technique used in films like Shrek 2. For now, I would avoid using GI (or bake it using lower resolution geometry) in scenes that require memory recycling. Third, it’s likely that some previously flushed segments will need to be regenerated again for other buckets. If this happens frequently, it’s called thrashing and the render time will suffer a lot. Theoretically the Hilbert bucket order should help to reduce this (and Random order would be the worst choice).

You might be interested in some experiments I just did with two modos running at once, one for rendering and one for editing. I found that if I opened the Windows Task Manager and changed the priority of the rendering modo’s process from Normal to Below Normal, the editing modo had great interactivity. Also successful was to leave the rendering modo at Normal and set the editing modo to Above Normal. You could try this if you’re on Windows, and there’s probably an equivalent method for OS X if it has the same issue.

Write Buckets to Disk means there will be no full size frame buffer in RAM. For a 5500 x 4500 render, it will save 396 MB (you can see this by looking at the Frame readout in the Memory Usage section of the render window and comparing the value with and without Write Buckets turned on). Bucket size won’t affect memory usage as much. Going from 40 x 40 to 20 x 20 will save about 16 MB when using two threads (you can see this by looking at the Buckets readout in the Memory Usage section of the render window). Using smaller buckets will result in a lot more bucket files in your temp directory, which may affect render times. With 20 x 20 buckets, there will be (5500 / 20) * (4500 / 20) = 61,875 separate files!
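
The arithmetic behind those numbers, as a quick Python check (the 16 bytes per pixel is an assumption of four float channels, which happens to reproduce the 396 MB figure):

```python
width, height = 5500, 4500
bytes_per_pixel = 16  # assumption: four 32-bit float channels per pixel

frame_buffer_mb = width * height * bytes_per_pixel / 1e6
print(round(frame_buffer_mb))  # 396 -- the RAM a full size frame buffer would need

bucket = 20
print((width // bucket) * (height // bucket))  # 61875 bucket files on disk
```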

The polygon count and memory readout in the render window are not up to date while a geometry segment is in the process of being diced. Their values will be updated after the dicing is done, but it’s running out of memory before that happens. I’m not sure why the memory allocations are failing when the task manager shows there’s a lot left — I haven’t seen that happen before. Quitting and restarting might be a good thing to do after a memory error. You might find the information in this post useful. For example, your screen shot shows that the geometry lies in a single geometry cache segment, probably because there are so few initial polygons. That means it will all have to be diced at once. Subdividing it a few times would help make sure that multiple segments are used. Also the scene appears to be very large (with five kilometer grid squares in some viewports), so the default Minimum Edge Length of 1 millimeter is not going to be any help in limiting the dicing if the landscape extends under the camera. Try a much larger value, maybe 50 meters (just a guess). Tests like that usually can be done very quickly by temporarily setting the Color Output mode to Segment ID.

modo’s rendering speed mainly comes from its unusual rendering architecture, a design which has not been used before as far as we know. There’s no platform-specific code in the rendering functions, which are coded in straight ANSI C.

The amount of memory that can be saved by turning on Write Buckets to Disk can be seen in the frame memory readout on the render window. At 1024 x 768 it’s only about 12 MB, but at print resolutions the savings can be huge. It has allowed us to do test renders with over 30,000 pixels on a side.

Right, for example shadow rays from the surfaces in a bucket to a distant light will all be going in a particular direction and therefore will probably miss large areas of the scene. Also a limited range can be specified for indirect (GI) rays so that distant objects are ignored. I probably could have sped up the canyon render even more by using that option. The original reason for going with buckets is that we wanted our rendering architecture to be usable in the PRMan style (with baked GI, etc) for high-end film production someday, in which case it might not need to trace any rays during final frame rendering and geometry caching could be used to full advantage. Other benefits of buckets are big savings in frame buffer memory (allowing output of enormous images) and good load balancing for multithreaded and multi-machine rendering.

Memory usage in rendering is a big and complex topic, but I’ll try to summarize some of the most important points. Since many of you are familiar with LightWave, I’ll make a few comparisons that may be instructive. The data that take up memory when rendering an image can be divided into three basic categories. First is the scene description, which specifies the geometry, materials, textures, lights, camera, and other render settings to use. As in LightWave, this will need to be loaded by each of the network nodes participating in the render. For subdivision surfaces this is not too bad since the subdivs aren’t tessellated into polygons yet, and in most cases the other two kinds of data usually dominate. A second category is all the temporary internal data computed from the scene description, such as the final polygons to be rendered (including tessellated subdivs and displaced micropolygons), acceleration structures for ray tracing, mipmap levels for texture images, shadow maps, etc. In LightWave, this kind of data is generally precomputed at the beginning of each frame or pass (regardless of whether it will actually be needed) and remains in memory throughout the render.

Here modo’s bucket architecture has major advantages, two of which are lazy evaluation and caching. Lazy evaluation means computing data only when it’s needed. For example, modo doesn’t generate any displaced micropolygons until the bucket they appear in is encountered, so parts of objects that are unseen might never be displaced. This principle is applied to tessellating subdivs and computing acceleration structures and mipmap levels as well. Caching means that previously computed data (such as micropolygons from a finished bucket) can be discarded to make room for new data if a user-specified memory limit has been reached. The third category of render data consists of various kinds of frame buffers. Usually there is a full size frame buffer with channels for the requested outputs (red, green, blue, alpha, etc), and also temporary “working” buffers (segments in LightWave and buckets in modo). The temporary buffers are typically smaller than the frame in order to conserve memory (since they often contain lots of additional channels for depth peeling or accumulation) and to allow for multithreading. modo has a big advantage here too, since it makes the full size frame buffer optional. Finished buckets can instead be written to disk and then reassembled into a final output image without requiring the full image to fit in memory, making really enormous renders possible.

It’s hard to make a direct comparison with FPrime though, since modo’s rendering architecture is so different (actually different from anything that’s ever been tried before as far as I know). As just one example, there are no antialiasing passes in the LightWave sense — as in PRMan, each bucket pops out with fully finished antialiasing, motion blur, DOF, etc. This is important for being able to handle giant scenes, since it becomes possible to flush geometry and image data used in completed buckets to make room for the data needed by subsequent buckets. Chappo, you’re right, speed has been one of my top goals, and rendering directly to a particular level of quality should be faster than is possible with progressive refinement. As far as Steve Worley is concerned, he’s a good friend of mine and has been for many years, and we would have been thrilled to hire him, but I think he’s quite happy now being his own boss.

The main advantage of bucket rendering is that it greatly increases the complexity of the scenes that can be rendered in a given amount of memory, which is why high-end film production renderers rely on it (PRMan and mental ray being the best known examples).

We decided to take a more focused approach with procedural textures in 201. What I mean is that there are just a few basic types, but they were implemented with the goal of making them more general and flexible (a bit like how modo’s tool pipe amplifies the number of ways in which the basic modeling tools can be applied). For example, instead of separate entries for Fractal Noise, fBm, Turbulence, etc, you simply pick Noise, which has options that let you achieve those various looks. Similarly the Grid texture can do 2D and 3D rectangular grids, triangular grids, and honeycombs. There are no restrictions on which procedurals are allowed to be used for particular effects (any of them can be used to modulate color, or displacement, or whatever). In addition to the usual local or world coordinate options, they can all be UV mapped. They all have common bias and gain controls to remap their outputs which have proven very useful. Wood is really the most special-purpose texture in 201. I admit getting a little carried away with that one since I was enjoying experimenting with it. In fact that’s a problem with procedurals from a development standpoint — they’re so much fun to write that we could easily spend a lot of time adding new ones when we should be concentrating on more fundamental areas of the program.

Extreme photorealism has been one of my goals for the renderer. As far as VRay is concerned, I think we’ll be easier to set up (fewer arcane parameters to learn). They may be faster for some kinds of scenes since they have a few years head start on us, but we’ll do our best to meet or beat their speed. This is actually one of the reasons that I’m reluctant to try to cram in a lot of last minute feature requests, since they would eat up development time that could be spent on optimizations.
—- GI/Irradiance Caching:

The indirect rays are just seeing a texture color based on the pixel coordinates of the point being shaded (i.e. where the rays originated from), giving an illusion of transparency. The two solutions are to project from a different camera (which can have the same settings), or to use a separate environment item for indirect rays, ideally one which is defined in all directions. 601 uses the pixel coordinates of the point being shaded to determine the color of front projected textures (even for indirect rays). The reason for the change is that some people were confused by the old system, which used the camera frustum to project the texture and thus depended on the film width and height having the same aspect as the output image. If you prefer the old behavior, you can replicate it in 601 by duplicating the camera and projecting the texture from the duplicate instead of the render camera. However I don’t think it’s really correct to use a front projected texture for the environment seen by indirect rays. Indirect rays are going to be hitting the environment in all directions, with most of them going in directions outside of the camera’s field of view (including behind the camera). A front projected texture image contains no information for those parts of the environment. A spherical or lightprobe texture would be a better option.

Walkthrough Mode simply means that the irradiance cache is not cleared between frames. Any new IC values created during a subsequent frame are added to those from the previous frames. This includes values loaded from an irradiance file, so make sure Load Irradiance is turned off if Walkthrough is on. We should probably change the Load Irradiance feature so that it only happens for the first frame of an animation render. Incidentally a kind of mini-walkthrough mode happens when you do a stereoscopic render — the irradiance cache is not cleared in between the two eyes. This is safe even if there are animated objects because both eyes use the same shutter open and close times. It’s OK to use Save Irradiance with Walkthrough Mode. The file will be overwritten every frame, but at the end the file will contain all of the IC values created during the animation, which might be useful. For example you could render the scene again, perhaps with different AA settings or something, and load the previously created file instead of using Walkthrough Mode. The option that should not be used with Walkthrough is Load Irradiance, since the cache will grow by at least the size of the file every frame. The worst case would be to have both Load and Save active with Walkthrough on — the cache would at least double in size every frame!

Bright speckles are usually the result of some rare event, like a secondary ray hitting something small but very bright. The Mitchell and Catmull Rom filters can make them worse by outlining them (a phenomenon called “ringing”). Very bright things in a scene can include materials with a high luminous intensity, bright pixels in an HDR image map (typically the environment), specular highlights with Conserve Energy turned on, and lights nearly touching a surface (where inverse square falloff can cause a hot spot). If you can’t find any of the above in your scene, another thing to try is turning off Unbiased (which you can find in the Channels viewport after selecting the render item). Most of the benefits of indirect illumination come from the first bounce. That’s as far as they go in a lot of professional animation (even Pixar movies). Each bounce beyond the first becomes dimmer and more uniform, so there are diminishing returns. Two or three bounces can be useful for architectural rendering, but aren’t necessary for brightly lit product shots like this one.

Speaking of diffuse shading, irradiance caching is not well suited for transparent objects, so Monte Carlo might be better for the cap. Lowering indirect bounces (three is overkill for this scene), raising the ray threshold, and lowering the reflection and refraction depths could help render times too. The exit color can be used to prevent black areas if you lower the depth settings.

The density of irradiance values in the cache is adaptive so it varies with local scene complexity. The minimum spacing is determined by the Irradiance Rate (measured in pixels) and the maximum spacing is the minimum spacing times the Irradiance Ratio. If you’re talking about the irradiance pre-passes, those can be controlled with the Pre-pass Spacing channels. You can find them in the Channel List after selecting the render item. In practice I rarely change any of these. The Rate is probably the most useful. Sometimes I’ll lower it if the indirect illumination is missing some details, or raise it if I’m doing a very high res render.

Also you might want to set Irradiance Gradients to Rotation Only, since you’ve got a transparent surface overlaying the diffuse edges of the screen, and the translational gradient can cause a mottled appearance in that situation. Translational irradiance gradients are an estimate of how indirect illumination is changing across a surface. The way they’re computed depends on the lengths of indirect rays, and when a transparent surface is very close to a diffuse surface, the indirect rays from the diffuse surface are very short (even though the indirect light is coming from far away), leading to incorrect gradients. It’s not really necessary to understand all this, just that it’s a good idea to turn off translational gradients when glass or other clear surfaces are lying on top of diffuse surfaces. Rotational gradients are still fine in these cases, thus the Rotation Only setting.

The optimum number of irradiance rays depends on the complexity of the indirect illumination. For example, objects sitting under a constant sky can usually get away with less than the default number of irradiance rays, but a scene lit by small luminous polygons might need a lot more rays than the default number.

New IC values are sometimes computed during bucket rendering as well as during the pre-passes. On large flat surfaces this is rarely necessary since there are almost always nearby IC values from the pre-passes that can be reused. However on very complex objects like grass, the chances of finding IC values that can be reused (i.e. that are in the same plane) is much smaller, forcing lots of new ones to be computed. You can see when this is happening by noticing if the count of IC values in the render window is still increasing during bucket rendering. A useful trick is to use MC for things like grass but still use IC on large flat surfaces like buildings or the ground.

A saved irradiance file should not be reused if anything has been changed that affects indirect illumination, which includes such things as geometry, materials and textures, lights, and the environment. Things that can be safely changed include most camera settings (position, direction, focal length, depth of field, etc), antialiasing and resolution, and render output settings. Some of these changes will result in additional IC values being calculated, but the old ones from the file will still be valid.

Increasing the rate does mean fewer IC values should be created (at least on flat surfaces), but it also means that whenever any shading is computed, a larger volume of the cache needs to be searched to ensure that all usable values are found. So theoretically the render time could get worse with a very large rate. It would depend on the particular scene. An Irradiance Rate of 500 pixels is way too big. It basically tells modo that every irradiance value can be reused over about one quarter of the image! The resulting indirect illumination would be very inaccurate, and the time required to search for and interpolate all the usable irradiance values would be huge.

You do not need to save an irradiance file to use Walkthrough Mode — just turn it on and render the animation. You could save an irradiance file if you were going to render the scene multiple times though (assuming you don’t change anything that affects irradiance). In that case, just turn on Save Irradiance and after the full animation is done, you’ll have a file that contains all the irradiance values computed during the entire animation. When rendering the animation a second time, turn on Load Irradiance. Pre-passes are always performed but they will be very fast if the needed values are already present in the cache.

Lighting a scene with luminous polygons means that indirect illumination must be turned on, and the number of indirect rays must be high enough to consistently hit the luminous polygons. It works pretty well if the luminous polygons are fairly big, and as you discovered, it can be fast by taking advantage of irradiance caching (in which lighting calculations are shared over many pixels). However if the luminous polygons are too small or not enough indirect rays are used, the number of indirect rays that end up hitting the luminous polygons from various points can become inconsistent (maybe even zero), resulting in splotchy lighting. In such cases it would be better to use area lights instead, because the renderer knows exactly where they all are in advance (that’s basically the definition of a direct light source) so it can always sample them consistently. So the general rule is to use luminous polygons for relatively large light sources, and area lights for smaller, more intense ones, or when indirect illumination is turned off.

When irradiance caching is enabled, irradiance values have to be computed on any surfaces that have a nonzero diffuse amount. Transparency multiplies the amount of work needed since irradiance may need to be computed on multiple layers (not just the frontmost surface at each pixel). We’ve added a feature in 401 that will be helpful in optimizing such situations: For each shader, you will be able to specify what GI method to use (ambient light, Monte Carlo, or irradiance caching).

The actual irradiance computations are multithreaded in all the pre-passes including the first one. However geometry is created on demand in modo’s renderer, and most of that occurs during the first pre-pass whenever an irradiance ray happens to hit an object’s bounding box for the first time. While the renderable polygons for a particular object are being generated by one thread, any other threads trying to access the same object have to temporarily pause (since otherwise they’d be reading unfinished data). So it’s normal for the amount of parallelism to be lower during the first pre-pass.

First of all, is the flickering coming from the indirect diffuse shading? You can determine that by turning Indirect Illumination off and seeing if the problem goes away. In fact you may not even need it, since there are plenty of direct lights. Professional FX shots typically don’t use it (they either fake it with extra direct lights or pre-bake it into maps). That would obviously speed up your renders as well. If you want to keep using irradiance caching, one option would be to use Walkthrough Mode, in which the cache is retained from frame to frame to reduce flickering. This requires stationary objects, so you’d do the animation by moving the camera (just like they do with model photography). In this case it might be best to render the frames backwards, so that the ship is big in the first frame and the irradiance cache captures a lot of detail from the beginning. Another possibility is that the little luminous surfaces are not being consistently sampled by irradiance rays. If that’s the cause of the flickering, you could solve it by giving the luminous material groups a shader item that has Visible to Indirect Rays turned off (and move that group to be above the Base Shader).

When irradiance caching is used on a diffuse surface that is just behind a transparent surface (as on the cat face), the translational gradient can cause a mottled appearance. This can be fixed by setting the Irradiance Gradients control to Rotation Only or None.

That makes sense. When Walkthrough Mode is on, old irradiance cache values are never cleared, so the cache can get a lot bigger during an animation, increasing the cost of irradiance value lookups (and thus rendering time). Moving geometry would make this much worse since lots of new irradiance values would be added each frame.

Without the scene I can’t tell for sure. Is Walkthrough Mode on? That can cause the irradiance cache to basically double in size on every frame if Load Irradiance and Save Irradiance are used, so make sure it’s off. Is Interpolation Values higher than one? That can force new values to be computed even where there are old values already in the cache.

I would use either Walkthrough Mode or irradiance file loading and saving. Walkthrough is good when rendering a complete animation on a single machine (it doesn’t work with network rendering in 301, although we’ve fixed that for 302). Loading and saving irradiance to the same file is equivalent to Walkthrough Mode but the values are stored on disk instead of just in memory, so it allows the rendering of the animation to be interrupted and later resumed. Using both methods together is bad because irradiance values are being retained in memory from the previous frame and are also being loaded from disk, doubling the size of the cache each frame!

The problem is that it’s difficult for a limited number of indirect rays to consistently find the small concentrated luminous polygons. For example, at one point where an irradiance cache value is computed, perhaps 6 out of 512 indirect rays happen to hit luminous polys, but at a neighboring point maybe only 4 of the 512 rays hit them. That first point is going to end up 50% brighter, resulting in local fluctuations in the amount of indirect illumination. To get more consistent values would require cranking up the number of indirect rays quite a bit. What I was originally thinking is that the lighting on the ceiling could be baked into a luminous image map which would look more like the ceiling in your first two renders. The bright areas would be more spread out and thus easier for the indirect rays to sample. Basically the whole ceiling would be glowing to some extent.

Both scenes use irradiance caching, which means that indirect illumination is computed at sparse points on surfaces and interpolated in between those points. In addition to that, the first version of the scene computes direct illumination (due to the three area lights) at every pixel, which can be expensive depending on the number of samples in the lights. But in the second version, there is no direct illumination — light from the luminous geometry is automatically handled as part of the indirect calculation. I’m guessing the image is a bit dimmer because the luminous polygons block some of the light from the environment. The rule of thumb is that for large area lights, using luminous geometry and irradiance caching is often more efficient. But as the lights get smaller and more intense, it takes more and more indirect rays to consistently find them. Eventually there’s a crossover point where direct lights become more efficient. In fact for direct lights, smaller is better, since the number of samples needed to get smooth shadows will go down.

Irradiance cache values are computed on all surfaces that have a nonzero diffuse amount and a diffuse color that is not pure black. Other material properties (specular, etc) have no effect on the placement of IC values, except for properties that affect the surface normal (bump maps and especially displacement maps), which can increase the density of IC values. That’s because an existing IC value at one location can only be reused at a different nearby location if the surface normals are similar enough. Likewise the number of polygons doesn’t affect irradiance caching unless the surface normals of the polygons are varying. In other words, the density of IC values on a single giant polygon will be the same as that on a massively subdivided version of the polygon (assuming the surface is still flat). More polygons can increase ray tracing time (and thus the time required to compute a new irradiance value), but the high resolution tests you guys have been doing are dominated by the time required to look up and interpolate existing IC values rather than the time required to create them in the first place.

The indirect illumination at a point on a surface is computed by sending many rays out from the point in random directions, sampling the hemisphere seen by the point. The average color seen by the rays is an estimate of how much light is coming from other surfaces and the environment. In this case the luminous polygons of the lamp are small enough that the number of random rays hitting them can vary a lot. For example, at a particular point on the wall, two indirect rays might hit the lamp, but at another nearby point, maybe no rays happen to hit it. When the shading of these points is interpolated, the result will be blotchiness. Increasing the number of indirect rays from 1024 to 2048 would help, but really you would be better off using direct lights for such a concentrated source of illumination. In other words, you could create area lights at the luminous polygons of the lamp, which would result in much more precise shadows.
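
As a rough illustration of why this happens (toy Python, not modo code; the ray count, hit probability, and radiance values are made up), the irradiance estimate is just the average of what random hemisphere rays happen to see, so a small emitter hit by a couple of rays at one point and by none at a neighboring point gives very different results:

```python
import random

def estimate_irradiance(num_rays, hit_probability, emitter_radiance):
    """Toy Monte Carlo estimate: the average of what random hemisphere rays see.
    Rays that miss the small emitter are assumed to see black, for simplicity."""
    total = 0.0
    for _ in range(num_rays):
        if random.random() < hit_probability:    # this ray happened to hit the lamp
            total += emitter_radiance
    return total / num_rays

# Two neighboring points on the wall, identical settings, different luck:
random.seed(1)
a = estimate_irradiance(num_rays=1024, hit_probability=0.002, emitter_radiance=50.0)
b = estimate_irradiance(num_rays=1024, hit_probability=0.002, emitter_radiance=50.0)
print(a, b)   # the two estimates can easily differ by 50% or more, hence the blotches
```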

The motivation for irradiance caching is to reduce the number of times that an expensive hemispherical sampling of the indirect illumination has to be done. This is accomplished by storing irradiance values in a cache and then reusing them at other nearby pixels instead of computing new values. Each time diffuse shading is computed at a new location, a search is made through the irradiance cache to find any previously computed values that can be reused. If enough values are found (where “enough” is defined by the Interpolation Values setting), they are interpolated (blended together in a special way) and no rays need to be traced. If not, rays are traced to sample the hemisphere above the point and the resulting irradiance value is added to the cache. In order for a previously stored irradiance value to be reused at a new location, its normal vector must be similar enough to that of the new point (within about 10 degrees), and its position must be close enough (within a valid radius). The valid radius of an irradiance value is based on how far its hemispherical sampling rays traveled before hitting anything, in other words, the distance to the closest surface that might affect the indirect illumination. These normal and position constraints cause the spacing of irradiance values to be automatically adaptive — more dense in corners and on curved surfaces, and more sparse in wide open flat areas where the indirect illumination changes more slowly.
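
A heavily simplified sketch of that lookup-or-compute loop (my own Python pseudocode, not modo internals; the cache entry fields, the 10 degree test, and the sample_hemisphere helper are assumptions for illustration):

```python
import math

def distance(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def angle_between(a, b):
    dot = sum(x * y for x, y in zip(a, b))            # assumes unit-length normals
    return math.acos(max(-1.0, min(1.0, dot)))

def shade_indirect(point, normal, cache, interpolation_values, sample_hemisphere):
    """Reuse nearby cached irradiance values if enough are found; otherwise do a full
    hemispherical sampling and add the new value to the cache.
    Cache entries are dicts: position, normal, irradiance, radius (assumed names)."""
    reusable = [e for e in cache
                if distance(point, e["position"]) < e["radius"]
                and angle_between(normal, e["normal"]) < math.radians(10)]
    if len(reusable) >= interpolation_values:
        # blend the cached values (the real renderer weights them and applies gradients)
        return sum(e["irradiance"] for e in reusable) / len(reusable)
    # not enough cached values: trace hemisphere rays and remember the result
    irradiance, mean_hit_distance = sample_hemisphere(point, normal)
    cache.append({"position": point, "normal": normal,
                  "irradiance": irradiance, "radius": mean_hit_distance})
    return irradiance
```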

In order to prevent irradiance values from becoming too dense in corners, a minimum distance (in pixels) is imposed on the valid radius, and this minimum is known in modo as the Irradiance Rate. A maximum distance is also imposed, which is currently six times the Irradiance Rate. So the rate basically acts as a master control over the density of the irradiance values, although they will always be at least as dense as the most coarse pre-pass (the first set of dots you see at the beginning of an IC render). The obvious effect of increasing the Irradiance Rate is that the number of hemispherical evaluations needed will decrease, which means fewer rays will need to be traced and rendering time can be expected to go down. However there is another effect — since a higher rate means that cached irradiance values are valid over longer distances, the search for reusable values must consider a bigger volume of the cache, and will become more expensive. And probably more values will be found, so the interpolation will also become more expensive. Since the IC search and interpolation process is done at every pixel, it may start to dominate the rendering time as the resolution goes up. The search time depends on the volume of the cache that needs to be searched (which goes up with irradiance rate) and how many values are in the cache (which goes up with the number of pixels).
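
In rough terms, the valid radius is clamped something like this (hypothetical names; the conversion from a pixel-based rate to world units is assumed for illustration):

```python
def clamp_valid_radius(ray_based_radius, irradiance_rate_pixels, pixel_size_world):
    """Clamp an irradiance value's valid radius between the Irradiance Rate
    (a distance in pixels, converted to world units here) and six times that distance."""
    min_radius = irradiance_rate_pixels * pixel_size_world
    max_radius = 6.0 * min_radius
    return max(min_radius, min(ray_based_radius, max_radius))
```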

Irradiance in this case just means incoming indirect illumination. The irradiance at a particular point on a surface can be estimated by firing a bunch of rays from that point, sampling the hemisphere of directions centered around the point’s surface normal. The average color seen by those rays is the irradiance estimate and it gets stored in the cache. Whenever a new point is being shaded, the cache is checked for nearby irradiance values, and if there are enough, the hemispherical sampling doesn’t have to be done and the cached values can be interpolated instead. The main idea of the paper is that you can look at the way the ray colors are distributed over the hemisphere (as well as how far they traveled before hitting something) and estimate how the irradiance would change if you moved to a different nearby point, or if you angled its surface normal a bit differently. These rate-of-change estimates are called translational and rotational gradients, and they improve the quality when interpolating cached irradiance values at other positions and other surface normals than at the points where the hemispherical sampling was originally done.

Indirect Range is a feature inspired by mental ray that makes indirect rays ignore geometry beyond a certain distance. This would be very bad if the range was smaller than your room, since those rays would start seeing the environment instead of the far walls. However it can speed up exteriors and is especially useful for ambient occlusion (where you’re mainly interested in the shadowing effect of nearby geometry). I recommend just leaving it at zero, which disables range checking.

About Indirect Range — geometry past that range will simply not be hit tested by indirect rays. The default range of zero is shorthand for no range checking (or you can think of it as infinity if you like). I strongly recommend always leaving it at zero, unless you are doing ambient occlusion baking where you only care about local obstructions. The only other cases where I’d consider using it are open outdoor scenes, for example a city where you might not care about the effect of buildings several blocks away, or an asteroid field where the more distant rocks don’t matter.

Interpolation Values forces modo to find at least that many precomputed values in the irradiance cache. If it’s set to eight and there are only seven values close enough to the point being shaded, a new hemispherical sampling procedure must be done. More than eight is not really useful. What’s more important is the quality of each irradiance value, and that’s determined by the number of rays. The default of 256 is usually OK for outdoor scenes, but interiors often need a lot more. For example, mental ray’s irradiance caching (which they call “final gathering”) defaults to 1000 rays.

Right, Irradiance Caching is generally faster. Both methods use the same procedure to compute a new irradiance value, firing a number of indirect rays out into the hemisphere of directions around the point being shaded. The average color seen by those rays is an estimate of the irradiance (the incoming light). The difference is that without irradiance caching, this hemisphere sampling procedure must be done from scratch at every pixel, which is naturally time consuming, so the number of indirect rays used in this method is usually less. If adjacent irradiance values differ too much, the result is noise, which can be reduced by increasing the number of rays. With irradiance caching, new irradiance values are stored in a cache. Then when a new point is being shaded, the cache is consulted first to see if there are nearby stored values that can be reused. If so, the nearby values are blended together and used as the irradiance estimate, and no new rays need to be fired. Since the number of hemisphere sampling procedures needed is much less, the quality of each one (the number of rays used) can be higher. And since the irradiance values are farther apart, if adjacent values differ too much, the result is splotchiness rather than noise. To summarize, you can choose between a vast number of lower quality evaluations (IC off), or a smaller number of higher quality evaluations (IC on).

Any setting with the word “Rate” in its name refers to a distance in pixels. The ones affecting the amount of geometry (SDS levels and micropolygon dicing) are the Subdivision Rate and the Displacement Rate. If you increase those proportional to the increase in frame size, you should get about the same amount of geometry in the bigger render. The other setting that might need to be increased is the Irradiance Rate, which affects the spacing of irradiance cache samples. Adjust those three things and you should get the expected time versus resolution relationship. Of course the reason those settings are pixel-based is that usually you want more detail in higher res renders (and less in low res test renders). That all happens automatically if you leave the rates alone.
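
For example, doubling the frame size in each dimension means scaling all three rates by the same factor, along these lines (hypothetical helper, just to show the arithmetic; the starting values are arbitrary):

```python
def scale_rates_for_resolution(rates, old_width, new_width):
    """Rates are distances in pixels, so scaling them with the frame width keeps
    the amount of geometry and the irradiance sample spacing roughly constant."""
    factor = new_width / old_width
    return {name: value * factor for name, value in rates.items()}

print(scale_rates_for_resolution(
    {"subdivision_rate": 10.0, "displacement_rate": 1.0, "irradiance_rate": 2.5},
    old_width=1280, new_width=2560))
# -> {'subdivision_rate': 20.0, 'displacement_rate': 2.0, 'irradiance_rate': 5.0}
```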


—- SSS:

I wouldn’t recommend using 1.0 0.0 0.0 as the Subsurface Color. A color component of exactly 1.0 means there will be no absorption at all, no matter how far we are from the lit part of the object. The 0.0 color components mean that those wavelengths will be completely absorbed immediately. So basically you’ve got red light traveling through the material forever, and green and blue going nowhere.
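
One way to see the per-channel behavior is a toy exponential falloff (only an illustration of the idea, not modo's actual SSS model):

```python
def transmitted(subsurface_color, distance):
    """Toy per-channel attenuation: a component of exactly 1.0 never decays no matter
    how far the light travels, and a component of 0.0 is absorbed immediately."""
    return [c ** distance if c > 0.0 else 0.0 for c in subsurface_color]

print(transmitted([1.0, 0.0, 0.0], distance=5.0))   # [1.0, 0.0, 0.0] at any distance
print(transmitted([0.9, 0.3, 0.2], distance=5.0))   # red fades slowly, green/blue almost instantly
```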

SSS works best on enclosed volumes with a single surface layer (like a balloon). What I mean is that the shape can be complex, but there shouldn’t be holes or doubled areas.

I’m not sure I agree with your subsurface color map though. If you think of it as the color of the tissue beneath the skin, it should look fairly bloody everywhere, like raw meat. Then the surface layer (the diffuse color map) could actually be a bit more grayish (as you can see whenever your skin flakes off) since much of the overall color comes from the deeper layers.

The main subsurface scattering technique used in computer graphics is known as the diffusion approximation and was popularized by Henrik Jensen in papers presented at SIGGRAPH 2001 and 2002. It assumes that the effect is dominated by multiple scattering, in other words light only travels a very short distance within the material before being reflected or absorbed by particles. Typical applications include skin, jade, wax, and snow. The diffusion approximation is not really valid for materials that you can see through, so the Transparent Amount should generally be 0% when using SSS. What you need is more of a volumetric simulation, although it might be possible to fake it with transparent sheets that have luminous textures or something like that.


—- DOF:

For optimum DOF in modo, make sure Edge Weighting is 50% or close to it. Also use plenty of AA samples.

You can increase the F-stop to make the DOF more subtle. You can render at a higher resolution and scale down in PS. For example, doubling the width and height and using 256 AA samples will effectively give you 1024 samples per pixel at your final resolution.

Real cameras and eyeballs have a limited depth of field due to the nonzero size of their apertures, and that’s true for modo as well. When you turn on the Depth of Field option, the rays modo uses to determine what is visible in each pixel are all sent through different random locations within the camera’s iris. Averaging the resulting ray colors within each pixel provides the depth of field effect in much the same way as it happens in real life. This is much different from 2D postprocessing solutions which blur an already rendered image. Currently the shape of the iris in modo’s camera is circular (so the resulting bokeh is circular too), but I agree that polygonal shapes would be a nice option.
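
A minimal sketch of the idea (toy code, not the modo implementation): each camera ray starts from a different random point on a circular iris, is aimed through the in-focus point, and the ray colors are averaged. The trace_ray callback and the numbers in the usage line are assumptions.

```python
import math
import random

def sample_iris_point(aperture_radius):
    """Uniform random point on a circular iris (which is why the bokeh is circular)."""
    r = aperture_radius * math.sqrt(random.random())
    theta = 2.0 * math.pi * random.random()
    return (r * math.cos(theta), r * math.sin(theta))

def pixel_color_with_dof(trace_ray, focal_point, aperture_radius, num_samples):
    """Average the colors of rays fired from random iris positions toward the point
    that is in perfect focus.  trace_ray(origin, target) is an assumed callback
    that returns an RGB tuple for one ray."""
    accum = [0.0, 0.0, 0.0]
    for _ in range(num_samples):
        ox, oy = sample_iris_point(aperture_radius)
        color = trace_ray((ox, oy, 0.0), focal_point)
        accum = [a + c for a, c in zip(accum, color)]
    return [a / num_samples for a in accum]

# Toy usage: a "scene" where every ray sees mid gray.
print(pixel_color_with_dof(lambda origin, target: (0.5, 0.5, 0.5),
                           focal_point=(0.0, 0.0, 5.0),
                           aperture_radius=0.02, num_samples=64))
```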

The F-Stop setting in modo doesn’t affect the brightness of the render. Its purpose is to control the amount of depth of field. modo computes the incoming radiance from the scene at each pixel (in Watts per steradian per square meter). It does not attempt to simulate the response of film grains or CCD elements to that incoming light. Here’s another way to think about it: If you had a monitor capable of displaying high dynamic range images, then viewing a modo render would be more like looking at the real world scene rather than looking at a photograph of the scene.

Although it’s more physically accurate, ray tracing is a rather inefficient way to achieve depth of field effects. The number of samples required to get smooth results rises with the square of the diameter of the circle of confusion, quickly climbing into the thousands for more extreme blurs, which is not very practical. The Captain’s suggestion is basically a way to get more samples — for example, rendering with 64 samples and then scaling by 50% gets you 256 samples per pixel. Running a despeckle filter before scaling would help too. A much more efficient way to get depth of field effects is through postprocessing (which is how X-Dof works). You might look into DOF PRO as a good solution that works with modo’s depth output.

—- Baking:

“Baking rays” are fired from the low poly object back along its shading normals to determine what part of the high poly object should be shaded and incorporated into the image. So if the shading normals vary across the polygons of the low poly object due to smoothing, they will hit the high poly object at different angles and there may be some distortion. To prevent that, you can turn off smoothing on the low poly object.

Make sure that there is an alpha channel output when you bake ambient occlusion. That will allow modo to expand the borders around the UV “islands” in the occlusion image to help prevent seams.

The Bake command in the Render menu uses the currently selected vertex map to determine which polygons to bake. You can see the vertex map’s name or select a different one in the UV Maps section of the Vertex Map List (the topmost viewport under the Lists tab). Or if you want to bake an already-applied texture, you can right-click on it in the Shader Tree and choose Bake. There are two basic types of baking in modo. Baking applied textures in the Shader Tree (the right-click method I mentioned) is useful for converting procedural textures into image maps. Any textures of the same type below the one being baked will be incorporated into the image. In your case there might not have been any other textures, so if you baked a diffuse color texture, for example, it would contain just the diffuse color of the material itself. Another common use of this type of baking is creating normal maps based on micropolygon displacement. However it sounds like what you want to do is bake the results of shading instead (known as render outputs in modo). That’s what the Bake command in the Render menu does. My guess is that perhaps you have multiple objects all sharing the same UV vertex map (the one called “Texture”), so they’re all getting baked at once. Giving different names to the maps used by different objects would solve that problem.

Believe it or not, moving the camera might help! modo determines the density of irradiance values in the cache adaptively based on how big surfaces appear in the camera view (a technique developed by PDI during the making of Shrek 2). This is still used even during baking, so that the baked results will be appropriate for that view. If your camera is close to the geometry, I recommend pulling it back. Using a higher Irradiance Rate and an Interpolation Values setting of one should also help. Another common speedup for baking (unrelated to irradiance caching) is to use one AA sample per pixel instead of the default of 8.

It just so happens that I wrote something about this earlier today in the modo discussion forum. Here are a few things to check:

– The high poly object can be larger or smaller than the low poly object (in fact it can be larger in some areas and smaller in others), as long as it is always within the displacement distance of the low poly object’s surface. The distance I’m referring to is the one specified in the low poly object’s material. Areas where the high poly surface is higher than the low poly surface will bake as medium gray to white (depending on how much higher it is) and areas where the high poly surface is lower will bake as medium gray to black.

– The high poly object does not need to have any UVs at all, but if it does, they must be named differently (in the Vertex Map list) than the low poly object’s UV vertex map.

– The visibility “eyeballs” next to both the low and high poly objects in the Item List should be turned on.

– Since Bake From Object relies on “feeler rays” that follow the normal vectors of the low poly object, it helps if the low poly object is fully smoothed. In other words, be sure to use a high enough smoothing angle in its material.

Normal mapping works by defining a new normal vector at each point on a surface relative to a local 3D coordinate system at that point. One axis of this system is the original surface normal, which sticks straight out of the surface. The other two axes are tangent to the surface, and run in the direction of increasing U (sometimes called dPdu) and the direction of increasing V (sometimes called dPdv). The red channel of a normal map image defines the new normal vector relative to dPdu, the green channel defines it relative to dPdv, and the blue channel defines it relative to the original normal. What this all means is that U and V are very important for normal mapping. The Advanced OpenGL image shown above may seem correct, but really it’s only correct on the front face. I haven’t looked at the code, but I think what’s happening is that for non-UV textures, Advanced OpenGL is falling back on a default dPdu of (1, 0, 0) which is just the world X axis, and a default dPdv of (0, 1, 0) which is the world Y axis. These directions are basically correct for the front face but not for faces with other orientations.
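
To make the channel mapping concrete, here is a sketch (my own helper, not modo code) that turns a normal map texel into a shading normal using the dPdu/dPdv/normal basis described above, assuming the common 0..1 to -1..1 remapping of the channels:

```python
def decode_tangent_normal(texel_rgb, dpdu, dpdv, surface_normal):
    """texel_rgb: (r, g, b) in 0..1.  dpdu, dpdv, surface_normal: unit 3D vectors.
    Red perturbs along dPdu, green along dPdv, blue along the original normal."""
    tx, ty, tz = (2.0 * c - 1.0 for c in texel_rgb)   # remap 0..1 -> -1..1
    n = [tx * dpdu[i] + ty * dpdv[i] + tz * surface_normal[i] for i in range(3)]
    length = sum(c * c for c in n) ** 0.5
    return [c / length for c in n]

# The "flat" texel (0.5, 0.5, 1.0) reproduces the original surface normal:
print(decode_tangent_normal((0.5, 0.5, 1.0), (1, 0, 0), (0, 1, 0), (0, 0, 1)))
```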

Right, object-space maps look more colorful, and tangent-space maps are mostly bluish. The only way I know of to get object-space normals out of modo is by setting the Color Output to Surface Normal and using the Bake command in the Render menu. If instead you bake by right-clicking on your normal map in the Shader Tree, you should get tangent-space normals.

—- Motion Blur:

I think the render is correct. If the camera is moving at a constant speed, the motion blur should look the same whether the camera is moving forward or backward. The shutter opens, light accumulates evenly during the exposure, then the shutter closes. No extra weight is given to the beginning or end of the exposure. Film cameras are not like old-fashioned TV cameras that showed trails behind bright lights.

The offset is basically due to a timing difference. When motion blur is off and there is no motion vector output, the scene is evaluated exactly at each frame (for example, frame 1.0) to determine where everything is. When motion blur is on or a motion vector output is present, the scene is evaluated twice, once at shutter open and once at shutter close, to determine how everything is moving. By default these occur half the blur length before and after the exact frame (so the two times in our example would be frame 0.75 and frame 1.25). In the motion vector case, objects are rendered at the shutter open time for various technical reasons. So the offset is the difference between frame 0.75 and frame 1.0. There’s an easy way to eliminate the offset by using the Blur Offset channel of the camera. In the Camera Effects tab, enable motion blur and set the Blur Offset to 25% (half of the blur length). This puts the shutter open time on exact frames.
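
Here is the timing arithmetic spelled out (hypothetical helper, just to show how the blur length and Blur Offset relate to the shutter open and close times):

```python
def shutter_times(frame, blur_length=0.5, blur_offset=0.0):
    """By default the exposure is centered on the frame; a Blur Offset of +25%
    (half the default 50% blur length) moves shutter open onto the exact frame."""
    center = frame + blur_offset
    return center - blur_length / 2.0, center + blur_length / 2.0

print(shutter_times(1.0))                    # (0.75, 1.25) - default, opens before the frame
print(shutter_times(1.0, blur_offset=0.25))  # (1.0, 1.5)  - shutter opens exactly on frame 1
```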

You shouldn’t have to re-render any jump frames if the jumps occur while the shutter is closed. To do that, you can offset the jump keyframes by half a frame, or use the Blur Offset feature to shift the exposure forward or backward by half a frame.

I thought I’d explain a bit about how modo’s motion blur works. At the beginning of rendering a frame, the position and orientation of each object and the camera are computed for the start of the exposure and the end of the exposure. Deformations (if any) are also computed for these two times. Then during rendering we linearly interpolate between these extremes to place things where they should be at the exact time of each ray that is traced. One enhancement that is new for modo 401 is that the interpolated orientations are now adjusted to provide curved motion blur for rotating objects. If more complex motion blur is desired, the render item’s Frame Passes channel (which can be found in the Channels viewport) can be used to break up the frame into multiple subexposures, each of which will be evaluated as described above. For example, if Frame Passes is set to four, positions and orientations will be computed for five times during the frame (0%, 25%, 50%, 75%, and 100% of the way through the exposure), so complex motions will be represented by four linear segments instead of just one. The number of AA samples can often be reduced when using this method.
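
A rough sketch of that interpolation (toy code under my own naming, not the actual renderer): positions are evaluated at the segment boundaries, and each traced ray interpolates linearly within whichever segment its time falls into.

```python
def position_at_ray_time(segment_positions, ray_time):
    """segment_positions: object positions evaluated at evenly spaced times across the
    exposure (2 entries for simple blur, Frame Passes + 1 entries otherwise).
    ray_time: 0.0 at shutter open, 1.0 at shutter close."""
    num_segments = len(segment_positions) - 1
    t = min(max(ray_time, 0.0), 1.0) * num_segments
    i = min(int(t), num_segments - 1)     # which linear segment this ray falls into
    frac = t - i
    a, b = segment_positions[i], segment_positions[i + 1]
    return [x + (y - x) * frac for x, y in zip(a, b)]

# Frame Passes = 4 -> five evaluations, so a curved path is approximated by 4 segments:
path = [[0, 0, 0], [1, 0.5, 0], [2, 0.7, 0], [3, 0.5, 0], [4, 0, 0]]
print(position_at_ray_time(path, ray_time=0.6))   # lands inside the third segment
```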

Motion blur quality mainly depends on the number of antialiasing samples, so try increasing that. Certain effects of motion blur on shading can also be improved by using a finer (numerically lower) shading rate in the shader item.

The unmorphed vertex positions determine where things are at the time the camera shutter opens. The Velocity morph map determines vertex positions at the shutter close time. Since real camera shutters are closed for a while between frames, the Velocity map positions shouldn’t be where things are at the next frame but rather somewhere in between the two frames (halfway is good, corresponding to a 180 degree shutter). The default film gate (36 mm x 24 mm) is that of a 35 mm SLR, which actually has a different aspect ratio (3:2 or 1.5) than that of the default resolution (4:3 or 1.3333). Any time the film gate and resolution gate have different aspect ratios, the Film Fit option determines which one fits inside the other. This should all be familiar to Maya users.

—- Render Outputs / Compositing:

If your postprocessing app can handle unclamped floating point pixel values, you could just add a render output with an effect of World Coordinates. The green component of the pixels in that output would be literally the Y coordinates in meters — no messing with gradients required.

Some of the super-bright highlights or reflections appear a bit jaggy, but you can fix that by turning on Clamp Colors in the render output.

I meant the one in the render output. It’s the most important because it will be applied to that output when it is saved to an image file. The Default Output Gamma in Preferences simply determines what gamma the render output will start out with whenever you create a new scene. The Display Gamma in Preferences is used by the render window if Independent Display Gamma is on, and does not affect any saved images. It allows you to preview the effects of gamma correction even if you are saving linear (gamma 1.0) images.

I recommend additive compositing, in which the alpha channel is used only to cut out the background, and the foreground is added at full strength (without multiplying by alpha). One advantage is that additive effects like bloom and volumetric lights will be properly composited. To use this method, render against a black background and make sure Unpremultiply is turned off. The Unpremultiply option is for applications that force the foreground to be multiplied by alpha. Purely additive effects won’t be possible though.

One of the changes in 401 was to make Clamp Colors off by default. This makes the render window’s white level and tone mapping controls more useful, but it does mean that a few super-bright samples can dominate a pixel’s color.
As an example, consider the edge between the super-bright highlight and the dark saucer behind it in your first image. For simplicity let’s say the highlight has a color of 2 2 2 and the saucer has a color of 0 0 0, and that the White Level is 1. In the render window, or in a saved low dynamic range image, pixels that contain only highlight end up as 1 1 1 (full white). With Clamp Colors off, a pixel along the edge containing half highlight and half saucer will also end up as 1 1 1, because it’s a mix of 2 2 2 and 0 0 0. So it will appear as if there is no antialiasing along the edge. However if Clamp Colors is on, the 2 2 2 samples will be clamped to 1 1 1 before being mixed with the 0 0 0 samples, and the pixel will end up with a color of 0.5 0.5 0.5, giving a nice antialiased edge. If you don’t want to turn on Clamp Colors, another way to soften the sharp edges of super-bright areas is to use some Bloom.

modo supports several image formats through the FreeImage library. I believe the FreeImage savers start by allocating a buffer for the entire image, so it’s possible that the saver can’t find a single contiguous chunk of memory big enough to hold the image, even though the total amount of free memory may be more than enough. For really big renders I recommend using one of the non-FreeImage savers (like OpenEXR, or the built-in Targa saver) which can save images line by line.

It turns out that many apps discard areas of zero alpha in PNG files, which is annoying when you want to keep your background. You can avoid this by turning off the alpha output in the Shader Tree before rendering something that you want to save as PNG.

When you render objects against black, the edges of the objects are inherently “premultiplied” with alpha due to antialiasing. The proper way to composite such images over other backgrounds is to have the alpha channel affect the background, but add in the foreground at full strength, unaffected by alpha. This is sometimes called additive compositing. If you use the type of compositing in which the alpha channel affects the foreground, it will darken the already darkened edges and you’ll end up with fringes. I’m sure AfterEffects has a way to do additive compositing, probably by having it interpret the modo footage as premultiplied.

I should point out that modo’s depth output is already floating point internally, and it’s not hard to derive the true floating point depth values from what you get now. Just enter a Maximum Depth that is farther than any geometry in your scene, and render and save the output using a floating point image format like OpenEXR. Then in the compositing app, invert the pixel values and multiply them by the Maximum Depth value that you specified in modo. So the formula is trueDepth = (1.0 – pixelVal) * maxDepth.
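
For example, in a small compositing-side script (hypothetical variable names):

```python
def true_depth(pixel_value, max_depth):
    """Recover scene depth in meters from modo's normalized depth output,
    given the Maximum Depth entered in the render output."""
    return (1.0 - pixel_value) * max_depth

print(true_depth(0.8, max_depth=100.0))   # a pixel value of 0.8 -> 20 meters
```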

In case anyone is wondering, the way modo’s depth output works now is based on my discussions with the author of DOF PRO about the ideal input for that application. I agree that an option to simply output depth values in meters would be a nice addition.

The DPI setting doesn’t affect the render time or the rendered image. It’s only used for two things — to convert between pixels and inches in the render properties interface, and to be written into certain image file formats that store the DPI value along with the image.

The Write Buckets to Disk option was designed to allow modo to render extremely high resolutions without requiring much memory. For example, during 201 beta testing we rendered images up to 32000 x 29000 pixels using less than 2 GB. Besides Write Buckets to Disk, the key is to keep the geometry cache under control, primarily by turning Adaptive Subdivision off and choosing a subdivision level for each item yourself, or increasing the Subdivision Rate proportional to the increase in resolution. Also it would be better to use faked or baked GI, as irradiance caching at such resolutions is likely to be slow, but if you do use IC be sure to increase the Irradiance Rate. Another thing to be aware of is that some image saver plug-ins may have trouble dealing with such huge sizes. I can’t remember which ones work best, so it would be prudent to save in multiple formats.

modo does all its shading calculations with floating point accuracy. The result is a set of high dynamic range colors at various sample locations within each pixel. These colors are then filtered, which just means performing a weighted average (using weights determined by the user’s choice of antialiasing filter). The result of this filtering process is a single overall color for each pixel. What Clamp Colors does is to limit the individual sample colors within each pixel to the range of 0.0 to 1.0 before the filtering takes place. Even if Clamp Colors is off, the overall pixel colors will still be clamped upon saving the image to a low dynamic range file format, and of course the colors you see on a normal monitor are clamped (although high dynamic range monitors may change that someday). But by clamping before rather than after filtering, jaggies along the edges of extremely bright areas are avoided. This is the purpose of Clamp Colors. An example should make this more clear. Let’s say you’re rendering an image that includes a luminous polygon with a radiance of 10.0, in other words ten times brighter than what your monitor can show, and ten times brighter than full white in a normal image file. Now imagine a pixel along the edge of that polygon containing half polygon and half black background (with a radiance of 0.0). If Clamp Colors is off, the average color of that pixel will be 5.0 (half of 10.0 plus half of 0.0). But that’s still five times full white, so it will appear that there is no antialiasing along that edge. However if Clamp Colors is on, then the samples with a radiance of 10.0 will be limited to 1.0 before filtering, and the average color of the pixel will be 0.5 (half of 1.0 plus half of 0.0), providing a nice antialiased edge.
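
The same example in a few lines of toy code (a box filter over two samples, just to show the order of operations):

```python
def filter_pixel(sample_colors, clamp_colors):
    """Average the AA samples in a pixel, optionally clamping each sample to 0..1
    first (Clamp Colors on), then clamp the final result as a file saver would."""
    if clamp_colors:
        sample_colors = [min(max(c, 0.0), 1.0) for c in sample_colors]
    average = sum(sample_colors) / len(sample_colors)
    return min(max(average, 0.0), 1.0)

samples = [10.0, 0.0]                             # half luminous polygon, half black background
print(filter_pixel(samples, clamp_colors=False))  # 1.0 -> the edge looks aliased
print(filter_pixel(samples, clamp_colors=True))   # 0.5 -> nicely antialiased edge
```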

Premultiplication is inherent to the rendering process — there is no “premultiplication step” that can simply be omitted. To understand this, consider a red sphere (RGB color 1 0 0) rendered against a black environment. Some of the pixels along its antialiased silhouette may contain half sphere and half environment, giving them a color of 0.5 0 0 (basically the average color of all the AA samples within the pixel). The alpha channel in those same pixels will also have a value of 0.5, indicating half coverage. The proper way to composite this over a different background is to use the alpha channel to mask out the background, and then add in the rendered foreground color channels at full strength, a process sometimes called additive compositing. What you don’t want to do is multiply the foreground by alpha, which will cause a dark fringe (the 0.5 0 0 color in our example will become 0.25 0 0). For applications that aren’t capable of additive compositing, the foreground color channels should be divided by alpha as Arnie mentioned.
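
In formula form, per channel, with fg and alpha taken straight from the render and bg being the new background (a sketch of the two styles):

```python
def composite_additive(fg, alpha, bg):
    """Alpha only cuts a hole in the background; the foreground is added at full strength.
    This is the correct operation for renders done against black."""
    return fg + bg * (1.0 - alpha)

def composite_multiplied(fg, alpha, bg):
    """Multiplying the already-antialiased foreground by alpha darkens edge pixels twice."""
    return fg * alpha + bg * (1.0 - alpha)

# Edge pixel of the red sphere example: fg red = 0.5, alpha = 0.5, white background
print(composite_additive(0.5, 0.5, 1.0))     # 1.0  (correct: half red sphere, half white)
print(composite_multiplied(0.5, 0.5, 1.0))   # 0.75 in the red channel -> dark fringe
```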

modo’s alpha output is already perfect for either style of compositing. It’s the color channels that need to be “un-antialiased” when used in a way that multiplies them with alpha. Actually if you’re using Photoshop, you can do additive compositing, since it has an additive blend mode (called “Linear Dodge” for some reason). The alpha channel is just used to make a hole in the background. Here’s a page that talks about it:
http://www.digitalartform.com/archives/2005/10/compositing_pre.html
Those images demonstrate one of the big problems with unpremultiplied compositing, which is that highlights and reflections on foreground transparent objects have to be incorporated into the alpha channel. This is unrealistic since in the real world, highlights on glass do not affect how much light is transmitted from the background through the glass. The same problem also affects things like lens flares, which don’t block the background at all, but simply add to it. Additive (premultiplied) compositing handles these situations correctly. The alpha channel represents the actual opacity of the foreground, and therefore is only used to block the background to various degrees. The foreground is then added at full strength.

modo adds the exact amount of dithering needed to prevent banding in normal dynamic range image files with 8 bits per color component. This is basically a random number between 0 and 1/255 so that perfect black will stay black when quantized from floating point to 8 bit integer. For high dynamic range formats like OpenEXR, dithering is unnecessary, and we’ve added a switch in modo 301 to turn it off. A way you could avoid it in 203 would be to boost the exposure multiplier by, say, a factor of ten, then scale the image colors back down in Shake by the same factor, which would make the dither amplitude ten times smaller.
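
Roughly speaking (a toy quantizer, not the actual code), the dither works like this:

```python
import random

def quantize_to_8bit(value, dither=True):
    """Add just enough random dither to break up banding before truncating to 8 bits.
    Perfect black (0.0) stays 0, because the dither never reaches a full step."""
    if dither:
        value += random.random() / 255.0
    return min(int(value * 255.0), 255)

random.seed(0)
print([quantize_to_8bit(0.0) for _ in range(5)])       # always 0
print([quantize_to_8bit(0.5012) for _ in range(5)])    # dithers between 127 and 128
```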

The Surface ID color output option kinda does that now. The “surfaces” it refers to are internal geometry cache structures, and each instance is a different one. What the Surface ID option actually does is randomly choose one of seven “Apple approved” colors for each surface (left over from an old Macworld keynote modo demo I believe).


—- Samples / Shading / AA:

One AA sample per pixel in modo means no antialiasing (unless you mean antialiasing of image map textures, which is a separate issue). When Antialiasing is set to “1 Sample/Pixel”, there will only be a single shading sample in the center of each pixel. Any softness of geometric edges will be due to the filter kernel, which determines how samples affect neighboring pixels. Choose “Box” if you want to ensure that samples only affect the pixels they’re inside of (as in Preview, which always uses a box filter).

In modo, render times are often not proportional to the number of AA samples per pixel because shading is decoupled from geometric antialiasing, and render times are usually dominated by shading cost. Shading density is controlled by a separate setting called the Shading Rate (the approximate distance between shading samples measured in pixels). This is similar to the way Pixar’s PRMan works, but unlike more conventional renderers such as EI. With the default Shading Rate of 1.0, shading samples can be spaced up to one projected pixel apart regardless of the number of AA samples. So shading might only need to be calculated once per pixel even if Antialiasing is set to 8 or 128 or 1024 samples per pixel. This decoupling can be disabled by using a Shading Rate of 0.0, which ensures that every AA sample is shaded separately. Even then the render time might not be proportional to the number of AA samples per pixel, because the concept of “importance” is also used. What this means is that shading evaluations that have a bigger effect on the pixel are done with higher quality. If there is only one shading sample in the pixel, its importance is 100% and full quality is used, but if there are eight separately shaded samples in the pixel, each has an importance of 12.5% (1/8), and the number of rays used for things like blurry reflection is reduced accordingly. Note that the pixel as a whole will still have the same number of blurry reflection rays as in the first case, so the render time might not be much different.

modo doesn’t really use a smoothing pass, and I don’t think texture antialiasing is the issue either (Preview and final rendering use the same method for that, which is controlled in the properties of each image map). I believe the main difference is that Preview is using a lower importance than 100% for its shading evaluations, resulting in fewer blurry reflection rays, making it both faster and noisier. You could get the same effect in final rendering by lowering the Reflection Rays setting in the material’s properties.

One of the main differences between modo and most other renderers is that antialiasing and shading are decoupled. In other words, the number of AA samples and the number of shading evaluations per pixel can be quite different. It’s actually much like Pixar’s PRMan renderer in that respect. High AA levels are only needed for geometric antialiasing (tiny details, motion blur, and depth of field). The quality of multisampled shading features (such as soft shadows and blurry reflections) is controlled separately. I don’t really like the idea of using a low refinement threshold. The way refinement works is that first all pixels in a bucket are shaded using the initial shading rate from the shader item. Then the contrast between each pixel and its immediate neighbors is computed, and if it’s greater than the refinement threshold, the pixel color is thrown out and it gets shaded all over again using the refinement shading rate. Lowering the refinement threshold can cause a lot more pixels to be shaded twice, which is inefficient. Instead of lowering the threshold, at some point it’s better to just use a finer shading rate in the shader item.

Another problem with refinement is that small clusters of pixels that randomly end up with about the same color won’t be reshaded even if they need it, leaving occasional unrefined “clumps” in aliased areas. Some good examples are thin lines in a procedural grid texture, a thin highlight, or shadows of thin objects. With a coarse shading rate, such features can end up as dashed lines. Refinement won’t help if the gaps in the lines are several pixels long because there is no contrast to trigger it. Again the solution is to use a finer shading rate in the shader item. This discussion of shading rates is really about aliasing, not noise. For reducing noise, the best approach is to increase the sampling of whatever is causing the noise. For example, increase Reflection Rays if blurry reflections are noisy — no need to play with the AA level, shading rates, or refinement. Yes, render times are less affected by the AA level in modo due to shading being decoupled. Good point about the numbers in the reflection rays popup. Hopefully nobody will assume that 256 is the maximum — values into the thousands can be useful in some cases.

Keep in mind that shading and antialiasing are decoupled in the modo renderer, so in that sense it’s more like RenderMan than LightWave. For example if you specify 256 indirect rays per pixel, that’s how many you get regardless of AA settings (assuming the surface is full white). Same for other multi-sample effects like area lights, blurry reflections, SSS, etc.

If you want a technical explanation, the way modo determines what is visible in a pixel is to send a ray through every AA sample within that pixel. Even if a pixel looks empty in the viewport, in rare cases one of the rays might hit something. For example, if motion blur is on, every AA sample has a different time, and a fast moving object might only appear in the pixel for a fraction of the exposure. Or if depth of field is on, every AA sample has a different iris position, and a very close object might only be visible from one part of the iris. Such cases can affect as few as one AA sample (out of 1024 or whatever) in the outer fringes of a motion blurred or out of focus object. There are optimizations we could do to determine if it’s impossible for anything to appear in a pixel, and that’s on my to-do list. I’ll admit that blank areas were not a high priority for the rendering acceleration work I did last year, but they deserve more attention.

modo does something kind of like that already for shading. After modo has determined what is visible at each AA sample within a pixel, it tries to group them together before shading. For example if all AA samples show the background environment, the environment will only be evaluated once. Likewise if they all show a localized part of the same surface. If they show different surfaces, or the surface has a fine shading rate, then subsets of the AA samples (or possibly individual samples) will be shaded separately. That said, modo still fires a ray through every AA sample just to find out what’s there. One possible optimization would be to fire some fraction of the rays, and if none of them hit anything, assume that the rest won’t either. This would speed up background pixels but could slightly reduce the quality of object edges and blurred effects as explained in my previous post.

Shading refinement is mainly intended to improve sharp edged shading details such as hard shadow edges, sharp reflection and refraction, procedural grid textures, etc. For distributed light sources such as area lights or cylinder lights, reducing noise is better accomplished by increasing the number of light samples instead.

One way to think about it is that every pixel contains a certain number of samples as specified in the Antialiasing popup. These samples represent points on a surface. If some of these samples are similar enough, they can be merged and shaded as a group rather than individually, reducing render time. The Shading Rate in the shader item determines how much merging is allowed. It’s basically a distance measured in pixels. If the rate is 1.0 (the default), it’s possible for all the samples in a pixel to be shaded as a single group. If the rate is smaller, then the samples have to be closer together. For example, a rate of 0.5 means that samples within half a pixel of each other can potentially be merged, so shading is likely to be evaluated four times in each pixel. Once all the pixels in a bucket are shaded using the rate from the shader item, modo checks for neighboring pixels that contrast with each other. This might occur due to sharp edges in textures, shadows, reflections, or refractions. Such pixels might need more thorough antialiasing, so they are shaded again using the Refinement Shading Rate, which is generally finer than the rate in the shader item, giving higher quality where needed. The highest possible quality is when the rate is small enough that all the AA samples are shaded individually.

I wouldn’t worry too much about the Refinement Shading Rate. It’s not something that usually needs tweaking, regardless of the number of AA samples. If you want maximum quality in any refined pixels, just set it to 0.1 and forget it. You don’t need 32 in that case. If you really want 16 shading samples, then all you need is 16 AA samples and a rate of 0.25 or finer. The reason we don’t show an explicit scale like that is because the merging of samples in a pixel depends on other factors besides the shading rate. The shading rate just puts a limit on how far apart two samples can be and still be considered for merging. But even if two samples are closer together than the shading rate, they still won’t be merged if they belong to different materials, or if their normal vectors are too different, or if their UVs are too different. So getting back to the example of 16 AA samples in a pixel, if the shading rate is 1.0 it’s possible that all the samples will be merged and shaded just once, but it’s also possible that all the samples will be shaded individually (if none could be merged because they’re all too different), or anywhere in between those extremes. There might end up being three shading evaluations, or nine, or 12, etc. Now if the shading rate was 0.25, then there’s no way a sample near the top left of the pixel can be merged with one near the bottom right, since they’d be further than one quarter of a pixel apart. In fact it’s likely that all 16 samples will be shaded separately no matter how similar they might be, because the fine shading rate will prevent merging.

Jaggies in shading are normally handled by the shading refinement system.

There are actually good reasons for decoupling the shading rate from the number of antialiasing samples, including speed and consistency. Almost all commercial renderers use point sampling, which means each pixel contains a certain number of antialiasing sample points. You can think of them as the points where rays from the camera hit geometric surfaces or the environment. In most renderers, these points are also where shading is evaluated. That means increasing the antialiasing level will increase render times quite a bit, since shading is usually the most expensive part of rendering.

But there are a few programs (such as modo and REYES-type renderers like Pixar’s PRMan) in which the points where shading is evaluated may be completely different than the sample points used for geometric antialiasing. In such renderers, geometric antialiasing and motion blur quality are controlled by integers (the Antialiasing popup in modo and the Pixel Samples setting in RenderMan), but the density of shading evaluations on surfaces is controlled by a floating point number called the Shading Rate, which relates to the distance between shading points. In modo this is consistent with other rates which are also expressed as distances in pixels (such as the Subdivision Rate, the Displacement Rate, and the Irradiance Rate). It’s true that a Shading Rate of 0.5, for example, implies four shading evaluations per pixel, but there may be three, five, or some other number in some pixels depending on how the shading points happen to be laid out on the surfaces (in REYES-type renderers these points are actually the corners of micropolygons).

One of the advantages of decoupling shading from antialiasing is speed, since in the interiors of surfaces (away from their edges), one shading evaluation is often enough. The other extreme is to make modo work like a conventional renderer, which can be done by using the minimum Shading Rate of 0.1, causing shading to be computed at every antialiasing sample point. I’ll agree that the concept of a rate isn’t as simple as an integral number of samples, but it reinforces the fact that it’s a continuous and approximate setting (unlike AA samples), it’s consistent with other settings as described earlier, and the term should already be familiar to RenderMan users.

That’s a common problem in 3D rendering and is often called “highlight aliasing.” Basically it involves specular highlights or mirror reflections on highly curved surfaces, which compress the bright areas of highlights and reflections into very thin strips, much thinner than a pixel in some cases. The result is that the shading samples within each pixel might happen to fall on bright areas in one frame and dark areas the next, causing the sparkling effect. One way to combat this is to use more shading samples per pixel to make the average reflection brightness more consistent. Your instinct to increase the number of AA samples was good, but to really benefit from that you also need to use a finer Shading Rate (0.1 is best) in the shader item. If that slows things down too much, you could limit the fine rate to just the reflective surfaces by giving them their own shader item. Put it in their shader tree group (right above the material) and move the group above the Base Shader. Other ways to combat highlight aliasing are to lower the contrast and frequency of what’s being reflected (mostly useful for environment images), or to increase the roughness of the surfaces, or to lower the specularity and reflectivity of the surfaces. One thing I noticed in your settings was that the black painted material had a specular amount of 30%, whereas most real paints would have an amount of 5% or less. Also your chrome material might benefit from more reflection rays, and perhaps a lower Ray Threshold (maybe 0.01% instead of 0.5%).

The way supersampling works is that after a single ray has been fired into each hemispherical cell, a second pass is performed in which those cells that differ from their neighbors by more than a certain amount are subdivided in altitude and azimuth into four subcells. A ray is then fired into each subcell, so the original cell now has information from five rays instead of one. Since the main purpose of the Supersampling option is to increase the consistency of irradiance cache values (to reduce blotchiness in the resulting shading), the “outliers” (the brightest and dimmest rays) are then discarded, and the three remaining rays are averaged to represent the color of that cell. This can cut down on caustics resulting from “freak accidents” (such as a low probability series of ray bounces), which is often desirable since those can cause unwanted bright blotches in the middle of a wall or floor. Of course if caustics are your goal, then you want such blotches. Supersampling should probably always be done (as it is with blurry reflections in 301), and the option replaced with a new one specifying whether to do the outlier removal. By the way, that 25% figure mentioned in the manual is now obsolete, since in 301 the number of hemispherical cells that get supersampled is adaptive.
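
A stripped-down sketch of that second pass for a single hemispherical cell (my own toy version with scalar colors, not the shipping code):

```python
def supersample_cell(cell_color, fire_ray_into_subcell):
    """cell_color: the color seen by the single first-pass ray for this cell (a scalar
    here for simplicity).  fire_ray_into_subcell(i) is an assumed callback returning
    the color of a ray fired into one of the four altitude/azimuth subcells.  The
    brightest and dimmest of the five results are discarded as outliers and the
    remaining three are averaged."""
    results = [cell_color] + [fire_ray_into_subcell(i) for i in range(4)]
    results.sort()
    kept = results[1:-1]            # drop the dimmest and the brightest ray
    return sum(kept) / len(kept)

# Toy usage: one subcell ray got lucky and saw a caustic-like hot spot, and is discarded.
print(supersample_cell(0.2, lambda i: [0.25, 0.3, 40.0, 0.22][i]))
```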

I suspect those final buckets would have eventually finished, but it might have taken a while. Certain situations can cause an “explosion” in which the number of rays keeps rising with each recursion level. These often involve glass (in which a single ray hit spawns a new reflection ray, a new refraction ray, and shadow rays toward each light source) or blurry reflections (in which a single ray hit spawns multiple reflection rays). The Ray Threshold prevents such situations from getting out of control by randomly terminating some low importance rays and boosting the importance of the survivors in order to remain unbiased. As you discovered, a value smaller than the default but larger than zero is often the best answer.
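
The Ray Threshold mechanism is essentially Russian roulette; here is a minimal sketch of an unbiased termination step of that kind (toy code, not modo internals):

```python
import random

def maybe_terminate(ray_importance, ray_threshold):
    """Randomly kill rays whose importance falls below the threshold, and boost the
    importance of the survivors so the expected contribution stays the same (unbiased)."""
    if ray_importance >= ray_threshold:
        return ray_importance              # important enough: always traced
    survival_probability = ray_importance / ray_threshold
    if random.random() < survival_probability:
        return ray_threshold               # survivor gets boosted back up
    return None                            # ray terminated

random.seed(2)
print([maybe_terminate(0.001, ray_threshold=0.005) for _ in range(5)])
```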

modo’s renderer decouples shading from antialiasing, meaning that the set of shading samples within a pixel is different from the set of antialiasing samples. By default, a pixel that contains only one surface might only have one shading sample, even though geometric edges are all being antialiased using 8 samples per pixel. This is obviously a big time saver and is somewhat similar to the way Pixar’s PRMan works. The density of shading samples is controlled by the Shading Rate, which (like all other rates in modo) is expressed as a distance in pixels. The default of 1.0 means that adjacent shading samples are roughly one pixel apart, resulting in one per pixel (although there may be more on surfaces seen at glancing angles). A rate of 0.25 means the shading samples are spaced roughly one quarter of a pixel apart, so there should be about 16 of them per pixel. However the actual number used is never higher than the AA Samples setting, which is 8 by default.
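
As a rough rule of thumb (an approximation of the relationship described above, not an exact formula):

```python
def approx_shading_samples_per_pixel(shading_rate, aa_samples):
    """Shading samples are spaced roughly shading_rate pixels apart, but there are
    never more of them than there are AA samples in the pixel."""
    return min(round(1.0 / (shading_rate * shading_rate)), aa_samples)

print(approx_shading_samples_per_pixel(1.0, 8))    # about 1 per pixel
print(approx_shading_samples_per_pixel(0.25, 8))   # 16 would be implied, capped at 8
```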

Depending on the specified antialiasing filter, the color and alpha of a pixel can depend on shading samples within neighboring pixels:

– If the filter type is Box, each pixel’s color is based only on the samples within its borders, all weighted evenly. In technical terms, the width of the filter kernel is one pixel. This is equivalent to LightWave’s regular antialiasing.

– If the filter type is Triangle, each pixel will use the samples within its borders plus any samples up to halfway across the neighboring pixels, weighted with a linear falloff. The width of the filter kernel is two pixels. This is equivalent to LightWave’s “enhanced” antialiasing, and is sometimes called a Tent or Bartlett filter.

– If the filter type is Gaussian, each pixel will use the samples within its borders plus the samples within the neighboring pixels, weighted with a bell curve. The width of the filter kernel is three pixels.

Anyway, what this means is that pixels whose neighbors are black will appear darker if the Triangle or Gaussian filters are used. The edges of the render region get special handling to prevent dark fringes.
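
To illustrate the kernel widths, here is a toy 1D version (the exact Gaussian coefficients are my own guess; modo's real filter is certainly tuned differently):

```python
import math

def box_weight(dx):
    """Kernel width 1 pixel: full weight inside the pixel, zero outside."""
    return 1.0 if abs(dx) <= 0.5 else 0.0

def triangle_weight(dx):
    """Kernel width 2 pixels: linear falloff reaching zero one pixel away."""
    return max(0.0, 1.0 - abs(dx))

def gaussian_weight(dx, half_width=1.5):
    """Kernel width 3 pixels: bell curve cut off 1.5 pixels away (falloff coefficient
    is a guess, not modo's actual curve)."""
    return math.exp(-4.0 * (dx / half_width) ** 2) if abs(dx) <= half_width else 0.0

# dx = distance in pixels from a shading sample to the center of the pixel being filtered
for dx in (0.0, 0.5, 1.0, 1.4):
    print(dx, box_weight(dx), triangle_weight(dx), round(gaussian_weight(dx), 3))
```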

The shading rate in modo and PRMan relates to all aspects of shading (material and texture evaluation, direct lighting, ray traced reflection and refraction, etc). In more traditional renderers like LW or mental ray, that stuff gets computed at each antialiasing sample. If you increase the number of geometric AA samples per pixel, you’re also increasing the number of shading samples, because they’re the same. But in modo and PRMan, shading samples are distributed over surfaces with a different density and at different locations than where the geometric AA samples are. This means you can usually increase the spatial AA quality with much less of a hit in rendering time.

One unusual aspect of modo is that geometric antialiasing (controlled by AA Samples) and shading detail (controlled by the Shading Rate) are decoupled, which is quite different from accumulation buffer renderers like LightWave and adaptive ray tracers like mental ray, in which antialiasing and shading samples are the same. It’s more like PRMan, in which antialiasing is controlled by RiPixelSamples and shading detail is controlled by RiShadingRate. But unlike PRMan, micropolygon displacement in modo can be performed at a different density than the shading rate. If you’re talking about the way an individual shading sample is evaluated, then I’d agree, that aspect is similar to other physically based renderers.


—- Network Rendering:

Distributing some of the work of IC pre-pass calculations to slave machines is something we’ve thought about, but there are significant technical challenges. The irradiance cache is basically a cloud of points, each of which consists of an irradiance value and some other information. Some of these are primary (first bounce) values and some are secondary (later bounces). The dots you see during the pre-passes are locations where primary values might be created, if not enough previously created values can be found nearby. When a value is created, lots of indirect rays are fired from that location. What is not shown is that when these rays hit other diffuse surfaces, secondary values are created, and these are often outside the frame and even behind the camera. Even if each slave worked on the primary values for just one region of the frame, they would end up creating secondary values scattered all over the scene. Primary values might also be created outside of the region if there are reflective objects like mirrors within the region.

What would need to happen is that after each pre-pass, the slaves would send their partial irradiance caches back to the master which would combine them. Then the master would have to somehow filter out the redundant values (where more than one machine happened to create a value in almost the same place) or else the cache would be too dense, slowing down rendering. Finally the combined and simplified cache would be transmitted back to each slave for the next pre-pass, since each one builds on the previous one. The problems are not insurmountable, but the redundant work and the required synchronization between passes mean that the potential speedup would probably not be as much as expected. If we stick with irradiance caching we’ll probably try to do this, but we’re also investigating alternatives that might be better suited for network rendering.

The minimum size for a single network rendering job is four buckets, so you’ll see at least four blue buckets for each participating slave. The actual number of threads that each slave uses to render its current job may be less than four however. In the case of your Macbook it would be two. You should be able to get good speedups on scenes dominated by bucket rendering time. The best I’ve seen is a 1.8X speedup when adding an iMac slave to my ThinkPad (both Core Duos, so the theoretical maximum speedup would be about 2X). Scenes dominated by things like micropolygon dicing (which is done independently on each machine) or irradiance cache pre-passes (which are done on the master) will not be sped up as much. We’d like to distribute the IC pre-passes for 501 but it’s tricky, since the irradiance cache is a global structure that builds on itself. Basically each pre-pass would be divided up among all the machines, which would send back their partial caches to the master (as .lxi files). The master would then read those files and combine them into one cache, which the slaves would all have to load before beginning the next pre-pass. There are four pre-passes so this would happen four times before bucket rendering could begin.

It turns out to be more efficient to give bigger jobs to slaves, so the minimum job size is four buckets. Each slave will still use its natural number of threads to render those four buckets (i.e. your quads will be rendering with four threads and the dual with two).