Animation Pipeline
Here we are - only six years since the last time I posted in this blog :)

Anyways, today's topic is: "Asset versioning in CG animation pipelines". What I want to explain below is one minor yet quite critical part of it. Apparently, not everyone understands it, including some established production houses.

So here we go:

Why treating assets as source code in terms of versioning is plain wrong, at least in Maya-based animation pipelines.

First, let's look into how we version source code. Version Control Systems (VCSs) like git, Subversion, Perforce, CVS, etc. do that for us. No matter how different those systems are, they pretty much follow the same paradigm: there is only one version of every source file you want to use when compiling your project. You never need or want to use two or more versions of the same file for the same build. It would actually be a big problem if you could compile a source file against a different version of a header file. To compile and deliver the project we make a disk snapshot of the source file tree, with one version of each file present for the compiler to use. All previous versions are kept in the system too (just not in that disk snapshot), to preserve the history and for the cases when we want to roll back some changes because they were wrong, or when we want to branch the code from some point in the past.

Let's formulate a few important principles of this paradigm here:
1. The goal of any software project is to compile and deliver a set of binary files that can be used as a whole.
2. To compile (deliver) a project you need to put all its files into a single directory structure, where each file is represented by one version, considered appropriate for the build.
3. Compilation (delivery) of a project only takes minutes or hours. The whole project can be built overnight. Compared to development time it's literally nothing.
4. At any moment we should be able to restore the state of the entire project as it was at any given point of time in the past, if we need to, and compile it with the same result, no matter how many times we do that or when we do that.

The same paradigm sounds pretty natural for all projects that consist of source code objects or the like. For example, game engines like Unreal Engine use a similar approach for all elements of the game: source files, levels, assets, textures, etc. You need only one version of each to compile the game, and you should be able to re-compile it as it was a month ago if you need to.

The truth is: most animation projects are quite different from what I've described above, and here's why:

1. The goal of an animation project is to deliver a set of animated image sequences, usually split into episodes, sequences, shots. Shots are usually the units of development and delivery.
2. Delivery and approval of individual shots is usually spread over months or even years. Multiple shots are worked on concurrently by different shot artists. A shot, once its final render is approved, is rarely touched again, although that happens too. Rendering a single shot might take a few days. Re-rendering the whole project is not feasible at all, since it would take weeks if not months.
3. Different shots might use different versions of assets. As long as they visually look the same, it's not a problem at all. In fact it's even a requirement: the same semantic entity (a character, a prop, a set) has to be represented by different assets in different shots, for example, if the character uses different clothing, or the prop gets dirty or broken.
4. At any moment we should be able to restore the state of any shot as it was at any given point of time in the past, if we need to, and render it with the same result as if it was done back then.

So the only item that stays unchanged between the two paradigms is #4. Besides, there are a few new requirements that do not exist in software development:

5. Some files incorporate live links to other files using file paths (referencing in Maya).
6. The same asset is usually referenced by many other files, e.g. a character asset is referenced by every shot that character appears in.

Those might sound similar to the concept of header files, yet they are critically different, due to #2 and #3.

To sum up: you can have multiple versions of the same asset used in multiple scenes at the same time, and it's completely normal. Even more than that: using just one (correct/latest/approved) version of each asset everywhere would cause a lot of trouble. Publishing a new version of an asset and making it the only available (correct/latest/approved) one would implicitly change the state of all files that refer to it by its path, whether they are still in development or (even worse) already done: there would be no way to go back to the previous state of an individual shot.

So if your asset repository directory structure looks like this:
asset_name/
    asset_name.ma              <- latest/approved
    versions/
        asset_name_001.ma      <- previous versions
        asset_name_002.ma
        asset_name_003.ma


then you are in trouble. Instead you should always use:

asset_name/
    version_001/
        asset_name.ma
    version_002/
        asset_name.ma
    version_003/
        asset_name.ma
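
To make this concrete, here is a minimal Python sketch of a publish step for this layout (the repository root, the exact naming convention and the function name are all made up for illustration): every publish creates a brand new, immutable version directory and returns the pinned path a shot should reference.

import os
import re
import shutil

def publish_asset(repo_root, asset_name, source_ma):
    # Every publish creates a fresh version_NNN directory; nothing that was
    # already published is ever overwritten, so a shot referencing
    # version_002 keeps seeing exactly that file forever.
    asset_dir = os.path.join(repo_root, asset_name)
    existing = ([d for d in os.listdir(asset_dir) if re.match(r"version_\d{3}$", d)]
                if os.path.isdir(asset_dir) else [])
    next_num = max((int(d.split("_")[1]) for d in existing), default=0) + 1
    version_dir = os.path.join(asset_dir, "version_%03d" % next_num)
    os.makedirs(version_dir)
    target = os.path.join(version_dir, asset_name + ".ma")
    shutil.copy2(source_ma, target)
    return target  # the path a shot should reference, pinned to this version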

Your next question is probably: "How would shot artists using all those versions know when a new version of an asset is published? What if it fixes some critical bugs?" My answer is: "We should give artists proper tools to make an informed decision and update to the version they need, but only if they want to." It could be a dialog that warns the artists about the availability of newer versions of assets, along with descriptive commit notes. Yes, we lose the ability to auto-update multiple scenes by publishing a new fix, but we also avoid a lot of surprises. Remember our #4? When a shot artist opens a scene that was working a month ago and finds all characters and props slightly changed because they were all "fixed", it's not a good surprise, trust me. They should be able to explicitly choose between the "old" and the "new" states instead, for every asset in the scene.
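
As a rough illustration of such a tool (the directory layout and naming are assumptions matching the structure above, not any particular pipeline's API), the check itself can be as simple as:

import os
import re

def newer_versions(reference_path):
    # Given a pinned reference like .../asset_name/version_002/asset_name.ma,
    # return the version directories published after it, so the artist can be
    # warned about them (and shown the commit notes), but never force-updated.
    version_dir = os.path.dirname(reference_path)   # .../asset_name/version_002
    asset_dir = os.path.dirname(version_dir)        # .../asset_name
    current = int(re.search(r"version_(\d{3})$", version_dir).group(1))
    published = sorted(d for d in os.listdir(asset_dir)
                       if re.match(r"version_\d{3}$", d))
    return [os.path.join(asset_dir, d) for d in published
            if int(d.split("_")[1]) > current]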

Color transformations
No bitching today. Some working python code, demonstrating color space transformations instead. It's here.

Color Science
Okay guys, did you miss me? So did I :)

I've been busy doing stuff. Among the stuff I've been busy with was something called Color Science. Today my rant is totally about that: by far the most confusing and intimidating part of computer graphics.

After days of banging my head against those wonderful 3x3 matrices, DCI P3, XYZ, X'Y'Z', xyz (they are all different, btw) color spaces, white points, primaries and you name it, I was almost ready to kill myself. I have to warn the world: there is a huge conspiracy of color scientists going on. They seem to be nice and sharing, writing detailed books and creating helpful online resources like http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html. This is all bullshit. In fact, they are evil creatures secretly invading our industry from Mars, Aldebaran or even further away. I also think they all share the same Aldebaranic DNA strands. At least I cannot find any other reason why all the books, web sites and whatnot use column-major matrices without even mentioning that. All those days when I was trying to invert and multiply these matrices using Imath::Matrix44 (which, obviously, did not produce correct results without transposing them first), they must have been high-fiving each other's tentacles in joy.

Rise people of CG! Don't let them kill you! Always transpose your color matrices!
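
To see what I mean, here is a quick numpy check (my own illustration, not code from any of those books; the matrix is the sRGB D65 -> XYZ matrix as printed on Bruce Lindbloom's site, written for column vectors):

import numpy as np

# sRGB (D65) -> XYZ, column-vector convention: [X Y Z]^T = M @ [R G B]^T
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])

rgb = np.array([1.0, 1.0, 1.0])  # linear sRGB white

print(M @ rgb)    # ~[0.9505 1.0000 1.0890] -- the D65 white point, as expected
print(rgb @ M)    # garbage: what a row-vector API (Imath-style v * M) effectively computes
print(rgb @ M.T)  # transpose first -> correct again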

Autodesk again
This time I will rant not about their software but about their customer support. It hasn't been that great all these years, but it looks like it's only getting worse. A week ago a co-worker of mine submitted a support request on their portal. He attached a short python script to illustrate his problem. After a few days a support guy came up with a response: he could not open the .py file. You got it right: a support person at a billion-dollar vendor cannot open a file attached to a support ticket created in their own system. How's that? I immediately tried it and downloaded the script without any problems.

Seriously, I don't know how much the company I work for pays Autodesk a year, but I believe it's a 5-digit number. I really think they should've paid us instead for this kind of support.

Some free stuff to share
Hi everyone,

Today I'm not going to complain about mental ray. Instead, I will share some C++ code I wrote in my free time as part of a bigger project that hasn't been announced yet. I haven't even decided whether it ever will be. So here is the tarball:

http://www.alexsegal.net/ftp/imgutils.tar

It contains source code of two utilities and their Makefiles for Linux.

1. linearize - performs color transformation into linear space for textures. It can read an sRGB or a Rec.709 texture and save it as a linear image.
2. exr4nuke - post-processes rendered OpenEXR images to optimize them for comping in Nuke. It does two things: re-compresses the image with zip compression, one scanline per block, and crops the dataWindow to the bounding box of all non-empty pixels (a rough sketch of the idea follows below).

To compile them, you have to have the following libraries installed on your Linux system:
- OpenEXR
- OpenImageIO
- boost

For Windows you'll have to re-create the Makefiles (or Visual Studio projects) from scratch.
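
By the way, the exr4nuke idea itself is easy to sketch with OpenImageIO's Python bindings (this is not the code from the tarball; the file names are made up, and it's worth verifying that OIIO writes the display window exactly the way your comp expects):

import OpenImageIO as oiio

src = oiio.ImageBuf("render.exr")                   # hypothetical input render
roi = oiio.ImageBufAlgo.nonzero_region(src)         # bounding box of non-empty pixels
cropped = oiio.ImageBufAlgo.crop(src, roi)          # shrink the dataWindow, keep pixels in place
cropped.specmod().attribute("compression", "zips")  # zip, one scanline per block
cropped.write("render_for_nuke.exr")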

Enjoy!

sRGB, Rec.709 to linear and back
I did some color conversion these days. Just to keep it somewhere, I will post the formulas here. It was simple with sRGB - everything is on Wikipedia - but for some reason the Rec.709 standard does not include the reverse formula in the document describing the standard (http://www.itu.int/rec/R-REC-BT.709-5-200204-I/en), so I had to derive it myself.

Conversion to linear:

if f > thresh:
    f = pow((f+a) / (1+a), c)
else:
    f = f / b

From Rec.709: use the following values:
thresh = 0.081    (= 0.018 * 4.5)
a = 0.099
b = 4.5
c = 2.2222

From sRGB: use the following values:
thresh = 0.04045
a = 0.055
b = 12.92
c = 2.4
       
Conversion from linear:

if f > thresh:
    f = (1+a) * pow(f, 1.0/c) - a
else:
    f = f * b

To Rec.709:
thresh = 0.018
a = 0.099
b = 4.5
c = 2.2222

To sRGB:
thresh = 0.0031308
a = 0.055
b = 12.92
c = 2.4
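
Putting it together, here is the same thing as a small self-contained Python snippet (my own arrangement of the formulas above, using the same constants):

# Constants from the tables above; "to_thresh" is used when decoding to
# linear, "from_thresh" when encoding from linear.
REC709 = dict(to_thresh=0.081,   from_thresh=0.018,     a=0.099, b=4.5,   c=2.2222)
SRGB   = dict(to_thresh=0.04045, from_thresh=0.0031308, a=0.055, b=12.92, c=2.4)

def to_linear(f, cs):
    # Decode an sRGB/Rec.709 encoded value f into linear light.
    if f > cs["to_thresh"]:
        return ((f + cs["a"]) / (1 + cs["a"])) ** cs["c"]
    return f / cs["b"]

def from_linear(f, cs):
    # Encode a linear value f back into sRGB/Rec.709.
    if f > cs["from_thresh"]:
        return (1 + cs["a"]) * f ** (1.0 / cs["c"]) - cs["a"]
    return f * cs["b"]

# A round trip should return the original value (up to float precision):
print(from_linear(to_linear(0.5, SRGB), SRGB))      # ~0.5
print(from_linear(to_linear(0.5, REC709), REC709))  # ~0.5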

Announcement
Just wanted to let everyone know: I had to reject a livejournal user who sent me a request to join the community. I had two reasons to think it was a bot:

1. Their livejournal is empty.
2. The user name is unreadable (at least to me).

So if you are human but your account matches both of these criteria, and you want to join the community, please send me a private livejournal message or leave a comment on this entry before attempting to join.

Subsurface in reflections
There is officially no known way to render character reflections in mirrors. More precisely, there is no method to render subsurface scattering in reflected/refracted rays, unless you use your own SSS shader. The misss* subsurface shader that comes with mental ray will only substitute the missing SSS with lambert, which means the reflections will not look like skin.
It would be funny if we didn't have a feature-length project ahead with lots (I mean LOTS) of characters looking into mirrors.

UPD: The issue seems to be more complicated. It looks like the stock skin shaders in maya/xsi/max do render SSS in reflections. I will need to investigate this further.

Stereo
A nice new feature in mental ray version 3.8: the ability to render two images, for the left and right eyes, during the same render session. As usual, mental images were good at the tiny little things that make the entire feature useless: the new "stereo" flag in the camera declaration has no way to specify the convergence (or zero-parallax plane) distance. For some strange reason the cameras are always converged at the focal length point.

It looks like they at last managed to confuse themselves by using incorrect terminology. The documentation refers to cameras converged at a "focal distance", but there is no such thing in either real-world or digital photography. There are two things instead: "focal length", which is a lens characteristic directly affecting its angle of view, and "focus distance", which is the distance at which the lens projects objects onto film in perfect focus.

I would still mind having both the focus distance and the convergence plane controlled by the same parameter, but that at least would make some sense. The focal length, however, has nothing to do with eye convergence, and that's why the entire thing is just useless.

This is amazing
One more wonderful discovery I made these days, this time about the mental ray database.

If you think leaf instance names have unique tags in the database, you are wrong. There can be more than one tag for the same leaf instance name when more than one material is attached to the object.

This is very "useful", especially for fancy things you would write yourself, like a per-object user data cache. I did it using std::map, with tags as keys. After a few reports from the lighters ("it's not working!") I was able to track down the offenders: two different leaf instance tags have two different instance parents, but after mi_api_tag_lookup() they resolve to the same name.

The worst part is that my cache is populated the first time the surface shader is called, and if that happens during the displacement evaluation stage, not all of the tags are present there yet, so the cache ends up incomplete.