Color transformations
No bitching today. Some working Python code demonstrating color space transformations instead. It's here.

Color Science
Okay guys, did you miss me? So did I :)

I've been busy doing stuff. Among the stuff I've been busy with was something called color science. Today my rant is totally about that, by far the most confusing and intimidating part of computer graphics.

After days of banging my head against those wonderful 3x3 matrices, DCI P3, XYZ, X'Y'Z', xyz (they are all different, btw) color spaces, white points, primaries and you name what, I was almost ready to kill myself. I have to warn the world: there is a huge conspiracy of color scientists going on. They seem to be nice and sharing, writing detailed books and creating helpful online resources like http://www.brucelindbloom.com/index.html?Eqn_RGB_XYZ_Matrix.html. This is all bullshit. In fact, they are evil creatures secretly invading our industry from Mars, Aldebaran or even further away. I also think they all share the same Aldebaranic DNA strands. At least I cannot find any other reason why all the books, web sites and whatnot use column-major matrices without even mentioning it. All those days I spent inverting and multiplying these matrices with Imath::Matrix44 (which, of course, did not produce proper results until I transposed them first), they must have been high-fiving each other's tentacles in joy.

Rise, people of CG! Don't let them kill you! Always transpose your color matrices!
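
To spell out the trap: the matrices printed in the books and on Lindbloom's page are written for the column-vector convention (XYZ = M * rgb), while Imath multiplies row vectors on the left, so the matrix must be transposed first. Here is a minimal illustration (assuming numpy, which is not part of anything above; the matrix is the sRGB D65 to XYZ matrix from the page linked above):

import numpy as np

# sRGB (D65) -> XYZ matrix exactly as printed in the references,
# meant for column vectors: XYZ = M @ rgb
M = np.array([[0.4124564, 0.3575761, 0.1804375],
              [0.2126729, 0.7151522, 0.0721750],
              [0.0193339, 0.1191920, 0.9503041]])

rgb = np.array([1.0, 0.0, 0.0])  # pure linear red

print(M @ rgb)    # correct: the column-vector convention the books assume
print(rgb @ M)    # wrong: row-vector convention without transposing
print(rgb @ M.T)  # correct again: transpose first for row-vector libraries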

Autodesk again
This time I will rant not about their software but about their customer support. It has never been great all these years, but it looks like it's only getting worse. A week ago a co-worker of mine submitted a support request on their portal and attached a short Python script to illustrate his problem. After a few days a support guy came back with a response: he could not open the .py file. You got it right: a support person at a billion-dollar vendor cannot open a file attached to a support ticket created in their own system. How's that? I immediately tried it myself and downloaded the script without any problems.

Seriously, I don't know how much the company I work for pays Autodesk a year, but I believe it's a 5-digit number. I really think they should've paid us instead for this kind of support.

Some free stuff to share
Hi everyone,

Today I'm not going to complain about mental ray. Instead, I will share some C++ code I wrote in my free time as part of a bigger project that hasn't been announced yet. I haven't even decided whether it ever will be. So here is the tarball:

http://www.alexsegal.net/ftp/imgutils.tar

It contains the source code of two utilities and their Makefiles for Linux.

1. linearize - performs color transformations into linear space for textures. It can read an sRGB or Rec.709 texture and save it as a linear image.
2. exr4nuke - post-processes rendered OpenEXR images to optimize them for comping in Nuke. It does two things: re-compresses the image with per-scanline zip compression and crops the dataWindow to the bounding box of all non-empty pixels (see the sketch below).
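
To give a rough idea of what exr4nuke does, here is a minimal sketch using the OpenImageIO Python bindings (the tarball itself contains the real C++ sources; the filenames below are hypothetical):

import OpenImageIO as oiio

src = oiio.ImageBuf("render.exr")

# bounding box of all non-empty (non-zero) pixels
roi = oiio.ImageBufAlgo.nonzero_region(src)

# crop the dataWindow to that box, keeping pixel positions intact
cropped = oiio.ImageBufAlgo.crop(src, roi)

# "zips" = zip compression with one scanline per chunk, for comping in Nuke
cropped.specmod().attribute("compression", "zips")
cropped.write("render_for_nuke.exr")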

To compile them, you have to have the following libraries installed on your Linux system:
- OpenEXR
- OpenImageIO
- boost

For Windows you'll have to re-create the Makefiles (or Visual Studio projects) from scratch.

Enjoy!

sRGB, Rec.709 to linear and back
I did some color conversion these days. Just to keep it somewhere, I will post the formulas here. It was simple with sRGB - everything is on Wikipedia - but for some reason the Rec.709 standard does not have the reverse formula in the document describing the standard (http://www.itu.int/rec/R-REC-BT.709-5-200204-I/en), so I had to derive it myself.

Conversion to linear:

if f > thresh:
    f = pow((f+a) / (1+a), c)
else:
    f = f / b

From Rec.709:
thresh = 0.0801
a = 0.099
b = 4.5
c = 2.2222

From sRGB:
thresh = 0.04045
a = 0.055
b = 12.92
c = 2.4
       
Conversion from linear:

if f > thresh:
    f = (1+a) * pow(f, 1.0/c) - a
else:
    f = f * b

To Rec.709:
thresh = 0.018
a = 0.099
b = 4.5
c = 2.2222

To sRGB:
thresh = 0.0031308
a = 0.055
b = 12.92
c = 2.4
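
For convenience, here is the same thing wrapped as runnable Python functions (a minimal sketch; the constants are exactly the ones from the tables above):

REC709 = {"to": 0.0801, "from": 0.018, "a": 0.099, "b": 4.5, "c": 2.2222}
SRGB = {"to": 0.04045, "from": 0.0031308, "a": 0.055, "b": 12.92, "c": 2.4}

def to_linear(f, p):
    # decode an encoded (display) value into linear light
    if f > p["to"]:
        return ((f + p["a"]) / (1 + p["a"])) ** p["c"]
    return f / p["b"]

def from_linear(f, p):
    # encode a linear value with the sRGB / Rec.709 transfer curve
    if f > p["from"]:
        return (1 + p["a"]) * f ** (1.0 / p["c"]) - p["a"]
    return f * p["b"]

# round-trip sanity check
for p in (SRGB, REC709):
    assert abs(from_linear(to_linear(0.5, p), p) - 0.5) < 1e-6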

Announcement
Just wanted to let everyone know: I had to reject a livejournal user who sent me a request to join the community. I had two reasons to think it was a bot:

1. The journal is empty.
2. The username is unreadable (at least to me).

So if you are human but your account matches both of these criteria, and you want to join the community, please send me a private livejournal message or leave a comment on this entry before attempting to join.

Subsurface in reflections
There is officially no known way to render character reflections in mirrors. More precisely, there is no method to render subsurface scattering in reflected/refracted rays, well, unless you use your own SSS shader. The misss* subsurface shaders that come with mental ray will only substitute the missing SSS with lambert, which means the reflections will not look like skin.
It would be funny if we didn't have a feature-length project ahead of us with lots (I mean LOTS) of characters looking into mirrors.

UPD: The issue seems to be more complicated. It looks like the stock skin shaders in maya/xsi/max do render SSS in reflections. I will need to investigate this further.

Stereo
A nice new feature in mental ray 3.8: the ability to render the left and right eye images in the same render session. As usual, mental images excelled at the little tiny things that make the entire feature useless: the new "stereo" flag in the camera declaration has no way to specify the convergence (or zero-parallax plane) distance. For some strange reason the cameras always converge at the focal length point.

It looks like they have finally managed to confuse themselves with their own terminology. The documentation refers to cameras converged at a "focal distance", but there is no such thing in either real-world or digital photography. There are two things instead: "focal length", a lens characteristic that directly affects its angle of view, and "focus distance", the distance at which objects are projected onto film in perfect focus.

I wouldn't even mind having both the focus distance and the convergence plane controlled by the same parameter; that would at least make some sense. But the focal length has nothing to do with eye convergence, which is why the entire feature is just useless.

This is amazing
One more wonderful discovery I made these days, this time about the mental ray database.

If you think leaf instance names have unique tags in the database, you are wrong. There can be more than one tag for the same leaf instance name when more than one material is attached to the object.

This is very "useful", especially for fancy things you write yourself, like a per-object user data cache. I implemented mine using std::map with tags as keys. After a few reports from the lighters ("it's not working!") I was able to track down the offenders: two different leaf instance tags with two different instance parents that resolve to the same name after mi_api_tag_lookup().
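
The failure mode is easy to reproduce outside of mental ray (a toy Python sketch; the tags and the object name are made up): a cache keyed by tag quietly holds duplicate entries for what is, by name, a single object, while keying by the resolved name collapses them:

# two database tags resolving to the same leaf instance name
# (made-up values, standing in for what mi_api_tag_lookup() returns)
tag_to_name = {1001: "pCube1", 1002: "pCube1"}  # two materials, one object

# cache keyed by tag: two entries for one visible object
cache_by_tag = {tag: "user data for " + name for tag, name in tag_to_name.items()}
print(len(cache_by_tag))   # 2

# cache keyed by resolved name: the duplicates collapse
cache_by_name = {}
for tag, name in tag_to_name.items():
    cache_by_name.setdefault(name, "user data for " + name)
print(len(cache_by_name))  # 1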

The worst part is that my cache is populated the first time the surface shader is called, and if that happens during the displacement evaluation stage, not all of the tags are present yet, so the cache ends up incomplete.

Hair/Fur: dead end...
I've already mentioned that hair/fur rendering in mental ray is a huge problem in many respects. To name a few:

- The only rendering method that can produce good results is the rasterizer, which is generally slower.
- The only kind of shadows that produce good results with hair/fur is the detail shadow map. Let's talk more about the latter.
