
Content

Guest video by Nathan! :D

Color is a confusing thing. As I said in the color tools video, basically everything I know about light and rendering (color/eyesight/bit-depth/transfer functions/path tracers/etc) comes from Nathan sitting down with a sketchbook and explaining it to me.  

Which is why I'm really excited to post this video- the first in a series as he goes through how computers represent light/color, and how that affects you as an artist.  

And having this information floating around in your head means a lot of things in the world just make more intuitive sense- jpgs and compression and the way we perceive light and send images through the air and all that fun stuff.

I hope you enjoy it as much as I did! 

Files

Everything Nathan Knows About Color Part 1 - Transfer Functions


Comments

Anonymous

Man this is exciting stuff, have only watched a few minutes and I'm loving it.

Anonymous

*stops everything* Seeya later everyone, Nathan's colour tutorial is out

Anonymous

I’m so glad that this is a series; it’s absolutely fantastic!

Anonymous

Super exciting to see you guys using your reach and influence to try to raise the level of knowledge about this stuff! I wonder if you might want to be careful about using the phrase "linear color" when you're still just talking about gamma and transfer functions, and intending to get to gamuts later? Especially with the widespread misunderstandings about what "linear" actually means, e.g. "oh, we don't need to use ACES, we render in linear," or "I'm trying to convert my footage in Resolve, but I can't find linear in the list of color spaces," etc.

Anonymous

This was excellent. The rationale for colour profiles has always eluded me, and your explanation was just perfect.

Raf Stahelin

This is what I’ve been waiting for in the Blender metaverse

Anonymous

That's a fair point! But please keep in mind that this is the first video in what will be a larger series. My intention here is to focus the first two (or maybe three--I'm considering covering dynamic range earlier as well) videos on just issues related to luminance, since that's generally more important to get right (with the human visual system being more sensitive to luminance than chrominance, and all that). Covering gamuts etc. properly also requires laying a lot more groundwork about how the human eye, camera sensors, etc. work before you get to the "useful" stuff, and I wanted to start off with something that is immediately and practically useful, to hook people a little bit better. But absolutely, I will be making a very clear distinction between whether your color representation is linear or not, and what gamut your color representation is using.

Anonymous

Really looking forward to the Scene Referred values conversation. I’ve been trying to get a reliable way to emulate a light meter in blender that is transferable to the real world. Thanks for this and all of the tools!

Anonymous

Thanks so much for this Nathan! So excited for the full series!

Tolga Katas

good stuff, thanks

Anonymous

I just want to mention that subscribing to your Patreon is definitely the best money-to-value investment I have. Superb! Thank you so much!

Anonymous

Wonderful explanation. Thank you.

Anonymous

This is straight up no joke. I've learned literally SO much in such a short amount of time

Anonymous

Great content! Unrelated, but is Nathan related to Ian? You have a lot of similar mannerisms and speech characteristics.

Anonymous

Thank you for this. Very educational and useful stuff well presented.

Anonymous

Nope, not related! But I *am* trying to steal his identity (don't tell him!).

Anonymous

I'm glad it was helpful! One of the things I super glossed over in this video is *why* there's more than just one transfer function out there. With the explanation I gave, you'd think there would just be one ideal curve that perfectly matches human perception of luminance. But there are actually other factors involved as well. As just one example, there are different considerations for a transfer function intended for footage that will later be graded vs a transfer function intended for final delivery of a completed film. And that's when you start getting into weird acronyms like OETF and EOTF. Throw in a good bit of historical baggage on top of all that, plus companies constantly tweaking things, and that's how you get a truck load of transfer functions, ha ha.
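For anyone who wants to see the OETF/EOTF split concretely, here is a rough Python sketch. The constants are the standard published Rec.709 encoding values and a plain 2.4-power display decode (a simplified BT.1886), but treat it as an illustration rather than a reference implementation: the camera-side encode and the display-side decode are deliberately not perfect inverses, which is where the familiar end-to-end "system gamma" of roughly 1.2 comes from.

# Rough sketch of the OETF vs EOTF distinction: a camera-side encoding
# (Rec.709 OETF) composed with a display-side decoding (a plain 2.4 power,
# roughly BT.1886) is intentionally NOT a round trip.

def rec709_oetf(L):
    """Scene-linear light (0-1) -> encoded video signal (0-1)."""
    return 4.5 * L if L < 0.018 else 1.099 * L ** 0.45 - 0.099

def display_eotf(V, gamma=2.4):
    """Encoded video signal (0-1) -> light emitted by the display (0-1)."""
    return V ** gamma

for L in (0.01, 0.1, 0.18, 0.5, 1.0):
    out = display_eotf(rec709_oetf(L))
    print(f"scene {L:5.2f} -> screen {out:6.4f}")

# The output does not equal the input: mid and dark scene values come out a
# bit darker, a mild contrast boost that compensates for dim viewing rooms.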

Anonymous

Have either of you looked at the display prep demo / Steve Yedlin's work in color? Following his argument, any high quality source can be transformed to match another camera / source look; this is done through (I think) 3D tetrahedral color transformations and then, in the case of film emulation, grain / halation / gate weave. One exciting use is to make digital look effectively identical to film, but also things like making Blackmagic or Sony a7 footage look like Alexa footage, etc. https://www.yedlin.net/DisplayPrepDemo/DispPrepDemoFollowup.html Is any of this of interest for working in Blender? It would be exciting to take various sources (3D renders, camera footage such as Sony S-Log) and, using 3D display prep, emulate a particular film stock, etc. Just ideas. Also, folks have started reconstructing Yedlin's tools in various open source tools: BlinkScript for Nuke, but also MATLAB etc. https://www.juanjosalazar.com/color-science https://www.reddit.com/r/colorists/comments/lku2t2/tetrahedral_interpolation_dctls/

Anonymous

> Following his argument any high quality source can be transformed to match another camera / source look

This is not true, but the reason it's not true isn't easy to grasp without first understanding the relationship between color vision (both human and camera) and light spectra. This is one of the things I'll be covering later in the color series, and is actually (IMO) one of the most fascinating and fun things about color science. Having said that, you certainly can transform colors to *artistically feel* like they were shot with other devices. So from that perspective he's correct. But you can't do it in an objectively, quantitatively color-accurate way. It's one of the reasons why even with a proper color pipeline, footage from different cameras never *quite* match without manual case-by-case human intervention and/or involving things like color checkers.
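To make the spectral point a bit more concrete, here is a toy Python sketch. The sensor sensitivity curves are made-up Gaussians (not real camera data), but the mechanism is the real one: two light spectra can produce identical responses in one camera while producing different responses in another, so no fixed transform of the first camera's values, however fancy, can always reproduce what the second camera would have recorded.

# Toy metamerism demo with made-up sensor curves: camera A cannot tell
# these two spectra apart, but camera B can, so no transform of camera A's
# values can always match camera B.
import numpy as np

wl = np.linspace(400, 700, 61)  # wavelengths in nm

def gaussian(center, width):
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Two hypothetical cameras, each a 3 x 61 matrix of spectral sensitivities.
cam_a = np.stack([gaussian(600, 40), gaussian(540, 40), gaussian(460, 40)])
cam_b = np.stack([gaussian(610, 35), gaussian(550, 45), gaussian(465, 38)])

# A smooth test spectrum, kept well above zero so the variant below stays physical.
spectrum1 = 1.0 + 0.4 * np.sin(wl / 50.0)

# Take the part of camera B's "red" sensitivity that camera A cannot see
# (the residual after projecting onto camera A's row space) and add it in.
coeffs, *_ = np.linalg.lstsq(cam_a.T, cam_b[0], rcond=None)
invisible_to_a = cam_b[0] - cam_a.T @ coeffs
spectrum2 = spectrum1 + 2.0 * invisible_to_a

print("camera A:", cam_a @ spectrum1, cam_a @ spectrum2)  # identical (to rounding)
print("camera B:", cam_b @ spectrum1, cam_b @ spectrum2)  # red channel clearly differs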

Manuel Grewer

Great stuff! I'm a Physicist and I love spectra and power distributions. Makes everything so much more complex :D

Anonymous

Thank you Nathan! Excited for the next videos. Supporting Ian here on patreon is proving to be the best investment

Anonymous

You guys are not only incredible artists but amazing educators too! Thank you!

Anonymous

Hey Nathan! About the iPhone app Filmic Pro and color: there's an option called "Linear" in the color management! Should I use that instead of "filmic/filmic log" when doing VFX? (Would that make the process simpler?) P.S. Good stuff! Your tone, clarity of speech, the information... you nailed that video! Good stuff!

Anonymous

This was amazing! I know you're just getting started, but I need to know this stuff from the ground up so this series will be perfect for me. Thank you Nathan!

Anonymous

I'd also like to know this, though it sounds like phones could be performing other changes to the images. Thanks Nickolai for digging into this subject! I know phones are inferior to mid-grade DSLR cameras, but I like their convenience. Also, because so many people use phones, I think their video gives the feeling of reality, which is a great place to start when doing VFX.

Anonymous

I don't understand why the data after the transfer function is more efficient than when it's linear. Shouldn't it be the same information (same quantity), just transferred in a different (transformed) way?

Anonymous

Colors in the real world have many millions of possible brightness levels. Our image file formats store brightness (luminance) in 8 (or sometimes 10 or 12) bits, and 8 bits gives you only 256 possible values.

Let's say your camera sensor can distinguish 10,000 levels of brightness, but your image format has room for only 256 of them. If you simply compress the sensor's range evenly (a linear encoding, roughly 39 sensor levels squeezed into every image level, since 10,000 / 256 ≈ 39), the problem is that you spend a lot of your 256 brightness levels on very bright values that look almost identical to the human eye, while the darkest regions of your image, where the eye can notice the smallest differences, get compressed just as hard.

By using a transfer function instead, you split up your 256 possible levels unevenly: the first 90 or so go to the darkest 10% of the image (where the eye needs that kind of detail), and you sacrifice detail in the higher brightness levels (where the eye doesn't care). You've made better use of the available 256 levels by spending more of them where they will make a visible difference.

Either way you've lost 97.44% of the sensor's brightness information (256 is only 2.56% of 10,000, after all), but with the transfer function you've lost much less of it in the darker areas where detail is more important. The price is that in the brighter areas your actual loss is even bigger than 97.44%, but that hardly matters to the human eye.
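If it helps to see the trade-off in numbers, here is a minimal Python sketch of the same idea. It uses a plain 1/2.2 power curve as a stand-in for a real transfer function like sRGB, and compares the relative error you get after squeezing a few scene-linear brightness values into 8 bits both ways.

# Toy illustration of why encoding with a transfer function before
# quantizing to 8 bits preserves more useful detail in the darks.
# (A simple 1/2.2 power curve stands in for a real OETF such as sRGB.)
import numpy as np

def quantize_8bit(x):
    """Round values in [0, 1] to the nearest of 256 levels."""
    return np.round(np.clip(x, 0.0, 1.0) * 255.0) / 255.0

gamma = 2.2
scene = np.array([0.002, 0.005, 0.01, 0.18, 0.5, 0.9])  # scene-linear brightnesses

# Path A: quantize the linear values directly.
linear_decoded = quantize_8bit(scene)

# Path B: encode with the power curve, quantize, then decode back to linear.
gamma_decoded = quantize_8bit(scene ** (1.0 / gamma)) ** gamma

for s, a, b in zip(scene, linear_decoded, gamma_decoded):
    err_lin = abs(a - s) / s * 100.0
    err_gam = abs(b - s) / s * 100.0
    print(f"scene {s:7.4f}  linear 8-bit error {err_lin:6.2f}%   gamma 8-bit error {err_gam:6.2f}%")

# The dark values (0.002, 0.005, 0.01) come back with far smaller relative
# error through the gamma-encoded path, at the cost of slightly coarser
# steps in the bright end, which the eye barely notices.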

Anonymous

Great video, thanks a lot Ian and Nathan! Very well explained! I've been struggling lately with matching photography and CG for more accurate compositing in Blender, and although I tried different color profiles and googled everything I could on how to do it properly, I was never able to figure out the right way to go through this workflow. I read many times to "...work on linear", but that really never meant anything to me until now, with your clear explanation. This video explains a lot of the issues when dealing with images from the camera and color space in Blender.

Ian Letarte

Brilliant! Nathan is such a good teacher

Anonymous

Excellent video. I work in colour at a printing company, so I've only ever looked at it in depth from the RGB-to-CMYK side; this felt like the half of my brain that was missing. Looking forward to the subsequent parts. Do you think the bias toward shadows in the human eye (helpful for spotting prey or predators) is an evolved mechanism, like how we see more shades of green than other colours?

Anonymous

Almost all human senses follow some sort of logarithmic curve, likely for evolutionary reasons as you suggested. A lion is scary, but 256 lions are not 2^8 times scarier than 1. Sound, vision, pressure, etc. It's called Weber's law.

Anonymous

It's not my area of expertise, so I can only make a guess. As Zeke mentioned, our vision is approximately logarithmic. My suspicion as to why that's the case is that perhaps the ratios between values are more useful than absolute differences. If your eyes work in terms of brightness *ratios*, then (for example) turning up the brightness of a light still leaves things looking basically the same: the ratios between illuminated objects haven't changed, only their absolute differences. And when you develop a number system where equal increases in your numbers actually represent an equal increase in ratio, you've just created logarithms.
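A quick numeric check of that intuition (just a toy example): if the eye cares about ratios, then turning a light up by some factor changes all the absolute gaps between surfaces but leaves the gaps between their logarithms untouched.

# Scaling every brightness by the same factor (e.g. turning up a light)
# changes absolute differences but leaves log-domain differences unchanged.
import numpy as np

surfaces = np.array([0.05, 0.2, 0.8])   # linear light reflected by three objects
brighter = surfaces * 4.0               # same scene, light turned up 4x

print(np.diff(surfaces), np.diff(brighter))                   # absolute gaps change
print(np.diff(np.log(surfaces)), np.diff(np.log(brighter)))   # log gaps stay [1.386, 1.386]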

Anonymous

I'm in Australia, so I'm far more concerned with small venomous things in the shadows than I am with lions, but I get your point. Great responses, and some good reading on Weber's Law. Cheers!

Anonymous

interesting topic and well explained, learned a lot from this!

Anonymous

Really nice explainer Nathan, I'm certainly looking forward to subsequent installments. I was just wondering if any of your color stuff had been run through the fiery forge of Troy?

Anonymous

Always nice to have things demystified like this... takes all the buzzwords out of it so it's clear and simple.

Anonymous

I'VE BEEN SEARCHING EVERYWHERE FOR SOMEONE TO EXPLAIN COLOUR MORE. THANK YOU!!!!!

Anonymous

Thanks Nathan and Ian!

Anonymous

Incredibly helpful! Thanks so much, you guys!

Anonymous

Nathan! Thank you for the Shakify plugin. Do you think it is possible to have a function that converts everything into keyframes? (Good for render farms that disable drivers, like SheepIt.)