
Below is an email exchange I had with an articulate and passionate filmmaker, Mario Carvalhal, about my Display Prep Demo (which you can see here: www.yedlin.net/DisplayPrepDemo) and the “look” of camera formats.

The thread is long, but I think it’s worth sticking with it, even if you have to do some skimming. As you follow Mario and me challenging each other to refine our thought and language, maybe it’ll challenge you to refine yours.

29 Dec 2015

Hi Steve.

Loved the demo, great job. Where can I find the cypher for it? 

Thanks

Mario

30 Dec 2015

Hi, Mario

Thanks so much for your interest.

I’m not releasing a cypher for now. 

I’m trying to keep the focus of the demo on the overall “look,” and not on the historical details of the making of the demo. 

On the one hand, I believe that there are forensic clues in the demo that can be used to identify which is which to a trained eye, but on the other hand my whole point is that that doesn’t matter in the slightest — it’s the overall look and not the inconsequential trace forensic evidence that matters. 

If you can’t immediately tell with certainty which is which, this point is proven in my mind.

I also believe that in the current morass of our religious squabbles over format, filmmakers and the public alike are being lured into a sort of confirmation bias: where the “look” of a technical attribute does not come off the screen and into the eye, but is projected from the eye to the screen. The mind is primed by advertising before viewing, and then by the time of actual viewing, the belief is confirmed no matter what the actual evidence is.

For example, when we’re told something is “4k” or “70mm” or something else that’s supposed to be highly resolute, then we’re primed for this and the mind enforces a perception of this resolution whether or not it is present on the screen (even if the projector can’t show more resolution than the competing format offers)! 

I want to avoid this confirmation bias or placebo effect or whatever you want to call it, so I’m not publishing a cypher for now. Once there’s certainty about which is which, people will suddenly project all kinds of attributes into the two images that they didn’t see before.

I know that’s not the answer you were looking for, but hope it helps.

-Steve

30 Dec 2015

Hi Steve

Actually I really appreciate your answer and your reasoning. As a big proponent of film I'm also very wary of arguments that sound superstitious and find that people in the community frequently obsess more about tools than meaning.

That being said I sometimes wonder if our brain is able to discern, even on an unconscious level, the difference between the formats. My point being that the difference is not one of resolution, color or technical artifacts, but one of real vs simulated. Studies have shown that listeners who were told the exact same piece of music was written by a human displayed higher levels of enjoyment than listeners who were told it was composed by an algorithm. Mind you, this isn't self-reported; the researchers actually analyzed the response in the pleasure centers of the listeners' brains. Is it possible that once we realize that an image is digital, and therefore everything in it could be fake, our enjoyment decreases?

The other question is one of process. You look at the films of 2015 and 3 of the 5 most impressively shot films in my opinion (in terms of lighting, composition, camera choices, etc) were shot on film - and not because they were shot on film. Is there a self-selection bias because some of the best artists in the business get to demand film? Or is it just that shooting film inherently makes you think of cinema differently? I think that the mantra of shooting coverage instead of shooting film has unfortunately infected not just mainstream cinema but indie cinema too. And perhaps there's no causation between the rise of digital and the fall of even basic standards in filmic language, but there certainly appears to be a correlation. 

Best

Mario

31 Dec 2015

Hi, Mario

Thanks so much for such a thoughtful reply. This is getting into some psychology that I am no expert in and can only speculate on, but I’ll try to answer with an educated guess. See below:

That being said I sometimes wonder if our brain is able to discern, even on an unconscious level, the difference between the formats.

This statement is loaded with quite a few implied premises. First off, it presumes that all of film is one monolithic “format” and all of digital is another single monolithic thing. 

But this is contrary to the truth, which is that each of these (film and digital) is a mere substrate for an image and is also one small step in a long chain of processes that make the thing you’re calling “format” (I mean, you’re not even differentiating between scanned film and traditionally-printed film, which would be two entirely different things if the premise itself — that capturing an image digitally is fundamentally different — is true).

I’m not sure about “our brain.” Humans certainly do not have any innate evolutionary skills in distinguishing movie formats from one another, as any evolutionary change takes  many thousands of years and film formats aren’t even 200 years old. Also, the distinguishing of film formats is not so vital to survival or success in life that it would likely be an evolutionary trait.

I’d say, if the human eye/brain can distinguish any difference, then that means there is (ipso facto!) an actual, physical difference that can be observed (our senses don’t use magic to do the discerning, but must rely on physical clues that actually exist in the real world). Whatever that difference is — it’s certainly within the scope of human endeavor to identify, model and emulate it. The fact that we haven’t identified it or modeled it yet  does not prove it can’t be done. 

People are so religious about this that they’re resistant to even trying. But not trying does not prove it’s not possible. 

If you believe there are attributes that haven’t been identified and/or properly modeled in my personal film emulation, then that means you believe those attributes exist. If they exist, they can be identified. If they don’t exist, then, well, they don’t exist and the premise is false. 

It just doesn’t seem like a real option that these attributes exist but can never be identified and are effectively made out of intangible magic that can never be understood or studied. 

To insist that film is pure magic and to deny the possibility of usefully modeling its properties would be like saying to Kepler in 1595 as he tried to study the motion of the planets:  “Don’t waste your time, no one can ever understand the ways of God so don’t bother. You’ll never be able to make an accurately predictive mathematical model of the crazy motions of the planets — they just do whatever they do.”

My point being that the difference is not one of resolution, color or technical artifacts, but one of real vs simulated

This is quite a value judgment. Are you saying that film is “real” and digital isn’t? I don’t even quite know what it means to relegate digital to a lesser status of existence than film.

If you take the terms literally, then of course digital is real and the distinction between “real” and “simulated” doesn’t exist. If you take the terms more figuratively, then I’m not sure what you’re getting at other than expressing an a priori belief that is at odds with the empirical evidence rather than making any kind of statement about the empirical evidence itself. 

It’s not as though film “exists” but digital is “virtual.” They both exist.

Holding fast to vaguely-defined sweeping value judgments rather than making rigorous statements that are meaningful and well enough defined to be studied and proven true or false seems like a way to continually mystify the questions and to hold them at arm’s length so that they can’t be addressed. 

The words “real” and “simulated” can’t be a literal or rigorous distinction since both digital and film exist in the world. But the words taken in their more figurative meanings are so vague as to be mere amorphous judgments with no unambiguous meaning. When you use loaded, vague terms with no unambiguous meaning, nothing can be studied empirically, assertions are shifty, and you enforce remaining in a realm of amorphous/abstract emotional statements that are not related to the physical phenomena they supposedly describe. 

Studies have shown that listeners who were told the exact same piece of music was written by a human displayed higher levels of enjoyment than listeners who were told it was composed by an algorithm. Mind you, this isn't self-reported; the researchers actually analyzed the response in the pleasure centers of the listeners' brains. Is it possible that once we realize that an image is digital, and therefore everything in it could be fake, our enjoyment decreases?

If I understand this study correctly from your description (I’ve not read the actual study), it precisely supports what I’ve said above: that physical attributes may be indistinguishable but people psychologically project amorphous value judgments that have nothing to do with empiricism and are unrelated to the phenomena supposedly being described. Like a placebo effect: the reaction is generated by the psyche’s belief that a stimulus causes the effect, not by the actual stimulus.

Presumably, you could do an analogous study where you could tell one group of people that the Alexa footage in my demo is film and another group that the film footage is film, and each group would prefer the shots that they were told were film.

Hence: the value judgment is a self-perpetuating belief system and has nothing to do with the actual content. To me, the study seems to indicate that human perception is not capable of distinguishing the two stimuli, despite an individual’s deep belief in his/her ability to.

Remember, I am not speaking about all film and all digital in general, but images that have been successfully prepped to have the same “look."

The other question is one of process. You look at the films of 2015 and 3 of the 5 most impressively shot films in my opinion (in terms of lighting, composition, camera choices, etc) were shot on film - and not because they were shot on film. Is there a self-selection bias because some of the best artists in the business get to demand film?

Thanks for introducing me to the concept of self-selection bias — I wasn’t previously familiar with it. Absolutely, yes, I think that’s happening, though other things are too:

Let’s take the very concrete example of someone like P.T. Anderson, who is a genius filmmaker. P.T. Anderson has not studied (and doesn’t claim to have studied) color science or done mathematically rigorous work to become master of these perceptual attributes that have always been assumed to be inherent to the format — in fact, he’s probably had no reason to question the collective societal belief that formats have such inherent looks.

That’s a lot of minute science that he just hasn’t done. He’s a master filmmaker and has put all his study and life experience into being a storyteller, not a maverick in the rigors of color science. So, he’s a master filmmaker, but not an expert in the fundamental physical processes of imaging. So, he uses the tools he’s familiar with and that have worked before, and of course the results are great.

That doesn’t prove that film is better than digital. It proves that humans have limited lifetimes and must select a sphere of expertise on which to concentrate. Any human today who has as many years as P.T. Anderson to truly master filmmaking has grown up with celluloid and has concentrated their knowledge in storytelling and not color science or physics, so they know better than anyone how to make a great movie but not necessarily the mechanics behind the physical processes of imaging. So it has no meaning one way or another for color science that he uses the same tools he’s familiar with, has mastered, and has had previous success with.

A genius filmmaker like Anderson is permitted (and rightly so!) to choose his shooting format. And film is a perfectly valid choice, and he’s comfortable with it. 

Also, very few people do the kind of detailed color science that I do or have adopted this attitude towards display prep; therefore digital has in fact usually looked different from film (even if it is not inherently necessary for it to do so). So statements that differentiate the film “look” from the digital “look” are historically true even if not inherently true.

There is a self-fulfilling prophecy where many decision-making filmmakers with impeccable taste can quite rightly say “all these movies on film look great and all those movies on digital look bad.” They’re right; absolutely. But they are not necessarily right about what is cause and what is effect. What they’re identifying is true but not inherent; it’s an accident of history. Because many filmmakers believe it’s inherent, they do not use digital; many of the virtuoso masters stay away from digital because of a superstitious belief that the look is inherent, and that creates a feedback loop: the superstition propagates the superstition because the good-looking movies continue to be shot on film.

It is the logical fallacy of post hoc ergo propter hoc. 

This has been a vicious cycle, but like any deeply held belief that just doesn’t fit the facts, it can’t last forever, and we’re already seeing it waver as some of our most revered masters such as Roger Deakins and Chivo not only acquiesce to but actively prefer digital acquisition.

Or is it just that shooting film inherently makes you think of cinema differently?

I don’t think any of it is inherent — it’s conventional.

I think that the mantra of shooting coverage instead of shooting film has unfortunately infected not just mainstream cinema but indie cinema too. And perhaps there's no causation between the rise of digital and the fall of even basic standards in filmic language, but there certainly appears to be a correlation. 

Maybe true, but you can’t hold back the tide. I mean, no one uses old-style film editing instead of digital editing to force the kind of rigorous, thoughtful (rather than scattergun) methods that were necessary in eras gone by. In the future, if you want to shoot thoughtful, meaningful, auteur-like cinematic coverage instead of scattergun, haphazard, crazy coverage, then the rigor and discipline will have to be self-imposed and not format-imposed, as is already the case with digital editing.

1 Jan 2016

Thank you for your response Steve. This is quite simply the most well-argued and thoughtful case I've ever seen put forth on this matter. You have clearly put a lot of thought into this, and while I'm a filmmaker, this is not my area of expertise, so I'm mostly just trying to examine my own bias as an audience member and to try to understand if there are empirical merits to my personal experience with the mediums. To that end I'd just like to clarify one point.

I failed to explain what I meant with real vs simulated and I see why that can be interpreted as a value judgment, which was not my intention. "Real" is indeed the wrong word because, as you said, both formats are real in the literal sense. In fact, the more I think about it the more I realize how much of a misnomer it is, so I'll try my best to isolate what I'm talking about:

It is less about the image per se, as in any given frame either shot on film or digital could be virtually indistinguishable, but in the means of transforming a reality - human movement - into an artifact - a recording of human movement. Film as a means of acquisition captures a lie, a series of still images, and when reproduced creates something that's akin to magic but which is allowed only by a mere flaw in our brains, the phi phenomenon. Every time we watch something shot on film, we are seeing that process happening right in front of us, a lie becoming true.

Digital captures human movement in one fluid, static process. What it's acquiring and what it's exhibiting are one and the same. This is probably closer to what the human eye sees (or the brain interprets), but could it be that because of that it creates a "too-close-for-comfort" sense when we are using the medium to recreate dreams and fantasy?

What I'm propounding is that we may get more pleasure from watching what we perceive as magic and skill than what we perceive as science. And that 24 frames per second could be perceptible only on a subconscious level. (I think this is also related to why even mediocre practical effects tend to be more interesting to me than just average-to-good digital effects, but that's a different conversation). 

I have to be honest and say that I'm not even entirely convinced of this argument and am only proposing it as a hypothesis to explain my very own personal reaction when exposed to both formats. I love the work Deakins has done since he started working digitally but he has yet to do something that creates an emotional response in me similar to his work in Jesse James or No Country. I'm sure he and others will prove me wrong in the very near future and I actually look forward to it. I assure you that my attachment is to the result, not to the medium.

And granted, there's nothing about the process just described that's inherent to the film medium, and it could potentially be achieved digitally. Looking at the slate of movies that came out in 2015, it appears to me that we're not quite there yet, close as we may be, but your incredible work takes a giant step in convincing me otherwise.

Happy new year and thank you for advancing this conversation with such acumen.

1 Jan 2016

Hi, Mario!

Thanks so much! It’s very exciting to have an interesting, well-reasoned and intelligent discussion between two people who don’t share precisely the same opinion.

The current political climate plus the current internet culture breed a propensity for people to curate their acquaintances and information sources so that they’re insulated in a micro-culture of like-mindedness where differing views, when examined at all, are presented through a distorted lens of derision and dismissiveness.

So, it’s a welcome relief to have an intelligent discussion without having to see eye to eye or sweep differences under the rug. 

So back to the topic at hand:

I’m pretty confident on the concept that film is not made out of some sort of magic that can never be studied or understood but must forever be described in vague religious terms. 

Humans have sent men to the moon, figured out how to interconnect every person with a wireless handheld computer, uncovered the secrets of DNA, and uncovered laws of quantum mechanics and general relativity. I don’t think it’s beyond our capability to identify which attributes of motion picture film are perceptually recognizable.

If the attributes exist, they can be studied and described precisely without ambiguous, mystical, quasi-religious language. 

The fact that one specific person (even if it’s a genius film director or DP) who has never studied the physics of these attributes speaks in vague wine-tasting terms about it does not prove that it can only ever be discussed in such vague terms, but is merely reassurance that that person has indeed never studied the underlying physical processes.

The vague language does not present empirical facts about the physical processes; it’s just a way to restate that there exists a deeply held belief that is ignorant of or indifferent to those facts. 

Anything at all that is “true” and not vague intuition can be stated unambiguously and then verified or contradicted by experiment.

So, if you don’t mind, let me go through some of your admittedly well-reasoned thoughts and show why I believe they’ve been shaped by societally prevalent but unfounded tropes and not by empirical facts. In other words, that they merely restate a premise rather than proving or exemplifying it:

It is less about the image per se, as in any given frame either shot on film or digital could be virtually indistinguishable, but in the means of transforming a reality - human movement - into an artifact - a recording of human movement.

This already starts off by pushing away from empiricism and from well-defined terms that have a definite meaning, and toward vagueness, but let’s take it for what it’s worth and dive in.

Film as a means of acquisition captures a lie, a series of still images, and when reproduced creates something that's akin to magic

Okay, so when you call it magic, that’s definitely (not maybe) a pure value judgment, since in the literal sense it’s not made out of magic. I could just as well say that I feel that digital is more magical (I don’t think that — I’m just pointing out that such a claim pointedly avoids describing the actual physical phenomena; it’s a pure statement of judgment and not of physical properties). Stating that something is magic certainly gets us farther from, not closer to, the truth of the underlying physical processes.

but which is allowed only by a mere flaw in our brains, the phi phenomenon.

This has not distinguished digital from film in the least. The phi phenomenon is — not just analogously but literally — the same in film and digital. You have a sequence of 24 still frames every second. Each one is a still image that was exposed for 1/48 of a second. Digital doesn’t present any sort of magically continuous, non-quantized morphing that doesn’t have individual frames or is any different from film’s individual frames.

Every time we watch something shot on film, we are seeing that process happening right in front of us, a lie becoming true.

Again, a mere value judgment that sounds a lot like literary criticism language, not like a studied physical differentiation. If a sequence of stills appearing to move deserves the poetic phrase “a lie becoming true,” then digital acquisition is just as deserving of that poetry, since digital uses precisely (not approximately but precisely) the same mechanism to create the illusion of motion in the human brain.

Digital captures human movement in one fluid, static process.

That is not true: it does not describe how digital imaging works at all, let alone a way in which it differs from film.

The temporal and geometric attributes of most currently used digital processes are not just analogous but literally identical to those of film. For normal contemporary cinema, both use 24 fps playback, 1/48th-second exposures, and both even use a similar sweep-on/sweep-off shutter. So every attribute from exposure time to motion blur is identical. (Some of the inferior digital cameras have a problematically slow sweep-on/sweep-off time, but I’m not talking about inferior cameras.)
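To put a number on that, here is a small illustrative sketch of the standard shutter arithmetic (not anything from the demo itself), showing that 24 fps with the customary 180-degree shutter works out to a 1/48-second exposure for film and digital alike.

```python
# Illustrative only: the standard relationship between frame rate, shutter angle,
# and per-frame exposure time. At 24 fps with a 180-degree shutter, both film and
# digital expose each frame for 1/48 of a second.

def exposure_time(fps: float, shutter_angle_deg: float = 180.0) -> float:
    """Exposure time per frame in seconds: (shutter angle / 360) / fps."""
    return (shutter_angle_deg / 360.0) / fps

print(exposure_time(24))        # 0.02083... seconds, i.e. 1/48 s
print(1.0 / exposure_time(24))  # 48.0
```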

What it's acquiring and what it's exhibiting are one and the same.

Not any more so with digital than film —not in any sense. In both cases, acquisition is a record of light that was given off by tangible objects and exhibition is only an immaterial image on a screen, devoid of the mass of the original objects. Digital does not offer the physical tangible objects themselves to the audience any more than film does, so its exhibition is equally removed from its acquisition. Moreover, the image is captured using the same temporal and geometric logic, and if you prep the “look” as per my demo, then that too is the same. 

But both are images, not literal solid reality. And both have to be processed from pure photometric data into a viewable image (some of the processing of digital images is so lightning fast that it can be done inside the camera, but that seems trivial).

The only difference is the substrate for recording the photometric data, not a religious/existential difference of “one and the same” versus duality. 

This is probably closer to what the human eye sees (or the brain interprets), but could it be that because of that it creates a "too-close-for-comfort" sense when we are using the medium to recreate dreams and fantasy?

How so? Again, this is an assertion of a belief with no reference to evidence provided by the actual physical processes. If there is some attribute that digital has that film doesn’t that is “too-close-for-comfort,” then it must be possible to identify it rather than merely alluding to the emotional reaction it causes.

What I'm propounding is that we may get more pleasure from watching what we perceive as magic and skill than what we perceive as science.

What I myself am propounding is that we perceive it as magic precisely because of a previously held belief that it’s magic and not because of any actual (as opposed to imagined) attribute. It’s just like the computer-composed music study you quoted — it’s a placebo effect. People believe they can “see” and “feel” film because it has some kind of “soul” that digital doesn’t, but the fact that they believe it does not prove that it’s a physical property of the objects rather than a psychological projection of an imagined truth.

A priori belief has a long history of leading people far afield of the empirical truth. People's absolute certainty in a truth that they believe but refuse to put to the empirical test has yielded things like The Inquisition, but hasn’t yielded too many advances in human knowledge of the physical world.

And that 24 frames per second could be perceptible only on a subconscious level. (I think this is also related to why even mediocre practical effects tend to be more interesting to me than just average-to-good digital effects, but that's a different conversation). 

Again, this is an assertion of a belief. You’d have to do a completely even-handed test rather than cherry picking memorable results to prove it. I bet that for every bad CG shot you actively remember, you’ve seen 200 others that were so good that you didn’t even realize they were CG, so confirmation bias and availability heuristic are causing you to cherry pick memorable examples that reinforce your belief. 

(Also, why the mention of “24fps”? That doesn’t distinguish digital from film acquisition, since they’re both quite literally 24fps.)

I have to be honest and say that I'm not even entirely convinced of this argument and am only proposing it as a hypothesis to explain my very own personal reaction when exposed to both formats.

I love that you’re open to discussion and so thoughtful on this. That’s why I’m taking the time to discuss, otherwise we’d be in an intractable impasse.

I love the work Deakins has done since he started working digitally but he has yet to do something that creates an emotional response in me similar to his work in Jesse James or No Country.

We know that cherry-picking data to support an agenda or prior belief, rather than doing a proper scientific study, is not very useful for discovering a truth; it only confirms the agenda with indifference to truth. And with something as unique and creative as a movie, cherry-picking examples is even less informative.

Why cherry pick those two films? If there were reason to believe that whatever big feeling you got from those two films was due to the capture format, then it shouldn’t matter that Deakins shot it and also it would follow that all films ever shot on film would have this big feeling for you (not just those two Deakins films and not just Deakins films) and that all films shot on digital wouldn’t. 

Why not mention Chaplin’s Modern Times, Kubrick’s The Shining, or Adam Sandler’s The Waterboy? Those three all used photochemical acquisition instead of digital, so if we’re chalking up the look of No Country to the acquisition format, shouldn’t those movies have that same recognizable look as No Country? In fact, I’d say Prisoners (which was digitally acquired) looks more similar to No Country than those other three films do.

Also, if we are to presume that digital acquisition inherently and at all times somehow smears the image with a mystical, magical and recognizable crappiness that can never be overcome, then what are we to think of the fact that both of the films you mentioned were scanned with a digital scanner for digital color correction, and that you didn’t choose to mention any of the dozens of films Deakins shot that were not scanned, or the three films I mentioned above, which were also not scanned for a DI?

If this je ne sais quoi were truly an attribute of the capture medium and not of Deakins’ own particular work on that project, then you would be able to identify it by speaking only about the capture medium and not mixing the capture medium together with all the other attributes of that amazing work by Mr Deakins.

I'm sure he and others will prove me wrong in the very near future and I actually look forward to it. I assure you that my attachment is to the result, not to the medium.

I absolutely agree that results have been historically much more favorable with film than digital, but I also maintain it’s for all the reasons we’ve been discussing — confirmation bias, comfort level of the most established filmmakers, superstition, and the insistence by film professionals on keeping the discussion in the realm of vague wine-tasting terms instead of empirical study that can be confirmed or contradicted by experiment. 

Very simply — it’s a self-fulfilling prophecy. As long as filmmakers believe that digital has one indelible look and film has another, then that means ipso facto that they’re not even trying to give digital a different look, since they don’t believe it’s possible. So they’re repeating the same dreary mistakes. As I said in the demo: “if you go in believing the look is inherent, it means that you're not doing anything to control it, and that means you're not taking control of the sprawling variables.”

If film only had one look, though, I don’t think there’d be any reason for you to have mentioned No Country or Jesse James, since those two films would look exactly like every other movie ever acquired photochemically. 

Also, you’ve left out the fact that those two films were made right in the fecund era when Deakins was working with the amazing Michael Hatzer as colorist. If you’re seeing something especially recognizable in those two films, I have no special reason to think it comes from the capture format rather than from that unique collaboration of Deakins and Hatzer — the Hatzer collaboration is an attribute that’s much more unique to those two specific films, so it seems like a better candidate for explaining why they might stand out.

And granted, there's nothing about the process just described that's inherent to the film medium, and it could potentially be achieved digitally. Looking at the slate of movies that came out in 2015, it appears to me that we're not quite there yet, close as we may be, but your incredible work takes a giant step in convincing me otherwise.

Thanks! Yes, please don’t forget that I’m not saying “digital always looks like film.” Absolutely not — you’re right that for the most part, film has had the “film look” and digital hasn’t. 

My point is that that is a historical accident, not an inherent technical necessity. If we break some of our prejudices, we can break some of our repetitive behavior of always treating digital the same way and always getting the same inferior results.

Happy new year and thank you for advancing this conversation with such acumen.

Happy new year to you too!

3 Jan 2016

I agree! It's rare these days that you get to have a vigorous, fact-based debate and leave with a different perspective on an issue. As people get more polarized and insulated in their sources, I've been thinking more often of the great "facts and opinions" gag in Inside Out. You do a good job keeping them apart.

As for our discussion, I didn't mean to imply mystical or vaguely religious language, I only meant to leave open the possibility that the distinction between the formats, concrete and solvable as it may be, has not been identified yet. If my language was unclear, it's partly because I think there's some distinction between the formats that's still unclear. True that we mapped the human genome, but we had a lot more men on that job than we do in making digital look like film! Alluding to the emotional reaction a process causes serves me just to point out that there is an issue - I'm not suggesting that it identifies the issue, much less that it suggests a possible solution. A je ne sais quoi is just that, and if the entire raison d'être of the medium is to evoke emotion, then a visceral emotional reaction can't be ignored. I hope this doesn't come across as pedantic or nitpicky, but I would be lying if I didn't admit that I haven't seen a digital film yet that didn't have moments where I was distracted by this elusive-to-define "digital look". This is not to say that I didn't love the photography in some digital films.

I concede the first point to you, mainly on the fact that while I've studied how film works on a technical level, my knowledge of digital is rudimentary at best, and I was trying to describe a perceived effect and not the technical process. 24 frames is 24 frames, fair enough. (Also, thanks for introducing me to Hatzer, whose contributions I was not familiar with. Seeing that he colored Bridge of Spies, possibly the most gorgeous film of last year, it's easy to agree that he probably contributed to my love for the photography in Jesse James and No Country.)

As for established filmmakers who persist in choosing to use film, I can understand why they would do it even if they would be open to the possibility that digital could create the same results. I suspect that their approach stems from an "if it's not broken, don't fix it" attitude. You can say that it's conservative but it's not necessarily superstitious.

I'm left convinced that there's no reason why digital would not be able to render moving images that are wholly indistinguishable from their analogue counterparts, although still not entirely persuaded that we have reached that point yet, or that we have been able to identify what that difference is. 

3 Jan 2016

Hi, Mario!

This is a lot of fun. Thanks so much for such a fantastic discussion.

It feels like we’ve either come around to the same viewpoint or have realized that we already had the same point of view but were using different language. As Bob Dylan says, “we always did feel the same, we just saw it from a different point of view.”

I absolutely agree with you that not every perceptual attribute has been modeled precisely. I further agree that my personal model that we’ve been looking at is different from those usually used in actual practice (or that in actual practice usually no model at all is used), so that in most cases we have actually seen, digital movies still have one recognizable “look” and film another. But you can see why I’m passionate that filmmakers have done that to themselves rather than it being an inherent quality of formats. So I object to all the rhetoric along the lines of “you just can’t get that look with digital” and “film just has that soul” and “ones and zeros can never do what celluloid does.”

I also enthusiastically agree with you that an attribute that has not yet been identified may well exist, in which case it would certainly be a je ne sais quoi until it is identified.

So, it seems we agree that attributes are comprehensible and modelable, that my personal model has gotten some important such attributes but maybe not all, and that many digital films have had a recognizable “video” look because filmmakers have used poor models or no models at all and not because of an inherent and unalterable “look” enforced by the camera.

So, it remains a matter of opinion whether there is a perceptual relevance or interest in identifying and modeling attributes that I haven’t yet modeled or in making more precise models of attributes that I have modeled. 

So, on the one hand, if you see an outlying je ne sais quoi that you think needs to be addressed, maybe you can be the one to spearhead identifying it.

On the other hand, I’m quite satisfied for now with my model — it’s a “look” I’m happy with as a filmmaker. That doesn’t mean I’m not always working on it, but most of my efforts are not focused any longer on building a more precise and less approximate model. Instead, they’re focused on things like: can I build a perceptually matching model using a different neutral space, or can I build an equivalent perceptual model using math that’s more uniform and less discontinuous within the color cube? Or: what’s a better way to integrate these processes with VFX and final color?

I’d also like to mention a few reasons why it is that I’m personally satisfied (for now!) with my current model and don’t feel there’s still a big outlying je ne sais quoi:

The first reason is totally subjective and admittedly indefensible but I’ll lay it out: I’ve looked at film for many years and really studied how it renders imagery both when scanned and when printed traditionally, and I’m simply happy with my current model. I’ve been working on film emulation for years and it was always good but it never felt like it was totally there yet, and now it does. It’s just a subjective feeling. Not to say it can’t be refined, but just that it’s good enough that refinements would be trivial.

Second, I have projected a DCP of the demo to some very seriously accomplished filmmakers (who have lots of experience with both film and digital) including DPs, DITs, Directors, Producers, VFX supervisors, (even ADs), etc. Pretty much across the board, everyone was like “wow, you’ve done it.” But more important than their stated response (which may have been mere politeness) is that almost no one is able to pick out film from digital with better accuracy than you’d have with random chance — many decline to even try.

And in fact, when people have been extremely sure of one particular shot, like “there’s just something in the skin tones in that one,” it’s been about 50/50 that they’re right — which is the same odds as chance, even when they get that big feeling that’s supposed to only be caused by the magic of film.

(Also, note that if they’re getting it right about 50% of the time, that’s the same as chance, but it’s worse than they should be able to do given that they have the knowledge that there is always one film shot and one digital shot. In a scenario of pure chance, you’re not given any statistical/pattern information like that to help you guess what’s coming.)
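If it helps to see why 50/50 is exactly the chance level, here is a tiny illustrative simulation (the pair count and viewer count are arbitrary numbers for the example, not the demo's actual figures): a viewer with no usable visual cue, guessing which shot of each pair is film, lands at about 50% accuracy no matter how confident they feel about any particular shot.

```python
import random

# Illustrative only: viewers guessing at random which shot of each A/B pair is
# film. With no usable visual cue, each guess is right with probability 1/2, so
# the aggregate accuracy settles near 50%.

def random_guess_accuracy(num_pairs: int = 12, num_viewers: int = 10_000) -> float:
    correct = 0
    for _ in range(num_viewers):
        correct += sum(random.random() < 0.5 for _ in range(num_pairs))
    return correct / (num_viewers * num_pairs)

print(f"accuracy of pure guessing: {random_guess_accuracy():.3f}")  # ~0.500
```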

Also, most telling of all: even though the demo shows the exact same shots first sequentially and then side-by-side, viewers are likely to pick one as film when sequential and the other as film when side-by-side. Clearly, if there were any visible je ne sais quoi attribute — any clue that was actually perceptually recognizable — then these people would pick out the film every time based on that visual cue, or at the very least they would guess wrong in the same way between the sequential and side-by-side versions.

I don’t understand in what sense the outlying visual cue can exist if no one can see it — they wouldn’t even need to identify it; they’d just have to see it, even if subconsciously, and let it guide their choice. Since the hypothetical attribute would be a perceptual one, then if no one can perceive it, it seems ipso facto that it must not exist.

Third, I’m happy with the model because I believe I’ve now modeled film so well that there are some attributes I actually might not want to use when shooting a digital movie. The model is too precise rather than not precise enough — I can actually pick which attributes I want for a movie. 

For example, in the demo I’ve applied my emulation of film’s halation. I love film’s colors and grain, but I don’t know if I love the halation. I like halation in flares (sometimes), but I don’t know if I especially like it on more ordinary high-contrast subjects. When you shoot film, you have no say in this halation — it’s just there. In a digital movie, I can apply the halation algorithm if I merely want to be faithful to film, but I can also make a creative decision not to apply it at all or to apply it only to flares. Another example is gate weave: I’ve applied my gate-weave algorithm in the demo (again to prove that if you can identify an attribute you can model it). For faithful film emulation, I’d have to use it. But for an actual movie I may choose to leave it out. Again, I love film’s grain and colors but not sure about the gate weave. So, at this point, to my mind: the model is currently too precise rather than too approximate; to the point where some of its attributes can be seen as slavishly and brainlessly copying film for no particular aesthetic reason. 
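To illustrate what I mean by picking and choosing attributes, here is a purely hypothetical toy sketch (not my actual model or code; the stage functions are crude stand-ins) of a display-prep pipeline in which each emulation attribute is its own optional stage that can be included or omitted per project.

```python
import numpy as np

# Hypothetical sketch only: toy stand-ins for film-emulation stages, illustrating
# the idea that once an attribute is modeled as a separate stage, a filmmaker can
# choose to apply it or leave it out. None of this is the demo's actual model.

def toy_grain(img, strength=0.02, rng=np.random.default_rng(0)):
    """Add simple Gaussian noise as a crude stand-in for grain synthesis."""
    return np.clip(img + rng.normal(0.0, strength, img.shape), 0.0, 1.0)

def toy_halation(img, threshold=0.8, spread=0.1):
    """Bleed bright areas slightly outward as a crude stand-in for halation."""
    glow = np.where(img > threshold, img - threshold, 0.0)
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]
    blurred = sum(np.roll(glow, s, axis=(0, 1)) for s in offsets) / len(offsets)
    return np.clip(img + spread * blurred, 0.0, 1.0)

def toy_gate_weave(img, max_shift=2, rng=np.random.default_rng(1)):
    """Shift the whole frame by a small random offset as a stand-in for weave."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    return np.roll(img, (int(dy), int(dx)), axis=(0, 1))

def display_prep(img, use_grain=True, use_halation=False, use_gate_weave=False):
    """Apply only the selected emulation attributes to a [0, 1] luminance frame."""
    if use_grain:
        img = toy_grain(img)
    if use_halation:
        img = toy_halation(img)
    if use_gate_weave:
        img = toy_gate_weave(img)
    return img

frame = np.random.default_rng(2).random((1080, 1920))  # stand-in frame
prepped = display_prep(frame, use_grain=True, use_halation=False)
```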

Also: I agree that you can call it “conservatism” as opposed to “superstition.” Very valid. Especially for filmmakers who are not making any assertions at all about film’s attributes as compared to digital’s but merely continuing to use it quietly. But who is doing that other than the Coen brothers? Most people using film are also publicly making rhapsodic statements about it. I might have said Woody Allen was another such quiet conservative, but he just shot his last project digitally.

“Superstition” seems more precise to me, though, for describing the (outspoken) belief that the perceptual attributes traditionally associated with film are always and only seen in photochemically acquired images and the perceptual attributes traditionally associated with “video” are always and only seen in digitally acquired images. That’s because “superstition” very pointedly connotes an unfounded belief, especially one that ascribes a causal role to a non-essential object or phenomenon. The word also implies a preference for cherry-picked, belief-confirming anecdotal evidence over rigorous, unbiased empirical evidence.

Anyway, I’m also glad that you’re now a Hatzer fan. He’s absolutely amazing. I had the incredible good fortune to work with him on a movie called ‘Danny Collins’ and he’s not only as good as he seems, he’s also a total gentleman and has a great working style (some people get a good result through a circuitous route, but he’s more of the direct type). He’s one of the few colorists that comes from being a traditional film timer, so he has to really know how to see color — in traditional film timing, you can’t spiral; you have to home right in.

Thanks again for such a challenging and fun discussion.

-Steve

4 Jan 2016

Thank you for sharing your extensive insight into this subject. It's interesting that these discussions are usually spearheaded by directors in the media when clearly cinematographers should have a say in the matter.

I think we have a similar viewpoint when it comes to the possibility of the existence of an unquantifiable quality, but you actually convinced me of the likelihood that all the technical processes of film could potentially be reproduced digitally. 

Your description of showing the demo to filmmakers sounds fascinating (I have to confess that I wrote down my own guesses for future reference) and as far as I'm concerned, the moment we can replicate the range of results we've become accustomed to in film and can't notice the difference any better than a randomized guess, I think that settles it. This model could be a revolutionary step in the evolution of the medium. But does it mean that for you film is done, or would you consider ever shooting it again in the future? (Presupposing that it would be entirely your choice)

I have not had a chance to see Danny Collins yet but I'm looking forward to checking it out. And very excited to see a full feature shot on this new model. Thanks again for engaging in this thoughtful discussion, I'm gonna go re-watch Blow Out to remember Vilmos! 

Mario

4 Jan 2016

Thanks so much, Mario

I’d certainly consider film a valid capture format, as my whole point is that the capture format (as long as it’s a high quality one) is not very essential to the look, so I’m not especially passionate about using any one type of format. Although I have to admit that once the “look” isn’t a part of the format decision, then digital is very attractive for a lot of reasons of pure pragmatism.

As for Danny Collins, Hatzer’s work is fantastic but my own work is nowhere near up to Deakins level so I hope you like it. Also, just so you know: I did indeed use my film emulation stuff on it (color and grain but not halation and gate weave), but it was before I’d made some of the most recent and important advances, so it’s an older draft of the algorithm.

Also, I’m thinking of posting some of what I’ve written from our exchange, because I think it could be helpful and informative to other people. Would you mind if I made public entire sections of the email thread that included your writing too? No pressure — if you object, no problem, I can stick to only posting my own writing.

Happy new year.

-Steve

4 Jan 2016

Hey Steve

Ordered Blu-Ray, excited to see it.

No objection whatsoever. Like I said, I've been reading about this issue for years and you made the most well-founded, evidence-based arguments I've seen so far; I just hope I didn't have many typos! Feel free to edit, shorten and cut for the sake of context or clarity.

Happy new year

5 Jan 2016

Thanks, Mario!

I’ll let you know if I post it. I think I had more typos than you did — I’ll see how many I can catch on both of our parts.

Happy New Year!

-Steve