BACK TO #NerdyFilmTechStuff

REPLY TO YET ANOTHER LARGE FORMAT ARTICLE
April 9, 2022


This is a reply to someone who asked me what I thought of a new article in Y.M. Cinema Magazine.

It's a piece that, like my own Large Format Misconceptions post, tries to debunk some commonly and ardently held but false beliefs about the optical characteristics (or lack thereof) associated with different sensor sizes in photography.

It's more informal and less technical than my own piece, and I think it's overall quite good. At least in general (if not in all specifics), it comes down on the side of truth over spurious propaganda, unlike the "large format look" article that I critique in my piece. But this new article is still a bit problematic in my opinion. Here is a non-exhaustive list of some things I take exception to in this Y.M. Cinema article:

1.

Not sure where they got the 1.81 ratio of frame size between Alexa Mini and Alexa65.

The custom framing area for the specific project described in the article may well have had a 1.81-to-1 ratio (or any ratio at all for that matter) because you can draw any project-specific frame lines you like within the sensor area. But that doesn't mean it's meaningful to say that the ratio of the cameras themselves should be called 1.81:1. The Alexa65 has an active sensor width of 54.12mm and the Alexa Mini has an active sensor width of 28.17mm. So, if you use the full width of each camera and then crop the vertical to your project's aspect ratio, the ratio of sensor size is 54.12 / 28.17 = 1.92:1. Similarly, if you don't use the full sensor width and choose to have a pad, as long as you use the same (horizontal) percentage pad on both cameras, you'll still have a 1.92-to-1 ratio.
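To make the arithmetic concrete, here's a quick Python sketch using the sensor widths quoted above; it also shows that applying the same percentage pad to both cameras leaves the ratio untouched:

```python
# Active sensor widths quoted above, in millimeters.
ALEXA65_WIDTH = 54.12
ALEXA_MINI_WIDTH = 28.17

def width_ratio(pad_fraction=0.0):
    """Ratio of used imaging widths when the same horizontal
    percentage pad is applied to both cameras."""
    used_65 = ALEXA65_WIDTH * (1.0 - pad_fraction)
    used_mini = ALEXA_MINI_WIDTH * (1.0 - pad_fraction)
    return used_65 / used_mini

print(round(width_ratio(), 2))      # full width of both sensors: 1.92
print(round(width_ratio(0.10), 2))  # same 10% pad on both: still 1.92
```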

I'm not just nitpicking here: it's a common occurrence to calculate something a little bit wrong (not spectacularly wrong, just a little bit wrong) and then misattribute the difference between the expected result and the actual result to some sort of magic, rather than going back and just fixing the math error, which would show that the expected and actual results agree.

(Again, in the particular project being described in the piece, 1.81-to-1 may well have been the ratio of their custom frame lines, but firstly the article gives no specifics to support that. And secondly, even if true, it's still misleading to say that 1.81 is the ratio between the sizes of the camera sensors as opposed to saying it's the ratio of the one-time custom framing areas.)

2.

In trying to debunk common misconceptions about "large format look," we should not give credence to the misleading mental crutch of "crop factor."

There is no one single magically correct or biblically mandated imaging width that everything is either bigger than or smaller than or the same size as. So, in comparing (in this case) an Alexa65 to an Alexa Mini, you can just directly compare the two framing areas of the two cameras you're actually using without also comparing them to a third imaginary "full frame" camera that we're not using here and is totally unrelated to what we're doing.

Not only does this unnecessarily elaborate mental model cause conceptual confusion that gets people spun around, it also adds extra steps to the hard calculation. These are extra steps that can cause precisely the kind of error mentioned above in #1. (Like: why first calculate two crop factors and then calculate the ratio of those crop factors? You can just directly calculate the ratio of the two frame sizes without the extra gyrations of a crop factor.)
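To illustrate the redundancy, here's a small Python sketch: the reference width baked into a crop factor cancels out entirely when you take the ratio of two crop factors, so any arbitrary reference (the 36mm of stills "full frame" or anything else) gives exactly the same answer as simply dividing the two widths:

```python
def crop_factor(width_mm, reference_mm):
    # A "crop factor" is just the ratio to some chosen reference width.
    return reference_mm / width_mm

w_65, w_mini = 54.12, 28.17  # Alexa65 and Alexa Mini widths from above

# The detour: two crop factors against a reference, then their ratio.
# The reference width cancels out entirely...
detour = crop_factor(w_mini, 36.0) / crop_factor(w_65, 36.0)
# ...as it would for any other arbitrary reference width:
detour_other_ref = crop_factor(w_mini, 24.89) / crop_factor(w_65, 24.89)

# The direct route: just divide the two widths.
direct = w_65 / w_mini

print(round(detour, 4), round(detour_other_ref, 4), round(direct, 4))
```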

So, I'm not saying that the math of crop factor is wrong, but that it's a completely unnecessary superadded concept that does nothing at all other than add a mental stumbling block to a topic that already has people confused and prolong false credence in the idea that there are certain frame sizes that have magical primacy over others.

So, no: it's not "all in the 'crop factor.'"

3.

In matching size of blur circles, the article seems to make a mistake in which it gets the math itself right but doesn't explain that math correctly.

As I explain in more detail elsewhere, the article is correct that, to match blur circles, you do what they actually did: multiply the f-number by the ratio of the frame widths.

But it's not correct to do what they say that they did (in their explanation that precedes the actual math): they say you need to change the f/stop by the number of stops that equals the ratio of frame widths.

To see that what they actually did (which is correct) is not the same thing as what they said they did (which is not correct), all you have to do is run the math backwards and see that in the incorrect version, you get different results if you try to match camera A to camera B than if you match camera B to camera A.

If camera A has a sensor that's twice as wide as camera B's, then to match camera B to camera A using their incorrect formulation, you'd have to change the f/stop by 2 stops, because the ratio of A's width to B's is 2. But to go the other direction and match camera A to camera B, you'd have to change the f/stop by only half a stop, because the ratio of B's width to A's is 0.5. This is absurd: it can't be true both that the blur circles match when the cameras are set two stops apart and when they're set half a stop apart.

Even more absurdly: if camera A and camera B have the same frame width, then in the mistaken formulation, they'd have to be set 1 stop apart to match, since the ratio of the two sizes is 1.
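A small Python sketch makes the asymmetry explicit. The correct rule (multiply the f-number by the width ratio) works out to a stop gap of 2·log2(ratio), which has the same magnitude in both directions and is zero for equal widths; the misstated rule (a stop gap equal to the ratio itself) is neither:

```python
import math

def stops_gap_misstated(width_ratio):
    # Misstated rule: the stop gap equals the width ratio itself.
    return width_ratio

def stops_gap_correct(width_ratio):
    # Correct rule: multiply the f-number by the width ratio.
    # One stop scales the f-number by sqrt(2), so this is a gap
    # of 2 * log2(ratio) stops.
    return 2 * math.log2(width_ratio)

# Camera A's sensor is twice as wide as camera B's:
print(stops_gap_misstated(2.0), stops_gap_misstated(0.5))  # 2.0 vs 0.5: contradictory
print(stops_gap_correct(2.0), stops_gap_correct(0.5))      # 2.0 and -2.0: same gap either way

# Identical frame widths (ratio 1): the misstated rule demands a
# 1-stop gap; the correct rule demands none.
print(stops_gap_misstated(1.0), stops_gap_correct(1.0))    # 1.0 vs 0.0
```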

So what they said to do in the explanation (change the aperture by 1.8 stops) is wrong while what they actually did (multiplied the f-number by 1.8) was correct (or, at least it's correct if 1.8 is indeed your ratio of frame widths, which I'm not sure it was).
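(As a side note on why the slip may have been easy to miss: for a value near 1.8, multiplying the f-number by 1.8 and closing by 1.8 stops happen to produce similar, though not identical, scale factors. A quick check, assuming the usual sqrt(2)-per-stop relationship:)

```python
ratio = 1.8

# Multiplying the f-number by the ratio scales it by the ratio itself:
multiply_factor = ratio

# Closing by `ratio` stops scales the f-number by 2 ** (ratio / 2):
stops_factor = 2 ** (ratio / 2)

print(multiply_factor, round(stops_factor, 3))  # 1.8 vs 1.866: close, but not the same
```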

And multiplying an aperture's f-number by a value is not the same thing as closing that aperture by that same value. Here are some examples to help illustrate that:

If the aperture's f-number is f/1.0 and the value is 10: multiplying f/1.0 by 10 gives you f/10 whereas closing f/1.0 by 10 stops gives you f/32.

If the aperture's f-number is f/2.8 and the value is 1.0: multiplying f/2.8 by 1.0 gives you f/2.8 while closing f/2.8 by one stop gives you f/4.0

If the aperture's f-number is f/22 and the value is 0.1: multiplying f/22 by 0.1 gives you f/2.2 while closing f/22 by 0.1 stops gives you f/22.8.
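The three examples above, written out as a quick Python sketch:

```python
def multiply_f_number(n, value):
    # Scale the f-number directly by the value.
    return n * value

def close_by_stops(n, stops):
    # Closing by one stop multiplies the f-number by sqrt(2),
    # so closing by `stops` stops multiplies it by 2 ** (stops / 2).
    return n * 2 ** (stops / 2)

for n, value in [(1.0, 10), (2.8, 1.0), (22.0, 0.1)]:
    print(f"f/{n}, value {value}: "
          f"multiply -> f/{multiply_f_number(n, value):g}, "
          f"close by {value} stops -> f/{close_by_stops(n, value):.1f}")
```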

4.

I object to the statement that "larger sensors require one to get closer to their subject."

At least in this case, the phrase appears in a context that's much more clear-sighted than the usual forums in which that same sentiment is offered. So the spurious phrase is delivered in such a way that the sentence in which it lives, taken as a whole, is technically true even if misleading.

But saying something that's only misleading and not technically false is still a problem.

Sensor size does not force you to change camera position. It just doesn't. If you think you need to move the camera because of the sensor size, you've already failed to understand the geometry of optics and the crux of shot design.

In motion or still photography (or in animation or in realistic painting or drawing for that matter), the single thing that most defines a shot is camera placement: where the entrance pupil (the perspective point) is in space in relation to the scene. If you move the camera, you're simply doing a different shot.

So, when you compare two format sizes fairly instead of deceptively, you have to compare photographing the same shot on the two formats, not doing a totally different shot on each format, which is pointedly a false comparison.

A fair comparison would consist of setting up each camera so that its entrance pupil is in the exact same position and aimed in the same direction, and then selecting a lens and aperture that give the same angle of view and blur circles (not the same lens and the same aperture, but equivalent ones that give the same result for the sensor size; I've written elsewhere about how to unambiguously calculate this).
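As a sketch of what that equivalent selection looks like, using the multiply-by-the-width-ratio rule from earlier for the aperture and the same scaling for focal length (which preserves angle of view); the 35mm-at-f/2.8 starting point is a made-up example:

```python
def equivalent_lens(focal_mm, f_number, from_width_mm, to_width_mm):
    """Focal length and f-number on the target format that reproduce
    the source format's angle of view and blur circles, with the
    entrance pupil kept in the same position."""
    ratio = to_width_mm / from_width_mm
    return focal_mm * ratio, f_number * ratio

# Made-up example: a 35mm lens at f/2.8 on the Alexa Mini's 28.17mm
# width, matched on the Alexa65's 54.12mm width.
focal, stop = equivalent_lens(35.0, 2.8, 28.17, 54.12)
print(f"{focal:.1f}mm at f/{stop:.1f}")
```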

Doing two different shots with two different cameras and then saying, "see, these two cameras have different looks because these shots aren't the same" is misleading.

So while the article's section called "Sensor size DOESN'T effect DOF" may not have statements that are technically untrue, it does use suggestive and charged language that implicitly reinforces the same commonly held misconceptions it's trying to debunk.

5.

I know the article is brief and not as technical as some of my own resources on the topic, but they probably should have mentioned the difference between f/stop and t/stop.

That's a difference that can cause some confusion: it's the (main) reason that doing the calculation to match blur circles on different cameras/lenses using only the marked apertures on the lenses is an approximation.

Cinema lenses are marked in t/stops, not f/stops. If you were to include in the calculation the conversion from t/stop to f/stop for the actual lens models being used, you could get a more precise match.
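As a sketch, using the standard definition T-stop = f-number / sqrt(transmittance); the transmittance values here are made up purely for illustration:

```python
import math

def f_number_from_t_stop(t_stop, transmittance):
    # T = N / sqrt(transmittance)  =>  N = T * sqrt(transmittance)
    return t_stop * math.sqrt(transmittance)

# Two hypothetical lenses, both marked T2.8, with different
# (made-up) transmittance values:
n_a = f_number_from_t_stop(2.8, 0.85)
n_b = f_number_from_t_stop(2.8, 0.70)
print(round(n_a, 2), round(n_b, 2))  # the true f-numbers differ

# Matching blur circles across formats should scale these true
# f-numbers by the frame-width ratio, not the marked t-stops.
```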

6.

It's quite misleading (and bizarrely so; why do this? it seems so random) that the article concludes the entire discussion by selecting an arbitrary, hypothetical, and strangely hyper-specific example of a way that you could actively jimmy a comparison so that there would be a difference in look that usually wouldn't exist between two format sizes, and then proclaims that "this is the large format look."

Their specific example is to take one specific lens model that is uniquely engineered to be blurry at the edges of a larger sensor area. They compare how that lens looks on exactly the kind of larger taking area it was designed for (which can see the blurry edges) with how it looks on a smaller format whose image area is contained within the non-blurry center. This is transparently a gamed example.

The "look" here is not the "look of format size" but "the look of lens engineering." This particular lens was engineered to have blurry edges. You can have a blurry edge look just as easily on one format size or another. All you'd have to do is shoot the larger format in their example with any other lens model and it would have the same look as the small format in their example. Or, conversely, you could use a different lens model (not the same lens model) that's designed to have blurry edges on the smaller format size and now the small format would have the "big format look."

This is not hypothetical: the reality is that, because lenses have been made for standard motion picture framing areas (around 22mm to 24mm across) for over 100 years, there are currently many more options for different specific models with specific engineering looks for standard motion picture frame sizes than there are for the new bigger sizes. There are many new and legacy lenses that can offer a clean high quality look or a vintage degraded look or a blurry-edge look. And there are so many more options too! There are lens models that are quite nearly perfectly rectilinear across the field, others that are not rectilinear at all, and yet others that are rectilinear in the middle and non-rectilinear at the edges.

There are many different lens looks: but (ipso facto) the lens engineering looks are not attributes of the sensor size but of the engineering of the specific lens model. It's deceptive to say "if I select one lens engineering look for a bigger sensor and select a different look for a smaller sensor, then that's a look of the sensor size." You could just as easily have reversed the types of lenses selected or made them both the same instead of opposing. It's like photographing one specific dark/contrasty scene with a large sensor and a different bright/flat scene with a smaller sensor and then proclaiming that larger sensors have a darker/contrastier look than smaller sensors.

As is often the case when misconceptions are propagated: the example is deceptive because it purports to be a comparison of one attribute (sensor size) but fails to hold other variables constant (in this case, lens engineering), and then perpetuates the magical thinking by misattributing differences created by one variable to the other. And in this case, it does so slyly: it may seem like it's holding lens engineering constant by using the same lens model, but it's actually not: a fair comparison would use equivalent lens models, not the same lens models (like: either use lenses on each camera that are designed to be blurry at the edges of that camera's sensor, or use lenses on each camera that are designed to be clean at the edges).

This is the same mistake found in many of the other false/deceptive comparisons as well: a fair comparison demands doing the equivalent thing in both cases, not doing the same thing in both cases. If you want to compare which of two different brands of shoe is more comfortable and one brand is marked in US sizes and the other is marked in UK sizes, you don't compare them by trying on the same marked size number in each; you compare them by trying on the sizes that actually fit you.