Breaking of the Images

The pictures assembled by generative machine learning models, I've realized, lack juice in a very specific way: they carry no suggestion of a relationship between the object/subject of the image and the subject/object of the image-maker.

Any creation of a visual representation by a human mind and hand is the product of multiple intersecting relationships binding the depicted to the depictor, and as viewers our experience of the image incorporates our thoughts and feelings about what we infer regarding those relationships.

So much of what we discuss about visual art encompasses how the artist appears to feel about the subject and how the subject (if sentient) might feel about the artist. This tension is a source of vibrance and engagement. Without it the image feels cold, dead, empty and without purpose.

A painting, drawing or photograph of a person radiates the complexity of these emotional and material relationships: exploitation, consent, love, hate, indifference, commercial exchange, eroticism (exhibitionistic and voyeuristic), ownership, power and submission, aesthetic axiology, alienation, etc.

Even creations which are entirely produced from the creator's mind, without a subject or model present, at minimum evoke these relational tensions between the creator and their own interiority, which is something we as an audience can all identify with.

Prompting a generative machine learning model is relationless. The object/subject being depicted exists nowhere, not even in the mind of the prompter, other than as a reactive impulse toward whatever the model has conglomerated and a decision whether or not to roll the dice again.

A prompter cannot fear the nonsubject of the depiction they iterate toward. Neither can they love, hate, be bored by or indifferent to, overcome shyness toward, boss around, be shamed by, be praised by, collaborate with, ignore, disappoint or thrill the nothing which was never anywhere.

Certainly not all visual depiction is "art". There is plenty of purely mercenary slop out there which serves no purpose other than to fill space and attract a tiny amount of attention (but not too much!). Still, we feel a pathetic emptiness even in the degraded version of this swill produced by generative machine learning.

It is this palpably total lack of relationship involved in the creation of even these images which produces a void, an anti-experience for anyone involved.

Scene Breakdown: Madam Secretary ep. 507

This scene, which I shot as 2nd unit DP for episode 507 of the CBS series Madam Secretary, serves as a decent illustration of what goes into a simple day exterior photographed with available light, so I thought I would walk through it briefly.

The master setup establishes the scene's geography, the relationships between characters, and the color palette in one image. On the tech scout our series DP Learan Kahanov noted that the sun rose over the stands in the background, so we planned our day to begin looking in that direction to provide a striking edge light and deep shadows to outline our subjects. This was shot at T4/5.6 on the wider end of a Fujinon 25-300 zoom, the only lens we were budgeted to carry for the day (times two, as we ran two cameras on the scene). The filter pack was a Formatt Firecrest True ND 1.8 to control exposure, plus a soft-edge grad ND .6 to provide a little "Days of Thunder" feel from the top left corner of the frame. VFX would eventually change the signage above the characters.
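For readers less familiar with ND filter notation: optical density maps to stops of light loss at roughly 0.3 density per stop (since log10(2) ≈ 0.3), so an ND 1.8 cuts about six stops and a grad ND .6 about two. A quick sketch of that arithmetic, with hypothetical helper names:

```python
def nd_stops(density: float) -> float:
    # Industry convention: 0.3 of optical density per stop of light loss,
    # because one stop halves transmission and log10(2) ~= 0.3.
    return density / 0.3

def nd_transmission(density: float) -> float:
    # Fraction of light the filter passes: transmission = 10^(-density).
    return 10 ** (-density)

# An ND 1.8 is ~6 stops, passing roughly 1.6% of the light;
# a grad ND .6 is ~2 stops over the graded portion of the frame.
```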

507_724_1.3.1.jpg

Tighter coverage looking the same direction. Same ND, no grad. James is "keyed" by the bounce off the dragstrip he's standing on from the intense sun over his left shoulder. For the color of the scene, I wanted the skintones to seem healthyish but the overall feel to have a kind of kerosene-polluted warmth: more red/magenta than I would normally go, to evoke a racetrack ambience.

507_724_1.9.1.jpg

For the reverse coverage of Erich, I didn’t want the harsh direct sun on his face if I could help it. For the tighter shots we pulled out the major piece of grip gear for the day: a 12x20 light grid diffusion frame. In the final version I asked the colorist to stretch the highlights a bit on both Erich and the background so the contrast shift wouldn’t feel so extreme when cutting back and forth between James and Erich.

Here’s a little behind-the-scenes shot of what it looked like to fly that 20x:

58057.jpeg

However, we had a problem in the wider shots of Erich.

507_724_1.4.1.jpg

One thing I did not know about the front windshield of a stock car is that it is raked backward at a very shallow angle. While blocking a dolly move that revealed Blake, we discovered it was impossible to both cover Erich with the frame and keep the frame's reflection out of the windshield. Another thing I learned is that the windows are polycarbonate, which ruled out using our polarizing filter to attenuate the reflection: introducing the filter created rainbow moiré interference patterns. We minimized the reflection as much as possible, but then got lucky when a thin cloud layer rolled in while we still had time to go back and reshoot the wider shot without the frame.

507_724_1.11.1.jpg

We didn’t get to the coverage of Usuki’s Assistant until that cloud layer had settled in, so this setup is missing the bright back/edge light I’d otherwise want there to match what was established in the master. If this were a main unit scene with the electric truck available, I might have asked the gaffer to recreate that hot backlight, but alas.