Breaking of the Images

The pictures assembled by generative machine learning models, I've realized, lack juice in a very specific way: the total absence of any suggestion of a relationship between the object/subject of the image and the subject/object of the image-maker.

Any creation of a visual representation by a human mind and hand is the product of multiple intersecting relationships binding the depicted to the depictor, and as viewers our experience of the image incorporates our thoughts and feelings about what we infer regarding those relationships.

So much of what we discuss about visual art encompasses how the artist appears to feel about the subject and how the subject (if sentient) might feel about the artist. This tension is a source of vibrance and engagement. Without it the image feels cold, dead, empty and without purpose.

A painting, drawing or photograph of a person radiates the complexity of these emotional and material relationships. Exploitation, consent, love, hate, indifference, commercial exchange, eroticism (exhibitionistic and voyeuristic), ownership, power and submission, aesthetic axiology, alienation, etc.

Even creations which are entirely produced from the creator's mind, without a subject or model present, at minimum evoke these relational tensions between the creator and their own interiority, which is something we as an audience can all identify with.

Prompting a generative machine learning model is relationless. The object/subject being depicted exists nowhere, not even in the mind of the prompter, other than as a reactive impulse toward whatever the model has conglomerated and a decision whether or not to roll the dice again.

A prompter cannot fear the nonsubject of the depiction it iterates toward. Neither can it love, hate, be bored by or indifferent to, overcome shyness toward, boss around, be shamed by, be praised by, collaborate with, ignore, disappoint or thrill the nothing which was never anywhere.

Certainly not all visual depiction is "art". There is plenty of purely mercenary slop out there which serves no purpose other than to fill space and attract a tiny amount of attention (but not too much!). Still, we feel a pathetic emptiness even in the degraded version of this swill produced by generative machine learning.

It is this palpable, total lack of relationship in the creation of even these images which produces a void, an anti-experience for anyone involved.

Shooting a "Poor-man's Process" Car Interior Scene

"Poor-man's Process"! This scene that I shot for Madam Secretary ep. 409 features a common alternative workflow to either a freedrive or a process trailer, involving a static car and compositing driving plates into greenscreen footage. First, the scene:

We set up the three-sided greenscreen box in the parking lot at the stage. The box is topped with a silk to allow the sun to light the greenscreens while we use various units to suggest ambience and sunlight entering the car itself. The lighting diagram is from memory; please forgive any inaccuracies:

My key light was a 10K through the front window, half-topped with a silk to keep the faces softish while letting harder light be felt in the lower portion of the frame. (All of the following stills are uncorrected frame grabs with the shooting LUT applied.)

Erich Bergen in the back seat had his own special light through the side window; a grip would occasionally pass a solid through it to feel some movement.

For the cross-coverage we brought in smaller, lightly diffused units through the side, which did double duty as edge lights and fill.

I shot the driving plates that were comped into the windows on a special trip to Washington D.C., as examined in detail in my previous post.
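
For anyone curious what the window composite amounts to at its most basic, here is a minimal chroma-key sketch in Python with OpenCV. This is illustrative only: the show's actual composites were done by the post/VFX team with professional tools, and the filenames and green thresholds below are hypothetical placeholders, not anything from the production.

```python
# Minimal chroma-key composite sketch (illustrative only).
# Assumes two hypothetical frames on disk: "car_greenscreen.jpg" (the static
# car against the green box) and "driving_plate.jpg" (a frame of the plate).
import cv2
import numpy as np

fg = cv2.imread("car_greenscreen.jpg")   # foreground: actors in the static car
bg = cv2.imread("driving_plate.jpg")     # background: driving plate frame
bg = cv2.resize(bg, (fg.shape[1], fg.shape[0]))  # match the foreground size

# Isolate the green screen in HSV space; these hue/saturation bounds are
# rough guesses and would be tuned per setup.
hsv = cv2.cvtColor(fg, cv2.COLOR_BGR2HSV)
lower_green = np.array([40, 80, 80])
upper_green = np.array([80, 255, 255])
mask = cv2.inRange(hsv, lower_green, upper_green)  # 255 where the screen is green
mask = cv2.medianBlur(mask, 5)                     # soften ragged mask edges

# Where the mask is green, take the driving plate; elsewhere keep the car.
composite = np.where(mask[..., None] == 255, bg, fg)
cv2.imwrite("composite.jpg", composite)
```

In practice the VFX team also tracks the plates to any camera movement and cleans up edge spill and reflections, none of which this toy key attempts.
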
In the end, is this cheaper than a process trailer day? Ask a UPM. It's certainly more controlled, which is nice.