Classification is a useful way to help us organise things, be they objects in your toolbox, music in a collection or the contents of the fridge. Technological tools can help us with this task. Over time we have invented new classifications to divide the qualities of the world around us in ever more granular detail. We have also turned the technology inward to classify ourselves. Is this a partnership between humans and the machines they have created? And how do these new ways of seeing affect how we view ourselves and others?

The above image is the starting point for the performance. It is generated by the calibration bars of the surveillance system used to mix live video. It is before this abstract cityscape that the performance happens.

Arranged on the floor is a collection of obsolete audio and video equipment. There is also a range of personal objects, such as photographs and children’s toys.

A live camera and tripod are repositioned during the performance to create new image compositions. A computer, stripped back to its most essential components (power supply, main board, video card), is manipulated (circuit bent) live to produce video imagery. Sound created by VHS audio playback (and time shifting) and cassette tape loops is mixed live. VHS video tapes provide another image source.

Performance

Here are some stills from the performance.

What is the performance about?

The performance is intended to stimulate the audience to consider the role of technology as a way of classifying humans. On some level I am asking the audience to draw conclusions about me, by the use of personal objects, but also to think about how the lens of technology might affect the conclusions they draw. I invite the audience to consider the power structures the technology supports. The abstract narrative allegorically traverses a spectrum from a hard-edged system to a more human face.

The visual and aural aesthetic makes use of generational loss, glitch, error and resampling and is mostly analog in nature.
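Generational loss is the cumulative degradation that occurs when an analog signal is copied repeatedly: each dub limits the bandwidth and adds noise. A minimal sketch of that process, assuming nothing about the actual equipment used (the blur and noise values here are illustrative, not measured):

```python
import numpy as np

def one_generation(frame, noise=4.0, seed=0):
    """Simulate one analog dub: a crude horizontal low-pass
    (bandwidth limit) followed by additive tape noise."""
    rng = np.random.default_rng(seed)
    # average each pixel with its horizontal neighbours
    blurred = (np.roll(frame, 1, axis=1) + frame + np.roll(frame, -1, axis=1)) / 3.0
    noisy = blurred + rng.normal(0.0, noise, frame.shape)
    return np.clip(noisy, 0.0, 255.0)

# a hard vertical edge, as in a test pattern
frame = np.zeros((4, 64))
frame[:, 32:] = 255.0

for _ in range(10):
    frame = one_generation(frame)
# after ten "generations" the edge has smeared into a gradient
```

Each pass widens the transition at the edge, which is why a tenth-generation VHS copy looks soft and noisy where the original was crisp.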

Where did this project come from?

Looking back over my research, I think the first element that fell into place was the arrangement of the quad. By this I mean the division of the screen into four smaller screens. Two pieces of equipment contributed to this.

The first was the ActionSampler lomo camera, a sample of whose images you can see above. The second is the ROBOT. The quad gives the feeling of an enhanced way of seeing. In the lomo camera you see four exposures in rapid succession on the same frame of film. On the ROBOT surveillance switch, you can see the same room from several angles, or choose to enlarge one image to see details.
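The quad itself is a simple operation: four feeds tiled into one frame. A sketch of that compositing step, assuming four equally sized greyscale frames (the function name and the test feeds are mine, not the switcher's):

```python
import numpy as np

def quad_composite(frames):
    """Tile four equally sized frames into a single quad image,
    in the manner of a four-input surveillance switcher."""
    assert len(frames) == 4
    top = np.hstack((frames[0], frames[1]))
    bottom = np.hstack((frames[2], frames[3]))
    return np.vstack((top, bottom))

# four flat 120x160 "camera feeds" at different grey levels
feeds = [np.full((120, 160), v, dtype=np.uint8) for v in (0, 85, 170, 255)]
quad = quad_composite(feeds)
print(quad.shape)  # (240, 320)
```

Enlarging one image, as the ROBOT allows, is just the inverse: slicing one quadrant back out and scaling it to fill the frame.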

Around the same time I acquired a number of switchable time-base video recorders, and with them the first element of found footage. I had to repair the tape mechanism of all three machines, and when I did, out popped a tape.

This is a frame of the footage that was on that tape. The faces are clearly human, but not distinct, as in this close up.

The limitations of the system’s vision are clearly shown. The geometric composition, enhanced by pixelation and the regular arrangement of furniture in the rooms, is striking.

Clearly a raster system of image display is employed here. I wondered if other manipulations or equipment glitches could be employed to create a more photographic-feeling image. The result was some video cyanotypes.

I used three different old video cameras to capture a quarter of each face. This image was then displayed on a tiny black and white tv and photographed.

From that point I started to think about how to make the project more performative. I created a small theatre where the characters were connected via TV.

This used the ROBOT sequence function to cycle the video sources. I also disassembled a PC using a drill, removing all unnecessary parts while it was plugged into a projector.
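The sequence function amounts to a round-robin over the inputs: the switcher dwells on each source in turn, then starts again. A minimal sketch of that behaviour, with illustrative source names of my own (not the switcher's actual input labels):

```python
import itertools

# hypothetical input labels for a four-input switcher
sources = ["camera", "vhs_deck", "bent_pc", "calibration_bars"]

def sequence(sources, steps):
    """Return the active source for each step of a round-robin cycle."""
    cycler = itertools.cycle(sources)
    return [next(cycler) for _ in range(steps)]

print(sequence(sources, 6))
# ['camera', 'vhs_deck', 'bent_pc', 'calibration_bars', 'camera', 'vhs_deck']
```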