Video Synthesizer

A video synthesizer is a device that electronically creates a video signal. Through the use of internal video pattern generators, it can generate a variety of visual material without camera input; it can also accept live television and ‘clean up and enhance’ or ‘distort’ it. Video pattern generators may produce static, moving, or evolving imagery; examples include geometric patterns (in 2D or 3D), subtitle text characters in a particular font, and weather maps.
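
To make the idea concrete, here is a minimal, hypothetical sketch of a software pattern generator in Python (using numpy): each call yields one greyscale frame of a moving interference pattern. The resolution and oscillator frequencies are arbitrary illustrative choices, not values from any particular machine.

```python
import numpy as np

WIDTH, HEIGHT = 320, 240  # illustrative resolution

def pattern_frame(t: float) -> np.ndarray:
    """Return one HEIGHT x WIDTH greyscale frame (uint8) at time t seconds."""
    y, x = np.mgrid[0:HEIGHT, 0:WIDTH].astype(np.float32)
    # Two spatial oscillators plus a diagonal one, each with its own
    # time-varying phase, summed and scaled into the 0..255 range.
    v = (np.sin(x * 0.05 + t * 2.0)
         + np.sin(y * 0.08 - t * 1.3)
         + np.sin((x + y) * 0.03 + t * 0.7))
    return ((v + 3.0) / 6.0 * 255.0).astype(np.uint8)

# e.g. three seconds of imagery at 30 frames per second:
# frames = [pattern_frame(n / 30.0) for n in range(90)]
```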

The history of video synthesis is tied to a ‘real time performance’ ethic. The equipment is usually expected to operate on input camera signals it has never seen before, delivering a processed signal continuously and with minimal delay in response to the ever-changing live video inputs.

Following in the tradition of performance instruments of the audio synthesis world, such as the Theremin, video synthesizers were designed with the expectation that they would be played in live concert or theatrical situations, or set up in a studio to process a videotape from a playback VCR in real time while recording the results on a second VCR. Venues for these performances included ‘Electronic Visualization Events’ in Chicago, ‘The Kitchen’ in NYC, and museum installations. In 1983, video artist/performer Don Slepian designed, built, and performed a foot-controlled Visual Instrument at the Centre Pompidou in Paris and the NY Open Center that combined multiple Apple II Plus computers with the Chromaton 14 video synthesizer and channels of colorized video feedback.

Analog and early real-time digital synthesizers existed before modern computer 3D modeling. Typical 3D renderers are not real time: they concentrate on computing each frame, for example by a recursive ray-tracing algorithm, however long it takes. This distinguishes them from video synthesizers, which must deliver a new output frame by the time the last one has been shown, and repeat this performance continuously (typically delivering a new frame every 1/30 or 1/25 of a second). The real-time constraint results in a difference in design philosophy between these two classes of systems. Video synthesizers overlap with the video special effects equipment used in real-time network television broadcast and post-production. Many innovations in television broadcast equipment, as well as in computer graphics displays, evolved from synthesizers developed in the video artists’ community, and these industries often support ‘electronic art projects’ in this area in appreciation of that history.
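
As a loose sketch of that design philosophy, the loop below (plain Python; grab_input, process_frame, and emit_frame are hypothetical stand-ins for the actual signal path) must emit a frame every 1/30 of a second and then sleep off any leftover budget, whereas an offline renderer would simply keep computing:

```python
import time

FRAME_PERIOD = 1.0 / 30.0  # use 1.0 / 25.0 for 25 fps systems

def run(grab_input, process_frame, emit_frame):
    """Deliver one processed frame per FRAME_PERIOD, indefinitely."""
    next_deadline = time.monotonic() + FRAME_PERIOD
    while True:
        emit_frame(process_frame(grab_input()))
        # Sleep off whatever is left of this frame's time budget.
        remaining = next_deadline - time.monotonic()
        if remaining > 0:
            time.sleep(remaining)
        next_deadline += FRAME_PERIOD
```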

Many principles used in the construction of early video synthesizers reflected a healthy and dynamic interplay between electronic requirements and traditional interpretations of artistic forms. For example, Rutt & Etra and Sandin carried forward, as an essential principle, Robert Moog’s idea of standardized signal ranges, so that any module’s output could be connected to ‘voltage control’ any other module’s input. The consequence, in a machine like the Rutt-Etra, was that position, brightness, and color were completely interchangeable and could be used to modulate each other during the processing that led to the final image. Videotapes by Louise and Bill Etra and by Steina and Woody Vasulka dramatized this new class of effects, leading to various interpretations of the multi-modal synesthesia of these aspects of the image in dialogues that extended the McLuhanesque language of film criticism of the time.
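
A minimal sketch of this interchangeability, assuming numpy and a greyscale frame: brightness is patched into vertical position, in the spirit of a Rutt-Etra scan-processor effect. The function name and the displacement gain are illustrative inventions, not details of the original hardware.

```python
import numpy as np

def brightness_to_position(frame: np.ndarray, gain: int = 20) -> np.ndarray:
    """frame: H x W uint8 luminance image; returns the displaced image."""
    h, w = frame.shape
    out = np.zeros_like(frame)
    ys, xs = np.mgrid[0:h, 0:w]
    # Brighter pixels are pushed further up the raster, as if luminance
    # were patched into the vertical deflection input.
    new_y = np.clip(ys - frame.astype(np.int32) * gain // 255, 0, h - 1)
    np.maximum.at(out, (new_y, xs), frame)  # brightest writer wins per cell
    return out
```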

In the 1970s, Richard Monkhouse, working for EMS (Electronic Music Studios) in the UK, developed a hybrid video synthesizer, Spectre, later renamed ‘Spectron’, which used the EMS patchboard system to allow completely flexible connections between module inputs and outputs. The video signals were digital, but they were controlled by analog voltages: there was a digital patchboard for image composition and an analog patchboard for motion control.

In 1976, video synthesizers moved from analog circuitry to the precision control of digital logic. The first digital effects were exemplified by Stephen Beck’s ‘Video Weavings.’ Schier and Vasulka advanced the state of the art from simple address counters to programmable AMD microprocessors. On the data path, they used arithmetic and logic units, previously thought of as components for executing arithmetic instructions in minicomputers, to process real-time video signals, creating new signals representing the sum, difference, AND, XOR, and so on, of two input signals. These two elements, the address generator and the video data pipeline, recur as core features of digital video architecture.
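
A small sketch of the ALU-on-the-data-path idea, in Python with numpy (the function name and operation labels are mine, not the Schier/Vasulka instrument’s): two frames are combined pixel by pixel into their sum, difference, AND, or XOR.

```python
import numpy as np

def alu_combine(a: np.ndarray, b: np.ndarray, op: str) -> np.ndarray:
    """a, b: same-shape uint8 frames; returns the combined uint8 frame."""
    if op == "sum":
        return np.clip(a.astype(np.int16) + b, 0, 255).astype(np.uint8)
    if op == "diff":
        return np.abs(a.astype(np.int16) - b).astype(np.uint8)
    if op == "and":
        return a & b
    if op == "xor":
        return a ^ b
    raise ValueError(f"unknown op: {op!r}")
```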

The address generator supplied read and write addresses to a real-time video memory; it can be thought of as the most flexible evolution of the earlier technique of gating address bits together to produce the video. While a video frame buffer is now present in every computer’s graphics card, it has not carried forward a number of features of the early video synths: its address generator counts in a fixed rectangular pattern, from the upper left-hand corner of the screen, across each line, to the bottom. This discarded a whole technology of modifying the image through variations in the read and write addressing sequences provided by hardware address generators as the image passed through the memory. Today, address-based distortions are more often accomplished by blitter operations moving data within the memory, rather than by changes in the video hardware’s addressing patterns.
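
For illustration, here is a hypothetical sketch (Python with numpy; the sine offsets are arbitrary) of the kind of distortion a modified read-address sequence could produce: each output line reads frame memory at a perturbed horizontal address instead of in fixed raster order, warping the image without touching the stored data.

```python
import numpy as np

def sine_read_addressing(frame: np.ndarray, amplitude: int = 8,
                         period: float = 32.0) -> np.ndarray:
    """Warp a greyscale frame by perturbing each line's read address."""
    h, w = frame.shape
    out = np.empty_like(frame)
    for y in range(h):
        # A fixed raster scan would use offset 0 here; a hardware address
        # generator could substitute this modulated sequence instead.
        offset = int(amplitude * np.sin(2.0 * np.pi * y / period))
        out[y] = np.roll(frame[y], offset)
    return out
```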
