Investigations

 
 

Tangle E


E : D5 (Installation View), 2014. Split-screen, single channel HD video, 1280 x 720 pixels

 
 

In the manner of DNA replication, Tangle E is a five-part series that uses open-source signal-analysis software to operate on the byte information in five pairs of raw image files – as enzymes do on strand templates in a double helix – synthesizing a new but inherited information apparatus: in this case, plotted data and video.

The code from each raw file in a pair was individually imported, then routed into red, green and blue color channels, corresponding to the original RGB color space of the file. Now a signal, the data was then mapped into a spectrum window – its metadata and bit densities on the x-axis as a function of frequency and magnitude; its pixel dimensions along the y-axis as a function of time.
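As a sketch only – the statement does not name the software or its exact routing scheme – the channel-splitting and spectrum steps described above might be approximated like this, with the interleaving order and the stand-in byte data as assumptions:

```python
# Hypothetical sketch of the routing step: raw image bytes are split into
# three interleaved streams standing in for the R, G and B channels, then
# each stream is treated as a signal whose magnitude spectrum can be
# plotted. The interleaving and the toy input are assumptions, not the
# artist's actual pipeline.
import cmath

def split_rgb(raw: bytes):
    """Route interleaved byte data into R, G and B channel signals."""
    return raw[0::3], raw[1::3], raw[2::3]

def dft_magnitudes(signal):
    """Naive DFT magnitude spectrum of a small byte signal."""
    n = len(signal)
    return [
        abs(sum(s * cmath.exp(-2j * cmath.pi * k * t / n)
                for t, s in enumerate(signal)))
        for k in range(n)
    ]

raw = bytes(range(24))         # stand-in for a raw image file's bytes
r, g, b = split_rgb(raw)
spectrum = dft_magnitudes(r)   # frequency/magnitude along one axis
```

The zeroth DFT bin is simply the sum of the channel's byte values, which is one way such software collapses "bit density" into a single magnitude.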

Next, one signal of the pair was treated as a nucleotide chain in a leading strand of DNA, while the information from the second signal was regarded as a chain of nucleotides in a lagging strand. To excavate and focus in on the spectral energy inherent to each pair of signals, and to map the anti-parallel, 3′–5′ directionality of the byte strands, the domains of frequency, magnitude and time in the red channel of the leading signal were inverted and summed to one, while the frequency domains of the green and blue channels were flipped upside down. The lagging signal’s red and green channels were flipped similarly, and the bit order in its blue channel was reversed.
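Read as array operations, the strand transforms above might look like the following toy rendering. "Inverted and summed to one" is read here as x → 1 − x on normalized values, "flipped upside down" as reversing sample order, and "bit order reversed" as mirroring an 8-bit byte – all guesses at the unnamed software's behavior:

```python
# Toy versions of the three strand transforms, assuming channel values
# normalized to [0, 1]. These interpretations are assumptions.

def invert_to_one(channel):
    """Invert normalized values so each original/transformed pair sums to one."""
    return [1.0 - x for x in channel]

def flip(channel):
    """Flip a channel's domain upside down by reversing sample order."""
    return list(reversed(channel))

def reverse_bits(byte: int) -> int:
    """Reverse the bit order of one byte (the lagging blue channel)."""
    return int(f"{byte:08b}"[::-1], 2)

lead_red = invert_to_one([0.25, 0.5, 0.75])  # -> [0.75, 0.5, 0.25]
lag_blue = reverse_bits(0b00000001)          # -> 0b10000000, i.e. 128
```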

Charted as enzyme activities in the space of a cell, then, the data of the leading file moves to the right of its original order and plays forward in time when displayed in the video window. Conversely, the information of the lagging file also moves right, but in reverse time within the scrolling display buffer. Five split-screen videos offer a record of these polymeric, enzyme-based actions, and an information plot containing all five pairs is a transcript of the strand-files rejoined.

 

This statement appears in different form in the textual materials (press release) for Interphase, Sorry Archive, Select Fair, Miami, FL, Dec. 2–7.

 
 

E : D5, 2014. Split-screen, single channel HD video, 1280 x 720 pixels, (Excerpt) 00:12:26 of 01:12:12 minutes

 
 
 

E : D4, 2014. Split-screen, single channel HD video, 1280 x 720 pixels, (Excerpt) 00:14:21 of 01:12:12 minutes

 
 
 

E : D3, 2014. Split-screen, single channel HD video, 1280 x 720 pixels, (Excerpt) 00:15:37 of 01:12:13 minutes

 
 
 

E : D2, 2014. Split-screen, single channel HD video, 1280 x 720 pixels, (Excerpt) 00:07:06 of 01:12:13 minutes

 
 
 

E : D1, 2014. Split-screen, single channel HD video, 1280 x 720 pixels, (Excerpt) 00:04:09 of 01:12:13 minutes

 
 
 
 
 
 

Miscellany
00.01.54.19 [Background Energy] “Way of the Dragon”, 2014. Single channel HD video with sound, 1280 x 546 pixels

 
 

00.01.54.19 [Background Energy] “Way of the Dragon” explores the digital artifact as a means of coping with the information loss that results from data compression. The algorithms that drive the compression methods we use to convert the world from analog to digital and back again cannot recover all of the data that they quantize and crunch. Paradoxically, however, these methods are also inherently prone to error, and as such can introduce new data – (new) information – on the other side of an operation, in a decompressed file, in the form of digital artifacts, leaving traces of the very same entropic characteristics by which they function. The audio-visual component of 00.01.54.19 takes that information enigma as its point of departure, offering a dialogue-deleted, artifact-laden clip from the film “Way of the Dragon.” Removing the clip’s dialogue released its background noise for exploration, and generated new artifacts stemming from the manipulation and repair of the audio track’s frequency spectrum. Exported as unstyled XML, the archive listing those audio-editing procedures was then further quantized to form this work’s diagrammatic component. Its 9,516,632 characters of code can be reconstituted as a set of instructions – an editing score – and as a wall of data, allowing us a very human experience in direct relation to the digital artifice therein.
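The paradox described above – lossy quantization both destroys information and leaves new information behind – can be shown in miniature. This is a generic illustration, not the work's actual compression method; the step size and sample values are arbitrary:

```python
# A lossy quantizer as a stand-in for a compression codec: it snaps
# samples to a coarse grid, and the residual it leaves behind is itself
# new data -- the "artifact".

def quantize(samples, step):
    """Lossy round trip: snap each sample to the nearest grid point."""
    return [round(s / step) * step for s in samples]

original = [0.12, 0.49, 0.51, 0.88]
crunched = quantize(original, 0.25)              # -> [0.0, 0.5, 0.5, 1.0]
artifacts = [o - c for o, c in zip(original, crunched)]  # residual "new" data
```

The original values cannot be recovered from `crunched` alone, yet the nonzero residuals in `artifacts` are exactly the entropic trace the statement describes.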

 
 
 
 

{XML Audio Editing Score: 00.01.54.19 [Background Energy] “Way of the Dragon”}, 2014. Unstyled XML history export, compressed, 4161 x 2938 pixels

 
 
 
 

Detail: Top left; 2080 x 1469 pixels of 22930 x 9948 pixels actual

 
 
 
 
 
 

Investigations

 

You and I, We’re on the Same Wavelength, 2014. Single channel HD video with sound, 1280 x 720 pixels, (Excerpt) 00.02.59.24 minutes

 
 
 

You and I, We’re on the Same Wavelength is a performance across multiple locations in New York wherein I attempt to connect instinctively with strangers by yawning repeatedly, positing that a few will yawn back at me. Taking as its impetus the rapid evolution of virtual, discrete sites of connection in online social networks and their accompanying meme contagions, this piece seeks an offline analog to better understand these hidden loci through the mutually invisible properties of empathy and bonding. Subconscious and spreadable, yawn contagion is just such a signal, and a phenomenological site ripe for excavation.

 
 
 
 
 
 
 

I-V-IV17, 2014. Single channel HD video with sound, 1280 x 720 pixels, 01:11 minutes

 
 

I-V-IV17 explores how we process and adapt to change via sensory (dis)comfort. A painfully familiar I-V-IV progression of guitar chords in The Who’s “Baba O’Riley” was isolated from the mix and placed into a spectrogram editor, where it was developed into a shifting visual score of colored frequency bands. The familiarity of the chord progression quickly fades in isolated repetition, and the visual score is disorienting, but the repetitions also allow new sounds and color patterns to be perceived. Thus the more one discovers new sensory qualities in the piece, the more one adapts to the initial discomfort.
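The frequency bands such a spectrogram editor displays for a I-V-IV root movement can be sketched with equal temperament. The key of F and the octave below are assumptions for illustration only, not taken from the recording:

```python
# Equal-tempered root frequencies for a I-V-IV progression.
# The tonic (F4) and octave are assumed, not sourced from the track.
A4 = 440.0

def note_freq(semitones_from_a4: int) -> float:
    """Frequency a given number of equal-tempered semitones from A4."""
    return A4 * 2 ** (semitones_from_a4 / 12)

# Scale degrees I, V, IV as semitone offsets from the tonic.
degrees = {"I": 0, "V": 7, "IV": 5}
tonic = note_freq(-4)  # F4, four semitones below A4 (assumed key)
progression = [tonic * 2 ** (degrees[d] / 12) for d in ("I", "V", "IV")]
```

Each root lands in a distinct band of the spectrogram, which is why the repeating progression reads as a shifting score of colored frequency bands.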

 
 
 
 
 
 
 


I-V-IV17: Video Score for Spectral Audio Editing Software & Isolated Guitar Power Chords from The Who’s “Baba O’Riley,” 2014. Pencil and charcoal on paper, 11 x 14 inches

 
 
 
 
 
 
 


Swinger Data | Swinger Data_spectrogram_~08:23.509-14:37.023, 2014. Audio file in AIFF format | Spectrogram in TIFF format | on thumb drive, (Excerpt) 12:08 of 41:56 minutes | 2552 × 1250 pixels

 

Swinger Data began as a field recording of a pre-recorded liquidation advertisement on loop outside an electronics store in Midtown Manhattan. The store’s closing sale was being broadcast with the same wildly exaggerated enthusiasm that normally accompanies a grand-opening sale, which was like listening to strategic effect announcing its ineffectiveness as strategy. This got me thinking. So here was this ad eating itself, glitching away in some nonsensical feedback loop – but in relation to what? The expanding as-of-yet that we experience as reality, which is somehow supposed to be more logical, more stable in contrast? Trying to grasp the situation was like being in some kind of zero-coordinate perceptual gap, listening to senselessness as it infinitely reflects into a reality it can never catch up with, while at the same time experiencing reality as it infinitely reflects into senselessness that already was. Pure mind-rattling joy.
 
What Swinger Data became, then, is a celebration of this perceptual cavity. Every single sonic bit of the field recording – its recursive grammar, surrounding traffic, resonances and noise in feedback within feedback within feedback between microphone and source – was hacked further into self-referential nothingness; reassembled; disassembled; then reassembled again. The task was performed using spectral processing software. Like an audio prism, this software can divide a sound wave into the elemental vibrations that constitute it. BUT, and this is the truly wonderful part, as with any fine mechanism for measuring quanta, while the output phenomena can somehow be in two places at once, the input method, as of yet, cannot. In order to act upon the continuum of the wave, the software must either contract space or expand time – it cannot capture sound spectra in the present tense. In other words, dig into the hole all you want, but you’ll always only be left with what’s on your shovel and what you toss to the side. A loop for a loop and a cavity for a cavity. So the swinger data now traveling between your ears are not of the void, but sonic layers of distorted time and diminished space intersecting layers of constricted time and extended space, which shape its boundaries, and signal perception.
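The constraint described above – that spectral software must contract space or expand time, and cannot capture sound spectra in the present tense – is the short-time Fourier transform's resolution trade-off. A minimal sketch, with the sample rate and window sizes chosen only for illustration:

```python
# STFT resolution trade-off: for a window of N samples at rate fs,
# time resolution is N/fs seconds and frequency resolution is fs/N Hz,
# so their product is fixed at 1 -- sharpening one blurs the other.
fs = 44_100  # samples per second (an assumed rate)

def resolutions(window: int):
    """Return the (time, frequency) resolution of an STFT window, in s and Hz."""
    return window / fs, fs / window

for n in (256, 4096):
    dt, df = resolutions(n)
    # a longer window sharpens frequency but blurs time, and vice versa
    assert abs(dt * df - 1.0) < 1e-9
```

Dig into the hole all you want: widening the window hands you more frequency on the shovel, but the time you tossed aside is gone, and vice versa.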