Between the Real and the Virtual

Week 2: January 21 – 27 (Art-B1-14)

A review of historical theatrical concepts: the suspension of disbelief and the construction of illusion, the fourth wall, Wagner’s gesamtkunstwerk (total artwork), the integration of the arts, immersion, and the blurring of physical and virtual space within the context of theater and performance. A broader discussion of remapping physical and sonic spaces through media, theatrical architectures, and the reconfiguration of the traditional proscenium. A review of the historical use of media in performance, including Happenings, electronic theater, satellite projects, and installation, leading to new paradigms in contemporary media and performance, with an emphasis on the use of the network. There will be an in-depth review of the technical requirements of the WordPress multi-site environment, Adobe Connect webcasting, and specific software applications for the micro and final projects. Continued discussion of ideas and possibilities for the final project.

Schedule / Assignments

  • Tuesday, 1/21: Class meeting (8:00 PM – 11:00 PM, Adobe Connect)
    • Review of micro-project 1: Video Double
    • WordPress account setups and review of basic functionality: editing posts, media integration, categories, tagging, excerpts, themes, etc.
    • Overview of the Adobe Connect web-conferencing system:
      • Establishing accounts / login / download and install add-in
      • Use of microphone / webcam / audio setup wizard
      • Testing connection, upload/download, video bandwidth
      • Interface: chat, web links, presentation, etc.
    • Flickr: creating an account, joining the group, and adding images to the group
    • Twitter: creating an account and testing the feed with the #ossntu hashtag
    • Facebook: the OSS Facebook page for sharing, etc.
    • Overview of the core interactive software tools: Ableton Live, VDMX, and Lemur
    • Presentation / discussion of the real and the virtual
    • Discussion of the project hyperessay and final project
  • Read Hyperlecture: Week 3 (due January 28)
  • Critique the assigned work in a short blog post with media illustration, filed under the Research category (approx. 150-250 words with media) (due January 28)

OSS WordPress Setup & Review

Each student will be assigned a WordPress site that is part of the Open Source Studio networked multi-site. Each student site is a part of the whole, such that all postings are aggregated into the main site. Students will set up their accounts and learn the fundamentals of working with their sites: media integration, editing posts, and assigning tags and categories. The study of WordPress will continue throughout the semester. See the User Manual for additional information.

  • User Settings: the avatar, name, email address
  • General Settings: site name, tag line
  • Reading Settings: blog posts / static page as home page
  • Assigned Categories: Research, Micro-Project, Project Hyperessay
  • Assigning Tags: strategies for choosing keywords as tags
  • Editing Posts: Editor, Categories, Tags, Featured Image, Draft, Published, Update, Excerpt, Vimeo embed, Embedded Hyperlink, Preview, View Post
  • Themes: for now, use the default Twenty Fourteen (2014) theme; theme choice is restricted by the multi-site until additional themes are added

Adobe Connect Setup & Review

Adobe Connect is web-conferencing software that will serve as the virtual classroom for Media & Performance. We will review the login procedure, the add-in download and installation, the interface and its functionality, presentation capabilities, web linking, and the chat room. Each student will become proficient with Adobe Connect in order to use it effectively for class discussion and presentation. Adobe Connect can be accessed at the following location: https://meet47208930.adobeconnect.com/_a945444840/oss. See the User Manual for additional information.

  • Login as Guest
  • Downloading the Adobe Connect add-in
  • Review of the Interface: chat, video, links, presentation
  • Uploading content to Adobe Connect
  • Managing the webcam and microphone
  • Using headphones

Core Interactive Software Tools

A broad, conceptual overview of Ableton Live, VDMX, and Lemur, the core interactive software tools for Media & Performance. While students are free to choose the software applications that best fit their projects, this suite of tools provides an integrated environment for organizing and manipulating visual and audio media for live performance projects.

Ableton Live

Ableton Live is interactive music software based on the paradigm of short clips and looping functionality. With Ableton, it is possible to build an audio piece sculpturally: adding and subtracting from the composition while the work is running, live and in real time. In addition to the conventional linear timeline (which it also includes), Ableton provides a clip-based interface where audio and MIDI clips can be loaded, manipulated, and performed.
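The "sculptural" feel of this clip paradigm comes largely from launch quantization: a clip triggered mid-bar does not start immediately but waits for the next bar boundary, so everything stays locked to the grid. Here is a minimal sketch of that idea in Python (my own illustration of the concept, not Ableton's actual implementation):

```python
import math

def next_launch_beat(current_beat: float, beats_per_bar: int = 4) -> float:
    """Beat position of the next bar boundary, where a clip triggered
    now would actually begin playing."""
    return float(math.ceil(current_beat / beats_per_bar) * beats_per_bar)

# A clip triggered at beat 5.3 of a 4/4 session starts at beat 8.0,
# the downbeat of bar 3, which keeps live-launched loops on the beat.
print(next_launch_beat(5.3))  # -> 8.0
```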

VDMX

VDMX is a similar application for performing live visual media. It also uses the clip paradigm, in which fragments of video, graphics, and animation can be stored for real-time manipulation. Live camera inputs can also be integrated into the compositional process. The interface provides the means to organize live and pre-recorded media, where temporally based effects and processes can be assigned.

Lemur

Lemur is an iPad or iPhone app that provides wireless control of musical (or visual) software such as Reaktor and UltraLoop. With Lemur, it is possible to perform live on stage or in the studio without being tied to a computer. There are many creative uses for wirelessly controlling music and visuals, and we will be exploring those possibilities throughout the semester. For now, we will review how the Lemur app is configured to send both MIDI (Musical Instrument Digital Interface) and OSC (Open Sound Control) data in order to create your own custom interface that can control most aspects of the music software. The app is fully customizable, such that an interface can be designed specifically for a composition, using faders, sliders, knobs, and buttons to trigger events.
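To make the OSC side concrete, here is a minimal receiving sketch in Python using the third-party python-osc library (pip install python-osc). The address /fader1 and port 8000 are assumptions; they would need to match the OSC target configured in your Lemur project:

```python
# Minimal OSC receiver sketch: listen for a single Lemur fader.
# The address "/fader1" and port 8000 are assumptions that must
# match the OSC settings in the Lemur project.
from pythonosc.dispatcher import Dispatcher
from pythonosc.osc_server import BlockingOSCUDPServer

def on_fader(address, value):
    # Lemur faders send floats between 0.0 and 1.0 by default.
    print(f"{address} -> {value:.3f}")

dispatcher = Dispatcher()
dispatcher.map("/fader1", on_fader)    # handle one named control
dispatcher.set_default_handler(print)  # log any other incoming message

server = BlockingOSCUDPServer(("0.0.0.0", 8000), dispatcher)
server.serve_forever()                 # stop with Ctrl-C
```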

Audio-Visual Tempo Synchronization

Multimedia production requires both the flexibility to experiment and a systematic approach to working with complex audio-visual software tools. For this presentation, I am going to give an overview of a typical workflow that results in a real-time audio-visual composition, in which music and visuals are synchronized to the musical beat. The presentation is intended to cover important concepts and working methodologies; it is not intended to be a lab session with detailed instructions. Rather, the presentation sets the stage for an understanding of working with audio-visual tools, which we will cover throughout the semester. For those who are particularly interested in developing a project of this kind, I will put together a working group to participate in a series of lab sessions in which we will discuss Ableton Live, VDMX, and Lemur.
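As a preview of how the tempo synchronization works under the hood, the entire mechanism rests on MIDI clock: 24 pulses per quarter note sent from a master (normally Ableton Live itself) to any application that should follow the beat. The following Python sketch, using the third-party mido library with a hypothetical port name, just makes the timing arithmetic visible; in the actual workflow Ableton is the clock master and no custom code is needed:

```python
import time
import mido  # pip install mido python-rtmidi

BPM = 120
# MIDI clock is 24 pulses per quarter note, so at 120 BPM a pulse
# goes out every 60 / 120 / 24 = ~0.0208 seconds.
PULSE_SECONDS = 60.0 / BPM / 24

out = mido.open_output("IAC Driver Bus 1")  # hypothetical port name
out.send(mido.Message("start"))             # tell followers to begin
try:
    while True:
        out.send(mido.Message("clock"))
        time.sleep(PULSE_SECONDS)  # a real clock would correct for drift
except KeyboardInterrupt:
    out.send(mido.Message("stop"))
```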

Sound Performance

Often you will want to generate interesting sounds that can be brought into the composition process in Ableton Live. While Ableton includes numerous musical instruments, it is important to create your own sonic forms and materials so as not to simply replicate typical musical compositions from the sounds provided in the software. For this demonstration, I am using Native Instruments Reaktor with an ensemble instrument called UltraLoop, created by Twisted Tools. While Reaktor is a very advanced engine for producing electronic synthesis and sampling instruments, there are many instruments created for the software that are quite easy to use and are capable of generating very complex sounds.

Creating New Sounds with UltraLoop

UltraLoop is a new Reaktor ensemble created by Twisted Tools. It is a musical sampler that allows you to load layers of sounds that can be selectively organized into a complex variety of rhythmic patterns. UltraLoop has a great deal of processing capability, in which the looping sounds can be manipulated in real-time while the patterns are being generated. For this example, I am creating patterns that are recorded by the host application, Reaktor. The resulting recording will have a set tempo, which can then be brought into Ableton Live where additional tracks are added. Also, for this presentation I am controlling UltraLoop from my iPad, using the Lemur app, which is detailed below.

Here is an example of UltraLoop in action:

Wireless Control with the iPad and Lemur App

[Screenshot: UltraLoop]

For this presentation, I am using a custom interface designed by Antonio Blanca specifically for UltraLoop. The setup for using the Lemur interface is somewhat complex, so I will be reviewing the details of the configuration in future presentations. For now, note how the Lemur interface for UltraLoop responds to the computer software, and how the interface on the app sends control data back to the computer. Nearly every feature of UltraLoop can be manipulated from the app, including the position of sample playback, effects, and other parameters.

Here is a promotional video for Lemur to give you an idea of how it utilizes the iPad interface:

Musical Composition with Ableton Live

Once you have compiled sounds you would like to work with, you can incorporate the material into sequencing software such as Ableton Live, where the sound can be organized, layered, mixed, and recorded as a composition or for live performance. Ableton Live is ideally suited as a live compositional tool due to its ability to incorporate real-time methods into the compositional process. Using a system of clips and tracks, audio files, synthesized and sampled sound, effects processing, and multi-track mixing can be combined while the sequencer is running, unlike in most linear, timeline-based digital audio workstations (DAWs).

In the example in class, I am taking the sounds I generated in UltraLoop, all tempo-based, and layering them with additional musical elements: melodic lines, percussion, and effects processing. Since the original UltraLoop files can be brought into Ableton Live and matched to musical time, it is very easy to add other musical layers at the same tempo.
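The arithmetic behind "matched to musical time" is simple enough to spell out. This small sketch (my own illustration of the warping math, not anything Ableton-specific) computes how long a loop lasts at its recorded tempo and the playback-rate factor needed to fit a different session tempo:

```python
def loop_seconds(bars: int, bpm: float, beats_per_bar: int = 4) -> float:
    """Duration in seconds of a loop recorded at a known tempo."""
    return bars * beats_per_bar * 60.0 / bpm

def playback_rate(source_bpm: float, session_bpm: float) -> float:
    """Speed factor that makes a loop recorded at source_bpm
    sit exactly on the grid of a session running at session_bpm."""
    return session_bpm / source_bpm

# A 4-bar loop recorded at 100 BPM lasts 9.6 seconds; played back
# at 1.2x speed, it locks to a 120 BPM Ableton session.
print(loop_seconds(4, 100))     # -> 9.6
print(playback_rate(100, 120))  # -> 1.2
```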

Visual Composition with VDMX

As a precursor to our study of the space between the real and the virtual, I have situated myself in that space in this visual composition created in VDMX. By synchronizing the MIDI clock between Ableton Live and VDMX, I am able to create rhythmic effects that are controlled by measure length, beats, and even live audio analysis. In other words, the visuals respond to the frequency of the beats as well as the loudness of the music. Various forms of what is referred to as visual music have been evolving throughout the 20th and 21st centuries: from early animation to the VJ performances that have become popular today in clubs and theaters.
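To make the audio-analysis idea concrete, here is the math underneath it: each incoming audio buffer is reduced to a single loudness value, which can then drive a visual parameter such as brightness. VDMX does this internally; the sketch below is only an illustration, and the gain constant is an arbitrary assumption:

```python
import numpy as np

def rms_level(buffer: np.ndarray) -> float:
    """Root-mean-square loudness of one audio buffer whose samples
    are normalized to the range [-1.0, 1.0]."""
    return float(np.sqrt(np.mean(np.square(buffer))))

def brightness(buffer: np.ndarray, gain: float = 4.0) -> float:
    """Map loudness to a 0-1 visual parameter (gain is an assumption)."""
    return min(1.0, rms_level(buffer) * gain)

# A quiet buffer barely lights the visuals; a loud one saturates them.
t = np.linspace(0, 40 * np.pi, 1024)
print(brightness(0.05 * np.sin(t)))  # ~0.14
print(brightness(0.8 * np.sin(t)))   # 1.0
```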

Hyperlecture: Week 2 – Between the Real and the Virtual

“In one way or another, all digital artworks and environments are concerned with possible relationships between the physical space and the virtual, and what distinguishes them are the balance between these two realms and the methods deployed to translate one space into the other. Some artworks try to translate qualities of the virtual world into the physical environment, others strive to map the physical into the virtual; and yet others are aimed at fusing the two spaces.” – Christiane Paul

“This sense of in-between-ness – a liminal space operating between the screen image and live performers” – Steve Dixon

Origins of the Real and the Virtual

The blurring of the real and the virtual can be traced back to the origins of theater and the idea of the suspension of disbelief, in which the audience loses itself in the virtual world of the stage through the combined mechanisms of stage media. This idea, the believability of the illusion of the stage (or screen), informs our understanding of present-day interactive multimedia.

Richard Wagner and the Gesamtkunstwerk (Total Artwork)

“Whereas the public, that representation of daily life, forgets the confines of the auditorium, and lives and breathes now only in the artwork which seems to it as Life itself, and on the stage which seems the wide expanse of the whole World.” – Richard Wagner

The opera composer Richard Wagner (1813 – 1883) explored the concept of the gesamtkunstwerk (total artwork) to create a form of theater that constituted a synthesis of the arts, in which music, drama, poetry, stagecraft, etc., were used to establish a believable, imaginary world on the stage. He built his own theater, the Festspielhaus (Festival House), in Bayreuth, Germany, in 1876 to reinvent opera as a fully immersive experience. The conventions of the Festspielhaus revolutionized music theater: clearing the stage of musicians by placing them in the orchestra pit; lowering the lights and darkening the house; surround-sound reverberance; and the revival of Greek amphitheater seating to focus audience attention on the stage. The interface of the proscenium theater thus transported the audience into a fully believable virtual world.

You can take a virtual tour of Wagner’s Festspielhaus with his grandson, Wieland Wagner, who discusses the design of the Festspielhaus and how Wagner created his operas specifically for the characteristics of the theater:

Variations V, John Cage (1965)

“John Cage made Variations V for the Merce Cunningham Dance Company. My colleagues and I set up a system of photocells so that the dancers switched the sounds as they cut the light beams with their movements. In this performance at the Philharmonic Hall at Lincoln Center on July 1964, Stan VanDerBeek showed films and Nam June Paik manipulated projected television images. The center for the new dance in the early 60s was the Judson Church with Trisha Brown, Yvonne Rainer, Steve Paxton, Lucinda Childs and others. Yvonne wanted to use the sound of her own breathing as she was dancing. She wore a contact mike on her throat that picked up the sound of her breathing. We made a small FM transmitter that we attached to her belt. The transmitter relayed the signal to the speakers.” – Billy Klüver

As a musician, composer, artist, poet, and philosopher, John Cage rarely worked within the traditional boundaries of artistic practice. In the late 1940s, during a residency at Black Mountain College, he developed his provocative “theater of mixed-means” in collaboration with the artists Robert Rauschenberg and Jasper Johns, and the choreographer Merce Cunningham. These experiments gave birth to an explosion of performance art in the 1950s and 1960s that introduced all types of actions, artifacts, noises, images, and movement into the performance space, as in his own electronic theater work, Variations V (1965).

The anarchic nature of Cage’s work, with its bold acceptance of indeterminacy (chance) as an integral part of composition, later encouraged the composer to extend this newfound freedom to include the participation of the audience. Cage, inspired by Zen Buddhism, revels in an anarchy that dethrones the artist as the heroic, all-powerful arbiter of creative expression. He proposes instead a shift to an inclusive, participatory art that encourages interaction between artist, performer, and audience.

Variations V demonstrates interaction between the live performers and the projections. A transmitter was placed at center stage, emitting a light beam that was tripped by the movement of the dancers to control the projections, in addition to the vocal sounds mentioned above. Rather than forcing the performers to follow stage changes and visual elements, the visual material followed the performers, giving them control through their movement. This interaction, which alters the hierarchy of the relationship between performer and staging, also applies to the shifting hierarchy of the relationship between the viewer and an interactive media work.

This video was created by Stan VanDerBeek as a documentation/collage of Variations V, superimposing performance footage and found material:

The Pepsi Pavilion, E.A.T. (Experiments in Art and Technology)

“The initial concern of the artists who designed the Pavilion was that the quality of the experience of the visitor should involve choice, responsibility, freedom, and participation. The Pavilion would not tell a story or guide the visitor through a didactic, authoritarian experience. The visitor would be encouraged as an individual to explore the environment and compose his own experience.” – Billy Klüver

The Pepsi Pavilion was created by over 60 collaborating artists and engineers as an immersive, multimedia performance space. Of particular concern to our study of the real and the virtual is the way in which the Pavilion combined “real images” from a 210-degree spherical mirror, integrating them with live performance and the interaction of the viewer. Klüver’s comment above regarding the freedom of participation is integral to our understanding of how interactive multimedia engages the viewer as a performer who can determine their own actions in relation to the work. The following is a description of the Pepsi Pavilion by Billy Klüver:

The last project I will describe in detail is the Pepsi Pavilion. In late 1968, Pepsi-Cola approached E.A.T. to design and program a Pavilion for Expo ’70 in Osaka, Japan. Robert Breer and I chose Robert Whitman, Frosty Myers and David Tudor to work on the first design. The roof of the Buckminster Fuller-style geodesic dome was covered by a water vapor cloud sculpture, designed by Fujiko Nakaya. When fully operational, the fog system was capable of generating a 6-foot-thick, 150-foot-diameter area of fog. And on the terrace were seven of Robert Breer’s Floats, six-foot-high sculptures which moved around at less than 2 feet per minute, emitting sound. The cloud was produced when water under pressure of 500 psi was pushed through jet-spray nozzles and broken up into water drops small enough to remain suspended in air. Strands of nozzles were installed in the ridges and valleys on the top section of the roof. The system used 2520 jet-spray nozzles.

The three-legged black poles are part of Frosty Myers’ Light Frame sculpture. Four such poles of different heights were set in a square 130 feet apart at each corner of the Pavilion plaza. At the top of each pole were two 500-watt, high-intensity xenon lights. Each light was directed toward the light of the neighboring tower, creating a very narrow pencil beam of light between each tower. This created a well-defined tilted square of white light framing the Pavilion at night. Robert Breer’s Floats moved slowly around the plaza. When they hit an obstacle or were pushed, they would reverse direction. A battery-operated tape recorder inside each Float played low sounds like sawing, a group of people describing a view in English, a truck starting up and driving away, humpback whale songs. The kids loved them, as you can see from the slide on the left.

The visitor entered through a tunnel and descended into a dark clam-shaped room lit only by moving patterns of laser light. The Clam Room is where people first entered the pavilion. Lowell Cross designed the laser deflection system which used the four colors from a krypton laser. The highly sensitive mirrors in the system could vibrate up to rates of 500 cycles per second and were activated from the sound system in the mirror dome.

Upstairs, the main space of the Pavilion was a 90-foot-diameter, 210-degree spherical mirror made of aluminized mylar. The artists conceived of this space as a performance area that could be used by many visiting artists during Expo ’70. The mirror fulfilled all our expectations. Our architect John Pearce devised an ingenious way to fit the mylar mirror inside an airtight cage structure. A slight vacuum of less than 1/1000 of an atmosphere, which could be handled by a couple of good-sized fans, would be sufficient to hold up the mirror. By having a negative-pressure air structure, there was no need for cumbersome air locks. The optical effect of a spherical mirror producing a real image resembles that of a hologram. The difference is that, because of the size of our mirror, a spectator looking at an image could walk around the image and see it from all sides. The space in the mirror was gentle and poetic, rich and always changing. It was complex in spite of its simplicity. We discovered new and complicated optical effects every day. The slide to the right is projected upside down. Once the visitors could see themselves as a real image in the mirror, the reaction was incredible. It created much more excitement than we ever could have expected.

David Tudor designed the sound system as an “instrument” with 32 inputs and 37 speakers arranged in a rhombic grid on the surface of the dome behind the mirror. Sound could be moved at varying speeds linearly across the dome and in circles around the dome. Sound could be shifted abruptly from any one speaker to any other speaker, creating point sources of sound. All speakers were capable of giving out the same sound. The lights and sound could either be pre-programmed or controlled in real time by the artists from a console at one side of the dome. The floor was divided into 10 areas made up of different materials, such as astroturf, rough wood, slate, tile, and asphalt. Through handsets, like the one held by the boy in the left slide, visitors could hear specific sounds on each different floor material. On the tile floor: horses’ hooves and shattering glass; on the astroturf: ducks, frogs, cicadas, and lions roaring. These sounds were transmitted from wire loops embedded in the floor. Twenty 100-turn wire coils, or loops, 1 foot in diameter were embedded under each of the floor sections and were fed by tape recorders. The low-frequency magnetic field they generated was picked up and amplified through handsets the visitors carried. The innovation in this system was the use of a large number of coils for each area to obtain an even distribution of the sound and not have sound spill over to another area.

The following is a documentary created by Eric Saarinen on the making of the Pepsi Pavilion:

Laurie Anderson, Home of the Brave (1986) & Puppet Motel (1995)

“You’re walking and you don’t always realize it, but you’re always falling. With each step you fall forward slightly and then you catch yourself from falling. Over and over you’re falling and then catching yourself from falling. This is how you can be walking and falling at the same time.” – Laurie Anderson

In her early work, Laurie Anderson was a pioneer in the transformation of the theatrical space with projections, interactive music, gesture, movement, and poetry: a true synthesis of the arts for the digital age. Home of the Brave is a 1986 American concert film directed by and featuring the music of Laurie Anderson.

In the 1990s, with the advent of the CD-ROM, Laurie Anderson transposed the stage to the screen. In 1995 she created Puppet Motel specifically for the medium, in collaboration with the multimedia artist Hsin-Chien Huang. Here the stage is the screen and its graphical user interface, a metamedium for a new kind of synthesis of the arts: an entirely digital world of animation, sound, graphics, text, music, and, most importantly, interactivity. The viewer no longer sits passively in the audience; the viewer is a “performer” traversing the imaginary worlds created by Anderson. Puppet Motel is divided into 32 rooms, small vignettes based on previous works and trademark Anderson motifs: the clock, the airplane, the plug socket, the telephone, and other objects of technology. Puppet Motel is a new form of music-theater where the audience is on stage, controlling the flow of time and space in zero gravity.
