geert on Thu, 14 Feb 2002 03:52:01 +0100 (CET)



[Nettime-bold] Sensing Speaking Space: Legrady/Pope at the San Francisco Museum of Modern Art


From: "george legrady" <[email protected]>
Sent: Wednesday, February 13, 2002 4:23 PM
Subject: Sensing Speaking Space: Legrady/Pope at the San Francisco Museum of
Modern Art

Please distribute:

Friday and Saturday, February 15 and 16, 2002
The San Francisco Museum of Modern Art Phyllis Wattis Theater
Co-presented by 23five Incorporated and The SFMOMA Department of Media Arts

February 15: Evening performances by Sensorband, Paul DeMarinis, and Scott
Arford; a keynote lecture by George Legrady; and 'Sensing/Speaking Space,'
an interactive installation by George Legrady, Stephen Pope, Andreas
Schlegel, Gilroy Menezes, and Gary Kling, a collaborative project with the
Media Arts & Technology program, University of California, Santa Barbara.
'Sensing/Speaking Space' will be on view in the San Francisco Museum of
Modern Art's Schwab Room on the evening of February 15 and during gallery
hours on February 16, 2002.

____________________________________________________________________________

'Sensing Speaking Space'
George Legrady, Visual artist
Stephen Pope, Sound composition
Andreas Schlegel, Macromedia Visualization Design
Gilroy Menezes, Camera motion tracking software
Gary Kling, Network protocols

'Sensing/Speaking Space' is an interactive digital media installation: a
real-time feedback environment in which visuals and sound are generated to
represent the presence and movement of spectators within a public space such
as a museum or shopping center. The interaction focuses on the notion of the
'intelligent space,' a space that knows you are there and reacts to your
presence and movements through a custom camera-tracking system. The
installation can accommodate anywhere from 1 to 20 spectators simultaneously.
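
The custom tracking software itself is not described in this announcement.
Purely as a hypothetical illustration of the general idea, the Python sketch
below (using OpenCV as a stand-in, not the actual system) shows how simple
frame differencing can count and locate moving spectators in a camera image:

    import cv2

    cap = cv2.VideoCapture(0)   # any camera looking onto the space
    prev = None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.GaussianBlur(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), (21, 21), 0)
        if prev is None:
            prev = gray
            continue
        # Difference against the previous frame marks regions where people moved.
        diff = cv2.absdiff(prev, gray)
        mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)[1]
        contours, _ = cv2.findContours(
            mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        # Each sufficiently large blob is treated as one moving spectator.
        blobs = [cv2.boundingRect(c) for c in contours
                 if cv2.contourArea(c) > 500]
        print(len(blobs), "moving regions:", blobs)
        prev = gray

    cap.release()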

The event will be an installation of real-time interaction generating visuals
and sound, ranging from stereo to a six-channel system. The visualization
will develop through multiple layers, beginning with the simplest: basic
layered shapes animated to behave organically (primal cell growth), and
eventually reaching 'culture', i.e. language or texts functioning as visual
texture. These layers result from the audience's movement and also generate
sound events.
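
How the layers are actually sequenced is not specified here. As a loose
sketch only (layer names and thresholds invented for illustration),
accumulated audience movement could drive the progression from primal cells
toward text:

    # Hypothetical mapping from accumulated audience activity to visual layers.
    LAYERS = ["primal cells", "layered shapes", "organic growth",
              "text fragments", "full text texture"]

    def choose_layer(activity: float) -> str:
        """Pick a visual layer from cumulative activity (0.0 = empty room)."""
        index = min(int(activity // 10), len(LAYERS) - 1)
        return LAYERS[index]

    activity = 0.0
    # Per-frame motion magnitudes as they might arrive from the tracker.
    for movement in [2.0, 5.5, 8.0, 12.0, 20.0]:
        activity += movement
        print(f"activity={activity:5.1f} -> layer: {choose_layer(activity)}")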

Stephen Pope's sound composition is based on a database of 20,000 words
broken into phonemes (Stephen Pope's archive), orchestrated in multiple modes
through software developed with SuperCollider. These phonemes are called into
action according to a set of defined rules (the composition), enacted in
response to the presence and movements of the audience and spread across the
museum space through a six-channel sound system.
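
The actual rules are implemented in SuperCollider and are not given in this
announcement. The following Python sketch is only a hypothetical illustration
of the idea: tracked spectator positions and speeds are mapped to phoneme
events panned across six channels.

    CHANNELS = 6  # speakers spread across the museum space

    # A tiny phoneme pool standing in for the 20,000-word archive.
    PHONEMES = ["ka", "to", "mi", "ra", "es", "un", "lo", "shi"]

    def phoneme_events(spectators):
        """One rule of the 'composition': each spectator triggers a phoneme
        on the speaker channel nearest their horizontal position (0.0-1.0)."""
        events = []
        for x, speed in spectators:
            channel = min(int(x * CHANNELS), CHANNELS - 1)
            # Faster movement selects phonemes further into the pool.
            phoneme = PHONEMES[min(int(speed * len(PHONEMES)),
                                   len(PHONEMES) - 1)]
            events.append({"channel": channel, "phoneme": phoneme,
                           "amp": 0.2 + 0.8 * speed})
        return events

    # (x position, normalized speed) pairs as they might come from the tracker.
    print(phoneme_events([(0.1, 0.2), (0.55, 0.9), (0.8, 0.05)]))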

The event, or 'dramaturgy' or narrative, will function on multiple levels or
mood changes based on any number of factors: the number of spectators in the
space and their movements, the cumulative number of people who have visited
the installation, a history of their actions, progressive changes over the
duration of the installation event/evening, etc. In the end, the focus is on
the relationship between the audience's presence and its circumstance,
generating a visual/aural event and feedback interaction.
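
None of these rules are spelled out in the announcement. As a hypothetical
sketch of such a multi-level 'dramaturgy', a mood could be derived from the
current audience size, the cumulative visitor count, and a rolling history:

    from collections import deque

    class Dramaturgy:
        # Invented mood names, for illustration only.
        MOODS = ["dormant", "awakening", "conversational", "saturated"]

        def __init__(self):
            self.total_visitors = 0
            self.recent_counts = deque(maxlen=100)  # rolling audience history

        def update(self, current_count: int) -> str:
            previous = self.recent_counts[-1] if self.recent_counts else 0
            # Count increases in audience size as new arrivals.
            self.total_visitors += max(0, current_count - previous)
            self.recent_counts.append(current_count)
            average = sum(self.recent_counts) / len(self.recent_counts)
            if current_count == 0:
                return self.MOODS[0]
            if average < 3:
                return self.MOODS[1]
            if self.total_visitors < 200:
                return self.MOODS[2]
            return self.MOODS[3]

    d = Dramaturgy()
    for n in [0, 1, 2, 5, 8, 12]:
        print(n, "->", d.update(n))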

This project follows a series of related investigations into advanced use of
databases, intelligent data-organizing algorithms, and multi-user real-space
interaction. The production component will take place at the University of
California, Santa Barbara in conjunction with the Media Arts & Technology
graduate program (www.mat.ucsb.edu).


_______________________________________________
Nettime-bold mailing list
[email protected]
http://amsterdam.nettime.org/cgi-bin/mailman/listinfo/nettime-bold