
in between

For most of human evolution, communication through gestures, emotional sounds, and spoken language was possible only in close proximity; across distance, messages remained abstract thoughts or simply went unsaid. Today's infrastructure, from the written letter to instant messaging and social media, lets us communicate at any distance and take more time to craft content that expresses what we think and feel. In close proximity, by contrast, communication often stalls: we struggle to find the right words on the fly or are overwhelmed by noise, so some people communicate more easily remotely than face-to-face.

However, remote communication has its drawbacks. Much information is conveyed through subtext. Without the emotional cues of face-to-face
communication, misunderstandings can occur, especially when communicating in a foreign language.

We aim to make this tension between distance and presence tangible. The clothing links individuals and measures their proximity: piezoresistive wires embedded in the garment gauge its expansion, and these readings drive specific sound parameters. Through granular synthesis, we blend recorded audio from a crowded environment with distinct spoken words. By moving the sound within an Ambisonics sphere, we alter the listener's ability to locate the sound source until the wearers find the garment tension that focuses the voices into a single point.
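The grain-level blend between crowd noise and spoken words can be sketched as follows. This is an illustrative sketch only: the function, grain length, and hop size are assumptions, since the installation performs granular synthesis inside the MaxMSP patch, not in Python.

```python
import math
import random

def granular_blend(crowd, voice, mix, grain_len=256, hop=128, seed=0):
    """Blend two mono buffers with a simple granular crossfade.

    `mix` in [0, 1]: 0 -> only crowd noise, 1 -> only the spoken voice.
    Grains are taken from random positions in each source, Hann-windowed,
    and overlap-added at a fixed hop size. (Sketch of the idea; the actual
    work uses granular synth objects in Max/MSP.)
    """
    rng = random.Random(seed)
    n = min(len(crowd), len(voice))
    out = [0.0] * n
    window = [0.5 - 0.5 * math.cos(2 * math.pi * i / (grain_len - 1))
              for i in range(grain_len)]
    for start in range(0, n - grain_len, hop):
        # pick a random grain position in each source buffer
        c0 = rng.randrange(0, n - grain_len)
        v0 = rng.randrange(0, n - grain_len)
        for i in range(grain_len):
            g = (1.0 - mix) * crowd[c0 + i] + mix * voice[v0 + i]
            out[start + i] += window[i] * g
    return out
```

Sweeping `mix` from 0 to 1 as the garment tension changes would fade the texture from undifferentiated crowd noise toward the distinct voice.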


Our garment consists of crocheted tubes spanning between people. One person wears one end of a tube fixed to the chest, where the electronics sit in a hidden pocket, while the other end is held in the other person's hand; alternatively, the second end can be attached to the other person's leg or arm. The sensors themselves are piezoresistive material embedded in the crocheted tubes and connected to the board in the pocket. They report the distance between the people by measuring the stretch within the tube, and woven-in elastics retain the tension needed to measure the distance even when the people are close to each other.
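A piezoresistive stretch sensor is typically read as one leg of a voltage divider: as the tube stretches, the sensor's resistance changes and the divider voltage shifts. The sketch below illustrates that conversion; the resistor values, ADC range, and calibration endpoints are assumptions for illustration, not the values used on our board.

```python
def stretch_from_adc(adc_counts, adc_max=1023, r_fixed=10_000.0,
                     r_relaxed=20_000.0, r_stretched=5_000.0):
    """Convert an ADC reading of a piezoresistive divider into a 0..1 stretch.

    Assumed circuit: the sensor sits on the high side of a divider with a
    fixed resistor, so the board measures
        v = Vcc * r_fixed / (r_fixed + r_sensor).
    All component values here are illustrative placeholders.
    """
    v = adc_counts / adc_max              # fraction of supply voltage
    if v <= 0.0 or v >= 1.0:
        raise ValueError("reading outside the divider's valid range")
    r_sensor = r_fixed * (1.0 - v) / v    # invert the divider equation
    # normalize between relaxed and fully stretched resistance
    s = (r_relaxed - r_sensor) / (r_relaxed - r_stretched)
    return min(1.0, max(0.0, s))
```

The normalized stretch value is what the patch can then map onto sound parameters.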

The audio implementation in MaxMSP consists of the main patch 'sensor_ambisonics_headphone.maxpat' and a set of helper patches. A sensor readout section receives data from two boards (/a0 on port 5000, /a7 on port 5001) and merges the two streams into the patch. Two granular synths provide the sound generation and are modulated by a GIMLeT machine-learning object. The sound is then spatialized by splitting it into four audio streams, which feed four spatializer blocks that position the streams as independent virtual sound sources in a 3D Ambisonics sphere. To convey both distant noise and face-to-face presence, the patch shifts between sources evenly distributed around the listener, which makes localization difficult, and all sources focused on one point near the listener.
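The shift between evenly distributed and focused sources can be sketched as an interpolation of source azimuths on the listener's horizontal circle. The function and its parameters below are illustrative assumptions; in the installation itself, the four Ambisonics spatializer blocks in the Max patch perform this positioning.

```python
import math

def source_positions(focus, n_sources=4, focus_azimuth=0.0, radius=1.0):
    """(x, y) positions for n virtual sources around the listener.

    `focus` in [0, 1]: 0 spreads the sources evenly around the listener
    (hard to localize), 1 collapses them all onto one azimuth in front of
    the listener (a single, locatable point). Sketch of the idea only.
    """
    positions = []
    for k in range(n_sources):
        spread_az = 2 * math.pi * k / n_sources   # even distribution
        # shortest-path angular interpolation toward the focus azimuth
        delta = (focus_azimuth - spread_az + math.pi) % (2 * math.pi) - math.pi
        az = spread_az + focus * delta
        positions.append((radius * math.cos(az), radius * math.sin(az)))
    return positions
```

Driving `focus` from the normalized garment tension would move the listener continuously between the diffuse crowd and the single focused voice.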

Team and performers:

Clara Almeida, Alexander Maaß

Work in Progress

source: Clara Almeida, Alexander Maaß