Immaterial Architecture of Sound
This paper explores our perception of sound as a spatial event and in relation to architectural space. The creation of sonic spaces through the interrelation of space, time and movement is explained through examples. My ongoing research project, IS, is a sound space that aims to augment our perception of space by erasing the constraints of the physical space and creating a temporal, dynamic and evolving space of sound in which one sets out on a journey through time. Our cognition of space is shaped by our interaction and body movements. By notating movement in my project space, as an open process that extends over time and as a tool for the process of design, I explain the sonification of movement data and the audiovisual behaviour of the system, and examine the failings and findings of the interactivity. I discuss the importance of visual cues in this environment of sound, drawing on the multi-sensory nature of our perception and on audiovisual correspondences. I investigate sound spatialisation and auditory illusions as means to construct immersive spatial experiences by means of sound. Herein, I discuss embodiment, sonification, interaction and spatiality as fields relating to my research project, IS, which tries to answer the question ‘How do we interact with a sound space?’ through an evolving system that unfolds with user behaviour through time and space.
Keywords: Space, Sound, Movement, Sonification, Sound Spatialisation, Multisensory Perception, Interaction, Audiovisual Correspondences, Personality
Introduction
“Music is liquid architecture; architecture is frozen music.” (Goethe)
“I can hear with my knee better than with my calves.” (Leitner)
Our perception is shaped by the information we gather from our surroundings, processed through the multimodal combination of our senses and our personal experiences. Similarly, our audition is formed by perceptual, multi-sensory and spatial acts. Architectural space shapes our perception of sound significantly, both physically and contextually. The size, shape and acoustic properties of a physical space, as well as its function, shape our perception, just as sound attributes such as pitch, timbre, loudness and duration do. Over the years, architectural space has shaped music, influencing musical forms to fit the physical performing space; music has therefore always been site-specific. Consequently, architecture is sonic: space becomes part of the instrument that generates the sound and part of the composition played there.
“Sound is a spatial event, a material phenomenon and an auditive experience rolled into one. It can be described using the vectors of distance, direction and location. Within architecture, every built space can modify, position, reflect or reverberate the sounds that occur there. Sound embraces and transcends the spaces in which it occurs, opening up a consummate context for the listener: the acoustic source and its surroundings unite into a unique auditory experience.” (OASE, 2009)
Space and time are intrinsically linked, as our understanding of both notions is through movement – of our body, or of sound in space. As Schafer and Krebs (2003) describe, “A distance covered by a person — or a sound — in space is initially perceived as a temporal phenomenon, allowing a person to develop a subjective notion of space and time by measurement and comparison. Thus, movement in space becomes time. Time periods become spatial distances.” Space is defined by an experience of time, according to the movement of sound; sonic events therefore generate open, temporal, immaterial spaces. Sound in time is able to create notions of dimension and distance that embody space.
My ongoing research project, IS, tries to answer the question ‘How do we interact with a sound space?’. It aims to augment our perception of space by erasing the constraints of the physical space and creating a temporal, dynamic and evolving space of sound where one sets out on a journey through time.
Individuals have defined goals and rules in space, determined by their inner selves and their roles in society. Therefore, hearing the same piece of music will evoke different actions in each individual. On this assumption, IS is a personalised experience in which individuals compose their own piece of music and create a space from this composition through spatial sound localisation and visual representations of sound.
Figure 1: Diagram of IS
In IS, sound and visuals are driven by interaction. Briefly, movement data in the IS space is sonified, sound is translated into visuals and the visuals are the representation of the sound, interdependently, while the visuals also work as cues to stimulate movement. I am working on sound design and the user’s interaction in sonic spaces. Sound is spatialised through ambisonics and visuals are mapped through projection mapping, to alter the perception of the physical space by means of sound. The result is an immaterial architecture of sound that can be perceived through the movement of sound between speakers and through our own movement in space over time.
Sonic Spaces
The meaning of space in sound or music can be explained by two different uses of spatiality: space as a metaphor, and literal uses of space, meaning the physical reality of acoustical aspects and the aural perception of sound or music in space (Macedo, 2015).
Space as a metaphor is the use of spatial concepts to describe different aspects of sound and music, such as the spatial associations of high and low, since listening is a perceptual experience. While the first category is a conceptual discourse not necessarily related to the aural perception of sound in interaction with space, the latter, divided by Macedo (2015) into space as acoustic space, space as sound spatialisation, space as reference and space as location, is directly related to the acoustic signature of space on sound.
As Macedo (2015) explains, ‘space as acoustic space’ is the most direct form of interaction between sound and space. The acoustic effects of space on sound – reverberation and especially reflection, diffraction and resonance – shape the music or sound played there, for instance a plainchant performed in a large cathedral. ‘Space as reference’ is explained by the referential properties of sound, which evoke in the listener the spatial impressions and experiences of different places. ‘Space as location’ is explained by site-specificity. As our spatial perception involves all perceptual systems, each place has its own sonic characteristics and arouses specific expectations in the listener; in Macedo’s (2015) example, hearing traffic sound evokes being in a busy street. There is therefore a strong relationship between the listener’s behaviour, the kind of music performed and the architectonic features of the space. The last category, ‘space as sound spatialisation’, is the ‘surroundability of the auditory field’, which includes the ability of our auditory system to perceive spatial properties of sound such as direction, distance and motion. Sound spatialisation allows the composer to construct different kinds of aural perception in the listener by distributing the sound sources – loudspeakers or instruments – throughout the installation space or performance venue. Among these categories, space as sound spatialisation is the fundamental element of addressing space that relates to my research project and to the examples of sonic spaces given in this chapter.
Figure 2: Diagram of IS
Sonic Lines n’ Rooms by Schafer and Krebs is a sound space in which the movement of sound through loudspeakers, located close to the walls on different layers, constructs specific sound spaces. The distribution of the speakers allows visitors to walk freely through the whole architectural space, so they experience different auditory perspectives as they move. With the movement of the sound and of the visitor within the space, physical space and sound space form a new perception of the real space, cancelling the boundaries of the existing space. LOST is a hybrid version of Sonic Lines n’ Rooms with variations of the loudspeaker array, created for spaces with high ceilings and long reverberation times. Sound plunges up and down through a speaker column (A, Fig. 4), while a circular array of loudspeakers (B, Fig. 4) surrounds the entrance of the space. A third group of loudspeakers (C, Fig. 4), positioned on the floor, works as a sound sculpture that visitors can walk around and engage with if they choose to.
Sonic spaces are built through the interaction of movement, time and space. “The physical arrangement of the loudspeakers becomes in effect a living ‘body’ or ‘organism’ creating a form of mediation between the sound types, their actual movements and the installation situated in the space.” (Nuhn and Dack, 2003). Schafer and Krebs categorise their sound installations as space-sound bodies, according to the various distributions of the sound source, the ‘sound body’. Sonic Lines n’ Rooms is an example of an ‘enterable space-sound body’, where the sound source situated in the space is set up to allow visitors to move within the confines it defines, inviting them to explore the interactions between space and sound. LOST is a combination of an enterable and a ‘circumambulatory space-sound body’, where the sound body is situated within a space and the visitor, though external to it, is encouraged to approach and engage with it (Fig. 4, C).
Soundcube by Bernhard Leitner can also be categorised as an enterable sound space. The cube-like space consists of 64 loudspeakers on each side. As Leitner (1969) states, it uses “movement of sound as a tool to create and characterise space”. Sound travelling through the loudspeakers, changing in pitch, speed and direction, creates an architectural space. In his sound spaces, Leitner studied the relationship between sound, space and the body. In his manifesto, he writes that “hearing with ears is only one part of our auditory perception. An acoustic stimulus is absorbed with the entire body” (Leitner, 1969).
In his experiments he sketches spatial figures with the movement of sound and the impact of sound on the body. Unlike at a concert, where the music starts and ends while one is already seated, creating the feeling of having consumed all of it, in his installations the sound already exists in the space and one engages with it only momentarily.
Figure 5-6: Leitner – Wall Grid and Sound Cube
As he states, “Space is here a sequence of spatial sensations — in its very essence an event of time. Space unfolds in time; it is developed, repeated and transformed in time” (Leitner, Kargl, Groys 2008).
In Ikeda’s work A [for 6 silos], loudspeakers placed in six silos play pure sine waves; since the sine tone is non-directional, it fills the space with patterns of sound, creating various sound areas. Visitors can move freely and experience a unique auditory perspective that changes interactively with their movement, speed and direction.
Figure 7: Ikeda – A [for 6 silos]
As seen from the examples given, it is possible to use sound as the material of architecture in shaping space. Sound moving through loudspeakers can define new boundaries of temporal spaces that exist only when the participant engages with them. Correspondingly, IS is an enterable sound space in which space is addressed as sound spatialisation. Our installation space is a cube-like space with four speakers, one placed in each corner. The speakers are hidden behind the walls, allowing the user to walk around freely. Within the space, the participant therefore has no visual contact with the source of the sound; he/she is presented only with the projected visuals and his/her own hearing. Our first prototype, constructed within an existing architectural space and separated by walls and curtains but not isolated from sound, also works as a circumambulatory space, defined within a space, when the user is outside of it.
Notation and Choreography
Musical notation is a method of narrating music: as Chafe (2012) describes, “a platform for musical communication”. Music involves the composer, the performer and the listener, and even though it is not mentioned in the score, music is shaped by the space, while our perception of it differs according to our experiences.
Music, as a temporal art, is experienced over time. The conventional musical score is a static representation composed in “frozen time”: it is not perceived as a whole, and unfolds linearly over time in the consciousness of the performer or listener. The conventional score is limiting in that it allows little interpretation and is insufficient for the computer music and performance spaces of today. Experimental music notation and the visualisation of computer music have been investigated by many composers. In these graphic notations, sound is described through various shapes and lines, and the composition can be grasped as a whole image rather than as individual parts. The composer no longer defines rules or guides the performer, as no instructions are given; the role of the performer therefore changes. These new graphical notation methods can be defined as open scores: they no longer need to be realised linearly, which allows interpretation and the free movement of the performer. Experimental music notation liberates the composer and the performer from conventional norms and is open to interpretation – as Applebaum (2012) explains, “It is certainly music, yet I hear no sound in my head when I’m composing it” – and there is indeterminacy in each performance. The composer erases his subjectivity from his music.
Although no instructions are given, a connection between auditory and visual perception becomes visible through the execution of these works by performers. In Applebaum’s composition Metaphysics of Notation, twelve panels of a hand-drawn pictographic score were hung around the performance space.
Performers were invited to interpret the piece in any way meaningful to them, moving freely around the panels. There were resemblances across the interpretations: continuous curves were interpreted as long notes changing in pitch, while repetitive objects were interpreted as rhythmic actions; growing images illustrated amplification, getting louder as the image gets bigger, reflecting the perception that bigger objects produce louder sounds.
 Figure 8: Applebaum – Metaphysics of Notation / Panel 4
Although Cornelius Cardew gave no instructions on how to perform his work Treatise, it can be interpreted by examining the recurring patterns throughout the piece to derive rules and by organising objects of the score into units.
Figure 9: Cornelius Cardew – Treatise
This open process of the score as a visual picture exists as a whole and can be experienced by movement within it, similar to a spatial art such as architecture, where a building is understood as a spatial whole at any moment and is experienced through the movement of the observer around and within it. Graphical notations therefore raise the question: is it possible to notate sounds spatially so as to create an architectural space by means of sound?
Iannis Xenakis “explored such correspondences somewhat literally by projecting similar mathematical or formal structures into musical as well as visual space” (Sterken, 2009). In Metastaseis, he uses straight lines (glissandi) that start at a certain pitch and slide through all the frequencies in between. “Two dimensional lines define a three dimensional space that implies unfolding over time” (Sterken, 2007). Xenakis applied the same mathematical rules and formations he used in his music to architecture: the straight lines create a ruled surface, which later inspired the design of the Philips Pavilion. Graphs of straight lines with different slopes corresponded to different ‘sound spaces’ (Roberts, 2012, p. 9).
Graphical notation as open score allows the notation of the interaction between space, time and movement. Hence, choreographing movement in space can be notated diagrammatically as an open score, just like a musical score. Halprin (1970), in The RSVP Cycles, explains: “Scores are symbolisations of processes, which extend over time. The most familiar kind of score is a musical one but I have extended this meaning to include scores in all fields of human endeavour. …I saw scores as a way of describing all such processes in all the arts, of making process visible and thereby designing with process through scores. I saw scores also as a way of communication, these processes over time and space to other people in other places at other moments and as a vehicle to allow many people to enter into the act of creation together, allowing for participation, feedback and communications.” (Halprin, 1970).
 Figure 12: Halprin, L. – Motation drawings
Halprin thought about ‘choreographic space’ and developed scores for urban landscapes designed specifically for the experience of movement. He designed ‘Motation’, a movement notation system for describing and recording present movements and choreographing future movements in space: ‘a tool for choreographing space that can be used for choreography in dance or design of movement through urban spaces’. In his Motations, the score is therefore not only a system for reading or performing the architectural space; it involves participants in the planning process of the space. It works as a guide between ‘what was expected beforehand’ and ‘what is observed’, so that the activity produces its own outcome in process.
Further on, participant movement is notated in the first set-up of the IS space. This reverse motation compares the expected and observed interaction between movement and audiovisuals, examining participant behaviour to determine the successes and failings of the system and to identify further considerations and findings for improving interactivity.
Interactivity and Conversation
“It is our interaction with the world that increases our understanding, and not just a head-knowledge of the resulting measurements.” (Hermann and Hunt, 2011)
“It can also be maintained that it is best to provide the machine with the best sense organs that money can buy, and then teach it to understand and speak English. That process could follow the normal teaching of a child. Things would be pointed out and named, etc.” (Turing, 1950, 460)
In conversation theory, Pask (1968) discusses the learning capability of the machine and the interaction between the machine and the participant. Authentically interactive systems can be achieved only through true autonomy (Haque, 2007, p. 58). Instead of pre-programming all the possible responses of the machine, giving it an underspecified goal enables man and machine to collaborate on a shared goal, and from this interaction complex behaviour patterns emerge (Pask, 1972, quoted in Haque, 2007). Instead of linear, prescriptive approaches, it is vital for interactive systems to allow collaboration.
Figure 13: Diagram of IS
In IS, interaction is the main component that drives the system, creating a loop between the user and the system. The installation space becomes an instrument, and the user becomes the composer and the performer. The system responds to presence: the position and bodily movements of the user are tracked by a 3D camera as they walk in. The data is then sent to the sound engine (Max MSP) and the visualising software (TouchDesigner). The user composes his/her own music as he/she walks around the space; every step within the space works as a piano key, with parameters varying according to the coordinates. Like the sound output, the visualisation starts as the user walks into the space. Parameters of the generated sound are sent to the visualisation software through OSC messaging. The sound is analysed and visualised as a landscape that works as a spectrogram of the sound, so the peak point follows the position of the user. The visual representation of sound acts as an interface for users to engage with; the participant is the force affecting the audiovisuals. The sound and visuals are then mapped to the space through four loudspeakers in the corners of the space and six projectors covering its walls and floor. The sonification process and the transcoding of sound to visuals are explained in detail in further chapters. To conclude, motion drives the sound, sound is translated into visuals and the visuals are the representation of the sound; visual cues affect the perception of the user and stimulate the interaction, closing the loop of the system. A sketch of this data flow follows.
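The messaging layer of this loop can be sketched in a few lines. This is a minimal illustration, assuming hypothetical OSC addresses and ports (the actual patch endpoints are not documented here), and using the python-osc library:

```python
# Minimal sketch of the IS data flow: tracked position fanned out over OSC
# to the sound engine (Max MSP) and the visualiser (TouchDesigner).
# Addresses and ports are illustrative assumptions, not the actual patch.
from pythonosc.udp_client import SimpleUDPClient

max_msp = SimpleUDPClient("127.0.0.1", 7400)         # sound synthesis
touch_designer = SimpleUDPClient("127.0.0.1", 7401)  # visualisation

def on_tracking_frame(x: float, y: float) -> None:
    """Called once per frame of 3D-camera position data."""
    max_msp.send_message("/is/position", [x, y])         # position drives sound
    touch_designer.send_message("/is/position", [x, y])  # visuals follow the same data
```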
The comprehensibility of the design is important for an interactive system: as Caldis (2014) notes, if the user continually needs to recall instructions, the design has failed. “The user must be able to figure out what to do and understand how to engage with the design.” (Norman, 2013).
– Motation of interaction in IS space
Fig. 14: Movement Notation in IS Space
In our first prototype we based the interaction mainly on standing still and moving around in space. Our aim was to let users figure out how the system works as they enter the room without any instructions. Our expected interaction was for the participant to walk around the space, stand still, and try to manipulate the visuals with their hands. The different behaviours of the participants that were observed are diagrammed above.
Stillness includes five behaviour types, such as standing still, sitting, and standing still while moving the hands at various speeds. Movement can be categorised by the speed of the participant. The interaction we observed was a mixture of these categories. While some people moved slowly around the space, watching and sitting, others chose to run along the walls and were more active. Participants with slower movements were also the ones who chose to sit down. While most participants used hand movements to interact with the space, this was more common among people with slower movements. People who chose to run generally did not stand still except for very short intervals; they were more likely to run and walk around the space and were always active. Therefore, the speed of movement should be another factor that changes the sound; a sketch of how speed could be derived from the tracking data follows.
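As a first step toward using speed as a sonification parameter, speed can be estimated from successive tracked positions and classified into the observed behaviour types. This is a hedged sketch; the thresholds are illustrative assumptions, not measured values:

```python
import math

def movement_speed(prev, curr, dt):
    """Approximate participant speed (m/s) from two successive (x, y) positions."""
    return math.hypot(curr[0] - prev[0], curr[1] - prev[1]) / dt

def classify_movement(speed, still_threshold=0.05, run_threshold=1.5):
    """Map speed onto the behaviour types observed in the prototype."""
    if speed < still_threshold:
        return "still"        # standing still / sitting
    if speed > run_threshold:
        return "running"      # active participants running along the walls
    return "walking"          # slower movement around the space
```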
We assumed that, after spending a few seconds in the space, it would be obvious to participants that their walking was generating the piano sound. Some of the feedback we received was that they were unsure whether they had control over the audiovisuals. Movement and stillness were not as effective for every user as predicted. Some participants alternated between moving and standing still at very short intervals, so the system was not accurate enough to interact in every case.
Another observation is that, instead of the two main changes in our current prototype depending on any type of body movement or stillness, there needs to be a direct relationship between a specific movement and the sound and visuals it generates. We need to work on a narrative of sound, beginning with the user walking into the IS space and evolving through the interaction as the user continues the ‘journey’. The piano notes, changing in pitch and duration as the user walks in the x and y directions, are insufficient as a response to the user’s direct interaction with the space.
The continuous presence of sound and the contrariness of the visuals – moving particles that appear as the user stands still – seemed restless compared with the stillness of the user’s movement, which created confusion. Mapping varied sonic feedback accurately and directly to subtle bodily movements and gestures, and outputting sound only when there is motion, will be our next attempts to investigate.
 Fig. 15: IS, Hand gesture controlling particles
Sound Gestures
According to Leman, our body connects our environment with our subjective experience of it, working as a mediator between them in our understanding of space (Leman, 2012). We build up a set of gestures, and the consequences of those gestures, kept in our memory and used to interact with our environment and to interpret our next actions and perceptions. These are based on our particular goals in the space.
Similarly, music cognition is situated in an environment and enacted through our actions. Individuals understand music in the way they understand expressive intentions during social interactions, and sonic patterns recall gestures that are meaningful to an individual through his/her place in a cultural environment (Leman, Keller and Janata, 2007, p. 289).
We have a set of actions and gestures in our memory when we perceive music or sound. In The Sonification Handbook, Hermann categorises listening as: everyday listening, which is ‘source-oriented’; musical listening, which attends to the character of sounds, their acoustic shape, rhythm and so on, where the focus is on the sign rather than the signified; and analytical listening, ‘the conscious use of all listening skills to distinguish and analyse an object under investigation’, such as shaking an opaque box to determine its contents (Hermann, 2011, p. 400).
When we hear an everyday sound of which we have prior knowledge, we tend to act in relation to how that sound is produced, or we assign semantic labels to it; our gestures are based on acoustic features and we interpret the sources of sounds. When we hear an abstract sound, by contrast, we tend to reproduce parameters of the sound with our movements, using auditory sensations to depict the sounds.
In experiments begun by Pratt (1930) and later replicated by Trimble (1934) and by Roffler and Butler (1968) (all cited in Rusconi et al., 2006, p. 115), participants were asked to locate a number of tones of different frequencies. It was shown that people tend to describe pitch as high or low and to locate tones accordingly in space.
In later experiments by Lemaitre et al., involving several people with no background in music, dance or sign language, results showed that, alongside the spatial metaphor of pitch, the noisiness or graininess of sound is depicted with a rustling metaphor of rapidly shaking hands or fingers. The rustling gesture used as a metaphor for the ‘fuzziness’ of noisy sounds recalls the mapping of the Kiki–Bouba effect: just as phonetic features are associated with rounded or sharp visuals, people may lean toward associating random fluctuations in sound with visual fuzziness, and therefore shake their hands (Lemaitre et al., 2017).
Fig. 16: Sound Gestures (Lemaitre et al., 2017, p. 9)
As explained before, “Sound is essentially movement” (Munoz, 2007) and “Nothing sounds if nothing is in movement” (Fernández, 2000). A live performance depends on aural and visual perception. “Gestures act as the visual stimulus to perform and perceive music.” Musical gestures are visual representations and expressions of sound. “Seeing performers’ gestures as they play, strongly influences the particular kind of data registration that accompanies the listening in the total perception of the performance” (Munoz, 2007), just as a conductor’s gestures technically and expressively represent music and provide communication in live performances.
Studies show that ‘music is perceived in various ways of auditory and visual components’ (Behne and Wollner, 1990, p. 324). In the investigations of Klaus-Ernst Behne (1990), musician and non-musician participants from different age groups were shown short video recordings of the same audio with manipulated visual representations, in which different musicians appear to be playing. Participants were asked to rate the recordings; the majority reported different ratings, showing the influence of the visual information.
 Figure 17-18: Applebaum – Aphasia
In Aphasia, a nine-minute piece written by Applebaum for a mute singer, unfamiliar sounds gathered by isolating and transforming individual samples from a three-hour voice recording of Isherwood are accompanied by gestures of everyday movements, actions with names such as “give me the money” or “post-it notes”. The gestures are recorded as the musical score. There is no regular rhythm or repetition, yet as the piece continues the relationship between sound and action comes to seem reciprocal. Gestures based on everyday activities, taken out of context and combined with unfamiliar sounds, seem foreign, but as the piece progresses the viewer starts to make connections between action and sound.
Although there are generalisations of gestures and expressions, people tend to act differently in the same environment; for example, the body movements of each dancer differ for the same piece of music (Keller and Janata, 2007, p. 291). Gestures, as the “nonsonic aspect of musical performance” (Kivy, 1995), represent expressiveness, and this differs between individuals because it is part of personality. IS is a personalised experience based on the idea that individuals act differently in unfamiliar environments – as Caldis (2014) describes, they “listen, excite and perceive in their distinct way” – according to their experiences, cultural history and learned behaviours.
Data Sonification and Sound Synthesis
Data sonification is the auditory display of information: a way to translate streams of data into sound through digital sound synthesis. Hearing the data can be much more effective than looking at a large number of values in a chart.
In The Sonification Handbook, five different techniques of sonification are described: audification, iconic communication, earcons, parameter mapping sonification and model-based sonification. These approaches output sound from data in direct or indirect ways, which the user can either recognise immediately or must learn through subjective mappings. They are useful for a vast variety of purposes, from the sonification of large amounts of scientific data that cannot be comprehended at a glance, to improving performance in complex human motor tasks and the rehabilitation of patients.
Parameter Mapping Sonification (PMS) is the mapping of information to auditory parameters for displaying data. As Berger and Grond (2011) state, an example of PMS is the sound produced when the water in a kettle reaches boiling point. There is a wide variety of possible mappings: triggering a sound only when the target temperature is reached, hearing a continuous change of sound as the temperature rises, or associating selected temperatures with a sound output during the process. With a purely continuous change, interpreting the display would require absolute pitch and prior knowledge from the user, so hearing selected temperatures becomes crucial. This variety presents a challenge of comprehensibility: the sound might not always be intuitive to the user. To prevent ambiguity, some aspects of parameter association should be considered for an effective auditory display: polarity, the direction of the mapping, as in rising temperature mapped to rising pitch in the kettle example; scaling, adapting the data domain to the perceptual limits of hearing; and context, references that act like the tick marks on a visual graph – for example mapping temperature to pitch, rate to tempo and size to loudness (Walker and Kramer, 1996). A minimal sketch of such a mapping follows.
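The kettle example can be written down directly as a parameter mapping. The sketch below assumes an illustrative temperature range and pitch range; the exponential interpolation reflects the logarithmic nature of pitch perception, keeping equal temperature steps perceptually similar:

```python
def kettle_pitch(temp_c, t_min=20.0, t_max=100.0, f_low=220.0, f_high=880.0):
    """PMS of the kettle example: positive polarity (rising temperature ->
    rising pitch), with the data domain scaled to the range of hearing."""
    t = min(max(temp_c, t_min), t_max)        # clamp to the data domain
    ratio = (t - t_min) / (t_max - t_min)     # scale to 0..1
    return f_low * (f_high / f_low) ** ratio  # exponential pitch mapping

def context_tick(temp_c, marks=(40.0, 60.0, 80.0, 100.0)):
    """Context as 'tick marks': a reference cue at selected temperatures,
    so the listener does not need absolute pitch to read the display."""
    return any(abs(temp_c - m) < 0.5 for m in marks)
```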
Metastaseis by Xenakis, whose graphical notation was mentioned earlier (Figure 10), is an example of PMS, where statistical and stochastic mathematical processes are mapped to sound.
I have used PMS in some of my previous works, building systems that translate streams of data into sound. In the ‘Reality into Music’ series I rendered data input, such as the texture of a wall or luminance values read by a camera, into meshes that are read by a graphical sequencer as the musical score. These are then turned into sound output by the synth engine. By reading values from the environment as the composition, the synth engine works both as the performer and as the instrument generating the sound output. In these works, the feedback loop was the determinant of continuation: as the system listens to or watches the output it generates, it sends back messages that keep changing the score input.
Figure 19: Tufan, Y. Previous works, Reality into Music
In micro | macro, Ikeda is inspired, as Finkelstein (2016) describes, by the “space of particle physics and physical cosmology–as measured in Planck units”, exploring the intersection of art and science. Micro explores the building blocks of matter by enlarging them to human scale, while macro scans depictions of nature from the human scale up to the cosmological scale. Ikeda (2014) explains his work as “reducing sound, light and the world into sine waves, pixels and data, so that the world can be viewed once more at a different resolution”.
Fig. 20: micro | macro – Ryoji Ikeda, the planck universe [macro], 2015. Martin Wagenhan © ZKM | Karlsruhe
Model-based sonification (MBS), as explained by Hermann (2011), focuses on the acoustic responses that are generated in response to the user’s actions. Every interaction we make with the environment is accompanied by acoustic feedback; in MBS, therefore, the sonification evolves only when the user excites the system. Model-based sonification works best for interactive installations because it is based on experience and on the user’s interaction with the environment, taking into account the physical acoustic feedback to user interactions. Through repeated interactions answered by the system, the user comes to understand the mapping from a specific excitation to a sound output. As Hermann (2011) states, “In MBS, the data is not ‘playing’ the instrument, but the data set itself ‘becomes’ the instrument and the playing is left to the users”. A minimal sketch of this excitation-driven behaviour follows.
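The defining property of MBS – silence until the user excites the data set – can be illustrated with a small sketch. The value-to-sound law here is an assumption for illustration only:

```python
class ModelBasedSonification:
    """Minimal MBS sketch: the data set 'becomes' the instrument and the
    playing is left to the user (cf. Hermann, 2011)."""

    def __init__(self, data):
        self.data = list(data)  # the data set is the instrument's body

    def excite(self, index):
        """One user action 'plucks' a data point; its value shapes the
        acoustic response. Returns (frequency in Hz, decay in seconds)."""
        value = self.data[index]
        frequency = 220.0 * 2.0 ** (value / 12.0)  # value as a semitone offset
        decay = 0.1 + 0.05 * abs(value)            # larger values ring longer
        return frequency, decay                    # no excitation, no sound
```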
Fig. 21: Diagram of model-based sonification (The Sonification Handbook)
Both of these sonification methods are relevant to IS. Position data is mapped to parameters of sound such as pitch and duration, and parameters of sound are mapped to visual representations, which implies parameter mapping sonification; at the same time, it is an interaction-driven system in which no output is generated without excitation, and such interactive systems are best realised with model-based sonification methods.
Musical instruments are examples of interactive devices that generate sound through the physical actions of a player. As Hermann and Hunt (2011) describe, their primary function “is to transform human gestures to sound for the purposes of expression”. It matters greatly to humans to be part of the control loop, to be able to initiate results and prepare mentally. Considering real-world interaction in interface design, and giving people control over the loop, therefore gains importance.
Digital instruments such as synthesisers do not rely on the acoustic body of a physical instrument. However, as interactive sonification devices, they can apply the same principles of physical-world interaction and work similarly to musical instruments, controlled by human gestures instead of plucking strings or hitting keys.
Fig. 22: David Rokeby, Very Nervous System, in the street in Potsdam, 1993
Very Nervous System is an interactive sound installation by Rokeby that uses cameras, processors and synthesisers to translate body movements into sound.
Firewall is an interactive installation in which a stretched sheet works as an interface sensitive to depth. A Kinect measures the depth of the sheet from the frame; as the sheet is pushed, visuals are created around the pressure point and sound is triggered. A Max MSP algorithm makes the sound faster or slower, louder or softer based on depth. The wall can be described as an interactive instrument: the pressure of contact determines the volume and speed of the audio, so the user expressively feels in control of the system. As discussed under model-based sonification, excitation triggers the system, and there is no audio when there is no contact. Mizaru is the performance version, where several Firewall screens form a cube-like space for the performers to interact with. The system works the same way, with audiovisuals appearing upon touching the screens. Mizaru consists of five different worlds representing different conceptual meanings.
Fig. 23-24: Firewall and Mizaru- Aaron Sherwood
Noisy Skeleton is an immersive installation and an interactive instrument in which human body expressions are used to generate audiovisuals. As Theoriz describe it, it is “a dialogue between man and machine”. Theoriz use a 3D camera to track the user’s position and joint data, calculating the distance between the user’s hands. The collected data are then used to generate sound and visuals. There is a direct connection between the user’s hand gestures and the audiovisuals: if the distance between the two hands increases, the distance between the lines in the visuals, as seen in figure 25, follows similarly. Minimalist visuals were used to let the user feel the smallest disturbances.
Fig. 25-26: Noisy Skeleton – Theoriz
In Momentum by Schnellebuntebilder and Kling Klang Klong, “the user and his/her surroundings transform into fluid particles to form a synaesthetic experience of sound and visuals” (Schnellebuntebilder). Body movements are tracked by a Kinect camera, analysed in real time and transformed into generative sound using Max MSP and Ableton Live. The intensity and direction of movement are calculated to determine the trajectory of the visualised particles.
Fig. 27-28: Momentum – Schnellebuntebilder and Kling Klang Klong
As seen from the examples given, a successful interaction requires a direct relationship between the movement of the user and the audiovisual feedback – in the case of IS, the response of the space. Supporting complex audio with simple visuals, or vice versa, is more effective for the comprehensibility of the system. Moreover, beyond an interactive audiovisual system, we aim to compose a space of sound that starts the moment the user walks in and keeps evolving according to user behaviour. This evolution happens through the construction of four different scenes depending on the personality and behaviour of the user, similar to the Mizaru example, for the continuation of the ‘journey’ of music through time. Spatiality, and augmenting the user’s perception of the physical space he/she is in, is therefore essential. Compared with these examples, IS as an interactive sound space adds another layer: the spatial mapping of the sound as well as of the visuals, creating spatiality by means of sound.
– Sonification in IS
Figure 29: Sonification Diagram of IS
In IS, we use a depth camera (Kinect) to track the user’s position. The x and y positions of the user are sent to the sound synthesis software (Max MSP) to generate sound, and the sound composed by the user is analysed for visualisation. When no one is present in the room, pink noise passed through resonant filters (subtractive synthesis) outputs an ambient drone that invites people to explore where the sound is coming from. The system is activated by excitation: when there is movement in the room, the whole character of the sound changes and the user creates his/her own composition. A landscape of a wave field works as a sound spectrogram, fluctuating reactively to the ambient background sound. As the user walks within the space, the sound engine outputs piano notes; parameters of the sound are sent from the sound synthesis software to the visualisation software (TouchDesigner) through OSC messaging and analysed to generate a breaking point of the wave field at the position of the user, which travels from the edge of the virtual landscape toward the user and keeps following the user’s position as she/he walks around.
Fig. 30: Diagram of the Installation and Loudspeaker Positions
We divided the space with a ten-by-ten grid to gain control over the quantity of position data. The position values are sent to an additive synthesiser – creating the timbre of a piano by adding waves – that works as a piano keyboard, translating the values into MIDI notes. As the user walks through the space, notes start to play: the x axis changes the frequency of the notes and the y axis changes their duration. The main factors affecting the sound are movement and standing still. A sketch of this grid mapping follows.
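The grid quantisation and the two axis mappings can be sketched as follows. The note range and duration values are illustrative assumptions; the prototype’s actual scale is defined in the Max MSP patch:

```python
def position_to_note(x, y, width, depth, grid=10):
    """Quantise a tracked (x, y) position onto the 10x10 IS grid and map
    it to a MIDI pitch (x axis) and a note duration in seconds (y axis)."""
    col = min(int(x / width * grid), grid - 1)  # 0..9 cell along x
    row = min(int(y / depth * grid), grid - 1)  # 0..9 cell along y
    pitch = 48 + col * 3                        # hypothetical pitch ladder from C3
    duration = 0.2 + row * 0.15                 # 0.2 s .. 1.55 s
    return pitch, duration
```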
When the user is moving, and hence generating notes, the sound output is recorded. As the user stands still, the system reacts by changing the speed of the audiovisuals, representing a freezing of the sound the user has composed. The recorded output is sent to a granular synthesiser, which stretches the audio, changes its attributes and plays back layers of the same generated composition during the act of stillness. The stretched playback represents the freezing of time, and therefore of the movement of sound, since time can be explained by movement in space. A sketch of the stillness detection that hands the recording to this layer follows.
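Detecting stillness robustly matters here, since participants who alternated between moving and standing still at short intervals confused the first prototype. A windowed detector adds hysteresis; the window length and threshold below are assumptions:

```python
from collections import deque

class StillnessDetector:
    """Trigger the granular 'freeze' layer only after the tracked position
    has stayed within a small region for a full window of frames."""

    def __init__(self, window=30, threshold=0.05):
        self.history = deque(maxlen=window)  # recent (x, y) positions
        self.threshold = threshold           # max spread counted as 'still'

    def update(self, x, y):
        self.history.append((x, y))
        if len(self.history) < self.history.maxlen:
            return False                     # not enough evidence yet
        xs = [p[0] for p in self.history]
        ys = [p[1] for p in self.history]
        spread = (max(xs) - min(xs)) + (max(ys) - min(ys))
        return spread < self.threshold       # True -> start granular playback
```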
Since no instructions are given to the user, at this stage it is important to provide visual cues to the control the user has over the system. During stillness, the previous visuals representing the movement of sound waves and the active composition of sound disappear. The user is then surrounded by floating particles representing the grains of sound and the frozen movement in time. The user is the driving force of these particles, which represent streams of sound layers floating in the air: standing still, she/he is able to control their position. As before, the sound generation and visualisation software are connected through OSC messaging, this time for the mapping of the audiovisuals. Sound is spatialised between the four loudspeakers, and visuals are projected through six projectors covering the walls and floor of the installation space. The user’s hand movements move the particles around, thereby changing the loudspeaker from which the sound emanates.
Fig. 33-35: Images of IS
Spatial Sound
Sound spatialisation and spatialised audio, as Willits (2017) explains, “allows the use of space as a part of the music”. It challenges front-oriented performance spaces, where the listener is only a ‘passive consumer’ (Hofmann, 2006, p. 1), by distributing sound sources throughout the space. Developments in sound technology related to the surroundability of sound enable immersive sound experiences and the reproduction of accurate 3D sound fields, and support the creation of new spatial perceptions through auditory illusions concerning the direction, motion and distance of sound.
Ambisonics is a method for recording, mixing and playing back three-dimensional, 360-degree audio. Unlike surround-sound technologies, ambisonics is not limited to a particular number, position or angle of speakers: audio can be decoded to any speaker array. Covering not only the horizontal plane but also height and depth – sounds from above and below – it is capable of spreading sound over a full sphere. However, accurate representation of the sound field is constrained to a ‘physical’ sweet spot at the centre of the speaker array, which increases in area with higher ambisonic orders and more loudspeakers; this causes problems when using ambisonic techniques in large spaces. Nevertheless, as Frank (2014) states, “a physically accurate reproduction does not necessarily yield a good perceived quality”. A hearing-related approach to spatialisation such as Vector Base Amplitude Panning (VBAP) is much less restrictive. It can be used to create a ‘perceptual’ sweet spot based on the psychoacoustic phenomenon of the phantom source: creating a virtual source by guiding the perceived direction of an individual auditory object, distributing its sound signal between a number of loudspeakers at equal distances from the listener. A sketch of the underlying pair-wise gain computation follows.
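For a horizontal loudspeaker pair, the VBAP gains can be computed from Pulkki’s vector formulation: the source direction is expressed as a linear combination of the two loudspeaker direction vectors, and the resulting gains are normalised for constant power. This is a sketch of the 2D case only:

```python
import numpy as np

def vbap_pair_gains(source_deg, spk1_deg, spk2_deg):
    """2D VBAP: solve g @ L = p for the gain pair g, where the rows of L
    are the loudspeaker unit vectors and p is the source direction."""
    def unit(deg):
        r = np.radians(deg)
        return np.array([np.cos(r), np.sin(r)])

    L = np.vstack([unit(spk1_deg), unit(spk2_deg)])  # loudspeaker base
    p = unit(source_deg)                             # desired phantom direction
    g = p @ np.linalg.inv(L)                         # raw gain factors
    return g / np.linalg.norm(g)                     # constant-power normalisation

# Example: speakers at +-45 degrees, source straight ahead -> equal gains.
print(vbap_pair_gains(0.0, 45.0, -45.0))  # approximately [0.707, 0.707]
```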
In our research project, the aim is not the accurate reproduction of a sound field but the erasure of the presence of the physical space, creating an audiovisual illusion of a new perceptual sound space. As explained by Banks (2012), our perception is an active process in which the mind fills in interruptions and makes informed guesses to make sense of our surroundings, as part of our evolutionary inheritance. Our hearing is an imperfect assessment of reality that calls on other senses for verification (Carson, 2007). In Rorschach Audio, Banks (2012) discusses Electronic Voice Phenomena (EVP) as an auditory illusion explicable by psychoacoustics, where ambiguous and unknown sounds in distorted recordings and radio interference have been associated by researchers in this field with supernatural voices of ghosts and contact with the afterlife. Even though there is a scientific explanation for these ‘stray signals’ of ‘broadband very low frequency receivers’, our mind projects subjective, familiar and imaginary meanings onto ambiguous sources. The mental processes that generate illusions are the normal cognitive processes that we experience as reality (Banks, 2016).
In IS, I am using Max MSP with the ICST Ambisonics external for sound spatialisation. The four speakers located in the corners of the installation space are mapped precisely, with the correct dimensions of the space, within the patch. The position data of the user tracked by the camera is sent to the software and used as the sound source, so the sound output follows the position of the user.
Fig. 36: Sketches of IS, using illustrations of Leitner – Soundcube
In our first prototype, there are three different modes of spatialisation depending on presence and position data. If the room is empty, the sound rotates in a 360-degree circle through the loudspeakers. When someone is present, the sound follows the location of the participant’s head as he/she moves, while also travelling linearly through the space as the wave is created at the edge of the virtual landscape and travels toward the user. When the user stands still, he/she is able to move the sound through the loudspeakers with his/her hand position. In figure A, sound moves from loudspeaker 1 to 2, creating a line; in figure B, it follows the order 1 to 4, 1, 2, 2 to 3, 1, 2.
In our first set-up, using four speakers limited us to localising sound on a single level in space, so the spatialisation formed a circle rather than a three-dimensional sphere. The installation space was constructed of solid walls, punctured at the positions of the loudspeakers; even so, the walls affected the directionality of the sound. Our following aim for the installation set-up is therefore to use different materials, such as open-weave fabrics stretched over frames, which let sound penetrate and can be projected onto, removing any barriers that restrict the sound from reaching the user. We will attempt to use eight or more speakers to move sound vertically as well as horizontally and to investigate three-dimensionality. This would enable the addition of the z axis of the head/hand position for spatialisation and thus allow accurate mappings of movement. With sound movable on the z axis, sound could be mapped to follow visual representations located higher or lower in the space for an immersive experience. It would also allow new combinations and thereby contribute to the personalisation of the space, as the impressions of different users would be unique spatial experiences – for example, precise mappings for sitting down or standing up instead of a single level.
Visual Sound
“The aesthetic value of a work can be enhanced if the work is simultaneously presented in more than one sensory modality.” (Pask, 1968)
“Since ancient times artists have longed to create with moving lights a music for the eye comparable to the effects of sound for the ear.” (Moritz, 1986)
People are part of a biological, psychological and cultural context, and our perception cannot be separated from our surroundings. Cognition as embodied action, exemplified by the perception of colour by Varela, Thompson and Rosch (1991), suggests that colours are not simply “out there” or “in here”, independent of our perceptual capacities or surroundings: “world and perceiver specify each other” (Varela, Thompson and Rosch, 1991, p. 172). We take input from our environment and process that information through our sensory perceptions.
Our senses are connected to and enhance each other, and our multi-sensory perception has the ability to match auditory features with visual features. When we hear a sound, it is not only a matter of our auditory sense: how we perceive and experience it requires information from other senses. Similarly, locating ourselves in space and perceiving distance requires multi-sensory perceptual analysis: information from visual depth perception, sound localisation, eye movements and motion perception (Proctor, 2012, pp. 85-87).
Research has shown that seeing also involves hearing. Scientists at the University of Glasgow (Muckli et al., 2014) found that the visual cortex in humans also processes auditory information. In their experiment, blindfolded volunteers were asked to listen to three different sounds: birdsong, traffic noise and a talking crowd. The researchers were able to identify unique patterns of brain activity and “discriminate between the different sounds being processed in early visual cortex activity”.
Sound-colour correspondences have been examined in physics (Newton, 1979), through the vibratory nature of sound and the mapping of pitches to colours; for their aesthetic appeal in art (Jewanski, 2010); and for their processing in psychology, through synaesthesia and our ‘crossmodal correspondences between various unisensory features’ (Spence, 2011). “Examining these multi-modal interactions allow us to identify rule sets that the brain uses to pair seemingly separate stimuli together” (Hamilton-Fletcher et al., 2017).
Hamilton-Fletcher et al. (2017) found a linear relationship between frequency and chroma and between loudness and chroma: low frequencies produced blue hues and frequencies above 800 Hz produced yellow hues. They also reported a “loudness—hue relationship with quieter sounds yielding bluer hues and louder ones yielding yellower hues” (Hamilton-Fletcher et al., 2017).
Figure 38: Adeli et al., Diagram of colour-timbre correspondences
Figure 39: Adeli et al., Diagram of shape-timbre correspondences
In the experiments by Adeli et al. (2014), a strong correspondence between timbre and shape was detected and, although less strongly, a correspondence between timbre and colour. Participants matched sounds with harsh timbres, such as those derived from crash cymbals or square waves, with sharp, jagged shapes and keen colour schemes such as red, yellow or dark greyscale, while they matched soft timbres, such as sounds derived from piano or sine waves, with rounded shapes and deep colour schemes such as blue, green or light greyscale. Sounds derived from instruments containing both harsh and soft timbres, such as the saxophone, corresponded to shapes mixing rounded and hard edges. The results are consistent with the Kiki–Bouba effect (Adeli et al., 2014).
Fig. 40: Ramachandran, V.S. and Hubbard, E.M. (2001) Synaesthesia – A Window Into Perception, Thought and Language (Ramachandran and Hubbard, p. 19, fig. 7)
The Kiki–Bouba effect, observed by psychologist Wolfgang Köhler (1929/1947, cited in Ramachandran and Hubbard, 2001), is an experiment in which people are asked to name two different shapes, seen below – one with sharp edges and one with rounded edges – as Kiki and Bouba; 95% of them named the sharper one Kiki. “The reason is that the sharp visuals in first shape imitates the sharp phonemic inflections of the sound Kiki, as well as the sharp inflection of the tongue on the palate. The experiment suggests that there may be natural constraints on the ways in which the sounds are mapped on to objects” (Ramachandran and Hubbard, 2001, p. 19).
As gathered from experiments on audiovisual correspondences more broadly, people consistently match high-pitched sounds with small, bright objects located high up in space (Spence, 2011). The findings of these experiments steered us in the construction of different scenes and themes built upon corresponding sound and visuals in our research project.
People tend to make a stronger connection with a system when audio is supported by visuals. Dannenberg (2005) indicates that image, sound and their impression cannot be thought of independently, because our perception of each is shaped by the other. He gives the example that “when a particular sound or gesture affects some visual parameter, the audience is much more likely to ‘get it’ if the sound and visual effect are presented in relative isolation” (Dannenberg, 2005). Improvisation and movement can be encouraged if sound is supported with visuals. Dannenberg (2005) also argues that the mapping between visuals and sound should not be too superficial: mapping apparent parameters of the music to visuals stops being interesting, as in music visualisers where the connection between image and sound quickly becomes obvious. Neither should it operate at so complex a level that it is no longer possible for anyone to detect it.
Accordingly, in IS the visual cues given to the participant through visual representations of sound play an important role in determining the interaction of the participant within the space. In our first prototype, hearing sound alone did not lead people toward our expected movement in the project space. The visuals generated as the user stands still – the particles – were a powerful cue showing that the system had changed because of their action; with the combination of audiovisuals, the user is thus conscious of the effect of his/her movement. However, the chosen visual mapping should be considered attentively, because in our first prototype we realised that the emergence of a new layer of visuals – the particles – led people to play with it rather than realising that it was their stillness that had generated it.
“The link [between the visual and aural components] is not between them but beyond or behind them. Because beyond there is nothing but the human brain — my brain. We are capable of speaking two languages at the same time. One is addressed to the eyes, the other to the ears.” (Xenakis)
Figure 41-42: Xenakis – Polytopes
In the Polytopes, another example of what can be called an enterable sound space (as explained in Sonic Spaces), Xenakis achieved an immersive spatial experience by addressing multi-sensory modalities. He argued that “in an acoustically homogenous place with sound emanating from various loudspeakers in an acoustical grid, geometric shapes and surfaces can be articulated in sound space” (Sterken, 2009). He worked on the immersive dimension of “a total work of art”, through the visitor’s contribution of combining the visual and auditory layers of the spectacle.
– A personalised space of music
Studies have shown that musical preference is associated with characteristics of the music as well as with personality traits. Rentfrow and Gosling (2003, cited in Ercegovac, Dobrota and Kuscevic, 2015) mention that knowing people’s musical preferences can provide information about their personality. They categorise four dimensions of musical preference – reflective-complex, intense-rebellious, upbeat-conventional and energetic-rhythmic – and correlate these preferences with the Big Five model of personality traits. The Big Five model is based on five main factors: extraversion, conscientiousness, neuroticism, agreeableness and openness to experience (Chamorro-Premuzic, Reimers, Hsu and Ahmetoglu, 2009). Of these traits, openness to experience and extraversion have the strongest effect on musical preference (Rawlings and Ciancarelli, 1997). Openness to experience relates to intellect and creative interest and, as Chamorro-Premuzic et al. (2009) suggest, is ‘the most important correlate of art’. It involves sensation seeking, and results have shown that individuals inclined toward sensation seeking prefer complex, intense, aggressive and inspiring music. They also found that individuals who are open to experience listen to music in a cognitive manner, whereas those who are neurotic listen to music to regulate moods and emotions.
Many investigations have examined the correlation between personality traits and music preference. Brown (2012) found that individuals who are open to experience prefer reflective music such as jazz and classical music, whereas extraversion corresponds with popular music. Glasgow and Cartier (1985) found that conservative participants prefer familiar and simple music, and that sensation seeking is negatively correlated with conservatism. Rentfrow and Gosling (2003) categorised the four musical types mentioned above and found intense and rebellious music to be preferred by participants who are open to experience, more athletic, of high intelligence and verbal ability and prone to risk taking, while the upbeat and conventional dimension corresponded to extraversion, agreeableness and conscientiousness.
Across these studies, intellect and sensation seeking are commonly found to correlate with a preference for complex, intense and classical music, while extraversion is associated with popular, dance and rhythmic music. Traits other than openness to experience and extraversion are less consistent in their association with musical preference.
Evolution of Audiovisual Space in IS
Figure 43 : Diagram of Personality, Music Preference and Sound-Colour-Shape Correspondences
Building on the personality traits and corresponding music preferences discussed in earlier chapters, IS consists of four different scenes that respond to the various behaviour schemes in the installation space seen in figure 14. Participant behaviour such as slow movement in space, hand movement, or a combination of the two is matched to reflective-complex types of sound, exemplified by classical music or jazz; these involve soft timbres, such as the piano sound of the first prototype, matched with soft shapes and a blue/green colour scheme. Fast movement (participants who choose to run in the space, combinations of running and slowing down, and short intervals of standing still) is associated with intense-rebellious sound types, such as rock and heavy metal, with harsh timbres and a yellow/red colour scheme based on the experiments on sound-colour correspondences. Lack of movement, slow movement, or hand movement alone while standing still is linked with upbeat-conventional sound types, a mixture of soft and harsh timbres, therefore of curvy and sharp shapes, with a darker colour scheme. Slower movement and a mixture of walking around and hand movement is associated with the energetic-rhythmic type, again a mixture of curvy and sharp shapes, with a lighter colour scheme. The audiovisuals thus evolve with the behaviour of each individual, creating a personalised journey that is unique to every participant according to the mixture of each category. A behaviour-to-scene mapping of this kind is sketched below.
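As a rough illustration of this mapping, the sketch below classifies a short window of tracked behaviour into one of the four categories. The feature names and thresholds are hypothetical simplifications of the overlapping descriptions above, not the installation’s actual logic.

```python
def classify_scene(mean_speed, hand_active, still_ratio):
    """Pick one of the four audiovisual scenes from coarse behaviour features.

    mean_speed  : average movement speed over the window, in m/s (assumed units)
    hand_active : True if hand movement was detected in the window
    still_ratio : fraction of the window spent standing still

    All thresholds are placeholders that a real system would tune on site.
    """
    if mean_speed > 1.5:
        # Running, or alternating running, slowing down and short stops.
        return "intense-rebellious"    # harsh timbres, sharp shapes, yellow/red
    if still_ratio > 0.8:
        # Little locomotion, possibly hand movement while standing still.
        return "upbeat-conventional"   # mixed timbres and shapes, darker palette
    if mean_speed < 0.5 and hand_active:
        # Slow drifting through the space combined with hand gestures.
        return "reflective-complex"    # soft timbres, soft shapes, blue/green
    # Slower walking around mixed with hand movement.
    return "energetic-rhythmic"        # mixed shapes, lighter colour palette
```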
Conclusion
The interrelation of movement and time forms the building blocks of temporal sonic spaces. The architectural environment plays an important role in our actions: it defines barriers and how we move around in, observe and interact with space. It is possible to create a dynamic series of spatial experiences through the immaterial architecture of sound, by moving volumes of sound through multiple speakers, thereby creating temporal barriers, geometric lines and surfaces through movement over time. With sound spatialisation, we are able to augment or erase the physical space and define new boundaries to be explored by the participant.
I have discussed graphical notations of music, which approach music as a spatial art, since they are realised through movement within the whole image they represent. As explained by Halprin (1970), “scores are symbolisations of processes that extend over time”; movement and interaction can therefore be associated with a choreographic notation method for designing spaces, as seen in the Motation drawings of Halprin. I have presented a reverse motation system for my research project that investigates the expected and observed behaviour in our first prototype. Our intention was to construct an immersive audiovisual experience driven by excitation, hence it is crucial to detail our approach to interaction. I have explained the expected and observed behaviour, the audiovisual response of the system and the sonification of position data in detail in previous sections. We observed that having two different responses to the two main interaction strategies, stillness and motion, was insufficient, because the system was not accurate when movement changed over short intervals. Instead of the binary transition between stillness and movement, speed can be used as data for sound generation, as sketched below. The resulting feedback on the movement needs to be direct and distinctive. The gestures we use to depict auditory sensations and to localise sound in space could become the parameters that define changes in the sound output.
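A continuous speed value of this kind could be derived and mapped roughly as follows; the parameter names and ranges are illustrative assumptions, not the system’s actual sonification mapping.

```python
import math

def speed_from_positions(prev, curr, dt):
    """Instantaneous speed (m/s) from two successive tracked (x, y) positions
    sampled dt seconds apart."""
    return math.hypot(curr[0] - prev[0], curr[1] - prev[1]) / dt

def speed_to_sound_params(speed, max_speed=2.0):
    """Map speed continuously onto sound parameters, instead of switching
    between two fixed responses for stillness and motion. The parameter
    names and ranges are placeholders chosen for illustration."""
    s = min(speed / max_speed, 1.0)    # normalise to 0..1
    return {
        "tempo_bpm": 60 + 80 * s,      # faster movement, faster pulse
        "brightness": s,               # e.g. filter cutoff / timbral harshness
        "density": 0.2 + 0.8 * s,      # event density of the generated material
    }
```

Because the mapping is continuous, short bursts of movement nudge the sound rather than flipping it between two states, which addresses the inaccuracy we observed with rapid changes.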
For a successful interaction, comprehensibility is important so that the user is able to engage with the system. Ambiguity can be reduced by intuitive mappings of sound, although supporting audio with visuals was found to be critical for intelligibility. People tend to realise that their movement is the reason behind the response when they see it as well as hear it. The connection between sound and visuals is therefore significant, and needs to be kept at a level that is not so complex as to cause confusion, yet not so simple that it loses interest.
Our perception is the consequence of our multi-sensory analysis, our experiences and history, and the information gathered from our surroundings. We have a multifaceted sensory capacity and our senses are intrinsically linked; thus, when dealing with an environment of sound, providing visual clues can help strengthen our cognisance. Experiments have shown that hearing also triggers the visual cortex in humans. Our brain is able to fill in the gaps and make informed guesses by combining the data gathered with all of our senses and projecting familiar images to make sense of the world. We therefore experience auditory illusions, similar to visual illusions, when we are exposed repeatedly to ambiguous sounds, which I have illustrated with the psychoacoustic phenomenon of EVP.
Our first prototype of IS was successful in creating an immersive sound space, with visuals mapped on five planes and sound played through four speakers. The restrictions of spatialisation techniques were explained in detail through the concept of physical and perceptual sweet spots. However, our aim is not the accurate localisation of sound; it is to create an auditory illusion of new perceptual spaces through the movement of sound between loudspeakers. To improve our next set-up we will work with more speakers and different materials. Using more speakers would enable us to move sound vertically as well as horizontally, in a three-dimensional sphere rather than a circle, and would therefore allow better mapping of the various user behaviours (the composition of one’s own movements) and of the visual representations for the immersive experience. The solid walls of the set-up will be replaced with open-weave fabric frames, to remove any barriers that affect the directionality of sound.
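As a simple indication of how a sound can be moved between loudspeakers, the sketch below computes equal-power gains for a phantom source travelling around a ring of speakers. This pairwise panning is a generic technique offered for illustration, not necessarily the spatialisation method used in IS; adding height channels and an elevation angle is what would extend the circle towards a sphere.

```python
import math

def ring_pan_gains(angle, n_speakers=4):
    """Equal-power gains for a phantom source at `angle` (radians) on a ring
    of n equally spaced loudspeakers. Only the adjacent pair of speakers
    around the source position receives signal."""
    spacing = 2 * math.pi / n_speakers
    gains = [0.0] * n_speakers
    base = int(angle // spacing) % n_speakers     # speaker just "behind" the source
    frac = (angle % spacing) / spacing            # position between the pair, 0..1
    gains[base] = math.cos(frac * math.pi / 2)    # equal-power crossfade
    gains[(base + 1) % n_speakers] = math.sin(frac * math.pi / 2)
    return gains

# Sweeping `angle` over time moves the sound in a circle around the listener;
# a second (elevation) angle over additional height speakers extends the same
# idea from a circle towards a three-dimensional sphere.
```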
Our first prototype was the first scene of an evolving system, a sound space developing and unfolding with the combination of user behaviours. We are working on four scenes in total, constructed according to personality, music preference and audiovisual correspondences and triggered by specific behaviours in space, to provide a personal, immersive journey of unexpected experiences. As an interaction-driven installation, IS distinguishes itself by aiming to create a perception of three-dimensional space not just through the mapping of generative visuals but through spatial sound as well, for an immersive experience.
Images
Fig. 1 – 2: Tufan, Y., Zou, L. Diagram of IS
Fig. 3: Nuhn, R., Dack, J. 2004. Sound — Time — Space — Movement: the Space-soundInstallations of the artist-couple <sabine schäfer // joachim krebs>. Schafer, S., Krebs, J. 1999. Space-soundBody 4 from Sonic Lines n’Rooms. p.221, Figure 6.
Fig. 4: Nuhn, R., Dack, J. 2004. Sound — Time — Space — Movement: the Space-soundInstallations of the artist-couple <sabine schäfer // joachim krebs>. Schafer, S., Krebs, J. 1992. LOST. p.222, Figure 7.
Fig. 5: Leitner, B. 1972. Wall Grid. [online]. Available from: https://www.bernhardleitner.at/works [Accessed 6 July 2018]
Fig. 6: Leitner, B. 1969. Soundcube. [online]. Available from: https://www.bernhardleitner.at/works [Accessed 6 July 2018]
Fig. 7: Ikeda, R. 2013. A [for 6 silos]. [online]. Available from: http://www.ryojiikeda.com/project/A/ [Accessed 20 September 2018]
Fig. 8: Applebaum, M. 2008. Handbook for Metaphysics of Notation. p.2, Figure 1. The Metaphysics of Notation, Panel 4. [online] Available from: http://web.stanford.edu/~applemk/other-materials/HandbookForTheMetaphysicsOfNotationOriginalDraft.pdf [Accessed 20 September 2018]
Fig. 9: Cardew, C. 1963-1967. Treatise. [online]. Available from: http://socks-studio.com/2015/10/05/the-beauty-of-indeterminacy-graphic-scores-from-treatise-by-cornelius-cardew/ [Accessed 20 September 2018]
Fig. 10: Xenakis, I. 1953-54. Metastasis. [online]. Available from: https://en.wikipedia.org/wiki/Metastaseis_(Xenakis)#/media/File:Metastaseis1.jpg [Accessed 6 July 2018]
Fig. 11: Xenakis, I., Le Corbusier. 1958. Philips Pavilion. [online]. Available from: https://en.wikipedia.org/wiki/Metastaseis_(Xenakis)#/media/File:Expo58_building_Philips.jpg [Accessed 6 July 2018].
Fig. 12: Halprin, L. Motation Drawing. [online] Available from: http://www.dataisnature.com/?p=1583 [Accessed 20 August 2018].
Fig. 13: Tufan, Y., Zou, L. Diagram of IS
Fig. 14: Tufan, Y., Zou, L. Movement notation drawings in IS space.
Fig. 15: Tufan, Y., Zou, L. IS, Hand gesture controlling particles
Fig. 16: Lemaitre et al. 27 July 2017. Rising tones and rustling noises: Metaphors in gestural depictions of sounds. [online]. Available from: http://journals.plos.org/plosone/article/figure?id=10.1371/journal.pone.0181786.g002 [Accessed 6 July 2018]
Fig. 17-18: Applebaum, M. February 2011. Aphasia. [online]. Available from: https://news.stanford.edu/news/2012/february/applebaum-aphasia-music-020312.html [Accessed 6 July 2018]
Fig. 19: Tufan, Y. 2018. Reality into Music, Previous works.
Fig. 20: Ikeda, R. 2015. Micro | Macro. Image by Martin Wagenhan ©ZKM | Karlsruhe. [online]. Available from: https://www.someslashthings.com/online-magazine/2016/7/30/micro-macro-by-ryoji-ikeda [Accessed 6 July 2018]
Fig. 21: Hermann, T. 2011. Model-Based Sonification. The Sonification Handbook. p. 404, Figure 16.2. Diagram of model-based sonification.
Fig. 22: Rokeby, D. 1993. Very Nervous System. [online]. Available from: http://www.davidrokeby.com/vns.html [Accessed 6 July 2018].
Fig. 23: Sherwood, A. 2013. Mizaru. [online]. Available from: http://aaron-sherwood.com/works/MIZARUinstallation/ [Accessed 6 July 2018].
Fig. 24: Sherwood, A. 17 December 2012. Firewall. [online]. Available from: http://aaron-sherwood.com/blog/?p=558 [Accessed 6 July 2018].
Fig. 25-26: Theoriz. 2014. Noisy Skeleton. [online]. Available from: http://projection-mapping.org/skeleton/ [Accessed 6 July 2018]
Fig. 27-28: Schnellebuntebilder, Kling Klang Klong. 2014. Momentum. [online]. Available from: http://schnellebuntebilder.de/projects/momentum/ [Accessed 6 July 2018]
Fig. 29: Tufan, Y. 2018. Sonification Diagram in IS
Fig. 30: Tufan, Y. 2018. IS, Diagram of installation and positions of loudspeakers.
Fig. 31A/B – 32A/B: Tufan, Y., Zou, L. 2018. Illustrations of IS, audiovisual system and spatialisation.
Fig. 33-34-35: Tufan, Y., Zou, L. 2018. Images of IS.
Fig. 36: Tufan, Y., Zou, L. 2018. Sketches of IS, using illustrations of Leitner -Soundcube.
Fig. 37A/B: Tufan, Y., Zou, L. 2018. Spatialisation diagram between loudspeakers.
Fig. 38: Adeli et al. 2014. Audiovisual correspondence between musical timbre and visual shapes. p.6, Table 2: Color selections for timbre.
Fig. 39: Adeli et al. 2014. Audiovisual correspondence between musical timbre and visual shapes. p.4, Table 1: Shape selections for timbre.
Fig. 40: Ramachandran, V.S., Hubbard, E.M. 2001. Synaesthesia – A Window Into Perception, Thought and Language. p.19, fig.7.
Fig. 41-42: Xenakis, I. 1967. Polytopes. [online]. Available from: http://socks-studio.com/2014/01/08/yannis-xenakis-polytopes-cosmogonies-in-sound-and-architecture/ [Accessed 20 September 2018]
Fig. 43: Tufan, Y., Zou, L. 2018. Diagram of Personality, Music Preference and Sound-Colour-Shape Correspondences, IS
Bibliography
- Aaron Sherwood. Mizaru installation. [online]. Available from: http://aaron-sherwood.com/works/MIZARUinstallation/ [Accessed 6 July 2018].
- Aaron Sherwood. 17 December 2012. Firewall. [online]. Available from: http://aaron-sherwood.com/blog/?p=558 [Accessed 6 July 2018].
- Adeli et al. 2014. Audiovisual correspondence between musical timbre and visual shapes, Frontiers in Human Neuroscience. [online]. 8(352), (fnhum.2014.00352). Available from: https://www.frontiersin.org/articles/10.3389/fnhum.2014.00352/full#h11 [Accessed 20 August 2018].
- Alves, B. 2005. Digital Harmony of Sound and Light. Computer Music Journal. [online] 29(4), pp.45-54. The MIT Press. Available from: https://www.jstor.org/stable/3681481 [Accessed 19 September 2018].
- Ambisonicnet. 2015. Why Ambisonics Offers “The Best Sounds Surround”. [online]. Available from: https://www.ambisonic.net [Accessed 6 July 2018].
- Arch Daily. 23 September 2011. Bernhard Leitner: Sound Spaces. [online]. Available from: https://www.archdaily.com/168979/bernhard-leitner-sound-spaces [Accessed 6 July 2018].
- Arnold, R. 2012. There’s no sound in my head. [online]. Available from: https://vimeo.com/14469188 [Accessed 5 January 2018].
- Art Guide Australia. 12 July 2018. Ryoji Ikeda: micro | macro. [online]. Available from: https://artguide.com.au/ryoji-ikeda-micro-macro [Accessed 20 September 2018].
- A Sound Effect. 7 January 2016. WELCOME TO THE WONDERFUL WORLD OF AMBISONICS — A PRIMER BY JOHN LEONARD. [online]. Available from: https://www.asoundeffect.com/ambisonics-primer/ [Accessed 20 September 2018].
- Avidar, P., Ganchrow, R., Kursell, J. eds. 2009. Editorial. Immersed. Sound and Architecture. [online]. OASE (78), pp. 2-7. OASE Foundation, NAi Publishers. Available from: https://www.oasejournal.nl/en/Issues/78/Editorial [Accessed 20 August 2018].
- Banks, J. 2012. Rorschach audio : art & illusion for sound. London : Strange Attractor Press.
- Banks, J. 2016. Rorschach Audio —art and illusion for sound. The British Psychological Society. 29(4), pp. 280-282.
- Banks, J. 2001. Rorschach Audio: Ghost Voices and Perpetual Creativity. Leonardo Music Journal. The MIT Press. [online]. 11, pp. 77—83. Available from: https://muse.jhu.edu/article/20328 [Accessed 18 September 2018].
- Barron, M. 1993. Auditorium Acoustics and architectural design. Spon Press, pp. 1-35.
- Behne, K., Wollner, C. 2011. Seeing or hearing the pianists? A synopsis of an early audiovisual perception experiment and a replication. Musicae Scientiae, 15(3), pp. 324—342.
- Bennett, S. Audio Media International. 05 December 2017. Spatial Awareness: Inside the world of immersive sound design. [online]. Available from: http://www.audiomediainternational.com/news/spatial-awareness-inside-the-world-of-immersive-sound-design/07107 [Accessed 5 January 2018].
- Block Museum of Art. Treatise: An Animated Analysis. [online]. Available from: https://www.blockmuseum.northwestern.edu/picturesofmusic/pages/anim.html [Accessed 20 September 2018].
- Brazil, E., Fernstrom, M. 2011. Auditory Icons. In: T. Hermann, A. Hunt, G. Neuhoff, eds. The Sonification Handbook. Berlin, Germany: COST Office and Logos Verlag Berlin, pp. 325-338.
- Caldis, C. 2014. Data Sonification Artworks: A Music and Design Investigation of Multi-modal Interactive Installations. Thesis (Masters), Wits School of Arts, University of the Witwatersrand.
- Carlile, Simon. ed. 1996. Virtual Auditory Space : Generation and Applications. R.G. Landes Bioscience.
- Carriage Works. 10 May 2018. RYOJI IKEDA | MEDIA RELEASE. [online]. Available from: http://carriageworks.com.au/ryoji-ikeda-media-release/ [Accessed 20 September 2018].
- Carson, B. 2007. What Are Musical Paradox and Illusion?, Book Review. In: Massaro, D.W. ed. American Journal of Psychology, 120(1), pp. 123—170.
- City Arts Magazine. 29 January 2018. Swimming in Sound. [online]. Available from: https://www.cityartsmagazine.com/swimming-in-sound-ambisonic-technology-uw/ [Accessed 20 September 2018].
- Creators. 2 September 2014. Avec Noisy Skeleton, devenez le pont entre son et image [With Noisy Skeleton, become the bridge between sound and image]. [online]. Available from: https://creators.vice.com/fr/article/3d7pnb/skeleton [Accessed 6 July 2018].
- Dannenberg, R.B. 2005. Interactive Visual Music: A Personal Perspective. Computer Music Journal. [online]. 29(4), pp. 25-35. The MIT Press. Available from: http://www.jstor.org/stable/3681479 [Accessed 19 September 2018].
- David Rokeby. 2010. Works: Very Nervous System (1986-1990). [online]. Available from: http://www.davidrokeby.com/vns.html [Accessed 6 July 2018].
- Dazed. Electronic voice phenomena. [online]. Available from: http://www.dazeddigital.com/artsandculture/article/38799/1/collective-rage-jen-silverman-queer-feminist-intersectional-play-you-need-to-see [Accessed 20 September 2018].
- Deutsch, D. 1983. Auditory Illusion and Audio. J. Audio Eng. Soc., 31(9), pp. 606-620.
- Di Bona, E. 2017. Listening to the Space of Music. Rivista di estetica. [online]. 66, pp. 93-105. Available from: http://journals.openedition.org/estetica/3112; DOI: 10.4000/estetica.3112 [Accessed 20 August 2018].
- Digicult. JOE BANKS: RORSCHACH AUDIO. EVP, PSYCHOACOUSTICS AND AUDITORY ILLUSIONS. [online]. Available from: http://digicult.it/news/rorschach-audio-evp-psychoacoustics-and-auditory-illusions/ [Accessed 20 September 2018].
- Dirac Blog. 12 August 2016. Under the Hood of the Stereophonic System: Phantom Sources. [online]. Available from: https://www.dirac.com/dirac-blog/stereophonic-system-phantom-sources [Accessed 20 September 2018].
- Ercegovac et al. 2015. Relationship Between Music and Visual Art Preferences and Some Personality Traits. Empirical Studies of the Arts. [online]. 33(2), pp. 207-227. DOI: 10.1177/0276237415597390 [Accessed 20 September 2018].
- Evans, B. 2005. Foundations of a Visual Music. Computer Music Journal. [online]. 29(4), pp. 11-24. The MIT Press. Available from: https://www.jstor.org/stable/3681478 [Accessed 19 September 2018].
- Frank, M. 2014. How to make Ambisonics sound good. European Acoustics Association. Forum Acusticum, Krakow. [online]. Available from: https://iaem.at/Members/frank/files/2014_frank_howtomakeambisonicssoundgood.pdf [Accessed 20 September 2018].
- Hacihabiboglu et al. 2017. Perceptual Spatial Audio Recording, Simulation, and Rendering. IEEE Signal Processing Magazine. [online]. 34(3), pp. 36-54. DOI: 10.1109/MSP.2017.2666081 [Accessed 20 September 2018].
- Hall, D. Cardew’s Treatise: the greatest musical score ever designed. [online]. Available from: http://davehall.io/treatise-score-graphic-notation/ [Accessed 20 September 2018].
- Hamilton-Fletcher et al. 2017. Sound Properties Associated with Equiluminant Colours. Multisensory Research. [online]. 30(3-5), pp. 337-362. DOI: 10.1163/22134808-00002567 [Accessed 20 August 2018].
- Hanoch-Roes, G. 2003. Musical Space and Architectural Time: Open Scoring versus Linear Processes. International Review of the Aesthetics and Sociology of Music. Croatian Musicological Society. [online]. 34(2), pp. 145-160. Available from: https://www.jstor.org/stable/30032127 [Accessed 25 August 2018].
- Haque, U. 2007. The Architectural Relevance of Gordon Pask. In: 4d Social – Interactive Design Environments. Wiley & Sons, pp. 54-61.
- Hermann, T., Hunt, A. 2004. The Importance of Interaction in Sonification. Proceedings of ICAD 04 – Tenth Meeting of the International Conference on Auditory Display.
- Hermann, T., Hunt, A. 2011. Interactive Sonification. In: T. Hermann, A. Hunt, G. Neuhoff, eds. The Sonification Handbook. Berlin, Germany: COST Office and Logos Verlag Berlin, pp. 273-298.
- Hermann, T. 2011. Model-Based Sonification. In: T. Hermann, A. Hunt, G. Neuhoff, eds. The Sonification Handbook. Berlin, Germany: COST Office and Logos Verlag Berlin, pp. 399-427.
- Hofmann, B. 2006. Spatial Aspects in Xenakis’ Instrumental Works. In: M. Solomos, A. Georgaki, G. Zervos, eds. Definitive Proceedings of the “International Symposium Iannis Xenakis”.
- Hollier, M.P., Cosier, G. 1996. Assessing human perception. Speaking and Listening. BT Technology Journal, 14(1), pp. 161-169.
- Ikeda, R. 2013. A [for 6 silos]. [online]. Available from: http://www.ryojiikeda.com/project/A/ [Accessed 20 August 2018].
- Ikeda, R. 2015. Micro | Macro. [online]. Available from: http://www.ryojiikeda.com/project/micro_macro/ [Accessed 6 July 2018].
- Isaza, M. Designing Sound. 29 September 2014. Sonic Architecture. [online]. Available from: http://designingsound.org/2014/09/29/sonic-architecture/ [Accessed 20 August 2018].
- Itoh et al. 2017. Musical pitch classes have rainbow hues in pitch class colour synesthesia. Scientific Reports. [online]. 7(17781). Available from: https://doi.org/10.1038/s41598-017-18150-y [Accessed 20 August 2018].
- Jablonska et al. 2015. Sound and architecture — mutual influence. 6th International Building Physics Conference, IBPC. [online]. Energy Procedia, Volume 78. pp.31-36. Available from: https://www.sciencedirect.com/science/article/pii/S1876610215018421 [Accessed 20 September 2018].
- Jeff Hamada. 27 November 2014. Momentum: Interactive Tool Generates Sound and Visuals Based on Body Movements. [online]. Available from: https://www.booooooom.com/2014/11/27/momentum-interactive-tool-generates-sound-visuals-based-body-movements/ [Accessed 6 July 2018].
- Kaper, H.G., Tipei, S. 1999. Data Sonification and Sound Visualisation. Computing in Science and Engineering, 1(4), pp. 48-58.
- Keller, P.E., Janata, P. 2009. Embodied Music Cognition and Mediation Technology. Music Perception, 26(3), pp. 289-292.
- Kern, H. 1984. Time-Spaces. In: Technische Universität Berlin, ed. Bernhard Leitner Ton-Raum TU Berlin. Berlin. [online]. Available from: https://www.bernhardleitner.at/texts [Accessed 6 July 2018].
- Kling Klang Klong. Momentum. [online]. Available from: http://www.klingklangklong.com/momentum.html [Accessed 6 July 2018].
- Lacey, S., Martinez, M., McCormick, K., Sathian, K. 2016. Synesthesia Strengthens Sound-Symbolic Cross-Modal Correspondences. John Wiley & Sons.
- Leeds, J. Psychoacoustics, defined. [online]. Available from: http://thepowerofsound.net/psychoacoustics-defined/ [Accessed 5 January 2018]
- Leitner, B., Kargl, G., Groys, B. 2008. P.U.L.S.E. ZKM Books. Germany: Hatje Cantz Verlag.
- Leitner, B. 1977. Sound Space Manifesto. [online]. Available from: https://www.bernhardleitner.at/texts [Accessed 6 July 2018].
- Leman, M. 2008. Embodied Music Cognition and Mediation Technology. Cambridge, MA: MIT Press.
- Leman, M. 2012. Musical Gestures and Embodied Cognition. Actes des Journées d’Informatique Musicale (JIM 2012), pp. 5-7.
- Lemaitre et al. 2017. Rising tones and rustling noises: Metaphors in gestural depictions of sounds. PLoS ONE, 12(7): e0181786.
- Ludden, D. Psychology Today. 06 May 2015. Do sounds have shapes? Synesthesia and the bouba-kiki effect. [online]. Available from: https://www.psychologytoday.com/blog/talking-apes/201505/do-sounds-have-shapes [Accessed 25 November 2017]
- Macedo, F. 2015. Investigating Sound in Space: Five meanings of space in music and sound art. Organised Sound. [online]. 20(2), pp. 241—248. Cambridge University Press. Available from: https://doi.org/10.1017/S1355771815000126 [Accessed 23 August 2018].
- Marentakis et al. 2014. Vector-Base and Ambisonic Amplitude Panning: A Comparison Using Pop, Classical, and Contemporary Spatial Music. Acta Acustica united with Acustica. [online]. 100(2014), pp. 945-955. DOI: 10.3813/AAA.918774 [Accessed 20 September 2018].
- McGookin, D., Brewster, S. 2011. Earcons. In: T. Hermann, A. Hunt, G. Neuhoff, eds. The Sonification Handbook. Berlin, Germany: COST Office and Logos Verlag Berlin, pp. 339-361.
- Merriman, P. 2010. Architecture/dance: choreographing and inhabiting spaces with Anna and Lawrence Halprin. Cultural Geographies. [online]. 17(4), pp. 427—449. DOI: 10.1177/1474474010376011 [Accessed 20 August 2018].
- Mic, Kasulis, K. 23 May 2017. Data sonification lets you literally hear income inequality. [online]. Available from: https://mic.com/articles/177877/data-sonification-lets-you-literally-hear-income-inequality#.ZnABFPMvm [Accessed 6 July 2018].
- Moody, N. 2006. Motion as the Connection Between Audio and Visuals and how it may inform the design of new audiovisual instruments.
- Moritz, W. 1986. Towards an Aesthetics of Visual Music. In: Asifa Canada Bulletin. Montreal: ASIFA Canada, Vol. 14: 3.
- Muckli et al. 2014. Decoding Sound and Imagery Content in Early Visual Cortex. Current Biology. [online]. 24(11), pp. 1256-1262. DOI: 10.1016/j.cub.2014.04.020 [Accessed 25 August 2018].
- Munoz, E. 2007. When gesture sounds: Bodily significance in musical performance. International Symposium on Performance Science, pp. 55-60.
- Norman, D. 2013. The Design of Everyday Things. Revised and expanded ed. New York: Basic Books, A Member of the Perseus Books Group.
- Pangaro, P. 1996. Cybernetics and Conversation, or, Conversation Theory in Two Pages. In: Communication and Anti-communication, American Society for Cybernetics.
- Parseihian et al. 2015. Design and perceptual evaluation of a fully immersive three-dimensional sound spatialization system. 3rd International Conference on Spatial Audio, ICSA, Graz, Austria. [online]. Available from: https://hal.archives-ouvertes.fr/hal-01306631 [Accessed 20 September 2018].
- Pask, G. 1968. A comment, a case history and a plan. In: Reichardt, J. ed. Cybernetic Serendipity. London, UK: ICA, pp. 76-99.
- Pask, G. 1969. The Architectural Relevance of Cybernetics. Architectural Design, September issue, No 7/6. London: John Wiley & Sons Ltd, pp. 68-77.
- Patapkina, A. 14 November 2017. Bernhard Leitner. [online]. Available from: http://www.azucarmag.com/bernhard-leitner/ [Accessed 6 July 2018].
- Patteson, T. 2012. The Time of Roland Kayn’s Cybernetic Music. In: A. Altena, ed. Traveling Time: Sonic Acts, Sonic Acts Press, pp. 494-6.
- Pavlus, J. 03 May 2014. What Impossible Music Looks And Sounds Like. [online]. Available from: https://www.fastcodesign.com/3027119/a-new-musical-form-thats-part-electronica-part-data-viz [Accessed 5 January 2018]
- Plack, C. 1 April 2005. The Musical Ear. [online]. Available from: https://nmbx.newmusicusa.org/the-musical-ear/ [Accessed 5 January 2018]
- PMC, Lightform. Skeleton Sound. [online]. Available from: http://projection-mapping.org/skeleton/ [Accessed 6 July 2018].
- Proctor, R.W., Proctor, J.D. eds. 2012. Handbook of Human Factors and Ergonomics. 4th ed. John Wiley & Sons, Inc., Salvendy, G.
- Ramachandran, V.S. and Hubbard, E.M. 2001. Synaesthesia – A Window Into Perception, Thought and Language. Journal of Consciousness Studies, 8(12), pp. 3-34.
- Rastopov, D. 14 January 2013. Time and Space in Sound Installations. [online]. Available from: http://www.rastoropov.co.uk/time-space-sound-installations/ [Accessed 20 August 2018].
- Reybrouck, M. 1997. Gestalt Concepts and Music: Limitations and Possibilities. In: M. Leman, ed. Music, Gestalt and Computing. Studies in Cognitive and Systematic Musicology. Berlin, Heidelberg: Springer Verlag, pp. 57-65.
- Roberts, G.E. 2012. Composing with Numbers: Iannis Xenakis and His Stochastic Music. 2 March 2012. Math/Music: Aesthetic Links. [online]. Available from: http://mathcs.holycross.edu/~groberts/Courses/Mont2/2012/Handouts/Lectures/Xenakis-web.pdf [Accessed 5 January 2018]
- Rudenko, S., Serrano, M.J.C. 2017. Musical – Space Synaesthesia: Visualisation of Music Texture. Multisensory Research 30 (2017), pp. 279—285.
- Rusconi, E. et al. 2005. Spatial representation of pitch height: the SMARC effect. Cognition 99 (2006), pp. 113—129.
- Ryan, D. 2012. Rorschach Audio – Art and Illusion for Sound. Book Review. Art Monthly. Issue 360, pp.37-38.
- Schafer, S., Krebs, J. 2003. Sound — Time — Space — Movement: the Space-soundInstallations of the artist-couple 〈sabine schäfer // joachim krebs〉. Organised Sound. In: Nuhn, R., Dack, J., eds. [online]. 8(2), pp. 213-225. Cambridge: Cambridge University Press. Available from: https://doi.org/10.1017/S1355771803000128 [Accessed 20 August 2018].
- Schnellebuntebilder. Momentum. [online]. Available from: http://schnellebuntebilder.de/projects/momentum/ [Accessed 6 July 2018].
- SkAT-VG. Sketching Audio Technologies using Vocalizations and Gestures. [online]. Available from: http://skatvg.iuav.it/?page_id=14 [Accessed 6 July 2018].
- Socks Studio. 8 January 2014. Yannis Xenakis’ Polytopes: Cosmogonies in Sound and Architecture. [online]. Available from: http://socks-studio.com/2014/01/08/yannis-xenakis-polytopes-cosmogonies-in-sound-and-architecture/ [Accessed 20 September 2018].
- Some/Things Magazine. 30 July 2016. SOME/ART: MICRO | MACRO BY RYOJI IKEDA. [online]. Available from: https://www.someslashthings.com/online-magazine/2016/7/30/micro-macro-by-ryoji-ikeda [Accessed 6 July 2018].
- Spence, C. 2011. Crossmodal correspondences: A tutorial review. Atten Percept Psychophys. [online]. 73(4), pp. 971—999. Available from: https://link.springer.com/article/10.3758/s13414-010-0073-7 [Accessed 20 August 2018].
- Stamp, J. 5 June 2013. 5 1/2 Examples of Experimental Music Notation. [online]. Available from: https://www.smithsonianmag.com/arts-culture/5-12-examples-of-experimental-music-notation-92223646/ [Accessed 5 January 2018].
- Stanford News. 3 February 2012. Aphasia: A Stanford music professor’s work, with hand gestures and odd sounds, about obsessive attention to ridiculous things. [online]. Available from: https://news.stanford.edu/news/2012/february/applebaum-aphasia-music-020312.html [Accessed 6 July 2018].
- Sterken, S. 2009. Immersive Strategies in Iannis Xenakis’s Polytopes. Immersed. Sound and Architecture. [online]. OASE (78), pp. 116-120. Available from: https://www.oasejournal.nl/en/Issues/78/ImmersiveStrategiesInIannisXenakissPolytopes [Accessed 20 August 2018].
- Sterken, S. 2007. Music as an Art of Space: Interactions between Music and Architecture in the Work of Iannis Xenakis. In: M. Muecke, M. Zach, eds. Resonance. Essays on the Intersection of Music and Architecture. USA: Culcidae Architectural Press; Ames, pp. 31-61.
- Tedx Stanford. May 2012. Applebaum, M., The Mad Scientist of Music. [online]. Available from: https://www.ted.com/talks/mark_applebaum_the_mad_scientist_of_music/discussion [Accessed 20 August 2018].
- Theoriz. Noisy Skeleton. [online]. Available from: http://www.theoriz.com/portfolio/noisy-skeleton/ [Accessed 6 July 2018].
- University of Glasgow. 25 May 2014. Sound and Vision: Visual Cortex Processes Auditory Information Too. [online]. Available from: https://www.gla.ac.uk/news/archiveofnews/2014/may/headline_333617_en.html [Accessed 20 August 2018].
- Varela, F.J., Thompson, E., Rosch, E. 1991. The Embodied Mind: Cognitive Science and Human Experience. Cambridge, Massachusetts: The MIT Press.
- Waves. 10 October 2017. Ambisonics Explained: A Guide for Sound Engineers. [online]. Available from: https://www.waves.com/ambisonics-explained-guide-for-sound-engineers [Accessed 6 July 2018].
- Wired. 10 June 2017. THIS AUDIO INSTALLATION SUBMERGES YOU IN SOUND. [online]. Available from: https://www.wired.com/story/envelop-spatial-audio/ [Accessed 20 September 2018].
- Woodruff Health Sciences Center. Emory News Center. 12 September 2016. Sensory connections spill over in synesthesia. [online]. Available from: http://news.emory.edu/stories/2016/09/synesthesia_sathian/index.html [Accessed 25 November 2017]