James Andean ; Alejandro Olarte
Sibelius Academy, University of the Arts Helsinki (Finland)
Technological change over the past decades has seen electroacoustic music move increasingly out of the studios, onto the stage, and from there to broader, more varied, and more flexible performance contexts. This in turn has brought electroacoustic and electronic tools and methodologies into ever more intimate contact and collaboration with the full range of arts practices, from other musical forms, to the other performing arts, and beyond. While none of this is entirely new in and of itself – performance collaboration has been a part of electroacoustic practice since the earliest days of the form – we see today a level of integration that begins to dissolve boundaries between genres and between art forms. As a result, performer roles expand beyond previous limits and borders; practices shift and lines blur; and the notion of electroacoustic performance practice becomes less clearly outlined, dissolving instead into a more fluid pool of performance possibilities, opportunities, and affordances.
These developments pose a number of challenges, as electroacoustic performance practice is reconfigured, renegotiated, and reinterpreted as it evolves and dissolves into these fluid, malleable, and transitioning performance contexts. This paper will examine some of the consequences and issues this implies for electroacoustic performance practice, focusing specifically on the context of interdisciplinary improvisation, as this latter arguably represents a particularly clear case of both a fluid performance situation, and of the dissolution or renegotiation of boundaries between practices and art forms. Supporting references will be drawn from the Helsinki-based Research Group in Interdisciplinary Improvisation, as well as related performance and research groups, including Sound & Motion, the Liikutus project, and the Helsinki Meeting Point. We will also briefly discuss some of the methodological issues and approaches which may be particularly well-suited for research in these areas.
The projects mentioned above involve collaborative improvisation between performers from a range of fields, including sound, music, theatre, performance art, dance, studio arts, and film and video. While there are a number of fascinating research questions involved with such collaboration, we will focus on those which hold most relevance for electroacoustic practice.
The first of these involves the incorporation of technological means and media by a number of the practitioners and performers involved, for example musicians, sound artists, and video artists. How does the inclusion of digital and other technologies affect the ability of individual performers and the group as a whole to achieve ‘now-ness’, to be fully engaged with the moment, as is so critical in improvisation? Many of these tools rely on modes of use which involve forms of analysis and preparation – coding or patching, for instance – which most commonly take place outside of the performance context. Can such acts be made performative, and if so, can they be sufficiently improvisatory? Or are the cognitive modes involved too far removed from those required for full presence and commitment to free and spontaneous improvisation? If, on the other hand, performance aspects of digital tools are prepared in advance, are they still fully suitable to the free improvisation context, or does this degree of preparation beforehand consist of a level of creative planning which is potentially antithetical to the freedom to follow where a freely improvised performance might lead? Are performers who rely on such tools fully able to integrate with others – performance artists, dancers, instrumentalists, studio artists – whose output does not rely on technological resources, or is there a noticeable difference in organic and spontaneous involvement?
Questions more specific to sonic performers involved in interdisciplinary improvisation can also be raised here. The use of electronics – a laptop, for instance – can result in a lack of visible performance gesture. Is this problematic in communicating with other performers, or across art forms? Sonic performance gesture as a visual cue can be useful on a number of levels, from identifying a given sound source among a group of performers, to offering a preliminary level of engagement. Does its absence prevent the necessary level of fluid communication and interaction?
Perhaps most importantly, what transformations do electroacoustic performance practices undergo as a result of this meeting and melting of artforms? This occurs on two levels: first, how does the electroacoustic performer engage with performers from the other performance arts; second – and perhaps more relevantly – what happens to electroacoustic practice in a context in which the role of a given performer transcends boundaries, passing fluidly from sound, to movement, to more theatrical aspects, or to drawing, painting, or film? Or, more likely, in a performance act which combines aspects of any number of these into a single expression? Does electroacoustic practice dissolve in this multifaceted artistic pool? Or are there core elements which remain firm, and are retained across and throughout this transcendence of historical performance categories?
In tackling such questions in a performance research context, as with any research-oriented work, one must choose the methodological framework best suited to the project at hand, in terms of both research process and desired results. Specific methodological challenges faced here include those generally encountered when the object of research is process rather than artistic outcomes or artefacts; the acknowledged difficulty of improvisation as an object of research, due to its profoundly ephemeral and transitory nature; and the extremely broad interdisciplinarity at the core of the projects, which calls for research strategies that take this into account, such as Gibbons’ ‘Mode 2’ research (Gibbons et al. 1994) and Wasser & Bresler’s ‘interpretive zones’ (Wasser & Bresler 1996; Hoel 1999).
For these reasons, among others, projects such as those discussed here tend to turn towards relatively recent methodologies of artistic research, beginning with practice-based research, or, further, practice as research, which perhaps better recognize some of the challenges which are particular to performance research (Barrett & Bolt 2007; Borgdorff 2012; Smith & Dean 2009). Further, these projects employ a methodology drawn from action research, in which the more traditional process of hypothesis – observation – conclusion is replaced by a cyclical model of planning – action – observation – reflection – repeat, or reflect – plan – act – observe – repeat (Carr & Kemmis 1986). A primary difference here is the emphasis on observation and reflection, with a corresponding lack of emphasis on conclusions; this is perhaps appropriate to the context under consideration, as it is difficult to claim definitive conclusions in a performance context which is under constant reinterpretation and renegotiation, as it is arguably impossible to establish an epistemological framework sufficiently stable for the research results to be claimed as definitive, or to be rigorously transferable beyond the confines of a given research context.
Despite such drawbacks, the strength of such research processes – focused more on finding new or novel questions than on answers, and more on opening further avenues for development and exploration than on conclusive determinations – is regularly demonstrated in research areas such as the one considered here, due to their flexibility, their ability to remain responsive to multiple disciplines and paradigms simultaneously, and their ability to approach organic or intuitive artistic processes as suitable research subjects. As employed here, they carry strong implications for electroacoustic performance practice as it continues to reinvent itself in the increasingly mobile, shifting, and multi-faceted performance contexts in which today’s performers increasingly find themselves engaged.
Andreas Bergsland ; Asbjørn
Norwegian University of Science and Technology (Norway)
This paper explores meaningful relationships between voice and aural architecture, and between reverberation and resonance, implicit in realizations of Alvin Lucier’s I am sitting in a room, expanding the range of possible meanings beyond the fixed media versions of this electroacoustic classic.
The interior of a building always carries its own sound: it contains what Blesser and Salter term an aural architecture. The aural architecture is, in this respect, an equivalent of the physical space of the building – its volumes, its geometrical construction, and the materials making up its surfaces. The aural architecture of a specific architectural space exerts its major influence through the sound sources situated in that particular space, and through the way sound from those sources is reflected and diffused by the architecture. In this sense, the aural architecture also influences our moods and associations in the listening experience.
The aural experience of an interior is closely connected to the term reverberation. In the physical sense, reverberation refers to sound reflections from nearby surfaces; reverberation thus conveys the size and shape of the space, as well as the materials of its surfaces.
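The dependence of reverberation on room volume and surface materials can be made concrete with Sabine’s classic approximation, RT60 ≈ 0.161 · V / A. The following minimal sketch uses hypothetical room dimensions and absorption coefficients, chosen only for illustration:

```python
# Sabine's approximation: RT60 ≈ 0.161 * V / A, where V is the room volume
# in cubic metres and A the total absorption in square-metre sabins.
# Room dimensions and absorption coefficients below are hypothetical.

def rt60_sabine(volume_m3, surfaces):
    """surfaces: list of (area_m2, absorption_coefficient) pairs."""
    total_absorption = sum(area * alpha for area, alpha in surfaces)
    return 0.161 * volume_m3 / total_absorption

# A 10 m x 8 m x 4 m room (320 m^3) with plastered walls and ceiling
# and a wooden floor: hard surfaces yield a long, "live" reverberation.
room = [
    (2 * (10 * 4 + 8 * 4), 0.03),  # walls, plaster
    (10 * 8, 0.03),                # ceiling, plaster
    (10 * 8, 0.10),                # floor, wood
]
print(round(rt60_sabine(320, room), 2))  # about 3.5 seconds
```

Swapping the same geometry to more absorbent materials (carpet, curtains) shortens RT60 sharply, which is the sense in which the aural architecture conveys the materials of its surfaces.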
In addition, the term resonance overlaps to a certain degree with reverberation, in the sense that it can also imply sound reflection. However, resonance also involves the synchronous vibration of a surrounding space or a neighbouring object.
In this paper, we want to address the concept of resonance not only by means of a physical definition. Resonance also carries a meaning concerning mental imagery: the power or quality of evoking or suggesting images, memories, and emotions, thus referring to allusions, connotations, or overtones. In this respect, the term corresponds with aural architecture and its influence on our moods and associations in the listening experience.
To address these issues, we have chosen Alvin Lucier’s I am sitting in a room as our case study. This work makes it possible to explore how the aural architecture of an interior transforms the initial voice to the point of unintelligibility. At the same time, by suggesting an augmentation of the elements of voice and room in this piece, it becomes possible to discuss how the work can trigger mental images for the audience. Specifying the story told by the voice, and setting the work in a specific room, will guide the mental imagery of the audience on the basis of memory. The perceived space of the work will then be a combination of the composed space and the listening space.
Like several other electroacoustic works from the 60s and 70s, I am sitting in a room has had a dual existence since it was composed in 1969. On one side, it has existed, and still exists, in fixed form as a “tape composition”. In particular, two versions made by Lucier himself in 1970 and 1980 have been distributed on LP and CD, and the literature on electroacoustic music and sound art has frequently made reference to these. For instance, in both Broening’s and LaBelle’s analyses, the central issues are Lucier’s stutter, the implicit references to it in the text, and the smoothing out of these “irregularities” by the gradual unfolding process. Moreover, these more or less canonized recordings – particularly the latest – have been frequently performed, i.e. played back, in acousmatic concerts, and, as Collins has noted, have thereby made a point of bringing “the public into a private space”, into a room “different from the one you are in now”. Thus, the work seems to exist in a relatively “closed” and authoritative form, in which Lucier’s own voice (and stutter), his own text (with its reference to the stutter), and the rooms he has chosen (smoothing out the stutter) are crucial components, and which therefore restricts the range of possible meanings associated with the work.
On the other side, however, and perhaps more in recent years, the work has existed in the form of live realizations of the written score, which provides a set of instructions for the performance and a text passage that can be used. These versions are nevertheless by no means firmly bound to Lucier’s own realizations, since the score explicitly opens up for “any other text of any length”, as well as encouraging more experimental versions using different speakers, different (and multiple) rooms, different microphone placements, and lastly, “versions that can be performed real time”. Hence, the score seems to open up possibilities for a much wider range of realizations, which would also naturally widen the scope of possible meanings for a perceiver. By choosing speakers with a particular story or memory from a certain room, and letting the performance take place in that room, we contend that the work can take on another level of meanings in which, contrary to Lucier, the relationship between the “I” and the “room” is not arbitrary, but highly significant.
In our paper, we would like to exemplify how such a realization can create a heightened emotional and intellectual experience, both for performers and audience, highlighting the room not only as an acoustic environment but as a trigger of memories, imagery, and emotions. We discuss a realization of the work in which Håkon, an anthropologist in his early 40s, was taken back to an empty hospital room where, a few decades earlier, he had visited his grandfather for the last time before he died. The room itself was thus an important factor in bringing back memories of a highly emotionally charged situation, which subsequently amplified the recollection process for him, giving the telling of his story increased actuality and impact. And, like Lucier, he started by verbally locating himself, but Håkon spontaneously rephrased the opening of the text from the score – “I am sitting in that room...” – thereby making the link between his memories and the space explicit.
Subsequently, in the process whereby his story was iteratively played back in the room, the resonances that gradually built up created not only a sonically pleasing process, but also a strong metaphor for the interrelationship between Håkon and the room he was sitting in, playing on both the physical and the mental meaning of the concept of resonance, as presented above. The room resonated in Håkon, triggering mental images of how he had experienced the room in the past; then, by telling his memory, and letting the sound of himself telling it progressively excite the physical resonances of the room, he could follow the room as it “took back” his memory while gradually transforming it into an aesthetically pleasing object. The process thereby had an element of consolation and catharsis to it, which could also be felt by those observing it from the outside. Lastly, in our presentation we suggest a branch of realizations that might use this piece both as a form of music therapy and as a way of creating a new level of meaning and meaningfulness for audiences observing such a process, and we problematize the degree to which such realizations are localized in the peripheral zone of the space of realizations that Lucier’s score delineates.
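The iterative playback at the heart of Lucier’s process can be sketched computationally as repeated convolution with a room impulse response: with each pass, the room’s resonant frequencies are reinforced and the broadband voice flattens into pitch. The impulse response below is a hypothetical toy stand-in for a measured room, used only to make the mechanism visible:

```python
import numpy as np

# Toy sketch of Lucier's process: each pass "plays back" the previous
# recording in the room, i.e. convolves it with the room's impulse response.
# The impulse response below is a hypothetical stand-in for a real room.

def lucier_iterations(voice, room_ir, passes):
    """Return the signal after `passes` playbacks through the room."""
    signal = voice
    for _ in range(passes):
        signal = np.convolve(signal, room_ir)
        signal = signal / np.max(np.abs(signal))  # renormalize, as a recorder's gain stage would
    return signal

rng = np.random.default_rng(0)
voice = rng.standard_normal(1024)  # broadband stand-in for speech
room_ir = np.zeros(256)
room_ir[0] = 1.0
room_ir[100] = 0.8                 # one strong reflection -> comb-filter resonances

out = lucier_iterations(voice, room_ir, passes=8)
# With each pass, energy concentrates around the room's resonant frequencies,
# and the broadband "speech" becomes an increasingly pitched drone.
```

Measuring spectral flatness before and after the passes shows the drop from noise-like to resonant spectrum that listeners hear as the voice dissolving into the room.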
Manuella Blackburn ; Alok
Liverpool Hope University (England)
In this paper, the author takes particular interest in the collection of sound material from musical instruments (for use in both acousmatic and mixed works) and how the composer manages creative intent and concepts while collaborating with a performer. Interactions at this stage ultimately impact upon the sound material collected as well as the final composition. The frontier for exchange during these composer / performer encounters enables collaborative work to flourish – but what are the optimum conditions for a successful recording session? Is there a requisite limit or a bare minimum on how prescriptive one should be as a composer when directing the performer in order to avoid confining the creative possibilities of one’s own imagination or the performer’s own input? And how does one navigate the same situation cross-culturally with foreign and ethnic instruments where unfamiliar performance practice traditions and language barriers may exist?
It is common to interact with object sound sources (e.g. keys, coins, a slinky, etc.) in an exploratory fashion, prising out unusual gestures and textures while always on the lookout for those happy accidents that might lend themselves well to the transformation process in the studio. With instrumental sound sources, where a performer is involved, the same exploratory activity may not be immediately possible, and we must therefore effectively communicate to the performer our request for specific experimentation with sound types and timbres. Approaches to this activity differ from composer to composer, and modes of collaboration between composer and performer change as a result. How we, as composers, conduct this sound capturing process is led ultimately by what we want to work with in the studio. Drawing on composer interviews, existing repertoire, and previous noteworthy collaborations, I aim to propose, and distinguish between, the following modes of collaboration:
• Instructive / directional: The composer is prescriptive in outlining how and what the performer is to play.
• Explorative / interactive: Details of material remain somewhat unspecified. Some loose ideas and concepts may be discussed beforehand. Contributions from both sides allow a creative exchange to flow.
• Unstructured: An open session where the performer is given free rein / carte blanche to decide what to play. A typical example is when a performer demonstrates extended techniques specific to their instrument – the composer acts as a listener and thus learns directly from this process what the available sound possibilities are.
Two further distinctive situations are worthy of discussion:
• The composer becomes the performer: The composer experiments with an instrument on which they have no formal training as a means of generating sounds. This also applies to situations where the composer performs or plays with objects (not instruments as such), often in unconventional ways.
• Adapting to source: The composer adapts to a sound source or performer, as in on-the-fly field recordings (e.g. recording environmental sounds, street performers, etc.) where the composer cannot intrude upon or affect the sounding outcome. All adaptations here refer to technical considerations (e.g. position, microphone handling, and volume control on the field recorder).
This paper examines the authority and instructive role of the composer in the recording studio, along with how one might take ownership of these captured sound materials in future creative work. Finding oneself within material generated by others (sounds, notes, phrases, motifs, and even melody lines), especially from unfamiliar cultures and contexts, can be challenging. This part of the paper draws upon first-hand accounts of collaborating with Milapfest (the UK’s Indian arts development trust) in building an online sound archive of Indian musical instruments as part of an ongoing educational outreach program at Liverpool Hope University. The sound archive material came to exist as a by-product of collecting sound material for my own creative work (two new electroacoustic music works exploring the use of culturally significant sound material). A significant proportion of this research project (supported by the Arts and Humanities Research Council, UK) involves individual recording sessions with approximately 25-30 instrumentalists from this highly specialised performance tradition. This raises important issues regarding cross-cultural exchange and what, as an electroacoustic music composer, I might achieve sonically from exploring their practice, along with the question of how and what the performers take away from these encounters. Within the ‘give and take’ of a cross-cultural collaboration, I pose the question of how far it is possible to exert one’s creative and personal compositional voice under each different mode of collaboration. As creative projects evolve, take shape, and are eventually performed, how is the performer’s reception of the final work informed by the early-stage collaboration between composer and performer? The collection of both idiomatic and unconventional sound materials provides a discussion point within this discourse, which will be supported by personal perspectives and those of the performers involved in this collaborative process.
Martin-Luther-Universität Halle-Wittenberg (Germany)
Did composer X know the work of Y when he wrote his very important piece Z? Was inventor P influenced by the discovery of V when he developed his procedure of sound generation? Did sound artist W know the special conditions of festival R, where his piece S was first performed, and did he know about the context of the sound installations presented there? Questions like these are quite usual in the arts and in musicology, inasmuch as the latter discipline is interested in social matters at all.
The phenomenon that comparable things develop at the same time under more or less comparable conditions has always interested the arts from the perspective of dependence and independence. Nevertheless, network theory in the narrow sense has not so far been seriously employed in discussing the appearance of artistic inventions.
Musicologists usually love to discover relations and interactions – often much more than simple parallels. In this age of social networks it often appears unimaginable that something could develop without another’s knowledge; that it was possible to do something on one side of the ocean that was not noticed on the other; that there was a time – not long ago – in which real spatial distances were equally absolute in matters of time. And – marking thereby also somewhat economic differences – these distances could be interpreted as social distances, and thus in another way as cultural distances as well, although the internal structure of developing subcultures, or even parallel societies, at different ends of the world is quite similar if the conditions are comparable.
When, in our research in the arts today, we raise or are confronted with questions such as whether and what Pierre Schaeffer, for example, knew about the history of the “Hörspiel” in 1920s Germany, or how close Werner Meyer-Eppler’s contacts with researchers in the USA were, or which sounds really got through the “Iron Curtain” from Western to Eastern Europe and back, or other similarly hypothetical questions, we have to reflect on the real as well as the potential relations. It is not only the manifest contacts, or the library of this or that musician, and so on, that have to be taken into account when we are looking for relations and interpenetrations, but also the possibility of communication: the existence and potential of obvious or latent networks. Every researcher makes assumptions against a backdrop of communicative experiences. As everybody knows, this is a latent danger facing every researcher in the arts and humanities. Working with young students, we always have to ask questions such as: How did communication happen in a world in which self-promotion via Facebook or YouTube was not yet possible? How were the results of communication stored and kept accessible without the internet? The internet never forgets, as we know, but people do forget, and soon. Thus, how do we manage, within our research approaches, the resulting differences of archiving and storing in a broader sense?
This background of experience may be a problem reflected in every discipline, but within the study of electroacoustic music it could have an important and interesting function.
This talk is about the development of network structures within electroacoustic music and its study, from a historiographical perspective. It takes as a departure point the idea that when people first became interested in using electricity for the production and reproduction of a new and modern kind of music, what developed can, in terms of social theory, be seen as a certain kind of parallel society. From its beginnings, this parallel society developed a structure which, from today’s perspective, can be seen as a network in the narrow sense defined by network theory. The reasons for this can be found both in the differences between what was first called electric, and later electroacoustic or electronic, music and the social world of traditional music, and primarily in the new and multilayered demands this music placed on the people involved on all sides of the music market.
The positioning between the disciplines – music, communication and media sciences, engineering, and so on – between the markets – classical music, radio art, sound art, technologies, and others – and thus between their audiences developed a structure which could be understood and analysed with the help of network theory long before this approach became important within the social and communication sciences.
Thus, we have to distinguish between the network as a social theoretical approach, the network as a technological term and networking moments as parts of practical social life structure.
In our approach the first and the last of these three are of special interest – but in order to compare possible perspectives we introduce the idea of parallel societies to provoke a kind of dialectic discussion.
First of all, we tell the history of electroacoustic music as interlinked with the development of a network, or networks, and with that of very strong parallel societies. It is thus, first of all, a discussion of the appropriateness of social theories and approaches for dealing sociologically with electroacoustic music.
For that purpose we go quite far back in history and use the contemporary situation of a nearly global networking society for comparison; we consider the social structures of electroacoustics from their beginnings in early radio to the point when computer music fostered the development of another social sphere. The networks we are talking about thus developed first of all around the big traditional studios.
Returning on another level to the position of the researcher in the era of social networks, the talk will ask at its end how electroacoustic studies can profit from the analysis of the network character on the one hand, or that of a parallel society on the other; how they are part of these; and how this can be reflected.
Although today nearly all information is available to everyone, we often do not know what our neighbour is doing about the same problems we are dealing with, because s/he may be interlinked with other networks. Thus, at the end, it is shown in a provocative manner how networks can quickly take on the classic risks of parallel cultures.
Alain Bonardi ; Frédéric Dufeu
Université Paris 8 / Ircam (France) ; University of Huddersfield (England)
For the musicologist, the computer environment designed for the performance of an interactive mixed work can constitute an important object of analysis. Attention paid directly to the program on which the performance relies makes it possible to identify unambiguously the consequences of the input information, derived from instrumental or vocal playing, on the musical and sound processes. The aim of such an approach, organological in nature, is to evaluate the expressive range of the digital instrument and, through it, the interpretative potential of the musical work. In cases where the computer programs are available and still usable, their high degree of customization poses a significant difficulty for the musicological examination of a set of cases: from one composition to another, the musical or sound processing units and their interconnections vary both in nature and in implementation. Although the instrumental behaviors encountered throughout the mixed-music repertoire share a certain number of common traits, each study is likely to call for an entirely renewed approach, with no significant benefit drawn from previous experience.
The objective of our paper is to present modelling perspectives for the behavior of digital musical instruments. We will draw on an experiment using the formalisms of fuzzy logic in section III of Philippe Manoury’s Pluton. We use the FuzzyLib library developed by Alain Bonardi and Isis Truck in the Max/MSP environment, which can therefore be connected directly to the Pluton patch. In the spirit of Computing with Words as described by Lotfi Zadeh, this formalism allows us to describe the phenomena at the input and output of the transformation modules semantically, moving from physical measurement back to an appreciation rooted in perception; it is also possible to reason at this semantic level through inferences. In fuzzy logic, one can thus work with the musical vocabulary of dynamics and tempi (pianissimo, moderato, etc.) in a non-binary fashion, rather than with exclusive ranges of numerical values, and describe the transfer functions through fuzzy rules.
The example below shows how two linguistic variables, called dynamics and ambitus, were modelled in section III of Manoury’s Pluton. Each has two classes, associated with fuzzy subsets, represented in the diagram by ascending and descending functions corresponding to a degree of truth; the first is based on the notions of pianissimo and fortissimo, the second on unison and wide. Two fuzzy rules were defined, linking the two variables:
• If dynamics is pianissimo, then ambitus is unison
• If dynamics is fortissimo, then ambitus is wide
An inference engine based on the principle of the generalized modus ponens carries out the reasoning and makes it possible to deduce the ambitus as a function of the input value of the dynamics.
More generally, when a variable has five classes (for dynamics, for example: piano, mezzo piano, mezzo forte, forte, fortissimo), an algorithm created by Isis Truck shapes and distributes the fuzzy subsets automatically, on the basis of experimental data retrieved directly from the patch under study.
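Independently of FuzzyLib, the two rules above can be illustrated with a minimal Mamdani-style inference sketch in Python. The membership breakpoints and numeric ranges (a velocity scale of 0-127, ambitus in semitones) are our own illustrative assumptions, not values taken from the Pluton patch:

```python
# Minimal fuzzy-inference sketch of the two rules above, outside Max/MSP.
# Membership breakpoints (velocity 0..127, ambitus in semitones) are
# hypothetical illustrations, not those of the actual Pluton patch.

def mu_pianissimo(vel):
    """Descending membership: fully true at 0, false from velocity 90 upward."""
    return max(0.0, min(1.0, (90 - vel) / 90))

def mu_fortissimo(vel):
    """Ascending membership: false up to velocity 30, fully true at 127."""
    return max(0.0, min(1.0, (vel - 30) / (127 - 30)))

def infer_ambitus(vel, unison=0.0, wide=24.0):
    """Weighted (centroid-style) defuzzification of the two rule outputs:
    'if pianissimo then unison' and 'if fortissimo then wide'."""
    w_pp, w_ff = mu_pianissimo(vel), mu_fortissimo(vel)
    return (w_pp * unison + w_ff * wide) / (w_pp + w_ff)

print(infer_ambitus(0))    # quietest playing -> ambitus at unison (0.0 semitones)
print(infer_ambitus(127))  # loudest playing  -> wide ambitus (24.0 semitones)
```

For intermediate dynamics, both rules fire partially and the inferred ambitus falls between the two extremes, which is precisely the non-binary behavior that distinguishes fuzzy rules from exclusive numeric ranges.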
This approach makes it possible both to model the control, and hence the behavior, of the patch in terms linked to perception, and even to replace it or to contribute to its preservation, since it constitutes a form of abstract notation of a software-dependent implementation. Starting from a particular case of human-machine interaction found in a work emblematic of the use of real time, we will discuss the relevance of this modelling for the analysis of mixed music, and its prospects for development toward formalisms generalizable to a large part of the digital instruments of the contemporary repertoire. This approach constitutes a contribution to the organology of real-time sound production devices.
Université Rennes 2 / MINT-OMF Université Paris-Sorbonne (France)
When the user can control a machine based on what he perceives from that machine, a loop is created. By this definition, an acoustic musical instrument could be considered interactive. In this case, however, the behavior of the machine is determined by its initial, physical construction, and the machine doesn’t actually interact with the musician. The instrumentalist’s action is a gesture, and he listens to the resulting sound. The loop is thus asymmetrical: the instrument is a device that doesn’t act by itself.
On the contrary, and we will insist on this difference and other particularities in order to introduce new developments in musicology, an electroacoustic device can act as an operative machine by itself. Its response to a request by the musician depends not only on the instrumentalist's gesture and on its mechanical construction; its answer also depends on how it has been programmed by the composer or his assistant. An electroacoustic device involves real interaction because, for a given event, the electronics can be designed with a specific structure and a specific behavior. It is "composable" in the same sense that we say a score is composed. In the interactive exchange, the answer depends on the question received and on the rules determined by the will of the composer. These rules define a complex behavior which poses new problems for musicological analysis. Never before has this sort of loop been involved in music.
Moreover, the structure of the electroacoustic device network and its behavior are defined dynamically during the piece. Such devices are often constructed as networks of modules, changing during the piece and exchanging information flows, and they can be partly random and partly a learning process. Interactivity can be stable and balanced at a given time, but it evolves along the temporal axes of the music being composed or being played. It may also be endowed with memory, and its reaction may be shifted in time: that is, it may take account of past control values or past sounds. Since the early modular synthesizers, the principle of the modular network, already mechanically implemented in the church organ, has taken on another dimension. This time, a function representing a sound wave can interact with another function of the same type. Sound can modulate another sound, and all kinds of combinations are theoretically possible. This process is utterly different. When technology became digital, sequential behaviors with all sorts of serial and parallel branches became implementable, e.g. with a qlist. And a network of modules can work by relying on another temporal structure. All forms of manipulation are possible, including asynchronous structures using the memory of what happened before.
The principle of the network can be local, as in a device comprising several modules. At first sight, this seems to be the oldest stage in the evolution of complex electroacoustic devices. However, the first examples of concerts involving multiple sites in one way or another appeared when Telharmonium sounds were conveyed over the telephone network, and later when, in Schaeffer's time, tape recorders proved too heavy to be carried into the concert hall. More recently, a complex patch in Pure Data or in Max/MSP can be conceived and understood as a network of local modules. It can also be spread out, as computing power distributed over several computers or as modules communicating through MIDI, UDP, or IP-based networks, in one or several locations. But still, the notion of behavior and of operative rules underlies the functioning of the whole.
From various examples, my reflection will initially involve a typology of behavioral interactions. This part will consist of a systematic attempt to classify the corpus of musical works. From this notion of behavior, it will construct new methodological approaches to develop as general a musicology and analysis of electroacoustic music as possible in the context of an interactive approach and of networks.
The concept of behavior is deterministic in itself. For example, a sound can be generated by a system from an action on a human/machine interface. The result is determined not only by the action of the player, but also by the behavior of the whole system. If it were programmed differently, the same action of the interpreter could trigger another transformation of timbre, a reverberation, a chord progression, a melody, a rhythm, automatic monitoring, or any other type of event. But each time, the instrument's response is controlled and depends on the wishes of the composer. More specifically, this response is inseparable from musical creation, writing, style, and aesthetics. The composer needs to "write" the instrumental response to the demands of the interpreter at any given moment of the work; or this reaction may take place moments or minutes later. Moreover, nothing prevents the composer from introducing a random element into this behavior, or from using former events coming from the same interpretation or even from previous concerts. Examples will be proposed.
In summary, the concept of behavior within a coherent system can be precisely defined in nine statements:
1. A behavior has an identity of its own, and only one.
2. At any given moment, there is only one behavior.
3. Two different behaviors are necessarily differentiated by at least one different characteristic.
4. A behavior can vary over time gradually or discontinuously (successive stable states).
5. A behavior can include data belonging to previous events.
6. A behavior can be constrained. For example, a sound generation may not exceed a certain energy level in a frequency band.
7. The characteristics and constraints can be more or less random, but the openness of a behavior always remains controlled in one way or another.
8. Behaviors can be interconnected and depend on each other. Thus, for instance, reverberation will apply only if a resulting value from a first process exceeds a certain threshold.
9. There may be behaviors of behaviors (meta behaviors), hence a hierarchical structure of behavior. Overall behavior is then the combination of a set of behavior units.
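The nine statements above can be given a minimal sketch in code, under clearly hypothetical assumptions: the class names, the gain/limit example, and the numeric values are illustrative inventions, not drawn from any actual patch; the sketch covers only statements 1, 6, 8, and 9.

```python
# Hypothetical sketch of the 'behavior' statements above: each behavior has
# a single identity (statement 1), may carry a constraint (statement 6),
# behaviors can be chained so that one depends on another (statement 8),
# and a meta-behavior combines a set of behavior units (statement 9).

class Behavior:
    def __init__(self, name, transform, max_level=None):
        self.name = name              # statement 1: one identity of its own
        self.transform = transform    # the rule mapping input to output
        self.max_level = max_level    # statement 6: an optional constraint

    def respond(self, value):
        out = self.transform(value)
        if self.max_level is not None:   # constrained generation
            out = min(out, self.max_level)
        return out

class MetaBehavior:
    """Statement 9: a behavior of behaviors, applied in sequence."""
    def __init__(self, name, units):
        self.name = name
        self.units = units

    def respond(self, value):
        for unit in self.units:          # statement 8: interdependence
            value = unit.respond(value)
        return value

# A doubling stage chained into a stage whose output may not exceed 0.8.
double = Behavior("double", lambda x: 2 * x)
limit = Behavior("limit", lambda x: x, max_level=0.8)
overall = MetaBehavior("overall", [double, limit])

print(overall.respond(0.3))  # 0.6 (below the constraint)
print(overall.respond(0.5))  # 0.8 (clipped by the constraint)
```

Statements 4, 5, and 7 (time-varying, memory-bearing, partly random behaviors) would extend the same structure with state and stochastic parameters.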
As a consequence, in such a context, the analytical approach must be of a relational type. A relationship is a set of attributes that defines a fact. Different types of relationships can be distinguished, for example those based on:
1. gestures in the human / machine interface;
2. time related processes (cues, synchronous interpolations);
3. triggering (automatic score following, for example);
4. flow and events related to the signal / controls;
5. electronic functions: recording, sampling, synthesizing, effects, score following process;
6. musical functions: intervals and melody, chords and harmony evolution, time structures and rhythm.
These categories are neither exhaustive nor exclusive. More generally, all of the above reflections show the magnitude of the task facing the composer, the interpreter, the musicologist, and the analyst. We will try to propose some generalizing models in order to better understand the invariants of electroacoustic music in the context of networks and interactivity. In order to be rigorous and precise, the musicological approach must be renewed on the basis of exact definitions and typologies. The main goal is to model processes in electroacoustic music, i.e. to find general structures for better understanding this music.
Michael T. Bullock
This paper will examine several pieces by American composer Alvin Lucier, which depend on the contingencies of the performative body in a particular place during a particular duration. The composer turns his own body's interaction with a space and with itself into both generative processes and musical material. The main focus of this paper will be Music for Solo Performer: For Enormously Amplified Brain Waves and Percussion (1966). Electrodes are attached to a performer's head to pick up low-frequency brainwaves and transform them into sound waves emanating from loudspeakers. The movements of the loudspeakers are used to activate percussion instruments. I will investigate this piece's relation to two later pieces by Lucier, Clocker (1978 / 88) and Bird and Person Dyning (1975). All three pieces are created live using a metastable feedback system: the performer's body in a particular room at a particular time, with an audience, using electronics to transform certain aspects of the performer's physical presence into music.
Rather than engaging in scientific investigation, in these pieces Lucier uses physical principles and mechanical processes to create for each piece a conceptual heart that lives past its realization. The act of performance, and the life of the performer, overlap with the audience's experience. The result is an observation of oneself as being alive in space and time, what Jean-Luc Nancy calls "feeling-oneself-feel."
Music for Solo Performer (1966)
While brainwaves are the driving force in Music for Solo Performer, they do not act as control information but rather are treated as audio waves, albeit at frequencies lower than the human ear can hear. The waves are amplified through electronics and sent to loudspeakers capable of reproducing subaudible frequencies. The loudspeakers are attached to various percussion instruments, and the mechanical action of the loudspeakers vibrates the instruments. In this way Lucier avoids altering the waves in any way aside from amplification. Nonetheless, Lucier is careful to point out in the title that the waves are 'enormously amplified,' reminding the listener that there is a level of mediation between the brain's activity and the final sounds. The waves are not modulated or decoded, simply amplified, but because they are treated like low-intensity sound waves in need of amplification, they are stripped of any speculative associations with mind control.
The performer is directed to become impassive and reduce or remove all visual stimuli, either by closing or unfocusing his or her eyes. In this way, the performer named in the title is practically un-performing, and even the word 'solo' comes into question as the performer's brain waves make changes outside of his or her conscious will, changes which may come as a surprise to the performer.
For this paper I will briefly compare several performances of this piece: by Lucier, by Pauline Oliveros, and by John Mallia and Neal Markowski.
Bird and Person Dyning (1975) and Clocker (1978 / 88)
In Bird and Person Dyning, the performer's entire body becomes the metastable element. Wearing in-ear microphones, the performer walks around a room containing several loudspeakers and an electronic birdcall. The microphones are connected to the speakers and generate feedback. The feedback changes gradually and constantly, depending on the performer's position as well as the positions of audience members' bodies. In Clocker, the ticking of a mechanical clock is sent via contact microphone through a digital delay; the speed of the delay is controlled by the galvanic skin response of the performer's fingers at rest. The performer in these pieces uses electronics and physical objects (the birdcall, the clock) to mediate the interaction between his body and the room, and thereby creates musical content that expresses that relationship abstractly and in real time.
Recent work with brainwaves
I will close the paper by returning the focus to the use of brainwaves in performance. I will discuss several projects of American composer and performer Alex Chechile, who uses digitized brainwaves in live performance, in combination with more traditional performance activities. Chechile has used his own brainwaves as well as those of collaborators, including Pauline Oliveros and this author. His use of brainwaves is distinct from Lucier's, in that Chechile transforms them into control signals via software, which can in turn be used to change the parameters of a filter or any other sound processor, or to make selections from a database of written music fragments to generate a score on the fly for the performer to read.
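The general idea of transforming a brainwave signal into a control signal can be sketched as follows. This is a hedged illustration of the principle only, not Chechile's software: the smoothing constant, the normalized input range, and the cutoff range in Hz are all hypothetical.

```python
# Hypothetical sketch: map a stream of normalized EEG amplitude samples
# (0..1) onto a filter-cutoff control signal. The smoothing constant and
# cutoff range are illustrative assumptions, not values from any real system.

def make_control(smooth=0.9, lo_hz=200.0, hi_hz=4000.0):
    """Return a function mapping raw EEG samples (0..1) to a cutoff in Hz."""
    state = {"env": 0.0}

    def control(sample):
        # one-pole smoothing of the incoming amplitude envelope
        state["env"] = smooth * state["env"] + (1.0 - smooth) * sample
        # linear mapping of the smoothed envelope onto the cutoff range
        return lo_hz + state["env"] * (hi_hz - lo_hz)

    return control

ctl = make_control()
cutoffs = [ctl(s) for s in [0.0, 1.0, 1.0, 1.0]]
print(cutoffs[0])   # 200.0: the envelope starts at rest
print(cutoffs[-1])  # rises toward 4000 Hz as the envelope charges
```

The same mapping could drive any other processor parameter, or index into a database of score fragments, as described above.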
Regardless, the central conceptual heart is similar: a metastable
characteristic of the performer’s physical being, changing over time,
determines the unfolding of the piece. The most important difference
between the Lucier pieces and these more recent works is that the
performer who wears the electrodes is also performing an instrument,
and is thereby faced with a new version of “feeling-oneself-feel”: the
unusual but oddly familiar sensation of making musical decisions while
simultaneously witnessing your brain make similar – or different – decisions.
Tatiana Olivieri Catanzaro
Université de Paris – Sorbonne (France)
Unlike developments in other South American countries, which had important centres for electroacoustic research and composition, the lack of interest shown by Brazilian music schools and radio in electroacoustic music ended up restricting composers, until roughly three decades ago, to the "precarious conditions of private studios, assembled from equipment that responded poorly to even the simplest manipulations, such as playing sounds backwards or varying their speed" (Neves, 1981, p. 188).
Until the early 1980s, the history of electroacoustic music in the country was limited to isolated attempts, the first of which may be considered the departure of the composer Reginaldo de Carvalho for Paris, as early as the 1950s. In France, Carvalho studied with Paul Le Flem and Messiaen and was able to take part in experiments in musique concrète carried out at the former Centre Bourdan under the direction of Pierre Schaeffer. Back in Brazil, Carvalho remained for many years a solitary pioneer, and it was only in 1963 that another composer, Gilberto Mendes, recently returned from the Darmstadt Festival in Germany, also took an interest in electroacoustic music. His work Nascemorre, for choir and magnetic tape, composed shortly after his return, stands as evidence of this interest.
It was Carvalho, however, who fought for the creation of a centre for electroacoustic music in Brazil, forming a research unit in order to obtain an institutional studio, and thus founding the Department of Music and Electronics at the University of Brasilia and the Radio Educadora. Yet it was only in 1966, when he was appointed Director of the National Conservatory of Orpheonic Singing in Rio de Janeiro (later the Villa-Lobos Institute), that a Brazilian current of electroacoustic music could finally develop. Neves states: "During the period in which Reginaldo de Carvalho directed the Villa-Lobos Institute, the institution gathered all the young composers interested in musical research, without their being able to carry their work through, for lack of the financial support needed to install genuine research studios."
Thus, despite the financial difficulties the Institute experienced, it enabled the flowering of important compositional research in the country. It was there, for example, that the composer Jorge Antunes, on joining the institution's teaching staff in 1967, found the space to develop his research, inaugurating the Art Intégral laboratory and the Centre for Chromo-Musical Research. With Antunes's arrival at the Institute, the two main European currents gained Brazilian representation: musique concrète, centred on the figure of Carvalho, and electronic music, centred on that of Antunes. Indeed, it is to Antunes that we owe the first purely electronic pieces created in Brazil: the Pequena Peça para Mi Bequadro e Harmônicos (1961) and the Valsa Sideral (1962).
Other composers also worked towards the institutionalisation of electroacoustic music in Brazil, such as Jocy de Oliveira and Conrado Silva. Even so, we note that few composers managed to work in electroacoustic music in a systematic way in the country. This is probably because almost all the composers who took an interest in the electroacoustic medium, and who were able to learn the craft in foreign studios, had to face, on their return to Brazil, the reality of finding no adequate institutional support for electroacoustic research and composition to be developed seriously. In this scenario, most composers either left the country or were led to abandon composing for this medium. This, together with the traditionalist position that prevailed in music education, prevented the expansion of electroacoustic production. Most of its history has thus been subject, on the one hand, to the isolated work of composers who brought expertise and equipment from abroad and, on the other, to currents created by groups of composers and performers in a sporadic, discontinuous fashion and without the
Despite the delay suffered by the exploration of electroacoustic music in Brazil, the aesthetics of Brazilian contemporary music between the 1950s and 1970s were greatly affected by the trends of European electroacoustic music. This was due to two specific phenomena: on the one hand, foreign composers who settled in Brazil and, on the other, Brazilian composers who studied or took part in festivals abroad, and who consequently lived through and were influenced by these questions and transplanted them into the country.
With the aim of bringing out this musical experience, in 2002 I conducted an interview with the composer Gilberto Mendes (Catanzaro, 2003), a composer who lived through these events intensely and who, like so many others, found himself on returning home without any infrastructure for developing adequate technical work in this artistic field. Equipped with unsophisticated devices, such as ordinary three-speed tape recorders, the composer wrote a significant number of pieces, in particular for the theatre, which earned him a mention in Schaeffer's book, where he is classed as one of the pioneers of musique concrète in Brazil.
In this way, I have tried to capture not only the individual significance this experience left on the composer, but also his general impressions of what this period meant for a whole generation brought together by the same impasses and possibilities of aesthetic exploration.
I therefore propose an analysis of three pieces written by Mendes for choir, in order to demonstrate the influence that electroacoustic music has had on vocal and instrumental music in Brazil. For this study we have chosen three pieces by Mendes: one written in an experimental style, which incorporated technomorphism most openly (Nascemorre, 1963); one less experimental, but still in the avant-garde line (Motet em Ré menor [Beba Coca-Cola], 1967); and one with a more traditional approach (Com Som, Sem Som, 1978). These pieces also follow a chronological order, so that we may show how, and to what extent, technomorphism was introjected into his compositional technique.
University of Missouri-Kansas (USA)
The notation systems used in contemporary art music can cause confusion and raise questions about how to analyze the music. Graphic notation, mathematically driven composition systems, Fluxus works, aleatory, and improvisation have changed the meaning of the score in relation to analysis. In electronic works, music is often performed or created without a traditional score. While this valuable artifact is still prized by many theorists and musicologists, there is no denying that its primacy as an aid for analysis has weakened. In regard to works with performers, the question becomes “What is the score, and what does it tell us?” This presentation looks at two works, Aphorisms on Futurism by Andrew Seager Cole, and The Machine the Sneetches Built by John Chittum and Bobby Zokaites. These two pieces offer a cross section of interactivity: Aphorisms uses triggered playback files and live signal processing, while Sneetches is an interactive installation using Wiimote controllers, Max/MSP, video games, and a kinetic sculpture. This presentation will look at what constitutes “the score” in these works, and what the score conveys in these two pieces.
In my previous research, I took a broad view of approaches to analyzing interactive multimedia. The study, originally presented at EMS 2012 in Stockholm, brought in perspectives from researchers and musicians including John Croft, Denis Smalley, Barry Truax, and Gunther Schuller. The paper also created a taxonomy of interactive media, and offered a methodology for approaching the analysis of interactive works. The proposed methodology is little more than a framework, and demands more extensive work in specific areas. This presentation is a follow-up on one of those areas of concern.
Before delving into specific works and the use of the score in analysis, a definition of a score and its content must be provided. A traditional score describes the basic elements of music: pitch, rhythm, meter, harmony, dynamics, form, texture, and timbre. Symbols (the notation) have specific meanings, and musical structures are developed around these meanings. This allows a researcher to use the score as the primary means of analysis, since all information needed for the performance of the piece is contained on the page. This kind of score is, of course, most obviously connected to music written during the “Common Practice Era,” encompassing Western art music from the 17th century to ca. 1925.
In electronic music, the lexicon and grammar of music are defeated by the inclusion of “extra-musical” sounds that defy standard notation. These extra-musical sounds are often developed in ways other than pitch, making the pitch-centric style of notation in Western art music insufficient to convey their meaning. Because of this, scores that include mixed forces – live musicians with electronics, or interactive pieces – often utilize graphic representations of sound, text, and other non-traditional elements. These can include waveform diagrams, abstract drawings, or specialized symbols. These types of notation carry their own lexicon and syntax, providing information similar to that of Common Practice Era notation, but removing the original associations of standard notation. In particular, texture and dynamics are shown effectively in graphic representations, while other facets, such as pitch and timbre, are often displayed, respectively, as standard notation and text.
The purpose of the score is to act as a set of guidelines for performance, a clear set of instructions that give the majority of information needed to perform the work. The more the score holds to Common Practice Era notation and standard instrumentations, the more precisely the notation can be realized. In theory research, the score is the “Holy Grail” of the music, the everlasting form of the ephemeral art that occurs on stage. For works utilizing improvisation, aleatory, graphic representations of the electronics, or little electronic notation at all, the score transitions from an all-encompassing framework to a sketch which provides only a general sense of direction. In works of mixed forces, with traditionally notated instruments and non-traditionally notated (or un-notated) electronics, the score acts as a point of synchronization and as loose directions on what may occur and how one may interact with the electronics. Within these scores, traditional and non-traditional notations are displayed side by side. Neither notation by itself, nor the combination of notations, can fully portray what is happening musically. In interactive electronic works, the insufficiency of the notation to accurately portray the music is most apparent. A text description of an effect does not convey all the information regarding the end result.
In musical analysis, other artifacts beyond the score are seen as secondary sources. These artifacts include recordings, research papers, and presentations. However, in the case of some electronic pieces, the only artifact is a recording. In interactive pieces, a recording only captures a single possible performance, which leaves many possibilities undiscovered. By using published papers and presentations, a researcher can fill the gap.
In works that require some form of interactivity, researchers also have another artifact available: the program. Any piece that includes real-time processes will require some sort of hardware or software to perform the function. This gives the researcher full access to the electronics, be it single playback files, synthesis, live DSP, or any other function that may be occurring. For some processes, this gives the researcher the “if-then” statements which guide the actions of the program. In the case of a work with mixed forces, such as Aphorisms, this can be a simple trigger: “If I push the space bar, this audio file plays and these processes are applied to the voice.” For complex interactions, it can define all of the possibilities, as in Sneetches: “If I press Up on the d-pad, the pitch skips a major third. If I press Right, it steps up a half-step. If I tilt the Wiimote left or right, the pitch goes up or down in a glissando fashion.” The above examples are simple, and show another important point: the researcher does not have to be an accomplished programmer, but must be willing to “play” the program itself, just as a performer would, to learn all the possible rules. This allows for a quantitative breakdown of relationships, the “if-thens” of everything from amplitude tracking to random generation.
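The quoted d-pad rules can be expressed directly as if-then code. This is an illustrative sketch only: the event names are hypothetical stand-ins for Wiimote input, not the actual Max/MSP logic of Sneetches, and the glissando case is omitted.

```python
# Sketch of the quoted 'if-then' rules for the Sneetches synthesizer station:
# d-pad Up skips a major third (4 semitones), Right steps up a half-step.
# Event names are hypothetical stand-ins for Wiimote input.

def next_pitch(pitch, event):
    """Return a new MIDI pitch given the current pitch and an input event."""
    if event == "dpad_up":
        return pitch + 4      # major third up
    elif event == "dpad_right":
        return pitch + 1      # half-step up
    return pitch              # unhandled events leave the pitch unchanged

p = 60  # middle C
for ev in ["dpad_up", "dpad_right", "dpad_right"]:
    p = next_pitch(p, ev)
print(p)  # 66: C4 raised by a major third and two half-steps
```

Recovering exactly such rules, by playing the program and observing its responses, is what gives the researcher a quantitative breakdown of the piece's relationships.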
As stated before, the score is the main artifact for reproducing performance and for analysis. But in some interactive works, the written instructions may not give all the pertinent information; as a performer, finding all the relationships becomes like a game. This was the impetus for Sneetches, an interactive environment that is, essentially, a playground where users can interact with various objects—two electronic stations with video games and synthesizers controlled via Wiimotes, and a large kinetic sculpture—allowing participants to play and discover relationships on their own. For Sneetches, the only written instructions were simple placards describing the basic functions of the synthesizer—play note, change pitch, change sound, change effect—and the video game—choose game, trigger, restart—and calling for someone to help with the operation of the kinetic sculpture. In Sneetches, the score shifts from a Common Practice Era notion of a score to a combination of the programming, the visual stimuli from the video games which may influence the synthesizer participants, and the video recordings of the initial event. This runs contrary to Aphorisms, where Cole's attention to detail in the notation of the electronic part provides a fairly well-rounded view of what is happening and when it occurs. Because of the attentiveness Cole put into the score, in particular pitch and rhythm during cueing or synchronous moments, a traditional analysis of the piece, with the aid of the Pure Data patch, is plausible. Sneetches takes a more experimental approach, with its lack of notation, improvised elements, and novel visual stimuli to influence movements. The analysis is then pointed in other directions, namely the interactive process, musicological analysis of the incorporation of “the playground,” and the effect of the various interactions on the final product.
Also, it is possible to identify certain pitch tendencies based upon the way a participant interacts with the device, as well as possible structures based upon the musical gesture. None of these would be easily apparent even when watching anecdotal video evidence of the display in action.
These two examples show the wide range of artifacts that can be used as the primary source for analysis in electronic works. They also illustrate how the power of the score as a set of instructions can shift to the computer program or hardware setup created by the composer: actions are no longer dictated specifically by pen and paper, but instead by rules created by the composer, and not always told specifically to the performer. This raises challenges in analysis, but by carefully examining each piece, a researcher can create a new “score” by combining several pieces of secondary evidence. The goal is to find the same evidence one would find in a traditional score: the limiting factors, the “if-thens,” and the underlying structure that defines the content of the piece. By delving into the programming – either by learning the language or through carefully guided performance – watching video, listening to recordings, or finding any published materials, a researcher can move away from the passive and static engagement with contemporary music and into the realm of the
Michael Clarke ; Frédéric Dufeu ; Peter Manning
University of Huddersfield ; University of Huddersfield ; Durham University (England)
This paper introduces and demonstrates TIAALS, a new set of generic software tools designed to facilitate an interactive aural approach to the analysis of electroacoustic music. TIAALS is being developed as one element of a 30-month AHRC-funded project investigating the relationship between Technology and Creativity in Electroacoustic Music (TaCEM: http://www.hud.ac.uk/research/research-centres/cerenem/tacemtechnologyandcreativityinelectroacousticmusic/). TaCEM will examine a series of Case Studies, specific works exemplifying different technical and compositional approaches, from contextual, technical and analytical perspectives. It builds on the previous experience of the project team in terms of historical / contextual study (Manning 2013), organology of computer music (Dufeu 2010) and electroacoustic analysis (Clarke 2012).
In recent years there have been an increasing number of important texts on the analysis of electroacoustic music. All have faced a common challenge: how to present analyses of music that exists primarily in sound, not on the page, in the form of written text and graphics. Interactive Aural Analysis (IAA) provides one approach to resolving these issues. It was first developed by Clarke for analyses of specific electroacoustic works, beginning with Jonathan Harvey's Mortuos Plango, Vivos Voco in 2006, and later Denis Smalley's Wind Chimes (2009) and Pierre Boulez's Anthèmes 2 (forthcoming). The underlying principle is that analysis of such works, in which the musical development involves aspects that cannot be notated traditionally, such as complex textural transformations and subtle spectromorphological variations, is best undertaken and presented not solely by means of verbal and visual representations on the printed page but through the use of software permitting the analyst and the reader to engage with the musical materials interactively as sound. Technical exercises also form an important part of the IAA software, enabling readers to engage with the techniques used by the composer and discover their potential. Previously, only limited attention has been paid to the possibility of modeling the techniques employed as part of analytical study, and using modern software emulation facilitates a better understanding of the technical and creative processes that have underpinned the composition process. In each analysis, therefore, a substantial written text accompanies software that enables the exploration of the sound world and of the techniques used to produce the music. (For more information on IAA and the earlier analyses see http://www.hud.ac.uk/research/researchcentres/iaa// and for a fuller account of the ideas behind IAA see Clarke (2012)).
Within TaCEM, one important part of the project will be the making of an IAA of each of the case studies. In preparation for this, a set of generic software tools is being developed, both for use in the project and more widely by others. The tools are in many cases developed from those specifically produced for the earlier analyses, but take advantage of significant new technical developments and are designed to be adaptable. Whereas with the previous IAAs the software was developed specifically for each work in question, the aim here is to create generic tools that can be of use with any piece of music as appropriate (TIAALS does not, however, include the technical exercises which, by their very nature, are specific to the individual works and the techniques used to produce them). A Beta version of the software will be released in February 2013; this will then be refined and extended as it is trialled by members of the TaCEM team and by others.
TIAALS is being made freely available. All the tools are built in Max 6, so that they can be fully integrated into the software we devise to accompany the presentations of our case studies for the TaCEM project. Being built in Max also means that the tools are easily adaptable to different contexts, and extensible.
TIAALS: Tools for Interactive Aural Analysis
Sonograms are often employed in analyses of electroacoustic music. However, presented as fixed graphical representations on the printed page, they are often severely limited in what they can show (see Clarke 2012). TIAALS, by contrast, makes use of a sonogram that is interactive and aural. It is a highly developed version of the similar tool used in the analysis of Wind Chimes. The user can draw regions on the graphic display and hear the sound of just that region. It is also possible to scrub, moving a cursor of variable frequency range at variable speed through the display and hearing the results. Regions that have been drawn onto the sonogram display can be grouped, and these groups soloed or muted. These and other features of the Interactive Sonogram enable users to interact with the sonogram and explore the musical significance of the visual display. It provides a means of investigating the different components of complex textures or timbres by identifying elements in the overall texture and hearing them in isolation, and possibly in slow motion.
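The region-solo behaviour described above can be modelled very simply: treat the sonogram as a grid of magnitudes indexed by (frame, bin), and zero everything outside the drawn rectangle before resynthesis. The sketch below is an illustrative Python model only; TIAALS itself is built in Max 6, and the function and variable names here are invented for the example.

```python
# A sonogram modelled as frames x frequency-bins of magnitudes.
# "Soloing" a drawn region keeps only the cells inside the rectangle.

def solo_region(sonogram, t0, t1, f0, f1):
    """Return a copy of `sonogram` keeping only frames t0..t1 and bins f0..f1
    (inclusive); everything else is silenced before resynthesis."""
    return [
        [mag if (t0 <= t <= t1 and f0 <= f <= f1) else 0.0
         for f, mag in enumerate(frame)]
        for t, frame in enumerate(sonogram)
    ]

# 4 frames x 4 bins, unit magnitude everywhere
sono = [[1.0] * 4 for _ in range(4)]
region = solo_region(sono, t0=1, t1=2, f0=2, f1=3)
```

A real implementation would apply this mask to STFT data and resynthesise the masked frames, optionally time-stretched for the "slow motion" listening mentioned above.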
Analysis is more than simply a matter of description: it is about making connections between musical ideas and showing the evolution of musical material, sometimes across long time spans. Traditionally, in analysis of acoustic music, such relationships are often presented using charts, often employing musical notation. Since musical notation and other forms of graphical representation are often of limited use in electroacoustic music (see Clarke 2012), TIAALS offers a means of creating aural charts in software. This builds on the Interactive Sonogram. Regions that have been created using the Interactive Sonogram (by drawing time and frequency selections on the visual display) can be exported into a Palette. The Palette can then be used as the basis for making aural charts to demonstrate features of the music. Regions in the palette can be imported into a chart as a (labeled) button. Clicking on a button plays the region it represents (and regions in charts can be related back to their context in the work as a whole). Charts might for example be used to show the evolution of a particular type of sound or musical motif through the course of a work. Or they might be used to present a taxonomy of the sounds used in a work or a genealogical chart of the relationships between different sounds (see the Wind Chimes analysis for examples). Paradigmatic charts or other structural charts can be used to show the shape of the work. Which charts are most appropriate and how they are best presented is up to the analyst; TIAALS simply facilitates the creation of such charts, and the prioritization of aural experience and interaction with sound as the preferred means of communicating ideas about the music.
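The palette-to-chart workflow reduces to a small data structure: named time/frequency regions exported from the sonogram, then referenced by labelled chart buttons. The structures and labels below are hypothetical, sketched in Python purely to make the workflow concrete (TIAALS is built in Max 6).

```python
# A palette maps a label to an exported sonogram region
# (time span in seconds, frequency span in Hz). Labels are invented.
palette = {
    "bell-attack": {"t": (12.4, 13.1), "f": (500.0, 4000.0)},
    "bell-decay":  {"t": (13.1, 19.0), "f": (300.0, 1500.0)},
}

def make_chart(palette, labels):
    """A chart is an ordered list of (label, region) buttons; clicking a
    button would play back its region in a real interface."""
    return [(label, palette[label]) for label in labels]

# e.g. a chart tracing the evolution of one sound type through the work
chart = make_chart(palette, ["bell-attack", "bell-decay"])
```

Because each button keeps its original time span, a chart entry can always be related back to its context in the work as a whole, as the text describes.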
Despite the greatly increased importance of timbral, textural and spatial components in much electroacoustic music, pitch continues to be a significant factor in the shaping of many works. This tool provides an aid to identifying significant pitch and frequency elements. Sections of a work can be analysed, and data about the most prominent frequencies presented using both musical notation and numerical data. It is also possible to set up a pitch filter to help demonstrate the recurrence of key pitch components (e.g. harmonic fields) in the course of a passage.
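The dual presentation mentioned here, notation plus numerical data, rests on mapping a detected frequency to the nearest equal-tempered pitch and its deviation in cents. The conversion below is the standard formula, not code from TIAALS; the function name is invented for illustration.

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def freq_to_pitch(freq_hz, a4=440.0):
    """Map a frequency to the nearest equal-tempered pitch name plus its
    offset in cents, using MIDI note 69 = A4 = 440 Hz as the reference."""
    midi = 69 + 12 * math.log2(freq_hz / a4)
    nearest = round(midi)
    cents = round((midi - nearest) * 100)
    name = NOTE_NAMES[nearest % 12] + str(nearest // 12 - 1)
    return name, cents

print(freq_to_pitch(440.0))   # ('A4', 0)
print(freq_to_pitch(261.63))  # ('C4', 0)
```

A prominent-frequency report can then show both forms side by side, e.g. "261.63 Hz (C4)", which is what allows the tool to bridge notation and numerical data.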
Spatial positioning of sound is a complex phenomenon, and disaggregating a spatial mix is no trivial task. This task becomes even more complex in multichannel works. The spatial display tool in TIAALS does not claim to resolve all these complexities and needs to be used with intelligent reflection, but it can provide some useful insights. The tool (developed from an idea by Sam Freeman for the Wind Chimes analysis) colour-codes each frequency bin in the sonogram analysis according to the amplitude balance between the left and right channels. This can give some indication of the spatial distribution of sounds across the frequency range at each moment in the work.
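The per-bin amplitude balance underlying such a display can be sketched as a simple panning index per frequency bin, which a display would then map to colour. This is a generic illustration of the principle, not Sam Freeman's implementation; the function name and values are invented.

```python
def pan_index(left_mags, right_mags, eps=1e-12):
    """Per-bin left/right balance in [-1, 1]:
    -1.0 = hard left, 0.0 = centred, +1.0 = hard right.
    `eps` guards against division by zero in silent bins."""
    return [(r - l) / (l + r + eps) for l, r in zip(left_mags, right_mags)]

# three bins at one analysis frame: hard left, hard right, centred
pan = pan_index([1.0, 0.0, 0.5], [0.0, 1.0, 0.5])
```

Colour-coding each bin by this value, frame by frame, yields the kind of spatial overview described above, with the stated caveat that it only indicates inter-channel amplitude balance, not true perceived location.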
TIAALS is a development from earlier Interactive Aural Analyses. As
well as playing an important role in the TaCEM project, TIAALS provides
a set of generic tools that can be used by any analyst seeking a means
of working interactively with the sound of a piece in creating and
presenting their analyses. IAA is not a method of automated analysis by
computer (although we may build some automated options into later
versions); it is primarily a set of tools for an analyst to use to help
in their own interactive aural investigation of works and in the
presentation of their findings. It is envisaged that TIAALS will be
further refined and extended in response to our own needs in relation
to the TaCEM project over the next two years and in response to
feedback from other users.
University of Auckland (New Zealand)
To a number of electroacoustic composers working in New Zealand, and presumably in other parts of the world too, the three-dimensional acousmatic image seems to house the most alluring of creative possibilities: expressive musical forms that we have not yet been privileged to hear. More than 60 years have passed since the birth of electroacoustic music, yet it seems that only now are we beginning to come to terms with the complexities of composing with space. New technologies are providing a means of advancement by way of experimentation and through engagement in the creation of new work; however, there is still much debate among composers working with multichannel systems regarding the validity of such methods and approaches. Some of the most experienced and decorated composers working in New Zealand often attest that ‘there are no rules’, that each sound carries with it an entirely new set of demands transcending any previously established concepts or constants that may have been observed to date. While I admire this stance, I am not entirely convinced by it, and as a teacher of electroacoustic composition, I find the extreme viewpoint quite unsatisfactory.
This paper, then, is an attempt to mitigate this purely subjective standpoint: to offer a review of current theories and strategies surrounding the multichannel domain that might be considered useful to the ordinary electroacoustic composer. The study is not a venture in objectifying artistic practices; rather, it seeks to present a fluid set of conceptual contrivances, based on a review of expert domain literature and repertoire and on discussions between experienced composers working in New Zealand, that may lead to an opening-up of pragmatic possibilities concerning both the spatial treatment of individual sounds and the spatial relationships between sounds over the duration of any given phrase or work. The investigation offers a local (New Zealand) point of view.
Regarding the method of enquiry, the study includes a comparison of key principles gleaned from the essential literature listed in the accompanying annotated bibliography, analysis of compositional techniques from the catalogue of selected repertoire, and transcriptions of group discussions between the author and leading members of New Zealand’s electroacoustic music fraternity. With reference to the last point, and further to the author’s ongoing research in multichannel electroacoustic composition, two meeting periods have been specified: 11 May 2013 and 10 May 2014. The research group, consisting of John Coulter, John Cousins, Gerardo Dirie, John Elmsly, Dugal McKinnon, Michael Norris, and David Rylands (TBC), will meet in a specially designed facility (a 33-channel, 8-metre-diameter geodesic dome) located in Henderson, Auckland, to present creative work and to discuss concepts and techniques relevant to the title of this paper. Issues relating to localisation, proximity, and acousmatic imaging will be discussed alongside time-domain concerns such as metamorphosis of spatial image, perceptual space, and transsubjectivity. The spoken presentation at EMS13 will include a report on the discussion points raised at the first meeting.
In terms of outcomes, it is anticipated that the study will result in
an elucidation of the established guiding principles, constants and
variables surrounding the domain of multichannel electroacoustic
composition. Specifically, it is expected that groupings of actions
might be identified in response to individual creative ideas, and that
these options, relevant only to multichannel electroacoustic
composition, will display specific character traits. This will be
presented as a compendium of known methods and approaches concerning
the spatialisation of individual sounds, alongside new findings with
reference to the nature and evolution of relationships between
acousmatic images. It is hoped that this information will serve the
growing fraternity of New Zealand electroacoustic composers who have
chosen to engage with multichannel technologies.
Université Paris-Sorbonne / De Montfort University (France / England)
“Le réel n’existe plus” [“The real no longer exists”] (Jean Baudrillard)
The field of interactive electroacoustic music is vast, and many works can be grouped under this heading: from works proposing an interaction between one or more musicians and a computer, to those taking the form of installations with which the audience interacts. All these electroacoustic creations share a common denominator: the work is no longer a finished, stable object that the researcher can study with proven tools and methods; the interactive work is characterised by the use of computational processes that generate part or all of the work. In other words, the digital machine is not merely a tool but a generator of processes that contributes its own share to the creation. Interaction, whether between the musician-composer and the machine or between several autonomous machines (for example in works using neural networks), calls for analytical approaches very different from those used for fixed-media works.
Interactive works tend to unsettle even simple musical notions.
Thus the very notion of the instrument needs to be expanded in order to encompass the whole set of interfaces that allow the musician (or the audience) to interact with the digital processes. In many cases it even tends to overlap with the notion of the score.
In the domain of analysis proper, form becomes difficult to define. Can we speak of form when the work is no longer built from objects but from processes? Can we analyse, and therefore fix, data whose temporality is no longer fixed but blurred or non-existent? Indeed, one characteristic of computing is that it does not exist in time. If time does not flow for the digital machine, how are we to analyse the interactions between the human musician, for whom time produces an effect, and that digital machine?
Following Jean Baudrillard, who declared the absence of reality in the digital image, should we not consider that the interactive electroacoustic work is no longer a model for the digital performance, but that the performance instead becomes the model of the work? Under these conditions, is the musicologist not compelled to analyse not what usually constitutes the work (the pre-recorded sounds, the score, the program managing the interactivity, etc.) but the performance itself?
We are thus witnessing an unprecedented expansion of the field of possibilities in musical analysis. The purpose of this presentation will therefore be to map out some of these possibilities and to suggest some methods and tools useful in the analysis of interactive electroacoustic performances.
The first question will be that of sources.
In such performances, the score, the stereo recording of the concert, or even the documentation attached to the performance are no longer sufficient. A protocol must therefore be found for recording the majority of the events produced during the concert. Video, multitrack recording, or the recording of the communications between the various interfaces and the software obviously offer a first answer. But is this really sufficient? Should we not imagine technical devices and software to assist the musicologist in capturing and decoding these data?
The choice of which data to capture is inseparable from how they will be used.
Recent software and current interfaces are often designed in a modular way. They mix very different technologies (for example analogue and digital) and allow musician-composers to develop hybrid instruments that manipulate complex data. These data thus readily mix sets of values of different formats: in-time, sequential, or out-of-time. Moreover, some interactions between the human musician and the machine are difficult to capture because they rest on a partial or complete autonomy of the digital processes. Finally, the data generated by interactive instruments have their own encoding. The musicologist must therefore be able to decode these digital data in order to produce representations usable in musical analysis. Like any decoding, this operation can itself be a source of error.
Some software tools are beginning to enter the field of digital-data representation for musical analysis. The improvisation software OMax (Ircam) offers a visualisation of the current model in the form of a temporal graph. This kind of graph displays both the temporal relations between the analysed elements of the model and the links that will allow the software to generate the continuation of the improvisation; time is thus represented on several levels. Likewise, Michael Clarke’s IAA (Interactive Aural Analysis) software offers representations of the structure of the work in various forms: paradigmatic charts, generative trees, models of certain structures, a navigable interactive sonogram, and so on. The author has thus produced a set of interactive representations for analysing the structures of a work while taking its different levels into account. These two examples reveal a will to go beyond the traditional representation that has become widespread in the analysis of electroacoustic music. That model, based on the Acousmographe software, has been a tremendous source of musicological studies since the mid-1990s; however, it is beginning to show its limits.
For several years my research has focused on the study of these modes of visualisation and on the development of new forms of representation suited to musical analysis. In this respect EAnalysis, a project developed with De Montfort University in Leicester, is an important stage in my research, since it aims to crystallise a number of experiments I have carried out over several years. EAnalysis presents itself as the first piece of software allowing analytical data to be worked on through several modes of representation. The traditional time-frequency view, which depicts fixed units perfectly delimited in time, is no longer able to represent a large part of the current electroacoustic repertoire, including interactive music.
Université Paris-Est Marne-La-Vallée (France)
This paper presents an ongoing experiment on using emotive signals to control electroacoustic music, the diffusion of fixed music, and computer-generated sounds in an improvisation environment. Through the use of inexpensive EEG monitoring devices, it is possible to gather meaningful information on emotional responses and states, either in raw form (e.g. the EEG signal) or in pre-processed format. In turn, this information is transformed into OSC packets providing temporal, conditional or interpretative control data. The experiment presented here provides clues as to the possibility of “controlling” (emotionally interpreting) the diffusion and performance of electroacoustic music works. Several studies have described the involvement of different parts of the brain during music listening and/or performing, including the analysis of emotional states.
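The EEG-to-OSC step described above amounts to packing a control value into an OSC message. A minimal encoder can be written with the standard library alone, following the OSC 1.0 binary format (null-terminated strings padded to 4 bytes, a type-tag string, then big-endian arguments). The address "/eeg/alpha" is a hypothetical example, not one used in the paper.

```python
import struct

def osc_message(address, value):
    """Encode one OSC message carrying a single float: padded address
    string, ',f' type-tag string, then a big-endian 32-bit float."""
    def pad(b):
        b += b"\x00"                        # OSC strings are null-terminated
        return b + b"\x00" * (-len(b) % 4)  # and padded to a 4-byte boundary
    return pad(address.encode("ascii")) + pad(b",f") + struct.pack(">f", value)

# hypothetical address: a normalised "alpha band" estimate from the headset
pkt = osc_message("/eeg/alpha", 0.75)
```

In practice such packets would be sent over UDP to the OSC server feeding the synthesis environment; a ready-made OSC library would normally be used, but the byte layout is as shown.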
Music composition and performance has always been a highly emotionally engaging human activity, whether considered from the listeners’ perspective or from the practitioners’ point of view. However, mostly due to motor activities and the activation of the related cortical zones, it is very difficult to examine the similarities and differences between the emotional engagements related to one activity, the other, or compounded activities (such as improvisation).
Careful situational analysis of compositional processes and habits suggests that composers may exhibit more creativity when subjected to resistive tensions, notably when engaging with technology, while musicological analysis of electroacoustic and improvisation-related works points to the existence of particular tension/release mechanisms that can be mapped to specific temporal cycles. These mechanisms closely recall observations made by cognitive scientists focusing on auditory cognitive modeling in the context of tonal music.
The implications of these various observations are numerous, and affect several areas of musical studies. In particular, examining the relations between cortical motor activities, temporal aspects of music, and cognitive processes in composition and improvisation within the framework of electroacoustic music would provide an interesting test bed, since results would not have to be interpreted strictly within the restricted context of tonal, beat-based music. Since musical thinking is a complex domain relying both on perception and action, it was decided to explore the field of emotional and cognitive engagement through the design and development of a system whose ultimate goal is to be used in electroacoustic performance control, by gathering and analyzing cortical electrical activity (EEG). Such approaches to music composition and performance have already been described, though not in the context of electroacoustic music, and the related (aesthetic) issues were not raised; this is a key point in our experiment.
One of the most interesting aspects opened by the possibility of
retrieving and exploiting neuronal data in a musical environment is to
be able to gain a better understanding of the action/perception
feedback loop at work during musical practice, restitution or
composition. Since the evidence of this feedback loop has already been
established, we decided to explore several aspects of this loop in a
musical setting. The immediate interest is to manipulate cognitive data
in real-time to alter sounds coming from the software environment,
creating a database containing both musical parameters and cognitive
data evolution in time. Another interesting derivation of this work is
to use this data in a more direct fashion, generating sound synthesis
elements in a real-time improvisation setting with other musicians.
We chose to use the Csound environment to program simple sound synthesis algorithms that could be modified and “directed” in real time by cognitive data. This choice was dictated by the possibility of decoupling sound synthesis design from any real-time considerations: it was crucial to be able to gather information as precise as possible without having to worry about the “perceived” influence of mental states on sound evolution. In this respect, it was evident from the start that the sounds had to be continuous if we were to gather interesting data from the experiments. There is a significant lag between a) the detection of cognitive data, b) sending to the OSC server and Csound client, and c) the modification of the synthesis parameter(s). The initial sound algorithm is a white noise source filtered by several resonant filters, with cognitive data continuously modifying the frequency and bandwidth. As such, the sound varies from almost unfiltered white noise to an almost pitched sound, depending on the data acquired. Of the numerous modifications that could be applied to the sound to be tested, several were implemented (time modifications, testing disturbances of mental states by introducing randomized events...).
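The filtered-noise algorithm described can be sketched outside Csound. The two-pole resonator below belongs to the same family as Csound's `reson` opcode; the coefficient formulas (pole radius from bandwidth, pole angle from centre frequency) are a standard textbook approximation, not code taken from the authors' orchestra, and the parameter values are illustrative only.

```python
import math, random

def reson(signal, centre_hz, bandwidth_hz, sr=44100.0):
    """Two-pole resonant filter: narrow bandwidth yields an almost pitched
    sound; wide bandwidth leaves the noise nearly unfiltered."""
    r = math.exp(-math.pi * bandwidth_hz / sr)              # pole radius
    b1 = 2.0 * r * math.cos(2.0 * math.pi * centre_hz / sr) # pole angle term
    b2 = -r * r
    y1 = y2 = 0.0
    out = []
    for x in signal:
        y = x + b1 * y1 + b2 * y2
        out.append(y)
        y2, y1 = y1, y
    return out

random.seed(1)
noise = [random.uniform(-1.0, 1.0) for _ in range(2048)]
# in the experiment, incoming cognitive data would continuously modulate
# centre_hz and bandwidth_hz rather than fixing them as here
filtered = reson(noise, centre_hz=440.0, bandwidth_hz=30.0)
```

Sweeping `bandwidth_hz` from wide to narrow reproduces the described continuum from near-white noise to a near-pitched tone, which is what makes the mapping from slowly varying EEG-derived data audible.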
There are still a number of elements to be examined in order to provide
a comprehensive software solution for emotional control of computer
music environments and performance, such as controlling the diffusion
of an electroacoustic work. The long-term goal is to use this setup in
order to control the diffusion (both spatial and temporal) of an
electroacoustic work. This would not require enormous modifications to
the source code of the given piece, but time would be required in order
to precisely train the system with the particular cognitive states of
the user diffusing the work. Thanks to the use of OSC, works programmed in several other computer music environments could be modified to use cognitive data; a “super-instrument” dedicated to the diffusion of electroacoustic works could be implemented in any language and control various environments, using OSC callbacks in return.
This first rough report on using EEG devices to explore and control computer-generated sounds and music is an initial step towards designing a novel interface for computer music, as well as providing new insights into how we experience music in a non-tonal framework, both as an “active” listener and as a musician. Further stages of this research will be presented:
a) Characterization of musical time with respect to
cognitive rhythms and motor actions by analyzing the behavior of
practicing musicians during improvisation sessions;
b) Development of a diffusion platform for electroacoustic works using cognitive data and mental states for driving control parameters (relying on raw EEG data);
c) Elaboration of a database of various EEG data with corresponding creative musical activities and phrases. This will include: improvisation, practice, performance and composition.
d) Ultimately, design of a computer-assisted composition system that takes into account the various points developed above.
As can be seen, this research is only the first step in what could be a
long journey that could have impact in several aspects of computer
music, with possible impact on computational creativity, human-computer
interaction, and music theory.
Ricardo Dal Farra
Concordia University / CEIArtE-UNTREF (Canada / Argentina)
We are living in a world reaching a critical point, at which the equilibrium between a healthy environment, the energy our society needs to maintain or improve its lifestyle, and the interconnected economies could shift more quickly than expected from the current complex balance to an entirely new reality in which unbalance would be the rule, and human beings would need to be more creative than ever before in order to survive. Does electroacoustic music have a role in all this? Do musicians have a responsibility in this context?
Music + art & climate change
When I started thinking about taking a more active role in looking for ways to help with climate-change-related disasters through music and the arts, I wondered what my colleagues, and the specialists studying and working daily on preventing and acting upon the consequences of certain catastrophes, would make of it.
To my great surprise, those people, scientists and engineers alike, responded to my draft ideas with enthusiasm, telling me that this was the right path to explore now, and encouraging me to do it.
A first conference on this specific subject was held in Buenos Aires in December 2010: Equilibrio-Desequilibrio [Balance-Unbalance in English]. It was organized by the Centro de Experimentación e Investigación en Artes Electrónicas (CEIArtE) of the Universidad Nacional de Tres de Febrero (Electronic Arts Research Centre at the National University of Tres de Febrero, Argentina). The program is available here: http://ceiarteuntref.edu.ar/eq-deseq-en
Less than one year later and far from Buenos Aires, another conference was held: Balance-Unbalance 2011, at Concordia University, Montreal, in November 2011: http://balance-unbalance2011.hexagram.ca/ Balance-Unbalance 2011 was accomplished thanks to the direct involvement of people from very diverse backgrounds, such as communication, political science, geography, management, digital arts, design and music, all sharing a common interest: “...to bring artists together with scientists, economists, philosophers, politicians, sociologists, engineers, management and policy experts with the intent of engendering a deeper awareness and creating lasting intellectual working partnerships in solving our global environmental crisis.”
The organizing team was as diverse as it was interesting, rich and positive, and the whole process was an excellent learning experience. The program of the conference can be found here: http://balance-unbalance2011.hexagram.ca/?page_id=229
Humans, mosquitoes and e-music
The immediate reaction of the organizing team of Balance-Unbalance 2011 to a very successful conference was to consider holding another one the same year. Nevertheless, the final decision was: not right now. Why? The main consideration: we do not want Balance-Unbalance’s goal to be merely producing a conference series. The conference is a medium.
Some months ago I was reading a few rules for students and teachers that John Cage gathered. Number seven starts with: “The only rule is work. If you work it will lead to something.” And then number ten: “We’re breaking all the rules. Even our own rules.”
The proposed catalyst started to work. Experiments sometimes take their own path and, our lives being so full of simple and complex surprises, one of them brought us a project. While surfing TV channels a few months before Balance-Unbalance 2011, I stopped for a moment on one showing a documentary on climate change and its consequences. That was how I first learned about the work of Dr. Pablo Suarez, Associate Director of the Red Cross / Red Crescent Climate Centre. Long story short: we had two points in common, our love for music and art, and our interest in the consequences of climate change.
Balance-Unbalance invited Dr. Suarez to Montreal to participate in the conference. During a brief lunch between presentations the original experiment of bringing artists together with people from many different areas of knowledge and actions “with the intent of engendering a deeper awareness and creating lasting intellectual working partnerships in solving our global environmental crisis” started to move into something concrete. The seed for a leading project with a large potential was born then.
art ! climate
As a direct consequence of that Balance-Unbalance conference, a global initiative has been launched by CEIArtE and the Red Cross / Red Crescent Climate Centre, in collaboration with several international partners: art! climate. It is a contest calling for the creation of sound miniatures that the Climate Centre will use in its activities, such as workshops, simulation / educational games, lectures and presentations around the world, and eventually in films / videos too.
In this way the Climate Centre’s network will have material useful for its humanitarian actions. At the same time, we will be encouraging musicians and sound artists to read white papers and reports about the problems of climate change, which, hopefully, will get them more involved with this crucial threat. By making significant information available through the contest website, we expect to raise awareness and inspire musicians in creating their works.
It is worth mentioning that the initial idea of having sound and musical material for its activities came from the Climate Centre itself, as a need. This is remarkable, and different from other projects in which artists propose to help. It is also significant because the humanitarian organization is in this way recognizing that art can be an articulator and could also provide enormous leverage for a more efficient and responsive effect of its actions.
This call is being made with the idea of having not one but several winners; many works could be selected. In some way, we will all be winners if this project is successful. The contest has two categories: one is “Open”, and it “includes anything related to climate change and extreme weather events”; the other is called “Mosquitoes” and is related “to the problems caused by mosquito-borne diseases, and the relationship of these with climate change processes. Dengue transmission is accruing precisely because of such changes”.
Worldwide, over 2.5 billion people are at risk of dengue [...] Dengue is found in over 100 countries [...] There is no vaccine, cure or specific treatment for dengue fever. Prevention is the only effective strategy [...] The Red Cross / Red Crescent Climate Centre is one of the humanitarian agencies that are actively responding to the health care impacts of climate change by organizing education and cleaning campaigns to reduce the spread of dengue [...] As part of their awareness campaigns about this issue, they use a game called “Humans vs. Mosquitoes” to teach about diseases transmitted by vectors and the effects of climate change.
The sound art miniatures can include any type of sound: sounds of nature, sounds of acoustic and / or electronic musical instruments, sounds produced by common or unusual objects, and sounds produced or transformed by digital devices. Voices can also be included, producing either onomatopoeias or phonemes, or as single words or whole sentences. In the latter case, the accepted languages are Spanish, French and English.
This contest is the first step of the project and is focusing on sound-based art: “For this contest, we mean by sound art miniatures creations of sound art / music made from the use of new technologies, whose products can fit into what is known as soundscapes, electroacoustic / acousmatic music, sonorizations and sonifications.”
Where is this taking us?
After the Equilibrio-Desequilibrio [Balance-Unbalance] conference was held in Argentina in 2010, several other associated initiatives were also produced in Buenos Aires, including three media arts exhibitions. All those exhibitions included electroacoustic music and sound art focusing on environmental issues and climate change’s consequences.
The third Balance-Unbalance conference will be held in Noosa, a UNESCO-designated biosphere reserve on the east coast of Australia, during 2013, a few weeks before EMS13. This is another consequence of Balance-Unbalance 2011 in Montreal, as it was proposed by people attending that conference. The decision to hold this new conference came when the collaboration with the Climate Centre started to crystallize into a specific project, showing that Balance-Unbalance was starting to achieve its goal, and also from the possibility of facilitating communication and integration with and between different world regions. Conference website: http://www.balance-unbalance2013.org
Balance-Unbalance 2013 will be collaborating with other organizations in the same spirit that has brought us this far, such as the International Symposium on Electronic Arts (ISEA 2013) and Leonardo, the Journal of the International Society for the Arts, Sciences and Technology (published by The MIT Press). The art! climate contest’s results will be announced during the Balance-Unbalance 2013 conference. Multiple initiatives and projects are being connected.
Could electroacoustic music help?
Environmental problems, economic uncertainty, and political complexity have been with us for a very long time: not one year, one decade, or one century. What is different today is the speed and depth of the transformations compared with earlier eras.
In this context of global threats: how could the arts help? How can electroacoustic music help? These were some of the triggering questions that started the projects described above. And the first positive results are starting to flow.
Swedish Royal College of Music (Sweden)
Listening is a cornerstone in understanding the interactive relations between an audience and a performer, as well as between a composer and a performer. Taking a standpoint emanating from the ecological theory of perception as described by Clarke (2005), this work aims at outlining the different perspectives of these parties, and how these impact the experience of the interactive mixed piece. Personal experience of performing and composing, as well as interviews with musicians performing in my recent chamber opera Ps. Jag kommer snart hem! (Eng. Ps. I’ll be home soon!) (Einarsson, 2012), constitute the framework for the discussion that follows.
James Gibson’s ecological theory of perception from the 1970s offers one way of understanding the interactive mixed-work environment. Several authors have discussed the ecological theory of perception in relation to different forms of electroacoustic music (Andean, 2011; Eigenfeldt, 2011; Gurevich & Cavan Fyans, 2011; Windsor, 2000) or to music as a whole (Clarke, 2005). Gibson’s theory, for the most part a theory of vision, was formulated at a time when most researchers’ attention was devoted to testing stationary observers by flashing stimuli on a screen in a laboratory setting. Gibson made a great contribution by emphasising the moving observer, asking what information in the environment there is to build an experience from. His followers continued by inquiring what information out there is actually being used. Translated into music, and more specifically into interactive pieces, the question is which structures the different agents of the interactive piece actually attend to. Do the composer, the performer, and the audience share the same listening approaches?
David Huron (2006) writes about musical expectations and how we hear relations in music, which bears on the experience of interactive relationships. According to Huron, a listener first takes sound objects and sound events into consideration, and the relationships among them only second.
Eric Clarke (2005) points out how the traditional staging of Western classical music disrupts the normal relation between perception and action. Could it then be that a mobile listener who interacts with the situation (the musicians, the room, the sound objects) is influenced in how he or she attends to the interactively performed work? And how can live electronics and interactive electronics serve as an integrated part of a work – meaningful relative to the composer’s intention, the performer’s actions, and the listener’s experience?
These questions were elaborated throughout the process of composing the piece Ps. Jag kommer snart hem! – a chamber opera for four singers and six musicians, tape, live-performed electronics, and interactive electronics (standalones in Max/MSP). The audience was physically guided through five different scenes: half of the audience went clockwise, half anticlockwise, and the two groups met in scene 3. Careful notes were made during the composing of the piece. The work-in-progress also included four workshops at the Opera where different aspects of the piece were investigated: the non-linear narrative, the mobile audience, voice-controlled electronics, and traditional opera singers’ experience of performing with live electronics. The performances were the laboratory in which relations were realised and studied. Later, follow-up semi-structured interviews were conducted with the musicians.
Some observations: musicians describe how their listening was altered. Previously learned listening schemas were not applicable in this new situation, due to the absence of regular cues. New ways of “holistic listening” – listening to the whole of the sounding environment – developed during the rehearsals and the month of performances of the piece. One musician, moving in a staged motion-capture field, describes how she feels as if she transforms the interplay she usually has with a fellow musician into an interaction with the audience, displaying to them her relation to the computer. She experiences the audience through the computer. This suggests that the computer takes the role of a mediator between the performer and the audience in the interactive setting.
Musicians also state that the computer affects their relations to fellow musicians who were not part of the interplay with the computer. There, it seems, the computer functions as a moderator of the relationships between musicians. Other experiences of the interactive environment are those of feeling less exposed than usual, or of a holistic sensation when performing.
One interpretation is that the unfolding relations between the sounding agents in the piece are of primary concern to the performer, in contrast with Huron’s claim that relations are not a prime concern for the listener. The reason for this discrepancy could be that the performer – singers in particular – embodies the sounding, and therefore consciously and actively engages in directing attention to aspects other than the sound as object. The dissonance between aural and visual cues, due to the absence of a physical body, might also contribute to the attenuation of the relational domain on the part of the performer, and to a heightened attention to listening in general.
Highlighting these differences may shed some light on, for example, why live versus non-live is claimed to be either a purely technical issue (the perspective of the audience) or a crucial characteristic of a piece (the perspective of the performer). Lastly, the composer has the meta-perspective on the work at hand, choosing to attend both to the sculpting of sound objects and to their interrelations. He or she thereby runs the risk of mistaking his or her own listening experience for validation, neglecting the different needs of the different agents.
The different listening strategies at different levels suggest a need for calibrating the relation between work and audience towards making inferences of a cause-and-effect kind. This was successfully attempted in Ps! through a segment (scene 2) of participatory action. The audience was encouraged to examine and interact with different sound objects, which truly engaged most people. This may have made them more inclined to infer causal relationships in the work through a priming effect, whereby a previous introduction to a stimulus facilitates its later perception. The mobile audience may also have facilitated the experience of the staged interactivity, in accordance with Gibson’s perception/action loop.
In conclusion, these different ways of perceiving relations emphasise how important it is that the technology used in the mixed interactive work truly affords the work, thereby contributing to a meaningful experience for all parties.
De Montfort University (England)
In two papers presented in 2011 (2011a, 2011b), I suggested that there had been a fundamental misunderstanding (in some quarters) of the issues of perceiving interactive processes in music making. Understandably there has been much discussion – even criticism – of the dislocation of ‘cause’ and ‘effect’ in many music interactive systems, that is, the apparently arbitrary relationship of instrumental gesture to ‘resultant’ electroacoustic sound. ‘Resultant’ in inverted commas means that we may not perceive the sound as resulting from the instrument or action at all. Some have argued that the perception of causal chains must be re-established for ‘meaningful’ interactivity; others that the connections may remain opaque and almost mystical – that it simply doesn’t matter.
I wish to develop the argument that the listener perceives, first and foremost, effects, not causes. The degree to which we might then reconstruct a possible cause from the effects will vary, as it always has – but it may be the wrong question, or at least one of many. Indeed it may not be needed at all in appreciating the expressive content of the music. That is not to say that the relationship is not important – only that we do not need to uncover it consciously for the music to ‘make sense’. We need to distinguish the functions of composer, performer, and audience, because each has very different agendas (needs) in this respect.
To draw a parallel: as an audience member I do not need to be consciously aware of sonata form and tonal schemes, or of blues and jazz chord sequences and solo/chorus forms, to appreciate sonatas, blues, and jazz. What I hear is the result of such forms (maybe). To come closer to our era: I am arguing that we hear the results of serial manipulations and do not need to decode the particular rows and matrices used – or even be aware that they were at work. But of course that must be different for the composer. What such systems might produce – they can clearly fail – is coherence and consistency. These are immanent qualities we sense in the musical result which help articulate its expressive content, its meaning to us.
It has also been common in the literature to see ‘source’ and ‘cause’ run together – the source may be some wind chimes, the cause the wind or a human physical gesture. The source is an object or substance, the cause an agency (the origin of energy input) – real and simulated may need to be distinguished at a computer music conference, but not usually at a concert. Of course there are ambiguities and overlaps – Aeolian effects, for example, or near-perfect software physical modelling.
I am aware that the three categories – composer, performer, audience – may overlap, reconfigure and even disappear. But the distinctions I will discuss are exactly part of that reconfiguration and contribute to the problems we have in discussing this field. Thus we have composer-performers, improvisers, or group music making without any further participants. All the points that follow below do not disappear but simply ‘remix’ in different ways.
The composer knows the real causes – s/he has constructed the Max patch, established the causal chains, the vocabulary of possible responses. The composer tends to assume these are communicable directly to the audience through a performer. Autosuggestion is clearly often at work – composers tend to hear their composed interactivity because they know it’s ‘there’; ‘yes, but can the rest of us in the audience hear it?’ has been the dominant question for many years. Strangely, the performer is often omitted from this discussion.
A performer is particularly acute at listening and at generating intricate and multiple feedback loops: listen > modify action, hence sound > listen. Of course this is pre-linguistic, though not unconscious; the latency in such a loop can be of the order of the ‘duration of the present’ in perception.
But the performer may or may not know the actual workings of the Max patch. With the addition of interactive electronics a layer of complication is added: the performer (of a traditional acoustic instrument, at least) is used to highly consistent cause-effect chains (think of pitch intonation or vibrato control); here a more conscious layer is added, which in time might be learnt and sublimated, but may very well never reach that degree of assimilation.
There are two possible paradigms here (Emmerson 2010). The performer may be:
1. Extended by the electronics, remaining primarily a soloist:
• the technology ‘layers’ over the deeply assimilated performance skills, maybe interfering with them (generally moving from echoic to short-term memory). Over time these may themselves be learnt and assimilated (‘becoming second nature’).
2. Playing with the electronics:
• the technology creates an ‘other’, and a raft of longer time scales comes into play (generally moving from short- to long-term memory).
3. Or a combination ...
I am arguing that for the audience the overt knowledge that we discuss at conferences such as this is sometimes misplaced. We need to get back to the music – does it in the end give the participants (all of them) a valuable experience? – where are the transcendental, epiphany moments – or the aesthetics of beauty, shock and provocation? These stand in front of – and are propped up by – the means and tools we discuss at length.
What are the musical and expressive potentials of interactive systems? The answers lie in the truly meaningful and balanced interaction of composer, performer, and listener. Now we reach the hub of the problem: the performer has remained the poor relation here. All three functions demand different but related and complementary discussions.
The paper will aim to place the performer back at the centre of the discussion on interactivity. What does the performer need to know, to understand, and to assimilate? The composer has insufficiently addressed this in recent years, focusing too much on what the audience may experience, in the crude terms of ‘getting the message across’ of the cause/effect chains s/he has created. This discussion will (it is suggested) allow us to rebalance the language, acknowledging the technology of production on the one hand, while restoring discussion of the expressive potential of the medium on the other – with the performer at the fulcrum.
Filipe, Elsa - À la recherche d’une véritable continuité dans la communication instrument - machine : les défis des approches interactives à la musique dans la composition de Philippe Manoury et Florence Baschet
Université Paris-Sorbonne (France)
“For more than a quarter of a century now my mind has never ceased to be preoccupied, even haunted, by that invention which, another quarter of a century earlier, caused a fissure in the world of music: electronics.” (Philippe MANOURY).
The emergence of mixed music in the 1950s raised a number of aesthetic, compositional, interpretative, and analytical problems. The central question concerned the bringing together of two distinct sound worlds, the instrumental and the electronic. Initially, the available technology allowed only for pre-composed electronic music, recorded on the fixed medium of magnetic tape, with which the instrumentalist had to dialogue. Several composers took an interest in the new sonic potential of this music, yet the temporal rigidity of the medium was an obstacle for composition as much as for performance. It was only from the 1980s onwards, with the creation of the MIDI standard, of systems allowing real-time sound synthesis and processing, and of new software and instruments, that paths opened towards a temporal flexibility aimed at achieving the desired continuity of communication between instrument and machine. Applying these tools to music proved far from easy, and clearly entailed changes in compositional thinking, in writing, and in performance.
In France, the Institut de Recherche et Coordination Acoustique Musique [IRCAM], in Paris, committed itself to a multidisciplinary project of international scope in order to provide the best possible performance and answers to the questions composers were encountering. Philippe Manoury and Florence Baschet, who have worked with IRCAM’s research teams, have contributed greatly through their work, and continue to contribute, to the advancement of the tools and software developed, making the process of instrumentalist-machine interaction, in one way or another, ever more continuous. The two composers’ concerns are similar; their solutions, however, diverge.
On the one hand, Philippe Manoury has taken a strong interest in the theoretical questions surrounding real time and the interaction process. Besides reflecting on these concepts and their applications to music, he has also addressed questions of writing and interpretation, setting out his ideas in an essay entitled Partitions virtuelles. At the time of the composition of the second work of his cycle Sonus ex machina, Pluton for piano and real-time electronics (1988), IRCAM’s research teams developed a new system for automatic real-time synchronisation between musicians and computers: score following. During the 1990s this system underwent several modifications, with the creation of new algorithms refining the instrument-machine communication process. Partita I for viola and electronics (2006) inaugurated a cycle of pieces for bowed string instruments and gesture-capture electronics. With this project the composer seeks to explore new possibilities for controlling synthesis through an “extended violin”. His research continues in further works, the most recent of which are Tensio for string quartet and electronics (2010) and Partita II for violin and electronics (2012).
On the other hand, Florence Baschet concentrates on musical and instrumental gesture in order to refine this same communication process. The research begun at IRCAM led her to work in the field of mixed music or, as the composer puts it, music of an instrumental score and an electronic score. She seeks to highlight the interpretative phenomena on which the sound transformations depend. For her, the instrumentalist is first of all the interpreter of the musical text, but must also be the interpreter of the electroacoustic score. In this sense, it is not a matter of giving the instrumentalist a pedal to trigger pre-composed events, an action that would have no direct influence on the sounding result, but on the contrary of finding a gesture-capture system that takes the musician’s playing techniques into account. In this way the performer would only have to concentrate on his or her musical interpretation, as if playing instrumental or chamber music; yet the playing technique would have a direct influence on the resulting sound. Her research project began with the work Bogenlied for violin and electronics (2005) and developed with StreicherKreis for “augmented” string quartet and live electroacoustic setup (2007/2008). For the first piece an augmented bow was used, which allowed the recognition of playing modes and the automatic extraction of the musical parameters of three basic articulation types: détaché, martelé, and staccato. The initial objective of StreicherKreis was to extend the initial research of Bogenlied on playing modes and the organisation of a vocabulary of gestures more specific to contemporary music.
Following this brief retrospective of the work of Philippe Manoury and Florence Baschet, our paper will attempt to answer questions such as: from the compositional and interpretative points of view, what challenges did Manoury and Baschet face? How did these two composers appropriate the same technological environment, and how did they apply it to their work? How do they answer questions such as what is real time, or what is interaction? What do they seek to obtain from interactive systems? To answer these questions we will draw on the most recent compositional work undertaken by Philippe Manoury, namely Partita I and Tensio, on Florence Baschet’s works Bogenlied and StreicherKreis, as well as on the writings of various authors and on interviews. In conclusion, we will highlight elements important for the advancement of musicological studies of interactive mixed music.
“Music must take the measure of the complexity of our present world, of its knowledge, of its advances, and of its doubts as well. This is not a matter of stifling intuition. On the contrary, intuition must be sharpened by knowledge.” (Philippe MANOURY).
De Montfort University (England)
Unlike traditional instrumental music, electroacoustic music “presents no score, no system, and no ‘pre-segmented’ discrete units like notes” (Delalande 1998: 14); “the analyst, deprived of any score which purports to represent salient features of the musical materials, is forced not only to consider which aspects of these materials are pertinent to an analysis, but must also contemplate the very basis and process of analysis” (Camilleri and Smalley 1998: 3). Consequently there is no precedent for the analysis of electroacoustic music. There exist a number of tools and analyses of electroacoustic music, specifically within acousmatic research; however, there is no central consensus on the correct tools or methodologies for the variety of different categories of electroacoustic music. Many prominent publications on analysis, such as those on spectromorphology (Smalley 1986) and typo-morphology (Schaeffer 1966), mostly discuss single sound events and not their relations with other sonic materials in creating musical structures. Other publications that do consider musical structures, such as Stéphane Roy’s grille fonctionnelle (2003), are meant to be used in conjunction with other methodologies that investigate individual sound events. Hence, there is no single explicit ‘tool’ that can fully analyse a work. This lack of a general consensus might be viewed as a negative attribute, when in fact it is a positive one. Although it does not provide solid grounding for a singular methodology, it does allow for many different perspectives on a particular work. As Nattiez (1990: 168) states, “there is never only one valid musical analysis for a given work”. The same concept can be applied to the different methodologies of analysis, which inevitably relate to the varying reasons for undertaking one.
The proposed paper will discuss the potential benefits of an open access platform that allows users to share and collaborate on ideas and analyses, addressing the shortcomings of this field of research. The Online Repository for Electroacoustic Analysis (OREMA) project (www.orema.dmu.ac.uk) will be used as an example of such an initiative. It should be stated that one is not advocating that the current academic method of peer review be abolished, rather that a peer-to-peer model could be developed to complement what is currently being published.
For the past two years the OREMA project has been in operation, allowing users to upload analyses and post topics for discussion to a wider community of participants. It is an open access initiative that places no limits on the type of analysis one might submit (provided it is within the scope of the project) and has no hierarchical structure, which facilitates dialogue between postgraduate students, professors, lecturers, and enthusiasts. What has been determined is that there is now a model for community engagement towards the advancement of analytical ideas and practices within the domain of electroacoustic music analysis.
The simplest description of OREMA is that it is a community-based website that functions as a repository for electroacoustic music analyses. It is a non-profit initiative and does not charge users a subscription or submission fee. All content on the website is user-generated (unless referencing external links). User-generated content does not need to go through a peer-review process; instead, users may upload content to the website provided it is related to the subject of electroacoustic music analysis. All content submitted to the website is held under a Creative Commons licence, which allows adaptations of other content as long as the use is non-commercial and the original author is attributed.
The website is split into three main areas: analyses, the analytical toolbox, and a public forum. The analysis section of the website allows users to upload and share analyses of electroacoustic works. There are no rules regarding the type of analysis accepted, only that it is an analysis of an electroacoustic work. Only authors and moderators (for administration purposes) have the power to amend an analysis, while other users have the option to comment in a comment section on the page. The analytical toolbox is a collection of short articles documenting methodologies and strategies for analysing electroacoustic music. Unlike the analysis section of OREMA, all users have the ability to amend its content, much like Wikipedia articles. The idea is that the entire community will review the information to form a consensus on a shared understanding of each tool. Finally, the forum provides a platform for extended discussions beyond the comment sections of the analysis and toolbox pages.
The intention of the OREMA project was to gauge whether a community could be formed that would concentrate on the advancement of the analysis of electroacoustic music. Although the project was intended to be open and diverse, allowing many different perspectives, five rules were defined to ensure focus and autonomy. These are:
• The OREMA project will analyse electroacoustic music in all its guises (acousmatic, sound art, installations, electronica etc.).
• There is no one “true” analysis. The OREMA project encourages the analyst to post analyses of the same composition to show different perspectives.
• There is no one methodology or strategy for analysis. The analytical toolbox is there for reference and is not a list of the acceptable tools for analysis within the project. Users are encouraged to apply their own devised strategies to analyse electroacoustic works.
• There is no hierarchy within the OREMA project. All members, regardless of their occupation and status, are equal and share the same rights.
• All information held on the site is free to access and free for people to reference under the protection of a Creative Commons licence.
The OREMA initiative is part of a three year funded project titled New Multimedia Tools for Electroacoustic Music Analysis (funded by the Arts and Humanities Research Council), which is coordinated by Professor Simon Emmerson and Professor Leigh Landy of De Montfort University, Leicester. The concept of the OREMA project was developed after the funding application and forms part of the original contribution within my PhD research.
Between March 2011 and the date of this abstract, a total of 12 analyses of 7 compositions have been submitted directly to OREMA, with several links to analyses on external websites. The analyses currently hosted on the website range from graphic transcriptions, typological analyses, and spectrogram segmentation using spectromorphological terms (Blackburn 2006) to a Schenkerian analysis of an acousmatic work (Batchelor 1997). Furthermore, the scope of analysis has not been confined to acousmatic music, and has included analyses of electronica (Ramsey 2012) and even an audio-only game (Hugill 2012).
The OREMA project is ambitious as it removes the necessity of a peer-review committee and allows any user, who can register for free, the ability to publish their ideas for others to see. The expectation was that the community would act as a peer-review committee by vetting contributions to the site in order to promote excellence. In parallel to the core OREMA project itself initial preparations for a peer-reviewed eJournal section of the website, called eOREMA, have already taken place and a committee of 14 reviewers has been assembled. The eOREMA journal is intended to be a biannual publication arm of OREMA that will consist of both peer-reviewed analyses and articles concerning electroacoustic music analysis.
Benoît Gibson; Makis Solomos
Université d’Evora (Portugal); Université Paris 8 (France)
Xenakis’s first electroacoustic works date from the period when he was working at the GRM studio; they thus belong to the historical corpus of early musique concrète: Diamorphoses (1957-58, two-track tape), Concret PH (1958, three-track tape), Analogique A et B (1958-59, mixed: nine strings and four-track tape), Orient-Occident (1960, two versions: film music and concert music; two-track tape), and Bohor (1962, eight-track tape). These works raise numerous questions around the notion of musique concrète, such as their restoration and digitisation, the existence of multiple versions, the very notion of musique concrète and its theorisation by Pierre Schaeffer, and the relation to instrumental music...
The two authors are currently pursuing a number of lines of research on this repertoire, through a genetic inquiry in the Xenakis Archives (BnF) as well as through an approach that is at once analytical, aesthetic, and historical. This paper proposes to summarise some of what is at stake in this research, taking as examples two works: Diamorphoses and Bohor.
Diamorphoses is thus Xenakis’s first work of musique concrète. It uses the compositional approach of musique concrète: transformations of recorded sounds. The recordings used come from varied sources: an earthquake, jet sounds, the impacts of rubbish skips... The archival inquiry shows that most of these recordings were not made by Xenakis himself. As for the transformations, they were carried out using the techniques and technology current at the time: tape manipulation and use of the phonogène. Xenakis’s originality lies on several levels. First of all, the piece is conceived as a vast study of noise, as Xenakis stated on many occasions. Next, we see that he does not work with the notion of the “sound object” (in the Schaefferian sense of the term): not only are the sounds sometimes “too” long or eccentric, but their nature (the recognition of their cause) can be an important aesthetic factor (it is also interesting to see that Xenakis sometimes thwarts perception with regard to the nature of the sounds). Another interesting aspect: the piece presents itself as “experimental” and can be considered a study of the logarithmic perception of density. Finally, its overall construction already raises the question of form as “emergence” – it is no accident that it is in Diamorphoses that we first encounter in Xenakis the granular approach which, as we know, he would develop in the following piece, Concret PH.
Bohor is, by contrast, the last piece of musique concrète, that is, the last realised in the GRM studio. In part it is already polytope music, since one of Xenakis’s major concerns here is space (it is the GRM’s first eight-track piece, consisting of a quadruple stereophony) and immersion. With regard to the issues sketched here, we note that the work comprises four double tracks named “piano”, “orgue” [organ], “Byzance” [Byzantium], and “Irak” [Iraq]. The first consists notably of sounds produced by playing inside the piano (chromatic movements and others). The second is composed of harmonies of a Laotian mouth organ. In the third, numerous bell sounds can be recognised. Finally, it would seem that “Irak” corresponds to sounds of jewellery and other metallic objects. A large part of these sounds was recorded with Xenakis himself as “performer”. As for the electroacoustic treatments, they are simple but radical – for example slowing down the tape, which makes the mouth-organ sounds ring out as powerful, very low drones. Among the many questions this piece raises, we may cite the existence of different versions as regards the mixing of the tracks, and the notion of continuum that it puts into play.
Università degli studi di Teramo (Italy)
The development of electroacoustic music in Canada took place within universities and research centres, not in radio and television studios as was the case in Europe. This difference would prove decisive for the development of an autonomous electroacoustic culture, tracing an independent path in dialectical relation with the dominant French and German models.
To highlight this trajectory, the proposed paper aims to examine the case of the electroacoustic music studio of the Université de Montréal. In detail, it will analyse the studio's genesis, development and aesthetic-compositional directions, with the goal of reconstructing the role played by this institution in the constitution of a lively electroacoustic music scene.
Universities are places devoted equally to training (courses, workshops), creation (studios) and dissemination (concert series, conferences). This threefold role is essential to the creation of a community of composer-researchers able to develop its own critical spirit and a poetics through which to relate to musical developments elsewhere.
If the very first Canadian (and Québécois) electroacoustic production was realized under the impulse and aesthetic of the French tradition, during the twenty years from 1970 to 1990 one can observe a growing autonomy, both in writing techniques and in musical content.
From this perspective, the study of the electroacoustic studio of the Université de Montréal occupies a major place, representing the meeting point between European and North American culture. It should not be overlooked that among the first Canadian composers to take an interest in the electroacoustic repertoire, Quebecers were numerous.
The linguistic and cultural proximity between Quebec and France, moreover, considerably facilitated exchanges between the composers of French-speaking Canada and the principal French institutions dedicated to the study of sound material (GRM, IRCAM, ORTF). For Montreal composers interested in music produced with technical means, training in Europe became indispensable in order to become familiar with the new musics (training courses, festivals, competitions).
These moments of exchange would prove very important when places dedicated to electroacoustic practices were established in Quebec. The Université de Montréal founded its own studio at the beginning of the 1990s, within the framework of the first complete programme in electroacoustic music, from the bachelor's degree through to the doctorate. The pedagogical aspect would be fundamental to the survival and development of the Québécois electroacoustic repertoire, which would grow exponentially.
The links with the European tradition, represented by the first professors, among them the Frenchman Francis Dhomont, were adapted to a context more oriented towards the fusion of genres and cultures. Acousmatic music, pushed towards new explorations, enjoyed considerable success, contributing to the formation of what some call the "Montreal school". It should also be noted that the studio of the Université de Montréal, at the time of its foundation, was one of the best-equipped facilities in Canada.
To better support the ideas proposed in this paper, some of the most significant works realized in the studio will be analysed, in order to highlight moments of continuity as well as of rupture with the preceding musical production. In conclusion, theoretical writings and direct (correspondence, memoirs) and indirect testimonies will be drawn upon to sketch the historical and social context that accompanied Franco-American acousmatic production in the period 1970-1990.
The analysis of these aspects is intended, in the author's hope, as a useful contribution to properly musicological studies of electroacoustic music. Musicological research, highly advanced in various domains of musical production, lags behind where this repertoire is concerned. Although we may welcome several important studies, much effort is still needed to arrive at a precise knowledge of technological music production.
In this light, the electroacoustic music studio of the Université de Montréal can serve as an effective case study, capable of contributing to a correct historical and aesthetic situating of the music of our time.
University of Guelph (Canada)
There is a long tradition of soundscape composition, developing out of the musique concrète history of electroacoustic composition. The term “soundscape” was coined by R. Murray Schafer (Schafer 1977) for his World Soundscape Project (WSP), launched in 1968 at Simon Fraser University, Canada. This research, primarily in the area of acoustic ecology, led to related compositional activity. Schafer's own creative focus has been on site-specific performances, notably the music theatre works of his Patria cycle, such as Princess of the Stars, performed on a wilderness lake beginning before dawn, where the acoustics of the environment, including the songs of waking birds, are integral to the performance (Schafer 2002). Other WSP associates, notably Barry Truax and Hildegard Westerkamp, have developed their soundscape works primarily in the studio, using electronic and digital tools and techniques to create electroacoustic compositions such as Riverrun (Truax 1987) or Beneath the Forest Floor (Westerkamp 1996).
At the same time, other composers were creating electroacoustic works built from field recordings of soundscapes. Most notable is Luc Ferrari, whose Presque rien No. 1, le lever du jour au bord de la mer (1970) gained notoriety in France and elsewhere for its minimalist approach (Drott 2009). Soundscape composition has since established itself as a strong sub-category of electroacoustic composition, even if the definition of what does, and what does not, fall into the category is problematic (Truax 2008, Harley 2008).
Given the immersive nature of real-world soundscapes, there has been an organic extension of soundscape composition into the realm of multi-channel production and spatialized diffusion. Barry Truax worked with the Richmond Sound Design AudioBox to create pre-programmed spatial diffusions of his works (Truax 1999), followed by other composers such as Darren Copeland, who has also used the AudioBox live to create real-time eight-channel diffusions of soundscape (and other) works. At the same time, other composers worked to develop strategies and sound systems for creating spatialized presentations of their electroacoustic work, most often working from a stereo source and diffusing it in performance over an array of loudspeakers of differing specifications and dispersal patterns. Examples of large diffusion systems include the BEAST sound system (Harrison 2000) and the GRM Acousmonium first implemented by François Bayle (Prager 2012).
The evolution of technology and the availability of more affordable multi-channel audio interfaces have facilitated the diffusion of electroacoustic work and the creation of immersive audio environments for a variety of contexts, from concert to installation. On the software side, a variety of utilities for the spatialization of digital audio has greatly enriched the possibilities, making sophisticated multi-channel composition and diffusion possible outside of specialized facilities. Examples include Ambisonics (University of York), Kenaxis VBAP (Stefan Smulovitz), OctoGris (Université de Montréal), Spat (IRCAM), and the UBC Toolbox (University of British Columbia). Many of these tools have been developed using the Max programming environment, which has been designed to accommodate a large number of audio inputs and outputs, limited only by hardware constraints.
Along with spatialization software, a variety of MIDI controllers have been developed, enabling spatialization to be “performed” along with other musical elements. Faders and knobs enable sound to be diffused in gestural ways. With sensor interfaces such as the Electrotap Teabox, other gestural controllers have been developed using a variety of sensors. With the development of the OSC protocol, wireless controllers have also been implemented, using iPhones, iPads, data gloves, and so forth.
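To make the controller side concrete: an OSC control message, of the kind such wireless devices transmit, is just a small binary packet. The sketch below packs a minimal message (address pattern, type-tag string, big-endian float32 arguments) following the OSC 1.0 specification; the `/pan` address and the function name are purely illustrative, not drawn from any of the systems named above.

```python
import struct

def osc_message(address, *floats):
    """Pack a minimal OSC 1.0 message with float32 arguments.

    OSC strings are ASCII, null-terminated, and padded to a
    multiple of 4 bytes; numeric arguments are big-endian.
    """
    def pad(b):
        # null-terminate and pad to the next 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)

    msg = pad(address.encode("ascii"))            # address pattern, e.g. b"/pan"
    msg += pad(b"," + b"f" * len(floats))         # type tags: one 'f' per float
    for x in floats:
        msg += struct.pack(">f", x)               # big-endian float32
    return msg

# A pan position of 0.5 sent to a hypothetical "/pan" address:
packet = osc_message("/pan", 0.5)
```

Sent over UDP, such a packet is what an iPhone or tablet controller delivers to the spatialization patch; on the Max side the `udpreceive` object unpacks it back into an address and arguments.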
Spatialization, therefore, can be part of interactive computer music and can be one element of technologically based improvisation (Mooney et al. 2008). The incorporation of real-time spatialization technology within the context of soundscape composition and performance is entirely feasible. A number of strategies are possible: 1) a studio-produced soundscape composition can be diffused using interactive controllers; 2) the elements of the composition can be triggered in real time and spatialized, creating a real-time composite composition; 3) a combination of the first two, with pre-produced elements perhaps creating an immersive sonic environment and other, more focused elements being presented as soundmarks.
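A minimal sketch of the gain logic underlying such controller-driven diffusion might look as follows; it computes equal-power panning gains for a mono source moved around a ring of eight loudspeakers. The function name and ring layout are illustrative assumptions, not the algorithm of any specific system cited above (VBAP, Spat, etc., each use their own, more general formulations).

```python
import math

def ring_pan_gains(angle_deg, n_speakers=8):
    """Equal-power gains for a source at angle_deg on a ring of
    equally spaced speakers: the two nearest speakers share the
    signal via a constant-power sin/cos crossfade."""
    spacing = 360.0 / n_speakers
    pos = (angle_deg % 360.0) / spacing       # fractional speaker index
    lo = int(pos) % n_speakers                # nearer speaker
    hi = (lo + 1) % n_speakers                # next speaker around the ring
    frac = pos - int(pos)                     # position between the pair, 0..1
    gains = [0.0] * n_speakers
    gains[lo] = math.cos(frac * math.pi / 2)  # constant-power crossfade:
    gains[hi] = math.sin(frac * math.pi / 2)  # cos^2 + sin^2 = 1
    return gains
```

Driving `angle_deg` from a fader, sensor, or OSC message each audio block is enough to "perform" a circular trajectory; the constant-power law keeps perceived loudness steady as the source crosses speaker boundaries.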
In performance, soundscape elements may be combined with other performative elements, such as image, video, instruments, dance, etc. As an example, the duo ~spin~ combines interactive improvisation using amplified flutes and computer with controllers within an immersive, eight-channel sound environment surrounding the performers and audience (Waterman 2012). The flutist is able to evoke natural sounds in a variety of ways (breath sounds, whistle tones, vocalizations, birdsong imitation). The sound of the flutist is routed to the computer for a range of processing effects, and the output is spatialized using controllers. The computer musician is able to trigger other sounds (birdsong, etc.), add instrumental sounds (such as a Theremin), play soundscape tracks already spatialized, and so forth (Harley 2007). The aim of ~spin~ is to present interactive music that explores the sonic, performative territory between fixed, multi-channel soundscape composition, instrumental improvisation, signal processing, and interactive computer music. This work is only possible with controllers that enable real-time response to improvisational cues. The soundscape element is at the core of ~spin~'s creative aims, seeking to engage listeners with sonic aspects of the environment while at the same time remaining cognizant of the ethical issues of working with such materials (McCartney 2010). ~spin~ was invited to make a presentation at the 2012 Ecomusicologies Conference.
University of Valladolid (Spain)
Eduardo Polonio (b. 1941) is considered one of the pioneers of electroacoustic music in Spain, working from 1966 as a composer in the first sound laboratories in the country, Alea in Madrid and Phonos in Barcelona, at the latter also serving as a professor.
From 1965 to 1969 he attends the Darmstadt summer courses (Germany). During this stay he comes into contact with the latest trends in contemporary music and studies instrumentation with Günther Becker.
In 1969 he takes an intensive course at IPEM in Belgium. This experience leads him to discover the potential of electronic instruments; thereafter he opts for technical media and abandons serialism and aleatoric music.
Besides composing and performing, he is the founder of musical groups such as Alea Música Electrónica Libre, the first Spanish group of live electronic music. He is also a teacher, festival organizer, co-founder of the Asociación de Música Electroacústica de España (A.M.E.E.), creator of his own laboratory, the Diáfano Estudio de Música Electroacústica, and author of various writings for conferences, symposia, seminars, magazine articles and books, most of them for the Institut International de Musique Electroacoustique de Bourges (IMEB).
With respect to his musical production, his works since 1980 are characterized by their interdisciplinary nature, created in close collaboration with artists from various backgrounds.
Throughout his career as a composer he receives numerous commissions from private and public entities.
The composer’s catalogue of works includes a wide variety of categories: multimedia, video, theatre, cinema, radio, opera, fixed medium and concert works, these latter being the most plentiful genres (Polonio).
Regarding his style, a new period of musical production starts in 1981. He leaves behind repetitive techniques and decides to use new musical materials based on musique concrète aesthetics.
The main characteristics that define Polonio's personal style are tenderness and irony (Iges 2001), but above all a poetic language that is constant in his musical composition. The albums Acaricia la mañana (1976-84) and Bload Stations-Syntax Error (1985) are examples of this approach. They consist of brief pieces that evoke the idea of a baroque suite. This concept allows, on the one hand, the possibility of writing long pieces with varied and complex materials and, on the other, a connection with the public through the long-play format (Zulian 2001).
Within this poetic category we distinguish the following works: Flautas, voces, animales, pájaros, sierra, la fragua de protones, trompetas, frialdad con sangre, arpas judías, trompetillas, agua, agujero negro (1981), Cuenca (1985), Narcissus (1988), Vida de Máquinas (1988), Esa ola de luz (1990), Ice Cream (1991), ChC (1992), Histoires de Sons (1993), En un eclipse, en un eclipse total, en un apagón general del universo (1998).
Another characteristic of his work is the relation of music to the field of mathematics. Some of his compositions evoke certain concepts and theoretical models; on other occasions mathematical operations serve as patterns of organization. This is the case of Ussesion (1995) and U flu for fru (1996), composed using the binary number U, a concept introduced by the British mathematician Alan Turing to describe the principles of the universal machine. Diagonal (1991) is another work whose structural foundation corresponds to this linear geometric concept.
Devil’s dreams (1998) and Trois moments précédant la genèse des cordes (2001) address mathematics, but in this case in an evocative way. Devil’s dreams is structured into 12 sections which represent the dreams of Tepotaxi, the character of the book “Der Zahlenteufel: Ein Kopfkissenbuch für alle, die Angst vor der Mathematik haben” (1997) by the German writer Hans Magnus Enzensberger, while Trois moments précédant la genèse des cordes (2001) alludes to “string theory”.
This paper, which is part of my ongoing research, focuses on a semiotic study of extra-musical signification processes in the most important acousmatic works of the Spanish composer Eduardo Polonio: Cuenca (1985), ChC (1992) and Trois moments précédant la genèse des cordes. They are the concert works most representative of each decade, sharing a similar musical style. Their sound materials are based on the exploration of timbral qualities and have great evocative power of images, stories and concepts.
Cuenca (1985) marks the break with the minimalist world. Its compositional process reflects the landscape of this Spanish city and the diverse experiences the composer lived in this place.
ChC (1992) recreates Plato's myth of the cicadas, the story of men who forgot to eat and drink until they died and were finally transformed into these insects. In 1994 he was awarded the Magisterium of the Grand Prix International de Musique Électroacoustique in Bourges (France) for this electroacoustic composition.
Trois moments précédant la genèse des cordes (2001) was composed in 2000 on commission from the Institut International de Musique Electroacoustique de Bourges (IMEB). In 2002 Eduardo Polonio received the III SGAE prize for electroacoustic music in Spain for this work. Trois moments précédant la genèse des cordes recreates "string theory", capable of integrating the four fundamental forces: gravity, electromagnetism, and the strong and weak nuclear forces. This theory holds that underlying the elementary particles of the atom is the vibration of a string, whose mode of vibration defines the nature of these particles, i.e., their electric charge and mass.
The methodological framework applied in this study corresponds to the analytical approach proposed by Francesco Giomi and Marco Ligabue, seeking to identify signification strategies in the acousmatic works of Eduardo Polonio. The study of such strategies will be carried out across different categories: narrative organization and characterization of sections, semantic associations, anticipation and repetition, and the narration of timbres (Giomi and Ligabue 1998).
This paper examines, at each of these levels, how sound events work as signs to express abstract concepts and realities external to the musical discourse. The study of extra-musical associations is complemented by Philip Tagg's semiotic approach based on the concept of "anaphones" (2012). This concept refers to the use of existing models in sound formations, i.e., sounds capable of connecting with other sensory levels by evoking other sounds, transmitting tactile sensations or suggesting ideas of movement. Depending on the mode of perception, anaphones are divided into three categories: sonic anaphones, tactile anaphones and kinetic anaphones.
To conclude, it should be pointed out that hitherto there has been no musicological research centred on Eduardo Polonio, so this paper contributes to the dissemination of the figure and work of this Spanish composer, and documents a fundamental episode in the evolution of electroacoustic music in 20th- and 21st-century Spain.
Université Paris-Sorbonne (France)
The electroacoustic domain is not always central in the work of Pierre Boulez, but it seems to me that reflection on electroacoustic composition, as well as its practice, allowed him to deepen his musical thought founded on serialism. We cannot separate the two domains, electroacoustic and instrumental, in examining this composer's compositional development. His relationship with electroacoustic music divides broadly into two periods: the transitional period from 1951 to 1972, and the IRCAM period from 1977 onwards. He was interested in electroacoustic composition as early as the 1950s, before beginning to work at IRCAM. His composition with magnetic tape concerns four pieces: two études of musique concrète (1951-52, abandoned), Symphonie mécanique (music for Jean Mitry's film, 1955), and Poésie pour pouvoir (1958, abandoned).
Boulez's discovery of the electroacoustic universe goes back to his first musique concrète composition, during the training course of 19 October to 13 December 1951 at Pierre Schaeffer's Groupe de Recherches de Musique Concrète (GRMC), alongside other composers such as J. Barraqué, M. Philippot and A. Hodeir. At that time, Boulez was interested in enlarging the possibilities of the sound phenomenon, capable of materializing sound and of rhythmic research through the manipulation of magnetic tape. It is suggestive that this was a moment parallel to integral serialism, as in Polyphonie X (1949-50) and Structures I (1951-52). Of the electroacoustic pieces composed before IRCAM, most were abandoned because of the insufficient technology of their time, and their scores were never published. It is therefore not easy to know his approach to electroacoustic composition during this period. Nevertheless, it seems that the composition of musique concrète in the early 1950s gave him an important opportunity to reconsider and develop his serial thought, above all the notions of timbre and rhythm, and to advance the question of sound spatialization.
According to Antoine Goléa, who observed musique concrète at Pierre Schaeffer's GRMC, Boulez's two études of musique concrète (1951-1952, abandoned), like Olivier Messiaen's Timbres-Durées, belong to a second tendency, "abstract music", which is "an attempt at total organization of the sound world, not only of its pitches but also of its rhythms, its timbres and its attacks". This clearly illustrates that the approach to musique concrète centred on its adaptation to serial research and its development. From this point of view, E. Gayou likewise remarks that the techniques of musique concrète allowed serial composers "to go even further in the refinement of their compositional technique, notably serial technique".
It is therefore appropriate to analyse, globally and historically, the influence of electroacoustic technique on his musical and aesthetic thought, relating serial thought to acoustic properties (timbres, durations, pitches, rhythms, intensities, attacks). However, as we have seen, we cannot yet trace Boulez's musique concrète because of the absence of its scores. Indeed, Boulez's two études, together with Pierre Henry's Antiphonie (1951) and Vocalises (1951), Jean Barraqué's Étude (1953) and Michel Philippot's Étude I (1953), were the first serial works based on the technique of composition by sound montage made possible by the GRMC's phonogène. The sound construction of Étude sur un son consists of a serial scheme establishing frequency-duration relationships that generate transpositions of a single amplified sound of an African sanza at variable speeds. While this first étude succeeded in generating timbral richness from the transposition of a single sound, in the sound montage of Étude sur sept sons Boulez used far more varied sound sources, recorded by Pierre Henry, who was at the GRMC. What is remarkable is that it is a unique and experimental piece within Boulez's œuvre, composed from concrete sounds (obviously including instrumental sounds). This clearly illustrates that the search for new sounds was one of the important elements in Boulez's serial thought. Moreover, the musical structure, particularly in duration and rhythm, is also more elaborate, depending on the serial process. One might say that the second étude goes beyond being "a study" compared with the first.
This paper is based on the analysis of the sketches of the two études held at the Paul Sacher Foundation, which contain the sound materials, the serial scheme, the sound montage and the electroacoustic processes. It constitutes a first attempt to analyse the electroacoustic methods of Pierre Boulez's musique concrète from the unpublished score, situated in the context of integral serialism. It is true that after composing these two études, Boulez renounced musique concrète because of the technological limitations of sound montage at the time and his disagreement with Schaeffer's concept of sound objects. Nevertheless, this serial experiment in the electroacoustic domain certainly left its mark on the development of Boulez's serial composition.
(Supported by a grant from the NOMURA Foundation)
Yuriko Hase Kojima
Shobi University (Japan)
Over the past several years, I have conducted research on music with technology in Japan from various perspectives. While looking mainly into aesthetic topics, I have always been concerned with composers' relentless efforts in facing the problems of musical notation for the electronic music part. Especially if a piece is real-time contemporary music, a composer faces a major problem in deciding what kind of musical notation would best present their musical ideas.
In Japan, there are not yet many composers working in this area. Japanese composers active in this kind of composition include Takayuki Rai, Ichiro Nodaira, Naotoshi Osaka, Mikako Mizuno, Masahiro Miwa, Mari Kimura, Hitomi Kaneko, Miyuki Ito, Akira Takaoka, Ai Kamachi, and myself, Yuriko Hase Kojima. They are mostly classically trained composers who later applied music technology to their creation of music.
Many of the above composers have written pieces for Japanese traditional instruments and live computer music systems. It would be very interesting to investigate why they choose particular instruments to combine with electronic devices, when we know these instruments can create various kinds of sound without electronics. They seem to find great interest in the changes of timbre of the traditional instruments. Meanwhile, some Japanese composers find no interest in composing for traditional instruments: they say the instrumental sound is not stable, changes tremendously over time, and disturbs their creative mind when they compose. I am interested in how such different concepts of musical sound operate in each composer's creative mind.
Finally, I will look at the notation these composers use to express their ideas on paper. Every composer knows how difficult it is to convey to a performer what one hears in one's mind. Here we may have to go back once again to compare what is written on the paper with what is actually heard by the listener; that is the problem of music notation even in the traditional sense.
De Montfort University (England)
Following the success of the ElectroAcoustic Resource Site (EARS: www.ears.dmu.ac.uk) and based on a request by one of its funders, Unesco, the desire to create a resource site related to electroacoustic music (or sound-based music, as I like to call it) for young people was born. After careful consideration and a good deal of contextual research, it was concluded that what Unesco had requested, namely “an EARS site for children” (EARS 2) was not going to be a small project. Children are not interested in simply receiving information online; they like to be active and jump across media as much as possible.
It is within this context that the notion of the EARS 2 pedagogical project was born. This talk will briefly introduce listeners to its history, the results of the contextual research undertaken, its ambition, vision and implementation. Due to the fact that funding was not gained all at once, but instead sequentially, with amounts of very different sizes, some of the development was quite tricky. The following paragraphs offer a structure regarding what will be contained in the presentation.
This is briefly described above. In this first section a case will be made to underline how pedagogical initiatives are vital to the development of the field of electroacoustic music studies.
This first of three central discussions regarding EARS 2's development focuses on issues related to online and classroom-based learning, and on how traditional means of learning, e.g., splitting up history, theory, technology, repertoire acquisition and creative endeavour, are not an option in terms of this initiative. With this in mind, the EARS 2 approach of concept-driven learning is introduced. To take a simple example, there are various ways that the notion of real-world sounds can be introduced in the EARS 2 curriculum. Allowing it to relate to the entire list a few lines above makes the didactic approach of EARS 2 holistic and, in the development team's view and by way of user feedback, effective.
The scope of the content will then be briefly summarised, along with how the site's three general categories (listening, learning and doing) are entirely interwoven. Examples will be presented.
Another vital aspect of the site, multiple forms of navigation, will be introduced. The developers have created an incremental curriculum similar to those in any subject, but are also offering thematic ones (related, for example, to learning manipulation techniques in a block). Teachers can create the navigation for their own students or individual users who visit the site. À la carte viewing is also possible.
Gaining the rights for repertoire has been critical, and we are pleased to share that we are able to use examples (in general, not entire works) from the GRM's collection, ZKM's and the CEC Sonus collection, as well as individual works from an ever-increasing number of composers.
Rights also plays a role in terms of the sounds used on the platform’s Compose with Sounds (CwS) creative software. Composers’ works are being uploaded accompanied by Creative Commons licenses. The challenge is: individual samples need to be controlled somehow. Users need to fill in rights agreements demonstrating their awareness of the legality of the samples they are using and teachers have the ability to intercede wherever relevant.
The approach to CwS will be introduced at this point. In particular, our goal to make usage as intuitive as possible where what you hear is what you see and vice-versa will be demonstrated.
Like the original EARS project, making EARS 2 available in a number of languages – with all that implies, that is, not only translation but also cultural conditioning where relevant – is an important priority. As will be presented in the next section, CwS is appearing immediately in six languages as a consequence of its EU funding. Partners are being sought to take on the larger project of the entire pedagogical site. Potential partners have been found here in Europe and as far afield as Latin America and China.
As stated, development would ideally have taken place within the auspices of a proof-of-concept grant and then a major grant for the entire project. However, the small grants that kicked off the initiative did not lead to that single large grant, but instead to two separate funding streams: one for the software, an EU Culture grant with INA / GRM (F), ZKM (D), Notam (N), Miso Music (P) and EPHMEE / Ionian University (GR), and a separate one for the eLearning site. The EU project, “Composing with Sounds”, will end in April 2013, after twelve concerts in six countries involving twelve professional composers teamed with twelve students (aged 11-14), two pairs per country, and several workshops for teachers have taken place. Whilst the EU team members were working on this, a Higher Education Innovation Fund grant was procured to develop the proof-of-concept for EARS 2. The project is seeking its follow-up funding at the time of this proposal; its team is hopeful that most content will be available online by the time that EMS13 takes place. Some of the tricky coordination involved between the two teams will be discussed, as this is not unique to this project.
This part of the talk will not spend much time dealing with sequential funding, but will instead offer some insight into strategic choices (platforms, how to connect the learning and creative systems with web 2.0 tagging systems and social networking opportunities, earning badges, moving up from level to level as in a computer game, etc.). Demos related to these choices from the EARS 2 site and the CwS platform, including its sound generation and manipulation tools, will take place here, and an excerpt from a student's piece will be shared at this point.
Throughout the development, a rigorous testing programme has been fundamental to the team’s gaining usability feedback. It is just as important that teachers are at ease with this system as it is with young people. To this end, various teaching aids have been and will continue to be prepared (see following section for further detail). This short section of the presentation will summarise test results and identify important actions that were taken.
Like EARS, EARS 2 is never complete. Web resources are dynamic, and development will continue in the future. Content will be added and amended; translations and cultural adaptation will broaden its use. Curricula will be devised for use in regions and nations where appropriate.
CwS will be further developed to allow images or a movie to be placed on a timeline for post-synchronisation. The team is also considering enabling the software to interact with instruments and controllers in the future. Even a tablet version is under consideration. Its final level, level 3, is its end point, as it overlaps heavily with professional-level software; the platform is not intended as an alternative to such tools.
To aid teachers, some semi-commercial support systems are being devised. The author’s latest book, “Making Music with Sounds” (Routledge, 2012), which is directly linked to EARS 2, has been published, and teachers’ packs are being prepared. Online and telephone support systems are being developed as well. In the UK at least, various forms of training for teachers are being organised, introducing them to an area many of them will know little about.
A good deal of work has gone into this initiative, and the development team is of the view that the platform is of potential value outside of the project area. Next year, work will begin on an EARS 2 for Higher Education, as well as on a proof-of-concept in an entirely different field, for example chemistry. This will heighten the impact of the education strategy behind the project (run by eLearning and educational studies specialists). Still, the immediate goal is to open up the world of sound-based music to young people in particular, and ultimately to people of all ages, both in terms of appreciating this type of music and ideally in terms of active participation as well.
The two sites, still in development at the time of this proposal but publicly available from June, will be www.ears2.dmu.ac.uk and www.cws.dmu.ac.uk.
NB: Along with this paper proposal, the steering committee has been asked whether they could programme the four Portuguese pieces (two professional, two children), or at least a selection, from the “Composing with Sounds” project for their concert programme. I have been led to believe that this will be possible.
Liao, Lin-Ni - Musical and Physical Gestures : Auditory and Visual Organization Induced by Traditional Practices of the Far East: Virtual Multiplication, for percussion and electronics by Mei-Fang Lin
Université Paris-Sorbonne (France)
Among the four categories of the use of cultural elements (1: the cultural concept remains at the level of inspiration; 2: the cultural idea is expressed in relation to a theory and philosophy but does not match an identifiable sound; 3: the idea corresponds to a philosophy and is linked with the sound; 4: the idea corresponds to a cultural theory and philosophy which also corresponds with the sound), the search for a recurrent cultural identity is present in any place and at any moment. On the occasion of EMS13, our analysis will look at the musical and cultural construction of sound based on the reappropriation of Far Eastern culture and, more specifically, the practice of qi (氣), the vital force.
The work Virtual Multiplication for percussion and real-time electronics (2005), created at IRCAM by the Taiwanese composer Mei-Fang Lin, transforms her own physical practice of qi into a musical one through traditional Chinese theory. Mei-Fang Lin gained some knowledge of qi by practicing qigong (氣功) and taijiquan (太極拳). These ancient traditions helped her to understand the meaning of qiyun shengdong (氣韻生動), or spirit resonance and life-motion, a master principle that allows an artist to connect to the universe and with the vital force, qi. It also refers to the first of Xie He’s (謝赫) Six Laws (六法) in the art of painting, dating from the Nanqi Dynasty of the sixth century, which states that an art project created by man should always integrate the spirit of humankind and not lose its humanity.
Throughout her body of work, Mei-Fang Lin’s compositional structure results directly from her reading of the Book of Changes and its theory based on sixty-four hexagrams, which stand for sixty-four conditions of life, as well as from her mastery of taijiquan movements. Her own practice of qi is expressed in her music by an intense energetic continuity. Indeed, Mei-Fang Lin organizes and synthesizes these energies, which articulate and unite the diverse sections with the main parts of the work. This sound synthesis also imprints her work with directionality.
Her specific approach to “musical gesture and physical gesture” creates intensity and drama, referring, among other things, to traditional Chinese opera, in which strictly codified gestures lead to an evaluation of time and space. Out of sound morphology flows timbre, which entwines horizontally and vertically with articulation. Putting all these parameters into motion, she draws her inspiration from this tradition in order to transpose the visual into the musical dimension.
To gain the attention of her audience, Mei-Fang Lin brings in quite discernible gestures and thereby tends towards a manner of stage direction that is in many ways a contemporary reflection of traditional Chinese opera. The physical dimension also derives from her professional training as a concert pianist and a conductor. An experienced performer on stage, she hopes to develop this physical aspect by incorporating her auditory experience of electroacoustic composition and the real-time electronic dimension of new technologies.
While Mei-Fang Lin was trained in Western composition technique in the most prestigious institutions abroad, she nonetheless strongly claims an attachment to the culture of the Far East. However, the cultural sources brought to light through the analysis of her music are but one key element in her composition. Just by listening to her music, one cannot identify the cultural origin of the composer; rather, Mei-Fang Lin’s identity is subtly revealed through her approach and the way of thinking that contribute to her creation. It is from her inner self that Mei-Fang Lin works with these sources, incorporating them beyond the extent of classical Chinese musical, cultural and philosophical tradition.
Ever since the unsuccessful attempts to fully formalize the practice of musical composition, composition, particularly when digital technologies are used, has been placed somewhere in a middle zone between the algorithmic and so-called “manual” action (Laske 1981). Yet while we have a fairly precise idea of what an algorithm or a mechanism is, the same cannot be said of the opposite pole. What exactly do we mean by composing “by hand”? Should the expression be taken literally? Judging by the many research efforts aimed at “more intuitive” interfaces by way of graphical representations, it would seem so. Imagine the situation of a composer who, through an unfortunate accident, had lost his hands. Could he no longer compose? Would he have to settle for composing with alphanumeric languages? This deliberately extreme imaginary situation seeks to show that we are facing an old philosophical problem that has settled comfortably (as it has in many other domains) into the most technical affairs of music-making, ultimately leading us down a dead end. Traditionally called “intuition”, its scientific form is that of a mental mechanism of some kind, more or less complex depending on the approach, and still poorly understood today. Yet this shift of the causal paradigm towards, literally, the inside of the composer’s body (be it the heart, the head, or today the hands) in order to explain the musical choices we make raises insurmountable logical difficulties. Bouveresse rightly speaks of the “myth of interiority” (Bouveresse 1976). In other words, understanding the expression “by hand” is not (or not only) a causal problem to be solved but above all a conceptual disorder to be clarified.
We will not be able to answer the question of how we manage to make a correct choice in the absence of a previously defined system of rules (a verification system) unless we undertake a grammatical analysis of ordinary language (Wittgenstein 2005) around musical composition, and in particular of the word “choice”, as well as of “rule” and of what it means to “follow a rule”. This will spare us, among other things, from appealing to occult beings who act, and above all who choose, in our place.
It goes without saying that the question we raise here far exceeds the scope of a conference presentation if it is to receive a satisfactory answer. Our proposal aims only to bring the question into the current debate on music-making, and to present the method we consider most appropriate for addressing it.
Conservatorio “G. Verdi” di Como (Italy)
This paper is an analysis of Anthèmes 2, a piece composed by Pierre Boulez in 1997 for violin and electronics, lasting about 20 minutes. The piece, of the utmost importance as regards the history of composition associated with the use of new technologies, has been the subject of a musical and technological analysis, conducted on the score to identify the strategies and compositional processes put in place, through a parametric-esthesic, object-based mixed methodology aimed at a hermeneutic intervention. The resulting data were then cross-checked against the considerations offered by the composer to the French philosopher and musicologist Peter Szendy at the first performance of the piece (October 21, 1997, IRCAM, Paris): the substantial correlation between these and the analytical evidence has allowed me to build an integrated framework for inferences about the composer’s approach to live electronics, highlighting guidelines (an asymmetrical relationship between acoustic instrument and electronics; monodirectionality of interaction; the demiurgic role of the composer; the use of redundancy within a dramaturgy; a constructive dialogue between a mimetic function, assigned to the electronics, and a diegetic function, assigned to the instrumental materials) and objectives (the targeted manipulation of the psychological mechanisms of reception).
This analytical approach derives from a parametric-esthesic model based on contrastive changes, following the direction indicated by Michel Imberty, according to which the segmentation of a musical piece is structured by the perception of more or less pregnant qualitative changes in the flow of musical time. For a contrastive change to be perceived, the ego must perceive not only the states A and B, but the transition from A to B. The passage constitutes the perceptual reality of the relationship between the parts: B must have a different quality from A. The change introduces a discontinuity in the temporal texture through two possible modes, hierarchy and juxtaposition. Through segmentation carried out in this way, the piece is first described at the level of macro-form, articulating a preliminary paradigmatic observation in which the methodology varies flexibly depending on the object; observation then descends to the micro-form, with the aim of identifying the structural thematic cells and the allocation of roles at the morphosyntactic level. In parallel with the analysis of instrumental materials, the role of the electronics is investigated by comparing its formal course with that of the instrumental materials, looking for meaningful consistencies or differences. This mixed methodology is consistent with Boulez’s statement about the importance of observing a piece according to how it is perceived rather than only how it is built. An investigation focused on the pitch parameter, already fully completed by Goldman in his thesis, will therefore not be performed here. Through the survey of hierarchical relations (syntagmatic axis) and of the horizontal relationships that affect the course of the formal track (paradigmatic axis), qualitative inferences are drawn about the choices made by the composer, integrating analytical observation with a hermeneutic intervention.
This approach follows the one I proposed in the article on Atomi distratti by Mario Garuti, in which, through a perceptual-paradigmatic analysis, I applied a hermeneutic intervention directly to the compositional choices, which thus become intelligible and meaningful. The statements by the composer, in the case of Atomi distratti, were used as guidelines for organizing analytical observation. In this paper, by contrast, the words of Boulez are used as a verification engine and post hoc comparison.
In the light of the findings, it was possible to draw a map of the compositional processes involved in Anthèmes 2:
• multi-layering processes
• formal and object crossfade
• use of specific sound objects as elements of punctuation
• process figure-ground
• simple-complex transition as a strategy of variation
• crystal-organic continuum
• mirror mechanisms in both the macro-form and the micro-form
• decrease as false transition
• interchangeability of roles between the instrument and electronics, relating to the construction of the formal organization and the allocation of structural functions
• functional flexibility
• clear division of roles at the functional level
• mechanism of redundancy (electronics: thickening of linear, angular, point events)
• mimetic / diegetic function
• transposition of the same object on different axes (horizontal reading of pointlike objects)
• targeted modulation of the relationship ambiguity / recognition
These guidelines, which emerged from the analysis, are entirely consistent with what Pierre Boulez stated in an interview in Paris at the first performance of the piece. Through a cross-validation of the data, the following points of congruence emerged between the information gleaned from the analysis (training sets) and the statements of Boulez (validation sets). With regard to the items processes of micro-variation, clear division of roles at the functional level, and targeted modulation of the relationship ambiguity / recognition, also in relation to the findings about the management and targeted manipulation of expectations and of the processes of mnestic retention, these strategies are implemented with process-specific purposes: compositional strategies include managing the element of surprise, the use of processes such as the crystallization of processed instrumental objects, and their connotation in psychological terms (anticipation / recollection). The mechanisms of compression / expansion observed in the macro-form also apply in the micro-form; the processes and mechanisms of transition, mutation, variation, augmentation, reduction, etc. (i.e. all compositional strategies applied to the primary material, described in this analysis in the form of objects) are central, and derive from starting materials understood as undifferentiated elements on which order, size and organization, identity and personality are conferred.
Boulez also explains his approach to live electronics with respect to this work: “...the violinist provides all the material we ask, with all the necessary freedom. There is no forcing on him, no time limitation. In particular, he does not need to worry about synchronization, which could otherwise affect his imaginative contribution. On the contrary, we take what the violin plays in order to draw out something else”.
If we consider live electronics as continuous interaction and interplay between two players, the sound director and the instrumental performer, for whom the piece becomes more and more “the result of a collective work”, in which the interpreter is engaged in “a kind of ideal polyphony with himself, forcing him to become aware not only of the immediately perceptible data but also of the impact of his performative gesture and of the relations between the different levels of sound processing”, then this conception appears, in its unilateral determinism, mechanistic. On the other hand, the complexity of the piece and the difficulty of its execution with the electronic means provided by IRCAM on that occasion could offer some sort of justification for the decision to reduce the live electronics to a one-sided interaction. However, the origin of this choice is, in my opinion, the demiurgic function that Boulez seems to reserve for the composer, and a precise poetic choice that provides for an asymmetric relationship between the traditional acoustic instrument’s performer and the composer / live-electronics performer. The dialogic tension between representation (mimesis) and plot (diegesis) derives directly from classical Aristotelian tragedy, in which a central role was played by Fate, which disposed freely, and according to its whims, of the destiny of the characters. The alternation of interludes and sections, of elements of punctuation and diegetic elements, in both the macro-form and the micro-form (a stylistic feature that pervades and describes the piece), is a compositional strategy whose origin must be sought in ancient forms and, in particular, as stated by Boulez himself, in the Lamentations of Jeremiah, a piece performed by Boulez more than once during his childhood. The decision to assign a predominantly mimetic function to the electronic sound is aimed at creating disorientation in the listener and at concealing the high recognizability of the materials. In fact, where the instrumental materials preponderantly hold a diegetic role, the main purpose of the electronics is mimetic, and it uses the mechanism of redundancy in a systematic way. The reasons for the choice of the electronic means are purely pragmatic, and the traditional instrument remains in the foreground; the role of the electronics is to amplify its mechanical possibilities and, in this way, increase the degree of complexity of the musical objects generated from it: in practice, to turn it into a hyperinstrument. The centrality of the compositional work done on expectations, on the time of reception, and on the psychological mechanisms of expectation, surprise and estrangement is ultimately a strategy related to the psychology of perception: a work on emotions that shows Boulez’s lucid will to return to music its identity as a voice, understood as the musical trace of the passions quae sunt in anima, to use the words chosen by Boethius in his commentary on Aristotle’s De Interpretatione.
Nagoya City University (Japan)
This presentation focuses on two Japanese composers who created pieces of electroacoustic music in Europe before 1970. Their pieces were not introduced in Japan until they returned there. Even though the first Japanese musique concrète had been created in 1953, and the NHK studio for electronic music, inaugurated in 1955, was one of the first electroacoustic music studios in the world, the two composers began their electroacoustic work in foreign countries, and their creative methods differed greatly from those of the Japanese studios.
The two are Akira Tamba and Makoto Shinohara, both of whom are still actively composing. Tamba worked at the GRM (Paris) and Shinohara at the Studio Voor Elektronische Muziek (Utrecht). This presentation is based on research in the GRM archive, which provided several new sources (sound and text) concerning the history of Japanese electroacoustic music. Comparing the GRM sources with Japanese historical texts clarified some alternative phases of the history of Japanese electroacoustic music.
At the GRM, Tamba created “9 Pièces”, “Interlude”, “Morphogrammes 0” and other works for television and film in 1964 and 1965.
Three further works by Tamba are found in the GRM sources. The pieces are not known in Japan; I verified their sounds in cooperation with the GRM. “Enrichissement Sono Drama”, “Le Nô Mus. Orient” and “Plac 30” are thought to be radio programmes. “Synergies” was created with Bernard Mâche for the concert collectif. Shinohara created one piece at the GRM with some cups and bowls, but no information about the piece is found at the GRM. Direct interviews with the composers also testified to the atmosphere in which they embraced the electroacoustic medium.
International cooperation to compare such sources across different countries is now necessary for the historical research of electroacoustic music.
Adrian Moore ; Adam Stansbie ; Stephen Pearse
University of Sheffield (England)
This paper considers some of the numerous relations that hold between acousmatic works and their performances, and proposes a compositional method that responds to the art of sound diffusion. The paper is divided into three main parts. The first part surveys the established paradigm: for some considerable time, sound diffusion has been employed to animate the fixed media work, and it has, in turn, become much more malleable thanks to interfaces and software that are more appropriate to a composer’s needs. Despite this, fixed media works remain relatively inflexible in performance, notwithstanding recent developments in the capabilities of diffusion systems. The second part of the paper introduces one of the authors’ compositions and explains how and why graphics tablets and external controllers were used to trigger pre-configured acousmatic materials (from sound objects to sections). This compositional approach was later described using the term ‘fractured’ (Moore, 2008), and a new method for a link between composition and performance was identified: the meeting of sound diffusion (at its simplest, the invisible loudspeaker) and projection (by contrast, the very visible loudspeaker) provides fertile ground for the exploration of the fractured acousmatic (a work that can be configured but has a significant proportion of pre-composed elements). The third part of this paper abstracts key findings from the fractured acousmatic in performance and proposes a method for sound diffusion with pre-composed materials. We presuppose that composition and performance are intricately linked and that the composer has a view, however simple it may be, of a rendering over a traditional multiple-loudspeaker system in a format that amplifies the content (most notably in terms of power and spectra).
In the proposed method, composition is, broadly speaking, constructed in a similar fashion to the closed acousmatic work, but the presence of fractures may result in alternate pathways being suggested (in a similar way to the controlled aleatoric works of Boulez and Lutoslawski). During the construction process, a ‘what if?’ point is reached; the composer is required to suspend a definite path and consider multiple routes within the compositional structure. For example, a texture, once rendered whole, might be fractured into seams to be re-mixed live. In this instance the performer would be able to control both input levels and output levels, and thus the ‘art’ of performance becomes at once more intricate and demands of the performer some degree of interpretation (over and above the current role of a diffuser, which is normally to amplify and extend, making the images, spaces, atmospheres, textures and gestures inherent in the media explicit in performance, based upon experience).
The composition of fixed media works has always relied upon ‘out of time’ working. It is vital to ensure accuracy where it is needed, especially in cases where what is desired is impossible to achieve in real time (playing a transformation before playing the original). Moreover, the necessary levels of musical complexity present in most acousmatic music on fixed media must be taken into account. In postulating a method it is worthwhile exploring the physicality of a) combining the playback of increasingly smaller segments of a fractured work with b) a more demanding diffusion / projection paradigm. Thus, this paper lays composition against performance and suggests potential avenues for a heightened finished product, namely the fractured elements brought together through multiple routes and a skill-centered art of diffusion and projection.
The concept of divergence in composition suggests interpretation; a bifurcation opens up a new avenue of discovery concerning the composition process. In the layered-texture example, the performer would require control of input levels and output levels and potentially of the routing of each layer through the matrix of in-out linkages. The ‘art’ of performance becomes at once more intricate, with interpretation being closely linked not only to pragmatic aspects of venue, audience and sound diffusion system, but to the ‘idea’ of interpretation (as once suggested by Christian Clozier when describing the sound diffusion system at IMEB). It therefore demands of the composer a score with more concrete instructions to control the degree of interpretation, as there are no guarantees that the performer will be the composer or will have significant understanding or experience to render the best possible outcome. Given that inexperienced sound diffusers of fixed stereo works can often completely misunderstand the simplest practical issues – namely sound intensity, the amplitude being either too low to work the loudspeakers, too high, or too constant – a performance practice of increased difficulty will require some degree of instruction, practice and rigidity.
Multichannel works may also be suited to some form of fracture. However, a vast majority of multichannel works require next to no diffusion or ‘diffusion en masse’ (such as the BEAST multi-8 channel practice). Given the potential complexity in fracturing the work’s flow through time, this research will concentrate on working within a fractured stereo paradigm.
The performance must necessarily reveal some of its methodology and not enable the reconfiguration of a work that could otherwise be fixed as an original (thus allowing for multiple interpretations and potential fixed versions for broadcast). The end result must not only depict a ‘best possible outcome’ but potentially show off the interpretive and performative skill of the live musician. Interfaces that afford greater visualisation in performance tend to sacrifice detail and subtlety in the music for visual effect. Interaction via tablets and faders where global changes may be plotted in rehearsal may still be the best option for performance though larger interfaces (including aspects of object recognition) have enormous potential.
There is relatively little need to discover new technological means of composition. The DAW suffices to produce mixed segments of transformed sound and performance playback (to whatever degree) can be interfaced in MaxMSP or Pd and combined with a diffusion paradigm that is matrixed to multiple loudspeakers and control interface(s). Although many programs for diffusion exist (the M2 diffusion mixer has been in use for ten years at the author’s institution (Moore et al, 2004)) a prototype will be envisaged that handles soundfile triggering, scripting, matrix mixing on input and dynamic matrix output routing, a staging post for a composition in fractured form, ready to be truly interpreted by a performer.
As has been shown by the proliferation of electroacoustic music throughout the last thirty years, the forms, controllers and performance practices are as varied as the music being presented. This research will neither replace nor necessarily advance any one particular method, but serves to enrich the solid practice of sound diffusion of stereo fixed media.
Concordia University (Canada)
My proposal is in the form of a presentation-experiment in the area of “Taxonomy, terminology, and ‘meaningful’ units of music description”. My impetus for choosing this theme, and apparently ignoring the conference theme, is the urgency of this research in the context of developing our own interactive network NESTAR (Network of Exploratory Spaces for Temporal Arts Research - Phase III of an ongoing research programme known to attendees of EMS-05 as the Multimedia Thesaurus). Although our network does not (yet) function in a direct compositional-performance role, it is moving towards a fuller integration of internet and large communities, and can function as a kind of pre-compositional tool - especially in collaborative contexts - so possibly there will be some relevance for those reflecting on other types of networked systems.
In the context of the physical installations of NESTAR, we use terms or descriptors in 3 contexts: as axis labels for three-dimensional grids, as labels for sorting racks & bins, and as descriptors on the database entry for each media clip catalogued. Although there are many overlaps between these sets, there are also differences. Axis labels, used to encourage reflection on a certain aspect of the sonic fragment and hence on perception and salience, are designed to operate in pairs, defining “opposite” ends of a continuum; sorting labels presuppose a much more minimal collection of terms and therefore indicate perceived salience. Now we are developing an online component, which will allow people to search and sort clips, upload their own, and fill in or (non-destructively) edit the database form. The descriptors are therefore functioning increasingly as “tags” in the conventional web sense.
I will give a very brief (mainly pictorial) explanation of NESTAR to explain the context. I will contextualize the main issues within the various strands of the relevant musicological discussions, referring to both specific categories and descriptors (Schaeffer, Chion, Thoresen, Weale, EARS, and MiM) and a few more general perspectives and frameworks articulated in writings and talks by various EMS and Organised Sound contributors over the years: Bossis, Emmerson, Geslin, Landy, Lalitte, Spiegel, Young, etc. I will also compare our categories and terminology to those used in other classification contexts (such as MPEG, FindSounds, Amazon) and explain how the issues overlap significantly with music information retrieval. But how does one reconcile all of these approaches?
A unique feature of the NESTAR setup is the ability to choose one’s own labels; it is thus highly adaptable for conducting one’s own explorations with sophisticated terminology. Alternatively, the project is designed to show sounds simultaneously with still or (silent) moving images in a variety of configurations (e.g. with changes in tempo or visual colour) and therefore allows for the establishment of correspondences and even the designing of visual labels (‘icons’ would be the most extreme example) - hence the origin of the title ‘multimedia thesaurus’ (borrowed from Battier). One of my initial ideas, for instance, was to explore the variety of musical textures, and to establish a refined vocabulary for their categorization, by comparing a sonic texture with a variety of visual clips of texture. This still seems a rich area for exploration, and is very useful for explaining in composition and analysis classes as well as providing a good example of the complexities of crossdisciplinary discourse for those unaware of the problem.
However, since NESTAR is also concerned with cross-disciplinary discourse, it needs to include descriptors that would satisfy both specialists - e.g. electroacoustic composers - and those who are less used to electroacoustics and / or analysis - e.g. all those unfamiliar with the rich libraries of terminologies we are building up in EMS. To this extent, my current presentation also revisits a few ideas presented in the context of a panel at ICMC 2004 (and an Organised Sound article) on the marketing of EA / computer music, after several years of experimentation.
I wish to take advantage of the expertise of EMS members to help us refine these aspects of this project - and to help my own understanding of the issues - so I will explain how we are intending to solicit ongoing help for the website and simultaneously encourage reflection on appropriate tags or keywords for favourite pieces. For this next phase, we would like to (a) provide useful, editable, lists; (b) find appropriate ways to interact with those lists; (c) allow intelligent customization of the database forms so that those who are uploading can easily find the descriptors they want; (d) encourage uploading of better examples of a variety of terms, categories, moods, characteristics, and structures; and (e) provide links to appropriate resources, research centres, etc.
Finally - in what I consider the main part of the presentation, though it will take less than half of the allotted time - I will present a practical exercise. Having previously distributed a questionnaire with various words and boxes, I will ask EMS members to think about the appropriate tags for each one of several very short excerpts from electroacoustic works. No one is obliged to fill out the questionnaire, let alone submit it (one of the objectives of NESTAR is to tap the responses of those who are not inclined to fill out questionnaires), but it is hoped that the exercise will stimulate some reflection, or at least amusement.
A few of the examples will be of musical and visual textures, to illustrate the texture project and test whether my colleagues share similar reactions, and to propose that such visuals could extend our ability to describe textures beyond the existing vocabulary, while serving to test the appropriateness of existing descriptive terms. The other examples played will manifest a variety of structures and content: referential, abstract, complex, sparse, etc., to be associated with one or more specific tags (such as “elegant”, “narrative”, “complex”, “abstract”, “sparse”, “textural” and “musical”) or to trigger other words.
This exercise will hopefully clarify both the fascinating aspects of
this area of study as well as the difficulties that can arise from
multiple usages of the same word by people in different
(sub-)disciplines, cultural differences, listening modes, and the
assumptions that can skew frameworks and questionnaires.
University of Toronto (Canada)
Karlheinz Stockhausen, in his work Solo für Melodieinstrument mit Rückkopplung, Nr. 19 (Solo for Melody Instrument with Feed-back), sought a new conception of form, a ‘memory’ form in which a feedback of musical ideas would interact in real time. The creation of the score itself follows an interactive process whereby the instrumentalist extracts fragments from Stockhausen’s pre-composed musical material and patches them together anew. A performance of Solo incorporates a variable-length tape delay and feedback system that superimposes recorded material and plays it back live. It is this ‘Strukturbildung’ (‘structure formation’ of electronic superimpositions) which will be the focus of analysis. Although Solo appears to be an open-form work, electronic superimpositions manifest structures which function at a macro-formal level, whereas content (and a number of other parameters) shapes form at a micro-formal level. Thus, Solo has a definite fixed form: a structure of electronic superimpositions which Stockhausen systematically conceives and distributes across the six Versions of the work.
I will begin by examining and creating a nomenclature for electronic superimpositions which includes the following terminology and concepts: electronic rests (lack of output from both audio channels), electronic canon structure (electronic playback of live sound in the immediately subsequent period [Stockhausen’s term for a unit of time]), full accumulation (continuous playback of all previous periods), sub-accumulation (playback of a subset of periods), strict interrupted accumulation (alternation of full accumulation and sub-accumulation following a consistent logical pattern), free interrupted accumulation (a less strict version of the latter in which interruptions do not necessarily alternate on successive periods and in which sub-accumulation does not necessarily follow a logical pattern), cyclical canon (a continuous series of electronic canons that cycle within a relatively sparse texture), interrupted cyclical canon (a less strict version of the latter in which not all periods form canons), drones (a period that loops for an entire or partial cycle), structural chordal blocks (the sudden addition of two or more electronic layers to a period subsumed within the process of interrupted accumulation or interrupted cyclical canons), cadential chordal blocks (the sudden addition of two or more electronic layers which serves mainly to mark divisions of formal units), deaccumulation (a reduction of electronic layers), delayed canons (the reintroduction of a period with a delay of a single period into a texture where this period is recognizable), dynamic layer density (continual increase or decrease in the number of electronic layers by a factor of one), and static layer density (the number of layers remains unchanged from one period to the next). Electronic superimpositions form patterns and manifest techniques that evolve across complete and partial cycles (sections). 
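Two of the simplest terms in this nomenclature can be illustrated computationally. The sketch below is a minimal, hypothetical model (the function name and scheme labels are my own, not the author's or Stockhausen's): each period contributes one live layer, and the tape delay feeds back earlier periods according to the chosen superimposition scheme, yielding the layer-density profiles the analysis describes.

```python
# Hypothetical model of layer density in Solo: one live layer per period,
# plus fed-back layers according to the superimposition scheme.

def layer_density(n_periods, scheme):
    """Return the number of audible layers in each period.

    scheme: 'canon'        -> electronic canon: playback of the immediately
                              preceding period only
            'accumulation' -> full accumulation: continuous playback of all
                              previous periods
    """
    densities = []
    for p in range(n_periods):
        live = 1                             # the instrumentalist's live layer
        if scheme == 'canon':
            fed_back = 1 if p > 0 else 0     # one-period tape delay
        elif scheme == 'accumulation':
            fed_back = p                     # all prior periods still sounding
        else:
            raise ValueError(scheme)
        densities.append(live + fed_back)
    return densities

print(layer_density(6, 'canon'))         # [1, 2, 2, 2, 2, 2]
print(layer_density(6, 'accumulation'))  # [1, 2, 3, 4, 5, 6]
```

Sub-accumulation, interrupted accumulation and deaccumulation would correspond to intermediate profiles between these two extremes.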
In an attempt to establish an overall structure of electronic form, I will present a topology of these patterns and techniques which demonstrates a systematic organization of elements.
Stockhausen conceived a precise system to determine and allocate superimpositions: he organizes superimpositions into six groups of six patterns (each group of patterns displays similar characteristics), apparently on the basis of layer density, and disperses these patterns across the six cycles and six Versions of Solo using a system based on mathematics, logic, and arbitrary decisions. Group 5 consists of accumulation structures: the first three patterns display full accumulation and the remaining three display interrupted accumulation; all patterns end with full accumulation. Group 6 patterns also display accumulation structures ending with full accumulation, but the density of overall accumulation is lower than in group 5 and is not systematic. Group 4 patterns reach accumulation of approximately half the total layers; the first two patterns involve two nearly equal points of accumulation and the remaining patterns involve three equal (or nearly equal) points of accumulation. Group 3 patterns accumulate to a point of static density of two layers in the case of the first five patterns and of three layers in the final case. Group 2 patterns all involve a symmetrical accumulation and deaccumulation structure. Finally, group 1 patterns involve an accumulation to a peak point in the middle of the cycle followed by deaccumulation and then accumulation to another lesser point.
Stockhausen’s schematic of layer density patterns explains the usage and allocation of different superimposition patterns across Versions, but it does not provide a meaningful understanding of the functionality of all the patterns and techniques in use. Therefore, I have supplemented Stockhausen’s conception of electronic form with my own analysis of superimposition patterns and techniques in order to elucidate this important functional aspect. Thus, Stockhausen’s sketch of layer density patterns and my own analysis complement each other, creating a vital bridge towards the comprehension of electronic form in Solo.
Although musical analysis of a Version (or Versions) of Solo is by no means capable of providing an exhaustive understanding of form and content, it does yield insight into the multilayered processes at play. Musical content affects form to varying degrees, ranging from negligible to significant; however, in no instances does musical content define form to the degree which electronic superimpositions do. In fact, Stockhausen, in his choice of musical content, seems to select material that supports and complements the predetermined framework of electronic superimpositions. Thus, electronic superimpositions establish the foundations of structure and create form at a macro-level, while micro-formal elements carry out processes of subtraction and variation, shaping, but not undermining, the structural paradigm of superimpositions and imparting a uniqueness to Versions.
Stockhausen systematically allocates a set of logically and musically conceived superimposition patterns across Versions, and these patterns, along with a range of superimposition techniques, generate the subdivisions of form within Solo. Complete cycle superimpositions patterns, which include accumulation, cyclical canons, and drones, formally define cycles; cadential chordal blocks and electronic rests punctuate these formal boundaries, while both structural and cadential chordal blocks carry out the function of densely recapitulating material; and superimposition techniques, including partial cycle superimposition patterns, deaccumulation, static layer density, delayed canons, and various non-recurring techniques, act to unify cycles and delineate further subdivisions of form.
Stockhausen abandons the traditional
exposition/development/recapitulation paradigm for a new conception of
form, a ‘memory’ form involving an interaction of acoustic and
electronic feedback. Solo could be considered thematically
non-developmental, but I contend that Stockhausen achieves a different
type of development: a development through structure, texture and
diffusion which amalgamates these traditional elements of form, thus
creating a continuous, temporally displaced exposition / development /
recapitulation structure. Stockhausen strove for, and achieved,
‘something new’ in the composition of Solo; although his original
intentions underwent a transformation in which the idea of a ‘structure
formation’ takes on a new meaning, the kernel of Stockhausen’s idea
persists in the manifestation of electronic superimpositions. Today,
Solo occupies a seminal position in the repertoire of live electronic music.
Concordia University (Canada)
“The perceived world is the always presupposed foundation of all rationality, all value and all existence. This thesis does not destroy either rationality or the absolute. It only tries to bring them down to earth.” The renowned 20th-century French philosopher Maurice Merleau-Ponty aptly and succinctly sums up the main objective of his philosophical expositions as expounded in his Phenomenology of Perception. Much more so than in the field of science, the value of body-sensory perception is the incontrovertible modus operandi in the arts. For the musical arts in particular, where it occupies the central role of praxis from creation and production to the eventual stage of reception, the phenomenon of perception involving the bodily senses every so often traverses the conventional boundaries of the merely auditory-cognitive.
Although, in its very nature, music can never truly be said or known to exist without transmutation via the mechanics of bodily perception, over its historical development the process of its creation had, at some point, substantially veered off the course in which sensory perception holds the central locus, and ventured deep into the wilderness of the Cartesian “cogito”.
An infamous period spanning the 1950s to 1960s saw western music reaching its pinnacle in abstract cognitive conception, tracing a very faint link to the realm of auditory perception. Composers brought forth the relentless ideal of creating music wholly by virtue of deconstructively and mathematically manipulating the abstract signs and symbols that constitute a set of “instructions” to produce the music intended to be perceived via the auditory system. There even came a point when a piece of music was written solely to be read (consumed directly and silently in the mind), rather than to be performed and heard. This may be attributed to the fact that the body-sensory experience of the phenomenon of music can be sufficiently repeated and observed, repackaged to be stored in memory only to be retrieved, unpackaged and virtually reconstructed in the imaginary part of the human mind. This very much explains the natural talents of musicians who could launch into spontaneous acts of music creation without resorting to relearning the sounds and mechanics of an instrument from scratch every time they are inspired to create. It is this ability of the human mind – the Cartesian “cogito”, to create and build with an abstraction of our body-sensory experiences prior to unleashing them again into the physical medium of the real world that underlies the vast majority of our acts of music creation.
Subsequently music managed to find its way into ever-intriguing and diverse modes of creation, expression and sound worlds, including one that is more grounded in the deep exploration of spontaneous creative acts in response to live, transitory auditory and visual sensory stimuli – musical improvisation. Even so, ostensibly improvisatory acts such as jazz may never truly claim to have entirely abandoned the confines of the cogito, as jazz musicians still need to prime themselves around a predefined sketch of melodic essence and harmonic scaffolds in advance of the actual improvisation. We see the ability of music to build on a powerful dialectic between varying degrees of engagement of body-sensory perception and the cogito. Instead of pitting the significance of body-sensory perception against the cogito in a diametrical stance, it would be more appropriate to frame them within an uninterrupted continuum in which the ultra-cogito and the uninhibited improvisatory forms (those that take body-sensory perception as their primary mode of engagement) find their places at the two extremes of the vast spectrum.
Yet the cogito-body-sensory continuum is but one possible ontological dimension of music. The story being told by the cogito-body-sensory dialectic would have no consequence in itself without an audience, and, more critically, without a means of existing or being perceived in another of music’s irrefutable forms: the form that makes possible our primeval experience of its existence and beauty – music as “organized sound”. On the ontological plane of sound, it is entirely legitimate to view various kinds of music simply as differentiated organizations of sounds. Hence an ontological continuum of sound can additionally be layered onto this story of music. Now that two ontological continua have found themselves linked at the pivot of music, an approximate coordinate in the cogito-body-sensory space can, in effect, manifest itself in the corresponding sound space. To put it simply, it is the basic premise for music to manifest itself differently in sound depending on how it is conceived – be it meticulously conceived and assembled within the confines of the cogito or conjured from the firmament of live improvisation.
A third ontological dimension to music essentially forms the missing
link between the conception of music and the eventual production of
musical sounds – the musical instrument, or to circumvent the
terminological complexity of technological interface control of
sounding materials, this third dimension shall be conveniently referred
to as the “musical interface”. This dimension highlights the intricate
relationship between a certain kind of sound produced and perceived,
the way it is conceived and the medium it is conceived on and conceived
for. The complication of a three-fold ontological musical universe
presents a set of its own challenges in the context of mixed-media
responsive or interactive performance or participatory environments
aimed to explore the phenomenology of perception along Merleau-Ponty’s
proposition. In some of these phenomenological explorations, its
inhabitants are encouraged to dwell in a continuous and prolonged state
of movements and play on interacting with sonic media events. Part of
the most intriguing aspect of this phenomenological engagement draws an
analogy of an inhabitant immersed in an elusive act of organizing sound
materials by fumbling around for a constantly morphing structure of a
“musical interface”. Somewhere along the line of events, bodily
movements begin to take on a mysterious life of their own. When examined
under the three-fold ontological field of music, granular strands of
sonic media substrates would occupy a specific locality in that
ontological continuum. But thanks to the continuous transfiguration of
the “musical interface” within a circumscribed limit, the range of
modulated gestures invoked already speaks of the potential of even
richer palettes. One can only imagine what more interesting
idioms of gestures and movements would emerge if the sonic media
substrate can be cajoled into venturing to other designations on the
vast musical ontological continuum. Would it be defeatist even if the
type of sonic materials were to retreat to the near-cogito region of the
musical ontological space against the philosophical backdrop and origin
of such responsive environments emphasizing body-sensory perception?
One of the great paradoxes of Merleau-Ponty’s earnest appeal for a
return to the relative merits of sensorial perception lies in the
condition and sense of time: there was once a time, at the dawn of human
consciousness, when the body was man’s primary tool for perception, and
yet the much-touted return to sensorial perception – a round trip
through the Cartesian cogito – can only be made possible and meaningful
if the journey begins from cogito territory.
Would it be necessary to reenact the conditions of this paradox in such
responsive environments in order to do justice to the phenomenological
experience that strives to enlighten in the significance of bodily
perception? If so, would that necessitate a richer configuration of the
sonic media substrate to accommodate organized sounds spanning the
cogito-body-sensory ontological space? Would it be a good idea to
introduce more established forms of musical interfaces with newly
evolving ones? What are the possible scenarios of affective disposition
on the participants should more familiar musical interfaces be made
available at their disposal? And how should sonic materials conceived
in the depth of the cogito (without any association to movement-based
musical interface) be activated into a kind of manifestation that
interacts with bodily gestures and movements of the participants?
Norwegian University of Science and Technology (Norway)
Ode to Light is a landmark work in the history of electronic art. At its inauguration ceremony in August 1968, it was believed to be the first time electronic sensors were used to influence the audio output of a sound installation. It was an interdisciplinary project between artist, composer and technologists. The structure itself was built by the renowned abstract sculptor Arnold Haukeland (1920-1983), and had the form of a 20-meter-high monument reminiscent of two giant hands reaching up to the sky through a cloud of shining stainless steel. Twenty-six speakers installed in the “hands” and the “metal cloud” played electronic music by the composer Arne Nordheim (1931-2010). Finally, the acoustics group at the Norwegian Institute of Technology had, under the direction of Nordheim, conceived an intricate electronic logic unit, a “Music Machine,” to control the sound diffusion. The principle governing the interactivity was simple and elegant. A photocell recorded light intensity, which then controlled the reproduction speed of a complex pattern of sound levels. Thus, the stronger the light, the more intense the sonic activity in the sculpture. In this way, the sun could be seen as “conducting” the work, reflecting the ever-changing light conditions.
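The governing principle just described can be sketched as a simple mapping; the function name, normalization, and scaling parameters below are illustrative assumptions, not details documented for the original Music Machine.

```python
# Hypothetical sketch of the Music Machine principle: a photocell reading
# scales the reproduction speed of the pre-composed pattern of sound levels.

def reproduction_speed(light_intensity, base_speed=1.0, sensitivity=2.0):
    """Map a normalized light reading (0.0 = dark, 1.0 = full sun) to a
    playback-speed multiplier: the stronger the light, the more intense
    the sonic activity in the sculpture."""
    light = max(0.0, min(1.0, light_intensity))  # clamp the sensor reading
    return base_speed * (1.0 + sensitivity * light)

print(reproduction_speed(0.0))  # 1.0 -> overcast: the pattern unfolds slowly
print(reproduction_speed(1.0))  # 3.0 -> full sun: maximum sonic activity
```

The monotonic light-to-speed mapping is the essential point; the actual analogue circuit would have realized it continuously rather than as discrete readings.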
In 1995 the worn-down sound system was rebuilt. The audio material was digitized and brought up to date with Nordheim’s then-current style of electronic music. The result was a quite different work. For a long time the original analogue tapes were thought to be lost. In 2011 the author discovered a copy of the tape in a basement at NTNU. The discovery has now made it possible to perform an analysis of the original sound work.
The two versions are rather different from each other, sonically speaking. Where the first version of the sculpture reflects innovations in the electronic analogue techniques of the 1960s, the 1995 version of the sculpture is made in the wake of digital sound installations, based around personal computers and samplers. The core set of artistic ideas is the same, but the technical implementation and sound aesthetic is quite different.
In this paper I will discuss Ode to
Light both as a cultural-historical product and an aesthetical
object. Ode to Light
is a work that reflects many of the changes in aesthetic thinking in
the mid 20th century; the change in emphasis from beauty to
truthfulness, from producing works to producing aesthetic situations,
from art as accomplishment to art as
aesthetic experience (La
Motte-Haber 1999). In order to understand the work we need to consider
both its exogenous aspects and endogenous parameters, and give a
general account of the aesthetic character of the work. My focus is
only on the aural part of the sculpture. An analysis of the physical
sculpture can be found elsewhere (Aamold 1992).
For the aural analysis, I will first look into the interactive aspects of the work, and secondly perform a deep listening of the actual sonic content. I will pay special attention to the particularities of analysing electronic music: for instance, what are the sound sources, which kinds of sound synthesis or morphologies are used, and what is the intended listening behaviour (as proposed by, e.g., Emmerson and Landy 2012). I am particularly interested in the construction of the sonic “objects,” and how these reflect the artist’s visions.
Two storylines are central to the exogenous aspect of the work. The first deals with the role and vision of an unlikely patron, the blind musician and successful composer of sentimental pop songs Erling Stordahl. Ode to Light was commissioned to be the centrepiece in a flower park on Stordahl’s family farm in Skjeberg, a rural area about an hour from Oslo. The park was part of a philanthropic centre, a garden of senses where the blind could experience nature and culture by touch, smell and sound. Ode to Light reflected Stordahl’s thematic vision for the park: the everlasting struggle between darkness and light, where light would ultimately prevail. A key idea in his vision was that the sculpture should be “painted aurally” with sound, so that it could also be experienced by the non-seeing.
The second story concerns Ode to Light as a vital work in the oeuvre of the composer Arne Nordheim. Since 1960, Nordheim had been an electronic music pioneer in Norway. By the mid 1960s, he felt he had exhausted the limited resources of the small studio at the Norwegian Broadcasting Corporation, and was looking for opportunities to create something more advanced in one of the major European studios. The generous budget provided for Ode to Light made it possible for Nordheim to travel to Studio Experymentalne in Warsaw, at the time one of the most vibrant centres of electronic music in Europe. This was the first of several projects Nordheim undertook in the Polish studio, and the works that he conceived in Studio Experymentalne between 1967 and 1972 make up the most vital part of Nordheim’s electronic output.
As these storylines indicate, the work consists of several “layers” of aesthetic ideas. The “outer layer” is concerned with innovative electronic interactivity and the integration of sound in the sculpture. But there is also an “inner layer” that addresses the actual sound production and the development of Nordheim’s electronic aesthetics. An interesting observation is that in the publicity surrounding the work, only the outer layer was actively communicated. This was done in order to attract publicity and to guide the spectators approaching the work. The inner layer, though fundamental to the aesthetic character, was not communicated publicly. What happened in the studio stayed in the studio. It is my claim that this has led to only a partial understanding of the work. No previous report on the work has taken the actual sound into consideration.
Looking into the sound work, we recognize two central concepts: synaesthesia and infinite form. Nordheim has indicated that he wanted to make aural abstractions of the material used in the sculpture, black painted iron and stainless steel. The source material is both electronic and concrete, ranging from the recognizable to the unrecognizable. Listening closely, one can recognize metal beating upon metal and other “hard textured” recordings. The sounds often have a hard and metallic or bright and fluttering character. Typical 1960s transformation techniques, like speed adjustment (often 4 times faster than recorded speed), ring modulation and filtering, are used extensively.
The use of infinite form is frequent in sound works conceived for exhibition areas. The goal is to have a work that is constantly changing, yet also always the same. Nordheim achieved this by organizing his material into 18 short parts, lasting between 50 seconds and three minutes. Each part had its own distinct character. The parts were recorded on two “endless tape loop” cassettes that were set to run slightly out of sync with each other. The sounds from the two tapes were mixed together into one signal, but the uneven length of the tapes would assure that only rarely would two sounds occur at the same place twice. In addition, an element of indeterminacy was achieved by feeding light data into the Music Machine. This would assure that exactly the same sound image would never occur twice.
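The arithmetic behind the two out-of-sync loops is worth making explicit: the combined texture only realigns after the least common multiple of the two loop durations. The durations below are illustrative assumptions chosen from the stated 50-second-to-three-minute range, not the actual lengths of Nordheim's cassettes.

```python
# Why two free-running "endless" loops of unequal length rarely align:
# the mixed texture repeats exactly only after the least common multiple
# of the two loop durations.

from math import gcd

def recurrence_time(loop_a_seconds, loop_b_seconds):
    """Seconds until the two free-running loops realign exactly
    (lcm computed via the gcd identity: lcm(a, b) = a * b / gcd(a, b))."""
    return loop_a_seconds * loop_b_seconds // gcd(loop_a_seconds, loop_b_seconds)

# e.g. two hypothetical loop cassettes of 50 s and 170 s:
print(recurrence_time(50, 170))       # 850 seconds before exact realignment
print(recurrence_time(50, 170) / 60)  # just over 14 minutes
```

The longer and more coprime the loop lengths, the longer the recurrence period, which is precisely the "constantly changing, yet always the same" effect the installation aimed for.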
Over the last 40 years Ode to Light
has been emanating its electronic sounds over the verdant cornfields of
Stordahl’s farm, “singing” with the light. A full experience of the
work cannot be had except on the site of the sculpture. The flower
garden where it resides and the landscape it cuts itself out of are
integrated parts of the artwork. The work is neither a piece of
electronic music nor a sculpture, but something that embodies the
characters of both. It is a Gesamtkunstwerk, where the physical and
aural qualities are interwoven. And after the discovery of the old
tapes, we can now for the first time also go in depth on the sonic
side of the original work.
Universidade Nova de Lisboa (Portugal)
We will begin our presentation by briefly outlining the analytical paradigms adopted in approaching these four pieces by François Bayle, supporting our choices with the specialized literature on the subject. We will show in particular how the production of acousmographies is a highly pertinent tool in the study of acousmatic pieces, in the sense that these acousmographies act as an intermediary in the analytical process. We will then present the works studied and, for each, offer comments on their structure and sonic content. These comments will be made in light of the theories developed by François Bayle, of his conception of the acousmatic, of his idea of the sound-image, and will be abundantly supported by graphic and auditory examples.
The practice of analysing acousmatic works raises, by the very nature of these works, particular problems. Indeed, acousmatic music, generally lacking a prescriptive symbolic representation, requires analytical approaches founded on other paradigms. Analysis is thus normally carried out from the sound itself, through analytical practices based either on auditory perception or on the study of the sound signal as a physical phenomenon.
In the case that concerns us here, we will base our analyses essentially on auditory perception. To do so, and because graphic representations are quite profitable tools in the practice of musical analysis, we have produced listening scores for each of these pieces. These listening scores, realized by means of the Acousmographe, are the graphic support, the visual representation of the sound events, the sound-images, the musical structures that constitute the work. They are at once the result of intentional auditory perception work and the graphic support that will sustain auditory perception during the process of analysing the works.
It is thus starting from the perceptual study of the sound-images present in these works, assisted by the acousmographies produced, that we will offer some analytical comments on each of the four pieces. We will approach their structures, exposing both the organization of the sound-images and figures that compose each individual piece and the sound strata that determine them. We will observe the transfer of sound-images and figures from one piece to another, as well as the development of certain elements that offer the listener acoustic experiences at times surprising, at times serene.
L’Expérience Acoustique is a cycle of 14 pieces, grouped into five parts, composed between 1969 and 1972. For Bayle it was “[…] much more than a work of composition tied to a moment of an era. It became at once a project and a philosophy.”
Reflecting on the music of François Bayle, we think of images, of sound-images, evocations that orient listening, indices, ambiences, referents, morphologies, figures, saliences... “[…] iconic indices: sound clichés, pictorial or identifiable behaviours […]: water - murmuring - the cry - the bird that speaks, that laughs - speech - the wind… [and] […] metaphorical indices: non-sonic situations, spatialities, temporalities […]: the night - the stage and its curtain - transparency - coloured air…”. Indeed, it is according to Bayle because of these images that “[…] music ‘has a meaning’: it contains and describes ideas […]”.
It is thus with sound-images that Bayle takes us into “[…] a testing, of things and of ourselves who observe them, and the test is always a test of limits - limits of sonority […]”.
Remarks by François Bayle, recorded in 1992, clearly explained the auditory sensations the listener perceives when hearing the various numbers of the work: “L’Expérience Acoustique proposes a journey through states of consciousness and ways of working in what I call the acousmatic modality: from the cry to silence, from accent to weave, from texture to text.” Marc Favre commented in 1981 that, for Bayle, this “[…] experience covered more ground than acousmatic science, and that his work was a sum of simultaneous experiences: personal experience of everyday sound things, historical experience of musical research, and finally human experience.”
Indeed, an attentive listener will find in L’Expérience Acoustique emotions ranging from the almost primitive, as in L’aventure du cri, to others more reflective, almost philosophical, as in L’épreuve par le son.
From this sonic experience, at times bewitching, at times frightening;
at times concrete, at times metaphysical, we will treat here four
parts: … L’inconscient de la
forme ; Match nul ; Métaphore ; Métaphore / lignes et points.
These pieces, numbers 1, 2, 5 and 6 of the cycle, prove at once very different and quite similar: different in their evocations, their listening sensations, in the acoustic experiences they put us through; different in their sonorities, their textures, their structures. At the same time they resemble and draw close to one another: marked figures traverse them, at times frugally, at times indiscreetly; these figures pass from one piece to another, engendering a coherence, a unity, as if declaring themselves the very foundation of this experience.
Queen’s University Belfast (Northern Ireland)
This presentation will discuss the use of visual communication through graphic notations in composition as a way of bridging gaps in verbal communication in networked electroacoustic and live music performances. I argue for the modular nature of visual communication as it is applied to composition for networked performances.
Networked (also termed telematic, distributed, or co-located) performances by definition link geographically dispersed locations. Each location holds combinations of performers, composers, listeners, or instruments, but these distributed forces cannot collaborate through the network without also using a distributed set of instructions that represents the core performance framework. In the case of networked music performance, this set of instructions forms the score. The common task of creative collaboration brings together the co-located groups, and their shared communication provides a network of meaning within the network of technology. As Franziska Schroeder describes: “a network intends to join groups of dislocated interests and expressions.” She also notes, however, the functional dis-unity and fragmentation that is inherent in the networked performance model. (Schroeder, 2009) The network is no longer theorised as a diagram of paths and nodes, but instead as a dynamic space that is evolving, connecting, and disconnecting. (Munster & Lovink, 2005) Not only is the score disseminated through the network, but its ideas are also changed and developed by the network and its nodes.
In the case of multi-regional or multi-national distribution of networked performances, the transmission of the score throughout the network encounters an obstacle: differences in cross-cultural communication. How can a performance direction in the score be communicated to a performer who must not only understand it but also respond creatively, when that performer may not share linguistic or cultural knowledge with the composer? This performance direction must also make allowances for the lack of networked rehearsal time and for the latency (both processing and transmission delay) of the network. (Braasch, 2009) All of these factors within the network environment serve to fracture the network and dis-connect the score.
In this paper I argue that networked performance scores show a trend toward visual rather than verbal communication, and that one reason for this trend is a need to adapt communication customs to the environmental factors inherent in distributed performances. As Jonas Braasch notes, when discussing the development of telecommunication and the contradictions of what he terms “the telematic environment”, only by establishing and developing a communication language within its environment is it “possible to optimise the signals for the given acoustic environment.” (Braasch, 2009)
Visual communication is the “use of images as messages”. Images, or icons, have been used for decades by the transportation, health & safety, and consumer technology sectors for succinct communication of a function or idea across linguistic and cultural boundaries. This communication practice has become pervasive as “icons” are employed extensively in the user interfaces of digital technology. An example case is the iconic green “Exit” indicator in the form of a running human. This pictogram, designed in 1982 by Yukio Ota, indicates an emergency exit. It contains several layers of visual signs, all of which serve to convey the message “emergency exit”: the running human figure, the doorframe space, the arrow, the bright green warning colour. Written language is replaced by pictorial means of conveying messages, in order to improve the information content and precision of communication. (Ellis and Kaiser, 1991 p. ix) At the core of many publicly visible pictograms there is re-combination and modular arrangement of icon components. As in the example of the “Exit” indicator, each icon component has been chosen and placed in order to form a composite whole. This modularity of visual communication is central to the compositional process of network performance scores like Jason Freeman’s Graph Theory (2005) and Pedro Rebelo’s Disparate Bodies (2008). Each visual component of the score becomes itself an object that may be manipulated as part of the network performance creation. The ‘triarchy’ of creative modularity outlined by Angela Buscalioni (the modular entity, the modules, and the model of the interactions between the modules) neatly describes the score, its visual components, and their transformation by the networked performance. (Callebaut, 2005) This modularity enables inter- and intra-site collaboration, and flexibility in interpretation at each site.
The network of participants and technology directs the growth of the composition, and itself evolves through the participants’ responses to visual communication.
However, visual communication has its own limitations. The interpretation of an image, if a select range of communication outcomes is desired, relies on the reader understanding the context and cultural basis of the image and its modules. (Baldwin & Roberts, 2006) While directing precise outcomes may not be a desired, or even possible, function of scores in networked performances, message coherence and a degree of performance control are necessary. For visual communication, and thus scores, to be effective in this environment, they must be accompanied by some cultural or physical context, or by textual content, as an anchor. Compositional use of communication through images in a visual score does not guarantee that the meaning of the images will be conveyed across cultural boundaries. The information value of each image module is related to the amount and direction of the context anchoring, which may take the form of text, other images, or shared cultural allusions. Message distortion may occur as an accepted side effect of modular visual composition, or may become a foundation for purposeful exploration. By remaining conscious of the factors inherent in visual communication - its modular nature and its reliance on context - composers may reflect on the affordances present in their own work and inform their future work. Purposeful use of icons in visually communicated scores continues to enable the democratisation of networks, reaching across boundaries of nation, tradition, and language. Through an examination of the modularity of communication through images in visual scores, this paper will explore the connections and fractures of network performance. These factors will also provide a springboard for discussion of the implications for composition in networked performance environments.
Norwegian Center for Technology in Music and the Arts (Norway)
This presentation builds on the basic idea of acoustic ecology: that sound can be understood in terms of how it regulates and is regulated by social contexts, or as a description of the relationship between human beings and their environments. This is a quite inclusive approach, focusing on both sound sources and the macro-qualities of sounds such as density, amplitude, site-characteristic concentration, and so on. The field emerged in the late 1960s, based largely on environmental concerns, and in his founding texts on the sonic domain, Murray Schafer adds a practical dimension to soundscape construction and analysis, pointing towards positive action for reducing undesirable elements and adding sound that would both empower listeners and bring them into better contact with their environment.
Grounded in this understanding of acoustic ecology, its academic practice often builds on notions of nature as a substance that essentially excludes human activity in any aspect other than as a source of unwanted noise. Both philosophically and politically this is an untenable position, and acoustic ecology’s limitation to traditional environmentalism paradoxically reduces its potential for furthering the understanding of modern, human-made soundscapes. Firstly, the idea of nature as the true source of harmony, moral balance and high ethics is no more than approximately 250 years old. And notably, this romanticism grew from the challenges posed by urban growth and the emerging problems of pollution, exploitation and industrial development. Romanticism can be seen as a retreat or escape from the reality of everyday life, in which “nature” was caricatured as recreational, and no longer dangerous. Phrased differently, this notion of nature emerged as part of our cultural and economic history, and it is quite a leap to elevate it to a universal principle.
The validity of this viewpoint is also debatable from the perspective of whether nature is best seen as a stable substance of sorts, or as an unstable, dynamic process - Darwin, for example, strongly emphasized in the closing paragraph of On the Origin of Species that change rather than stability was the natural condition. Another question is whether it is reasonable to separate humans from nature - whether it is possible to abstract human activity in such a way that it is removed from nature. It is quite a challenge to say exactly what in human nature and actions is unnatural - in an acoustic perspective, is it that our voices are heard, or is it that we are capable of affecting our environment sonically by other means? This is, after all, common animal behavior. The logic also leads to a political impossibility - that man’s presence and activity should be separated from nature, in a situation where the human species, its physical and intellectual constructions and actions, influences nearly all life-forms on the planet. This point is much discussed by, for example, the theoreticians Slavoj Zizek and Timothy Morton, who also maintain that by adopting a more inclusive view of ecology, a new basis for activism can be created.
This presentation posits that human activity is just as natural as that of other species, and that our sonic ‘emissions’ can best be understood in terms of their function in human social contexts, and as deliberate and non-deliberate results of human activities. An understanding of soundscapes thus depends on the underlying logic of human action and interaction, and a case is made for broadening the perspectives of soundscape analysis to include analyses of the social contexts that sounds are part of. Sounds are rarely emitted without cause, and the underlying logic cannot be understood when the focus is on spectral densities and amplitudes, or on whether the sounds are wanted or unwanted by the analyst or his subjects. The usefulness of numerical methods and of different weightings of acoustic measurements is thus limited. This is not to say that these methods are useless, but for a fundamental understanding of human-made soundscapes they are insufficient.
In order to arrive at a better understanding of soundscapes, methodology from the social sciences and sound studies is proposed. Here, social values, concerns and contexts explain the nature of the soundscapes, and correlations between actions and events can be identified and understood. On this basis, the possibilities for understanding actions and sound emissions will be significantly improved; and since sounds are rarely emitted for their own sake, they are understood as the results of actions that have social significance.
The critical reconsideration of focus and key terminology in acoustic ecology described above, together with a broadening of the analytical perspective, unlocks an understanding of soundscapes as social constructions; this is a precondition for releasing the critical potential that is already embedded in the term acoustic ecology.
Institute of Electronic Music and Acoustics Graz (Austria)
This paper approaches understandings of interactivity in the context of audio augmented environments. But what is an interactive audio augmented environment? What is special about it? Why look at it when thinking about interactivity?
The term “audio augmented environment” is not (yet) widely used nor well defined, but it combines several reference terms which are. The environment, to start with, developed as an art form in the late 1950s in close relation with the happening, known from Fluxus art, and in the tradition of Dadaism and Surrealism. It is characterised by incorporating the context of a piece of art into the artefact itself. This may be manifested in a spatial relationship between an installation and the exhibition space, but also in a more abstract reference to, e.g., social or economic processes. More recently, in the context of music and sound art, environments denote the quality of being enterable, often in conjunction with a potential (auditive) immersion. Therefore, environments usually draw boundaries, implicitly or explicitly, of clear or ambiguous kind, within the piece of art - there is something inner and something outer, but both are part of the environment.
Augmented environment hints at the term augmented reality, which in turn refers to virtual reality. Here, “augmented” means an extension to the perception of a pre-existing reality, e.g. an “overlay” to the visual or another sense, in contrast to a complete replacement of the real stimuli by “virtual” ones. Audio augmented indicates that this overlay is taking place in the auditive realm.
Since there is always a pre-existing auditive domain, any additional acoustic utterance is such an overlay, without necessarily constituting an environment. I therefore propose to introduce the intentionality of the overlay as a further condition - or, in other words, the recognition of the pre-existing as something that has to remain, something that reappears as the “inner outer” of the environment. One can state that the characteristics of an environment, intentionally incorporating the context of a piece of art, are partially fulfilled by the augmented auditive; it therefore includes a spatial dimension.
These somewhat vague attempts at definition may underline that the term audio augmented environment is understood here not as a genre or form, but rather as a means of reflection that stretches across several established genres. Certain kinds of sound installations may be specified by this term, especially those dealing with public space (as opposed to merely taking place there), but also those that unfold by means of mediated acoustics such as binaural technology. In such an extended understanding, even a performance of John Cage’s 4:33 would mark an extreme form of an audio augmented environment, insofar as the inner is transferred to the imaginary domain.
The aim of pursuing this understanding of audio augmented environments is to establish a vehicle for looking at the phenomenon of interactivity from a less genre-specific point of view. Interactivity is not regarded here in its full generality (which seems hardly possible at all), but rather closely tied to its forms of appearance in certain kinds of music and sound art. Potentially, some characteristics of interactivity exposed in this context might be applied to other art forms as well.
A widespread understanding of interactivity, especially in audio augmented environments, is its notion as a human-machine interaction. It therefore assumes the presence of technological means, i.e., an interface to the machine. In electroacoustic music, interaction is said to take place if a (human) performer of a piece influences the machine’s contribution. In many cases this means e.g. triggering the different cues by a switch, controlling a score follower by the instrumental performance itself or influencing certain degrees of freedom of a sound synthesis or a generative process by means of sensors or a tracking system. In binaural augmented environments, a tracking system may be used to render a consistent virtual image of an auditive scene, i.e., to compensate for the listener’s movements, but these may also trigger events at the same time, let the listener enter or leave virtual zones or enable him to interact with sounding entities, both spatially and sonically.
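As a toy illustration of the last point (a tracked listener entering and leaving virtual zones), the following sketch derives enter/leave events from a sequence of listener positions. It is a minimal, hypothetical model: the Zone class, the coordinates and the zone name are invented for illustration and do not correspond to any particular tracking system's API.

```python
# Minimal sketch: zone-based triggering from tracked listener positions.
# Assumes a tracking system reports (x, y) coordinates in metres.
# Zone and the event logic are illustrative, not any specific system's API.

from dataclasses import dataclass
import math

@dataclass
class Zone:
    name: str
    cx: float      # zone centre x (m)
    cy: float      # zone centre y (m)
    radius: float  # trigger radius (m)

    def contains(self, x: float, y: float) -> bool:
        return math.hypot(x - self.cx, y - self.cy) <= self.radius

def zone_events(positions, zones):
    """Yield (zone name, 'enter'/'leave') events as the listener moves."""
    inside = {z.name: False for z in zones}
    events = []
    for x, y in positions:
        for z in zones:
            now = z.contains(x, y)
            if now and not inside[z.name]:
                events.append((z.name, "enter"))
            elif not now and inside[z.name]:
                events.append((z.name, "leave"))
            inside[z.name] = now
    return events

# A listener walking in a straight line through one virtual sound zone:
path = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (4.0, 0.0)]
zones = [Zone("bird-chorus", 2.0, 0.0, 1.0)]
print(zone_events(path, zones))  # [('bird-chorus', 'enter'), ('bird-chorus', 'leave')]
```

In a real binaural environment the same position stream would simultaneously feed the compensating renderer and this kind of event logic, which is exactly the dual role described above.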
Taking a closer look from an æsthetical point of view, there seem to be at least two fundamentally different modes of interaction in this sense: those which support the perceived integrity of an artwork, such as a score follower or the above-mentioned tracking system compensating for the listener’s movements, and those whose effects are recognisable as the performer’s or recipient’s additional artistic contribution to the work. In the latter case, the interactive involvement of the recipient acquires a participative quality. This apparent fundamental difference raises the question of whether the technology-oriented notion of interactivity is helpful at all, at least in the realm of musicology.
Taking into account definitions of interactivity from sociology and communication theory, the reflection becomes even more difficult. There, interaction would mean a certain freedom of acting or reacting out of several alternatives. In terms of communication theory, an interaction would be formed by responses to not only one, but several received messages and their interrelations. Many interactive systems used in contemporary electroacoustic music do not satisfy these definitions, if a “freedom of acting” may be attributed to a deterministic machine at all. Therefore, simpler forms of human-machine communication have been called reactive instead of interactive. Still, these terms denote technical and not æsthetical processes.
Could it be helpful to define the opposite of interactivity? Would it be sufficiently described as the absence of technical means to influence or to participate? And how should we deal with those interactions that are not recognisable as the listener’s contributions, or are not even noticeable, either by the listener himself or by others?
In the late 1990s, Austrian philosopher Robert Pfaller introduced the concept of interpassivity, originally developed as a reaction to the biased primacy of interaction and participation in contemporary (media) art. Interpassivity denotes an understanding of an artwork’s recipient who is not only totally passive, i.e., merely consuming, but even more passive than passive, in that the consumption process itself is delegated to a placeholder, be it a thing or another person. In this way the enjoyment of receiving is taken over by an external instance; nevertheless, a kind of distant, substitutional enjoyment still takes place (or is first made possible). Pfaller illustrates interpassive behaviour with examples: collecting books instead of reading them, recording films onto tape instead of watching them (“letting the video recorder watch them for me”), or, cited from Slavoj Zizek, artworks with a “built-in” consumption instance, such as the canned laughter in some TV series. Constitutive of interpassive behaviour is an act of replacement, an as if, that could have been taken for the real act (e.g. programming the video recorder). It implies an imaginary, naïve spectator which, in terms of Zizek, is the subject supposed to believe.
This imaginary naïve subject, to which the as if is addressed, allows for the sensation of enjoyment by interpassive delegation.
Coming back to the modes of interactivity described above, an interpassive dimension of the non-participative interaction becomes visible. It serves to keep alive the great as ifs of the artwork: e.g. as if the computer were able to exactly follow the natural temporal fluctuations of a human musical performance (i.e., the computer “listens” in our place), or as if the virtual, binaurally rendered scene were consistent despite the listener’s movements (i.e., the computer “knows” the scene we are about to discover). Æsthetical enjoyment does not begin because something is believed, but because it could have been believed despite better knowledge. This may even be true in situations where no interaction takes place in a technical sense: a purely static binaural recording, depending on the setting, may be listened to as if it were reality (although it is undoubtedly known not to be), or a static tape part could be believed to have sensibly accompanied an instrumental soloist (when in fact the soloist caused this impression by sensibly playing along with the tape). These examples may clarify that interactivity and interpassivity are not necessarily counterparts when describing the involvement of the recipient with perceived facets of an artwork. Instead, they form a continuum and provide means of investigating mechanisms of æsthetical enjoyment between the poles of identification and externalisation. To come back to the concept of an environment, these poles are located at the often blurry boundaries of the inner and the outer domains, both of which are inside the environment. Such constellations may lead to interactive qualities without technical means. “One could have believed that we did not hear anything”, a visitor to John Cage’s 4:33 might say, with perverse delight.
Conservatorio “G. Rossini” (Italy)
This project presents some reflections originating from an experience of interactive composition, which stemmed from the creation of background music for a project of architectural regeneration and later became an audio-visual installation with its own independent character. The musical idea was born of the firm belief that the perception of a landscape involves not only elements of a visual and spatial character, but also the sound elements that compose the landscape. The soundscape is an integral part of our perception of the landscape in general.
From these premises derived the idea of composing an audio-visual installation in which the sounds accompanying the viewing were in part sounds from the landscape itself and in part realized through synthetic instruments and techniques. The perception of the whole depends on interaction with the movements of those who observe and listen. In this way, the concepts of the relationship between sound and image, and between sound and listening space, also come into play. Technologies have long contributed to the rethinking of these relationships, as well as to their use in both musical and multimedia compositions.
Subsequently, starting from reflections on the meaning of gestural expressiveness in sound production and its relationship with the sonic result, I carried out some experiments in interactive composition and improvisation in which the interpreter’s role could influence the outcome of the composition in various ways.
In this specific context, through this project, I would like to deepen my reflection on how interactive technologies continue to facilitate the spread of certain aspects of artistic and musical composition, such as, in the first place:
2. interpreter-listener / observer relationship
The A changing landscape audio-visual installation was a quite simple experiment in interaction and, in many respects, not at all innovative. However, it enabled me to create a relationship between different domains, starting from the coupling of music with an architectural project. The initial proposal consisted of creating background music for a project of regeneration of a landscape. It was therefore a musical work born of a context of redesigning a space, which supplied the starting point for an in-depth study of the interaction between sounds and environment. The idea of proposing the form of the audio-visual installation derived precisely from the fact that the project was going to be enjoyed through the listener-spectator’s experience of the space. The interactive technologies, in this case relating presence-movement to the sound-visual outcome, highlight the importance of the
The idea of A changing landscape is therefore, from its inception, linked to an architectural project for the regeneration of some once-active mines in south-western Sardinia. Although the audio-visual installation starts from this original idea, it can also be realized separately from it and performed in other contexts as a stand-alone musical and visual installation.
References and instruments
For me, a first reference for this idea of background music for the landscape was the project for the Centro de Investigacion sonora in the Jardin de San Francisco by P.A. Padilla Jargstorf. In it, thin sheets of plant material with different curvatures produce different acoustic responses depending on the visitors’ placement within the space. Here the concept of soundscape is conceived as the “primordial element of the comprehension of our surroundings, always implied in our visual experience, that plays a fundamental role in our spatial perception”.
A second reference for the interactive composition of the music for the A changing landscape installation was the theory of the landscape architect Gilles Clément, expounded in his main works and, in particular, in his Manifesto of the Third Landscape. The space-garden should be lived in and observed as the place of change, where “some biological energy unfolds naturally”.
The A changing landscape installation is composed of a series of images accompanied by audio files. The original images were taken from a group of photos of seascapes, arid lands and wooded areas from the original architectural regeneration project. Each type of image is combined with audio files composed of sounds of acoustic instruments, synthetic sounds, and sounds from the landscape itself. The installation is arranged so as to be performed along a spatial route of between 5 and 8 metres in length, such as a corridor, a room or a courtyard, either outdoor or indoor. Within this space the images are projected while the audio files are played. By walking through the installation area, each visitor-listener is presented with different perspectives on the images and the accompanying sounds. The changes during the running of the installation depend on two types of events:
1. Number of spectators present
2. Type of movements of the spectators
The presence of the public and their movements are detected by a video camera and affect the transformation of the sounds and the images, which are processed by an algorithm realized in Max/MSP.
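As an illustration of the kind of mapping involved (not the installation's actual Max/MSP patch), the following sketch computes a frame-difference "motion energy" from two grayscale camera frames and scales it linearly into a sound-processing parameter. The frame format, the parameter range and all function names are hypothetical.

```python
# Illustrative sketch of camera-driven sound transformation:
# frame differencing on grayscale frames (lists of pixel rows, values 0-255),
# with the resulting motion energy mapped to a parameter such as a filter cutoff.
# All names and ranges here are invented for illustration.

def motion_energy(prev, curr):
    """Mean absolute pixel difference between two grayscale frames."""
    total = 0
    n = 0
    for row_p, row_c in zip(prev, curr):
        for p, c in zip(row_p, row_c):
            total += abs(c - p)
            n += 1
    return total / n if n else 0.0

def map_to_param(energy, lo=200.0, hi=4000.0, max_energy=255.0):
    """Scale motion energy linearly into a parameter range (e.g. cutoff in Hz)."""
    e = min(max(energy / max_energy, 0.0), 1.0)  # normalise and clamp to [0, 1]
    return lo + e * (hi - lo)

# Tiny 2x2 example: one pixel changes fully between frames.
prev = [[0, 0], [0, 0]]
curr = [[0, 0], [0, 255]]
print(map_to_param(motion_energy(prev, curr)))  # 1150.0
```

The number of spectators could be estimated from the same difference image (for example by counting changed regions), with both values steering independent transformations of sound and image.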
Table of sound-image associations within the composition:
Table of changes produced on images and sounds:
The project offered here is an opportunity to deepen our reflection on various topical themes:
1. Interaction as an emerging element of many current artistic activities that avail themselves of ever-improving available technologies.
2. The function of the interpreter, who in the final analysis can even be the spectators themselves, or a person acting from a distance. On this point: if at the dawn of live electronics there was an interaction between, for example, the performers on their instruments and the composer’s thought, mediated by the way the sound was produced by the performer and processed by the sound engineer, with interactivity this process now extends to the spectator-listener as well.
3. Furthermore, still through interaction techniques, it is possible to examine more deeply the interrelation existing in this type of work between sound and visual event, as well as between space and movement.
Finally, it should be pointed out that this project implies two different concepts of soundscape: that of the sounds coming from the landscape itself (for example, water, with its own characteristic sound types), and that of music specially composed, as per tradition, with sounds that refer to the landscape but are recreated through a musical idea.
António de Sousa Dias ; José Luis Ferreira
Portuguese Catholic University (Portugal)
We present a progress report on a project under development: the recasting, transcoding and proposed real-time version of Jean-Claude Risset’s Inharmonique (1977). One of the main objectives of this work is to achieve a real-time version of the piece. This implies an extended analysis and resynthesis of the work, allowing two subsidiary goals. The first is to provide further documentation of this mixed electroacoustic work, which subsists as a fixed tape (the 20 kHz original tape version and the 44.1 kHz “upsampling”), along with a transcription of its algorithmic processes - the PLF routines - originally programmed in Fortran. The other goal is to provide a version of the work in which the electronic part can be better adapted to the constraints (and flexibility) sometimes required in performance. Inharmonique is a work for soprano and tape, premiered on 25 April 1977 by soprano Irène Jarsky at the Centre Georges-Pompidou, Paris. The tape was produced at Ircam and presents different types of sound synthesis strategies, through extensive use of the Music V software synthesis system [Mathews et al. 1969]. Beyond its musical quality, this work acquired some relevance in the context of computer music studies; in fact, the orchestras and scores used were reported and documented by Denis Lorrain [Lorrain 1980].
As a consequence, they were adapted to other software synthesis languages such as Csound, and presented and discussed in computer music textbooks [see for example Dodge & Jerse 1985]. Two main questions have arisen since the start of this project: (1) the relevance of this recovery, and (2) the relevance of transcribing a fixed mixed-music work to real-time synthesis and processing.
This project is being carried out through the articulation of two applications: Max 5 and Csound. Max acts as a front-end manager: its main task is to prepare and trigger events, mainly through a “js” object, which is used as an interface between message events and the Csound sound object manager. One of its main tasks is to carry out the processing of the PLF routines in real time. Csound is used as a sound generator: events are sent to a Csound engine through the “csound~” Max external object by Matt Ingalls. One of the advantages of using Csound is that it allows one to preserve a flavour of idiomatic writing in the Music V style, thus becoming a bridge between Music V and Max.
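The division of labour described above (Max preparing and triggering events, Csound rendering them) can be sketched generically. The following Python stand-in is purely illustrative: the class names and the note-statement format are assumptions, modelled loosely on Csound-style "i" statements, and do not reproduce the project's actual js/csound~ code.

```python
# Generic sketch of the front-end / sound-engine split described in the text.
# A "bridge" translates message events into engine score statements, loosely
# analogous to the js object feeding the csound~ external in Max.
# All names here are invented for illustration.

class SoundEngine:
    """Stand-in for a Csound-like engine that accepts score statements."""
    def __init__(self):
        self.log = []

    def score_event(self, statement):
        # A real engine would synthesize sound; here we just record the event.
        self.log.append(statement)

class Bridge:
    """Translates (instrument, onset, duration) messages into 'i' statements."""
    def __init__(self, engine):
        self.engine = engine

    def on_message(self, instr, onset, dur):
        # Music V / Csound-style note statement: i <instr> <onset> <dur>
        self.engine.score_event(f"i {instr} {onset} {dur}")

engine = SoundEngine()
bridge = Bridge(engine)
bridge.on_message(1, 0.0, 2.5)
bridge.on_message(2, 1.0, 4.0)
print(engine.log)  # ['i 1 0.0 2.5', 'i 2 1.0 4.0']
```

The point of the sketch is the separation of concerns: the front end owns timing and control logic (including, in the real project, the translated PLF routines), while the engine only receives idiomatic note statements, which is what keeps the Music V flavour intact.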
The interest of this transcription is twofold: on the one hand, the performer no longer needs to follow the tape strictly; on the other, we aim to contribute to the enlargement of the corpus of computer music studies. As for the relevance of the recovery, the preservation of all kinds of electronic music has been a problem that composers and performers have faced since its beginnings. If, on the one hand, restoration seems an obvious undertaking for a 35-year-old work, on the other hand the issues that arise are not obvious at all. The choice of the term “recasting” to describe this recovery work reminds us that it is often not just a matter of repeating the technological component of a work with the same technology (which nowadays is sometimes impossible); indeed, there is a redistribution of roles to other “actors”. This is why we prefer the term proposed by [Chadabe 2001] to others such as “reforging”. We think that the word “recast” emphasizes the compromise implied by the fact that adaptations will always take place. Sousa Dias faced this problem in his transcriptions of Jean-Claude Risset’s Music V files to Csound, deciding to keep the similarity of style between the two programming languages as regards programme structure and variable names [Sousa Dias 2007; for a full discussion of this issue cf. Sousa Dias 2009, 2011; Van Ransbeeck et al. 2012].
This is why the solution found for the new version may at first sight seem different from the original. In fact, a recasting made with the participation of the composer may sometimes lead to a solution different from the “original”: Chowning reports that during the reconstruction of Stria, artifacts due to technological limitations (low sampling rate, signal quantization, etc.) were suppressed at the request of the composer [Chowning 2007].
Regarding Inharmonique, the “primary” sources of information are the composer and his archives, the aforementioned report by Denis Lorrain, Sousa Dias’s earlier transcodings from MUSIC V to Csound, and the tape itself. Lorrain’s report, which includes the performance score, proved crucial to the early success of this project. It contains a thorough description of the digital processes, although it leaves some information gaps which need to be filled in. One of them is the transcription of the PLF routines Risset used to generate sequences of events algorithmically; filling this gap contributes to completing the set of documentation currently available.
Transcribing a fixed mixed-music work to real-time synthesis and processing also raises aesthetic issues. The differences between the MUSIC V and Csound synthesis systems give rise to this problem: how much can we “improve” the transcription, taking full benefit of the outcomes of recent systems, and still remain faithful to the original? This presents us with questions of “fidelity” to the work itself or to the composer’s original wishes (or concept) for the work. This aesthetic issue gains more relevance given the open possibilities of a real-time version, implied by both the technological factor and the performance factor: the (fixed electroacoustic) musical work may become flexible in ways not expected in former performances of Inharmonique. This also raises the issue of autonomy from the original work and, subsequently, from the composer’s original version. Here we decided to rely upon the will of the composer, through presentation and discussion of the development of the project. Another interesting aspect of this issue concerns the last section of the work: the fact that the voice on the tape can nowadays be replaced by the voice of the actual performer can contribute to a more intimate blending between voice and electronics, as the echo effect will be more effective.
The score itself reveals other interesting problems. Being a score for non-real-time composition, materials are organised by material/instrument type rather than by time of occurrence, since Music V languages can sort time events prior to the generation phase. This becomes a delicate question, underlying the choices we face: deciding what can be generated in real time (with the possibility of timbral and temporal displacement from the original); deciding where to use pre-synthesized complex figures that will be launched in real time as single events; and deciding which responsive strategies to use between voice and computer.
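The contrast between the two orderings can be made concrete. In this hypothetical sketch (the data and names are invented, not taken from the score), events listed per instrument, as in a MUSIC V score, are flattened into the single time-ordered stream that a real-time dispatcher would need:

```python
def time_order(score):
    """Flatten a {instrument: [(onset, dur), ...]} score, grouped by
    material/instrument as in a MUSIC V listing, into one stream
    sorted by onset time for real-time dispatch."""
    stream = [(onset, dur, instr)
              for instr, notes in score.items()
              for (onset, dur) in notes]
    return sorted(stream)

score = {"bell": [(4.0, 3.0), (0.5, 2.0)], "glide": [(2.0, 5.0)]}
```

MUSIC V could defer this sorting to a pre-synthesis pass; a real-time version must dispatch events strictly in onset order, which is one reason the choice between live generation and pre-synthesized figures is delicate.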
Finally, we think that the accomplishment of this project will be a substantial contribution to the documentation of Inharmonique and will provide new possibilities for the performance of this beautiful work.
University of Sheffield (England)
This paper addresses a growing lack of clarity as to the ontological nature of the acousmatic work; in recent years, a number of key philosophical texts have addressed the ontology of acousmatic music (Davies 2004, Ferguson 1983, Godlovitch 1998, Goehr 2007, Kania 2005). However, such texts have either overlooked, misrepresented or misunderstood both the compositional methods involved in the creation of acousmatic works and the practice of sound diffusion. This situation is as understandable as it is regrettable; ontologists, who have a specialised knowledge of ontological methods, terms and techniques, can only theorise about those traditions that they know particularly well, but most appear to have a limited understanding of the acousmatic tradition. The vast majority of musical ontologists seem unaware of the compositional methods employed in the creation of acousmatic music and none of the various theorists listed above have considered or discussed the practice of sound diffusion and the various issues surrounding the presentation and instantiation of acousmatic works. As a result, the existing literature fails to provide an adequate account of the acousmatic work; in one extreme case, such works are deemed to be ‘in search of their metaphysical status’ (Ferguson 1983). In all other cases, acousmatic works are described as ‘for playback, not for performance’ (Davies 2004, Godlovitch 1998, Goehr 2007, Kania 2005).
The situation outlined above is often reversed when acousmatic composers and theorists engage in the practice of ontology; whilst they may have a detailed knowledge of the acousmatic tradition, acousmatic composers and theorists often have a limited understanding of the methods and techniques employed by musical ontologists. This does not (and should not) prevent acousmatic composers from posing ontological questions, but it does limit their ability to provide rigorous and structured answers. This point has been raised by Jonty Harrison, who, in a recent talk, considered the ontological nature of acousmatic musical works:
“There is debate, even among composers of acousmatic music, as to what constitutes ‘the work’ – is it the trace on the storage medium (let’s call it the ‘studio version’) which, when reproduced in conditions sufficiently similar to those of its composition, renders the piece audible as the composer heard it? Or is it the public presentation, probably on a larger sound system, in an unknown acoustic, in which case what is stored on tape / disk is ‘incomplete’, serving merely as the blueprint for further manipulation of the sounding material? Can it be both?” (Harrison 2011, p.5)
In this short statement, Harrison poses a number of ontological
questions. However, he does not attempt to provide any answers to these
questions, stating that this is not his primary objective: “[...] I am
raising questions for discussion, rather than offering answers or
definitions [...]. What I hope to do is simply identify some of the
areas in which further investigation is required.” (Harrison 2011, p.1)
This paper addresses some of the ontological issues raised above and thus provides answers to Harrison’s questions. In doing so, it draws from the vast body of ontological literature, including (amongst others) the writings of Stephen Davies (2004), Lydia Goehr (2007), Nelson Goodman (1969), Roman Ingarden (1986), Andrew Kania (2005; 2008), Peter Kivy (1983; 1991; 1997), Roger Scruton (1994; 1999; 2004) and Richard Wollheim (1980). These writings provide a means of answering the following ontological question: What is an acousmatic work?
The paper is divided into three main sections. Section 1 differentiates between acousmatic works and their performances, suggesting that numerical, temporal and spatial distinctions necessarily hold between them. This section draws from the writings of Roman Ingarden (1986) and Andrew Kania (2005) in order to clarify the distinction between works and performances. Section 2 surveys three dominant ontological views (the medium view, the class view and the type view) and considers whether any of these views can provide an appropriate ontological account of the acousmatic work. The type view is deemed the most suitable, and a full rationale is provided. Section 3 presents a bespoke version of the type view, suggesting that acousmatic works are abstract types that underdetermine the concrete details of their various instances. In doing so, the type theories of Richard Wollheim (1980) and Stephen Davies (2004) are introduced and explained, and the benefits of this ontological theory are presented and defended. In conclusion, the paper proposes an ontological account of acousmatic music that reflects the unique nature of the acousmatic work whilst providing a clear framework through which one may consider the relations that hold between such works and their various performances. Benefits arising from this observation are outlined and proposals for future research identified.
[...] in use for ten years at the author’s institution (Moore et al., 2004), a prototype will be envisaged that handles soundfile triggering, scripting, matrix mixing on input and dynamic matrix output routing: a staging post for a composition in fractured form, ready to be truly interpreted by a performer.
As has been shown by the proliferation of electroacoustic music
throughout the last thirty years, the forms, controllers and
performance practices are as varied as the music being presented. This
research will neither replace nor necessarily advance any one
particular method but serves to enrich the solid practice of sound
diffusion of stereo fixed media.
Peter V. Swendsen ; Liliana Milkova
Oberlin Conservatory of Music ; Allen Memorial Art Museum (USA)
Visual art can teach students a great deal about sonic environments and imaginations. For the past six years, my teaching of electroacoustic music and soundscape practice at the Oberlin Conservatory of Music has drawn extensively on the collection of Oberlin’s Allen Memorial Art Museum as a starting point for discussions, assignments, and compositions. In collaboration with the museum’s Curator of Academic Programs, I have developed both small- and large-scale projects as part of a larger effort to re-position the museum to be a fulcrum of learning for the entire campus community, regardless of disciplinary focus.
As a pedagogical tool for electroacoustic music, the use of the museum offers many valuable features and disruptions: the opportunity to find a common starting point for students of different backgrounds and experience levels; the opportunity to safely rediscover the role of “novice” in a field that tends to demand defining oneself as an “expert”; and the physical movement of students to a new learning and creative space. Class visits to the museum are rich in the genesis of new ideas, and subsequent music-based classroom discussions consistently benefit from related discoveries.
In the autumn of 2012, my Advanced Electroacoustic Music class undertook a semester-long project that included several visits to the museum and the selection by each class member of a single piece from the collection to serve as the basis for an octophonic composition. The selected works ranged from a 16th century Spanish painting based on the Book of Revelation to an 18th century British landscape to a 20th century film still by Cindy Sherman. The pre-compositional and compositional approaches to translating these works were rich and varied and led students to a number of important and otherwise unlikely decisions and discoveries about their music-making.
Developed with my colleague from the museum, Curator of Academic
Programs Dr. Liliana Milkova, the presentation will detail the
development of my museum-based pedagogical practice and outcomes,
including strategies for linking visual literacy with electroacoustic
music theory and practice. We will also provide examples of recent
student projects and evidence of success based on pre- and post-visit
student surveys. Our findings and experience working with colleagues
from other institutions suggest these strategies have wide applications
and can be highly useful regardless of the presence of a major art museum on campus.
This paper gives an insight into and overview of the promising new field of bringing direct human emotional response, by means of psychophysiology, biofeedback and affective computing, into music composition and music performance. The techniques described give rise to a vast number of new and unexplored possibilities for creating, in a radically new way, personalized interactive musical compositions or performances in which the emotional reactions of listeners, measured with biosensors, are used as vital input. The paper is organized in four parts. The first part contains a characterization and definition of the concept of interactivity in the arts, as well as a concise overview of its historical background, with emphasis on the field of music. The second part gives an overview of the fields of psychophysiology, biofeedback and affective computing and of how they can be used in the practice of music composition and performance. Several classical musical concepts, such as the concept of the score, general musical affects, and the connection between musical idiom and emotional impact, are described from a new point of view. The second part ends with an extensive description of two existing methods of integrating direct human response, indicated by the terms sonification and interpretation. In the third part, the techniques described in the second part are reviewed in the light of the characterization provided in the first part. The fourth part deals with fundamental questions and paradigms that arise from the techniques and concepts described in the paper.
Interactivity. The concept and definition of interactivity in art has a long history. For this paper we will therefore work with a contemporary characterization as proposed by the pioneering Belgian multimedia artist Peter Beyls. Interactivity can thereby be characterized by five fundamental principles: integration, the principle of interacting, hypermedia, immersion and narrativity. Integration pertains to the multidisciplinary character of the artwork; the principle of interacting to the connection that is established with the audience; hypermedia to the nonlinear use of information; immersion to the creation of an alternative reality; and narrativity to the narrative qualities of the artwork. The five concepts are described in more detail in the paper.
The characterization of interactivity in the arts is followed by a brief historical contextualization. The principles laid out here will be used in subsequent sections of the paper. In this overview, emphasis is put on music- and sound-based interactive art. Themes described here include the first interactive artwork by the legendary Greek artist Parrhasius, the Gesamtkunstwerk as proposed by R. Wagner, synesthesia and music kinetic art as initiated by A. Scriabin and V. Baranov-Rossiné, futurism as presented by L. Russolo and F. T. Marinetti, Happening art as initiated by A. Kaprow, and the recent rise of the digitized society and its influence on the concept of interactivity in art, and in music more specifically.
Psychophysiology, biofeedback and affective computing as new ways for interactive music composition and performance practice.
In this section we start by describing two domains which are fundamental for the integration of human emotional response in music composition or performance: psychophysiology and biofeedback, and affective computing.
In the first domain, psychophysiology and biofeedback, researchers look for ways to measure human emotional states by means of biosensors that register psychophysiological parameters such as the ECG (electrocardiogram), GSR (galvanic skin response, an index of stress), EMG (electromyogram, or muscular tension) and EEG (electroencephalogram). The second domain, affective computing, was founded by R. Picard at MIT and builds on the work of M. Clynes and the extensive domain of artificial intelligence. In this growing field, researchers look for ways to establish emotional man-machine interactions. As affective computing relies heavily on accurate measurement of human emotions, psychophysiology and biofeedback are widely used here.
Subsequently, a schematic overview is provided of how direct emotional response, measured using biosensors, can be interactively integrated in music composition or musical performance practice. The techniques described rely on several artificial intelligence techniques, so a brief conceptual insight into these is also provided, covering genetic programming and creative evolutionary systems. With these techniques in mind, a general blueprint is then presented for integrating direct human emotional reactions into any musical composition or performance. Emotional response is hereby viewed as a new dimension or axis extending the concept of the classical musical score, and several ways of using this new dimension for composing or performing music are described. The highly individual and dynamic character of direct emotional reactions is taken into account; emphasis is therefore also put on the (partially) unpredictable and highly personalized component that is introduced into a musical composition or performance.
The schematic overview of the use of emotional response in musical practice is followed by an elaboration on two fundamentally different approaches that can be distinguished.
In the first approach, indicated by the term sonification, biometrical data such as ECG or GSR readings are directly transformed into musically or sonically meaningful data, such as MIDI data. This transformed data is subsequently integrated into a musical composition or performance. In the second approach, indicated by the term interpretation, use is made of an intelligent mapping and interpretation of biometrical data. In contrast to the first approach, a digital system hereby really tries to understand or interpret human emotional response to composed or performed sound or music. The knowledge gathered in this way is subsequently used to integrate the direct emotional response into the composition or performance.
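A minimal sketch of the sonification approach, assuming a simple linear mapping (none of the cited systems necessarily works this way, and the value ranges are invented for illustration), might rescale raw GSR readings into MIDI note numbers:

```python
def gsr_to_midi(samples, lo=0.0, hi=10.0, note_min=36, note_max=84):
    """Linearly map raw GSR readings (here assumed to span lo..hi,
    in arbitrary sensor units) onto a MIDI note range, clamping
    out-of-range readings to the nearest bound."""
    notes = []
    for s in samples:
        x = (min(max(s, lo), hi) - lo) / (hi - lo)   # normalize to 0..1
        notes.append(round(note_min + x * (note_max - note_min)))
    return notes
```

An interpretation-based system would replace this fixed mapping with a model that first classifies the listener’s emotional state and only then decides how the music should respond.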
The different concepts and techniques laid out in this section are illustrated by historical and idiosyncratic examples. These include “Music for Solo Performer” by A. Lucier, “On Being Invisible” by D. Rosenboom, “Spacecraft” by R. Teitelbaum, “Sensorband” by E. van der Heide, A. Tanaka and Z. Karkowski, “Biomuse Trio” by R. B. Knapp, E. Lyon and G. Ouzounian, “Heart Chamber Orchestra” by the artist duo TERMINALBEACH, “Mind Pool” by K. Long and “EMO-Synth” by V. Vermeulen and Office Tamuraj.
Using emotional feedback as a compositional or performance tool, and interactivity. In this section, the theoretical framework described in the second part of the paper is reviewed in the light of the characterization of interactivity given in the first part. The central question is the following: to what extent does using direct human emotional reactions, by means of psychophysiology, biofeedback and affective computing, in musical compositions or performances imply the principles of integration, interacting, hypermedia, immersion and narrativity? Integration is hereby viewed in the light of the interdisciplinary approach proposed throughout the paper. The principle of interacting is considered from the viewpoints of performer, composer and listener. Hypermedia is linked to the nonlinear character of the ever-changing human emotional state. Immersion is viewed from the perspective of the new experience that the described techniques of embedding emotional reactions can provide. The narrativity component is considered with the inherent nonverbal narrative power of the framework of human emotional constitution as a starting point.
A new perspective on musical creativity. Using direct emotional reactions in an interactive way for music composition or performance gives rise to some most interesting new questions and paradigms with regard to the musical creative process, pertaining to the process of composing as well as performing. Two essential questions and paradigms are elaborated extensively, namely:
• By using and integrating human emotions by means of biofeedback, the composer or performer can extend and complement his or her own creative process. A new inspirational tool arises. Moreover, it becomes possible to understand the creative process of both composer and performer on a whole new level.
• By using emotional input in musical practice, a new virtual platform arises in which the boundaries between artist, audience and music are redefined. Important questions arise with respect to authorship. To what extent is the music that is composed or performed a creation of the artist or of the audience? To what extent is it an instant creation driven by the biometrical responses of the listener? Who is the author of what is being musically produced?
Brown University (USA)
In many ways electronic music composers have adopted musical imaginations similar to older mythical sound practices, in which the world was rendered comprehensible through listening. Music is once again being interpreted as an “apprehensible reflection of the transcendent numerical order”; there is a sense of “harmony”, or music, “in the universe” that “is manifested in the sensible world” (Slocum 1993, p. 15).
In the medieval era, Boethius claimed that Musica was the study of Truth through the “witness of the ears” (Boethius 1492, p. 2). This notion has returned in the 20th century, with composers like James Tenney describing their music as “sound for the sake of perceptual insight” (Haas 2007, p. 3). Harry Partch said that the first people who discovered octaves at the nodes of a string “discovered magic... as the people who found tones in electronic tubes” and then “through art, they plunged... toward an insight into the greater universe” (Partch 1991, p. 184). We even hear this sort of talk in writings about mapping, design, or soundscape architecture. Barry Truax, writing about music and installations, mentions the benefits of musico-compositional reflections in a space because they “may direct the listener’s attention back to an understanding of some facet of that world” (Truax, p. 195). These statements are saying similar things: that sound is an ethereal mediator and that the nature of the world may be comprehended through these activities.
I will focus on three specific facets of the older tradition of Musica and how they are represented in a modern, electroacoustically-influenced context. I will begin with some modes of listening and their relationship to the notion of attunement, or attention. Second, I will discuss modern examples of listening modes that draw inspiration from the Greek correlation of music with astronomy. Lastly, I will talk about sound as a mediator of the invisible, or magic. These beliefs are acts of pure imagination and fertile ground for composers. They also contextualize the electronic music composer as an important figure in the modern technocracy, for such composers explore, through sound, the nature of the world and its technologies. This is one possible answer to the Global Issues theme of this conference, though this paper will lean toward Listening and Intention-Reception.
Listening and Reflection
One major characteristic of 20th and 21st century sound cultures is the development of modes of listening. Many taxonomies describe potential new ways of perceiving the audible world. Some modes of listening re-focus the listener away from classical aesthetics, as in the work of composers and theorists like Iannis Xenakis or Denis Smalley. Others directly reference mysticism and religious states of being, like Pauline Oliveros’ Deep Listening. Still more question the nature of noise and our hearing practices, like Cage, Stockhausen, and R. Murray Schafer. Of course, the mere act of developing such a mode of listening questions the culturally-imposed notion of an aesthetic appreciation of sound at all. Most agree that sound is a vessel that “encourages an active sensorial engagement on the part of the listener” (Lane and Parry 2006, p. 1).
Harmonics and The Cosmos
He hears a cosmic echo like that which astronomers detect, a residue of archetypes that, with goodwill, can be apprehended. (Harvey 1999, p. 48)
It can be a little difficult to understand why music and the cosmos are related. Simply put, they are both ethereal. Sounds are beings that cannot be seen or touched. To the ancients, sound (harmonious sound in specific) was characterized as being mathematically related in a reproducible way to the motions of other objects, like the stars. Through these means sound became a revelation and earthy metaphor of cosmic (or absolute) Truth, and a mediated path to the spirit realm.
La Monte Young, for example, brings these notions full circle. He believes that periodic sounds activate a fixed region in the cerebral cortex, and over time similar frequencies will be learned. These can then be utilized to interact with the nature of time, resurrecting feelings and memories from the past. The inherent reference to astronomical time scales cannot be ignored, as it refers directly to older beliefs. Non-periodic wave-forms are an auditory eternity, and represent impenetrable senses of time inexpressible through sound.
There are many other examples. Stockhausen’s works such as Tierkreis, Stimmung, or Kontakte are good instances, as is Ives’s Universe Symphony. Iannis Xenakis mentions the relationship between what he calls “outside time structures”, or sounds that do not bend to the will of musical form. Spectralism is described by Jonathan Harvey as “subject-in-process time”, and he even goes so far as to claim that it “is in essence outside the world of linear time” (Harvey 2001, p. 39). Of course, it can be argued that musical time has always existed separate from chronological time. The importance of these modern considerations has more to do with composers’ conscious manipulation and reflection of pure time than with the fact that musical time exists. Sound has once again been asked to mediate the physical and intangible worlds.
Many contemporary composers view electronics and electronic music and recording as the medium that brought about the “new” semi-mythic aesthetic priorities mentioned so many places in our literature. Some composers even talk about the act of recording a sound as capturing beings from another world.
Spectral metaphors only make sense because, like spirits, electronic sounds “have no, or only vestigial, traces of human instrumental performance... They are sounds of mysterious provenance” (Harvey 1999, p. 57). This ‘mysterious provenance’ is the sensation resulting from the ephemeral nature of sound, especially when reproduced without the source. There are superstitions about early recordings in the 20th century that reflected this notion through paranormal listening practices such as EVP.
We have heard these statements before. Just as the celestial monochord rendered the ephemeral properties of the invisible physically accessible and reproducible, so would the audio recording. This reproducibility draws the listener “into the blind depths of materiality” and “emphasises the tension between the now and the past in current perception” (Voegelin 2006, p. 14). In a sense, the resulting de-contextualization offered by this technology enables the listener to reflect upon the nature of his or her sonic perception. Some composers used this ability to conjure new sonic spirits from previously inaccessible material in a way highly reminiscent of attunement. Embedded in the recordings were “phenomena to discover and invent, which heretofore were not accessible to compositional reflection” (Haas 2007, p. 138).
Some would argue that electroacoustic music conjures the spirits in spite of its mechanical drawbacks. I would argue that the technology awakened sonorous spirits in the human mind through the re-presentation of the ephemeral.
The point of this paper is not so much to argue that we have become ancient, but to discuss some of the background and potential of some very modern ideas. It has been said many times that these ideas and philosophies have ‘saved’ music from the ‘tangent’ of the Renaissance, and I believe that these philosophies and concerns have, as Harvey would say, “achieved a rebirth of perception” in electronic music. The ancient mindsets are central to these thought processes (just as they are to my own), and this is what I would like to explore with the EMS community.
De Montfort University (England)
This paper attempts to outline a framework for understanding implications of the musical devices used by composers in the opening and closing moments of electroacoustic music, with a particular focus on acousmatic music.
The beginning-middle-end paradigm exerts a strong influence on the way practitioners and listeners formulate frameworks for the creation and reception of Western music (Agawu, 1991), as does the concept of the narrative curve (Childs, 1977). Openings and closings are thus particularly telling parts of musical structure. They are significant aspects of a work’s rhetorical character, often embodying much of the way music functions at the levels of phrase and more extended sectional boundaries. They are also elements of a work that frequently differentiate the formulaic from the genuinely innovative and inventive. It is self-evident that openings establish a frame for the types of materials employed in a work, as well as establishing a relational and temporal architecture between them. The opening of a work inevitably influences the way listeners form expectations around the experience of a piece and interpret the ongoing nature of a musical design. Closure is considered here to be fundamentally problematic in musical structures that are not organised around the kind of perceptibly hierarchical syntax that is projected in tonal music. A focus on the transformation of sonic material, rich noise textures that pose segmentation problems, and the direct use of natural environmental sound tends to place electroacoustic music in this ‘difficult’ category.
A distinction made by Meyer (1996) is used here to characterise the problematic nature of electroacoustic music’s materials, namely that of primary and secondary musical parameters. Primary parameters are those capable of being segmented into ‘perceptually proportional’ steps, with relationships between them shaped by ‘syntactic constraints’, enabling a set of tangible hierarchic value relationships to emerge between them. The psychologically complex consequences of discrete pitch steps used to evoke a tonal centre (definitively or ambiguously) and of metrical rhythmic formations are core examples of the musical efficacy of primary parameters. Secondary parameters are those that cannot be separated definitively into proportional values, nominally tempo, dynamics and timbre (Meyer uses the term ‘sonority’), and are apt to function through relative increases and decreases in quantity or alterations in character, rather than through a relational syntax. The polarised textural and behavioural sonic continua proposed by Smalley (1997) exemplify ways in which electroacoustic composers, and many spectrally or sound-object oriented instrumental composers, have dealt with the problem of creating a coherent basis for meaningful distinctions and oppositions between materials at this secondary level. For instance, in terms of spectral types, following the more detailed model of Schaeffer (1966), Smalley’s note-to-noise continuum represents a theoretical summary both of the psychological distance between these notional sound states and of the potential for grasping an uninterrupted sense of parametric coherence between them. That is to say, in a musical work we need to be attuned or acculturated to the notion of such a continuum in order to gain meaning (such as tension / relaxation or goal states) from stages and states of progression or play within it.
A supporting perspective is that of Snyder (2000: 201), who emphasises a culturally relativist perspective for all cases, in that within a particular cultural frame ‘the meaning of learned syntactical patterns must constantly be maintained by repetition or it will be lost, both on the immediate and historical time scales.’ But despite it being aurally verifiable that Smalley and Schaeffer’s nodal states of spectral definition and density are perceptually relevant, and despite the fact that in certain circumstances we might reasonably regard a focal pitch as a kind of goal state, movement through the continuum cannot be generically quantised, and therefore the sense of a definitive point of arrival (viz. closure) will tend to be tenuous. A similar critique can be posited for a continuum between sonic abstraction and realism, again a frequently summoned compositional strategy in acousmatic music.
Assuming the absence of a culturally embedded syntactical scheme for electroacoustic music, it is hypothesised here that a generalised model for analysing the implications of ‘opening’ within this multi-faceted genre is that of entering a ‘space’. Space is used here as a metaphor for the volumes, distances and material associations that a listener might infer from the nature of the sounds and the timing and style of their presentation. A set of interrelated criteria, derived from extensive comparative listening, is proposed for evaluating the rhetorical and behavioural implications of the way a composed space is presented at the outset of a piece. In the spirit of an ecological context for musical understanding (as elaborated in Clarke, 2005) this is underpinned by a fundamental notion of low-level structural inference: that the opening of a work allows the listener to witness a spatial construct that is either:
(1) already formed, or (2) in the process of forming.
These are regarded here as structural primitives. In a formed setting, a confluence of contextual information provides a sufficiently stable sense of spatial / material identity that the listener can comprehend it as a coherent scene. This need not be naturalistic, but may be a complex amalgam of divergent material sources, as in the opening of Federico Schumacher Ratti’s El Espejo de Alicia and many works of, or influenced by, a soundscape approach. Forming settings involve the gradual assemblage of materials which unfold over time, elaborating characteristics and dimensions of an acousmatic space, as in Natasha Barrett’s The Utility of Space. In both of these opening structural primitives there are qualitative and temporal features of the materials that influence the richness and cogency of the design and its implications for the work. For example, an initial sound of high or low frequency may imply the possibility of motion into some not-yet-stated frequency regions, a granular edge to a pitch may offer the potential for movement towards more saturated or noisy spectral constructs, while short fragmented noise bursts may afford potential for a clustering or coalescing process. From this perspective, very simple means at the outset of a work can be seen to have rich and imaginative implications, such as in Enrique Belloc’s Para Bla, which presents a stark juxtaposition of relative pitch and noise – calling attention through the rhetoric of gestural immediacy, contrasting sound types and registral displacement. Additional factors, such as the degree of divergence in the characteristics of materials and the timescale over which events are presented, are also taken into account as defining formative seeds. Indeed, it can in itself be a significant analytical challenge to determine where the ‘opening’ section of a work ends, and another phase of continuation or involvement in discourse commences.
The paper concludes that there is a richer range of opening strategies than closing ones in electroacoustic music, where, in the latter, fade-outs, singular upward or downward motion of pitch, or decisive final gestures are common devices. This kind of emphasis on secondary parameters for closure is also an element of classical tonal music, but it is the staple of electroacoustic music, without the additional advantage of a widely comprehended syntax that allows for structural clarity as well as playful and meaningful ambiguity (for example in many of Haydn’s endings). However, it is suggested that by identifying archetypal strategies within the electroacoustic genres, and by drawing sensitive parallels and analogies with aspects of the tonal tradition, it may be possible to find ways to explain electroacoustic forms to a wider audience, as well as to locate ways for practitioners to expand and enrich their vocabulary.