Abstracts



Mark Ballora - Building A Program From Scratch: Creating A Music Technology Presence At Penn State University

Mark Ballora

The Pennsylvania State University

Music technology at Penn State did not begin with a high-level initiative backed by funding, staff, and a degree program. Rather, it has taken root slowly through the establishment of a series of courses that cover a core curriculum.

While Penn State has a well-established School of Music, in the year 2000 its music-technology-related courses consisted of two classes dating back to the 1980s: one in electronic music composition, the other an independent studio project. At that time, an upcoming accreditation review by the National Association of Schools of Music (NASM) foretold a need for training in relevant technologies for all students. This led to the creation of a faculty position in music technology. At about the same time, the School of Theatre established a faculty line in sound design.

It was not immediately clear exactly what type of training should be provided for Music students. Complicating matters was the fact that “music technology” means different things to different groups of people. To verify this, one need only compare the material covered in Dodge and Jerse’s Computer Music with what is covered in Webster and Williams’ Experiencing Music Technology to discover two entirely different sets of goals and assumptions about computers and music.

In an effort to provide training relevant to all students in Music, a course was established in digital audio and MIDI, wherein students gained fluency with sequencing and notation programs. In theory, it may sound attractive to require that all Music students get training in audio/MIDI/music production. In practice, such a requirement turns out to be a mixed blessing. The course is classified as General Education Arts, which makes it available to students from other majors, who often approach the course with great enthusiasm. Music majors, on the other hand, tend to regard the course with the same level of enthusiasm they would have at getting a root canal. (In all fairness, concepts such as comb filtering or bussing audio to an auxiliary track may indeed be needlessly arcane to a student whose vocational goal is to conduct middle school choirs or bands, and whose credit load is well above the norm due to state-mandated teacher certification requirements.) The challenge has been to balance the needs of students needing only a broad introduction to the area with those of students wishing to gain vocational proficiency in it.

Therefore, this course is updated regularly as technology evolves and the needs of musicians and educators change. Two significant updates appeared in the 2009–10 academic year. One was to make the course Web-only rather than the traditional face-to-face format. The other was to split it into two sections, one at a lower credit level and available to Music majors only.

Other courses that followed include the history of electroacoustic music and musical acoustics, as well as introductions to theatrical sound design, audio recording, and music programming (typically in Max/MSP or SuperCollider).

The Music Technology Minor, created in 2006, allows students to complement their major field of study. At Penn State, minors are becoming something of a buzzword, which is an interesting phenomenon. On the one hand, a minor amounts to little more than a kind of merit badge in an adjunct area. Yet many students seem compulsive about collecting as many miscellaneous minors as they can, especially, it seems, those who are unhappy in their majors. The reason for this tendency is likely the challenging economy facing them on graduation, which makes students eager to distinguish themselves. Minors are thus becoming an increasingly common way to individualize their credentials. A textbook example: a Psychology major with a minor in Early Child Development looks very different on a resume from a Psychology major with a minor in Accounting or in Statistics. It is therefore understandable that a minor in music technology could be seen as a valuable complement to a degree in a variety of fields. Students completing the minor have majored in Music, Information Sciences and Technology, Electrical Engineering, Theatre Sound Design, and Journalism.

An instructor’s remark from my undergraduate years (as a Theatre Arts major) rings true to this day: “Why is it,” the professor lamented, “that we bring in the big-name actors to teach advanced classes? Anyone can teach a scene studies class! The masters are needed in beginning level classes, teaching the fundamentals.”

By the same token, when sports commentators are favorably inclined toward Penn State’s football team, the Nittany Lions, they frequently praise PSU football’s emphasis on fundamentals.

Effective instruction in fundamentals is not always something one can expect. The fact that music technology at Penn State has taken hold from the ground up, rather than the top down, means that we started by developing introductory courses, which had to be accessible to the general student population. Thus, it is in keeping not only with our own training, but also with good old Nittany Lion pride, that I can say our courses probably cover fundamentals as well as anyone's do, which may be why a number of our students have gone on to graduate programs in music technology at major universities.

It has often been noted by veterans in the field that computer music has broadened and democratized, from a small group of specialists to a medium now as ubiquitous as photography, itself once inaccessible to all but professional experts. The ease with which advanced DSP can now be carried out on consumer-grade laptops has made digital music accessible and relevant to a variety of fields. Just as the community of computer music creators has broadened, the curricular offerings at Penn State have similarly broadened and democratized, so that the significance of music technology lies not in its being an area unto itself, but in its playing a significant complementary role for other areas of study.




Leah Barclay - Sonic Dialects: Explorations in Intercultural Electroacoustic Music

Leah Barclay

Griffith University, Australia

Throughout 2009 and 2010, Australian composer Leah Barclay traveled through India and South Korea, engaging in a series of collaborations intended to inform doctoral research in intercultural electroacoustic music. The project had obvious challenges, such as language barriers and cultural protocols, but the most important aspect of this collaborative process was understanding and experiencing traditional music in its cultural context.

This paper argues that intercultural electroacoustic music can provide a framework for collaboration that could contribute to preserving and exposing rich music traditions across the globe. This is revealed through a detailed analysis of the creative and collaborative process of two electroacoustic works composed and performed in India and Korea.

This paper also explores methods of disseminating these works in virtual environments. Electroacoustic music, or ‘sonic art’ in a broader sense, is intertwined with the multi-platform digital wave rolling through the creative industries, and can play a key role in the global demand for cultural content. The dramatic advancement of digital media and information technology has cultivated a paradigm shift in how artists collaborate today. These changes have evolved and expanded the tools of expression, but most importantly they have opened the ability to communicate at a higher level in an interdisciplinary context. Now, more than ever before, art driven by technology could truly be a global tool for change.

Intercultural electroacoustic music offers a unique opportunity to fuse tradition and technology and to delve into a deeper understanding of the world. It fosters a stronger dialogue at a time when an awareness and understanding of international cultures is becoming imperative. This research ultimately aims to preserve ancient music traditions and promote the infinite possibilities of electroacoustic music to a global audience.




Natasha Barrett - Ambisonics spatialisation and spatial ontology in an acousmatic context

Natasha Barrett

Oslo, Norway

The investigation of ambisonics spatialisation has intensified in recent years. Precise sound-field representation in terms of higher-order ambisonics (HOA), near-field encoding (for example Daniel 2003, Bertet et al. 2009) and optimised decoders (Adriaensen 2008, Berge et al. 2010) is implemented in readily available software. In the artistic milieu, composers and sound-artists are likewise exploring ambisonics, yet little has been theorised at an aesthetic level. Ambisonics is commonly understood mainly as a technical solution to the spatial limitations of stereo or multi-channel presentation, offering the following features: the presentation of real soundscapes derived from B-format Soundfield recordings, the accurate spatial synthesis of points and trajectories, a realisation of spatial forms conceptualised during the compositional process, and the transmission of composed spatial information to the listener without the need for performance interpretation.
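As a point of reference for the first-order encoding discussed here, the textbook B-format panning equations can be sketched in a few lines. This is a generic illustration using the classic Furse-Malham weighting, not the specific HOA or decoder tools cited above:

```python
import math

def encode_fo_bformat(sample, azimuth, elevation):
    """Encode a mono sample into first-order B-format (W, X, Y, Z).

    W is the omnidirectional component, scaled by 1/sqrt(2) in the
    Furse-Malham convention; X, Y and Z are the figure-of-eight
    components along the front, left and up axes. Angles in radians.
    """
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source directly in front (azimuth 0, elevation 0): the signal
# appears entirely on X, with nothing on Y or Z.
w, x, y, z = encode_fo_bformat(1.0, 0.0, 0.0)
```

Decoding for a concrete loudspeaker array, near-field compensation and the higher orders are where the cited tools take over; this sketch only shows the directional weighting that a panned point source receives.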

I propose that hybrid three-dimensional ambisonics spatialisation combining first-order and HOA, when embedded into a compositional methodology, transcends ambisonics as a technical tool. This paper discusses how ambisonics influences our perception, interpretation and physical relationship to sound archetypes, addressing the core ontology of sound and our relation to it, altering both acousmatic compositional structural procedures and the listening experience.

The ambisonics projection of spatially real archetypes such as size, shape, action and interaction, and a connection of source-space to listener-space (rather than simply the projection of a ‘known’ or mimetic space), I propose, allows the creative consciousness to immediately enter the sound world. In this way we avoid hearing the real “arena” (Emmerson 1998 and Smalley 2007), and directly engage the sound before its fleeting significance has faded, allowing the music to be heard. I further draw on both Emmerson’s observation and my own experience that acousmatic works in galleries and site-specific installations are more popular, postulating that the reason hinges on the fact that the contexts of these works address, rather than contrast with, the listener’s space.

In addressing this connection between space, sound and the listener, we can question whether an underlying obstacle in the reception of acousmatic music concerns the ‘nothing to see’ factor, or whether there exists a negative sense of disembodiment. Instead I suggest that reception can be hampered by an inability to enter acousmatic space-form as a tangible construct – a construct that for the composer, in the perfect listening position, is natural and apparent. In terms of Landy’s “something to hold on to factor” (2007), space takes a central position. For example, Smalley’s eloquently evocative personal depiction of the Orbieu soundscape (Smalley 2007) would conjure internal images for many a reader, yet to achieve such in sonic art, in a public concert, is another matter, by virtue of the complex coupling in sound-space on which the description relies.

The core of this paper presents the following ideas linking hybrid three-dimensional ambisonics to a spatial acousmatic ontology, combining straightforward technical explanations with their aesthetic implications:

(a) The contrast between different recording and synthesis methods is used as a way to explore space as yielding size and shape, rather than as a playground for point sources. Soundfield recordings (that for practical purposes are currently constrained to a first-order representation), which embody complex auditory scenes, are contrasted to HOA synthesis techniques used to control a specific essence of the gesture-space relationship. On a rudimentary level this combination of techniques allows a spatial investigation of Emmerson’s concepts of ‘local’ and ‘field’ (Emmerson 1998).

(b) How three-dimensional sound adjusts our understanding of acousmatic ‘existence’ in terms of the positions in which the listeners feel they are located, whether they are spectators or actors, their sense of participation or ‘otherness’, and introducing the idea of the social distance (Blesser and Salter 2007) rather than physical distance. In this connection I discuss the following:
(i) The way in which ambisonics allows us to understand the difference between listener envelopment and listener immersion, and that immersion may be controlled, rather than being simply an amorphous experience.
(ii) In a stereo projection, as listeners we may guess at material, method of sound production and motion, while an understanding of size and proximity is contingent on our understanding of the source. Real spatial cues allow our listening imagination to embrace an alternative flow of information, where an understanding of the source is clarified by size and distance relationships captured through near-field encoding.
(iii) How a clearer idea of size and distance may be extended, such that our listening imagination investigates the framework of the scene and allows a physical projection of the ‘self’ in relation to the music as part of the acousmatic ambisonics experience. The discussion draws on the idea of ‘personal agency’ (Casey 1976), or the listener’s ‘imagined’ involvement, in an investigation that rejects complaints of dehumanisation in acousmatic music and instead embraces the reality of human elements in studies as diverse as Emmerson (2007) and Godøy (2006).

The ideas presented are illustrated with specifically made ambisonics sound examples and extracts selected from the stereo and ambisonics repertoire. For practical presentation, sound examples will be transcoded from ambisonics to binaural using HRTFs.

References cited in this abstract:
Adriaensen, F. (2008). AmbDec. www.kokkinizita.net/linuxaudio/downloads/ambdec-manual.pdf
Barrett, N. (2010). Kernel Expansion: A Three-Dimensional Ambisonics Composition Addressing Connected Technical, Practical and Aesthetical Issues. In The Second International Symposium on Ambisonics and Spherical Acoustics. Paris.
Berge, S., Barrett, N., & Hammer, Ø. (2010). High Angular Resolution Planewave Expansion. In The Second International Symposium on Ambisonics and Spherical Acoustics. Paris.
Bertet, S., Daniel, J., Parizet, E., & Warusfel, O. (2009). Influence of Microphone and Loudspeaker Setup on Perceived Higher Order Reproduced Sound Field. In Proceedings of the Ambisonics Symposium. Graz.
Blesser, B., & Salter, L.-R. (2007). Spaces Speak, Are You Listening? MIT Press.
Casey, E. S. (1976). Imagining: A Phenomenological Study. Bloomington: Indiana University Press.
Daniel, J. (2003). Spatial Sound Encoding Including Near Field Effect: Introducing Distance Coding Filters and a Viable, New Ambisonic Format. In AES 23rd International Conference, Copenhagen, Denmark.
Emmerson, S. (1998). Aural landscape: musical space. Organised Sound 3(2): 135–140.
Emmerson, S. (2007). Living Electronic Music. Ashgate.
Godøy, R. I. (2009). Musical Gestures: Sound, Movement, and Meaning. Routledge.
Kocher, P., & Schacher, J. (2007/2009). MaxMSP ICST Objects. Institute for Computer Music and Sound Technology.
Landy, L. (2007). Understanding the Art of Sound Organization. MIT Press.
Smalley, D. (2007). Space-form and the acousmatic image. Organised Sound 12(1): 35–58.
Svensson, P., Johansen, T., & Stofringsdal, B. (2008). Ambitools. Centre for Quantifiable Quality of Service in Communication Systems, Norwegian University of Science and Technology.




Adam Basanta & Arne Eigenfeldt - Perceptual analysis of gesture interaction in acousmatic music

Adam Basanta & Arne Eigenfeldt

Simon Fraser University

Broadly speaking, the compositional aesthetics of the acousmatic genre, whether using exclusively abstract sound material or integrating recognizable sound identities, center on complex gestural interaction between various sound units. We intend to undertake a typological examination of gesture interaction in canonical works of acousmatic music, in order to determine the factors and variables that constitute “successful” gestural interaction. Once outlined, we envision these typological models being useful in their suggestion of an acousmatic grammar, which could be applied in composition and pedagogy (Blackburn 2009), the musicology of acousmatic music, musical interaction with computational systems (Young and Bown 2009), and real-time generative music systems (Eigenfeldt 2009).

We will constrain our analysis to several acclaimed works, namely Francis Dhomont’s Novars, Jonty Harrison’s …et ainsi de suite… and Denis Smalley’s Wind Chimes. A detailed analysis of Dhomont’s Novars, singled out for its relatively clear and developmental gestural interaction strategies, will be undertaken. Following a classification of gesture interaction typologies, the results will be subjected to comparative analysis in relation to the two remaining works. The comparative approach will reveal similarities and differences in typologies, strengthening the argument for the existence of an acousmatic grammar.

In terms of methodology, the typological analysis suggested above will utilize several accepted approaches from the discourse of acousmatic music. Primarily, we will use perceptual approaches to analysis using spectromorphological vocabulary (Smalley 1986, 1997). Perceptual identification of basic, meaningful sound units will be followed by a production of graphic scores.

Following analytical strategies exploring the intersections between spectromorphology and musical structure in given works (Fischman 1998, Young 2004), our analysis will concentrate on the relationships between perceptually unique sound units. Furthermore, we will specifically focus on the compositional function and potentially causal relationships between each sound unit and the overall phrase or morphological string (Smalley 1986), as well as its functional evolution within the section and overall form.

As typologies are observed and their variations are noted, a classification of the parametric variables of each gestural typology, as well as parametric variables of sound units within each typology, will be constructed. Attention will be paid to the examination of the variation rates, and their relation to previous gestures and the overall form. For the purposes of our research, we will assume a reduced-listening stance, though we will note the changes between abstract and referential sound units as a parametric variable.

Finally, the relationships between individual sound units comprising each gesture typology will be analyzed in terms of agency, following Young and Bown’s proposed terminology (2009). The proposed classification of interaction strategies (shadowing, mirroring, coupling, and negotiating) will prove useful in the adaptation of gestural typologies for live musical interaction with computational systems, one of the proposed applications of our analysis.

In addition to aiding the development of interactive real-time computational systems, we view our research as contributing to the discourse on the aesthetics of acousmatic music. The examination of the acousmatic grammar of gestural interaction will aid in the charting of aesthetic trends within the genre, contribute to the musicological research of acousmatic music, and potentially affect pedagogical approaches to the teaching of acousmatic composition. As suggested by Blackburn (2009), since spectromorphology can also be used as a prescriptive vocabulary for the creation of new works, the articulation of an acousmatic grammar may be of use to composers who seek to utilize it, avoid it, or develop it beyond its current state.

Finally, we envision this research as a contribution to the field of generative music. The typological classification of gestural interactions, coupled with the parametric analysis of said typologies in terms of their possibilities of variation, could act as a database for real-time generative music systems, as well as guide the development of database architectural design. We envision that the compilation and understanding of “successful” gestural interaction typologies will aid in the development of more intelligent and aesthetically cogent generative systems capable of generating music in the acousmatic genre.





Marc Battier - The formative years of Pierre Schaeffer between philosophy, theatre and poetry

Marc Battier

MINT-OMF Université Paris-Sorbonne

When Musique Concrète appeared in 1948, it was under the impulse of creating a new form of music. Pierre Schaeffer's quest was to compose a symphony in which an orchestra would converse with various noises, which would be prepared from and presented on recordings. He called it a "Symphony of Noises" (Symphonie de bruits). Predecessors to this musical endeavor can be traced back to Guillaume Apollinaire who, at the beginning of the 20th century, came up with the idea of a "Symphonie du Monde" (symphony of the world), in which sounds and noises captured all over the world would instantaneously be mixed in harmonious ways. Several years later, the French avant-garde composer Carol-Bérard imagined using phonograph recordings of natural and industrial sounds to create what he christened a "Symphonie de bruits". In the meantime, the Italian Futurists had tried to systematize the use of non-musical noises, and Varèse had devised a theory that noises were sounds being formed ("le bruit est un son en formation"), while extraneous noises were used in Erik Satie's "Parade" (1917) and George Antheil's "Ballet mécanique" (1924).

So, using noises to create music was not really a novel idea. What Schaeffer brought with him were ideas on sound informed by several other sources of inspiration. In this paper, I will extricate these from Schaeffer's own writings dating from several years before the advent of musique concrète. Three areas appear to have had a strong influence on him during the maturation period which led to the 1948 invention.

The one which had the strongest influence was his work in theatre and radio and the recording of voices, which led him to make acute remarks on the role of the microphone, a phenomenon which had not earlier escaped the attention of another theoretician, Rudolf Arnheim. This was to have a very deep impact on Schaeffer's formative years and, later on, on the development of Musique concrète.

Another source of inspiration was philosophy, with the work of Paul Valéry and his dichotomy between the conceived (the "poietic") and the perceived (the "esthesic"), Valéry having himself forged these terms roughly ten years before musique concrète (1937). Schaeffer does indeed refer to Valéry on various occasions. Valéry's remarks on musical sounds and noises might also have had an impact on the avid reader that was Schaeffer, probably as much as the separation of the conceived and the perceived, two conceptual tools which clearly make their way into the first pages of his 1952 journal (A la recherche d'une musique concrète), when he distinguishes his own work, based on perception and located in the control room, from what Pierre Henry was doing with the manipulation of actual sound objects in the recording studio.

Another influence, albeit one from which Schaeffer kept his distance, can be picked out in his writings: the role of poetry and, more specifically, André Breton's Surrealist movement. Not only was early musique concrète, over its first decade, often labeled "surrealist music", but more deeply, Schaeffer himself introduces the concept of interplay between sound objects in reference to the way Breton obtains collisions of meanings through surrealist collages of words. In the early years of musique concrète, these collages of noises can be seen as a musical transposition of the collage of words as practiced by the Surrealists. Interestingly enough, the term "Surrealism" had been coined by Apollinaire, the very one who invented the concept of a symphony of the world's noises.

This paper is in homage to Pierre Schaeffer as the 100th anniversary of his birth is being celebrated this year.




Alex Bennett - Injecting Life into Live Electroacoustic Music: A Presentation and Analysis of Creative Work

Alex Bennett

University of Auckland, New Zealand

The scope of ‘live’ electroacoustic music is extremely broad and ever evolving. An important area of research within this super-domain is the discipline of live electroacoustic performance using hybrid instruments and live electronics. Here, the term hybrid instrument refers to acoustic musical instruments with some form of electronic modification, whilst live electronics indicates the presence of real-time computations/manipulations. As a composer within this genre, I have become ever more concerned with preserving the fundamental aspects of a ‘live’ musical performance. There is also evidence of a common thread within ‘live’ electroacoustic music of New Zealand, where a strong focus on the human body and on the beauty and nuance of gesture as the agent for the sounding world prevails.

This study draws examples from a number of New Zealand composers who share a common interest in live performance with a focus on active body/instrument systems. The works investigated and discussed are Jasmine Chen’s Floral Myth (a piece for modified vibraphone and live electronics), John Cousins’ Bowed Peace (a performance work for the body, amplified bow and live electronics), as well as two of my latest works: Wheeze Box (a piece for modified button accordion with live electronics) and Stagpipes (consisting of custom-made bagpipes with live electronics and multi-channel sound diffusion). By viewing live performances (or video documentation) and interviewing the composers, I investigate the problems faced during the conception and design of the instruments, the acquisition of performance gestures (creating effective agencies), mapping strategies and calibration to ensure clear cause/effect relationships, and, of course, the tribulations of live sound diffusion (in both stereo and multi-channel formats). To illustrate possible ways of broaching these problems facing like-minded composers, I present various topical extracts (video examples) from live performances, coupled with supporting research from Simon Emmerson and Marcelo Wanderley. In doing so, I attempt to ‘boil’ the works down to their essence, to discover the elements that are successful (and perhaps not so) in a live electroacoustic music setting. The new knowledge uncovered by the study will not only benefit those composing and teaching within the specific field, but will also, from a musicological viewpoint, highlight a unique trend within the vibrant electroacoustic music community of New Zealand.




Andreas Bergsland - Arne Nordheim and early electroacoustic music in Norway

Andreas Bergsland

Norwegian University of Science and Technology (NTNU)

Two years ago, a set of short electronic and mixed compositions by the Norwegian composer Arne Nordheim, originally composed for several radio plays in the 1960s, was published on CD as “The Nordheim Tapes”. The tapes were discovered a few years earlier in the archives of the Norwegian Broadcasting Corporation, NRK, after having been assumed lost, something which has given rise to their description as the “Dead Sea Scrolls” of Norwegian electronic music. In this paper, a few of the earliest of these compositions will be examined and placed in the context of other early electroacoustic pieces by Nordheim as well as by his contemporaries. I will argue that even if the techniques are relatively simple, there are some elements that point to the more mature electroacoustic pieces composed by Nordheim in the Studio Eksperymentalne in Warsaw. These include the layered superimposition of speed-manipulated instrumental tones into relatively dense textures. In many ways these pieces prompt a re-evaluation of the role played by NRK in the development of Nordheim's skills as an electroacoustic composer, showing that it provided him with an important stepping stone and testbed for his work in Warsaw. I will also link the “Nordheim Tapes” to other early electroacoustic music in Norway. Among other things, I will claim that there are some similarities between these pieces and the earliest known electroacoustic composition by the much lesser-known Norwegian composer Gunnar Sønstevold. All in all, my paper will be an attempt to initiate research on early Norwegian electroacoustic music, a field of study for which no major dedicated publications exist to date.




Hannah Bosma - Algorithmic music in The Netherlands: ‘work’ or tool?

Hannah Bosma

Music Center the Netherlands

Within a theoretical context of the ontological issues of electroacoustic and algorithmic music, I will discuss the work of some composers of algorithmic electroacoustic music in The Netherlands, such as Gottfried Michael Koenig, Paul Berg, Remko Scha, Luc Houtkamp, René Uijlenhoet, Hans Timmermans, Jorrit Tamminga, Rozalie Hirs and Luc Döbereiner. Although these composers all create algorithmic systems in the form of computer software to compose their music, they have different aesthetic and compositional approaches. Through this diversity, there are also lines of influence and affinity; the younger generations are influenced by the older composers, who teach (or taught) at conservatories and universities and who made their software available (e.g. Koenig’s Projekt 1 and Berg’s AC Toolbox). Algorithms are used to create different kinds of music: score-based, live-electronic, improvisational and/or fixed-media sound-based music. Some create their software in general-purpose computer languages (such as C++), others use programming languages developed specifically for musical purposes (such as SuperCollider and Max/MSP). Composers also differ with respect to their ideas about the publication and dissemination of their algorithms. Is a compositional algorithmic software system a ‘work’ or a tool?

One of the differences in approach concerns the ideological role of the algorithmic aspects: whether there is an ideology of circumventing or of extending human intervention, and on what level; whether there is even an ideology of “inhuman” or transpersonal art; whether the composer wishes to work on a higher level than that of individual compositions, by devising a system; whether the algorithmic compositional aspects arise from practical needs, or from the wish not to use ready-made commercial software sounds, effects or processes; or from a practice of electroacoustic computer composition in which there are no clear boundaries between composing sound and composing structure, or between composing and computer programming.

Related is the issue of if, when and where the composer intervenes in the algorithmic production, and at what level compositional choices are made: when inventing the algorithmic structure, when fine-tuning the software, when choosing input, when making a selection from the output, by adjusting the results, by determining the context, interactively, and/or by “live coding”, etc. And what roles are there for the performers (if any)?

Then there are the kinds of algorithms that are used, and their aleatoric, stochastic or deterministic character. To what extent is there mapping of extra-musical processes or structures? And with what agent(s) does the software interact? Another aspect is the stage of the compositional process at which the algorithms are used: whether as a “sketchbook”, as a sound generator, as a tool, as a (co-)performer, as a compositional environment, or as a framework. The software may be conceived as a main project, as a set of handy tools, or as something in between. An algorithm may be specific to one composition, or may be used for several compositions, and perhaps be further and further developed and refined. Some composers create their software only for themselves; others are very interested in having others use their software.
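To make the stochastic end of this spectrum concrete, a minimal sketch of one common algorithmic device, a first-order Markov chain over pitches, is given below. This is a generic textbook illustration, not the system of any composer named above; the transition table and note choices are invented for the example:

```python
import random

# A hypothetical first-order Markov chain over MIDI note numbers:
# each pitch maps to its candidate successors, chosen at random.
TRANSITIONS = {
    60: [62, 64, 67],   # from C4, move to D4, E4 or G4
    62: [60, 64],
    64: [62, 65, 67],
    65: [64, 60],
    67: [64, 65, 60],
}

def generate_melody(start=60, length=16, seed=None):
    """Walk the transition table, returning a list of MIDI note numbers."""
    rng = random.Random(seed)   # seeding makes the aleatoric run repeatable
    melody = [start]
    for _ in range(length - 1):
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

melody = generate_melody(seed=1)
```

The same skeleton illustrates the chapter's distinctions: with a fixed seed the process is reproducible (deterministic in effect), without one it is aleatoric, and replacing the uniform `choice` with weighted probabilities makes it stochastic in the stricter sense.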

The composers have different opinions on the dissemination and publication of their software. Some are happy to offer it to all who are interested; others are reluctant to give away their “secrets” and consider the software a private affair; still others may publish the software, or write about it, only after they have finished using it themselves for a set of compositions.

Preservation of software is a particular problem: it can quickly become obsolete. Contrary to what the term “algorithmic composition” suggests, what is created is often not an independent algorithm but software in a specific programming language, with that language’s particular (im)possibilities and character. Translating such software into another version, or even into another programming language, involves a reconsideration and re-evaluation of the algorithmic composition system and of the algorithmic-compositional choices. This can be a huge undertaking for a composer, taking time that could have been used to create new software or new compositions; however, it is often rewarding and can bring further development and new insights and ideas. And it is indispensable for the survival of the compositional practice.

In discussing these issues with examples from the above-mentioned composers, I will also refer to an expert meeting on algorithmic music that I am organizing at Music Center the Netherlands on 20 March 2010, which includes presentations by most of the composers mentioned above.




Bruno Bossis - Une utilisation avancée de PureData dans l’étude pratique des figures stylistiques en électroacoustique

Bruno Bossis

Universités Paris Sorbonne-Paris IV et Rennes 2

Progressivement, l’enseignement des musiques électroacoustiques s’est développé au sein de différentes institutions. Les différents outils comme les microphones, les dispositifs de sonorisation, le Midi et les logiciels sont étudiés dans un but de maîtrise technique. Par ailleurs, l’étude de la musique électroacoustique comprend généralement l’histoire des studios et de la lutherie électronique, mais les résultats restent souvent peu satisfaisants. Les musiques des siècles précédents bénéficient d’une approche plus systématique : histoire, analyse, formation de l’oreille, écriture, pratique instrumentale, vocale, harmonie au clavier. Ce modèle a fait ses preuves sous différentes formes dans le monde entier. Une réflexion sur les objectifs fixés, les moyens pour y arriver et les possibilités d’évaluation des étudiants et de la formation s’impose donc. Pourquoi ne pas concevoir une formation à l’électroacoustique structurée sur des axes similaires à ceux correspondant à d’autres musiques ?

A l’université Rennes 2, une première étape a été d’instituer un enseignement de l’analyse des musiques électroacoustiques dans lequel aussi bien les partitions que les patches et les enregistrements sont étudiés en détail. En même temps une attention particulière est portée à la formation de l’oreille par une écoute précise. Il manquait cependant un équivalent d’une approche pratique relativement universelle comme celle de l’harmonisation et de l’accompagnement au clavier. Il fallait donc choisir un outil instrumental souple et capable de produire un grand nombre d’éléments stylistiques d’écriture, comme le piano pour les autres musiques. Pure Data, logiciel relativement aisé à comprendre, gratuit et multi-plateforme, convenait. L’objectif général était d’étudier l’histoire et le style des musiques électroacoustiques de manière pratique et vivante.

L’expérience menée avec ce type de logiciel, après une initiation indispensable (comme pour le piano), se construit sur l’histoire et la réalité des œuvres, et non sur le seul apprentissage des possibilités du logiciel. Au piano, mis au service de l’étude des styles, l’étudiant ne se contente pas de jouer des gammes, il déchiffre des partitions, harmonise, accompagne, ou improvise. Avec Pure Data, il n’apprend pas seulement la virtuosité technique, il travaille par appropriation sur des caractéristiques de l’écriture musicale ou des éléments stylistiques, que la réalisation soit en temps réel ou non.

Quelques pistes allant dans ce sens ont été expérimentées à Rennes. Par exemple, la juxtaposition et la superposition d’éléments sonores répartis dans des fichiers sons peut favoriser la compréhension des premières pièces de musique concrète. Il est également intéressant de reconstituer ainsi le type d’écriture de Timbres Durées d’Olivier Messiaen, ou des Etudes de bruit de Pierre Schaeffer. Un travail pratique sur des filtres, générateurs et modulateurs permet aux étudiants de mieux appréhender les caractéristiques stylistiques des œuvres composées au studio de Milan. En même temps que les étudiants réalisent ces exercices, ils apprennent à entendre et reconnaître à l’oreille des propriétés sonores essentielles.

Le structuralisme et tous les éléments stylistiques plus ou moins inspirés du post-sérialisme sont abordés par la pratique des relations mathématiques. Une application s’appuyant sur l’écriture de la synthèse additive dans Studie I de Karlheinz Stockhausen est très profitable si elle s’accompagne de l’étude du studio de Cologne en cours d’histoire et de celle de l’œuvre en cours d’analyse. L’objet random permet de mieux comprendre l’aléatoire, par exemple en s’inspirant du tri des sons dans Williams Mix de John Cage.

Les objets d’analyse FFT et de resynthèse peuvent illustrer le courant spectral des compositeurs de L’Itinéraire et favorisent une approche des œuvres de ce courant faisant appel à l’électronique.

Différents types d’écritures ou de manipulations du son sont aisément mis en œuvre dans des ateliers Pure Data, comme l’amplification thématique, la prolifération, la répétition en boucle, la construction d’une morphologie ADSR, la mise en évidence de régions spectrales, etc. Parmi les applications de ce dernier principe d’écriture-transformation, l’harmonisation d’un son par filtres en peigne accordés, et la formantisation s’avèrent des modèles de choix dans un tel cours.

D’autres aspects des œuvres, styles, compositeurs ou studios étudiés en histoire ou en analyse sont également explorés par l’intermédiaire d’exercices pratiques avec Pure Data. Les difficultés de la synchronisation entre des instrumentistes et un dispositif informatique peuvent être étudiées, par exemple en écrivant des patches impliquant la gestion d’événements successifs par des qlists.

Enfin, l’approche de la lutherie électronique donne lieu à des applications pratiques sur des modules simples : gestion des entrées/sorties, transposition, séquenceur, synthèse additive, filtre. Ces modules sont ensuite assemblés et permettent une meilleure compréhension de la dimension modulaire et connexionniste de ce type de lutherie. Une plus grande maîtrise du Midi, d’OSC et d’interfaces gestuelles utilisés par les compositeurs passe par l’étude simultanée de la théorie, de l’histoire, de l’analyse d’œuvres et de la pratique. Cette approche peut ainsi être reliée à une composition ou une improvisation collectives destinées à être diffusées ; la compréhension et la création sont intimement liées.

L’ensemble de ce cours s’inscrit actuellement dans une progression établie sur une année, mais ce cheminement pourrait s’étendre à un cursus complet. Les travaux proposés ne donnent pas seulement à l’étudiant la maîtrise d’un logiciel, mais favorisent la compréhension plus générale de l’utilité des objets et des concepts abordés. L’étudiant aborde de manière pratique l’analyse et l’écoute de procédés d’écriture en électroacoustique. Il perçoit ainsi plus profondément les multiples courants successifs ou simultanés qui ont parcouru l’histoire de ces musiques jusqu’à aujourd’hui.

An advanced use of PureData in the practical study of styles in electroacoustic music

Over the past fifteen years, the teaching of electroacoustic music has gradually developed in different institutions. Various tools such as microphones, sound devices, the MIDI format and audio software are taught with the aim of technical mastery. In addition, learning electroacoustic music usually includes the history of the more important studios. However, the results are often unsatisfactory and incomplete.

The music of previous centuries benefits from a more systematic approach comprising history, analysis, ear training, writing practice, instrumental and vocal practice, and accompaniment at the piano. This model has proven itself worldwide in various forms. The general objective of our project was to apply this more systematic and coherent approach to the teaching of electroacoustic music in order to obtain better results. Why not design studies in electroacoustic music as we do for other music?

A first step was to establish a curriculum for electroacoustic music analysis. Toward this goal, scores, patches and recordings were studied in detail. At the same time, particular attention was paid to the training of the ear. However, an approach similar to the relatively universal practice of harmonization and accompaniment at the piano was lacking. What was needed was a flexible instrumental tool capable of producing a large number of stylistic elements of writing, like the keyboard for other music. At Rennes 2 University, PureData, the well-known free software, relatively easy to understand and multi-platform, was finally chosen.

After a necessary introduction (as for the piano), the course on and with PureData is built on the history and styles of electroacoustic music, not only on the features of the software. At the piano, a student studying the styles of earlier music usually goes far beyond scales: he reads scores, harmonizes, accompanies and improvises. In the same way, with PureData a student works practically on stylistic elements.

For example, exercises on the juxtaposition and layering of sound elements with triggered short sound files aim to help the student understand how the early pieces of musique concrète work. In this context, rebuilding a fragment of Timbres Durées by Olivier Messiaen, or of the Etudes de bruit by Pierre Schaeffer, is useful. Beyond that, practical work on filters, generators and modulators allows a better understanding of the stylistic characteristics of works composed at the Milan studio. By designing and performing these exercises, a student learns to hear and recognize the main properties of electroacoustic sounds and textures. He also studies the structural details of this kind of music in relation to its historical context. Structuralism, and all the stylistic elements more or less inspired by post-serialism, may be approached in PureData by practicing mathematical relationships. For example, exercises on additive synthesis in a partial rebuilding of Studie I by Karlheinz Stockhausen are highly beneficial if done together with a study of the history of the Köln studio. In another context, learning to randomize all kinds of categorized sounds is a lively way of understanding Williams Mix by John Cage. In the same way, PureData objects providing FFT analysis and resynthesis can illustrate the spectral trend of writing.
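At its core, the additive-synthesis exercise reduces to one operation: summing weighted sine partials. The following Python sketch is a generic illustration of that operation, not a reconstruction of the actual sine-tone mixtures of Studie I; the partial list in the usage example is arbitrary:

```python
import math

def additive(partials, dur, fs=44100):
    # Sum of sine partials, each given as (frequency_hz, amplitude):
    # the elementary operation behind sine-tone studio pieces.
    n = int(dur * fs)
    return [sum(a * math.sin(2 * math.pi * f * t / fs) for f, a in partials)
            for t in range(n)]
```

For instance, `additive([(440.0, 0.5), (880.0, 0.25)], 1.0)` yields one second of a two-partial mixture that a student could then vary, partial by partial, by ear.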

More broadly, different methods of writing and different manipulations of sound may easily be explored in PureData workshops: thematic amplification, proliferation, looping, the construction of an ADSR morphology, the highlighting of spectral regions, and so on. Among applications of this principle of writing-as-processing, the harmonization of a sound by tuned comb filters and formantisation are other good models for such a course. Other aspects of the works, styles, trends or studios usually covered in a history or analysis course can also be explored through practical exercises with PureData. Thus, synchronization between instrumentalists and a computer can be studied by writing patches involving the control of successive events with a qlist.
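The tuned comb filter behind the harmonization exercise has a very compact definition. This Python sketch is a hypothetical minimal version of the idea, not the PureData patch used in the course; the sample rate and feedback value in the example are arbitrary:

```python
def tuned_comb(x, fs, f0, g=0.9):
    # Feedback comb filter y[n] = x[n] + g * y[n - D]:
    # the delay D (in samples) tunes the resonance to roughly f0 = fs / D,
    # so filtering a noisy sound "harmonizes" it onto that pitch.
    D = max(1, round(fs / f0))
    y = [0.0] * len(x)
    for n in range(len(x)):
        y[n] = x[n] + (g * y[n - D] if n >= D else 0.0)
    return y
```

Feeding an impulse through the filter makes the principle audible at a glance: an echo train recurring every D samples, decaying by the feedback factor g each time.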

Finally, electronic instrument-making can be studied accurately by designing modules, each built for a single practical purpose: handling input/output, transposing, sequencing, additive synthesis and filtering. These modules are subsequently assembled, giving a better understanding of the modular, connectionist dimension of this type of instrument-making. Furthermore, a better command of MIDI, OSC and the gestural interfaces used by composers comes from the simultaneous study of theory, history, analysis and practice. This global approach can feed into a collective composition or improvisation workshop whose results are intended for public diffusion. Understanding and creativity are intimately linked.

This course is currently designed to be taught over two years, though it could be extended after assessment. The course described above is an appropriate and useful way of improving students’ ability to analyze and understand the processes involved in electroacoustic music. Students come to understand more deeply the successive or simultaneous trends that have run through the history of this music up to the present day.




William Brunson - A State of Flux: From Curriculum to Course

William Brunson

Royal College of Music in Stockholm

Building a curriculum for electroacoustic music is a daunting, multi-faceted enterprise. A curriculum must address both the breadth and depth of the subject area in order to be credible, useful and viable in the long term. In this sense, the protean nature of electroacoustic music poses continual and significant challenges. The convergence of digital media and the transdisciplinary qualities inherent to the field serve, moreover, to widen the scope of the discipline to encompass other areas which are in and of themselves both deep and wide. Indeed, electroacoustic music and the digital arts in general are in such a state of flux that, like the elusive electron in quantum theory, any attempt to pin them down is futile: the more precisely one establishes the electron's position, the more uncertain its momentum becomes; conversely, knowing its momentum, one cannot pinpoint its location.

A general breakdown of curriculum theory reveals four contrasting, but not exclusive, approaches (Smith: 1996, 2000):
1. curriculum as a body of transmitted knowledge;
2. curriculum as a product;
3. curriculum as process;
4. curriculum as praxis.

In the proposed paper, the author intends to describe the on-going process of defining and refining a curriculum for electroacoustic music composition at The Royal College of Music in Stockholm (KMH). Following an introduction to the overriding curricular ideas, their practical implementation in courses will be presented. Both the former and latter will be considered in light of contrasting approaches to curriculum.

The primary and ultimate goal of the composition curriculum at KMH is creative and independent thinking. Not surprisingly, the panoply of courses is characterized as creative, theoretical/technical or historical/analytical. With regard to the aforementioned approaches, each implies different learning goals and procedures which, I will argue, exist concurrently within the composition program courses at KMH.

While the practice of electroacoustic music is, by definition, dependent upon music technology, the primary focus of the program is creative composition. No longer controversial, the computer is viewed as a unifying factor. Further, a crossover mentality between the electroacoustic, instrumental and jazz composition programs has long been actively encouraged; the same applies to the programs for Film Music Composition and Intermedia. Still, thorough mastery of music technology (hardware and software, as well as the theory and practice of audio production) is essential to the artistic process and the resultant work. A balance between the artistic and technical dimensions of the curriculum must be maintained. Lastly, the need to integrate or harmonize the specific theoretical concerns of electroacoustic music with traditional musical praxis has led to an investigation of alternative approaches to music theory. Inspired by the Bauhaus retrospective at the Museum of Modern Art in New York, the author has dedicated a current seminar course to investigating the viability of applying ideas and techniques developed in the Bauhaus foundation course to theoretical and practical aspects of electroacoustic music composition. Proceeding from Wassily Kandinsky's classic book Point and Line to Plane, the course homes in on the restricted use of sonic materials, first employing points and lines to create gestures and textures, then incorporating the concepts of planes, color and opacity. Particular reference is made to selected paintings by Kandinsky and Paul Klee.

While this single course is clearly not an entire curriculum, it does suggest an approach which may in time help to re-define a significant portion of the basic courses in the curriculum.



John Coulter - Visual Redundancy in Electroacoustic Music With Moving Images

John Coulter

University of Auckland

This study concerns the language of electroacoustic music with moving images. In a recent publication (Coulter 2010) I put forward a system for the comprehensive classification of audiovisual media pairs. The model, presented in the form of a cube, describes sonic/visual materials as the intersection of three dynamic continua: referential-to-abstract audio, referential-to-abstract video, and heterogeneous-to-homogeneous attention. To illustrate the extremities of the model, eight examples of creative work (one from each corner of the cube) were made available. The findings of the study suggest that our natural tendency is not to divide our attention, but to integrate even the most heterogeneous of textures. Integration occurs when congruous relationships are perceived to exist between audio and video materials. Two types of relationships were identified: ‘concomitant’ and ‘isomorphic’. Concomitant relationships rely on the highlighting and masking that occur when two schemas are simultaneously activated (overlaid), while isomorphic relationships rely on the congruence of both meaning and physical parameters, leading to the activation of solitary schemas. Detailed methods of creating both concomitant and isomorphic relationships were also presented in the article.

In a recent study, tests were carried out concerning the nature of transition between the audiovisual and acousmatic modes. The previous study suggested that the experience of transition could be described as a ‘tolerant switch’ - that various data-poor visual images (such as a black screen) were not as destructive to the acousmatic mode of listening as data-rich moving images. The study has now been expanded to include a range of still and moving image types all of which rely (at least in part) on the condition of ‘visual redundancy’. The category includes still images, images that change at an imperceptible rate, and repetitious moving images. Examples of creative work are used to illustrate the points put forward which include the proposal of an ‘inter-modal zone’ (or ‘workaround’) that exists between the acousmatic and the audiovisual modes of listening/seeing.

The findings of the study offer an extension to the vocabulary for considering the functionality of audiovisual materials. Although there are a number of other established constructs and borrowed terms available for discussing electroacoustic music with moving images (for example, those found in acousmatic electroacoustic music and filmmaking), there are few that address the issue of ‘language’ so directly.



Ricardo Dal Farra - About teaching electroacoustic music and new media

Ricardo Dal Farra

Concordia University Montreal and Electronic Arts Experimenting and Research Centre (CEIArtE-UNTreF) Buenos Aires

Teaching electroacoustic music and "new" media for over 30 years has been, and still is, a wonderful experience. From private instruction in my personal lab to exploratory sound experiences in the jungle with teen students, from advanced training for music professors to high school projects blending music with the sciences and an innovative approach to electronic technologies, from new media and electroacoustic music university curriculum development to defining standards and designing programs for teaching and learning multimedia at a national level, it has been one challenge after another (or sometimes several simultaneously), and a rewarding process.

There is more than one way of doing things, and I do not believe there is a single perfect way to teach music. The same applies to electroacoustic music, and we meet at these international conferences to try to understand what has been happening, where we are, and what the next steps in our field are.

In 1985, a group of young Argentinean girls and boys living in the north-east of the country, near the border with Brazil, and knowing mostly local folk music and a little rock and pop, were immersed for several days in a non-stop musical tour covering miles of roads and different landscapes and environments: an unforgettable sound and musical experience. In just a few days, they went from shocked surprise to the deepest interest in the new world they were discovering and creating with a simple, small, monophonic analog sound synthesizer. One day, after walking, partly in the rain, across a desert island in the Paraná river, they even discovered that the music was inside them, and that it was "organized sound". All of that happened in no more than four days. Could we imagine the possibilities of working not just a few days but several weeks or months, approaching music, new music, electroacoustic music, working towards free and open-minded thinking? Rules and frames and structures could perhaps be considered differently, or at least contemporary musical creation could be perceived in a different way.

In 1992, a well-known technical high school in Buenos Aires decided to give special attention to music and to create a program that would work in close connection with the study of sciences and new technologies. This fit the students so naturally that they moved from their "sound-oriented" biology classes (focusing on the human hearing system and our phonatory capabilities) to mathematics classes (where logarithms were used to explain musical scales) to history classes (moving from a broad view of human history to specific links with music in different periods) to the music lab (with many specially designed workstations where students learned, discussed, analyzed and created), and more. These were also teens, but with a completely different background and cultural environment from the "musical-tour" student group mentioned above. In a program only three years long, they became a mix of multidisciplinary creators, basic music performers and well-skilled users of advanced music technologies. They also developed a remarkable understanding of both hard and soft sciences.

In 1996, the Multimedia Communication national program began to be developed at the National Ministry of Education of Argentina, to be applied in technical-vocational schools all around the country as part of the reform of the educational system. The program, with over 1,500 hours of specific study over three years and a competencies-based modular structure using several intertwined streams with multiple deliveries, created basic teaching and learning standards for approaching image synthesis, video, new media, and sound and music production and creation, again with a level of freedom in creativity, use of resources and knowledge that had been unthinkable only a short time before.

In 2000, the National University of Tres de Febrero (UNTreF) began offering an intensive five-year Electronic Arts program with two structural streams: one focusing on the electronic image, the other on sound production and music creation using new technologies. Electroacoustic music studies have a major role in this program, and the whole field is clearly considered by students and faculty to be part of the media arts, something that does not always happen in other institutions around the world. Although this public university is located far from downtown Buenos Aires, in a lower-middle to working-class area, since its inception the program has had a very positive impact on students from the surrounding area, on the development of alternative projects in other educational institutions, and even on the economy of the district. The National University of Tres de Febrero also created the Electronic Arts Experimenting and Research Centre (CEIArtE), whose focus is on local and international media arts research, creation and dissemination projects (including electroacoustic music). This Centre is a pioneer in supporting media arts research in the region and allows the participation and integration of advanced students and graduates, who without this academic space would lose their links to the university environment.

Yes, there are many challenges, but every step counts, and successes as well as failures show us the possible paths to follow, or the need to develop new ones. Electroacoustic music and new media education has been key in helping to develop a new generation of creative people in some places: people who had the opportunity to choose their own way because they were given, at an early stage, the chance to learn that there is more than one way of doing things, more than one way to approach music and electroacoustic music, and that there is a fertile field where the arts, sciences and new technologies meet and enrich each other to develop new knowledge.

The "About teaching electroacoustic music and new media" presentation will review a number of educational programs of electroacoustic music and new media, with proven results, focusing on their process for curriculum design, the teaching/learning strategies, the pedagogical tools as well as the context and circumstances for each case.





Richard Dudas - An Electronic Music Curriculum for the 21st Century

Richard Dudas

Hanyang University School of Music

As little as a quarter of a century ago, establishing a comprehensive electronic and computer music curriculum was a fairly straightforward task, tied to the equipment and proprietary software available in a given studio. However, the rapid development of hardware and software over the past few decades, and the proliferation of new and readily available tools in the electronic and computer music community, often necessitate the design of leaner, more focused curricula for computer music programs in an academic setting. Whereas some schools prefer to focus on a specific piece of software as a focal point for teaching the ideas and skills of electronic and computer music within their degree programs, others try to provide a general (and often necessarily superficial) broad-spectrum overview of the trends and techniques available to the electronic musician. Although both schools of thought have their merits depending on the pedagogical goals of a particular institution, both face the fact that within a two-, three- or four-year program it is often difficult to cover everything with enough breadth or depth to properly prepare students for the educational path ahead of them.

At Hanyang University we have tried to strike a balance between teaching some well-worn software and exposing our graduate students to a broader spectrum of electronic music tools during the two years of coursework in our graduate program for computer music composition. We have found that this adequately prepares them for further study, but that their knowledge of the discipline can nonetheless remain superficial. F. Richard Moore points out in his Elements of Computer Music that computer music is an amalgam of five different fields of study. With that in mind, perhaps students should realize that, even though technology has simplified our lives in many respects, for an electronic musician the use of technology in fact complicates their work fivefold (or at least twofold or threefold). Composers of electronic music need to be more than just composers: they must also wear the hats of technician, sound designer, performer and acoustician.

Due to the recent proliferation of introductory undergraduate electronic music courses at universities in Korea (of varying scope), most students interested in entering our computer music program arrive with some basic knowledge of the tools of the field. Nonetheless, we have found a general lack of education about the cultural, historical and purely musical perspectives of electronic and computer music, not to mention 20th- and 21st-century music in general. Although many genres of music today, in the realms of both art music and practical music, share the same technological tools, the aesthetics of each genre can differ substantially. We have therefore realized that an important pedagogical strategy is the organization of courses and workshops that teach both the aesthetic and the historical background of the discipline, in order to give students a strong context for its more technical aspects.

Focusing on teaching the techniques and concepts of electronic and computer music also provides a solution. These can be applied to a wide array of tools, in the institutional studio as well as the home studio. In contrast with aesthetics and principles, techniques and concepts are not exclusive to one musical genre, although they may be tailored to suit a particular musical viewpoint. Nevertheless, the tools themselves should be of secondary importance and ideally should not be perceptible in the final artistic product. With this in mind, there must be a pedagogical balance between practical knowledge and artistic mastery, especially during the creative working process, even if the tools themselves remain invisible to the audience.

Self-motivation at the graduate level is one particular obstacle for many students studying at universities in East Asia (as well as for many students at smaller universities in the U.S. and Europe). In Asia this is primarily the result of a rigorous secondary education system, which has favored teaching students directly over educating students in how to teach themselves. For studies in electronic and computer music, the consequence is that students will not instinctively attempt to learn material autodidactically, above and beyond what is presented to them in courses. This, coupled with the fact that graduate courses in electronic music must out of necessity teach reduced content to fit within the two or three years of coursework dictated by the university, results in students who are often ill-prepared to continue their graduate education in electronic and computer music, either domestically or abroad.

Finally, although we have elucidated several problematic areas in electronic and computer music curriculum design, and offered possible solutions for some of them, one continuing problem is the increasingly rapid development of technology. How do we "keep up" with technology? Perhaps we need to re-evaluate and re-invent our curricula every few years, instead of trying to develop a permanent, comprehensive course curriculum. Although doing so could tend to cater to musical and technological fads, it could also help keep students' knowledge current and leave them well prepared for the path ahead of them.


Christian Eloy - Enseigner la composition de la musique électroacoustique, pas du tout une ligne droite …

Christian Eloy

Université Bordeaux 1

La première partie de cette communication retrace mon propre parcours musical, une sorte d’aventure personnelle, sonore et pédagogique, faite de lignes droites et de virages, de confrontations entre passé et futur, de certitudes et de doutes alternés, à l’image de la création et de la pédagogie. Ma description de certaines époques révolues ne veut en aucun cas participer d’une nostalgie ou de regrets, mais seulement servir à éclairer mon engagement d’une vingtaine d’années dans l’enseignement de la composition de la musique électroacoustique, et ainsi comprendre comment il s’est nourri. On verra aussi comment les fondements de cet enseignement furent posés dans ces années 60 par les fortes personnalités que furent Pierre Schaeffer, communicateur et pédagogue né, mais aussi Guy Reibel et d’autres encore qui ont cultivé ce goût et ce talent à communiquer et à enseigner, confondant parfois les deux.

Ayant vécu le passage entre l’analogique et le numérique, à la fois comme compositeur et comme pédagogue, on peut affirmer que cette période n’a pas été une véritable “révolution“ dans le domaine de l’enseignement et de la transmission de cette musique. J’ai pu vérifier par de nombreux témoignages qu’un certain nombre des “exercices“ et des méthodes qui sont encore pratiqués aujourd’hui, venaient directement de la décennie 1960/70 : de Pierre Schaeffer lui même et de ses proches collaborateurs, ainsi que des premières places fortes où cette musique fut enseignée, avec ses “gardiens du temple“, en particulier le CNSMD de Paris qui fut un creuset très riche de compositeurs et d’enseignants qui ont préservé et transmis cet héritage exceptionnel. C’est plus une approche et une démarche personnelle, qu’une véritable méthode pédagogique, qui seront transmises dans ces lieux et de cette façon.

La deuxième partie de cette communication vise à montrer la synthèse, entre ces fondements ou ces bases mises en place avec les moyens analogiques et la nécessité d’accompagner l’arrivée des “nouvelles technologies“ numériques ; cette synthèse que les enseignants ont dû construire (parfois de façon empirique) au fur et à mesure de ces évolutions technologiques. Elle démontre aussi comment la démarche expérimentale Schaefferienne est restée la base la plus solide et la plus pertinente de cette pédagogie de la composition de musique électroacoustique, même lorsque les studios et les outils sont radicalement différents 40 ans plus tard. Pour moi, la musique acousmatique demeure une forme d’écriture extrêmement exigeante. Ainsi, l’écoute critique, avec le groupe de la classe, les pairs en quelque sorte, reste dans nos cours le moment fort dans la pédagogie de la composition et certainement le moment le plus stimulant dans l’acte et le processus créatif de la composition musicale pour bon nombre de jeunes compositeurs. Bien sûr qu’il faut toujours faire ses gammes dans la musique électroacoustique ! Rien n’a changé à ce niveau ! Il est toujours extrêmement touchant de suivre l’évolution des travaux d’un étudiant en composition, on ne sait toujours pas dire quand apparaîtra la “maturité musicale“, quand ces notions subtiles d’unité, d’équilibre, de forme, de musicalité, vont se révéler et se catalyser chez un jeune compositeur.
Pourtant, la personnalité de sa pâte sonore, l’unicité de son discours, l’originalité de ses matériaux, la singularité de ses thématiques et de ses univers, émergent relativement vite, ils sont latents et doivent être décelés et valorisés le plus tôt possible dans son cursus et sa progression, c’est sûrement l’une des clés de cette “pédagogie“ quand même pas tout à fait comme les autres, reconnaissons-le ! Faire cette musique concrètement, passer par le “faire“, travailler en boucle entre écoute et production, restent nos atouts principaux dans la pédagogie de la musique électroacoustique, que l’on manie des ciseaux avec une bande magnétique ou une souris devant un écran, ne change finalement pas grand-chose.

Teaching electro-acoustic music composition, not exactly a straight line...

The first part of this paper traces my own musical journey, a kind of personal adventure, of sounds and education, made of straight lines and turns, confrontations between the past and the future, alternating certainties and doubts, reflecting creation and education. My description of certain bygone times has no intention of recounting any nostalgia or regrets, but only of illustrating my commitment to teaching electro-acoustic music for twenty years or so, leading to an understanding of how this commitment has flourished. We shall also see how the foundations of this education were laid down in the sixties by the strong personalities that were Pierre Schaeffer, a born communicator and educationalist, and also Guy Reibel and others who cultivated this taste and this talent to communicate and educate, sometimes merging the two. Having lived through the change from analog to digital, both as a composer and as an educationalist, it is possible to assert that this period was not a true revolution in the domain of teaching and promulgating this music.

Many accounts have allowed me to confirm that a certain number of the practices and methods still in use today came straight from the 1960-1970 decade: from Pierre Schaeffer himself and from his close collaborators, and also from the first strongholds where this music was taught, with its "temple guards", especially the Conservatory in Paris, which was a very rich melting pot of composers and teachers who have preserved and promulgated this exceptional heritage. It is more a personal approach and initiative than a veritable educational method that was communicated in these places and in this manner.

The second part of this paper aims to reveal the synthesis between these foundations, created with analog methods, and the need to accompany the arrival of the new digital technologies; a synthesis that teachers were obliged to build (sometimes empirically) as these technological developments came along. It also demonstrates how Schaeffer's experimental approach is still the most solid and pertinent foundation of education in electro-acoustic musical composition, even though the studios and tools are radically different 40 years on. For me, acousmatic music remains an extremely demanding way of writing and composition. Thus, critical listening with the class group, one's peers in a way, remains in our teaching the great moment of education in composition, and certainly the most exciting moment in the act and creative process of musical composition for a good number of young composers. Of course, you still have to keep practicing your scales (faire ses gammes, said P. Schaeffer) in electro-acoustic music! Nothing has changed in that respect! It is always very touching to follow the development of a student's composition work; it is not always possible to know when musical maturity will appear, when all the subtle notions of unity, balance, form and musicality will be revealed in our young composers. But the personality of their acoustic colours, the uniqueness of their discourse, the originality of their material, the singularity of their themes and of their universe all emerge relatively quickly; these are latent and must be detected and developed as early as possible during their studies and progression; this is surely one of the keys to this form of education, which is not quite like the others, we have to admit!
To make this music concretely, to go through the process of "making", to work in a loop between listening and production: all these remain our main assets in electro-acoustic music education; whether you take a pair of scissors to magnetic tape or use a mouse in front of a screen does not really change very much.



Steve Everett - Auditory Roughness in Contemporary East Asian Music

Steve Everett

Emory University

This paper proposes an analytic method for contemporary East Asian music that examines the degree of timbral auditory roughness present and attempts to contextualize the data within an ecological framework for understanding musical perception. The principal goals of this analytic approach are to ascertain the degree of similarity of timbral formations to those found in traditional East Asian art forms and to investigate whether these relationships are relevant in establishing the perceptual conditions for the transmission of musical meaning.

Comparisons will be made of the spectral analyses of select timbres and linear gestures in traditional Chinese and Japanese musical forms with those found in electro-acoustic and acoustic compositions by select composers including Chen Yuanlin, Tan Dun, Joji Yuasa, Maki Ishii, Yuji Takahashi, and Toshio Mayuzumi. Spectral analysis data was collected using IRCAM's AudioSculpt analysis program, data comparisons were performed in the OpenMusic visual programming environment, and roughness calculations were determined using the SRA web application developed by Pantelis N. Vassilakis.

Timbre is a primary structuring element in music and one of the most important features of auditory events. The auditory sensation of roughness can be described as a timbral attribute based on the sensation of rapid fluctuations in the amplitude envelope. It is involved in several aspects of sound evaluation.

In this analytic approach, a high level of auditory roughness is defined as possessing:

- Spectrum: high-ratio partials, auditory interference, highly unstable;
- Envelope: significant changes in timbre within the envelope;
- A degree of sonic information in between pitch areas;
- Pitch centers not well defined;
- High-ratio scale relationships.
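Roughness measures of this kind can be approximated in code. The sketch below follows the sine-pair roughness model published by Vassilakis, which underlies the SRA web application mentioned above; the coefficients are quoted from that model as commonly reproduced, so treat this as an illustrative approximation rather than the canonical SRA implementation.

```python
import math

def pair_roughness(f1, a1, f2, a2):
    """Estimated roughness of a pair of sine components (frequency in Hz,
    linear amplitude), after the Vassilakis roughness model."""
    fmin, fmax = min(f1, f2), max(f1, f2)
    amin, amax = min(a1, a2), max(a1, a2)
    if amin + amax == 0:
        return 0.0
    X = amin * amax                        # overall intensity term
    Y = 2.0 * amin / (amin + amax)         # degree of amplitude fluctuation
    s = 0.24 / (0.0207 * fmin + 18.96)     # frequency-dependent scaling
    df = fmax - fmin
    Z = math.exp(-3.5 * s * df) - math.exp(-5.75 * s * df)
    return (X ** 0.1) * 0.5 * (Y ** 3.11) * Z

def spectrum_roughness(partials):
    """Total roughness of a spectrum: sum over all pairs of (freq, amp)."""
    total = 0.0
    for i in range(len(partials)):
        for j in range(i + 1, len(partials)):
            total += pair_roughness(*partials[i], *partials[j])
    return total
```

Applied to the partials extracted from a spectral analysis, such pairwise sums give a single roughness figure per analysis frame, which is the kind of value compared across traditional and electro-acoustic excerpts in this study.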

The examination of timbre as an important element in the perceptual understanding of East Asian compositions has some support from Chinese and Japanese histories. In early Chinese texts, the term sheng refers to "sound" in general, but yin implies a "musical tone" that is defined in terms of its two primary acoustic properties, i.e. pitch and timbre. This last distinction is of great importance in the Chinese concept of music, as it appears that throughout much of Chinese history these two properties of sound were recognized as of equal value in music.

This analytic approach aims to motivate thinking about modern East Asian art compositions in terms of their timbral properties and perceived cultural significance and to suggest a broader ecological approach in the analysis of compositional and perceptual frameworks. A central principle of an ecological approach is that perception must be understood as a relationship between environmentally available information and the capacities, listening history, and interests of both composer and perceiver. An ecological approach emphasizes the structure of the musical environment and regards perception as the collector of that information. Perceptual systems become attuned to the musical environment through continual exposure as the consequence of perceptual learning within one's lifetime.

Timbral auditory roughness is a significant sonic attribute in forming these perceptual systems. Identifying the types of roughness in traditional musical forms is relevant to determining both composer choices and listener perceptions in contemporary compositions. This study concludes that a comparison of levels of timbral auditory roughness between traditional and electro-acoustic East Asian compositions provides an important perceptual foundation for ascertaining musical meaning. This approach is also relevant for composers seeking to more fully realize the many cultural and perceptual issues involved in a compositional process that is positioned within multiple musical traditions.




F

Alireza Farhang - Modélisation des éléments musicaux des cultures extra-européennes chez les compositeurs de la musique contemporaine. Le cas de musique mixte

Alireza Farhang

Université Paris Sorbonne ­Paris IV

Les compositeurs se sont longtemps penchés sur l’intégration des éléments de musique provenant des cultures extra-européennes dans la musique occidentale. Ce souci de préoccupation devient alors, de Mozart à Bartók sans oublier Berio, influence, référence et inspiration.

De la richesse sonore de la musique des pays de l’Asie au mysticisme intriguant de la musique persane, la magie des cultures extra-européennes a inspiré de nombreux compositeurs de musique contemporaine. Les possibilités que les nouvelles technologies et l’informatique musicale procurent mènent le compositeur à adopter des éléments musicaux de ces cultures. La modélisation à partir de ces éléments, ainsi que la présence de l’interprète sur la scène, ont certainement des conséquences esthétique et stylistiques importantes.

Cette communication tente à faire une étude comparée sur les approches compositionnelles des compositeurs vis-à-vis de la pratique du modèle comme le fondement de l’écriture musicale. Afin de donner un aperçu assez riche, différents exemples seront soigneusement choisis et présentés.

Après une introduction à la problématique de la modélisation, les éléments musicaux dans leurs états initiaux, les techniques et l’environnement informatiques que le compositeur utilise pour traiter son matériau compositionnel et le résultat finale seront les sujets de notre analyse sans oublier son aspect esthétique et poétique dans des œuvres mixtes de Tristan Murail et de Christopher Dobrian.

La gestion du temps, la forme, les lignes mélodiques, le timbre sont donc parmi les éléments les plus importants qui engagent le compositeur d’une manière décisive puisqu’ils sont traités hors de leur propre contexte musical et culturel.

Modeling based on musical elements from non-European cultures in the mixed works of Tristan Murail and Christopher Dobrian

For a long time composers have been interested in the integration of musical elements from non-European cultures into Western music. Such inspiration, influences and references can be heard in works from Mozart to Bartók, and of course, Berio.

From the rich sound of music from Asia, to the intriguing mysticism of Persian music, the magic of non-Western cultures has inspired many contemporary composers. The possibilities provided by new technologies and the use of the computer lead composers to adopt the musical elements from these cultures. Modeling based on these elements, as well as the presence of the performer on the stage certainly has important aesthetic and stylistic consequences on many works.

This paper attempts a comparative study of the compositional approaches of composers vis-à-vis the use of the model as the foundation of musical composition. In order to provide a sufficiently rich outline, various examples will be carefully chosen and presented.

After an introduction to the problem of modeling, the musical elements in their primary states, the techniques and computer environments that the composer uses to treat the compositional material, and the final result will be the subjects of our analysis, along with their aesthetic and poetic aspects in the mixed works of Tristan Murail and Christopher Dobrian.

Since the management of time, form, melodic lines and timbre are treated outside their own musical and cultural context, they are among the most important elements that engage the composer in a decisive way.




Ken Fields - Experiments in Telemusic

Ken Fields

Syneme gave three network concerts in 2009-10: Musicacoustica 2009, the Happening Festival 2010 and the Intermedia Festival 2010 in Indianapolis. The tools of network performance comprise a suite of open-source and off-the-shelf solutions, showing various strengths and weaknesses that we are addressing in Syneme's new network performance platform, Artsmesh. We hope that the simplification of this performance platform will result in the expansion of a growing practice and a new artistic space. With the tools receding into the background, we foresee more focus on the emergent forms of network performance: forms that extend the concert-hall venue into a multi-nodal space, that use the network itself as an artistic instrument (a global resonant delay chamber) and that explicitly explore network delay and complex routing topologies. The Syneme lab is home to the Canada Research Chair program in Telemedia Arts. The lab is equipped for networked artistic performances over high-speed research networks.



G

Peng Guan - Sound Object And Sound Symbol: The rational and emotional perceptions of sound in acousmatic music

Peng Guan

Acousmatic music is one form of electroacoustic music. Historically, acousmatic music is the continuation of Musique Concrète; in both its form and its discourse, it is the inheritance and development of Musique Concrète.

In acousmatic music theory, the concepts of the sound object and the sound symbol extend into a series of related theories, which reflect the perception of sound at different levels. In this thesis, the sound object and the sound symbol are treated as two phenomena that can be cross-referenced and compared in theory and practice.

Firstly, this thesis presents a review of Pierre Schaeffer's Quatre Écoutes and Michel Chion's three listening modes, and concludes that reduced listening and causal listening are the listening modes of acousmatic music. For acousmatic sound, different hearings result from different perceptual modes. Reduced listening, in fact, is a process in which the acousmatic sound is reduced, or abstracted, to a sound object. By contrast, causal listening is a concrete process in which the acousmatic sound becomes a sound symbol.

Secondly, through a review of Pierre Schaeffer's typo-morphology and Denis Smalley's spectromorphology concerning the sound object; of the narrative, transcontextual and metaphorical characteristics of the sound symbol; and of analyses of compositions by Pierre Schaeffer, Denis Smalley and John Young, this thesis concludes that the concepts of the sound object and the sound symbol truly reflect the rational and emotional perception of acousmatic music.

Finally, the article discusses how the sound object and the sound symbol reflect the concepts of rational and emotional thinking in acousmatic music, and how these conceptions are applicable to the composition, education and research of acousmatic music.




Sanne Groth - Converting history to theory

Sanne Groth

University of Copenhagen

Since the early years of electroacoustic music, great self-awareness has been found among the field's composers, who have often and willingly communicated historical chronology, thoughts about analysis, aesthetic directions and rivalries. We find this both in relation to the historical studios (Schaeffer's work in Paris, the studio in Cologne and the EMS studio in Stockholm) and in relation to today's discussions of EAM and Sound Art. The extended rhetoric about the music and its production is a useful tool in our discussions of musical development and analysis, but in some cases it can lead to the disappearance of aesthetic work and contemplation.

In this paper I will, with the electronic music studio EMS in Stockholm in the 1960s and 1970s as my case, present an example of the latter. EMS was established with the intent to create an international centre for research in sound and sound perception, and to build one of the world's most advanced hybrid studios. During the process of establishing the studio, the choice of rhetoric in communicating these plans was of great importance. The apparently non-political EMS project was shaped in accordance with the social democratic cultural policy of the time, in which science and research were central topics. The principal creators of the studio were rooted in Swedish modernism, and their careful planning enabled the project to achieve continuous financial support, e.g. to purchase a computer in 1969. The overall rhetoric is to be recognized not only in actual project descriptions to the government but also in short TV features, in newspaper articles, in the visual design of the studio and in discussions at different seminars.

In the paper I will illustrate and present an analysis of the rhetoric at EMS: its function in a political context and its aesthetic and scientific context. Inspired by Carl Dahlhaus' analysis of the rhetoric of serialism (Dahlhaus 1976), I will also discuss what impact this displacement of focus (from the sounding work to the contextualization of the work) has had on the production, comprehension and reception of the aesthetic works produced in the hybrid studio at EMS. In this discussion I will include Dahlhaus' argument that a sufficiently high degree of emphasis on context results in the artistic work leaving its position as an aesthetic object to become a historical document instead.

This discussion, I believe, is not only of relevance to historical issues, but is also to be considered in the discussions of today’s communication of EAM and Sound Art.




H

Chun-Zen Huang - Using X-System to Structure the EMSAN-Taiwan Database: Intentions and Methodologies

Chun-Zen Huang

National Taiwan Normal University

X-System is a metadata management system developed by the Library Information Science team of National Taiwan Normal University, and it has been widely used by many digital archival projects in Taiwan. It gives learned users (archivists, librarians, and teachers) without programming skills the flexibility to construct the knowledge structure of an archive database in only six steps:

- Step 1: Metadata Analysis;
- Step 2: DTD Editing;
- Step 3: Metadata Setting;
- Step 4: XML -> Excel;
- Step 5: DO / XML Import;
- Step 6: Web Page.

X-System simplifies the tasks of constructing a database; however, it was originally designed for paper- and photo-based documents. For a musical database, which includes many sound and video files, some customizations of the system must be considered and proposed.

The MDAC (Music Digital Archives Center, NTNU) is working with the X-System development team to build a database system with multimedia storage and playback functions to archive the major documents, events and pieces of the electroacoustic music of Taiwan. The purpose of this presentation is to discuss the intentions and methodologies of the project. A demonstration of the current stage will also be included.
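As a purely hypothetical illustration of Step 4 ("XML -> Excel"), the sketch below flattens XML metadata records into tabular CSV form, which spreadsheet tools can open. The element names, field names and sample records are invented for this example; X-System's actual DTD and export format are not documented here and may well differ.

```python
# Hypothetical Step 4 sketch: flatten XML metadata records into CSV.
# All element and field names below are invented for illustration.
import csv
import io
import xml.etree.ElementTree as ET

SAMPLE_XML = """<archive>
  <record>
    <title>Tape Piece No. 1</title>
    <composer>A. Composer</composer>
    <year>1987</year>
  </record>
  <record>
    <title>Study for Sheng and Electronics</title>
    <composer>B. Composer</composer>
    <year>2003</year>
  </record>
</archive>"""

def xml_to_rows(xml_text, fields=("title", "composer", "year")):
    """Extract one row per <record>, one column per metadata field."""
    root = ET.fromstring(xml_text)
    return [[rec.findtext(f, default="") for f in fields]
            for rec in root.findall("record")]

def rows_to_csv(rows, fields=("title", "composer", "year")):
    """Serialize the rows, with a header line, as CSV text."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(fields)
    writer.writerows(rows)
    return buf.getvalue()
```

For a musical archive, extra columns pointing to sound and video files (the customization discussed above) could be added to the field list in the same way.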




Ching-han Hsu - Migration Music Festival: A Multicultural Sound

Ching-Han Hsu

National Taiwan Normal University

Migration Music Festival, the best-known festival of ethnic music in Taiwan, was founded by Chung Shefong in 2001. For the past eight years, in late summer, Chung has invited musicians from different cultural backgrounds to participate and collaborate with local artists. In the limited time available, the musicians try their best to "talk" to each other using their instruments and, finally, play a piece of music together on stage.

In Chung's opinion, there is a place for bold innovation drawing from traditional music and for collaboration across borders. However, Chung feels that the music has to proceed from the foundation of a rich cultural background and a deep respect for one's partners. Only in this way can the collaborative work achieve depth and avoid becoming empty or vulgarized. In other words, the invited musicians should deeply comprehend and identify with their own musical culture, and also be eager to share and work with musicians from other cultures.

Each year Chung chose an issue as the theme of the festival. The complete program included lectures and workshops, but the main-stage performance was always the highlight of the festival. This paper will be presented with live video of the opening program from the 7th festival in 2008, which was a combination of visual/audio devices, poetry reading, singing and live instruments played by artists from different areas. The main purpose of this paper is to discuss the meaning and value of crossing borders, not only in the musical part, but also in the mode of performing.




K

Volkmar Klien - Towards Automated Annotation Of Acousmatic Music

Volkmar Klien

University for Music and Performing Arts Vienna

At the Austrian Research Institute for Artificial Intelligence (OFAI) we are currently undertaking a two-year research project entitled Towards Automatic Annotation of Electroacoustic Music, investigating the possibilities and potential obstacles in finding (partial) solutions to problems related to the computer-assisted annotation of electroacoustic music.

We do this using Smalley's theory of spectromorphology (SM) as our point of departure and investigate to what extent it is able to provide the necessary conceptual tools. The proposed paper (setting aside technological issues pertaining to the relevant fields of signal processing and music information retrieval) aims at outlining the reasons behind our choice of spectromorphology as our conceptual background, issues pertaining to the role of the annotated score, the formalisation of spectromorphology for automation, and potential limitations. Given that neither the manual annotation of acousmatic music nor its technical implementation can be seen as a straightforward matter, research in this area is still at a very basic level, making fully automatic, and even fully functional semi-automatic, annotation of electroacoustic sound a long-term research goal.
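As an indication of the kind of low-level signal description on which such annotation work must build, the sketch below computes a frame-by-frame spectral centroid, a crude correlate of perceived "brightness" that is routinely used in music information retrieval. This is an assumed, simplified starting point offered for illustration only, not the OFAI project's actual method; the function name and parameter values are invented for this example.

```python
# Illustrative low-level descriptor: frame-wise spectral centroid.
# Assumed parameters (frame length, hop size) are arbitrary choices.
import numpy as np

def spectral_centroid_frames(signal, sr, frame_len=2048, hop=1024):
    """Return one spectral-centroid value (in Hz) per analysis frame."""
    window = np.hanning(frame_len)
    centroids = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len] * window
        mag = np.abs(np.fft.rfft(frame))
        freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
        total = mag.sum()
        centroids.append(0.0 if total == 0 else float((freqs * mag).sum() / total))
    return centroids

# Example: for a pure 440 Hz sine, every frame's centroid sits near 440 Hz.
sr = 8000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
cents = spectral_centroid_frames(tone, sr)
```

Mapping such numerical trajectories onto spectromorphological categories (motion, growth, spectral space) is precisely the non-trivial formalisation step the abstract identifies as a long-term goal.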




Yuriko Kojima - Meanings in Making Music: Has Composing Changed with Technology?

Yuriko Kojima

Shobi University

In the twentieth century there was a dramatic change in the style of art-music creation. Atonality, the twelve-tone technique and total serialism were all beginnings of this change. By the mid-1990s, the mixture of the legacy of Western art music with technology had drawn the lines of a mainstream before us, along with the keyword "arts and sciences."

The technology for music making, first brought by musique concrète in the middle of the twentieth century, has developed greatly along with research into the application of computer technology to music. The pace of development of high technology in music has accelerated further since the beginning of the 1990s. As new possibilities of musical creation were introduced from a scientific point of view, the integration of arts and sciences advanced still further. New musical styles have been established in the field of contemporary music as the foremost art music, and one of them has begun to lead the world of creation as the mainstream: so-called spectral music.

Spectral music demonstrates not only a new compositional technique that applies audio spectra to harmonic structure and horizontal time information, but also the interactivity of music technology through new musical programming environments such as Max/MSP/Jitter. Regrettably, this has been overlooked in Japan even now.

On the other hand, new forms of tape music such as the acousmonium, as well as laptop music, have been born, and totally new modes of musical expression have become pervasive as the technology develops further. In fact, as knowledge of computer programming has become necessary for composers, many major changes and problems have arisen in music making. Sometimes so-called media artists take over composers' jobs. It is obvious that the musical world has been changing at high speed.

The development of technology has accelerated more than we had expected and, as a result of the globalization of information, the field of music has become more and more boundless and difficult to hold onto.

Ten years have already passed in the twenty-first century and, under the influence of technology, we have more possibilities in musical styles and expressions than in past decades. The development of music technology and of lifestyles in general has changed our listening habits and our perceptive capabilities, which has eventually changed the world of music creation as a whole. As a traditionally trained composer who has witnessed the change in the world of creation over the past twenty years, I would like to study how the meaning of music creation has changed and to consider what it could mean to be a composer of our time. In this paper, special attention will be paid to the period after technology was brought into music creation.




L

Leigh Landy - Educating students in electroacoustic music studies: what does this consist of and how can we best deliver it?

Leigh Landy

De Montfort University

Electroacoustic music is still in its youth in some senses and its revolutionary character offers significant challenges to education. Do we, for example, borrow the traditional music education approach separating (roughly) history, theory, technology and science and artistic practice? Is electroacoustic music studies not better delivered holistically taking into account the broad horizon of (my term) sound-based music? This short position paper will commence with a survey of pedagogical papers offered at EMS10 and then tackle the proposed vision related to holistic approaches to education, not only at university/conservatoire level, but for younger students and any interested individuals of all ages as well. Following this paper, all EMS10 participants will be able to contribute their ideas related to EMS10’s theme in an extended group discussion.



Jia Li - On the Characteristics of Electronic Music Composition Thinking in Acoustic Media / From Composers Who Explored the Field of Electronic Music

Jia Li

Shanghai Conservatory of Music

This paper starts with a general overview of the background to the emergence of electronic music, together with a review of the representative mid-20th-century composers who explored the field of electronic music, including a systematic examination of their representative works from such angles as acoustic media, electronic media, mixed media and so on.

Based on the concepts and techniques of electronic music composition, and with examples of various media from this period, the paper summarizes their characteristics and presents detailed analyses and illustrations, covering material integration, microscopic elements, gradual change of structure and so on. It also points out that experience with electronic music has given composers brand-new audio experiences and led them to new patterns of thinking with regard to acoustic theory, the organization of sounds, structural form and so on. Further, it argues that the thinking of electronic music composition is just as valid for music written for conventional resources, which has also subtly adopted it.

It can be argued that the thinking of electronic music composition is both the fruit of the composers' accumulated experience in the field of electronic music and a new source for their further exploration of musical language: for one thing, it has promoted modern musical composition in such areas as the notion of sound, the design of material, the method of development and so on; for another, it promises vigorous potential for the sustained development of modern music. Understanding and studying these changes will provide new and significant insights into the transformations that musical language went through in the 20th century.




Sijun Liu - Oriental Context in China’s Electronic Music/ A Study on the Thinking Features of Creating China’s Contemporary Electronic Music

Sijun Liu

“Oriental context” is a lexical feature in the works of modern and contemporary Chinese electronic music. It is also a type of non-linguistic stream of consciousness. It appears either consciously or unconsciously, and permeates the works either concretely or abstractly, thereby forming a unique characteristic of China’s electronic music.

The thesis begins with an introduction to the concepts and background related to “context” and proceeds to elaborate on the role and modes of expression of context in other art forms, such as film, painting, calligraphy and dance. This is followed by an account of how context is correlated with music: first, the presence and symbolic significance of context in China’s traditional music; second, the presence of context in Western music, where common practice, dodecaphony, atonal and aleatory music all represent context in various periods of Western music.

“Electronic music” emerged after 1945. Its emergence, apart from the realization of the necessary electronic technology, answered above all the demands of avant-garde music. The numerous electronic music laboratories established across Europe worked persistently to help composers seek new sounds. Over more than six decades of development, electronic music has undergone four major periods: “musique concrète”, “work on tape”, “electroacoustic music” and “computer music”. The lexical features, context characteristics and technical approaches of each period differ. The thesis selects and analyzes one distinctive electronic work from each period, and ends with a summary of all the features of “context”.

China has been creating electronic music for more than 20 years. In the thesis, four works by XU Shuya, AN Shengbi, ZHANG Xiaofu and LIU Jian are compared and analyzed, and the critical roles of “oriental context”, represented either abstractly or concretely, in these works are set out. The distinctive “oriental context” has gradually evolved into a symbol of ethnic self-consciousness. This unique thinking feature is precisely the important motivation driving the emergence of a Chinese school of electronic music, which is also the motivation for writing this thesis.



Theodore Lotis - The Perception Of Illusory And Non-Identical Spaces In Acousmatic Music

Theodore Lotis

Ionian University

This paper examines the perception of illusory space in spatial acousmatic music and develops Annette Vande Gorne’s concept of espace-illusion. Illusory space is the Alice's Adventures in Wonderland and Through the Looking-Glass of acousmatic music. In Lewis Carroll's books, speaking playing-cards, white rabbits with pink eyes and looking-glasses are not just speaking creatures, non-beings or unrealistic objects thrown together in a supposedly multi-dimensional world with a slightly bizarre mise en scène, but representations and mindscapes triggered by the reader's mind's eye. The lively objects and characters of the tale have little significance other than to pull the trigger of a boundless imagination, personifying deceptive appearances and impressions, and false or unreal perceptions. One can argue, though, that anything perceived is real, and therefore that imagination, wherever it comes from, can fit into the restrictive limits of isomorphism that characterise human perception. Based on this argument, or rather sophism, a white rabbit with pink eyes is as real, although at a different perceptual level, as a white rabbit in the woods. The important issue, however, is not whether such a creature can ever exist, but how we perceive it through the printed pages of a tale.

Seemingly, illusory space in acousmatic music deals not with sounds as Schaefferian objects, but with Baylian images and representations. Sounds become i-sounds, or image-sounds: tools of perception that liberate the imaginary. However, the perception of motion, and therefore of space, remains quite tangible. This is feasible via two compositional methods for the imitation of motion and space:

1. The alteration of the spectromorphology of sound events (within the pitch field).

2. The alteration of the spatiomorphology of sound events (within the stereo field, the perspective field and geometric structures).

Within the frame of illusory space also lies the notion of non-identical space, as described by the French philosopher Maurice Merleau-Ponty. Non-identical spaces appear as deformations of objects due to the interference of other objects or causes. They provide the building material for the construction of spatial melodies. Although deceptively unrealistic, non-identical spaces offer a view that combines cause, interaction and a flair of imagination, and they are therefore genuine aspects of reality.

Symbolic or metaphorical, illusory space is the fruit of an innermost audition, balanced between the realistic stimulation of the senses and seemingly unrealistic perceptions. In between the two, the curtain can be raised for the rabbit to enter the scene.




M

Jeff Martin - Electroacoustic music in middle and secondary education: Some concerns regarding curriculum development

Jeff Martin

Beiyuan Jiayuan

The current importance attached to information and communication technology (ICT) in education, together with the wider availability of freeware music applications, has increased the opportunities for student engagement in electroacoustic music. Still, research indicates that most music teachers use ICT merely as a tool to facilitate work in traditional composing contexts (such as score writing or MIDI keyboard sequencing) rather than to explore diverse electroacoustic practices. Moreover, when the focus is sound exploration rather than pitch and rhythm entry, the learning activities are rarely linked contextually to electroacoustic traditions.

Fortunately, responses to this problem are emerging in the efforts of a few individuals and organisations to connect the technology-based composing done in schools with the authentic practices of electroacoustic composers. However, despite these valuable initiatives, it remains unclear what constitutes an effective curriculum. In this presentation, I raise and explore concerns regarding electroacoustic music curriculum development, for middle to secondary school, against the background of recent advances in curriculum and music education philosophy, as well as my own teaching experiences. I argue that the success of such a curriculum depends on the perspective of music, electroacoustic music and education that informs its aims, activities and learning outcomes. Without critical attention to these foundational issues, teaching runs the risk of adopting an influential, but erroneous, conception of music and teaching that abstracts electroacoustic music from the lived experiences and meaning-making of students, even when active music making is the focus of the lesson. Conversely, a curriculum that enables meaningful participation in the living and transforming traditions of electroacoustic music may help to prevent the further alienation of an already marginalised art form.




Stephen McCourt - Aesthetics of Multimedia and Visual Concepts in Electroacoustic Music

Stephen McCourt

University of Limerick

This paper proposes that concepts derived from multimedia and the visual domain can be applied to a compositional approach to electroacoustic music. In particular, compositing, spatial montage and space-medium, as defined by Lev Manovich, can be applied to form, and vectors, as defined by Herbert Zettl, can be applied to create various relationships between sounds. This approach to composition is further supported by perceptual phenomena shared between sound and image, such as figure-ground organization. The paper also discusses comparisons between sound and visual objects, including the idea of sound edges.




Tatjana Mehner - How to talk about unknown early electroacoustic music such as compositions from early GDR times

Tatjana Mehner

Martin-Luther-Universität, Halle-Wittenberg

Nearly everybody beginning electroacoustic studies is familiar with Karlheinz Stockhausen’s “Gesang der Jünglinge” or Pierre Schaeffer’s and Pierre Henry’s “Symphonie pour un homme seul”, and it is thus easy to forget the technical standard and the acoustic quality of the recordings. Listeners, having adopted such works as part of the electroacoustic repertory, as a kind of canon, accept the sound quality as well as the technological and compositional standards of the 1950s and 60s.

The technical means of realisation of these pieces are close, if not quite similar, but they have very different impacts on listeners. These selections are comparable to those that have taken place in other, traditional instrumental, musics. Nevertheless, in those traditional, note-based musics there is always some possibility to “update” or “correct” works through interpretation.

But what about Siegfried Matthus’ “Galilee”, or compositions by Hans Hendrik Wehding, Wolfgang Hohensee, Bernd Wefelmeyer and others, made in the German Democratic Republic (GDR) during the 1950s and 60s in a studio nearly unknown today? Or even a piece such as Lothar Voigtländer's “Maikäfer flieg”, which in some ways marked the more official starting point of electroacoustic composition in the GDR? Because of a special performance and reception history, such pieces did not take part in the continuously running general competition for aesthetic acceptance. How can we know today why one is a truly fantastic artwork while another is merely an attempt to develop its own electroacoustic language, one a kind of technological étude and another an aesthetic revolution? Is it necessary to know that?

We are used to listening with ears accustomed to the audio standard of 2010. If we are not able, through our experience, to establish other criteria, it is this standard that guides our discrimination. Quite often, researchers treating the outsiders of general electroacoustic history confine themselves to writing down single techno-historical stories as part of a supposed general history, presupposing a general interest in history and in rare technological outgrowths. This procedure becomes problematic when the related musics themselves should be addressed. When we talk about the early years of electroacoustic music in the GDR, we can nevertheless find more general points. This paper takes the results of a publication project on electroacoustic music from the GDR as an example to discuss possible perspectives for listening to unknown early electroacoustic pieces.

It was in 1956 that a music-acoustic research laboratory (Laboratorium für akustisch musikalische Grenzerfahrungen) was founded at the East German Radio (RTF). It included a large production unit and was, in some ways, the first electroacoustic studio of the Soviet Bloc in Eastern Europe. In our paper, we give some reasons for and interpretations of the foundation of this studio, and show, from the political and aesthetic background, why researchers in this studio spent a lot of time and resources on the development of an electronic instrument closely related to the tradition of Friedrich Trautwein and especially Oskar Sala: the “Subharchord”. It was only in 1970 that work in the laboratory was stopped, for various reasons, not only political ones. Ten years later, electroacoustic music development started again in a less official manner.

In our research, we try to treat both developments as related, keeping the greatest possible distance from well-known clichés about the GDR and Socialist Realism, while nevertheless remaining critical. The first period (that of the lab at RTF) will be used as an example. We discuss the historical facts and our methodological ways of treating them, the technological developments and their possible reasons, the sounds and works, and the processes of listening and selection, and thus mediation and teaching strategies.

The paper seeks, on the one hand, to give some impression of the partially unknown situation of electroacoustic music in the GDR (drawn from a study being realized for the MINT research group at the Sorbonne) and, on the other hand, to look for strategies for dealing with electroacoustic history in research and teaching. In particular, we develop a research and mediation concept reflecting the area of conflict between cliché, sound and context.




Mikako Mizuno - What is the aesthetic value of live-interactive music? System, sound, performance

Mikako Mizuno

Nagoya City University

Today, leading-edge technology in computer science, signal processing, bio-engineering, modeling, interface design, sound engineering and related fields has a great effect on artistic creation. Thanks to the speed of computer operation, with sounds as well as images, and thanks to well-refined software interfaces, composers can more easily build original interactive systems for their musical pieces.

But what is the aesthetic value of interaction? Is technological interaction indispensable for electronic/electroacoustic music? These questions are more important than those of technological development. In this presentation, the aesthetic value of technological interaction in musical pieces will be discussed with reference to Japanese pieces composed since the 1990s.

Here, interactive live-electronic music means computer music that combines human performers (singers, instrumentalists etc.) with computer-generated sound, and that includes some technologically interactive computer system, comprising not only the computer program, parameter settings and mapping strategy, but also various kinds of interfaces and input devices. In other words, interactive live-electronic music is defined by the style of performance and the technological frame. The definition does not include any aesthetic or conceptual description.

Interactive relationships have been realized in music even without technology. It is important to think through what has been changed or added with respect to musical interaction. Traditional interaction can be identified at several levels of musical representation, but here I concentrate on two phases of traditional musical interaction: that concerning the instrument and that concerning the ensemble.

a. Interaction with the instrument

The interaction through instruments produces sounds based on the playing action. Sounds are the results of action; there is a coexistent causality of cause and effect. The audience, and sometimes the instrumentalists, can perceive this causality because the device does not conceal the physical phenomena: the audience see the performing action and the instruments and hear the sounds, so the causality is clear. The computer has added another level of interaction through the man-machine interface. Most interactive techniques capture physical actions, sounds, images and so on through interfaces and transform them by program. Here the causality cannot be clearly perceived, because the technological system conceals and changes the situation of sounding. The audience may wonder how the sounds are produced in connection with the human action; they may want to know what is happening on the stage. At the moment the audience gets some hint towards this question, they may feel they have understood one important part of the piece. The feeling is similar to that of an interactive installation: one of the important parts of an interactive installation shown in an exhibition is the responsive system as a device, in the sense of a game. The interface and the device system are conceived in the same technical situation as a musical instrument that produces sounds: they should be easy to play, should make sounds convincingly, and should be designed as human-friendly interfaces.

b. Human-to-human interaction

The second question concerns the ensemble between musicians. In the performance of chamber music, such as a string quartet, many types of human communication occur one after another. The phases of musical performance to be discussed are as follows: At which tempo should the piece start? How is the volume balanced between the principal melody and the second melody or the accompaniment? Which type of phrasing (articulation, change of dynamics, attack, overtone control, softness/hardness etc.) is to be selected? What kind of timbre is suitable? How long should the fermata be? How are the agogics? Should the phrases be played continuously, or is it possible to make a slight silence between two phrases? In electroacoustic music, an ensemble can be made between instruments and a computer that can produce both sounds similar to the instruments and opposite sounds, such as noise, which has not been tuned. So the problem of a new type of musical accompaniment should be discussed. What is the logic of the relationships between the primary melody and the accompaniment in our situation? The discussion of accompaniment will include both sound correspondence and ensemble relationships in interactive live-electronic music. One essential question is how the listener psychologically connects the instrumental or tuned sounds and melody to the digitally produced or edited sounds. The latter sounds are similar to instrumental sounds in that they have a dynamic curve and a sounding character as timbre. It depends on the composer's hearing how the relationship between the timbres is made, and at which moment the instrumental sounds can be heard in connection with the noise.

c. Definition of music: towards future discussion

The frame of instrument and ensemble is closely related to traditional music in the sense that musical pieces should be performed in front of an audience. But the leading-edge technology of computer science is now changing music in another phase. If we can think of music as a "communication art", what happens to the aesthetic situation of interactive live-electronic music? Traditionally, musical communication has been realized, as Jean Molino and Jean-Jacques Nattiez discussed, in the special relationship between composer, performer and audience. But communication itself has changed dramatically, in social networks no less than in musical ones, and music is no longer necessarily received only by way of hearing. Communication tools such as Twitter, net streaming, YouTube and the like are used by some composers to extend their musical thinking: Masayuki Akamatsu with his video-to-paper-print, Yuichi Matsumoto with his questionnaire art, Taro Yasuno with his streamTV, Masaru Yonemoto with his electronic gadgets, and so on. People who love music talk about society, music and life, and their talk takes place on a communicative network with interactive real-time responses. This too may be interactive live-electronic "music". Just as the algorithm in itself is the primary element in Japanese algorithmic composition, the communicative network system in itself is the main element in this communication music.




N

Robert Normandeau - Typology and analysis: a revision

Robert Normandeau

Université de Montréal

Over the last seven years I have taught the course on auditory perception at the Faculty of Music. This subject is one of the oldest within the electroacoustic music program. Even before the program was officially launched in 1980, Marcelle Deschênes was already teaching it (and had already done so in Québec City in the 1970s). Francis Dhomont succeeded her in the 1980s and, after two or three different lecturers, in 1999 I was put in charge of the aesthetics of electroacoustic music courses. Originally named Perception auditive (Auditory Perception), one of the main courses has since 2003 been called Typologie et morphologie sonore. The course content includes Pierre Schaeffer's typology and morphology, as well as R. Murray Schafer's typology of the soundscape and the different writing techniques used in electroacoustic music (transmitted by Francis Dhomont before he returned to France). The general idea is to give the students a lexicon of words and tools to describe sounds.

In addition, since 2008 I have offered a new course dedicated to electroacoustic music analysis. I will present the method we use as well as the results we have obtained over the last two years.




O

James O'Callaghan & Arne Eigenfeldt - Mixed Morphologies: Gesture transformation through electronics in the music of Kaija Saariaho

James O'Callaghan & Arne Eigenfeldt

Simon Fraser University

Kaija Saariaho has developed a unique musical language, fitting neatly between instrumental and electroacoustic methodologies. Regardless of which idiom a given piece of hers explores, however, her focus on spectral transformation reveals the richness of influence electroacoustic traditions have had on her writing. Replacing the tonal language of consonance and dissonance, she has selected the broader spectrum of tone and noise. This manifests itself both in the development of form and individual gesture. An analysis of the types of processes used in the latter manifestation will form the focus of this paper with particular attention given to the relationship of acoustic and electroacoustic elements in her mixed pieces (for instruments and electronics).

It is in her first period of works (roughly the 1980s and early 90s) that the continuum of tone and noise illustrates itself most clearly, and it is therefore not surprising that the bulk of her pieces in this period involve electroacoustic processes in some way, either through the inclusion of a computer-generated tape part or with live electronics. In these pieces, the use of electronics facilitates the transformative capacities of gestures toward the extreme ends of the tone-noise spectrum, and allows for the expansive qualities of her transformations. Attention will be given to the importance of electronics in these pieces in expanding the palette of gestures, as well as to how electroacoustic traditions strongly inform Saariaho's compositional style. A focus on three major works from this period, Verblendungen (1984), Lichtbogen (1986), and Io (1989), will serve to illustrate the methodologies of gesture transformation that Saariaho has developed.

The paper will examine the relationship between instrumental sound sources and electronics, examining in detail her use of three proposed models of interaction: mutation, unison, and emergence. In the case of mutation, the electronics sometimes operate as alien elements, where their contrasting timbres infect a given motivic gestural unit observed in the instrumental parts, causing it to acquire characteristics of the tape part. Conversely, the instrumental and tape parts often operate in unison, such that their elements act together and are perceived as a single gestalt. This latter relationship is sometimes the result of the former process, where the electronic part and instrumental part begin as disparate elements but acquire each other's characteristics until they merge into a single perceptual unit. Finally, the electronic part sometimes reveals itself as an emergent element of an instrumental gesture, where it was absent or hidden before. The extent to which it emerges as the gesture is developed differs: it sometimes becomes integrated seamlessly, or else it overwhelms and absorbs the instrumental gesture and becomes the dominant (or exclusive) element.

These three models of interaction (mutation, unison, and emergence) form the basis of the gesture development in Verblendungen, Lichtbogen and Io. In certain cases, they present themselves as the chief tool for development and form the focus of the piece. Verblendungen and Lichtbogen explore the relationships of mutation and emergence, respectively, in this capacity, whereas Io is a sort of tour de force that exhibits more complex and varied gesture interaction, with relatively clear focuses in individual sections. The models used for interaction in each piece are in some ways informed by the nature of the electronic part. In Verblendungen, there is a fixed tape part to which the orchestra must synchronize, while Lichtbogen features live amplification and processing of instruments. Io uses both of these strategies and is thus afforded a diversity of relationships. In each of the three pieces, Saariaho engages the contrasting elements of tone and noise and negotiates how they operate differently in electronics and instruments.

The paper will examine each of these three pieces in depth, with attention to the detail of individual gesture shapes and their morphologies of development. A special focus will be afforded to the relationship between the instrumental and electronic parts of these pieces, and how they interact to create exceptional transformations of sound-shapes. Attention will be given to the importance and essentiality of electronics in these works and in Kaija Saariaho's compositional language and process during this period, not only in expanding the capacity of her gestural development, but also in informing the formal structures of her composing, which are far divorced from the traditions of Western instrumental music and owe much to spectromorphological traditions in electroacoustic music.



R

Neil Rolnick - Computer Music in the Music Curriculum

Neil Rolnick

Rensselaer Polytechnic Institute

As a composer who has been using the computer as my primary compositional tool and performance instrument since the mid-1970s, and as a teacher in a well-respected electronic arts program since the early 1990s, I’ve been trying to sort out the place of electronic music in the university curriculum, and in the musical world at large. Or maybe it’s the place of computer music. Or of electro-acoustic music. Or maybe the rhetoric just gets in the way.

My fundamental belief, based on my own experience, is that the time of segregating technologically oriented arts as “computer music” or “electronic arts” has passed. Because of the ubiquity of the technology, and the many ways in which computers have impacted our lives, talking about computer music or electro-acoustic music is only meaningful in the sense that talking about romantic piano music or virtuoso violin music is meaningful. There is an existing repertoire of music, and new works continually enter it. Players need to learn performance skills, and composers need to learn how to use the tools in their creative processes. From this perspective, teaching computer music is as necessary as teaching violin or piano, and perhaps more so now, since students and young players are able to use commercial tools to begin playing and performing without going through traditional musical educations. In the context of learning performance and compositional skills with the computer, developing familiarity with and deep knowledge of past and present repertoire is essential.

On the other hand, many students coming to music from a non-traditional path today find their way in through the use of commercial music software. Many undergraduates I teach begin to “compose” by using these packages to put together pop songs, or minimal or ambient pieces. If and when these students become interested in pursuing music more deeply, their relationship to the computer is established in the same way as for players of other instruments: it’s their axe, their path of least resistance for exploring the world of music. But just as I wouldn’t advise a pianist to listen to and study only piano music, or limit an alto sax player to listening to Charlie Parker, I’m uncomfortable with building a ghetto of “computer music” in programs which use computers for musical purposes.

I will argue in this paper that the study of electro-acoustic music, or of computer music, is today only relevant in the sense of teaching specific performance or compositional techniques. At the same time, I think there is an exciting opportunity now to think about using computers as a vehicle for teaching deeper listening, performance and compositional skills, and for bringing greater breadth to the narrative of our music history.

At Rensselaer, we are in the midst of a discussion about how we can reorganize our music and computer music curriculum to erase the boundaries between the two. These efforts range from integrating the use of computers into basic theory courses, for both listening skills and notation, to the regular inclusion of computers in performances of a variety of traditional and experimental ensembles. We are looking at ways to include non-Western and pop music in our more traditional theoretical courses, and to include the development of technology in this view. Similarly, we’re looking to revise our “historical” narrative for music history to be inclusive of the many musics which exist outside the classical Western tradition, and to trace the use of technology through this narrative.

In the context of such a program, our courses in computer music or electro-acoustic music become either skills courses akin to group piano lessons, performance courses like chamber music ensembles, or theoretical topics like the study of the work of a particular composer or musical period. But hopefully, they will then be more completely integrated into a larger perspective on music as a form of human expression. And while I expect that we’ll spend some time listening to and analyzing the music of electro-acoustic composers, I hope it will not limit our ability to likewise focus on Beethoven, or gamelan music, or bebop. And we expect that students will find that the computer has become the same kind of tool which the piano was for earlier generations of musicians and composers: a way to access music and sounds and to realize ideas in the most direct way, and perhaps also a compositional tool to explore music and sound in greater depth, and a performance instrument to channel their music making through.




S

Qing Shao - The Sounds and Structures of Pierre Jodlowski’s Dialog/No Dialog

Qing Shao

Electro-acoustic music, which emerged alongside the development of science and technology, is a new musical language that differs from traditional Western music in its sound production and sound organization.

With the emancipation of noise at the beginning of the twentieth century, many more ways of producing and organizing music became possible. With further scientific development, music has been able to explore new fields with the help of computers and related software.

More than one hundred years (1907-2008) have passed since electro-acoustic music germinated. From the very first stage of merely freeing itself from the constraints of the twelve-tone system, through the era of copying and pasting with scissors and glue, to the current digital era operated and controlled by software, electro-acoustic music has tremendously expanded its channels of creativity. During its development, it has not only enriched its own creativity but also deeply influenced the creation of non-electro-acoustic music.

The new concepts of sound produced by electro-acoustic music result in new sound production and new organizational techniques. These new techniques and this new musical reasoning not only benefit electro-acoustic music's own creativity but also offer instructive models for the creation of non-electro-acoustic music. It is therefore highly worthwhile to analyze a composition which combines electro-acoustic media with a traditional Western musical instrument; such an analysis may provide musicians with a technical model for composing.

This thesis focuses on Dialog/No Dialog, composed in 1997 for electro-acoustic media and flute by the French composer Pierre Jodlowski (1971- ). It aims to reveal the specific compositional methods and structural formats of current compositions combining electro-acoustic media and traditional Western instruments; more specifically, it analyzes the sound, rhythm and timbre of both the electro-acoustic part and the flute part, and the organization of the various parameters that appear when the two parts are combined.

The paper examines the type of electro-acoustic composition created using electronic techniques and the compositional ideas of electro-acoustic music; it does not cover music performed on electronic instruments.



Margaret Schedel - Dodge’s In Celebration: The Composition and its Analysis

Margaret Schedel

Stony Brook University

In Celebration, Charles Dodge’s electronic music realization of the 1973 poem by Mark Strand, was realized at the Columbia University Center of Computing Activities and the Nevis Laboratories in 1975. The work belongs to Dodge’s Speech Songs series, in which he explored making music out of the nature of speech itself. The piece is well documented in Dodge’s own article “In Celebration: The Composition and its Realization in Synthetic Speech.” Although Dodge includes a score and analyzes the speech synthesis and electronic music techniques extensively, he does not present a musical analysis of the work. This paper seeks to further Dodge’s analysis, a task greatly simplified by the ability to reference his musical score. Although it is not a perfect representation of the music, his notation conveys a great deal of information about the piece and gives valuable insight into which of the work's structural elements Dodge considered important. Building upon Judy Lochhead’s examination of the musical object, evidence, subjectivity, representation, and goals of another of Dodge’s Speech Songs, Any Resemblance is Purely Coincidental, in her article “‘How Does it Work?’: Challenges to Analytic Explanation,” this paper offers a complementary critical methodology for a musical analysis of In Celebration.




T

Pascal Terrien - From the appreciation to the teaching of electroacoustic music: a new didactic approach

Pascal Terrien

Conservatoire National Supérieur de Musique et de Danse de Paris et Université de Paris IV Sorbonne

As a musicologist and a researcher in the science of education applied to music teaching, we study the relations between the musical work and the person, seeking to understand how these relations can be made explicit and mediated by the teacher so that the pupil, the student, or simply the listener can appropriate knowledge about the music. This semiological study rests on Jean Molino's theoretical model of tripartition, adapted to the musical domain by Jean-Jacques Nattiez, but other theoretical frameworks, taking different analytical approaches into account (Ten Hoopen, Delalande, Cogan, Smalley, and others), can be called upon to work on the perception and comprehension of electroacoustic music. We share with Jean Roy the idea that the esthesic approach can pertinently illuminate the reception of these works. Even so, teaching electroacoustic music today in a specialized or general institution (conservatory or university) raises problems as much in the mastery of vocabulary as in the techniques employed, the aesthetics invoked, and the reception of the work by listeners. The pedagogical effort that could relate the composer's intention to the listener's understanding requires preliminary work of didactic transposition, and cannot dispense with an epistemological questioning of the nature and function of the object being taught. The tensions between perception and interpretation, like those between understanding the composer's intentions and the listener's reception, challenge the teacher's pedagogical strategies, whether that teacher is a musicologist, an analyst, or a composer. We speak well only of what we truly know; in other words, only of what we have clarified through didactic analysis.

Our paper will address the conditions of reception and comprehension of the work, using theoretical concepts from didactics adapted to the teaching of these various musics. In the light of an epistemological inquiry into the nature of electroacoustic music, we wish to carry out a didactic transposition of selected works that allows us to better identify the elements necessary for understanding the musical phenomena of this current (listening, intention and reception: the stakes of perception and interpretation), and to identify the stakes of the language that provides an interpretation of the work. Starting from works in the repertoire, we will analyze the various scientific, aesthetic, and formal objects that constitute the nature of a work, bringing out the terms of a glossary partly specific to electroacoustic music that allows the listener to appropriate the polysemy of the work. For this study, we have chosen works belonging to different currents of electroacoustic music, ranging from mixed compositions such as Yann Maresz's Metallics (1995) to works such as Bernard Parmegiani's Sonare (1996). Our contribution is intended as a tool for reflection on a didactic approach that grounds new pedagogical methods in the teaching of these musics.





V

Mike Vernusky - Embodying the Future of Electro-Acoustic Music

Mike Vernusky

Quiet Design Records

As visionary composers, we must embrace the role we are playing in the unfolding of this new era by asking questions, taking risks, and linking these possibilities to tomorrow. We, as the creators of this music, will determine how it is constructed, who hears it, and the distance it travels. Much like witnessing a stellar concert performance, this is how the music will be remembered in the future.

The 21st-century model of the composer is blurring the roles of craftsman, performer, and convergent media artist. New aesthetics in music are constantly being invented, as are the kinds of media used in performance. In practice, we are faced with evolving performative challenges in physicality and space. And as artisans in a global music community, we have new demands to meet pertaining to awareness of and intelligence in recent music and art. Being an informed and skilled composer now calls for deeper levels of cultural understanding. Largely due to the internet, there has been a shift in expectations as to how a composer interacts with her/his own music and audience. Destined to be obsolete are the days of the solitary composer, contentedly oblivious to the goings-on outside his musical world, who patiently waits to be discovered and celebrated. Times have changed. One can no longer rely on an audience finding you; rather, it is becoming increasingly complicated to keep up with, or manage, the ways to reach them. This leads us to develop new strategies that will attract these listeners to our music and give us access to the widest audience possible.

As many have predicted in recent years, a new type of composer skill set is needed to maximize our creative effectiveness: artists who can quickly adapt to new environments for musical creation while maintaining their own unique musical language inside that environment. Further, there are duties required of this artist that may go beyond the established role of musician or composer. The artist must balance creative work alongside the non-musical challenges she/he now faces. Of key interest are questions pertaining to adaptation, listener base, and the ability to publish music in visual ways that best represent the sound. After all, the musical experience is not limited to what is heard. We can now produce a professional-grade album, from scoring to recording, mastering to designing, and even manufacturing, literally in our bedrooms. It can then be made available for download or purchase from almost anywhere on the planet.

That being said, I would like to suggest one possible model as a point of departure for defining this new type of musician in the 21st century. Although certainly not limited to these, it might consist of three standout traits:

1) Composer as Expert. This role applies to the ongoing refinement of composition, analysis of music, listening strategies, pedagogy, and performance. The 20th century represented a peak for this type of composer, and is the benchmark that we can strive to build upon when it comes to developing our craft in the current century.

2) Composer as Innovator. This role entails discovering new ways to put sound in the air, suggesting previously unrecognized relationships in music and/or technology, and working with innovators in other disciplines or fields to inform our musical decisions.

3) Composer as Auteur. This one is perhaps the least familiar to previous generations of composers, and is an area sometimes met with opposition or indifference. And yet, I suggest it is the one with the most potential for development and return. It pertains to creative exploration in various areas of production and distribution, and to re-defining strategies to bring our music to the ears of others. In addition, it unveils the significance of adapting one's music to the artistic potential of technology, in particular the internet. Technology has always played a crucial role in the life and times of the composer. It helps convey or construct our music, connects us to society, and allows us to communicate more clearly with our listeners. Technology also encourages us to create new models, conditions, and modes of expression in our music. But where else can it be applied?

There are endless paths to progress when we not only use technology to construct our music, but also to share it on a global level. And the potential for exposure is undeniably significant, even compared to just a few years ago. Just ask Pauline Oliveros, who takes full advantage of the current social utility websites such as Facebook, Myspace, LinkedIn, and Plaxo. At the time of this writing, she has over 3000 online friends and 14,000 listens for a single work on a popular networking site.

It is my belief that in the coming decades, composers will need to diversify themselves in a variety of areas to express a clear vision of their relationship to the sonic landscape. This will pose a new challenge for many composers, as it requires knowledge and experience in areas that fall outside the boundaries of a traditional musical education, as well as significant time spent away from the manuscript or studio. Progress can only happen in practice, and the results of each step inform the next. The only way of knowing what is possible will be by actually doing it.




W

Hasnizam Wahid - Teaching and Learning Electroacoustic Music in Malaysia

Hasnizam Wahid

Universiti Malaysia Sarawak

What are the prerequisites for understanding electroacoustic music? How can I introduce electroacoustic music when the compositional approach is not immediately obvious? How much technical and aesthetic material should the modules contain, and of what kind? How do we define good and bad electroacoustic pieces? Should we approach electroacoustic music composition from a technical point of view?

My experience teaching electroacoustic music in Malaysia, particularly at the Faculty of Applied and Creative Arts, Universiti Malaysia Sarawak, has been particularly interesting. Electroacoustic music in Malaysia may still be regarded as something 'new', which makes it all the more interesting that it was first introduced as one of the many courses offered at Universiti Malaysia Sarawak (UNIMAS) in the mid-1990s.

The Faculty of Applied and Creative Arts (FACA) was established in 1993 with a main focus on 'technological exploration in the arts'. Unlike other public universities in Malaysia, FACA was founded with the aim of exploring the many possibilities of technological applications in the arts. My own first experience of electroacoustic music came during my two years of study in the music department at the University of York, England. I was fortunate to hear electroacoustic music through an Ambisonic sound diffusion system; my most memorable experience was the MediaMix96 event organised by the music department, during which I heard the classic piece 'Sud' by Jean-Claude Risset over the Ambisonic diffusion system for the first time. That listening experience introduced me to the genre, and I began a long and hard journey into electroacoustic music. I returned to Malaysia in 1996 and soon after started to establish computer music related courses at FACA. Electroacoustic music was first introduced as 'Computer Music' at FACA as early as 1997, after a short visit by Barton and Priscilla McLean, also known as The McLean Mix. The initial idea of inviting The McLean Mix to FACA was to establish our first state-of-the-art Musical Instrument Digital Interface (MIDI) studio. In its early days the studio had a very minimal set-up: an Apple Mac running Pro Tools with an audio and MIDI interface, together with a Yamaha 02R. Early exploration in the studio centred on MIDI applications, with Studio Vision Pro as the main software.

My early exploration of electroacoustic music was driven mostly by curiosity and self-directed study. Unlike during my stay at York, very little electroacoustic music could be heard in Malaysia; even the opportunity to hear a full symphony orchestra live was rare until the late 1990s and early 2000s.

At FACA, electroacoustic music was first introduced as one of the many music subjects as early as 1997, initially under the title Advanced Computer Music. Before going further, I should first describe the background of the students who enrol in our music degree programme. In its early days, most of the students entering the programme had been trained in Western music through private music schools in Malaysia, certified either by the Associated Board of the Royal Schools of Music (ABRSM), a British-based certification, or by Japanese-based certification, mainly the Yamaha Music grades, in both theory and practice. In addition, the standard qualification for entry to a public university is a sound Malaysian Higher School Certificate. Once in the music degree programme, they study a fairly standard music curriculum, from music history to musical composition.

In its early days electroacoustic music was introduced to third-year students, and later to second-year students, with Studio Recording Techniques and Musical Instrument Digital Interface as prerequisites. The course description covers the history of musique concrète up to the evolution of synthesis techniques in Csound. Students are also introduced to compositional techniques, normally covering the 'classical tape techniques' together with software-oriented processing through plug-ins and specific software applications. Towards the end of the course, a typical evaluation is the submission of a portfolio of 'sonic postcards', including a concert presentation.

Teaching and Learning Process. In the initial stage of the course, students are given a historical perspective on the early development of music and technology, including Schaeffer's musique concrète, Russolo and his art of noises, Varèse and his works, and a general view of the development of Csound. The idea of giving this historical context is to equip students with some sense of what has happened before, with listening examples drawn from my personal CD collection. Around the middle of the semester, students are exposed to what I would describe as the 'classical tape techniques', and later adapt those compositional techniques to the digital platform, focusing specifically on software and plug-in applications. Exposing students to the historical context of electroacoustic music has been fairly successful in introducing both the methods and the conceptual ideas of the course. Most of us would agree that factual information such as historical perspectives is rather straightforward, as long as the texts are carefully studied and well delivered in the classroom. In my experience, however, teaching electroacoustic music at FACA has so far been rather more technical than aesthetic, because most of the lectures centre on how to compose with software as the main platform. As we are aware, composing electroacoustic music happens mainly in the studio domain. This was true of my own experience in the BEAST studios between late 1999 and 2004: most of my worries were about how to use the software available in the studios, and the 'arranging' or structuring process happened in parallel with exploring and experimenting. I had no specific compositional approach as a guideline except listening to other composers' work.
As a beginner, the listening process answered most of my questions about how my compositions should sound, but achieving the sounds I liked, the kind that experienced composers would hold up as marking a good piece, was another set of issues. My first experience of listening to the electroacoustic music CDs available in the music department's library at the Barber was very interesting. With no graphic scores available, and only minimal information on the CD sleeves, it is very hard to really understand and interpret a composer's intentions in a piece, and even harder with no information at all about its conceptual ideas or what it is about. My main question would be: how do we define a good piece as 'good', and how can a bad piece be defined as 'bad'? And does a good composer always compose 'good pieces'? What happens when a 'bad piece' is presented in a live concert, where presenting work over a multi-channel diffusion system has always been the norm? Can a 'bad piece' become a 'good piece' in multi-channel presentation? My personal preference has always been that a good, well-recorded sound has a significant impact on the compositional process. But is this the right answer? The most common question my students asked after a series of listening sessions was always 'why do the pieces sound the same?'. In fact, some of them even regard electroacoustic music as 'very dark', 'solemn' and 'scary' in nature.

What about electroacoustic music composition? In the typical approach, pieces are developed and composed in the studio domain. My concern has always been how pieces can be discussed and analysed effectively. At the time of writing, I would contend that efforts are being made, at a very minimum, to enable composers to analyse their compositions. A traditional approach is to represent some ideas of a piece in the form of a graphic representation. I would suggest that the ability to listen 'critically' to, and constructively analyse, a piece of electroacoustic music is beneficial. In my experience, however, the question I am always asked is: what are we looking for in a piece of electroacoustic music? This is more difficult for a group of students without extensive listening experience in electroacoustic music. In one of my experiences at FACA, students were more comfortable and better able to discuss a work when the material used was something familiar to them. In one example, a group of students was introduced to a piece that used gamelan as its main source. Not much sound manipulation had been applied, and the students could easily recognise the material used throughout the piece; as a result, they were more comfortable with it and engaged with it easily. Another listening experiment was carried out with the same group, but this time the students were given a piece by a well-known electroacoustic composer, composed with extremely processed materials, to the point where it is impossible for the listener to recognise the source material. As predicted, the students had problems explaining the sounds and sources used in the piece. At the time of writing, we have two students at FACA who are now at the writing-up stage of their theses.
One is doing her MA and the other is a PhD student. In the early years of their research, both were exposed to listening to and analysing electroacoustic pieces by renowned composers. In those early days they had trouble appreciating the pieces played and some difficulty understanding their overall conceptual ideas, but after a series of 'repeated listening' sessions, both eventually got hold of the ideas and conceptual frameworks of the pieces.

Analysis and Critical Listening. In the note domain, any musical score can easily be analysed; the most common approach is to examine the score from a traditional or a modern harmonic point of view. Either way, these approaches are well structured and easily identified, and they give an impression of a piece simply by interpreting the score. How should electroacoustic music be analysed? Are there specific approaches? I was personally involved in the L'Espace du Son competition at Musiques & Recherches, Brussels, in 2001. In one section of the competition, participants were required to do what I would term 'interpreting' a piece of electroacoustic music and then analyse a given piece. I submitted a graphic interpretation of a piece composed by Nikos Stavropoulos. In fact, during the competition I personally analysed almost every piece given to me, particularly during the diffusion competition. I found it very interesting that, after sketching my 'interpretation scores' on paper, every sketch of mine looked very similar for every piece I interpreted. I did not have any specific reference or method for doing it; most of my ideas about the score were based on my listening: intensity, high and low frequencies, and the specific approaches reflecting the composer's intention; in other words, whatever I regarded as extraordinary in the piece. As a result, I recognised that a few specific techniques were unique to every piece I listened to, but I was not quite sure how they could be 'specifically explained' in my graphic score. Later, I shared this experience with my students. In one session, students were asked to interpret electroacoustic music; the task was framed so that the students were not first taught how a piece of electroacoustic music could or should be analysed.
After the listening session, with notes collected, I found that most of their interpretations had something in common: they found that all the pieces sounded 'dark' and like 'horror'. In fact, most of my students always relate electroacoustic music to such qualities; my assumption is that this was simply the only way they had of describing it. Could this possibly be the basis of an electroacoustic music theory in the future?

Parametrical Approach in Composing EA Music. Unlike the 'classical tape techniques', the software approach to composing EA music is now nothing new. With the advent of plug-ins, user-definable software, and ready-made effects bundled with software, composing EA music has become accessible to almost anybody. Beginners will probably use whatever is at hand, while advanced users may be more ambitious, creating their own plug-ins or software; such approaches are now easily realised in environments such as Max/MSP, SuperCollider, and others. Composing EA music is no longer confined to what is available on the market; composers can be more explorative, 'inventing' interfaces, software, and plug-ins that result in 'new sounds'. But is this true? Composing EA music has perhaps become a more 'parametrical' approach, a term I borrow from the Composers' Desktop Project (CDP) manual, which I encountered when I was fascinated by what CDP could do for me in composing EA music.

This paper is speculative rather than an answer to what and how to teach electroacoustic music studies.

From my point of view, teaching computer music, electroacoustic music, MIDI, or any practice relating music and technology needs a very particular approach and should be treated in considerable depth. The uniqueness of teaching music and technology goes beyond curriculum development itself: we must also consider the availability of the technology, the ability to understand the technology and its applications through manuals, and so on. Moreover, electroacoustic music is not merely about composing sounds; it also involves recording source materials, evaluating and analysing the recorded materials, manipulating the edited sounds, structuring processes, and modes of presentation, whether in stereo or over multiple channels.




Jing Wang - A Look at Interactivity Between Performers and Computer

Jing Wang

University of Massachusetts Dartmouth

Since the middle of the twentieth century, electronic media have been used by many composers as a compositional tool and a new medium to convey their artistic ideas. With the rapid development of web-based communication and computer technology, contemporary computer music has the potential to be quite refined relative to early electronic music and has become one of the mainstream modalities in the composition world.

These technological advances have given rise to the move from tape music to the evolution of many varieties of electronic music, including numerous works with interactive components that combine acoustic instruments with an artificial instrument—the computer. In a real-time interactive music performance system, the computer manipulates various algorithms to generate sounds through real-time audio analysis, data resynthesis, sound effects construction, physical modeling, and so on. In this manner, the computer demonstrates a high level of artificial intelligence by interacting with the human performer who plays a traditional musical instrument in real time.

Interactive music is a real-time two-way dialogue that takes place between live performer(s) and a computer. In a real-time interactive music performance system, the performer plays a traditionally notated score and/or improvises on his/her instrument along with a pre-programmed computer. The computer, acting as a “virtual” performer, listens to its counterpart’s performance, analyzes the parameters of the audio signal, locates the position in the score, and responds to the live performer. Through this role-playing, interactivity between the performer and the computer is accomplished by their respective abilities of listening (to each other), analyzing (performance events/parameters), allocating (their scores), and responding/playing.

A computer’s listening ability is achieved by fast CPU technology together with microphones that transduce the sound pressure of an original sound into an electrical signal, which can then be converted into digits in the computer. The sampling and quantizing process that takes place enables the original sound to be captured, converted, stored, and used as computer data. By analogy, the digitizing process makes the sound listenable to the computer’s virtual ears. The subsequent functions of analyzing, allocating, and responding, which originate from this listening stage, are the vital parts of interactivity.
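As a rough illustration of the digitizing step described above, the following Python sketch samples a continuous signal and quantizes each value to a signed integer; the function name and parameters are hypothetical, not taken from Wang's system:

```python
import math

def sample_and_quantize(signal, duration, sample_rate=44100, bit_depth=16):
    """Sample a continuous signal (a function of time, in seconds) at a
    fixed rate and quantize each sample to a signed integer."""
    max_level = 2 ** (bit_depth - 1) - 1           # 32767 for 16-bit audio
    samples = []
    for n in range(int(duration * sample_rate)):
        t = n / sample_rate                        # sampling: discrete instants
        value = signal(t)                          # assumed to lie in [-1.0, 1.0]
        samples.append(round(value * max_level))   # quantizing: finite levels
    return samples

# One millisecond of a 440 Hz sine, "captured" as 16-bit data:
tone = lambda t: math.sin(2 * math.pi * 440 * t)
data = sample_and_quantize(tone, duration=0.001)
```

In this toy form, the returned integers play the role of the computer data that the later analysis stage inspects.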

During the analysis process, the computer evaluates the incoming audio signal so that the individual parameters, such as frequency, amplitude, envelope, overtone components, and even pitch/rhythmic patterns can be detected and stored. The devices utilized to track various musical events are event detectors. According to the particular musical characteristics of the piece, detectors for pitch, attack/amplitude, phrase, note/rhythm prevalence, and other musical attributes are purposely designed in order to accurately capture specific musical information.

A computer’s artificial intelligence in interactive music is especially demonstrated in the responding/playing stage. During this stage, the computer functions as an effect module, an artificial instrument (a synthetic instrument constructed electronically and algorithmically in the computer through programming), and/or an improviser. Even though the mathematical and sonic complexity generated by the computer, in most cases, is unequal to the inherent complexity of an acoustic sound produced by a human performer, utilizing the complex musical characteristics created by the live musician to shape the computer’s musical expressivity is a widespread attempt to facilitate interactivity between the real and the artificial performers on a balanced level.
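As a minimal sketch of the "effect module" role mentioned above (my own illustration, not code from the paper), a ring modulator multiplies the performer's incoming signal by a synthetic carrier:

```python
import math

def ring_modulate(samples, carrier_hz=440.0, sample_rate=44100):
    """Multiply an incoming signal by a sine carrier, the classic
    ring-modulation effect applied to a live performer's sound."""
    return [s * math.sin(2 * math.pi * carrier_hz * n / sample_rate)
            for n, s in enumerate(samples)]
```

Fed with digitized samples from the listening stage, this returns a processed signal that the computer can play back against the live performer.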




Tiantian Wang - Artistic Characteristics of Contemporary Electronic Composition of Chinese As Reflected from "TaiYi II" and "Moo·Nui"

Tiantian Wang

A branch of modern composition, electroacoustic music is called “high-tech composition” due to its use of computers, audio editing software, and related electronic equipment as a major medium for the creation and performance (or playback) of a work. Electroacoustic music first appeared and began to flourish on the stages of Europe in the early 1950s, and its first introduction to China’s music scene happened thirty years later. It was not until the early 1990s that electroacoustic compositions by Chinese composers reached maturity and established their unique characteristics.

Scientific and technological development has on one hand given birth to ever-expanding new musical “instruments” (i.e. electronics) that are differentiated from traditional acoustic instruments and in many senses challenged and transcended traditional harmony and the twelve-tone system by including noise as a musical element and sometimes even as a principal part of a particular work. In defiance of the traditional aesthetic notion that music should possess a degree of melody, harmony or rhythm, electroacoustic musicians have expanded the ingredients of music to embrace all sorts of possible “sounds.”

On the other hand, however, the creativity-boosting expansion of the realm of music, and the seemingly limitless range of materials it has brought, have also posed greater challenges and trials to composers regarding the accessibility of their works and the establishment of a coherent, logical musical language.

This article analyzes several features of contemporary Chinese electroacoustic works by examining two pieces of electroacoustic music of similar configuration but composed in different periods and with sharply contrasting characters: Xu Shuya’s “Taiyi No.2” for flute and electronics (1991) and An Chengbi’s “Moo·Nui” for viola and live electronics (2002). By relating the works to their respective concepts, compositional processes, and techniques, and with reference both to their actual acoustic effects and to extracted scores and scripts, the author delivers a multi-dimensional analysis that not only captures their individual features but also reaches a comparative conclusion.

The two works reflect differences in their means of composition and manners of performance due to objective restrictions (e.g. electronics, software, hardware) from different time periods. Also manifested are the different messages the two works convey, which result from the composers’ differences in ethnic background and philosophical and aesthetic ideology.



Y

Yin Yang - The Study on the Acoustic Sound & the Application of Audio Signal Processing in Electro-acoustic Music

Yin Yang

Electroacoustic music has experienced five to six decades of development since its beginnings. Compared with the long history of music this is only a rather short period; nevertheless, it has succeeded in establishing its own language, aesthetics, and systems. From the making of sound sources to the texture of structures, and from methods of notation to concepts of composition, electroacoustic music derives entirely from electronic devices. This is a momentous shift from traditional composition, one that embodies not only composers’ burning desire for innovation but also the great advancement of science and technology. Recently, composers have produced a large number of works in this vein with novel skills and ideas, which has accelerated the development of electronic music and further engendered its branching.

In China, electroacoustic music sprouted in the 1980s, when computer music was in the process of displacing tape music. China’s electronic music started twenty years later than in some other countries; however, China has come into line with international practice thanks to the ever-increasing number of research institutes, the flourishing of teaching and composition, and the strengthening of academic exchange. As a result, domestic research, in the form of both writing and composition, is urgently needed.

Because of the lack of related sources in China, I rely mainly on my own experience of study and practice, which raised the following questions for discussion: What is the particularity of a sound source? What can influence that particularity? How is structure generated between various sounds, and what is its character? By what method can a composer conceive the design of sound? It is my hope that some basic laws of making electronic music can be revealed through a combination of sound comparison and compositional practice.

My dissertation is divided into five chapters. Chapter one collates theory in the field of electronic music, chiefly by summarization; chapters two and three demonstrate sound resources and organizational features in electronic music teaching through case analysis, summarizing their characteristics; chapter four discusses problems of space; and chapter five reflects on practice and method in audio technology.



Z

Ruibo Zhang - The Application of an Internationally Peer Reviewed Professional Glossary System, the ElectroAcoustic Resource Site (EARS), in China - CHEARS in Five Years

Ruibo Zhang

This paper is based on an analysis of the ElectroAcoustic Resource Site (EARS) system, an internationally peer-reviewed, multilingual professional glossary system for electroacoustic (EA) music. EARS is a scientific, integrated and dynamic knowledge system that is based on standard terminology and uses the internet as its platform; it can be applied in all aspects of contemporary EA music research. The thesis then proposes a China ElectroAcoustic Resource Survey (abbreviated as CHEARS) as a translation and adoption of this system. Finally, the thesis discusses the application of the glossary in the research sector as a core issue, in order to demonstrate the need for CHEARS in China. Throughout, the thesis points out the gaps between the current status of EA music in China and in the West; accordingly, it outlines the key problems and proposes a theoretical solution. All in all, this work not only builds up a relevant electroacoustic music classification system for China, but more generally provides a peer-reviewed framework for the development of all aspects of Chinese EA music research.

In China, especially at this urgent stage in the development of EA music theory, migrating EARS is a very practical approach to building up a theoretical system. We may either found an original theory or migrate a system that already exists overseas; both are viable approaches. The central issue is the applicability of a standard theoretical system in China: a comprehensive theoretical system will provide a robust framework for the development of EA music research, and hence of curricula. At the same time, we must consider how this theoretical system can be promoted for wider effect in China.



Xiaofu Zhang -

Xiaofu Zhang



Qian Zhou - The Characteristics and Organizational Method of the Timbre-Acoustics Structure: An Analysis of Kaija Saariaho's "Io"

Qian Zhou

The innovation of contemporary musical vocabulary is based on the re-discovery, re-definition and re-arrangement of the relationships between the elements of sound. With the continuous development of acoustic science and sound art, research on rebuilding these relationships has entered a new stage, characterized by an emphasis on everything proceeding from the sound noumenon itself. With this close attention to the essence of sound, contemporary compositions are personalized by novel timbre-acoustics, timbre-texture and timbre structure.

A new timbre-acoustics structure often requires new techniques or new media; moreover, the new music interacts with those new techniques and media. Although their instrumental significance gradually fades over time, new techniques and new media still play an essential role in revealing the meaning of musical works.

Examining Kaija Saariaho’s Io from the perspective of the timbre-acoustics structure and its characteristics reveals several interesting things. Sophisticated arrangements of timbre and acoustics can be found in this highly colorful work, with its intricate textural elements and structural components. Skilled in interactive electronic music, Saariaho has all but restructured the chamber-orchestra sound in this work: real-time modulation of the instrumental sounds produces a total difference in density, quality, balance, volume and so on, so that the texture of Io takes on a new luster. Her experience of the characteristics of sound transition in time lets her handle very flexibly the connection between noises and notes, which translate into and associate with each other in ever-changing ways as the structure evolves. Following a number of sound clues, the detection of the timbre-acoustics structure in Io shows a certain ambiguity: different structures are presented, explicitly or implicitly, from different perspectives. These logically related yet different structures jointly shape Io in its overall appearance.

Compared with other works, Io offers a glimpse of the originality of Saariaho’s musical style. A work of her mature period, Io integrates her lyrical characteristics with an emphasis on sonic detail. Although Saariaho shares a keen sense of the nature of sound with Tristan Murail and Gérard Grisey, her music has grown more emotional. Among the works influenced by research at IRCAM, Io is not the most complex in technique, but it has an extraordinary effect. Viewed against the development of European music since the 1970s, the work reflects Saariaho’s own style, which originated neither from the center of Europe nor from its edge, but perhaps from both of those dimensions, or from somewhere between them.

The study of Io therefore yields much, not only about the work’s historical value but also about the characteristics and organization of the timbre-acoustics structure, and it should have a widespread and long-term impact on creation and innovation.



Yuan Zhou - Reflections on the emergence of electroacoustic music drawing from Ligeti's experiences

Yuan Zhou

Electroacoustic music emerged in the middle of the last century. Strictly speaking, it neither belongs to any particular school nor represents the birth of a new style. Rather, it provides composers with new creative tools and the possibility of new creative thinking, in terms of exploring new timbres, new means of representation and new principles of musical structure.

The paper deals with the two creative approaches of electroacoustic music and conventional instrumental music; these two approaches are by no means conceived as a pair of contradictions. The linguistic materials, the ways of organizing sound, the modes of structure-building, the means of realization and the channels of dissemination of the two kinds of music are analyzed and examined, as are their mutual complementarity and what each can learn from the other. It is emphasized that electroacoustic music is an extension, an enrichment and a complement of the conventional way of composing. The features of the paper are as follows:

1. It does not undertake case analysis of electroacoustic pieces themselves; it offers only a few introductory remarks in the hope that others will contribute more valuable opinions. That is, the paper focuses on the non-electroacoustic pieces created by composers who have experienced electroacoustic composition.

2. It attends to conventional pieces from a different angle, analyzing and examining them through the thinking of electroacoustic music, and exploring the mutual influence and interpenetration of these two kinds of creative thought.

Among modern Western composers, many have engaged in the creation of electroacoustic music. The reason for choosing Ligeti as the subject of this research is that the sound of his pieces is very distinctive, manifesting a diversified creative thought. In addition, his experience of electroacoustic composition affected his cognition of sound and the innovation of his creative thinking to some extent. Although he composed no further electroacoustic music afterwards, its residual mark and underlying influence emerge to a greater or lesser degree in many of his conventional pieces.

After Ligeti set aside electroacoustic composition, Atmosphères (1961) was the first conventional orchestral work he produced (the piece was premiered at the Donaueschingen Music Days on 22 October 1961). It impressed and shocked the modern Western music world with its adventurous sound and brand-new structural thinking. The author hopes to validate the thesis through a particularly comprehensive anatomy of this piece, laying a foundation for further research.

The paper consists of two parts: a macroscopic discussion and a confirming case study.

The first part mainly introduces the background to the emergence and development of electroacoustic music. On this basis, Ligeti’s creative experience during this period is introduced, discussing his study and practice of electroacoustic composition and the further influence it exerted on him.

The second part is an analysis and argumentation of Atmosphères. It discusses the creative thinking behind the work and the mutual relationships among its timbral material, organizing techniques and overall structure.

The epilogue draws conclusions, abstracting the core content of the thesis, and offers a preliminary summary of the trend toward diversified thinking in contemporary musical creation.



Return to the EMS2010 Homepage