5-8 September 2017
Nagoya City University, Kita-Chikusa
School of Design and Architecture, Center for Environmental Design
Communication in/through Electroacoustic Music
The conference theme considers cultural/intercultural communication in/through electroacoustic music. Communication is possible among people who share certain common bases such as language, logic, sense, perception, and listening contexts. What are the common bases for electroacoustic music? How are these manifested in intercultural situations? Topics concerning technical applications within electroacoustic music regarding communication systems such as interaction, telematics, social networking services (SNS), etc. are also welcome.
Papers on other themes concerning any topic within the field of electroacoustic music studies are also welcome. For example, presentations can be given on electroacoustic music history, aesthetics, the analysis of a piece, social aspects, terminology, taxonomy, sound ecology, genres and styles, pedagogy, research trends, etc.
Professor of Musicology at the Toho Gakuen School of Music, Tokyo. He holds a Ph.D. from the Tokyo University of the Arts. In 2008-2009, he was a visiting fellow at Harvard University. His publications include “Ligeti, Berio, Boulez: the end of avant-garde and the future of art” (2005), “The history of Japanese Contemporary Music since 1945” (2007), “Edgard Varèse and his Utopian Idealism: the detail of unfinished Espace” (2009), and “A guide to fundamental musical analysis” (2017).
Rethinking “The Liberation of Sound”
“Our musical alphabet is poor and illogical. Music, which should pulsate with life, needs new means of expression, and science alone can infuse it with youthful vigor.”
Thus wrote Edgard Varèse in 1917, just 100 years ago, in the magazine “391” published by the Dadaist Francis Picabia. Varèse continually searched for the possibilities of new musical instruments beyond the restrictions of traditional ones, and for a new musical concept. Many of his lectures about the future of music were later collected under the title “The Liberation of Sound” by his pupil Chou Wen-Chung. Even after nearly a century, these texts provide invaluable hints relevant to the issues of our time. In these lectures Varèse called his music “organized sound” and repeatedly evoked the image of “sound beams” projected into space. Indeed, in the 1930s he began composing a work titled “Espace” (“space”), but in the end it was never completed. Although he was able to realize part of his spatial strategy in his “Poème électronique” at the Brussels Expo after the Second World War, the technology of the time was apparently never sufficient. Today, more than fifty years after Varèse’s death, how far have we advanced from where he stood?
In this speech, taking Varèse’s lectures into account, I will reconsider “space,” one of the important concepts in electroacoustic music. In doing so I pay attention to the fact that the boundaries of spatial theory have expanded from the physical to the philosophical, psychological, and sociological fields. For example, Heidegger sternly criticized the standard concept of space as homogeneous, centerless, infinite, and serving merely as a container or framework for things to exist in. Abraham A. Moles proposed a “psychology of space” and tried to capture space in terms of “motion.” In the field of sociology, Henri Lefebvre argued that space has active properties and can engage positively in productive processes. He classified the functions of space into three categories: spatial practice, representations of space, and representational spaces. These “spatial turns” are basically geographical concepts targeting social spaces; however, I show that they are equally applicable to thinking about space in music.
Beyond practice? Tracing cultural preferences
In the 1980s, newly developed, faster, and smaller digital processors allowed computers to be used in real time within musical performances in concert situations. In these early days of mixed music, the first personal computers entered the public market, and synthesizers with digital sound processing became established. Set-ups often contained combinations of computers and digital sound processors. As the technology was still very expensive, many compositions were developed in cooperation with institutions that provided access to computers and processors and, if necessary, to technicians and programmers.
Over the last 30 years, computer technologies have developed rapidly. Technologies used in the 1980s are now outdated; the original devices, as well as the often hardware-bound programming languages, are no longer in use. But many compositions were created in a mutual relationship with the development of then-new technologies; the technologies used for a composition's performance are therefore a significant part of these works. This creates new challenges when establishing new performances of mixed music from the 1980s in the 21st century: the original hardware and software are now outdated, but were usually not archived along with the composition. The documentation of these compositions usually includes descriptions of the technologies used and of the set-up, but very rarely contains the original code or papers with transcribed code excerpts. Additionally, there is no established notation for the electronic operations, neither in combination with traditional music scores nor in more experimental notation formats. Audio or video documentation of the premiere was also uncommon.
In my recent research project, with the working title “(Historically Informed) Performance Practice for Computer Music,” I am examining this field along several questions, such as: How is information, or the inherent artistic decision, extracted from the documentation, and how is it interpreted? What guidelines are considered in the discussion of how these compositions should/can/must be played nowadays? How important is it to keep the original technologies and techniques, or should all the limitations set by the outdated technology be erased? Is it then still the same piece? What happens to the artistic idea if it is partly inherent in the original set-up? What do these practical decisions mean for the relationship between the original composition and its re-performance? Is the question of a performance's authenticity important, and, if so, to what extent and to whom?
A first approach was to ask whether a tradition of performing mixed music already exists, one that implicitly guides the whole process of establishing a re-performance. This, however, cannot be considered without taking into account the circumstances of the original production on the one hand and the re-performance situation on the other. Taking into account writings on performance practice in other musical fields, it becomes apparent that the surrounding cultural constraints may also exert a strong influence.
In this paper I discuss whether it is possible to pin down hints of an existing influence of cultural issues. For this, I focus on examples of mixed music from the 1980s composed in Europe that were re-performed after 2005 in Europe and East Asia.
The paper is based on my research in 2015 at the Institut de Recherche et Coordination Acoustique/Musique (IRCAM) in Paris and the Center for Research in Electroacoustic Music and Audio (CREAMA) at Hanyang University, where I gathered information on mixed music compositions of the 1980s in the archives and followed new projects in which compositions from this time period were re-performed.
The discussion is based on the idea that deciding on suitable recent technologies, and on how to reveal technical and artistic information concerning the composition, can be strongly influenced by our (aesthetic) expectations. This includes not only the debate on the existence of a tradition within mixed music performance, but also hints at a deeper discussion of musical formation and self-confidence within contemporary music in Europe and East Asia, given the fact that many recent composers not only work on their own musical language but are also significantly influenced by their surrounding culture.
But what would this mean in detail? Where do inherent aesthetic guidelines and/or expectations derive from? Is the process of establishing a performance led by musical or aesthetic expectations? How are these expectations influenced by inherent aesthetic guidelines, and are these driven by the surrounding culture and/or musical (aesthetic) education? Is there a silent, invisible set of rules concerning expectations of performances, deriving from the surrounding art scene and/or music tradition?
Considering that technology is a crucial element within the artistic process: are cultural constraints perhaps less important for computer music than they were for music in former times, since technology was exchanged between the continents from the very beginning of the developments in digital sound processing? How important is it when David Wessel tells Gregory Taylor in an interview that he used the brand-new Yamaha DX7 synthesizer, a prototype of the Roland MPU-401, and an IBM PC on his 1983 concert tour in Japan, which was organized in cooperation with Roland, and that he tried hard to bring this equipment to IRCAM as well? Was it just one moment of matching technologies when Xavier Chabot (IRCAM/CARL), Roger Dannenberg (Carnegie Mellon University), and Georges Bloch (CARL) started in 1984 to develop an “instrument-computer-synthesizer system for live performances” consisting of an IBM PC, three Roland MPU-401 units, and four Yamaha TX7 modules? How did the use of technologies differ between the various musical contexts, and how did this influence the emerging music?
- Miriam Akkermann
Miriam Akkermann was born in Seoul, Korea. She took a classical flute degree and an MA in New Music and Technologies at the Conservatorio C. Monteverdi in Bolzano, studied product design at the Free University of Bolzano, and studied composition and Sonic Art at the Berlin University of the Arts (GER), where she also completed her PhD in musicology in 2014 with the thesis “Between Improvisation and Algorithms. David Wessel, Karlheinz Essl and Georg Hajdu”. In her recent research, she focuses on the idea of historically informed performance practice for mixed music. Since 2015, she has been a member of the German Young Academy.
Her compositions, sound installations, and performances have been shown at international festivals and galleries, and she has published papers on artistic topics as well as on her research at international conferences. Since December 2015, she has been a Lecturer in the Media Science Department at Bayreuth University. www.miriam-akkermann.de
Efficiency of adopting Interactive Machine Learning
into Electro-Acoustic Composition
In the conventional way of composing electro-acoustic music, huge sets of parameters for synthesizing electro-acoustic sounds must be decided by human composers. In the past, preset parameter sets for synthesis were commonly used. But presets are not very useful in an artistic sense: composition with preset parameter sets becomes patterned and falls into a rut, and changing a single parameter within a preset set can easily break the synthesis. The appearance of ideas for adopting machine-learning techniques, which generate huge parameter sets for electro-acoustic composition by learning from existing sounds, was therefore natural. Such machine-learning techniques generate huge parameter sets from existing sounds automatically.
On the other hand, most machine-learning techniques require either “existing training data, expected good and bad” (in general, correct and incorrect data) or “explicitly written functions to decide whether a result is good or bad” (in general, top-down evaluation functions). This is a restriction of the machine-learning mechanism, and the restriction of creativity when machine learning is applied to art creation is therefore unavoidable. Music generated as the result of machine learning with correct and incorrect data, or with top-down evaluation functions, cannot be considered newly creative, because it is strictly based only on existing work or on a sensibility that has already grown old.
Therefore, as a new method, the author adopts “interactive machine learning” for music creation. Interactive machine learning means that the computer explicitly describes the human composer's sensibility as methods, through a dialogue process between human and computer. If a human composer holds new sensibilities for their music that are not explicitly articulated in the composer's own mind, the computer states them explicitly. Aesthetically speaking, this appears as a relationship between artificial intelligence and human composers.
In this paper, the author summarizes the techniques actually used, then discusses the aesthetic meaning of adopting these techniques and the aims of the developed system, named CACIE (Computer-Aided Composition by means of Interactive Evolution).
In practice, the author mainly adopts Interactive Genetic Programming, a kind of Evolutionary Computation, for the composition of all time-series media such as electro-acoustic and notated works, and is currently also applying it to a support system for human performers in real-time improvisation. For interactive creation, many techniques have been developed and tried in actual composition within this system. The main idea of CACIE is that the tree representation of S-expression programs is well suited to describing music in general: placing electro-acoustic sounds or musical notes, describing envelopes of synthesis parameter sets, and describing all time-series parameters in a work. Typical examples of this kind of representation are the research and works of David Cope, Common Lisp Music, and the OpenMusic system.
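To make the idea of interactive genetic programming over tree-represented music concrete, the following is a minimal Python sketch. It is not CACIE's actual implementation; all names and structures here are illustrative assumptions, and the composer's interactive rating is replaced by a stand-in scoring function (in a real interactive system, `simulated_rating` would query the human composer instead).

```python
# Minimal sketch of interactive genetic programming over S-expression-style
# trees. Trees are nested tuples such as ("seq", ("note", 60), ("note", 64)).
# The "human" fitness rating is simulated by a stand-in function where a real
# interactive system would ask the composer for a judgment each generation.
import random

random.seed(42)

OPS = ["seq", "par"]          # sequential / parallel combination of events
PITCH_RANGE = range(48, 73)   # illustrative MIDI pitch range, C3..C5

def random_tree(depth=2):
    """Grow a random S-expression tree of note events."""
    if depth == 0:
        return ("note", random.choice(PITCH_RANGE))
    return (random.choice(OPS),
            random_tree(depth - 1),
            random_tree(depth - 1))

def mutate(tree):
    """Replace a randomly chosen subtree with a freshly grown one."""
    if tree[0] == "note" or random.random() < 0.3:
        return random_tree(1)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

def leaves(tree):
    """Collect all pitches at the leaves of the tree."""
    if tree[0] == "note":
        return [tree[1]]
    return leaves(tree[1]) + leaves(tree[2])

def simulated_rating(tree):
    """Stand-in for the composer's interactive rating (0..1).
    Here we simply prefer trees whose pitches stay close together."""
    ps = leaves(tree)
    return 1.0 / (1.0 + (max(ps) - min(ps)))

def evolve(generations=20, pop_size=8):
    """Keep the best-rated half of the population, refill with mutants."""
    pop = [random_tree() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=simulated_rating, reverse=True)
        survivors = pop[: pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return max(pop, key=simulated_rating)

best = evolve()
print(best)
```

The point of the interactive variant is precisely that `simulated_rating` is not a fixed top-down function: each generation, the candidate trees would be rendered as sound and the composer's responses would steer the selection.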
An electro-acoustic work titled “Vedana” by Masahiko Inada, accepted for an ICMC 2008 concert, was composed with CACIE. The author interviewed the composer about his experience of using the system and about his creativity with it during the composition of this work. In summary, he answered that composing with interactive machine learning is not only very similar to his conventional way of thinking in electro-acoustic composition, but that his conventional method was also strongly expanded with new ideas through computer aid, and that his creativity was not restricted by the system.
In aesthetic terms, as mentioned above, the creativity-restricting property of ordinary machine learning, which generates new works from inputted existing works or top-down evaluation functions, makes it unsuitable for art creation. The proposed method, composition with interactive machine learning, instead expands human creativity, especially in electro-acoustic musical works.
- Daichi Ando
Ph.D. in Science, born in 1978 in Japan. He studied composition and computer music under Takayuki Rai and Cort Lippe at the Sonology Department, Kunitachi College of Music, Japan. He then studied computer music with Palle Dahlstedt and Mats Nordahl in the Art & Technology International Master's Program of the IT University of Göteborg and Chalmers University of Technology, Göteborg, Sweden. In addition, he received a Ph.D. in science from the Graduate School of Frontier Sciences, The University of Tokyo, Japan, for studies on the application of numerical optimization methods to art creation. Currently, he teaches and conducts research as an Assistant Professor in the Division of Industrial Art, Tokyo Metropolitan University.
Evaluating the need for unified notation: conceptual and creative consequences of communicating electroacoustic music
Electroacoustic music has been one of the most dislocating forces upon traditional western notation, challenging our concepts regarding what music is, as well as our ideas regarding how to express composed music in a manner so that others can reproduce it later. This paper, then, explores the concerns surrounding musical documentation of electroacoustic works, and some of the challenges faced in documenting these works that do not conform to conventional systems of documentation. It also discusses the influence that documentation systems have on the music that is created, and some of the ways in which the means of musical expression influence composition.
If we begin our discussion around the early medieval period, we can see rudimentary forms of notation emerging, beginning with neumes that depicted musical trajectory and gesture. Guido’s later developments in notation allowed for the pitch element to be notated such that performers could learn a piece of music without ever hearing it. This was an important breakthrough for both documentation of music and performance, but it also made a singular parameter of sound, that of pitch, the most important element, the musical substance, with all other elements being determined attributes (Kelly 2014, Lang 2016). This contributes heavily to how we have thought about and how we have written music in western society for several centuries, and even presently, this notational system and expansions of it are still used. The concept of pitch as substance also translated well to musical instruments developed in the following years, which were primarily pitch-based, and therefore a very consistent paradigm of notation and performance could be relied upon.
Electroacoustic music has served a very liberating role, both to the determination of substance, as well as the means of musical expression used. Other parameters of sound, such as timbre and space, can now be used as compositional elements. However, as both of these qualities are multidimensional, they are not as easily represented by notational symbols on a page. This has resulted in a variety of attempts to create unified notation systems and even systems of classification for these parameters. The notation of spatialization, for example, has been a major research focus in the last several years, especially as spatialization systems become more elaborate (Schacher et al 2014, Ellberger 2014).
It is also much harder to notate for electronic instrumental performance because many of the instruments do not have consistent and reliable expected behaviour. We can predict the behaviour of acoustic instruments because they are constrained by physical parameters of sound; however, without some kind of electronic extension, a string instrument, for example, can only make those sounds that are enabled by its physical components vibrating. This is not the case in computer music systems, or in acoustic music in which sounds are pre-recorded and modified. A computer music system can behave any way that a programmer tells it to, which prevents a (completely) universal paradigm from being established for composition or documentation.
Additionally, more electroacoustic works are beginning to draw from “the expanded field”, incorporating elements such as visuals into a work (Ciciliani 2016). I have, for example, composed works in the past which use the performance aesthetic as a compositional parameter (Aska 2015, 2016).
Therefore, even as we have still not established a unified means of expressing electroacoustic music, the material on which we can draw continues to expand. It is simply changing too fast for scholars and composers to keep up with. The other issue is the multidimensional nature of parameters such as space and timbre (and of the elements present in the expanded field).
These parameters cannot easily be represented on paper, but multimedia can be employed; scores were reproduced on paper because that was the resource available, whereas there are now other options, such as video and even interactive tools. Such tools have been extremely useful in other art forms such as video games, which likewise lack a predetermined universal functionality. There is always a common input across consoles, the controller, but each game assigns different functions to the buttons. A player has to re-learn this for every game, yet there are in-game tutorials and written manuals. This could therefore provide a valid starting point for gesturally-controlled music, for example.
The lack of unified notation raises two primary concerns: the sustainability and future performance of electroacoustic works, and the absence of a commonly understood notational system for electronic music. This lack of a unified notation system for electroacoustic works has had a certain compositional effect: most works are performed by the composer, and most primary documentation ends up consisting of a video of the performance. Performers are not usually trained to perform other people's electronic works, and even diffusion tends to be done by the composer. This affects the style of the works considerably, as composers are often writing for themselves and will therefore rely upon gestures, techniques, and procedures that they are accustomed to.
This places electroacoustic music in much the same realm as pre-Guidonian western music: there were several types of rudimentary notation systems that could generally represent the music, but nothing unified, and a marked lack of systems of communication enabling the reproduction of works. The desire to sing works without ever having heard them was the impetus behind the development of Guidonian notation. We face very different challenges in communication, however. While it is often a goal of acoustic music composers to notate the music in such a way that performers can read and perform it without needing to ask the composer questions, technology now enables easy and instant communication. I can send a score from Canada to Japan in seconds, and if the performer (in Japan) had questions, they could not only write me an instant email or message but could call me with video over the internet. This considerably changes the meaning of, and the need for, notated music. However, I argue that notation, or the lack thereof, changes how we compose and create music. There are therefore elements beyond simply conveying a work for reproduction that contribute to the need for notation.
The notation of acoustic music ultimately moved from a very general representation of pitch and loose trajectory to the very specific notation of every parameter. We could therefore view the notation of electroacoustic music as being in a similar phase to that of late Medieval music. The question remains whether a unified system is even necessary, as we can see that the Guidonian system had creative consequences regarding what was considered musically important, and this continued to affect music for centuries (and still does somewhat today). It is also important to consider that the challenges faced previously, including the lack of digital storage of pieces and of access to recordings, no longer exist, and that we have different media available today. Therefore, we may need to re-examine what the best way is to communicate electroacoustic music, so that it can be understood more universally.
- Alyssa Aska
Alyssa is a composer, researcher, and educator who writes both acoustic and electroacoustic works. Her current research explores the aesthetics of musical works that contain electronics, and concerns itself with the way in which such new compositional applications can be integrated into works in a musically meaningful way. She has done extensive work on translating gesture into sound, both as a member of the UBC SUBCLASS and as an individual composer-performer. The results of this work have been presented at several conferences, including ICMC, NIME, SMC, and EMS. Alyssa received her B.Sc. in Music Technology from the University of Oregon under the supervision of Dr. Jeffrey Stolet and Dr. Robert Kyr, and completed an M.Mus. in Composition at the University of British Columbia with Dr. Keith Hamel and Dr. Robert Pritchard. Alyssa is currently a Ph.D. candidate under Dr. David Eagle at the University of Calgary.
The representation of the electronics in a musique-mixte environment: analysing some ontological and semiotic solutions for performance
There are many occurrences of communication and ‘reception of idea’ (Landy, L. (2007). Understanding the Art of Sound Organization. Cambridge, MA: The MIT Press) that are fundamental to the process of developing musique-mixte (instrument with electronics) works. In practice, the creative process requires a nexus between composer-creator and performer(s) operating across both the acoustic and electroacoustic realms. Having a commonly understood and accepted notation, even a language (a common semiotic ontology), for the electronic component of a musique-mixte work, as comparably ubiquitous as stave and stick or gestural notation within the instrumental paradigm, would assist the compositional development process.
Presently, there is no systematized, universally accepted form of electronic notation with which to record performance details in a form that can easily be shared between performers. While there are excellent software tools that can assist in analysing music or performance after the event (e.g. EAnalysis or Sonic Visualiser), composers usually create individual ‘notation’ solutions to represent the electronics component of a musique-mixte work. These solutions are normally tailored separately to meet the needs of both the player's performing score (the instrumental score) and the technologist's score. The ontology of each solution is pertinent, in the first instance, to the musical creation and its creator(s) and, secondarily, to the performance environment.
Four musique-mixte works from the first years of this century provide exemplars for analysing the notation of the digital signal processing component of each work. The works under discussion are creations for pipe organ with live digital signal processing and, in each, the acoustic sound of the organ is the origin of all electronic sound emanations. The combination adds a layer of sonic complexity to the already rich sonic quality of the pipe organ; the musical intention of the processing, and an analysis of how this is represented in the notation of each work, form the core of this paper.
Consideration of the semiotic ontology of each ‘system’ used to represent the digital signal processing permits some conclusions to be drawn regarding the information required by each of the participants in the musical performance. With the exception of the work by Thurlow/Halford/Blackburn, the works are scored for an organist and technologist(s); Andrian Pertout's composition also includes flutes. The electronic notation solutions in each work provide the instrumentalist with a representation of the electronic component of the work, which may have a currency beyond the exemplar works. The works under discussion are:
- Andrian Pertout (2007) Symmetrié Intégrante for organ, flutes and electronics, Op. 394
- Lawrence Harvey/Andrew Blackburn, Eight Panels for organ, live electronics and sound diffusion system
- Steve Everett (2005) Vanitas
- Jeremy Thurlow/Daniel Halford/Andrew Blackburn (2015) Ceci N'est Pas Une Pipe
Reference will also be made to Uijlenhoet, R. (2003, rev. 2009) Dialogo sopra i due sistemi for organ and quadraphonic live electronics. The backgrounds of the composers are diverse: Australian, Chilean, American, Dutch, and British. While two of these works (Eight Panels and Ceci N'est Pas Une Pipe) use Cycling '74's Max to create a software patch which serves in part as the ‘notation’ of the electronics score, the others each use different software/hardware combinations, including Kyma and SuperCollider. The electronic notations in the organists' scores are equally variable, ranging from the indication of a ‘scene’ change to detailed gestural instructions for both technologists and organist. Every work provides quite specific ‘recipes’ for (re)creating the sound palette, which the performer/technologist may follow in conjunction with the organist's score.
These compositions each provide an opportunity to delve into issues raised earlier (e.g. Morrison (2014) Graphical Music Representations: A Comparative Study Based on the Aural Analysis of Philippe Leroux's M.É., EMS 2014 proceedings). While not intending to provide a solution to the lack of a common electronic notation (which will likely be evolutionary in development rather than imposed), the paper will identify how the issue has been approached in the selected compositions, noting both the commonalities and the distinctions between them.
- Andrew Blackburn
Dr Andrew Blackburn is a Research Fellow at Universiti Pendidikan Sultan Idris (UPSI), Malaysia, having been appointed in 2011 as Senior Lecturer in Music and, in 2015, as Deputy Director of the UPSI Education Research Laboratory. His research projects include higher-education training and assessment, intercultural music, and leading projects on organ performance, particularly pipe organ and live electronic processing of sound (DSP), new forms of music representation, and musical histories in Malaysia. Andrew's doctoral thesis is The Pipe Organ and Realtime Digital Signal Processing: A Performer's Perspective. Andrew continues to work with the range of expertise derived from his earlier career in Australia: music education, music creation, keyboard performance, music technology, and choral conducting.
Andrew has performed widely as soloist and with orchestras and ensembles all over the world, including Australia, Malaysia, England, Sweden, Denmark, Germany, Hungary, Italy, and Spain.
Other people’s sounds: examples and implications of borrowed audio
The starting point for much electroacoustic music is the capture of audio from the sounding world around us. Recorded sound (field and studio recordings) provides the composer with pliable audio data, inspiration, and impetus for the creation of new work. The content of these audio files varies widely, including sounds from musical instruments, inanimate objects, spoken languages, and environmental landscapes. Composers working in the field of electroacoustic music and all its associated formats and subgenres (soundscape, live laptop improvisation, acousmatic, and noise-based, to name a few) rely on the presence of audio, whether from synthesized or recorded sources, in order to move forward with a new work. Sound's fundamentality to the composition of electroacoustic music is clearly understood within this discourse; what is less clearly defined are the finer details relating to external sound sourcing, especially when the composer looks beyond their own materials, to others and/or to digital resources (e.g. sound archives, sound libraries, and sound maps), for this starting-point inspiration. On the surface, it can seem that by removing the sound-recording stage of the process, the composer forfeits a direct connection with the physical source, along with memories of the sound-capturing act. On the other hand, for some, skipping this step is not even an option, especially for composers who pride themselves on their well-honed microphone techniques and noise-minimizing skills, since the recording of one's own sound may be viewed as the first stage of the compositional process, in which a compositional imprint is firmly forged and found. A given composer may have a recording ‘style’ or pattern, and this approach to recording can seep into his or her choice of sound materials. Take the example of a soundscape artist who braves the wind and rain with highly specialized and adapted recording equipment.
Their techniques for shielding the microphone from direct gusts and torrential downpours provide a striking contrast to the composer who inserts lavalier microphones into a bottle of fizzy water to capture the liquid's microscopic effervescence within the calm, acoustically dry recording studio. In short, a composer can choose and create the sounds they want to work with in order to achieve specific, personalised end results. Chris Watson's recording expertise comes to mind here, with his skillful use of ‘super compact particle velocity microphones’ to capture minute, barely-there caterpillar sounds. Sound recordings can in some way reflect the composer's personal aesthetic, demonstrating creative planning at a very early stage in the compositional process. Contrary to this, there are a number of instances where composers choose not to work with sounds they collected directly; some in fact never record their own sound, as found in the numerous cases of sampling or plundering. Composers who seek out existing pre-recorded sources juggle their own creative integrity with the often-requisite sense of homage or respect assumed in these situations. This paper takes on this issue, searching for specific examples, circumstances, and outcomes of sound borrowing. Issues of sound quality, personal preference, and dealing with dated or cultural remnants all undoubtedly arise when examining the viewpoints and musical outputs of composers using other people's sounds.
The paper considers the aspect of originality and how this can be lost when other people’s sounds are sourced and used in a composition. Adrian Moore talks of achieving originality through sound recordings: “how to make your work original? …record your own… they are immediately personal to you and the playback of these sounds comes with the guarantee that ‘you were there’. It is immediately easier too, to be able to re-trigger the emotions felt as you recorded the [sounds].” With this in mind, we can start to see the personal attachments and connections with sounds that we might miss out on if we make use of other people’s material. Sound recordist Antye Greie supports this viewpoint regarding her own field recordings: “they [field recordings] were my memories and my property and that meant a lot to me, like a bass drum made out of the pop of my lips recorded in Belgrade, or the hi-hat sounds made of snow I was crushing… these sounds made the songs more meaningful to me.”
This paper will challenge the notion of originality loss and instead search for the potential benefits and upsides of using externally sourced sound. The paper will present perspectives on the intricacies and nuances of borrowed sound through a collection of case studies in which composers have looked to others for sound materials. These include:
- The ‘Prix Presque Rien’ competition (questionnaire responses from Daniel Blinkhorn and James Andean).
- Cormac Gould’s CoreCore (2015), demonstrating exclusive use of sound archive sources, constructed using only freeware software.
- Compositional responses to the European Space Agency (ESA) ‘Estrack 40th Anniversary Sound Contest’: Nikos Stavropoulos’s work Metakosmia (2015).
- Pete Stollery’s ‘Three Cities Project’ (2013) and the sharing of sound recordings from foreign cities (Stollery, Kim and Whyte).
- Instruments INDIA composition project (2016-17) – three composers commissioned to work with the Instruments INDIA sound archive.
Examining case studies of sound borrowing aims to demonstrate the variety of perspectives and motivations composers have in selecting and integrating these materials into their own aesthetics. Seeking sounds from archives and libraries can be viewed as advantageous for the accessibility this affords the composer; however, adopting these sounds also means adopting any quality issues they carry with them.
A further perspective the paper will introduce concerns composers and audiences of works that borrow culturally significant sound, drawing on the author’s own experience in establishing the Instruments INDIA composition project, which commissioned three composers (Steven Naylor, Greg Dixon and Ish Sherawat, 2017) to work exclusively with a sound archive of Indian musical instrument recordings. Introducing composers to foreign and unfamiliar sound sources had an educational and creative impact upon the composers’ compositional processes and their preconceptions of instrument sounds and capabilities.
- Manuella Blackburn
Manuella Blackburn is an electroacoustic music composer who specializes in acousmatic music creation. She has also composed for instruments and electronics, laptop ensemble improvisations, and music for dance. She studied Music at The University of Manchester (England, UK), followed by a Masters in Electroacoustic Composition. She became a member of Manchester Theatre in Sound (MANTIS) in 2006 and completed a PhD at The University of Manchester with Ricardo Climent in 2010. Manuella Blackburn has worked in residence in the studios of EMPAC (Experimental Media and Performing Arts Center, New York), Miso Music (Lisbon, Portugal), EMS (Stockholm, Sweden), Atlantic Centre for the Arts (Florida, USA), and Kunitachi College of Music (Tokyo, Japan). Her music has been performed at concerts, festivals, conferences and gallery exhibitions in Argentina, Belgium, Brazil, Canada, Chile, Costa Rica, Cuba, France, Germany, Italy, Japan, Korea, Mexico, Portugal, Spain, Sweden, and the USA. She is currently Senior Lecturer in Music at Liverpool Hope University.
Electroacoustics as transcultural dialog
in Jonathan Harvey’s music and thought
On one hand, beyond the obvious qualities of the music of instruments or voices, technology gives access to another level of expression. Despite the power of timbre in instrumental music and the strength of words in vocal music, the infinite possibilities of simulation and treatment offer possibilities at once stronger and subtler. Over the last seven decades, analog and then digital electronic technologies have been used across different musical styles, in a wide range of writing techniques, aesthetics and goals.
On the other hand, since Le Désert by Félicien David and even before, European composers have been fascinated by the Orient, its cultures, arts, myths, religions and philosophical trends. From Debussy to Murail or Mâche, many occidental composers have understood the unbelievable resources and richness of a multicultural approach. In addition, Tōru Takemitsu, Yoshihisa Taira and plenty of other Asian composers show us more than a simple interest in occidental culture. All of the greatest composers aim at a sort of hybridization or fusion of oriental and occidental cultures, without any tendency toward self-denial or the impoverishment of worldwide standardization.
My purpose is to demonstrate how effectively technologies can facilitate and deepen communication between different cultural areas. Multiple examples can be found in the domain of art, and especially in musical composition. However, I will concentrate my investigation on a composer who considered the powerful possibilities of technologies as a way to improve the subtlety of expression. In fact, far from spectacular effects, computers are able to provide the right tools to explore delicate frontiers in the art of composing. I have already written about the efficiency of technologies in delivering refined and sophisticated emotions, but this time I will explore the role of electronics in the intercultural relationships inside composition.
The English composer Jonathan Harvey (1939-2012) was particularly interested in both domains: electronics, and the Orient through Buddhism. Early in his career he discovered – thanks to Milton Babbitt at Princeton University – the ability of computers to comply with all sorts of compositional and expressive ideas. One of Babbitt's great qualities was his excellent knowledge of extended serialism. He taught his students the means of controlling all sorts of variables in the composing process. Or, to put it on a higher and more accurate level: Babbitt was one of the best authorities on structuralism. Thanks to him, Harvey combined in Timepoints (1970) the power of computational processes with concepts resulting from structuralism. This was before Harvey’s real interest in the elevation of mind through art and music.
Harvey then developed a deeper and deeper curiosity about oriental cultures, and especially Buddhist philosophy. Born and raised in England, he came from a family background clearly linked to Christianity, and more specifically to the Church of England. However, before long he enlarged his cultural interests, as many other artists did, toward other cultural areas. Fascinated by both the freedom and the intensity provided by Buddhism – a philosophy of life more than a religion in the occidental sense – he tried to understand this different world. His goal was to assimilate the two different approaches in his art. Reading, practicing meditation, even traveling to observe and talk with monks, he built his own spiritual world.
From the 1980s until his death, Harvey’s music used the strength of electronics as a tool to express this enlarged “spirituality”, as he put it. The pieces he composed at Ircam prove how the supposed coldness of technologies can actually be useful in creating subtle expression. I will take precise examples from this period to demonstrate, through short bits of formal analysis, how technologies are more than just a means to enhance trivial sound effects or a resource for better controlling formal variables. In my presentation, I will base my arguments on Ritual Melodies, Mortuos Plango, Vivos Voco, the String Quartet No. 4, Speakings and other pieces. Not all of these pieces are related to the Orient, but the means used in his general research are there. My book on Harvey, currently in the editing process, will also show this duality (dvaita) transformed into unity (advaita) on several levels: technologies/poetic creation, control/refinement, Christianity/Buddhism, Orient/Occident. His main idea, which he wrote about, was to musically intertwine various cultures and their different ways of dealing with the mysteries of life and death.
- Bruno Bossis
Bruno Bossis is a Professor of musicology, analysis and computer music, director of the Musique laboratory and a permanent researcher in the research team Arts : Pratiques et poétiques (EA 3208) at the University of Rennes 2 (France). He wrote the book La voix et la machine, la vocalité artificielle dans la musique contemporaine. Editor of several books, he is the author of numerous articles on contemporary music, analysis and electroacoustic music. He is currently working on a book on Jonathan Harvey (Symétrie).
The cross-use of electroacoustic music and traditional funeral ritual music of Taiwan in a dance performance The End of the rainbow
Performed by the “Mauvais Chausson Dance Theatre” group from 9 to 11 December 2016 at Song-Shan Cultural and Creative Park, Taipei, the dance performance The End of the rainbow（《彩虹的盡頭》）took its inspiration from one of the popular funeral performing arts of Taiwan, 牽亡歌陣 (Qian-Wang-Ge-Zhen): according to local belief in Taiwan, this ceremony leads the soul of the deceased through the chthonian path and the gates of the netherworld, with the gods’ blessings and protections, to arrive at nirvana with the Buddha. Researching Qian-Wang-Ge-Zhen over several months of field investigation in southern Taiwan, the choreographer and four dancers studied with a master of this art and themselves took part as performers in ceremonies for families of the deceased. Although the dance movements were inspired by Qian-Wang-Ge-Zhen, and the music was also based directly on recordings of the master’s singing, the main part of the performance used mixed electroacoustic music. According to the music designer for this performance, when she received the choreographer’s instruction to insert electroacoustic music into such a performance, she was confused. Yet from an audience’s point of view, the two elements of this cross-use do not disturb each other; on the contrary, I find this combination interesting.
Actually, in Taiwan, the insertion of digital technology and visual art into the popular folkloric rituals that call out to divine spirits has been found for decades. Although inspired by the funeral ceremony of Taiwanese folk belief, the choreographer and the dancers did not simply reproduce the body movements of folk rituals, but developed them as the basic motive of the performance’s body vocabulary. The End of the rainbow transformed the folk rituals into a new and original performing art, and provides a different experience to the participants. For this excellent performance, The End of the rainbow was recognized by the Taishin Arts Award in 2016.
This paper aims to present the cross-use of electroacoustic music and the traditional funeral ritual music Qian-Wang-Ge-Zhen of Taiwan in the dance performance The End of the rainbow, and to discuss the socio-cultural ramifications of electroacoustic music through the performance.
- Hui-Mei Chen
Hui-Mei CHEN began her professional career at the age of twenty as a flutist of the National Symphony Orchestra of Taiwan and acquired her official teaching post at about the same time. After her studies at the CNR de Paris, she assumed an even more active role on Taiwan’s musical scene, as a pedagogue as well as a performer. She obtained her DEA degree at the University of Paris IV/Sorbonne and IRCAM during 1996-1997, which launched her new career as an academic researcher. In her third venture to Paris, she obtained her Ph.D. with highest honours in Music and Musicology of the Twentieth Century at the University of Paris IV/Sorbonne in 2007, with a dissertation on the Japanese composer Yoshihisa Taïra (1937-2005). Since then she has been a very active researcher, invited to conferences both in Taiwan and abroad. After teaching at several universities in Taipei, she took up a full-time assistant professor position at Shih Chien University in Taipei in August 2017.
CHEN, Yi-Shin / CHEN, Kuan-Ting / LIU, Hsien-Chi Toby / SARAVIA, Elvis
Automatic Beatmaps Generation for Electroacoustic Songs in Rhythm Games: an Audio Data-driven Approach
The demise of art music, such as classical music and most recently jazz, as most famously argued by Neill (2002), has to do with the inability of composers to incorporate popular culture elements and demands. Neill further advances the disputable claim that our newly discovered interest in electroacoustic music is due to the rhythmic complexity and sound dynamics that this type of music offers. The composition of electroacoustic music through the incorporation of new aesthetic approaches makes such music more favorable and pleasurable compared to music without rhythmic structure. There is no denying that the artistic nature through which electroacoustic music is composed makes it an interesting source of inspiration for music-based applications. Therefore, rather than dwelling on the social reasons for the demise of classical music and its decreasing audiences, we combine modern techniques—electroacoustic composition and gamification—to help revive interest in traditional and classical music. In other words, we envision multimedia applications, with human-gestural interaction, that can influence users, regardless of musical taste, to sympathize with the least popular musical styles.
This study aims to disseminate the concept of gamification applied to electroacoustic music as a viable way to increase the outreach and engagement of falling-out-of-favor music across different cultures. That said, we shall focus strictly on the technical aspects of the proposed gamified tool, which is based on a collection of several popular electroacoustic pieces. Gamified tools that incorporate electroacoustic music, more widely known in this space as rhythm games, have been found to bring numerous benefits, ranging from improved mental well-being to physiological improvement (Cevasco et al. 2005). Another distinctive benefit, and one which we investigate thoroughly, is the increase in engagement with and appreciation of classical and other least popular music. More formally, we hypothesize that higher engagement and retention time with our rhythm game will increase users’ liking and appreciation of traditional and least popular music, such as classical music.
Our main contributions, as they correspond to the aims of this project, are as follows:
- The implementation of an audio data-driven approach to automatically generate beatmaps powered by electroacoustic music. More importantly, through this approach we aim to address the music-genre limitation problem present in related rhythm games.
- A measurement of the effectiveness of rhythm games powered by electroacoustic music, with the ultimate goal of attracting users to explore traditional and unconventional music genres. In this respect, we envision the widespread use of multimedia applications to revive interest in classical music and other music that has, over the years, inexplicably lost popularity.
As a long term goal, we plan to capture and analyze users’ real-time game-playing patterns to conduct extensive studies on the effects of electroacoustic music in the context of culture and other human aspects such as language, perception, and emotion. Moreover, with these complex datasets and multi-featured, multimedia tools, we can now begin to answer fundamental research questions about the cultural benefits and effects of electroacoustic music and the art of it.
Rhythm game powered by electroacoustic music
Rhythm games have long been mainstream in the interactive games space. In the standard rhythm game, players tap on the hints of beatmaps to score points as songs play in the background. Through tapping these hints, players are consciously or subconsciously interacting with audio and music features like drum beats, chords, tempo, or even melodies. With such interactive features, previous studies have shown that rhythm games, besides improving individuals’ cognitive performance, can also be applied in pedagogical scenarios (Abramson 1997; Overy 2008). Thus, these games come with purposes beyond entertainment. However, most rhythm games are confined to fixed hand-crafted beatmaps and a limited range of songs. These limitations cause players to disengage easily from musical games as they find no excitement in the monotonous stages, and hence the designated missions or intended purposes of the games are easily hindered. Rhythm games built around electroacoustic music inherit the intrinsic and artistic nature through which this type of music is composed. As with many other successful interactive games, our assumption is that the unexpectedness and dynamism of beatmaps in our rhythm game need to be enhanced in order for players to experience positive emotion while playing it. For instance, the rhythm game must effectively embed the element of surprise, which is crucial to boost the stickiness of interactive games, regardless of genre.
In this study, we propose an audio data-driven approach that can automatically generate beatmaps for rhythm games. To keep players intrigued, our generated beatmaps should possess a few characteristics. First, the hints in generated beatmaps must be perfectly aligned with rhythm points, which increases the chance for users to experience reward sensations. To obtain rhythm points, percussion parts in electroacoustic songs are transformed into note patterns. The longest common subsequence (LCS) algorithm, as surveyed by Bergroth et al. (2000), is then used to extract patterns that appear frequently in songs. In this way, beatmaps can be composed of significant patterns of rhythm points derived from the song. Second, we must ensure that our generated beatmaps encourage players by offering sufficient excitement. To this end we introduce DifficultyScore, which combines a repetition method with the metrical complexity measure proposed by Toussaint (2002) to evaluate the complexity and intensity of the generated hints. Based on this score, players can choose what difficulty level they prefer to challenge. Finally, with this technique, beatmaps can be generated automatically from any variety of electroacoustic songs, so the number of songs offered in rhythm games no longer suffers from a genre limitation problem; any song can be adopted as a stage in the game. The playability and stickiness of rhythm games are also expected to improve with a sufficient supply of generated beatmaps.
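The two generation steps described above (extracting frequently recurring rhythm-point patterns with LCS, and scoring hint difficulty from metrical complexity) can be sketched in a few lines of Python. This is a minimal illustration only: the function names, the 16-step metrical weight profile, and the way density and complexity are combined are assumptions of this sketch, not the authors' published implementation.

```python
# Minimal sketch of the two beatmap-generation steps described above.
# All names (lcs, metrical_complexity, difficulty_score) and the 16-step
# weight profile are illustrative assumptions, not the authors' code.

def lcs(a, b):
    """Longest common subsequence of two note-pattern sequences,
    via standard dynamic programming (cf. the Bergroth et al. survey)."""
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if a[i] == b[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    # Backtrack to recover one longest common subsequence.
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1])
            i -= 1
            j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return out[::-1]

# Hypothetical metrical weights for a 16-step bar: strong metrical
# positions carry high weight (hits there feel "easy" to players).
WEIGHTS = [5, 1, 2, 1, 3, 1, 2, 1, 4, 1, 2, 1, 3, 1, 2, 1]

def metrical_complexity(onsets):
    """Toussaint-style idea: onsets on weak metrical positions
    contribute more complexity than onsets on strong ones."""
    return sum(max(WEIGHTS) - WEIGHTS[p % len(WEIGHTS)] for p in onsets)

def difficulty_score(onsets, steps=16):
    """Toy DifficultyScore: hint density times metrical complexity.
    Higher values suggest a harder beatmap."""
    return (len(onsets) / steps) * metrical_complexity(onsets)
```

In such a pipeline, `lcs` would be run over pairs of onset sequences extracted from a song's percussion track to find its recurring rhythm points; those points become the hints of a candidate beatmap, and `difficulty_score` lets the generator sort candidates into the difficulty levels offered to players.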
To avoid bias of any sort, several participants with different cultural backgrounds and genders were invited to take part in the experiment. We then validate our results and test our hypothesis of whether our rhythm game, built around several electroacoustic pieces, increases player engagement with music of diverse genres, more specifically least popular music. Preliminary experimental results reveal that, despite the participants' preferences for particular musical styles, their listening times increased significantly as they engaged with the rhythm game, regardless of the songs playing in the background. In comparison to listening to the raw songs, where disengagement was observed to occur frequently, participants willingly spent more time interacting with the unpredictable beatmaps generated by the rhythm game. Overall, the proposed automatic beatmap generator benefits the playability and stickiness of rhythm games. This in turn leads to a rise in user engagement with least preferred musical styles, which would otherwise remain unexplored. It is important to point out that one of the main challenges of this study is the difficulty of assessing whether our rhythm game increases likeability towards particular music genres over time. In other words, immediate engagement is easier to measure than user preferences over long periods of time.
In conclusion, the use of electroacoustic music in rhythm games presents an opportunity to revive interest in classical and other least popular music. The ability of our rhythm game to support unlimited music genres sets the stage for future multi-capability studies and the implementation of diverse music applications. For instance, with the proposed technology any type of music can be adopted and used to automatically generate beatmaps, which is potentially useful for studies of understudied musical styles. Besides presenting empirical evidence on the usefulness of electroacoustic music via rhythm games, our goal with this study is to discuss the importance of electroacoustic music in the revival of art music itself. This implies that we need to investigate more deeply the underlying factors and effects of electroacoustic music in rhythm games in order to begin reaping societal benefits from it.
• Abramson, R. M. (1997). Rhythm games for perception & cognition. Alfred Music Publishing.
• Bergroth, L., Hakonen, H., & Raita, T. (2000). A survey of longest common subsequence algorithms. In String Processing and Information Retrieval, 2000. SPIRE 2000. Proceedings. Seventh International Symposium on (pp. 39-48). IEEE.
• Cevasco, A. M., Kennedy, R., & Generally, N. R. (2005). Comparison of Movement-to-Music, Rhythm Activities, and Competitive Games on Depression, Stress, Anxiety, and Anger of Females in Substance Abuse Rehabilitation. Journal of Music Therapy, XLII(1), 64–80.
• Neill, B. (2002). Pleasure Beats: Rhythm and the Aesthetics of Current Electronic Music. Leonardo Music Journal, 12, 3–6.
• Overy, K. (2008). Classroom rhythm games for literacy support. Music and dyslexia: A positive approach, 26-44.
• Toussaint, G. T. (2002). A mathematical analysis of African, Brazilian, and Cuban clave rhythms. In Proceedings of BRIDGES: Mathematical Connections in Art, Music and Science (pp. 157-168).
- Yi-Shin Chen
Professor Chen joined National Tsing Hua University (NTHU) in 2004. She served eleven years as director of the International Master Program of Information Systems and Applications, one of the two international master's programs at NTHU. Professor Chen is a passionate educator and researcher; throughout her career, 60 students have graduated under her supervision, including 16 international students and 10 students without an IT background. Two received Ph.D. degrees and 58 received master's degrees. All of Professor Chen's students received one-on-one, personal advice each week, even during her three maternity-leave periods.
Professor Chen is passionate about increasing society's benefit through her research efforts. Aware of the declining audience for classical music (through her ten years of professional training as a pianist), she investigates multimedia applications to attract potential audiences. Out of concern about media monopolies, she has focused her research efforts on Web intelligence and integration. Her goal is to create a social media interface that explores and visualizes Web data easily.
On the difficulty of considering the production
of the NHK electronic music studio as a continuity
It is known that production at the NHK electronic music studio does not exhibit any theoretical or formal unity obvious enough for us to resort easily to synchronous, systematic methods of analysis that would allow its essentialization: numerous composers worked there, none of whom stood out – or rather sought to stand out – as a figurehead around which to establish a common direction and precise features. The play of opposites used to describe the production of musique concrète at the Club d'essai / GRMC in France and the electronic music of the NWDR in Germany therefore cannot be applied – or only in a more limited fashion – to grasp what was created at the NHK. Beyond an analysis of the aesthetics of the pieces, which we could constitute – to put it simply – around specific technological, technical and stylistic characteristics, it thus appears necessary to highlight the particularities shared by the cultural context and the work environment in which they were produced; in other words, the idea would be to extract the core properties on which all those previously mentioned depend.
To allow that, developing a diachronic history of the studio's operation is an obvious tool to consider. Indeed, if the studio is where the whole repertoire was created, it seems natural that it would de facto represent – with high certainty for the researcher – a structural setting, a persistence of stable landmarks, encompassing and levelling the music pieces through the action of agents we can believe to be irreducible. Yet some archives tend to show that this cannot be taken as self-evident. The purpose of this paper is to underline, with a few examples backed by references, how challenging it proves to apprehend the production of the studio in terms of continuity.
First of all, the most essential question is what we really mean by the NHK electronic music studio: that is, which structure, in space and time, we are referring to. Was the studio created in 1954, when Moroi Makoto (the first to write about Cologne electronic music, presenting its principles that same year in an article published shortly before in the journal Ongaku geijutsu) joined a team of NHK technicians to perform sound experiments with existing equipment? Or in 1955, to support the conception by Mayuzumi Toshirō of the first electronic music studies, based on his analysis of texts by Robert Beyer and Herbert Eimert? Or even in 1956, with the creation by Moroi and Mayuzumi of the first original piece, Shichi no variēshon? Could what is known today as the electronic music studio even be a construct posterior to 1956?
It turns out that all these possibilities are at once correct and insufficient. In fact, the first mention of the existence of a "studio" seems to appear in writings by Moroi published in 1957 in Ongaku geijutsu to highlight the work done on Shichi no variēshon. Yet the NHK's activity reports, presented through almanacs, apparently only mention a "laboratory" after 1964, when the Audiovisual division was moved to a new building in a different part of Tōkyō – a fact that an article in the Asahi Shinbun highlights with its title: "An electronic music studio. For the first time possible at the NHK". However, we now know, thanks to an autobiographical text by Shibata Minao published in 1995, that what the enterprising and ambitious composers had already considered a studio had been only a set of machines, first stored one after the other in a corridor of the building, generally accessible only after the working hours of the recording and broadcasting studios, then in the observation room of the hall dedicated to symphonic concerts. Perhaps as significant is the late but continuous mention in those almanacs, in sections appearing after 1961, of the production of electronic music at the NHK: 1960 being the year when Ondine by Miyoshi Akira earned the first prize at the Italia competition organized by the RAI to reward the best television and radio programs, we may think that this success in fact initiated an awareness of the value of a creative endeavour about which it then became necessary to communicate more.
Second, which pieces are really considered as coming from the studio? Numerous pieces produced for radio serial broadcasts, special programs or specific institutional uses have an uncertain status and do not appear in all the inventories made over the years. A creation such as Rittai hōsō no tame no myūjikku konkurēto, for instance, produced by Shibata Minao in 1955, whose origin is not in doubt (it notably appears on the compact disc anthology Oto no hajimari o motomete, dedicated to the work of the NHK electronic music studio), is nevertheless only presented as a production of the studio after 1968, after appearing in the International Electronic Music Catalog compiled by Hugh Davies. Lists did not refer to it before then, and for a good reason: Shibata mentioned in 1995 that his piece, produced during the same period as Mayuzumi's studies, was conceived in a different recording studio from theirs; that studio, already established and operational, probably offered broader stability and ease of work.
These few facts – obviously far from comprehensive – illustrate how delicate a task it is to confine the production of the studio within known markers, as it cannot be said that every NHK production taking advantage of electronic technologies or the technical capabilities of the tape recorder automatically becomes a piece from the electronic music studio. It is all the more perilous considering that there are no official archives of the studio, and that the researcher's only available recourse for treating the data is to rely on scattered documents and discographies. In this light, to achieve simplicity and clarity, it is not surprising that we would need to rely on a rational and quickly workable classification, as made available by inventories produced a posteriori. Although convenient, allowing direct entry into the repertoire under scrutiny, this cannot suffice to extract the potential characteristics of the aesthetic identity of the production of the NHK electronic music studio; nor can it help determine the stakes of its position within international production. Finally, it is therefore these difficulties that a method yet to be defined should attempt to manage and minimise.
- Jeremy Corral
Currently a doctoral student in Japanese Studies at Inalco in Paris and a researcher at Ōsaka University of Arts, I am writing a thesis on the first years of activity of the NHK Electronic Music Studio. My research interests include the history of contemporary and experimental music in Japan, as well as Japanese cinema and media.
“Six Japanese Gardens” by Kaija Saariaho:
eastern and western temporalities
“Six Japanese Gardens” (1994) was, even before its existence, fundamentally an intercultural work: its composer, Kaija Saariaho, belongs to both Finland and France, and the piece was commissioned by the Kunitachi College of Music in Tokyo. It was also written in memory of the Japanese composer Toru Takemitsu. Does the work reflect this interculturality? To answer this question, I first look for relations that can be found between the sounds themselves and whatever may reflect a way of thinking or a symbolic dimension. Can these two aspects then be intercultural? It is easy to hear that an intercultural dimension lies in the very choice of instruments: their timbres sound both occidental and oriental, with similar functions. Concerning the symbolic dimension, I will study the relation between formal analysis and philosophical analysis, drawing on both eastern and western philosophy. According to Saariaho herself, “Six Japanese Gardens is a collection of impressions of the gardens I saw in Kyoto during my stay in Japan in the summer of 1993 and my reflection on rhythm at that time.”
The first movement, titled “Tenju-an Garden of Nanzen-ji Temple”, can be considered an electroacoustic piece because it contains an electronic part and because its instrumental part consists only of a pulse passed between different percussion instruments, creating variations of timbre. As the first movement, it can be considered an introduction that contains and exposes the purpose of the whole work. The analysis of this movement will proceed in two steps.
1) Formal analysis
As a first step, I will make a formal description through a transcription made with the acousmoscribe, a system of signs describing sounds from a phenomenological point of view, which enables the relationships between instruments and tape to be analysed through the signs. Tape and instruments are considered using reduced listening and can thus be compared. Moreover, the description of the shape and matter of sounds reveals structures that present some isomorphisms with structures of time, and it also enables a comparison between the electronic part and the instrumental part. Finally, there is both an opposition and a link between the tonic timbre of the instruments and the inharmonic timbre of the tape.
This work is obviously symbolic and speaks about time through different kinds of rhythm. A strong opposition exists between the instrumental part and the electronic part: the work opposes two radically different temporalities, pulsed and oriented time in the instruments, and smooth, static time on the tape. Analysing this work with the acousmoscribe allows both a formal and a symbolic analysis, and thus makes it possible to study the semiosis, the way the plane of content works with the plane of expression.
2) Symbolic analysis
If musicians can easily speak about rhythm from a formal point of view, philosophers are better placed to analyse concepts and to put temporalities into words: how does this piece speak about time, and what is time? Can music be a way of thinking the world, and can it have a hermeneutic function? I will try to answer these questions using analyses of time by western philosophers, in particular Martin Heidegger and Jacques Derrida, and by the Japanese philosopher Nishida Kitarô. Saariaho's own approach allows this comparison. She says: “Music is a pure art of time, and the musician – composer or not – builds and controls the experience of the flow of time. For music, time is material, and by this fact, to compose is to explore all the forms of time.” (Saariaho, 1997). “Tenju-an Garden” exposes different kinds of time.
2.1 In this piece there is an opposition between the pure instant and the duration of time, a problem studied in great depth by these philosophers. Of course, on a basic level, this opposition is materialized by short percussive sounds and long, flowing threads of sound. But on a more elaborate level, we can easily hear that the percussive sounds are repeated and create duration, while the long sounds maintain the listener in a perpetual instant. This opposition creates a dialectic studied by Nishida: “It [time] must be considered as continuity of discontinuity.”
From a western point of view, Derrida expressed the same idea in a different way: “The impossible co-maintenance of several present maintenants is possible as maintenance of several present maintenants.”
2.2 The occasional superposition of instruments, and the interleaving and overlapping of sound layers on the tape, turn each moment both to the past and to the future, as these philosophers thought. Nishida wrote: “… we are touching the infinite past […] But at the same time [...], we are also confronted with what determines us from the infinite future...”
Heidegger, in a different way, spoke of “the ekstatic horizontality of time”, meaning that in each moment past, present and future coexist.
2.3 These oppositions coexist with another opposition: individual temporality versus historical and social temporality. This dimension equally exists in Saariaho's work: the highly ritual aspect of the instrumental part responds to the Gregorian chant heard in the electronic part. Even the sound correspondences between instruments and tape can be heard in this sense. These two temporalities refer to times that exceed individual time: the time of creation and the time shared with others. These aspects are studied by both Nishida and Heidegger.
2.4 The repetition of patterns (the association of smooth and granular timbre, for example) can be linked to what Heidegger called individual repetition: “In the being-toward, Dasein repeats itself in the most proper for-being by early. The proper Gewesene, we call it repetition.”
2.5 Finally, according to these philosophers, time structures are linked to being. Nishida says: “The “you” as the absolute other that the “I” sees deep inside himself must be a “you” who, as an infinite past, determines the “I” in an internal way from its deep inside, i.e. a “you” who is past.” This opinion is very close to what Derrida called “différance”: things becoming different as they are deferred in time.
In other words, the consciousness of the other is directly linked to the perception of time. So “Six Japanese Gardens” can also be considered a reflection on the human being.
“Tenju-an Garden of Nanzen-ji Temple” speaks of complex temporalities that can be analysed from an eastern or a western point of view. These points of view very often converge, and one may think that interculturality can be universality. The divergences can be thought of as complementarity rather than opposition. Music, with its own language, can thus reflect our perception of the world and reach a certain universality.
- Jean-Louis Di Santo
Jean-Louis Di Santo studied classical guitar and electroacoustic composition. He is the recipient of several awards in composition competitions and has played at several festivals. He is especially interested in the relations between sound and meaning. He has discovered the minimal sound unit (EMS06) and has created a notation for sounds based on reduced listening called the “acousmoscribe”. He has participated in many conferences in France and abroad.
The tape music of Jikken Kôbô 実験工房 (Experimental Workshop): Characteristics and specificities in the 1950s
Active during the 1950s in Tokyo, Jikken Kôbô was a collective of fourteen young artists, both musicians and visual artists, working together on projects such as experimental ballets, concerts/exhibitions and audiovisual productions. A major group in the postwar renewal of the avant-garde, it inaugurated original forms of performance based on the idea of artistic collaboration. In its musical production, composers such as Takemitsu Tôru 武満徹 (1930–1996), Suzuki Hiroyoshi 鈴木博義 (1931–2006) and Yuasa Jôji 湯浅譲二 (1929– ) were early to explore the potential of tape music and became its pioneers in Japan, alongside other composers such as Mayuzumi Toshirô 黛敏郎 (1929–1997) and Akutagawa Yasushi 芥川也寸志 (1925–1989). Contemporaries of the first tape music experiments in Europe, they developed their own way of apprehending this new medium, from an intermedia perspective.
Indeed, in their explorations of new interactions between media, the members of Experimental Workshop took an interest in technologies of sound and visual reproduction. In 1953, they created works for an automatic slide projector (ôtosuraido オートスライド), a device developed for educational purposes by the company Tokyo Tsûshin Kôgyô 東京通信工業 (later renamed Sony) that made it possible to synchronize a slide projector with a sound tape. Subsequently, the group continued its audiovisual experiments with Mobile and Vitrine (1954), the first film in Japan to use electronic music, and GinRin 銀輪 (silver wheel) (1955), made under the supervision of filmmaker Matsumoto Toshio 松本俊夫 (1932– ). Some composers of Jikken Kôbô also created radio dramas in collaboration with the Nihon Hôsô Kyôkai (NHK) 日本放送協会 (Japan Broadcasting Corporation), which opened its experimental sound studio in 1955. For instance, Honô 炎 (flame) by Takemitsu Tôru was broadcast in November 1955 and later became the raw material of the piece Relief statique ルリェフ・スタティク (Static Relief). The group's tape music was also used as accompaniment for stage performances such as ballets, and even for art exhibitions.
The outcome of all these 1950s experiments took the form of a concert in 1956 at the Yamaha Hall, given jointly with two composers of the group Sannin no kai 三人の会 (Society of Three), Mayuzumi Toshirô and Akutagawa Yasushi, and with Shibata Minao 柴田南雄 (1916–1996). Entitled Musique concrète/electronic music audition, it marked a turning point in the development of tape music in Japan and outlined the different directions taken by Japanese composers with this medium, between a French influence (musique concrète) and a German influence (constructivist electronic music).
Nevertheless, apart from Mayuzumi's trip to Paris in 1952, during which he attended two concerts of musique concrète, contacts with Europe were almost nonexistent in the first half of the 1950s. We can argue that tape music in Japan developed in a largely self-sufficient way at that time. Composers of the Experimental Workshop received only scattered echoes of the experiments made in France and developed their own way of apprehending the medium. First of all, we notice that, unlike Pierre Schaeffer with musique concrète, they had no desire to theorize or even conceptualize this new music. This is linked to the very nature of Jikken Kôbô, which almost never sought to explain its artistic activities in writing, as evidenced by the lack of a manifesto. Even if they used the same processes of sound transformation – such as reverse playback, slowing or accelerating effects, etc. – they did not share the Schaefferian concept of the « objet sonore ».
Indeed, the tape music of Jikken Kôbô developed from an intermedia perspective, in a relationship to images and to certain forms of narration. It differs from Schaeffer's concepts because it does not necessarily avoid the anecdotal aspect of recorded sounds. While Schaeffer wanted to abstract the sound in order to resituate it in a musical context, composers like Takemitsu readily used sounds of nature or human voices in a dramaturgic perspective closer to radio drama. This was certainly more suitable for works such as autoslides and music for films or stage performances.
Furthermore, other composers of Jikken Kôbô, such as Yuasa Jôji, Suzuki Hiroyoshi and Fukushima Kazuo 福島和夫 (1930– ), worked with recordings of instrumental pieces they had previously composed, altering them by manipulating the tape as described above. Akutagawa had already made a piece using recordings of instrumental music: Maikurofon no tame no fantajî マイクロフォンのためのファンタジー (Fantasy for Microphone), broadcast on NHK in 1952. Considered the first tape music experiment in Japan, it was a superposition of two orchestra recordings with shifting effects. Works like the music of the autoslide Resupyûgu レスピューグ by Yuasa Jôji, which consists of the reverse playback of a piece for piano and flute, were certainly influenced by Akutagawa's achievements.
My presentation will therefore put the tape music experiments of Jikken Kôbô into perspective with contemporaneous experiments in Europe. The aim will be to identify their specificities with regard to musique concrète, radio drama, and the electronic music of the Cologne studio. To this end, I will contextualize the emergence of this music in postwar Japan and try to categorize and explain the different conceptual approaches to tape music within the group. I will analyse these pieces in their relations with other artistic mediums in order to highlight how their intermedia purpose influenced their conceptions.
- Marin Escande
Born in 1992, Marin Escande is a second-year PhD student in musicology at Sorbonne University in Paris. His current research concerns the Japanese avant-garde group Jikken Kôbô. His main interests are Japanese contemporary music, new forms of interdisciplinarity and relations between art and society. Since October 2016, he has been a scholarship student researcher at Tokyo University of the Arts. In parallel with his activity as a musicologist, he is also a student composer of instrumental and electronic music.
Musique concrète of Minao Shibata
The Japanese composer Minao Shibata (1916–1996) is one of the important figures in the reception of Western tendencies in the postwar period, alongside Toshiro Mayuzumi (1929–1997) and Toru Takemitsu (1930–1996). In 1955, he composed his first electro-acoustic work, Musique concrète en sonore stéreophonique (1955), at NHK (Nippon Hoso Kyokai; Japan Broadcasting Corporation) and remained committed to this genre until the 1970s.
Early works of musique concrète by Japanese composers such as Shibata and Mayuzumi show quite different characters from the European ones: they do not manipulate sound materials according to the concept of the objet sonore developed by Pierre Schaeffer (1910–1995). Rather, they use the processed sound materials as substitutes for conventional instrumental sounds. Indeed, Shibata's first musique concrète work is based on a formalistic compositional plan and serial techniques. Shibata seems to have been aware of the difference from the European tendency, as he wrote in the program note of this work: “It would be the proper way to compose a musique concrète piece by handling the existing concrete sounds in the same way as objects in the plastic arts. However, I started with formulating the [compositional] plan and looked for the required sounds. Then, I modified or produced them to fit within the musical form.” (Jikken kobo 1956: 8) A twelve-tone row and sketches of musical gestures based on the row can also be found in his manuscript, written in standard music notation.
This is partly because Japanese composers also had a keen interest in twelve-note compositional technique in this period. Shibata learned the technique, together with Yoshiro Irino (1921–1980), first through Schoenberg et son école (1947) by René Leibowitz (1913–1972). And it is partly because, with the exception of Mayuzumi, no Japanese composer had listened to French musique concrète pieces before 1957, as Shibata himself commented on the occasion of the broadcast of Panorama de musique concrète (Ducretet–Thomson, 1956).
This paper will discuss an aspect of the reception of electro-acoustic music and twelve-note compositional technique through an examination of Shibata's writings and an analysis of his early electro-acoustic pieces, together with a study of his manuscript. The aim is to provide resources for further investigation into this topic, particularly for researchers working in languages other than Japanese. Graphic transcriptions of his works, produced by the author through aural analysis aided by computer-based spectrum analysis, will be presented.
- Koichi Fujii
Koichi Fujii studied musicology and aesthetics at the Faculty of Letters, Keio University, Japan, and at its Graduate School of Letters, where he earned his BA and MA. His main research interests are the history and musicology of electro-acoustic music and modernism in music.
After working in the music industry, he resumed his research to pursue a PhD at Keele University in the UK and later at Keio University. He has also been active in the field of creativity and education in music technology and media art, collaborating with various artists, organising workshops and presenting lectures. The projects he participated in include Moppet, which was sponsored by NTT and received an honourable mention at Ars Electronica 97. Currently, he teaches musicology and music history at Keio University and Tokyo Zokei University in Japan, while working for IK Multimedia localizing their web content, apps and documentation.
Navigating the noisescape: Repurposing unwanted sounds to raise awareness through sonic art
Noise pollution is a concern for industrialized cultures all across the planet. Excessive noise seriously harms human health and interferes with people’s daily activities at school, at work, at home, and during leisure time. It can disturb sleep, cause hearing loss and negative cardiovascular and psychophysiological effects, reduce performance, and provoke annoyance responses and changes in social behaviour. These effects can also occur even when the sound is outside standard levels of perception. Low frequency noise (LFN), particularly infrasound, can lead to vibroacoustic disease (Leventhall 2004, 2006) and is particularly worrisome as it is difficult both to detect LFN and to locate the source. These concerns will direct my discussion towards specific examples of noise pollution and related research involving phonography and electroacoustic-based work.
Over the past several decades there have been documented cases of intrusive “hums” – also referred to as the “global hum” – that adversely impact a segment of the population, typically in industrialized countries. High-profile cases include Kokomo and Taos (USA), Bristol (UK), and Windsor (CAN). There have also been reports of the hum in Japan, including the areas of Hamamatsu and Tokyo (MacPherson 2017). Only recently have complaints about this particular type of noise pollution been taken seriously enough to warrant scientific studies (Novak 2014, Silba 2014), even though similar cases in the Windsor region were documented as far back as the late 1950s. Although each affected region has unique circumstances, there are some commonalities, one of which is that the hum is heard by some residents and felt by others, but is not perceivable by the majority in any given area. The primary reports of the global hum place the perceived frequency in the 30–40 Hz range; however, there are also some records of 50 and 60 Hz disturbances. There are reports of people who can only feel the hum, which reinforces the possibility that infrasound is also present in some cases.
In addition to infrasound and LFN, other concerns in noise pollution research include imperceptible electromagnetic energy and the detrimental effects of noise on ecoacoustic balance. Health concerns over electromagnetic fields remain controversial and polarizing (Frey 1962, 1979, 1998, 2001, 2011). This energy requires sensors and amplification to bring it into the audible domain. Although electromagnetic waves cannot generally be detected by humans without the aid of technology, they can be as omnipresent as noise pollution in large urban environments. The “hearing” of electromagnetic fields can be experienced with coil pickups; an example is the ‘electrical walks’ with electromagnetic headphones designed by Christina Kubisch (Kubisch 2016). As with victims of infrasound and global hums, there is also a marginalized segment of the population who claim to have electromagnetic hypersensitivity (World Health Organization 2006, 2014).
Depending on the location, there are many challenges in recording the perceived sound environment without also capturing noise that is subconsciously ignored. This is a challenge for phonography in the noisescape of large urban and industrial areas, as I have experienced in my creative-based doctoral research. Examples and analyses of these recordings, alongside Hildegard Westerkamp's A Walk Through the City (Westerkamp 1981) and Kits Beach Soundwalk (Westerkamp 1989) – works that deal with urban noise – will be presented and discussed. Capturing and working with sounds at the extreme low end of the hearing spectrum in these areas requires additional considerations, particularly in terms of equipment and technique: a microphone capable of recording infrasonic sound sources in a specific environment, playback systems capable of recreating as much of the audible range as possible, larger nearfield monitors, and ideally a quality subwoofer. The analytical component of my own research has involved both conventional methods, such as standard FFT analysis algorithms, and unconventional ones, such as employing cymatics to visualize recordings and to detect and locate LFN pollution within the spectrum. While cymatics is typically used for scientific or artistic applications, I have found the remediation of sound through liquids effective for analysing the frequencies in field recordings and determining an infrasonic presence. A further concern for this work is whether a listening environment, whether online or in a concert setting, has the frequency response to play back the sound files accurately. The goal of this work is to draw attention to pollution in the sonic environment, a complex problem and reality that is often ignored.
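The FFT-based part of such an analysis can be sketched very simply. The following is a minimal, illustrative example and not the author's actual toolchain: it estimates what fraction of a recording's spectral energy falls in a low-frequency band such as the 30–40 Hz range reported for the hum. The function name, band limits and synthetic test signal are all assumptions made for demonstration.

```python
import numpy as np

def lfn_energy_ratio(signal, sample_rate, band=(20.0, 40.0)):
    """Fraction of total spectral energy falling in a low-frequency band.

    A crude indicator of low-frequency noise (LFN) presence: take the
    FFT of the recording and compare the energy inside `band` (in Hz)
    with the energy of the whole one-sided spectrum.
    """
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum.sum()
    return float(spectrum[in_band].sum() / total) if total > 0 else 0.0

# Synthetic example: a 35 Hz "hum" buried under broadband noise.
rng = np.random.default_rng(0)
sr = 8000
t = np.arange(sr * 2) / sr                 # two seconds of samples
hum = np.sin(2 * np.pi * 35.0 * t)         # low-frequency tone
noise = 0.1 * rng.standard_normal(len(t))  # background noise
ratio = lfn_energy_ratio(hum + noise, sr)  # close to 1.0 for this mix
```

In practice the recording chain matters as much as the analysis: a microphone and interface that roll off below 20 Hz will hide exactly the energy this ratio is meant to reveal.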
- Brian Garbet
Brian Garbet has composed acoustic and electroacoustic music for film, theatre, and concert. While at SFU, he was a Jeu de Temps/Times Play national prizewinner for his composition Ritual. His music has been performed by Quatuor Bozzini, Standing Wave, and Turning Point Ensemble and has received airplay and performances across Canada, the United States, New Zealand, and Finland. After years of touring and recording with the rock band Crop Circle, Brian completed his Master of Music at UBC. Currently a PhD candidate at the University of Calgary under the supervision of Laurie Radford, he has also studied with Barry Truax, Hildegard Westerkamp, Rodney Sharman, Bob Pritchard, Keith Hamel, and Allan Bell. Recently, Brian returned from a research residency in the United Kingdom where he was working with Joseph Hyde, P.A. Tremblay, and Trevor Wishart.
Psycho-geography and Psycho-sonic Cartography through Electroacoustic Music
“. . . I am collecting the ashes of the other possible cities that vanish to make room for it, cities that can never be rebuilt or remembered.” Italo Calvino, Invisible Cities, p. 60
The Ephemeral City
As forces like gentrification and globalisation reshape the fabric of city life, it grows difficult for an individual to identify their own story, history or place, or to shape any part of the city where they live as distinct from any other. How can music, and particularly electroacoustic music, become a way to map new psycho-geographies? Can such music empower others to tell their own stories, beyond material means? Psycho-geography is a way for people to tell their own stories. The ephemeral city can be a way of coming to grips with an urban environment that changes in rapid and disempowering ways. We all create a myriad of ephemeral cities in the maps we make of the places we live, some real, some imagined. Psycho-sonic cartography affords a way for people, both composer and listener, to create an alternative city through sound. Musicians and sound artists working with spatialisation are in the business of creating ephemeral places.
Saul Steinberg's picture of a New York-centric view of the city was published on the cover of the New Yorker in 1976. Although tongue-in-cheek, it deftly illustrates the power of the imaginer to suspend disbelief and shape the world. And this is precisely the possibility that electroacoustic music affords a listener, in collaboration with its composer to a greater or lesser degree.
An earlier, well-known such work is Luc Ferrari's Presque rien, built from field recordings. And Luigi Nono's La fabbrica illuminata "inhabits" a factory with its singer. The two pieces illustrate different ways of drawing places, each imaginary and evocative. The imagined city contains impossibilities: simultaneous non-concurrent histories, places that move around stationary living things, ghosts, and imagined futures.
Lennart af Petersens' photographs of Klara, a central Stockholm neighborhood demolished for "modernisation" in the 1950s, inform generations of Stockholmers' "memories" of a city they never inhabited. They also provide a counterpoint for those who identify with the environs that replaced Klara. In my interviews with Stockholmers, one described a nostalgia for the city's Culture House, at the former center of Klara, as well as for the modernist idealism of the "Five Sisters", the brutalist office buildings dominating a portion of Stockholm's skyline. One can regard these periods simultaneously, and thus dwell beyond the predominant features of the current street: advertising and international franchises. Benjamin's treks down a single street in 1920s Weimar, and later through Paris, give us an itinerant world of tiny wonders against a backdrop of impending disaster, in which to seek windows onto, or cousins of, the Now. Others, like Will Self and Rebecca Solnit, wander in more dystopic worlds, from post-industrial wastelands to pristine deserts, psycho-geographically cutting three-dimensional chess boards out of the checker-plain landscapes they traverse.
Music can simultaneously specify and leave much for the listener to complete, just as the aforementioned psycho-geographical works yield new ways for a reader to comprehend the places they traverse. Composers who work with spatialisation share qualities with the very field many psycho-geographers seek to circumvent: architecture. Spatialised sound is a convincing way to build an ephemeral place, one which is and is not present, compelling a listener to experience sonic works as places.
Thus, psycho-sonic cartography takes or imagines sounds from an environment and structures them to illuminate that environment's aspects, whether apparent or hidden in history, obscurity, the powers that be, or imagined futures.
David Prescott-Steed makes a literal psycho-sonic cartography, taking a cheap violin into a system of tunnels under Melbourne into which he has trespassed, recording himself sawing away on the open strings while walking. This creates a spectral-sonic map of the tunnels, indicating their shape and distance in relation to his walk. An example of an electroacoustic piece that takes aim at a place for its banality is Negativland's A Big 10-8 Place. The piece draws a literal map through field recordings, referential music and text, slapping an impossible story onto a mundane suburban block, interspersed with mocking songs about stupidity. A more serious and contemporary work of psycho-sonic cartography is Natasha Barrett's OSSTS, in which a listener sits in a chair built to control virtual travel through a sonic version of Oslo, with both recognisable and surreal elements.
Sound walks were first extolled as a way to tune in to the environment by R. Murray Schafer. They are both a capture method and a way to illuminate a place through listening, yielding materials and insights. Bill Fontana's Metropolis Stockholm! is a good example of an earlier work somewhere between a sound walk and electroacoustic music: simultaneous recordings were made throughout the city, brought together and mixed at City Hall, and broadcast on the radio. One might hear things that were not noticeable before in the shuffled context offered by this kind of "stationary sound walk". In 1986, the appearance of subways and church bells in the same sonic space, on one of the few radio stations available, may well have been striking. But listening today, one hears a collage of cues without further comment or information, leaving little to be built in the end.
Offering an incomplete narrative in a musical work is a way of straddling the boundary between presenting finished environments and locales the listener must complete. But sound walks presented as complete compositions give listeners little to complete; the lack of any narrative beyond the recording itself restricts the listener's own agency. Many of the pieces in The Acoustic City are field recordings without much compositional work, very different from Ferrari's early work, where the careful choice and ordering of recordings offer a great deal of the implied tale of a day. Other examples include Janet Cardiff's video walks, which come with instructions, or Christina Kubisch's Electrical Walks, whose construction is evocative enough to offer the listener a meeting point in the ephemeral parallel world her machines psycho-sonically map.
Text-Sound work and Interviews
Interview-based Text-Sound composition straddles the lines between documentary, story-telling and sonic art concerned with speech. Stockholm has a venerable history of Text-Sound composition, although the majority of Swedish Text-Sound works are not about interviews or places. But there are some examples, such as Mats Lindström's Rekviem för svensk medborgare med anledning av mordet på Olof Palme, a piece created in part from interviews with school children read by well-known Stockholmers, or Sol Andersson's In Memorandum, where the "place" is an array of 8 speakers playing a subtle drone pitch, and the person speaking emanates from a wooden speaker cabinet. A large-scale work from recent years is Trevor Wishart's Encounters in the Republic of Heaven. Using local dialects as material, Wishart masterfully weaves the voices of the inhabitants of an industrial British town into an image of that disappearing place through the everyday stories they tell, converging in an unearthly requiem.
Building the Imaginary City
Electroacoustic music is made of sounds de-coupled from their sources. To engage in psycho-sonic cartographical music through electroacoustic means is to engage in a constant interplay in which the seemingly immutable can melt and transform in the blink of an eye, or at least in a shorter span of time, and with a lighter burden of materials, than architectural upheavals and flights of fancy require.
A large share of the music I am making attempts to interface with and enhance this very individual way in which people interact with the cities where they live, utilizing field recordings, interviews, generative installations, the transposition of field recordings onto acoustic instruments, and transformational processes like frequency modulation and artefacts like beat tones as metaphors for ghosts, spirits and other possibilities.
In 1980, Derek Bailey and Min Tanaka made a recording. Bailey plays sparse acoustic guitar, while the space in which the collaboration unfolds is sonically outlined by the sound of Min Tanaka's equally sparse movements and by rain pelting a rickety roof. This is psycho-sonic cartography: an evocative picture of a charged moment in time and place, long gone but brought to immediacy by its nature as music. Just as that imagined place is illuminated and brought to life, so we live in as many cities as there are inhabitants, each disappearing into a myriad more new cities which form with every new memory, to be brought forth and re-imagined in psycho-sonic cartographies the real world can only begin to imagine.
- Katt Hernandez
Katt Hernandez moved to Stockholm in 2010 and rapidly began working with many artists. In addition to solo violin work, she co-founded The Schematics and the Deuterium Quartet, and has worked with a host of artists. She earned a Master's degree in Electroacoustic Composition from the Royal Music Academy of Sweden in 2014. In 2015 she began a PhD program in music at Lund University, and she is also employed at the Royal Music Academy as part of Klas Nevrin's research project. Her work has been featured on Swedish Radio and at many festivals, including Norberg, Stockholm Music and Arts, Svenskmusikvår and Intonal.
Before leaving the U.S., Katt was a 13-year veteran of the experimental music scenes of the east coast, where she worked with a vast array of musicians, dancers, visual artists, puppeteers, film makers and performance artists, in venues ranging from underground urban art spaces to Ivy League concert halls.
Melodic Contour Applied for Algorithmic Composition
This article focuses on a melodic-contour methodology for analyzing music and retrieving musical features, and on using those features, with melodic contour control, to alter musical structure in algorithmic composition. With the proposed method, algorithmic composition can be realized in a more practical way, automatically generating music in a specific style based on an input music segment and its corresponding musical meaning. Pitch, interval, and duration are used as the main features of musical meaning, and these three parameters can be combined to construct the melodic contour, with interval control over stepwise, arpeggio, and jump motion generating variations of pitch and interval. Finally, an innovative melody generator can be built on this algorithmic-composition methodology.
Pitches must progress upward or downward; otherwise the constructed melody will soon bore listeners. As shown in Fig. 1, the melodic contour rises or falls steeply when the melody suddenly jumps to higher or lower notes, while a smoothly shaped contour indicates a calm music segment. Rather than using absolute pitch values, Schoenberg's “Fundamentals of Musical Composition” (1967) presents the melodic contour as a visual shape supplementing the score. Adams' “Melodic Contour Types” (1976) categorizes 15 melodic contours as a criterion for melody classification. Based on the melodic line shown in the score, this suggests that musical meaning is related to a specific psychological “gestalt” or emotion, following Meyer's “Emotion and Meaning in Music” (1956). The proposed method extracts the melodic contour of the input music segment through automated analysis. The uniqueness of a melody can be captured by a particular melodic contour, which represents a unique meaning in the listener's mind according to the shape, direction, and range of the corresponding music segment.
Figure 1. Melodic Contour
To reinforce the usability of algorithmic composition, melodic contour can serve as an automated music analysis tool. Three melodic contour features are used:
1. Shape: the overall contour shape can be analyzed and divided into several smaller sub-shapes as the basic music units.
2. Direction: the melodic contour can go up or down; the frequency and slope of directional change determine the musical tension.
3. Range: the pitch distance between the highest and lowest notes forms the range. The rate of range change, together with register distribution and variation, gives the music different tensions and meanings.
In this research, the first step is to input a MIDI file before feature analysis. Because the MIDI format is polyphonic and multi-channel, the main melody must be determined first. According to Uitdenbogerd and Zobel's research (2001), there are four kinds of main-melody extraction: All-mono, Entropy-channel, Entropy-part, and Top-channel. In the proposed method, Top-channel is used to extract the main melody track from the channel with the highest average note, and the Parsons code is used to process the MIDI readout and obtain the basic shape of the music. Parsons developed his system (1975) to denote the relationship between any two consecutive pitches with the following four symbols:
1. “+” means "pitch up", if the note is higher than the previous note
2. “-” means "pitch down", if the note is lower than the previous note
3. “0” means "repeat", if the note is the same pitch as the previous note
4. “*” means “first tone as reference”
Therefore, “*++-----0+---++++++--+” is the analysis result for the input music shown in Figure 1.
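As an illustration, the Parsons encoding described above can be sketched in a few lines of Python (a hypothetical helper, not the authors' implementation), assuming pitches are given as MIDI note numbers:

```python
def parsons_code(pitches):
    """Encode a melody as a Parsons code string.

    The first note is the reference ("*"); each later note is compared
    with its predecessor: "+" = up, "-" = down, "0" = repeat.
    """
    if not pitches:
        return ""
    code = ["*"]
    for prev, cur in zip(pitches, pitches[1:]):
        if cur > prev:
            code.append("+")
        elif cur < prev:
            code.append("-")
        else:
            code.append("0")
    return "".join(code)

# MIDI note numbers for a short rising-then-falling phrase
print(parsons_code([60, 62, 64, 62, 62, 60]))  # "*++-0-"
```

Because the code records only contour, transpositions of the same melody produce identical strings, which is what makes it useful for shape-based retrieval.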
The proposed system also uses a “sieve,” following Xenakis' “Formalized Music” (1971), to filter out unwanted pitch classes from the input music segment before generating music algorithmically, according to Eq. (1), where PC is pitch class, P is pitch, and “mod” is the modulus operation:

PC = P mod 12 (1)
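A minimal reading of the sieve idea, assuming the standard pitch-class mapping PC = P mod 12 (function and variable names are illustrative):

```python
def sieve_filter(pitches, allowed_pcs):
    """Keep only pitches whose pitch class (P mod 12) is in allowed_pcs."""
    return [p for p in pitches if p % 12 in allowed_pcs]

# Pitch classes of C major: C D E F G A B -> {0, 2, 4, 5, 7, 9, 11}
c_major = {0, 2, 4, 5, 7, 9, 11}

# A chromatic run from middle C; non-diatonic notes are sieved out
print(sieve_filter([60, 61, 62, 63, 64, 65, 66], c_major))  # [60, 62, 64, 65]
```

Richer Xenakis-style sieves combine several residue classes with set unions and intersections, but the modulus test above is the core operation.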
After the melodic contour has been retrieved and the sieve function applied, algorithmic composition is used to generate music automatically. Hiller and Isaacson applied the principle of Markov chains as early as the mid-20th century. As shown in Eq. (2), Markov chains can generate musical rhythm in the style of the input music segment through probability control. A random variable X represents the timing between two note-on pulses at an independent time t; the probability of the next state Xt+1 depends only on the current state Xt, so the probability at times tm and tm+1 is:

P(Xtm+1 | Xtm, ..., Xt1) = P(Xtm+1 | Xtm) (2)
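The first-order dependency of Eq. (2) can be sketched as follows; the duration values and function names are illustrative assumptions, not taken from the authors' system:

```python
import random

def train_markov(durations):
    """Estimate first-order transitions from a sequence of note durations."""
    table = {}
    for cur, nxt in zip(durations, durations[1:]):
        table.setdefault(cur, []).append(nxt)
    return table

def generate_rhythm(table, start, length, rng=random):
    """Walk the chain: each next duration depends only on the current one."""
    out = [start]
    for _ in range(length - 1):
        choices = table.get(out[-1])
        if not choices:            # dead end: restart from the initial state
            choices = [start]
        out.append(rng.choice(choices))
    return out

# Durations in beats from an input segment (quarter = 1.0, eighth = 0.5)
table = train_markov([1.0, 0.5, 0.5, 1.0, 0.5, 0.5, 2.0])
print(generate_rhythm(table, 1.0, 8))
```

Storing successors as a list (with repeats) makes `rng.choice` reproduce the transition probabilities of the input without computing them explicitly.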
Finally, after MIDI data input and music feature analysis using Parsons code to generate the basic shape as the algorithmic baseline, the proposed melody generator can be run and adjusted with the following interval control functions:
1. Stepwise: interval change is kept within a major or minor second. The “scale-like” stepwise melody is fluent and stable compared to the other interval types.
2. Arpeggio: consecutive interval changes stay within a major or minor third, so major or minor triad arpeggios can be constructed. This “horn-like” interval type sounds more energetic and high-spirited.
3. Jump: if the interval change is greater than a major third, it is a “jump.” Jump intervals provide the highest energy; however, the melody should usually change direction immediately after a jump.
These three interval types should be hybridized in certain percentages. For instance, to generate a piece in a more energetic style, the stepwise percentage should be low and the arpeggio and jump rates high. The interval control function can be used to alter or adjust the generated melodic contour. Figure 2 shows the process of the proposed system:
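The hybrid weighting could be sketched like this; the percentages, interval sets, and direction rule are illustrative assumptions rather than the authors' actual parameters:

```python
import random

def next_interval(weights, rng=random):
    """Pick an interval category by weight, then a size in semitones."""
    category = rng.choices(list(weights), weights=list(weights.values()))[0]
    if category == "stepwise":
        size = rng.choice([1, 2])          # minor or major second
    elif category == "arpeggio":
        size = rng.choice([3, 4])          # minor or major third
    else:                                  # "jump": beyond a major third
        size = rng.choice([5, 7, 8, 12])
    return category, size

def generate_melody(start, length, weights, rng=random):
    """Build a melody whose intervals follow the given category weights."""
    pitches = [start]
    direction = 1
    for _ in range(length - 1):
        category, size = next_interval(weights, rng)
        pitches.append(pitches[-1] + direction * size)
        if category == "jump":
            direction = -direction         # a jump usually reverses direction
    return pitches

# Energetic style: little stepwise motion, lots of arpeggio and jump
energetic = {"stepwise": 0.2, "arpeggio": 0.4, "jump": 0.4}
print(generate_melody(60, 8, energetic))
```

In practice the weighted choice would be combined with the Parsons-code baseline so that the generated contour still follows the analyzed shape.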
Figure 2. Research process for melodic contour based algorithmic composition
Three cases with different interval control conditions will be applied to the proposed system and analyzed to verify the melody generator. In the future, rhythm and harmony will be added to the analysis process to provide more diversity and possibilities for algorithmic composition.
- Chih-Fang Huang
Chih-Fang Huang, Associate Professor in the Department of Information Communications at Kainan University, was born in Taipei, Taiwan. He holds a PhD in mechanical engineering and a master's degree in music composition, both from National Chiao Tung University. He studied composition under Prof. Wu Tin-Lien and computer music under Prof. Phil Winsor. His electroacoustic pieces have been performed in Asia, Cuba, Europe, and the USA: the electroacoustic piece “Microcosmos” was selected and performed at the International Computer Music Conference (ICMC) in 2006, a composition was presented at CEMI (Center for Experimental Music and Intermedia), University of North Texas, in 2010, and works were performed in Berlin, Cologne, Sweden and Italy in 2011-12. He was also a fellow of the 2012 Art Music Residency, New York. In 2013 he was selected for the International Conducting Master Class of the Martinu Philharmonic Orchestra under Mr. Kirk Trevor and Prof. Donald Schleicher, performing works by Debussy, Brahms and others. In 2014 he was invited to conduct the Greater Miami Youth Symphony (GMYS) orchestra. His research spans many fields, including automated music composition and sound synthesis, and has been published at ICMC and in international SCI/SSCI/AHCI journals. He is also the conductor of the Taoyuan New Philharmonic Orchestra.
- Yen-Yeu Yu
Yen-Yeu Yu is a graduate student in Information Communication at Yuan Ze University, where he studied computer music under Prof. Chih-Fang Huang. He graduated from the Computer Science department of Fu Jen Catholic University.
JARAMILLO ARANGO, Julián
Perceptualization machines: environmental data sonification based on electroacoustic music
Environmental data sonification has been an inspiring topic for new media artists and other researchers interested in spreading relevant scientific information across the population. While computer music tools are the main resource for acoustically displaying scientific data, Electroacoustic Music (EAM) analysis can also play an important role by enriching, diversifying and extending the sonic content with which these systems deal. According to Supper, “… many sonifications stress that they regard sound as a way of allowing people to emotionally connect with something otherwise incomprehensible.” The perceptualization machines presented here bridge the gap between the empirical and the scientific reading of reality, which are in constant negotiation in the definition and understanding of our surroundings. In particular, by adopting an artistic approach to sonification, I am interested in questioning the role of scientific data about air pollution in the everyday life of my local community, and in enhancing awareness of environmental conditions through a rewarding listening experience.
I will discuss Auditory Display (AD) and sonification strategies adopted in three perceptualization machines where air quality information is portrayed by sound: AirQ Jacket, Esmog Data and Breathe! Although inspired by electroacoustic composition concepts and insights, these projects are not pieces of music but objects and installations, and they can also be considered sound design and sound art works. They are the result of a two-year postdoctoral research project on urban sound design conducted with MA and PhD students of the Caldas University Design and Creation program in Manizales, Colombia.
We will also discuss sound creation from the perspective of Design studies, since they give reasonable emphasis to methodology and propose project-based directions for the creative process. Design thinking considers both the aesthetic and the functional dimension, involving modelling, interactive adjustment and re-design. The sequence of three projects allowed us to reach partial results and conclusions and adopt them as inputs to the next prototype. By testing different AD and sonification designs on the same information, we find an opportunity to reflect on the implications and potentialities of each communication design.
Sonification aesthetics and listening
In our research, we distinguish between AD, which covers all topics exploring the auditory channel as a non-verbal information conveyor, including the design of the display environment (the audio system, speakers, listening facilities, etc.); and sonification, the data-dependent generation of sound. Although these studies are mainly informed by psychological and physiological research, they occasionally take sound design and music composition into account. From an aesthetic perspective, the goal of sonification would be to produce “...auditory representations that give insight into the data or realities they represent to enable inference and meaning making to take place.” In this regard, EAM offers significant contributions to sonification studies by providing listening and structuring models, which inform some of the listening frameworks currently discussed by AD scholars.
Sonification listening explores the Schaefferian notion of Comprendre, which is both objective and abstract. This fourth listening mode or function comprises “…symbolic (i.e. consensual) relations between representamen and object” and is activated when facing a “… structure that has a sense and meaning for those listeners who share the code.” Sonification listening demands unequivocal interpretations and potential actions in the case of alarms or notifications. In the case of natural phenomena (astronomical, biological, genetic, environmental), sonification can trigger a shift of mind toward the observed object by unveiling the relation between data and its reality. Furthermore, grounded in Kantian aesthetics, Supper suggests that some artistic sonification pieces can lead to a sublime listening, in which a sense of immensity and infinity arises in the listener when the communication process lets him/her witness a non-perceptible natural phenomenon.
Air Quality data
Since the time dynamics of environmental data unfold over hours or days, the input data are not treated as compositional material. Instead, the sonification machines aim to enhance environmental information through compositional resources, and should be enjoyed as sound-augmented consulting devices. In our particular context (Manizales, Colombia), the growing industry and vehicle fleet are important factors in environmental contamination. In addition, an active volcanic region that regularly emanates toxic gases surrounds Manizales, so air pollution is now a critical element in the everyday life of the local community. Although significant information about the topic is available, environmental data are hardly taken into account by the community. From this point of view, air quality data became a rich source of information to be interpreted, and its sonification an inspiring topic for artists who aim to call attention to environmental awareness. Moreover, it offers an opportunity to create auditory images of the surroundings, by attaching supplementary meanings to sound according to the encoded information.
AirQ Jacket (2016)
The first project I will discuss is entitled AirQ Jacket. It is a wearable technology garment with an attached electronic circuit, which measures contamination levels and temperature and transforms this information into perceptual stimuli through light and sound. It was created with fashion designer Maria Paulina Gutierrez, whose participation in the laboratory triggered an interchange among electronics, sound and dressmaking crafts, which resulted in this unorthodox AD device.
Figure 1. The AirQ jacket sonification system runs in a custom-made, telephone-like sound artifact.
In the design process we reflected on the listening culture created around portability, promoted by the Walkman. The AirQ wearer should be able to acoustically consult our AD device wherever he/she goes, in order to plot healthy courses through the city. We also paid special attention to Jonathan Sterne's study of nineteenth-century auditory devices, such as the stethoscope. In the medical field, stethoscopic listening produces objectification, which is the “… capacity to make external and concrete, and hence situate as perceptually objective.” Our wearable proposal included a custom-made artifact attached to the jacket, built with a piezo-electric device inside a plastic cabinet that muffles the sound completely unless one brings it close to the ear.
We faced a challenge when creating sonic content with the Arduino microcontroller, since it allows only a meager repertoire of sound generation possibilities. We opted to work with a pair of sound pulses with energetic attacks. The first displays temperature by varying its speed and contamination level by varying its pitch. The second pulse acts as a grid of reference, a contextual sound representing “normal” environmental conditions.
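A two-parameter mapping of this kind might look like the following linear scaling; the input and output ranges are invented for illustration and are not the jacket's actual calibration:

```python
def pulse_parameters(temperature_c, pollution_ppm):
    """Map sensor readings to a pulse rate (pulses/s) and a pitch (Hz).

    Illustrative ranges only: temperature 0-40 C -> 1-8 pulses per second,
    pollution 0-500 ppm -> 220-880 Hz (two octaves above A3).
    """
    def scale(x, lo, hi, out_lo, out_hi):
        x = max(lo, min(hi, x))                       # clamp to input range
        return out_lo + (x - lo) / (hi - lo) * (out_hi - out_lo)

    rate = scale(temperature_c, 0.0, 40.0, 1.0, 8.0)
    pitch = scale(pollution_ppm, 0.0, 500.0, 220.0, 880.0)
    return rate, pitch

print(pulse_parameters(20.0, 250.0))  # midpoint of both ranges: (4.5, 550.0)
```

Clamping the inputs keeps a faulty or out-of-range sensor reading from driving the pulse to an unlistenable extreme.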
Esmog Data (2016)
Esmog Data represented both an advance and a redirection in the research process. It is an immersive installation presented in the Art Exhibition of the 2016 Balance-Unbalance Festival together with Christian Quintero and Vanessa Gañán. The piece displays, through audio and motion graphics, the temperature and the concentration of some of the toxic gases determining the air quality index (CO, CO2, SO2 and PM10). The AD device comprises a surround speaker system whose sonic content is constantly changing, since a custom-made environmental station located at the entrance of the exhibition space regularly refreshes the system.
Figure 2. Esmog Data immersive space.
While the discussion about prototyping has already been reported, we will focus here on the EAM criteria adopted in the sonification. The Esmog Data sonic material was produced with Johan Eriksson's EcoSYSTEM Pure Data patch, since it offers a modular workflow of audio synthesis blocks into which sensor data can be fed. Instead of mapping sensor data directly to single synth parameters (pitch, amplitude, rate, ADSR), we associated each sensor input with parameters in several synth blocks, in search of more complex intrinsic musical values. Dennis Smalley's vocabulary was helpful in establishing the qualities of sound motion attached to each pollutant gas: NO2 was associated with unidirectional motion (ascent, plane, descent), PM10 with occupation of the spectral space (diffuseness-concentration), CO with textural growth processes (agglomeration-dissipation), and CO2 with multidirectional motion (exogeny-endogeny). Temperature was associated with the behavioral relationship of dominance-subordination among the toxic gases.
BREATHE!
BREATHE! is our current sound art project and embodies the research findings. It is conceived as a multi-channel installation with no visuals, where the visitor should be able to identify each measured toxic gas as a different sound source in the space. Since we are still in the prototyping process, we will discuss here some insights and motivations that have been introduced as variables in our project-based research.
The installation plays six human breathing sound loops from different points in the space, which shrink and stretch according to toxic-gas levels. The AD device provides a multi-source environment in which the listener can walk through the exhibition space to approach each speaker. An improved air monitoring station prototype located outdoors (with a more accurate CO2 sensor, an SO2 sensor and a Wi-Fi module) provides six different inputs attached to Pure Data wavetable samplers.
BREATHE! deliberately adopts a denotative strategy for sonification, counting on the attraction of human gesture in the meaning-making process as a strategy to call attention. The metaphor of human breathing acts both as a collateral effect of pollution and as a source of pollution information. Furthermore, it invokes the multimodal aspects of sound meaning by engaging the tension and relaxation of muscles and activating a sort of proprioceptive listening. Another implication of working with this material is the emergence of emotional aspects of listening. The reference to the natural pulses of the human body being affected by contamination is intended to pose a critical insight into the human condition in a post-human society.
Polli, A. (2004). “Atmospherics/Weather Works: A multi-channel storm sonification project”. Proceedings of the 10th International Conference on Auditory Display (ICAD 2004), Sydney, Australia.
 St Pierre, M., Droumeva, M. (2016) "Sonifying for Public Engagement: A Context-Based Model for Sonifying Air Pollution Data". Proceedings of the 22nd International Conference on Auditory Display (ICAD2016)
 Supper, A. (2014) “Sublime frequencies: The construction of sublime listening experiences in the sonification of scientific data.” Social Studies of Science. Vol 44(1) 34-58
 AirQ Jacket (2016). https://sonologiacolombia.wordpress.com/lab/airq-jacket/
 Esmog Data (2016). https://sonologiacolombia.wordpress.com/lab/esmog-data/
 Findeli. A (2008) Research Through Design and Transdisciplinarity: A Tentative Contribution to the Methodology of Design Research. I: Focused -- Current Design Research Projects and Methods, Publisher: Swiss Design Network, Editors: Swiss Design Network, pp.67-91
Hunt, A. and Hermann, T. (2011). Interactive Sonification. In Hermann, T., Hunt, A., Neuhoff, J. (eds.) The Sonification Handbook. Logos-Verlag, Berlin.
Barrass, S. and Vickers, P. (2011). Sonification Design and Aesthetics. In Hermann, T., Hunt, A., Neuhoff, J. (eds.) The Sonification Handbook. Logos-Verlag, Berlin.
 Smalley, D. (1996) “The listening imagination: Listening in the electroacoustic era”, Contemporary Music Review 13 (2), 77-107
Kendall, G. (2014). “The Feeling Blend: Feeling and Emotion in Electroacoustic Art”. Organised Sound, 19(2), 192-202.
 Smalley, D. (1997) “Spectromorphology: explaining sound-shapes”, Organised Sound, 2(2), 107-26.
 Vickers, P and Hogg, B. (2006). “Sonification abstraite/sonification concrete: An ‘aesthetic persepctive space' for classifying auditory displays in the ars musica domain”. Proceedings of the 12th International Conference on Auditory Display , London
 Roddy, S., & Furlong, D. (2015) "Sonification listening: An empirical embodied approach". Proceedings of the 21st International Conference on Auditory Display July 6-10, 2015, Graz, Styria, Austria.
Schaeffer, P. (2003). Tratado de los Objetos Musicales. Madrid: Alianza.
 Palombini, C. (1999). Musique Concrète Revisited. Electronic Musicological Review, 4. UFPr Arts Department
Hosokawa, S. (1984). The Walkman Effect. Popular Music, 4, 165-180.
 Sterne J (2003) The Audible Past: Cultural Origins of Sound Reproduction. Durham, NC: Duke University Press.
Rice, T. (2008). “Beautiful murmurs: Stethoscopic listening and acoustic objectification”. The Senses & Society, 3(3), 293-306.
Walker, B. and Nees, M. A. (2011). Theory of Sonification. In Hermann, T., Hunt, A., Neuhoff, J. (eds.) The Sonification Handbook. Logos-Verlag, Berlin.
 Balance-Unbalance 2016. http://www.balance-unbalance2016.org/
 Arango, J; Gañán, V., Quintero, C. (2016) Esmog Data. Interpreting Air Quality through Media Art and Design. In: Proceedings of Balance-Unbalance International Conference (2016)
 Eriksson, J. EcoSYSTEM. http://www.monologx.com/ecosystem/
- Julián Jaramillo Arango
Julián Jaramillo Arango is a composer and researcher working in the field of new media design, focusing on experimental sound practices, multimodal communication and the development of interactive applications and services. Jaramillo Arango's works bridge the gap among science, art, technology, creativity, society, community and sustainability through works that explore different modes of sonic interaction. He holds a Ph.D. in Sonology, advised by Dr Fernando Iazzetta, São Paulo University. Julián currently conducts postdoctoral research in the Caldas University Design and Creation program, where he develops novel interfaces for the local urban space. Julián lives and works in Manizales, Colombia. http://sonologiacolombia.wordpress.com/
Troop: A Collaborative Environment for Live Coding
Live Coding is a movement in electronic audiovisual performance that emerged at the turn of the millennium (Collins, McLean, Rohrhuber, & Ward, 2003) and is now performed all over the world through a range of artistic practices (TOPLAP, 2004). It can be characterised by the continual process of constructing and reconstructing a computer program to generate audio and/or visuals while projecting one's screen for an audience to see (Mori, 2015). In a musical context a Live Coder creates algorithms that temporarily define the rules of their performance until they feel the need to alter them. This form of live notation is sometimes described as a kairotic practice (Cocker, 2013) wherein opportune moments are seized before they pass. Troop is a Live Coding environment that allows users connected over a network to collaborate within the same text buffer and create live electronic music together in a truly kairotic way.
The projection of the performer's screen is seen as fundamental to a Live Coding performance, but its reception by the audience can be divisive (Burland & McLean, 2016). Code and music are intrinsically linked during performance, and this is understood by the audience, who often expect to be able to follow musical events and relate what is on the screen to the consequent sonic experience (Magnusson, 2011). The downside of this is that some audience members feel that watching the screen “pulls the focus away from the human performers and the listening” (Burland & McLean, 2016, p. 10). Burland and McLean suggest that by projecting their work performers allow their code to become a representation of themselves, but one could argue that this is only accessible to the minority who are able to decipher the code's meaning. In comparison to other styles of performance, Live Coding can be relatively static; performers may move in rhythm with their music, but the constraint of using a computer keyboard to compose/perform in real-time does not allow for very expressive movement. As a consequence the code becomes the outlet for self-expression, and without it the audience will rarely gain insight into the mind of the performer. Research has shown that being able to see a performer in close proximity improves the experience of live music (Burland & Pitts, 2012), and Live Coders often rely on the projection of their code to develop a level of intimacy with the audience.
The relationship between audience members and the projected code is not universal, and even those who appreciate that its role is integral to the performance style of Live Coding believe there is room for improvement. One participant from Burland and McLean's study stated, “I really enjoy seeing the projected code. I still think the community has a long way to go in terms of stagecraft while preserving the legibility of code” (p. 10). Zmölnig (2016) suggests that a potential solution “is to provide additional information on the screen that is not a presentation of the source code itself, but instead some kind of visualisation of the running program.” Zmölnig is referring to the computational processes involved in a Live Coded performance, but perhaps the idea of the “running program” could also be interpreted as the cognitive processes that occur during performance, such as those involved in musical creativity; then again, it could be argued that the code's projection in itself already addresses this.
Live coding shares many of the same characteristics that make live jazz performances so interesting in that each performance is usually improvised and therefore unique (Magnusson, 2014). Music is created in the moment and the aspect of risk and uncertainty adds excitement to the event as it does in jazz (Burland & Pitts, 2012). Jazz audiences want “to be close to the musicians, see them interact with each other and see them play as clearly as they could hear them” (Brand, Sloboda, Saul, & Hathaway, 2012, p. 9), which suggests that the creative process accounts for as much of the appeal of improvised jazz as the music itself. The concept of sharing with the audience the creative processes that emerge from the dynamic interaction between performers is fundamental to the design of Troop. In a similar manner to Google Docs (Google, 2017) Troop uses a shared text buffer (see Figure 1), which enables concurrent word processing over the internet. This allows Live Coders to collaborate directly on the same material and see, as well as be part of, the live composition. Working within the same document Live Coders not only share their immediate textual material with one another but also their cognitive workspace; traces of each contributor’s thought patterns are left in the document as each keystroke updates the code, visible for all to see. This on-screen interaction makes the collaborative process transparent and accessible to the audience and allows the performers to share their musical ideas with one another in real-time. To help differentiate their individual contributions each performer is allocated a different coloured font. This addresses the problem of “identifying how to know what is each other’s code, as well as how to know who has modified it” identified by Xambó, Freeman, Magerko, and Shah (2016).
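The per-character attribution that makes the coloured fonts possible can be sketched as a toy data structure; this is illustrative only, not Troop's actual implementation (the `p1 >> pads()` line mimics FoxDot syntax):

```python
class SharedBuffer:
    """A toy shared text buffer in which every character remembers who
    typed it, so each performer's contribution can be rendered in its
    own colour. Not Troop's real data structure, just the core idea.
    """
    def __init__(self):
        self.chars = []            # list of (character, author) pairs

    def insert(self, index, text, author):
        for offset, ch in enumerate(text):
            self.chars.insert(index + offset, (ch, author))

    def delete(self, index, length):
        del self.chars[index:index + length]

    def text(self):
        return "".join(ch for ch, _ in self.chars)

    def authors(self):
        return "".join(author for _, author in self.chars)

buf = SharedBuffer()
buf.insert(0, "p1 >> pads()", "A")   # performer A types a line
buf.insert(2, "_reverb", "B")        # performer B edits inside it
print(buf.text())                    # "p1_reverb >> pads()"
print(buf.authors())                 # per-character author labels
```

A real networked version must also reconcile concurrent edits from different clients, which is the hard part that systems like Troop and Google Docs solve.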
By contrast, a typical Live Coding ensemble will usually synchronise tempi whilst working on individual portions of code on separate screens, some, or all, of which will be projected for the audience. Without the use of a shared text buffer, collaboration in Live Coding often relies on performers listening to one another's contributions, which creates a latency between the instantiation of a musical idea and a corresponding co-performer's reaction. It could be argued that this stifles the interactive processes that appeal to audiences when watching improvised music. In addition to this, the audience's attention can be divided when performers' code is projected across multiple screens. In some cases, a lack of projection equipment may mean not every performer's screen is visible. In either scenario the process of creative collaboration is obfuscated by the separation of the performers' expressive representations in their code.
There are several existing programs that facilitate collaborative Live Coding performance, such as the popular browser-based system Extramuros (Ogborn, 2016), which attempts to address some of the issues mentioned above. It allocates each connected performer a small text box on a web page into which they can write code. These text boxes are visible to, and can be edited by, any other connected performer. This allows performers to create their own code and to request and modify other performers' code in the same window, reducing the number of screens that need to be projected during the performance. However, working in separate text boxes does not encourage joint development of the same textual material.
The Troop environment opens up many possibilities for research into collaborative Live Coding. I have founded The Yorkshire Programming Ensemble (TYPE) to explore the effect of different creative constraints on performance. One example of this is limiting the proportion of text that any one performer can contribute to encourage a democratically constructed performance. Through this exploration TYPE is aiming to research and develop live performance strategies that can emulate the high standards of improvisational collaboration that jazz music is renowned for.
Figure 1: A screen shot of the Troop interface with three connected users.
Brand, G., Sloboda, J., Saul, B., & Hathaway, M. (2012). The reciprocal relationship between jazz musicians and audiences in live performances: A pilot qualitative study. Psychology of Music, 40 (5), 634–651.
Burland, K., & McLean, A. (2016). Understanding live coding events. International Journal of Performance Arts and Digital Media, 12 (2), 139–151.
Burland, K., & Pitts, S. (2012). Rules and expectations of jazz gigs. Social Semiotics, 22 (5), 523–543.
Cocker, E. (2013). Live notation:–reflections on a kairotic practice. Performance Research, 18 (5), 69–76.
Collins, N., McLean, A., Rohrhuber, J., & Ward, A. (2003). Live coding in laptop performance. Organised sound, 8(03), 321–330.
Google. (2017). Google docs - create and edit documents online, for free. https://www.google.co.uk/docs/about/. (accessed: 02/02/17)
Magnusson, T. (2011). Algorithms as scores: Coding live music. Leonardo Music Journal, 21 , 19–23.
Magnusson, T. (2014). Herding cats: Observing live coding in the wild. Computer Music Journal, 38 (1), 8–16.
Mori, G. (2015). Analysing live coding with ethnographic approach - a new perspective. In Proceedings of the first international conference on live coding (pp. 117–124). ICSRiM, University of Leeds. doi: 10.5281/zenodo.19343
Ogborn, D. (2016). d0kt0r0/extramuros: language-neutral shared-buffer networked live coding system. https://github.com/d0kt0r0/extramuros. (accessed: 13/12/16)
TOPLAP. (2004). TOPLAP — the home of live coding. http://toplap.org/. (accessed: 08/12/16)
Xambó, A., Freeman, J., Magerko, B., & Shah, P. (2016). Challenges and new directions for collaborative live coding in the classroom.
Zmölnig, I. m. (2016). Audience perception of code. International Journal of Performance Arts and Digital Media, 12(2), 207–212.
- Ryan Kirkbride
Ryan is a postgraduate research student at the University of Leeds in the UK. His background is in Computer Science, but he has recently been combining this with his interest in music through live coding and algorithmic composition. He began work on his Python-driven Live Coding environment, FoxDot, in 2015 and has since performed with it across the UK and internationally. His university research focusses on nonverbal communication in live coding ensembles and collaborative performances, and on developing software to explore this. www.foxdot.org
Musical Analysis of Takemitsu’s “Water Music”:
Rhythmic Interactions and Spatial Projections of the Sounds
This paper presents an experimental analysis of Takemitsu’s “Water Music”, focussing especially on the rhythmic and spatial structures of its phrases. The piece is regarded not only as one of Takemitsu’s most innovative works using the new technology of his time, but also as one of the most important works in the earliest period of the history of Japanese electroacoustic music. Recorded water sounds are modified and projected in stereo. And yet, the piece sounds as if each of its tracks were played by a performer in a duo setting.
Takemitsu seems to have composed this piece as an experiment in projecting sound through six speakers placed unevenly around the concert space. Although it has been difficult to find evidence of a performance in its perfect realization, we may imagine how Takemitsu planned the piece to be heard in real space. In fact, just listening to the piece in stereo, we can still catch each sound in multi-layered auditory perspectives. In the details of the phrases, the sound projection plays an important role in making the rhythms more vibrant in three-dimensional space.
The musical space of electroacoustic music can be described in terms of time, sound direction, and pitch. A certain rhythmic figure may be fixed at a certain place in the musical space in real time. Every single tone can be placed in a certain position in conjunction with other tones: sometimes very closely placed, sometimes set at a certain distance. The distance between tones can be achieved through several parameters such as amplitude, time-gap, and timbral change. It is the true pleasure of composing electroacoustic pieces that composers can control those changes in time just as in a real musical performance. Conversely, we may be able to analyze electroacoustic music more closely by looking into pieces three-dimensionally. “Water Music” in particular has a distinctive quality in its elaboration of time and space to create rhythmic figures. Here, sound projection is the keyword to better understanding the piece.
- Yuriko Hase Kojima
Born in Japan in 1962. After completing her studies in piano in Japan, Yuriko Hase Kojima studied composition in the United States for ten years, receiving her DMA from Columbia University in 2000. She studied composition, theory, aesthetics and philosophy with Tristan Murail, Jonathan Kramer, Fred Lerdahl, Brad Garton, and Lydia Goehr, among others. Her works have been presented at international festivals including the ISCM, the ICMC, and the IAWM, performed by the Ensemble Modern, the Pearls Before Swine Experience, the Azure Ensemble, and the New York New Music Ensemble, to name a few. Currently, Ms. Kojima serves as Professor of Composition at Shobi University, specializing in composition, music theory, and electroacoustic music. Alongside her career as a composer, she conducts research in musicology. She is also active as the founder and artistic director of Glovill (www.glovill.jp), a non-profit organization aimed at introducing new music to Japan.
Cultural Identity in Electroacoustic Music: A Beijing Case Study
In 2015, I was invited by my colleague Simon Emmerson to write a chapter related to my experiences of electroacoustic music in China for a volume that he was preparing. The goal was to investigate a broad repertoire of recent electroacoustic works and attempt to discover a number of binding factors. This proved to be extremely difficult. As I had been warned by the underground musician Yan Jun, many Chinese underground musicians are not keen to communicate about their work and are thus, in a sense, shy to engage in studies regarding their artistic endeavour. This indeed proved to be true, leading to an unbalanced collection of interviews, almost all of which involved conservatoire-based or conservatoire-trained composers. In fact, that aspect of the project involved focusing on the Central Conservatory of Music (CCoM) in Beijing and, more specifically, the work of Zhang Xiaofu and three ‘generations’ of his students. Similar discussions were held with musicians at other institutions in China, and fewer still with those musicians not linked to a conservatoire.
Having visited the CCoM often since 1993, I felt comfortable speaking with musicians who had studied there, having heard literally dozens if not hundreds of works created at this world-famous institution. Having also visited a number of the sister institutions around China – currently, with the opening of the new Zhejiang and Harbin conservatories, they number eleven – and engaged with several electroacoustic composers from these institutions, a pattern of behaviour emerged. These discoveries led to a hypothesis that would form part of the research leading towards the book chapter. It also made me wonder about the importance, or lack thereof, of cultural identity in electroacoustic music in general. This issue therefore forms the focus of the talk, although the research itself investigated Chinese works solely.
Cultural identity in electroacoustic works: There have been too few attempts, as I wrote a decade ago in Understanding the Art of Sound Organization (MIT Press, 2007), to categorise sound-based works. Within this area, research regarding cultural elements has been relatively marginal. For example, although many works of musique concrète, and later acousmatic music, share some basic properties rooted in France where they were produced, the broadening of the acousmatic diaspora did not particularly lead towards cultural variants. By the 1980s a certain similar quality could be found in acousmatic compositions made around the globe, and this continues today to an extent.
That said, acousmatic works form but one cluster of electroacoustic compositions. Let’s look at a very different sort of sound-based work, noise music. Although there has been a great concentration of work in Japan – hence the term Japanoise – are there cultural markers in noise music that, for example, differentiate a British from a Greek noise composition?
The first part of this talk, after the introduction regarding the background described above, will address this issue attempting to propose its importance in terms of electroacoustic music discourse regarding both composition and analysis. An early example will be called upon due to EMS17 taking place in Japan, namely Takemitsu’s ‘Water Music’ which is, in my view, an extremely Japanese/E. Asian composition.
The three paths discovered in Chinese electroacoustic works: Following the more general discussion of cultural aspects of electroacoustic composition, the talk will move on to the research that formed the basis of the book chapter.
To begin, two tendencies were noted based on the author’s experience of Chinese electroacoustic works and conversations with Chinese composers: a) the proportion of mixed music pieces appears to be higher than in most other countries, and b) the proportion of music that consciously involves aspects from the musicians’ own culture – China in general but also its diverse regions – is also higher than in most other nations in which there is an active electroacoustic music scene.
The former tendency may have to do with the history and demands of education at Chinese conservatoires and is much less relevant in terms of musicians outside of academe. Nonetheless, the interest in sonic quality related to Chinese traditions is an aspect that aligns well with the sonic focus of electroacoustic music. This is an interesting subject and was integrated within the second tendency.
It is the links with Chinese culture that formed the focus of all interviews in this project. Three stood out and were discussed with all musicians:
2) The use of Chinese instruments and/or musical approaches
3) Inspiration from Chinese culture (e.g., Buddhism, Taoism, poetry, philosophy)
Illustrations of the three will be presented through the musicians’ thoughts and sound examples. It should be noted that the third of the three is in many ways the least tangible and, interestingly, more difficult for younger Chinese musicians.
The four CCoM musicians with whom interviews were held and whose works were studied were: Zhang Xiaofu, Guan Peng, Li Qiuxiao and Qi Mengjie (Maggie). The key underground musician interviewed was Yan Jun, but materials collected since 2006 due to my friendship with Yao Dajuin, an established Taiwanese sound artist based in Hangzhou, were also used to create the other side of this portrait.
There is a significant gap, perhaps greater than in other countries, between the conservatoires (and universities) on the one hand and the freelance underground artists on the other. Indeed, the information gained was highly dissimilar, but the cultural links provided an excellent discussion point in all cases. Therefore, illustrations will come from both trained and autodidactic artists.
Conclusion: I have often spoken of an audience’s need to connect with new experimental forms of music. Shared experience is the most efficient way of achieving this, and using cultural aspects that are identifiable to the listener exemplifies a great means to success. This project led me to several pieces using elements of shared experience as an access tool.
Whilst undertaking this investigation, I had to take into account China’s own history, which included a dark period for the arts during the Cultural Revolution. The difference between the musicians’ knowledge then and shortly afterwards and the availability of repertoire on today’s (partially censored) internet is enormous.
Thus, although the cultural links are very strong for many of the musicians investigated here, today’s students and younger underground musicians are involved with more international tendencies, in many ways joining the global scene of sound-based music found elsewhere today. Is this something to celebrate, or just historical inevitability?
- Leigh Landy
Leigh Landy holds a Research Chair at De Montfort University (Leicester, UK) where he directs the Music, Technology and Innovation Research Centre. His scholarship is divided between creative and musicological work. He is editor of “Organised Sound” (Cambridge) and author of several books including “What’s the Matter with Today’s Experimental Music?” (Harwood), “Understanding the Art of Sound Organization” (MIT Press) and “Making Music with Sounds” (Routledge, 2012). He co-edited “Expanding the Horizon of Electroacoustic Music Analysis” (Cambridge) with Simon Emmerson and is currently completing “The Music of Sounds and the Music of Things” with John Richards. He directs the ElectroAcoustic Resource Site (EARS, EARS 2) projects, is a founding member of the Electroacoustic Music Studies Network (EMS) and is chercheur associé of IReMus (Paris).
Triangular sound shapes: spectromorphology
and its perceptual implications
Studies of the psychological aspects underpinning the listening experience of electroacoustic music are rare. This presentation offers a foundational perceptual study based on Denis Smalley’s concepts of spectromorphology (Smalley, 1997). These concepts assume an important place in sound-based composition while at the same time relying strongly on acoustical and perceptual assumptions. Musical discourse and expression are achieved by shaping the spectral evolution over time, often drawing analogies to extra-sonic gestural, motion or growth processes. For instance, in visualizations of spectromorphological processes, simple geometric shapes are sometimes employed (Smalley, 1997; Blackburn, 2011). These visualizations commonly imply that the shapes evolve in the spectrotemporal domain: the horizontal dimension represents time; the vertical axis describes the frequency spectrum, while spectral amplitude may only be vaguely specified. Acoustical assumptions are even more clearly implied when these geometric shapes are used as visual annotations, resembling or even superimposed onto spectrograms (e.g., the EAnalysis software, Couprie, 2014).
In probably the simplest case, a geometric shape may delineate a spectrotemporal evolution in which two sides of a triangle diverge in frequency over time. Importantly, the notion of spectromorphology encompasses both the acoustical characteristics of the sound shape and the relevant perceptual qualities that are evoked. Given the rather literal analogy of mapping a visual triangle to the spectrotemporal domain, a number of questions arise as to how this visual analogy translates into perception in the auditory domain. Assuming a triangle like the one illustrated in Figure 1, i.e., one exhibiting linear sides, how is this ‘linearity’ best translated into the perceived sound shape?
Figure 1. Visualization of triangular sound shape used in the experiments. In terms of the perceptual qualities, the black outline corresponds to clarity, the filled grey area to opacity, and the balance between the top and bottom ends of the triangle relative to the grey, horizontal axis to symmetry.
Does it depend on the scales of frequency (linear or logarithmic) or amplitude (equal amplitude or equal loudness)? In an iterative, granular process, how would granular density influence perception, and would there be a difference between a seamlessly continuous spectrum and one exhibiting a spectral gap? Finally, does the perception of sound shape for the same acoustical parameters remain unaffected if the triangle is reversed in time?
This set of questions relates to morphological qualities that still need to be identified and associated with their corresponding acoustical factors. Although the perceptual clarity of the shape could be implied by the diverging sides’ trajectories, the spectral content enclosed therein could also bear some morphological significance, suggesting that sound shapes could involve several perceptual aspects. Addressing these issues, this study explored the acoustic factors influencing the perception of sound shapes and how they may possibly relate to different perceptual qualities. The consideration of acoustic factors was informed by prior knowledge on psychoacoustical scales for frequency and amplitude as well as principles known from auditory scene analysis (Bregman, 1990).
2. Perceptual experiments
Two experiments were conducted to investigate a number of acoustical factors influencing the perception of triangular sound shapes. Both experiments utilized asynchronous granular synthesis, with the sound shapes composed of many individual 100-ms sinusoidal grains, each of particular frequency and amplitude. As shown in Figure 1, the tip of the triangular sound shape at its beginning is centered on a single frequency (always 1100 Hz), whereas at its end, the triangle spans across a frequency range (always 2000 Hz bandwidth). Given these constraints, a triangular sound shape comprised randomized sinusoidal grains either beginning at the centre frequency and widening toward higher and lower frequencies or narrowing from the widest frequency range toward the centre frequency. The temporal and frequency density of the grains, their amplitudes, and the frequency trajectories were variable (see experiments).
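The grain-based construction described above can be sketched in code. This is a minimal illustration, not the authors' actual stimulus-generation code: the sample rate, Hann grain window, grain density, and amplitude are assumptions, while the 100-ms sinusoidal grains, the 1100 Hz centre frequency, and the 2000 Hz final bandwidth come from the text.

```python
import numpy as np

SR = 44100          # sample rate (assumption; not stated in the abstract)
GRAIN_DUR = 0.100   # 100-ms sinusoidal grains, as described
CENTRE_HZ = 1100.0  # frequency at the triangle's tip
BANDWIDTH = 2000.0  # full frequency span at the triangle's wide end

def grain(freq, amp, sr=SR, dur=GRAIN_DUR):
    """One Hann-windowed sinusoidal grain (window choice is an assumption)."""
    t = np.arange(int(sr * dur)) / sr
    return amp * np.hanning(t.size) * np.sin(2 * np.pi * freq * t)

def triangular_shape(total_dur=11.0, grains_per_sec=40, widening=True, sr=SR):
    """Asynchronous granular rendering of a widening (or narrowing) triangle."""
    out = np.zeros(int(sr * total_dur) + int(sr * GRAIN_DUR))
    rng = np.random.default_rng(0)
    for _ in range(int(total_dur * grains_per_sec)):
        onset = rng.uniform(0, total_dur)              # asynchronous onsets
        frac = onset / total_dur if widening else 1 - onset / total_dur
        half_span = 0.5 * BANDWIDTH * frac             # triangle widens linearly in Hz
        freq = rng.uniform(CENTRE_HZ - half_span, CENTRE_HZ + half_span)
        g = grain(freq, amp=0.1)
        i = int(onset * sr)
        out[i:i + g.size] += g                         # overlap-add the grain
    return out

signal = triangular_shape()
```

Varying `grains_per_sec` and the `widening` flag corresponds to the density and orientation factors manipulated in Experiment 1.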
To study the influence of these acoustical factors on the perception of sound shapes, the perceived degree of three qualities was measured using rating scales. As visualized in Figure 1, these qualities concerned the characterization of sound shape, namely, the clarity of the defining outline or contour of the shape, the opacity of the therein enclosed sound material, and the symmetry of the shape relative to the centre frequency.
Experiment 1: density, orientation
17 participants completed a total of 144 trials. In each trial, a participant was presented with a sound shape (11 s duration, via loudspeaker in mono) and had to provide ratings for clarity, opacity, and symmetry of shape. Across all sound shapes, participants listened to a total of nine different levels of granular density (in both time and frequency). As shown in Figure 2, perceived clarity of the triangular sound shapes increased with increasing granular density. Notably, for sufficiently high density (above level V), perceived clarity remained about the same. Similar trends were observed for the opacity ratings.
Figure 2. Ratings of perceived sound-shape clarity across increasing levels of granular density. Points: mean, bars: standard error.
Furthermore, half of all presented sound shapes involved a widening orientation, while the remaining half narrowed in frequency towards the end. As shown in Figure 3, perceived clarity of shape was markedly higher for widening than for narrowing sound shapes, although the only acoustical difference concerned a time-reversed orientation.
Experiment 2: frequency gap, frequency and amplitude scales
Nine participants completed a total of 64 trials and provided the same ratings as in Experiment 1. All sound shapes (7 s duration) in this experiment followed the widening frequency orientation, half of which filled the entire spectral frequency range, as in Experiment 1. The remaining half exhibited a gradually widening spectral gap around the centre frequency. As shown in Figure 4, the solid, gapless sound shapes were perceived as more opaque (less transparent) than those exhibiting gaps.
All sound shapes were grouped into four different conditions of acoustic or psychoacoustic scalings. While for half of all sound shapes the widening in frequency occurred along a linear scale in Hz, the remaining half followed the rate of equivalent rectangular bandwidths (ERB, Moore & Glasberg, 1983), which is a psychoacoustically derived scale related to the frequency resolution of the inner ear. Orthogonal to the two partitions of frequency scalings, the amplitudes of individual sinusoidal grains were also determined either acoustically or psychoacoustically. The former considered equal amplitudes across all frequencies, whereas the latter weighted amplitudes based on the frequency-dependent equal loudness contours (Fletcher & Munson, 1933; ISO 226, 2003). As apparent in Figure 5, sound shapes that exhibited equal amplitudes across frequency and that widened along linear frequency were judged as most symmetric (closest to 0). By contrast, amplitudes equalized for loudness were perceived as upward or downward asymmetric for ERB rate and linear frequency, respectively.
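The difference between a linear-in-Hz and a linear-in-ERB-rate widening can be illustrated numerically. The sketch below uses the widely cited Glasberg & Moore (1990) approximation of ERB-rate rather than the 1983 formulae referenced in the abstract, so the exact constants are an assumption made for illustration only.

```python
import math

def erb_rate(f_hz):
    """Hz -> ERB-rate, via the common Glasberg & Moore (1990) approximation
    (an illustrative stand-in for the Moore & Glasberg 1983 formulae)."""
    return 21.4 * math.log10(4.37e-3 * f_hz + 1.0)

def erb_rate_inv(e):
    """Inverse mapping: ERB-rate -> Hz."""
    return (10 ** (e / 21.4) - 1.0) / 4.37e-3

# Eleven frequencies equally spaced in ERB-rate between the triangle's
# outer bounds (centre 1100 Hz +/- 1000 Hz, as in the experiments):
lo, hi = erb_rate(100.0), erb_rate(2100.0)
freqs = [erb_rate_inv(lo + (hi - lo) * k / 10) for k in range(11)]
```

Because ERB-rate compresses high frequencies, these steps are denser near 100 Hz and sparser near 2100 Hz, which is what makes the ERB-scaled triangles acoustically asymmetric relative to their linear-in-Hz counterparts.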
Figure 3. Ratings of perceived sound-shape clarity for widening vs. narrowing orientations.
Figure 4. Ratings of perceived opacity for shapes exhibiting a frequency gap or not (solid).
Figure 5. Ratings of perceived sound-shape symmetry across frequency scalings (x-axis: linear vs. ERB rate) and amplitude scalings (y-axis: equal amplitude vs. equal loudness). Ratings of 0 were judged fully symmetric, > 0 downward asymmetric, < 0 upward asymmetric.
In an attempt to shed light on how extra-sonic gestural or morphological analogies map onto perception in the auditory domain, the reported experiments allowed an exploratory study of the acoustical factors that affect the perception of spectromorphological processes. All three perceptual qualities derived from the visual analogy, namely, clarity, opacity, and symmetry of the triangular shape, proved to be perceptually relevant and, moreover, seemingly associated with different acoustical factors.
While granular density seemed to affect both clarity and opacity, it is important to note that reversing the orientation of the shape mediated perceived clarity. The former finding points toward greater density, and thus proximity among grains, achieving stronger perceptual grouping (Bregman, 1990), whereas the latter draws parallels to previously observed perceptual asymmetries for time-reversed sound processes (Patterson, 1994; Neuhoff, 2001). Furthermore, the gradual insertion of a spectral gap into the sound shape was associated with a decrease in perceived opacity. As common visualizations depict triangular sound shapes as symmetric relative to a centre frequency, the results of the experiment offered valuable insight into which frequency and amplitude scales were perceived as most symmetric. Somewhat surprisingly, the greatest symmetry was achieved for the sound shapes lacking any psychoacoustically derived scales.
Future results upon completing the second experiment may still qualify some of the conclusions, or add new findings. As a result, the reported results should be considered preliminary. Nonetheless, the obtained findings emphasize the importance of considering both acoustical and perceptual aspects in obtaining a more comprehensive understanding of spectromorphological processes. The long-term benefit of conducting similar studies on more complex gestural or other morphological processes could be the development of perceptually informed tools or controls for sound synthesis or processing that operate on the basis of relevant morphological parameters.
Blackburn, M. (2011). The visual sound-shapes of spectromorphology: an illustrative guide to composition. Organised Sound, 16(01), 5–13.
Bregman, A. S. (1990). Auditory scene analysis: the perceptual organization of sound. Cambridge, MA: MIT Press.
Couprie, P. (2014). EAnalysis (version 1), software, http://logiciels.pierrecouprie.fr.
Fletcher, H. & Munson, W. A. (1933). Loudness, its definition, measurement and calculation. Bell System Technical Journal, 12(4), 377–430.
ISO 226 (2003). Acoustics: Normal equal-loudness contours. Technical report, International Organization for Standardization, Geneva.
Moore, B. C. J. & Glasberg, B. R. (1983). Suggested formulae for calculating auditory-filter bandwidths and excitation patterns. Journal of the Acoustical Society of America, 74(3), 750–753.
Neuhoff, J. G. (2001). An adaptive bias in the perception of looming auditory motion. Ecological Psychology, 13(2), 87–110.
Patterson, R. D. (1994). The sound of a sinusoid: Spectral models. Journal of the Acoustical Society of America, 96(3), 1409–1418.
Smalley, D. (1997). Spectromorphology: explaining sound-shapes. Organised Sound, 2(2), 107–126.
- Sven-Amin Lembke
Sven-Amin Lembke has a background in musicology, acoustics, and psychology and in 2015 completed a PhD degree in Music Technology at the Schulich School of Music, McGill University, where he was a member of the Music Perception and Cognition Laboratory under the supervision of Prof. Stephen McAdams. He currently holds a lectureship in Music and Audio Technology at De Montfort University, Leicester, United Kingdom. His research seeks to describe the experiences of creating, performing or listening to music, based on an interdisciplinary approach. He recently embarked on a new research focus on the perception and cognition of electroacoustic music, conducted as a member of the Music, Technology and Innovation Research Centre at De Montfort University.
Thema (Omaggio a Joyce) – expression as a meaning
Thema (Omaggio a Joyce) is perhaps the only electroacoustic work ever created involving a musician and a semiologist at the same time. The short cooperation between Luciano Berio and Umberto Eco at the Studio di Fonologia Musicale of the Italian broadcasting company RAI in Milan during the 1950s reached its height with this tape music, produced in 1958. Originally intended as a radio experiment, the work grew out of Eco's and Berio's study of onomatopoeia, a survey using the eleventh chapter of James Joyce's Ulysses as an example. Although Eco's semiotic career would only reach its full extent after this period, his fundamental thoughts on openness and the meaning of open forms were already on the table when he published his article L'opera in movimento e la coscienza dell'epoca in 1958/59. The production of Thema (Omaggio a Joyce) included readings of the text from Ulysses in French, Italian and English, which were then put together and treated with special procedures developed in the Milanese studio, such as overdubbing, filtering, and variation of speed and dynamics. The result was a piece in which the text of James Joyce gradually becomes difficult to comprehend because of its multi-layered structure, and which was now equipped with a new dimension of meaning and understanding.
The intention of this paper is, first, to understand the creation of Thema (Omaggio a Joyce) by historical means: to find out what ideas lay behind it and, especially, what traces of the cooperation between the musician Berio and the philosopher Eco can be found in the project. Secondly, the purpose is to understand the work through its structure, via an analysis of music and text, so that together with the historical survey the full dimension of this work can be decoded. The next focus is to point out the semantic aspect of Thema, that is, to find out what it expresses and its particular way of expression. Finally, the results should provide an insight into the semantics of this particular piece as an example of meaning and expression in electroacoustic music, on the one hand in comparison to other electroacoustic works and on the other hand in this unique case of having a semiologist as one of the originators.
To understand the historical background of Thema (Omaggio a Joyce), scholarly research was carried out on secondary and primary resources. The primary resources contain statements made by Luciano Berio himself regarding this project. Among them are the interview with Angela Ida De Benedictis, in which he reflects on the work of the Studio di Fonologia Musicale (1), and an interview with Barry Schrader (2) concerning the Thema (Omaggio a Joyce) project specifically.
Because Eco was involved, the study of his cooperation and friendship with Berio is another foundation of this investigation. Therefore, primary resources containing Eco's statements, especially his interview with Thomas Stauder, were studied. (3)
As secondary resources, the thesis of the musicologist Flo Menezes, Luciano Berio et la phonologie. Une approche jakobsonienne de son oeuvre, and Ulysses Annotated by Robert Seidman were considered, together with the testimonies of other witnesses of the studio in Milan such as Marino Zuccheri and Roberto Leydi. (1) The understanding of Thema (Omaggio a Joyce) with regard to its structure was carried out on the basis of Luciano Berio's interviews with Barry Schrader and A.I. De Benedictis, as well as Berio's essay Poetry and Music – An Experience and Bronze by Gold by Nicola Scaldaferri (1). The analysis covered especially the procedures used to treat the recorded sound material in relation to the recorded words of the eleventh chapter of James Joyce's Ulysses. The results of this procedure-analysis were compared with the results of the historical survey of the cooperation between Berio and Eco, especially regarding their research on onomatopoeia, to see whether relations and links between these events can be found. The results of these procedures were drawn together to obtain a clear picture of Thema (Omaggio a Joyce) and its meanings. Finally, this was compared with the electroacoustic works Fontana Mix by John Cage and Gesang der Jünglinge by Karlheinz Stockhausen, to identify possible similarities and to show how unique this piece is in having the cooperation of Berio as an artist and Eco as a philosopher as its foundation.
The historical investigation revealed that the Thema (Omaggio a Joyce) project actually began on a private occasion: because of their mutual interest in James Joyce, Berio and Eco became friends and spent evenings at Berio's house reading parts of Ulysses out to each other. Cathy Berberian played a very important role in this project, because her voice had the unique quality that would be crucial for creating recordings of the eleventh chapter of Ulysses. This helped to convey a perfect impression of the acoustic capabilities of the text. According to Berio, the focus lay on onomatopoeia, and the idea behind this was the aspect of sound; in fact, according to Berio and Eco, the eleventh chapter of Ulysses, with its onomatopoeia, could be seen as a musical sound or, even more precisely, a composed fuga per canonem. But their intention was not to trace the classical form of a fugue in the text but to go into the depths of the onomatopoeia in order to highlight the polyphonic structure. With this phonetic emphasis, the project started with the recording of the English version, including exaggerations of the onomatopoeia to decode the musical sound behind them, such as "Imperthnthn thnthnthn" as a trill or "chips, picking chips" as a staccato, as identified by Luciano Berio himself. French and Italian versions were then added to this recording.
The process of transformation was based, in the first step, on overdubbing a recorded voice twice, so that three voices were heard together, with increasing and decreasing time relations and dynamic relations used both to highlight and to confuse the acoustic image. This procedure was repeated with the Italian version of the text read by three voices and the French version read by a male and a female voice. Then the three languages were combined and re-ordered according to a musical principle, so that the text became a polyphonic structure. The next step of electronic elaboration was to return to the English text recorded on tape and to classify the words into new chords according to a scale of vocal colours, which were then treated with filtering and the addition of fundamental tones to reveal new relationships in the material itself. Finally, the French text was examined again and treated with variations of the time relations between its various elements. Together, these procedures created a new polyphonic structure in which it is no longer possible to recognize meaningful words – the composition partially transforms the text into a cloud of sound in which the words are no longer audible in their original form. However, this is not the case throughout the whole piece, since at the beginning the text is often understandable as it is written by James Joyce. According to the interviews, it was the desire of both Berio and Eco to explore a state of language in which meaningful language would completely melt into the world of sounds – what they called a continuum. It was Ferdinand de Saussure who founded the distinction between phonetics and semantics, a linguistic theory of which Berio was well aware and with which Eco came into contact through him. This work investigates the border between sound and meaning as an aesthetic perception, and other composers, like Stockhausen with Gesang der Jünglinge, had the same focus in mind.
The novelty of this piece lies in its use of a text with onomatopoeia in several languages that, from the beginning, was destined to merge into sound. Because the meaning of the words as a denotative characteristic gradually disappears and dissolves into pure sound, the piece has an open character: the words are no longer restricted to one significance and are now completely open to being heard without restriction.
The presented results allow the conclusion that this project was based on the exclusive cooperation and private friendship of Luciano Berio and Umberto Eco. The basis of this private aspect of their cooperation lay in their common interest in the onomatopoeia of James Joyce, which led to the distinctive character of Thema (Omaggio a Joyce). Onomatopoeia as a means of expression not only distinguished this work from other works in the field of contemporary electroacoustic music but was the core of its intention: to overcome the dichotomy between meaning and sound, or semantics and phonetics. Through its onomatopoeia, the text of Ulysses served this intention, and it was thanks to the cooperation of Eco and Berio that this special case of musical expression as a meaning received attention and was discovered. In this electroacoustic piece, Berio and Eco established meaning not in the sense of the assignment of phonetic structures to certain determined denotations, but in the sense of meaning without predetermined significances, and therefore open. This open characteristic would later have a magnificent influence on Umberto Eco's essay L'opera in movimento e la coscienza dell'epoca and his grand publication Opera aperta.
(1) A.I. De Benedictis, V. Rizzardi, Nuova musica alla radio, 2000
(2) B. Schrader, Introduction to Electro-Acoustic Music, 1982
(3) T. Stauder, Gespräche mit Umberto Eco aus drei Jahrzehnten, 2012
A.I. De Benedictis, Luciano Berio – Nuove Prospettive, 2012
(4) F. Menezes, Luciano Berio et la phonologie. Une approche jakobsonienne de son œuvre, 1993
(5) R. Seidman, Ulysses Annotated, 1988
- Martin Link
Martin Link was born in Gießen in 1989. After graduating from high school in Düren, he completed artistic degrees in music at the Folkwang University Essen (Bachelor of Music) and the Robert Schumann Hochschule Düsseldorf (Master of Music). During these studies he pursued additional academic training in musicology, writing theses on the music theory of Olivier Messiaen, the aesthetic philosophy of Theodor Lipps and the systematic sociology of Niklas Luhmann. Since 2014 he has been pursuing a doctorate at the Westfälische Wilhelms-Universität Münster with the thesis The friendship between Luciano Berio and Umberto Eco. Aesthetic foundations and artistic implications.
Overview of Types and Researches of Data Controllers
in Interactive Electronic Music
In recent years, interactive electronic music has become a popular direction in electronic music. Its design and creation must consider not only the sounds themselves but also the condition parameters that influence how the music arises and changes, according to the requirements of the work and the performance. For composers, one way to obtain these condition parameters is to collect data through a controller and send it to the computer, so that the computer can influence the occurrence and transformation of sounds. The controller thus plays a major role in processing, creation, and the whole process of human-machine interaction. Controller protocols include OSC, EuCon, AES and MIDI, among others. MIDI, the most common and standardized protocol in electronic music, converts input data into MIDI signals that influence the sound: devices range from simple MIDI keyboards outputting on/off signals to complex systems such as the Kinect, which can map motion-capture data from eleven collection points. Commonly used data controllers include gravity, speed, light, vibration, location and breath controllers. The emergence of visual programming environments for musical and media creation, such as Max/MSP and Kyma, further facilitates human-machine interactive creation and performance.
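The core data-mapping step described above can be sketched briefly. The following is an illustrative, hypothetical example (not taken from any specific controller): a raw sensor reading is scaled into the 7-bit value range of a MIDI Control Change message, which is then assembled as three bytes per the MIDI 1.0 message format.

```python
# Hypothetical sketch: mapping a raw controller reading (e.g. one axis
# of a gravity/accelerometer sensor) onto a MIDI Control Change value.
# The sensor range used below is an assumption for illustration.

def to_midi_cc(raw, raw_min, raw_max):
    """Linearly map a sensor reading into the MIDI CC range 0-127."""
    raw = max(raw_min, min(raw_max, raw))          # clamp out-of-range input
    norm = (raw - raw_min) / (raw_max - raw_min)   # normalise to 0.0-1.0
    return round(norm * 127)

def cc_message(channel, controller, value):
    """Build the 3 bytes of a MIDI Control Change message (status 0xBn)."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# e.g. a gravity sensor reporting -9.8 .. 9.8 m/s^2 on one axis
value = to_midi_cc(0.0, -9.8, 9.8)
msg = cc_message(0, 1, value)   # channel 1, CC#1 (mod wheel)
print(value, list(msg))
```

In an environment such as Max/MSP the same scaling is typically done with a `scale` object; the point here is only the principle of converting continuous sensor data into the discrete value range a protocol expects.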
Compared with traditional control media such as the mixing console, the controller is a very promising tool with a profound impact on how musicians create their works. Within the worldwide development of electronic music, China has transformed itself from a follower into a creator actively innovating in electronic music. I therefore believe a historical perspective should be adopted to probe the principles and technical measures of different data controllers, for the sake of better electronic music in the future. With that in mind, this paper elaborates on the principles of several controllers and how they may be applied in future creations. Integrating technological theory with creative practice will enable us to spot problems and accumulate experience. In addition, different practical situations are considered to highlight the significance of innovation in technology and art.
- Chenhan Liu
Liu Chenhan is an undergraduate student in the Electronic Music Center of the Department of Composition of the Central Conservatory of Music, where he studies under the guidance of Mr. Guan Peng. He used to study in the Department of Music in the Affiliated Middle School of the Central Conservatory of Music where he studied under the guidance of Mr. Zhu Shijia. He is a member of the Chinese Society of Electronic Music.
His major works are as follows: electronic music including “Dance for Max”, “A Double Song of Autumn: for zheng, flute and electronic music”, “A Psalm of Grains”, “Patch 0.0.1”, “Hothworld”, “0—for GRM Tools and Timpani”; works of chamber music including “Bravo”, “Adagio” and “The Theme and Its Reflection Game”.
LÓPEZ RAMÍREZ-GASTÓN, José Ignacio
Hybrid Modulations: Report on the Culture of
Electroacoustic Music in Contemporary Peru
This paper evaluates the current situation of electroacoustic music in Peru. It maps the current interest (or lack of it) on the part of Peruvian musical institutions in electronic and electroacoustic music, for both composition and performance. It deals primarily with the generally unsatisfactory state of affairs that has kept Peruvians from producing a culture of electroacoustic music within the country, and with the steps recently taken (and those not taken) to fill that historical gap.
1. Historical Background
By the end of the 1940s and throughout the 1950s, the ‘studio model’ developed. Spaces for experimental, tape and soon electronic music began to proliferate, mainly in Europe and the United States, spreading interest in new sound techniques around the world.
Peruvian academically trained composers, following a traditional postcolonial attitude, were attentive to the changes the developed world had to offer. This initial stage produced, during the 1960s, some of the first electronic musical compositions in Peru, by composers such as Edgar Valcárcel, César Bolaños and Enrique Pinilla. However scarce the production of electronic pieces was at the time, a new generation of Peruvian researchers and musicians has attempted to construct a history of Peruvian electronic music based mainly on the work of César Bolaños.
This first chapter of Peruvian electronic music history was partially truncated in 1968 by the coup d'état of Juan Velasco Alvarado and the establishment of the Gobierno Revolucionario de la Fuerza Armada. Although Velasco's nationalist leftist government had no official position on electronic music, its nationalist and indigenist ideals would play a role in the downfall of this seminal electronic music culture. For one, the Conservatorio Nacional de Música (CNM) was absorbed under his government by the new Instituto Nacional de Cultura (INC), bringing the objectives of the CNM under the umbrella of the government's political ideals. While the work of Valcárcel and Bolaños had in many cases strong political and revolutionary themes, this was not enough to secure the continuity of their initial endeavours. Bolaños himself became director of the INC's Oficina de Música y Danza while withdrawing from electronic music composition and dedicating his time to musicological research on ancient Peruvian music.
2. Political Climate
The political environment of this initial period was instrumental in the failure to develop a culture of technologically aided musical initiatives. Long before access to the tools of the trade was democratized, electronic music required the support of private and public institutions and an adequate economic climate. The postcolonial condition, the economic predicaments of the country, the continuous class struggles, and the subaltern condition of most of the population in relation to power elites during most of the republican era made the electronic dream of musical modernity (1) difficult to carry out, and (2) liable to be viewed as a sign of elitism and imperialist cultural invasion.
Nationalism, indigenism, ethnocentrism, ancestralism, and a sense of urgency about salvaging indigenous and pre-colonial cultures have contributed to a lack of interest in, and sometimes a negative perception of, electroacoustic and electronic music. Ignoring or disregarding electronic music as standing in ‘opposition’ to more traditional or folkloric genres is common to most institutions of higher musical learning.
3. Later Developments
With the exception of composers like Arturo Ruiz del Pozo and Douglas Tarnaviewky, who managed to promote a culture of experimental and electronic music outside the classrooms and laboratories during the 1980s and early 1990s, most academic electronic work slipped under the radar. Other academic musicians dedicated to electronic music during this period, such as Rajmil Fischman, worked outside the country and participated in Peruvian musical culture from afar, serving as liaisons and giving support and motivation to those Peruvians envisioning the possibility of dedicating themselves professionally to electronic music.
In 1994, José Sosaya Wekselman, after training in electronic music in France, founded the Taller de Música Electroacústica at the CNM. This marked a new attempt to cultivate electronic music at the CNM. From this effort emerged new composers like Juan Benavides and Gilles Mercier, both of whom would at different points be in charge of teaching electronic music classes at the CNM.
The state of the space used for electronic music shows the precarious conditions under which these composers had to work. The lack of equipment and of appropriate acoustics (see Figure 1) affected both the musical output and the motivation to continue. The space was used for many years, mainly by these three composers, but it again failed to gain a following among the CNM's composition students, and at this point it is highly uncommon to encounter electronic music compositions performed or developed at the school.
One noticeable exception is Abel Castro, a composition student at the CNM who works with technology and has organized two festivals at the CNM under the name La Trenza Sonora. The festivals' main aim is to include contemporary compositions and works of acousmatic and electronic experimentation. The last edition took place in June 2016, and most of the few Peruvian contemporary composers and performers of electronic music presented their work there, including Rajmil Fischman, Jaime Oliver, Gilles Mercier, Abel Castro, and myself. Among them, Jaime Oliver and I belong to a new generation of musicians who studied Computer Music at the University of California San Diego with Miller Puckette and work for the most part with Pure Data.
4. The New Taller de Música Electroacústica
In 2016, the Academic Director of the CNM, Nilo Velarde, considering that the teaching of electronic and electroacoustic music should not be neglected, presented a proposal for remodelling the Taller de Música Electroacústica. The proposal included acoustic treatment of the rooms, modernization of the equipment and the purchase of software. With help and recommendations from Rajmil Fischman, Velarde implemented the new space with three areas: one for teaching, a second with workstations including an 8-channel spatialization system, and a recording booth.
At the beginning of the 2017 academic year, with the remodelling almost complete, I was contacted by Nilo Velarde and placed in charge of instruction in the space, teaching Electroacoustic Music 1 and Electroacoustic Music 2 to the composition students.
This new process has only just started, and it is difficult to speculate about the outcome. Historically speaking, this experience is part of a long list of attempts to bring electronic and electroacoustic music to the CNM and, in a sense, to the country in general. If many attempts to make electronic music a viable compositional tool in Peru have encountered political and logistical difficulties, it is also true that the only existing academic space is located at the CNM. For the most part, the other academic institutions that offer musical training have yet to embrace electronic and electroacoustic music as creative options. Work related to technology is mostly relegated to a supporting role and normally done by sound engineering students at local academies. Even though this separation between the creative composer and the technological assistant has been a problem wherever a culture of electronic music has been established, in the case of Peruvian music schools the figure of the creative electronic composer does not exist outside the CNM.
Not having had a continuous history of academic electronic music in the country may play a role in the process and become a disadvantage. In any case, as an active participant in the process and, in a sense, responsible for part of the new advances that may be produced, I will report on the progress obtained and analyse the reactions and perceptions the Taller de Música Electroacústica produces in the students, hoping to reverse this tendency in the near future.
López Ramírez-Gastón, José Ignacio. Constructing Musical Spaces Beyond Technological Eden: A Participative Initiative for Musical Interface Development Based in the Peruvian Context. Master's thesis, University of California, San Diego, 2008.
López Ramírez-Gastón, José Ignacio. “Cuando Canto Bajan los Cerros: An Initiative for Interface Development Informed by a Latin-American Context”. In Proceedings of the 2008 International Computer Music Conference (ICMC-08), 2008, 667-670.
López Ramírez-Gastón, José Ignacio. “Tijuana Sound Arts Project: A Nomadic Studio Report”. In Proceedings of the 2009 International Computer Music Conference (ICMC-09), 2009, 219-223.
E. P. Thompson, “History from Below”. In Times Literary Supplement, 7 April 1966, 279-80.
- José Ignacio López Ramírez-Gastón
José Ignacio López Ramírez-Gastón is Professor of Sound Arts at the Pontificia Universidad Católica del Perú (PUCP) and the Conservatorio Nacional de Música (CNM). He is in charge of the electroacoustic studio of the CNM; at the PUCP he is a researcher at the Instituto de Etnomusicología, a founding member of the Grupo de Investigación en Musicología, and coordinator of the art courses at the university's EEGGLL. He is a PhD candidate at the University of California San Diego, where he holds a Masters degree in Computer Music.
Analysis on Multimedia Convergence Composition of EXTREMA
In the 21st century, composers blend the elements and performance modes of dance, opera, action art, installation, visual art and other forms into interactive music, using computer-based digital techniques to make multiple media collaborate to integrated effect and to combine different sensory messages, forming a special mode of multimedia convergence composition. EXTREMA stands as a telling example of the multimedia convergence characteristic of interactive music, which is profoundly shaped by new media.
EXTREMA is an interactive piece for Bed of Nails and 4-channel audio composed by Hongshuo FAN, a new-generation Chinese composer. To compose the work he used the Bed of Nails, an instrument originally designed by John Richards (see Fig. 1).
Fig.1 Bed of Nails
Fan found that the electrical behaviour of the Bed of Nails matched his original concept perfectly. This led him to focus on the data characteristics and possible applications of the Bed of Nails, a semi-finished instrument, and then to conceive EXTREMA and establish the embryonic form of its dual interactivity involving the auditory and visual senses.
The composition of EXTREMA is based on the following system (see Fig. 2).
Fig. 2 Equipment Link
The composition of EXTREMA is based on analysis of the sound produced by the Bed of Nails (see Fig. 3). The performer presses the nails to produce a noisy sound, which serves as the basic sound material of the work. Different combinations of the eight nails pressed simultaneously, together with their pressure levels, control the high, middle and low frequency ranges.
Fig.3 Flowchart of Sound Design and Visual Design
The performer uses a MIDI pedal to engage four sound-processing modules, including resonance, freezing, granular synthesis, and spectral delay and filtering, to process and transform specific frequency ranges of the sound. A video camera captures a real-time image as source material for the visual part. Edge detection, RGB delay, luma waves, stringy sphere (a three-dimensional twist) and other video effects in Resolume Arena process and transform the image in real time. Neither the sound design nor the visual design uses any prepared materials; both the music and the video of the whole piece are created live. Fig. 3 shows the flowchart of the sound design and visual design of EXTREMA.
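Of the processing modules named above, granular synthesis is perhaps the most distinctive, and its mechanism can be sketched compactly. The following is a minimal, illustrative Python sketch (not Fan's actual patch, which runs in a real-time environment): short windowed "grains" are copied from a source signal and overlapped in an output buffer, the basic operation behind the grain textures described in the piece.

```python
# Minimal granular-synthesis sketch (illustrative only). Short
# Hann-windowed grains are read from pseudo-random points in the source
# and overlap-added into the output at a chosen density (hop size).
import math

def granulate(source, grain_len, hop, out_len):
    """Scatter Hann-windowed grains of `source` across an output buffer."""
    out = [0.0] * out_len
    window = [0.5 - 0.5 * math.cos(2 * math.pi * n / (grain_len - 1))
              for n in range(grain_len)]
    pos = 0
    while pos + grain_len <= out_len:
        # deterministic pseudo-random read point within the source
        start = (pos * 7919) % max(1, len(source) - grain_len)
        for n in range(grain_len):
            out[pos + n] += source[start + n] * window[n]
        pos += hop
    return out

# a noisy test signal stands in for the recorded nail sound
src = [math.sin(0.3 * n) * math.sin(17.0 * n) for n in range(4000)]
grains = granulate(src, grain_len=256, hop=64, out_len=8000)
print(len(grains))
```

Raising the grain density (smaller `hop`) fuses the grains into the legato, sustained layer the analysis describes in section D; lowering it yields a sparse, pointillistic texture.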
Because of this multi-interactivity, the video design proceeded in parallel with the sound design during composition. Both the video and the music develop from their original states through simple transformation to extremely complicated transformation, finally returning to minimalist material. The performance in Utrecht (ICMC 2016) lasted around 10 minutes and has five sections (see Form 1). In each section the interactivity between sound and video is evident.
When fingers touch a nail, a noisy sound with low amplitude is produced. The signal's primary frequency components lie between 100 Hz and 1000 Hz, and the composer uses this signal to design the whole work. He cut off the parts of the signal outside the audible range and mainly processed the regions where the energy is concentrated. For example, he reconstructed the signal by superposition to strengthen certain frequency ranges (see Fig. 4). The whole music part grows from this primary material.
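The idea of strengthening certain frequency ranges by superposition can be illustrated with a simple sketch (an assumption for illustration, not the composer's actual processing): summing a signal with a delayed copy of itself forms a comb filter, reinforcing components whose period divides the delay and cancelling those a half-period out of phase.

```python
# Hedged sketch of reinforcement by superposition: signal + delayed copy.
# Sample rate, delay, and test frequencies are illustrative assumptions.
import math

SR = 8000  # sample rate in Hz

def tone(freq, n_samples):
    return [math.sin(2 * math.pi * freq * n / SR) for n in range(n_samples)]

def superpose(signal, delay):
    """Return signal plus a copy of itself delayed by `delay` samples."""
    return [s + (signal[i - delay] if i >= delay else 0.0)
            for i, s in enumerate(signal)]

def rms(signal):
    return math.sqrt(sum(s * s for s in signal) / len(signal))

delay = 20                                   # reinforces multiples of SR/delay = 400 Hz
boosted = superpose(tone(400, SR), delay)    # 400 Hz: delayed copy adds in phase
damped  = superpose(tone(200, SR), delay)    # 200 Hz: delayed copy cancels
print(rms(boosted), rms(damped))
```

The 400 Hz tone roughly doubles in amplitude while the 200 Hz tone nearly vanishes, which is the sense in which superposition "strengthens" selected frequency ranges of the source.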
The noisy sound produced by fingers pressing the nails is the main material, kept in its original state at the beginning of section A; correspondingly, the image captured by the video camera in real time is also kept in its original state. As the music develops, the composer uses a freeze effect on the sound material to increase the sense of space and the number of sonic layers. The sound is then stretched, driving its development into a new stage, while an edge effect is added to the video. For both music and video, however, the transformations are not so complicated that the original state cannot still be recognized. From the end of section A, the video begins to change gradually from a real to a virtual image. Section B presents further transformation of both sound and video: the composer mainly uses granular synthesis and delay effects to process the basic signal, while the video focuses accordingly on RGB delay (see Fig. 5 and Fig. 6).
In section C, the deepening transformation increases the sonic layers through different combinations of resonance, freezing and granular synthesis. Meanwhile, a visual feature, luma waves (a particle effect), is introduced (see Fig. 7), and the two-dimensional view switches gradually to three dimensions (see Fig. 8).
The composer places a short silence before section D, whose sound design features a high density of sound grains (see Fig. 9). The image jumps to a new stage of transformation: with the addition of the stringy sphere and RGB delay, the layered relation between sound grains is reconstructed (see Fig. 10). The increasing grain density creates a legato, sustained sonic layer, the aural flavour changes dramatically, and the music reaches its high point in section D.
The granular synthesis lasts into section E. For the 1'10" of section E, however, the composer mainly uses spectral filtering and delay effects to weaken specific frequency components and make the music quieter, while an edge effect is added to the video. Both the auditory and the visual parts finally return to a very simple state.
Because EXTREMA was composed for a special instrument, the Bed of Nails, Fan created a completely new notation for it according to the practical requirements of performance. He uses note heads on 9 different pitches to represent the 9 nails (see Fig. 11), while expression marks show whether a single finger touches a nail or two fingers press it, and identify the pressure levels (see Fig. 12). A series of special symbols, such as R (repeat), (surround twist) and (moving up and down), show the fingers' movements. A letter (A, B, C, D, E) plus a number indicates the sequence number of the MIDI pedal for the five sections. Time codes and durations replace the time signatures and bars of traditional music notation.
The unique score (see Fig. 13), the computer program, the electric circuit, the illustration of hardware connections and software configuration (see Fig. 2), and the audio-video recording of the performance are all necessary for documenting and performing the work.
In interactive music today, new media products and scientific research inspire composers to create unique works, and sometimes even suggest the subjects of those works. Game controllers, the Xbox Kinect, GPS, Leap Motion and other sensors and electronic products, appearing in a number of works, have pushed interactive music towards a subversion of traditional style. EXTREMA is an interactive piece of this kind, based on a non-traditional instrument and exploring its expressive sounds. Fan tried to show a new and poetic multi-sensory aesthetic that interactive music gains from the computer, and to explore the combination of the auditory and visual senses in interactive music through multimedia convergence composition. New media played a key role at every point of the composition: the subject of the work, the experimental environment, the musical form, the sound design, the visual design, the performance and the notation all reflect its impact. New media has already exceeded the simple status of technology, tool or medium; it has become an important contributor to the turn in artistic creation and aesthetics that carries interactive music beyond the old media age.
- Minjie Lu
Minjie LU (Iris Lu) received her bachelor's degree in electronic information engineering and her master's degree in electronic music from the Sichuan Conservatory of Music, and her Ph.D. in Art and Media from Sichuan University. She is now an associate professor in the Electronic Music Department of the SCCM and the Digital Media Art Key Laboratory of Sichuan Province. Her research focuses on interdisciplinary work spanning electronic music and culture. Her works have won the Pauline Oliveros Prize of the 28th IAWM competition and a prize in the eARTS Digital Audio Competition, and her scholarly essays on music have received awards from MUSICACOUSTICA-BEIJING. Her works have been selected for performance at ICMC/SCM, the Kyma International Sound Symposium, Sonic Rain, FMO and elsewhere. In 2012 she was sponsored by the China Scholarship Council as a visiting scholar in the U.S. In recent years she has been invited to serve as a reviewer for ICMC.
- Hongshuo Fan
Hongshuo FAN was born in Chengdu, Sichuan Province, China. He is the first recipient of the master's degree in new media music at the Sichuan Conservatory of Music. He also studied electronic music with Xiao Hu, Takayuki Rai, Zygmunt Krauze and Jeffrey Stolet. He is now a faculty member of the Electronic Music Department at the Sichuan Conservatory of Music and a key member of its Electronic Music Creative Research Center. His research and creative interests include new media art, interactive art and multimedia design. His works have been selected for performance in China, the U.S., Poland, the Netherlands, Sweden and elsewhere. He is the winner of the 2015 Shanghai International Electronic Music Week Best Works Award and the 2016 ICMA (International Computer Music Association) Asia-Oceania Regional Award.
Alternative Perceptions of Musical Pitch:
Compositional Practice Towards Auditory Aesthetics
in music by contemporary Hong Kong composers
Is the perception of pitch a subjective response to the intellectual design of sound frequency in music within the context of an established system in relation to culture and technology? If yes, how and to what extent can it be adapted in creative music making and communicated through electroacoustic music?
This presentation reports the author's learning experiences, based on selected pieces by Hong Kong composers that adopt compositional approaches with electronic means implying different sensations of sound frequency, from the early days in the 1960s to recent years. The discussion refers to active composers including LAW Wing Fai, Joshua CHAN, LO Hau Man, and Steve HUI (Nerve). Indigenous methods of proactive music listening are mentioned to highlight the aesthetic and functional role of pitch as it evolved in the pieces, and how attempts were made to accommodate two main types of engagement: the use of the inherent precision and programmability of digital sound synthesis, and the placement of the perpetual motion of single tones on traditional Chinese instruments or in singing practice in line with the treatment of computer-generated sound. The revisiting of the relatively narrow domains of scale-aligned pitched tones in traditional Chinese music, adapted with a synthesized voice in the context of a “digital” opera, will also be discussed.
The discussion of illustrative musical examples starts with “Goodbye China” by Doming Lam, composed in the 1960s and regarded as the first electroacoustic piece in Hong Kong, in which pitch variations of piano sound were achieved with magnetic tape techniques. It is followed by LAW Wing Fai's Sun Soundic, written for ensemble with tape, in which the synthesized sound of the electronic part melds into the instrumental gestures, creating a sonic atmosphere of eastern harmony. With the increasing use of personal computers in music from the 1980s and 1990s in the city, local composers like Joshua Chan employed computing techniques to manipulate specifically tuned scales and tonalities, attracting audiences who expect melodic and harmonic projection in a more conventional way in terms of traditional and local identity. Near the end of the last century, composers adopted even more direct approaches to expressing their concern with the pitch characteristics of Chinese music. In The Last Judgement by LO Hau Man, the portamento of the theremin was employed alongside the Chinese instruments, simulating the perpetual motion of single tones produced by the xiao, a traditional Chinese wind instrument. In the last decade, numerous substantive electroacoustic works in Hong Kong have been presented alongside performances of traditional Chinese regional opera or folk songs. In the “digital opera” The Memory Palace of Matteo Ricci and in Idle Brow Shaping, both composed by Steve HUI, a digital human voice and the singing of Kunqu (one of the oldest extant forms of Chinese opera) were mixed with electronic sound in theatrical performance.
The “accuracy” of pitch produced by Chinese musical instruments or by the singing of traditional operas often annoys, or arouses the curiosity of, western composers who use intervallic relationships as an important factor in their music without understanding the mechanisms by which Chinese instruments and singing practice produce sound in specific musical styles. The sensation of pitch generally depends on the frequency of a periodic waveform. But if the waveform exhibits perpetual fluctuation in frequency, in particular patterns of movement or direction, as in the playing of some Chinese instruments or in traditional singing, or if such fluctuations also occur in the amplitude of particular overtones, the perceived pitch may shift towards the frequency of the fluctuating tone(s), not to mention that the perceived tone colour also changes. When the tone colour is enriched with many harmonics beyond a certain level, the originally perceivable pitch may change because the overtones, especially those from inharmonic spectra, attract the perception of tone colour. This becomes even more pronounced when the composition is performed with panning and spatialization in a theatre environment, as in Steve HUI's Idle Brow Shaping.
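The acoustic side of this claim can be demonstrated with a small sketch. Frequency modulation is used here as a hypothetical stand-in for the "perpetual fluctuation in frequency" described above (the rates and depths are illustrative assumptions, not measurements of any instrument): as the fluctuation deepens, spectral energy moves from the carrier frequency into sidebands spaced at the fluctuation rate, one mechanism by which perceived pitch and tone colour can shift.

```python
# Illustrative sketch: a tone whose instantaneous frequency fluctuates
# loses energy at its nominal (carrier) frequency and gains sidebands.
import math

SR = 8000                 # sample rate (Hz), assumption for the example
FC, FM = 1000.0, 100.0    # carrier and fluctuation rate (Hz), assumptions

def fm_tone(depth, n=SR):
    """One second of a sine whose frequency wobbles around FC."""
    return [math.sin(2 * math.pi * FC * t / SR
                     + depth * math.sin(2 * math.pi * FM * t / SR))
            for t in range(n)]

def magnitude_at(signal, freq):
    """Relative DFT magnitude of `signal` at one frequency."""
    re = sum(s * math.cos(2 * math.pi * freq * t / SR)
             for t, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * t / SR)
             for t, s in enumerate(signal))
    return math.hypot(re, im) / len(signal)

steady = fm_tone(depth=0.0)   # no fluctuation: all energy at FC
wobbly = fm_tone(depth=2.0)   # deep fluctuation: carrier weakens
print(magnitude_at(steady, FC), magnitude_at(wobbly, FC),
      magnitude_at(wobbly, FC + FM))
```

With deep fluctuation the component at FC drops sharply while a strong component appears at FC + FM, consistent with the idea that the listener's pitch and timbre judgments can be drawn away from the nominal frequency.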
The selected compositions supposedly represent a Chinese musical heritage to which we react with both suspicion and curiosity. The objective is to experiment with compositions incorporating pitch elements in both instrumental sound and computer music, trying to introduce creative innovations while preserving the spirit of Chinese music in a different way. The presentation concludes with an exploration of auditory stimuli and pleasure, and of the aesthetic power of pitch, through an understanding of these diverse treatments of sound frequency in musical composition.
- Clarence Mak
Clarence Mak was born in Hong Kong. He pursued music training in composition and electronic music in Hong Kong and the USA, studied guitar performance at Música en Compostela in Spain, and studied musical acoustics and programming at Stanford University's Center for Computer Research in Music and Acoustics. He composes music for orchestra, drama, dance, electronics and multimedia, and has been commissioned by numerous international and local professional performing groups of Chinese and Western music.
Mak takes an active part in professional services promoting music making and related educational activities, serving as keynote speaker, advisor, committee member, delegate, jury member, and producer of projects for international symposiums and festivals. Between 1987 and 2014 he served the Hong Kong Composers' Guild as secretary, treasurer, Vice Chairman and Council Director, and has been a member of organizing committees for major musical events. He was invited to play guitar as soloist in his work “Blue Sky The Heart”, commissioned by the Hong Kong Chinese Orchestra, in concerts of the Hong Kong Arts Festival in 2014 and 2015.
Mak is currently the Head of Composition and Electronic Music, teaching composition and computer music.
Intention and Reception
– Listening Behaviours in Acousmatic Music
Listening to acousmatic music has been the topic of several publications in past decades, although it has rarely been studied extensively and systematically with regard to acousmatic music specifically. Psychological studies of acousmatic music are quite hard to find (Dean & Bailes 2011, 2012), and many proposals about acousmatic listening are rooted in semiotics, phenomenology or cognitive theories, with little or no input from actual listeners (Schaeffer 1966, Smalley 1992, Bayle 1993, Windsor 1995, Kilpatrick & Stansbie 2010, Pasoulas 2011, Meric 2012, Kendall 2014, Thoresen 2015). Two exceptions deserve mention. Leigh Landy (2007) and Robert Weale (2005) developed the Intention/Reception project, through which they showed that listeners were more drawn in by electroacoustic music when information about the composers' intentions was given. However, Delalande's “listening behaviors” paradigm (1989, 1998, 2010) and its developments by Alcázar (2004), Anderson (2011) and Spampinato (2015) suggest that, without being aware of composers' intentions, inexperienced listeners can still have a rich and satisfying listening experience. According to my findings (2014, 2016), composers' intentions or listening guides, without always being necessary or even helpful, may still be useful in specific situations, helping inexperienced listeners make sense of a kind of polyphony they are not used to.
The point of my presentation will be to show how listeners can draw more actively on their own experiences and goals than on perceived composers' intentions, without losing interest in a piece of music. The ideas put forward draw on my doctoral research into listening behaviors as well as on my experience as a composer and composition student, having been able to observe various teaching methods and the verbal reactions of listeners ranging from the most inexperienced to the most expert.
I can thus testify to the fact that listeners, as well as some acousmatic music professionals, often like to be guided through a work by the sounding content of the work itself rather than having to adopt a specific perspective indicated by a text or a discourse. As a composer I am interested in silence and its role beyond the articulatory (rests) and the structural (silence at the beginning and end of a work, movement or section), towards silence as a layer of polyphonic listening (closer to the Japanese 間), as a continuo or a drone. In this sense, sound is not as important as silence, serving mostly to surround and enlarge it. As a student, I was often told I needed to give the listener a hand, “something to hold on to” (Landy 2007). However, this was not in the sense meant by Landy (who is interested in helping listeners not only through the work itself but through its surrounding context). The point was to have me make things abundantly clear for listeners: when there was sound on the tape, the tape was not silent, so I should not have been talking about silence, however relevant it might be in my own formal conception of the work.
Other recurring statements about silence had to do with a capitalistic (or pragmatic) perspective: why mobilize a whole orchestra only to have it stay quiet for relatively long periods of time? Why ask performers to play the music if we do not want them to be seen? In the case of electroacoustic music, why use such low amplitudes, when the richness of the sounds would be so much more apparent at higher amplitudes? Why compose music that seems to ask for a still, unbreathing audience? Listeners often prefer works that include an actual continuo or drone, allowing them to be immersed in sound, and many acousmatic works nowadays, however bare, keep some kind of continuum in order to grasp and hold listeners’ interest.
In this sense, it is not difficult to find interest in common listening behaviors and in the ways one can use them in a composition. However, cultivating commented concerts and specific program notes, and encouraging students to engage in projects that challenge our aural and perceptual preconceptions, could be a way to make them actively wonder about the multiple relations between intentions and receptions, and about their consequences for the activity of the composer.
ALCÁZAR, Antonio (2004). Análisis de la música electroacústica – género acusmático – a partir de su escucha, Ph.D. thesis in musicology, sup. José A. S. Garcia & Francisco G. Calleja, Universidad de Castilla-La Mancha.
ANDERSON, Elizabeth (2011). Materials, Meaning and Metaphor: Unveiling Spatio-Temporal Pertinences in Acousmatic Music, Ph.D. thesis in composition, sup. Denis Smalley, City University London.
BAYLE, François (1993). Musique acousmatique - propositions…positions, Paris, Buchet Chastel.
DEAN, Roger T. & BAILES, Freya (2011). “Modelling Perception of Structure and Affect in Music: Spectral Centroid and Wishart’s Red Bird”, Empirical Musicology Review, 6 (2), 131-137.
——— (2012). “Comparative Time Series Analysis of Perceptual Responses to Electroacoustic Music”, Music Perception, 29 (4), 359-375.
DELALANDE, François (1989). “La terrasse des audiences du clair de lune de Debussy : essai d’analyse esthésique”, Analyse Musicale, 16, 75-84.
——— (1998). “Music analysis and reception behaviours: Sommeil by Pierre Henry”, Journal of New Music Research, 27 (1/2), 13-66.
——— (2010). “Signification et émotion dans les conduites d’écoute musicale”, in Musique, Signification et Émotion, dir. M. Ayari & H. Makhlouf, Paris, Delatour, 231-248.
KENDALL, Gary (2014). “The Feeling Blend: Feeling and Emotion in Electroacoustic Art”, Organised Sound, 19 (2), 192-202.
KILPATRICK, Stephen & STANSBIE, Adam (2010), “Materialising Time and Space within Acousmatic Music”, L’Espace du Son III, Lien (revue d’esthétique musicale), dir. Annette Vande Gorne, 55-62.
LANDY, Leigh (2007). Understanding the Art of Sound Organization, Cambridge and London, MIT Press.
MARTY, Nicolas & TERRIEN, Pascal (2014). “Listening behaviors and formal representation of an extract of acousmatic music in non-expert listeners”, EMS14 Conference, Berlin.
——— (2016). “L’entretien d’explicitation pour analyser l’écoute des musiques acousmatiques”, in Musiques électroacoustiques – Analyses ↔ Écoutes, ed. Nicolas Marty, Paris, Delatour, 67-86.
MERIC, Renaud (2012). Appréhender l’espace sonore. L’écoute entre perception et imagination, Paris, l’Harmattan.
PASOULAS, Aki (2011). The perception of timescales in electroacoustic music, Ph.D. thesis in composition, sup. Denis Smalley, City University London.
SCHAEFFER, Pierre (1966). Traité des objets musicaux, Paris, Seuil.
SMALLEY, Denis (1992). “The Listening Imagination: Listening in the Electroacoustic Era”, in Companion to Contemporary Musical Thought Vol 1, ed. Paynter, J. et al., London, Routledge, 514-554.
SPAMPINATO, Francesco (2015). Les incarnations du son – Les métaphores du geste dans l’écoute musicale, Paris, l’Harmattan.
THORESEN, Lasse (with Andreas Hedman) (2015). Emergent Musical Forms – Aural Explorations (Studies in Music, 24), University of Western Ontario.
WEALE, Robert (2005). The Intention/Reception Project: Investigating the Relationship between Composer Intention and Listener Response in Electroacoustic Compositions, Ph.D. thesis in musicology, sup. Leigh Landy & John Young, Leicester, DeMontfort University.
WINDSOR, W. Luke (1995). A Perceptual Approach to the Description and Analysis of Acousmatic Music, Ph.D. thesis, sup. Eric Clarke, City University London.
- Nicolas Marty
Nicolas Marty is a Ph.D. student in musicology at Université Paris-Sorbonne and holds a Bachelor’s degree in psychology. His musicological work focuses on the study of listening to acousmatic music. He is a junior lecturer in computer music at Université Bordeaux-Montaigne. He studied instrumental and acousmatic composition at the Conservatoire de Bordeaux (with Jean-Louis Agobet and Christophe Havel), where he earned his diplomas in 2016. In April 2016 he was appointed president of Octandre, an association for electroacoustic music in Bordeaux. He is also a member of the associations éclats, temp'óra and ACTA. He studies taiji quan with Yann Lapeyrie (IRPO). His aesthetic is founded on the contemplation of space and silence, putting aside linearity and discursive chronology.
Pour adoucir le cours du temps by Tristan Murail
Local-global processes and his treatment
of computer-assisted composition
In this presentation, I will examine Tristan Murail's musical style, and in particular his idea of local and global processes, through his piece for orchestra and electronics “Pour adoucir le cours du temps” (literally, “To calm the course of time”), written in 2004.
First, we will examine the context of this piece, notably Murail's youthful interest in the phenomenon of attack-resonance, discovered through the vibratory morphologies of Beethoven's Pastoral Symphony and gathered in this piece. We will then look at Murail’s idea of the temporal structure of the piece and his treatment of attack-resonance with regard to the morphology of a series: noise anacrusis, attack-derived echo, the resonance’s vibration and iterative balancing, supported by harmonic manipulation whose temporal aspects are varied through energetic increase and decrease. Next, we will consider the metamorphosis of the musical objects’ profiles in the score, concerning the morphology of this phenomenon in the construction of the attack-resonance variations, where we will discover generating figures that cross archetypes of melody, sustained sound or iterative pattern, as well as the neume. Finally, we will examine the harmonic structure in relation to the sound model of the gong, in which the composer discovered harmonic equality and inequality, as well as the idea of modeling bell sounds in instrumentation.
In conclusion, by identifying similar morphologies in several of Tristan Murail's works, I would like to show that Murail develops the "relation of cause and effect" far beyond the elementary structures and bell-sound modeling of the early spectral school, and that through his two aesthetic axes, the local-global process and attack-resonance with its energetic driving of the metamorphosis of musical objects, his music becomes an abstract, almost combinatory process while remaining acoustically organic, resulting in a singular style and form.
- Keita Matsumiya
Born in Kyoto, Japan in 1980, Japanese composer Keita Matsumiya is a 2016/2017 resident artist at the Casa de Velázquez, as a member of the Académie de France in Madrid, Spain. Previously he took part in the Cursus de composition 2011/2012 at IRCAM, where he researched computer-assisted music. His principal teachers were Kiyoshi Furukawa, Masakazu Natsuda, Frédéric Durieux, Gérard Pesson, Luis Naon, Michael Levinas and Claude Ledoux. He received a Bachelor's degree in composition (2010) and a Master's degree in composition (2013), and obtained two prizes in composition and music analysis at the Conservatoire National Supérieur de Musique et de Danse de Paris. Before Paris, he studied musicology at Aichi University of the Arts and sound installation at the graduate school of Tokyo University of the Arts, where he received a Master of Arts in 2006.
Matsumiya is the recipient of several honors. For his 2010 work La glace s’étoile, s’enchaîne for flute and harp, he received the Takefu Composition Award 2010 in Japan from a jury comprising Toshio Hosokawa and Mark André, and for his 2013 work Soliton for chamber orchestra and electronics, he received an honorable mention at the Concours Destellos 2015 in Argentina. Other recent honors include two artistic residency fellowships, from the Institut Français in Fez, Morocco, and from Tokyo Wonder Site, as well as a commission from the Klangspuren Festival 2012 in Schwaz.
His catalog extends from instrumental and vocal music to mixed and electroacoustic music. In 2015, he received a commission from Dairakudakan, a prestigious Butoh dance company in Tokyo, to write an electroacoustic stage piece entitled ASURA, and in 2017, a commission from Ensemble Regards in Paris, with the support of SACEM, for his choreographic-musical project KARURA. His works have been performed by the National Orchestra of Lorraine, Ensemble TIMF, Camerata Stravaganza, Musica Universalis, Ensemble Regards and the Orchestra of Laureates of the National Conservatory of Music and Dance of Paris, among others, and presented at renowned festivals such as Festival Mixtur in Barcelona, Festival Klangspuren in Schwaz, Centre Acanthes in Metz, the Brittany International Saxophone Academy, the Tongyeong International Music Festival, the Takefu International Music Festival and the Ars Musica Festival in Brussels.
His scores are available from Edition Tempéraments in Bordeaux. He is a part-time lecturer at Aichi Prefectural University of Fine Arts and Music for 2017-2018.
The audio-visual contract within multi-modal virtual reality environments: a case for re-evaluation
The field of visual music has historically been constrained to two-dimensional screens, yet recent developments in virtual reality (VR) technology have the potential to revolutionise the medium by allowing new levels of integration between multi-sensory data streams. This new domain challenges established audio-visual integration models, and in doing so offers new creative potential to the electroacoustic composer. As VR systems proliferate, audio-visual composers transitioning from traditional two-dimensional screens into VR must ask a fundamental question: does “the audio-visual relationship” change within a virtual reality environment (Chion 1994; Coulter 2010)? This paper presents some preliminary observations and proposes an experimental framework to facilitate further investigation.
Virtual Reality: a multi-sensory experience
Much as the transition from mono to stereo recording introduced a wealth of spatial possibilities into audio production, so too does the transition from 2D screens to VR offer an exponential increase in the ways visual stimuli may be experienced. However, three-dimensionality is just one of multiple factors that make the current generation of virtual reality systems (VRS) such an intriguing medium: motion tracking, haptic feedback and, of course, integrated audio spatialisation form the basis of what has been termed ‘multi-modal VR’ (Wilson & Soranzo, 2015). It is only logical to assume that such a fundamental shift in the way audio-visual media are consumed (Ryan, 2015) must affect not only the way audio-visual works are experienced, but also how composers should approach the development of such works (Breuleux, 2015).
To fully understand the impact of multi-modal VR on the human brain it is necessary to introduce two terms that relate directly to the efficacy of a VRS: immersion and presence. Immersion, in this context, refers to a VRS’s ability to render realistic, detailed virtual environments, and is directly determined by a system’s technical specifications. Presence, in this context, refers to the psychological effect of the virtual reality environment (VRE) upon a user: the greater the presence, the more a user will feel and act as if they are “really there” (Wilson & Soranzo, 2015). Immersion and presence are closely related, and the immersive qualities of a system have been shown to correlate positively with the sense of presence (Diemer et al., 2015; Wilson & Soranzo, 2015). Experimental research has shown that high levels of presence result in an increased focus on stimuli within the virtual environment and a decreased focus on stimuli not relevant to the simulation (Burns & Fairclough, 2015). This masking of irrelevant stimuli is strong enough that it has been shown to decrease the perception of acute and chronic pain and to relieve anxiety and stress in patients dealing with phobias (Dascal et al., 2017; Hoffman et al., 2014; Wiederhold et al., 2014). I refer to this masking process as ‘non-world occlusion’. Conversely, the increased focus on VRE-relevant stimuli that results from higher levels of presence/immersion has been shown to elicit higher memory recall, more intense emotional responses, faster mental processing times and improved performance in abstract mental activities (Diemer et al., 2015; Malińska et al., 2015; Wilson & Soranzo, 2015; Keshavarz et al., 2014). I refer to this phenomenon as ‘in-world focus’. Based on the findings outlined above, it is reasonable to infer that multi-modal VRS can induce a heightened sensitivity to in-world stimuli.
The possibility of such heightened sensitivity is of particular importance to the field of visual electro-acoustic composition – where successful works often depend upon a listener’s ability to maintain focused attention and emotional engagement while experiencing complex abstracted materials. Furthermore, though audio-visual streams may arbitrarily form relationships through the process of synchresis (Chion, 1994), the practice of creating artificial isomorphic and concomitant relationships is all too frequently hindered by technical limitations. As Kröpfl argued in 2007:
“One of the main drawbacks to integrating sound and visual image into an art form is the fact that while sound matter develops in ‘real’ space, visual image so far is projected on a bi-dimensional surface in a ‘virtual’ space. My point of view is that until a consistent development of holographic, that is to say, three-dimensional techniques are attained, integration will not be satisfactory enough.” (Kröpfl, 2007)
Now, some ten years later, the current state of VRSs begs the question: does the heightened sensitivity offered by VR correlate with an increase in constructive synchresis? Or does it in fact lay bare and magnify pre-existing deficiencies within an audio-visual relationship?
I use the term ‘constructive synchresis’ to denote the hypothesis that the simultaneity of three-dimensional visual materials and three-dimensional sonic materials will automatically form stronger and more transformative relationships than their two-dimensional counterparts. For the electroacoustic composer this presents the possibility of creating convincing media pairs with far greater ease.
Initial observations: ‘Tilt Brush’
I will now discuss my own phenomenological experiences using an application called ‘Tilt Brush’ in VR. Tilt Brush is 3D painting software that allows a user to create artworks either in a VRE, using motion-tracked hand controllers, or on a 2D screen with a mouse and keyboard. Ostensibly a ‘visual-focused’ program, in its default setting Tilt Brush utilises only the most basic audio-visual mapping strategies: audio-visual synchronisation and panning. The human auditory and visual processing systems are highly interrelated (Talsma, Doty & Woldorff, 2007), and it has previously been shown that audio and visual material, if congruent, can become mutually reinforcing and take on a gestalt quality, enhancing the effectiveness of both streams (Chion, 1994). Despite the objectively simplistic audio-visual (AV) mapping strategies employed in Tilt Brush, both audio and visual streams were perceived as fully congruent and evoked an autocentrically engaging response that fits with Chion’s concept of ‘super-additivity’. Upon repeating the experiment using Tilt Brush with a traditional 2D screen, the AV streams felt less integrated and resulted in a far less engaging overall experience. Expanding upon these observations leads me to hypothesise that deficiencies between AV streams may in fact be masked by the phenomenon of non-world occlusion. This in turn suggests that weak AV relationships may be strengthened purely by being experienced in a VRE. This has significant implications for visual music composers, as a lowering of technical overheads would allow highly convincing and impactful audio-visual relationships to be created more easily. Finally, given that a satisfying synchretic relationship was created using only rudimentary AV mappings in VR, it follows that, should more sophisticated strategies be employed, a highly effective visual music work could result.
From these preliminary observations of audio-visual relationships within VREs, it is predicted that, in comparison to media consumption using traditional 2D screens, in-world focus will increase the perception of synchresis between audio and visual textures, leading to a superadditive effect. In turn it is predicted that non-world occlusion will mask deficiencies within the audio-visual relationship, allowing satisfying isomorphic relationships to be more easily attainable using VRS than using traditional media.
To investigate these predictions, I propose to create a series of creative studies which may be experienced in either a VRE or using traditional 2D screens. These studies will investigate three primary questions:
1. Will the same material produce noticeably different perceptions of AV congruence when viewed in a VRE versus a 2D screen?
2. Do satisfying AV relationships created for a 2D screen become more integrated and autocentrically arresting when experienced in a VRE?
3. Do AV relationships created for a VRE collapse upon being viewed on a 2D screen?
Study 1 – VRE vs. 2D screens
Study 1 will present audio-visual materials that demonstrate both concomitant and isomorphic media pairings.
Study 2 – 2D material experienced in a VRE
Study 2 will present audio-visual materials that were created on a 2D screen and then authored for VR.
Study 3 – VRE material experienced on a 2D screen
Study 3 will present audio-visuals that were created in a VRE.
The results of these studies present many potential positive and negative implications for the field of visual electro-acoustic music that utilises VRSs. For example, it has been postulated by Coulter that “it may even be possible to evoke convincing isomorphic relationships through arbitrary or random assignments of three or more simultaneous parameters” (Coulter, 2010). If indeed VREs do lead to a heightened synchretic response then it may be possible to reduce the number of mapped AV parameters required to create convincing isomorphic relationships. However, if audio-visual works created in a VRE do indeed suffer from a weakening of AV integration upon viewing on 2D screens, then this poses serious problems of communicability and transference of a work.
In conclusion, VREs have the potential to significantly affect human perception of audiovisual relationships and, as such, may result in new compositional methodologies and approaches. My current research into the integration of electro-acoustic music, of which this paper is a key component, will form the theoretical grounding for a creative work which I am currently developing. The work is scheduled to be exhibited in early 2018 and will feature electro-acoustic music integrated with a virtual reality environment.
Breuleux, Y., 2015. Visual music: display formats, full-dome projects, the language of A/V spatialisation. In Proceedings of Understanding Visual Music 2015 Symposium (p. 7).
Burns, C.G. and Fairclough, S.H., 2015. Use of auditory event-related potentials to measure immersion during a computer game. International Journal of Human-Computer Studies, 73, pp.107-114.
Chion, M. 1994. Audio-Vision: Sound On Screen. (Claudia Gorbman Ed.) New York: Columbia University Press.
Coulter, J., 2010. Electroacoustic Music with Moving Images: the art of media pairing. Organised Sound, 15(01), pp.26-34.
Dascal, J., Reid, M., IsHak, W.W., Spiegel, B., Recacho, J., Rosen, B. and Danovitch, I., 2017. Virtual Reality and Medical Inpatients: A Systematic Review of Randomized, Controlled Trials. Innovations in Clinical Neuroscience, 14(1-2), p.14.
Diemer, J., Alpers, G.W., Peperkorn, H.M., Shiban, Y. and Mühlberger, A., 2015. The impact of perception and presence on emotional reactions: a review of research in virtual reality. Frontiers in psychology, 6, p.26.
Aardema, F., O’Connor, K., Côté, S. and Taillon, A., 2010. Virtual reality induces dissociation and lowers sense of presence in objective reality. Cyberpsychology, Behavior, and Social Networking, 13(4), pp.429-435.
Hoffman, H.G., Meyer III, W.J., Ramirez, M., Roberts, L., Seibel, E.J., Atzori, B., Sharar, S.R. and Patterson, D.R., 2014. Feasibility of articulated arm mounted Oculus Rift Virtual Reality goggles for adjunctive pain control during occupational therapy in pediatric burn patients. Cyberpsychology, Behavior, and Social Networking, 17(6), pp.397-401.
Keshavarz, B., Hettinger, L.J., Vena, D. and Campos, J.L., 2014. Combined effects of auditory and visual cues on the perception of vection. Experimental brain research, 232(3), pp.827-836.
Kröpfl, F. 2007. Integrating sound and visual image as artform. In Relationships Between Audition and Vision in the Creation in Electroacoustic Music. (Barriére, F. and Clozier C. Ed.) Academie Internationale de Musique Electroacoustique / Bourges, Institut International de Musique Electroacoustique de Bourges / IMEB, Bourges cedex, France. Volume VIII (2004-2005), 89-90.
Macedonia, M.R. and Rosenbloom, P., 2001. Entertainment technology and military virtual environments. Army Simulation, Training and Instrumentation Command, Orlando, FL.
Malińska, M., Zużewicz, K., Bugajska, J. and Grabowski, A., 2015. Heart rate variability (HRV) during virtual reality immersion. International Journal of Occupational Safety and Ergonomics, 21(1), pp.47-54.
Ripton, J. and Prasuethsut, L., 2015. The VR race: who’s closest to making VR a reality. URL: http://www.techradar.com/news/world-of-tech/future-tech/the-vr-race-who-s-closest-to-making-vr-a-reality--1266538.
Riva, G., Wiederhold, B.K. and Gaggioli, A., 2016. Being different. The transformative potential of virtual reality. Annu Rev Cybertherapy Telemed, 14, pp.1-4.
Ryan, M.L., 2015. Narrative as Virtual Reality II: Revisiting Immersion and Interactivity.
Talsma, D., Doty, T.J. and Woldorff, M.G., 2007. Selective attention and audiovisual integration: is attending to both modalities a prerequisite for early integration?. Cerebral Cortex, 17(3), pp.679-690.
Wiederhold, B.K., Gao, K., Sulea, C. and Wiederhold, M.D., 2014. Virtual reality as a distraction technique in chronic pain patients. Cyberpsychology, Behavior, and Social Networking, 17(6), pp.346-352.
Wilson, C.J. and Soranzo, A., 2015. The use of virtual reality in psychology: a case study in visual perception. Computational and Mathematical Methods in Medicine, 2015.
- Clovis McEvoy
Clovis McEvoy is a 29-year-old composer, lecturer and sound engineer based in Auckland, New Zealand. He completed a Bachelor of Music and a Graduate Diploma in Sound in 2014.
Clovis currently lectures at the Auckland University School of Music in the fields of sonic arts and music production. He specialises in live electronics, designing customised music software for interactive performances and installations. In both 2013 and 2017 Clovis was selected to travel to Paris, France to study at and participate in IRCAM’s ManiFeste-Académie arts festival.
Clovis has placed three times in the Douglas Lilburn composition prize, taking first place in 2014. In 2015 his work Conflux (2014) was selected for performance at the Seoul International Computer Music Festival in South Korea. In 2017 his work Flaneur (2012) was selected for the Mise-en Festival in New York City, USA, and the Forum Wallis Swiss Contemporary Music Festival in Switzerland.
Recognition of Tôru Takemitsu’s Electroacoustic Composition Outside Japan:
its Theatricality and the Vortex
Recent studies of early electroacoustic composition in Japan have illuminated the distinctiveness of its development from that in Europe or the United States. These studies also revealed the significance of early electroacoustic works by Tôru Takemitsu, which he produced before gaining international recognition through his instrumental pieces such as Requiem for Strings (1957) and November Steps (1967). This research has contributed not only to highlighting those long-forgotten works, but also to giving recognition to the importance of his compositional experiences in these works, which formed the basis of the so-called “Takemitsu sound.” However, it appears not to be widely known yet that Takemitsu’s electroacoustic works were already drawing attention outside Japan in the second half of the 1950s.
They aroused the interest of the sound-visual artists who founded the collaborative organization Vortex in San Francisco. “In essence, the Vortex demonstrations represent a theatrical exploitation of spatial movement in sound and image in the architectural setting of a planetarium.” The co-founders, Henry Jacobs (1924-2015) and Jordan Belson (1926-2011), played Takemitsu’s tape music in the Vortex concerts and in Europe between 1957 and 1959. These events occurred before Igor Stravinsky discovered Takemitsu’s outstanding musical talent in his Requiem during his 1959 stay in Japan. In other words, although the world-famous Russian composer’s positive comment on the Requiem gave Takemitsu the opportunity to embark on his international career as a composer, his musical creativity had already been acknowledged by a less well-known composer and filmmaker on the American West Coast.
Initially, Vortex focused on a twofold musical aspect – electroacoustic music and world music. More specifically, the former referred to Western electroacoustic composition, while the latter conformed to the definition of Ingrid Fritsch: “The American term ‘world music’ emphasizes the simultaneous and independent juxtaposition of various musical cultures.” The program of the first Vortex concert in May 1957 clearly represented this direction, and included carnival music from Trinidad, Japanese Koto music, a Balinese Gamelan orchestra, dance music of the Middle Congo, North Indian classical raga, and Cuban percussion instrumental music, in addition to electroacoustic works by Jacobs and his American contemporaries as well as Pierre Schaeffer’s Primitive 1948. In contrast, the third Vortex concert in January 1958 no longer contained such non-Western music; instead, the program consisted entirely of contemporary electroacoustic works including Takemitsu’s electroacoustic piece. This was the first time his work was played in public outside Japan. In the following concert, Vortex 4, Jacobs attempted to expand the variety of sound characteristics in electroacoustic music, partly owing to his preference for world music. This was reflected in the concert program, which included Karlheinz Stockhausen’s Gesang der Jünglinge (1956), Takemitsu’s Static Relief (1955), and the tape music of Takemitsu’s contemporary Toshirô Mayuzumi, Aoi-no-ue (1957). Jacobs’ choice of Mayuzumi’s piece is understandable because it employed recorded sounds of Noh chant as the primary compositional material. By contrast, Jacobs did not find such an exotic characteristic in Static Relief; rather, the piece was characterized as “an interesting eclectic work where Takemitsu combines the techniques of the French ‘concretists’ and the German ‘sinusoidals’,” regardless of the appropriateness of such characterization.
The Vortex 5 concert in January 1959 no longer included Mayuzumi’s work, but did present Takemitsu’s “untitled” piece (by inference, La mort de Eurydice). The program note introduced this work by “one of Japan’s leading contemporary composers” as follows:
Untitled is a very brief, but exhilarating work, which clearly demonstrates this young composer’s creative exploitation of synthetic sound.
This implies that Jacobs evaluated Takemitsu’s piece only from the perspectives of creativity and sensibility of electroacoustic composition, not taking his cultural background into consideration.
The other artistic aspect the Vortex artists might have paid attention to or sensed in Takemitsu’s tape music was its theatricality, one of the core concepts of the Vortex presentation. The note in the 1958 Vortex 4 program reads that the idea of theater is a central feature of the Vortex concept as “a new form of theater based on the combination of electronics, optics and architecture.” Takemitsu’s electroacoustic composition was, in most cases, originally associated with multimedia or interdisciplinary work. In other words, he produced tape music for film, radio plays, or theater, rather than as independent musical pieces. This was also the case with the pieces selected for the Vortex concerts: Static Relief was originally composed for a radio play and La mort de Eurydice for a theater piece, and the composer edited both into independent electroacoustic pieces after the events. In this regard, however, a question about Vortex’s choice of Takemitsu’s compositions arises: in what way did Jacobs and Belson sense an element of theatricality in these works?
As Peter Burt points out, on the one hand, Takemitsu’s instrumental compositional style in the 1950s was still conservative and remained behind the newest developments in the West. On the other hand, his experiences of electroacoustic composition played a significant role in the development of his musical ideas. In particular, his collaborations with other artistic genres enabled him to gain insight into the indispensability of visual imagination—whether symbolic, story- or word-related, unrealistic, or abstract—when composing music. Takemitsu’s engagement with radio plays gave him the opportunity for this development, for instance. According to Christopher Balme, the “influence of radio on theater above all in the realm of sound design” was a specific case of intermedial development of art, because
The mix of complex sound tracks on audio tapes was primarily developed in the radio broadcasting studios but soon found its way into the staging practice of theater and film. Dramaturgy also remained influenced by the technical and aesthetic developments of radio broadcasting. . . . No doubt modern dramaturgy shows a tendency towards episodic structure, montage, abrupt change of scene, and so forth.
While Takemitsu had an unshakable focus on the pursuit of sound design and thus did not become deeply involved in dramaturgy, he experienced the process of intermedial work Balme describes. Static Relief, for instance, was his first independent electroacoustic piece after refining the tape music for the radio play Hono-o [Fire], his first musique concrète work for this media commissioned by the Shin Nippon Hôsô [New Japan Broadcasting]. As the composer himself explains, the piece employed a surrealistic concept—collage and montage of sound elements, including both recorded and electronically generated sounds. In this way, Takemitsu intentionally avoided structural development.
Through such compositional experiences, Takemitsu admitted his preoccupation with musique concrète “as the best method of recognition, rather than creating a piece with this method.” More specifically, he wrote:
I myself grasp [musique concrète] almost as the sense of action. By way of combining unrealistic sounds, I reconstruct an unexpected landscape.
The idea of a landscape of sound, which remained one of the important concepts throughout his career as a composer, suggests a similarity to Vortex’s concept of theater. Just as Jacobs and Belson found a new artistic expression in the mixture of movable sound and continual change of visual imagery on the dome screen, so Takemitsu sought a new sound space by manipulating unrealistic (i.e., recorded) sounds. The spatial dimensions of their approaches, however, differed; more specifically, while Vortex treated real images projected on the screen, the images Takemitsu had in mind existed only within the musical composition, literally imaginary. Nevertheless, both Vortex and Takemitsu gave importance to the conception of movement, a crucial element of theater. It is therefore possible to assume that Jacobs and Belson identified Takemitsu’s electroacoustic music as potentially theatrical in their terms.
- Makoto Mikawa
Makoto Mikawa is currently a doctoral candidate of the International Postgraduate Program in “Performance and Media Studies” at the Institut für Film-, Theater- und empirische Kulturwissenschaft at the Johannes Gutenberg Universität Mainz, Germany. He received his doctorate in Music from the University of Western Ontario, Canada, and his M.A. in Music Theory from the State University of New York at Buffalo. He worked at the University of Windsor, Canada, as a sessional instructor in Music Theory. His research interests include postwar avant-garde approaches to the theatricalization of music, interdisciplinary and intercultural composition, sociocultural issues in the music of the mid-20th century, and the artistic-aesthetic significance of the connection between Japanese Nô-theater and postwar new music. His writings have appeared in Tempo, Perspectives of New Music, and The Musical Times, as well as in online archives.
Acoustic Expression of Japanese Special Morae
in Singing from the Viewpoint of Word-Setting
and Reflection for Electroacoustic Music
In the Japanese language, the geminate obstruent /Q/, or ‘sokuon,’ and the moraic nasal /N/, or ‘hatsuon,’ have sounds independent of the following consonant; each consists of an independent phoneme and constitutes a unit of one mora. In Japanese songs, each note generally has one mora set to it, but for special morae, including the geminate obstruent and the moraic nasal, there are two word-setting cases. In one, a note carries only the special mora; in the other, a note carries both an independent mora and a special mora. These two cases often appear in the same piece of music, and it is often remarked that their co-existence cannot be explained from a linguistic point of view. As for singing, it is thought that the selection between these cases depends on the musical characteristics, such as pitch, duration, and dynamics, of the passages concerned.
Regarding Japanese acoustic features, most previous studies investigated permissible timings and duration thresholds in the fields of Japanese-language education and learning; there have been few studies in other fields. In sound information processing, for example, one singing-voice synthesis system assigned a note to every mora, special morae included; in another case, the geminate obstruent was handled as a staccato. Thus, many problems remain to be solved for singing-voice synthesis based on the mora system.
Among studies of the acoustic features of the geminate obstruent in singing, one investigated whether melody lines allowed an expression that neither damaged the acoustic features of the geminate obstruent nor obscured the text. Among studies of the moraic nasal in texts read aloud, one showed that the generated duration of the moraic nasal varies with the preceding vowel. However, no study has examined the acoustic expression of sung passages that include both the geminate obstruent and the moraic nasal.
Among electroacoustic works, Toru Takemitsu’s Quiet Design and Joji Yuasa’s Voices Coming use the human voice, and we can grasp the meaning of the Japanese words in them. It can be surmised that these works retain Japanese acoustic features through the process of composition. From the viewpoint of listening comprehension of Japanese, the treatment of the geminate obstruent and the moraic nasal, which are important in language acquisition, presumably plays an important role.
In this presentation, not only the ratio but also the duration and other acoustic features of geminate obstruents and moraic nasals are examined in lyrics sung at varying pitches and note values, in comparison with the same lyrics read aloud. The aim is to find out, by analyzing those features, how the differences between the two word-settings are reflected from a musical point of view.
The acoustic expression of the geminate obstruent and the moraic nasal in Japanese vocal works is examined by analyzing the vocal data of seven master’s-course students of vocal music. I extracted passages containing geminate obstruents and moraic nasals from a Japanese hymnal, many of whose songs are translations whose texts were fitted to the original music later. Recording was carried out at the Aichi University of the Arts. The students both sang and read aloud each passage twice, and I recorded their vocal data on a personal computer using the software Praat. The students were requested to sing in the ideal Japanese singing manner. From the resulting data in Praat, including waveforms, spectrograms, and formants, I analyzed the musical characteristics.
As for the geminate obstruent, in the case of two morae in the note the formant transitions are clear in varying degrees, whereas in the case of only one special mora in the note, both the preceding vowel and the geminate obstruent are sung within the note and the formants of the preceding vowel overlap those of the geminate obstruent. For the moraic nasal the pattern is reversed: in the case of two morae in the note, the formants of the preceding vowel overlap those of the moraic nasal, while in the case of only one special mora in the note the formant transitions are clear.
Thus, for both the geminate obstruent and the moraic nasal, differences in the word-settings are reflected in the degree to which the preceding vowel’s formants overlap, and the two contrast greatly in which word-setting produces the overlap.
It is notable that in most cases the duration of the geminate obstruent in singing is almost as long as that in reading aloud, while the duration of the moraic nasal in singing is unrelated to that in reading aloud. It is also notable that in the case of two morae in the note, the duration of the moraic nasal creates a unit for the subsequent beat.
I therefore conclude that for the geminate obstruent, natural reading duration takes priority in music, while for the moraic nasal, the duration changes to create rhythm for the music. Based on the characteristics obtained, I also present findings on the retention of Japanese acoustic features in electroacoustic music by analyzing the geminate obstruents and moraic nasals in Quiet Design and Voices Coming.
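The duration comparison underlying this conclusion can be sketched as follows. The segment boundaries below are hypothetical placeholders for values measured by hand in Praat, not data from this study; only the arithmetic of the sung/read comparison is shown.

```python
# Sketch of the duration comparison between sung and read special morae.
# The (start, end, label) boundaries are hypothetical stand-ins for
# segment times annotated in Praat; only the arithmetic is illustrated.

def mora_duration(segments, label):
    """Total duration (s) of all segments carrying the given mora label."""
    return sum(end - start for start, end, lab in segments if lab == label)

# One passage containing a geminate obstruent /Q/, sung and read aloud.
sung = [(0.00, 0.32, "ka"), (0.32, 0.58, "Q"), (0.58, 0.95, "ta")]
read = [(0.00, 0.15, "ka"), (0.15, 0.40, "Q"), (0.40, 0.62, "ta")]

sung_q = mora_duration(sung, "Q")   # /Q/ duration in singing
read_q = mora_duration(read, "Q")   # /Q/ duration in reading aloud

# A ratio near 1.0 would support the finding that the natural reading
# duration of the geminate obstruent takes priority in singing.
ratio = sung_q / read_q
```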
- Yoko Momiyama
Yoko Momiyama, musicologist, was born in Japan. She graduated from the University of Tokyo (linguistics). She completed the master’s and doctoral courses at the Aichi University of the Arts (musicology), receiving a Doctor of Music in 2013. Her recent main studies are:
(1) On acoustic expression of singing special morae from the viewpoint of text-underlays in Japanese:
“Differences in Acoustic Expression of Singing between Geminate Consonant and Moraic Nasal from the Viewpoint of Text-Underlays,” The 79th National Convention of Information Processing Society of Japan. (March, 2017)
(2) On English word-settings and pronunciation in renaissance and baroque eras:
“Word-settings and Music in Handel’s Messiah from the Viewpoint of Stress Change in English,” In Fields of Research in Music Expression 2, edited by the Japan Music Expression Society. Tokyodoshuppan. (September, 2016)
She is a researcher at Nagoya City University and a lecturer at Nagoya Women’s University.
A structured timbre and its application to electroacoustic music
Timbre has become a very important musical factor since the 20th century, and sound synthesis, which creates new timbres, has been an important engineering field since the early years of electroacoustic music.
The timbres created by digital technologies in the early years were mathematical waveforms such as sine or rectangular functions, later joined by FM synthesis and simulations of existing musical instruments; these are newly synthesized sound sources. On the other hand, new effect technologies, which synthesize by modifying given acoustic timbres, were also developed. Digital remakes of well-known analogue technologies such as reverberation and filtering are representative functions. However, newly born digital technologies such as granular synthesis, modulation, pitch shifting, and squeeze/stretch effects have had greater impact, and these give a new perspective on electroacoustic music.
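As a minimal illustration of one of these newly born digital effects, granular synthesis can be sketched in a few lines: a source is cut into short windowed grains that are re-overlapped at a new density. The grain sizes and the sine-tone source are arbitrary choices for the sketch, not parameters from any system discussed here.

```python
import math

# Minimal granular-synthesis sketch (pure Python, mono float samples).
# Hann-windowed grains are read from the source every hop_in samples and
# written to the output every hop_out samples; hop_out < hop_in both
# time-stretches the material and densifies the resulting texture.

def hann(n):
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def granulate(source, grain_len=256, hop_in=128, hop_out=64):
    window = hann(grain_len)
    n_grains = max(0, (len(source) - grain_len) // hop_in + 1)
    out = [0.0] * (hop_out * n_grains + grain_len)
    for g in range(n_grains):
        src, dst = g * hop_in, g * hop_out
        for i in range(grain_len):
            out[dst + i] += source[src + i] * window[i]
    return out

# A 440 Hz sine at an 8 kHz sample rate stands in for a recorded source.
src = [math.sin(2 * math.pi * 440 * t / 8000) for t in range(4000)]
tex = granulate(src)
```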
In this paper a new concept of timbre, “structured timbre,” is defined and introduced in detail. It is defined as a timbre which has musically distinctive factors within it: so to speak, “musical distinctive features,” loosely corresponding to, but not as strict as, distinctive features in phonetics. An example is an instrumental timbre with vibrato or timbral trills.
Several outstanding structured-synthesis technologies have been proposed: 1) sound morphing, 2) sound hybridization and performance hybridization, and 3) sound by sound. We describe them in detail.
Sound morphing is an effect which changes a timbre gradually from one sound to another. The notion comes from computer graphics technologies; the synthesis techniques were published in the 1990s, the technique is still being investigated, and good quality is the central technical concern.
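A minimal sketch of the idea (not of any particular system discussed in this paper) interpolates the partial frequencies and amplitudes of two additive models over time, so that a single stream is heard gliding from one spectrum to the other; the partials below are illustrative, not drawn from real instruments.

```python
import math

# Sound-morphing sketch: linearly interpolate the partial frequencies
# and amplitudes of two additive-synthesis models, so one stream glides
# from timbre A to timbre B. The partials are invented for illustration.

SR = 8000
A = [(440.0, 1.0), (880.0, 0.5)]    # (freq Hz, amp) pairs of timbre A
B = [(440.0, 1.0), (1320.0, 0.8)]   # timbre B has a different upper partial

def morph(a, b, dur=1.0):
    n = int(SR * dur)
    phases = [0.0] * len(a)
    out = []
    for i in range(n):
        m = i / (n - 1)                 # morph index running 0 -> 1
        s = 0.0
        for k, ((fa, ga), (fb, gb)) in enumerate(zip(a, b)):
            f = (1 - m) * fa + m * fb   # interpolated partial frequency
            g = (1 - m) * ga + m * gb   # interpolated partial amplitude
            phases[k] += 2 * math.pi * f / SR
            s += g * math.sin(phases[k])
        out.append(s / len(a))
    return out

y = morph(A, B)
```

Accumulating phase (rather than recomputing it from the instantaneous frequency) keeps each partial a continuous stream while its frequency glides, which is what distinguishes this from a simple crossfade.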
Jonathan Harvey’s “Mortuos Plango, Vivos Voco” (1980) is an antecedent of sound morphing, transforming the sound of Winchester Cathedral’s great bell into a treble voice. The two sources are modified and mixed together; at the time, however, the technology was not sufficient to implement real morphing, which maintains only one stream at a time. Esthetically, though, neither mixture nor morphing should be judged old or new; both expressions can exist in parallel. Osaka’s “Mirror Stone” for flute and computer (1996) is an example of a piece in which sound morphing is used as a musical theme.
Sound hybridization is a synthesis technique that creates a sound composed of perceptual factors from various sounds; it is called cross synthesis when two sounds are involved. Joji Yuasa’s “Nine Levels by Zeami,” commissioned by IRCAM in 1989, is an early example of cross synthesis of white noise and speech.
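A one-band caricature of cross synthesis, far cruder than the IRCAM processing mentioned above, transfers the frame-by-frame amplitude envelope of one source onto another; real cross synthesis does this per frequency band, but the single-band sketch below shows the principle with noise as carrier and a slowly varying signal as modulator (both invented for illustration).

```python
import math, random

# One-band cross-synthesis sketch: the RMS envelope of a "modulator" is
# imposed on a "carrier". Proper cross synthesis (a channel vocoder)
# applies this per frequency band; one band only shows the principle.

FRAME = 100

def envelope(signal, frame=FRAME):
    """Per-frame RMS of the signal."""
    env = []
    for i in range(0, len(signal) - frame + 1, frame):
        chunk = signal[i:i + frame]
        env.append(math.sqrt(sum(x * x for x in chunk) / frame))
    return env

def cross(carrier, modulator, frame=FRAME):
    env = envelope(modulator, frame)
    out = []
    for j, e in enumerate(env):
        out.extend(x * e for x in carrier[j * frame:(j + 1) * frame])
    return out

random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(2000)]                   # carrier
slow = [math.sin(2 * math.pi * 3 * t / 2000) for t in range(2000)]     # modulator
y = cross(noise, slow)
```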
In performance hybridization, one particular performance style is transferred to another instrumental timbre. In Osaka’s “Morphing Collage” for piano and computer (2002), the shakuhachi’s shaking performance is transferred to the sho timbre. The sho is a traditional Japanese mouth organ used in gagaku; as a natural attribute of its sound-production mechanism, it cannot generate vibrato and its pitch stays uniform. The shakuhachi, on the other hand, can freely control pitch without fingering and has a rich shaking expression called “yuri.” In the piece, the sho’s timbre takes on an expression similar to the shakuhachi’s yuri.
Extended sound stream
We classify sound morphing and sound/performance hybridization into a category of extended sound stream. Sound stream is a psychoacoustical term meaning sound that is perceived as only a single sound. An acoustical sound is usually easy to identify as one stream, since one stream corresponds to one vibrating physical object. An exceptional example is Mongolian throat singing, khoomii, in which two different timbres arise from one physical source, the vocal cords. Such effects cannot strictly be called one stream; rather, they are border timbres between one stream and two or more streams. Given two streams, sound morphing makes it possible to perceive an extended stream by gradually changing the timbre.
In electroacoustic music these sounds are computer synthesized. Similarly, instrumental performers have made efforts to find an extended stream within the timbre of their own instruments.
One understandable extension is pitch-register extension, seen in Mari Kimura’s performance of subharmonics one octave lower, or a flute player’s performance one octave lower using lip vibration as on a brass instrument. Sound morphing is also seen when a flutist passes from the flute sound to his or her own voice, which is not singing. The direction of the extended stream is a common target for both music performers and engineering researchers.
Sound by sound
Sound by sound is defined as a timbre composed of other timbres: for example, a phoneme composed of musical-instrument sounds such as flute tones. This is very close to the concept of sound hybridization, but it has a hierarchical structure: in the top layer speech is heard, while in the lower layer the flute sound is clearly heard. Sound hybridization, by contrast, tries to copy and transfer a certain perceptual aspect of another timbre, not the whole sound. The notion also comes from painting: the trompe l’oeil art of Giuseppe Arcimboldo, and yose-e in Utagawa Kuniyoshi’s ukiyo-e. The former comprises portraits composed of vegetables and fruits, while the latter comprises portraits composed of human figures. The same analogy can be adopted for sound timbre. The best-known piece using this timbre is Jonathan Harvey’s orchestral piece “Speakings” (2008).
In this piece, the lower-layer timbre changes rapidly while the upper-layer (global) timbre, such as that of trombone or strings, is preserved unchanged. As the title implies, it tries to implement the sound-by-sound effect defined here. However, the intelligibility is not very high: although we admire how the expression imitates speech, the audience cannot perceive the phonemes. The question then arises whether it is technically possible to express phonemes in the upper level in terms of instrumental timbres in the lower level. This is a technical concern, and one of the important axes of sound-quality evaluation. In general, the two axes trade off: if the intelligibility of the instrumental timbre in the lower level is high, the intelligibility of the timbre in the upper level is lower, and vice versa. Sound synthesis technology which satisfies both aspects remains a subject for further study.
In the musical aspect, on the other hand, intelligibility of the speech rendered by an orchestra does not seem to be a very important factor in Harvey’s piece. However, we are not sure whether this comes from technical limitations or from esthetic requirements.
Future directions of sound by sound effect
The sound-by-sound effect is not limited to synthesized sounds. As we define it, some acoustical sounds have structured timbre, such as vibrato and the timbral trill. A timbral trill can be observed in yodeling and in the shakuhachi’s korokoro or karakara. Contemporary flute players can perform a similar kind of trill. These performances make use of timbral change among different registers of the voice or instrument.
Possibilities of more extended timbres
One direction is to modify or emphasize the lower-level timbre of acoustical sounds. An example is changing the characteristics of vibrato, such as the rate and depth of the original sound’s vibrato.
Another possibility is reorganization of the temporal structure: for example, as an extension of the woodwind timbral trill, a trill in which two or more different timbres alternate sequentially in rapid movement. This might give a different impression of the overall timbre at the top level.
Trompe l’oeil art and yose-e are very much related to illusion. Similarly, auditory illusions and their timbral possibilities can be an interesting musical idea. In visual objects, humans tend to see spatial coherency within a local range and fail to detect the global spatial contradiction. Since sound by sound is defined as a hierarchical structure of timbres, similar auditory illusions between short and long temporal scales can be expected.
We have discussed sound effects newly born after digital technology became the mainstream of sound synthesis. In particular, structured timbre has been defined, classified into categories, and discussed in detail, with introductions to pieces which not only include these effects but make them a main theme penetrating the whole piece. Some of the effects introduced here are still under study in the sound-research field, and richer timbres of good quality are expected to be synthesized in the near future. New performance styles based on these effects are also expected to be created by instrumental performers. These timbres will become part of the standard electroacoustic music vocabulary.
- Naotoshi Osaka
Naotoshi Osaka received an M.S. degree in electrical engineering from Waseda University and a Doctor of Engineering degree in 1994. He presented his pieces at ICMC ’93, ’03 and ’06. His musical interest focuses on timbre synthesis, from orchestral sound to computer-generated sound. He has also organized computer music concerts, such as the NTT Computer Music Symposium I (1997) and II (2001). From 1996 to 2003 he led a computer music research group at NTT Communication Science Laboratories in Atsugi, Kanagawa. He is presently a professor at Tokyo Denki University. He became president of the Japanese Society for Sonic Arts (JSSA) in 2009.
(Re)notating cultural identities through musique-mixte:
A reflection of heterotopian constructs in performance
This paper presents a discussion of performative heterotopia within musique-mixte (for instrument and electronics), focusing on the implications of the spaces between the materials of music, the performance space, and zones of interaction. Heterotopia is proposed as a performance space of connectivity; of interweaving different perspectives of experience and practice; of modes of interaction; and a place of others as seen through the self. Electroacoustic elements of performance are proposed as enablers, as mediators for cross-cultural understandings in performance, shaping the negotiations and relationships created within the space that converge as a shared experience of cultural diversity.
Nicolas Bourriaud recounts in his introduction to Michel Foucault’s Manet and the Object of Painting (2009) that Foucault developed the concept of ‘heterotopia’ as a way of representing “a constant among all human groups, [which] can be described as ‘anti-location’”. It consists of an ensemble of “places outside of all places, even though they are at the same time localizable”. The concept of an ensemble of places expands into heterotopian performance spaces where the performance may articulate diverse relationships and interactions through the integration/juxtaposition of acoustic and electronic elements.
The research project The Imaginary Space: Developing Models for an Emergent Malaysian/Western Electroacoustic Music (Malaysian Government Fundamental Research Grant Scheme 2012–14), led by this author, aimed to make artistic and cultural connections through the medium of musique-mixte, to experience contemporary and traditional music practices of Malaysia, and to examine aspects of cultures and interculturality in a context of new electroacoustic composition and performance. New compositions utilized Western flute, fixed sound, and live electronics. It was found that creating a sonic environment through digital signal processing (sound manipulations including timbre, volume, sound location, and spatialisation) provided a context for investigating intercultural parameters and the potential for creative exchange. The capacity of electronic techniques to shift perceptions of sonorities, location, spatial dynamics, and characters of the music challenged listener and performer expectations and responses, provoking shifts in understandings and performative identities, and activating new interchanges through a fusion of practices and cultures. Exemplars of heterotopia arising in compositions from this research will be included in the paper.
- Jean Penny
Australian flautist/researcher/educator/editor, Dr Jean Penny, returned to Australia in 2016 following four years as Senior Lecturer in Music at the Fakulti Muzik dan Seni Persembahan, Universiti Pendidikan Sultan Idris, Malaysia and her subsequent appointment as a Fellow at the UPSI Education Research Laboratory. She has extensive experience in performance with major Australian symphony orchestras, chamber ensembles and solo recitals. Dr Penny’s work is grounded in Western art music cultures – principally new music performance, intercultural studies and practice-led research. Her Doctor of Musical Arts (QCGU 2009) study investigated the performative nexus of flute with digital technologies. In Malaysia she led major research projects centered around new music and intercultural perspectives, taught at multiple levels, and was Chief Editor of the peer reviewed Malaysian Music Journal from 2012-2015. She has presented at many national and international conferences and forums, and has won numerous awards, research and arts grants, sponsorships and university awards for outstanding service. She resides near Melbourne, Australia.
Network Music Performance over IPv6:
Two Year CERNET2 Project to Create a Large Scale Piece
Our presentation will be a research update on the first year of a two-year CERNET2 (China Education and Research Network) grant awarded to Ph.D. student Maggie Qi at the Central Conservatory of Music in Beijing. It was the only one of 100 funded projects in the category of the arts, as we promised a science/engineering contribution of open-source code enabling OSCgroups to send data over IPv6 networks. That has been accomplished.
We also built in a rational methodology to design, test, scale, and iterate a large network music event using only IPv6 infrastructure. In the testing phases we collect latency and QoS (quality of service) statistics while scaling up from two to five performance nodes, and at the same time test a theory of multi-tempi/polyrhythmic form to which we believe network music is well suited.
Our modus operandi is not to add artificial delay but to feature the irrational inter-nodal latency ratios that determine the rhythmic and harmonic features serendipitous to any particular configuration of participants, which will be unique in every networked case. Our aesthetic hypothesis is that we will finally discover the equivalent of the Higgs-boson phenomenon in the realm of networked signaletic discourse.
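One hypothetical way measured inter-nodal latencies could seed multi-tempi ratios is sketched below; the millisecond figures and the beat-stretching rule are invented for illustration, not the authors’ methodology or CERNET2 measurements.

```python
# Sketch: derive per-node tempo ratios from measured one-way latencies.
# The latency figures and the stretching rule are hypothetical; the
# point is only that each node pair yields its own ratio, so every
# network configuration produces a unique polyrhythmic relationship.

BASE_BPM = 60.0                         # reference tempo at the hub

# hypothetical mean one-way latencies (ms) from a hub to four nodes
latency_ms = {"node_a": 2.1, "node_b": 27.4, "node_c": 38.9, "node_d": 21.3}

def node_tempo(lat_ms, base_bpm=BASE_BPM):
    """Stretch each node's beat by its round-trip time so that remote
    echoes land on the local beat: beat' = beat + 2 * latency."""
    beat_ms = 60000.0 / base_bpm + 2.0 * lat_ms
    return 60000.0 / beat_ms

tempi = {n: node_tempo(l) for n, l in latency_ms.items()}
ratios = {n: t / BASE_BPM for n, t in tempi.items()}   # tempo ratio per node
```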
- Mengjie (Maggie) Qi
Mengjie (Maggie) Qi is currently a doctoral student at the Central Conservatory of Music, studying with Professor Zhang Xiaofu, and a visiting scholar at CUNY Brooklyn College under the supervision of Professor Douglas Geers. In 2015, she received her master’s degree at the Central Conservatory of Music, studying electroacoustic music composition under Professor Ping Jin. She strives to make electroacoustic music with Chinese characteristics and to study computer music; she participated in the interactive installation program Sound*Beijing in the same year. She received her Bachelor’s degree at the National Academy of Chinese Theater Art in 2012, majoring in sound design and studying sound recording arts and design.
Her electroacoustic works have won many awards in noted competitions: at the MUSICACOUSTICA Composition Competition, Spectral Color won an Honorary Mention (2011), Echoes of Woodblock from Peking Opera won First Prize (2012), and Dances with Crystals won First Prize (2014); The Road to Krakow won third prize in the Oskar Kolberg Competition (2014).
She has been commissioned by festivals and ensembles, with works such as Autumn for violin and electroacoustic music (2013) and Linchong Fled at Night for Peking opera singer and electronic music (2016) at MUSICACOUSTICA-BEIJING. She is also composer-in-residence of the Love for Music Ensemble in Beijing.
Her works have been performed widely at festivals around the world, including the Audio Art Festival at Bunkier Sztuki (Poland, 2012), festivals in Hungary and Slovakia, WOCMAT (Taiwan, 2013), CIME (North Texas, USA, 2014), ICMC (Netherlands, 2016), and the International Electronic Music Festival (New York, 2017).
She is also enthusiastic about music translation, and has served as a translator for lectures at MUSICACOUSTICA festivals and as an assistant for Professor Marc Battier’s and Professor Jeffrey Stolet’s workshops.
- Ken Fields
Ken Fields engaged in interdisciplinary studies across multiple departments (art, music, and cognitive science), receiving a doctorate in Media Arts from the University of California, Santa Barbara in 2000. He then moved to China to participate in the development of nascent digital arts/music programs at China’s Central Conservatory of Music (China Electronic Music Center, 2003–present) and Peking University (School of Software, Department of Digital Art & Design, 2004–2007). A major accomplishment of this period was leading a team project to translate Curtis Roads’s Computer Music Tutorial into Chinese (published in 2011). He held the position of Canada Research Chair in Telemedia Arts from 2008 to 2013 and is now back full-time at China’s Central Conservatory of Music in Beijing, developing Artsmesh, a professional macOS application widely used in the field of network music.
On the Music of Sounds and the Music of Things
After a century of great upheaval in music, the twenty-first century is demonstrating that it will provide electroacoustic (or sound-based) music with continued radical developments, although they may very well be of a different sort. Technological developments certainly dictated most of the twentieth-century changes in music, and this influence is in no way decreasing. The key change is less a matter of radical change in content; instead, our thesis is that production and distribution will be highly influenced by the formation of new musical communities, often focused on increasing participation through a workshop approach. Although tendencies that have existed for centuries will continue alongside those that arose in the previous century, traditional concepts will be renewed given the ubiquity of technology.
Stated in another manner, the development of artists producing Western art music or forming part of the commercial music sector may not alter significantly, although interest may wane in the former and new means of packaging music may need to be developed within the ‘music industry’. This, however, is not our key concern. Our focus stems from the radical broadening that took place in the previous century, namely from the musical note as the unit measure of almost all music produced to the availability of any sound as musical material. We are talking about particular forms of sound-based music and how their future evolution will involve an increasing number of enthusiasts, how their position within music as a whole will redefine musical boundaries, and how their production and dissemination may form an addition to what might traditionally have been called folk music. This latter point is of great significance, as music in recent centuries has evolved from an art form made primarily by and for everyone and anyone to a more artisanal, professional trade. Our position is that the evolving eras of sampling and do-it-yourself cultures, the latter also known as hacking, will dissolve the ‘amateur’/professional distinction to an extent. This development, alongside much of sonic art’s music existing outside clear pop/art music boundaries, will offer this young century a new form of music of the folk, whether musicians are performing together in one physical location or by way of a (virtual) network.
Even the notion of instrument is broadening, one exciting product of the do-it-yourself (DIY) culture. An increasing number of musicians build their own instruments and, in so doing, are designing the need for their instrument and creating music based on that need, leading towards new and often surprising forms of virtuosity, and even anti-virtuosity and ‘naïve’ approaches: making new sound-based instruments as a method for creating a tabula rasa.
It is arguable that, following the music and associated philosophy of John Cage, musical content is not changing as rapidly and radically as it did throughout the previous century. Our century is, as far as this is concerned, one of synthesis – that is, further developing the radical musical approaches of the twentieth century. Instead, the radical nature of our time is to do with the holism related to creation and dissemination that many working within the music of sounds are in the process of developing. It is this form of radical development that will be our focus in this text.
This talk sets out to summarise the core themes of Richards’ and Landy’s research in this area. Central to their argument is the development of a ‘music of sounds’ and a ‘music of things’. Landy builds on his sound-based music paradigm – a condensed version of the key ideas presented in his two 2007 books – setting the scene for the re-examination of music’s key categories and the place of the music of sounds within that.
Landy investigates the evolution of sampling culture and, in particular, sound-based approaches within it. Issues of interest on the production side include legality and related rights issues, sequential composition, authorship and ownership, and the apparent lack of a ‘celebrity culture’, amongst others. Distribution is mainly through nonstandard channels of audio production. As in the world of hacking, it is a space in which accessibility is broad and the professional/everyone-else distinction is not of particular importance. Here there are two-way influences between traditional high art and popular cultures, leading towards a variety of forms of music that possess their own space and their own communities of interest offering a variety of forms of participation. Sampling here represents a broad range of approaches, from soundscape to grains of samples, from music-based sampling using sound-based techniques to sampling anything. Where hacking is highly focused on the experience of making, sampling is highly focused on the recomposition or recontextualisation of experience.
Richards examines a music of things and the holistic approach in which the borderlines between instrument maker, performer and composer are becoming increasingly fuzzy or, better said, a new form of artist is emerging whose music is a manifestation of his or her (or their) instrument(s) and their self-sufficiency. He looks beyond Cage and Duchamp and the ideas of found sound and objet trouvé to discuss a new type of materialism and objecthood found in electronic music that draws on the broader philosophy of object-oriented ontology as expressed by, for example, Bruno Latour. The instrument is no longer seen as a tool for musical expression, but as a self-sufficient system in which the music is ‘found’. In such cases, this demarcated system points to a technological object that has clearly defined boundaries and often limited parameters for control. The object may have the capacity to generate its own sound (self-generating). But at what point does objecthood break down so that the sound-making system resembles a collection of things? Through making and engaging with electronic sound on a fundamental level – wire, solder, electronic components – the musician/artist places an onus on the constituent parts of these sound-making systems and on how such elements are connected. There is a shift from the prescribed and concrete to a relational aesthetic: how things fit together or not. A consequence of this approach, where the idea of the musical instrument would seem to be subjugated, is to question instrumental virtuosity. Richards proposes a new type of virtuosity that resides in ‘listening’. Moreover, he considers the politics of what can broadly be described as hacking in relation to music. He observes a new form of electronic music that has emerged to critique contemporary culture through anti-technology manifestations, and notes how DIY electronic music is often used as a way of seeking self-determination.
Finally, he reflects on how such practices can lead to new forms of electronic music performance and how the act of making is taken onto the ‘stage’.
The talk, based on a book that the presenters are currently completing, will commence with a contextual introduction and the presentation of its key ideas. After this, Richards and Landy draw parallels between the two areas of the music of sounds and the music of things. Key concepts such as recycling and appropriation, sample as object, plundering and hacking, and technological processes are discussed. Cut-and-paste culture is considered, not only in relation to sound, but in relation to objects and materials, schematics and code. The hardware re-mix is also presented. The emergence of new communities forms the focal point for reflection. Workshopping and participation, which echo a broader cultural milieu of ‘an age of participation’, are seen as central to DIY and sampling cultures within sound-based music. To substantiate the findings, the authors will also draw on a range of case studies and statements from artists working across the disciplines presented.
- John Richards
John Richards explores the idea of Dirty Electronics, which focuses on shared experiences, ritual, gesture, touch and social interaction. He is primarily concerned with the performance of large-group electronic music and DIY electronics, and the idea of creating music inside electronics. His work also pushes the boundaries between performance art, electronics, and graphic design, and is transdisciplinary as well as having a socio-political dimension. Dirty Electronics has been commissioned to create sound devices for various arts organisations and festivals and has released a series of hand-held synths on Mute Records. In 1999, Richards joined Andrew Hugill and Leigh Landy as part of the Music, Technology and Innovation Research Centre at De Montfort University, where he helped initiate the Music, Technology and Innovation, and Music, Technology and Performance degrees. Richards has also written numerous texts on DIY practices in electronic music and new modes of performance.
- Leigh Landy --- see p.49
Considering Space: Issues Surrounding Communicating Spatial Information in Electroacoustic Music Works
Electroacoustic music has made an important imprint on how we conceptualize, compose, perform, and listen to sounds in communal settings. With the advent of the speaker, sound(ing) objects have been freed from their attachment to a direct physical action and may now display a high degree of centrifugal aesthetic (Ciciliani 2014). I will use three composers for shaping the discussion on space, the use of space, and how we might approach it analytically once we have the means to archive it: Karlheinz Stockhausen, Hans Tutschku, and Barry Truax.
Karlheinz Stockhausen was a pioneer in both electroacoustic music and the use of space in music. He was a prolific writer, and we have a large body of literature as well as sketch materials showing how he thought about space and the future use of spatial elements in music (Stockhausen 1959). Seminal pieces like “Gesang der Jünglinge” and “Kontakte” will bring historical perspective and help situate the current discussion.
Hans Tutschku studied sound diffusion with Stockhausen during the late 1980s and early 1990s. His piece “Zellen-Linien” will be used as a means to advance the discussion into the present. Sketches, the score, as well as personal communications with the composer will be used to gain insights into the use of space in Tutschku’s music.
Finally, portions of Barry Truax’s piece “Basilica” will be used as a case study to demonstrate a new system of visualizing spatial content by using ambisonic recording technology and computer vision algorithms.
The traditional documentation/archival of (Western) music, i.e. the score, has a long history formally starting in the early medieval period with Guido d’Arezzo. Guido’s systems had to make a choice as to what was deemed the most important musical parameter for maximizing future reproductions/performances, namely pitch. This choice, made hundreds of years ago, still limits us today in how we learn, think, compose, and write about music. Had Guido decided, for example, that timbre or indeed space was more important than, or as important as, pitch, our musical thinking and how we approach sound would be markedly different. One notable effect of space on a sound is that any sound carries an acoustic imprint of the characteristics of the (physical) space it is heard in (Blesser and Salter 2009). This means that a performance of a musical piece can never be recreated the same way twice: once the sound leaves a speaker/sound source, the space and the objects within the space alter the sound.
The pitch-centric system of the Western score took hundreds of years of constant improvement to reach our contemporary system (Read 1987), which allows a remarkable number of parameters to be archived. However, spatial elements are still rare and difficult to retain, since 3-D space, especially fine-grained, complex temporal movement in 3-D space, is exceedingly difficult to represent on 2-D paper or screens. Recent research has attempted to add spatial aspects to notation with the help of digital tools, such as The Spatialization Symbolic Music Notation Project (Ellberger et al. 2014, 2016). This is an important step, as it can encode the composer’s intentions clearly into a score which, much like pitches on the staff, can help the analyst understand the piece. However, since space imprints on the sound, the study of a score alone, even with embedded spatial trajectories, still cannot achieve a satisfactory analytical outcome. With “Zellen-Linien” we can study the Max/MSP patches to gain another layer of information. There are a total of 32 electronic cues (for a piece lasting less than 20 minutes), which are triggered via a MIDI foot pedal. They are self-contained and do not need any further input from the performer. Tutschku describes spatialization not in terms of movement, stasis, or other more common terms, but rather assigns “attributes” to his cues, such as “jittery”, “dull”, “energetic”, etc. He has confirmed that these attributes operate on a continuum, meaning they happen within a reasonably defined space but the computer determines the exact details at runtime. This means that each performance of “Zellen-Linien” has a slightly different, yet predictable outcome, which in turn makes it difficult to generalize the piece in any conventional theoretical sense.
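The behaviour described here, an attribute constraining a range within which the computer picks concrete values at runtime, can be sketched as follows. This is a hypothetical Python illustration, not Tutschku’s actual Max/MSP patch: the attribute names are his, but the parameter names and numeric ranges are invented for the example.

```python
import random

# Hypothetical cue table: each attribute maps to (low, high) ranges for
# assumed spatialisation parameters. The ranges define the "continuum";
# the exact values are drawn afresh at every performance.
ATTRIBUTES = {
    "jittery":   {"speed_hz": (2.0, 8.0),  "spread_deg": (40.0, 120.0)},
    "dull":      {"speed_hz": (0.05, 0.2), "spread_deg": (5.0, 20.0)},
    "energetic": {"speed_hz": (0.5, 3.0),  "spread_deg": (90.0, 180.0)},
}

def realise_cue(attribute, rng=random):
    """Draw concrete spatialisation parameters from the attribute's ranges."""
    ranges = ATTRIBUTES[attribute]
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

params = realise_cue("jittery")
# Every call stays inside the defined space but differs in detail,
# which is why each performance is "slightly different, yet predictable".
```

Because the ranges, not the values, are fixed, an analysis based on any single performance captures only one realisation of each cue.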
In order to analyze and better understand spatial music such as “Zellen-Linien” or “Basilica”, we need to investigate a different approach. Traditionally, the emphasis of musical analysis was to summarize, explain, and contextualize a piece of music on a global scale. This means that we have a good understanding of how, for example, the sections of a Mozart piano concerto are constructed and relate to each other theoretically. We can even investigate and understand the historical trends that led to a composition, and how the interpretation of the same score by various performers may have changed over time. But all these tools do not easily distill into a workable solution for electroacoustic music. As a consequence, the performance itself must become the main focus of an analyst’s investigation. However, since a performance is ephemeral, we need to preserve/archive it with the highest possible fidelity, not only in pitch (traditional recording technologies) but also in space. For this, ambisonic recording technologies may be utilized. An ambisonic recording captures the sound at the location of the ambisonic microphone from all directions, which means we can imagine the microphone array as a directional sensor that allows us to visualize the sonic energy on a sphere (O’Donovan et al. 2007; O’Donovan, Duraiswami, and Zotkin 2008). A higher-order ambisonic recording would therefore allow a musicologist to preserve these spatial attributes of a performance, mirroring the experience of a single audience member. With the proper hardware and software, the performance situation can then be recreated to a higher degree than is possible with stereo or surround sound recordings. Of particular interest here is also the option of beamforming, which allows a listener to concentrate on a specific direction of the spatial recording.
This means, for example, that one could listen to all the sounds coming from the back left of the room, or try to follow a sound as it moves through the space, recreating its trajectory for analysis. This opens up a completely new way of interacting with and thinking about spatial contexts. Research my colleagues and I are currently undertaking focuses on the extraction of trajectories from ambisonic recordings using an em32 Eigenmike. We postulate that, with the help of computer vision algorithms, we will be able to parse recordings into viable trajectories for further musicological and theoretical analysis, which I will demonstrate through examinations of Barry Truax’s “Basilica”.
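The steering principle behind such directional listening can be illustrated with first-order ambisonics. This is a deliberately simplified sketch, not the actual analysis pipeline: an em32 Eigenmike delivers higher-order spherical harmonics, but the idea of forming a beam from the encoded channels is the same.

```python
import numpy as np

def encode_b_format(signal, az, el):
    """Encode a mono plane wave arriving from (az, el) into B-format W, X, Y, Z."""
    w = signal / np.sqrt(2.0)                 # omnidirectional component
    x = signal * np.cos(az) * np.cos(el)
    y = signal * np.sin(az) * np.cos(el)
    z = signal * np.sin(el)
    return np.stack([w, x, y, z])

def beam(bformat, az, el):
    """Virtual cardioid microphone steered towards (az, el)."""
    w, x, y, z = bformat
    return 0.5 * (np.sqrt(2.0) * w
                  + x * np.cos(az) * np.cos(el)
                  + y * np.sin(az) * np.cos(el)
                  + z * np.sin(el))

sr = 48000
sig = np.sin(2 * np.pi * 440 * np.arange(sr // 10) / sr)
b = encode_b_format(sig, az=np.pi / 2, el=0.0)   # source on the left
left = beam(b, np.pi / 2, 0.0)                   # aimed at the source
right = beam(b, -np.pi / 2, 0.0)                 # aimed away from it
# The beam aimed at the source retains the signal; the opposite beam cancels it.
```

Higher orders sharpen the beam, which is what makes trajectory extraction from Eigenmike recordings feasible; the first-order cardioid above is merely the simplest case of the same spatial filtering.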
- Martin Ritter
Martin Ritter is interested in the intersection of music, technology, and performance. This includes musicological research in electronic music and how it can be analyzed and understood with the aid of digital tools. He is also a composer of electronic and acoustic works, which have been performed across North America and Europe. He has been published in the proceedings of NIME, ICMC, EMS, and in eContact!. He also actively participates in new music festivals such as Darmstadt, Impuls, and ComposIt, where he has had the opportunity to study with Roger Reynolds, Mark Andre, Klaus Lang, Gerd Kühr, Joshua Fineberg, Philippe Leroux, Emmanuel Jordan, Marta Gentilucci, Davide Ianni, and others.
He holds a DMA in composition from the University of British Columbia where his primary teachers were Drs. Keith Hamel and Robert Pritchard, and is currently pursuing a PhD in Computational Media Design at the University of Calgary with Drs. Friedemann Sallis and Jeffrey Boyd.
“Murmullos del Páramo”: rethinking Julio Estrada's opera
through Zeami's theater Nô aesthetics
The purpose of this research is to rethink the characteristic elements of the opera “Murmullos del Páramo” by the Mexican composer Julio Estrada through Zeami’s aesthetics. One version of the opera was performed in Tokyo in 2010. The opera was inspired by the study of sound in Juan Rulfo’s works that Estrada carried out in the 1990s.
The aesthetics of Zeami’s theatre is an art of austere elements designed to move the audience through refined drama. In “Murmullos” (whisperings), Estrada searches for the core of music: the result is an austerity of sound. I want to show how these two approaches converge on the same aesthetics.
This project is developed in two parts:
In the first part, I present Estrada’s musical ideas and his opera. He develops the concept of sound in his book “El Sonido en Rulfo, el ruido ese”, where he presents four modes of sound perception: speaking sound, environmental sound, musical sound and time sound. He explores these dimensions to create and structure his opera.
In the second part, I present Zeami’s aesthetics through Aya Sekoguchi’s recent work “L’empreinte de Zeami dans l’art japonais: la fleur et le néant”. Sekoguchi explores Zeami’s periods to analyze the evolution and formation of nô theatre and its influence on Japanese art. I then analyze Estrada’s approach through concepts such as “ma” (starkness).
The goal of this research is to describe and analyze the 2010 performance of Estrada’s opera and to consider this work in order to define his style, or even to propose a theory of his stylistic work.
- Judith Romero-Porras
Judith Romero Porras was born in the city of Puebla on April 16th, 1975. She obtained a bachelor's degree in classical music at the Conservatory of Music of the State of Puebla. During her years at the Conservatory she was a pianist accompanying the institution's Children's Orchestra and Symphonic Orchestra. She took part in two competitions: "Young Pianists of Puebla" in 1992 and the "First National Piano Competition Isaías Noriega" in 1994. In 1995, she became an assistant professor at the Conservatory.
In France, she obtained an equivalence of studies allowing her to enter the final year of the “licence” degree in music. In 2003, she obtained her degree in music and musicology. On her return to Mexico, she resumed her work at the Conservatory of Music of the State of Puebla. She then obtained a master's degree in education at the Center for Prospective and High Studies A.C. in the city of Puebla. From 2007 to 2010, she worked as a French teacher in the Department of Language and Culture Studies at the Autonomous Popular University of Puebla.
In 2010, she received a scholarship from the Ministry of Education of the State of Puebla for master's studies in musicology at the University of Paris-Sorbonne. This research was extended with a scholarship from Columbia University in New York during the fall of 2012. Her research subjects concern the history and construction of a musical identity in Mexico in the twentieth century. The evolution of Mexican music led her to become interested in the introduction of the new compositional techniques of the 1960s. This subject is currently the research line of the doctorate in music that she has been pursuing since November 2015 under the supervision of Prof. Marc Battier.
Analysis of electroacoustic and interactive music works:
Solo by Karlheinz Stockhausen, an example of performance analysis
Electroacoustic music is no longer just an art, but has become a means of communication. Works composed using electroacoustic media, including interactive works and multimedia installations, form a large part of today's musical creation. In order to understand their character and structure, as well as the processes that lead to their realisation, traditional analysis does not always work. That is particularly evident in recently realised works, but some works created in the past also have a non-conventional representation: they are performance events more than events to be fixed permanently in a score. For the analysis of these works we need a new approach.
In this study, I investigate different recent interpretations of Solo by Karlheinz Stockhausen, composed in 1966 using analogue techniques for the realisation of the live electronics. Since 1990 the piece has been performed with digital techniques, and recently with interactive technologies. This piece too can be analysed as a performance event communicating emotions.
The aim of this study is to analyse examples from different performances of the piece and investigate differences in interpretations of Solo by relating score segmentation to the analysis of the performers' gestural interaction. I also explore the effect of musical structure on the communication of emotion.
The analysis is conducted in two steps:
1- different interpretations by different performers, each using their own version of the score (different score versions)
2- different interactions between the player and the live electronics in the same score version (same score version, with different live electronics)
The analysis involves qualitative and quantitative methods: an analysis of the different score versions, a constant-Q time-frequency representation for the analysis of audio files, and an analysis of the interactive processes in real time during a live performance of the piece as a soundscape improvisation.
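The constant-Q representation mentioned here can be sketched in a few lines. This is an illustrative NumPy implementation, not the actual analysis tool used in the study (tools such as Sonic Visualiser, cited below, provide production implementations): the analysis window shrinks as the centre frequency rises, so the ratio Q = f/Δf stays constant and every octave is resolved with the same number of bins, which suits musical signals better than a fixed-window spectrogram.

```python
import numpy as np

def cqt_frame(x, sr, fmin=110.0, bins_per_octave=12, n_bins=48, q=17.0):
    """Constant-Q magnitudes of one audio frame (naive direct evaluation)."""
    mags = []
    for k in range(n_bins):
        fk = fmin * 2.0 ** (k / bins_per_octave)   # geometrically spaced bins
        n = min(int(round(q * sr / fk)), len(x))   # window length ~ 1/frequency
        t = np.arange(n)
        kernel = np.hanning(n) * np.exp(-2j * np.pi * fk * t / sr)
        mags.append(abs(np.dot(x[:n], kernel)) / n)
    return np.array(mags)

sr = 48000
x = np.sin(2 * np.pi * 440.0 * np.arange(sr) / sr)   # 440 Hz test tone
mags = cqt_frame(x, sr)
peak_hz = 110.0 * 2.0 ** (np.argmax(mags) / 12)      # strongest bin's frequency
```

Run on a 440 Hz test tone, the strongest bin falls on the A4 centre frequency, confirming that the geometric bin spacing aligns with equal-tempered pitch.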
I have considered the following versions:
1- flute version, by D. Wiesner
2- double bass version, by E. Francioni
3- violin version, by Simonetta Sargenti
4- soundscape improvisation, by A. Petrolati, L. Muncaciu, E. Francioni and others
K. Stockhausen (1966). Solo n. 19 (score). Vienna: Universal Edition.
S. Sargenti (1996). Software per la definizione di strumenti nella musica elettronica: Analisi e realizzazione di Mikrophonie I e Solo di Karlheinz Stockhausen. In Atti del XI Convegno di Informatica Musicale, Bologna.
S. Sargenti (2015). An example of evolution in electroacoustic music performance: Stockhausen’s Solo and the creation of a soundscape. In Proceedings of the EMS Conference, University of Sheffield.
E. Francioni (2014). Solo, n. 19. Tutorial. https://www.youtube.com/watch?v=lu_j9zDWUb0
L. Muncaciu (2014). A vocal version of Solo. www.youtube.com/watch?v=hLdylmU-fJ
L. Gabrielli et al. (forthcoming). WeMUST: Wireless technologies and open software for networked music performance. An example of the performance ‘Waterfront’: www.youtube.com/watch?
C. Cannam, C. Landone, and M. Sandler (2006). The Sonic Visualiser: A visualisation platform for semantic descriptors from musical signals. In Proceedings of the 7th International Conference on Music Information Retrieval (ISMIR-06).
O. Lartillot and P. Toiviainen (2007). A Matlab toolbox for musical feature extraction from audio. In Proceedings of the 10th International Conference on Digital Audio Effects (DAFx-07), Bordeaux, France, pp. 237-244.
- Simonetta Sargenti
Simonetta Sargenti was born in Milan. She completed M.A. degrees in Violin, Composition and Electroacoustic Music at the Conservatory “G. Verdi” in Milan, and holds an M.A. degree in Philosophy and Musicology from the Università Statale di Milano and the Università di Bologna. As a professional performer, her interests mainly lie in the application of technology to the musical domain, with a special focus on the 20th-century repertoire. Her compositions involve several instrumentations, including live electronics and magnetic tape, and have been performed in several European countries. She recently contributed to audio-visual installations at the Villa Simonetta in Milan. She is active as a musicologist and researcher in electroacoustic music and in music theory and analysis. She teaches History of Music at the Conservatory of Novara and History and Analysis of Electroacoustic Music at the Conservatory of Pesaro.
Terminological discourses in the field of sound art
The term “sound art” began to be used in the 1980s to refer to a multifaceted and composite genre. As a result of its significant connections to both music and the arts, the multiplicity and hybridity of its forms, and the polysemy of the term itself, it opened the field to various definitions which still raise debates and discussions today. Through an overview of the sound art literature, this paper studies the different discourses on definitions and terminologies according to both theories and practices, with the aim of questioning the framework to which sound art belongs today.
Since the emergence of the first sonic art forms and throughout their development, the opening of a considerable number of alternative spaces, collectives, galleries and festivals associated as much with the music field as with the artistic field increasingly drew attention to this growing art form within cultural and academic institutions. The pioneers of this art showed how sound art is an expanding field which suggests plurality in its correlation with diverse disciplines. The ephemeral character of sound and the countless possibilities it affords constantly vary the forms sound art can take, including the environment and space it interacts with, which consequently puts the perception of the works in continuous change.
Following the early sonic art forms that have been described with different terms, the use of “sound art” has been recurrent for the past decades. The sound art literature has since reflected upon terminology questions in greater depth, stressing the difficulty of determining a specific framework, whether by identifying sound art separately from electroacoustic or experimental music because of their historical attachments (Lander, 1990) or by securing the place it deserves in the art world and acknowledging all of its forms because of its phenomenal experience (Schulz, 2002). Studies of the diverse terms in use show differences in characteristics that also depend on countries (Engström and Stjerna, 2009). The German theory of sound art, Klangkunst, provided specific insights highlighting new aesthetic implications that differentiate it from strictly performative aspects (La Motte-Haber, 1999). Expanded insights on the topic are also given by including other musical genres or artistic movements, in Dan Lander's sound art anthology (1990) as well as in Brandon LaBelle's later study (2006), which presents a detailed overview of the main movements, artists, musicians, collectives and theories that shaped what we now call “sound art”.
Delimitations, therefore, remain complicated. We could point to the consideration of sound art simply as a general term for works of art that focus on sound (Cox, 2004), and to further outlooks suggesting different terms, such as 'sound in the arts', which might represent a greater topic and embodiment (Kahn, 2006). Beyond the challenge to the term itself, other thought-provoking reflections question whether 'sound art' does indeed constitute a new art form (Neuhaus, 2000).
Accordingly, this study suggests a taxonomy that considers the modes of creation and diffusion, and the terminologies that fall within the definition of sound art, based on a comparative overview of the different theories mentioned. As the peculiarity of its interdisciplinary and composite forms questions the pertinence of any possible classification, terminological inquiries are important in defining it independently from other major fields such as electroacoustic music. Therefore, this presentation focuses on intrinsic aspects that take into account the hybrid characteristics of sound art and its relevant practices.
COX, Christoph, WARNER, Daniel, Audio Culture : Readings in Modern Music, New York, The Continuum Publishing Group, 2004.
ENGSTRÖM, Andreas, STJERNA, Åsa, « Sound Art or Klangkunst ? A reading of the German and English literature on sound art », Organised Sound, volume 14, n°1, Cambridge, Cambridge University Press, 2009, p. 11-18.
KAHN, Douglas, « The Arts of Sound Art and Music », The Iowa Review Web 8 : Special issue on Sound Art, ed. Ben Basan, February/March 2006.
LABELLE, Brandon, Background Noise. Perspectives on Sound Art, London, Bloomsbury Academic, 2006.
LANDER, Dan, Sound by artists, Paris, Facsimile, 2013.
MOTTE-HABER, Helga de la, Klangkunst – Tönende Objekte, Klingende Räume, Handbuch der Musik im 20. Jahrhundert, Bd. 12, Laaber : Laaber-Verlag, 1999.
NEUHAUS, Max, « Volume : Bed of Sound », P.S.1, Contemporary Art Center, New York, July 2000.
SCHULZ, Bernd, Resonanzen/Resonances : Aspekte der Klangkunst/Aspects of Sound art, Heidelberg, Kehrer, 2002.
- Aya Shimano-Bardai
Aya Shimano-Bardai is a Ph.D. student in Musicology at Paris-Sorbonne University/IReMus. She holds Master's degrees in both Fine Arts and Musicology and is currently working on her doctoral thesis under the supervision of Professor Marc Battier. Her research, focused on the development of sound art in Scandinavia, sets out to study the interdisciplinary aspects of sound art through the diverse forms it manifests, to inquire into stylistic relations and semiotics through different modes of dissemination, and to identify the means necessary for their documentation.
In parallel with her research activity, she pursues her artistic practice, which combines electroacoustic compositions and sound installations.
The role of performers in electroacoustic music
- around mixed music and real-time electronic music -
Today, following the development of technology, the field of electronic and electroacoustic music has become indispensable and undeniable for musical creation. But performers such as pianists, violinists, flutists and singers are sometimes perplexed by this genre, because the technological power of electroacoustic music seems to be unlimited.
So what is the role of performers in the present musical situation? How can we bring together and reconcile the composers of electroacoustic music and the performers? On this point, the genre of “mixed music” is important for the progress of the future musical world.
In this presentation, I propose first to clarify the definition of the term “mixed”. As Vincent Tiffon indicates (“Musique mixte”, in Théorie de la composition musicale au XXe siècle, Symétrie, 2013, p. 1297-1314), the meaning of “mixed music” remains ambiguous. When we talk about the repertory for electroacoustic music and instruments, why do we not call it a concerto, but a “mix”? According to Mixis, written by the French philosopher Jocelyn Groisard (Les Belles Lettres, 2016), the origin of the concept of “mix” goes back to Aristotle, and Groisard traces this notion up to the Neoplatonic period. It is therefore interesting to study the origin of this philosophical notion and its chronological modification in order to adapt it to music.
Secondly, I will analyze concretely the characteristics of “mixed” music, using several musical examples of different types of “mixed music”: not only repertories which I have played (K. Narita, Pas de deux; Daniel Teruggi, Cristal Mirage; Jean-Marc Chouvel, Ligne claire -obscur horizon; Sofia Martínez, Estelas en la mar), but also real-time electronic music such as Philippe Manoury’s Pluton. I will then try to find the reflection of this terminology in the music itself, discussing the difficulty of synchronization and the question of fusion and fission between two types of sound, one treated by computer and the other produced by instruments.
- Eiko Shiono
Born in Japan, she studied piano and solfège in Tokyo. After some professional experience, she moved to Paris, where she continued her musical studies, specializing in contemporary music (after 1945). She studied the contemporary piano repertory with Claude Helffer and premiered many pieces in recital, such as Toute Volée by Laurent Martin, Kaleidoscope by Kazuko Narita and Tokyo City by Allain Gaussin. She has been invited several times to festivals of contemporary music: at ppIANISSIMO in Sofia, Bulgaria, she premiered à l’infini… by Suzuki Rika, and at the Alicante Contemporary Music Festival in Spain in 2007 she premiered Ligne claire -obscur horizon by Jean-Marc Chouvel and Estelas en la mar by Sofia Martínez. At the same time, she followed courses in musicology at university in France, obtaining a master's degree (Master 1) and a PhD in musicology at the University Paris-Sorbonne, and a DEA (Master 2) at Ircam. At present, she is interested in methods of analysis of contemporary musical pieces and in their aesthetics. In particular, she investigates the philosophical concept of perception in the Hellenistic period (the philosophy of Pythagoras and Aristoxenus of Tarentum) and its chronological evolution up to today, asking how this ancient Greek concept connects to today's music. Interestingly and paradoxically, the ancient Greek concepts remain very modern, and some present-day composers are still inspired by the philosophy of Antiquity.
SU, Yu-Huei / SOO, Von-Wun / HUANG, Chih-Fang / CHEN, Mei-Chih / LI, Hsi-Chang / CHEN, Heng-Shuen
Implementation of Interactive Ecological Sound Devices
in a Long Term Care Facility
Previous soundscape research in Taiwan has mainly focused on device design and environmental surveys of urban or rural areas, with sound samples recorded into digital archives. Soundscape research involving a healing concept can be found in other countries, such as Evert De Ruiter’s “Healing soundscape: hospital acoustics 2.0” (Netherlands) and the “healing soundscapes” project by Georg Hajdu, Clemens Wöllner, Eckhard Weymann and others, with the technical realization of interactive electronic sound compositions and their installation. Dong Zhou’s “Interactive Environmental Sound Installation for Music Therapy Purpose” (Germany, China) designs interactive music and sound to relax the atmosphere of waiting in a hospital. In this study, the concept of a sound field system is further applied to the space and environment design of long-term care facilities at Puli Christian Hospital and the Quixotic Implement Foundation. Through interactive ecological sound devices, the auditory atmosphere can provide an environment for mood change and stress reduction for the residents, doctors, nurses, caregivers and visiting families in the facility. The installed interactive sound devices can also guide visitors and motivate them to participate in interactive activities by enhancing perception and sensory links. Arduino electronic circuit boards with various sensors, combined with Max/MSP software, were used to control and produce a variety of ecological sounds in real time. Although the concept of the soundscape goes back to Murray Schafer in 1977, it is not yet common in Taiwan to design or compose electroacoustic music applying this concept. In our proposed system, the Taiwanese soundscape composition includes sound samples of the ocean, wind and birdsong, hybridized with Taiwanese popular song and Western classical music. Infrared sensors detect whether a person is within the range of the proposed soundscape system.
Once the system is triggered, the ultrasound sensor interacts with the real-time electroacoustic music composition based on the soundscape samples. There are five levels of real-time soundscape composition morphing, according to the distance between the system and the listener, representing the corresponding mood-relieving conditions. Indoor and outdoor device designs were applied respectively, and a user satisfaction survey will be conducted to provide further information for improvement.
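The two-stage sensor logic described, an infrared presence trigger followed by five distance-dependent morphing levels, might be prototyped along these lines. This is a hypothetical Python sketch: the threshold values are assumed, and in the installation itself this mapping would run in Max/MSP driven by Arduino sensor readings.

```python
# Assumed distance thresholds (cm) separating the five morphing levels.
LEVEL_BOUNDS_CM = [50, 100, 150, 200]

def morphing_level(presence_detected, distance_cm):
    """Return 0 when idle, otherwise a level 1-5 (closest listener = 5).

    presence_detected: boolean from the infrared sensor (arms the system).
    distance_cm: range reading from the ultrasound sensor.
    """
    if not presence_detected:
        return 0                      # system stays silent until triggered
    level = 5
    for bound in LEVEL_BOUNDS_CM:
        if distance_cm > bound:       # each exceeded threshold drops a level
            level -= 1
    return max(level, 1)
```

Mapping the continuous range reading onto a small number of discrete levels keeps the morphing stable as a listener moves, rather than having the composition fluctuate with every centimetre of sensor noise.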
Keywords: sound field, sound device, sensor, acoustic ecology, long-term care
The concept of the soundscape was proposed by the Canadian composer Murray Schafer in the early 1970s. His World Soundscape Project launched a series of studies moving from traditional individual sound research to holistic environmental sound field surveys [1,2]. In contrast to ecomusicology as the study of music, culture, and nature, acoustic ecology, also inspired by Schafer, is the study of the soundscape including any elements of the sonic environment.
Previous soundscape research in Taiwan has mainly focused on device design [3,4,5] and environmental surveys of urban or rural areas [6,7]. Soundscape research involving a healing concept can be found in other countries, such as Evert De Ruiter’s “Healing soundscape: hospital acoustics 2.0” (Netherlands), which uses the soundscape to address the complex sound environment of hospitals, including noise control. The “healing soundscapes” project of Georg Hajdu, Clemens Wöllner, Eckhard Weymann, Sebastian Debus, Jan Sonntag, Frank Böhme, and John Groves (Germany) also applies the soundscape to the medical environment, integrating a music-psycho-artistic research project with the technical realization of interactive electronic sound compositions and their installation [9, 10]. Dong Zhou’s “Interactive Environmental Sound Installation for Music Therapy Purpose” (Germany, China) adapts Max/MSP to compose interactive music and sound to relax the atmosphere of waiting in a hospital.
In this study we applied the concept of the soundscape, integrating musical creativity and technology, to interactive ecological sound devices in a healthcare environment. The devices were designed to be installed in the long-term care facility of Puli Christian Hospital, the only regional teaching hospital in Nantou County, Taiwan. The Quixotic Implement Foundation was established to promote complex elderly welfare services in the greater Puli area under a concept that is community-oriented and based on unit care, group homes and small-scale multi-functionality for social welfare and long-term care services.
In this study, the environmental requirement assessment and the installation design have been completed. After the installation is completed, natural ecological sound will be introduced into the space, changing the auditory atmosphere to provide residents, doctors, nurses, care workers, and visiting families an environment for changing mood and relieving stress. The interactive sound setting can serve as a medium to attract visitors and enhance participation in interactive activities and sensory links. The interactive sound devices use sensors to play different types of sound scenes; in addition to the ecological sound, specific music is played synchronously. Combining auditory, visual, and spatial characteristics, the devices can also mediate between the senses, triggering and linking the visitors’ different sense perceptions.
2.1 Environmental Assessment
2.1.1 Mushroom Farm in the Nursing Home
There is a mushroom farm in the Nursing Home of Puli Christian Hospital where residents can plant mushrooms and watch them grow. Interactive sound devices installed in the mushroom farm area could increase residents’ incentive to walk, watch, and plant mushrooms. They could also increase residents’ auditory and visual stimulation by introducing a natural soundscape and music, establishing an ecological environment in an indoor space.
2.1.2 The garden pool in the outdoor space
The garden pool in the outdoor space of the Quixotic Implement Foundation is a public area where residents can take a walk and converse. The pool is surrounded by trees, flowers, and seats for recreation. To increase residents’ incentive to use the space, interactive devices with sensors can be triggered to play different soundscapes related to specific scenes or activities, so that the introduced natural sound blends with the original sound of the public space.
2.2 Concept of Interactive Soundscape and Technique
2.2.1 Mushroom Farm in the Nursing Home
The concept is that a natural soundscape, the sound of flowing water, is played and a warm light is turned on when someone enters the area, so that participants immediately perceive auditory and visual changes that transform the originally cold atmosphere of the space. As a person approaches the mushroom farm, a second musical element, nature music, is turned on. Melodious music, the water-flow sound of the natural soundscape, and warm light together create a relaxed and smooth atmosphere in the space.
Two infrared sensors placed in the space instantly turn on audio playback and light via an Arduino or Raspberry Pi control board in the mushroom farm. The first infrared sensor turns on the water-flow sound and an orange LED light at the top of the space when it senses someone entering; the second turns on smooth music playback, such as a nature-music repertoire, when it senses someone close to the mushroom farm (Figure 1). Through a questionnaire survey we can analyze the usage frequency of the space and the emotional changes of residents before and after installing the interactive devices.
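The two-sensor trigger logic described above can be sketched in Python as follows. This is a hypothetical illustration (the installation's actual controller code is not published): the class name, file names, and callback signatures are our own, and the hardware I/O (GPIO reads, audio playback, LED driving) is injected as callbacks so the mapping itself stays hardware-independent and testable.

```python
# Minimal sketch of the mushroom-farm trigger mapping (hypothetical names).
# On the Raspberry Pi, play_sound/set_light would wrap the audio backend
# and GPIO calls; here they are plain callables.

class MushroomFarmController:
    """Maps the two infrared sensors to their audio and light actions."""

    def __init__(self, play_sound, set_light):
        self.play_sound = play_sound   # e.g. starts a looping audio file
        self.set_light = set_light     # e.g. drives the orange LED at the top

    def on_sensor(self, sensor_id):
        if sensor_id == 1:             # someone enters the space
            self.play_sound("water_flow.wav")
            self.set_light("orange", on=True)
        elif sensor_id == 2:           # someone approaches the mushroom farm
            self.play_sound("nature_music.wav")


# Usage with stub callbacks that just record what would happen:
events = []
ctrl = MushroomFarmController(
    play_sound=lambda f: events.append(("play", f)),
    set_light=lambda color, on: events.append(("light", color, on)),
)
ctrl.on_sensor(1)
ctrl.on_sensor(2)
```

Keeping the sensor-to-action mapping separate from the hardware calls in this way also makes it easy to log usage frequency for the questionnaire analysis.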
2.2.2 The garden pool in the outdoor space
The concept is that natural sounds play when people approach the pool, such as flowing water, bird song, cicadas, and cricket cries, through four independent channels of speakers placed at different positions. People walking around the pool hear different natural sounds from different orientations, which enhances the auditory richness of the place. People passing by or sitting at the pool thus continue to hear the natural soundscape, which encourages use of the space and appreciation of natural sound, cultivates temperament, and promotes interpersonal relationships.
Four passive infrared sensors were placed at four fixed points on the wooden bracket above the pool to sense the four directions respectively. When someone walks past or sits beside the pool, the infrared sensor triggers the corresponding natural-sound playback: through the link to the Arduino or Raspberry Pi control board, the software immediately starts audio playback from the speaker (Figure 2).
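The direction-to-sound assignment for the pool can be sketched as a simple lookup, one channel per sensor. This is again a hypothetical sketch, not the installation's code; the direction labels, file names, and `play(file, channel)` backend signature are assumptions.

```python
# Hypothetical four-direction mapping for the garden pool: each PIR sensor
# triggers a different natural sound on its own playback channel.

POOL_CHANNELS = {
    "north": "water_flow.wav",
    "east":  "bird_song.wav",
    "south": "cicadas.wav",
    "west":  "crickets.wav",
}

def trigger(direction, play):
    """Play the sound assigned to the sensor that fired.

    `play(file, channel)` stands in for the actual audio backend on the
    Arduino/Raspberry Pi controller (assumed API). Returns the file played.
    """
    channels = list(POOL_CHANNELS)                    # fixed channel order 0-3
    sound = POOL_CHANNELS[direction]
    play(sound, channel=channels.index(direction))
    return sound
```

Because each direction is bound to its own channel, a visitor walking around the pool hears the soundscape shift orientation with them, as described above.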
3. Discussions and Conclusion
Through the installation of the interactive soundscape devices, we expect to enhance residents’ incentive to participate in activities in the public space. By increasing physical and psychological stimulation in the auditory, visual, and spatial dimensions of indoor and outdoor spaces, improved interpersonal relationships with positive emotional effects could be achieved. Psychophysiological and behavioral changes after the soundscape intervention can be evaluated with bio-signal measurements and questionnaire surveys. Environmental limitations are also an important factor: for the garden pool in the outdoor space of the Quixotic Implement Foundation, for example, we must take into account that electronic equipment placed outdoors can easily be damaged by sunshine and rainwater. In this study the concept of sound scenes was extended to the design of the public-space environment of long-term care facilities. Through interactive devices and sensors with triggered ecological sound playback, the original atmosphere of the long-term care facility was transformed toward positive emotional effects and stress reduction for residents, doctors, nurses, care workers, and visiting families. Device designs were applied to each site respectively, and a user satisfaction survey will be conducted to provide further information for improvement.
This study was supported by a grant from the Ministry of Science and Technology (MOST 105-2218-E-007-031), Taiwan. Warm thanks are due to our technical assistants, Chi-Shen Wu and Feng-Chen Hsieh.
Yang, C.C. and Lu, T.H.C. “The Study of Sound and Space: Global Trends and Local Responses”. Guandu Music Journal, 13, 77-96, 2010.
Chen, M.Z. “The Practice of the Ecology Sound Interactive Installation in Campus: In the Example of Shanghai Conservatory of Music”. Entertainment Technology, 48, 15-21, 2010.
Su, W.T. “Reconstruction of Cultural Soundscape toward a Sound Experience in West Central District, Tainan City, Taiwan” (Unpublished master’s thesis). National Cheng Kung University, Tainan, 2014.
Chi, T.S. “Eco-Conductor: Interactive Soundscape with Virtual Ecosystem” (Unpublished master’s thesis). National Chiao Tung University, Hsinchu, 2010.
Wang, J.C.S. “Soundscape’s Expression in the Two-city Case: Imagination of Environmental Sociology”. Journal of Building and Planning, National Taiwan University, 10, 89-98, 2001.
Lee, C.L. “A Research on Soundscape along MRT Lines in Two Cities: Field Investigation and Sound Stage Listening Reconstruction” (Unpublished master’s thesis). National Chiao Tung University, Hsinchu, 2014.
De Ruiter, E. “Healing Soundscape: Hospital Acoustics 2.0” (2015). Accessed April 29, 2017: http://www.conforg.fr/euronoise2015/proceedings/data/articles/000233.pdf
Hajdu, G., Wöllner, C. and Weymann, E. “Healing Soundscape”. Accessed April 29, 2017: https://www.unserenhochschulen.de/projekte/unseren-hochschulen-2016/gewinner-2016-healing-soundscape.html?tx_hkschuelerlogin_pi2%5Bantrag%5D=50&tx_hkschuelerlogin_pi2%5Baction%5D=publicDetail&tx_hkschuelerlogin_pi2%5Bcontroller%5D=Hschulantrag&cHash=9fe8a2aa88f2d24301c813a329c2309f
Hajdu, G., Wöllner, C., Weymann, E., Debus, S., Sonntag, J., Böhme, F. and Groves, J. “Healing Soundscape: A Study of the Effect of Sound and Music in a Medical Environment”. GROVES Sound Communications. Accessed April 29, 2017: http://groves.de/de/en/wp-content/uploads/sites/4/2017/01/Healing-Soundscape-study-on-the-effect-of-sound-and-music-in-a-medical-environment.pdf
Zhou, D. “Interactive Environmental Sound Installation for Music Therapy Purpose”. 2016 IRCAM Forum - WOCMAT Joint Conference, Kainan University, Taoyuan, Taiwan, December 14-16, 2016, pp. 34-37.
- Yu-Huei Su
Yu-Huei Su is a full Professor in the Department of Music at National Tsing Hua University. She holds an Ed.D. in education, an M.Mus. in orchestral conducting, and a B.Mus. in piano performance. She is also the director of the Center for Music, Technology and Health at National Tsing Hua University. Prof. Su’s specialties include the integration of music and health technology, health promotion for music performers, the social and applied psychology of music, and the measurement of musical behaviors. She was Co-Chair in 2010 and Chair in 2014 of the International Symposium of Music and Health Promotion: The New Trend in Music Medicine, Music Therapy and e-Health Technology. She is currently an editorial board member of the Journal of Aesthetic Education, and a committee member of the Higher Education Evaluation Committee in the Fields of Arts, Taiwan Higher Education Evaluation and Accreditation Center.
- Von-Wun Soo
Prof. Von-Wun Soo received his academic training in electrical engineering at National Taiwan University. He obtained a master’s degree in biomedical engineering in the EE department and then a Ph.D. in computer science at Rutgers. He has been studying artificial intelligence for 30 years since his Ph.D. He served two years as president of the Taiwanese Association for Artificial Intelligence and has participated in organizing international conferences; in 2011 he hosted the International Conference on AAMAS in Taipei. His research interests are machine learning, natural language, and intelligent agents. He has worked on various applications such as predicting adverse drug effects in bioinformatics, story generation using commonsense causality, and energy management based on multi-agent coordination techniques. Currently he is leading Ministry of Science and Technology (Taiwan) research projects on 1) Deep Learning in Personalized Music Recommendation for Healthcare and 2) Automated Story Generation based on Commonsense and Monte Carlo Tree Search.
- Chih-Fang Huang
Chih-Fang Huang, Associate Professor at the Department of Information Communications at Kainan University, was born in Taipei City, Taiwan. He holds a Ph.D. in mechanical engineering and a master’s degree in music composition, both from National Chiao Tung University. He studied composition under Prof. Wu Tin-Lien and computer music under Prof. Phil Winsor. His electroacoustic pieces have been performed in Asia, Cuba, Europe, and the USA: the electroacoustic piece “Microcosmos” was selected and performed at the International Computer Music Conference (ICMC) in 2006, a composition was presented at CEMI (Center for Experimental Music and Intermedia), University of North Texas, in 2010, and works were performed in Berlin, Cologne, Sweden, and Italy in 2011-12. He was a fellow of the 2012 Art Music Residency, New York. In 2013 he was selected for the International Conducting Master Class of the Martinu Philharmonic Orchestra under Mr. Kirk Trevor and Prof. Donald Schleicher, performing works by Debussy, Brahms, and others, and in 2014 he was invited to conduct the Greater Miami Youth Symphony (GMYS) orchestra. His research papers span many fields, such as automated music composition and sound synthesis, and have been published at ICMC and in international SCI/SSCI/AHCI journals. He is also the conductor of the Taoyuan New Philharmonic Orchestra.
- Heng-Shuen Chen
Assistant Professor Dr. Heng-Shuen Chen received his medical education in 1978-1985 and his family medicine residency training in 1987-1992. Since then he has served as an attending physician in Family Medicine and a faculty member in the Departments of Medical Informatics and Family Medicine, College of Medicine, National Taiwan University. He also earned a Ph.D. in Electrical Engineering from the same university in 2000. His research interests in health information systems, mobile computing, e-learning, telemedicine, and e-health have led him to several large government-funded projects: he led the NII telemedicine project, NII distance education projects, a national health e-learning project for primary and secondary schools, and u-HOSP (Ubiquitous Hospital), a multidisciplinary project with universities, research institutes, and industry alliances in telehealth. Since 2016, Dr. Chen has been at Puli Christian Hospital, where he serves as director of the Community Medicine department and the International Medical Affairs office. He continues his ties with NTU College of Medicine and NTU Hospital as an adjunct faculty member.
An Analysis of Movements in Playing Percussion Instrument and
an Application to Performance Information
Conventional performance information consists of recording pitches, durations, and velocities in a sequence, or of analysing recorded sound. By analysing the movements made in playing musical instruments, however, it becomes possible to record the physical information involved in performance in addition to the information associated with sound. This physical information, like recordings, can be transmitted over the Internet and used to re-create the physical movements of a performer at a distance; moreover, it allows the body itself, together with sound, to be treated as data within a musical stage space. Atau Tanaka is known as a performer famous for performances using various sensors, and he has described stage expression using gestures made by parts of the body.
The creation of gestural and sensor based musical instruments makes possible the articulation of computer generated sound live through performer intervention. While this may be a recent area of research, the musical issues of engaging performance are something that have been addressed before the arrival of the computer. So while from a technical standpoint research in this field may represent advances in musical and perceptual bases of gestural human-machine interaction, musically these efforts in computer music have precedence in instrumental music. It can be said that musically this brings us full circle, back to the concert as forum for the communication of music.
(Tanaka, A. “Musical Performance Practice on Sensor based Instruments,” In Wanderley, M., Battier, M. (Eds.) Trends in Gestural Control of Music (CD-ROM). IRCAM, Paris. 2000.)
Atau Tanaka built his own musical instruments for the input of physical information. He assigned a player’s gestures, acquired from sensor-based musical instruments, to synthesizer parameters, and demonstrated the relationship between the musical stage space in Live Electronics and embodiment. As seen above, from the standpoint of stage art it is important to build a relationship between the player’s body, the sound, and the musical stage space when a Live Electronics work is performed. Therefore, the performance information acquired in this experiment by analysing instrumental movements is considered a form of musical information that connects the stage space and the body in a Live Electronics work.
This research focuses on acquiring only the information produced by playing musical instruments, and analyses instrumental movements from the standpoint of playing skill. This information is specialized to instrumental playing, and it can serve as physical information for music. This study considers the physical information of instrumental performance by analysing basic movements in playing percussion instruments, and discusses the creation of performance-information data as an application of movement analysis.
With regard to skills of playing percussion instruments
Skills have been elucidated in various fields in recent years. Skills essentially mean the acquirements that an expert has learnt; continuous actions and experiences establish them. The usability of skills is evaluated by efficiency and by an aesthetic sense in each field. Styles of skills in each field take shape as the skills are passed to others by word of mouth or by imitation. In order to elucidate skills, it is important to discover the interrelationship between an expert’s movements and the events caused by those movements. By analysing this interrelationship, the precise skills of experts can be passed down to posterity.
Skills in playing instruments are often discussed in terms of the interpretation of the music to be played. For music with a precise rhythm, the useful skill is to keep the beat precisely. On the other hand, there is music that particularly emphasizes melodic pattern; it is composed of a variable continuity of rhythm, shown by the articulations in the score, and the useful skill here is to structure melodies rather than to keep a precise beat. Players are therefore required to have various skills for every musical piece, and for every individual passage within a piece. Even when playing the same sounds, players must use performance skills informed by various interpretations of the music.
To assess skills in playing instruments, the object of assessment must include not only the physical ability for music performance but also the quality of the musical piece, the character of the player, and the interpretation of the score. However, these elements are difficult to define as common skills acquired by all experts, because their assessment varies. Thus, to analyse movements common to many players, the object of analysis must be a movement performed under controlled conditions, sheltered from the effect of the interpretation of musical pieces. With regard to this problem, this study examines basic instrumental movements that are common to players. The object of the analysis is not an assessment of the aesthetic skills of musical playing, but instrumental skill as a motion that experts have acquired. Using this basic movement, this study analyses the muscle motions percussion players use to control drumsticks, and considers the relationship between the instrument held by an expert and the body.
With regard to movements in playing percussion instruments
The important movements in playing percussion instruments are the actions that control the drumsticks. Generally, a player holds the drumsticks with the thumb and index finger, and places the other fingers lightly on them. When a player strikes a membranophone, the drumsticks bounce back from the tension of the drumhead. In order to prepare for the next stroke, the player must adjust the strength of the stroke and control the bounce of the drumsticks. The main muscles used for this control are those that flex the thumb and index finger holding the drumsticks.
Experts constantly practise controlling the drumsticks so that they can perform stably in rhythm, tone, and velocity. This is training of the quick response of the muscles of the thumb and index finger, and it is the element directly associated with proficiency in playing percussion instruments. Players practise etudes to train accurate drumstick control; muscle reactions must be accelerated so as to play with precise rhythms and stable velocities, and muscle responses must be kept stable in order to keep tones of the same quality. It is surmised that experts make these movements unconsciously during their performances. We aim to reveal the correlation between the moment the percussion sound is produced, the reaction speed of the muscles, and the pattern of muscle contraction, for the muscles controlling the thumb and index finger. Based on this idea, we carried out an analysis to assess the basic skills of playing percussion instruments.
The method of the analysis
In playing percussion instruments, a player pushes the thumb and index finger against the bounce of the drumsticks from the drumhead. The reaction of the drumsticks presses the thumb outward from the palm, so the player must conversely apply force to the thumb inward toward the palm. Likewise, the player must apply force to the index finger inward toward the palm, so that the index finger pushes against the bounce of the drumsticks. Altogether, controlling the drumsticks means making two quick movements at the same time: while applying force to both the thumb and the index finger, the player pushes back the bounce of the drumsticks and simultaneously strikes the instrument. We use surface electromyography to analyse these movements essential for striking percussion. The objects of the analysis are the motions of the thenar muscles, which apply force to the thumb, and the radial flexor muscle of the wrist (flexor carpi radialis), which applies force to the index finger. We analyse them in relation to the production of the instrument’s sound. The patterns of muscle contraction obtained from experts are compared with those of non-experts, so that we can analyse the basic movements in playing percussion instruments that experts have acquired. Here we report the results of the analysis and, through this analysis of basic movements, consider the mechanism of proficiency in instrumental playing.
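One way the timing relation between muscle contraction and sound production could be quantified is to locate the EMG activation onset and compare it with the strike time. The Python sketch below rectifies the signal, smooths it with a moving average, and takes the first sample above a threshold derived from the resting baseline. This is a generic illustration, not the authors' actual pipeline; the function name and all parameters are our own assumptions.

```python
# Illustrative EMG onset detection (hypothetical, not the study's method):
# rectify -> moving-average envelope -> baseline-derived threshold.

def emg_onset(samples, fs, baseline_n=100, k=3.0, win=5):
    """Return the activation onset time in seconds, or None if none is found.

    samples    : raw EMG samples (the first `baseline_n` are assumed resting)
    fs         : sampling rate in Hz
    k          : threshold = baseline mean + k * baseline std
    win        : moving-average window length in samples
    """
    rectified = [abs(x) for x in samples]
    env = []                                     # moving-average envelope
    for i in range(len(rectified)):
        seg = rectified[max(0, i - win + 1):i + 1]
        env.append(sum(seg) / len(seg))
    base = env[:baseline_n]
    mean = sum(base) / len(base)
    sd = (sum((x - mean) ** 2 for x in base) / len(base)) ** 0.5
    thresh = mean + k * sd
    for i in range(baseline_n, len(env)):
        if env[i] > thresh:
            return i / fs                        # first supra-threshold sample
    return None


# Synthetic trace at 1 kHz with activation starting at 0.2 s:
onset = emg_onset([0.0] * 200 + [1.0] * 50, fs=1000)   # -> 0.2
```

Given a strike detected in the audio at time `strike_t`, the latency of interest would then simply be `strike_t - onset`, and its distribution could be compared between experts and non-experts.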
- Yoshihisa Suzuki
Yoshihisa Suzuki, percussionist and composer, was born in Yokohama, Japan in 1975. He studied percussion at the Showa University of Music (1994-1998) and composition at the Institute of Advanced Media Art and Sciences (IAMAS, 2002-2005). He worked as a programmer on "sein und zeit #2" (a collaboration with Masayuki Akamatsu, ISEA 2004). He works in percussion performance and sound programming. His major works include "Ring, Quartet" (2003), "Chromatist" (2004), and "Marimba Pleasure" (2008). He received an Honorary Mention in digital music at Ars Electronica (Linz, Austria) 2006 as a member of the musician group mimiz. He is a member of the Japanese Society of Electronic Music (JSEM) and the Japanese Society for Sonic Arts (JSSA).
Rationality in electronic music – what would Adorno say?
What is rationality in music, and how does it differ from other kinds of rationalities? Is electronic music especially rational as it can be controlled thoroughly? How does the progressive rationalization of society affect music? What is the relationship between control and freedom in electronic music? In my paper, I will outline answers to these questions from the perspective of Theodor W. Adorno’s philosophy.
Following Max Weber, Adorno believes that Western culture tends towards the ever-increasing rationalization of all aspects of social life. With the aid of means-ends rationality, the domination of nature develops further and further. In music, this tendency has led to extended composition techniques and technologies. The quest for total control over musical means and material characterizes the process, and the most extreme case can be found in electronic music.
Rationalization brings music closer to science, and its rationality threatens to turn into means-ends rationality. Yet artworks do not have a clear end or purpose, and Adorno sees their rationality rather as a mimesis of scientific processes. Art mimics means-ends rationality in order to criticize it and free itself from it. Through their own rationality, artworks show that there can be other types of rationalities.
According to Adorno, compositions are rational because of the systematic control over their predetermined material, but at the same time there is some pre-rational mimetic impulse. With rational procedures art helps this impulse to fulfil itself. That is, the control of material as the domination of nature gives voice to suppressed nature, and only rationally articulated artworks can illustrate freedom. I will expand on this in my paper.
Rationalization can easily turn against itself and lead to the loss of freedom. For example, the rationality of the twelve-tone technique liberated music from the shackles of tonality, but later petrified into a collection of rules. In the field of electronic music, musical variables can be controlled more accurately than ever: there are no bodily limitations, and harmonic series can be created freely. The problem is that electronic devices might determine the course that music takes. In my paper, I will consider whether rationality increases the freedom of composers or binds their hands. I will also ponder what kind of rationality electronic music embodies today.
- Noora Tienaho
Noora Tienaho is a PhD student in the School of Social Sciences and Humanities at the University of Tampere, Finland. Her doctoral programme is Philosophy and her research interest focuses on the aesthetics and philosophy of music of Theodor W. Adorno. She is applying Adorno’s writings especially to the field of electronic art music. Concepts such as musical material, technology, rationality, and musique informelle are linked to her research. In addition to PhD studies, she is working as an editor in the philosophical journal niin & näin and studying musicology at the University of Helsinki.
TORO PÉREZ, Germán / BENNETT, Lucas
Spatial concepts and performance practice
On the impact of evolving sound diffusion standards on electroacoustic music
Decisions about loudspeaker configurations and diffusion systems in electroacoustic music are in many cases an integral part of the creative process and can often be considered as basic structural elements. Nevertheless, special setups envisioned in early stages of the composition process can be challenged by practical considerations and specific technical constraints imposed by a piece’s realization, performance, or diffusion. As a consequence, original formats and setups of several pieces have been subject to modifications in order to adapt to the requirements and possibilities of available technology.
The establishment of standardized formats (mono, stereo, quadraphonic, octophonic, 5.1 surround, Ambisonics, etc.) was obviously an important and necessary development in electroacoustic music, since they are widely supported and allow for predictable and reliable performance conditions. Through the spread of these now-standard formats and setups, and the accommodation of pieces to them, however, original musical ideas, and specifically spatial qualities, have in some cases been lost or superseded. The question of how to deal with such cases is highly relevant to a historically informed performance practice. There are canonical pieces in the electroacoustic repertoire that have yet to be approached from this perspective, if performers are to fully understand the challenges they present. Among them are the works discussed in this lecture: Stockhausen’s first mature electroacoustic works Gesang der Jünglinge (1955-56) and Kontakte (1958-60), Ligeti’s Artikulation (1958), and Jonathan Harvey’s Mortuos Plango, Vivos Voco (1980).
Karlheinz Stockhausen’s Gesang der Jünglinge was originally conceived for 5 channels. However, the 5th channel has not been available since the 1960s, Stockhausen having combined the original fourth and fifth channels into a new fourth channel, thus creating the 4-channel version that is still distributed by the publisher today. In addition, sketches and setup plans raise several questions about the position of the loudspeakers in the early version. While Stockhausen himself reverted to playing a 4-channel version on a rectangular speaker setup in concert, this practice compromises some aspects of the original spatial conception, as will be discussed in this lecture, taking as a point of departure research done by musicologist Pascal Decroupet.
The sound projection prescribed for Stockhausen’s Kontakte for piano, percussion and tape (1958-60) has been modified in the piece’s second edition. While the first edition of the score (UE 14246) defines a setup with channels 1-4 routed to speakers placed at the left, front, right and back of the hall, the second edition (Stockhausen-Verlag Work N°121/2) places the four speakers at the four corners of the hall with channel 1 at the rear left and the channel sequence proceeding clockwise, thus rotating the original disposition by 45 degrees. The difference between these setups is musically quite substantial, so that using the original disposition should at least be considered, as will be shown in the lecture.
It seems at first glance appropriate for Ligeti’s 4-channel tape piece Artikulation (1958) to be played on a standard rectangular speaker setup. However, Rainer Wehinger’s “Hörpartitur” (listening score), first published in 1970, suggests a setup according to the cardinal points of the space, forming a rhombus rather than a rectangle. Ligeti himself considered Wehinger’s score accurate, and this rhombus disposition is also depicted in a sketch for the piece. There is no information on an auctorial performance tradition of Artikulation; however, it seems clear that if the rhombus setup was originally used, it was eventually superseded by the standard rectangular setup. We will address the merits of the original setup and discuss its specific practical challenges.
Performance instructions supplied for Jonathan Harvey’s eight-channel tape piece Mortuos Plango, Vivos Voco advise to position the speakers around the audience in square disposition in clockwise sequence, starting with channel 1 in the left corner. According to musicologist Bruno Bossis, however, Harvey had intended for a diamond-shaped setup and a different sequence to be used. As will be shown, this disposition in many instances produces a more consistently engulfing spatial sound than the setup suggested by the instructions accompanying the performance material, with clearly apparent moments of symmetry in the respective spatial disposition of sounds. As is evidenced by a sketch, Harvey was considering a cube setup at least at some stage of the composition process. In this setup, the eight channels would have been placed on two squares, 1 to 4 on the lower, and 5-8 on the upper plane. This setup, possibly abandoned at a later stage, will also be taken into account.
In this lecture we aim to expose in detail the specific problems these works present with regard to the overarching question, considering possible reasons why original speaker dispositions were superseded by other, “standard” formats, and taking into account the various editions of the works, performance instructions, and unpublished sources such as sketches. The merits of different setups for specific pieces will also be discussed, taking analytical observations on the works as a point of departure. Finally, the lecture will offer a reflection on the implications of such findings for contemporary performance practice and consider the legitimacy of reverting to earlier dispositions.
The pieces discussed were studied within two research projects on the performance practice of electroacoustic music realized at the Institute for Computer Music and Sound Technology of the Zurich University of the Arts since 2012. The first project was dedicated to the study of pieces produced at the Milan Studio di Fonologia, including works by Berio, Nono, Maderna, Vlad and others, while the second project «Performance Practice of Electroacoustic Music. Towards a practice-based exchange between musicology and performance», funded by the Swiss National Science Foundation (SNSF), was dedicated to the study of pieces held at the Paul Sacher Stiftung, Basle (which was also the main project partner). Both projects included public workshops and concerts.
- Germán Toro Pérez
Born 1964 in Bogotá. Minor in music theory at the Universidad de los Andes in Bogotá; composition studies and Master’s degree in arts at the University of Music and Performing Arts, Vienna. Conducting courses with Karl Österreicher and Dominique Rouits; studies in electroacoustics and computer music in Vienna and Paris.
His catalogue includes instrumental, electroacoustic and mixed compositions, as well as works in collaboration with graphic design, painting and experimental video. Publications and texts on artistic research, composition theory and aesthetics of electroacoustic music as well as on history and identity of Latin American music.
He was director of the computer music course and guest professor of electroacoustic composition at the University of Music in Vienna. Since 2007 he has been director of the ICST and professor of electroacoustic composition at the Zurich University of the Arts. He was a professor of composition at the Darmstadt International Summer Courses in 2012.
- Lucas Bennett
Born 1975 in Basle. Studies in musicology, German literature, linguistics and music theory in Basle. Research associate at the Institute for Computer Music and Sound Technology. Current research activity in the field of performance practice of electroacoustic music. Teaching assignments, publications on 20th and 21st century music. Numerous independent music productions, co-president of the Swiss Society for Music Pedagogy (SMPV) 2014-17, member of the editorial board of the Schweizer Musikzeitung (SMZ).
“Getting it done” in electroacoustic studies:
The effects of deadlines and structured guidelines on the creativity and motivation of electroacoustic music students
As researchers, composers, students, artists, productive human beings, we are mostly familiar with the sensation of an approaching deadline: perhaps a change in our efficiency, decision making ability, organization, excitement, and stress levels, among others. Studies have shown that deadlines have diverse positive and negative effects on motivation and creativity (Chae, Seo, & Lee, 2015; Maier & Branzei, 2014; Dougherty, 2008; Gersick, 1995). Additionally, highly structured conditions have been shown to increase creativity (Sagiv et al., 2010). We are currently investigating the effects of deadlines and varying degrees of structure, in terms of guidelines, on the creativity and motivation of students of electroacoustic music composition, performance, and aural perception at Concordia University (Montreal) and the University of New England (Sydney). Using anonymous questionnaires, our educational qualitative study collects the views of ca. 50 students regarding the effects of assignment deadlines and structured guidelines on the students’ creativity (defined as “the ability to generate ideas”) and motivation (defined as “the energy to get it done”); the relationship between motivation and creativity in this context; and the unique aspects of creativity within electroacoustic composition, performance, and aural training—such as “domain-relevant skills” (Amabile, 1985). The study also questions how students’ creativity, motivation, and the self-perceived quality of their produced works may be affected by teachers’ leniency towards deadlines and prescribed requirements. The collected data will be compiled, coded, and analyzed using grounded theory principles (Charmaz, 2014). We will report the findings, analysis, and proposed implications of this study in light of the aforementioned literature and other studies that investigate the effects of intrinsic and extrinsic motivation on creativity (Amabile, 1985; Byron, Khazanchi, & Nazarian, 2010).
•Amabile, T. M. (1985). Motivation and creativity: Effects of motivational orientation on creative writers. Journal of Personality and Social Psychology, 48(2), 393.
•Byron, K., Khazanchi, S., & Nazarian, D. (2010). The relationship between stressors and creativity: A meta-analysis examining competing theoretical models. Journal of Applied Psychology, 95(1), 201.
•Chae, S., Seo, Y., & Lee, K. C. (2015). Effects of task complexity on individual creativity through knowledge interaction: A comparison of temporary and permanent teams. Computers in Human Behavior, 42, 138-148.
•Charmaz, K. (2014). Constructing grounded theory. Thousand Oaks, CA: Sage.
•Dougherty, D. (2008). Bridging social constraint and social action to design organizations for innovation. Organization Studies, 29(3), 415-434.
•Gersick, C.G.C. (1995). Everything New Under the Gun. In: Ford, C.M., Gioia, D.A. (Eds.), Creative Action in Organizations: Ivory Tower Visions and Real World Voices. Sage, Thousand Oaks, CA.
•Maier, E. R., & Branzei, O. (2014). “On time and on budget”: Harnessing creativity in large scale projects. International Journal of Project Management, 32(7), 1123-1133.
•Sagiv, L., Arieli, S., Goldenberg, J., & Goldschmidt, A. (2010). Structure and freedom in creativity: The interplay between externally imposed structure and personal cognitive style. Journal of Organizational Behavior, 31(8), 1086-1110.
- Eldad Tsabary
Dr. Eldad Tsabary is the coordinator of electroacoustic studies at Concordia University in Montreal. He is founder and director of Concordia Laptop Orchestra (CLOrk) which specializes in collective improvisation and interdisciplinary collaborative performances in which students function as co-creators/co-researchers. CLOrk’s notable performances include a recent collaboration with singer Ariane Moffatt at Montreal’s Musée d'Art Contemporain and a performance at Akousma festival 2016. In the past decade, Eldad has also spearheaded research and development of a new sound-focused aural training method for electroacoustic musicians, which is inspired by perception studies and is based on a transformational, democratic educational model. Eldad received his doctorate in music education from Boston University. He is the current president of the Canadian Electroacoustic Community (CEC).
- Donna Hewitt
Dr. Donna Hewitt is an academic, vocalist, electronic music composer and instrument designer. Donna’s research has been primarily exploring mediatized performance environments and new ways of interfacing the voice with electronic media. She is the inventor of the eMic, a sensor enhanced microphone stand for electronic music performance and more recently has been creating wearable electronics for controlling both sound and lighting in performance. She is a founding member of Macrophonics, a mediatised performance collective. Her work has attracted funding from the Australia Council for the Arts, most recently with all female collective Lady Electronica. Donna has held academic positions at the Sydney Conservatorium of Music and Queensland University of Technology and is currently the Convenor of Music and Bachelor of Music Co-ordinator at the University of New England.
Communicating the past through electronic music remixing process
In my paper, I will explore some of the possibilities afforded by the compositional techniques of recomposition and remix. Based on an analysis of John Oswald’s edited version of Igor Stravinsky’s The Rite of Spring and of Oswald’s Dab, a remix of Michael Jackson’s song “Bad,” I will use electroacoustic techniques as well as employ compositional transformations related to the “remix” genre.
While the terms recomposition and remix are often used freely and even interchangeably, I see them as two distinct, but related creative activities. For me, recomposition refers to the practice of one composer creating a work based on a pre-existing musical composition. Often recomposition takes the form of working with music at the level of the note and creating variations on an existing theme, or, for example, free variations of existing compositions as in the Liszt piano fantasies. This is not a new concept and we can observe it often in the 20th Century. We see it in Charles Ives’ Concord Sonata, in Luciano Berio’s Sinfonia, in George Crumb’s Makrokosmos and also in Hans Zender’s Schubert’s Winterreise: A Composed Interpretation. In 1985, John Oswald coined the term Plunderphonics, giving a definition to music made by extracting audio segments from existing recordings and by modifying and recombining these segments to make a new composition. I should also note that another common term related to what I am doing here is the word mashup. A mashup involves combining elements of two or more pre-existing compositions to form the basis of a new composition. As you will see, I will go several steps beyond simple borrowing and recombining to create my composition.
Traditional acoustic analytical recomposition works directly with composition at the level of the note, varying and changing notes to make stylistic connections that we may have sensed but were unable to articulate. My goals in using remix formulations, on the other hand, are to transform the existing world, as represented in original recordings, into newly imagined worlds by modifying audio recordings.
The traditional analytical recomposition approach serves a more obviously analytical purpose; this type of recomposition often involves variations in melodic or harmonic structure or musical texture. These variations typically involve music making at the level of the note. Remixing, on the other hand, refers to sonic transformations applied to an existing musical composition. Because the remix is indigenous to music that employs audio recording technologies, the sonic transformations imposed on the existing work derive from the affordances provided by recording technology and, more recently, computer technology. Thus the elemental particle of the remix is the sound, that is, the audio recording. Each of these audio recordings, we should observe, is fundamentally different from the concept of a musical note. A musical note is separate from its timbre, its duration or its musical volume. The audio recording, on the other hand, is a complex organism containing a multitude of relationships involving pitch, duration, timbre and other musical attributes inextricably entwined. So instead of varying melodies and harmonies, the remix is apt to use techniques like crossfading, filtering, looping, time-stretching, granular processing, or in some cases, literally reversing the sound in time.
Indeed, one of the main differences between traditional acoustic recomposition and my remix composition is that my creative work arises from direct consideration of audio recordings of performances of the songs rather than from traditionally notated scores. Further, my remix does not adhere to the conventions of nineteenth-century Romantic styles, but extends musical possibilities into a stylistic realm that might be described as experimental electronic music, a mode of musical expression that leverages the potential of recent technologies to reshape existing sound in new and exciting ways.
Thus, my creative efforts on this project share musical concerns with traditional acoustic recomposition, in that we both begin with similar objects of study and work with these objects with similar questions in mind. However, our techniques and our results are distinct, and although we proceed in a like spirit, the ways we execute our transformational processes are fundamentally different.
Since the beginning of recorded sound in the late 19th century, technology has enabled people to restructure the traditional listening experience, and such alteration has become more common and accessible with advances in technology. The techniques for breaking down and reassembling recorded audio provide special opportunities and challenges for sound-based remix composition. I should also note here two very important attributes of electronic music. First, electronic music is predicated on principles of sonic transformation: one sound can become another sound. Second, electronic music allows not only for the transformation of sound, but also for the transformation of the way time is articulated and experienced. Time in electronic music can literally be beautifully stretched, extended or reversed, creating musical outcomes that are not remotely possible in the acoustic domain.
Composing an acousmatic electronic music remix from pre-existing audio recordings is a multi-step process. It involves 1) analyzing the original songs by conventional means, 2) analyzing the audio recordings of the performances of each song, 3) partitioning those audio recordings into edited audio segments, and finally, 4) sequencing, layering, and mixing the audio segments into a remade and reconceptualized composition. The remixing techniques applied in this process might involve filtering, time-stretching, crossfading, granular synthesis, analysis and resynthesis, and looping.
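Purely as an illustration of steps 3 and 4 of this process (not the author's actual tools or material), partitioning a recording and then transforming, sequencing and mixing the segments can be sketched in Python with NumPy; the stand-in signal, segment count, and transformation choices below are invented for the example:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def partition(audio, n_segments):
    """Step 3: partition a recording into equal edited segments."""
    return np.array_split(audio, n_segments)

def loop(segment, times):
    """A basic remix transformation: looping a segment."""
    return np.tile(segment, times)

def reverse(segment):
    """Another transformation: literal reversal of the sound in time."""
    return segment[::-1]

def crossfade(a, b, overlap):
    """Step 4 (sequencing): join two layers with a linear crossfade."""
    fade = np.linspace(1.0, 0.0, overlap)
    return np.concatenate([
        a[:-overlap],
        a[-overlap:] * fade + b[:overlap] * (1.0 - fade),
        b[overlap:],
    ])

# A stand-in "recording": one second of a 440 Hz tone.
t = np.linspace(0, 1, SR, endpoint=False)
recording = np.sin(2 * np.pi * 440 * t)

# Partition, transform, and sequence into a small remix.
segments = partition(recording, 4)
remix = crossfade(loop(segments[0], 2), reverse(segments[2]), overlap=1024)
```

The same skeleton extends naturally to the other techniques named above (filtering, time-stretching, granular resynthesis) by substituting different transformation functions.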
- Chi Wang
Chi Wang is a composer and performer. Chi enjoys making music and intermedia art that involve computer-human interaction. Her current research and composition interests include data-driven instruments and sound design. Chi’s compositions have been performed internationally, including at the International Computer Music Conference (2015, 2016), Musicacoustica in Beijing (2011, 2012, 2013, 2014, 2015, 2016), the Society for Electro-Acoustic Music in the United States (2015, 2017), the Kyma International Sound Symposium (2012, 2013, 2014, 2015, 2016), Future Music Oregon Concerts (2009, 2010, 2011, 2014, 2015, 2016, 2017), the I. Paderewski Conservatory of Music in Poland (2015), the International Confederation of Electro-Acoustic Music (2014), WOCMAT in Taiwan (2013), and the Center for Computer Research in Music and Acoustics at Stanford University (2010). Chi is also an active translator of books related to electronic music. She is the first translator of Electronic Music Interactive (simplified Chinese) and Kyma and the SOS Disco Club. The book Kyma Xitong Shiyong Jiqiao is published by Southwest China Normal University Press. Chi received her M.Mus. in Intermedia Music Technology from the University of Oregon and previously graduated with a BE in Electronic Engineering, focusing on architectural acoustics and psychoacoustics, from Ocean University of China. She is currently a D.M.A. candidate at the University of Oregon, teaching Digital Audio and Sound Design.
Integrating Interdisciplinary Work in
Contemporary Non-linear Real-Time Digital Arts Practice: Communication, Sequence, Frameworks, and Logistics
The ubiquity of digital tools affords electroacoustic music practitioners opportunities to collaborate with artists from other disciplines to make non-linear and real-time works. But like other digital arts practitioners entering this shared space, one has to rethink known frameworks and processes, and explore new ways to communicate.
Over 2010-2016 I led the ArtzElectro series at Waikato University, presenting works that combine different electronic media, and that combine electronic media with ‘performance’ through both audience and stage-based interaction. The event has presented over 120 new works, mainly from students. Over 2014-2016 particularly, works came largely from a third-year class drawing on degree majors from Screen and Media, Creative Technologies (including electroacoustic music), Computer Graphic Design, Maori Media, and Creative Practice (primarily dance). Class members participated in small groups to make outputs combining the skills of different majors in contemporary media art approaches, extending individuals technically and artistically. Working with many groups to develop pieces from conception to realization provides an opportunity to reflect on what makes a successful process, points of communication, and outcomes; this is useful generic knowledge as the non-linear, real-time and interdisciplinary area is increasingly explored by contemporary practitioners. The presentation covers a useful sequence of learning, some conceptual frameworks, and the logistics of implementation.
To begin, many class members, or digital arts practitioners generally, enter this arena from an edit-based approach to digital production and notions of authorship, from a passive view of the audience, from familiarity with discipline-specific tools and processes, from viewing the computer as a recording/playback device, and from a discipline-specific aesthetic. Addressing this in classes involved progressing from the known to the unknown in order to acquire knowledge and skill. In the first instance, a wide range of new media works was presented (where we are going), and non-linear coding, largely in Max7, was introduced that could integrate and amalgamate contributions from all practitioners (a means to communicate) yet allow drawing on known tools to provide assets to manipulate. The learning sequence then moved through developing aesthetic/technical/craft skill across disciplines, individually first. Three interdisciplinary practical works followed, built in small groups with members drawn from different disciplines, who made a) a generative work, to establish the idea of the computer as participant, b) a generative/interactive work with multiple external controllers and data flows, and c) finally a public work exploring generative/interactive and performance input (staged and/or audience-driven). Examples of final works can be found at
Various supporting texts have been trialed, with Candy and Edmonds (Eds., 2011), Interacting: Art Research and Creative Practitioners, being central. Useful artist exemplars can be found on sites like Cycling74’s Projects and The Creators Project. Practically, the broad gambit used is drawn from Schön’s (1983) concepts of frames, exemplars, and fundamental method and overarching theory, coupled with themes to be explored. A central idea examined is the changing relationship between audio and visual material in contemporary art: Battey and Fischman (2016), for example, give a historical account of mimesis and abstraction in sound and vision in western art, and cover recent affective and gestural considerations. More recent, and retrospectively echoing the ArtzElectro experience of building works, is a paper by Keller and Lazzarini (2017) on ecologically grounded creative practices in ubiquitous music. In it, they react against instrumentally oriented and individual understandings of music interaction, and by proxy confront wider issues in accounting for contemporary creative practice. Their conceptual model of making works integrates Human Agents and Material Resources, which combine in afforded Activities through Creative Support Tools.
To begin, knowledge of the logistics of implementing public works often differs depending on the discipline a practitioner comes from, and it is something that has to be collectively learned for new media artworks within groups. In bringing together art and craft, technical underpinning and ‘performance’, this process is best managed through multiple working drafts and semi-public mockups that receive wide feedback from both potential audience members and experts from the contributing sub-disciplines. Requiring the co-development of concept and implementation to meet iterative deadlines also necessitates building a small working model, with aspects of the assets that contribute to the outcome, as early as possible, and developing works over as long a period as practical to allow building, trial and reflection. This experience is also social, in building and managing group dynamics while coordinating tasks and developing skills, techniques, and content, often through independent rehearsals.
As expected, the most robust works are often produced by cohesive groups with aesthetically flexible views and adaptable membership, who share technical expertise but can also contribute discipline-specific talents in the context of distributed authorship, who have a keen sense of the environment of realization, and who launch or develop strong conceptual yet practical ideas that are doable within material and time constraints. One also sometimes discovers rare individuals, new digital creatives, who are multitalented across digital and performance media as well as technically fluent, embodying multimodal thinking in which aspects of disciplines are conceived and implemented simultaneously on the basis of a wide aesthetic understanding. Artistically, the best-received works have been those that engage quickly yet hold attention; that freely invite interaction, or the perception of it, while generating a diversity of outcomes; that take large visual gestures as inputs yet produce outputs that are not overtly obvious but often subtle, multidimensional and evolving; that integrate media and performative elements seamlessly but not in a 1:1 manner; and that embody, even remotely, a kinetic sense of human movement.
- Ian Whalley
Ian Whalley is Associate Professor of Music at the University of Waikato in New Zealand, and an internationally recognised author, researcher and composer in the fields of electroacoustic music, computer music, and sonic art. His works have been published by CUP and MIT Press and included in international events such as ICMC, MUSICACOUSTICA, TIMESPACE, VCH, and ACMA. He has received awards and grants from the British Council (UK), the NZ/Japan Exchange Programme (NZ/Japan), the Kunitachi Centre for Computer Music (Japan), ICMC2000 (Germany), a Meiji University Visiting Fellowship (Japan), Klangart '99 (Germany) and UNESCO (India). His current research includes networked music/sound, interactive systems, intelligent agent applications in non-linear music, generative systems, real-time graphic scoring, and data sonification. His research and invited workshops have been published internationally in arts/technology proceedings (ICMC, ISEA, EMS, NIME). Ian was Director at Large for ICMC from 2004 to 2005, and is on the editorial board of Organised Sound (CUP).
The Cultural Characteristics of Chinese Electronic Music Composition under the Oriental Context
——Take Two Electronic Music Pieces by Minjie LU as an Example
Electronic music technology and theory developed substantially during the late 20th century. Over the past 30 years, Chinese composers have not only studied the compositional ideas and skills of western music and electronic music, but have also managed to integrate their national traditional culture with western musical culture, composing many novel and outstanding electronic music works. This paper analyzes two electronic music pieces by the Chinese female composer Minjie LU, Flowing Water and Distortion and The Watching Tuvas, both of which have won prizes in international electronic music competitions. Based on an analysis of their cultural background, compositional technique, sound design methods, etc., the authors discuss the characteristics of electronic music composition under the Oriental Context.
Flowing Water and Distortion is a real-time interactive electronic music piece for Qin player and computer. The Qin is also called Gu Qin, in which “Gu” literally means ancient or old in Chinese. The Gu Qin, a seven-stringed zither, is China’s oldest and most historic plucked instrument, with a history of more than 3000 years. (see Fig. 1)
Fig. 1 Gu Qin
The inspiration for this piece comes from the launch of the U.S. spacecraft Voyager in 1977. A golden phonograph record was placed on board to introduce the music of the Earth to the rest of the universe, and the most famous Guqin piece, Flowing Water, was included on it.
Flowing Water and Distortion represents the communication between modern civilization and ancient civilization, grounded in an ancient musical instrument and new science. The composer used the special playing methods of the Gu Qin, and its characteristic melody and tone, to compose the piece. The Gu Qin stands for the ancient sound, while the electronic sound seems to respond on behalf of modern technology and civilization. Moreover, the famous ancient piece Flowing Water was composed about 2000 years ago to glorify the deep and warm friendship of human beings, and this electroacoustic piece builds on that point: its sound seems to advocate humankind’s friendship in searching for bosom friends, and the hoped-for response from intelligent life in outer space.
In Flowing Water and Distortion, a Max/MSP patch analyzes the sound of the Gu Qin in real time through a microphone, and triggers prepared samples via a MIDI pedal operated by the Gu Qin player. Special effects and transformed sounds are also produced by the Max/MSP patch. These represent the response of the intelligent creatures of the vast universe, while the Gu Qin player performs with different traditional techniques to express the Chinese people’s hope for peace and friendship.
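The patch itself is not reproduced in the paper. Purely as an illustrative sketch of the control flow described (a pedal press advancing through a bank of prepared "response" samples, with the live input gated before processing), one might write, in Python rather than Max/MSP, with all class names, file names and thresholds invented for the example:

```python
class InteractivePatch:
    """A minimal sketch of the interaction logic, not the actual patch."""

    def __init__(self, samples, threshold=0.1):
        self.samples = samples      # bank of prepared samples
        self.index = 0              # next sample to trigger
        self.threshold = threshold  # amplitude gate for live processing

    def on_pedal(self):
        """MIDI pedal press: trigger the next prepared sample, if any remain."""
        if self.index < len(self.samples):
            triggered = self.samples[self.index]
            self.index += 1
            return triggered
        return None

    def on_audio_block(self, block):
        """Microphone input: process the live Gu Qin signal only when it
        exceeds the gate threshold, otherwise pass it through untouched."""
        amplitude = max(abs(x) for x in block)
        return "process" if amplitude > self.threshold else "bypass"


patch = InteractivePatch(samples=["resp_1.wav", "resp_2.wav"])
first = patch.on_pedal()                            # first pedal press
decision = patch.on_audio_block([0.0, 0.3, -0.2])   # a loud input block
```

In the real patch the equivalent roles would be played by Max/MSP objects for MIDI input, sample playback, and amplitude following; the sketch only shows how pedal events and live analysis can drive two independent response paths.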
The Watching Tuvas is an acousmatic piece based on sound materials of the Tuva nationality. The Tuva, called the “Mysterious Tribe in the Clouds,” are one of the oldest nationalities in northwestern China. Hoomi is a special Mongolian vocal technique that produces multiple voices and has been passed down among the Tuvas. The singer controls his respiration to vibrate the larynx and make sound, then opens the vocal folds, widens the throat, lifts the velum, and controls the shape of the tongue and mouth, so that the oral cavity resonates with the larynx vibration, producing two different tones at the same time (see Fig. 2). An ancient Tuvan melody played on the shoor is used in the piece. The shoor is a traditional Mongolian musical instrument, made from a kind of local grass, which resembles a hollow pipe and is regarded as a living fossil among minority nationality instruments (see Fig. 3). When the player blows the pipe, he opens his mouth and vibrates the larynx, and the pipe resonates with it (see Fig. 4), so the shoor is also capable of producing multiple tones at the same time. However, it is dying out.
Fig. 2 Hoomi Singer Fig. 3 Shoor Fig. 4 Shoor Performer
This work uses the common characteristic of hoomi and shoor to design the sound. The composer integrated polyphonic thinking into the sonic design of the electroacoustic music: the original sounds of hoomi and shoor lead one melodic part, while the materials processed by different effects and plugins form another part. The two parts support and supplement each other. For example, the original hoomi sounds were processed with filters to obtain new sounds in different bands, or to separate the different voices and rearrange the various timbres; the composer then placed these materials at distinct positions on the timeline to form a special sonic “counterpoint” of materials with different timbres and shapes.
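The filter-and-place strategy described here, separating a source into frequency bands and placing the resulting layers at distinct timeline positions, can be sketched generically in Python with NumPy. This is an illustration only, not the composer's actual processing chain; the crude FFT-mask filter, the synthetic two-voice source, and the cutoff frequency are all invented for the example:

```python
import numpy as np

SR = 44100  # sample rate in Hz

def band_split(signal, cutoff_hz):
    """Split a mono signal into low and high bands with a crude FFT mask
    (a stand-in for the filtering described in the text)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1 / SR)
    low = spectrum.copy()
    low[freqs >= cutoff_hz] = 0
    high = spectrum - low  # the remaining high-band content
    return np.fft.irfft(low, len(signal)), np.fft.irfft(high, len(signal))

def place(timeline, layer, offset):
    """Place a processed layer at a distinct position on the timeline,
    mixing it with whatever is already there."""
    timeline[offset:offset + len(layer)] += layer
    return timeline

# A stand-in source: a mixture of two "voices" at 220 Hz and 1760 Hz.
t = np.linspace(0, 1, SR, endpoint=False)
source = np.sin(2 * np.pi * 220 * t) + 0.5 * np.sin(2 * np.pi * 1760 * t)

# Separate the voices, then offset the layers to form a simple counterpoint.
low, high = band_split(source, cutoff_hz=800)
timeline = np.zeros(2 * SR)
timeline = place(timeline, low, offset=0)         # first part
timeline = place(timeline, high, offset=SR // 2)  # counterpointing part
```

A production tool would use proper filters with smooth transition bands rather than a hard spectral mask, but the structure of the layering is the same.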
The culture of the Tuva is gradually fading under the encroachment of modern culture. Through this piece, the composer not only tries to lead the audience into the river of Tuvan traditional history, but also expresses a yearning for a quiet rural life. It also implies the national root complex that the Tuvas, as descendants of Mongolia, carry.
In this paper, the authors address several issues:
What are the main oriental factors that inspire the composer? The aesthetic viewpoint of the Oriental Context drives the composer to conceive the idea of a composition and to design and process its sounds. For Chinese composers, what is the impact of Chinese culture and national culture on electronic music composition? And how can composers find the best approach to combining traditional culture and modern technology, so as to carry traditional culture into the modern era and protect the cultures of minority nationalities?
- Yang Wanjun
Yang Wanjun (born 1977 in Yunnan, China) is an engineer, programmer, sound designer, researcher and electronic musician. He is an associate professor in the Electronic Music Department of the Sichuan Conservatory of Music. For the past 20 years he has lived in Chengdu, Sichuan Province, in southern China. His research and creative interests lie in acoustics and psychoacoustics, sound design, software development, new media art, and multimedia design. He has taught at the Sichuan Conservatory of Music for over 18 years.
He is the author of the first Chinese Csound book, “The Operating Manual of Csound,” and of the forthcoming first Chinese Pure Data book, “The Graphical Music Programming Technology and Application of Pure Data”; he is also co-author of the book “Theory of Electronic Music.” All his books are available in bookstores and on Amazon.
As a professor in the Electronic Music Department, he teaches several courses for undergraduate and graduate students, including “Acoustics and Psychoacoustics,” “Fundamentals of Programming,” “Sound Design in Csound and Pure Data,” “Open Source Music Software for Music Production,” “The Linux Operating System and Electronic Music,” “Plugin Design and Sound Design,” etc.
He is also a member of the Center of Electronic Music Composition and Research, SCCM. He was invited to attend the EMS 2011 conference in New York, and in 2012 he was invited to an electronic music exchange at the University of Oregon.
- Zhang Xiyue
Zhang Xiyue was born in Jiangxi, China. She received her bachelor’s degree in music education and her master’s degree in composition from the Sichuan Conservatory of Music. She is now a teacher in the Electronic Music Department of the Sichuan Conservatory of Music.
She has been invited to attend high-level academic activities including the “Shanghai New Music Week,” the “Wuhan Music Analysis Conference,” the “International Summer Course of Darmstadt,” etc. As a graduate student, she devoted herself to the study of contemporary music theory and traditional harmony, and wrote several papers based on her research, two of which were published. Now a teacher in the EMD, SCCM, she focuses her research on contemporary music, music theory and electronic music.
Daoism and Tibetan Buddhism in Chinese Electroacoustic Music:
Technology as a Poetic Trope
The first generation of Chinese composers of electroacoustic music, who came into prominence in the 1980s, largely received their principal training in Europe. The pioneering composer Zhang Xiaofu, for instance, attended the École Normale de Musique de Paris (ENMP), where he absorbed ideas and techniques of electroacoustic music originating in France in the 1940s and ‘50s. After completing their studies, the mission of these composers became the promotion of electronic music in China. With institutional support, Zhang Xiaofu and others have initiated numerous international music festivals and master classes in the past thirty years.
Although the technique of manipulating recorded sounds in the work of Zhang Xiaofu is grounded in the French school of musique concrète, other aesthetic principles guiding this production are distinctively Chinese. Zhang has emphasized the importance of creating music in which electronic sounds and techniques carry a clear poetic meaning. For him, asserting a rhetoric of immediate accessibility, if a musical work proves incomprehensible to listeners, it is to be judged meaningless. Zhang’s position mirrors the prevalent use of programmatic titles in many Chinese electronic works.
Electronic sounds are reimagined in this repertory as poetic tropes that express aspects of Chinese philosophical doctrines. For instance, the looping techniques of Zhang’s Ruo Ri Lang symbolize a circular conception of life rooted in Tibetan Buddhism. The manipulation of reverberation in his Lian pu embodies a Taoist concept also dominant in traditional Peking opera. My argument will be that it is not the integration of two distinct musical languages that sets Chinese electronic music apart from Western practices, as a kind of hybrid or locally-inflected dialect; instead, Chinese composers have achieved a distinct identity by refashioning inherited techniques with symbolic meaning.
- Yinuo Yang
Yang Yinuo is a graduate student studying musicology at the Soochow University School of Music, where her advisor is Dr. Yen-Ling Liu. She is interested in the history and aesthetics of Chinese electronic music, Chinese traditional aesthetic thought, and transcultural phenomena relating the West and East. She also studies electronic composition at Soochow University and presented research on Chinese electronic music at the IMS this year and AAWM last year.
Musique concrète and dance,
Pierre Henry’s collaboration with Maurice Béjart
On the 5th of October 1948, the French audience had the opportunity to listen to musique concrète for the first time. The Paris-Inter radio broadcast Pierre Schaeffer’s “Concert de bruits” (Concert of Noises), which consisted of five different musique concrète pieces. This new compositional approach – music composed and played exclusively on recordings (phonograph discs in this case) – would mark an important turn in the history of music, as it introduced a pioneering aspect in performance: no human being was present on stage during the entire concert. In this situation, where the sounds came directly out of the loudspeakers without any performer on stage, how could one hold the attention of listeners in the concert hall? Yet the absence of one of the essential components of a musical event seemed to give this type of music certain advantages. For instance, other visual elements could now occupy the place left empty by performing musicians.
From the beginning of his career as a main collaborator of Pierre Schaeffer, Pierre Henry (1927- ) explored his musique concrète while maintaining a strong relationship with other art fields. From Orphée 51 (or Toute la lyre, 1951), a lyrical pantomime written with Schaeffer, to his sound design for Nicolas Schöffer’s “Tour spatiodynamique de Saint-Cloud” tower (1955), the first ever interactive multimedia creation, Pierre Henry’s work in the 1950s contributed to the development of electroacoustic music as a kind of music that could enhance visual performances or material elements.
Among Pierre Henry’s collaborators in those days was, in particular, Maurice Béjart (1927-2007), a French ballet choreographer born in the same year as the composer, who had just made his debut as a dancer-choreographer. One of the first ballets they worked on together was Symphonie pour un homme seul, initially an early masterpiece of musique concrète composed by Schaeffer and Henry in 1950, which evolved into a distinctive work after it was re-conceived with Béjart’s dance in 1955. After this successful adaptation, Pierre Henry continued his collaboration with Maurice Béjart. While Béjart choreographed a series of ballets on pre-existing pieces by Henry, the latter composed a piece especially for Béjart in 1956, entitled Haut Voltage, one of his key works according to the composer. In his text “Mes seize années-clés” (My sixteen key years), Pierre Henry mentions this piece immediately after their first collaboration on Symphonie pour un homme seul. It is the first work in which he sought a fusion of acoustic and electronic sounds. From this key work in 1956 to La Reine verte in 1963, most of Henry’s works were composed for Béjart’s choreographies, including Orphée-ballet (1958), Le Voyage (1962) and Variations pour une porte et un soupir (1963).
But why is Maurice Béjart so important in the musical creation of Pierre Henry? What role did the visual arts, and dance in particular, play, and to what extent did the choreographer influence the composer’s work? This study will analyze the collaborations of these two artists by examining the circumstances and processes of their creation. It will also offer an analysis of the works of Pierre Henry in the years 1956-1963, a period that corresponds to his transition from the GRMC to his private studio.
- Reiko Yoshida
Born in Fukui, Japan, Reiko Yoshida studied Musicology at the Tokyo University of the Arts. She completed her Master’s degree with a dissertation on the relationship between dance and music in twentieth-century ballet, focusing on the creations of George Balanchine and Maurice Béjart. After working at ALM Records/Kojima Recordings Inc. in Japan, she went to France to pursue her research on Béjart’s choreographies. She is currently a Ph.D. student in Musicology at the University of Paris-Sorbonne under the supervision of Professor Marc Battier. Her ongoing doctoral thesis explores the collaboration of Maurice Béjart with his contemporary composers.
The Xenophone, an electroacoustic representation of
intercultural communication trends on social media
Vast digital communication networks allow for the rapid and effective dissemination of ideas and concepts, which can significantly impact how people interact in a global context. This integrated relationship with technology has the potential to directly influence global trends in human communication, including possible interactions with musical thinking and creativity. The principle of causation can be applied as a form of reverse engineering to explore possible existing taxonomies of sound objects for analytical purposes and further our understanding of sound-based composition as a representation of human thought and activity. With that premise in mind, the aim of this research is to represent intercultural communication trends by sampling the global intercultural communication pulse at a given time and creating a virtual sonic experience that generates new musical content based on contemporary human thought and activity.
It is proposed in this paper that technology such as internet-based social networks can serve as an effective virtual dynamic controller for the purpose of composition. Since the field of music technology is in itself interdisciplinary, it is only natural to integrate various other forms of technology to develop new tangible instruments. The conceptual and aesthetic value of integrating elements from other disciplines can therefore significantly alter the traditional approaches composers take to use, integrate, and modify existing technologies in forging new musical ideas. Most often these approaches include synthesis, signal processing, and parametric and algorithmic control methods to generate musical/sonic material. A further goal of this project is to examine the nature of craft versus creativity, focusing on the compositional experience itself: in this case, a technology-driven exercise in which new musical works are generated by the new socio-cultural contexts of social media networks rather than by the music technology itself. As an immediate corollary, the question arises whether these new ideas stem from existing creative constructs or from new compositional methodologies influenced by external systems such as social media networks. The idea behind the Xenophone project is to integrate the ‘techné’ of technology-driven musical creativity in a broader sense, to reveal the mechanisms of creativity through technology. Is there not a connection between global technological evolution and human creativity?
The Xenophone Music Generator is used as a case study that takes advantage of readily accessible communications data pulled from social media networks to influence procedurally generated harmonies, melodies and sonic material. This is accomplished by sampling communication activity on social networking services such as Twitter, where users post and interact with short messages of up to 140 characters to communicate an idea or a thought. The generated music attempts to mirror the mood and intent of the sourced Twitter posts in real time, informing art through its interpretation of global discourse. The framework analyses the grammar and syntax of the Twitter posts, searching for embellishments such as capitalized adjectives or expletives to inform musical parameters. This creates more consonant or dissonant musical structures based on the syntactical/semiological analysis.
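The mapping described above, from syntactic "embellishments" (capitalized adjectives, expletives) to more consonant or dissonant musical structures, can be pictured with a minimal sketch. The scoring threshold, expletive lexicon and interval sets below are hypothetical illustrations rather than the actual Xenophone implementation, and live Twitter access is replaced by plain strings:

```python
import re

# Illustrative interval sets (semitones above a root): consonant vs dissonant.
CONSONANT = [0, 4, 7, 12]   # major triad plus octave
DISSONANT = [0, 1, 6, 11]   # minor 2nd, tritone, major 7th

EXPLETIVES = {"damn", "hell"}  # placeholder lexicon, an assumption

def intensity(post: str) -> float:
    """Score 0..1: the share of 'embellished' words (all-caps or expletive)."""
    words = re.findall(r"[A-Za-z']+", post)
    if not words:
        return 0.0
    emphatic = sum(1 for w in words
                   if (w.isupper() and len(w) > 1) or w.lower() in EXPLETIVES)
    return emphatic / len(words)

def pitch_set(post: str, root: int = 60) -> list[int]:
    """Choose consonant or dissonant intervals around a MIDI root note."""
    scale = DISSONANT if intensity(post) > 0.2 else CONSONANT
    return [root + step for step in scale]

print(pitch_set("a calm sunny afternoon by the sea"))    # consonant set
print(pitch_set("this is ABSOLUTELY OUTRAGEOUS, damn"))  # dissonant set
```

A full system would substitute real posts retrieved from a hashtag search and map the resulting pitch sets onto synthesis and spatialisation parameters.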
Musical parameters such as instrumentation and the spectral qualities of sounds will also be influenced by the regions linked to the discussion, thereby providing geo-cultural variation following the evolution of online discussions. Users of differing backgrounds and political associations discussing a certain topic would therefore influence the system to produce a musical mood based on their opinions of that topic. In essence, this will provide a new platform for the musical representation of meaning that can be used in conjunction with existing spectro-morphological analysis methods to link compositional musical gestures with human behaviors and thoughts. Most importantly, the musical content emerging from the data sonification itself may provide some leads towards a better understanding of electroacoustic music analysis and spectro-morphological patterns of construction within larger forms of musical expression.
This prototype builds upon the idea of using spectro-morphological analysis vocabulary for the choice of sound material and creation in compositional activity (Blackburn 2009). The proposed system also reorients the concept of ‘searched objects’ (Climent 2008) for re-composition/re-instrumentation purposes within the realm of ‘searched meaning’ and symbolic association through sound, while integrating the unpredictable and perhaps reactive nature of network media interaction. The main advantage of the system as a compositional tool is that it does not depend on a fixed-medium compositional paradigm; as the information database evolves over time, it will develop into a more innovative platform for generating new compositional material based on current communication trends in social media, with further applications in other fields such as real-time interaction/sonification for online gaming platforms.
The data shared by these media interactions can be recycled to generate a visual feed representing the Xenophone analytics. The resulting assets can be expanded to formulate a complete audiovisual representation of global discussion fit for stage production or live installations, as the system is intended to work autonomously after a single hashtag search. The resulting symbolic expression provides a unique view of cultural trends with the added complexities of multi-parametric musical interpretation. Twitter’s access to vast archives of digitized human interaction can provide the grounds for in-depth artistic representation of the global community. The Xenophone framework therefore intends to simplify the monolithic bank of user data integral to the function of modern social media, and to replicate the simplified data in the form of a schematized musical mood profile. While its function here is linked specifically to Twitter, the concept behind its construction is fluid and adaptable to other sources of data and human communication, and it leads toward a richer and wider capacity for interactive communication and an expansion of the musical realm of human expression. This experimental approach may provide insight into the foundations of creative activity through analytical strategies that enrich our knowledge of the musical phenomenon, its transformation and dissemination.
- Ivan Zavada
Ivan Zavada is a composer, multimedia programmer and designer who works in computer music and electronic music theory at the Sydney Conservatorium of Music. His research focus is the interactive relationship between image and sound within the realm of digital music. Zavada creates innovative multi-sensorial events that incorporate sophisticated audiovisual techniques to express artistic individuality in the digital era. His work Chronotope exemplifies the vast creative potential available through new mediums of artistic expression; it was premiered at the Galileo Galilei Planetarium in Buenos Aires during the Understanding Visual Music Symposium and at Fulldome festivals in Germany and Brazil. Ivan Zavada’s work questions the conceptual nature of music by examining the relationship between sound and visual elements of abstraction. His visual music works were recently featured at festivals and international symposia in Australia, Argentina, Brazil, Canada, France, Germany, Greece, Macedonia and the USA. Ivan Zavada was born in Montreal, Canada, and started his musical career as a violinist.
- Dale Keaveny
Dale Keaveny is an active composer and graphic artist based in Wollongong, currently studying at the Sydney Conservatorium of Music. Dale attended the University of Wollongong in 2013 to study Journalism, building networks throughout the film and production community in the Illawarra region. His first professional work included three orchestral compositions commissioned by Nepean Mining for advertising at the Asia Pacific International Mining Exhibition (AIMEX) in 2013. This was followed by commissioned advertisements for the Warilla-based sporting goods manufacturer Spartan Sport, directed by Daniel Cartwright and starring cricket legend Chris Gayle.
Dale is currently studying composition at the Sydney Conservatorium and has collaborated with artists at both the University of Sydney and the University of Technology Sydney, building a body of work specialised in film and media composition. He has contributed to a number of exhibitions and concerts featuring string based works and electroacoustic music. Presently, Dale focuses on cultivating the connective tissue between multimedia applications and the artistic medium, designing the means through which artists can create new and exciting forms of sonic and visual expression.
Assisting the Development of the Field of
Electroacoustic Music Studies in China
The China Electroacoustic Resource Survey (abbreviated as CHEARS) not only builds up a relevant electroacoustic music classification system but also, more generally, provides a peer-reviewed framework for “Assisting the Development of the Field of Electroacoustic Music Studies in China”. It treats the terminology of electroacoustic music in the research sector as a core issue in order to demonstrate the need for CHEARS in China. In addition, it considers deeply and widely the cultural and historical interactions that have influenced the terminology of music and technology in China, as well as the ways in which Chinese elements have influenced music and technology in the rest of the world. The proposed talk focuses on the significant challenges that one encounters when creating a new terminology in the Chinese language.
In CHEARS, translation is never a straightforward job; rather, it is a creative process, as it involves translating terminology between different language families: from English, with its Roman alphabet, to Chinese, whose characters cannot be spelt directly from their pronunciation but are instead encoded as written characters that resemble pictures. How many terms exist in electroacoustic music? It seems an endless job to translate all of them for this catalogue, and it is unavoidable that arbitrary mistakes are made due to the novelty of the terms. Therefore, a new method named the CHEARS Dictionary has been introduced, which separates information (terms) from function (with respect to grammar) in a language, especially for English to Chinese, before a translator can commence. In short, there exists a loop involving the CHEARS Dictionary, Machine Mapping (a linear system that retains words in the CHEARS Dictionary and places them in context for potential translation) and human editing, in any order. In the end, all terminology goes through Qualitative Analysis (focusing on terms, ignoring other elements of language) and Quantitative Analysis (user feedback from a questionnaire) to determine which terms are more important than others. This is a very important step in the evolution of CHEARS. Each term is then classified between a scientific taxonomy based on attributes or properties and a folk taxonomy, or folksonomy, based on functions and habits. The most creative part of the classification is that CHEARS potentially allows each user, regardless of age and speciality, to place any number of terms into the hierarchical (tree-view) structure, since everyone is capable of categorising things in a certain way as second nature. This will show the depth of users’ understanding of the field of electroacoustic music and generate a significant amount of data for analysis (e.g. data mining) in the next phase.
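The user-editable hierarchical (tree-view) structure described above can be pictured with a minimal sketch, assuming a simple nested-dictionary representation; the category paths and the term “granulation” are illustrative assumptions, not the actual CHEARS database schema:

```python
# A tiny sketch of a user-editable term tree: each user files terms under
# a category path of their own choosing, yielding folksonomy data.

def insert_term(tree: dict, path: list[str], term: str) -> None:
    """Place a term under a user-chosen path of nested categories."""
    node = tree
    for category in path:
        node = node.setdefault(category, {})
    node.setdefault("_terms", []).append(term)

taxonomy: dict = {}
# One user may file a term by production technique...
insert_term(taxonomy, ["Sound Production", "Processing"], "granulation")
# ...while another files the same term by listening habit (folksonomy).
insert_term(taxonomy, ["Texture", "Grainy Sounds"], "granulation")

print(taxonomy["Sound Production"]["Processing"]["_terms"])  # ['granulation']
```

Aggregated across many users, placements of this kind are the raw material that the later data-mining phase would analyse.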
Regarding cultural influences on terminology, the earliest authoritative book on Chinese characters, Explaining Graphs and Analyzing Characters, is helpful, as it offers six ways of creating a Chinese character, such as simple indicatives, pictograms, phono-semantic compounds, compound indicatives, and borrowed or derived characters. Here is a perfect example in the field of electroacoustic music: nearly 3,000 years ago, the Chinese character “闻” (Wén) was created as “an ear in the door, listening to the sound from the outside”, which is exactly the same concept as the Pythagorean term “acousmatic”, concerning the “distance which separates sounds from their origin” (EARS: ElectroAcoustic Resource Site). “耳” (ear) is a pictogram; “门” (simplified Chinese for door) and “門” (the traditional character, more specific to the shape of a gate or door) exemplify the simple indicative. The combination “an ear in the door” illustrates a compound indicative. What if we put a person in the door, or a piece of wood in the door, or perhaps viewed the sun in the door?
Besides the methods from Explaining Graphs and Analyzing Characters, there is another way for intuition to contribute at the lower end of this translation scale: common sense can be employed without much logic, as long as a proper meaning and rhyme exist. For example, CHEARS, The China Electroacoustic Resource Survey, is transliterated into Chinese as Qi3 (启) Er3 (耳) Si1 (思), which means "Get Your Ears (耳) Enlightened (启) to Think (思)". This may be the ideal example of intuitive translation thus far in this research, but such translation can be extremely time-consuming. Furthermore, one runs the risk of a translation being easily forgotten if it is not widely accepted. In the end, it took an entire decade, from its establishment in 2006 until 2016, for CHEARS to receive a proper Chinese name.
In historical terms, terminology translation is rather difficult. At the beginning of the 20th century, China stood at the crossroads of ancient and modern times, when the Qing (Manchurian), the last dynasty in China, was overthrown. There followed a sort of democracy for some forty years, in a situation that was very unstable (civil war in China) and chaotic (World War II), until the People’s Republic of China (PRC) was established in 1949. During this period, Chinese characters were still in their traditional form, but their usage changed heavily and moved gradually away from ancient Chinese. Many new concepts (such as aesthetics, philosophy, democracy, politics and economy) and technological words (such as telephone) were taken from Japanese, since Japan had borrowed more than 2,000 Kanji from China around 1,000 years earlier and had creatively adapted and systematically integrated them. Interestingly, the idea of Simplified Chinese (used on the mainland nowadays) was even raised at that time, but it did not get a chance to be put into practice until the establishment of the PRC. After 1949, one of the two major forces in the civil war moved to Taiwan and kept using traditional Chinese; the other (the PRC) continued in mainland China with Simplified Chinese and faced the extremely controversial Cultural Revolution (still a taboo in mainland China today) for ten years. In the 1980s the door of China was opened again to the rest of the world.
In short, it is impossible to translate terms without taking the extremely complicated situation of the last century into account. Significant turmoil occurring three times in one century was too much for a culture. The consequence is that the Mandarin (Simplified Chinese) used on the mainland has changed dramatically compared with ancient Chinese, and even compared with the traditional Chinese currently used in Taiwan and Hong Kong. All in all, CHEARS aims to serve Greater China, including both Simplified and Traditional Chinese. Therefore, the case studies involve bilingual or trilingual speakers of Chinese (Mandarin and Cantonese), and even of Japanese and Uyghur, the latter an official language of the Xinjiang Uyghur Autonomous Region in China, written in an alphabetic script and belonging to the Turkic language family. In Xinjiang, most of the population is bilingual at least in Uyghur and Chinese, and some are trilingual, including English. For example, the Mandarin translation of “computer music” was seriously disputed around ten years ago, when The Computer Music Tutorial was translated and published, but the term was easily translated into Uyghur (pronounced compiyotir muzikisi) in the same way as in Japanese. The challenges mentioned in this paper will be presented so as to offer a means of preventing terminology from being negatively influenced by the effects of the turmoil of the last century, from being trapped in a strange loop of “Dry Translation” (offering a Chinese character without understanding the original texts), or, even worse, in an indefinite loop of “Dead End Translation” (making up words without any references at all in the destination language).
Finally, this methodology could not be realised without CHEARS.info, a prototype database specifically designed for electroacoustic music studies and presented by way of a dynamic website as its platform. The database and the website are relatively independent of each other; in other words, they can be continuously upgraded not only in Simplified but also in Traditional Chinese. It compensates for the “hanging house” (a house floating in the air without any ladder to reach it) of terminology research by allowing practical activities to be combined with its theoretical content. This methodology has been tested in several case studies. After the successful completion of this PhD, planned for 2018, the whole project will be ‘rolled out’ in subsequent work (possibly by a team of people, since it might be far too large for an individual). As CHEARS welcomes the tenth anniversary of its research, grateful for the scholarship from the China Scholarship Council (CSC) from 2014 to 2017, it is likely to be the deepest and most direct conversation yet between English, with its alphabet, and Chinese, with its characters, over terms such as Spectromorphology, Sound-based Music and Organised Sound.
- Ruibo Zhang
Zhang Ruibo (Mungo) is conducting his doctoral research (CHEARS) with Prof. Leigh Landy as a full-time PhD student at De Montfort University (DMU), Leicester, UK, under the State Scholarship Fund of the China Scholarship Council (CSC). He also teaches electroacoustic music composition and theory at the Shenyang Conservatory of Music, China. He earned his master’s degree under Prof. Zhang Xiaofu and Kenneth Fields at the China Central Conservatory of Music (CCOM). He was one of the translators of the Chinese versions of The Computer Music Tutorial (Curtis Roads) and The Study of Orchestration (Samuel Adler).
His work New Ambush on All Sides received an award at the Beijing MUSICACOUSTICA festival in 2005 and was performed at the Audio Art Festival 2007, Krakow, Poland; the Synthèse Festival 2008, Bourges, France; and the Salle Olivier Messiaen, Radio France, Paris. In March 2010, his new audio-visual piece Birth received its world premiere in the concert “Central Conservatory of Music, Beijing – Chinese Electroacoustic Music Center” within The MTI 10th-Birthday Series at De Montfort University, Leicester, UK.
His research, CHEARS (China ElectroAcoustic Resources Survey), was selected for the EMS07 conference, and he presented it at De Montfort University, Leicester, UK. Since then he has continued the research as CHEARS.info, presenting it at the EMS08, EMS10 and EMS11 conferences in Paris, Shanghai and New York respectively.
As Foreign Liaison Secretary, he has worked for the MUSICACOUSTICA-Beijing Festival since 2005, as well as for the EMS (Electroacoustic Music Studies Network) conferences of 2006 and 2010 in Beijing and Shanghai respectively.
The new art age of spatial control
Historical approaches and new ideas in using sound diffusion
to compose electro-acoustic music
Spatial performance and composers’ control of spatial components are key elements in the transformation and delivery of electro-acoustic music. Sound diffusion, also known as sound spatialisation, refers to electro-acoustic music composers composing and performing with sound systems such as loudspeakers and mixers, enhancing the spatial components of an electro-acoustic composition by electronically delivering musical gestures, phrases, or single sounds to different loudspeaker locations surrounding the audience space.
Sound diffusion has played a significant role in certain parts of the electro-acoustic music world for more than 60 years (tape music, computer music and so on) and has attracted the attention of more and more electro-acoustic composers. This article will discuss some historical approaches and new ideas in using sound diffusion to compose and perform electro-acoustic music.
- Shijia Zhu
Associate Professor of electro-acoustic music composition at the Central Conservatory of Music, jury member of the Electro-Acoustic Music Competition of MUSICACOUSTICA-BEIJING, and member of the board of directors of EMAC (Electroacoustic Music Association of China). Born in 1978, he completed his B.M. in composition and his M.M. and PhD in electro-acoustic composition at the Central Conservatory of Music, Beijing. He received the Second Prize in the 1st Musicacoustica Composition Competition (2004) and the Second Prize in the 1st Musicacoustica Thesis Competition (2006). His works include compositions for orchestra, chamber ensembles and solo instruments, as well as electronic music compositions and music for various events and visual images. His works have been broadcast by Radio France and played at many festivals and concerts in Japan, Australia, Germany, the United States, France, etc.
 Cf. Taylor, Gregory, “An interview with David Wessel”,
https://cycling74.com/2005/09/13/an-interview-with-david-wessel/#.WN0JdY7fOHo (30.3.2017).
 Chabot, Xavier, Roger Dannenberg, and Georges Bloch, “A Workstation in Live Performance: Composed Improvisation”, in: ICMA (ed.), International Computer Music Conference Proceedings (ICMC), The Hague, 1986, p. 57.
 Chris Watson, interview, 2013, http://thequietus.com/articles/11222-chris-watson-interview-sound-recording-cabaret-voltaire
 Adrian Moore, Sonic Art Recipes, pp. 65-66.
 Cathy Lane and Angus Carlyle, In the Field: The Art of Field Recording, interview with Antye Greie, p. 43.
 “The Three Paths: Cultural retention in Chinese electroacoustic music”. In: Simon Emmerson, ed. Routledge Research Companion to Electronic Music (provisional title) – expected publication date, 2018.
 Still more participants will conduct the experiment in the forthcoming months.
 This paper is an outcome of the project Interactive Music Research (Project No. 16YJC760040), sponsored by the Ministry of Education Humanities and Social Sciences Youth Fund (2016-2019).
 Video link: https://drive.google.com/open?id=0ByrrWIC2kMFraGR0VHRzRFFmR2Ml
 John Richards, electronic instrument builder and improvising musician, works at the Music, Technology and Innovation Research Centre at De Montfort University.
 [The “audio-visual relationship” as described in this paper refers to the audio-visual contract proposed by Chion and incorporates the further categorisations of isomorphism and concomitance introduced by Coulter]
 See Kazutomo Tanogashira, “1960-nendai no Takemitsu Tôru no Eiga-Ongaku kara: Chinmoku tono Kankei o megutte [The Film Music of Tôru Takemitsu in the 1960s: From the Viewpoint of Silence],” Ôsaka Geijutsu Daigaku Kiyou, Geijutsu 34 (2011–12): 37.
 Author unknown, PROSPECTUS of the AUDIO-VISUAL RESEARCH FOUNDATION, dated February 1959, 2. The prospectus is in typescript and archived as “Vortex: Henry Jacobs” in the file Sôgetsu Âto Sentâ: Gaikokusakka Shôheishiryô (Shorui, Tegami) [Sôgetsu Art Center: Foreign Artists Invitation Materials (documents and letters)] at the Sôgetsu Art Center, Sôgetsu Foundation, Tokyo.
 Ingrid Fritsch, “Zur Idee der Weltmusik,” Die Musikforschung 34 (1981): 259: “das gleichzeitige unabhängige Nebeneinander von Musik verschiedener Kulturen betont.”
 Author unknown, Vortex 4, concert program at Morrison Planetarium, San Francisco, 12, 13, 19, and 20 May 1958. The sound source of Static Relief used in the concert seems to be the vinyl record Universal Recording PBU1, Tokyo, 25cm LP, only 300 copies of which were sold. It also includes Takemitsu’s Eurydice – La Mort and Vocalism A-I. See Hugh Davies, Répertoire International des Musiques Electroacoustiques: International Electronic Music Catalog, (Cambridge, MA: MIT Press, 1968), 267.
 Author unknown, VORTEX 5, concert program at Morrison Planetarium, San Francisco, 12, 13, 19, 20, 26, 27 January 1959.
 See Peter Burt, The Music of Tôru Takemitsu (Cambridge: Cambridge University Press, 2001), 43.
 Christopher Balme, Einführung in die Theaterwissenschaft (Berlin: Erich Schmidt Verlag, 2008), 164: “der Einfluss des Radios auf das Theater vor allem im Bereich der Tongestaltung.”
 Ibid.: “Das Mischen von komplexen Tonspuren auf Tonbändern wurde hauptsächlich in den Hörfunkstudios entwickelt, fand aber sehr schnell Eingang in die Inszenierungspraxis des Theaters und des Films. Auch die Theaterdramaturgie blieb von den technischen und ästhetischen Entwicklungen im Hörfunk nicht unbeeinflusst. . . . Es besteht kein Zweifel, dass moderne Dramaturgie eine Tendenz zum Episodenhaften, zur Montage, zu schnellen Szenenwechseln usw. aufweist.”
 This radio play was written by the writer Yasushi Inoue (1907-1991).
 The second commercial radio broadcasting system in Japan. Since 1958 it has been Mainichi Hôsô [Mainichi Broadcasting System (MBS)].
 Tôru Takemitsu, “Watashi no Genjitsu,” The Ongaku-Geijutsu (1959): 36.
 Understanding the Art of Sound Organization (MIT Press) and La musique des sons/The Music of Sounds (Sorbonne: MINT).
 Mexican writer of the twentieth century.
 “The Sound in Rulfo, that noise”.
 “Zeami’s mark in japanese art: the flower and the nothingness”.
 Both documents held at the Paul Sacher Stiftung, Basle (Sammlung György Ligeti).
 Bruno Bossis, “Mortuos Plango, Vivos Voco” de Jonathan Harvey ou le miroir de la spiritualité, Musurgia, 11/1-2 (2004), pp. 119-144.
 Paul Sacher Stiftung, Basle (Sammlung Jonathan Harvey).
 This paper is an outcome of the project Multimedia Convergence Composition of Electronic Music in New Media Context (Project No. xnyy2016008), sponsored by the Research Center of Southwestern Music of Sichuan Province (2016-2018).