University of Huddersfield, UK
Roberto Gerhard was a pioneer of electronic music in England, creating a number of substantial concert, theatre and radio works from as early as 1954. Gerhard’s electronic music is one of the richest repositories for understanding the development of the composer’s late compositional technique. Apart from the Symphony no.3, ‘Collages’, none of Gerhard’s electronic music is published. This paper will discuss aspects of Gerhard’s electronic music, focusing on Audiomobiles (1958-59) and Sculptures (1963).
Although Gerhard writes in Concrete Music and Electronic Sound Composition that he approached ‘the electronic medium strictly as a sideline’, the importance of this work and its impact on his instrumental composition has thus far received scant academic interest. Gerhard himself maintained that working in the electronic medium had resulted in a,
"number of far-reaching morphological changes in the manner of organizing sound and it seems to me that these changes are bound to affect methods of composition in the traditional field of instrumental composition as well." 
Gerhard’s approach to electronic music traversed the aesthetic paradigms that polarized early musique concrète and Elektronische Musik, often using instrumental, concrete and electronic sound materials. Working very much on his own (the BBC Radiophonic Workshop was not opened until 1958, some four years after Gerhard had started working in the medium), he was critical of the dogmatic approach of his European contemporaries, writing that,
"most of us had already noticed for some time that, whether German, Italian, Dutch or Belgian, electronic music sounds curiously alike in its timbral aspect. If the possibilities were really unlimited, one couldn’t help feeling that these composers were strangely coincident and repetitive in the use they made of them" 
and that the sine tone has a ‘rigid, cold, dead-signal quality. It is utterly unsuited to convey anything warm, tender, vivid, alive in human experience’. Gerhard was always more interested in the transformation of acoustic source materials, stating that ‘the microphone captures the living spark of the natural acoustic source’. Gerhard was, however, more circumspect than either Edgard Varèse or John Cage in his use of such acoustic sources. In his unpublished notebook from 1957 Gerhard writes that he considers that ‘the term “musique concrète” is ridiculous’, and later, in 1959, he wrote that,
"in principle, anything that comes from an acoustic source is possible material for musique concrète. This, of course, throws the gates wide open – too wide, perhaps – to material of all sorts, musical and not so musical. The French themselves, for instance, are not above using pots and pans for their exercices aux casseroles as they describe them."
Gerhard’s approach to electronic music, with its emphasis on the abstract ‘musical’ quality of concrete sounds rather than their associative meaning, and the sampling and transformation of his own instrumental compositions, is akin to the work of Iannis Xenakis and Bruno Maderna – two composers for whom electronic music and its techniques were to play a central part in informing their compositional aesthetic. Gerhard’s use of concrete, instrumental and electronic sound sources in Audiomobile II DNA (1963) has a kinship in approach with Maderna’s Le Rire (1962), which incorporates the sounds of voices, footsteps in rain, white noise and sine-tone generators, as well as transformed timpani, flute and piccolo.
Whilst Schaeffer, Stockhausen and their respective colleagues at the GRM and WDR studios propagated concert electronic music and wrote significant treatises on their work and the new medium, Gerhard was a more practical composer. Unfunded by a major radio studio, Gerhard carried out his experiments in the public glare, initially through composing incidental music.
One of the disadvantages of not working permanently in a major radio or state-funded studio was that there was no archival administrative structure to preserve Gerhard’s electronic works. Apart from the electronic component of the Symphony no.3, ‘Collages’, neither of the publishers of Gerhard’s instrumental music holds copies of his electronic works, or of incidental works incorporating electronics. The major repository of Gerhard’s electronic music is the archive held in the Cambridge University Library. This archive is not, however, complete. Four boxes, containing an undisclosed number of tapes, were borrowed by David Drew, a close colleague of Gerhard’s, in 1990 from Dr Rosemary Summers, the executor of the Gerhard estate.
3. GERHARD’S ELECTRONIC MUSIC
Hugh Davies, in his 1981 Tempo article on Gerhard’s electronic music, wrote that,
"Gerhard was not only the first important British composer to adopt electronic music techniques; it seems probable that he was, by a few months, the creator of the first British score to involve tape" .
Gerhard’s pioneering achievements can be put in a broader, less localized, perspective. The first musique concrète work, the Étude aux chemins de fer, was produced by Pierre Schaeffer in 1948 at the Club d’Essai, RTF (later INA-GRM). In 1950 Schaeffer and his then assistant Pierre Henry produced their first substantial work in the genre: the collaborative Symphonie pour un homme seul. The NWDR studio opened in 1953, where Stockhausen produced his first experiments with Elektronische Musik, the Studie I & II (1953 and 1954). The first acknowledged work that combined instruments and electronic sounds was Maderna’s Musica su due dimensioni produced in Bonn, in 1952 for flute, cymbal and electronic tape. One of the most famous early works incorporating electronics was Varèse’s Déserts (1954) for ensemble and tape. Varèse’s work alternates rather than integrates the instruments and electronics, having three tape ‘interpolations’. It was in the same year, 1954, that Gerhard completed his first ensemble and tape work, the incidental music for Bridget Boland’s play, The Prisoner.
Although Gerhard wrote that he was primarily interested in producing electronic music for ‘applied works... to works of radio and television, for the stage and screen’, completing twelve substantial scores for ensemble or orchestra and tape between 1954 and 1964 for BBC Radio productions or for the theatre, he also produced a number of works with or for electronics not intended as incidental music. These include Audiomobiles I-IV (1958-59), the second of which became the soundtrack for Hans Boye and Anand Sarabhai’s film DNA in Reflection (1963); Lament for the Death of a Bullfighter, for speaker and tape (1959); Symphony no.3, ‘Collages’ for orchestra and tape (1960); Sculptures I-V (1963) and the substantial, though unfinished, Vox Humana. (The Ten Pieces for tape are extracts from Audiomobile II DNA. They are listed in the Bowen catalogue as composed in 1961; however, they are extracts from a work completed in 1963 and released on Electronic Music, by Roberto Gerhard (Southern Library of Recorded Music, MQ 760) in 1964.)
4. AUDIOMOBILES & SCULPTURE(S)
If we examine the existing literature and sources referring to Gerhard’s Audiomobiles I-IV and Sculptures I-V, we find a certain amount of confusion relating to both the dates of composition and the number of completed works in each cycle. In ‘Roberto Gerhard and his Music’, Joaquim Homs refers to a visit he and his wife made to England in late 1959 to attend the premiere of Gerhard’s Symphony no.2. Homs writes that,
"we managed in addition to attend two film documentaries with concrete music by Gerhard: Four Audiomobiles (the second one about DNA being especially interesting)".
This reminiscence poses a number of questions and is potentially misleading in several ways. The second documentary may have been All Aboard (1958), an animated film, or Your Skin, a Unilever documentary for which Gerhard provided music. The second and potentially more problematic issue is that the film for Audiomobile II DNA was not completed until 1963. Thirdly, Homs discusses the Audiomobiles as if they were, at this stage, already an existing series of works.
Whilst all extant sources list the Audiomobiles I-IV as completed in 1958-59, the references to Sculptures I-V are more varied. In the catalogue of works at the back of Homs’ book there is a reference to Sculptures I-IV for tape (1963), but no reference to the works in the main body of the book. Online resources list Sculptures I-V (1963). However, in the catalogue of works listed as Appendix II in ‘Gerhard on Music’ the reference is to ‘Sculpture I (1963): Electronic composition based on sound from a small-scale model of sculpture of brass rods by John Youngman’.
Hugh Davies writes that Gerhard’s list of works printed in the programme accompanying the London Sinfonietta’s Schoenberg/ Gerhard Series in 1973,
"gives dates and titles for some of the electronic works that conflict with the list assembled by Gerhard and the present author... four Audiomobiles are dated c.1958-9, and Sculptures I-V are listed as if all had been completed in 1963... Indeed there were other audiomobiles, including ‘a capriccio in the manner of Goya’, but they were ‘just a series of illustration-examples for a lecture’ given in 1959; Audiomobile 2 became the title of the concert version of the soundtrack for the DNA film (did it incorporate the second of the original audiomobiles, or was the original set considered as No.1?). No subsequent ones were mentioned by Gerhard in compiling the 1967 list." 
As no published versions of the Audiomobiles I-IV or Sculpture(s) I-V currently exist, the tape archive at the Cambridge University Library is the only resource available. At present this resource has not been fully catalogued. A number of tape boxes are empty, spools of tape are in bags, and some tapes are in boxes that are incorrectly labelled. The fact that some tapes and boxes have been reused only adds to the lack of clarity. It is interesting to note that references to Audiomobile II always refer to the work in conjunction with the 1963 film DNA in Reflection. Below is a list of references to the Audiomobiles and Sculptures from the current catalogue of tapes held at the Cambridge University Library:
01.045 Audiomobile No.2 DNA (empty box)
01.148 Roberto Gerhard Audiomobile 2 (DNA) (1963) 15” p.s. full track
01.077 Audiomobiles examples
01.231 Version II Audiomobile 3 ‘Sculpture’ starts 7.5 then 15ips stereo
01.235 Audiomobile I 15” Sculpture full track Roberto Gerhard
01.239 Audiomobile 3 ‘Sculpture’ (empty box)
Of the six references to the Audiomobiles, two are empty boxes; tape 01.048 has an erroneous tape in the box (an organ concert on one side and the end of a broadcast of Dvorak’s Symphony no.9 on the other); tape 01.077 is also an erroneous tape, containing a chamber work and various electronic sounds, including a partial recording of Stockhausen’s Gesang der Jünglinge; tape 01.231 contains what I believe to be Sculpture I (because of both the sound materials and the length of the composition); and 01.235 contains a recording of Audiomobile I.
There are also three references to Sculpture alone:
01.013 ‘low pitched Sculpture at end of reel’
01.015 ‘green [leader tape] for Sculpture’
01.016 ‘Sculpture last take’
Of these three references, tapes 01.015 and 01.016 are empty boxes. In the catalogue there is no mention of Audiomobile IV, and there are a number of confusing conflations of title in which Gerhard seemingly gives both Audiomobile I and III the subtitle Sculpture.
In his Tempo article on Gerhard’s electronic music, Hugh Davies writes that, following the Symphony no.3 ‘Collages’,
"Gerhard’s electronic music was once again largely background music. Only one short work was specifically composed for concert use: Sculpture I based on sounds produced by a small sculpture of brass rods made by John Youngman. Material for four further works with the same title was assembled (early 1967: ‘as yet unedited’) but like other projects appears never to have been completed..."
In private letters to Davies, Gerhard indicates that he has ‘...an accumulation of work in a state of near-readiness, I mean ready for com-po-si-tion, namely ca 25 to 30 7” reels of multilevel compounds classified as ‘good’’. One such example is tape 01.116, on the box of which Gerhard has written ‘very good bits of electronic music’, and which contains 24 minutes of highly developed, (almost) continuous electronic music derived from the Youngman sculpture. As none of Gerhard’s electronic concert works were commissioned, one scenario is that the pressure of earning a living as a composer meant that these works were never finished. With Gerhard’s work in the 1960s consisting predominantly of large commissions such as The Plague (1963-64), The Anger of Achilles (1963-64), the Concerto for Orchestra (1965), Epithalamium (1966), Symphony no.4 (1967), Leo (1969) and the unfinished Symphony no.5 (1969), there was little time to complete time-consuming works for tape that carried little financial reward.
From the evidence of the tapes in the Cambridge University Archive, the various catalogues and personal communications with Hugh Davies, it seems that only Sculpture I was ever completed. A more tentative hypothesis is that Gerhard initially intended to give the work the title Audiomobile 3 ‘Sculpture’, as he had for Audiomobile 2 ‘DNA’, but in processing the sounds of the Youngman sculpture realized their sonic potential for a more extended, self-contained series of works.
However, we still need to answer Hugh Davies’ question regarding Audiomobile 2: did it incorporate the second of the original Audiomobiles, or was the original set considered as No.1? And if the set of Sculptures was not completed, what of the Audiomobiles and their date of composition?
Hans Boye in his personal recollection of the making of DNA in Reflection writes that,
"we could understand from Roberto Gerhard's remarks, that the job had taken much - too much - of his time, and he wasn't happy about that, but since he had started on it he also wanted to complete it. After some further Sunday morning test-runs of film and soundtrack - sometimes arranged with short notice - Roberto Gerhard presented us, after a total of a couple of months, with a reel of quarter-inch tape (full track, 15 inches per second) which we then could get added to the film." 
This implies that, while the first Audiomobile is likely to have been completed in 1958-59 and the projected series planned, like the Sculptures, with extensive preparatory work, it was only in 1963, when Boye and Sarabhai approached Gerhard, that the second of the Audiomobiles was actually composed. Further evidence can be found in the fact that Audiomobile II DNA contains: i) electronic sound materials, most probably created in the BBC Radiophonic Workshop when Gerhard was working on the Lament for the Death of a Bullfighter (Gerhard did not have the facility to create electronic sounds in his private studio); ii) sound materials from preparatory sounds for the Symphony no.3 ‘Collages’ (including tape 01.004), made in 1959-60; iii) samples of the Youngman sculpture made in 1963. All of this evidence suggests that Gerhard used materials that he already had to hand as his starting point for Audiomobile II. This hypothesis is further strengthened by Gerhard’s own programme note for the work (here paraphrased by Hans Boye) when it was presented at the National Film Theatre, London,
"For the catalogue Roberto Gerhard explained that he intentionally compiled the soundtrack from "layer upon layer of sounds from his library of recorded sounds in such a way that it was in opposition to the precise description of the film supplied by the film makers". And he named it again "an aleatory soundtrack" meaning that the sounds were picked randomly from his library." 
As is evident from the final composition, Audiomobile II DNA may contain a disparate collection of sounds, but they are brought together in a tightly structured, dynamic and vital work. One reason that Gerhard may have termed Audiomobile II DNA an aleatory soundtrack is not because the sounds were picked randomly but because of its method of construction. Gerhard considered the sound-montage ‘something of a game; something like a jigsaw puzzle with pieces upside-down or the wrong way around, bumping into one another and thus emphasizing their isolation, rather than giving them a common purpose which would lift them onto a plane of poetic imagery’. Gerhard was not the type of personality to consider any composition a ‘game’. What we can infer from this statement is the intuitive freedom that working in the electronic medium gave Gerhard – an immediate tactility of working with, and transforming, sound. Here a further comparison with Maderna may be drawn. About electronic music, Maderna once said, ‘we no longer listen in linear time - our consciousness casts various projections of time that can no longer be represented with the logic of one dimension’. Working with electronic music made Maderna trust his compositional intuition. The influence of electronic music on Maderna’s instrumental composition can be found in works such as the Serenata per un satellite. Gerhard himself wrote that ‘the way time is felt in electronic music differs entirely from the way time is experienced in traditional music.’ Gerhard was adamant that there is a fundamental difference between working with electronics and instruments. He uses the term sound-behaviour to characterize this difference. Gerhard writes,
"the operative word is behaviour, it will be noticed, not colour; colour is never of decisive importance. Instead of ‘behaviour’ I might have used the term sound-activity. The electronic medium, in effect, makes possible new modes of action with sound which have greater freedom of tonal movement, of configuration and of textural weaving than those which our traditional instruments permit."
Gerhard’s notion of sound-behaviour bears a close conceptual resemblance to what Denis Smalley would later term spectromorphology – literally the shaping of sound through time. In line with this thinking about sound-activity, the electronic works are driven by gesture- and texture-led sections. Although Gerhard did not care for Schaeffer’s term for the basic perceptual unit in musique concrète, the objet sonore, it is clear that in his electronic works, and increasingly in his later instrumental works, he nevertheless moved away from the ‘note’ as the essential unit towards his own notion of the sound object or sound-field as the building blocks for his works.
Examining the source documents, tapes, existing catalogues and extant writings about Gerhard’s electronic music reveals a need to re-evaluate Gerhard’s catalogue of works. In proposing titles for the projected series, Audiomobiles I-IV and Sculptures I-V, Gerhard unwittingly implied that these works were in fact complete. We find fabrication taken as fact merely by dint of repetition, both in print and online. It is likely that Gerhard never completed all of the Audiomobiles or Sculptures. What we have are Audiomobiles I & II, Sculpture I and an extensive amount of detailed preparatory, but incomplete, mixes for the other projected works.
 GERHARD, R. 2000. 'Concrete music and electronic sound composition' in BOWEN, M. (ed.), Gerhard on Music: Selected Writings, Aldershot, Ashgate, p. 180
 Ibid. 1, p. 180
 Ibid. 1, p. 181
 Ibid. 1, p. 183
 Ibid. 1, p. 183
 Gerhard unpublished notebooks, CUL Gerhard.7.115 f.20
 Ibid. 1, p. 184
 see KARMAN, G. Roberto Gerhard’s Sound Archive at the Cambridge University Library, 29.12.2007 and KARMAN, G. Roberto Gerhard’s Tape Collection, 10.10.2008
 Personal email correspondence between Dr R. Summers and the author
 DAVIES, H. 1981. ‘The Electronic Music’ in Tempo, New Series, No.139: 35-38
 HOMS, J. (ed. BOWEN, M.) 2000. Roberto Gerhard and his Music, Sheffield, Anglo-Catalan Society
 Ibid. 11, p. 60
 In BOWEN, M. (ed.) 2000. Gerhard on Music: Selected Writings, Aldershot, Ashgate, the catalogue of works, Appendix II, p. 248, has manuscript scores deposited in the CUL for Your Skin and All Aboard. In DAVIES, H. 1981. ‘The Electronic Music’ in Tempo, New Series, No.139, the works are listed as being electronic.
 Ibid. 1, p. 261
 Ibid. 10, p. 36
 Ibid. 10, p. 35
 Ibid. 10, p. 35
 BOYE, H. 2010. 'How Roberto Gerhard was persuaded to make the soundtrack for the 16mm film DNA in Reflection' in Proceedings of the 1st International Roberto Gerhard Conference, Huddersfield, University of Huddersfield
 Ibid. 18
 Ibid. 1, p. 184
 From a transcription of Maderna’s 1957 presentation at Darmstadt (made by Horst Weber, 1984)
 Ibid. 1, p. 194
 SMALLEY, D. 1997. 'Spectromorphology: Explaining Sound Shapes', in Organised Sound, 2(2): 107-126
Sibelius Academy, Finland
Electroacoustic music pedagogy seems to be in a state of crisis, but it has seemed so for the last several decades without the field being any the worse for wear. This is closely tied up with the breathtaking speed of recent technological change and development; any discipline as closely linked with such technology as electroacoustic music is undoubtedly going to suffer growing pains from being forced to keep up.
But, has it kept up? Has the pedagogy kept up, not just with the technological changes of recent decades, but with the cultural changes and significant paradigm shifts that have resulted? Has electroacoustic culture itself kept up with these shifts, or recognised their full relevance and consequences?
This is perhaps an impossible task: the cultural metamorphoses we are undergoing, and have been for some time, are significant enough that it will require the clarity of hindsight to unravel their full implications. This should not, however, discourage us from trying.
Today's electroacoustic pedagogues find themselves in a challenging position. What is it, precisely, that we are trying to teach? At first glance, this seems simple enough: the tools, techniques, and tradition of electroacoustic music.
Each of these, however, is thoroughly problematic. In decades past, one only had access to the tools of electroacoustic music in the studio, access to which was generally of extremely limited duration. As a result, one spent one's class and studio time learning the use of the tools, and the basic methods and techniques. Today, however, interested students have ready access to even the most powerful tools on their laptops, and have potentially been toying with them in their spare time for years before arriving in a university classroom or studio. Many of the basic traditional tools of electroacoustic music are now freely available to even very young children on their portable gaming devices or mobile phones; children as young as five or six are quickly fluent in basic electroacoustic transformations and possibilities through the use of play-oriented sound apps. As a result, the classroom environment need no longer be used simply as a workshop for learning the tools, although of course many questions of good studio practice still need to be taught and passed on to students.
So, we move on to techniques. But these are equally problematic. Electroacoustic techniques were initially determined to a significant extent by the tools used: What can we do with sound on tape? With sound once it is converted into a voltage? Once it is converted into ones and zeros? It is, of course, tempting to say that these barriers and restrictions have disappeared, but in reality it is simply that 'converting sound into ones and zeros' has absorbed the other two, due to its ever-increasing speed, power, and most of all, convenience.
By and large, however, new tools continue to be defined by old restrictions of tape and voltage, and technique is thus guided along a similar path. We find software with montage paradigms wholly defined by archaic techniques of tape editing, and approaches to electronic composition still focused on methods once defined by the electronic circuitry with which it was produced.
Where these techniques continue to be relevant, however, is in questions of language. The language of electroacoustic music – its vocabulary, how it is structured – is a constant evolution which cannot be understood without knowing its roots and its main branches. To compose electroacoustic music today, it is extremely useful to have a thorough understanding of the primary genres to date, their materials, and their formal and structural mechanisms.
However, the language of electroacoustic music did not emerge fully formed from the void; it is part of an endless, mutually-informing cycle between tools, techniques, and language: the tool determines technique; tool and technique outline a range of possibilities, which then determine the language; which then determines the direction for further development of tools and techniques, and so on. Thus, the very languages, structures, vocabularies and genres of electroacoustic music today are, to a very important degree, products of the tools and techniques with which they were made and with whose evolutions they are inextricably entwined.
But, considering our imagined liberation from historical tools, brought on by the supposed freedom of the age of the laptop, do the languages associated with these tools continue to be relevant? Or does the teaching of the language of electroacoustic music amount to no more than a history lesson?
On the contrary, the teaching of the languages of electroacoustic music – past and present – remains critical, though perhaps less in order that these languages can be continued and maintained than, to some extent, for the exact opposite reason: so that students and future composers will not be caught constantly reinventing the wheel, unknowingly creating 'new' electroacoustic methods and languages which are, in fact, simply restatements of older paradigms using more recent tools or otherwise adjusted contexts. Instead, students need the ability to reference past and existing electroacoustic languages, to move fluently and knowingly between and within these languages when necessary or desirable, and to know when and how to leave them behind; to recognise the difference between the blossom of the new and the stream of tradition.
To remain relevant, however, electroacoustic music pedagogy needs to provide more than just history's lessons on 'what to avoid' and a general sense of context. Instead we must seek to foster and nurture a musical culture which is relevant to today's conditions, today's tools, and today's cultural reality. Institutional contexts are perhaps losing some of their potency as knowledge and resources become more widely accessible; electroacoustic creation has moved out of the studios and into our bedrooms, our studies, even into our parks, our backyards, and our sidewalks. Electroacoustic reception changes as quickly as the practice of its creation: out of the concert hall and into our headphones, at a time when most of the corpus of electroacoustic music can be contained in one's pocket, and when musical genres bubble up and burst forth from the dams that once kept them separate, reaching listeners and audiences with less regard for prior constrictions of culture, transmission and distribution.
Technology, for both creation and reception, moves ever more rapidly towards the small-scale, the portable, the personal. Some aspects of current electroacoustic culture deliberately position it as a welcome counterbalance to this trend: the power of large-scale loudspeaker arrangements, the community experience of the concert context, etc. Too great a focus here, however, may yet become a factor in the slow erosion of electroacoustic music as a viable culture. Personal experience, and personal creation – the private, the portable, the mobile, the immediate – must be embraced.
Perhaps this, then, offers the strongest choice of direction for electroacoustic pedagogy: to understand, accept, encourage, and even emphasise new contexts and new paradigms in both creation and reception. Personal expression, personal reception, personal experience; smaller-scale creation, smaller-scale listening... Electroacoustic pedagogy must break out of the studios and out of the concert halls, while nevertheless maintaining these as vibrant alternatives.
An advertisement from the Bob Moog Foundation claims he was “the man who reinvented the sound of electronic music.” But who was the man who invented electroacoustic music? If one applies evolutionary theory to technology used for musical purposes, it confirms that many people around the world were working on the same problems at the same time. I suspect we don't know about most of them, and that the ones of whom we are aware were men who lived in Europe and America.
The 19th century elevated the composer to a singular status requiring no assistance except from performers, most of whom remained nameless. This tradition has continued up to today in the “art” music field, including electroacoustic music, with the exception of collaborative music making, which I will discuss later.
I was born in Los Angeles, California in 1939, but I was unaware that in the same year, a few miles away, at the University of Southern California, Edgard Varèse said, “I need an entirely new medium of expression: a sound producing machine (not a sound reproducing one).” What hindered Varèse the most was the absence of engineers who could provide him with the technology. Other composers at about the same time were fortunate enough to find technical collaborators who helped them realize their dreams. In our desire to name one individual responsible for artistic innovation, history tends to forget these people. Musicologists know about Pierre Schaeffer but forget about Jacques Poullin. We associate Edward Artemiev with the ANS Synthesizer but not its inventor Evgeny Murzin. There are many other such examples: Karlheinz Stockhausen and Fritz Enkel, Luciano Berio and Alfredo Lietti, Gyorgy Ligeti and Gottfried Michael Koenig, Milton Babbitt and Harry F. Olson, John Eaton and Paul Ketoff, Morton Subotnick and Donald Buchla, Knut Wiggen and Per-Olav Strömberg, and others too numerous to name here. Although I am popularly credited with the invention of the Synclavier, it would never have happened without my collaborators Sydney Alonso and Cameron Jones.
A quite different form of collaboration in electroacoustic music developed in the 1950s, most notably around John Cage. Here the equipment was often homemade or put to purposes not intended by its inventors. In most cases the composer gave only minimal instructions to the performers, who were often composers themselves. This musical genre was once referred to as “live electronic music”; however, its essence was the rejection of special-purpose hardware and the celebration of improvisation using a new sonic vocabulary. Examples include the ONCE Group, the Sonic Arts Union, FLUXUS, MEV, AMM, etc. This genre had its critics even within the rarified musical world in which it resided. In the early 1970s composer David Cope wrote that “a composer can very well be a bad performer; ‘real time’ can become a tiring agent; new sounds can become just that and not much more (composers tend to develop preoccupations with how sounds are produced, finding complicated ones somehow more appealing than uncomplicated ones, rather than considering ‘is it musical?’)... The initial intrigue with electronic sounds seems somewhat exhausted at this writing...” Predicting the future, especially in the arts, seems to be a precarious enterprise.
I believe, contrary to some purists, that computer music belongs within the larger world of electroacoustic music. Since 1957, when Max V. Mathews wrote the first computer music language, Music I, most developments in this field have been a result of both intended and casual collaboration by programmers who have had musical training. These programmers built upon the work of their predecessors in a manner not unlike composers. It is difficult to think of the author of a computer music language, or a composer for that matter, who worked in isolation, unaware of precedent.
Most composers who work with the many recent computer music languages do not require collaboration. They work easily with existing software and frequently add their own routines, Max/MSP being the most often cited of these. Composing electroacoustic music in this manner requires the same trial and error employed by composers of instrumental music. The exception is complete reliance on algorithmic systems: without rehearsal or audition, the results are often as unpredictable as the “live electronic” musical practices described above. It seems to me that the “laptop orchestra” is only a newer medium for electroacoustic music, sometimes requiring artistic collaboration and at other times working exactly like a symphony orchestra. It portends the development of even smaller “instruments,” such as the iPhone orchestra, whose future this writer finds impossible to imagine.
As new forms of electroacoustic music emerge, composers not only influence each other, but often form groups. An appropriate example that comes to mind is the birth of text-sound composition here in Sweden in which technology often played an important role. Were Lars-Gunnar Bodin, Sten Hanson and Bengt Emil Johnson collaborators? Together with colleagues of theirs, they published, recorded, broadcast and promoted their sound art.
Still alive are some of us who experienced the dissemination of electroacoustic music through previously unimaginable mediums. In the early days tape recorders and loudspeakers were placed in auditoria as the only means of hearing this new music. There was a short period when alternative programming on radio stations allowed people to hear electroacoustic music for the first time. Soon commercial recordings were available to more affluent listeners. Today the internet provides a marvelously wide repertoire of electroacoustic music; all of this due to newer forms of collaboration. Science in the service of music does not cease. Eight years ago, at a conference here at The Royal College of Music (KMH), I spoke about the possible development of a “brain cap” to enable still another new medium for electroacoustic music. To some it seemed like fantasy, but in fact it has become a reality, although still confined to laboratories dedicated to music cognition. I don't know the names of any of the “inventors” involved, but soon I will ask, “Who was that man? I'd like to shake his hand!”
Dr. Ignacio Baca Lobera
Universidad Autónoma de Querétaro, Mexico
This paper deals with sound production and perception on several levels.
One concerns sound morphology; another is how mixed media can function when morphological criteria are involved; and finally, how all of these aspects of sound generation can be useful as teaching tools.
In recently published papers there is renewed interest in discussing the possibility of a new musical material and whether its definition corresponds to a new musical morphology. Authors like Mercer, Mahnkopf, Cox, Cassidy and Schurig ask collectively, in the book “Musical Morphology”, whether there are new definitions of musical material, theme, motive, melody, gesture and systems of syntax; in other words, what these mean to us in recent contexts, and whether we are now witnessing a whole new apparatus of conception, perception and articulation of musical material. It is important to notice that every author in this volume has a very personal definition of musical morphology.
Mercer, in "Musique Concrète Revisited", points out: “But it was not until virtually the entire world of sound, natural and synthetic, was placed quite literally at our fingertips, that serious inquiry into the possibilities of compositionally functional sonic morphologies began in earnest..." (Mahnkopf, p. 162).
On the other hand, Aaron Cassidy considers that "...What I am suggesting, though, is that from a morphological perspective, the ontological identity of a musical "shape" or local "form" is highly dependent upon the physical (and even choreographic) energies involved in creating that sound or group of sounds." (Mahnkopf, p. 35)
What seems to be important is that all aspects of music production, and how the results are perceived, are to be considered as means of structuring and articulating a temporal discourse. Most of these authors treat morphology as a means of structuring, such that any microscopic aspect of sound production is reflected in the whole work.
These provisional conclusions have been the basis for an investigation of mine regarding the definition of a practical approach to electroacoustic music analysis, and for re-evaluating sonogram tools in conjunction with morphological considerations.
2. Mixed media and acoustic composition
In my experience, working in both worlds, electroacoustic and acoustic, produces much richer and more complex sound phenomena than confining oneself to only one. Each influences the other; orchestral music by a composer who has worked in electroacoustic music is conceptually more refined, for instance. One very clear example is the music of Joji Yuasa.
As an example of a personal exploration regarding this subject I will present one work, “La Lógica de los Sueños” for singer, two guitars and electronics.
3. Sound manipulation as a tool for teaching composition
Speaking from my experience as a teacher, when electroacoustics and composition are taught at the same time, students develop more efficiently; focusing on sound gives a very organic approach to organization and structure.
• Childs, Barney. “Time and Music: A Composer's View”. Perspectives of New Music 15, no. 2 (Spring-Summer 1977): 194-219.
• Clifton, Thomas. Music as Heard. New Haven: Yale University Press, 1983.
• Cogan, Robert. New Images of Musical Sound. Cambridge: Harvard University Press, 1984.
• Delalande, F. "En l'absence de partition, le cas singulier de l'analyse de la musique électroacoustique". Analyse Musicale, no. 3, 1986.
• Doati, Roberto. "György Ligeti's Glissandi: An Analysis". Interface 20, no. 2. Lisse: Swets & Zeitlinger, 1991.
• Greimas, A. J. and Courtés, J. Semiotics and Language. Bloomington: Indiana University Press, 1982.
• Kramer, Jonathan. “New Temporalities in Music”. Critical Inquiry 7, no. 3 (Spring 1981): 539-556.
• Licata, Thomas, ed. Electroacoustic Music. Westport, Connecticut: Greenwood Press, 2002.
• Maconie, Robin. Stockhausen on Music. London: Marion Boyars, 1989.
• Mahnkopf, Cox and Schurig, eds. Musical Morphology. New Music and Aesthetics in the 21st Century, vol. 2. Hofheim: Wolke Verlag, 2004.
• McAdams, S. "Music: Spectral fusion and the creation of auditory images". In Music, Mind and the Brain: The Neuropsychology of Music. New York: Plenum Press, 1987.
• Meyer, Leonard. Music, the Arts and Ideas. Chicago: University of Chicago Press, 1967.
• Les Cahiers de l'IRCAM: Kaija Saariaho. Paris: Ircam, 1984.
• Simoni, Mary, ed. Analytical Methods of Electroacoustic Music. New York: Routledge, 2006.
Concordia University, Montreal, Canada
In the last 20 years, increasing numbers of participants and theorists in the worlds of experimental sound composition and sound art installations have turned their attention to the creation and theorizing of audiovisual works. However, the majority of scholarly writing on audiovisual relations – the means by which auditory and visual materials are integrated in relation to one another – concentrates on the combination of electroacoustic sound alongside screen-based media. I would like to extend the scholarship on audiovisual relations by examining in greater detail the compositional relations between light and sound in audiovisual installations, focusing on media installations using self-illuminating light objects and spatialized sound.
Despite the concentration on image-sound relations in screen-based media, several attempts at creating a classification framework for audiovisual relations remain relevant. Using the term “added value” to describe the “expressive and informative” (5) mutual enrichment caused by the combination of sounds and images, Chion details ways in which such media combinations may generate a “product of... mutual influences” (22). Chion describes audiovisual relations through a framework relying on compositional notions of the vertical and horizontal, enacted through concepts of audiovisual harmony, dissonance, and the problematic notion of audiovisual counterpoint (36-39). Further, Chion coins the term “synchresis” to describe “the spontaneous and irresistible weld produced between [simultaneous, and thus vertically related] auditory... and visual phenomenon” (63).
In a related inquiry focusing on the integration of electroacoustic music alongside moving images, Coulter suggests a classification model for audiovisual media pairs based on the abstract or referential content of both audio and video materials, and their degree of structural integration (27). Using this framework, Coulter suggests the relations between media pairs may be described as isomorphic or concomitant (27). Isomorphic relationships contain shared, formal “features that act as catalysis in the process of integration” (27-8), and which – despite the use of two perceptual media – “rely on the activation of solitary schema” (28). Conversely, concomitant relationships – into which Chion’s conception of synchresis falls – occur “when two (or more) schemas are simultaneously activated [or overlaid]” (28) leading to audiovisual integration through a “process of highlighting and masking” of homogenous and heterogeneous features (27).
While the overlapping analytical frameworks suggested by Chion and Coulter provide basic classification tools for the potential effects of experimental audiovisual production, they fall short of elaborating on the variety of compositional possibilities contained within the notions of synchresis, or concomitant media pairs. While such classification may be impossible within the medium of image-sound works due to the seemingly infinite combinations between sound and image, I believe that a delineation of the spectrum between isomorphism and concomitance – that is, the various shades of synchresis – may be theorized through a case-study examination of the somewhat simpler possibilities offered by light and sound media installation works.
I suggest a framework for the analysis and composition of audiovisual relations between sound media and self-illuminating light objects, which will provide detail as to the relational qualities between light and sound and the compositional means with which to achieve such qualities. The framework will relate the perceptual notion of “cross-modal binding” (Whitelaw 259) to the compositional concepts of vertical harmonicity and horizontal counterpoint (Chion 36-40). In this manner, I will suggest means in which various combinations of vertical and horizontal relations between light and sound may relate to the overall perceptual effects and affects generated by such techniques.
The perceptual notion of “cross-modal binding”, based on neuroscientific research on cross-modal integration (Shimojo and Shams 2001), is utilized, following Whitelaw, in favour of the concept of the synesthetic in “the contemporary practice of transcoded audiovisual art” (Whitelaw 259). Strong cross-modal binding results in the perception of “cross-modal objects” (Ibid.) while weak cross-modal relations lead to the perception of heterogeneous, potentially unrelated audiovisual stimuli.
The use of compositional language – such as unison, harmony and counterpoint – to describe relations between audiovisual media pairs has historical roots, particularly in early artistic attempts to integrate sound and light: Castel’s light organs, Scriabin’s Prométhée – Le Poème du feu, and the 1960s San Francisco-based Vortex sound and light shows all assume compositional parallels between pitch and colour, sound and light intensity, and sound and light rhythmic relations (James 2009, Jewanski 2009, Kienscherf 2009). Despite the particular problems observed by Chion with relation to audiovisual counterpoint in cinematic sound-image relations (see Chion 36-38), I believe that the application of such terminology still holds immense potential, particularly with regard to audiovisual relations between sound and self-illuminating light objects.
I will suggest two graphical versions of my proposed framework, illustrating various audiovisual relations through analysis of specific sound and light installations by Artificiel (Montreal, QC), Chris Ziegler (Germany), Hans Peter Kuhn (Germany), Bernhard Gal (Germany), Iannis Xenakis (France/Greece) and United Visual Artists (UK), as well as my own audiovisual installations.
A two-dimensional framework will utilize a horizontal axis consisting of perceptual cross-modal binding, ranging from “fused... cross-modal objects” (Whitelaw 259-260) to disparate, perceptually uncoupled audiovisual relations. The vertical dimension will consist of compositional relations between audiovisual media, ranging from unison, through consonant and dissonant harmony, to audiovisual counterpoint. I will utilize this two-dimensional model in order to compare its findings to existing analytical frameworks, namely Chion’s concept of synchresis and Coulter’s description of isomorphic and concomitant media pairs. Further, I will elaborate on the increased amount of detail offered by my proposed model, including the previously unaccounted-for audiovisual relations: physical and mapped isomorphism; consonant and dissonant harmonies; bonded, complementary and oppositional counterpoint; and the harmonic and contrapuntal use of coincidence as metaphor.
Further, I will suggest a three-dimensional model in which to place the above categories of audiovisual relations. The three-dimensional model will separate the compositional realms of vertical (harmonic) and horizontal (temporal, contrapuntal) operations into two separate axes, joined by a third axis of perceptual cross-modal binding. This model will be offered as a compositional tool, with which the composer can modulate fluidly between audiovisual unison and various degrees of audiovisual harmony and counterpoint, while remaining attentive to the concept of perceptual binding and its various affective possibilities.
Chion, Michel. Audio-Vision: sound on screen. New York: Columbia University Press, 1994. Print.
Coulter, John. “Electroacoustic Music with Moving Images: the art of media pairing.” Organised Sound 15.1 (2009): 26-34. Print.
James, David E. “Light Shows.” In See This Sound: Audiovisuology. Compendium. Eds. Dieter Daniels and Sandra Naumann with J. Thoben. Cologne: Walter König Verlag, 2009. Print.
Jewanski, Jörg. “Color Organs.” In See This Sound: Audiovisuology. Compendium. Eds. Dieter Daniels and Sandra Naumann with J. Thoben. Cologne: Walter König Verlag, 2009. Print.
Kienscherf, Barbara. “Music.” In See This Sound: Audiovisuology. Compendium. Eds. Dieter Daniels and Sandra Naumann with J. Thoben. Cologne: Walter König Verlag, 2009. Print.
Shimojo, Shinsuke and Ladan Shams. “Sensory modalities are not separate modalities: plasticity and interactions.” Current Opinion in Neurobiology 11 (2001): 505-509. Print.
Whitelaw, Mitchell. “Synaesthesia and Cross-Modality in Contemporary Audiovisuals.” The Senses and Society 3.3 (2008): 259-276. Print.
Art Works Cited
Artificiel – Condemned Bulbs (2003).
Gal, Bernhard – RGB (2001).
Kuhn, Hans Peter – A Vertical Lightfield (2009).
United Visual Artists – Array (2008).
Xenakis, Iannis – Polytope de Montréal (1967).
Ziegler, Chris – Forest 2 – cellular automation (2007-9).
Ziegler, Chris and Paul Modler – Neoson (2006).
Andreas Bergsland, Asbjørn Tiller
Dept. of Music, NTNU / Dept. of Art and Media Studies, NTNU
This paper explores meaningful relationships between voice and aural architecture, between reverberation and resonance, implicit in realizations of Alvin Lucier’s I am sitting in a room, expanding the range of possible meanings beyond the fixed media versions of this electroacoustic classic.
The interior of a building always carries its own sound. It contains what Blesser and Salter term an aural architecture. The aural architecture in this respect is an equivalent to the physical space of the building: its volumes, geometrical construction, and the materials making up the building’s surfaces. The aural architecture of a specific space exerts its major influence on the sound sources situated in that space, determining how the sound from those sources is reflected and diffused by the architecture. In this sense, the aural architecture will also influence our moods and associations in the listening experience.
The aural experience of an interior is closely connected to the term reverberation. Reverberation in the physical sense refers to sound reflections from nearby surfaces, and thereby implies the size and shape of the space as well as the materials of its surfaces.
In addition, the term resonance overlaps to a certain degree with reverberation, in the sense that it can also imply sound reflection. However, resonance also involves the synchronous vibration of a surrounding space or a neighbouring object.
In this paper, we want to address the concept of resonance not only by means of a physical definition. Resonance also implies a definition concerning mental imagery: the power or quality of evoking or suggesting images, memories, and emotions, thus referring to allusions, connotations or overtones. In this respect, the term corresponds with the aural architecture, with its influence on our moods and associations in the listening experience.
To address these issues, we have chosen Alvin Lucier’s I am sitting in a room as our case study. By referring to this work, it is possible to explore how the aural architecture of an interior transforms the initial voice to the point of unintelligibility and, by suggesting an augmentation of the elements voice and room in this piece, to discuss how the work can trigger mental images for the audience. By specifying the story told by the voice and setting the work in a specific room, the mental imagery of the audience will be guided on the basis of memory. The perceived space of the work will then be a combination of the composed space and the listening space.
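The iterative playback/re-recording process at the heart of the piece can be sketched computationally. The following is a minimal illustration, not Lucier's procedure itself: the "room" is reduced to a single hypothetical resonance modelled as a two-pole filter, a broadband noise burst stands in for the spoken phrase, and each loop pass represents one generation of playing the recording back into the room. The sample rate, resonance frequency and pole radius are arbitrary assumptions chosen for the sketch.

```python
import numpy as np
from scipy.signal import lfilter

fs = 44100                        # sample rate (Hz), assumed
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)  # broadband stand-in for one second of speech

# Two-pole resonator standing in for the room's dominant mode (values assumed)
f0, r = 440.0, 0.995
b = [1.0]
a = [1.0, -2.0 * r * np.cos(2.0 * np.pi * f0 / fs), r * r]

def band_ratio(x):
    """Fraction of spectral energy within +/- 40 Hz of the resonance."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return spec[(freqs > f0 - 40) & (freqs < f0 + 40)].sum() / spec.sum()

x = speech
for generation in range(8):       # each pass = one playback/re-recording
    x = lfilter(b, a, x)          # the room colours the sound again
    x = x / np.max(np.abs(x))     # crude gain staging, as on tape

print(band_ratio(speech), band_ratio(x))  # energy migrates toward the resonance
```

With each generation the room's transfer function is applied once more, so after n passes the spectrum is shaped by its n-th power; the speech-like broadband content is progressively "taken back" by the room, which is the sonic process the paper describes.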
Like several other electroacoustic works from the 1960s and 1970s, I am sitting in a room has had a dual existence since it was composed in 1969. On one side, it has existed and still exists in fixed form as a “tape composition”. In particular, two versions made by Lucier himself in 1970 and 1980 have been distributed on LP and CD, and the literature on electroacoustic music and sound art has frequently made reference to these. For instance, in both Broening’s and LaBelle’s analyses, Lucier’s stutter, the implicit references to it in the text, and the smoothing out of these “irregularities” by the gradual unfolding process are the central issues. Moreover, these more or less canonized recordings, and particularly the latest, have been frequently performed, i.e. played back, in acousmatic concerts, and as Collins has noted, thereby made a point of bringing “the public into a private space”, into a room “different from the one you are in now”. Thus, the work seems to exist in a relatively “closed” and authoritative form, where Lucier’s own voice (and stutter), his own text (with reference to the stutter), and the rooms he has chosen (smoothing out the stutter) are crucial components, and which therefore restrict the range of possible meanings associated with the work.
On the other side, however, and perhaps more in later years, the work has existed in the form of live realizations of the written score, which provides a set of instructions for the performance and a text passage that can be used. These versions, while bound to the score, need not be uniform, since the score explicitly opens up for “any other text of any length”, as well as encouraging more experimental versions using different speakers, different (and multiple) rooms, different microphone placements, and lastly, “versions that can be performed real time”. Hence, the score seems to open up possibilities for a much wider range of realizations, which naturally also widens the scope of possible meanings for a perceiver. By choosing speakers with a particular story or memory from a certain room, and letting the performance take place in that room, we contend that the work can take on another level of meaning in which, contrary to Lucier, the relationship between the “I” and the “room” is not arbitrary, but highly significant.
In our paper, we would like to exemplify how such a realization can create a heightened emotional and intellectual experience, both for performers and audience, highlighting the room not only as an acoustic environment but as the trigger of memories, imagery and emotions. We discuss a realization of the work in which Håkon, an anthropologist in his early 40s, was taken back to an empty hospital room where, a few decades earlier, he had visited his grandfather for the last time before he died. Thus, the room itself was an important factor in bringing back memories of a highly emotionally charged situation, which subsequently amplified the recollection process for him, giving the telling of his story increased actuality and impact. Like Lucier, he started by verbally locating himself, but Håkon spontaneously rephrased the opening of the text from the score: “I am sitting in that room...”, thereby making the link between his memories and the space explicit.
Subsequently, in the process whereby his story was iteratively played back in the room, the resonances that gradually built up created not only a sonically pleasing process, but also a strong metaphor for the interrelationship between Håkon and the room he was sitting in, playing on both the physical and the mental meaning of the concept of resonance, as presented above: The room resonated in Håkon, triggering mental images of how he experienced the room in the past, and then, by telling his memory, and letting the sound of himself telling it progressively excite the physical resonances of the room, he could follow the room as it “took back” his memory while gradually transforming it into an aesthetically pleasing object. Thereby, the process had an element of consolation and catharsis to it, which could also be felt by those observing the process from the outside. Lastly, in our presentation we suggest a range of realizations that might use this piece both as a form of music therapy, and as a way of creating a new level of meaning and meaningfulness for audiences observing such a process, and we problematize the degree to which such realizations are localized in the peripheral zone of the space of realizations that Lucier’s score delineates.
 Blesser, B. and Salter, L.-R. (2007). Spaces Speak, Are You Listening? : Experiencing Aural Architecture. Cambridge, Mass.: MIT Press.
 This term is used in the score.
 Lucier, Alvin. I am sitting in a room. (Source Magazine 7). Source record 3, 1970. Released with: Source: Music of the Avant Garde 4, no.7 (1970)
 Lucier, Alvin. I am sitting in a room. Lovely Music LCD 1013, 1990.
 See e.g. LaBelle, Brandon. Background Noise. Perspectives on Sound Art. New York: Continuum. 2006 and Broening, Benjamin, ”Alvin Lucier’s I am sitting in a room,” in Analytical Methods of Electroacoustic Music, Mary Simoni (ed.). New York: Routledge. 2006. p. 89-110.
 Collins, Nicolas, liner notes to Lucier, Alvin. I am sitting in a room. Lovely Music LCD 1013. 1990.
 See e.g. Burns, Christopher. “Realizing Lucier and Stockhausen: Case Studies in the Performance Practice of Electroacoustic Music”. Journal of New Music Research, 31:1, 59-68 (2010); John Butcher plus Alvin Lucier’s I am sitting in a room, http://www.thewatchfulear.com/?p=4758 (accessed 2012-01-24)
Université Rennes 2 / MINT-OMF université Paris-Sorbonne, France
For the linguist Ivan Fónagy, every word can be considered as a transmission, consisting both of a linguistic message made of the socially recognized symbols of a culture, and of a para-linguistic content that is more universal because it is emotional. However, this theory is incomplete in the context of vocal music.
My preliminary hypothesis is as follows: musical speech involves a third level, supporting a purely artistic expression whose goal is not solely the propagation of a trivial or symbolic message. The solo practice of singing shows that transmission is not inherent in the art: it provides an emotional and physical pleasure to the person who sings. A paradox therefore appears: the text is intended to be transmitted and understood, while the music is substantially removed from this utilitarian role. Musical speech raises the question of intelligibility. In music history, many debates have arisen regarding the polyphonic density that hinders the understanding of the words, the ornamental virtuosity of the Italian baroque masking intelligibility, or the wide leaps in Webern’s music perturbing the correct emission of vowels and consonants.
Electroacoustics sharpens this tension between the choice of musical expression and the degree of speech intelligibility. Indeed, artificial vocality transcends and goes beyond the physiology of the vocal tract. It therefore seems legitimate to ask how composers have addressed this apparent paradox between the voice considered as a vehicle of meaning and vocality shaped as a sound material. In the late nineteenth and early twentieth centuries, recording devices encouraged the study and conservation of speech, but also the emergence of some novel artistic ideas. Apollinaire described a revolutionary art that would be based on the realization of fixed, manipulated and superimposed sounds. However, it was not until 1948 that Schaeffer invented "musique concrète". Following this, speech processing and synthesis devices widened the opportunities while facilitating all sorts of manipulation. These changes were not without consequences for musicians’ questions and certainties about the intelligibility of text set to music.
To investigate this idea, three developed examples will show the gradual emergence of new conceptions in the alliance of music and speech.
First, the flexibility offered by electronics encouraged a new way to consider speech: as a sound material and not just as a vehicle of articulate language. This process of assimilation as an infinitely malleable timbral resource began in the early days of concrete music. These manipulations were the first fruits of the fixing of sound on physical media. The Etude aux casseroles, containing Sacha Guitry’s voice, announces an extensive area of experimentation. The "concrete speaking" was to flow into a large part of the repertoire of the Studio d’essai, then of the GRMC, and eventually of the GRM. There are numerous examples by Pierre Henry as well, but also in other genres such as sound poetry or Hörspiel. Transmitting the meaning of words borrowed from a text gives way, at least in part, to a play on timbre and duration.
The second orientation is more radical: it derives meaning from the destruction of meaning. Even outside electroacoustics, examples are easy to find. In 1956, Il canto sospeso by Luigi Nono, on letters from members of the Resistance sentenced to death, seized the listener with terror. The strength of the words is placed at the service of a political and universal message. The atomization of the text mirrors the destruction of an entire humanity. Two years later, in Omaggio a Joyce, Luciano Berio pursued the idea of semantic degradation already implemented by James Joyce in his novel Ulysses. The language dissolves until the meaning is lost, thus prolonging the pathetic character’s wandering in the city in his own folly. More cruelly, in Philomel (1964) by Milton Babbitt, the tragic heroine’s severed tongue destroys any articulate speech. She warns her sister Procne by sending a tapestry that she made in prison and which depicts her ordeal. Inspired by Ovid’s Metamorphoses via a poem by John Hollander, the story makes sense where speech is impossible. Music does not merely illustrate a text and its plot; it is its incarnation. The loss of linguistic content is the very purpose of the work. The last example, Speakings by Jonathan Harvey, describes a complete cycle of the human voice, from birth to adulthood, then to a transcendence of language. In order to bring this path to life for the listener, the orchestra is formantized in real time, using techniques of speech analysis, automatic orchestration and prediction for tempo tracking.
Finally, other composers choose to combine speech in a relatively traditional musical expression with the innovative power of electronics. Philippe Manoury offers a musical style that illustrates this perfectly. In the 1980s, having noted the difficulties inherent in post-Webernian angular lines, he quickly turned to respecting the singers’ voices and the intelligibility of the text. But he also develops a very sophisticated vocality, enriched by electronics, in En Echo, K., or On-Iron. While the natural voice remains the preferred vehicle of the meaning of words, artificial vocality provides an extension far freer than the limits of the body, in which the transmission of semantic content is no longer the primary aim. Many treatments and synthesizing processes are thus explored by the composer. En Echo draws on all the previous experiences with real time in the cycle Sonus ex machina. PSOLA, applied in the operas K. and On-Iron, can be considered an improvement on methods derived from the FFT.
Thus speech intelligibility, which long caught the legitimate attention of composers and performers, is a concept that has opened up new horizons over the last half century. Technological means applied to electronic instrument making have greatly enlarged the possibilities of integrating the meaningfulness of speech within music. Manipulation of the voice as sonic material has given birth to a more distanced view of meaning and intelligibility. For Fónagy, speaking is primarily the transmission of a message from the speaker to the listener, contained in the spoken language or in the emotion of the voice. Traditionally, musicians have tried to fulfil this function, but electronic music often exceeds this process. Even poor intelligibility means something: controlling the level of understanding or misunderstanding of words makes sense. Artistic success is related not only to a better understanding of the text, but also to its inclusion at a higher level. Whether intact or deteriorated, intelligibility has an expressive value.
BATTIER, Marc, « La querelle des poètes phonographistes : Apollinaire et Barzun », dans Littérature et musique dans la France contemporaine, Actes de colloque des 20-22 mars 1999 en Sorbonne (Paris), Strasbourg, Presses Universitaires de Strasbourg, 2001, p. 167-179.
BOSSIS, Bruno, La voix et la machine, la vocalité artificielle dans la musique contemporaine, Rennes, Presses Universitaires de Rennes, collection Æsthetica, monographie, 2005, 316 p.
BOSSIS, Bruno, « Jonathan Harvey, de la voix à la vocalité », livret du disque Jonathan Harvey, Speakings, æon, AECD 1090, 2010, p. 15-18.
BOSSIS, Bruno, « La voix des sirènes : Thema-Omaggio a Joyce de Luciano Berio », dans Le Modèle vocal, la musique, la voix, la langue, Bruno Bossis, Marie-Noëlle Masson et Jean-Paul Olive (dir.), Actes de colloque Le Modèle vocal, 10-11 décembre 2004, Rennes, Presses Universitaires de Rennes, 2007, p. 23-32.
CASTELLENGO, Michelle, « Particularités acoustiques de la voix des chanteurs professionnels », Bulletin du GAM, 67, Paris, 1973.
FONAGY, Ivan, « Les langages de la voix », dans L’esprit des voix, Etudes sur la fonction vocale, Grenoble, La pensée sauvage, 1990, p. 69-84.
FONAGY, Ivan, La Vive voix, Paris, Payot, 1983.
LANDY, Leigh, La musique des sons, The Music of Sounds, édition bilingue, MINT/OMF, Paris, MINT, OMF, université Paris IV-Sorbonne, 2007.
MANOURY, Philippe, Considérations [toujours actuelles] sur l’état de la musique en temps réel, Revue l’Etincelle, Prospectives, Paris, Ircam – Centre Georges Pompidou, 2007.
MANOURY, Philippe, Va-et-vient, entretiens avec Daniela Langer, Paris, Musica falsa, 2001.
SCHAEFFER, Pierre, A la recherche d’une musique concrète, Paris, Seuil, 1952.
SUNDBERG, Johan, The Science of the Singing Voice, Dekalb, Illinois, Northern Illinois University Press, 1987.
The Royal College of Music Stockholm, Sweden
A Narrative Stance - Making a Case for Narrativity in Electroacoustic Music
This paper sets out to establish a case for narrativity in electroacoustic – initially acousmatic – music. As the field is broad and complex, no claim to overall comprehensiveness is offered. Beginning with basic definitions from literature, particular attention is given to the EVENT schema (Kendall 2008). The concept of image schemata is then introduced as a unifying means of organization. Schemata are considered to perform several functions, such as serving as a basic narrative component and as a bearer of significance for narrative understanding. Initially, one moment from Karlheinz Stockhausen’s Kontakte for electronic sounds is examined.
The notion of a narrative stance is proposed: this is a deliberate approach to listening to electroacoustic music in a narrative light, one that permits the emergence of a perspective on the content and meaning of a given work that can complement analysis based on purely musical grounds.
Narrative is a fundamental expression of human experience, something people do and that people understand. Discussion about narrative in music is most often based on literary models, frequently to the “disadvantage” of music. Narrative and the quality of narrativity are here not regarded as borrowed traits, but rather as innate to music, albeit in ways both complementary and contrasting to other media such as literature and film. Byron Almén (Almén 2008) rejects the notion that narrativity is not possible in music (Nattiez 1990) and locates the causes of this position in the "parasitic" manner in which the narrative aspect of music has been subordinated to literature and a language/linguistic mindset. Almén contends that “the definition of narrative itself is the source of confusion: because narrative was first conceptualized in relation to literature, we have failed to separate narrative proper and narrative as manifested in literature.” (Almén 2008) He advocates that the "descendant" model of narrative (music deriving its narrative qualities from literature) be replaced with a "sibling" model in which literature and music (and, by extension, all other media) are placed on the same generational level, each manifesting narrative in its own manner according to the particularities of each medium.
Defining narrative revolves around several key words. For instance, H. Porter Abbott defines narrative as “the representation of an event or a series of events” (Abbott 2008). Gérard Genette holds a broad definition: [...] “as soon as there is an action or an event, even a single one, there is a story because there is a transformation, a transition from an earlier state to a later and resultant state.” (Genette 1988) Approaching from a musical position, Vincent Meelberg defines narrative as “the representation of a temporal development, which consists of a succession of events” (Meelberg 2006). Common to Abbott’s and Meelberg’s definitions are the notions of an event or events, a series or succession of those events, and representation. An even broader definition is offered by David Herman: “Narrative [...] is a basic human strategy for coming to terms with time, process, and change” (Herman 2007).
With respect to music, events and their succession would seem straightforward. Succession implies ordering in time. Yet event is problematic in that the dimensions of the event are not specified: does the event belong to a syntactic or to a higher level? Next, representation implies a telling or relating of some succession of events. Lastly, Meelberg’s “temporal development” couches his definition in a more musical guise, as something that is a process and not necessarily a series of discrete events. Furthermore, it is noteworthy that both Abbott and Genette maintain that even a single event may constitute a narrative.
Events and Schema as Narrative Components
Events comprise the grain of narrative discourse. Their influence extends from low-level syntagmatic progression to high-level formal aspects.
Gary Kendall has explored cognitive mechanisms involved in the perception and interpretation of electroacoustic music, beginning with the single event or chain of events (Kendall 2010). The EVENT schema suggests both a model for event-to-event sequences and a kind of generalized syntactical structure which can be adapted flexibly to any work.
The EVENT schema is a dynamic model that includes component parts representing processes and others representing state. The model is dynamic in several respects. First, it is a pattern that executes through time. It changes state during the process of its execution. Second, it has junctures at which the execution can be directed along alternative paths.
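Kendall describes the EVENT schema in prose. Purely as an illustration of what a “dynamic pattern that executes through time,” with junctures at which execution can branch, might look like, one could sketch it as a toy state machine. The state names and cues below are hypothetical, chosen for illustration only; they are not Kendall’s own formalization.

```python
class EventSchema:
    """Toy model of a schema that changes state as it executes and has
    junctures where execution can take alternative paths."""

    def __init__(self):
        self.state = "anticipation"
        self.history = [self.state]

    def step(self, cue):
        """Advance the schema given a perceptual cue. The 'onset' state is
        a juncture: the event may sustain, or be cut off straight to closure."""
        if self.state == "anticipation" and cue == "attack":
            self.state = "onset"
        elif self.state == "onset":
            self.state = "continuation" if cue == "sustain" else "closure"
        elif self.state == "continuation" and cue == "decay":
            self.state = "closure"
        self.history.append(self.state)
        return self.state

schema = EventSchema()
for cue in ("attack", "sustain", "decay"):
    schema.step(cue)
print(schema.history)  # ['anticipation', 'onset', 'continuation', 'closure']
```

A different cue sequence (e.g. an abrupt cut-off at the onset juncture) would trace an alternative path through the same pattern, which is the property the prose description emphasizes.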
Image schemata offer several features, which are directly applicable to narrative and narrativity in music. In defining image schemata, Mark Johnson writes:
[...] in order for us to have meaningful, connected experiences that we can comprehend and reason about, there must be pattern and order to our actions, perceptions, and conceptions. A schema is a recurrent pattern, shape, and regularity in, or of, these ongoing ordering activities. [italics in original] It is important to recognize the dynamic character of image schemata. I conceive of them as structures for organizing our experience and comprehension. (Johnson 1990)
The dynamic and temporal aspects of schemata thus can serve well as narrative components and as bearers of metaphorical significance.
Candace Brower brings schema to bear on meaning in music; of particular interest in the present context is her application of schema to musical patterning with intra-opus and cross-domain, i.e. metaphorical, mappings:
Mapping these features of our bodily experience of the physical world (the source domain) onto music (the target domain) yields the music-metaphorical concepts of musical space, musical time, musical force, and musical motion. (Brower 2000)
One famous example
John Dack has convincingly discussed narrative in Kontakte in relation to Stockhausen’s implementation of the aforementioned criteria and the explicit musical and metaphorical goal of bringing together, making contact between, the electronic and instrumental worlds (Dack 1999). He has also made a case for narrative with respect to the serial techniques. His approach is admirably from “the inside out” using musical materials and their systematic application to build his argument.
One of the most iconic sound-events in electroacoustic music is Section X (at 17’05) of Kontakte (1960). Generally considered a demonstration of the continuum of time and sound, one of Stockhausen’s four criteria for electronic music (Stockhausen and Maconie 1991) and described in “The Concept of Unity in Electronic Music” (Stockhausen and Barkin 1962), the emergence of the famous sound can also be regarded schematically, initially as that of BREAKING THROUGH. It is prepared, foreshadowed, towards the end of Section IX by fleeting appearances which lead to a pulsating crescendo of increasingly noisy sounds. At the cusp of Sections IX and X, the famous sound appears as a “solo” when the preceding sounds cease abruptly. As the sound transforms through the temporal field from pitch to rhythm, the impulse duration is gradually extended. A sustained sound fades in; the impulse settles on a stable frequency and appears to find a place within the spectrum. Then, still slowing, the impulse becomes ever so slightly louder and drier (closer). Abruptly the impulse is transposed, forming a phrase which is shifted to a highly reverberant space, within which the impulse remains, continuing the decelerando and the extension of its duration. Subsequent filtration and transposition of the background spectrum seem to indicate an attempt at resolution of sorts. The schemata activated during the course of these events include SOURCE-PATH-GOAL, BALANCE and NEAR-FAR.
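The transformation described here rests on the continuum between audio-rate repetition (heard as pitch) and sub-audio-rate repetition (heard as rhythm), with the crossover around 20 Hz. As a hedged illustration of that continuum only, and not of Stockhausen’s actual studio procedure, one can compute the onset times of an impulse train whose rate glides exponentially from the pitch region down into the rhythmic region; all numbers below are arbitrary.

```python
def decelerating_impulse_times(f_start=200.0, f_end=2.0, duration=20.0):
    """Return onset times (seconds) of an impulse train whose repetition
    rate glides exponentially from f_start to f_end over `duration` seconds.
    Above roughly 20 Hz the train is heard as pitch; below, as rhythm."""
    times, t = [], 0.0
    while t < duration:
        times.append(t)
        # Instantaneous rate at time t (exponential glissando).
        rate = f_start * (f_end / f_start) ** (t / duration)
        t += 1.0 / rate
    return times

onsets = decelerating_impulse_times()
gaps = [b - a for a, b in zip(onsets, onsets[1:])]
# Early gaps are milliseconds apart (pitch region); late gaps approach
# half a second (rhythm region), and the gaps grow monotonically.
print(round(gaps[0], 4), round(gaps[-1], 2))
```

The perceptual point is that nothing in the signal changes category; only the repetition rate crosses the threshold of the listener’s temporal resolution.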
By applying a schematic approach, an additional layer of narrative meaning is brought into play. While it must be emphasized that there is no overriding “story” for Kontakte, one possible interpretation of this renowned passage is that of struggle, breaking through and a return to an interim balance before moving onwards. In this instance, it is reasonable to assert that Stockhausen has embedded a narrative stance into Kontakte both here and at other points as well.
A quick glance at form
With respect to the overall form of the composition, both Pasler and Dack note that while moment form is essentially “antinarrative”, Kontakte displays nonetheless the quality of narrativity (Dack 1999; Pasler 2008). As such Kontakte displays neither “story” nor “plot”, but does meet Herman’s three aforementioned criteria: “time, process, and change.”
In discussing literature in which the degree of causality is low, Seymour Chatman, citing Jean Pouillon, suggests “contingency” as an umbrella concept to cover narratives that do not – often by design – overtly display cause and effect as an ordering principle (Chatman 1980), but rather are “kaleidoscopic” (Pasler 2008). Pasler writes that emphasis is instead placed “on the whole rather than the precise movement of one section to another. Transformation [...] does not depend on immediate connections from one section to the next but rather on some overall connectivity. [...] [N]arrative is the sense that one has of a certain kind of a whole when one has reached the end, not necessarily while one is listening to each and every part in its middle.” (Pasler 2008)
As listeners, if we choose a narrative stance, we can “recognize” narrative passages, but to what extent is that recognition a function of familiarity? According to Heikinheimo, Stockhausen’s original plan was to implement greater variability in both the instrumental and electronic parts, but this proved impractical (Heikinheimo 1972). Once the work was fixed, Stockhausen never returned either to compose new sections or to re-order existing ones. On the other hand, had Stockhausen presented several versions – or, given today’s technology, programmed changes of order into an ever-evolving tape part – we, as listeners, would not be accustomed to the ordering fixed in 1960 and would be open to a “truer” version of moment form. Would the work then display narrativity?
It is surely possible given that schemata, which work on local and global levels, bring both “contingency” and “overall connectivity” to bear on narrativity in music and that via schemata we as listeners can attribute meaning to what we hear.
Abbott, H. P. (2008). The Cambridge Introduction to Narrative, Cambridge: Cambridge University Press.
Almén, B. (2008). A Theory of Musical Narrative. Bloomington: Indiana University Press.
Brower, C. (2000). "A Cognitive Theory of Musical Meaning" Journal of Music Theory 44(2): 323--379.
Chatman, S. B. (1980). Story and Discourse: Narrative Structure in Fiction and Film, Ithaca: Cornell University Press.
Dack, J. (1999). "Karlheinz Stockhausen's Kontakte and Narrativity," http://cec.sonus.ca/econtact/SAN/Dack.htm (accessed 11 February 2012).
Genette, G. (1988). Narrative Discourse Revisited, Ithaca: Cornell University Press.
Heikinheimo, S. (1972). The Electronic Music of Karlheinz Stockhausen: Studies on the esthetical and formal problems of its first phase, Suomen Musiikkitieteellinen Seura.
Herman, D. (2007). Introduction, The Cambridge Companion to Narrative, D. Herman, Cambridge: Cambridge University Press: 3--21.
Johnson, M. (1990). The Body in the Mind: The Bodily Basis of Meaning, Imagination, and Reason, Chicago: University of Chicago Press.
Kendall, G. S. (2010). "Meaning in Electroacoustic Music and the Everyday Mind." Organised Sound 15(1): 63--74.
Meelberg, V. (2006). New Sounds, New Stories, Leiden: Leiden University Press.
Nattiez, J.-J. (1990). "Can One Speak of Narrativity in Music?" Journal of the Royal Musical Association 115(2): 240--257.
Pasler, J. (2008). "Narrative and Narrativity in Music." Writing Through Music: Essays on Music, Culture, and Politics, Oxford: Oxford University Press.
Stockhausen, K. and E. Barkin (1962). "The Concept of Unity in Electronic Music." Perspectives of New Music 1(1): 39--48.
Stockhausen, K. and R. Maconie (1991). Stockhausen on Music: Lectures and Interviews, London: Marion Boyars.
Michael T. Bullock
…Attending to the space around us, we notice an abundance of vibration – an airplane high up in the sky, noise of the city (for several kilometers all around us), subway vibration, water pipes, etc. I believe we can regard these vibrations as the ‘context’ of constructed space. From this context, we can increase our awareness of living space. Constructed space has limited spatial dimensionality, but our awareness of it exceeds this size. It seems that our awareness spreads over hundreds of meters or more in all directions – up, down, all around our location. My suggestion is that we must recognize space as a vibratory system. – Toshiya Tsunoda 
Japanese sound artist and electroacoustic composer Toshiya Tsunoda calls our attention not only to sound’s ability to penetrate walls, but also to our ears’ ability to cross visible barriers in conceptualizing space and place. Our awareness spreads: our listening is not a delineated zone but is in a continual state of motion and growth, reaching through walls and across open spaces and passages, membranes and thresholds.
This paper will consider the work of composers who use unprocessed field recordings to exploit the potentials of thresholds – doorways, passages, barrier zones, transitional states of the physical environment – in creating electroacoustic works. The paper will first show how the selection of location, the means deployed, and the choice of collaborators generate a new set of relational meanings for a location recordist, meanings that can carry over into the final piece. In the second part, several pertinent works by Tsunoda, Jana Winderen, and the New England Phonographers Union will be considered for their engagement with threshold situations. The final part will present the results of a workshop on thresholds recording, to be held in April 2012 at Sonic, the sound studies module of Le Quai, École Supérieure d’Art de Mulhouse, France.
II. The microphone at the gate: Literal and metaphorical thresholds for the recording composer
A threshold is a passageway from one physical place or state of being to another. This rather broad definition suggests many possibilities for the recording composer. It can be taken quite literally, as a doorway; or as a reference to the response characteristics of our ears, or of a microphone. It can be the liminal state of mind between sleep and wakefulness. In any case, thresholds tend to present barriers which take at least a modest effort to cross: the doorway is smaller and more restricted than either of the two spaces it connects; sound pressure must exert some minimum level of work on the ear or the microphone membrane in order to be registered. And the sound recordist must engage a certain amount of effort to cross the threshold between passive observer and active engager: one must summon the courage to strap on the gear and raise the microphone.
In physical reality, thresholds are generally places where we do not dwell, but are instead transitional states we pass through on our way to ostensibly more stable, permanent places. We give them very little time and even less thought. Choosing to rest in thresholds, one can engage with Pauline Oliveros’ dictum, articulated in her 1980 piece Open Field: consider a moment from your daily life, aestheticize it, then find a way to record and share it.  Transforming these threshold recordings into electroacoustic pieces allows the composer, and audience, to inhabit these unstable regions.
III. Representative works
I investigate and compare several electroacoustic works in which the crossing of environmental, personal, or public thresholds drives the character of the piece. Divergent means and processes complicate issues of intention and reception. What is being captured in these thresholds? How does the nature of the threshold act as a processor on the sound itself, and on the participants’ intentions? Faced with the slippery qualities of the threshold, does the composer’s intent cross a mental threshold from registering place and time into a preoccupation with the processes themselves, what Tsunoda describes as “…similar to a hunter who became more interested in shooting the bow than the prey itself”? 
Tsunoda’s works dwell in commonplace threshold spaces; though some, such as underground drainage pipes, are inaccessible to humans. He is concerned with the border areas where different solids, liquids, and air meet, and their different vibrations collide and cross over. He has released several CDs of recordings made in transitional spaces, using contact microphones to record vibrations at the surfaces of solid matter, and small condenser microphones to record in pipes and tubes. Works such as the Extract From Field Recording Archive series use a minimum of edits and no external sound resources, though the sequencing of recordings tells a story of Tsunoda’s own perception of these spaces.
Tsunoda’s recent work involves the threshold between a sounding body – either a musical instrument or his own body – and its sound environment. In his collaboration with percussionist Seijiro Murayama, Snared 60 Cuts, Tsunoda places a microphone inside Murayama’s snare drum. The ambient sounds of a public park are recorded as they cross the threshold of the drum’s head and body. Murayama does not play the drum but for each recording alters it in some way: e.g. the head is cut or has a rock placed on it. Another current Tsunoda project involves attaching a stethoscope with an embedded microphone to his temple, recording his blood flow simultaneously with the sound of his chosen location.
Norwegian composer, sound artist, and ecologist Jana Winderen – a scientist by training who has since turned to art – uses hydrophones to record at various depths of the ocean, as well as in glaciers. She then combines them with open-air recordings and mixes them into live performances. These works exist on both sides of the threshold of the surface of the water.
Based in the northeastern United States, The New England Phonographers Union is a group of sound artists and composers who work exclusively with untreated field recordings to create improvised live performances. Their recent work engaged with an enormously complex threshold space: that of the Deer Island Sewage Treatment Plant in Winthrop, Massachusetts, USA. Sewage treatment plants are thresholds for great volumes of water. Members of the NEPU documented the sounds of this complex facility both inside and out; and then used exclusively those sounds, unprocessed, in an improvised performance at the plant itself.
IV. Workshop: “Sound Thresholds: Indoors and Outdoors”
Finally, I present a survey of results from a workshop I conducted in April 2012 at Sonic, the sound studies module of Le Quai, École Supérieure d’Art de Mulhouse. In this workshop, participants were asked to select threshold spaces in and around the school and the Mulhouse region: e.g. doorways and patios, the shelter of the edge of a wood, on or under a bridge. In small groups, they made audio recordings in those spaces; the groups then used those recordings, edited but unprocessed, to create original electroacoustic works, which they presented live at the end of the week in the original threshold spaces.
Our work together addressed several questions: What are the overlaps between our indoor and outdoor listening, and how does the choice of threshold spaces affect the meaning of those relations? What does it mean to say that the recordings will remain “unprocessed”, when the act of recording itself can be considered a process applied to sound? Is outdoor listening qualitatively different from indoor listening, and is threshold listening a hybrid experience? How do the qualitative meanings of a threshold come through when presenting the works in the thresholds themselves?
1Tsunoda, Toshiya. Excerpt from the liner notes to "o Respirar da Paisagem" Compact Disc (sirr.ecords 2003). Translated by Tsunoda and Jeremy Bernstein.
2 Oliveros, Pauline. Deep Listening: A Composer's Sound Practice. New York: iUniverse, Inc, 2005.
3 Tsunoda, Toshiya. “Field Recording and Experimental Music Scene.” Erstwords, July 2009. Translated by Yuko Hama. http://erstwords.blogspot.com/2009/07/field-recording-and-experimental-music.html
University of Missouri / Kansas Community College, USA
Interactive multimedia offers unique challenges to researchers and theorists. These works often defy standard-practice analysis methods: there is no common nomenclature or common practice of analysis or composition with which to approach a modern interactive work. A knowledge of technological factors influences analysis of the interactive environment and instrument design, and analyzing the design and environment leads to questioning the purpose of interactivity and the significance of live performance. Finally, these works often operate without a set score or are fully improvisational, so the same issues that plague researchers studying other improvisational practices, such as jazz, arise in interactive multimedia. In this paper, each issue will be examined through research into writers such as John Croft, Denis Smalley, and Simon Emmerson who tackle analysis issues in electronic music. The goal is to create a taxonomy to better explain electronic interactive multimedia, to set up a methodology for beginning an analysis of a work based around interactivity – focusing on works which contain no score and involve improvisation – and then to test the methodology by approaching a single work. The work chosen for this analysis is Christopher Burns’ “Sawtooth.” The research, the taxonomy of interactive multimedia, and the analysis of Burns’ work all support the use of a multi-faceted approach based upon finding the fixed elements and interactions of the work. This creates the context from which the piece can be successfully analyzed.
This paper focuses on electronic interactive multimedia. Other writers have tackled the problem of creating a taxonomy of interactive multimedia. John Croft’s “Theses on Liveness” offers a set of five paradigms of the relationship between performer, instrument, and electronics. Croft’s archetypes are found to be lacking in specificity regarding nuances between the different forms of interaction. His theory also relegates the electronics to a subservient role at all times, thus eliminating the possibility of artificial intelligence or random functions informing the performer. The paper lays out a more specific seven-layered taxonomy of electronic interactive multimedia.
Barry Truax and Croft both pose questions to be answered in analyzing interactivity. They focus on the amount of control given to the performer of the system, as well as the types of control and transference of action of which the system is capable. Smalley addresses similar issues in his idea of surrogacy, the relationship between the identity of a sound and its changes. Research into the type and depth of controls, and into the analogies between performance energy and reaction in the system, forms a backbone for finding fixed elements. Identification of the sounds (or visuals) available to the performer becomes the next major issue. Pieces are often multi-timbral, and learning how the performer can control each possible timbre and its parameters gives a researcher the tools necessary to begin dissecting the musical nature of the piece.
In a piece with a fixed score or more fixed media components, musical factors can readily be identified and analyzed through a myriad of means. These can include common-practice methods of dealing with pitch and harmony, set theory, pitch-space and contour relationships, rhythmic relationships, or the timbre-related methods prescribed by Denis Smalley or Trevor Wishart. The more events – whether in the acoustic instrument or in the electronic portion – that are improvisational, the more difficult it will be to find the common threads through which to enter the piece. Gunther Schuller dealt with the analysis of improvisation in his article “Sonny Rollins and the Challenge of Thematic Improvisation,” and cautioned against a fixed style of analysis. Schuller argues that the correct approach is to allow the style of the improvisation to inform the choice of analysis method. This differs from the standard analysts had held before Schuller: analyzing and grading solos based upon their adherence to the chords and their relation to pre-determined styles, such as bop or swing. One must also distinguish between performer and composer, using the improvisation to find which elements are salient to the construction of the work and which are ephemeral aberrations of a single performance.
This leads to the analysis of the communication between performer and instrument. Analysis of the interactive system can be done through direct interaction or through repeated listening using various methods described earlier. The goal is to understand all the ways in which the performer can control the program as well as the ways in which the program can lead the performer. This means unearthing the limitations in the system, methods of interaction, and fixed points within each piece. These limitations act in the same ways as boxed notation in aleatoric passages or chord symbols in Jazz. Analysis on this level can lead to understanding similarities in performances that exist without a score.
Christopher Burns’ “Sawtooth” is an environment with only four main directions: be attentive to quality of movement, animation, sound, and interplay; create a convincing form; expressively use the multipoint interface; and accept that performances will vary. Burns describes the system itself in detail. “Sawtooth” uses two programs, Processing and Pure Data, to generate four types of audio-visual material. Some parameters, such as colour, are controlled randomly. However, the programming of “Sawtooth” includes conditionals defining the parameters needed to create each type of material, as well as the ways in which the materials interact with each other. For instance, pitch is determined via spatial location. The interaction between the four basic pieces of audio-visual material is what creates the uniqueness of the piece; performing each instrument by itself would be working against the programming. Moreover, the programmed conditions in “Sawtooth” directly control the ways in which the piece is performed. Even though there is no score, and the piece is improvisatory in nature, the limitations created by the programming provide the structure within which the improvised material can exist.
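Burns’s actual implementation is in Processing and Pure Data and is not reproduced here. The following Python sketch is a hypothetical analogue of one kind of conditional just described – pitch determined by spatial location, colour randomized. Every name, range, and mapping in it is an assumption introduced for illustration, not Burns’s code.

```python
import random

# Hypothetical canvas size, not taken from "Sawtooth".
WIDTH, HEIGHT = 800, 600

def touch_to_parameters(x, y, scale=(48, 84)):
    """Map one touch point to synthesis parameters, in the spirit of the
    text: pitch follows spatial location, colour is left to chance.
    The specific mapping (vertical position -> MIDI pitch, horizontal
    position -> pan) is an assumption for illustration."""
    low, high = scale
    # Higher on screen (smaller y) gives a higher MIDI pitch.
    pitch = low + (1 - y / HEIGHT) * (high - low)
    pan = x / WIDTH            # left edge -> 0.0, right edge -> 1.0
    colour = tuple(random.randint(0, 255) for _ in range(3))
    return {"midi_pitch": round(pitch), "pan": pan, "colour": colour}

print(touch_to_parameters(400, 0)["midi_pitch"])  # → 84 (top of screen)
```

The point of the sketch is structural: the performer controls only the gesture (x, y), the program fixes how gesture becomes sound, and some dimensions (colour) remain outside the performer’s control entirely.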
The paper moves through an ephemeral analysis based upon a single viewing of a performance posted by Christopher Burns on YouTube. Burns begins slowly, working antiphonally between his hands. Over the course of the improvisation, he slowly introduces each basic element and slowly builds the texture until the climactic point. The piece ends as the over-saturation of material dissipates, Burns ending with the same gesture as the beginning.
The initial performance revealed several main features caused by the limitations built into the programming. First, all material must be generated through a physical gesture. As “Sawtooth” is programmed, physical gesture is translated into audio and video gestures. The movements may not always be analogous (moving upwards does not necessarily mean rising pitch), but there is a correlation. Second, interaction between the voices causes counterpoint. Third, there are no real drones: all gestures, pitches, and video events eventually release. The only way to continue a sound is to add another small gesture.
These facts lead to a simple explanation: while the piece is improvisational in character, each performance will have a Gesture Carried Structure. During Burns’s improvisation, he moves from a simple gesture to an overload of information. While Burns creates thick musical textures, the programming itself makes it difficult to create a texture-carried structure, that is, a structure created through small changes over a long duration. Instead, the programming forces the performer to think of improvisation through counterpoint, creating large textures analogous to large multi-voice fugues.
The analysis shows how the theory and methodology can lead to a salient analysis. Research into the programming revealed information regarding fixed elements and the style and type of interaction. This information gave the background needed to move quickly through the analysis, in the same way that a score provides information in standard notated pieces. The ephemeral analysis done after the research gave the impetus for more in-depth analysis of how limiting the performer creates a structuring element. This methodology – identifying types of interaction, identifying fixed or repeated ideas in the piece, and then analyzing the piece through an idea created by those fixed or repeated ideas – leads to an understanding of the structuring elements of “Sawtooth” and of the ways in which the piece, though improvisational, will have continuity between performances.
Fabio Cifariello Ciardi
Conservatorio di Musica di Perugia, Italy
Each round of technical advances (whether in artificial intelligence, computer arts, or electronic connectivity) promises to help people better understand and manage an ever-growing host of information resources. Our ability to collect, link, transform and transmit data through human-computer interactions has changed the way knowledge is shared and distributed throughout the world. Human-computer interactions, however, can also affect the way we extract meaning from this avalanche of data. Technical specifications, indexing conventions, and descriptors of semantic content are used to further the distribution of information more than to deepen our knowledge.
Acousmatic and electroacoustic composers are usually aware of this semantic ‘bleaching’ of our consumerist society, and because of that, the sonic resources they use often stem from independent discoveries.
Yet, they perpetuate a great body of received cultural assumptions and conventions that lead them to perceive a sound in a particular way. This tendency may be regarded as (co)responsible for a ‘colonial’ approach to the use of connoted sounds and real-world phenomena: any novelty is aggressively assimilated into a predefined aesthetic framework, leaving little or no space for a change or renewal of the framework itself. In the realm of experimental music, we can catch a glimpse of a ‘colonial’ approach in works that propose ‘sonic tourism’ as a way to subjugate distant locales or alien frames of reference. When this is the case, the cross-cultural endeavors and explorations of the composer are limited to the use of sonic material evaluated and used for its exoticism.
In order to cope with this issue, the history of experimental music has highlighted the need for a balance between the dominant ambitions of the artist and the urgency of being somehow ecologically respectful in sourcing and manipulating materials from the world. In the last forty years composers have tried to achieve this balance through a very broad set of creative practices: from soundscape composition to performance ecosystems and interactive environments.
A different ecological approach to musical creation may flow from the knowledge acquired in sonification research. Financial, seismic, astronomical or meteorological data are employed in music composition, even independently of any ecological consideration. When this is the case, special attention needs to be paid to the differences between sonification and 'data-inspired music'. Sonification is the transformation of data relations into perceived relations in an acoustic signal. The term refers to the different techniques and processes at work between the data, the user and the resulting sound. The primary purpose of sonification is scientific rather than artistic: data-dependent generation of sound is based on systematic, objective and reproducible transformations for the purpose of interpreting, understanding or communicating relations in the data in question.
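The definition above (a systematic, objective and reproducible transformation of data relations into an acoustic signal) can be made concrete with a minimal parameter-mapping sketch. This is an illustration only, not drawn from the paper or its works: the data series, frequency band and function names are all hypothetical. The point is that the mapping is deterministic, so the same data set always yields the same sound.

```python
# Minimal parameter-mapping sonification sketch (hypothetical example):
# each data value is mapped, reproducibly, onto a frequency in an audible band.

def map_to_frequency(value, data_min, data_max, f_low=220.0, f_high=880.0):
    """Linearly map a data value onto a frequency band (Hz)."""
    if data_max == data_min:  # degenerate data set: a single flat tone
        return f_low
    norm = (value - data_min) / (data_max - data_min)
    return f_low + norm * (f_high - f_low)

def sonify(series):
    """Turn a data series into a reproducible sequence of frequencies."""
    lo, hi = min(series), max(series)
    return [map_to_frequency(v, lo, hi) for v in series]

if __name__ == "__main__":
    # hypothetical data, e.g. five daily temperature readings
    data = [12.0, 15.5, 19.0, 14.2, 21.0]
    print(sonify(data))
```

Because the transformation is explicit and invertible in principle, a listener who knows the mapping rule can reason back from perceived pitch relations to relations in the data, which is exactly the scientific requirement the abstract describes.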
Given these requirements, the separation between sonification and the use of data for compositional purposes depends on methodological issues and operational goals. Sonification users need to be aware of how the data cause the sound to change in order to understand those data, while listeners do not need the compositional rules to be made explicit. From the operational point of view, researchers sonify a process to better understand that process, while composers translate data into sound for expressive and aesthetic purposes.
This clear-cut separation breaks down, however, if we shift our attention from the domain of self-referential musical aesthetics to the richness of real-world data. In some cases, physical and social time-oriented phenomena not only prompt artistic creation: they may unfold their own unique aesthetic value if investigated in terms of musical attributes. The approach here differs from other ecological models of musical creation in several ways. The sonification output is not used as a material during the compositional process; it coincides with the desired output. There are no 'borrowing' constraints, because there is no need to integrate the sonified events with the other events that constitute the sound-flow.
The composer does not need to deal with the problem of how to tackle the sonic material so as to modify it according to his or her own needs. On the contrary, his or her compositional skills may need to be shaped so as to 'elucidate' the data through the optimization of the sonification design. That means that the composer shows a willingness to forgo the independence of his or her artistic expression: what is important is to reveal the inner aesthetics of real-world phenomena, rather than to express his or her personal aesthetics.
Semantic identity follows directly from the data, not from the composer's intent. Consequently, qualities that are essential in sonification design, such as clarity, efficiency and reproducibility, become key factors for the creation of effective musical results. Even more importantly, clarity, efficiency and reproducibility must be recognized by the listener. That is, the composer should make listeners aware of the data source and the rules of sound generation, in order to encourage them to really experience the 'sound' of the data set. Within this context, the role of the performer is twofold. On one hand, a performer's musical gestures may be used to render the sonification results. On the other hand, the composer can turn the human presence on stage into a supporting agent that helps the audience assess the musical potential of the data through musical and/or verbal commentary.
These stringent constraints may support the ecological strength of this modus operandi. One of the risks embedded in the manipulation of reality is an irreparable deterioration of the original semantic identity of the domain in question. By focusing on sonification design, composers can move from the unrelenting transformation of connoted materials toward a respectful exploitation of real-world data, which can illuminate social and natural environments from heretofore unknown perspectives.
An ecological approach to composition based on sonification, however, is not free from limitations. First, the model may be viewed as an unacceptable restriction on the autonomy of composers, significantly limiting the reach of their expression. Second, the real-world phenomena worth considering for musical enquiry may be modest in number or availability. Third, although the integrity of the data is preserved, their translation can lead to a loss of information, since the mapping algorithm may fail to elicit the musical potential of the data. Finally, the information about the data set, included in the musical message as a critical component, may alter the expectations involved in the listening experience and can thus be regarded as a form of coercion of the listeners.
The final section of the paper addresses the concrete effectiveness and the pitfalls of the approach with reference to two fields of study: real-time financial data sonification, and computer-based analysis and transcription of speaking-voice rhythms and inflections.
The musical potential of both domains will be discussed with reference to audio-visual installations (e.g. "Nasdaq Voices", "ASX voices"), live interactions (e.g. "Piccoli Studi sul Potere" for solo instruments and synchronized video; "Appunti per Amanti Simultanei I" for trombone, intonarumori and electronics; "Ab" for ensemble) and software tools created by the author.
MINT/IUFM of Paris / Sorbonne University / MTI research Center of De Montfort University, France
Electroacoustic musical practices are developing so rapidly that it sometimes seems futile to try to delimit their field. Whether acousmatic music, soundscape, glitch, mixed music, interactive music, algorithmic music, audiovisual improvisation, electronica, sound installation or music created with hacked devices, all of these artistic experiences belong to a single field. This musical field, ultimately very recent and above all extremely mobile, obliges the researcher to think about analysis differently from instrumental music. Indeed, the absence of a score, the complexity of the sound material, the use of internal and external spaces, the close link between tools and musical result, the integration of place into the creative process, the now non-existent boundary between the sonic and the musical, and the mixing with other art forms are upending musicology. These upheavals call not only for new theoretical tools but also for new analytical tools.
The development of the EAnalysis software is part of the research project New multimedia tools for electroacoustic music analysis at the MTI research centre of De Montfort University (Leicester, Great Britain). The aims of the software are to experiment with new types of graphical representation and new methods of analysis through an intuitive interface and tools adapted to musical analysis. The final version will be available in autumn 2013, and a beta version has been available since April at http://eanalysis.pierrecouprie.fr. The software requires Macintosh OS 10.6 or later.
For several years, part of my research has focused on the use of graphical representation in musical analysis. At various conferences I have presented new modes of representation that go beyond the traditional time-frequency display. That type of representation is very useful for editing graphical objects or synchronizing analytical elements with an audio file; however, in a complex analysis, or when manipulating a large number of parameters, it quickly proves limited, even difficult for the reader to interpret. EAnalysis offers various visualization tools that allow researchers to choose the best strategy for their research objective. Whether they opt for an iconic or a symbolic representation, represent many parameters in a single display or distribute them across several synchronized displays, address their analysis to a scientific journal or to young children, they will find a set of tools adapted to their needs.
My experience using various music and graphics software for different analytical publications has led me gradually to imagine a set of tools indispensable to the musicologist. While software such as the Acousmographe or AudioSculpt in the sound domain, or Illustrator in the graphics domain, can produce high-quality representations, they are often not adapted to the researcher's work. One of the aims of EAnalysis is therefore to develop a less comprehensive piece of software whose tools are perfectly adapted to musicological use.
Finally, considering users and their knowledge of electroacoustic music, whether a musician wishing to work on the interpretation of a mixed work, a specialist in electroacoustic music analysis, a student discovering music theory, or a teacher wishing to work with pupils on listening to electroacoustic works, EAnalysis offers several levels of complexity and help files on music theory.
Based on these ideas, I have designed a new piece of software, some of whose functions are as follows:
1. projects can contain different types of views: time-frequency, animation (for example to represent spatial movements or analyze a soundwalk), sound map (revealing structural relations between units or building a paradigmatic table), image (to display a score or any type of image), graph (to represent data extracted from other software), etc.;
2. projects can contain one or more audio or video files, which simplifies comparative analysis and the analysis of multitrack works;
3. the software offers graphical tools for very quickly modifying an entire representation;
4. the software has four operating modes: normal mode for editing graphical and analytical objects, text mode for quickly noting ideas while listening, drawing mode, particularly suited to working with a graphics tablet or interactive whiteboard, and playback mode, very useful for public presentations;
5. drawn or analyzed objects fall into two categories: graphical events are simple graphical shapes such as texts, rectangles, ellipses, lines, etc., while analytical events are graphical events endowed with analytical parameters;
6. analytical parameters can influence how events are drawn in the different views, so a sound's graphical representation can be modified according to its position in space or any other parameter;
7. the final version will include an expert system to guide novice researchers in their discovery of the various methods of electroacoustic analysis;
8. users can create their own analysis grids and share them with other users;
9. projects can be exported in various formats (image, PDF, video), and a project can be distributed without its audio or video files if the user does not hold the rights;
10. it will be possible to import data from other software such as the Acousmographe, AudioSculpt or Sonic Visualiser.
All of these functions will be added progressively during the development phase.
This presentation will demonstrate the use of EAnalysis through three types of analytical project: acousmatic analysis, analysis of multiple sources (multitrack audio) and analysis with video.
Université Paris, France
Electroacoustic music is always a challenge for the musicologist. Complex issues are raised both by the technological surroundings of the work and by its apparent freedom with regard to the western musical tradition. One may also understand electroacoustic music as a non-symbolic representational system, which can explain the lack of structural languages for electroacoustic music. Schaeffer's Solfège de l'objet sonore was, even in its title, an attempt to provide composers and listeners alike with a comprehensive tool for describing and understanding electroacoustic music. On the opposite side, Stockhausen's efforts in the 1950s were to treat the studio as a complex and systematic composer's workshop, helping to define rigorous, precise methods and processes, creating not only sonic works but also complete, finite and self-contained musical systems (particularly in the case of Studie I and Studie II). These fundamentally different approaches were embraced not only by composers but also by musicologists and music analysts, and analytical methods were accordingly defined that took into account a number of these phenomena, which depend greatly on the context, be it musical, conceptual or methodological, in which works were composed. Those methods can therefore be grouped as a kind of retranscription of these two initial compositional traditions. Moreover, to a certain extent, those approaches can be thought of as respectively associated with divergent and convergent thinking, as described in 1967 by Guilford.
The most prominent tradition in electroacoustic music analysis is therefore centered on the listener (one would term it esthesic analysis) and relies heavily on understanding electroacoustic music through verbal description of physical movements and structures (such an approach is exemplified by Smalley's spectromorphology, a nod to Schaeffer's typomorphology). Methodologies derived from this tradition are based on the generation of graphical scores (which may or may not be based on spectrograms and spectral analyses), on the use of a possibly common (verbal) terminology to describe sound structures and evolutions, and on descriptive and abstract texts projecting a meaning and direction onto the electroacoustic work. This so-called "perceptual" analysis can also be understood as a means of emancipating the composer, the listener and the musicologist from the strong chains of western classical music and its rigid frameworks (modal, tonal, or any "artificial" system of discrete pitch organization), defining a context in which "colors" (timbres and sound movements) are more meaningful than "structures" (frequency and rhythmic spaces).
The second analytical tradition in electroacoustic music analysis is more directly concerned with physical sound analysis (e.g. using tools providing Fourier transforms, cepstrum representations or wavelet transforms, as found in Cogan's New Images of Musical Sound) and with deriving meaning from the thorough analysis of compositional processes (a poietic approach), possibly leading to explaining the works in terms of technology and ultimately to their reconstruction (this approach is notably exemplified in the body of works found in Computer Music Journal 31:3). Whereas the first approach relies strongly on verbal descriptors, this one routinely makes use of ontologies usually associated with computers and computing languages (be it programming-language examples, algorithmic descriptions or general problem-solving attitudes). Peripheral fields, such as auditory neuroscience and music information retrieval (MIR for short), may provide useful tools and methods for electroacoustic music analysis. Music information retrieval in particular aims at providing a number of techniques initially targeting statistical information in a large body of music. Moreover, for the purposes of MIR analysis, music is essentially conceived as pitched and time-measured; as such, analyzing electroacoustic music with tools developed in the MIR field can be difficult, since such music is not usually driven by predetermined grids of frequency-spaced pitches, time meters and tempo, but rather by timbres, frequency-space organization, or spatial grouping of musical structures. One particular aspect of MIR tools that is interesting for electroacoustic music analysis is the possibility of designing algorithms that automatically detect repetitive patterns in audio streams.
Given that such algorithms can operate independently of specific orders of magnitude (in the case of music, and notably electroacoustic music, independently of time scales), they can be adapted to detect different organizational motives and structures at different structural levels, without having to rely on calibrating and interpreting complex digital signal analysis techniques such as wavelet transforms to detect meaningful cross-level musical structures. Such a possibility is not negligible, as it could permit a novel approach to granular-based electroacoustic compositions, which are notoriously difficult to analyze. Another aspect of MIR is the definition of tools for sound segregation based on automated timbre detection: while these give good results for traditional instrumental ensembles, further adaptation will be needed before they provide interesting tools for electroacoustic music. Auditory neuroscience has proven successful in the study of tonality and of the related excitation mechanisms in the neuronal system, providing models of artificial neural networks that behave as dynamical oscillatory systems when subjected to tonal stimuli. Since neurons demonstrate reactions to musical tensions, those models could be adapted to different time scales to provide a conceptual framework on which to build an auditory model of electroacoustic listening that respects musicological and analytical methods and concepts.
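The repetition-detection idea mentioned above can be sketched with a toy self-similarity computation. This is a generic illustration, not any particular MIR library's API; the feature frames and threshold are invented for the example. Frames of audio features are compared pairwise, and repeated material shows up as high-similarity entries away from the main diagonal of the resulting matrix.

```python
import math

# Toy self-similarity sketch of MIR-style repetition detection
# (hypothetical frames and threshold, for illustration only).

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def self_similarity(frames):
    """Pairwise similarity matrix over a sequence of feature frames."""
    return [[cosine(u, v) for v in frames] for u in frames]

def repeated_pairs(frames, threshold=0.99):
    """Return (i, j) frame pairs (i < j) whose features nearly coincide."""
    S = self_similarity(frames)
    n = len(frames)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if S[i][j] >= threshold]

if __name__ == "__main__":
    # hypothetical feature frames (e.g. coarse spectra); frames 0 and 3 repeat
    frames = [[1.0, 0.1], [0.2, 1.0], [0.5, 0.5], [1.0, 0.1]]
    print(repeated_pairs(frames))
```

Because the computation is agnostic about what a "frame" represents, the same matrix can be built from frames covering milliseconds or minutes, which is the time-scale independence the paragraph appeals to.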
The different analytical perspectives briefly mentioned here will be organized so as to provide a common framework from which further developments can proceed, taking into account advances and findings in music information retrieval, neuroscience and computer science. A common concept underlying a) the different methods for electroacoustic music analysis, b) music information retrieval research, c) auditory neuroscience, and d) recent computer science paradigms is that of directionality. Meaning is provided through the organization of structures; in our case, meaning is provided by the temporal organization of sound events. This concept of temporal directionality will provide the building block needed to define another approach to electroacoustic music analysis, gathering insights both from the perceptual (esthesic) tradition and from the more technical (poietic) methods, and pushing them forward to gain a better understanding of, and grip on, the aesthetic stakes of electroacoustic music, without evacuating an intrinsic dimension of music: time itself.
Ricardo Dal Farra
Concordia University, Canada
In my search for the meaning, sense and significance of electroacoustic music I went on to explore, like many others, its elusive definition. After long research and an extended period of reflection I accepted the following statement as one of the best adapted to my own understanding of the field: I will be using the term electroacoustic music throughout these paragraphs to refer to "musical creations that involve electronically modified or generated sounds, which may or may not be accompanied by live voices or acoustic instruments, and that use a language close to the experimental and/or academic world". This was adapted from a definition by Otto Luening in The Odyssey of an American Composer (New York: Charles Scribner's Sons, 1980).
It is relevant to note that in this context experimental can be understood as "the act of conducting a controlled test or investigation; the testing of an idea; or a venture at something new or different", while academic refers to "an art that conforms to the standards of a particular school", considering a school as "a group of artists who share a common style that may come from a geographic, movement, period or other attribute".
Music is omnipresent in human society, but its language can no longer be regarded as transcendent or universal. Like other art forms, music is produced and consumed within complex economic, cultural, and political frameworks in different places and at different historical moments.
Publisher's comments on: Leyshon, A., Revill, G. and Matless, D. (eds) (1998). The Place of Music. United States: The Guilford Press. http://books.google.ca/books/about/The_place_of_music.html?id=q2Jt4r4pnf4C&redir_esc=y
Music is essential, a constituent of each culture. It identifies a group or society. It is also a means of power, colonization and domination, one that audio recording technologies have helped to develop. We all know the tremendous impact of those technologies in the world, involving pleasure, politics, power and money.
It can only revitalize discussion on the connections between political power, ideology, and the role of music in the current cybernetic phase of capitalism's twilight years. - Border/Lines
For Attali, music is not simply a reflection of culture, but a harbinger of change, an anticipatory abstraction of the shape of things to come. The book's title refers specifically to the reception of musics that sonically rival normative social orders. Noise is Attali's metaphor for a broad, historical vanguardism, for the radical soundscapes of the western continuum that express structurally the course of social development. - Ethnomusicology
Critics/comments on: Attali, J. (1977/1985). Noise: The Political Economy of Music. United States: University of Minnesota Press. http://www.upress.umn.edu/Books/A/attali_noise.html
Audio recording and the growth of the music industry into a worldwide business have been defining the taste of hundreds of millions of listeners, conditioned by the iterative broadcasting and distribution of recordings of a very limited set of musical expressions. In the recording industry, multiculturalism and diversity of styles and aesthetics are extremely restricted and conditioned by the laws of the market. The diversity of genres and styles in the available recorded music is more apparent than real.
The turntable and audio recording technologies, at first mostly associated with radio production, were not only the basis of a global revolution but also the trigger of major transformations in our approach to music creation. Electroacoustic music was born from the explosive social, political and economic changes that marked the turn of the nineteenth century into the twentieth, combined with the aesthetic renovation and strong artistic transformations of that period and the historical appearance of electromechanical means of fixing sounds in time.
People from the musical as well as the scientific and technological worlds met to conceive this music, in which knowledge, capabilities and skills from many different disciplines usually collaborate to develop interesting new paths for experimentation, research and creation. This creative and experimental hub is always in advance of what the music market consumes, and it does not receive support from the music industry.
Electroacoustic music is a field that joins creativity, technical expertise, new technological development and scientific research, pushing the limits concerning the definition of music, human perception, technological advancement and the interplay between art and science.
Electroacoustic music is a major revolution of the twentieth century. It represents a complete change in music, which brought new views on composition, musical thought and musical practice.
Electroacoustic music has been changing over the years, like people, like technology, like the arts, like the world. Tape music became music on fixed-media supports, and the early experiments of live electronics became highly sophisticated real-time performances in which sounds can be generated, processed and/or controlled using a laptop. Computers run complex algorithms that allow us to relate almost anything in the world (a DNA structure, planetary orbits, plant growth rates, traffic flow) to the organization of sounds at the micro or the macro level.
Electroacoustic music became part of larger performances and shows, in theatre, dance, contemporary operas, TV, radio, and more.
Electroacoustic music is looking for new paths, new ways of artistic expression as well as new interrelationships with other art forms, science fields and technology.
Currently, electroacoustic music is breaking new borders, looking to build bridges and pave new roads, and acting as a catalyst in the exploration and development of intersections between nature, art, science, technology and society. As an example, the first Balance-Unbalance (Equilibrio-Desequilibrio) conference, held in Buenos Aires in 2010, and the second, held at Concordia University, Montreal in 2011, are starting to produce positive consequences in this direction: a group of artist-researchers working in the field of electroacoustic music is now developing a project with a worldwide humanitarian organization to find common factors that could engender a deeper understanding of our global environmental crisis and create synergy through lasting art-science partnerships, focusing on helping to solve problems arising from climate change.
This started while going through the unusual experience of bringing together artists with scientists, economists, philosophers, sociologists, engineers, management experts and policy-makers to discuss the environmental problems we are facing and what could be the role of the arts and the artists in dealing with those challenges.
Two days of reflection, debate and the promotion of projects and actions regarding the environment and our human responsibility at this defining moment in history were enough to start finding interesting possibilities of collaboration. The Balance-Unbalance event of 2011, at first resisted by some academic circles with traditional views and mono-disciplinary thinking, succeeded in achieving its goals and was in the end very well received by the same circles that had tried to stop it. The conference included a number of transdisciplinary workshops as well as paper presentations, a multimedia exhibition and two nights of artistic events. Electroacoustic music played an important role during the conference, with a strong presence in the paper presentations and a full multichannel concert of pieces related to the conference's main theme.
Electroacoustic music has become the artistic center of a project to be developed in partnership with the Red Cross/Red Crescent Climate Center. The initiative will be introduced at the EMS 2012 conference as part of this presentation. A large-scale impact is expected, both on people seriously affected by the consequences of climate change and on those who are not.
While we keep looking for the meaning of electroacoustic music, its meaningfulness is reaching new levels and possibilities. Perhaps, with time, it will become better understood as an art form and appreciated for its capacity to range from a very abstract world to finding links with our pragmatic problems as human beings.
Jean-Louis Di Santo
SCRIME, Bordeaux, France
In the past, many composers, from C. Janequin to O. Messiaen, who drew inspiration from birds, by way of Chopin who, through his "Polonaise", wished to express an idea, to name only a few, have tried to extract music from its intrinsically abstract aspect, or from being solely turned toward the expression of feelings, in order to broaden its field. This approach was limited by the use of instruments and by a conception of music restricted to the management of pitches and rhythms in building the musical discourse, from the motif to the large form. The possibility of recording sounds and reusing them in musical compositions thus constitutes a true revolution in many respects. What will interest us particularly here is the use of so-called referential or anecdotal sounds, that is, sounds from our environment whose cause we can identify. We are here at the antipodes of reduced listening, in what we might call an expanded listening: these sounds carry a symbolic dimension that goes beyond their musical morphology (even if their morphology itself may symbolize something else) and can generate meaning. This meaning, as with the use of words, will then depend on the context in which the sound is placed, and on the powers of denotation and connotation it holds. Better still: such sounds make it possible to handle the symbolism attached to them in the way painters have used images to relate stories or express ideas. Anecdotal sounds are usually considered indices in Peircean semiotics and are taken up as such in Pierre Schaeffer's Traité des objets musicaux: the noise of a car, for example, signals the arrival of that vehicle even if we do not see it. Recorded and disconnected from its context, its "phonography", to use F. B. Mâche's expression, loses its nature as an index to become what F. Bayle has called an "im-son", that is, an image in semiotic terms. Expanded listening then takes into consideration these non-musical aspects of sounds, which nevertheless contribute to the music in the sense that music is no longer simply a sound architecture, a formal game folded in on itself, but a semiotic system of representation. These "im-sons", through their association in montage/mixing, will generate a language comparable to cinematic language. The classical figures of metaphor and metonymy then come fully into play, as do phenomena of distancing, close-up, apparition and so on: in short, everything that can be placed in the category of archetypes, following F. Bayle, D. Smalley, T. Wishart or F. B. Mâche. Any audible manipulation or transformation of these sounds then becomes a bearer of meaning. If, as B. Lortat-Jacob noted of music in general, "... music produces meaning (whose difficulty of apprehension cannot be put down to its thinness) and is itself generated by meaning", this remark becomes all the more acute once music escapes abstraction: the meaning attached to the music does not necessarily become thicker, but it certainly becomes easier to grasp. Between the literary symbol (where, as R. Barthes analyzed, we witness a slippage in which the signified becomes the signifier of another signified) and the Peircean symbol (a sign created to be a sign), the anecdotal sound loses its nature as a mere icon and produces meaning. Once the taboo posed by P. Schaeffer is broken (the use of anecdotal sounds for their referential value), a universe of possibilities opens up that many composers have explored.
Drawing principally on the semiotic theories of Ch. Sanders Peirce, and referring to musical examples, we will analyze how these referential sounds function and will be led to distinguish three broad categories of composition: sound poetry, narrative pieces and musical arguments.
BARTHES, Roland (1957), Mythologies, Paris, Seuil.
BAYLE, François (1993), Musique acousmatique, propositions... positions, Paris, Buchet/Chastel.
GRABÓCZ, Márta (2008), « Quelques formes archétypiques - ou UST - dans les écrits et les oeuvres de compositeurs contemporains », in Vers une sémiotique générale du temps dans les arts, Actes du colloque « Les Unités Sémiotiques Temporelles (UST), nouvel outil d'analyse musicale : théories et applications », Paris, Delatour.
GREIMAS, Algirdas Julien (1993), Sémiotique : dictionnaire raisonné de la théorie du langage, Paris, Hachette Supérieur.
LORTAT-JACOB, Bernard (1991), « Petit traité d'impertinence ou critique de la distinction », Analyse musicale n°23.
MÂCHE, François-Bernard (1983), Musique, Mythe, Nature, ou les dauphins d'Arion, Paris, Klincksieck.
PEIRCE, Charles Sanders (1978), Écrits sur le signe, Paris, Seuil.
SCHAEFFER, Pierre (1966), Traité des objets musicaux, Paris, Seuil.
SMALLEY, Denis, « La spectromorphologie, une explication des formes du son », http://www.ars-sonora.org/html/numeros/numero08/08d.htm
WISHART, Trevor (1996), On Sonic Art, New York, Routledge.
Frédérick Duhautpas, Renaud Meric, Makis Solomos
University Paris, France
Traditionally, the idea that music conveys extramusical significance has often gone hand in hand with a language-like conception of music. Since the verbal paradigm is so often the referential model for signification, it has frequently been tempting to turn to it to address music. Yet it is necessary to distance oneself from such a model when addressing this issue, especially in the context of electroacoustic music. Modernist approaches have frequently attempted to renew and rethink expression, often outside the communication and language models most commonly used to explain certain aspects of tonal music. Composers, including Iannis Xenakis, have indeed escaped the language metaphor in favour of a conception of music as an "energetic" and "spatial" phenomenon. But such a conception does not come without a redefinition of the way the issues of signification in music are understood. It is the aim of this presentation to address this broad problematic through the emblematic example of Xenakis' electroacoustic work.
Electroacoustic music has set aside the universe of pitch, so propitious to the language metaphor, and focused instead on the world of sound, foregrounding movement and becoming and, therefore, making the issue of energy one of its primary concerns. In many cases, music is regarded as an energetic phenomenon: if music touches listeners, it is not because they "understand" it, but because the energetic transformations it carries out through its movements resonate with them. Thus, Xenakis uses the "fluid" metaphor: "To me, sound is a sort of fluid spanning through time—this is what gave me the idea of transfer from a domain to another."
These transfers from physics to sound all travel in the same direction: whether thought of as a fluid or as a gas, sound is understood as a movement, a fluctuating energy. "Music is an ensemble of energetic transformations," as Xenakis notes in his drafts for Concret PH (Archives Xenakis, Bibliothèque Nationale de France, Carnet 23). He wrote this at the time when he imagined, and put into practice, the paradigm that would later be called "granular," in which a given sound is constituted from a large quantity of brief impulses (each lasting below the threshold of perception): that is to say, sonorous grains, or "quanta."
Like a significant portion of modernist music, Xenakis' work distances itself from the linear, discursive model of language. From the initial conception of a work, listening and space assume a paramount place: composers are forced to take these specificities into account and to put themselves in the place of future listeners in the middle of a space that they cannot completely control and with which they are unfamiliar. This characteristic is crucial to the issue of meaning. Out of the traditional quasi-discursive, face-to-face scenic relationship, we are taken into a more complex relationship in which the composer occupies the place of the "first listener." Xenakis seems to have grasped early on this particular relationship introduced by this type of music. It is difficult, indeed, to conceive of most of this composer's electroacoustic works as forms of discourse: they were often destined to be diffused within architectural structures without a stage and without traditional acoustic landmarks, structures that were, moreover, conceived by Xenakis himself. The musical works assume the shape of gigantic sonorous forms that envelop the listener. In other words, the composer seems to immerse listening in an extremely complex dynamic space, dense and continuously changing. In such dynamic structures designed for listening, the ear is incapable of perceiving everything, or even of distinguishing a general guiding line, which raises many questions with respect to signification.
As explained, the listener is transported into a musical universe where events do not occur in the linear manner of the chain of spoken language, but are conceived in terms of spatialization, density, and energy. Such events can be subject to extramusical associations made by the listener, but this is no longer a situation comparable to the reception and decoding of a verbal utterance. It is a multiple and complex phenomenon that far exceeds any possibility of discriminating each segmentation, each discrete unit (cf. Swain, 1997), each constitutive element, as can be done when hearing a linguistic utterance. If it is still possible to speak of "signification" or "meaning" in this context, it is in terms of immersion, in a manner similar to the perception of the countless sonorous events we are flooded with in our daily environments. Xenakis' music consequently evades the traditional expressive or descriptivist categories usually employed to address tonal music. It is no longer a question of telling a story, painting a landscape, or exposing emotional states before an audience that would remain outside the story. Instead, it unfolds a world in which the listeners' perceptions and interpretations are actively solicited.
In terms of the listeners' semantic judgments (cf. Francès, 1958), one of the important things to consider with this type of music is that the electroacoustic material, for the most part, evades the semiotic codifications which, in tonal music, tend to orientate the meaning a musical work or passage could convey (cf. Meyer, 1957, pp. 256-272; Cooke, 1959; Francès, 1958, pp. 347-379; Nattiez, 1987, pp. 137-155; Sloboda, 1988, pp. 89-96; Chion, 1993, pp. 47-66). Music is no longer set inside a quasi-lexical logic based upon culturally codified figure-types. In this respect, such music benefits from a large expressive freedom that plays on parameters that directly affect listeners on a sensitive and physical level, without going through the mediation of common codes (Solomos, 2003, p. 73; cf. Duhautpas, 2010): many sensations that they are free to interpret in diverse ways. Thus, Xenakis' music appears like a mobile or a moving, enigmatic sculpture, inviting listeners into a sensitive and physical dialogue with it. It is important to stress that Xenakis aims to arouse a listening that can be qualified as "incarnate," that is, a listening in which the listener's body constitutes the immediate, inevitable reality. The purpose is not only to listen and see, but also to touch and feel. Here, the question of sense (meaning) meets that of the senses (perception).
The case of La Légende d’Eer
As an illustration, we propose to examine the case of La Légende d’Eer, whose music was composed for the Diatope multimedia show. Composed on seven tracks, it is also an independent musical piece, whose title was borrowed from the homonymous myth told by Plato at the end of The Republic. Unlike the other music for the Polytopes, the piece assumes a dramatic, arch-like form: first a slow, progressive apparition of the music, then several waves culminating in a sort of flood and, at last, a progressive disappearance. In this presentation, we will summarize the results of a musical analysis of the piece (cf. Solomos, 2004) in order to focus on the elements that convey meaning for the listener. The audition of this piece is always highly imaged, with listeners progressively finding themselves merged into a moving universe where myriads and constellations of sounds assail them on all sides. Guided by the title and the program, which function as an "anchorage" (Barthes, 1964, p. 44; Nattiez, 2004, p. 279), the piece can be experienced as a cosmogony or as a journey into death. But nothing prevents listeners from interpreting, picturing, or feeling these visceral sensations in other directions. For the composer, in fact:
“Every musical piece is like a highly complex rock with ridges and designs engraved within and without, that can be interpreted in a thousand ways without a single one being the best or the most true. By virtue of this multiple exegesis, music sustains all sorts of fantastic imaginings, like a crystal catalyst.” (Xenakis, 1978, p. 8).
Furthermore, in the context of the Diatope, the relationships between light and space are vital. The light show, composed of laser beams and flashes, generates all manner of geometrical configurations, including "spinning spirals invading space, then disappearing into complete obscurity," as Xenakis describes them. There is no contradiction in the Diatope between abstraction and figuration, but rather a permanent back-and-forth movement. On the visual level, the galaxy-like figures could be perceived as intended figurations of galaxies. But nothing prevents the audience from considering them a pure and formal ensemble of light points. In the first case, the poetry of such figures is the primary focus; in the second, interest is directed toward their geometrical qualities.
Archives Xenakis, Bibliothèque Nationale de France.
BARTHES, Roland (1964), « Rhétorique de l’image », Communications n°4, Paris, Seuil, pp. 40-51.
CHION, Michel (1993), Le poème symphonique et la musique à programme, Paris, Fayard.
COOKE, Deryck (1959), The Language of Music, New York, Oxford University Press, 2001.
DAVIES, Stephen (1994), Musical Meaning and Expression, Ithaca, Cornell University Press.
DELEUZE, Gilles, GUATTARI, Félix (1980), Mille plateaux, Paris, Les Éditions de Minuit.
DUHAUTPAS, Frédérick (2010), « Expressivité, modernité et musicologie critique. La modernité au-delà des discours formalistes », in SOLOMOS, Makis, GRABÓCZ, Márta (ed.), Filigrane n°11 : « New Musicology. Perspectives critiques », Paris, Delatour, pp. 37-65.
ESCLAPEZ, Christine (2009), « La musique comme langage ? », in VECCHIONE, Bernard, HAUER, Christian (ed.), Le sens langagier du musical. Sémiosis et hermenéia, Actes du 1er Symposium d’Aix-en-Provence, Paris, L’Harmattan, pp. 147-167.
FRANCÈS, Robert (1958), La perception de la musique, Paris, J. Vrin, 1972.
GRISEY, Gérard (1996), « Entretien avec David Bündler », in Écrits ou l’invention de la musique spectrale, édition établie par Guy Lelong avec la collaboration d’Anne-Marie Réby, Paris, Musica Falsa, 2008.
HANSLICK, Eduard (1854), Du beau dans la musique, traduction Charles Bannelier, Paris, Christian Bourgois, 1986.
IMBERTY, Michel (1979), Entendre la musique. Sémantique psychologique de la musique, vol. 1, Paris, Dunod.
MÂCHE, François-Bernard (1998), Entre l’observatoire et l’atelier, vol. 1, Paris, Kimé.
MERIC, Renaud (2005), « Concret PH, un espace mouvant », in SÈDES, Anne, VAGGIONE, Horacio (ed.), Actes des Journées d’Informatique Musicale, Saint-Denis, AFIM/CICM/Université de Paris VIII/MSH Paris Nord, pp. 147-157.
MEYER, Leonard (1957), Emotion and Meaning in Music, Chicago, The University of Chicago Press.
NATTIEZ, Jean-Jacques (1987), Musicologie générale et sémiologie, Paris, Christian Bourgois.
NATTIEZ, Jean-Jacques (2004), « La signification comme paramètre musical », in NATTIEZ, Jean-Jacques (ed.), Musiques : une encyclopédie pour le XXIe siècle, vol. 2 : Les savoirs musicaux, Paris, Actes Sud, pp. 258-289.
ROBINDORÉ, Brigitte (1996), « Eskhaté Ereuna: Extending the Limits of Musical Thought - Comments On and By Iannis Xenakis », Computer Music Journal vol. 20 n°4, pp. 13-16.
SLOBODA, John (1988), L’esprit musicien, la psychologie cognitive de la musique, Liège, Mardaga.
SOLOMOS, Makis (2003), « De l’apollinien et du dionysiaque dans les écrits de Xenakis », in SOLOMOS, Makis, SOULEZ, Antonia, VAGGIONE, Horacio (ed.), Formel/Informel : musique-philosophie, Paris, L’Harmattan, pp. 49-90.
SOLOMOS, Makis (2004), « Le Diatope et La légende d’Eer de Iannis Xenakis », in BOSSIS, Bruno, VEITL, Anne, BATTIER, Marc (ed.), Musique, instruments, machines. Autour des musiques électroacoustiques, Paris, Université Paris 4-MINT, pp. 95-130 (www.iannis-xenakis.org/enligne.html, 2004).
SWAIN, Joseph (1997), Musical Languages, New York, Norton & Company.
TARASTI, Eero (2006), La musique et les signes, Paris, L’Harmattan (French version of Signs of Music. A Guide to Musical Semiotics).
VARGA, Bálint A. (1996), Conversations with Iannis Xenakis, London, Faber and Faber.
XENAKIS, Iannis (1958), « Les trois paraboles », in Iannis Xenakis, Musique. Architecture, Tournai, Casterman, 1971.
XENAKIS, Iannis (1978), « La Légende d'Er (première version). Geste de lumière et de son du Diatope au Centre Georges Pompidou », in Le Diatope : geste de lumière et de son, Paris, Centre Georges Pompidou, s.d. (ca 1978), pp. 8-12 ; reprinted in XENAKIS, Iannis (2006), Musique de l’architecture, textes, réalisations et projets architecturaux choisis, présentés et commentés par Sharon Kanach, Marseille, Éditions Parenthèses.
University of Music and Performing Arts, Graz, Austria
How meaningful is it to consider the meaning of music while composing it? I will try to address this question from the perspective of a composer reflecting upon his own practice. This is motivated by the conviction that such a question can only be answered with respect to a particular practice and not in general terms. Nevertheless, I hope that my findings will also be of interest for other composers, especially if they share certain aspects of my practice.
The notion of composition underlying my work is an extended one, embracing the design of interfaces and instruments as well as the conception of installations and intermedial performance settings. Most of my compositional work finds its expression in computer models of systems I invent. This is why I understand composition as a form of modeling, an approach enabling me to further develop such systems by improvising with them while experiencing their sonic (and other) output. In this context, a model may encompass everything from the sensors used by performers or the audience to dynamic and generative subsystems controlling sound synthesis and spatialisation as well as the loudspeakers and other elements of the performance space, such as projectors or lights. In my work, ideally, all elements and aspects of a piece are integrated in one system such as to be able to compose their relationships and experiment with them.
My practice takes this form because I am trying to create a setting allowing me to transcend the horizon of my imagination and to liberate myself from my intentions. Being aware of the utopian character of these aims while sticking to them anyway constitutes an important driving force behind my work. The setting I am seeking should favour serendipity and contingency, allowing for the unforeseeable to happen and, what’s more, to get noticed as such. Rheinberger’s notion of the experimental system, which he developed to describe the practice of laboratory science, captures much of the kind of setting I am trying to establish and maintain in my practice. According to Hoagland, an experimental system can be understood as a “generator of surprises” or an “itinerary into the unknown”. Constituent parts of experimental systems are what Rheinberger calls “epistemic things”, which, paradoxically, represent what we do not know yet. In my practice I understand the work in the process of being composed as an epistemic thing. This is why, while composing, I cannot know what a work may eventually mean to me or to others, since I do not yet know what it is (and even less what it is about).
I base my practice on the assumption that the meaning of my work is what the audience constructs in the process of experiencing the situation evoked through my work. There is a very indirect relationship between how and what I compose and the meaning that may be ascribed to it. Therefore I see little interest in trying to compose music in such a way that its experience affords an intended meaning. In view of what has been said so far, and since an audience will always ascribe meaning to what they experience, my initial question can be reformulated thus: What is the relationship between one's practice and the meaning the audience attributes to one's work?
Far from being able to answer this question, I will at least try to shed some light on it by drawing on the thinking of Herbert Brün, who wrote in 1970: “The composers find pleasure in that they first invent a wish or a question and then compose for themselves a fulfillment or an answer. The listeners to whom the composition is played can find their pleasure if they now find or invent wishes and questions for which this music means fulfillment and answer. The listener's pleasure depends on just the same talent for imagination and for having ideas as the composer's pleasure, and the title genius, or some less abused equivalent suitable to 20th century taste, is actually waiting to be granted to deserving listeners of music.” This brilliant description of the relationship between composers and listeners stresses the indirectness of their communication, which takes the following form:
wish C => fulfillment <= wish L
It is the listener’s task to invent a wish L sharing the fulfillment with a wish C invented by the composer. It is essential here that wish L and wish C will most likely be different, implying that the listener does not have a direct access to wish C – a situation reminiscent of Plato’s Allegory of the Cave.
Two years later Brün introduced the notion of anticommunication, “a human relation between persons and things which emerges and is maintained through messages requiring and permitting not yet available encoding and decoding systems or mechanisms”, as a counterpart to communication, which is based on “messages required and permitted by already available encoding and decoding systems or mechanisms.” [my emphases] It is important to note that anticommunication “is an attempt at saying something, not a refusal of saying it.” Later in his text Brün describes the situation where anticommunication typically occurs in the experience of art: “At the moment in which something new is conceived, introduced, and noticed, a temporary gap opens, an interregnum, which disappears only when that new something becomes accepted, understood, used, when it begins to grow old. This time of transition is a time in which messages are sent that no one receives, and in which messages are received that no one sent.” One of the goals of the composer is to maintain this interregnum for as long as possible, in order to keep up the productive energy of anticommunication. Brün’s notion of anticommunication bears an interesting resonance with Rheinberger’s epistemic thing, an aspect I shall detail further in my contribution at the conference.
1 Hans-Jörg Rheinberger, Toward a History of Epistemic Things: Synthesizing Proteins in the Test Tube, Stanford University Press, Stanford, CA, 1998.
2 quoted after ibid., p. 31
3 Herbert Brün, The Listener’s Interpretation of Music - An Experience between Cause and Effect, in: Herbert Brün, When Music Resists Meaning, ed. Arun Chandra, Wesleyan University Press, Middletown, CT, 2004, p. 50
4 For Anticommunication, ibid. p. 60
Simon Fraser University, Canada
We should envy scientists, who have a clear method of determining whether their ideas make sense to others: they can come up with an idea, design a methodology to test the idea, run the experiment, describe their findings, then submit their results to their peers. An objective verification of the idea – rejection or publication – should then be produced. We electroacoustic composers, on the other hand, come up with an idea, use our tools and techniques to produce a new work – be it a fixed media work, live performance system, or something in between – then search for opportunities for public presentation. This might amount to telling the curator of a local electroacoustic music show – be it academic or “arthouse” – that you have a new 10 minute piece that is ready for performance. If your reputation suggests that your work will not cause the listening audience to run screaming from the hall, you may very well get your performance (because new works are easier to program than those that have already been performed). Following the concert, you might get the congratulations of a few friends on your new work, and, perhaps, have conferred upon you the ultimate compliment: “good piece”. However, you may also be left wondering, is it a good piece? Who makes the final judgement?
Validation within the arts is a contentious issue. Aside from the somewhat facetious situation described above, we do rely upon the judgement of our peers, as do our scientific brethren. We submit our works to festivals and competitions; however, when our work is rejected, we may assume such rejection is due to subjective reasons, rather than objective problems within our work: the jury was not favourable to our aesthetics and/or style. I personally learned long ago that to receive recognition in creative competitions (be it young composer competitions or, more recently, grant competitions) one has to be very careful with what one submits – style rarely triumphs over substance. Having now been on the other side of such competitions, I realize that judges will (ideally) attempt to put aside personal aesthetics, and concentrate instead upon craft. As such, those works that emerge as victorious are adept in their craft, but not necessarily original in their ideas.
This raises the question: can one rate creativity? Can we state that one work, or one artist, is more creative than another? Researchers in psychology, cognitive science, artificial intelligence, education, and, naturally, the fine arts have recently begun to address the nature of creativity, what it is, and how to validate it. For example, those students of EA who hear Francis Dhomont’s Novars for the first time, and immediately try to replicate its gestures in their own work, are still considered creative, albeit only at a personal level (Margaret Boden’s p-creative) versus creating something that is historically original (Boden’s h-creative)1. While not discounting everyday creativity (what Anna Craft refers to as “little-c” creative), traditional artistic creation is considered “Big-C” creative2. Kaufman and Beghetto further separate the latter into Pro-C and Big-C: the former being the output of professionally creative individuals who are not eminent, while the latter is reserved for those few masters who alter style and history3. While we all, no doubt, hope to end up in the latter class, history suggests that only a few of us will be so judged.
Another fascinating point suggested by Boden is defining how the artist conceives of and works within their conceptual space. Working within a clearly defined space (for example, acousmatic music), a creative composer may produce h-creative works that still conform to the accepted notions of that conceptual space. If the space is narrow (as is that of acousmatic music), the success of a work will tend to be judged more by its craft than how it redefines the space (since, paradoxically, if an artist were to redefine this particular space, the work would no longer be within it). Boden separates such exploratory creativity – which remains within a well-understood space – from transformational creativity, which is more radical, and has the potential to transcend and redefine the space itself4.
Formalizing creativity has, perhaps naturally, given rise to computational creativity, which is an attempt to produce creative behaviour within a software program. In fact, whole conferences are dedicated to this exploration – for example, the International Conference on Computational Creativity5. Of course, the notion of autonomous composition systems is nothing new, and can be traced back within computer music to Hiller’s Illiac Suite, and within acoustic music to Mozart’s Musikalisches Würfelspiel, or even the process of constructing Medieval motets through the use of isorhythms. However, as the field of computational creativity is interdisciplinary between AI and cognitive psychology (on the science side) and philosophy and arts practice (on the arts side), the notion of validation has been unavoidably raised: how does one rate the output of a computationally creative system?
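The Musikalisches Würfelspiel mentioned above is perhaps the simplest instance of such an autonomous composition process, and can be sketched in a few lines of Python. The fragment labels below are hypothetical stand-ins for the pre-composed bars of a real edition (which maps each of the eleven possible two-dice sums to a notated bar); the point is only the selection mechanism:

```python
import random

# Hypothetical fragment table: one row per bar of a 16-bar minuet,
# one entry per possible two-dice sum (2..12). Real editions of the
# dice game supply notated music here; labels stand in for it.
BAR_OPTIONS = [[f"bar{i}_sum{s}" for s in range(2, 13)] for i in range(16)]

def roll_two_dice(rng: random.Random) -> int:
    """Sum of two six-sided dice, as in the original game (2..12)."""
    return rng.randint(1, 6) + rng.randint(1, 6)

def generate_minuet(seed=None):
    """Assemble a 16-bar piece by choosing one fragment per bar
    according to the dice: one of 11**16 possible sequences."""
    rng = random.Random(seed)
    return [BAR_OPTIONS[i][roll_two_dice(rng) - 2] for i in range(16)]

piece = generate_minuet(seed=1)
```

Trivial as it is, the sketch makes the validation problem concrete: every output is well-formed by construction, so judging whether any given roll is a "good piece" falls entirely to the listener.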
The author is co-director of the Metacreation research group, a multidisciplinary endeavour bringing together composers, artists, cognitive psychologists, computer scientists, and robotics engineers to build creative music and performance systems. This presentation will not discuss the systems themselves, but will focus upon the public presentation of their output, and the validation studies that we have undertaken. For example, a recent concert presented seven works generated by software (metacreations) and one work composed by a human; however, the audience was not told which piece was human-composed. Results from the survey suggested that the audience could not separate the metacreations from the human-composed work; however, the study raised more questions than it originally posed. How does one account for style and the audience’s aesthetic? To what extent does performance (or the lack thereof) influence audience perception?
The issues raised in this presentation are not limited to computational creativity, but to the creation of art itself, the relation of our work to the public, and considering how we can judge artistic success or failure.
1 Boden, M. 2003. The Creative Mind: Myths and Mechanisms (second edition). Routledge.
2 Craft, A. 2001. “‘Little C’ creativity”. In Craft, A., Jeffrey, B. and Leibling, M. Creativity in education. Continuum International
3 Kaufman, J., Beghetto, R. 2009. “Beyond Big and Little: The Four C Model of Creativity”. Review of General Psychology 13 (1): 1–12.
4 Boden, M. 1999. “Computational Models of Creativity”. In Sternberg, R. (ed.), Handbook of Creativity. Cambridge University Press.
Simon Emmerson and Leigh Landy
De Montfort University, Leicester, UK
This proposal is based on discoveries and challenges encountered in the early phases of our current research project, New Multimedia Tools for Electroacoustic Music Analysis. We shall raise some of the key issues in this paper and discuss how we are methodically investigating potential solutions.
This project proposes to establish an analysis research programme for a range of genres of electroacoustic music. This will draw together existing methods, engage the latest interactive and hypermedia tools, and apply them to a range of works to compare their strengths and weaknesses. This aims to illuminate both the procedures and the works. We will be better able to judge what analytical approach (or approaches) would be best suited to gain an insight and understanding of a particular genre of the music. A number of new extensions, developments and refinements will result in a newly developed software application (EAnalysis derived in part from i-Analyse (Couprie 2009)) which can apply a range of possible approaches. A public beta version of this software is already available at the time of writing. In addition research student Michael Gatt has created the Online Repository for Electroacoustic Music Analysis (OREMA) [http://www.orema.dmu.ac.uk/] – a forum for sharing work and discussing the many problems and potential solutions.
When we first proposed this project we had to ask some basic questions: What do we want from the analysis of electroacoustic music and how might we get it? This is no monolithic enquiry and depends on:
- which tools/approaches
- for which works/genres
- for which users
- with what intentions?
None of these four subsidiary questions comes first – they are all mutually interactive. But we needed to start somewhere, so we chose a cluster of headings covering a substantial number of the ‘genres and categories’ listed in EARS. In a preliminary discussion we divided this field into what we described as ‘arbitrary cliché genres’ – we suggested that these might well be critiqued as we progressed [Table 1]. Strictly these may be seen as ‘qualities’ rather than genres – it is immediately evident that they have hybridised continuously, such that (for example) an installation may commonly include algorithmic generation, be interactive, and use soundscape and acousmatic materials. That said, the individual elements named still need suitable tools for their analytical examination – tools peculiar to their different modes of perception and different needs for the presentation of knowledge and (hopefully) explanation. Thus (for example) representation in some kind of visual symbol will vary depending on whether a ‘real-world’ reference is present (and perceived) or whether another work is quoted (or ‘plundered’).
A preliminary look at Table 1 seems to suggest that some of the words and phrases used to describe the music refer to materials, others to methods and means of organisation – yet composers regularly refer to works as ‘algorithmic’. Where does this method lie in the analytical discourse, solely within the poiesis of a work? If we maintain (as is the aim of the project) a listener’s point of view then we must ask if knowledge of a generative algorithm is somehow part of the listening process – and how this might influence the analysis itself. In fact the distinction of material and method cannot be maintained – witness the often intense discussion surrounding the term ‘acousmatic’ which has come to mean to some people a term referring to both material and method.
Lying behind the ‘materials/organisation’ divide are larger social questions and attitudes which profoundly influence this discourse. While some genres cling to a (spurious) claim to universal values outside of such concerns we have previously written - “Where appropriate for the genre these [analytical concerns] may include social dimensions of performance and consumption, listening and dissemination” - that is, socially situated sites and characteristics of production, perception and consumption. Glitch, hacking and failure aesthetics might of course be analysed from their sound alone but would surely lose a substantial part of their meaning thereby. This relates exactly to the comment on algorithmic generation above – to what extent must knowledge of a generation process (or the composer’s or performer’s intention) form a necessary part of a reception and perception process?
sampling and plunderphonics;
glitch, hacking, failure aesthetics;
post-instrumental (hardware hacking, found and constructed instruments);
sound art, installation and the site-specific;
interactive (including audio in computer games);
live traditional instrumental (mixed and live electronics);
Table 1: some genres and categories of electroacoustic music
The analytical literature to date has been focused on the generation of vocabularies to describe certain phenomena in a quasi-objective way (Schaeffer, Smalley, Thoresen). Rather than repeat their terminology here we summarise it in the following more generalised areas of discourse (that is issues or elements to be addressed within and around the music as perceived) [Table 2].
Types of sound sources/sound synthesis
Heightened <–> Reduced listening (recognisability of sounds and how this affects the listening experience)
Contextual elements – where source identification is possible, the relationship between foreground and more contextual sounds
Hence the relevant form of representation
Sound quality: “This is used as an umbrella term [...] referring to a single or composite sound’s aural characteristics. Instead of discussing source and cause, in this case one describes the sound’s colour or timbre, aspects related to its texture and any other description related to its sonic as opposed to contextual value.” (L. Landy ‘Making Music with Sounds’, Routledge 2012).
Order and organisation – some important parameters might include:
Duration information (events)
Other time-based aspects, e.g. elements at gestural level, sequence level, structural (formal) level, narrative and/or discourse issues.
Order and disorder
Simultaneities (parallel with traditional harmony)
Horizontal relationships, e.g., layering (parallel with traditional counterpoint)
Treatment of space and movement (spatiomorphology)
Table 2: some considerations for the analytical project
Other writers have begun tentatively to embrace the more subjective and responsive issues coming from the social dimensions (mentioned above with respect, for example, to glitch) on the one hand and the personal dimension (‘emotion’ and ‘meaning’) on the other. One function of our research is to establish how these relate. How is the analysis of a noise work or a hacked-instrument work (for example) to capture these additional dimensions? With video? Commentary? How are the emotions to be captured? Perhaps EEG (or ideally fMRI?) traces should run in parallel with the FFT and evocative transcriptions of works. But is this really a record of a subjective experience? How do we capture moments of ‘thrill’ and ‘boredom’? Or perhaps the traditional representation tools simply cannot cover this.
There are clearly many elements specific to a given genre (even to a given piece). Indeed, any practical definition of ‘genre’ presupposes some core common traits which contribute to an apparently separate identity. But any analytical procedure (especially one without much history) must balance the gravitational pull of this idea against a more networked, spider-like, tagged, relativistic world of qualities which configure and reconfigure depending on the questions posed at the outset.
This paper examines these issues and presents examples of work to date.
The three-year project ‘New Multimedia Tools for Electroacoustic Music Analysis’ is funded by the Arts and Humanities Research Council (UK).
León David Enríquez Macias
Centro de las Artes de San Luis Potosí, México
The purpose of my work has been to develop a theoretical model capable of aiding and guiding the observation and study of the processes of signification and communication in experiences in which music and sound-based art play a meaningful role. The model is based on pragmatist, musicological and semiotic approaches to meaning and is intended to widen the range of observation and action within creative and compositional initiatives. The model aims to recognize the multiple planes or layers of signification in which meaning is produced in the experience of music, as well as their interactions and mutual mediation.
It is important to note that this model responds to the context of art education in Mexico and certain areas of Latin America, in which the rich cultural diversity of musical expressions contrasts with the sometimes reduced perspective of musical schooling and training. Nevertheless, the model may prove pertinent to other contexts in which a similar problematic is observed.
The current landscape surrounding artistic disciplines and practices based on sound presents diverse and complex realities that challenge their comprehension through ready-made theories of meaning and communication. The advent and continuing development of new technologies of sound recording, transmission and generation throughout the last century propelled the ramification, hybridization and diversification of musical traditions, compositional methods and aesthetic theories. The experiences in which music and sound-based art are significant resist being grasped through a single musical or artistic perspective. The concepts offered by a single discipline can prove insufficient in trying to comprehend the multiple meanings that circulate even in the simplest musical performance.
With this condition in mind, the semiotic model I propose is not presented as an all-encompassing theory capable of analyzing and deciphering all aspects of musical meaning. Rather, the model serves as an analytical tool designed to point at the different planes of signification that interact in particular musical experiences, in order to recognize possible and relevant lines of study and research for the comprehension of their meaning. Also, as it considers the effects of interpretations based on theories, models and beliefs that enter into such experiences, the model serves in the analysis of the relations between meaning, behavior, identity and community. In other words, my intention in building this model has been to acquire the capacity to modulate the range of observation in a musical semiotic study: to be able to consider the mediation between the formal and referential aspects of a musical work, and to appreciate how the understanding of their integration in the communicative acts of listening and performance reflects and affects the identities of individuals and communities.
The research methodology carried out to construct the model has responded to a threefold strategy:
First, the model recognizes the general aspects of signification processes through the semiotic theory of Charles S. Peirce, as well as other art-oriented theorists who follow in his tradition (such as Nicole Everaert-Desmedt, Francisca Perez, Herman Parrett and others). This study serves to dispel confusions about the representational capacities of sound and music, defusing the sometimes absurd conflict between formalism and referentialism. General semiotic theories also serve to clarify the nature of interpretation, establishing its mediating, associative, argumentative and habit-changing functions.
Second, the model observes the effects meaning and interpretation have on the behavior of individuals and their communication, particularly in artistic experiences. For this, Peirce’s notion of pragmatism is key, in which the study of symbols and concepts considers their effects on behavior and everyday experience. Complementarily, John Dewey’s take on pragmatism is more pertinent to artistic and aesthetic ambits. In conjunction, both offer an integration of aesthetics’, ethics’ and logic’s approaches to meaning (Peirce, 1994), and of the emotional, practical and intellectual aspects of experience (Dewey, 1980). Also, in terms of communication (the production and reception of signs) and pragmatism in art, the theory of Nicole Everaert-Desmedt is of great pertinence to the model. In particular, her notion of ‘iconic thought’ (Everaert-Desmedt, 2008) gives important clues to the understanding of metaphoric interpretations, the process that the model takes as the underlying mediator between realms of meaning.
And third, the model recognizes particular aspects of signification in musical and sound-based art experiences. Here, the works of Leonard B. Meyer, Denis Smalley and Christopher Small are instrumental, each providing what can be considered a pragmatist-compatible approximation to the formal, referential and ritual-based meanings that reside in a musical experience. Meyer’s contribution to the model, self-admittedly syntactical in orientation, lies in his analysis of time-based meaning and expectation (Meyer, 1956). His conceptions are essential to music and sound-based art, as both are experienced temporally. Smalley’s theory of spectromorphology explores in detail the representational capacities of sound and music (Smalley, 1997). His analysis of the referential aspects of music is of particular interest to the model, as it establishes an aesthetic imperative that values metaphoric meaning over symbolic (convention-specific) and indexical (resonant-object-oriented) meanings. And lastly, the anthropologically based theory of musical meaning of Christopher Small understands music not as a thing in itself but as an activity he calls musicking (Small, 1998). To musick, for Small, requires a ritual in which participants define their identities intersubjectively by metaphorically experiencing myths held by a community.
Although the research strategy summarized above refers to particular theories consulted for revealing certain general aspects of musical and sound-based art signification, other theories, bodies of knowledge and field-study information should be observed as they prove necessary to understand specific features of the experiential reality being studied. That is to say, the model represents a general structure that must be adapted to particular studies based on the same idea: namely, that in a meaningful musical or sound-based art experience multiple realms of meaning —of syntactical, semantic and pragmatic nature— converge, giving rise to a holistic yet multilayered whole that is better studied and analyzed in its complexity.
The model, in this sense, traces three general planes of meaning in all musical and sound-based art experiences, namely the temporal, spectral and ritual planes of signification. This proposal resonates to a certain extent with Charles Morris’s semiotic areas or dimensions mentioned above (syntactic, semantic and pragmatic), though it tries to avoid describing them, as Morris does, in terms of dyadic relations (Morris, 1994). The planes, furthermore, are offered as specifically relevant to music and sound-based art. The planes are described in terms of their orientation towards three moments of artistic communication that the model proposes as analytical concepts; those moments are defined by the acts of reception, production and analysis of artistic signs. The model’s analytical methodology can be understood as the recognition of the planes of signification within the moments of art communication, and of the meaningful intermediations between the planes.
In the moment of reception, the planes translate to categories of habit-conditioned expectation (temporal, spectral and ritual); in the moment of production, observation is directed to types of instrumental functions of production (of temporal flux, of processes of sign recombination, and of formation of categorical-qualitative units); while in the moment of analysis, observation falls on the social and cultural context in which the communicative acts of reception and production of artistic signs acquire meaning. Lastly, the model takes Small’s and Peirce’s definitions of metaphoric meaning and interpretation as the semiotic function that describes the aesthetic and affective aspects of the binding and transference of meaning between the planes of signification. The metaphoric interpretation, as an iconic relation of similarity between a sign and its meaning, points to the qualitative tension between juxtaposed signs necessary for their logical association. In a ritual experience, different realms of meaning coincide, creating the conditions for the recognition of similarities between realms and for possible metaphoric interpretations.
The concepts of realm, dimension, area, plane and layer of meaning and signification are used in the model as metaphors that describe aspects of the complex sign structures or architectures that musical works as well as musical experiences represent. The model accentuates the importance of acknowledging that all interpretation in any of the moments of communication is an act of association of realms of meaning, an act in which the interpreter defines his or her identity in relation to a community; that, in every interpretation in the construction of knowledge about musical meaning, the interpreter acts upon the community-shared myth that such meaning is knowable.
Dewey, John. 1980. Art as Experience. New York, NY: Perigee Books. (Original work published on 1934).
Everaert-Desmedt, Nicole. 2008. “¿Qué hace una obra de arte? Un modelo peirceano de la creatividad artística”, in Utopía y Praxis Latinoamericana 40, pp. 83-98. Article available at http://unav.es/gep/Articulos/EveraertUtopia.html.
Hatten, Robert. 1995. “Metaphor in Music”, in Musical Signification. Essays in the Semiotic Theory and Analysis of Music (Eero Tarasti, ed.). New York: Mouton de Gruyter.
Landy, Leigh. 2007. Understanding the Art of Sound Organization. Cambridge, MA: Massachusetts Institute of Technology Press.
Meyer, Leonard. B. 1956. Emotion and Meaning in Music. Chicago: University of Chicago Press.
Morris, Charles. 1994. Fundamentos de la teoría de los signos (2a ed., Rafael Grasa, Trans.). España: Ediciones Paidós Ibérica, S.A. (Original work published in 1971).
Peirce, Charles Sanders. 1994. The Collected Papers of Charles Sanders Peirce (electronic edition). Reproduction of Vols. I-VI eds. Charles Hartshorne and Paul Weiss (Cambridge, MA: Harvard University Press, 1931-1935), Vols. VII-VIII ed. Arthur W. Burks (same publishing house, 1958).
Small, Christopher. 1998. Musicking. The Meaning of Performance and Listening. Middletown, CT: Wesleyan University Press.
Smalley, Denis. 1997. “Spectromorphology: explaining sound shapes”, in Organized Sound 2 (2), pp. 107-26. (United Kingdom: Cambridge University Press).
Tarasti, Eero. 2002. Signs of Music: A Guide to Musical Semiotics. New York: Mouton de Gruyter.
Royal College of Music, Stockholm, Sweden
This paper presents excerpts from a theoretical research project initiated as part of my licentiate thesis (Falthin 2011) in order to provide a framework for a series of empirical studies on composition learning. The research project chiefly concerns electroacoustic music, but many of the problems discussed are general to music. The focus is on musical meaning-making processes in algorithmic and sound-based composition.
A vehicle for understanding meaning making in processes of creativity and learning is the notion of the concept development process (cdp), as introduced by Vygotskij (1999, pp. 167-250). Vygotskij’s original theory of concept development concerns language-based learning and the relation of thinking to language. In this study the theory is applied to musical thinking and learning and hence deals with concepts in music, as opposed to concepts about music.
The subject of this research project is music education, but it references work from many different fields of research, such as artistic research, psychology and linguistics. Research on perception and cognition provides a good foundation for opening up questions and problems, in particular concerning pattern recognition and gestalt-psychological problems. Since all these branches of psychology rely heavily on the use of symbolic representation and symbolic systems, the field of semiotics has to be entered at some point. Theories of syntax will also be drawn upon, as they are material to meaning making.
Research question and purpose of the study
How is meaning constructed in composing and in learning composition in the context of electroacoustic music? From a psychological point of view, making and learning music are closely interrelated and interdependent (e.g. Sloboda, 1985). A creative activity like composition works by combining percepts in new ways (Vygotskij, 1995), assigning them meaning by means of ordering and shaping. Learning, in turn, depends to a large extent on assigning meaning to objects and events by understanding them in terms of forms, shapes and patterns (e.g. Nattiez, 1990).
The context of electroacoustic music makes some fundamental aspects of meaning making especially urgent. The lack of the physical limitations inherent in acoustic musical instruments presents the challenge of designing the whole musical universe: its boundaries and the principles it should work by, including problems of dividing the time and frequency continua.
From the main research question emerge a multitude of sub-questions about how these interrelations might appear and how the different aspects of meaning are constructed and developed. The most central of these sub-questions are:
Does learning of electroacoustic composition undergo processes similar to those in language-based learning?
Is it appropriate to talk about musical concepts in electroacoustic music, and if so how can they be understood to represent and convey meaning?
Can a concept development process (Vygotskij 1999) be traced in the learning of sound-based composition?
How can algorithmic composition be understood to expand creative thinking and supply base material for musical conceptualization?
How do the concepts of significance and meaning relate to musical thinking in electroacoustic music?
How do different levels of musical understanding like structure, syntax, form, expression and gesture relate to concepts of significance and meaning making in the context of electroacoustic music?
The main purpose of this project is to further knowledge about musical meaning making in the act of composing and in learning composition, with special regard to electroacoustic genres. It is about developing tools for understanding the learning process and its relation to musical meaning making. A vehicle for this examination is a comparison with language-based learning, which has been more thoroughly researched and for which there is an elaborate terminology and an assortment of analytical tools. It is thus not a simple yes-or-no question whether linguistic tools can be used to understand musical thinking, but rather a matter of sophisticated, transformative adaptation of theories and tools.
But musical meaning is not necessarily restricted to obeying a language-like logic. More fundamental forms of pattern recognition and spatiotemporal perception and cognition concerning for instance proportions, density, gesture and articulation will certainly have to be taken into account. Meaning making takes place in the tension between anticipation and deceit (Sloboda, 1985; Huron, 2006).
Of significance and meaning; detail and form
Significance is conventional, a social fact whose symbolic representation is the base unit for communication within a culture or a language (Saussure, 1916). Meaning, on the other hand, is personal and subjective, made from the ordering or organization of significant units or signifiers. Clarke (1989) makes the distinction that signification is local, specific and based on oppositions, whereas meaning reaches beyond the immediate systematic context and is a matter of difference.
In language, significance for the most part has to do with single words and morphemes. When such words and morphemes are put together in a string according to some syntactical logic, the resulting form conveys meaning to a receiving subject proficient in that language. Hence meaning in that sense is a matter of relations between the significant base units of a language; form is the shape of meaning. This logic transfers to other areas of meaning making and could serve as a model for understanding in general, as put by Nattiez:
An object of any kind takes on meaning for an individual apprehending that object as soon as that individual places the object in relation to areas of his lived experience–that is, in relation to a collection of other objects that belong to his or her experience of the world. (Nattiez, 1990, p.9)
The quotation implies that we construct our world by the act of connecting new perceptions and impressions to already consolidated conceptions. Here Nattiez seems to be talking about meaning as constructed from perception of objects and activities, but what about meaning making in the creative process of composing? Is it plausible to think that these structures are symmetrical? A rationale for such a supposition is that it would provide a starting point for examining musical communication.
In language, a word can be understood to have a denotative significance. Along with that, however, come any number of connotations, which can represent anything from emotionally colored contextualization to quite distinct significations, or even opposites if irony is called upon. In everyday conversation one could typically not rely on denotation to elicit meaning, at least not a meaning in keeping with the intention of the speaker. The more formal the context, the more denotative and explicit the use of language has to be, and the less can be entrusted to shared cultural understanding. But there can be no absolute and completely denotative application of natural language. This also implies that language does not depend on a systematically intact syntax in order to be functional. Both speaker and hearer construct meaning by making assumptions about the details left out of a message. Often these meaning constructs overlap, but they are rarely identical. Language is a special means of communication, but no more so than any other form of expression.
Denotation in music can be understood to operate on three different layers. 1. The immanent qualities of musical sound, as a token of the properties of its physical source, constitute a denotative aspect inherent whenever music is sounded. In electroacoustic music, and especially in sound-based composition, this is a key issue when assigning meaning to musical events. Often the sound is re-contextualized and the actual sound source is meant to be transparent, not part of the meaning-making process. 2. Music as an expression of a certain culture is another level of denotation, one that can nurture an intricate weave of both personal and cultural connotations. The many cultures within the field of electroacoustic music are intertwined in a complex weave of context-defining parameters: some are time- or space-dependent, others pertain to aesthetic principles or depend on technical paradigms. 3. In addition there can be a conventionally assigned symbolic reference, which is the type of denotation normally associated with natural language. A major difference between language and music in this respect, then, is that language would not be of much use without this latter kind of denotation, whereas music can make perfect sense relying on the previous two.
On the nature of musical concepts
In two empirical studies (Falthin 2011) the sequential stages of the early part of the cdp, as defined by Vygotskij (1987, 1999), were traced. Some instances, such as the abstraction of syntax from immanent structural properties and the construction of nested phrases, were even suggestive of higher-level cdps.
There were indications that music depends on symbolic representation and syntactic structuring in a language-like way, but also indications that it has properties of a more absolute and immediate nature. In a study based on additive synthesis (Falthin 2011), wordless, strictly auditive understanding of what happened when sine waves were blended in different ways was fundamental to the musical conceptualization process. This suggests that musical meaning includes aspects of our fundamental understanding of time and space to a degree that goes beyond words. One way of understanding these phenomena is as applied mathematics: sounding materializations of space. The basic biological functions of our hearing include orientation in space and alerting us to potential danger (Wallin, 1982; Dahlstedt, 2004). These systems are much faster and more powerful than any language-based counterpart could be; not only do they inform us of whereabouts and threats, they also have the power to put us in a mood adequate to the situation.
But if basic biological functions play an important part in musical experience, it would follow that musical signification cannot be entirely arbitrary. Its potential for symbolic representation is constrained to objects in keeping with the immanent qualities of the perceived sound. This limits music’s semiotic power for semantic signification, though meaning making as a syntactical and textual concern is only indirectly affected. All this seems to suggest that an important part of what makes music powerful has to do with applied mathematics: namely, the physical experiencing of geometrical entities, mechanics, flow and motion.
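The "wordless, strictly auditive understanding" of blended sine waves mentioned above can be illustrated with a minimal sketch. This is a hypothetical illustration, not material from the cited study; all function names and parameters are my own:

```python
import math

def sine(freq, phase=0.0, sr=8000, dur=0.1):
    """dur seconds of a pure tone at freq Hz, as a list of samples."""
    n = int(sr * dur)
    return [math.sin(2 * math.pi * freq * t / sr + phase) for t in range(n)]

def mix(*waves):
    """Blend waves by straightforward sample-wise addition."""
    return [sum(samples) for samples in zip(*waves)]

def rms(wave):
    """Root-mean-square level, a rough proxy for perceived strength."""
    return math.sqrt(sum(s * s for s in wave) / len(wave))

# Two equal sines in phase reinforce each other (doubled amplitude) ...
in_phase = mix(sine(440), sine(440))
# ... while a half-cycle phase offset cancels them almost completely.
anti_phase = mix(sine(440), sine(440, phase=math.pi))

print(round(rms(in_phase), 3))   # ~1.414, i.e. 2 / sqrt(2)
print(round(rms(anti_phase), 3)) # ~0.0
```

The reinforcement and cancellation are heard immediately, before any verbal description is available, which is the kind of pre-verbal auditory understanding the study points to.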
Clarke, Eric F. (1989). Issues in Language and Music. In Contemporary Music Review 4:1 pp. 9-22.
Dahlstedt, Palle (2004). Sounds Unheard Of: Evolutionary algorithms as creative tools for the contemporary composer. doct. diss., Göteborg: Chalmers University of Technology.
Dahlstedt, Palle (2012, in process). Between Material and Ideas: A Process-Based Spatial Model of Artistic Creativity. in J. MacCormack & M. d'Inverno (Eds.): Creativity and Computers, Springer Verlag.
de Saussure, Ferdinand (1916). Cours de linguistique générale. Paris: Payot.
de Saussure, Ferdinand (1996). Premier Cours de linguistique générale. (1907). Printed lectures. Komatsu, E. & Wolf, G. (Eds.). Oxford: Pergamon.
Falthin, P (2011). Goodbye Reason Hello Rhyme – A study of meaning making and the concept development process in music composition. Licentiate thesis. Stockholm: KMH-Förlaget.
Huron, David (2006). Sweet Anticipation. Music and the Psychology of Expectation. Cambridge MA: MIT-press.
Nattiez, Jean-Jacques (1990). Music and Discourse: Toward a Semiology of Music. Princeton, NJ: Princeton University Press.
Sloboda, John A. (1985). The Musical Mind: The Cognitive Psychology of Music. London: Oxford University Press.
Vygotskij, Lev (1987). The Collected Works of L. S. Vygotsky, Vol. 1: Problems of General Psychology. New York: Plenum Press.
Vygotskij, Lev (1995). Fantasi och Kreativitet i Barndomen. Göteborg: Daidalos.
Vygotskij, Lev (1999). Tänkande och Språk. Göteborg: Daidalos.
Wallin, Nils L. (1982). Den musikaliska hjärnan – En kritisk essä om musik och perception i biologisk belysning. Göteborg: Kungl. Musikaliska Akademiens skriftserie: 34.
Kerry L Hagan
University of Limerick, Ireland
This paper is predicated on the argument that searching for the meaningful in electroacoustic music requires different means for different pieces, while remaining situated within the historical context of electroacoustic music. The important factors here are, first, that the meaningful in music is different from meaning in music, and, second, that methods for examining the meaningful in electroacoustic music can be quite different from the means of investigating the meaningful in acoustic music.
Looking for meaning in music, in general, leads to two main problems: trying to interpret inherent meaning and/or trying to find a generalised model applicable to all music. Postcolonial theory and cultural theory show that meaning also comes from historical context, cultural tropes and power relationships of hegemony and subjugation, so it is not entirely inherent to the music. Most theorists apply their models to pre-20th-century, conventionally notated, Western music. These models break down with the experimentalism of the 20th century and the introduction of non-notated or unconventionally notated music. Therefore, these models are not generally applicable, and this is especially true for non-notated musics.
Looking for the meaningful in music provides a beneficial vagueness that does not require the definitive conclusions of meaning but, rather, identifies carriers of meaning. Where meaning is dependent upon interpretation, the meaningful is about personal experience. It does not strive for inherent characteristics of the music but acknowledges the reception of the listener. Since the meaningful in music is a carrier for meaning, all theories and models can be applied to all works as needed because one is not using these models for conclusive meanings. These models can come from psychology, linguistics, semiotics, hermeneutics, cultural theory, narrative analysis, and more.
A discussion of the meaningful in electroacoustic music must include a contextualisation within the history of 20th century experimental Western music and the impact of technology. Is electroacoustic music marginalised? Does it fit within the practices of acoustic music experimentalism? A number of popular musics utilise technologies and methods that were invented by early electroacoustic composers, but does electroacoustic music have any further impact on these popular musics?
The meaningful in electroacoustic music is mediated by the technology. In some cases, there are no scores but instructions. In other cases, there are fixed recordings, identical in every performance. In still more cases, live performers read notated scores and perform with technology. This paper proposes that the technology of electroacoustic music and lack of standardised notation do, in fact, marginalise electroacoustic music from the traditions of 20th century experimental Western music. Similarly, electroacoustic music’s aesthetic grounding from the same experimental traditions marginalises it from the popular musics that utilise the same technologies.
Therefore, any analysis must come from within the practice. Musicologists and composers have identified new approaches adapting the search for meaning in music to electroacoustic music. Some examples are spectromorphological analysis, poietic/genetic analysis, and esthesic analysis. These examples apply the theories from psychology, linguistics, semiotics, hermeneutics, etc., but negotiate a space for them in 20th and 21st century, non-notated Western electroacoustic music.
This paper surveys three works in order to demonstrate the diversity of the meaningful by applying different perspectives to each work. These works were chosen for the ways in which they are related yet distinctive. All three works fall within the greater definition of electroacoustic. That is, all pieces are intended for the concert hall paradigm, and the musical material relies fundamentally on electronic technologies. Yet, the musical material arises from different techniques and purposes within electroacoustic media. Two pieces use recorded and processed sounds, while one synthesises its own timbres. Additionally, two are single-moded (music alone) while one is multimodal (ballet). Ultimately, all three have their own philosophies that require different starting points for analysis.
First, Denis Smalley’s work Base Metals (2000) is analysed by identifying the meaningful in its spectromorphologies, sound-shapes and space-forms, building on analyses by Hirst (2011) and Lotis (2003). Secondly, a poietic analysis of Gendy3 (1991) by Iannis Xenakis reveals the meaningful in the mathematics of stochastic events, reflected in investigations by diScipio (1997, 1998), Hoffman (2000), Luque (2009), Serra (1993) and Solomos (2001). Thirdly, the footsteps in Journey, the first movement of Maa (1991) by Kaija Saariaho, lead the listener through a distinctive narrative, through which the meaningful emerges from the story. This survey does not provide exhaustive investigations of each work. Rather, it demonstrates the advantage of diverse avenues of inquiry.
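The "mathematics of stochastic events" in Gendy3 refers to dynamic stochastic synthesis, in which the breakpoints defining each waveform period drift by bounded random walks. The following is a minimal sketch of the amplitude-walk idea only, not Xenakis’s actual GENDYN program (which also random-walks breakpoint durations); all names and parameters here are hypothetical:

```python
import random

def gendy_periods(n_points=12, n_periods=4, step=0.1, seed=1):
    """Return successive waveform periods, each a list of breakpoint
    amplitudes; every period, each breakpoint takes a bounded random
    step, so the waveform drifts stochastically from cycle to cycle."""
    rng = random.Random(seed)
    amps = [0.0] * n_points
    periods = []
    for _ in range(n_periods):
        # Random-walk each breakpoint, then clamp back into [-1, 1].
        amps = [max(-1.0, min(1.0, a + rng.uniform(-step, step)))
                for a in amps]
        periods.append(list(amps))
    return periods

waves = gendy_periods()
print(len(waves), len(waves[0]))  # 4 periods of 12 breakpoints each
```

Because each period is a small perturbation of the last, the timbre evolves continuously rather than repeating, which is one reason a poietic analysis of the generating mathematics can locate the meaningful in such a work.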
diScipio, A., 1997. The problem of 2nd-order sonorities in Xenakis’ electroacoustic music. Organised Sound, 2(3), pp.165–78.
—1998. Compositional Models in Xenakis's Electroacoustic Music. Perspectives of New Music, 36(2) Summer, pp.201–243.
Hirst, D., 2011. From Sound Shapes to Space-Form: investigating the relationships between Smalley’s writings and works. Organised Sound, 16(1), pp.42–53.
Hoffman, P., 2000. The New GENDYM Program. Computer Music Journal, 24(2) Summer, pp.31–38.
Lotis, T., 2003. The creation and projection of ambiophonic and geometrical sonic spaces with reference to Denis Smalley’s Base Metals. Organised Sound, 8(3), pp.257–267.
Luque, S., 2009. The Stochastic Synthesis of Iannis Xenakis. Leonardo Music Journal, 19, pp.77–84.
Serra, M-H., 1993. Stochastic Composition and Stochastic Timbre: GENDY3 by Iannis Xenakis. Perspectives of New Music, 31(1) Winter, pp. 236–257.
Solomos, M., 2001. The Unity of Xenakis's Instrumental and Electroacoustic Music: The Case for "Brownian Movements." Perspectives of New Music, 39(1) Winter, pp.244–254.
University of Guelph, Canada
In most of his electroacoustic works prior to the computer-generated music (the UPIC and GENDYN works dating from 1978) Iannis Xenakis used instrumental sources, often in combination with other sounds. In his early works, the instrumental sources, aside from obvious percussion sonorities in Orient-Occident, were intended to contribute to massed, often noisy, textures. This is especially the case for Diamorphoses (1957) and Bohor (1962). His studio training at GRM, beginning in 1955, would have taught him the classic techniques of Schaefferian “musique concrète.” He would have learned to listen to recorded sounds, to analyze their components, and to manipulate them utilizing standard tape techniques and processing. While these works could not be considered exemplars of the “musique concrète” aesthetic, given that they tend to direct the listener to the global evolution of composite textures rather than particular sound objects, they nonetheless achieved their aims by shaping individual sounds in the same ways other composers at GRM were doing.
The first project that brought Xenakis back to the electroacoustic studio after leaving GRM in 1962 was his Polytope de Montréal (1967). This was a multimedia project, an installation of vertical steel cables, several hundred programmed flashbulbs, and music. Without easy access to an electroacoustic studio (he began teaching at Indiana University in 1967 where he began developing digital music facilities but was not otherwise involved in electronic music), Xenakis opted to write a score for four identical instrumental ensembles intended to be placed in the cardinal points of the floor space of the atrium housing the installation (the French Pavilion of World Expo 1967 in Montreal). However, documentation from the Xenakis Archives indicates that there was never any intention to present this music live in Montreal. The music was recorded in the studios of ORTF/GRM. Part of the design of his installation there included placement of groups of loudspeakers not only around the floor level of the atrium but also vertically so the loudspeakers would project onto the different levels overlooking the atrium (in conjunction with the cables and lights that stretched vertically throughout the entire space). Therefore, while Polytope de Montréal can be thought of as an orchestral work, it has functioned as an electroacoustic work.
The music of this score is built from complex composite instrumental textures that are spatialized around the four “channels” by means of delays and amplitude fluctuations. In terms of basic compositional approach, Polytope de Montréal is very much related to Xenakis's earlier electroacoustic works. Indeed, a reading of his fundamental approach to music composition (as outlined in Formalized Music) reveals that “sonic entities,” whether instrumental or electroacoustic, are the building blocks of his work, shaped by stochastically-generated densities and textures. While the technical conditions may be different, Xenakis did not approach instrumental and electroacoustic projects with distinctive aesthetic aims.
Xenakis’s next electroacoustic composition was Kraanerg, a mixed work for chamber orchestra and four-channel tape completed in 1969; it is his largest work in terms of overall duration: 75 minutes of continuous music. Intended as music for a full-length ballet, choreographed by Roland Petit, the tape part is made up entirely of orchestral recordings involving the same instrumentation as the score for the live musicians. In this case, however, these recordings are treated in the studio, primarily using filters, reverberation, and gain distortion. The recordings are also spatialized for the four-channel presentation. The strategy for spatialization is very similar to that used for Polytope de Montréal, although channel delay is used much less. The tape part, which mostly alternates with the live orchestra (with occasional overlapping), can never be mistaken for the live ensemble, even though it shares common score material. This is due to the studio treatment of the orchestral recordings and the spatial presentation (the loudspeakers are intended to surround the audience whereas the orchestra is seated together onstage or in the pit). Kraanerg is one of Xenakis’s very few ventures into the domain of mixed instrumental-electronic music (the only other such work he completed is Pour La Paix, for voices and computer-generated sounds, from 1981, and that is actually a radiophonic work, intended for broadcast).
In Hibiki-Hana-Ma, from 1970, for tape alone, Xenakis again uses recorded orchestral sources, but adds traditional Japanese instruments (the work was produced in the NHK Studio in Tokyo for the Osaka World Fair). This work utilizes even more extensive studio processing than Kraanerg, and the shaping of the music is less tied to notated score material. Originally, Hibiki-Hana-Ma was produced as a 12-track work, and was projected over a large number of loudspeakers using routing technology similar to what would have been used in the Philips Pavilion in 1958 (Xenakis was involved in the design of this pavilion and worked closely with the Philips engineers on the installation of the custom-built sound system involving a routing mechanism and several hundred loudspeakers). The primary innovation in terms of studio techniques is the extensive use of editing, i.e., cutting recordings into fragments. These fragments are usually distinguished by instrumental-textural (sometimes spectral) characteristics, and they are assigned to distinct tracks. While the work evolves over time into complex, sustained textures, there is a “collage” character to the first half of the 17-minute work, as different strands of distinctive instrumental textures are introduced. Some of the materials are borrowed from recordings of existing orchestral works while some were produced in Japan for this project. The materials created from traditional Japanese instruments (struck and plucked) are most distinctive, but some percussive textures are highly developed, producing complex textures, in one case resembling the stochastic “grains” of Concret PH. The treatment of instrumental sources in the studio to create textures that bear little direct resemblance to the sources became Xenakis’s main approach to sonic materials for subsequent electroacoustic works.
The spatialization strategy for Hibiki-Hana-Ma is quite different from Polytope de Montréal and Kraanerg: there is little “movement” of material from one track to another by means of “panning”. Rather, each track is assigned distinctive materials, and the movement occurs through the routing of the tracks through the several hundred loudspeakers Xenakis had at his disposal for the premiere in Osaka. This strategy of placing distinct material onto the tracks at his disposal to be routed to available loudspeakers would become Xenakis’s primary means of spatializing sound in subsequent electroacoustic works.
In the later compositions—Persepolis (1971), Polytope de Cluny (1972), and La Légende d’Er (1978)—Xenakis blends highly-developed instrumental sources with electronic and digital sources. Some materials, such as the re-use of Japanese percussion sources, are easily recognized within the overall sonic textures; other materials, even those derived from instrumental sources, are much less easily identified. These works are definitively studio creations, where the sonorities are shaped to create the structure and pitch-based materials are much less significant.
This “instrumental” phase of Xenakis’s electroacoustic output raises questions about the treatment of source materials and the intentions of the composer, especially in the case of Polytope de Montréal, where the work could be performed as an instrumental composition. Ultimately, an understanding of such issues rests in the common aesthetic and formal approach Xenakis developed for all his music, instrumental or electroacoustic, where organizational strategies rest on the definition of sonic entities, whether they be defined by score or by studio production.
De Montfort University, UK
Electroacoustic audio-visual music can be defined as ‘the composition of sound and image informed by traditions of music in which materials are structured within time. This form is here defined as audio-visual music because works contain both sonic and image elements. The sound and image are regarded as equal components joined in the context of a work and are both structured musically. A work itself would be an audio-visual composition’. (Hill 2010a)
As a composer of such works I am fascinated by how audiences and individuals perceive and interpret works of audio-visual music. This research investigates both theoretical and practical approaches to this question through empirical study and scholarly research.
I will present and discuss a selection of the initial findings from my empirical research project investigating audience reception of electroacoustic audio-visual music works and how this relates to existing theoretical works within the field. Theoretical materials will be used to rationalise empirical data in order to understand audience perception of electroacoustic audio-visual music.
Initial Results: Understanding Electroacoustic Audio-visual Music.
What is electroacoustic audio-visual music?
This paper will focus upon works that make use of sounds and images, and in which the sonic element is developed from the electroacoustic tradition.
“The composition of sound and image informed by traditions of music in which materials are structured within time. This form is here defined as audio-visual music because works contain both sonic and image elements. The sound and image are regarded as equal components joined in the context of a work and are both structured musically. A work itself would be an audio-visual composition.”
Definition of audio-visual music from (Hill 2010a).
The desire to combine sound and image in a multisensory art form has fascinated and transfixed artists and philosophers for centuries. Newton divided the colours seen in his prism experiment into seven so that light might correspond to the seven notes of the western musical scale (figure 1); during the 18th and 19th centuries inventors sought to devise ever more complex colour organs for live audio-visual performance (for example, Louis Bertrand Castel’s ‘Clavecin Oculaire’ and Bishop Bainbridge’s ‘colour organ’ (figure 2)); and throughout the 19th century many composers devised colour/key mappings within their music (demonstrated by figure 3).
With the development of media for capturing and playing back image and sound, along with the rise of impressionism and abstract expressionism in the late 19th and early 20th centuries, artists were finally able to link sound and image free of causal or mechanical limitations, and to introduce form and motion as compositional parameters alongside colour in the construction of audio-visual relationships. ‘Absolute film’, pioneered by artists such as Hans Richter and Walther Ruttmann in 1920s Germany, explored the essential elements of visual form, colour and motion to create silent works of ‘visual music’. With the development of the optical soundtrack, artists were able to draw directly onto a single filmstrip to generate both visual images and sounds (for example, the work of Norman McLaren). The subsequent development and recent affordability of digital and computer technology has empowered more and more practitioners to cross from music specialisms towards the visual and vice versa, creating a rich and expanding audio-visual community.
Pierre Schaeffer’s seminal work ‘Traité des objets musicaux’ liberated musicians from the restrictions of pitch and opened up the musical world of timbre as a major tool of musical expression. Just as painting had been liberated from representation, so had music been freed from the restrictions and conventions of pitch. Any sound could now be musical material. John Cage, one of the main proponents of this new perspective on music, may, interestingly, have been inspired and influenced by conversations with the visual music artist Oskar Fischinger during the 1930s.
The many approaches that utilised sounds for the creation of music soon blended and exchanged techniques becoming united under the banner of electroacoustic music. This is music that uses the loudspeaker and electronic means for the capture, processing and subsequent projection of sounds.
These developments in creative practice and thought led to the possibility of works built from abstract (or abstracted) sounds and images, in which the audio and visual elements could be related in almost any way. As a result of its diverse background and history there are many varying forms of works exploring the interaction of sound and image, or indeed silent image composed in a musical way. The genre has also acquired a confusing plethora of theoretical terminology through the synthesis of terms from its related disciplines, further compounded by the fact that “the speed of audiovisual praxis today far outstrips that of theory formation” (Daniels and Naumann 2010).
Empirical Research Project Overview
In a paper such as this it is impossible to discuss all of the results of such a large project in sufficient detail. Therefore I shall introduce the research project as a whole before focusing upon two related sections of the results that have been of specific interest.
The desire to conduct empirical research into audience reception, perception and understanding grew directly from my own creative practice, and from a desire to learn how audiences interpreted the stimuli provided by a composer or artist. Leigh Landy and Rob Weale’s Intention/Reception project (Landy 2006, Weale 2006) investigated the accessibility and audience reception of electroacoustic music. That project sought to discover how audiences with no prior knowledge of the art form reacted to works, and how understanding and appreciation of works by these same audiences could be increased. The current study utilises and adapts the methodology of the Intention/Reception project in order to investigate audience reception of electroacoustic audio-visual music.
The main research hypotheses are as follows:
1. Audiences previously unexposed to electroacoustic audio-visual music works will be able to appreciate and enjoy them.
2. Information from the composer will facilitate greater appreciation of electroacoustic audio-visual music works by all audiences.
3. Clear and recognisable sound and image interactions will facilitate greater appreciation of electroacoustic audio-visual music works by inexperienced audiences.
4. Highly recognisable ‘real-world’ (mimetic) materials will facilitate greater appreciation of electroacoustic audio-visual music works.
N.B. Both the contextual information pertaining to the works and the recognisability (mimetic nature) of their materials will be less significant for electroacoustic audio-visual works than for electroacoustic works, due to the action of source bonding between the sonic and visual elements. Where both are similarly abstract this will be a moot issue, but where there are differing levels of abstraction this source bonding is likely to be impeded.
The research hypotheses were developed from the published results and analysis of previous related research projects. One, already mentioned, was the Intention/Reception project (Landy 2006, Weale 2006), but other projects also provided a means to project potential results. Further information about these projects can be found in Hill 2010a and Hill 2010b.
Example Works for Testing
The first challenge for this research was to select appropriate test works that would represent a cross-section of the different styles of electroacoustic audio-visual music available. The test process also had to be practical, so there was a limit on the number of test works that could be presented to the audience groups.
An open call was dispatched over various networks for electroacoustic music in early 2009, and thirty-seven submissions were received. Three were chosen that represented a diversity of approaches and utilised a stylistic diversity of materials.
The first (work A) contained sounds and images abstracted from a common source, recombined in the discourse of the work with fairly direct sound and image relationships.
The second (work B) contained an image montage of a female face and a sonic element comprising abstract synthesised sounds and recordings of female utterances. The relationships of sound and image within this piece were much less direct due to the qualities of the abstract synthesised audio. Even where female vocal utterances were used they did not occur exactly in time with visual events. The sounds and images were undeniably related but did not mirror one another.
The third and final piece (work C) contained completely synthesised, abstract sounds and visuals. However, because both the audio and image were constructed using mathematical models of swarms and other particle geometry, they possess a very organic nature.
These works were presented to groups of participants who had no previous knowledge of audio-visual or electroacoustic music. They were asked to record their own interpretation of the piece, assisted by a qualitative questionnaire, in order to collect individual and personal responses. Qualitative questioning was chosen because it provides a rich and more accurate representation of participant responses in terms of interpretation, and can also provide highly valuable unexpected insights in areas that quantitative questioning might ignore. For a much more detailed exposition of the empirical methodology please see Hill 2010a.
Initial Empirical Results
As previously indicated, it is impossible to provide a sufficient summary of the entire results of the project within the limits of this paper. Therefore we will first compare responses to work A and work C with regard to hypothesis 4, before examining hypothesis 2 with regard to work A. (The composers’ intentions for each work are documented in Appendix 1 and Appendix 2.)
This first phase of the empirical research project provided a diverse array of responses from an eclectic range of participants. Some of these responses supported the research hypotheses and others, excitingly, subverted them.
The first results to be discussed here relate to hypothesis four: ‘Highly recognisable “real-world” (mimetic) materials will facilitate greater appreciation of electroacoustic audio-visual music works’. For acousmatic (sound-only) works in the Intention/Reception project (Weale 2006), the recognisability of the source materials was shown to assist inexperienced audiences in interpreting the work. In the present study, however, the results indicate that the recognisability of source materials can actually disrupt the audience’s interpretation by sidetracking them into thinking about real-world experiences rather than focusing upon the work. For example, work A makes use of drinking glasses as sonic and visual source materials; these are then processed in a musique concrète style, exploring the properties of the sounds and images.
When asked to interpret the meaning of the work participants commonly suggested that the piece was about alcohol consumption and drinking (58% of participants). Initially this did not appear to be an issue because it is positive that participants make an interpretation of the piece, whatever this interpretation might be. However when asked about their engagement with the piece and if they would like to watch/listen to a similar work in the future, a large proportion of participants (67%) explicitly indicated that they would not wish to be presented with a similar work in future. These participants indicated that this was due to a lack of engagement or understanding of the work and what it was trying to portray, quite at odds with the fact that they were able to make an initial interpretation.
In work C the materials are entirely synthesised, and abstract. Therefore, based upon the hypothesis, one would be inclined to assume that this work would be more difficult to understand and thus less engaging for the same inexperienced audience group. However the actual results indicate something quite different, with 87% of participants recording a contextual interpretation of the work and 83% of participants indicating that they would like to see a similar work in future.
Interpretation of Results
This unanticipated result can be rationalised by considering the two works and their openness to interpretation. In work A the use and manipulation of concrète materials (drinking glasses) suggest a more explicit and closed meaning to inexperienced audience groups. Of course the intention of the composer is to focus upon the sounds and images of objects in an abstracted way, but the concurrent combination of sounds and images as recognisable as a drinking glass makes it very difficult for inexperienced participants to move beyond their experience with similar objects in the real world. In work C, the abstract and synthesised nature of the work’s materials meant that there were no direct material sources to which participants could initially and directly relate. Instead the flow and motion of particles, utilising swarm algorithms, provided recognisable forms to which audiences could relate in a diversity of ways. These flowing forms were interpreted as ocean waves, stars and galaxies, flocks of birds and a diversity of other materials or objects that follow similar patterns of movement, as demonstrated by the unexpectedly high level of contextual interpretations for work C. As James Gibson asserted, characteristics of motion may be abstracted and perceived to operate in different objects: motion is defined by a particular type, independent of the material that is actually changing (Gibson 1966).
Work C is constructed of abstract materials (small coloured particles), but this material is assembled using abstracted, but recognisable, patterns or forms (swarms) (see also Appendix 1). Work A on the other hand is made up of recognisable materials (drinking glasses), assembled and structured in an abstract way (montage, development and exploration) (see also Appendix 2).
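The swarm behaviour underlying work C can be illustrated with a minimal sketch. The Python code below is a hypothetical, heavily simplified model, assuming only two rules (cohesion toward the flock centroid plus small random jitter); it is an illustration of the general principle, not the actual algorithm used in work C.

```python
import random

def simulate_swarm(n=30, steps=100, seed=0):
    """Hypothetical minimal swarm: each particle steers toward the
    flock centroid with a little random jitter, producing the kind
    of coherent, organic group motion described in the text."""
    rng = random.Random(seed)
    pos = [(rng.uniform(-1, 1), rng.uniform(-1, 1)) for _ in range(n)]
    vel = [(0.0, 0.0)] * n
    for _ in range(steps):
        # centroid of the flock
        cx = sum(p[0] for p in pos) / n
        cy = sum(p[1] for p in pos) / n
        new_pos, new_vel = [], []
        for (x, y), (vx, vy) in zip(pos, vel):
            # damped velocity + cohesion toward centroid + random jitter
            vx = 0.9 * vx + 0.05 * (cx - x) + rng.uniform(-0.02, 0.02)
            vy = 0.9 * vy + 0.05 * (cy - y) + rng.uniform(-0.02, 0.02)
            new_vel.append((vx, vy))
            new_pos.append((x + vx, y + vy))
        pos, vel = new_pos, new_vel
    return pos
```

Because the same two rules govern every particle, the ensemble exhibits the abstracted "pattern of movement" Gibson describes, independent of what the particles are taken to represent (waves, stars, birds).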
As a result, work A does not provide the participant with such an open opportunity to project themselves into the work and its interpretation. In semiotic terms, work A is a proposition of a specific concept (that of montage, development and exploration). But without an understanding in the audience of what this concept actually is, the individual conceptions of each audience member are often inadequate to interpret the work as the composer intended. Inexperienced audiences do not understand the concepts of reduced or expanded listening, and so cannot approach the work on the composer’s terms. Compounding this, the materials that make up the work (drinking glasses) act to distract the participants by being such recognisable symbols. As Susanne Langer highlights, ‘A symbol which interests us also as an object is distracting. It does not convey its meaning without obstruction’ (Langer 1957: 75).
Work C, on the other hand, being made of abstract material, presents forms and shapes that are unlimited by explicit source bonding or obstruction. Participants can assign many different interpretations to the work, projecting themselves into the construction of any number of meanings. Work C is also a proposition of a specific concept (that of billowing waves of activity, moving between unstable states). However, this concept is far more universally experienced by humans; one could go so far as to say that our world is made up of such activity. Because this concept is more universally understood by the participants, the individual conceptions of each participant are much more adequate to reach the intended concept of the composer.
As Langer states, ‘The same concept is embodied in a multitude of conceptions...but if their respective conceptions of a thing embody the same concept, they will understand each other’ (Langer 1957: 71). This is not to say that any of the works can only be interpreted in one way; they do not contain a single defined meaning. Rather, the underlying concept of a work can influence how ‘easy’ it is for an inexperienced audience to interpret it in the way that the composer intended.
Information from the Composer
Hypothesis 2 states: ‘Information from the composer will facilitate greater appreciation of electroacoustic audio-visual music works by all audiences’. Following the participant response to work A we will now look at the audience responses to contextual information in order to see if it is possible to use such information in order to help elucidate the work’s concept.
Following projection of the work participants were asked if they would like to have more information about the composition. 75% of participants indicated that they would indeed like more information about composition A. All but one of these recorded a desire to understand the intentions and meaning of the work as the reason behind wanting more information from the composer.
After completing the initial questionnaire participants were indeed provided with contextual information about the work (the exact same document can be found in Appendix 2). They were then asked to rate and reflect on the information that had been provided. The participants were also asked if they would have liked to have this information before the work was projected.
50% of participants indicated that they would have preferred to receive the contextual information before projection of the work, a drop of twenty-five percentage points from those who had previously indicated that they would like more information. The proportion of participants who stated explicitly that they would not like to receive contextual information about the composition before projection rose to 33%. Of these, the most commonly cited reason was that the participants preferred their own interpretation of the work. This presents a contradiction: the majority of participants requested more information, but only half of them actually reported the information provided to be of any use. These responses seem to indicate that the contextual information about the work was useful to some but not all of the participants. This could be due to the nature of the contextual material itself (the concept as outlined above is not stated explicitly but must be inferred from reading between the lines), but it certainly indicates that the type of contextual information provided was not wholly appropriate.
When asked to assess how the information had affected their interpretation of the work 54% of the participants stated that exposure to the contextual information had assisted their interpretation, either increasing understanding or providing an increased level of appreciation.
The prevalence of responses championing personal interpretations of the work indicate that participants are in favour of their own conception. However it is also clear that participants do still need and desire some kind of contextual assistance (demonstrated by 75% of participants requesting more information after projection of the work).
A new format of contextual information is required. One possible style could be to provide audiences with an explanation of the art form or style in general before the projection of the work, as opposed to discrete information directly related to the work in question. This might be a preferable mode of increasing context and assisting appreciation without influencing the audience’s personal conception. Unfortunately testing such a theory is not possible within the current project but it provides an exciting opportunity for future research.
Electroacoustic audio-visual music has a complex history, borrowing terminology and experience from a wide array of disciplines and research fields. Practical exploration has outstripped theoretical research in this field for many years and the current project seeks to take stock and to attempt to investigate how audiences perceive and understand electroacoustic audio-visual music.
Previous research projects investigating audience reception have provided a foundation of knowledge upon which to build an effective empirical methodology and to propose hypotheses for the research. It has been exciting to find that results from the empirical study have disproven some of these research hypotheses, and that audiences have been shown to respond to works in a more complex way than was initially anticipated.
The correlation of empirical results with semiotic theory has provided an understanding of the reception process. Conception provides each audience member with an individual interpretation of the work but the nature of the underlying concept of a work may be a fundamental factor in accessibility for inexperienced audiences.
Investigating audience response to contextual information further supports the theory of conception and provides an interesting insight into the style and type of information desired by audiences. Explicit information relating directly to the work appears inappropriate because audience members wish to retain their own personal conception; instead, it is proposed that audiences be provided with more general information about the background of the style of music, in order to contextualise without influencing the individual interpretation of each audience member.
COHEN, A. J. (2001) Music as a source of emotion in film. In Juslin and Sloboda eds. Music and Emotion Theory and Research. New York, Oxford University Press.
DANIELS, D. NAUMANN, S. (2010) Audiovisuology: See This Sound, An interdisciplinary Compendium of Audiovisual Culture. Buchhandlung Walther König GmbH & Co.
EMMERSON, S. (ed.) (1986) Language of Electroacoustic Music. London, Macmillan.
GIBSON, J. (1966) The Senses Considered as Perceptual Systems. Boston: Houghton Mifflin.
HILL, A. (2010a) Investigating Audience Reception of Electroacoustic Audio-visual Compositions: Developing an Effective Methodology. eContact! 12(4) [Available online: http://cec.concordia.ca/econtact/12_4/hill_reception.html]
HILL, A. (2010b) Desarrollo de un lenguaje para la música audiovisual electroacústica: investigación sobre su comunicación y clasificación. En el Límite — Escritos Sobre Sonido, Música, Imagen y Tecnología, pp. 144–165. Editado por Universidad Nacional de Lanús, 2010; compilado por Raúl Minsburg.
JEWANSKI, J. (2010) Colour-Tone Analogies. In Dieter Daniels and Sandra Naumann eds. Audiovisuology: See This Sound, An Interdisciplinary Compendium of Audiovisual Culture. Buchhandlung Walther König GmbH & Co. p. 345.
LANDY, L. (2006) The Intention/Reception Project. In Simoni ed. Analytical Methods of Electroacoustic Music. New York: Routledge, pp. 29–53.
LANGER, S.K. (1957) Philosophy in a New Key. 3rd ed. London. Harvard University Press.
MORITZ, W. (1999) Optical Poetry: The Life and Work of Oskar Fischinger. Indiana University Press.
OX, J. and KEEFER, C. (2006) On Curating Recent Digital Abstract Visual Music. The Abstract Visual Music Catalog. The New York Digital Salon. Available online at http://www.centerforvisualmusic.org/Library.html (Last accessed 8 July 2010)
SMALLEY, D. (1994). Defining Timbre, Refining Timbre. Contemporary Music Review Vol. 10, Part 2. London: Harwood.
WEALE, R. (2006) Discovering how Accessible Electroacoustic Music can be: The Intention/Reception Project. Organised Sound 11/2, pp. 189–200.
University of New York, USA
Any discussion of meaning in electroacoustic music must deal with the nature of the sounds used. The main difference between electroacoustic and other forms of music, after all, is that electroacoustic music can use any sound whatsoever, whereas other music has to rely mostly on musical instruments or voices. The listener, then, tries to make sense of his or her experience.
When we think about listening to electroacoustic music, or any music for that matter, what is “real”? There is an assumption that pervades much writing about music, to say nothing of the general public, that acoustic instrumental and vocal music is “real” whereas electroacoustic music is “unreal” or “artificial.” This problem causes many people to misunderstand, and often to reject, electroacoustic music. One of the main reasons why people develop this bias is that they do not understand the sounds that they are listening to, let alone the music itself, and the basis of their misunderstanding is that they misinterpret the sounds as representing things that they have heard in other musical or non-musical contexts. I would like to argue that, in order to understand electroacoustic music better, people need to give up what I call a “realistic” interpretation of the sounds and concentrate instead on listening to the music in an imaginative and creative or “unrealistic” manner.
Composers have not necessarily abetted the wider understanding and reception of their music. Many pieces are based upon recorded sounds that are manipulated and processed into the final result. For example, imagine that a listener hears a sound that resembles the cry of an animal, and that sound is then transformed in ways that may give rise to images such as torture or cruelty. The listener may then imagine that the piece depicts such actions being carried out, and may well develop a revulsion to the music. My argument is that listeners should not be looking for the source of the sound in that manner in the first place, but should instead imagine it as they might hear an orchestral representation of the same sounds.
In order to clarify how to deal with this problem, it will be necessary to consider some basic aspects of musical perception and interpretation. Music is not our first experience with sounds; that would be language. Language is a form of communication, where sounds denote elements like objects, actions and emotions. Language is an innate characteristic of the human race; all people, unless they are disabled, learn to communicate through language and develop detailed understandings of how to group structured sounds, which vary considerably between different speakers, into complex meanings. This is not the way that we listen to music. While music may communicate various actions and emotions, its interpretation is much more subjective and personal, and it could be alleged that no two people hear exactly the same music in the same way, although they may agree on many aspects of it. Cultural biases, as well as the nature of the language that we first learn, play a part in this.
In the modern world, most of our experience in listening to music comes through recordings. With speech, most of it comes through direct interactions with other people, although we also hear many voices through television, radio, and recordings as well. One result of this is that we usually have visual cues when we hear a person speaking, whereas we lack visual cues when listening to recorded music. Since we have often seen people playing music, we feel we have some idea of their actions, but there can be many misperceptions when imagining how music is being produced. One result of listening to music through recordings is that people develop a passive role and let it, so to speak, come in one ear and out the other without digesting it. Passive listening is actively encouraged by our culture, in which music is used in advertising and in the background of movies, television, and even elevators and in the workplace. I recently heard someone describe which music is best to listen to when studying. Passive listening actually discourages people from selecting music that engages them in an active way, and it encourages music that is bland and unintrusive. While passive listening may help people to develop a familiarity with a particular piece of music, it cannot be said that this amounts to understanding.
One of the things that people do when they listen to instrumental music is to try to identify the instruments that play different passages in the music. This is not necessarily an irrelevant activity, because it is often the case that primary and secondary melodies are played by different instruments, and identifying that aspect helps people to discern the melodies from the surrounding sounds. But the definitions of musical instruments are not all that precise; there is no such thing as “the” violin. Moreover, musical instruments have notable transient byproducts of their sound production, and they are capable of producing their expressive qualities in only a limited range. Stringed instruments cannot produce their sounds without making a certain amount of bowing noises, and wind tones are often accompanied by escaping air. A piano cannot produce a crescendo, although a succession of notes can have an increasing dynamic, and it cannot produce vibrato. Only stringed instruments can produce a pizzicato. If we think of these expressive qualities as useful properties to assign to sounds in music, why would it not be useful, for example, to be able to produce a pizzicato with a clarinet timbre? This is possible in electroacoustic music but not in instrumental music.
In electroacoustic music, the range of source materials is much greater than the instruments and voices of acoustic music. Composers may draw upon mechanical and animal noises, natural sounds, as well as the entire gamut of musical instruments, and they can also create sounds out of pure fantasy. A concept such as granulation, for example, could never be explored without the techniques that have been developed for electroacoustic music. When the sounds that occur in a piece are unmistakably derived from familiar objects, such as musical instruments, it is difficult to disassociate those aspects from the music. But I would argue that it is usually necessary to do so. One property of most electroacoustic music is that composers do not try to duplicate things that could be more easily done in live performance, but instead aim for new and more imaginative and challenging ways of presenting the sounds. A listener who can put aside the familiar recognition of the source objects will be able to form a more relevant interpretation of the music.
There is no sound that could be imagined or produced that does not resemble other sounds in some way. When we learn to identify a musical instrument, we learn to associate all the characteristics that we hear with that particular instrument. What an “original” sound really does is to take something that we may have heard before and place it into a new context, where we cannot rely on our previous experience to elucidate these associations. This means that the most important characteristic a listener needs to bring to the experience of unfamiliar music is an open mind and a willingness to discard old assumptions about music. It is when music begins to remind us that we have heard something like this before, and that it therefore belongs in a particular basket, that we begin to disregard the original and unique properties of the object.
While I am arguing against the notion that instrumental sounds are “real” and electroacoustic sounds are “unreal,” it is important to recognize that there are still many valid reasons for studying instrumental sounds. Since these have been used for many centuries in the history of art music, it is important to know and measure their qualities. Another reason for studying them is to know how to reproduce them, and to manipulate their qualities in doing so. Unfortunately, synthesizer manufacturers have taken the goal of reproducing musical instruments as their sole rationale for designing their instruments. Even though most synthesizers can produce a much wider range of sounds than musical instruments can, almost no one who uses them ever explores these aspects. While there are many other valid reasons for studying instruments besides these, we must nevertheless not be misled into thinking that these sounds represent some kind of standard of excellence, or that sounds which are different are less valid or “real”.
One of the proper roles for electroacoustic music is to extend the creative aspect of composing music to the design of sounds as well as the musical structures that composers have created throughout history. We will not be able to appreciate these efforts until we are able to shed outdated notions of what is “real” and “unreal” in music.
CIRMMT & Université de Montréal, Québec, Canada
For performers of traditional acoustic instruments in mixed electroacoustic music, the location of meaning is as hybridized and perplexing as the place of the genre itself within musical practice. While most acoustic instrument performers within the (contemporary) classical tradition might insist that a search for meaning leads back to the score, those specializing in mixed music repertoire find that notation often proves a false friend, or at least not a map of the meaning of the work. This paper relies on the experiences of a number of performers who commission works for their instrument with electronics (Michael Straus/saxophones, Dana Jessen/bassoon, Luciane Cardassi/piano and myself/recorders) and a few of the composers whose works they have premiered (Peter Swendsen, Chantale Laplante, Paula Matthusen). The paper aims to show how collaborative creation aids in filling the gaps between practices and in working towards a redefinition of notation in mixed electroacoustic music.
The first issue is that mixed electroacoustic music cannot simply be considered a further development of the instrumental music tradition. As obvious as that might seem to an audience of electroacoustic composers, this is not at all clear to performers approaching their first experiences with electroacoustic music. As Diane Thome writes of her own discovery of electroacoustic sound, "it is the creation of coherent and evocative timbral structures, rather than a music of harmony and melody as traditionally conceived, that continues to pose the greatest challenge for me." For instrumental performers, the score has always been the harmonic, melodic and rhythmic text, one that has become increasingly complex over the course of its development, but one that has mostly only implied timbre with vague descriptive words. How, then, are they to understand a work whose primary timbral language is not represented by any script? It is little surprise that instrumental performers, accustomed to "reading" a work, find it difficult even to be aware of what is most pertinent in many electroacoustic works, since there is no effective way of writing it down.
But this miscalculation cuts both ways: electroacoustic composers often underestimate the primacy of the score in a performer's understanding of a work. In preparing any work, a performer necessarily analyzes and internalizes the score in order to be able to give a coherent, cohesive and unified performance. As Ludger Brümmer points out: "When performed by a good performer, a bad composition is still an interesting experience because the performer modifies the information of the score by applying interpretive habits and the timbre of a good instrument." The point he is making is precisely that electroacoustic music could benefit from "this process of modification," and he even pushes for (instrument-less) electroacoustic music to develop more "expressive grammatical elements." It seems to me that the tools used to analyze this grammar of electroacoustic music could find their way into existing musical notation, or could suggest a new kind of notation that would utilise the "reading" expertise of instrumental performers while also giving them the information they need to create successful interpretations of these works.
I believe that this underestimation of the investment that instrumental performers might have in the "totality" of the work they are presenting echoes Mari Kimura's complaint that "As a musician in classical music, I must ask what "musicianship" means in computer music. It seems that the term "performance" in computer music expresses merely the function of machines or programs, and not excellence... Computers in music allow us to have more musical material than ever before in the history of music. We have a huge musical palette, with an abundant, relatively inexpensive supply of "paints." But one is allowed this enormous palette without the effort and rigor to develop artistry."
Sharing expertise, finding common ground
The purpose of this paper, however, is not to reiterate that the notation of mixed music is inadequate as a locus of the meaning of a work. My research has revealed that the shortcomings of notation are circumvented by a number of factors, by far the most important of which is creative collaboration.
It is my belief that the meaning of many successful mixed music pieces (and certainly the ones that will serve as examples) is not "written" into the composition, but is "discovered" somewhere between the expertise of the performer and that of the composer. I would also suggest that it is a prime example of the "Zone of Proximal Development," formulated by the early creativity scholar L.S. Vygotsky as "the distance between the actual developmental level as determined by independent problem solving and the level of potential development as determined through problem solving under adult guidance or in collaboration with more capable peers."
In an exploration of Northern Circles by Peter Swendsen for Michael Straus and Dana Jessen, Estudo de um piano by Chantale Laplante for Luciane Cardassi, and sparrows in supermarkets by Paula Matthusen for myself, I will present the discovery that the lacunae on both sides of the score are not only circumvented by collaborative work, but that the "scaffolding" of more capable peers allows for these gaps in knowledge to start to be filled.
Curiously, however, this might not always lead to clearer scores, since much of the meaning is assumed in the oral exchanges and never recorded. This leads to the problem that for future performances, there is no more information than before. The final part of this paper will focus on the suggestions made about possible notational improvements and on how to transcribe at least some aspect of the collaborative work.
Chih-Fang Huang (1), Jin-Ting Liao (1), En-Ju Lin (2)
Yuan Ze University, Taiwan (1); Heidelberg University, Germany (2)
This research tries to analyze the rhythm of Peking opera music using rhythm complexity measures, to find out its attributes and similarities, and then to perform automated composition in the style of Peking opera music. The results show the potential of this research to preserve and promote Peking opera music through automated composition techniques.
Peking opera is the quintessence of Chinese drama, and rhythm meaning plays a very significant role in it. Peking opera music has two parts: “Wen Chang” and “Wu Chang”. Wen Chang uses wind and stringed instruments, forming an orchestral ensemble without the percussion. When a part of the opera focuses on singing, Wen Chang music serves as the main accompaniment. In this research we will focus on Wu Chang. Wu Chang music is performed by percussion instruments, which create the atmosphere, coordinate with the actors on stage and control the tempo of the whole opera.
Percussion instruments play a very important role in Peking opera. Unlike Western opera, which uses wind and stringed instruments as the main accompaniment, Peking opera relies on percussion instruments to control almost everything in its music.
It is so important that almost every action in Peking opera, such as actors entering or leaving the stage, singing, dancing, speaking, fighting, emotional changes and timing control, relies on the specific rhythm meaning of the percussion instruments. In this research we focus on two instruments: “Ban Gu” and “Xiao Luo”, because the ban gu plays the most important role in the band and conducts the direction of the music. Ban gu and xiao luo performers require greater skill than the other percussion players, because they perform more beat variations.
This research uses three kinds of rhythm complexity measures: Toussaint’s Off-Beatness, Keith’s Complexity, and Weighted Note-to-Beat Distance (WNBD). Off-Beatness measures how irregular a rhythm is by counting the number of onsets that do not align with the vertices of regular polygons when the rhythm is placed on a circle. Keith’s Complexity introduces a measure for rhythmic syncopation based on three rhythmic events: hesitation, anticipation, and syncopation. WNBD focuses on the relationship between onsets and strong beats.
Our research method has several steps. First, we collect ban gu and xiao luo data and analyze their rhythm complexity to find the attributes, similarities and differences among the rhythm meanings. A systematic analysis rule is then established. Finally, the proposed system automatically generates music in the style of Peking opera, as shown in Figure 1.
Based on the three rhythm complexity measures mentioned above, the proposed system applies these methods to find the rhythm meaning of Peking opera. The following is a more detailed description of how these methods work:
I. Toussaint’s Off-Beatness
First, let a rhythm have n pulses and k onsets; place the n pulses evenly around the circumference of a circle and mark each pulse from 0 to n-1. Second, find every value r that is greater than 1 and less than n and evenly divides n. Third, taking 0 as a vertex and r as the side length, inscribe all possible polygons on the circle, as shown in Figure 2. Fourth, find the pulses that do not correspond to a vertex of any inscribed polygon, and define those pulses as off-beat. Last, sum up the number of onsets occurring at off-beat positions. A higher value means a higher degree of rhythm complexity.
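The steps above can be sketched in a few lines. The representation below (a rhythm as n pulses, with onsets given by pulse index) and the function name are illustrative choices, not drawn from the paper:

```python
def off_beatness(n, onsets):
    """Toussaint's Off-Beatness: count onsets falling on off-beat pulses.

    A pulse is on-beat if it is a vertex of some regular polygon
    inscribed with side length r, where 1 < r < n and r divides n.
    """
    on_beat = set()
    for r in range(2, n):
        if n % r == 0:
            on_beat.update(range(0, n, r))  # vertices 0, r, 2r, ...
    return sum(1 for x in onsets if x not in on_beat)
```

For n = 12 the off-beat pulses work out to {1, 5, 7, 11}, so a rhythm with onsets at pulses 0, 1, 5 and 7 scores 3.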
II. Keith’s Complexity
First, let r represent a rhythm with n pulses and k onsets. Second, let an onset be i and the following onset be j. Compute the duration A = j – i, and define Â to be A rounded down to the nearest power of 2. Third, calculate i mod Â and j mod Â. If the result is 0, we define i or j as on the beat; if it is not 0, we define i or j as off the beat. Fourth, compare i and j with Eq. (1), Keith’s measure reference equation, and find the value of s. Last, repeat steps two to four until all onsets have been processed, and then sum up all the values of s. A higher value means a higher degree of rhythm complexity.
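A minimal sketch of this per-note scoring, assuming Eq. (1) assigns s = 1 for hesitation (start on the beat, end off), s = 2 for anticipation (start off, end on) and s = 3 for syncopation (both off), which is Keith's published scheme; treating the final inter-onset interval as wrapping around to pulse n is my own assumption:

```python
def keith_complexity(n, onsets):
    """Keith's measure: sum syncopation scores over successive onsets."""
    total = 0
    k = len(onsets)
    for idx, i in enumerate(onsets):
        # the note runs from onset i to the next onset j (last note wraps)
        j = onsets[idx + 1] if idx + 1 < k else onsets[0] + n
        a = j - i                          # duration A
        a_hat = 1 << (a.bit_length() - 1)  # A rounded down to a power of 2
        start_off = i % a_hat != 0
        end_off = j % a_hat != 0
        total += 2 * int(start_off) + int(end_off)  # s in 0..3 per note
    return total
```

A fully on-beat rhythm such as two onsets at 0 and 2 in 4 pulses scores 0, while onsets at 0 and 3 in 8 pulses score 1 (hesitation) + 2 (anticipation) = 3.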
III. Weighted Note-to-Beat Distance(WNBD)
First, let r be a rhythm with n pulses and k onsets, and let m be the number of beats in the meter. Strong beats are defined in terms of the meter; let ei, ei+1, ei+2, etc. be the strong beats. Second, let x be an onset in r; calculate the distances (x, ei) and (x, ei+1), take the smaller one, divide it by m, and let the result be T(x). Third, assign D(x) as defined in Eq. (2), the WNBD reference equation. Last, go through all the onsets, sum up all D(x), and divide by the number of onsets k. A higher value means a higher degree of rhythm complexity.
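Since Eq. (2) itself is not reproduced here, the sketch below follows the published WNBD definition by Gómez et al., which I assume the paper's Eq. (2) matches: D(x) is 0 for an onset on a strong beat, 2/T(x) when the note crosses the next strong beat but ends by the one after it, and 1/T(x) otherwise; the wrap-around of the final note is again my assumption.

```python
def wnbd(n, onsets, m):
    """Weighted Note-to-Beat Distance for a rhythm of n pulses, k onsets
    and m strong beats per cycle (assumes m divides n evenly)."""
    span = n // m                       # pulses between strong beats
    k = len(onsets)
    total = 0.0
    for idx, x in enumerate(onsets):
        if x % span == 0:
            continue                    # D(x) = 0 for a strong-beat onset
        prev_beat = (x // span) * span
        next_beat = prev_beat + span
        # T(x): distance to the nearest strong beat, in strong-beat units
        t = min(x - prev_beat, next_beat - x) / span
        # the note ends at the next onset (wrapping around the cycle)
        end = onsets[idx + 1] if idx + 1 < k else onsets[0] + n
        # weight 2 if the note crosses the next strong beat but ends
        # on or before the one after it, weight 1 otherwise
        weight = 2 if next_beat < end <= next_beat + span else 1
        total += weight / t
    return total / k
```

For the 16-pulse son clave [0, 3, 6, 10, 12] with m = 4 this gives (8 + 4 + 2) / 5 = 2.8.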
After the analysis of Ban Gu’s rhythm meaning, most of the off-beatness and Keith’s values are zero, and only a few show non-zero values. For WNBD we found two phenomena: almost 50% of the values fall into the range 0.5 to 1.0, and over 30% are still zero, as shown in Figure 3.
After the analysis of Xiao Luo’s rhythm meaning, all off-beatness values are zero, almost 70% of Keith’s values are zero, and the same holds for WNBD. Apart from these zeros, most of the Keith’s values fall into the range 3 to 4, and most WNBD values are below 1.0, as shown in Figure 4.
Based on the resulting rhythm complexity data, we can build rules for the system, extract the main characteristics of Peking opera music, and then compose similar music in the style of Peking opera with various levels of rhythm complexity control. The proposed system will be developed with the Eclipse IDE (Integrated Development Environment) using the Java language, and the jMusic API (Application Programming Interface) is selected to generate MIDI data. Because of the limited instrument definitions of the MIDI protocol, traditional instrument timbres are sampled into a wavetable that can be driven by MIDI data for better playback. Finally, Wu Chang music will be generated by the proposed system to produce Peking opera with rhythm complexity control. Given the growing impact of popular music on traditional Peking opera, the proposed system with rhythm-meaning retrieval can be applied in many fields, including new music composition, multimedia music and the creative cultural industries, helping traditional music to be preserved and given new life.
Gary S. Kendall
Sonic Arts Research Center, Queen’s University Belfast, Northern Ireland, UK
The goal pursued here is to arrive at a better account of the mental processes by which a listener experiences meaning when listening to electroacoustic music, and especially how the listener experiences emotion and feeling as integral to that meaning. But before accounting for these processes, we must acknowledge that the listener’s mental activity occurs in many simultaneous layers. These layers can be described as starting at the bottom with the simplest sensory processes and proceeding up layer by layer to the most synoptic and abstract level. Various authors in cognitive science have systematized and described these layers in various ways. The model described here is sufficiently complex for our purposes without attempting to address many of the other concerns important for cognitive scientists. (A special debt is owed here to the work of Per Aage Brandt.) Our model has five layers that can be enumerated as follows:
Layer 1. Sensation - Constancy and perceptual organization of immediate sensation.
Layer 2. Gist - Framework of things in space extended over several seconds enabling sustained awareness in the short-term.
Layer 3. Locus – Framework for the self-governance of actions within the near term in which items have trajectories and significance.
Layer 4. Context - Framework for enlisting and assessing long-term event-oriented schemas that create expectations over an extended time frame.
Layer 5. Abstraction - Framework of schemas that represent largely background meta-knowledge of inter-connective systems such as musical languages, styles, or in general the domain of something.
We can consider each of these layers to be active in parallel and to be relatively autonomous. Each is producing potential meanings from within its own vantage point, percolating possible meanings from which dominant meanings will emerge. Each of these layers can be described in terms of what it produces, that is, the kind of mental productions it constructs and manages. For example, layer 1 produces perceptual bindings and groupings while layer 2 connects these to instances of schemas for things in space, thereby compressing the spread of sensory information into something more manageable. Layer 3 takes these things and connects them with schemas for situations in which they have medium-term trajectories and significance. This allows for the governance of medium-term tasks like deciding where to focus attention. Layer 4 is trying to match these on-going situations to longer-term patterns that provide an extended time frame and therefore also generate expectations about the future. Finally, layer 5 holds background information that defines and maintains the long-term context. We just now traced a path up through the layers, but we could just as easily trace a path down through them. For example, layer 5 background information determines the long-term schemas accessed by layer 4. Layer 4’s schemas determine what medium-term governance is prioritized in layer 3. That causes the layer 2 organization to shift in response, which in turn triggers the sensory processes to retune to different auditory sources.
As an example of how the mental layers interact in the formation of meaning, we will focus on a short excerpt of Stockhausen’s Telemusik toward the end of ‘Structure 16’. This is not an attempt to define an ideal hearing, just a reasonably informed one. ‘Structure 16’ is arguably the simplest of the 32 ‘Structures’ that make up Telemusik. It begins, like all of the other ‘Structures’, with the sound of a Japanese temple instrument, and like the others its length is a Fibonacci number of seconds (here actually the sum of a large and a small Fibonacci number, 55 + 2 = 57 seconds). Excluding the initial percussive sound that marks the beginning of the section, the content of this ‘Structure’ is made up exclusively of layers of high-frequency ring-modulated Gagaku music. It is completely unrecognizable as such; what the listener hears is glittering clusters between 5 and 7 kHz that change more or less every couple of seconds. Such high-frequency clusters are almost emblematic of Telemusik and can be heard as mimicking short-wave radio signals. From that perspective Telemusik, like Hymnen, tunes into the ‘vibrations’ around the earth, in this case picking up indigenous music from around the globe. Such sounds occur numerous times before ‘Structure 16’, where the high-frequency ring modulation is interrupted three times with rapid, stepped changes in some modulation frequencies that produce semi-melodic sequences in a lower frequency range, 0.5-1.5 kHz.
Most interestingly, the last 20 seconds of ‘Structure 16’ are occupied only by the uninterrupted high-frequency cluster. In terms of the acoustic signal, almost nothing happens for 20 seconds. But how does the listener experience this? The lack of acoustic change actually highlights the changes in the listener’s mental processes. We will examine the 57 seconds of ‘Structure 16’ from the perspective of the five layers of mental activity, with a special eye toward the 20 seconds at the end. We will consider the ‘Structure’ in three phases: first, the beginning stretching into the 20 seconds; second, what happens when the listener realizes that nothing new is happening; and third, the new state that arises in response to the situation.
Phase 1:
Layer 2: There is a very simple situation carried forward with only the shimmering cluster continuing in the background. The feeling sense of the cluster is integrated.
Layer 3: Low situational uncertainty.
Layer 4: Reflection: The situation fits into the on-going pattern. Projection: Another melodic sequence will happen after a short delay.
Phase 2:
Layer 4: Reflection: The pattern is broken and this situation doesn’t match previous patterns. Projection: Is something new about to happen? Alert sent to layers 3 and 5.
Layer 3: Layer 4 attempts to force a re-evaluation. What is happening? There is high uncertainty. Alert sent to layer 2.
Layer 2: The foreground-background relationship is ambiguous with only one item present. This opens the way for a shift to foreground. (This is particularly possible because layer 1 has sensory changes roughly every 2 seconds.)
Phase 3:
Layer 2: The shimmering clusters are in the foreground. The feeling of the cluster is now experienced as more complex.
Layer 4: Reflection: The pattern is different than originally projected. Projection: something must happen soon.
Layer 3: There is heightened uncertainty.
Layer 5: The domain of the composition is expanded. And possibility with repeated hearings: the lack of change is anticipated and understood as an artistic idea of its own.
During these 20 seconds of almost no acoustic change, a reasonably informed listener would have most likely shifted focus from layer 4 (Context) to layer 2 (Gist). This has also caused the Gist to shift the shimmering cluster from the background to the foreground. Then too, in terms of feeling and emotion, the listener has moved from certainty to suspense and done so in an unusual way because there was no acoustic cue to what was coming, that is, there was nothing pointing in any direction at all. This raised an initial potential for fear but without a build-up of tension in support of fear. This was a strange twist in the emotional quality in combination with complex underlying feelings. The resulting blend of feeling and emotion gives rise to an affect of mysteriousness!
City University London, UK
The primary premise of this paper is that the domain of audible spectral frequencies can sometimes attain spatial characteristics in our listening imagination, giving birth to the experience of what might be aptly termed spectral space. By ‘space’ I do not refer to an abstract space, conceived on a level of thought that is detached from the immediate experience of spatiality, but rather, the notion of space on a concrete phenomenological level – that is, space as directly experienced in music. More precisely, I shall argue that spectral spatiality, when perceptually relevant, is qualified as the vertical dimension occupied and articulated by moving sound-shapes – a notion that is conventionally understood in the context of pitch space (the metaphorical vertical dimension of pitch register) in instrumental/vocal music.
The dominance of pitch in conventional instrumental/vocal music has conditioned listeners to follow the manner of occupancy and movement of notes through pitch space, which becomes an important contributor to the creation of indirect expectations and meaning. Moreover, in some cases (an example is provided from Ravel’s Daphnis) the fashion in which materials ‘texture’ the surface-structure of pitch space is directly experienced as an intrinsic facet of the musical fabric. In acousmatic music, where sounds do not default to harmonic spectral structures, the notion of pitch space is extended to spectral space. Similarly, the manner of texturing of spectral space is often perceptually revealed as a pertinent facet of the acousmatic listening experience.
The degrees of abstractness of sounds seem to have a direct relationship with the different stages of cognitive processes involved in source recognition: is the sound immediately recognisable as a bird utterance? Or is it perhaps only on further reflection (at a later stage) that we can detect certain vestiges of bird-like utterances in its spectromorphological structure? On the other hand the sound’s apparent motion through, and occupancy of, spectral space may evoke the behaviour of a bird-like entity on a multimodal level. In this context the spectromorphology itself becomes a spatial form or entity. Thus we may ascertain that in certain musical situations the ontological notions of sound and source become blurred: more abstract sounds are mentally represented in a visuo-spatial manner as entities with spatial characteristics. Such sounds are not directly recognisable as signifiers of external sources and yet, in search of meaning, our mind imposes real-world characteristics (by definition multimodal) on to the mental representation of their intrinsic spectromorphological structures. Thus spectromorphologies become sources that ‘populate’ the vertical dimension of spectral space. I call this process ‘autonomisation’ since it describes the ontological autonomy of spectromorphologies.
After an initial discussion, leading to the notion of autonomisation, the paper will describe a set of criteria for qualifying the manner of occupancy and articulation of spectral space that in turn characterise source-bonded aspects of autonomous spectromorphological entities. Musical examples are provided to demonstrate these qualities in context and in order to highlight their pertinence to meaning-making in the acousmatic listening experience.
Importantly, I shall argue that spatiality in general, and in acousmatic music in particular, is manifold and dependent on a complex web of interrelated cross-modal attributes and qualities. The fairly sophisticated spatial audio technology available to electroacoustic composers today often encourages us to think that space, the final frontier, has at last been put at our disposal as a malleable compositional parameter. The possibilities appear endless: sound sources can move within the three-dimensional field of listening space, circle the audience or fly overhead. In this sense, space is indeed a parameter, and one that can be easily manipulated and quantified.
In contrast, attributes of spectral spatiality cannot be objectively measured, nor can they be easily visualised or placed into neat and discrete categories or controllable parameters. This is due to the complex, multifaceted nature of spectral spatiality, which depends on the interconnection of many elements (including source-bonded and perspectival characteristics), as well as on the musical contexts in which these elements emerge and interact. Should we consider spectral space as somewhat less ‘real’ or less ‘actual’ because it is not objectively tangible and measurable? Does spectral space perhaps exist only within a more abstract realm of musical interpretation and thinking, divorced from the actual empirical experience of space?
In this paper I intend to demonstrate that the answer to these questions is negative, by arguing that the problem lies elsewhere: in the erroneous belief that ‘spatialisation’ technology corresponds directly with the experience of spatiality, that space is a parameter rather than a complex multifaceted quality. This error seems to stem largely from a superficial and simplistic approach to the nature of the cognitive processes involved in human spatial experience, and it is furthered by the false sense of objectivity towards space that is implied and encouraged by the available electroacoustic technology. This is particularly threatening in music that does not lend itself to conventional analytical approaches and lacks a certain tangibility because of its emphasis on sounds alone, without a mediating body such as the musical score, instrumental idioms, or familiar performance gestures and visual cues. The creation and production of acousmatic music is highly dependent on technology, so it is no surprise that at times this technology comes (falsely) to represent the missing ‘mediating body’, providing a sense of security by virtue of its apparent objectivity and tangibility. In such a climate it is more vital than ever to remain focused on the subjective, on the sonic experience, where meaning and sense-making reside.
The notion of ‘spatialisation’ encourages one to consider space as an empty canvas or frame within which sounds can be placed and moved. Here it is suggested that source-bonded and spectral spaces are inherent to all sounds, and that the experience of spatial perspective cannot be considered in isolation from these two facets. Sounds may be experienced as moving (flying, spinning, ascending or circling, etc.) without any directly corresponding physical ‘movement’ of audio signals within listening space. This can easily be put to the test by establishing dialogues with non-specialist listeners; in such cases one quickly ascertains that these experiences rely on aspects of source-bonding and spectral space. Similarly, not all sounds lend themselves to all manners of perspectival projection and motion. Spectromorphologies (particularly in a musical context) are pregnant with a certain spatiality that can suggest perspectival settings, motions and configurations: as a composer I have learned that it is imperative to be guided by this inherent spatiality in order to accomplish a more sophisticated approach to the composition of space - an attitude that strongly contrasts with the notion of ‘spatialising’ sounds in a parametric manner. An investigation of space in acousmatic music must therefore be largely material-based and context-dependent. Above all, it must consider the inherent spatiality of sounds themselves, rather than divorcing space from sound in the hope of an objective, neutral (does one dare suggest sterile?) approach. The latter attitude would be akin to examining aspects of visual perspective by looking only at the frame rather than the composition itself.
In short, this paper proposes that there is far more to the composition and experience of space in acousmatic music than positioning and movement of virtual ‘sources’ within listening space. One could even go as far as to suggest that it is the power to explore and sculpt spatiality in a meaningful manner that marks acousmatic music as a unique form of artistic expression. An understanding of the nature of spectral spatiality is therefore critical in order to take advantage of the compositional possibilities offered by this art-form and to better comprehend its reception.
Yuriko Hase Kojima
Shobi University, Japan
In Japan, many kinds of sound devices are used in traditional gardens, such as shishiodoshi (scare-the-deer), tsukubai (water basin) and suikinkutsu (water-koto-cave). Many of them are made with natural rocks, stones and bamboo, and often with water. This unique sound culture is based on the concept of silence in Japanese culture, which has also been the basis for traditional Japanese musical cultures such as Gagaku and Nohgaku.
Even after the Westernization of their music, many Japanese composers have maintained characteristics that distinguish them from composers in Western countries. Toru Takemitsu was one of the most “Japanese” composers of his generation. He composed his music in a Western manner, but his treatment of musical elements was very Japanese. As is well known, his musical thought was greatly influenced by Japanese Zen philosophy.
Toshiro Mayuzumi, on the other hand, was more directly influenced by Western music than Takemitsu because of his studies in Paris. Yet, as in his “Nirvana Symphony,” Mayuzumi’s musical concepts often show his profound interest in Japanese Buddhism.
In 1960, Takemitsu created the musique concrète piece “Water Music.” Its sound materials are created entirely from the sound of water drops. The recorded materials are processed through what are now standard sound manipulations, but the result is remarkable. The piece sounds very musical, as if played by a performer. The water sounds sometimes resemble percussion instruments such as the tsuzumi, a traditional Japanese percussion instrument, played in the traditional manner. The silence enhances the percussive attacks and changes in timbre, and the sounds dramatically enhance the silence as well.
The piece reminds us of the traditional sound devices using water. Water has often been treated as a central theme in traditional Japanese arts. Just think of the garden at Ryoanji: even with no real water present, water plays a very important role.
As seen in the shishiodoshi, water is always running and is used to create sound within silence.
The tsukubai is a device made of rock that holds water. When it rains and/or when there is wind, the surface of the water moves and may create subtle sounds.
The suikinkutsu is a very unique device that once flourished and was then forgotten during the war. It is a ceramic pot embedded upside-down underground, where it cannot be seen. When one washes one’s hands and water spills out of the basin nearby, the water drips through a small hole in the pot and creates a quiet but distinctive metallic sound. It is not easily heard, and some people do not even notice it. Nowadays, with the revival of the device, modern technology is sometimes used to amplify this subtle sound so that it can be heard easily by everyone. It is also used in sound installation works.
“Water Music” and the sound of the suikinkutsu have something in common: the unexpected timing of the sounds’ attacks. There is no meter and no exact repetition of a pattern. Takemitsu’s instrumental ensemble piece “Water Ways” shows another example of how he treated sound, from a different point of view. The piece is based on the concept of running water: he transferred his idea of water to harmonic and timbral materials that shift from atonality to tonality.
Sound always changes; everything changes. This is a concept from Zen philosophy, and Takemitsu was aware of it. Perhaps because of this, his music sounds very natural and is generally regarded as “intuitive.” The same may be said even of his tape pieces.
Mayuzumi’s musique concrète piece XYZ is regarded as one of the earliest works in this field by a Japanese composer. Perhaps influenced by the musique concrète scene close at hand in Paris in the middle of the twentieth century, the piece sounds more Westernized than Takemitsu’s “Water Music,” which was composed years later. Mayuzumi’s orchestral works show hints of spectral music from the earliest period of his career.
The Japanese composers of the generation after Takemitsu and Mayuzumi often express the difficulties they faced as part of the worldwide Western music community. They struggle to find their musical identities in both electroacoustic and non-electroacoustic composition.
However, at the beginning of the twenty-first century there are many more Western-influenced young Japanese composers than ever before. Many of them do not regard being Japanese as the foundation of their music-making, and they enter Western musical culture with no difficulty. They are more technologically oriented and show little interest in their own sound heritage.
There is one more notable point about Japanese traditional music: the sound is produced surrounded by profound silence. Silence is apparently another key to the creation of sound. There is no pre-fixed tempo, meter or counting of pulses, even in ensemble situations, and there is no conductor in either Gagaku or Nohgaku. The performers do not even make eye contact to begin playing. They feel the atmosphere of the performance and sense when to play, when to speed up or slow down, and when and how to stop. This is the concept of “ma,” and it is completely different from Western musical settings.
When we think back to the Japanese traditional sound devices and people’s traditional listening habits, we sometimes lose the boundaries between music and mere sound. In fact, electroacoustic music is placed right in the middle of disputes such as “What is music?” or “Where does music begin?” Is there any connection between the perception of sound and the development of traditional music? Does a cultural sound background influence the creation of music, especially tape music?
I would like to look once more at the identity of the Japanese traditional sound and music culture, and to investigate the similarities and differences between it and Takemitsu’s tape music, in order to identify characteristics of the Japanese sound culture and to approach the meanings of creating music through a non-Western listening tradition.
Andrew Lewis / Xenia Pestova
GEMINi, Bangor University School of Music, UK
This paper proposes a new gestural typology for analysis of mixed electronic music with the primary focus on music for piano and electroacoustic sound. The typology embraces both the physical performance gestures used in instrumental music and the audible gestures characteristic of acousmatic music. The aim is to develop a unified approach to serve as a common currency for the discussion, analysis and composition of works involving both live instruments and acousmatic sound. We propose the development of a lexicon of physical-sonic gesture correspondences, which may lead us to think in new ways about instrumental, acousmatic and 'mixed' musical discourse.
In taking the first steps towards such a typology, our approach is to narrow the discussion of instrumental gesture to the piano alone. This allows us to explore in detail a deliberately limited range of physical performance gestures, in order to build a foundation upon which typologies for other instruments may be built in future work.
Our work seeks to integrate several strands of research usually pursued separately. The most important of these are represented by the work of Denis Smalley and Marcelo Wanderley.
In Smalley’s widely cited and influential writings, gesture is considered as an aspect of sound itself (Smalley 1997). In Smalley’s thinking, human gestures are embodied in acousmatic sounds to varying degrees (levels of ‘surrogacy’). For Smalley, even sounds with unrecognisable sources carry traces of recognisable human agency, such as force, weight, speed, and the nature of physical processes and interaction. This thinking has become characteristic of the compositional and aesthetic approach of many acousmatic composers, both consciously and subconsciously. The influence of Smalley’s writings in this regard is also matched by that of his musical output: Smalley’s music can be considered as representing the practice-based research through which his theoretical ideas are developed, as well as the embodiment and demonstration of those ideas.
Marcelo Wanderley’s work focuses on the development of gestural technologies for creating new instruments (Digital Musical Instruments, or DMIs). As part of this research, Wanderley and Depalle have considered the nature of performers’ physical interactions with sound-producing bodies, even suggesting the beginnings of a form of gestural typology (Wanderley and Depalle 2004). This suggests obvious parallels with Denis Smalley’s ideas on gesture as an aspect of sound as heard (Smalley 1997). However, such parallels are as yet largely unexplored.
There is also a third strand of research, which considers the nature of musical gesture in a more general sense (that is, not limited to acousmatic or electroacoustic music). In recent years, two books in particular have gathered together some of the wide variety of ideas on the topic (Gritten and King 2006 and 2011). Some of this research contains insights applicable to the relationship between instrumental performance gesture and acousmatic sound.
The overall question is of achieving a lingua franca capable of referring in the same terms to both physical and sonic gestures. To do so we firstly consider the visible and audible gestures of the instrumentalist: what is the nature and musical 'meaning' of the physical gestures made by the performer when playing the instrumental part as composed? That is, what are the visible physical gestures manifested when, for example, the instrument is played with particular dynamics, tempi, or articulations, when phrases are shaped and controlled in certain ways, or when various kinds of material are juxtaposed?
Secondly, we consider the audible but invisible gestures of the acousmatic sounds: what are the sonic 'gestures' inherent in the digitally transformed sound? That is, how does the sound manifest gesture in Denis Smalley's sense of various categories of inferred human causality?
Thirdly, we suggest how these two strands of gesture may be related in an integrated typology, which may be applied equally and interchangeably to both. Our approach is to frame all terms as physical phenomena involving some aspect of initiation (excitation and response), prolongation (various physical interventions which perpetuate a sound), and termination (natural decay, damping or interruption). Sound examples and examples of notated piano material are given in each case.
Initiation is seen as a combination of excitation and response. By 'excitation' we mean any physical action, causing a body to sound, and by 'response' we mean any sound produced as a result. Both excitation and response may be simple or complex. An example of simple excitation is a single strike, and a simple response would be attack followed by decay. In pianistic terms, this is the 'native' excitation- response mechanism of the piano, a basic component out of which more complex excitation-response phenomena may be built. Other forms of excitation include push, stroke and scrape.
The relationship between the complexity of excitation and response is not always direct. In some cases a simple excitation may result in a complex response, and this is particularly the case where the physical structure of the sounding body is complex. For example, a bell-tree may be excited by a single stroke, but its response will be a diverse mosaic of attack-resonance events, which will be varied in loudness and temporal arrangement, but which will exhibit an overall statistical trend from more dense (more frequent events) to less dense (less frequent events) and from louder to quieter. We will hear this trend as representing a move from high energy to low energy, attributing the source of the energy to the initial excitation, despite the complexity of the result. This inference will hold whether the sound is that of an actual bell-tree, an ‘abstract’ acousmatic sound or a pianistic phrase exhibiting the same behaviour.
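The statistical trend described above, in which event density and loudness both decay after a single excitation, can be sketched numerically. The exponential model, the decay constant and the rates below are illustrative assumptions, not measurements of any actual bell-tree:

```python
import random
import math

def bell_tree_events(duration=5.0, start_rate=40.0, decay=1.0, seed=1):
    """Generate (time, loudness) events whose density and loudness
    both fall off exponentially after one initial excitation."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while t < duration:
        rate = start_rate * math.exp(-decay * t)  # events per second, thinning out
        if rate < 0.5:
            break  # energy exhausted
        t += rng.expovariate(rate)       # next inter-onset interval
        loudness = math.exp(-decay * t)  # quieter as the energy dissipates
        events.append((t, loudness))
    return events

events = bell_tree_events()
first_half = sum(1 for t, _ in events if t < 1.0)
second_half = sum(1 for t, _ in events if t >= 1.0)
# Density and loudness both decrease over time: the listener infers a
# single high-energy excitation from the overall statistical trend.
```

The same trend, dense and loud moving to sparse and quiet, would be inferred whether the events were bell strikes, abstract acousmatic attack-resonances or notes of a pianistic phrase.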
An example of prolongation can be an action such as lightly scraping a tam-tam in order to create a grainy, random iteration to sustain the sound. A pianistic analogue of this might be scraping a bass string with a fingernail inside the instrument, or tremolo / arpeggiation effects on the keyboard (with the use of the sustain pedal).
While the piano’s 'native' sound-type is slow decay if left to resonate, dampers provide the possibility for physical interventions to bring sounds to an end. More complex damping can include gradual release of the pedals as well as physically muting or damping the strings by applying external objects and materials (hands, cloth, metal, etc). Analogous terminations may be heard in a wide variety of acousmatic sound objects, in which physical intervention with damping objects may be inferred.
While extensive research has been undertaken on the first three of these questions (see ‘Context’), there is little practice-based research exploring this final, integrative step. It is this gap that the authors seek to address. Discussion will be illustrated with references to ‘classic’ repertoire for piano and electroacoustic sound by Denis Smalley, Simon Emmerson, Jonathan Harvey and Annette Vande Gorne alongside more recent works involving ‘live’ electronics by Andrew Lewis, Hans Tutschku and Katharine Norman.
Gritten, A. and King, E. (eds.) 2006. Music and Gesture. Aldershot: Ashgate.
Gritten, A. and King, E. (eds.) 2011. New Perspectives on Music and Gesture.
Smalley, D. 1997. Spectromorphology: explaining sound-shapes. Organised Sound 2(2): 107–26. Cambridge: CUP.
Wanderley, M. M. and Depalle, P. 2004. Gestural Control of Sound Synthesis. Proceedings of the IEEE 92(4). New York: IEEE.
Lin-Ni Liao - A proposed analysis of the auditory comprehension of mixed-music works in the light of cultural elements: the perception of intercultural fusion - a practice of refining the spirit in East Asia and an intellectual approach in the West.
Composer and Doctor of Musicology, Université Paris-Sorbonne; member of OMF-MINT; researcher in charge of the EMSAN and ORCHID projects
The power of culture can be expressed through identity. Identity is abstract, but it rests on a concrete reality. In music, a shared cultural base allows a common, wordless communication that can border on spirituality.
In East Asia, the use of cultural elements in contemporary music often stems from a search for cultural identity. This is particularly evident among Asian composers who studied in the West, where they rediscovered their own identity and language after long experiencing a conflict caused by a social hierarchy of cultures that, schematically, posits the superiority of Western culture. Faced with this recurring question, each composer offers a very personal answer.
Several questions frame the analytical study. Moving from a definition of the essential elements of culture towards an analysis of the influence of intercultural elements, particularly the intellectuality specific to the West, it is worth asking how composers draw on their own East Asian cultural heritage while taking into account their education in the Western musical system.
The East Asian philosophical heritage often rests on Buddhism, Taoism and Confucian philosophy. Because of their culture, these composers bring to their work a desire to contribute, however modestly, with their own spirit and personal hope, an element that improves society (rather than merely describing the current situation), in contrast to individual intellectuality. This musical sensibility is also present in composition, and it is felt at the different levels of inner and outer harmony through a personal practice.
Still other questions arise: How do composers organise these cultural effects? How was auditory comprehension perceived in the past, and how is it perceived today? Can we analyse elements perceived in a different environment?
How have composers studied, transcribed and translated some of these elements? How do composers respond to cultural elements, and with what techniques and tools?
Individual approaches in selected pieces will be analysed with respect to manner, organisation, selection, listening, and even bodily gesture during a concert. The result of the analysis is considered multi-dimensionally, intersecting with a shared philosophical environment, such as the use of noise-based instrumental sound in the traditional musical universe, and with highly specific aesthetic individuality, such as the influence of the Darmstadt courses.
Traditional cultural elements combine with contemporary sonority. This perception continues to be rediscovered in older writing, such as that of Bach or Beethoven, and in traditional Asian songs which, confronted with modernity, can draw renewal from their contact with contemporary music.
The elements of this common ground divide as follows: an analysis of cultural elements in five categories of reading, within the cultural, philosophical and spiritual context of East Asian mixed works. How do composers create cultural meaning in music (cultural + musical / cultural + non-musical ...)?
1) The use of cultural elements inserted into the musical writing as added value for commercial purposes, which can occasionally be co-opted by political currents. This conception is often found in countries long in conflict with their identity. Cultural concepts are created in order to express a common (often political) identity among targeted groups or audiences.
2) Cultural idea ≠ sonority ⇒ The cultural concept remains at the level of inspiration, without any bearing on the construction of the sonority.
3) Cultural idea ⇒ theory/philosophy ≠ sonority ⇒ The cultural concept develops through a traditional theory or philosophy that often recalls an experience, practice, work or analysis of the composer, BUT does not correspond directly to the sonority of the cultural elements evoked by the composer. E.g. Lin Mei-Fang: Multiflication virtuelle (2004) for percussion and electronics.
4) Cultural idea ⇒ traditional philosophy ⇒ sonority ⇒ The cultural concept develops through a traditional philosophy linked directly to the composer’s inner listening; the composer transcribes this very personal sonority into the writing. We may say that the spiritual and personal message is well perceived by the audience. E.g. Chao Ching-Wen: Tien Nee (2006) for guzheng, violin, cello and fixed sound media.
5) Cultural idea ⇒ theory + philosophy ⇒ sonority ⇒ The cultural concept moves from a traditional theory towards a philosophy. The theory becomes a personal method that introduces tools and materials into the composition. This practice of the theory deepens into the philosophical aspect, which corresponds directly to sonorities linked to the composer’s spiritual search. E.g. the mixed works of Wang Miao-Wen.
Nanyang Technological University, School of Art, Design and Media, Singapore / KTH Royal Institute of Technology, Stockholm.
Proposed for EMS 2012 under the topic:
“Soundscape, sound ecology: - Analytical tools for the understanding of soundscapes. - New approaches to sound ecology, sonification, sound environment.”
Background and aims
The Soundscape Emotion study is part of a research project started in 2011. Our project aims to chart people’s responses to everyday soundscapes in five modalities: perceptual, physiological, movement, colour association and spontaneous commentary. We investigate how physiological indicators of stress correlate with a range of acoustic features, including loudness and relative roughness, and whether some personality traits act as moderators. The perceptual measures indicate which aspects of sound environments are salient. Our study is mainly localised to Singapore, a fast-developing city where attention to quality sonic environments is a low priority. We hope that the knowledge outcomes will be a resource for decision-makers, architects and urban planners, in particular with regard to school environments, to assure sustainable construction standards and long-term societal health.
To provide a tool for the measurement of urban soundscape quality, and ultimately aiming to improve it, a perceptual study by Axelsson et al. (2010) investigated how people perceive recordings of soundscapes that had been categorised as ‘technological’, ‘natural’ or ‘human’ depending on their prejudged dominant foreground sounds. The authors collected ratings on 116 unidirectional scales (using adjectives such as Lively, Brutal, Warm and so forth, originally in Swedish) from 5 groups of subjects, each listening to recordings of 10 urban soundscapes; that is, 50 in total. A principal component analysis found 3 underlying dimensions that were significant and meaningful, labelled pleasantness (50%), eventfulness (16%) and familiarity (8%), together predicting 74% of the variability in the data. The general results were summarised by the authors as: “soundscape excerpts dominated by technological sounds were mainly perceived as unpleasant and uneventful, and soundscape excerpts dominated by human sounds were mainly perceived as eventful and pleasant”.
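The dimension-reduction step reported by Axelsson et al. can be illustrated in outline: given a ratings matrix (soundscape excerpts × adjective scales), a principal component analysis finds orthogonal components ordered by the share of variance they explain. The sketch below uses synthetic ratings with two invented latent factors, not the authors’ data:

```python
import numpy as np

def pca_explained_variance(ratings):
    """Proportion of variance explained by each principal component
    of a (soundscapes x scales) ratings matrix."""
    centred = ratings - ratings.mean(axis=0)  # centre each adjective scale
    _, singular_values, _ = np.linalg.svd(centred, full_matrices=False)
    variance = singular_values ** 2
    return variance / variance.sum()

rng = np.random.default_rng(0)
# Synthetic stand-in: 50 excerpts rated on 20 scales, driven by two
# latent factors (think 'pleasantness', 'eventfulness') plus noise.
latent = rng.normal(size=(50, 2))
loadings = rng.normal(size=(2, 20))
ratings = latent @ loadings + 0.3 * rng.normal(size=(50, 20))

ratio = pca_explained_variance(ratings)
# The first two components dominate, mirroring how a few components
# accounted for most of the variability in the study.
```

In the published study the retained components were then interpreted by inspecting which adjective scales loaded on each.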
The Swedish study used binaural recordings made with a dummy head. A near-perfect reproduction of three-dimensional soundscapes is possible if one corrects for pinna and head-shape effects with individual filters (HRTFs), but such filters are rarely available and generalised HRTFs are typically employed. In our experience, binaural reproduction works well as long as the head is kept still, but movement dissolves the virtual soundscape image. It has been pointed out that people tend to turn their heads towards unexpected sounds (Menshikov 2003). Unexpected sounds naturally contribute to the perception of an eventful soundscape, so binaural reproduction techniques may not be ideal if this is a parameter one wishes to investigate. In real life, the world doesn’t tumble and swing when we move our heads; the proprioceptive and kinaesthetic perception system makes us aware of our body’s relation to auditory space. Therefore, we believe that the perception of reproduced soundscapes should ideally allow freedom of body movement, in particular of the head, but preferably of the whole body.
For our study, we draw on the advantages of a controlled laboratory setting while maintaining ecological validity as far as possible by reproducing ambisonic soundscape recordings in a 3D sound installation, where subjects can walk freely in a 50 m² floor space. Soundscapes, being by definition void of intentionality, tend to be less specific, more static and less attractive than music. They ‘take more time’ to listen to and to become immersed in before they can be perceptually evaluated. As has been pointed out (Schafer 1984, Sampkopf 2011), the faculties of soundscape listening can be augmented through conscious effort and training over time. However, one will find less of this kind of focussed listening attention in the general population.
There is evidence that personality and mood state influence how sounds are perceived. Vouskoski and Eerola (2011) investigated individual differences in emotional processing, specifically the role of personality and mood in music perception and preference ratings. They hypothesised that both personality and mood would contribute to the perception of emotions in trait- and mood-congruent manners, and that mood and personality would also interact in producing affect-congruent biases. The authors investigated how mood may moderate the influence of personality traits on emotion perception in excerpts of film music that had been evaluated in a pilot experiment according to perceived basic emotion in five categories (anger, fear, happiness, sadness and tenderness). They concluded that “the degree of mood-congruence in the emotion ratings is at least to some extent moderated by personality traits”. The idea of the authors’ study was to parcel out the variability of short-term mood swings from persistent personality traits. Among other things, they found significant correlations between ratings of perceived happiness in the music and a vigorous mood state, interacting with an extravert personality: the correlation between vigour and happiness ratings increased with increasing extraversion. The authors administered the Profile of Mood States (POMS-A; Terry et al. 1999, 2003) in a version adapted for use with adults, and the Big Five Inventory (BFI; John & Srivastava 1999).
The POMS-A is a questionnaire with a single instruction: “mark the answer which best describes HOW YOU FEEL RIGHT NOW”, followed by 24 adjectives. Subjects rate their mood on a 5-point Likert scale anchored by “not at all” and “extremely well”. The adjectives include muddled, alert, nervous, energetic and so forth. A score is calculated for each of 6 mood dimensions: Anger, Confusion, Depression, Fatigue, Tension and Vigour. The POMS has been reported to have good concordance with other measurement instruments (Morfeld et al. 2006).
The BFI has 42 items, and a score is calculated for each of 5 personality dimensions or traits: Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism (OCEAN). The items are normally rated on a 5-point Likert scale (1 = disagree strongly, 5 = agree strongly). For our study, we chose the Ten-Item Personality Index (TIPI) developed by Gosling et al. (2001, 2005). It has been shown to have good construct validity compared with both the 42-item BFI (used by Vouskoski) and larger instruments such as the 214-item NEO-PRI-N. Gosling underlines that the TIPI is less specific, but that it has a light-weight advantage when experiment designs do not allow time for one of the larger instruments. See the screenshot below for the implementation in MaxMSP.
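As a rough illustration of how such a short instrument is scored, the sketch below computes trait scores from ten 7-point Likert responses, averaging each trait’s two items with one of them reverse-scored (reverse = 8 − response). The item-to-trait pairing shown follows the published TIPI key as we understand it and should be checked against Gosling et al.’s scoring instructions before reuse:

```python
# Illustrative TIPI scoring sketch. Items are numbered 1-10; each trait
# is the mean of one directly scored and one reverse-scored item.
TRAIT_ITEMS = {
    # trait: (direct item, reverse-scored item) -- assumed pairing
    "Extraversion":        (1, 6),
    "Agreeableness":       (7, 2),
    "Conscientiousness":   (3, 8),
    "Emotional stability": (9, 4),
    "Openness":            (5, 10),
}

def score_tipi(responses):
    """responses: dict mapping item number (1-10) to a 1-7 Likert rating."""
    def reverse(x):
        return 8 - x  # flip a 7-point scale
    return {
        trait: (responses[direct] + reverse(responses[rev])) / 2
        for trait, (direct, rev) in TRAIT_ITEMS.items()
    }

# Invented example responses for one participant.
answers = {1: 5, 2: 2, 3: 6, 4: 3, 5: 7, 6: 2, 7: 6, 8: 1, 9: 4, 10: 3}
scores = score_tipi(answers)  # e.g. scores["Extraversion"] == 5.5
```

The light weight of the instrument is visible here: ten ratings yield all five OCEAN trait scores.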
Because, as explained above, soundscapes are best experienced through immersive listening, we did not want to design a test with short excerpts interspersed with questions and ratings on emotional dimensions. Movement (walking) has been shown to correlate with emotional states such as stress and arousal.
Physiological measures, such as skin conductance (electrodermal response, EDR), are likewise known to correlate with emotional states. We tracked 5 physiological measures, including EKG and EDR, using a ProComp system. During the second half of the session, participants were fitted with psychophysiological equipment to record heart rate, BVP amplitude, respiration rate, electrodermal activity (also called galvanic skin response, GSR), and body temperature. The sensors consisted of 11 mm Ag/AgCl dry electrodes placed on the ring and middle fingers for recording electrodermal activity, secured with Velcro straps; a photoplethysmograph sensor placed on the middle finger for recording heart rate and BVP amplitude, also secured with Velcro straps; and a digital thermometer inserted inside the BVP attachment strap on the index finger for recording peripheral skin surface temperature (Figure 1). A Hall effect respiration sensor was placed around the diaphragm to record respiratory rate. Physiological data were collected with the ProComp Infiniti biofeedback system by Thought Technology.
We made ambisonic recordings of several Singaporean everyday sonic environments and selected 12 excerpts of 90 seconds duration each, in 4 categories: city parks, rural parks, eateries and shops/markets. We conducted 2 experiment series in Singapore. The first experiment rendered audio in a 3D installation space. We filmed participants (N=17) from a bird's-eye position, capturing their movement in the available floor space, and tracked 5 physiological measures, including EKG and EDR, using a ProComp system. In the second experiment we used a screen-based setup with KEMAR binaural rendering. We employed the Swedish Soundscape Quality Protocol (2010), which uses 9 dimensions of perceptual quality and 5 categories of content association, and analysed the data with a principal components model similar to the 2-dimensional valence-activity plane. Colour association was analysed using a clustering model in L*a*b* space. See below for a screenshot of the rating interface, implemented in MaxMSP.
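To illustrate the kind of clustering model used for the colour associations, here is a minimal k-means sketch operating on (L*, a*, b*) triples. This illustrates the general technique, not the study's actual analysis code; Euclidean distance in L*a*b* is used because it approximates perceptual colour difference better than distance in RGB.

```python
# Minimal k-means over CIE L*a*b* colour triples (illustrative sketch).
import random

def dist2(p, q):
    """Squared Euclidean distance; in L*a*b* this approximates the
    CIE76 perceptual colour difference (squared)."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def mean(pts):
    """Component-wise mean of a non-empty list of triples."""
    return tuple(sum(x) / len(pts) for x in zip(*pts))

def kmeans(points, k, iters=50, seed=0):
    """Cluster (L*, a*, b*) triples; returns (centroids, clusters)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)        # initialise from the data
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                     # assign to nearest centroid
            nearest = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[nearest].append(p)
        # move each centroid to its cluster mean (keep it if cluster empty)
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters
```

In practice the colours participants picked for each soundscape would be converted from screen RGB to L*a*b* before clustering; that conversion step is omitted here.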
The experiment was repeated with a group in Singapore (N=36) and a group in Norway (N=14), allowing us to see whether familiarity with the soundscapes influenced results. In both experiments, participants filled out the Ten-Item Personality Inventory (Gosling et al. 2003) and the Profile of Mood States for Adults (Terry et al. 1999, 2003). In a forthcoming extension of the research, spontaneous commentary on the soundscapes will be transcribed and analysed for lexical diversity using the MTLD measure (Crossley 2009), for which we have conducted a separate pilot study.
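Since MTLD may be unfamiliar, the following sketch shows its core mechanism (forward pass only, without the usual forward/backward averaging): a "factor" is a stretch of text over which the running type-token ratio (TTR) stays above the conventional 0.72 threshold, and MTLD is the token count divided by the number of factors. Higher values indicate greater lexical diversity.

```python
# Hedged sketch of the MTLD lexical-diversity measure (forward pass only).
def mtld_forward(tokens, threshold=0.72):
    factors = 0.0
    types, count = set(), 0
    for tok in tokens:
        count += 1
        types.add(tok.lower())
        ttr = len(types) / count           # running type-token ratio
        if ttr <= threshold:               # diversity exhausted: one factor
            factors += 1
            types, count = set(), 0        # reset for the next stretch
    if count > 0:                          # credit the leftover partial factor
        ttr = len(types) / count
        factors += (1 - ttr) / (1 - threshold)
    return len(tokens) / factors if factors else float(len(tokens))
```

Maximally repetitive text (the same word over and over) closes a factor every two tokens, giving an MTLD of 2.0, while fully unique text never closes a factor at all.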
Preliminary findings include observations about the spatial organisation of coloured “blobs” representing the 12 soundscapes. The illustration below shows one response, with a clear association between groupings and colours. Recall that subjects rated the soundscapes without the descriptive tags, which have been added here.
Analysis results will be presented at the conference.
Adams, M., Bruce, N., Davies, W., Cain, R., Carlyle, A., Cusack, P., Hume, K., Jennings, P. & Plack, C. (2008). “Soundwalking as methodology for understanding soundscapes” In Proceedings of the Institute of Acoustics Spring Conference 2008 – Widening Horizons in Acoustics, Reading UK, April 2008, pp 552-558
Andringa, Tjeerd C. (2010). “Soundscape and core affect regulation”. Proceedings of Inter-noise 2010, Portugal.
Axelsson, Östen, Nilsson, Mats E. & Berglund, Birgitta (2010). “A principal components model of soundscape perception”. Journal of the Acoustical Society of America #128 (5), November 2010.
Cain, R., P. Jennings, J. Poxon, A. Scott (2009), “Emotional dimensions of a soundscape”. In Proceedings of InterNoise 2009, 23-26th August, Ottawa, Canada [invited paper]
Cain, R., P. Jennings, M. Adams, N. Bruce, A. Carlyle, P. Cusack, W. Davies, K. Hume and C. Plack (2008), “SOUND-SCAPE: A framework for characterising positive urban soundscapes”, In Proceedings of Acoustics 08 – Euronoise, the European conference on noise control, Paris France, June 2008, pp 1019-1022
Cain, R. & P. Jennings (2007), “Developing best practice for lab-based evaluations of urban soundscapes”, In Proceedings of Inter-Noise 2007, Istanbul, August 2007
Davies, William J., Adams, Bruce, Marselle, Cain, Jennings, Poxon, Carlyle, Cusack, Hall, Hume & Plack (2009). “The positive soundscape project: A synthesis of results from many disciplines”. Proceedings of Inter-noise 2009, Canada.
Davies, W. and M. Adams, N. Bruce, R. Cain, A. Carlyle, P. Cusack, K. Hume, P. Jennings, C. Plack (2007), “The Positive Soundscape Project”, In Proceedings of the 19th International Conference on Acoustics, Madrid, September 2007.
Gosling, Samuel D., Rentfrow, Peter J. & Swann Jr., William B. (2003). “A very brief measure of the Big- Five personality domains”. Journal of Research in Personality 37 (2003) 504–528.
Jennings, P. & Cain, R. (2009), “A Framework for assessing the change in perception of a public space through its soundscape”, In Proceedings of InterNoise 2009, 23-26th August, Ottawa, Canada [invited paper]
Lindborg, PerMagnus (2010). “Aural and Visual Perceptions of a Landscape”. Unpublished pilot study.
Luck, Geoff, Saarikallio, Suvi, Thompson, Marc, Burger, Birgitta & Toiviainen, Petri (2010). “Effects of Personality and Genre on Music-Induced Movement”. Proceedings of the 11th International Conference on Music Perception and Cognition (ICMPC11). Seattle, Washington, USA. S.M. Demorest, S.J. Morrison, P.S. Campbell (Eds)
Terry, P. C., Lane, A. M., & Fogarty, G. J. (2003). “Construct validity of the POMS-A for use with adults”. Psychology of Sport and Exercise, 4 (2), 125-139.
Terry, Peter C., Lane, Andrew M., Lane, Helen J. & Keohane, Lee (1999). “Development and validation of a mood measure for adolescents”. Journal of Sports Sciences, 17 (11), 861-872.
Toiviainen, Petri (2010). “Spatiotemporal Music Cognition”. Proceedings of the 11th International Conference on Music Perception and Cognition (ICMPC11). Seattle, Washington, USA. S.M. Demorest, S.J. Morrison, P.S. Campbell (Eds)
John, Oliver P. & Srivastava, Sanjay (1999). “The Big Five Trait Taxonomy: History, Measurement, and Theoretical Perspectives”. Chapter 4, pp. 102-38 in Handbook of Personality. Theory and Research. 2nd edition. Pervin, Lawrence A. & John, Oliver P. (Eds). The Guilford Press 1999.
McCrae, Robert R. & Costa, Paul T. (1999). “A Five-Factor Theory of Personality”. Chapter 5, pp. 139-53 in Handbook of Personality. Theory and Research. 2nd edition. Pervin, Lawrence A. & John, Oliver P. (Eds). The Guilford Press 1999.
Maisonneuve, Nicolas, Stevens, Matthias, Niessen, Maria E., Hanappe, Peter & Steels, Luc (2009). “Citizen Noise Pollution Monitoring”. Proceedings of the 10th International Digital Government Research Conference.
Maisonneuve et al. (2008-11). NoiseTube. http://www.noisetube.net (last accessed 28 March 2011).
Menshikov, Aleksei (2003). “3D Sound vs. Surround Sound”. Available at http://ixbtlabs.com/articles2/sound-technology/index.html (last accessed 29 March 2011).
Morfeld, Matthias, Petersen, Corinna, Krüger-Bödeker, Anja, Mackensen, Sylvia von & Bullinger, Monika (2006). “The assessment of mood at workplace - psychometric analyses of the revised Profile of Mood States (POMS) questionnaire”. Psychosoc Med. 2007; 4: Doc06. Published online in May 2007 and available at http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2736534/ (last accessed 28 March 2011).
Poxon, J., Jennings, P. & Cain, R. (2009), “Creation and Use of a Simple/Informative Method for Displaying and Analysing Soundscapes Recordings”, In Proceedings of InterNoise 2009, 23-26th August, Ottawa, Canada
Vuoskoski, Jonna K. & Eerola, Tuomas (2011, in press). “The role of mood and personality in the perception of emotions represented by music”. Cortex, XX(X), XXX-XXX.
Ákos Rózmann (1939-2005) was a Hungarian composer of electroacoustic music, active in Sweden between 1971 and 2005. In his six-and-a-half-hour pre-recorded electroacoustic composition Twelve Stations (composed 1978-1980 and 1998-2001), Rózmann presented a journey that one might call a move from samsara to nirvana, or from hell to heaven. His 1984 concert program notes about the parts composed in 1978-1980 relate to the Buddhist concept of samsara. The 1998-2001 parts lack any similarly explicit and well-defined religious reference: the composer gave these parts programmatic titles that describe a process starting in “hell” – whatever that may mean – and arriving gradually at some sort of cosmic celebration.
While Rózmann’s outline described above gives a useful key to the narrative layer of the work, it does not help us approach the metaphysical value of its meaning, that is, its actual relevance to the fundamental problems of our existence. In my paper, I aim to show one possible way of explaining how this value emerges during the listening process.
A characteristic feature of Rózmann’s work is the tenacious immersion of the listener in sounds that one naturally tends to interpret as coming from anthropomorphic or zoomorphic beings dwelling in imaginary spaces. I argue that this interpretation can happen according to one of three models, which I call the “documentary model”, the “Dante model” and the “inmate model”. In all three models the listener takes the loudspeakers as a kind of window opening onto an imaginary space and time distinct from his own, and relates to its events either as an objective observer, as in a documentary, or as a reader of fiction, identifying with one of the characters. This character can either travel freely through the different regions at his own will, as the figure of Dante does in the Divina commedia, or can be more or less confined to his actual place, being an inmate of, for example, hell or purgatory. While listening to Twelve Stations, one is drawn to all three kinds of interpretation, but each of them finally leads to contradictions or proves insufficient in some other way.
The work thus forces the listener to cease assuming imaginary spaces and times distinct from his own, and to accept the sounds as part of his own reality, at the present place and the present moment. In this way, once the sounds are accepted as not belonging to any artificial, artistic reality, the performance of the work functions not so much as a concert but rather as a sort of treatment: instead of being a depiction of a journey from darkness to light, it proves to be itself the means of cleansing.
Master’s student, Université Paris-Sorbonne (Paris-IV)
Wishart coins the term landscape to describe the apparent origin of a sound. More than that, landscape describes sounds as they are presented in the musical world, along three main axes: the nature of the acoustic space; the disposition of sound-objects within the space; and the individual recognition of said objects (what I earlier called “sonic identification”). Bayle calls “images-de-son” the diffusion of sound-objects in the virtual space of the speakers. He then distinguishes between those images according to the level of referentiality they present, using Peirce’s semiotic theory: images-de-son may be iconic (if they refer to something via resemblance), indexical (if they refer to something via denotation) or metaphorical (if they refer to something via convention). Pousseur proposes a sketch for a typology of sounds according to their link to intentionality, whether direct or indirect.
The emphasis on intentionality leads me to make specific considerations about voice. Russolo and Schaeffer both underlined the specificity of voice, even if Schaeffer would not consider the particular domain of perception it represents. Wishart calls utterance all non-speech (human or animal) voice-produced phenomena, emphasizing the fact that voice is the most recognizable and ear-catching type of sound. Bergsland explains and extends the recognizability of voice in acousmatic music by referring to the prototype theory of cognition: there could then be a maximal and a minimal “voiceness” to sound. He argues that there are several experiential domains and that while some of them are non-specific, others can be specific to voice or non-voice sounds.
Models for music’s reception all have in common a division of the audience according to listening strategies. Kaltenecker studies 18th-century listening, Schaeffer distinguishes listening intentions, Baboni conceptualises four models for interpreting a work of art, and Delalande (later taken up by Anderson) experiments on reception behaviours in acousmatic music. The point of convergence of all these models is the coexistence of, on the one hand, an analytical domain, and on the other, an emotional and an imaginative domain.
In Vers une narratologie ‘naturelle’ de la musique, inspired by Monika Fludernik, I coin the term narrativization to designate all of the processes used by a listener to apprehend a sonic work. On the basis of listeners’ verbalizations after hearing samples from Wishart’s Journey into Space, I call ecological those narrativizations that are based on basic cognitive operations such as causal attribution and agent-object links, organized through processes of semantization and contextualization; and semiotic those narrativizations that use more conscious thought, ranging from the semiotization of kinaesthetic feeling to verbal conceptualization, via cultural synaesthesia and learned associations.
I propose dividing the concept of agentisation into three sub-types: anthropomorphizations, humanizations and focalizations. Anthropomorphizations are considerations of sound as willing, of musical gestures (whether in space, pitch or any other parameter) as intentional (physical) gestures, induced partly by the streaming phenomenon described by the gestalt psychologists, partly by the causal attribution inherent to human cognition. The consideration of an utterance as an index of the presence of a human (or animal) being is a special kind of anthropomorphization, namely a humanization (which is related to Bergsland’s theories about experiential domains).
I argue that the concept of diegesis (the world in which events occur), though strongly associated with literary narratology, is relevant to the study of the reception of electroacoustic musics, in that it allows for the consideration of multiple (more or less definite) levels of organization (the gestalt figure/ground phenomenon, studied by Roy in his thesis, conceptualized by Smalley as gesture and texture, and used by Bayle in Paysage, personnage, nuage to define the opposition between what Schaeffer called “trames” and what comes into the foreground as “characters”). Without ever claiming that a “story” is told, or that “characters” move and act as in a literary narrative, I argue that the anthropomorphization and objectization of foreground sound-events, their placing in a global diegesis or “narrative frame” composed of less prominent sound-events, as well as semantization via sound-source identification and contextualization via causal attribution and relational links, are frequently used strategies in the listening of electroacoustic musics, and can be considered, partly because of the presence of space and concrete spatial movement, as more than the metaphor they often are in regard to instrumental musics. This can take the form of a referential diegetization (when the sound-sources are identified and the diegesis is constructed mainly from contextualization of those sound-sources) or of an abstract diegetization (when sonic identification is limited to gesture or form).
Focalizations are the placement of narrativity as Fludernik defines it (that is, experientiality) on a human or anthropomorphic subject: if the listener is listening in an empathic manner, he will himself be the focalizer, the experiencer; if he is listening in an imaginative manner, focalization might be on a character or narrator he invents; if he is listening to instrumental music in a causal manner (which might be the case when watching a film showing a concert by the main protagonist, for example), the performer might be the focalizer; if he is listening to any music in a reduced manner, the focalization may be more abstract and attach itself to the musical system. Deleuze’s distinction between haptic and optic (“smooth” and “striated” spaces) and Moles’ psychology of space are of great help in describing focalizations in spatialized electroacoustic musics. I will thus distinguish egocentered, heterocentered and outcentered focalizations.
To conclude, I argue that the contents and characteristics of the music, as well as the choices, expertise and habits of a listener, can explain the listening strategies s/he will employ to apprehend music, and that consideration of all of these factors might help widen an area of investigation concerned with the psychological and cognitive evaluation of listeners.
Dr. Andra McCartney
Associate Professor, Communication Studies, Concordia University, Montreal.
Soundscape studies consider the acoustic environment as it is created, shaped and heard by those who move through and inhabit it. This is a multidisciplinary, qualitative approach that considers the sounds heard as well as the social and cultural contexts in which they are produced. A soundwalk is an exploration of a location through walking, in which listening becomes the primary source of information. Soundwalks are used for exploratory and research purposes; they can be recorded, and they can also be guided. Soundwalks provide a way for people to think through the cultural, political, sonic and social meanings of everyday sounds.
The Soundwalking Interactions research project in the Communication Studies department at Concordia University considers approaches to soundwalks and interactive soundscape installations that challenge audiences to approach such walks and installations actively, using interpretive and inventive listening practices. Such practices can lead to heightened opportunities for reflexive development of creativity in artists and audiences. Soundwalks have become an important creative form, especially in recent years, but remain under-theorized in the contemporary literature. Andra McCartney's artistic practice features audience interaction and dialogue as integral parts of each installation and public soundwalk, thus becoming recursive or reflexive in approach, with audience responses influencing the direction of the projects. This research attempts to integrate the insights of reflexivity studies, concerned with artist-audience interaction, with thinking about soundwalk art and improvisation. The work interweaves these areas that are generally regarded as distinct and separate, considering this from the perspective of sound art as communication, that is, art as a mutually constitutive process between the artist and the participating audience.
While models for reflexive artistic strategies can be found in the work of some conceptual artists (Fluxus, Situationists and others) and proponents of interactivity (Shanken), there are relatively few studies about such artistic works that use ethnographic methods to analyze and discuss the influence of different reflexive models on audience engagement. Our research project aims to develop, present, and analyze improvisational and reflexive strategies; engage audiences in public listening soundwalks; make audio recordings of soundwalks as sources for artworks; create works based on these soundwalks; make those works publicly available for future listening and commentary.
Listening in these soundwalks is active, critical, dynamic and attendant to the requirements of the moment, similar to the listening of improvising musicians. While the work of the artist is crucial in designing the structures and providing models for soundwalking and listening strategies, participants are encouraged to engage actively and creatively with the soundwalk, to use improvisational tactics more than pre-planned routes to respond to sound immediately in the space, then to bring listening insights to discussions following each walk. The focus on listening underlines the important role of the audience as well as the listening artist. In some iterations of the project, an interactive installation gives audiences the opportunity to dynamically move sound around a space using motion tracking of gestural movements, as well as to contribute immediately to the sound sources of the installation using a live microphone as input. In other cases, depending on the context, audiences are offered logbooks to dialogue on paper with the work, through writing or drawing. In yet other cases, extended live discussions are the focus of interaction. We compare technologically sophisticated approaches to audience reflexivity, such as tracking of gestural movement using computer technology to allow sound mixing, and the use of social media to represent and disseminate soundwalks, with other approaches such as logbooks and live discussion. Participants’ responses during installations and post-soundwalk discussions are collected, thematically explored and consulted in the research and creation.
Our integration of reflexive modes allows us to compare how audiences engage with different methods for interacting with the work, and how these methods are combined within the activity, depending on context. Audience participation in the work is not tied to any one technology, and therefore allows for a variety of different modes of engagement. Borrowing an idea from Laurel Richardson, this is conceptualised as a crystalline practice. Crystalline interaction integrates several perspectives by using more than one method of recording ideas and sensations (e.g. sound recording, writing, drawing, gesturing, videotaping, and interaction with computer systems), and by inviting the participation of several people simultaneously, creating a multi-faceted crystalline structure of representations, integrating many perspectives and modalities.
This approach to reflexivity in soundwalk art is influenced by contemporary thinking about conceptual art, especially in relation to live performance and interactive installations. Edward Shanken (2002) points out that art history has drawn distinctions between conceptual art and art and technology, but that the overlaps between these areas can be fruitfully considered. Shanken discusses early work in art and technology which moved towards the realm of conceptual art and was concerned with how artistic concepts were communicated to audiences, while later work in the field became more concerned with the materiality of the particular technological apparatus than with concepts or how they are communicated. I am interested in maintaining the insights of early work in art and technology while making use of contemporary technologies in a more consciously communicative way, not neglecting the important role of lower-tech methods such as writing and speaking but rather integrating a wide range of approaches and paying attention to how these interact and affect each other.
The presentation at the Electroacoustic Music Studies Network meeting in Stockholm will discuss recent soundwalks and soundwalk art projects undertaken by the research group, drawing out themes about how audiences interact, listen, and make meaning in each context. Extensive quotes from soundwalk participants will be used along with short video examples of sound pieces created from the walks and installation, to discuss the many ways that people have engaged with the sounds of places and their social, political and sonic meanings, through this project.
University of Huddersfield, Department of Music and Drama, England
New technology makes an extended sound universe available for composing in 11-14 music curricula. It offers pedagogic opportunities for pupils to learn about composing music by exploring sonic possibilities that exist beyond traditional paradigms based on pitch, time and timbre. It increases the potential for pupils to exercise creative thinking as part of the composing process, shaping and structuring sounds in ways that show meaningful intention. However, recent investigations in the context of the National Curriculum for England found insufficient use of digital technology in 11-14 music education, and that its use is often limited to basic MIDI sequencing and notational score writing (Ofsted 2009; Savage 2010). It is the potential and the challenge of the convergence of new technology, original music composition and composing pedagogy at 11-14 years that is the focus of this paper. It considers theoretical issues and discusses learning resources for developing music composition practice in the classroom.
1. THEORETICAL RELATIONSHIPS
1.1 An extension of the sonic universe
New technology affords an extension of the sonic universe beyond the pitch, time and timbre paradigm. It allows the inquisitive composer to explore sonic possibilities extending beyond the capabilities of the finite lattice frameworks and acoustic instruments traditionally found in Western music. Such a sonic universe offers a sound space of continuums and possibilities for composing ‘where every sound and imaginable process of transformation is available’ (Wishart 1994 p.9). Continuums exist between the fundamental sonic properties of pitch, duration and timbre, offering the potential for dynamic morphology within the domain of such a sonic space. This has led to developments in compositional techniques and in how sound might be shaped, organized, perceived and understood (Wishart 1996; Landy 2007; Dean 2009).
1.2 Technology in an 11-14 year music curriculum
In England, the use of technology in the 11-14 year music curriculum is statutory. It should be used ‘to create, manipulate and refine sounds’, including ‘the use of music technologies to control and structure sound in performing and composing activities’ (QCA 2007 p.183). Curriculum opportunities should be provided for creativity and for exploring ways music can be combined with other art forms and subject disciplines (QCA 2007 p.181). Savage (2005, pp.178-179) warned that ‘just adopting new technology in the classroom will not effect any meaningful educational change. There needs to be a wider appreciation of the working practices that accompany such technologies’. Educators should be clear about how technology can be used pedagogically to help develop musical intelligence and foster a culture of creativity (Brown 2007; Price & Savage 2011).
1.3 Creativity and composing music
‘For many music educators, creativity is at its strongest in the act of composition’ (Barnes 2001, p.92). The compositional process allows the mind to explore ideas which, at the point of being imagined, may be worth very little musically, but there comes a point when imagination and decision-making become involved (Gardner 1993 p.100). Musical imagination is the ability to think in sound, to realize and manipulate ideas in sound towards musical structures of meaningful intention. It is the positioning of sounds together, and what might thereby be conveyed, that constitutes composing (Wishart 1994; Paynter 2000; Swanwick 2001).
The development of such cognitive processes in music composing can be predictable, progressive and sequential. These stages are represented in a musical development spiral showing the modes and characteristics of each stage of development from infancy to adulthood (Swanwick & Tillman 1986). ‘At times it may be necessary to re-activate the spiral again, for example when working in a new idiom, or on a new piece of music as a composer, performer or listener, but it should not be short-circuited’ (Swanwick 1988). This concept suggests that such re-activation might therefore be necessary when introducing new technology for composing.
In considering the theoretical issues of technology and creative thinking in an 11-14 year music curriculum, the intention here has been to clarify and illuminate their interrelationship. In practice these domains become interdependent and ought not to be considered in isolation. What follows is a discussion of practical learning resources that might be constructive for music educators who are considering alternative uses of technology for composing in their own music teaching.
2. LEARNING RESOURCES
2.1 A discussion for music educators
Kirkman (2009) categorizes digital technologies that can be used to support a musical curriculum. These are broadly listed as mobile systems, web-based services, computer-based tools and hardware or user interfaces. In practice terms these range from mobile phones, mp3 players, e-learning and networking platforms, computer workstations and DJ software through to turntables and gaming platforms offering sound capabilities. For music educators wishing to plan and design their own units of work incorporating such technologies, Kirkman (2009) provides detailed information, guidance and learning materials to assist in this. These include exemplar materials, mapping documents and existing units of work.
In the course of framing learning perspectives on computer music education, Rudi & Pierroux (2009) provide insight into and discussion of music software as learning tools. They identify that whilst many of the software tools for musical applications available on the open market are designed for professional or semi-professional users, a growing number are becoming available for non-specialists. These may be particularly suitable for 11-14 year old users, who are sometimes experiencing such technology for the first time. Rudi discusses a range of software designed for a variety of contexts, although of particular interest might be NOTAM’s (Norwegian Network for Technology, Acoustics and Music) DSP software application. It has a particular focus on computer music approaches as an alternative to more pitch-based approaches. It is an application for composition and signal processing, making use of common synthesis and signal processing methods, whilst also providing built-in help features. It has undergone continuing development since its initial publication (Rudi 1997) and has since been renamed DSP02. The application is available as a free download: a cross-platform, Java-based, standalone application.
Music educators interested in approaching composition pedagogy from alternative perspectives beyond notation paradigms may also be interested in learning resources currently under development by the author. These include software tools currently being developed in secondary schools as part of action research in England. Each tool implements a time or frequency domain processing technique designed to be simple to operate yet effective in scope. The processing techniques employed in each tool run in real time and include independent pitch-shifting and time compression/expansion, delay, filtering, reverberation and ring modulation. These particular techniques have been selected to allow exploration of sonic parameters whilst introducing the user to typical manipulation techniques. The tools are intended to be easily deployed in a classroom environment, to encourage creative thinking in the exploration and manipulation of sound prior to the actual sound structuring process. The tools are programmed in Max/MSP by Cycling74 and are compiled as standalone applications for both Windows and Apple computer platforms.
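As an indication of the kind of processing the tools expose, here is a minimal sketch of one of the techniques listed above, ring modulation. The classroom tools themselves are built in Max/MSP; this Python version only illustrates the underlying signal operation, which is simply sample-by-sample multiplication with a sine carrier.

```python
# Ring modulation sketch: multiply the input by a sine carrier, which
# replaces the input's spectrum with sum and difference frequencies.
import math

def ring_modulate(signal, carrier_hz, sample_rate=44100):
    """signal: list of float samples in [-1, 1].
    Returns the input multiplied, sample by sample, by a sine carrier."""
    return [s * math.sin(2 * math.pi * carrier_hz * n / sample_rate)
            for n, s in enumerate(signal)]
```

Feeding in a constant (DC) signal returns the carrier sine itself, which makes the effect easy to demonstrate to pupils before applying it to recorded sound.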
Music educators interested in alternative approaches to composition beyond note- and duration-based frameworks may be interested in learning resources recently exhibited (HCMF 2011) by Baudel (2008). A software application named High C draws inspiration from Xenakis and his work with UPIC, where sound events are created through graphical input. High C is a graphical music creation tool incorporating a sequencer and mixer. The sound engine draws upon a variety of sound synthesis methods, including modulation, additive and granular techniques, and can be expanded to offer samples as the source waveform. The user creates elementary sound objects by sketching graphical representations of pitch, duration and envelope in a GUI. Sound objects can be manipulated and layered over time, enabling the composition to evolve into more complex structures. A basic version of the software is available as a free download, enabling young people or non-specialist users to quickly experience and familiarize themselves with an alternative approach to using new technology for composing. A more advanced version with increased functionality is available at a nominal cost (circa €30). The computer platforms supported include Windows®, Macintosh® and Unix/Linux. The software is supported by a number of learning resources, including manuals, tutorials and exemplar material. Researchers might be interested in the information offered from the field studies and in Baudel’s offer to get involved in collaborative empirical research with High C in the classroom.
In conclusion, this paper concentrates upon issues of technology for composing in the 11-14 music classroom in England. It presents a theoretical interrelationship between new technology, creative thinking and music composition pedagogy. It presents some examples of technological learning resources and discusses what these might afford music teachers operating within music composition contexts at 11-14 years. It is intended that such a discussion may prove stimulating and constructive to music educators, not only in England but globally, who are interested and engaged in developing the use of new technology in music curricula for composing activities at 11-14 years.
Barnes, J. (2001) ‘Creativity and composition in music’ In: Philpott, C. and Plumeridge, C., (eds.) Issues in Music Teaching. London: Routledge/Falmer.
Brown, A. (2007) Computers in music education: amplifying musicality. Oxon: Routledge.
Baudel, T. (2008) High C: draw your music. [online] URL: http://www.highc.org (accessed 19.12.11)
Dean, R. (2009) (ed.) The Oxford handbook of computer music. Oxford: Oxford University Press.
Gardner, H. (1993) Frames of mind: the theory of multiple intelligences, 2nd ed. London: Fontana Press.
High C download available at http://www.highc.org/download.html (accessed 19.12.11)
HCMF (2011) Huddersfield Contemporary Music Festival. Huddersfield, UK: University of Huddersfield, 16-25 Nov 2011. http://www.hcmf.co.uk
Kirkman, P. (2009) Embedding digital technologies in the music classroom: an approach for the new music National Curriculum. [online] National Association Music Educators. (accessed 8.1.2009), URL:http://www.name2.org.uk/proj/ncm2.php
Landy, L. (2007) Understanding the art of sound organization. Cambridge, MA.: MIT Press.
Ofsted (2009) Making more of music: an evaluation of music in schools 2005/08. London: HMSO. (HMI ref.080235)
Paynter, J. (2000) ‘Making Progress with Composing’ In: British Journal of Music Education 2000 17:1 pp.5-31. Cambridge: Cambridge University Press.
Price, J. & Savage, J. (2011) Teaching secondary music. Sage Publications Ltd.
QCA (2007) ‘Music: programme of study for key stage 3’ In: The National Curriculum 2007. London: Qualifications and Curriculum Authority (QCA).
Rudi, J. & Pierroux, P. (2009) ‘Framing learning perspectives in computer music education’ In: Dean, R. (ed.) The Oxford handbook of computer music 2009 (26) pp.536-555
Rudi, J. (1997) ‘DSP-for children’, In: Proceedings of the International Computer Music Conference (ICMC), Thessaloniki, Greece, Sept 25th-30th.
Savage, J. (2005) ‘Working towards a theory for music technologies in the classroom: how pupils engage with and organise sounds with new technologies’ In: British Journal of Music Education 2005 22:2 pp.167-180. Cambridge: Cambridge University Press.
Savage, J. (2007) ‘A survey of ICT usage across English secondary schools’ In: Journal of Music, Technology and Education 2007 1:1 pp.7-21. Bristol: Intellect Ltd.
Swanwick, K. and Tillman, J. (1986) ‘The sequence of musical development: a study of children’s composition’ In: British Journal of Music Education 1986 3:3 pp.305-339. Cambridge: Cambridge University Press.
Swanwick, K. (1988) Music, mind, and education. Oxon: Routledge.
Swanwick, K. (2001) ‘Musical technology and the interpretation of heritage’ In: International Journal of Music Education 2001 37 pp.32-43. Sage Publications Ltd.
Wishart, T. (1994) Audible Design. Orpheus the Pantomime Ltd.
Wishart, T. (ed.) (1996) On Sonic Art. Oxon: Routledge.
During the last few years some impressive initiatives centring on the anecdotal music of Luc Ferrari have been implemented all over Europe, which tell us a lot about the difference between the concepts of meaning and meaningfulness in that kind of music and those in so-called ‘traditional’ music. The principle of (re)écouter, re-listening, as the basis of re-mixing, offers considerable input to the scientific debate on how to deal with the internal and external meaning of pre-existing sound in audio art.
At the initiative of the ‘Presque rien’ association and Ferrari's widow Brunhild several competitions focusing on a work or a sound collection of the composer have been held to encourage new compositions with a current perspective on the material. The results are as different as they are impressive. Not only by looking at the sounds selected by the re-mixers but also by asking about the context, we can learn a lot about the different levels of the meaning of a pre-existing sound within and without a context, and thus learn more about the meaningfulness of that kind of sound in general. Looking for similarity in the ‘understanding’ of a characteristic sound and its use, we can discover a lot of information. By asking whether these propositions differ very much from Ferrari’s original proposition, we discover a kind of spirit of time forming the meaningfulness of sound.
In this paper the observational position of the re-mixer is introduced as a departure point, taking account of the fact that this position has to consider two kinds of ‘meaning’ – the anecdote itself, somehow taken from the real world, and the anecdote of anecdotal music. The somewhat provocative question therefore arises of whether meaning in audio art is generally only generated by the difference between the two, and whether this process of generating meaning is reproducible at the same level, as would be the case with a quote in the abstract world of traditional instrumental music. Finally, the paper asks when it might be useful to introduce the category of meaningfulness as a fixed term into the analysis. We ask polemically whether categories from the analysis of ‘note-based’ music, such as that of the quote, are useful at all for anecdotal music. A quote is a quote is a quote – but is it anything else?
The paper presents examples taken from several works composed by Luc Ferrari at different periods of his life. Primarily I am looking for examples which match Ferrari's own idea of anecdotal music, which will be briefly introduced.
Bearing in mind that the composer wants his listener to build up his/her own anecdotes while listening, we will have a look at the building blocks for that procedure, looking for something like a guiding difference. Here my assumption is that the interpenetration of internal meaning (a dramaturgic one) and external meaningfulness (a communicative matter) develops the character of a guiding difference.
After examination of this formal principle of structuring external meaningfulness with an internal meaning, characteristic of Ferrari’s method of composition, the question is asked whether external meaning can be relevant for the observation of anecdotal music, too. Returning to the re-mixer’s perspective, we look at the sounds as such, constructing an ideal-typical characterization:
This area will be mentioned only briefly because it is not really of relevance to the discussion of Ferrari’s compositions. Here, all kinds of sound are embraced, including natural sound in its simplest form and function, outside any communication process, as is the case in selected sound walks and installations with a decidedly ecological approach (even if this is rare).
With regard to the ‘Petite Symphonie intuitive pour un paysage de printemps’ it is shown why this category is not of interest in this case, and thus that thinking in categories of meaning and meaningfulness sustains the paradox that the non-social is rendered impossible.
Here the central problem is the balance between the internal and external meaning of sound in art work, by which I mean all pre-existing sounds with a communicative character. Thus, sub-categories are developed – following the chosen examples – which look for the levels of meaning in the use of voice and language, of machines and so on.
At the end of this section the question is asked whether anecdotal music can be dealt with on a social level at all. This marks the introduction to the most paradoxical as well as the most interesting field of external meaning.
This will be a central point of the talk, because it appears to be prototypical of more than one of the main questions asked above. Here I first of all refer to examples wherein the anecdote consists of pre-existing music: recorded sound which was already music before. It has to be asked at this point which moments of meaning persist, which are reproduced and which are – perhaps – newly constructed. The role of pre-existing internal meaning is a central point for examination, in particular the point at which meaning becomes meaningfulness and vice versa. Thus it is not simply works such as ‘Strathoven’ which are of interest here, but also works using music deriving from a previous social environment.
Other kinds of sounds will be mentioned, too, but will be related to one of the fields mentioned above in terms of their functionality.
Finally, these categories are used to develop a new perspective on the idea of meaningfulness which can be used as an intermediator between the social and the aesthetic.
Thus, the central and provocative question of the talk is this: is meaning re-mixable? I offer this up to the debate on possible analytical procedures for anecdotal music.
Nagoya City University, Japan
1. Social and cultural background in Japan
This paper focuses on the Japanese reception of Schaeffer in both a technical and an aesthetic sense. The name of Pierre Schaeffer and the basic method of making a musical piece with recorded sounds were imported in 1952. But during the sixties, when Japanese technology and the economy changed drastically, the Japanese method of creating musique concrète separated from the thought and theory of Schaeffer.
After several years of preliminary creations by Japanese composers, the techniques and machines for creating musique concrète were developed in an original Japanese way, while the texts of Schaeffer seem to have been interpreted not directly from the French but through commentaries or partial translations by Shibata, Moroi and others. As the style of electronic music changed, Schaeffer’s statements became less influential in Japan during the sixties.
What was the actual reception of Schaeffer’s texts? There were several mediating steps between Schaeffer and Japanese society. I will show several Japanese texts which testify to the representative stages of interpreting Schaeffer’s thought.
The presentation will consider in what sense, or within what framework, Japanese music professionals understood the meaning of musique concrète. I will not speak much about the musical pieces themselves, as the first generation of Japanese electroacoustic music is broadly known, but will discuss the aesthetic and theoretical thinking of Japanese composers, musicologists and theorists.
The main sources are articles in art journals and music magazines. Some articles in <Ongakugeijyutsu> written by Shibata and Moroi were the foundation for the understanding of the history of electronic music in its early days.
2. First acquaintance with musique concrète
Musique concrète was introduced into Japan by Toshiro Mayuzumi in 1952, after he had experienced musique concrète at the GRMC in Paris. Mayuzumi observed the activities of the Club d’essai at the GRMC and participated in the concerts of 1952. The concerts Mayuzumi heard in Paris were the <Deux Concerts de musique concrète> held in the <Salle de l’Ancien Conservatoire> on May 21st and 25th. The concerts comprised Schaeffer’s <Etude aux chemins de fer>, <Etude pathétique>, <Air d’Orphée> and <Symphonie pour un homme seul>, and the projection of the film <Masquerage> with musique concrète by Schaeffer, as well as concrete pieces by Messiaen and Boulez.
In the pamphlet for the first concert of musique concrète and electronic music in Japan (1956), Mayuzumi wrote of the strong impression the Paris concerts had made, even though he mistakenly gave the venue as the <Salle Gaveau> rather than the <Salle de l’Ancien Conservatoire>. He wrote excitedly:
The concerts held in Salle Gaveau in May 1952 had impressed me so strongly that my musical life had radically changed.
This phrase should be interpreted in the light of two facts. First, it was written in the pamphlet for the audience of the concert, and the concert was intended to present electronic music as well as musique concrète; in other words, the sentences were meant to stimulate the audience through exaggeration, together with the glamorous name of a Parisian theatre. Second, the concert was held after Mayuzumi had fabricated a three-piece set of electronic music at NHK: Music for sine wave by proportion of prime number, Music for modulated wave by proportion of prime number, and Invention for square wave and saw-tooth wave.
Just after Mayuzumi came back to Japan in July 1952, he made concrete music for the film <Carmen Jyunjyosu>, premiered in November 1952. The music included Morse signals and some mechanical sounds. Mayuzumi continued to create with recorded sounds and composed <X, Y, Z for musique concrète> in 1953, a piece internationally known as the first Asian concrete music. The radio drama titled <Boxing> had more sophisticated sounds. Mayuzumi also used the electric instrument Clavioline. It can thus be said that Mayuzumi gathered information, began to create musique concrète and electronic music, and used an original electronic instrument within the four years from 1952 to 1956. Mayuzumi’s creative activities and their sounds were decisive for the image of musique concrète and electronic music in Japanese society, and there was less of a distinction between musique concrète and electronic music in Japan than in Europe, as written in the articles by Mayuzumi, Shibata and Uenami. Makoto Moroi and Mayuzumi created <Shichi no Variation> (1956) in the NHK studio for electronic music, and after its creation they wrote important reports on how they regarded both electronic and concrete sound materials.
Some critical articles were published, such as those of the composer Hikaru Hayashi. They seemed helpless because the critics had no theoretical or verbal information about musique concrète. It is strange but understandable that the music critic Taro Matsumoto was convinced that musique concrète arose from the various self-centred interpretations of performers.
3. Meaning of the concrete sounds – enlightenment and education by Shibata
It was Minao Shibata who introduced the conceptual meaning of musique concrète. In 1947 Shibata started to work at NHK as a commentator on European music. His descriptive report on <Shichi no Variation> was the starting point for Japanese theory and aesthetics of musique concrète, while Mayuzumi’s earlier text about his <X, Y, Z for musique concrète> had described the composer’s personal hearing.
It was also Shibata who commented on NHK radio when <Shichi no Variation> was broadcast. Wataru Uenami, a director at NHK, testified that Shibata had attended most of the production, even though it was not his own piece. Shibata’s tape music included <musique concrète for stereophonic broadcasting> (1955) and <Road to Rome> (1961). His creations in the NHK electronic music studio included only <Improvisation for electronic sounds> (1968) and <Display ‘70> (1969), even though he often worked at NHK and regularly advised the studio technicians who were working for composers. It is also noticeable that Yuji Takahashi said his <Phonogène> (1962) was created in the NHK Electronic Studio with Schaeffer’s phonogène.
After Shibata became associate professor of musicology at the Tokyo University of Arts, his historical viewpoints and his terminology for contemporary music were widely received and used. In addition, in the sixties and seventies Shibata was so active in presenting Japanese electronic music internationally that he became an authority on the Japanese history of contemporary music. He gave a lecture titled <Music and Technology in Japan> in Stockholm in June 1970, on the occasion of a UNESCO conference on Music and Technology.
Shibata emphasized that Japanese traditional instruments carry much more noise than European ones. He explained that the concrete sounds in a musical piece are autonomous expression even though their original sources are easily identified. In his 1990 book <Hearing the Japanese Sounds> he wrote of how Japanese ears have acquired an acoustic space because of the noises the instruments carry.
We can also refer to Takemitsu’s text of 1975, titled <Mirror>:
I dare to say, intuitively, that sounds in Japanese traditional music deny the scale to which they belong. The more refined and prominent each sound is, the thinner the meaning of the scale becomes; at the same time each sound becomes zero, and the nearer we come to natural sounds, which are filled with unique sounds and are nothing as a whole.
Toru Takemitsu, <Mirror>. In: Ki no Kagami, Sogen no Kagami, 1975, p.20
4. Schaeffer as information theorist
During the decade 1965-1975, linguistics and information theory affected musicology in Japan as well. Musicologists, especially ethnomusicologists, were very positive about introducing Euro-American theories. The situation was related to the computer music of those days.
The aforesaid UNESCO conference in 1970 was important globally as well as for the Japanese reception of electroacoustic music, or electronic music and computer music. Participants included J.C. Risset, Max Mathews, Pierre Schaeffer, Minao Shibata and others.
Three years later (1973) a quarterly for avant-garde art, <tranSonic>, was launched. Volume 4, published in autumn 1973, featured <space of technology> and included translations of Max Mathews’ article on the technology of the 1970s and of Kurt Blaukopf’s <Space in the electronic music>.
The philosophical magazine <Episteme> devoted a special issue to <Sound/Music>, including Schaeffer’s text of 1971, <Music, language and information theory>, translated by Usaburo Mabuchi. <Episteme> (Aug.-Sep. 1976) also included Xenakis’s interview with D. Jameux.
The musicologist Yoshihiko Tokumaru defined computer music in 1965 as computer-assisted composition. Tokumaru himself is an ethnomusicologist, and one of his research interests was oral elements and semiotics. He discussed Dufrenne and Nattiez, but not Schaeffer as a theorist.
It was only after French spectral music and noise music such as Japanoise became widely popular that Schaeffer attracted the attention of composers.
Dr James Mooney
School of Music, University of Leeds, UK
My aim in this paper is to propose a framework for pedagogy and analysis in technologically-mediated music by applying the principles of affordance and constraints to the use of musical tools, having first extended those principles with a new concept: that of the spectrum of affordance.
My ultimate aim is to maintain the primacy of the aesthetic experience in technologically-mediated music whilst acknowledging the fact that, for students and practitioners, direct engagement with the technology remains necessary. My purpose, therefore, is primarily to make meaning and meaningfulness in electroacoustic music more easily accessible to students and practitioners, though perhaps what I have to say might also be more widely applicable. The motivation for this paper comes, in part, from my experience as a teacher of so-called ‘music technology.’
Background and Terminology
The theoretical background against which my discussion takes place draws upon writings by Kelly (2010), Landy (2007), McLuhan (1964), Waters (2007) and others. My conceptual starting point, however, is the account of ‘affordances’ and ‘constraints’ given by Donald Norman in his book The Design of Everyday Things.
– The affordances of a tool relate to the actions that it is possible to carry out using that tool. Luke Windsor (2000) gives the following simple example: ‘A cup [...] affords drinking.’ The cup gives its user the opportunity to drink; drinking is therefore an affordance of the cup.
– The constraints of a tool determine those actions that it is not possible to carry out. Perhaps the cup can hold only 100 ml of fluid. This is a constraint insofar as the cup does not afford its user the opportunity to carry more than this volume of liquid.
Viewed in this particular way—and it is certainly not the only way to understand the notion of affordance; see for example Clarke (2005), Waters (2007), both in some sense echoing Gibson (1979)—the affordances and constraints of tools are functions of design, and it is this position that is adopted by Norman.
An important aspect of Norman’s argument is the suggestion that affordances and constraints should be carefully contrived by product designers in order to ensure that the correct outcomes are achieved by the user as easily as possible, ideally without even thinking about it. This makes sense when applied to the design of everyday objects (door handles, light switches etc.) that are intended to have a single clearly-defined functional purpose that is more important than any alternative or secondary use. The model is problematic, however, in contexts where the tools are used in many different ways to achieve many different outcomes, as is the case in music. To apply the concepts of affordances and constraints as described by Norman directly to the use of musical tools is, therefore, correspondingly problematic.
It is for this reason that, in a recent journal article, I proposed the concept of a ‘spectrum of affordance’ (Mooney 2010), in recognition of the fact that tools actually afford their users multiple things, and can be used in many different ways to achieve many different outcomes.
– A spectrum of affordance ranks the possible outcomes according to how easy or difficult they are to achieve, such that the most easily achieved outcomes are at one end of the spectrum, and the most difficult to achieve are at the other.
The spectrum of affordance is dependent upon a number of things, among them: the physical design of the tool; the user; the context. If any of these things changes, the spectrum of affordance will also change. The spectrum of affordance for a given tool is, therefore, not a simple fixed list of potential outcomes, but rather a multi-dimensional array of possibilities conditioned by many interacting variables. For the sake of a simple example, however, one might suggest that it is very easy to smash a cup (let’s assume for this example that it is a fragile cup made out of porcelain), ever-so-slightly less easy to drink from a cup (because it requires more skill), relatively difficult to juggle with three cups (more skill again), and practically impossible to fly to the moon in a cup. The cup has a spectrum of affordance with the ‘easiest’ affordances at one end of the spectrum, and ‘impossible’ at the other, though we acknowledge that the affordances and their ranking are tool, user, and context sensitive.
A Description of ‘The Problem’
My work as a ‘music technology’ lecturer has, over the years, involved tutoring students in the use of audio software and hardware for various musical purposes. This includes composition, the creation of soundtracks to accompany films, sound recording and production, and the use of MaxMSP to achieve ostensibly creative ends such as the design of interactive instruments and multimedia installations.
Some students, of course, produce very good work, that is, work which is some combination of aesthetically interesting, thought-provoking, emotionally engaging, or culturally or politically relevant, as well as being merely technically proficient. Of those students producing less-good work, I would like to suggest two stereotypes.
On the one hand, some students become so lost in the technology that they lose sight of any creative goals or aspirations they might once have had; or, rather, the aspiration becomes simply to fiddle with the technology. In other words, the technology becomes both the means and the end. When I talk to students about their creative aspirations, quite often they tell me, ‘I would like to do something with MaxMSP.’ This is rather like a novelist deciding that their new book will ‘do something with the English language.’ The student is interested in the tool itself, but has no clear idea of what they would like to achieve musically or aesthetically.
On the other hand, some students engage with the technology using only the most obvious ‘path of least resistance’ interactions, resulting in music that is tool-driven rather than aesthetically-driven. A simple and quite common example is the over-reliance on preset synthesized sounds or other default settings that can easily be identified in the sounding results. Unlike student 1, student 2 does not necessarily claim to be interested in the technology itself, appearing more preoccupied with musical considerations. However, upon closer examination it turns out that it is actually the software that has made many of the musical choices on behalf of the student, and the student is not aware of this.
I am going to refer to these two stereotypical scenarios as:
In both scenarios the technology has, in one way or another, hijacked the creative process, diverting attention away from the realisation of musical objectives and focusing it, instead, upon the technology itself. This diverts attention away from what really ought to be the primary motivating factor in any musical activity: the aesthetic experience.
Towards a Solution: Research Questions
From this predominantly pedagogical perspective, then, my questions are:
a) Why is it that students (and, no doubt, some professionals too) often become trapped in tool-driven or techno-centric exercises instead of creating music that is aesthetically satisfying? What are the mechanisms by which this happens?
b) What can be done to address this?
From an analytical perspective, and hopefully contributing toward the answers to questions (a) and (b):
c) To what extent, and in what ways, might the affordances and constraints of musical tools (and their corresponding spectra of affordance) shape musical results, both on the level of individual compositions and performances, and on the broader cultural level of musical genres, praxes and idioms? (And, implicitly, what other—non-deterministic—forces are at work?)
d) How might this be incorporated into a broader analytical framework, alongside formal, structural, cultural and aesthetic considerations? (And, implicitly, to what extent is this useful?)
The analytical and pedagogical perspectives should not be thought of as separate but, rather, as mutually reinforcing.
Eric Clarke (2005), Ways of Listening: An Ecological Approach to the Perception of Musical Meaning (Oxford: Oxford University Press).
James Gibson (1979), The Ecological Approach to Visual Perception (New Jersey: Lawrence Erlbaum).
Kevin Kelly (2010), What Technology Wants (New York: Viking).
Leigh Landy (2007), Understanding the Art of Sound Organisation (London: MIT Press).
Marshall McLuhan (1964), Understanding Media (London: Routledge).
James Mooney (2010), ‘Frameworks and Affordances: Understanding the Tools of Music-Making’, Journal of Music, Technology and Education, vol. 3, issues 2 – 3, pp. 141–154.
Donald Norman (2002), The Design of Everyday Things (New York: Basic Books).
Simon Waters (2007), ‘Performance Ecosystems: Ecological Approaches to Musical Interactions’, Proceedings of EMS07, ‘The Languages of Electroacoustic Music’, De Montfort University, Leicester.
Luke Windsor (2000), ‘Through and Around the Acousmatic: The Interpretation of Electroacoustic Sounds’, in S. Emmerson (ed.), Music, Electronic Media and Culture (Aldershot: Ashgate), pp. 7–35.
Music Department, Concordia University, Montreal, Quebec, Canada
I was delighted to read the theme of the conference, but then hesitated. It was the same feeling I had when someone mentioned aesthetics a couple of years ago. Surely these are central concepts which should drive my thinking, teaching, and creative work - but in fact they never seem to figure in my discourse, whether internal or external. I don't think about meaning at all while composing; but surely the results are not meaningless? Doubt haunts me. I undertake further reflection before turning to Wikipedia, which confirms the range and most of the specifics of the various usages I had determined. (I am reminded of the word rhythm, which I have taken to using only with qualifiers, as one of those evocative words which is asked to denote too many things.)
For my talk, I will focus on typical usages and understandings of the term "meaning" by musicians who (like myself) lack extensive training in philosophy, metaphysics, semiotics, etc. Likewise, the presentation will make few references to studies in these other fields, but will instead articulate personal interpretations of various manifestations of meaning in (electroacoustic) music. This format is intended to stimulate reflection among the conference attendees without demanding fluency in the language of the experts; conversely, I will count on the impatience of my more learned colleagues to supply clarifications and contradictions as necessary. I will explain this approach as underlying my book-in-progress Conversational Musicology and explain why I have chosen to advocate it rather than apologize for an apparently less rigorous method.
I propose to answer several of the questions listed in the conference call from my own (shifting) perspectives of composer, analyst, and teacher. As I perform all these roles in both the acoustic and electroacoustic fields, I will reflect also on the differences that seem to arise in each. This would be supplemented by a few observations of and contributions from both students and colleagues.
I am most at ease with the context of meaning in music in the semiotic sense: semantics, image schemas, lexicology, etc. I will explain how I see this approach linked with the latent associations we have with sound, such as those described by Rolf Inge Godøy, and how I am incorporating such considerations very explicitly in my own compositions, in my research, and in teaching.
I am increasingly fond of an approach to composition that involves a kind of conceptual mapping: sonic elements are created and developed with (increasingly) clear ideas of the identity and behavioural characteristics of each. The character and behaviour of each suggest how to express the result of an encounter within the composition (merge, submerge, distort, etc.). In such a process it becomes useful to employ models that are clearly imagined with a high degree of inner coherence (even if that coherence is, for example, 'chaotic' or 'ephemeral'). I recently assigned an exercise to my upper-level undergraduate course in electroacoustic composition: to identify and record a few short, more-or-less repetitive rhythmic patterns - at least two of which could have been encountered in the Bronze Age, and at least one of which could not. One of my motivations for this is a hunch that if, as suggested by Bregman et al., the human system is well adapted to our environment but evolves extremely slowly, then latent associations with 'Bronze Age' rhythms (for example) may be potent, albeit in a different way than a direct association triggered by a recognizable sonic tag.
If meaning is linked to clarity of intention, then presumably part of a composer's skill involves being able to construct "believable", coherent images. I suggest that this is at some level more of a challenge in EA, because there are no physical constraints preventing illogical behaviours in a sound configuration; conversely, a performer will usually try to 'parse' what they perceive as a (sub-)phrase or gesture, which contributes a sense of coherence to the line shape/texture/character being performed (even if not intrinsically present). On the other hand, the EA practitioner is more likely to be aware (at some level) of this danger and therefore to take more care to shape each gesture at, for example, the spectral level as well as in amplitude and spatial treatment.
Therefore, in answer to "What are the differences between delivery via large arrays of loudspeakers, iPod earplugs, internet, installations or live performance?", I would say: only to the extent that the means of delivery reinforces or weakens the image. Live performance can enhance the quality of the received image if the performer(s) grasp the image they are meant to help bring to life. Likewise, in response to "How does the intermedial environment affect meaning in music and across media?", I would argue that the coherence or contrast between the two (or more) media will affect their perception (contrast having the potential, like coherence, to clarify the characteristics of each element).
There are two other senses of the word 'meaning' which I wish to explore more freely, which I could call the empirical and the existential. For example, I find it easy to imagine (if difficult to articulate to a skeptic) that listening to, creating, performing and sharing music somehow gives 'meaning' to my life, and that similarly, by creating new music, I might be contributing meaning for others. I think this has to do with a sense that art has the potential to help humans retain a perspective on life which transcends the mundane and commercial. In addition, there is another sense of music's 'meaning' which might be thought of as lying between the semiotic and the existential, although I include it in a very broad semantic sense: the idea that music can represent, and thereby explore, concepts which are generally difficult to grasp, such as the nature of time or the structure of the universe. Xenakis delighted in music's power for exploring time and in being unfettered by things like gravity, which had apparently constrained his ideas as an architect. Can we conceive of a "sonification of time", just as the 'music of the spheres' can be understood as a sonification of planetary motion?
In the penultimate section, I reflect on ways in which a professor might encourage a strong output of more 'meaningful' compositions. In the final section, however, I admit that as a composer I am tempted to defy the need for meaning, and I speculate on the possibilities of music which is enjoyable in spite of, or because of, being designed without regard to meaning: a kind of anti-sonification.
Faculty of Arts and Social Sciences, University of Technology, Sydney
The loudspeaker as artefact
Devices that amplify and transduce electromagnetic signals to audible sound are essential to electronic and electroacoustic music. As Jonathan Sterne suggests in The Audible Past (2003), sound-reproduction technologies are artefacts of particular practices and relations (p. 7). This idea, I argue, is not limited to reproduction technology but can be extended to all sound transducers, i.e. loudspeakers and microphones. A clear example concerning the latter can be found in images of popular singers since the late 1920s. Rudy Vallée was identified early in his career through his use of a megaphone at concerts (McCracken 1999; Vallée 1930). Later, Vallée and other 'crooners' such as Bing Crosby became synonymous with microphone singing (whether for a recording, radio broadcast or concert) and, like many pop stars after them, would be pictured with a microphone at or in hand. Several pop idols developed iconic styles of using a microphone not strictly related to its function (think of Freddie Mercury's bottomless microphone stand, or Roger Daltrey's microphone swinging). The microphone has become a powerful symbol for the musics it is predominantly used for (even when merely pretending to sing, pop idols cling to their microphones).
Something similar can be said for loudspeakers. Although pop stars are rarely photographed with a loudspeaker stack in sight, for a rock guitarist a stack of Marshall (or brand-of-choice) amps is essential. Common rumour has it that such big guitar amp stacks are occasionally part of the set design and as such have nothing to do with sound.
In addition to these cultural connotations, it is interesting to look at how these transducers relate to the music that is created using them. With microphones, for both recording and amplification, there are a number of choices related to directional sensitivity, dynamic range and transduction principle that influence the microphone's 'sound' (or coloration). These choices can be considered 'social technology' as described by Jon Frederickson (1989): the patterns of cooperation and artistic conventions related to the production of (musical) performances.
Likewise, there is a social technology related to the use of loudspeakers, offering a number of choices that, apart from a certain (or marketed) 'sound', result in very different relations to the acoustics of the environment in which a loudspeaker is used. By comparison, musical instruments have their own very particular relation to an acoustic (Meyer & Hansen 2009). Newer loudspeaker technologies address that relation to a room very specifically. The currently omnipresent (at least in the entertainment industries) line-array loudspeaker systems are supposed to offer more control over the directivity of the loudspeaker. Other developments aim particularly at replicating the directional radiation patterns of acoustic instruments, for instance by using (digitally controlled) spherical loudspeakers. Wave field synthesis aims at recreating a sound field inclusive of a real or virtual relation to a room's acoustics.
The loudspeaker as loudspeaker
Electroacoustic music that mixes electronic sources and (amplified) acoustic sources (including, for the scope of this paper, most popular music) uses a number of different approaches. In traditional band amplification a sonic 'frame' is created using loudspeaker stacks or rigs left, right and occasionally above (centre) the act. This, in concordance with stage lighting, emphasizes the relation between what we hear and what we see, even though the sounds we hear no longer emanate from the musicians we see. A frame is attractive in mediated arts, as we see in TV, film and painting, because it sets a limit to what we should perceive as 'the music' (cf. Emmerson 2007, p. 99 and Barthes 1978, p. 69). In addition, we can think about the relation between what we hear and see at a concert from the perspective of the 'sound hermeneutic' as brought forward by Rick Altman (1980) and quoted in Emmerson (2007, p. 124).
In other domains spatial set-ups are used to achieve the opposite: sound is used to draw attention away from what happens on stage by positioning loudspeakers in (for instance) the corners of a room. Apart from incorporating space into a performance (or a work), this also emphasizes the acousmatic nature of the electronic sounds.
The meaningful loudspeaker
Musical meaning is as problematic and debatable as it is subjective; following Theo van Leeuwen (1999), I prefer to write about meaning potential, not dissimilar to the title of this conference: meaningfulness. The question of how the use of loudspeakers in performances of music adds to or takes away from that meaning potential is easier to answer in mixed music (as before, including pop music). Although dislocated in time and space (cf. Emmerson 2007, p. 143), the relation to performers remains unambiguous in the case of band amplification; in other practices that relation can become a musical parameter.
For Bob McCarthy (2007), author of one of the few modern sound system engineering books, a loudspeaker is neutral: "A central premise of this book is that the Loudspeaker is not given any exception for musicality. It's Job is as dry as the wire that feeds it an input signal: Track the Waveform" (p. 18). This statement is easier to defend for concerts where the acoustics of the venue are of little influence, or where the balance between direct sources and amplified sources is in favour of the latter. A rock band in a romantic 19th-century concert hall (and its 20th- and 21st-century cousins) poses a problem in the sheer volume of acoustic sources (e.g. snare drums), backline and monitoring. Because of the co-presence of acoustic sources and reproducing loudspeakers, amplification is not a neutral transmission channel (Blaukopf 1992; Moles 1966).
In electronic and acousmatic music the loudspeakers are the only sound source, lacking obvious causality. Relations to interactions on stage (laptop, novel musical instruments and interfaces, etc.) can become more ambiguous, adding to that music's complexity. The reproducing or producing transducers (loudspeaker, headphones, synthesized sound wave) signify and delimit the context. The choice and set-up of loudspeakers determine how a work, a composition or an interaction is brought into the natural, acoustic listening environment.
Loudspeakers can modify or create meaning potential, or meaningfulness, in relation to the performances of music they are used for. One way is through the artefact a loudspeaker becomes when in view. In addition, the presence, typology and set-up of loudspeakers or a loudspeaker system in relation to a venue's acoustics is decisive for the context of a concert or performance, and as such a parameter of that event's meaningfulness.
Altman, R. 1980, 'Moving Lips: Cinema as Ventriloquism', Yale French Studies, no. 60, pp. 67-79.
Barthes, R. & Heath, S. 1978, Image, music, text, Hill and Wang.
Blaukopf, K. 1992, Musical life in a changing society: aspects of music sociology, Amadeus Press, Portland, Or.
Emmerson, S. 2007, Living electronic music, Ashgate, Aldershot.
Frederickson, J. 1989, 'Technology and Music Performance in the Age of Mechanical Reproduction', International Review of the Aesthetics and Sociology of Music, vol. 20, no. 2, pp. 193-220.
McCarthy, B. 2007, Sound systems: design and optimization: modern techniques and tools for sound system design and alignment, 1st edn, Focal, Oxford; Burlington, MA.
McCracken, A. 1999, '"God's Gift to Us Girls": Crooning, Gender, and the Re-Creation of American Popular Song, 1928-1933', American Music, vol. 17, no. 4, pp. 365-95.
Meyer, J. & Hansen, U. 2009, Acoustics and the performance of music: manual for acousticians, audio engineers, musicians, architects and musical instrument makers, Springer Science+Business Media.
Moles, A.A. 1966, Information theory and esthetic perception, University of Illinois Press, Urbana [Ill.].
Sterne, J. 2003, The audible past: cultural origins of sound reproduction, Duke University Press, Durham.
Vallée, R. 1930, Vagabond dreams come true, E.P. Dutton & Co. Inc., New York.
Van Leeuwen, T. 1999, Speech, music, sound, Macmillan, Houndmills, Basingstoke, Hampshire.
Per Anders Nilsson
University of Gothenburg, Sweden
In this paper we discuss the meaning of musical instruments, particularly an electronic instrument intended for ensemble improvisation called the exPressure Pad. A basic premise is that the actual instrument in play mediates and actualizes particular aesthetic ideas, comparable to a composition. In play, a multitude of interactions and cross-relations occur: between the players, between the instrument and the player, and between the player and the musical outcome. However, the inherent properties of the instrument delimit what we can and cannot do; it is therefore feasible to claim that the instrument directs and informs our playing as much as we shape the musical output.
We distinguish between two modes of music making, which we call design time and play time. Design time is activity outside chronological time, which deals with articulation and application of ideas and knowledge, whereas play time is about real time activity where interaction with the environment, embodied knowledge, and the present are at the forefront. Iannis Xenakis (1992) claims: “Music participates both in space outside time and in the temporal flux” (p. 264). British improviser Edwin Prévost (1995), by referring to British composer Cornelius Cardew, talks about “the two modes of music-making” (p. 59). Finally, improviser, instrument builder, and author Tom Nunn (1998) distinguishes between the intellectual mind and the intelligent body (p. 40).
How do design time and play time connect to each other? Implemented theories and playing techniques are interdependent, and necessary to take into account when discussing musical instruments. Norwegian musicologist Tellef Kvifte (2007) states that playing technique is intimately bound to music theory and the music produced. Kvifte describes the relations between instrument/playing, musical sound, and music theory/notation:
The instrument and playing action are meaningful because of their relationship to sound and theory. The theory is meaningful because of its relationship to the sound and the instruments, and the sound is meaningful because of its relationship to the instruments and theory (p. 89).
It is unthinkable to conceive of music theory without music, or a theory without any kind of sound generation device. The output from any instrument must be understood as music, and, as Kvifte asserts, a musician aims to experience music rather than physical variables.
Design of the exPressure Pad
When we started to develop the exPressure Pad we formulated the following guiding question: "How can we explore and control complex electronic sound spaces in improvisation, retaining the millisecond interaction that is taken for granted in acoustic improvisation but has somehow gotten lost in electronic music?"
The design of our instrument makes use of commercially available equipment: the M-Audio Trigger Finger, a MIDI controller consisting of an array of sixteen pads that send velocity and pressure data, in addition to a number of faders and knobs; mapping and sound generation take place within a Clavia Nord Modular G2.
In order to design an exploratory instrument such as the exPressure Pad, one must think in potential, rather than trying to imagine all possible combinations of parameter values. Therefore, we chose a vector implementation operating in a multidimensional musical space. The design consists of a set of fifteen randomized vectors in a fourteen-dimensional synthesis parameter space. Each individual pad (1-15 in Figure 1) on the interface is assigned a particular vector. All vectors add up to arrive at a single point in the parameter space of a monophonic sound. It is possible to explore the parameter space around the current point in all directions. Sound morphology, such as attack and decay times, is under direct control from designated knobs and faders.
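As an illustration only, the vector-sum mapping described above can be sketched in a few lines of Python. This is not the authors' actual Nord Modular G2 patch; all names, and the assumption that each pad's pressure linearly scales its vector, are ours:

```python
import random

DIMENSIONS = 14   # dimensions of the synthesis parameter space
NUM_PADS = 15     # pads 1-15, each assigned one randomized vector

random.seed(1)
# Fixed randomized vectors, chosen once at design time.
pad_vectors = [[random.uniform(-1.0, 1.0) for _ in range(DIMENSIONS)]
               for _ in range(NUM_PADS)]

def synthesis_point(pressures):
    """Add up each pad's vector, scaled by its pressure (0.0-1.0),
    to arrive at a single point in the parameter space."""
    point = [0.0] * DIMENSIONS
    for vec, pressure in zip(pad_vectors, pressures):
        for d in range(DIMENSIONS):
            point[d] += pressure * vec[d]
    return point
```

Pressing a single pad fully reproduces that pad's vector; pressing several pads moves to the sum of their pressure-scaled vectors, and varying the pressures explores the space around the current point in all directions.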
Initially we mapped pitch from one vector component, which resulted in a lot of glissandi and did not make musical sense. Therefore, we designed an additive pitch algorithm and superimposed it on the mapping engine: consecutive pads are assigned a chromatic scale, starting at the bottom left and increasing to the right and upwards. Simultaneously pressed pads (= intervals) add up and form a resulting interval, similar to the valves of a trumpet. This solution is compatible with our notion that higher pitches require more effort (engage more fingers), and with the design of keyboard instruments, where pitches run from low to high, left to right. The interval sum is scaled by a controller, which allows a continuum from no pitch control, via microtuning, to chromatic pitch.
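A hedged sketch of how such an additive, valve-like pitch rule might look (our own toy Python version, assuming pad i simply contributes i semitones):

```python
def pitch_offset(pressed_pads, scale=1.0):
    """Additive pitch: each pressed pad contributes its chromatic interval
    (pad i -> i semitones from the bottom-left pad), and the intervals of
    simultaneously pressed pads add up, like the valves of a trumpet.
    The controller 'scale' runs from 0.0 (no pitch control) through
    microtonal values to 1.0 (full chromatic steps)."""
    return scale * sum(pressed_pads)

print(pitch_offset([2, 5]))        # pads 2 and 5 together: 7.0 semitones
print(pitch_offset([2, 5], 0.5))   # half scaling yields microtones: 3.5
```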
The sound engine consists of two intermodulating oscillators, high- and low-pass filters, a comb filter, amplitude control, and reverb. Important musical parameters are controlled by vector components, which sum up to approximately 30 synthesis and control parameters. However, some parameters, such as oscillator waveforms, are under direct control from assigned controllers.
The development of the exPressure Pad was done within the realm of duo pantoMorf. The duo's music can be characterized as electronic free improvisation. Here, free refers to the traditions and practices of free improvisation, rather than implying no confinements at all. When designing the exPressure Pad, the intention was to realize a musical idea and theoretical concept as a playable instrument, and an essential idea was to create an instrument with the ability to create surprise. A basic playing action is to select and press an arbitrary number of pads and see what happens, and from there to take advantage of the emerging situation by applying control. By memorizing the location of certain sonic subspaces, we can control and nurture the emergent identity. In this setting, musical outcome, interactions, and instrumental properties are intimately connected to each other, and in practice inseparable. A change of any active agent involved, such as the controller, mapping, or sound engine employed, will have a significant impact on the musical output. The instrument is the music and vice versa.
What role does a musical instrument play with respect to musical outcome in improvised music? We refer to the relation that David Borgo (2005) claims British saxophonist Evan Parker holds to his instrument:
The saxophone takes on a musical identity only in interaction with a performer. Parker’s horn is not simply dependent on his playing, nor an extension of it, but in important ways his horn shapes his playing. […] Just as it is valid from one perspective to say that Parker plays the saxophone, it is equally valid from another perspective to say that the saxophone plays Parker (p. 57).
Rather than saying that the instrument plays Parker, we would say that the saxophone shapes, and to a certain extent confines, his playing. Parker understands the musical world by means of his saxophone and is shaped by it, but he also acts within it, and shapes the musical world through the saxophone.
During a stepwise design process our instrument takes shape. However, only by playing and listening are we able to experience and evaluate the implemented concepts. When pushing a pad and listening, we experience and understand what the conception truly is; we perceive the connection between theory, physical action, and the sound produced. Most likely there is always a gap between aim and result, but after each iterative step in the design process of playing-refinement-playing, our understanding of the underlying theoretical concept increases. As Merleau-Ponty (2002) points out: "To understand is to experience the harmony between what we aim at and what is given, between the intention and the performance – and our body is the anchorage in a world" (p. 167). Furthermore, it is also possible to draw a parallel to Kvifte's model, which states that music theory concepts are only meaningful in relation to an instrument: an instrument is meaningful in relation to theory and sound; a theoretical concept is meaningful in relation to playing action and the sound it produces; and a sound is meaningful in relation to theory and the instrument that produces it. The development of duo pantoMorf was as much about forming a new ensemble as about collectively learning to play a new instrument. The duo's music is as much shaped by the properties and possibilities of the exPressure Pad as we shape the music. We claim that the instrument plays the role of an open-work type of composition: it makes up a musical space which is explored and actualized at each performance.
Per Anders Nilsson - firstname.lastname@example.org
Palle Dahlstedt - email@example.com
Borgo, David 2005, Sync or Swarm: Improvising Music in a Complex Age, Continuum, New York.
Kvifte, Tellef 2007 (1989), Instruments and the Electronic Age: Toward a Terminology for a Unified Description of Playing Technique, Solum, Oslo.
Merleau-Ponty, Maurice 2002 (1945), Phenomenology of Perception, Routledge, London.
Nunn, Tom 1998, Wisdom of the Impulse, On the Nature of Musical Free Improvisation, Thomas E. Nunn, San Francisco.
Prévost, Eddie 1995, No Sound is Innocent: AMM and the Practice of Self-invention, Meta- musical Narratives, Essays, Copula, Essex, UK.
Xenakis, Iannis, and Kanach, Sharon 1992 (1960), Formalized Music: Thought and Mathematics in Composition, Pendragon Press, Stuyvesant, N.Y.
Université de Montréal, Montréal (Québec) Canada
Two things are necessary to become a good composer: on the one hand, to be able to appreciate the works of one's colleagues; and on the other, to compose works that add meaning to the history of music (why remake Henry, Harrison, Aphex Twin or Daoust?). To do so, the composer must acquire the right tools to approach creation from different angles, with, we believe, a special emphasis on musical aesthetics.
Acousmatic music in particular is largely a matter of perception. So the question is: how do we perceive? What do we perceive? And especially: do we, as a community of composers, have a common perception, a common aesthetics? Whether the answer is yes or no, in either case one must ask why. And the answer to this question, again we believe, lies in learning, in the pedagogy surrounding this music.
Electroacoustic music composition at l’UdeM
At the Faculty of Music of the Université de Montréal, we have offered a comprehensive training program in electroacoustic music since 1980. The program spans the three levels of university education: the bachelor's (3 years), the master's (2 years) and the doctorate (4-5 years). Not only are we the only university in Canada to offer such a program, we are also the only francophone university in the world to do so.
This presentation describes the program, the tools and the courses we offer. Could it be considered a model? I don't know, but it may inspire others.
Long Abstract (translated from French)
I present here my proposed paper for the EMS12 conference. For convenience I wrote it in French, but I will give the presentation in English in Stockholm, if it is accepted. I will then be able to provide an English translation of this abstract.
A pedagogical approach at the Université de Montréal: a model?
For a composer to be able, on the one hand, to appreciate the work of his colleagues and, on the other, to compose works that add meaning to the history of music (why remake Schaeffer, Henry, Harrison and Parmegiani?), he must be equipped in various ways so as to approach creation from different angles, with, in our view, a very particular emphasis on musical aesthesics (the perceptual side of music). Acousmatic music in particular is above all a matter of perception. So the question is posed: how do we perceive? What do we perceive? And above all: do we, as a community of composers, have a common perception, a common aesthesic sense? Whether we answer yes or no, in either case we must ask why. And the answer to this question, once again in our view, lies in learning, in the pedagogy surrounding this music.
The electroacoustic composition program at UdeM
At the Faculty of Music of the Université de Montréal, we set up a complete training program in electroacoustic music as early as 1980. The program was created by Marcelle Deschênes (now retired) and directed in turn by Francis Dhomont, Jean Piché and myself. It is specialized at the three levels of university study: the bachelor's (3 years), the master's (2 years) and the doctorate (4-5 years). Not only are we the only university in Canada to offer such a program, we are also the only francophone university to do so. Three professors now teach in it (Jean Piché, Caroline Traube and myself), assisted by about ten sessional lecturers. The number of students varies between 30 and 50 depending on the year, divided roughly equally between the bachelor's and graduate studies. In 2012 there are more students enrolled in the doctorate (10) than the number of doctorates awarded since its founding (6).
The program consists of a set of courses belonging to the three main areas of electroacoustic training: 1. Scientific: acoustics, psychoacoustics, analysis-synthesis; 2. Programming and tools: Python and Max/MSP programming, studio techniques, creative sound recording, audio sequencing, music and media; 3. Aesthetics: sound typology and morphology, analysis of electroacoustic music, history, creation and new technologies. In addition to these theoretical and practical courses, students take weekly composition lessons (group lessons at the bachelor's level, individual at the graduate level). Students admitted to the program must pass the Faculty of Music's entrance examinations, except for ear training (solfège), from which they are exempted throughout their studies (this is the only program at the Faculty of Music with such an exception: typo-morphology has replaced traditional solfège). Works by students at all levels are presented in concert twice a year on an acousmonium of 36 loudspeakers in our large 900-seat concert hall. In recent editions we have presented more than 4 hours of new music per edition, spread over three days, in several genres: acousmatic (the majority), video-music, mixed, live, etc.
Components of the program
A bachelor's degree at UdeM comprises 90 credits (as at all Quebec universities, unlike other Canadian universities, where it is 120 credits, because in Quebec a two-year college program sits between secondary school and university). Most courses carry 3 credits and the program lasts 3 years, so students take on average 15 credits per term. One credit represents 3 hours of class and study per week, hence 45 hours per week on average. The electroacoustic composition program comprises 42 compulsory credits, with the remainder distributed among various thematic blocks.
Learning and training
We believe that to train a composer of electroacoustic music properly, he or she must pass through several stages of training. In general, however, we favour parallel rather than hierarchical learning. With the notable exception of the course Introduction to Musical Acoustics, a prerequisite for many courses in our program, and of Typology, a prerequisite for Analysis, most of our courses can be taken in parallel with one another. In other words, we do not see the need to train students first technically, then historically, before letting them compose (the instrumental-music equivalent of the solfège-harmony-composition hierarchy). The notions learned in Typology or Programming are integrated progressively into the work of composition. We do not aim at exceptional local learning (the student writing pastiches for three years, for example) but at a global training by the end of the program.
Meaning and relevance
Why such a description? Because we believe that electroacoustic music must be embedded in a complete and original pedagogy if complete composers are to emerge from it. This can be seen quite clearly in the relationship between the number of national composers, and above all their quality, in countries that have devoted much energy and effort to advanced university training in electroacoustics (England, Canada, the United States), as opposed to those where training remains partial (Germany, France, Spain, Italy). So what is the relation to the theme of the conference? A composer must be able to create strong, exceptional works (why settle for less?), but he must do so in full knowledge of what he is doing. And he must also be able to evaluate the music of others in the same way. How many times, in the first critical exercises with young students, are we forced to note the poverty not only of their vocabulary, but also (and this is more fundamental) of their perception. Not only do they express poorly what they perceive: they perceive little. Yet perception is not the only thing at stake here. It is both the first and the last stage of the composer's path. Between the two lie, to various degrees, technological knowledge (how things are actually done), scientific knowledge (on what foundations), and aesthesic knowledge in the broad sense (what is relevant on the perceptual level?).
Thus, after his years of training with us, a composer is able not only to handle the tools to make a piece to the highest standards of quality, but also to name the constituent elements of his music and that of others, and finally to exercise critical judgement on the theoretical writings and aesthetic positions expressed by the community.
We believe that music pedagogy in our field, which is a cutting-edge field, as well as its various discoveries, should be shared among all institutions. Admittedly, we are the only ones in North America to offer such complete training in French, so competition is not really at issue for us, and it may be that concerns of that kind come into play elsewhere (in the United Kingdom in particular), but our field of activity would be considerably enriched if knowledge were shared. Why, for example, should the tools developed here and there not be Open Source? Why does an institution like IRCAM, already largely subsidized by the state, not make all its software available free of charge to teaching institutions (French students, for example, are penalized twice: not only do they pay for IRCAM through their taxes, but they must also pay a surcharge for each piece of software)? Much could have been expected of the EARS site, for example, but after more than 10 years it ultimately contains little informative content. Is it a question of resources? Of information about it? Of compartmentalization? This conference will surely provide an opportunity to discuss this. For our part, at UdeM, all of our courses and course notes are available to the community through our website, and other francophone institutions or individual professors have drawn on them extensively in recent years. We can only be proud of that.
The Faculty of Music website essentially consists of information about the various programs, the staff and our activities. For reasons of computer security that are beginning to be resolved, we have never officially posted links to all of our courses. But if you do a Google search with the keywords typologie sonore, you will immediately find the site of my course. Likewise with acoustique musicale (6th result as of today) or Analyse-synthèse (9th as of today).
Composer and Associate Professor, Music Department (Universidad Autónoma, Madrid)
This paper is part of a larger project whose goal is to find a taxonomy for live sound processing, intended to be applied from the listener's viewpoint and independently of the technological implementation.
Live processing of sound in a concert situation has a long tradition, with precedents such as Cage's Imaginary Landscape series from the 1930s, but it began in earnest in the 1960s with Mikrophonie I (Stockhausen) and other pioneering works. The study of this practice is closely related to mixed electroacoustic music and live electronics.
In the 1980s the spread of the MIDI protocol introduced a new type, "event processing" (Emmerson, 1999), in which the music is represented by a stream of events that can be manipulated; this allowed the introduction of live interactive composition, with the computer reacting to the musicians' output. The earlier live "signal processing" evolved using dedicated hardware, but the increasing power of personal computers allowed the use of environments dedicated to signal processing such as Max, PD or SuperCollider. Today both signal and event processing tend to merge into a hybrid set of practices.
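The event-processing paradigm can be made concrete with a minimal sketch: music is represented as a stream of symbolic events, so a live transformation is an ordinary function over that stream. The tuple layout and note values below are illustrative assumptions, not taken from any particular work or protocol implementation.

```python
def transpose(events, semitones):
    """Event processing in the MIDI sense: music as a stream of
    (onset_seconds, midi_note, velocity) tuples that can be manipulated
    symbolically, independently of any audio signal."""
    return [(onset, note + semitones, vel) for onset, note, vel in events]

# A hypothetical three-note phrase, shifted up a whole tone "live".
phrase = [(0.0, 60, 100), (0.5, 64, 90), (1.0, 67, 95)]
print(transpose(phrase, 2))  # -> [(0.0, 62, 100), (0.5, 66, 90), (1.0, 69, 95)]
```

Signal processing, by contrast, would operate on the audio samples themselves rather than on such symbolic descriptions.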
The evolution of technology has been very rapid and has generated many problems. To name just three: a) Technical obsolescence: when a piece ages, many composers renounce live processing and prepare a new version with recorded electronics. b) A lack of musical thinking: the complexity of technological details sometimes obscures the description of sound processing from a functional musical viewpoint. c) The peculiarity of the live electroacoustic performer: there is no standard profession comparable to that of pianist or sound technician. In many cases composers are the performers or improvisers of their own works, leaving little documentation.
Some attempts at classification, tested on works produced at LIEM
Since its opening in 1989, LIEM has been involved in the production of several hundred works. In this paper we focus on a set of 25 works composed at the LIEM facilities, or produced in its concerts, between 1989 and 2009. All of them use live sound processing and are by Spanish composers. LIEM's technology has been based mainly on commercially available equipment, starting with dedicated sound processors, controlled manually or by computer via MIDI, and evolving, at the end of the 1990s, towards computer applications such as Max or PD.
Trying to classify the different types of sound processing from a musical viewpoint could be useful for music analysis and for understanding the real contribution of live sound processing to the music. A useful classification needs the definition of significant criteria that can be applied to any type of work and to the listener's experience. As a first prototype, we apply several criteria found in the literature to the proposed set of works and test their usefulness.
1. Processing severity: the degree of relation between the acoustic original and the live electroacoustic result. Following Savouret (2002), if some relation remains he calls it "transformation"; if none does, "transmutation". From the considered set of works only one used processing in this last category. This makes sense, since the presence of live acoustic instruments is usually strong and composers tend to reserve transmutation for the recorded electronics (the "tape part").
2. Time behaviour: static, dynamic or modulating, according to whether the processing does not change over time, changes in some direction, or fluctuates following a periodic or non-periodic function. In our set of works there is no clear preference among these.
3. Relationship with live acoustic instruments (Vandenbogaerde, 1972): it can be one of "division", "dialogue", "fusion" or "extension". "Division" occurs when there is no clear relationship between the original acoustic part and the processed one; none of the studied works showed this type. In the "dialogue" type there is a question-and-answer relation; 36% of the considered works had this one. "Fusion" occurs when both worlds merge into a new sound (4% of works). And "extension", when the processing is used to enrich the acoustic sound, is the most used type (60% of works). This leads us to the conclusion that most of the studied works use live sound processing mainly as an orchestration effect.
4. Relationships based on the processed sound parameter. Some of the most usually treated parameters are duration, pitch, timbre and texture. Processing of duration (time inversion, expansion or compression, delay, etc.) is used in 36% of the works. Pitch processing (pitch change, harmonising or pitch shift) is found in 44% of cases. There are many ways of transforming timbre (filtering, envelope modification, delays shorter than 50 milliseconds, etc.); in the case study 64% of the works showed this. And texture processing (proliferation, granulation, modulation, etc.) appeared in 28% of the works.
5. Relationship with space: this comes from Emmerson (1997), who proposes the ideas of "local" and "field" in live electronics. Processing that "seek[s] to extend the perceived relation of human performer action to sound production" (for instance the processing of spectrum) would be "local"; when the sound is placed in a virtual space using reverberation, echo, panning, etc., we speak of "field". An overwhelming 92% of the studied works use some type of space processing, but 56% of them combine this with local processing, leaving 36% of works that only process space. This fondness for space is normal in live electroacoustic music, and even more so in this set of pieces, which were influenced by the availability of Quadrapan, a spatialisation program developed at LIEM by Céster, Arias and Pérez (1996).
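The statistical side of applying such criteria is straightforward to reproduce. The sketch below tallies category shares for criterion 3; the tag counts are chosen so that a 25-work corpus reproduces the percentages reported above, but the encoding itself is a hypothetical illustration, not LIEM's actual data set.

```python
from collections import Counter

# Hypothetical tagging of a 25-work corpus with Vandenbogaerde's relationship
# types; counts chosen to match the shares reported in the text (15/9/1).
works = ["extension"] * 15 + ["dialogue"] * 9 + ["fusion"] * 1

def percentages(tags):
    """Return each category's share of the corpus, rounded to whole percents."""
    total = len(tags)
    return {tag: round(100 * n / total) for tag, n in Counter(tags).items()}

print(percentages(works))  # -> {'extension': 60, 'dialogue': 36, 'fusion': 4}
```

The same tallying applies to any of the criteria above once each work is tagged.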
Finding an appropriate taxonomy for live sound processing can help to characterise, classify and analyse a repertory of works. We have tested several criteria on a set of works by Spanish composers. Collecting statistical data based on those criteria has shed light on tendencies in compositional practice and, at the same time, can show whether the criteria are useful or not.
The statistical results for this set of works show that the most used type of live sound processing is placing the source in a virtual space, followed by timbre transformation. I think the reasons for these preferences make sense and are coherent with the evolution of Western music. First of all, the spatial dimension of music, and playing with space, have always fascinated musicians, and electroacoustic media have made such interactions very easy to implement. On the other hand, live electronic timbre transformation is a variant of extended instrumental practices, a very common tendency that began in the last century; both belong to the steady search for new sounds.
City University Northampton Square, London, UK
Introducing a set of principles treating levels of perceived structure, and transformations, in textural processes, this paper discusses the aesthetic consequences and potentials of an entropic continuum, wherein one direction is productive of morphological articulation and heterogeneity, and the other leads to deterioration and homogeneity. The objective is to draw attention to the malleability of a structural context created by textural processes, and the potential for musical adventure discovered in the polarity of focused application of energy on one hand, and chaotic dissipation on the other. A conceptual framework is created in a derivation from two sources: (1) the concept of entropy originating in thermodynamics (Coveney and Highfield, 1991), and (2) art theorist Rudolf Arnheim’s discussion of the inconsistency between the notions of order and disorder in the perspectives of physics and art, respectively, in his essay Entropy and Art (1971). Borrowing from Arnheim’s terms, a classification of textural entropic processes is presented. A discussion of structural levels and hierarchies in texture follows, emphasising the context-dependency of concepts such as equilibrium, order and disorder. Examples from the acousmatic works realised as part of this research are used as illustrations for the various aesthetic processes, and problems, that are treated in the paper.
The paper is part of a research project seeking to contribute to the understanding of texture as a spectromorphological phenomenon and its potentials for articulating spatial structure in acousmatic music (Nyström, 2011). The project is particularly oriented towards exploring the possibilities of textural spatiality afforded by multi-channel composition formats, an area of current interest in acousmatic music. Textural manifestations of spatiality are explored through both composition and theory in order to create a network of principles that build on Smalley’s writing on space-form (2007), but also offer fresh perspectives on aesthetics in acousmatic music. Texture, as a forming principle or a sonic phenomenon, is here understood as a surface quality or a sonic collective, typically appearing as a mass within which interior activity is present or at least suggested. Smalley’s suggestion that “music which is primarily
textural ... concentrates on internal activity at the expense of forward impetus” (1997, pp. 113-114), in contrast with the linearity and goal orientation associated with a more gesture-oriented aesthetic, is here viewed as pertinent to the present discussion, where non-linearity can be connected to the principle of entropy. However, it is also shown here that entropic processes can be conducive to the formation of gestural events in what Smalley has termed texture-carried music (ibid., p. 114).
Entropy, originally “representing the unavailability of a system’s thermal energy for conversion into mechanical work, often interpreted as the degree of disorder or randomness in the system” (Oxford Dictionary of English, 2010), is a measurement of the loss, or dissipation, of energy when it undergoes a conversion from one form to another. It is here found to be a principle pertinent to a discussion of acousmatic textural structure, from several analytical angles. First, textures often have source-bonded relationships with matter through which energy potential flows. The temporally non-linear utilisation of energy, typical of texture as a structuring principle, suggests that it is often animated through such a process of perpetually injected energy that is dissipated without resulting in forward motion. Second, following this, entropy can be viewed as relative to listeners’ expectations of motion in texture – a context of suggested available energy and its possible utilisation in music through time. Third, a generalised spatial, or environmental, quality is afforded by the inability to perceive certain aspects of lower-level organisation in texture. And finally, the diffuse qualities and ambiguities often emerging in texturally layered music suggest a form of speculative order that ties in with the notion of a liminal, entropic stage in transitions towards increasing or decreasing complexity of structure – the degree to which structure is articulated within texture, and its relation to macroscopic qualities, is applicable to the notion of an entropic continuum, which in turn suggests entropic processes of change.
Extrapolating relevant aspects of the concept of entropy from its origins towards the present context, a new definition applicable to acousmatic music is offered in order to individuate the term. This is done with awareness, however, of the potential incoherence that can result from transporting principles from one discipline (physics) to another (music). For this reason, Arnheim’s eloquent translation of the thermodynamic view of entropy, order and equilibrium into the realm of art and humanistic perspectives on structure is discussed (1971). Arnheim argues that the thermodynamic view of the universe is contrary to our human experience of it – the physicist’s concept of disorder, resulting from thermal equilibrium, is rather a low form of homogeneous order. From the perspective of meaningful structure, however, disorder “is not the absence of all order but rather the clash of uncoordinated orders” (Arnheim, 1966, p. 125). The dissipation of potential energy, he suggests, results from two kinds of opposed processes: the anabolic tendency – a “shape-building cosmic principle”, always tending towards the ideal order within given constraints (1971, p. 31) – and the catabolic effect – a degradation of structure towards macroscopic uniformity, resulting from the removal of constraints, “comprising all sorts of agents and events that act in an unpredictable, disorderly fashion and have in common the fact that they all grind things into pieces” (ibid., p. 28). In reflection on this, I postulate that the ultimately anabolic tendency in musical structure subsumes a purposeful existence of catabolic processes in textures – a form of “useful decay.” Since music forms itself through time, the constraints of textural distributions in space are malleable, and the catabolic lowering of order in texture creates “thermal crises”, where opportunities for more complex structures appear.
This suggests an interaction among textures not unlike the dissipative processes of spontaneous organisation in matter proposed in late twentieth-century developments in the thermodynamics of non-equilibrium processes (Prigogine, 1997).
A crucial problem that follows from this thinking is the articulation and perception of the structural contexts wherein entropic processes occur; listeners’ expectations and associations with textural materiality and spatiality must be considered in order for a meaningful experience to be established. These issues are examined and elucidated with the help of musical examples from works composed in the exploration of textural processes.
Schulich School of Music McGill University Montréal, QC, Canada
This paper will describe two models for transmitting mimetic information in music. It will also outline some of the functions of placing mimetic information in an aesthetic context, and the challenges idiosyncratic to such a project in the field of music. I will focus particularly on compositional applications of sound whose 'information content' is extra-musical: sounds that reference naturally-occurring or man-made sounds not historically considered musical, and the compositional function of employing such sounds to carry 'non-musical' information. I will propose two approaches to aestheticising these sounds that are sensitive to preserving the original information-content while enhancing or transforming their meaning. The first approach, which is purely electroacoustic, concerns the use of digital signal processing on recorded sounds; the second involves the use of software to transcribe information from recordings into notation reproducible by acoustic instruments. I will examine some key works that demonstrate these approaches and illustrate some of the technology used in their realisation, but most particularly I will outline a philosophy of composition concerned with integrating extra-musical meaning in an aesthetic context.
I will situate my discussion of mimesis in music in a framework for describing meaning that draws on a wealth of sources from different disciplines, including: information theory, structuralism, musical semiology, and soundscape theory. While there are numerous kinds of meaning music can be said to convey, the focus of my study is on acoustic reference to 'real-world' imagery: in brief, sounds which are mimetic in that they are directly representational of extra-musical objects, which may also be called iconic signifiers from a semiotic perspective1.
The two approaches to mediation of mimetic sounds I will be describing involve an intentional encoding of such a signal for an aesthetic purpose. While all forms of communication (and representation) involve an encoding process, I will distinguish between encodings created in an attempt to preserve the original information-content of a sound as clearly as possible, and mediated encodings which abstract the signal in some way for an artistic purpose. I will be adapting models from information theory and musical semiology in order to describe this process. (Shannon 1948, Nattiez 1990)
The first model of mimetic mediation is very familiar to practitioners and scholars of electroacoustic music. The use of digital signal processing to 'encode' a signal with aesthetic information is one of the central tools of electroacoustic composition. However, I would like to outline a philosophy of digital signal processing which is centred around a sensitivity toward the original information-content of the signal. It will be no surprise that processing can be applied to recordings in ways and extents which completely transform the source beyond recognisability: this is often a favoured means of generating musical material for acousmatic compositions. This can be virtuosic and musically interesting, but it also carries new signifiers: often the processing completely replaces the information-content of the sound, and so the 'message' of the encoded sound becomes the same as the process: e.g. 'frequency shift', 'convolution', 'ring modulation', and so on. However, processing can also be applied in a way that extends or highlights the characteristics of the source recording. Some processes, such as resonant filters, equalisation, and reverberation can behave similarly to 'natural processes'. Even more 'extreme' processes like flange effects have natural equivalents. Sensitive applications of such processes can have the effect of placing the source sound in a different (realistic) space, or highlighting qualities already present in the sound (emphasising resonant frequencies, etc.). Similarly, processing can reflect more imaginative ways of 'looking inside' the sound – separating individual elements, or stretching time, and so on. These processes can take the sound outside of the realm of what is 'realistic' or 'believable', but when the original 'content' is preserved and extended, it has the effect of re-contextualising the sound, rather than using it as clay to be moulded into any shape. 
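As a concrete illustration of a process that replaces the information-content of a sound with its own 'message', ring modulation is easy to sketch: multiplying a source by a sine carrier discards the source's original spectral position and substitutes sum and difference frequencies. This is a generic textbook formulation, assuming a mono signal held as a plain list of samples, not a reconstruction of any particular work's patch.

```python
import math

SR = 44100  # sample rate in Hz (a conventional, arbitrary choice)

def ring_modulate(signal, carrier_hz, sample_rate=SR):
    """Multiply each sample by a sine carrier. The output spectrum contains
    the sum and difference of source and carrier frequencies, so the
    source's own spectral identity is largely replaced."""
    return [s * math.sin(2 * math.pi * carrier_hz * n / sample_rate)
            for n, s in enumerate(signal)]

# A 440 Hz tone modulated by a 100 Hz carrier yields energy at 340 and 540 Hz.
tone = [math.sin(2 * math.pi * 440 * n / SR) for n in range(SR // 10)]
modulated = ring_modulate(tone, 100)
```

By contrast, the 'sensitive' processes mentioned above (resonant filters, equalisation, reverberation) leave the source's spectral content in place and merely reshape or recontextualise it.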
This approach to processing has been described as one of the chief characteristics of Soundscape Composition (Truax 2002). Several works will be discussed in brief as illustrations of this approach to processing, including works by Barry Truax, Empty Vessels by Denis Smalley, and Streams by Jonty Harrison.
The second model of mimetic mediation I will present is the transcription of recordings into material playable by acoustic instruments. This approach has its roots in the 'spectral technique' associated with the Ensemble l'Itinéraire and, subsequently, IRCAM. Traditionally, spectral compositions have typically used instrumental sounds as their source material, manipulated to such an extent that the composition tends to lose reference to the source (analogous to processing in the first model). Some recent compositions have used this approach to allow instruments to 'imitate' extra-musical sounds with increasing success. This has been made possible, in part, by new software that assists in the verisimilitude of such orchestrations. While there have been many historical examples of 'transcribing' natural sounds in instrumental music (Messiaen's birdsong transcriptions are a good example), this new approach presents a different focus and increasing fidelity, enabling more immediate recognition of the semantic content of the source. (There is also a wealth of familiar 'programme music', but the kind of extra-musical signification found there is several degrees removed – it is, perhaps, more symbolic than iconic.) As traditional musical instruments are conceived for a harmonic music governed by its own abstract principles, transcribing recordings to be played by them naturally causes the source material to be imbued with the character of the instruments. The process inherently abstracts the source sound, much as digital signal processing abstracts sources, but as with processing, there are approaches which connect more closely with the semantic content of the source. Often the juxtaposition of the original recordings, played back electroacoustically, with the instrumental imitations facilitates the association of the mediated (in this case, transcribed) sound with its source.
Two works, Speakings by Jonathan Harvey, and Drei Minuten für Orchester by Peter Ablinger, will be presented to illustrate this approach, in addition to some of the techniques I have been developing in my own compositions.
In both instrumental and electroacoustic music there exist some barriers to the successful transmission of extra-musical information-content in musical contexts. Instrumental music is traditionally abstract in nature, and has a wealth of frameworks of infra-musical meaning that in some ways act as the 'default listening position' for understanding new works. Similarly, though to a lesser degree, electroacoustic music has a tradition of reduced listening – even works with unprocessed referential sounds are often interpreted first for their spectromorphological characteristics, and not for their semantic or narrative characteristics. While I will argue that this listening position is learned through historical and ideological perspectives on music, and not inherent to its structure, it is nevertheless an important consideration. If an abstract listening position is dominant, then mediating the mimetic qualities of source recordings through the two methods I have outlined could be seen to encourage abstract interpretations.
However, it is hoped that the approaches taken to digital signal processing and instrumental transcription facilitate interpretation of semantic content, rather than negate it. Under Simon Emmerson's proposed continuum relating musical language to materials, they fall under abstracted categories, rather than abstract (Emmerson 1986). In both cases, sensitivity to verisimilitude to the source recording, juxtaposition of unmediated source and aestheticised mediation, and musical/cultural context can assist in steering interpretation toward the semantic and narrative. In these ways, composers are not limited strictly to 'phonographic' recordings in order to create mimetic music. The aestheticisation of recordings provides a means to effectively situate semantic communication within and throughout abstract discourses, allowing both 'levels' of information to exist simultaneously and complement each other. Finally, the very process of aestheticising mimetic material expands its meaning. When divorced from their naturalistic context, mimetic sounds can be understood from new perspectives: broader sonic, conceptual, emotional, social (perhaps even ethical) narratives are possible. While the historical domain of music has been the abstract, the application of these and other tools can assist in creating avenues of meaning that have flourished in other art forms for centuries.
Atkinson, S. 2007. Interpretation and Musical Signification in Acousmatic Listening. Organised Sound 12(2): 113-22.
Bernard Mâche, F. 1992. Music, Myth and Nature or The Dolphins of Arion. Switzerland: Harwood Academic Publishers.
Emmerson, S. 1986. The Relation of Language to Materials. In S. Emmerson (ed.) The Language of Electroacoustic Music. New York: Harwood Academic Publishers.
Messiaen, O. 1994-2002. Traité de rhythme, de couleur, et d'ornithologie: (1949-1992). Completed by Yvonne Loroid. Paris: Leduc.
Nattiez, J. 1990. Music and Discourse: toward a semiology of music. Princeton, NJ: Princeton University Press.
Saussure, F. 2002. Écrits de Linguistique Générale. eds. Simon Bouquet and Rudolf Engler. Paris: Gallimard. English translation: Writings in General Linguistics. Oxford: Oxford University Press. (2006).
Schafer, R. M. 1977. The Tuning of the World. Philadelphia, PA: University of Philadelphia Press.
Shannon, C. E. 1948. A mathematical theory of communication. Bell System Technical Journal, 27. 379-423 and 623-656.
Smalley, D. 1986. Spectro-morphology and Structuring Processes. In S. Emmerson (ed.) The Language of Electroacoustic Music. New York: Harwood Academic Publishers.
Truax, B. 1994. The Inner and Outer Complexity of Music. Perspectives of New Music 32(1): 176-91.
Truax, B. 2001. Acoustic Communication. Westport, CT: Ablex Publishing.
Truax, B. 2002. Genres and Techniques of Soundscape Composition as developed at Simon Fraser University. Organised Sound 7(1): 5-13.
Wishart, T. 1986. Sound symbols and landscapes. In S. Emmerson (ed.) The Language of Electroacoustic Music. New York: Harwood Academic Publishers.
Wishart, T. 1996. On Sonic Art. Amsterdam: Harwood Academic Publishers.
1 It is important to note that there has been much scholarship problematising the closeness of the relationships implied by semiotic theory (applicable equally to the other perspectives I mentioned), which I will address more closely in the final paper. However, I will qualify that my research is based on a quasi-structuralist perspective and follows some of its essential assumptions.
M.A. Department of Musicology, University of Helsinki, Finland
Ph.D. Department of Musicology, University of Helsinki, Finland
Mr. Erkki Kurenniemi (b. 1941) was a central figure in the Finnish experimental and avant-garde scene in the 1960s and early 1970s. He collaborated with several Finnish and Swedish composers and artists, designed a series of unique electronic instruments and founded the first electronic music studio in Finland in 1962. Kurenniemi's technologically oriented approach to the composition process challenges the traditional idea of the realization of a musical work – and blurs the definition and meaning of the work-concept.
Kurenniemi's work includes electronic music, avant-garde films and media art, as well as visionary texts on technology and the future. A distinctive feature of this material is a certain unfinishedness. Most of Kurenniemi's material lacks a final touch. For example, most of his films lack a soundtrack, and most of his musical instruments were quite austere, with little consideration paid to user interface design or aesthetics.
Traditionally, works of art are analysed and studied on the level of the creative process (poetic), of the work itself (immanent), or of the listener's experience (aesthetic). This division makes it possible to study how meaning evolves in the different phases of a piece's life. In Kurenniemi's case, due to this unfinishedness, the intention of the composer is diminished or implicit.
Kurenniemi's compositions can be roughly divided into two categories. Some of the pieces were composed in a traditional manner, in which the composer's initial ideas are realized in the final version of the piece – even when composed in close interaction with technology. In addition to these official works, Kurenniemi produced a large amount of electroacoustic material to be used in compositions by him or other composers. Some of this material ended up on audio releases as such. Although this is not unlike many other composition processes of the time, and certainly not unlike the accelerating use of sampled material later in music, the use of raw, unedited and unprocessed material offers an interesting starting point for analysing the definition and meaning of the work-concept in electroacoustic music.
One interesting aspect is Kurenniemi's double role as composer and instrument designer, which appears as an integral relationship between music and technology in many of his musical pieces. The pieces Andropoidien tanssi (1968), Improvisaatio (1969), Inventio-Outventio (1970) and Deal (1971) are used as examples in this presentation. The first two began as equipment testing but gained the status of works later, when published on record. In these cases the meaning of a work arose not as the intention of a composer but as the outcome of a later process. The distinctive features of these two pieces are that they were composed and realized more or less in real time, with no editing involved in the process, and that each was composed for – or with! – one particular instrument.
The material that became Andropoidien tanssi (The Dance of the Anthropoids, 1968) was originally a test tape recorded with the Andromatic synthesizer (1968). Kurenniemi recorded the material to test the instrument just before shipping it to Stockholm. The Andromatic – a 10-step sequencer-synthesizer – was designed and built by Kurenniemi for the Swedish composer Ralph Lundsten. The instrument's name originates from Lundsten's Andromeda studio, and its first public appearance was in the Feel It exhibition in the Samlaren art gallery in Stockholm, where it produced the music for, and controlled the lights of, a plexiglass sculpture.
A similar example is the piece Improvisaatio (1969), an improvised session by Kurenniemi with the Dico synthesizer (1969). The session was documented by the Finnish Broadcasting Company YLE. Andropoidien tanssi and Improvisaatio were each realized using a single instrument. Kurenniemi was commissioned to build the Dico by the Finnish composer Osmo Lindeman, who needed such an instrument for his home studio. In Improvisaatio the work-concept is more disputable than in Andropoidien tanssi. The latter was named after an image Kurenniemi conjured up while wondering what kind of alien creatures could dance to these sounds, while Improvisaatio (Improvisation) is named simply after a real-life event.
A somewhat different case is the work Inventio-Outventio (1970). Like the first two examples, it originated as demonstration material composed with the newly completed Dimi-A synthesizer. However, unlike them, it was the outcome of a more intentional composition process. The piece was released as a 7" vinyl record to promote the new instrument. The first part of the piece is Johann Sebastian Bach's Inventio no. 13 in A minor (BWV 784) – probably inspired by Walter Carlos's Switched-On Bach recordings. The last part (Outventio) was composed jointly by Kurenniemi and Jukka Ruohomäki.
The final musical example in this presentation is the intermedia work Deal (1971). In 1971 Kurenniemi completed a new synthesizer with an optical input, the Dimi-O. The initial idea of the synthesizer was to read graphical notation with a camera and convert the notation into music. At a very early stage of the design process it became clear that a more interesting application of the instrument would be to use it in the context of experimental film or happenings. Eventually, the Dimi-O was used with a dancer who created her own accompaniment by dancing in front of the camera. Later the instrument was used with an orchestra, the camera turned towards the conductor's hands, and even in psychological tests, the camera reading the testee's expressions. The intermedia work Deal was an improvised piece, realized only as printed instructions for the dancer and the musician playing the Dimi-O. These instructions consisted only of loose boundaries within which the dancer and musician could improvise. Kurenniemi applied for a travel grant for a trip to Norway, where he introduced his instruments; for that application he needed a plan for a piece, which became Deal. Notably, Kurenniemi used the term 'intermedia' some 30 years before the emergence of the concept of multimedia.
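The optical principle described above – a camera image interpreted as a score – can be sketched in a few lines. The mapping below (columns as time steps, lit pixel rows as pitches, linearly spaced between two arbitrary frequency bounds) is a simplified guess for illustration only, not a description of the actual Dimi-O hardware.

```python
def scan_score(image, low_hz=110.0, high_hz=1760.0):
    """Read a binary image as graphic notation: each column is one time
    step, and each lit pixel triggers a pitch, with the top row mapped to
    high_hz and the bottom row to low_hz (a hypothetical linear mapping)."""
    height = len(image)
    events = []
    for step, column in enumerate(zip(*image)):  # iterate over columns
        for row, lit in enumerate(column):
            if lit:
                frac = row / (height - 1)  # 0.0 at top row, 1.0 at bottom
                events.append((step, high_hz - frac * (high_hz - low_hz)))
    return events

# A descending diagonal line produces a descending pitch sequence.
diagonal = [[1, 0, 0, 0],
            [0, 1, 0, 0],
            [0, 0, 1, 0],
            [0, 0, 0, 1]]
events = scan_score(diagonal)
```

The same loop makes it clear why a dancer's silhouette, or a conductor's hands, could serve as input just as well as drawn notation.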
Like his electroacoustic music, his instruments can be seen as works of art – although Kurenniemi is reserved about this kind of likening. According to Kurenniemi, electronic musical instrument design in the 1960s was for him merely a research project into how to apply components and digital technology in a musical context. Despite that, Kurenniemi's nearly 50-year-old historical instruments are frequently played in concerts and, especially, displayed in exhibitions around the world. Although most of the instruments are unfinished and austere, some aesthetic and usability issues were considered in the Dimi-A, which gained the status of a work of art when displayed in exhibitions.
Luis Alejandro Olarte
Center for Music and Technology, Sibelius Academy, Helsinki, Finland.
In this paper I will present and discuss a proposition for the contents of an electroacoustic improvisation class. These contents or units have been identified following a method of inductive-deductive analysis from a series of workshops given to art students at the university level in Finland. The following questions have guided the research process:
What are the values and skills required in the discipline of electroacoustic improvisation? Which frameworks, situations, exercises or tasks can be designed to develop and improve the flow of the performer’s intuition and musical thinking in the context of electroacoustic improvisation? How can the study of electroacoustic concepts (synthesis and control paradigms) be articulated with musical improvisation? What are the specific features and potential of the computer and electroacoustic tools in improvisation?
Improvisation is a powerful tool for the electroacoustic musician; intuition and exploration are at the heart of creative work in the studio or on the stage. This is mostly true of individual work; however, when playing in a mixed context involving acoustic instruments or other artists—dancers, actors, visual artists—different challenges appear. How does one follow, support, imitate, lead, disappear, argue, read and respond to a particular moment, atmosphere or group intention in a fast and accurate way? Those situations require very specific skills and attitudes that can be trained and developed by doing and practicing; therefore a working space and a structured program are important keys to the consolidation of the discipline of performance of electroacoustic music1.
Improvisation is usually understood as an interaction between a governing framework and freedom (this applies to fields as diverse as C.P.E. Bach's description of free fantasy and the jazz musician's improvisation on a given harmonic background). Therefore, improvisation is not total freedom, but rather freedom within pre-determined boundaries—though the utopia of total freedom is a very inspiring and fruitful thought. A classic example illustrating this relation is the concept of counterpoint alla mente defined in Tinctoris's Liber de arte contrapuncti (1477), where he describes counterpoint as a polyphonic art that can be extemporized alla mente or written down scripto.
Playing within frameworks or conducted tasks is an important tool for approaching and developing improvisation skills; consequently, it is crucial to design performing situations that isolate or focus on particular topics, structuring the contents in an increasing progression of freedom. Of course, I am not suggesting that those frameworks can be generalized, because every group of musicians has its own requirements, needs, history and particularities. I am reporting here my experience, hoping that it will be useful or inspiring for further developments by other electroacoustic musicians engaged in pedagogical research.
The inductive-deductive process of finding out the contents of an electroacoustic improvisation class started with the study of some luminaries in the field of improvisation who have pointed out important aspects to be developed by the musician practicing improvisation. In his article "Towards an ethic of improvisation", Cornelius Cardew (1971) outlines a group of seven virtues: simplicity, integrity, selflessness, forbearance, preparedness, identification with nature and acceptance of death. The last point is particularly profound: "From a certain point of view improvisation is the highest mode of musical activity, for it is based on the acceptance of music's fatal weakness and essential and most beautiful characteristic — its transience. The desire always to be right is an ignoble taskmaster, as is the desire for immortality. The performance of any vital action brings us closer to death; if it didn't it would lack vitality. Life is a force to be used and if necessary used up" — and he finishes by quoting Lieh Tzu: "Death is the virtue in us going to its destination" (Cardew 1971).
A necessary element for creating an improvisation culture is the letting go of fear. Nachmanovitch (1991) writes of "five fears" the Buddhists describe that are obstacles to our freedom to create:
1) fear of loss of life,
2) fear of loss of livelihood,
3) fear of loss of reputation,
4) fear of unusual states of mind, and
5) fear of speaking before an assembly.
Fear of speaking before an assembly is taken to mean "stage fright," or fear of performing. Fear of performing is "profoundly related to fear of foolishness, which has two parts: fear of being thought a fool (loss of reputation) and fear of actually being a fool (fear of unusual states of mind)". To these fears Nachmanovitch adds the fear of ghosts, that is, being overcome by teachers, authorities, parents, or great masters. Pianist Kenny Werner also discusses the aspect of fear and music: fear-based practicing, fear-based teaching, fear-based listening, and fear-based composing. He writes that "improvisation and self-expression require the taming of the mind, the dissolution of the ego, and the letting go of all fears" (Werner 1996).
In developing his music-learning theory, Azzara (1993) works with the term "audiation". Audiation offers a more precise definition of musical imagery, that is, aural perception and kinesthetic reaction, and a definition of how people understand and create meaning in music. He defines audiation as hearing and comprehending in one's mind the sound of music that is no longer, or may never have been, physically present: audiation is to music what thinking is to language. The abilities to retain, recall, compare, and predict are recognized as primary mental functions in Gordon's definition of audiation. Gordon suggests that for meaningful improvisation to take place, an individual must audiate what he or she is going to create or improvise.
After a period of experimenting with frameworks, guided improvisations and performance situations, I arrived at the following set of units as a proposition for the core of the electroacoustic improvisation class.
1) Awakeness and openness;
2) Mimesis and togetherness;
3) Risk and fear;
4) Selflessness and forbearance;
5) Contradiction, catalyst and interpolation;
6) Catharsis, ecstasy and histrionism;
7) Memory, anticipation and immediacy.
I will describe those units and propose strategies to articulate them with fundamental concepts of electroacoustic sound, for example: the Dirac impulse and awakeness, mimesis and dynamic envelopes, togetherness and spectral ear training, risk and feedback, fear and the limits of the audible, forbearance and binary gates, interpolation and control lines, catalyst and frequency-amplitude tracking, histrionism and the vocoder, memory and delay lines.
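To make one of these pairings concrete — memory and delay lines — the following is a minimal sketch of a feedback delay line in Python. It is not taken from the class materials; the function name and parameters are illustrative only. The point is the pedagogical pairing: the circular buffer is a literal musical "memory" whose echoes an improviser can anticipate and play against.

```python
# Minimal feedback delay line: a circular buffer `delay` samples long.
# Each input sample is heard again `delay` samples later, and the echo
# is fed back into the buffer scaled by `feedback`, so it decays over
# repeated passes -- the system's "memory".

def delay_line(signal, delay, feedback=0.5):
    buf = [0.0] * delay                      # circular buffer = the memory
    out = []
    pos = 0                                  # read/write position
    for x in signal:
        delayed = buf[pos]                   # sample written `delay` steps ago
        y = x + delayed                      # dry input plus its echo
        buf[pos] = x + delayed * feedback    # store input + attenuated echo
        pos = (pos + 1) % delay              # advance around the buffer
        out.append(y)
    return out

# A single impulse through a 4-sample delay with 0.5 feedback: the
# impulse recurs every 4 samples, halving in amplitude each pass.
echoes = delay_line([1.0] + [0.0] * 11, delay=4, feedback=0.5)
```

In an improvisation exercise the delay length and feedback amount become the "boundaries" of the framework: the player must audiate where the next echo will fall and decide whether to reinforce or contradict it.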
This hypothesis requires further testing and evaluation, but it has proven to work as a pedagogical framework for organizing and articulating, in an increasing progression, the conceptual complexity of sound technology and the creative freedom of improvisation—both disciplines being studied in a holistic manner. It is my conviction that the development of proper dexterity in performing with electroacoustic tools should be considered a fundamental part of modern musicianship, and that improvisation is a powerful pedagogical tool for the enhancement of such dexterity.
Attali, Jacques. Noise: The Political Economy of Music. 1st ed. Univ Of Minnesota Press, 1985.
Azzara, Christopher D. An Aural Approach to Improvisation. Music Educators Journal 86, no. 3 (November 1, 1999): 21-25.
Azzara, Christopher D. Audiation-Based Improvisation Techniques and Elementary Instrumental Students’ Music Achievement. Journal of Research in Music Education 41, no. 4 (December 1, 1993): 328-342.
Azzara, Christopher D. Improvisation (n.d.).
Cardew, Cornelius. Towards an Ethic of Improvisation. In Treatise Handbook. London: Peters Edition, 1971.
Dean, Roger T. Envisaging improvisation in future computer music, 2009.
Dean, Roger T. Hyperimprovisation: Computer-interactive sound improvisation. Computer music and digital audio. Middleton: A-R Editions (WI) Middleton, WI, USA, 2003.
Globokar, Vinko. Laboratorium: Texte zur Musik 1967-1997 (Quellentexte zur Musik des 20. Jahrhunderts). Pfau, 1998.
Jorda, S. Digital Lutherie: Crafting musical computers for new musics performance and improvisation. PhD. Dissertation, Universitat Pompeu Fabra, Barcelona, 2005.
Landy, Leigh. Experimental music notebooks. Taylor & Francis, 1994.
Levaillant, Denis. L’improvisation musicale: Essai sur la puissance du jeu. Actes Sud, 1998.
Lewis, George E. Interactivity and Improvisation. In Dean, Roger T., ed. The Oxford Handbook of Computer Music. New York and Oxford: Oxford University Press (2009), 457-466.
Nachmanovitch, Stephen. Free Play: Improvisation in Life and Art. Tarcher, 1991.
Nettl, Bruno, and Melinda Russell. In the Course of Performance: Studies in the World of Musical Improvisation. 1st ed. University Of Chicago Press, 1998.
Pressing, Jeff. Cognitive processes in improvisation, n.d.
Pressing, Jeff. The interplay of multicultural, improvisational, and compositional elements in the construction of large- scale electroacoustic compositions: The symphony Zalankara. MikroPolyphonie 1 (January 1, 1996).
Prevost, Edwin. No Sound Is Innocent: Amm and the Practice of Self-Invention Meta-Musical Narratives Essays. Small Press Distribution, 1997.
Rzewski, Frederic. Inner Voices. Perspectives of New Music 33, no. 1/2 (January 1, 1995): 404-417.
Rzewski, Frederic. Little Bangs: A Nihilist Theory of Improvisation. Current Musicology, no. 67/68 (Winter1999): 377.
Rzewski, Frederic. Parma Manifesto. Leonardo Music Journal 9 (January 1, 1999): 77-78.
Rzewski, Frederic, and Monique Verken. Musica Elettronica Viva. The Drama Review: TDR 14, no. 1 (October 1, 1969): 92-97.
Sarath, Ed. A New Look at Improvisation. Journal of Music Theory 40, no. 1 (April 1, 1996): 1-38.
Sarath, Edward W. Improvisation for Global Musicianship. Music Educators Journal 80, no. 2 (1993): 23-26.
Schafer, R. Murray. A sound education: 100 exercises in listening and soundmaking. Notes: Quarterly journal of the Music Library Association 51, no. 1 (September 1994): 221.
Schafer, R. Murray. The thinking ear: Complete writings on music education. Indian River: Arcana Indian River, ON, 1986.
Sawyer, R. Keith. Improvisation and the creative process: Dewey, Collingwood, and the aesthetics of spontaneity. The journal of aesthetics and art criticism 58, no. 2 (March 1, 2000): 149.
Smith, Julius O. Viewpoints on the History of Digital Synthesis. III Proceedings of the International Computer Music Conference. Montreal Computer Music Association, (October 1991), 1-10.
Stenström, Harald. Free ensemble improvisation. Goteborgs Universitet Goteborg, 2009.
Tinctoris, Johannes. Liber de arte contrapuncti, Liber secundus. Thesaurus musicarum latinarum. Bloomington: Indiana University (Center for the History of Music Theory and Literature) Bloomington, IN, United States, 1998.
Werner, Kenny. Effortless mastery: Liberating the master musician within. Jazz educators journal 30, no.2 (September 1997): 64.
1I strongly believe in the sociological and political importance of the creation and support of such pedagogical spaces. I share the opinion of Jacques Attali, for whom the political dimension of music cannot be neglected and underlies the pedagogical aspect. "Music is something to be done more than contemplated, appreciated, consumed, or exchanged. Accordingly, musical activity must not capitulate to the deterministic influence of centralized power, to overspecialization, or to the conformist forces of mass production and distribution. Composition [Improvisation] entails a loosening of restrictions and a corresponding relaxation of order. It rejects pressures to uniformity and nurtures diversity. It is, in short, a relation that is open, tolerant, and friendly to individual difference and a plurality of musics: a postmodern political economy" (Attali 1985).
Jaime E. Oliver La Rosa
Columbia University Department of Music, New York, USA
This paper is not directly about the theremin musical instrument, nor about its creator Leon Theremin. It is rather about the reactions that the person and the instrument provoked in the general public, as documented in the written press. In particular, it is about the way the theremin was used as a point of departure to imagine what electric music would be like.
In 1927, Lev Sergeyevich Termen, or Leon Theremin, went on a tour of demonstrations and concerts through Frankfurt, Berlin, Paris, London, and finally arrived in New York on December 20th, where he lived for approximately 10 years. His new instrument, the etherphone, thereminvox, or simply the theremin, intruded in the musical world causing a commotion in the press and attracting the attention of scientists, the cultural elite, and the general public. As the instrument was demonstrated, it provoked passionate reactions, receiving both praise and criticism. In its path through Europe and well after its arrival in New York, Theremin became a media phenomenon and the world’s reaction to the instrument, the myth that grew around it, and the speculations about the new music it heralded were thoroughly documented in the press.
Negative criticism was generally concerned with pointing out the instrument’s inadequacy for the Western concert music tradition, with actual performance flaws (intonation, incessant vibrato and legato), with the instrument’s monophonic nature, and so on. It was also, however, a counter-reaction to the media’s excessive praise of the new instrument. Visionary or shortsighted, this kind of criticism is, in any case, irrelevant to the aim of this paper.
Initially, the theremin was often associated with the human voice and the violin. The instrument was legitimized through using traditional western repertoire and established concert halls. On one hand, the instrument followed the traditional model of a musical instrument: that of a stable timbre over which pitch, duration and amplitude are articulated; on the other, it inserted itself in the traditional composer-score-performer-instrument-listener model. These are some of the reasons why the theremin had such a strong impact in the musical establishment of the time.
The theremin provides us with an opportunity. Although there had been several electric instruments before the theremin, it was the first instrument to draw the Western musical world’s attention to electric sounds. This attention stimulated a vigorous debate about what the role of these sounds should be in music and about what this new ‘electric’ music would be like. In these debates, a divide was seen between the instruments and music of the past and the “music of the future”.
The reactions to the theremin portray a society that embraced modernism and scientific developments, while retaining many romantic values, beliefs and practices that prefigured later reactions to electronic and computer music. These reactions are analyzed in order to identify how ‘electric music’ was first understood, and the way these first responses reflected the expectations of musicians towards technology.
The central discourse around the theremin construed “electric music” as overcoming the limitations of the physical world. This allowed the musician to express himself directly or transparently. The way in which the physical world was surpassed was articulated through several ideals: a sound purer and louder than any mechanical sound, able to take on any timbre or pitch desired; in short, an idealized medium capable of producing any imaginable sound. The theremin was seen as an immediate experience in the sense that nothing - or at least less - mediates between musician and music, allowing for the translation of thoughts into sounds. Paradoxically, the medium itself was almost always the center of attention, and most critics could not get past it to talk about the music made with it.
Many saw in the theremin an instrument so intuitive that it could be played by anyone and learnt almost immediately. Such ease of use, added to an infinite palette of timbres, contributed to the idea that the theremin could become a universal musical instrument. This led to an ultimately failed commercial attempt by RCA. Several critics, and Theremin himself, entertained the idea of creating an “orchestra without instruments”, where the music stands would be equipped with antennae and physical instruments would be discarded.
Some of these ideas resonated with the modernist trends heralded by Busoni, Russolo, Varese, Cowell, Cage, Grainger, and Schillinger, amongst others. Many of these composers demanded musical change, and expressly thought of the “gliding tone” as a liberating sound. Cowell and Grainger would take further steps in automating the production of electric sounds, building machines that could discard the performers too. The discourse of overcoming the limitations of the physical world was appropriated by the popular imaginary through movie soundtracks that used the theremin to represent unbalanced states of mind, aliens and the otherworldly.
I will argue, however, that the main innovation of the theremin is that a musical device can be - and is in fact - expressed as an operable code, in this case a schematic. Such schematics naturally belonged to the era's growing DIY culture of radio and electronics, and constituted an important medium for the storage and communication of the instrument. Moreover, such practices can be traced directly to the invention of the modular synthesizer in the 1960s, not only because Robert Moog was a theremin builder himself, but because in the theremin we can already observe the principles of modular design at work.
This paper belongs to a larger agenda concerned with understanding the way we conceive of a musical instrument in the computing era. In this sense, it tries to understand the way the idea of a musical instrument has changed throughout the 20th century, in which new electric instruments emerged and coexisted with acoustic ones within a larger technological context. The ability to represent the device as information opened up new possibilities for storage, replication, exchange, education, realization, modification, and so on. This operable code, and the practices that derive from it, became a central element of electronic and computer music practices.
Tae Hong Park
School of Music Georgia State University Atlanta, GA, USA
English/Communication Dep. Georgia State University Atlanta, GA, USA
Computer Science Dept. Georgia State University Atlanta, GA, USA
In this paper we discuss the Electro-Acoustic Music Mine (EAMM) project. The project is a collaboration between the New & Emerging Media Cluster, Special Collections and Archives, Music Education Department, and the School of Music at Georgia State University, the International Computer Music Association (ICMA), and the Boston Athenæum. The main goal of the project is to create a permanent, sustainable, expandable, open, and easily accessible electro-acoustic music preservation model using paradigms that follow a crowd-sourced submission, curation, and electro-acoustic music (EAM) exploration system. EAMM will provide conferences and festivals with opportunities to archive and preserve their music, as part of the larger goal of creating a permanent and sustainable preservation model.
Electro-acoustic music is typically presented, attended, and preserved by specialists at academic conferences and festivals. Unlike the majority of popular music, it is not economically driven, nor is it readily preserved by market mechanisms or archival projects sponsored by industry, libraries, and museums. Furthermore, the EAM that is presented at conferences and festivals is typically lost after an event concludes. When the music is archived, the burden of creating such preservation systems falls on conference organizers unprepared to model, build, or maintain a proper archive. The physical inaccessibility of EAM for the expert and wider audience is an issue in itself, and the lack of information about such works limits the wider aesthetic and pedagogical potential of this young and rich musical heritage. The musical inaccessibility further affects the wider audience's access to EAM, a problem exacerbated by the lack of existing learning platforms for such “difficult” art music. These factors contribute to the difficulty of fitting EAM to existing preservation models.
Existing models for electro-acoustic music preservation
There are a number of existing models for EAM preservation; these include professional archival services, record labels, artists' personal web-pages, and other Internet-based sites. Although personal web-pages from artists are common and contribute to the preservation of EAM, the fragility of such sites highlights key negative points: (1) accessibility: works are randomly spread over the Internet; (2) sustainability: a temporary archive model that can go offline at any time; and (3) filtering: no reasonable filtering mechanism, burdening the user with sifting through a tremendous ocean of data. Record labels dealing with EAM are, at best, few and far between. That in itself results in neglect of the vast majority of potentially significant works that fall through large cracks. Additionally, record labels operate on the basic framework of economic viability, which is often at odds with elements of aesthetics and artistic merit, and also involves the complexities of politics, the “name value” (as opposed to musical value) of composers, and inherent biases that record labels (rightfully) may have.
Some of the existing electro-acoustic music-related archives include the Database of Recorded American Music1 (DRAM), the International Digital ElectroAcoustic Music Archive2 (IDEAMA), and UbuWeb3. DRAM is a non-profit resource with an archive boasting 3,000 albums of recorded American music, including some EAM. The archives are from recordings provided to DRAM by independent and small record labels. Although DRAM is a wonderful resource for new music and some EAM, perhaps its biggest drawback is that it is limited to American music recordings that are already available on the market. For example, a DRAM search for the EAM of pioneers John Chowning, Max Mathews, Daniel Teruggi, and Barry Truax yields no results. The service provided to users is not free, but is available through participating universities and public libraries. IDEAMA was created in 1988 in an effort to preserve the “most endangered early [electro-acoustic music] works” up to around 1970. In 1990 the project developed further into a collaboration between Stanford University and the Zentrum für Kunst und Medientechnologie Karlsruhe (ZKM)4, where 570 works, selected by an “international advisory board,” are now archived. These are very valuable collections in mp3 format, and the database, now maintained by ZKM, has grown since the 1990s to include newer works under the <mediaartbase.de> framework. The IDEAMA archive has some drawbacks as well, as it catalogs works only up to 1970. The extended ZKM archive, which includes additional newer works, is based on self-submitted contributions or goes through ZKM. UbuWeb is yet another resource for EAM and contemporary avant-garde music; it went online in 1996 and is not supported by any institution or industrial partner. Although it has a substantially-sized music archive, it suffers as a sustainable preservation and archival model due to issues concerning longevity, reliability, and the music selection process.
According to their website, “UbuWeb posts much of its content without permission; we rip out-of-print LPs into sound files; we scan as many old books as we can get our hands on; we post essays as fast as we can OCR them.”5 In short, their data collection practices are at best non-transparent and do not adhere to any type of accepted standard/format, and the site may cease to exist without warning as legal aspects and artists' rights are entirely bypassed.
Towards a comprehensive electro-acoustic music preservation model
To address some of the aforementioned issues in preserving EAM, we introduce the EAMM preservation model, intended to enhance and complement, rather than replace, existing models of digital archives and curating methodologies. This will allow various models to co-exist and also address issues concerning copyright and contractual obligations that some authors may have with record labels and publishers. EAMM key points will include effectiveness, efficiency, sustainability, expandability, technological currency, innovative interface designs, transferable technologies, scalability, low-risk structures, and adaptability to changing digital archive environments via modular system design implementation.
The EAMM project seeks to create a comprehensive preservation and exploration portal based on: (1) a filtered, crowd-sourced music collection module that is curated according to a credentialed peer-reviewing system; (2) a comprehensive archival module; and (3) a content-based analysis module, which will initially be built by porting the Electro-AcouStic music analYsis (EASY) Toolbox, to allow interactive visualization, navigation, and discovery of musical works. This third module exploits techniques from Music Information Retrieval (MIR) research to extend and enhance traditional text-based discovery and delivery systems. As of yet, no similar credentialed, peer-reviewed preservation system exists for EAM, and no MIR-based exploration interfaces exist for music archival systems.
The first module is currently being developed, and we anticipate deploying it for the 2013 ICMC conference. A number of free and open-source conference submission systems such as OpenConf6 and EasyChair7 do exist, but user feedback from past ICMC conference organizers has been consistently negative. Instead of writing the code from the ground up or using existing conference management software, we are currently working with the highly modular and flexible content management system (CMS) Drupal8. Third-party modules with specific functionality are plentiful in Drupal, making it ideal for building sophisticated web interfaces. For the archival module, we are considering DSpace9, Omeka10, CollectiveAccess11, or CONTENTdm12, as there is ample evidence of the success of these software packages, which are widely used by professional libraries, museums, and universities (each has standard models and adheres to standards like Dublin Core).
The third, content-based analysis module is perhaps the most technical and most research-oriented module in EAMM. MIR research for content-based exploration is still a work in progress, with many problems remaining unsolved for traditional music, let alone EAM: automatic score generation, automatic tagging of musical files, automatic timbre recognition, automatic genre classification, finding patterns of repetition, and global/local music structural segmentation. The reader is referred to (Casey et al. 2008; Klapuri and Davy 2006) for an overview of the state of content-based analysis research, and to (Bello 2005, 2012; Gulluni et al. 2011; Mayer and Rauber 2011; Paulus et al. 2010; Park 2004, 2005, 2009, 2010, 2011; Weiss and Bello 2010, 2011) for examples of pertinent MIR research. That is not to say there have been no significant advances, as the reader will note from the above references. Furthermore, content-based analysis research outcomes will arrive in incremental stages, and thus deploying systems that currently work, and incrementally adding sophistication through user feedback, may prove to be a good model for reaching a comprehensive content-based analysis framework. We are beginning to fine-tune the EASY Toolbox, which is currently implemented in MATLAB. After fine-tuning, the software will be ported to the server environment and will include interactive visualization formats for users to explore through our server.
The EASY Toolbox is currently implemented in MATLAB and features an interactive interface for music exploration. EASY can handle very large files, limited only by available hard disk space, and includes transport functions for audio playback. The output of time/frequency-domain analysis algorithms is displayed via visualization formats including waveform, 3D spectrogram, feature vector plotting, clustering/segmentation, and timbregram displays. The EASY Toolbox outputs quantitative information that the user can use to aid in exploring EAM.
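The kind of frame-by-frame feature vector such a toolbox plots can be illustrated with a short, dependency-free sketch. This is not EASY's actual code — function names and parameters are illustrative — but it shows one of the simplest timbral descriptors used in content-based analysis: the spectral centroid, the amplitude-weighted mean frequency of each analysis frame.

```python
import math

def dft_mag(frame):
    """Magnitude spectrum of one frame via a naive DFT (first N/2 bins)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(x * math.cos(-2 * math.pi * k * i / n) for i, x in enumerate(frame))
        im = sum(x * math.sin(-2 * math.pi * k * i / n) for i, x in enumerate(frame))
        mags.append(math.hypot(re, im))
    return mags

def spectral_centroid(signal, frame_size=64, sr=8000):
    """One centroid value (in Hz) per non-overlapping frame of `signal`."""
    centroids = []
    for start in range(0, len(signal) - frame_size + 1, frame_size):
        mags = dft_mag(signal[start:start + frame_size])
        total = sum(mags)
        if total == 0:
            centroids.append(0.0)          # silent frame: centroid undefined, use 0
            continue
        # amplitude-weighted mean of bin center frequencies (bin spacing = sr/N)
        c = sum(k * (sr / frame_size) * m for k, m in enumerate(mags)) / total
        centroids.append(c)
    return centroids

# Sanity check: a pure tone's centroid sits at the tone's own frequency.
sr, n = 8000, 64
tone = [math.sin(2 * math.pi * 1000 * i / sr) for i in range(n)]
centroid = spectral_centroid(tone, frame_size=n, sr=sr)[0]
```

Plotting such a centroid trajectory over a whole piece gives a crude "brightness" curve; descriptors of this kind, computed per frame and stacked into vectors, are the raw material behind the clustering, segmentation, and timbregram displays described above.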
Casey, M., Veltkamp, R., Goto, M., Leman, M., Rhodes, C., & Slaney, M. (2008). Content-Based Music Information Retrieval: Current Directions and Future Challenges. Proceedings of the IEEE, 96(4), 668-696.
Klapuri, A., & Davy, M. (Eds.). (2006). Signal Processing Methods for Music Transcription. New York: Springer.
Bello, J., & Pickens, J. (2005). A robust mid-level representation for harmonic content in music signals. Proceedings of the International Conference on Music Information Retrieval (pp. 304-311). London: ISMIR-05.
Juan Pablo Bello, Kent Underwood, (2012) "Improving Access to Digital Music through Content-based Analysis", OCLC Systems & Services, Vol. 28 Iss: 1
Gulluni, Sébastien, Slim Essid, Olivier Buisson, Gaël Richard. An Interactive System for Electro-Acoustic Music Analysis. ISMIR 2011: 145-150.
Rudolf Mayer and Andreas Rauber. Musical Genre Classification by Ensembles of Audio and Lyrics Features. In Proceedings of the 12th International Society for Music Information Retrieval Conference (ISMIR 2011), pages 675-680, October 24-28 2011
Paulus, J., Mueller, M., & Klapuri, A. (2010). Audio-Based Music Structure Analysis. Proceedings of the International Conference on Music Information Retrieval (pp. 625–636). Utrecht: ISMIR-10.
Park, T. H. 2004. “Towards Automatic Musical Instrument Timbre Recognition”. Ph.D. thesis, Princeton University.
Park, T. H., P. Cook 2005. "Nearest Error Centroid Clustering for Radial/Elliptical Basis Function Neural Networks in Timbre Classification". Proceedings of the 2005 Int. Computer Music Conf. (ICMC), Barcelona, Spain.
Park T. H, Li Z. Wu W. 2009. “Easy Does It: The Electro Acoustic Music Analysis Tool”. Proceedings of the 2009 International Society for Music Retrieval Conf. (ISMIR), Japan.
Park T. H, Hyman D., Leonard P. Wu W. 2010. “SQEMA: Systematic and Quantitative Electro- Acoustic Music Analysis”. Proceedings of the 2010 International Computer Music Conference (ICMC), Stony Brook, USA.
Park T. H, Hyman D., Leonard P. Hermans P. 2011. “Towards a Comprehensive Framework for Electro-Acoustic Music Analysis”. Proceedings of the 2011 International Computer Music Conference (ICMC), Huddersfield, UK.
Weiss, R., & Bello, J. (2010). Identifying Repeated Patterns in Music Using Sparse Convolutive Non-Negative Matrix Factorization. Proceedings of the International Conference on Music Information Retrieval (pp. 123–128). Utrecht: ISMIR-10.
Weiss, R., & Bello, J. (2011). Unsupervised discovery of temporal structure in music. IEEE Journal of Selected Topics in Signal Processing .
Dr. Peter Rothbart
Professor of Electroacoustic Music Director, Electroacoustic Music Studios, School of Music Ithaca College Ithaca, NY, USA
In the early, evolving years of the genre, electroacoustic music could often be defined in technological terms, not cultural or geographic ones. Musique concrète came from the Paris studio, purely electronically derived pieces from Cologne. Columbia-Princeton was known for the signature sound of its one-ton plate reverb. The unique equipment that generated the sound associated with a specific studio defined the cultural roots of the electronic studio.
But electroacoustic music itself lost what little cultural identity the technology provided when the technology went corporate and global. Only in recent years has the technology developed to the level at which we can explore the cultural influences on the composer, rather than the limitations imposed upon the composer’s expression by the equipment.
This paper presents the opportunity to delve into the ethnomusicology of electroacoustic music. As an example, I examine Jewish influences in electroacoustic music with the intention of providing a framework or model for future “ethno-electro” studies.
I begin by describing characteristics of Jewish-influenced music in the traditional sense. I then examine how those cultural markers manifest themselves in non-traditional fashion in the electronic medium. Numerous examples from historical and contemporary electroacoustic music composers will be heard, including Robert Gluck, Amnon Wolman, Josef Tal, Eitan Avitsur, Jonathan Berger, Arie Shapira, Stephen David Beck, Yehuda Yannay, Anna Rubin, Richard Teitelbaum, and Dror Feiler.
Defining Jewish influences in music is difficult enough given that the Jewish people and their culture have wandered homeless for most of the past 5000 years, assimilating local customs and influences into their own experiences. Jewish music is a diasporic one. Jews have constantly had to preserve the continuity of their culture, values and heritage while assimilating elements of the cultures in which they lived, sometimes by choice, often by necessity.
Defining Jewish influences in electroacoustic music becomes even more daunting in a medium in which we abandon traditional characteristics of form, tonality, harmony, melody and rhythmic structure. The problem then becomes one of recognizing and defining Jewish influences in a music that is no longer reliant on melodic modes, rhythmic and melodic embellishments, or even the timbres of the traditional instruments. In this sense, the problem of defining Jewish influence in electroacoustic music is the same problem encountered when trying to define any cultural influence in the electroacoustic music field. How can we define the ethnic influences in a music when we abandon the traditional means of the expression of that ethnicity?
In electroacoustic music, harmonic and melodic function, as conventionally conceived, is abandoned or at least redefined. Rhythmic structuring of time, so embedded in all musical cultures, must be re-examined as well. Spatial placement has never been a major aspect of cultural definition, though early Christian church polyphony may be the exception. Certainly the Canadian predilection for spatial manipulation in electroacoustic music must be acknowledged, but the issue of spatial movement is not inherent in Canadian cultural experience itself.
So what aspects of Jewish sounding music can be translated into electroacoustic terms? What is Jewish sounding or influenced music? How do traditional cultural markers translate in electroacoustic music?
Jeffrey Burns from Denmark is a strong promoter and performer of Jewish electroacoustic music, especially the works of Josef Tal and Arie Shapira, two winners of the Israel Prize. Burns says that, "By Jewish Music, I mean music that has been created in the cultural environment of Israel as well as that using specifically Jewish musical or topical elements and having been created in the diaspora." I prefer Burns's definition, though his term "Jewish musical elements" seems a bit circular.
We can begin by examining the fundamental aspects of music itself. One essential element of music is timbre, an area of specific interest to electroacoustic composers. Timbre is an important aspect of many musical cultures, often the defining characteristic. The sound of the shakuhachi flute is characteristic of Japanese music. The sound of a philharmonic orchestra is characteristic of Western culture. Timbral inflections are characteristic of many non-Western cultures: Arabic, Indian and Eastern European among others. The realm of timbre is one defining cultural characteristic that may be traceable in electroacoustic music.
Traditional musical aspects such as melody and harmony are hard to characterize as inherently Jewish; Mediterranean-influenced might be a better choice of terms as songs in minor keys are characteristic of that geographic area. Jewish melody is a bit clearer to characterize if one considers the "orientalism" of the eastern Jews.
Orientalism in Jewish music can be defined to a large extent by several characteristics: placement and repetitive use of the half-step, modal harmony, melismatic melodic lines, melodic ornamentation, and melodies with repeated pitches performed in a freely rhythmic manner that emulates the hazzan or leader of religious services in the synagogue. The style is distinctive, though by no means limited to just the Jewish culture. Certainly melismatic melodies are indigenous to the music, derived from ancient Arabic and Aramaic chant, along with other music gathered and incorporated during the Diaspora.
One could say just as easily that early Christian chant and Arabic music reflect the same characteristic. And this is my point: as Jews wandered the world, they both incorporated their host cultures' influences and simultaneously dispersed their own to the host culture. By signifying this characteristic of a piece of music and coupling it with other information discovered in the music and the composer's intentions, it is possible to assert that the influence is Judaic by conscious or subconscious design.
We turn to melody and ornamentation because it is in this area that we see many of the clearest musical signifiers of a Jewish influence. Jewish cantillation is rich and florid, codified in the 1400s and committed to the written page with a system of symbols that indicated the desired embellishments. Jewish music is a vocal-based one. It never wanders far from the sound of the voice, regardless of the instrument. Traditional klezmer improvisation did not wander far from the melody. The chirpings or krechs, the nyuk-nyuks, the shmears and glissandi, especially of the klezmer clarinet are clearly vocal emulating effects. Therefore we can look for the use of vocal-like sounds, effects and embellishments as a signifier. To this end, we can look for sounds in the vocal pitch range with timbres and envelopes that somewhat resemble human sounds.
Modality can be an important signifier when coupled with melodic ornamentation, melismatic phrasing and even repeated-note figures. Dorian, Phrygian and Aeolian modes can be heard in addition to the more oriental ones such as the misheberach scale. Modality is a direct connection to ancient melodies and to the linear, rather than harmonically related, structure of the vocal music. We hear this linearity in a great many Jewish-influenced pieces: sprawling blocks of sound over which a melodic (in the expanded definition) line unfolds, irrespective of a steady pulse. Indeed, electroacoustic music lends itself to the rhythmic freedom of Jewish cantillation.
Jewish influenced music can include collages and soundscapes from Jewish environments, including synagogues, religious and secular events such as prayers and prayer services, weddings, bar mitzvahs, marriages and deaths. Programmatic and texted music affords us the opportunity to explore the composers' intentions in the creation of Jewish influenced music. Texted and untexted programmatic works fall into several categories:
1. Biblically or Talmudically derived, including psalms, midrashic stories, and spiritually intended incantations.
2. Secular stories and reminiscences, including Holocaust remembrances and homages, family remembrances and homages, and stories of childhood or spiritual encounters.
3. Re-interpretations and incorporation of previously composed religious and secular pieces of music.
4. Composer derived commentaries, interpretations or representations of both current and historical political events.
5. Works based on Jewish secular and religious poetry and prose, including verbally transmitted stories.
These are descriptive categories. There is no need to strictly classify works in one area or another. The overlaps are obvious and even necessary to provide depth to the listening experience.
Historically, there is much to make music about. While Biblical tales and references are manifest in any western music whether it be art or folk, I find relatively little use of the Bible in electroacoustic music. Rather than the Bible, mystical tales, Chasidic stories, and midrashes (Jewish discussions and stories that further explore issues raised in the Torah and Bible), serve as source material.
Norwegian Center for Technology in Music and the Arts, Oslo, Norway
Sound and deliberate sonic constructions are never experienced as neutral events unconnected to space and time. Listeners will always perceive and create their experiences through a filter, and the filter is not limited to psychoacoustic negotiations, but is cultural. Any type of music or sonic expression will mean something unique for every type of listener, as well as on an individual basis, although the physical stimuli might be exactly the same. For listening to make sense thus depends on the activation of a rich set of memories and associations, which in themselves are cross-cut by social coding, past and present. Furthermore, one can say that the act of listening itself is changing as well, in the sense that the balance between the senses is never the same, but changes over time. An example can be observed in electroacoustic music, where current performance practices emphasize the visual aspects much more than was the case only twenty years ago. In combination, sound, image and other stimuli frame the current concert experience quite differently than the electroacoustic canon typically does.
The paper discusses meaning-making as a socially situated practice, as something that follows directly from the signification processes that occur when external stimuli are perceived and appropriated. Listening is discussed as the coupling of memory and association, without disregarding the objective existence of physical reality.
When considering listening and meaning-making as socially situated processes, consequences emerge for the study of all types of auditory art, clearly for technology-based music and sound art of all genres, and also for expressions that are typically grouped as soundscape art. Soundscape art often posits notions of authenticity, of providing true renderings of environmental sound through seemingly neutral recording free from artistic intent. The intention then lies elsewhere, similar to what we find in program music, where listener attention is directed towards specific topics or events. Soundscape art also embraces highly constructed soundscapes, where the combination of sounds and recording techniques renders deliberate interpretations. The notion of authenticity is criticized, and replaced with a focus on social context as the more important determining factor in the appropriation of soundscape art as well.
This focus on social context is grounded in an understanding of place in an expanded sense: how sound ties the listener to both physical place and place in social structure. In turn, consequences for the understanding of action and participation emerge, in particular where issues of ecology as a holistic approach are concerned. The acoustic ecology field largely considers modern soundscapes as noise and unfortunate disturbances of a "natural" condition, which is an unfortunate reductionist approach to the theory of the soundscape, limiting the possibilities for understanding the cross-fertilization of the surface of sonic art with its referential and social qualities. When nature is discussed as a substance not to be disturbed, and not as a principle of change, human presence and action are excluded from nature. This paper, taking its point of departure from modern theorists such as Timothy Morton and Slavoj Zizek, will criticize this position as untenable, *especially* in an ecological perspective.
This perspective of social situatedness in a broad sense connects the aesthetic consideration to the fabric of socially generated values and value criteria, and rejects the notion of a context-free art. The paper continues with a recommendation of adopting frameworks from the social sciences, and the study of technology as being socially constructed, in the analysis and understanding of music and other audio arts – the social sciences being better equipped for developing knowledge of such structures and practices than traditional musicology, including the musicology of the electroacoustic field.
The aura of technology changes over time, as do the technologies themselves. This influences the concrete expressions as they develop over time, as well as our expectations of what technology can bring. Currently, the situation appears to centre on innovative use of technology in music, rather than on new technology or new sounds in themselves. This trend has developed in parallel with an inflow of new sound practitioners: artists from other disciplines, people without formal background, and new genres of art practice that aim for other arenas and contexts than the conventional concert. These struggling genres reflect changes in the social construction around sound practitioners and audiences.
Institute of Electronic Music and Acoustics, Graz, Austria
The tape composition Hétérozygote (1963) by French composer Luc Ferrari is widely acknowledged as the starting point of the so-called musique anecdotique (anecdotic music). This piece incorporates recordings of manifold everyday life situations, which are left widely unprocessed, next to parts of instrumental and electronic music. In later works, such as the well-known Presque rien series, Ferrari refined his technique of using field recordings for creating sound collages that establish quasi-narrative strands.
Does the advent of narrative recordings in electroacoustic music change the meaning this music contains or transports, if there is a meaning at all? In particular, what role does the unveiled "everyday", Ferrari's frequently recurring main topic, play in this context? The term musique anecdotique was coined by Ferrari himself in order to distinguish his concept from musique concrète. The main point of opposition towards musique concrète surely lies in the deliberate negation of the "ban on association". Pierre Schaeffer, one of the pioneers of musique concrète, promoted listening to sounds as such, avoiding associations, also referred to as écoute réduite (reduced listening).
In contrast, Luc Ferrari declared those formerly undesired associations a constitutive element of his musique anecdotique: "My anecdotic music shall encourage the listeners to play with the images of their own surrounding, their own experience, their own dreams" (Spiegel 1971). He further justified the use of unprocessed everyday life recordings with an egalitarian approach towards the audience he wanted to reach: "The consumers of music I am interested in belong to classes of population that were excluded from the development of musical material and æsthetical criteria. They have to be given more than æsthetical objects, rather processes they can intervene in" (Spiegel 1971). Do Ferrari's opposition to musique concrète and, even more, his expressed political motivation give a hint at a different level of meaning in this genre? Obviously, approaching this question would first require clarifying what "meaning" means and, in a second step, what "meaning in musique concrète" might mean. These questions seem far too generic and too broad to be covered satisfyingly in the scope of this paper. But even assuming—for a moment—that these differences do have an impact which is transported as a certain kind of meaning to the listeners: how could this meaning be grasped? What is the meaning of using unveiled field recordings as opposed to processed sound objects, other than the opposition itself? What does using a specific field recording mean other than "I have been here" or "doesn't sound that nice"? Ferrari himself pointed out that his montages do not tell a linear story; rather, a story is constructed by the listeners' imagination (Pauli 1982). And what does the political motivation, which led to the use of everyday recordings rather than abstract æsthetical objects, mean other than the motivation itself? Apparently, this way of looking at anecdotic music does not allow for fruitful insights with respect to its possible meanings.
This paper thus starts from the general assumption that there is no meaning in music which the composer intends to transport to an audience. In a constructivist tradition—and supported by Ferrari's statements—any meaning connected to music is not understood to be present in the various manifestations of music itself (notation, performance, recorded substrate) but rather to be constructed individually by both the composer and the recipients, be it “normal” audience, critics or researchers. This understanding, of course, does not contradict the assumption that certain qualities of music substantially trigger and influence such construction processes. Assuming that music does not mean anything by itself, the questions above may be rephrased: How does the inclusion of the anecdotic, the quasi-narrative everyday, into electroacoustic music change the potential constructions of meaning by the listeners? While a certain affinity of field recordings to the triggering of imaginations might be evident in the first place, nevertheless, the assumption that the meaning of music is constructed by the listener also holds for any other genre, including instrumental or electroacoustic music which deal with abstract sound entities.
There is a long tradition of programme music which intentionally deals with external references, although they might not be even named explicitly (cf. Straebel 2010). Also, the original concept of an objet sonore which is listened to without associations is blurred by historical development. Gradually, many of the formerly autonomous sounds became anecdotic themselves. There is a growing general scepticism against the idea that listening to music without extrinsic references could be possible at all (cf. Ciciliani 2011). Is there anything special then to field recordings in musical composition?
An important quality of anecdotic elements in compositions is their relation to everyday reality. Ferrari called these elements "recordings from reality maintaining their reality value, which very concretely speak" (Pauli 1982). Still, many of Ferrari's montages obtain their æsthetical finesse from the deliberate unclarity about which compositionally related scraps of reality actually refer to the same place or situation of a recording. How many layers of disparate "realities" are interwoven at the same time? Hilberg (2005) called this an "anekdotisches Vexierspiel" ("anecdotic game of deception"). Unlike in a documentary, not only does the specific strand contained in a recording seem to be of little importance here, but the integrity of the pictured reality has also become a degree of freedom for the composer. In fact, by introducing field recordings, Ferrari did not give up on organising the sounds according to chosen principles of music composition. The structures of his pieces do not primarily evolve from structures in the recordings but are rather based on conceptual decisions. In this context, the reference of "concretely speaking reality" becomes an abstract sonic quality by itself whose musical function is decoupled from the reality of the original recording. This is another strong reason why everyday life recordings in Ferrari's compositions do not carry a certain meaning, in particular none which is connected to the represented auditory situation. Anecdotic music by itself is as meaningless as music of any other genre.
This abstraction from actual reality towards a musical entity is constituent to the special role of the "as-if reality" for the construction of meaning by the listener. Through its own meaninglessness, the everyday not only finds its way into an artificial æsthetic process but also allows mechanisms which deal with the everyday to take place in an abstract, artificial context. Some of those mechanisms are investigated by psychologists under the term "narrative identity".
The field of narrative identity assumes that humans manifest their concepts of themselves by constructing stories which lay out one's own role in the context of the perceived surroundings. Psychological research on narrative identity focuses on understanding strategies of weighting elements of reality and using them as signs for constructing a plausible view of the self. According to Kraus (1999), these construction processes include constant intersubjective negotiations about which interpretations of reality lead to socially accepted identities with certain qualities. Constructing narrative identities may be understood as a creative process taking place in an abstract domain but based on the concrete experience of the everyday. Admittedly, the research focus of psychology towards narrative identity constructions may not be directly applied to music. Nevertheless, assuming the existence of a well-trained cognitive mechanism that connects appearances of the everyday and the synthesis of narration can serve as a vehicle for investigating the role of anecdotic music in the construction of meaning.
In this context, the “anecdotic” could be understood as an invitation to the listeners to resort to their inner knowledge of identity construction. As the real everyday and the musically structured “as-if everyday” in the compositions share a similar appearance, they both interact with the listeners' experience by triggering narrative construction processes. The difference is that the existential function of identity construction may be suspended in the artificial context of listening to music, which turns this æsthetical act into an active process carried out for “pleasure”. This, of course, does not exclude any possible feedback to the listeners' real everyday life.
Concluding from that, a major quality of anecdotic music may not be described as bringing certain sound situations, possibly from far away, to the listeners' ears, but to raise the medial substrate of such situations' recordings towards abstract musical entities while still retaining their appearance of an “as-if”. This must not be misunderstood as “cheating” the listeners by undermining their perception, rather an “æsthetical satisfaction” (Thomas Mann) only becomes possible through consciously understanding the perceived as a “game of deception”, as a “Vexierspiel”. Ferrari's trick of activating our well trained mechanism of narrative identity construction also underlines his egalitarian political motivation of reaching people of all classes. Listening to anecdotic music does not need an academic training, not even an overly developed benevolence towards contemporary art: “Musik für arme Schlucker” (“music for poor fellows”, Spiegel 1971).
Ciciliani, Marko (2011): Das Ohr hört nie allein. Musikalisches Erlebnis jenseits des Hörbaren, Kunsttexte Auditive Perspektiven 4/2011, kunsttexte.de, 30.01.2012.
Hilberg, Frank (2005): Anekdotisches Vexierspiel, Die Zeit, 13.10.2005, translation by the author of this abstract.
Kraus, Wolfgang (1999): Identität als Narration: Die narrative Konstruktion von Identitätsprojekten, Kolloquium Psychologie und Postmoderne, FU Berlin, 22.04.1999, http://web.fu-berlin.de/postmoderne-psych/berichte3/kraus.htm, 30.01.2012.
Pauli, Hansjörg (1982): Für wen komponieren Sie eigentlich?, Frankfurt.
Spiegel (1971): Ferrari. Absurde Geschichten, Spiegel 5/1971, p. 135, translation by the author of this abstract.
Straebel, Volker (2010): The Sonification Metaphor in Instrumental Music and Sonification's Romantic Implications, in: Proceedings of the 16th International Conference on Auditory Display (ICAD 2010), Washington, pp. 287–294.
Performing a music piece with the voice, or with an acoustic instrument and a live electronics system, implies several different actions that involve mental processes and body movements. If we want to understand an instrument's playing process, we first need to define all the movements necessary to produce the sound; then we have to define the character of the produced sound, its spatialisation and its intensity. Finally we must decide which of these actions are essential in our performance. Playing is thus a complex action connecting mental processes and physical activities.
From this playing experience I derived the idea of building an interface simulating an acoustic instrument, able to play in real time a musical piece either related to the historical repertoire of the instrument itself or independent of it.
In this perspective, this project analyses different elements of a musical event such as playing an instrument in real time (the event's formal structure, gesture, sound). The final objective is, on the one hand, to understand the different actions needed to play an instrument and, on the other hand, to realize an interactive interface simulating the instrument itself and allowing us to:
1- Use the same gestures and procedures as those used when playing the instrument to produce sound.
2- Produce instrumental sounds simulating those of the real instrument.
3- Compose a music piece with references to the historical repertoire of the simulated instrument.
4- Compose a new musical piece.
Instruments and references
In this project the interface is realized with a wii controller simulating a violin.
Using this controller like a violin bow, we can perform music with sound synthesis in real time. The sound examples in this project are realized using frequency modulation synthesis and granular synthesis. The historical musical reference is Paganini's Caprices.
Reasons and methods
We have different kinds of analysis:
1- A gestural analysis of the movements producing sound, corresponding to the different bow strokes, needed for the interface definition.
2- A spectral analysis of violin sounds aimed at building the synthesis algorithms.
3- A formal and harmonic analysis of Paganini's Capriccio n. 20.
From the analysis we derive the following mappings used by the interface:
1- A map of gestural movements, related to the virtual interface.
2- One or more algorithms of synthesis in real time.
3- A sketch for a new composition, starting from the original piece and becoming an interactive composition.
4- A pattern from the formal analysis of Paganini's Caprice n. 20.
The Capriccio is a short instrumental form and at the same time is generally characterized by emblematic technical elements. The first Capriccio forms appeared in the Baroque period and are relatively free forms related to the origins of the instrumental repertoire. With Paganini the Capriccio became a virtuoso piece, in which we can find a timbral research. Op. 1 was composed in 1817 and, in general, the 24 Capricci have either an A-B structure or an A-B-A one.
Capriccio n. 20 of Paganini's 24 Capricci has an A-B-A structure, inside which we can find some typical kinds of instrumental gesture.
One way of using the interface relates to the Capriccio's form as a two-part composition, as shown in the following table:
Another kind of experience is to play a free composition, independent of the historical reference to Paganini but using the same kind of spectral analysis and bow movements.
All these considerations produce:
1- Interface definition: why the Wii controller?
The Wii controller is an existing technological object. It is possible to perform on it movements similar to those of a violin bow. The Wii controller has buttons with numbers, letters and arrows that we can use as control elements for the algorithms.
2- Sound synthesis techniques
To build the virtual instrument able to play a composition related to the historical reference of Paganini's Capriccio, we use FM synthesis in the first part of the composition (corresponding to part A of the Capriccio). This technique needs few parameters: fp (carrier frequency), fm (modulation frequency) and im (modulation index). Thus we have only three parameters associated with the controller's buttons.
FM synthesis can also be used to realize a new piece even when we have no reference to an existing composition.
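As a minimal illustration of the two-oscillator FM principle behind these three parameters (sketched here in Python with NumPy rather than in the Max/MSP environment used in the project; the default values are arbitrary examples, not values taken from the violin analysis):

```python
import numpy as np

def fm_tone(fp=440.0, fm=220.0, im=2.0, dur=1.0, sr=44100):
    """Two-oscillator FM: a carrier at fp is phase-modulated by a
    sinusoid at fm; the modulation index im controls spectral richness."""
    t = np.arange(int(dur * sr)) / sr
    return np.sin(2 * np.pi * fp * t + im * np.sin(2 * np.pi * fm * t))

# the three controller-mapped parameters, here at example values
tone = fm_tone(fp=440.0, fm=220.0, im=2.0)
```

Mapping each of fp, fm and im to a controller button then amounts to updating these three arguments in real time.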
To build the violin sound with FM synthesis, we perform a spectral analysis of some characteristics of violin sounds. The values obtained from the spectral analysis are used in the algorithm. In the second part of the piece (corresponding to the second part of the Capriccio) we choose to use granular synthesis. In granular synthesis a buffer containing an audio file is processed with different parameters, such as playback speed. With granular synthesis we can play different audio files and create all kinds of compositions.
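The buffer-reading process just described can be sketched as a simple overlap-add granulator (again in Python with NumPy rather than Max/MSP; the grain size, hop and sine-wave test buffer are arbitrary stand-ins for the project's audio files):

```python
import numpy as np

def granular_stretch(src, rate=0.5, grain=2048, hop_out=512):
    """Overlap-add granular playback of a source buffer: the read
    position advances at `rate` times the write position, so a
    rate below 1 stretches the source in time."""
    win = np.hanning(grain)                       # smooth grain envelope
    n = int((len(src) - grain) / (rate * hop_out))
    out = np.zeros(n * hop_out + grain)
    for i in range(n):
        w = i * hop_out                           # write position (output)
        r = int(i * hop_out * rate)               # read position (buffer)
        out[w:w + grain] += src[r:r + grain] * win
    return out

# one second of a 440 Hz sine stands in for the audio-file buffer
sr = 44100
src = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)
stretched = granular_stretch(src, rate=0.5)       # roughly twice as long
```

Varying `rate` per grain is one way a "playback speed" parameter of the kind mentioned above could be exposed to the performer.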
This project allows us to compose and to play in an interactive mode. From an analytical perspective we can:
1- Analyse specifically the movements that produce sound on a particular instrument (in this case the violin).
2- Analyse the spectral composition of the instrumental sound and the characteristics of a particular part of a piece.
All the steps of the project can be represented graphically:
1- Interface’s structure
2- Map of movements, related to the interface
3- Spectral analysis of violin sounds and of Paganini's Capriccio n. 20
4- Map of the spectral analysis
5- Algorithms (synthesis and interface control)
The software used for the algorithms is Max/MSP.
Tracing the processes necessary for playing an instrument, we come to realize the complexity of live performance and of music itself.
PhD student at City University, London, UK
Sound Material Correspondence and Temporal Relationships in Acousmatic Composition: Proposing a Taxonomy of Recurrent Phenomena
Impressions of musical structure can often be traced to sound materials that occur and recur throughout a work, and the concept of recurrence in acousmatic music composition is currently being researched and developed to provide a view of structuring processes in terms of a work’s constituent sound materials and the observed connections among them. The concept of recurrence in acousmatic music was originally presented at EMS-07 (Seddon 2007), and aims to stimulate both analytical and creative strategies; existing works may be appraised in such terms yet an awareness of the various issues may enrich the compositional process. Recent research has focused on the development of a taxonomy of recurrent phenomena, which seeks to clarify the ways in which sound material recurrences might be observed and how they relate over different timescales, providing a framework for assessing and discussing recurrence within acousmatic works. This paper will explore two significant aspects of the proposed taxonomy: the issues of sound identity correspondence and the temporal relationships existing among those corresponding identities. In this way the paper will address the conference themes of analysis, taxonomy and terminology.
The notion of recurrence is not new. Many genres of music, both present and past, make use of recurrence at different levels of structure, and examples can be found in (but are, of course, not restricted to) classical symphonies, popular songs, and jazz standards. However, the means through which recurrent phenomena may be observed within acousmatic works deserves particular attention because the range of potential sound materials and transformational possibilities available to the composer is so broad. A recurrence can be defined as a repeatedly occurring event over both short and long timescales. During this paper, the notion of musical recurrence will be expanded to encompass sound materials that evoke projection back to earlier related instances, whether overtly similar or of moderate or minimal correspondence. This might include returning states, event types, and/or the perception of their derivations through transformation processes. Accordingly, recurrences may explicitly refer to previous instances, yet subtler connections among sounds may also be perceived through particular common characteristics. In order to recur, sound material must have a strong identity and be memorable in the first instance. Striking aspects of a sound’s identity may draw attention to recurrent instances, and these aspects can usefully be considered in terms of contour, spectromorphology (Smalley 1997), source association and gist (Harding, Cooke et al. 2007; Kendall 2008).
Listening approaches and attitudes will be briefly addressed, suggesting that, in adopting a recurrence-based approach to acousmatic music, it is assumed that memory has a fundamental role in the musical experience, and that the remembered sound materials have a structural significance. Furthermore, the difference between listening in real-time and the concentrated listening of study, as highlighted by Nattiez (1990) and Roy (2003), will be acknowledged.
A taxonomy of recurrence applicable to acousmatic musical works will then be introduced, focusing on the two key areas of sound identity correspondence and temporal relationships.
Different aspects of sound identity correspondence will be considered, and a continuum of correspondence will be proposed, indicating the degree of consistency among correlating sound materials. This continuum can be viewed from the perspectives of spectromorphology or source bonding, to use Smalley’s terminology (1997), and the viewpoint adopted depends on how correspondences are most strongly perceived. The notion of spectromorphological correspondence accounts for aspects of spectromorphology that connect identities whatever their provenance, and connections perceived may draw apparently different identities together in unique ways, illuminating more covert relationships. Source-bonded correspondences will be founded on impressions of common source and/or cause among instances. In certain circumstances there will be a degree of overlap between these two perspectives; in many cases source-bonded correspondences will be perceived because the identities are sufficiently spectromorphologically consistent with one another that they share their source bondings. Additionally, source-bonded identities can usefully be considered spectromorphologically, as this may reveal potential connections aside from solely the presumed source and/or cause. The relevance of space will also be briefly addressed because (i) it is a significant and essential aesthetic aspect of acousmatic music, and (ii) all sound identities (and composites of identities) exist spatially and convey a sense of spatiality, which will affect notions of correspondence.
Temporal relationships among corresponding identities will then be outlined, based on the different timescales over which they occur. These range from lower-level to higher-level relationships, referring respectively to connections among identities at local and global levels of structure. In all cases recurrent phenomena will be perceived through the comparison of identities, assessing their aspects of correspondence. The notion of what constitutes a lower- or higher-level relationship may change as the work unfolds, and lower-level relationships may develop higher-level significance over time. The significance of structural function will be acknowledged because the nature of a temporal relationship and its musical significance is defined by the contextual role that each recurrence fulfils within the structure of the work.
Lower-level temporal relationships will be described in terms of repetition and identity variation, which can be viewed as complementary: identity variation is founded on the comparison of instances and the ways in which they differ. Different types of repetition and variation will be outlined, in relation to the musical function that they fulfil. While there is no single ‘lowest level’ structural unit for all acousmatic music akin to the note in instrumental music, discrete events arranged over relatively short timescales may establish lower-level relationships.
Higher-level temporal relationships occur among recurrent phenomena that provide a more global sense of structure, and may be conveyed by discrete identities or events, as well as spatial environments or settings. Seven categories of higher-level relationship will be proposed and described in terms of their musical significance or function. Each relationship type is founded on the notion of return, implying that an earlier instance has been ‘left behind’ in some way but is then recalled. (Return is defined in the Oxford English Dictionary as: “v. 1. come back or go back to a place. 2. (return to) go back to (a particular state or activity).”)
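Purely as an illustration (the names below are hypothetical, not the author's), the two axes just described, the continuum of correspondence and the level of temporal relationship, could be sketched as a minimal data model for annotating recurrences during an analysis:

```python
from dataclasses import dataclass
from enum import Enum

class Correspondence(Enum):
    """Degree of consistency along the proposed continuum of correspondence."""
    OVERT = 1      # strongly similar instances
    MODERATE = 2
    MINIMAL = 3    # covert, subtle connections

class Perspective(Enum):
    """How a correspondence is most strongly perceived (after Smalley 1997)."""
    SPECTROMORPHOLOGICAL = "spectromorphology"
    SOURCE_BONDED = "source bonding"

class Level(Enum):
    """Timescale of the temporal relationship."""
    LOWER = "local"    # repetition, identity variation
    HIGHER = "global"  # relationships founded on the notion of return

@dataclass
class Recurrence:
    """A perceived connection between two instances of a sound identity."""
    earlier_time: float  # seconds into the work
    later_time: float
    correspondence: Correspondence
    perspective: Perspective
    level: Level

# Example annotation: an overt, source-bonded return at the global level
r = Recurrence(12.0, 341.5, Correspondence.OVERT,
               Perspective.SOURCE_BONDED, Level.HIGHER)
```

Such a structure is only one of many possible encodings; it simply makes explicit that each observed recurrence pairs two instances and is qualified along both axes of the taxonomy.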
The contemplation of recurrence from the perspectives illustrated in the taxonomy potentially illuminates analytical and creative practice. In conclusion, future directions and practical applications of the concept will be briefly outlined.
Harding, S., M. Cooke, et al. 2007. Auditory Gist Perception: An Alternative to Attentional Selection of Auditory Streams? In Lecture Notes in Artificial Intelligence, ed. Lucas Paletta and Erich Rome, 4840: 399-416. Berlin/Heidelberg: Springer.
Kendall, G. 2008. What is an Event? The EVENT Schema, Circumstances, Metaphor and Gist. International Computer Music Conference. Belfast, U.K. Source: http://www.garykendall.net/papers/KendallICMC2008.pdf (Accessed 19th September 2011)
Nattiez, J.-J. 1990. Music and Discourse. Toward a Semiology of Music. Princeton, New Jersey: Princeton University Press.
Roy, S. 2003. L'analyse des musiques électroacoustiques: Modèles et propositions. Paris: L'Harmattan.
Seddon, A. 2007. Recurrence in Acousmatic Music: Creative and Analytical Possibilities. Electroacoustic Music Studies Network Conference. June 12 - 15, 2007. De Montfort University, Leicester, United Kingdom. Source: http://www.ems-network.org/spip.php?article281 (Accessed 29th January 2012)
Smalley, D. 1997. Spectromorphology: explaining sound-shapes. Organised Sound 2(2): 107-126.
Faculty of Music, Department of Musicology, University of Arts in Belgrade, Serbia
Is sound art still beyond music and between categories? An attempt to explain and define the meaning of the term from the musicological point of view.
For more than three decades sound art has existed as a practice that intrigues and attracts many artists who use sound as a medium of creation, as well as those who study art. Although it might seem logical to conclude that sound art, after so many years of existence, could by now be understood as a completely defined discipline, the situation is quite the opposite. The reason is the multidirectional disciplinary orientation of the term: the term sound art refers to diverse artistic disciplines whose common denominator is the use of sound, especially sound as a listening object, and this is exactly the weak point, the place of misunderstanding and scientific disagreement.
A brief diachronic view of the use of the term and the evolution of the concept will explain the complexity of the situation. The term sound art was coined by a composer (it was Dan Lander, in the mid-1980s, according to Alan Licht) as a synonym for the terms “new” or “experimental” music. Bearing in mind the various musical experiments directed toward the liberation of music/sound during the 20th century (by Luigi Russolo, Edgard Varèse, John Cage, Pierre Schaeffer and others), it is understandable why talk about sound art comes from a musical point of view. At the same time, there has been an increasing number of exhibitions at institutions of visual art – sound art entered the gallery, the museum and other spaces, becoming a medium per se, like video or lasers, “but not as performance” (here I paraphrase a definition by Annea Lockwood). The history of creating artistic works based on sound thus presents two types of sound art (or aural art, or arts of sound): those that operate in space, and those that operate in time. In other words, the term sound art has been applied to the experimental music of the second half of the 20th century and, simultaneously, to various visual practices of the same period. Though this is an almost obvious fact, theoretical discourse on sound art indicates the opposite – sound art is often seen as a practice associated with the visual arts rather than with music. A paradigmatic discourse about sound art is set out in the influential study Sound Art – Beyond Music, Between Categories by Alan Licht, published in 2007 (to which, as you probably realize, I am referring in the title of this paper). As the title already suggests, Licht defined sound art as a non-musical practice which is between categories.
In fact, he sees this phrase as a defining feature of sound art, explaining that “its creators historically coming to the form from different disciplines and often continuing to work in music and different media” (Licht 2007, 210). At the same time, this type of conceptualization found its place in other disciplines that concern themselves with the issue, such as aesthetics. An example is Andy Hamilton’s study Aesthetics and Music (2007), in which he regards music and sound art as “increasingly divergent tendencies, even though there is considerable overlap between them” (Hamilton 2007, 62). Since then, definitions of sound art have become more open and broadly conceived: “sound art as a practice harnesses, describes, analyzes, performs, and interrogates the condition of sound and the processes by which it operates” (Brandon LaBelle, 2008); “sound art takes many forms: sound installations, performances, recordings, whether for direct public consumption, or as purchasable objects to listen to domestically...” (Paul Hegarty, 2010), etc.
In order to clarify the ways of working artistically with sound in the second half of the 20th and early 21st century, these authors have mainly opted for a historical-empirical approach, seeking to present and interpret the series of events, phenomena and relations that have caused the transformation of the status of sound. It is interesting to note that the discourse of contemporary sound art was initially formed by sound artists, composers, musicians/performers (of popular genres), philosophers, and theorists of art, media and cultural studies, while the contribution of musicologists to this field is almost negligible. Such a profile of writers on sound art, together with the lack of musicological interpretation (the only one was offered by Joanna Demers, in 2010), has led to the separation of several critical areas of research into sound as a medium, i.e. as the subject/object of artistic creation, the most obvious being that which presents sound art and music as antipodes. So, my goal as a musicologist is to show that sound art can be understood from a musical point of view, and that this gives a new meaning to the term. Although there are sound art practices whose discourse is closer to the apparatuses of the visual arts, there are, on the other hand, those based on musical rules, to the extent that they are (or could be) placed on musical ground. This state of affairs could be defined as a secondary relationship between music and sound art, a relationship which is, therefore, arbitrary. It is preceded by a primary relationship between these two practices, mediated by the sound itself – what connects music and sound art is sound. In other words, the artistic transformation of sound, as well as the focus on sound as an object of perception, is in fact the meeting point of music and sound art.
From this follows the conclusion: since music is the art of sound, and since there are different arts of sound, sound art could be defined as music; namely, a sound art practice could be categorized as music if it is based on the dominance of musical parameters. At this point the intensity of the relationship between music and sound art is very high, and because of this the role of the musicologist, as a competent expert who can provide additional arguments for a discourse about these two fields, is crucial. Owing to the lack of these competencies, the interaction and networking of music and sound art is a “weak” point in the current discourse on sound art, and it is this particular problem sphere that will be the focus of my research.
This paper examines both the direct references to electroacoustic music in Adorno’s writings and the wider implications of his aesthetic theory for critique and composition. This includes Adorno’s notion of Sachlichkeit (objectivity or functionalism) and his exploration of the crisis of the relationship between subject and object in creative expression. This paper also explores the notion that the ideological critique of music can in fact be seen as being part of unreflected Sachlichkeit itself, as a rationalization of critique that reduces the possibilities of interpretation and treats its material as a mere means to an end, ignoring the resistance of the material against such forms of critique.
The Crisis of the Relation between Subject and Object
In the wake of Kant’s unknowable thing-in-itself and the Nietzschean notion of the will to power, a self-reflective act of creativity now seems bound to reflect upon its adequacy of expression and its ethical relation towards the materials used. Whereas Samuel Beckett concentrates upon the insufficiency of language and the permanent failure of the adequacy of description, Adorno concentrates upon the antagonism and contradictions present in the relationship between subject and object. Theorists such as Morton Schoolman have been keen to compare Adorno’s position with the ethical treatment of alterity in Heidegger (Gelassenheit) and Levinas (l’autre), as all three philosophers reject the notion of a sovereign subjectivity that dominates others and objects, yet this comparison seems to neglect the importance of mediation for Adorno and the reciprocal relation between subject and object. Adorno does not seek an alternative to the subject-object relation, but asks how it comes to appear as such. This paper examines the relationship between composers and the everyday sounds that appear in their work through a series of comparisons between Luc Ferrari and Denis Smalley.
The growing importance of sound-art for electroacoustic music can perhaps be seen as having its roots in this crisis of confidence about the adequacy of musical composition for what it attempts to express as increasing numbers of composers are turning to extra-musical elements such as images, texts and physical environments to aid them in realising their compositions. This paper examines some of the effects on the subject-object relationship in contemporary sound-art by looking at the work of Bernhard Leitner and Esther Venrooy. The paper also looks at the use of electroacoustic music as a means of phenomenological description and what this means for the subject-object relationship.
It is perhaps easiest to think of Adorno’s notion of Sachlichkeit in terms of rationality or functionalism, yet it would also be possible to see it as an expression of Hegel’s objective Geist as it strives towards a collective universality, as it attempts to create an art of the ”we” or ”our” art; this paper explores the connection between the composer and the community in its analyses of Sachlichkeit. It is precisely the argument of Horkheimer and Adorno’s Dialectic of Enlightenment that we find in Adorno’s treatment of Schoenberg in his Philosophy of New Music: that Schoenberg’s musical reason turned over into unreason at its most extreme point. As composers gain greater control over their material through rationalised methods of composition they expect to reach greater heights of expression through the material; instead, we find that the material evaporates under the weight of the form and intention of the composer. This can be heard in compositions as a kind of obviousness that robs the listener of the experience of interpretation; music becomes an infertile medium of communication. According to Adorno, Schoenberg was able to avoid such a situation through his awareness of the relationship between the subjectivity of his musical technique and the objectivity of the musical material itself, maintaining a dissonant relationship between the two. The notion of counterpoint was vital to Adorno as a mode of representation that allows two different things to be represented at the same time, thus making musical compositions a truly dialectical medium.
What Adorno sought as a counterbalance to Sachlichkeit was the Kantian notion of purposiveness that is perceived without any representation of purpose, the composition that renounces its utility (i.e. its ability to make money, to make the composer popular and its communicative aspects) for the sake of greater reflection. It is also important here to consider the significance of notions of use-value and exchange value for Adorno; in its most extreme forms Sachlichkeit ends up in the exchange mechanisms of the culture industry as a public utility where the musical material itself becomes little more than a tool. This paper looks at attempts within electroacoustic music to reflect upon the Sachlichkeit of composition by looking at the work of musicians such as Sachiko M, Otomo Yoshihide, Toshimaru Nakamura and Taku Unami. The paper also looks at the implications of the genre known as EAI (electroacoustic improvisation) for the relationship towards Sachlichkeit.
This paper attempts to follow Adorno’s notion of immanent critique in its analyses of the work of electroacoustic composers and hopes to expose many of the inadequacies of the transcendent form of ideological critique that is often used in the interpretation of musical meaning. Much of the failure of ideological critique’s usage of Adorno can be put down to a confusion between the intentions and opinions of the composer and the immanent conditions that are to be found within the composition itself. Such a critique raises up the communicative aspects of music at the cost of the expressive aspects of the experience of listening; by doing so it aligns itself with what Adorno referred to as the objectifying spirit of music, or what Weber referred to as the rationalization of music. Such a critique ignores the resistance of the material and the antagonisms between the composer’s desire and the mediation of that desire through the historical presence of the forms available for its expression. By seeing music as a communicative medium for the intentions of the composer, the ideological critic confirms music to be the rational control of musical elements; though such critics tarry with the negative in true Hegelian style, it is clear that they view the music as nothing more than the synthesis of the composer’s thesis with their own antithesis, and the material itself is allowed to evaporate.
A critique which uses the intentions of the composer as the basis for ideological critique cannot avoid taking up an aperçu moral standpoint of formalized concepts of good and evil; its lack of self-reflection gives us nothing more than subjective opinion, an opinion that bears all the marks of a fundamental moralistic position that is already well known to us in the expression ”holier-than-thou”. A critique that focuses on the correctness of the desires of Stockhausen and Nono ignores the fact that what is significant to us as critics is the performance of that desire and not the desire itself. Whether the desire itself is good or bad is neither here nor there; what is of interest is whether the piece itself offers us the possibility of reflection upon that desire. It is surely this difference in the performance of desire that marks the difference between Schoenberg and Stravinsky in Adorno’s Philosophy of New Music: where Schoenberg recognises his desire for reconciliation and the mythical past and reflects upon it musically, Stravinsky misconstrues his desire as a kind of second nature, believing that eternal forms of history and culture are acting through him and all he has to do is express them. It is the performance of desire in compositional structures that forms the focus of the analyses of electroacoustic composers in this paper.
Maître de conférences, Université Catholique de l’Ouest, Angers, France; Conservatoire National Supérieur de Musique et de Danse de Paris, Paris, France; permanent member of the Observatoire musical français, Paris IV, Paris-Sorbonne.
Aural perception, intention-reception, Research into the history of electroacoustic music, Pedagogy
From composition to reception; the identity of sense and meaning.
On listening to an electroacoustic work for the first time, the listener’s attention is initially captivated by the novelty of the sounds he hears. At the moment of this first encounter, the meaning of the work and, even more so, the composer’s intentions seem to elude him. And yet, our research in the field of aural perception (Terrien, 2005, 2006, 2010) has shown that the composer’s intention is almost always discernible by the listener. Relationships between the aesthetic and the creative elements (Nattiez, 1975) emerge fairly spontaneously through the verbal expression of emotions delineating the aural experience of the listener (Imberty, 1997; McAdams, Bigand, 2004; Levitin, 2010 et al.). These verbal descriptions amount to clear evidence of the existence of sonic indicators or signposts (Deliège, 1997) and semantic features (Le Ny, 1975) which enable the listener to apprehend and to comprehend the music that the composer has created. Our last contribution, made at the Shanghai conference (Terrien, 2010), dealt with listening to electroacoustic music by adopting a didactic approach linked to the teaching of a particular work (Maresz, Metallics, 1994/2004). We propose on this occasion to study problems of sense and meaning arising out of what the listener actually hears and what the composer intends, by identifying the aural signposts and points of reference recognised by the listener, which act as a roadmap to the composer’s intentions. For a musical work to make sense, that is, for it to have meaning for the listener, he must be able to hear, discern, and recognise its aural components. The listener gradually becomes aware of differing aural parameters, allocating to each a significance that enables him to construct the meaning of the work. But what are the aural stimuli that he recognises as he listens? Do such elements constitute a common code for all listeners? What might their distinguishing features actually be? Do they have specific functions?
Are they the same as those selected by the composer? Are there not signs discerned by the listener of which the composer was unaware? And conversely, is it not possible for certain signs and musical intentions to pass completely unnoticed by the listener? These are just some of the enquiries that should help us better understand the relationship between sense and meaning brought by listeners to their hearing of electroacoustic works.
Our contribution rests on a study currently being undertaken of fifty listeners (amateur musicians as well as non-musicians), for whom we have drawn up a listening test based on the work Sonora by François Bayle, taken from the album Fabulae (1998). This test is administered in four stages. First, the listeners hear three presentations, each of three minutes’ duration, of the same musical extract, with a task attached to each hearing. The first task requires a description of the impressions and emotions experienced on the initial hearing of the extract. The second asks for a description of those aural features that each listener judges to have prompted their first impressions. The third task complements the second, inviting the listeners to identify the musical or aural components previously selected. The hearings are punctuated by two-minute writing periods in which listeners record on paper what they have heard. On completion of the three hearings, the listeners are allowed discussion time in which to exchange their reactions to the extract heard. This exchange is recorded for subsequent analysis and correlation with the written responses.
Analysis of this test is intended to highlight the connections existing between first impressions, reactions, affective responses, and the music heard. Furthermore, the results should enable a better understanding of the meaning that non-specialist listeners can attach to a work of electroacoustic music they are hearing for the first time. We have shown in previous studies (Terrien, 2006) that emotions, once given verbal expression, allow the individual listener to demonstrate personal technical knowledge of music. We suggest that the same process occurs when confronting electroacoustic music, and that the listener draws on his own musical vocabulary in order to describe what he hears. Moreover, this test should point up certain perceptive characteristics in the listener: what he directs his attention to and what he discriminates most easily, but also the significance he attaches to these aural indications. The information gathered will also indicate the ability of non-specialist listeners of this music to identify and to give meaning to these indications.
The results of these observations will be compared with the stated aims and objectives of the composer, and we will endeavour to understand the nature of those elements linking the aesthetic and creative experiences.
Our enquiry lies at the heart of the programme proposed for this symposium, and if the subjects of analysis, semiotics and semiology are pursued in the light of our contribution, we claim for the latter the title On Listening: Intention and Reception, since our research and present contribution deal specifically with the problems of the listener’s perception and interpretation seen in relation to the intentions of the composer. We hope to be able to describe and analyse how a person hearing a particular musical work is able to confer on it a meaning derived from his own, personal responses. We put forward the hypothesis that works of electroacoustic music have a meaning particular to each listener, of which he becomes aware by learning to take stock of his emotional responses.
Following A. Damasio (1995, 1999, 2010), we believe that emotions underlie every cognitive construction, and that their recognition and expression lend a meaning to the work being heard that interacts with the aims envisioned by the composer. We propose that this applies equally in the field of electroacoustic music. The relations existing between composer and listeners are actualised in the work itself, which is the expression in sound of a dual reality: a nexus generating meaning through perception.
PhD Candidate at De Montfort University
MTIRC: Music, Technology and Innovation Research Centre, Leicester, UK
The theme of the EMS12 Conference, “meaning and meaningfulness in electroacoustic music”, can be approached from many different perspectives, but it holds a special place in music education. The pedagogy of electroacoustic music is based on the meaning and meaningfulness of its ideas, concepts and techniques.
A few years ago, at the EMS10 Conference, Ricardo Dal Farra concluded his paper by reporting that, once they have received the appropriate knowledge, people need to be able to do things differently and in their own way, especially in music and electroacoustic music; thus educating the young in electroacoustic music is essential to the development of our field and to the generation of meaning and meaningfulness.
Educating people in the field of electroacoustic music is always about understanding and meaning, especially with young and inexperienced listeners. Being able to provide the tools for others to reach a meaningful understanding and listening experience of electroacoustic music is essential for the continuation of this sound-based art. In order for this music to reach out and attract more people, it needs to escape its marginalisation as ”elite university music”.
Researchers, especially in recent decades, have been developing new ways to reach and educate more people in the field of electroacoustic music. Research in schools has also taken place, such as Jonathan Savage’s (2005) work in secondary schools, in which the ideas of electroacoustic music were approached through the experimentation with, selection and structuring of sound, i.e. through electroacoustic music concepts (thus meaning) and techniques. Other researchers, such as Higgins and Jennings (2006), tested different approaches to teaching sound manipulation and the structuring of sound events using a digital audio editor, reporting that students can better “construct their understanding through doing”; thus meaning was a result of creativity.
Nonetheless, one of the most important research projects, carried out in 2006 by Leigh Landy and Rob Weale, was the Intention/Reception project, which revealed that the types of electroacoustic music used in the research could be appreciated and enjoyed by inexperienced listeners after “meaning”, in this case the composer’s intention, was provided.
Following on from this research, many online environments have been developed in order to educate people, such as the EARS website and the EARS II pedagogical website, multilingual online search engines for electroacoustic music, as well as e-learning environments. Moreover, projects such as the ”Sonic Postcards” project of the Sonic Arts Network try to interact more with students and bring meaningfulness to their sonic world by exchanging soundscape compositions among participating schools in different places.
There are many research projects attempting to educate and draw more people into the field of electroacoustic music, but, as Jeff Martin proposed at the EMS10 conference, there is a need for music curriculum development “that enable[s] meaningful participation with the living and transforming traditions of electroacoustic music” (my emphasis). Thus in this paper I am going to talk not only about projects proving that meaning is important in electroacoustic music education, but also about my own ongoing research, which is developing a music curriculum based on sound-based music (not solely on electroacoustic music) for the public schools of Cyprus.
I chose the term sound-based music rather than electroacoustic music for my project not only because sound-based music is the more general term, but because it makes it clearer to young students that this is the music of sounds, not the music of notes. The term was created by Leigh Landy in order to address the problem of the many definitions surrounding electroacoustic music, such as electronic music or acousmatic music. His definition of the term is: “sound-based music is the art form in which the sound and not the musical note is the basic unit” (2007).
Acknowledging the positive results of the above research projects, this project investigates the accessibility of sound-based music within the national music curriculum of Cyprus. The opportunity for this research to take place in Cyprus was provided by the Education Reform Project that the Government of Cyprus commenced following the indications of the European Union. This reform affected the music curriculum, providing the opportunity for this research project to start. The ongoing evaluations of the new music curriculum’s targets, aims and objectives made it possible for this research to be initiated before the curriculum’s final evaluation.
After identifying the participating teachers’ and schools’ backgrounds in relation to music, new technologies and sound-based music through a series of observations, interviews and questionnaires, the research implemented lesson plans that used sound-based music ideas and techniques in the music classroom. The lesson plans were based on concepts and techniques of sound-based music, such as soundscape composition principles and listening exercises, presented in simplified forms and in gradually developing tasks.
The sequence of lessons created and implemented focused particularly on meaning and understanding. Designed as a pyramid, each level built on the students’ previous knowledge, supplementing it with a new sound-based music concept and taking it to a new level. In this paper, I will present the different levels of lessons created and how meaning and understanding were reached, as shown by evaluations by both the teachers and the students.
Moreover, the results of the research have shown that sound-based music can provide many educational, musical and technological benefits to students, enhancing their musical creativity, providing an inclusive environment in the classroom, and giving students a sense of freedom and enjoyment in the music lesson. Most importantly, meaning and meaningfulness can be identified as prerequisites for the development of curriculum projects that can educate others about sound-based music and, by extension, electroacoustic music, since understanding and appreciation are its driving forces.
Professor of Musical Composition The Norwegian Academy of Music, Oslo, Norway
THE EXOSEMIOTICS OF MUSIC-AS-HEARD
This presentation will outline a novel method of musical semiotics capable of integrating semiotic elements developed by P. Schaeffer, D. Smalley, F. Bayle, Ph. Tagg et al. The method applies equally to instrumental and EA music. This presentation will focus on the method itself, rather than presenting applications.
I have earlier, during previous EMS conferences and in three papers published in Organised Sound, presented analytical tools for a systematic analysis of music-as-heard. The overall perspective was a phenomenological one, since a differentiation of various listeners’ intentions was basic to our approach. The analytical methods focused on three levels of articulation:
Level 1. Sound-objects (accessed through the listener intention of ‘reductive listening’ and elaborated by spectromorphological analysis)
Level 2. Compound sound-patterns (accessed through ‘taxonomic listening level two’, and communicated through identification and description of the structure of motives, textures, composite sound-characters, etc.)
Level 3. Form-building (accessed through ‘taxonomic listening level three’, identifying and describing segmentation, layers, patterns of similarity/dissimilarity, dynamic forms, and form- building transformations).
I will pursue the phenomenological approach into the field of musical semiotics and present an outline of a new approach called the Exosemiotics of music-as-heard. By the terms exosemiotic or exosemantic I refer to the way music is associated with entities beyond its own material and intrinsic structure. (In contrast, a taxonomic description would be characterized as endosemantic; the description of endosemantic elements will serve as a description of the signifier of an exosemantic signified.) The approach builds on basic semiotic distinctions introduced by C. S. Peirce and F. de Saussure, but, consistent with a phenomenologically informed approach, shifts the focus from the sign as a reified entity towards the semiosis, i.e. the mental acts that constitute the sign. The definition of a musical sign will consist of three aspects: the manifest aspect (i.e. the signifier, the perceptible sound), the hidden aspect (i.e. the signified, the meaning of the sign), and the link between them (the semiosis or signifying act). In other words, semiosis can be defined as the nature of the mental act that joins the signifier and the signified. In the context of the present project – a post-Schaefferian study of music-as-heard – the semiosis can be identified as the listening intentions that imbue what we hear with meaning.
Four semioses will be discussed: Comparison (abbreviation: CMPAR), Causal Inference (abbr. INF), Association (abbr. ASSOC), and Recognition (abbr. RECOG). These correspond to the semioses involved in the constitution of, respectively, Iconic Signs, Indexical Signs, Metonymic Signs, and Arbitrary Signs. However, musical signs turn out to have a more complex nature than what is involved in these four types of signs, as has been pointed out by semioticians such as Umberto Eco and Raymond Monelle. Frequently the semioses of motivated signs (Comparison, Causal Inference, Association) are combined with differing degrees of Recognition, the semiosis characteristic of arbitrary signs. Arbitrary signs are based on processes of definition, such as conventions, codes, explanations, etc. The four established sign categories – Icons, Indexical Signs, Metonymic Signs, and Arbitrary Signs – are conventionally described as being constituted by one single semiosis, and thus do not allow more complex semioses. I have resolved this problem by developing a matrix that combines the primary semiosis with a secondary one. Thus in a motivated sign, where the primary semiosis will be either Comparison, Causal Inference or Association, a secondary semiosis, Recognition, is added. The secondary semiosis is specified by stating how far the process of definition has gone in fixating the meaning of the sign; thus the degree of fixity or conventionality of the sign will have to be indicated, from full openness (a new sign, not conventionalized or defined) to signs with a clearly defined semiosis (e.g. national anthems). The constitution of the musical sign is shown in the diagram below:
So far we have shown that our shift of focus from sign-definition to semiosis adds nuance to the description of musical signs. By adding the secondary semiosis, one has also opened the sign to a process of historical change, since new things tend to be conventionalized, coded and eventually taken for granted. But beyond this it opens the possibility of describing even more complex constitutions of exosemantic meanings. We would then speak of semiotic chains: concatenations of semioses. The description of a semiotic chain will be made as formulae of letters. An analysis of exosemantic elements of a piece of baroque music could be described as follows; in the first matrix below the signifier is a tremolo played by the orchestra:
The pain of the musical subject is described through the diminished intervals of the melodic part and the contorted melodic contour. In this case the signifier is on level two (compound sound-patterns):
A new stratum of musical meanings is revealed by interpreting what happens on the third level, that of form-building.
The overall structural pattern for musical semiosis that I have demonstrated enables us to correlate and integrate, in an overall perspective, valuable viewpoints put forth by Pierre Schaeffer, François Bayle, Michel Chion, Denis Smalley, Philip Tagg, Peter Faltin, Jean-Jacques Nattiez, Umberto Eco, and Winfried Nöth. For example, the im’son defined by François Bayle will be a Level one signifier interpreted through Causal Inference (you hear a recorded sound and infer it is the sound of a bird; thus the image of a bird appears to the listener). Bayle’s di’son will be a Level two sound pattern appreciated for its intrinsic structure (thus a taxonomic listening, which is endosemantic). His me’son will then be defined as overall features on Level two or three whose meaning is found by Comparison (e.g. textures or lines that describe the trajectory of a soaring bird). Denis Smalley’s concepts concerning ‘surrogacy’, as well as Chion’s ‘chose sonore’, are all related to Level one phenomena, in which Comparison can be combined with Causal Inference and Association. A case of Comparison on Level one would be when one sound is made to refer to another sound through imitation (more common in instrumental music, where e.g. a kettle drum roll is supposed to imitate thunder). Philip Tagg refers to this as a ‘sonic anaphone’.
While an explication of semiosis reveals the logic by which a certain interpretation of extramusical meaning is being made, it does not account for the meaning itself, the semantic content. An extramusical meaning will have to be arrived at both through spontaneous insights and through hermeneutic processes of interpretation; thus the method proposed leaves the question of actual meaning completely open. The analyst, after having arrived at an interpretation, will in hindsight have to analyse the mental acts involved in the constitution of the interpretation, as part of a reflective process. When the interpretations of different analysts differ, one may possibly trace at which point in the chain of semioses the different options arise, thus opening the way for a reasonable discussion of musical meaning.
In addition to the above method of analysing the meaning of music-as-heard, I have developed a complementary approach dealing with the semiotics of musical communication. This theory is based on R. Jakobson’s original communication model in combination with an elaboration of F. Delalande’s Listening Behaviours. I have identified and described a number of listening behaviours beyond those described by Delalande. A matrix representation has been worked out, by which the analysis of signifier, signified and semiosis is combined with the model functions of communication. However, time constraints will allow me only to hint at this larger perspective.
Oslo, January 2012.
Pierre Alexandre Tremblay
CeReNeM, University of Huddersfield Queensgate Campus, United Kingdom
This paper will explore ways in which improvisational practice within the studio can be proposed as an avenue to bridge the historical dichotomy between what Ted Gioia identifies as ‘the aesthetics of perfection’ and ‘the aesthetics of imperfection’. The aim was to re-embody fixed music, and this paper presents the results of the author’s experiments in the composition of his latest fixed-media work. This will be put in the context of a wider trend observed amongst the current emerging generation of composers, who are interested in the aesthesics of the work, as opposed to the previous generation, who tended to situate the value of the work in its poietics. As a result of this observation, this paper also advocates the vital and primal importance of musical outcomes as the main document of practice-based research.
The improvisatory exploration of the studio was always, and still is, at the centre of electroacoustic composition. The classic ‘séquence-jeu’, as defined by the first generation of electroacoustic composers (Deschênes, Dhomont and Parmegiani, amongst others, have talked about it extensively), allows the composer to draw sound material out of an object or a piece of equipment by ‘playing’ with it, in a sort of improvisatory game. These composers tend to talk about composing in the studio as a two-part process: generating a pool of material, and then composing with it. We could define this approach as constructivist, as they use the studio itself as an instrument to experiment with their source material, before deciding what to use from this experience to compose the piece. This often goes hand-in-hand with a discourse of ultimate control over the final music. This precision and fixity of the final work leads some critics to consider it the nearest possibility to the aesthetics of perfection (Hamilton).
On the other hand, professional improvisational practice, whether on laptop (Barrett), DSP instruments (Casserley), or acoustic ones (Bailey), has a completely different ethic: improvisation is not a means to generate material to be filtered by a composer, but rather an art of composing in real time, more often than not within an ensemble context (an interesting denomination is Critical Improvisation, based around the musical outcome, in opposition to Inclusive Improvisation, which is based more on the participatory element).
The arguments in favour of improvisation are as varied as the practitioners: some argue that it gives music back its spontaneity and raison d’être; others say that this art deals with the humanity of imperfection and risk (Gioia in Hamilton). In any case, most improvisers agree that professional improvisational practice requires years of training to reach real-time musical transparency on the instrument (again, Bailey’s book makes interesting contributions on this matter, as do Dobrian & Koppelman, Casserley, Schloss, etc.), whatever the musical idiom in which this improvisation takes place.
Nevertheless, there is amongst improvisers a taboo subject on which views differ wildly: the studio editing of improvised music sessions into a final, near-perfect version to be released. This is a common yet mostly unspoken practice in improvised music, despite some practitioners considering it a betrayal of the ethic of improvisation and refusing to take part in such practice (for instance, many labels insist on the ‘high fidelity’ and authenticity of a live event being captured on tape, be it in the studio or in concert).
But more and more practitioners sit between these two practices - the studio as a tool of perfection, or as a tool of accurate, fidèle capturing of an imperfect moment. In fact, they do not so much sit in between as feel comfortable in both, and these parallel practices in many fields cross-pollinate to offer some interesting proposals. Some composers have started to document their research and ideas on this middle ground (Barrett, for instance), yet nothing has been documented thoroughly with musical examples, ethical/aesthetic concerns and technological methodologies.
Embracing this ‘in-between-ness’ is what the author set out to explore in this project, by focusing on a crossover practice during a composition sabbatical; he reports his early conclusions here. This paper will share reflections on methodology, practical considerations, and findings relating to both approaches, as well as reflections that emerge concerning practice-based research and a current shift of focus amongst a younger generation of composers.
It might have been tempting to record a series of ‘takes’ of different improvisations and to edit a perfect version of each, but the focus was rather on the grey zone between studio composition and performance practice, answering questions such as: how can improvisations on DSP and acoustic instruments be captured within the environment of the composition studio, whilst maintaining gestural freshness? How does the expressive virtuosity of instrumental performance translate into the studio compositional process and influence it? How to deal with improvisation’s inherent imperfection and organic gestures in the studio, where deferred time allows the refinement of certain gestures to near-perfect results by taking the time and means to improve them?
Herein lies the originality of this project, at the cross-over point of both worlds: the exploration of the back-and-forth process between improvisational performance and its editing and further transformation as a studio composer, in an approach similar to that of popular music producers (Eno). If Appleton talks about a dichotomy between the studio monologue and the dialogue of improvising with others, I want to find ways of engaging in dialogue with myself, in order to fill this gap.
The Mixing the Immiscible project has been used as the testing ground for these hypotheses, through a series of four studio residencies by the author (GRM, Miso Music, Technische Universität Berlin, Musiques et Recherches). In order to answer the questions stated previously, I explored ways of deconstructing my two practices, both of which I engage with at a professional level, to allow a better integration of the performance aspect into studio composition time.
The final works, released as a single DVD-audio on Empreintes DIGITALes, vary in style, intensity and sonorities, but they all share the same questions as their genesis, the same methodologies, and the same desire to capture the interaction (and the tension) between the two opposite approaches to the studio.
Appleton, J. (1999). Reflections of a Former Performer of Electroacoustic Music. In Contemporary Music Review, Vol. 18, Part 3, pp. 15–19.
Bailey, D. (1992). Improvisation: Its Nature and Practice in Music. Philadelphia: Da Capo Press, 146 p.
Barrett, R. (2006). Improvisation Notes August 2005. In Contemporary Music Review, Vol. 25, Nos. 5/6, October/December 2006, pp. 403–404.
Casserley, L. (2001). Plus ça change: Journeys, Instruments and Networks, 1966–2000. In Leonardo Music Journal, Vol. 11, p. 43.
Dobrian, C. and Koppelman, D. (2006). The ‘E’ in NIME: Musical Expression with New Computer Interfaces. In Proceedings of the 2006 International Conference on New Interfaces for Musical Expression (NIME06), Paris, France.
Eno, B. (1996). A Year with Swollen Appendices. London: Faber and Faber, 424 p.
Fels, S., Gadd, A. and Mulder, A. (2002). Mapping Transparency through Metaphor: Towards More Expressive Musical Instruments. In Organised Sound, 7(2): 97–108. Cambridge: Cambridge University Press.
Hamilton, A. (2000). The Art of Improvisation and the Aesthetics of Imperfection. In British Journal of Aesthetics, Vol. 40, No. 1, January 2000.
Schloss, W. A. (2003). Using Contemporary Technology in Live Performance: The Dilemma of the Performer. In Journal of New Music Research, 32(3): 239–242.
Schwarz, D. (2000). A System for Data-Driven Concatenative Sound Synthesis. In DAFx Conference Proceedings, Verona.
Professor of Music, Hampshire College Amherst, MA U.S.A.
Luc Ferrari’s Presque rien No. 1 'Le Lever du jour au bord de la mer' (1970) has almost mythic status in present-day electroacoustic music circles. It currently stands as a seminal musical composition. This was not always the case. When Ferrari’s piece was released on Deutsche Grammophon’s Avant-garde 3 (DGG 256 104) in 1972, the composition was largely underappreciated by composers and listeners, including this writer. More “unheard” than misunderstood, the piece was marginalized for years within a musical historiography that rendered it scarcely audible or, in effect, inaudible without the technical/theoretical listening apparatuses that could contribute to and fulfill its meaning. I will argue that Presque rien produced another branching at the bifurcation of Schaeffer’s musique concrète, with its insistence on acousmatics (the perception of everyday sounds without reference to their sources), and Stockhausen’s elektronische Musik (sounds produced by purely electronic means). “Adequate modes of listening” as theorized by Ola Stockfelt are key to understanding how the composition has gradually gained its fullness of meaning during recent years. This paper will undertake an “audiography” of the composition through record reviews and interviews with the composer, as well as recent writings on audio culture. I will trace the expansion of the piece’s meaning, with a focus on its sonic disclosure in relation to a revised historiography of electronic music. Additionally, this paper aims to contribute to reconsiderations of analogy in aesthetics and cultural theory.
Stockhausen and Others...
As I have indicated, initial critical responses to Ferrari’s piece were not enthusiastic. A review appearing in a 1972 issue of Tempo described Presque rien as “charmingly inactive” and suggested that the composer is a jokester. The title of this dismissive review, “Stockhausen and Others”, reflects the background into which Ferrari’s offering recedes. It is Stockhausen who is hailed as the avant-garde’s “problem-solver par excellence.” The question of “the problem” to which individual artworks are “solutions” was mocked by at least one avant-garde artist, Marcel Duchamp, but has remained an established trope of contemporary music compositional strategy, listening practice, and criticism.
I will argue that Ferrari was not trying to problem-solve in the way that Western music has seen its own history: as generating a series of problems that are to be resolved in the musical composition (this logic of differentiation pervaded other twentieth-century visual art forms as well). Ferrari, I will argue, made a turn towards analogy in this important work, one that can be more interestingly understood in terms of similitude and connection.
Ferrari’s Anecdotal Music...
In retracing the process of signification in Presque rien, I invoke Ferrari’s notion of “anecdotal music.” I challenge (with him) the embargo of the use of directly recorded sound and its referential implications in electroacoustic composition:
One day I went away...with a borrowed tape recorder. I did not travel very far, but nevertheless travelled a lot and I recorded things of life... music of a genre I called “anecdotal music.” This means, I intended to produce a language that situates itself between the musical and the dramatic field. The employment of elements of reality allows me to tell a history, or allow the listener to create images, since the montages propose ambiguities. –Luc Ferrari (1976)
Instead of consciously eliminating every trace of the origins of the material that has been taken from reality, reality is understood as reference and as a means of making connections to the hearer's experience, memory, and imagination. In other words, I will be addressing not only what the musical composition may mean, but, using recent theoretical work on similitude as opposed to difference, what it means to musically signify in the fullness of subjectivity.
To do this, I will develop what I term a “constellation of sound analogies.” The analogy is potentially, but not always, more democratic than the metaphor. Constellations of sound analogies would allow relationships—both similarities and differences—to emerge with the listener, rather than being directed by the composer. Here, I will refer to Kaja Silverman’s commentary on Gerhard Richter’s work in her recent book Flesh of My Flesh.
I believe that Ferrari had begun to develop a music that flowed from the recording process itself, and, rather than “solve problems,” created them for the listener. John Cage’s music certainly did this as well but often with a benign indifference to the sonic material of the composition. Here I will comment on his composition “Radio Music.”
In contrast, we find in Presque rien the sonic material and Ferrari’s compositional/recording process in two-way communication, that is to say they inform each other. The structure of the composition was completely generated by the composer’s attention to the recurring cycles of early morning daily life in a small fishing village. The recordings were made at the same time each day, and then compressed from several hours down to just less than twenty minutes.
Why has the meaning of this composition changed so dramatically? It is not a question of fashion, I’m certain. As I wrote in the introduction to Audio Culture: Readings in Modern Music (a collection co-edited with philosopher Christoph Cox), a new sound culture has emerged in the last fifty years, made of "musicians, composers, sound artists, scholars, and listeners attentive to sonic substance, the act of listening, and the creative possibilities of sound recording, playback, and transmission...Exploiting these new technologies and networks, the emergent audio culture has achieved a new kind of sonic literacy, history and memory.”
This new sonic literacy has led us to new modes of listening. In the case of Presque rien, the dominant adequate listening required for “problem-solving” musical structures made the composition virtually “inaudible” and therefore virtually meaningless. Looking back on the early minimalism of Steve Reich’s phase compositions such as “Come Out” or Brian Eno’s Ambient Music, we begin to see how new adequate listening modes were going to be necessary in order to navigate the proliferation of musics in the 60s and 70s.
In the case of minimalism, what was required was a Deleuzian mode of listening that sees repetition as dynamic, not static. With Eno’s Ambient Music, an adequate mode of listening included the admittance of ambient sound, but, more importantly, one that saw music as a “tint” on the soundscape, something I have termed “radical easy-listening music.”
Finally, I will argue that in Presque rien an adequate listening mode would be a state where the semantic content is allowed to form, develop, dissolve, and play freely within a constellation of sound analogies.
Ruibo (Mungo) Zhang
Composition Department, Shenyang Conservatory of Music, Liaoning, China
At the beginning of the twenty-first century in China (mainland only), electroacoustic music was in its infancy compared to the Western world. In the first decade, however, the development of EA music, in both its artistic and technological aspects, advanced at a considerably fast pace. The problem is that, on one hand, this fast development lacks solid support from local (mainland China) academic researchers and from serious musicological scholarship, whether diachronic or synchronic. On the other hand, the academic institutes (for example, centers or departments within conservatories, colleges and universities all around China) have rapidly developed various concepts and criteria regarding electroacoustic music study, often blurring the boundary between commercial products and artistic projects, from popular music arrangements to interactive designs in computer music. Furthermore, without communication amongst Chinese specialists, and by arbitrarily importing expensive, luxury electronic equipment from the Western world while ignoring the knowledge needed to operate it properly, most of the academic institutes have made themselves relatively isolated, so that Chinese EA music society remained rather mysterious to the rest of the world until the end of the first decade of the twenty-first century.
Without frequent international exchange, the development of EA music in China would not have flourished as much during the last decade. Besides the many individual foreign scholars and artists who came to China as visiting or guest professors for short terms at the universities, some even became residents, teaching and living in China for the long term. An international conference, the Electroacoustic Music Studies Network (EMS Network), has been held twice in China, in Beijing and Shanghai in 2006 and 2010 respectively. Moreover, after the EMS06 conference held jointly with the Musicacoustica Festival in Beijing, another smaller-scale meeting specifically focused on EA music development in East Asia was established, which has become an annual event during the Beijing Musicacoustica Festival each October since 2008: CEMC/EMSAN Day.
The idea of the Beijing Musicacoustica Festival emerged in 1994, but it did not reach its formal scale (a week-long programme of activities) as one of the largest and most important festivals focused on electroacoustic music held annually in China until 2004. It was initiated and pushed to its most sophisticated level by Zhang Xiaofu, director of the China Electroacoustic Music Center (CEMC) at the China Central Conservatory of Music (CCoM). In 2003, the Electroacoustic Music Studies Network (EMS Network) was initiated by Marc Battier (MINT/OMF - Univ. of Paris-Sorbonne) and Leigh Landy (MTI Research Centre - De Montfort Univ.), together with Daniel Teruggi (INA/GRM), the third Executive Director of EMS. It is an international initiative which aims to encourage a better understanding of electroacoustic music in terms of its genesis, its evolution, its current manifestations and its impact. Generally speaking, these two sophisticated events greatly influenced Chinese EA music development in the last decade.
The Electroacoustic Music Studies Asia Network (EMSAN) is a research project initiated by Marc Battier at the MINT research unit (Observatoire musical français) in 2006. It aims at creating a body of musicological studies of the musical repertoire and musical practices, and its main goal is to conduct a vast project on electroacoustic music and music technology in East Asia. The EMSAN Track acts as an independent session: besides being held annually at the same time as the EMS conference, it is not limited to the conference and is usually held several times throughout the East Asian countries and regions, such as China, Japan and Taiwan. The CEMC/EMSAN Day is a meeting in Beijing jointly held by the Musicacoustica Festival and EMSAN; it acts as a relatively lower-level event that fills the gap left by the upper-level events held in China, such as the EMS conference and the Musicacoustica Festival. In other words, it bridges the macro and micro levels for the Chinese EA music community and the international EA music society.
This paper will briefly cover the history of the connections among the Musicacoustica Festival, EMS and CEMC/EMSAN Day from a historical point of view, mentioning several key persons who made these events happen in China in the last decade, such as Prof. Zhang Xiaofu and Kenneth Fields from the Central Conservatory of Music and Prof. An Chengbi from the Shanghai Conservatory of Music, as well as Marc Battier (MINT/OMF - Univ. of Paris-Sorbonne) and Leigh Landy (MTI Research Centre - De Montfort Univ.). It will then try to explain the influences and contributions that these people and events have made to the Chinese EA music community, mentioning the achievements made by Chinese scholars since 2006, when the first EMS conference was held in China. For example, during the Musicacoustica Festival in Beijing the Chinese paper session was completely isolated from the English and French paper sessions, without any translation or interpretation into English; an improvement was made at the 2010 conference in Shanghai by combining all the Chinese papers into the EMSAN track, integrated into the conference with bilingual presentations by some postgraduate students, such as Yin Yang, Tiantian Wang, Yuan Zhou and Qing Shao from the Shanghai Conservatory of Music. Subsequently, a paper presented by Lu Minjie from the Sichuan Conservatory of Music during CEMC/EMSAN Day 2010 in Beijing, held during the Musicacoustica Festival, also made a point with “Primary Research on Interactive Music of Chinese New Media Art in the Recent Decade”.
Finally, the paper will illustrate how this international conference has also influenced and provided opportunities for some scholars in China (mainland) to build up a substantial academic project that connects the Chinese EA music community and the international EA music society: CHEARS.info (China Electroacoustic Resource Survey, abbreviated as CHEARS), a continuous research project of its own kind in China from 2006 to the present. One of the initiators of CHEARS, Zhang Ruibo, who was also one of the organizers of the EMS conferences in Beijing and Shanghai in 2006 and 2010 respectively, has participated in subsequent conferences held in Leicester, UK (2007); Paris, France (2008); and New York, US (2011), presenting the progress of his research project year by year. He thus became the first scholar in his field to present original research at an international conference outside China. His example was followed by Zhou Qian, also one of the organizers of the EMS conference in Shanghai, who presented her paper “New Trends of China Electronic Music After 2005” in New York (2011).