Thirteenth Annual Florida Electroacoustic Music Festival

April 1-3, 2004

It was my great pleasure to attend the thirteenth annual Florida Electroacoustic Music Festival, which was held April 1-3, 2004. Unfortunately, I had to miss the first day and could attend only four of the nine scheduled concerts; but what I did hear represents some of the finest work presently going on in this field.

The festival has been held for thirteen years on the campus of the University of Florida in Gainesville, making it one of the longest-running festivals of its kind in the country. For the first eleven years it was organized by a single faculty member, James Paul Sain; for the past two years he has been assisted by his colleague Paul Koonce, who joined the faculty then. Every festival has featured a guest composer who is honored on the occasion and has several of his or her works performed. (I was one of the first guests, in 1992.) This year's guest was Alvin Lucier, who unfortunately was unable to attend because of serious last-minute health problems; nevertheless, three of his works were presented. The festival includes both juried concerts and invited or "curated" concerts. This year there were three curated concerts, presented by Jon Appleton and Eric Lyon of Dartmouth College; by the trombonist James Fulkerson, who has long worked with Alvin Lucier; and by Javier Alejandro Garavaglia, a composer originally from Argentina who lived in Germany for the previous 30 years and has spent the last year in England, and who brought electroacoustic music from Europe. As always, there was a strong international presence at the festival, including composers from England, Sweden, Germany, Holland, Brazil, and Korea.

In the three concerts and one special event that I will cover in this review, 26 works were heard. Eleven of these involved live performers of various sorts (not always of musical instruments!). One was purely instrumental, with no electronic component; one involved live performance at a MIDI keyboard, though not a keyboard that played notes in any conventional manner; and one included a live dancer, a video projectionist, and a radio and video baton that fed data to a computer running Csound and MAX/MSP. (One work that I do not mention below, Afterimage 3 by Ronald Keith Parks, involved "percussion" that included two cinder blocks and two bricks rubbed together, and buckets of bolts of various sizes poured from one container to another and struck against the bricks. All of these sounds were miked and fed into a computer that further altered them and produced a "residue" that constituted the main electronic component of the piece.) Among the works for conventional acoustic instruments, some involved more or less conventional performance techniques of the kind found in acoustic music, but others involved techniques that would be unrecognizable to most players and produced sounds that seemed more "electronic" than some of the recorded or computer-produced sounds.

In my discussion below, I do not mention the works in the order in which they were played, but rather group them according to their content. Each individual concert presented a good variety of pieces. One concert I heard is not included below: at that point I had not yet decided to write this review, so I took no notes on the works, and my recollection of those pieces is not accurate enough to include them.

Most electroacoustic works involved prerecorded sounds, but the term "tape music" should certainly be dropped, because almost none of this music has ever been put on tape; it resides instead on digital media such as CDs and DVDs. (In fact, the only mention of tape involved the use of portable recorders to capture sounds in the field.) Many of the sounds were spatially distributed in the concert space, which included an excellent octaphonic playback environment. Even works that existed in only two channels were spatialized by the composers from the mixing console. Many of the works did not involve prerecorded material at all but generated or processed sounds in real time under the control of the composer, the performer, or both. When computers were used in real time, they were always laptops, and each work involved a different laptop. This meant that some were left on during other works, when they either showed a screen full of icons or launched into a screen saver; since the theater was usually otherwise dark, this was sometimes a distraction. The laptops faced both toward and away from the audience, but even when a screen was visible, its contents were usually not comprehensible.
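
For readers unfamiliar with how a two-channel piece gets "spread" over eight loudspeakers, the idea can be reduced to a panning law that distributes each signal among the nearest speakers as the composer sweeps it around the ring. The following is a minimal sketch under my own assumptions (speaker layout, panning law, function name); it is not a description of the festival's actual diffusion system.

```python
# A minimal sketch of equal-power panning around an eight-speaker ring, the
# kind of manual spatialization a composer might perform from the console.
# The speaker layout, panning law, and function name are illustrative
# assumptions, not the festival's actual setup.
import numpy as np

def octaphonic_gains(angle: float, n_speakers: int = 8) -> np.ndarray:
    """Gains for a mono source panned to `angle` (radians) on a ring of speakers."""
    speaker_angles = np.arange(n_speakers) * 2 * np.pi / n_speakers
    spacing = 2 * np.pi / n_speakers
    # angular distance from the source to each speaker, wrapped to (-pi, pi]
    diff = np.angle(np.exp(1j * (angle - speaker_angles)))
    gains = np.zeros(n_speakers)
    near = np.abs(diff) < spacing               # only the two nearest speakers sound
    gains[near] = np.cos(np.abs(diff[near]) / spacing * np.pi / 2)   # equal-power law
    return gains

# A source midway between the first two speakers gets about 0.707 in each.
print(np.round(octaphonic_gains(np.pi / 8), 3))
```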

As far as the content of the music is concerned, it has to be said that much electroacoustic music seems to be about noises of various kinds. In fact, the works can be divided into pieces built from noise; pieces based on acoustic sounds, including instruments but much more often vocal sounds and speech; and a few pieces that involved synthesis or recordings of conventional pitched tones. And what a variety of noise! Imagine all the noises you have ever heard - not just percussion instruments (which were much in evidence), but also natural sounds such as the wind, the surf, water noises from babbling brooks to flushing toilets to waterfalls, birds singing, crowd sounds, traffic, train sounds, machines of every kind from automobiles to factories, bells, chimes, buzzers, telephones, crying babies, and distortions of any of the above as well as any other sounds you can imagine - and you will still have only an inkling of the incredible variety and depth of these sounds. In fact, it is difficult even to describe these sounds, and that is something with which electroacoustic composers have not been very helpful. While there is a very good vocabulary for describing techniques and processes, very little attention has been paid to developing terminology that could describe the sounds themselves to non-specialists. Noises and other kinds of complex tones were often produced by transformations of instruments or voices, and they provided very interesting components and contrasts to the original sounds.

Another issue that cut across many works is intelligibility. Many works involved processed instrumental or vocal sounds. Sometimes speaking voices could be understood. Sometimes the syllables could be made out, but the words themselves made no sense - either because they were nonsense or, annoyingly, because they were in another language and no translation was given. Sometimes the music could be heard as a commentary on the words; sometimes it was simply a contrast or had no apparent relationship to them. Many works played on this interplay, with the music waxing and waning between intelligibility and unintelligibility.

Composers who derive new sounds from live or recorded sources know well what they had to do to get from the original to the transformations they use, but these relationships are often not apparent to the listener. It is unfortunate that some of the most interesting sounds go unrecognized as transformations by most listeners, who have not been through the process themselves. (Ever since I began working in an electronic music studio, I have felt that the experience was invaluable for developing my perception and understanding of sound, though not always directly useful in composition.) This thought provides a smooth segue into a discussion of individual works, starting with Alvin Lucier's I am sitting in a room, a work that very carefully makes that process clear to the listener.

I am sitting in a room is a piece in which the composer reads a short text beginning "I am sitting in a room different from the one you are in now." The sound of his voice is recorded, then played back and re-recorded on a second machine using the same microphone, after which the process is repeated again and again. On each successive iteration the recording picks up more of the resonances of the room (and the microphone) and less of the original spoken voice, which becomes more and more unrecognizable until, at the end, only the rhythm of the voice is apparent and the sound is dominated by a few frequencies. The gradual unfolding of the process makes it abundantly clear to the audience how the sound is being transformed. This time the piece was performed in an unusual way: the text was first read in a Korean translation by Chan Ji Kim, a student, and then in English, and a computer rather than tape recorders was used to record and play back the sounds. There were only two glitches. First, the speaker, pressed into service after Lucier could not come, stumbled a bit reading the first phrase, and that stumble became part of the performance. Second, the student technicians at the mixing console unfortunately decided to ride the gain when they thought the sound was getting too soft, and those amplitude changes were magnified on each iteration, also becoming an indelible part of the performance. The overall performance was about 36 minutes long, which means there were about 17 iterations after the initial reading. At the end, only a few sine tones dominated the room, the strongest being A at 110 Hz. A grand piano was also present with its lid open, which might have had an effect.
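
The transformation Lucier exploits is easy to model: each re-recording filters the sound through the room's frequency response one more time, so repeated passes raise that response to ever higher powers, and only the strongest resonances survive. Here is a minimal offline sketch of that idea; the file names, libraries, and number of passes are my own assumptions, and the real performance of course proceeds acoustically, through microphone and loudspeaker, rather than by convolution.

```python
# A minimal offline sketch of the iterative re-recording process described
# above, assuming a measured room impulse response is available as an audio
# file. File names, pass count, and the numpy/soundfile usage are assumptions.
import numpy as np
import soundfile as sf

voice, sr = sf.read("speech.wav")       # the initial spoken text (assumed mono)
room_ir, _ = sf.read("room_ir.wav")     # impulse response of the room and microphone

signal = voice
for i in range(17):                     # roughly the number of iterations heard
    # "Play it back into the room and record it again": each pass filters the
    # sound through the room's resonances one more time.
    signal = np.convolve(signal, room_ir)[: len(voice)]
    signal /= np.max(np.abs(signal))    # hold a constant playback level (no gain riding!)
    sf.write(f"iteration_{i + 1:02d}.wav", signal, sr)
# After many passes, only the room's strongest resonant frequencies remain audible.
```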

Lucier's music has always emphasized subtle details of sounds and the environment. The other work of his that I heard, Bar Lazy J, was the only purely acoustic work played. A trombone and a clarinet face each other and play a series of 72 long tones (each about 8 seconds) separated by silences (about 2 seconds), starting in unison. On successive iterations they become increasingly out of tune, both on unisons, where considerable beating is apparent, and on other notes, where the clarinet goes a half-step above and below the trombone and the trombone plays microtones that navigate the space between the clarinet's tones. The beating is the main point of the work. The piece lasted about 13 minutes, and its minimalist time scale was strikingly different from every other work heard; indeed, most electroacoustic music moves at a frenetic pace by comparison.

Other works also emphasized subtle details of their audio sources. Soave sia il vento [sound stimuli: for Mozart] by David Kim-Boyle used source material from a performance of the terzettino from Mozart's Così fan tutte and recordings of a glass harmonica. The composer sat as a "performer" playing three wine glasses filled with water to different levels, producing different pitches. The wine glasses began inaudibly, and the electronic tones entered imperceptibly, forming soft chords. This work of less than six minutes was the softest and most delicate piece heard, but I could not detect any vestige of the original Mozart.

Most works did not have this kind of subtlety. Several pieces consisted entirely of noises of various kinds. T equals Zero by Jeung-Yoon Lee imagines the universe at the moment of the big bang, and indeed it included several explosions. The noises started in the high register, with low sounds emerging later. The texture was sparse, with many repeated notes, and sounds moved back and forth in the space, occasionally overtaken by reverberation.

Peter Traub's retour began with quickly repeated noises that faded in and out, changing speeds. He made an effective use of space, moving sounds around the hall. The noises morphed into metallic sounds increasing and decreasing in amplitude, and eventually the piece became shrieking, followed by a sudden dropout leaving mid-range quasi-vocal sounds and more noises. Over a backdrop of sustained vocal-like sounds, percussive bursts of noise came in, repeated in different rhythms and locations.

Jeffrey Stolet's weirdly-named A Prayer before Dying (which seemed to have nothing to do with praying or dying) began inaudibly and built to a thick noise texture. Over this background, bell-like tones entered, repeated in random rhythms. The stereo sound track was manually panned to different locations. While there was little change in texture, slow dynamic changes brought out different aspects of an incredibly rich variety of sounds.

While we were advised to listen to Karl-Heinz Blomann's gone - urban flashback as a Hörspiel (literally, "radio play"), it too contained a continuous din of noises, out of which other intelligible and unintelligible sounds emerged. Some noises resembled walking and water sounds, along with unintelligible voices and some "gasps." A jazzy saxophone melody came in and out. At times there was a steady noise beat in the background, though not one made by drums. Voices in both English and German were heard, including what a voice teacher would call bad singing. Most notable, though, was a woman's voice reading a deadpan description of hot-mix asphalt in English with an English accent. Breathing sounds grew into a portrait of a medical patient in distress; a beep was heard, the breathing stopped, and the piece ended.

James Paul Sain's Tåg till... was based on train (and other) sounds recorded during a visit to Stockholm, where the piece was created. Opening noises led to processed echoes moving around the space. This piece too had a continuous noise backdrop that changed color as elements entered and left. Some sounds that did not seem to be trains still had a "chugging" quality. Unintelligible spoken voices in several languages were heard. Toward the end the piece became very soft, and out of this emerged a voice that seemed to be speaking French, although most of the background consisted of soft modulated noises with echoes similar to the beginning. Throughout there was effective mixing and panning, with a nice fade-out at the end.

It has to be said that, over the last 20 or so years, English composers have become the world's leaders in noise composition, and the three English works heard were the most effective of this type. Simon Emmerson's Points of Continuation, the middle work of a trilogy, was based as much on processed sounds of the two instruments heard in the other works (both plucked strings) as on noise, but it included healthy doses of rich noises of various kinds. Sounds that were originally plucked were sustained and changed in other ways, building up richly varied mixtures. Every sound oscillated back and forth in changing rhythms, which defined the rhythm of the music even more than the entrances and exits of the sounds themselves. It was all very sensitive. At times the music contained sustained metallic sounds, high bell-like sounds, and mid-range vocal sounds, everything sustained and amplitude-modulated. Occasionally, vestiges of the original plucked tones emerged out of the noise collage along with softly sustained mid-range wind noises. The noises suggested the sea or the wind, but they were also abstract sound images interesting in their own right. There was a very effective unfolding of the materials and of the transitions from one texture to another. The piece also had an improvisatory quality.

Penmon Point by Andrew Lewis is one of four works inspired by satellite images of English beaches. This piece was a musique-concrète sound portrait where most sounds suggested rather descriptive images of the scene. The surf crested into large wind-blown waves. The incessant motion of the waves was felt in the motion back and forth between the loudspeakers. Tinkling bells morphed into sustained chimes, and bottle noises became metallic, reminiscent of rattling chains. Vocal noises emerged from the background to become audible as sustained chanting. The wind became intense, building to a huge climax. After the storm, seagulls sang over gentle waves. The piece ended with a long deep bell sound that took about half a minute to decay.

The most outstanding of these pieces was Pete Stollery's Vox Magna. It was based entirely on sounds recorded in a now-defunct steel manufacturing plant in Rotherham, UK, where the piece now resides as part of a multimedia visitor attraction. The composer recorded the sounds of machines used in the steelmaking process and noted their incredible variety and richness. There was less sound processing in this work than in the others, and most of the credit goes to the composer's skill in mixing these sounds into a rich and interesting collage. The sounds resembled slowly changing pitched noises, low thick chords, metallic and industrial sounds (of course!), drums, gongs, explosions, droning motors, pieces of metal flying around, sandpaper, a baby crying, the chug of a locomotive, rain, whistles, bells, wind chimes, bangs, gunshots, thunder, footsteps, animals, and other things too numerous to describe. The composer made good use of space and fading. This was an effective piece, and it did not contain a single sound reminiscent of a musical instrument except percussion.

Many pieces did contain instrumental sounds, of course, especially those that involved performers playing them. Opposed Directions by Hye-Kyung Lee, who also gave a competent performance on the Yamaha MIDI Grand Piano, involved running a program called Interactor on a Macintosh laptop that recorded the pianist's motions (keys pressed, dynamics, and pedal movements) and played back music on the same instrument, while the pianist continued in other ways. The program crashed on the first attempt, but it worked the next time. The piano playing was pretty conventional, featuring repeated notes, runs, clusters, and short phrases, and the point of the work seemed to be the struggle between the computer and the performer. Nevertheless, the piece was a blur.

McGregor Boyle's Landfall II: Flaming Skull (a heavy-metal allusion?) involved the composer playing an electric guitar into a computer, which captured some of the notes and responded by playing some of them back while also improvisationally selecting samples from prerecorded sound files. More than any other work, this piece had lots of conventional percussion among the samples, along with other kinds of noises suggesting water, wind, surf, and, according to the composer, farm animals, although I could not detect the latter. Most of the guitar playing was restrained and sensitive: single-note melodies with vibrato and a few arpeggios. Much of the interest in this work lay in hearing connections between the diverse materials, even though their selection was random. The piece ended with a flourish.

All the other live-performed pieces involved increasingly bizarre and wild performing techniques, and I will take them in ascending order of difficulty. INKAN: digital version by Frank Niehusmann is based on a huge sample library, collected by the composer over many years, containing a variety of sounds recorded in mines, factories, and the wild (birds, etc.). He used to work with 8-track and 2-track analog tapes, mixing delays of various lengths, and this version attempts to automate that process. While one might hear the sounds as painting a sound picture, he stated that he was thinking of them more abstractly. Each key of a MIDI keyboard was set to play a different sample loop: the top 75 keys, played by the right hand, controlled samples, and the lowest 8 keys, played by the left hand, controlled the spatialization. The piece built up such an incredible jumble of noises that it became hard to hear anything distinctly. The rhythms that emerged were those of the loops. At one point, loud industrial hammering joined the mix and jumped around the room. Jazzy chords entered, seeming to form a progression, but they simply looped back to the beginning, going nowhere. The most remarkable aspect of this work was the performance itself: the composer artfully depressed the keys, knees bending and head bobbing up and down, while his wife snapped copious pictures. At over 22 minutes, it was the longest piece played except for Lucier's I am sitting in a room.
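
The keyboard mapping Niehusmann described is straightforward to picture in code: note numbers in the lowest group of keys steer the spatialization, and everything above triggers a sample loop. The sketch below is a hypothetical illustration of that division of labor; the key ranges, sample names, and speaker labels are invented for the example and are not the composer's actual patch.

```python
# A hypothetical sketch of the keyboard layout described above: the lowest
# eight keys steer spatialization, every key above them toggles a sample loop.
LOWEST_KEY = 21        # A0 on an 88-key MIDI controller (assumption)
SPATIAL_KEYS = 8       # number of low keys reserved for spatial control

def handle_note_on(note: int, loops: list, positions: list) -> str:
    """Return a description of the action bound to a MIDI note number."""
    index = note - LOWEST_KEY
    if index < SPATIAL_KEYS:
        return f"move output toward {positions[index]}"
    return f"toggle loop {loops[index - SPATIAL_KEYS]}"

sample_loops = [f"sample_{n:03d}.wav" for n in range(80)]      # placeholder library
speaker_positions = ["front-L", "front-R", "side-L", "side-R",
                     "rear-L", "rear-R", "overhead", "center"]

print(handle_note_on(21, sample_loops, speaker_positions))     # lowest key: spatial control
print(handle_note_on(60, sample_loops, speaker_positions))     # middle C: a sample loop
```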

Christopher Cook's The Castle of Otranto, based on the Gothic novel by Horace Walpole, began in darkness with low, slow sounds and percussion bursts while the solo trombonist William Bootz, who also commissioned the work, snuck in and then made a grand entrance. There was some conventional playing, but he mostly played glissandos and short, emotionally "declaimed" phrases, much like an actor uttering cries and gasps. Air was both sucked and blown through the instrument, with and without the mouthpiece, creating breathing sounds and noise effects. The background included processed instruments, prominently piano, and many percussive effects. Much of the material, from both the soloist and the electronic part, verged on unintelligibility. The background was in stereo with some movement. There were numerous varieties of colored noise and trombone glissandos. At the end, the performer covered his forehead, looking dejected, as the lights faded along with the music.

Tusalava by the Swedish composer PerMagnus Lindborg involved improvisatory elements and interaction between a solo saxophonist and a computer, accompanying an ancient direct-animation film of the same name created in 1929 by Len Lye. The film, in black and white (of course!), was said to be "a search for equilibrium between the forces of man and nature." The saxophone soloist played almost nothing resembling traditional performance: glissandos, finger slaps, flutter tonguing, screeching, short "spit" tones, microtones, vocal-like sounds, and some fast phrases that sounded like short sentences. About three minutes in, the film began. Its screen was usually divided by color (one side white on black, the other black on white), with a series of moving images that looked like bubbles, dots, single-celled organisms, and ultimately quasi-human figures with parts of bodies, such as a face, arms, or a torso. Everything was in constant motion, sometimes up and off the screen, sometimes moving around the screen in random directions. At the film's climax, a leg-like appendage emerged from the humanoid figure on the left - the one with a face and arms, holding on to a body-like figure on the right - and the appendage was inserted into a circle in the middle of that object (intercourse!). The computer processed some of the saxophone playing, producing pitch shifting, echoes, and other effects, and occasionally it played alone while the soloist rested. Although the contrast between the modernistic musical techniques and the primitive film methods was apparent, it reminded me that the film's techniques were also very cutting-edge and modernistic when it was created.

Cort Lippe's Music for Contrabass and Computer also involved a mix of traditional and contemporary performance techniques, and in some sense it could be seen as a catalogue of effects, including nearly everything that can be done with the instrument. The piece was very effective. The performer played every type of gesture and rhythm, in every octave, at every dynamic level, with all possible bowing techniques, scraping, pizzicato, and so on. Sometimes he did not even hold the instrument in a traditional way. The computer responded to the performer, both processing the sounds and producing new materials from them, as well as playing various predetermined materials. The sense of interaction was clear, with the performer in charge. The sounds that emerged from the speakers moved all around the space, sometimes engulfing the performer, and resembled everything from metallic noises to meowing cats and shrieking. The electronic sounds all seemed to have their origin in the live sounds, although there were many kinds of transformations. Maybe the sounds were weird, but so was the playing!

Richard Boulanger's at last... Free involved a prerecorded video, live video, a live dancer, a live video and radio baton, and a computer playing both live and prerecorded music. Based on Boulanger's 1979 composition Trapped in Convert, it began with a four-minute film showing the dancer in various bleak, mostly empty cityscapes, including the New York City subway, a bridge (the George Washington Bridge, though that was not evident in the footage shown), buildings, various scenes through a chain-link fence, a floor grate, and a barren street. The dancer (the same one who later appeared live) faded in and out of some of the scenes, and sometimes the images dissolved into snow. After the initial four minutes, the dancer appeared on the floor covered by veils. She mostly moved around slowly, bending, turning, and throwing her hair back, but with a few fast, jerky motions. As the work progressed, live video captures were integrated into the prerecorded video and combined with other elements. The music was resynthesized in real time under the control of Boulanger's radio baton, which he played frenetically, sometimes as if wielding percussion mallets. While there was an omnipresent noise background, the music often consisted of low, soft, slowly changing sounds. The performers did not seem to end together. This was the most technologically advanced multimedia presentation at the festival.

The works that involved more recognizable instrumental and vocal sounds had a different feel from the more abstract pieces, depending on the extent to which those elements were used. Paul Rudy's Love Song was inspired by the Utah desert, and it included images of some of the things found there: wind, water, and ideas about vast expanses, erosion, and things that change slowly over time. The most prominent feature of the piece was a woman's voice reading a passage from Edward Abbey's Desert Solitaire in a deadpan, emotionless manner. The voice ranged from intelligible to distorted. The images described by the words nevertheless brought to mind some of the beauty and majesty that inspired the composer to write the piece. It was a beautiful and sensitive mix, ending with a long consonant passage.

Jonathan Hallstrom's In Memoriam for violin and computer was one of only two pieces in this group that involved processing of instrumental rather than vocal sounds. The work is a tribute to the composer Toru Takemitsu, and all the sounds in the computer part were drawn from his compositions; Hallstrom wrote in his program notes that he felt this allowed Takemitsu to participate in his own tribute. The violin soloist played tones that were miked and distributed around the hall (although some of these could have been prerecorded), and sometimes the player imitated the computer part. The computer part included heavily processed sounds, with results ranging from metallic chords with interesting timbre changes to various kinds of noises. The background was always changing and delicate; sounds faded in and out, forming clusters before decaying. The work ended with a violin line.

Craig Walsh's Terma for voice and electronic sounds was based on "the inherent acoustic properties of the Greek language." A text, both sung and spoken, introduces the letters of the Greek alphabet in six stanzas, each of which introduces "a new layer of deception and hurt." The piece began with a jumble of vocal sounds and noises, the voice entering and singing in a conventional manner. The second stanza had a vocal "accompaniment" from which all of the consonants had been removed, leaving only a disjointed sequence of vowels in short, regular rhythms. The background morphed into noises, followed by timbre changes. The third stanza returned to a more operatic style of singing, with the voice reaching a screeching high note that was amplified. After this, a low-register voice entered the background, and the singer began a declamatory spoken passage that moved into vocal flourishes. As the piece progressed, the speaker became more and more emotional, speaking frantically in short phrases and growing more intense. In this work I was frustrated by not knowing the meanings of the words, which clearly determined much of the character of the music and would have been clear to someone who understood Greek. The composer later apologized that a translation of the text had not been included in the program notes.

Mark Zaki's Everything we Say is Deformed was based on a text from the play Reading Frankenstein by Antoinette LaFarge, Annie Louie, and Mary Shelley, read by an actress. The text was usually understandable, and the words were suggestive: "verbs were created without original sin - make, break, give, run, die... Adverbs appear to us as angels... The brain cannot model itself on anything bigger than itself... You must remain free" (these are just a few quotations from the text). The electronic sounds, in a piece the composer described as an "'art song' for voice and virtual ensemble," resembled bells, a distorted guitar, noises, "crickets," and other things. The text, or rather sounds derived from the reading, often became part of the background, sliding in and out of intelligibility. At one point, disconnected vowels accompanied the reading, which became more and more emotional as the piece went on. The interesting thing about this work was the interplay between the words and the accompanying sounds, which were clearly derived from the words but in ways that made one wonder how - just as the phrases quoted above made the listener pause and wonder what they meant.

Mark Engebretson's Where does Love Go for viola and computer was based on a poem, Conservation of Energy by Dana Richardson, copies of which were distributed before the concert (though it was not included in the program book). A prerecorded voice was processed through various effects and transformations while the viola played more or less traditional melodies. The accompaniment drew on a wide range of materials, resembling at times out-of-tune voices, water sounds, instruments, percussion, and sounds imitating the live viola. The viola was miked and distributed spatially through the loudspeakers. The text, which suggested visual and emotional images and posed philosophical questions to ponder, veered in and out of intelligibility, with the effects coming across as a kind of commentary on the words. Because people could read the text before and during the performance, the audience did not have to wonder what the words were and could instead ponder what they meant, especially in the context of the composer's music. It is remarkable how much a simple thing like having the text can affect one's experience of a piece like this!

Family Stories: Sophie, Sally by Anna Rubin and Laurie Hollander told the life story of Anna Rubin's mother, Sophie Rubin, who grew up as the daughter of Russian Jewish immigrants in Atlanta in the early 20th century (one event in the text mentions the date 1915). Sophie's mother died when she was seven, and she was then raised by an African-American woman, Sally Johnson, but the family later moved away without the nanny, leaving the child without either her mother or her surrogate mother. The story included elements of the racism and anti-Semitism of the times. This was a relatively long piece (14 minutes), and the narrative came through clearly. Most of the speaking was in the persona of the daughter (composer Anna Rubin), but occasionally the voices of the mother and of the nanny, who spoke in dialect, were heard. Most of the accompanying computer-generated sounds were derived from the speaking voices, but there were other elements as well. What was heard resembled whooshing wind-like noises, sustained guitar sounds, chains, breaking glass, bells, chimes, metallic and water sounds, a "chorus" derived from the spoken voices, another chorus singing a spiritual, and a harmonica playing a sad tune that returned at several points. Unlike the pieces based on poetry and plays, these words did not leave the audience to ponder what they meant, so it was easier to focus on the story and hear the accompanying sounds as a backdrop.

Paul Koonce's Anacrusis was described as a "search" of a "virtual violinist ... for sound in new domains of timbre and space." A recorded violin played an ascending scale (the piece could also be described as a search for a new scale!) several times while the sounds underwent different kinds of transformations. The transformations became more and more involved and interesting, resembling flutes, bells, and voices, and taking on different kinds of ambient qualities. The composer made effective use of space and movement. At one point the flute sound became so dominant that it led me to speculate that the search for a new violin timbre had produced - the flute! But as the piece continued, many different ideas were suggested, and the violin sound, which kept returning to the odd scales that began the piece, became part of a more interesting electronically transformed environment.

Conflict of interest prevents me from attempting to evaluate my own work that was presented, Iridescence, but I can say that it does not fit into any of the categories described above for the other works. It was the only piece presented (on these concerts at least) that was based solely on synthesis of pitched tones, which had lots of controlled changes of timbre, amplitude and pitch deviations, but which included no processed instrumental or vocal sounds, noises, or extra-musical connotations. This may make it a more traditional conception, resembling instrumental music more than other electroacoustic music, but then, it's not instrumental music either.

For me, this festival was a thoroughly enjoyable occasion. The concerts took place in a "black box" theater that had an excellent playback environment with a mixing console in the center from which composers could control the dynamics and spatial playback of their music. The organizers had put together a crack team of about a dozen students who handled most of the technical and organizational details. One of them, composer Samuel Hamm, has worked for the festival for several years now and can only be described as a complete professional, able to deal with an extraordinary range of requirements that different composers bring.

In this day and age of lightning-fast internet connections and mass communications, it is hard to imagine that many campuses like the University of Florida are still relatively isolated and offer few opportunities to hear electroacoustic music except in festivals like this one. In fact, there are few opportunities to hear this music anywhere except in such festivals. That is one reason why this festival has endured so long and has given the students there the chance to be exposed to a wider range of influences and experiences than they would get otherwise.

If there is any criticism, or rather any suggestion for improvement, that I could offer, it is that the festival has failed to reach a wider audience. Almost all the composers were present to hear their works, and most of them stayed to hear the rest as well. Apart from some students, very few others attended, and almost no one who could be described as a member of the general public. The black box theater is part of a larger arts center that includes a large concert hall next door, and hundreds of people showed up there to watch a Cirque du Soleil performance. Did they know of the event taking place next door? I am not sure that attendance would increase with more advertising alone, but surely a city the size of Gainesville (over 200,000) has a few other people who might be interested. After thirteen successful years, those in the community should know what they are missing!