Since the 1960s, Robert Moog’s name has been indissolubly linked with the modular analog synthesizer he invented and marketed, where transistors replaced the vacuum tubes of the earlier Hammond organ to create a smaller, more versatile, commercially viable electronic system. Moog (say it with a long o) showed us how deeply our experience of sound is influenced by our interface with it, whether on the passive side—in 1969, Wendy Carlos put Bach in the Top 10 of the Billboard 200 pop chart simply by playing it on a Moog—or the active side, where the new gestures of hand-sculpting the air around a theremin’s antenna or patching connections like a switchboard operator engendered new ways of thinking in sound, and a reexamination of the role of the user.
Now, of course, you can buy all those modules pre-stuffed into one eminently portable box, and electronic music is no longer a novelty. But we are still sizing up the psychological and aesthetic effects of severing music from the resonator, as the electronic musician Daedelus put it at Kill Screen’s afternoon of discussions of sound and videogames at the music festival honoring Bob Moog in his company’s mountain hometown of Asheville, North Carolina. On Friday, April 25, on a neon-slashed stage in a large bar called the Mill Room at the Asheville Brewing Co., Kill Screen founder Jamin Warren played the interlocutor for a series of inventive creators from the borderlands of music and videogames.
The panelists rotated through a broad variety of topics, all circling the point that new interfaces are rapidly changing games as well as our relationship to sound in them—recalling the impact on pop music when Robert Moog turned a box with buttons and knobs, not so unlike a videogame controller, into a popular instrument. The difference is that emergent interface technology arguably mediates our experience of making music, putting technological distance between the human body and the sounds created. In games, the effect is inarguably the opposite, weaving us ever more deeply into what we play.
The first panel paired two guests who embodied the convergence of instruments and mechanical controllers, one more from the world of music and the other more from the world of games. Joel Ford is half of Ford & Lopatin, a duo with Oneohtrix Point Never’s Daniel Lopatin that makes a fetish of ’80s-style analog synthesis and drum programming. Ford said the influential electronic artist Prefuse 73 was his gateway into MPC music. (An MPC, or music production center, is a digital sequencer and sampler that turns the old-fashioned DJ into a live producer and editor, jamming on pads instead of scratching and blending vinyl.) The artistry of people like Prefuse and the physically dexterous nature of using the MPC make it an appealing transition into electronics for more traditional acoustic and electric instrumentalists, which Ford formerly was. He praised the passage of electronic instruments from cultural novelties to standard tools. “The only thing more punk than playing three chords on a guitar,” he remembered the new wave band the Human League saying, “is playing one note on a synthesizer.”
Ford came up on the Boston music scene with co-panelist Matt Boch, who also knows a thing or two about the relation between so-called real and virtual or electronic instruments. Boch spent the end of the last decade as a hardware designer at Harmonix—where he is now a creative director—developing the plastic guitars and other instrument-like controllers for the Rock Band franchise. He explained that the Rock Band guitar was more like a mouse, a precise interface, than a toy or instrument, and that getting the right feel was the hardest part (both musicians praised the haptic feedback of Moog instrument knobs), just like it is for real instrument designers. Boch said that while the buttons on a Rock Band guitar don’t correlate with real strings or frets, they have to feel intuitive and satisfying on their own terms. Rock Band started off with basic rock songs to teach people how instrumental parts fit together—a modular approach to heuristic learning—before getting more abstract with guitar-less fare such as Lady Gaga’s “Poker Face.” The panelists described the future as a proliferation of accreting technologies, where new things get linked into old ones, but nothing really goes away. Despite the rise of solid-state, the module still reigns.
The second panel featured Karla Zimonja, a veteran of big-studio games who then went on to co-found The Fullbright Company. Zimonja, who worked on the lauded BioShock 2 DLC “Minerva’s Den,” discussed the use of sound to tell rather than simply wallpaper in-game stories, from the creaks and groans of BioShock’s failing submerged structure to the use of audio logs to transmit narrative info concisely and at the player’s discretion. “Pull rather than push,” she said of her theory of audio storytelling, noting how sound can be a breadcrumb trail leading the player around less obtrusively than flashing arrows or HUDs.
If the first two panels mostly explored fidelity—the realistic representation of sound—the last two ventured into the farther frontier of sound as a trick, an illusion, a befuddling sorcery. One of the most fascinating guests was Robin Arnott, and not just for his coxcomb of hot-pink hair. An interactive artist (Deep Sea, SoundSelf) and game sound designer (Antichamber, The Stanley Parable), Arnott spoke of “hacking” the user’s brain via blends of sound and virtual or augmented reality. In his work, sound breaks away from the mimetic and the supplementary, becoming an end and an interactive world unto itself. In Deep Sea, for instance, the player’s own respiration makes it hard to hear threats, so you literally have to play with bated breath, balancing virtual success and real survival (Arnott said that someone passed out playing it on the SXSW show floor).
“Sound design is hypnosis,” he said, and spoke of wanting to shut up the player’s internal monologue to get the world he’s creating under their skin. He brought Zimonja on stage to try out SoundSelf on an Oculus Rift, with the internal display projected on a monitor visible to the audience. As she hummed, musical audio output resonated with and transformed her voice, which was linked to synesthetic visuals in a complex, meditative biofeedback loop. Arnott is more interested in perception and experience than gamification, working toward the ecstatic trance states that are familiar to fans of electronic dance music, but in a more private, sense-encompassing way. His work is like a dance party for your neurons in the hermetic nightclub of your brain.
The final panel featured the aforementioned Daedelus, who more resembles a beatific barbershop quartet singer than an electronic musician, and Microsoft Audio Innovation Director Matthew Lee Johnston. Johnston had the deepest, most diverse resume of the afternoon, as the sound designer of Peggle—a potent example of the slot-machine-like feedback intrinsic to many casual games—and a veteran of famed PC titles such as Flight Simulator and Train Simulator, not to mention Counter-Strike and Forza. “When you play a videogame and trigger sounds, you’re basically playing an instrument,” Johnston said. He told a story about convincing Microsoft of the importance of recording real golf sounds for MS Golf, but then only getting leaf-blowers and traffic sounds when he tried to do so. Instead, he created illusory nature sounds—what we would like to imagine a golf course sounding like—and then eased players into the more realistic, less romantic version as the game progressed, with distant shotgun blasts and employees in utility sheds complaining about bosses.
The pair also discussed the “stoic German clarity” of Moog synths versus the “Berkeley wildness” of one of Moog’s most august contemporaries, Don Buchla, whose son worked on one version of the Monome, the controller Daedelus currently favors to perform music. To envision the Monome, basically picture an open-source Boss Dr. Sample with more buttons on the grid, stripped of any preprogrammed functionality—a purist, minimalist approach to control. Since the Monome has no manufacturer’s programming, it’s a versatile tool for many fields beyond music. Certainly, a creative videogame designer could find novel ways to use it. It neatly summarizes the ever-expanding, flourishing ways we can touch our virtual worlds, whether sonic, ludic, or, increasingly, both.
Header image by synhack
Inline images via stereogab