In over fifty years of computing, the way most users interact with computers hasn’t changed much at all. After it was first conceived in 1961, it took about two decades, and many design changes, for the mouse to reach the public market at an affordable price. It became popular with the release of Apple’s Macintosh in 1984, but Apple reportedly received numerous complaints from users about how hard it was to use. The design was purely functional and reflected the form of the computers it was attached to; little attention was paid to ergonomics. Even so, the fact that the modern mouse still resembles its ancient ancestor in so many ways is a little saddening, to say the least.
About two decades ago SEGA released the “Activator”, a sensor controller that let players control their game characters without a physical controller. “You are the controller,” it announced. It failed to attract a commercial player base and gradually disappeared. Less than six months ago Microsoft released the “Kinect”, again promising “You are the controller”, this time giving users and players far more control over their characters and thereby opening up an immense number of possibilities for new user interfaces. The future we’ve all been waiting for, the future of “Minority Report”, is finally here, and it’s up to the present generation of designers, programmers and hackers to define its rules and shape its survival: a new world of natural interaction based not on gestures that are unintuitive to the point of being ridiculous and painful to learn, but on gestures that are flexible, clever, meaningful, playful and just plain good.

Adam Kendon, in “Gesture: Visible Action as Utterance”, defines gesture as “a universal form of expression, which although seems to be spontaneous and crafted through the whim of the individual, is at the same time regulated and subject to social convention.” As simple as this sounds, it is still an enormous task to come up with a lexicon of gestures that is not only well defined and functional but also internationally acceptable and broad enough to cover the range of actions needed by millions of applications. Yet this is a necessary first step toward a future of ubiquitous natural interaction, and one that is constantly being side-stepped as more and more interfaces emerge that either blindly recreate the scene from Minority Report or coin random mappings for random hand movements.
The point of this whole rant is that we have been so obsessed with and engrossed in building the technology that makes a flawless natural user interface possible that we have unconsciously neglected a very important finishing detail: its design. It needs to be addressed sooner rather than later, and keeping Kendon’s definition in mind while doing so could be a meaningful gesture in itself.
Atkinson, Paul. “The Best Laid Plans of Mice and Men: The Computer Mouse in the History of Computing.”
Saffer, Dan. Designing Gestural Interfaces (“Designing Interactive Gestures”).
Kendon, Adam. Gesture: Visible Action as Utterance.
… of seeing a favorite artist perform on stage is quite different from listening to their music at home, or anywhere else, as a digital playback. What makes a live performance so different is obviously the presence of the artist themselves, but not just that: the unexpected and seemingly arbitrary improvisations on stage, the lack of total digital perfection, and the moments when the artist becomes one with their music, reflected in their expressions and gestures, all make the whole experience so enjoyable.
I am not a person well versed in electronic music. If I’d heard some of the music I listen to now a few years back, I’d have thought it rubbish or just plain weird. Attending on-campus concerts and listening to selected pieces in a music class, I gradually grew to appreciate it. Both being bored to hell and wishing the ear-blasting awesomeness would never end have been part of the process. But the performances that had the most impact were always the ones with the artist on stage, playing with the elements, sometimes becoming a part of them.
In “Human Bodies, Computer Music”, Bob Ostertag poses an interesting question: how to incorporate the body of an artist working with electronic music into the performance itself. Nine years ago this was perhaps a harder problem than it is today. If there was ever a time for a seamless interface between human and machine, it would have to be now, and it would have to be computer vision, which, although it comes with its own set of problems, is advanced enough for constrained scenarios. That, together with advances in media processing and sensing, makes it easier than ever to bring an artist’s body into their performance. Pamela Z is one example of an innovative media artist who uses techniques ranging from proximity sensing to gesture recognition in her performances, and it definitely has a unique, more engrossing effect on the audience.
A few months back I worked on a project where the user could interact with virtual objects around them (visible on the screen) to play sine tones of different frequencies. It’s called Virtual Synth, and I was told it had a good performative characteristic; I didn’t fully understand what that meant until I read the article above. If I imagine a fully developed, ten-times-more-robust, n-th iteration of this software, I see an application that surrounds an artist with virtual instruments, letting them control every on-screen parameter they once needed a keyboard and mouse for with nothing but hand movements and gestures, creating computer music while being part of the performance.
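To make the core interaction concrete, here is a minimal Python sketch of the idea, not the actual Virtual Synth code: a tracked hand position (faked here; in practice it would come from a sensor like the Kinect or a webcam tracker) is hit-tested against on-screen pads, each mapped to a sine-tone frequency. The pad layout, frequencies and simulated hand path are all illustrative assumptions.

```python
# Sketch of a Virtual-Synth-style interaction: hand position -> pad -> sine tone.
import numpy as np

SR = 44100  # audio sample rate in Hz

# Hypothetical pad layout: (x, y, width, height) in normalized screen
# coordinates, each pad tied to one sine-tone frequency.
PADS = [
    {"rect": (0.0, 0.4, 0.2, 0.2), "freq": 261.63},  # C4
    {"rect": (0.4, 0.4, 0.2, 0.2), "freq": 329.63},  # E4
    {"rect": (0.8, 0.4, 0.2, 0.2), "freq": 392.00},  # G4
]

def pad_hit(x, y):
    """Return the frequency of the pad under the hand, or None if no pad is hit."""
    for pad in PADS:
        px, py, w, h = pad["rect"]
        if px <= x <= px + w and py <= y <= py + h:
            return pad["freq"]
    return None

def sine_tone(freq, dur=0.25):
    """Synthesize a short sine tone at the given frequency."""
    t = np.arange(int(SR * dur)) / SR
    return 0.5 * np.sin(2 * np.pi * freq * t)

# Fake a hand sweeping across the screen at mid-height; a real version
# would read tracker coordinates each frame instead.
for x in np.linspace(0.0, 1.0, 20):
    freq = pad_hit(x, 0.5)
    if freq is not None:
        samples = sine_tone(freq)
        print(f"hand at x={x:.2f}: playing {freq:.2f} Hz ({len(samples)} samples)")
```

A real version would read sensor coordinates every frame and stream the samples to an audio device rather than printing them, but the hit-test-then-synthesize loop stays the same.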
In the end, though, it all comes down to what an artist has in mind for the audience to explore in their piece. I’ve been to shows where the artist performed on stage along with live visuals, and to concerts where there was virtually no performative aspect beyond controlling parameters on a laptop screen, and both were equally amazing. That’s the best thing about electronic music: the immense amount of control and flexibility it offers opens up a wide range of possibilities to explore and unique ways of carving out a niche.