Reality Mining collects aggregate data about human–human interactions and maps them to patterns in order to predict human behavior and decision making, from first dates to job interviews. Supposedly it won't impose on people's privacy, since it works on very large-scale data like a city block or a town neighborhood. This reminds me of how Facebook collects user information to customize what shows up on our walls. Even though Reality Mining might not use the data to target a single person, that data might actually be used to influence collective behavior (which sounds even worse)! It also claims that human behavior is very predictable, but I like how psychologist Bernard Rimé argues that human behavior is actually shaped by many different factors: past experiences, cultural beliefs, habits, shared knowledge about a situation, and so on. It remains to be seen how much of human behavior Reality Mining can actually predict without being too intrusive into the private lives of its subjects.
I am quite amazed after reading this article. Vannevar Bush was a visionary and an engineer in the truest sense of those words. Some of the technologies he talks about were four or five decades ahead of their realization, while others have only just begun to be pursued. He talks about things I can easily relate to today, like the internet, blogging, and even Google's voicemail transcription (or Dragon Dictation, for that matter).
But the heart of his argument lies in the universal question of organizing and quickly retrieving information in the most productive way, a question even more crucial today than it was in 1945. We might have solved part of the problem, that of accessing data, but the other part, that of meaningfully indexing it, remains unsolved. Every researcher, every Master's or PhD student, runs into this problem. There is so much information out there for every field imaginable, but no way of instantly finding what is most relevant. Once you know what you're looking for it's much easier to find, but getting there is still not convenient.
It doesn't end there, though. The third aspect of his argument is working with this data in the most productive, the most human, way. His imagined machine, the Memex, is an excellent solution even in its nascent conception, and we most certainly have the technology to build one today. It's funny that interaction designers keep trying to move away from the traditional desktop metaphor, while reading visionaries like Vannevar Bush and Pierre Wellner ("DigitalDesk") reminds us that the essence of the solution lies in augmenting or transforming it rather than making it extinct.
In over fifty years of computing, the way most users interact with computers hasn't changed much at all. After its invention in 1961, it took about two decades for the mouse to reach the public market at affordable prices, with many design changes along the way. It became popular with the release of Apple's Macintosh in 1984, but apparently Apple received numerous complaints from users about how complicated it was to use. The design was purely functional and reflected the form of the computers it was attached to, with little attention paid to ergonomics. Even so, the fact that the modern mouse still resembles its ancestor in so many ways is a little saddening, to say the least.
About two decades ago SEGA released the "Activator", a sensor-based controller that let players control their game characters without holding a physical device. "You are the controller," it announced. It failed to attract a commercial player base and gradually disappeared. Less than six months ago Microsoft released the "Kinect", again promising "You are the controller", this time giving players far more control over their characters and thereby opening up an immense range of possibilities for new user interfaces. The future we've all been waiting for, the future of "Minority Report", is finally here, and it's up to the present generation of designers/programmers/hackers to define its rules and determine its survival. A new world of natural interaction based not on gestures so unintuitive they are ridiculous and painful to learn, but on ones that are flexible, clever, meaningful, playful, and just plain good.

Adam Kendon, in his "Visible Action as Utterance", defines gesture as "a universal form of expression, which although seems to be spontaneous and crafted through the whim of the individual, is at the same time regulated and subject to social convention." As simple as this sounds, it is still an enormous task to come up with a lexicon of gestures that is not just well defined and functional but also internationally acceptable, while encompassing a wide range of actions for millions of applications. Yet this is a necessary first step toward a future of ubiquitous natural interaction, one that is constantly being side-stepped as more and more interfaces emerge either blindly recreating the scene from Minority Report or coining random mappings of random hand movements.
The point of this whole rant is that we have been so obsessed with and engrossed in creating the technology to make a flawless natural user interface possible that we have unconsciously neglected a very important finishing detail: its design. It needs to be addressed sooner rather than later, and keeping Kendon's definition in mind while doing so could be a meaningful gesture in itself.
Atkinson, Paul. "The Best Laid Plans of Mice and Men: The Computer Mouse in the History of Computing."
Saffer, Dan. Designing Gestural Interfaces, "Designing Interactive Gestures."
Kendon, Adam. Gesture: Visible Action as Utterance.
… of seeing a favorite artist perform on stage is quite different from listening to their music digitally played back at home or anywhere else. What makes a live performance so different is obviously the presence of the artist, but not just that. The unexpected and seemingly arbitrary improvisations on stage, the lack of total digital perfection, and the moments when the artist becomes one with their music, reflected in their expressions and gestures, make the whole experience so enjoyable.
I am not well versed in electronic music. If I'd heard some of the music I listen to now a few years back, I'd have thought it rubbish or just plain weird. Attending on-campus concerts and listening to selected pieces in a music class, I gradually grew to appreciate it. Both being bored to death and wishing the ear-blasting awesomeness would never end have been part of the process. But the performances that had the most impact were always the ones with the artist on stage, playing with the elements, sometimes becoming a part of them.
In "Human Bodies, Computer Music", Bob Ostertag poses an interesting question: how to incorporate the body of an artist working with electronic music into the performance itself. Nine years ago this was perhaps a more difficult problem than it is today. If there was ever a time for a seamless interface between man and machine, it would have to be now, and it would have to be computer vision, which, although it comes with its own set of problems, is advanced enough for constrained scenarios. That, together with other advances in media processing and sensing, makes it easier than ever to incorporate an artist's body into their performance. Pamela Z is an example of an innovative media artist who uses techniques ranging from proximity sensing to gesture recognition in her performances, and it definitely has a unique, more engrossing effect on the audience.
I worked on a project a few months back where the user could interact with virtual objects around them (visible on the screen) to play sine tones at different frequencies. It's called Virtual Synth, and I was told it had a good performative character. I didn't fully understand what this meant until I read the article above. If I imagine a fully developed, ten-times-more-robust, n-th iteration of this software, I see an application with virtual instruments arranged around an artist, who could control all the on-screen parameters they once had to reach with keyboard and mouse merely with hand movements and gestures, creating computer music while being part of the performance.
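To give a sense of the core idea, here is a minimal sketch of the kind of mapping something like Virtual Synth might use: a normalized hand position (say, from a vision tracker) maps to a sine-tone frequency, and a short buffer of samples is synthesized. All the names, ranges, and the exponential pitch mapping are my own assumptions for illustration, not the actual project's code.

```python
import math

# Assumed frequency range (two octaves above A2) and sample rate; arbitrary choices.
F_LOW, F_HIGH = 110.0, 880.0
SAMPLE_RATE = 44100

def position_to_freq(x):
    """Map a normalized hand position (0.0-1.0) to a frequency.

    The mapping is exponential, so equal hand movements correspond to
    equal musical intervals rather than equal Hz steps.
    """
    x = min(max(x, 0.0), 1.0)  # clamp out-of-range tracker readings
    return F_LOW * (F_HIGH / F_LOW) ** x

def sine_buffer(freq, duration=0.1, amp=0.5):
    """Synthesize a short mono sine-wave buffer at the given frequency."""
    n = int(SAMPLE_RATE * duration)
    return [amp * math.sin(2 * math.pi * freq * i / SAMPLE_RATE)
            for i in range(n)]

# A hand at the left edge of the tracked area plays the low end,
# at the right edge the high end; the middle lands an octave-and-a-half up.
low = position_to_freq(0.0)    # 110 Hz
high = position_to_freq(1.0)   # 880 Hz
```

In a real performance setup the buffer would be streamed continuously to an audio output while the tracked position updates, but the mapping above is the part that turns a gesture into a musical parameter.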
In the end, though, it all comes down to what an artist has in mind for the audience to explore in their piece. I've been to shows where the artist performed on stage along with live visuals, and to concerts where there was virtually no performative aspect beyond controlling parameters on a laptop screen, but both were equally amazing. That's the best thing about electronic music: the immense amount of control and flexibility it offers opens up a wide range of possibilities to explore and unique ways of carving out a niche.