Virtual Synth is now available on the App Store!
Virtual Synth is an experimental synthesizer for your iPhone/iPod touch. You can generate sine tones and square waves from a wide range of frequencies. Experiment with different beat frequencies to generate modulating signals. A minimal control interface and engaging graphics make it even more fun to use.
Touch anywhere on the grid to create an instrument. The labels display frequency values. To change an existing instrument, select a different instrument or frequency and tap on it. The beat frequency for modulating waves increases diagonally. Tap with two fingers at any time to remove all instruments.
X: to toggle frequency labels.
+/-: to toggle adding or removing instruments.
F: to toggle slider display to change frequency range.
sin: to add a sine tone.
mod: to add a modulating signal.
sqr: to add a square wave.
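The beat frequencies mentioned above come from summing two tones that are close in pitch: the pair pulses audibly at the difference of their frequencies. A minimal sketch of the idea (in Python; this is not the app's actual audio code, and the frequencies are illustrative):

```python
import math

def beat_signal(f1, f2, duration=1.0, sample_rate=44100):
    """Sum two sine tones; close frequencies beat at |f1 - f2| Hz
    (hypothetical helper, not taken from the app)."""
    n = int(duration * sample_rate)
    return [math.sin(2 * math.pi * f1 * t / sample_rate) +
            math.sin(2 * math.pi * f2 * t / sample_rate)
            for t in range(n)]

# A 440 Hz and 444 Hz pair beats 4 times per second.
samples = beat_signal(440.0, 444.0)
```

The same sum can be rewritten as a 442 Hz carrier multiplied by a slow 2 Hz envelope, which is why it is heard as a modulating signal rather than two distinct tones.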
Virtual Synth was created as a class project in the Media Arts and Technology program at UCSB in the spring of 2011. It was realized with constant help and encouragement from Charlie Roberts, and built on Lance Putnam's generic synthesis library, Gamma.
Interactive Installation @ MAT End of the Year Show 2011. (June 9, 2011)
Animus is an interactive multimedia installation inspired by quorum sensing in bacteria. It stands as a metaphor for the governing spirit of the system, a collective brain if you will, behind the cell-to-cell communication in microbes that defines their gene expression and the collective behaviors that emerge from it. Microorganisms use this kind of communication constantly to gauge their population density; once a certain threshold is crossed, they display behaviors ranging from bioluminescence and toxin secretion to sporulation and conjugation.
Animus places the user as the controller of this system, where how they choose to interact with it defines its outcomes. The user gets continuous feedback on the threshold required to display a certain behavior (bioluminescence in this case), which affects how they interact with it and eventually makes them more a part of the system than its controller.
The installation uses a Kinect to create a sensing field in front of the projection, making direct manipulation of the system possible, unaffected by background noise. The user needs to perform a certain gesture to identify themselves as the controller and begin affecting the system's behavior, which is reflected in the audio interface and the projected display.
3D (Stereographic) Visualization based on Cellular Automata
Morphon is a visualization of cellular automata in 3D, for a specific rule set and the structures/patterns that emerge from it. A cellular automaton is a discrete mathematical model studied in a number of advanced scientific fields. In its most elementary form it is a one-dimensional row of cells, each holding one of two possible states (ON or OFF) at any discrete moment of time; a cell's next state is determined by the states of its two neighboring cells and its own previous state. Rules govern this change of state over time, and thus a complex mix of patterns, sometimes periodic and sometimes highly stochastic, emerges from very basic building blocks.
This project investigates a subset of an extremely wide range of possibilities to visualize 3D Cellular Automata and explore the generative structures that emerge in the process. The user can interact with the emergent structures with an iPhone interface that communicates with the app using OSC messages. The sonification process is very basic in that it maps the number of living cells at any time to the number of instruments.
Immersive Environment for the TransLab Space (MAT 261 A/B : Transvergence, Fall 2010)
The TransLab is a space equipped with IR trackers, multiple projectors, and a 16-channel speaker system. This project involved creating an immersive environment in which the user was surrounded by a fluid field that they could measure, interact with, and manipulate using a sensor in their hand. The sensor was implemented with touchOSC on an iPhone. The fluid density values around the user were displayed on the iPhone as well as the projector screens, and also played back as frequency modulations of sine tones, making it natural for the user to explore the surrounding space in trying to locate the fluid source. The sensor screen as seen by the user is displayed in the image below.
The 3D fluid field simulation was implemented by adapting Jos Stam's 2D real-time fluid simulation algorithm for games to 8000 points, with OpenGL in C++. Check out the video to see how the simulation looks in its mouse-interactive version:
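The heart of Stam's scheme is an unconditionally stable diffusion step solved with Gauss-Seidel relaxation. A 2D Python sketch of that single step (the project itself ran on a 3D grid in C++; the `diff`, `dt`, grid size, and iteration count here are illustrative values):

```python
def diffuse(d0, diff=0.1, dt=0.1, iters=20):
    """One diffusion step from Jos Stam's stable-fluids scheme:
    solve the implicit system with Gauss-Seidel relaxation.
    d0 is a square grid of densities; boundaries are left fixed."""
    n = len(d0)
    a = dt * diff * n * n
    d = [row[:] for row in d0]
    for _ in range(iters):
        for i in range(1, n - 1):
            for j in range(1, n - 1):
                d[i][j] = (d0[i][j] + a * (d[i-1][j] + d[i+1][j] +
                                           d[i][j-1] + d[i][j+1])) / (1 + 4 * a)
    return d

# A point of density spreads into its neighbourhood after one step.
grid = [[0.0] * 8 for _ in range(8)]
grid[4][4] = 1.0
out = diffuse(grid)
```

Because the solve is implicit, the step stays stable for any time step, which is what makes the scheme practical for real-time interaction.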
( …Continued from Visualizing Cellular Automata – II )
The possibilities for exploring these structures were immense, so I skipped the next logical step, where normally I would simulate 2D cellular automata (a more specific example of which is Conway's "Game of Life"), with 4294967296 (2^(2^5)) possibilities to explore. Instead I moved on to 3D CA. With millions of possibilities to explore again, it becomes crucial to constrain the system, so I explored totalistic CA, where a cell's next state is based not on the state of any single cell but on the sum of the states in its neighborhood. I ended up getting some interesting structures, as shown in the images below. The sonification for this is rather primitive, with the number of cells being updated directly affecting the frequencies.
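A totalistic update fits in a few lines, since the rule only needs the neighborhood sum. In this Python sketch the rule is expressed as two sets of sums, a "Life-like" convention I use here for illustration rather than the project's actual rule set:

```python
def totalistic_step(grid, born, survive):
    """One update of a 3D totalistic CA: a cell's next state depends
    only on the sum of live cells in its 3x3x3 neighbourhood.
    `born`/`survive` are sets of sums; edges wrap around."""
    n = len(grid)
    nxt = [[[0] * n for _ in range(n)] for _ in range(n)]
    for x in range(n):
        for y in range(n):
            for z in range(n):
                s = sum(grid[(x + dx) % n][(y + dy) % n][(z + dz) % n]
                        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                        for dz in (-1, 0, 1)
                        if (dx, dy, dz) != (0, 0, 0))
                nxt[x][y][z] = 1 if (s in born if grid[x][y][z] == 0
                                     else s in survive) else 0
    return nxt

# A lone live cell persists under survive={0} while nothing new is born.
grid = [[[0] * 4 for _ in range(4)] for _ in range(4)]
grid[1][1][1] = 1
nxt = totalistic_step(grid, born=set(), survive={0})
```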
( …Continued from Visualizing Cellular Automata – I )
Since the system has many cells, and hence many states, it was important to constrain it somehow when attempting to sonify it. Some attempts at creating generative music follow. The last video is an example of mapping Rule 110 onto the Pelog scale to generate rhythmic sounds; midway through, I dynamically increment/decrement the rules to change the structure.
( Continued on Visualizing Cellular Automata – III… )
Genetics has been a topic I have been interested in for as long as I can remember. Recently I started reading about how biological systems function and how their principles are applied in creating programs. This led me to read more about genetic programming and AI, and eventually I came across complex systems and cellular automata. A quick Google search was enough to get me excited about their generative nature and emergent patterns. The fact that I could create a purely rational system on my computer, one that applied logical rules to a set of states to generate such complex structures, orderly and chaotic at the same time, encouraged me to explore this as a project for 594P.
The origins of cellular automata lie in von Neumann's simplification of his kinematic automaton, a system designed to create self-replicating robots, prompted by Stanislaw Ulam's insight into his methods. Though it became popular within a small computing community through John Conway's "Game of Life", it was Stephen Wolfram's publication of "A New Kind of Science", a book that explains how complex systems emerge from seemingly simplistic ones like cellular automata, that reintroduced the concept as a thoroughly systematic investigation. The basics are very straightforward: you start with a set of initial states, iterate through all the cells, checking each cell's neighborhood (a finite number of cells around it) and mapping its states through the rule being employed to calculate the cell's next state. All the cells are updated once the rule has been applied, and then the process is repeated.
I started with elementary cellular automata: a 1D structure of cells, where each cell's neighborhood is composed of itself, the cell on its right, and the cell on its left, and there are only two possible states for each cell: '0' and '1'. With this configuration there are 256 (2^(2^3)) possible rules to govern the behavior. Interesting behaviors emerge when the evolution of a 1D cellular automaton is tracked over a number of iterations; the following images display some of the interesting rules. The major observation Wolfram made was that some structures were very orderly while others were very stochastic in nature, and some of the most interesting ones combine both order and randomness in their structure, for example Rule 110.
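The neighborhood-to-rule lookup described above fits in a few lines: the 3-bit pattern (left, self, right) indexes into the binary expansion of the rule number. A Python sketch of one generation, using wrap-around edges for simplicity:

```python
def eca_step(cells, rule=110):
    """One generation of an elementary cellular automaton: each cell's
    next state is the bit of `rule` selected by the 3-bit pattern
    (left, self, right). Edges wrap around."""
    n = len(cells)
    return [(rule >> (cells[(i - 1) % n] << 2 |
                      cells[i] << 1 |
                      cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 110 grown from a single ON cell expands to the left over time.
row = [0] * 7 + [1] + [0] * 7
for _ in range(3):
    row = eca_step(row)
```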
( Continued on Visualizing Cellular Automata – II… )
Image Processing Techniques (implemented in MATLAB) for ECE 181B : Computer Vision (Winter 2010)
A. Face Recognition with Eigenfaces
B. Object Detection with Bag of Features
C. Homogeneous Transformation
D. Scale Invariant Feature Transform
E. Corner Detection (Harris Corner Detector)
Interactive Multimedia Project for MAT 200C : Multimedia Systems (Spring 2010)
Virtual Synth is an application of OpenCV for Processing. It creates a virtual frame on top of the camera stream that contains interactive objects. The square objects are updated on every frame of the stream to detect motion in that region and, as a result, update the frequency of a synth in SuperCollider to play different tones. Communication between Processing and SuperCollider is done via oscP5, which uses Open Sound Control. GUI objects include sliders for changing the contrast, brightness, and threshold values for motion detection.
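Under the hood, an OSC message is just a padded address string, a type-tag string, and big-endian arguments. A from-scratch sketch of the wire format (the project used oscP5 rather than hand-encoding, and `/synth/freq` is an invented address for illustration):

```python
import struct

def osc_message(address, value):
    """Encode a minimal OSC message carrying one float argument.
    Strings are NUL-terminated and padded to 4-byte boundaries;
    the float is big-endian IEEE 754, as the OSC spec requires."""
    def pad(s):
        b = s.encode("ascii") + b"\x00"
        return b + b"\x00" * (-len(b) % 4)
    return pad(address) + pad(",f") + struct.pack(">f", value)

msg = osc_message("/synth/freq", 440.0)
```

Libraries like oscP5 produce exactly this byte layout, which is why Processing and SuperCollider can exchange messages without sharing any code.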
Check out the demo video:
Audio Visual Piece for MAT 200B : Music and Technology (Winter 2010)
This composition was an exploration in mapping one digital domain onto another. The basic process involved creating visuals in Flash, importing them into MATLAB, and then converting each frame into a corresponding sound based on its frequency constituents. The conversion was done using the Fast Fourier Transform. It involved a lot of experimentation, with some expected results and some unexpected ones. The final result is an audio-visual whole whose two components are inherently in sync. The initial idea for this project stemmed from experiments with the inverse conversion of sound sonograms.
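One way to picture the frame-to-sound mapping is additive synthesis: treat each pixel's brightness as the magnitude of a harmonic. This Python sketch is a simplification of the idea; the piece itself used MATLAB's FFT, and the base frequency, sample rate, and column here are all illustrative assumptions:

```python
import math

def frame_to_sound(column, base_freq=110.0, sample_rate=8000, n_samples=400):
    """Synthesize audio from one image column by treating pixel
    brightness as the magnitude of successive harmonics of base_freq
    (an additive-synthesis sketch of the frame-to-sound idea)."""
    return [sum(mag * math.sin(2 * math.pi * base_freq * (k + 1) * t / sample_rate)
                for k, mag in enumerate(column))
            for t in range(n_samples)]

# A column that is bright only in its first pixel yields a pure 110 Hz tone.
tone = frame_to_sound([1.0, 0.0, 0.0, 0.0])
```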
Showcased at the MAT EoYS Concert, May 2010.
Interactive Multimedia Project (MAT 594O : Sensors)
This project was an exploration in linking ideas from human-computer interaction and computational geometry to create an engaging audio-visual environment. The concept was to trigger transformations of 3D forms (based on the superformula proposed by Johan Gielis) and sound (vibrato) transformations in virtual 3D space. All actions are triggered through a series of hand gestures (an Arduino with an accelerometer): the rotation of the hand is measured on the X, Y, and Z axes, and quick up/down movements change the SuperShape. Further adjustments to the signal received by the system can be used to create more expressive transformations of the SuperShapes.
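For reference, Gielis's superformula gives a radius as a function of angle; sweeping the angle traces the 2D profile from which a SuperShape can be built by revolution. A sketch of the published formula (not the project's rendering code):

```python
import math

def superformula(theta, m, n1, n2, n3, a=1.0, b=1.0):
    """Radius of Johan Gielis's superformula at angle theta:
    r = (|cos(m*theta/4)/a|^n2 + |sin(m*theta/4)/b|^n3)^(-1/n1).
    m controls rotational symmetry; the n's control the shape."""
    t1 = abs(math.cos(m * theta / 4.0) / a) ** n2
    t2 = abs(math.sin(m * theta / 4.0) / b) ** n3
    return (t1 + t2) ** (-1.0 / n1)

# With m=4 and n1=n2=n3=2 the formula degenerates to a unit circle;
# other parameter choices give star-like and petal-like profiles.
r_circle = superformula(0.7, 4, 2, 2, 2)
```

Varying the exponents smoothly morphs one shape into another, which is what makes the formula well suited to gesture-driven transformations.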
The user/viewer is presented with an interface (an Arduino attached to either the right or left hand) that engages the system with stimuli of sight, sound, and motion. HCI, computer graphics, and electronic music were concurrent areas of research.
Check out the video:
Concept project for Mat 200A : Arts and Technology (Fall 2009)
The main concept is to convert the wall of the CNSI building (UCSB campus), which currently holds a fake Kandinsky painting, into an animate, perceptive entity with intelligent properties. Navigate the Flash presentation above by clicking anywhere on it to see how this concept would be achieved. Click here for more information about this project.
Group Collaboration Project for MAT 200C: Spring 2010
I worked with a team of six other people to create a multi-touch surface table for MAT. We used FTIR for touch detection, TUIO for tracking and Processing for the display.
The concept of natural user interfaces is becoming more and more widespread, from consumer appliances to research instruments. In this domain of technology, where user experiences range from simple touch screens to fluid interfaces, multi-touch surfaces have an important role to play. For some they promise a more integrated, interactive, and intuitive multi-user solution, while for others they pose a situation of unwanted change. This project takes a positive approach towards adapting to this technology and investigates, to a certain extent, its mechanics. It also presents a brief overview of certain applications for such an interface and how they can be expanded. It is concluded that touch interfaces provide a sense of instant feedback and prompt reaction, which makes for a richer and faster experience. Further investigation and user feedback would lead to more streamlined applications that could take collaborative software to the next level.
SIFT, or Scale Invariant Feature Transform, is a nifty transform for object recognition in images, making features scale invariant and, to some extent, illumination invariant as well. The first image shows a lamp being recognized in a larger photo of the room.
The second image (of the cameraman) shows that even after an image is scaled, rotated, and has its brightness tampered with a bit, the algorithm still matches it with the original. This was implemented in MATLAB for a Computer Vision class.
We (a group of 4 people) had a lot of fun experimenting at every stage to check the results. Since the algorithm works on an image across various octaves and scales, at one stage we had more than a dozen images to look at. The last image is a screenshot of that one time!
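The scale-space step behind those per-octave images is the difference of Gaussians, which SIFT uses to approximate the scale-normalized Laplacian; its extrema are the keypoint candidates. A 1D toy version in Python (our class implementation was in MATLAB; sigma and k here are just the conventional defaults):

```python
import math

def gaussian_blur(signal, sigma):
    """Blur a 1D signal with a normalized Gaussian kernel, clamping
    at the edges (a toy stand-in for SIFT's per-octave blurring)."""
    radius = int(3 * sigma)
    kernel = [math.exp(-(i * i) / (2 * sigma * sigma))
              for i in range(-radius, radius + 1)]
    norm = sum(kernel)
    kernel = [k / norm for k in kernel]
    n = len(signal)
    return [sum(kernel[j + radius] * signal[min(max(i + j, 0), n - 1)]
                for j in range(-radius, radius + 1))
            for i in range(n)]

def difference_of_gaussians(signal, sigma=1.0, k=1.6):
    """Subtract a wide blur from a narrow one; extrema of this
    response across position and scale are keypoint candidates."""
    return [a - b for a, b in zip(gaussian_blur(signal, sigma),
                                  gaussian_blur(signal, k * sigma))]

# An impulse produces the classic centre-positive, surround-negative response.
impulse = [0.0] * 21
impulse[10] = 1.0
dog = difference_of_gaussians(impulse)
```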
I implemented a "Texture Synthesis" algorithm (proposed by Efros and Freeman at SIGGRAPH 2001) in MATLAB for a Media Signal Processing class. (MAT 201a: Fall 2009)
Shown above are a couple of results from this implementation. Click on the images to enlarge them.
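The core of the Efros-Freeman quilting step is choosing, from the sample texture, the patch whose overlap region best matches what has already been synthesized. A minimal Python sketch using sum-of-squared-differences over a one-column overlap (the full algorithm also computes a minimum-error boundary cut through the overlap, which is omitted here):

```python
def best_patch(texture, target_edge):
    """Among all square patches of the sample texture, return the one
    whose left column best matches `target_edge` (the overlap with the
    already-synthesized output), scored by sum of squared differences."""
    best, best_cost = None, float("inf")
    p = len(target_edge)
    for row in range(len(texture) - p + 1):
        for col in range(len(texture[0]) - p + 1):
            edge = [texture[row + i][col] for i in range(p)]
            cost = sum((a - b) ** 2 for a, b in zip(edge, target_edge))
            if cost < best_cost:
                best_cost = cost
                best = [texture[row + i][col:col + p] for i in range(p)]
    return best

# In this tiny 3x3 "texture", the patch whose left column is [1, 4] wins.
texture = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]
patch = best_patch(texture, [1, 4])
```

In practice the paper also randomizes among the near-best candidates so the output does not tile identically, but the SSD search above is the matching criterion.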