5.22.2007

multi-touch + sound + haptics = groupthink?

A few things occurred to me during Jeff Han's multi-touch interface demo (see it here).

1) Add sound. During the part where he's zooming in and out of a cloud of dots and talking about data modeling, I wanted to hear whooshing sounds and shifts in the background hum to give me additional information about my relative location within the 3D space.

2) Use sound to simulate haptic (touch) information. Can you fool the brain with sound? As you drag your fingers around on a virtual surface, if changes in surface texture are represented as changes in sound, do you kind of feel them? Not to mention efficiency: you can type really fast if you can touch-type, because you don't have to look at the screen to see where your hands are. What if instead you could "hear/feel" where they are? That keeps the eyes free for detail work. (There's a rough sketch of this kind of texture-to-sound mapping after the list.)

3) What about eye-tracking, too? Hands can do one thing while eyes look somewhere else.

4) Multi-user. Multiple people could look at a screen and create something together: images and music based on where each person is looking, highlighting certain data, while you also track what each of them is doing with their hands. With enough processing power (how about multiple autonomous CPUs sending synced OSC data to a server that runs the display?) you could track a whole crowd's responses in real time. Latency would actually be kind of okay, because it would make people focus on one spot until the changes began to appear. It might be less chaotic, too.
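
Just to make idea 4 concrete, here's a rough Python sketch of the OSC plumbing, using the python-osc package (my choice, not anything from the demo). The /touch and /gaze address patterns, the port, and the client IDs are all invented for illustration.

    # Each participant's machine streams its local touch/gaze samples to the
    # display server over OSC (UDP). Everything below is a sketch, not a spec.
    import time
    from pythonosc.udp_client import SimpleUDPClient

    SERVER = "192.168.1.10"   # hypothetical display server
    PORT = 9000
    CLIENT_ID = 7             # one ID per participant's CPU

    client = SimpleUDPClient(SERVER, PORT)

    def send_touch(x, y, pressure):
        # who, where, how hard, plus a timestamp the server can use to sync clients
        client.send_message("/touch", [CLIENT_ID, x, y, pressure, time.time()])

    def send_gaze(x, y):
        client.send_message("/gaze", [CLIENT_ID, x, y, time.time()])

    # --- and on the display server ---
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    crowd = {}   # latest sample from each participant, keyed by client ID

    def on_touch(address, cid, x, y, pressure, t):
        crowd[cid] = ("touch", x, y, pressure, t)

    def on_gaze(address, cid, x, y, t):
        crowd[cid] = ("gaze", x, y, None, t)

    dispatcher = Dispatcher()
    dispatcher.map("/touch", on_touch)
    dispatcher.map("/gaze", on_gaze)

    # BlockingOSCUDPServer handles one message at a time, fine for a sketch;
    # a crowd-sized install would want the threaded or async variants.
    # BlockingOSCUDPServer(("0.0.0.0", PORT), dispatcher).serve_forever()

The display loop would then read the crowd dict each frame and decide what to draw and what sounds to make.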
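
And a back-of-the-envelope version of the texture-to-sound mapping from idea 2. The roughness values and the formulas are made up just to show the shape of the mapping; a real version would drive a synth (filtered noise makes a decent scraping sound) instead of printing numbers.

    # Map the surface roughness under a dragging finger, plus drag speed,
    # to simple sound parameters: loudness and brightness of a noise "scrape".
    # All the constants here are invented placeholders.

    def texture_to_sound(roughness, speed):
        """roughness: 0.0 (glass) to 1.0 (sandpaper); speed: finger velocity in px/s."""
        loudness = min(1.0, roughness * speed / 500.0)   # rougher + faster = louder
        brightness_hz = 200 + 4000 * roughness           # rougher = brighter scrape
        return loudness, brightness_hz

    # Dragging at constant speed from a smooth region onto a rough one:
    for roughness in (0.05, 0.05, 0.3, 0.7, 0.7):
        loudness, brightness = texture_to_sound(roughness, speed=300)
        print(f"roughness={roughness:.2f} -> loudness={loudness:.2f}, noise centered at {brightness:.0f} Hz")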