I've seen lots of weird research into "alternative" computer interfaces, but this one seems really, really interesting. It's a touch-panel interface that can recognize multiple contact points at once and do things based on the movement of those points. For example: if you touch opposite corners of an image and then pull your fingers apart, the image scales up. If you rotate your fingers, the image rotates. Or, depending on how the software is set up, you could zoom in.
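The gesture math behind that pinch-and-rotate behavior is pretty simple: track the distance and angle between the two contact points. Here's a minimal, hypothetical sketch (not from the actual NYU software) of how the scale and rotation amounts could be derived:

```python
import math

def distance(a, b):
    # Euclidean distance between two contact points (x, y)
    return math.hypot(b[0] - a[0], b[1] - a[1])

def angle(a, b):
    # Angle of the line through the two contact points, in radians
    return math.atan2(b[1] - a[1], b[0] - a[0])

def pinch_scale(start_a, start_b, cur_a, cur_b):
    # Scale factor: ratio of current to starting finger separation.
    # Fingers pulled apart -> factor > 1 (zoom in / scale up);
    # fingers pinched together -> factor < 1.
    return distance(cur_a, cur_b) / distance(start_a, start_b)

def rotation(start_a, start_b, cur_a, cur_b):
    # Rotation: change in the angle between the two fingers
    return angle(cur_a, cur_b) - angle(start_a, start_b)

# Fingers start 100 px apart and end 200 px apart: image scales 2x
print(pinch_scale((0, 0), (100, 0), (0, 0), (200, 0)))  # → 2.0
```

A real implementation would apply these incrementally each frame, but the core idea is just these two ratios.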
This NYU page has a QuickTime video showing some great proof-of-concept programs in action. The "Google Earth"-ish one looks especially fun. I want one!