Posts with tag: ideas
Computer interfaces are mostly sequential. Consider telephone menu systems: enter 1 for parts, enter 2 for service, etc. As another example, when you kill an unresponsive program, Windows XP pops up a dialog asking whether you want to send an error report to MS. You must respond to it before proceeding. An alternative user interface strategy (for both sighted and blind users) depends on asynchronous alerts and user responses. Think of the underlining of misspelled words in many editors; it appears sometime after typing, and the word can be corrected (or not) at any time. Emacspeak has some nice features like this. The presence of a footnote associated with a word is indicated by an audible signal played along with the speech for the word, without stopping. The listener can respond to the signal by requesting the footnote, or simply ignore it. A project investigating what is known about asynchronous user interfaces, perhaps with a prototype implementation, would be really interesting and would likely result in a paper.
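The contrast between modal dialogs and asynchronous alerts can be sketched in a few lines. This is a minimal illustration, not a real implementation; the `Alert` and `AlertChannel` names are invented for the example.

```python
import queue
import time
from dataclasses import dataclass, field

@dataclass
class Alert:
    """A non-modal notification the user may act on at any time."""
    message: str
    action: callable = None        # optional response, e.g. "read the footnote"
    created: float = field(default_factory=time.time)

class AlertChannel:
    """Asynchronous alternative to a modal dialog: alerts queue up
    instead of blocking the program, and the user responds (or not)
    whenever convenient."""
    def __init__(self):
        self._pending = queue.Queue()

    def post(self, alert):         # producer side: never blocks the task
        self._pending.put(alert)

    def pending(self):             # consumer side: collect without blocking
        items = []
        while not self._pending.empty():
            items.append(self._pending.get())
        return items

# A spell checker posts an alert instead of interrupting typing:
channel = AlertChannel()
channel.post(Alert("'recieve' may be misspelled",
                   action=lambda: "receive"))
alerts = channel.pending()
```

The key property is that `post` never waits for the user: the task continues, and the alert sits in the channel until the user asks for it, just as a misspelling underline sits in the buffer.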
Concept mapping programs are important literacy tools used by many schools. They are currently inaccessible to people who are only able to use one or two switches for input.
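The standard technique for one-switch access is timed scanning: the interface highlights each candidate item in turn, and the user presses the switch when the wanted item is lit. A minimal sketch of the selection logic (function name and parameters are hypothetical):

```python
def scanned_selection(num_items, scan_interval, press_time):
    """Single-switch scanning: the interface highlights each of
    num_items in turn for scan_interval seconds, cycling until the
    switch is pressed; return the index highlighted at press_time."""
    step = int(press_time // scan_interval)
    return step % num_items

# With 4 map nodes highlighted for 1.5 s each, a press at t = 5.0 s
# lands on the fourth node (index 3):
choice = scanned_selection(num_items=4, scan_interval=1.5, press_time=5.0)
```

A two-switch variant would use one switch to advance the highlight and the other to select, trading speed for timing pressure.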
Most video games are too hard for kids with cognitive difficulties. About the only approach currently available is to use things like “Game Genie” to “cheat”. We’d like an interesting and visually attractive computer game that emphasizes memory and has variable levels of difficulty. This will require an imaginative team willing to do some experimentation and willing to work with potential game players to get ideas. Of course, there are many kinds of impairment and one game will certainly not work for everyone. Our goal will be to make a game that is fun for one or two kids and see if it appeals to a larger audience.
People with impaired vision often use CCTV devices to enlarge printed text. How well could we simulate this with a cheap web cam, a milk crate, a strip light, and some software?
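The software side is mostly zoom plus contrast enhancement on each camera frame. A rough sketch with NumPy, operating on a greyscale array standing in for a webcam frame (the function and its parameters are illustrative, not a finished design):

```python
import numpy as np

def magnify(frame, zoom=4, black_on_white=True):
    """Crude CCTV-style magnifier: nearest-neighbour zoom on the centre
    of a frame, plus a contrast stretch.  `frame` is a 2-D greyscale
    array such as a cheap web cam might deliver."""
    h, w = frame.shape
    ch, cw = h // zoom, w // zoom
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = frame[y0:y0 + ch, x0:x0 + cw]
    big = crop.repeat(zoom, axis=0).repeat(zoom, axis=1)  # nearest neighbour
    lo, hi = big.min(), big.max()
    stretched = (big - lo) / max(hi - lo, 1) * 255        # contrast stretch
    if black_on_white:   # many low-vision readers prefer inverted video
        stretched = 255 - stretched
    return stretched.astype(np.uint8)

# A fake 64x64 "frame"; a real version would pull frames from the camera.
frame = np.tile(np.arange(64, dtype=np.uint8), (64, 1))
out = magnify(frame, zoom=4)
```

The milk crate and strip light handle what software can't: holding the camera steady at a fixed distance and lighting the page evenly.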
Develop a communication system that visually shows speech in some form to people who are deaf, without relying on a full-blown and often faulty speech recognition system. One idea would be to break the speech into phonemes, which could then be watched and assembled into words by the user.
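The "assembled into words" half could be aided by matching runs of phonemes against a pronouncing dictionary. A toy sketch, assuming ARPAbet-style phoneme symbols; the three dictionary entries are illustrative only:

```python
# Tiny stand-in for a pronouncing dictionary (ARPAbet-style symbols).
LEXICON = {
    ("HH", "EH", "L", "OW"): "hello",
    ("W", "ER", "L", "D"): "world",
    ("DH", "AH"): "the",
}

def assemble(phonemes):
    """Greedy longest-match segmentation of a phoneme stream into words;
    unmatched phonemes are passed through for the user to interpret."""
    words, i = [], 0
    while i < len(phonemes):
        for j in range(len(phonemes), i, -1):
            word = LEXICON.get(tuple(phonemes[i:j]))
            if word:
                words.append(word)
                i = j
                break
        else:                      # no entry matched: show the raw phoneme
            words.append(phonemes[i].lower())
            i += 1
    return words

stream = ["HH", "EH", "L", "OW", "W", "ER", "L", "D"]
result = assemble(stream)
```

Crucially, misrecognized or unmatched phonemes still reach the user as raw symbols, so the system degrades gracefully rather than guessing silently the way full speech recognition does.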
I have just begun to investigate the possibility of doing for 3D audio what image-based rendering does for computer graphics. I have an 8-microphone array and can record sound simultaneously at up to 96kHz with 24 bits per sample. I’d like to process the recorded sound to produce the parameters of a random process that produces sound with similar spatial and temporal statistics.
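One way to read "similar spatial and temporal statistics": estimate the spatial covariance of the 8 channels at each frequency from an STFT of the recording, then drive the Cholesky factor of each covariance with white Gaussian noise. A sketch of that idea, using random data in place of a real recording (function names are mine, not from any existing tool):

```python
import numpy as np

def spatial_covariance(stft):
    """Estimate the spatial covariance at each frequency bin from a
    multichannel STFT of shape (channels, freqs, frames)."""
    c, f, t = stft.shape
    return np.einsum("cft,dft->fcd", stft, stft.conj()) / t

def synthesize(cov, frames, rng):
    """Draw complex Gaussian STFT frames whose spatial covariance at
    each bin matches cov (shape (freqs, channels, channels))."""
    f, c, _ = cov.shape
    out = np.empty((c, f, frames), dtype=complex)
    for k in range(f):
        L = np.linalg.cholesky(cov[k] + 1e-9 * np.eye(c))  # regularize
        z = (rng.standard_normal((c, frames)) +
             1j * rng.standard_normal((c, frames))) / np.sqrt(2)
        out[:, k, :] = L @ z       # cov of L z is L L^H = cov[k]
    return out

rng = np.random.default_rng(0)
recorded = (rng.standard_normal((8, 16, 4000)) +
            1j * rng.standard_normal((8, 16, 4000)))
cov = spatial_covariance(recorded)
fake = synthesize(cov, 4000, rng)
cov2 = spatial_covariance(fake)    # should match cov closely
```

This captures inter-channel correlations but not temporal structure within a channel; a fuller model would also fit the modulation statistics over frames.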
Making a Braille embosser is really hard for CS types. We make software. So how can we use commodity devices and software to help children learn to read Braille? The method teachers use now is make a letter/feel a letter: the child writes by pressing the keys on the embosser and reads by feeling the result. This is comparable to sighted kids making the same letter over and over.
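The software half of such a tool is straightforward: letters map to six-dot cells, and Unicode has a Braille Patterns block starting at U+2800 with one bit per dot (dot 1 is bit 0, dot 2 is bit 1, and so on). A sketch covering a handful of letters:

```python
# Grade 1 Braille: letter -> raised dot numbers (a partial table).
DOTS = {
    "a": (1,), "b": (1, 2), "c": (1, 4), "d": (1, 4, 5),
    "e": (1, 5), "f": (1, 2, 4), "g": (1, 2, 4, 5), "h": (1, 2, 5),
    "i": (2, 4), "j": (2, 4, 5), "l": (1, 2, 3),
}

def cell(letter):
    """Return the Unicode Braille pattern for one letter: the code
    point is U+2800 plus one bit per raised dot."""
    bits = 0
    for dot in DOTS[letter]:
        bits |= 1 << (dot - 1)
    return chr(0x2800 + bits)

word = "".join(cell(ch) for ch in "bad")   # renders as three cells
```

On screen these characters give sighted teachers and parents a view of what the child is feeling, so commodity hardware (keyboard in, screen out) can carry much of the make-a-letter/feel-a-letter loop.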
I have the beginnings of a tool to convert humming to musical notes. A simple music composition system with speech-driven and/or simple keyboard commands would be very popular. You hum a bit of a tune, choose an instrument to play it, loop that, and it becomes the background for the next track. Then you hum something over that, choose an instrument, and you're on your way to a brilliant composition.
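The core of humming-to-notes, once a pitch tracker has produced a frequency, is snapping that frequency to the nearest equal-tempered note. A minimal sketch:

```python
import math

NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F",
              "F#", "G", "G#", "A", "A#", "B"]

def hum_to_note(freq_hz):
    """Snap a detected pitch to the nearest equal-tempered note.
    MIDI note 69 is A4 = 440 Hz; each semitone is a factor of 2^(1/12)."""
    midi = round(69 + 12 * math.log2(freq_hz / 440.0))
    name = NOTE_NAMES[midi % 12]
    octave = midi // 12 - 1        # MIDI note 60 is C4 ("middle C")
    return f"{name}{octave}"

note = hum_to_note(261.63)         # a hummed middle C
```

The rounding step is what makes the system forgiving: a hum 40 cents sharp still lands on the intended note, which matters for users who can't sing precisely in tune.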
Current implementations of DDR specify steps the user must match exactly to score points. Someone recently introduced matching silhouettes, but it is essentially the same thing. The step files are synchronized with the music using a simple offset and rate. Thus the music has to have a strictly fixed beat and (practically) has to be shipped with the game, resulting in copyright issues.
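The offset-plus-rate scheme amounts to a one-line mapping from beat numbers to audio time, which is exactly why it breaks on music with a varying tempo. A sketch of that mapping (names are mine; real step-file formats differ in detail):

```python
def step_time(beat, offset, bpm):
    """Map a step's beat number to a time in the audio track using the
    offset-plus-rate synchronization current step files rely on.  This
    only works if the music holds a strictly fixed tempo."""
    return offset + beat * 60.0 / bpm

# Steps on beats 0, 1, 2 of a 120 BPM track that starts 0.5 s in:
times = [step_time(b, offset=0.5, bpm=120) for b in range(3)]
```

Anything that tracked the beat from the audio itself, rather than assuming this linear map, could let players use their own music and sidestep the copyright problem.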