Computer interfaces are mostly sequential. Consider telephone menu systems: press 1 for parts, press 2 for service, and so on. As another example, when you kill an unresponsive program, Windows XP pops up a dialog asking whether you want to send an error report to Microsoft; you must respond to it before proceeding. An alternative user interface strategy (for both sighted and blind users) relies on asynchronous alerts and user responses. Think of the underlining of misspelled words in many editors: it appears sometime after typing, and the word can be corrected (or not) at any time. Emacspeak has some nice features like this. The presence of a footnote attached to a word is indicated by an audible signal played along with the speech for that word, without stopping. The listener can respond to the signal by requesting that the footnote be followed, or simply ignore it. A project investigating what is known about asynchronous user interfaces, perhaps with a prototype implementation, would be really interesting and would likely result in a paper.
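The contrast between a modal prompt and an asynchronous alert can be sketched roughly as follows. This is a minimal illustration, not any particular toolkit's API; the names (`AlertQueue`, `raise_alert`, `respond`) are invented for the example. The key property is that background checkers enqueue alerts without ever blocking the user's main activity, and the user polls for them whenever convenient:

```python
import queue

class AlertQueue:
    """Asynchronous alerts: events are queued and announced without
    interrupting the user's main task; the user may respond to a
    pending alert at a time of their choosing, or ignore it."""

    def __init__(self):
        self._pending = queue.Queue()

    def raise_alert(self, source, message):
        # Called by background checkers (spell-checker, footnote
        # scanner, ...). Non-blocking: the user is never forced to
        # answer before continuing, unlike a modal dialog.
        self._pending.put((source, message))

    def pending(self):
        # Snapshot of unanswered alerts, in arrival order.
        return list(self._pending.queue)

    def respond(self):
        # The user explicitly asks for the next alert, if any.
        try:
            return self._pending.get_nowait()
        except queue.Empty:
            return None

alerts = AlertQueue()
alerts.raise_alert("spell", "'recieve' looks misspelled")
alerts.raise_alert("footnote", "this word has a footnote")
# The user keeps working; later they poll at their leisure:
first = alerts.respond()  # the spell-check alert, oldest first
```

A real implementation would pair each enqueued alert with a non-blocking cue (an underline, an earcon played alongside speech) rather than a queue the user polls by hand, but the structural point is the same: the alert and the response are decoupled in time.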
T. V. Raman wrote to say that another example is Tetris in Emacspeak. His paper "Conversational Gestures for Direct Manipulation On the Audio Desktop" discusses it.