Stotts' Recent Work
In the past 18 months, our work has concentrated on
a few projects:
Smith PhD: Elemental Design Patterns and SPQR
This work has been supported by the U.S. Environmental Protection Agency's
Science to Achieve Results (STAR) program, grant #R-82795901.
Although the research described here has been funded in whole or in part by the
U.S. EPA, it has not been subjected to any EPA review and therefore does not necessarily
reflect the views of the Agency, and no official endorsement should be inferred.
One of my PhD students, Jason Smith, finished his PhD dissertation in December 2005.
The topic was automated discovery of design patterns in OO source code. The
approach is to determine which OO concepts are necessary and sufficient to define a
specific pattern, and then formalize those properties in the Rho Calculus
(a modified form of Cardelli's Sigma Calculus). We also define a set of base patterns,
called Elemental Design Patterns (EDPs), which formalize
the fundamental programming concepts of OO languages.
The formal structure is completed with inference rules that allow EDPs to be
composed into larger structures, as well as inference rules that allow
realized patterns (as found in code) to vary from the ideal patterns (as given in the
formal definitions). Finally, a toolset called SPQR has been implemented to
demonstrate the practical utility of the formal concepts. SPQR uses an automated theorem
prover (OTTER) to compose base patterns and demonstrate that
specific elements found in a program constitute a high-level design pattern
(or a valid variant of one).
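As a rough illustration of the composition idea (not SPQR's actual rho-calculus machinery or OTTER encoding; the fact and pattern names below are invented), a single inference step over facts extracted from code might look like:

```python
# Illustrative sketch only: SPQR's real formalism is the Rho Calculus driven by
# the OTTER theorem prover. Here one toy rule infers a hypothetical "Delegate"
# elemental pattern from (relation, subject, object) facts about the code.

# Facts extracted from imaginary source code.
facts = {
    ("calls", "Button.click", "Handler.run"),   # Button.click invokes Handler.run
    ("holds", "Button", "Handler"),             # Button has a field of type Handler
}

def delegate_instances(facts):
    """Toy inference rule: if a class holds a reference to another class
    and one of its methods calls a method of that class, infer the
    'Delegate' elemental pattern between them."""
    found = []
    for rel, caller, callee in facts:
        if rel != "calls":
            continue
        c1, c2 = caller.split(".")[0], callee.split(".")[0]
        if ("holds", c1, c2) in facts:
            found.append(("Delegate", c1, c2))
    return found

print(delegate_instances(facts))  # [('Delegate', 'Button', 'Handler')]
```

In SPQR itself, such rules are theorem-prover inferences rather than hand-written loops, which is what lets realized patterns vary from the ideal definitions while still being recognized.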
UNC has a patent pending on this technology.
Early results from this work appeared in the IEEE Automated Software Engineering
conference, October 2003, Montreal: Smith, J., and D. Stotts, "SPQR: Flexible
Automated Design Pattern Extraction from Source Code,"
pp. 215-224 (22 of 170 papers accepted, 13% acceptance rate).
We have one book chapter accepted for publication,
"Elemental Design Patterns and Compositional Detection Methods for Object
Oriented Source Code," to appear in Design Pattern Formalization Techniques,
T. Taibi (ed.), Idea Group, Inc., Fall 2006.
A paper extending this work to software architecture analysis,
"Extending SPQR to Architectural Analysis by Semi-Automated Training," was presented at the
Working IEEE/IFIP Conference on Software Architecture (WICSA), Pittsburgh,
PA, Nov. 6-10, 2005.
We have two journal papers submitted from this work.
One covers the basic formalism and the concepts of composing EDPs,
and has been submitted to ACM Transactions on Software Engineering.
The other uses the EDP formalism to show that the Gang of Four OO design patterns
are minimal in the sense of the information-theoretic concept of Minimum
Description Length (MDL).
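To illustrate the MDL intuition behind that argument (with entirely invented symbol counts and alphabet sizes, not the paper's actual formalization), a two-part code length L(model) + L(data | model) can be compared for a structure described directly versus via a named pattern:

```python
import math

# Toy two-part MDL comparison; all counts and alphabet sizes are invented
# for illustration and are not the paper's actual measurements.

def code_len(num_symbols, alphabet_size):
    """Bits to encode num_symbols symbols drawn uniformly from an alphabet."""
    return num_symbols * math.log2(alphabet_size)

# Encoding 12 inter-class relationships directly, from a 16-relation alphabet:
direct = code_len(12, 16)

# Encoding the same structure as one named pattern plus 4 role bindings,
# with the pattern drawn from a 32-pattern catalog and roles from 16 names:
with_pattern = code_len(1, 32) + code_len(4, 16)

print(direct, with_pattern)  # direct = 48.0 bits, with_pattern = 21.0 bits
```

When the pattern-based description is shorter, the pattern is carrying real compressive (and hence descriptive) power, which is the sense of "minimal" at issue.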
Facetop: Ongoing Development and User Trials
Facetop is a novel user interface concept with both single-user
and collaborative applications.
Early results, on how virtual hyperlinks can be embedded in the video space
of the user's environment, were published at the 2004 ACM Hypertext Conference:
D. Stotts, J. Smith, and K. Gyllstrom,
"FaceSpace: Endo- and Exo-Spatial Hypermedia in the Transparent Video Facetop,"
Santa Cruz, Aug 15-18, pp. 48-57 (25% acceptance rate).
UNC has a patent pending on this technology.
Best paper award:
Our paper at the 2004 XP/Agile Universe Conference won the best paper award:
D. Stotts, J. Smith, and K. Gyllstrom, "Support for Distributed Pair
Programming in the Transparent Video Facetop," Calgary, Aug 15-18, pp. 92-104
(25% acceptance rate).
Washington Hospital Center Emergency Medicine trials
We have been funded by the Emergency Medicine Department at
the Washington Hospital Center to develop a Windows version
of Facetop that allows physicians to collaborate remotely
on medical records.
This project began in September 2005 and is only now yielding
publishable material. We have a basic Windows version working and
are adding functionality specific to the needs of the emergency medicine
domain. For example, the video images of the collaborating physicians
look very similar to video medical data such as sonograms and radiographs,
so we have developed high-contrast video renderings that allow the eye to
distinguish user video from data.
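One plausible form such a high-contrast rendering could take is a hard threshold to a two-tone image; the sketch below is an illustration of that idea, not the renderings actually used in the project:

```python
import numpy as np

# Minimal sketch of one plausible high-contrast treatment: a hard threshold
# maps the user's video to pure black/white, so it reads as a stylized
# silhouette distinct from continuous-tone medical imagery underneath.
# The threshold value is an assumption for illustration.

def high_contrast(gray, threshold=128):
    """Map a grayscale frame to a two-tone (0/255) rendering."""
    return np.where(gray >= threshold, 255, 0).astype(np.uint8)

frame = np.array([[30, 200], [140, 90]], dtype=np.uint8)
print(high_contrast(frame))  # [[  0 255]
                             #  [255   0]]
```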
Tablet PC Facetop for Hearing Impaired Users
We have developed a derivative Facetop application that allows hearing-impaired
users to take notes in class on a Tablet PC. The camera is pointed at the
student's signing interpreter, and the image of the interpreter is placed
semi-transparently on the screen of a Tablet PC. The student may then
take hand-written notes while watching the ASL spoken by the interpreter.
The basic concept is illustrated in this
Technical Report.
We are preparing a submission on this project for the 2006 ACM ASSETS
Conference on Assistive Technology (Portland, October 2006).
We will be augmenting the technical report with
user data now being collected in trials with hearing-impaired students
at UNC and at East Carolina University.
Miller PhD: Assistive Technology for Remote Collaboration
Another PhD student, Dorian Miller, is within a year of completing his
research in assistive technology for users with hearing and visual impairments.
He is supported by an IBM Dissertation Fellowship.
Miller has chosen a very challenging problem to solve -- to create user interface
concepts and supporting system implementations that will allow people with
audio and visual impairments to participate in synchronous remote collaborations.
Using a modified Facetop interface, Miller has shown that hearing-impaired users
can collaborate on shared computing activities such as document editing,
game playing, and diagram creation; deaf collaborators employ the
semi-transparent video overlay as a communication medium (signing and lip reading)
while their gaze remains centered on the work being discussed.
They also retain the normal advantages of Facetop -- namely, the ability to
point and gesture at the shared work.
Miller's experiments have shown this approach to work effectively.
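The compositing step at the heart of the semi-transparent overlay can be sketched as per-pixel alpha blending; the opacity value and frame shapes below are illustrative assumptions, not Facetop's internals:

```python
import numpy as np

# Sketch of the core compositing behind a semi-transparent overlay:
# per-pixel alpha blending of the user video over the shared desktop.
# The alpha value and tiny 2x2 "frames" are assumptions for illustration.

def blend(desktop, video, alpha=0.3):
    """Composite the user video over the desktop at opacity alpha, so
    collaborators see both the work and each other's faces and gestures."""
    out = alpha * video.astype(float) + (1 - alpha) * desktop.astype(float)
    return out.astype(np.uint8)

desktop = np.full((2, 2), 200, dtype=np.uint8)  # bright document area
video = np.full((2, 2), 40, dtype=np.uint8)     # darker face pixels
print(blend(desktop, video))  # every pixel: 0.3*40 + 0.7*200 = 152
```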
He is also involved in the Tablet PC version that supports note taking for
students who need an ASL interpreter.
We are preparing a paper on these results for the 2006 ACM ASSETS
Conference on Assistive Technology (Portland, October 2006).
The other part of Miller's work is supporting visually impaired users in
remote collaboration. This is a harder problem, so he is developing
a more task-specific solution -- a system called DeepView. DeepView
presents an audio interface that allows blind collaborators to edit,
explore, and understand diagrams (such as UML notations).
User trials are just beginning and will continue through Fall 2006.
Gyllstrom PhD: Multi-user Facetop and User Task Inference
Another PhD student, Karl Gyllstrom, is beginning his work. We have been
developing a problem area and are now moving forward with research that
extends the basic Facetop idea to multiple users in larger group collaborations.
This research will leverage a hardware project in our department that is
installing wall-size projected displays in 20 offices. These 20 large "desktops"
will be connected via a dedicated video network and can be assembled
into virtual collaborative spaces, combining the desktop contents of several
users into a shared projected space.
One property of such a large projected workspace is the large number of windows
it will contain. Gyllstrom is developing algorithms for inferring task groups of windows
that can be opened, closed, and moved as groups. Inference is done from
user activity traces, in which actions in related windows fall into
temporal clusters.
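One plausible reading of that grouping step is gap-based clustering of a time-ordered event trace; the event format and gap threshold below are assumptions for illustration, not Gyllstrom's actual algorithms:

```python
# Sketch of gap-based temporal clustering of window-activity events.
# The 30-second threshold and (timestamp, window_id) format are invented.

def task_groups(events, max_gap=30.0):
    """events: list of (timestamp_seconds, window_id), sorted by time.
    Start a new task cluster whenever the gap between consecutive events
    exceeds max_gap; return the set of windows active in each cluster."""
    clusters = []
    current, last_t = set(), None
    for t, win in events:
        if last_t is not None and t - last_t > max_gap:
            clusters.append(current)
            current = set()
        current.add(win)
        last_t = t
    if current:
        clusters.append(current)
    return clusters

events = [(0, "editor"), (5, "browser"), (12, "editor"),
          (120, "mail"), (130, "calendar")]
print(task_groups(events))  # [{'editor', 'browser'}, {'mail', 'calendar'}]
```

Windows that land in the same cluster could then be opened, closed, or moved together as one inferred task group.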