Research Area Descriptions
The Department of Computer Science is actively engaged in research
on a broad range of topics. Following are descriptions of the
principal areas of computer science in which our graduate
faculty members conduct research.
Algorithms and Complexity Theory
Computation can be studied both experimentally, by building prototype
systems, and mathematically, by investigating the capabilities and limits
of the mathematical abstractions of algorithms and the problems that
they are asked to solve. Research in algorithms and complexity theory
tends to take the mathematical approach, even when problems are motivated
by practical applications or when implementations and experimentation
are used to test the capabilities and limits of mathematical abstractions.
Projects in this area range from algorithms for geometric computation
to scheduling in real-time systems.
See also: Computer Graphics, Image Analysis
and Computer Vision; Real-Time Systems
Assistive Technology
Bioinformatics and Computational Biology
Bioinformatics
is concerned with the interpretation and analysis of the large quantities
of data being generated by biological experiments. These data include
the genetic sequence (genome) of various species, the expression of
proteins under the direction of the genome in response to a variety
of experimental conditions, and the observed incidence of disease in
a group of individuals, to name just a few examples. From a computer
science point of view, this analysis requires the development of new
algorithms for data mining, statistics, and modeling, all of which must
operate in the presence of noisy or incomplete data.
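As a small illustration of the kind of algorithm involved, the sketch below computes the Levenshtein edit distance between two short DNA strings, a classic dynamic-programming kernel underlying many sequence-comparison methods. The sequences are invented examples, and the sketch does not describe any specific project in the department.

    # Illustrative sketch only: Levenshtein edit distance, a standard
    # dynamic-programming kernel for comparing sequences.
    def edit_distance(a: str, b: str) -> int:
        """Minimum number of single-character insertions, deletions,
        and substitutions needed to turn a into b."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                cost = 0 if ca == cb else 1
                curr.append(min(prev[j] + 1,          # delete ca
                                curr[j - 1] + 1,      # insert cb
                                prev[j - 1] + cost))  # substitute or match
            prev = curr
        return prev[-1]

    print(edit_distance("GATTACA", "GACTATA"))  # -> 2 (two substitutions)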
Computational Biology is
concerned with elucidation of a protein's 3-dimensional structure and
its relation to the protein's function. It combines insights from biochemistry
with techniques in geometric modeling and computation, as well as computational
methods for the simulation of a protein's dynamic behavior based on
molecular and quantum mechanical models.
See also: Databases
and Data Mining; Geometric Modeling and
Computation; Parallel Computing
Computer Architectures
The study of computer architectures is concerned with the selection
of the internal components of a computer and the design of their arrangement
and connectivity. The key to understanding any computer architecture
lies in the way trade-offs are made to optimize certain characteristics
at the expense of others, subject to budgetary and physical constraints.
Computer architectures are often optimized to solve particular kinds
of problems, such as rendering computer graphics images, signal analysis,
or string comparison. Much current work in this area involves the design
of parallel computer systems optimized for particular kinds of problems.
See also: Computer Graphics, Image Analysis
and Computer Vision; Hardware Systems
and Design; Parallel Computing
Computer Graphics, Image Analysis and Computer Vision
Computer graphics systems produce the images that are the visual interface
between people and computers. Our research concerns interactive graphics
where the main challenges are the rapid generation of high-quality images
in response to user inputs and the development of mechanisms for human
interaction with the graphics system. We provide users with convincing,
interactive, often immersive experiences with 3D data sets. We work
in all the technologies required for these synthetic environments: image-generation
hardware, display devices, tracking, model acquisition and management,
image-generation algorithms, and interaction techniques, including haptic
devices.
Image-analysis
systems measure properties of image objects or prepare
image data for presentation by a graphics system. Our research focuses
on the rapid and accurate identification and measurement of objects in 2D
and 3D medical images (organs, blood vessels, tumors), combining and
presenting information obtained from different imaging modalities, and
matching images to models or atlases of the structures we expect to
find in the images.
See also: Computer Architectures;
Geometric Modeling and Computation
Computer-Supported Cooperative Work
The design of computer applications and systems has traditionally assumed
that a single user interacts with a computer program at any one time.
The field of computer-supported cooperative work (CSCW) addresses the
design, implementation, and use of programs that allow distributed
users--people working together from separate locations--to collaborate
with each other on a common task such as creating and editing a document
or developing a program. To enable multiple, distributed users to communicate
simultaneously with a program requires the rethinking of basic concepts
in operating and distributed systems, programming environments, user-interface
frameworks, software engineering, and transaction models. Thus, research
in CSCW tends to extend and to integrate concepts from diverse subfields
of computer science and from other disciplines such as social science
and psychology. Research in CSCW at UNC-Chapel Hill includes augmenting
existing software systems such as World Wide Web servers and browsers
to support collaborative work; supporting real-time, interpersonal,
multimedia communications; composing shared virtual
environments for collaboration in immersive virtual worlds; and designing
a common software framework for collaborative applications.
See also: Distributed Systems; Multimedia
Systems; Networking; Real-Time
Systems
Databases and Data Mining
Distributed Systems
Distributed systems research concerns the design and implementation
of computer system services on groups of computers that communicate
by exchanging messages across an internetwork. Examples of such services
include file services, remote computation services, information retrieval
services, and communication services such as electronic mail. Research
in this area considers the trade-offs between the features of a service
and its performance and cost on issues such as: (1) consistency of performance
regardless of where the computation supporting a service is located,
(2) the viability of the service as the number of computers or the physical
span of the distributed system grows, and (3) the system's ability to
maintain reliable service in spite of the failure of some of its components.
At UNC-Chapel Hill, research ranges from theoretical investigations
of distributed algorithms for process coordination and synchronization
to system building efforts involving the construction of a next-generation
Web server, hypermedia file systems, and network communication protocols
for real-time, multimedia-based communications.
See also: Computer-Supported Cooperative
Work; Hypertext;
Networking; Parallel Computing;
Real-Time Systems
Geometric Modeling and Computation
Geometric modeling is concerned with representing and manipulating objects
in preparation for graphical rendering, manufacturing, computer-aided
design, or physical simulation. Objects can be represented by their
boundaries or by construction from simpler shapes. Finite-element analysis
can be used to represent physical properties of objects (mass, density,
rigidity, flexibility). Geometric models are sometimes created at multiple
resolutions to support interactive graphics, efficient network transmission,
or rapid object manipulation.
Computational geometry provides algorithmic foundations and analytic
tools for solving geometric problems encountered in many fields of
science and engineering. Problems include computing convex hulls,
locating intersections of 3D objects, efficiently fitting a surface
with triangular patches, locating exactly the point on an object
having a particular property (e.g., highest, frontmost, closest to a
given point or object in space), proximity problems, detecting
collisions between objects, and approximating smooth surfaces. We
study both the theoretical computational complexity of these problems
and the techniques for efficiently and robustly computing their
solutions, exactly or approximately.
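As a concrete instance of one problem named above, the sketch below computes a planar convex hull using Andrew's monotone-chain algorithm, a standard textbook method rather than a description of our projects. It runs in O(n log n) time, dominated by the initial sort.

    # Sketch: convex hull of a finite 2D point set (monotone chain).
    def cross(o, a, b):
        """Z-component of (a - o) x (b - o); positive for a left (CCW) turn."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def convex_hull(points):
        pts = sorted(set(points))
        if len(pts) <= 2:
            return pts

        def half_hull(seq):
            hull = []
            for p in seq:
                # Pop while the last two kept points and p fail to turn left.
                while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                    hull.pop()
                hull.append(p)
            return hull

        lower = half_hull(pts)
        upper = half_hull(reversed(pts))
        return lower[:-1] + upper[:-1]   # drop duplicated endpoints

    print(convex_hull([(0, 0), (1, 1), (2, 2), (2, 0), (0, 2), (1, 0)]))
    # -> [(0, 0), (2, 0), (2, 2), (0, 2)], counter-clockwise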
See also: Computer Graphics, Image Analysis
and Computer Vision
Hardware Systems and Design
Electronic digital computers are the main but not the only technology
for information processing. Information occurs in many forms, most of
which are not digital and often not even electrical. Correspondingly,
many applications require systems that include rather disparate technologies
in addition to computers and software. Therefore, we adopt an unusually
broad doctrine of information-processing systems in which the needs
of an application define the technologies needed for system design.
This doctrine is embodied for example in the HiBall
Tracker which comprises infrared optics, LEDs, photo detectors,
analog circuits, A-to-D conversion, digital circuits, communications
protocols, a synchronizing control processor, I/O driver, host PC, the
SCAAT algorithm, and system and application software. Another example
is the 3D
Force Microscope which comprises a similarly rich mix of technologies
and disciplines. Moreover, when the computational performance needs
of a system cannot be met by software on general-purpose machines, it
may become necessary to reconsider the computing hardware itself. Customized
hardware systems can offer dramatic performance improvements. Examples
of such hardware built here are the Pixel-Planes, PixelFlow, and BioSCAN
systems. This research doctrine is also incorporated in parts of our
hardware systems curriculum.
See also: Computer Architectures; Computer
Graphics, Image Analysis and Computer Vision
Human-Machine Interaction
Humans have marvelous capabilities for pattern recognition and strategic
judgment that computers cannot equal. Computers have marvelous capabilities
for data transmission and manipulation and huge, exact data storage
that humans cannot equal. Powerful systems for solving difficult problems
can arise from combining the capabilities of human operators and computer
systems. Research in human-machine interaction aims to devise hardware
input and output devices, including new kinds of sensors and displays,
communication protocols, interaction idioms, data manipulation algorithms,
display and visualization formats, and monitoring and evaluation methods.
These developments aim to make the combined human-computer system a
more effective problem-solver by providing: (1) information most effectively
to the sensory apparatus of the human operator, (2) natural, and thus
easily remembered and reliably performed, interaction sequences for
the human operator, and (3) fast, effective guidance for the computations
performed by the computer system. This research area permeates the work
of the two largest laboratories in our department: the Graphics and
Image Lab and the Collaboratory.
See also: Computer Graphics, Image Analysis
and Computer Vision; Computer-Supported Cooperative
Work; Distributed Systems
Hypertext
Hypertext refers to a network of linked, computer-based documents that
allows a user to select a highlighted region called an anchor to obtain
another document. This second document may consist of text, audio, or
video and may contain further links. When the user selects a link, the
referenced document is retrieved (often via the Internet from a remote
computer system) and is displayed on the user's screen by a program
called a browser. The World Wide Web is a well-known example of a hypertext.
Netscape Navigator and Microsoft Internet Explorer are examples of browser
programs. Research in hypertext includes studies of how to organize
information, how to handle requests for hypertext documents rapidly
and reliably, how to maintain hypertexts, how to monitor usage of a
hypertext, and how to enhance the current hypertext protocols to support
more kinds of documents, such as documents containing mathematical notations.
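To make the anchor-and-link structure concrete, here is a minimal sketch that extracts link targets from a fragment of HTML using Python's standard html.parser module; the markup and URLs below are invented examples, and this is the first step a crawler or link-maintenance tool might take.

    # Sketch: collect the href targets of anchor elements in an HTML page.
    from html.parser import HTMLParser

    class LinkCollector(HTMLParser):
        def __init__(self):
            super().__init__()
            self.links = []

        def handle_starttag(self, tag, attrs):
            if tag == "a":                        # an anchor element
                for name, value in attrs:
                    if name == "href" and value:
                        self.links.append(value)  # the referenced document

    page = ('<p>See the <a href="/research/index.html">project index</a> and '
            '<a href="http://www.example.edu/">a remote page</a>.</p>')
    collector = LinkCollector()
    collector.feed(page)
    print(collector.links)  # ['/research/index.html', 'http://www.example.edu/']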
See also: Computer-Supported Cooperative Work;
Distributed Systems
Mobile Computing
The Monte Carlo Method
Many problems are so large or complex that solving them completely by
any of the standard methods would require much more computing time than
is available. If we relax the demand to solve the problem exactly, however,
it is sometimes possible to obtain a solution that is within a few percent
of the optimal or correct solution after a reasonably small amount of
computation. Stochastic algorithms often obtain such rapid but approximate
solutions using a directed random search. One particularly useful stochastic
algorithm is the Monte Carlo method. Instead of trying to solve for
the answer directly, this method sets up a random process whose mean
is the correct answer to the problem. By running such (possibly sequentially
improving) processes many times, better and better estimates of the
correct answer to the original problem are obtained. Research in this
area is concerned with how to formulate efficient stochastic variants
of the original problem, how to generate instances of the random process
quickly, and how to measure how well the Monte Carlo process has approximated
the correct answer. An associated field is the study of the fast, algorithmic
generation of "random" (pseudo-random, quasi-random) samples for such
Monte Carlo computations.
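A minimal classical example of the method is estimating pi by sampling random points in the unit square: each trial is 1 if the point lands inside the quarter circle and 0 otherwise, so the mean of the random process is pi/4 and four times the sample mean converges to pi.

    # Sketch: a random process whose mean is pi/4, sampled to estimate pi.
    import random

    def estimate_pi(trials: int) -> float:
        """Fraction of random points in the unit square falling inside
        the quarter circle x^2 + y^2 <= 1, scaled by 4."""
        hits = sum(1 for _ in range(trials)
                   if random.random() ** 2 + random.random() ** 2 <= 1.0)
        return 4.0 * hits / trials

    for n in (1_000, 100_000, 1_000_000):
        # The error of the estimate shrinks roughly as 1/sqrt(n).
        print(n, estimate_pi(n))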
Multimedia Systems
Computers and networks can manipulate data representing text, graphics,
images, audio, video, and even more exotic media. Each form of data
demands different capabilities of the computer systems and networks
through which it passes, including high throughput, reliable transmission
rate, easy manipulation, high reliability, and fast update. Satisfying
these diverse requirements is a difficult challenge both for the computer
systems at the transmission ends and for the software controlling
the Internet. Our Multimedia Systems research is concerned with how
to use processing power at the sending and receiving ends to decrease,
or at least to even out, the demands on the network so that the real-time
data stream can be presented faithfully to the receiver. Other aspects
of this research include how data in different media can be abstractly
organized and seamlessly integrated to present unified multimedia documents
and visualizations; how static concepts such as hyperlinks can be realized
in dynamic media like video; and how media can be manipulated to disambiguate
the actions of different users in a collaborative environment. In addition,
this research helps to define the capacity and features that will be
needed in the next generation of the Internet.
See also: Computer-Supported Cooperative Work;
Networking;
Real-Time Systems
Networking
The current implementation of the Internet was not designed to handle
the kinds of multimedia traffic that people want to send across it.
Audio and video transmission make different demands on the network than
more traditional e-mail or Web pages. What is needed are mechanisms
in the software systems controlling the Internet to allocate the network's
capacity for different uses and to guarantee a suitable level of service
to data streams that have real-time requirements. Without such service
level guarantees, real-time audio or video transmissions are subject
to unpredictable interruptions and dropouts due to network congestion.
Research in networking technology aims to modify the hardware and software
systems that operate the Internet to increase capacity and to allow
for quality-of-service guarantees that support multimedia communications.
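One standard building block for this kind of capacity allocation is the token bucket, which caps a flow's sustained rate while permitting bounded bursts. The sketch below is a generic illustration with invented parameters, not a description of the department's protocol work.

    # Sketch: a flow may send a packet only when enough tokens (bytes of
    # credit) have accumulated; tokens refill at a fixed rate up to a cap.
    class TokenBucket:
        def __init__(self, rate: float, burst: float):
            self.rate = rate        # tokens added per second (bytes/s)
            self.capacity = burst   # maximum tokens the bucket can hold
            self.tokens = burst
            self.last = 0.0         # time of the previous update, in seconds

        def allow(self, now: float, packet_bytes: int) -> bool:
            """True if a packet of the given size may be sent at time now."""
            elapsed = now - self.last
            self.tokens = min(self.capacity, self.tokens + elapsed * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True
            return False

    bucket = TokenBucket(rate=125_000, burst=2_000)  # ~1 Mbit/s, 2 KB burst
    print(bucket.allow(0.000, 1500))  # True: burst credit available
    print(bucket.allow(0.001, 1500))  # False: only ~625 tokens accumulated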
See also: Computer-Supported Cooperative Work;
Multimedia Systems;
Real-Time Systems
Parallel Computing
Parallel computing is concerned with the development and programming
of computer systems containing multiple processors. The objective is
to create systems that can achieve performance far greater than any
single-processor computer system. Maximum performance requires the use
of new algorithmic, analytical, and system implementation techniques
in ways that are specific to each parallel computer architecture.
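As a toy illustration of the basic idea, and not of any particular architecture studied here, the sketch below splits a large summation into chunks evaluated concurrently by worker processes using Python's standard multiprocessing module.

    # Sketch: divide a computation across multiple processors.
    from multiprocessing import Pool

    def partial_sum(bounds):
        lo, hi = bounds
        return sum(i * i for i in range(lo, hi))

    if __name__ == "__main__":
        n, workers = 1_000_000, 4
        step = n // workers
        # Split [0, n) into one half-open chunk per worker.
        chunks = [(k * step, n if k == workers - 1 else (k + 1) * step)
                  for k in range(workers)]
        with Pool(workers) as pool:
            total = sum(pool.map(partial_sum, chunks))
        print(total == sum(i * i for i in range(n)))  # True: chunks partition the range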
See also: Computer Architectures;
Distributed Systems;
Programming Language Design and Implementation
Programming Language Design and Implementation
In the study of programming languages we are concerned with features
that allow us to express complex programs reliably. For example:
- Data-parallel constructs can be used to describe repeated computations
in a succinct fashion and also provide opportunities for high-performance
implementation (see the sketch after this list).
- Object-oriented features of programming languages, such as Java
and C++, can be used to describe, extend, and reuse recurring design
patterns.
- Declarative programming languages, such as Prolog, can be used
to describe complex symbolic computations, such as theorem provers.
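The sketch promised above contrasts an explicit loop with a data-parallel expression of the same computation; NumPy is used here purely as a convenient vehicle for whole-array notation, not as a claim about the languages we study.

    # Sketch: loop form versus data-parallel (vectorized) form.
    import numpy as np

    a = np.arange(100_000, dtype=np.float64)
    b = np.arange(100_000, dtype=np.float64)

    # Loop form: one element at a time, with an order imposed by the program.
    c_loop = np.empty_like(a)
    for i in range(len(a)):
        c_loop[i] = a[i] * 2.0 + b[i]

    # Data-parallel form: the whole-array expression leaves the implementation
    # free to compute elements in any order, including in parallel.
    c_vec = a * 2.0 + b

    assert np.array_equal(c_loop, c_vec)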
A clear understanding of programming languages is
also required to create programs that process other
programs. In the construction of collaborative
systems, for example, it is important to identify
domain-specific design patterns involving components
of these systems, and to automatically generate or
modify some of these components.
At UNC-Chapel Hill, research in programming
language implementation centers on high-performance
execution, which is required for many scientific and
real-time problems. Achieving high-performance
execution is complicated by the increasing divergence
between the high-level features of programming
languages and the low-level performance-oriented
features of modern computer architectures, such as
multiple functional units and multi-level caches. Our
research targets efficient parallel execution and
effective use of the memory hierarchy, using
techniques applied at compile time and in execution.
See also: Computer Architectures;
Computer-Supported Cooperative Work;
Distributed Systems;
Parallel Computing;
Software Engineering and Environments;
Theorem Proving and Term Rewriting
Real-Time Systems
Real-time systems are a class of computer systems that interact with
the real world in a time frame that is defined by processes in the external
environment. A typical real-time system is one that must obtain an input
from an external device such as a sensor and generate an output in a
bounded amount of time. This output must be provided on time, regardless
of other activities contending for the processing resources of the system.
Real-time systems research studies the mathematical problem of allocating
computer system resources such as processing capacity, memory space,
and network bandwidth so that the competing demands of multiple, simultaneously executing
applications can be met. Based on this theory, UNC-Chapel Hill researchers
are building real-time operating systems, databases, and communication
protocols, and are demonstrating how these systems can be used to solve
problems in collaborative computing, multimedia communications, and
distributed virtual environments.
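One classical piece of this mathematics is the Liu and Layland utilization bound: a set of independent periodic tasks, each with deadline equal to its period, is schedulable by earliest-deadline-first on one processor exactly when total processor utilization is at most 1. The sketch below applies the test to three invented tasks.

    # Sketch: utilization-based schedulability test for EDF on one processor.
    def edf_schedulable(tasks):
        """tasks: iterable of (cost C, period T) pairs in the same time units.
        Each task needs up to C time units of processor every T time units."""
        return sum(c / t for c, t in tasks) <= 1.0

    # Hypothetical tasks, e.g. a 10 ms computation required every 40 ms.
    tasks = [(10, 40), (15, 60), (20, 50)]
    print(sum(c / t for c, t in tasks))  # 0.9: total utilization
    print(edf_schedulable(tasks))        # True: the task set fits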
See also: Computer-Supported Cooperative Work;
Multimedia Systems
Software Engineering and Environments
Software engineering is concerned with the creation of programming system
products that meet client specifications as well as physical, regulatory,
budgetary, and other constraints. Software engineering environments
are software systems that help software engineers to document and communicate
their system's architecture and current status; develop their products,
including code and documentation, test suites, and demonstration versions;
manage the whole endeavor with schedules, plans, progress reports, critical
path charts, deadlines, milestones, and version control features; and
evaluate the products via testing and coverage tools, notations for
proving correctness of components, and programming language features
that support construction of new components from existing components.
See also: Programming Language Design
and Implementation
Theorem Proving and Term Rewriting
In many artificial intelligence systems, the facts that the system knows
about the world are represented as assertions, declarative statements
about objects or relationships among objects, and rules of inference--procedures
that describe how to derive correct new statements from known statements.
When the system needs to determine the validity of some assertion that
is not explicitly stored in its knowledge base, the system treats the
assertion as a theorem to be proved from the known facts using the rules
of inference. Theorem proving, then, is the mechanism by which such
a system reasons. A term rewriting system is a particular implementation
of such a system in which all of the knowledge relevant to a problem
is stored as a set of equations. Theorems are proved by replacement
of equal expressions and by unification to attempt to reduce the assertion
to a trivial true statement. If this reduction can be completed, the
assertion is accepted as true; otherwise it may be false, or it may
require a more complete description of the world to determine its validity.
Research in this area involves the development of efficient theorem-proving
strategies, methods for analyzing the complexity of theorem provers,
methods for incorporating semantics in theorem provers, methods for
implementing theorem proving efficiently, and applications of theorem
provers.
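The toy sketch below illustrates the flavor of such a system; it is not one of the provers studied here. The knowledge base is two equations defining addition on unary numbers, used left to right, and an assertion such as 1 + 1 = 2 is accepted when both sides rewrite to the same normal form.

    # Toy rewriting system, for illustration only. Terms are tuples such as
    # ("plus", t1, t2) or ("zero",); bare strings like "x" are pattern variables.
    RULES = [
        (("plus", "x", ("zero",)), "x"),                                # x + 0 = x
        (("plus", "x", ("succ", "y")), ("succ", ("plus", "x", "y"))),   # x + s(y) = s(x + y)
    ]

    def match(pattern, term, env):
        """Bind pattern variables to subterms of term, or return None."""
        if isinstance(pattern, str):                      # a variable
            if pattern in env and env[pattern] != term:
                return None
            return {**env, pattern: term}
        if not (isinstance(term, tuple) and len(term) == len(pattern)
                and term[0] == pattern[0]):
            return None
        for p, t in zip(pattern[1:], term[1:]):
            env = match(p, t, env)
            if env is None:
                return None
        return env

    def subst(term, env):
        """Replace each variable in term by its binding in env."""
        if isinstance(term, str):
            return env[term]
        return (term[0],) + tuple(subst(t, env) for t in term[1:])

    def normalize(term):
        """Rewrite innermost-first with RULES until no rule applies."""
        term = (term[0],) + tuple(normalize(t) for t in term[1:])
        for lhs, rhs in RULES:
            env = match(lhs, term, {})
            if env is not None:
                return normalize(subst(rhs, env))
        return term

    one = ("succ", ("zero",))
    two = ("succ", one)
    # The assertion 1 + 1 = 2 reduces to a trivially true statement:
    print(normalize(("plus", one, one)) == normalize(two))  # True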
See also: Programming Language Design
and Implementation