Project Proposal
I will research how methods from a theory known as Space Syntax can be used to guide multi-agent simulations where the shortest path is not entirely known. I plan to extend the existing UNC multi-agent simulator to provide “vision” to agents that will help them (1) discover new paths, (2) avoid collisions by using early avoidance mechanisms, and (3) avoid large crowds of agents if desirable in a given scenario (such as an evacuation). These changes should integrate well into the existing baseline and should not preclude the use of the recently added proxy agent behaviors.
Motivations and Background:
Real people rarely take the shortest path between two points in an environment. One can theorize many reasons that might prevent a person from following the shortest path. A person may simply be unaware of it; physical obstacles may block or redirect movement; or human social factors such as "personal space" may need to be respected. Yet people still usually manage to get where they want to go. How are these forces overcome, and how do people correct for them? The primary sense people use is vision, and this is where my final project will focus.
I will be researching the architectural concept of Space Syntax and how it may be integrated into a multi-agent simulator. In its most general form, Space Syntax suggests that a person's movement through an environment is strongly affected by the portion of the space that is visible to them. Put simply: a person moves where they can see. This common-sense notion should be integrated into any realistic multi-agent simulation.
Some of the earliest work in multi-agent simulation recognized the importance of vision in a simulation. Craig Reynolds, in his seminal Boids paper, said, “It is possible to construct simple maze like shapes that would confuse the current boid model but would be easily solved by a boid with vision.” Methods from Space Syntax can be applied to give agents this useful “vision.”
What is Space Syntax? Developed in the ‘70s at the University College London by Bill Hillier and Julienne Hanson, Space Syntax defines a set of techniques that may be used to describe a space (e.g. the open areas in a floor-plan) as a formal graph of nodes and edges. They call this graph a “configuration.” Configurations can be processed algorithmically to create data structures (axial space, isovist, convex space) and derive metrics (visibility map, depth map). These structures and metrics can be useful in studying the qualities of a space and be used to answer architectural questions such as:
• Which areas of the space will have the most traffic?
  o Is this a good place to put a display?
• What areas of a space provide privacy?
  o Could too much privacy lead to undesirable behaviors like crime?
• What regions in a space are control areas (regions where most of the space is visible)? Which regions are controllable (regions where little can be seen, but which can be seen by others)?
• What areas are easy to navigate to, and which ones are difficult to find?
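To make the idea of a "configuration" concrete, the following is a minimal sketch of my own (a hypothetical grid-world representation, not Hillier and Hanson's actual formalism): the open cells of a floor plan become graph nodes, and an edge joins every pair of cells with an unobstructed line of sight between them.

```python
def line_of_sight(grid, a, b):
    """True if no wall cell lies on the straight segment between cells a and b.
    Walls are True entries in the occupancy grid; visibility is tested by
    uniformly sampling points along the segment."""
    (r0, c0), (r1, c1) = a, b
    steps = max(abs(r1 - r0), abs(c1 - c0), 1)
    for i in range(steps + 1):
        t = i / steps
        r = round(r0 + t * (r1 - r0))
        c = round(c0 + t * (c1 - c0))
        if grid[r][c]:  # hit a wall
            return False
    return True

def visibility_graph(grid):
    """The 'configuration': map each open cell to the set of open cells it sees."""
    open_cells = [(r, c) for r, row in enumerate(grid)
                  for c, wall in enumerate(row) if not wall]
    return {a: {b for b in open_cells if b != a and line_of_sight(grid, a, b)}
            for a in open_cells}

# A 1x3 corridor with a wall in the middle: the two end cells cannot see
# each other, so each node's visible set is empty.
grid = [[False, True, False]]
vis = visibility_graph(grid)
```

Metrics such as the visibility map or depth map can then be derived by processing this graph (e.g., node degree approximates how "controlling" a location is).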
While the answers to these questions may be useful to an architect, how can Space Syntax be useful in a multi-agent simulation? Short Answer: People move where they can see. We can use the data from Space Syntax analysis to impart vision to, and thus help direct, an agent’s movement.
State-of-the-Art & New Challenges:
Alan Penn and Alasdair Turner at the University College London have done the primary research in Space Syntax and its application to multi-agent simulation. In their simulations they pre-compute a visibility map of the space in which their agents will navigate. This data is used during the simulation to allow agents to answer the following question: given my location and cone of vision, what points in space are visible? To generate an exploration behavior, Penn and Turner (in 2001) direct their agent to pick a random point in its cone of vision and move towards it. This is repeated every few steps of the simulation. In real-world studies, Penn and Turner were able to show a strong correlation between the paths produced by this randomized visibility-map exploration behavior and the paths of real people in a shopping market environment.
So what is a visibility map and how is it computed? A visibility map is a set of uniformly sampled points in a space, where each sample carries a list of all other samples visible from it along a line of sight. It is essentially an environment viewshed for each and every sample point within the space.
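The exploration behavior described above can be sketched as follows. This is my reading of the published description, not Penn and Turner's code; the cone-of-vision filter and the per-step target re-pick interval are illustrative assumptions.

```python
import math
import random

def visible_in_cone(agent_pos, heading, fov, visible_samples):
    """Filter the precomputed visible samples down to those inside the
    agent's cone of vision. heading and fov are radians; fov is the full
    cone angle centered on the heading."""
    ax, ay = agent_pos
    inside = []
    for (x, y) in visible_samples:
        angle = math.atan2(y - ay, x - ax)
        # wrap the angular difference into (-pi, pi]
        diff = (angle - heading + math.pi) % (2 * math.pi) - math.pi
        if abs(diff) <= fov / 2:
            inside.append((x, y))
    return inside

def exploration_target(agent_pos, heading, fov, visibility_map, rng=random):
    """Pick a random visible sample within the cone of vision, or None if
    nothing is visible. In the simulation loop this would be re-invoked
    every few steps, per Penn and Turner's scheme."""
    candidates = visible_in_cone(agent_pos, heading, fov,
                                 visibility_map[agent_pos])
    return rng.choice(candidates) if candidates else None
```

The key point is that the expensive line-of-sight work happens once, in `visibility_map`; the per-step behavior is just a filter and a random draw.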
This approach to applying Space Syntax ideas clearly has its drawbacks. First, the pre-computation is expensive: it is an O(n²) operation in the number of samples. This is exacerbated by the fact that a floor plan generally needs a high sampling rate to drive a realistic simulation. Furthermore, pre-computed solutions suffer from high memory requirements.
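A back-of-envelope calculation makes the scaling concrete (the 200×200 sampling rate here is an assumed figure for illustration): even a modest grid yields tens of thousands of samples, and the all-pairs structure grows quadratically from there.

```python
# n samples from a 200x200 grid over the floor plan
n = 200 * 200                     # 40,000 samples
pairs = n * n                     # O(n^2) visibility tests: 1.6 billion pairs
bits_per_pair = 1                 # even a single bit per pair...
megabytes = pairs * bits_per_pair / 8 / 1e6
print(megabytes)                  # ...costs roughly 200 MB of storage
```

Doubling the sampling rate in each dimension multiplies both the pre-computation time and the storage by sixteen, which is why adaptive sampling or on-the-fly evaluation is attractive.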
Another area for growth is applying Space Syntax in realistic multi-agent simulations. To my knowledge, Penn and Turner's simulations merely track agent paths (like migration patterns) and have not been used to drive animated scenes or handle collision resolution.
Note: Penn and Turner have also explored using higher-level Space Syntax metrics to guide agent behavior. For instance, there is a Space Syntax measure that can be used to identify quick changes in visibility (visibility junctions). Since these usually occur in door ways, Penn and Turner suspect that people intuitively recognize junctions and reevaluate their motion at these points. I need to do more research to see how this work has come along over the last few years.
Proposed Tasks and Alternate Ideas:
In the short-term, I will integrate Space Syntax into UNC's existing multi-agent simulator. I will also attempt to improve Penn and Turner's algorithm. There are several ways to go about this, and I am not sure what form the solution will take at this time: it might be an adaptively sampled visibility map, a massively parallel implementation (GPU), or an entirely different structure that mimics the visibility map function. My hope is for the latter, as it might avoid the costly pre-computation step altogether.
With Space Syntax integrated into the simulator, I plan to set up a building evacuation scenario where some or all agents are not aware of fire exits (which may be the shortest-path exits) and know only one of several main building entrances. My hope is that agents will discover exit short-cuts as they move towards the main exits. I am also hoping that other behaviors will emerge, such as early collision avoidance and crowd avoidance. Perhaps the simulation could even recreate the common real-life awkward event where two people traveling in opposite directions on a sidewalk, each moving left and right in an attempt to avoid the other, still get in each other's way.
Continuing in this evacuation scenario, I’d like to see if I can validate architectural best (and worst) practices for evacuation planning. Interestingly, the pressure-relieving obstructions Helbing suggests be placed in front of exits to avoid injuries and enhance crowd flow seem to contradict the ideas from Space Syntax.
In the long-term, I’d like to see if other Space Syntax measures and structures could be used to drive multi-agent simulation. There are higher level measures such as the depth map which might be used to trigger an agent to take in additional visual information in planning their motion. Furthermore, the axial map seems to have never been used in a simulation—I’d like to investigate if this is because it does not apply or because it just hasn’t been done yet.
In the very long-term, I have attended presentations (http://www.siggraph.org/s2007/attendees/courses/14.html) where urban environments have been generated using procedural modeling. I think axial maps have a place here. Architects have all but proved its application to this domain, but to my knowledge, no one has yet applied it to an urban procedural generator.
Contributions:
By November 10th, I plan to have Space Syntax algorithms integrated into the existing UNC multi-agent simulator. I hope by this time to have made headway toward significant improvements to the visibility map algorithm.
By December 4th, I plan to have refined my November solutions and have run several evacuation scenarios and made assessments of their success or failure.
Friday, October 3, 2008
Image: Space Syntax Limited