As a former moderator of the UNC subreddit and someone who currently frequents the subreddit (I have been living in Richmond), I have noted that the sub often works as a good pulse for how the overall student body is doing. It does skew towards non-traditional students and STEM students, but it has a large readership and it has been fascinating to read during events such as COVID and the UNC CS admissions process.
I propose a web-scraping application that uses NLP and other heuristics to take the pulse of the subreddit and flag potential areas of concern. I am aware of an incident in Summer 2021 involving a post from a disgruntled student who blamed their poor performance on the CS department, the instructor, CAPS, and basically everything but themselves. Bad actors on the subreddit have the potential to cause real harm in the community, and an external tool (i.e., one not run by the Reddit moderators) would be a good way to get ahead of these issues before they get out of hand and result in the spread of misinformation.
I can speak to specifics and software architecture after getting past the initial stage. I'm most familiar with AWS, so I would want to use that as the tech stack, but I am open to any cloud-based solution. The project could theoretically be extended to any other subreddit, but the focus here is on UNC to make sure students are personally invested in the development.
Experience using the UNC subreddit could be useful, but not required.
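A first-pass flagging heuristic could be as simple as keyword scoring before any heavier NLP is layered on. The sketch below is purely illustrative: the term list, post structure, and threshold are all invented, and a real system would likely use a trained sentiment or toxicity model instead.

```python
import re
from collections import Counter

# Hypothetical concern keywords; a real system would use a trained
# classifier or sentiment model rather than a fixed list.
CONCERN_TERMS = {"misinformation", "unfair", "blame", "failing", "scandal"}

def flag_posts(posts, threshold=2):
    """Return (post id, score) for posts whose text mentions at least
    `threshold` concern terms."""
    flagged = []
    for post in posts:
        words = Counter(re.findall(r"[a-z']+", post["text"].lower()))
        score = sum(words[t] for t in CONCERN_TERMS)
        if score >= threshold:
            flagged.append((post["id"], score))
    return flagged

posts = [
    {"id": 1, "text": "Great basketball game last night!"},
    {"id": 2, "text": "The department is failing us and spreading misinformation to blame students."},
]
print(flag_posts(posts))  # -> [(2, 3)]
```

Fetching the posts themselves would be a separate concern (e.g., via Reddit's API), and the scoring step could later be swapped for a model without changing the surrounding pipeline.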
Problem we need solved: We are developing a Media Search Engine and related software covering television, radio, newspapers, magazines, social media, blogs, and web media, so that no data is missed, with the goal of helping people become media aware and media informed through Media Analytics. We also need work on an Administration component that keeps track of computers and hardware on our systems, including security hardware problems and missed error-message data, with Administration Analytics. Finally, there is a little more work needed on the information webpage.
We request support to continue the extraordinary work of a group of students (led by Amelia Paulsen) who worked on this project during the fall 2022 semester.
The STARx (Successful Transition to Adulthood with Rx = treatment) Program serves adolescents and young adults who have chronic physical and/or mental health conditions by teaching them health self-management, as part of a process called healthcare transition from pediatric- to adult-focused providers. We also prepare parents/caregivers to "let go" of responsibility for managing their children's health, going from "active managers" to "coaches". We use patient and caregiver versions of our tools (the self-administered STARx Questionnaire and the provider-verified TRxANSITION Index) that have been validated and are now being used in 10 countries (8 languages).
The expansion we seek to accomplish includes: 1. entering the tools in languages other than English, 2. building an attractive and sensitive score-reporting system for longitudinal tracking, 3. linking to existing resources customized by responses, 4. securing a host server so other institutions across the globe can enter HIPAA-compliant responses to the surveys, and 5. generating a printable PDF report of the patients'/parents' responses to be used in the patients' health records (not always an electronic record in many low- and middle-income countries).
We envision the result of this work as a web-based system initially, but we would like it to become an application available on computers and mobile devices.
Implementation of a tool/app (for Mac and/or Linux) that allows the automatic/semi-automatic computation of head circumference from an MR image. Prior code for the computation exists and might need conversion/adaptation. An intuitive user interface made for biomedical users is a major need for the project.
This will be a good project for students interested in medical imaging and graphics.
My vision for the app would be a map overlaid with all our gardens. Someone should be able to then click on any individual garden and all the plants that are currently in that particular garden would be displayed. Further, someone should be able to then click on any of the plants and get a bit of information about it (when and how to harvest, perhaps a recipe or two). The big thing will be the back end of all this. It needs to be such that someone such as myself with no programming experience could edit the contents of each garden as the plants change 3 or 4 times a year.
This is a continuation and extension of a previous project, and the existing code base is available. It involves integration with Google Maps services.
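The "editable by someone with no programming experience" requirement could be met by keeping garden contents in a simple data file (or a lightweight CMS) that the map front end reads at load time. The sketch below is one possible shape for that data; the garden name, coordinates, and plant entries are all invented for illustration.

```python
import json

# Hypothetical garden content file: a non-programmer edits this JSON
# (or a spreadsheet exported to it) and the map app reads it at load time.
GARDEN_DATA = """
{
  "Carolina Community Garden": {
    "location": [35.9132, -79.0558],
    "plants": [
      {"name": "Kale", "harvest": "Cut outer leaves from October to April."},
      {"name": "Okra", "harvest": "Pick pods at 2-3 inches, June to September."}
    ]
  }
}
"""

gardens = json.loads(GARDEN_DATA)
for garden, info in gardens.items():
    print(garden, "->", [p["name"] for p in info["plants"]])
```

The same structure maps directly onto the click-through flow described above: the map pin comes from `location`, the garden popup lists `plants`, and each plant's detail view shows its `harvest` note.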
We worked with students this past semester (Lia, Dedeepya, Nori, and Ashna), were profoundly impressed, and have launched an amazing application. We want to continue work on the admin side of this application, and I'd love to work further with students to complete this tool.
Original Blurb:
Wake Smiles provides health services for under-resourced adults through better oral health.
We work with hundreds of volunteers, from dentists to pre-dental students. We have an AWFUL sign-up and tracking system, and the ones that are available are absolutely not budget friendly.
It is my dream to have an app that volunteers can use to sign up for shifts, track hours, and hold pertinent certifications (PDFs). This system must also be user friendly on the admin side so that we can run reports, track hours/people, and even communicate with our large pool of volunteers.
Currently we use a WordPress plugin, but it falls short on many levels. As we continue to grow, having something that volunteers of all levels could have IN THEIR HANDS would be ideal.
The overall goal is to support faculty with a set of "micromoment" activities that they can implement in the classroom, each taking less than 30 minutes of class time. These activities are designed to promote an entrepreneurial mindset in students, helping them question, adapt, and make positive change by promoting curiosity, connections, and value creation.
These activities were developed as part of a grant from the Kern Family Foundation. The activities are described here:
We would like to create an app that has a virtual card deck, with each virtual card taking the information from one of the 25 activities at the above link. In the classroom, faculty should be able to use the app to browse and search the above activities and then easily follow the information described on the virtual card. After the activity is over, faculty should be able to use the app to provide feedback to the project faculty coordinator (me!) on the activity.
Finally, the project faculty coordinator should be able to edit existing activities or add new activities to a master list, and the app should be able to incorporate this updated information to the virtual cards in real time.
Kinetik underpins enterprise revenue growth through our go-to market optimization software. Our approach applies data science, predictive analytics and simulation to optimize resource allocation and identify productivity drivers across channels, functions, and markets. Our visualization engine enables an executive understanding of complex operational processes. Kinetik enables strong leaders to motivate their teams to take bold action with the power of predictive analytics. Kinetik | Insights in Motion.
PROBLEM
Many large enterprises struggle to grow the business because they lack insight into the constraining factors in their go-to market models. Go-to market models cross multiple functions including Sales, Marketing, Sales Operations, Business Partners, and Delivery. For example, tech companies often have multiple opportunity identification sources, seller communities and roles, marketing campaigns and tactics, target segments, and product families. The permutations through the go-to market model can quickly surpass a billion. Enterprises try to deal with this complexity by adding reporting resources, expanding the attributes associated with the opportunity records in their CRM system, and increasing the frequency of sales cadences.
What these approaches lack is an understanding of the impact of productivity drivers and resource mix across the end-to-end sales pipeline dynamics. Each functional executive (CMO, CRO, Sales Executive) often can only make changes within their function and will lack an understanding of constraints in the other functions that will impact the overall outcomes. For example, a marketing executive can make many changes in campaigns, tactics, and methods; however, if the primary challenge is seller capacity, seller productivity, or product differentiation, additional marketing resources will simply create more activity with minimal impact on revenue.
Our solution provides scenario analysis to support go-to market optimization through a proprietary set of pipeline velocity metrics, sophisticated predictive analytics and simulation, and a user experience that combines animated simulation results and dynamic visualization of the model. The proprietary opportunity velocity metrics are calculated through the comparison of opportunity states in an automated comparison of historical weekly views extracted from leading CRM systems including Salesforce, Microsoft, and SugarCRM.
The model produces superior forecasts through cluster analysis of opportunities to identify opportunity attributes (e.g. identification channel, seller community, marketing tactic, prospect industry, etc.) with similar opportunity progression profiles. These clusters ("tranches") become the foundation of the forecasting engine, and a unique go-to market profile is developed through an automated machine learning method that assesses week-by-week changes across a series of historical pipeline snapshots. The solution supports executive-level dashboards and simulation visualizations that allow scenario analysis to be executed in real time without requiring a detailed understanding of the underlying data science and velocity statistics.
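Kinetik's velocity statistics are proprietary, but the snapshot-comparison mechanic described above (diffing weekly CRM extracts to see how opportunities move between stages) can be sketched with hypothetical data; the stage names and opportunity ids below are invented.

```python
from collections import Counter

# Two hypothetical weekly CRM snapshots: opportunity id -> sales stage.
week1 = {"opp-1": "Qualify", "opp-2": "Propose", "opp-3": "Qualify"}
week2 = {"opp-1": "Propose", "opp-2": "Closed-Won", "opp-3": "Qualify"}

def stage_transitions(prev, curr):
    """Count (from_stage, to_stage) moves between two snapshots."""
    moves = Counter()
    for opp, stage in curr.items():
        if opp in prev and prev[opp] != stage:
            moves[(prev[opp], stage)] += 1
    return moves

print(dict(stage_transitions(week1, week2)))
# {('Qualify', 'Propose'): 1, ('Propose', 'Closed-Won'): 1}
```

Transition counts like these, accumulated over many weeks, are the kind of raw material from which per-stage velocity and conversion statistics could be derived.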
The user experience is built around visual components (e.g. sliders, knobs, incremental improvement inputs) that allow senior executives to create their own scenarios and hit the 'play' button to see the results through a visual representation of the opportunity pipeline. In addition, executives can see a visual representation of actual results through the same user experience, leveraging the historical snapshots used to create the model. This approach is radically different than the hundreds of pages of static pipeline views that executives have access to today. The ability to visualize the impact of changes to the go-to market model for easily developed scenarios is what Kinetik describes as 'Insights in Motion'.
Core to the success of the venture is the domain expertise that has shaped the creation of the foundational go-to market simulation. The creation of the model required the invention of a new set of statistics that captures the velocity of movement of opportunities through an organization's sales stages and marketing and selling processes.
Where traditional efforts rely on metrics like days-in-sales-stage that cannot be incorporated into a forecasting model, Kinetik employs velocity statistics that can be proven to deliver higher forecast accuracy. While enterprises have invested millions of dollars in CRM systems that have delivered workflow simplification, there is currently no capability available in the market that we have found that leverages the CRM investment for improved understanding of the sales process and opportunity pipeline.
This is a project that will introduce the team to brain MRIs and brain CTs. The goal of the project is to develop and train a machine learning model (in either TensorFlow or PyTorch) to allow for auto-segmentation of the brain across two modalities (MRI and CT). Our goal would be to segment out the skin, bone, gray matter, white matter, and CSF. Extra credit would be to assess the accuracy of a convolutional network that can take these segmentations and build a model image of the same sample in the other modality (e.g. go from MRI to CT, or from CT to MRI).
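Whatever framework the team chooses, the segmentations will need to be evaluated against ground truth, typically with an overlap metric such as the Dice coefficient. A framework-agnostic sketch, with tiny toy masks standing in for a single tissue label:

```python
def dice(pred, truth):
    """Dice similarity coefficient between two binary masks (nested lists).
    1.0 means perfect overlap, 0.0 means none."""
    p = [v for row in pred for v in row]
    t = [v for row in truth for v in row]
    intersection = sum(a and b for a, b in zip(p, t))
    total = sum(p) + sum(t)
    return 2.0 * intersection / total if total else 1.0

# Toy 4x4 masks: predicted vs. ground-truth "gray matter" (invented data).
pred  = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
truth = [[0, 1, 1, 0], [0, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
print(dice(pred, truth))  # 2*3 / (4+3) = 0.857...
```

In practice the same computation would run per label (skin, bone, gray matter, white matter, CSF) on full 3D volumes, usually via tensor operations in the chosen framework.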
This is a mobile app that we will develop that provides neurosurgeons with the ability to track objects in real-time with their phone. We have developed a tracking device that fits onto a catheter that neurosurgeons routinely place in the brain without guidance. Our aim is to develop a light-weight yet robust tracking algorithm that is capable of tracking the 6 degrees of freedom that a catheter has using only the selfie-camera, in real-time. Our tracker utilizes 12 ARUco markers placed equidistant to a single point on the catheter. We will work to develop a robust algorithm to track this in multiple clinical settings (different distances, lighting, angles) to identify where the tip of the catheter is in space relative to the phone's camera.
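The per-marker pose estimation itself would come from a computer vision library (OpenCV's ArUco support is the usual choice), but one piece of the robustness story can be sketched on its own: with 12 markers each yielding an independent estimate of the tip position, a component-wise median tolerates a few misdetections. The numbers below are invented, and this is one possible fusion strategy, not necessarily the team's algorithm.

```python
from statistics import median

def fuse_tip_estimates(estimates):
    """Combine per-marker 3D tip estimates (mm, camera frame) with a
    component-wise median, which tolerates a few bad detections."""
    xs, ys, zs = zip(*estimates)
    return (median(xs), median(ys), median(zs))

# Hypothetical estimates from 5 of the 12 markers; one is an outlier.
estimates = [
    (10.1, 20.0, 150.2),
    (10.0, 19.9, 150.0),
    ( 9.9, 20.1, 149.8),
    (10.2, 20.0, 150.1),
    (42.0, -3.0, 900.0),  # misdetected marker
]
print(fuse_tip_estimates(estimates))  # -> (10.1, 20.0, 150.1)
```

A mean would be pulled badly off by the outlier here; the median stays within a tenth of a millimeter of the true position, which matters when the varying distances, lighting, and angles of clinical settings degrade individual detections.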
This is a high performance computing / GPU project aimed at back-propagating contrast signal from a 2D projection to its appropriate location in 3D space.
Digitally reconstructed radiographs (DRRs) are essentially X-rays rendered computationally from CT scans. They use GPUs to trace rays through an object, accumulating the density properties of the tissue the rays pass through, and integrate the resulting signal along each ray to construct a final projection, or DRR (an X-ray).
Digital subtraction angiography is a study that we utilize to characterize blood vessels. The first X-ray that is taken in a series of shots is inverted and added to subsequent shots, allowing for X-rays to display any changes that occur when contrast is given to a patient. The contrast flowing through vessels is the signal change, and this is attenuated by the mask application. Any patient motion degrades these images and introduces motion artifact.
I have the algorithm in place to determine the exact alignment of the 3D source and each frame in a DSA. This project aims to back-propagate signal from multiple X-rays (blood flow) to a source CT scan (modeling the flow of contrast), and then reconstruct the projections in any orientation we're interested in.
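As a toy illustration of the forward/back-projection idea (not the actual GPU code, which operates on full 3D CT volumes with arbitrary ray geometry), here is a 2D analogue in plain Python: rays sum attenuation across a slice, and unfiltered back-projection smears the projection back into the grid.

```python
# Toy analogue of a DRR: forward-project a 2D "CT slice" by summing
# attenuation along parallel rays (one ray per row), then smear the
# projection back (unfiltered back-projection) to see where signal lands.
slice_ = [
    [0, 0, 0, 0],
    [0, 5, 5, 0],
    [0, 0, 0, 0],
]

projection = [sum(row) for row in slice_]  # line integrals, one per row
print(projection)  # [0, 10, 0]

n_cols = len(slice_[0])
backproj = [[p / n_cols] * n_cols for p in projection]
print(backproj[1])  # [2.5, 2.5, 2.5, 2.5]
```

The real problem adds the third dimension, multiple projection angles per DSA frame, and the known source/frame alignment, which together constrain where along each ray the contrast signal should be deposited.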
We are developing a user friendly app for nurses to go to in order to learn the latest in changes or updates throughout the hospital. The prior team did a great job this fall getting us pretty far along but they recommended continuing with another group to keep things going and get the app to be usable.
Original Blurb
We are in need of a technological tool, most likely an app, that our nurses and nursing assistants can access for real time practice updates, regulatory changes, and ongoing educational blasts. Currently information within the medical center mainly flows through emails, flyers, and short huddles that you must be present at to hear the information. In today's tech-savvy generational workforce, we believe providing these nurses and teams a more applicable and "just-in-time" approach to receiving this information could help close the gaps in communication from "bench to bedside."
The medical center currently has access to websites, applications, etc., and these could be a great starting point to build something that moves from a static place to a mobile application that team members can draw upon at any moment for up-to-date resources. We also want the ability to meet the demands of all adult learners by being able to post videos, podcasts, and readable resources here.
The Center for Nursing Excellence comprises educators and quality experts who will be able to drive and guide content in a meaningful way for a tool such as this. What we lack is the technological know-how to make it come to life in a better way for all the new and seasoned workforce we have. Our healthcare system is so complex, and any way that we can find to mitigate risk, close communication gaps, and flow important information to those that need it directly at the bedside will be immeasurably valuable. If selected to work with a team, we can provide the current technology available to build off of in more detail. We would love nothing more than to stand back and allow you all the creativity and freedom to build us such a living, breathing repository to help provide quality care to the patients of North Carolina!
I will try to explain what the students did for us this semester.
We want to create an online portal for behavioral health providers to input or upload data when a group of Institute and DHHS staff evaluate their team. We use this data to rate items based on a fidelity scale. Teams currently submit a spreadsheet with a lot of data and we make calculations on the data. The online portal would allow the evaluators to see the data in one spot and make calculations for ratings. We also use a consensus spreadsheet for ratings which we would like to add to the online portal (if feasible during the semester). Long-term we would like to add the report template to the portal. I have attached all 3 documents.
The Fall 2022 students took the spreadsheet and created code for the tabs on the spreadsheet. It was a huge project and the students did all they could during the semester. The code needs additional work and calculations added to it.
I have a Teams channel set up where the students added a video about what they used and how they set up the code. We also recorded our last session together where the students described the final project in Teams. I took screenshots of a platform my supervisor uses for a visual. I will add the new students to the Teams account so they can access the documents and videos and for communication. I have a GitHub account where they transferred the code to me.
An online platform that allows content creators to easily create animated videos through drag-and-drop features. The platform features the ability for users to upload and drag and drop images, GIFs, videos, and music onto a slide (creating a video scene). The platform will also have its own images, GIFs, videos, and music. Users can drag and drop these elements onto the slide and adjust how long each element will be on the slide using a timeline. Each element can have intro and outro effects. Users can use pre-built video templates or design their own from scratch. Users should be able to preview each slide with a start and stop button. Users should be able to export the final video as a PowerPoint or MP4, or, as a bonus, upload it to YouTube. Users should be able to organize their videos in folders to come back and edit them if need be.
As a new college student, I have been tasked with the undoubtedly difficult job of managing almost all facets of my life. Of course, this sounds silly at first, but it is an obstacle that almost all can relate to. For some, it may come earlier or later in life, but the same realization appears nonetheless. That realization being that you have absolute responsibility of managing your own life. Now, food is one of the most, if not the most important part of your life. I mean, we literally can’t live without it. Your thoughts linger around the question of where you’re going to get your food from, but the answer to that question is not the end of the journey. You must then ask if that food can even be eaten. And I don’t just mean can I viably eat the probably undercooked ramen I made in the corner microwave of the kitchen on my dorm floor. I mean, am I allergic? It can be hard for a person still learning how to balance the rest of the responsibilities in their life to also correctly manage such an important question. That’s where SafeEats comes in. For the customer trying to manage their food allergies, they simply enter in their known food allergies, and we handle the rest. After a customer enters their allergies, the allergies are cross-referenced with the menus of restaurants to give the customer an easily accessible list of the foods on the menu which they can eat.
SafeEats would be a mobile application that serves both iOS and Android devices. Since the app serves as a way to make dining easier for the customer, it would have links to other applications such as DoorDash, Postmates, or the actual online ordering system for the restaurant, and potentially a partnership with any of the previously named organizations. Because these companies have already mastered online ordering, SafeEats would be unsustainable if it were to have online ordering of its own. Ideally, the application only makes the items on the menu which the customer can actually eat available to them, so as to avoid confusion or any possibility that they choose an item they are allergic to. Approximately 32 million people in the United States are affected by food allergies, so the problem this app solves is simple: dining accessibility for those with food allergies. Although I used a college student as an example of someone who would greatly benefit from this app, another relevant demographic is parents, as the safety of their children is of the utmost importance. While dealing with the pressures of raising a child, they would absolutely benefit from an app that ensured the safety of their children when eating food from restaurants. Overall, the app has one goal and one goal alone: easy dining in a safe way.
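The core matching step described above (cross-referencing a user's allergens against restaurant menus) reduces to set filtering. A minimal sketch, with an entirely invented menu and allergen data:

```python
def safe_menu(menu, user_allergens):
    """Return menu items containing none of the user's allergens."""
    user = {a.lower() for a in user_allergens}
    return [item for item, allergens in menu.items()
            if user.isdisjoint(a.lower() for a in allergens)]

# Hypothetical menu data: item -> allergens declared by the restaurant.
menu = {
    "Pad Thai":        {"peanut", "egg", "soy"},
    "Grilled Chicken": set(),
    "Caesar Salad":    {"egg", "fish"},
}
print(safe_menu(menu, ["Peanut", "Fish"]))  # -> ['Grilled Chicken']
```

The hard part of the product is not this filter but sourcing trustworthy allergen data per menu item; the safety-critical framing suggests showing only items the restaurant has positively declared safe, rather than items merely missing allergen information.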
DICOMs are the standard format for medical imaging. Unfortunately, there's no unified or uniformly accepted storage layout, so medical images can be presented as a flat list of files or organized by a complicated DICOMDIR that describes where different files are located. This makes it difficult to process these files in iOS projects. I would like to work directly with a team of 4 students interested in building an industrial-strength DICOM viewer for iOS capable of displaying CTs and MRIs with different numbers of sequences. The work will focus on 1) processing any number of storage formats and 2) UI design for usability and ease of use.
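One concrete piece of the "any storage layout" problem: DICOM Part 10 files can be recognized regardless of file name or folder structure, because they begin with a 128-byte preamble followed by the magic bytes `DICM`. The sketch below (Python for illustration; the viewer itself would be Swift) scans a directory tree that way, and the demo file is synthetic, not a valid DICOM.

```python
import os
import tempfile

def is_dicom(path):
    """DICOM Part 10 files start with a 128-byte preamble then b'DICM'."""
    with open(path, "rb") as f:
        f.seek(128)
        return f.read(4) == b"DICM"

def find_dicoms(root):
    """Walk a directory tree and collect DICOM files regardless of layout."""
    hits = []
    for dirpath, _, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                if is_dicom(path):
                    hits.append(path)
            except OSError:
                pass  # unreadable file: skip rather than crash the scan
    return hits

# Demo with a synthetic file carrying only the magic bytes.
with tempfile.TemporaryDirectory() as d:
    fake = os.path.join(d, "IM000001")
    with open(fake, "wb") as f:
        f.write(b"\x00" * 128 + b"DICM")
    print(find_dicoms(d))  # prints a one-element list with the synthetic file
```

Sorting the discovered files into series and sequences would then rely on the DICOM metadata inside each file (or the DICOMDIR, when one is present and trustworthy).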
ARUco markers are used to track both object orientation and scale for augmented reality systems. This project will work on creating a performant model to track a set of ARUco markers to deduce the exact location of an object in space for use in a neurosurgical procedure. We have developed an attachment to one of our tools that has 12 ARUco markers, and we would like to have the front-facing camera on an iPhone track the precise location of the attachment.
UNC Blue Sky Innovations is a research lab funded by Hussman School of Journalism and Media and Kenan-Flagler Business School. This lab focuses on solving business problems using emerging technologies such as AR/VR, computer vision, and robotics.
Loomo is a robotics platform from Segway that provides an API/SDK enabling Java app development on a locomotion robot. In our lab, we are creating various uses for the robot in the media industry, from gathering information to engaging with audiences in new ways.
This project will include designing two mobile applications, one for the controller and one for the robot itself, to enable the robot to engage and capture information.
This student team will work in the Reese Lab on Franklin Street and will have access to our team of AI and computer vision engineers. The team will be led by Steven King, associate professor of emerging technologies and director of Blue Sky Innovations.
Technologies: Python and Mobile for Android
Design documents to be finalized with team input based on skills.
This proposal entails building software (SAS) to automate the volumetric capture project pipeline.
As a brief overview, the volumetric capture project builds 3D models of humans using AI. We have built a studio in the lab which uses 4 cameras in order to do this. The 3D models generated will be used for virtual reality, augmented reality and Metaverse applications. This is done by leveraging three different machine learning models.
At the moment, each model is run one step at a time. The output of two of the models is used as input for the third model. For each step in between, there are multiple configuration tasks (e.g. modification of the data): the images have to be renamed, categorized, moved to folders, etc.
I am proposing that the students build a solution to automate the process. This would greatly increase the efficiency of results while also reducing human intervention, which can reduce the chance of errors.
Success in this project would look like having an interface (GUI) that allows the user to input all of the variables (# of views, # of training images, etc.) and then click a button (“run”) that then begins the volumetric model generation process.
This will be a continuation of a previous project already done by this class.
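The automation described above largely amounts to scripting the between-step bookkeeping. A minimal sketch of one such step, with a hypothetical naming convention and directory layout (the actual pipeline's conventions would come from the existing project):

```python
import os
import shutil
import tempfile

def prepare_inputs(src_dir, dst_dir, prefix):
    """Copy images into the naming scheme the next model expects,
    numbered in a stable (sorted) order."""
    os.makedirs(dst_dir, exist_ok=True)
    for i, name in enumerate(sorted(os.listdir(src_dir))):
        shutil.copy(os.path.join(src_dir, name),
                    os.path.join(dst_dir, f"{prefix}_{i:04d}.png"))

# Demo with empty placeholder files standing in for camera captures.
with tempfile.TemporaryDirectory() as root:
    raw = os.path.join(root, "raw")
    os.makedirs(raw)
    for name in ["camA.png", "camB.png"]:
        open(os.path.join(raw, name), "wb").close()
    prepare_inputs(raw, os.path.join(root, "step1_in"), "view")
    print(sorted(os.listdir(os.path.join(root, "step1_in"))))
    # -> ['view_0000.png', 'view_0001.png']
```

Chaining several such steps behind the proposed GUI's "run" button, with the user's variables (# of views, # of training images, etc.) passed through as parameters, would remove the manual intervention between models.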
As a side project, I’ve been working on building a software platform that's designed to help faculty (particularly junior faculty) align their priorities and tasks and keep track of their progress, etc. It's basically a task/project management system for academics. I initially built a prototype based on an application I made for myself in Notion (a no-code platform). This is a system of interlinked databases/tables and dashboards dedicated to specific areas of work/life. It incorporates the best advice on time and task management that I encountered while figuring out my own system. The core relies loosely on guidance from the National Center for Faculty Development and Diversity on Semester and Weekly planning processes and incorporates some behavioral economics style nudges and gamification to help researchers get writing and other tasks done.
The prototype in Notion is here:
notion link
If you set up an account on Notion, you can “duplicate” the template to your own account and mess around with it there. (Note: Notion gives you a free professional plan if you use your .edu email address). You can also see a description of the system, including key features etc on the notion template here: https://collectivegood.notion.site/System-Design-530a13f284bb4c6ea23a37035ab9bff0
The unique aspect of this compared to other "GTD" and project management apps is that it is designed using insights from psychology and economics...instead of giving users tons of options and flexibility (which can be debilitating), it is purposefully restrictive. Though not yet implemented, there are some additional features that I plan to incorporate in the future. One is a suite of "decision-support tools" that use user data (shared optionally) & a series of questions to give junior faculty guidance on key decision points based on principles for optimal decision-making under uncertainty. There are several others that I could discuss with the team.
The initial version I built in Notion has several limitations that make it clunky for others to use (for example, I've set up automations through third-party apps like Zapier, which have a bit of a learning curve). I'd like to build a more robust standalone piece of software.
My immediate goal is to release this as a public good for the UNC community. Eventually, it could also be monetized if there is a market among academics more broadly.
To get a sense of the project, the best place to start is the Notion template. I've also attached a previous proposal to CFE (not funded, as it did not align particularly well with the grant opportunity).
If desired, the development team at the Sheps Center for Health Services Research is familiar with the project and could provide support to the student team.
Background: I am an oncologist who sees patients with leukemia, an aggressive blood cancer. I lead a research group involving PhD students, medical fellows, and medical students. Our research group has developed a prototype electronic patient-facing values clarification tool for older adults with leukemia called "PRIME" (Preference Reporting to Improve Management and Experience) using Adobe XD. We used a participatory co-design process involving patients, caregivers, and clinicians to develop PRIME. Our formative studies demonstrate that patients and clinicians find PRIME acceptable, and that it may be effective at improving treatment decisions. PRIME uses best-worst scaling (BWS), a simple yet robust values clarification method, to capture patient values. We engaged over 800 patients, caregivers, and oncologists to create and refine a blood cancer specific BWS measure that is used in PRIME. Our goal is that PRIME would generate a personalized values dashboard in real-time to inform shared decision-making between oncologists and patients about chemotherapy.
Unfortunately, survival is tragically short for most patients with leukemia. It is critical that decisions about chemotherapy are aligned to patient preferences. Choosing the “best” treatment option often depends on what each patient values most: quality of life or living longer. Patients with leukemia differ substantially in their preference for prioritizing these outcomes. Therefore, accurately clarifying patient values is essential to inform treatment decisions and limit unnecessary suffering. This currently is done very poorly as patients and oncologists often do not have the tools to adequately discuss patient preferences and incorporate them into treatment decisions. Currently, no tools have been developed to assist in this process.
The objective of this proposal is to build upon our current prototype to develop an mHealth application that can deliver the BWS measure (and other surveys) to patients and produce real-time visualizations of their results. We plan to use this tool as part of a randomized clinical trial to improve treatment decision-making among older patients with leukemia. This tool needs to have the following functionality:
1. Deliver the BWS measure (similar to a survey) to patients electronically along with other patient-reported outcome measures (electronic consent would be ideal)
2. Analyze the results of the BWS measure in real-time.
3. Generate a personalized visualization of the results of the BWS (see examples in attached PDFs) for patients including longitudinal results if they have taken the BWS measure more than once.
4. Send results of the BWS survey to the treating oncologist.
5. Be HIPAA compliant / secure
Other non-essential functionality would include:
- Incorporate educational videos for patients to orient them to the tool and the study
- Interact with existing electronic health record systems (EPIC, UNC MyChart)
- Securely store all data for all participants
- Incorporate the functionality/usability of our current prototype
We envision either a web-based platform or an app; either would be appropriate.
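For the real-time analysis in item 2, BWS responses are commonly summarized with simple best-minus-worst counts per item, which could then feed the personalized visualization in item 3. The item names and responses below are invented for illustration; the actual PRIME measure and its scoring may differ.

```python
from collections import Counter

# Hypothetical responses: each BWS task records the item the patient
# picked as most important ("best") and least important ("worst").
responses = [
    {"best": "quality of life", "worst": "treatment burden"},
    {"best": "quality of life", "worst": "cost"},
    {"best": "living longer",   "worst": "treatment burden"},
]

def bws_scores(responses):
    """Best-minus-worst count for each item (higher = valued more)."""
    best = Counter(r["best"] for r in responses)
    worst = Counter(r["worst"] for r in responses)
    items = set(best) | set(worst)
    return {item: best[item] - worst[item] for item in items}

print(bws_scores(responses))
```

Storing per-session scores like these, keyed by patient and date, would also cover the longitudinal tracking in item 3 with no extra analysis machinery.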
Our goal is to create a demonstration tool that supports robotics education by visualizing different aspects of robotics algorithms and concepts, e.g., robot kinematics, motion planning, and controls. Target audiences include university students, advanced high school students, and community members interested in learning about how robots work. We already have a basic version of this tool, which was developed by a great COMP523 team last year, but we are still missing some key components.
The broader work from which the proposed project emerges is a joint endeavor between Lauren Leve, Associate Professor of Religious Studies at UNC, and Baakhan Nyane Waa ("Come Tell Stories," in the Newari language), a group of Nepali cultural heritage activists based in Kathmandu. Members of Baakhan Nyane Waa are architects, engineers, artists, and filmmakers who came together to help reconstruct a major cultural site in Kathmandu that was destroyed in the 2015 earthquake. Professor Leve is an expert on Himalayan Buddhism and Nepali Buddhist culture. They met in the summer of 2018 when Professor Leve traveled to Nepal to create 3D models of Buddhist temples in the Kathmandu Valley. This meeting led to a plan to create a publicly accessible internet-based record of tangible and intangible religious heritage in Nepal. This will take the initial form of a VR-accessible 3D model of Swayambhunath stupa, a major cultural monument and UNESCO World Heritage site, that is linked to an archive of recorded interviews and historical documents and is annotated with audio, video, photos, and text that illustrates its many meanings and deep significance to the diverse local and international communities that focus their ritual and cultural lives there.
Professor Leve and her Nepali partners spent two months this past summer (2022) constructing a preliminary 3D model of the monument, filming ritual practices, and recording interviews with Buddhist monks, priests, scholars, and other stakeholders and visitors to the site. The goal of these interviews was to document cultural and ritual knowledge, record devotees’ descriptions of their connections to and practices at the site, and capture the kinds of religious and cultural changes that are taking place in response to globalization and modernization in Nepal in an engaging, user-friendly way. These interviews will be integrated into the model as audio and/or visual annotations.
The goal is to create a dynamic web application that hosts a 3D model of the Swayambhu heritage site with a first-person user experience, where a visitor to the site can click key artifacts in the model and interact with annotations (text, photo, audio and video). For example, the user would be able to hear an old man talking about visiting the site as a child, what it was like back then, and how he loved to play with the object that the visitor clicked on while his grandpa performed prayers. Or they might click elsewhere to access an explanation of an object and a link to another page containing a video of someone performing a ritual that uses it. Clicking in a different location might bring up a monk telling the story of how a Buddhist deity produced monkeys at the temple from the lice in a yogi’s hair, or a list of statues stolen from the premises that are now in private collections or European museums. It might bring up a pilgrim explaining that she came to the stupa to complete the death rituals for her mother, or a twenty-something would-be soldier explaining that, after applying to a competitive army position, he came to the temple to play a game he’d seen on TikTok and believed would tell him his luck. There is a diversity of myth, knowledge, experience and cultural history embedded in Himalayan traditions and materialized at the site, and we seek to make it accessible to audiences that include UNC students.
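Clickable artifacts of this kind are usually implemented by casting a ray from the camera through the click point and testing it against pickable regions in the scene; Unity and browser 3D libraries such as Three.js both provide raycasting out of the box. Purely as an engine-agnostic illustration (none of these names come from the project’s codebase), a minimal TypeScript sketch of picking against spherical “hotspots” placed over artifacts might look like:

```typescript
interface Vec3 { x: number; y: number; z: number; }

const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

// A pickable region over an artifact, e.g. a statue or prayer wheel.
interface Hotspot { artifactId: string; center: Vec3; radius: number; }

// Cast a ray (origin plus normalized direction) against spherical hotspots
// and return the id of the nearest artifact hit, or null on a miss.
function pickArtifact(origin: Vec3, dir: Vec3, hotspots: Hotspot[]): string | null {
  let best: string | null = null;
  let bestT = Infinity;
  for (const h of hotspots) {
    const oc = sub(h.center, origin);
    const t = dot(oc, dir);           // projection of the center onto the ray
    if (t < 0) continue;              // hotspot is behind the viewer
    const d2 = dot(oc, oc) - t * t;   // squared distance from center to ray
    const r2 = h.radius * h.radius;
    if (d2 > r2) continue;            // ray misses this sphere
    const tHit = t - Math.sqrt(r2 - d2);
    if (tHit < bestT) { bestT = tHit; best = h.artifactId; }
  }
  return best;
}
```

In a real engine the hotspots would be colliders or scene objects, and a successful pick would open the artifact’s annotation panel; the nearest-hit rule above ensures that an object in front occludes one behind it.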
The application should be compatible with computers, tablets and mobile phones. It should load 3D models smoothly, with some animation where needed, and incorporate the annotations in an elegant way. It will also ideally offer an option to switch between languages, so that English speakers can listen to translations of the audio or video interviews while speakers of local languages can hear them in the original.
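One way to support that language switch is to store each annotation keyed by both artifact and language, falling back to the original recording when no translation exists yet. A minimal sketch in TypeScript, with all names hypothetical rather than taken from the existing prototype:

```typescript
type MediaKind = "text" | "photo" | "audio" | "video";

interface Annotation {
  kind: MediaKind;
  title: string;
  uri: string; // location of the media asset
}

// Annotations per artifact, keyed by language code (e.g. "en", "ne", "new" for Newari).
type AnnotationSet = Map<string, Annotation[]>;

class AnnotationStore {
  private byArtifact = new Map<string, AnnotationSet>();

  add(artifactId: string, lang: string, annotation: Annotation): void {
    const set = this.byArtifact.get(artifactId) ?? new Map<string, Annotation[]>();
    const list = set.get(lang) ?? [];
    list.push(annotation);
    set.set(lang, list);
    this.byArtifact.set(artifactId, set);
  }

  // Return annotations in the requested language, falling back to the
  // original recording language when no translation is available.
  get(artifactId: string, lang: string, fallbackLang: string): Annotation[] {
    const set = this.byArtifact.get(artifactId);
    if (!set) return [];
    return set.get(lang) ?? set.get(fallbackLang) ?? [];
  }
}

// Example: a Newari-language interview with no English translation yet;
// an English-speaking visitor still hears the original recording.
const store = new AnnotationStore();
store.add("stupa-harmika", "new", {
  kind: "audio",
  title: "A monk recalls rituals at the harmika",
  uri: "media/harmika-interview.mp3",
});
const annotations = store.get("stupa-harmika", "en", "new");
console.log(annotations.length, annotations[0].title);
```

The fallback keeps the site usable while translations are still being produced, which matters for an archive that will grow over several field seasons.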
Last semester (Fall 2022), a team from COMP 523 initiated work on the project and produced a prototype/proof of concept using the Potree platform (see https://tarheels.live/teamd/). We have realized, however, that for the final product to be VR compatible, we need a Unity-based platform. Therefore, I'm seeking a team that can recreate the annotated model using Unity and, ideally, build on the basic framework that has been created (more annotations, more complex annotations, perhaps some animation if there is time) and optimize it for VR.
Together, the 3D model and annotations/archive of interviews that we are producing constitute important records of the tangible and intangible heritage of Nepal. They will preserve vital knowledge of both cultural practices and the built Buddhist environment into the future. This is a crucial need given that the Kathmandu Valley is still geologically active with more earthquakes expected and considering the rapid pace of cultural change.
We expect that the site will attract users with a variety of types of interest in Buddhism and/or Nepali culture. UNESCO has expressed interest in promoting our completed product, as have the US Embassy in Kathmandu and the Tourism Ministry of Nepal. Professor Leve will integrate it into the courses on Buddhism she teaches at UNC; she expects it may be appealing to faculty and students at other colleges as well.