We seek to expand a project developed by a COMP 523 team (S25). The website runs a Unity viewer that displays the 3D model of the Swayambhu temple environment we created in 2022. We have a new version of this model and plan to continue refining it. We need a fully functional website that incorporates a landing page, a model viewer that allows users to explore an annotated model in a way that is true to life and intuitive to navigate (i.e., with trackpad/mouse), additional contextualizing sub-pages, and the possibility of linking to the video database. This will entail work within Unity.
This project involves documenting and preserving Buddhist cultural heritage in Nepal, using annotated VR models. Your coach this semester (Sam Shi) was a member of the original project team. Thus you have good help for this proposed continuation.
Last semester, students worked to create a video database that uses an LLM to transcribe and translate video interviews from Nepal (in several languages). We want to expand the database and improve user functionality. The most important feature of this database is being able to search for text within the transcription. However, it must be optimized to be public-facing. Hence, we're also interested in building an intuitive user interface.
For an accessible description of the project overall, see:
project coverage
Crane Selector Web Application Specification Document
Buckner Heavylift Cranes is a crane company based out of Graham, NC. They supply very large cranes and lift equipment to industries needing such equipment on a rental basis.
OVERVIEW
We aim to develop a clean, user-friendly web application that assists potential clients in identifying the most suitable crane model for their project. The interface will resemble automotive configurator websites, where users input project-specific data and receive tailored equipment recommendations.

GOALS
-- Provide an intuitive dashboard for users to input project requirements.
-- Match users with the most appropriate crane model based on technical parameters.
-- Collect user contact information and project descriptions for lead generation.

USER INPUTS
The application will prompt users to enter the following data:
-- Load Weight: Estimated weight of the load (in tons).
-- Lift Radius: Distance from the crane’s center to the load’s center (in feet or meters).
-- Lift Height: Maximum hook height required (in feet or meters).
-- Project Description: A brief summary of the user's project.
-- Contact Information: Name, email, phone number, and company (optional).

MATCHING LOGIC
The system will use the input parameters to filter and recommend crane models from our catalog. The matching algorithm will consider:
-- Load capacity limits
-- Reach capabilities (radius and height)
-- Operational constraints

OUTPUT
-- A recommended crane model (or a shortlist of suitable models).
-- Key specifications of the recommended crane(s).
-- Option to download a summary or request a quote.

ADDITIONAL FEATURES
-- Responsive design for desktop and mobile.
-- Optional user account creation for saving project data.
-- Admin dashboard for managing crane data and reviewing user submissions.
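To make the matching logic concrete, here is a minimal sketch in Python. The catalog entries, model names, and field names are invented for illustration; the real parameters would come from Buckner's own crane data.

```python
from dataclasses import dataclass

@dataclass
class Crane:
    model: str
    max_capacity_tons: float    # load capacity limit
    max_radius_ft: float        # reach capability (radius)
    max_hook_height_ft: float   # reach capability (height)

# Toy catalog; real entries would come from Buckner's equipment list.
CATALOG = [
    Crane("Model-A", 300, 120, 350),
    Crane("Model-B", 600, 160, 500),
]

def recommend(load_tons, radius_ft, height_ft, catalog=CATALOG):
    """Return cranes meeting all three constraints, smallest capacity first."""
    fits = [c for c in catalog
            if c.max_capacity_tons >= load_tons
            and c.max_radius_ft >= radius_ft
            and c.max_hook_height_ft >= height_ft]
    return sorted(fits, key=lambda c: c.max_capacity_tons)
```

Note that a production matcher would likely consult full load charts, since a crane's rated capacity decreases with radius, rather than treating capacity, radius, and height as independent limits as this sketch does.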
As a new college student, I have been tasked with the undoubtedly difficult job of managing almost all facets of my life. Of course, this sounds silly at first, but it is an obstacle that almost all can relate to. For some, it may come earlier or later in life, but the same realization appears nonetheless. That realization being that you have absolute responsibility of managing your own life.
Now, food is one of the most, if not the most important part of your life. I mean, we literally can’t live without it. Your thoughts linger around the question of where you’re going to get your food from, but the answer to that question is not the end of the journey. You must then ask if that food can even be eaten. And I don’t just mean can I viably eat the probably undercooked ramen I made in the corner microwave of the kitchen on my dorm floor. I mean, am I allergic?
It can be hard for a person still learning how to balance the rest of the responsibilities in their life to also correctly manage such an important question. That’s where SafeEats comes in. For the customer trying to manage their food allergies, they simply enter in their known food allergies, and we handle the rest. After a customer enters their allergies, the allergies are cross-referenced with the menus of restaurants to give the customer an easily accessible list of the foods on the menu which they can eat.
SafeEats would be a mobile application serving both iOS and Android devices. Since the app is meant to make dining easier for the customer, it would link to other applications such as DoorDash, Postmates, or the restaurant's own online ordering system, and potentially involve a partnership with any of those organizations.
Because these companies have already mastered online ordering, SafeEats would be unsustainable if it were to have online ordering of its own. Ideally, the application only makes the items on the menu which the customer can actually eat available to them, so as to avoid confusion or any possibility that they choose an item they are allergic to.
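The core menu-filtering idea can be sketched in a few lines. The data shapes below are hypothetical; a real system would also need reliable ingredient data from restaurants and some handling of cross-contamination risk.

```python
def safe_items(menu, user_allergens):
    """Return menu items containing none of the user's allergens.

    menu maps item name -> set of allergen tags for its ingredients.
    """
    avoid = {a.lower() for a in user_allergens}
    return [item for item, allergens in menu.items()
            if not ({a.lower() for a in allergens} & avoid)]

# Toy menu for illustration only.
menu = {
    "Pad Thai": {"peanut", "egg", "soy"},
    "Green Curry": {"shellfish"},
    "Veggie Rice": set(),
}

print(safe_items(menu, ["Peanut", "Shellfish"]))  # only 'Veggie Rice' is safe
```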
Approximately 32 million people in the United States are affected by food allergies, so the problem this app solves is simple: dining accessibility for those with food allergies. Although I used a college student as an example of someone who would greatly benefit from this app, another relevant demographic is parents, for whom the safety of their children is of the utmost importance. While dealing with the pressures of raising a child, they would absolutely benefit from an app that ensures their children's safety when eating food from restaurants. Overall, the app has one goal and one goal alone: easy dining in a safe way.
This is a continuation of an earlier COMP 523 project (S'24).
SafeEats is a mobile application with a format similar to that of DoorDash or GrubHub, and will be available for both iOS and Android devices. Around the world, food allergies have become more widespread and more severe, with hospital visits stemming from food allergies increasing threefold between 1993 and 2006, and hospitals seeing the largest numbers of cases in history. SafeEats serves as a way to make dining out more accessible and safe for those with food allergies. After downloading the app, the user enters their food allergies into the application's database, saving this to their client profile for reference. The application then cross-references the user's food allergies with the ingredients on the menus of restaurants and fast food chains, showing the user the menu items which they can eat. Therefore, the issues of safe online ordering and easier accessibility for those with food allergies (and parents/guardians of children with food allergies) are solved through the application. There are no other particular systems with which SafeEats must interact.
Supporting Doc
ACCIDDA has been developing the Flexible Epidemic Modeling Pipeline (flepiMoP) for several years, with its origin during the COVID pandemic. Briefly, flepiMoP is intended to be a low-code orchestration tool for conducting a variety of infectious disease modeling analyses (e.g. inference, scenario comparison, forecasting) within a unified command line interface. The tool effectively reduces the complexity of model specification and analysis by providing a simpler language and solving the book-keeping issues associated with this sort of work.
We have several elements of that overarching project that would make for focused projects for CS / CE students. I'm not sure what level of detail to go into here, but we have an active issues list at
github.com/HopkinsIDD/flepimop and the general documentation at
https://www.flepimop.org/
I would say roughly there are 3 categories of projects that could be done, which would pull from / build on those issues:
-- software engineering tasks to do with package / library organization and continuous integration
-- design architecture modularization / plugin system development
-- particular algorithm implementation / optimization (e.g. work distribution and collection for parallelization, different stepper / inference numerics)
Stochastic models typically simulate the progress of some process, pulling from a PRNG to determine which probabilistic events happen and with what outcomes. In infectious disease models, we commonly also want to conduct scenario analysis - basically comparing outcomes across parallel universes with varying levels / kinds of interventions. When making those comparisons, we need a way to practically ensure that the same random events happen across scenarios (when the same events are actually present across scenarios), in a way which is practical for typical usage - that means a drop-in code API, appropriate randomness, and sufficient time/space performance. These criteria seem achievable with a hash-based approach - basically using event characteristics as hash inputs and the hash-code result as the PRNG deviate.
I have completed some preliminary work to demonstrate the capability, but what is now needed is a more well engineered approach and systematic benchmarking on the PRNG / hashing characteristics, as well as run time and memory performance:
https://github.com/epinowcast/hashPRNG
This is an example R-based implementation, but I expect the project to target another, lower-level language.
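As a rough illustration of the idea using only the Python standard library (the actual project would systematically benchmark hash candidates, likely including non-cryptographic hashes, and probably in a lower-level language):

```python
import hashlib

def event_uniform(seed, *event):
    """Deterministic U(0,1) deviate keyed by event identity, not draw order."""
    key = repr((seed,) + event).encode()
    # blake2b chosen for convenience here; benchmarking hash choices is
    # exactly the open question the project would address.
    digest = hashlib.blake2b(key, digest_size=8).digest()
    return int.from_bytes(digest, "big") / 2**64

# The same event draws the same deviate in every scenario it appears in,
# regardless of how many other draws each scenario makes first.
u_a = event_uniform(1, "infection", 42, 7)  # scenario A
u_b = event_uniform(1, "infection", 42, 7)  # scenario B
assert u_a == u_b
```

Because the deviate depends only on the event's characteristics (and a global seed), parallel universes stay synchronized on shared events while still diverging where interventions differ.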
For our own personal use, a team of us built a system for easily managing and creating CVs based on data contained in YAML files. This has proved very useful to its small (~3 people) user base, as they need to present the same information for various institutions, funding agencies, etc. in different forms. However, the current system is a bit unwieldy for adding and managing citations and information, particularly as CVs move to the 100+ paper range; it relies on a mishmash of code and tools (e.g., there is code in Perl, Python, R, etc.) and lacks any unified or quality user interface beyond a simple one for reference management written using AI tools. The goal of this project would be to create a more unified and usable version of this system that would be suitable for adoption by a wider user base and more dynamic and flexible in the creation of new templates and the creation of specialized biosketches and CVs for targeted purposes.
The features it would be good to see developed in any project would be (note that it would not be necessary to do all for this to be a success):
Our initial implementation of this is available at
https://github.com/jlessler/CVBuilder
and I have uploaded examples of current output and input:
example CV example YAML for CV example YAML for Refs
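To give a flavor of what a unified input format might look like, here is a hypothetical YAML entry; the actual CVBuilder schema in the examples above may differ:

```yaml
# Hypothetical shape, for illustration only.
papers:
  - id: lessler2021
    title: "Example Paper Title"
    journal: "Journal of Examples"
    year: 2021
    tags: [nih-biosketch, full-cv]
templates:
  nih-biosketch:
    include_tags: [nih-biosketch]
    max_papers: 10
```

Tag-driven selection like this is one way a template system could generate targeted biosketches and specialized CVs from a single source of truth.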
VCIC (Venture Capital Investment Competition) is a competition. Judges score teams through multiple rounds. We already had a COMP 523 team set up online scoring that works great using Google Sheets. This project would take that process up another level, especially aesthetically; that is, make it look better. It functions great as is.
I've attached an example Google Sheet.
It is shared with
vcic-judge-voting-sheets-api@vcic-214312.iam.gserviceaccount.com.
Judges use this link to submit their votes:
https://www.vcic.org/voting/?sheetId=1xVbxUTt-9-xI-g5Z5aLbGjttGAHqflzKtfLp8r1lrZg
and they are tallied on the attached Google Sheet. Thanks!
The system processes media data from television, radio, web-streaming TV, podcasts, social media, blogs, newspapers, and magazines, and generates media results reports from the front end for both the general public and subscribers; it also provides administration of the system.
This explains the projects that need to be worked on.
Description
Let me know if you have any questions. Henry and I would like each of you to email us a resume with email addresses, phone numbers, your GitHub branch, and work and coding experience. We will help set up the work environment. After you have studied the code developed by other capstone students, we would like you to make a plan and carry it out to help make the project deliverable and presentable.
The frontend has to be written in Angular 19 ( https://angular.dev/ ). This document explains the backend-processed data, what else is needed, and the improvements required to make it better: https://docs.google.com/document/d/1jiaOhnZjKJamTMRmap_-qzXr794HW4XYkXEUT0DfMK8/edit?usp=sharing
Making the project presentable will take working with other capstone teams. It will also require merging the code you worked on into Henry's DigiClips GitHub master repository branch, with comments and notes linked to the README and a version number; then putting the frontend up on the digi-frontend computer, doing a production build, and deploying it on AWS Amazon Lightsail so that it links to the processed media data on our local computers.
Her Night is more than just an app. It is a global sisterhood built on fun, inclusivity, and mental wellness. The platform is designed to give women a sense of community and belonging, no matter their location, background, or stage of life. Through themed “Her Nights,” users can come together to share stories, celebrate wins, and connect over lighthearted topics such as self-care, friendships, relationships, and personal growth. These nightly gatherings spark joy and laughter while also creating the foundation for lasting connections.
At the same time, Her Night goes beyond entertainment. The app integrates mental health features that make it a safe and supportive space. Wellness nights encourage open conversations about stress, balance, and resilience. Users can complete mood check-ins to track their well-being, share experiences anonymously, or access curated resources that connect them to professional support when needed. Sub-groups, or “squads,” allow women to bond more deeply around shared experiences, whether it is managing college life, navigating career stress, or exploring body positivity.
The app also reinforces positivity through interactive features like affirmations, hype notes from peers, and gamified self-care challenges that reward consistency with encouragement and community recognition. By design, Her Night avoids the comparison-driven culture of traditional social media and instead fosters a culture of support, inclusivity, and authenticity. Our vision is to transform what might seem like a casual nightly ritual into a global movement. Her Night is a safe corner of the internet where women of all ages can laugh, share, support one another, and thrive together. It is a place to combat loneliness, reduce stigma around mental health, and build a generation of women connected not just by technology, but by empathy, resilience, and sisterhood.
As women in our early twenties, navigating life away from home can often feel isolating, overwhelming, and at times lonely. This project comes from our own experiences of searching for community, belonging, and support during such an important stage of life. With Her Night, our goal is to create a safe and uplifting space where women can laugh together, share openly, and build connections that remind them they are not alone. We welcome any feedback, guidance, or questions that can help us strengthen our vision and make it as impactful as possible.
SPFMatch is a web application designed to help users find the most effective sunscreen for their specific skin tone, skin type, and lifestyle needs. While sunscreen is essential for skin cancer prevention and maintaining healthy skin, many people struggle to choose a product that provides adequate protection while suiting their unique needs. This challenge is especially important for individuals with darker skin tones, who are often underserved in dermatological resources, face marketing bias, and may believe sunscreen is unnecessary for them.
This website will guide users through a short questionnaire covering their skin tone (using a visual scale such as the Fitzpatrick classification), skin type (dry, oily, combination, sensitive), lifestyle factors (daily sun exposure, time spent outdoors, activity level), and preferences (mineral vs. chemical sunscreen, fragrance-free, texture). Based on their responses, SPFMatch will generate tailored sunscreen recommendations drawn from a curated, dermatologist-reviewed product database, backed by research on SPF efficacy and skin safety.
Core features will include:
-- AI-powered recommendations based on dermatologists and research
-- Sunscreen recommendations tailored to all skin tones
-- Reapplication reminders based on UV index and user activity

The target audience includes:
-- Individuals looking for personalized sunscreen recommendations
-- People of color who lack targeted sun safety recommendations
-- Health-conscious users seeking evidence-based skincare guidance
Add to this the ability to glean (or confirm) input questionnaire data via image analysis of camera image(s) of a user's face and other skin areas (arm, hand, etc.).
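As a sketch of how questionnaire responses might map to recommendations, here is a toy scoring function. The product fields, weights, and thresholds are invented for illustration and are not the dermatologist-reviewed database described above.

```python
# Toy product records; a real system would query the curated database.
PRODUCTS = [
    {"name": "Mineral SPF 50", "type": "mineral", "spf": 50,
     "white_cast": True, "fragrance_free": True},
    {"name": "Chemical SPF 30", "type": "chemical", "spf": 30,
     "white_cast": False, "fragrance_free": False},
]

def rank(products, fitzpatrick, prefers, needs_fragrance_free, min_spf=30):
    """Rank products for a user; higher score = better fit."""
    scored = []
    for p in products:
        if p["spf"] < min_spf:
            continue
        if needs_fragrance_free and not p["fragrance_free"]:
            continue
        score = 0
        if p["type"] == prefers:           # mineral vs. chemical preference
            score += 2
        if fitzpatrick >= 4 and p["white_cast"]:
            score -= 3                     # deprioritize white-cast formulas
        scored.append((score, p["name"]))
    return [name for score, name in sorted(scored, reverse=True)]
```

The white-cast penalty illustrates how the tool could serve users with darker skin tones (Fitzpatrick IV-VI), who are often poorly served by default recommendations.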
sample data
This project is with an industrial partner of our Comp Sci department.
It would be a significant industrial experience for the team that selects it.
The project is described in
this document.
This diagram
further explains the project structure.
The goal of this project is to develop a set of programs for running psychological experiments as web-based class-room demonstrations for Sensation and Perception (NSCI 225). A valuable aspect of this class is for students to experience phenomena of perception so that they can better understand the nature of the underlying cognitive and neural processes. Giving students this experience is easy with visual and auditory illusions. A stimulus is presented to the students, and they are asked what they see (or hear); discrepancies between their subjective experiences and a physical description of the stimulus are then discussed. Such demonstrations are very valuable (and sometimes entertaining) but can be used for only a small subset of topics. With students now having laptops in class, the range of topics that can be taught using demonstrations is greatly expanded.
I will describe a simple classroom experiment that I have done to illustrate what I mean by an in-class demonstration and to describe some of the challenges of conducting one in the absence of custom software. In this experiment, each student was presented with a series of trials consisting of photos of two faces shown on their laptop screens. For half of the trials, the display consisted of different photos of the same person, and for the other half of the trials, the display consisted of photos of different people. The students' task was to judge whether the photos in a pair were of the same person or of different people. The design of the experiment was based on a published study that provided all the pairs of faces. The experiment was designed to make two points: that people are on average surprisingly poor at recognizing the faces of people they don't know, and that there is tremendous variability across people in the ability to recognize faces. Both points were made: the median of the students' accuracy was 85% (chance was 50%) and individuals' accuracy scores ranged from 58% to 100%. These results were very similar to those of the published paper. This experiment was successfully performed using Google Forms, but setting up even this simple experiment was very tedious. The images had to be copied and pasted into Google Forms, which took over an hour and a half. The results had to be downloaded from Google Forms and analyzed using prepared scripts, a process requiring a break before they could be displayed to the class.
I envision that the student team would write code (most likely in JavaScript) for three or four web-based experiments: (1) a simple experiment consisting of a version of the face recognition experiment described above, (2) a task such as visual search where the system must accurately record the response and the time to make the response, (3) a task requiring precise control of the duration of a visual stimulus (limited only by the screen refresh rate) in order to determine thresholds of recognition, and (4) possibly an experiment in which sound is presented.
The programs should as much as possible reuse a set of functions that provide capabilities (like those listed below) making it easy to extend these basic programs to other classroom experiments in the future:
-- Data and trial information would be saved from all the students in the class in a single file that could be accessed by the instructor (e.g., on OneDrive or a UNC server).
-- Individual results would be calculated and presented to individual students after completion of the experiment.
-- Parameters that are common across experiments would be read from a file (e.g., number of trials, types of randomizations, use of a fixation point at the start of a trial, duration of fixation point, etc.).
-- Where needed, stimulus content would be read in using lists of file names so that there would be no need for copying and pasting.
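To illustrate the kind of reusable helpers intended, here is a sketch of trial randomization and per-student scoring for the face-recognition experiment. It is written in Python for brevity; the real versions would be JavaScript functions shared across experiments.

```python
import random

def make_trials(same_pairs, different_pairs, rng=None):
    """Build a shuffled, condition-tagged trial list for one student."""
    rng = rng or random.Random()
    trials = ([{"pair": p, "condition": "same"} for p in same_pairs] +
              [{"pair": p, "condition": "different"} for p in different_pairs])
    rng.shuffle(trials)
    return trials

def accuracy(responses, trials):
    """Per-student accuracy, shown to the student after the last trial."""
    correct = sum(r == t["condition"] for r, t in zip(responses, trials))
    return correct / len(trials)
```

Passing an explicit seeded RNG is one way to make randomizations reproducible across students or sessions when the parameter file calls for it.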
Our project is a browser-based “grid-builder” that lets social-science researchers design and deploy spatial survey experiments without writing code. In the web interface a researcher chooses the grid size (e.g., 5 × 3), defines attribute lists for each cell (race, gun ownership, political party, etc.), and selects experimental options such as scenario randomization or carry-forward logic across tasks. The tool then renders a responsive preview (desktop and mobile) identical to what participants will later see. Behind the scenes it stores the design as JSON and can regenerate the exact survey any time, ensuring full reproducibility.
When the researcher clicks “Export” the app compiles everything (HTML + CSS + JavaScript, embedded-data field names, and basic validation code) into a package that can be (a) inserted directly into Qualtrics via a single question, or (b) served as a standalone web experiment that posts JSON results to an endpoint. Thus the student team would build a React/TypeScript front end, a lightweight Node/Express back end for user accounts and design storage, and a small build pipeline that bundles the exported files. No native mobile build is needed because the interface is fully responsive; testing shows it works down to a 360 px-wide phone in landscape.
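A sketch of the reproducibility idea: the stored JSON design plus a seed regenerates identical survey content. The schema below is hypothetical, and the sketch is in Python for illustration even though the actual back end would be Node/Express.

```python
import json
import random

# Hypothetical design record; the real schema is up to the team.
design = {
    "grid": {"rows": 3, "cols": 5},
    "attributes": {"race": ["A", "B"], "party": ["Dem", "Rep", "Ind"]},
    "options": {"randomize_scenarios": True, "carry_forward": False},
}

def assign_cells(design, seed):
    """Deterministically fill the grid from the stored design plus a seed,
    so the exact survey can be regenerated any time."""
    rng = random.Random(seed)
    rows, cols = design["grid"]["rows"], design["grid"]["cols"]
    return [{k: rng.choice(v) for k, v in design["attributes"].items()}
            for _ in range(rows * cols)]

# Round-tripping the design through JSON and reusing the seed yields
# identical survey content, which is the reproducibility guarantee.
assert assign_cells(json.loads(json.dumps(design)), 42) == assign_cells(design, 42)
```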
The software serves policymakers, academics, and other researchers (including undergraduates) who study neighborhoods, communities, layouts, or any context where WHERE an attribute is placed matters as much as WHAT the attribute is. Today, such researchers must hand-code grids for each study, which is time-consuming, error-prone, and hard to share. Our tool makes rigorous spatial experiments as easy as creating a Qualtrics multiple-choice question, lowering barriers to entry and encouraging transparent, open science.
I think this project is interesting because it is about systematizing the generation of code based on both UI and logical specifications.
abstracts
This would be an extension of the work completed last semester, developing AI methods to scrape, clean, organize, and link foods and drinks in order to ultimately build a personalized food shopping platform that makes it easier for consumers to make healthy choices. We are working with the Eshleman Kairos program on potential commercial opportunities for this app.
Lola's 2.0 will be a grocery platform that uses artificial intelligence to create a customized food environment promoting healthy, sustainable foods and drinks tailored to customers' budgets and taste preferences.
Research to design and evaluate food policy relies on large datasets of food products which include many attributes (price, nutrients, ingredients, marketing elements, environmental sustainability); historically, processing, categorizing, and linking this data has been enormously time consuming and burdensome, as well as ultimately limited by the small number of attributes that can be considered at one time. This narrow focus on a single variable at a time has limited our ability to fully evaluate the impacts of implemented policies as well as design maximally effective policies, since being able to target the attributes and their combinations that consumers care about most would help make nudges, interventions, and policies more effective (e.g. incorporating price or front-of-package marketing elements into our models for how consumers make decisions in response to labels).
To address these gaps, we propose to develop an artificial intelligence (AI) model for food and nutrition data that will allow us to 1) expedite the processing time to review and categorize products for policy evaluation; and 2) create an AI-powered, personalized version of our experimental online food store (Lola’s 2.0) that will allow us to test and develop policies that are responsive to a variety of factors (including but not limited to nutrition, sustainability, and price), depending on a particular country’s policy and advocacy goals. We foresee potential applications in current focus countries and future additional countries. Together, this new AI approach for collecting, organizing, and categorizing food and nutrition data and testing policy design via Lola’s 2.0 will allow us to have a greater policy impact.
The client is currently doing research with a commercial system, "Lola's 1.0". This proposal is to develop a new system from scratch and make it customizable. The first step would be creating the data infrastructure for all the food and beverage products to categorize them by nutrition, price, brand, and other attributes (Aim 1 in the attached proposal) so that those attributes can be used to create a customizable food shopping environment (Aim 2).
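A toy sketch of the Aim 1 categorization step. The field names are invented; the 22.5 g/100 g figure is the UK front-of-pack "high sugar" cutoff for foods, used here only for illustration.

```python
def categorize(product):
    """Attach attribute tags that can later drive filtering and nudges
    in the customizable store front end."""
    tags = set()
    if product["sugar_g_per_100g"] > 22.5:   # UK front-of-pack 'high' cutoff
        tags.add("high-sugar")
    if product["price_per_unit"] <= 2.00:
        tags.add("budget")
    if product.get("certified_sustainable"):
        tags.add("sustainable")
    return tags

# Invented example product.
item = {"name": "Choco Puffs", "sugar_g_per_100g": 30.0,
        "price_per_unit": 1.50, "certified_sustainable": False}
```

Rule-based tags like these could seed the AI model with labeled training data, and swapping the thresholds would let the same pipeline serve different countries' policy goals.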
The client has Amazon Web Services credits that could be used to support the project. Also, I recognize the idea is ambitious; it would be fabulous even if we only did part of it (for example, creating an app that promotes healthy, sustainable selections within a single food category, or forgoing the "customizable" aspect of it). I'm flexible! Thanks for the opportunity to submit this idea.
full project description
This project will work with an extensive Web site that includes information on an existing ecologically significant conservation reserve ( chicorylane.com ) and maps it into a dynamic document system to be developed by the project. The doc system is a node/network-based structure, having nodes that may contain pages of mixed-media content that can be accessed through a conventional Web browser.
Nodes will be recursive, in the sense of nodes containing other node structures as content. Innovations will also include an automatic feature to feed data incrementally into some to-be-identified existing LLM system as well as explore techniques for incorporating relational database information (ecological attributes concerning plants in the reserve) into the LLM.
Earlier work on several similar data models is available.
RDMC possesses an outdated Selenium suite which was a step towards automated testing of the current, JSF-based Dataverse front-end.
The student team would rewrite the test suite using a modern browser automation framework such as Playwright, with the goal of producing a portable and reusable testing framework that validates core Dataverse functionality. We can provide detailed requirements, existing test cases, and technical guidance as needed.
The final code should work on multiple Dataverse installations and as such would be released as open source code through the Global Dataverse Community Consortium ( https://github.com/gdcc ).
Existing, installation-specific code is available at https://github.com/uncch-rdmc/dataverse-automated-user-tests
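As a sketch of what a Playwright-based rewrite might look like in Python: the selector and assertion in the smoke test below are placeholders rather than actual Dataverse markup, though the dataset URL pattern is the standard one shared across installations.

```python
def dataset_url(base_url, persistent_id):
    """Dataset landing-page URL; the same pattern works on any installation,
    which is what makes a portable test suite possible."""
    return f"{base_url.rstrip('/')}/dataset.xhtml?persistentId={persistent_id}"

def smoke_test(base_url):
    """Open the installation's home page and check it loads."""
    # Imported lazily so the pure helpers stay usable without Playwright.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as pw:
        browser = pw.chromium.launch()
        page = browser.new_page()
        page.goto(base_url)
        assert "Dataverse" in page.title()   # placeholder assertion
        browser.close()
```

Parameterizing every test on `base_url` (and on configured credentials) is the main change needed to turn the existing installation-specific Selenium suite into a reusable framework.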
NC ASK is a funded research project I am leading to address a critical need I see every day in my work. As a developmental pediatrician and clinical informatician who treats children with Autism Spectrum Disorder (ASD), I witness firsthand the immense challenges families and providers face with information overload and the complexities of navigating healthcare, insurance, and educational systems. NC ASK is an educational tool driven by an LLM application that provides clear, tailored, and actionable guidance, cutting through the noise to deliver reliable information. To bring this vision to life, we are seeking a student team to build the foundational software that will connect our model to its users. The primary ask for the COMP 523 class is to design and develop the complete backend service and the front-end web client for the NC ASK tool.
The project will be developed within a modern, enterprise-grade environment on the Red Hat OpenShift Platform. The core task for the student team is to build a scalable backend service that exposes the NC ASK model's capabilities through a secure RESTful API. This service will also include a database to capture de-identified metadata and user feedback, which is critical for our research and for improving the model over time. This project has the real-world potential to support thousands of families across North Carolina, and I would welcome some motivated Tar Heels to join us in this important work.
I am working with the NC TraCS team to help construct the RAG pipeline. While I welcome any interested student to participate in that part of the project, it certainly would never be the primary responsibility.
PPT
FANz PLAY, “The Game Within A Game!” FANz PLAY is a free, interactive mobile sports trivia app that transforms fans from spectators into active participants going against their opponents, the other team’s fans, LIVE on game day! Yes, that’s FANs versus FANs!
Every sports fan loves a great rivalry, and there are many to speak of: Texas A&M versus Texas football, LSU versus South Carolina women’s basketball, and the list could not begin unless we add one of the greatest: Duke versus North Carolina men’s basketball. That rivalry began on January 24, 1920, when the University of North Carolina defeated Trinity College (now Duke University) 36–25. As of February 1, 2025, the teams have faced each other 263+ times. This rivalry is widely regarded as one of the most intense in U.S. sports, fueled by the proximity of the universities (approximately ten miles apart) and their consistent basketball excellence.
These are only a few of the greatest team rivalries in the history of sports—rivalries where winning fans cheer and brag for days and weeks, eagerly awaiting the next matchup. Fans support their teams through snow, rain, or shine, but no matter how passionate they are, the outcome is ultimately determined by the coaches and the athletes. This can be frustrating for fans who want to be more involved in the game and enjoy the bragging rights that come with victory. What if these die-hard fans had the chance to play for their team against the opposing team’s fans using their mobile phones?
More about FANzPLAY
The team will develop a library of AI modules to support the launch of a new service: a DIY + Coaching Model
-- Access to a library of AI modules (five today, expanding to nine by December), covering role inference, stage inference, engagement scoring, and journey visualization.
-- Enterprises pay a monthly IP subscription fee, reducing time and cost of custom builds by up to 90%.
-- Embedded coaching (Jupyter notebooks, advisory sessions, labs) accelerates adoption.
A Github Repo will be the location of the code / AI models. The GitHub Repo will include the following best practices (but not limited to):
-- Multi-layered documentation
   -- Top-level README: what the repo does, quickstart in <5 mins.
   -- Tutorials/notebooks: walk through use cases (e.g., “first model on buying center data”).
   -- API reference: for advanced users.
-- Examples folder
   -- Include real-world enterprise B2B use cases (e.g., sales cycle simulation, account scoring).
   -- Provide ready-to-run Jupyter notebooks with comments.
-- Config-driven workflows
   -- Let users modify YAML/JSON configs instead of editing Python.
-- Self-service with coaching hooks
   -- Embed notes like: “If you get stuck, check /docs/troubleshooting.md or schedule a coaching call.”
-- Testing & reproducibility
   -- Provide Dockerfile, environment.yml, or requirements.txt.
   -- Include simple test scripts so users can confirm their setup works.
specs
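For the config-driven workflow practice, a run config might look something like this; the module name and fields are invented for illustration:

```yaml
# Hypothetical run config: users edit this instead of editing Python.
model: role_inference
input: data/accounts.csv
output: results/roles.json
params:
  min_confidence: 0.7
  max_roles_per_account: 3
```

Keeping all tunable behavior in a file like this is what lets enterprise users adopt the modules through the DIY + coaching model without touching library code.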
At the Research Data Management Core, we manage several applications for research data, including REDCap and Harvard Dataverse. These applications have a host of users that interact with them, ranging from PIs to researchers. As with any application with a user base, these users occasionally run into issues or have questions that need answers, leading them to submit tickets to our team.
Submitted tickets are not public-facing, and users can only access the tickets they report themselves. While some tickets are unique and require human attention, many are duplicates: requests or reports that users make frequently and independently of each other. As a result, our ticketing team is overwhelmed and overworked, spending all working hours responding to tickets.
We request an AI chat bot that can be user-facing and solve simple ticket requests, deriving solutions from our large data set of tickets. It can be based on any simple LLM, but it must be further trained on our ticket data so that it can accurately solve user issues, elevating a request to a ticket only if a simple solution is not found.
It should be able to interact with users, submit tickets on issue elevation, and potentially perform rudimentary tasks for our users.
It should also be horizontally scalable, allowing any number of users to interact with it as ephemeral instances.
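The answer-or-escalate flow described above can be sketched with simple retrieval over resolved tickets: find the most similar historical ticket, answer from it if similarity is high, and escalate otherwise. The sample tickets and the 0.5 cutoff are invented for illustration; a real system would use an LLM with retrieval over the full ticket archive.

```python
# Hedged sketch of the escalation logic: retrieve the most similar
# resolved ticket and answer from it, escalating when similarity is low.
def jaccard(a, b):
    """Token-overlap similarity between two strings."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

RESOLVED_TICKETS = [  # invented example data
    ("how do i reset my redcap password",
     "Use the 'Forgot Password' link on the REDCap login page."),
    ("cannot upload dataset to dataverse",
     "Check that the file is under the upload size limit."),
]

def answer_or_escalate(question, threshold=0.5):
    best = max(RESOLVED_TICKETS, key=lambda t: jaccard(question, t[0]))
    if jaccard(question, best[0]) >= threshold:
        return {"action": "answer", "reply": best[1]}
    return {"action": "escalate", "reply": "Submitting a ticket for human review."}

print(answer_or_escalate("how do I reset my REDCap password"))
print(answer_or_escalate("API token rotation for federated study"))
```

The threshold check is exactly the "only elevate if no simple solution is found" rule: a low best-match score triggers ticket submission instead of an automated reply.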
The idea is to create a small, educational simulation based on the Marine Corps Small Unit Leadership Evaluation (SULE). To be clear up front: this is not an official Marine Corps or ROTC project—there’s no endorsement, no connection to the Department of Defense, and no use of restricted content. It’s just a student-led project aimed at helping people like me and my peers prepare for leadership training. The thing is --- I have a lot of peers.
The majority of Marine Corps officers (around 70–75%) commission through either ROTC or Officer Candidate School (OCS). Both paths involve evaluations like SULE that test leadership, decision-making, and tactical thinking under pressure. Having a tool like this would be incredibly useful for students trying to study and practice outside of official training hours. And an enormously interesting point on anyone's resume --- how many people can truthfully say they've helped create a simulation that has helped thousands of people prepare for OCS and real combat down the line?
The project wouldn’t try to be a polished, high-graphics game. Instead, it would focus
on logic, decision-making, and scenario branching. A few possible features:
-- A top-down map where users move fireteams around and make tactical decisions.
-- Prompts for writing or selecting parts of a five-paragraph order (SMEAC).
-- Stress elements like time limits or “friction” events to mimic pressure.
-- Simple feedback to show how choices play out.
-- (Optional) multiplayer or co-op to practice small unit coordination.
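The scenario-branching idea above can be sketched as a small decision graph where each node holds a prompt, the choices leading out of it, and feedback at terminal nodes. Every node name, prompt, and outcome below is invented for illustration; none of it is official SULE content:

```python
# Toy scenario graph for branching decisions. All scenario text is invented.
SCENARIO = {
    "start": {
        "prompt": "Your fireteam takes contact from the treeline.",
        "choices": {"assault": "assault", "suppress": "suppress"},
    },
    "assault": {
        "prompt": "You close distance without suppression and take a casualty.",
        "choices": {},
        "feedback": "Consider establishing a base of fire first.",
    },
    "suppress": {
        "prompt": "Suppression set; the objective is taken with no casualties.",
        "choices": {},
        "feedback": "Good use of fire and maneuver.",
    },
}

def play(decisions):
    """Walk the graph with a scripted list of decisions; return feedback."""
    node = "start"
    for choice in decisions:
        node = SCENARIO[node]["choices"][choice]
    return SCENARIO[node].get("feedback", "")

print(play(["suppress"]))
```

Time limits and "friction" events could be layered on by randomly rewriting a node's choices mid-run, which keeps the pressure mechanics in data rather than code.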
For computer science students, this is a chance to build something that looks great on a portfolio—combining game mechanics, decision logic, and UI design. For ROTC/OCS students, it could be a practical way to rehearse leadership challenges before the real thing.
Again, this would be entirely educational and unofficial, created by students for students.
How can LLM/AI be used to enhance the education effectiveness, and learn from trainees?
The Pediatric Blue Book (PBB) is envisioned as a comprehensive web application designed to support pediatric clinical dietitians. Primarily web-based, with potential for mobile access, the platform will be available throughout clinical environments, serving both inpatient and outpatient settings. Notably, PBB operates as a standalone system: it does not store patient data nor interface with other systems (though integration with Epic is a future aspiration). Both user and administrative interfaces are required.
Challenges in Pediatric Nutrition Practice
Currently, pediatric dietitians face significant challenges due to fragmented resources, including outdated print manuals and dispersed online references. This fragmentation makes it difficult for clinicians to quickly locate and apply accurate information to patient care planning. Moreover, calculating nutritional needs is a crucial but often burdensome task, typically relying on hand calculations, rough estimations, and generic formulas. These shortcuts may not sufficiently account for the unique requirements of individual patients, resulting in possible delays or inaccuracies in care. Such limitations can impact the quality, safety, and recovery of nutritionally vulnerable infants and children.
Proposed Solution
The Pediatric Blue Book aims to streamline information access and improve the precision of nutritional assessments by integrating evidence-based calculators and digital tools into a single, user-friendly system. By consolidating essential resources, PBB will enable dietitians to work more efficiently, minimize errors, reduce documentation fatigue, and personalize nutrition plans. Ultimately, this automation enhances care for pediatric patients with complex health needs, helping them to thrive and grow.
User Interface Features
The application’s user interface comprises three main sections:
-- First, dietitians can quickly calculate nutritional needs based on patient
age, weight, and length/height (metric units) and view the Dietary Reference
Intakes (DRI) for the child. DRIs will include approximately 33 nutrients,
covering patients from preterm infants to age 21. Key growth measures,
including ideal body weight and catch-up growth, are also calculated.
-- Second, users can generate custom formula feeding plans utilizing up to three
different formulas (powder or liquid), water or breastmilk, and two modular
additives (powder or liquid). The nutrient content is instantly calculated as
changes are made to the recipe. Age- and sex-specific DRIs are displayed,
allowing the dietitian to evaluate the appropriateness of the recipe.
Visual alerts are provided if any single DRI is less than 67%. The feeding
plan recipe displays the volume of each product, total recipe volume, and
calories per milliliter. These plans, excluding patient information, can be
easily printed.
-- Third, users can look up ingredients and key nutrition information about any
formula or product in the database. This feature will help with clinical
decision making.
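The recipe math described above can be sketched as follows: sum nutrients across components, compute calories per milliliter, and flag any nutrient below 67% of its DRI. All product data and DRI values below are invented placeholders, not clinical data.

```python
# Sketch of feeding-plan math with the 67% DRI alert. All numbers invented.
RECIPE = [
    # (name, volume_mL, kcal_per_mL, protein_g_per_mL)
    ("formula_A", 90.0, 0.67, 0.014),
    ("breastmilk", 60.0, 0.65, 0.010),
]
DRI = {"kcal": 160.0, "protein_g": 2.0}  # invented daily targets

def summarize(recipe, dri, alert_fraction=0.67):
    total_ml = sum(v for _, v, _, _ in recipe)
    totals = {
        "kcal": sum(v * k for _, v, k, _ in recipe),
        "protein_g": sum(v * p for _, v, _, p in recipe),
    }
    # Visual-alert rule: flag any nutrient under 67% of its DRI target.
    alerts = [n for n, t in totals.items() if t < alert_fraction * dri[n]]
    return {
        "total_mL": total_ml,
        "kcal_per_mL": totals["kcal"] / total_ml,
        "totals": totals,
        "alerts": alerts,
    }

print(summarize(RECIPE, DRI))
```

Because the summary is a pure function of the recipe list, recomputing it on every edit gives the "instant" nutrient feedback the interface calls for.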
This streamlined workflow empowers dietitians to confidently refine and optimize nutrition plans, leading to improved patient outcomes.
The administrative interface functions as a secure database for formula management, quality control, and auditing. It requires password protection and supports up to five administrators. Building on previous formula databases, this component will house roughly 300 formula products and 100 data fields, serving as a robust foundation. Products will be updated regularly, including adding new products and retiring those no longer on the market.
The following projects require a team of 4 members who have taken COMP 423, taught by Kris Jordan.
This project is an extension to the existing UNC Computer Science Experience Labs' TA Application and Hiring system. Every semester our department hires around 200 undergraduate and graduate TAs across all courses, representing approximately $700,000 in salary stipends. At present, students apply to become a TA through the CSXL web site, instructors prioritize their hiring selections, a small committee matches selections based on course enrollments and resources available, and departmental staff process hiring actions in UNC's own HR and Payroll system.
After three semesters of active usage, there are many ideas for improving this system for each of the stakeholders involved: students, instructional faculty, the matching committee, and HR staff. There are also ideas for improvements and new functions that would help department administration better understand and plan future funding needs.
The technology stack for the CSXL web site is taught in COMP 423 (formerly 590): Foundations of Software Engineering. It uses FastAPI on the backend, PostgreSQL as the data store, and Angular on the front end. To take this project on, a group should have already completed COMP 423/590.
This project is focused on business process improvement, is critical to our departmental operations, and is well suited for a team that has one or more members who are double majoring in business or entrepreneurship, or who have an interest in organizational operations. After completing this project, given that the system manages roughly one and a half million dollars in annual spending, students will walk away with an impressive resume line item and talking point.
Modern AI agents are capable of carrying out step-by-step tasks that call out to "tools", which are functions written by software engineers, to carry out time-intensive tasks. This project proposal is to develop an agent system, integrating with the OpenAI API, that can investigate a student's project code in a project-based learning course like COMP 423.
By "investigate", the goal is to wade through a student's project commits and code looking for evidence of specific qualities, or rubric items, expected as part of the assignment. For example, if we are looking for 100% test coverage of a specific module, the agent system could invoke a tool to run the unit test software on the student's codebase in order to verify coverage. Alternatively, the agent could check whether a specific concept, such as coding standards for FastAPI route definitions or Pydantic models, is present, or identify counterexamples. The desired outcome is to automate some of the routine checks TAs currently perform so that TAs can focus on giving higher-level project feedback and guidance.
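The tool pattern described above can be sketched without the API itself: the model emits a tool call by name with arguments, and a dispatcher runs the matching Python function. The `run_test_coverage` tool, its module names, and its return values are hypothetical; a real tool would shell out to the student's test suite, and a real build would define tool schemas with Pydantic for OpenAI function calling.

```python
# Sketch of agent tool dispatch. The tool and its results are hypothetical;
# a real tool would run pytest/coverage on the student's codebase.
import json

def run_test_coverage(module: str) -> dict:
    """Pretend coverage check; a real tool would run the test suite."""
    fake_results = {"backend/api/routes.py": 100, "backend/models.py": 82}
    return {"module": module, "coverage_pct": fake_results.get(module, 0)}

TOOLS = {"run_test_coverage": run_test_coverage}

def dispatch(tool_call_json: str) -> dict:
    """Execute a tool call shaped like a name-plus-arguments JSON payload."""
    call = json.loads(tool_call_json)
    return TOOLS[call["name"]](**call["arguments"])

# When checking a rubric item, the model would emit something like:
result = dispatch(
    '{"name": "run_test_coverage",'
    ' "arguments": {"module": "backend/api/routes.py"}}'
)
print(result)
```

Keeping tools as plain Python functions behind a name-keyed registry makes it easy to add new rubric checks without touching the agent loop.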
Ideally this project leads to a tool that is not overfit to COMP 423 projects and could be used in many other computer science courses. However, starting with specific examples in COMP 423-sized codebases offers a focused direction that is challenging enough to build confidence that other courses, such as COMP 110, 301, or 426, could also benefit from it.
This is an experimental project and may become an official CSXL-hosted and supported tool. Given that today's state-of-the-art agentic systems commonly utilize Python, Pydantic, and FastAPI, this project is suited for a team of students who have already successfully completed COMP 423/590.
Even though this seems greenfield, it's somewhat ambitious and has specific stack expectations (Pydantic/OpenAI/FastAPI). The most suitable teams would be ones where students took the course this past Spring (we used the OpenAI API for the first time then); however, as long as everyone on the team has completed 423, it should be fine.