Titles link to client presentations, if available.
Campus representatives and advisory members asked the Department of Transportation and Parking to develop a mobile application to help community members navigate parking on campus. Changes in operations and campus demands, such as the weeknight parking program, academic mission needs, and increased event parking needs, prompted the request for a comprehensive and interactive mechanism to communicate with customers and provide information.
Users express interest in knowing where permits are valid, what kind of permit is valid at different locations, and when/how long the permit is valid at a given time. Essentially, users want an interactive and easy-to-use app that indicates where to legally park on campus.
Primary points of interest include:
Allow administrators the ability to:
T&P would like the mobile app to be accessible through all mobile operating systems (including iOS and Android).
Transportation currently uses the following apps that may help with enhancing the T&P mobile app:
I'm a graduate student working with Professor Dewan on a difficulty-resolution system. The overall goal is to 1) detect when a user is experiencing difficulty programming in their IDE, 2) determine what kind of difficulty the user is experiencing, 3) determine an appropriate response and deliver it, and 4) determine whether the response was helpful.
For the scope of the class, I believe the best option is having the students build the system that keeps track of user searches and the user's document state. They would need to create some type of Chrome plugin that gives them access to real-time searches being done by the user. We have working code that replays user logs to recreate the user's document state at any point in time. Using this, the students would need to write code that pulls keywords out of the document state (ideally structured so that different mappings can be plugged in depending on research results; they will not be responsible for that part). They would use some distance algorithm to search a database for a document with a similar state based on those keywords. If one exists, they will update that entry with the new search data, adding it to a list of URLs. If not, they will create a new entry and add it to the database. I think they will add entries every time there is a search (or string of searches; there could be a heuristic for how long to wait before deciding the user is done searching), and constantly make recommendations live based on the closest known entry (the final system for my research will obviously not make constant predictions, but that seems out of scope for their project).
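The search-and-update flow described above could be sketched roughly as follows. Everything here is an illustrative assumption rather than existing project code: the keyword extractor, the Jaccard distance, the similarity threshold, and the in-memory list standing in for the database are all placeholders the team would replace.

```python
# Sketch of the search-logging flow. All names (extract_keywords,
# jaccard_distance, the in-memory "db" list) are illustrative
# assumptions, not part of the existing replay code.

def extract_keywords(document_text):
    # Placeholder keyword mapping; the real system would make this
    # pluggable so different mappings can be swapped in.
    words = document_text.lower().split()
    return {w for w in words if len(w) > 3}

def jaccard_distance(a, b):
    # One possible distance between keyword sets; also pluggable.
    if not a and not b:
        return 0.0
    return 1.0 - len(a & b) / len(a | b)

def record_search(db, keywords, urls, threshold=0.5):
    """Find the closest stored document state; update it if it is
    close enough, otherwise create a new entry."""
    best = min(db, key=lambda e: jaccard_distance(keywords, e["keywords"]),
               default=None)
    if best is not None and jaccard_distance(keywords, best["keywords"]) <= threshold:
        best["urls"].extend(urls)
        return best
    entry = {"keywords": set(keywords), "urls": list(urls)}
    db.append(entry)
    return entry
```

The threshold deciding what counts as "close enough" is exactly one of the algorithmic decision points mentioned below, so it is kept as a parameter.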
I'm not sure how large or detailed the project needs to be, but let me know if this seems to be in the ballpark of what you're looking for. This is all in Java for the Eclipse plugin; then there's the Chrome extension, and they would need to set up and access some type of database. There were some points in my description where algorithmic decisions need to be made. Some are more research-based (like keyword mappings), but others (the distance algorithm) seem like calls a developer should be able to make. I'm willing to let them make those calls, or if you want me to make them, that's fine as well. The main thing is that the system needs to be built so that at every one of these decision points we can plug in different methods of doing that thing and evaluate their effectiveness (perhaps using a design pattern like Strategy). There are some ways to expand the project if we need more; the most obvious is making it distributed (we have an XMPP server and message-bus central server that they could develop for).
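The pluggable-decision-point requirement maps naturally onto the Strategy pattern mentioned above. A minimal sketch, with every class and function name invented for illustration:

```python
# Minimal Strategy-pattern sketch: each decision point (keyword
# mapping, distance algorithm, ...) is an injected callable, so
# alternatives can be plugged in and evaluated. Names are illustrative.

class Recommender:
    def __init__(self, keyword_strategy, distance_strategy):
        self.extract_keywords = keyword_strategy
        self.distance = distance_strategy

    def closest(self, keywords, entries):
        """Pick the stored keyword set nearest to the current one."""
        return min(entries, key=lambda e: self.distance(keywords, e),
                   default=None)

def overlap_distance(a, b):
    # More shared keywords means "closer" (smaller value).
    return -len(set(a) & set(b))
```

Swapping in a different mapping or distance is then a one-line change at construction time, which is what makes side-by-side evaluation of methods practical.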
Terrell: Can you elaborate a bit about what the database contains? It sounds like you're saying it contains documents, each of which has (possibly many) keywords, and each of which has (possibly many) searches. Each search contains a search string and a set of search results. Is that right so far?
Persson: As for the contents of the database, it could be whatever the team decides is important to store, but I was thinking it would be good to have the URLs that the user visited as well as the keywords that were pulled out of the user's programming files (basically representing the state of the program). In this way the state of the program would map to the URLs that were visited. Then some heuristic could be applied to decide which URLs were most helpful; I was thinking of some type of ladder where the most recent search is the most useful, stepping down the ladder for the rest, but really I'm open to any reasonable ideas there. This means storing some type of value associated with each URL as well.
Terrell: I'm a little confused about what gets updated if there's a known document with a similar set of keywords to the set provided by the search. Can you elaborate on that point?
Persson: As to what would be updated: if there is a record in the database that is "close enough" to the current document (based on keywords), then any new URLs that the current user searched would be added to the URLs searched by the previous user(s). Then the "value" associated with the URLs would be adjusted based on some algorithm like the one mentioned above.
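One way to realize the recency "ladder" heuristic for URL values, with the top value and step size as assumed tuning parameters:

```python
# Recency-ladder sketch: the most recent URL gets the highest value,
# each earlier one steps down. top/step are assumed parameters.

def ladder_values(urls, top=1.0, step=0.25):
    """Assign descending values to URLs, most recent first."""
    values = {}
    for i, url in enumerate(reversed(urls)):
        values[url] = max(top - i * step, 0.0)
    return values
```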
This proposal is for a web app (Security-N-Compliance) where users can log in (user account control) and complete a PCI (credit card security) SAQ (Self-Assessment Questionnaire). There are other industry sites out there that do this function, but they require the user to have a good understanding of PCI. This site will prompt the user with advice and recommendations to further meet the PCI requirements defined in the SAQ. I plan on working with the project team to provide my insights as a 10-year veteran of PCI. In addition, should an item in the SAQ have a recurring review date, the web app will email the end user reminders ahead of due dates to help assist in maintaining compliance.
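The reminder behavior could be as simple as computing lead dates from each item's recurring review date. The 30-day and 7-day lead times below are assumptions for illustration; the proposal does not specify them:

```python
import datetime

# Sketch: reminder emails are sent some number of days ahead of an
# SAQ item's review due date. The (30, 7) lead times are assumed.

def reminder_dates(due, lead_days=(30, 7)):
    """Dates on which reminder emails should go out for a due date."""
    return [due - datetime.timedelta(days=d) for d in lead_days]
```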
The goal is to provide a web app tool for users to complete a PCI SAQ without needing to contact an IT security professional. The web app will allow users to complete and submit the SAQ with attached supporting documentation. Other functionality would include archiving completed SAQs when the SAQ standards change, which happens every 18 months.
In addition to user account controls and SAQ completion, the web app should tie in to an industry-standard online help tool (e.g., Zendesk). The help tool will allow tracking of users' problems and questions, enabling improvements to the advice and recommendations or, where warranted, full articles to be incorporated into an online wiki. The attached diagram shows the major points and workflows. Some features are listed as "Future" and are open for discussion with the team should this project be selected.
Emergency crews work every day to save the lives of trauma victims. In addition to the already difficult task of treating the patient, paramedics are also faced with navigating a complex regional trauma system, with hospitals providing different and often conflicting guidance on alerting their respective trauma teams. The inaccurate or delayed mobilization of trauma resources could mean the difference between life and death for many.
The Mid Carolina Regional Advisory Committee would like help with developing an innovative mobile app that can assist emergency crews with correctly matching patient injury with the proper trauma alert criteria for trauma centers within the region.
This app should be portable to all devices, usable in austere and low-light environments (ambulances and helicopters), and intuitively designed to reduce cognitive strain. The app should also be developed in a way that allows for updates and refinements to the criteria with a reasonable amount of training.
By helping emergency crews to more quickly and accurately identify trauma alert levels, this app has the potential to reduce frustration, save precious minutes, mobilize the correct resources, and ultimately save lives. As no such app has yet been developed, this project has the potential for adoption across the state and beyond.
Connect-Home is a proven methodology and communications process to assist patient discharge from skilled nursing facilities (SNFs) to their homes and reduce re-hospitalization rates. By providing tools, training videos, and a quality improvement interface, staff from the SNFs can share patient-centric information with in-home caregivers and patients. Connect-Home seeks to develop a web portal that complements its printed material binder and augments it with a web-based platform for disseminating training videos and other intervention-related materials. The scope of this project will include creation of a branded website (logo and color scheme provided) that includes an "About Us" section, blog, and secure user portal. The user portal will require unique user sign-up and password-protected entry for the pilot program locations (12+), and it will enable access to sections for (1) tools and (2) videos. If time allows, the user portal will also provide a form for pilot program participants to enter notes about their monthly QI meetings and project check-in meetings with Dr. Toles, as well as a calendar or scheduling mechanism for the meetings. Data from the meeting notes form should be downloadable in Excel or a similar format. The portal will NOT manage HIPAA-related data. The portal and access need to be user friendly and accessible to a wide range of pilot program participants. The PI is interested in having a brief and easy-to-follow instruction sheet for portal updates and maintenance.
Most people use applications like Google Maps or Apple Maps when they navigate, especially for longer trips. I've worked for years in sales and spend a significant amount of time in the car. There have been many instances where I could have benefited from knowing the local or anticipated local weather. It would have allowed me to adjust my expectations for travel time or perhaps even take a different route.
I see the application being of use for logistics companies doing long haul in addition to general users on trips. The application would combine route navigation with anticipated weather along the route. The application needs to calculate locations along the way and then apply forecasted weather for the location. The mobile application would give icons for anticipated conditions (high winds, snow, freezing rain, rain, sunshine, severe weather alerts).
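The core computation could look something like the sketch below: sample points along the route polyline, then map a forecast at each sampled point to a condition icon. The data shapes, icon filenames, and forecast lookup are all stand-ins; a real app would call mapping and weather APIs.

```python
# Sketch of the route-weather step: sample (lat, lon) points along
# the route and attach a forecast icon to each. All names and the
# icon mapping are illustrative assumptions.

def sample_route(points, every_n=2):
    """Take every n-th (lat, lon) point along the route polyline."""
    return points[::every_n]

ICONS = {"rain": "rain.png", "snow": "snow.png",
         "clear": "clear.png", "wind": "wind.png"}

def annotate(points, forecast_fn):
    # forecast_fn maps a (lat, lon) point to a condition string.
    return [(p, ICONS.get(forecast_fn(p), "unknown.png")) for p in points]
```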
If needed, the scope of the initial application could be limited to a single journey so all the feeds are more static.
I would envision an interface similar to Waze. I have not created any wire frames for this application.
Spot is a mobile app that unlocks the world around you. Spot gives users access to a public map of interesting places found by other users, and empowers anyone to add spots of their own. Each spot has its own community, and its own digital guest book of content that can be unlocked by going to the spot and checking in through GPS. We believe that the world is unexplored, from your own backyard to a city you've never visited. Spot empowers users to forge new experiences and beautifully archive the places that they've been.
So far, I've been building Spot as an iOS-first app with Swift and Firebase as a backend. It's only about 30% complete, so I would consider starting fresh with our SWL team to build Spot cross-platform. Our most immediate goal is to release our MVP to beta testers by this spring.
Spot's team consists of myself and Tyler Trocinski (CC'd here). I'm an entrepreneurship concentration in the business school, with minors in computer science and history. For Spot I have been leading development and business strategy. When we bring on developers, I'll shift into more of a product-focused role (I have a background in product management). Tyler has been designing Spot thus far and has built a fully functional prototype. He's a design major in the journalism school.
My product is a web-based tool in which an administrator can manage a participant list, generate randomized groupings, and, after a set time interval, generate randomized groupings again, avoiding previous combinations. The activity creates and strengthens links among constituents. It can also be used to shuffle two subgroups (such as students and faculty) together on alternating meetings, or at every meeting.
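The "avoid previous combinations" requirement can be sketched as shuffle-and-retry: reshuffle until no pair of participants from earlier rounds recurs. This naive approach (and the retry limit) is purely illustrative; as history accumulates, a production tool would need a smarter solver.

```python
import itertools
import random

# Sketch: form groups of `size`, rejecting any shuffle that repeats
# a pair seen in earlier rounds. max_tries is an assumed safety cap.

def make_groups(people, size, seen_pairs, max_tries=1000, rng=random):
    for _ in range(max_tries):
        order = list(people)
        rng.shuffle(order)
        groups = [order[i:i + size] for i in range(0, len(order), size)]
        pairs = {frozenset(p) for g in groups
                 for p in itertools.combinations(g, 2)}
        if not pairs & seen_pairs:
            return groups, seen_pairs | pairs
    return None, seen_pairs  # no repeat-free grouping found in time
```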
This concept does not have any IP.
On the user end, my current product ends with the emails participants receive to notify them of their new combination.
My goal in working with your course would be to create a smartphone-based user interface. In addition to containing information about the current participant combination, I think it would encourage, facilitate, and enrich meetings if it could link to:
Another short term goal is to "click-wrap" the website with an agreement based on a contract or access with a purchased code.
In the longer term, I aim to gamify it, so that participants collect tokens (photos?) representing folks in their meet-ups, or bits of meaningful personal info (hometown, source of inspiration, hobby), and these play into a game board like Candy Crush and encourage ongoing collection.
I'm also open to other ideas for embellishment and improvement!
The child life department at UNC Children's Hospital has been approached with a request to offer same-age peer-to-peer mentoring for pediatric patients. After a review of the literature and other programs that exist in the UNC Health Care system, we have decided to take a first step in this direction with the creation of a matching app to link patients who share commonalities such as diagnosis or personal interests. Last semester, a group of students laid the groundwork for this web-based application, but it needs more refinement before we release it for use with our patients.
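The matching idea could be as simple as scoring candidate pairs by shared attributes. The attribute names and the weighting (shared diagnosis counting double) below are assumptions for illustration, not the app's actual logic:

```python
# Illustrative matching sketch: score patient pairs by shared
# diagnosis and interests, then propose the highest-scoring match.
# Field names and weights are assumed, not taken from the app.

def match_score(a, b):
    score = 0
    if a["diagnosis"] == b["diagnosis"]:
        score += 2  # shared diagnosis weighted more heavily
    score += len(set(a["interests"]) & set(b["interests"]))
    return score

def best_match(patient, candidates):
    return max(candidates, key=lambda c: match_score(patient, c),
               default=None)
```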
Current functionality of the app includes:
In this next phase, we would like to see the following adjustments and/or additions to the app:
We look forward to taking the next phase of the app to our hospital administration for approval to use with patients and families and hope to have this ready to roll out by Fall 2019!
I am looking to develop a web platform (MLDYMusic) that would allow artists to collaborate with producers and other figures in the music world, as well as push their music to the public. Coming from a music background, I know that a simple and synergetic platform is needed for efficient and cohesive music production. The minimum viable product I am looking to achieve is one that allows each member to have a profile page that overviews their skills and showcases their work, plus a feature that allows members to contact other members and share their work directly, mostly as waveform audio and MP3 files. I also have other features I'd like implemented, but I want to work with you to see how much would be feasible for the software engineering team to complete within the semester. I see this as a website, and I believe that hosting and server space would be a complication we'd have to look into. Currently I am the only founder and stakeholder of the startup, and I would be able to meet with the team every Friday during a specified time frame. I look forward to your questions and hope to hear back from you soon.
There is a cubic metric boatload of stuff to collect and generate in order to find, organize, and manage COMP 523 clients and projects. I need a web-based system that will collect, maintain, and present these various types of information in order to create and manage the COMP 523 class in a more timely and convenient manner.
Client and project information. We need to collect client information and information about project ideas. This segment should generate a web page with a summary of all project proposals with all relevant links embedded (Word docs, PPTs, web sites, etc.). Clients will be identified by name, organization/department, and email address. This section will also provide clients with an explanation of the process of proposing, as well as an explanation of client obligations once a project is selected. We want clients to fill out this information themselves via the web interface.
Team scheduling. Students will be able to select preferences from the web description of project ideas. Teams will be formed to maximize (as much as practical) student matches. Unselected projects will remain in the database for use in following semesters.
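One naive way to turn ranked preferences into teams is a greedy first-choice-first pass, sketched below. This is illustrative only; a real scheduler maximizing matches would likely use a proper assignment algorithm, and the data shapes are assumptions.

```python
# Greedy sketch of team formation from ranked student preferences.
# prefs: {student: [project, ...]} ranked best-first.
# capacity: max students per project. Structures are illustrative.

def form_teams(prefs, capacity):
    teams = {}
    for student, ranked in prefs.items():
        for project in ranked:
            if len(teams.setdefault(project, [])) < capacity:
                teams[project].append(student)
                break  # student placed; move to the next student
    return {p: members for p, members in teams.items() if members}
```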
Team management. This section allows the instructor to enter grades and evaluations for each milestone a team must reach. It will be viewable by individual team (protected somehow).
At the UNC Hospital School, teachers work with all patients between the ages of 3 and 20 who are admitted to the UNC Hospitals or who visit our outpatient Hematology/Oncology Clinic. The teachers help with issues that may arise due to medical needs, other academic issues that have not been resolved, continuation of necessary school services, and, of course, instruction while students are in the hospital. Over the years, the school has employed many ways of tracking this work, but it has been difficult to maintain a consistent and helpful system for all teachers and the school as a whole.
In August of 2017, a new system was created and introduced using Microsoft Access to attempt to meet these needs. This database system collects information on each patient as well as on all of the work that teachers do for and with each patient. The data is currently stored on the UNC network as a back-end dataset, with front ends on each teacher's computer to allow them to enter and access data. However, with the constantly expanding nature of the data system, we are quickly running out of viable space, which will cause the system to slow down and eventually exceed the two-gigabyte limit for Access databases.
Our interest is to develop a system that avoids this space limitation and increases ease of use and application speed, while also allowing for continued data analysis and future alterations if possible. The current relationships between the data function well and would serve as a reference for structuring a new data system, but coding a new database is beyond our skills at this point.
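As an illustration only, moving the data into a server-backed SQL database removes Access's 2 GB ceiling while preserving the existing relationships. The sketch below uses SQLite purely for demonstration, and the table and column names are invented; the real schema would mirror the current Access design.

```python
import sqlite3

# Minimal, assumed schema sketch: one patient table and one table of
# teacher contact records referencing it. Names are illustrative.

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patient (
    id   INTEGER PRIMARY KEY,
    name TEXT NOT NULL,
    age  INTEGER
);
CREATE TABLE contact (
    id         INTEGER PRIMARY KEY,
    patient_id INTEGER NOT NULL REFERENCES patient(id),
    teacher    TEXT,
    note       TEXT,
    logged_at  TEXT
);
""")
```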
The objective of this project is to develop a performance metric visualization tool for the Insight Toolkit (ITK). ITK is an open-source, cross-platform C++ toolkit with Python wrapping for N-dimensional image processing, segmentation and registration. Segmentation is the process of identifying and classifying data found in a digitally sampled representation. Typically the sampled representation is an image acquired from such medical instrumentation as CT or MRI scanners. Registration is the task of aligning or developing correspondences between data. For example, in the medical environment, a CT scan may be aligned with a MRI scan in order to combine the information contained in both.
A suite of performance metrics are available, which produce JSON files that describe the metric results, the version of the software, the build options for the software, and the system on which the metrics were executed.
The task for this project is to create a JAMstack (client-side JavaScript) web application to visualize the performance metrics, allowing 1) performance on a single system to be compared over time as the software changes, and 2) performance at two time points to be compared across multiple systems.
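Although the deliverable is a client-side JavaScript app, the comparison logic can be sketched in any language; here is a Python sketch. The JSON field names below ("metrics" mapping metric names to values) are assumptions about the files' layout, not the actual schema produced by the metric suite.

```python
import json

# Sketch of reading metric JSON and comparing two runs. Field names
# are assumed; the real files also carry version/build/system info.

def load_run(text):
    return json.loads(text)

def compare_runs(old, new):
    """Return {metric: relative change} between two runs."""
    out = {}
    for name, old_val in old["metrics"].items():
        if name in new["metrics"] and old_val:
            out[name] = (new["metrics"][name] - old_val) / old_val
    return out
```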
UNC Health System is expanding all over the state of North Carolina to provide cancer care services. Unfortunately, a board-certified oncology dietitian is only available at Main Campus in Chapel Hill or at Rex Healthcare in Raleigh. This leaves a huge gap in nutrition care for patients who are experiencing eating challenges during their cancer journey at our UNC Affiliated sites. My idea is to create an app that can provide some of the basic nutrition info we routinely provide, in person in Chapel Hill, to our cancer patients and their caregivers throughout the entire state. I want the app to house content that provides informational data specific to cancer and nutrition.
We think this would be great for our cancer patients who don't have access to a dietitian, so that they can get accurate and reliable nutrition information rather than misinformation off the internet.
Project opportunity: Building a digital dashboard to include patients' voices in health systems in Kenya and improve quality of maternal healthcare
Our proposal for the COMP 523 class is to help us develop a dashboard tool that incorporates patient feedback to improve quality of care for maternal health in Kenya. This will involve creatively synthesizing data from several sources, including responses from thousands of mothers using Jacaranda's two-way SMS platform and publicly available DHIS2 data with outcomes from public hospitals. It will challenge the team and expose them to both data science and app development: data analysis and visualization, and building out software tools to automate data synthesis. If the team is interested, it could potentially also incorporate advanced skills such as sentiment analysis. If the team is successful, this tool could help empower mothers and influence decision-making across Kenya's public hospitals that serve hundreds of thousands of women.
60% of the world's deaths in health systems, including those arising from maternal and newborn complications, are due to poor-quality care. The global maternal health community has recently been focused on the issue of quality of hospital-based care, and there is a growing interest in integrating patient voices into quality of care.
Jacaranda Health has its roots as a healthcare provider delivering high-quality maternal health care for low-income women in Kenya. We are currently working with Kenya's public sector to improve access to and quality of maternal healthcare in public hospitals. One of our central programs is a set of mHealth tools that have been proven to improve continuity of care; these tools are now rapidly being taken up at government hospitals across the country. Key components of this mHealth solution are:
Behavior-changing content informed by a rigorous evidence base: Over the past two years, we have designed a sequence of simple, low-cost SMS "behavioral nudges": checklists, prompts, and messages that increase care-seeking behavior and outcomes in the pregnancy and postpartum period. Rigorous evaluation has demonstrated that these SMS messages encourage mothers to seek care at appropriate times: when pregnant women or new mothers receive our SMS messages, we see improved knowledge of danger signs and improved uptake of postpartum visits and postpartum family planning.
Rapid uptake of a service integrated within county health structures: Jacaranda is rapidly establishing relationships with County Health Management Teams to deliver the SMS messages to women across the country. Jacaranda launched SMS programs for three Kenyan county health systems: Nairobi, Bungoma and Kiambu. In six months, 60 facilities have used the service. Our enrollment rate is now 1000 mothers/month, with a 95% retention rate in the program. Importantly, this level of coverage gives us a platform through which we can continue to test innovative technology-based approaches for improving quality of care.
We propose to work with a COMP 523 team to help us integrate patients' voices into the health system in Kenya by leveraging data from our SMS interactions to help create a dashboard tool that incorporates patient experience into facility quality of care. Given our growing base of users, we can now ask about patient experiences in hospitals during childbirth (e.g., disrespect and abuse, consent) and aggregate this feedback to create a dashboard. We want to take that patient feedback data about their experience during childbirth and combine it with facility-level quality data that is publicly available (number of births, infant deaths, maternal complications, etc.) so that facility administrators can have a valuable tool that shows how their facilities are performing, and where they need to improve quality.
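The synthesis step described above, aggregating SMS feedback per facility and joining it with public outcome data, could be sketched as follows. The field names and the "negative report rate" metric are assumptions for illustration, not Jacaranda's actual indicators:

```python
# Sketch of the data-synthesis step. All field names ("facility",
# "negative", "births", ...) and the negative-rate metric are assumed.

def feedback_rates(responses):
    """responses: list of {"facility": ..., "negative": bool}.
    Returns the share of negative reports per facility."""
    totals, neg = {}, {}
    for r in responses:
        f = r["facility"]
        totals[f] = totals.get(f, 0) + 1
        neg[f] = neg.get(f, 0) + (1 if r["negative"] else 0)
    return {f: neg[f] / totals[f] for f in totals}

def join_with_outcomes(rates, outcomes):
    # outcomes: {facility: {"births": ..., "maternal_complications": ...}}
    return {f: {**outcomes.get(f, {}), "negative_rate": rate}
            for f, rate in rates.items()}
```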
The key objectives for the team will be to:
Students would work with two key data sets:
Note this is not a web app, but a desktop Qt C++ application
The Pulse Physiology Engine is a comprehensive C++ library that models the human physiology needed to drive medical education, research, and training technologies. However, without a visualization tool the Pulse library is difficult to understand. We have developed a desktop visualization tool, written in C++ and built on Qt, to provide an interactive experience for users to explore the models used in the Pulse engine and display the data generated by the physiology engine. The explorer should allow users to create new patients, insert insults or interventions at any point in time, and display the calculated effects as the simulation runs.
Currently the explorer is able to create and simulate a new patient, but it has limited functionality for adding insults and interventions to a running simulation. We would like you to design and implement a generic user interface to support all Pulse actions in the explorer. There are hundreds of data values that may be of interest to an end user evaluating Pulse, but the explorer's current data display is rudimentary and plots only a predefined set of data over time. We would like you to design and implement a new way for users to select the data they are interested in and present it to them.
This app would allow for short, manageable Cherokee language lessons that people can complete in day-to-day life. Unlike apps like Duolingo that rely heavily on translation, á¤á™áŽáŠ orients language learning in real social contexts by triggering lesson prompts based on the user's physical location. This solves two problems. First, language learners often over-rely on translation and cannot think or respond quickly enough in the target language. By showing the user an example of the language being used in the location in which they are standing, it becomes clear what the particular speech act being presented is doing, obviating the need for much translation. Second, would-be learners often claim that language learning "takes too much time" out of their day. Instead of making language learning a separate task, á¤á™áŽáŠ integrates it into tasks they are already doing. Using geo-tagging to remind learners not just that they could do a "lesson" in the place they're currently standing, but also that they could actively practice in that place, creates a reliable cue that users can build on to form a habit. That habit formation, combined with consistent practice, encourages the development of skills. Placing the language in real contexts allows users to practice using the language in day-to-day life, a crucial component of revitalizing the Cherokee language, which is currently endangered.
Certain locations in the community would be geo-tagged as learning-spaces, much like "gyms" in Pokemon Go. The user would enter the location (a coffee shop, a hair salon, etc.) and receive a push notification on their smart device that a short (~30 second) video lesson was available. A message like "Learn to order coffee in Cherokee?" would display. When the user clicked "yes," the video would play of a person ordering coffee (etc.) in the same physical location the user was standing. Afterward, a word list would appear with clickable phrases. When clicked, the phrases would play in Cherokee using the sound from the video. At the bottom of that screen, the user would be offered the option to practice the conversation in realtime. If the user clicked "yes," the barista (or whoever) would get a notification that the customer wanted to practice. If they agree, both people could go through the conversation together and rate each other via a star rating system, much like in Uber or Lyft. If the customer received 4/4 stars for five days in a row, for example, they might receive a 10% discount on their coffee for the following week. This platform would also encourage customers to visit certain places of business that host the activity more frequently, thereby providing an incentive to business owners.
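The location trigger described above amounts to a geofence check: fire a lesson prompt when the user comes within some radius of a tagged spot. A minimal sketch, where the haversine formula computes distance in meters and the 50 m radius is an assumed tuning parameter:

```python
import math

# Geofence sketch: trigger a lesson when the user is within radius_m
# of a tagged spot. Data shapes and the radius are assumptions.

def haversine_m(lat1, lon1, lat2, lon2):
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def lesson_for(user_pos, spots, radius_m=50):
    """Return the first tagged lesson within radius_m of the user."""
    for spot in spots:
        if haversine_m(user_pos[0], user_pos[1],
                       spot["lat"], spot["lon"]) <= radius_m:
            return spot["lesson"]
    return None
```

On a real device this polling would be replaced by the platform's region-monitoring APIs, which are far kinder to the battery.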
I'm Phil, a 5th-year PhD student working here on computer vision with Alex Berg.
We could use an application developed to show off some of our object recognition work when people come in for demos. I'm not sure how big the project should be for this class; this might be too small, but I'll start with the bare bones:
A Python GUI application that allows a user to capture 1-5 images of an object and manually crop the object in each image, then applies the object detection system to all subsequent incoming frames from the camera. It should allow the user to stop and restart with a new object. We would need the application to run on a Linux (Ubuntu) machine. Running the object detector requires a GPU, but the students can easily simulate the object detector when working on a machine without one.
There are some more things that could be added to make the project more involved, here are some in no particular order:
We already have all the computer vision and machine learning stuff done so they wouldn't need to do any of that, just use the libraries to call a few functions. I'm happy to let the students make most or all of the design decisions. We have cameras/a linux laptop we might be able to let them use as well, but it would be fine if they just use their own laptops/webcams.
Terrell: You say that you have the object detection done already, and that they'd just have to call a few functions in a library to use it, is that right? Can you outline what functions they'd be calling, and what arguments those functions expect and what values they return, to give me a sense of the interface they'd be using? Also, are you confident that the code will work without bugs or hiccups for the intended use case?
Ammirato: Our object detector is implemented using the Pytorch library. They would need maybe 3-5 lines to load the model, which I can give them, and then just call model(input_image, cropped_object_images) and it would return a list of 4-tuples that represent bounding boxes of the object in the image. That's all. So they could easily just proxy this by making a function that has the same input and returns a list of random bounding boxes, if they have any trouble using the real model. We have a working/tested code already for the object detector, so bugs shouldn't be a problem.
Terrell: I'm still not quite clear on the boundary between the code that you've already written for object detection and the code that the students would create. Here's what I think I understand. Students would show some images in the GUI and give users tools to crop the image so that only an object of interest remains. They do this cropping process for 1–5 (presumably similar) images, then pass the original images (plural?) and cropped regions to a function in your library that returns a model. Then they can pass that model along with another image containing the object to another function that returns a set of rectangles indicating where the object is within the image, which the app would indicate somehow in the GUI. Does that sound about right? Am I missing anything?
Ammirato: That's close. If they use my code, they
They can proxy my code by using this function to simulate the object detection:
import numpy as np

def object_detect(cropped_images, not_cropped_image):
    # Simulate detection: return five random (x1, y1, x2, y2) bounding
    # boxes, matching the real model's output format described above.
    return [tuple(box) for box in np.random.rand(5, 4)]
So then what they are doing is totally separate from my code, and they can plug it in later if they like. This way they would:
Dr. John Yoon
Dr. Oyshik Banerjee
Patients in the surgical intensive care unit (SICU) are critically ill and require an astute physician to manage a multitude of medications. The physician must not only know the indication for giving a certain drug; facts such as route of administration, pharmacokinetics, metabolism, and even contraindications can also be troublesome to keep track of. Fortunately, in our SICU at the University of North Carolina, we have dedicated ICU pharmacists who provide physicians with useful information that allows them to administer the best medication. Doctors are dedicated lifelong learners and use cheat sheets, note cards, and now our mobile phones to gain access to this information.
We would like to ask the talented students of the UNC Department of Computer Science to help health care providers create a mobile app that allows easy access to the important medications we use in the SICU. We want the mobile app to have a visually pleasing interface that is laid out in an intuitive fashion. This app will be a reference guide for medications. While we are blessed at UNC Hospitals with talented pharmacists, having this app would allow medical students, residents, fellows, and attending physicians across the nation to have access to this reference guide on their phones. In addition, as knowledge is constantly changing, we want to add a tab where we can continually upload new and upcoming studies on the use of these medications so that healthcare providers across the nation can stay up to date.
SHOPherLOOK started 6 months ago as fashionXproject and has become the first pre-owned apparel shopping platform exclusively for Instagram fashion influencers https://shopherlook.app/. Unlike competitors, such as Poshmark and ThredUp, who sell pre-owned apparel from anyone, SHOPherLOOK wants to establish itself as the LiketoKnow.it of pre-owned apparel, where fashion influencers can leverage their network from Instagram and sell their closet directly to their followers.
Today, Influencers are selling through us in 3 easy steps once they are accepted to our program:
Our team is composed of Ashley and myself, both MBA students at Duke; one product manager, also an MBA student at Duke, with five years' experience in full-stack development; and three Duke CS juniors. The UNC team will work together with the product manager, Ashley, and myself.
Today, we are working with ~100 influencers who submit their listings through our website; we then post to our Instagram account with shoppable links so they can sell their pre-owned apparel through us. Although it's a novel approach, it has very limited scalability; we can't post more than 20 items/day because additional posts will pass unseen by shoppers due to the chronological order of posts on Instagram. Also, in order to own our shopper relationships, we need to establish our own separate platform, and thus avoid the potential problem of losing all our work if our Instagram account is cancelled for any reason, as has happened with many other Instagram-based platforms.
Therefore, we need UNC students' support to build our iOS app (iOS is the selected platform for fashion), where each influencer has their own "closet" that users can follow and buy from. We want to maintain the influencer submission flow as we have it today (we already have all the databases), and develop the platform for shoppers (pulling data from the existing databases). Ideally, we want users to log in to our iOS app with their Instagram account and, once they log in, automatically follow the closets of all the influencers they follow on Instagram.
The app should be an iOS app.
Design:
Functionalities for shoppers: