Before coming to Fuqua, I studied statistics at UNC, where I was first introduced to machine learning and its potential to uncover patterns in complex data. I gained hands-on experience building models and analyzing datasets, but I knew I wanted to go further and apply these tools to real-world business challenges. That’s what drew me to the MQM program: a chance to strengthen my technical foundation while developing the business acumen to turn insights into impact.

At Fuqua, I’ve looked for opportunities to explore how analytics can make a tangible difference. One of the most meaningful came during the Modern Analytics course, where I joined a team project focused on using deep learning to classify American Sign Language (ASL) gestures. This project was a perfect example of what I came to MQM to do: combine data science with purpose.

Our Mission To Turn Insights Into Impact

Deep learning has the power to redefine accessibility, opening new doors for the deaf and hearing-impaired by translating sign language automatically. American Sign Language (ASL) is an effective way for them to communicate, but its usefulness is limited by how few people know it. Most hearing people never learn ASL because they rarely expect to need it, which leaves those who rely on the language with far fewer people they can communicate with.

My team and I sought to remedy this issue by exploring the use of deep learning to classify images of ASL signs. A solution like this could enable many business applications, such as apps for ASL translation or learning, and accessibility resources that transcribe or caption the use of ASL in virtual meetings.

Our goal was to create a model that could analyze an image of a sign and output the letter it represents, and we were extremely proud when our best model achieved over 99% accuracy in testing. This accomplishment not only demonstrated the model’s effectiveness but also reflected how much we learned and grew throughout the Modern Analytics course.

Working With Data

To train and test our deep learning model, we used the Sign Language MNIST dataset, available on Kaggle. The dataset is designed to facilitate the development of machine learning models that recognize American Sign Language gestures. It includes a training set of 27,455 images and a testing set of 7,172 images covering 24 unique hand gestures for the letters A-Y (excluding J and Z, whose signs involve motion), making it an excellent resource for image classification training.

Each image in the dataset is labeled, which simplifies supervised learning. We also augmented and processed the data to make it more diverse and improve training. As we learned in class, this helps ensure a model can generalize to unfamiliar inputs.
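For readers who want to follow along, here is a minimal sketch of how the data could be loaded and lightly augmented in Python with pandas and TensorFlow. It assumes the dataset’s standard Kaggle layout (a label column plus 784 pixel columns per 28x28 grayscale image); the file names and augmentation settings are illustrative rather than our exact pipeline.

```python
# Minimal sketch: load Sign Language MNIST from its CSV files and set up light augmentation.
# File names and augmentation ranges are illustrative assumptions, not our exact pipeline.
import pandas as pd
import tensorflow as tf

train_df = pd.read_csv("sign_mnist_train.csv")
test_df = pd.read_csv("sign_mnist_test.csv")

# Separate labels from pixel values, reshape to 28x28x1 images, and scale to [0, 1].
y_train = train_df["label"].to_numpy()
X_train = train_df.drop(columns="label").to_numpy().reshape(-1, 28, 28, 1) / 255.0
y_test = test_df["label"].to_numpy()
X_test = test_df.drop(columns="label").to_numpy().reshape(-1, 28, 28, 1) / 255.0

# Small shifts, rotations, and zooms expose the model to more varied hand positions
# than the raw images alone provide.
augmenter = tf.keras.preprocessing.image.ImageDataGenerator(
    rotation_range=10,
    width_shift_range=0.1,
    height_shift_range=0.1,
    zoom_range=0.1,
)
train_batches = augmenter.flow(X_train, y_train, batch_size=64)
```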

American Sign Language hand signals for various letters used to train a deep learning model

Testing Different Models

Our team trained three different models, each increasing in complexity and accuracy.

1. Logistic Regression Model

Logistic regression has the strengths of being simple and computationally efficient, and at 81.08% accuracy it served well as a baseline to improve upon. However, that level of accuracy is not high enough to justify deployment in business applications.
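As a rough illustration, here is a sketch of a scikit-learn logistic regression on the flattened pixel vectors, reusing the arrays from the loading sketch above. The hyperparameters are illustrative, not our exact settings.

```python
# Sketch of a logistic regression baseline on flattened 784-pixel vectors.
# Assumes X_train, y_train, X_test, y_test from the data-loading sketch above.
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Flatten each 28x28 image into a 784-dimensional feature vector.
X_train_flat = X_train.reshape(len(X_train), -1)
X_test_flat = X_test.reshape(len(X_test), -1)

baseline = LogisticRegression(max_iter=1000)
baseline.fit(X_train_flat, y_train)

print("Test accuracy:", accuracy_score(y_test, baseline.predict(X_test_flat)))
```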

In class, we often discussed how to evaluate whether an error rate is acceptable. For an application like translation, miscommunication can be costly in the business world, so we felt accuracy should be as close to 100% as possible.

2. Convolutional Neural Network

We pivoted to using a convolutional neural network (CNN). CNNs are powerful tools for capturing complex and hierarchical patterns in data, particularly excelling with high-dimensional inputs like images. CNNs can effectively detect subtle details in a hand gesture, like finger placement or hand orientation, that other models might miss.

However, these capabilities come at a cost, as CNNs are computationally expensive, requiring significant processing power and memory. Additionally, they are prone to overfitting, where a model performs well on training data but struggles with new, unseen images.

In the context of our project, overfitting would mean the model memorizes the training images instead of learning to recognize ASL letters. To address this, we applied regularization techniques we learned in class, such as dropout layers.
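The sketch below shows what a small Keras CNN with dropout might look like for this task. The layer sizes are illustrative rather than our exact architecture; the 25 output units follow the dataset’s label encoding, which runs from 0 to 24 with J skipped.

```python
# Sketch of a small CNN with dropout for ASL letter classification.
# Layer sizes are illustrative; assumes X_train, y_train, X_test, y_test from above.
import tensorflow as tf
from tensorflow.keras import layers

cnn = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),  # randomly drops units so the network cannot simply memorize training images
    layers.Dense(25, activation="softmax"),  # labels run 0-24 (J is skipped), so 25 output units
])

cnn.compile(optimizer="adam",
            loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
cnn.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=10, batch_size=64)
```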

3. Optimized Convolutional Neural Network

Our most accurate model, which we titled our optimized CNN, achieved over 99% testing accuracy by enhancing the model architecture, putting it well within the range we’d consider reliable for practical use. We were proud to see how refining the model brought us closer to a solution that could support real-world applications in accessibility.
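I won’t reproduce the full architecture here, but the sketch below illustrates the kinds of enhancements that typically push accuracy higher on this dataset, such as deeper convolutional blocks, batch normalization, and training on augmented batches. Treat it as a hypothetical example rather than our final model.

```python
# Illustrative sketch of an enhanced CNN: deeper blocks, batch normalization, and
# training on augmented batches. Not our exact optimized architecture.
# Assumes train_batches, X_test, y_test from the data-loading sketch above.
import tensorflow as tf
from tensorflow.keras import layers

optimized_cnn = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.Conv2D(64, 3, padding="same", activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, padding="same", activation="relu"),
    layers.BatchNormalization(),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.4),
    layers.Dense(25, activation="softmax"),
])

optimized_cnn.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
# Training on augmented batches helps the model generalize beyond the raw images.
optimized_cnn.fit(train_batches, validation_data=(X_test, y_test), epochs=20)
```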

Tangible Deep Learning Insights

Overall, our research shows the potential for deep learning to improve accessibility in the lives of the deaf and hearing-impaired by enabling more accurate and real-time sign language translation. By automating the recognition of sign language, these systems could significantly reduce communication barriers in daily interactions. This technology holds promise for fostering greater inclusion, independence, and overall quality of life for those who rely on sign language as their primary means of communication.

Reflecting on this project, I am proud of how much I learned and how I was able to apply what I practiced in class to real-world challenges. The Modern Analytics course equipped me with the skills and confidence to approach demanding analytical tasks with a structured, data-driven mindset.

Kenan Bauer, a student in the MQM: BA Class of 2025, at a table with four of his classmates enjoying a meal

As a group, my teammates Gwen Quo, Soojung Kim, Aditya Menon, and I felt a real sense of pride and excitement as our models improved. Watching the accuracy climb to over 99% was incredibly rewarding. It made all the hours of testing, debugging, and reworking feel worth it.

We started this project to learn, but we walked away feeling like we had created a prototype that could lead to something bigger. This project highlighted the real-world impact of AI in making communication more accessible, reinforcing my passion for using technology to drive meaningful change.

Kenan Bauer, a student in the MQM: BA Class of 2025, standing in a blue Duke sweatshirt to the left of a sign reading "Duke - The Fuqua School of Business - 100 Fuqua Drive"