Research Acceleration Program

Mentoring early-stage AI researchers

Lead: Lu Dong
Advisor: Ifeoma Nwogu

We initiated the Research Acceleration Program to support the rapid growth of early-stage AI researchers, emphasizing human-centered multimodal intelligence—spanning generative AI, multimodal large models, and LLM-based agentic systems for applications in education, sign language communication, and digital humans.

What you can expect

A structured mentoring program designed to accelerate research productivity and build strong technical and academic foundations for students. The program offers guidance on:

  • Efficiently identifying core ideas and contributions in research papers
  • Tracking and reproducing state-of-the-art GenAI techniques
  • Training generative models
  • Deploying models on Hugging Face
  • Releasing well-documented open-source code
  • Building clear demos to visualize staged research progress

Projects will be tailored to each student’s background, research interests, and starting level, with appropriately calibrated difficulty.

If you’re interested, please feel free to send me your resume. Spots are limited and will be filled on a first-come, first-served basis.

Sample Project Progress Demonstrations

The following projects will be updated periodically. The demos are fully interactive, but because they are hosted on free-tier Hugging Face resources, they may go to sleep when idle. As a result, the first visit may take some time while the Space wakes up.

Student A Project

Rajvi Zala

SignOmni

Project Participation:
SignOmni — American Sign Language Search and Generative Models to Support Early Learners

Branch Goal:
Build SignMotionGPT by pre-training an LLM for 3D ASL generation, along with quantitative and qualitative evaluation.

Student B Project

Nitish Yeramilli

SCOPE

Project Participation:
SCOPE – Student Cognitive Observation, Perception, and Explanation

Branch Goal:
Analyze students’ cognitive states (engagement, confusion, frustration, and boredom), along with the intensity of each state, in online studies using image-based data.

Student C Project

Sudheendra Peddiraju

SignOmni

Project Participation:
SignOmni — American Sign Language Search and Generative Models to Support Early Learners

Branch Goal:
Build a RAG-based system that maps sign language descriptions to signs, using key linguistic features such as handshape, palm orientation, movement, and location.
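The retrieval half of this goal can be sketched in miniature: given a free-text description of a sign's linguistic features, find the closest entry in a sign lexicon. The lexicon entries, sign names, and bag-of-words matching below are all illustrative placeholders; a real system would use learned text embeddings and feed the retrieved entries to an LLM.

```python
from collections import Counter
import math

# Hypothetical mini-lexicon: each ASL sign is described by key linguistic
# features (handshape, palm orientation, movement, location). Entries are
# illustrative, not linguistically authoritative.
SIGN_LEXICON = {
    "MOTHER": "open 5 handshape, palm left, thumb taps chin, location chin",
    "FATHER": "open 5 handshape, palm left, thumb taps forehead, location forehead",
    "THANK-YOU": "flat B handshape, palm in, moves outward from chin, location chin",
}

def bow(text):
    """Lowercased bag-of-words term counts (stand-in for a learned embedding)."""
    return Counter(text.lower().replace(",", " ").split())

def cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_sign(query, lexicon=SIGN_LEXICON):
    """Return the sign whose feature description best matches the query."""
    return max(lexicon, key=lambda s: cosine(bow(query), bow(lexicon[s])))
```

For example, `retrieve_sign("thumb taps the chin with an open 5 handshape")` retrieves `"MOTHER"`, since its description shares the most feature terms with the query.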

Student D Project

Suman Mandava

SCOPE

Project Participation:
SCOPE – Student Cognitive Observation, Perception, and Explanation

Branch Goal:
Analyze students’ cognitive states (engagement, confusion, frustration, and boredom) by leveraging facial Action Units (AUs) and fine-tuned large language models for deeper interpretability.

Student E Project

Hemasree Pujari

StrategyGen

Project Participation:
StrategyGen: Generating Adult–Child Interaction Strategies for Early Education

Branch Goal:
Reconstruct 3D body meshes from 2D interaction videos and train an audio-driven generative model that produces matching 3D body motion.

Student F Project

Gayathri Adulla

StrategyGen

Project Participation:
StrategyGen: Generating Adult–Child Interaction Strategies for Early Education

Branch Goal:
Explore multiple denoising methods to reduce jitter in reconstructed 3D meshes and develop an automated denoising pipeline.
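The simplest baseline in this denoising family is a temporal moving average over per-frame joint positions; the sketch below shows that baseline under the assumption that each frame is a single (x, y, z) joint position (a real pipeline would smooth every joint, and would likely compare against stronger filters such as Savitzky–Golay or a one-euro filter).

```python
# Minimal baseline sketch: centered moving-average smoothing of a
# per-frame 3D joint trajectory to reduce reconstruction jitter.

def smooth_trajectory(frames, window=3):
    """Centered moving average over a list of (x, y, z) positions.

    frames: list of (x, y, z) tuples, one per video frame.
    window: odd window size; larger windows smooth more but lag fast motion.
    """
    half = window // 2
    smoothed = []
    for i in range(len(frames)):
        # Clamp the window at sequence boundaries.
        lo, hi = max(0, i - half), min(len(frames), i + half + 1)
        n = hi - lo
        smoothed.append(tuple(
            sum(f[axis] for f in frames[lo:hi]) / n for axis in range(3)
        ))
    return smoothed
```

A single-frame spike such as `[(0,0,0), (3,3,3), (0,0,0)]` is flattened toward its neighbors, which is exactly the jitter-suppression behavior the stronger methods refine.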

Student G Project

Jaya Chandra Galda

StrategyGen

Project Participation:
StrategyGen: Generating Adult–Child Interaction Strategies for Early Education

Branch Goal:
Given an interaction video, annotate adult strategies, extract their timestamps, and further develop an end-to-end detection pipeline.