Knowledge Distillation for Multimodal Egocentric Action Recognition Robust to Missing Modalities

1University of Zaragoza, 2TU Darmstadt, 3hessian.AI

*Equal contribution.

TL;DR 🚀


We propose KARMMA, a multimodal-to-multimodal knowledge distillation approach for egocentric action recognition that leverages multiple modalities while remaining robust to missing ones. Our method is flexible and well-suited for real-world applications, as it can perform inference with any combination of the trained modalities while keeping memory usage and computational cost low.

Abstract



Overview of our proposed multimodal-to-multimodal distillation pipeline (KARMMA) for egocentric action recognition.

Action recognition is an essential task in egocentric vision due to its wide range of applications across many fields. While deep learning methods have been proposed to address this task, most rely on a single modality, typically video. However, including additional modalities may improve the robustness of the approaches to common issues in egocentric videos, such as blurriness and occlusions. Recent efforts in multimodal egocentric action recognition often assume the availability of all modalities, leading to failures or performance drops when any modality is missing. To address this, we introduce an efficient multimodal knowledge distillation approach for egocentric action recognition that is robust to missing modalities (KARMMA) while still benefiting when multiple modalities are available. Our method focuses on resource-efficient development by leveraging pre-trained models as unimodal feature extractors in our teacher model, which distills knowledge into a much smaller and faster student model. Experiments on the Epic-Kitchens and Something-Something datasets demonstrate that our student model effectively handles missing modalities while reducing its accuracy drop in this scenario.

Method


Teacher diagram
Distillation diagram

The KARMMA pipeline consists of two stages. In the first stage, the teacher processes all modalities using frozen unimodal feature extractors and learns to fuse their features through a combination of cross-entropy and alignment losses. In the second stage, the student learns from the frozen, previously trained teacher via knowledge distillation, incorporating modality dropout and a strategy for handling missing modalities to enhance robustness in incomplete input scenarios.
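Since this page does not include training code, the following PyTorch sketch only illustrates the two-stage structure under stated assumptions: a toy fusion head in place of the teacher and student architectures, an MSE alignment term, a KL distillation loss, and placeholder loss weights. None of these names or values come from the released KARMMA implementation.

import random
import torch
import torch.nn as nn
import torch.nn.functional as F

MODALITIES = ["video", "flow", "audio"]

class FusionHead(nn.Module):
    """Toy fusion model: projects pre-extracted unimodal features and classifies actions."""
    def __init__(self, dim=256, num_classes=97):
        super().__init__()
        self.proj = nn.ModuleDict({m: nn.Linear(dim, dim) for m in MODALITIES})
        self.cls = nn.Linear(dim, num_classes)

    def forward(self, feats):  # feats: {modality: (B, dim)} for the available modalities
        fused = torch.stack([self.proj[m](f) for m, f in feats.items()]).mean(dim=0)
        return self.cls(fused), fused

def teacher_step(teacher, feats, labels, align_weight=0.1):
    # Stage 1: cross-entropy on the fused prediction plus an alignment term that
    # pulls each projected modality towards the fused representation.
    logits, fused = teacher(feats)
    loss = F.cross_entropy(logits, labels)
    for m, f in feats.items():
        loss = loss + align_weight * F.mse_loss(teacher.proj[m](f), fused.detach())
    return loss

def student_step(student, teacher, feats, labels, drop_p=0.3, kd_weight=1.0, temp=2.0):
    # Stage 2: the frozen teacher sees all modalities; the student sees a randomly
    # reduced subset (modality dropout) and is distilled from the teacher's logits.
    with torch.no_grad():
        t_logits, _ = teacher(feats)
    kept = {m: f for m, f in feats.items() if random.random() > drop_p}
    if not kept:                                    # always keep at least one modality
        kept = dict([next(iter(feats.items()))])
    s_logits, _ = student(kept)
    kd = F.kl_div(F.log_softmax(s_logits / temp, dim=-1),
                  F.softmax(t_logits / temp, dim=-1),
                  reduction="batchmean") * temp * temp
    return F.cross_entropy(s_logits, labels) + kd_weight * kd

In the actual method, the student additionally relies on the learnable tokens for missing modalities described in the next section, which are sketched separately below.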

Missing modality strategy

Our proposed strategy for handling missing modalities introduces two types of learnable tokens into the embedding layer of the student. This layer first projects the tokens from all available feature extractors. Then, it adds the corresponding learned modality token \(\breve{\mathbf{t}}^{m}\) to all projected tokens. Finally, a learned token \(\dot{\mathbf{t}}_{i}^{m}\) is added to each individual token. The key difference between \(\breve{\mathbf{t}}^{m}\) and \(\dot{\mathbf{t}}_{i}^{m}\) is that \(\breve{\mathbf{t}}^{m}\) is learned per modality, whereas \(\dot{\mathbf{t}}_{i}^{m}\) is learned per token.
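As a concrete illustration of this embedding layer, here is a minimal PyTorch sketch. The feature dimensions, modality names, token counts, and initialization are placeholder assumptions, not the released implementation; only the two token types and the order of operations follow the description above.

import torch
import torch.nn as nn

class StudentEmbedding(nn.Module):
    """Projects extractor tokens and adds the two learnable token types described above."""
    def __init__(self, feat_dims=None, num_tokens=None, dim=256):
        super().__init__()
        feat_dims = feat_dims or {"video": 768, "flow": 768, "audio": 512}   # placeholder sizes
        num_tokens = num_tokens or {"video": 16, "flow": 16, "audio": 8}     # placeholder counts
        self.proj = nn.ModuleDict({m: nn.Linear(d, dim) for m, d in feat_dims.items()})
        # Modality token \breve{t}^m: one learnable vector per modality, shared by all its tokens.
        self.modality_token = nn.ParameterDict(
            {m: nn.Parameter(0.02 * torch.randn(1, 1, dim)) for m in feat_dims})
        # Token \dot{t}_i^m: one learnable vector per individual token position of each modality.
        self.per_token = nn.ParameterDict(
            {m: nn.Parameter(0.02 * torch.randn(1, n, dim)) for m, n in num_tokens.items()})

    def forward(self, feats):
        # feats: {modality: (B, N_m, feat_dim)} containing only the AVAILABLE modalities.
        embedded = []
        for m, x in feats.items():
            x = self.proj[m](x)                          # project extractor tokens
            x = x + self.modality_token[m]               # add \breve{t}^m to all projected tokens
            x = x + self.per_token[m][:, : x.size(1)]    # add \dot{t}_i^m to each individual token
            embedded.append(x)
        return torch.cat(embedded, dim=1)                # tokens of all available modalities

# Example: only video is available at inference time.
emb = StudentEmbedding()
tokens = emb({"video": torch.randn(2, 16, 768)})         # -> shape (2, 16, 256)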

Results


We evaluate our teacher (KARMMA\(_T\)) and student (KARMMA\(_S\)) models on the Epic-Kitchens-100 and Something-Something (V2) datasets. Since using the validation set for both validation and testing may not reflect the generalization capability of our method, we created a custom split of the Epic-Kitchens training set, named Epic-Kitchens*, allocating 90% for training and 10% for validation, and kept the original validation split for testing.
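For reference, a rough sketch of how such an Epic-Kitchens* split could be produced from the public EPIC-Kitchens-100 annotation files is shown below. The file names, the clip-level (rather than video-level) split, the fixed seed, and the output names are assumptions, not the exact split used in the paper.

import pandas as pd

# Load the public EPIC-Kitchens-100 annotations (file names assumed).
train = pd.read_csv("EPIC_100_train.csv")
official_val = pd.read_csv("EPIC_100_validation.csv")        # reused as the test split

# 90/10 split of the original training set (clip-level split and seed are assumptions).
train = train.sample(frac=1.0, random_state=0).reset_index(drop=True)
cut = int(0.9 * len(train))
train.iloc[:cut].to_csv("epic_kitchens_star_train.csv", index=False)
train.iloc[cut:].to_csv("epic_kitchens_star_val.csv", index=False)
official_val.to_csv("epic_kitchens_star_test.csv", index=False)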

Analysis of KARMMA Enhancements

Our KARMMA\(_S\) incorporates multimodal-to-multimodal knowledge distillation, modality dropout, and our strategy for handling missing modalities, collectively referred to as “KARMMA enhancements.” To analyze the impact of these enhancements, we compare KARMMA\(_S\) to two baselines with the same architecture. The first, Baseline, does not include any KARMMA enhancements. The second, Baseline w/ δ, includes modality dropout and our proposed strategy for handling missing modalities. We denote “V” (video), “F” (optical flow), “A” (audio), and “D” (object detection annotations). “[A/D]” indicates that either audio or object detection is used, depending on the dataset. The reported results are action accuracy percentages (higher is better). Rows with a gray background indicate our final student, while bold and underlined values indicate the best and second-best results, respectively.

Table of results

Dynamic Missing Modality Patterns

Real-world scenarios often involve dynamic missing modality patterns due to sensor malfunctions. To assess robustness, we evaluate both baselines and our KARMMA\(_S\) under increasing probabilities of missing modalities, ranging from 0% to 90% during inference. The results of Baseline w/ δ demonstrate that incorporating modality dropout and our strategy for handling missing modalities consistently improves accuracy. Likewise, integrating our distillation approach enables KARMMA\(_S\) to consistently outperform both baselines across all scenarios and datasets.

Modality dropout plot

(a) Epic-Kitchens

Modality dropout plot

(b) Something-Something
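A minimal sketch of this evaluation protocol, assuming each modality is dropped independently with probability p at inference time (how exactly the missing patterns are sampled is not specified on this page) and using placeholder model and dataloader interfaces:

import random
import torch

@torch.no_grad()
def accuracy_under_missing(model, loader, p):
    # Each batch arrives as ({modality: tensor}, labels); the model is assumed to
    # accept a dict containing any subset of modalities (placeholder interface).
    correct = total = 0
    for feats, labels in loader:
        kept = {m: f for m, f in feats.items() if random.random() >= p}
        if not kept:                                     # assume at least one modality remains
            kept = dict([next(iter(feats.items()))])
        preds = model(kept).argmax(dim=-1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return 100.0 * correct / total

# Sweep the missing-modality probability from 0% to 90% as in the plots above.
# for p in [i / 10 for i in range(10)]:
#     print(f"p={p:.1f}: {accuracy_under_missing(student, test_loader, p):.1f}% accuracy")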

Resource Efficiency

Our KARMMA student reduces memory consumption by at least 50% compared to the teacher model while significantly lowering GFLOPs, resulting in faster inference times.

Memory usage plot
GFLOPs plot
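A hedged sketch of how such a comparison could be reproduced, using fvcore for FLOP counting and parameter size as a proxy for memory; the model handles and dummy input shapes are placeholders, not the paper's measurement setup.

import torch
from fvcore.nn import FlopCountAnalysis   # pip install fvcore

def report(model, example_inputs, name):
    # Parameter memory (MB) and GFLOPs for one forward pass with the given inputs.
    params_mb = sum(p.numel() * p.element_size() for p in model.parameters()) / 2**20
    gflops = FlopCountAnalysis(model, example_inputs).total() / 1e9
    print(f"{name}: {params_mb:.1f} MB of parameters, {gflops:.2f} GFLOPs")

# Example usage with placeholder models and dummy inputs:
# report(teacher, (dummy_video, dummy_flow, dummy_audio), "KARMMA teacher")
# report(student, (dummy_video, dummy_flow, dummy_audio), "KARMMA student")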

Qualitative Results

BibTeX


@article{santoscarrion2025knowledge,
    author  = {Santos-Villafranca, Maria and Carrión-Ojeda, Dustin and Perez-Yus, Alejandro and Bermudez-Cameo, Jesus and Guerrero, Jose J and Schaub-Meyer, Simone},
    title   = {Knowledge Distillation for Multimodal Egocentric Action Recognition Robust to Missing Modalities},
    journal = {arXiv},
    year    = {2025}
}