TL;DR: EMAT processes high-resolution correlation tokens, boosting few-shot classification and segmentation, especially for small objects, while using at least four times fewer parameters than existing methods. It supports N-way K-shot tasks and correctly outputs empty masks when no target is present.
Few-shot classification and segmentation (FS-CS) focuses on jointly performing multi-label classification and multi-class segmentation using few annotated examples. Although the current state of the art (SOTA) achieves high accuracy in both tasks, it struggles with small objects. To overcome this, we propose the Efficient Masked Attention Transformer (EMAT), which improves classification and segmentation accuracy, especially for small objects. EMAT introduces three modifications: a novel memory-efficient masked attention mechanism, a learnable downscaling strategy, and parameter-efficiency enhancements. EMAT outperforms all FS-CS methods on the PASCAL-5i and COCO-20i datasets, using at least four times fewer trainable parameters. Moreover, as the current FS-CS evaluation setting discards available annotations, despite their costly collection, we introduce two novel evaluation settings that consider these annotations to better reflect practical scenarios.
Our proposed EMAT builds on the classification-segmentation transformer (CST), the previous SOTA for FS-CS. Both models share the same feature extraction process: a frozen, pre-trained ViT extracts support and query tokens, which are correlated via cosine similarity to form the correlation tokens.
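To make the shared feature-extraction step concrete, here is a minimal sketch of building a correlation map from query and support ViT tokens via cosine similarity. The tensor names, shapes, and the way such maps would be stacked into correlation tokens are illustrative assumptions, not the exact CST/EMAT code.

```python
import torch
import torch.nn.functional as F

def correlation_tokens(query_tokens, support_tokens):
    """Correlate query and support ViT patch tokens via cosine similarity.

    query_tokens:   (Tq, D) patch tokens of the query image
    support_tokens: (Ts, D) patch tokens of one support image
    Returns a (Tq, Ts) cosine-similarity map; stacking such maps over
    support images (and ViT layers) would yield the correlation tokens.
    Shapes and the stacking order are illustrative assumptions.
    """
    q = F.normalize(query_tokens, dim=-1)    # unit-norm query tokens
    s = F.normalize(support_tokens, dim=-1)  # unit-norm support tokens
    return q @ s.t()                         # cosine-similarity matrix

# Example with random features standing in for frozen ViT outputs.
corr = correlation_tokens(torch.randn(256, 384), torch.randn(256, 384))
print(corr.shape)  # torch.Size([256, 256])
```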
EMAT differs from CST in its two-layer transformer (purple blocks in the figure below), which processes the correlation tokens and feeds task-specific heads for multi-label classification and multi-class segmentation. EMAT enhances this transformer with three key improvements: a novel memory-efficient masked attention mechanism, a learnable downscaling strategy, and parameter-efficiency enhancements.
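The snippet below is a generic sketch of masked attention over correlation tokens, where invalid positions (e.g., support tokens outside the annotated region; how the mask is derived is an assumption) are excluded from the softmax. It illustrates the basic idea only and is not EMAT's memory-efficient formulation, whose details are in the paper.

```python
import torch
import torch.nn.functional as F

def masked_attention(q, k, v, valid):
    """Plain masked attention sketch (not EMAT's memory-efficient version).

    q, k, v: (N, D) token embeddings.
    valid:   (N,) boolean mask over key positions; invalid keys get
             -inf scores and therefore zero attention weight.
    """
    scores = (q @ k.t()) / k.shape[-1] ** 0.5            # (N, N) attention logits
    scores = scores.masked_fill(~valid[None, :], float("-inf"))
    return F.softmax(scores, dim=-1) @ v                 # attended values

tokens = torch.randn(16, 64)
keep = torch.rand(16) > 0.3          # hypothetical validity mask
out = masked_attention(tokens, tokens, tokens, keep)
print(out.shape)  # torch.Size([16, 64])
```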
The original few-shot classification and segmentation (FS-CS) setting follows the N-way K-shot formulation: each task has a support set and one query image, and the support set contains N classes with K examples each. This setting assumes every support image has annotations for only one class. If a support image includes annotations from multiple classes, its label vector and segmentation mask are adjusted to keep just one class before the support set is created, discarding available annotations and wasting costly labeling effort.
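The sketch below illustrates this annotation-discarding step: a multi-label vector and segmentation mask are reduced to a single class before the support set is built. The function name, label encoding, and mask encoding are hypothetical, not the benchmark's actual preprocessing code.

```python
import numpy as np

def keep_single_class(label_vec, seg_mask, class_id):
    """Reduce a multi-class support annotation to one class (original FS-CS setting).

    label_vec: (C,) multi-label vector over C classes.
    seg_mask:  (H, W) mask with pixel values in {0, ..., C} (0 = background).
    class_id:  the single class to keep (1-indexed); annotations of all
               other classes are discarded.
    Names and encodings are illustrative assumptions.
    """
    new_label = np.zeros_like(label_vec)
    new_label[class_id - 1] = label_vec[class_id - 1]
    new_mask = np.where(seg_mask == class_id, class_id, 0)
    return new_label, new_mask

labels = np.array([1, 1, 0])                 # image annotated with classes 1 and 2
mask = np.random.randint(0, 3, size=(8, 8))  # toy mask with classes 0-2
single_labels, single_mask = keep_single_class(labels, mask, class_id=1)
print(single_labels, np.unique(single_mask))
```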
To better utilize existing annotations and reflect real-world scenarios, we propose two new FS-CS evaluation settings:
The figure below shows how a 2-way 1-shot task changes under the three evaluation settings and how each setting affects CST* and EMAT. Note that CST* uses the same backbone as EMAT (DINOv2-S), whereas the original CST uses DINO-S.
EMAT provides the largest improvement over CST* for the smallest objects (those occupying 0-5% of the image), and the gain gradually shrinks as object size increases. The enhanced classification and segmentation accuracy of EMAT is likely due to better localization enabled by its higher-resolution correlation tokens.
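Here, object size refers to the fraction of image pixels covered by the ground-truth mask of the target class. A minimal sketch of that size bucketing is shown below; only the 0-5% bound for the smallest objects comes from the text, and the remaining bin edges are assumptions.

```python
import numpy as np

def object_size_bucket(seg_mask, edges=(0.05, 0.15, 0.30, 1.0)):
    """Assign a ground-truth mask to a relative-size bucket.

    seg_mask: (H, W) binary mask of the target object.
    edges:    upper bucket bounds as fractions of the image area
              (only the 0-5% bound matches the text; the rest are assumptions).
    """
    frac = seg_mask.astype(bool).mean()   # fraction of pixels covered by the object
    for i, upper in enumerate(edges):
        if frac <= upper:
            return i
    return len(edges) - 1

mask = np.zeros((100, 100), dtype=np.uint8)
mask[:5, :5] = 1                          # object covers 0.25% of the image
print(object_size_bucket(mask))           # 0 -> smallest-object bucket
```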
@inproceedings{carrion2025emat,
author = {Dustin Carrión-Ojeda and Stefan Roth and Simone Schaub-Meyer},
title = {Efficient Masked Attention Transformer for Few-Shot Classification and Segmentation},
booktitle = {GCPR},
year = {2025},
}
This work was funded by the Hessian Ministry of Science and Research, Arts and Culture (HMWK) through the project "The Third Wave of Artificial Intelligence - 3AI". The work was further supported by the Deutsche Forschungsgemeinschaft (German Research Foundation, DFG) under Germany's Excellence Strategy (EXC 3057/1 "Reasonable Artificial Intelligence", Project No. 533677015). Stefan Roth acknowledges support by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 866008).