Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning
Nov 18, 2024
Tobias Christian Nauen
Brian Bernhard Moser
Federico Raue
Stanislav Frolov
Andreas Dengel

Abstract
Dataset distillation has gained significant interest in recent years, yet existing approaches typically distill from the entire dataset, potentially including non-beneficial samples. We introduce a novel “Prune First, Distill After” framework that systematically prunes datasets via loss-based sampling prior to distillation. By applying pruning before classical distillation techniques and generative priors, we create a representative core-set that leads to enhanced generalization for unseen architectures, a significant challenge for current distillation methods. More specifically, our proposed framework significantly boosts distillation quality, achieving up to a 5.2 percentage point accuracy increase even with substantial dataset pruning, i.e., removing 80% of the original dataset prior to distillation. Overall, our experimental results highlight the advantages of our easy-sample prioritization and cross-architecture robustness, paving the way for more effective and higher-quality dataset distillation.
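The core idea can be summarized as: score every training sample by its loss under a pretrained classifier, keep only the lowest-loss (“easiest”) fraction of each class, and then run any off-the-shelf distillation method on that core-set. The PyTorch sketch below illustrates such a loss-value-based pruning step. It is a minimal illustration under these assumptions, not the paper’s implementation; the scorer model and the loss_based_coreset helper are hypothetical names.

import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset

@torch.no_grad()
def loss_based_coreset(dataset, scorer, keep_fraction=0.2, device="cpu"):
    # Keep the keep_fraction lowest-loss ("easiest") samples of each class.
    scorer = scorer.eval().to(device)
    losses, labels = [], []
    for x, y in DataLoader(dataset, batch_size=256):
        logits = scorer(x.to(device))
        # Per-sample cross-entropy: a low loss marks an easy, prototypical sample.
        losses.append(F.cross_entropy(logits, y.to(device), reduction="none").cpu())
        labels.append(y)
    losses, labels = torch.cat(losses), torch.cat(labels)

    keep = []
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        k = max(1, int(keep_fraction * len(idx)))
        # Sort this class by loss (ascending) and keep the k easiest samples.
        keep.extend(idx[losses[idx].argsort()[:k]].tolist())
    return Subset(dataset, keep)

# Usage sketch: prune first, distill after (distillation method not shown).
# coreset = loss_based_coreset(train_set, pretrained_classifier, keep_fraction=0.2)
# distilled = run_any_distillation_method(coreset)

With keep_fraction=0.2, this corresponds to the paper’s most aggressive setting of removing 80% of the original dataset before distillation.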
Type
arXiv preprint (arXiv:2411.12115)
Citation
If you use this work, please cite our paper:
@misc{moser2024distillbestignorerest,
  title         = {Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning},
  author        = {Brian B. Moser and Federico Raue and Tobias C. Nauen and Stanislav Frolov and Andreas Dengel},
  year          = {2024},
  eprint        = {2411.12115},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CV}
}
Authors
Tobias Christian Nauen
PhD Student
I’m an artificial intelligence researcher at DFKI and RPTU Kaiserslautern-Landau.
My research interests include efficient deep learning, transformer models, multimodal learning, and computer vision.
In my PhD project, I focus on developing efficient transformer models for vision, language, and multimodal tasks.