
When Pretty Isn't Useful: Investigating Why Modern Text-to-Image Models Fail as Reliable Training Data Generators

arXiv
We show that newer text-to-image models become progressively worse as training-data generators, despite their higher visual quality, because they collapse to a narrow, aesthetics-centric distribution that diverges from real data.
Krzysztof Adamkiewicz
PDF
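
The collapse described in the abstract can be made concrete with a quick measurement. Below is a minimal sketch, assuming image features have already been extracted with some pretrained encoder; the diversity and divergence metrics here are standard proxies, not the paper's evaluation protocol, and all names are illustrative.

```python
# Hypothetical sketch: quantify "distribution collapse" by comparing
# feature statistics of real vs. generated images. Assumes embeddings
# were already extracted (e.g., with a pretrained vision encoder).
import numpy as np
from scipy.linalg import sqrtm

def mean_pairwise_cosine_distance(feats: np.ndarray) -> float:
    """Diversity proxy: higher means a broader feature distribution."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sims = f @ f.T
    n = len(f)
    return float(1.0 - (sims.sum() - n) / (n * (n - 1)))

def frechet_distance(a: np.ndarray, b: np.ndarray) -> float:
    """FID-style divergence between Gaussians fit to two feature sets."""
    mu_a, mu_b = a.mean(0), b.mean(0)
    cov_a, cov_b = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b).real  # sqrtm may leave tiny imaginary parts
    return float(((mu_a - mu_b) ** 2).sum() + np.trace(cov_a + cov_b - 2 * covmean))

real = np.random.randn(512, 64)                # stand-in for real-image features
synth = 0.3 * np.random.randn(512, 64) + 1.0   # narrower, shifted: "collapsed"
print(mean_pairwise_cosine_distance(real), mean_pairwise_cosine_distance(synth))
print(frechet_distance(real, synth))
```

A collapsed synthetic distribution shows up as a markedly lower pairwise distance and a large Fréchet gap to the real features.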
PRISM: Diversifying Dataset Distillation by Decoupling Architectural Priors featured image

PRISM: Diversifying Dataset Distillation by Decoupling Architectural Priors

arXiv
We introduce PRISM, a framework that disentangles architectural priors in dataset distillation and outperforms single-teacher setups.
Brian Bernhard Moser
PDF
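
One way to read "decoupling architectural priors" is to let several structurally different teachers jointly supervise the synthetic data, rather than baking a single architecture's biases into it. The sketch below illustrates that reading with pooled soft labels; it is an assumption-laden stand-in, not PRISM's actual objective, and the stand-in teacher models are hypothetical.

```python
# Minimal sketch of the multi-architecture idea (an interpretation of the
# abstract, not PRISM itself): supervise learnable synthetic images with
# soft labels pooled from heterogeneous teachers, so no single
# architecture's prior dominates the distilled data.
import torch
import torch.nn.functional as F

def pooled_soft_labels(teachers, synthetic_images, temperature=4.0):
    """Average tempered softmax outputs across heterogeneous teachers."""
    with torch.no_grad():
        probs = [F.softmax(t(synthetic_images) / temperature, dim=1)
                 for t in teachers]
    return torch.stack(probs).mean(0)

# Illustrative stand-in teachers: a tiny CNN and an MLP share a label
# space but carry very different architectural priors.
cnn = torch.nn.Sequential(torch.nn.Conv2d(3, 8, 3, padding=1), torch.nn.ReLU(),
                          torch.nn.AdaptiveAvgPool2d(1), torch.nn.Flatten(),
                          torch.nn.Linear(8, 10))
mlp = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
synthetic = torch.randn(16, 3, 32, 32, requires_grad=True)  # learnable images
targets = pooled_soft_labels([cnn, mlp], synthetic)
```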
SubZeroCore: A Submodular Approach with Zero Training for Coreset Selection featured image

SubZeroCore: A Submodular Approach with Zero Training for Coreset Selection

arXiv
We introduce SubZeroCore, a novel, training-free coreset selection method that integrates submodular coverage and density into a single, unified objective.
Brian Bernhard Moser
PDF
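
A single objective combining coverage and density admits a compact greedy sketch, since coverage functions of this form are monotone submodular and greedy selection carries the usual (1 - 1/e) approximation guarantee. The objective below, facility-location coverage with a density-weighted gain, is an illustrative stand-in for SubZeroCore's actual formulation.

```python
# Training-free greedy coreset selection in the spirit of the abstract
# (not the exact SubZeroCore objective): maximize submodular coverage
# over pairwise similarities, scaling each candidate's marginal gain by
# a density score so dense regions are preferred.
import numpy as np

def select_coreset(feats: np.ndarray, budget: int, alpha: float = 0.5):
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = np.clip(f @ f.T, 0.0, None)          # nonnegative similarities
    density = sim.mean(axis=1)                 # simple kernel-density proxy
    density = density / density.max()
    covered = np.zeros(len(f))                 # best coverage so far per point
    selected = []
    for _ in range(budget):
        # marginal coverage gain per candidate (facility-location style)
        gains = np.maximum(sim - covered[None, :], 0.0).sum(axis=1)
        gains = gains * density ** alpha       # fold density into one objective
        gains[selected] = -np.inf              # never re-pick a selected point
        j = int(gains.argmax())
        selected.append(j)
        covered = np.maximum(covered, sim[j])
    return selected

coreset = select_coreset(np.random.randn(200, 32), budget=10)
print(coreset)
```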
HyperCore: Coreset Selection under Noise via Hypersphere Models featured image

HyperCore: Coreset Selection under Noise via Hypersphere Models

arXiv
We present HyperCore, a lightweight, adaptive coreset selection framework designed for noisy environments. HyperCore fits per-class hypersphere models and adaptively selects pruning thresholds.
Brian Bernhard Moser
PDF
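
A minimal sketch of the per-class hypersphere idea follows, with an assumed quantile rule standing in for the paper's adaptive threshold selection; samples far outside their class sphere are treated as likely label noise.

```python
# Illustrative per-class hypersphere pruning (not HyperCore's exact
# scoring or threshold rule): fit one sphere per class around the class
# mean and keep only samples inside an adaptively chosen radius.
import numpy as np

def hypersphere_coreset(feats, labels, keep_quantile=0.8):
    keep = np.zeros(len(feats), dtype=bool)
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        center = feats[idx].mean(axis=0)
        dist = np.linalg.norm(feats[idx] - center, axis=1)
        radius = np.quantile(dist, keep_quantile)  # adaptive per-class radius
        keep[idx[dist <= radius]] = True
    return keep

feats = np.random.randn(300, 16)
labels = np.random.randint(0, 3, size=300)
mask = hypersphere_coreset(feats, labels)
print(mask.sum(), "of", len(mask), "samples kept")
```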
ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation featured image

ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation

arXiv
We improve the training of vision transformers by segmenting objects and backgrounds in training datasets and recombining them. This makes the transformers both more accurate and more robust.
Tobias Christian Nauen
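
The recombination step itself is simple once segmentations are available. Below is a sketch assuming foreground masks are given; ForAug's actual segmentation pipeline and placement strategy may differ.

```python
# Illustrative recombination: paste one image's masked foreground onto an
# unrelated background, so a model cannot rely on object-background
# co-occurrence (the core bias ForAug targets).
import numpy as np

def recombine(fg_image, fg_mask, bg_image):
    """Alpha-composite a foreground onto a different background.

    fg_image, bg_image: (H, W, 3) float arrays; fg_mask: (H, W) in [0, 1].
    """
    m = fg_mask[..., None]
    return m * fg_image + (1.0 - m) * bg_image

fg = np.random.rand(64, 64, 3)
bg = np.random.rand(64, 64, 3)
mask = np.zeros((64, 64))
mask[16:48, 16:48] = 1.0        # stand-in for a real segmentation mask
augmented = recombine(fg, mask, bg)
```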

A Study in Dataset Distillation for Image Super-Resolution

arXiv
We conduct the first systematic study of dataset distillation for image super-resolution.
Tobias Dietz
PDF

Just Leaf It: Accelerating Diffusion Classifiers with Hierarchical Class Pruning

arXiv
We speed up diffusion classifiers by utilizing a label hierarchy and pruning unrelated paths.
Arundhati S. Shanbhag
PDF
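
A toy version of the pruning logic is shown below. A dictionary stands in for the expensive per-class denoising-error scores a diffusion classifier would compute, and the hierarchy, uniform depth, and beam rule are illustrative assumptions.

```python
# Sketch of hierarchical class pruning: score coarse superclasses first
# with the expensive per-class test, then descend only into the best
# branches instead of scoring every leaf label.
children = {"root": ["animal", "vehicle"],
            "animal": ["cat", "dog"], "vehicle": ["car", "truck"]}
costs = {"animal": 0.2, "vehicle": 0.9, "cat": 0.1, "dog": 0.4,
         "car": 0.8, "truck": 0.95}   # stand-in denoising errors (lower = better)

def hierarchical_classify(score_fn, children, root="root", beam=1):
    """Assumes a uniform-depth hierarchy; prunes all but `beam` branches
    per level, so unrelated subtrees are never scored."""
    frontier = children[root]
    while frontier[0] in children:              # internal nodes remain
        best = sorted(frontier, key=score_fn)[:beam]
        frontier = [c for n in best for c in children[n]]
    return min(frontier, key=score_fn)

print(hierarchical_classify(costs.__getitem__, children))  # -> "cat"
```

With a realistically large leaf set, scoring a handful of superclasses and descending only the surviving branches touches far fewer labels than evaluating every leaf.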

Distill the Best, Ignore the Rest: Improving Dataset Distillation with Loss-Value-Based Pruning

arXiv
We improve dataset distillation by distilling only a representative coreset.
Brian Bernhard Moser
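
A sketch of the pruning idea as described: rank the real dataset by a pretrained model's per-sample loss and distill only a representative slice. The band boundaries below are illustrative assumptions, not the paper's settings.

```python
# Loss-value-based pruning sketch: drop the extremes (trivially easy and
# likely-noisy samples) and keep a representative middle band as the
# coreset to distill from.
import numpy as np

def loss_band_coreset(losses: np.ndarray, keep_frac=0.5, skip_top_frac=0.1):
    order = np.argsort(losses)                  # easy -> hard
    n = len(losses)
    hi = int(n * (1.0 - skip_top_frac))         # drop the noisiest tail
    lo = hi - int(n * keep_frac)                # ...and the easiest samples
    return order[max(lo, 0):hi]

losses = np.random.exponential(size=1000)       # stand-in per-sample losses
coreset_idx = loss_band_coreset(losses)
print(len(coreset_idx))
```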

A Low-Resolution Image is Worth 1x1 Words: Enabling Fine Image Super-Resolution with Transformers and TaylorShift

arXiv
We utilize the TaylorShift attention mechanism for global pixel-wise attention in image super-resolution.
Sanath Budakegowdanadoddi Nagaraju
PDF
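
For context, the core trick behind Taylor-style linear attention fits in a few lines: a second-order Taylor expansion of exp(q·k) factorizes into explicit feature maps, so attention never materializes the n × n matrix. The sketch below captures that flavor only; TaylorShift's exact normalization and shift mechanism differ, and all names here are illustrative.

```python
# Linear-complexity attention via a 2nd-order Taylor expansion of
# exp(q.k): phi(q) . phi(k) = 1 + q.k + (q.k)^2 / 2, computed without
# ever forming the n x n attention matrix.
import numpy as np

def taylor_feature_map(x):
    """x: (n, d) -> (n, 1 + d + d*d) features realizing the Taylor kernel."""
    n, d = x.shape
    outer = (x[:, :, None] * x[:, None, :]).reshape(n, d * d) / np.sqrt(2.0)
    return np.concatenate([np.ones((n, 1)), x, outer], axis=1)

def taylor_attention(Q, K, V):
    """O(n) in sequence length: aggregate keys/values first, then query."""
    q, k = taylor_feature_map(Q), taylor_feature_map(K)
    kv = k.T @ V                                # (feat, d_v), linear in n
    norm = q @ k.sum(axis=0)                    # per-query normalizer, (n,)
    return (q @ kv) / norm[:, None]

n, d = 256, 8                                   # e.g., pixels of a 16x16 patch
Q, K, V = (np.random.randn(n, d) * 0.1 for _ in range(3))
out = taylor_attention(Q, K, V)
print(out.shape)                                # (256, 8)
```

The quadratic term costs d² features per token, which is why this trade pays off when the sequence (here, the number of pixels) is much longer than the head dimension.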