We show that newer text-to-image models are progressively worse as training data generators, despite better visual quality, because they collapse to a narrow aesthetic-centric distribution that diverges from real data.
We introduce PRISM, a framework that disentangles architectural priors for dataset distillation, outperforming single-teacher setups.
We introduce SubZeroCore, a novel, training-free coreset selection method that integrates submodular coverage and density into a single, unified objective.
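As a rough illustration of such a unified objective, the sketch below greedily maximizes a facility-location-style coverage term weighted by a kernel density estimate; the RBF kernel, the density weighting, and all names are assumptions, not SubZeroCore's actual formulation.

```python
# Hedged sketch: training-free greedy coreset selection folding coverage
# and density into a single gain. Illustrative assumption, not the exact
# SubZeroCore objective.
import numpy as np

def select_coreset(features: np.ndarray, budget: int, sigma: float = 1.0):
    n = features.shape[0]
    sq = (features ** 2).sum(axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * features @ features.T
    sim = np.exp(-np.clip(d2, 0, None) / (2 * sigma ** 2))  # pairwise RBF similarity
    density = sim.mean(axis=1)   # kernel density: how central each point is
    covered = np.zeros(n)        # current best similarity to the coreset
    selected = []
    for _ in range(budget):
        # Submodular marginal gain in coverage, weighted by candidate density.
        gain = (np.maximum(sim, covered).sum(axis=1) - covered.sum()) * density
        gain[selected] = -np.inf  # never pick the same point twice
        best = int(np.argmax(gain))
        selected.append(best)
        covered = np.maximum(covered, sim[best])
    return selected
```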
We present HyperCore, a lightweight adaptive coreset selection framework designed for noisy environments. HyperCore utilizes per-class hypersphere models and adaptively selects pruning thresholds.
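A minimal sketch of the per-class hypersphere idea, assuming centroid-based spheres in feature space and a percentile-based adaptive radius (both assumptions; the actual HyperCore criterion may differ):

```python
# Hedged sketch: per-class hypersphere pruning with an adaptive threshold.
# Centroid-based spheres and the percentile rule are illustrative assumptions.
import numpy as np

def hypersphere_prune(features: np.ndarray, labels: np.ndarray,
                      percentile: float = 90.0) -> np.ndarray:
    keep = np.zeros(len(labels), dtype=bool)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        center = features[idx].mean(axis=0)            # class hypersphere center
        dist = np.linalg.norm(features[idx] - center, axis=1)
        radius = np.percentile(dist, percentile)       # adaptive per-class radius
        keep[idx] = dist <= radius                     # drop likely-noisy outliers
    return keep
```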
We extend pretrained super-resolution models to larger images by using local-aware prompts.
We improve the training of vision transformers by segmenting objects and backgrounds in the training data and recombining them, making the transformers both more accurate and more robust.
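The recombination step can be pictured as a segmentation-guided cut-and-paste, sketched below assuming binary object masks as input; the paper's actual pipeline may differ.

```python
# Hedged sketch: paste the segmented object from one image onto the
# background of another. Binary HxW masks are an assumed input format.
import numpy as np

def recombine(img_obj: np.ndarray, mask_obj: np.ndarray,
              img_bg: np.ndarray) -> np.ndarray:
    mask = mask_obj[..., None].astype(img_obj.dtype)  # HxW -> HxWx1
    return mask * img_obj + (1.0 - mask) * img_bg     # object A on background B
```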
We present a comprehensive benchmark and analysis of more than 45 transformer models for image classification, evaluating their efficiency across various performance metrics. We identify the optimal architectures and find that model scaling is more efficient than image scaling.
We conduct the first systematic study of dataset distillation for image super-resolution.
This paper introduces TaylorShift, a novel reformulation of the attention mechanism using Taylor softmax that enables computing full token-to-token interactions in linear time. We analytically and empirically determine the crossover points where employing TaylorShift becomes more efficient than traditional attention. TaylorShift outperforms the traditional transformer architecture in 4 out of 5 tasks.
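The core trick can be sketched as follows: truncating softmax's exponential at second order, exp(s) ≈ 1 + s + s²/2, admits a finite feature map, so attention factorizes and the cost grows linearly in sequence length. The sketch below illustrates this general idea; TaylorShift's exact reformulation and normalization may differ.

```python
# Hedged sketch: linear-time attention via a 2nd-order Taylor softmax kernel,
# phi(q) @ phi(k) = 1 + q@k + (q@k)**2 / 2. Illustrative of the idea only.
import numpy as np

def taylor_feature_map(x: np.ndarray) -> np.ndarray:
    n, d = x.shape                    # feature dim grows as 1 + d + d**2
    outer = (x[:, :, None] * x[:, None, :]).reshape(n, d * d)
    return np.concatenate([np.ones((n, 1)), x, outer / np.sqrt(2)], axis=1)

def taylor_attention(Q: np.ndarray, K: np.ndarray, V: np.ndarray) -> np.ndarray:
    phi_q, phi_k = taylor_feature_map(Q), taylor_feature_map(K)
    kv = phi_k.T @ V                  # aggregate keys/values: linear in length
    num = phi_q @ kv                  # unnormalized outputs
    den = phi_q @ phi_k.sum(axis=0)   # positive normalizer (1 + s + s**2/2 > 0)
    return num / den[:, None]
```

Because the feature dimension scales with d², the linear variant only pays off beyond some sequence length, which is exactly the crossover the paper characterizes.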
We speed up diffusion classifiers by utilizing a label hierarchy and pruning unrelated paths.
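One way to picture the pruning, sketched under assumed interfaces (a hypothetical `Node` label tree and a `diffusion_loss(image, label)` scorer returning the conditional denoising loss, lower meaning a better class match):

```python
# Hedged sketch: hierarchy-guided pruning for a diffusion classifier.
# `diffusion_loss` and the Node tree are assumed interfaces, not the
# paper's actual API.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    children: list = field(default_factory=list)

def classify_with_hierarchy(image, root: Node, diffusion_loss, keep: int = 2):
    frontier = [root]
    while any(n.children for n in frontier):
        # Expand only surviving nodes; leaves pass through unchanged.
        candidates = [c for n in frontier for c in (n.children or [n])]
        # Keep the `keep` best-scoring branches, pruning unrelated subtrees.
        candidates.sort(key=lambda c: diffusion_loss(image, c.label))
        frontier = candidates[:keep]
    return min(frontier, key=lambda n: diffusion_loss(image, n.label)).label
```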
We improve dataset distillation by distilling only a representative coreset.
We apply the TaylorShift attention mechanism to global pixel-wise attention in image super-resolution.
This paper proposes a new method to parameterize open loop controls in stochastic optimal control problems using path signatures. We show that these controls are dense in the space of all admissible controls and establish conditions for stability of the controlled dynamics and target functional.
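In rough notation (assumed here, not taken verbatim from the paper), such a parameterization restricts controls to linear functionals of the truncated signature of the time-augmented driving path:

```latex
% Hedged sketch of a signature parameterization; notation is assumed.
% \hat{W}_t = (t, W_t) is the time-augmented driving path and
% \mathrm{Sig}^{\le N} its signature truncated at level N.
u^{\theta}_t \;=\; \bigl\langle \theta,\; \mathrm{Sig}^{\le N}(\hat{W})_{0,t} \bigr\rangle,
\qquad
\theta \in \bigoplus_{k=0}^{N} \bigl(\mathbb{R}^{d+1}\bigr)^{\otimes k}.
```

The density result then aligns with the universal approximation property of linear functionals of the signature on compact sets of paths.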
We consider a stochastic optimal control problem and solve it using the signature method.
We extend and test KEdge, an interpretable-by-design approach for graph neural networks, and compare it to gradient-based attribution techniques.