Efficient AI

ForAug: Recombining Foregrounds and Backgrounds to Improve Vision Transformer Training with Bias Mitigation

arXiv
We improve the training of vision transformers by segmenting foreground objects from their backgrounds and recombining them across the dataset. This makes the transformers both more accurate and more robust; a minimal sketch of the recombination step follows below.
Tobias Christian Nauen
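To illustrate the core operation, here is a minimal sketch of the recombination step in Python. It assumes foreground masks are already available (e.g., from a segmentation model); the resizing, placement, and filtering steps of the actual ForAug pipeline are omitted, and all names are illustrative.

```python
import numpy as np

def recombine(foreground: np.ndarray, mask: np.ndarray,
              background: np.ndarray) -> np.ndarray:
    """Alpha-composite a segmented foreground onto a new background.

    foreground, background: HxWx3 uint8 images of the same size.
    mask: HxW array in [0, 1], 1.0 where the object is.
    """
    alpha = mask.astype(np.float32)[..., None]  # add a channel axis
    out = alpha * foreground + (1.0 - alpha) * background
    return out.astype(np.uint8)

# Hypothetical usage: pair each image's foreground with a background
# sampled from a *different* image to decorrelate object and context.
fg = np.zeros((4, 4, 3), dtype=np.uint8)       # stand-in foreground
bg = np.full((4, 4, 3), 255, dtype=np.uint8)   # stand-in background
mask = np.zeros((4, 4)); mask[1:3, 1:3] = 1.0  # stand-in object mask
augmented = recombine(fg, mask, bg)
```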
Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers

Presentation at WACV 2025 on a large-scale benchmark of 45+ transformer models for image classification, evaluating accuracy, speed, and memory efficiency.
Which Transformer to Favor: A Comparative Analysis of Efficiency in Vision Transformers

WACV 2025
A comprehensive benchmark and analysis of more than 45 transformer models for image classification, evaluating their efficiency across multiple performance metrics. We identify the optimal architectures to use and find that scaling the model is more efficient than scaling the input images.
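For a concrete sense of what such an efficiency evaluation measures, below is a minimal PyTorch sketch that records inference throughput and peak GPU memory for a vision model. This is not the benchmark code from the paper; the model choice, batch size, and input resolution are illustrative, and a CUDA device is assumed.

```python
import time
import torch
import torchvision.models as models

def benchmark(model: torch.nn.Module, batch_size: int = 64,
              steps: int = 20, device: str = "cuda"):
    """Return (images/second, peak GPU memory in MB) for inference."""
    model = model.eval().to(device)
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    torch.cuda.reset_peak_memory_stats(device)
    with torch.no_grad():
        for _ in range(3):                 # warm-up iterations
            model(x)
        torch.cuda.synchronize(device)
        start = time.perf_counter()
        for _ in range(steps):
            model(x)
        torch.cuda.synchronize(device)
    elapsed = time.perf_counter() - start
    peak_mb = torch.cuda.max_memory_allocated(device) / 2**20
    return batch_size * steps / elapsed, peak_mb

# Illustrative usage: measure one ViT variant.
throughput, memory = benchmark(models.vit_b_16())
```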
TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back) using Taylor-Softmax

ICPR 2024 (oral)
This paper introduces TaylorShift, a novel reformulation of the attention mechanism using Taylor-Softmax that enables computing full token-to-token interactions in linear time. We analytically and empirically determine the crossover points at which TaylorShift becomes more efficient than conventional attention. TaylorShift outperforms the standard transformer architecture on 4 out of 5 tasks. A simplified sketch of the linearization idea follows below.
Tobias Christian Nauen
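To make the linearization concrete: expanding exp(q·k) to second order, 1 + q·k + (q·k)²/2, yields a kernel that factorizes through an explicit feature map φ, so attention can be computed without ever forming the N×N score matrix. The sketch below is a generic non-causal linearized attention using this Taylor kernel; it is not the paper's exact TaylorShift formulation, and all names are illustrative.

```python
import torch

def taylor_feature_map(x: torch.Tensor) -> torch.Tensor:
    """phi(x) with phi(q) . phi(k) = 1 + q.k + (q.k)^2 / 2,
    the second-order Taylor expansion of exp(q.k)."""
    B, N, d = x.shape
    ones = x.new_ones(B, N, 1)
    outer = torch.einsum("bni,bnj->bnij", x, x).reshape(B, N, d * d) / 2**0.5
    return torch.cat([ones, x, outer], dim=-1)   # (B, N, 1 + d + d^2)

def linear_taylor_attention(q, k, v):
    """O(N) attention: associativity lets us form phi(K)^T V once
    instead of the N x N score matrix."""
    phi_q, phi_k = taylor_feature_map(q), taylor_feature_map(k)
    kv = torch.einsum("bnf,bnd->bfd", phi_k, v)  # (B, F, d_v)
    norm = torch.einsum("bnf,bf->bn", phi_q, phi_k.sum(dim=1))
    return torch.einsum("bnf,bfd->bnd", phi_q, kv) / norm.unsqueeze(-1)

# Sanity check against quadratic attention with the same Taylor kernel.
q = torch.randn(2, 16, 8) / 8**0.5
k = torch.randn(2, 16, 8) / 8**0.5
v = torch.randn(2, 16, 8)
s = q @ k.transpose(-1, -2)
ref = ((1 + s + s**2 / 2) / (1 + s + s**2 / 2).sum(-1, keepdim=True)) @ v
assert torch.allclose(linear_taylor_attention(q, k, v), ref, atol=1e-5)
```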
TaylorShift: Shifting the Complexity of Self-Attention from Squared to Linear (and Back) using Taylor-Softmax

Oral presentation at ICPR 2024 introducing TaylorShift, a novel reformulation of the attention mechanism using Taylor-Softmax that enables full token-to-token interactions in linear time.
Tobias Christian Nauen
Just Leaf It: Accelerating Diffusion Classifiers with Hierarchical Class Pruning

arXiv
We speed up diffusion classifiers by exploiting a label hierarchy and pruning unrelated branches of the class tree; a simplified sketch of the idea follows below.
Arundhati S. Shanbhag
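The control flow behind the pruning can be sketched in a few lines: descend the label hierarchy level by level, score only the children of surviving nodes, and keep the best few, instead of evaluating every leaf class as a flat diffusion classifier would. The placeholder scoring function below stands in for the expensive per-class diffusion score; the tree, names, and `keep` parameter are illustrative, not the paper's implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    """A node in the label hierarchy; leaves are the actual classes."""
    name: str
    children: list["Node"] = field(default_factory=list)

def classify_with_pruning(score, root: Node, keep: int = 2) -> str:
    """Beam-search-style descent: expand only surviving subtrees."""
    frontier = [root]
    while any(n.children for n in frontier):
        # Children of surviving inner nodes compete with surviving leaves.
        candidates = [c for n in frontier for c in n.children]
        candidates += [n for n in frontier if not n.children]
        candidates.sort(key=score, reverse=True)
        frontier = candidates[:keep]                # prune the rest
    return max(frontier, key=score).name

# Illustrative stand-in scores; in practice these would come from the
# diffusion model's class-conditional denoising error.
scores = {"dog": 0.7, "cat": 0.3, "beagle": 0.6, "husky": 0.4,
          "siamese": 0.2, "persian": 0.1}
tree = Node("root", [Node("dog", [Node("beagle"), Node("husky")]),
                     Node("cat", [Node("siamese"), Node("persian")])])
print(classify_with_pruning(lambda n: scores[n.name], tree, keep=1))
# -> "beagle": the "cat" subtree is pruned, never scored at leaf level.
```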
SustainML

SustainML is dedicated to creating a sustainable ML framework for Green AI. By prioritizing energy efficiency, the project aims to pave the way for environmentally conscious AI solutions that are both efficient and effective.
Sustainable Embedded AI

Energy- and data-saving methods for environmental perception in embedded AI systems, using smart factory and smart farming applications as case studies; funded by the Carl Zeiss Foundation.