KTH Machine Learning Seminars

27 Apr 2021

Kai Han: Transformer in Transformer

Title: Transformer in Transformer

Speaker: Kai Han, Huawei Noah’s Ark Lab

Date and Time: Tuesday, April 27, 1-2 pm

Place: Zoom Meeting

Meeting ID: 621 2899 7306, Passcode: 373072

Abstract: The transformer is a type of self-attention-based neural network originally applied to NLP tasks. Recently, pure transformer-based models have been proposed to solve computer vision problems. These visual transformers usually view an image as a sequence of patches, but they ignore the intrinsic structural information inside each patch. In this paper, we propose a novel Transformer-iN-Transformer (TNT) model for modeling both patch-level and pixel-level representations. In each TNT block, an outer transformer block processes the patch embeddings, while an inner transformer block extracts local features from the pixel embeddings. The pixel-level features are projected to the space of the patch embeddings by a linear transformation layer and then added to the patch embeddings. By stacking TNT blocks, we build the TNT model for image recognition. Experiments on the ImageNet benchmark and downstream tasks demonstrate the superiority and efficiency of the proposed TNT architecture. For example, our TNT achieves 81.3% top-1 accuracy on ImageNet, which is 1.5% higher than that of DeiT with a similar computational cost.

Paper link: https://arxiv.org/abs/2103.00112
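
To make the inner/outer structure described in the abstract concrete, below is a minimal, illustrative PyTorch sketch of a single TNT block. It is not the authors' official implementation (see the paper link above for that); the dimensions, module names, and the use of nn.TransformerEncoderLayer are simplifying assumptions, and details such as the class token are omitted.

    # Hypothetical sketch of one TNT block: an inner transformer over pixel
    # (sub-patch) embeddings, a linear projection into the patch embedding
    # space, and an outer transformer over patch embeddings.
    import torch
    import torch.nn as nn

    class TNTBlock(nn.Module):
        def __init__(self, patch_dim=384, pixel_dim=24, num_pixels=16, heads=6):
            super().__init__()
            # Inner transformer: models pixel-level embeddings within a patch.
            self.inner = nn.TransformerEncoderLayer(
                d_model=pixel_dim, nhead=4,
                dim_feedforward=4 * pixel_dim, batch_first=True)
            # Linear projection from the flattened pixel embeddings of a patch
            # to the patch embedding space.
            self.proj = nn.Linear(num_pixels * pixel_dim, patch_dim)
            # Outer transformer: models patch-level embeddings.
            self.outer = nn.TransformerEncoderLayer(
                d_model=patch_dim, nhead=heads,
                dim_feedforward=4 * patch_dim, batch_first=True)

        def forward(self, patch_emb, pixel_emb):
            # patch_emb: (batch, num_patches, patch_dim)
            # pixel_emb: (batch, num_patches, num_pixels, pixel_dim)
            b, n, m, c = pixel_emb.shape
            # Inner block runs on the pixel sequence of each patch independently.
            pixel_emb = self.inner(pixel_emb.reshape(b * n, m, c)).reshape(b, n, m, c)
            # Project pixel-level features into patch space and add to patches.
            patch_emb = patch_emb + self.proj(pixel_emb.reshape(b, n, m * c))
            # Outer block mixes information across patches.
            patch_emb = self.outer(patch_emb)
            return patch_emb, pixel_emb

    # Example usage with dummy inputs (14x14 patches, 4x4 sub-patch tokens each).
    block = TNTBlock()
    patches = torch.randn(2, 196, 384)
    pixels = torch.randn(2, 196, 16, 24)
    patches, pixels = block(patches, pixels)

Stacking several such blocks, followed by a classification head on the patch-level (class token) representation, yields the full image recognition model described in the abstract.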

Bio: Kai Han is a researcher at Huawei Noah’s Ark Lab, working on computer vision, in particular visual backbone models and model compression. Before joining Huawei, he obtained an M.S. degree in computer vision from Peking University and a B.E. degree from Zhejiang University. You can learn more about Kai’s work and research here: https://www.semanticscholar.org/author/Kai-Han/3826388.

Organizer: Heng Fang