SRFormer: Permuted Self-Attention for Single Image Super-Resolution

This repository contains the official implementation of the following paper: SRFormer: Permuted Self-Attention for Single Image Super-Resolution (ICCV 2023)

Yupeng Zhou 1, Zhen Li 1, Chun-Le Guo 1, Song Bai 2, Ming-Ming Cheng 1, Qibin Hou 1
1 TMCC, School of Computer Science, Nankai University

Abstract: In this paper, we introduce SRFormer, a simple yet effective Transformer-based model for single image super-resolution. We rethink the design of the popular shifted window self-attention, expose and analyze several of its characteristic issues, and present permuted self-attention (PSA). PSA strikes an appropriate balance between channel and spatial information for self-attention, allowing each Transformer block to build pairwise correlations within large windows with even less computational burden. Our permuted self-attention is simple and can be easily applied to existing Transformer-based super-resolution networks. Without any bells and whistles, we show that our SRFormer achieves a 33.86 dB PSNR score on the Urban100 dataset, which is 0.46 dB higher than that of SwinIR, while using fewer parameters and computations.

SRFormer is a new image SR backbone with SOTA performance. SRFormer (arXiv link) achieves state-of-the-art performance. The core of SRFormer is PSA, a simple, efficient, and effective attention mechanism that builds large-range pairwise correlations with even less computational burden than the original WSA of SwinIR.

The table below compares performance with SwinIR under the same training strategy on the DIV2K dataset (x2 SR). SRFormer greatly outperforms SwinIR with fewer parameters (10.40M vs. 11.75M) and FLOPs (2741G vs. 2868G). More results can be found here.

You can apply PSA with just a few lines of code, significantly reducing computational complexity. We omit the head number and relative position encoding for simplicity; you can visit here to view more detailed code. We hope our simple and effective approach can serve as a useful tool for future research in super-resolution model design.
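To illustrate the "few lines of code" claim, here is a minimal PyTorch sketch of the permuted self-attention idea: the K/V channel dimension is shrunk by a factor of r*r and then r x r spatial patches are permuted into the channel dimension, so attention spans a large window while K/V carry far fewer tokens. This is a single-head sketch without relative position encoding (both omitted in the README's snippet as well); the class name `PSA`, the factor `r`, and the layer names are illustrative assumptions, not the repository's exact code.

```python
import torch
import torch.nn as nn


class PSA(nn.Module):
    """Sketch of permuted self-attention for one window of tokens.

    Assumed shapes: input x is (B, S*S, C), where S is the window size.
    Single head, no relative position encoding (for simplicity).
    """

    def __init__(self, dim, window_size, r=2):
        super().__init__()
        assert window_size % r == 0 and dim % (r * r) == 0
        self.ws, self.r = window_size, r
        self.q = nn.Linear(dim, dim)
        # K/V channels are shrunk to C / r^2 before the permutation
        self.kv = nn.Linear(dim, 2 * dim // (r * r))
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):
        B, N, C = x.shape
        S, r = self.ws, self.r
        q = self.q(x)                              # (B, N, C)
        k, v = self.kv(x).chunk(2, dim=-1)         # each (B, N, C / r^2)

        def permute(t):
            # Fold each r x r spatial patch into the channel dimension:
            # (B, S*S, C/r^2) -> (B, S*S / r^2, C)
            t = t.view(B, S // r, r, S // r, r, C // (r * r))
            t = t.permute(0, 1, 3, 2, 4, 5)
            return t.reshape(B, N // (r * r), C)

        k, v = permute(k), permute(v)
        # Attention map is (B, N, N / r^2): large query window,
        # r^2-times fewer key/value tokens than plain window attention.
        attn = (q @ k.transpose(-2, -1)) * (C ** -0.5)
        attn = attn.softmax(dim=-1)
        return self.proj(attn @ v)                 # (B, N, C)
```

With r=2, the attention map (and hence the dominant cost) shrinks by 4x for the same window size, which is what lets PSA afford larger windows than plain window self-attention.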