Object Detection and Instance Segmentation with Swin Transformer

Swin Transformer: Hierarchical Vision Transformer using Shifted Windows
Paper: https://arxiv.org/abs/2103.14030
Code: https://github.com/microsoft/Swin-Transformer
Author: Han Hu

Introduction

Swin Transformer (the name stands for Shifted windows) was first described in the arXiv paper above and can serve as a general-purpose backbone for computer vision. It is fundamentally a hierarchical Transformer whose representation is computed with shifted windows. The shifted windowing scheme brings greater efficiency by limiting self-attention computation to non-overlapping local windows, while still allowing cross-window connections.
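To make the windowing scheme concrete, here is a minimal sketch of window partitioning with and without the cyclic shift, written against plain PyTorch. `window_partition` mirrors the helper of the same name in the official repo, but this is an illustrative reimplementation under toy shapes, not the repo's code.

```python
# Toy illustration of Swin's (shifted) window partitioning in plain PyTorch.
import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into (num_windows*B, ws, ws, C) windows."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, window_size, window_size, C)

x = torch.randn(1, 8, 8, 96)  # a toy 8x8 map with Swin-T's stage-1 width

# Regular layer: self-attention is confined to each non-overlapping 4x4 window.
windows = window_partition(x, window_size=4)                # (4, 4, 4, 96)

# Shifted layer: cyclically shift by half a window before partitioning, so the
# new windows straddle the previous window borders and connect them.
shifted = torch.roll(x, shifts=(-2, -2), dims=(1, 2))
shifted_windows = window_partition(shifted, window_size=4)  # (4, 4, 4, 96)
```

Attention then runs independently inside each window; in the shifted layout, an attention mask keeps tokens that `torch.roll` wrapped around from attending to one another, and a reverse roll restores the original layout afterwards.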

Swin Transformer achieves strong performance on COCO object detection (58.7 box AP and 51.1 mask AP on test-dev) and ADE20K semantic segmentation (53.5 mIoU on val), surpassing previous models by a large margin.

Main Results on ImageNet with Pretrained Models

ImageNet-1K and ImageNet-22K Pretrained Swin-V1 Models

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Swin-T | ImageNet-1K | 224x224 | 81.2 | 95.5 | 28M | 4.5G | 755 | - | github/baidu/config/log |
| Swin-S | ImageNet-1K | 224x224 | 83.2 | 96.2 | 50M | 8.7G | 437 | - | github/baidu/config/log |
| Swin-B | ImageNet-1K | 224x224 | 83.5 | 96.5 | 88M | 15.4G | 278 | - | github/baidu/config/log |
| Swin-B | ImageNet-1K | 384x384 | 84.5 | 97.0 | 88M | 47.1G | 85 | - | github/baidu/config |
| Swin-T | ImageNet-22K | 224x224 | 80.9 | 96.0 | 28M | 4.5G | 755 | github/baidu/config | github/baidu/config |
| Swin-S | ImageNet-22K | 224x224 | 83.2 | 97.0 | 50M | 8.7G | 437 | github/baidu/config | github/baidu/config |
| Swin-B | ImageNet-22K | 224x224 | 85.2 | 97.5 | 88M | 15.4G | 278 | github/baidu/config | github/baidu/config |
| Swin-B | ImageNet-22K | 384x384 | 86.4 | 98.0 | 88M | 47.1G | 85 | github/baidu | github/baidu/config |
| Swin-L | ImageNet-22K | 224x224 | 86.3 | 97.9 | 197M | 34.5G | 141 | github/baidu/config | github/baidu/config |
| Swin-L | ImageNet-22K | 384x384 | 87.3 | 98.2 | 197M | 103.9G | 42 | github/baidu | github/baidu/config |
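To try one of these classifiers without cloning the repo, the timm library also distributes Swin-V1 weights. A minimal inference sketch, assuming timm is installed and using its `swin_tiny_patch4_window7_224` identifier, which corresponds to the Swin-T 224x224 ImageNet-1K row above:

```python
# Minimal inference sketch, assuming `pip install timm`; the identifier below
# is timm's name for the Swin-T 224x224 ImageNet-1K weights.
import timm
import torch

model = timm.create_model("swin_tiny_patch4_window7_224", pretrained=True)
model.eval()

x = torch.randn(1, 3, 224, 224)     # stand-in for a preprocessed RGB image
with torch.no_grad():
    logits = model(x)               # (1, 1000) ImageNet-1K class logits
print(logits.argmax(dim=1))
```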

ImageNet-1K and ImageNet-22K Pretrained Swin-V2 Models

| name | pretrain | resolution | window | acc@1 | acc@5 | #params | FLOPs | FPS | 22K model | 1K model |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| SwinV2-T | ImageNet-1K | 256x256 | 8x8 | 81.8 | 95.9 | 28M | 5.9G | 572 | - | github/baidu/config |
| SwinV2-S | ImageNet-1K | 256x256 | 8x8 | 83.7 | 96.6 | 50M | 11.5G | 327 | - | github/baidu/config |
| SwinV2-B | ImageNet-1K | 256x256 | 8x8 | 84.2 | 96.9 | 88M | 20.3G | 217 | - | github/baidu/config |
| SwinV2-T | ImageNet-1K | 256x256 | 16x16 | 82.8 | 96.2 | 28M | 6.6G | 437 | - | github/baidu/config |
| SwinV2-S | ImageNet-1K | 256x256 | 16x16 | 84.1 | 96.8 | 50M | 12.6G | 257 | - | github/baidu/config |
| SwinV2-B | ImageNet-1K | 256x256 | 16x16 | 84.6 | 97.0 | 88M | 21.8G | 174 | - | github/baidu/config |
| SwinV2-B* | ImageNet-22K | 256x256 | 16x16 | 86.2 | 97.9 | 88M | 21.8G | 174 | github/baidu/config | github/baidu/config |
| SwinV2-B* | ImageNet-22K | 384x384 | 24x24 | 87.1 | 98.2 | 88M | 54.7G | 57 | github/baidu/config | github/baidu/config |
| SwinV2-L* | ImageNet-22K | 256x256 | 16x16 | 86.9 | 98.0 | 197M | 47.5G | 95 | github/baidu/config | github/baidu/config |
| SwinV2-L* | ImageNet-22K | 384x384 | 24x24 | 87.6 | 98.3 | 197M | 115.4G | 33 | github/baidu/config | github/baidu/config |

Note:

  • SwinV2-B* (SwinV2-L*) with input resolutions of 256x256 and 384x384 are both fine-tuned from the same pretrained model, which uses a smaller input resolution of 192x192.
  • SwinV2-B* (384x384) achieves 78.08 acc@1 on ImageNet-1K-V2, while SwinV2-L* (384x384) achieves 78.31.

ImageNet-1K Pretrained Swin MLP Models

| name | pretrain | resolution | acc@1 | acc@5 | #params | FLOPs | FPS | 1K model |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Mixer-B/16 | ImageNet-1K | 224x224 | 76.4 | - | 59M | 12.7G | - | official repo |
| ResMLP-S24 | ImageNet-1K | 224x224 | 79.4 | - | 30M | 6.0G | 715 | timm |
| ResMLP-B24 | ImageNet-1K | 224x224 | 81.0 | - | 116M | 23.0G | 231 | timm |
| Swin-T/C24 | ImageNet-1K | 256x256 | 81.6 | 95.7 | 28M | 5.9G | 563 | github/baidu/config |
| SwinMLP-T/C24 | ImageNet-1K | 256x256 | 79.4 | 94.6 | 20M | 4.0G | 807 | github/baidu/config |
| SwinMLP-T/C12 | ImageNet-1K | 256x256 | 79.6 | 94.7 | 21M | 4.0G | 792 | github/baidu/config |
| SwinMLP-T/C6 | ImageNet-1K | 256x256 | 79.7 | 94.9 | 23M | 4.0G | 766 | github/baidu/config |
| SwinMLP-B | ImageNet-1K | 224x224 | 81.3 | 95.3 | 61M | 10.4G | 409 | github/baidu/config |

Note: access code for baidu is swin. C24 means each head has 24 channels.
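To make the C24 naming concrete: each stage's head count follows from dividing the stage width by the per-head channel count. A small illustration, assuming Swin-T's standard stage widths of (96, 192, 384, 768):

```python
# Illustration only: derive per-stage head counts from a fixed
# channels-per-head budget, assuming Swin-T's standard stage widths.
stage_dims = (96, 192, 384, 768)    # embedding dim per stage (Swin-T)

def heads_for(channels_per_head: int) -> tuple:
    return tuple(dim // channels_per_head for dim in stage_dims)

print(heads_for(24))  # C24 variant   -> (4, 8, 16, 32)
print(heads_for(32))  # Swin-T default -> (3, 6, 12, 24)
```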

ImageNet-22K Pretrained Swin-MoE Models

  • Please refer to get_started for instructions on running Swin-MoE.
  • Pretrained models of Swin-MoE can be found in MODEL HUB.

Main Results on Downstream Tasks

COCO Object Detection (2017 val)

| Backbone | Method | pretrain | Lr Schd | box mAP | mask mAP | #params | FLOPs |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Swin-T | Mask R-CNN | ImageNet-1K | 3x | 46.0 | 41.6 | 48M | 267G |
| Swin-S | Mask R-CNN | ImageNet-1K | 3x | 48.5 | 43.3 | 69M | 359G |
| Swin-T | Cascade Mask R-CNN | ImageNet-1K | 3x | 50.4 | 43.7 | 86M | 745G |
| Swin-S | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 107M | 838G |
| Swin-B | Cascade Mask R-CNN | ImageNet-1K | 3x | 51.9 | 45.0 | 145M | 982G |
| Swin-T | RepPoints V2 | ImageNet-1K | 3x | 50.0 | - | 45M | 283G |
| Swin-T | Mask RepPoints V2 | ImageNet-1K | 3x | 50.3 | 43.6 | 47M | 292G |
| Swin-B | HTC++ | ImageNet-22K | 6x | 56.4 | 49.1 | 160M | 1043G |
| Swin-L | HTC++ | ImageNet-22K | 3x | 57.1 | 49.5 | 284M | 1470G |
| Swin-L | HTC++* | ImageNet-22K | 3x | 58.0 | 50.4 | 284M | - |

Note: * indicates multi-scale testing.
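These detection results come from the companion Swin-Transformer-Object-Detection repo, which is built on mmdetection. A config override for the Swin-T + Mask R-CNN row might look roughly like the sketch below; field names follow mmdetection conventions and the hyperparameters are the usual Swin-T defaults, but treat the exact keys and values as assumptions and check the official configs before use.

```python
# Illustrative mmdetection-style config override for the Swin-T + Mask R-CNN
# row; verify keys and values against the official detection repo before use.
model = dict(
    backbone=dict(
        type="SwinTransformer",
        embed_dim=96,                     # stage-1 width
        depths=[2, 2, 6, 2],              # blocks per stage
        num_heads=[3, 6, 12, 24],         # attention heads per stage
        window_size=7,                    # 7x7 local attention windows
        drop_path_rate=0.2,               # stochastic depth
    ),
    neck=dict(
        type="FPN",
        in_channels=[96, 192, 384, 768],  # per-stage Swin-T output widths
    ),
)
```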

ADE20K Semantic Segmentation (val)

| Backbone | Method | pretrain | Crop Size | Lr Schd | mIoU | mIoU (ms+flip) | #params | FLOPs |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| Swin-T | UPerNet | ImageNet-1K | 512x512 | 160K | 44.51 | 45.81 | 60M | 945G |
| Swin-S | UPerNet | ImageNet-1K | 512x512 | 160K | 47.64 | 49.47 | 81M | 1038G |
| Swin-B | UPerNet | ImageNet-1K | 512x512 | 160K | 48.13 | 49.72 | 121M | 1188G |
| Swin-B | UPerNet | ImageNet-22K | 640x640 | 160K | 50.04 | 51.66 | 121M | 1841G |
| Swin-L | UPerNet | ImageNet-22K | 640x640 | 160K | 52.05 | 53.53 | 234M | 3230G |

Citing Swin Transformer

@inproceedings{liu2021Swin,
  title={Swin Transformer: Hierarchical Vision Transformer using Shifted Windows},
  author={Liu, Ze and Lin, Yutong and Cao, Yue and Hu, Han and Wei, Yixuan and Zhang, Zheng and Lin, Stephen and Guo, Baining},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year={2021}
}

Citing Local Relation Networks (the first full-attention visual backbone)

@inproceedings{hu2019local,
  title={Local Relation Networks for Image Recognition},
  author={Hu, Han and Zhang, Zheng and Xie, Zhenda and Lin, Stephen},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  pages={3464--3473},
  year={2019}
}

Citing Swin Transformer V2

@inproceedings{liu2021swinv2,
  title={Swin Transformer V2: Scaling Up Capacity and Resolution}, 
  author={Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Citing SimMIM (a self-supervised approach that enables SwinV2-G)

@inproceedings{xie2021simmim,
  title={SimMIM: A Simple Framework for Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Bao, Jianmin and Yao, Zhuliang and Dai, Qi and Hu, Han},
  booktitle={International Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2022}
}

Citing SimMIM Data Scaling

@article{xie2022data,
  title={On Data Scaling in Masked Image Modeling},
  author={Xie, Zhenda and Zhang, Zheng and Cao, Yue and Lin, Yutong and Wei, Yixuan and Dai, Qi and Hu, Han},
  journal={arXiv preprint arXiv:2206.04664},
  year={2022}
}

Citing Swin-MoE

@misc{hwang2022tutel,
      title={Tutel: Adaptive Mixture-of-Experts at Scale}, 
      author={Changho Hwang and Wei Cui and Yifan Xiong and Ziyue Yang and Ze Liu and Han Hu and Zilong Wang and Rafael Salas and Jithin Jose and Prabhat Ram and Joe Chau and Peng Cheng and Fan Yang and Mao Yang and Yongqiang Xiong},
      year={2022},
      eprint={2206.03382},
      archivePrefix={arXiv}
}