From 7d44d65bf5f6bc3042b7ca6a72e3ce6d8f7e5d6d Mon Sep 17 00:00:00 2001
From: Ross Wightman
Date: Wed, 27 Jul 2022 14:04:29 -0700
Subject: [PATCH] Update README and changelogs

---
 README.md                | 197 ++-----------------------------------
 docs/archived_changes.md |  32 +++++++
 docs/changes.md          | 107 ++++++++++++++-------
 3 files changed, 124 insertions(+), 212 deletions(-)

diff --git a/README.md b/README.md
index e4c058f1..a964f72e 100644
--- a/README.md
+++ b/README.md
@@ -21,6 +21,20 @@ And a big thanks to all GitHub sponsors who helped with some of my costs before
 
 ## What's New
 
+### July 27, 2022
+* All runtime benchmark and validation result csv files are finally up-to-date!
+* A few more weights & model defs added:
+ * `darknetaa53` - 79.8 @ 256, 80.5 @ 288
+ * `convnext_nano` - 80.8 @ 224, 81.5 @ 288
+ * `cs3sedarknet_l` - 81.2 @ 256, 81.8 @ 288
+ * `cs3darknet_x` - 81.8 @ 256, 82.2 @ 288
+ * `cs3sedarknet_x` - 82.2 @ 256, 82.7 @ 288
+ * `cs3edgenet_x` - 82.2 @ 256, 82.7 @ 288
+ * `cs3se_edgenet_x` - 82.8 @ 256, 83.5 @ 320
+* Add output_stride=8 and 16 support to ConvNeXt (dilation)
+* Fix deit3 models not being able to resize pos_emb
+* Version 0.6.7 PyPi release (w/ above bug fixes and new weights since 0.6.5)
+
 ### July 8, 2022
 More models, more fixes
 * Official research models (w/ weights) added:
@@ -178,185 +192,6 @@ More models, more fixes
 * SGDP and AdamP still won't work with PyTorch XLA but others should (have yet to test Adabelief, Adafactor, Adahessian myself).
 * EfficientNet-V2 XL TF ported weights added, but they don't validate well in PyTorch (L is better). The pre-processing for the V2 TF training is a bit diff and the fine-tuned 21k -> 1k weights are very sensitive and less robust than the 1k weights.
 * Added PyTorch trained EfficientNet-V2 'Tiny' w/ GlobalContext attn weights. Only .1-.2 top-1 better than the SE so more of a curiosity for those interested.
-
-### July 12, 2021
-* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)
-
-### July 5-9, 2021
-* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
- * top-1 82.34 @ 288x288 and 82.54 @ 320x320
-* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weight for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
-* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
- * `jx_nest_base` - 83.534, `jx_nest_small` - 83.120, `jx_nest_tiny` - 81.426
-
-### June 23, 2021
-* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)
-
-### June 20, 2021
-* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
- * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)
- * See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
- * Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
- * Highlights: `vit_large_patch16_384` (87.1 top-1), `vit_large_r50_s32_384` (86.2 top-1), `vit_base_patch16_384` (86.0 top-1)
- * `vit_deit_*` renamed to just `deit_*`
- * Remove my old small model, replace with DeiT compatible small w/ AugReg weights
-* Add 1st training of my `gmixer_24_224` MLP /w GLU, 78.1 top-1 w/ 25M params.
-* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
-* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
-* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
-* NFNets and ResNetV2-BiT models work w/ Pytorch XLA now
- * weight standardization uses F.batch_norm instead of std_mean (std_mean wasn't lowered)
- * eps values adjusted, will be slight differences but should be quite close
-* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
-* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
-* Please report any regressions, this PR touched quite a few models.
-
-### June 8, 2021
-* Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
-* Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.
- * NFNet inspired block layout with quad layer stem and no maxpool
- * Same param count (35.7M) and throughput as ResNetRS-50 but +1.5 top-1 @ 224x224 and +2.5 top-1 at 288x288
-
-### May 25, 2021
-* Add LeViT, Visformer, ConViT (PR by Aman Arora), Twins (PR by paper authors) transformer models
-* Add ResMLP and gMLP MLP vision models to the existing MLP Mixer impl
-* Fix a number of torchscript issues with various vision transformer models
-* Cleanup input_size/img_size override handling and improve testing / test coverage for all vision transformer and MLP models
-* More flexible pos embedding resize (non-square) for ViT and TnT. Thanks [Alexander Soare](https://github.com/alexander-soare)
-* Add `efficientnetv2_rw_m` model and weights (started training before official code). 84.8 top-1, 53M params.
-
-### May 14, 2021
-* Add EfficientNet-V2 official model defs w/ ported weights from official [Tensorflow/Keras](https://github.com/google/automl/tree/master/efficientnetv2) impl.
- * 1k trained variants: `tf_efficientnetv2_s/m/l`
- * 21k trained variants: `tf_efficientnetv2_s/m/l_in21k`
- * 21k pretrained -> 1k fine-tuned: `tf_efficientnetv2_s/m/l_in21ft1k`
- * v2 models w/ v1 scaling: `tf_efficientnetv2_b0` through `b3`
- * Rename my prev V2 guess `efficientnet_v2s` -> `efficientnetv2_rw_s`
- * Some blank `efficientnetv2_*` models in-place for future native PyTorch training
-
-### May 5, 2021
-* Add MLP-Mixer models and port pretrained weights from [Google JAX impl](https://github.com/google-research/vision_transformer/tree/linen)
-* Add CaiT models and pretrained weights from [FB](https://github.com/facebookresearch/deit)
-* Add ResNet-RS models and weights from [TF](https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs). Thanks [Aman Arora](https://github.com/amaarora)
-* Add CoaT models and weights. Thanks [Mohammed Rizin](https://github.com/morizin)
-* Add new ImageNet-21k weights & finetuned weights for TResNet, MobileNet-V3, ViT models. Thanks [mrT](https://github.com/mrT23)
-* Add GhostNet models and weights. Thanks [Kai Han](https://github.com/iamhankai)
-* Update ByoaNet attention modules
- * Improve SA module inits
- * Hack together experimental stand-alone Swin based attn module and `swinnet`
- * Consistent '26t' model defs for experiments.
-* Add improved Efficientnet-V2S (prelim model def) weights. 83.8 top-1.
-* WandB logging support
-
-### April 13, 2021
-* Add Swin Transformer models and weights from https://github.com/microsoft/Swin-Transformer
-
-### April 12, 2021
-* Add ECA-NFNet-L1 (slimmed down F1 w/ SiLU, 41M params) trained with this code. 84% top-1 @ 320x320. Trained at 256x256.
-* Add EfficientNet-V2S model (unverified model definition) weights. 83.3 top-1 @ 288x288. Only trained single res 224. Working on progressive training.
-* Add ByoaNet model definition (Bring-your-own-attention) w/ SelfAttention block and corresponding SA/SA-like modules and model defs
- * Lambda Networks - https://arxiv.org/abs/2102.08602
- * Bottleneck Transformers - https://arxiv.org/abs/2101.11605
- * Halo Nets - https://arxiv.org/abs/2103.12731
-* Adabelief optimizer contributed by Juntang Zhuang
-
-### April 1, 2021
-* Add snazzy `benchmark.py` script for bulk `timm` model benchmarking of train and/or inference
-* Add Pooling-based Vision Transformer (PiT) models (from https://github.com/naver-ai/pit)
- * Merged distilled variant into main for torchscript compatibility
- * Some `timm` cleanup/style tweaks and weights have hub download support
-* Cleanup Vision Transformer (ViT) models
- * Merge distilled (DeiT) model into main so that torchscript can work
- * Support updated weight init (defaults to old still) that closer matches original JAX impl (possibly better training from scratch)
- * Separate hybrid model defs into different file and add several new model defs to fiddle with, support patch_size != 1 for hybrids
- * Fix fine-tuning num_class changes (PiT and ViT) and pos_embed resizing (Vit) with distilled variants
- * nn.Sequential for block stack (does not break downstream compat)
-* TnT (Transformer-in-Transformer) models contributed by author (from https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/TNT)
-* Add RegNetY-160 weights from DeiT teacher model
-* Add new NFNet-L0 w/ SE attn (rename `nfnet_l0b`->`nfnet_l0`) weights 82.75 top-1 @ 288x288
-* Some fixes/improvements for TFDS dataset wrapper
-
-### March 17, 2021
-* Add new ECA-NFNet-L0 (rename `nfnet_l0c`->`eca_nfnet_l0`) weights trained by myself.
- * 82.6 top-1 @ 288x288, 82.8 @ 320x320, trained at 224x224
- * Uses SiLU activation, approx 2x faster than `dm_nfnet_f0` and 50% faster than `nfnet_f0s` w/ 1/3 param count
-* Integrate [Hugging Face model hub](https://huggingface.co/models) into timm create_model and default_cfg handling for pretrained weight and config sharing (more on this soon!)
-* Merge HardCoRe NAS models contributed by https://github.com/yoniaflalo
-* Merge PyTorch trained EfficientNet-EL and pruned ES/EL variants contributed by [DeGirum](https://github.com/DeGirum)
-
-
-### March 7, 2021
-* First 0.4.x PyPi release w/ NFNets (& related), ByoB (GPU-Efficient, RepVGG, etc).
-* Change feature extraction for pre-activation nets (NFNets, ResNetV2) to return features before activation.
-* Tested with PyTorch 1.8 release. Updated CI to use 1.8.
-* Benchmarked several arch on RTX 3090, Titan RTX, and V100 across 1.7.1, 1.8, NGC 20.12, and 21.02. Some interesting performance variations to take note of https://gist.github.com/rwightman/bb59f9e245162cee0e38bd66bd8cd77f
-
-### Feb 18, 2021
-* Add pretrained weights and model variants for NFNet-F* models from [DeepMind Haiku impl](https://github.com/deepmind/deepmind-research/tree/master/nfnets).
- * Models are prefixed with `dm_`. They require SAME padding conv, skipinit enabled, and activation gains applied in act fn.
- * These models are big, expect to run out of GPU memory. With the GELU activiation + other options, they are roughly 1/2 the inference speed of my SiLU PyTorch optimized `s` variants.
- * Original model results are based on pre-processing that is not the same as all other models so you'll see different results in the results csv (once updated).
- * Matching the original pre-processing as closely as possible I get these results:
- * `dm_nfnet_f6` - 86.352
- * `dm_nfnet_f5` - 86.100
- * `dm_nfnet_f4` - 85.834
- * `dm_nfnet_f3` - 85.676
- * `dm_nfnet_f2` - 85.178
- * `dm_nfnet_f1` - 84.696
- * `dm_nfnet_f0` - 83.464
-
-### Feb 16, 2021
-* Add Adaptive Gradient Clipping (AGC) as per https://arxiv.org/abs/2102.06171. Integrated w/ PyTorch gradient clipping via mode arg that defaults to prev 'norm' mode. For backward arg compat, clip-grad arg must be specified to enable when using train.py.
- * AGC w/ default clipping factor `--clip-grad .01 --clip-mode agc`
- * PyTorch global norm of 1.0 (old behaviour, always norm), `--clip-grad 1.0`
- * PyTorch value clipping of 10, `--clip-grad 10. --clip-mode value`
- * AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.
-
-### Feb 12, 2021
-* Update Normalization-Free nets to include new NFNet-F (https://arxiv.org/abs/2102.06171) model defs
-
-### Feb 10, 2021
-* First Normalization-Free model training experiments done,
- * nf_resnet50 - 80.68 top-1 @ 288x288, 80.31 @ 256x256
- * nf_regnet_b1 - 79.30 @ 288x288, 78.75 @ 256x256
-* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
- * GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
- * RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
- * classic VGG (from torchvision, impl in `vgg.py`)
-* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
-* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues not being fixed with APEX. Native works with `--channels-last` and `--torchscript` model training, APEX does not.
-* Fix a few bugs introduced since last pypi release
-
-### Feb 8, 2021
-* Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d train @ 256, fine-tune @320, test @ 352.
- * `ecaresnet26t` - 79.88 top-1 @ 320x320, 79.08 @ 256x256
- * `ecaresnet50t` - 82.35 top-1 @ 320x320, 81.52 @ 256x256
- * `ecaresnet269d` - 84.93 top-1 @ 352x352, 84.87 @ 320x320
-* Remove separate tiered (`t`) vs tiered_narrow (`tn`) ResNet model defs, all `tn` changed to `t` and `t` models removed (`seresnext26t_32x4d` only model w/ weights that was removed).
-* Support model default_cfgs with separate train vs test resolution `test_input_size` and remove extra `_320` suffix ResNet model defs that were just for test.
-
-### Jan 30, 2021
-* Add initial "Normalization Free" NF-RegNet-B* and NF-ResNet model definitions based on [paper](https://arxiv.org/abs/2101.08692)
-
-### Jan 25, 2021
-* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
-* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
-* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre logits) support
- * NOTE: ImageNet-21k classifier heads were zero'd in original weights, they are only useful for transfer learning
-* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
-* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset specific parser classes
-* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with train script
- * Ex: `train.py /data/tfds --dataset tfds/oxford_iiit_pet --val-split test --model resnet50 -b 256 --amp --num-classes 37 --opt adamw --lr 3e-4 --weight-decay .001 --pretrained -j 2`
-* Add improved .tar dataset parser that reads images from .tar, folder of .tar files, or .tar within .tar
- * Run validation on full ImageNet-21k directly from tar w/ BiT model: `validate.py /data/fall11_whole.tar --model resnetv2_50x1_bitm_in21k --amp`
-* Models in this update should be stable w/ possible exception of ViT/BiT, possibility of some regressions with train/val scripts and dataset handling
-
-### Jan 3, 2021
-* Add SE-ResNet-152D weights
- * 256x256 val, 0.94 crop top-1 - 83.75
- * 320x320 val, 1.0 crop - 84.36
-* Update [results files](results/)
-
 ## Introduction
@@ -379,7 +214,8 @@ A full version of the list below with source links can be found in the [document
 * ConvNeXt - https://arxiv.org/abs/2201.03545
 * ConViT (Soft Convolutional Inductive Biases Vision Transformers)- https://arxiv.org/abs/2103.10697
 * CspNet (Cross-Stage Partial Networks) - https://arxiv.org/abs/1911.11929
-* DeiT (Vision Transformer) - https://arxiv.org/abs/2012.12877
+* DeiT - https://arxiv.org/abs/2012.12877
+* DeiT-III - https://arxiv.org/abs/2204.07118
 * DenseNet - https://arxiv.org/abs/1608.06993
 * DLA - https://arxiv.org/abs/1707.06484
 * DPN (Dual-Path Network) - https://arxiv.org/abs/1707.01629
@@ -411,6 +247,7 @@ A full version of the list below with source links can be found in the [document
 * HardCoRe-NAS - https://arxiv.org/abs/2102.11646
 * LCNet - https://arxiv.org/abs/2109.15099
 * MobileViT - https://arxiv.org/abs/2110.02178
+* MobileViT-V2 - https://arxiv.org/abs/2206.02680
 * NASNet-A - https://arxiv.org/abs/1707.07012
 * NesT - https://arxiv.org/abs/2105.12723
 * NFNet-F - https://arxiv.org/abs/2102.06171
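
The model-list hunk above adds DeiT-III and MobileViT-V2 to the architectures `timm` ships. A minimal sketch of instantiating one of the new DeiT-III defs via `timm.create_model` (the model name is assumed from the `deit3` defs added July 8, 2022; the random tensor is just a smoke test):

```python
import torch
import timm

# model name assumed from the DeiT-III defs added in this release cycle
model = timm.create_model('deit3_base_patch16_224', pretrained=True)
model.eval()

with torch.no_grad():
    logits = model(torch.randn(1, 3, 224, 224))  # dummy batch
print(logits.shape)  # -> torch.Size([1, 1000])
```
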
diff --git a/docs/archived_changes.md b/docs/archived_changes.md
index 36b7b9a1..9c2b62b6 100644
--- a/docs/archived_changes.md
+++ b/docs/archived_changes.md
@@ -1,5 +1,37 @@
 # Archived Changes
 
+### July 12, 2021
+* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)
+
+### July 5-9, 2021
+* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
+ * top-1 82.34 @ 288x288 and 82.54 @ 320x320
+* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weight for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
+* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
+ * `jx_nest_base` - 83.534, `jx_nest_small` - 83.120, `jx_nest_tiny` - 81.426
+
+### June 23, 2021
+* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)
+
+### June 20, 2021
+* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
+ * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)
+ * See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
+ * Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
+ * Highlights: `vit_large_patch16_384` (87.1 top-1), `vit_large_r50_s32_384` (86.2 top-1), `vit_base_patch16_384` (86.0 top-1)
+ * `vit_deit_*` renamed to just `deit_*`
+ * Remove my old small model, replace with DeiT compatible small w/ AugReg weights
+* Add 1st training of my `gmixer_24_224` MLP w/ GLU, 78.1 top-1 w/ 25M params.
+* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
+* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
+* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
+* NFNets and ResNetV2-BiT models work w/ PyTorch XLA now
+ * weight standardization uses F.batch_norm instead of std_mean (std_mean wasn't lowered); see the sketch below
+ * eps values adjusted, will be slight differences but should be quite close
+* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
+* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
+* Please report any regressions, this PR touched quite a few models.
+
 ### June 8, 2021
 * Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
 * Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.
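
The weight-standardization bullet above (F.batch_norm instead of std_mean) is easiest to see in code. A minimal sketch of a standardized conv layer using that trick, with defaults chosen here for illustration rather than copied from the exact `timm` layer:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StdConv2d(nn.Conv2d):
    """Conv2d w/ weight standardization. F.batch_norm computes the per-filter
    mean/std, which lowers cleanly on PyTorch XLA where std_mean did not."""

    def __init__(self, *args, eps: float = 1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # flatten each output filter, normalize with batch_norm statistics,
        # then reshape back; equivalent to (w - mean) / sqrt(var + eps)
        w = F.batch_norm(
            self.weight.reshape(1, self.out_channels, -1), None, None,
            training=True, momentum=0., eps=self.eps,
        ).reshape_as(self.weight)
        return F.conv2d(x, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)
```
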
diff --git a/docs/changes.md b/docs/changes.md
index d2965e8f..709acfed 100644
--- a/docs/changes.md
+++ b/docs/changes.md
@@ -1,5 +1,80 @@
 # Recent Changes
 
+### July 27, 2022
+* All runtime benchmark and validation result csv files are up-to-date!
+* A few more weights & model defs added:
+ * `darknetaa53` - 79.8 @ 256, 80.5 @ 288
+ * `convnext_nano` - 80.8 @ 224, 81.5 @ 288
+ * `cs3sedarknet_l` - 81.2 @ 256, 81.8 @ 288
+ * `cs3darknet_x` - 81.8 @ 256, 82.2 @ 288
+ * `cs3sedarknet_x` - 82.2 @ 256, 82.7 @ 288
+ * `cs3edgenet_x` - 82.2 @ 256, 82.7 @ 288
+ * `cs3se_edgenet_x` - 82.8 @ 256, 83.5 @ 320
+* Add output_stride=8 and 16 support to ConvNeXt (dilation)
+* Fix deit3 models not being able to resize pos_emb
+* Version 0.6.7 PyPi release (w/ above bug fixes and new weights since 0.6.5)
+
+### July 8, 2022
+More models, more fixes
+* Official research models (w/ weights) added:
+ * EdgeNeXt from (https://github.com/mmaaz60/EdgeNeXt)
+ * MobileViT-V2 from (https://github.com/apple/ml-cvnets)
+ * DeiT III (Revenge of the ViT) from (https://github.com/facebookresearch/deit)
+* My own models:
+ * Small `ResNet` defs added by request with 1 block repeats for both basic and bottleneck (resnet10 and resnet14)
+ * `CspNet` refactored with dataclass config, simplified CrossStage3 (`cs3`) option. These are closer to YOLO-v5+ backbone defs.
+ * More relative position vit fiddling. Two `srelpos` (shared relative position) models trained, and a medium w/ class token.
+ * Add an alternate downsample mode to EdgeNeXt and train a `small` model. Better than original small, but not their new USI trained weights.
+* My own model weight results (all ImageNet-1k training)
+ * `resnet10t` - 66.5 @ 176, 68.3 @ 224
+ * `resnet14t` - 71.3 @ 176, 72.3 @ 224
+ * `resnetaa50` - 80.6 @ 224, 81.6 @ 288
+ * `darknet53` - 80.0 @ 256, 80.5 @ 288
+ * `cs3darknet_m` - 77.0 @ 256, 77.6 @ 288
+ * `cs3darknet_focus_m` - 76.7 @ 256, 77.3 @ 288
+ * `cs3darknet_l` - 80.4 @ 256, 80.9 @ 288
+ * `cs3darknet_focus_l` - 80.3 @ 256, 80.9 @ 288
+ * `vit_srelpos_small_patch16_224` - 81.1 @ 224, 82.1 @ 320
+ * `vit_srelpos_medium_patch16_224` - 82.3 @ 224, 83.1 @ 320
+ * `vit_relpos_small_patch16_cls_224` - 82.6 @ 224, 83.6 @ 320
+ * `edgenext_small_rw` - 79.6 @ 224, 80.4 @ 320
+* `cs3`, `darknet`, and `vit_*relpos` weights above all trained on TPU thanks to TRC program! Rest trained on overheating GPUs.
+* Hugging Face Hub support fixes verified, demo notebook TBA
+* Pretrained weights / configs can be loaded externally (ie from local disk) w/ support for head adaptation.
+* Add support to change image extensions scanned by `timm` datasets/parsers. See (https://github.com/rwightman/pytorch-image-models/pull/1274#issuecomment-1178303103)
+* Default ConvNeXt LayerNorm impl to use `F.layer_norm(x.permute(0, 2, 3, 1), ...).permute(0, 3, 1, 2)` via `LayerNorm2d` in all cases (see the sketch after this section).
+ * a bit slower than previous custom impl on some hardware (ie Ampere w/ CL), but overall fewer regressions across wider HW / PyTorch version ranges.
+ * previous impl exists as `LayerNormExp2d` in `models/layers/norm.py`
+* Numerous bug fixes
+* Currently testing for imminent PyPi 0.6.x release
+* LeViT pretraining of larger models still a WIP, they don't train well / easily without distillation. Time to add distill support (finally)?
+* ImageNet-22k weight training + finetune ongoing, work on multi-weight support (slowly) chugging along (there are a LOT of weights, sigh) ...
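
The ConvNeXt `LayerNorm2d` bullet above boils down to a thin wrapper around `F.layer_norm`. A minimal sketch of that default impl, using the exact expression quoted in the note (parameter names here are illustrative, not the verbatim `timm` source):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerNorm2d(nn.LayerNorm):
    """LayerNorm for NCHW tensors: permute to NHWC, normalize over C, permute back."""

    def __init__(self, num_channels: int, eps: float = 1e-6):
        super().__init__(num_channels, eps=eps)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = F.layer_norm(
            x.permute(0, 2, 3, 1), self.normalized_shape, self.weight, self.bias, self.eps)
        return x.permute(0, 3, 1, 2)
```

Leaning on `F.layer_norm` rather than a hand-rolled mean/var is the trade-off the bullet describes: slightly slower on some hardware, but consistent behavior across a wider range of hardware and PyTorch versions.
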
+
+### May 13, 2022
+* Official Swin-V2 models and weights added from (https://github.com/microsoft/Swin-Transformer). Cleaned up to support torchscript.
+* Some refactoring for existing `timm` Swin-V2-CR impl, will likely do a bit more to bring parts closer to official and decide whether to merge some aspects.
+* More Vision Transformer relative position / residual post-norm experiments (all trained on TPU thanks to TRC program)
+ * `vit_relpos_small_patch16_224` - 81.5 @ 224, 82.5 @ 320 -- rel pos, layer scale, no class token, avg pool
+ * `vit_relpos_medium_patch16_rpn_224` - 82.3 @ 224, 83.1 @ 320 -- rel pos + res-post-norm, no class token, avg pool
+ * `vit_relpos_medium_patch16_224` - 82.5 @ 224, 83.3 @ 320 -- rel pos, layer scale, no class token, avg pool
+ * `vit_relpos_base_patch16_gapcls_224` - 82.8 @ 224, 83.9 @ 320 -- rel pos, layer scale, class token, avg pool (by mistake)
+* Bring the 512-dim, 8-head 'medium' ViT model variant back to life (after using it in a pre-DeiT 'small' model for the first ViT impl back in 2020)
+* Add ViT relative position support for switching between the existing impl and some additions in the official Swin-V2 impl for future trials
+* Sequencer2D impl (https://arxiv.org/abs/2205.01972), added via PR from author (https://github.com/okojoalg)
+
+### May 2, 2022
+* Vision Transformer experiments adding Relative Position (Swin-V2 log-coord) (`vision_transformer_relpos.py`) and Residual Post-Norm branches (from Swin-V2) (`vision_transformer*.py`)
+ * `vit_relpos_base_patch32_plus_rpn_256` - 79.5 @ 256, 80.6 @ 320 -- rel pos + extended width + res-post-norm, no class token, avg pool
+ * `vit_relpos_base_patch16_224` - 82.5 @ 224, 83.6 @ 320 -- rel pos, layer scale, no class token, avg pool
+ * `vit_base_patch16_rpn_224` - 82.3 @ 224 -- rel pos + res-post-norm, no class token, avg pool
+* Vision Transformer refactor to remove representation layer that was only used in initial vit and rarely used since with newer pretrain (ie `How to Train Your ViT`)
+* `vit_*` models support removal of class token, use of global average pool, use of fc_norm (ala beit, mae).
+
+### April 22, 2022
+* `timm` models are now officially supported in [fast.ai](https://www.fast.ai/)! Just in time for the new Practical Deep Learning course. `timmdocs` documentation link updated to [timm.fast.ai](http://timm.fast.ai/). See the sketch after this section for what the integration looks like.
+* Two more model weights added in the TPU trained [series](https://github.com/rwightman/pytorch-image-models/releases/tag/v0.1-tpu-weights). Some In22k pretrain still in progress.
+ * `seresnext101d_32x8d` - 83.69 @ 224, 84.35 @ 288
+ * `seresnextaa101d_32x8d` (anti-aliased w/ AvgPool2d) - 83.85 @ 224, 84.57 @ 288
+
 ### March 23, 2022
 * Add `ParallelBlock` and `LayerScale` option to base vit models to support model configs in [Three things everyone should know about ViT](https://arxiv.org/abs/2203.09795)
@@ -96,35 +171,3 @@
 * SGDP and AdamP still won't work with PyTorch XLA but others should (have yet to test Adabelief, Adafactor, Adahessian myself).
 * EfficientNet-V2 XL TF ported weights added, but they don't validate well in PyTorch (L is better). The pre-processing for the V2 TF training is a bit diff and the fine-tuned 21k -> 1k weights are very sensitive and less robust than the 1k weights.
 * Added PyTorch trained EfficientNet-V2 'Tiny' w/ GlobalContext attn weights. Only .1-.2 top-1 better than the SE so more of a curiosity for those interested.
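
The fast.ai bullet above means a `timm` architecture name can be passed straight to fastai's `vision_learner`. A minimal sketch, assuming a recent fastai (>= 2.6, where string architecture names are resolved via `timm.create_model`); the pets setup mirrors fastai's own quickstart and the chosen arch is just an example:

```python
from fastai.vision.all import *

path = untar_data(URLs.PETS) / 'images'
dls = ImageDataLoaders.from_name_func(
    path, get_image_files(path), valid_pct=0.2, seed=42,
    label_func=lambda f: f.name[0].isupper(),  # cat vs dog by filename case
    item_tfms=Resize(224))

# a string arch name routes model creation through timm.create_model
learn = vision_learner(dls, 'convnext_tiny', metrics=error_rate)
learn.fine_tune(1)
```
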
-
-### July 12, 2021
-* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)
-
-### July 5-9, 2021
-* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
- * top-1 82.34 @ 288x288 and 82.54 @ 320x320
-* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weight for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
-* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
- * `jx_nest_base` - 83.534, `jx_nest_small` - 83.120, `jx_nest_tiny` - 81.426
-
-### June 23, 2021
-* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)
-
-### June 20, 2021
-* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
- * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)
- * See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
- * Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
- * Highlights: `vit_large_patch16_384` (87.1 top-1), `vit_large_r50_s32_384` (86.2 top-1), `vit_base_patch16_384` (86.0 top-1)
- * `vit_deit_*` renamed to just `deit_*`
- * Remove my old small model, replace with DeiT compatible small w/ AugReg weights
-* Add 1st training of my `gmixer_24_224` MLP /w GLU, 78.1 top-1 w/ 25M params.
-* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
-* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
-* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
-* NFNets and ResNetV2-BiT models work w/ Pytorch XLA now
- * weight standardization uses F.batch_norm instead of std_mean (std_mean wasn't lowered)
- * eps values adjusted, will be slight differences but should be quite close
-* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
-* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
-* Please report any regressions, this PR touched quite a few models.
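
The June 20, 2021 entries above mention direct `.npz` loading for the 50K+ AugReg ViT checkpoints. A minimal sketch of pulling one of those files into a `timm` ViT, assuming the `.npz` helper exposed in `timm.models.vision_transformer` at the time (`_load_weights`, an internal function) and an illustrative local path:

```python
import timm
from timm.models.vision_transformer import _load_weights

# build the architecture without pretrained weights, then load a JAX .npz checkpoint
model = timm.create_model('vit_base_patch16_224', pretrained=False)
_load_weights(model, '/path/to/augreg_checkpoint.npz')  # illustrative path
model.eval()
```
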