@@ -1,112 +0,0 @@
*This guideline is very much a work-in-progress.*

Contributions to `timm` for code, documentation, and tests are more than welcome!

There haven't been any formal guidelines to date, so please bear with me, and feel free to add to this guide.

# Coding style

Code linting and auto-format (black) are not currently in place but open to consideration. In the meantime, the style to follow is (mostly) aligned with Google's guide: https://google.github.io/styleguide/pyguide.html.

A few specific differences from Google style (or black):
1. Line length is 120 char. Going over is okay in some cases (e.g. I prefer not to break URLs across lines).
2. Hanging indents are always preferred; please avoid aligning arguments with closing brackets or braces.

Example, from the Google guide, but this is a NO here:

```
# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)
meal = (spam,
        beans)

# Aligned with opening delimiter in a dictionary.
foo = {
    'long_dictionary_key': value1 +
                           value2,
    ...
}
```

This is YES:

```
# 4-space hanging indent; nothing on first line,
# closing parenthesis on a new line.
foo = long_function_name(
    var_one, var_two, var_three,
    var_four
)
meal = (
    spam,
    beans,
)

# 4-space hanging indent in a dictionary.
foo = {
    'long_dictionary_key':
        long_dictionary_value,
    ...
}
```

When there is a discrepancy in a given source file (there are many origins for various bits of code and not all have been updated to what I consider the current goal), please follow the style in that file.

In general, if you add new code, formatting it with black using the following options should result in a style that is compatible with the rest of the code base:

```
black --skip-string-normalization --line-length 120 <path-to-file>
```

Avoid formatting code that is unrelated to your PR though.

PRs with pure formatting / style fixes will be accepted, but only in isolation from functional changes; it's best to ask before starting such a change.

# Documentation

As with code style, docstring style is based on the Google guide: https://google.github.io/styleguide/pyguide.html

The goal for the code is to eventually move to have all major functions and `__init__` methods use PEP484 type annotations.

When type annotations are used for a function, as per the Google pyguide, they should **NOT** be duplicated in the docstrings; please leave annotations as the one source of truth re: typing.
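
For illustration, here is a small, hypothetical helper written in the intended style: the Google-format docstring describes the arguments and return value, while the PEP484 annotations remain the single place type information appears.

```
import torch


def scale_tensor(x: torch.Tensor, factor: float = 1.0) -> torch.Tensor:
    """Scale a tensor by a constant factor.

    Args:
        x: Input tensor to scale.
        factor: Multiplicative scale factor.

    Returns:
        The scaled tensor.
    """
    return x * factor
```
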

There are a LOT of gaps in current documentation relative to the functionality in `timm`, so please, document away!

# Installation

Create a Python virtual environment using Python 3.10. Inside the environment, install the following test dependencies:

```
python -m pip install pytest pytest-timeout pytest-xdist pytest-forked expecttest
```

Install `torch` and `torchvision` using the instructions matching your system as listed on the [PyTorch website](https://pytorch.org/).

Then install the remaining dependencies:

```
python -m pip install -r requirements.txt
python -m pip install --no-cache-dir git+https://github.com/mapillary/inplace_abn.git
python -m pip install -e .
```

## Unit tests

Run the tests using:

```
pytest tests/
```

Since the whole test suite takes a lot of time to run locally (a few hours), you may want to select a subset of tests relating to the changes you made by using the `-k` option of [`pytest`](https://docs.pytest.org/en/7.1.x/example/markers.html#using-k-expr-to-select-tests-based-on-their-name). Moreover, running tests in parallel (in this example 4 processes) with the `-n` option may help:

```
pytest -k "substring-to-match" -n 4 tests/
```

## Building documentation

Please refer to [this document](https://github.com/huggingface/pytorch-image-models/tree/main/hfdocs).

# Questions

If you have any questions about contributing, or where / how to contribute, please ask in the [Discussions](https://github.com/huggingface/pytorch-image-models/discussions/categories/contributing) (there is a `Contributing` topic).
|
@@ -1,3 +1,2 @@
include timm/models/_pruned/*.txt
include timm/data/_info/*.txt
include timm/data/_info/*.json
include timm/models/pruned/*.txt
@@ -1,14 +0,0 @@
# Hugging Face Timm Docs

## Getting Started

```
pip install git+https://github.com/huggingface/doc-builder.git@main#egg=hf-doc-builder
pip install watchdog black
```

## Preview the Docs Locally

```
doc-builder preview timm hfdocs/source
```
@@ -1,160 +1,149 @@
|
||||
- sections:
|
||||
- local: index
|
||||
title: Home
|
||||
- local: quickstart
|
||||
title: Quickstart
|
||||
- local: installation
|
||||
title: Installation
|
||||
title: Get started
|
||||
- sections:
|
||||
- local: feature_extraction
|
||||
title: Using Pretrained Models as Feature Extractors
|
||||
- local: training_script
|
||||
title: Training With The Official Training Script
|
||||
- local: hf_hub
|
||||
title: Share and Load Models from the 🤗 Hugging Face Hub
|
||||
title: Tutorials
|
||||
- sections:
|
||||
title: Pytorch Image Models (timm)
|
||||
- local: models
|
||||
title: Model Summaries
|
||||
- local: results
|
||||
title: Results
|
||||
- local: models/adversarial-inception-v3
|
||||
title: Adversarial Inception v3
|
||||
- local: models/advprop
|
||||
title: AdvProp (EfficientNet)
|
||||
- local: models/big-transfer
|
||||
title: Big Transfer (BiT)
|
||||
- local: models/csp-darknet
|
||||
title: CSP-DarkNet
|
||||
- local: models/csp-resnet
|
||||
title: CSP-ResNet
|
||||
- local: models/csp-resnext
|
||||
title: CSP-ResNeXt
|
||||
- local: models/densenet
|
||||
title: DenseNet
|
||||
- local: models/dla
|
||||
title: Deep Layer Aggregation
|
||||
- local: models/dpn
|
||||
title: Dual Path Network (DPN)
|
||||
- local: models/ecaresnet
|
||||
title: ECA-ResNet
|
||||
- local: models/efficientnet
|
||||
title: EfficientNet
|
||||
- local: models/efficientnet-pruned
|
||||
title: EfficientNet (Knapsack Pruned)
|
||||
- local: models/ensemble-adversarial
|
||||
title: Ensemble Adversarial Inception ResNet v2
|
||||
- local: models/ese-vovnet
|
||||
title: ESE-VoVNet
|
||||
- local: models/fbnet
|
||||
title: FBNet
|
||||
- local: models/gloun-inception-v3
|
||||
title: (Gluon) Inception v3
|
||||
- local: models/gloun-resnet
|
||||
title: (Gluon) ResNet
|
||||
- local: models/gloun-resnext
|
||||
title: (Gluon) ResNeXt
|
||||
- local: models/gloun-senet
|
||||
title: (Gluon) SENet
|
||||
- local: models/gloun-seresnext
|
||||
title: (Gluon) SE-ResNeXt
|
||||
- local: models/gloun-xception
|
||||
title: (Gluon) Xception
|
||||
- local: models/hrnet
|
||||
title: HRNet
|
||||
- local: models/ig-resnext
|
||||
title: Instagram ResNeXt WSL
|
||||
- local: models/inception-resnet-v2
|
||||
title: Inception ResNet v2
|
||||
- local: models/inception-v3
|
||||
title: Inception v3
|
||||
- local: models/inception-v4
|
||||
title: Inception v4
|
||||
- local: models/legacy-se-resnet
|
||||
title: (Legacy) SE-ResNet
|
||||
- local: models/legacy-se-resnext
|
||||
title: (Legacy) SE-ResNeXt
|
||||
- local: models/legacy-senet
|
||||
title: (Legacy) SENet
|
||||
- local: models/mixnet
|
||||
title: MixNet
|
||||
- local: models/mnasnet
|
||||
title: MnasNet
|
||||
- local: models/mobilenet-v2
|
||||
title: MobileNet v2
|
||||
- local: models/mobilenet-v3
|
||||
title: MobileNet v3
|
||||
- local: models/nasnet
|
||||
title: NASNet
|
||||
- local: models/noisy-student
|
||||
title: Noisy Student (EfficientNet)
|
||||
- local: models/pnasnet
|
||||
title: PNASNet
|
||||
- local: models/regnetx
|
||||
title: RegNetX
|
||||
- local: models/regnety
|
||||
title: RegNetY
|
||||
- local: models/res2net
|
||||
title: Res2Net
|
||||
- local: models/res2next
|
||||
title: Res2NeXt
|
||||
- local: models/resnest
|
||||
title: ResNeSt
|
||||
- local: models/resnet
|
||||
title: ResNet
|
||||
- local: models/resnet-d
|
||||
title: ResNet-D
|
||||
- local: models/resnext
|
||||
title: ResNeXt
|
||||
- local: models/rexnet
|
||||
title: RexNet
|
||||
- local: models/se-resnet
|
||||
title: SE-ResNet
|
||||
- local: models/selecsls
|
||||
title: SelecSLS
|
||||
- local: models/seresnext
|
||||
title: SE-ResNeXt
|
||||
- local: models/skresnet
|
||||
title: SK-ResNet
|
||||
- local: models/skresnext
|
||||
title: SK-ResNeXt
|
||||
- local: models/spnasnet
|
||||
title: SPNASNet
|
||||
- local: models/ssl-resnet
|
||||
title: SSL ResNet
|
||||
- local: models/swsl-resnet
|
||||
title: SWSL ResNet
|
||||
- local: models/swsl-resnext
|
||||
title: SWSL ResNeXt
|
||||
- local: models/tf-efficientnet
|
||||
title: (Tensorflow) EfficientNet
|
||||
- local: models/tf-efficientnet-condconv
|
||||
title: (Tensorflow) EfficientNet CondConv
|
||||
- local: models/tf-efficientnet-lite
|
||||
title: (Tensorflow) EfficientNet Lite
|
||||
- local: models/tf-inception-v3
|
||||
title: (Tensorflow) Inception v3
|
||||
- local: models/tf-mixnet
|
||||
title: (Tensorflow) MixNet
|
||||
- local: models/tf-mobilenet-v3
|
||||
title: (Tensorflow) MobileNet v3
|
||||
- local: models/tresnet
|
||||
title: TResNet
|
||||
- local: models/wide-resnet
|
||||
title: Wide ResNet
|
||||
- local: models/xception
|
||||
title: Xception
|
||||
title: Model Pages
|
||||
isExpanded: false
|
||||
- sections:
|
||||
- local: reference/models
|
||||
title: Models
|
||||
- local: reference/data
|
||||
title: Data
|
||||
- local: reference/optimizers
|
||||
title: Optimizers
|
||||
- local: reference/schedulers
|
||||
title: Learning Rate Schedulers
|
||||
title: Reference
|
||||
- local: scripts
|
||||
title: Scripts
|
||||
- local: training_hparam_examples
|
||||
title: Training Examples
|
||||
- local: feature_extraction
|
||||
title: Feature Extraction
|
||||
- local: changes
|
||||
title: Recent Changes
|
||||
- local: archived_changes
|
||||
title: Archived Changes
|
||||
- local: model_pages
|
||||
title: Model Pages
|
||||
isExpanded: false
|
||||
sections:
|
||||
- local: models/adversarial-inception-v3
|
||||
title: Adversarial Inception v3
|
||||
- local: models/advprop
|
||||
title: AdvProp (EfficientNet)
|
||||
- local: models/big-transfer
|
||||
title: Big Transfer (BiT)
|
||||
- local: models/csp-darknet
|
||||
title: CSP-DarkNet
|
||||
- local: models/csp-resnet
|
||||
title: CSP-ResNet
|
||||
- local: models/csp-resnext
|
||||
title: CSP-ResNeXt
|
||||
- local: models/densenet
|
||||
title: DenseNet
|
||||
- local: models/dla
|
||||
title: Deep Layer Aggregation
|
||||
- local: models/dpn
|
||||
title: Dual Path Network (DPN)
|
||||
- local: models/ecaresnet
|
||||
title: ECA-ResNet
|
||||
- local: models/efficientnet
|
||||
title: EfficientNet
|
||||
- local: models/efficientnet-pruned
|
||||
title: EfficientNet (Knapsack Pruned)
|
||||
- local: models/ensemble-adversarial
|
||||
title: Ensemble Adversarial Inception ResNet v2
|
||||
- local: models/ese-vovnet
|
||||
title: ESE-VoVNet
|
||||
- local: models/fbnet
|
||||
title: FBNet
|
||||
- local: models/gloun-inception-v3
|
||||
title: (Gluon) Inception v3
|
||||
- local: models/gloun-resnet
|
||||
title: (Gluon) ResNet
|
||||
- local: models/gloun-resnext
|
||||
title: (Gluon) ResNeXt
|
||||
- local: models/gloun-senet
|
||||
title: (Gluon) SENet
|
||||
- local: models/gloun-seresnext
|
||||
title: (Gluon) SE-ResNeXt
|
||||
- local: models/gloun-xception
|
||||
title: (Gluon) Xception
|
||||
- local: models/hrnet
|
||||
title: HRNet
|
||||
- local: models/ig-resnext
|
||||
title: Instagram ResNeXt WSL
|
||||
- local: models/inception-resnet-v2
|
||||
title: Inception ResNet v2
|
||||
- local: models/inception-v3
|
||||
title: Inception v3
|
||||
- local: models/inception-v4
|
||||
title: Inception v4
|
||||
- local: models/legacy-se-resnet
|
||||
title: (Legacy) SE-ResNet
|
||||
- local: models/legacy-se-resnext
|
||||
title: (Legacy) SE-ResNeXt
|
||||
- local: models/legacy-senet
|
||||
title: (Legacy) SENet
|
||||
- local: models/mixnet
|
||||
title: MixNet
|
||||
- local: models/mnasnet
|
||||
title: MnasNet
|
||||
- local: models/mobilenet-v2
|
||||
title: MobileNet v2
|
||||
- local: models/mobilenet-v3
|
||||
title: MobileNet v3
|
||||
- local: models/nasnet
|
||||
title: NASNet
|
||||
- local: models/noisy-student
|
||||
title: Noisy Student (EfficientNet)
|
||||
- local: models/pnasnet
|
||||
title: PNASNet
|
||||
- local: models/regnetx
|
||||
title: RegNetX
|
||||
- local: models/regnety
|
||||
title: RegNetY
|
||||
- local: models/res2net
|
||||
title: Res2Net
|
||||
- local: models/res2next
|
||||
title: Res2NeXt
|
||||
- local: models/resnest
|
||||
title: ResNeSt
|
||||
- local: models/resnet
|
||||
title: ResNet
|
||||
- local: models/resnet-d
|
||||
title: ResNet-D
|
||||
- local: models/resnext
|
||||
title: ResNeXt
|
||||
- local: models/rexnet
|
||||
title: RexNet
|
||||
- local: models/se-resnet
|
||||
title: SE-ResNet
|
||||
- local: models/selecsls
|
||||
title: SelecSLS
|
||||
- local: models/seresnext
|
||||
title: SE-ResNeXt
|
||||
- local: models/skresnet
|
||||
title: SK-ResNet
|
||||
- local: models/skresnext
|
||||
title: SK-ResNeXt
|
||||
- local: models/spnasnet
|
||||
title: SPNASNet
|
||||
- local: models/ssl-resnet
|
||||
title: SSL ResNet
|
||||
- local: models/swsl-resnet
|
||||
title: SWSL ResNet
|
||||
- local: models/swsl-resnext
|
||||
title: SWSL ResNeXt
|
||||
- local: models/tf-efficientnet
|
||||
title: (Tensorflow) EfficientNet
|
||||
- local: models/tf-efficientnet-condconv
|
||||
title: (Tensorflow) EfficientNet CondConv
|
||||
- local: models/tf-efficientnet-lite
|
||||
title: (Tensorflow) EfficientNet Lite
|
||||
- local: models/tf-inception-v3
|
||||
title: (Tensorflow) Inception v3
|
||||
- local: models/tf-mixnet
|
||||
title: (Tensorflow) MixNet
|
||||
- local: models/tf-mobilenet-v3
|
||||
title: (Tensorflow) MobileNet v3
|
||||
- local: models/tresnet
|
||||
title: TResNet
|
||||
- local: models/wide-resnet
|
||||
title: Wide ResNet
|
||||
- local: models/xception
|
||||
title: Xception
|
||||
title: Get started
|
||||
|
||||
|
@@ -0,0 +1,418 @@
|
||||
# Archived Changes
|
||||
|
||||
### July 12, 2021
|
||||
|
||||
* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)
|
||||
|
||||
### July 5-9, 2021
|
||||
|
||||
* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
|
||||
* top-1 82.34 @ 288x288 and 82.54 @ 320x320
|
||||
* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weight for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
|
||||
* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
|
||||
* `jx_nest_base` - 83.534, `jx_nest_small` - 83.120, `jx_nest_tiny` - 81.426
|
||||
|
||||
### June 23, 2021
|
||||
|
||||
* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)
|
||||
|
||||
### June 20, 2021
|
||||
|
||||
* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
|
||||
* .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)
|
||||
* See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
|
||||
* Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
|
||||
* Highlights: `vit_large_patch16_384` (87.1 top-1), `vit_large_r50_s32_384` (86.2 top-1), `vit_base_patch16_384` (86.0 top-1)
|
||||
* `vit_deit_*` renamed to just `deit_*`
|
||||
* Remove my old small model, replace with DeiT compatible small w/ AugReg weights
|
||||
* Add 1st training of my `gmixer_24_224` MLP w/ GLU, 78.1 top-1 w/ 25M params.
|
||||
* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
|
||||
* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
|
||||
* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
|
||||
* NFNets and ResNetV2-BiT models work w/ Pytorch XLA now
|
||||
* weight standardization uses F.batch_norm instead of std_mean (std_mean wasn't lowered)
|
||||
* eps values adjusted, will be slight differences but should be quite close
|
||||
* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
|
||||
* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
|
||||
* Please report any regressions, this PR touched quite a few models.
|
||||
|
||||
### June 8, 2021
|
||||
|
||||
* Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
|
||||
* Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.
|
||||
* NFNet inspired block layout with quad layer stem and no maxpool
|
||||
* Same param count (35.7M) and throughput as ResNetRS-50 but +1.5 top-1 @ 224x224 and +2.5 top-1 at 288x288
|
||||
|
||||
### May 25, 2021
|
||||
|
||||
* Add LeViT, Visformer, Convit (PR by Aman Arora), Twins (PR by paper authors) transformer models
|
||||
* Cleanup input_size/img_size override handling and testing for all vision transformer models
|
||||
* Add `efficientnetv2_rw_m` model and weights (started training before official code). 84.8 top-1, 53M params.
|
||||
|
||||
### May 14, 2021
|
||||
|
||||
* Add EfficientNet-V2 official model defs w/ ported weights from official [Tensorflow/Keras](https://github.com/google/automl/tree/master/efficientnetv2) impl.
|
||||
* 1k trained variants: `tf_efficientnetv2_s/m/l`
|
||||
* 21k trained variants: `tf_efficientnetv2_s/m/l_in21k`
|
||||
* 21k pretrained -> 1k fine-tuned: `tf_efficientnetv2_s/m/l_in21ft1k`
|
||||
* v2 models w/ v1 scaling: `tf_efficientnetv2_b0` through `b3`
|
||||
* Rename my prev V2 guess `efficientnet_v2s` -> `efficientnetv2_rw_s`
|
||||
* Some blank `efficientnetv2_*` models in-place for future native PyTorch training
|
||||
|
||||
### May 5, 2021
|
||||
|
||||
* Add MLP-Mixer models and port pretrained weights from [Google JAX impl](https://github.com/google-research/vision_transformer/tree/linen)
|
||||
* Add CaiT models and pretrained weights from [FB](https://github.com/facebookresearch/deit)
|
||||
* Add ResNet-RS models and weights from [TF](https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs). Thanks [Aman Arora](https://github.com/amaarora)
|
||||
* Add CoaT models and weights. Thanks [Mohammed Rizin](https://github.com/morizin)
|
||||
* Add new ImageNet-21k weights & finetuned weights for TResNet, MobileNet-V3, ViT models. Thanks [mrT](https://github.com/mrT23)
|
||||
* Add GhostNet models and weights. Thanks [Kai Han](https://github.com/iamhankai)
|
||||
* Update ByoaNet attention modules
|
||||
* Improve SA module inits
|
||||
* Hack together experimental stand-alone Swin based attn module and `swinnet`
|
||||
* Consistent '26t' model defs for experiments.
|
||||
* Add improved Efficientnet-V2S (prelim model def) weights. 83.8 top-1.
|
||||
* WandB logging support
|
||||
|
||||
### April 13, 2021
|
||||
|
||||
* Add Swin Transformer models and weights from https://github.com/microsoft/Swin-Transformer
|
||||
|
||||
### April 12, 2021
|
||||
|
||||
* Add ECA-NFNet-L1 (slimmed down F1 w/ SiLU, 41M params) trained with this code. 84% top-1 @ 320x320. Trained at 256x256.
|
||||
* Add EfficientNet-V2S model (unverified model definition) weights. 83.3 top-1 @ 288x288. Only trained single res 224. Working on progressive training.
|
||||
* Add ByoaNet model definition (Bring-your-own-attention) w/ SelfAttention block and corresponding SA/SA-like modules and model defs
|
||||
* Lambda Networks - https://arxiv.org/abs/2102.08602
|
||||
* Bottleneck Transformers - https://arxiv.org/abs/2101.11605
|
||||
* Halo Nets - https://arxiv.org/abs/2103.12731
|
||||
* Adabelief optimizer contributed by Juntang Zhuang
|
||||
|
||||
### April 1, 2021
|
||||
|
||||
* Add snazzy `benchmark.py` script for bulk `timm` model benchmarking of train and/or inference
|
||||
* Add Pooling-based Vision Transformer (PiT) models (from https://github.com/naver-ai/pit)
|
||||
* Merged distilled variant into main for torchscript compatibility
|
||||
* Some `timm` cleanup/style tweaks and weights have hub download support
|
||||
* Cleanup Vision Transformer (ViT) models
|
||||
* Merge distilled (DeiT) model into main so that torchscript can work
|
||||
* Support updated weight init (defaults to old still) that closer matches original JAX impl (possibly better training from scratch)
|
||||
* Separate hybrid model defs into different file and add several new model defs to fiddle with, support patch_size != 1 for hybrids
|
||||
* Fix fine-tuning num_class changes (PiT and ViT) and pos_embed resizing (Vit) with distilled variants
|
||||
* nn.Sequential for block stack (does not break downstream compat)
|
||||
* TnT (Transformer-in-Transformer) models contributed by author (from https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/TNT)
|
||||
* Add RegNetY-160 weights from DeiT teacher model
|
||||
* Add new NFNet-L0 w/ SE attn (rename `nfnet_l0b`->`nfnet_l0`) weights 82.75 top-1 @ 288x288
|
||||
* Some fixes/improvements for TFDS dataset wrapper
|
||||
|
||||
### March 7, 2021
|
||||
|
||||
* First 0.4.x PyPi release w/ NFNets (& related), ByoB (GPU-Efficient, RepVGG, etc).
|
||||
* Change feature extraction for pre-activation nets (NFNets, ResNetV2) to return features before activation.
|
||||
|
||||
### Feb 18, 2021
|
||||
|
||||
* Add pretrained weights and model variants for NFNet-F* models from [DeepMind Haiku impl](https://github.com/deepmind/deepmind-research/tree/master/nfnets).
|
||||
* Models are prefixed with `dm_`. They require SAME padding conv, skipinit enabled, and activation gains applied in act fn.
|
||||
* These models are big, expect to run out of GPU memory. With the GELU activation + other options, they are roughly 1/2 the inference speed of my SiLU PyTorch optimized `s` variants.
|
||||
* Original model results are based on pre-processing that is not the same as all other models so you'll see different results in the results csv (once updated).
|
||||
* Matching the original pre-processing as closely as possible I get these results:
|
||||
* `dm_nfnet_f6` - 86.352
|
||||
* `dm_nfnet_f5` - 86.100
|
||||
* `dm_nfnet_f4` - 85.834
|
||||
* `dm_nfnet_f3` - 85.676
|
||||
* `dm_nfnet_f2` - 85.178
|
||||
* `dm_nfnet_f1` - 84.696
|
||||
* `dm_nfnet_f0` - 83.464
|
||||
|
||||
### Feb 16, 2021
|
||||
|
||||
* Add Adaptive Gradient Clipping (AGC) as per https://arxiv.org/abs/2102.06171. Integrated w/ PyTorch gradient clipping via mode arg that defaults to prev 'norm' mode. For backward arg compat, clip-grad arg must be specified to enable when using train.py.
|
||||
* AGC w/ default clipping factor `--clip-grad .01 --clip-mode agc`
|
||||
* PyTorch global norm of 1.0 (old behaviour, always norm), `--clip-grad 1.0`
|
||||
* PyTorch value clipping of 10, `--clip-grad 10. --clip-mode value`
|
||||
* AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.
|
||||
|
||||
### Feb 12, 2021
|
||||
|
||||
* Update Normalization-Free nets to include new NFNet-F (https://arxiv.org/abs/2102.06171) model defs
|
||||
|
||||
### Feb 10, 2021
|
||||
|
||||
* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
|
||||
* GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
|
||||
* RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
|
||||
* classic VGG (from torchvision, impl in `vgg`)
|
||||
* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
|
||||
* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues not being fixed with APEX. Native works with `--channels-last` and `--torchscript` model training, APEX does not.
|
||||
* Fix a few bugs introduced since last pypi release
|
||||
|
||||
### Feb 8, 2021
|
||||
|
||||
* Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d train @ 256, fine-tune @320, test @ 352.
|
||||
* `ecaresnet26t` - 79.88 top-1 @ 320x320, 79.08 @ 256x256
|
||||
* `ecaresnet50t` - 82.35 top-1 @ 320x320, 81.52 @ 256x256
|
||||
* `ecaresnet269d` - 84.93 top-1 @ 352x352, 84.87 @ 320x320
|
||||
* Remove separate tiered (`t`) vs tiered_narrow (`tn`) ResNet model defs, all `tn` changed to `t` and `t` models removed (`seresnext26t_32x4d` only model w/ weights that was removed).
|
||||
* Support model default_cfgs with separate train vs test resolution `test_input_size` and remove extra `_320` suffix ResNet model defs that were just for test.
|
||||
|
||||
### Jan 30, 2021
|
||||
|
||||
* Add initial "Normalization Free" NF-RegNet-B* and NF-ResNet model definitions based on [paper](https://arxiv.org/abs/2101.08692)
|
||||
|
||||
### Jan 25, 2021
|
||||
|
||||
* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
|
||||
* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
|
||||
* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre logits) support
|
||||
* NOTE: ImageNet-21k classifier heads were zero'd in original weights, they are only useful for transfer learning
|
||||
* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
|
||||
* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset specific parser classes
|
||||
* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with train script
|
||||
* Ex: `train.py /data/tfds --dataset tfds/oxford_iiit_pet --val-split test --model resnet50 -b 256 --amp --num-classes 37 --opt adamw --lr 3e-4 --weight-decay .001 --pretrained -j 2`
|
||||
* Add improved .tar dataset parser that reads images from .tar, folder of .tar files, or .tar within .tar
|
||||
* Run validation on full ImageNet-21k directly from tar w/ BiT model: `validate.py /data/fall11_whole.tar --model resnetv2_50x1_bitm_in21k --amp`
|
||||
* Models in this update should be stable w/ possible exception of ViT/BiT, possibility of some regressions with train/val scripts and dataset handling
|
||||
|
||||
### Jan 3, 2021
|
||||
|
||||
* Add SE-ResNet-152D weights
|
||||
* 256x256 val, 0.94 crop top-1 - 83.75
|
||||
* 320x320 val, 1.0 crop - 84.36
|
||||
* Update results files
|
||||
|
||||
### Dec 18, 2020
|
||||
|
||||
* Add ResNet-101D, ResNet-152D, and ResNet-200D weights trained @ 256x256
|
||||
* 256x256 val, 0.94 crop (top-1) - 101D (82.33), 152D (83.08), 200D (83.25)
|
||||
* 288x288 val, 1.0 crop - 101D (82.64), 152D (83.48), 200D (83.76)
|
||||
* 320x320 val, 1.0 crop - 101D (83.00), 152D (83.66), 200D (84.01)
|
||||
|
||||
### Dec 7, 2020
|
||||
|
||||
* Simplify EMA module (ModelEmaV2), compatible with fully torchscripted models
|
||||
* Misc fixes for SiLU ONNX export, default_cfg missing from Feature extraction models, Linear layer w/ AMP + torchscript
|
||||
* PyPi release @ 0.3.2 (needed by EfficientDet)
|
||||
|
||||
|
||||
### Oct 30, 2020
|
||||
|
||||
* Test with PyTorch 1.7 and fix a small top-n metric view vs reshape issue.
|
||||
* Convert newly added 224x224 Vision Transformer weights from official JAX repo. 81.8 top-1 for B/16, 83.1 L/16.
|
||||
* Support PyTorch 1.7 optimized, native SiLU (aka Swish) activation. Add mapping to 'silu' name, custom swish will eventually be deprecated.
|
||||
* Fix regression for loading pretrained classifier via direct model entrypoint functions. Didn't impact create_model() factory usage.
|
||||
* PyPi release @ 0.3.0 version!
|
||||
|
||||
### Oct 26, 2020
|
||||
|
||||
* Update Vision Transformer models to be compatible with official code release at https://github.com/google-research/vision_transformer
|
||||
* Add Vision Transformer weights (ImageNet-21k pretrain) for 384x384 base and large models converted from official jax impl
|
||||
* ViT-B/16 - 84.2
|
||||
* ViT-B/32 - 81.7
|
||||
* ViT-L/16 - 85.2
|
||||
* ViT-L/32 - 81.5
|
||||
|
||||
### Oct 21, 2020
|
||||
|
||||
* Weights added for Vision Transformer (ViT) models. 77.86 top-1 for 'small' and 79.35 for 'base'. Thanks to [Christof](https://www.kaggle.com/christofhenkel) for training the base model w/ lots of GPUs.
|
||||
|
||||
### Oct 13, 2020
|
||||
|
||||
* Initial impl of Vision Transformer models. Both patch and hybrid (CNN backbone) variants. Currently trying to train...
|
||||
* Adafactor and AdaHessian (FP32 only, no AMP) optimizers
|
||||
* EdgeTPU-M (`efficientnet_em`) model trained in PyTorch, 79.3 top-1
|
||||
* Pip release, doc updates pending a few more changes...
|
||||
|
||||
### Sept 18, 2020
|
||||
|
||||
* New ResNet 'D' weights. 72.7 (top-1) ResNet-18-D, 77.1 ResNet-34-D, 80.5 ResNet-50-D
|
||||
* Added a few untrained defs for other ResNet models (66D, 101D, 152D, 200/200D)
|
||||
|
||||
### Sept 3, 2020
|
||||
|
||||
* New weights
|
||||
* Wide-ResNet50 - 81.5 top-1 (vs 78.5 torchvision)
|
||||
* SEResNeXt50-32x4d - 81.3 top-1 (vs 79.1 cadene)
|
||||
* Support for native Torch AMP and channels_last memory format added to train/validate scripts (`--channels-last`, `--native-amp` vs `--apex-amp`)
|
||||
* Models tested with channels_last on latest NGC 20.08 container. AdaptiveAvgPool in attn layers changed to mean((2,3)) to work around bug with NHWC kernel.
|
||||
|
||||
### Aug 12, 2020
|
||||
|
||||
* New/updated weights from training experiments
|
||||
* EfficientNet-B3 - 82.1 top-1 (vs 81.6 for official with AA and 81.9 for AdvProp)
|
||||
* RegNetY-3.2GF - 82.0 top-1 (78.9 from official ver)
|
||||
* CSPResNet50 - 79.6 top-1 (76.6 from official ver)
|
||||
* Add CutMix integrated w/ Mixup. See [pull request](https://github.com/rwightman/pytorch-image-models/pull/218) for some usage examples
|
||||
* Some fixes for using pretrained weights with `in_chans` != 3 on several models.
|
||||
|
||||
### Aug 5, 2020
|
||||
|
||||
Universal feature extraction, new models, new weights, new test sets.
|
||||
* All models support the `features_only=True` argument for `create_model` call to return a network that extracts feature maps from the deepest layer at each stride.
|
||||
* New models
|
||||
* CSPResNet, CSPResNeXt, CSPDarkNet, DarkNet
|
||||
* ReXNet
|
||||
* (Modified Aligned) Xception41/65/71 (a proper port of TF models)
|
||||
* New trained weights
|
||||
* SEResNet50 - 80.3 top-1
|
||||
* CSPDarkNet53 - 80.1 top-1
|
||||
* CSPResNeXt50 - 80.0 top-1
|
||||
* DPN68b - 79.2 top-1
|
||||
* EfficientNet-Lite0 (non-TF ver) - 75.5 (submitted by [@hal-314](https://github.com/hal-314))
|
||||
* Add 'real' labels for ImageNet and ImageNet-Renditions test set, see [`results/README.md`](results/README.md)
|
||||
* Test set ranking/top-n diff script by [@KushajveerSingh](https://github.com/KushajveerSingh)
|
||||
* Train script and loader/transform tweaks to punch through more aug arguments
|
||||
* README and documentation overhaul. See initial (WIP) documentation at https://rwightman.github.io/pytorch-image-models/
|
||||
* adamp and sgdp optimizers added by [@hellbell](https://github.com/hellbell)
|
||||
|
||||
### June 11, 2020
|
||||
|
||||
Bunch of changes:
|
||||
* DenseNet models updated with memory efficient addition from torchvision (fixed a bug), blur pooling and deep stem additions
|
||||
* VoVNet V1 and V2 models added, 39 V2 variant (ese_vovnet_39b) trained to 79.3 top-1
|
||||
* Activation factory added along with new activations:
|
||||
* select act at model creation time for more flexibility in using activations compatible with scripting or tracing (ONNX export)
|
||||
* hard_mish (experimental) added with memory-efficient grad, along with ME hard_swish
|
||||
* context mgr for setting exportable/scriptable/no_jit states
|
||||
* Norm + Activation combo layers added with initial trial support in DenseNet and VoVNet along with impl of EvoNorm and InplaceAbn wrapper that fit the interface
|
||||
* Torchscript works for all but two of the model types as long as using Pytorch 1.5+, tests added for this
|
||||
* Some import cleanup and classifier reset changes, all models will have classifier reset to nn.Identity on reset_classifier(0) call
|
||||
* Prep for 0.1.28 pip release
|
||||
|
||||
### May 12, 2020
|
||||
|
||||
* Add ResNeSt models (code adapted from https://github.com/zhanghang1989/ResNeSt, paper https://arxiv.org/abs/2004.08955)
|
||||
|
||||
### May 3, 2020
|
||||
|
||||
* Pruned EfficientNet B1, B2, and B3 (https://arxiv.org/abs/2002.08258) contributed by [Yonathan Aflalo](https://github.com/yoniaflalo)
|
||||
|
||||
### May 1, 2020
|
||||
|
||||
* Merged a number of excellent contributions in the ResNet model family over the past month
|
||||
* BlurPool2D and resnetblur models initiated by [Chris Ha](https://github.com/VRandme), I trained resnetblur50 to 79.3.
|
||||
* TResNet models and SpaceToDepth, AntiAliasDownsampleLayer layers by [mrT23](https://github.com/mrT23)
|
||||
* ecaresnet (50d, 101d, light) models and two pruned variants using pruning as per (https://arxiv.org/abs/2002.08258) by [Yonathan Aflalo](https://github.com/yoniaflalo)
|
||||
* 200 pretrained models in total now with updated results csv in results folder
|
||||
|
||||
### April 5, 2020
|
||||
|
||||
* Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite
|
||||
* 3.5M param MobileNet-V2 100 @ 73%
|
||||
* 4.5M param MobileNet-V2 110d @ 75%
|
||||
* 6.1M param MobileNet-V2 140 @ 76.5%
|
||||
* 5.8M param MobileNet-V2 120d @ 77.3%
|
||||
|
||||
### March 18, 2020
|
||||
|
||||
* Add EfficientNet-Lite models w/ weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite)
|
||||
* Add RandAugment trained ResNeXt-50 32x4d weights with 79.8 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)
|
||||
|
||||
### Feb 29, 2020
|
||||
|
||||
* New MobileNet-V3 Large weights trained from scratch with this code to 75.77% top-1
|
||||
* IMPORTANT CHANGE - default weight init changed for all MobilenetV3 / EfficientNet / related models
|
||||
* overall results are similar to, or a bit better than, training from scratch on the few smaller models tried
|
||||
* performance early in training seems consistently improved but less difference by end
|
||||
* set `fix_group_fanout=False` in `_init_weight_goog` fn if you need to reproduce past behaviour
|
||||
* Experimental LR noise feature added; applies a random perturbation to the LR each epoch within a specified range of training
|
||||
|
||||
### Feb 18, 2020
|
||||
|
||||
* Big refactor of model layers and addition of several attention mechanisms. Several additions motivated by 'Compounding the Performance Improvements...' (https://arxiv.org/abs/2001.06268):
|
||||
* Move layer/module impl into `layers` subfolder/module of `models` and organize in a more granular fashion
|
||||
* ResNet downsample paths now properly support dilation (output stride != 32) for avg_pool ('D' variant) and 3x3 (SENets) networks
|
||||
* Add Selective Kernel Nets on top of ResNet base, pretrained weights
|
||||
* skresnet18 - 73% top-1
|
||||
* skresnet34 - 76.9% top-1
|
||||
* skresnext50_32x4d (equiv to SKNet50) - 80.2% top-1
|
||||
* ECA and CECA (circular padding) attention layer contributed by [Chris Ha](https://github.com/VRandme)
|
||||
* CBAM attention experiment (not the best results so far, may remove)
|
||||
* Attention factory to allow dynamically selecting one of SE, ECA, CBAM in the `.se` position for all ResNets
|
||||
* Add DropBlock and DropPath (formerly DropConnect for EfficientNet/MobileNetv3) support to all ResNet variants
|
||||
* Full dataset results updated that incl NoisyStudent weights and 2 of the 3 SK weights
|
||||
|
||||
### Feb 12, 2020
|
||||
|
||||
* Add EfficientNet-L2 and B0-B7 NoisyStudent weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)
|
||||
|
||||
### Feb 6, 2020
|
||||
|
||||
* Add RandAugment trained EfficientNet-ES (EdgeTPU-Small) weights with 78.1 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)
|
||||
|
||||
### Feb 1/2, 2020
|
||||
|
||||
* Port new EfficientNet-B8 (RandAugment) weights. These are different from the B8 AdvProp weights and use a different input normalization.
|
||||
* Update results csv files on all models for ImageNet validation and three other test sets
|
||||
* Push PyPi package update
|
||||
|
||||
### Jan 31, 2020
|
||||
|
||||
* Update ResNet50 weights with a new 79.038 result from further JSD / AugMix experiments. Full command line for reproduction in training section below.
|
||||
|
||||
### Jan 11/12, 2020
|
||||
|
||||
* Master may be a bit unstable wrt to training, these changes have been tested but not all combos
|
||||
* Implementations of AugMix added to existing RA and AA. Including numerous supporting pieces like JSD loss (Jensen-Shannon divergence + CE), and AugMixDataset
|
||||
* SplitBatchNorm adaptation layer added for implementing Auxiliary BN as per AdvProp paper
|
||||
* ResNet-50 AugMix trained model w/ 79% top-1 added
|
||||
* `seresnext26tn_32x4d` - 77.99 top-1, 93.75 top-5 added to tiered experiment, higher img/s than 't' and 'd'
|
||||
|
||||
### Jan 3, 2020
|
||||
|
||||
* Add RandAugment trained EfficientNet-B0 weight with 77.7 top-1. Trained by [Michael Klachko](https://github.com/michaelklachko) with this code and recent hparams (see Training section)
|
||||
* Add `avg_checkpoints.py` script for post training weight averaging and update all scripts with header docstrings and shebangs.
|
||||
|
||||
### Dec 30, 2019
|
||||
|
||||
* Merge [Dushyant Mehta's](https://github.com/mehtadushy) PR for SelecSLS (Selective Short and Long Range Skip Connections) networks. Good GPU memory consumption and throughput. Original: https://github.com/mehtadushy/SelecSLS-Pytorch
|
||||
|
||||
### Dec 28, 2019
|
||||
|
||||
* Add new model weights and training hparams (see Training Hparams section)
|
||||
* `efficientnet_b3` - 81.5 top-1, 95.7 top-5 at default res/crop, 81.9, 95.8 at 320x320 1.0 crop-pct
|
||||
* trained with RandAugment, ended up with an interesting but less than perfect result (see training section)
|
||||
* `seresnext26d_32x4d`- 77.6 top-1, 93.6 top-5
|
||||
* deep stem (32, 32, 64), avgpool downsample
|
||||
* stem/downsample from bag-of-tricks paper
|
||||
* `seresnext26t_32x4d`- 78.0 top-1, 93.7 top-5
|
||||
* deep tiered stem (24, 48, 64), avgpool downsample (a modified 'D' variant)
|
||||
* stem sizing mods from Jeremy Howard and fastai devs discussing ResNet architecture experiments
|
||||
|
||||
### Dec 23, 2019
|
||||
|
||||
* Add RandAugment trained MixNet-XL weights with 80.48 top-1.
|
||||
* `--dist-bn` argument added to train.py, will distribute BN stats between nodes after each train epoch, before eval
|
||||
|
||||
### Dec 4, 2019
|
||||
|
||||
* Added weights from the first training from scratch of an EfficientNet (B2) with my new RandAugment implementation. Much better than my previous B2 and very close to the official AdvProp ones (80.4 top-1, 95.08 top-5).
|
||||
|
||||
### Nov 29, 2019
|
||||
|
||||
* Brought EfficientNet and MobileNetV3 up to date with my https://github.com/rwightman/gen-efficientnet-pytorch code. Torchscript and ONNX export compat excluded.
|
||||
* AdvProp weights added
|
||||
* Official TF MobileNetv3 weights added
|
||||
* EfficientNet and MobileNetV3 hook based 'feature extraction' classes added. Will serve as basis for using models as backbones in obj detection/segmentation tasks. Lots more to be done here...
|
||||
* HRNet classification models and weights added from https://github.com/HRNet/HRNet-Image-Classification
|
||||
* Consistency in global pooling, `reset_classifier`, and `forward_features` across models
|
||||
* `forward_features` always returns unpooled feature maps now
|
||||
* Reasonable chance I broke something... let me know
|
||||
|
||||
### Nov 22, 2019
|
||||
|
||||
* Add ImageNet training RandAugment implementation alongside AutoAugment. PyTorch Transform compatible format, using PIL. Currently training two EfficientNet models from scratch with promising results... will update.
|
||||
* `drop-connect` cmd line arg finally added to `train.py`, no need to hack model fns. Works for efficientnet/mobilenetv3 based models, ignored otherwise.
|
@@ -1,54 +0,0 @@
|
||||
# Sharing and Loading Models From the Hugging Face Hub
|
||||
|
||||
The `timm` library has a built-in integration with the Hugging Face Hub, making it easy to share and load models from the 🤗 Hub.
|
||||
|
||||
In this short guide, we'll see how to:
|
||||
1. Share a `timm` model on the Hub
|
||||
2. Load that model back from the Hub
|
||||
|
||||
## Authenticating
|
||||
|
||||
First, you'll need to make sure you have the `huggingface_hub` package installed.
|
||||
|
||||
```bash
|
||||
pip install huggingface_hub
|
||||
```
|
||||
|
||||
Then, you'll need to authenticate yourself. You can do this by running the following command:
|
||||
|
||||
```bash
|
||||
huggingface-cli login
|
||||
```
|
||||
|
||||
Or, if you're using a notebook, you can use the `notebook_login` helper:
|
||||
|
||||
```py
|
||||
>>> from huggingface_hub import notebook_login
|
||||
>>> notebook_login()
|
||||
```
|
||||
|
||||
## Sharing a Model
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
>>> model = timm.create_model('resnet18', pretrained=True, num_classes=4)
|
||||
```
|
||||
|
||||
Here is where you would normally train or fine-tune the model. We'll skip that for the sake of this tutorial.
|
||||
|
||||
Let's pretend we've now fine-tuned the model. The next step would be to push it to the Hub! We can do this with the `timm.models.hub.push_to_hf_hub` function.
|
||||
|
||||
```py
|
||||
>>> model_cfg = dict(labels=['a', 'b', 'c', 'd'])
|
||||
>>> timm.models.hub.push_to_hf_hub(model, 'resnet18-random', model_config=model_cfg)
|
||||
```
|
||||
|
||||
Running the above would push the model to `<your-username>/resnet18-random` on the Hub. You can now share this model with your friends, or use it in your own code!
|
||||
|
||||
## Loading a Model
|
||||
|
||||
Loading a model from the Hub is as simple as calling `timm.create_model` with `pretrained=True` and the Hub model name prefixed with `hf_hub:`. In this case, we'll use [`nateraw/resnet18-random`](https://huggingface.co/nateraw/resnet18-random), which is the model we just pushed to the Hub.
|
||||
|
||||
```py
|
||||
>>> model_reloaded = timm.create_model('hf_hub:nateraw/resnet18-random', pretrained=True)
|
||||
```
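
As a quick sanity check (an illustrative snippet, not part of the original guide), you can run a random tensor through the reloaded model. Since the model was created with `num_classes=4`, the output should contain four logits per image:

```py
>>> import torch
>>> model_reloaded = model_reloaded.eval()
>>> dummy_input = torch.randn(1, 3, 224, 224)
>>> model_reloaded(dummy_input).shape
torch.Size([1, 4])
```
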
|
@@ -1,22 +1,89 @@
|
||||
# timm
|
||||
# Getting Started
|
||||
|
||||
<img class="float-left !m-0 !border-0 !dark:border-0 !shadow-none !max-w-lg w-[150px]" src="https://huggingface.co/front/thumbnails/docs/timm.png"/>
|
||||
## Welcome
|
||||
|
||||
`timm` is a library containing SOTA computer vision models, layers, utilities, optimizers, schedulers, data-loaders, augmentations, and training/evaluation scripts.
|
||||
Welcome to the `timm` documentation, a lean set of docs that covers the basics of `timm`.
|
||||
|
||||
It comes packaged with >700 pretrained models, and is designed to be flexible and easy to use.
|
||||
For a more comprehensive set of docs (currently under development), please visit [timmdocs](http://timm.fast.ai) by [Aman Arora](https://github.com/amaarora).
|
||||
|
||||
Read the [quick start guide](quickstart) to get up and running with the `timm` library. You will learn how to load, discover, and use pretrained models included in the library.
|
||||
## Install
|
||||
|
||||
<div class="mt-10">
|
||||
<div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
|
||||
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./feature_extraction"
|
||||
><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
|
||||
<p class="text-gray-700">Learn the basics and become familiar with timm. Start here if you are using timm for the first time!</p>
|
||||
</a>
|
||||
<a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./reference/models"
|
||||
><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
|
||||
<p class="text-gray-700">Technical descriptions of how timm classes and methods work.</p>
|
||||
</a>
|
||||
</div>
|
||||
</div>
|
||||
The library can be installed with pip:
|
||||
|
||||
```
|
||||
pip install timm
|
||||
```
|
||||
|
||||
I update the PyPi (pip) packages when I'm confident there are no significant model regressions from previous releases. If you want to pip install the bleeding edge from GitHub, use:
|
||||
```
|
||||
pip install git+https://github.com/rwightman/pytorch-image-models.git
|
||||
```
|
||||
|
||||
### Conda Environment
|
||||
|
||||
<Tip>
|
||||
|
||||
- All development and testing have been done in Conda Python 3 environments on Linux x86-64 systems, specifically Python 3.7, 3.8, 3.9, and 3.10
|
||||
|
||||
- Little to no care has been taken to be Python 2.x friendly, and it will not be supported. If you run into any challenges running on Windows or another OS, I'm definitely open to looking into those issues so long as they can be reproduced in a Conda environment.
|
||||
|
||||
- PyTorch versions 1.9, 1.10, 1.11 have been tested with the latest versions of this code.
|
||||
|
||||
</Tip>
|
||||
|
||||
I've tried to keep the dependencies minimal, the setup is as per the PyTorch default install instructions for Conda:
|
||||
|
||||
```bash
|
||||
conda create -n torch-env
|
||||
conda activate torch-env
|
||||
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
|
||||
conda install pyyaml
|
||||
```
|
||||
|
||||
## Load a Pretrained Model
|
||||
|
||||
Pretrained models can be loaded using `timm.create_model`
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
|
||||
>>> m = timm.create_model('mobilenetv3_large_100', pretrained=True)
|
||||
>>> m.eval()
|
||||
```
|
||||
|
||||
## List Models with Pretrained Weights
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
>>> from pprint import pprint
|
||||
>>> model_names = timm.list_models(pretrained=True)
|
||||
>>> pprint(model_names)
|
||||
[
|
||||
'adv_inception_v3',
|
||||
'cspdarknet53',
|
||||
'cspresnext50',
|
||||
'densenet121',
|
||||
'densenet161',
|
||||
'densenet169',
|
||||
'densenet201',
|
||||
'densenetblur121d',
|
||||
'dla34',
|
||||
'dla46_c',
|
||||
]
|
||||
```
|
||||
|
||||
## List Model Architectures by Wildcard
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
>>> from pprint import pprint
|
||||
>>> model_names = timm.list_models('*resne*t*')
|
||||
>>> pprint(model_names)
|
||||
[
|
||||
'cspresnet50',
|
||||
'cspresnet50d',
|
||||
'cspresnet50w',
|
||||
'cspresnext50',
|
||||
...
|
||||
]
|
||||
```
|
||||
|
@@ -1,74 +0,0 @@
|
||||
# Installation
|
||||
|
||||
Before you start, you'll need to set up your environment and install the appropriate packages. `timm` is tested on **Python 3+**.
|
||||
|
||||
## Virtual Environment
|
||||
|
||||
You should install `timm` in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep things tidy and avoid dependency conflicts.
|
||||
|
||||
1. Create and navigate to your project directory:
|
||||
|
||||
```bash
|
||||
mkdir ~/my-project
|
||||
cd ~/my-project
|
||||
```
|
||||
|
||||
2. Start a virtual environment inside your directory:
|
||||
|
||||
```bash
|
||||
python -m venv .env
|
||||
```
|
||||
|
||||
3. Activate and deactivate the virtual environment with the following commands:
|
||||
|
||||
```bash
|
||||
# Activate the virtual environment
|
||||
source .env/bin/activate
|
||||
|
||||
# Deactivate the virtual environment
|
||||
source .env/bin/deactivate
|
||||
```
|
||||
|
||||
Once you've created your virtual environment, you can install `timm` in it.
|
||||
|
||||
## Using pip
|
||||
|
||||
The most straightforward way to install `timm` is with pip:
|
||||
|
||||
```bash
|
||||
pip install timm
|
||||
```
|
||||
|
||||
Alternatively, you can install `timm` from GitHub directly to get the latest, bleeding-edge version:
|
||||
|
||||
```bash
|
||||
pip install git+https://github.com/rwightman/pytorch-image-models.git
|
||||
```
|
||||
|
||||
Run the following command to check if `timm` has been properly installed:
|
||||
|
||||
```bash
|
||||
python -c "from timm import list_models; print(list_models(pretrained=True)[:5])"
|
||||
```
|
||||
|
||||
This command lists the first five pretrained models available in `timm` (which are sorted alphabetically). You should see the following output:
|
||||
|
||||
```python
|
||||
['adv_inception_v3', 'bat_resnext26ts', 'beit_base_patch16_224', 'beit_base_patch16_224_in22k', 'beit_base_patch16_384']
|
||||
```
|
||||
|
||||
## From Source
|
||||
|
||||
Building `timm` from source lets you make changes to the code base. To install from the source, clone the repository and install with the following commands:
|
||||
|
||||
```bash
|
||||
git clone https://github.com/rwightman/pytorch-image-models.git
|
||||
cd pytorch-image-models
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
Again, you can check if `timm` was properly installed with the following command:
|
||||
|
||||
```bash
|
||||
python -c "from timm import list_models; print(list_models(pretrained=True)[:5])"
|
||||
```
|
@@ -0,0 +1,5 @@
|
||||
# Available Models
|
||||
|
||||
`timm` comes bundled with a number of model architectures and corresponding pretrained models.
|
||||
|
||||
In these pages, you will find the models available in the `timm` library, as well as information on how to use them.
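
For example (an illustrative snippet rather than part of this page), any architecture documented here can be instantiated by passing its name to `timm.create_model`, with `pretrained=True` downloading the matching weights when they are available:

```py
>>> import timm
>>> model = timm.create_model('resnet50', pretrained=True)
```
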
|
@@ -1,228 +0,0 @@
|
||||
# Quickstart
|
||||
|
||||
This quickstart is intended for developers who are ready to dive into the code and see an example of how to integrate `timm` into their model training workflow.
|
||||
|
||||
First, you'll need to install `timm`. For more information on installation, see [Installation](installation).
|
||||
|
||||
```bash
|
||||
pip install timm
|
||||
```
|
||||
|
||||
## Load a Pretrained Model
|
||||
|
||||
Pretrained models can be loaded using [`create_model`].
|
||||
|
||||
Here, we load the pretrained `mobilenetv3_large_100` model.
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
|
||||
>>> m = timm.create_model('mobilenetv3_large_100', pretrained=True)
|
||||
>>> m.eval()
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Note: The returned PyTorch model is set to train mode by default, so you must call .eval() on it if you plan to use it for inference.
|
||||
</Tip>
|
||||
|
||||
## List Models with Pretrained Weights
|
||||
|
||||
To list models packaged with `timm`, you can use [`list_models`]. If you specify `pretrained=True`, this function will only return model names that have associated pretrained weights available.
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
>>> from pprint import pprint
|
||||
>>> model_names = timm.list_models(pretrained=True)
|
||||
>>> pprint(model_names)
|
||||
[
|
||||
'adv_inception_v3',
|
||||
'cspdarknet53',
|
||||
'cspresnext50',
|
||||
'densenet121',
|
||||
'densenet161',
|
||||
'densenet169',
|
||||
'densenet201',
|
||||
'densenetblur121d',
|
||||
'dla34',
|
||||
'dla46_c',
|
||||
]
|
||||
```
|
||||
|
||||
You can also list models with a specific pattern in their name.
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
>>> from pprint import pprint
|
||||
>>> model_names = timm.list_models('*resne*t*')
|
||||
>>> pprint(model_names)
|
||||
[
|
||||
'cspresnet50',
|
||||
'cspresnet50d',
|
||||
'cspresnet50w',
|
||||
'cspresnext50',
|
||||
...
|
||||
]
|
||||
```
|
||||
|
||||
## Fine-Tune a Pretrained Model
|
||||
|
||||
You can fine-tune any of the pretrained models just by changing the classifier (the last layer).
|
||||
|
||||
```py
|
||||
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
|
||||
```
|
||||
|
||||
To fine-tune on your own dataset, you have to write a PyTorch training loop or adapt `timm`'s [training script](training_script) to use your dataset.
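
As a rough sketch of what such a loop might look like (this snippet is illustrative; it assumes you already have a PyTorch `DataLoader` named `train_loader` that yields `(images, labels)` batches for your `NUM_FINETUNE_CLASSES` classes):

```py
>>> import torch
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
>>> optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
>>> criterion = torch.nn.CrossEntropyLoss()
>>> for images, labels in train_loader:
...     optimizer.zero_grad()
...     loss = criterion(model(images), labels)  # forward pass + loss
...     loss.backward()                          # backward pass
...     optimizer.step()                         # update the weights
```

In practice you would run this for several epochs, monitor validation accuracy, and likely add a learning rate schedule; `timm`'s own [training script](training_script) handles all of that for you.
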
|
||||
|
||||
## Use a Pretrained Model for Feature Extraction
|
||||
|
||||
Without modifying the network, one can call `model.forward_features(input)` on any model instead of the usual `model(input)`. This will bypass the network's classifier head and global pooling.
|
||||
|
||||
For a more in depth guide to using `timm` for feature extraction, see [Feature Extraction](feature_extraction).
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
>>> import torch
|
||||
>>> x = torch.randn(1, 3, 224, 224)
|
||||
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True)
|
||||
>>> features = model.forward_features(x)
|
||||
>>> print(features.shape)
|
||||
torch.Size([1, 960, 7, 7])
|
||||
```
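
If you want feature maps from several depths of the network rather than only the deepest one, `create_model` also accepts `features_only=True`. A short sketch reusing the model and input from above (see [Feature Extraction](feature_extraction) for the full story):

```py
>>> feature_extractor = timm.create_model('mobilenetv3_large_100', pretrained=True, features_only=True)
>>> feature_maps = feature_extractor(x)
>>> [fm.shape for fm in feature_maps]
```

Each element of the returned list comes from a successively deeper stage of the network, so spatial resolution decreases (and channel count generally increases) as you move down the list.
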
|
||||
|
||||
## Image Augmentation
|
||||
|
||||
To transform images into valid inputs for a model, you can use [`timm.data.create_transform`], providing the desired `input_size` that the model expects.
|
||||
|
||||
This will return a generic transform that uses reasonable defaults.
|
||||
|
||||
```py
|
||||
>>> timm.data.create_transform((3, 224, 224))
|
||||
Compose(
|
||||
Resize(size=256, interpolation=bilinear, max_size=None, antialias=None)
|
||||
CenterCrop(size=(224, 224))
|
||||
ToTensor()
|
||||
Normalize(mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
|
||||
)
|
||||
```
|
||||
|
||||
Pretrained models have specific transforms that were applied to images fed into them while training. If you use the wrong transform on your image, the model won't understand what it's seeing!
|
||||
|
||||
To figure out which transformations were used for a given pretrained model, we can start by taking a look at its `pretrained_cfg`
|
||||
|
||||
```py
|
||||
>>> model.pretrained_cfg
|
||||
{'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_large_100_ra-f55367f5.pth',
|
||||
'num_classes': 1000,
|
||||
'input_size': (3, 224, 224),
|
||||
'pool_size': (7, 7),
|
||||
'crop_pct': 0.875,
|
||||
'interpolation': 'bicubic',
|
||||
'mean': (0.485, 0.456, 0.406),
|
||||
'std': (0.229, 0.224, 0.225),
|
||||
'first_conv': 'conv_stem',
|
||||
'classifier': 'classifier',
|
||||
'architecture': 'mobilenetv3_large_100'}
|
||||
```
|
||||
|
||||
We can then resolve just the data-related configuration by using [`timm.data.resolve_data_config`].
|
||||
|
||||
```py
|
||||
>>> timm.data.resolve_data_config(model.pretrained_cfg)
|
||||
{'input_size': (3, 224, 224),
|
||||
'interpolation': 'bicubic',
|
||||
'mean': (0.485, 0.456, 0.406),
|
||||
'std': (0.229, 0.224, 0.225),
|
||||
'crop_pct': 0.875}
|
||||
```
|
||||
|
||||
We can pass this data config to [`timm.data.create_transform`] to initialize the model's associated transform.
|
||||
|
||||
```py
|
||||
>>> data_cfg = timm.data.resolve_data_config(model.pretrained_cfg)
|
||||
>>> transform = timm.data.create_transform(**data_cfg)
|
||||
>>> transform
|
||||
Compose(
|
||||
Resize(size=256, interpolation=bicubic, max_size=None, antialias=None)
|
||||
CenterCrop(size=(224, 224))
|
||||
ToTensor()
|
||||
Normalize(mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
|
||||
)
|
||||
```
|
||||
|
||||
<Tip>
|
||||
Here, the pretrained model's config is nearly the same as the generic config we made earlier, differing only in the interpolation method (`bicubic` vs `bilinear`). This is not always the case, so it's safer to create the transform from the model's data config, as we did here, instead of using the generic transform.
|
||||
</Tip>
|
||||
|
||||
## Using Pretrained Models for Inference
|
||||
|
||||
Here, we will put together the above sections and use a pretrained model for inference.
|
||||
|
||||
First we'll need an image to run inference on. Here we load an image from the web:
|
||||
|
||||
```py
|
||||
>>> import requests
|
||||
>>> from PIL import Image
|
||||
>>> from io import BytesIO
|
||||
>>> url = 'https://datasets-server.huggingface.co/assets/imagenet-1k/--/default/test/12/image/image.jpg'
|
||||
>>> image = Image.open(requests.get(url, stream=True).raw)
|
||||
>>> image
|
||||
```
|
||||
|
||||
Here's the image we loaded:
|
||||
|
||||
<img src="https://datasets-server.huggingface.co/assets/imagenet-1k/--/default/test/12/image/image.jpg" alt="An Image from a link" width="300"/>
|
||||
|
||||
Now, we'll create our model and transforms again. This time, we make sure to set our model in evaluation mode.
|
||||
|
||||
```py
|
||||
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True).eval()
|
||||
>>> transform = timm.data.create_transform(
|
||||
**timm.data.resolve_data_config(model.pretrained_cfg)
|
||||
)
|
||||
```
|
||||
|
||||
We can prepare this image for the model by passing it to the transform.
|
||||
|
||||
```py
|
||||
>>> image_tensor = transform(image)
|
||||
>>> image_tensor.shape
|
||||
torch.Size([3, 224, 224])
|
||||
```
|
||||
|
||||
Now we can pass that image to the model to get the predictions. We use `unsqueeze(0)` in this case, as the model is expecting a batch dimension.
|
||||
|
||||
```py
|
||||
>>> output = model(image_tensor.unsqueeze(0))
|
||||
>>> output.shape
|
||||
torch.Size([1, 1000])
|
||||
```
|
||||
|
||||
To get the predicted probabilities, we apply softmax to the output. This leaves us with a tensor of shape `(num_classes,)`.
|
||||
|
||||
```py
|
||||
>>> probabilities = torch.nn.functional.softmax(output[0], dim=0)
|
||||
>>> probabilities.shape
|
||||
torch.Size([1000])
|
||||
```
|
||||
|
||||
Now we'll find the top 5 predicted class indexes and values using `torch.topk`.
|
||||
|
||||
```py
|
||||
>>> values, indices = torch.topk(probabilities, 5)
|
||||
>>> indices
|
||||
tensor([162, 166, 161, 164, 167])
|
||||
```
|
||||
|
||||
If we check the ImageNet labels for the top indices, we can see what the model predicted...
|
||||
|
||||
```py
|
||||
>>> IMAGENET_1k_URL = 'https://storage.googleapis.com/bit_models/ilsvrc2012_wordnet_lemmas.txt'
|
||||
>>> IMAGENET_1k_LABELS = requests.get(IMAGENET_1k_URL).text.strip().split('\n')
|
||||
>>> [{'label': IMAGENET_1k_LABELS[idx], 'value': val.item()} for val, idx in zip(values, indices)]
|
||||
[{'label': 'beagle', 'value': 0.8486220836639404},
|
||||
{'label': 'Walker_hound, Walker_foxhound', 'value': 0.03753996267914772},
|
||||
{'label': 'basset, basset_hound', 'value': 0.024628572165966034},
|
||||
{'label': 'bluetick', 'value': 0.010317106731235981},
|
||||
{'label': 'English_foxhound', 'value': 0.006958036217838526}]
|
||||
```
|
@ -1,9 +0,0 @@
|
||||
# Data
|
||||
|
||||
[[autodoc]] timm.data.create_dataset
|
||||
|
||||
[[autodoc]] timm.data.create_loader
|
||||
|
||||
[[autodoc]] timm.data.create_transform
|
||||
|
||||
[[autodoc]] timm.data.resolve_data_config
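As a rough sketch of how these factories fit together (the dataset path and parameter values below are placeholders, not recommendations):

```py
>>> import timm
>>> dataset = timm.data.create_dataset('', root='path/to/imagenet/validation', is_training=False)
>>> loader = timm.data.create_loader(
...     dataset,
...     input_size=(3, 224, 224),
...     batch_size=32,
...     is_training=False,
...     use_prefetcher=False,  # the default prefetcher path expects CUDA
... )
>>> images, targets = next(iter(loader))
```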
|
@ -1,5 +0,0 @@
|
||||
# Models
|
||||
|
||||
[[autodoc]] timm.create_model
|
||||
|
||||
[[autodoc]] timm.list_models
|
@ -1,27 +0,0 @@
|
||||
# Optimization
|
||||
|
||||
This page contains the API reference documentation for the optimizers included in `timm`.
|
||||
|
||||
## Optimizers
|
||||
|
||||
### Factory functions
|
||||
|
||||
[[autodoc]] timm.optim.optim_factory.create_optimizer
|
||||
[[autodoc]] timm.optim.optim_factory.create_optimizer_v2
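For orientation, creating an optimizer through the v2 factory typically looks like the following sketch (the model choice and hyperparameter values are illustrative only):

```py
>>> import timm
>>> from timm.optim import create_optimizer_v2
>>> model = timm.create_model('resnet18')
>>> optimizer = create_optimizer_v2(model, opt='adamw', lr=1e-3, weight_decay=0.05)
```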
|
||||
|
||||
### Optimizer Classes
|
||||
|
||||
[[autodoc]] timm.optim.adabelief.AdaBelief
|
||||
[[autodoc]] timm.optim.adafactor.Adafactor
|
||||
[[autodoc]] timm.optim.adahessian.Adahessian
|
||||
[[autodoc]] timm.optim.adamp.AdamP
|
||||
[[autodoc]] timm.optim.adamw.AdamW
|
||||
[[autodoc]] timm.optim.lamb.Lamb
|
||||
[[autodoc]] timm.optim.lars.Lars
|
||||
[[autodoc]] timm.optim.lookahead.Lookahead
|
||||
[[autodoc]] timm.optim.madgrad.MADGRAD
|
||||
[[autodoc]] timm.optim.nadam.Nadam
|
||||
[[autodoc]] timm.optim.nvnovograd.NvNovoGrad
|
||||
[[autodoc]] timm.optim.radam.RAdam
|
||||
[[autodoc]] timm.optim.rmsprop_tf.RMSpropTF
|
||||
[[autodoc]] timm.optim.sgdp.SGDP
|
@ -1,19 +0,0 @@
|
||||
# Learning Rate Schedulers
|
||||
|
||||
This page contains the API reference documentation for learning rate schedulers included in `timm`.
|
||||
|
||||
## Schedulers
|
||||
|
||||
### Factory functions
|
||||
|
||||
[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler
|
||||
[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler_v2
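As a rough sketch, the v2 factory pairs a scheduler with an existing optimizer (e.g. one created by the optimizer factory above) and returns both the scheduler and the resolved number of epochs; the hyperparameter values here are illustrative only:

```py
>>> from timm.scheduler.scheduler_factory import create_scheduler_v2
>>> scheduler, num_epochs = create_scheduler_v2(optimizer, sched='cosine', num_epochs=100, warmup_epochs=5)
>>> for epoch in range(num_epochs):
...     # ... train for one epoch, then advance the schedule ...
...     scheduler.step(epoch + 1)
```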
|
||||
|
||||
### Scheduler Classes
|
||||
|
||||
[[autodoc]] timm.scheduler.cosine_lr.CosineLRScheduler
|
||||
[[autodoc]] timm.scheduler.multistep_lr.MultiStepLRScheduler
|
||||
[[autodoc]] timm.scheduler.plateau_lr.PlateauLRScheduler
|
||||
[[autodoc]] timm.scheduler.poly_lr.PolyLRScheduler
|
||||
[[autodoc]] timm.scheduler.step_lr.StepLRScheduler
|
||||
[[autodoc]] timm.scheduler.tanh_lr.TanhLRScheduler
|
@ -0,0 +1,35 @@
|
||||
# Scripts
|
||||
Train, validation, inference, and checkpoint cleaning scripts are included in the GitHub repository root folder. Scripts are not currently packaged in the pip release.
|
||||
|
||||
The training and validation scripts evolved from early versions of the [PyTorch Imagenet Examples](https://github.com/pytorch/examples). I have added significant functionality over time, including CUDA-specific performance enhancements based on
|
||||
[NVIDIA's APEX Examples](https://github.com/NVIDIA/apex/tree/master/examples).
|
||||
|
||||
## Training Script
|
||||
|
||||
The variety of training args is large and not all combinations of options (or even individual options) have been fully tested. For the training dataset argument, specify the base folder that contains `train` and `validation` subfolders.
|
||||
|
||||
To train an SE-ResNet34 on ImageNet, locally distributed across 4 GPUs (one process per GPU), with a cosine schedule, a random-erasing probability of 50%, and per-pixel random erase values:
|
||||
|
||||
```bash
|
||||
./distributed_train.sh 4 /data/imagenet --model seresnet34 --sched cosine --epochs 150 --warmup-epochs 5 --lr 0.4 --reprob 0.5 --remode pixel --batch-size 256 --amp -j 4
|
||||
```
|
||||
|
||||
<Tip>
|
||||
It is recommended to use PyTorch 1.9+ with PyTorch native AMP and DDP instead of APEX AMP. `--amp` defaults to native AMP as of `timm` version 0.4.3. `--apex-amp` will force use of APEX components if they are installed.
|
||||
</Tip>
|
||||
|
||||
## Validation / Inference Scripts
|
||||
|
||||
Validation and inference scripts are similar in usage. One outputs metrics on a validation set and the other outputs top-k class ids in a CSV. Specify the folder containing the validation images, not the base folder as in the training script.
|
||||
|
||||
To validate with the model's pretrained weights (if they exist):
|
||||
|
||||
```bash
|
||||
python validate.py /imagenet/validation/ --model seresnext26_32x4d --pretrained
|
||||
```
|
||||
|
||||
To run inference from a checkpoint:
|
||||
|
||||
```bash
|
||||
python inference.py /imagenet/validation/ --model mobilenetv3_large_100 --checkpoint ./output/train/model_best.pth.tar
|
||||
```
|
@ -1,5 +1,4 @@
|
||||
mkdocs
|
||||
mkdocs-material
|
||||
mkdocs-redirects
|
||||
mdx_truly_sane_lists
|
||||
mkdocs-awesome-pages-plugin
|
@ -0,0 +1,2 @@
|
||||
model-index==0.1.10
|
||||
jinja2==2.11.3
|
|
|
@ -1,73 +0,0 @@
|
||||
from abc import ABC, abstractmethod
|
||||
from typing import Dict, List, Optional, Union
|
||||
|
||||
|
||||
class DatasetInfo(ABC):
|
||||
|
||||
def __init__(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def num_classes(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def label_names(self):
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def label_descriptions(self, detailed: bool = False, as_dict: bool = False) -> Union[List[str], Dict[str, str]]:
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def index_to_label_name(self, index) -> str:
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def index_to_description(self, index: int, detailed: bool = False) -> str:
|
||||
pass
|
||||
|
||||
@abstractmethod
|
||||
def label_name_to_description(self, label: str, detailed: bool = False) -> str:
|
||||
pass
|
||||
|
||||
|
||||
class CustomDatasetInfo(DatasetInfo):
|
||||
""" DatasetInfo that wraps passed values for custom datasets."""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
label_names: Union[List[str], Dict[int, str]],
|
||||
label_descriptions: Optional[Dict[str, str]] = None
|
||||
):
|
||||
super().__init__()
|
||||
assert len(label_names) > 0
|
||||
self._label_names = label_names # label index => label name mapping
|
||||
self._label_descriptions = label_descriptions # label name => label description mapping
|
||||
if self._label_descriptions is not None:
|
||||
# validate descriptions (label names required)
|
||||
assert isinstance(self._label_descriptions, dict)
|
||||
for n in self._label_names:
|
||||
assert n in self._label_descriptions
|
||||
|
||||
def num_classes(self):
|
||||
return len(self._label_names)
|
||||
|
||||
def label_names(self):
|
||||
return self._label_names
|
||||
|
||||
def label_descriptions(self, detailed: bool = False, as_dict: bool = False) -> Union[List[str], Dict[str, str]]:
|
||||
return self._label_descriptions
|
||||
|
||||
def label_name_to_description(self, label: str, detailed: bool = False) -> str:
|
||||
if self._label_descriptions:
|
||||
return self._label_descriptions[label]
|
||||
return label # return the label name itself if a description is not present
|
||||
|
||||
def index_to_label_name(self, index) -> str:
|
||||
assert 0 <= index < len(self._label_names)
|
||||
return self._label_names[index]
|
||||
|
||||
def index_to_description(self, index: int, detailed: bool = False) -> str:
|
||||
label = self.index_to_label_name(index)
|
||||
return self.label_name_to_description(label, detailed=detailed)
|
@ -1,92 +0,0 @@
|
||||
import csv
|
||||
import os
|
||||
import pkgutil
|
||||
import re
|
||||
from typing import Dict, List, Optional, Union
|
||||
|
||||
from .dataset_info import DatasetInfo
|
||||
|
||||
|
||||
_NUM_CLASSES_TO_SUBSET = {
|
||||
1000: 'imagenet-1k',
|
||||
11821: 'imagenet-12k',
|
||||
21841: 'imagenet-22k',
|
||||
21843: 'imagenet-21k-goog',
|
||||
11221: 'imagenet-21k-miil',
|
||||
}
|
||||
|
||||
_SUBSETS = {
|
||||
'imagenet1k': 'imagenet_synsets.txt',
|
||||
'imagenet12k': 'imagenet12k_synsets.txt',
|
||||
'imagenet22k': 'imagenet22k_synsets.txt',
|
||||
'imagenet21k': 'imagenet21k_goog_synsets.txt',
|
||||
'imagenet21kgoog': 'imagenet21k_goog_synsets.txt',
|
||||
'imagenet21kmiil': 'imagenet21k_miil_synsets.txt',
|
||||
}
|
||||
_LEMMA_FILE = 'imagenet_synset_to_lemma.txt'
|
||||
_DEFINITION_FILE = 'imagenet_synset_to_definition.txt'
|
||||
|
||||
|
||||
def infer_imagenet_subset(model_or_cfg) -> Optional[str]:
|
||||
if isinstance(model_or_cfg, dict):
|
||||
num_classes = model_or_cfg.get('num_classes', None)
|
||||
else:
|
||||
num_classes = getattr(model_or_cfg, 'num_classes', None)
|
||||
if not num_classes:
|
||||
pretrained_cfg = getattr(model_or_cfg, 'pretrained_cfg', {})
|
||||
# FIXME at some point pretrained_cfg should include dataset-tag,
|
||||
# which will be more robust than a guess based on num_classes
|
||||
num_classes = pretrained_cfg.get('num_classes', None)
|
||||
if not num_classes or num_classes not in _NUM_CLASSES_TO_SUBSET:
|
||||
return None
|
||||
return _NUM_CLASSES_TO_SUBSET[num_classes]
|
||||
|
||||
|
||||
class ImageNetInfo(DatasetInfo):
|
||||
|
||||
def __init__(self, subset: str = 'imagenet-1k'):
|
||||
super().__init__()
|
||||
subset = re.sub(r'[-_\s]', '', subset.lower())
|
||||
assert subset in _SUBSETS, f'Unknown imagenet subset {subset}.'
|
||||
|
||||
# WordNet synsets (part-of-speech + offset) are the unique class label names for ImageNet classifiers
|
||||
synset_file = _SUBSETS[subset]
|
||||
synset_data = pkgutil.get_data(__name__, os.path.join('_info', synset_file))
|
||||
self._synsets = synset_data.decode('utf-8').splitlines()
|
||||
|
||||
# WordNet lemmas (canonical dictionary form of word) and definitions are used to build
|
||||
# the class descriptions. If detailed=True both are used, otherwise just the lemmas.
|
||||
lemma_data = pkgutil.get_data(__name__, os.path.join('_info', _LEMMA_FILE))
|
||||
reader = csv.reader(lemma_data.decode('utf-8').splitlines(), delimiter='\t')
|
||||
self._lemmas = dict(reader)
|
||||
definition_data = pkgutil.get_data(__name__, os.path.join('_info', _DEFINITION_FILE))
|
||||
reader = csv.reader(definition_data.decode('utf-8').splitlines(), delimiter='\t')
|
||||
self._definitions = dict(reader)
|
||||
|
||||
def num_classes(self):
|
||||
return len(self._synsets)
|
||||
|
||||
def label_names(self):
|
||||
return self._synsets
|
||||
|
||||
def label_descriptions(self, detailed: bool = False, as_dict: bool = False) -> Union[List[str], Dict[str, str]]:
|
||||
if as_dict:
|
||||
return {label: self.label_name_to_description(label, detailed=detailed) for label in self._synsets}
|
||||
else:
|
||||
return [self.label_name_to_description(label, detailed=detailed) for label in self._synsets]
|
||||
|
||||
def index_to_label_name(self, index) -> str:
|
||||
assert 0 <= index < len(self._synsets), \
|
||||
f'Index ({index}) out of range for dataset with {len(self._synsets)} classes.'
|
||||
return self._synsets[index]
|
||||
|
||||
def index_to_description(self, index: int, detailed: bool = False) -> str:
|
||||
label = self.index_to_label_name(index)
|
||||
return self.label_name_to_description(label, detailed=detailed)
|
||||
|
||||
def label_name_to_description(self, label: str, detailed: bool = False) -> str:
|
||||
if detailed:
|
||||
description = f'{self._lemmas[label]}: {self._definitions[label]}'
|
||||
else:
|
||||
description = f'{self._lemmas[label]}'
|
||||
return description
|
@ -1,39 +0,0 @@
|
||||
""" Global Response Normalization Module
|
||||
|
||||
Based on the GRN layer presented in
|
||||
`ConvNeXt-V2 - Co-designing and Scaling ConvNets with Masked Autoencoders` - https://arxiv.org/abs/2301.00808
|
||||
|
||||
This implementation
|
||||
* works for both NCHW and NHWC tensor layouts
|
||||
* uses affine param names matching existing torch norm layers
|
||||
* slightly improves eager mode performance via fused addcmul
|
||||
|
||||
Hacked together by / Copyright 2023 Ross Wightman
|
||||
"""
|
||||
|
||||
import torch
|
||||
from torch import nn as nn
|
||||
|
||||
|
||||
class GlobalResponseNorm(nn.Module):
|
||||
""" Global Response Normalization layer
|
||||
"""
|
||||
def __init__(self, dim, eps=1e-6, channels_last=True):
|
||||
super().__init__()
|
||||
self.eps = eps
|
||||
if channels_last:
|
||||
self.spatial_dim = (1, 2)
|
||||
self.channel_dim = -1
|
||||
self.wb_shape = (1, 1, 1, -1)
|
||||
else:
|
||||
self.spatial_dim = (2, 3)
|
||||
self.channel_dim = 1
|
||||
self.wb_shape = (1, -1, 1, 1)
|
||||
|
||||
self.weight = nn.Parameter(torch.zeros(dim))
|
||||
self.bias = nn.Parameter(torch.zeros(dim))
|
||||
|
||||
def forward(self, x):
|
||||
x_g = x.norm(p=2, dim=self.spatial_dim, keepdim=True)
|
||||
x_n = x_g / (x_g.mean(dim=self.channel_dim, keepdim=True) + self.eps)
|
||||
return x + torch.addcmul(self.bias.view(self.wb_shape), self.weight.view(self.wb_shape), x * x_n)
|
@ -1,52 +1,207 @@
|
||||
""" Position Embedding Utilities
|
||||
|
||||
Hacked together by / Copyright 2022 Ross Wightman
|
||||
"""
|
||||
import logging
|
||||
import math
|
||||
from typing import List, Tuple, Optional, Union
|
||||
|
||||
import torch
|
||||
import torch.nn.functional as F
|
||||
from torch import nn as nn
|
||||
|
||||
from .helpers import to_2tuple
|
||||
|
||||
_logger = logging.getLogger(__name__)
|
||||
def pixel_freq_bands(
|
||||
num_bands: int,
|
||||
max_freq: float = 224.,
|
||||
linear_bands: bool = True,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
):
|
||||
if linear_bands:
|
||||
bands = torch.linspace(1.0, max_freq / 2, num_bands, dtype=dtype, device=device)
|
||||
else:
|
||||
bands = 2 ** torch.linspace(0, math.log(max_freq, 2) - 1, num_bands, dtype=dtype, device=device)
|
||||
return bands * torch.pi
|
||||
|
||||
|
||||
def resample_abs_pos_embed(
|
||||
posemb,
|
||||
new_size: List[int],
|
||||
old_size: Optional[List[int]] = None,
|
||||
num_prefix_tokens: int = 1,
|
||||
interpolation: str = 'bicubic',
|
||||
antialias: bool = True,
|
||||
verbose: bool = False,
|
||||
):
|
||||
# sort out sizes, assume square if old size not provided
|
||||
new_size = to_2tuple(new_size)
|
||||
new_ntok = new_size[0] * new_size[1]
|
||||
if not old_size:
|
||||
old_size = int(math.sqrt(posemb.shape[1] - num_prefix_tokens))
|
||||
old_size = to_2tuple(old_size)
|
||||
if new_size == old_size: # might not both be same container type
|
||||
return posemb
|
||||
|
||||
if num_prefix_tokens:
|
||||
posemb_prefix, posemb = posemb[:, :num_prefix_tokens], posemb[:, num_prefix_tokens:]
|
||||
def inv_freq_bands(
|
||||
num_bands: int,
|
||||
temperature: float = 100000.,
|
||||
step: int = 2,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
) -> torch.Tensor:
|
||||
inv_freq = 1. / (temperature ** (torch.arange(0, num_bands, step, dtype=dtype, device=device) / num_bands))
|
||||
return inv_freq
|
||||
|
||||
|
||||
def build_sincos2d_pos_embed(
|
||||
feat_shape: List[int],
|
||||
dim: int = 64,
|
||||
temperature: float = 10000.,
|
||||
reverse_coord: bool = False,
|
||||
interleave_sin_cos: bool = False,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None
|
||||
) -> torch.Tensor:
|
||||
"""
|
||||
|
||||
Args:
|
||||
feat_shape:
|
||||
dim:
|
||||
temperature:
|
||||
reverse_coord: stack grid order W, H instead of H, W
|
||||
interleave_sin_cos: sin, cos, sin, cos stack instead of sin, sin, cos, cos
|
||||
dtype:
|
||||
device:
|
||||
|
||||
Returns:
|
||||
|
||||
"""
|
||||
assert dim % 4 == 0, 'Embed dimension must be divisible by 4 for sin-cos 2D position embedding'
|
||||
pos_dim = dim // 4
|
||||
bands = inv_freq_bands(pos_dim, temperature=temperature, step=1, dtype=dtype, device=device)
|
||||
|
||||
if reverse_coord:
|
||||
feat_shape = feat_shape[::-1] # stack W, H instead of H, W
|
||||
grid = torch.stack(
|
||||
torch.meshgrid([torch.arange(s, device=device, dtype=dtype) for s in feat_shape])).flatten(1).transpose(0, 1)
|
||||
pos2 = grid.unsqueeze(-1) * bands.unsqueeze(0)
|
||||
# FIXME add support for unflattened spatial dim?
|
||||
|
||||
stack_dim = 2 if interleave_sin_cos else 1 # stack sin, cos, sin, cos instead of sin sin cos cos
|
||||
pos_emb = torch.stack([torch.sin(pos2), torch.cos(pos2)], dim=stack_dim).flatten(1)
|
||||
return pos_emb
|
||||
|
||||
|
||||
def build_fourier_pos_embed(
|
||||
feat_shape: List[int],
|
||||
bands: Optional[torch.Tensor] = None,
|
||||
num_bands: int = 64,
|
||||
max_res: int = 224,
|
||||
linear_bands: bool = False,
|
||||
include_grid: bool = False,
|
||||
concat_out: bool = True,
|
||||
in_pixels: bool = True,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
) -> List[torch.Tensor]:
|
||||
if bands is None:
|
||||
if in_pixels:
|
||||
bands = pixel_freq_bands(num_bands, float(max_res), linear_bands=linear_bands, dtype=dtype, device=device)
|
||||
else:
|
||||
bands = inv_freq_bands(num_bands, step=1, dtype=dtype, device=device)
|
||||
else:
|
||||
posemb_prefix, posemb = None, posemb
|
||||
if device is None:
|
||||
device = bands.device
|
||||
if dtype is None:
|
||||
dtype = bands.dtype
|
||||
|
||||
if in_pixels:
|
||||
grid = torch.stack(torch.meshgrid(
|
||||
[torch.linspace(-1., 1., steps=s, device=device, dtype=dtype) for s in feat_shape]), dim=-1)
|
||||
else:
|
||||
grid = torch.stack(torch.meshgrid(
|
||||
[torch.arange(s, device=device, dtype=dtype) for s in feat_shape]), dim=-1)
|
||||
grid = grid.unsqueeze(-1)
|
||||
pos = grid * bands
|
||||
|
||||
pos_sin, pos_cos = pos.sin(), pos.cos()
|
||||
out = (grid, pos_sin, pos_cos) if include_grid else (pos_sin, pos_cos)
|
||||
# FIXME torchscript doesn't like multiple return types, probably need to always cat?
|
||||
if concat_out:
|
||||
out = torch.cat(out, dim=-1)
|
||||
return out
|
||||
|
||||
|
||||
class FourierEmbed(nn.Module):
|
||||
|
||||
def __init__(self, max_res: int = 224, num_bands: int = 64, concat_grid=True, keep_spatial=False):
|
||||
super().__init__()
|
||||
self.max_res = max_res
|
||||
self.num_bands = num_bands
|
||||
self.concat_grid = concat_grid
|
||||
self.keep_spatial = keep_spatial
|
||||
self.register_buffer('bands', pixel_freq_bands(max_res, num_bands), persistent=False)
|
||||
|
||||
def forward(self, x):
|
||||
B, C = x.shape[:2]
|
||||
feat_shape = x.shape[2:]
|
||||
emb = build_fourier_pos_embed(
|
||||
feat_shape,
|
||||
self.bands,
|
||||
include_grid=self.concat_grid,
|
||||
dtype=x.dtype,
|
||||
device=x.device)
|
||||
emb = emb.transpose(-1, -2).flatten(len(feat_shape))
|
||||
batch_expand = (B,) + (-1,) * (x.ndim - 1)
|
||||
|
||||
# FIXME support nD
|
||||
if self.keep_spatial:
|
||||
x = torch.cat([x, emb.unsqueeze(0).expand(batch_expand).permute(0, 3, 1, 2)], dim=1)
|
||||
else:
|
||||
x = torch.cat([x.permute(0, 2, 3, 1), emb.unsqueeze(0).expand(batch_expand)], dim=-1)
|
||||
x = x.reshape(B, feat_shape.numel(), -1)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
def rot(x):
|
||||
return torch.stack([-x[..., 1::2], x[..., ::2]], -1).reshape(x.shape)
|
||||
|
||||
|
||||
def apply_rot_embed(x: torch.Tensor, sin_emb, cos_emb):
|
||||
return x * cos_emb + rot(x) * sin_emb
|
||||
|
||||
|
||||
def apply_rot_embed_list(x: List[torch.Tensor], sin_emb, cos_emb):
|
||||
if isinstance(x, torch.Tensor):
|
||||
x = [x]
|
||||
return [t * cos_emb + rot(t) * sin_emb for t in x]
|
||||
|
||||
|
||||
def apply_rot_embed_split(x: torch.Tensor, emb):
|
||||
split = emb.shape[-1] // 2
|
||||
return x * emb[:, :split] + rot(x) * emb[:, split:]
|
||||
|
||||
|
||||
def build_rotary_pos_embed(
|
||||
feat_shape: List[int],
|
||||
bands: Optional[torch.Tensor] = None,
|
||||
dim: int = 64,
|
||||
max_freq: float = 224,
|
||||
linear_bands: bool = False,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
):
|
||||
"""
|
||||
NOTE: shape arg should include spatial dim only
|
||||
"""
|
||||
feat_shape = torch.Size(feat_shape)
|
||||
|
||||
sin_emb, cos_emb = build_fourier_pos_embed(
|
||||
feat_shape, bands=bands, num_bands=dim // 4, max_res=max_freq, linear_bands=linear_bands,
|
||||
concat_out=False, device=device, dtype=dtype)
|
||||
N = feat_shape.numel()
|
||||
sin_emb = sin_emb.reshape(N, -1).repeat_interleave(2, -1)
|
||||
cos_emb = cos_emb.reshape(N, -1).repeat_interleave(2, -1)
|
||||
return sin_emb, cos_emb
|
||||
|
||||
|
||||
class RotaryEmbedding(nn.Module):
|
||||
""" Rotary position embedding
|
||||
|
||||
NOTE: This is my initial attempt at impl rotary embedding for spatial use, it has not
|
||||
been well tested, and will likely change. It will be moved to its own file.
|
||||
|
||||
# do the interpolation
|
||||
posemb = posemb.reshape(1, old_size[0], old_size[1], -1).permute(0, 3, 1, 2)
|
||||
posemb = F.interpolate(posemb, size=new_size, mode=interpolation, antialias=antialias)
|
||||
posemb = posemb.permute(0, 2, 3, 1).reshape(1, new_ntok, -1)
|
||||
The following impl/resources were referenced for this impl:
|
||||
* https://github.com/lucidrains/vit-pytorch/blob/6f3a5fcf0bca1c5ec33a35ef48d97213709df4ba/vit_pytorch/rvt.py
|
||||
* https://blog.eleuther.ai/rotary-embeddings/
|
||||
"""
|
||||
def __init__(self, dim, max_res=224, linear_bands: bool = False):
|
||||
super().__init__()
|
||||
self.dim = dim
|
||||
self.register_buffer('bands', pixel_freq_bands(dim // 4, max_res, linear_bands=linear_bands), persistent=False)
|
||||
|
||||
if verbose:
|
||||
_logger.info(f'Resized position embedding: {old_size} to {new_size}.')
|
||||
def get_embed(self, shape: List[int]):
|
||||
return build_rotary_pos_embed(shape, self.bands)
|
||||
|
||||
# add back extra (class, etc) prefix tokens
|
||||
if posemb_prefix is not None:
|
||||
print(posemb_prefix.shape, posemb.shape)
|
||||
posemb = torch.cat([posemb_prefix, posemb], dim=1)
|
||||
return posemb
|
||||
def forward(self, x):
|
||||
# assuming channel-first tensor where spatial dim are >= 2
|
||||
sin_emb, cos_emb = self.get_embed(x.shape[2:])
|
||||
return apply_rot_embed(x, sin_emb, cos_emb)
|
||||
|
@ -1,270 +0,0 @@
|
||||
""" Relative position embedding modules and functions
|
||||
|
||||
Hacked together by / Copyright 2022 Ross Wightman
|
||||
"""
|
||||
import math
|
||||
from typing import Optional, Tuple
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
|
||||
from .mlp import Mlp
|
||||
from .weight_init import trunc_normal_
|
||||
|
||||
|
||||
def gen_relative_position_index(
|
||||
q_size: Tuple[int, int],
|
||||
k_size: Tuple[int, int] = None,
|
||||
class_token: bool = False) -> torch.Tensor:
|
||||
# Adapted with significant modifications from Swin / BeiT codebases
|
||||
# get pair-wise relative position index for each token inside the window
|
||||
q_coords = torch.stack(torch.meshgrid([torch.arange(q_size[0]), torch.arange(q_size[1])])).flatten(1) # 2, Wh, Ww
|
||||
if k_size is None:
|
||||
k_coords = q_coords
|
||||
k_size = q_size
|
||||
else:
|
||||
# different q vs k sizes is a WIP
|
||||
k_coords = torch.stack(torch.meshgrid([torch.arange(k_size[0]), torch.arange(k_size[1])])).flatten(1)
|
||||
relative_coords = q_coords[:, :, None] - k_coords[:, None, :] # 2, Wh*Ww, Wh*Ww
|
||||
relative_coords = relative_coords.permute(1, 2, 0) # Wh*Ww, Wh*Ww, 2
|
||||
_, relative_position_index = torch.unique(relative_coords.view(-1, 2), return_inverse=True, dim=0)
|
||||
|
||||
if class_token:
|
||||
# handle cls to token & token 2 cls & cls to cls as per beit for rel pos bias
|
||||
# NOTE not intended or tested with MLP log-coords
|
||||
max_size = (max(q_size[0], k_size[0]), max(q_size[1], k_size[1]))
|
||||
num_relative_distance = (2 * max_size[0] - 1) * (2 * max_size[1] - 1) + 3
|
||||
relative_position_index = F.pad(relative_position_index, [1, 0, 1, 0])
|
||||
relative_position_index[0, 0:] = num_relative_distance - 3
|
||||
relative_position_index[0:, 0] = num_relative_distance - 2
|
||||
relative_position_index[0, 0] = num_relative_distance - 1
|
||||
|
||||
return relative_position_index.contiguous()
|
||||
|
||||
|
||||
class RelPosBias(nn.Module):
|
||||
""" Relative Position Bias
|
||||
Adapted from Swin-V1 relative position bias impl, modularized.
|
||||
"""
|
||||
|
||||
def __init__(self, window_size, num_heads, prefix_tokens=0):
|
||||
super().__init__()
|
||||
assert prefix_tokens <= 1
|
||||
self.window_size = window_size
|
||||
self.window_area = window_size[0] * window_size[1]
|
||||
self.bias_shape = (self.window_area + prefix_tokens,) * 2 + (num_heads,)
|
||||
|
||||
num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 * prefix_tokens
|
||||
self.relative_position_bias_table = nn.Parameter(torch.zeros(num_relative_distance, num_heads))
|
||||
self.register_buffer(
|
||||
"relative_position_index",
|
||||
gen_relative_position_index(self.window_size, class_token=prefix_tokens > 0),
|
||||
persistent=False,
|
||||
)
|
||||
|
||||
self.init_weights()
|
||||
|
||||
def init_weights(self):
|
||||
trunc_normal_(self.relative_position_bias_table, std=.02)
|
||||
|
||||
def get_bias(self) -> torch.Tensor:
|
||||
relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)]
|
||||
# win_h * win_w, win_h * win_w, num_heads
|
||||
relative_position_bias = relative_position_bias.view(self.bias_shape).permute(2, 0, 1)
|
||||
return relative_position_bias.unsqueeze(0).contiguous()
|
||||
|
||||
def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
|
||||
return attn + self.get_bias()
|
||||
|
||||
|
||||
def gen_relative_log_coords(
|
||||
win_size: Tuple[int, int],
|
||||
pretrained_win_size: Tuple[int, int] = (0, 0),
|
||||
mode='swin',
|
||||
):
|
||||
assert mode in ('swin', 'cr')
|
||||
# as per official swin-v2 impl, supporting timm specific 'cr' log coords as well
|
||||
relative_coords_h = torch.arange(-(win_size[0] - 1), win_size[0], dtype=torch.float32)
|
||||
relative_coords_w = torch.arange(-(win_size[1] - 1), win_size[1], dtype=torch.float32)
|
||||
relative_coords_table = torch.stack(torch.meshgrid([relative_coords_h, relative_coords_w]))
|
||||
relative_coords_table = relative_coords_table.permute(1, 2, 0).contiguous() # 2*Wh-1, 2*Ww-1, 2
|
||||
if mode == 'swin':
|
||||
if pretrained_win_size[0] > 0:
|
||||
relative_coords_table[:, :, 0] /= (pretrained_win_size[0] - 1)
|
||||
relative_coords_table[:, :, 1] /= (pretrained_win_size[1] - 1)
|
||||
else:
|
||||
relative_coords_table[:, :, 0] /= (win_size[0] - 1)
|
||||
relative_coords_table[:, :, 1] /= (win_size[1] - 1)
|
||||
relative_coords_table *= 8 # normalize to -8, 8
|
||||
relative_coords_table = torch.sign(relative_coords_table) * torch.log2(
|
||||
1.0 + relative_coords_table.abs()) / math.log2(8)
|
||||
else:
|
||||
# mode == 'cr'
|
||||
relative_coords_table = torch.sign(relative_coords_table) * torch.log(
|
||||
1.0 + relative_coords_table.abs())
|
||||
|
||||
return relative_coords_table
|
||||
|
||||
|
||||
class RelPosMlp(nn.Module):
|
||||
""" Log-Coordinate Relative Position MLP
|
||||
Based on ideas presented in Swin-V2 paper (https://arxiv.org/abs/2111.09883)
|
||||
|
||||
This impl covers the 'swin' implementation as well as two timm specific modes ('cr', and 'rw')
|
||||
"""
|
||||
def __init__(
|
||||
self,
|
||||
window_size,
|
||||
num_heads=8,
|
||||
hidden_dim=128,
|
||||
prefix_tokens=0,
|
||||
mode='cr',
|
||||
pretrained_window_size=(0, 0)
|
||||
):
|
||||
super().__init__()
|
||||
self.window_size = window_size
|
||||
self.window_area = self.window_size[0] * self.window_size[1]
|
||||
self.prefix_tokens = prefix_tokens
|
||||
self.num_heads = num_heads
|
||||
self.bias_shape = (self.window_area,) * 2 + (num_heads,)
|
||||
if mode == 'swin':
|
||||
self.bias_act = nn.Sigmoid()
|
||||
self.bias_gain = 16
|
||||
mlp_bias = (True, False)
|
||||
else:
|
||||
self.bias_act = nn.Identity()
|
||||
self.bias_gain = None
|
||||
mlp_bias = True
|
||||
|
||||
self.mlp = Mlp(
|
||||
2, # x, y
|
||||
hidden_features=hidden_dim,
|
||||
out_features=num_heads,
|
||||
act_layer=nn.ReLU,
|
||||
bias=mlp_bias,
|
||||
drop=(0.125, 0.)
|
||||
)
|
||||
|
||||
self.register_buffer(
|
||||
"relative_position_index",
|
||||
gen_relative_position_index(window_size),
|
||||
persistent=False)
|
||||
|
||||
# get relative_coords_table
|
||||
self.register_buffer(
|
||||
"rel_coords_log",
|
||||
gen_relative_log_coords(window_size, pretrained_window_size, mode=mode),
|
||||
persistent=False)
|
||||
|
||||
def get_bias(self) -> torch.Tensor:
|
||||
relative_position_bias = self.mlp(self.rel_coords_log)
|
||||
if self.relative_position_index is not None:
|
||||
relative_position_bias = relative_position_bias.view(-1, self.num_heads)[
|
||||
self.relative_position_index.view(-1)] # Wh*Ww,Wh*Ww,nH
|
||||
relative_position_bias = relative_position_bias.view(self.bias_shape)
|
||||
relative_position_bias = relative_position_bias.permute(2, 0, 1)
|
||||
relative_position_bias = self.bias_act(relative_position_bias)
|
||||
if self.bias_gain is not None:
|
||||
relative_position_bias = self.bias_gain * relative_position_bias
|
||||
if self.prefix_tokens:
|
||||
relative_position_bias = F.pad(relative_position_bias, [self.prefix_tokens, 0, self.prefix_tokens, 0])
|
||||
return relative_position_bias.unsqueeze(0).contiguous()
|
||||
|
||||
def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
|
||||
return attn + self.get_bias()
|
||||
|
||||
|
||||
def generate_lookup_tensor(
|
||||
length: int,
|
||||
max_relative_position: Optional[int] = None,
|
||||
):
|
||||
"""Generate a one_hot lookup tensor to reindex embeddings along one dimension.
|
||||
|
||||
Args:
|
||||
length: the length to reindex to.
|
||||
max_relative_position: the maximum relative position to consider.
|
||||
Relative position embeddings for distances above this threshold
|
||||
are zeroed out.
|
||||
Returns:
|
||||
a lookup Tensor of size [length, length, vocab_size] that satisfies
|
||||
ret[n,m,v] = 1{m - n + max_relative_position = v}.
|
||||
"""
|
||||
if max_relative_position is None:
|
||||
max_relative_position = length - 1
|
||||
# Build a one-hot lookup over the vocabulary of relative offsets in [-max_relative_position, max_relative_position].
|
||||
vocab_size = 2 * max_relative_position + 1
|
||||
ret = torch.zeros(length, length, vocab_size)
|
||||
for i in range(length):
|
||||
for x in range(length):
|
||||
v = x - i + max_relative_position
|
||||
if abs(x - i) > max_relative_position:
|
||||
continue
|
||||
ret[i, x, v] = 1
|
||||
return ret
|
||||
|
||||
|
||||
def reindex_2d_einsum_lookup(
|
||||
relative_position_tensor,
|
||||
height: int,
|
||||
width: int,
|
||||
height_lookup: torch.Tensor,
|
||||
width_lookup: torch.Tensor,
|
||||
) -> torch.Tensor:
|
||||
"""Reindex 2d relative position bias with 2 independent einsum lookups.
|
||||
|
||||
Adapted from:
|
||||
https://github.com/google-research/maxvit/blob/2e06a7f1f70c76e64cd3dabe5cd1b8c1a23c9fb7/maxvit/models/attention_utils.py
|
||||
|
||||
Args:
|
||||
relative_position_tensor: tensor of shape
|
||||
[..., vocab_height, vocab_width, ...].
|
||||
height: height to reindex to.
|
||||
width: width to reindex to.
|
||||
height_lookup: one-hot height lookup
|
||||
width_lookup: one-hot width lookup
|
||||
Returns:
|
||||
reindexed_tensor: a Tensor of shape
|
||||
[..., height * width, height * width, ...]
|
||||
"""
|
||||
reindexed_tensor = torch.einsum('nhw,ixh->nixw', relative_position_tensor, height_lookup)
|
||||
reindexed_tensor = torch.einsum('nixw,jyw->nijxy', reindexed_tensor, width_lookup)
|
||||
area = height * width
|
||||
return reindexed_tensor.reshape(relative_position_tensor.shape[0], area, area)
|
||||
|
||||
|
||||
class RelPosBiasTf(nn.Module):
|
||||
""" Relative Position Bias Impl (Compatible with Tensorflow MaxViT models)
|
||||
Adapted from:
|
||||
https://github.com/google-research/maxvit/blob/2e06a7f1f70c76e64cd3dabe5cd1b8c1a23c9fb7/maxvit/models/attention_utils.py
|
||||
"""
|
||||
def __init__(self, window_size, num_heads, prefix_tokens=0):
|
||||
super().__init__()
|
||||
assert prefix_tokens <= 1
|
||||
self.window_size = window_size
|
||||
self.window_area = window_size[0] * window_size[1]
|
||||
self.num_heads = num_heads
|
||||
|
||||
vocab_height = 2 * window_size[0] - 1
|
||||
vocab_width = 2 * window_size[1] - 1
|
||||
self.bias_shape = (self.num_heads, vocab_height, vocab_width)
|
||||
self.relative_position_bias_table = nn.Parameter(torch.zeros(self.bias_shape))
|
||||
self.register_buffer('height_lookup', generate_lookup_tensor(window_size[0]), persistent=False)
|
||||
self.register_buffer('width_lookup', generate_lookup_tensor(window_size[1]), persistent=False)
|
||||
self.init_weights()
|
||||
|
||||
def init_weights(self):
|
||||
nn.init.normal_(self.relative_position_bias_table, std=.02)
|
||||
|
||||
def get_bias(self) -> torch.Tensor:
|
||||
# FIXME change to not use one-hot/einsum?
|
||||
return reindex_2d_einsum_lookup(
|
||||
self.relative_position_bias_table,
|
||||
self.window_size[0],
|
||||
self.window_size[1],
|
||||
self.height_lookup,
|
||||
self.width_lookup
|
||||
)
|
||||
|
||||
def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
|
||||
return attn + self.get_bias()
|
@ -1,219 +0,0 @@
|
||||
""" Sin-cos, fourier, rotary position embedding modules and functions
|
||||
|
||||
Hacked together by / Copyright 2022 Ross Wightman
|
||||
"""
|
||||
import math
|
||||
from typing import List, Tuple, Optional, Union
|
||||
|
||||
import torch
|
||||
from torch import nn as nn
|
||||
|
||||
|
||||
def pixel_freq_bands(
|
||||
num_bands: int,
|
||||
max_freq: float = 224.,
|
||||
linear_bands: bool = True,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
):
|
||||
if linear_bands:
|
||||
bands = torch.linspace(1.0, max_freq / 2, num_bands, dtype=dtype, device=device)
|
||||
else:
|
||||
bands = 2 ** torch.linspace(0, math.log(max_freq, 2) - 1, num_bands, dtype=dtype, device=device)
|
||||
return bands * torch.pi
|
||||
|
||||
|
||||
def inv_freq_bands(
|
||||
num_bands: int,
|
||||
temperature: float = 100000.,
|
||||
step: int = 2,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
) -> torch.Tensor:
|
||||
inv_freq = 1. / (temperature ** (torch.arange(0, num_bands, step, dtype=dtype, device=device) / num_bands))
|
||||
return inv_freq
|
||||
|
||||
|
||||
def build_sincos2d_pos_embed(
|
||||
feat_shape: List[int],
|
||||
dim: int = 64,
|
||||
temperature: float = 10000.,
|
||||
reverse_coord: bool = False,
|
||||
interleave_sin_cos: bool = False,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None
|
||||
) -> torch.Tensor:
|
||||
"""
|
||||
|
||||
Args:
|
||||
feat_shape:
|
||||
dim:
|
||||
temperature:
|
||||
reverse_coord: stack grid order W, H instead of H, W
|
||||
interleave_sin_cos: sin, cos, sin, cos stack instead of sin, sin, cos, cos
|
||||
dtype:
|
||||
device:
|
||||
|
||||
Returns:
|
||||
|
||||
"""
|
||||
assert dim % 4 == 0, 'Embed dimension must be divisible by 4 for sin-cos 2D position embedding'
|
||||
pos_dim = dim // 4
|
||||
bands = inv_freq_bands(pos_dim, temperature=temperature, step=1, dtype=dtype, device=device)
|
||||
|
||||
if reverse_coord:
|
||||
feat_shape = feat_shape[::-1] # stack W, H instead of H, W
|
||||
grid = torch.stack(
|
||||
torch.meshgrid([torch.arange(s, device=device, dtype=dtype) for s in feat_shape])).flatten(1).transpose(0, 1)
|
||||
pos2 = grid.unsqueeze(-1) * bands.unsqueeze(0)
|
||||
# FIXME add support for unflattened spatial dim?
|
||||
|
||||
stack_dim = 2 if interleave_sin_cos else 1 # stack sin, cos, sin, cos instead of sin sin cos cos
|
||||
pos_emb = torch.stack([torch.sin(pos2), torch.cos(pos2)], dim=stack_dim).flatten(1)
|
||||
return pos_emb
|
||||
|
||||
|
||||
def build_fourier_pos_embed(
|
||||
feat_shape: List[int],
|
||||
bands: Optional[torch.Tensor] = None,
|
||||
num_bands: int = 64,
|
||||
max_res: int = 224,
|
||||
linear_bands: bool = False,
|
||||
include_grid: bool = False,
|
||||
concat_out: bool = True,
|
||||
in_pixels: bool = True,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
) -> List[torch.Tensor]:
|
||||
if bands is None:
|
||||
if in_pixels:
|
||||
bands = pixel_freq_bands(num_bands, float(max_res), linear_bands=linear_bands, dtype=dtype, device=device)
|
||||
else:
|
||||
bands = inv_freq_bands(num_bands, step=1, dtype=dtype, device=device)
|
||||
else:
|
||||
if device is None:
|
||||
device = bands.device
|
||||
if dtype is None:
|
||||
dtype = bands.dtype
|
||||
|
||||
if in_pixels:
|
||||
grid = torch.stack(torch.meshgrid(
|
||||
[torch.linspace(-1., 1., steps=s, device=device, dtype=dtype) for s in feat_shape]), dim=-1)
|
||||
else:
|
||||
grid = torch.stack(torch.meshgrid(
|
||||
[torch.arange(s, device=device, dtype=dtype) for s in feat_shape]), dim=-1)
|
||||
grid = grid.unsqueeze(-1)
|
||||
pos = grid * bands
|
||||
|
||||
pos_sin, pos_cos = pos.sin(), pos.cos()
|
||||
out = (grid, pos_sin, pos_cos) if include_grid else (pos_sin, pos_cos)
|
||||
# FIXME torchscript doesn't like multiple return types, probably need to always cat?
|
||||
if concat_out:
|
||||
out = torch.cat(out, dim=-1)
|
||||
return out
|
||||
|
||||
|
||||
class FourierEmbed(nn.Module):
|
||||
|
||||
def __init__(self, max_res: int = 224, num_bands: int = 64, concat_grid=True, keep_spatial=False):
|
||||
super().__init__()
|
||||
self.max_res = max_res
|
||||
self.num_bands = num_bands
|
||||
self.concat_grid = concat_grid
|
||||
self.keep_spatial = keep_spatial
|
||||
self.register_buffer('bands', pixel_freq_bands(max_res, num_bands), persistent=False)
|
||||
|
||||
def forward(self, x):
|
||||
B, C = x.shape[:2]
|
||||
feat_shape = x.shape[2:]
|
||||
emb = build_fourier_pos_embed(
|
||||
feat_shape,
|
||||
self.bands,
|
||||
include_grid=self.concat_grid,
|
||||
dtype=x.dtype,
|
||||
device=x.device)
|
||||
emb = emb.transpose(-1, -2).flatten(len(feat_shape))
|
||||
batch_expand = (B,) + (-1,) * (x.ndim - 1)
|
||||
|
||||
# FIXME support nD
|
||||
if self.keep_spatial:
|
||||
x = torch.cat([x, emb.unsqueeze(0).expand(batch_expand).permute(0, 3, 1, 2)], dim=1)
|
||||
else:
|
||||
x = torch.cat([x.permute(0, 2, 3, 1), emb.unsqueeze(0).expand(batch_expand)], dim=-1)
|
||||
x = x.reshape(B, feat_shape.numel(), -1)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
def rot(x):
|
||||
return torch.stack([-x[..., 1::2], x[..., ::2]], -1).reshape(x.shape)
|
||||
|
||||
|
||||
def apply_rot_embed(x: torch.Tensor, sin_emb, cos_emb):
|
||||
return x * cos_emb + rot(x) * sin_emb
|
||||
|
||||
|
||||
def apply_rot_embed_list(x: List[torch.Tensor], sin_emb, cos_emb):
|
||||
if isinstance(x, torch.Tensor):
|
||||
x = [x]
|
||||
return [t * cos_emb + rot(t) * sin_emb for t in x]
|
||||
|
||||
|
||||
def apply_rot_embed_split(x: torch.Tensor, emb):
|
||||
split = emb.shape[-1] // 2
|
||||
return x * emb[:, :split] + rot(x) * emb[:, split:]
|
||||
|
||||
|
||||
def build_rotary_pos_embed(
|
||||
feat_shape: List[int],
|
||||
bands: Optional[torch.Tensor] = None,
|
||||
dim: int = 64,
|
||||
max_freq: float = 224,
|
||||
linear_bands: bool = False,
|
||||
dtype: torch.dtype = torch.float32,
|
||||
device: Optional[torch.device] = None,
|
||||
):
|
||||
"""
|
||||
NOTE: shape arg should include spatial dim only
|
||||
"""
|
||||
feat_shape = torch.Size(feat_shape)
|
||||
|
||||
sin_emb, cos_emb = build_fourier_pos_embed(
|
||||
feat_shape,
|
||||
bands=bands,
|
||||
num_bands=dim // 4,
|
||||
max_res=max_freq,
|
||||
linear_bands=linear_bands,
|
||||
concat_out=False,
|
||||
device=device,
|
||||
dtype=dtype,
|
||||
)
|
||||
N = feat_shape.numel()
|
||||
sin_emb = sin_emb.reshape(N, -1).repeat_interleave(2, -1)
|
||||
cos_emb = cos_emb.reshape(N, -1).repeat_interleave(2, -1)
|
||||
return sin_emb, cos_emb
|
||||
|
||||
|
||||
class RotaryEmbedding(nn.Module):
|
||||
""" Rotary position embedding
|
||||
|
||||
NOTE: This is my initial attempt at impl rotary embedding for spatial use, it has not
|
||||
been well tested, and will likely change. It will be moved to its own file.
|
||||
|
||||
The following impl/resources were referenced for this impl:
|
||||
* https://github.com/lucidrains/vit-pytorch/blob/6f3a5fcf0bca1c5ec33a35ef48d97213709df4ba/vit_pytorch/rvt.py
|
||||
* https://blog.eleuther.ai/rotary-embeddings/
|
||||
"""
|
||||
|
||||
def __init__(self, dim, max_res=224, linear_bands: bool = False):
|
||||
super().__init__()
|
||||
self.dim = dim
|
||||
self.register_buffer('bands', pixel_freq_bands(dim // 4, max_res, linear_bands=linear_bands), persistent=False)
|
||||
|
||||
def get_embed(self, shape: List[int]):
|
||||
return build_rotary_pos_embed(shape, self.bands)
|
||||
|
||||
def forward(self, x):
|
||||
# assuming channel-first tensor where spatial dim are >= 2
|
||||
sin_emb, cos_emb = self.get_embed(x.shape[2:])
|
||||
return apply_rot_embed(x, sin_emb, cos_emb)
|
@ -1,686 +0,0 @@
|
||||
""" DaViT: Dual Attention Vision Transformers
|
||||
|
||||
As described in https://arxiv.org/abs/2204.03645
|
||||
|
||||
Input size invariant transformer architecture that combines channel and spatial
|
||||
attention in each block. The attention mechanisms used are linear in complexity.
|
||||
|
||||
DaViT model defs and weights adapted from https://github.com/dingmyu/davit, original copyright below
|
||||
|
||||
"""
|
||||
# Copyright (c) 2022 Mingyu Ding
|
||||
# All rights reserved.
|
||||
# This source code is licensed under the MIT license
|
||||
import itertools
|
||||
from collections import OrderedDict
|
||||
from functools import partial
|
||||
from typing import Tuple
|
||||
|
||||
import torch
|
||||
import torch.nn as nn
|
||||
import torch.nn.functional as F
|
||||
from torch import Tensor
|
||||
|
||||
from timm.data import IMAGENET_DEFAULT_MEAN, IMAGENET_DEFAULT_STD
|
||||
from timm.layers import DropPath, to_2tuple, trunc_normal_, SelectAdaptivePool2d, Mlp, LayerNorm2d, get_norm_layer
|
||||
from timm.layers import NormMlpClassifierHead, ClassifierHead
|
||||
from ._builder import build_model_with_cfg
|
||||
from ._features_fx import register_notrace_function
|
||||
from ._manipulate import checkpoint_seq
|
||||
from ._pretrained import generate_default_cfgs
|
||||
from ._registry import register_model
|
||||
|
||||
__all__ = ['DaViT']
|
||||
|
||||
|
||||
class ConvPosEnc(nn.Module):
|
||||
def __init__(self, dim: int, k: int = 3, act: bool = False):
|
||||
super(ConvPosEnc, self).__init__()
|
||||
|
||||
self.proj = nn.Conv2d(dim, dim, k, 1, k // 2, groups=dim)
|
||||
self.act = nn.GELU() if act else nn.Identity()
|
||||
|
||||
def forward(self, x: Tensor):
|
||||
feat = self.proj(x)
|
||||
x = x + self.act(feat)
|
||||
return x
|
||||
|
||||
|
||||
class Stem(nn.Module):
|
||||
""" Size-agnostic implementation of 2D image to patch embedding,
|
||||
allowing input size to be adjusted during model forward operation
|
||||
"""
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
in_chs=3,
|
||||
out_chs=96,
|
||||
stride=4,
|
||||
norm_layer=LayerNorm2d,
|
||||
):
|
||||
super().__init__()
|
||||
stride = to_2tuple(stride)
|
||||
self.stride = stride
|
||||
self.in_chs = in_chs
|
||||
self.out_chs = out_chs
|
||||
assert stride[0] == 4 # only setup for stride==4
|
||||
self.conv = nn.Conv2d(
|
||||
in_chs,
|
||||
out_chs,
|
||||
kernel_size=7,
|
||||
stride=stride,
|
||||
padding=3,
|
||||
)
|
||||
self.norm = norm_layer(out_chs)
|
||||
|
||||
def forward(self, x: Tensor):
|
||||
B, C, H, W = x.shape
|
||||
x = F.pad(x, (0, (self.stride[1] - W % self.stride[1]) % self.stride[1]))
|
||||
x = F.pad(x, (0, 0, 0, (self.stride[0] - H % self.stride[0]) % self.stride[0]))
|
||||
x = self.conv(x)
|
||||
x = self.norm(x)
|
||||
return x
|
||||
|
||||
|
||||
class Downsample(nn.Module):
|
||||
def __init__(
|
||||
self,
|
||||
in_chs,
|
||||
out_chs,
|
||||
norm_layer=LayerNorm2d,
|
||||
):
|
||||
super().__init__()
|
||||
self.in_chs = in_chs
|
||||
self.out_chs = out_chs
|
||||
|
||||
self.norm = norm_layer(in_chs)
|
||||
self.conv = nn.Conv2d(
|
||||
in_chs,
|
||||
out_chs,
|
||||
kernel_size=2,
|
||||
stride=2,
|
||||
padding=0,
|
||||
)
|
||||
|
||||
def forward(self, x: Tensor):
|
||||
B, C, H, W = x.shape
|
||||
x = self.norm(x)
|
||||
x = F.pad(x, (0, (2 - W % 2) % 2))
|
||||
x = F.pad(x, (0, 0, 0, (2 - H % 2) % 2))
|
||||
x = self.conv(x)
|
||||
return x
|
||||
|
||||
|
||||
class ChannelAttention(nn.Module):
|
||||
|
||||
def __init__(self, dim, num_heads=8, qkv_bias=False):
|
||||
super().__init__()
|
||||
self.num_heads = num_heads
|
||||
head_dim = dim // num_heads
|
||||
self.scale = head_dim ** -0.5
|
||||
|
||||
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
|
||||
self.proj = nn.Linear(dim, dim)
|
||||
|
||||
def forward(self, x: Tensor):
|
||||
B, N, C = x.shape
|
||||
|
||||
qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
|
||||
q, k, v = qkv.unbind(0)
|
||||
|
||||
k = k * self.scale
|
||||
attention = k.transpose(-1, -2) @ v
|
||||
attention = attention.softmax(dim=-1)
|
||||
x = (attention @ q.transpose(-1, -2)).transpose(-1, -2)
|
||||
x = x.transpose(1, 2).reshape(B, N, C)
|
||||
x = self.proj(x)
|
||||
return x
|
||||
|
||||
|
||||
class ChannelBlock(nn.Module):
|
||||
|
||||
def __init__(
|
||||
self,
|
||||
dim,
|
||||
num_heads,
|
||||
mlp_ratio=4.,
|
||||
qkv_bias=False,
|
||||
drop_path=0.,
|
||||
act_layer=nn.GELU,
|
||||
norm_layer=nn.LayerNorm,
|
||||
ffn=True,
|
||||
cpe_act=False,
|
||||
):
|
||||
super().__init__()
|
||||
|
||||
self.cpe1 = ConvPosEnc(dim=dim, k=3, act=cpe_act)
|
||||
self.ffn = ffn
|
||||
self.norm1 = norm_layer(dim)
|
||||
self.attn = ChannelAttention(dim, num_heads=num_heads, qkv_bias=qkv_bias)
|
||||
self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
|
||||
self.cpe2 = ConvPosEnc(dim=dim, k=3, act=cpe_act)
|
||||
|
||||
if self.ffn:
|
||||
self.norm2 = norm_layer(dim)
|
||||
self.mlp = Mlp(
|
||||
in_features=dim,
|
||||
hidden_features=int(dim * mlp_ratio),
|
||||
act_layer=act_layer,
|
||||
)
|
||||
self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
|
||||
else:
|
||||
self.norm2 = None
|
||||
self.mlp = None
|
||||
self.drop_path2 = None
|
||||
|
||||
def forward(self, x: Tensor):
|
||||
B, C, H, W = x.shape
|
||||
|
||||
x = self.cpe1(x).flatten(2).transpose(1, 2)
|
||||
|
||||
cur = self.norm1(x)
|
||||
cur = self.attn(cur)
|
||||
x = x + self.drop_path1(cur)
|
||||
|
||||
x = self.cpe2(x.transpose(1, 2).view(B, C, H, W))
|
||||
|
||||
if self.mlp is not None:
|
||||
x = x.flatten(2).transpose(1, 2)
|
||||
x = x + self.drop_path2(self.mlp(self.norm2(x)))
|
||||
x = x.transpose(1, 2).view(B, C, H, W)
|
||||
|
||||
return x
|
||||
|
||||
|
||||
def window_partition(x: Tensor, window_size: Tuple[int, int]):
|
||||
"""
|
||||
Args:
|
||||
x: (B, H, W, C)
|
||||
window_size (int): window size
|
||||
Returns:
|
||||
windows: (num_windows*B, window_size, window_size, C)
|
||||
"""
|
||||
B, H, W, C = x.shape
|
||||
x = x.view(B, H // window_size[0], window_size[0], W // window_size[1], window_size[1], C)
|
||||
windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size[0], window_size[1], C)
|
||||
return windows
|
||||
|
||||
|
||||
@register_notrace_function # reason: int argument is a Proxy
|
||||
def window_reverse(windows: Tensor, window_size: Tuple[int, int], H: int, W: int):
|
||||
"""
|
||||
Args:
|
||||
windows: (num_windows*B, window_size, window_size, C)
|
||||
window_size (int): Window size
|
||||
H (int): Height of image
|
||||
W (int): Width of image
|
||||
Returns:
|
||||
x: (B, H, W, C)
|
||||
"""
|
||||
C = windows.shape[-1]
|
||||
x = windows.view(-1, H // window_size[0], W // window_size[1], window_size[0], window_size[1], C)
|
||||
x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, H, W, C)
|
||||
return x
|
||||
|
||||
|
||||
class WindowAttention(nn.Module):
|
||||
r""" Window based multi-head self attention (W-MSA) module with relative position bias.
|
||||
It supports both of shifted and non-shifted window.
|
||||
Args:
|
||||
dim (int): Number of input channels.
|
||||
window_size (tuple[int]): The height and width of the window.
|
||||
num_heads (int): Number of attention heads.
|
||||
qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
|
||||
"""
|
||||
|
||||
def __init__(self, dim, window_size, num_heads, qkv_bias=True):
|
||||
super().__init__()
|
||||
self.dim = dim
|
||||
self.window_size = window_size
|
||||
self.num_heads = num_heads
|
||||
head_dim = dim // num_heads
|
||||
self.scale = head_dim ** -0.5
|
||||
|
||||
self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
|
||||
self.proj = nn.Linear(dim, dim)
|
||||
|
||||
self.softmax = nn.Softmax(dim=-1)
|
||||
|
||||
def forward(self, x: Tensor):
|
||||
B_, N, C = x.shape
|
||||
|
||||
qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
|
||||
q, k, v = qkv.unbind(0)
|
||||
|
||||
q = q * self.scale
|
||||
attn = (q @ k.transpose(-2, -1))
|
||||
attn = self.softmax(attn)
|
||||
|
||||
x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
|
||||
x = self.proj(x)
|
||||
return x
|
||||
|
||||
|
||||
class SpatialBlock(nn.Module):
    r""" Windows Block.
    Args:
        dim (int): Number of input channels.
        num_heads (int): Number of attention heads.
        window_size (int): Window size.
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
        qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
        drop_path (float, optional): Stochastic depth rate. Default: 0.0
        act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
        norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
    """

    def __init__(
            self,
            dim,
            num_heads,
            window_size=7,
            mlp_ratio=4.,
            qkv_bias=True,
            drop_path=0.,
            act_layer=nn.GELU,
            norm_layer=nn.LayerNorm,
            ffn=True,
            cpe_act=False,
    ):
        super().__init__()
        self.dim = dim
        self.ffn = ffn
        self.num_heads = num_heads
        self.window_size = to_2tuple(window_size)
        self.mlp_ratio = mlp_ratio

        self.cpe1 = ConvPosEnc(dim=dim, k=3, act=cpe_act)
        self.norm1 = norm_layer(dim)
        self.attn = WindowAttention(
            dim,
            self.window_size,
            num_heads=num_heads,
            qkv_bias=qkv_bias,
        )
        self.drop_path1 = DropPath(drop_path) if drop_path > 0. else nn.Identity()

        self.cpe2 = ConvPosEnc(dim=dim, k=3, act=cpe_act)
        if self.ffn:
            self.norm2 = norm_layer(dim)
            mlp_hidden_dim = int(dim * mlp_ratio)
            self.mlp = Mlp(
                in_features=dim,
                hidden_features=mlp_hidden_dim,
                act_layer=act_layer,
            )
            self.drop_path2 = DropPath(drop_path) if drop_path > 0. else nn.Identity()
        else:
            self.norm2 = None
            self.mlp = None
            self.drop_path2 = None

    def forward(self, x: Tensor):
        B, C, H, W = x.shape

        shortcut = self.cpe1(x).flatten(2).transpose(1, 2)

        x = self.norm1(shortcut)
        x = x.view(B, H, W, C)

        pad_l = pad_t = 0
        pad_r = (self.window_size[1] - W % self.window_size[1]) % self.window_size[1]
        pad_b = (self.window_size[0] - H % self.window_size[0]) % self.window_size[0]
        x = F.pad(x, (0, 0, pad_l, pad_r, pad_t, pad_b))
        _, Hp, Wp, _ = x.shape

        x_windows = window_partition(x, self.window_size)
        x_windows = x_windows.view(-1, self.window_size[0] * self.window_size[1], C)

        # W-MSA/SW-MSA
        attn_windows = self.attn(x_windows)

        # merge windows
        attn_windows = attn_windows.view(-1, self.window_size[0], self.window_size[1], C)
        x = window_reverse(attn_windows, self.window_size, Hp, Wp)

        # if pad_r > 0 or pad_b > 0:
        x = x[:, :H, :W, :].contiguous()

        x = x.view(B, H * W, C)
        x = shortcut + self.drop_path1(x)

        x = self.cpe2(x.transpose(1, 2).view(B, C, H, W))

        if self.mlp is not None:
            x = x.flatten(2).transpose(1, 2)
            x = x + self.drop_path2(self.mlp(self.norm2(x)))
            x = x.transpose(1, 2).view(B, C, H, W)

        return x

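# Added illustrative note (not from the upstream DaViT source): the padding arithmetic in
# SpatialBlock.forward rounds the feature map up to a multiple of the window size. For example,
# with window_size=7 and a 50x50 map, pad_r = (7 - 50 % 7) % 7 = 6 and pad_b = 6, so the map is
# padded to 56x56, window_partition yields (56 // 7) * (56 // 7) = 64 windows of 7 * 7 = 49 tokens
# each, and window_reverse plus the x[:, :H, :W, :] slice restore the original 50x50 resolution.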
class DaViTStage(nn.Module):
    def __init__(
            self,
            in_chs,
            out_chs,
            depth=1,
            downsample=True,
            attn_types=('spatial', 'channel'),
            num_heads=3,
            window_size=7,
            mlp_ratio=4,
            qkv_bias=True,
            drop_path_rates=(0, 0),
            norm_layer=LayerNorm2d,
            norm_layer_cl=nn.LayerNorm,
            ffn=True,
            cpe_act=False
    ):
        super().__init__()

        self.grad_checkpointing = False

        # downsample embedding layer at the beginning of each stage
        if downsample:
            self.downsample = Downsample(in_chs, out_chs, norm_layer=norm_layer)
        else:
            self.downsample = nn.Identity()

        '''
        repeating alternating attention blocks in each stage
        default: (spatial -> channel) x depth

        potential opportunity to integrate with a more general version of ByobNet/ByoaNet
        since the logic is similar
        '''
        stage_blocks = []
        for block_idx in range(depth):
            dual_attention_block = []
            for attn_idx, attn_type in enumerate(attn_types):
                if attn_type == 'spatial':
                    dual_attention_block.append(SpatialBlock(
                        dim=out_chs,
                        num_heads=num_heads,
                        mlp_ratio=mlp_ratio,
                        qkv_bias=qkv_bias,
                        drop_path=drop_path_rates[block_idx],
                        norm_layer=norm_layer_cl,
                        ffn=ffn,
                        cpe_act=cpe_act,
                        window_size=window_size,
                    ))
                elif attn_type == 'channel':
                    dual_attention_block.append(ChannelBlock(
                        dim=out_chs,
                        num_heads=num_heads,
                        mlp_ratio=mlp_ratio,
                        qkv_bias=qkv_bias,
                        drop_path=drop_path_rates[block_idx],
                        norm_layer=norm_layer_cl,
                        ffn=ffn,
                        cpe_act=cpe_act
                    ))
            stage_blocks.append(nn.Sequential(*dual_attention_block))
        self.blocks = nn.Sequential(*stage_blocks)

    @torch.jit.ignore
    def set_grad_checkpointing(self, enable=True):
        self.grad_checkpointing = enable

    def forward(self, x: Tensor):
        x = self.downsample(x)
        if self.grad_checkpointing and not torch.jit.is_scripting():
            x = checkpoint_seq(self.blocks, x)
        else:
            x = self.blocks(x)
        return x

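# Added illustrative note (not from the upstream DaViT source): with the default
# attn_types=('spatial', 'channel') and depth=3, DaViTStage builds self.blocks as a Sequential of
# three dual-attention pairs, i.e. SpatialBlock -> ChannelBlock repeated three times, all operating
# at the stage's output channel width.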
class DaViT(nn.Module):
    r""" DaViT
    A PyTorch implementation of `DaViT: Dual Attention Vision Transformers` - https://arxiv.org/abs/2204.03645
    Supports arbitrary input sizes and pyramid feature extraction

    Args:
        in_chans (int): Number of input image channels. Default: 3
        num_classes (int): Number of classes for classification head. Default: 1000
        depths (tuple(int)): Number of blocks in each stage. Default: (1, 1, 3, 1)
        embed_dims (tuple(int)): Patch embedding dimension. Default: (96, 192, 384, 768)
        num_heads (tuple(int)): Number of attention heads in different layers. Default: (3, 6, 12, 24)
        window_size (int): Window size. Default: 7
        mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
        qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
        drop_path_rate (float): Stochastic depth rate. Default: 0.1
        norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
    """

    def __init__(
            self,
            in_chans=3,
            depths=(1, 1, 3, 1),
            embed_dims=(96, 192, 384, 768),
            num_heads=(3, 6, 12, 24),
            window_size=7,
            mlp_ratio=4,
            qkv_bias=True,
            norm_layer='layernorm2d',
            norm_layer_cl='layernorm',
            norm_eps=1e-5,
            attn_types=('spatial', 'channel'),
            ffn=True,
            cpe_act=False,
            drop_rate=0.,
            attn_drop_rate=0.,
            drop_path_rate=0.,
            num_classes=1000,
            global_pool='avg',
            head_norm_first=False,
    ):
        super().__init__()
        num_stages = len(embed_dims)
        assert num_stages == len(num_heads) == len(depths)
        norm_layer = partial(get_norm_layer(norm_layer), eps=norm_eps)
        norm_layer_cl = partial(get_norm_layer(norm_layer_cl), eps=norm_eps)
        self.num_classes = num_classes
        self.num_features = embed_dims[-1]
        self.drop_rate = drop_rate
        self.grad_checkpointing = False
        self.feature_info = []

        self.stem = Stem(in_chans, embed_dims[0], norm_layer=norm_layer)
        in_chs = embed_dims[0]

        dpr = [x.tolist() for x in torch.linspace(0, drop_path_rate, sum(depths)).split(depths)]
        stages = []
        for stage_idx in range(num_stages):
            out_chs = embed_dims[stage_idx]
            stage = DaViTStage(
                in_chs,
                out_chs,
                depth=depths[stage_idx],
                downsample=stage_idx > 0,
                attn_types=attn_types,
                num_heads=num_heads[stage_idx],
                window_size=window_size,
                mlp_ratio=mlp_ratio,
                qkv_bias=qkv_bias,
                drop_path_rates=dpr[stage_idx],
                norm_layer=norm_layer,
                norm_layer_cl=norm_layer_cl,
                ffn=ffn,
                cpe_act=cpe_act,
            )
            in_chs = out_chs
            stages.append(stage)
            self.feature_info += [dict(num_chs=out_chs, reduction=2, module=f'stages.{stage_idx}')]

        self.stages = nn.Sequential(*stages)

        # if head_norm_first == true, norm -> global pool -> fc ordering, like most other nets
        # otherwise pool -> norm -> fc, the default DaViT order, similar to ConvNeXt
        # FIXME generalize this structure to ClassifierHead
        if head_norm_first:
            self.norm_pre = norm_layer(self.num_features)
            self.head = ClassifierHead(
                self.num_features,
                num_classes,
                pool_type=global_pool,
                drop_rate=self.drop_rate,
            )
        else:
            self.norm_pre = nn.Identity()
            self.head = NormMlpClassifierHead(
                self.num_features,
                num_classes,
                pool_type=global_pool,
                drop_rate=self.drop_rate,
                norm_layer=norm_layer,
            )
        self.apply(self._init_weights)

    def _init_weights(self, m):
        if isinstance(m, nn.Linear):
            trunc_normal_(m.weight, std=.02)
            if isinstance(m, nn.Linear) and m.bias is not None:
                nn.init.constant_(m.bias, 0)

    @torch.jit.ignore
    def set_grad_checkpointing(self, enable=True):
        self.grad_checkpointing = enable
        for stage in self.stages:
            stage.set_grad_checkpointing(enable=enable)

    @torch.jit.ignore
    def get_classifier(self):
        return self.head.fc

    def reset_classifier(self, num_classes, global_pool=None):
        self.head.reset(num_classes, global_pool=global_pool)

    def forward_features(self, x):
        x = self.stem(x)
        if self.grad_checkpointing and not torch.jit.is_scripting():
            x = checkpoint_seq(self.stages, x)
        else:
            x = self.stages(x)
        x = self.norm_pre(x)
        return x

    def forward_head(self, x, pre_logits: bool = False):
        x = self.head.global_pool(x)
        x = self.head.norm(x)
        x = self.head.flatten(x)
        x = self.head.drop(x)
        return x if pre_logits else self.head.fc(x)

    def forward(self, x):
        x = self.forward_features(x)
        x = self.forward_head(x)
        return x

def checkpoint_filter_fn(state_dict, model):
    """ Remap MSFT checkpoints -> timm """
    if 'head.fc.weight' in state_dict:
        return state_dict  # non-MSFT checkpoint

    if 'state_dict' in state_dict:
        state_dict = state_dict['state_dict']

    import re
    out_dict = {}
    for k, v in state_dict.items():
        k = re.sub(r'patch_embeds.([0-9]+)', r'stages.\1.downsample', k)
        k = re.sub(r'main_blocks.([0-9]+)', r'stages.\1.blocks', k)
        k = k.replace('downsample.proj', 'downsample.conv')
        k = k.replace('stages.0.downsample', 'stem')
        k = k.replace('head.', 'head.fc.')
        k = k.replace('norms.', 'head.norm.')
        k = k.replace('cpe.0', 'cpe1')
        k = k.replace('cpe.1', 'cpe2')
        out_dict[k] = v
    return out_dict

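# Added illustrative note (not from the upstream DaViT source), tracing the remapping above on a
# few hypothetical MSFT-style checkpoint keys:
#   'patch_embeds.0.proj.weight'          -> 'stem.conv.weight'
#   'patch_embeds.2.proj.weight'          -> 'stages.2.downsample.conv.weight'
#   'main_blocks.1.0.0.cpe.0.proj.weight' -> 'stages.1.blocks.0.0.cpe1.proj.weight'
#   'head.weight'                         -> 'head.fc.weight'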
def _create_davit(variant, pretrained=False, **kwargs):
    default_out_indices = tuple(i for i, _ in enumerate(kwargs.get('depths', (1, 1, 3, 1))))
    out_indices = kwargs.pop('out_indices', default_out_indices)

    model = build_model_with_cfg(
        DaViT,
        variant,
        pretrained,
        pretrained_filter_fn=checkpoint_filter_fn,
        feature_cfg=dict(flatten_sequential=True, out_indices=out_indices),
        **kwargs)

    return model

def _cfg(url='', **kwargs):
    return {
        'url': url,
        'num_classes': 1000, 'input_size': (3, 224, 224), 'pool_size': (7, 7),
        'crop_pct': 0.95, 'interpolation': 'bicubic',
        'mean': IMAGENET_DEFAULT_MEAN, 'std': IMAGENET_DEFAULT_STD,
        'first_conv': 'stem.conv', 'classifier': 'head.fc',
        **kwargs
    }

# TODO contact authors to get larger pretrained models
default_cfgs = generate_default_cfgs({
    # official microsoft weights from https://github.com/dingmyu/davit
    'davit_tiny.msft_in1k': _cfg(
        hf_hub_id='timm/'),
    'davit_small.msft_in1k': _cfg(
        hf_hub_id='timm/'),
    'davit_base.msft_in1k': _cfg(
        hf_hub_id='timm/'),
    'davit_large': _cfg(),
    'davit_huge': _cfg(),
    'davit_giant': _cfg(),
})

@register_model
def davit_tiny(pretrained=False, **kwargs):
    model_kwargs = dict(
        depths=(1, 1, 3, 1), embed_dims=(96, 192, 384, 768), num_heads=(3, 6, 12, 24), **kwargs)
    return _create_davit('davit_tiny', pretrained=pretrained, **model_kwargs)


@register_model
def davit_small(pretrained=False, **kwargs):
    model_kwargs = dict(
        depths=(1, 1, 9, 1), embed_dims=(96, 192, 384, 768), num_heads=(3, 6, 12, 24), **kwargs)
    return _create_davit('davit_small', pretrained=pretrained, **model_kwargs)


@register_model
def davit_base(pretrained=False, **kwargs):
    model_kwargs = dict(
        depths=(1, 1, 9, 1), embed_dims=(128, 256, 512, 1024), num_heads=(4, 8, 16, 32), **kwargs)
    return _create_davit('davit_base', pretrained=pretrained, **model_kwargs)


@register_model
def davit_large(pretrained=False, **kwargs):
    model_kwargs = dict(
        depths=(1, 1, 9, 1), embed_dims=(192, 384, 768, 1536), num_heads=(6, 12, 24, 48), **kwargs)
    return _create_davit('davit_large', pretrained=pretrained, **model_kwargs)


@register_model
def davit_huge(pretrained=False, **kwargs):
    model_kwargs = dict(
        depths=(1, 1, 9, 1), embed_dims=(256, 512, 1024, 2048), num_heads=(8, 16, 32, 64), **kwargs)
    return _create_davit('davit_huge', pretrained=pretrained, **model_kwargs)


@register_model
def davit_giant(pretrained=False, **kwargs):
    model_kwargs = dict(
        depths=(1, 1, 12, 3), embed_dims=(384, 768, 1536, 3072), num_heads=(12, 24, 48, 96), **kwargs)
    return _create_davit('davit_giant', pretrained=pretrained, **model_kwargs)

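# Minimal usage sketch added for illustration (not part of the upstream module): builds an
# untrained davit_tiny and runs a dummy batch through it; with the default 224x224 config the
# classifier output is (1, 1000) and the final feature map from forward_features is (1, 768, 7, 7).
if __name__ == '__main__':
    _model = davit_tiny(pretrained=False)
    _img = torch.randn(1, 3, 224, 224)
    _logits = _model(_img)
    _features = _model.forward_features(_img)
    print(_logits.shape, _features.shape)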