Compare commits
7 Commits
| Author | SHA1 | Date |
|---|---|---|
| Ross Wightman | 9ecab46e18 | 1 year ago |
| Ross Wightman | b64acf5386 | 1 year ago |
| Claudiu Leoveanu | a4823653b9 | 1 year ago |
| Wauplin | ce4d3485b6 | 1 year ago |
| Wauplin | c037db00fc | 1 year ago |
| Ross Wightman | 3ec5918a74 | 1 year ago |
| Ross Wightman | 3044b1a1cb | 1 year ago |
@ -1,112 +0,0 @@
*This guideline is very much a work-in-progress.*

Contributions to `timm` for code, documentation, and tests are more than welcome!

There haven't been any formal guidelines to date, so please bear with me, and feel free to add to this guide.

# Coding style

Code linting and auto-format (black) are not currently in place, but are open to consideration. In the meantime, the style to follow is (mostly) aligned with Google's guide: https://google.github.io/styleguide/pyguide.html.

A few specific differences from Google style (or black):

1. Line length is 120 char. Going over is okay in some cases (e.g. I prefer not to break URLs across lines).

2. Hanging indents are always preferred, please avoid aligning arguments with closing brackets or braces.

Example, from the Google guide, but this is a NO here:

```
# Aligned with opening delimiter.
foo = long_function_name(var_one, var_two,
                         var_three, var_four)
meal = (spam,
        beans)

# Aligned with opening delimiter in a dictionary.
foo = {
    'long_dictionary_key': value1 +
                           value2,
    ...
}
```

This is YES:

```
# 4-space hanging indent; nothing on first line,
# closing parenthesis on a new line.
foo = long_function_name(
    var_one, var_two, var_three,
    var_four
)
meal = (
    spam,
    beans,
)

# 4-space hanging indent in a dictionary.
foo = {
    'long_dictionary_key':
        long_dictionary_value,
    ...
}
```

When there is a discrepancy in a given source file (there are many origins for various bits of code, and not all have been updated to what I consider the current goal), please follow the style in that file.

In general, if you add new code, formatting it with black using the following options should result in a style that is compatible with the rest of the code base:

```
black --skip-string-normalization --line-length 120 <path-to-file>
```

Avoid formatting code that is unrelated to your PR though.

PRs with pure formatting / style fixes will be accepted, but only in isolation from functional changes; it's best to ask before starting such a change.

# Documentation

As with code style, docstring style is based on the Google guide: https://google.github.io/styleguide/pyguide.html

The goal for the code is to eventually move to have all major functions and `__init__` methods use PEP484 type annotations.

When type annotations are used for a function, as per the Google pyguide, they should **NOT** be duplicated in the docstrings; please leave annotations as the one source of truth re: typing.
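As a minimal sketch of what this looks like in practice (the function name and its arguments here are hypothetical, chosen only to show the docstring shape):

```
from typing import Tuple

import torch


def resize_pos_embed(pos_embed: torch.Tensor, grid_size: Tuple[int, int]) -> torch.Tensor:
    """Resize a positional embedding to a new grid size.

    Args:
        pos_embed: Positional embedding tensor to resize.
        grid_size: Target (height, width) of the embedding grid.

    Returns:
        The resized positional embedding.
    """
    # Types live only in the annotations above; the docstring describes
    # meaning, not types.
    ...
```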
There are a LOT of gaps in the current documentation relative to the functionality in `timm`, so please, document away!

# Installation

Create a Python virtual environment using Python 3.10. Inside the environment, install the following test dependencies:

```
python -m pip install pytest pytest-timeout pytest-xdist pytest-forked expecttest
```

Install `torch` and `torchvision` using the instructions matching your system as listed on the [PyTorch website](https://pytorch.org/).

Then install the remaining dependencies:

```
python -m pip install -r requirements.txt
python -m pip install --no-cache-dir git+https://github.com/mapillary/inplace_abn.git
python -m pip install -e .
```

## Unit tests

Run the tests using:

```
pytest tests/
```

Since the whole test suite takes a lot of time to run locally (a few hours), you may want to select a subset of tests relating to the changes you made by using the `-k` option of [`pytest`](https://docs.pytest.org/en/7.1.x/example/markers.html#using-k-expr-to-select-tests-based-on-their-name). Moreover, running tests in parallel (in this example 4 processes) with the `-n` option may help:

```
pytest -k "substring-to-match" -n 4 tests/
```

## Building documentation

Please refer to [this document](https://github.com/huggingface/pytorch-image-models/tree/main/hfdocs).

# Questions

If you have any questions about contributing, or where / how to contribute, please ask in the [Discussions](https://github.com/huggingface/pytorch-image-models/discussions/categories/contributing) (there is a `Contributing` topic).
@ -1,3 +1,2 @@
include timm/models/_pruned/*.txt
include timm/data/_info/*.txt
include timm/data/_info/*.json
include timm/models/pruned/*.txt
@ -1,14 +0,0 @@
# Hugging Face Timm Docs

## Getting Started

```
pip install git+https://github.com/huggingface/doc-builder.git@main#egg=hf-doc-builder
pip install watchdog black
```

## Preview the Docs Locally

```
doc-builder preview timm hfdocs/source
```
@ -0,0 +1 @@
default_branch_name = "master"
@ -1,160 +1,149 @@
- sections:
  - local: index
    title: Home
  - local: quickstart
    title: Quickstart
  - local: installation
    title: Installation
  title: Get started
- sections:
  - local: feature_extraction
    title: Using Pretrained Models as Feature Extractors
  - local: training_script
    title: Training With The Official Training Script
  - local: hf_hub
    title: Share and Load Models from the 🤗 Hugging Face Hub
  title: Tutorials
- sections:
  title: Pytorch Image Models (timm)
  - local: models
    title: Model Summaries
  - local: results
    title: Results
  - local: models/adversarial-inception-v3
    title: Adversarial Inception v3
  - local: models/advprop
    title: AdvProp (EfficientNet)
  - local: models/big-transfer
    title: Big Transfer (BiT)
  - local: models/csp-darknet
    title: CSP-DarkNet
  - local: models/csp-resnet
    title: CSP-ResNet
  - local: models/csp-resnext
    title: CSP-ResNeXt
  - local: models/densenet
    title: DenseNet
  - local: models/dla
    title: Deep Layer Aggregation
  - local: models/dpn
    title: Dual Path Network (DPN)
  - local: models/ecaresnet
    title: ECA-ResNet
  - local: models/efficientnet
    title: EfficientNet
  - local: models/efficientnet-pruned
    title: EfficientNet (Knapsack Pruned)
  - local: models/ensemble-adversarial
    title: Ensemble Adversarial Inception ResNet v2
  - local: models/ese-vovnet
    title: ESE-VoVNet
  - local: models/fbnet
    title: FBNet
  - local: models/gloun-inception-v3
    title: (Gluon) Inception v3
  - local: models/gloun-resnet
    title: (Gluon) ResNet
  - local: models/gloun-resnext
    title: (Gluon) ResNeXt
  - local: models/gloun-senet
    title: (Gluon) SENet
  - local: models/gloun-seresnext
    title: (Gluon) SE-ResNeXt
  - local: models/gloun-xception
    title: (Gluon) Xception
  - local: models/hrnet
    title: HRNet
  - local: models/ig-resnext
    title: Instagram ResNeXt WSL
  - local: models/inception-resnet-v2
    title: Inception ResNet v2
  - local: models/inception-v3
    title: Inception v3
  - local: models/inception-v4
    title: Inception v4
  - local: models/legacy-se-resnet
    title: (Legacy) SE-ResNet
  - local: models/legacy-se-resnext
    title: (Legacy) SE-ResNeXt
  - local: models/legacy-senet
    title: (Legacy) SENet
  - local: models/mixnet
    title: MixNet
  - local: models/mnasnet
    title: MnasNet
  - local: models/mobilenet-v2
    title: MobileNet v2
  - local: models/mobilenet-v3
    title: MobileNet v3
  - local: models/nasnet
    title: NASNet
  - local: models/noisy-student
    title: Noisy Student (EfficientNet)
  - local: models/pnasnet
    title: PNASNet
  - local: models/regnetx
    title: RegNetX
  - local: models/regnety
    title: RegNetY
  - local: models/res2net
    title: Res2Net
  - local: models/res2next
    title: Res2NeXt
  - local: models/resnest
    title: ResNeSt
  - local: models/resnet
    title: ResNet
  - local: models/resnet-d
    title: ResNet-D
  - local: models/resnext
    title: ResNeXt
  - local: models/rexnet
    title: RexNet
  - local: models/se-resnet
    title: SE-ResNet
  - local: models/selecsls
    title: SelecSLS
  - local: models/seresnext
    title: SE-ResNeXt
  - local: models/skresnet
    title: SK-ResNet
  - local: models/skresnext
    title: SK-ResNeXt
  - local: models/spnasnet
    title: SPNASNet
  - local: models/ssl-resnet
    title: SSL ResNet
  - local: models/swsl-resnet
    title: SWSL ResNet
  - local: models/swsl-resnext
    title: SWSL ResNeXt
  - local: models/tf-efficientnet
    title: (Tensorflow) EfficientNet
  - local: models/tf-efficientnet-condconv
    title: (Tensorflow) EfficientNet CondConv
  - local: models/tf-efficientnet-lite
    title: (Tensorflow) EfficientNet Lite
  - local: models/tf-inception-v3
    title: (Tensorflow) Inception v3
  - local: models/tf-mixnet
    title: (Tensorflow) MixNet
  - local: models/tf-mobilenet-v3
    title: (Tensorflow) MobileNet v3
  - local: models/tresnet
    title: TResNet
  - local: models/wide-resnet
    title: Wide ResNet
  - local: models/xception
    title: Xception
  title: Model Pages
  isExpanded: false
- sections:
  - local: reference/models
    title: Models
  - local: reference/data
    title: Data
  - local: reference/optimizers
    title: Optimizers
  - local: reference/schedulers
    title: Learning Rate Schedulers
  title: Reference
- local: scripts
  title: Scripts
- local: training_hparam_examples
  title: Training Examples
- local: feature_extraction
  title: Feature Extraction
- local: changes
  title: Recent Changes
- local: archived_changes
  title: Archived Changes
- local: model_pages
  title: Model Pages
  isExpanded: false
  sections:
    - local: models/adversarial-inception-v3
      title: Adversarial Inception v3
    - local: models/advprop
      title: AdvProp (EfficientNet)
    - local: models/big-transfer
      title: Big Transfer (BiT)
    - local: models/csp-darknet
      title: CSP-DarkNet
    - local: models/csp-resnet
      title: CSP-ResNet
    - local: models/csp-resnext
      title: CSP-ResNeXt
    - local: models/densenet
      title: DenseNet
    - local: models/dla
      title: Deep Layer Aggregation
    - local: models/dpn
      title: Dual Path Network (DPN)
    - local: models/ecaresnet
      title: ECA-ResNet
    - local: models/efficientnet
      title: EfficientNet
    - local: models/efficientnet-pruned
      title: EfficientNet (Knapsack Pruned)
    - local: models/ensemble-adversarial
      title: Ensemble Adversarial Inception ResNet v2
    - local: models/ese-vovnet
      title: ESE-VoVNet
    - local: models/fbnet
      title: FBNet
    - local: models/gloun-inception-v3
      title: (Gluon) Inception v3
    - local: models/gloun-resnet
      title: (Gluon) ResNet
    - local: models/gloun-resnext
      title: (Gluon) ResNeXt
    - local: models/gloun-senet
      title: (Gluon) SENet
    - local: models/gloun-seresnext
      title: (Gluon) SE-ResNeXt
    - local: models/gloun-xception
      title: (Gluon) Xception
    - local: models/hrnet
      title: HRNet
    - local: models/ig-resnext
      title: Instagram ResNeXt WSL
    - local: models/inception-resnet-v2
      title: Inception ResNet v2
    - local: models/inception-v3
      title: Inception v3
    - local: models/inception-v4
      title: Inception v4
    - local: models/legacy-se-resnet
      title: (Legacy) SE-ResNet
    - local: models/legacy-se-resnext
      title: (Legacy) SE-ResNeXt
    - local: models/legacy-senet
      title: (Legacy) SENet
    - local: models/mixnet
      title: MixNet
    - local: models/mnasnet
      title: MnasNet
    - local: models/mobilenet-v2
      title: MobileNet v2
    - local: models/mobilenet-v3
      title: MobileNet v3
    - local: models/nasnet
      title: NASNet
    - local: models/noisy-student
      title: Noisy Student (EfficientNet)
    - local: models/pnasnet
      title: PNASNet
    - local: models/regnetx
      title: RegNetX
    - local: models/regnety
      title: RegNetY
    - local: models/res2net
      title: Res2Net
    - local: models/res2next
      title: Res2NeXt
    - local: models/resnest
      title: ResNeSt
    - local: models/resnet
      title: ResNet
    - local: models/resnet-d
      title: ResNet-D
    - local: models/resnext
      title: ResNeXt
    - local: models/rexnet
      title: RexNet
    - local: models/se-resnet
      title: SE-ResNet
    - local: models/selecsls
      title: SelecSLS
    - local: models/seresnext
      title: SE-ResNeXt
    - local: models/skresnet
      title: SK-ResNet
    - local: models/skresnext
      title: SK-ResNeXt
    - local: models/spnasnet
      title: SPNASNet
    - local: models/ssl-resnet
      title: SSL ResNet
    - local: models/swsl-resnet
      title: SWSL ResNet
    - local: models/swsl-resnext
      title: SWSL ResNeXt
    - local: models/tf-efficientnet
      title: (Tensorflow) EfficientNet
    - local: models/tf-efficientnet-condconv
      title: (Tensorflow) EfficientNet CondConv
    - local: models/tf-efficientnet-lite
      title: (Tensorflow) EfficientNet Lite
    - local: models/tf-inception-v3
      title: (Tensorflow) Inception v3
    - local: models/tf-mixnet
      title: (Tensorflow) MixNet
    - local: models/tf-mobilenet-v3
      title: (Tensorflow) MobileNet v3
    - local: models/tresnet
      title: TResNet
    - local: models/wide-resnet
      title: Wide ResNet
    - local: models/xception
      title: Xception
  title: Get started
@ -0,0 +1,418 @@
# Archived Changes

### July 12, 2021

* Add XCiT models from [official facebook impl](https://github.com/facebookresearch/xcit). Contributed by [Alexander Soare](https://github.com/alexander-soare)

### July 5-9, 2021

* Add `efficientnetv2_rw_t` weights, a custom 'tiny' 13.6M param variant that is a bit better than (non NoisyStudent) B3 models. Both faster and better accuracy (at same or lower res)
  * top-1 82.34 @ 288x288 and 82.54 @ 320x320
* Add [SAM pretrained](https://arxiv.org/abs/2106.01548) in1k weight for ViT B/16 (`vit_base_patch16_sam_224`) and B/32 (`vit_base_patch32_sam_224`) models.
* Add 'Aggregating Nested Transformer' (NesT) w/ weights converted from official [Flax impl](https://github.com/google-research/nested-transformer). Contributed by [Alexander Soare](https://github.com/alexander-soare).
  * `jx_nest_base` - 83.534, `jx_nest_small` - 83.120, `jx_nest_tiny` - 81.426

### June 23, 2021

* Reproduce gMLP model training, `gmlp_s16_224` trained to 79.6 top-1, matching [paper](https://arxiv.org/abs/2105.08050). Hparams for this and other recent MLP training [here](https://gist.github.com/rwightman/d6c264a9001f9167e06c209f630b2cc6)

### June 20, 2021

* Release Vision Transformer 'AugReg' weights from [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270)
  * .npz weight loading support added, can load any of the 50K+ weights from the [AugReg series](https://console.cloud.google.com/storage/browser/vit_models/augreg)
  * See [example notebook](https://colab.research.google.com/github/google-research/vision_transformer/blob/master/vit_jax_augreg.ipynb) from [official impl](https://github.com/google-research/vision_transformer/) for navigating the augreg weights
  * Replaced all default weights w/ best AugReg variant (if possible). All AugReg 21k classifiers work.
  * Highlights: `vit_large_patch16_384` (87.1 top-1), `vit_large_r50_s32_384` (86.2 top-1), `vit_base_patch16_384` (86.0 top-1)
  * `vit_deit_*` renamed to just `deit_*`
  * Remove my old small model, replace with DeiT compatible small w/ AugReg weights
* Add 1st training of my `gmixer_24_224` MLP w/ GLU, 78.1 top-1 w/ 25M params.
* Add weights from official ResMLP release (https://github.com/facebookresearch/deit)
* Add `eca_nfnet_l2` weights from my 'lightweight' series. 84.7 top-1 at 384x384.
* Add distilled BiT 50x1 student and 152x2 Teacher weights from [Knowledge distillation: A good teacher is patient and consistent](https://arxiv.org/abs/2106.05237)
* NFNets and ResNetV2-BiT models work w/ Pytorch XLA now
  * weight standardization uses F.batch_norm instead of std_mean (std_mean wasn't lowered)
  * eps values adjusted, will be slight differences but should be quite close
* Improve test coverage and classifier interface of non-conv (vision transformer and mlp) models
* Cleanup a few classifier / flatten details for models w/ conv classifiers or early global pool
* Please report any regressions, this PR touched quite a few models.

### June 8, 2021

* Add first ResMLP weights, trained in PyTorch XLA on TPU-VM w/ my XLA branch. 24 block variant, 79.2 top-1.
* Add ResNet51-Q model w/ pretrained weights at 82.36 top-1.
  * NFNet inspired block layout with quad layer stem and no maxpool
  * Same param count (35.7M) and throughput as ResNetRS-50 but +1.5 top-1 @ 224x224 and +2.5 top-1 at 288x288

### May 25, 2021

* Add LeViT, Visformer, ConViT (PR by Aman Arora), Twins (PR by paper authors) transformer models
* Cleanup input_size/img_size override handling and testing for all vision transformer models
* Add `efficientnetv2_rw_m` model and weights (started training before official code). 84.8 top-1, 53M params.

### May 14, 2021

* Add EfficientNet-V2 official model defs w/ ported weights from official [Tensorflow/Keras](https://github.com/google/automl/tree/master/efficientnetv2) impl.
  * 1k trained variants: `tf_efficientnetv2_s/m/l`
  * 21k trained variants: `tf_efficientnetv2_s/m/l_in21k`
  * 21k pretrained -> 1k fine-tuned: `tf_efficientnetv2_s/m/l_in21ft1k`
  * v2 models w/ v1 scaling: `tf_efficientnetv2_b0` through `b3`
  * Rename my prev V2 guess `efficientnet_v2s` -> `efficientnetv2_rw_s`
  * Some blank `efficientnetv2_*` models in-place for future native PyTorch training

### May 5, 2021

* Add MLP-Mixer models and port pretrained weights from [Google JAX impl](https://github.com/google-research/vision_transformer/tree/linen)
* Add CaiT models and pretrained weights from [FB](https://github.com/facebookresearch/deit)
* Add ResNet-RS models and weights from [TF](https://github.com/tensorflow/tpu/tree/master/models/official/resnet/resnet_rs). Thanks [Aman Arora](https://github.com/amaarora)
* Add CoaT models and weights. Thanks [Mohammed Rizin](https://github.com/morizin)
* Add new ImageNet-21k weights & finetuned weights for TResNet, MobileNet-V3, ViT models. Thanks [mrT](https://github.com/mrT23)
* Add GhostNet models and weights. Thanks [Kai Han](https://github.com/iamhankai)
* Update ByoaNet attention models
  * Improve SA module inits
  * Hack together experimental stand-alone Swin based attn module and `swinnet`
  * Consistent '26t' model defs for experiments.
* Add improved EfficientNet-V2S (prelim model def) weights. 83.8 top-1.
* WandB logging support

### April 13, 2021

* Add Swin Transformer models and weights from https://github.com/microsoft/Swin-Transformer

### April 12, 2021

* Add ECA-NFNet-L1 (slimmed down F1 w/ SiLU, 41M params) trained with this code. 84% top-1 @ 320x320. Trained at 256x256.
* Add EfficientNet-V2S model (unverified model definition) weights. 83.3 top-1 @ 288x288. Only trained single res 224. Working on progressive training.
* Add ByoaNet model definition (Bring-your-own-attention) w/ SelfAttention block and corresponding SA/SA-like modules and model defs
  * Lambda Networks - https://arxiv.org/abs/2102.08602
  * Bottleneck Transformers - https://arxiv.org/abs/2101.11605
  * Halo Nets - https://arxiv.org/abs/2103.12731
* Adabelief optimizer contributed by Juntang Zhuang

### April 1, 2021

* Add snazzy `benchmark.py` script for bulk `timm` model benchmarking of train and/or inference
* Add Pooling-based Vision Transformer (PiT) models (from https://github.com/naver-ai/pit)
  * Merged distilled variant into main for torchscript compatibility
  * Some `timm` cleanup/style tweaks and weights have hub download support
* Cleanup Vision Transformer (ViT) models
  * Merge distilled (DeiT) model into main so that torchscript can work
  * Support updated weight init (defaults to old still) that closer matches original JAX impl (possibly better training from scratch)
  * Separate hybrid model defs into different file and add several new model defs to fiddle with, support patch_size != 1 for hybrids
  * Fix fine-tuning num_class changes (PiT and ViT) and pos_embed resizing (ViT) with distilled variants
  * nn.Sequential for block stack (does not break downstream compat)
* TnT (Transformer-in-Transformer) models contributed by author (from https://gitee.com/mindspore/mindspore/tree/master/model_zoo/research/cv/TNT)
* Add RegNetY-160 weights from DeiT teacher model
* Add new NFNet-L0 w/ SE attn (rename `nfnet_l0b`->`nfnet_l0`) weights 82.75 top-1 @ 288x288
* Some fixes/improvements for TFDS dataset wrapper

### March 7, 2021

* First 0.4.x PyPi release w/ NFNets (& related), ByoB (GPU-Efficient, RepVGG, etc).
* Change feature extraction for pre-activation nets (NFNets, ResNetV2) to return features before activation.

### Feb 18, 2021

* Add pretrained weights and model variants for NFNet-F* models from [DeepMind Haiku impl](https://github.com/deepmind/deepmind-research/tree/master/nfnets).
  * Models are prefixed with `dm_`. They require SAME padding conv, skipinit enabled, and activation gains applied in act fn.
  * These models are big, expect to run out of GPU memory. With the GELU activation + other options, they are roughly 1/2 the inference speed of my SiLU PyTorch optimized `s` variants.
  * Original model results are based on pre-processing that is not the same as all other models so you'll see different results in the results csv (once updated).
  * Matching the original pre-processing as closely as possible I get these results:
    * `dm_nfnet_f6` - 86.352
    * `dm_nfnet_f5` - 86.100
    * `dm_nfnet_f4` - 85.834
    * `dm_nfnet_f3` - 85.676
    * `dm_nfnet_f2` - 85.178
    * `dm_nfnet_f1` - 84.696
    * `dm_nfnet_f0` - 83.464

### Feb 16, 2021

* Add Adaptive Gradient Clipping (AGC) as per https://arxiv.org/abs/2102.06171. Integrated w/ PyTorch gradient clipping via mode arg that defaults to prev 'norm' mode. For backward arg compat, clip-grad arg must be specified to enable when using train.py.
  * AGC w/ default clipping factor `--clip-grad .01 --clip-mode agc`
  * PyTorch global norm of 1.0 (old behaviour, always norm), `--clip-grad 1.0`
  * PyTorch value clipping of 10, `--clip-grad 10. --clip-mode value`
  * AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.

### Feb 12, 2021

* Update Normalization-Free nets to include new NFNet-F (https://arxiv.org/abs/2102.06171) model defs

### Feb 10, 2021

* More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')
  * GPU-Efficient-Networks (https://github.com/idstcv/GPU-Efficient-Networks), impl in `byobnet.py`
  * RepVGG (https://github.com/DingXiaoH/RepVGG), impl in `byobnet.py`
  * classic VGG (from torchvision, impl in `vgg`)
* Refinements to normalizer layer arg handling and normalizer+act layer handling in some models
* Default AMP mode changed to native PyTorch AMP instead of APEX. Issues not being fixed with APEX. Native works with `--channels-last` and `--torchscript` model training, APEX does not.
* Fix a few bugs introduced since last pypi release

### Feb 8, 2021

* Add several ResNet weights with ECA attention. 26t & 50t trained @ 256, test @ 320. 269d trained @ 256, fine-tuned @ 320, test @ 352.
  * `ecaresnet26t` - 79.88 top-1 @ 320x320, 79.08 @ 256x256
  * `ecaresnet50t` - 82.35 top-1 @ 320x320, 81.52 @ 256x256
  * `ecaresnet269d` - 84.93 top-1 @ 352x352, 84.87 @ 320x320
* Remove separate tiered (`t`) vs tiered_narrow (`tn`) ResNet model defs, all `tn` changed to `t` and `t` models removed (`seresnext26t_32x4d` only model w/ weights that was removed).
* Support model default_cfgs with separate train vs test resolution `test_input_size` and remove extra `_320` suffix ResNet model defs that were just for test.

### Jan 30, 2021

* Add initial "Normalization Free" NF-RegNet-B* and NF-ResNet model definitions based on [paper](https://arxiv.org/abs/2101.08692)

### Jan 25, 2021

* Add ResNetV2 Big Transfer (BiT) models w/ ImageNet-1k and 21k weights from https://github.com/google-research/big_transfer
* Add official R50+ViT-B/16 hybrid models + weights from https://github.com/google-research/vision_transformer
* ImageNet-21k ViT weights are added w/ model defs and representation layer (pre logits) support
  * NOTE: ImageNet-21k classifier heads were zero'd in original weights, they are only useful for transfer learning
* Add model defs and weights for DeiT Vision Transformer models from https://github.com/facebookresearch/deit
* Refactor dataset classes into ImageDataset/IterableImageDataset + dataset specific parser classes
* Add Tensorflow-Datasets (TFDS) wrapper to allow use of TFDS image classification sets with train script
  * Ex: `train.py /data/tfds --dataset tfds/oxford_iiit_pet --val-split test --model resnet50 -b 256 --amp --num-classes 37 --opt adamw --lr 3e-4 --weight-decay .001 --pretrained -j 2`
* Add improved .tar dataset parser that reads images from .tar, folder of .tar files, or .tar within .tar
  * Run validation on full ImageNet-21k directly from tar w/ BiT model: `validate.py /data/fall11_whole.tar --model resnetv2_50x1_bitm_in21k --amp`
* Models in this update should be stable w/ possible exception of ViT/BiT, possibility of some regressions with train/val scripts and dataset handling

### Jan 3, 2021

* Add SE-ResNet-152D weights
  * 256x256 val, 0.94 crop top-1 - 83.75
  * 320x320 val, 1.0 crop - 84.36
* Update results files

### Dec 18, 2020

* Add ResNet-101D, ResNet-152D, and ResNet-200D weights trained @ 256x256
  * 256x256 val, 0.94 crop (top-1) - 101D (82.33), 152D (83.08), 200D (83.25)
  * 288x288 val, 1.0 crop - 101D (82.64), 152D (83.48), 200D (83.76)
  * 320x320 val, 1.0 crop - 101D (83.00), 152D (83.66), 200D (84.01)

### Dec 7, 2020

* Simplify EMA module (ModelEmaV2), compatible with fully torchscripted models
* Misc fixes for SiLU ONNX export, default_cfg missing from Feature extraction models, Linear layer w/ AMP + torchscript
* PyPi release @ 0.3.2 (needed by EfficientDet)

### Oct 30, 2020

* Test with PyTorch 1.7 and fix a small top-n metric view vs reshape issue.
* Convert newly added 224x224 Vision Transformer weights from official JAX repo. 81.8 top-1 for B/16, 83.1 L/16.
* Support PyTorch 1.7 optimized, native SiLU (aka Swish) activation. Add mapping to 'silu' name, custom swish will eventually be deprecated.
* Fix regression for loading pretrained classifier via direct model entrypoint functions. Didn't impact create_model() factory usage.
* PyPi release @ 0.3.0 version!

### Oct 26, 2020

* Update Vision Transformer models to be compatible with official code release at https://github.com/google-research/vision_transformer
* Add Vision Transformer weights (ImageNet-21k pretrain) for 384x384 base and large models converted from official jax impl
  * ViT-B/16 - 84.2
  * ViT-B/32 - 81.7
  * ViT-L/16 - 85.2
  * ViT-L/32 - 81.5

### Oct 21, 2020

* Weights added for Vision Transformer (ViT) models. 77.86 top-1 for 'small' and 79.35 for 'base'. Thanks to [Christof](https://www.kaggle.com/christofhenkel) for training the base model w/ lots of GPUs.

### Oct 13, 2020

* Initial impl of Vision Transformer models. Both patch and hybrid (CNN backbone) variants. Currently trying to train...
* Adafactor and AdaHessian (FP32 only, no AMP) optimizers
* EdgeTPU-M (`efficientnet_em`) model trained in PyTorch, 79.3 top-1
* Pip release, doc updates pending a few more changes...

### Sept 18, 2020

* New ResNet 'D' weights. 72.7 (top-1) ResNet-18-D, 77.1 ResNet-34-D, 80.5 ResNet-50-D
* Added a few untrained defs for other ResNet models (66D, 101D, 152D, 200/200D)

### Sept 3, 2020

* New weights
  * Wide-ResNet50 - 81.5 top-1 (vs 78.5 torchvision)
  * SEResNeXt50-32x4d - 81.3 top-1 (vs 79.1 cadene)
* Support for native Torch AMP and channels_last memory format added to train/validate scripts (`--channels-last`, `--native-amp` vs `--apex-amp`)
* Models tested with channels_last on latest NGC 20.08 container. AdaptiveAvgPool in attn layers changed to mean((2,3)) to work around bug with NHWC kernel.

### Aug 12, 2020

* New/updated weights from training experiments
  * EfficientNet-B3 - 82.1 top-1 (vs 81.6 for official with AA and 81.9 for AdvProp)
  * RegNetY-3.2GF - 82.0 top-1 (78.9 from official ver)
  * CSPResNet50 - 79.6 top-1 (76.6 from official ver)
* Add CutMix integrated w/ Mixup. See [pull request](https://github.com/rwightman/pytorch-image-models/pull/218) for some usage examples
* Some fixes for using pretrained weights with `in_chans` != 3 on several models.

### Aug 5, 2020

Universal feature extraction, new models, new weights, new test sets.
* All models support the `features_only=True` argument for `create_model` call to return a network that extracts feature maps from the deepest layer at each stride.
* New models
  * CSPResNet, CSPResNeXt, CSPDarkNet, DarkNet
  * ReXNet
  * (Modified Aligned) Xception41/65/71 (a proper port of TF models)
* New trained weights
  * SEResNet50 - 80.3 top-1
  * CSPDarkNet53 - 80.1 top-1
  * CSPResNeXt50 - 80.0 top-1
  * DPN68b - 79.2 top-1
  * EfficientNet-Lite0 (non-TF ver) - 75.5 (submitted by [@hal-314](https://github.com/hal-314))
* Add 'real' labels for ImageNet and ImageNet-Renditions test set, see [`results/README.md`](results/README.md)
* Test set ranking/top-n diff script by [@KushajveerSingh](https://github.com/KushajveerSingh)
* Train script and loader/transform tweaks to punch through more aug arguments
* README and documentation overhaul. See initial (WIP) documentation at https://rwightman.github.io/pytorch-image-models/
* adamp and sgdp optimizers added by [@hellbell](https://github.com/hellbell)

### June 11, 2020

Bunch of changes:
* DenseNet models updated with memory efficient addition from torchvision (fixed a bug), blur pooling and deep stem additions
* VoVNet V1 and V2 models added, 39 V2 variant (ese_vovnet_39b) trained to 79.3 top-1
* Activation factory added along with new activations:
  * select act at model creation time for more flexibility in using activations compatible with scripting or tracing (ONNX export)
  * hard_mish (experimental) added with memory-efficient grad, along with ME hard_swish
  * context mgr for setting exportable/scriptable/no_jit states
* Norm + Activation combo layers added with initial trial support in DenseNet and VoVNet along with impl of EvoNorm and InplaceAbn wrapper that fit the interface
* Torchscript works for all but two of the model types as long as using Pytorch 1.5+, tests added for this
* Some import cleanup and classifier reset changes, all models will have classifier reset to nn.Identity on reset_classifier(0) call
* Prep for 0.1.28 pip release

### May 12, 2020

* Add ResNeSt models (code adapted from https://github.com/zhanghang1989/ResNeSt, paper https://arxiv.org/abs/2004.08955)

### May 3, 2020

* Pruned EfficientNet B1, B2, and B3 (https://arxiv.org/abs/2002.08258) contributed by [Yonathan Aflalo](https://github.com/yoniaflalo)

### May 1, 2020

* Merged a number of excellent contributions in the ResNet model family over the past month
  * BlurPool2D and resnetblur models initiated by [Chris Ha](https://github.com/VRandme), I trained resnetblur50 to 79.3.
  * TResNet models and SpaceToDepth, AntiAliasDownsampleLayer layers by [mrT23](https://github.com/mrT23)
  * ecaresnet (50d, 101d, light) models and two pruned variants using pruning as per (https://arxiv.org/abs/2002.08258) by [Yonathan Aflalo](https://github.com/yoniaflalo)
* 200 pretrained models in total now with updated results csv in results folder

### April 5, 2020

* Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite
  * 3.5M param MobileNet-V2 100 @ 73%
  * 4.5M param MobileNet-V2 110d @ 75%
  * 6.1M param MobileNet-V2 140 @ 76.5%
  * 5.8M param MobileNet-V2 120d @ 77.3%

### March 18, 2020

* Add EfficientNet-Lite models w/ weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite)
* Add RandAugment trained ResNeXt-50 32x4d weights with 79.8 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)

### Feb 29, 2020

* New MobileNet-V3 Large weights trained from scratch with this code to 75.77% top-1
* IMPORTANT CHANGE - default weight init changed for all MobilenetV3 / EfficientNet / related models
  * overall results similar to a bit better training from scratch on a few smaller models tried
  * performance early in training seems consistently improved but less difference by end
  * set `fix_group_fanout=False` in `_init_weight_goog` fn if you need to reproduce past behaviour
* Experimental LR noise feature added, applies a random perturbation to LR each epoch in a specified range of training

### Feb 18, 2020

* Big refactor of model layers and addition of several attention mechanisms. Several additions motivated by 'Compounding the Performance Improvements...' (https://arxiv.org/abs/2001.06268):
  * Move layer/module impl into `layers` subfolder/module of `models` and organize in a more granular fashion
  * ResNet downsample paths now properly support dilation (output stride != 32) for avg_pool ('D' variant) and 3x3 (SENets) networks
  * Add Selective Kernel Nets on top of ResNet base, pretrained weights
    * skresnet18 - 73% top-1
    * skresnet34 - 76.9% top-1
    * skresnext50_32x4d (equiv to SKNet50) - 80.2% top-1
  * ECA and CECA (circular padding) attention layer contributed by [Chris Ha](https://github.com/VRandme)
  * CBAM attention experiment (not the best results so far, may remove)
  * Attention factory to allow dynamically selecting one of SE, ECA, CBAM in the `.se` position for all ResNets
  * Add DropBlock and DropPath (formerly DropConnect for EfficientNet/MobileNetv3) support to all ResNet variants
* Full dataset results updated that incl NoisyStudent weights and 2 of the 3 SK weights

### Feb 12, 2020

* Add EfficientNet-L2 and B0-B7 NoisyStudent weights ported from [Tensorflow TPU](https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet)

### Feb 6, 2020

* Add RandAugment trained EfficientNet-ES (EdgeTPU-Small) weights with 78.1 top-1. Trained by [Andrew Lavin](https://github.com/andravin) (see Training section for hparams)

### Feb 1/2, 2020

* Port new EfficientNet-B8 (RandAugment) weights; these are different than the B8 AdvProp, different input normalization.
* Update results csv files on all models for ImageNet validation and three other test sets
* Push PyPi package update

### Jan 31, 2020

* Update ResNet50 weights with a new 79.038 result from further JSD / AugMix experiments. Full command line for reproduction in training section below.

### Jan 11/12, 2020

* Master may be a bit unstable w.r.t. training; these changes have been tested but not all combos
* Implementations of AugMix added to existing RA and AA, including numerous supporting pieces like JSD loss (Jensen-Shannon divergence + CE) and AugMixDataset
* SplitBatchNorm adaptation layer added for implementing Auxiliary BN as per AdvProp paper
* ResNet-50 AugMix trained model w/ 79% top-1 added
* `seresnext26tn_32x4d` - 77.99 top-1, 93.75 top-5 added to tiered experiment, higher img/s than 't' and 'd'

### Jan 3, 2020

* Add RandAugment trained EfficientNet-B0 weight with 77.7 top-1. Trained by [Michael Klachko](https://github.com/michaelklachko) with this code and recent hparams (see Training section)
* Add `avg_checkpoints.py` script for post-training weight averaging and update all scripts with header docstrings and shebangs.

### Dec 30, 2019

* Merge [Dushyant Mehta's](https://github.com/mehtadushy) PR for SelecSLS (Selective Short and Long Range Skip Connections) networks. Good GPU memory consumption and throughput. Original: https://github.com/mehtadushy/SelecSLS-Pytorch

### Dec 28, 2019

* Add new model weights and training hparams (see Training Hparams section)
  * `efficientnet_b3` - 81.5 top-1, 95.7 top-5 at default res/crop, 81.9, 95.8 at 320x320 1.0 crop-pct
    * trained with RandAugment, ended up with an interesting but less than perfect result (see training section)
  * `seresnext26d_32x4d` - 77.6 top-1, 93.6 top-5
    * deep stem (32, 32, 64), avgpool downsample
    * stem/downsample from bag-of-tricks paper
  * `seresnext26t_32x4d` - 78.0 top-1, 93.7 top-5
    * deep tiered stem (24, 48, 64), avgpool downsample (a modified 'D' variant)
    * stem sizing mods from Jeremy Howard and fastai devs discussing ResNet architecture experiments

### Dec 23, 2019

* Add RandAugment trained MixNet-XL weights with 80.48 top-1.
* `--dist-bn` argument added to train.py, will distribute BN stats between nodes after each train epoch, before eval

### Dec 4, 2019

* Added weights from the first training from scratch of an EfficientNet (B2) with my new RandAugment implementation. Much better than my previous B2 and very close to the official AdvProp ones (80.4 top-1, 95.08 top-5).

### Nov 29, 2019

* Brought EfficientNet and MobileNetV3 up to date with my https://github.com/rwightman/gen-efficientnet-pytorch code. Torchscript and ONNX export compat excluded.
  * AdvProp weights added
  * Official TF MobileNetv3 weights added
* EfficientNet and MobileNetV3 hook based 'feature extraction' classes added. Will serve as basis for using models as backbones in obj detection/segmentation tasks. Lots more to be done here...
* HRNet classification models and weights added from https://github.com/HRNet/HRNet-Image-Classification
* Consistency in global pooling, `reset_classifier`, and `forward_features` across models
  * `forward_features` always returns unpooled feature maps now
* Reasonable chance I broke something... let me know

### Nov 22, 2019

* Add ImageNet training RandAugment implementation alongside AutoAugment. PyTorch Transform compatible format, using PIL. Currently training two EfficientNet models from scratch with promising results... will update.
* `drop-connect` cmd line arg finally added to `train.py`, no need to hack model fns. Works for efficientnet/mobilenetv3 based models, ignored otherwise.
@ -1,54 +0,0 @@
# Sharing and Loading Models From the Hugging Face Hub

The `timm` library has a built-in integration with the Hugging Face Hub, making it easy to share and load models from the 🤗 Hub.

In this short guide, we'll see how to:
1. Share a `timm` model on the Hub
2. Load that model back from the Hub

## Authenticating

First, you'll need to make sure you have the `huggingface_hub` package installed.

```bash
pip install huggingface_hub
```

Then, you'll need to authenticate yourself. You can do this by running the following command:

```bash
huggingface-cli login
```

Or, if you're using a notebook, you can use the `notebook_login` helper:

```py
>>> from huggingface_hub import notebook_login
>>> notebook_login()
```

## Sharing a Model

```py
>>> import timm
>>> model = timm.create_model('resnet18', pretrained=True, num_classes=4)
```

Here is where you would normally train or fine-tune the model. We'll skip that for the sake of this tutorial.

Let's pretend we've now fine-tuned the model. The next step would be to push it to the Hub! We can do this with the `timm.models.hub.push_to_hf_hub` function.

```py
>>> model_cfg = dict(labels=['a', 'b', 'c', 'd'])
>>> timm.models.hub.push_to_hf_hub(model, 'resnet18-random', model_config=model_cfg)
```

Running the above would push the model to `<your-username>/resnet18-random` on the Hub. You can now share this model with your friends, or use it in your own code!

## Loading a Model

Loading a model from the Hub is as simple as calling `timm.create_model` with the `pretrained` argument set to the name of the model you want to load. In this case, we'll use [`nateraw/resnet18-random`](https://huggingface.co/nateraw/resnet18-random), which is the model we just pushed to the Hub.

```py
>>> model_reloaded = timm.create_model('hf_hub:nateraw/resnet18-random', pretrained=True)
```
@ -1,22 +1,89 @@
# timm
# Getting Started

<img class="float-left !m-0 !border-0 !dark:border-0 !shadow-none !max-w-lg w-[150px]" src="https://huggingface.co/front/thumbnails/docs/timm.png"/>
## Welcome

`timm` is a library containing SOTA computer vision models, layers, utilities, optimizers, schedulers, data-loaders, augmentations, and training/evaluation scripts.
Welcome to the `timm` documentation, a lean set of docs that covers the basics of `timm`.

It comes packaged with >700 pretrained models, and is designed to be flexible and easy to use.
For a more comprehensive set of docs (currently under development), please visit [timmdocs](http://timm.fast.ai) by [Aman Arora](https://github.com/amaarora).

Read the [quick start guide](quickstart) to get up and running with the `timm` library. You will learn how to load, discover, and use pretrained models included in the library.
## Install

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./feature_extraction"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Tutorials</div>
      <p class="text-gray-700">Learn the basics and become familiar with timm. Start here if you are using timm for the first time!</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./reference/models"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Technical descriptions of how timm classes and methods work.</p>
    </a>
  </div>
</div>
The library can be installed with pip:

```
pip install timm
```

I update the PyPi (pip) packages when I'm confident there are no significant model regressions from previous releases. If you want to pip install the bleeding edge from GitHub, use:
```
pip install git+https://github.com/rwightman/pytorch-image-models.git
```

### Conda Environment

<Tip>

- All development and testing has been done in Conda Python 3 environments on Linux x86-64 systems, specifically Python 3.7, 3.8, 3.9, and 3.10.

- Little to no care has been taken to be Python 2.x friendly, and it will not be supported. If you run into any challenges running on Windows or another OS, I'm definitely open to looking into those issues, so long as it's in a reproducible (read: Conda) environment.

- PyTorch versions 1.9, 1.10, and 1.11 have been tested with the latest versions of this code.

</Tip>

I've tried to keep the dependencies minimal; the setup is as per the PyTorch default install instructions for Conda:

```bash
conda create -n torch-env
conda activate torch-env
conda install pytorch torchvision cudatoolkit=11.3 -c pytorch
conda install pyyaml
```

## Load a Pretrained Model

Pretrained models can be loaded using `timm.create_model`.

```py
>>> import timm

>>> m = timm.create_model('mobilenetv3_large_100', pretrained=True)
>>> m.eval()
```

## List Models with Pretrained Weights

```py
>>> import timm
>>> from pprint import pprint
>>> model_names = timm.list_models(pretrained=True)
>>> pprint(model_names)
[
 'adv_inception_v3',
 'cspdarknet53',
 'cspresnext50',
 'densenet121',
 'densenet161',
 'densenet169',
 'densenet201',
 'densenetblur121d',
 'dla34',
 'dla46_c',
]
```

## List Model Architectures by Wildcard

```py
>>> import timm
>>> from pprint import pprint
>>> model_names = timm.list_models('*resne*t*')
>>> pprint(model_names)
[
 'cspresnet50',
 'cspresnet50d',
 'cspresnet50w',
 'cspresnext50',
 ...
]
```
@ -1,74 +0,0 @@
# Installation

Before you start, you'll need to set up your environment and install the appropriate packages. `timm` is tested on **Python 3+**.

## Virtual Environment

You should install `timm` in a [virtual environment](https://docs.python.org/3/library/venv.html) to keep things tidy and avoid dependency conflicts.

1. Create and navigate to your project directory:

```bash
mkdir ~/my-project
cd ~/my-project
```

2. Start a virtual environment inside your directory:

```bash
python -m venv .env
```

3. Activate and deactivate the virtual environment with the following commands:

```bash
# Activate the virtual environment
source .env/bin/activate

# Deactivate the virtual environment
deactivate
```

Once you've created your virtual environment, you can install `timm` in it.

## Using pip

The most straightforward way to install `timm` is with pip:

```bash
pip install timm
```

Alternatively, you can install `timm` from GitHub directly to get the latest, bleeding-edge version:

```bash
pip install git+https://github.com/rwightman/pytorch-image-models.git
```

Run the following command to check if `timm` has been properly installed:

```bash
python -c "from timm import list_models; print(list_models(pretrained=True)[:5])"
```

This command lists the first five pretrained models available in `timm` (which are sorted alphabetically). You should see the following output:

```python
['adv_inception_v3', 'bat_resnext26ts', 'beit_base_patch16_224', 'beit_base_patch16_224_in22k', 'beit_base_patch16_384']
```

## From Source

Building `timm` from source lets you make changes to the code base. To install from source, clone the repository and install with the following commands:

```bash
git clone https://github.com/rwightman/pytorch-image-models.git
cd pytorch-image-models
pip install -e .
```

Again, you can check if `timm` was properly installed with the following command:

```bash
python -c "from timm import list_models; print(list_models(pretrained=True)[:5])"
```
@ -0,0 +1,5 @@
# Available Models

`timm` comes bundled with a number of model architectures and corresponding pretrained models.

In these pages, you will find the models available in the `timm` library, as well as information on how to use them.
@ -1,228 +0,0 @@
# Quickstart

This quickstart is intended for developers who are ready to dive into the code and see an example of how to integrate `timm` into their model training workflow.

First, you'll need to install `timm`. For more information on installation, see [Installation](installation).

```bash
pip install timm
```

## Load a Pretrained Model

Pretrained models can be loaded using [`create_model`].

Here, we load the pretrained `mobilenetv3_large_100` model.

```py
>>> import timm

>>> m = timm.create_model('mobilenetv3_large_100', pretrained=True)
>>> m.eval()
```

<Tip>
Note: The returned PyTorch model is set to train mode by default, so you must call `.eval()` on it if you plan to use it for inference.
</Tip>

## List Models with Pretrained Weights

To list models packaged with `timm`, you can use [`list_models`]. If you specify `pretrained=True`, this function will only return model names that have associated pretrained weights available.

```py
>>> import timm
>>> from pprint import pprint
>>> model_names = timm.list_models(pretrained=True)
>>> pprint(model_names)
[
 'adv_inception_v3',
 'cspdarknet53',
 'cspresnext50',
 'densenet121',
 'densenet161',
 'densenet169',
 'densenet201',
 'densenetblur121d',
 'dla34',
 'dla46_c',
]
```

You can also list models with a specific pattern in their name.

```py
>>> import timm
>>> from pprint import pprint
>>> model_names = timm.list_models('*resne*t*')
>>> pprint(model_names)
[
 'cspresnet50',
 'cspresnet50d',
 'cspresnet50w',
 'cspresnext50',
 ...
]
```

## Fine-Tune a Pretrained Model

You can fine-tune any of the pretrained models just by changing the classifier (the last layer).

```py
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```

To fine-tune on your own dataset, you have to write a PyTorch training loop or adapt `timm`'s [training script](training_script) to use your dataset; a rough sketch of such a loop follows.
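As a bare-bones illustration (not part of the official docs), the loop below shows the shape of such a fine-tune; the stand-in `TensorDataset`, class count, and hyperparameters are placeholders you would swap for your own data and settings:

```py
import timm
import torch
from torch.utils.data import DataLoader, TensorDataset

NUM_FINETUNE_CLASSES = 4  # hypothetical; set to the number of classes in your dataset
model = timm.create_model('mobilenetv3_large_100', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)

# Stand-in data so the sketch runs; replace with your real dataset + transform.
dataset = TensorDataset(torch.randn(64, 3, 224, 224), torch.randint(0, NUM_FINETUNE_CLASSES, (64,)))
loader = DataLoader(dataset, batch_size=16, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(2):  # a couple of epochs, just for illustration
    for images, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)  # forward pass on a batch
        loss.backward()
        optimizer.step()
```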
|
||||
## Use a Pretrained Model for Feature Extraction
|
||||
|
||||
Without modifying the network, one can call model.forward_features(input) on any model instead of the usual model(input). This will bypass the head classifier and global pooling for networks.
|
||||
|
||||
For a more in depth guide to using `timm` for feature extraction, see [Feature Extraction](feature_extraction).
|
||||
|
||||
```py
|
||||
>>> import timm
|
||||
>>> import torch
|
||||
>>> x = torch.randn(1, 3, 224, 224)
|
||||
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True)
|
||||
>>> features = model.forward_features(x)
|
||||
>>> print(features.shape)
|
||||
torch.Size([1, 960, 7, 7])
|
||||
```

## Image Augmentation

To transform images into valid inputs for a model, you can use [`timm.data.create_transform`], providing the desired `input_size` that the model expects.

This will return a generic transform that uses reasonable defaults.

```py
>>> timm.data.create_transform((3, 224, 224))
Compose(
    Resize(size=256, interpolation=bilinear, max_size=None, antialias=None)
    CenterCrop(size=(224, 224))
    ToTensor()
    Normalize(mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
)
```

Pretrained models have specific transforms that were applied to images fed into them while training. If you use the wrong transform on your image, the model won't understand what it's seeing!

To figure out which transformations were used for a given pretrained model, we can start by taking a look at its `pretrained_cfg`:

```py
>>> model.pretrained_cfg
{'url': 'https://github.com/rwightman/pytorch-image-models/releases/download/v0.1-weights/mobilenetv3_large_100_ra-f55367f5.pth',
 'num_classes': 1000,
 'input_size': (3, 224, 224),
 'pool_size': (7, 7),
 'crop_pct': 0.875,
 'interpolation': 'bicubic',
 'mean': (0.485, 0.456, 0.406),
 'std': (0.229, 0.224, 0.225),
 'first_conv': 'conv_stem',
 'classifier': 'classifier',
 'architecture': 'mobilenetv3_large_100'}
```

We can then resolve only the data-related configuration by using [`timm.data.resolve_data_config`].

```py
>>> timm.data.resolve_data_config(model.pretrained_cfg)
{'input_size': (3, 224, 224),
 'interpolation': 'bicubic',
 'mean': (0.485, 0.456, 0.406),
 'std': (0.229, 0.224, 0.225),
 'crop_pct': 0.875}
```

We can pass this data config to [`timm.data.create_transform`] to initialize the model's associated transform.

```py
>>> data_cfg = timm.data.resolve_data_config(model.pretrained_cfg)
>>> transform = timm.data.create_transform(**data_cfg)
>>> transform
Compose(
    Resize(size=256, interpolation=bicubic, max_size=None, antialias=None)
    CenterCrop(size=(224, 224))
    ToTensor()
    Normalize(mean=tensor([0.4850, 0.4560, 0.4060]), std=tensor([0.2290, 0.2240, 0.2250]))
)
```

<Tip>
Note: Here, the pretrained model's config happens to be the same as the generic config we made earlier. This is not always the case. So, it's safer to use the data config to create the transform as we did here instead of using the generic transform.
</Tip>

## Using Pretrained Models for Inference

Here, we will put together the above sections and use a pretrained model for inference.

First we'll need an image to do inference on. Here we load an image from the web:

```py
>>> import requests
>>> from PIL import Image
>>> from io import BytesIO
>>> url = 'https://datasets-server.huggingface.co/assets/imagenet-1k/--/default/test/12/image/image.jpg'
>>> image = Image.open(requests.get(url, stream=True).raw)
>>> image
```

Here's the image we loaded:

<img src="https://datasets-server.huggingface.co/assets/imagenet-1k/--/default/test/12/image/image.jpg" alt="An Image from a link" width="300"/>

Now, we'll create our model and transforms again. This time, we make sure to set our model in evaluation mode.

```py
>>> model = timm.create_model('mobilenetv3_large_100', pretrained=True).eval()
>>> transform = timm.data.create_transform(
...     **timm.data.resolve_data_config(model.pretrained_cfg)
... )
```

We can prepare this image for the model by passing it to the transform.

```py
>>> image_tensor = transform(image)
>>> image_tensor.shape
torch.Size([3, 224, 224])
```

Now we can pass that image to the model to get the predictions. We use `unsqueeze(0)` in this case, as the model is expecting a batch dimension.

```py
>>> output = model(image_tensor.unsqueeze(0))
>>> output.shape
torch.Size([1, 1000])
```

To get the predicted probabilities, we apply softmax to the output. This leaves us with a tensor of shape `(num_classes,)`.

```py
>>> probabilities = torch.nn.functional.softmax(output[0], dim=0)
>>> probabilities.shape
torch.Size([1000])
```

Now we'll find the top 5 predicted class indices and values using `torch.topk`.

```py
>>> values, indices = torch.topk(probabilities, 5)
>>> indices
tensor([162, 166, 161, 164, 167])
```

If we check the ImageNet labels for the top index, we can see what the model predicted...

```py
>>> IMAGENET_1k_URL = 'https://storage.googleapis.com/bit_models/ilsvrc2012_wordnet_lemmas.txt'
>>> IMAGENET_1k_LABELS = requests.get(IMAGENET_1k_URL).text.strip().split('\n')
>>> [{'label': IMAGENET_1k_LABELS[idx], 'value': val.item()} for val, idx in zip(values, indices)]
[{'label': 'beagle', 'value': 0.8486220836639404},
 {'label': 'Walker_hound, Walker_foxhound', 'value': 0.03753996267914772},
 {'label': 'basset, basset_hound', 'value': 0.024628572165966034},
 {'label': 'bluetick', 'value': 0.010317106731235981},
 {'label': 'English_foxhound', 'value': 0.006958036217838526}]
```
@ -1,9 +0,0 @@
# Data

[[autodoc]] timm.data.create_dataset

[[autodoc]] timm.data.create_loader

[[autodoc]] timm.data.create_transform

[[autodoc]] timm.data.resolve_data_config
@ -1,5 +0,0 @@
# Models

[[autodoc]] timm.create_model

[[autodoc]] timm.list_models
@ -1,27 +0,0 @@
# Optimization

This page contains the API reference documentation for optimizers included in `timm`.

## Optimizers

### Factory functions

[[autodoc]] timm.optim.optim_factory.create_optimizer
[[autodoc]] timm.optim.optim_factory.create_optimizer_v2

### Optimizer Classes

[[autodoc]] timm.optim.adabelief.AdaBelief
[[autodoc]] timm.optim.adafactor.Adafactor
[[autodoc]] timm.optim.adahessian.Adahessian
[[autodoc]] timm.optim.adamp.AdamP
[[autodoc]] timm.optim.adamw.AdamW
[[autodoc]] timm.optim.lamb.Lamb
[[autodoc]] timm.optim.lars.Lars
[[autodoc]] timm.optim.lookahead.Lookahead
[[autodoc]] timm.optim.madgrad.MADGRAD
[[autodoc]] timm.optim.nadam.Nadam
[[autodoc]] timm.optim.nvnovograd.NvNovoGrad
[[autodoc]] timm.optim.radam.RAdam
[[autodoc]] timm.optim.rmsprop_tf.RMSpropTF
[[autodoc]] timm.optim.sgdp.SGDP
@ -1,19 +0,0 @@
# Learning Rate Schedulers

This page contains the API reference documentation for learning rate schedulers included in `timm`.

## Schedulers

### Factory functions

[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler
[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler_v2

### Scheduler Classes

[[autodoc]] timm.scheduler.cosine_lr.CosineLRScheduler
[[autodoc]] timm.scheduler.multistep_lr.MultiStepLRScheduler
[[autodoc]] timm.scheduler.plateau_lr.PlateauLRScheduler
[[autodoc]] timm.scheduler.poly_lr.PolyLRScheduler
[[autodoc]] timm.scheduler.step_lr.StepLRScheduler
[[autodoc]] timm.scheduler.tanh_lr.TanhLRScheduler
@ -0,0 +1,35 @@
# Scripts

Train, validation, inference, and checkpoint cleaning scripts are included in the GitHub root folder. Scripts are not currently packaged in the pip release.

The training and validation scripts evolved from early versions of the [PyTorch ImageNet Examples](https://github.com/pytorch/examples). I have added significant functionality over time, including CUDA-specific performance enhancements based on
[NVIDIA's APEX Examples](https://github.com/NVIDIA/apex/tree/master/examples).

## Training Script

The variety of training args is large and not all combinations of options (or even individual options) have been fully tested. For the training dataset folder, specify the base folder that contains `train` and `validation` subfolders, laid out as shown below.
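For reference, a typical ImageFolder-style layout (the class and file names here are placeholders):

```
/data/imagenet/
├── train/
│   ├── n01440764/        # one subfolder per class
│   │   ├── img_001.JPEG
│   │   └── ...
│   └── .../
└── validation/
    ├── n01440764/
    └── .../
```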

To train an SE-ResNet34 on ImageNet, locally distributed, 4 GPUs, one process per GPU w/ cosine schedule, random-erasing prob of 50% and per-pixel random value:

```bash
./distributed_train.sh 4 /data/imagenet --model seresnet34 --sched cosine --epochs 150 --warmup-epochs 5 --lr 0.4 --reprob 0.5 --remode pixel --batch-size 256 --amp -j 4
```

<Tip>
It is recommended to use PyTorch 1.9+ w/ PyTorch native AMP and DDP instead of APEX AMP. `--amp` defaults to native AMP as of timm version 0.4.3. `--apex-amp` will force use of APEX components if they are installed.
</Tip>

## Validation / Inference Scripts

Validation and inference scripts are similar in usage. One outputs metrics on a validation set and the other outputs top-k class ids in a csv. Specify the folder containing validation images, not the base folder as in the training script.

To validate with the model's pretrained weights (if they exist):

```bash
python validate.py /imagenet/validation/ --model seresnext26_32x4d --pretrained
```

To run inference from a checkpoint:

```bash
python inference.py /imagenet/validation/ --model mobilenetv3_large_100 --checkpoint ./output/train/model_best.pth.tar
```
@ -1,3 +1,4 @@
dependencies = ['torch']
import timm
globals().update(timm.models._registry._model_entrypoints)
from timm.models import registry

globals().update(registry._model_entrypoints)
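For context, the entrypoint registration above is what `torch.hub` consumes; a hedged usage sketch (it assumes the default branch of the hub repo carries this `hubconf.py`):

```py
import torch

# Resolves 'mobilenetv3_large_100' via the entrypoints exposed in hubconf.py above.
model = torch.hub.load('rwightman/pytorch-image-models', 'mobilenetv3_large_100', pretrained=True)
```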
@ -1,5 +1,4 @@
mkdocs
mkdocs-material
mkdocs-redirects
mdx_truly_sane_lists
mkdocs-awesome-pages-plugin
mkdocs-awesome-pages-plugin
@ -0,0 +1,2 @@
model-index==0.1.10
jinja2==2.11.3
@ -1,4 +1,4 @@
from .version import __version__
from .layers import is_scriptable, is_exportable, set_scriptable, set_exportable
from .models import create_model, list_models, list_pretrained, is_model, list_modules, model_entrypoint, \
    is_model_pretrained, get_pretrained_cfg, get_pretrained_cfg_value
from .models import create_model, list_models, is_model, list_modules, model_entrypoint, \
    is_scriptable, is_exportable, set_scriptable, set_exportable, has_pretrained_cfg_key, is_pretrained_cfg_key, \
    get_pretrained_cfg_value, is_model_pretrained
@ -1,15 +1,13 @@
from .auto_augment import RandAugment, AutoAugment, rand_augment_ops, auto_augment_policy,\
    rand_augment_transform, auto_augment_transform
from .config import resolve_data_config, resolve_model_data_config
from .config import resolve_data_config
from .constants import *
from .dataset import ImageDataset, IterableImageDataset, AugMixDataset
from .dataset_factory import create_dataset
from .dataset_info import DatasetInfo, CustomDatasetInfo
from .imagenet_info import ImageNetInfo, infer_imagenet_subset
from .loader import create_loader
from .mixup import Mixup, FastCollateMixup
from .readers import create_reader
from .readers import get_img_extensions, is_img_extension, set_img_extensions, add_img_extensions, del_img_extensions
from .parsers import create_parser,\
    get_img_extensions, is_img_extension, set_img_extensions, add_img_extensions, del_img_extensions
from .real_labels import RealLabelsImagenet
from .transforms import *
from .transforms_factory import create_transform
@ -1,73 +0,0 @@
from abc import ABC, abstractmethod
from typing import Dict, List, Optional, Union


class DatasetInfo(ABC):

    def __init__(self):
        pass

    @abstractmethod
    def num_classes(self):
        pass

    @abstractmethod
    def label_names(self):
        pass

    @abstractmethod
    def label_descriptions(self, detailed: bool = False, as_dict: bool = False) -> Union[List[str], Dict[str, str]]:
        pass

    @abstractmethod
    def index_to_label_name(self, index) -> str:
        pass

    @abstractmethod
    def index_to_description(self, index: int, detailed: bool = False) -> str:
        pass

    @abstractmethod
    def label_name_to_description(self, label: str, detailed: bool = False) -> str:
        pass


class CustomDatasetInfo(DatasetInfo):
    """ DatasetInfo that wraps passed values for custom datasets."""

    def __init__(
            self,
            label_names: Union[List[str], Dict[int, str]],
            label_descriptions: Optional[Dict[str, str]] = None
    ):
        super().__init__()
        assert len(label_names) > 0
        self._label_names = label_names  # label index => label name mapping
        self._label_descriptions = label_descriptions  # label name => label description mapping
        if self._label_descriptions is not None:
            # validate descriptions (label names required)
            assert isinstance(self._label_descriptions, dict)
            for n in self._label_names:
                assert n in self._label_descriptions

    def num_classes(self):
        return len(self._label_names)

    def label_names(self):
        return self._label_names

    def label_descriptions(self, detailed: bool = False, as_dict: bool = False) -> Union[List[str], Dict[str, str]]:
        return self._label_descriptions

    def label_name_to_description(self, label: str, detailed: bool = False) -> str:
        if self._label_descriptions:
            return self._label_descriptions[label]
        return label  # return label name itself if a description is not present

    def index_to_label_name(self, index) -> str:
        assert 0 <= index < len(self._label_names)
        return self._label_names[index]

    def index_to_description(self, index: int, detailed: bool = False) -> str:
        label = self.index_to_label_name(index)
        return self.label_name_to_description(label, detailed=detailed)
@ -1,92 +0,0 @@
import csv
import os
import pkgutil
import re
from typing import Dict, List, Optional, Union

from .dataset_info import DatasetInfo


_NUM_CLASSES_TO_SUBSET = {
    1000: 'imagenet-1k',
    11821: 'imagenet-12k',
    21841: 'imagenet-22k',
    21843: 'imagenet-21k-goog',
    11221: 'imagenet-21k-miil',
}

_SUBSETS = {
    'imagenet1k': 'imagenet_synsets.txt',
    'imagenet12k': 'imagenet12k_synsets.txt',
    'imagenet22k': 'imagenet22k_synsets.txt',
    'imagenet21k': 'imagenet21k_goog_synsets.txt',
    'imagenet21kgoog': 'imagenet21k_goog_synsets.txt',
    'imagenet21kmiil': 'imagenet21k_miil_synsets.txt',
}
_LEMMA_FILE = 'imagenet_synset_to_lemma.txt'
_DEFINITION_FILE = 'imagenet_synset_to_definition.txt'


def infer_imagenet_subset(model_or_cfg) -> Optional[str]:
    if isinstance(model_or_cfg, dict):
        num_classes = model_or_cfg.get('num_classes', None)
    else:
        num_classes = getattr(model_or_cfg, 'num_classes', None)
        if not num_classes:
            pretrained_cfg = getattr(model_or_cfg, 'pretrained_cfg', {})
            # FIXME at some point pretrained_cfg should include dataset-tag,
            # which will be more robust than a guess based on num_classes
            num_classes = pretrained_cfg.get('num_classes', None)
    if not num_classes or num_classes not in _NUM_CLASSES_TO_SUBSET:
        return None
    return _NUM_CLASSES_TO_SUBSET[num_classes]


class ImageNetInfo(DatasetInfo):

    def __init__(self, subset: str = 'imagenet-1k'):
        super().__init__()
        subset = re.sub(r'[-_\s]', '', subset.lower())
        assert subset in _SUBSETS, f'Unknown imagenet subset {subset}.'

        # WordNet synsets (part-of-speech + offset) are the unique class label names for ImageNet classifiers
        synset_file = _SUBSETS[subset]
        synset_data = pkgutil.get_data(__name__, os.path.join('_info', synset_file))
        self._synsets = synset_data.decode('utf-8').splitlines()

        # WordNet lemmas (canonical dictionary form of word) and definitions are used to build
        # the class descriptions. If detailed=True both are used, otherwise just the lemmas.
        lemma_data = pkgutil.get_data(__name__, os.path.join('_info', _LEMMA_FILE))
        reader = csv.reader(lemma_data.decode('utf-8').splitlines(), delimiter='\t')
        self._lemmas = dict(reader)
        definition_data = pkgutil.get_data(__name__, os.path.join('_info', _DEFINITION_FILE))
        reader = csv.reader(definition_data.decode('utf-8').splitlines(), delimiter='\t')
        self._definitions = dict(reader)

    def num_classes(self):
        return len(self._synsets)

    def label_names(self):
        return self._synsets

    def label_descriptions(self, detailed: bool = False, as_dict: bool = False) -> Union[List[str], Dict[str, str]]:
        if as_dict:
            return {label: self.label_name_to_description(label, detailed=detailed) for label in self._synsets}
        else:
            return [self.label_name_to_description(label, detailed=detailed) for label in self._synsets]

    def index_to_label_name(self, index) -> str:
        assert 0 <= index < len(self._synsets), \
            f'Index ({index}) out of range for dataset with {len(self._synsets)} classes.'
        return self._synsets[index]

    def index_to_description(self, index: int, detailed: bool = False) -> str:
        label = self.index_to_label_name(index)
        return self.label_name_to_description(label, detailed=detailed)

    def label_name_to_description(self, label: str, detailed: bool = False) -> str:
        if detailed:
            description = f'{self._lemmas[label]}: {self._definitions[label]}'
        else:
            description = f'{self._lemmas[label]}'
        return description
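A short usage sketch of the class above, based only on the methods shown (`ImageNetInfo` is exported from `timm.data` per the `__init__` diff earlier; the index is illustrative):

```py
from timm.data import ImageNetInfo

info = ImageNetInfo('imagenet-1k')
info.num_classes()                             # 1000
info.index_to_label_name(162)                  # a WordNet synset id string
info.index_to_description(162, detailed=True)  # lemma plus definition text
```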
@ -0,0 +1,2 @@
from .parser_factory import create_parser
from .img_extensions import *
@ -1,7 +1,7 @@
from abc import abstractmethod


class Reader:
class Parser:
    def __init__(self):
        pass
@ -0,0 +1,28 @@
import os

from .parser_image_folder import ParserImageFolder
from .parser_image_in_tar import ParserImageInTar


def create_parser(name, root, split='train', **kwargs):
    name = name.lower()
    name = name.split('/', 2)
    prefix = ''
    if len(name) > 1:
        prefix = name[0]
    name = name[-1]

    # FIXME improve the selection right now just tfds prefix or fallback path, will need options to
    # explicitly select other options shortly
    if prefix == 'tfds':
        from .parser_tfds import ParserTfds  # defer tensorflow import
        parser = ParserTfds(root, name, split=split, **kwargs)
    else:
        assert os.path.exists(root)
        # default fallback path (backwards compat), use image tar if root is a .tar file, otherwise image folder
        # FIXME support split here, in parser?
        if os.path.isfile(root) and os.path.splitext(root)[1] == '.tar':
            parser = ParserImageInTar(root, **kwargs)
        else:
            parser = ParserImageFolder(root, **kwargs)
    return parser
@ -1,2 +0,0 @@
from .reader_factory import create_reader
from .img_extensions import *
@ -1,35 +0,0 @@
import os

from .reader_image_folder import ReaderImageFolder
from .reader_image_in_tar import ReaderImageInTar


def create_reader(name, root, split='train', **kwargs):
    name = name.lower()
    name = name.split('/', 1)
    prefix = ''
    if len(name) > 1:
        prefix = name[0]
    name = name[-1]

    # FIXME improve the selection right now just tfds prefix or fallback path, will need options to
    # explicitly select other options shortly
    if prefix == 'hfds':
        from .reader_hfds import ReaderHfds  # defer huggingface datasets import
        reader = ReaderHfds(root, name, split=split, **kwargs)
    elif prefix == 'tfds':
        from .reader_tfds import ReaderTfds  # defer tensorflow import
        reader = ReaderTfds(root, name, split=split, **kwargs)
    elif prefix == 'wds':
        from .reader_wds import ReaderWds
        kwargs.pop('download', False)
        reader = ReaderWds(root, name, split=split, **kwargs)
    else:
        assert os.path.exists(root)
        # default fallback path (backwards compat), use image tar if root is a .tar file, otherwise image folder
        # FIXME support split here or in reader?
        if os.path.isfile(root) and os.path.splitext(root)[1] == '.tar':
            reader = ReaderImageInTar(root, **kwargs)
        else:
            reader = ReaderImageFolder(root, **kwargs)
    return reader
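The prefix routing above means the dataset name itself selects the backend; a hedged usage sketch (dataset names and paths below are placeholders, `create_reader` is re-exported from `timm.data` per the `__init__` diff earlier):

```py
from timm.data import create_reader

# 'hfds/' prefix routes to ReaderHfds; the rest is the dataset name/path.
reader = create_reader('hfds/imagenet-1k', root='./hf_cache', split='train')

# No prefix and a plain directory falls back to the ImageFolder reader.
folder_reader = create_reader('', root='/data/imagenet/train')
```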
@ -1,80 +0,0 @@
""" Dataset reader that wraps Hugging Face datasets

Hacked together by / Copyright 2022 Ross Wightman
"""
import io
import math

import torch
import torch.distributed as dist
from PIL import Image

try:
    import datasets
except ImportError as e:
    print("Please install Hugging Face datasets package `pip install datasets`.")
    exit(1)
from .class_map import load_class_map
from .reader import Reader


def get_class_labels(info, label_key='label'):
    if 'label' not in info.features:
        return {}
    class_label = info.features[label_key]
    class_to_idx = {n: class_label.str2int(n) for n in class_label.names}
    return class_to_idx


class ReaderHfds(Reader):

    def __init__(
            self,
            root,
            name,
            split='train',
            class_map=None,
            label_key='label',
            download=False,
    ):
        super().__init__()
        self.root = root
        self.split = split
        self.dataset = datasets.load_dataset(
            name,  # 'name' maps to path arg in hf datasets
            split=split,
            cache_dir=self.root,  # timm doesn't expect hidden cache dir for datasets, specify a path
        )
        # leave decode for caller, plus we want easy access to original path names...
        self.dataset = self.dataset.cast_column('image', datasets.Image(decode=False))

        self.label_key = label_key
        self.remap_class = False
        if class_map:
            self.class_to_idx = load_class_map(class_map)
            self.remap_class = True
        else:
            self.class_to_idx = get_class_labels(self.dataset.info, self.label_key)
        self.split_info = self.dataset.info.splits[split]
        self.num_samples = self.split_info.num_examples

    def __getitem__(self, index):
        item = self.dataset[index]
        image = item['image']
        if 'bytes' in image and image['bytes']:
            image = io.BytesIO(image['bytes'])
        else:
            assert 'path' in image and image['path']
            image = open(image['path'], 'rb')
        label = item[self.label_key]
        if self.remap_class:
            label = self.class_to_idx[label]
        return image, label

    def __len__(self):
        return len(self.dataset)

    def _filename(self, index, basename=False, absolute=False):
        item = self.dataset[index]
        return item['image']['path']
@ -1,461 +0,0 @@
""" Dataset reader for webdataset

Hacked together by / Copyright 2022 Ross Wightman
"""
import io
import json
import logging
import math
import os
import random
import sys
from dataclasses import dataclass
from functools import partial
from itertools import islice
from typing import Any, Callable, Dict, List, Optional, Tuple

import torch
import torch.distributed as dist
import yaml
from PIL import Image
from torch.utils.data import Dataset, IterableDataset, get_worker_info

try:
    import webdataset as wds
    from webdataset.filters import _shuffle
    from webdataset.shardlists import expand_urls
    from webdataset.tariterators import base_plus_ext, url_opener, tar_file_expander, valid_sample
except ImportError:
    wds = None
    expand_urls = None

from .reader import Reader
from .shared_count import SharedCount

_logger = logging.getLogger(__name__)

SHUFFLE_SIZE = int(os.environ.get('WDS_SHUFFLE_SIZE', 8192))


def _load_info(root, basename='info'):
    info_json = os.path.join(root, basename + '.json')
    info_yaml = os.path.join(root, basename + '.yaml')
    err_str = ''
    try:
        with wds.gopen.gopen(info_json) as f:
            info_dict = json.load(f)
        return info_dict
    except Exception as e:
        err_str = str(e)
    try:
        with wds.gopen.gopen(info_yaml) as f:
            info_dict = yaml.safe_load(f)
        return info_dict
    except Exception:
        pass
    _logger.warning(
        f'Dataset info file not found at {info_json} or {info_yaml}. Error: {err_str}. '
        'Falling back to provided split and size arg.')
    return {}


@dataclass
class SplitInfo:
    num_samples: int
    filenames: Tuple[str]
    shard_lengths: Tuple[int] = ()
    alt_label: str = ''
    name: str = ''


def _parse_split_info(split: str, info: Dict):
    def _info_convert(dict_info):
        return SplitInfo(
            num_samples=dict_info['num_samples'],
            filenames=tuple(dict_info['filenames']),
            shard_lengths=tuple(dict_info['shard_lengths']),
            alt_label=dict_info.get('alt_label', ''),
            name=dict_info['name'],
        )

    if 'tar' in split or '..' in split:
        # split in WDS string braceexpand format, sample count can be included with a | separator
        # ex: `dataset-split-{0000..9999}.tar|100000` for 10,000 shards, covering 100,000 samples
        split = split.split('|')
        num_samples = 0
        split_name = ''
        if len(split) > 1:
            num_samples = int(split[1])
        split = split[0]
        if '::' not in split:
            split_parts = split.split('-', 3)
            split_idx = len(split_parts) - 1
            if split_idx and 'splits' in info and split_parts[split_idx] in info['splits']:
                split_name = split_parts[split_idx]

        split_filenames = expand_urls(split)
        if split_name:
            split_info = info['splits'][split_name]
            if not num_samples:
                _fc = {f: c for f, c in zip(split_info['filenames'], split_info['shard_lengths'])}
                num_samples = sum(_fc[f] for f in split_filenames)
                split_info['filenames'] = tuple(_fc.keys())
                split_info['shard_lengths'] = tuple(_fc.values())
            split_info['num_samples'] = num_samples
            split_info = _info_convert(split_info)
        else:
            split_info = SplitInfo(
                name=split_name,
                num_samples=num_samples,
                filenames=split_filenames,
            )
    else:
        if split not in info['splits']:
            raise RuntimeError(f"split {split} not found in info ({info['splits'].keys()})")
        split_info = info['splits'][split]
        split_info = _info_convert(split_info)

    return split_info


def log_and_continue(exn):
    """Call in an exception handler to ignore any exception, issue a warning, and continue."""
    _logger.warning(f'Handling webdataset error ({repr(exn)}). Ignoring.')
    return True


def _decode(
        sample,
        image_key='jpg',
        image_format='RGB',
        target_key='cls',
        alt_label=''
):
    """ Custom sample decode
    * decode and convert PIL Image
    * cls byte string label to int
    * pass through JSON byte string (if it exists) without parse
    """
    # decode class label, skip if alternate label not valid
    if alt_label:
        # alternative labels are encoded in json metadata
        meta = json.loads(sample['json'])
        class_label = int(meta[alt_label])
        if class_label < 0:
            # skipped labels currently encoded as -1, may change to a null/None value
            return None
    else:
        class_label = int(sample[target_key])

    # decode image
    with io.BytesIO(sample[image_key]) as b:
        img = Image.open(b)
        img.load()
    if image_format:
        img = img.convert(image_format)

    # json passed through in undecoded state
    decoded = dict(jpg=img, cls=class_label, json=sample.get('json', None))
    return decoded


def _decode_samples(
        data,
        image_key='jpg',
        image_format='RGB',
        target_key='cls',
        alt_label='',
        handler=log_and_continue):
    """Decode samples with skip."""
    for sample in data:
        try:
            result = _decode(
                sample,
                image_key=image_key,
                image_format=image_format,
                target_key=target_key,
                alt_label=alt_label
            )
        except Exception as exn:
            if handler(exn):
                continue
            else:
                break

        # null results are skipped
        if result is not None:
            if isinstance(sample, dict) and isinstance(result, dict):
                result["__key__"] = sample.get("__key__")
            yield result


def pytorch_worker_seed():
    """get dataloader worker seed from pytorch"""
    worker_info = get_worker_info()
    if worker_info is not None:
        # favour the seed already created for pytorch dataloader workers if it exists
        return worker_info.seed
    # fallback to wds rank based seed
    return wds.utils.pytorch_worker_seed()


if wds is not None:
    # conditional to avoid mandatory wds import (via inheritance of wds.PipelineStage)
    class detshuffle2(wds.PipelineStage):
        def __init__(
                self,
                bufsize=1000,
                initial=100,
                seed=0,
                epoch=-1,
        ):
            self.bufsize = bufsize
            self.initial = initial
            self.seed = seed
            self.epoch = epoch

        def run(self, src):
            if isinstance(self.epoch, SharedCount):
                epoch = self.epoch.value
            else:
                # NOTE: this epoch tracking is problematic in a multiprocess (dataloader workers or train)
                # situation as different workers may wrap at different times (or not at all).
                self.epoch += 1
                epoch = self.epoch

            if self.seed < 0:
                seed = pytorch_worker_seed() + epoch
            else:
                seed = self.seed + epoch
            # _logger.info(f'shuffle seed: {self.seed}, {seed}, epoch: {epoch}')  # FIXME temporary
            rng = random.Random(seed)
            return _shuffle(src, self.bufsize, self.initial, rng)

else:
    detshuffle2 = None


class ResampledShards2(IterableDataset):
    """An iterable dataset yielding a list of urls."""

    def __init__(
            self,
            urls,
            nshards=sys.maxsize,
            worker_seed=None,
            deterministic=True,
            epoch=-1,
    ):
        """Sample shards from the shard list with replacement.

        :param urls: a list of URLs as a Python list or brace notation string
        """
        super().__init__()
        urls = wds.shardlists.expand_urls(urls)
        self.urls = urls
        assert isinstance(self.urls[0], str)
        self.nshards = nshards
        self.rng = random.Random()
        self.worker_seed = pytorch_worker_seed if worker_seed is None else worker_seed
        self.deterministic = deterministic
        self.epoch = epoch

    def __iter__(self):
        """Return an iterator over the shards."""
        if isinstance(self.epoch, SharedCount):
            epoch = self.epoch.value
        else:
            # NOTE: this epoch tracking is problematic in a multiprocess (dataloader workers or train)
            # situation as different workers may wrap at different times (or not at all).
            self.epoch += 1
            epoch = self.epoch

        if self.deterministic:
            # reset seed w/ epoch if deterministic, worker seed should be deterministic due to arg.seed
            self.rng = random.Random(self.worker_seed() + epoch)

        for _ in range(self.nshards):
            index = self.rng.randint(0, len(self.urls) - 1)
            yield dict(url=self.urls[index])


class ReaderWds(Reader):
    def __init__(
            self,
            root,
            name,
            split,
            is_training=False,
            batch_size=None,
            repeats=0,
            seed=42,
            input_name='jpg',
            input_image='RGB',
            target_name='cls',
            target_image='',
            prefetch_size=None,
            shuffle_size=None,
    ):
        super().__init__()
        if wds is None:
            raise RuntimeError(
                'Please install webdataset 0.2.x package `pip install git+https://github.com/webdataset/webdataset`.')
        self.root = root
        self.is_training = is_training
        self.batch_size = batch_size
        self.repeats = repeats
        self.common_seed = seed  # a seed that's fixed across all worker / distributed instances
        self.shard_shuffle_size = 500
        self.sample_shuffle_size = shuffle_size or SHUFFLE_SIZE

        self.image_key = input_name
        self.image_format = input_image
        self.target_key = target_name
        self.filename_key = 'filename'
        self.key_ext = '.JPEG'  # extension to add to key for original filenames (DS specific, default ImageNet)

        self.info = _load_info(self.root)
        self.split_info = _parse_split_info(split, self.info)
        self.num_samples = self.split_info.num_samples
        if not self.num_samples:
            raise RuntimeError('Invalid split definition, no samples found.')

        # Distributed world state
        self.dist_rank = 0
        self.dist_num_replicas = 1
        if dist.is_available() and dist.is_initialized() and dist.get_world_size() > 1:
            self.dist_rank = dist.get_rank()
            self.dist_num_replicas = dist.get_world_size()

        # Attributes that are updated in _lazy_init
        self.worker_info = None
        self.worker_id = 0
        self.worker_seed = seed  # seed unique to each worker instance
        self.num_workers = 1
        self.global_worker_id = 0
        self.global_num_workers = 1
        self.init_count = 0
        self.epoch_count = SharedCount()

        # DataPipeline is lazy init, majority of WDS DataPipeline could be init here, BUT, shuffle seed
        # is not handled in manner where it can be deterministic for each worker AND initialized up front
        self.ds = None

    def set_epoch(self, count):
        self.epoch_count.value = count

    def set_loader_cfg(
            self,
            num_workers: Optional[int] = None,
    ):
        if self.ds is not None:
            return
        if num_workers is not None:
            self.num_workers = num_workers
            self.global_num_workers = self.dist_num_replicas * self.num_workers

    def _lazy_init(self):
        """ Lazily initialize worker (in worker processes)
        """
        if self.worker_info is None:
            worker_info = torch.utils.data.get_worker_info()
            if worker_info is not None:
                self.worker_info = worker_info
                self.worker_id = worker_info.id
                self.worker_seed = worker_info.seed
                self.num_workers = worker_info.num_workers
            self.global_num_workers = self.dist_num_replicas * self.num_workers
            self.global_worker_id = self.dist_rank * self.num_workers + self.worker_id

        # init data pipeline
        abs_shard_filenames = [os.path.join(self.root, f) for f in self.split_info.filenames]
        pipeline = [wds.SimpleShardList(abs_shard_filenames)]
        # at this point we have an iterator over all the shards
        if self.is_training:
            pipeline.extend([
                detshuffle2(self.shard_shuffle_size, seed=self.common_seed, epoch=self.epoch_count),
                self._split_by_node_and_worker,
                # at this point, we have an iterator over the shards assigned to each worker
                wds.tarfile_to_samples(handler=log_and_continue),
                wds.shuffle(
                    self.sample_shuffle_size,
                    rng=random.Random(self.worker_seed)),  # this is why we lazy-init whole DataPipeline
            ])
        else:
            pipeline.extend([
                self._split_by_node_and_worker,
                # at this point, we have an iterator over the shards assigned to each worker
                wds.tarfile_to_samples(handler=log_and_continue),
            ])
        pipeline.extend([
            partial(
                _decode_samples,
                image_key=self.image_key,
                image_format=self.image_format,
                alt_label=self.split_info.alt_label
            )
        ])
        self.ds = wds.DataPipeline(*pipeline)

    def _split_by_node_and_worker(self, src):
        if self.global_num_workers > 1:
            for s in islice(src, self.global_worker_id, None, self.global_num_workers):
                yield s
        else:
            for s in src:
                yield s

    def _num_samples_per_worker(self):
        num_worker_samples = self.num_samples / max(self.global_num_workers, self.dist_num_replicas)
        if self.is_training or self.dist_num_replicas > 1:
            num_worker_samples = math.ceil(num_worker_samples)
        if self.is_training and self.batch_size is not None:
            num_worker_samples = math.ceil(num_worker_samples / self.batch_size) * self.batch_size
        return int(num_worker_samples)

    def __iter__(self):
        if self.ds is None:
            self._lazy_init()

        num_worker_samples = self._num_samples_per_worker()
        if self.is_training or self.dist_num_replicas > 1:
            # NOTE: doing distributed validation w/ WDS is messy, hard to meet constraints that
            # same # of batches needed across all replicas w/ seeing each sample once.
            # with_epoch() is simple but could miss a shard's worth of samples in some workers,
            # and duplicate in others. Best to keep num DL workers low and a divisor of #val shards.
            ds = self.ds.with_epoch(num_worker_samples)
        else:
            ds = self.ds

        i = 0
        # _logger.info(f'start {i}, {self.worker_id}')  # FIXME temporary debug
        for sample in ds:
            yield sample[self.image_key], sample[self.target_key]
            i += 1
        # _logger.info(f'end {i}, {self.worker_id}')  # FIXME temporary debug

    def __len__(self):
        num_samples = self._num_samples_per_worker() * self.num_workers
        return num_samples

    def _filename(self, index, basename=False, absolute=False):
        assert False, "Not supported"  # no random access to examples

    def filenames(self, basename=False, absolute=False):
        """ Return all filenames in dataset, overrides base"""
        if self.ds is None:
            self._lazy_init()

        names = []
        for sample in self.ds:
            if self.filename_key in sample:
                name = sample[self.filename_key]
            elif '__key__' in sample:
                name = sample['__key__'] + self.key_ext
            else:
                assert False, "No supported name field present"
            names.append(name)
            if len(names) >= self.num_samples:
                break  # safety for ds.repeat() case
        return names
@ -1,14 +0,0 @@
from multiprocessing import Value


class SharedCount:
    def __init__(self, epoch: int = 0):
        self.shared_epoch = Value('i', epoch)

    @property
    def value(self):
        return self.shared_epoch.value

    @value.setter
    def value(self, epoch):
        self.shared_epoch.value = epoch
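A small illustration of why this wrapper exists: the underlying `multiprocessing.Value` is shared between the parent process and forked dataloader workers, so the loader can bump the epoch and the workers observe it (a minimal sketch, continuing from the class above):

```py
count = SharedCount()
count.value = 3          # e.g. set by the loader at the start of epoch 3
assert count.value == 3  # the same value is visible in forked worker processes
```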
@ -1,50 +0,0 @@
from .activations import *
from .adaptive_avgmax_pool import \
    adaptive_avgmax_pool2d, select_adaptive_pool2d, AdaptiveAvgMaxPool2d, SelectAdaptivePool2d
from .attention_pool2d import AttentionPool2d, RotAttentionPool2d, RotaryEmbedding
from .blur_pool import BlurPool2d
from .classifier import ClassifierHead, create_classifier, NormMlpClassifierHead
from .cond_conv2d import CondConv2d, get_condconv_initializer
from .config import is_exportable, is_scriptable, is_no_jit, set_exportable, set_scriptable, set_no_jit,\
    set_layer_config
from .conv2d_same import Conv2dSame, conv2d_same
from .conv_bn_act import ConvNormAct, ConvNormActAa, ConvBnAct
from .create_act import create_act_layer, get_act_layer, get_act_fn
from .create_attn import get_attn, create_attn
from .create_conv2d import create_conv2d
from .create_norm import get_norm_layer, create_norm_layer
from .create_norm_act import get_norm_act_layer, create_norm_act_layer
from .drop import DropBlock2d, DropPath, drop_block_2d, drop_path
from .eca import EcaModule, CecaModule, EfficientChannelAttn, CircularEfficientChannelAttn
from .evo_norm import EvoNorm2dB0, EvoNorm2dB1, EvoNorm2dB2,\
    EvoNorm2dS0, EvoNorm2dS0a, EvoNorm2dS1, EvoNorm2dS1a, EvoNorm2dS2, EvoNorm2dS2a
from .fast_norm import is_fast_norm, set_fast_norm, fast_group_norm, fast_layer_norm
from .filter_response_norm import FilterResponseNormTlu2d, FilterResponseNormAct2d
from .gather_excite import GatherExcite
from .global_context import GlobalContext
from .helpers import to_ntuple, to_2tuple, to_3tuple, to_4tuple, make_divisible, extend_tuple
from .inplace_abn import InplaceAbn
from .linear import Linear
from .mixed_conv2d import MixedConv2d
from .mlp import Mlp, GluMlp, GatedMlp, ConvMlp, GlobalResponseNormMlp
from .non_local_attn import NonLocalAttn, BatNonLocalAttn
from .norm import GroupNorm, GroupNorm1, LayerNorm, LayerNorm2d, RmsNorm
from .norm_act import BatchNormAct2d, GroupNormAct, GroupNorm1Act, LayerNormAct, LayerNormAct2d,\
    SyncBatchNormAct, convert_sync_batchnorm, FrozenBatchNormAct2d, freeze_batch_norm_2d, unfreeze_batch_norm_2d
from .padding import get_padding, get_same_padding, pad_same
from .patch_embed import PatchEmbed, resample_patch_embed
from .pool2d_same import AvgPool2dSame, create_pool2d
from .pos_embed import resample_abs_pos_embed
from .pos_embed_rel import RelPosMlp, RelPosBias, RelPosBiasTf, gen_relative_position_index, gen_relative_log_coords
from .pos_embed_sincos import build_sincos2d_pos_embed, build_fourier_pos_embed, build_rotary_pos_embed, \
    FourierEmbed, RotaryEmbedding
from .squeeze_excite import SEModule, SqueezeExcite, EffectiveSEModule, EffectiveSqueezeExcite
from .selective_kernel import SelectiveKernel
from .separable_conv import SeparableConv2d, SeparableConvNormAct
from .space_to_depth import SpaceToDepthModule
from .split_attn import SplitAttn
from .split_batchnorm import SplitBatchNorm2d, convert_splitbn_model
from .std_conv import StdConv2d, StdConv2dSame, ScaledStdConv2d, ScaledStdConv2dSame
from .test_time_pool import TestTimePoolHead, apply_test_time_pool
from .trace_utils import _assert, _float_to_int
from .weight_init import trunc_normal_, trunc_normal_tf_, variance_scaling_, lecun_normal_
@ -1,161 +0,0 @@
""" Classifier head and layer factory

Hacked together by / Copyright 2020 Ross Wightman
"""
from collections import OrderedDict
from functools import partial
from typing import Optional, Union, Callable

import torch
import torch.nn as nn
from torch.nn import functional as F

from .adaptive_avgmax_pool import SelectAdaptivePool2d
from .create_act import get_act_layer
from .create_norm import get_norm_layer


def _create_pool(num_features, num_classes, pool_type='avg', use_conv=False):
    flatten_in_pool = not use_conv  # flatten when we use a Linear layer after pooling
    if not pool_type:
        assert num_classes == 0 or use_conv,\
            'Pooling can only be disabled if classifier is also removed or conv classifier is used'
        flatten_in_pool = False  # disable flattening if pooling is pass-through (no pooling)
    global_pool = SelectAdaptivePool2d(pool_type=pool_type, flatten=flatten_in_pool)
    num_pooled_features = num_features * global_pool.feat_mult()
    return global_pool, num_pooled_features


def _create_fc(num_features, num_classes, use_conv=False):
    if num_classes <= 0:
        fc = nn.Identity()  # pass-through (no classifier)
    elif use_conv:
        fc = nn.Conv2d(num_features, num_classes, 1, bias=True)
    else:
        fc = nn.Linear(num_features, num_classes, bias=True)
    return fc


def create_classifier(num_features, num_classes, pool_type='avg', use_conv=False):
    global_pool, num_pooled_features = _create_pool(num_features, num_classes, pool_type, use_conv=use_conv)
    fc = _create_fc(num_pooled_features, num_classes, use_conv=use_conv)
    return global_pool, fc


class ClassifierHead(nn.Module):
    """Classifier head w/ configurable global pooling and dropout."""

    def __init__(
            self,
            in_features: int,
            num_classes: int,
            pool_type: str = 'avg',
            drop_rate: float = 0.,
            use_conv: bool = False,
    ):
        """
        Args:
            in_features: The number of input features.
            num_classes: The number of classes for the final classifier layer (output).
            pool_type: Global pooling type, pooling disabled if empty string ('').
            drop_rate: Pre-classifier dropout rate.
        """
        super(ClassifierHead, self).__init__()
        self.drop_rate = drop_rate
        self.in_features = in_features
        self.use_conv = use_conv

        self.global_pool, num_pooled_features = _create_pool(in_features, num_classes, pool_type, use_conv=use_conv)
        self.fc = _create_fc(num_pooled_features, num_classes, use_conv=use_conv)
        self.flatten = nn.Flatten(1) if use_conv and pool_type else nn.Identity()

    def reset(self, num_classes, global_pool=None):
        if global_pool is not None:
            if global_pool != self.global_pool.pool_type:
                self.global_pool, _ = _create_pool(self.in_features, num_classes, global_pool, use_conv=self.use_conv)
            self.flatten = nn.Flatten(1) if self.use_conv and global_pool else nn.Identity()
        num_pooled_features = self.in_features * self.global_pool.feat_mult()
        self.fc = _create_fc(num_pooled_features, num_classes, use_conv=self.use_conv)

    def forward(self, x, pre_logits: bool = False):
        x = self.global_pool(x)
        if self.drop_rate:
            x = F.dropout(x, p=float(self.drop_rate), training=self.training)
        if pre_logits:
            return x.flatten(1)
        else:
            x = self.fc(x)
            return self.flatten(x)


class NormMlpClassifierHead(nn.Module):

    def __init__(
            self,
            in_features: int,
            num_classes: int,
            hidden_size: Optional[int] = None,
            pool_type: str = 'avg',
            drop_rate: float = 0.,
            norm_layer: Union[str, Callable] = 'layernorm2d',
            act_layer: Union[str, Callable] = 'tanh',
    ):
        """
        Args:
            in_features: The number of input features.
            num_classes: The number of classes for the final classifier layer (output).
            hidden_size: The hidden size of the MLP (pre-logits FC layer) if not None.
            pool_type: Global pooling type, pooling disabled if empty string ('').
            drop_rate: Pre-classifier dropout rate.
            norm_layer: Normalization layer type.
            act_layer: MLP activation layer type (only used if hidden_size is not None).
        """
        super().__init__()
        self.drop_rate = drop_rate
        self.in_features = in_features
        self.hidden_size = hidden_size
        self.num_features = in_features
        self.use_conv = not pool_type
        norm_layer = get_norm_layer(norm_layer)
        act_layer = get_act_layer(act_layer)
        linear_layer = partial(nn.Conv2d, kernel_size=1) if self.use_conv else nn.Linear

        self.global_pool = SelectAdaptivePool2d(pool_type=pool_type)
        self.norm = norm_layer(in_features)
        self.flatten = nn.Flatten(1) if pool_type else nn.Identity()
        if hidden_size:
            self.pre_logits = nn.Sequential(OrderedDict([
                ('fc', linear_layer(in_features, hidden_size)),
                ('act', act_layer()),
            ]))
            self.num_features = hidden_size
        else:
            self.pre_logits = nn.Identity()
        self.drop = nn.Dropout(self.drop_rate)
        self.fc = linear_layer(self.num_features, num_classes) if num_classes > 0 else nn.Identity()

    def reset(self, num_classes, global_pool=None):
        if global_pool is not None:
            self.global_pool = SelectAdaptivePool2d(pool_type=global_pool)
            self.flatten = nn.Flatten(1) if global_pool else nn.Identity()
            self.use_conv = self.global_pool.is_identity()
        linear_layer = partial(nn.Conv2d, kernel_size=1) if self.use_conv else nn.Linear
        if self.hidden_size:
            if ((isinstance(self.pre_logits.fc, nn.Conv2d) and not self.use_conv) or
                    (isinstance(self.pre_logits.fc, nn.Linear) and self.use_conv)):
                with torch.no_grad():
                    new_fc = linear_layer(self.in_features, self.hidden_size)
                    new_fc.weight.copy_(self.pre_logits.fc.weight.reshape(new_fc.weight.shape))
                    new_fc.bias.copy_(self.pre_logits.fc.bias)
                    self.pre_logits.fc = new_fc
        self.fc = linear_layer(self.num_features, num_classes) if num_classes > 0 else nn.Identity()

    def forward(self, x, pre_logits: bool = False):
        x = self.global_pool(x)
        x = self.norm(x)
        x = self.flatten(x)
        x = self.pre_logits(x)
        if pre_logits:
            return x
        x = self.fc(x)
        return x
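To make the head's behavior concrete, a small usage sketch based only on the code above (the shapes are illustrative):

```py
import torch

head = ClassifierHead(in_features=2048, num_classes=10, pool_type='avg')
x = torch.randn(2, 2048, 7, 7)    # NCHW feature map from a backbone
logits = head(x)                  # shape (2, 10): pooled, then nn.Linear
feats = head(x, pre_logits=True)  # shape (2, 2048): pooled features, classifier skipped
head.reset(num_classes=5)         # swap in a fresh 5-class classifier layer
```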
@ -1,39 +0,0 @@
""" Global Response Normalization Module

Based on the GRN layer presented in
`ConvNeXt-V2 - Co-designing and Scaling ConvNets with Masked Autoencoders` - https://arxiv.org/abs/2301.00808

This implementation
* works for both NCHW and NHWC tensor layouts
* uses affine param names matching existing torch norm layers
* slightly improves eager mode performance via fused addcmul

Hacked together by / Copyright 2023 Ross Wightman
"""

import torch
from torch import nn as nn


class GlobalResponseNorm(nn.Module):
    """ Global Response Normalization layer
    """
    def __init__(self, dim, eps=1e-6, channels_last=True):
        super().__init__()
        self.eps = eps
        if channels_last:
            self.spatial_dim = (1, 2)
            self.channel_dim = -1
            self.wb_shape = (1, 1, 1, -1)
        else:
            self.spatial_dim = (2, 3)
            self.channel_dim = 1
            self.wb_shape = (1, -1, 1, 1)

        self.weight = nn.Parameter(torch.zeros(dim))
        self.bias = nn.Parameter(torch.zeros(dim))

    def forward(self, x):
        x_g = x.norm(p=2, dim=self.spatial_dim, keepdim=True)
        x_n = x_g / (x_g.mean(dim=self.channel_dim, keepdim=True) + self.eps)
        return x + torch.addcmul(self.bias.view(self.wb_shape), self.weight.view(self.wb_shape), x * x_n)
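A quick usage sketch for both layouts; note that the affine parameters are zero-initialized, so at init the layer acts as an identity (the residual `x + 0 * (...)` passes the input through unchanged):

```py
import torch

x = torch.randn(2, 7, 7, 64)            # NHWC layout
grn = GlobalResponseNorm(64, channels_last=True)
y = grn(x)                              # same shape as input
assert torch.allclose(y, x)             # zero-init weight/bias => identity at init

grn_nchw = GlobalResponseNorm(64, channels_last=False)
z = grn_nchw(torch.randn(2, 64, 7, 7))  # NCHW layout also supported
```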
@ -1,184 +0,0 @@
|
||||
""" Image to Patch Embedding using Conv2d
|
||||
|
||||
A convolution based approach to patchifying a 2D image w/ embedding projection.
|
||||
|
||||
Based on code in:
|
||||
* https://github.com/google-research/vision_transformer
|
||||
* https://github.com/google-research/big_vision/tree/main/big_vision
|
||||
|
||||
Hacked together by / Copyright 2020 Ross Wightman
|
||||
"""
|
||||
import logging
|
||||
from typing import List
|
||||
|
||||
import torch
|
||||
from torch import nn as nn
|
||||
import torch.nn.functional as F
|
||||
|
||||
from .helpers import to_2tuple
|
||||
from .trace_utils import _assert
|
||||
|
||||
_logger = logging.getLogger(__name__)
|
||||
|
||||
|
||||
class PatchEmbed(nn.Module):
|
||||
""" 2D Image to Patch Embedding
|
||||
"""
|
||||
def __init__(
|
||||
self,
|
||||
img_size=224,
|
||||
patch_size=16,
|
||||
in_chans=3,
|
||||
embed_dim=768,
|
||||
norm_layer=None,
|
||||
flatten=True,
|
||||
bias=True,
|
||||
):
|
||||
super().__init__()
|
||||
img_size = to_2tuple(img_size)
|
||||
patch_size = to_2tuple(patch_size)
|
||||
self.img_size = img_size
|
||||
self.patch_size = patch_size
|
||||
self.grid_size = (img_size[0] // patch_size[0], img_size[1] // patch_size[1])
|
||||
self.num_patches = self.grid_size[0] * self.grid_size[1]
|
||||
self.flatten = flatten
|
||||
|
||||
self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=patch_size, stride=patch_size, bias=bias)
|
||||
self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()
|
||||
|
||||
def forward(self, x):
|
||||
B, C, H, W = x.shape
|
||||
_assert(H == self.img_size[0], f"Input image height ({H}) doesn't match model ({self.img_size[0]}).")
|
||||
_assert(W == self.img_size[1], f"Input image width ({W}) doesn't match model ({self.img_size[1]}).")
|
||||
x = self.proj(x)
|
||||
if self.flatten:
|
||||
x = x.flatten(2).transpose(1, 2) # BCHW -> BNC
|
||||
x = self.norm(x)
|
||||
return x
|
||||
|
||||
|
||||
def resample_patch_embed(
        patch_embed,
        new_size: List[int],
        interpolation: str = 'bicubic',
        antialias: bool = True,
        verbose: bool = False,
):
    """Resample the weights of the patch embedding kernel to target resolution.

    We resample the patch embedding kernel by approximately inverting the effect
    of patch resizing.

    Code based on:
      https://github.com/google-research/big_vision/blob/b00544b81f8694488d5f36295aeb7972f3755ffe/big_vision/models/proj/flexi/vit.py

    With this resizing, we can for example load a B/8 filter into a B/16 model
    and, on 2x larger input image, the result will match.

    Args:
        patch_embed: original parameter to be resized.
        new_size (tuple(int, int)): target shape (height, width) only.
        interpolation (str): interpolation for resize
        antialias (bool): use anti-aliasing filter in resize
        verbose (bool): log operation
    Returns:
        Resized patch embedding kernel.
    """
    import numpy as np
    try:
        import functorch
        vmap = functorch.vmap
    except ImportError:
        if hasattr(torch, 'vmap'):
            vmap = torch.vmap
        else:
            assert False, "functorch or a version of torch with vmap is required for FlexiViT resizing."

    assert len(patch_embed.shape) == 4, "Four dimensions expected"
    assert len(new_size) == 2, "New shape should only be hw"
    old_size = patch_embed.shape[-2:]
    if tuple(old_size) == tuple(new_size):
        return patch_embed

    if verbose:
        _logger.info(f"Resize patch embedding {patch_embed.shape} to {new_size}, w/ {interpolation} interpolation.")

    def resize(x_np, _new_size):
        x_tf = torch.Tensor(x_np)[None, None, ...]
        x_upsampled = F.interpolate(
            x_tf, size=_new_size, mode=interpolation, antialias=antialias)[0, 0, ...].numpy()
        return x_upsampled

    def get_resize_mat(_old_size, _new_size):
        mat = []
        for i in range(np.prod(_old_size)):
            basis_vec = np.zeros(_old_size)
            basis_vec[np.unravel_index(i, _old_size)] = 1.
            mat.append(resize(basis_vec, _new_size).reshape(-1))
        return np.stack(mat).T

    resize_mat = get_resize_mat(old_size, new_size)
    resize_mat_pinv = torch.Tensor(np.linalg.pinv(resize_mat.T))

    def resample_kernel(kernel):
        resampled_kernel = resize_mat_pinv @ kernel.reshape(-1)
        return resampled_kernel.reshape(new_size)

    v_resample_kernel = vmap(vmap(resample_kernel, 0, 0), 1, 1)
    return v_resample_kernel(patch_embed)
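A hedged sketch of resizing a patch projection kernel with the pseudo-inverse resize above (shapes are illustrative; `resample_patch_embed` is the function defined in this file, and functorch or `torch.vmap` must be available as the fallback logic requires):

```
import torch

# (embed_dim, in_chans, kH, kW) as stored in PatchEmbed.proj.weight
kernel = torch.randn(768, 3, 16, 16)
resized = resample_patch_embed(kernel, new_size=[8, 8], interpolation='bicubic')
print(resized.shape)  # torch.Size([768, 3, 8, 8])
```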
# def divs(n, m=None):
#     m = m or n // 2
#     if m == 1:
#         return [1]
#     if n % m == 0:
#         return [m] + divs(n, m - 1)
#     return divs(n, m - 1)
#
#
# class FlexiPatchEmbed(nn.Module):
#     """ 2D Image to Patch Embedding w/ Flexible Patch sizes (FlexiViT)
#     FIXME WIP
#     """
#     def __init__(
#             self,
#             img_size=240,
#             patch_size=16,
#             in_chans=3,
#             embed_dim=768,
#             base_img_size=240,
#             base_patch_size=32,
#             norm_layer=None,
#             flatten=True,
#             bias=True,
#     ):
#         super().__init__()
#         self.img_size = to_2tuple(img_size)
#         self.patch_size = to_2tuple(patch_size)
#         self.num_patches = 0
#
#         # full range for 240 = (5, 6, 8, 10, 12, 14, 15, 16, 20, 24, 30, 40, 48)
#         self.seqhw = (6, 8, 10, 12, 14, 15, 16, 20, 24, 30)
#
#         self.base_img_size = to_2tuple(base_img_size)
#         self.base_patch_size = to_2tuple(base_patch_size)
#         self.base_grid_size = tuple([i // p for i, p in zip(self.base_img_size, self.base_patch_size)])
#         self.base_num_patches = self.base_grid_size[0] * self.base_grid_size[1]
#
#         self.flatten = flatten
#         self.proj = nn.Conv2d(in_chans, embed_dim, kernel_size=self.patch_size, stride=self.patch_size, bias=bias)
#         self.norm = norm_layer(embed_dim) if norm_layer else nn.Identity()
#
#     def forward(self, x):
#         B, C, H, W = x.shape
#
#         if self.patch_size == self.base_patch_size:
#             weight = self.proj.weight
#         else:
#             weight = resample_patch_embed(self.proj.weight, self.patch_size)
#         patch_size = self.patch_size
#         x = F.conv2d(x, weight, bias=self.proj.bias, stride=patch_size)
#         if self.flatten:
#             x = x.flatten(2).transpose(1, 2)  # BCHW -> BNC
#         x = self.norm(x)
#         return x
@ -1,52 +0,0 @@
""" Position Embedding Utilities

Hacked together by / Copyright 2022 Ross Wightman
"""
import logging
import math
from typing import List, Tuple, Optional, Union

import torch
import torch.nn.functional as F

from .helpers import to_2tuple

_logger = logging.getLogger(__name__)


def resample_abs_pos_embed(
        posemb,
        new_size: List[int],
        old_size: Optional[List[int]] = None,
        num_prefix_tokens: int = 1,
        interpolation: str = 'bicubic',
        antialias: bool = True,
        verbose: bool = False,
):
    # sort out sizes, assume square if old size not provided
    new_size = to_2tuple(new_size)
    new_ntok = new_size[0] * new_size[1]
    if not old_size:
        old_size = int(math.sqrt(posemb.shape[1] - num_prefix_tokens))
    old_size = to_2tuple(old_size)
    if new_size == old_size:  # might not both be same container type
        return posemb

    if num_prefix_tokens:
        posemb_prefix, posemb = posemb[:, :num_prefix_tokens], posemb[:, num_prefix_tokens:]
    else:
        posemb_prefix, posemb = None, posemb

    # do the interpolation
    posemb = posemb.reshape(1, old_size[0], old_size[1], -1).permute(0, 3, 1, 2)
    posemb = F.interpolate(posemb, size=new_size, mode=interpolation, antialias=antialias)
    posemb = posemb.permute(0, 2, 3, 1).reshape(1, new_ntok, -1)

    if verbose:
        _logger.info(f'Resized position embedding: {old_size} to {new_size}.')

    # add back extra (class, etc) prefix tokens
    if posemb_prefix is not None:
        posemb = torch.cat([posemb_prefix, posemb], dim=1)
    return posemb
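A hedged usage sketch for `resample_abs_pos_embed` as defined above: interpolating a 14x14 ViT position grid with one class-token prefix to 16x16 (shapes illustrative):

```
import torch

posemb = torch.randn(1, 1 + 14 * 14, 768)  # prefix token + 196 patch positions
resized = resample_abs_pos_embed(posemb, new_size=[16, 16], num_prefix_tokens=1)
print(resized.shape)  # torch.Size([1, 257, 768]) -> 1 prefix + 16 * 16 positions
```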
@ -1,270 +0,0 @@
""" Relative position embedding modules and functions

Hacked together by / Copyright 2022 Ross Wightman
"""
import math
from typing import Optional, Tuple

import torch
import torch.nn as nn
import torch.nn.functional as F

from .mlp import Mlp
from .weight_init import trunc_normal_


def gen_relative_position_index(
        q_size: Tuple[int, int],
        k_size: Optional[Tuple[int, int]] = None,
        class_token: bool = False) -> torch.Tensor:
    # Adapted with significant modifications from Swin / BeiT codebases
    # get pair-wise relative position index for each token inside the window
    q_coords = torch.stack(torch.meshgrid([torch.arange(q_size[0]), torch.arange(q_size[1])])).flatten(1)  # 2, Wh, Ww
    if k_size is None:
        k_coords = q_coords
        k_size = q_size
    else:
        # different q vs k sizes is a WIP
        k_coords = torch.stack(torch.meshgrid([torch.arange(k_size[0]), torch.arange(k_size[1])])).flatten(1)
    relative_coords = q_coords[:, :, None] - k_coords[:, None, :]  # 2, Wh*Ww, Wh*Ww
    relative_coords = relative_coords.permute(1, 2, 0)  # Wh*Ww, Wh*Ww, 2
    _, relative_position_index = torch.unique(relative_coords.view(-1, 2), return_inverse=True, dim=0)

    if class_token:
        # handle cls to token & token to cls & cls to cls as per beit for rel pos bias
        # NOTE not intended or tested with MLP log-coords
        max_size = (max(q_size[0], k_size[0]), max(q_size[1], k_size[1]))
        num_relative_distance = (2 * max_size[0] - 1) * (2 * max_size[1] - 1) + 3
        relative_position_index = F.pad(relative_position_index, [1, 0, 1, 0])
        relative_position_index[0, 0:] = num_relative_distance - 3
        relative_position_index[0:, 0] = num_relative_distance - 2
        relative_position_index[0, 0] = num_relative_distance - 1

    return relative_position_index.contiguous()
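A small hedged check of `gen_relative_position_index` for a 2x2 window, run with this file's definitions in scope: 4 tokens yield one index per (query, key) pair, drawn from the set of unique relative offsets:

```
import torch

idx = gen_relative_position_index((2, 2))
print(idx.numel())         # 16 -> one flat index per (query, key) token pair
print(int(idx.max()) + 1)  # 9 == (2*2 - 1) * (2*2 - 1) unique relative offsets
```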
class RelPosBias(nn.Module):
    """ Relative Position Bias
    Adapted from Swin-V1 relative position bias impl, modularized.
    """

    def __init__(self, window_size, num_heads, prefix_tokens=0):
        super().__init__()
        assert prefix_tokens <= 1
        self.window_size = window_size
        self.window_area = window_size[0] * window_size[1]
        self.bias_shape = (self.window_area + prefix_tokens,) * 2 + (num_heads,)

        num_relative_distance = (2 * window_size[0] - 1) * (2 * window_size[1] - 1) + 3 * prefix_tokens
        self.relative_position_bias_table = nn.Parameter(torch.zeros(num_relative_distance, num_heads))
        self.register_buffer(
            "relative_position_index",
            gen_relative_position_index(self.window_size, class_token=prefix_tokens > 0),
            persistent=False,
        )

        self.init_weights()

    def init_weights(self):
        trunc_normal_(self.relative_position_bias_table, std=.02)

    def get_bias(self) -> torch.Tensor:
        relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)]
        # win_h * win_w, win_h * win_w, num_heads
        relative_position_bias = relative_position_bias.view(self.bias_shape).permute(2, 0, 1)
        return relative_position_bias.unsqueeze(0).contiguous()

    def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
        return attn + self.get_bias()
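A hedged sketch of adding the learned relative position bias to attention logits for an 8x8 window with 4 heads and no prefix tokens (shapes illustrative, names as defined above):

```
import torch

rel_pos = RelPosBias(window_size=(8, 8), num_heads=4)
attn = torch.randn(2, 4, 64, 64)  # (B, num_heads, Q, K) with Q = K = 8 * 8
attn = rel_pos(attn)              # a (1, 4, 64, 64) bias is broadcast-added
```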
def gen_relative_log_coords(
        win_size: Tuple[int, int],
        pretrained_win_size: Tuple[int, int] = (0, 0),
        mode='swin',
):
    assert mode in ('swin', 'cr')
    # as per official swin-v2 impl, supporting timm specific 'cr' log coords as well
    relative_coords_h = torch.arange(-(win_size[0] - 1), win_size[0], dtype=torch.float32)
    relative_coords_w = torch.arange(-(win_size[1] - 1), win_size[1], dtype=torch.float32)
    relative_coords_table = torch.stack(torch.meshgrid([relative_coords_h, relative_coords_w]))
    relative_coords_table = relative_coords_table.permute(1, 2, 0).contiguous()  # 2*Wh-1, 2*Ww-1, 2
    if mode == 'swin':
        if pretrained_win_size[0] > 0:
            relative_coords_table[:, :, 0] /= (pretrained_win_size[0] - 1)
            relative_coords_table[:, :, 1] /= (pretrained_win_size[1] - 1)
        else:
            relative_coords_table[:, :, 0] /= (win_size[0] - 1)
            relative_coords_table[:, :, 1] /= (win_size[1] - 1)
        relative_coords_table *= 8  # normalize to -8, 8
        relative_coords_table = torch.sign(relative_coords_table) * torch.log2(
            1.0 + relative_coords_table.abs()) / math.log2(8)
    else:
        # mode == 'cr'
        relative_coords_table = torch.sign(relative_coords_table) * torch.log(
            1.0 + relative_coords_table.abs())

    return relative_coords_table
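A hedged sketch of the 'swin' log-coordinate output for a 7x7 window, run with this file's definitions in scope; the max value follows from the sign-preserving log2 compression of the [-8, 8] range:

```
coords = gen_relative_log_coords((7, 7), mode='swin')
print(coords.shape)               # torch.Size([13, 13, 2]) == (2*7 - 1, 2*7 - 1, 2)
print(float(coords.abs().max()))  # ~1.057 == log2(1 + 8) / log2(8)
```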
class RelPosMlp(nn.Module):
    """ Log-Coordinate Relative Position MLP
    Based on ideas presented in Swin-V2 paper (https://arxiv.org/abs/2111.09883)

    This impl covers the 'swin' implementation as well as the timm specific 'cr' mode
    """
    def __init__(
            self,
            window_size,
            num_heads=8,
            hidden_dim=128,
            prefix_tokens=0,
            mode='cr',
            pretrained_window_size=(0, 0)
    ):
        super().__init__()
        self.window_size = window_size
        self.window_area = self.window_size[0] * self.window_size[1]
        self.prefix_tokens = prefix_tokens
        self.num_heads = num_heads
        self.bias_shape = (self.window_area,) * 2 + (num_heads,)
        if mode == 'swin':
            self.bias_act = nn.Sigmoid()
            self.bias_gain = 16
            mlp_bias = (True, False)
        else:
            self.bias_act = nn.Identity()
            self.bias_gain = None
            mlp_bias = True

        self.mlp = Mlp(
            2,  # x, y
            hidden_features=hidden_dim,
            out_features=num_heads,
            act_layer=nn.ReLU,
            bias=mlp_bias,
            drop=(0.125, 0.)
        )

        self.register_buffer(
            "relative_position_index",
            gen_relative_position_index(window_size),
            persistent=False)

        # get relative_coords_table
        self.register_buffer(
            "rel_coords_log",
            gen_relative_log_coords(window_size, pretrained_window_size, mode=mode),
            persistent=False)

    def get_bias(self) -> torch.Tensor:
        relative_position_bias = self.mlp(self.rel_coords_log)
        if self.relative_position_index is not None:
            relative_position_bias = relative_position_bias.view(-1, self.num_heads)[
                self.relative_position_index.view(-1)]  # Wh*Ww,Wh*Ww,nH
            relative_position_bias = relative_position_bias.view(self.bias_shape)
        relative_position_bias = relative_position_bias.permute(2, 0, 1)
        relative_position_bias = self.bias_act(relative_position_bias)
        if self.bias_gain is not None:
            relative_position_bias = self.bias_gain * relative_position_bias
        if self.prefix_tokens:
            relative_position_bias = F.pad(relative_position_bias, [self.prefix_tokens, 0, self.prefix_tokens, 0])
        return relative_position_bias.unsqueeze(0).contiguous()

    def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
        return attn + self.get_bias()
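A hedged sketch of the MLP-generated bias in 'cr' mode for a 4x4 window; the module shares the same additive contract as `RelPosBias` above (shapes illustrative):

```
import torch

rel_mlp = RelPosMlp(window_size=(4, 4), num_heads=2, hidden_dim=32, mode='cr')
attn = torch.randn(1, 2, 16, 16)  # (B, num_heads, Q, K) with Q = K = 4 * 4
attn = rel_mlp(attn)              # MLP-generated bias, same contract as RelPosBias
```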
def generate_lookup_tensor(
        length: int,
        max_relative_position: Optional[int] = None,
):
    """Generate a one_hot lookup tensor to reindex embeddings along one dimension.

    Args:
        length: the length to reindex to.
        max_relative_position: the maximum relative position to consider.
            Relative position embeddings for distances above this threshold
            are zeroed out.
    Returns:
        a lookup Tensor of size [length, length, vocab_size] that satisfies
        ret[n, m, v] = 1{m - n + max_relative_position = v}.
    """
    if max_relative_position is None:
        max_relative_position = length - 1
    vocab_size = 2 * max_relative_position + 1
    ret = torch.zeros(length, length, vocab_size)
    for i in range(length):
        for x in range(length):
            v = x - i + max_relative_position
            if abs(x - i) > max_relative_position:
                continue
            ret[i, x, v] = 1
    return ret
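A small hedged check of the one-hot property stated in the docstring, run with this file's definitions in scope: for `length=4`, `max_relative_position` defaults to 3, so the vocab size is 7 and `ret[n, m, v] = 1` exactly when `v == m - n + 3`:

```
import torch

lut = generate_lookup_tensor(4)
print(lut.shape)                  # torch.Size([4, 4, 7]) -> vocab = 2 * 3 + 1
print(int(lut[1, 3, 3 - 1 + 3]))  # 1, since v == m - n + max_relative_position
```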
def reindex_2d_einsum_lookup(
        relative_position_tensor,
        height: int,
        width: int,
        height_lookup: torch.Tensor,
        width_lookup: torch.Tensor,
) -> torch.Tensor:
    """Reindex 2d relative position bias with 2 independent einsum lookups.

    Adapted from:
     https://github.com/google-research/maxvit/blob/2e06a7f1f70c76e64cd3dabe5cd1b8c1a23c9fb7/maxvit/models/attention_utils.py

    Args:
        relative_position_tensor: tensor of shape
            [..., vocab_height, vocab_width, ...].
        height: height to reindex to.
        width: width to reindex to.
        height_lookup: one-hot height lookup
        width_lookup: one-hot width lookup
    Returns:
        reindexed_tensor: a Tensor of shape
            [..., height * width, height * width, ...]
    """
    reindexed_tensor = torch.einsum('nhw,ixh->nixw', relative_position_tensor, height_lookup)
    reindexed_tensor = torch.einsum('nixw,jyw->nijxy', reindexed_tensor, width_lookup)
    area = height * width
    return reindexed_tensor.reshape(relative_position_tensor.shape[0], area, area)
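A hedged sketch of expanding a per-head (vocab_h, vocab_w) bias table into a (num_heads, H*W, H*W) bias via the two one-hot einsum lookups above (shapes illustrative):

```
import torch

h = w = 4
table = torch.randn(2, 2 * h - 1, 2 * w - 1)  # (num_heads, vocab_h, vocab_w)
bias = reindex_2d_einsum_lookup(
    table, h, w,
    generate_lookup_tensor(h),
    generate_lookup_tensor(w),
)
print(bias.shape)  # torch.Size([2, 16, 16]) -> (num_heads, H*W, H*W)
```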
class RelPosBiasTf(nn.Module):
    """ Relative Position Bias Impl (Compatible with Tensorflow MaxViT models)
    Adapted from:
     https://github.com/google-research/maxvit/blob/2e06a7f1f70c76e64cd3dabe5cd1b8c1a23c9fb7/maxvit/models/attention_utils.py
    """
    def __init__(self, window_size, num_heads, prefix_tokens=0):
        super().__init__()
        assert prefix_tokens <= 1
        self.window_size = window_size
        self.window_area = window_size[0] * window_size[1]
        self.num_heads = num_heads

        vocab_height = 2 * window_size[0] - 1
        vocab_width = 2 * window_size[1] - 1
        self.bias_shape = (self.num_heads, vocab_height, vocab_width)
        self.relative_position_bias_table = nn.Parameter(torch.zeros(self.bias_shape))
        self.register_buffer('height_lookup', generate_lookup_tensor(window_size[0]), persistent=False)
        self.register_buffer('width_lookup', generate_lookup_tensor(window_size[1]), persistent=False)
        self.init_weights()

    def init_weights(self):
        nn.init.normal_(self.relative_position_bias_table, std=.02)

    def get_bias(self) -> torch.Tensor:
        # FIXME change to not use one-hot/einsum?
        return reindex_2d_einsum_lookup(
            self.relative_position_bias_table,
            self.window_size[0],
            self.window_size[1],
            self.height_lookup,
            self.width_lookup
        )

    def forward(self, attn, shared_rel_pos: Optional[torch.Tensor] = None):
        return attn + self.get_bias()
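A hedged usage sketch for the TF/MaxViT-compatible module above; unlike `RelPosBias`, `get_bias` here returns a (num_heads, H*W, H*W) tensor without a leading batch dim, which still broadcasts over the batch axis:

```
import torch

rel_pos_tf = RelPosBiasTf(window_size=(4, 4), num_heads=2)
attn = torch.randn(1, 2, 16, 16)
attn = rel_pos_tf(attn)  # adds a (2, 16, 16) bias, broadcast over the batch dim
```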