Ross Wightman
769ab4b98a
Clean up no_grad for trunc normal weight inits
2 years ago
Ross Wightman
48e1df8b37
Add norm/norm_act header comments
2 years ago
Ross Wightman
7c2660576d
Tweak init for convnext block using maxxvit/coatnet.
2 years ago
Ross Wightman
1d8d6f6072
Fix two default args in DenseNet blocks... fix #1427
2 years ago
Ross Wightman
527f9a4cb2
Updated to correct maxvit_nano weights...
2 years ago
Ross Wightman
b2e8426fca
Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'.
2 years ago
Ross Wightman
837c68263b
For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode
2 years ago
Ross Wightman
cac0a4570a
More test fixes, pool size for 256x256 maxvit models
2 years ago
Ross Wightman
e939ed19b9
Rename internal creation fn for maxvit, has not been just coatnet for a while...
2 years ago
Ross Wightman
ffaf97f813
MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies.
2 years ago
Ross Wightman
8c9696c9df
More model and test fixes
2 years ago
Ross Wightman
ca52108c2b
Fix some model support functions
2 years ago
Ross Wightman
f332fc2db7
Fix some test failures, torchscript issues
2 years ago
Ross Wightman
6e559e9b5f
Add MViT (Multi-Scale) V2
2 years ago
Ross Wightman
43aa84e861
Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit)
2 years ago
Ross Wightman
c486aa71f8
Add GCViT
2 years ago
Ross Wightman
fba6ecd39b
Add EfficientFormer
2 years ago
Ross Wightman
ff4a38e2c3
Add PyramidVisionTransformerV2
2 years ago
Ross Wightman
1d8ada359a
Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size
2 years ago
Ross Wightman
2544d3b80f
ConvNeXt pico, femto, and nano, plus pico and femto ols (overlapping stem), weights and model defs
2 years ago
Ross Wightman
13565aad50
Add edgenext_base model def & weight link, update to improve ONNX export #1385
2 years ago
Ross Wightman
8ad4bdfa06
Allow ntuple to be used with string values
2 years ago
Christoph Reich
faae93e62d
Fix typo in PositionalEncodingFourier
2 years ago
Ross Wightman
7430a85d07
Update README, bump version to 0.6.8
2 years ago
Ross Wightman
ec6a28830f
Add DeiT-III 'medium' model defs and weights
2 years ago
Ross Wightman
d875a1d3f6
version 0.6.7
2 years ago
Ross Wightman
6f103a442b
Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288
2 years ago
Ross Wightman
4042a94f8f
Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks.
2 years ago
Ross Wightman
c8f69e04a9
Merge pull request #1365 from veritable-tech/fix-resize-pos-embed
...
Take `no_emb_class` into account when calling `resize_pos_embed`
2 years ago
Ceshine Lee
0b64117592
Take `no_emb_class` into account when calling `resize_pos_embed`
2 years ago
Jasha10
56c3a84db3
Update type hint for `register_notrace_module`
...
register_notrace_module is used to decorate types (i.e. subclasses of nn.Module).
It is not called on module instances.
2 years ago
Ross Wightman
1b278136c3
Change models with mean 0,0,0 std 1,1,1 from int to float for consistency as mentioned in #1355
2 years ago
Ross Wightman
909705e7ff
Remove some redundant requires_grad=True from nn.Parameter in third party code
2 years ago
Ross Wightman
c5e0d1c700
Add dilation support to convnext, allows output_stride=8 and 16 use. Fix #1341
2 years ago
Ross Wightman
dc376e3676
Ensure all model entrypoint fn default to `pretrained=False` (a few didn't)
2 years ago
Ross Wightman
23b102064a
Add cs3sedarknet_x weights w/ 82.65 @ 288 top1. Add 2 cs3 edgenet models (w/ 3x3-1x1 block), remove aa from cspnet blocks (not needed)
2 years ago
Ross Wightman
0dbd9352ce
Add bulk_runner script and updates to benchmark.py and validate.py for better error handling in bulk runs (used for benchmark and validation result runs). Improved batch size decay stepping on retry...
2 years ago
Ross Wightman
92b91af3bb
version 0.6.6
2 years ago
Ross Wightman
05313940e2
Add cs3darknet_x, cs3sedarknet_l, and darknetaa53 weights from TPU sessions. Move SE btwn conv1 & conv2 in DarkBlock. Improve SE/attn handling in Csp/DarkNet. Fix leaky_relu bug on older csp models.
2 years ago
nateraw
51cca82aa1
👽 use hf_hub_download instead of cached_download
2 years ago
Ross Wightman
324a4e58b6
disable nvfuser for jit te/legacy modes (for PT 1.12+)
2 years ago
Ross Wightman
2898cf6e41
version 0.6.5 for pypi release
2 years ago
Ross Wightman
a45b4bce9a
x and xx small edgenext models do benefit from larger test input size
2 years ago
Ross Wightman
a8e34051c1
Unbreak gamma remap impacting beit checkpoint load, version bump to 0.6.4
2 years ago
Ross Wightman
1c5cb819f9
bump version to 0.6.3 before merge
2 years ago
Ross Wightman
a1cb25066e
Add edgenext_small_rw weights trained with swin like recipe. Better than original 'small' but not the recent 'USI' distilled weights.
2 years ago
Ross Wightman
7c7ecd2492
Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues).
2 years ago
Ross Wightman
ce65a7b29f
Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log coord experiments.
2 years ago
Ross Wightman
58621723bd
Add CrossStage3 DarkNet (cs3) weights
2 years ago
Ross Wightman
9be0c84715
Change set -> dict w/ None keys for dataset split synonym search, so always consistent if more than 1 exists. Fix #1224
2 years ago
Ross Wightman
db0cee9910
Refactor cspnet configuration using dataclasses, update feature extraction for new cs3 variants.
2 years ago
Ross Wightman
eca09b8642
Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference.
2 years ago
Ross Wightman
06307b8b41
Remove experimental downsample-in-block support in ConvNeXt; experiment further before keeping it in.
2 years ago
Ross Wightman
bfc0dccb0e
Improve image extension handling, add methods to modify / get defaults. Fix #1335 fix #1274 .
2 years ago
Ross Wightman
7d4b3807d5
Support DeiT-3 (Revenge of the ViT) checkpoints. Add non-overlapping (w/ class token) pos-embed support to vit.
2 years ago
Ross Wightman
d0c5bd5722
Rename cs2->cs3 for darknets. Fix features_only for cs3 darknets.
2 years ago
Ross Wightman
d765305821
Remove first_conv for resnetaa50 def
2 years ago
Ross Wightman
dd9b8f57c4
Add feature_info to edgenext for features_only support, hopefully fix some fx / test errors
2 years ago
Ross Wightman
377e9bfa21
Add TPU trained darknet53 weights. Add missing pretrain_cfg for some csp/darknet models.
2 years ago
Ross Wightman
c170ba3173
Add weights for resnet10t, resnet14t, and resnetaa50 models. Fix #1314
2 years ago
Ross Wightman
188c194b0f
Left some experimental stem code in convnext by mistake
2 years ago
Ross Wightman
70d6d2c484
support test_crop_size in data config resolve
2 years ago
Ross Wightman
6064d16a2d
Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320
...
* edgenext refactored for torchscript compat, stage base organization
* slight refactor of ConvNeXt to match some EdgeNeXt additions
* remove use of funky LayerNorm layer in ConvNeXt and just use nn.LayerNorm and LayerNorm2d (permute)
2 years ago
Ross Wightman
7a9c6811c9
Add eps arg to LayerNorm2d, add 'tf' (tensorflow) variant of trunc_normal_ that applies scale/shift after sampling (instead of needing to move a/b)
2 years ago
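The 'tf' variant described in the commit above samples from a standard truncated normal on [a, b] and only then applies scale/shift, so the bounds never need adjusting for mean/std. A minimal pure-Python sketch of that idea via rejection sampling (the function name and signature here are illustrative, not timm's actual API):

```python
import random

def trunc_normal_tf_(values, mean=0.0, std=1.0, a=-2.0, b=2.0):
    # 'tf' style: draw from a *standard* truncated normal on [a, b],
    # then apply scale (std) and shift (mean) afterwards -- a and b
    # always refer to the untransformed distribution, so they never
    # need to be moved to (a - mean) / std etc.
    for i in range(len(values)):
        while True:
            x = random.gauss(0.0, 1.0)
            if a <= x <= b:  # simple rejection sampling
                break
        values[i] = x * std + mean
    return values
```

Every sample is therefore guaranteed to land in [mean + a*std, mean + b*std], which is the practical difference from passing fixed a/b to a scale-first implementation.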
Ross Wightman
82c311d082
Add more experimental darknet and 'cs2' darknet variants (different cross stage setup, closer to newer YOLO backbones) for train trials.
2 years ago
Ross Wightman
a050fde5cd
Add resnet10t (basic block) and resnet14t (bottleneck) with 1,1,1,1 repeats
2 years ago
Ross Wightman
e6d7df40ec
No longer any point in using kwargs for pretrain_cfg resolve; just pass explicit arg
2 years ago
Ross Wightman
07d0c4ae96
Improve repr for DropPath module
2 years ago
Ross Wightman
e27c16b8a0
Remove unnecessary code for sync-bn guard
2 years ago
Ross Wightman
0da3c9ebbf
Remove SiLU layer in default args that breaks import on very old PyTorch
2 years ago
Ross Wightman
7d657d2ef4
Improve resolve_pretrained_cfg behaviour when no cfg exists, warn instead of crash. Improve usability ex #1311
2 years ago
Ross Wightman
879df47c0a
Support BatchNormAct2d for sync-bn use. Fix #1254
2 years ago
Ross Wightman
7cedc8d474
Follow up to #1256 , fix interpolation warning in auto_autoaugment as well
2 years ago
Jakub Kaczmarzyk
db64393c0d
use `Image.Resampling` namespace for PIL mapping ( #1256 )
...
* use `Image.Resampling` namespace for PIL mapping
PIL shows a deprecation warning when accessing resampling constants via the `Image` namespace. The suggested namespace is `Image.Resampling`. This commit updates `_pil_interpolation_to_str` to use the `Image.Resampling` namespace.
```
/tmp/ipykernel_11959/698124036.py:2: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
Image.NEAREST: 'nearest',
/tmp/ipykernel_11959/698124036.py:3: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
Image.BILINEAR: 'bilinear',
/tmp/ipykernel_11959/698124036.py:4: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
Image.BICUBIC: 'bicubic',
/tmp/ipykernel_11959/698124036.py:5: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
Image.BOX: 'box',
/tmp/ipykernel_11959/698124036.py:6: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
Image.HAMMING: 'hamming',
/tmp/ipykernel_11959/698124036.py:7: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
Image.LANCZOS: 'lanczos',
```
* use new pillow resampling enum only if it exists
2 years ago
Ross Wightman
20a1fa63f8
Make dev version 0.6.2.dev0 for pypi pre
3 years ago
Ross Wightman
347308faad
Update README.md, version to 0.6.2
3 years ago
Ross Wightman
4b30bae67b
Add updated vit_relpos weights, and impl w/ support for official swin-v2 differences for relpos. Add bias control support for MLP layers
3 years ago
Ross Wightman
d4c0588012
Remove persistent buffers from Swin-V2. Change SwinV2Cr cos attn + tau/logit_scale to match official, add ckpt convert, init_value zeros resid LN weight by default
3 years ago
Ross Wightman
27c42f0830
Fix torchscript use for official Swin-V2, add support for non-square window/shift to WindowAttn/Block
3 years ago
Ross Wightman
2f2b22d8c7
Disable nvfuser fma / opt level overrides per #1244
3 years ago
Ross Wightman
c0211b0bf7
Swin-V2 test fixes, typo
3 years ago
Ross Wightman
9a86b900fa
Official SwinV2 models
3 years ago
Ross Wightman
d07d015173
Merge pull request #1249 from okojoalg/sequencer
...
Add Sequencer
3 years ago
Ross Wightman
d30685c283
Merge pull request #1251 from hankyul2/fix-multistep-scheduler
...
fix: multistep lr decay epoch bugs
3 years ago
han
a16171335b
fix: change milestones to decay-milestones
...
- change argparser option `milestone` to `decay-milestone`
3 years ago
Ross Wightman
39b725e1c9
Fix tests for rank-4 output where feature channels dim is -1 (3) and not 1
3 years ago
Ross Wightman
78a32655fa
Fix poolformer group_matcher to merge proj downsample with previous block, support coarse
3 years ago
Ross Wightman
d79f3d9d1e
Fix torchscript use for sequencer, add group_matcher, forward_head support, minor formatting
3 years ago
Ross Wightman
37b6920df3
Fix group_matcher regex for regnet.py
3 years ago
okojoalg
93a79a3dd9
Fix num_features in Sequencer
3 years ago
han
57a988df30
fix: multistep lr decay epoch bugs
...
- add milestones arguments
- change decay_epochs to milestones variable
3 years ago
okojoalg
578d52e752
Add Sequencer
3 years ago
Ross Wightman
f5ca4141f7
Adjust arg order for recent vit model args, add a few comments
3 years ago
Ross Wightman
41dc49a337
Vision Transformer refactoring and Rel Pos impl
3 years ago
Ross Wightman
b7cb8d0337
Add Swin-V2 Small-NS weights (83.5 @ 224). Add layer scale like 'init_values' via post-norm LN weight scaling
3 years ago
jjsjann123
f88c606fcf
fixing channels_last on cond_conv2d; update nvfuser debug env variable
3 years ago
Li Dong
09e9f3defb
migrate azure blob for beit checkpoints
...
## Motivation
We are going to use a new blob account to store the checkpoints.
## Modification
Modify the azure blob storage URLs for BEiT checkpoints.
3 years ago
Ross Wightman
52ac881402
Missed first_conv in latest seresnext 'D' default_cfgs
3 years ago
Ross Wightman
7629d8264d
Add two new SE-ResNeXt101-D 32x8d weights, one anti-aliased and one not. Reshuffle default_cfgs vs model entrypoints for resnet.py so they are better aligned.
3 years ago
SeeFun
8f0bc0591e
fix convnext args
3 years ago
Ross Wightman
c5a8e929fb
Add initial swinv2 tiny / small weights
3 years ago
Ross Wightman
f670d98cb8
Make a few more layers symbolically traceable (remove from FX leaf modules)
...
* remove dtype kwarg from .to() calls in EvoNorm as it messed up script + trace combo
* BatchNormAct2d always uses custom forward (cut & paste from original) instead of super().forward. Fixes #1176
* BlurPool groups==channels, no need to use input.dim[1]
3 years ago
SeeFun
ec4e9aa5a0
Add ConvNeXt tiny and small pretrain in22k
...
Add ConvNeXt tiny and small pretrain in22k from ConvNeXt repo:
06f7b05f92
3 years ago
Ross Wightman
575924ed60
Update test crop for new RegNet-V weights to match Y
3 years ago
Ross Wightman
1618527098
Add layer scale and parallel blocks to vision_transformer
3 years ago
Ross Wightman
c42be74621
Add attrib / comments about Swin-S3 (AutoFormerV2) weights
3 years ago
Ross Wightman
474ac906a2
Add 'head norm first' convnext_tiny_hnf weights
3 years ago
Ross Wightman
dc51334cdc
Fix pruned adapt for EfficientNet models that are now using BatchNormAct layers
3 years ago
Ross Wightman
024fc4d9ab
version 0.6.1 for master
3 years ago
Ross Wightman
e1e037ba52
Fix bad tuple typing fix that was on XLA branch but missed on master merge
3 years ago
Ross Wightman
341b464a5a
Remove redundant noise attr from Plateau scheduler (use parent)
3 years ago
Ross Wightman
fe457c1996
Update SwinTransformerV2Cr post-merge, update with grad checkpointing / grad matcher
...
* weight compat break, activate norm3 for final block of final stage (equivalent to pre-head norm, but while still in BLC shape)
* remove fold/unfold for TPU compat, add commented out roll code for TPU
* add option for end of stage norm in all stages
* allow weight_init to be selected between pytorch default inits and xavier / moco style vit variant
3 years ago
Ross Wightman
b049a5c5c6
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
7cdd164d77
Fix #1184 , scheduler noise bug during merge madness
3 years ago
Ross Wightman
9440a50c95
Merge branch 'mrT23-master'
3 years ago
Ross Wightman
d98aa47d12
Revert ml-decoder changes to model factory and train script
3 years ago
Ross Wightman
b20665d379
Merge pull request #1007 from qwertyforce/patch-1
...
update arxiv link
3 years ago
Ross Wightman
7a0994f581
Merge pull request #1150 from ChristophReich1996/master
...
Swin Transformer V2
3 years ago
Ross Wightman
61d3493f87
Fix hf-hub handling when hf-hub is config source
3 years ago
Ross Wightman
5f47518f27
Fix pit implementation to be closer to deit/levit re: distillation head handling
3 years ago
Ross Wightman
0862e6ebae
Fix correctness of some group matching regex (no impact on result), some formatting, missed forward_head for resnet
3 years ago
Ross Wightman
94bcdebd73
Add latest weights trained on TPU-v3 VM instances
3 years ago
Ross Wightman
0557c8257d
Fix bug introduced in non layer_decay weight_decay application. Remove debug print, fix arg desc.
3 years ago
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
...
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
3 years ago
Ross Wightman
1420c118df
Missed committing outstanding changes to default_cfg keys and test exclusions for swin v2
3 years ago
Ross Wightman
c6e4b7895a
Swin V2 CR impl refactor.
...
* reformat and change some naming so closer to existing timm vision transformers
* remove typing that wasn't adding clarity (or causing torchscript issues)
* support non-square windows
* auto window size adjust from image size
* post-norm + main-branch no
3 years ago
Christoph Reich
67d140446b
Fix bug in classification head
3 years ago
Christoph Reich
29add820ac
Refactor (back to relative imports)
3 years ago
Christoph Reich
74a04e0016
Add parameter to change normalization type
3 years ago
Christoph Reich
2a4f6c13dd
Create model functions
3 years ago
Christoph Reich
87b4d7a29a
Add get and reset classifier method
3 years ago
Christoph Reich
ff5f6bcd6c
Check input resolution
3 years ago
Christoph Reich
81bf0b4033
Change parameter names to match Swin V1
3 years ago
Christoph Reich
f227b88831
Add initials (CR) to model and file
3 years ago
Christoph Reich
90dc74c450
Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm
3 years ago
Ross Wightman
2c3870e107
semobilevit_s for good measure
3 years ago
Ross Wightman
bcaeb91b03
Version to 0.6.0, possible interface incompatibilities vs 0.5.x
3 years ago
Ross Wightman
58ba49c8ef
Add MobileViT models (w/ ByobNet base). Close #1038 .
3 years ago
Ross Wightman
5f81d4de23
Move DeiT to own file, vit getting crowded. Working towards fixing #1029 , make pooling interface for transformers and mlp closer to convnets. Still working through some details...
3 years ago
ayasyrev
cf57695938
Remove duplicated sched noise code
3 years ago
Ross Wightman
95cfc9b3e8
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
abc9ba2544
Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks.
3 years ago
Ross Wightman
07379c6d5d
Add vit_base2_patch32_256 for a model between base_patch16 and patch32 with a slightly larger img size and width
3 years ago
Ross Wightman
447677616f
version 0.5.5
3 years ago
Ross Wightman
83b40c5a58
Last batch of small model weights (for now). mobilenetv3_small 050/075/100 and updated mnasnet_small with lambc/lamb optimizer.
3 years ago
Mi-Peng
cdcd0a92ca
fix lars
3 years ago
Ross Wightman
1aa617cb3b
Add AvgPool2d anti-aliasing support to ResNet arch (as per OpenAI CLIP models), add a few blur aa models as well
3 years ago
Ross Wightman
f0f9eccda8
Add --fuser arg to train/validate/benchmark scripts to select jit fuser type
3 years ago
Ross Wightman
010b486590
Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg)
3 years ago
Ross Wightman
738a9cd635
unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090
3 years ago
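The unbiased=False fix above matters because LayerNorm normalizes with the population (biased) variance, dividing by N rather than N-1. A plain-Python sketch of the distinction (illustrative only, not the ConvNeXt implementation):

```python
def layer_norm_1d(x, eps=1e-6):
    """Normalize a 1-D sequence the way LayerNorm does (no affine params)."""
    n = len(x)
    mean = sum(x) / n
    # Biased variance (divide by n), matching torch.var_mean(..., unbiased=False);
    # dividing by n - 1 (unbiased) would give a slightly different scale.
    var = sum((v - mean) ** 2 for v in x) / n
    return [(v - mean) / (var + eps) ** 0.5 for v in x]
```

With the unbiased estimator the output would be scaled by sqrt((n-1)/n) relative to this, which is why mixing the two variance paths produced a mismatch.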