Ross Wightman
372ad5fa0d
Significant model refactor and additions:
...
* All models updated with revised forward_features / forward_head interface (usage sketch after this entry)
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
3 years ago
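A minimal usage sketch of the forward_features / forward_head split and the gradient checkpointing toggle described in the commit above, assuming the post-refactor timm interface; the model name is only an example.

```python
import torch
import timm

# Any timm model; a ViT is chosen here as an example of a sequence-output architecture.
model = timm.create_model('vit_base_patch16_224', pretrained=False)
model.set_grad_checkpointing(True)  # enable gradient checkpointing where supported

x = torch.randn(2, 3, 224, 224)
feats = model.forward_features(x)                     # unpooled token sequence for ViT/MLP models
logits = model.forward_head(feats)                    # pooling / token selection + classifier
pooled = model.forward_head(feats, pre_logits=True)   # pooled features without the final linear
```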
Ross Wightman
1420c118df
Missed committing outstanding changes to default_cfg keys and test exclusions for swin v2
3 years ago
Ross Wightman
c6e4b7895a
Swin V2 CR impl refactor.
...
* reformat and change some naming so closer to existing timm vision transformers
* remove typing that wasn't adding clarity (or causing torchscript issues)
* support non-square windows
* auto window size adjust from image size
* post-norm + main-branch no
3 years ago
Christoph Reich
67d140446b
Fix bug in classification head
3 years ago
Christoph Reich
29add820ac
Refactor (back to relative imports)
3 years ago
Christoph Reich
74a04e0016
Add parameter to change normalization type
3 years ago
Christoph Reich
2a4f6c13dd
Create model functions
3 years ago
Christoph Reich
87b4d7a29a
Add get and reset classifier method
3 years ago
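A short sketch of the get_classifier / reset_classifier pattern this commit adds, following the convention used across timm models; resnet50 is used only as a model name I am certain exists, the same methods apply to the Swin V2 CR classes.

```python
import timm

model = timm.create_model('resnet50', pretrained=False)
head = model.get_classifier()            # current classification head module
model.reset_classifier(num_classes=10)   # replace with a fresh 10-class head
model.reset_classifier(num_classes=0)    # drop the head entirely (feature-extraction mode)
```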
Christoph Reich
ff5f6bcd6c
Check input resolution
3 years ago
Christoph Reich
81bf0b4033
Change parameter names to match Swin V1
3 years ago
Christoph Reich
f227b88831
Add initials (CR) to model and file
3 years ago
Christoph Reich
90dc74c450
Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm
3 years ago
Ross Wightman
7c67d6aca9
Update README.md
3 years ago
Ross Wightman
2c3870e107
semobilevit_s for good measure
3 years ago
Ross Wightman
bcaeb91b03
Version to 0.6.0, possible interface incompatibilities vs 0.5.x
3 years ago
Ross Wightman
58ba49c8ef
Add MobileViT models (w/ ByobNet base). Close #1038.
3 years ago
Ross Wightman
fafece230b
Allow changing base lr batch size from 256 via arg
3 years ago
Ross Wightman
7148039f9f
Tweak base lr log
3 years ago
Ross Wightman
f82fb6b608
Add base lr w/ linear and sqrt scaling to train script
3 years ago
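A sketch of the linear vs. sqrt LR scaling behavior the two base-lr commits above describe: the effective LR is derived from a base LR defined at a reference batch size (256 unless overridden). Function and argument names here are illustrative, not the train script's actual flags.

```python
import math

def scale_lr(base_lr: float, batch_size: int, base_batch_size: int = 256,
             scaling: str = 'linear') -> float:
    # Scale the base LR by the batch size ratio, either linearly or by its square root.
    ratio = batch_size / base_batch_size
    if scaling == 'sqrt':
        ratio = math.sqrt(ratio)
    return base_lr * ratio

print(scale_lr(5e-4, 1024))                   # linear: 5e-4 * 4 = 2e-3
print(scale_lr(5e-4, 1024, scaling='sqrt'))   # sqrt:   5e-4 * 2 = 1e-3
```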
Ross Wightman
066e490605
Merge branch 'norm_norm_norm' into bits_and_tpu
3 years ago
Ross Wightman
5f81d4de23
Move DeiT to its own file, vit getting crowded. Working towards fixing #1029, make pooling interface for transformers and MLP closer to convnets. Still working through some details...
3 years ago
ayasyrev
cf57695938
Remove duplicated scheduler noise code
3 years ago
Ross Wightman
95cfc9b3e8
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
abc9ba2544
Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks.
3 years ago
Ross Wightman
07379c6d5d
Add vit_base2_patch32_256 for a model between base_patch16 and patch32 with a slightly larger img size and width
3 years ago
Ross Wightman
cf4334391e
Update benchmark and validate scripts to output results in JSON with a fixed delimiter for use in multi-process launcher
3 years ago
Ross Wightman
1331c145a3
Add train benchmark results, adjust name scheme for inference and train benchmark files.
3 years ago
Ross Wightman
a517bf6a7a
Merge pull request #1105 from kozistr/refactor/remove-condition
...
Remove checking `smoothing` parameter
3 years ago
kozistr
56a6b38f76
refactor: remove if-condition
3 years ago
Ross Wightman
447677616f
version 0.5.5
3 years ago
Ross Wightman
499c4749d7
Add updated NCHW and NHWC inference benchmark numbers for current models. Flip name of 'sam' vit models in results files
3 years ago
Ross Wightman
83b40c5a58
Last batch of small model weights (for now). mobilenetv3_small 050/075/100 and updated mnasnet_small with lambc/lamb optimizer.
3 years ago
Ross Wightman
7f73252716
Merge pull request #1094 from Mi-Peng/lars
...
fix lars
3 years ago
Mi-Peng
cdcd0a92ca
fix lars
3 years ago
Ross Wightman
2d4b7e7080
Update results csvs for latest release
3 years ago
Ross Wightman
1aa617cb3b
Add AvgPool2d anti-aliasing support to ResNet arch (as per OpenAI CLIP models), add a few blur aa models as well
3 years ago
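A rough sketch of the CLIP-style anti-aliased downsampling this commit refers to, where a strided conv is replaced by AvgPool2d followed by a stride-1 conv. This is an illustrative module, not timm's actual ResNet implementation.

```python
import torch
import torch.nn as nn

class AvgPoolDownsample(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int = 2):
        super().__init__()
        self.pool = nn.AvgPool2d(stride)  # low-pass average pool before subsampling
        self.conv = nn.Conv2d(in_ch, out_ch, 3, stride=1, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(self.pool(x))

y = AvgPoolDownsample(64, 128)(torch.randn(1, 64, 56, 56))
print(y.shape)  # torch.Size([1, 128, 28, 28])
```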
Ross Wightman
f0f9eccda8
Add --fuser arg to train/validate/benchmark scripts to select jit fuser type
3 years ago
Ross Wightman
010b486590
Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg)
3 years ago
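A minimal sketch of using a headless model (num_classes=0), which is how classifier-free weights such as the Dino ones above are meant to be consumed; the model name is only an example and pretrained download is skipped here.

```python
import torch
import timm

# num_classes=0 swaps the classifier for an identity, so the forward pass returns pooled features.
model = timm.create_model('vit_small_patch16_224', pretrained=False, num_classes=0)
feats = model(torch.randn(1, 3, 224, 224))
print(feats.shape)  # e.g. torch.Size([1, 384]) for a ViT-S/16
```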
Ross Wightman
738a9cd635
unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090
3 years ago
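A sketch of why unbiased=False matters in the fix above: a LayerNorm computed manually via torch.var_mean only matches nn.LayerNorm when the biased (population) variance is used. Illustrative code, not ConvNeXt's actual LN module.

```python
import torch
import torch.nn.functional as F

def manual_layer_norm(x, weight, bias, eps=1e-6):
    # Biased variance (unbiased=False) matches the definition used by F.layer_norm.
    var, mean = torch.var_mean(x, dim=-1, keepdim=True, unbiased=False)
    return (x - mean) / torch.sqrt(var + eps) * weight + bias

x = torch.randn(2, 8, 64)
w, b = torch.ones(64), torch.zeros(64)
ref = F.layer_norm(x, (64,), w, b, eps=1e-6)
print(torch.allclose(manual_layer_norm(x, w, b), ref, atol=1e-5))  # True
```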
Ross Wightman
e0c4eec4b6
Default conv_mlp to False across the board for ConvNeXt, causing issues on more setups than it's improving right now...
3 years ago
Ross Wightman
b669f4a588
Add ConvNeXt 22k->1k fine-tuned and 384 22k->1k fine-tuned weights after testing
3 years ago
Ross Wightman
6dcbaf211a
Update README.md
3 years ago
Ross Wightman
a8d103e18b
Giant/gigantic vits snuck through in a test and broke GitHub test runner, add filter
3 years ago
Ross Wightman
ef72ad4177
Extra vit_huge model likely to cause test issue (non-in21k variant), adding to filters
3 years ago
Ross Wightman
e967c72875
Update README.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper
3 years ago
Ross Wightman
9ca3437178
Add some more small model weights lcnet, mnas, mnv2
3 years ago
Ross Wightman
fa6463c936
Version 0.5.4
3 years ago
Ross Wightman
fa81164378
Fix stem width for really small mobilenetv3 arch defs
3 years ago
Ross Wightman
edd3d73695
Add missing dropout for head reset in ConvNeXt default head
3 years ago
Ross Wightman
b093dcb46d
Some ConvNeXt cleanup: remove in-place mul_ for gamma (it was breaking symbolic trace), clean up head a bit...
3 years ago
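A sketch of the out-of-place gamma (layer scale) multiply mentioned above: using `x * gamma` instead of `x.mul_(gamma)` keeps the block friendly to torch.fx symbolic tracing. Illustrative module, not the actual ConvNeXt block.

```python
import torch
import torch.nn as nn
from torch.fx import symbolic_trace

class LayerScale(nn.Module):
    def __init__(self, dim: int, init_value: float = 1e-6):
        super().__init__()
        self.gamma = nn.Parameter(init_value * torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.gamma  # out-of-place; x.mul_(self.gamma) would mutate the input in place

traced = symbolic_trace(LayerScale(64))  # traces cleanly
print(traced.code)
```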