Ross Wightman
9a51e4ea2e
Add FlexiViT models and weights, refactoring, push more weights
...
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2 years ago
Ross Wightman
cda39b35bd
Add a deprecation phase to module re-org
2 years ago
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2 years ago
Ross Wightman
4d5c395160
MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
...
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to new pretrained cfgs use
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2 years ago
Ross Wightman
803254bb40
Fix spacing misalignment for fast norm path in LayerNorm modules
2 years ago
Ross Wightman
b293dfa595
Add CL SE module
2 years ago
Ross Wightman
9709dbaaa9
Adding support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP
2 years ago
Ross Wightman
769ab4b98a
Clean up no_grad for trunc normal weight inits
2 years ago
Ross Wightman
48e1df8b37
Add norm/norm_act header comments
2 years ago
Ross Wightman
ffaf97f813
MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies...
2 years ago
Ross Wightman
8c9696c9df
More model and test fixes
2 years ago
Ross Wightman
43aa84e861
Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit)
2 years ago
Ross Wightman
8ad4bdfa06
Allow ntuple to be used with string values
2 years ago
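The ntuple tweak above matters because strings are iterable in Python: a naive "expand scalars to tuples" helper would pass a value like `'same'` through unchanged instead of repeating it n times. A minimal sketch of the idea (an assumption about the shape of the helper, not timm's exact code):

```python
from itertools import repeat
import collections.abc

def _ntuple(n):
    # Return a converter that expands a scalar into an n-tuple.
    # Strings are iterable, so without the explicit str check a
    # value like 'same' would be returned as-is rather than repeated.
    def parse(x):
        if isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
            return tuple(x)
        return tuple(repeat(x, n))
    return parse

to_2tuple = _ntuple(2)
```

With this, `to_2tuple(3)` gives `(3, 3)`, `to_2tuple('same')` gives `('same', 'same')`, and an existing tuple like `(1, 2)` passes through untouched.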
Ross Wightman
7c7ecd2492
Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues).
2 years ago
Ross Wightman
eca09b8642
Add MobileVitV2 support. Fix #1332 . Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference.
2 years ago
Ross Wightman
7a9c6811c9
Add eps arg to LayerNorm2d, add 'tf' (tensorflow) variant of trunc_normal_ that applies scale/shift after sampling (instead of needing to move a/b)
2 years ago
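The 'tf' variant described above changes the order of operations: sample from a *standard* truncated normal (bounds fixed at roughly ±2σ), then scale by std and shift by mean afterwards, so the caller never has to recompute the a/b bounds. A hedged pure-Python sketch of the two orderings, using simple rejection sampling rather than timm's actual implementation (function names here are illustrative):

```python
import random

def _trunc_standard_normal(a=-2.0, b=2.0):
    # Rejection-sample a standard normal restricted to [a, b].
    while True:
        x = random.gauss(0.0, 1.0)
        if a <= x <= b:
            return x

def trunc_normal_tf(n, mean=0.0, std=1.0):
    # TF-style: truncate in standard-normal space, then scale/shift.
    # The effective bounds always land at mean +/- 2*std.
    return [_trunc_standard_normal() * std + mean for _ in range(n)]

def trunc_normal(n, mean=0.0, std=1.0, a=-2.0, b=2.0):
    # Original-style: a/b are absolute bounds in output space, so the
    # caller must move them whenever mean or std change.
    out = []
    while len(out) < n:
        x = random.gauss(mean, std)
        if a <= x <= b:
            out.append(x)
    return out
```

With `mean=0.5, std=0.1`, the tf-style call truncates at `[0.3, 0.7]` automatically, while the original-style call with default `a=-2, b=2` barely truncates at all.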
Ross Wightman
82c311d082
Add more experimental darknet and 'cs2' darknet variants (different cross stage setup, closer to newer YOLO backbones) for train trials.
2 years ago
Ross Wightman
07d0c4ae96
Improve repr for DropPath module
2 years ago
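In PyTorch, improving a module's repr is usually done by overriding `extra_repr`, which `nn.Module` splices into the printed form so the key hyper-parameter shows up when inspecting a model. A minimal sketch of the pattern (a plain-class stand-in, not timm's exact code):

```python
class DropPath:
    """Sketch only; the real module subclasses torch.nn.Module."""
    def __init__(self, drop_prob=0.0):
        self.drop_prob = drop_prob

    def extra_repr(self):
        # Round so the printed module reads cleanly,
        # e.g. DropPath(drop_prob=0.100)
        return f'drop_prob={round(self.drop_prob, 3):0.3f}'

    def __repr__(self):
        # nn.Module builds this automatically from extra_repr();
        # reproduced here so the stand-in prints the same way.
        return f'{type(self).__name__}({self.extra_repr()})'
```

Without an `extra_repr`, the module would print as a bare `DropPath()`, hiding the drop probability when scanning a model summary.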
Ross Wightman
e27c16b8a0
Remove unnecessary code for syncbn guard
2 years ago
Ross Wightman
0da3c9ebbf
Remove SiLU layer in default args that breaks import on old PyTorch
2 years ago
Ross Wightman
879df47c0a
Support BatchNormAct2d for sync-bn use. Fix #1254
2 years ago
Ross Wightman
4b30bae67b
Add updated vit_relpos weights, and impl w/ support for official swin-v2 differences for relpos. Add bias control support for MLP layers
3 years ago
jjsjann123
f88c606fcf
fixing channels_last on cond_conv2d; update nvfuser debug env variable
3 years ago
Ross Wightman
f670d98cb8
Make a few more layers symbolically traceable (remove from FX leaf modules)
...
* remove dtype kwarg from .to() calls in EvoNorm as it messed up script + trace combo
* BatchNormAct2d always uses custom forward (cut & paste from original) instead of super().forward. Fixes #1176
* BlurPool groups==channels, no need to use input.dim[1]
3 years ago
Ross Wightman
b049a5c5c6
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
9440a50c95
Merge branch 'mrT23-master'
3 years ago
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
...
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
3 years ago
Ross Wightman
95cfc9b3e8
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
656757d26b
Fix MobileNetV2 head conv size for multiplier < 1.0. Add some missing modification copyrights, fix starting date of some old ones.
3 years ago
Ross Wightman
b27c21b09a
Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster
3 years ago
Ross Wightman
214c84a235
Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed
3 years ago
Ross Wightman
a52a614475
Remove layer experiment which should not have been added
3 years ago
Ross Wightman
ab49d275de
Significant norm update
...
* ConvBnAct layer renamed -> ConvNormAct and ConvNormActAa for anti-aliased
* Significant update to EfficientNet and MobileNetV3 arch to support NormAct layers and grouped conv (as alternative to depthwise)
* Update RegNet to add Z variant
* Add Pre variant of XceptionAligned that works with NormAct layers
* EvoNorm matches bits_and_tpu branch for merge
3 years ago
Ross Wightman
d04f2f1377
Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster
3 years ago
Ross Wightman
834a9ec721
Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed
3 years ago
Ross Wightman
78912b6375
Updated EvoNorm implementations with some experimentation. Add FilterResponseNorm. Updated RegnetZ and ResNetV2 model defs for trials.
3 years ago
talrid
c11f4c3218
support CNNs
3 years ago
mrT23
d6701d8a81
Merge branch 'rwightman:master' into master
3 years ago
Ross Wightman
480c676ffa
Fix FX breaking assert in evonorm
3 years ago
talrid
41559247e9
use_ml_decoder_head
3 years ago
Ross Wightman
93cc08fdc5
Make evonorm variables 1d to match other PyTorch norm layers, will break weight compat for any existing use (likely minimal, easy to fix).
3 years ago
Ross Wightman
af607b75cc
Prep a set of ResNetV2 models with GroupNorm, EvoNormB0, EvoNormS0 for BN free model experiments on TPU and IPU
3 years ago
Ross Wightman
c976a410d9
Add ResNet-50 w/ GN (resnet50_gn) and SEBotNet-33-TS (sebotnet33ts_256) model defs and weights. Update halonet50ts weights w/ slightly better variant in1k val, more robust to test sets.
3 years ago
Alexander Soare
b25ff96768
wip - pre-rebase
3 years ago
Alexander Soare
e051dce354
Make all models FX traceable
3 years ago
Alexander Soare
0149ec30d7
wip - attempting to rebase
3 years ago
Alexander Soare
bc3d4eb403
wip - rebase
3 years ago
Ross Wightman
2ddef942b9
Better fix for #954 that doesn't break torchscript, pull torch._assert into timm namespace when it exists
3 years ago
Ross Wightman
4f0f9cb348
Fix #954 by bringing traceable _assert into timm to allow compat w/ PyTorch < 1.8
3 years ago
Ross Wightman
b745d30a3e
Fix formatting of last commit
3 years ago