Ross Wightman
b3042081b4
Add laion -> in1k fine-tuned base and large_mlp weights for convnext
2 years ago
Ross Wightman
316bdf8955
Add mlp head support for convnext_large, add laion2b CLIP weights, prep fine-tuned weight tags
2 years ago
Ross Wightman
6f28b562c6
Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments
2 years ago
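Note: a minimal sketch of the kind of norm + MLP classifier head this commit factors out; the real NormMlpClassifierHead lives in timm.layers and differs in details (pooling options, flatten handling, naming):
```python
import torch
import torch.nn as nn

class NormMlpHeadSketch(nn.Module):
    """Hypothetical sketch of a pool -> norm -> MLP classifier head."""
    def __init__(self, in_features, num_classes, hidden_size=None, drop_rate=0.):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.norm = nn.LayerNorm(in_features)
        self.pre_logits = (
            nn.Sequential(nn.Linear(in_features, hidden_size), nn.Tanh())
            if hidden_size else nn.Identity()
        )
        self.drop = nn.Dropout(drop_rate)
        self.fc = nn.Linear(hidden_size or in_features, num_classes)

    def forward(self, x):  # x: (B, C, H, W)
        x = self.pool(x).flatten(1)  # global average pool -> (B, C)
        x = self.norm(x)             # norm applied to pooled features
        x = self.pre_logits(x)       # optional MLP 'pre-logits' layer
        return self.fc(self.drop(x))
```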
Ross Wightman
8ab573cd26
Add convnext_tiny and convnext_small 384x384 fine-tunes of in12k weights, fix pool size for laion CLIP convnext weights
2 years ago
Ross Wightman
e9aac412de
Correct mean/std for CLIP convnexts
2 years ago
Ross Wightman
42bd8f7bcb
Add convnext_base CLIP image tower weights for fine-tuning / features
2 years ago
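Note: a sketch of using the CLIP image tower as a feature extractor; the pretrained tag below is an assumption, so check timm.list_pretrained() for the exact names. resolve_data_config also surfaces the corrected CLIP mean/std from the commit above:
```python
import timm
import torch

# Tag is assumed -- verify with: timm.list_pretrained('convnext_base.clip*')
model = timm.create_model(
    'convnext_base.clip_laion2b_augreg',
    pretrained=True,
    num_classes=0,  # drop the classifier; model outputs pooled image embeddings
)
cfg = timm.data.resolve_data_config({}, model=model)
print(cfg['mean'], cfg['std'])  # CLIP mean/std, not the ImageNet defaults

x = torch.randn(1, 3, *cfg['input_size'][1:])
emb = model(x)  # (1, embed_dim) image features for fine-tuning / retrieval
```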
Ross Wightman
a2c14c2064
Add tiny/small in12k pretrained and fine-tuned ConvNeXt models
2 years ago
Ross Wightman
2e83bba142
Revert head norm changes to ConvNeXt as they broke some downstream uses; alternate workaround for fcmae weights
2 years ago
Ross Wightman
e861b74cf8
Pass through --model-kwargs (and --opt-kwargs for train) from command line through to model __init__. Update some models to improve arg overlay. Cleanup along the way.
2 years ago
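Note: a sketch of what this pass-through enables; the flag names come from the commit itself, while the specific kwargs are illustrative ConvNeXt constructor args:
```python
# CLI usage enabled by this commit (kwarg values illustrative):
#   python train.py /data --model convnext_tiny \
#       --model-kwargs conv_mlp=True ls_init_value=1e-6
#
# which overlays the kwargs onto the model constructor, roughly equivalent to:
import timm

model = timm.create_model(
    'convnext_tiny',
    conv_mlp=True,       # ConvNeXt __init__ arg, now reachable from the CLI
    ls_init_value=1e-6,  # layer scale init value
)
```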
Ross Wightman
6e5553da5f
Add ConvNeXt-V2 support (model additions and weights) (#1614)
...
* Add ConvNeXt-V2 support (model additions and weights)
* ConvNeXt-V2 weights on HF Hub, tweaking some tests
* Update README, fixing convnextv2 tests
2 years ago
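Note: ConvNeXt-V2's main architectural change is Global Response Normalization (GRN) in the block MLP. A minimal sketch following the paper's formulation; timm's implementation in timm.layers may differ in layout and naming:
```python
import torch
import torch.nn as nn

class GRN(nn.Module):
    """Global Response Normalization, per the ConvNeXt-V2 paper (NHWC input)."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.beta = nn.Parameter(torch.zeros(1, 1, 1, dim))
        self.eps = eps

    def forward(self, x):  # x: (B, H, W, C)
        gx = torch.norm(x, p=2, dim=(1, 2), keepdim=True)     # global L2 per channel
        nx = gx / (gx.mean(dim=-1, keepdim=True) + self.eps)  # divisive normalization
        return self.gamma * (x * nx) + self.beta + x          # scale, shift, residual
```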
Ross Wightman
9a51e4ea2e
Add FlexiViT models and weights, refactoring, push more weights
...
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2 years ago
Ross Wightman
6a01101905
Update efficientnet.py and convnext.py to multi-weight, add ImageNet-12k pretrained EfficientNet-B5 and ConvNeXt-Nano.
2 years ago
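Note: with the multi-weight scheme, one architecture carries multiple pretrained weight 'tags'. The tag below is an assumption based on this commit; enumerate the real ones first:
```python
import timm

print(timm.list_pretrained('convnext_nano*'))  # list available weight tags

# 'convnext_nano.in12k' is an assumed tag for the ImageNet-12k weights above
model = timm.create_model('convnext_nano.in12k', pretrained=True)
```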
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2 years ago
Ross Wightman
755570e2d6
Rename _pretrained.py -> pretrained.py; not feasible to change the other files to the same scheme without breaking uses
2 years ago
Ross Wightman
72cfa57761
Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout.
2 years ago
Ross Wightman
4d5c395160
MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
...
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to use the new pretrained cfgs
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2 years ago
Ross Wightman
837c68263b
For ConvNeXt, use timm internal LayerNorm for fast_norm in non-conv_mlp mode
2 years ago
Ross Wightman
1d8ada359a
Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size
2 years ago
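Note: a sketch of the per-stage dw kernel size support; the parameter name kernel_sizes is an assumption here, so check ConvNeXt.__init__ in timm/models/convnext.py for the exact signature:
```python
import timm

model = timm.create_model(
    'convnext_atto',
    kernel_sizes=(3, 5, 7, 7),  # assumed arg: per-stage depthwise kernel sizes
)
```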
Ross Wightman
2544d3b80f
Add ConvNeXt pico and femto weights, plus nano, pico, and femto 'ols' (overlapping stem) weights and model defs
2 years ago
Ross Wightman
6f103a442b
Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288
2 years ago
Ross Wightman
c5e0d1c700
Add dilation support to convnext, allowing output_stride=8 and 16 use. Fix #1341
2 years ago
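Note: a sketch of the reduced output stride this enables, e.g. for dense prediction backbones (model name illustrative):
```python
import timm
import torch

model = timm.create_model(
    'convnext_small',
    features_only=True,  # return intermediate feature maps
    output_stride=8,     # dilate later stages so the final reduction is 8x
)
print(model.feature_info.reduction())  # [4, 8, 8, 8] instead of [4, 8, 16, 32]

feats = model(torch.randn(1, 3, 224, 224))
print([f.shape for f in feats])
```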
Ross Wightman
06307b8b41
Remove experimental downsample-in-block support from ConvNeXt; needs further experimentation before keeping it in.
2 years ago
Ross Wightman
188c194b0f
Left some experimental stem code in convnext by mistake
2 years ago
Ross Wightman
6064d16a2d
Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320
...
* edgenext refactored for torchscript compat, stage base organization
* slight refactor of ConvNeXt to match some EdgeNeXt additions
* remove use of funky LayerNorm layer in ConvNeXt and just use nn.LayerNorm and LayerNorm2d (permute)
2 years ago
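Note: the LayerNorm2d (permute) mentioned above amounts to standard nn.LayerNorm applied channels-last; a minimal sketch (timm's actual class lives in timm.layers):
```python
import torch.nn as nn

class LayerNorm2d(nn.LayerNorm):
    """LayerNorm over the channel dim of NCHW tensors via permute."""
    def forward(self, x):  # x: (B, C, H, W)
        x = x.permute(0, 2, 3, 1)     # to channels-last (B, H, W, C)
        x = super().forward(x)        # plain nn.LayerNorm over C
        return x.permute(0, 3, 1, 2)  # back to channels-first
```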
SeeFun
8f0bc0591e
fix convnext args
3 years ago
SeeFun
ec4e9aa5a0
Add ConvNeXt tiny and small in22k pretrained weights
...
Add ConvNeXt tiny and small in22k pretrained weights from the ConvNeXt repo:
06f7b05f92
3 years ago
Ross Wightman
474ac906a2
Add 'head norm first' convnext_tiny_hnf weights
3 years ago
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
...
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
3 years ago
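Note: a sketch of the revised interface described above; set_grad_checkpointing is the toggle this refactor added to supported models:
```python
import timm
import torch

model = timm.create_model('convnext_tiny').eval()
x = torch.randn(2, 3, 224, 224)

feats = model.forward_features(x)  # unpooled feature map, (2, 768, 7, 7) here
out = model.forward_head(feats)    # pool + classifier head, (2, 1000)
assert torch.allclose(out, model(x))

model.set_grad_checkpointing(True)  # trade compute for memory during training
```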
Ross Wightman
5f81d4de23
Move DeiT to its own file, vit.py getting crowded. Working towards fixing #1029, making the pooling interface for transformers and MLP models closer to convnets. Still working through some details...
3 years ago
Ross Wightman
738a9cd635
unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090
3 years ago
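Note: nn.LayerNorm uses the biased variance, so the manual torch.var_mean path must pass unbiased=False to match; a small sketch checking the equivalence:
```python
import torch
import torch.nn.functional as F

def manual_ln(x, eps=1e-6):
    # unbiased=False matches nn.LayerNorm's biased variance estimate
    var, mean = torch.var_mean(x, dim=-1, unbiased=False, keepdim=True)
    return (x - mean) / torch.sqrt(var + eps)

x = torch.randn(2, 8, 768)
ref = F.layer_norm(x, (768,), eps=1e-6)
assert torch.allclose(manual_ln(x), ref, atol=1e-5)
```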
Ross Wightman
e0c4eec4b6
Default conv_mlp to False across the board for ConvNeXt; it's causing issues on more setups than it's improving right now...
3 years ago
Ross Wightman
b669f4a588
Add ConvNeXt 22k->1k fine-tuned and 384x384 22k->1k fine-tuned weights after testing
3 years ago
Ross Wightman
edd3d73695
Add missing dropout for head reset in ConvNeXt default head
3 years ago
Ross Wightman
b093dcb46d
Some convnext cleanup: remove in-place mul_ for gamma (it was breaking symbolic trace), clean up head a bit...
3 years ago
Ross Wightman
18934debc5
Add initial ConvNeXt impl (mods of official code)
3 years ago