Commit Graph

28 Commits (01aea8c1bfa925608fd302fca63881a4d569ff4a)

| Author | SHA1 | Message | Date |
| --- | --- | --- | --- |
| Ross Wightman | 2e83bba142 | Revert head norm changes to ConvNeXt as it broke some downstream use, alternate workaround for fcmae weights | 2 years ago |
| Ross Wightman | e861b74cf8 | Pass through --model-kwargs (and --opt-kwargs for train) from command line through to model __init__. Update some models to improve arg overlay. Cleanup along the way. | 2 years ago |
| Ross Wightman | 6e5553da5f | Add ConvNeXt-V2 support (model additions and weights) (#1614) | 2 years ago |
| Ross Wightman | 9a51e4ea2e | Add FlexiViT models and weights, refactoring, push more weights | 2 years ago |
| Ross Wightman | 6a01101905 | Update efficientnet.py and convnext.py to multi-weight, add ImageNet-12k pretrained EfficientNet-B5 and ConvNeXt-Nano. | 2 years ago |
| Ross Wightman | 927f031293 | Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models | 2 years ago |
| Ross Wightman | 755570e2d6 | Rename _pretrained.py -> pretrained.py, not feasible to change the other files to same scheme without breaking uses | 2 years ago |
| Ross Wightman | 72cfa57761 | Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout. | 2 years ago |
| Ross Wightman | 4d5c395160 | MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates | 2 years ago |
| Ross Wightman | 837c68263b | For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode | 2 years ago |
| Ross Wightman | 1d8ada359a | Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size | 2 years ago |
| Ross Wightman | 2544d3b80f | ConvNeXt pico, femto, and nano, pico, femto ols (overlapping stem) weights and model defs | 2 years ago |
| Ross Wightman | 6f103a442b | Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288 | 2 years ago |
| Ross Wightman | c5e0d1c700 | Add dilation support to convnext, allows output_stride=8 and 16 use. Fix #1341 | 2 years ago |
| Ross Wightman | 06307b8b41 | Remove experimental downsample in block support in ConvNeXt. Experiment further before keeping it in. | 2 years ago |
| Ross Wightman | 188c194b0f | Left some experiment stem code in convnext by mistake | 2 years ago |
| Ross Wightman | 6064d16a2d | Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320 | 2 years ago |
| SeeFun | 8f0bc0591e | fix convnext args | 3 years ago |
| SeeFun | ec4e9aa5a0 | Add ConvNeXt tiny and small pretrain in22k | 3 years ago |
| Ross Wightman | 474ac906a2 | Add 'head norm first' convnext_tiny_hnf weights | 3 years ago |
| Ross Wightman | 372ad5fa0d | Significant model refactor and additions: | 3 years ago |
| Ross Wightman | 5f81d4de23 | Move DeiT to own file, vit getting crowded. Working towards fixing #1029, make pooling interface for transformers and mlp closer to convnets. Still working through some details... | 3 years ago |
| Ross Wightman | 738a9cd635 | unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090 | 3 years ago |
| Ross Wightman | e0c4eec4b6 | Default conv_mlp to False across the board for ConvNeXt, causing issues on more setups than it's improving right now... | 3 years ago |
| Ross Wightman | b669f4a588 | Add ConvNeXt 22k->1k fine-tuned and 384 22k-1k fine-tuned weights after testing | 3 years ago |
| Ross Wightman | edd3d73695 | Add missing dropout for head reset in ConvNeXt default head | 3 years ago |
| Ross Wightman | b093dcb46d | Some convnext cleanup, remove in place mul_ for gamma, breaking symbolic trace, cleanup head a bit... | 3 years ago |
| Ross Wightman | 18934debc5 | Add initial ConvNeXt impl (mods of official code) | 3 years ago |
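Commit c5e0d1c700 above added dilation support so ConvNeXt can run at output_stride 8 or 16. A minimal sketch of the standard stride-to-dilation conversion such support typically relies on (the function name and shape here are illustrative assumptions, not timm's actual internals):

```python
def stage_strides_dilations(stage_strides, output_stride=32):
    """Cap the network's total stride at output_stride by trading any
    excess stage stride for dilation (DeepLab-style). Illustrative
    sketch only; not timm's actual implementation."""
    current_stride = 1
    dilation = 1
    out = []
    for s in stage_strides:
        if current_stride >= output_stride:
            # Can't downsample further: convert this stage's stride to dilation.
            dilation *= s
            s = 1
        else:
            current_stride *= s
        out.append((s, dilation))
    return out

# ConvNeXt-style layout: stride-4 stem, then three stride-2 downsampling stages.
print(stage_strides_dilations([4, 2, 2, 2], output_stride=8))
# -> [(4, 1), (2, 1), (1, 2), (1, 4)]
```

With `output_stride=8`, the last two stages keep full spatial resolution and dilate their convolutions instead, which is what makes the dense-prediction use cases in #1341 possible.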
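Commit e861b74cf8 above passes `--model-kwargs` from the command line through to model `__init__`. One common argparse pattern for accepting free-form `KEY=VALUE` pairs looks like the sketch below; the `KwargsAction` class is a hypothetical stand-in, not timm's code:

```python
import argparse
import ast

class KwargsAction(argparse.Action):
    """Collect repeated KEY=VALUE tokens into a dict, literal-eval'ing each
    value so numbers, bools, and lists come through typed. Hypothetical
    sketch of the pattern, not timm's implementation."""
    def __call__(self, parser, namespace, values, option_string=None):
        kwargs = {}
        for item in values:
            key, _, value = item.partition('=')
            try:
                kwargs[key] = ast.literal_eval(value)
            except (ValueError, SyntaxError):
                kwargs[key] = value  # fall back to the raw string
        setattr(namespace, self.dest, kwargs)

parser = argparse.ArgumentParser()
parser.add_argument('--model-kwargs', nargs='*', default={}, action=KwargsAction)
args = parser.parse_args(['--model-kwargs', 'depths=[3,3,9,3]', 'drop_path_rate=0.1'])
print(args.model_kwargs)
# -> {'depths': [3, 3, 9, 3], 'drop_path_rate': 0.1}
```

The resulting dict can then be splatted into model construction, e.g. `create_model(name, **args.model_kwargs)`, which is the pass-through the commit message describes.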