Commit Graph

22 Commits (4d5c395160a6610d6c4f09f12aa08dac34f3405a)

Author | SHA1 | Message | Date
Ross Wightman | e11efa872d | Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights. | 2 years ago
Ross Wightman | 1b278136c3 | Change models with mean 0,0,0 std 1,1,1 from int to float for consistency as mentioned in #1355 | 2 years ago
Ross Wightman | 0862e6ebae | Fix correctness of some group matching regex (no impact on result), some formatting, missed forward_head for resnet | 2 years ago
Ross Wightman | 372ad5fa0d | Significant model refactor and additions: | 2 years ago
Ross Wightman | 5f81d4de23 | Move DeiT to own file, vit getting crowded. Working towards fixing #1029, make pooling interface for transformers and mlp closer to convnets. Still working through some details... | 2 years ago
Ross Wightman | abc9ba2544 | Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks. | 2 years ago
Martins Bruveris | 85c5ff26d7 | Added DINO pretrained ResMLP models. | 3 years ago
Ross Wightman | 20a2be14c3 | Add gMLP-S weights, 79.6 top-1 | 3 years ago
Ross Wightman | b41cffaa93 | Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713 | 3 years ago
Ross Wightman | 8f4a0222ed | Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA | 3 years ago
Ross Wightman | 511a8e8c96 | Add official ResMLP weights. | 3 years ago
Ross Wightman | 4d96165989 | Merge branch 'master' into cleanup_xla_model_fixes | 3 years ago
Ross Wightman | 8880f696b6 | Refactoring, cleanup, improved test coverage. | 3 years ago
Ross Wightman | d413eef1bf | Add ResMLP-24 model weights that I trained in PyTorch XLA on TPU-VM. 79.2 top-1. | 3 years ago
Ross Wightman | 2f5ed2dec1 | Update `init_values` const for 24 and 36 layer ResMLP models | 3 years ago
Ross Wightman | bfc72f75d3 | Expand scope of testing for non-std vision transformer / mlp models. Some related cleanup and create fn cleanup for all vision transformer and mlp models. More CoaT weights. | 3 years ago
talrid | dc1a4efd28 | mixer_b16_224_miil, mixer_b16_224_miil_in21k models | 3 years ago
Ross Wightman | d5af752117 | Add preliminary gMLP and ResMLP impl to Mlp-Mixer | 3 years ago
Ross Wightman | e7f0db8664 | Fix drop/drop_path arg on MLP-Mixer model. Fix #641 | 3 years ago
Ross Wightman | b2c305c2aa | Move Mlp and PatchEmbed modules into layers. Being used in lots of models now... | 3 years ago
Ross Wightman | 2d8b09fe8b | Add official pretrained weights to MLP-Mixer, complete model cfgs. | 3 years ago
Ross Wightman | 12efffa6b1 | Initial MLP-Mixer attempt... | 3 years ago