Commit Graph

15 Commits (1c9284c640503ad53b819990a4c56985609704e2)

| Author | SHA1 | Message | Date |
|---|---|---|---|
| Ross Wightman | 20a2be14c3 | Add gMLP-S weights, 79.6 top-1 | 3 years ago |
| Ross Wightman | b41cffaa93 | Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling detail on Mlp, GhostNet, Levit. Should fix #713 | 3 years ago |
| Ross Wightman | 8f4a0222ed | Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA | 3 years ago |
| Ross Wightman | 511a8e8c96 | Add official ResMLP weights. | 3 years ago |
| Ross Wightman | 4d96165989 | Merge branch 'master' into cleanup_xla_model_fixes | 3 years ago |
| Ross Wightman | 8880f696b6 | Refactoring, cleanup, improved test coverage. | 3 years ago |
| Ross Wightman | d413eef1bf | Add ResMLP-24 model weights that I trained in PyTorch XLA on TPU-VM. 79.2 top-1. | 3 years ago |
| Ross Wightman | 2f5ed2dec1 | Update `init_values` const for 24 and 36 layer ResMLP models | 3 years ago |
| Ross Wightman | bfc72f75d3 | Expand scope of testing for non-std vision transformer / mlp models. Some related cleanup and create fn cleanup for all vision transformer and mlp models. More CoaT weights. | 3 years ago |
| talrid | dc1a4efd28 | mixer_b16_224_miil, mixer_b16_224_miil_in21k models | 3 years ago |
| Ross Wightman | d5af752117 | Add preliminary gMLP and ResMLP impl to Mlp-Mixer | 3 years ago |
| Ross Wightman | e7f0db8664 | Fix drop/drop_path arg on MLP-Mixer model. Fix #641 | 3 years ago |
| Ross Wightman | b2c305c2aa | Move Mlp and PatchEmbed modules into layers. Being used in lots of models now... | 3 years ago |
| Ross Wightman | 2d8b09fe8b | Add official pretrained weights to MLP-Mixer, complete model cfgs. | 3 years ago |
| Ross Wightman | 12efffa6b1 | Initial MLP-Mixer attempt... | 3 years ago |