naman jain
e940755778
added new weight initialization - xavier_uniform
3 years ago
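A minimal sketch of what such a change typically looks like in PyTorch (not the commit's actual code; the model here is a stand-in):

```python
import torch.nn as nn

def init_weights_xavier_uniform(module):
    # Xavier-uniform samples from U(-a, a), a = gain * sqrt(6 / (fan_in + fan_out))
    if isinstance(module, nn.Linear):
        nn.init.xavier_uniform_(module.weight)
        if module.bias is not None:
            nn.init.zeros_(module.bias)

model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))
model.apply(init_weights_xavier_uniform)  # recursively applies to all submodules
```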
kira7005
450b0d57f0
model_freeze_and_pretraining
3 years ago
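Freezing a pretrained backbone for fine-tuning usually reduces to toggling requires_grad; a hedged sketch (the head prefix is an assumption, not taken from this repo):

```python
import torch.nn as nn

def freeze_backbone(model: nn.Module, head_prefix: str = "classifier"):
    # Train only parameters whose names start with the (hypothetical) head prefix;
    # everything else is frozen.
    for name, param in model.named_parameters():
        param.requires_grad = name.startswith(head_prefix)
```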
kira7005
50fd5be983
add_softmax_layer
3 years ago
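An illustrative way to append an explicit softmax to a classifier head. Note that nn.CrossEntropyLoss expects raw logits, so a softmax like this is usually applied only at inference:

```python
import torch.nn as nn

head = nn.Sequential(
    nn.Linear(512, 2),   # logits for 2 classes
    nn.Softmax(dim=1),   # normalized class probabilities
)
```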
kira7005
cbc75c808e
model changes to support loading pretrained weights
3 years ago
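Loading pretrained weights into a modified model is commonly done with strict=False, so renamed or resized layers (e.g. a new classifier) are tolerated; a sketch with a hypothetical checkpoint path:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(512, 128), nn.ReLU(), nn.Linear(128, 2))
state_dict = torch.load("pretrained.pth", map_location="cpu")  # hypothetical path
missing, unexpected = model.load_state_dict(state_dict, strict=False)
print("missing keys:", missing, "unexpected keys:", unexpected)
```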
kira7005
592e2984af
classes_number
3 years ago
kira7005
af21ef0847
bag_sampler
3 years ago
kira7005
029a2cb449
mil_ranking loss
3 years ago
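A hedged sketch of a multiple-instance ranking loss in the style of Sultani et al. (2018), where the top-scored segment of an anomalous bag should outrank the top-scored segment of a normal bag (function name and margin are illustrative, not this commit's code):

```python
import torch

def mil_ranking_loss(pos_scores: torch.Tensor, neg_scores: torch.Tensor,
                     margin: float = 1.0) -> torch.Tensor:
    # pos_scores / neg_scores: per-segment scores of one positive / negative bag.
    # Hinge loss on the difference of each bag's maximum segment score.
    return torch.clamp(margin - pos_scores.max() + neg_scores.max(), min=0.0)
```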
kira7005
6c376d8139
Support reading from c3d_embeddings
3 years ago
Martins Bruveris
85c5ff26d7
Added DINO pretrained ResMLP models.
4 years ago
Ross Wightman
20a2be14c3
Add gMLP-S weights, 79.6 top-1
4 years ago
Ross Wightman
b41cffaa93
Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713
4 years ago
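The use case behind this fix: timm's create_model accepts num_classes=0 to build a headless feature extractor, and the npz-loading path has to cope with the missing classifier. A sketch:

```python
import timm
import torch

# num_classes=0 drops the classifier; forward() returns pooled features.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
features = model(torch.randn(1, 3, 224, 224))
print(features.shape)  # (1, model.num_features)
```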
Ross Wightman
8f4a0222ed
Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA
4 years ago
Ross Wightman
511a8e8c96
Add official ResMLP weights.
4 years ago
Ross Wightman
4d96165989
Merge branch 'master' into cleanup_xla_model_fixes
4 years ago
Ross Wightman
8880f696b6
Refactoring, cleanup, improved test coverage.
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (i.e. transformer / mlp) models have classifier / default_cfg tests added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models (see the sketch after this entry)
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
4 years ago
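A sketch of the consistency that refactor enforces between reset_classifier, num_features, and the forward output (both are real timm APIs):

```python
import timm
import torch

model = timm.create_model("mixer_b16_224", num_classes=1000)
model.reset_classifier(0)  # remove the head in place
out = model(torch.randn(1, 3, 224, 224))
assert out.shape[-1] == model.num_features  # features, not logits
```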
Ross Wightman
d413eef1bf
Add ResMLP-24 model weights that I trained in PyTorch XLA on TPU-VM. 79.2 top-1.
4 years ago
Ross Wightman
2f5ed2dec1
Update `init_values` const for 24 and 36 layer ResMLP models
4 years ago
Ross Wightman
bfc72f75d3
Expand scope of testing for non-std vision transformer / mlp models. Related cleanup, including create fn cleanup, for all vision transformer and mlp models. More CoaT weights.
4 years ago
talrid
dc1a4efd28
Add mixer_b16_224_miil and mixer_b16_224_miil_in21k models
4 years ago
Ross Wightman
d5af752117
Add preliminary gMLP and ResMLP impl to Mlp-Mixer
4 years ago
Ross Wightman
e7f0db8664
Fix drop/drop_path args on MLP-Mixer model. Fix #641
4 years ago
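The two arguments this fix wires through are standard timm kwargs: drop_rate (dropout) and drop_path_rate (stochastic depth). For example:

```python
import timm

model = timm.create_model("mixer_b16_224", drop_rate=0.1, drop_path_rate=0.1)
```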
Ross Wightman
b2c305c2aa
Move Mlp and PatchEmbed modules into layers. Being used in lots of models now...
4 years ago
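After the move, both modules are imported from the shared layers package (the path shown matches timm of that era; recent releases also expose them under timm.layers):

```python
from timm.models.layers import Mlp, PatchEmbed
```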
Ross Wightman
2d8b09fe8b
Add official pretrained weights to MLP-Mixer, complete model cfgs.
4 years ago
Ross Wightman
12efffa6b1
Initial MLP-Mixer attempt...
4 years ago