Ross Wightman | 010b486590 | Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg) | 3 years ago
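In timm, passing `num_classes=0` to `create_model` builds the model without a classifier head, so the forward pass yields pooled features (which is why the headless DINO weights can load cleanly). A minimal pure-Python sketch of that head-resolution logic, using stand-in classes rather than timm's actual implementation:

```python
# Illustrative stand-ins (not timm's real code) for how a model helper
# might resolve the classifier head from num_classes in the pretrained cfg.

class Identity:
    """Stand-in for torch.nn.Identity: passes features through unchanged."""
    def __call__(self, x):
        return x

class Linear:
    """Stand-in for a classifier head mapping num_features -> num_classes."""
    def __init__(self, num_features, num_classes):
        self.num_features = num_features
        self.num_classes = num_classes

def create_classifier(num_features, num_classes):
    # num_classes=0 means "no classifier": return a pass-through so the
    # model's output is the feature vector itself.
    if num_classes == 0:
        return Identity()
    return Linear(num_features, num_classes)

head = create_classifier(768, 0)
assert isinstance(head, Identity)
assert head([1.0, 2.0]) == [1.0, 2.0]  # features pass through untouched
```

With real timm, the equivalent call would be along the lines of `timm.create_model('vit_base_patch16_224', pretrained=True, num_classes=0)`.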
Ross Wightman | a8d103e18b | Giant/gigantic vits snuck through in a test and broke the GitHub test runner; add filter | 3 years ago
Ross Wightman | ef72ad4177 | Extra vit_huge model likely to cause test issue (non-in21k variant); adding to filters | 3 years ago
Ross Wightman | e967c72875 | Update README.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper | 3 years ago
Ross Wightman | 4df51f3932 | Add lcnet_100 and mnasnet_small weights | 3 years ago
Ross Wightman | 5ccf682a8f | Remove deprecated bn-tf train arg and create_model handler. Add evos/evob models back into fx test filter until norm_norm_norm branch merged. | 3 years ago
Ross Wightman | 25d1526092 | Update pytest for GitHub runner to use --forked with xdist, hopefully eliminate memory buildup | 3 years ago
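The combination referenced here comes from the pytest-xdist and pytest-forked plugins: `-n` distributes tests across worker processes, and `--forked` runs each test in a forked subprocess so its memory is returned to the OS when the child exits, which is what keeps long test runs from accumulating memory on a constrained runner. A hypothetical CI step illustrating the idea (not the repo's actual workflow file):

```yaml
# Hypothetical GitHub Actions step, not the repo's actual workflow:
- name: Run tests
  run: |
    pip install pytest pytest-xdist pytest-forked
    # -n 4: four xdist workers; --forked: isolate each test in a subprocess
    # so its memory is freed on exit, avoiding runner OOM kills.
    pytest -n 4 --forked tests/
```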
Ross Wightman | f7d210d759 | Remove evonorm models from FX tests | 3 years ago
Ross Wightman | f83b0b01e3 | Would like to pass GitHub tests again; disabling both FX feature-extract backward and torchscript tests | 3 years ago
Ross Wightman | 147e1059a8 | Remove FX backward test from GitHub actions runs for now. | 3 years ago
Ross Wightman | 878bee1d5e | Add patch8 vit model to FX exclusion filter | 3 years ago
Ross Wightman | ce76a810c2 | New FX test strategy, filter based on param count | 3 years ago
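The strategy named here replaces a growing hand-maintained model exclusion list with a single size threshold. A sketch of the idea in plain Python (names and the parameter budget are hypothetical, not the repo's actual test code; the counts below are approximate):

```python
# Hypothetical sketch: exclude models from memory-hungry FX tests once their
# parameter count exceeds a budget that fits on a GitHub Actions runner.

FX_PARAM_LIMIT = 100_000_000  # hypothetical budget: ~100M parameters

def filter_fx_models(models, param_counts, limit=FX_PARAM_LIMIT):
    """Keep only models whose parameter count is within the limit."""
    return [m for m in models if param_counts[m] <= limit]

# Approximate parameter counts for illustration.
param_counts = {
    'vit_base_patch16_224': 86_000_000,
    'vit_large_patch16_224': 304_000_000,
    'beit_large_patch16_224': 304_000_000,
}
kept = filter_fx_models(list(param_counts), param_counts)
assert kept == ['vit_base_patch16_224']  # only the base-sized model survives
```

The advantage over name-based filters (visible in the surrounding commits) is that newly added large models are excluded automatically.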
Ross Wightman | 1e51c2d02e | More FX test tweaks | 3 years ago
Ross Wightman | 90448031ea | Filter more large models from FX tests | 3 years ago
Ross Wightman | 8dc269c303 | Filter more models for FX tests | 3 years ago
Ross Wightman | 2482652027 | Add nfnet_f2 to FX test exclusion | 3 years ago
Ross Wightman | 05092e2fbe | Add more models to FX filter | 3 years ago
Ross Wightman | 3819bef93e | Add FX test exclusion since it uses more ram and barfs on GitHub actions. Will take a few iterations to include needed models :( | 3 years ago
Ross Wightman | 9b3519545d | Attempt to reduce memory footprint of FX tests for GitHub actions runs | 3 years ago
Ross Wightman | bdd3dff0ca | beit_large models killing GitHub actions test, filter out | 3 years ago
Ross Wightman | f2006b2437 | Cleanup qkv_bias cat in beit model so it can be traced | 3 years ago
Ross Wightman | 1076a65df1 | Minor post FX merge cleanup | 3 years ago
Alexander Soare | 0262a0e8e1 | fx ready for review | 3 years ago
Alexander Soare | d2994016e9 | Add try/except guards | 3 years ago
Alexander Soare | b25ff96768 | wip - pre-rebase | 3 years ago
Alexander Soare | a6c24b936b | Tests to enforce all models FX traceable | 3 years ago
Alexander Soare | 6d2acec1bb | Fix ordering of tests | 3 years ago
Alexander Soare | 65c3d78b96 | Freeze/unfreeze functionality finalized. Tests added | 3 years ago
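Freeze/unfreeze utilities conventionally work by toggling `requires_grad` on a submodule's parameters, so a frozen backbone contributes no gradients while an attached head keeps training. A minimal pure-Python sketch of those semantics, using stand-in classes rather than timm's actual utility code:

```python
# Stand-in classes (not torch.nn or timm's real implementation) showing the
# freeze/unfreeze contract: freezing disables gradient computation for a
# submodule's parameters; unfreezing re-enables it.

class Parameter:
    def __init__(self):
        self.requires_grad = True

class Module:
    def __init__(self, **children):
        self._params = [Parameter(), Parameter()]  # this module's own params
        self._children = children
    def parameters(self):
        yield from self._params
        for child in self._children.values():
            yield from child.parameters()

def freeze(module):
    for p in module.parameters():
        p.requires_grad = False

def unfreeze(module):
    for p in module.parameters():
        p.requires_grad = True

backbone, head = Module(), Module()
model = Module(backbone=backbone, head=head)
freeze(backbone)  # freeze only the backbone; the head keeps training
assert not any(p.requires_grad for p in backbone.parameters())
assert all(p.requires_grad for p in head.parameters())
unfreeze(backbone)
assert all(p.requires_grad for p in model.parameters())
```

A real implementation also has to handle details this sketch ignores, such as batch-norm statistics in frozen submodules.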
Ross Wightman | 24720abe3b | Merge branch 'master' into attn_update | 3 years ago
Ross Wightman | 1c9284c640 | Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now | 3 years ago
Ross Wightman | 7ab2491ab7 | Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes | 3 years ago
Ross Wightman | f1808e0970 | Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests | 3 years ago
Ross Wightman | a897e0ebcc | Merge branch 'feature/crossvit' of https://github.com/chunfuchen/pytorch-image-models into chunfuchen-feature/crossvit | 3 years ago
Ross Wightman | 8642401e88 | Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low... | 3 years ago
Ross Wightman | 5f12de4875 | Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test. | 3 years ago
Ross Wightman | 54e90e82a5 | Another attempt at sgd momentum test passing... | 3 years ago
Richard Chen | 7ab9d4555c | add crossvit | 3 years ago
Ross Wightman | fc894c375c | Another attempt at sgd momentum test passing... | 3 years ago
Ross Wightman | 708d87a813 | Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test. | 3 years ago
Ross Wightman | c207e02782 | MOAR optimizer changes. Woo! | 3 years ago
Ross Wightman | 42c1f0cf6c | Fix lars tests | 3 years ago
Ross Wightman | a426511c95 | More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars. | 3 years ago
Ross Wightman | a6af48be64 | add madgradw optimizer | 3 years ago
Ross Wightman | 55fb5eedf6 | Remove experiment from lamb impl | 3 years ago
Ross Wightman | 959eaff121 | Add optimizer tests and update testing to pytorch 1.9 | 3 years ago
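Optimizer tests like the flaky "sgd momentum" one mentioned above typically check that an optimizer drives a simple objective toward its known minimum within a step budget; flakiness comes from the tolerance and step count chosen. A toy pure-Python version of such a test (hypothetical, not the repo's actual test code):

```python
# Toy convergence test: SGD with classic momentum should minimize
# f(x) = (x - 3)^2, whose minimum is at x = 3.

def sgd_momentum_min(grad_fn, x0, lr=0.1, momentum=0.9, steps=200):
    """Run SGD with a classic momentum buffer and return the final iterate."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = momentum * v + grad_fn(x)  # accumulate velocity
        x = x - lr * v                 # step against the velocity
    return x

grad = lambda x: 2.0 * (x - 3.0)       # gradient of (x - 3)^2
x_final = sgd_momentum_min(grad, x0=0.0)
assert abs(x_final - 3.0) < 1e-3       # converged to the minimum
```

Real tests run against actual `torch.optim`-style optimizers and many parameter shapes, which is where stochastic flakiness creeps in; the tolerance and step budget here are generous enough to be deterministic.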
Ross Wightman | 01cb46a9a5 | Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. | 3 years ago
Ross Wightman | ef1e2e12be | Attempt to fix xcit test failures on github runner by filtering largest models | 3 years ago
Alexander Soare | 623e8b8eb8 | wip xcit | 3 years ago
Alexander Soare | 7b8a0017f1 | wip to review | 3 years ago
Ross Wightman | b41cffaa93 | Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713 | 3 years ago