Ross Wightman
010b486590
Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg)
3 years ago
Ross Wightman
a8d103e18b
Giant/gigantic vits snuck through in a test and broke the GitHub test runner, add filter
3 years ago
Ross Wightman
ef72ad4177
Extra vit_huge model likely to cause test issues (non-in21k variant), adding to filters
3 years ago
Ross Wightman
e967c72875
Update README.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper
3 years ago
Ross Wightman
4df51f3932
Add lcnet_100 and mnasnet_small weights
3 years ago
Ross Wightman
5ccf682a8f
Remove deprecated bn-tf train arg and create_model handler. Add evos/evob models back into fx test filter until norm_norm_norm branch merged.
3 years ago
Ross Wightman
25d1526092
Update pytest for GitHub runner to use --forked with xdist, hopefully eliminate memory buildup
3 years ago
Ross Wightman
f7d210d759
Remove evonorm models from FX tests
3 years ago
Ross Wightman
f83b0b01e3
Would like to pass GitHub tests again, disabling both FX feature extract backward and torchscript tests
3 years ago
Ross Wightman
147e1059a8
Remove FX backward test from GitHub actions runs for now.
3 years ago
Ross Wightman
878bee1d5e
Add patch8 vit model to FX exclusion filter
3 years ago
Ross Wightman
ce76a810c2
New FX test strategy, filter based on param count
3 years ago
Ross Wightman
1e51c2d02e
More FX test tweaks
3 years ago
Ross Wightman
90448031ea
Filter more large models from FX tests
3 years ago
Ross Wightman
8dc269c303
Filter more models for FX tests
3 years ago
Ross Wightman
2482652027
Add nfnet_f2 to FX test exclusion
3 years ago
Ross Wightman
05092e2fbe
Add more models to FX filter
3 years ago
Ross Wightman
3819bef93e
Add FX test exclusion since it uses more ram and barfs on GitHub actions. Will take a few iterations to include needed models :(
3 years ago
Ross Wightman
9b3519545d
Attempt to reduce memory footprint of FX tests for GitHub actions runs
3 years ago
Ross Wightman
bdd3dff0ca
beit_large models killing GitHub actions test, filter out
3 years ago
Ross Wightman
f2006b2437
Cleanup qkv_bias cat in beit model so it can be traced
3 years ago
Ross Wightman
1076a65df1
Minor post FX merge cleanup
3 years ago
Alexander Soare
0262a0e8e1
fx ready for review
3 years ago
Alexander Soare
d2994016e9
Add try/except guards
3 years ago
Alexander Soare
b25ff96768
wip - pre-rebase
3 years ago
Alexander Soare
a6c24b936b
Tests to enforce all models FX traceable
3 years ago
Ross Wightman
1c9284c640
Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now
3 years ago
Ross Wightman
7ab2491ab7
Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes
3 years ago
Ross Wightman
f1808e0970
Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests
3 years ago
Richard Chen
7ab9d4555c
add crossvit
3 years ago
Ross Wightman
01cb46a9a5
Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
3 years ago
Ross Wightman
ef1e2e12be
Attempt to fix xcit test failures on GitHub runner by filtering out the largest models
3 years ago
Alexander Soare
623e8b8eb8
wip xcit
3 years ago
Alexander Soare
7b8a0017f1
wip to review
3 years ago
Ross Wightman
b41cffaa93
Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713
3 years ago
Ross Wightman
381b279785
Add hybrid model fwds back
3 years ago
Ross Wightman
0020268d9b
Try lower max size for non_std default_cfg test
3 years ago
Ross Wightman
8880f696b6
Refactoring, cleanup, improved test coverage.
...
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (i.e. transformer / mlp) models have classifier / default_cfg test added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
3 years ago
Ross Wightman
17dc47c8e6
Missed comma in test filters.
3 years ago
Ross Wightman
8bf63b6c6c
Able to use other attn layer in EfficientNet now. Create test ECA + GC B0 configs. Make ECA more configurable.
3 years ago
Ross Wightman
9c78de8c02
Fix #661 , move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, mish if present.
3 years ago
Ross Wightman
5db7452173
Fix visformer in_chans stem handling
3 years ago
Ross Wightman
fd92ba0de8
Filter large vit models from torchscript tests
3 years ago
Ross Wightman
99d97e0d67
Hopefully the last test update for this PR...
3 years ago
Ross Wightman
d400f1dbdd
Filter test models before creation for backward/torchscript tests
3 years ago
Ross Wightman
c4572cc5aa
Add Visformer-small weights, tweak torchscript jit test img size.
4 years ago
Ross Wightman
83487e2a0d
Lower max backward size for tests.
4 years ago
Ross Wightman
bfc72f75d3
Expand scope of testing for non-std vision transformer / mlp models. Some related cleanup and create fn cleanup for all vision transformer and mlp models. More CoaT weights.
4 years ago
Ross Wightman
f45de37690
Merge branch 'master' into levit_visformer_rednet
4 years ago
Ross Wightman
306c86b668
Merge branch 'convit' of https://github.com/amaarora/pytorch-image-models into amaarora-convit
4 years ago