Ross Wightman
29afe79c8b
Attempt to fix unit tests by removing subset of tests on mac runner
2 years ago
Ross Wightman
c0211b0bf7
Swin-V2 test fixes, typo
3 years ago
Ross Wightman
39b725e1c9
Fix tests for rank-4 output where feature channels dim is -1 (3) and not 1
3 years ago
okojoalg
2fec08e923
Add Sequencer to non std filters
3 years ago
Ross Wightman
b049a5c5c6
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
...
* All models updated with revised forward_features / forward_head interface (see the sketch after this entry)
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
3 years ago
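The refactor above splits inference into two stages: forward_features returns the unpooled sequence, and forward_head owns pooling / token selection plus the classifier. A minimal sketch, not timm's actual code; the ToyViT name and dims are illustrative:

```python
import torch
import torch.nn as nn

class ToyViT(nn.Module):
    """Illustrative model following the forward_features / forward_head split."""
    def __init__(self, embed_dim=192, num_classes=1000):
        super().__init__()
        self.blocks = nn.Sequential(nn.Linear(embed_dim, embed_dim), nn.GELU())
        self.norm = nn.LayerNorm(embed_dim)
        self.head = nn.Linear(embed_dim, num_classes) if num_classes else nn.Identity()

    def forward_features(self, x):
        # transformer / MLP models output the full token sequence here; no pooling
        return self.norm(self.blocks(x))

    def forward_head(self, x, pre_logits: bool = False):
        x = x[:, 0]  # pooling / token selection is treated as part of the 'head'
        return x if pre_logits else self.head(x)

    def forward(self, x):
        return self.forward_head(self.forward_features(x))
```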
Ross Wightman
1420c118df
Missed committing outstanding changes to default_cfg keys and test exclusions for swin v2
3 years ago
Ross Wightman
5f81d4de23
Move DeiT to own file, vit getting crowded. Working towards fixing #1029 , make pooling interface for transformers and mlp closer to convnets. Still working through some details...
3 years ago
Ross Wightman
95cfc9b3e8
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
abc9ba2544
Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks.
3 years ago
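Illustratively, the transition means per-model pretrained metadata is reached via a pretrained_cfg attribute rather than default_cfg; a minimal usage sketch (attribute name per the commit, exact structure varies by timm version, model name arbitrary):

```python
import timm

model = timm.create_model('resnet50', pretrained=False)
print(model.pretrained_cfg)  # pretrained metadata: url / input_size / mean / std, etc.
```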
Ross Wightman
010b486590
Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg)
3 years ago
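A hedged sketch of the num_classes=0 support described above (model name illustrative, weights not loaded):

```python
import timm
import torch

# num_classes=0 removes the classifier head; the model returns pooled features
model = timm.create_model('vit_small_patch16_224', pretrained=False, num_classes=0)
feats = model(torch.randn(1, 3, 224, 224))
assert feats.shape[-1] == model.num_features  # feature width, not class logits
```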
Ross Wightman
a8d103e18b
Giant/gigantic vits snuck through in a test and broke the GitHub test runner, add filter
3 years ago
Ross Wightman
ef72ad4177
Extra vit_huge model (non-in21k variant) likely to cause test issues, adding to filters
3 years ago
Ross Wightman
e967c72875
Update README.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper
3 years ago
Ross Wightman
4df51f3932
Add lcnet_100 and mnasnet_small weights
3 years ago
Ross Wightman
5ccf682a8f
Remove deprecated bn-tf train arg and create_model handler. Add evos/evob models back into fx test filter until norm_norm_norm branch merged.
3 years ago
Ross Wightman
25d1526092
Update pytest for GitHub runner to use --forked with xdist, hopefully eliminate memory buildup
3 years ago
Ross Wightman
cd059cbe9c
Add FX backward tests back
3 years ago
Ross Wightman
58ffa2bfb7
Update pytest for GitHub runner to use --forked with xdist, hopefully eliminate memory buildup
3 years ago
Ross Wightman
f7d210d759
Remove evonorm models from FX tests
3 years ago
Ross Wightman
f83b0b01e3
Would like to pass GitHub tests again, disabling both FX feature extraction backward and torchscript tests
3 years ago
Ross Wightman
147e1059a8
Remove FX backward test from GitHub actions runs for now.
3 years ago
Ross Wightman
878bee1d5e
Add patch8 vit model to FX exclusion filter
3 years ago
Ross Wightman
ce76a810c2
New FX test strategy, filter based on param count
3 years ago
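A minimal sketch of the param-count filtering strategy, assuming a hypothetical helper and threshold (timm's actual test filters differ):

```python
import timm

def fx_test_candidates(max_params=100_000_000):
    # keep only models small enough for the memory-constrained GitHub runner;
    # instantiating every model is expensive, this is purely illustrative
    names = []
    for name in timm.list_models():
        model = timm.create_model(name, pretrained=False)
        if sum(p.numel() for p in model.parameters()) < max_params:
            names.append(name)
    return names
```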
Ross Wightman
1e51c2d02e
More FX test tweaks
3 years ago
Ross Wightman
90448031ea
Filter more large models from FX tests
3 years ago
Ross Wightman
8dc269c303
Filter more models for FX tests
3 years ago
Ross Wightman
2482652027
Add nfnet_f2 to FX test exclusion
3 years ago
Ross Wightman
05092e2fbe
Add more models to FX filter
3 years ago
Ross Wightman
3819bef93e
Add FX test exclusion since it uses more RAM and barfs on GitHub actions. Will take a few iterations to include needed models :(
3 years ago
Ross Wightman
9b3519545d
Attempt to reduce memory footprint of FX tests for GitHub actions runs
3 years ago
Ross Wightman
bdd3dff0ca
beit_large models killing GitHub actions test, filter out
3 years ago
Ross Wightman
f2006b2437
Cleanup qkv_bias cat in beit model so it can be traced
3 years ago
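A sketch of the kind of change that makes the qkv bias concatenation traceable: keep the zero k-bias as a registered buffer instead of constructing a new tensor inside forward with torch.zeros_like() (dims and names illustrative):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BeitStyleAttentionBias(nn.Module):
    def __init__(self, dim=192):
        super().__init__()
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        self.q_bias = nn.Parameter(torch.zeros(dim))
        self.v_bias = nn.Parameter(torch.zeros(dim))
        # k has no learnable bias; a non-persistent buffer avoids creating
        # a fresh tensor in forward, which keeps symbolic tracing happy
        self.register_buffer('k_bias', torch.zeros(dim), persistent=False)

    def forward(self, x):
        qkv_bias = torch.cat((self.q_bias, self.k_bias, self.v_bias))
        return F.linear(x, self.qkv.weight, qkv_bias)
```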
Ross Wightman
1076a65df1
Minor post FX merge cleanup
3 years ago
Alexander Soare
0262a0e8e1
fx ready for review
3 years ago
Alexander Soare
d2994016e9
Add try/except guards
3 years ago
Alexander Soare
b25ff96768
wip - pre-rebase
3 years ago
Alexander Soare
a6c24b936b
Tests to enforce all models FX traceable
3 years ago
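In spirit, such a test symbolically traces each model and checks the traced module still produces the same output. A minimal sketch using plain torch.fx; timm's real tests go through its FX feature-extraction helpers with extra leaf modules:

```python
import torch
from torch.fx import symbolic_trace

def assert_fx_traceable(model, input_size=(1, 3, 224, 224)):
    model.eval()
    traced = symbolic_trace(model)  # raises if the model is not traceable
    x = torch.randn(input_size)
    with torch.no_grad():
        assert torch.allclose(model(x), traced(x))
```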
Ross Wightman
1c9284c640
Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now
3 years ago
Ross Wightman
7ab2491ab7
Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes
3 years ago
Ross Wightman
f1808e0970
Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests
3 years ago
Richard Chen
7ab9d4555c
add crossvit
3 years ago
Ross Wightman
01cb46a9a5
Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
3 years ago
Ross Wightman
ef1e2e12be
Attempt to fix xcit test failures on GitHub runner by filtering largest models
3 years ago
Alexander Soare
623e8b8eb8
wip xcit
3 years ago
Alexander Soare
7b8a0017f1
wip to review
3 years ago
Ross Wightman
b41cffaa93
Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713
3 years ago
Ross Wightman
381b279785
Add hybrid model fwds back
3 years ago
Ross Wightman
0020268d9b
Try lower max size for non_std default_cfg test
3 years ago
Ross Wightman
8880f696b6
Refactoring, cleanup, improved test coverage.
...
* Add eca_nfnet_l2 weights, 84.7 @ 384x384
* All 'non-std' (i.e. transformer / mlp) models have classifier / default_cfg test added
* Fix #694 reset_classifier / num_features / forward_features / num_classes=0 consistency for transformer / mlp models (see the sketch after this entry)
* Add direct loading of npz to vision transformer (pure transformer so far, hybrid to come)
* Rename vit_deit* to deit_*
* Remove some deprecated vit hybrid model defs
* Clean up classifier flatten for conv classifiers and unusual cases (mobilenetv3/ghostnet)
* Remove explicit model fns for levit conv, just pass in arg
3 years ago
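A hedged sketch of the consistency enforced for #694: num_classes=0 at creation and reset_classifier(0) after creation should both yield pooled features of width num_features (model name uses the deit_* rename from this commit):

```python
import timm
import torch

x = torch.randn(1, 3, 224, 224)

model = timm.create_model('deit_tiny_patch16_224', pretrained=False, num_classes=0)
assert model(x).shape[-1] == model.num_features  # features, not logits

model = timm.create_model('deit_tiny_patch16_224', pretrained=False)
model.reset_classifier(0)  # removing the head should behave the same way
assert model(x).shape[-1] == model.num_features
```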