Commit Graph

12 Commits (1aa617cb3b13832c29b4f5c4a1aba221acb4013e)

Author | SHA1 | Message | Date
Ross Wightman | 656757d26b | Fix MobileNetV2 head conv size for multiplier < 1.0. Add some missing modification copyrights, fix starting date of some old ones. | 3 years ago
Ross Wightman | c21b21660d | visformer supports spatial feat map; update pool_size in pretrained cfg to match | 3 years ago
KAI ZHAO | b4b8d1ec18 | Fix hard-coded strides | 3 years ago
Ross Wightman | f658a72e72 | Clean up re-use of Dropout modules in Mlp modules after some Twitter feedback :p | 3 years ago
Ross Wightman | b41cffaa93 | Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713 | 3 years ago
Ross Wightman | 8880f696b6 | Refactoring, cleanup, improved test coverage. | 3 years ago
Ross Wightman | 742c2d5247 | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 3 years ago
Ross Wightman | 5db7452173 | Fix visformer in_chans stem handling | 3 years ago
Ross Wightman | c4572cc5aa | Add Visformer-small weights, tweak torchscript jit test img size. | 3 years ago
Ross Wightman | bfc72f75d3 | Expand scope of testing for non-std vision transformer / MLP models. Some related cleanup and create fn cleanup for all vision transformer and MLP models. More CoaT weights. | 3 years ago
Ross Wightman | 94d4b53352 | Add temporary default_cfgs to visformer models so they pass tests | 3 years ago
Ross Wightman | ecc7552c5c | Add levit, levit_c, and visformer model defs. Largely untested; cleanup not finished. | 3 years ago