Commit Graph

611 Commits (8449ba210c6bde6d65a237eb96a81b2ca2e38de2)

Author SHA1 Message Date
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
3 years ago
Ross Wightman a8b65695f1 Add resnet26ts and resnext26ts models for non-attn baselines
3 years ago
Ross Wightman a5a542f17d Fix typo
3 years ago
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments:
3 years ago
Ross Wightman d667351eac Tweak accuracy topk safety. Fix #807
3 years ago
Yohann Lereclus 35c9740826 Fix accuracy when topk > num_classes
3 years ago
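The topk-accuracy fix referenced in the two commits above (d667351eac / 35c9740826) addresses a crash when a caller requests, e.g., top-5 accuracy on a model with fewer than 5 classes. A minimal sketch of the guard, clamping k to the class count — this is an illustration of the idea, not the exact timm implementation:

```python
import torch

def accuracy(output, target, topk=(1,)):
    """Top-k accuracy that stays safe when topk > num_classes.

    Sketch of the fix described above: clamp the requested k to the
    number of classes so torch.topk never asks for more than exist.
    """
    maxk = min(max(topk), output.size(1))  # guard: k cannot exceed class count
    _, pred = output.topk(maxk, dim=1)
    pred = pred.t()  # shape (maxk, batch)
    correct = pred.eq(target.reshape(1, -1).expand_as(pred))
    return [correct[:min(k, maxk)].reshape(-1).float().sum(0) * 100.0 / target.size(0)
            for k in topk]
```

With 2 classes, `topk=(1, 5)` now degrades gracefully to top-2 for the second entry instead of raising.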
Ross Wightman a16a753852 Add lamb/lars to optim init imports, remove stray comment
3 years ago
Ross Wightman c207e02782 MOAR optimizer changes. Woo!
3 years ago
Ross Wightman a426511c95 More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars.
3 years ago
Ross Wightman 9541f4963b One more scalar -> tensor fix for lamb optimizer
3 years ago
Ross Wightman 8f68193c91 Update lamb.py comment
3 years ago
Ross Wightman 4d284017b8 Merge pull request #813 from rwightman/opt_cleanup
3 years ago
Ross Wightman a6af48be64 add madgradw optimizer
3 years ago
Ross Wightman 55fb5eedf6 Remove experiment from lamb impl
3 years ago
Ross Wightman 8a9eca5157 A few optimizer comments, dead import, missing import
3 years ago
Ross Wightman ac469b50da Optimizer improvements, additions, cleanup
3 years ago
Sepehr Sameni abf3e044bb Update scheduler_factory.py
3 years ago
Ross Wightman 3cdaf5ed56 Add `mmax` config key to auto_augment for increasing upper bound of RandAugment magnitude beyond 10. Make AugMix uniform sampling default not override config setting.
3 years ago
Ross Wightman 1042b8a146 Add non fused LAMB optimizer option
3 years ago
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
3 years ago
Ross Wightman d3f7440650 Add EfficientNetV2 XL model defs
3 years ago
Ross Wightman 72b227dcf5 Merge pull request #750 from drjinying/master
3 years ago
Ross Wightman 2907c1f967 Merge pull request #746 from samarth4149/master
3 years ago
Ross Wightman 748ab852ca Allow act_layer switch for xcit, fix in_chans for some variants
3 years ago
Ying Jin 20b2d4b69d Use bicubic interpolation in resize_pos_embed()
3 years ago
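The commit above switches `resize_pos_embed()` to bicubic interpolation when adapting ViT position embeddings to a new input resolution. A simplified sketch of the technique, assuming a square grid with no class token (the function name matches timm, but this signature and body are an illustrative reduction, not the library code):

```python
import torch
import torch.nn.functional as F

def resize_pos_embed(posemb, new_size):
    """Resize a (1, N, C) square position-embedding grid via bicubic interp.

    Illustrative sketch: reshape tokens to a 2D grid, interpolate in
    channels-first layout, then flatten back to a token sequence.
    Assumes N is a perfect square and there is no class token.
    """
    n, c = posemb.shape[1], posemb.shape[2]
    old = int(n ** 0.5)
    grid = posemb.reshape(1, old, old, c).permute(0, 3, 1, 2)  # (1, C, H, W)
    grid = F.interpolate(grid, size=new_size, mode='bicubic', align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(1, new_size[0] * new_size[1], c)
```

Bicubic resampling preserves the smooth spatial structure of learned position embeddings better than nearest/bilinear when fine-tuning at a different image size.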
Ross Wightman d3255adf8e Merge branch 'xcit' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-xcit
3 years ago
Ross Wightman f8039c7492 Fix gc effv2 model cfg name
3 years ago
Alexander Soare 3a55a30ed1 add notes from author
3 years ago
Alexander Soare 899cf84ccc bug fix - missing _dist postfix for many of the 224_dist models
3 years ago
Alexander Soare 623e8b8eb8 wip xcit
3 years ago
Ross Wightman 392368e210 Add efficientnetv2_rw_t defs w/ weights, and gc variant, as well as gcresnet26ts for experiments. Version 0.4.13
3 years ago
samarth daab57a6d9 1. Added a simple multi step LR scheduler
3 years ago
Ross Wightman 6d8272e92c Add SAM pretrained model defs/weights for ViT B16 and B32 models.
3 years ago
Ross Wightman ee4d8fc69a Remove unnecessary line from nest post refactor
3 years ago
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed.
3 years ago
Ross Wightman 81cd6863c8 Move aggregation (convpool) for nest into NestLevel, cleanup and enable features_only use. Finalize weight url.
3 years ago
Ross Wightman 6ae0ac6420 Merge branch 'nested_transformer' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-nested_transformer
3 years ago
Alexander Soare 7b8a0017f1 wip to review
3 years ago
Alexander Soare b11d949a06 wip checkpoint with some feature extraction work
3 years ago
Alexander Soare 23bb72ce5e nested_transformer wip
3 years ago
Ross Wightman 766b4d3262 Fix features for resnetv2_50t
3 years ago
Ross Wightman e8045e712f Fix BatchNorm for ResNetV2 non GN models, add more ResNetV2 model defs for future experimentation, fix zero_init of last residual for pre-act.
3 years ago
Ross Wightman 20a2be14c3 Add gMLP-S weights, 79.6 top-1
3 years ago
Ross Wightman 85f894e03d Fix ViT in21k representation (pre_logits) layer handling across old and new npz checkpoints
3 years ago
Ross Wightman b41cffaa93 Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713
3 years ago
Ross Wightman 9c9755a808 AugReg release
3 years ago
Ross Wightman 381b279785 Add hybrid model fwds back
3 years ago
Ross Wightman 26f04a8e3e Fix a weight link
3 years ago
Ross Wightman 8f4a0222ed Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA
3 years ago
Ross Wightman 4c09a2f169 Bump version 0.4.12
3 years ago