Commit Graph

777 Commits (ff0f709c206d748bd553970dcc9dd7659373e6e1)

Author SHA1 Message Date
Ross Wightman a426511c95 More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars.
4 years ago
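
A minimal sketch of the `.data`-free update pattern the commit above refers to, using a plain SGD step for illustration (not the repository's implementation):

```python
# Updates are applied in-place on the parameter tensors inside torch.no_grad(),
# so no p.data access is needed anywhere (illustrative, not timm code).
import torch

@torch.no_grad()
def sgd_step(params, lr=0.1, weight_decay=0.0):
    for p in params:
        if p.grad is None:
            continue
        grad = p.grad
        if weight_decay != 0:
            grad = grad.add(p, alpha=weight_decay)
        p.add_(grad, alpha=-lr)  # in-place update, no p.data
```
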
Ross Wightman b0265ef8a6 Merge branch 'master' into bits_and_tpu
4 years ago
Ross Wightman 9541f4963b One more scalar -> tensor fix for lamb optimizer
4 years ago
Ross Wightman b76b48e8e9 Update optimizer creation for master optimizer changes
4 years ago
Ross Wightman f98662b9c9 Merge branch 'master' into bits_and_tpu
4 years ago
Ross Wightman 8f68193c91 Update lamb.py comment
4 years ago
Ross Wightman 4d284017b8 Merge pull request #813 from rwightman/opt_cleanup
4 years ago
Ross Wightman a6af48be64 add madgradw optimizer
4 years ago
Ross Wightman 55fb5eedf6 Remove experiment from lamb impl
4 years ago
Ross Wightman 8a9eca5157 A few optimizer comments, dead import, missing import
4 years ago
Ross Wightman ac469b50da Optimizer improvements, additions, cleanup
4 years ago
Sepehr Sameni abf3e044bb Update scheduler_factory.py
4 years ago
Ross Wightman cb621e0f00 Remove print, arg order
4 years ago
Ross Wightman c06c739901 Merge branch 'master' into bits_and_tpu
4 years ago
Ross Wightman 40457e5691 Transforms, augmentation work for bits, add RandomErasing support for XLA (pushing into transforms), revamp of transform/preproc config, etc ongoing...
4 years ago
Ross Wightman 3cdaf5ed56 Add `mmax` config key to auto_augment for increasing upper bound of RandAugment magnitude beyond 10. Make AugMix uniform sampling default not override config setting.
4 years ago
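
A hedged illustration of the `mmax` key, assuming it is accepted as a token in the usual RandAugment config string (syntax inferred from the commit message, not verified against this tree):

```python
from timm.data.auto_augment import rand_augment_transform

# 'm15' sets the magnitude; 'mmax20' (new here) raises the upper bound from 10 to 20.
tfm = rand_augment_transform('rand-m15-mmax20-mstd0.5',
                             hparams=dict(img_mean=(124, 116, 104)))
```
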
Ross Wightman 1042b8a146 Add non fused LAMB optimizer option
4 years ago
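
A hedged usage sketch for the non-fused LAMB option, assuming this checkpoint exposes `create_optimizer_v2` from `timm.optim` and registers `'lamb'` as an optimizer name:

```python
import timm
from timm.optim import create_optimizer_v2

model = timm.create_model('resnet50')
optimizer = create_optimizer_v2(model, opt='lamb', lr=5e-3, weight_decay=0.02)
```
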
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
4 years ago
Ross Wightman d3f7440650 Add EfficientNetV2 XL model defs
4 years ago
Ross Wightman 72b227dcf5 Merge pull request #750 from drjinying/master
4 years ago
Ross Wightman 2907c1f967 Merge pull request #746 from samarth4149/master
4 years ago
Ross Wightman 748ab852ca Allow act_layer switch for xcit, fix in_chans for some variants
4 years ago
Ying Jin 20b2d4b69d Use bicubic interpolation in resize_pos_embed()
4 years ago
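
The change above swaps the interpolation mode; a self-contained sketch of bicubic grid resizing for ViT position embeddings (the real `resize_pos_embed` also handles the class-token entry):

```python
import torch
import torch.nn.functional as F

def resize_grid_pos_embed(pos_embed: torch.Tensor, old_hw, new_hw) -> torch.Tensor:
    # pos_embed: (1, old_h * old_w, C) -> (1, new_h * new_w, C)
    _, _, dim = pos_embed.shape
    grid = pos_embed.reshape(1, old_hw[0], old_hw[1], dim).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=new_hw, mode='bicubic', align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(1, new_hw[0] * new_hw[1], dim)
```
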
Ross Wightman d3255adf8e Merge branch 'xcit' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-xcit
4 years ago
Ross Wightman f8039c7492 Fix gc effv2 model cfg name
4 years ago
Alexander Soare 3a55a30ed1 add notes from author
4 years ago
Alexander Soare 899cf84ccc bug fix - missing _dist postfix for many of the 224_dist models
4 years ago
Alexander Soare 623e8b8eb8 wip xcit
4 years ago
Ross Wightman 392368e210 Add efficientnetv2_rw_t defs w/ weights, and gc variant, as well as gcresnet26ts for experiments. Version 0.4.13
4 years ago
samarth daab57a6d9 1. Added a simple multi step LR scheduler
4 years ago
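
The scheduler added here is the classic multi-step decay; for reference, the plain-PyTorch equivalent of the idea (timm's own class name and signature in this tree are not reproduced):

```python
import torch

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
# Drop the LR by 10x at epochs 30, 60, 90.
sched = torch.optim.lr_scheduler.MultiStepLR(opt, milestones=[30, 60, 90], gamma=0.1)
for epoch in range(100):
    opt.step()        # stand-in for a real training epoch
    sched.step()
```
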
Ross Wightman 6d8272e92c Add SAM pretrained model defs/weights for ViT B16 and B32 models.
4 years ago
Ross Wightman ee4d8fc69a Remove unnecessary line from nest post refactor
4 years ago
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed.
4 years ago
Ross Wightman 81cd6863c8 Move aggregation (convpool) for nest into NestLevel, cleanup and enable features_only use. Finalize weight url.
4 years ago
Ross Wightman 6ae0ac6420 Merge branch 'nested_transformer' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-nested_transformer
4 years ago
Alexander Soare 7b8a0017f1 wip to review
4 years ago
Alexander Soare b11d949a06 wip checkpoint with some feature extraction work
4 years ago
Alexander Soare 23bb72ce5e nested_transformer wip
4 years ago
Ross Wightman 766b4d3262 Fix features for resnetv2_50t
4 years ago
Ross Wightman e8045e712f Fix BatchNorm for ResNetV2 non GN models, add more ResNetV2 model defs for future experimentation, fix zero_init of last residual for pre-act.
4 years ago
Ross Wightman 20a2be14c3 Add gMLP-S weights, 79.6 top-1
4 years ago
Ross Wightman 85f894e03d Fix ViT in21k representation (pre_logits) layer handling across old and new npz checkpoints
4 years ago
Ross Wightman b41cffaa93 Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713
4 years ago
Ross Wightman 9c9755a808 AugReg release
4 years ago
Ross Wightman 381b279785 Add hybrid model fwds back
4 years ago
Ross Wightman 26f04a8e3e Fix a weight link
4 years ago
Ross Wightman 8f4a0222ed Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA
4 years ago
Ross Wightman 4c09a2f169 Bump version 0.4.12
4 years ago
Ross Wightman b319eb5b5d Update ViT weights, more details to be added before merge.
4 years ago
Ross Wightman 8257b86550 Fix up resnetv2 bit/bitm model default res
4 years ago
Ross Wightman 1228f5a3d8 Add BiT distilled 50x1 and teacher 152x2 models from 'A good teacher is patient and consistent' paper.
4 years ago
Ross Wightman 511a8e8c96 Add official ResMLP weights.
4 years ago
Ross Wightman b9cfb64412 Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load.
4 years ago
Ross Wightman 8319e0c373 Add file docstring to std_conv.py
4 years ago
Ross Wightman 4d96165989 Merge branch 'master' into cleanup_xla_model_fixes
4 years ago
Ross Wightman 8880f696b6 Refactoring, cleanup, improved test coverage.
4 years ago
Ross Wightman ba2ca4b464 One codepath for stdconv, switch layernorm to batchnorm so gain included. Tweak epsilon values for nfnet, resnetv2, vit hybrid.
4 years ago
Ross Wightman b7a568f065 Fix torchscript issue in bat
4 years ago
Ross Wightman d17b374f0f Minimum input_size needed to be higher
4 years ago
Ross Wightman b3b90d944d Add min_input_size to bat_resnext to prevent test breakage.
4 years ago
Ross Wightman d413eef1bf Add ResMLP-24 model weights that I trained in PyTorch XLA on TPU-VM. 79.2 top-1.
4 years ago
Ross Wightman 10d8fa4620 Add gc and bat attention resnext26ts variants to byob for test.
4 years ago
Ross Wightman 2f5ed2dec1 Update `init_values` const for 24 and 36 layer ResMLP models
4 years ago
Ross Wightman 8e4ac3549f All ScaledStdConv and StdConv uses default to using F.layernorm so that they work with PyTorch XLA. eps value tweaking is a WIP.
4 years ago
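
A sketch of weight standardization expressed through `F.layer_norm`, the form that lowers cleanly on XLA; the real `std_conv.py` additionally applies a learned gain and tunes `eps` per architecture:

```python
import torch
import torch.nn.functional as F

def standardize_weight(weight: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    # Zero mean / unit variance per output filter, in a single layer_norm call.
    out_c = weight.shape[0]
    w = F.layer_norm(weight.reshape(out_c, -1), (weight[0].numel(),), eps=eps)
    return w.reshape_as(weight)
```
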
Ross Wightman 2a63d0246b Post merge cleanup
4 years ago
Ross Wightman 45dec179e5 Merge pull request #681 from lmk123568/master
4 years ago
Dongyoon Han ded1671483 Fix stochastic depth working only with a shortcut
4 years ago
Ross Wightman 847b4af144 Update README.md
4 years ago
Mike b87d98b238 Update convit.py
4 years ago
Ross Wightman 5c5cadfe4c Update README.md
4 years ago
Ross Wightman ee2b8f49ee Update README.md
4 years ago
Ross Wightman cc870df7b8 Update README.md
4 years ago
Ross Wightman 6b2d9c2660 Another bits/README.md update
4 years ago
Ross Wightman c3db5f5801 Worker hack for TFDS eval, add TPU env var setting.
4 years ago
Ross Wightman f411724de4 Fix checkpoint delete issue. Add README about bits and initial Pytorch XLA usage on TPU-VM. Add some FIXMEs and fold train_cfg into train_state by default.
4 years ago
Ross Wightman b57a03bd0d Merge branch 'master' into bits_and_tpu
4 years ago
Ross Wightman 91ab0b6ce5 Add proper TrainState checkpoint save/load. Some reorg/refactoring and other cleanup. More to go...
4 years ago
Ross Wightman 02320c3e3d Bump version to 0.4.11
4 years ago
Ross Wightman bda8ab015a Remove min channels for SelectiveKernel, divisor should cover cases well enough.
4 years ago
Ross Wightman a27f4aec4a Missed args for skresnext w/ refactoring.
4 years ago
Ross Wightman 307a935b79 Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA.
4 years ago
Ross Wightman 8bf63b6c6c Able to use other attn layer in EfficientNet now. Create test ECA + GC B0 configs. Make ECA more configurable.
4 years ago
Ross Wightman bcec14d3b5 Bring EfficientNet SE layer in line with others, pull se_ratio outside of blocks. Allows swapping w/ other attn layers.
4 years ago
Ross Wightman 9611458e19 Throw in some FBNetV3 code I had lying around, some refactoring of SE reduction channel calcs for all EffNet archs.
4 years ago
Ross Wightman 01b9108619 Merge branch 'master' into more_attn
4 years ago
Ross Wightman d7bab8a6c5 Fix strict flag change for checkpoint load.
4 years ago
Ross Wightman 02f9d4bc34 Add weights for resnet51q model, add 61q def.
4 years ago
Ross Wightman f615474be3 Fix broken test, repvgg block doesn't have attn_last attr.
4 years ago
Ross Wightman 742c2d5247 Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy.
4 years ago
Ross Wightman 9c78de8c02 Fix #661, move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, mish if present.
4 years ago
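
The "native torch support ... if present" part reduces to a feature test; a hedged sketch of the pattern (timm's activation helpers are more involved):

```python
import torch
import torch.nn.functional as F

def hard_swish(x: torch.Tensor, inplace: bool = False) -> torch.Tensor:
    # Prefer the fused native op when the running torch build provides it.
    if hasattr(F, 'hardswish'):
        return F.hardswish(x, inplace=inplace)
    # Manual fallback: x * relu6(x + 3) / 6
    return x.mul_(F.relu6(x + 3.) / 6.) if inplace else x * F.relu6(x + 3.) / 6.
```
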
Ross Wightman 5db7452173 Fix visformer in_chans stem handling
4 years ago
Ross Wightman 318360c3f9 Update README.md before merge. Bump version to 0.4.10
4 years ago
Ross Wightman 11ae795e99 Redo LeViT attention bias caching in a way that works with both torchscript and DataParallel
4 years ago
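
The caching approach described above amounts to keying the materialized bias on its device, so DataParallel replicas each build their own copy and torchscript only sees simple container types; a rough sketch, not a line-for-line copy of `levit.py`:

```python
import torch

class AttentionBiasCache(torch.nn.Module):
    # Illustrative module: learnable per-head biases gathered by a fixed index map.
    def __init__(self, num_heads: int, num_offsets: int, idxs: torch.Tensor):
        super().__init__()
        self.attention_biases = torch.nn.Parameter(torch.zeros(num_heads, num_offsets))
        self.register_buffer('attention_bias_idxs', idxs, persistent=False)
        self.ab = {}  # device string -> cached bias tensor

    def get_bias(self, device: torch.device) -> torch.Tensor:
        if self.training:
            # Always recompute in training so gradients flow to the parameter.
            return self.attention_biases[:, self.attention_bias_idxs]
        key = str(device)
        if key not in self.ab:
            # Each replica caches its own copy on its own device.
            self.ab[key] = self.attention_biases[:, self.attention_bias_idxs]
        return self.ab[key]
```
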
Ross Wightman d400f1dbdd Filter test models before creation for backward/torchscript tests
4 years ago
Ross Wightman c4572cc5aa Add Visformer-small weighs, tweak torchscript jit test img size.
4 years ago
Ross Wightman bfc72f75d3 Expand scope of testing for non-std vision transformer / mlp models. Some related cleanup and create fn cleanup for all vision transformer and mlp models. More CoaT weights.
4 years ago
Ross Wightman 18bf520ad1 Add eca_nfnet_l2/l3 defs for future training
4 years ago
Ross Wightman f45de37690 Merge branch 'master' into levit_visformer_rednet
4 years ago
Ross Wightman 23c18a33e4 Add efficientnetv2_rw_m weights trained in PyTorch. 84.8 top-1 @ 416 test. 53M params.
4 years ago
Ross Wightman 5b9c69e80a Add basic training resume based on legacy code
4 years ago
Ross Wightman c2ba229d99 Prep for efficientnetv2_rw_m model weights that started training before the official release.
4 years ago
Ross Wightman 30b9880d06 Minor adjustment, mutable default arg, extra check of valid len...
4 years ago
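
The mutable-default-argument fix mentioned above is the standard Python idiom; a generic illustration (not the exact function the commit touched):

```python
def build_list(items=None):
    # A fresh list per call; a literal `items=[]` default would be shared across calls.
    items = [] if items is None else list(items)
    items.append('entry')
    return items
```
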
Ross Wightman be0abfbcce Merge branch 'master' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-master
4 years ago
Ross Wightman b7de82e835 ConViT cleanup, fix torchscript, bit of reformatting, reuse existing layers.
4 years ago
Ross Wightman 306c86b668 Merge branch 'convit' of https://github.com/amaarora/pytorch-image-models into amaarora-convit
4 years ago
Ross Wightman a569635045 Update twin weights to a copy in GitHub releases for faster dl. Tweak model class comment.
4 years ago
Ross Wightman be99eef9c1 Remove redundant code, cleanup, fix torchscript.
4 years ago
Ross Wightman 5ab372a3ec Merge branch 'master' of https://github.com/abcdvzz/pytorch-image-models into abcdvzz-master
4 years ago
Aman Arora 5db1eb6ba5 Add defaults
4 years ago
Aman Arora 8b1f2e8e1f Remove unused matplotlib import
4 years ago
Aman Arora 40c506ba1e Add ConViT
4 years ago
Alexander Soare 7976019864 extend positional embedding resizing functionality to tnt
4 years ago
Alexander Soare 8086943b6f allow resize positional embeddings to non-square grid
4 years ago
talrid dc1a4efd28 mixer_b16_224_miil, mixer_b16_224_miil_in21k models
4 years ago
Ross Wightman 4210d922d2 Merge branch 'master' into bits_and_tpu
4 years ago
李鑫杰 7b799c4e79 add latest code
4 years ago
Ross Wightman 72ca831dd4 Back to using strings for the enum translation, forgot about import dep
4 years ago
Ross Wightman d5af752117 Add preliminary gMLP and ResMLP impl to Mlp-Mixer
4 years ago
Ross Wightman cbd4ee737f Fix model init for XLA, remove some prints.
4 years ago
李鑫杰 00548b8427 Add Twins
4 years ago
Ross Wightman 74d2829341 Merge branch 'master' into bits_and_tpu
4 years ago
Ross Wightman aa92d7b1c5 Major timm.bits update. Updater and DeviceEnv now dataclasses, after_step closure used, metrics base impl w/ distributed reduce, many tweaks/fixes.
4 years ago
Ross Wightman e7f0db8664 Fix drop/drop_path arg on MLP-Mixer model. Fix #641
4 years ago
Ross Wightman 9a3ae97311 Another set of byoanet models w/ ECA channel + SA + groups
4 years ago
Ross Wightman d53e91218e Fix tf.data options setting for newer TF versions
4 years ago
Ross Wightman 7077f16c6a Change 21k model naming from _21k to _in21k for consistency with existing 21k models.
4 years ago
Ross Wightman 94d4b53352 Add temporary default_cfgs to visformer models so they pass tests
4 years ago
Ross Wightman 3bffc701f1 Merge branch 'master' into levit_visformer_rednet
4 years ago
Ross Wightman ecc7552c5c Add levit, levit_c, and visformer model defs. Largely untested and not finished cleanup.
4 years ago
Ross Wightman 165fb354b2 Add initial RedNet model / Involution layer impl for testing
4 years ago
Ross Wightman 328249f11a Update README, tweak fine-tune effv2 model names.
4 years ago
Ross Wightman c4f482a08b EfficientNetV2 official impl w/ weights ported from TF. Cleanup/refactor of related EfficientNet classes and models.
4 years ago
Ross Wightman 4fbc32d3d0 Fix crop_pct for cait models.
4 years ago
Ross Wightman 715519a5ef Rethink name of patch embed grid info
4 years ago
Ross Wightman b2c305c2aa Move Mlp and PatchEmbed modules into layers. Being used in lots of models now...
4 years ago
Ross Wightman 3ba6b55cb2 More adjustments to ByoaNet models for further experiments.
4 years ago
Ross Wightman 5fcddb96a8 Merge branch 'master' into cait
4 years ago
Ross Wightman 3db12b4b6a Finish CaiT cleanup
4 years ago
Ross Wightman 2d8b09fe8b Add official pretrained weights to MLP-Mixer, complete model cfgs.
4 years ago
Ross Wightman 12efffa6b1 Initial MLP-Mixer attempt...
4 years ago
Ross Wightman 0721559511 Improved (hopefully) init for SA/SA-like layers used in ByoaNets
4 years ago
Ross Wightman d5473c17f7 Fix incorrect name of shortcut/identity paths in many residual nets. Inherited from naming in old old torchvision, long fixed there.
4 years ago
Ross Wightman 0d87650fea Remove filter hack from BlurPool w/ non-persistent buffer. Use BlurPool2d instead of AntiAliasing.. for TResNet. Breaks PyTorch < 1.6.
4 years ago
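
The "Breaks PyTorch < 1.6" note comes from `register_buffer(..., persistent=False)`, which first appeared in 1.6; a condensed sketch of the pattern (the real `BlurPool2d` supports arbitrary filter sizes):

```python
import torch
import torch.nn.functional as F

class BlurPool2dSketch(torch.nn.Module):
    def __init__(self, channels: int, stride: int = 2):
        super().__init__()
        coeffs = torch.tensor([1., 2., 1.])
        filt = (coeffs[:, None] * coeffs[None, :]) / 16.    # 3x3 binomial filter
        filt = filt[None, None].repeat(channels, 1, 1, 1)   # (C, 1, 3, 3) depthwise
        # persistent=False keeps the fixed filter out of the state_dict (torch >= 1.6)
        self.register_buffer('filt', filt, persistent=False)
        self.stride = stride
        self.channels = channels

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode='reflect')
        return F.conv2d(x, self.filt, stride=self.stride, groups=self.channels)
```
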
Ross Wightman ddc743fdf8 Update ResNet-RS models to EMA weights
4 years ago
Ross Wightman 08d60f4a9a resnetrs50 pool sizing wrong
4 years ago
Ross Wightman 1daa15ecc3 Initial Cait commit. Still some cleanup to do.
4 years ago
Ross Wightman 67d0665b46 Post ResNet-RS merge cleanup. Add weight urls, adjust train/test/crop pct.
4 years ago
Aman Arora 560eae38f5 [WIP] Add ResNet-RS models (#554)
4 years ago
Ross Wightman 9cc7dda6e5 Fixup byoanet configs to pass unit tests. Add swin_attn and swinnet26t model for testing.
4 years ago
Ross Wightman e15c3886ba Default lambda r=7. Define '26t' stage 4/5 256x256 variants for all of bot/halo/lambda nets for experiment. Add resnet50t for exp. Fix a few comments.
4 years ago
Ross Wightman e5e15754c9 Fix coat first conv ident
4 years ago
Ross Wightman 76739a7589 CoaT merge. Bit of formatting, fix torchscript (for non features), remove einops/einsum dep, add pretrained weight hub (url) support.
4 years ago
Ross Wightman 026430c083 Merge branch 'master' of https://github.com/morizin/pytorch-image-models-1 into morizin-master
4 years ago
Ross Wightman a0492e3b48 A few miil weights naming tweaks to improve compat with model registry and filtering wildcards.
4 years ago
talrid 8c1f03e56c comment
4 years ago
talrid 19e1b67a84 old spaces
4 years ago
talrid a443865876 update naming and scores
4 years ago
talrid cf0e371594 84_0
4 years ago
talrid 0968bdeca3 vit, tresnet and mobilenetV3 ImageNet-21K-P weights
4 years ago
morizin 1e3b6d4dfc Update __init__.py
4 years ago
morizin fd022fd6a2 Update __init__.py
4 years ago
morizin c2d5087eae Add files via upload
4 years ago
Ross Wightman 938716c753 Fix import issue, use devenv for dist info in parser_tfds
4 years ago
Ross Wightman 76de984a5f Fix some bugs with XLA support, logger, add hacky xla dist launch script since torch.dist.launch doesn't work
4 years ago
Ross Wightman 12d9a6d4d2 First timm.bits commit, add initial abstractions, WIP updates to train, val... some of it working
4 years ago
Norman Mu 79640fcc1f Enable uniform augmentation magnitude sampling and set AugMix default
4 years ago
Ross Wightman c1cf9712fc Add updated EfficientNet-V2S weights, 83.8 @ 384x384 test. Add PyTorch trained EfficientNet-B4 weights, 83.4 @ 384x384 test. Tweak non TF EfficientNet B1-B4 train/test res scaling.
4 years ago
Ross Wightman e8a64fb881 Test input size for efficientnet_v2s was wrong in last results run
4 years ago
Ross Wightman 2df77ee5cb Fix torchscript compat and features_only behaviour in GhostNet PR. A few minor formatting changes. Reuse existing layers.
4 years ago
Ross Wightman d793deb51a Merge branch 'master' of https://github.com/iamhankai/pytorch-image-models into iamhankai-master
4 years ago
Ross Wightman e685618f45 Merge pull request #550 from amaarora/wandb
4 years ago
Ross Wightman f606c45c38 Add Swin Transformer models from https://github.com/microsoft/Swin-Transformer
4 years ago
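
Typical usage once the Swin models are registered, assuming the upstream model naming is kept (the model name below is the usual timm one, not verified against this exact commit):

```python
import timm
import torch

model = timm.create_model('swin_base_patch4_window7_224', pretrained=True)
model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))   # (1, 1000) logits
```
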
iamhankai de445e7827 Add GhostNet
4 years ago
Ross Wightman 5a196dddf6 Update README.md with latest, bump version to 0.4.8
4 years ago
Ross Wightman b3d7580df1 Update ByoaNet comments. Fix first stem feat chs for ByobNet.
4 years ago
Ross Wightman 16f7aa9f54 Add default_cfg options for min_input_size / fixed_input_size, queries in model registry, and use for testing self-attn models
4 years ago
Ross Wightman 4e4b863b15 Missed norm.py
4 years ago
Ross Wightman 7c97e66f7c Remove commented code, add more consistent seed fn
4 years ago
Ross Wightman 364dd6a58e Merge branch 'master' into byoanet-self_attn
4 years ago
Ross Wightman ce62f96d4d ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments
4 years ago
Ross Wightman cd3dc4979f Fix adabelief imports, remove prints, preserve memory format is the default arg for zeros_like
4 years ago
Ross Wightman 21812d33aa Add prelim efficientnet_v2s weights from 224x224 train, eval 83.3 @ 288. Add eca_nfnet_l1 weights, train at 256, eval 84 @ 320.
4 years ago
Aman Arora 5772c55c57 Make wandb optional
4 years ago
Aman Arora f54897cc0b make wandb not required but rather optional as huggingface_hub
4 years ago
Aman Arora 3f028ebc0f import wandb in summary.py
4 years ago
Aman Arora 624c9b6949 log to wandb only if using wandb
4 years ago
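
The optional-wandb handling in the commits above follows the usual optional-dependency pattern; a minimal sketch with an illustrative flag name:

```python
try:
    import wandb
    HAS_WANDB = True
except ImportError:
    wandb = None
    HAS_WANDB = False

def log_metrics(metrics: dict, log_wandb: bool = False):
    # Only log if the user asked for wandb and the package is installed.
    if log_wandb and HAS_WANDB:
        wandb.log(metrics)
```
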
juntang addfc7c1ac adabelief
4 years ago
Ross Wightman fb896c0b26 Update some comments re preliminary EfficientNet-V2 assumptions
4 years ago
Ross Wightman 2b49ab7a36 Fix ResNetV2 pretrained classifier issue. Fixes #540
4 years ago
Ross Wightman de9dff933a EfficientNet-V2S preliminary model def (for experimentation)
4 years ago
Ross Wightman 37c71a5609 Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch.
4 years ago
Ross Wightman 2bb65bd875 Wrong default_cfg pool_size for L1
4 years ago
Ross Wightman bf2ca6bdf4 Merge jax and original weight init
4 years ago
Ross Wightman acbd698c83 Update README.md with updates. Small tweak to head_dist handling.
4 years ago
Ross Wightman 9071568f0e Add weights for SE NFNet-L0 model, rename nfnet_l0b -> nfnet_l0. 82.75 top-1 @ 288. Add nfnet_l1 model def for training.
4 years ago
Ross Wightman c468c47a9c Add regnety_160 weights from DeiT teacher model, update that and my regnety_032 weights to use higher test size.
4 years ago
Ross Wightman 288682796f Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7
4 years ago
Ross Wightman ea9c9550b2 Fully move ViT hybrids to their own file, including embedding module. Remove some extra DeiT models that were for benchmarking only.
4 years ago
Ross Wightman a5310a3451 Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update
4 years ago
Ross Wightman 7953e5d11a Fix pos_embed scaling for ViT and num_classes != 1000 for pretrained distilled deit and pit models. Fix #426 and fix #433
4 years ago