Commit Graph

490 Commits (3581affb7769ec3554b2ea3d242c83db3f92a960)

Author SHA1 Message Date
Ross Wightman c2f02b08b8 Merge remote-tracking branch 'origin/attn_update' into bits_and_tpu
4 years ago
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related
4 years ago
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...
4 years ago
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test.
4 years ago
Ross Wightman 76881d207b Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4
4 years ago
Ross Wightman 484e61648d Adding the attn series weights, tweaking model names, comments...
4 years ago
Ross Wightman 492c0a4e20 Update HaloAttn comment
4 years ago
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn, fast like as_strided but with more clarity
4 years ago
Ross Wightman 2568ffc5ef Merge branch 'master' into attn_update
4 years ago
Ross Wightman 708d87a813 Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test.
4 years ago
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
4 years ago
Ross Wightman a8b65695f1 Add resnet26ts and resnext26ts models for non-attn baselines
4 years ago
Ross Wightman a5a542f17d Fix typo
4 years ago
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments:
4 years ago
Ross Wightman c06c739901 Merge branch 'master' into bits_and_tpu
4 years ago
Ross Wightman 40457e5691 Transforms, augmentation work for bits, add RandomErasing support for XLA (pushing into transforms), revamp of transform/preproc config, etc ongoing...
4 years ago
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scal (mul) mode.
4 years ago
Ross Wightman d3f7440650 Add EfficientNetV2 XL model defs
4 years ago
Ross Wightman 72b227dcf5 Merge pull request #750 from drjinying/master
4 years ago
Ross Wightman 748ab852ca Allow act_layer switch for xcit, fix in_chans for some variants
4 years ago
Ying Jin 20b2d4b69d Use bicubic interpolation in resize_pos_embed()
4 years ago
Ross Wightman d3255adf8e Merge branch 'xcit' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-xcit
4 years ago
Ross Wightman f8039c7492 Fix gc effv2 model cfg name
4 years ago
Alexander Soare 3a55a30ed1 add notes from author
4 years ago
Alexander Soare 899cf84ccc bug fix - missing _dist postfix for many of the 224_dist models
4 years ago
Alexander Soare 623e8b8eb8 wip xcit
4 years ago
Ross Wightman 392368e210 Add efficientnetv2_rw_t defs w/ weights, and gc variant, as well as gcresnet26ts for experiments. Version 0.4.13
4 years ago
Ross Wightman 6d8272e92c Add SAM pretrained model defs/weights for ViT B16 and B32 models.
4 years ago
Ross Wightman ee4d8fc69a Remove unnecessary line from nest post refactor
4 years ago
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed.
4 years ago
Ross Wightman 81cd6863c8 Move aggregation (convpool) for nest into NestLevel, cleanup and enable features_only use. Finalize weight url.
4 years ago
Ross Wightman 6ae0ac6420 Merge branch 'nested_transformer' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-nested_transformer
4 years ago
Alexander Soare 7b8a0017f1 wip to review
4 years ago
Alexander Soare b11d949a06 wip checkpoint with some feature extraction work
4 years ago
Alexander Soare 23bb72ce5e nested_transformer wip
4 years ago
Ross Wightman 766b4d3262 Fix features for resnetv2_50t
4 years ago
Ross Wightman e8045e712f Fix BatchNorm for ResNetV2 non GN models, add more ResNetV2 model defs for future experimentation, fix zero_init of last residual for pre-act.
4 years ago
Ross Wightman 20a2be14c3 Add gMLP-S weights, 79.6 top-1
4 years ago
Ross Wightman 85f894e03d Fix ViT in21k representation (pre_logits) layer handling across old and new npz checkpoints
4 years ago
Ross Wightman b41cffaa93 Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling details on Mlp, GhostNet, Levit. Should fix #713
4 years ago
Ross Wightman 9c9755a808 AugReg release
4 years ago
Ross Wightman 381b279785 Add hybrid model fwds back
4 years ago
Ross Wightman 26f04a8e3e Fix a weight link
4 years ago
Ross Wightman 8f4a0222ed Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA
4 years ago
Ross Wightman b319eb5b5d Update ViT weights, more details to be added before merge.
4 years ago
Ross Wightman 8257b86550 Fix up resnetv2 bit/bitm model default res
4 years ago
Ross Wightman 1228f5a3d8 Add BiT distilled 50x1 and teacher 152x2 models from 'A good teacher is patient and consistent' paper.
4 years ago
Ross Wightman 511a8e8c96 Add official ResMLP weights.
4 years ago
Ross Wightman b9cfb64412 Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load.
4 years ago
Ross Wightman 8319e0c373 Add file docstring to std_conv.py
4 years ago