Commit Graph

985 Commits (16d2db7e4b85b1574cc03694d9c12561d618a0f9)

Author SHA1 Message Date
Christoph Reich 81bf0b4033 Change parameter names to match Swin V1
3 years ago
Christoph Reich f227b88831 Add initials (CR) to model and file
3 years ago
Christoph Reich 90dc74c450 Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm
3 years ago
Ross Wightman 2c3870e107 semobilevit_s for good measure
3 years ago
Ross Wightman bcaeb91b03 Version to 0.6.0, possible interface incompatibilities vs 0.5.x
3 years ago
Ross Wightman 58ba49c8ef Add MobileViT models (w/ ByobNet base). Close #1038.
3 years ago
Ross Wightman 5f81d4de23 Move DeiT to own file, vit getting crowded. Working towards fixing #1029, make pooling interface for transformers and mlp closer to convnets. Still working through some details...
3 years ago
ayasyrev cf57695938 Remove duplicated scheduler noise code
3 years ago
Ross Wightman 95cfc9b3e8 Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman abc9ba2544 Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks.
3 years ago
Ross Wightman 07379c6d5d Add vit_base2_patch32_256 for a model between base_patch16 and patch32 with a slightly larger img size and width
3 years ago
Ross Wightman 447677616f version 0.5.5
3 years ago
Ross Wightman 83b40c5a58 Last batch of small model weights (for now). mobilenetv3_small 050/075/100 and updated mnasnet_small with lambc/lamb optimizer.
3 years ago
Mi-Peng cdcd0a92ca fix lars
3 years ago
Ross Wightman 1aa617cb3b Add AvgPool2d anti-aliasing support to ResNet arch (as per OpenAI CLIP models), add a few blur aa models as well
3 years ago
Ross Wightman f0f9eccda8 Add --fuser arg to train/validate/benchmark scripts to select jit fuser type
3 years ago
Ross Wightman 010b486590 Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg)
3 years ago
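The commit above enables headless models (num_classes=0 in the pretrained cfg). A minimal usage sketch, assuming the standard timm create_model API; the model name and shapes are illustrative:

```python
import timm
import torch

# Create a ViT with no classifier head (num_classes=0), as used for the DINO
# weights above; the forward pass then returns pooled features, not logits.
model = timm.create_model('vit_small_patch16_224', pretrained=False, num_classes=0)
model.eval()

with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))
print(feats.shape)  # (1, embed_dim), e.g. (1, 384) for vit_small
```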
Ross Wightman 738a9cd635 unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090
3 years ago
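For context on the unbiased=False fix above, a sketch (not the exact timm code) of a channels-first LayerNorm computed via torch.var_mean; nn.LayerNorm uses the biased variance, so unbiased=False is needed to match it numerically:

```python
import torch

def layer_norm_var_mean(x: torch.Tensor, weight, bias, eps: float = 1e-6):
    # Illustrative channels-first LayerNorm via torch.var_mean.
    # unbiased=False divides by N, matching nn.LayerNorm's statistics.
    var, mean = torch.var_mean(x, dim=1, unbiased=False, keepdim=True)
    x = (x - mean) / torch.sqrt(var + eps)
    return x * weight[:, None, None] + bias[:, None, None]

x = torch.randn(2, 64, 7, 7)
out = layer_norm_var_mean(x, torch.ones(64), torch.zeros(64))
```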
Ross Wightman e0c4eec4b6 Default conv_mlp to False across the board for ConvNeXt, causing issues on more setups than it's improving right now...
3 years ago
Ross Wightman b669f4a588 Add ConvNeXt 22k->1k fine-tuned and 384 22k-1k fine-tuned weights after testing
3 years ago
Ross Wightman e967c72875 Update README.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper
3 years ago
Ross Wightman 9ca3437178 Add some more small model weights lcnet, mnas, mnv2
3 years ago
Ross Wightman fa6463c936 Version 0.5.4
3 years ago
Ross Wightman fa81164378 Fix stem width for really small mobilenetv3 arch defs
3 years ago
Ross Wightman edd3d73695 Add missing dropout for head reset in ConvNeXt default head
3 years ago
Ross Wightman b093dcb46d Some convnext cleanup, remove in place mul_ for gamma, breaking symbolic trace, cleanup head a bit...
3 years ago
Ross Wightman 18934debc5 Add initial ConvNeXt impl (mods of official code)
3 years ago
Ross Wightman 656757d26b Fix MobileNetV2 head conv size for multiplier < 1.0. Add some missing modification copyrights, fix starting date of some old ones.
3 years ago
Ross Wightman ccfeb06936 Fix out_indices handling breakage, should have left as per vgg approach.
3 years ago
Ross Wightman a9f91483a6 Fix #1078, DarkNet has 6 feature maps. Make vgg and darknet out_indices handling/comments equivalent
3 years ago
Ross Wightman c21b21660d visformer supports spatial feat map, update pool_size in pretrained cfg to match
3 years ago
Ross Wightman 9c11dfd9cb Fix fbnetv3 pretrained cfg changes
3 years ago
Ross Wightman 1406cddc2e FBNetV3 timm trained weights added for b/d/g variants. Update version to 0.5.2 for pypi release.
3 years ago
Ross Wightman 02ae11e526 Leaving repeat aug sampler indices as tensor thrashes worker shared process memory
3 years ago
Ross Wightman 4df51f3932 Add lcnet_100 and mnasnet_small weights
3 years ago
Ross Wightman 5ccf682a8f Remove deprecated bn-tf train arg and create_model handler. Add evos/evob models back into fx test filter until norm_norm_norm branch merged.
3 years ago
Ross Wightman b9a715c86a Add more small model defs for MobileNetV3/V2/LCNet
3 years ago
Ross Wightman b27c21b09a Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster
3 years ago
Ross Wightman 214c84a235 Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed
3 years ago
Ross Wightman 72b57163d1 Merge branch 'master' of https://github.com/mrT23/pytorch-image-models into mrT23-master
3 years ago
Ross Wightman de5fa791c6 Merge branch 'master' into norm_norm_norm
3 years ago
Ross Wightman 26ff57f953 Add more small model defs for MobileNetV3/V2/LCNet
3 years ago
Hyeongchan Kim a0b2657497 Use `torch.repeat_interleave()` to generate repeated indices faster (#1058)
3 years ago
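A tiny illustration of the change above: building the repeated indices used by a repeat-augmentation sampler with one vectorized call instead of a Python loop (values are arbitrary):

```python
import torch

# torch.repeat_interleave produces [0, 0, 0, 1, 1, 1, ...] directly.
indices = torch.arange(8)
repeated = torch.repeat_interleave(indices, repeats=3)
print(repeated.tolist())  # [0, 0, 0, 1, 1, 1, 2, 2, 2, ...]
```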
Ross Wightman 450ac6a0f5 Post merge tinynet fixes for pool_size, feature extraction
3 years ago
Ross Wightman a04164cd75 Merge branch 'tinynet' of https://github.com/rsomani95/pytorch-image-models into rsomani95-tinynet
3 years ago
Ross Wightman 8a93ce6ee3 Fix regnetv/w tests, refactor regnet generator code a bit
3 years ago
Ross Wightman 4dec8c8087 Fix skip path regression for updated EfficientNet and RegNet def. Add Pre-Act RegNet support (experimental). Remove BN-TF flag. Add efficientnet_b0_g8_gn model.
3 years ago
Ross Wightman a52a614475 Remove layer experiment which should not have been added
3 years ago
Ross Wightman ab49d275de Significant norm update
3 years ago
Rahul Somani 31bcd36e46 add tinynet models
3 years ago
KAI ZHAO b4b8d1ec18 fix hard-coded strides
3 years ago
Ross Wightman d04f2f1377 Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster
3 years ago
Ross Wightman 834a9ec721 Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed
3 years ago
Ross Wightman 78912b6375 Updated EvoNorm implementations with some experimentation. Add FilterResponseNorm. Updated RegnetZ and ResNetV2 model defs for trials.
3 years ago
Ross Wightman 55adfbeb8d Add commented code to increase open file limit via Python (for TFDS dataset building)
3 years ago
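A sketch of the idea in the commit above, raising the per-process open-file limit from Python (handy when TFDS dataset building opens many shard files at once); the target value is illustrative:

```python
import resource

# Raise the soft open-file limit toward the hard limit; an unprivileged
# process cannot exceed the hard limit.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (min(8192, hard), hard))
```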
talrid c11f4c3218 support CNNs
3 years ago
mrT23 d6701d8a81 Merge branch 'rwightman:master' into master
3 years ago
qwertyforce ccb3815360 update arxiv link
3 years ago
Ross Wightman 3dc71695bf Merge pull request #989 from martinsbruveris/feat/resmlp-dino
3 years ago
Ross Wightman 480c676ffa Fix FX breaking assert in evonorm
3 years ago
Martins Bruveris 85c5ff26d7 Added DINO pretrained ResMLP models.
3 years ago
Ross Wightman d633a014e6 Post merge cleanup. Fix potential security issue passing kwargs directly through to serialized web data.
3 years ago
Nathan Raw b18c9e323b Update helpers.py
3 years ago
Nathan Raw 308d0b9554 Merge branch 'master' into hf-save-and-push
3 years ago
Ross Wightman f0507f6da6 Fix k_decay default arg != 1.0 in poly scheduler
3 years ago
talrid 41559247e9 use_ml_decoder_head
3 years ago
Ross Wightman 1f53db2ece Updated lamhalobotnet weights, 81.5 top-1
3 years ago
Ross Wightman 15ef108eb4 Add better halo2botnet50ts weights, 82 top-1 @ 256
3 years ago
Ross Wightman 734b2244fe Add RegNetZ-D8 (83.5 @ 256, 84 @ 320) and RegNetZ-E8 (84.5 @ 256, 85 @ 320) weights. Update names of existing RegZ models to include group size.
3 years ago
Ross Wightman 93cc08fdc5 Make evonorm variables 1d to match other PyTorch norm layers, will break weight compat for any existing use (likely minimal, easy to fix).
3 years ago
Ross Wightman af607b75cc Prep a set of ResNetV2 models with GroupNorm, EvoNormB0, EvoNormS0 for BN free model experiments on TPU and IPU
3 years ago
Ross Wightman c976a410d9 Add ResNet-50 w/ GN (resnet50_gn) and SEBotNet-33-TS (sebotnet33ts_256) model defs and weights. Update halonet50ts weights w/ slightly better variant in1k val, more robust to test sets.
3 years ago
Ross Wightman f2006b2437 Cleanup qkv_bias cat in beit model so it can be traced
3 years ago
Ross Wightman 1076a65df1 Minor post FX merge cleanup
3 years ago
Ross Wightman 32c9937dec Merge branch 'fx-feature-extract-new' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-fx-feature-extract-new
3 years ago
Ross Wightman 78b36bf46c Places365 doesn't exist in some still-used torchvision versions
3 years ago
Alexander Soare 65d827c7a6 rename notrace registration and standardize trace_utils imports
3 years ago
Ross Wightman 9b2daf2a35 Add ResNeXt-50 weights 81.1 top-1 @ 224, 82 @ 288 with A1 'high aug' recipe
3 years ago
Ross Wightman 9b5d6dc7e2 Merge branch 'add-vit-b8' of https://github.com/martinsbruveris/pytorch-image-models into martinsbruveris-add-vit-b8
3 years ago
Ross Wightman cfa414cad2 Matching two bits_and_tpu changes for TFDs wrapper
3 years ago
Martins Bruveris 5220711d87 Added B/8 models to ViT.
3 years ago
Alexander Soare 0262a0e8e1 fx ready for review
3 years ago
Alexander Soare d2994016e9 Add try/except guards
3 years ago
Alexander Soare b25ff96768 wip - pre-rebase
3 years ago
Alexander Soare e051dce354 Make all models FX traceable
3 years ago
Alexander Soare cf4561ca72 Add FX based FeatureGraphNet capability
3 years ago
Alexander Soare 0149ec30d7 wip - attempting to rebase
3 years ago
Alexander Soare 02c3a75a45 wip - make it possible to use fx graph in train and eval mode
3 years ago
Alexander Soare bc3d4eb403 wip -rebase
3 years ago
Alexander Soare ab3ac3f25b Add FX based FeatureGraphNet capability
3 years ago
Ross Wightman 9ec3210c2d More TFDS parser cleanup, support improved TFDS even_split impl (on tfds-nightly only currently).
3 years ago
Ross Wightman ba65dfe2c6 Dataset work
3 years ago
Ross Wightman ddc29da974 Add ResNet101 and ResNet152 weights from higher aug RSB recipes. 81.93 and 82.82 top-1 at 224x224.
3 years ago
Ross Wightman b328e56f49 Update eca_halonext26ts weights to a better set
3 years ago
Ross Wightman 2ddef942b9 Better fix for #954 that doesn't break torchscript, pull torch._assert into timm namespace when it exists
3 years ago
Ross Wightman 4f0f9cb348 Fix #954 by bringing traceable _assert into timm to allow compat w/ PyTorch < 1.8
3 years ago
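The two commits above pull a traceable assert into timm for compatibility with older PyTorch. A minimal sketch of that pattern; the helper and its use are illustrative:

```python
import torch

# Use the FX/torchscript-friendly torch._assert when this PyTorch provides it
# (>= 1.8), otherwise fall back to a plain assert.
try:
    _assert = torch._assert
except AttributeError:
    def _assert(condition: bool, message: str):
        assert condition, message

def check_divisible(dim: int, num_heads: int):
    _assert(dim % num_heads == 0, 'dim must be divisible by num_heads')
```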
Ross Wightman a41de1f666 Add interpolation mode handling to transforms. Removes InterpolationMode warning. Works for torchvision versions w/ and w/o InterpolationMode enum. Fix #738.
3 years ago
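A sketch of version-tolerant interpolation handling like the commit above describes: newer torchvision expects transforms.InterpolationMode, older versions expect PIL constants. The helper name is illustrative:

```python
from PIL import Image

try:
    from torchvision.transforms import InterpolationMode

    def str_to_interp(mode: str):
        return {'nearest': InterpolationMode.NEAREST,
                'bilinear': InterpolationMode.BILINEAR,
                'bicubic': InterpolationMode.BICUBIC}[mode]
except ImportError:
    # Older torchvision: fall back to PIL resampling constants.
    def str_to_interp(mode: str):
        return {'nearest': Image.NEAREST,
                'bilinear': Image.BILINEAR,
                'bicubic': Image.BICUBIC}[mode]
```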
Ross Wightman ed41d32637 Add repr to auto_augment and random_erasing impl
3 years ago
Ross Wightman ae72d009fa Add weights for lambda_resnet50ts, halo2botnet50ts, lamhalobotnet50ts, updated halonet50ts
3 years ago
Ross Wightman b745d30a3e Fix formatting of last commit
3 years ago
Ross Wightman 3478f1d7f1 Traceability fix for vit models for some experiments
3 years ago
Ross Wightman f658a72e72 Cleanup re-use of Dropout modules in Mlp modules after some twitter feedback :p
3 years ago
Thomas Viehmann f805ba86d9 use .unbind instead of explicitly listing the indices
3 years ago
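An illustrative attention snippet for the .unbind change above: after reshaping the combined qkv projection, .unbind(0) splits it into q, k, v without indexing qkv[0], qkv[1], qkv[2] separately. Shapes are example values:

```python
import torch

B, N, C, num_heads = 2, 197, 768, 12
qkv = torch.randn(B, N, 3 * C)
qkv = qkv.reshape(B, N, 3, num_heads, C // num_heads).permute(2, 0, 3, 1, 4)
q, k, v = qkv.unbind(0)  # each: (B, num_heads, N, head_dim)
```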
Ross Wightman 57992509f9 Fix some formatting in utils/model.py
3 years ago
Ross Wightman 0fe4fd3f1f add d8 and e8 regnetz models with group size 8
3 years ago
Ross Wightman 25e7c8c5e5 Update broken resnetv2_50 weight url, add resnetv1_101 a1h recipe weights for 224x224 train
3 years ago
Ross Wightman b6caa356d2 Fixed eca_botnext26ts_256 weights added, 79.27
3 years ago
Ross Wightman c02334d9fa Add weights for regnetz_d and haloregnetz_c, update regnetz_c weights. Add commented PyTorch XLA code for halo attention
3 years ago
Ross Wightman 02daf2ab94 Add option to include relative pos embedding in the attention scaling as per references. See discussion #912
3 years ago
masafumi 047a5ec05f Fix bug where Mixup does not work with device=cpu
3 years ago
Ross Wightman cd34913278 Remove some outdated comments, botnet networks working great now.
3 years ago
Ross Wightman 6ed4cdccca Update lambda_resnet26t weights with better set
3 years ago
ICLR Author 44d6d51668 Add ConvMixer
3 years ago
Ross Wightman a85df34993 Update lambda_resnet26rpt weights to 78.9, add better halonet26t weights at 79.1 with tweak to attention dim
3 years ago
Ross Wightman b544ad4d3f regnetz model default cfg tweaks
3 years ago
Ross Wightman e5da481073 Small post-merge tweak for freeze/unfreeze, add to __init__ for utils
3 years ago
Ross Wightman 5ca72dcc75 Merge branch 'freeze-functionality' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-freeze-functionality
3 years ago
Ross Wightman e2b8d44ff0 Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
3 years ago
Alexander Soare 431e60c83f Add acknowledgements for freeze_batch_norm inspiration
3 years ago
Ross Wightman fbf59c04ee Change crop ratio on correct resnet50 variant.
3 years ago
Ross Wightman ae1ff5792f Clean a1/a2/3 rsb _0 checkpoints properly, fix v2 loading.
3 years ago
Ross Wightman 93901e992f Version bump to 0.5.0 for pending release post RSB and ATTN updates
3 years ago
Ross Wightman da0d39bedd Update default crop_pct for byoanet
3 years ago
Ross Wightman cc9bedf373 Add initial ResNet Strikes Back weights for ResNet50 and ResNetV2-50 models
3 years ago
Ross Wightman 64495505b7 Add updated lambda resnet26 and botnet26 checkpoints with fixes applied
3 years ago
Ross Wightman b2094f4ee8 support bits checkpoints in avg/load
3 years ago
Ross Wightman 007bc39323 Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios
3 years ago
Alexander Soare 65c3d78b96 Freeze unfreeze functionality finalized. Tests added
3 years ago
Alexander Soare 0cb8ea432c wip
3 years ago
Ross Wightman b1c2e3eb92 Match rel_pos_indices attr rename in conv branch
3 years ago
Ross Wightman b49630a138 Add relative pos embed option to LambdaLayer, fix last transpose/reshape.
3 years ago
Ross Wightman d657e2cc0b Remove dead code line from efficientnet
3 years ago
Ross Wightman 0ca687f224 Make 'regnetz' model experiments closer to actual RegNetZ, bottleneck expansion, expand from in_chs, no shortcut on stride 2, tweak model sizes
3 years ago
leondgarse 51eaf9360d Remove a duplicate layer creation in byobnet.py
3 years ago
Ross Wightman b81e79aae9 Fix bottleneck attn transpose typo, hopefully these train better now..
3 years ago
Ross Wightman 80075b0b8a Add worker_seeding arg to allow selecting old vs updated data loader worker seed for (old) experiment repeatability
3 years ago
Ross Wightman 6478bcd02c Fix regnetz_d conv layer name, use inception mean/std
3 years ago
Ross Wightman 0387e6057e Update binary cross ent impl to use thresholding as an option (convert soft targets from mixup/cutmix to 0, 1)
3 years ago
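A sketch of the idea in the commit above (not the exact timm BinaryCrossEntropy class): dense class indices are expanded to one-hot targets for BCE, and an optional threshold re-binarizes soft mixup/cutmix targets to hard 0/1 labels. Names and values are illustrative:

```python
import torch
import torch.nn.functional as F

def bce_with_threshold(logits, target, num_classes, threshold=None):
    if target.ndim == 1:
        # Dense class indices -> one-hot targets for BCE.
        target = F.one_hot(target, num_classes).float()
    if threshold is not None:
        # Convert soft mixup/cutmix targets back to hard 0/1 labels.
        target = target.gt(threshold).float()
    return F.binary_cross_entropy_with_logits(logits, target)

logits = torch.randn(4, 10)
hard_targets = torch.randint(0, 10, (4,))
loss = bce_with_threshold(logits, hard_targets, num_classes=10, threshold=0.2)
```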
Ross Wightman f8a63a3b71 Add worker_init_fn to loader for numpy seed per worker
3 years ago
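A minimal sketch of a per-worker NumPy seeding hook like the one referenced above; the function name is illustrative:

```python
import numpy as np
import torch

def worker_init_fn(worker_id: int):
    # Derive a distinct NumPy seed for each DataLoader worker from the worker's
    # torch seed, so numpy-based augmentations don't repeat across workers.
    worker_info = torch.utils.data.get_worker_info()
    np.random.seed(worker_info.seed % (2 ** 32))

# Usage: torch.utils.data.DataLoader(dataset, num_workers=4,
#                                    worker_init_fn=worker_init_fn)
```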
Ross Wightman 515121cca1 Use reshape instead of view in std_conv, causing issues in recent PyTorch in channels_last
3 years ago
Ross Wightman da06cc61d4 ResNetV2 seems to work best without zero_init residual
3 years ago
Ross Wightman 8e11da0ce3 Add experimental RegNetZ(ish) models for training / perf trials.
3 years ago
Alexander Soare 6bbc50beb4 make it possible to provide norm_layer via create_model
3 years ago
nateraw adcb74f87f 🎨 Import load_state_dict_from_url directly
3 years ago
nateraw e65a2cba3d 🎨 cleanup and add a couple comments
3 years ago
nateraw 2b6ade24b3 🎨 write model card to enable inference
3 years ago
Ross Wightman cf5ac2800c BotNet models were still off, remove weights for bad configs. Add good SE-HaloNet33-TS weights.
3 years ago
Ross Wightman 24720abe3b Merge branch 'master' into attn_update
3 years ago
Ross Wightman 1c9284c640 Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now
3 years ago
Ross Wightman f8a215cfe6 A few more crossvit tweaks, fix training w/ no_weight_decay names, add crop option for scaling, adjust default crop_pct for large img size to 1.0 for better results
3 years ago
Ross Wightman 7ab2491ab7 Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes
3 years ago
Ross Wightman f1808e0970 Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests
3 years ago
Ross Wightman 4027412757 Add resnet33ts weights, update resnext26ts baseline weights
3 years ago
Richard Chen 9fe5798bee fix bug for reset classifier and fix for validating the dimension
3 years ago
Richard Chen 3718c5a5bd fix loading pretrained model
3 years ago
Richard Chen bb50b69a57 fix for torch script
3 years ago
nateraw abf9d51bc3 🚧 wip
3 years ago
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related
3 years ago
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...
3 years ago
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test.
3 years ago
Ross Wightman 76881d207b Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4
3 years ago
Ross Wightman 484e61648d Adding the attn series weights, tweaking model names, comments...
3 years ago
Ross Wightman fb94350896 Update training script and loader factory to allow use of scheduler updates, repeat augment, and bce loss
3 years ago
Ross Wightman f262137ff2 Add RepeatAugSampler as per DeiT RASampler impl, showing promise for current (distributed) training experiments.
3 years ago
Ross Wightman ba9c1108a1 Add a BCE loss impl that converts dense targets to sparse w/ smoothing as an alternate to CE w/ smoothing. For training experiments.
3 years ago
Ross Wightman 29a37e23ee LR scheduler update:
3 years ago
nateraw 28d2841acf 💄 apply isort
3 years ago
Ross Wightman 492c0a4e20 Update HaloAttn comment
3 years ago
nateraw e72c989973 add ability to push to hf hub
3 years ago
Richard Chen 7ab9d4555c add crossvit
3 years ago
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn, fast like as_strided but more clarity
3 years ago
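An illustrative block-extraction snippet in the spirit of the HaloAttn change above: two chained unfold() calls slice an (H, W) feature map into non-overlapping blocks without resorting to as_strided. Sizes are arbitrary examples:

```python
import torch

B, C, H, W = 1, 8, 16, 16
block = 4
x = torch.randn(B, C, H, W)
# unfold dim 2 then dim 3 -> (B, C, H//block, W//block, block, block)
blocks = x.unfold(2, block, block).unfold(3, block, block)
print(blocks.shape)  # torch.Size([1, 8, 4, 4, 4, 4])
```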
Ross Wightman 78933122c9 Fix silly typo
3 years ago
Ross Wightman 2568ffc5ef Merge branch 'master' into attn_update
3 years ago
Ross Wightman 708d87a813 Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test.
3 years ago
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
3 years ago
Ross Wightman a8b65695f1 Add resnet26ts and resnext26ts models for non-attn baselines
3 years ago
Ross Wightman a5a542f17d Fix typo
3 years ago
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments:
3 years ago
Ross Wightman d667351eac Tweak accuracy topk safety. Fix #807
3 years ago
Yohann Lereclus 35c9740826 Fix accuracy when topk > num_classes
3 years ago
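A sketch of the guarded top-k accuracy referenced in the two commits above: clamp k to the number of classes so asking for top-5 on a 3-class output doesn't raise. This is an illustration under that assumption, not the exact timm code:

```python
import torch

def accuracy(output, target, topk=(1, 5)):
    # Clamp the largest requested k to the number of classes.
    maxk = min(max(topk), output.size(1))
    _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
    correct = pred.t().eq(target.reshape(1, -1))
    return [correct[:min(k, maxk)].flatten().float().sum() * 100. / target.size(0)
            for k in topk]

out = torch.randn(8, 3)          # only 3 classes
tgt = torch.randint(0, 3, (8,))
top1, top5 = accuracy(out, tgt)  # top-5 silently falls back to top-3
```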
Ross Wightman a16a753852 Add lamb/lars to optim init imports, remove stray comment
3 years ago
Ross Wightman c207e02782 MOAR optimizer changes. Woo!
3 years ago
Ross Wightman a426511c95 More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars.
3 years ago
Ross Wightman 9541f4963b One more scalar -> tensor fix for lamb optimizer
3 years ago
Ross Wightman 8f68193c91 Update lamb.py comment
3 years ago
Ross Wightman 4d284017b8 Merge pull request #813 from rwightman/opt_cleanup
3 years ago
Ross Wightman a6af48be64 add madgradw optimizer
3 years ago
Ross Wightman 55fb5eedf6 Remove experiment from lamb impl
3 years ago
Ross Wightman 8a9eca5157 A few optimizer comments, dead import, missing import
3 years ago
Ross Wightman ac469b50da Optimizer improvements, additions, cleanup
3 years ago
Sepehr Sameni abf3e044bb Update scheduler_factory.py
3 years ago
Ross Wightman 3cdaf5ed56 Add `mmax` config key to auto_augment for increasing upper bound of RandAugment magnitude beyond 10. Make AugMix uniform sampling default not override config setting.
3 years ago
Ross Wightman 1042b8a146 Add non fused LAMB optimizer option
3 years ago
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
3 years ago
Ross Wightman d3f7440650 Add EfficientNetV2 XL model defs
3 years ago
Ross Wightman 72b227dcf5 Merge pull request #750 from drjinying/master
3 years ago
Ross Wightman 2907c1f967 Merge pull request #746 from samarth4149/master
3 years ago
Ross Wightman 748ab852ca Allow act_layer switch for xcit, fix in_chans for some variants
3 years ago
Ying Jin 20b2d4b69d Use bicubic interpolation in resize_pos_embed()
3 years ago
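For the bicubic resize_pos_embed() change above, a simplified sketch of the core of position-embedding resizing (class-token handling omitted; function name and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def resize_pos_embed_grid(pos_embed, new_hw):
    # Reshape the (1, N, C) grid tokens to 2D, interpolate with bicubic mode,
    # then flatten back to token form.
    _, N, C = pos_embed.shape
    old = int(N ** 0.5)
    grid = pos_embed.reshape(1, old, old, C).permute(0, 3, 1, 2)
    grid = F.interpolate(grid, size=new_hw, mode='bicubic', align_corners=False)
    return grid.permute(0, 2, 3, 1).reshape(1, new_hw[0] * new_hw[1], C)

pe = torch.randn(1, 14 * 14, 768)
pe_384 = resize_pos_embed_grid(pe, (24, 24))  # e.g. 224 -> 384 with patch 16
```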
Ross Wightman d3255adf8e Merge branch 'xcit' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-xcit
3 years ago
Ross Wightman f8039c7492 Fix gc effv2 model cfg name
3 years ago
Alexander Soare 3a55a30ed1 add notes from author
3 years ago
Alexander Soare 899cf84ccc bug fix - missing _dist postfix for many of the 224_dist models
3 years ago
Alexander Soare 623e8b8eb8 wip xcit
3 years ago
Ross Wightman 392368e210 Add efficientnetv2_rw_t defs w/ weights, and gc variant, as well as gcresnet26ts for experiments. Version 0.4.13
3 years ago
samarth daab57a6d9 1. Added a simple multi step LR scheduler
3 years ago
Ross Wightman 6d8272e92c Add SAM pretrained model defs/weights for ViT B16 and B32 models.
3 years ago
Ross Wightman ee4d8fc69a Remove unnecessary line from nest post refactor
3 years ago
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed.
3 years ago
Ross Wightman 81cd6863c8 Move aggregation (convpool) for nest into NestLevel, cleanup and enable features_only use. Finalize weight url.
3 years ago
Ross Wightman 6ae0ac6420 Merge branch 'nested_transformer' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-nested_transformer
3 years ago
Alexander Soare 7b8a0017f1 wip to review
3 years ago
Alexander Soare b11d949a06 wip checkpoint with some feature extraction work
3 years ago
Alexander Soare 23bb72ce5e nested_transformer wip
3 years ago
Ross Wightman 766b4d3262 Fix features for resnetv2_50t
3 years ago
Ross Wightman e8045e712f Fix BatchNorm for ResNetV2 non GN models, add more ResNetV2 model defs for future experimentation, fix zero_init of last residual for pre-act.
3 years ago
Ross Wightman 20a2be14c3 Add gMLP-S weights, 79.6 top-1
3 years ago
Ross Wightman 85f894e03d Fix ViT in21k representation (pre_logits) layer handling across old and new npz checkpoints
3 years ago
Ross Wightman b41cffaa93 Fix a few issues loading pretrained vit/bit npz weights w/ num_classes=0 __init__ arg. Missed a few other small classifier handling detail on Mlp, GhostNet, Levit. Should fix #713
3 years ago
Ross Wightman 9c9755a808 AugReg release
3 years ago
Ross Wightman 381b279785 Add hybrid model fwds back
3 years ago
Ross Wightman 26f04a8e3e Fix a weight link
3 years ago
Ross Wightman 8f4a0222ed Add GMixer-24 MLP model weights, trained w/ TPU + PyTorch XLA
3 years ago
Ross Wightman 4c09a2f169 Bump version 0.4.12
3 years ago
Ross Wightman b319eb5b5d Update ViT weights, more details to be added before merge.
3 years ago
Ross Wightman 8257b86550 Fix up resnetv2 bit/bitm model default res
3 years ago
Ross Wightman 1228f5a3d8 Add BiT distilled 50x1 and teacher 152x2 models from 'A good teacher is patient and consistent' paper.
3 years ago
Ross Wightman 511a8e8c96 Add official ResMLP weights.
3 years ago
Ross Wightman b9cfb64412 Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load.
3 years ago
Ross Wightman 8319e0c373 Add file docstring to std_conv.py
3 years ago
Ross Wightman 4d96165989 Merge branch 'master' into cleanup_xla_model_fixes
3 years ago
Ross Wightman 8880f696b6 Refactoring, cleanup, improved test coverage.
3 years ago
Ross Wightman ba2ca4b464 One codepath for stdconv, switch layernorm to batchnorm so gain included. Tweak epsilon values for nfnet, resnetv2, vit hybrid.
3 years ago
Ross Wightman b7a568f065 Fix torchscript issue in bat
3 years ago
Ross Wightman d17b374f0f Minimum input_size needed to be higher
3 years ago
Ross Wightman b3b90d944d Add min_input_size to bat_resnext to prevent test breakage.
3 years ago
Ross Wightman d413eef1bf Add ResMLP-24 model weights that I trained in PyTorch XLA on TPU-VM. 79.2 top-1.
3 years ago
Ross Wightman 10d8fa4620 Add gc and bat attention resnext26ts variants to byob for test.
3 years ago
Ross Wightman 2f5ed2dec1 Update `init_values` const for 24 and 36 layer ResMLP models
3 years ago
Ross Wightman 8e4ac3549f All ScaledStdConv and StdConv uses default to using F.layernorm so that they work with PyTorch XLA. eps value tweaking is a WIP.
3 years ago
Ross Wightman 2a63d0246b Post merge cleanup
3 years ago
Ross Wightman 45dec179e5 Merge pull request #681 from lmk123568/master
3 years ago
Dongyoon Han ded1671483 Fix stochastic depth working only with a shortcut
3 years ago
Mike b87d98b238 Update convit.py
3 years ago
Ross Wightman 02320c3e3d Bump version to 0.4.11
4 years ago
Ross Wightman bda8ab015a Remove min channels for SelectiveKernel, divisor should cover cases well enough.
4 years ago
Ross Wightman a27f4aec4a Missed args for skresnext w/ refactoring.
4 years ago
Ross Wightman 307a935b79 Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA.
4 years ago
Ross Wightman 8bf63b6c6c Able to use other attn layer in EfficientNet now. Create test ECA + GC B0 configs. Make ECA more configurable.
4 years ago
Ross Wightman bcec14d3b5 Bring EfficientNet SE layer in line with others, pull se_ratio outside of blocks. Allows swapping w/ other attn layers.
4 years ago