Commit Graph

1063 Commits (scaling_vit)

Author SHA1 Message Date
Ross Wightman 803254bb40 Fix spacing misalignment for fast norm path in LayerNorm modules
2 years ago
Ross Wightman 475ecdfa3d cast env var args for dataset readers to int
2 years ago
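The commit above casts environment-variable arguments for the dataset readers to int. A minimal sketch of the pattern, assuming hypothetical variable names (`os.environ` values are always strings, so numeric settings must be cast explicitly):

```python
# Hedged sketch: numeric settings read from the environment arrive as
# strings and must be cast before use. Variable names are hypothetical,
# not the ones used by timm's dataset readers.
import os

def env_int(name, default):
    """Read an int-valued setting from the environment, falling back to default."""
    value = os.environ.get(name)
    return default if value is None else int(value)

num_shards = env_int("DATASET_NUM_SHARDS", 64)  # hypothetical variable name
```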
Hoan Nguyen 39190f5f44 Remove inplace operators when calculating the loss
2 years ago
Ross Wightman 6635bc3f7d Merge pull request #1479 from rwightman/script_cleanup
2 years ago
Ross Wightman 0e6023f032 Merge pull request #1381 from ChristophReich1996/master
2 years ago
Ross Wightman 66f4af7090 Merge remote-tracking branch 'origin/master' into script_cleanup
2 years ago
Ross Wightman d3961536c9 comment some debug logs for WDS dataset
2 years ago
Ross Wightman e9dccc918c Rename dataset/parsers -> dataset/readers, create_parser to create_reader, etc
2 years ago
Ross Wightman 8c28363dc9 Version 0.7.dev0 for master
2 years ago
nateraw 30bafd7347 🔖 add dev suffix to version tag
2 years ago
Ross Wightman f67a7ee8bd Set num_workers in Iterable WDS/TFDS datasets early so sample estimate is correct
2 years ago
Ross Wightman cea8df3d0c Version 0.6.12
2 years ago
Ross Wightman 9914f744dc Add more maxxvit weights including ConvNeXt conv block based experiments.
2 years ago
Ross Wightman b1b024dfed Scheduler update, add v2 factory method, support scheduling on updates instead of just epochs. Add LR to summary csv. Add lr_base scaling calculations to train script. Fix #1168
2 years ago
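The scheduler commit above mentions `lr_base` scaling calculations in the train script: the absolute LR is derived from a base LR defined at a reference global batch size. A minimal sketch under that assumption, with illustrative names (linear scaling is common for SGD, square-root scaling for adaptive optimizers):

```python
# Hedged sketch of batch-size based LR scaling: scale a base LR defined at
# a reference batch size either linearly or by square root. Function and
# argument names are illustrative, not timm's implementation.
import math

def scale_lr(lr_base, global_batch_size, base_size=256, scale="linear"):
    ratio = global_batch_size / base_size
    if scale == "sqrt":
        ratio = math.sqrt(ratio)
    return lr_base * ratio

lr_sgd = scale_lr(1e-3, 1024)                  # linear: 4x the base LR
lr_adaptive = scale_lr(1e-3, 1024, scale="sqrt")  # sqrt: 2x the base LR
```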
Ross Wightman 4f18d6dc5f Fix logs in WDS parser
2 years ago
Mohamed Rashad 8fda68aff6 Fix repo id bug
2 years ago
Ross Wightman b8c8550841 Data improvements. Improve train support for in_chans != 3. Add wds dataset support from bits_and_tpu branch w/ fixes and tweaks. TFDS tweaks.
2 years ago
Alex Fafard 7327792f39 update to support pickle based dictionaries
2 years ago
Ross Wightman 1199c5a1a4 clip_laion2b models need 1e-5 eps for LayerNorm
2 years ago
Ross Wightman 87939e6fab Refactor device handling in scripts, distributed init to be less 'cuda' centric. More device args passed through where needed.
2 years ago
Ross Wightman c88947ad3d Add initial Hugging Face Datasets parser impl.
2 years ago
Ross Wightman e858912e0c Add brute-force checkpoint remapping option
2 years ago
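The brute-force checkpoint remapping option above can be illustrated with a small sketch: when two checkpoints contain the same layers in the same order under different key names, values can be mapped positionally while checking that shapes line up. Plain shape tuples stand in for tensors here; this is an assumption-laden illustration, not the timm implementation:

```python
# Hedged sketch of a brute-force state-dict remap: match entries by
# position, verifying shapes agree. Dicts of name -> shape stand in for
# real tensors; names are illustrative.
def remap_state_dict(src, dst_template):
    """src / dst_template: insertion-ordered dicts of name -> shape."""
    if len(src) != len(dst_template):
        raise ValueError("checkpoints have different numbers of entries")
    out = {}
    for (src_k, src_shape), (dst_k, dst_shape) in zip(src.items(), dst_template.items()):
        if src_shape != dst_shape:
            raise ValueError(f"shape mismatch: {src_k} {src_shape} vs {dst_k} {dst_shape}")
        out[dst_k] = src_shape  # with real tensors, copy the value across
    return out
```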
Ross Wightman b293dfa595 Add CL SE module
2 years ago
Ross Wightman 2a296412be Add Adan optimizer
2 years ago
Ross Wightman 5dc4343308 version 0.6.11
2 years ago
Ross Wightman a383ef99f5 Make huggingface_hub necessary if it's the only source for a pretrained weight
2 years ago
Ross Wightman 33e30f8c8b Remove layer-decay print
2 years ago
Ross Wightman e069249a2d Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7
2 years ago
Ross Wightman 9d65557be3 Fix errant import
2 years ago
Ross Wightman 9709dbaaa9 Adding support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP
2 years ago
Ross Wightman a520da9b49 Update tresnet features_info for v2
2 years ago
Ross Wightman c8ab747bf4 BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter
2 years ago
Ross Wightman 73049dc2aa Fix typo in dla weight update
2 years ago
Ross Wightman 3599c7e6a4 version 0.6.10
2 years ago
Ross Wightman e11efa872d Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights.
2 years ago
Ross Wightman fa8c84eede Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs
2 years ago
Ross Wightman c1b3cea19d Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320
2 years ago
Ross Wightman 914544fc81 Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2
2 years ago
Ross Wightman dc90816f26 Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320
2 years ago
Ross Wightman f489f02ad1 Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original.
2 years ago
Ross Wightman 7f1b223c02 Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default
2 years ago
Ross Wightman e6a4361306 pretrained_cfg entry for mvitv2_small_cls
2 years ago
Ross Wightman f66e5f0e35 Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443
2 years ago
Ross Wightman f1d2160d85 Update a few maxxvit comments, rename PartitionAttention -> PartitionAttenionCl for consistency with other blocks
2 years ago
Ross Wightman eca6f0a25c Fix syntax error (extra dataclass comma) in maxxvit.py
2 years ago
Ross Wightman ff6a919cf5 Add --fast-norm arg to benchmark.py, train.py, validate.py
2 years ago
Ross Wightman 769ab4b98a Clean up no_grad for trunc normal weight inits
2 years ago
Ross Wightman 48e1df8b37 Add norm/norm_act header comments
2 years ago
Ross Wightman 7c2660576d Tweak init for convnext block using maxxvit/coatnext.
2 years ago
Ross Wightman 1d8d6f6072 Fix two default args in DenseNet blocks... fix #1427
2 years ago
Ross Wightman 527f9a4cb2 Updated to correct maxvit_nano weights...
2 years ago
Ross Wightman b2e8426fca Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'.
2 years ago
Ross Wightman 837c68263b For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode
2 years ago
Ross Wightman cac0a4570a More test fixes, pool size for 256x256 maxvit models
2 years ago
Ross Wightman e939ed19b9 Rename internal creation fn for maxvit, has not been just coatnet for a while...
2 years ago
Ross Wightman ffaf97f813 MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies..
2 years ago
Ross Wightman 8c9696c9df More model and test fixes
2 years ago
Ross Wightman ca52108c2b Fix some model support functions
2 years ago
Ross Wightman f332fc2db7 Fix some test failures, torchscript issues
2 years ago
Ross Wightman 6e559e9b5f Add MViT (Multi-Scale) V2
2 years ago
Ross Wightman 43aa84e861 Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit)
2 years ago
Ross Wightman c486aa71f8 Add GCViT
2 years ago
Ross Wightman fba6ecd39b Add EfficientFormer
2 years ago
Ross Wightman ff4a38e2c3 Add PyramidVisionTransformerV2
2 years ago
Ross Wightman 1d8ada359a Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size
2 years ago
Ross Wightman 2544d3b80f ConvNeXt pico and femto, plus nano, pico, and femto ols (overlapping stem) weights and model defs
2 years ago
Ross Wightman 13565aad50 Add edgenext_base model def & weight link, update to improve ONNX export #1385
2 years ago
Ross Wightman 8ad4bdfa06 Allow ntuple to be used with string values
2 years ago
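The `ntuple` change above addresses a classic Python pitfall: strings are iterable, so a naive "is it iterable?" check would split `"same"` into characters instead of repeating it. A minimal sketch of an n-tuple helper that treats strings as scalars (close in spirit to, but not copied from, the timm helper):

```python
# Hedged sketch of an ntuple helper that accepts string values: strings
# are iterable but should be repeated whole, not split into characters.
import collections.abc
from itertools import repeat

def _ntuple(n):
    def parse(x):
        # treat str as a scalar even though it is iterable
        if isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
            return tuple(x)
        return tuple(repeat(x, n))
    return parse

to_2tuple = _ntuple(2)
```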
Christoph Reich faae93e62d Fix typo in PositionalEncodingFourier
2 years ago
Ross Wightman 7430a85d07 Update README, bump version to 0.6.8
2 years ago
Ross Wightman ec6a28830f Add DeiT-III 'medium' model defs and weights
2 years ago
Ross Wightman d875a1d3f6 version 0.6.7
2 years ago
Ross Wightman 6f103a442b Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288
2 years ago
Ross Wightman 4042a94f8f Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks.
2 years ago
Ross Wightman c8f69e04a9 Merge pull request #1365 from veritable-tech/fix-resize-pos-embed
2 years ago
Ceshine Lee 0b64117592 Take `no_emb_class` into account when calling `resize_pos_embed`
2 years ago
Jasha10 56c3a84db3 Update type hint for `register_notrace_module`
2 years ago
Ross Wightman 1b278136c3 Change models with mean 0,0,0 std 1,1,1 from int to float for consistency as mentioned in #1355
2 years ago
Ross Wightman 909705e7ff Remove some redundant requires_grad=True from nn.Parameter in third party code
2 years ago
Ross Wightman c5e0d1c700 Add dilation support to convnext, allows output_stride=8 and 16 use. Fix #1341
2 years ago
Ross Wightman dc376e3676 Ensure all model entrypoint fn default to `pretrained=False` (a few didn't)
2 years ago
Ross Wightman 23b102064a Add cs3sedarknet_x weights w/ 82.65 @ 288 top1. Add 2 cs3 edgenet models (w/ 3x3-1x1 block), remove aa from cspnet blocks (not needed)
2 years ago
Ross Wightman 0dbd9352ce Add bulk_runner script and updates to benchmark.py and validate.py for better error handling in bulk runs (used for benchmark and validation result runs). Improved batch size decay stepping on retry...
2 years ago
Ross Wightman 92b91af3bb version 0.6.6
2 years ago
Ross Wightman 05313940e2 Add cs3darknet_x, cs3sedarknet_l, and darknetaa53 weights from TPU sessions. Move SE btwn conv1 & conv2 in DarkBlock. Improve SE/attn handling in Csp/DarkNet. Fix leaky_relu bug on older csp models.
2 years ago
nateraw 51cca82aa1 👽 use hf_hub_download instead of cached_download
2 years ago
Ross Wightman 324a4e58b6 disable nvfuser for jit te/legacy modes (for PT 1.12+)
2 years ago
Ross Wightman 2898cf6e41 version 0.6.5 for pypi release
2 years ago
Ross Wightman a45b4bce9a x and xx small edgenext models do benefit from larger test input size
2 years ago
Ross Wightman a8e34051c1 Unbreak gamma remap impacting beit checkpoint load, version bump to 0.6.4
2 years ago
Ross Wightman 1c5cb819f9 bump version to 0.6.3 before merge
2 years ago
Ross Wightman a1cb25066e Add edgnext_small_rw weights trained with swin like recipe. Better than original 'small' but not the recent 'USI' distilled weights.
2 years ago
Ross Wightman 7c7ecd2492 Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues).
2 years ago
Ross Wightman ce65a7b29f Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log coord experiments.
2 years ago
Ross Wightman 58621723bd Add CrossStage3 DarkNet (cs3) weights
2 years ago
Ross Wightman 9be0c84715 Change set -> dict w/ None keys for dataset split synonym search, so always consistent if more than 1 exists. Fix #1224
2 years ago
Ross Wightman db0cee9910 Refactor cspnet configuration using dataclasses, update feature extraction for new cs3 variants.
2 years ago
Ross Wightman eca09b8642 Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference.
2 years ago
Ross Wightman 06307b8b41 Remove experimental downsample in block support in ConvNeXt. Experiment further before keeping it in.
2 years ago
Ross Wightman bfc0dccb0e Improve image extension handling, add methods to modify / get defaults. Fix #1335 fix #1274.
2 years ago
Ross Wightman 7d4b3807d5 Support DeiT-3 (Revenge of the ViT) checkpoints. Add non-overlapping (w/ class token) pos-embed support to vit.
2 years ago
Ross Wightman d0c5bd5722 Rename cs2->cs3 for darknets. Fix features_only for cs3 darknets.
2 years ago
Ross Wightman d765305821 Remove first_conv for resnetaa50 def
2 years ago
Ross Wightman dd9b8f57c4 Add feature_info to edgenext for features_only support, hopefully fix some fx / test errors
2 years ago
Ross Wightman 377e9bfa21 Add TPU trained darknet53 weights. Add missing pretrain_cfg for some csp/darknet models.
2 years ago
Ross Wightman c170ba3173 Add weights for resnet10t, resnet14t, and resnetaa50 models. Fix #1314
2 years ago
Ross Wightman 188c194b0f Left some experiment stem code in convnext by mistake
2 years ago
Ross Wightman 70d6d2c484 support test_crop_size in data config resolve
2 years ago
Ross Wightman 6064d16a2d Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320
2 years ago
Ross Wightman 7a9c6811c9 Add eps arg to LayerNorm2d, add 'tf' (tensorflow) variant of trunc_normal_ that applies scale/shift after sampling (instead of needing to move a/b)
2 years ago
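The 'tf' variant of `trunc_normal_` described above truncates the standard normal to `[a, b]` first and applies scale/shift afterwards, so the caller never has to move the bounds themselves. A pure-Python rejection-sampling sketch of that idea (a stand-in, not the timm/PyTorch implementation):

```python
# Hedged sketch of 'tf'-style truncated normal init: sample N(0, 1)
# truncated to [a, b], then scale/shift after sampling. Rejection
# sampling stands in for the real inverse-CDF approach.
import random

def trunc_normal_tf(n, mean=0.0, std=1.0, a=-2.0, b=2.0, rng=random):
    out = []
    while len(out) < n:
        x = rng.gauss(0.0, 1.0)         # sample standard normal
        if a <= x <= b:                 # reject samples outside [a, b]
            out.append(x * std + mean)  # scale/shift applied afterwards
    return out
```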
Ross Wightman 82c311d082 Add more experimental darknet and 'cs2' darknet variants (different cross stage setup, closer to newer YOLO backbones) for train trials.
2 years ago
Ross Wightman a050fde5cd Add resnet10t (basic block) and resnet14t (bottleneck) with 1,1,1,1 repeats
2 years ago
Ross Wightman e6d7df40ec no longer a point using kwargs for pretrain_cfg resolve, just pass explicit arg
2 years ago
Ross Wightman 07d0c4ae96 Improve repr for DropPath module
2 years ago
Ross Wightman e27c16b8a0 Remove unnecessary code for sync-bn guard
2 years ago
Ross Wightman 0da3c9ebbf Remove SiLU layer in default args that breaks import on old old PyTorch
2 years ago
Ross Wightman 7d657d2ef4 Improve resolve_pretrained_cfg behaviour when no cfg exists, warn instead of crash. Improve usability ex #1311
2 years ago
Ross Wightman 879df47c0a Support BatchNormAct2d for sync-bn use. Fix #1254
2 years ago
Ross Wightman 7cedc8d474 Follow up to #1256, fix interpolation warning in auto_autoaugment as well
2 years ago
Jakub Kaczmarzyk db64393c0d use `Image.Resampling` namespace for PIL mapping (#1256)
2 years ago
Ross Wightman 20a1fa63f8 Make dev version 0.6.2.dev0 for pypi pre
2 years ago
Ross Wightman 347308faad Update README.md, version to 0.6.2
2 years ago
Ross Wightman 4b30bae67b Add updated vit_relpos weights, and impl w/ support for official swin-v2 differences for relpos. Add bias control support for MLP layers
2 years ago
Ross Wightman d4c0588012 Remove persistent buffers from Swin-V2. Change SwinV2Cr cos attn + tau/logit_scale to match official, add ckpt convert, init_value zeros resid LN weight by default
2 years ago
Ross Wightman 27c42f0830 Fix torchscript use for official Swin-V2, add support for non-square window/shift to WindowAttn/Block
2 years ago
Ross Wightman 2f2b22d8c7 Disable nvfuser fma / opt level overrides per #1244
2 years ago
Ross Wightman c0211b0bf7 Swin-V2 test fixes, typo
2 years ago
Ross Wightman 9a86b900fa Official SwinV2 models
2 years ago
Ross Wightman d07d015173 Merge pull request #1249 from okojoalg/sequencer
2 years ago
Ross Wightman d30685c283 Merge pull request #1251 from hankyul2/fix-multistep-scheduler
2 years ago
han a16171335b fix: change milestones to decay-milestones
2 years ago
Ross Wightman 39b725e1c9 Fix tests for rank-4 output where feature channels dim is -1 (3) and not 1
2 years ago
Ross Wightman 78a32655fa Fix poolformer group_matcher to merge proj downsample with previous block, support coarse
2 years ago
Ross Wightman d79f3d9d1e Fix torchscript use for sequencer, add group_matcher, forward_head support, minor formatting
2 years ago
Ross Wightman 37b6920df3 Fix group_matcher regex for regnet.py
2 years ago
okojoalg 93a79a3dd9 Fix num_features in Sequencer
2 years ago
han 57a988df30 fix: multistep lr decay epoch bugs
2 years ago
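The multistep scheduler fixes above concern when each decay milestone takes effect. A minimal sketch of multistep decay, counting passed milestones with `bisect` (illustrative only, not the patched timm scheduler):

```python
# Hedged sketch of multistep LR decay: the LR is multiplied by decay_rate
# once per passed milestone; bisect_right means the milestone epoch itself
# already uses the decayed LR. Names are illustrative.
import bisect

def multistep_lr(base_lr, epoch, decay_milestones=(30, 60, 90), decay_rate=0.1):
    n_decays = bisect.bisect_right(list(decay_milestones), epoch)
    return base_lr * decay_rate ** n_decays
```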
okojoalg 578d52e752 Add Sequencer
2 years ago
Ross Wightman f5ca4141f7 Adjust arg order for recent vit model args, add a few comments
2 years ago
Ross Wightman 41dc49a337 Vision Transformer refactoring and Rel Pos impl
2 years ago
Ross Wightman b7cb8d0337 Add Swin-V2 Small-NS weights (83.5 @ 224). Add layer scale like 'init_values' via post-norm LN weight scaling
2 years ago
jjsjann123 f88c606fcf fixing channels_last on cond_conv2d; update nvfuser debug env variable
2 years ago
Li Dong 09e9f3defb migrate azure blob for beit checkpoints
2 years ago
Ross Wightman 52ac881402 Missed first_conv in latest seresnext 'D' default_cfgs
2 years ago
Ross Wightman 7629d8264d Add two new SE-ResNeXt101-D 32x8d weights, one anti-aliased and one not. Reshuffle default_cfgs vs model entrypoints for resnet.py so they are better aligned.
2 years ago
SeeFun 8f0bc0591e fix convnext args
2 years ago
Ross Wightman c5a8e929fb Add initial swinv2 tiny / small weights
2 years ago
Ross Wightman f670d98cb8 Make a few more layers symbolically traceable (remove from FX leaf modules)
2 years ago
SeeFun ec4e9aa5a0 Add ConvNeXt tiny and small pretrain in22k
2 years ago
Ross Wightman 575924ed60 Update test crop for new RegNet-V weights to match Y
2 years ago