Commit Graph

997 Commits (e2fc43bc63081164c7903288c0ab2658a7fcbb94)

Author SHA1 Message Date
Ross Wightman b8c8550841 Data improvements. Improve train support for in_chans != 3. Add wds dataset support from bits_and_tpu branch w/ fixes and tweaks. TFDS tweaks.
2 years ago
Alex Fafard 7327792f39 update to support pickle based dictionaries
2 years ago
Ross Wightman 1199c5a1a4 clip_laion2b models need 1e-5 eps for LayerNorm
2 years ago
Ross Wightman 87939e6fab Refactor device handling in scripts, distributed init to be less 'cuda' centric. More device args passed through where needed.
2 years ago
Ross Wightman c88947ad3d Add initial Hugging Face Datasets parser impl.
2 years ago
Ross Wightman e858912e0c Add brute-force checkpoint remapping option
2 years ago
Ross Wightman b293dfa595 Add CL SE module
2 years ago
Ross Wightman 2a296412be Add Adan optimizer
2 years ago
Ross Wightman 5dc4343308 version 0.6.11
2 years ago
Ross Wightman a383ef99f5 Make huggingface_hub necessary if it's the only source for a pretrained weight
2 years ago
Ross Wightman 33e30f8c8b Remove layer-decay print
2 years ago
Ross Wightman e069249a2d Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7
2 years ago
Ross Wightman 9d65557be3 Fix errant import
2 years ago
Ross Wightman 9709dbaaa9 Adding support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP
2 years ago
Ross Wightman a520da9b49 Update tresnet features_info for v2
2 years ago
Ross Wightman c8ab747bf4 BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter
2 years ago
Ross Wightman 73049dc2aa Fix typo in dla weight update
2 years ago
Ross Wightman 3599c7e6a4 version 0.6.10
2 years ago
Ross Wightman e11efa872d Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights.
2 years ago
Ross Wightman fa8c84eede Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs
2 years ago
Ross Wightman c1b3cea19d Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320
2 years ago
Ross Wightman 914544fc81 Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2
2 years ago
Ross Wightman dc90816f26 Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320
2 years ago
Ross Wightman f489f02ad1 Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original.
2 years ago
Ross Wightman 7f1b223c02 Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default
2 years ago
Ross Wightman e6a4361306 pretrained_cfg entry for mvitv2_small_cls
2 years ago
Ross Wightman f66e5f0e35 Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443
2 years ago
Ross Wightman f1d2160d85 Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks
2 years ago
Ross Wightman eca6f0a25c Fix syntax error (extra dataclass comma) in maxxvit.py
2 years ago
Ross Wightman ff6a919cf5 Add --fast-norm arg to benchmark.py, train.py, validate.py
2 years ago
Ross Wightman 769ab4b98a Clean up no_grad for trunc normal weight inits
2 years ago
Ross Wightman 48e1df8b37 Add norm/norm_act header comments
2 years ago
Ross Wightman 7c2660576d Tweak init for convnext block using maxxvit/coatnext.
2 years ago
Ross Wightman 1d8d6f6072 Fix two default args in DenseNet blocks... fix #1427
2 years ago
Ross Wightman 527f9a4cb2 Updated to correct maxvit_nano weights...
2 years ago
Ross Wightman b2e8426fca Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'.
2 years ago
Ross Wightman 837c68263b For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode
2 years ago
Ross Wightman cac0a4570a More test fixes, pool size for 256x256 maxvit models
2 years ago
Ross Wightman e939ed19b9 Rename internal creation fn for maxvit; it has not been just coatnet for a while...
2 years ago
Ross Wightman ffaf97f813 MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies..
2 years ago
Ross Wightman 8c9696c9df More model and test fixes
2 years ago
Ross Wightman ca52108c2b Fix some model support functions
2 years ago
Ross Wightman f332fc2db7 Fix some test failures, torchscript issues
2 years ago
Ross Wightman 6e559e9b5f Add MViT (Multi-Scale) V2
2 years ago
Ross Wightman 43aa84e861 Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit)
2 years ago
Ross Wightman c486aa71f8 Add GCViT
2 years ago
Ross Wightman fba6ecd39b Add EfficientFormer
2 years ago
Ross Wightman ff4a38e2c3 Add PyramidVisionTransformerV2
2 years ago
Ross Wightman 1d8ada359a Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size
2 years ago
Ross Wightman 2544d3b80f ConvNeXt pico, femto, and nano weights and model defs, plus pico and femto 'ols' (overlapping stem) variants
2 years ago
Ross Wightman 13565aad50 Add edgenext_base model def & weight link, update to improve ONNX export #1385
2 years ago
Ross Wightman 8ad4bdfa06 Allow ntuple to be used with string values
2 years ago
Christoph Reich faae93e62d Fix typo in PositionalEncodingFourier
2 years ago
Ross Wightman 7430a85d07 Update README, bump version to 0.6.8
2 years ago
Ross Wightman ec6a28830f Add DeiT-III 'medium' model defs and weights
2 years ago
Ross Wightman d875a1d3f6 version 0.6.7
2 years ago
Ross Wightman 6f103a442b Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288
2 years ago
Ross Wightman 4042a94f8f Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks.
2 years ago
Ross Wightman c8f69e04a9 Merge pull request #1365 from veritable-tech/fix-resize-pos-embed
2 years ago
Ceshine Lee 0b64117592 Take `no_emb_class` into account when calling `resize_pos_embed`
2 years ago
Jasha10 56c3a84db3 Update type hint for `register_notrace_module`
2 years ago
Ross Wightman 1b278136c3 Change models with mean 0,0,0 std 1,1,1 from int to float for consistency as mentioned in #1355
2 years ago
Ross Wightman 909705e7ff Remove some redundant requires_grad=True from nn.Parameter in third party code
2 years ago
Ross Wightman c5e0d1c700 Add dilation support to convnext, allows output_stride=8 and 16 use. Fix #1341
2 years ago
Ross Wightman dc376e3676 Ensure all model entrypoint fn default to `pretrained=False` (a few didn't)
2 years ago
Ross Wightman 23b102064a Add cs3sedarknet_x weights w/ 82.65 @ 288 top1. Add 2 cs3 edgenet models (w/ 3x3-1x1 block), remove aa from cspnet blocks (not needed)
2 years ago
Ross Wightman 0dbd9352ce Add bulk_runner script and updates to benchmark.py and validate.py for better error handling in bulk runs (used for benchmark and validation result runs). Improved batch size decay stepping on retry...
2 years ago
Ross Wightman 92b91af3bb version 0.6.6
2 years ago
Ross Wightman 05313940e2 Add cs3darknet_x, cs3sedarknet_l, and darknetaa53 weights from TPU sessions. Move SE btwn conv1 & conv2 in DarkBlock. Improve SE/attn handling in Csp/DarkNet. Fix leaky_relu bug on older csp models.
2 years ago
nateraw 51cca82aa1 👽 use hf_hub_download instead of cached_download
2 years ago
Ross Wightman 324a4e58b6 disable nvfuser for jit te/legacy modes (for PT 1.12+)
2 years ago
Ross Wightman 2898cf6e41 version 0.6.5 for pypi release
2 years ago
Ross Wightman a45b4bce9a x and xx small edgenext models do benefit from larger test input size
2 years ago
Ross Wightman a8e34051c1 Unbreak gamma remap impacting beit checkpoint load, version bump to 0.6.4
2 years ago
Ross Wightman 1c5cb819f9 bump version to 0.6.3 before merge
2 years ago
Ross Wightman a1cb25066e Add edgenext_small_rw weights trained with a swin-like recipe. Better than the original 'small' weights but not the recent 'USI' distilled weights.
2 years ago
Ross Wightman 7c7ecd2492 Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues).
2 years ago
Ross Wightman ce65a7b29f Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log coord experiments.
2 years ago
Ross Wightman 58621723bd Add CrossStage3 DarkNet (cs3) weights
2 years ago
Ross Wightman 9be0c84715 Change set -> dict w/ None keys for dataset split synonym search, so always consistent if more than 1 exists. Fix #1224
2 years ago
Ross Wightman db0cee9910 Refactor cspnet configuration using dataclasses, update feature extraction for new cs3 variants.
2 years ago
Ross Wightman eca09b8642 Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference.
2 years ago
Ross Wightman 06307b8b41 Remove experimental in-block downsample support in ConvNeXt. Experiment further before keeping it in.
2 years ago
Ross Wightman bfc0dccb0e Improve image extension handling, add methods to modify / get defaults. Fix #1335 fix #1274.
2 years ago
Ross Wightman 7d4b3807d5 Support DeiT-3 (Revenge of the ViT) checkpoints. Add non-overlapping (w/ class token) pos-embed support to vit.
2 years ago
Ross Wightman d0c5bd5722 Rename cs2->cs3 for darknets. Fix features_only for cs3 darknets.
2 years ago
Ross Wightman d765305821 Remove first_conv for resnetaa50 def
2 years ago
Ross Wightman dd9b8f57c4 Add feature_info to edgenext for features_only support, hopefully fix some fx / test errors
2 years ago
Ross Wightman 377e9bfa21 Add TPU-trained darknet53 weights. Add missing pretrain_cfg for some csp/darknet models.
2 years ago
Ross Wightman c170ba3173 Add weights for resnet10t, resnet14t, and resnetaa50 models. Fix #1314
2 years ago
Ross Wightman 188c194b0f Left some experiment stem code in convnext by mistake
2 years ago
Ross Wightman 70d6d2c484 support test_crop_size in data config resolve
2 years ago
Ross Wightman 6064d16a2d Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320
2 years ago
Ross Wightman 7a9c6811c9 Add eps arg to LayerNorm2d, add 'tf' (tensorflow) variant of trunc_normal_ that applies scale/shift after sampling (instead of needing to move a/b)
2 years ago
Ross Wightman 82c311d082 Add more experimental darknet and 'cs2' darknet variants (different cross stage setup, closer to newer YOLO backbones) for train trials.
2 years ago
Ross Wightman a050fde5cd Add resnet10t (basic block) and resnet14t (bottleneck) with 1,1,1,1 repeats
2 years ago
Ross Wightman e6d7df40ec No longer any point in using kwargs for pretrain_cfg resolve, just pass explicit arg
2 years ago
Ross Wightman 07d0c4ae96 Improve repr for DropPath module
2 years ago
Ross Wightman e27c16b8a0 Remove unnecessary code for syncbn guard
2 years ago
Ross Wightman 0da3c9ebbf Remove SiLU layer in default args that breaks import on older PyTorch
2 years ago