Ross Wightman | e9dccc918c | Rename dataset/parsers -> dataset/readers, create_parser to create_reader, etc. | 2 years ago
Ross Wightman | 8c28363dc9 | Version 0.7.dev0 for master | 2 years ago
nateraw | 30bafd7347 | 🔖 add dev suffix to version tag | 2 years ago
Ross Wightman | f67a7ee8bd | Set num_workers in iterable WDS/TFDS datasets early so the sample estimate is correct | 2 years ago
Ross Wightman | cea8df3d0c | Version 0.6.12 | 2 years ago
Ross Wightman | 9914f744dc | Add more maxxvit weights, including ConvNeXt conv-block-based experiments | 2 years ago
Ross Wightman | b1b024dfed | Scheduler update: add v2 factory method, support scheduling on updates instead of just epochs. Add LR to summary CSV. Add lr_base scaling calculations to train script. Fix #1168 | 2 years ago
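The lr_base scaling mentioned in the scheduler commit above follows the usual batch-size scaling rules. Below is a minimal sketch of the idea; the function name and defaults are illustrative, not timm's actual API:

```python
def scale_lr(lr_base, batch_size, lr_base_size=256, scale='linear'):
    """Derive the effective LR from a base LR defined at a reference batch size.

    'linear' scaling is the common rule for SGD-style optimizers;
    'sqrt' scaling is often preferred for adaptive optimizers.
    """
    ratio = batch_size / lr_base_size
    if scale == 'sqrt':
        ratio = ratio ** 0.5
    return lr_base * ratio
```

For example, with a base LR of 0.1 defined at batch size 256, training at batch size 512 would use 0.2 under linear scaling.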
Ross Wightman | 4f18d6dc5f | Fix logs in WDS parser | 2 years ago
Mohamed Rashad | 8fda68aff6 | Fix repo id bug; fixes issue #1482 | 2 years ago
Ross Wightman | b8c8550841 | Data improvements: improve train support for in_chans != 3, add WDS dataset support from bits_and_tpu branch w/ fixes and tweaks, TFDS tweaks | 2 years ago
Alex Fafard | 7327792f39 | Update to support pickle-based dictionaries | 2 years ago
Ross Wightman | 1199c5a1a4 | clip_laion2b models need 1e-5 eps for LayerNorm | 2 years ago
Ross Wightman | 87939e6fab | Refactor device handling in scripts, make distributed init less 'cuda'-centric. More device args passed through where needed | 2 years ago
Ross Wightman | c88947ad3d | Add initial Hugging Face Datasets parser impl | 2 years ago
Ross Wightman | e858912e0c | Add brute-force checkpoint remapping option | 2 years ago
Ross Wightman | b293dfa595 | Add CL SE module | 2 years ago
Ross Wightman | 2a296412be | Add Adan optimizer | 2 years ago
Ross Wightman | 5dc4343308 | Version 0.6.11 | 2 years ago
Ross Wightman | a383ef99f5 | Make huggingface_hub a requirement when it's the only source for a pretrained weight | 2 years ago
Ross Wightman | 33e30f8c8b | Remove layer-decay print | 2 years ago
Ross Wightman | e069249a2d | Add HF hub entries for laion2b CLIP models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 | 2 years ago
Ross Wightman | 9d65557be3 | Fix errant import | 2 years ago
Ross Wightman | 9709dbaaa9 | Add support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP | 2 years ago
Ross Wightman | a520da9b49 | Update tresnet feature_info for v2 | 2 years ago
Ross Wightman | c8ab747bf4 | BEiT-V2 checkpoints didn't remove 'module' from weights; adapt checkpoint filter | 2 years ago
Ross Wightman | 73049dc2aa | Fix typo in dla weight update | 2 years ago
Ross Wightman | 3599c7e6a4 | Version 0.6.10 | 2 years ago
Ross Wightman | e11efa872d | Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan/verify and re-hash. Add TResNet-V2-L weights | 2 years ago
Ross Wightman | fa8c84eede | Update maxvit_tiny_256 weight to a better iter; add coatnet/maxvit/maxxvit model defs for future runs | 2 years ago
Ross Wightman | c1b3cea19d | Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 | 2 years ago
Ross Wightman | 914544fc81 | Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2 | 2 years ago
Ross Wightman | dc90816f26 | Add `maxvit_tiny_rw_224` weights (83.5 @ 224) and `maxvit_rmlp_pico_rw_256` relpos weights (80.5 @ 256, 81.3 @ 320) | 2 years ago
Ross Wightman | f489f02ad1 | Make gcvit window size ratio-based to improve resolution-changing support (#1449). Change default init to original | 2 years ago
Ross Wightman | 7f1b223c02 | Add maxvit_rmlp_nano_rw_256 model def & weights; make window/grid size dynamic w.r.t. img_size by default | 2 years ago
Ross Wightman | e6a4361306 | Add pretrained_cfg entry for mvitv2_small_cls | 2 years ago
Ross Wightman | f66e5f0e35 | Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443 | 2 years ago
Ross Wightman | f1d2160d85 | Update a few maxxvit comments; rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks | 2 years ago
Ross Wightman | eca6f0a25c | Fix syntax error (extra dataclass comma) in maxxvit.py | 2 years ago
Ross Wightman | ff6a919cf5 | Add --fast-norm arg to benchmark.py, train.py, validate.py | 2 years ago
Ross Wightman | 769ab4b98a | Clean up no_grad for trunc-normal weight inits | 2 years ago
Ross Wightman | 48e1df8b37 | Add norm/norm_act header comments | 2 years ago
Ross Wightman | 7c2660576d | Tweak init for convnext block used in maxxvit/coatnext | 2 years ago
Ross Wightman | 1d8d6f6072 | Fix two default args in DenseNet blocks. Fix #1427 | 2 years ago
Ross Wightman | 527f9a4cb2 | Update to correct maxvit_nano weights | 2 years ago
Ross Wightman | b2e8426fca | Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel' | 2 years ago
Ross Wightman | 837c68263b | For ConvNeXt, use timm internal LayerNorm for fast_norm in non-conv_mlp mode | 2 years ago
Ross Wightman | cac0a4570a | More test fixes; pool size for 256x256 maxvit models | 2 years ago
Ross Wightman | e939ed19b9 | Rename internal creation fn for maxvit; it hasn't been just coatnet for a while | 2 years ago
Ross Wightman | ffaf97f813 | MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies | 2 years ago
Ross Wightman | 8c9696c9df | More model and test fixes | 2 years ago
Ross Wightman | ca52108c2b | Fix some model support functions | 2 years ago
Ross Wightman | f332fc2db7 | Fix some test failures, torchscript issues | 2 years ago
Ross Wightman | 6e559e9b5f | Add MViT (Multi-Scale) V2 | 2 years ago
Ross Wightman | 43aa84e861 | Add 'fast' layer norm that doesn't cast to float32; support APEX LN impl for slight speed gain; update norm and act factories; tweak SE for ability to disable bias (needed by GCVit) | 2 years ago
Ross Wightman | c486aa71f8 | Add GCViT | 2 years ago
Ross Wightman | fba6ecd39b | Add EfficientFormer | 2 years ago
Ross Wightman | ff4a38e2c3 | Add PyramidVisionTransformerV2 | 2 years ago
Ross Wightman | 1d8ada359a | Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size | 2 years ago
Ross Wightman | 2544d3b80f | Add ConvNeXt pico, femto, and nano weights and model defs, plus pico/femto 'ols' (overlapping stem) variants | 2 years ago
Ross Wightman | 13565aad50 | Add edgenext_base model def & weight link; update to improve ONNX export (#1385) | 2 years ago
Ross Wightman | 8ad4bdfa06 | Allow ntuple to be used with string values | 2 years ago
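The ntuple change above matters because strings are iterable in Python: without a special case, a string argument would be split into characters. A minimal sketch of the pattern (names here are illustrative, not necessarily timm's exact identifiers):

```python
import collections.abc
from itertools import repeat

def to_ntuple(n):
    """Return a converter that expands a scalar value to an n-tuple.

    Iterables pass through as tuples, except strings, which are
    treated as scalars so e.g. 'same' becomes ('same', 'same')
    rather than ('s', 'a', 'm', 'e').
    """
    def parse(x):
        if isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
            return tuple(x)
        return tuple(repeat(x, n))
    return parse

to_2tuple = to_ntuple(2)
```

This is the common idiom for accepting either `kernel_size=3` or `kernel_size=(3, 5)` style arguments in layer constructors.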
Christoph Reich | faae93e62d | Fix typo in PositionalEncodingFourier | 2 years ago
Ross Wightman | 7430a85d07 | Update README, bump version to 0.6.8 | 2 years ago
Ross Wightman | ec6a28830f | Add DeiT-III 'medium' model defs and weights | 2 years ago
Ross Wightman | d875a1d3f6 | Version 0.6.7 | 2 years ago
Ross Wightman | 6f103a442b | Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288 | 2 years ago
Ross Wightman | 4042a94f8f | Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks | 2 years ago
Ross Wightman | c8f69e04a9 | Merge pull request #1365 from veritable-tech/fix-resize-pos-embed: take `no_emb_class` into account when calling `resize_pos_embed` | 2 years ago
Ceshine Lee | 0b64117592 | Take `no_emb_class` into account when calling `resize_pos_embed` | 2 years ago
Jasha10 | 56c3a84db3 | Update type hint for `register_notrace_module`; it is used to decorate types (i.e. subclasses of nn.Module), not called on module instances | 2 years ago
Ross Wightman | 1b278136c3 | Change models with mean 0,0,0 / std 1,1,1 from int to float for consistency, as mentioned in #1355 | 2 years ago
Ross Wightman | 909705e7ff | Remove some redundant requires_grad=True from nn.Parameter in third-party code | 2 years ago
Ross Wightman | c5e0d1c700 | Add dilation support to convnext, allowing output_stride=8 and 16 use. Fix #1341 | 2 years ago
Ross Wightman | dc376e3676 | Ensure all model entrypoint fns default to `pretrained=False` (a few didn't) | 2 years ago
Ross Wightman | 23b102064a | Add cs3sedarknet_x weights w/ 82.65 top-1 @ 288. Add 2 cs3 edgenet models (w/ 3x3->1x1 block); remove aa from cspnet blocks (not needed) | 2 years ago
Ross Wightman | 0dbd9352ce | Add bulk_runner script and update benchmark.py and validate.py for better error handling in bulk runs (used for benchmark and validation result runs). Improved batch-size decay stepping on retry | 2 years ago
Ross Wightman | 92b91af3bb | Version 0.6.6 | 2 years ago
Ross Wightman | 05313940e2 | Add cs3darknet_x, cs3sedarknet_l, and darknetaa53 weights from TPU sessions. Move SE between conv1 & conv2 in DarkBlock. Improve SE/attn handling in Csp/DarkNet. Fix leaky_relu bug on older csp models | 2 years ago
nateraw | 51cca82aa1 | 👽 use hf_hub_download instead of cached_download | 2 years ago
Ross Wightman | 324a4e58b6 | Disable nvfuser for jit te/legacy modes (for PT 1.12+) | 2 years ago
Ross Wightman | 2898cf6e41 | Version 0.6.5 for PyPI release | 2 years ago
Ross Wightman | a45b4bce9a | x and xx small edgenext models do benefit from a larger test input size | 2 years ago
Ross Wightman | a8e34051c1 | Unbreak gamma remap impacting beit checkpoint load; version bump to 0.6.4 | 2 years ago
Ross Wightman | 1c5cb819f9 | Bump version to 0.6.3 before merge | 2 years ago
Ross Wightman | a1cb25066e | Add edgenext_small_rw weights trained with a swin-like recipe. Better than the original 'small' but not the recent 'USI' distilled weights | 2 years ago
Ross Wightman | 7c7ecd2492 | Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues) | 2 years ago
Ross Wightman | ce65a7b29f | Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log-coord experiments | 2 years ago
Ross Wightman | 58621723bd | Add CrossStage3 DarkNet (cs3) weights | 2 years ago
Ross Wightman | 9be0c84715 | Change set -> dict w/ None keys for dataset split-synonym search, so the result is always consistent if more than one exists. Fix #1224 | 2 years ago
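The set -> dict change above exploits the fact that Python dicts preserve insertion order while set iteration order can vary between runs (string hashing is randomized). A minimal sketch of the idea; the names below are illustrative, not timm's actual identifiers:

```python
# A set of synonyms would be searched in an unpredictable order;
# dict.fromkeys gives the same membership test with a fixed,
# insertion-ordered priority (values are just None placeholders).
SPLIT_SYNONYMS = dict.fromkeys(['validation', 'val', 'valid', 'eval'])

def find_split(available):
    """Return the first matching split name in deterministic priority order."""
    for name in SPLIT_SYNONYMS:
        if name in available:
            return name
    return None
```

With a plain set, a dataset exposing both 'val' and 'valid' could resolve differently on different runs; with the ordered dict it always resolves the same way.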
Ross Wightman | db0cee9910 | Refactor cspnet configuration using dataclasses; update feature extraction for new cs3 variants | 2 years ago
Ross Wightman | eca09b8642 | Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference | 2 years ago
Ross Wightman | 06307b8b41 | Remove experimental downsample-in-block support in ConvNeXt; experiment further before keeping it in | 2 years ago
Ross Wightman | bfc0dccb0e | Improve image extension handling; add methods to modify/get defaults. Fix #1335, fix #1274 | 2 years ago
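The image-extension commit above describes a module-level default with getter/setter access. A minimal sketch of that pattern, assuming hypothetical function names (timm's actual API may differ):

```python
# Module-level default, mutable only through the setter below.
_DEFAULT_EXTENSIONS = ('.png', '.jpg', '.jpeg')
_extensions = list(_DEFAULT_EXTENSIONS)

def get_img_extensions():
    # Return an immutable copy so callers can't mutate shared state.
    return tuple(_extensions)

def set_img_extensions(ext):
    # Normalize: lowercase, ensure a leading dot.
    global _extensions
    _extensions = ['.' + e.lower().lstrip('.') for e in ext]

def is_img_file(path):
    # Case-insensitive suffix match against the current extension set.
    return path.lower().endswith(tuple(_extensions))
```

Keeping all reads behind `get_img_extensions()` means dataset scanning code picks up user overrides without any plumbing.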
Ross Wightman | 7d4b3807d5 | Support DeiT-3 (Revenge of the ViT) checkpoints. Add non-overlapping (w/ class token) pos-embed support to vit | 3 years ago
Ross Wightman | d0c5bd5722 | Rename cs2 -> cs3 for darknets. Fix features_only for cs3 darknets | 3 years ago
Ross Wightman | d765305821 | Remove first_conv for resnetaa50 def | 3 years ago
Ross Wightman | dd9b8f57c4 | Add feature_info to edgenext for features_only support; hopefully fix some fx/test errors | 3 years ago
Ross Wightman | 377e9bfa21 | Add TPU-trained darknet53 weights. Add missing pretrain_cfg for some csp/darknet models | 3 years ago
Ross Wightman | c170ba3173 | Add weights for resnet10t, resnet14t, and resnetaa50 models. Fix #1314 | 3 years ago
Ross Wightman | 188c194b0f | Left some experimental stem code in convnext by mistake | 3 years ago