Ross Wightman | 0e6023f032 | Merge pull request #1381 from ChristophReich1996/master: Fix typo in PositionalEncodingFourier | 2 years ago
Ross Wightman | 9914f744dc | Add more maxxvit weights, including ConvNeXt conv-block-based experiments | 2 years ago
Mohamed Rashad | 8fda68aff6 | Fix repo id bug. This fixes issue #1482 | 2 years ago
Ross Wightman | 1199c5a1a4 | clip_laion2b models need 1e-5 eps for LayerNorm | 2 years ago
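The clip_laion2b entry above is about the `eps` term in layer normalization. As a minimal plain-Python sketch of where that epsilon enters (this is not timm's implementation, which uses `torch.nn.LayerNorm`; the helper name and 1-D input here are illustrative assumptions):

```python
import math

def layer_norm(xs, eps=1e-5):
    # Illustrative layer norm over a 1-D list of floats.
    # eps sits inside the sqrt so the denominator never hits zero
    # when the variance is tiny; its value subtly shifts outputs,
    # which is why pretrained weights expect a specific eps.
    mean = sum(xs) / len(xs)
    var = sum((v - mean) ** 2 for v in xs) / len(xs)
    return [(v - mean) / math.sqrt(var + eps) for v in xs]
```

A constant input has zero variance, so only `eps` keeps the division defined: `layer_norm([1.0, 1.0, 1.0])` returns all zeros rather than raising.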
Ross Wightman | a383ef99f5 | Make huggingface_hub necessary if it's the only source for a pretrained weight | 2 years ago
Ross Wightman | e069249a2d | Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 | 2 years ago
Ross Wightman | 9d65557be3 | Fix errant import | 2 years ago
Ross Wightman | 9709dbaaa9 | Add support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP | 2 years ago
Ross Wightman | a520da9b49 | Update tresnet features_info for v2 | 2 years ago
Ross Wightman | c8ab747bf4 | BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter | 2 years ago
Ross Wightman | 73049dc2aa | Fix typo in dla weight update | 2 years ago
Ross Wightman | e11efa872d | Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights | 2 years ago
Ross Wightman | fa8c84eede | Update maxvit_tiny_256 weight to a better iter, add coatnet / maxvit / maxxvit model defs for future runs | 2 years ago
Ross Wightman | c1b3cea19d | Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 | 2 years ago
Ross Wightman | 914544fc81 | Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2 | 2 years ago
Ross Wightman | dc90816f26 | Add `maxvit_tiny_rw_224` weights, 83.5 @ 224, and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320 | 2 years ago
Ross Wightman | f489f02ad1 | Make gcvit window size ratio-based to improve resolution-changing support #1449. Change default init to original | 2 years ago
Ross Wightman | 7f1b223c02 | Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default | 2 years ago
Ross Wightman | e6a4361306 | Add pretrained_cfg entry for mvitv2_small_cls | 2 years ago
Ross Wightman | f66e5f0e35 | Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443 | 2 years ago
Ross Wightman | f1d2160d85 | Update a few maxxvit comments, rename PartitionAttention -> PartitionAttentionCl for consistency with other blocks | 2 years ago
Ross Wightman | eca6f0a25c | Fix syntax error (extra dataclass comma) in maxxvit.py | 2 years ago
Ross Wightman | ff6a919cf5 | Add --fast-norm arg to benchmark.py, train.py, validate.py | 2 years ago
Ross Wightman | 769ab4b98a | Clean up no_grad for trunc normal weight inits | 2 years ago
Ross Wightman | 48e1df8b37 | Add norm/norm_act header comments | 2 years ago
Ross Wightman | 7c2660576d | Tweak init for convnext block used by maxxvit/coatnet | 2 years ago
Ross Wightman | 1d8d6f6072 | Fix two default args in DenseNet blocks... fix #1427 | 2 years ago
Ross Wightman | 527f9a4cb2 | Update to correct maxvit_nano weights... | 2 years ago
Ross Wightman | b2e8426fca | Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel' | 2 years ago
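The 'avg2' entry above refers to average-pool downsampling with kernel size equal to stride 2. As a plain-Python sketch of what such a pool computes (the timm code uses PyTorch pooling modules; the function name and nested-list input here are illustrative assumptions):

```python
def avg_pool2(x):
    # 'avg2'-style downsample sketch: kernel = stride = 2, no padding,
    # so each output cell is the mean of a non-overlapping 2x2 window
    # and spatial resolution halves in each dimension.
    h, w = len(x), len(x[0])
    return [
        [(x[i][j] + x[i][j + 1] + x[i + 1][j] + x[i + 1][j + 1]) / 4
         for j in range(0, w - 1, 2)]
        for i in range(0, h - 1, 2)
    ]
```

With k = s = 2 every input pixel contributes to exactly one output, unlike the k=3, s=2 pooling some stems use, where windows overlap.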
Ross Wightman | 837c68263b | For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode | 2 years ago
Ross Wightman | cac0a4570a | More test fixes, pool size for 256x256 maxvit models | 2 years ago
Ross Wightman | e939ed19b9 | Rename internal creation fn for maxvit; it has not been just coatnet for a while... | 2 years ago
Ross Wightman | ffaf97f813 | MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies | 2 years ago
Ross Wightman | 8c9696c9df | More model and test fixes | 2 years ago
Ross Wightman | ca52108c2b | Fix some model support functions | 2 years ago
Ross Wightman | f332fc2db7 | Fix some test failures, torchscript issues | 2 years ago
Ross Wightman | 6e559e9b5f | Add MViT (Multi-Scale) V2 | 2 years ago
Ross Wightman | 43aa84e861 | Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit) | 2 years ago
Ross Wightman | c486aa71f8 | Add GCViT | 2 years ago
Ross Wightman | fba6ecd39b | Add EfficientFormer | 2 years ago
Ross Wightman | ff4a38e2c3 | Add PyramidVisionTransformerV2 | 2 years ago
Ross Wightman | 1d8ada359a | Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size | 2 years ago
Ross Wightman | 2544d3b80f | Add ConvNeXt nano, pico, and femto plus pico/femto ols (overlapping stem) weights and model defs | 2 years ago
Ross Wightman | 13565aad50 | Add edgenext_base model def & weight link, update to improve ONNX export #1385 | 2 years ago
Ross Wightman | 8ad4bdfa06 | Allow ntuple to be used with string values | 2 years ago
Christoph Reich | faae93e62d | Fix typo in PositionalEncodingFourier | 2 years ago
Ross Wightman | ec6a28830f | Add DeiT-III 'medium' model defs and weights | 2 years ago
Ross Wightman | 6f103a442b | Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288 | 2 years ago
Ross Wightman | 4042a94f8f | Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks | 2 years ago
Ross Wightman | c8f69e04a9 | Merge pull request #1365 from veritable-tech/fix-resize-pos-embed: Take `no_emb_class` into account when calling `resize_pos_embed` | 2 years ago