5dc4343308  version 0.6.11 (Ross Wightman, 2 years ago)
a383ef99f5  Make huggingface_hub necessary if it's the only source for a pretrained weight (Ross Wightman, 2 years ago)
d199f6651d  Merge pull request #1467 from rwightman/clip_laion2b: Adding support for fine-tune CLIP LAION-2B image tower weights for B/32, L/14, H/14, and g/14. (Ross Wightman, 2 years ago)
33e30f8c8b  Remove layer-decay print (Ross Wightman, 2 years ago)
e069249a2d  Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7 (Ross Wightman, 2 years ago)
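
The two huggingface_hub commits above (a383ef99f5, e069249a2d) cover weights whose only source is the Hugging Face Hub, so loading such a model needs `huggingface_hub` installed alongside timm. A minimal sketch, assuming `pip install timm huggingface_hub`; the model name used is illustrative, not taken from the log:

```python
# Sketch of loading a hub-hosted pretrained model via timm.
# The model name below is an illustrative assumption.
import timm
import torch

model = timm.create_model('vit_base_patch32_224_clip_laion2b', pretrained=True)
model.eval()

with torch.no_grad():
    out = model(torch.randn(1, 3, 224, 224))
print(out.shape)
```
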
9d65557be3  Fix errant import (Ross Wightman, 2 years ago)
9709dbaaa9  Adding support for fine-tune CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP (Ross Wightman, 2 years ago)
a520da9b49  Update tresnet features_info for v2 (Ross Wightman, 2 years ago)
c8ab747bf4  BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter (Ross Wightman, 2 years ago)
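
Commit c8ab747bf4 notes that the BEiT-V2 checkpoints kept a 'module.' prefix (left over from DistributedDataParallel wrapping), so the checkpoint filter was adapted. A minimal sketch of that kind of filtering, not timm's actual filter function; the checkpoint path in the usage comment is a placeholder:

```python
# Illustrative checkpoint filter: strip a leading 'module.' prefix before loading.
import torch


def strip_module_prefix(state_dict):
    return {k[len('module.'):] if k.startswith('module.') else k: v
            for k, v in state_dict.items()}


# Hypothetical usage:
# ckpt = torch.load('beitv2_checkpoint.pth', map_location='cpu')
# state_dict = strip_module_prefix(ckpt.get('model', ckpt))
# model.load_state_dict(state_dict)
```
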
73049dc2aa  Fix type in dla weight update (Ross Wightman, 2 years ago)
3599c7e6a4  version 0.6.10 (Ross Wightman, 2 years ago)
e11efa872d  Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights. (Ross Wightman, 2 years ago)
fa8c84eede  Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs (Ross Wightman, 2 years ago)
de40f66536  Update README.md (Ross Wightman, 2 years ago)
c1b3cea19d  Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320 (Ross Wightman, 2 years ago)
da6f8f5a40  Fix beitv2 tests (Ross Wightman, 2 years ago)
914544fc81  Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2 (Ross Wightman, 2 years ago)
dc90816f26  Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320 (Ross Wightman, 2 years ago)
f489f02ad1  Make gcvit window size ratio based to improve resolution changing support #1449. Change default init to original. (Ross Wightman, 2 years ago)
c45c6ee8e4  Update README.md (Ross Wightman, 2 years ago)
7f1b223c02  Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default (Ross Wightman, 2 years ago)
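
Commits c1b3cea19d, dc90816f26 and 7f1b223c02 above add the maxvit_rmlp_* weights and make window/grid size track img_size by default. A usage sketch; treating `img_size` as an accepted `create_model` kwarg here is an assumption for illustration:

```python
# Usage sketch for the MaxViT weights referenced in the log above.
import timm
import torch

model = timm.create_model('maxvit_rmlp_tiny_rw_256', pretrained=True)
model.eval()
with torch.no_grad():
    logits = model(torch.randn(1, 3, 256, 256))

# The log reports 84.8 top-1 @ 320 for this model; building it at that resolution
# (img_size kwarg assumed to be supported since window/grid sizes derive from it):
model_320 = timm.create_model('maxvit_rmlp_tiny_rw_256', pretrained=True, img_size=320)
```
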
e6a4361306  pretrained_cfg entry for mvitv2_small_cls (Ross Wightman, 2 years ago)
f66e5f0e35  Fix class token support in MViT-V2, add small_class variant to ensure it's tested. Fix #1443 (Ross Wightman, 2 years ago)
b94b7cea65  Missed GCVit in README paper links (Ross Wightman, 2 years ago)
f1d2160d85  Update a few maxxvit comments, rename PartitionAttention -> PartitionAttenionCl for consistency with other blocks (Ross Wightman, 2 years ago)
eca6f0a25c  Fix syntax error (extra dataclass comma) in maxxvit.py (Ross Wightman, 2 years ago)
4f72bae43b  Merge pull request #1415 from rwightman/more_vit: More ViT and ViT-CNN Hybrid architecture (Ross Wightman, 2 years ago)
ff6a919cf5  Add --fast-norm arg to benchmark.py, train.py, validate.py (Ross Wightman, 2 years ago)
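
Commit ff6a919cf5 wires the fast-norm toggle into the scripts as a `--fast-norm` flag. A sketch of the same switch; the programmatic import path is my best guess at the 0.6.x layers layout, not confirmed by the log:

```python
# Command-line form (flag name taken from the commit message), e.g.:
#   python validate.py <data-dir> --model maxvit_rmlp_tiny_rw_256 --pretrained --fast-norm
# Programmatic form, assuming a set_fast_norm() helper under timm.models.layers:
from timm.models.layers import set_fast_norm  # assumed location for timm 0.6.x

set_fast_norm()  # route supported norm layers through the no-float32-upcast path
```
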
769ab4b98a  Clean up no_grad for trunc normal weight inits (Ross Wightman, 2 years ago)
48e1df8b37  Add norm/norm_act header comments (Ross Wightman, 2 years ago)
99ee61e245  Add T/G legend to README.md maxvit list (Ross Wightman, 2 years ago)
a54008bd97  Update README.md for merge (Ross Wightman, 2 years ago)
7c2660576d  Tweak init for convnext block using maxxvit/coatnext. (Ross Wightman, 2 years ago)
1d8d6f6072  Fix two default args in DenseNet blocks... fix #1427 (Ross Wightman, 2 years ago)
527f9a4cb2  Updated to correct maxvit_nano weights... (Ross Wightman, 2 years ago)
2a5b5b2a7b  Update feature_request.md (Ross Wightman, 2 years ago)
e018253acc  Update config.yml (Ross Wightman, 2 years ago)
995e2691d6  Update config.yml (Ross Wightman, 2 years ago)
b2e8426fca  Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'. (Ross Wightman, 2 years ago)
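
The 'avg2' pooling made the default in b2e8426fca is average pooling with kernel_size = stride = 2. A stand-alone equivalent of that downsampling step, for reference only:

```python
# 'avg2' downsample: average pooling with k = stride = 2, halving spatial dims.
import torch
import torch.nn as nn

pool = nn.AvgPool2d(kernel_size=2, stride=2)
x = torch.randn(1, 64, 56, 56)
print(pool(x).shape)  # torch.Size([1, 64, 28, 28])
```
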
837c68263b  For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode (Ross Wightman, 2 years ago)
cac0a4570a  More test fixes, pool size for 256x256 maxvit models (Ross Wightman, 2 years ago)
e939ed19b9  Rename internal creation fn for maxvit, has not been just coatnet for a while... (Ross Wightman, 2 years ago)
ffaf97f813  MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies.. (Ross Wightman, 2 years ago)
8c9696c9df  More model and test fixes (Ross Wightman, 2 years ago)
ca52108c2b  Fix some model support functions (Ross Wightman, 2 years ago)
f332fc2db7  Fix some test failures, torchscript issues (Ross Wightman, 2 years ago)
6e559e9b5f  Add MViT (Multi-Scale) V2 (Ross Wightman, 2 years ago)
43aa84e861  Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit) (Ross Wightman, 2 years ago)
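
Commit 43aa84e861 describes a 'fast' layer norm that skips the usual float32 upcast under autocast. A simplified conceptual sketch of that idea, not timm's implementation (which also has an APEX path):

```python
# Run layer_norm in the active autocast dtype instead of letting autocast upcast
# its inputs to float32; fall back to plain F.layer_norm otherwise.
import torch
import torch.nn.functional as F


def fast_layer_norm(x, normalized_shape, weight=None, bias=None, eps=1e-6):
    if torch.is_autocast_enabled():
        dtype = torch.get_autocast_gpu_dtype()
        x = x.to(dtype)
        weight = weight.to(dtype) if weight is not None else None
        bias = bias.to(dtype) if bias is not None else None
        with torch.cuda.amp.autocast(enabled=False):
            return F.layer_norm(x, normalized_shape, weight, bias, eps)
    return F.layer_norm(x, normalized_shape, weight, bias, eps)
```
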
c486aa71f8  Add GCViT (Ross Wightman, 2 years ago)
fba6ecd39b  Add EfficientFormer (Ross Wightman, 2 years ago)
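
The architecture families added over this stretch (MaxxVit/MaxViT/CoAtNet, MViT-V2, GCViT, EfficientFormer) register named model defs that can be discovered with timm's `list_models`; the wildcard patterns below simply mirror the names in the log:

```python
# List registered model names for the newly added families.
import timm

for pattern in ('maxvit*', 'coatnet*', 'mvitv2*', 'gcvit*', 'efficientformer*'):
    print(pattern, timm.list_models(pattern))
```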