Ross Wightman | 122621daef | Add Final annotation to attn_fas to avoid symbol lookup of new scaled_dot_product_attn fn on old PyTorch in jit | 2 years ago
Ross Wightman | 621e1b2182 | Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet | 2 years ago
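These two commits share one pattern worth illustrating: a `Final`-annotated flag lets TorchScript constant-fold the attention branch, so PyTorch < 2.0 never has to resolve the `F.scaled_dot_product_attention` symbol during scripting. A minimal sketch, with class and attribute names that are illustrative rather than the exact timm code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.jit import Final

class Attention(nn.Module):
    # Final makes the flag a TorchScript compile-time constant, so the untaken
    # branch is pruned and old PyTorch never looks up the fused-attention symbol.
    fused_attn: Final[bool]

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.fused_attn = hasattr(F, 'scaled_dot_product_attention')
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim).permute(2, 0, 3, 1, 4)
        q, k, v = qkv.unbind(0)
        if self.fused_attn:
            x = F.scaled_dot_product_attention(q, k, v)  # PyTorch 2.0 fused kernel
        else:
            attn = (q @ k.transpose(-2, -1)) * self.scale
            x = attn.softmax(dim=-1) @ v
        x = x.transpose(1, 2).reshape(B, N, C)
        return self.proj(x)
```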
Ross Wightman | a3d528524a | Version 0.8.12dev0 | 2 years ago
testbot | a09d403c24 | changed warning to info | 2 years ago
testbot | 8470e29541 | Add support to load safetensors weights | 2 years ago
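A minimal sketch of what suffix-dispatched safetensors loading can look like; the helper name and structure are assumptions, not timm's exact code:

```python
import torch

try:
    import safetensors.torch
    _has_safetensors = True
except ImportError:
    _has_safetensors = False

def load_state_dict(checkpoint_path: str, device: str = 'cpu'):
    # Hypothetical loader in the spirit of this commit: dispatch on file
    # suffix, falling back to torch.load for classic .pth checkpoints.
    if checkpoint_path.endswith('.safetensors'):
        assert _has_safetensors, '`pip install safetensors` to load .safetensors checkpoints'
        return safetensors.torch.load_file(checkpoint_path, device=device)
    return torch.load(checkpoint_path, map_location=device)
```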
Ross Wightman | f35d6ea57b | Add multi-tensor (foreach) version of Lion in style of upcoming PyTorch 2.0 optimizers | 2 years ago
Ross Wightman | 709d5e0d9d | Add Lion optimizer | 2 years ago
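For reference, Lion's update is the sign of a momentum/gradient interpolation, with AdamW-style decoupled weight decay. A minimal single-tensor sketch, not timm's implementation; the foreach variant batches the same ops across parameters with torch._foreach_* primitives:

```python
import torch
from torch.optim import Optimizer

class Lion(Optimizer):
    """Minimal single-tensor sketch of Lion (Chen et al., 2023); not timm's code."""

    def __init__(self, params, lr=1e-4, betas=(0.9, 0.99), weight_decay=0.0):
        super().__init__(params, dict(lr=lr, betas=betas, weight_decay=weight_decay))

    @torch.no_grad()
    def step(self, closure=None):
        for group in self.param_groups:
            lr, (beta1, beta2), wd = group['lr'], group['betas'], group['weight_decay']
            for p in group['params']:
                if p.grad is None:
                    continue
                state = self.state[p]
                if 'exp_avg' not in state:
                    state['exp_avg'] = torch.zeros_like(p)
                m, g = state['exp_avg'], p.grad
                p.mul_(1 - lr * wd)  # decoupled weight decay, as in AdamW
                # Step direction: sign of interpolation between momentum and grad.
                p.add_(m.mul(beta1).add_(g, alpha=1 - beta1).sign_(), alpha=-lr)
                m.mul_(beta2).add_(g, alpha=1 - beta2)  # momentum tracked with beta2
```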
Ross Wightman | 624266148d | Remove unused imports from _hub helpers | 2 years ago
Ross Wightman | 2cfff0581b | Add grad_checkpointing support to features_only, test in EfficientDet | 2 years ago
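Hypothetical usage of the combination this commit enables, assuming the features-only wrapper forwards set_grad_checkpointing() to the backbone:

```python
import torch
import timm

# A features-only backbone (as consumed by a detector such as EfficientDet)
# with activation checkpointing enabled to trade compute for memory.
backbone = timm.create_model('efficientnet_b0', features_only=True, pretrained=False)
backbone.set_grad_checkpointing(True)  # assumes the feature wrapper forwards this call

feats = backbone(torch.randn(2, 3, 224, 224))
print([f.shape for f in feats])  # one feature map per output stage
```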
Ross Wightman | 45af496197 | Version 0.8.11dev0 | 2 years ago
Ross Wightman | 9c14654a0d | Improve support for custom dataset label name/description through HF hub export, via pretrained_cfg | 2 years ago
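A hedged sketch of attaching custom label metadata before a hub export; the pretrained_cfg key names and repo id here are assumptions:

```python
import timm
from timm.models import push_to_hf_hub  # re-exported from the _hub helpers

model = timm.create_model('resnet18', num_classes=2)
# Custom label metadata carried in pretrained_cfg so it survives the HF hub
# round-trip; the exact key names below are assumptions.
model.pretrained_cfg = dict(
    model.pretrained_cfg,
    label_names=['cat', 'dog'],
    label_descriptions={'cat': 'a domestic cat', 'dog': 'a domestic dog'},
)
push_to_hf_hub(model, 'your-user/resnet18-cats-dogs')  # hypothetical repo id
```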
Ross Wightman | 497be8343c | Update README and version | 2 years ago
Ross Wightman | 0d33127df2 | Add 384x384 convnext_large_mlp laion2b fine-tune on in1k | 2 years ago
Ross Wightman | 7a0bd095cb | Update model prune loader to use pkgutil | 2 years ago
Ross Wightman | 0f2803de7a | Move ImageNet metadata (aka info) files to timm/data/_info. Add helper classes to make info available for labelling. Update inference.py for first use. | 2 years ago
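A short sketch of the labelling helpers this commit introduces; treat the exact class, method, and subset names as assumptions based on how later timm releases expose them:

```python
from timm.data import ImageNetInfo  # helpers over the files now in timm/data/_info

# Map a class index to human-readable labels, as inference.py now does.
info = ImageNetInfo('imagenet-1k')        # subset name is an assumption
print(info.index_to_label_name(281))      # e.g. a short label like 'tabby'
print(info.index_to_description(281))     # longer description text
```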
Ross Wightman | 7a13be67a5 | Update version.py | 2 years ago
Ross Wightman | 13acac8c5e | Update head metadata for effformerv2 | 2 years ago
Ross Wightman | 8682528096 | Add first conv metadata for efficientformer_v2 | 2 years ago
Ross Wightman | 72fba669a8 | is_scripting() guard on checkpoint_seq | 2 years ago
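The guard pattern itself, sketched for a generic model; the import path for checkpoint_seq follows later timm releases and may differ here:

```python
import torch
from timm.models import checkpoint_seq  # exported from timm's model helpers

def forward_features(self, x):
    x = self.stem(x)
    # Activation checkpointing is not TorchScript-compatible, so the guard
    # keeps scripted models on the plain sequential path.
    if self.grad_checkpointing and not torch.jit.is_scripting():
        x = checkpoint_seq(self.blocks, x)
    else:
        x = self.blocks(x)
    return x
```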
Ross Wightman | 95ec255f7f | Finish timm model API for efficientformer_v2, add grad checkpointing support to both efficientformers | 2 years ago
Ross Wightman | 9d03c6f526 | Merge remote-tracking branch 'origin/main' into levit_efficientformer_redux | 2 years ago
Ross Wightman | 086bd55a94 | Add EfficientFormer-V2, refactor EfficientFormer and Levit for more uniformity across the 3 related archs. Add features_out support to levit conv models and efficientformer_v2. All weights on hub. | 2 years ago
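A hypothetical smoke test of the new feature output path, assuming it is surfaced through the standard features_only flag:

```python
import torch
import timm

# Model name exists in timm; whether this commit wires it through
# features_only exactly as shown is an assumption.
model = timm.create_model('efficientformerv2_s0', features_only=True)
feats = model(torch.randn(1, 3, 224, 224))
print([f.shape for f in feats])
```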
Ross Wightman | 2cb2699dc8 | Apply fix from #1649 to main | 2 years ago
Ross Wightman | b3042081b4 | Add laion -> in1k fine-tuned base and large_mlp weights for convnext | 2 years ago
Ross Wightman | 316bdf8955 | Add mlp head support for convnext_large, add laion2b CLIP weights, prep fine-tuned weight tags | 2 years ago
Ross Wightman | 6f28b562c6 | Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments | 2 years ago
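Usage sketch of the factored head; the constructor arguments shown are partly assumptions, so check timm/layers/classifier.py for the exact signature:

```python
import torch
from timm.layers import NormMlpClassifierHead  # the head factored out here

# norm -> global pool -> MLP pre-logits -> fc, shared across the three archs.
head = NormMlpClassifierHead(in_features=768, num_classes=1000,
                             hidden_size=768, pool_type='avg')
logits = head(torch.randn(2, 768, 7, 7))  # NCHW feature map in
print(logits.shape)                       # torch.Size([2, 1000])
```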
Ross Wightman | 9a53c3f727 | Finalize DaViT, some formatting and modelling simplifications (separate PatchEmbed into Stem + Downsample), weights on HF hub | 2 years ago
Fredo Guan | fb717056da | Merge remote-tracking branch 'upstream/main' | 2 years ago
Ross Wightman | 2bbc26dd82 | version 0.8.8dev0 | 2 years ago
Ross Wightman | 64667bfa0e | Add 'gigantic' vit clip variant for feature extraction and future fine-tuning | 2 years ago
Ross Wightman | c2822568ec | Update version to 0.8.7dev0 | 2 years ago
Ross Wightman | 36989cfae4 | Factor out readme generation in hub helper, add more readme fields | 2 years ago
Ross Wightman | 32f252381d | Change order of checkpoint filtering fn application in builder, try dict, model variant first | 2 years ago
Ross Wightman | e9f1376cde | Cleanup resolve data config fns, add 'model' variant that takes model as first arg, make 'args' arg optional in original fn | 2 years ago
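The model-first resolve pattern these changes enable, as later timm releases expose it (the function name may postdate this exact commit):

```python
import timm
from timm.data import resolve_model_data_config, create_transform

model = timm.create_model('convnext_tiny', pretrained=False)
# The 'model' variant: pass the model itself, no args namespace required.
cfg = resolve_model_data_config(model)
transform = create_transform(**cfg, is_training=False)
print(cfg['input_size'], cfg['mean'], cfg['std'])
```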
Ross Wightman | bed350f5e5 | Push all MaxxViT weights to HF hub, cleanup impl, add feature map extraction support and promote to 'std' architecture. Fix norm head for proper embedding / feat map output. Add new in12k + ft 1k weights. | 2 years ago
Ross Wightman | ca38e1e73f | Update ClassifierHead module, add reset() method, update in_chs -> in_features for consistency | 2 years ago
Ross Wightman | 8ab573cd26 | Add convnext_tiny and convnext_small 384x384 fine-tunes of in12k weights, fix pool size for laion CLIP convnext weights | 2 years ago
Fredo Guan | 81ca323751 | DaViT: update formatting and fix grad checkpointing (#7); fixed head to gap->norm->fc as per convnext, along with option for norm->gap->fc; failed tests were due to clip convnext models, davit tests passed | 2 years ago
Ross Wightman | e9aac412de | Correct mean/std for CLIP convnexts | 2 years ago
Ross Wightman | 42bd8f7bcb | Add convnext_base CLIP image tower weights for fine-tuning / features | 2 years ago
Ross Wightman | e520553e3e | Update batchnorm freezing to handle NormAct variants, add GroupNorm1Act, update BatchNormAct2d tracing change from PyTorch | 2 years ago
Ross Wightman | a2c14c2064 | Add tiny/small in12k pretrained and fine-tuned ConvNeXt models | 2 years ago
Ross Wightman | 01aea8c1bf | Version 0.8.6dev0 | 2 years ago
Ross Wightman | 2e83bba142 | Revert head norm changes to ConvNeXt as it broke some downstream use, alternate workaround for fcmae weights | 2 years ago
Ross Wightman | 1825b5e314 | maxxvit type | 2 years ago
Ross Wightman | 5078b28f8a | More kwarg handling tweaks, maxvit_base_rw def added | 2 years ago
Ross Wightman | c0d7388a1b | Improving kwarg merging in more models | 2 years ago
Ross Wightman | ae9153052f | Update version.py | 2 years ago
Ross Wightman | 60ebb6cefa | Re-order vit pretrained entries for more sensible default weights (no .tag specified) | 2 years ago
Ross Wightman | e861b74cf8 | Pass through --model-kwargs (and --opt-kwargs for train) from command line through to model __init__. Update some models to improve arg overlay. Cleanup along the way. | 2 years ago
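A sketch of the equivalent overlay done via the factory; the CLI flag syntax in the comment (space-separated key=value pairs) is an assumption:

```python
import timm

# CLI form this commit adds (flag syntax assumed):
#   python train.py --model resnet50 --model-kwargs drop_path_rate=0.05
# Equivalent overlay through the factory: extra kwargs flow into __init__.
model = timm.create_model('resnet50', drop_path_rate=0.05)
```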