- `7a0bd095cb` Ross Wightman, 2 years ago: Update model prune loader to use pkgutil (sketch below)
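A minimal sketch of the pkgutil pattern this commit moves to: reading a data file bundled inside an installed package instead of relying on a filesystem path. The package and resource names below are illustrative, not necessarily the exact ones timm uses.

```python
import io
import pkgutil

# pkgutil.get_data reads a file shipped inside an installed package, which
# works even when the package is zipped or has no stable filesystem path.
# Package / resource names here are illustrative.
data = pkgutil.get_data('timm.models', '_pruned/ecaresnet50d_pruned.txt')
if data is not None:
    for line in io.StringIO(data.decode('utf-8')):
        print(line.strip())
```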
- `0f2803de7a` Ross Wightman, 2 years ago: Move ImageNet metadata (aka info) files to timm/data/_info. Add helper classes to make info available for labelling. Update inference.py for first use. (sketch below)
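The helper classes referenced here surface under `timm.data`; a hedged sketch of using them to label predictions (class and method names are per my reading of the timm source of this era, so treat them as approximate):

```python
import timm
import torch
from timm.data import ImageNetInfo, infer_imagenet_subset

model = timm.create_model('convnext_tiny.fb_in1k', pretrained=True).eval()

# Match the model's pretrained classifier to the corresponding ImageNet
# label set, then turn a top-1 index into a human-readable label.
info = ImageNetInfo(infer_imagenet_subset(model))
logits = model(torch.randn(1, 3, 224, 224))
top1 = int(logits.argmax(dim=1))
print(info.index_to_label_name(top1), '->', info.index_to_description(top1))
```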
- `89b0452171` Ross Wightman, 2 years ago: Add PyTorch 1.13 inference benchmark numbers
- `7a13be67a5` Ross Wightman, 2 years ago: Update version.py
- `4b383e8ffe` Ross Wightman, 2 years ago: Merge pull request #1655 from rwightman/levit_efficientformer_redux (Add EfficientFormer-V2, refactor EfficientFormer and Levit)
- `13acac8c5e` Ross Wightman, 2 years ago: Update head metadata for effformerv2
- `8682528096` Ross Wightman, 2 years ago: Add first conv metadata for efficientformer_v2
- `72fba669a8` Ross Wightman, 2 years ago: Add is_scripting() guard on checkpoint_seq
- `95ec255f7f` Ross Wightman, 2 years ago: Finish timm model API for efficientformer_v2, add grad checkpointing support to both efficientformers (sketch below)
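The guard in `72fba669a8` and the checkpointing support in `95ec255f7f` follow the pattern used across timm models; a simplified sketch of that pattern (import path per recent timm versions, the `Stage` module is my illustration):

```python
import torch
import torch.nn as nn
from timm.models._manipulate import checkpoint_seq

class Stage(nn.Module):
    def __init__(self, blocks):
        super().__init__()
        self.blocks = nn.Sequential(*blocks)
        self.grad_checkpointing = False

    def forward(self, x):
        # checkpoint_seq trades compute for memory by re-running blocks in
        # the backward pass, but torch.jit scripting cannot handle it,
        # hence the is_scripting() guard.
        if self.grad_checkpointing and not torch.jit.is_scripting():
            x = checkpoint_seq(self.blocks, x)
        else:
            x = self.blocks(x)
        return x
```

On built timm models this is toggled via `model.set_grad_checkpointing(True)`.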
- `9d03c6f526` Ross Wightman, 2 years ago: Merge remote-tracking branch 'origin/main' into levit_efficientformer_redux
- `086bd55a94` Ross Wightman, 2 years ago: Add EfficientFormer-V2, refactor EfficientFormer and Levit for more uniformity across the 3 related architectures. Add features_out support to Levit conv models and efficientformer_v2. All weights on hub. (sketch below)
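With feature-map output wired up, these models plug into timm's generic extraction API; a hedged sketch, assuming the usual `features_only` path applies to the EfficientFormer-V2 variants this work added:

```python
import timm
import torch

# features_only swaps the classification head for multi-scale feature map
# outputs; the model name is assumed to be one of the added variants.
model = timm.create_model('efficientformerv2_s0', features_only=True)
feats = model(torch.randn(1, 3, 224, 224))
for f in feats:
    print(f.shape)
print(model.feature_info.channels(), model.feature_info.reduction())
```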
- `2cb2699dc8` Ross Wightman, 2 years ago: Apply fix from #1649 to main
- `e0a5911072` Ross Wightman, 2 years ago: Merge pull request #1645 from rwightman/norm_mlp_classifier (Extract NormMlpClassifierHead from maxxvit.py)
- `b3042081b4` Ross Wightman, 2 years ago: Add laion -> in1k fine-tuned base and large_mlp weights for convnext
- `316bdf8955` Ross Wightman, 2 years ago: Add mlp head support for convnext_large, add laion2b CLIP weights, prep fine-tuned weight tags
- `6f28b562c6` Ross Wightman, 2 years ago: Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments
- `29fda20e6d` Ross Wightman, 2 years ago: Merge branch 'fffffgggg54-main'
- `9a53c3f727` Ross Wightman, 2 years ago: Finalize DaViT, some formatting and modelling simplifications (separate PatchEmbed into Stem + Downsample), weights on HF hub
- `fb717056da` Fredo Guan, 2 years ago: Merge remote-tracking branch 'upstream/main'
- `2bbc26dd82` Ross Wightman, 2 years ago: version 0.8.8dev0
- `64667bfa0e` Ross Wightman, 2 years ago: Add 'gigantic' vit clip variant for feature extraction and future fine-tuning
- `3aa31f537d` Ross Wightman, 2 years ago: Merge pull request #1641 from rwightman/maxxvit_hub (MaxxViT weights on hub, new 12k FT 1k weights, convnext 384x384 12k FT 1k, and more)
- `9983ed7721` Ross Wightman, 2 years ago: xlarge maxvit killing the tests
- `c2822568ec` Ross Wightman, 2 years ago: Update version to 0.8.7dev0
- `0417a9dd81` Ross Wightman, 2 years ago: Update README
- `36989cfae4` Ross Wightman, 2 years ago: Factor out readme generation in hub helper, add more readme fields
- `32f252381d` Ross Wightman, 2 years ago: Change order of checkpoint filtering fn application in builder, try dict, model variant first
- `e9f1376cde` Ross Wightman, 2 years ago: Clean up resolve data config fns, add 'model' variant that takes model as first arg, make 'args' arg optional in original fn (sketch below)
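The 'model' variant added here is the form most callers use today; a sketch, assuming the function landed as `resolve_model_data_config` in `timm.data`:

```python
import timm
from timm.data import resolve_model_data_config, create_transform

model = timm.create_model('convnext_tiny.fb_in1k', pretrained=True)

# Model-first variant: pulls input size, interpolation, mean/std, and crop
# settings straight from the model's pretrained config.
data_cfg = resolve_model_data_config(model)
transform = create_transform(**data_cfg, is_training=False)
print(data_cfg)
```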
- `bed350f5e5` Ross Wightman, 2 years ago: Push all MaxxViT weights to HF hub, clean up impl, add feature map extraction support and promote to 'std' architecture. Fix norm head for proper embedding / feat map output. Add new in12k + ft 1k weights.
- `ca38e1e73f` Ross Wightman, 2 years ago: Update ClassifierHead module, add reset() method, update in_chs -> in_features for consistency (sketch below)
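At the model level, the head's `reset()` plumbing surfaces as `reset_classifier()`; a sketch of reusing a pretrained trunk with a fresh head (argument names per the common timm convention):

```python
import timm

model = timm.create_model('convnext_tiny.fb_in1k', pretrained=True)

# Replace the classifier for a new task; num_classes=0 would instead
# remove the head so forward() returns pooled embeddings.
model.reset_classifier(num_classes=10)
print(model.get_classifier())
```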
- `8ab573cd26` Ross Wightman, 2 years ago: Add convnext_tiny and convnext_small 384x384 fine-tunes of in12k weights, fix pool size for laion CLIP convnext weights
- `e58a884c1c` Fredo Guan, 2 years ago: Merge remote-tracking branch 'upstream/main'
- `81ca323751` Fredo Guan, 2 years ago: Davit update formatting and fix grad checkpointing (#7). Fixed head to gap->norm->fc as per convnext, along with option for norm->gap->fc. Failed tests were due to clip convnext models; davit tests passed.
- `e9aac412de` Ross Wightman, 2 years ago: Correct mean/std for CLIP convnexts
- `42bd8f7bcb` Ross Wightman, 2 years ago: Add convnext_base CLIP image tower weights for fine-tuning / features
- `65aea97067` Ross Wightman, 2 years ago: Update tests.yml, attempting to work around flaky azure ubuntu mirrors
- `dd60c45044` Ross Wightman, 2 years ago: Merge pull request #1633 from rwightman/freeze_norm_revisit (Update batchnorm freezing to handle NormAct variants)
- `e520553e3e` Ross Wightman, 2 years ago: Update batchnorm freezing to handle NormAct variants, add GroupNorm1Act, update BatchNormAct2d tracing change from PyTorch (sketch below)
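timm's freezing helpers live in `timm.utils`; a hedged sketch of freezing a backbone while converting its BatchNorm (and, after this commit, NormAct) layers to frozen variants. Helper names and signatures are as I recall them from `timm.utils.model`, so verify against your installed version:

```python
import timm
from timm.utils import freeze, unfreeze

model = timm.create_model('resnet50', pretrained=True)

# Freeze the whole network: grads off, batchnorm layers swapped for
# frozen (fixed running-stats) equivalents...
freeze(model)
# ...then thaw just the classifier for fine-tuning.
unfreeze(model, ['fc'])
```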
- `a2c14c2064` Ross Wightman, 2 years ago: Add tiny/small in12k pretrained and fine-tuned ConvNeXt models
- `01aea8c1bf` Ross Wightman, 2 years ago: Version 0.8.6dev0
- `2e83bba142` Ross Wightman, 2 years ago: Revert head norm changes to ConvNeXt as it broke some downstream use, alternate workaround for fcmae weights
- `2c24cb98f1` Ikko Eltociear Ashimine, 2 years ago: Fix typo in results/README.md (occuring -> occurring)
- `1825b5e314` Ross Wightman, 2 years ago: maxxvit type
- `5078b28f8a` Ross Wightman, 2 years ago: More kwarg handling tweaks, maxvit_base_rw def added
- `c0d7388a1b` Ross Wightman, 2 years ago: Improve kwarg merging in more models
- `94a91598c3` Ross Wightman, 2 years ago: Update README.md
- `d2ef5a3a94` Ross Wightman, 2 years ago: Update README.md
- `ae9153052f` Ross Wightman, 2 years ago: Update version.py
- `60ebb6cefa` Ross Wightman, 2 years ago: Re-order vit pretrained entries for more sensible default weights (no .tag specified) (sketch below)
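Context for the re-ordering: pretrained weights are addressed as `model_name.tag`, and the first entry registered for an architecture becomes the default when no tag is given. A sketch; the tag shown is one of the real vit weight tags, but treat the specific string as illustrative:

```python
import timm

# An explicit tag selects a specific weight set...
m1 = timm.create_model('vit_base_patch16_224.augreg2_in21k_ft_in1k', pretrained=True)

# ...while omitting the tag falls back to the first (default) entry.
m2 = timm.create_model('vit_base_patch16_224', pretrained=True)

# List the tags registered for an architecture.
print(timm.list_pretrained('vit_base_patch16_224*'))
```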
- `e861b74cf8` Ross Wightman, 2 years ago: Pass --model-kwargs (and --opt-kwargs for train) through from the command line to model __init__. Update some models to improve arg overlay. Clean up along the way. (sketch below)
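The CLI pass-through means extra model args reach `create_model()` without dedicated flags. A sketch of the key=value parsing such an option needs; the `ParseKwargs` action below is my illustration of the idea, not necessarily timm's exact helper:

```python
import argparse
import ast

class ParseKwargs(argparse.Action):
    """Parse space-separated key=value pairs into a dict, e.g.
    train.py --model-kwargs drop_path_rate=0.1 depth=12
    """
    def __call__(self, parser, namespace, values, option_string=None):
        kwargs = {}
        for pair in values:
            key, value = pair.split('=', 1)
            try:
                value = ast.literal_eval(value)  # numbers, bools, tuples, ...
            except (ValueError, SyntaxError):
                pass  # leave unparseable values as plain strings
            kwargs[key] = value
        setattr(namespace, self.dest, kwargs)

parser = argparse.ArgumentParser()
parser.add_argument('--model-kwargs', nargs='*', default={}, action=ParseKwargs)
args = parser.parse_args(['--model-kwargs', 'drop_path_rate=0.1', 'depth=12'])

# The resulting dict is forwarded into the model constructor, roughly:
#   timm.create_model(name, **args.model_kwargs)
print(args.model_kwargs)  # {'drop_path_rate': 0.1, 'depth': 12}
```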