Benjamin Bossan
a5b01ec04e
Add type annotations to _registry.py
...
Description
Add type annotations to _registry.py so that they will pass mypy
--strict.
Comment
I was reading the code and felt that this module would be easier to
understand with type annotations. Therefore, I went ahead and added the
annotations.
The idea with this PR is to start small to see if we can align on _how_
to annotate types. I've seen people in the past disagree on how strictly
to annotate the code base, so before spending too much time on this, I
wanted to check if you agree, Ross.
Most of the added types should be straightforward (a minimal sketch of the
general style is shown after this list). Some notes on the non-trivial changes:
- I made no assumption about the fn passed to register_model, but maybe
the type could be stricter. Are all models nn.Modules?
- If I'm not mistaken, the type hint for get_arch_name was incorrect
- I had to add a # type: ignore to model.__all__ = ...
- I made some minor code changes to list_models to facilitate the
typing. I think the changes should not affect the logic of the function.
- I removed list from list(sorted(...)) because sorted always returns a
list.
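A minimal sketch of the annotation style in question, with illustrative names (the real _registry.py internals differ):
```python
# Illustrative sketch only; names here are hypothetical, not the actual
# _registry.py internals. Shows the kind of typing that passes mypy --strict.
from typing import Callable, Dict, TypeVar

_TFn = TypeVar('_TFn', bound=Callable[..., object])

# Registry mapping model name -> entrypoint function.
_model_entrypoints: Dict[str, Callable[..., object]] = {}

def register_model(fn: _TFn) -> _TFn:
    # Preserve fn's exact type via the TypeVar so decorated functions
    # don't degrade to a plain Callable under --strict.
    _model_entrypoints[fn.__name__] = fn
    return fn
```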
2 years ago
Ross Wightman
4d9c3ae2fb
Add laion2b 320x320 ConvNeXt-Large CLIP weights
2 years ago
Ross Wightman
d0b45c9b4d
Make safetensors import optional for now. Improve avg/clean checkpoint ext handling a bit (more consistent).
2 years ago
Ross Wightman
7d9e321b76
Improve tracing of window attn models with simpler reshape logic
2 years ago
Ross Wightman
2e38d53dca
Remove dead line
2 years ago
Ross Wightman
f77c04ff36
Torchscript fixes/hacks for rms_norm, refactor ParallelScalingBlock with manual combination of input projections, closer paper match
2 years ago
Ross Wightman
122621daef
Add Final annotation to attn_fas to avoid symbol lookup of new scaled_dot_product_attn fn on old PyTorch in jit
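A rough sketch of the pattern, assuming a boolean flag (the attribute name here is illustrative): marking it torch.jit.Final lets TorchScript constant-fold the branch, so the new fused op is never looked up on older PyTorch.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    # Final => TorchScript treats this as a constant and prunes the
    # untaken branch, avoiding a symbol lookup that would fail on
    # PyTorch versions without scaled_dot_product_attention.
    fast_attn: torch.jit.Final[bool]

    def __init__(self) -> None:
        super().__init__()
        self.fast_attn = hasattr(F, 'scaled_dot_product_attention')

    def forward(self, q: torch.Tensor, k: torch.Tensor, v: torch.Tensor) -> torch.Tensor:
        if self.fast_attn:
            return F.scaled_dot_product_attention(q, k, v)
        attn = (q @ k.transpose(-2, -1) * q.shape[-1] ** -0.5).softmax(dim=-1)
        return attn @ v
```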
2 years ago
Ross Wightman
621e1b2182
Add ideas from 'Scaling ViT to 22-B Params', testing PyTorch 2.0 fused F.scaled_dot_product_attention impl in vit, vit_relpos, maxxvit / coatnet.
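For reference, a small sketch of what the fused call replaces (shapes are illustrative ViT-ish ones, not from the commit):
```python
import math
import torch
import torch.nn.functional as F

# (batch, heads, tokens, head_dim)
q, k, v = (torch.randn(2, 12, 197, 64) for _ in range(3))

# PyTorch 2.0 fused path; may dispatch to flash / memory-efficient kernels.
fused = F.scaled_dot_product_attention(q, k, v)

# The unfused reference computation it replaces.
attn = (q @ k.transpose(-2, -1)) / math.sqrt(q.shape[-1])
ref = attn.softmax(dim=-1) @ v

torch.testing.assert_close(fused, ref, atol=1e-4, rtol=1e-4)
```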
2 years ago
testbot
a09d403c24
changed warning to info
2 years ago
testbot
8470e29541
Add support to load safetensors weights
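A minimal sketch of the loading path, assuming the optional safetensors package is installed (the file name and model are placeholders):
```python
import torch.nn as nn
from safetensors.torch import load_file

state_dict = load_file('model.safetensors')   # dict[str, torch.Tensor]
model = nn.Linear(4, 2)                       # stand-in for a timm model
model.load_state_dict(state_dict, strict=False)
```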
2 years ago
Ross Wightman
624266148d
Remove unused imports from _hub helpers
2 years ago
Ross Wightman
2cfff0581b
Add grad_checkpointing support to features_only, test in EfficientDet.
2 years ago
Ross Wightman
9c14654a0d
Improve support for custom dataset label name/description through HF hub export, via pretrained_cfg
2 years ago
Ross Wightman
0d33127df2
Add 384x384 convnext_large_mlp laion2b fine-tune on in1k
2 years ago
Ross Wightman
7a0bd095cb
Update model prune loader to use pkgutil
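A sketch of the idea: read bundled data via pkgutil instead of hand-built filesystem paths (package/resource names here are illustrative):
```python
import pkgutil

# Returns bytes (or None) for a resource shipped inside the package,
# which also works when the package is installed as a zip or wheel.
data = pkgutil.get_data('timm.models', 'pruned/ecaresnet50d_pruned.txt')
if data is not None:
    lines = data.decode('utf-8').splitlines()
```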
2 years ago
Ross Wightman
13acac8c5e
Update head metadata for effformerv2
2 years ago
Ross Wightman
8682528096
Add first conv metadata for efficientformer_v2
2 years ago
Ross Wightman
72fba669a8
is_scripting() guard on checkpoint_seq
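A sketch of the guard pattern, using the stdlib checkpoint helper as a stand-in for timm's checkpoint_seq: gradient checkpointing is not scriptable, so it is bypassed under torch.jit.is_scripting().
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint_sequential  # stand-in for timm's checkpoint_seq

def forward_blocks(blocks: nn.Sequential, x: torch.Tensor, grad_checkpointing: bool) -> torch.Tensor:
    if grad_checkpointing and not torch.jit.is_scripting():
        # Trade compute for memory during training; skipped when scripted.
        x = checkpoint_sequential(blocks, len(blocks), x)
    else:
        x = blocks(x)
    return x
```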
2 years ago
Ross Wightman
95ec255f7f
Finish timm model api for efficientformer_v2, add grad checkpointing support to both efficientformers
2 years ago
Ross Wightman
9d03c6f526
Merge remote-tracking branch 'origin/main' into levit_efficientformer_redux
2 years ago
Ross Wightman
086bd55a94
Add EfficientFormer-V2, refactor EfficientFormer and Levit for more uniformity across the 3 related arch. Add features_out support to levit conv models and efficientformer_v2. All weights on hub.
2 years ago
Ross Wightman
2cb2699dc8
Apply fix from #1649 to main
2 years ago
Ross Wightman
b3042081b4
Add laion -> in1k fine-tuned base and large_mlp weights for convnext
2 years ago
Ross Wightman
316bdf8955
Add mlp head support for convnext_large, add laion2b CLIP weights, prep fine-tuned weight tags
2 years ago
Ross Wightman
6f28b562c6
Factor NormMlpClassifierHead from MaxxViT and use across MaxxViT / ConvNeXt / DaViT, refactor some type hints & comments
2 years ago
Ross Wightman
9a53c3f727
Finalize DaViT, some formatting and modelling simplifications (separate PatchEmbed into Stem + Downsample), weights on HF hub.
2 years ago
Fredo Guan
fb717056da
Merge remote-tracking branch 'upstream/main'
2 years ago
Ross Wightman
64667bfa0e
Add 'gigantic' vit clip variant for feature extraction and future fine-tuning
2 years ago
Ross Wightman
36989cfae4
Factor out readme generation in hub helper, add more readme fields
2 years ago
Ross Wightman
32f252381d
Change order of checkpoint filtering fn application in builder, try dict, model variant first
2 years ago
Ross Wightman
bed350f5e5
Push all MaxxViT weights to HF hub, cleanup impl, add feature map extraction support and promote to 'std' architecture. Fix norm head for proper embedding / feat map output. Add new in12k + ft 1k weights.
2 years ago
Ross Wightman
ca38e1e73f
Update ClassifierHead module, add reset() method, update in_chs -> in_features for consistency
2 years ago
Ross Wightman
8ab573cd26
Add convnext_tiny and convnext_small 384x384 fine-tunes of in12k weights, fix pool size for laion CLIP convnext weights
2 years ago
Fredo Guan
81ca323751
Davit update formatting and fix grad checkpointing (#7)
...
fixed head to gap->norm->fc as per convnext, along with option for norm->gap->fc
failed tests were due to clip convnext models; davit tests passed
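A sketch of the two head orderings mentioned above (an illustrative module, not the actual timm head class):
```python
import torch
import torch.nn as nn

class Head(nn.Module):
    def __init__(self, dim: int, num_classes: int, norm_first: bool = False):
        super().__init__()
        self.norm_first = norm_first        # True: norm -> gap -> fc
        self.norm = nn.LayerNorm(dim)
        self.fc = nn.Linear(dim, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, C) token features; gap = global average pool over N
        if self.norm_first:
            x = self.norm(x).mean(dim=1)    # norm -> gap
        else:
            x = self.norm(x.mean(dim=1))    # gap -> norm (as per convnext)
        return self.fc(x)
```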
2 years ago
Ross Wightman
e9aac412de
Correct mean/std for CLIP convnexts
2 years ago
Ross Wightman
42bd8f7bcb
Add convnext_base CLIP image tower weights for fine-tuning / features
2 years ago
Ross Wightman
a2c14c2064
Add tiny/small in12k pretrained and fine-tuned ConvNeXt models
2 years ago
Ross Wightman
2e83bba142
Revert head norm changes to ConvNeXt as they broke some downstream use, alternate workaround for fcmae weights
2 years ago
Ross Wightman
1825b5e314
maxxvit type
2 years ago
Ross Wightman
5078b28f8a
More kwarg handling tweaks, maxvit_base_rw def added
2 years ago
Ross Wightman
c0d7388a1b
Improving kwarg merging in more models
2 years ago
Ross Wightman
60ebb6cefa
Re-order vit pretrained entries for more sensible default weights (no .tag specified)
2 years ago
Ross Wightman
e861b74cf8
Pass through --model-kwargs (and --opt-kwargs for train) from command line through to model __init__. Update some models to improve arg overlay. Cleanup along the way.
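A hedged sketch of the pass-through idea; the flag name matches the commit message, but the parsing details here are illustrative, not the actual train script code:
```python
import argparse
import ast

def parse_kwargs(pairs: list) -> dict:
    out = {}
    for pair in pairs:
        key, _, value = pair.partition('=')
        try:
            out[key] = ast.literal_eval(value)  # numbers, bools, tuples, ...
        except (ValueError, SyntaxError):
            out[key] = value                    # fall back to raw string
    return out

parser = argparse.ArgumentParser()
parser.add_argument('--model-kwargs', nargs='*', default=[], metavar='KEY=VALUE')
args = parser.parse_args(['--model-kwargs', 'depth=12', 'global_pool=avg'])

model_kwargs = parse_kwargs(args.model_kwargs)  # {'depth': 12, 'global_pool': 'avg'}
# e.g. timm.create_model('vit_base_patch16_224', **model_kwargs)
```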
2 years ago
Ross Wightman
add3fb864e
Working on improved model card template for push_to_hf_hub
2 years ago
Ross Wightman
6e5553da5f
Add ConvNeXt-V2 support (model additions and weights) (#1614)
...
* Add ConvNeXt-V2 support (model additions and weights)
* ConvNeXt-V2 weights on HF Hub, tweaking some tests
* Update README, fixing convnextv2 tests
2 years ago
Ross Wightman
6902c48a5f
Fix ResNet based models to work w/ norm layers w/o affine params. Reformat long arg lists into vertical form.
2 years ago
Ross Wightman
8ece53e194
Switch BEiT to HF hub weights
2 years ago
Ross Wightman
9a51e4ea2e
Add FlexiViT models and weights, refactoring, push more weights
...
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2 years ago
Fredo Guan
10b3f696b4
Davit std (#6)
...
Separate patch_embed module
2 years ago
Ross Wightman
656e1776de
Convert mobilenetv3 to multi-weight, tweak PretrainedCfg metadata
2 years ago