Ross Wightman
2e83bba142
Revert head norm changes to ConvNeXt as they broke some downstream uses; alternate workaround for fcmae weights
2 years ago
Ross Wightman
1825b5e314
maxxvit type
2 years ago
Ross Wightman
5078b28f8a
More kwarg handling tweaks, maxvit_base_rw def added
2 years ago
Ross Wightman
c0d7388a1b
Improving kwarg merging in more models
2 years ago
Ross Wightman
ae9153052f
Update version.py
2 years ago
Ross Wightman
60ebb6cefa
Re-order vit pretrained entries for more sensible default weights (no .tag specified)
2 years ago
Ross Wightman
e861b74cf8
Pass through --model-kwargs (and --opt-kwargs for train) from command line through to model __init__. Update some models to improve arg overlay. Cleanup along the way.
2 years ago
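The key=value overlay described in the commit above can be sketched as a small parser; `parse_kwargs` is a hypothetical helper name for illustration, not timm's actual implementation:

```python
import ast

def parse_kwargs(pairs):
    """Parse command-line key=value pairs into a kwargs dict.

    Hypothetical sketch of how --model-kwargs / --opt-kwargs strings
    could be forwarded to a model __init__; not timm's exact helper.
    """
    out = {}
    for pair in pairs:
        key, _, value = pair.partition('=')
        try:
            # Interpret numbers, booleans, tuples, etc. as Python literals
            out[key] = ast.literal_eval(value)
        except (ValueError, SyntaxError):
            # Fall back to the raw string (e.g. act_layer=silu)
            out[key] = value
    return out
```

A model factory could then overlay these on a model's defaults, e.g. `create_model(name, **parse_kwargs(args.model_kwargs))`.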
Ross Wightman
add3fb864e
Working on improved model card template for push_to_hf_hub
2 years ago
Ross Wightman
dd0bb327e9
Update version.py
Ver 0.8.4dev0
2 years ago
Ross Wightman
6e5553da5f
Add ConvNeXt-V2 support (model additions and weights) (#1614)
* Add ConvNeXt-V2 support (model additions and weights)
* ConvNeXt-V2 weights on HF Hub, tweaking some tests
* Update README, fixing convnextv2 tests
2 years ago
Ross Wightman
6902c48a5f
Fix ResNet based models to work w/ norm layers w/o affine params. Reformat long arg lists into vertical form.
2 years ago
Ross Wightman
d5aa17e415
Remove print from auto_augment
2 years ago
Ross Wightman
7c846d9970
Better vmap compat across recent torch versions
2 years ago
Ross Wightman
4e24f75289
Merge pull request #1593 from rwightman/multi-weight_effnet_convnext
Update efficientnet.py and convnext.py to multi-weight, add new 12k pretrained weights
2 years ago
Ross Wightman
8ece53e194
Switch BEiT to HF hub weights
2 years ago
Ross Wightman
d1bfa9a000
Support HF datasets and TFDS w/ a sub-path by fixing split, fix #1598 ... add class mapping support to HF datasets in case class label isn't in info.
2 years ago
Ross Wightman
e2fc43bc63
Version 0.8.2dev0
2 years ago
Ross Wightman
9a51e4ea2e
Add FlexiViT models and weights, refactoring, push more weights
* push all vision_transformer*.py weights to HF hub
* finalize more pretrained tags for pushed weights
* refactor pos_embed files and module locations, move some pos embed modules to layers
* tweak hf hub helpers to aid bulk uploading and updating
2 years ago
Fredo Guan
10b3f696b4
Davit std (#6)
Separate patch_embed module
2 years ago
Ross Wightman
656e1776de
Convert mobilenetv3 to multi-weight, tweak PretrainedCfg metadata
2 years ago
Fredo Guan
546590c5f5
Merge branch 'rwightman:main' into main
2 years ago
Ross Wightman
6a01101905
Update efficientnet.py and convnext.py to multi-weight, add ImageNet-12k pretrained EfficientNet-B5 and ConvNeXt-Nano.
2 years ago
alec.tu
74d6afb4cd
Add Adan to __init__.py
2 years ago
Fredo Guan
84178fca60
Merge branch 'rwightman:main' into main
2 years ago
Fredo Guan
c43340ddd4
Davit std (#5)
* Update davit.py
* starting point
* Update davit.py
* Update test_models.py
* Update davit.py
* Davit revised (#4)
* Update davit.py
clean up
* Update test_models.py
* Update davit.py
* Update test_models.py
* Update davit.py
2 years ago
Ross Wightman
e7da205345
Fix aa min_max level clamp
2 years ago
Ross Wightman
e3b2f5be0a
Add 3-Augment support to auto_augment.py, clean up weighted choice handling, and allow adjust per op prob via arg string
2 years ago
Ross Wightman
d5e7d6b27e
Merge remote-tracking branch 'origin/main' into refactor-imports
2 years ago
Ross Wightman
cda39b35bd
Add a deprecation phase to module re-org
2 years ago
Fredo Guan
edea013dd1
Davit std (#3)
Davit with all features working
2 years ago
Ross Wightman
7c4ed4d5a4
Add EVA-large models
2 years ago
Fredo Guan
434a03937d
Merge branch 'rwightman:main' into main
2 years ago
Ross Wightman
98047ef5e3
Add EVA FT results, hopefully fix BEiT test failures
2 years ago
Ross Wightman
3cc4d7a894
Fix missing register for 224 eva model
2 years ago
Ross Wightman
eba07b0de7
Add eva models to beit.py
2 years ago
Fredo Guan
3bd96609c8
Davit (#1)
Implement the davit model from https://arxiv.org/abs/2204.03645 and https://github.com/dingmyu/davit
2 years ago
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2 years ago
Ross Wightman
3785c234d7
Remove clip vit models that won't be ft and comment two that aren't uploaded yet
2 years ago
Ross Wightman
f82239b30e
multi-weight branch version -> 0.8.0dev
2 years ago
Ross Wightman
755570e2d6
Rename _pretrained.py -> pretrained.py, not feasible to change the other files to same scheme without breaking uses
2 years ago
Ross Wightman
72cfa57761
Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout.
2 years ago
Ross Wightman
4d5c395160
MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to new pretrained cfgs use
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2 years ago
Ross Wightman
3db4e346e0
Switch TFDS dataset to use INTEGER_ACCURATE jpeg decode by default
2 years ago
Ross Wightman
9da7e3a799
Add crop_mode for pretrained config / image transforms. Add support for dynamo compilation to benchmark/train/validate
2 years ago
Ross Wightman
b2b6285af7
Add two more FT clip weights
2 years ago
Ross Wightman
5895056dc4
Add openai b32 ft
2 years ago
Ross Wightman
9dea5143d5
Adding more clip ft variants
2 years ago
Ross Wightman
444dcba4ad
CLIP B16 12k weights added
2 years ago
Ross Wightman
dff4717cbf
Add clip b16 384x384 finetunes
2 years ago
Ross Wightman
883fa2eeaa
Add fine-tuned B/16 224x224 in1k clip models
2 years ago
Ross Wightman
9a3d2ac2d5
Add latest CLIP ViT fine-tune pretrained configs / model entrypt updates
2 years ago
Ross Wightman
42bbbddee9
Add missing model config
2 years ago
Ross Wightman
def68befa7
Updating vit model defs for multi-weight support trial (vit first). Prepping for CLIP (laion2b and openai) fine-tuned weights.
2 years ago
Ross Wightman
0dadb4a6e9
Initial multi-weight support; old pretrained config handling co-exists with new tags.
2 years ago
hongxin xiang
653bdc7105
Fix comment: https://github.com/rwightman/pytorch-image-models/pull/1564#issuecomment-1326743424
2 years ago
hongxin xiang
bdc9fad638
Fix compatible BUG: QMNIST and ImageNet datasets do not exist in torchvision 0.10.1.
2 years ago
Wauplin
9b114754db
refactor push_to_hub helper
2 years ago
Wauplin
ae0a0db7de
Create repo before cloning with Repository.clone_from
2 years ago
Ross Wightman
803254bb40
Fix spacing misalignment for fast norm path in LayerNorm modules
2 years ago
Ross Wightman
475ecdfa3d
cast env var args for dataset readers to int
2 years ago
Hoan Nguyen
39190f5f44
Remove inplace operators when calculating the loss
Remove inplace operators to overcome the following error when using `asymmetric_loss`
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```
2 years ago
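The failure mode behind this fix can be reproduced in a few lines; `loss_term` is an illustrative reduction of the pattern, not timm's actual `asymmetric_loss`:

```python
import torch

def loss_term(x, inplace: bool):
    # Illustrative reduction, NOT timm's asymmetric_loss. Sigmoid saves its
    # output for the backward pass; mutating that output in-place invalidates
    # it, and autograd raises "one of the variables needed for gradient
    # computation has been modified by an inplace operation" on backward().
    y = torch.sigmoid(x)
    if inplace:
        y.add_(1e-8)   # in-place: bumps the saved tensor's version counter
    else:
        y = y + 1e-8   # out-of-place: allocates a fresh tensor, grads flow
    return (y.log() * x).sum()
```

With `inplace=False` the backward pass succeeds; with `inplace=True` it raises the `RuntimeError` quoted in the commit body.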
Ross Wightman
6635bc3f7d
Merge pull request #1479 from rwightman/script_cleanup
Train / val script enhancements, non-GPU (i.e. CPU) device support, HF datasets support, TFDS/WDS dataloading improvements
2 years ago
Ross Wightman
0e6023f032
Merge pull request #1381 from ChristophReich1996/master
Fix typo in PositionalEncodingFourier
2 years ago
Ross Wightman
66f4af7090
Merge remote-tracking branch 'origin/master' into script_cleanup
2 years ago
Ross Wightman
d3961536c9
comment some debug logs for WDS dataset
2 years ago
Ross Wightman
e9dccc918c
Rename dataset/parsers -> dataset/readers, create_parser to create_reader, etc
2 years ago
Ross Wightman
8c28363dc9
Version 0.7.dev0 for master
2 years ago
nateraw
30bafd7347
🔖 add dev suffix to version tag
2 years ago
Ross Wightman
f67a7ee8bd
Set num_workers in Iterable WDS/TFDS datasets early so sample estimate is correct
2 years ago
Ross Wightman
cea8df3d0c
Version 0.6.12
2 years ago
Ross Wightman
9914f744dc
Add more maxxvit weights, including ConvNeXt conv block based experiments.
2 years ago
Ross Wightman
b1b024dfed
Scheduler update, add v2 factory method, support scheduling on updates instead of just epochs. Add LR to summary csv. Add lr_base scaling calculations to train script. Fix #1168
2 years ago
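The lr_base scaling mentioned above commonly follows the linear (or sqrt) batch-size scaling rule; this is a hedged sketch with illustrative names, not timm's exact train-script signature:

```python
def scale_lr(lr_base, global_batch_size, lr_base_size=256, scale='linear'):
    # Hedged sketch of base-LR scaling: the base LR is defined at a
    # reference batch size (lr_base_size) and scaled linearly, or by
    # sqrt, to the actual global batch size.
    ratio = global_batch_size / lr_base_size
    if scale == 'sqrt':
        ratio = ratio ** 0.5
    return lr_base * ratio
```

For example, a base LR of 0.1 at batch 256 becomes 0.2 at global batch 512 under linear scaling.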
Ross Wightman
4f18d6dc5f
Fix logs in WDS parser
2 years ago
Mohamed Rashad
8fda68aff6
Fix repo id bug
This is to fix issue #1482
2 years ago
Ross Wightman
b8c8550841
Data improvements. Improve train support for in_chans != 3. Add wds dataset support from bits_and_tpu branch w/ fixes and tweaks. TFDS tweaks.
2 years ago
Alex Fafard
7327792f39
update to support pickle based dictionaries
2 years ago
Ross Wightman
1199c5a1a4
clip_laion2b models need 1e-5 eps for LayerNorm
2 years ago
Ross Wightman
87939e6fab
Refactor device handling in scripts, distributed init to be less 'cuda' centric. More device args passed through where needed.
2 years ago
Ross Wightman
c88947ad3d
Add initial Hugging Face Datasets parser impl.
2 years ago
Ross Wightman
e858912e0c
Add brute-force checkpoint remapping option
2 years ago
Ross Wightman
b293dfa595
Add CL SE module
2 years ago
Ross Wightman
2a296412be
Add Adan optimizer
2 years ago
Ross Wightman
5dc4343308
version 0.6.11
2 years ago
Ross Wightman
a383ef99f5
Make huggingface_hub necessary if it's the only source for a pretrained weight
2 years ago
Ross Wightman
33e30f8c8b
Remove layer-decay print
2 years ago
Ross Wightman
e069249a2d
Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7
2 years ago
Ross Wightman
9d65557be3
Fix errant import
2 years ago
Ross Wightman
9709dbaaa9
Adding support for fine-tuned CLIP LAION-2B image tower weights for B/32, L/14, H/14 and g/14. Still WIP
2 years ago
Ross Wightman
a520da9b49
Update tresnet features_info for v2
2 years ago
Ross Wightman
c8ab747bf4
BEiT-V2 checkpoints didn't remove 'module' from weights, adapt checkpoint filter
2 years ago
Ross Wightman
73049dc2aa
Fix typo in dla weight update
2 years ago
Ross Wightman
3599c7e6a4
version 0.6.10
2 years ago
Ross Wightman
e11efa872d
Update a bunch of weights with external links to timm release assets. Fixes issue with *aliyuncs.com returning forbidden. Did pickle scan / verify and re-hash. Add TresNet-V2-L weights.
2 years ago
Ross Wightman
fa8c84eede
Update maxvit_tiny_256 weight to better iter, add coatnet / maxvit / maxxvit model defs for future runs
2 years ago
Ross Wightman
c1b3cea19d
Add maxvit_rmlp_tiny_rw_256 model def and weights w/ 84.2 top-1 @ 256, 84.8 @ 320
2 years ago
Ross Wightman
914544fc81
Add beitv2 224x224 checkpoints from https://github.com/microsoft/unilm/tree/master/beit2
2 years ago
Ross Wightman
dc90816f26
Add `maxvit_tiny_rw_224` weights 83.5 @ 224 and `maxvit_rmlp_pico_rw_256` relpos weights, 80.5 @ 256, 81.3 @ 320
2 years ago
Ross Wightman
f489f02ad1
Make gcvit window size ratio-based to improve resolution-changing support (#1449). Change default init to original.
2 years ago
Ross Wightman
7f1b223c02
Add maxvit_rmlp_nano_rw_256 model def & weights, make window/grid size dynamic wrt img_size by default
2 years ago
Ross Wightman
e6a4361306
pretrained_cfg entry for mvitv2_small_cls
2 years ago