Lorenzo Baraldi
e09b4d5c7f
Gradient accumulation included into training script
...
Added parameter iters_to_accumulate to perform gradient accumulation
2 years ago
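The gradient-accumulation pattern this commit adds can be sketched as below. `iters_to_accumulate` is the parameter named in the commit; the mock optimizer and `train` loop are illustrative stand-ins, not the script's actual code:

```python
# Minimal sketch of gradient accumulation, assuming an optimizer-like
# object with step()/zero_grad(). Gradients from `iters_to_accumulate`
# mini-batches are summed before a single parameter update.

class MockOptimizer:
    def __init__(self):
        self.grad = 0.0   # stands in for accumulated parameter gradients
        self.steps = 0    # counts actual parameter updates

    def zero_grad(self):
        self.grad = 0.0

    def step(self):
        self.steps += 1

def train(batch_losses, optimizer, iters_to_accumulate):
    optimizer.zero_grad()
    for i, loss in enumerate(batch_losses):
        # Scale each loss so the accumulated gradient matches the
        # average over the virtual (larger) batch; the += stands in
        # for loss.backward() adding into existing gradients.
        optimizer.grad += loss / iters_to_accumulate
        if (i + 1) % iters_to_accumulate == 0:
            optimizer.step()        # one update per `iters_to_accumulate` batches
            optimizer.zero_grad()
    return optimizer

opt = train([1.0, 2.0, 3.0, 4.0], MockOptimizer(), iters_to_accumulate=2)
print(opt.steps)  # 2 updates for 4 batches
```

The net effect is training with an effective batch size of `batch_size * iters_to_accumulate` without the extra memory.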
Ross Wightman
e7da205345
Fix aa min_max level clamp
2 years ago
Ross Wightman
e3b2f5be0a
Add 3-Augment support to auto_augment.py, clean up weighted choice handling, and allow adjust per op prob via arg string
2 years ago
Ross Wightman
d5e7d6b27e
Merge remote-tracking branch 'origin/main' into refactor-imports
2 years ago
Ross Wightman
cda39b35bd
Add a deprecation phase to module re-org
2 years ago
Ross Wightman
7c4ed4d5a4
Add EVA-large models
2 years ago
Ross Wightman
98047ef5e3
Add EVA FT results, hopefully fix BEiT test failures
2 years ago
Ross Wightman
3cc4d7a894
Fix missing register for 224 eva model
2 years ago
Ross Wightman
eba07b0de7
Add eva models to beit.py
2 years ago
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2 years ago
Ross Wightman
3785c234d7
Remove clip vit models that won't be ft and comment two that aren't uploaded yet
2 years ago
Ross Wightman
f82239b30e
multi-weight branch version -> 0.8.0dev
2 years ago
Ross Wightman
755570e2d6
Rename _pretrained.py -> pretrained.py, not feasible to change the other files to same scheme without breaking uses
2 years ago
Ross Wightman
72cfa57761
Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout.
2 years ago
Ross Wightman
4d5c395160
MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
...
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to new pretrained cfgs use
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2 years ago
Ross Wightman
3db4e346e0
Switch TFDS dataset to use INTEGER_ACCURATE jpeg decode by default
2 years ago
Ross Wightman
9da7e3a799
Add crop_mode for pretrained config / image transforms. Add support for dynamo compilation to benchmark/train/validate
2 years ago
Ross Wightman
b2b6285af7
Add two more FT clip weights
2 years ago
Ross Wightman
5895056dc4
Add openai b32 ft
2 years ago
Ross Wightman
9dea5143d5
Adding more clip ft variants
2 years ago
Ross Wightman
444dcba4ad
CLIP B16 12k weights added
2 years ago
Ross Wightman
dff4717cbf
Add clip b16 384x384 finetunes
2 years ago
Ross Wightman
883fa2eeaa
Add fine-tuned B/16 224x224 in1k clip models
2 years ago
Ross Wightman
9a3d2ac2d5
Add latest CLIP ViT fine-tune pretrained configs / model entrypt updates
2 years ago
Ross Wightman
42bbbddee9
Add missing model config
2 years ago
Ross Wightman
def68befa7
Updating vit model defs for multi-weight support trial (vit first). Prepping for CLIP (laion2b and openai) fine-tuned weights.
2 years ago
Ross Wightman
0dadb4a6e9
Initial multi-weight support, handled so old pretrained config handling co-exists with new tags.
2 years ago
hongxin xiang
653bdc7105
Fix comment: https://github.com/rwightman/pytorch-image-models/pull/1564#issuecomment-1326743424
2 years ago
hongxin xiang
bdc9fad638
Fix compatible BUG: QMNIST and ImageNet datasets do not exist in torchvision 0.10.1.
2 years ago
Wauplin
9b114754db
refactor push_to_hub helper
2 years ago
Wauplin
ae0a0db7de
Create repo before cloning with Repository.clone_from
2 years ago
Ross Wightman
803254bb40
Fix spacing misalignment for fast norm path in LayerNorm modules
2 years ago
Ross Wightman
475ecdfa3d
cast env var args for dataset readers to int
2 years ago
Hoan Nguyen
39190f5f44
Remove inplace operators when calculating the loss
...
Remove inplace operators to overcome the following error when using `asymmetric_loss`
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```
2 years ago
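The fix amounts to replacing in-place updates (e.g. `loss *= w`), which mutate a value autograd still needs for the backward pass, with out-of-place ones (`loss = loss * w`). The aliasing difference behind the error can be shown in plain Python, with a list standing in for a tensor saved for backward (no torch here, just the mutation semantics):

```python
# In-place mutation changes the object that other references still point
# to; out-of-place construction rebinds a name and leaves the original
# untouched -- which is what gradient computation requires.

def scale_inplace(values, w):
    for i in range(len(values)):
        values[i] *= w               # mutates the caller's list
    return values

def scale_out_of_place(values, w):
    return [v * w for v in values]   # builds a new list

saved = [1.0, 2.0, 3.0]              # think: a tensor saved for backward

scaled = scale_out_of_place(saved, 2.0)
print(saved)    # [1.0, 2.0, 3.0] -- original preserved
print(scaled)   # [2.0, 4.0, 6.0]

scale_inplace(saved, 2.0)
print(saved)    # [2.0, 4.0, 6.0] -- original clobbered: the failure mode
```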
Ross Wightman
6635bc3f7d
Merge pull request #1479 from rwightman/script_cleanup
...
Train / val script enhancements, non-GPU (ie CPU) device support, HF datasets support, TFDS/WDS dataloading improvements
2 years ago
Ross Wightman
0e6023f032
Merge pull request #1381 from ChristophReich1996/master
...
Fix typo in PositionalEncodingFourier
2 years ago
Ross Wightman
66f4af7090
Merge remote-tracking branch 'origin/master' into script_cleanup
2 years ago
Ross Wightman
d3961536c9
comment some debug logs for WDS dataset
2 years ago
Ross Wightman
e9dccc918c
Rename dataset/parsers -> dataset/readers, create_parser to create_reader, etc
2 years ago
Ross Wightman
8c28363dc9
Version 0.7.dev0 for master
2 years ago
nateraw
30bafd7347
🔖 add dev suffix to version tag
2 years ago
Ross Wightman
f67a7ee8bd
Set num_workers in Iterable WDS/TFDS datasets early so sample estimate is correct
2 years ago
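The reason `num_workers` must be known early: each worker of an iterable dataset sees only a shard of the samples, so any per-worker length estimate depends on the worker count. A hedged sketch of the arithmetic (function name and padding choice are illustrative, not timm's API):

```python
import math

def samples_per_worker(total_samples, num_workers):
    """Estimate how many samples each dataloader worker of an iterable
    dataset yields. Rounding up pads workers to a common length so the
    overall epoch estimate is not undercounted."""
    if num_workers <= 1:
        return total_samples
    return math.ceil(total_samples / num_workers)

print(samples_per_worker(1000, 3))  # 334
```

If `num_workers` is set after the estimate is computed, the per-worker count is wrong by a factor of the worker count, which is the bug this commit avoids.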
Ross Wightman
cea8df3d0c
Version 0.6.12
2 years ago
Ross Wightman
9914f744dc
Add more maxxvit weights including ConvNeXt conv block based experiments.
2 years ago
Ross Wightman
b1b024dfed
Scheduler update, add v2 factory method, support scheduling on updates instead of just epochs. Add LR to summary csv. Add lr_base scaling calculations to train script. Fix #1168
2 years ago
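The `lr_base` scaling calculation mentioned here typically follows the linear (or square-root) scaling rule: the effective learning rate is derived from a base LR defined at a reference batch size. A minimal sketch under that assumption (names are illustrative, not the train script's exact flags):

```python
import math

def scale_lr(lr_base, batch_size, base_size=256, method="linear"):
    """Scale a base learning rate defined at `base_size` to `batch_size`.

    'linear': lr = lr_base * batch_size / base_size
    'sqrt':   lr = lr_base * sqrt(batch_size / base_size)
    """
    ratio = batch_size / base_size
    if method == "sqrt":
        ratio = math.sqrt(ratio)
    return lr_base * ratio

print(scale_lr(0.1, 512))                  # 0.2 (linear: 2x batch -> 2x LR)
print(scale_lr(0.1, 1024, method="sqrt"))  # 0.2 (sqrt of the 4x ratio)
```

The sqrt rule is often preferred for adaptive optimizers, the linear rule for SGD.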
Ross Wightman
4f18d6dc5f
Fix logs in WDS parser
2 years ago
Mohamed Rashad
8fda68aff6
Fix repo id bug
...
This fixes issue #1482
2 years ago
Ross Wightman
b8c8550841
Data improvements. Improve train support for in_chans != 3. Add wds dataset support from bits_and_tpu branch w/ fixes and tweaks. TFDS tweaks.
2 years ago
Alex Fafard
7327792f39
update to support pickle based dictionaries
2 years ago
Ross Wightman
1199c5a1a4
clip_laion2b models need 1e-5 eps for LayerNorm
2 years ago
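The eps bump matters because LayerNorm divides by `sqrt(var + eps)`: with low-variance activations the variance term shrinks and eps dominates the denominator, so 1e-5 vs a smaller default changes the output noticeably. A dependency-free sketch of the computation (this is the standard formula, not timm's implementation):

```python
import math

def layer_norm(x, eps=1e-5):
    """Normalize a 1-D list to zero mean / unit variance, with eps added
    to the variance for numerical stability (no affine scale/shift)."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    return [(v - mean) / math.sqrt(var + eps) for v in x]

out = layer_norm([1.0, 2.0, 3.0, 4.0])
print([round(v, 4) for v in out])

# With constant input the variance is zero; eps alone keeps the
# division well-defined instead of producing NaN/inf.
print(layer_norm([5.0, 5.0, 5.0]))  # all exactly 0.0
```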