Fredo Guan
3bd96609c8
DaViT (#1)
...
Implement the DaViT model from https://arxiv.org/abs/2204.03645 and https://github.com/dingmyu/davit
2 years ago
Ross Wightman
3785c234d7
Remove CLIP ViT models that won't be fine-tuned and comment out two that aren't uploaded yet
2 years ago
Ross Wightman
f82239b30e
multi-weight branch version -> 0.8.0dev
2 years ago
Ross Wightman
755570e2d6
Rename _pretrained.py -> pretrained.py; not feasible to change the other files to the same scheme without breaking uses
2 years ago
Ross Wightman
72cfa57761
Add ported Tensorflow MaxVit weights. Add a few more CLIP ViT fine-tunes. Tweak some model tag names. Improve model tag name sorting. Update HF hub push config layout.
2 years ago
Ross Wightman
4d5c395160
MaxVit, ViT, ConvNeXt, and EfficientNet-v2 updates
...
* Add support for TF weights and modelling specifics to MaxVit (testing ported weights)
* More fine-tuned CLIP ViT configs
* ConvNeXt and MaxVit updated to use the new pretrained cfgs
* EfficientNetV2, MaxVit and ConvNeXt high res models use squash crop/resize
2 years ago
Ross Wightman
3db4e346e0
Switch TFDS dataset to use INTEGER_ACCURATE jpeg decode by default
2 years ago
Ross Wightman
9da7e3a799
Add crop_mode for pretrained config / image transforms. Add support for dynamo compilation to benchmark/train/validate
2 years ago
Ross Wightman
b2b6285af7
Add two more fine-tuned CLIP weights
2 years ago
Ross Wightman
5895056dc4
Add OpenAI B/32 fine-tune
2 years ago
Ross Wightman
9dea5143d5
Add more CLIP fine-tune variants
2 years ago
Ross Wightman
444dcba4ad
CLIP B16 12k weights added
2 years ago
Ross Wightman
dff4717cbf
Add CLIP B/16 384x384 fine-tunes
2 years ago
Ross Wightman
883fa2eeaa
Add fine-tuned B/16 224x224 in1k CLIP models
2 years ago
Ross Wightman
9a3d2ac2d5
Add latest CLIP ViT fine-tune pretrained configs / model entrypoint updates
2 years ago
Ross Wightman
42bbbddee9
Add missing model config
2 years ago
Ross Wightman
def68befa7
Update ViT model defs for multi-weight support trial (ViT first). Prep for CLIP (laion2b and openai) fine-tuned weights.
2 years ago
Ross Wightman
0dadb4a6e9
Initial multi-weight support, handled so the old pretrained config handling co-exists with the new tags.
2 years ago
hongxin xiang
653bdc7105
Fix comment: https://github.com/rwightman/pytorch-image-models/pull/1564#issuecomment-1326743424
2 years ago
hongxin xiang
bdc9fad638
Fix compatibility bug: QMNIST and ImageNet datasets do not exist in torchvision 0.10.1.
2 years ago
Wauplin
9b114754db
Refactor push_to_hub helper
2 years ago
Wauplin
ae0a0db7de
Create repo before cloning with Repository.clone_from
2 years ago
Ross Wightman
803254bb40
Fix spacing misalignment for fast norm path in LayerNorm modules
2 years ago
Ross Wightman
475ecdfa3d
Cast env var args for dataset readers to int
2 years ago
Hoan Nguyen
39190f5f44
Remove inplace operators when calculating the loss
...
Remove inplace operators to overcome the following error when using `asymmetric_loss`
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation
```
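For context, a minimal standalone sketch of the failure mode (hypothetical tensors, not the actual `asymmetric_loss` code): mutating a tensor in-place after autograd has saved it for the backward pass raises the error quoted above, while the out-of-place equivalent runs cleanly.
```
import torch

# In-place variant: sigmoid saves its output p for the backward pass,
# so editing p in place invalidates the saved tensor.
x = torch.randn(4, requires_grad=True)
p = torch.sigmoid(x)
loss = (p * 2).sum()
p.add_(1)           # in-place edit of a tensor saved for backward
# loss.backward()   # uncommenting this reproduces the RuntimeError above

# Out-of-place variant: p + 1 allocates a new tensor, leaving the saved one intact.
x = torch.randn(4, requires_grad=True)
p = torch.sigmoid(x)
loss = ((p + 1) * 2).sum()
loss.backward()     # works
```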
2 years ago
Ross Wightman
6635bc3f7d
Merge pull request #1479 from rwightman/script_cleanup
...
Train / val script enhancements, non-GPU (i.e. CPU) device support, HF datasets support, TFDS/WDS dataloading improvements
2 years ago
Ross Wightman
0e6023f032
Merge pull request #1381 from ChristophReich1996/master
...
Fix typo in PositionalEncodingFourier
2 years ago
Ross Wightman
66f4af7090
Merge remote-tracking branch 'origin/master' into script_cleanup
2 years ago
Ross Wightman
d3961536c9
Comment out some debug logs for WDS dataset
2 years ago
Ross Wightman
e9dccc918c
Rename dataset/parsers -> dataset/readers, create_parser to create_reader, etc
2 years ago
Ross Wightman
8c28363dc9
Version 0.7.dev0 for master
2 years ago
nateraw
30bafd7347
🔖 add dev suffix to version tag
2 years ago
Ross Wightman
f67a7ee8bd
Set num_workers in Iterable WDS/TFDS datasets early so sample estimate is correct
2 years ago
Ross Wightman
cea8df3d0c
Version 0.6.12
2 years ago
Ross Wightman
9914f744dc
Add more maxxvit weights, including ConvNeXt conv block based experiments.
2 years ago
Ross Wightman
b1b024dfed
Scheduler update, add v2 factory method, support scheduling on updates instead of just epochs. Add LR to summary csv. Add lr_base scaling calculations to train script. Fix #1168
2 years ago
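A brief sketch of the lr_base scaling mentioned in the commit above (function and argument names here are illustrative, assuming the common linear/sqrt batch-size scaling rule rather than quoting the train script): the base LR is defined at a reference global batch size and scaled up or down with the actual one.
```
# Illustrative helper: scale a base LR defined at lr_base_size up to the
# actual global batch size, linearly or by square root.
def scale_lr(lr_base, global_batch_size, lr_base_size=256, scale="linear"):
    batch_ratio = global_batch_size / lr_base_size
    if scale == "sqrt":
        batch_ratio = batch_ratio ** 0.5
    return lr_base * batch_ratio

print(scale_lr(1e-3, 1024))                # 0.004 with linear scaling
print(scale_lr(1e-3, 1024, scale="sqrt"))  # 0.002 with sqrt scaling
```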
Ross Wightman
4f18d6dc5f
Fix logs in WDS parser
2 years ago
Mohamed Rashad
8fda68aff6
Fix repo id bug
...
This fixes issue #1482
2 years ago
Ross Wightman
b8c8550841
Data improvements. Improve train support for in_chans != 3. Add WDS dataset support from bits_and_tpu branch w/ fixes and tweaks. TFDS tweaks.
2 years ago
Alex Fafard
7327792f39
Update to support pickle-based dictionaries
2 years ago
Ross Wightman
1199c5a1a4
clip_laion2b models need 1e-5 eps for LayerNorm
2 years ago
Ross Wightman
87939e6fab
Refactor device handling in scripts, distributed init to be less 'cuda' centric. More device args passed through where needed.
2 years ago
Ross Wightman
c88947ad3d
Add initial Hugging Face Datasets parser impl.
2 years ago
Ross Wightman
e858912e0c
Add brute-force checkpoint remapping option
2 years ago
Ross Wightman
b293dfa595
Add CL SE module
2 years ago
Ross Wightman
2a296412be
Add Adan optimizer
2 years ago
Ross Wightman
5dc4343308
version 0.6.11
2 years ago
Ross Wightman
a383ef99f5
Make huggingface_hub necessary if it's the only source for a pretrained weight
2 years ago
Ross Wightman
33e30f8c8b
Remove layer-decay print
2 years ago
Ross Wightman
e069249a2d
Add hf hub entries for laion2b clip models, add huggingface_hub dependency, update some setup/reqs, torch >= 1.7
2 years ago