Author | Commit | Message | Age
Ross Wightman | 026430c083 | Merge branch 'master' of https://github.com/morizin/pytorch-image-models-1 into morizin-master | 4 years ago
Ross Wightman | c03e45cbf3 | Merge branch 'mrT23-master' | 4 years ago
Ross Wightman | a0492e3b48 | A few miil weights naming tweaks to improve compat with model registry and filtering wildcards. | 4 years ago
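The registry wildcard filtering mentioned in a0492e3b48 is exposed through `timm.list_models`; a minimal sketch, assuming the renamed MIIL weights contain 'miil' in their registered model names:

```python
import timm

# List registered model names matching a wildcard pattern.
# '*miil*' is an assumed example pattern for the MIIL-trained weights
# touched by the naming tweak above.
print(timm.list_models('*miil*'))

# The same filter can be restricted to models that ship pretrained weights.
print(timm.list_models('*miil*', pretrained=True))
```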
talrid | 8c1f03e56c | comment | 4 years ago
talrid | 19e1b67a84 | old spaces | 4 years ago
talrid | a443865876 | update naming and scores | 4 years ago
talrid | cf0e371594 | 84_0 | 4 years ago
talrid | 0968bdeca3 | vit, tresnet and mobilenetV3 ImageNet-21K-P weights | 4 years ago
mrT23 | b81cd75eea | Merge pull request #3 from rwightman/master (merge base) | 4 years ago
morizin | 1e3b6d4dfc | Update __init__.py | 4 years ago
morizin | fd022fd6a2 | Update __init__.py | 4 years ago
morizin | 06841427cd | Add files via upload | 4 years ago
morizin | c2d5087eae | Add files via upload | 4 years ago
Ross Wightman | 9a1bd358c7 | Merge pull request #571 from normster/augmix-fix (Enable uniform augmentation magnitude sampling and set AugMix default) | 4 years ago
Norman Mu | 79640fcc1f | Enable uniform augmentation magnitude sampling and set AugMix default | 4 years ago
Ross Wightman | c1cf9712fc | Add updated EfficientNet-V2S weights, 83.8 @ 384x384 test. Add PyTorch trained EfficientNet-B4 weights, 83.4 @ 384x384 test. Tweak non TF EfficientNet B1-B4 train/test res scaling. | 4 years ago
Ross Wightman | e8a64fb881 | Test input size for efficientnet_v2s was wrong in last results run | 4 years ago
Ross Wightman | a04427d8ce | Add _in22k to bulk validate filter | 4 years ago
Ross Wightman | af647c10b3 | Update results csv, includes latest transformer models, swin, pit, tnt... run on pytorch 1.8.1 cuda 10.2 (11.x rel crashes). | 4 years ago
Ross Wightman | e15e68d881 | Fix #566, summary.csv writing to pwd on local_rank != 0. Tweak benchmark mem handling to see if it reduces likelihood of 'bad' exceptions on OOM. | 4 years ago
Ross Wightman | 1b0c8e7b01 | Merge branch 'iamhankai-master' | 4 years ago
Ross Wightman | 2df77ee5cb | Fix torchscript compat and features_only behaviour in GhostNet PR. A few minor formatting changes. Reuse existing layers. | 4 years ago
Ross Wightman | d793deb51a | Merge branch 'master' of https://github.com/iamhankai/pytorch-image-models into iamhankai-master | 4 years ago
Ross Wightman | e685618f45 | Merge pull request #550 from amaarora/wandb (Wandb Support) | 4 years ago
Ross Wightman | 277a9a78f9 | Fix unit test filter update. | 4 years ago
Ross Wightman | 858728799c | Update README again. Add 101x3 BiT-M model to CI ignore since it's starting to fail in GitHub runners. | 4 years ago
Ross Wightman | f606c45c38 | Add Swin Transformer models from https://github.com/microsoft/Swin-Transformer | 4 years ago
iamhankai | de445e7827 | Add GhostNet | 4 years ago
Ross Wightman | 5a196dddf6 | Update README.md with latest, bump version to 0.4.8 | 4 years ago
Ross Wightman | ce6585f533 | Merge pull request #556 from rwightman/byoanet-self_attn (ByoaNet - Self Attn Networks - Bottleneck Transformers, Lambda ResNet, HaloNet) | 4 years ago
Ross Wightman | b3d7580df1 | Update ByoaNet comments. Fix first stem feat chs for ByobNet. | 4 years ago
Ross Wightman | 16f7aa9f54 | Add default_cfg options for min_input_size / fixed_input_size, queries in model registry, and use for testing self-attn models | 4 years ago
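A hedged sketch of how the default_cfg fields from 16f7aa9f54 might be read at runtime; the field names come from the commit message, while the model name and the fallback logic are assumptions:

```python
import timm

# Assumed example model; the commit targeted self-attn models, but any
# registered timm model exposes a default_cfg metadata dict.
model = timm.create_model('vit_base_patch16_224', pretrained=False)
cfg = model.default_cfg

# Fall back to the standard input_size when the newer keys are absent.
min_size = cfg.get('min_input_size', cfg.get('input_size'))
fixed = cfg.get('fixed_input_size', False)
print(min_size, fixed)
```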
Ross Wightman | 4e4b863b15 | Missed norm.py | 4 years ago
Ross Wightman | 7c97e66f7c | Remove commented code, add more consistent seed fn | 4 years ago
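The "more consistent seed fn" from 7c97e66f7c amounts to seeding every RNG source from a single value; a minimal sketch of that idea, where the function name and rank offset are assumptions rather than timm's exact helper:

```python
import random

import numpy as np
import torch


def random_seed(seed: int = 42, rank: int = 0) -> None:
    """Seed Python, NumPy and PyTorch RNGs consistently.

    Offsetting by the (distributed) rank keeps workers from drawing
    identical augmentation streams. This is an illustrative sketch,
    not necessarily the exact timm implementation.
    """
    torch.manual_seed(seed + rank)
    np.random.seed(seed + rank)
    random.seed(seed + rank)
```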
Ross Wightman | 364dd6a58e | Merge branch 'master' into byoanet-self_attn | 4 years ago
Ross Wightman | ce62f96d4d | ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments | 4 years ago
Ross Wightman | cd3dc4979f | Fix adabelief imports, remove prints, preserve memory format is the default arg for zeros_like | 4 years ago
Ross Wightman | 21812d33aa | Add prelim efficientnet_v2s weights from 224x224 train, eval 83.3 @ 288. Add eca_nfnet_l1 weights, train at 256, eval 84 @ 320. | 4 years ago
Michael Monashev | 0be1fa4793 | Argument description fixed | 4 years ago
Aman Arora | 5772c55c57 | Make wandb optional | 4 years ago
Aman Arora | f54897cc0b | make wandb not required but rather optional as huggingface_hub | 4 years ago
Aman Arora | f13f7508a9 | Keep changes to minimal and use args.experiment as wandb project name if it exists | 4 years ago
Aman Arora | f8bb13f640 | Default project name to None | 4 years ago
Aman Arora | 8db8ff346f | add wandb to requirements.txt | 4 years ago
Aman Arora | 3f028ebc0f | import wandb in summary.py | 4 years ago
Aman Arora | a9e5d9e5ad | log loss as before | 4 years ago
Aman Arora | 624c9b6949 | log to wandb only if using wandb | 4 years ago
Aman Arora | 00c8e0b8bd | Make use of wandb configurable | 4 years ago
Aman Arora | 8e6fb861e4 | Add wandb support | 4 years ago
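Taken together, the wandb commits above describe an optional-dependency pattern: import if available, initialise only when requested, and guard every log call. A minimal sketch of that pattern, assuming a log_wandb flag and reusing args.experiment as the project name per f13f7508a9:

```python
import logging

try:
    import wandb
    has_wandb = True
except ImportError:
    has_wandb = False  # wandb stays optional, like huggingface_hub

_logger = logging.getLogger(__name__)


def setup_wandb(args):
    # Initialise only when the user asked for it and wandb is installed.
    # args.log_wandb is an assumed flag name for this sketch.
    if getattr(args, 'log_wandb', False):
        if has_wandb:
            # Per the commits above, args.experiment doubles as the project name.
            wandb.init(project=args.experiment, config=args)
        else:
            _logger.warning("wandb requested but not installed; pip install wandb")


def log_metrics(args, metrics: dict):
    # Log to wandb only if wandb is actually in use.
    if getattr(args, 'log_wandb', False) and has_wandb:
        wandb.log(metrics)
```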
Ross Wightman | 779107b693 | Merge pull request #542 from juntang-zhuang/adabelief (Add Adabelief Optimizer) | 4 years ago