| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| morizin | 06841427cd | Add files via upload | 4 years ago |
| morizin | c2d5087eae | Add files via upload | 4 years ago |
| Ross Wightman | 9a1bd358c7 | Merge pull request #571 from normster/augmix-fix: Enable uniform augmentation magnitude sampling and set AugMix default | 4 years ago |
| Norman Mu | 79640fcc1f | Enable uniform augmentation magnitude sampling and set AugMix default | 4 years ago |
| Ross Wightman | c1cf9712fc | Add updated EfficientNet-V2S weights, 83.8 @ 384x384 test. Add PyTorch trained EfficientNet-B4 weights, 83.4 @ 384x384 test. Tweak non-TF EfficientNet B1-B4 train/test res scaling. | 4 years ago |
| Ross Wightman | e8a64fb881 | Test input size for efficientnet_v2s was wrong in last results run | 4 years ago |
| Ross Wightman | a04427d8ce | Add _in22k to bulk validate filter | 4 years ago |
| Ross Wightman | af647c10b3 | Update results csv; includes latest transformer models (swin, pit, tnt, ...) run on PyTorch 1.8.1 / CUDA 10.2 (11.x rel crashes). | 4 years ago |
| Ross Wightman | e15e68d881 | Fix #566, summary.csv writing to pwd on local_rank != 0. Tweak benchmark mem handling to see if it reduces likelihood of 'bad' exceptions on OOM. | 4 years ago |
| Ross Wightman | 1b0c8e7b01 | Merge branch 'iamhankai-master' | 4 years ago |
| Ross Wightman | 2df77ee5cb | Fix torchscript compat and features_only behaviour in GhostNet PR. A few minor formatting changes. Reuse existing layers. | 4 years ago |
| Ross Wightman | d793deb51a | Merge branch 'master' of https://github.com/iamhankai/pytorch-image-models into iamhankai-master | 4 years ago |
| Ross Wightman | e685618f45 | Merge pull request #550 from amaarora/wandb: Wandb Support | 4 years ago |
| Ross Wightman | 277a9a78f9 | Fix unit test filter update. | 4 years ago |
| Ross Wightman | 858728799c | Update README again. Add 101x3 BiT-M model to CI ignore since it's starting to fail in GitHub runners. | 4 years ago |
| Ross Wightman | f606c45c38 | Add Swin Transformer models from https://github.com/microsoft/Swin-Transformer | 4 years ago |
| iamhankai | de445e7827 | Add GhostNet | 4 years ago |
| Ross Wightman | 5a196dddf6 | Update README.md with latest, bump version to 0.4.8 | 4 years ago |
| Ross Wightman | ce6585f533 | Merge pull request #556 from rwightman/byoanet-self_attn: ByoaNet - Self Attn Networks - Bottleneck Transformers, Lambda ResNet, HaloNet | 4 years ago |
| Ross Wightman | b3d7580df1 | Update ByoaNet comments. Fix first stem feat chs for ByobNet. | 4 years ago |
| Ross Wightman | 16f7aa9f54 | Add default_cfg options for min_input_size / fixed_input_size, queries in model registry, and use for testing self-attn models | 4 years ago |
| Ross Wightman | 4e4b863b15 | Missed norm.py | 4 years ago |
| Ross Wightman | 7c97e66f7c | Remove commented code, add more consistent seed fn | 4 years ago |
| Ross Wightman | 364dd6a58e | Merge branch 'master' into byoanet-self_attn | 4 years ago |
| Ross Wightman | ce62f96d4d | ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments | 4 years ago |
| Ross Wightman | cd3dc4979f | Fix adabelief imports, remove prints; preserve memory format is the default arg for zeros_like | 4 years ago |
| Ross Wightman | 21812d33aa | Add prelim efficientnet_v2s weights from 224x224 train, eval 83.3 @ 288. Add eca_nfnet_l1 weights, train at 256, eval 84 @ 320. | 4 years ago |
| Aman Arora | 5772c55c57 | Make wandb optional | 4 years ago |
| Aman Arora | f54897cc0b | Make wandb not required but optional, like huggingface_hub | 4 years ago |
| Aman Arora | f13f7508a9 | Keep changes minimal and use args.experiment as wandb project name if it exists | 4 years ago |
| Aman Arora | f8bb13f640 | Default project name to None | 4 years ago |
| Aman Arora | 8db8ff346f | Add wandb to requirements.txt | 4 years ago |
| Aman Arora | 3f028ebc0f | Import wandb in summary.py | 4 years ago |
| Aman Arora | a9e5d9e5ad | Log loss as before | 4 years ago |
| Aman Arora | 624c9b6949 | Log to wandb only if using wandb | 4 years ago |
| Aman Arora | 00c8e0b8bd | Make use of wandb configurable | 4 years ago |
| Aman Arora | 8e6fb861e4 | Add wandb support | 4 years ago |
| Ross Wightman | 779107b693 | Merge pull request #542 from juntang-zhuang/adabelief: Add AdaBelief Optimizer | 4 years ago |
| Juntang Zhuang | 74366f733c | Delete distributed_train_adabelief.sh | 4 years ago |
| Juntang Zhuang | 1d848f409a | Delete args.yaml | 4 years ago |
| juntang | addfc7c1ac | adabelief | 4 years ago |
| Ross Wightman | fb896c0b26 | Update some comments re preliminary EfficientNet-V2 assumptions | 4 years ago |
| Ross Wightman | 2b49ab7a36 | Fix ResNetV2 pretrained classifier issue. Fixes #540 | 4 years ago |
| Ross Wightman | de9dff933a | EfficientNet-V2S preliminary model def (for experimentation) | 4 years ago |
| Ross Wightman | d5ed58d623 | Merge pull request #533 from rwightman/pit_and_vit_update: Addition of PiT models and update/cleanup of ViT, new NFNet weight, TFDS wrapper fix, few misc fixes/updates | 4 years ago |
| Ross Wightman | 37c71a5609 | Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch. | 4 years ago |
| Ross Wightman | 2bb65bd875 | Wrong default_cfg pool_size for L1 | 4 years ago |
| Ross Wightman | bf2ca6bdf4 | Merge jax and original weight init | 4 years ago |
| Ross Wightman | acbd698c83 | Update README.md with updates. Small tweak to head_dist handling. | 4 years ago |
| Ross Wightman | 9071568f0e | Add weights for SE NFNet-L0 model, rename nfnet_l0b -> nfnet_l0. 82.75 top-1 @ 288. Add nfnet_l1 model def for training. | 4 years ago |