Author | Commit | Message | Age
Ross Wightman | 76de984a5f | Fix some bugs with XLA support, logger, add hacky xla dist launch script since torch.dist.launch doesn't work | 4 years ago
Ross Wightman | 12d9a6d4d2 | First timm.bits commit, add initial abstractions, WIP updates to train, val... some of it working | 4 years ago
Ross Wightman | 2df77ee5cb | Fix torchscript compat and features_only behaviour in GhostNet PR. A few minor formatting changes. Reuse existing layers. | 4 years ago
Ross Wightman | d793deb51a | Merge branch 'master' of https://github.com/iamhankai/pytorch-image-models into iamhankai-master | 4 years ago
Ross Wightman | e685618f45 | Merge pull request #550 from amaarora/wandb (Wandb Support) | 4 years ago
Ross Wightman | f606c45c38 | Add Swin Transformer models from https://github.com/microsoft/Swin-Transformer | 4 years ago
iamhankai | de445e7827 | Add GhostNet | 4 years ago
Ross Wightman | 5a196dddf6 | Update README.md with latest, bump version to 0.4.8 | 4 years ago
Ross Wightman | b3d7580df1 | Update ByoaNet comments. Fix first stem feat chs for ByobNet. | 4 years ago
Ross Wightman | 16f7aa9f54 | Add default_cfg options for min_input_size / fixed_input_size, queries in model registry, and use for testing self-attn models | 4 years ago
Ross Wightman | 4e4b863b15 | Missed norm.py | 4 years ago
Ross Wightman | 7c97e66f7c | Remove commented code, add more consistent seed fn | 4 years ago
Ross Wightman | 364dd6a58e | Merge branch 'master' into byoanet-self_attn | 4 years ago
Ross Wightman | ce62f96d4d | ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments | 4 years ago
Ross Wightman | cd3dc4979f | Fix adabelief imports, remove prints; preserving memory format is the default arg for zeros_like | 4 years ago
Ross Wightman | 21812d33aa | Add prelim efficientnet_v2s weights from 224x224 train, eval 83.3 @ 288. Add eca_nfnet_l1 weights, train at 256, eval 84 @ 320. | 4 years ago
Aman Arora | 5772c55c57 | Make wandb optional | 4 years ago
Aman Arora | f54897cc0b | Make wandb optional rather than required, like huggingface_hub | 4 years ago
Aman Arora | 3f028ebc0f | import wandb in summary.py | 4 years ago
Aman Arora | 624c9b6949 | Log to wandb only if using wandb | 4 years ago
juntang | addfc7c1ac | adabelief | 4 years ago
Ross Wightman | fb896c0b26 | Update some comments re preliminary EfficientNet-V2 assumptions | 4 years ago
Ross Wightman | 2b49ab7a36 | Fix ResNetV2 pretrained classifier issue. Fixes #540 | 4 years ago
Ross Wightman | de9dff933a | EfficientNet-V2S preliminary model def (for experimentation) | 4 years ago
Ross Wightman | 37c71a5609 | Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch. | 4 years ago
Ross Wightman | 2bb65bd875 | Wrong default_cfg pool_size for L1 | 4 years ago
Ross Wightman | bf2ca6bdf4 | Merge jax and original weight init | 4 years ago
Ross Wightman | acbd698c83 | Update README.md. Small tweak to head_dist handling. | 4 years ago
Ross Wightman | 9071568f0e | Add weights for SE NFNet-L0 model, rename nfnet_l0b -> nfnet_l0. 82.75 top-1 @ 288. Add nfnet_l1 model def for training. | 4 years ago
Ross Wightman | c468c47a9c | Add regnety_160 weights from DeiT teacher model, update that and my regnety_032 weights to use higher test size. | 4 years ago
Ross Wightman | 288682796f | Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7 | 4 years ago
Ross Wightman | ea9c9550b2 | Fully move ViT hybrids to their own file, including embedding module. Remove some extra DeiT models that were for benchmarking only. | 4 years ago
Ross Wightman | a5310a3451 | Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update | 4 years ago
Ross Wightman | 7953e5d11a | Fix pos_embed scaling for ViT and num_classes != 1000 for pretrained distilled deit and pit models. Fix #426 and fix #433 | 4 years ago
Ross Wightman | a760a4c3f4 | Some ViT cleanup, merge distilled model with main, fixup torchscript support for distilled models | 4 years ago
Ross Wightman | 0dfc5a66bb | Add PiT model from https://github.com/naver-ai/pit | 4 years ago
Ross Wightman | 51febd869b | Small tweak to tests for tnt model, reorder model imports. | 4 years ago
Ross Wightman | b27a4e0d88 | Merge branch 'master' of https://github.com/contrastive/pytorch-image-models into contrastive-master | 4 years ago
Aman Arora | 6b18061773 | Add GIST to docstring for quick access | 4 years ago
contrastive | de86314655 | Update TNT | 4 years ago
Aman Arora | 92b1db9a79 | update docstrings and add check on and | 4 years ago
Aman Arora | b85be24054 | update to work with fnmatch | 4 years ago
contrastive | cfc15283a4 | Update TNT url | 4 years ago
contrastive | 4a09bc851e | Add TNT model | 4 years ago
Aman Arora | 20626e8387 | Add to extract stats for SPP | 4 years ago
Ross Wightman | cf5fec5047 | Cleanup experimental vit weight init a bit | 4 years ago
Ross Wightman | f42f1df26c | Improve evenness of per-worker split for validation set with TFDS | 4 years ago
Ross Wightman | cbcb76d72c | Should have included Conv2d layers in original weight init. Let's see what the impact is... | 4 years ago
Ross Wightman | 4de57ccf01 | Add weight init scheme that's closer to JAX impl | 4 years ago
Ross Wightman | 14ac4abf74 | Change huggingface hub revision delimiter to '@', add hf_hub reference for eca_nfnet_l0 model as an example. | 4 years ago