364dd6a58e  Merge branch 'master' into byoanet-self_attn  [Ross Wightman, 4 years ago]
ce62f96d4d  ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments  [Ross Wightman, 4 years ago]
cd3dc4979f  Fix adabelief imports, remove prints, preserve memory format is the default arg for zeros_like  [Ross Wightman, 4 years ago]
21812d33aa  Add prelim efficientnet_v2s weights from 224x224 train, eval 83.3 @ 288. Add eca_nfnet_l1 weights, train at 256, eval 84 @ 320.  [Ross Wightman, 4 years ago]
0be1fa4793  Argument description fixed  [Michael Monashev, 4 years ago]
5772c55c57  Make wandb optional  [Aman Arora, 4 years ago]
f54897cc0b  Make wandb optional rather than required, as with huggingface_hub  [Aman Arora, 4 years ago]
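The two commits above describe making wandb an optional dependency: training should work whether or not the package is installed. A minimal sketch of that pattern, assuming hypothetical names (`HAS_WANDB`, `log_metrics`) that are not taken from the repository:

```python
# Optional-dependency guard: import wandb if available, otherwise degrade
# gracefully. HAS_WANDB and log_metrics are illustrative names only.
try:
    import wandb
    HAS_WANDB = True
except ImportError:
    wandb = None
    HAS_WANDB = False


def log_metrics(metrics: dict, use_wandb: bool = False) -> bool:
    """Log to wandb only when it is both requested and installed.

    Returns True if the metrics were actually sent to wandb.
    """
    if use_wandb and HAS_WANDB:
        wandb.log(metrics)
        return True
    return False
```

Callers can then gate logging on a flag (e.g. a `--log-wandb` style option) without a hard import failure when wandb is absent.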
f13f7508a9  Keep changes minimal and use args.experiment as wandb project name if it exists  [Aman Arora, 4 years ago]
f8bb13f640  Default project name to None  [Aman Arora, 4 years ago]
8db8ff346f  Add wandb to requirements.txt  [Aman Arora, 4 years ago]
3f028ebc0f  Import wandb in summary.py  [Aman Arora, 4 years ago]
a9e5d9e5ad  Log loss as before  [Aman Arora, 4 years ago]
624c9b6949  Log to wandb only if using wandb  [Aman Arora, 4 years ago]
00c8e0b8bd  Make use of wandb configurable  [Aman Arora, 4 years ago]
8e6fb861e4  Add wandb support  [Aman Arora, 4 years ago]
779107b693  Merge pull request #542 from juntang-zhuang/adabelief: Add Adabelief Optimizer  [Ross Wightman, 4 years ago]
74366f733c  Delete distributed_train_adabelief.sh  [Juntang Zhuang, 4 years ago]
1d848f409a  Delete args.yaml  [Juntang Zhuang, 4 years ago]
addfc7c1ac  adabelief  [juntang, 4 years ago]
fb896c0b26  Update some comments re preliminary EfficientNet-V2 assumptions  [Ross Wightman, 4 years ago]
2b49ab7a36  Fix ResNetV2 pretrained classifier issue. Fixes #540  [Ross Wightman, 4 years ago]
de9dff933a  EfficientNet-V2S preliminary model def (for experimentation)  [Ross Wightman, 4 years ago]
d5ed58d623  Merge pull request #533 from rwightman/pit_and_vit_update: Addition of PiT models and update/cleanup of ViT, new NFNet weight, TFDS wrapper fix, few misc fixes/updates  [Ross Wightman, 4 years ago]
37c71a5609  Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch.  [Ross Wightman, 4 years ago]
2bb65bd875  Fix wrong default_cfg pool_size for L1  [Ross Wightman, 4 years ago]
bf2ca6bdf4  Merge jax and original weight init  [Ross Wightman, 4 years ago]
acbd698c83  Update README.md. Small tweak to head_dist handling.  [Ross Wightman, 4 years ago]
9071568f0e  Add weights for SE NFNet-L0 model, rename nfnet_l0b -> nfnet_l0. 82.75 top-1 @ 288. Add nfnet_l1 model def for training.  [Ross Wightman, 4 years ago]
c468c47a9c  Add regnety_160 weights from DeiT teacher model, update that and my regnety_032 weights to use higher test size.  [Ross Wightman, 4 years ago]
288682796f  Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7  [Ross Wightman, 4 years ago]
ea9c9550b2  Fully move ViT hybrids to their own file, including embedding module. Remove some extra DeiT models that were for benchmarking only.  [Ross Wightman, 4 years ago]
a5310a3451  Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update  [Ross Wightman, 4 years ago]
7953e5d11a  Fix pos_embed scaling for ViT and num_classes != 1000 for pretrained distilled deit and pit models. Fix #426 and fix #433  [Ross Wightman, 4 years ago]
a760a4c3f4  Some ViT cleanup, merge distilled model with main, fix up torchscript support for distilled models  [Ross Wightman, 4 years ago]
0dfc5a66bb  Add PiT model from https://github.com/naver-ai/pit  [Ross Wightman, 4 years ago]
1ad1645a50  Merge branch 'contrastive-master'  [Ross Wightman, 4 years ago]
51febd869b  Small tweak to tests for tnt model, reorder model imports.  [Ross Wightman, 4 years ago]
b27a4e0d88  Merge branch 'master' of https://github.com/contrastive/pytorch-image-models into contrastive-master  [Ross Wightman, 4 years ago]
2319cbbff2  Merge pull request #525 from amaarora/spp: Add `ActivationStatsHook` to allow extracting activation stats for Signal Propagation Plots  [Ross Wightman, 4 years ago]
6b18061773  Add GIST to docstring for quick access  [Aman Arora, 4 years ago]
809271b0f3  Update test_models.py  [contrastive, 4 years ago]
de86314655  Update TNT  [contrastive, 4 years ago]
92b1db9a79  update docstrings and add check on and  [Aman Arora, 4 years ago]
b85be24054  Update to work with fnmatch  [Aman Arora, 4 years ago]
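Commit b85be24054 updated the activation-stats hook selection to work with `fnmatch`, i.e. selecting layers by shell-style wildcard patterns rather than exact names. A small sketch of that idea, with a hypothetical helper name (the repository's actual API may differ):

```python
from fnmatch import fnmatch


def match_module_names(module_names, patterns):
    """Return the module names matching any shell-style wildcard pattern.

    Illustrates fnmatch-based selection of layers for activation-stats
    hooks; match_module_names is an illustrative name, not timm's API.
    """
    return [name for name in module_names
            if any(fnmatch(name, p) for p in patterns)]
```

For example, `match_module_names(["stem.conv", "head.fc"], ["*.conv"])` selects only `"stem.conv"`, letting a user hook every conv layer without enumerating names.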
cfc15283a4  Update TNT url  [contrastive, 4 years ago]
4a09bc851e  Add TNT model  [contrastive, 4 years ago]
20626e8387  Add ActivationStatsHook to extract stats for SPP  [Aman Arora, 4 years ago]
a2727c1bf7  Merge pull request #510 from rwightman/dependabot/pip/jinja2-2.11.3: Bump jinja2 from 2.11.2 to 2.11.3  [Ross Wightman, 4 years ago]
e2e3290fbf  Add '--experiment' to train args for fixed exp name if desired; 'train' not added to output folder if specified.  [Ross Wightman, 4 years ago]
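Commit e2e3290fbf adds an `--experiment` train argument so a fixed name can replace the generated output-folder name. A hedged sketch of how such a flag might work; the default timestamped scheme below is an assumption for illustration, not the repository's exact naming logic:

```python
import argparse

parser = argparse.ArgumentParser()
# Matches the flag described in the commit; help text is paraphrased.
parser.add_argument('--experiment', default='', type=str,
                    help='fixed name for the experiment / output sub-folder')


def output_folder(experiment: str, timestamp: str = '20210401-120000') -> str:
    # Use the fixed experiment name if given; otherwise fall back to a
    # generated 'train-<timestamp>' name (this fallback scheme is assumed).
    return experiment if experiment else f'train-{timestamp}'
```

With `--experiment byoanet_ablation`, the output folder would simply be `byoanet_ablation`, with no `train` prefix, as the commit message describes.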
cf5fec5047  Cleanup experimental vit weight init a bit  [Ross Wightman, 4 years ago]