Commit Graph

1277 Commits (1420c118dfa1ba151a9cbd76f08db2701da23bbe)

Author SHA1 Message Date
Ross Wightman 2df77ee5cb Fix torchscript compat and features_only behaviour in GhostNet PR. A few minor formatting changes. Reuse existing layers.
4 years ago
Ross Wightman d793deb51a Merge branch 'master' of https://github.com/iamhankai/pytorch-image-models into iamhankai-master
4 years ago
Ross Wightman e685618f45 Merge pull request #550 from amaarora/wandb
4 years ago
Ross Wightman 277a9a78f9 Fix unit test filter update.
4 years ago
Ross Wightman 858728799c Update README again. Add 101x3 BiT-M model to CI ignore since it's starting to fail in GitHub runners.
4 years ago
Ross Wightman f606c45c38 Add Swin Transformer models from https://github.com/microsoft/Swin-Transformer
4 years ago
iamhankai de445e7827 Add GhostNet
4 years ago
Ross Wightman 5a196dddf6 Update README.md with latest, bump version to 0.4.8
4 years ago
Ross Wightman ce6585f533 Merge pull request #556 from rwightman/byoanet-self_attn
4 years ago
Ross Wightman b3d7580df1 Update ByoaNet comments. Fix first stem feat chs for ByobNet.
4 years ago
Ross Wightman 16f7aa9f54 Add default_cfg options for min_input_size / fixed_input_size, queries in model registry, and use for testing self-attn models
4 years ago
Ross Wightman 4e4b863b15 Missed norm.py
4 years ago
Ross Wightman 7c97e66f7c Remove commented code, add more consistent seed fn
4 years ago
Ross Wightman 364dd6a58e Merge branch 'master' into byoanet-self_attn
4 years ago
Ross Wightman ce62f96d4d ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments
4 years ago
Ross Wightman cd3dc4979f Fix adabelief imports, remove prints, preserve memory format is the default arg for zeros_like
4 years ago
Ross Wightman 21812d33aa Add prelim efficientnet_v2s weights from 224x224 train, eval 83.3 @ 288. Add eca_nfnet_l1 weights, train at 256, eval 84 @ 320.
4 years ago
Michael Monashev 0be1fa4793 Argument description fixed
4 years ago
Aman Arora 5772c55c57 Make wandb optional
4 years ago
Aman Arora f54897cc0b Make wandb optional rather than required, as with huggingface_hub
4 years ago
Aman Arora f13f7508a9 Keep changes to minimal and use args.experiment as wandb project name if it exists
4 years ago
Aman Arora f8bb13f640 Default project name to None
4 years ago
Aman Arora 8db8ff346f add wandb to requirements.txt
4 years ago
Aman Arora 3f028ebc0f import wandb in summary.py
4 years ago
Aman Arora a9e5d9e5ad log loss as before
4 years ago
Aman Arora 624c9b6949 log to wandb only if using wandb
4 years ago
Aman Arora 00c8e0b8bd Make use of wandb configurable
4 years ago
Aman Arora 8e6fb861e4 Add wandb support
4 years ago
Ross Wightman 779107b693 Merge pull request #542 from juntang-zhuang/adabelief
4 years ago
Juntang Zhuang 74366f733c Delete distributed_train_adabelief.sh
4 years ago
Juntang Zhuang 1d848f409a Delete args.yaml
4 years ago
juntang addfc7c1ac adabelief
4 years ago
Ross Wightman fb896c0b26 Update some comments re preliminary EfficientNet-V2 assumptions
4 years ago
Ross Wightman 2b49ab7a36 Fix ResNetV2 pretrained classifier issue. Fixes #540
4 years ago
Ross Wightman de9dff933a EfficientNet-V2S preliminary model def (for experimentation)
4 years ago
Ross Wightman d5ed58d623 Merge pull request #533 from rwightman/pit_and_vit_update
4 years ago
Ross Wightman 37c71a5609 Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch.
4 years ago
Ross Wightman 2bb65bd875 Wrong default_cfg pool_size for L1
4 years ago
Ross Wightman bf2ca6bdf4 Merge jax and original weight init
4 years ago
Ross Wightman acbd698c83 Update README.md with updates. Small tweak to head_dist handling.
4 years ago
Ross Wightman 9071568f0e Add weights for SE NFNet-L0 model, rename nfnet_l0b -> nfnet_l0. 82.75 top-1 @ 288. Add nfnet_l1 model def for training.
4 years ago
Ross Wightman c468c47a9c Add regnety_160 weights from DeiT teacher model, update that and my regnety_032 weights to use higher test size.
4 years ago
Ross Wightman 288682796f Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7
4 years ago
Ross Wightman ea9c9550b2 Fully move ViT hybrids to their own file, including embedding module. Remove some extra DeiT models that were for benchmarking only.
4 years ago
Ross Wightman a5310a3451 Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update
4 years ago
Ross Wightman 7953e5d11a Fix pos_embed scaling for ViT and num_classes != 1000 for pretrained distilled deit and pit models. Fix #426 and fix #433
4 years ago
Ross Wightman a760a4c3f4 Some ViT cleanup, merge distilled model with main, fixup torchscript support for distilled models
4 years ago
Ross Wightman 0dfc5a66bb Add PiT model from https://github.com/naver-ai/pit
4 years ago
Ross Wightman 1ad1645a50 Merge branch 'contrastive-master'
4 years ago
Ross Wightman 51febd869b Small tweak to tests for tnt model, reorder model imports.
4 years ago