Commit Graph

1256 Commits (cdcd0a92ca8a3dc120336a5dde1b7d6ecd5e9186)

Author SHA1 Message Date
Ross Wightman  af3299ba4a  Merge pull request #263 from rwightman/fixes_oct2020  4 years ago
Ross Wightman  741572dc9d  Bump version to 0.3.0 for pending PyPi push  4 years ago
Ross Wightman  b401952caf  Add newly added vision transformer large/base 224x224 weights ported from JAX official repo  4 years ago
Ross Wightman  61200db0ab  in_chans=1 working w/ pretrained weights for vision_transformer  4 years ago
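A note on the 61200db0ab entry above: timm's create_model takes an in_chans argument and adapts pretrained weights to the requested input channel count. A minimal sketch, assuming the standard create_model API (the model name and shapes are illustrative, not specific to this commit):

    import torch
    import timm

    # Build a ViT with single-channel input; pretrained patch-embedding weights
    # are adapted (e.g. collapsed across the RGB dim) to fit in_chans=1.
    model = timm.create_model('vit_base_patch16_224', pretrained=True, in_chans=1)
    model.eval()

    x = torch.randn(1, 1, 224, 224)    # grayscale input
    with torch.no_grad():
        out = model(x)
    print(out.shape)                   # e.g. torch.Size([1, 1000])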
Ross Wightman  e90edce438  Support native silu activation (aka swish). An optimized ver is available in PyTorch 1.7.  4 years ago
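For context on e90edce438: SiLU is x * sigmoid(x) (also known as Swish), and PyTorch 1.7 added a native, optimized implementation, so a hand-rolled version can defer to it when present. A rough illustration, not the repository's actual layer code:

    import torch
    import torch.nn.functional as F

    def swish_manual(x):
        # Hand-rolled SiLU/Swish: x * sigmoid(x)
        return x * torch.sigmoid(x)

    x = torch.randn(4, 8)
    # PyTorch >= 1.7 ships an optimized native op; fall back otherwise.
    y = F.silu(x) if hasattr(F, 'silu') else swish_manual(x)
    assert torch.allclose(y, swish_manual(x), atol=1e-6)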
Ross Wightman  da6cd2cc1f  Fix regression for pretrained classifier loading when using entrypoint functions directly  4 years ago
Ross Wightman  f591e90b0d  Make sure num_features attr is present in vit models as with others  4 years ago
Ross Wightman  4a3df7842a  Fix topn metric view regression on PyTorch 1.7  4 years ago
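On 4a3df7842a: the typical shape of such a fix is swapping .view(), which requires contiguous memory, for .reshape() inside a top-k accuracy helper; whether or not that is exactly what changed here, the idea looks roughly like this hypothetical sketch:

    import torch

    def accuracy(output, target, topk=(1,)):
        # Top-k accuracy; .reshape() tolerates the non-contiguous slice
        # that .view() rejects on newer PyTorch versions.
        maxk = max(topk)
        _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
        pred = pred.t()
        correct = pred.eq(target.view(1, -1).expand_as(pred))
        return [correct[:k].reshape(-1).float().sum(0) * 100.0 / target.size(0)
                for k in topk]

    logits = torch.randn(8, 10)
    labels = torch.randint(0, 10, (8,))
    print(accuracy(logits, labels, topk=(1, 5)))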
Ross Wightman  f944242cb0  Fix #262, num_classes arg mixup. Make vision_transformers a bit closer to other models wrt get/reset classifier/forward_features. Fix torchscript for ViT.  4 years ago
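Regarding f944242cb0: timm models share a small interface of forward_features / get_classifier / reset_classifier, and this commit brings the ViT classes closer to it. A hedged usage sketch (the model name is illustrative):

    import torch
    import timm

    model = timm.create_model('vit_base_patch16_224', pretrained=False, num_classes=10)

    x = torch.randn(2, 3, 224, 224)
    feats = model.forward_features(x)      # backbone features, no classifier head
    head = model.get_classifier()          # current classification head module
    model.reset_classifier(num_classes=0)  # num_classes=0 removes the head
    pooled = model(x)                      # now returns pre-logits features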
Ross Wightman  da1b90e5c9  Update results csvs with latest run  4 years ago
Ross Wightman  736f209e7d  Update vision transformers to be compatible with official code. Port official ViT weights from jax impl.  4 years ago
Ross Wightman  7613094fb5  Add ViT to sotabench  4 years ago
Ross Wightman  477a78ed81  Fix optimizer factory regression for optimizers like sgd/momentum that don't have an eps arg  4 years ago
Ross Wightman  27a93e9de7  Improve test crop for ViT models. Small now 77.85, added base weights at 79.35 top-1.  4 years ago
Ross Wightman  d4db9e7977  Add small vision transformer weights. 77.42 top-1.  4 years ago
Ross Wightman  ccfb5751ab  Merge pull request #255 from mrT23/master  4 years ago
talrid  27fadaa922  asymmetric_loss  4 years ago
talrid  79e727e07a  Merge branch 'master' of https://github.com/mrT23/pytorch-image-models  4 years ago
mrT23  8331fac688  Merge pull request #2 from rwightman/master  4 years ago
Ross Wightman  70ae7f0cc2  Merge pull request #250 from rwightman/vision_transformer  4 years ago
Ross Wightman  be53107e8a  Update README, ensure vit excluded from all tests (not ready)  4 years ago
Ross Wightman  f31933cb37  Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers.  4 years ago
Ross Wightman  9305313291  Default to old checkpoint format for now, still want compatibility with older torch ver for released models  4 years ago
Ross Wightman  a4d8fea61e  Add model based wd skip support. Improve cross version compat of optimizer factory. Fix #247  4 years ago
Ross Wightman  80078c47bb  Add Adafactor and Adahessian optimizers, cleanup optimizer arg passing, add gradient clipping support.  4 years ago
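On 80078c47bb: timm.optim gained Adafactor and Adahessian around this point and the train script gained gradient clipping. Since the exact factory arguments have shifted between versions, this sketch only shows the generic clipping pattern with plain PyTorch:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    model = nn.Linear(16, 4)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    x, y = torch.randn(8, 16), torch.randint(0, 4, (8,))
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Clip the global gradient norm before stepping, as a training loop with a
    # clip-grad option would do.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
    optimizer.zero_grad()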
Ross Wightman  fcb6258877  Add missing leaky_relu layer factory defn, update Apex/Native loss scaler interfaces to support unscaled grad clipping. Bump ver to 0.2.2 for pending release.  4 years ago
Ross Wightman  186075ef03  Merge pull request #244 from hollance/master  4 years ago
Matthijs Hollemans  f04bdc8c8e  don't forget this file  4 years ago
Matthijs Hollemans  8ffdc5910a  test_time_pool would be set to a non-False value even if test-time pooling is not available  4 years ago
Ross Wightman  4be5b51e0a  Missed moving some seresnet -> legacy in sotabench. Check sotabench cache.  4 years ago
Ross Wightman  e8e2d9cabf  Add DropPath (stochastic depth) to ReXNet and VoVNet. RegNet DropPath impl tweak and dedupe se args.  4 years ago
Ross Wightman  e8ca45854c  More models in sotabench, more control over sotabench run, dataset filename extraction consistency  4 years ago
Ross Wightman  9c406532bd  Add EfficientNet-EdgeTPU-M (efficientnet_em) model trained natively in PyTorch. More sotabench fiddling.  4 years ago
Ross Wightman  3681c5c4dd  Another sotabench.py debug iter  4 years ago
Ross Wightman  08029852d9  Sotabench debugging  4 years ago
Ross Wightman  c40384f5bd  Add ResNet weights. 80.5 (top-1) ResNet-50-D, 77.1 ResNet-34-D, 72.7 ResNet-18-D.  4 years ago
Ross Wightman  e39bf6ef59  Merge pull request #237 from rwightman/utils_cleanup  4 years ago
Ross Wightman  47a7b3b5b1  More flexible mixup mode, add 'half' mode.  4 years ago
Ross Wightman  532e3b417d  Reorg of utils into separate modules  4 years ago
Ross Wightman  9ce42d5c5a  Update README.md  4 years ago
Ross Wightman  0729dbe865  Update README.md  4 years ago
Ross Wightman  33f8a1bf36  Updated README, add wide_resnet50_2 and seresnext50_32x4d weights  4 years ago
Ross Wightman  5247eb37a7  Merge pull request #233 from rwightman/torchamp  4 years ago
Ross Wightman  751b0bba98  Add global_pool (--gp) arg changes to allow passing 'fast' easily for train/validate to avoid channels_last issue with AdaptiveAvgPool  4 years ago
Ross Wightman  9c297ec67d  Cleanup Apex vs native AMP scaler state save/load. Cleanup CheckpointSaver a bit.  4 years ago
Ross Wightman  80c9d9cc72  Add 'fast' global pool option, remove redundant SEModule from tresnet, normal one is now 'fast'  4 years ago
Ross Wightman  90a01f47d1  hrnet features_only pretrained weight loading issue. Fix #232.  4 years ago
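For 90a01f47d1: features_only=True builds a model as a feature-pyramid backbone, and this commit concerns pretrained weight loading for HRNet in that mode. A small, hedged example (weights left un-downloaded here):

    import torch
    import timm

    # Feature-extraction backbone: forward returns a list of feature maps.
    backbone = timm.create_model('hrnet_w18', pretrained=False, features_only=True)

    x = torch.randn(1, 3, 224, 224)
    for f in backbone(x):
        print(f.shape)                        # one entry per feature stage
    print(backbone.feature_info.channels())   # channel count per stage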
Ross Wightman  110a7c4982  AdaptiveAvgPool2d -> mean((2,3)) for all SE/attn layers to avoid NaN with AMP + channels_last layout. See https://github.com/pytorch/pytorch/issues/43992  4 years ago
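The 110a7c4982 change swaps spatial AdaptiveAvgPool2d for a plain mean over the H/W dims inside SE/attention layers, which sidesteps a NaN issue under AMP + channels_last (see the linked PyTorch issue). Roughly, with a simplified SE block that is not the repo's exact code:

    import torch
    import torch.nn as nn

    class SimpleSE(nn.Module):
        def __init__(self, channels, rd_ratio=0.25):
            super().__init__()
            rd_channels = max(1, int(channels * rd_ratio))
            self.fc1 = nn.Conv2d(channels, rd_channels, 1)
            self.act = nn.ReLU(inplace=True)
            self.fc2 = nn.Conv2d(rd_channels, channels, 1)

        def forward(self, x):
            # Mean over H, W instead of nn.AdaptiveAvgPool2d(1): same result,
            # but avoids the AMP + channels_last NaN path.
            s = x.mean((2, 3), keepdim=True)
            s = self.fc2(self.act(self.fc1(s)))
            return x * s.sigmoid()

    x = torch.randn(2, 64, 32, 32)
    print(SimpleSE(64)(x).shape)   # torch.Size([2, 64, 32, 32])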
Ross Wightman  c2cd1a332e  Improve torch amp support and add channels_last support for train/validate scripts  4 years ago
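c2cd1a332e wires native torch.cuda.amp and channels_last into the train/validate scripts. Independent of those scripts, the core pattern is roughly:

    import torch
    import torch.nn as nn

    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    use_amp = device == 'cuda'

    model = nn.Conv2d(3, 8, 3).to(device, memory_format=torch.channels_last)
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
    scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

    x = torch.randn(2, 3, 32, 32, device=device).to(memory_format=torch.channels_last)
    with torch.cuda.amp.autocast(enabled=use_amp):
        loss = model(x).float().mean()
    # GradScaler is a no-op when disabled, so the same code path runs on CPU.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()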
Ross Wightman  1d34a0a851  Merge branch 'master' of https://github.com/tgisaturday/pytorch-image-models into torchamp  4 years ago