Author | Commit | Message | Committed
Ross Wightman | 460eba7f24 | Work around casting issue with combination of native torch AMP and torchscript for Linear layers | 4 years ago
Ross Wightman | 5f4b6076d8 | Fix inplace arg compat for GELU and PReLU via activation factory | 4 years ago
Ross Wightman | fd962c4b4a | Native SiLU (Swish) op doesn't export to ONNX | 4 years ago
Ross Wightman | 27bbc70d71 | Add back old ModelEma and rename new one to ModelEmaV2 to avoid compat breaks in dependent code. Shuffle train script, add a few comments, remove DataParallel support, support experimental torchscript training. | 4 years ago
Ross Wightman | 6f43aeb252 | Merge pull request #286 from s-rog/patch-1: Fix link typo in README | 4 years ago
Roger Shieh | a7f6126b92 | Update README.md | 4 years ago
tigertang | 43f2500c26 | Add symbolic for SwishJitAutoFn to support ONNX | 4 years ago
Ross Wightman | 9214ca0716 | Simplifying EMA... | 4 years ago
Ross Wightman | 80cd31f21f | Create config.yml | 4 years ago
Ross Wightman | d940a53cd9 | Update issue templates | 4 years ago
Ross Wightman | 60c998af22 | Update issue templates | 4 years ago
Ross Wightman | 9d73911e62 | Update issue templates | 4 years ago
Ross Wightman | 2542283b28 | Merge pull request #268 from seemethere/patch-1: Update torch -> 1.7.0, torchvision -> 0.8.1 for GitHub Actions | 4 years ago
Eli Uriegas | ab9f1fc995 | Update torch -> 1.7.0, torchvision -> 0.8.1 | 4 years ago
Ross Wightman | 53aeed3499 | ver 0.3.1 | 4 years ago
Ross Wightman | 30ab4a1494 | Fix issue in optim factory with sgd / eps flag. Bump version to 0.3.1 | 4 years ago
Ross Wightman | 46f15443be | Update README.md and docs in prep for 0.3.0 PyPI release. | 4 years ago
Ross Wightman | af3299ba4a | Merge pull request #263 from rwightman/fixes_oct2020: Fixes for upcoming PyPI release | 4 years ago
Ross Wightman | 741572dc9d | Bump version to 0.3.0 for pending PyPI push | 4 years ago
Ross Wightman | b401952caf | Add new vision transformer large/base 224x224 weights ported from the official JAX repo | 4 years ago
Ross Wightman | 61200db0ab | Make in_chans=1 work with pretrained weights for vision_transformer | 4 years ago
Ross Wightman | e90edce438 | Support native SiLU activation (aka Swish). An optimized version is available in PyTorch 1.7. | 4 years ago
Ross Wightman | da6cd2cc1f | Fix regression for pretrained classifier loading when using entrypoint functions directly | 4 years ago
Ross Wightman | f591e90b0d | Make sure num_features attr is present in ViT models as with others | 4 years ago
Ross Wightman | 4a3df7842a | Fix top-n metric view regression on PyTorch 1.7 | 4 years ago
Ross Wightman | f944242cb0 | Fix #262, num_classes arg mixup. Make vision_transformers a bit closer to other models wrt get/reset classifier/forward_features. Fix torchscript for ViT. | 4 years ago
Ross Wightman | da1b90e5c9 | Update results CSVs with latest run | 4 years ago
Ross Wightman | 736f209e7d | Update vision transformers to be compatible with official code. Port official ViT weights from the JAX impl. | 4 years ago
Ross Wightman | 7613094fb5 | Add ViT to sotabench | 4 years ago
Ross Wightman | 477a78ed81 | Fix optimizer factory regression for optimizers like sgd/momentum that don't have an eps arg | 4 years ago
Ross Wightman | 27a93e9de7 | Improve test crop for ViT models. Small now 77.85, added base weights at 79.35 top-1. | 4 years ago
Ross Wightman | d4db9e7977 | Add small vision transformer weights. 77.42 top-1. | 4 years ago
Ross Wightman | ccfb5751ab | Merge pull request #255 from mrT23/master: Adding ASL (asymmetric loss) | 4 years ago
talrid | 27fadaa922 | asymmetric_loss | 4 years ago
talrid | 79e727e07a | Merge branch 'master' of https://github.com/mrT23/pytorch-image-models | 4 years ago
mrT23 | 8331fac688 | Merge pull request #2 from rwightman/master: merge original | 4 years ago
Ross Wightman | 70ae7f0cc2 | Merge pull request #250 from rwightman/vision_transformer: Vision Transformer | 4 years ago
Ross Wightman | be53107e8a | Update README, ensure ViT excluded from all tests (not ready) | 4 years ago
Ross Wightman | f31933cb37 | Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. | 4 years ago
Ross Wightman | 9305313291 | Default to old checkpoint format for now, still want compatibility with older torch versions for released models | 4 years ago
Ross Wightman | a4d8fea61e | Add model-based weight decay skip support. Improve cross-version compat of optimizer factory. Fix #247 | 4 years ago
Ross Wightman | 80078c47bb | Add Adafactor and Adahessian optimizers, clean up optimizer arg passing, add gradient clipping support. | 4 years ago
Ross Wightman | fcb6258877 | Add missing leaky_relu layer factory definition, update Apex/Native loss scaler interfaces to support unscaled grad clipping. Bump version to 0.2.2 for pending release. | 4 years ago
Ross Wightman | 186075ef03 | Merge pull request #244 from hollance/master: Bug fix: test_time_pool would be set to a non-False value | 4 years ago
Matthijs Hollemans | f04bdc8c8e | Don't forget this file | 4 years ago
Matthijs Hollemans | 8ffdc5910a | test_time_pool would be set to a non-False value even if test-time pooling is not available | 4 years ago
Ross Wightman | 4be5b51e0a | Missed moving some seresnet -> legacy in sotabench. Check sotabench cache. | 4 years ago
Ross Wightman | e8e2d9cabf | Add DropPath (stochastic depth) to ReXNet and VoVNet. RegNet DropPath impl tweak and dedupe SE args. | 4 years ago
Ross Wightman | e8ca45854c | More models in sotabench, more control over sotabench run, dataset filename extraction consistency | 4 years ago
Ross Wightman | 9c406532bd | Add EfficientNet-EdgeTPU-M (efficientnet_em) model trained natively in PyTorch. More sotabench fiddling. | 4 years ago
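
A few of the entries above describe changes worth a short illustration. Commit 5f4b6076d8 addresses the fact that some activation modules (e.g. nn.ReLU) accept an inplace constructor argument while others (nn.GELU, nn.PReLU) do not, so an activation factory has to forward that flag conditionally. Below is a minimal sketch of the idea; the factory name and layer dict are illustrative, not timm's actual create_act_layer code.

```python
import inspect

import torch.nn as nn

# Toy activation factory (illustrative only, not timm's implementation).
# nn.ReLU accepts an `inplace` kwarg, but nn.GELU and nn.PReLU do not,
# so the flag is only forwarded when the constructor can take it.
_ACT_LAYERS = {
    'relu': nn.ReLU,
    'leaky_relu': nn.LeakyReLU,
    'gelu': nn.GELU,
    'prelu': nn.PReLU,
}


def create_act(name: str, inplace: bool = False, **kwargs) -> nn.Module:
    act_cls = _ACT_LAYERS[name]
    if 'inplace' in inspect.signature(act_cls).parameters:
        kwargs['inplace'] = inplace
    return act_cls(**kwargs)


act = create_act('gelu', inplace=True)  # no TypeError: inplace is dropped for GELU
```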
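
Commits fd962c4b4a, 43f2500c26, and e90edce438 concern exporting Swish/SiLU to ONNX: a custom autograd Function has no ONNX mapping unless it provides a symbolic. The sketch below shows that pattern for the PyTorch 1.x exporter; SwishFn is a stand-in class, not the actual SwishJitAutoFn, and exporter handling of Function symbolics has changed in later PyTorch releases.

```python
import torch


class SwishFn(torch.autograd.Function):
    """Memory-efficient Swish/SiLU: x * sigmoid(x). Stand-in, not timm's SwishJitAutoFn."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * torch.sigmoid(x)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        sig = torch.sigmoid(x)
        # d/dx [x * sigmoid(x)] = sigmoid(x) * (1 + x * (1 - sigmoid(x)))
        return grad_output * (sig * (1 + x * (1 - sig)))

    @staticmethod
    def symbolic(g, x):
        # Express Swish in terms of ONNX ops the exporter understands,
        # since a custom autograd Function has no ONNX mapping by default.
        return g.op("Mul", x, g.op("Sigmoid", x))
```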
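
Commits 9214ca0716 and 27bbc70d71 deal with keeping an exponential moving average (EMA) of model weights for evaluation. A generic sketch of the technique is shown below; SimpleModelEma is illustrative and intentionally simpler than timm's ModelEma/ModelEmaV2 classes.

```python
from copy import deepcopy

import torch
import torch.nn as nn


class SimpleModelEma(nn.Module):
    """Keep an exponential moving average of a model's parameters and buffers.

    Generic illustration of weight EMA, not timm's ModelEmaV2 implementation.
    """

    def __init__(self, model: nn.Module, decay: float = 0.9999):
        super().__init__()
        self.module = deepcopy(model)  # EMA copy: updated, never trained directly
        self.module.eval()
        self.decay = decay

    @torch.no_grad()
    def update(self, model: nn.Module):
        for ema_t, model_t in zip(self.module.state_dict().values(),
                                  model.state_dict().values()):
            if ema_t.dtype.is_floating_point:
                ema_t.mul_(self.decay).add_(model_t, alpha=1.0 - self.decay)
            else:
                ema_t.copy_(model_t)  # e.g. integer buffers like num_batches_tracked
```

In a training loop, update(model) would typically be called after each optimizer step, and the EMA copy (ema.module) used for validation and checkpointing.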
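
Commits 80078c47bb and fcb6258877 mention gradient clipping with the native AMP loss scaler, where gradients must be unscaled before the clip threshold is applied. Below is a minimal sketch of the standard torch.cuda.amp pattern; the toy model, data, and hyperparameters are placeholders rather than timm's train script.

```python
import torch
import torch.nn as nn

# Requires a CUDA device, as torch.cuda.amp targets GPU autocast.
model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scaler = torch.cuda.amp.GradScaler()
criterion = nn.CrossEntropyLoss()

for step in range(10):
    inputs = torch.randn(8, 10, device='cuda')
    targets = torch.randint(0, 2, (8,), device='cuda')

    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = criterion(model(inputs), targets)

    scaler.scale(loss).backward()
    # Unscale gradients in place before clipping so the threshold applies
    # to the true gradient magnitudes, not the loss-scaled ones.
    scaler.unscale_(optimizer)
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(optimizer)
    scaler.update()
```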