| Author | Commit | Message | Date |
| --- | --- | --- | --- |
| Ross Wightman | 19816fe226 | Add citation info | 4 years ago |
| Csaba Kertesz | e42b140ade | Add --input-size option to scripts to specify full input dimensions from command-line | 4 years ago |
| Csaba Kertesz | 7cae7e7035 | Follow links during dataset scanning | 4 years ago |
| Ross Wightman | 1d01c2b68c | Update README.md | 4 years ago |
| Ross Wightman | c96e9f99a0 | Update version to 0.3.3 | 4 years ago |
| Ross Wightman | a7d0a8b5b2 | Update results csv files with latest models, incl 101D, 152D, 200D, SE152D ResNets and yet to be merged BiT and ViT-R50 models. | 4 years ago |
| Ross Wightman | 4e2533db77 | Add 320x320 model default cfgs for 101D and 152D ResNets. Add SEResNet-152D weights and 320x320 cfg. | 4 years ago |
| Ross Wightman | 0167f749d3 | Remove some old __future__ imports | 4 years ago |
| Ross Wightman | 85bf4b8cd6 | Add setup.cfg for conda / fastai integration | 4 years ago |
| Ross Wightman | e553480b67 | Add 21843 synset txt for google 21k models like BiT/ViT | 4 years ago |
| Ross Wightman | e35e9760a6 | More work on dataset / parser split and imagenet21k (tar) support | 4 years ago |
| Ross Wightman | ce69de70d3 | Add 21k weight urls to vision_transformer. Cleanup feature_info for preact ResNetV2 (BiT) models | 4 years ago |
| Ross Wightman | 231d04e91a | ResNetV2 pre-act and non-preact model, w/ BiT pretrained weights and support for ViT R50 model. Tweaks for in21k num_classes passing. More to do... tests failing. | 4 years ago |
| Ross Wightman | de6046e213 | Initial commit for dataset / parser reorg to support additional datasets / types | 4 years ago |
| Ross Wightman | 392595c7eb | Add pool_size to default cfgs for new models to prevent tests from failing. Add explicit 200D_320 model entrypoint for next benchmark run. | 4 years ago |
| Ross Wightman | 7a75b8d033 | Update README.md | 4 years ago |
| Ross Wightman | b1f1228a41 | Add ResNet101D, 152D, and 200D weights, remove meh 66d model | 4 years ago |
| Ross Wightman | 198f6ea0f3 | Merge pull request #302 from Jasha10/create_optimizer-opt_args: Configure create_optimizer with args.opt_args | 4 years ago |
| Jasha | 7c56c718f3 | Configure create_optimizer with args.opt_args. Closes #301 | 4 years ago |
| Ross Wightman | 51d74d91da | Update README.md | 4 years ago |
| Ross Wightman | 9a25fdf3ad | Merge pull request #297 from rwightman/ema_simplify: Simplified JIT compatible Ema module. Fixes for SiLU export and torchscript training w/ Linear layer. | 4 years ago |
| Ross Wightman | c9ebe86d03 | Merge pull request #300 from tmkkk/real-labels-fix: Fix a bug with accuracy retrieving from RealLabels | 4 years ago |
| Tymoteusz Wiśniewski | de15b43865 | Fix a bug with accuracy retrieving from RealLabels | 4 years ago |
| Ross Wightman | cd72e66eff | Bug in last mod for features_only default_cfg | 4 years ago |
| Ross Wightman | 867a0e5a04 | Add default_cfg back to models wrapped in feature extraction module as per discussion in #294. | 4 years ago |
| Ross Wightman | 4ca52d73d8 | Add separate set and update method to ModelEmaV2 | 4 years ago |
| Ross Wightman | 2ed8f24715 | A few more changes for 0.3.2 maint release. Linear layer change for mobilenetv3 and inception_v3, support no bias for linear wrapper. | 4 years ago |
| Ross Wightman | 6504a42832 | Version 0.3.2 | 4 years ago |
| Ross Wightman | 460eba7f24 | Work around casting issue with combination of native torch AMP and torchscript for Linear layers | 4 years ago |
| Ross Wightman | 5f4b6076d8 | Fix inplace arg compat for GELU and PreLU via activation factory | 4 years ago |
| Ross Wightman | fd962c4b4a | Native SiLU (Swish) op doesn't export to ONNX | 4 years ago |
| Ross Wightman | 27bbc70d71 | Add back old ModelEma and rename new one to ModelEmaV2 to avoid compat breaks in dependant code. Shuffle train script, add a few comments, remove DataParallel support, support experimental torchscript training. | 4 years ago |
| Ross Wightman | 6f43aeb252 | Merge pull request #286 from s-rog/patch-1: Fix link typo in README | 4 years ago |
| Roger Shieh | a7f6126b92 | Update README.md | 4 years ago |
| tigertang | 43f2500c26 | Add symbolic for SwishJitAutoFn to support onnx | 4 years ago |
| Ross Wightman | 9214ca0716 | Simplifying EMA... | 4 years ago |
| Ross Wightman | 80cd31f21f | Create config.yml | 4 years ago |
| Ross Wightman | d940a53cd9 | Update issue templates | 4 years ago |
| Ross Wightman | 60c998af22 | Update issue templates | 4 years ago |
| Ross Wightman | 9d73911e62 | Update issue templates | 4 years ago |
| Ross Wightman | 2542283b28 | Merge pull request #268 from seemethere/patch-1: Update torch -> 1.7.0, torchvision -> 0.8.1 for github actions | 4 years ago |
| Eli Uriegas | ab9f1fc995 | Update torch -> 1.7.0, torchvision -> 0.8.1 | 4 years ago |
| Ross Wightman | 53aeed3499 | ver 0.3.1 | 4 years ago |
| Ross Wightman | 30ab4a1494 | Fix issue in optim factory with sgd / eps flag. Bump version to 0.3.1 | 4 years ago |
| Ross Wightman | 46f15443be | Update README.md and docs in prep for 0.3.0 PyPi release. | 4 years ago |
| Ross Wightman | af3299ba4a | Merge pull request #263 from rwightman/fixes_oct2020: Fixes for upcoming PyPi release | 4 years ago |
| Ross Wightman | 741572dc9d | Bump version to 0.3.0 for pending PyPi push | 4 years ago |
| Ross Wightman | b401952caf | Add newly added vision transformer large/base 224x224 weights ported from JAX official repo | 4 years ago |
| Ross Wightman | 61200db0ab | in_chans=1 working w/ pretrained weights for vision_transformer | 4 years ago |
| Ross Wightman | e90edce438 | Support native silu activation (aka swish). An optimized ver is available in PyTorch 1.7. | 4 years ago |