Author | Commit | Message | Age
Ross Wightman | f8463b8fa9 | Version 0.3.4. Tweak setup.cfg and update setup.py metadata | 4 years ago
Ross Wightman | e7a9ddf982 | Merge pull request #334 from kecsap/links: Follow symbolic links during dataset scanning | 4 years ago
Csaba Kertesz | 7cae7e7035 | Follow links during dataset scanning | 4 years ago
Ross Wightman | c96e9f99a0 | Update version to 0.3.3 | 4 years ago
Ross Wightman | 4e2533db77 | Add 320x320 model default cfgs for 101D and 152D ResNets. Add SEResNet-152D weights and 320x320 cfg. | 4 years ago
Ross Wightman | 0167f749d3 | Remove some old __future__ imports | 4 years ago
Ross Wightman | 392595c7eb | Add pool_size to default cfgs for new models to prevent tests from failing. Add explicit 200D_320 model entrypoint for next benchmark run. | 4 years ago
Ross Wightman | b1f1228a41 | Add ResNet101D, 152D, and 200D weights, remove meh 66d model | 4 years ago
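The ResNet-D additions above land as ordinary timm entrypoints, and the default cfg fields the neighboring commits adjust (input_size, pool_size) are readable from the created model. A minimal sketch, assuming the entrypoint names of this release (resnet101d etc.):

```python
import timm

# Load one of the newly added ResNet-D models (weights download on first use)
model = timm.create_model('resnet101d', pretrained=True)
model.eval()

# default_cfg holds the metadata the commits above tweak
cfg = model.default_cfg
print(cfg['input_size'], cfg.get('pool_size'), cfg['crop_pct'])
```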
Jasha | 7c56c718f3 | Configure create_optimizer with args.opt_args. Closes #301 | 4 years ago
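A minimal sketch of the change: create_optimizer reads its settings from an args namespace, and opt_args forwards extra keyword arguments to the optimizer constructor. The exact fields shown (opt_eps, opt_betas) are assumptions based on this era of the optimizer factory, and amsgrad is a hypothetical extra kwarg:

```python
from types import SimpleNamespace

import torch.nn as nn
from timm.optim import create_optimizer

model = nn.Linear(10, 2)

# Namespace stands in for the train-script args; opt_args carries extra
# keyword arguments straight through to the optimizer constructor.
args = SimpleNamespace(
    opt='adamw', lr=5e-4, weight_decay=0.05, momentum=0.9,
    opt_eps=1e-8, opt_betas=None,
    opt_args={'amsgrad': True},  # hypothetical extra kwargs
)
optimizer = create_optimizer(args, model)
```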
Ross Wightman | 9a25fdf3ad | Merge pull request #297 from rwightman/ema_simplify: Simplified JIT-compatible EMA module. Fixes for SiLU export and torchscript training w/ Linear layer. | 4 years ago
Tymoteusz Wiśniewski | de15b43865 | Fix a bug with accuracy retrieval from RealLabels | 4 years ago
Ross Wightman | cd72e66eff | Fix bug in last mod for features_only default_cfg | 4 years ago
Ross Wightman | 867a0e5a04 | Add default_cfg back to models wrapped in feature extraction module, as per discussion in #294. | 4 years ago
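For context on the two features_only commits above, a short sketch: the feature-extraction wrapper returned by create_model keeps a default_cfg attribute (the #294 change) alongside its feature_info:

```python
import timm
import torch

# The feature-extraction wrapper retains default_cfg (per #294) and
# exposes per-stage channel counts via feature_info
model = timm.create_model('resnet50', features_only=True)
print(model.default_cfg['input_size'])   # (3, 224, 224)
print(model.feature_info.channels())     # channels of each returned feature map

features = model(torch.randn(1, 3, 224, 224))
print([f.shape for f in features])
```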
Ross Wightman | 4ca52d73d8 | Add separate set and update methods to ModelEmaV2 | 4 years ago
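A sketch of the resulting ModelEmaV2 API: update() blends the averaged weights toward the live model each step, while the new set() copies the live weights outright (e.g. after resuming from a checkpoint):

```python
import torch
import torch.nn as nn
from timm.utils import ModelEmaV2

model = nn.Linear(10, 2)
ema = ModelEmaV2(model, decay=0.999)

for _ in range(10):
    # ... optimizer step on `model` would go here ...
    ema.update(model)   # blend EMA weights toward the live model

ema.set(model)          # or copy the live weights outright

# the averaged copy lives on ema.module
with torch.no_grad():
    out = ema.module(torch.randn(1, 10))
```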
Ross Wightman | 2ed8f24715 | A few more changes for the 0.3.2 maintenance release. Linear layer change for mobilenetv3 and inception_v3; support no-bias option for the Linear wrapper. | 4 years ago
Ross Wightman | 6504a42832 | Version 0.3.2 | 4 years ago
Ross Wightman | 460eba7f24 | Work around casting issue with the combination of native torch AMP and torchscript for Linear layers | 4 years ago
Ross Wightman | 5f4b6076d8 | Fix inplace arg compat for GELU and PReLU via activation factory | 4 years ago
Ross Wightman | fd962c4b4a | Native SiLU (Swish) op doesn't export to ONNX | 4 years ago
Ross Wightman | 27bbc70d71 | Add back old ModelEma and rename new one to ModelEmaV2 to avoid compat breaks in dependent code. Shuffle train script, add a few comments, remove DataParallel support, support experimental torchscript training. | 4 years ago
Ross Wightman | 9214ca0716 | Simplifying EMA... | 4 years ago
Ross Wightman | 53aeed3499 | ver 0.3.1 | 4 years ago
Ross Wightman | 30ab4a1494 | Fix issue in optim factory with sgd / eps flag. Bump version to 0.3.1 | 4 years ago
Ross Wightman | 741572dc9d | Bump version to 0.3.0 for pending PyPI push | 4 years ago
Ross Wightman | b401952caf | Add newly added vision transformer large/base 224x224 weights ported from the official JAX repo | 4 years ago
Ross Wightman | 61200db0ab | in_chans=1 working w/ pretrained weights for vision_transformer | 4 years ago
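A minimal sketch of the in_chans change: create_model adapts the pretrained patch-embedding weights (reducing the RGB filters to a single channel) so grayscale input works out of the box:

```python
import timm
import torch

# in_chans=1 adapts the pretrained patch-embedding weights to one channel
model = timm.create_model('vit_base_patch16_224', pretrained=True, in_chans=1)
out = model(torch.randn(1, 1, 224, 224))
print(out.shape)  # torch.Size([1, 1000])
```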
Ross Wightman | e90edce438 | Support native silu activation (aka swish). An optimized version is available in PyTorch 1.7. | 4 years ago
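The native op and the hand-rolled Swish it replaces compute the same function, x * sigmoid(x); a quick check:

```python
import torch
import torch.nn as nn

x = torch.randn(4, 8)
native = nn.SiLU()(x)              # optimized native op in PyTorch >= 1.7
manual = x * torch.sigmoid(x)      # the hand-rolled Swish it replaces
assert torch.allclose(native, manual)
```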
Ross Wightman | da6cd2cc1f | Fix regression for pretrained classifier loading when using entrypoint functions directly | 4 years ago
Ross Wightman | f591e90b0d | Make sure num_features attr is present in ViT models as with others | 4 years ago
Ross Wightman | 4a3df7842a | Fix top-n metric view regression on PyTorch 1.7 | 4 years ago
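The PyTorch 1.7 regression comes from calling view() on a non-contiguous slice; a sketch in the spirit of the library's accuracy helper, using reshape() as the fix:

```python
import torch

def accuracy(output, target, topk=(1,)):
    """Top-k accuracy; reshape() works where view() fails on the
    non-contiguous slice under PyTorch 1.7."""
    maxk = max(topk)
    _, pred = output.topk(maxk, dim=1)
    correct = pred.t().eq(target.view(1, -1))
    return [correct[:k].reshape(-1).float().sum(0) * 100.0 / target.size(0)
            for k in topk]

logits = torch.randn(8, 10)
labels = torch.randint(0, 10, (8,))
top1, top5 = accuracy(logits, labels, topk=(1, 5))
```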
Ross Wightman | f944242cb0 | Fix #262, num_classes arg mix-up. Make vision_transformers a bit closer to other models w.r.t. get/reset classifier/forward_features. Fix torchscript for ViT. | 4 years ago
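A sketch of the get/reset classifier convention this commit aligns ViT with; vit_small_patch16_224 is one of the entrypoints from this era:

```python
import timm

model = timm.create_model('vit_small_patch16_224', num_classes=10)
print(model.get_classifier())   # the 10-way head

model.reset_classifier(0)       # drop the head; forward now returns features
```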
Ross Wightman | 736f209e7d | Update vision transformers to be compatible with official code. Port official ViT weights from the JAX impl. | 4 years ago
Ross Wightman | 477a78ed81 | Fix optimizer factory regression for optimizers like sgd/momentum that don't have an eps arg | 4 years ago
Ross Wightman | 27a93e9de7 | Improve test crop for ViT models. Small is now 77.85 top-1; added base weights at 79.35 top-1. | 4 years ago
Ross Wightman | d4db9e7977 | Add small vision transformer weights. 77.42 top-1. | 4 years ago
talrid | 27fadaa922 | asymmetric_loss | 4 years ago
Ross Wightman | f31933cb37 | Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. | 4 years ago
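A minimal sketch of the patch variant's embedding: a strided convolution cuts the image into non-overlapping patches and projects each to the embedding dimension, so a 224x224 input at patch size 16 yields 14*14 = 196 tokens:

```python
import torch
import torch.nn as nn

# Strided conv = non-overlapping 16x16 patches projected to embed_dim
embed = nn.Conv2d(3, 768, kernel_size=16, stride=16)
x = torch.randn(1, 3, 224, 224)
tokens = embed(x).flatten(2).transpose(1, 2)
print(tokens.shape)  # torch.Size([1, 196, 768])
```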
Ross Wightman | a4d8fea61e | Add model-based wd skip support. Improve cross-version compat of optimizer factory. Fix #247 | 4 years ago
Ross Wightman | 80078c47bb | Add Adafactor and Adahessian optimizers, clean up optimizer arg passing, add gradient clipping support. | 4 years ago
Ross Wightman | fcb6258877 | Add missing leaky_relu layer factory defn; update Apex/native loss scaler interfaces to support unscaled grad clipping. Bump ver to 0.2.2 for pending release. | 4 years ago
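The unscaled-clipping pattern that loss-scaler change enables is the standard native-AMP recipe: unscale the gradients before clipping so the norm is measured at true magnitude. A generic PyTorch sketch (not the library's NativeScaler wrapper itself):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Linear(10, 2).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(8, 10, device='cuda')
y = torch.randn(8, 2, device='cuda')

with torch.cuda.amp.autocast():
    loss = F.mse_loss(model(x), y)

scaler.scale(loss).backward()
scaler.unscale_(optimizer)  # unscale so clipping sees true gradient norms
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
scaler.step(optimizer)
scaler.update()
```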
Ross Wightman | e8e2d9cabf | Add DropPath (stochastic depth) to ReXNet and VoVNet. RegNet DropPath impl tweak and dedupe of SE args. | 4 years ago
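DropPath zeroes an entire residual branch per sample with some probability, rescaling survivors to keep the expected value unchanged; a minimal sketch of the technique (timm ships its own layer alongside the models):

```python
import torch
import torch.nn as nn

class DropPath(nn.Module):
    """Minimal stochastic-depth sketch: drop a residual branch per sample
    with probability drop_prob, rescaling survivors to keep expectation."""
    def __init__(self, drop_prob=0.1):
        super().__init__()
        self.drop_prob = drop_prob

    def forward(self, x):
        if self.drop_prob == 0.0 or not self.training:
            return x
        keep = 1.0 - self.drop_prob
        # one Bernoulli draw per sample, broadcast over remaining dims
        mask = torch.rand(x.shape[0], *([1] * (x.dim() - 1)), device=x.device)
        mask = (mask < keep).to(x.dtype)
        return x / keep * mask
```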
Ross Wightman | e8ca45854c | More models in sotabench, more control over sotabench runs, dataset filename extraction consistency | 4 years ago
Ross Wightman | 9c406532bd | Add EfficientNet-EdgeTPU-M (efficientnet_em) model trained natively in PyTorch. More sotabench fiddling. | 4 years ago
Ross Wightman | c40384f5bd | Add ResNet weights: 80.5 top-1 ResNet-50-D, 77.1 ResNet-34-D, 72.7 ResNet-18-D. | 4 years ago
Ross Wightman | 47a7b3b5b1 | More flexible mixup mode; add 'half' mode. | 4 years ago
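For context, a generic sketch of batch mixup; how 'half' mode restricts the mixing is a detail of the timm implementation not reproduced here:

```python
import torch

def mixup_batch(x, target, lam=0.6):
    # Blend each sample with the reversed batch; the loss is then
    # lam * crit(out, target_a) + (1 - lam) * crit(out, target_b)
    x_mixed = x * lam + x.flip(0) * (1.0 - lam)
    return x_mixed, target, target.flip(0), lam
```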
Ross Wightman | 532e3b417d | Reorg of utils into separate modules | 4 years ago
Ross Wightman | 33f8a1bf36 | Update README; add wide_resnet50_2 and seresnext50_32x4d weights | 4 years ago
Ross Wightman | 751b0bba98 | Add global_pool (--gp) arg changes to allow passing 'fast' easily for train/validate, avoiding the channels_last issue with AdaptiveAvgPool | 4 years ago
Ross Wightman | 9c297ec67d | Clean up Apex vs native AMP scaler state save/load. Clean up CheckpointSaver a bit. | 4 years ago
Ross Wightman | 80c9d9cc72 | Add 'fast' global pool option; remove redundant SEModule from TResNet, the normal one is now 'fast' | 4 years ago
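'fast' global pooling is just a spatial mean, which sidesteps the channels_last issue with AdaptiveAvgPool mentioned above; a quick equivalence check:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 512, 7, 7).to(memory_format=torch.channels_last)

fast = x.mean((2, 3))                        # the 'fast' pool: a plain mean
ref = nn.AdaptiveAvgPool2d(1)(x).flatten(1)  # the adaptive-pool equivalent
assert torch.allclose(fast, ref, atol=1e-6)
```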