rwightman
adbf770f16
Add Res2Net and DLA models w/ pretrained weights. Update sotabench.
5 years ago
Ross Wightman
4002c0d4ce
Fix AutoAugment abs translate calc
5 years ago
Ross Wightman
c06274e5a2
Add note on random selection of magnitude value
5 years ago
Ross Wightman
b750b76f67
More AutoAugment work. Ready to roll...
5 years ago
Ross Wightman
25d2088d9e
Working on auto-augment
5 years ago
Ross Wightman
aff194f42c
Merge pull request #32 from rwightman/opt
More optimizer work
5 years ago
Ross Wightman
64966f61f7
Add Nvidia's NovoGrad impl from Jasper (cleaner/faster than current) and Apex Fused optimizers
5 years ago
Ross Wightman
3d9c8a6489
Add support for new AMP checkpointing w/ amp.state_dict
5 years ago
Ross Wightman
ba3c97c3ad
Some Lookahead cleanup and fixes
5 years ago
Ross Wightman
e9d2ec4d8e
Merge pull request #31 from rwightman/opt
Optimizers and more
5 years ago
Ross Wightman
fac58f609a
Add RAdam, NovoGrad, Lookahead, and AdamW optimizers, a few ResNet tweaks and scheduler factory tweak.
* Add some of the trendy new optimizers. Decent results but not clearly better than the standards.
* Can create a None scheduler for constant LR
* ResNet defaults to zero_init of last BN in residual
* add resnet50d config
5 years ago
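The "zero_init of last BN in residual" default mentioned above refers to a common trick: zeroing the scale (gamma) of the last BatchNorm in each residual branch so every block starts as an identity mapping. A minimal sketch with a toy block (the block structure and the name `bn2` are illustrative, not timm's actual ResNet code):

```python
import torch
import torch.nn as nn

class BasicResidualBlock(nn.Module):
    """Toy residual block; 'bn2' stands in for the last BN of the branch."""
    def __init__(self, chs, zero_init_last_bn=True):
        super().__init__()
        self.conv1 = nn.Conv2d(chs, chs, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(chs)
        self.conv2 = nn.Conv2d(chs, chs, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(chs)
        if zero_init_last_bn:
            # Zero gamma on the last BN: the residual branch outputs 0
            # at init, so the whole block behaves like an identity.
            nn.init.zeros_(self.bn2.weight)

    def forward(self, x):
        out = self.bn2(self.conv2(torch.relu(self.bn1(self.conv1(x)))))
        return torch.relu(x + out)
```

With gamma zeroed, early training gradients flow mainly through the skip connection, which tends to stabilize the start of training for deep ResNets.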
Ross Wightman
81875d52a6
Update sotabench model list, add Mean-Max pooling DPN variants, disable download progress
5 years ago
Ross Wightman
f37e633e9b
Merge remote-tracking branch 'origin/re-exp' into opt
5 years ago
Ross Wightman
b06dce8d71
Bump version for next push to pypi
5 years ago
Ross Wightman
73fbd97ed4
Add weights for my MixNet-XL creation, include README updates for EdgeTPU models
5 years ago
Ross Wightman
51a2375b0c
Experimenting with a custom MixNet-XL and MixNet-XXL definition
5 years ago
Ross Wightman
9ec6824bab
Finally got around to adding EdgeTPU EfficientNet variant
5 years ago
Ross Wightman
daeaa113e2
Add initial sotabench attempt. Split create_transform out of create_loader. Update requirements.txt
5 years ago
Ross Wightman
66634d2200
Add support to split random erasing blocks into randomly selected number with --recount arg. Fix random selection of aspect ratios.
5 years ago
Ross Wightman
6946281fde
Experimenting with random erasing changes
5 years ago
Ross Wightman
aeaaad7304
Merge pull request #24 from rwightman/gluon_xception
Port Gluon Aligned Xception models
5 years ago
Ross Wightman
3b4868f6dc
A few more additions to Gluon Xception models to match interface of others.
5 years ago
Ross Wightman
4d505e0785
Add working Gluon Xception-65 model. Some cleanup still needed.
5 years ago
Minqin Chen
4e7a854dd0
Update helpers.py
Fixing out of memory error by loading the checkpoint onto the CPU.
5 years ago
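The out-of-memory fix above comes down to passing `map_location='cpu'` to `torch.load`, so checkpoint tensors deserialize to host memory instead of the GPU they were saved from. A minimal sketch (the `'state_dict'` key is the usual convention, assumed here):

```python
import torch

def load_checkpoint_cpu(path):
    # Map all tensors to CPU on load so resuming doesn't allocate the
    # entire checkpoint on an already-busy GPU.
    checkpoint = torch.load(path, map_location='cpu')
    # 'state_dict' is the conventional wrapper key; fall back to the raw object.
    if isinstance(checkpoint, dict) and 'state_dict' in checkpoint:
        return checkpoint['state_dict']
    return checkpoint
```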
Ross Wightman
0c874195db
Update results csv files, bump version for timm pip release
5 years ago
Ross Wightman
4fe2da558c
Add MixNet Small and Large PyTorch native weights (no same padding)
5 years ago
Ross Wightman
e879cf52fa
Update validation scores for new TF EfficientNet weights.
5 years ago
Ross Wightman
77e2e0c4e3
Add new auto-augmentation TensorFlow EfficientNet weights, incl B6 and B7 models. Validation scores still pending but looking good.
5 years ago
Ross Wightman
857f33015a
Add native PyTorch weights for MixNet-Medium with no SAME padding necessary. Remove unused block of code.
5 years ago
Ross Wightman
e7c8a37334
Make min-lr and cooldown-epochs cmdline args, change dash in color_jitter arg for consistency
5 years ago
Ross Wightman
d4debe6597
Update version, results csv files, and move remaining dropbox weights to github
5 years ago
Ross Wightman
dfa9298b4e
Add MixNet ( https://arxiv.org/abs/1907.09595 ) with pretrained weights converted from TensorFlow impl
* refactor 'same' convolution and add helper to use MixedConv2d when needed
* improve performance of 'same' padding for cases that can be handled statically
* add support for extra exp, pw, and dw kernel specs with grouping support to decoder/string defs for MixNet
* shuffle some args for a bit more consistency, a little less clutter overall in gen_efficientnet.py
5 years ago
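The "static 'same' padding" improvement above relies on the fact that TF-style SAME padding is a fixed quantity once the input size, kernel, stride, and dilation are known, so it can be computed ahead of time instead of per forward pass. A sketch of that calculation (function name is illustrative):

```python
import math

def get_same_padding(in_size, kernel_size, stride, dilation):
    """Total padding so the output size equals ceil(in_size / stride),
    matching TF 'SAME' semantics; static when in_size is known."""
    out_size = math.ceil(in_size / stride)
    pad = (out_size - 1) * stride + (kernel_size - 1) * dilation + 1 - in_size
    return max(pad, 0)
```

When the total pad is even it can be applied symmetrically via a plain `nn.Conv2d(padding=...)`; only odd totals need the asymmetric dynamic-padding path.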
Ross Wightman
7a92caa560
Add basic image folder style dataset to read directly out of tar files, example in validate.py
5 years ago
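A tar-backed "image folder" dataset like the one above boils down to indexing image members of the archive and deriving each label from the parent directory name, mirroring the usual folder-per-class layout. A hedged sketch of the indexing step (names and extension list are illustrative, not timm's actual implementation):

```python
import os
import tarfile

IMG_EXTENSIONS = ('.png', '.jpg', '.jpeg')

def index_tar_images(tf):
    """Index image members of an open tarfile.TarFile as
    (member_name, class_name), taking the class from the parent dir."""
    samples = []
    for m in tf.getmembers():
        ext = os.path.splitext(m.name)[1].lower()
        if m.isfile() and ext in IMG_EXTENSIONS:
            cls = os.path.basename(os.path.dirname(m.name))
            samples.append((m.name, cls))
    return sorted(samples)
```

At `__getitem__` time a dataset would then call `tf.extractfile(member)` and decode the bytes with PIL, avoiding millions of small files on disk.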
Ross Wightman
d6ac5bbc48
EfficientNet and related cleanup
* remove folded_bn support and corresponding untrainable tflite ported weights
* combine bn args into dict
* add inplace support to activations and use where possible for reduced mem on large models
5 years ago
Ross Wightman
3d9be78fc6
A bit more ResNet cleanup.
* add inplace=True back
* minor comment improvements
* few clarity changes
5 years ago
Ross Wightman
33436fafad
Add weights for ResNeXt50d model
5 years ago
Ross Wightman
e78cd79073
Move ResNet additions for Gluon into main ResNet impl. Add ResNet-26 and ResNet-26d models with weights.
5 years ago
Ross Wightman
6cdf35e670
Add explicit half/fp16 support to loader and validation script
5 years ago
Ross Wightman
a6b2f6eca5
Update README, bump version
5 years ago
Ross Wightman
949b7a81c4
Fix typo in DenseNet default resolutions
5 years ago
Ross Wightman
da52fcf78a
Add NASNet-Large model
5 years ago
Ross Wightman
6057496409
Register dpn107
5 years ago
Ross Wightman
3d1a66b6fc
Version 0.1.6
5 years ago
Ross Wightman
a6878b5218
Fix DPN config keys that I broke
5 years ago
Ross Wightman
9b0070edc9
Add two comments back, fix typo
5 years ago
Ross Wightman
188aeae8f4
Bump version 0.1.4
5 years ago
Ross Wightman
c3287aafb3
Slight improvement in EfficientNet-B2 native PyTorch weights
5 years ago
Ross Wightman
b8762cc67d
Model updates. Add my best ResNet50 weights top-1=78.47. Add some other torchvision weights.
* Remove some models that don't exist as pretrained and likely never will (se)resnext152
* Add some torchvision weights as tv_ for models that I have added better weights for
* Add wide resnet recently added to torchvision along with resnext101-32x8d
* Add functionality to model registry to allow filtering on pretrained weight presence
5 years ago
Ross Wightman
65a634626f
Switch random erasing to doing normal_() on CPU to avoid instability, remove a debug print
5 years ago
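The random-erasing fix above amounts to sampling the per-pixel fill values with `normal_()` on CPU and only then moving them to the target device, rather than sampling on the GPU directly. A minimal sketch (function name is illustrative):

```python
import torch

def erase_noise(shape, dtype=torch.float32, device='cpu'):
    # Sample the fill values with normal_() on CPU first, then move to
    # the target device; per the commit, sampling on-device was the
    # unstable path.
    noise = torch.empty(shape, dtype=dtype)
    noise.normal_()
    return noise.to(device)
```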
Ross Wightman
c6b32cbe73
A number of tweaks to arguments, epoch handling, config
* reorganize train args
* allow resolve_data_config to be used with dict args, not just argparse
* stop incrementing epoch before save, more consistent naming vs csv, etc
* update resume and start epoch handling to match above
* stop auto-incrementing epoch in scheduler
5 years ago