Ross Wightman
f35d6ea57b
Add multi-tensor (foreach) version of Lion in style of upcoming PyTorch 2.0 optimizers
2 years ago
Ross Wightman
709d5e0d9d
Add Lion optimizer
2 years ago
alec.tu
74d6afb4cd
Add Adan to __init__.py
2 years ago
Ross Wightman
927f031293
Major module / path restructure, timm.models.layers -> timm.layers, add _ prefix to all non model modules in timm.models
2 years ago
Ross Wightman
b1b024dfed
Scheduler update, add v2 factory method, support scheduling on updates instead of just epochs. Add LR to summary csv. Add lr_base scaling calculations to train script. Fix #1168
2 years ago
Ross Wightman
2a296412be
Add Adan optimizer
2 years ago
Ross Wightman
33e30f8c8b
Remove layer-decay print
2 years ago
Ross Wightman
0557c8257d
Fix bug introduced in non layer_decay weight_decay application. Remove debug print, fix arg desc.
3 years ago
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
...
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
3 years ago
Mi-Peng
cdcd0a92ca
fix lars
3 years ago
Ross Wightman
a16a753852
Add lamb/lars to optim init imports, remove stray comment
3 years ago
Ross Wightman
c207e02782
MOAR optimizer changes. Woo!
3 years ago
Ross Wightman
a426511c95
More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA-compatible Lars.
3 years ago
Ross Wightman
9541f4963b
One more scalar -> tensor fix for lamb optimizer
3 years ago
Ross Wightman
8f68193c91
Update lamb.py comment
3 years ago
Ross Wightman
a6af48be64
add madgradw optimizer
3 years ago
Ross Wightman
55fb5eedf6
Remove experiment from lamb impl
3 years ago
Ross Wightman
8a9eca5157
A few optimizer comments, dead import, missing import
3 years ago
Ross Wightman
ac469b50da
Optimizer improvements, additions, cleanup
...
* Add MADGRAD code
* Fix Lamb (non-fused variant) to work w/ PyTorch XLA
* Tweak optimizer factory args (lr/learning_rate and opt/optimizer_name), may break compat
* Use newer fn signatures for all add, addcdiv, addcmul in optimizers
* Use upcoming PyTorch native Nadam if it's available
* Cleanup lookahead opt
* Add optimizer tests
* Remove novograd.py impl as it was messy, keep nvnovograd
* Make AdamP/SGDP work in channels_last layout
* Add rectified adabelief mode (radabelief)
* Support a few more PyTorch optim, adamax, adagrad
3 years ago
Ross Wightman
1042b8a146
Add non fused LAMB optimizer option
3 years ago
Ross Wightman
cd3dc4979f
Fix adabelief imports, remove prints, preserve memory format is the default arg for zeros_like
4 years ago
juntang
addfc7c1ac
adabelief
4 years ago
Ross Wightman
37c71a5609
Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch.
4 years ago
Ross Wightman
288682796f
Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7
4 years ago
Ross Wightman
0e16d4e9fb
Add benchmark.py script, and update optimizer factory to be more friendly to use outside of argparse interface.
4 years ago
Jasha
7c56c718f3
Configure create_optimizer with args.opt_args
...
Closes #301
4 years ago
Ross Wightman
30ab4a1494
Fix issue in optim factory with sgd / eps flag. Bump version to 0.3.1
4 years ago
Ross Wightman
f944242cb0
Fix #262, num_classes arg mixup. Make vision_transformers a bit closer to other models wrt get/reset classifier/forward_features. Fix torchscript for ViT.
4 years ago
Ross Wightman
477a78ed81
Fix optimizer factory regression for optimizers like sgd/momentum that don't have an eps arg
4 years ago
Ross Wightman
a4d8fea61e
Add model based wd skip support. Improve cross version compat of optimizer factory. Fix #247
4 years ago
Ross Wightman
80078c47bb
Add Adafactor and Adahessian optimizers, cleanup optimizer arg passing, add gradient clipping support.
4 years ago
Ross Wightman
7995295968
Merge branch 'logger' into features. Change 'logger' to '_logger'.
4 years ago
Ross Wightman
6c17d57a2c
Fix some attributions, add copyrights to some file docstrings
4 years ago
Sangdoo Yun
e93e571f7a
Add `adamp` and `sgdp` optimizers.
...
Update requirements.txt
Update optim_factory.py
Add `adamp` optimizer
Update __init__.py
copy files of adamp & sgdp
Create adamp.py
Update __init__.py
Create sgdp.py
Update optim_factory.py
Update optim_factory.py
Update requirements.txt
Update adamp.py
Update sgdp.py
Update sgdp.py
Update adamp.py
4 years ago
Ross Wightman
e6f24e5578
Add 'momentum' optimizer (SGD w/o nesterov) for stable EfficientDet training defaults
5 years ago
Ross Wightman
64966f61f7
Add Nvidia's NovoGrad impl from Jasper (cleaner/faster than current) and Apex Fused optimizers
5 years ago
Ross Wightman
ba3c97c3ad
Some Lookahead cleanup and fixes
5 years ago
Ross Wightman
fac58f609a
Add RAdam, NovoGrad, Lookahead, and AdamW optimizers, a few ResNet tweaks and scheduler factory tweak.
...
* Add some of the trendy new optimizers. Decent results but not clearly better than the standards.
* Can create a None scheduler for constant LR
* ResNet defaults to zero_init of last BN in residual
* add resnet50d config
5 years ago
Ross Wightman
aa4354f466
Big re-org, working towards making pip/module as 'timm'
6 years ago