Ross Wightman
5f12de4875
Add initial AttentionPool2d that's being trialed. Fix a comment; still trying to improve reliability of the SGD test.
3 years ago
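The AttentionPool2d being trialed replaces plain average pooling with attention over the flattened feature map. A minimal sketch of that idea, assuming a mean-token query and PyTorch's stock nn.MultiheadAttention; the class name is hypothetical and the timm module differs in details such as positional embeddings:

```python
import torch
import torch.nn as nn

class SimpleAttentionPool2d(nn.Module):
    """Pool a (B, C, H, W) map by letting a mean-token query attend
    over all spatial tokens. Illustrative sketch only."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        query = tokens.mean(dim=1, keepdim=True)   # (B, 1, C)
        pooled, _ = self.attn(query, tokens, tokens)
        return pooled.squeeze(1)                   # (B, C)
```

For example, `SimpleAttentionPool2d(256)(torch.randn(2, 256, 7, 7))` yields a (2, 256) pooled embedding.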
Ross Wightman
76881d207b
Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stages 2, 3, and 4
3 years ago
Ross Wightman
54e90e82a5
Another attempt at getting the SGD momentum test to pass...
3 years ago
Ross Wightman
484e61648d
Adding the attn series weights, tweaking model names, comments...
3 years ago
Ross Wightman
0639d9a591
Fix updated validation_batch_size fallback
3 years ago
Ross Wightman
5db057dca0
Fix misnamed arg, tweak other train script args for better defaults.
3 years ago
Ross Wightman
fb94350896
Update training script and loader factory to allow use of scheduler updates, repeat augment, and bce loss
3 years ago
Ross Wightman
f262137ff2
Add RepeatAugSampler as per DeiT RASampler impl, showing promise for current (distributed) training experiments.
3 years ago
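The repeated-augmentation idea: each sampled index appears several times per epoch, so one batch holds multiple differently augmented views of the same image. A single-process sketch under that assumption (hypothetical class name; the actual RepeatAugSampler also shards indices across distributed ranks):

```python
import torch
from torch.utils.data import Sampler

class RepeatAugSamplerSketch(Sampler):
    """Repeat each selected index num_repeats times, then truncate to
    one epoch's worth of samples. Simplified from the DeiT RASampler idea."""
    def __init__(self, num_samples, num_repeats=3, shuffle=True):
        self.num_samples = num_samples
        self.num_repeats = num_repeats
        self.shuffle = shuffle

    def __iter__(self):
        order = torch.randperm(self.num_samples) if self.shuffle else torch.arange(self.num_samples)
        indices = order.repeat_interleave(self.num_repeats)[: self.num_samples]
        return iter(indices.tolist())

    def __len__(self):
        return self.num_samples
```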
Ross Wightman
ba9c1108a1
Add a BCE loss impl that converts dense targets to sparse w/ smoothing, as an alternative to CE w/ smoothing. For training experiments.
3 years ago
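The gist of that loss: expand index targets into smoothed one-hot vectors, then score them with BCE-with-logits instead of cross-entropy. A rough sketch of the idea, not the exact timm BinaryCrossEntropy implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BCEWithSmoothing(nn.Module):
    """Turn (B,) index targets into smoothed one-hot vectors and apply
    binary cross-entropy with logits. Illustrative sketch only."""
    def __init__(self, smoothing=0.1):
        super().__init__()
        self.smoothing = smoothing

    def forward(self, logits, target):
        num_classes = logits.shape[-1]
        off = self.smoothing / num_classes
        on = 1.0 - self.smoothing + off
        # fill with the off value, scatter the on value at the target class
        dense = torch.full_like(logits, off).scatter_(1, target.unsqueeze(1), on)
        return F.binary_cross_entropy_with_logits(logits, dense)
```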
Ross Wightman
29a37e23ee
LR scheduler update:
...
* add polynomial decay 'poly'
* clean up cycle-specific args for cosine, poly, and tanh scheds: t_mul -> cycle_mul, decay -> cycle_decay; default cycle_limit to 1 in each
* add k-decay for cosine and poly sched as per https://arxiv.org/abs/2004.05909 (sketched below)
* change default tanh ub/lb to push inflection to later epochs
3 years ago
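For the poly schedule with k-decay, a back-of-envelope sketch of the decay rule; illustrative only, since the timm PolyLRScheduler adds warmup, the cycle_mul/cycle_decay/cycle_limit handling above, and noise:

```python
def poly_lr(t, t_total, lr_base, power=1.0, k=1.0, lr_min=0.0):
    """Polynomial decay with k-decay (arxiv.org/abs/2004.05909):
    raising t/T to the k-th power shifts when most of the decay happens."""
    return lr_min + (lr_base - lr_min) * (1.0 - t**k / t_total**k) ** power
```

With k=1 and power=1 this is a plain linear ramp to lr_min; larger k pushes the bulk of the decay toward the end of training.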
nateraw
28d2841acf
💄 apply isort
3 years ago
Ross Wightman
492c0a4e20
Update HaloAttn comment
3 years ago
nateraw
e72c989973
✨ add ability to push to hf hub
3 years ago
Richard Chen
7ab9d4555c
add crossvit
3 years ago
Ross Wightman
3b9032ea48
Use Tensor.unfold().unfold() for HaloAttn; about as fast as as_strided but clearer
3 years ago
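What unfold().unfold() buys here: overlapping (block + halo) windows as strided views, with readable size/stride arguments instead of a hand-built as_strided call. A toy illustration with made-up block and halo sizes:

```python
import torch
import torch.nn.functional as F

x = torch.randn(2, 32, 16, 16)        # (B, C, H, W) feature map
block, halo = 4, 1
win = block + 2 * halo                # each window sees its halo neighbors
xp = F.pad(x, (halo, halo, halo, halo))
# unfold H then W: one window per block, stepping by the block size
windows = xp.unfold(2, win, block).unfold(3, win, block)
print(windows.shape)                  # torch.Size([2, 32, 4, 4, 6, 6])
```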
Ross Wightman
fc894c375c
Another attempt at getting the SGD momentum test to pass...
3 years ago
Ross Wightman
78933122c9
Fix silly typo
3 years ago
Ross Wightman
2568ffc5ef
Merge branch 'master' into attn_update
3 years ago
Ross Wightman
708d87a813
Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test.
3 years ago
Ross Wightman
8449ba210c
Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
3 years ago
Ross Wightman
a8b65695f1
Add resnet26ts and resnext26ts models for non-attn baselines
3 years ago
Ross Wightman
a5a542f17d
Fix typo
3 years ago
Ross Wightman
925e102982
Update attention / self-attn based models from a series of experiments:
...
* remove dud attention; involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
3 years ago
Ross Wightman
acd6c687fd
Merge branch 'yohann84L-fix_accuracy'
3 years ago
Ross Wightman
d667351eac
Tweak accuracy topk safety. Fix #807
3 years ago
Yohann Lereclus
35c9740826
Fix accuracy when topk > num_classes
3 years ago
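The shape of the fix, simplified from the timm utility: clamp k to the number of classes before calling torch.topk so requesting topk > num_classes can't raise:

```python
import torch

def accuracy(output, target, topk=(1,)):
    """Top-k accuracy with k clamped to the class count."""
    maxk = min(max(topk), output.size(1))   # safety: k cannot exceed classes
    _, pred = output.topk(maxk, dim=1, largest=True, sorted=True)
    correct = pred.t().eq(target.view(1, -1))   # (maxk, B) hit mask
    return [correct[:min(k, maxk)].float().sum() * 100.0 / target.size(0)
            for k in topk]
```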
Ross Wightman
a16a753852
Add lamb/lars to optim init imports, remove stray comment
3 years ago
Ross Wightman
c207e02782
MOAR optimizer changes. Woo!
3 years ago
Ross Wightman
42c1f0cf6c
Fix lars tests
3 years ago
Ross Wightman
a426511c95
More optimizer cleanup. Change all optimizers to no longer use .data. Improve (b)float16 use with adabelief. Add XLA-compatible Lars.
3 years ago
Ross Wightman
9541f4963b
One more scalar -> tensor fix for lamb optimizer
3 years ago
Ross Wightman
8f68193c91
Update lamb.py comment
3 years ago
Ross Wightman
4d284017b8
Merge pull request #813 from rwightman/opt_cleanup
...
Optimizer cleanup and additions
3 years ago
Ross Wightman
a6af48be64
add madgradw optimizer
3 years ago
Ross Wightman
55fb5eedf6
Remove experiment from lamb impl
3 years ago
Ross Wightman
8a9eca5157
A few optimizer comments, dead import, missing import
3 years ago
Ross Wightman
959eaff121
Add optimizer tests and update testing to pytorch 1.9
3 years ago
Ross Wightman
ac469b50da
Optimizer improvements, additions, cleanup
...
* Add MADGRAD code
* Fix Lamb (non-fused variant) to work w/ PyTorch XLA
* Tweak optimizer factory args (lr/learning_rate and opt/optimizer_name), may break compat
* Use newer fn signatures for all add, addcdiv, addcmul in optimizers (see the sketch after this list)
* Use upcoming PyTorch native Nadam if it's available
* Cleanup lookahead opt
* Add optimizer tests
* Remove novograd.py impl as it was messy, keep nvnovograd
* Make AdamP/SGDP work in channels_last layout
* Add rectified adabelief mode (radabelief)
* Support a few more PyTorch optim, adamax, adagrad
3 years ago
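Two of the conventions from that list in miniature: the newer keyword signatures for add_/addcmul_/addcdiv_, and in-place parameter updates under no_grad() rather than through .data. A toy Adam-style step, not code from any timm optimizer:

```python
import torch

p = torch.nn.Parameter(torch.randn(8))
grad = torch.randn(8)                   # stand-in for p.grad
exp_avg, exp_avg_sq = torch.zeros_like(p), torch.zeros_like(p)
beta1, beta2, lr, eps = 0.9, 0.999, 1e-3, 1e-8

with torch.no_grad():                   # update in place, no p.data access
    exp_avg.mul_(beta1).add_(grad, alpha=1 - beta1)          # not .add_(1 - beta1, grad)
    exp_avg_sq.mul_(beta2).addcmul_(grad, grad, value=1 - beta2)
    denom = exp_avg_sq.sqrt().add_(eps)
    p.addcdiv_(exp_avg, denom, value=-lr)
```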
Ross Wightman
368211d19a
Merge pull request #805 from Separius/patch-1
...
Remove duplicate code in create_scheduler
3 years ago
Sepehr Sameni
abf3e044bb
Update scheduler_factory.py
...
remove duplicate code from create_scheduler()
3 years ago
Ross Wightman
3cdaf5ed56
Add `mmax` config key to auto_augment for increasing the upper bound of RandAugment magnitude beyond 10. Make the AugMix uniform-sampling default not override the config setting.
3 years ago
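In practice the new key rides along in the auto_augment config string; the exact values below are made up for illustration:

```python
# RandAugment config string: magnitude 15, magnitude-std 0.5, and the new
# `mmax` key lifting the magnitude ceiling from the usual 10 up to 20.
aa = 'rand-m15-mstd0.5-mmax20'
```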
Ross Wightman
1042b8a146
Add non-fused LAMB optimizer option
3 years ago
Ross Wightman
01cb46a9a5
Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode.
3 years ago
Ross Wightman
bd56946676
Update README.md
3 years ago
Ross Wightman
d3f7440650
Add EfficientNetV2 XL model defs
3 years ago
Ross Wightman
ef1e2e12be
Attempt to fix xcit test failures on github runner by filtering out the largest models
3 years ago
Ross Wightman
72b227dcf5
Merge pull request #750 from drjinying/master
...
Specify "interpolation" mode in vision_transformer's resize_pos_embed
3 years ago
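The merged change passes an explicit interpolation mode when rescaling the position grid. A simplified sketch of that resize; the real resize_pos_embed also splits off class-token embeddings before interpolating:

```python
import torch
import torch.nn.functional as F

posemb = torch.randn(1, 14 * 14, 768)                 # old 14x14 grid
old, new = 14, 16
grid = posemb.reshape(1, old, old, -1).permute(0, 3, 1, 2)   # (1, C, 14, 14)
grid = F.interpolate(grid, size=(new, new), mode='bicubic', align_corners=False)
posemb = grid.permute(0, 2, 3, 1).reshape(1, new * new, -1)  # (1, 256, C)
```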
Ross Wightman
2907c1f967
Merge pull request #746 from samarth4149/master
...
Adding a Multi Step LR Scheduler
3 years ago
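The scheduler added by that PR steps the LR down at fixed milestone epochs. A minimal sketch of the rule (hypothetical helper; timm's MultiStepLRScheduler also handles warmup and noise):

```python
import bisect

def multistep_lr(epoch, lr_base, milestones=(30, 60, 90), decay_rate=0.1):
    """Multiply lr_base by decay_rate once per milestone already passed."""
    return lr_base * decay_rate ** bisect.bisect_right(list(milestones), epoch)
```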
Ross Wightman
5aca7c01e5
Update README.md
3 years ago
Ross Wightman
763329f23f
Merge branch 'alexander-soare-xcit'
3 years ago