Commit Graph

1075 Commits (cf5ac2800cb6c6d151a5fce55d77614d864534d5)

Author SHA1 Message Date
Ross Wightman cf5ac2800c BotNet models were still off, remove weights for bad configs. Add good SE-HaloNet33-TS weights.
3 years ago
Ross Wightman 24720abe3b Merge branch 'master' into attn_update
3 years ago
Ross Wightman 1c9284c640 Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now
3 years ago
Ross Wightman f8a215cfe6 A few more crossvit tweaks, fix training w/ no_weight_decay names, add crop option for scaling, adjust default crop_pct for large img size to 1.0 for better results
3 years ago
Ross Wightman 7ab2491ab7 Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes
3 years ago
Ross Wightman 702982d8af Merge branch 'chunfuchen-feature/crossvit'
3 years ago
Ross Wightman f1808e0970 Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests
3 years ago
Ross Wightman a897e0ebcc Merge branch 'feature/crossvit' of https://github.com/chunfuchen/pytorch-image-models into chunfuchen-feature/crossvit
3 years ago
Ross Wightman 4027412757 Add resnet33ts weights, update resnext26ts baseline weights
3 years ago
Richard Chen 9fe5798bee fix bug for reset classifier and fix for validating the dimension
3 years ago
Richard Chen 3718c5a5bd fix loading pretrained model
3 years ago
Richard Chen bb50b69a57 fix for torch script
3 years ago
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related
3 years ago
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...
3 years ago
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test.
3 years ago
Ross Wightman 76881d207b Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4
3 years ago
Ross Wightman 54e90e82a5 Another attempt at sgd momentum test passing...
3 years ago
Ross Wightman 484e61648d Adding the attn series weights, tweaking model names, comments...
3 years ago
Ross Wightman 0639d9a591 Fix updated validation_batch_size fallback
3 years ago
Ross Wightman 5db057dca0 Fix misnamed arg, tweak other train script args for better defaults.
3 years ago
Ross Wightman fb94350896 Update training script and loader factory to allow use of scheduler updates, repeat augment, and bce loss
3 years ago
Ross Wightman f262137ff2 Add RepeatAugSampler as per DeiT RASampler impl, showing promise for current (distributed) training experiments.
3 years ago
Ross Wightman ba9c1108a1 Add a BCE loss impl that converts dense targets to sparse /w smoothing as an alternate to CE w/ smoothing. For training experiments.
3 years ago
Ross Wightman 29a37e23ee LR scheduler update:
3 years ago
Ross Wightman 492c0a4e20 Update HaloAttn comment
3 years ago
Richard Chen 7ab9d4555c add crossvit
3 years ago
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn, fast like as_strided but more clarity
3 years ago
Ross Wightman fc894c375c Another attempt at sgd momentum test passing...
3 years ago
Ross Wightman 78933122c9 Fix silly typo
3 years ago
Ross Wightman 2568ffc5ef Merge branch 'master' into attn_update
3 years ago
Ross Wightman 708d87a813 Fix ViT SAM weight compat as weights at URL changed to not use repr layer. Fix #825. Tweak optim test.
3 years ago
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
3 years ago
Ross Wightman a8b65695f1 Add resnet26ts and resnext26ts models for non-attn baselines
3 years ago
Ross Wightman a5a542f17d Fix typo
3 years ago
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments:
3 years ago
Ross Wightman acd6c687fd Merge branch 'yohann84L-fix_accuracy'
3 years ago
Ross Wightman d667351eac Tweak accuracy topk safety. Fix #807
3 years ago
Yohann Lereclus 35c9740826 Fix accuracy when topk > num_classes
3 years ago
Ross Wightman a16a753852 Add lamb/lars to optim init imports, remove stray comment
3 years ago
Ross Wightman c207e02782 MOAR optimizer changes. Woo!
3 years ago
Ross Wightman 42c1f0cf6c Fix lars tests
3 years ago
Ross Wightman a426511c95 More optimizer cleanup. Change all to no longer use .data. Improve (b)float16 use with adabelief. Add XLA compatible Lars.
3 years ago
Ross Wightman 9541f4963b One more scalar -> tensor fix for lamb optimizer
3 years ago
Ross Wightman 8f68193c91 Update lamp.py comment
3 years ago
Ross Wightman 4d284017b8 Merge pull request #813 from rwightman/opt_cleanup
3 years ago
Ross Wightman a6af48be64 add madgradw optimizer
3 years ago
Ross Wightman 55fb5eedf6 Remove experiment from lamb impl
3 years ago
Ross Wightman 8a9eca5157 A few optimizer comments, dead import, missing import
3 years ago
Ross Wightman 959eaff121 Add optimizer tests and update testing to pytorch 1.9
3 years ago
Ross Wightman ac469b50da Optimizer improvements, additions, cleanup
3 years ago