Ross Wightman
a85df34993
Update lambda_resnet26rpt weights to 78.9, add better halonet26t weights at 79.1 with tweak to attention dim
3 years ago
Ross Wightman
b544ad4d3f
regnetz model default cfg tweaks
3 years ago
Ross Wightman
e2b8d44ff0
Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
...
* align interfaces of halo, bottleneck attn and lambda layer
* add qk_ratio to all of above, control q/k dim relative to output dim
* add experimental haloregnetz, and trionet (lambda + halo + bottle) models
3 years ago
Ross Wightman
da0d39bedd
Update default crop_pct for byoanet
3 years ago
Ross Wightman
64495505b7
Add updated lambda resnet26 and botnet26 checkpoints with fixes applied
3 years ago
Ross Wightman
007bc39323
Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios
3 years ago
Ross Wightman
b49630a138
Add relative pos embed option to LambdaLayer, fix last transpose/reshape.
3 years ago
Ross Wightman
0ca687f224
Make 'regnetz' model experiments closer to actual RegNetZ, bottleneck expansion, expand from in_chs, no shortcut on stride 2, tweak model sizes
3 years ago
Ross Wightman
cf5ac2800c
BotNet models were still off, remove weights for bad configs. Add good SE-HaloNet33-TS weights.
3 years ago
Ross Wightman
8642401e88
Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...
3 years ago
Ross Wightman
5f12de4875
Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test.
3 years ago
Ross Wightman
76881d207b
Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4
3 years ago
Ross Wightman
484e61648d
Adding the attn series weights, tweaking model names, comments...
3 years ago
Ross Wightman
8449ba210c
Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).
3 years ago
Ross Wightman
925e102982
Update attention / self-attn based models from a series of experiments:
...
* remove dud attention, involution + my swin attention adaptation don't seem worth keeping
* add or update several new 26/50 layer ResNe(X)t variants that were used in experiments
* remove models associated with dead-end or uninteresting experiment results
* weights coming soon...
3 years ago
Ross Wightman
742c2d5247
Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy.
4 years ago
Ross Wightman
9a3ae97311
Another set of byoanet models w/ ECA channel + SA + groups
4 years ago
Ross Wightman
165fb354b2
Add initial RedNet model / Involution layer impl for testing
4 years ago
Ross Wightman
3ba6b55cb2
More adjustments to ByoaNet models for further experiments.
4 years ago
Ross Wightman
0721559511
Improved (hopefully) init for SA/SA-like layers used in ByoaNets
4 years ago
Ross Wightman
9cc7dda6e5
Fixup byoanet configs to pass unit tests. Add swin_attn and swinnet26t model for testing.
4 years ago
Ross Wightman
e15c3886ba
Default lambda r=7. Define '26t' stage 4/5 256x256 variants for all of bot/halo/lambda nets for experiment. Add resnet50t for exp. Fix a few comments.
4 years ago
Ross Wightman
b3d7580df1
Update ByoaNet comments. Fix first stem feat chs for ByobNet.
4 years ago
Ross Wightman
16f7aa9f54
Add default_cfg options for min_input_size / fixed_input_size, queries in model registry, and use for testing self-attn models
4 years ago
Ross Wightman
ce62f96d4d
ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments
4 years ago