Commit Graph

16 Commits (dbe7531aa3806154113c53bf22d189a72f70d11e)

Author           SHA1        Message  (Date)
Alexander Soare  b25ff96768  wip - pre-rebase  (3 years ago)
Alexander Soare  e051dce354  Make all models FX traceable  (3 years ago; sketch below)
Alexander Soare  0149ec30d7  wip - attempting to rebase  (3 years ago)
Alexander Soare  bc3d4eb403  wip - rebase  (3 years ago)
Ross Wightman    c02334d9fa  Add weights for regnetz_d and haloregnetz_c, update regnetz_c weights. Add commented PyTorch XLA code for halo attention  (3 years ago)
Ross Wightman    02daf2ab94  Add option to include relative pos embedding in the attention scaling as per references. See discussion #912  (3 years ago; sketch below)
Ross Wightman    e2b8d44ff0  Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs  (3 years ago)
Ross Wightman    007bc39323  Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios  (3 years ago)
Ross Wightman    5bd04714e4  Cleanup weight init for byob/byoanet and related  (3 years ago)
Ross Wightman    8642401e88  Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...  (3 years ago)
Ross Wightman    492c0a4e20  Update HaloAttn comment  (3 years ago)
Ross Wightman    3b9032ea48  Use Tensor.unfold().unfold() for HaloAttn, fast like as_strided but with more clarity  (3 years ago; sketch below)
Ross Wightman    8449ba210c  Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc).  (3 years ago)
Ross Wightman    0721559511  Improved (hopefully) init for SA/SA-like layers used in ByoaNets  (3 years ago; sketch below)
Ross Wightman    e15c3886ba  Default lambda r=7. Define '26t' stage 4/5 256x256 variants for all of bot/halo/lambda nets for experiment. Add resnet50t for exp. Fix a few comments.  (3 years ago; sketch below)
Ross Wightman    ce62f96d4d  ByoaNet with bottleneck transformer, lambda resnet, and halo net experiments  (3 years ago)
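
Notes on the commits marked "sketch below", in list order.

e051dce354 makes all models traceable with torch.fx. A minimal sketch of the traceability check, using a stand-in module rather than a timm model:

```python
import torch
from torch import fx, nn

# Stand-in module; with timm installed you would trace a created model instead.
model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.AdaptiveAvgPool2d(1))

# symbolic_trace records the forward pass as a graph; it fails on
# data-dependent control flow, so a model is "FX traceable" exactly
# when this call succeeds and the graph reproduces the forward pass.
graph_module = fx.symbolic_trace(model)
out = graph_module(torch.randn(1, 3, 32, 32))
print(graph_module.graph)
```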
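
02daf2ab94 concerns whether the relative position logits share the 1/sqrt(d) scaling applied to the content logits. A minimal sketch of the two conventions; `rel_emb` is a simplified stand-in for a relative position table, not the timm API:

```python
import torch

B, N, d = 2, 16, 64
q = torch.randn(B, N, d)
k = torch.randn(B, N, d)
rel_emb = torch.randn(N, d)  # simplified stand-in for a rel pos table
scale = d ** -0.5

# Default: only the content logits are scaled.
attn = (q * scale) @ k.transpose(-1, -2) + q @ rel_emb.transpose(-1, -2)

# Option: derive both terms from the scaled query, so the relative pos
# embedding is included in the attention scaling as per the references.
qs = q * scale
attn_opt = qs @ k.transpose(-1, -2) + qs @ rel_emb.transpose(-1, -2)

attn = attn.softmax(dim=-1)
attn_opt = attn_opt.softmax(dim=-1)
```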
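
3b9032ea48 swaps as_strided for two chained Tensor.unfold calls in HaloAttn; both produce overlapping strided views of the key/value map, but unfold makes the window size and stride explicit. A minimal sketch with assumed block and halo sizes:

```python
import torch

B, C, H, W = 2, 32, 16, 16
block, halo = 4, 2
x = torch.randn(B, C, H, W)

# Pad so each block sees `halo` pixels of context on every side.
kv = torch.nn.functional.pad(x, (halo, halo, halo, halo))

# Unfold twice, once over H and once over W. Each window spans
# block + 2 * halo pixels and strides by `block`, so neighbouring
# windows overlap, just like the old as_strided view.
win = block + 2 * halo
kv = kv.unfold(2, win, block).unfold(3, win, block)
print(kv.shape)  # (B, C, H // block, W // block, win, win)
```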
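
0721559511 doesn't spell out the init scheme in the message; one plausible reading is a fan-in scaled truncated normal on the attention projections. A hypothetical sketch, not the actual timm code:

```python
import torch.nn as nn

def init_qkv(qkv: nn.Conv2d) -> None:
    """Hypothetical fan-in scaled init for a 1x1 qkv projection conv."""
    fan_in = qkv.weight.shape[1]  # input channels of the projection
    nn.init.trunc_normal_(qkv.weight, std=fan_in ** -0.5)
    if qkv.bias is not None:
        nn.init.zeros_(qkv.bias)

init_qkv(nn.Conv2d(64, 192, kernel_size=1))
```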
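
e15c3886ba sets the default lambda layer r to 7. In the LambdaNetworks formulation, position lambdas come from a 3D convolution over an r x r spatial neighbourhood of the values, so r=7 gives each query a 7x7 local context. An illustrative sketch; the shapes are assumptions, not the timm implementation:

```python
import torch
import torch.nn as nn

B, H, W, dim_v, dim_k, r = 2, 16, 16, 32, 16, 7
v = torch.randn(B, 1, H, W, dim_v)  # values with a dummy channel dim

# Kernel (r, r, 1): convolve over H and W, leave the value dim pointwise.
conv_lambda = nn.Conv3d(1, dim_k, (r, r, 1), padding=(r // 2, r // 2, 0))
pos_lambdas = conv_lambda(v)
print(pos_lambdas.shape)  # (B, dim_k, H, W, dim_v)
```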