Commit Graph

142 Commits (58ffa2bfb715e6807d0f110b04cfa524922c534d)

Author SHA1 Message Date
Ross Wightman 78912b6375 Updated EvoNorm implementations with some experimentation. Add FilterResponseNorm. Updated RegnetZ and ResNetV2 model defs for trials. (3 years ago)
Ross Wightman 480c676ffa Fix FX breaking assert in evonorm (3 years ago)
Ross Wightman 93cc08fdc5 Make evonorm variables 1d to match other PyTorch norm layers, will break weight compat for any existing use (likely minimal, easy to fix). (3 years ago)
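
The 1d change above breaks checkpoints that stored EvoNorm variables as (1, C, 1, 1). A minimal remapping sketch along the lines of the "easy to fix" note; the key/shape heuristic below is an assumption, not code from the repository.

```python
import torch

def flatten_evonorm_params(state_dict):
    # Heuristic sketch: EvoNorm affine/v parameters saved as (1, C, 1, 1)
    # are flattened to (C,) so they load into the updated 1d layers.
    # The 'evo' substring filter is an assumption about module naming.
    for key, value in state_dict.items():
        if 'evo' in key and value.ndim == 4 and value.shape[0] == 1 and value.shape[2:] == (1, 1):
            state_dict[key] = value.reshape(-1)
    return state_dict
```
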
Ross Wightman af607b75cc Prep a set of ResNetV2 models with GroupNorm, EvoNormB0, EvoNormS0 for BN free model experiments on TPU and IPU (3 years ago)
Ross Wightman c976a410d9 Add ResNet-50 w/ GN (resnet50_gn) and SEBotNet-33-TS (sebotnet33ts_256) model defs and weights. Update halonet50ts weights w/ slightly better variant in1k val, more robust to test sets. (3 years ago)
Alexander Soare b25ff96768 wip - pre-rebase (3 years ago)
Alexander Soare e051dce354 Make all models FX traceable (3 years ago)
Alexander Soare 0149ec30d7 wip - attempting to rebase (3 years ago)
Alexander Soare bc3d4eb403 wip - rebase (3 years ago)
Ross Wightman 2ddef942b9 Better fix for #954 that doesn't break torchscript, pull torch._assert into timm namespace when it exists (3 years ago)
Ross Wightman 4f0f9cb348 Fix #954 by bringing traceable _assert into timm to allow compat w/ PyTorch < 1.8 (3 years ago)
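
The two #954 fixes above pull a traceable assert into the timm namespace. A minimal sketch of that kind of compatibility shim (not necessarily the exact code in the repo):

```python
import torch

try:
    # PyTorch >= 1.8 provides torch._assert, which FX tracing understands.
    from torch import _assert
except ImportError:
    # Older PyTorch: fall back to a plain function that traces as a leaf call.
    def _assert(condition, message):
        assert condition, message
```
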
Ross Wightman b745d30a3e Fix formatting of last commit (3 years ago)
Ross Wightman 3478f1d7f1 Traceability fix for vit models for some experiments (3 years ago)
Ross Wightman f658a72e72 Cleanup re-use of Dropout modules in Mlp modules after some twitter feedback :p (3 years ago)
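
The Dropout cleanup above amounts to giving each drop site its own module instead of re-using one instance. A rough sketch of the pattern (argument names are illustrative, not the exact timm code):

```python
import torch.nn as nn

class Mlp(nn.Module):
    # Sketch of an MLP block with two distinct Dropout modules so each
    # site can be configured and inspected independently.
    def __init__(self, in_features, hidden_features=None, out_features=None,
                 act_layer=nn.GELU, drop=0.):
        super().__init__()
        out_features = out_features or in_features
        hidden_features = hidden_features or in_features
        self.fc1 = nn.Linear(in_features, hidden_features)
        self.act = act_layer()
        self.drop1 = nn.Dropout(drop)
        self.fc2 = nn.Linear(hidden_features, out_features)
        self.drop2 = nn.Dropout(drop)

    def forward(self, x):
        x = self.drop1(self.act(self.fc1(x)))
        return self.drop2(self.fc2(x))
```
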
Ross Wightman c02334d9fa Add weights for regnetz_d and haloregnetz_c, update regnetz_c weights. Add commented PyTorch XLA code for halo attention (3 years ago)
Ross Wightman 02daf2ab94 Add option to include relative pos embedding in the attention scaling as per references. See discussion #912 (3 years ago)
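
The scaling option above changes where the attention scale is applied relative to the positional term. A hedged sketch of the two variants (tensor names and the scale_pos_embed flag are assumptions):

```python
import torch

def attn_logits(q, k, rel_pos, scale, scale_pos_embed=False):
    # q, k: (B, heads, N, dim); rel_pos: (B, heads, N, N) relative position term
    if scale_pos_embed:
        # scale applied to content + position together, as in some references
        return (q @ k.transpose(-1, -2) + rel_pos) * scale
    # default: scale the content logits only, then add the positional term
    return q @ k.transpose(-1, -2) * scale + rel_pos
```
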
Ross Wightman e2b8d44ff0 Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs (3 years ago)
Ross Wightman 007bc39323 Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios (3 years ago)
Ross Wightman b1c2e3eb92 Match rel_pos_indices attr rename in conv branch (3 years ago)
Ross Wightman b49630a138 Add relative pos embed option to LambdaLayer, fix last transpose/reshape. (3 years ago)
Ross Wightman b81e79aae9 Fix bottleneck attn transpose typo, hopefully these train better now... (3 years ago)
Ross Wightman 515121cca1 Use reshape instead of view in std_conv; view was causing issues with channels_last in recent PyTorch (3 years ago)
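
The reshape change above matters because a channels_last tensor is not contiguous in the default memory format, so view can fail where reshape falls back to a copy. A small illustration (shapes are arbitrary):

```python
import torch

x = torch.randn(2, 8, 4, 4).to(memory_format=torch.channels_last)
try:
    # view() needs stride-compatible flattening and fails on this layout
    x.view(2, -1)
except RuntimeError as err:
    print('view failed:', err)
# reshape() falls back to a copy when needed, so it always succeeds
y = x.reshape(2, -1)
print(y.shape)  # torch.Size([2, 128])
```
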
Ross Wightman 5bd04714e4 Cleanup weight init for byob/byoanet and related (3 years ago)
Ross Wightman 8642401e88 Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low... (3 years ago)
Ross Wightman 5f12de4875 Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test. (3 years ago)
Ross Wightman 492c0a4e20 Update HaloAttn comment (3 years ago)
Ross Wightman 3b9032ea48 Use Tensor.unfold().unfold() for HaloAttn, as fast as as_strided but with more clarity (3 years ago)
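
The unfold change above gathers the overlapping (block + 2 * halo) neighborhoods that HaloAttn attends over. A minimal sketch of the indexing pattern (sizes are illustrative, not the actual layer code):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 32, 16, 16)          # (B, C, H, W) feature map
block, halo = 4, 1
win = block + 2 * halo                   # 6x6 neighborhood per 4x4 block

x = F.pad(x, (halo, halo, halo, halo))   # pad H and W by the halo size
# Two chained unfolds slide a win x win window with stride `block` over H then W,
# yielding (B, C, num_blocks_h, num_blocks_w, win, win) without an explicit as_strided.
kv = x.unfold(2, win, block).unfold(3, win, block)
print(kv.shape)  # torch.Size([1, 32, 4, 4, 6, 6])
```
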
Ross Wightman 8449ba210c Improve performance of HaloAttn, change default dim calc. Some cleanup / fixes for byoanet. Rename resnet26ts to tfs to distinguish (extra fc). (3 years ago)
Ross Wightman 925e102982 Update attention / self-attn based models from a series of experiments: (3 years ago)
Ross Wightman 01cb46a9a5 Add gc_efficientnetv2_rw_t weights (global context instead of SE attn). Add TF XL weights even though the fine-tuned ones don't validate that well. Change default arg for GlobalContext to use scale (mul) mode. (3 years ago)
Ross Wightman 8165cacd82 Realized LayerNorm2d won't work in all cases as is, fixed. (3 years ago)
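
One common shape of a LayerNorm2d fix is to normalize over the channel dim of NCHW input by permuting to channels-last and back. A sketch of that pattern, not necessarily the exact repository code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LayerNorm2d(nn.LayerNorm):
    # Sketch: apply LayerNorm over the channel dim of an NCHW tensor by
    # moving channels last, normalizing, then moving them back.
    def forward(self, x):
        x = x.permute(0, 2, 3, 1)
        x = F.layer_norm(x, self.normalized_shape, self.weight, self.bias, self.eps)
        return x.permute(0, 3, 1, 2)
```
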
Ross Wightman b9cfb64412 Support npz custom load for vision transformer hybrid models. Add posembed rescale for npz load. (3 years ago)
Ross Wightman 8319e0c373 Add file docstring to std_conv.py (3 years ago)
Ross Wightman 4d96165989 Merge branch 'master' into cleanup_xla_model_fixes (3 years ago)
Ross Wightman 8880f696b6 Refactoring, cleanup, improved test coverage. (3 years ago)
Ross Wightman ba2ca4b464 One codepath for stdconv, switch layernorm to batchnorm so gain included. Tweak epsilon values for nfnet, resnetv2, vit hybrid. (3 years ago)
Ross Wightman b7a568f065 Fix torchscript issue in bat (3 years ago)
Ross Wightman 8e4ac3549f All ScaledStdConv and StdConv uses now default to F.layer_norm so that they work with PyTorch XLA. eps value tweaking is a WIP. (3 years ago)
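
Weight standardization expressed through F.layer_norm normalizes each output filter to zero mean and unit variance before the convolution, which avoids ops that were problematic under PyTorch XLA. A minimal sketch (argument handling simplified, not the exact timm layer):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StdConv2d(nn.Conv2d):
    # Sketch of a standardized conv: each output filter is normalized via
    # F.layer_norm over its (in, kh, kw) dims before F.conv2d is applied.
    def __init__(self, *args, eps=1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps

    def forward(self, x):
        weight = F.layer_norm(self.weight, self.weight.shape[1:], eps=self.eps)
        return F.conv2d(x, weight, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```
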
Ross Wightman bda8ab015a Remove min channels for SelectiveKernel, divisor should cover cases well enough. (3 years ago)
Ross Wightman a27f4aec4a Missed args for skresnext w/ refactoring. (3 years ago)
Ross Wightman 307a935b79 Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. (3 years ago)
Ross Wightman 8bf63b6c6c Able to use other attn layer in EfficientNet now. Create test ECA + GC B0 configs. Make ECA more configurable. (3 years ago)
Ross Wightman 9611458e19 Throw in some FBNetV3 code I had lying around, some refactoring of SE reduction channel calcs for all EffNet archs. (3 years ago)
Ross Wightman f615474be3 Fix broken test, repvgg block doesn't have attn_last attr. (4 years ago)
Ross Wightman 742c2d5247 Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. (4 years ago)
Ross Wightman 9c78de8c02 Fix #661, move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, mish if present. (4 years ago)
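
Enabling native torch support "if present" comes down to dispatching on what the installed PyTorch exposes. A sketch of that pattern for hardswish; the same idea applies to hardsigmoid and mish (the fallback formula is the standard definition, not necessarily timm's exact code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Prefer the native module when this PyTorch build provides it; otherwise fall
# back to a functional composition with the same semantics.
if hasattr(nn, 'Hardswish'):
    HardSwish = nn.Hardswish
else:
    class HardSwish(nn.Module):
        def __init__(self, inplace=False):
            super().__init__()
            self.inplace = inplace

        def forward(self, x):
            # hard_swish(x) = x * relu6(x + 3) / 6
            return x * F.relu6(x + 3.) / 6.
```
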
Ross Wightman f45de37690 Merge branch 'master' into levit_visformer_rednet (4 years ago)
Ross Wightman d5af752117 Add preliminary gMLP and ResMLP impl to Mlp-Mixer (4 years ago)
Ross Wightman 3bffc701f1 Merge branch 'master' into levit_visformer_rednet (4 years ago)
Ross Wightman ecc7552c5c Add levit, levit_c, and visformer model defs. Largely untested and not finished cleanup. (4 years ago)