# pytorch-image-models/timm/models/layers

Latest commit: Ross Wightman, `ba2ca4b464`, 3 years ago —
"One codepath for stdconv, switch layernorm to batchnorm so gain included. Tweak epsilon values for nfnet, resnetv2, vit hybrid."
| File | Last commit | Age |
|------|-------------|-----|
| `__init__.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 3 years ago |
| `activations.py` | Fix inplace arg compat for GELU and PreLU via activation factory | 4 years ago |
| `activations_jit.py` | | |
| `activations_me.py` | Merge pull request #282 from tigert1998/patch-1 | 4 years ago |
| `adaptive_avgmax_pool.py` | Add 'fast' global pool option, remove redundant SEModule from tresnet, normal one is now 'fast' | 4 years ago |
| `blur_pool.py` | Remove filter hack from BlurPool w/ non-persistent buffer. Use BlurPool2d instead of AntiAliasing.. for TResNet. Breaks PyTorch < 1.6. | 4 years ago |
| `bottleneck_attn.py` | Improved (hopefully) init for SA/SA-like layers used in ByoaNets | 4 years ago |
| `cbam.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 3 years ago |
| `classifier.py` | ResNetV2 pre-act and non-preact model, w/ BiT pretrained weights and support for ViT R50 model. Tweaks for in21k num_classes passing. More to do... tests failing. | 4 years ago |
| `cond_conv2d.py` | Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. | 4 years ago |
| `config.py` | | |
| `conv2d_same.py` | | |
| `conv_bn_act.py` | Fix regression in models with 1001 class pretrained weights. Improve batchnorm arg and BatchNormAct layer handling in several models. | 4 years ago |
| `create_act.py` | Fix #661, move hardswish out of default args for LeViT. Enable native torch support for hardswish, hardsigmoid, mish if present. | 3 years ago |
| `create_attn.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 3 years ago |
| `create_conv2d.py` | Use in_channels for depthwise groups, allows using `out_channels=N * in_channels` (does not impact existing models). Fix #354. | 4 years ago |
| `create_norm_act.py` | Fix regression in models with 1001 class pretrained weights. Improve batchnorm arg and BatchNormAct layer handling in several models. | 4 years ago |
| `drop.py` | Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. | 4 years ago |
| `eca.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 3 years ago |
| `evo_norm.py` | Fix a silly bug in Sample version of EvoNorm missing x* part of swish, update EvoNormBatch to accumulated unbiased variance. | 4 years ago |
| `gather_excite.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 3 years ago |
| `global_context.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 3 years ago |
| `halo_attn.py` | Improved (hopefully) init for SA/SA-like layers used in ByoaNets | 4 years ago |
| `helpers.py` | Throw in some FBNetV3 code I had lying around, some refactoring of SE reduction channel calcs for all EffNet archs. | 3 years ago |
| `inplace_abn.py` | Update README, fix iabn pip version print. | 4 years ago |
| `involution.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 3 years ago |
| `lambda_layer.py` | Improved (hopefully) init for SA/SA-like layers used in ByoaNets | 4 years ago |
| `linear.py` | A few more changes for 0.3.2 maint release. Linear layer change for mobilenetv3 and inception_v3, support no bias for linear wrapper. | 4 years ago |
| `median_pool.py` | Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. | 4 years ago |
| `mixed_conv2d.py` | Use in_channels for depthwise groups, allows using `out_channels=N * in_channels` (does not impact existing models). Fix #354. | 4 years ago |
| `mlp.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 3 years ago |
| `non_local_attn.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 3 years ago |
| `norm.py` | Add Gather-Excite and Global Context attn modules. Refactor existing SE-like attn for consistency and refactor byob/byoanet for less redundancy. | 3 years ago |
| `norm_act.py` | Fix regression in models with 1001 class pretrained weights. Improve batchnorm arg and BatchNormAct layer handling in several models. | 4 years ago |
| `padding.py` | | |
| `patch_embed.py` | Add levit, levit_c, and visformer model defs. Largely untested and not finished cleanup. | 4 years ago |
| `pool2d_same.py` | Initial Vision Transformer impl w/ patch and hybrid variants. Refactor tuple helpers. | 4 years ago |
| `selective_kernel.py` | Remove min channels for SelectiveKernel, divisor should cover cases well enough. | 3 years ago |
| `separable_conv.py` | Fix regression in models with 1001 class pretrained weights. Improve batchnorm arg and BatchNormAct layer handling in several models. | 4 years ago |
| `space_to_depth.py` | | |
| `split_attn.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 3 years ago |
| `split_batchnorm.py` | | |
| `squeeze_excite.py` | Add non-local and BAT attention. Merge attn and self-attn factories into one. Add attention references to README. Add mlp 'mode' to ECA. | 3 years ago |
| `std_conv.py` | One codepath for stdconv, switch layernorm to batchnorm so gain included. Tweak epsilon values for nfnet, resnetv2, vit hybrid. | 3 years ago |
| `swin_attn.py` | Improved (hopefully) init for SA/SA-like layers used in ByoaNets | 4 years ago |
| `test_time_pool.py` | Add new weights for ecaresnet26t/50t/269d models. Remove distinction between 't' and 'tn' (tiered models), tn is now t. Add test time img size spec to default cfg. | 4 years ago |
| `weight_init.py` | Cleanup experimental vit weight init a bit | 4 years ago |
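The `create_conv2d.py` / `mixed_conv2d.py` commit mentions using `in_channels` for depthwise groups so that `out_channels = N * in_channels` works. A minimal sketch of that convention with plain `torch.nn.Conv2d` (not the actual timm factory API; the values here are illustrative):

```python
import torch
import torch.nn as nn

# Depthwise conv with a channel multiplier: groups=in_channels means each
# input channel gets its own set of filters, so out_channels can be any
# multiple N of in_channels.
in_channels, multiplier = 8, 3
conv = nn.Conv2d(
    in_channels,
    out_channels=multiplier * in_channels,  # N * in_channels
    kernel_size=3,
    padding=1,
    groups=in_channels,  # one group per input channel -> depthwise
)
x = torch.randn(1, in_channels, 16, 16)
y = conv(x)
print(y.shape)  # torch.Size([1, 24, 16, 16])
```

Grouping by `in_channels` (rather than by `out_channels`) is what permits the multiplier; PyTorch only requires that both channel counts be divisible by `groups`.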
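The `std_conv.py` commit ("One codepath for stdconv ...") refers to weight-standardized convolutions, used by NFNet/ResNetV2-style models. A minimal sketch of the core idea, assuming a simplified layer (`WSConv2d` is a hypothetical name; the real timm layers also apply a learnable gain and model-specific epsilon values, which is what the commit tweaks):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    """Sketch of a weight-standardized conv: normalize each output filter's
    weights to zero mean / unit variance before convolving. Simplified
    relative to timm's std_conv.py (no gain, fixed eps)."""

    def __init__(self, *args, eps=1e-6, **kwargs):
        super().__init__(*args, **kwargs)
        self.eps = eps

    def forward(self, x):
        w = self.weight
        # Standardize over (in_ch, kH, kW) per output filter.
        mean = w.mean(dim=(1, 2, 3), keepdim=True)
        var = w.var(dim=(1, 2, 3), keepdim=True, unbiased=False)
        w = (w - mean) / torch.sqrt(var + self.eps)
        return F.conv2d(x, w, self.bias, self.stride, self.padding,
                        self.dilation, self.groups)

x = torch.randn(2, 3, 8, 8)
y = WSConv2d(3, 8, kernel_size=3, padding=1)(x)
```

Because the standardization happens in `forward`, the stored weights stay unconstrained and gradients flow through the normalization, which is what lets these models train without batch norm.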
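The `blur_pool.py` commit notes removing a "filter hack" in favor of a non-persistent buffer, which is why it breaks PyTorch < 1.6 (the `persistent=` flag on `register_buffer` was added in 1.6). A minimal sketch of the anti-aliased downsampling idea, assuming a fixed 3x3 binomial filter (simplified relative to timm's `BlurPool2d`):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BlurPool2d(nn.Module):
    """Sketch of anti-aliased downsampling: low-pass filter with a fixed
    binomial kernel, then stride. persistent=False keeps the fixed filter
    out of the state dict (requires PyTorch >= 1.6)."""

    def __init__(self, channels, stride=2):
        super().__init__()
        self.channels = channels
        self.stride = stride
        coeffs = torch.tensor([1., 2., 1.])           # binomial row
        kernel = coeffs[:, None] * coeffs[None, :]    # 3x3 outer product
        kernel = kernel / kernel.sum()                # normalize to sum 1
        kernel = kernel[None, None].repeat(channels, 1, 1, 1)
        self.register_buffer('filt', kernel, persistent=False)

    def forward(self, x):
        x = F.pad(x, (1, 1, 1, 1), mode='reflect')
        # Depthwise conv so each channel is blurred independently.
        return F.conv2d(x, self.filt, stride=self.stride,
                        groups=self.channels)

bp = BlurPool2d(4)
y = bp(torch.randn(1, 4, 8, 8))  # halves spatial dims
```

Because the filter is deterministic, keeping it out of the state dict avoids checkpoint bloat and the shape mismatches the old "filter hack" worked around.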