From e6f5617dcc8755b86006dd65000553f6f3e57e71 Mon Sep 17 00:00:00 2001 From: Ross Wightman Date: Sun, 14 Mar 2021 12:20:12 -0700 Subject: [PATCH] Fix a few typos and consistency issues in model docs --- docs/models/.templates/models/advprop.md | 4 +++- docs/models/.templates/models/csp-darknet.md | 2 +- docs/models/.templates/models/csp-resnet.md | 2 +- docs/models/.templates/models/csp-resnext.md | 2 +- docs/models/.templates/models/ecaresnet.md | 2 +- docs/models/.templates/models/ese-vovnet.md | 2 +- docs/models/.templates/models/gloun-inception-v3.md | 2 +- docs/models/.templates/models/gloun-resnet.md | 2 +- docs/models/.templates/models/gloun-resnext.md | 2 +- docs/models/.templates/models/gloun-senet.md | 2 +- docs/models/.templates/models/gloun-seresnext.md | 2 +- docs/models/.templates/models/gloun-xception.md | 2 +- docs/models/.templates/models/inception-resnet-v2.md | 2 +- docs/models/.templates/models/legacy-se-resnet.md | 2 +- docs/models/.templates/models/legacy-se-resnext.md | 2 +- docs/models/.templates/models/se-resnet.md | 2 +- docs/models/.templates/models/seresnext.md | 2 +- docs/models/.templates/models/skresnet.md | 2 +- docs/models/.templates/models/skresnext.md | 2 +- docs/models/advprop.md | 4 +++- docs/models/csp-darknet.md | 2 +- docs/models/csp-resnet.md | 2 +- docs/models/csp-resnext.md | 2 +- docs/models/ecaresnet.md | 2 +- docs/models/ese-vovnet.md | 2 +- docs/models/gloun-inception-v3.md | 2 +- docs/models/gloun-resnet.md | 2 +- docs/models/gloun-resnext.md | 2 +- docs/models/gloun-senet.md | 2 +- docs/models/gloun-seresnext.md | 2 +- docs/models/gloun-xception.md | 2 +- docs/models/inception-resnet-v2.md | 2 +- docs/models/legacy-se-resnet.md | 2 +- docs/models/legacy-se-resnext.md | 2 +- docs/models/se-resnet.md | 2 +- docs/models/seresnext.md | 2 +- docs/models/skresnet.md | 2 +- docs/models/skresnext.md | 2 +- 38 files changed, 42 insertions(+), 38 deletions(-) diff --git a/docs/models/.templates/models/advprop.md 
b/docs/models/.templates/models/advprop.md index 6dd3f9b4..c204d871 100644 --- a/docs/models/.templates/models/advprop.md +++ b/docs/models/.templates/models/advprop.md @@ -1,7 +1,9 @@ -# AdvProp +# AdvProp (EfficientNet) **AdvProp** is an adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to the method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. +The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu). + {% include 'code_snippets.md' %} ## How do I train this model? diff --git a/docs/models/.templates/models/csp-darknet.md b/docs/models/.templates/models/csp-darknet.md index 8819d966..b6ab42d1 100644 --- a/docs/models/.templates/models/csp-darknet.md +++ b/docs/models/.templates/models/csp-darknet.md @@ -1,4 +1,4 @@ -# CSP DarkNet +# CSP-DarkNet **CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. diff --git a/docs/models/.templates/models/csp-resnet.md b/docs/models/.templates/models/csp-resnet.md index 83eb6a6d..228faa0c 100644 --- a/docs/models/.templates/models/csp-resnet.md +++ b/docs/models/.templates/models/csp-resnet.md @@ -1,4 +1,4 @@ -# CSP ResNet +# CSP-ResNet **CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. 
The use of a split and merge strategy allows for more gradient flow through the network. diff --git a/docs/models/.templates/models/csp-resnext.md b/docs/models/.templates/models/csp-resnext.md index 1c0f3bf1..cea88183 100644 --- a/docs/models/.templates/models/csp-resnext.md +++ b/docs/models/.templates/models/csp-resnext.md @@ -1,4 +1,4 @@ -# CSP ResNeXt +# CSP-ResNeXt **CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. diff --git a/docs/models/.templates/models/ecaresnet.md b/docs/models/.templates/models/ecaresnet.md index 88ee3c8b..126aaacc 100644 --- a/docs/models/.templates/models/ecaresnet.md +++ b/docs/models/.templates/models/ecaresnet.md @@ -1,4 +1,4 @@ -# ECA ResNet +# ECA-ResNet An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) that reduces model complexity without dimensionality reduction. 
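Aside: the Efficient Channel Attention mechanism described above can be sketched in a few lines. This is a toy illustration under assumed simplifications (a fixed smoothing kernel standing in for the learned 1-D convolution), not the timm implementation:

```python
import math

def eca(channel_means, kernel=(0.25, 0.5, 0.25)):
    """Cross-channel 1-D convolution (zero-padded) followed by a sigmoid gate."""
    k, pad = len(kernel), len(kernel) // 2
    padded = [0.0] * pad + list(channel_means) + [0.0] * pad
    conv = [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(channel_means))]
    return [1.0 / (1.0 + math.exp(-c)) for c in conv]  # one gate per channel, in (0, 1)

# Each channel's gate depends only on its local neighbours, so there is
# no dimensionality-reducing bottleneck as in a standard SE block.
gates = eca([0.0, 1.0, -1.0, 2.0])
```

The key property mirrored here is that attention is computed directly across channels, with no squeeze to a lower-dimensional space and back.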
diff --git a/docs/models/.templates/models/ese-vovnet.md b/docs/models/.templates/models/ese-vovnet.md index 468866cf..5f942f00 100644 --- a/docs/models/.templates/models/ese-vovnet.md +++ b/docs/models/.templates/models/ese-vovnet.md @@ -1,4 +1,4 @@ -# ESE VoVNet +# ESE-VoVNet **VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once in the last feature map, which makes input size constant and enables enlarging new output channel. diff --git a/docs/models/.templates/models/gloun-inception-v3.md b/docs/models/.templates/models/gloun-inception-v3.md index dfd5f7b6..90e25b91 100644 --- a/docs/models/.templates/models/gloun-inception-v3.md +++ b/docs/models/.templates/models/gloun-inception-v3.md @@ -1,4 +1,4 @@ -# Gluon Inception v3 +# (Gluon) Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module). diff --git a/docs/models/.templates/models/gloun-resnet.md b/docs/models/.templates/models/gloun-resnet.md index d17eef3d..a66a658c 100644 --- a/docs/models/.templates/models/gloun-resnet.md +++ b/docs/models/.templates/models/gloun-resnet.md @@ -1,4 +1,4 @@ -# Glu on ResNet +# (Gluon) ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions.
Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form a network: e.g. a ResNet-50 has fifty layers using these blocks. diff --git a/docs/models/.templates/models/gloun-resnext.md b/docs/models/.templates/models/gloun-resnext.md index 76b46a5a..b41353f0 100644 --- a/docs/models/.templates/models/gloun-resnext.md +++ b/docs/models/.templates/models/gloun-resnext.md @@ -1,4 +1,4 @@ -# Gluon ResNeXt +# (Gluon) ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width. diff --git a/docs/models/.templates/models/gloun-senet.md b/docs/models/.templates/models/gloun-senet.md index afb0322e..281a782f 100644 --- a/docs/models/.templates/models/gloun-senet.md +++ b/docs/models/.templates/models/gloun-senet.md @@ -1,4 +1,4 @@ -# Summary +# (Gluon) SENet A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
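Aside: the squeeze-and-excitation recalibration named above can be sketched as follows. This is a toy illustration under an assumed simplification (the two FC layers of a real SE block are replaced by a bare sigmoid gate), not the SENet or timm code:

```python
import math

def se_recalibrate(feature_maps):
    """feature_maps: one flat list of activations per channel."""
    squeezed = [sum(ch) / len(ch) for ch in feature_maps]   # squeeze: global average pool
    gates = [1.0 / (1.0 + math.exp(-s)) for s in squeezed]  # excitation (FC layers omitted)
    return [[g * x for x in ch] for g, ch in zip(gates, feature_maps)]  # rescale channels

# Each channel is scaled by a gate derived from its own global statistics,
# which is the "dynamic channel-wise feature recalibration" in the text.
recalibrated = se_recalibrate([[1.0, 1.0], [4.0, 0.0]])
```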
diff --git a/docs/models/.templates/models/gloun-seresnext.md b/docs/models/.templates/models/gloun-seresnext.md index 7e98df58..d0f2de01 100644 --- a/docs/models/.templates/models/gloun-seresnext.md +++ b/docs/models/.templates/models/gloun-seresnext.md @@ -1,4 +1,4 @@ -# Summary +# (Gluon) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNext](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/.templates/models/gloun-xception.md b/docs/models/.templates/models/gloun-xception.md index 70aef0ed..9dfc773a 100644 --- a/docs/models/.templates/models/gloun-xception.md +++ b/docs/models/.templates/models/gloun-xception.md @@ -1,4 +1,4 @@ -# Summary +# (Gluon) Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers. diff --git a/docs/models/.templates/models/inception-resnet-v2.md b/docs/models/.templates/models/inception-resnet-v2.md index e5c7c3af..99e09a1d 100644 --- a/docs/models/.templates/models/inception-resnet-v2.md +++ b/docs/models/.templates/models/inception-resnet-v2.md @@ -1,4 +1,4 @@ -# Inception Resnet v2 +# Inception ResNet v2 **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture). 
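Aside: the residual connections that Inception-ResNet-v2 (and the ResNet docs in this patch) rely on reduce to one line. A toy illustration with a hypothetical inner function, not any model's actual code:

```python
def residual_block(x, fn):
    """y = x + F(x): the stacked layers only have to learn the residual F."""
    return [xi + fi for xi, fi in zip(x, fn(x))]

# fn is a stand-in for the block's convolutional layers.
out = residual_block([1.0, 2.0], fn=lambda v: [0.1 * xi for xi in v])
```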
diff --git a/docs/models/.templates/models/legacy-se-resnet.md b/docs/models/.templates/models/legacy-se-resnet.md index 1fa909a3..33f0c806 100644 --- a/docs/models/.templates/models/legacy-se-resnet.md +++ b/docs/models/.templates/models/legacy-se-resnet.md @@ -1,4 +1,4 @@ -# (Legacy) SE ResNet +# (Legacy) SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/.templates/models/legacy-se-resnext.md b/docs/models/.templates/models/legacy-se-resnext.md index 442d784d..fd610b59 100644 --- a/docs/models/.templates/models/legacy-se-resnext.md +++ b/docs/models/.templates/models/legacy-se-resnext.md @@ -1,4 +1,4 @@ -# (Legacy) SE ResNeXt +# (Legacy) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/.templates/models/se-resnet.md b/docs/models/.templates/models/se-resnet.md index fbf510ac..e1155492 100644 --- a/docs/models/.templates/models/se-resnet.md +++ b/docs/models/.templates/models/se-resnet.md @@ -1,4 +1,4 @@ -# SE ResNet +# SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. 
diff --git a/docs/models/.templates/models/seresnext.md b/docs/models/.templates/models/seresnext.md index e1772274..41be0451 100644 --- a/docs/models/.templates/models/seresnext.md +++ b/docs/models/.templates/models/seresnext.md @@ -1,4 +1,4 @@ -# SE ResNeXt +# SE-ResNeXt **SE ResNeXt** is a variant of a [ResNext](https://www.paperswithcode.com/method/resneXt) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/.templates/models/skresnet.md b/docs/models/.templates/models/skresnet.md index b0ef87aa..3df53b03 100644 --- a/docs/models/.templates/models/skresnet.md +++ b/docs/models/.templates/models/skresnet.md @@ -1,4 +1,4 @@ -# SK ResNet +# SK-ResNet **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner. diff --git a/docs/models/.templates/models/skresnext.md b/docs/models/.templates/models/skresnext.md index b9e9f225..06e98b06 100644 --- a/docs/models/.templates/models/skresnext.md +++ b/docs/models/.templates/models/skresnext.md @@ -1,4 +1,4 @@ -# SK ResNeXt +# SK-ResNeXt **SK ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. 
In general, all the large kernel convolutions in the original bottleneck blocks in ResNext are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner. diff --git a/docs/models/advprop.md b/docs/models/advprop.md index 8abac950..197b85c8 100644 --- a/docs/models/advprop.md +++ b/docs/models/advprop.md @@ -1,7 +1,9 @@ -# AdvProp +# AdvProp (EfficientNet) **AdvProp** is an adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to the method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples. +The weights from this model were ported from [Tensorflow/TPU](https://github.com/tensorflow/tpu). + ## How do I use this model on an image? To load a pretrained model: diff --git a/docs/models/csp-darknet.md b/docs/models/csp-darknet.md index 009c8556..2a2f72e9 100644 --- a/docs/models/csp-darknet.md +++ b/docs/models/csp-darknet.md @@ -1,4 +1,4 @@ -# CSP DarkNet +# CSP-DarkNet **CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. diff --git a/docs/models/csp-resnet.md b/docs/models/csp-resnet.md index c5eb78ee..d9c5a3e6 100644 --- a/docs/models/csp-resnet.md +++ b/docs/models/csp-resnet.md @@ -1,4 +1,4 @@ -# CSP ResNet +# CSP-ResNet **CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). 
The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. diff --git a/docs/models/csp-resnext.md b/docs/models/csp-resnext.md index c22efc53..4da11a00 100644 --- a/docs/models/csp-resnext.md +++ b/docs/models/csp-resnext.md @@ -1,4 +1,4 @@ -# CSP ResNeXt +# CSP-ResNeXt **CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. diff --git a/docs/models/ecaresnet.md b/docs/models/ecaresnet.md index 88b8c466..2bafb64a 100644 --- a/docs/models/ecaresnet.md +++ b/docs/models/ecaresnet.md @@ -1,4 +1,4 @@ -# ECA ResNet +# ECA-ResNet An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) that reduces model complexity without dimensionality reduction. diff --git a/docs/models/ese-vovnet.md b/docs/models/ese-vovnet.md index 51313445..028bdc89 100644 --- a/docs/models/ese-vovnet.md +++ b/docs/models/ese-vovnet.md @@ -1,4 +1,4 @@ -# ESE VoVNet +# ESE-VoVNet **VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once in the last feature map, which makes input size constant and enables enlarging new output channel. 
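Aside: the CSPNet split-and-merge strategy described in the CSP docs above can be sketched like this. A toy illustration (channels as plain numbers, an arbitrary stand-in `block`), not the CSPResNet implementation:

```python
def csp_stage(channels, block):
    """Partition channels in two, run `block` on one part only, then merge."""
    mid = len(channels) // 2
    part1, part2 = channels[:mid], channels[mid:]
    processed = [block(c) for c in part2]  # only this part flows through the stage's blocks
    return part1 + processed               # cross-stage merge by concatenation

# Half the channels bypass the stage entirely, giving the gradient a
# second, unobstructed path through the network.
merged = csp_stage([1.0, 2.0, 3.0, 4.0], block=lambda c: c * 2)
```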
diff --git a/docs/models/gloun-inception-v3.md b/docs/models/gloun-inception-v3.md index f7365ed3..107bddda 100644 --- a/docs/models/gloun-inception-v3.md +++ b/docs/models/gloun-inception-v3.md @@ -1,4 +1,4 @@ -# Gluon Inception v3 +# (Gluon) Inception v3 **Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.com/method/auxiliary-classifier) to propagate label information lower down the network (along with the use of batch normalization for layers in the side head). The key building block is an [Inception Module](https://paperswithcode.com/method/inception-v3-module). diff --git a/docs/models/gloun-resnet.md b/docs/models/gloun-resnet.md index c7186295..594892b4 100644 --- a/docs/models/gloun-resnet.md +++ b/docs/models/gloun-resnet.md @@ -1,4 +1,4 @@ -# Glu on ResNet +# (Gluon) ResNet **Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual blocks](https://paperswithcode.com/method/residual-block) on top of each other to form a network: e.g. a ResNet-50 has fifty layers using these blocks. diff --git a/docs/models/gloun-resnext.md b/docs/models/gloun-resnext.md index 499ab273..286957df 100644 --- a/docs/models/gloun-resnext.md +++ b/docs/models/gloun-resnext.md @@ -1,4 +1,4 @@ -# Gluon ResNeXt +# (Gluon) ResNeXt A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology.
Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$, as an essential factor in addition to the dimensions of depth and width. diff --git a/docs/models/gloun-senet.md b/docs/models/gloun-senet.md index ac8f4ca8..6c667d0f 100644 --- a/docs/models/gloun-senet.md +++ b/docs/models/gloun-senet.md @@ -1,4 +1,4 @@ -# Summary +# (Gluon) SENet A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/gloun-seresnext.md b/docs/models/gloun-seresnext.md index 72dc530d..9c5671cb 100644 --- a/docs/models/gloun-seresnext.md +++ b/docs/models/gloun-seresnext.md @@ -1,4 +1,4 @@ -# Summary +# (Gluon) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNext](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/gloun-xception.md b/docs/models/gloun-xception.md index 2609552a..5d8dde60 100644 --- a/docs/models/gloun-xception.md +++ b/docs/models/gloun-xception.md @@ -1,4 +1,4 @@ -# Summary +# (Gluon) Xception **Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers. 
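Aside: the depthwise separable convolutions that Xception is built from can be sketched in a 1-D toy form. An illustrative decomposition only (hand-picked weights, no padding or strides), not the Xception implementation:

```python
def depthwise(channels, kernel):
    """Apply the same small spatial filter to each channel independently."""
    k = len(kernel)
    return [[sum(kernel[j] * ch[i + j] for j in range(k))
             for i in range(len(ch) - k + 1)] for ch in channels]

def pointwise(channels, weights):
    """1x1 convolution: mix channels at each position, no spatial extent."""
    length = len(channels[0])
    return [[sum(w[c] * channels[c][i] for c in range(len(channels)))
             for i in range(length)] for w in weights]

x = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]
dw = depthwise(x, kernel=[0.5, 0.5])        # per-channel spatial filtering
out = pointwise(dw, weights=[[1.0, -1.0]])  # cross-channel mixing
```

Splitting a dense convolution into these two cheap steps is what lets the architecture rely "solely" on separable layers.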
diff --git a/docs/models/inception-resnet-v2.md b/docs/models/inception-resnet-v2.md index b496e31a..3afe69b8 100644 --- a/docs/models/inception-resnet-v2.md +++ b/docs/models/inception-resnet-v2.md @@ -1,4 +1,4 @@ -# Inception Resnet v2 +# Inception ResNet v2 **Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture). diff --git a/docs/models/legacy-se-resnet.md b/docs/models/legacy-se-resnet.md index 44ba292a..a7c257a4 100644 --- a/docs/models/legacy-se-resnet.md +++ b/docs/models/legacy-se-resnet.md @@ -1,4 +1,4 @@ -# (Legacy) SE ResNet +# (Legacy) SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/legacy-se-resnext.md b/docs/models/legacy-se-resnext.md index 3f4c3cf3..2823727d 100644 --- a/docs/models/legacy-se-resnext.md +++ b/docs/models/legacy-se-resnext.md @@ -1,4 +1,4 @@ -# (Legacy) SE ResNeXt +# (Legacy) SE-ResNeXt **SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. 
diff --git a/docs/models/se-resnet.md b/docs/models/se-resnet.md index 4b121433..8c41531c 100644 --- a/docs/models/se-resnet.md +++ b/docs/models/se-resnet.md @@ -1,4 +1,4 @@ -# SE ResNet +# SE-ResNet **SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/seresnext.md b/docs/models/seresnext.md index 6406291b..5f073dc0 100644 --- a/docs/models/seresnext.md +++ b/docs/models/seresnext.md @@ -1,4 +1,4 @@ -# SE ResNeXt +# SE-ResNeXt **SE ResNeXt** is a variant of a [ResNext](https://www.paperswithcode.com/method/resneXt) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration. diff --git a/docs/models/skresnet.md b/docs/models/skresnet.md index 077047b6..f9d21255 100644 --- a/docs/models/skresnet.md +++ b/docs/models/skresnet.md @@ -1,4 +1,4 @@ -# SK ResNet +# SK-ResNet **SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner. 
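Aside, returning to the AdvProp scheme at the top of this patch: its auxiliary batch norm amounts to keeping two independent sets of normalization statistics. A toy per-batch normalizer (no learned affine parameters, no running averages), not EfficientNet's actual batch norm:

```python
def normalize(batch, stats):
    """Normalize a batch with its own mean/variance, recording the stats."""
    mean = sum(batch) / len(batch)
    var = sum((x - mean) ** 2 for x in batch) / len(batch)
    stats["mean"], stats["var"] = mean, var        # each branch keeps its own statistics
    return [(x - mean) / (var + 1e-5) ** 0.5 for x in batch]

# Clean and adversarial examples have different underlying distributions,
# so each goes through a separate normalizer.
bn_stats = {"clean": {}, "adv": {}}
clean_out = normalize([0.0, 1.0, 2.0], bn_stats["clean"])  # main batch norm
adv_out = normalize([5.0, 7.0, 9.0], bn_stats["adv"])      # auxiliary batch norm
```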
diff --git a/docs/models/skresnext.md b/docs/models/skresnext.md index e1c0c51b..27345803 100644 --- a/docs/models/skresnext.md +++ b/docs/models/skresnext.md @@ -1,4 +1,4 @@ -# SK ResNeXt +# SK-ResNeXt **SK ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNext are replaced by the proposed [SK convolutions](https://paperswithcode.com/method/selective-kernel-convolution), enabling the network to choose appropriate receptive field sizes in an adaptive manner.
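Aside: the Selective Kernel fusion described in the SK docs above can be sketched as a softmax over branch outputs. Toy numbers and hypothetical branch responses, not the SKNet code:

```python
import math

def selective_kernel(branch_outputs, branch_scores):
    """Fuse branches of different kernel sizes with a softmax over branches."""
    exps = [math.exp(s) for s in branch_scores]
    attn = [e / sum(exps) for e in exps]           # softmax across branches
    return [sum(a * b[i] for a, b in zip(attn, branch_outputs))
            for i in range(len(branch_outputs[0]))]

out3x3 = [1.0, 2.0]   # hypothetical response of a 3x3-kernel branch
out5x5 = [3.0, 4.0]   # hypothetical response of a 5x5-kernel branch
fused = selective_kernel([out3x3, out5x5], branch_scores=[0.0, 0.0])
```

Because the scores (learned from the input in the real unit) weight the branches, the effective receptive field is chosen adaptively per input.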