Deployed 46f1544 with MkDocs version: 1.1.2

gh-pages
Ross Wightman 4 years ago
parent 26b3a56a0a
commit da8447c4ae

@ -297,6 +297,20 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#april-5-2020" class="md-nav__link">
April 5, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#march-18-2020" class="md-nav__link">
March 18, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#feb-29-2020" class="md-nav__link">
Feb 29, 2020
@ -427,6 +441,20 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#april-5-2020" class="md-nav__link">
April 5, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#march-18-2020" class="md-nav__link">
March 18, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#feb-29-2020" class="md-nav__link">
Feb 29, 2020
@ -546,6 +574,21 @@
<h1 id="archived-changes">Archived Changes</h1>
<h3 id="april-5-2020">April 5, 2020</h3>
<ul>
<li>Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite<ul>
<li>3.5M param MobileNet-V2 100 @ 73%</li>
<li>4.5M param MobileNet-V2 110d @ 75%</li>
<li>6.1M param MobileNet-V2 140 @ 76.5%</li>
<li>5.8M param MobileNet-V2 120d @ 77.3%</li>
</ul>
</li>
</ul>
<h3 id="march-18-2020">March 18, 2020</h3>
<ul>
<li>Add EfficientNet-Lite models w/ weights ported from <a href="https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite">Tensorflow TPU</a></li>
<li>Add RandAugment trained ResNeXt-50 32x4d weights with 79.8 top-1. Trained by <a href="https://github.com/andravin">Andrew Lavin</a> (see Training section for hparams)</li>
</ul>
<h3 id="feb-29-2020">Feb 29, 2020</h3>
<ul>
<li>New MobileNet-V3 Large weights trained from scratch with this code to 75.77% top-1</li>

@ -286,50 +286,85 @@
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#oct-30-2020" class="md-nav__link">
Oct 30, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#oct-26-2020" class="md-nav__link">
Oct 26, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#oct-21-2020" class="md-nav__link">
Oct 21, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#oct-13-2020" class="md-nav__link">
Oct 13, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#sept-18-2020" class="md-nav__link">
Sept 18, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#sept-3-2020" class="md-nav__link">
Sept 3, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#aug-12-2020" class="md-nav__link">
Aug 12, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#aug-5-2020" class="md-nav__link">
Aug 5, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#june-11-2020" class="md-nav__link">
June 11, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-12-2020" class="md-nav__link">
May 12, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-3-2020" class="md-nav__link">
May 3, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-1-2020" class="md-nav__link">
May 1, 2020
</a>
</li>
@ -379,50 +414,85 @@
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#oct-30-2020" class="md-nav__link">
Oct 30, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#oct-26-2020" class="md-nav__link">
Oct 26, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#oct-21-2020" class="md-nav__link">
Oct 21, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#oct-13-2020" class="md-nav__link">
Oct 13, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#sept-18-2020" class="md-nav__link">
Sept 18, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#sept-3-2020" class="md-nav__link">
Sept 3, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#aug-12-2020" class="md-nav__link">
Aug 12, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#aug-5-2020" class="md-nav__link">
Aug 5, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#june-11-2020" class="md-nav__link">
June 11, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-12-2020" class="md-nav__link">
May 12, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-3-2020" class="md-nav__link">
May 3, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-1-2020" class="md-nav__link">
May 1, 2020
</a>
</li>
@ -448,6 +518,62 @@
<h1 id="recent-changes">Recent Changes</h1>
<h3 id="oct-30-2020">Oct 30, 2020</h3>
<ul>
<li>Test with PyTorch 1.7 and fix a small top-n metric view vs reshape issue.</li>
<li>Convert newly added 224x224 Vision Transformer weights from official JAX repo. 81.8 top-1 for B/16, 83.1 L/16.</li>
<li>Support PyTorch 1.7 optimized, native SiLU (aka Swish) activation. Add mapping to 'silu' name, custom swish will eventually be deprecated.</li>
<li>Fix regression for loading pretrained classifier via direct model entrypoint functions. Didn't impact create_model() factory usage.</li>
<li>PyPi release @ 0.3.0 version!</li>
</ul>
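The native SiLU mentioned above can be sanity-checked with a small torch-only sketch (assumes PyTorch 1.7+; `nn.SiLU` computes x * sigmoid(x), the same function the custom swish implemented):

```python
import torch
import torch.nn as nn

# PyTorch 1.7+ ships an optimized, native SiLU (aka Swish) module.
silu = nn.SiLU()

x = torch.tensor([-1.0, 0.0, 1.0])
# SiLU is defined as x * sigmoid(x), matching the custom swish it replaces.
expected = x * torch.sigmoid(x)
print(torch.allclose(silu(x), expected))  # True
```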
<h3 id="oct-26-2020">Oct 26, 2020</h3>
<ul>
<li>Update Vision Transformer models to be compatible with official code release at <a href="https://github.com/google-research/vision_transformer">https://github.com/google-research/vision_transformer</a></li>
<li>Add Vision Transformer weights (ImageNet-21k pretrain) for 384x384 base and large models converted from official jax impl<ul>
<li>ViT-B/16 - 84.2</li>
<li>ViT-B/32 - 81.7</li>
<li>ViT-L/16 - 85.2</li>
<li>ViT-L/32 - 81.5</li>
</ul>
</li>
</ul>
<h3 id="oct-21-2020">Oct 21, 2020</h3>
<ul>
<li>Weights added for Vision Transformer (ViT) models. 77.86 top-1 for 'small' and 79.35 for 'base'. Thanks to <a href="https://www.kaggle.com/christofhenkel">Christof</a> for training the base model w/ lots of GPUs.</li>
</ul>
<h3 id="oct-13-2020">Oct 13, 2020</h3>
<ul>
<li>Initial impl of Vision Transformer models. Both patch and hybrid (CNN backbone) variants. Currently trying to train...</li>
<li>Adafactor and AdaHessian (FP32 only, no AMP) optimizers</li>
<li>EdgeTPU-M (<code>efficientnet_em</code>) model trained in PyTorch, 79.3 top-1</li>
<li>Pip release, doc updates pending a few more changes...</li>
</ul>
<h3 id="sept-18-2020">Sept 18, 2020</h3>
<ul>
<li>New ResNet 'D' weights. 72.7 (top-1) ResNet-18-D, 77.1 ResNet-34-D, 80.5 ResNet-50-D</li>
<li>Added a few untrained defs for other ResNet models (66D, 101D, 152D, 200/200D)</li>
</ul>
<h3 id="sept-3-2020">Sept 3, 2020</h3>
<ul>
<li>New weights<ul>
<li>Wide-ResNet50 - 81.5 top-1 (vs 78.5 torchvision)</li>
<li>SEResNeXt50-32x4d - 81.3 top-1 (vs 79.1 cadene)</li>
</ul>
</li>
<li>Support for native Torch AMP and channels_last memory format added to train/validate scripts (<code>--channels-last</code>, <code>--native-amp</code> vs <code>--apex-amp</code>)</li>
<li>Models tested with channels_last on latest NGC 20.08 container. AdaptiveAvgPool in attn layers changed to mean((2,3)) to work around bug with NHWC kernel.</li>
</ul>
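The pooling workaround in the last bullet can be illustrated with a minimal sketch (torch-only, not the exact timm code): a mean over the spatial dims gives the same values as a 1x1 adaptive average pool while sidestepping the buggy NHWC pooling kernel:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 8, 7, 7)  # NCHW feature map

# Attn-layer pooling as originally written: adaptive avg pool to 1x1, then flatten.
via_pool = nn.AdaptiveAvgPool2d(1)(x).flatten(1)

# Workaround: reduce the spatial dims directly; same result, no NHWC pool kernel.
via_mean = x.mean((2, 3))

print(torch.allclose(via_pool, via_mean, atol=1e-6))  # True
```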
<h3 id="aug-12-2020">Aug 12, 2020</h3>
<ul>
<li>New/updated weights from training experiments<ul>
<li>EfficientNet-B3 - 82.1 top-1 (vs 81.6 for official with AA and 81.9 for AdvProp)</li>
<li>RegNetY-3.2GF - 82.0 top-1 (78.9 from official ver)</li>
<li>CSPResNet50 - 79.6 top-1 (76.6 from official ver)</li>
</ul>
</li>
<li>Add CutMix integrated w/ Mixup. See <a href="https://github.com/rwightman/pytorch-image-models/pull/218">pull request</a> for some usage examples</li>
<li>Some fixes for using pretrained weights with <code>in_chans</code> != 3 on several models.</li>
</ul>
<h3 id="aug-5-2020">Aug 5, 2020</h3>
<p>Universal feature extraction, new models, new weights, new test sets.</p>
<ul>
@ -455,20 +581,22 @@
<li>New models<ul>
<li>CSPResNet, CSPResNeXt, CSPDarkNet, DarkNet</li>
<li>ReXNet</li>
<li>(Modified Aligned) Xception41/65/71 (a proper port of TF models)</li>
</ul>
</li>
<li>New trained weights<ul>
<li>SEResNet50 - 80.3 top-1</li>
<li>CSPDarkNet53 - 80.1 top-1</li>
<li>CSPResNeXt50 - 80.0 top-1</li>
<li>DPN68b - 79.2 top-1</li>
<li>EfficientNet-Lite0 (non-TF ver) - 75.5 (submitted by <a href="https://github.com/hal-314">@hal-314</a>)</li>
</ul>
</li>
<li>Add 'real' labels for ImageNet and ImageNet-Renditions test set, see <a href="results/README.md"><code>results/README.md</code></a></li>
<li>Test set ranking/top-n diff script by <a href="https://github.com/KushajveerSingh">@KushajveerSingh</a></li>
<li>Train script and loader/transform tweaks to punch through more aug arguments</li>
<li>README and documentation overhaul. See initial (WIP) documentation at <a href="https://rwightman.github.io/pytorch-image-models/">https://rwightman.github.io/pytorch-image-models/</a></li>
<li>adamp and sgdp optimizers added by <a href="https://github.com/hellbell">@hellbell</a></li>
</ul>
<h3 id="june-11-2020">June 11, 2020</h3>
<p>Bunch of changes:</p>
@ -504,21 +632,6 @@
</li>
<li>200 pretrained models in total now with updated results csv in results folder</li>
</ul>
<h3 id="april-5-2020">April 5, 2020</h3>
<ul>
<li>Add some newly trained MobileNet-V2 models trained with latest h-params, rand augment. They compare quite favourably to EfficientNet-Lite<ul>
<li>3.5M param MobileNet-V2 100 @ 73%</li>
<li>4.5M param MobileNet-V2 110d @ 75%</li>
<li>6.1M param MobileNet-V2 140 @ 76.5%</li>
<li>5.8M param MobileNet-V2 120d @ 77.3%</li>
</ul>
</li>
</ul>
<h3 id="march-18-2020">March 18, 2020</h3>
<ul>
<li>Add EfficientNet-Lite models w/ weights ported from <a href="https://github.com/tensorflow/tpu/tree/master/models/official/efficientnet/lite">Tensorflow TPU</a></li>
<li>Add RandAugment trained ResNeXt-50 32x4d weights with 79.8 top-1. Trained by <a href="https://github.com/andravin">Andrew Lavin</a> (see Training section for hparams)</li>
</ul>

@ -415,11 +415,11 @@
<p class="admonition-title">Conda Environment</p>
<p>All development and testing has been done in Conda Python 3 environments on Linux x86-64 systems, specifically Python 3.6.x, 3.7.x, 3.8.x.</p>
<p>Little to no care has been taken to be Python 2.x friendly, and Python 2.x will not be supported. If you run into any challenges running on Windows or another OS, I'm definitely open to looking into those issues so long as they occur in a reproducible (read: Conda) environment.</p>
<p>PyTorch versions 1.4, 1.5.x, 1.6, and 1.7 have been tested with this code.</p>
<p>I've tried to keep the dependencies minimal; the setup is as per the PyTorch default install instructions for Conda:
<div class="highlight"><pre><span></span><code>conda create -n torch-env
conda activate torch-env
conda install -c pytorch pytorch torchvision cudatoolkit=11
conda install pyyaml
</code></pre></div></p>
</div>
@ -427,7 +427,7 @@ conda install pyyaml
<p>Pretrained models can be loaded using <code>timm.create_model</code></p>
<div class="highlight"><pre><span></span><code><span class="kn">import</span> <span class="nn">timm</span>
<span class="n">m</span> <span class="o">=</span> <span class="n">timm</span><span class="o">.</span><span class="n">create_model</span><span class="p">(</span><span class="s1">&#39;mobilenetv3_large_100&#39;</span><span class="p">,</span> <span class="n">pretrained</span><span class="o">=</span><span class="bp">True</span><span class="p">)</span>
<span class="n">m</span><span class="o">.</span><span class="n">eval</span><span class="p">()</span>
</code></pre></div>

@ -370,6 +370,13 @@
TResNet [tresnet.py]
</a>
</li>
<li class="md-nav__item">
<a href="#vision-transformer-vision_transformerpy" class="md-nav__link">
Vision Transformer [vision_transformer.py]
</a>
</li>
<li class="md-nav__item">
@ -649,6 +656,13 @@
TResNet [tresnet.py]
</a>
</li>
<li class="md-nav__item">
<a href="#vision-transformer-vision_transformerpy" class="md-nav__link">
Vision Transformer [vision_transformer.py]
</a>
</li>
<li class="md-nav__item">
@ -864,6 +878,11 @@
<li>Paper: <code>TResNet: High Performance GPU-Dedicated Architecture</code> - <a href="https://arxiv.org/abs/2003.13630">https://arxiv.org/abs/2003.13630</a></li>
<li>Code: <a href="https://github.com/mrT23/TResNet">https://github.com/mrT23/TResNet</a></li>
</ul>
<h2 id="vision-transformer-vision_transformerpy">Vision Transformer [<a href="https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vision_transformer.py">vision_transformer.py</a>]</h2>
<ul>
<li>Paper: <code>An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale</code> - <a href="https://arxiv.org/abs/2010.11929">https://arxiv.org/abs/2010.11929</a></li>
<li>Reference code and pretrained weights: <a href="https://github.com/google-research/vision_transformer">https://github.com/google-research/vision_transformer</a></li>
</ul>
<h2 id="vovnet-v2-and-v1-vovnetpy">VovNet V2 and V1 [<a href="https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/vovnet.py">vovnet.py</a>]</h2>
<ul>
<li>Paper: <code>CenterMask : Real-Time Anchor-Free Instance Segmentation</code> - <a href="https://arxiv.org/abs/1911.06667">https://arxiv.org/abs/1911.06667</a></li>

File diff suppressed because it is too large

File diff suppressed because one or more lines are too long

@ -1,35 +1,35 @@
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2020-10-30</lastmod>
<changefreq>daily</changefreq>
</url>
</urlset>

Binary file not shown.