Deployed 5c7d298 with MkDocs version: 1.1.2

gh-pages
Ross Wightman 4 years ago
parent 550a478fa5
commit d76483b468

@@ -297,6 +297,27 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#aug-12-2020" class="md-nav__link">
Aug 12, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#aug-5-2020" class="md-nav__link">
Aug 5, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#june-11-2020" class="md-nav__link">
June 11, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-12-2020" class="md-nav__link">
May 12, 2020
@@ -476,6 +497,27 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#aug-12-2020" class="md-nav__link">
Aug 12, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#aug-5-2020" class="md-nav__link">
Aug 5, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#june-11-2020" class="md-nav__link">
June 11, 2020
</a>
</li>
<li class="md-nav__item">
<a href="#may-12-2020" class="md-nav__link">
May 12, 2020
@@ -644,6 +686,47 @@
<h1 id="archived-changes">Archived Changes</h1>
<h3 id="aug-12-2020">Aug 12, 2020</h3>
<ul>
<li>New/updated weights from training experiments<ul>
<li>EfficientNet-B3 - 82.1 top-1 (vs 81.6 for official with AA and 81.9 for AdvProp)</li>
<li>RegNetY-3.2GF - 82.0 top-1 (78.9 from official ver)</li>
<li>CSPResNet50 - 79.6 top-1 (76.6 from official ver)</li>
</ul>
</li>
<li>Add CutMix integrated w/ Mixup. See <a href="https://github.com/rwightman/pytorch-image-models/pull/218">pull request</a> for some usage examples</li>
<li>Some fixes for using pretrained weights with <code>in_chans</code> != 3 on several models.</li>
</ul>
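<p>For illustration, the rectangular-patch sampling at the heart of CutMix can be sketched in plain Python. This is a simplified sketch of the general technique, not timm's implementation, and <code>rand_bbox</code> is a hypothetical helper name:</p>

```python
import math
import random

def rand_bbox(height, width, lam):
    """Sample a CutMix rectangle covering roughly a (1 - lam) fraction of the image.

    The region [y1:y2, x1:x2] from one image is pasted into another image
    of the same size; labels are then mixed by the swapped area fraction.
    """
    cut_ratio = math.sqrt(1.0 - lam)   # side scale so box area ~ (1 - lam)
    cut_h = int(height * cut_ratio)
    cut_w = int(width * cut_ratio)
    cy = random.randrange(height)      # random box center
    cx = random.randrange(width)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, height)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, width)
    return y1, y2, x1, x2
```

<p>Labels for the two images are mixed in proportion to the actual area swapped, which can differ slightly from the requested <code>1 - lam</code> when the box is clipped at the image border.</p>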
<h3 id="aug-5-2020">Aug 5, 2020</h3>
<p>Universal feature extraction, new models, new weights, new test sets.</p>
<ul>
<li>All models support the <code>features_only=True</code> argument for the <code>create_model</code> call, returning a network that extracts feature maps from the deepest layer at each stride.</li>
<li>New models<ul>
<li>CSPResNet, CSPResNeXt, CSPDarkNet, DarkNet</li>
<li>ReXNet</li>
<li>(Modified Aligned) Xception41/65/71 (a proper port of the TF models)</li>
</ul>
</li>
<li>New trained weights<ul>
<li>SEResNet50 - 80.3 top-1</li>
<li>CSPDarkNet53 - 80.1 top-1</li>
<li>CSPResNeXt50 - 80.0 top-1</li>
<li>DPN68b - 79.2 top-1</li>
<li>EfficientNet-Lite0 (non-TF ver) - 75.5 (submitted by <a href="https://github.com/hal-314">@hal-314</a>)</li>
</ul>
</li>
<li>Add 'real' labels for the ImageNet and ImageNet-Renditions test sets, see <a href="results/README.md"><code>results/README.md</code></a></li>
<li>Test set ranking/top-n diff script by <a href="https://github.com/KushajveerSingh">@KushajveerSingh</a></li>
<li>Train script and loader/transform tweaks to pass through more augmentation arguments</li>
<li>README and documentation overhaul. See initial (WIP) documentation at <a href="https://rwightman.github.io/pytorch-image-models/">https://rwightman.github.io/pytorch-image-models/</a></li>
<li>AdamP and SGDP optimizers added by <a href="https://github.com/hellbell">@hellbell</a></li>
</ul>
<h3 id="june-11-2020">June 11, 2020</h3>
<p>Bunch of changes:</p>
<ul>
<li>DenseNet models updated with the memory-efficient addition from torchvision (fixed a bug), plus blur pooling and deep stem additions</li>
<li>VoVNet V1 and V2 models added, 39 V2 variant (ese_vovnet_39b) trained to 79.3 top-1</li>
<li>Activation factory added along with new activations:<ul>
<li>select act at model creation time for more flexibility in using activations compatible with scripting or tracing (ONNX export)</li>
<li>hard_mish (experimental) added with memory-efficient grad, along with a memory-efficient hard_swish</li>
<li>context manager for setting exportable/scriptable/no_jit states</li>
</ul>
</li>
<li>Norm + Activation combo layers added with initial trial support in DenseNet and VoVNet, along with an impl of EvoNorm and an InplaceAbn wrapper that fit the interface</li>
<li>TorchScript works for all but two of the model types as long as you are using PyTorch 1.5+; tests added for this</li>
<li>Some import cleanup and classifier reset changes; all models will have their classifier reset to <code>nn.Identity</code> on a <code>reset_classifier(0)</code> call</li>
<li>Prep for 0.1.28 pip release</li>
</ul>
<h3 id="may-12-2020">May 12, 2020</h3>
<ul>
<li>Add ResNeSt models (code adapted from <a href="https://github.com/zhanghang1989/ResNeSt">https://github.com/zhanghang1989/ResNeSt</a>, paper <a href="https://arxiv.org/abs/2004.08955">https://arxiv.org/abs/2004.08955</a>)</li>

@@ -285,6 +285,34 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#march-7-2021" class="md-nav__link">
March 7, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-18-2021" class="md-nav__link">
Feb 18, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-16-2021" class="md-nav__link">
Feb 16, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-12-2021" class="md-nav__link">
Feb 12, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-10-2021" class="md-nav__link">
Feb 10, 2021
@@ -462,6 +490,34 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#march-7-2021" class="md-nav__link">
March 7, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-18-2021" class="md-nav__link">
Feb 18, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-16-2021" class="md-nav__link">
Feb 16, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-12-2021" class="md-nav__link">
Feb 12, 2021
</a>
</li>
<li class="md-nav__item">
<a href="#feb-10-2021" class="md-nav__link">
Feb 10, 2021
@@ -616,6 +672,44 @@
<h1 id="recent-changes">Recent Changes</h1>
<h3 id="march-7-2021">March 7, 2021</h3>
<ul>
<li>First 0.4.x PyPi release w/ NFNets (&amp; related), ByoB (GPU-Efficient, RepVGG, etc).</li>
<li>Change feature extraction for pre-activation nets (NFNets, ResNetV2) to return features before activation.</li>
</ul>
<h3 id="feb-18-2021">Feb 18, 2021</h3>
<ul>
<li>Add pretrained weights and model variants for NFNet-F* models from <a href="https://github.com/deepmind/deepmind-research/tree/master/nfnets">DeepMind Haiku impl</a>.<ul>
<li>Models are prefixed with <code>dm_</code>. They require SAME padding conv, skipinit enabled, and activation gains applied in act fn.</li>
<li>These models are big; expect to run out of GPU memory. With the GELU activation + other options, they are roughly &frac12; the inference speed of my SiLU PyTorch optimized <code>s</code> variants.</li>
<li>Original model results are based on pre-processing that is not the same as all other models so you'll see different results in the results csv (once updated).</li>
<li>Matching the original pre-processing as closely as possible I get these results:<ul>
<li><code>dm_nfnet_f6</code> - 86.352</li>
<li><code>dm_nfnet_f5</code> - 86.100</li>
<li><code>dm_nfnet_f4</code> - 85.834</li>
<li><code>dm_nfnet_f3</code> - 85.676</li>
<li><code>dm_nfnet_f2</code> - 85.178</li>
<li><code>dm_nfnet_f1</code> - 84.696</li>
<li><code>dm_nfnet_f0</code> - 83.464</li>
</ul>
</li>
</ul>
</li>
</ul>
<h3 id="feb-16-2021">Feb 16, 2021</h3>
<ul>
<li>Add Adaptive Gradient Clipping (AGC) as per <a href="https://arxiv.org/abs/2102.06171">https://arxiv.org/abs/2102.06171</a>. Integrated with PyTorch gradient clipping via a mode arg that defaults to the previous 'norm' mode. For backwards compatibility, the clip-grad arg must be specified to enable clipping when using train.py.<ul>
<li>AGC w/ default clipping factor <code>--clip-grad .01 --clip-mode agc</code></li>
<li>PyTorch global norm of 1.0 (old behaviour, always norm), <code>--clip-grad 1.0</code></li>
<li>PyTorch value clipping of 10, <code>--clip-grad 10. --clip-mode value</code></li>
<li>AGC performance is definitely sensitive to the clipping factor. More experimentation needed to determine good values for smaller batch sizes and optimizers besides those in paper. So far I've found .001-.005 is necessary for stable RMSProp training w/ NFNet/NF-ResNet.</li>
</ul>
</li>
</ul>
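<p>The core idea of AGC can be sketched for a single flattened parameter in plain Python. This is a simplified per-tensor variant; the paper and timm's actual implementation clip unit-wise (e.g. per output channel) and operate on tensors:</p>

```python
import math

def agc_clip(param, grad, clip_factor=0.01, eps=1e-3):
    """Rescale grad so that ||grad|| <= clip_factor * max(||param||, eps).

    param and grad are flat lists of floats standing in for tensors.
    """
    p_norm = max(math.sqrt(sum(p * p for p in param)), eps)
    g_norm = math.sqrt(sum(g * g for g in grad))
    max_norm = clip_factor * p_norm
    if g_norm > max_norm:
        return [g * (max_norm / g_norm) for g in grad]
    return grad
```

<p>The <code>--clip-grad .01 --clip-mode agc</code> flags above correspond to a <code>clip_factor</code> of 0.01.</p>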
<h3 id="feb-12-2021">Feb 12, 2021</h3>
<ul>
<li>Update Normalization-Free nets to include new NFNet-F (<a href="https://arxiv.org/abs/2102.06171">https://arxiv.org/abs/2102.06171</a>) model defs</li>
</ul>
<h3 id="feb-10-2021">Feb 10, 2021</h3>
<ul>
<li>More model archs, incl a flexible ByobNet backbone ('Bring-your-own-blocks')<ul>

@@ -568,7 +568,7 @@
<h1 id="feature-extraction">Feature Extraction</h1>
<p>All of the models in <code>timm</code> have consistent mechanisms for obtaining various types of features from the model for tasks besides classification.</p>
<h2 id="penultimate-layer-features-pre-classifier-features">Penultimate Layer Features (Pre-Classifier Features)</h2>
<p>The features from the penultimate model layer can be obtained in several ways without requiring model surgery (although feel free to do surgery). One must first decide if they want pooled or un-pooled features.</p>
<h3 id="unpooled">Unpooled</h3>
<p>There are three ways to obtain unpooled features.</p>
<p>Without modifying the network, one can call <code>model.forward_features(input)</code> on any model instead of the usual <code>model(input)</code>. This will bypass the head classifier and global pooling for networks.</p>

@@ -213,6 +213,13 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#welcome" class="md-nav__link">
Welcome
</a>
</li>
<li class="md-nav__item">
<a href="#install" class="md-nav__link">
Install
@@ -357,6 +364,13 @@
</label>
<ul class="md-nav__list" data-md-scrollfix>
<li class="md-nav__item">
<a href="#welcome" class="md-nav__link">
Welcome
</a>
</li>
<li class="md-nav__item">
<a href="#install" class="md-nav__link">
Install
@@ -406,20 +420,26 @@
<h1 id="getting-started">Getting Started</h1>
<h2 id="welcome">Welcome</h2>
<p>Welcome to the <code>timm</code> documentation, a lean set of docs that covers the basics of <code>timm</code>.</p>
<p>For a more comprehensive set of docs (currently under development), please visit <a href="https://fastai.github.io/timmdocs/">timmdocs</a> by <a href="https://github.com/amaarora">Aman Arora</a>.</p>
<h2 id="install">Install</h2>
<p>The library can be installed with pip:</p>
<div class="highlight"><pre><span></span><code>pip install timm
</code></pre></div>
<p>I update the PyPi (pip) packages when I'm confident there are no significant model regressions from previous releases. If you want to pip install the bleeding edge from GitHub, use:
<div class="highlight"><pre><span></span><code>pip install git+https://github.com/rwightman/pytorch-image-models.git
</code></pre></div></p>
<div class="admonition info">
<p class="admonition-title">Conda Environment</p>
<p>All development and testing has been done in Conda Python 3 environments on Linux x86-64 systems, specifically Python 3.6.x, 3.7.x, 3.8.x, and 3.9.</p>
<p>Little to no care has been taken to be Python 2.x friendly, and it will not be supported. If you run into any challenges running on Windows, or another OS, I'm definitely open to looking into those issues so long as it's in a reproducible (read: Conda) environment.</p>
<p>PyTorch versions 1.4, 1.5.x, 1.6, 1.7.x, and 1.8 have been tested with this code.</p>
<p>I've tried to keep the dependencies minimal; the setup is as per the PyTorch default install instructions for Conda:
<div class="highlight"><pre><span></span><code>conda create -n torch-env
conda activate torch-env
conda install pytorch torchvision cudatoolkit=11.1 -c pytorch -c conda-forge
conda install pyyaml
</code></pre></div></p>
</div>

@@ -777,7 +777,8 @@
<li>ported by myself from their original impl in a different framework (e.g. Tensorflow models)</li>
<li>trained from scratch using the included training script</li>
</ol>
<p>The validation results for the pretrained weights are <a href="../results/">here</a>.</p>
<p>A more exciting view (with pretty pictures) of the models within <code>timm</code> can be found at <a href="https://paperswithcode.com/lib/timm">paperswithcode</a>.</p>
<h2 id="big-transfer-resnetv2-bit-resnetv2py">Big Transfer ResNetV2 (BiT) [<a href="https://github.com/rwightman/pytorch-image-models/blob/master/timm/models/resnetv2.py">resnetv2.py</a>]</h2>
<ul>
<li>Paper: <code>Big Transfer (BiT): General Visual Representation Learning</code> - <a href="https://arxiv.org/abs/1912.11370">https://arxiv.org/abs/1912.11370</a></li>

@@ -378,9 +378,9 @@
<h1 id="results">Results</h1>
<p>CSV files containing ImageNet-1K and out-of-distribution (OOD) test set validation results for all models with pretrained weights are located in the repository <a href="https://github.com/rwightman/pytorch-image-models/tree/master/results">results folder</a>.</p>
<h2 id="self-trained-weights">Self-trained Weights</h2>
<p>The table below includes ImageNet-1k validation results for model weights that I've trained myself. It is not updated as frequently as the CSV results linked above.</p>
<table>
<thead>
<tr>

File diff suppressed because one or more lines are too long

@@ -1,35 +1,35 @@
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url><url>
<loc>None</loc>
<lastmod>2021-02-11</lastmod>
<lastmod>2021-03-10</lastmod>
<changefreq>daily</changefreq>
</url>
</urlset>

Binary file not shown.