# Summary
**HRNet**, or **High-Resolution Net**, is a general-purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high-resolution representations through the whole process. The network starts from a high-resolution convolution stream, gradually adds high-to-low resolution convolution streams one by one, and connects the multi-resolution streams in parallel. The resulting network consists of several ($4$ in the paper) stages and the $n$th stage contains $n$ streams corresponding to $n$ resolutions. The authors conduct repeated multi-resolution fusions by exchanging information across the parallel streams.
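The parallel multi-resolution streams are easiest to see by asking timm for the backbone's feature maps instead of classification logits. The following is a minimal sketch (exact channel counts and strides depend on the variant and timm version) that prints the shape of each returned feature map:
```python
import torch
import timm

# Minimal sketch: inspect HRNet's multi-resolution feature maps via timm's
# features_only mode, which returns a list of feature maps instead of logits.
model = timm.create_model('hrnet_w18_small', pretrained=False, features_only=True)
model.eval()

with torch.no_grad():
    features = model(torch.randn(1, 3, 224, 224))

for i, f in enumerate(features):
    # Each entry comes from a progressively lower-resolution stream.
    print(i, tuple(f.shape))
```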
## How do I use this model on an image?
To load a pretrained model:
```python
import timm
model = timm.create_model('hrnet_w18_small', pretrained=True)
model.eval()
```
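If you are unsure which HRNet variants are available in your installed timm version, you can list them by wildcard (the exact set of names depends on the timm release):
```python
import timm

# List all HRNet model names registered in this timm installation;
# pretrained=True restricts the list to variants with pretrained weights.
print(timm.list_models('hrnet*', pretrained=True))
```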
To load and preprocess the image:
```python
import urllib
from PIL import Image
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform
config = resolve_data_config({}, model=model)
transform = create_transform(**config)
url, filename = ("https://github.com/pytorch/hub/raw/master/images/dog.jpg", "dog.jpg")
urllib.request.urlretrieve(url, filename)
img = Image.open(filename).convert('RGB')
tensor = transform(img).unsqueeze(0) # transform and add batch dimension
```
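The resolved config carries the preprocessing settings the pretrained weights expect (input size, interpolation, crop percentage, normalization statistics), so it can be worth inspecting before building your own pipeline. A minimal sketch:
```python
# Inspect the preprocessing settings resolved from the model's default config.
# Keys typically include 'input_size', 'interpolation', 'crop_pct', 'mean' and 'std'.
for key, value in config.items():
    print(key, value)
```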
To get the model predictions:
```python
import torch
with torch.no_grad():
    out = model(tensor)
probabilities = torch.nn.functional.softmax(out[0], dim=0)
print(probabilities.shape)
# prints: torch.Size([1000])
```
To get the top-5 predicted class names:
```python
# Get imagenet class mappings
url, filename = ("https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt", "imagenet_classes.txt")
urllib.request.urlretrieve(url, filename)
with open("imagenet_classes.txt", "r") as f:
categories = [s.strip() for s in f.readlines()]
# Print top categories per image
top5_prob, top5_catid = torch.topk(probabilities, 5)
for i in range(top5_prob.size(0)):
    print(categories[top5_catid[i]], top5_prob[i].item())
# prints class names and probabilities like:
# Samoyed 0.6425196528434753
# Pomeranian 0.04062102362513542
# keeshond 0.03186424449086189
# white wolf 0.01739676296710968
# Eskimo dog 0.011717947199940681
```
Replace the model name with the variant you want to use, e.g. `hrnet_w18_small`. You can find the IDs in the model summaries at the top of this page.
To extract image features with this model, follow the [timm feature extraction examples](https://rwightman.github.io/pytorch-image-models/feature_extraction/), just change the name of the model you want to use.
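As one concrete option, creating the model with `num_classes=0` removes the classification head so the forward pass returns pooled feature vectors. A minimal sketch (the feature dimensionality depends on the variant):
```python
import torch
import timm

# Sketch: drop the classifier so the model outputs pooled features.
feature_model = timm.create_model('hrnet_w18_small', pretrained=True, num_classes=0)
feature_model.eval()

with torch.no_grad():
    features = feature_model(tensor)  # `tensor` from the preprocessing step above
print(features.shape)  # e.g. torch.Size([1, feature_dim])
```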
## How do I finetune this model?
You can finetune any of the pre-trained models just by changing the classifier (the last layer).
```python
model = timm.create_model('hrnet_w18_small', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
```
To finetune on your own dataset, you have to write a training loop or adapt [timm's training
script](https://github.com/rwightman/pytorch-image-models/blob/master/train.py) to use your dataset.
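If you prefer a hand-rolled loop, the sketch below is one minimal possibility. It assumes a hypothetical `ImageFolder`-style dataset at `/path/to/dataset` and uses placeholder hyperparameters, so treat it as a starting point rather than a recipe.
```python
import timm
import torch
from torch.utils.data import DataLoader
from torchvision import datasets
from timm.data import resolve_data_config
from timm.data.transforms_factory import create_transform

NUM_FINETUNE_CLASSES = 10  # placeholder: set to your dataset's class count

model = timm.create_model('hrnet_w18_small', pretrained=True, num_classes=NUM_FINETUNE_CLASSES)
transform = create_transform(**resolve_data_config({}, model=model), is_training=True)

# Hypothetical dataset path; any Dataset yielding (image, label) pairs works.
dataset = datasets.ImageFolder('/path/to/dataset', transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=4)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-4)
criterion = torch.nn.CrossEntropyLoss()
device = 'cuda' if torch.cuda.is_available() else 'cpu'
model.to(device).train()

for epoch in range(5):  # placeholder epoch count
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```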
## How do I train this model?
You can follow the [timm recipe scripts](https://rwightman.github.io/pytorch-image-models/scripts/) for training a new model from scratch.
## Citation
```BibTeX
@misc{sun2019highresolution,
      title={High-Resolution Representations for Labeling Pixels and Regions},
      author={Ke Sun and Yang Zhao and Borui Jiang and Tianheng Cheng and Bin Xiao and Dong Liu and Yadong Mu and Xinggang Wang and Wenyu Liu and Jingdong Wang},
      year={2019},
      eprint={1904.04514},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```
<!--
Models:
- Name: hrnet_w18_small
  Metadata:
    FLOPs: 2071651488
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 52934302
    Tasks:
    - Image Classification
    Training Time: ''
    ID: hrnet_w18_small
    Layers: 18
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L790
    Config: ''
  In Collection: HRNet
- Name: hrnet_w18_small_v2
  Metadata:
    FLOPs: 3360023160
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 62682879
    Tasks:
    - Image Classification
    Training Time: ''
    ID: hrnet_w18_small_v2
    Layers: 18
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L795
    Config: ''
  In Collection: HRNet
- Name: hrnet_w32
  Metadata:
    FLOPs: 11524528320
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 165547812
    Tasks:
    - Image Classification
    Training Time: 60 hours
    ID: hrnet_w32
    Layers: 32
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L810
    Config: ''
  In Collection: HRNet
- Name: hrnet_w40
  Metadata:
    FLOPs: 16381182192
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 230899236
    Tasks:
    - Image Classification
    Training Time: ''
    ID: hrnet_w40
    Layers: 40
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L815
    Config: ''
  In Collection: HRNet
- Name: hrnet_w44
  Metadata:
    FLOPs: 19202520264
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 268957432
    Tasks:
    - Image Classification
    Training Time: ''
    ID: hrnet_w44
    Layers: 44
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L820
    Config: ''
  In Collection: HRNet
- Name: hrnet_w48
  Metadata:
    FLOPs: 22285865760
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 310603710
    Tasks:
    - Image Classification
    Training Time: 80 hours
    ID: hrnet_w48
    Layers: 48
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L825
    Config: ''
  In Collection: HRNet
- Name: hrnet_w18
  Metadata:
    FLOPs: 5547205500
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 85718883
    Tasks:
    - Image Classification
    Training Time: ''
    ID: hrnet_w18
    Layers: 18
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L800
    Config: ''
  In Collection: HRNet
- Name: hrnet_w64
  Metadata:
    FLOPs: 37239321984
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 513071818
    Tasks:
    - Image Classification
    Training Time: ''
    ID: hrnet_w64
    Layers: 64
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L830
    Config: ''
  In Collection: HRNet
- Name: hrnet_w30
  Metadata:
    FLOPs: 10474119492
    Epochs: 100
    Batch Size: 256
    Training Data:
    - ImageNet
    Training Techniques:
    - Nesterov Accelerated Gradient
    - Weight Decay
    Training Resources: 4x NVIDIA V100 GPUs
    Architecture:
    - Batch Normalization
    - Convolution
    - ReLU
    - Residual Connection
    File Size: 151452218
    Tasks:
    - Image Classification
    Training Time: ''
    ID: hrnet_w30
    Layers: 30
    Crop Pct: '0.875'
    Momentum: 0.9
    Image Size: '224'
    Weight Decay: 0.001
    Interpolation: bilinear
    Code: https://github.com/rwightman/pytorch-image-models/blob/d8e69206be253892b2956341fea09fdebfaae4e3/timm/models/hrnet.py#L805
    Config: ''
  In Collection: HRNet
Collections:
- Name: HRNet
  Paper:
    title: Deep High-Resolution Representation Learning for Visual Recognition
    url: https://paperswithcode.com/paper/190807919
  type: model-index
Type: model-index
-->