# Validation and Benchmark Results

This folder contains validation and benchmark results for the models in this collection. Validation scores are currently only run for models with pretrained weights and an ImageNet-1k classification head; benchmark numbers are run for all models.

## Datasets

There are currently results for the ImageNet validation set and five additional test/label sets.

The test set results include rank and top-1/top-5 accuracy differences from a clean validation run. For the "Real Labels", ImageNetV2, and Sketch test sets, the differences were calculated against the full 1000-class ImageNet-1k validation set. For the Adversarial and Rendition sets, the differences were calculated against 'clean' runs on the ImageNet-1k validation set restricted to the same 200 classes used by each test set.
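For concreteness, here is a minimal sketch of how those deltas could be reproduced with pandas from the CSVs in this folder. The column names (`model`, `top1`, `top5`) are assumptions about the result schema, so verify them against the file headers:

```python
import pandas as pd

# Sketch of the rank/top-1/top-5 diff computation described above.
# Column names are assumed, not a guaranteed schema.
clean = pd.read_csv('results-imagenet-a-clean.csv')
test = pd.read_csv('results-imagenet-a.csv')

# Rank models by top-1 in each run so rank differences can be derived.
clean['rank'] = clean['top1'].rank(ascending=False)
test['rank'] = test['top1'].rank(ascending=False)

# Join on model name and compute per-model deltas vs the clean run.
merged = test.merge(clean, on='model', suffixes=('', '_clean'))
merged['top1_diff'] = merged['top1'] - merged['top1_clean']
merged['top5_diff'] = merged['top5'] - merged['top5_clean']
merged['rank_diff'] = merged['rank'] - merged['rank_clean']
print(merged[['model', 'top1', 'top1_diff', 'rank_diff']].head())
```

The same pattern applies to any of the test/clean CSV pairs described below.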

### ImageNet Validation - `results-imagenet.csv`

The standard 50,000-image ImageNet-1k validation set. Model selection during training utilizes this validation set, so it is not a true test set. Question: does anyone have the official ImageNet-1k test set classification labels now that the challenges are done?

### ImageNet-"Real Labels" - `results-imagenet-real.csv`

The usual ImageNet-1k validation set with a fresh set of labels intended to improve on mistakes in the original annotation process.
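As an illustration of how "Real Labels" scoring differs from standard top-1, here is a hedged sketch. It assumes `imagenet_real_labels.json` holds one list of acceptable class indices per validation image, in standard validation order, with an empty list meaning the image is excluded; check the actual file format before relying on this:

```python
import json

# Minimal "Real Labels" scoring sketch. File format is an assumption:
# one list of valid class indices per val image, empty list = excluded.
with open('imagenet_real_labels.json') as f:
    real_labels = json.load(f)

def real_top1(predictions):
    """predictions: iterable of predicted class indices, one per val image."""
    correct, total = 0, 0
    for pred, labels in zip(predictions, real_labels):
        if not labels:  # images with no relabeled ground truth are skipped
            continue
        total += 1
        correct += pred in labels  # correct if prediction is in the label set
    return 100.0 * correct / total
```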

### ImageNetV2 Matched Frequency - `results-imagenetv2-matched-frequency.csv`

An ImageNet test set of 10,000 images sampled from new images roughly 10 years after the original. Care was taken to replicate the original ImageNet curation/sampling process.

### ImageNet-Sketch - `results-sketch.csv`

50,000 non-photographic images (sketches, doodles, mostly monochromatic), or photographs of such, covering all 1000 ImageNet classes.

### ImageNet-Adversarial - `results-imagenet-a.csv`

A collection of 7,500 images covering 200 of the 1000 ImageNet classes. The images are naturally occurring adversarial examples that confuse typical ImageNet classifiers. This is a challenging dataset; a typical ResNet-50 will score 0% top-1.

For a clean validation run on the same 200 classes, see `results-imagenet-a-clean.csv`.
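The index files in this folder can be used to restrict a standard 1000-class model to a test set's label subset. A minimal sketch, assuming `imagenet_a_indices.txt` contains one ImageNet-1k class index per line for the 200 classes present in the test set:

```python
import torch

# Sketch of evaluating a 1000-class model on the ImageNet-A class subset.
# The file format (one class index per line) is an assumption; verify it.
with open('imagenet_a_indices.txt') as f:
    valid_indices = [int(line) for line in f if line.strip()]

def predict_subset(logits: torch.Tensor) -> torch.Tensor:
    """Map (N, 1000) logits to predictions within the 200-class subset."""
    sub_logits = logits[:, valid_indices]  # keep only the valid classes
    return sub_logits.argmax(dim=1)        # positions index into valid_indices
```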

### ImageNet-Rendition - `results-imagenet-r.csv`

Renditions of 200 ImageNet classes, resulting in 30,000 images, for testing robustness.

For a clean validation run on the same 200 classes, see `results-imagenet-r-clean.csv`.

### TODO

## Benchmark

CSV files with a `benchmark` prefix include benchmark numbers for models on various accelerators at different precisions. These are currently only run on an RTX 3090 with AMP, for both inference and training; I intend to add more in the future.
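As a usage sketch, the inference benchmark CSVs for different memory layouts can be joined to estimate the channels-last (NHWC) speedup per model. The `infer_samples_per_sec` column name is an assumption about the benchmark schema; check the file headers first:

```python
import pandas as pd

# Rough comparison of channels-last (NHWC) vs contiguous (NCHW) inference
# throughput. Column names here are assumed, not a guaranteed schema.
nchw = pd.read_csv('benchmark-infer-amp-nchw-pt112-cu113-rtx3090.csv')
nhwc = pd.read_csv('benchmark-infer-amp-nhwc-pt112-cu113-rtx3090.csv')

merged = nhwc.merge(nchw, on='model', suffixes=('_nhwc', '_nchw'))
merged['nhwc_speedup'] = (
    merged['infer_samples_per_sec_nhwc'] / merged['infer_samples_per_sec_nchw']
)
print(merged[['model', 'nhwc_speedup']].sort_values('nhwc_speedup').tail())
```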

## Metadata

CSV files with a `model_metadata` prefix contain extra information about the source training, currently the pretraining dataset and technique (e.g. distillation, SSL, WSL, etc.). Eventually I'd like to add metadata about augmentation, regularization, etc., but that will be a challenge to source consistently.
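A hypothetical example of joining this metadata onto the accuracy results. The metadata column name used for filtering (`pretrain`) is a placeholder, so inspect `model_metadata-in1k.csv` for the real schema before use:

```python
import pandas as pd

# Hypothetical join of accuracy results with training metadata.
# 'pretrain' is a placeholder column name, not a confirmed schema.
results = pd.read_csv('results-imagenet.csv')
meta = pd.read_csv('model_metadata-in1k.csv')

combined = results.merge(meta, on='model', how='left')
# e.g. look at models whose metadata mentions distillation
print(combined[combined['pretrain'].str.contains('dist', na=False)].head())
```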