Aman Arora
f54897cc0b
make wandb optional rather than required, like huggingface_hub
4 years ago
Aman Arora
f13f7508a9
Keep changes minimal and use args.experiment as wandb project name if it exists
4 years ago
Aman Arora
f8bb13f640
Default project name to None
4 years ago
Aman Arora
3f028ebc0f
import wandb in summary.py
4 years ago
Aman Arora
a9e5d9e5ad
log loss as before
4 years ago
Aman Arora
624c9b6949
log to wandb only if using wandb
4 years ago
Aman Arora
00c8e0b8bd
Make use of wandb configurable
4 years ago
Aman Arora
8e6fb861e4
Add wandb support
4 years ago
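The wandb commits above make the dependency optional, mirroring how huggingface_hub is handled. A minimal sketch of that guarded-import pattern (the `setup_wandb`/`log_metrics` helpers and `args` fields are hypothetical, not timm's actual code):

```python
import logging

# Guarded import: wandb stays an optional dependency, like huggingface_hub.
try:
    import wandb
    HAS_WANDB = True
except ImportError:
    HAS_WANDB = False

_logger = logging.getLogger('train')

def setup_wandb(args):
    # Hypothetical helper: initialize wandb only when requested and installed,
    # defaulting the project name to args.experiment if set, else None.
    if args.log_wandb:
        if HAS_WANDB:
            wandb.init(project=args.experiment or None, config=vars(args))
        else:
            _logger.warning("wandb requested but not installed, skipping")

def log_metrics(args, metrics, step):
    # Log to wandb only if actually using wandb.
    if args.log_wandb and HAS_WANDB:
        wandb.log(metrics, step=step)
```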
Ross Wightman
37c71a5609
Some further create_optimizer_v2 tweaks, remove some redundant code, add back safe model str. Benchmark step times per batch.
4 years ago
Ross Wightman
288682796f
Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7
4 years ago
Ross Wightman
a5310a3451
Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update
4 years ago
Ross Wightman
e2e3290fbf
Add '--experiment' to train args for fixed exp name if desired, 'train' not added to output folder if specified.
4 years ago
Ross Wightman
d584e7f617
Support for huggingface hub via create_model and default_cfgs.
...
* improve consistency of model creation helper fns
* add comments to some of the model helpers
* support passing external default_cfgs so they can be sourced from hub
4 years ago
Ross Wightman
2db2d87ff7
Add epoch-repeats arg to multiply the number of dataset passes per epoch. Currently for iterable datasets (read TFDS wrapper) only.
4 years ago
Ross Wightman
0e16d4e9fb
Add benchmark.py script, and update optimizer factory to be more friendly to use outside of argparse interface.
4 years ago
Ross Wightman
01653db104
Missed clip-mode arg for repo train script
4 years ago
Ross Wightman
4f49b94311
Initial AGC impl. Still testing.
4 years ago
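AGC here is the Adaptive Gradient Clipping of Brock et al. (NFNets): each gradient is rescaled so its unit-wise norm stays within a fixed fraction of the matching parameter norm. A sketch of the idea, details assumed rather than taken from the merged code:

```python
import torch

def unitwise_norm(x, eps):
    # Norm per output unit (first dim) for >=2D params, scalar norm otherwise.
    if x.ndim <= 1:
        return x.norm().clamp_min(eps)
    return x.norm(dim=tuple(range(1, x.ndim)), keepdim=True).clamp_min(eps)

def adaptive_clip_grad(parameters, clip_factor=0.01, eps=1e-3):
    for p in parameters:
        if p.grad is None:
            continue
        p_norm = unitwise_norm(p.detach(), eps)
        g_norm = unitwise_norm(p.grad.detach(), 1e-6)
        # Shrink gradients whose unit-wise norm exceeds clip_factor * param norm.
        scale = (p_norm * clip_factor / g_norm).clamp(max=1.0)
        p.grad.detach().mul_(scale)
```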
Ross Wightman
d8e69206be
Merge pull request #419 from rwightman/byob_vgg_models
...
More models, GPU-Efficient Nets, RepVGG, classic VGG, and flexible Byob backbone.
4 years ago
Ross Wightman
0356e773f5
Default to native PyTorch AMP instead of APEX amp. Too many APEX issues cropping up lately.
4 years ago
Csaba Kertesz
5114c214fc
Change the Python interpreter to Python 3.x in the scripts
4 years ago
Ross Wightman
4203efa36d
Fix #387 so that checkpoint saver works with max history of 1. Add checkpoint-hist arg to train.py.
4 years ago
Ross Wightman
38d8f67570
Fix potential issue with change to num_classes arg in train/validate.py defaulting to None (rely on model def / default_cfg)
4 years ago
Ross Wightman
5d4c3d0af3
Add enhanced ParserImageInTar that can read images from tars within tars, folders with multiple tars, etc. Additional comment cleanup.
4 years ago
Ross Wightman
9d5d4b8df6
Fix silly train.py typo during dataset work
4 years ago
Ross Wightman
855d6cc217
More dataset work including factories and a tensorflow datasets (TFDS) wrapper
...
* Add parser/dataset factory methods for more flexible dataset & parser creation
* Add dataset parser that wraps TFDS image classification datasets
* Tweak num_classes handling bug for 21k models
* Add initial deit models so they can be benchmarked in next csv results runs
4 years ago
Ross Wightman
fd9061dbf7
Remove debug print from train.py
4 years ago
Ross Wightman
59ec7e6a53
Merge branch 'master' into imagenet21k_datasets_more
4 years ago
Csaba Kertesz
e42b140ade
Add --input-size option to scripts to specify full input dimensions from command-line
4 years ago
Ross Wightman
231d04e91a
ResNetV2 pre-act and non-preact model, w/ BiT pretrained weights and support for ViT R50 model. Tweaks for in21k num_classes passing. More to do... tests failing.
4 years ago
Ross Wightman
de6046e213
Initial commit for dataset / parser reorg to support additional datasets / types
4 years ago
Ross Wightman
2ed8f24715
A few more changes for 0.3.2 maint release. Linear layer change for mobilenetv3 and inception_v3, support no bias for linear wrapper.
4 years ago
Ross Wightman
460eba7f24
Work around casting issue with combination of native torch AMP and torchscript for Linear layers
4 years ago
Ross Wightman
27bbc70d71
Add back old ModelEma and rename new one to ModelEmaV2 to avoid compat breaks in dependent code. Shuffle train script, add a few comments, remove DataParallel support, support experimental torchscript training.
4 years ago
Ross Wightman
9214ca0716
Simplifying EMA...
4 years ago
Ross Wightman
80078c47bb
Add Adafactor and Adahessian optimizers, cleanup optimizer arg passing, add gradient clipping support.
4 years ago
Ross Wightman
47a7b3b5b1
More flexible mixup mode, add 'half' mode.
4 years ago
Ross Wightman
532e3b417d
Reorg of utils into separate modules
4 years ago
Ross Wightman
751b0bba98
Add global_pool (--gp) arg changes to allow passing 'fast' easily for train/validate to avoid channels_last issue with AdaptiveAvgPool
4 years ago
Ross Wightman
9c297ec67d
Cleanup Apex vs native AMP scaler state save/load. Cleanup CheckpointSaver a bit.
4 years ago
Ross Wightman
c2cd1a332e
Improve torch amp support and add channels_last support for train/validate scripts
4 years ago
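The native AMP path added across these commits follows the standard torch.cuda.amp recipe (autocast forward, scaled backward), combined here with channels_last. A minimal sketch, not timm's actual train loop:

```python
import torch

model = torch.nn.Conv2d(3, 8, 3).cuda().to(memory_format=torch.channels_last)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()  # native AMP, torch >= 1.6

def train_step(input, target, loss_fn):
    input = input.cuda().to(memory_format=torch.channels_last)
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():    # forward in mixed precision
        loss = loss_fn(model(input), target.cuda())
    scaler.scale(loss).backward()      # scale loss to avoid fp16 underflow
    scaler.step(optimizer)             # unscales grads, then optimizer.step()
    scaler.update()
    return loss.detach()
```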
datamining99
5f563ca4df
fix save_checkpoint bug with native amp
4 years ago
datamining99
d98967ed5d
add support for native torch AMP in torch 1.6
4 years ago
Ross Wightman
8c9814e3f5
Final cleanup of mixup/cutmix. Element/batch modes working with both collate (prefetcher active) and without prefetcher.
4 years ago
Ross Wightman
f471c17c9d
More cutmix/mixup overhaul, ready to kick off some trials.
4 years ago
Ross Wightman
92f2d0d65d
Merge branch 'master' into cutmix. Fixup a few issues.
4 years ago
Ross Wightman
fa28067704
Add more augmentation arguments, including a no_aug disable flag. Fix #209
4 years ago
Ross Wightman
7995295968
Merge branch 'logger' into features. Change 'logger' to '_logger'.
4 years ago
Ross Wightman
1998bd3180
Merge branch 'feature/AB/logger' of https://github.com/antoinebrl/pytorch-image-models into logger
4 years ago
Ross Wightman
6c17d57a2c
Fix some attributions, add copyrights to some file docstrings
4 years ago
Antoine Broyelle
78fa0772cc
Leverage Python hierarchical logger
...
With this update one can tune the kind of logs generated by timm, but training and inference traces are unchanged.
4 years ago
Ross Wightman
6441e9cc1b
Fix memory_efficient mode for DenseNets. Add AntiAliasing (Blur) support for DenseNets and create one test model. Add lr cycle/mul params to train args.
5 years ago
AFLALO, Jonathan Isaac
a7f570c9b7
added MultiEpochsDataLoader
5 years ago
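MultiEpochsDataLoader avoids tearing down and respawning DataLoader workers at every epoch boundary by looping the batch sampler forever and slicing one epoch's worth of batches per `__iter__`. A sketch of the pattern, assumed close to but not verified against the merged code:

```python
import torch

class _RepeatSampler:
    # Wraps a batch sampler so iteration never terminates.
    def __init__(self, sampler):
        self.sampler = sampler

    def __iter__(self):
        while True:
            yield from iter(self.sampler)

class MultiEpochsDataLoader(torch.utils.data.DataLoader):
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        # Bypass DataLoader's guard against reassigning batch_sampler.
        self._DataLoader__initialized = False
        self.batch_sampler = _RepeatSampler(self.batch_sampler)
        self._DataLoader__initialized = True
        self.iterator = super().__iter__()   # workers spawn once, here

    def __len__(self):
        return len(self.batch_sampler.sampler)

    def __iter__(self):
        # Yield exactly one epoch of batches from the persistent iterator.
        for _ in range(len(self)):
            yield next(self.iterator)
```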
Ross Wightman
13cf68850b
Remove poorly named metrics from torch imagenet example origins. Use top1/top5 in csv output for consistency with existing validation results files, acc elsewhere. Fixes #111
5 years ago
Ross Wightman
27b3680d49
Revamp LR noise, move logic to scheduler base. Fixup PlateauLRScheduler and add it as an option.
5 years ago
Ross Wightman
514b0938c4
Experimenting with per-epoch learning rate noise w/ step scheduler
5 years ago
Ross Wightman
43225d110c
Unify drop connect vs drop path under 'drop path' name, switch all EfficientNet/MobilenetV3 refs to 'drop_path'. Update factory to handle new drop args.
5 years ago
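'Drop path' (a.k.a. stochastic depth) zeroes entire residual branches per sample during training, rescaling survivors so the expected output is unchanged. A sketch of the op the renamed drop_path arg controls:

```python
import torch

def drop_path(x, drop_prob=0., training=False):
    # Stochastic depth: drop the whole residual branch for a random
    # subset of samples; scale the kept ones by 1/keep_prob.
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1. - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)   # per-sample mask, broadcast
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    return x * mask / keep_prob
```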
Ross Wightman
b3cb5f3275
Working on CutMix impl as per #8, integrating with Mixup, currently experimenting...
5 years ago
Andrew Lavin
b72013def8
Added command-line argument validation-batch-size-multiplier with default set to 1.
5 years ago
Ross Wightman
5b7cc16ac9
Add warning about using sync-bn with zero initialized BN layers. Fixes #54
5 years ago
Ross Wightman
d9a6a9d0af
Merge pull request #74 from rwightman/augmix-jsd
...
AugMix, JSD loss, SplitBatchNorm (Auxiliary BN), and more
5 years ago
Ross Wightman
3eb4a96eda
Update AugMix, JSD, etc comments and references
5 years ago
Ross Wightman
7547119891
Add SplitBatchNorm. AugMix, Rand/AutoAugment, Split (Aux) BatchNorm, Jensen-Shannon Divergence, RandomErasing all working together
5 years ago
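The JSD piece pairs a clean split with augmented splits of the same images: cross-entropy is computed on the clean logits, plus a Jensen-Shannon consistency term pulling all splits toward their mean distribution. A hedged sketch (constants and batch layout assumed, not copied from timm):

```python
import torch
import torch.nn.functional as F

def jsd_cross_entropy(logits, target, num_splits=3, alpha=12.0):
    # Batch assumed laid out as [clean | aug1 | aug2]; target covers one split.
    split_size = logits.shape[0] // num_splits
    splits = torch.split(logits, split_size)
    ce = F.cross_entropy(splits[0], target[:split_size])
    probs = [F.softmax(s, dim=1) for s in splits]
    log_mix = torch.clamp(torch.stack(probs).mean(dim=0), 1e-7, 1.).log()
    # Mean of KL(p_i || mixture) over splits = Jensen-Shannon divergence.
    jsd = sum(F.kl_div(log_mix, p, reduction='batchmean') for p in probs) / num_splits
    return ce + alpha * jsd
```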
Ross Wightman
40fea63ebe
Add checkpoint averaging script. Add headers, shebangs, exec perms to all scripts
5 years ago
Ross Wightman
4666cc9aed
Add --pin-mem arg to enable dataloader pin_memory (showing more benefit in some scenarios now), also add --torchscript arg to validate.py for testing models with jit.script
5 years ago
Ross Wightman
232ab7fb12
Working on an implementation of AugMix with JensenShannonDivergence loss that's compatible with my AutoAugment and RandAugment impl
5 years ago
Ross Wightman
5719b493ad
Missed updating dist-bn logic for EMA model
5 years ago
Ross Wightman
a435ea1327
Change reduce_bn to distribute_bn, add ability to choose between broadcast and reduce (mean). Add crop_pct arg to allow selecting validation crop while training.
5 years ago
Ross Wightman
3bff2b21dc
Add support for keeping running bn stats the same across distributed training nodes before eval/save
5 years ago
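The idea behind these two commits: before eval/save, make BN running stats identical on every rank, either by broadcasting rank 0's buffers or by averaging (reduce) across ranks. A sketch of the mechanism:

```python
import torch.distributed as dist

def distribute_bn(model, world_size, reduce=False):
    # Sync BatchNorm running stats across distributed training nodes.
    for name, buf in model.named_buffers(recurse=True):
        if 'running_mean' in name or 'running_var' in name:
            if reduce:
                # Average stats across all ranks.
                dist.all_reduce(buf, op=dist.ReduceOp.SUM)
                buf /= float(world_size)
            else:
                # Force all ranks to adopt rank 0's stats.
                dist.broadcast(buf, 0)
```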
Ross Wightman
1f39d15f15
Allow float decay epochs arg for training, works out with step lr math
5 years ago
Ross Wightman
7b83e67f77
Pass drop connect arg through to EfficientNet models
5 years ago
Ross Wightman
4748c6dff2
Fix non-prefetch variant of Mixup. Fixes #50
5 years ago
Ross Wightman
187ecbafbe
Add support for loading args from yaml file (and saving them with each experiment)
5 years ago
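The usual shape of this feature is a two-stage parse: read --config first, feed the YAML contents in as parser defaults so explicit command-line flags still win, then dump the resolved args next to the experiment. A sketch under those assumptions (the --lr/--epochs flags are placeholders):

```python
import argparse
import yaml

config_parser = argparse.ArgumentParser(add_help=False)
config_parser.add_argument('-c', '--config', default='', metavar='FILE')

parser = argparse.ArgumentParser(description='Training')
parser.add_argument('--lr', type=float, default=0.01)
parser.add_argument('--epochs', type=int, default=90)

def parse_args():
    args_config, remaining = config_parser.parse_known_args()
    if args_config.config:
        with open(args_config.config) as f:
            # YAML values become defaults; CLI flags override them.
            parser.set_defaults(**yaml.safe_load(f))
    args = parser.parse_args(remaining)
    # Serialized copy to save with the experiment for reproducibility.
    return args, yaml.safe_dump(vars(args), default_flow_style=False)
```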
Ross Wightman
b750b76f67
More AutoAugment work. Ready to roll...
5 years ago
Ross Wightman
3d9c8a6489
Add support for new AMP checkpointing support w/ amp.state_dict
5 years ago
Ross Wightman
fac58f609a
Add RAdam, NovoGrad, Lookahead, and AdamW optimizers, a few ResNet tweaks and scheduler factory tweak.
...
* Add some of the trendy new optimizers. Decent results but not clearly better than the standards.
* Can create a None scheduler for constant LR
* ResNet defaults to zero_init of last BN in residual
* add resnet50d config
5 years ago
Ross Wightman
66634d2200
Add support to split random erasing blocks into randomly selected number with --recount arg. Fix random selection of aspect ratios.
5 years ago
Ross Wightman
e7c8a37334
Make min-lr and cooldown-epochs cmdline args, change dash in color_jitter arg for consistency
5 years ago
Ross Wightman
c6b32cbe73
A number of tweaks to arguments, epoch handling, config
...
* reorganize train args
* allow resolve_data_config to be used with dict args, not just argparse
* stop incrementing epoch before save, more consistent naming vs csv, etc
* update resume and start epoch handling to match above
* stop auto-incrementing epoch in scheduler
5 years ago
Ross Wightman
b20bb58284
Distributed tweaks
...
* Support PyTorch native DDP as fallback if APEX not present
* Support SyncBN for both APEX and Torch native (if torch >= 1.1)
* EMA model does not appear to need DDP wrapper, no gradients, updated from wrapped model
5 years ago
Ross Wightman
6fc886acaf
Remove all prints, change most to logging calls, tweak alignment of batch logs, improve setup.py
5 years ago
Ross Wightman
aa4354f466
Big re-org, working towards making pip/module as 'timm'
5 years ago
Ross Wightman
7dab6d1ec7
Default to img_size in model default_cfg, defer output folder creation until later in the init sequence
6 years ago
Ross Wightman
9bcd65181b
Add exponential moving average for model weights + few other additions and cleanup
...
* ModelEma class added to track an EMA set of weights for the model being trained
* EMA handling added to train, validation and clean_checkpoint scripts
* Add multi checkpoint or multi-model validation support to validate.py
* Add syncbn option (APEX) to train script for experimentation
* Cleanup interface of CheckpointSaver while adding ema functionality
6 years ago
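The EMA tracker keeps a deep-copied shadow model and, after each optimizer step, moves it toward the live weights: ema = decay * ema + (1 - decay) * w. A simplified sketch of such a class, not the exact merged version:

```python
import copy
import torch

class ModelEma:
    def __init__(self, model, decay=0.9999):
        # Shadow copy, kept in eval mode and outside autograd.
        self.module = copy.deepcopy(model).eval()
        self.decay = decay
        for p in self.module.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        ema_state = self.module.state_dict()
        for k, v in model.state_dict().items():
            e = ema_state[k]
            if e.dtype.is_floating_point:
                e.mul_(self.decay).add_(v, alpha=1. - self.decay)
            else:
                e.copy_(v)  # ints (e.g. BN num_batches_tracked) copied as-is
```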
Ross Wightman
e6c14427c0
More appropriate/correct loss name
6 years ago
Zhun Zhong
127487369f
Fix bug for prefetcher
...
Set input and target to CUDA when not using the prefetcher.
6 years ago
Ross Wightman
4d2056722a
Mixup and prefetcher improvements
...
* Do mixup in custom collate fn if prefetcher enabled, reduces performance impact
* Move mixup code to own file
* Add arg to disable prefetcher
* Fix no cuda transfer when prefetcher off
* Random erasing when prefetcher off wasn't changed to match new args, fixed
* Default random erasing to off (prob = 0.) for train
6 years ago
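For the collate-time path described above, batch-mode mixup reduces to blending the batch with its own reversal under one Beta-sampled lambda. A minimal sketch of that mode only (not the prefetcher-integrated variant):

```python
import numpy as np
import torch

def mixup_batch(input, target, num_classes, alpha=0.2):
    # One lambda for the whole batch, drawn from Beta(alpha, alpha).
    lam = float(np.random.beta(alpha, alpha))
    # Mix each sample with its mirror-image partner in the batch.
    input = input.mul(lam).add_(input.flip(0), alpha=1. - lam)
    onehot = torch.zeros(target.size(0), num_classes, device=target.device)
    onehot.scatter_(1, target.unsqueeze(1), 1.)
    target = onehot * lam + onehot.flip(0) * (1. - lam)
    return input, target
```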
Ross Wightman
780c0a96a4
Change args for RandomErasing so only one required for pixel/color mode
6 years ago
Ross Wightman
76539d905e
Some transform/data/loader refactoring, hopefully didn't break things
...
* factor out data related constants to own file
* move data related config helpers to own file
* add a variant of RandomResizeCrop that randomizes interpolation method
* remove old Numpy version of RandomErasing
* cleanup torch version of RandomErasing and use it in either GPU loader batch mode or single-image CPU transform
6 years ago
Ross Wightman
fee607edf6
Mixup implementation in progress
...
* initial impl w/ label smoothing converging, but needs more testing
6 years ago
Ross Wightman
8fbd62a169
Exclude batchnorm and bias params from weight_decay by default
6 years ago
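Excluding norm and bias parameters from weight decay is done by building two optimizer param groups keyed on parameter shape and name. A sketch of the common pattern (helper name assumed):

```python
import torch

def add_weight_decay(model, weight_decay=1e-4, skip_list=()):
    decay, no_decay = [], []
    for name, param in model.named_parameters():
        if not param.requires_grad:
            continue
        # 1D params (BN scales/shifts) and biases get no decay.
        if param.ndim <= 1 or name.endswith('.bias') or name in skip_list:
            no_decay.append(param)
        else:
            decay.append(param)
    return [
        {'params': no_decay, 'weight_decay': 0.},
        {'params': decay, 'weight_decay': weight_decay},
    ]

# usage: torch.optim.SGD(add_weight_decay(model), lr=0.1, momentum=0.9)
```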
Ross Wightman
bc264269c9
Morph mnasnet impl into a generic mobilenet that covers Mnasnet, MobileNetV1/V2, ChamNet, FBNet, and related
...
* add an alternate RMSprop opt that applies eps like TF
* add bn params for passing through alternates and changing defaults to TF style
6 years ago
Ross Wightman
e9c7961efc
Fix pooling in mnasnet, more sensible default for AMP opt level
6 years ago
Ross Wightman
0562b91c38
Add per model crop pct, interpolation defaults, tie it all together
...
* create one resolve fn to pull together model defaults + cmd line args
* update attribution comments in some models
* test update train/validation/inference scripts
6 years ago
Ross Wightman
c328b155e9
Random erasing crash fix and args pass through
6 years ago
Ross Wightman
9c3859fb9c
Uniform pretrained model handling.
...
* All models have 'default_cfgs' dict
* load/resume/pretrained helpers factored out
* pretrained load operates on state_dict based on default_cfg
* test all models in validate
* schedule, optim factory factored out
* test time pool wrapper applied based on default_cfg
6 years ago
Ross Wightman
f1cd1a5ce3
Cleanup CheckpointSaver, add support for increasing or decreasing metric, switch to prec1 metric in train loop
6 years ago
Ross Wightman
5180f94c7e
Distributed (multi-process) train, multi-gpu single process train, and NVIDIA AMP support
6 years ago
Ross Wightman
45cde6f0c7
Improve creation of data pipeline with prefetch enabled vs disabled, fixup inception_res_v2 and dpn models
6 years ago
Ross Wightman
2295cf56c2
Add some Nvidia performance enhancements (prefetch loader, fast collate), and refactor some of training and model fact/transforms
6 years ago
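fast collate skips default_collate by stacking uint8 HWC numpy images straight into a preallocated NCHW tensor; the prefetch loader then transfers and normalizes on the GPU, overlapped with compute. A rough sketch of the collate half:

```python
import numpy as np
import torch

def fast_collate(batch):
    # batch: list of (uint8 HWC numpy image, int label) pairs.
    targets = torch.tensor([b[1] for b in batch], dtype=torch.int64)
    h, w = batch[0][0].shape[:2]
    tensor = torch.zeros((len(batch), 3, h, w), dtype=torch.uint8)
    for i, (img, _) in enumerate(batch):
        # HWC -> CHW without a float conversion; stays uint8 until the GPU.
        tensor[i] += torch.from_numpy(np.rollaxis(img, 2))
    return tensor, targets
```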
Ross Wightman
9d927a389a
Add adabound, random erasing
6 years ago