Commit Graph

150 Commits (475ecdfa3d369b6d482287f2467ce101ce5c276c)

Author SHA1 Message Date
Ross Wightman 288682796f Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7
4 years ago
Ross Wightman a5310a3451 Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update
4 years ago
Ross Wightman e2e3290fbf Add '--experiment' to train args for fixed exp name if desired, 'train' not added to output folder if specified.
4 years ago
Ross Wightman d584e7f617 Support for huggingface hub via create_model and default_cfgs.
4 years ago
Ross Wightman 2db2d87ff7 Add epoch-repeats arg to multiply the number of dataset passes per epoch. Currently for iterable datasets (read TFDS wrapper) only.
4 years ago
Ross Wightman 0e16d4e9fb Add benchmark.py script, and update optimizer factory to be more friendly to use outside of argparse interface.
4 years ago
Ross Wightman 01653db104 Missed clip-mode arg for repo train script
4 years ago
Ross Wightman 4f49b94311 Initial AGC impl. Still testing.
4 years ago
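The AGC (Adaptive Gradient Clipping) technique this commit begins implementing, from the NFNets line of work, clips each gradient relative to the norm of its parameter rather than to a fixed threshold. A minimal, framework-agnostic sketch of the per-tensor idea (function name and the use of plain lists in place of tensors are illustrative, not timm's actual API):

```python
import math

def agc_clip(param, grad, clipping=0.01, eps=1e-3):
    """Adaptive Gradient Clipping, simplified to a single flat tensor.

    The gradient norm is capped at `clipping` times the parameter norm,
    with `eps` as a floor so tiny parameters don't zero their gradients.
    """
    p_norm = max(math.sqrt(sum(v * v for v in param)), eps)
    g_norm = math.sqrt(sum(v * v for v in grad))
    max_norm = clipping * p_norm
    if g_norm > max_norm:
        scale = max_norm / g_norm
        return [g * scale for g in grad]
    return list(grad)
```

The real implementation applies this per "unit" (e.g. per output channel) rather than per whole tensor.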
Ross Wightman d8e69206be Merge pull request #419 from rwightman/byob_vgg_models
4 years ago
Ross Wightman 0356e773f5 Default to native PyTorch AMP instead of APEX amp. Too many APEX issues cropping up lately.
4 years ago
Csaba Kertesz 5114c214fc Change the Python interpreter to Python 3.x in the scripts
4 years ago
Ross Wightman 4203efa36d Fix #387 so that checkpoint saver works with max history of 1. Add checkpoint-hist arg to train.py.
4 years ago
Ross Wightman 38d8f67570 Fix potential issue with change to num_classes arg in train/validate.py defaulting to None (rely on model def / default_cfg)
4 years ago
Ross Wightman 5d4c3d0af3 Add enhanced ParserImageInTar that can read images from tars within tars, folders with multiple tars, etc. Additional comment cleanup.
4 years ago
Ross Wightman 9d5d4b8df6 Fix silly train.py typo during dataset work
4 years ago
Ross Wightman 855d6cc217 More dataset work including factories and a tensorflow datasets (TFDS) wrapper
4 years ago
Ross Wightman fd9061dbf7 Remove debug print from train.py
4 years ago
Ross Wightman 59ec7e6a53 Merge branch 'master' into imagenet21k_datasets_more
4 years ago
Csaba Kertesz e42b140ade Add --input-size option to scripts to specify full input dimensions from command-line
4 years ago
Ross Wightman 231d04e91a ResNetV2 pre-act and non-preact model, w/ BiT pretrained weights and support for ViT R50 model. Tweaks for in21k num_classes passing. More to do... tests failing.
4 years ago
Ross Wightman de6046e213 Initial commit for dataset / parser reorg to support additional datasets / types
4 years ago
Ross Wightman 2ed8f24715 A few more changes for 0.3.2 maint release. Linear layer change for mobilenetv3 and inception_v3, support no bias for linear wrapper.
4 years ago
Ross Wightman 460eba7f24 Work around casting issue with combination of native torch AMP and torchscript for Linear layers
4 years ago
Ross Wightman 27bbc70d71 Add back old ModelEma and rename new one to ModelEmaV2 to avoid compat breaks in dependent code. Shuffle train script, add a few comments, remove DataParallel support, support experimental torchscript training.
4 years ago
Ross Wightman 9214ca0716 Simplifying EMA...
4 years ago
Ross Wightman 80078c47bb Add Adafactor and Adahessian optimizers, cleanup optimizer arg passing, add gradient clipping support.
4 years ago
Ross Wightman 47a7b3b5b1 More flexible mixup mode, add 'half' mode.
4 years ago
Ross Wightman 532e3b417d Reorg of utils into separate modules
4 years ago
Ross Wightman 751b0bba98 Add global_pool (--gp) arg changes to allow passing 'fast' easily for train/validate to avoid channels_last issue with AdaptiveAvgPool
4 years ago
Ross Wightman 9c297ec67d Cleanup Apex vs native AMP scaler state save/load. Cleanup CheckpointSaver a bit.
4 years ago
Ross Wightman c2cd1a332e Improve torch amp support and add channels_last support for train/validate scripts
4 years ago
datamining99 5f563ca4df fix save_checkpoint bug with native amp
4 years ago
datamining99 d98967ed5d add support for native torch AMP in torch 1.6
4 years ago
Ross Wightman 8c9814e3f5 Final cleanup of mixup/cutmix. Element/batch modes working with both collate (prefetcher active) and without prefetcher.
4 years ago
Ross Wightman f471c17c9d More cutmix/mixup overhaul, ready to kick-off some trials.
4 years ago
Ross Wightman 92f2d0d65d Merge branch 'master' into cutmix. Fixup a few issues.
4 years ago
Ross Wightman fa28067704 Add more augmentation arguments, including a no_aug disable flag. Fix #209
4 years ago
Ross Wightman 7995295968 Merge branch 'logger' into features. Change 'logger' to '_logger'.
4 years ago
Ross Wightman 1998bd3180 Merge branch 'feature/AB/logger' of https://github.com/antoinebrl/pytorch-image-models into logger
4 years ago
Ross Wightman 6c17d57a2c Fix some attributions, add copyrights to some file docstrings
4 years ago
Antoine Broyelle 78fa0772cc Leverage python hierarchical logger
4 years ago
Ross Wightman 6441e9cc1b Fix memory_efficient mode for DenseNets. Add AntiAliasing (Blur) support for DenseNets and create one test model. Add lr cycle/mul params to train args.
5 years ago
AFLALO, Jonathan Isaac a7f570c9b7 added MultiEpochsDataLoader
5 years ago
Ross Wightman 13cf68850b Remove poorly named metrics from torch imagenet example origins. Use top1/top5 in csv output for consistency with existing validation results files, acc elsewhere. Fixes #111
5 years ago
Ross Wightman 27b3680d49 Revamp LR noise, move logic to scheduler base. Fixup PlateauLRScheduler and add it as an option.
5 years ago
Ross Wightman 514b0938c4 Experimenting with per-epoch learning rate noise w/ step scheduler
5 years ago
Ross Wightman 43225d110c Unify drop connect vs drop path under 'drop path' name, switch all EfficientNet/MobilenetV3 refs to 'drop_path'. Update factory to handle new drop args.
5 years ago
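The "drop path" naming unified by this commit refers to stochastic depth: during training, an entire residual branch is zeroed with some probability, and surviving branches are rescaled so the expected output is unchanged. A minimal sketch on plain Python lists (the function shape mirrors the common idiom but is illustrative, not the library's code):

```python
import random

def drop_path(x, drop_prob=0.0, training=True, rng=random):
    """Stochastic depth: zero the whole branch with probability
    `drop_prob` during training, scaling survivors by 1/keep_prob."""
    if drop_prob == 0.0 or not training:
        return list(x)
    keep_prob = 1.0 - drop_prob
    if rng.random() < drop_prob:
        return [0.0 for _ in x]
    return [v / keep_prob for v in x]
```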
Ross Wightman b3cb5f3275 Working on CutMix impl as per #8, integrating with Mixup, currently experimenting...
5 years ago
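The Mixup half of the CutMix/Mixup integration above blends two samples and their one-hot targets with a Beta-distributed coefficient. A minimal sketch under the assumption of flat feature lists (the function name is hypothetical; timm's version operates on batches and supports several mixing modes):

```python
import random

def mixup_pair(x1, y1, x2, y2, alpha=0.2, rng=random):
    """Mixup of one sample pair: draw lam ~ Beta(alpha, alpha) and
    linearly blend both the inputs and the one-hot targets."""
    lam = rng.betavariate(alpha, alpha) if alpha > 0 else 1.0
    x = [lam * a + (1 - lam) * b for a, b in zip(x1, x2)]
    y = [lam * a + (1 - lam) * b for a, b in zip(y1, y2)]
    return x, y, lam
```

CutMix differs only in how the blend is applied: a rectangular patch of `x2` is pasted into `x1`, with `lam` set to the unpatched area fraction.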
Andrew Lavin b72013def8 Added commandline argument validation-batch-size-multiplier with default set to 1.
5 years ago
Ross Wightman 5b7cc16ac9 Add warning about using sync-bn with zero initialized BN layers. Fixes #54
5 years ago
Ross Wightman d9a6a9d0af Merge pull request #74 from rwightman/augmix-jsd
5 years ago
Ross Wightman 3eb4a96eda Update AugMix, JSD, etc comments and references
5 years ago
Ross Wightman 7547119891 Add SplitBatchNorm. AugMix, Rand/AutoAugment, Split (Aux) BatchNorm, Jensen-Shannon Divergence, RandomErasing all working together
5 years ago
Ross Wightman 40fea63ebe Add checkpoint averaging script. Add headers, shebangs, exec perms to all scripts
5 years ago
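Checkpoint averaging, as added by the commit above, simply averages the weights of several saved checkpoints key by key. A minimal sketch where "tensors" are plain lists of floats (illustrative; the real script loads torch state dicts and handles dtype details):

```python
def average_checkpoints(state_dicts):
    """Average several checkpoints' weights elementwise, key by key.
    All state dicts are assumed to share the same keys and shapes."""
    n = len(state_dicts)
    return {
        k: [sum(sd[k][i] for sd in state_dicts) / n
            for i in range(len(state_dicts[0][k]))]
        for k in state_dicts[0]
    }
```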
Ross Wightman 4666cc9aed Add --pin-mem arg to enable dataloader pin_memory (showing more benefit in some scenarios now), also add --torchscript arg to validate.py for testing models with jit.script
5 years ago
Ross Wightman 232ab7fb12 Working on an implementation of AugMix with JensenShannonDivergence loss that's compatible with my AutoAugment and RandAugment impl
5 years ago
Ross Wightman 5719b493ad Missed update dist-bn logic for EMA model
5 years ago
Ross Wightman a435ea1327 Change reduce_bn to distribute_bn, add ability to choose between broadcast and reduce (mean). Add crop_pct arg to allow selecting validation crop while training.
5 years ago
Ross Wightman 3bff2b21dc Add support for keeping running bn stats the same across distributed training nodes before eval/save
5 years ago
Ross Wightman 1f39d15f15 Allow float decay epochs arg for training, works out with step lr math
5 years ago
Ross Wightman 7b83e67f77 Pass drop connect arg through to EfficientNet models
5 years ago
Ross Wightman 4748c6dff2 Fix non-prefetch variant of Mixup. Fixes #50
5 years ago
Ross Wightman 187ecbafbe Add support for loading args from yaml file (and saving them with each experiment)
5 years ago
Ross Wightman b750b76f67 More AutoAugment work. Ready to roll...
5 years ago
Ross Wightman 3d9c8a6489 Add support for new AMP checkpointing support w/ amp.state_dict
5 years ago
Ross Wightman fac58f609a Add RAdam, NovoGrad, Lookahead, and AdamW optimizers, a few ResNet tweaks and scheduler factory tweak.
5 years ago
Ross Wightman 66634d2200 Add support to split random erasing blocks into randomly selected number with --recount arg. Fix random selection of aspect ratios.
5 years ago
Ross Wightman e7c8a37334 Make min-lr and cooldown-epochs cmdline args, change dash in color_jitter arg for consistency
5 years ago
Ross Wightman c6b32cbe73 A number of tweaks to arguments, epoch handling, config
5 years ago
Ross Wightman b20bb58284 Distributed tweaks
5 years ago
Ross Wightman 6fc886acaf Remove all prints, change most to logging calls, tweak alignment of batch logs, improve setup.py
5 years ago
Ross Wightman aa4354f466 Big re-org, working towards making pip/module as 'timm'
5 years ago
Ross Wightman 7dab6d1ec7 Default to img_size in model default_cfg, defer output folder creation until later in the init sequence
6 years ago
Ross Wightman 9bcd65181b Add exponential moving average for model weights + few other additions and cleanup
6 years ago
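The model-weight EMA added here keeps a shadow copy of the weights updated as an exponential moving average after each optimizer step, which is then used for evaluation. The core update, sketched on flat lists (function name illustrative; timm wraps this in a ModelEma class that tracks a full model copy):

```python
def ema_update(ema_weights, model_weights, decay=0.999):
    """One EMA step: ema = decay * ema + (1 - decay) * model."""
    return [decay * e + (1.0 - decay) * m
            for e, m in zip(ema_weights, model_weights)]
```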
Ross Wightman e6c14427c0 More appropriate/correct loss name
6 years ago
Zhun Zhong 127487369f Fix bug for prefetcher
6 years ago
Ross Wightman 4d2056722a Mixup and prefetcher improvements
6 years ago
Ross Wightman 780c0a96a4 Change args for RandomErasing so only one required for pixel/color mode
6 years ago
Ross Wightman 76539d905e Some transform/data/loader refactoring, hopefully didn't break things
6 years ago
Ross Wightman fee607edf6 Mixup implementation in progress
6 years ago
Ross Wightman 8fbd62a169 Exclude batchnorm and bias params from weight_decay by default
6 years ago
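Excluding batchnorm and bias parameters from weight decay, as this commit makes the default, is usually done by splitting parameters into two optimizer groups. A minimal sketch that classifies by name and dimensionality (the helper and the keyword list are illustrative assumptions, not timm's exact logic):

```python
def split_weight_decay_params(named_params, skip_keywords=("bias", "bn")):
    """Split (name, shape) pairs into decay / no-decay groups:
    1-d parameters (biases, norm scales) and matching names skip decay."""
    decay, no_decay = [], []
    for name, shape in named_params:
        if len(shape) <= 1 or any(k in name for k in skip_keywords):
            no_decay.append(name)
        else:
            decay.append(name)
    return decay, no_decay
```

In practice the two groups are passed to the optimizer as separate param groups, with `weight_decay=0` on the second.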
Ross Wightman bc264269c9 Morph mnasnet impl into a generic mobilenet that covers Mnasnet, MobileNetV1/V2, ChamNet, FBNet, and related
6 years ago
Ross Wightman e9c7961efc Fix pooling in mnasnet, more sensible default for AMP opt level
6 years ago
Ross Wightman 0562b91c38 Add per model crop pct, interpolation defaults, tie it all together
6 years ago
Ross Wightman c328b155e9 Random erasing crash fix and args pass through
6 years ago
Ross Wightman 9c3859fb9c Uniform pretrained model handling.
6 years ago
Ross Wightman f1cd1a5ce3 Cleanup CheckpointSaver, add support for increasing or decreasing metric, switch to prec1 metric in train loop
6 years ago
Ross Wightman 5180f94c7e Distributed (multi-process) train, multi-gpu single process train, and NVIDIA AMP support
6 years ago
Ross Wightman 45cde6f0c7 Improve creation of data pipeline with prefetch enabled vs disabled, fixup inception_res_v2 and dpn models
6 years ago
Ross Wightman 2295cf56c2 Add some Nvidia performance enhancements (prefetch loader, fast collate), and refactor some of training and model fact/transforms
6 years ago
Ross Wightman 9d927a389a Add adabound, random erasing
6 years ago
Ross Wightman 1577c52976 Resnext added, changes to bring it and seresnet in line with rest of models
6 years ago
Ross Wightman 31055466fc Fixup validate/inference script args, fix senet init for better test accuracy
6 years ago
Ross Wightman b1a5a71151 Update schedulers
6 years ago
Ross Wightman b5255960d9 Tweaking tanh scheduler, senet weight init (for BN), transform defaults
6 years ago
Ross Wightman a336e5bff3 Minor updates
6 years ago
Ross Wightman cf0c280e1b Cleanup transforms, add custom schedulers, tweak senet34 model
6 years ago
Ross Wightman c57717d325 Fix tta train bug, improve logging
6 years ago
Ross Wightman 72b4d162a2 Increase training performance
6 years ago
Ross Wightman 5855b07ae0 Initial commit, putting some old pieces together
6 years ago