Commit Graph

100 Commits (2bb65bd8750755112da11124c2fdc4895bd971a8)

Author SHA1 Message Date
Ross Wightman 288682796f Update benchmark script to add precision arg. Fix some downstream (DeiT) compat issues with latest changes. Bump version to 0.4.7
4 years ago
Ross Wightman a5310a3451 Merge remote-tracking branch 'origin/benchmark-fixes-vit_hybrids' into pit_and_vit_update
4 years ago
Ross Wightman e2e3290fbf Add '--experiment' to train args for fixed exp name if desired, 'train' not added to output folder if specified.
4 years ago
Ross Wightman d584e7f617 Support for huggingface hub via create_model and default_cfgs.
4 years ago
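For context on the Hub support mentioned above, a minimal usage sketch; the `hf_hub:` model-name prefix and the repo id below are assumptions for illustration, not verified against this exact revision.

```python
import timm

# Regular factory usage backed by timm's built-in default_cfgs.
model = timm.create_model('resnet50', pretrained=True)

# Assumed Hub usage: an 'hf_hub:' prefix asks create_model to resolve the
# config and weights from a Hugging Face Hub repo. The repo id is a placeholder.
hub_model = timm.create_model('hf_hub:some-org/some-timm-model', pretrained=True)
```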
Ross Wightman 2db2d87ff7 Add epoch-repeats arg to multiply the number of dataset passes per epoch. Currently for iterable datasets (read TFDS wrapper) only.
4 years ago
Ross Wightman 0e16d4e9fb Add benchmark.py script, and update optimizer factory to be more friendly to use outside of argparse interface.
4 years ago
Ross Wightman 01653db104 Missed clip-mode arg for repo train script
4 years ago
Ross Wightman 4f49b94311 Initial AGC impl. Still testing.
4 years ago
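AGC (adaptive gradient clipping) rescales a gradient when its norm is large relative to the corresponding parameter norm. Below is a simplified per-tensor sketch of the rule; the NFNet paper applies it per output unit, and this is not necessarily the code added in this commit.

```python
import torch

def adaptive_clip_grad(parameters, clip_factor=0.01, eps=1e-3):
    """Clip each gradient so that ||g|| <= clip_factor * max(||w||, eps)."""
    for p in parameters:
        if p.grad is None:
            continue
        w_norm = p.detach().norm()
        g_norm = p.grad.detach().norm()
        max_norm = clip_factor * w_norm.clamp(min=eps)
        if g_norm > max_norm:
            p.grad.detach().mul_(max_norm / (g_norm + 1e-6))
```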
Ross Wightman d8e69206be Merge pull request #419 from rwightman/byob_vgg_models
4 years ago
Ross Wightman 0356e773f5 Default to native PyTorch AMP instead of APEX amp. Too many APEX issues cropping up lately.
4 years ago
Csaba Kertesz 5114c214fc Change the Python interpreter to Python 3.x in the scripts
4 years ago
Ross Wightman 4203efa36d Fix #387 so that checkpoint saver works with max history of 1. Add checkpoint-hist arg to train.py.
4 years ago
Ross Wightman 38d8f67570 Fix potential issue with change to num_classes arg in train/validate.py defaulting to None (rely on model def / default_cfg)
4 years ago
Ross Wightman 5d4c3d0af3 Add enhanced ParserImageInTar that can read images from tars within tars, folders with multiple tars, etc. Additional comment cleanup.
4 years ago
Ross Wightman 9d5d4b8df6 Fix silly train.py typo during dataset work
4 years ago
Ross Wightman 855d6cc217 More dataset work including factories and a tensorflow datasets (TFDS) wrapper
4 years ago
Ross Wightman fd9061dbf7 Remove debug print from train.py
4 years ago
Ross Wightman 59ec7e6a53 Merge branch 'master' into imagenet21k_datasets_more
4 years ago
Csaba Kertesz e42b140ade Add --input-size option to scripts to specify full input dimensions from command-line
4 years ago
Ross Wightman 231d04e91a ResNetV2 pre-act and non-preact model, w/ BiT pretrained weights and support for ViT R50 model. Tweaks for in21k num_classes passing. More to do... tests failing.
4 years ago
Ross Wightman de6046e213 Initial commit for dataset / parser reorg to support additional datasets / types
4 years ago
Ross Wightman 2ed8f24715 A few more changes for 0.3.2 maint release. Linear layer change for mobilenetv3 and inception_v3, support no bias for linear wrapper.
4 years ago
Ross Wightman 460eba7f24 Work around casting issue with combination of native torch AMP and torchscript for Linear layers
4 years ago
Ross Wightman 27bbc70d71 Add back old ModelEma and rename new one to ModelEmaV2 to avoid compat breaks in dependent code. Shuffle train script, add a few comments, remove DataParallel support, support experimental torchscript training.
4 years ago
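The EMA rework above keeps a shadow copy of the model whose weights trail the training weights via an exponential moving average. A minimal sketch of that idea (not the actual ModelEmaV2 code):

```python
import copy
import torch

class SimpleModelEma:
    """Maintain an exponential moving average of a model's weights."""

    def __init__(self, model, decay=0.9999):
        self.ema = copy.deepcopy(model).eval()
        self.decay = decay
        for p in self.ema.parameters():
            p.requires_grad_(False)

    @torch.no_grad()
    def update(self, model):
        for ema_t, t in zip(self.ema.state_dict().values(), model.state_dict().values()):
            if ema_t.dtype.is_floating_point:
                ema_t.mul_(self.decay).add_(t.detach(), alpha=1.0 - self.decay)
            else:
                ema_t.copy_(t)  # integer buffers (e.g. num_batches_tracked) are copied
```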
Ross Wightman 9214ca0716 Simplifying EMA...
4 years ago
Ross Wightman 80078c47bb Add Adafactor and Adahessian optimizers, cleanup optimizer arg passing, add gradient clipping support.
4 years ago
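Gradient clipping slots in between backward() and the optimizer step. A generic PyTorch sketch of that ordering (the tiny model and data here are placeholders):

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

inputs, targets = torch.randn(8, 10), torch.randint(0, 2, (8,))

optimizer.zero_grad()
loss = criterion(model(inputs), targets)
loss.backward()
# Clip the global gradient norm before stepping the optimizer.
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
optimizer.step()
```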
Ross Wightman 47a7b3b5b1 More flexible mixup mode, add 'half' mode.
4 years ago
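Mixup blends pairs of samples and their (one-hot) labels with a Beta-distributed weight; the 'mode' controls whether one lambda is drawn per batch, per pair, or per element. A generic batch-mode sketch, not the timm Mixup class itself:

```python
import numpy as np
import torch
import torch.nn.functional as F

def mixup_batch(x, y, num_classes, alpha=0.2):
    """Blend a batch with a flipped copy of itself using one Beta(alpha, alpha) draw."""
    lam = float(np.random.beta(alpha, alpha))
    y_onehot = F.one_hot(y, num_classes).float()
    x_mixed = lam * x + (1.0 - lam) * x.flip(0)
    y_mixed = lam * y_onehot + (1.0 - lam) * y_onehot.flip(0)
    return x_mixed, y_mixed
```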
Ross Wightman 532e3b417d Reorg of utils into separate modules
4 years ago
Ross Wightman 751b0bba98 Add global_pool (--gp) arg changes to allow passing 'fast' easily for train/validate to avoid channels_last issue with AdaptiveAvgPool
4 years ago
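The 'fast' global pool mentioned above amounts to a plain mean over the spatial dims, sidestepping the nn.AdaptiveAvgPool2d path that the commit notes had channels_last trouble. A small illustration that the two are numerically equivalent for global pooling:

```python
import torch
import torch.nn as nn

x = torch.randn(2, 64, 7, 7)  # NCHW feature map

pooled_adaptive = nn.AdaptiveAvgPool2d(1)(x).flatten(1)  # module-based global pool
pooled_fast = x.mean(dim=(2, 3))                         # 'fast' path: mean over H, W

assert torch.allclose(pooled_adaptive, pooled_fast, atol=1e-6)
```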
Ross Wightman 9c297ec67d Cleanup Apex vs native AMP scaler state save/load. Cleanup CheckpointSaver a bit.
4 years ago
Ross Wightman c2cd1a332e Improve torch amp support and add channels_last support for train/validate scripts
4 years ago
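Native AMP plus channels_last in a training step follows the standard torch.cuda.amp pattern. A generic sketch of that pattern (a CUDA device and a throwaway conv model are assumed), not the train/validate script changes themselves:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

device = torch.device('cuda')
model = nn.Conv2d(3, 8, 3).to(device).to(memory_format=torch.channels_last)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(4, 3, 32, 32, device=device).contiguous(memory_format=torch.channels_last)
targets = torch.randn(4, 8, 30, 30, device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():          # run the forward pass in mixed precision
    loss = F.mse_loss(model(inputs), targets)
scaler.scale(loss).backward()            # scale the loss to avoid fp16 gradient underflow
scaler.step(optimizer)                   # unscales grads, skips the step on inf/nan
scaler.update()
```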
datamining99 5f563ca4df fix save_checkpoint bug with native amp
4 years ago
datamining99 d98967ed5d add support for native torch AMP in torch 1.6
4 years ago
Ross Wightman 8c9814e3f5 Final cleanup of mixup/cutmix. Element/batch modes working with both collate (prefetcher active) and without prefetcher.
4 years ago
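CutMix pastes a random rectangle from one sample onto another and mixes the labels by the area fraction actually swapped. A generic sketch of that box-and-blend step (not the exact implementation being overhauled here):

```python
import numpy as np
import torch

def cutmix_batch(x, y_onehot, alpha=1.0):
    """Swap a random box between each sample and a flipped copy; mix labels by area."""
    lam = float(np.random.beta(alpha, alpha))
    _, _, h, w = x.shape
    cut_h, cut_w = int(h * np.sqrt(1.0 - lam)), int(w * np.sqrt(1.0 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    x[:, :, y1:y2, x1:x2] = x.flip(0)[:, :, y1:y2, x1:x2]
    lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)  # correct lam to the area actually kept
    return x, lam * y_onehot + (1.0 - lam) * y_onehot.flip(0)
```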
Ross Wightman f471c17c9d More cutmix/mixup overhaul, ready to kick-off some trials.
4 years ago
Ross Wightman 92f2d0d65d Merge branch 'master' into cutmix. Fixup a few issues.
4 years ago
Ross Wightman fa28067704 Add more augmentation arguments, including a no_aug disable flag. Fix #209
4 years ago
Ross Wightman 7995295968 Merge branch 'logger' into features. Change 'logger' to '_logger'.
4 years ago
Ross Wightman 1998bd3180 Merge branch 'feature/AB/logger' of https://github.com/antoinebrl/pytorch-image-models into logger
4 years ago
Ross Wightman 6c17d57a2c Fix some attributions, add copyrights to some file docstrings
4 years ago
Antoine Broyelle 78fa0772cc Leverage Python hierarchical logger
4 years ago
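The hierarchical-logger change above is the standard library pattern of one child logger per module under the package namespace; configuring the root once then controls formatting and level for all of them. A minimal illustration:

```python
import logging

logging.basicConfig(
    format='%(asctime)s %(name)s %(levelname)s: %(message)s',
    level=logging.INFO,
)

# Each module creates its own logger keyed by its dotted module path.
_logger = logging.getLogger(__name__)
_logger.info('training started')
```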
Ross Wightman 6441e9cc1b Fix memory_efficient mode for DenseNets. Add AntiAliasing (Blur) support for DenseNets and create one test model. Add lr cycle/mul params to train args.
5 years ago
AFLALO, Jonathan Isaac a7f570c9b7 added MultiEpochsDataLoader
5 years ago
Ross Wightman 13cf68850b Remove poorly named metrics from torch imagenet example origins. Use top1/top5 in csv output for consistency with existing validation results files, acc elsewhere. Fixes #111
5 years ago
Ross Wightman 27b3680d49 Revamp LR noise, move logic to scheduler base. Fixup PlateauLRScheduler and add it as an option.
5 years ago
Ross Wightman 514b0938c4 Experimenting with per-epoch learning rate noise w/ step scheduler
5 years ago
Ross Wightman 43225d110c Unify drop connect vs drop path under 'drop path' name, switch all EfficientNet/MobilenetV3 refs to 'drop_path'. Update factory to handle new drop args.
5 years ago
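'Drop path' (stochastic depth) zeroes an entire residual branch for a random subset of samples and rescales the survivors so the expectation is unchanged. A common implementation sketch of the op (details may differ from the code referenced here):

```python
import torch

def drop_path(x, drop_prob=0.0, training=False):
    """Randomly drop the whole tensor per sample (stochastic depth)."""
    if drop_prob == 0.0 or not training:
        return x
    keep_prob = 1.0 - drop_prob
    # One Bernoulli draw per sample, broadcast across all remaining dims.
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = torch.bernoulli(torch.full(shape, keep_prob, dtype=x.dtype, device=x.device))
    return x * mask / keep_prob
```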
Ross Wightman b3cb5f3275 Working on CutMix impl as per #8, integrating with Mixup, currently experimenting...
5 years ago
Andrew Lavin b72013def8 Added commandline argument validation-batch-size-multiplier with default set to 1.
5 years ago
Ross Wightman 5b7cc16ac9 Add warning about using sync-bn with zero initialized BN layers. Fixes #54
5 years ago