Commit Graph

199 Commits (2681a8d618ba85085369bcf22a89c6fb4c5076be)

Author SHA1 Message Date
Ross Wightman 2a7d256fd5 Re-enable mem-efficient/jit activations after torchscript tests
5 years ago
Ross Wightman f902bcd54c Layer refactoring continues, ResNet downsample rewrite for proper dilation in 3x3 and avg_pool cases
5 years ago
Ross Wightman a99ec4e7d1 A bunch more layer reorg, splitting many layers into own files. Improve torchscript compatibility.
5 years ago
Ross Wightman 13746a33fc Big move, layer modules and fn to timm/models/layers
5 years ago
Ross Wightman f54612f648 Merge branch 'select_kernel' into attention
5 years ago
Ross Wightman 4defbbbaa8 Fix module name mistake, start layers sub-package
5 years ago
Ross Wightman 7011cd0902 A little bit of ECA cleanup
5 years ago
Ross Wightman 46471df7b2 Merge pull request #82 from VRandme/eca
5 years ago
Ross Wightman d0eb59ef46 Remove unused default_init for EfficientNets, experimenting with fanout calc for #84
5 years ago
Chris Ha e6a762346a Implement Adaptive Kernel selection
5 years ago
Ross Wightman 13e8da2b46 SelectKernel split_input works best when input channels split like grouped conv, but output is full width. Disable zero_init for SK nets, seems a bad combo.
5 years ago
Chris Ha 6db087a1ff Merge remote-tracking branch 'upstream/master' into eca
5 years ago
Chris Ha 904c618040 Update EcaModule.py
5 years ago
Chris Ha db91ba053b EcaModule(CamelCase)
5 years ago
Ross Wightman 5c4991a088 Add PyTorch trained EfficientNet-ES weights from Andrew Lavin
5 years ago
Chris Ha d04ff95eda Merge branch 'master' into eca
5 years ago
Chris Ha d63ae121d5 Clean up eca_module code
5 years ago
Ross Wightman d66819d1f3 Indentation mistake. Fixes #81
5 years ago
Chris Ha f87fcd7e88 Implement Eca modules
5 years ago
Ross Wightman 4808b3c32f Bump version for PyPi update, fix a few out-of-date README items/mistakes, add README updates for TF EfficientNet-B8 (RandAugment)
5 years ago
Ross Wightman 7d07ebb660 Adding some configs to sknet, incl ResNet50 variants from 'Compounding ... Assembled Techniques' paper and original SKNet50
5 years ago
Ross Wightman a9d2424fd1 Add separate zero_init_last_bn function to support more block variety without a mess
5 years ago
Ross Wightman 355aa152d5 Just leave it float for now, will look at fp16 later. Remove unused reference code.
5 years ago
Ross Wightman ef457555d3 BlockDrop working on GPU
5 years ago
Ross Wightman 3ff19079f9 Missed nn_ops.py from last commit
5 years ago
Ross Wightman 9f11b4e8a2 Add ConvBnAct layer to parallel integrated SelectKernelConv, add support for DropPath and DropBlock to ResNet base and SK blocks
5 years ago
Ross Wightman cefc9b7761 Move SelectKernelConv to conv2d_layers and more
5 years ago
Ross Wightman 9abe610931 Used wrong channel var for split
5 years ago
Ross Wightman 58e28dc7e7 Move Selective Kernel blocks/convs to their own sknet.py file
5 years ago
Ross Wightman a93bae6dc5 A SelectiveKernelBasicBlock for more experiments
5 years ago
Ross Wightman ad087b4b17 Missed bias=False in selection conv
5 years ago
Ross Wightman c8b3d6b81a Initial impl of Selective Kernel Networks. Very much a WIP.
5 years ago
Ross Wightman 1daa303744 Add support to Dataset for class id mapping file, clean up a bit of old logic. Add results file arg for validation and update script.
5 years ago
Ross Wightman 91534522f9 Add newly added TF ported EfficientNet-B8 weights (RandAugment)
5 years ago
Ross Wightman 12dbc74742 New ResNet50 JSD + RandAugment weights
5 years ago
Ross Wightman 2f41905ba5 Update ResNet50 weights to AugMix trained 78.994 top-1. A few comments re 'tiered_narrow' tn variant.
5 years ago
Ross Wightman d9a6a9d0af Merge pull request #74 from rwightman/augmix-jsd
5 years ago
Ross Wightman 3eb4a96eda Update AugMix, JSD, etc comments and references
5 years ago
Ross Wightman a28117ea46 Add tiered narrow ResNet (tn) and weights for seresnext26tn_32x4d
5 years ago
Ross Wightman 833066b540 A few minor things in SplitBN
5 years ago
Ross Wightman 7547119891 Add SplitBatchNorm. AugMix, Rand/AutoAugment, Split (Aux) BatchNorm, Jensen-Shannon Divergence, RandomErasing all working together
5 years ago
Ross Wightman 2e955cfd0c Update RandomErasing with some improved arg names, tweak to aspect range
5 years ago
Ross Wightman 3cc0f91e23 Fix augmix variable name scope overlap, default non-blended mode
5 years ago
Ross Wightman ec0dd4053a Add updated RandAugment trained EfficientNet-B0 trained weights from @michaelklachko
5 years ago
Ross Wightman 40fea63ebe Add checkpoint averaging script. Add headers, shebangs, exec perms to all scripts
5 years ago
Ross Wightman 4666cc9aed Add --pin-mem arg to enable dataloader pin_memory (showing more benefit in some scenarios now), also add --torchscript arg to validate.py for testing models with jit.script
5 years ago
Ross Wightman 53001dd292 ResNet / Res2Net additions:
5 years ago
Ross Wightman f96b3e5e92 InceptionResNetV2 torchscript compatible
5 years ago
Ross Wightman 19d93fe454 Add selecsls60 weights
5 years ago
Ross Wightman 0062c15fb0 Update checkpoint urls with modelzoo compatible ones.
5 years ago
Ross Wightman b5315e66b5 Streamline SelecSLS model without breaking checkpoint compat. Move cfg handling out of model class. Update feature/pooling behaviour to match current.
5 years ago
Ross Wightman d59a756c16 Run PyCharm autoformat on selecsls and change mixed-case variables and model names to all lowercase
5 years ago
Ross Wightman fb3a0f4bb8 Merge pull request #65 from mehtadushy/selecsls
5 years ago
Ross Wightman 19fc205a4d Update comments on the new SE-ResNeXt26 models
5 years ago
Ross Wightman acc3ed2b8c Add EfficientNet-B3 weights, trained from scratch with RA.
5 years ago
Dushyant Mehta 2404361f62 correct asset paths
5 years ago
Dushyant Mehta 31939311f6 Added SelecSLS models
5 years ago
rwightman 1f4498f217 Add ResNet deep tiered stem and model weights for seresnext26t_32x4d and seresnext26d_32x4d
5 years ago
Dushyant Mehta 32012a44fd Added SelecSLS model
5 years ago
Ross Wightman 73b78459dc Add updated RandAugment MixNet-XL weights
5 years ago
Ross Wightman 3afc2a4dc0 Some cleanup/improvements to AugMix impl:
5 years ago
Ross Wightman 232ab7fb12 Working on an implementation of AugMix with JensenShannonDivergence loss that's compatible with my AutoAugment and RandAugment impl
5 years ago
Ross Wightman a435ea1327 Change reduce_bn to distribute_bn, add ability to choose between broadcast and reduce (mean). Add crop_pct arg to allow selecting validation crop while training.
5 years ago
Ross Wightman 3bff2b21dc Add support for keeping running bn stats the same across distributed training nodes before eval/save
5 years ago
Ross Wightman 0161de0127 Switch RandomErasing back to on-GPU normal sampling
5 years ago
Ross Wightman ff421e5e09 New PyTorch trained EfficientNet-B2 weights with my RandAugment impl
5 years ago
Ross Wightman 3bef524f9c Finish with HRNet, weights and models updated. Improve consistency in model classifier/global pool treatment.
5 years ago
Ross Wightman 6ca0828166 Update EfficientNet comments, MobileNetV3 non-TF create fns, fix factory arg checks, bump PyTorch version req to 1.2
5 years ago
Ross Wightman eccbadca74 Update EfficientNet comments
5 years ago
Ross Wightman 902d32fb16 Renamed gen_efficientnet.py -> efficientnet.py
5 years ago
Ross Wightman 5a0a8de7e3 ResNet updates:
5 years ago
Ross Wightman a39cc43374 Bring EfficientNet and MobileNetV3 up to date with my gen-efficientnet repo
5 years ago
Ross Wightman ad93347548 Initial HRNet classification model commit
5 years ago
Ross Wightman 2393708650 Missed stashing of out_indices in model
5 years ago
Ross Wightman 35e8f0c5e7 Fixup a few comments, add PyTorch version aware Flatten and finish as_sequential for GenEfficientNet
5 years ago
Ross Wightman 7ac6db4543 Missed activations.py
5 years ago
Ross Wightman 506df0e3d0 Add CondConv support for EfficientNet into WIP for GenEfficientNet Feature extraction setup
5 years ago
Ross Wightman 576d360f20 Bring in JIT version of optimized swish activation from gen_efficientnet as default (while working on feature extraction functionality here).
5 years ago
Ross Wightman 7b83e67f77 Pass drop connect arg through to EfficientNet models
5 years ago
Ross Wightman 31453b039e Update Auto/RandAugment comments, README, more.
5 years ago
Ross Wightman 4243f076f1 Adding RandAugment to AutoAugment impl, some tweaks to AA included
5 years ago
Ross Wightman 0d58c50fb1 Add TF RandAug weights for B5/B7 EfficientNet models.
5 years ago
Ross Wightman c099374771 Map pretrained checkpoint to cpu to avoid issue with some pretrained checkpoints still having CUDA tensors. Fixes #42
5 years ago
Ross Wightman b93fcf0708 Add Facebook Research Semi-Supervised and Semi-Weakly Supervised ResNet model weights.
5 years ago
Ross Wightman a9eb484835 Add memory efficient Swish impl
5 years ago
rwightman d3ba34ee7e Fix Mobilenet V3 model name for sotabench. Minor res2net cleanup.
5 years ago
Ross Wightman 2680ad14bb Add Res2Net and DLA to README
5 years ago
rwightman adbf770f16 Add Res2Net and DLA models w/ pretrained weights. Update sotabench.
5 years ago
Ross Wightman 4002c0d4ce Fix AutoAugment abs translate calc
5 years ago
Ross Wightman c06274e5a2 Add note on random selection of magnitude value
5 years ago
Ross Wightman b750b76f67 More AutoAugment work. Ready to roll...
5 years ago
Ross Wightman 25d2088d9e Working on auto-augment
5 years ago
Ross Wightman aff194f42c Merge pull request #32 from rwightman/opt
5 years ago
Ross Wightman 64966f61f7 Add Nvidia's NovoGrad impl from Jasper (cleaner/faster than current) and Apex Fused optimizers
5 years ago
Ross Wightman 3d9c8a6489 Add support for new AMP checkpointing support w/ amp.state_dict
5 years ago
Ross Wightman ba3c97c3ad Some Lookahead cleanup and fixes
5 years ago
Ross Wightman e9d2ec4d8e Merge pull request #31 from rwightman/opt
5 years ago
Ross Wightman fac58f609a Add RAdam, NovoGrad, Lookahead, and AdamW optimizers, a few ResNet tweaks and scheduler factory tweak.
5 years ago
Ross Wightman 81875d52a6 Update sotabench model list, add Mean-Max pooling DPN variants, disable download progress
5 years ago
Ross Wightman f37e633e9b Merge remote-tracking branch 'origin/re-exp' into opt
5 years ago