* remove redundant GluonResNet model/blocks and use the code in ResNet for Gluon weights
* change SEModules back to using AdaptiveAvgPool instead of mean; the PyTorch issue that motivated the workaround has long been fixed (see the SE sketch after this list)
* Split the MobileNetV3 and EfficientNet model files and move the builder and blocks into their own files (the originals were getting too large)
* Finalize CondConv EfficientNet variant
* Add the AdvProp weight files and the B8 EfficientNet model
* Refine the feature extraction module for EfficientNet and MobileNetV3
* Add some of the trendy new optimizers. Decent results, but none clearly better than the standard choices.
* Allow a None scheduler to be created for training at a constant LR
* ResNet now defaults to zero-init of the last BN in each residual block (see the init sketch after this list)
* add resnet50d config
* refactor 'same' convolution and add a helper that swaps in MixedConv2d when needed
* improve performance of 'same' padding for the cases that can be handled statically (see the padding sketch after this list)
* add support for extra exp, pw, and dw kernel specs, with grouping support, to the decoder/string block defs for MixNet (an example appears after this list)
* shuffle some args for a bit more consistency and a little less clutter overall in gen_efficientnet.py
* remove folded_bn support and the corresponding untrainable TFLite-ported weights
* combine bn args into dict
* add inplace support to activations and use it where possible for reduced memory on large models
* Remove some models that don't exist as pretrained and likely never will: (se)resnext152
* Add some torchvision weights under a tv_ prefix for models where I have added better weights
* Add the wide ResNets recently added to torchvision, along with resnext101-32x8d
* Add functionality to the model registry to allow filtering on pretrained weight availability (usage sketch after this list)
* reorganize train args
* allow resolve_data_config to be used with dict args, not just argparse (example after this list)
* stop incrementing the epoch before saving checkpoints, for more consistent naming vs the CSV summary, etc.
* update resume and start-epoch handling to match the above
* stop auto-incrementing the epoch in the scheduler
* host some of Cadene's weights on GitHub instead of the .fr site for download speed
* add my old port of Ensemble Adversarial Inception-ResNet-v2
* switch to my TF port of the normal Inception-ResNet-v2 and change the FC layer name back to 'classif' for compatibility with ens_adv
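
For reference on the SEModule change above, a minimal squeeze-and-excitation sketch using `nn.AdaptiveAvgPool2d`; the class and attribute names here are illustrative, not the exact ones in this repo:

```python
import torch.nn as nn

class SEModule(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        # global pooling via AdaptiveAvgPool2d, replacing x.mean((2, 3), keepdim=True)
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        self.fc1 = nn.Conv2d(channels, channels // reduction, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.fc2 = nn.Conv2d(channels // reduction, channels, kernel_size=1)
        self.gate = nn.Sigmoid()

    def forward(self, x):
        s = self.avg_pool(x)                 # (N, C, 1, 1) channel descriptor
        s = self.fc2(self.act(self.fc1(s)))  # squeeze -> excite
        return x * self.gate(s)              # reweight channels
```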
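
The zero-init default for ResNet amounts to zeroing the gamma of the last BN in each residual block so every block starts as an identity mapping (the 'zero-gamma' trick from the Bag of Tricks paper). A hypothetical helper sketching the idea; the repo's actual implementation differs:

```python
import torch.nn as nn

def zero_init_last_bn(model):
    # For each residual block, zero the affine weight (gamma) of its final BN.
    # bn3/bn2 attribute names assume Bottleneck/BasicBlock-style blocks.
    for m in model.modules():
        if hasattr(m, 'downsample'):  # crude check for a residual block
            last_bn = getattr(m, 'bn3', None) or getattr(m, 'bn2', None)
            if isinstance(last_bn, nn.BatchNorm2d):
                nn.init.zeros_(last_bn.weight)
```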
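
On the 'same' padding items: TF-style 'SAME' padding depends on the input size only when stride > 1; with stride 1 (and an even total pad) it collapses to a fixed symmetric pad that a plain nn.Conv2d can apply, which is the statically-handled fast path. A simplified sketch, not the repo's exact helpers:

```python
import math
import torch.nn.functional as F

def pad_same(x, k, s, d=1):
    # Dynamic TF-style 'SAME' padding, computed from the input size each call.
    ih, iw = x.shape[-2:]
    pad_h = max((math.ceil(ih / s) - 1) * s + (k - 1) * d + 1 - ih, 0)
    pad_w = max((math.ceil(iw / s) - 1) * s + (k - 1) * d + 1 - iw, 0)
    return F.pad(x, [pad_w // 2, pad_w - pad_w // 2, pad_h // 2, pad_h - pad_h // 2])

def is_static_pad(k, s=1, d=1):
    # With stride 1 and an even total pad, 'SAME' reduces to a fixed symmetric
    # padding that can be baked into the conv, avoiding the per-call F.pad.
    return s == 1 and (d * (k - 1)) % 2 == 0
```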
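
An illustration of the MixNet string defs: mixed depthwise kernels are written as dot-separated sizes. The 'k3.5.7' form is the mixed-kernel spec; the 'a'/'p' tokens shown for the extra exp/pw kernel sizes are my assumption about the exact token names, check the decoder for the authoritative syntax:

```python
# Illustrative block definition strings for the decoder mini-language.
arch_def = [
    ['ds_r1_k3_s1_e1_c16'],              # depthwise-separable block, 3x3 kernel
    ['ir_r1_k3.5.7_s2_e6_c32'],          # inverted residual, mixed 3/5/7 dw kernels
    ['ir_r1_k3.5_a1.1_p1.1_s1_e6_c32'],  # extra exp ('a') and pw ('p') specs (assumed tokens)
]
```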
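
A usage sketch for the registry filtering:

```python
import timm

all_models = timm.list_models()                 # every registered model
pretrained = timm.list_models(pretrained=True)  # only those with weights available

# Wildcard filtering combines with the pretrained flag.
effnets = timm.list_models('efficientnet*', pretrained=True)
```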
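
And a minimal sketch of the dict-based resolve_data_config usage; the override key shown is illustrative:

```python
import timm
from timm.data import resolve_data_config, create_transform

model = timm.create_model('resnet50d', pretrained=False)

# A plain dict of overrides now works in place of argparse-derived args.
config = resolve_data_config({'input_size': (3, 224, 224)}, model=model)
transform = create_transform(**config)
```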