Ross Wightman
58621723bd
Add CrossStage3 DarkNet (cs3) weights
2 years ago
Ross Wightman
9be0c84715
Change set -> dict w/ None keys for dataset split synonym search, so the result is always consistent if more than 1 exists. Fix #1224
2 years ago
Ross Wightman
db0cee9910
Refactor cspnet configuration using dataclasses, update feature extraction for new cs3 variants.
2 years ago
Ross Wightman
eca09b8642
Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference.
2 years ago
Ross Wightman
06307b8b41
Remove experimental in-block downsample support from ConvNeXt. Will experiment further before keeping it in.
2 years ago
Ross Wightman
bfc0dccb0e
Improve image extension handling, add methods to modify / get defaults. Fix #1335, fix #1274.
2 years ago
Ross Wightman
7d4b3807d5
Support DeiT-3 (Revenge of the ViT) checkpoints. Add non-overlapping (w/ class token) pos-embed support to vit.
3 years ago
Ross Wightman
d0c5bd5722
Rename cs2->cs3 for darknets. Fix features_only for cs3 darknets.
3 years ago
Ross Wightman
d765305821
Remove first_conv for resnetaa50 def
3 years ago
Ross Wightman
dd9b8f57c4
Add feature_info to edgenext for features_only support, hopefully fix some fx / test errors
3 years ago
Ross Wightman
377e9bfa21
Add TPU-trained darknet53 weights. Add missing pretrain_cfg for some csp/darknet models.
3 years ago
Ross Wightman
c170ba3173
Add weights for resnet10t, resnet14t, and resnetaa50 models. Fix #1314
3 years ago
Ross Wightman
188c194b0f
Left some experimental stem code in convnext by mistake
3 years ago
Ross Wightman
70d6d2c484
support test_crop_size in data config resolve
3 years ago
Ross Wightman
6064d16a2d
Add initial EdgeNeXt import. Significant cleanup / reorg (like ConvNeXt). Fix #1320
...
* edgenext refactored for torchscript compat, stage base organization
* slight refactor of ConvNeXt to match some EdgeNeXt additions
* remove use of funky LayerNorm layer in ConvNeXt and just use nn.LayerNorm and LayerNorm2d (permute)
3 years ago
Ross Wightman
7a9c6811c9
Add eps arg to LayerNorm2d, add 'tf' (tensorflow) variant of trunc_normal_ that applies scale/shift after sampling (instead of needing to move a/b)
3 years ago
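A minimal sketch of the idea behind that 'tf' variant, under assumptions (helper name and rejection-sampling loop are illustrative, not the timm implementation): draw from a truncated standard normal within fixed bounds, then apply the scale/shift afterwards.
```python
import torch

def trunc_normal_tf_sketch(tensor, mean=0., std=1., a=-2., b=2.):
    # Illustrative only: naive rejection sampling of a truncated standard normal,
    # followed by scale/shift, rather than shifting the a/b bounds themselves.
    with torch.no_grad():
        tensor.normal_()
        bad = (tensor < a) | (tensor > b)
        while bad.any():
            tensor[bad] = torch.randn(int(bad.sum()), device=tensor.device, dtype=tensor.dtype)
            bad = (tensor < a) | (tensor > b)
        tensor.mul_(std).add_(mean)  # scale/shift applied after sampling
    return tensor
```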
Ross Wightman
82c311d082
Add more experimental darknet and 'cs2' darknet variants (different cross stage setup, closer to newer YOLO backbones) for train trials.
3 years ago
Ross Wightman
a050fde5cd
Add resnet10t (basic block) and resnet14t (bottleneck) with 1,1,1,1 repeats
3 years ago
Ross Wightman
e6d7df40ec
No longer any point in using kwargs for pretrain_cfg resolve, just pass an explicit arg
3 years ago
Ross Wightman
07d0c4ae96
Improve repr for DropPath module
3 years ago
Ross Wightman
e27c16b8a0
Remove unnecessary code for syncbn guard
3 years ago
Ross Wightman
0da3c9ebbf
Remove SiLU layer in default args that breaks import on very old PyTorch
3 years ago
Ross Wightman
7d657d2ef4
Improve resolve_pretrained_cfg behaviour when no cfg exists, warn instead of crash. Improve usability ex #1311
3 years ago
Ross Wightman
879df47c0a
Support BatchNormAct2d for sync-bn use. Fix #1254
3 years ago
Ross Wightman
7cedc8d474
Follow-up to #1256, fix interpolation warning in auto_augment as well
3 years ago
Jakub Kaczmarzyk
db64393c0d
use `Image.Resampling` namespace for PIL mapping ( #1256 )
...
* use `Image.Resampling` namespace for PIL mapping
PIL shows a deprecation warning when accessing resampling constants via the `Image` namespace. The suggested namespace is `Image.Resampling`. This commit updates `_pil_interpolation_to_str` to use the `Image.Resampling` namespace.
```
/tmp/ipykernel_11959/698124036.py:2: DeprecationWarning: NEAREST is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.NEAREST or Dither.NONE instead.
Image.NEAREST: 'nearest',
/tmp/ipykernel_11959/698124036.py:3: DeprecationWarning: BILINEAR is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BILINEAR instead.
Image.BILINEAR: 'bilinear',
/tmp/ipykernel_11959/698124036.py:4: DeprecationWarning: BICUBIC is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BICUBIC instead.
Image.BICUBIC: 'bicubic',
/tmp/ipykernel_11959/698124036.py:5: DeprecationWarning: BOX is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.BOX instead.
Image.BOX: 'box',
/tmp/ipykernel_11959/698124036.py:6: DeprecationWarning: HAMMING is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.HAMMING instead.
Image.HAMMING: 'hamming',
/tmp/ipykernel_11959/698124036.py:7: DeprecationWarning: LANCZOS is deprecated and will be removed in Pillow 10 (2023-07-01). Use Resampling.LANCZOS instead.
Image.LANCZOS: 'lanczos',
```
* use new pillow resampling enum only if it exists
3 years ago
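A hedged sketch of the compatibility pattern this commit describes (the dict below is a simplified stand-in for `_pil_interpolation_to_str`): resolve constants through `Image.Resampling` when the installed Pillow exposes it, and fall back to the legacy `Image` namespace otherwise.
```python
from PIL import Image

# Prefer the Resampling enum (Pillow >= 9.1); older Pillow keeps the constants on Image itself.
_RESAMPLE = getattr(Image, 'Resampling', Image)

_pil_interpolation_to_str = {
    _RESAMPLE.NEAREST: 'nearest',
    _RESAMPLE.BILINEAR: 'bilinear',
    _RESAMPLE.BICUBIC: 'bicubic',
    _RESAMPLE.BOX: 'box',
    _RESAMPLE.HAMMING: 'hamming',
    _RESAMPLE.LANCZOS: 'lanczos',
}
```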
Ross Wightman
20a1fa63f8
Make dev version 0.6.2.dev0 for pypi pre
3 years ago
Ross Wightman
347308faad
Update README.md, version to 0.6.2
3 years ago
Ross Wightman
4b30bae67b
Add updated vit_relpos weights, and impl w/ support for official swin-v2 differences for relpos. Add bias control support for MLP layers
3 years ago
Ross Wightman
d4c0588012
Remove persistent buffers from Swin-V2. Change SwinV2Cr cos attn + tau/logit_scale to match official, add ckpt convert, init_value zeros resid LN weight by default
3 years ago
Ross Wightman
27c42f0830
Fix torchscript use for official Swin-V2, add support for non-square window/shift to WindowAttn/Block
3 years ago
Ross Wightman
2f2b22d8c7
Disable nvfuser fma / opt level overrides per #1244
3 years ago
Ross Wightman
c0211b0bf7
Swin-V2 test fixes, typo
3 years ago
Ross Wightman
9a86b900fa
Official SwinV2 models
3 years ago
Ross Wightman
d07d015173
Merge pull request #1249 from okojoalg/sequencer
...
Add Sequencer
3 years ago
Ross Wightman
d30685c283
Merge pull request #1251 from hankyul2/fix-multistep-scheduler
...
fix: multistep lr decay epoch bugs
3 years ago
han
a16171335b
fix: change milestones to decay-milestones
...
- change argparser option `milestone` to `decay-milestone`
3 years ago
Ross Wightman
39b725e1c9
Fix tests for rank-4 output where feature channels dim is -1 (3) and not 1
3 years ago
Ross Wightman
78a32655fa
Fix poolformer group_matcher to merge proj downsample with previous block, support coarse
3 years ago
Ross Wightman
d79f3d9d1e
Fix torchscript use for sequencer, add group_matcher, forward_head support, minor formatting
3 years ago
Ross Wightman
37b6920df3
Fix group_matcher regex for regnet.py
3 years ago
okojoalg
93a79a3dd9
Fix num_features in Sequencer
3 years ago
han
57a988df30
fix: multistep lr decay epoch bugs
...
- add milestones arguments
- change decay_epochs to milestones variable
3 years ago
okojoalg
578d52e752
Add Sequencer
3 years ago
Ross Wightman
f5ca4141f7
Adjust arg order for recent vit model args, add a few comments
3 years ago
Ross Wightman
41dc49a337
Vision Transformer refactoring and Rel Pos impl
3 years ago
Ross Wightman
b7cb8d0337
Add Swin-V2 Small-NS weights (83.5 @ 224). Add layer scale like 'init_values' via post-norm LN weight scaling
3 years ago
jjsjann123
f88c606fcf
fixing channels_last on cond_conv2d; update nvfuser debug env variable
3 years ago
Li Dong
09e9f3defb
migrate azure blob for beit checkpoints
...
## Motivation
We are going to use a new blob account to store the checkpoints.
## Modification
Modify the azure blob storage URLs for BEiT checkpoints.
3 years ago
Ross Wightman
52ac881402
Missed first_conv in latest seresnext 'D' default_cfgs
3 years ago
Ross Wightman
7629d8264d
Add two new SE-ResNeXt101-D 32x8d weights, one anti-aliased and one not. Reshuffle default_cfgs vs model entrypoints for resnet.py so they are better aligned.
3 years ago
SeeFun
8f0bc0591e
fix convnext args
3 years ago
Ross Wightman
c5a8e929fb
Add initial swinv2 tiny / small weights
3 years ago
Ross Wightman
f670d98cb8
Make a few more layers symbolically traceable (remove from FX leaf modules)
...
* remove dtype kwarg from .to() calls in EvoNorm as it messed up script + trace combo
* BatchNormAct2d always uses custom forward (cut & paste from original) instead of super().forward. Fixes #1176
* BlurPool groups==channels, no need to use input.dim[1]
3 years ago
SeeFun
ec4e9aa5a0
Add ConvNeXt tiny and small pretrain in22k
...
Add ConvNeXt tiny and small pretrain in22k from ConvNeXt repo:
06f7b05f92
3 years ago
Ross Wightman
575924ed60
Update test crop for new RegNet-V weights to match Y
3 years ago
Ross Wightman
1618527098
Add layer scale and parallel blocks to vision_transformer
3 years ago
Ross Wightman
c42be74621
Add attrib / comments about Swin-S3 (AutoFormerV2) weights
3 years ago
Ross Wightman
474ac906a2
Add 'head norm first' convnext_tiny_hnf weights
3 years ago
Ross Wightman
dc51334cdc
Fix pruned adapt for EfficientNet models that are now using BatchNormAct layers
3 years ago
Ross Wightman
024fc4d9ab
version 0.6.1 for master
3 years ago
Ross Wightman
e1e037ba52
Fix bad tuple typing fix that was on the XLA branch but missed on master merge
3 years ago
Ross Wightman
341b464a5a
Remove redundant noise attr from Plateau scheduler (use parent)
3 years ago
Ross Wightman
fe457c1996
Update SwinTransformerV2Cr post-merge, update with grad checkpointing / grad matcher
...
* weight compat break, activate norm3 for final block of final stage (equivalent to pre-head norm, but while still in BLC shape)
* remove fold/unfold for TPU compat, add commented out roll code for TPU
* add option for end of stage norm in all stages
* allow weight_init to be selected between pytorch default inits and xavier / moco style vit variant
3 years ago
Ross Wightman
b049a5c5c6
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
7cdd164d77
Fix #1184 , scheduler noise bug during merge madness
3 years ago
Ross Wightman
9440a50c95
Merge branch 'mrT23-master'
3 years ago
Ross Wightman
d98aa47d12
Revert ml-decoder changes to model factory and train script
3 years ago
Ross Wightman
b20665d379
Merge pull request #1007 from qwertyforce/patch-1
...
update arxiv link
3 years ago
Ross Wightman
7a0994f581
Merge pull request #1150 from ChristophReich1996/master
...
Swin Transformer V2
3 years ago
Ross Wightman
61d3493f87
Fix hf-hub handling when hf-hub is config source
3 years ago
Ross Wightman
5f47518f27
Fix pit implementation to be closer to deit/levit re distillation head handling
3 years ago
Ross Wightman
0862e6ebae
Fix correctness of some group matching regex (no impact on result), some formatting, missed forward_head for resnet
3 years ago
Ross Wightman
94bcdebd73
Add latest weights trained on TPU-v3 VM instances
3 years ago
Ross Wightman
0557c8257d
Fix bug introduced in non layer_decay weight_decay application. Remove debug print, fix arg desc.
3 years ago
Ross Wightman
372ad5fa0d
Significant model refactor and additions:
...
* All models updated with revised forward_features / forward_head interface
* Vision transformer and MLP based models consistently output sequence from forward_features (pooling or token selection considered part of 'head')
* WIP param grouping interface to allow consistent grouping of parameters for layer-wise decay across all model types
* Add gradient checkpointing support to a significant % of models, especially popular architectures
* Formatting and interface consistency improvements across models
* layer-wise LR decay impl part of optimizer factory w/ scale support in scheduler
* Poolformer and Volo architectures added
3 years ago
Ross Wightman
1420c118df
Missed committing outstanding changes to default_cfg keys and test exclusions for swin v2
3 years ago
Ross Wightman
c6e4b7895a
Swin V2 CR impl refactor.
...
* reformat and change some naming so closer to existing timm vision transformers
* remove typing that wasn't adding clarity (or causing torchscript issues)
* support non-square windows
* auto window size adjust from image size
* post-norm + main-branch no
3 years ago
Christoph Reich
67d140446b
Fix bug in classification head
3 years ago
Christoph Reich
29add820ac
Refactor (back to relative imports)
3 years ago
Christoph Reich
74a04e0016
Add parameter to change normalization type
3 years ago
Christoph Reich
2a4f6c13dd
Create model functions
3 years ago
Christoph Reich
87b4d7a29a
Add get and reset classifier method
3 years ago
Christoph Reich
ff5f6bcd6c
Check input resolution
3 years ago
Christoph Reich
81bf0b4033
Change parameter names to match Swin V1
3 years ago
Christoph Reich
f227b88831
Add initials (CR) to model and file
3 years ago
Christoph Reich
90dc74c450
Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm
3 years ago
Ross Wightman
2c3870e107
semobilevit_s for good measure
3 years ago
Ross Wightman
bcaeb91b03
Version to 0.6.0, possible interface incompatibilities vs 0.5.x
3 years ago
Ross Wightman
58ba49c8ef
Add MobileViT models (w/ ByobNet base). Close #1038 .
3 years ago
Ross Wightman
5f81d4de23
Move DeiT to own file, vit getting crowded. Working towards fixing #1029 , make pooling interface for transformers and mlp closer to convnets. Still working through some details...
3 years ago
ayasyrev
cf57695938
Remove duplicate sched noise code
3 years ago
Ross Wightman
95cfc9b3e8
Merge remote-tracking branch 'origin/master' into norm_norm_norm
3 years ago
Ross Wightman
abc9ba2544
Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks.
3 years ago
Ross Wightman
07379c6d5d
Add vit_base2_patch32_256 for a model between base_patch16 and patch32 with a slightly larger img size and width
3 years ago
Ross Wightman
447677616f
version 0.5.5
3 years ago
Ross Wightman
83b40c5a58
Last batch of small model weights (for now). mobilenetv3_small 050/075/100 and updated mnasnet_small with lambc/lamb optimizer.
3 years ago
Mi-Peng
cdcd0a92ca
fix lars
3 years ago
Ross Wightman
1aa617cb3b
Add AvgPool2d anti-aliasing support to ResNet arch (as per OpenAI CLIP models), add a few blur aa models as well
3 years ago
Ross Wightman
f0f9eccda8
Add --fuser arg to train/validate/benchmark scripts to select jit fuser type
3 years ago
Ross Wightman
010b486590
Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg)
3 years ago
Ross Wightman
738a9cd635
unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090
3 years ago
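A small illustration, under assumptions, of why `unbiased=False` matters: `nn.LayerNorm` normalizes with the biased (population) variance, so a hand-rolled `torch.var_mean` path only matches it when the biased estimate is used. Shapes below are illustrative.
```python
import torch

x = torch.randn(2, 96, 14, 14)  # NCHW, illustrative ConvNeXt-style feature map

# normalize over the channel dim with the biased variance, matching nn.LayerNorm's math
var, mean = torch.var_mean(x, dim=1, unbiased=False, keepdim=True)
y = (x - mean) / torch.sqrt(var + 1e-6)
```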
Ross Wightman
e0c4eec4b6
Default conv_mlp to False across the board for ConvNeXt, causing issues on more setups than it's improving right now...
3 years ago
Ross Wightman
b669f4a588
Add ConvNeXt 22k->1k fine-tuned and 384 22k-1k fine-tuned weights after testing
3 years ago
Ross Wightman
e967c72875
Update README.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper
3 years ago
Ross Wightman
9ca3437178
Add some more small model weights lcnet, mnas, mnv2
3 years ago
Ross Wightman
fa6463c936
Version 0.5.4
3 years ago
Ross Wightman
fa81164378
Fix stem width for really small mobilenetv3 arch defs
3 years ago
Ross Wightman
edd3d73695
Add missing dropout for head reset in ConvNeXt default head
3 years ago
Ross Wightman
b093dcb46d
Some convnext cleanup, remove in-place mul_ for gamma that was breaking symbolic trace, clean up head a bit...
3 years ago
Ross Wightman
18934debc5
Add initial ConvNeXt impl (mods of official code)
3 years ago
Ross Wightman
656757d26b
Fix MobileNetV2 head conv size for multiplier < 1.0. Add some missing modification copyrights, fix starting date of some old ones.
3 years ago
Ross Wightman
ccfeb06936
Fix out_indices handling breakage, should have left as per vgg approach.
3 years ago
Ross Wightman
a9f91483a6
Fix #1078 , DarkNet has 6 feature maps. Make vgg and darknet out_indices handling/comments equivalent
3 years ago
Ross Wightman
c21b21660d
visformer supports spatial feat map, update pool_size in pretrained cfg to match
3 years ago
Ross Wightman
9c11dfd9cb
Fix fbnetv3 pretrained cfg changes
3 years ago
Ross Wightman
1406cddc2e
FBNetV3 timm trained weights added for b/d/g variants. Update version to 0.5.2 for pypi release.
3 years ago
Ross Wightman
02ae11e526
Leaving repeat aug sampler indices as tensor thrashes worker shared process memory
3 years ago
Ross Wightman
4df51f3932
Add lcnet_100 and mnasnet_small weights
3 years ago
Ross Wightman
5ccf682a8f
Remove deprecated bn-tf train arg and create_model handler. Add evos/evob models back into fx test filter until norm_norm_norm branch merged.
3 years ago
Ross Wightman
b9a715c86a
Add more small model defs for MobileNetV3/V2/LCNet
3 years ago
Ross Wightman
b27c21b09a
Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster
3 years ago
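For reference, a hedged sketch of the general 'fast' stochastic-depth pattern referenced here (not necessarily line-for-line timm code): one Bernoulli keep decision per sample, rescaled by the keep probability, using only ops that symbolic tracing handles.
```python
import torch

def drop_path(x: torch.Tensor, drop_prob: float = 0., training: bool = False) -> torch.Tensor:
    # Stochastic depth: randomly zero whole residual branches per sample during training.
    if drop_prob == 0. or not training:
        return x
    keep_prob = 1.0 - drop_prob
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)  # one mask value per sample, broadcast elsewhere
    mask = x.new_empty(shape).bernoulli_(keep_prob)
    return x * mask / keep_prob
```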
Ross Wightman
214c84a235
Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed
3 years ago
Ross Wightman
72b57163d1
Merge branch 'master' of https://github.com/mrT23/pytorch-image-models into mrT23-master
3 years ago
Ross Wightman
de5fa791c6
Merge branch 'master' into norm_norm_norm
3 years ago
Ross Wightman
26ff57f953
Add more small model defs for MobileNetV3/V2/LCNet
3 years ago
Hyeongchan Kim
a0b2657497
Use `torch.repeat_interleave()` to generate repeated indices faster ( #1058 )
...
* update: use numpy to generate repeated indices faster
* update: use torch.repeat_interleave() instead of np.repeat()
* refactor: remove unused import, numpy
* refactor: torch.range to torch.arange
* update: tensor to list before appending the extra samples
* update: concatenate the paddings with torch.cat
3 years ago
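A small example of the final approach in the list above, under assumptions (function name is illustrative): build the repeated index pattern directly in torch and convert to a plain list before appending any padding samples.
```python
import torch

def repeated_indices(num_samples: int, repeats: int = 3) -> list:
    # e.g. [0, 0, 0, 1, 1, 1, ...] without a numpy round trip
    indices = torch.repeat_interleave(torch.arange(num_samples), repeats=repeats)
    return indices.tolist()

print(repeated_indices(4))  # [0, 0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3]
```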
Ross Wightman
450ac6a0f5
Post merge tinynet fixes for pool_size, feature extraction
3 years ago
Ross Wightman
a04164cd75
Merge branch 'tinynet' of https://github.com/rsomani95/pytorch-image-models into rsomani95-tinynet
3 years ago
Ross Wightman
8a93ce6ee3
Fix regnetv/w tests, refactor regnet generator code a bit
3 years ago
Ross Wightman
4dec8c8087
Fix skip path regression for updated EfficientNet and RegNet def. Add Pre-Act RegNet support (experimental). Remove BN-TF flag. Add efficientnet_b0_g8_gn model.
3 years ago
Ross Wightman
a52a614475
Remove layer experiment which should not have been added
3 years ago
Ross Wightman
ab49d275de
Significant norm update
...
* ConvBnAct layer renamed -> ConvNormAct and ConvNormActAa for anti-aliased
* Significant update to EfficientNet and MobileNetV3 arch to support NormAct layers and grouped conv (as alternative to depthwise)
* Update RegNet to add Z variant
* Add Pre variant of XceptionAligned that works with NormAct layers
* EvoNorm matches bits_and_tpu branch for merge
3 years ago
Rahul Somani
31bcd36e46
add tinynet models
3 years ago
KAI ZHAO
b4b8d1ec18
fix hard-coded strides
3 years ago
Ross Wightman
d04f2f1377
Update drop_path and drop_block (fast impl) to be symbolically traceable, slightly faster
3 years ago
Ross Wightman
834a9ec721
Disable use of timm nn.Linear wrapper since AMP autocast + torchscript use appears fixed
3 years ago
Ross Wightman
78912b6375
Updated EvoNorm implementations with some experimentation. Add FilterResponseNorm. Updated RegnetZ and ResNetV2 model defs for trials.
3 years ago
Ross Wightman
55adfbeb8d
Add commented code to increase open file limit via Python (for TFDS dataset building)
3 years ago
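The commented-out code itself isn't shown in this log; a plausible sketch of raising the open-file limit from Python (Unix-only, useful when a TFDS build opens many shards at once) would use the standard `resource` module:
```python
import resource

# Raise the soft open-file limit up to the current hard limit (no extra privileges needed).
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
```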
talrid
c11f4c3218
support CNNs
3 years ago
mrT23
d6701d8a81
Merge branch 'rwightman:master' into master
3 years ago
qwertyforce
ccb3815360
update arxiv link
3 years ago
Ross Wightman
3dc71695bf
Merge pull request #989 from martinsbruveris/feat/resmlp-dino
...
Added DINO pretrained ResMLP models.
3 years ago
Ross Wightman
480c676ffa
Fix FX breaking assert in evonorm
3 years ago
Martins Bruveris
85c5ff26d7
Added DINO pretrained ResMLP models.
3 years ago
Ross Wightman
d633a014e6
Post merge cleanup. Fix potential security issue passing kwargs directly through to serialized web data.
3 years ago
Nathan Raw
b18c9e323b
Update helpers.py
3 years ago
Nathan Raw
308d0b9554
Merge branch 'master' into hf-save-and-push
3 years ago
Ross Wightman
f0507f6da6
Fix k_decay default arg != 1.0 in poly scheduler
3 years ago
talrid
41559247e9
use_ml_decoder_head
3 years ago
Ross Wightman
1f53db2ece
Updated lamhalobotnet weights, 81.5 top-1
3 years ago
Ross Wightman
15ef108eb4
Add better halo2botnet50ts weights, 82 top-1 @ 256
3 years ago
Ross Wightman
734b2244fe
Add RegNetZ-D8 (83.5 @ 256, 84 @ 320) and RegNetZ-E8 (84.5 @ 256, 85 @ 320) weights. Update names of existing RegZ models to include group size.
3 years ago
Ross Wightman
93cc08fdc5
Make evonorm variables 1d to match other PyTorch norm layers, will break weight compat for any existing use (likely minimal, easy to fix).
3 years ago
Ross Wightman
af607b75cc
Prep a set of ResNetV2 models with GroupNorm, EvoNormB0, EvoNormS0 for BN free model experiments on TPU and IPU
3 years ago
Ross Wightman
c976a410d9
Add ResNet-50 w/ GN (resnet50_gn) and SEBotNet-33-TS (sebotnet33ts_256) model defs and weights. Update halonet50ts weights w/ slightly better variant in1k val, more robust to test sets.
3 years ago
Ross Wightman
f2006b2437
Cleanup qkv_bias cat in beit model so it can be traced
3 years ago
Ross Wightman
1076a65df1
Minor post FX merge cleanup
3 years ago
Ross Wightman
32c9937dec
Merge branch 'fx-feature-extract-new' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-fx-feature-extract-new
3 years ago
Ross Wightman
78b36bf46c
Places365 doesn't exist in some still-used torchvision versions
3 years ago
Alexander Soare
65d827c7a6
rename notrace registration and standardize trace_utils imports
3 years ago
Ross Wightman
9b2daf2a35
Add ResNeXt-50 weights 81.1 top-1 @ 224, 82 @ 288 with A1 'high aug' recipe
3 years ago
Ross Wightman
9b5d6dc7e2
Merge branch 'add-vit-b8' of https://github.com/martinsbruveris/pytorch-image-models into martinsbruveris-add-vit-b8
3 years ago
Ross Wightman
cfa414cad2
Matching two bits_and_tpu changes for TFDs wrapper
...
* change 'samples' -> 'examples' for tfds wrapper to match tfds naming
* add class_to_idx for image classification datasets in tfds wrapper
3 years ago
Martins Bruveris
5220711d87
Added B/8 models to ViT.
3 years ago
Alexander Soare
0262a0e8e1
fx ready for review
3 years ago
Alexander Soare
d2994016e9
Add try/except guards
3 years ago
Alexander Soare
b25ff96768
wip - pre-rebase
3 years ago
Alexander Soare
e051dce354
Make all models FX traceable
3 years ago
Alexander Soare
cf4561ca72
Add FX based FeatureGraphNet capability
3 years ago
Alexander Soare
0149ec30d7
wip - attempting to rebase
3 years ago
Alexander Soare
02c3a75a45
wip - make it possible to use fx graph in train and eval mode
3 years ago
Alexander Soare
bc3d4eb403
wip -rebase
3 years ago
Alexander Soare
ab3ac3f25b
Add FX based FeatureGraphNet capability
3 years ago
Ross Wightman
9ec3210c2d
More TFDS parser cleanup, support improved TFDS even_split impl (on tfds-nightly only currently).
3 years ago
Ross Wightman
ba65dfe2c6
Dataset work
...
* support some torchvision datasets
* improvements to TFDS wrapper for subsplit handling (fix #942 ), shuffle seed
* add class-map support to train (fix #957 )
3 years ago
Ross Wightman
ddc29da974
Add ResNet101 and ResNet152 weights from higher aug RSB recipes. 81.93 and 82.82 top-1 at 224x224.
3 years ago
Ross Wightman
b328e56f49
Update eca_halonext26ts weights to a better set
3 years ago
Ross Wightman
2ddef942b9
Better fix for #954 that doesn't break torchscript, pull torch._assert into timm namespace when it exists
3 years ago
Ross Wightman
4f0f9cb348
Fix #954 by bringing traceable _assert into timm to allow compat w/ PyTorch < 1.8
3 years ago
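A hedged sketch of the compatibility pattern the two commits above describe: use the traceable `torch._assert` when the installed PyTorch provides it, and fall back to a plain assertion on PyTorch < 1.8. The fallback shown here is an assumption, not the exact timm code.
```python
import torch

try:
    from torch import _assert  # traceable assert, available in newer PyTorch
except ImportError:
    def _assert(condition: bool, message: str):
        assert condition, message
```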
Ross Wightman
a41de1f666
Add interpolation mode handling to transforms. Removes InterpolationMode warning. Works for torchvision versions w/ and w/o InterpolationMode enum. Fix #738 .
3 years ago
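A hedged sketch of handling both torchvision generations (function name and fallback table are illustrative): return an `InterpolationMode` enum member when the enum exists, otherwise the legacy PIL constant that older torchvision transforms expect.
```python
def str_to_interp_mode(mode_str: str):
    try:
        from torchvision.transforms import InterpolationMode
        return InterpolationMode(mode_str)  # enum values are the lowercase strings
    except ImportError:
        from PIL import Image
        legacy = {'nearest': Image.NEAREST, 'bilinear': Image.BILINEAR, 'bicubic': Image.BICUBIC}
        return legacy[mode_str]
```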
Ross Wightman
ed41d32637
Add repr to auto_augment and random_erasing impl
3 years ago
Ross Wightman
ae72d009fa
Add weights for lambda_resnet50ts, halo2botnet50ts, lamhalobotnet50ts, updated halonet50ts
3 years ago
Ross Wightman
b745d30a3e
Fix formatting of last commit
3 years ago
Ross Wightman
3478f1d7f1
Traceability fix for vit models for some experiments
3 years ago
Ross Wightman
f658a72e72
Cleanup re-use of Dropout modules in Mlp modules after some twitter feedback :p
3 years ago
Thomas Viehmann
f805ba86d9
use .unbind instead of explicitly listing the indices
3 years ago
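For context, a tiny example of what this kind of change typically looks like in an attention block (shapes illustrative, not the exact model code):
```python
import torch

qkv = torch.randn(3, 2, 8, 197, 64)  # (3, batch, heads, tokens, head_dim), illustrative

q, k, v = qkv[0], qkv[1], qkv[2]  # before: explicit indexing
q, k, v = qkv.unbind(0)           # after: same three views, one call
```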
Ross Wightman
57992509f9
Fix some formatting in utils/model.py
3 years ago
Ross Wightman
0fe4fd3f1f
add d8 and e8 regnetz models with group size 8
3 years ago
Ross Wightman
25e7c8c5e5
Update broken resnetv2_50 weight url, add resnetv1_101 a1h recipe weights for 224x224 train
3 years ago
Ross Wightman
b6caa356d2
Fixed eca_botnext26ts_256 weights added, 79.27
3 years ago
Ross Wightman
c02334d9fa
Add weights for regnetz_d and haloregnetz_c, update regnetz_c weights. Add commented PyTorch XLA code for halo attention
3 years ago
Ross Wightman
02daf2ab94
Add option to include relative pos embedding in the attention scaling as per references. See discussion #912
3 years ago
masafumi
047a5ec05f
Fix bug where Mixup does not work with device=cpu
3 years ago
Ross Wightman
cd34913278
Remove some outdated comments, botnet networks working great now.
3 years ago
Ross Wightman
6ed4cdccca
Update lambda_resnet26t weights with better set
3 years ago
ICLR Author
44d6d51668
Add ConvMixer
3 years ago
Ross Wightman
a85df34993
Update lambda_resnet26rpt weights to 78.9, add better halonet26t weights at 79.1 with tweak to attention dim
3 years ago
Ross Wightman
b544ad4d3f
regnetz model default cfg tweaks
3 years ago
Ross Wightman
e5da481073
Small post-merge tweak for freeze/unfreeze, add to __init__ for utils
3 years ago
Ross Wightman
5ca72dcc75
Merge branch 'freeze-functionality' of https://github.com/alexander-soare/pytorch-image-models into alexander-soare-freeze-functionality
3 years ago
Ross Wightman
e2b8d44ff0
Halo, bottleneck attn, lambda layer additions and cleanup along w/ experimental model defs
...
* align interfaces of halo, bottleneck attn and lambda layer
* add qk_ratio to all of above, control q/k dim relative to output dim
* add experimental haloregnetz, and trionet (lambda + halo + bottle) models
3 years ago
Alexander Soare
431e60c83f
Add acknowledgements for freeze_batch_norm inspiration
3 years ago
Ross Wightman
fbf59c04ee
Change crop ratio on correct resnet50 variant.
3 years ago
Ross Wightman
ae1ff5792f
Clean a1/a2/a3 rsb _0 checkpoints properly, fix v2 loading.
3 years ago
Ross Wightman
93901e992f
Version bump to 0.5.0 for pending release post RSB and ATTN updates
3 years ago
Ross Wightman
da0d39bedd
Update default crop_pct for byoanet
3 years ago
Ross Wightman
cc9bedf373
Add initial ResNet Strikes Back weights for ResNet50 and ResNetV2-50 models
3 years ago
Ross Wightman
64495505b7
Add updated lambda resnet26 and botnet26 checkpoints with fixes applied
3 years ago
Ross Wightman
b2094f4ee8
support bits checkpoints in avg/load
3 years ago
Ross Wightman
007bc39323
Some halo and bottleneck attn code cleanup, add halonet50ts weights, use optimal crop ratios
3 years ago
Alexander Soare
65c3d78b96
Freeze unfreeze functionality finalized. Tests added
3 years ago
Alexander Soare
0cb8ea432c
wip
3 years ago
Ross Wightman
b1c2e3eb92
Match rel_pos_indices attr rename in conv branch
3 years ago
Ross Wightman
b49630a138
Add relative pos embed option to LambdaLayer, fix last transpose/reshape.
3 years ago
Ross Wightman
d657e2cc0b
Remove dead code line from efficientnet
3 years ago
Ross Wightman
0ca687f224
Make 'regnetz' model experiments closer to actual RegNetZ, bottleneck expansion, expand from in_chs, no shortcut on stride 2, tweak model sizes
3 years ago
leondgarse
51eaf9360d
Remove a duplicate layer creation in byobnet.py
...
`self.conv2_kxk` is repeated in `byobnet.py`. Remove the duplicate code.
3 years ago
Ross Wightman
b81e79aae9
Fix bottleneck attn transpose typo, hopefully these train better now..
3 years ago
Ross Wightman
80075b0b8a
Add worker_seeding arg to allow selecting old vs updated data loader worker seed for (old) experiment repeatability
3 years ago
Ross Wightman
6478bcd02c
Fix regnetz_d conv layer name, use inception mean/std
3 years ago
Ross Wightman
0387e6057e
Update binary cross ent impl to use thresholding as an option (convert soft targets from mixup/cutmix to 0, 1)
3 years ago
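A hedged sketch of the thresholding option described above (function name and default threshold are illustrative): soft mixup/cutmix targets are binarized before the BCE term is computed.
```python
import torch
import torch.nn.functional as F

def bce_with_threshold(logits: torch.Tensor, soft_targets: torch.Tensor, threshold: float = 0.2):
    # convert soft (mixup/cutmix) targets into hard 0/1 targets, then apply BCE
    hard_targets = soft_targets.gt(threshold).to(dtype=logits.dtype)
    return F.binary_cross_entropy_with_logits(logits, hard_targets)
```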
Ross Wightman
f8a63a3b71
Add worker_init_fn to loader for numpy seed per worker
3 years ago
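A minimal sketch of per-worker numpy seeding via `worker_init_fn` (helper name is hypothetical): each DataLoader worker already receives a distinct torch seed, which can be reused for numpy.
```python
import numpy as np
import torch

def _worker_init(worker_id: int):
    # reuse the per-worker torch seed so the numpy RNG also differs across workers
    info = torch.utils.data.get_worker_info()
    np.random.seed(info.seed % (2 ** 32))

# usage (sketch): DataLoader(dataset, num_workers=4, worker_init_fn=_worker_init)
```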
Ross Wightman
515121cca1
Use reshape instead of view in std_conv, causing issues in recent PyTorch in channels_last
3 years ago
Ross Wightman
da06cc61d4
ResNetV2 seems to work best without zero_init residual
3 years ago
Ross Wightman
8e11da0ce3
Add experimental RegNetZ(ish) models for training / perf trials.
3 years ago
Alexander Soare
6bbc50beb4
make it possible to provide norm_layer via create_model
3 years ago
nateraw
adcb74f87f
🎨 Import load_state_dict_from_url directly
3 years ago
nateraw
e65a2cba3d
🎨 cleanup and add a couple comments
3 years ago
nateraw
2b6ade24b3
🎨 write model card to enable inference
3 years ago
Ross Wightman
cf5ac2800c
BotNet models were still off, remove weights for bad configs. Add good SE-HaloNet33-TS weights.
3 years ago
Ross Wightman
24720abe3b
Merge branch 'master' into attn_update
3 years ago
Ross Wightman
1c9284c640
Add BeiT 'finetuned' 1k weights and pretrained 22k weights, pretraining specific (masked) model excluded for now
3 years ago
Ross Wightman
f8a215cfe6
A few more crossvit tweaks, fix training w/ no_weight_decay names, add crop option for scaling, adjust default crop_pct for large img size to 1.0 for better results
3 years ago
Ross Wightman
7ab2491ab7
Better handling of crossvit for tests / forward_features, fix torchscript regression in my changes
3 years ago
Ross Wightman
f1808e0970
Post crossvit merge cleanup, change model names to reflect input size, cleanup img size vs scale handling, fix tests
3 years ago
Ross Wightman
4027412757
Add resnet33ts weights, update resnext26ts baseline weights
3 years ago
Richard Chen
9fe5798bee
fix bug in reset classifier and fix dimension validation
3 years ago
Richard Chen
3718c5a5bd
fix loading pretrained model
3 years ago
Richard Chen
bb50b69a57
fix for torch script
3 years ago
nateraw
abf9d51bc3
🚧 wip
3 years ago
Ross Wightman
5bd04714e4
Cleanup weight init for byob/byoanet and related
3 years ago
Ross Wightman
8642401e88
Swap botnet 26/50 weights/models after realizing a mistake in arch def, now figuring out why they were so low...
3 years ago
Ross Wightman
5f12de4875
Add initial AttentionPool2d that's being trialed. Fix comment and still trying to improve reliability of sgd test.
3 years ago
Ross Wightman
76881d207b
Add baseline resnet26t @ 256x256 weights. Add 33ts variant of halonet with at least one halo in stage 2,3,4
3 years ago
Ross Wightman
484e61648d
Adding the attn series weights, tweaking model names, comments...
3 years ago
Ross Wightman
fb94350896
Update training script and loader factory to allow use of scheduler updates, repeat augment, and bce loss
3 years ago
Ross Wightman
f262137ff2
Add RepeatAugSampler as per DeiT RASampler impl, showing promise for current (distributed) training experiments.
3 years ago
Ross Wightman
ba9c1108a1
Add a BCE loss impl that converts dense targets to sparse w/ smoothing as an alternative to CE w/ smoothing. For training experiments.
3 years ago
Ross Wightman
29a37e23ee
LR scheduler update:
...
* add polynomial decay 'poly'
* cleanup cycle specific args for cosine, poly, and tanh sched, t_mul -> cycle_mul, decay -> cycle_decay, default cycle_limit to 1 in each opt
* add k-decay for cosine and poly sched as per https://arxiv.org/abs/2004.05909
* change default tanh ub/lb to push inflection to later epochs
3 years ago
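For reference, a hedged sketch of polynomial decay with the k-decay idea from the cited paper (replace t/T with t^k/T^k; names and defaults below are illustrative, not the exact timm scheduler):
```python
def poly_lr(t: float, t_total: float, lr_base: float, lr_min: float = 0.0,
            power: float = 0.5, k: float = 1.0) -> float:
    # k == 1.0 recovers plain polynomial decay; larger k keeps the LR higher for longer
    frac = (t ** k) / (t_total ** k)
    return lr_min + (lr_base - lr_min) * (1.0 - frac) ** power
```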