Ross Wightman | 7c2660576d | Tweak init for convnext block used in maxxvit/coatnet. | 2 years ago
Ross Wightman | 1d8d6f6072 | Fix two default args in DenseNet blocks... fix #1427 | 2 years ago
Ross Wightman | 527f9a4cb2 | Updated to correct maxvit_nano weights... | 2 years ago
Ross Wightman | b2e8426fca | Make k=stride=2 ('avg2') pooling default for coatnet/maxvit. Add weight links. Rename 'combined' partition to 'parallel'. | 2 years ago
Ross Wightman | 837c68263b | For ConvNeXt, use timm internal LayerNorm for fast_norm in non conv_mlp mode | 2 years ago
Ross Wightman | cac0a4570a | More test fixes, pool size for 256x256 maxvit models | 2 years ago
Ross Wightman | e939ed19b9 | Rename internal creation fn for maxvit, has not been just coatnet for a while... | 2 years ago
Ross Wightman | ffaf97f813 | MaxxVit! A very configurable MaxVit and CoAtNet impl with lots of goodies... | 2 years ago
Ross Wightman | 8c9696c9df | More model and test fixes | 2 years ago
Ross Wightman | ca52108c2b | Fix some model support functions | 2 years ago
Ross Wightman | f332fc2db7 | Fix some test failures, torchscript issues | 2 years ago
Ross Wightman | 6e559e9b5f | Add MViT (Multi-Scale) V2 | 2 years ago
Ross Wightman | 43aa84e861 | Add 'fast' layer norm that doesn't cast to float32, support APEX LN impl for slight speed gain, update norm and act factories, tweak SE for ability to disable bias (needed by GCVit) | 2 years ago
Ross Wightman | c486aa71f8 | Add GCViT | 2 years ago
Ross Wightman | fba6ecd39b | Add EfficientFormer | 2 years ago
Ross Wightman | ff4a38e2c3 | Add PyramidVisionTransformerV2 | 2 years ago
Ross Wightman | 1d8ada359a | Add timm ConvNeXt 'atto' weights, change test resolution for FB ConvNeXt 224x224 weights, add support for different dw kernel_size | 2 years ago
Ross Wightman | 2544d3b80f | ConvNeXt pico, femto, and nano weights and model defs, plus pico and femto 'ols' (overlapping stem) variants | 2 years ago
Ross Wightman | 13565aad50 | Add edgenext_base model def & weight link, update to improve ONNX export #1385 | 2 years ago
Ross Wightman | 8ad4bdfa06 | Allow ntuple to be used with string values | 2 years ago
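The ntuple change above can be illustrated with a minimal sketch. It mirrors the common `_ntuple` helper pattern (the actual timm implementation may differ in detail): strings are iterable, so without an explicit check a value like `'same'` would be split into characters instead of repeated.

```python
import collections.abc
from itertools import repeat

def _ntuple(n):
    # sketch of the common ntuple helper pattern; the real timm helper
    # may differ in detail
    def parse(x):
        # strings are iterable, but a value like 'same' should be
        # repeated n times, not split into characters
        if isinstance(x, collections.abc.Iterable) and not isinstance(x, str):
            return tuple(x)
        return tuple(repeat(x, n))
    return parse

to_2tuple = _ntuple(2)
```

With the string check in place, `to_2tuple('same')` yields `('same', 'same')`, while non-string iterables still pass through as tuples.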
Ross Wightman | 7430a85d07 | Update README, bump version to 0.6.8 | 2 years ago
Ross Wightman | ec6a28830f | Add DeiT-III 'medium' model defs and weights | 2 years ago
Ross Wightman | d875a1d3f6 | version 0.6.7 | 2 years ago
Ross Wightman | 6f103a442b | Add convnext_nano weights, 80.8 @ 224, 81.5 @ 288 | 2 years ago
Ross Wightman | 4042a94f8f | Add weights for two 'Edge' block (3x3->1x1) variants of CS3 networks. | 2 years ago
Ross Wightman | c8f69e04a9 | Merge pull request #1365 from veritable-tech/fix-resize-pos-embed: Take `no_emb_class` into account when calling `resize_pos_embed` | 2 years ago
Ceshine Lee | 0b64117592 | Take `no_emb_class` into account when calling `resize_pos_embed` | 2 years ago
Jasha10 | 56c3a84db3 | Update type hint for `register_notrace_module`: it is used to decorate types (i.e. subclasses of nn.Module), not called on module instances. | 2 years ago
Ross Wightman | 1b278136c3 | Change models with mean 0,0,0 std 1,1,1 from int to float for consistency as mentioned in #1355 | 2 years ago
Ross Wightman | 909705e7ff | Remove some redundant requires_grad=True from nn.Parameter in third party code | 2 years ago
Ross Wightman | c5e0d1c700 | Add dilation support to convnext, allows output_stride=8 and 16 use. Fix #1341 | 2 years ago
Ross Wightman | dc376e3676 | Ensure all model entrypoint fn default to `pretrained=False` (a few didn't) | 2 years ago
Ross Wightman | 23b102064a | Add cs3sedarknet_x weights w/ 82.65 @ 288 top1. Add 2 cs3 edgenet models (w/ 3x3-1x1 block), remove aa from cspnet blocks (not needed) | 2 years ago
Ross Wightman | 0dbd9352ce | Add bulk_runner script and updates to benchmark.py and validate.py for better error handling in bulk runs (used for benchmark and validation result runs). Improved batch size decay stepping on retry... | 2 years ago
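The "batch size decay stepping on retry" mentioned above can be sketched as a simple retry loop. `decay_batch_size` and `run_with_retries` are hypothetical names for illustration, not the actual bulk_runner API:

```python
def decay_batch_size(batch_size: int, factor: float = 0.5, minimum: int = 1) -> int:
    # step the batch size down after an OOM/failure, never below `minimum`
    return max(minimum, int(batch_size * factor))

def run_with_retries(run_fn, batch_size: int, max_retries: int = 3):
    # retry a bulk benchmark/validation run, decaying the batch size on
    # each failure instead of aborting the whole sweep
    for _ in range(max_retries + 1):
        try:
            return run_fn(batch_size)
        except RuntimeError:
            if batch_size <= 1:
                raise
            batch_size = decay_batch_size(batch_size)
    raise RuntimeError('all retries exhausted')
```

For example, a run that OOMs at batch sizes above 64 would be attempted at 256, then 128, then succeed at 64.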
Ross Wightman | 92b91af3bb | version 0.6.6 | 2 years ago
Ross Wightman | 05313940e2 | Add cs3darknet_x, cs3sedarknet_l, and darknetaa53 weights from TPU sessions. Move SE btwn conv1 & conv2 in DarkBlock. Improve SE/attn handling in Csp/DarkNet. Fix leaky_relu bug on older csp models. | 2 years ago
nateraw | 51cca82aa1 | 👽 use hf_hub_download instead of cached_download | 2 years ago
Ross Wightman | 324a4e58b6 | disable nvfuser for jit te/legacy modes (for PT 1.12+) | 2 years ago
Ross Wightman | 2898cf6e41 | version 0.6.5 for pypi release | 2 years ago
Ross Wightman | a45b4bce9a | x and xx small edgenext models do benefit from larger test input size | 2 years ago
Ross Wightman | a8e34051c1 | Unbreak gamma remap impacting beit checkpoint load, version bump to 0.6.4 | 2 years ago
Ross Wightman | 1c5cb819f9 | bump version to 0.6.3 before merge | 2 years ago
Ross Wightman | a1cb25066e | Add edgnext_small_rw weights trained with swin like recipe. Better than original 'small' but not the recent 'USI' distilled weights. | 2 years ago
Ross Wightman | 7c7ecd2492 | Add --use-train-size flag to force use of train input_size (over test input size) for validation. Default test-time pooling to use train input size (fixes issues). | 2 years ago
Ross Wightman | ce65a7b29f | Update vit_relpos w/ some additional weights, some cleanup to match recent vit updates, more MLP log coord experiments. | 2 years ago
Ross Wightman | 58621723bd | Add CrossStage3 DarkNet (cs3) weights | 2 years ago
Ross Wightman | 9be0c84715 | Change set -> dict w/ None keys for dataset split synonym search, so always consistent if more than 1 exists. Fix #1224 | 2 years ago
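The set -> dict change above relies on the fact that Python dicts preserve insertion order while set iteration order is arbitrary, so a synonym search over a dict with None values always resolves in the same order when more than one candidate exists. A minimal illustration (the map and function names here are hypothetical, not timm's actual code):

```python
# insertion-ordered "sets" built with dict.fromkeys (all values None)
_SPLIT_SYNONYMS = {
    'train': dict.fromkeys(['train', 'training']),
    'validation': dict.fromkeys(['val', 'valid', 'validation', 'eval']),
    'test': dict.fromkeys(['test', 'testing']),
}

def resolve_split(name: str) -> str:
    # membership tests on a dict check its keys, so this behaves like a
    # set lookup but iterates deterministically when searching
    for canonical, synonyms in _SPLIT_SYNONYMS.items():
        if name in synonyms:
            return canonical
    return name
```

With plain sets, iteration order could vary between runs; `dict.fromkeys` keeps set-like membership while guaranteeing a stable search order.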
Ross Wightman | db0cee9910 | Refactor cspnet configuration using dataclasses, update feature extraction for new cs3 variants. | 2 years ago
Ross Wightman | eca09b8642 | Add MobileVitV2 support. Fix #1332. Move GroupNorm1 to common layers (used in poolformer + mobilevitv2). Keep old custom ConvNeXt LayerNorm2d impl as LayerNormExp2d for reference. | 2 years ago
Ross Wightman | 06307b8b41 | Remove experimental downsample-in-block support in ConvNeXt. Experiment further before keeping it in. | 2 years ago