Author | Commit | Message | Date
Christoph Reich | 74a04e0016 | Add parameter to change normalization type | 3 years ago
Christoph Reich | 2a4f6c13dd | Create model functions | 3 years ago
Christoph Reich | 87b4d7a29a | Add get and reset classifier method | 3 years ago
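The classifier accessors referenced in that commit follow timm's standard model interface. A minimal usage sketch (the `resnet50` model name is just an illustrative choice, not tied to this commit):

```python
import timm

# Any timm model exposes its classification head via get_classifier()
# and can swap it for a new number of classes via reset_classifier().
model = timm.create_model("resnet50", pretrained=False)
head = model.get_classifier()
print(type(head).__name__)                  # e.g. Linear

model.reset_classifier(num_classes=10)      # replace the head for a 10-class task
print(model.get_classifier().out_features)  # 10
```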
Christoph Reich | ff5f6bcd6c | Check input resolution | 3 years ago
Christoph Reich | 81bf0b4033 | Change parameter names to match Swin V1 | 3 years ago
Christoph Reich | f227b88831 | Add initials (CR) to model and file | 3 years ago
Christoph Reich | 90dc74c450 | Add code from https://github.com/ChristophReich1996/Swin-Transformer-V2 and change docstring style to match timm | 3 years ago
Ross Wightman | 7c67d6aca9 | Update README.md | 3 years ago
Ross Wightman | 2c3870e107 | semobilevit_s for good measure | 3 years ago
Ross Wightman | bcaeb91b03 | Version to 0.6.0, possible interface incompatibilities vs 0.5.x | 3 years ago
Ross Wightman | 58ba49c8ef | Add MobileViT models (w/ ByobNet base). Close #1038. | 3 years ago
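The MobileViT additions register as ordinary timm models; a quick sketch of using one (the `mobilevit_s` variant name and the 256x256 default input size are assumptions about how the models were registered):

```python
import timm
import torch

# Instantiate one of the MobileViT variants ("mobilevit_s" assumed to be the small one).
model = timm.create_model("mobilevit_s", pretrained=False, num_classes=1000)
model.eval()

with torch.no_grad():
    out = model(torch.randn(1, 3, 256, 256))  # 256x256 input assumed for MobileViT
print(out.shape)  # torch.Size([1, 1000])
```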
Ross Wightman | 5f81d4de23 | Move DeiT to own file, vit getting crowded. Working towards fixing #1029, make pooling interface for transformers and mlp closer to convnets. Still working through some details... | 3 years ago
ayasyrev | cf57695938 | sched noise dup code remove | 3 years ago
Ross Wightman | 95cfc9b3e8 | Merge remote-tracking branch 'origin/master' into norm_norm_norm | 3 years ago
Ross Wightman | abc9ba2544 | Transitioning default_cfg -> pretrained_cfg. Improving handling of pretrained_cfg source (HF-Hub, files, timm config, etc). Checkpoint handling tweaks. | 3 years ago
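For code that has to straddle the default_cfg -> pretrained_cfg transition, a small compatibility sketch; which attribute is present depends on the installed timm version, so the fallback here is an assumption rather than a guaranteed contract:

```python
import timm

model = timm.create_model("resnet50", pretrained=False)

# Newer timm exposes pretrained metadata as `pretrained_cfg`;
# older releases only had `default_cfg`, so fall back if needed.
cfg = getattr(model, "pretrained_cfg", None) or model.default_cfg
print(cfg)  # input size, mean/std, classifier name, weight source, etc.
```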
Ross Wightman | 07379c6d5d | Add vit_base2_patch32_256 for a model between base_patch16 and patch32 with a slightly larger img size and width | 3 years ago
Ross Wightman | cf4334391e | Update benchmark and validate scripts to output results in JSON with a fixed delimiter for use in multi-process launcher | 3 years ago
Ross Wightman | 1331c145a3 | Add train benchmark results, adjust name scheme for inference and train benchmark files. | 3 years ago
Ross Wightman | a517bf6a7a | Merge pull request #1105 from kozistr/refactor/remove-condition: Remove checking `smoothing` parameter | 3 years ago
kozistr | 56a6b38f76 | refactor: remove if-condition | 3 years ago
Ross Wightman | 447677616f | version 0.5.5 | 3 years ago
Ross Wightman | 499c4749d7 | Add update NCHW and NHWC inference benchmark numbers for current models. Flip name of 'sam' vit models in results files | 3 years ago
Ross Wightman | 83b40c5a58 | Last batch of small model weights (for now). mobilenetv3_small 050/075/100 and updated mnasnet_small with lambc/lamb optimizer. | 3 years ago
Ross Wightman | 7f73252716 | Merge pull request #1094 from Mi-Peng/lars: fix lars | 3 years ago
Mi-Peng | cdcd0a92ca | fix lars | 3 years ago
Ross Wightman | 2d4b7e7080 | Update results csvs for latest release | 3 years ago
Ross Wightman | 1aa617cb3b | Add AvgPool2d anti-aliasing support to ResNet arch (as per OpenAI CLIP models), add a few blur aa models as well | 3 years ago
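The anti-aliased downsampling mentioned there follows the CLIP-style pattern of average-pooling before a stride-1 convolution instead of convolving with stride 2. A minimal sketch of the idea only, not timm's actual ResNet code; channel sizes are placeholders:

```python
import torch
import torch.nn as nn

# CLIP-style anti-aliased downsampling: pool first, then convolve with stride 1,
# rather than letting a strided convolution do the (aliasing-prone) subsampling.
aa_downsample = nn.Sequential(
    nn.AvgPool2d(kernel_size=2, stride=2),        # blurred spatial reduction
    nn.Conv2d(64, 128, kernel_size=1, stride=1),  # channel projection, no further striding
)

x = torch.randn(1, 64, 56, 56)
print(aa_downsample(x).shape)  # torch.Size([1, 128, 28, 28])
```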
Ross Wightman | f0f9eccda8 | Add --fuser arg to train/validate/benchmark scripts to select jit fuser type | 3 years ago
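Fuser selection in PyTorch itself can be scoped with the `torch.jit.fuser` context manager; a hedged sketch of what such a flag could map to, without claiming this is exactly how timm's scripts wire it up:

```python
import torch

@torch.jit.script
def fused_op(x: torch.Tensor) -> torch.Tensor:
    # Small pointwise chain that TorchScript fusers can combine into one kernel.
    return torch.relu(x) * 2.0 + 1.0

x = torch.randn(8, 8)

# "fuser0" is the legacy fuser, "fuser1" is NNC/TensorExpr, "fuser2" is nvFuser;
# availability depends on the PyTorch build and hardware.
with torch.jit.fuser("fuser1"):
    y = fused_op(x)
print(y.shape)
```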
Ross Wightman | 010b486590 | Add Dino pretrained weights (no head) for vit models. Add support to tests and helpers for models w/ no classifier (num_classes=0 in pretrained cfg) | 3 years ago
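Creating a model with `num_classes=0` is timm's standard way to get a headless feature extractor, which is what the no-classifier support above caters to. A short sketch (the model name is illustrative, not the DINO weights themselves):

```python
import timm
import torch

# num_classes=0 removes the classification head; the forward pass then
# returns pooled features instead of logits.
model = timm.create_model("vit_base_patch16_224", pretrained=False, num_classes=0)
model.eval()

with torch.no_grad():
    feats = model(torch.randn(1, 3, 224, 224))
print(feats.shape)  # e.g. torch.Size([1, 768]) for a ViT-Base embedding
```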
Ross Wightman | 738a9cd635 | unbiased=False for torch.var_mean path of ConvNeXt LN. Fix #1090 | 3 years ago
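That fix is about matching nn.LayerNorm, which normalizes with the biased variance (dividing by N, not N - 1). A minimal channels-first layer-norm sketch written against `torch.var_mean` to illustrate the point; it is not timm's ConvNeXt code:

```python
import torch
import torch.nn as nn

class LayerNorm2dSketch(nn.Module):
    """Channels-first LayerNorm over C, illustrating the torch.var_mean path."""

    def __init__(self, num_channels: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(num_channels))
        self.bias = nn.Parameter(torch.zeros(num_channels))
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # unbiased=False matches nn.LayerNorm, which uses the biased variance.
        var, mean = torch.var_mean(x, dim=1, unbiased=False, keepdim=True)
        x = (x - mean) / torch.sqrt(var + self.eps)
        return x * self.weight[:, None, None] + self.bias[:, None, None]

x = torch.randn(2, 64, 7, 7)
print(LayerNorm2dSketch(64)(x).shape)  # torch.Size([2, 64, 7, 7])
```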
Ross Wightman | e0c4eec4b6 | Default conv_mlp to False across the board for ConvNeXt, causing issues on more setups than it's improving right now... | 3 years ago
Ross Wightman | b669f4a588 | Add ConvNeXt 22k->1k fine-tuned and 384 22k-1k fine-tuned weights after testing | 3 years ago
Ross Wightman | 6dcbaf211a | Update README.md | 3 years ago
Ross Wightman | a8d103e18b | Giant/gigantic vits snuck through in a test a broke GitHub test runner, add filter | 3 years ago
Ross Wightman | ef72ad4177 | Extra vit_huge model likely to cause test issue (non in21k variant), adding to filters | 3 years ago
Ross Wightman | e967c72875 | Update REAMDE.md. Sneak in g/G (giant / gigantic?) ViT defs from scaling paper | 3 years ago
Ross Wightman | 9ca3437178 | Add some more small model weights lcnet, mnas, mnv2 | 3 years ago
Ross Wightman | fa6463c936 | Version 0.5.4 | 3 years ago
Ross Wightman | fa81164378 | Fix stem width for really small mobilenetv3 arch defs | 3 years ago
Ross Wightman | edd3d73695 | Add missing dropout for head reset in ConvNeXt default head | 3 years ago
Ross Wightman | b093dcb46d | Some convnext cleanup, remove in place mul_ for gamma, breaking symbolic trace, cleanup head a bit... | 3 years ago
Ross Wightman | 18934debc5 | Add initial ConvNeXt impl (mods of official code) | 3 years ago
Ross Wightman | 656757d26b | Fix MobileNetV2 head conv size for multiplier < 1.0. Add some missing modification copyrights, fix starting date of some old ones. | 3 years ago
Ross Wightman | ccfeb06936 | Fix out_indices handling breakage, should have left as per vgg approach. | 3 years ago
Ross Wightman | a9f91483a6 | Fix #1078, DarkNet has 6 feature maps. Make vgg and darknet out_indices handling/comments equivalent | 3 years ago
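The out_indices fixes concern timm's features_only interface, which returns a list of intermediate feature maps and lets out_indices choose which stages are included. A usage sketch (the `cspdarknet53` model name and input size are illustrative choices, not tied to the issue above):

```python
import timm
import torch

# features_only=True turns a classifier into a multi-scale feature extractor;
# an out_indices argument can be passed to pick a subset of the feature maps.
model = timm.create_model("cspdarknet53", pretrained=False, features_only=True)
model.eval()

print(model.feature_info.channels())  # channel count of each returned feature map

with torch.no_grad():
    feats = model(torch.randn(1, 3, 256, 256))
for f in feats:
    print(f.shape)
```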
Ross Wightman | c21b21660d | visformer supports spatial feat map, update pool_size in pretrained cfg to match | 3 years ago
Ross Wightman | 9c11dfd9cb | Fix fbnetv3 pretrained cfg changes | 3 years ago
Ross Wightman | 1406cddc2e | FBNetV3 timm trained weights added for b/d/g variants. Update version to 0.5.2 for pypi release. | 3 years ago
Ross Wightman | 02ae11e526 | Leaving repeat aug sampler indices as tensor thrashes worker shared process memory | 3 years ago
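The repeat-augmentation note is about what a sampler's `__iter__` yields: handing out elements of a tensor keeps shared tensor storage in play across DataLoader workers, while converting to a plain Python list sidesteps that. A hedged sketch of the pattern with a toy sampler, not the actual RepeatAugSampler code:

```python
import torch
from torch.utils.data import Sampler


class RepeatedIndexSamplerSketch(Sampler):
    """Toy sampler repeating each index; illustrates yielding ints, not tensor elements."""

    def __init__(self, dataset_len: int, repeats: int = 3):
        self.dataset_len = dataset_len
        self.repeats = repeats

    def __iter__(self):
        indices = torch.randperm(self.dataset_len).repeat_interleave(self.repeats)
        # .tolist() hands plain Python ints to DataLoader workers instead of
        # tensor views, avoiding shared-memory churn in worker processes.
        return iter(indices.tolist())

    def __len__(self):
        return self.dataset_len * self.repeats


print(list(RepeatedIndexSamplerSketch(4, repeats=2)))
```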
Ross Wightman | 4df51f3932 | Add lcnet_100 and mnasnet_small weights | 3 years ago