From b944546200f81ebe5f688cb523aaf3423a437b81 Mon Sep 17 00:00:00 2001
From: Ross Wightman
Date: Mon, 10 Oct 2022 22:13:05 -0700
Subject: [PATCH] Update README.md

---
 README.md | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/README.md b/README.md
index a8863d34..c4cb0258 100644
--- a/README.md
+++ b/README.md
@@ -24,11 +24,12 @@ And a big thanks to all GitHub sponsors who helped with some of my costs before
 ### Oct 10, 2022
 * More weights in `maxxvit` series, incl first ConvNeXt block based `coatnext` and `maxxvit` experiments:
   * `coatnext_nano_rw_224` - 82.0 @ 224 (G) -- (uses ConvNeXt conv block, no BatchNorm)
-  * `maxxvit_nano_rw_256` - 83.0 @ 256, 83.7 @ 320 (G) (uses ConvNeXt conv block, no BN)
+  * `maxxvit_rmlp_nano_rw_256` - 83.0 @ 256, 83.7 @ 320 (G) (uses ConvNeXt conv block, no BN)
   * `maxvit_rmlp_small_rw_224` - 84.5 @ 224, 85.1 @ 320 (G)
-  * `maxxvit_small_rw_256` - 84.6 @ 256, 84.9 @ 288 (G) -- could be trained better, hparams need tuning (uses ConvNeXt block, no BN)
+  * `maxxvit_rmlp_small_rw_256` - 84.6 @ 256, 84.9 @ 288 (G) -- could be trained better, hparams need tuning (uses ConvNeXt block, no BN)
   * `coatnet_rmlp_2_rw_224` - 84.6 @ 224, 85 @ 320 (T)
-
+  * NOTE: official MaxVit weights (in1k) have been released https://github.com/google-research/maxvit, some extra work needed to port and adapt since my impl was in parallel and has a few small differences
+
 ### Sept 23, 2022
 * LAION-2B CLIP image towers supported as pretrained backbones for fine-tune or features (no classifier)
   * vit_base_patch32_224_clip_laion2b
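
Not part of the patch itself: a minimal usage sketch of how the weights named in this changelog would typically be loaded through `timm.create_model`, assuming an installed `timm` version in which the `maxxvit_rmlp_nano_rw_256` and `vit_base_patch32_224_clip_laion2b` names are registered and their pretrained weights are downloadable.

```python
import timm
import torch

# New MaxxViT weight from the Oct 10, 2022 entry (name assumed to be registered
# in the installed timm release).
model = timm.create_model('maxxvit_rmlp_nano_rw_256', pretrained=True)
model.eval()

# LAION-2B CLIP image tower as a classifier-free backbone, per the Sept 23, 2022
# entry; num_classes=0 removes the classification head so the forward pass
# returns pooled features.
backbone = timm.create_model('vit_base_patch32_224_clip_laion2b', pretrained=True, num_classes=0)
backbone.eval()

with torch.no_grad():
    feats = backbone(torch.randn(1, 3, 224, 224))  # pooled feature vector
print(feats.shape)
```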