@@ -28,7 +28,7 @@ I'm fortunate to be able to dedicate significant time and money of my own suppor
 * 1k trained variants: `tf_efficientnetv2_s/m/l`
 * 21k trained variants: `tf_efficientnetv2_s/m/l_21k`
 * 21k pretrained -> 1k fine-tuned: `tf_efficientnetv2_s/m/l_21ft1k`
-* v2 models w/ v1 scaling: `tf_efficientnet_v2_b0` through `b3`
+* v2 models w/ v1 scaling: `tf_efficientnetv2_b0` through `b3`
 * Rename my prev V2 guess `efficientnet_v2s` -> `efficientnetv2_rw_s`
 * Some blank `efficientnetv2_*` models in-place for future native PyTorch training
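Not part of the diff, but for context: a minimal usage sketch showing how the variant names listed above are consumed through `timm.create_model`. It assumes a timm version that includes these EfficientNet-V2 defs; the dummy forward pass and `default_cfg` lookup are illustrative only, and pretrained weight availability depends on the installed version.

```python
import timm
import torch

# Instantiate one of the newly added EfficientNet-V2 variants by name.
model = timm.create_model('tf_efficientnetv2_s', pretrained=True)
model.eval()

# default_cfg carries the expected input size, normalization stats, etc.
cfg = model.default_cfg
x = torch.randn(1, *cfg['input_size'])

# Dummy forward pass at the model's default resolution.
with torch.no_grad():
    out = model(x)
print(out.shape)  # torch.Size([1, 1000]) for the ImageNet-1k trained variants
```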