| M2 MacBook Air 8GB Latency (s) | 18 | 23 | 23 |
Please see the [Important Notes on Performance Benchmarks](#important-notes-on-performance-benchmarks) section for details.
## <a name="using-converted-weights"></a> Using Converted Weights from Hugging Face Hub
<details>
<summary> Click to expand </summary>
🤗 Hugging Face ran the [conversion procedure](#converting-models-to-core-ml) on the following models and made the Core ML weights publicly available on the Hub:
If you want to use any of these models, you may download the weights and proceed to [generate images with Python](#image-generation-with-python) or [Swift](#image-generation-with-swift).
There are several variants in each model repository. You may clone the whole repository using `git` and `git lfs`, or download only the variants you need. For example, to run generation in Python using the `original` attention implementation (see [this section](#converting-models-to-core-ml) for details), you could do something like this:
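A minimal sketch of one way to do this with the `huggingface_hub` Python library; the repo id and variant path below are illustrative examples only, so substitute the model and variant you actually want from the list above:

```python
from pathlib import Path

from huggingface_hub import snapshot_download

# Example repo id and variant path (assumptions for illustration).
repo_id = "apple/coreml-stable-diffusion-v1-4"
variant = "original/packages"

# Download only the files belonging to the selected variant.
model_path = Path("./models") / (repo_id.split("/")[-1] + "_" + variant.replace("/", "_"))
snapshot_download(
    repo_id,
    allow_patterns=f"{variant}/*",
    local_dir=model_path,
)
print(f"Model files downloaded to {model_path}")
```

The downloaded directory can then be passed as the model path when generating images with the Python or Swift pipelines described in the sections linked above.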