This is a simple app that shows how to integrate Apple's [Core ML Stable Diffusion implementation](https://github.com/apple/ml-stable-diffusion) in a native SwiftUI application. The Core ML port is a simplification of the Stable Diffusion implementation from the [diffusers library](https://github.com/huggingface/diffusers). This application can be used for faster iteration, or as sample code for other use cases.
This is what the app looks like on macOS:
![App Screenshot](screenshot.jpg)
On first launch, the application downloads a zipped archive with a Core ML version of Runway's Stable Diffusion v1.5, from [this location in the Hugging Face Hub](https://huggingface.co/pcuenq/coreml-stable-diffusion/tree/main). This process takes a while, as several GB of data have to be downloaded and unarchived.
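The first-launch flow described above can be sketched roughly as follows. This is an illustration, not the app's actual code: the helper name, archive filename, and cache layout are all assumptions.

```swift
import Foundation

// Hypothetical helper: given the remote archive URL, compute where its
// unzipped contents would live inside a local support directory. The
// folder name is derived from the archive filename.
func modelDirectory(for archiveURL: URL, in supportDirectory: URL) -> URL {
    let name = archiveURL.deletingPathExtension().lastPathComponent
    return supportDirectory.appendingPathComponent(name, isDirectory: true)
}

// The download itself could use URLSession's async download API
// (macOS 12 / iOS 15+), then unzip the temporary file into place:
//   let (tempFile, _) = try await URLSession.shared.download(from: archiveURL)
//   // ...unarchive tempFile into modelDirectory(for: archiveURL, in: support)
```

Caching the unpacked models under a stable, name-derived directory is what lets the app skip the multi-GB download on subsequent launches.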
For faster inference, we use a very fast scheduler: DPM-Solver++.
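For reference, driving the underlying package looks roughly like this. The exact parameter names and the scheduler option have changed across versions of `apple/ml-stable-diffusion`, so treat this as an assumption-laden outline rather than the app's actual code:

```swift
#if canImport(StableDiffusion) && canImport(CoreML)
import CoreML
import StableDiffusion

do {
    let configuration = MLModelConfiguration()
    configuration.computeUnits = .cpuAndGPU

    // Directory containing the compiled Core ML models (path is a placeholder).
    let pipeline = try StableDiffusionPipeline(
        resourcesAt: URL(fileURLWithPath: "path/to/compiled/models"),
        configuration: configuration
    )
    // DPM-Solver++ converges in far fewer steps than the default scheduler;
    // the `scheduler:` parameter is assumed here and may not exist in all versions.
    let images = try pipeline.generateImages(
        prompt: "a photo of an astronaut riding a horse on mars",
        stepCount: 25,
        seed: 42,
        scheduler: .dpmSolverMultistepScheduler
    )
    print(images.count)
} catch {
    print(error)
}
#endif
```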
Performance on iPhone is somewhat erratic: sometimes generation is ~20x slower and the phone heats up. This happens when the model cannot be scheduled to run on the Neural Engine and everything falls back to the CPU. We have not been able to determine the cause of this problem. If you observe the same, here are some recommendations:
- Detach from Xcode.
- Kill apps you are not using.
- Let the iPhone cool down before repeating the test.
- Reboot your device.
If you clone or fork this repo, please update `common.xcconfig` with your development team identifier.
## Limitations
- The UI does not expose a way to configure the scheduler, number of inference steps, or generation seed. These are all available in the underlying code.
- A handful of models are currently supported.
- The Core ML compute units have been hardcoded to CPU and GPU on macOS, and to CPU + Neural Engine on iOS/iPadOS.
- Sometimes generation returns a `nil` image. This needs to be investigated.
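The hardcoded per-platform choice of compute units mentioned above can be expressed in a few lines. This is a minimal sketch; the function name is illustrative, and `.cpuAndNeuralEngine` requires iOS 16 / macOS 13:

```swift
#if canImport(CoreML)
import CoreML

// Pick compute units per platform: CPU + GPU on macOS,
// CPU + Neural Engine on iOS/iPadOS.
func preferredComputeUnits() -> MLComputeUnits {
    #if os(iOS)
    return .cpuAndNeuralEngine
    #else
    return .cpuAndGPU
    #endif
}

let configuration = MLModelConfiguration()
configuration.computeUnits = preferredComputeUnits()
#endif
```

Exposing this choice in the UI, rather than hardcoding it, is what the "select compute units" item under Next Steps refers to.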
## Next Steps
- Improve UI. Allow the user to select generation parameters.
- Allow users to select compute units to verify the combination that achieves the best performance on their hardware.
- Allow other models to run. Provide a recommended "compute units" configuration based on model and platform.
- Implement other interesting schedulers.
- Experiment with smaller distilled models.
- Implement negative prompts.
- Explore other features (image to image, for example).