@@ -12,7 +12,7 @@ High-performance inference of [OpenAI's Whisper](https://github.com/openai/whisp
 - Zero memory allocations at runtime
 - Runs on the CPU
 - [C-style API](https://github.com/ggerganov/whisper.cpp/blob/master/whisper.h)
-- Supported platforms: Linux, Mac OS (Intel and Arm), Raspberry Pi, Android
+- Supported platforms: Linux, Mac OS (Intel and Arm), Windows (MinGW), Raspberry Pi, Android
 
 ## Usage
 
@@ -248,6 +248,8 @@ The original models are converted to a custom binary format. This allows to pack
 - vocabulary
 - weights
 
-You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script.
+You can download the converted models using the [download-ggml-model.sh](download-ggml-model.sh) script or from here:
+
+https://ggml.ggerganov.com
 
 For more details, see the conversion script [convert-pt-to-ggml.py](convert-pt-to-ggml.py) or the README in [models](models).
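The second hunk gives two ways to obtain a converted model: the helper script or the mirror URL. A minimal sketch of combining them, assuming the script sits next to this README (as the diff's relative link suggests) and that `base.en` is a valid model name (an assumption, not stated in the diff):

```shell
#!/bin/sh
# Sketch only: the script path and the "base.en" model name are
# assumptions inferred from the links in the diff above.
get_model() {
  model="$1"
  if [ -x ./download-ggml-model.sh ]; then
    # Preferred path from the README: use the helper script.
    ./download-ggml-model.sh "$model"
  else
    # Fallback: point at the mirror added in this diff.
    echo "fetch ggml model '$model' from https://ggml.ggerganov.com"
  fi
}

get_model base.en
```

When the script is absent, the function only prints where to fetch the model from, so the sketch is safe to run outside a checkout.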