From 6da2df34ee40301d9ecb126968ec4c0c6195f26d Mon Sep 17 00:00:00 2001
From: Georgi Gerganov
Date: Sat, 11 Mar 2023 01:18:10 +0200
Subject: [PATCH] Update README.md

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index e16dd07..e7e7cb2 100644
--- a/README.md
+++ b/README.md
@@ -139,5 +139,5 @@ python3 convert-pth-to-ggml.py models/7B/ 1
 In general, it seems to work, but I think it fails for unicode character support. Hopefully, someone can help with that
 - I don't know yet how much the quantization affects the quality of the generated text
 - Probably the token sampling can be improved
-- x86 quantization support [not yet ready](https://github.com/ggerganov/ggml/pull/27). Basically, you want to run this on Apple Silicon
+- x86 quantization support [not yet ready](https://github.com/ggerganov/ggml/pull/27). Basically, you want to run this on Apple Silicon. For now, on Linux and Windows you can use the F16 `ggml-model-f16.bin` model, but it will be much slower.