talk.wasm

Talk with an Artificial Intelligence in your browser:

https://user-images.githubusercontent.com/1991296/203411580-fedb4839-05e4-4474-8364-aaf1e9a9b615.mp4

Online demo: https://whisper.ggerganov.com/talk/

Terminal version: examples/talk

How does it work?

This demo combines two modern neural network models to create a high-quality voice chat directly in your browser:

  • OpenAI's Whisper speech recognition model is used to process your voice and understand what you are saying
  • Upon receiving some voice input, the AI generates a text response using OpenAI's GPT-2 language model
  • The AI then vocalizes the response using the browser's Web Speech API (see the sketch below)
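
The last step relies on the Web Speech API that is built into modern browsers. As a rough sketch of what that call looks like (the reply text and voice settings below are placeholders, not taken from the demo):

// Sketch of the speech synthesis step using the browser's Web Speech API.
// The reply text and language are placeholders.
const utterance = new SpeechSynthesisUtterance("Hello! How can I help you today?");
utterance.lang = "en-US";   // voice language
utterance.rate = 1.0;       // speaking speed
window.speechSynthesis.speak(utterance);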

The web page does all the processing locally on your machine. Running these heavy neural network models in the browser is possible because they are implemented efficiently in C/C++ and use the browser's WebAssembly SIMD capabilities for extra performance.

In order to run the models, the web page first needs to download the model data, which is about 350 MB. The data is then cached by your browser and can be reused on future visits without downloading it again.
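
The demo page handles the download and caching itself. Purely as an illustration of the general idea, a page could fetch a model file once and reuse it via the browser's Cache API on later visits, roughly like this (the URL and cache name are hypothetical):

// Sketch only: download a model file on the first visit and keep it in the
// Cache API so later visits can reuse it. URL and cache name are hypothetical.
async function loadModel(url: string): Promise<ArrayBuffer> {
  const cache = await caches.open("model-cache");
  let response = await cache.match(url);
  if (!response) {
    response = await fetch(url);              // first visit: download (~350 MB)
    await cache.put(url, response.clone());   // store for future visits
  }
  return response.arrayBuffer();
}

// Example: loadModel("https://example.com/ggml-model.bin").then(buf => { /* hand off to the wasm module */ });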

Requirements

To run this demo efficiently, you need the following:

  • A recent Chrome or Firefox browser (Safari is not supported)
  • A desktop or laptop with a modern CPU (a mobile phone will likely not be powerful enough)
  • Spoken phrases no longer than 10 seconds - this is the audio context used by the model
  • About 1.8 GB of RAM - this is how much memory the web page uses

Note that this demo uses the smallest GPT-2 model, so the generated text responses are not always very good. The prompting strategy can also likely be improved for better results.
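
One common way to prompt a plain language model like GPT-2 for chat is to wrap the transcribed speech in a short dialogue template and let the model continue it. Purely as an illustration of that idea (the template text and the helper below are made up, not taken from the demo's source code):

// Illustration only: a dialogue-style prompt template for a plain language
// model like GPT-2. The template text and this helper are hypothetical.
function buildPrompt(history: { who: "Human" | "AI"; text: string }[], userText: string): string {
  const header = "This is a conversation between a Human and an AI assistant.\n";
  const turns = history.map((t) => `${t.who}: ${t.text}`).join("\n");
  // The model is asked to continue after "AI:"; the generated continuation
  // becomes the spoken reply.
  return `${header}${turns}\nHuman: ${userText}\nAI:`;
}

console.log(buildPrompt([{ who: "Human", text: "Hello!" }, { who: "AI", text: "Hi, how can I help?" }], "What is the weather like today?"));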

The demo is quite computationally heavy, so you need a fast CPU. Running transformer models of this size in a browser is unusual - typically they run on powerful GPUs.

Currently, mobile browsers do not support the fixed-width SIMD WebAssembly capability, so you cannot run this demo on a phone or tablet. Hopefully, this will be supported in the near future.
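
If you want to check whether a particular browser supports fixed-width WASM SIMD, one option is the third-party wasm-feature-detect package (shown only as an illustration - the demo does not depend on it):

// Sketch: detect fixed-width WebAssembly SIMD support using the third-party
// "wasm-feature-detect" package (not part of this repository).
import { simd } from "wasm-feature-detect";

simd().then((supported) => {
  console.log(supported
    ? "This browser can run SIMD-enabled WebAssembly"
    : "No WASM SIMD support - the demo will not work here");
});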

Todo

  • Better UI (contributions are welcome)
  • Better GPT-2 prompting

Build instructions

# build using Emscripten (v3.1.2)
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
mkdir build-em && cd build-em
emcmake cmake ..
make -j

# copy the produced page to your HTTP path
cp bin/talk.wasm/*       /path/to/html/
cp bin/libtalk.worker.js /path/to/html/

Feedback

If you have any comments or ideas for improvement, please drop a comment in the following discussion:

https://github.com/ggerganov/whisper.cpp/discussions/167