A Very Basic Simple Whisper Web Interface

Created a little web interface to use Whisper; technically it uses whisper-ctranslate2, which is built on faster-whisper.

This is not currently ready to be run on the public web. It doesn’t have any sort of TLS for encrypting communications from client to server, and all uploaded files are stored on the server. Only use it in a trusted environment.

Setting up Prerequisites

Installing whisper-ctranslate2

pip install -U whisper-ctranslate2
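To confirm the install worked, we can print the tool’s help text; if the command is found, it is installed and on the PATH:

whisper-ctranslate2 --help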

Install NodeJS

sudo apt install nodejs

or

sudo dnf install nodejs

Install Node Dependencies

npm install formidable

Note that http and fs are built-in Node.js modules, so formidable is the only package that actually needs to be installed.

Setting up Simple Whisper Web Interface

First we need a web directory to work from.
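For example (the directory name here is just a placeholder):

mkdir simple-whisper
cd simple-whisper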

Next let’s make an uploads folder inside it.

mkdir uploads

Now let’s create a main.js file. Node is going to be our web server. Copy in the following contents.

var http = require('http')
var formidable = require('formidable')
var fs = require('fs')

const execSync = require('child_process').execSync

let newpath = ''
let modelSize = 'medium.en'
const validModels = [
  'tiny',
  'tiny.en',
  'base',
  'base.en',
  'small',
  'small.en',
  'medium',
  'medium.en',
  'large-v1',
  'large-v2'
]
fs.readFile('./index.html', function (err, html) {
  if (err) throw err

  http
    .createServer(function (req, res) {
      if (req.url == '/fileupload') {
        res.write(html)
        var form = new formidable.IncomingForm()
        form.parse(req, function (err, fields, files) {
          console.log('Fields ' + fields.modeltouse)
          console.log('File ' + files.filetoupload)
          var oldpath = files.filetoupload.filepath
          newpath = './uploads/' + files.filetoupload.originalFilename
          modelSize = validModels.includes(fields.modeltouse)
            ? fields.modeltouse
            : 'medium.en'
          console.log('modelSize::' + modelSize)
          fs.rename(oldpath, newpath, function (err) {
            if (err) {
              console.log('No file selected!') // throw err
              res.write(`<div class="results">No file selected</div>`)
              res.end()
            } else {
              console.log(newpath)
              // Quote the path so filenames with spaces still work. Note that
              // this still shells out with the uploaded filename, which is one
              // reason to only run this in a trusted environment.
              const output = execSync(
                `whisper-ctranslate2 "${newpath}" --model ${modelSize}`,
                { encoding: 'utf-8' }
              )

              res.write(
                `<div class="results"><h2>Results:</h2> <p>${output}</p></div>`
              )
              res.end()
            }
          })
        })
      } else {
        res.writeHead(200, { 'Content-Type': 'text/html' })
        res.write(html)
        return res.end()
      }
    })
    .listen(8080)
})

Now create an index.html file and paste the following in:

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta http-equiv="X-UA-Compatible" content="IE=edge" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Voice Transcribing Using Whisper</title>
    <style>
    body {
      background-color: #b9dbe7;
      align-items: center;
    }

    .box {
      border-radius: 25px;
      padding: 25px;
      width: 80%;
      background-color: azure;
      margin: auto;
      border-bottom: 25px;
      margin-bottom: 25px;
    }

    .button {
      border-radius: 25px;
      margin: auto;
      width: 50%;
      height: 50px;
      display: flex;
      justify-content: center;
      border-style: solid;

      background-color: #e8d2ba;
    }

    h1 {
      text-align: center;
      padding: 0%;
      margin: 0%;
    }

    p {
      font-size: larger;
    }
    .headings {
      font-size: large;
      font-weight: bold;
    }
    input {
      font-size: medium;
    }
    select {
      font-size: medium;
    }
    .results {
      white-space: pre-wrap;
      border-radius: 25px;
      padding: 25px;
      width: 80%;
      align-self: center;
      background-color: azure;
      margin: auto;
    }
    .note {
      font-style: italic;
      font-size: small;
      font-weight: normal;
    }
    </style>
  </head>
  <body>
    <div class="box">
      <h1>Simple Whisper Web Interface</h1>
      <br />
      <p>
        Welcome to the very Simple Whisper Web Interface!<br /><br />
        This is a very basic, easy-to-use web interface for OpenAI's Whisper
        tool. It has not been extensively tested, so you may encounter bugs or
        other problems.
        <br /><br />
        Instructions for use: <br />1. Select an audio file <br />2. Select the
        model you want to use <br />
        3. Click Transcribe! <br />4. Copy your transcription
      </p>
      <br />
      <br />
      <div class="headings">
        <form action="fileupload" method="post" enctype="multipart/form-data">
          Audio File: <input type="file" name="filetoupload" /><br />

          <br />
          Model:
          <select name="modeltouse" id="modeltouse">
            <option value="medium.en">medium.en</option>
            <option value="tiny">tiny</option>
            <option value="tiny.en">tiny.en</option>
            <option value="base">base</option>
            <option value="base.en">base.en</option>
            <option value="small">small</option>
            <option value="small.en">small.en</option>
            <option value="medium">medium</option>
            <option value="medium.en">medium.en</option>
            <option value="large-v1">large-v1</option>
            <option value="large-v2">large-v2</option>
          </select>
          <p class="note">
            Large-v2 and medium.en seem to produce the most accurate results.
          </p>
          <br />
          <br />
          <br />
          <input class="button" type="submit" value="Transcribe!" />
        </form>
      </div>
    </div>
  </body>
</html>

Now we should be set to go.

Fire the web server up with

node ./main.js

The server listens on port 8080, so once it is running, browse to http://localhost:8080. If we want to start it in the background instead, run

node ./main.js &
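To sanity-check the upload endpoint without a browser, we can post a file with curl (audio.mp3 here is a placeholder for any audio file you have on hand):

curl -F "filetoupload=@audio.mp3" -F "modeltouse=tiny" http://localhost:8080/fileupload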

Known Limitations or Bugs

If you hit Transcribe with no file selected, the server crashes.

We are calling whisper-ctranslate2 directly; if it is not on the PATH, then it won’t work.
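A quick way to check (if this prints nothing, the command is not on the PATH; pip usually installs it into ~/.local/bin):

command -v whisper-ctranslate2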

The default model is medium.en; if the selected model has not been downloaded yet, the first transcription may take a while while it downloads. (Originally the model was hard-coded and we wanted a menu for selecting which model to use; that has since been fixed by adding a drop-down that lets you select a model.)

Would be nice to have an option for getting rid of the timestamps.

Improving Accuracy for OpenAI’s Whisper

We can use prompts to improve our Whisper transcriptions.

We can add "--initial_prompt" to our command like the following.

--initial_prompt "Computer Historical etc"
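Put together with the earlier command, that might look like the following; audio.mp3 and the prompt text are placeholders:

whisper-ctranslate2 audio.mp3 --model medium.en --initial_prompt "Computer Historical etc"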

We can also look into suppressing tokens to eliminate words that we don’t want. I believe we need to find the tokens for those words, and then we can use the token IDs to ignore them.
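As a rough sketch with the plain whisper CLI, assuming we had already looked up the token IDs we want to suppress (the IDs below are made-up placeholders; -1 stands for Whisper’s default set of suppressed tokens):

whisper audio.mp3 --model medium.en --suppress_tokens "-1,12345,23456"

More links below.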

https://github.com/openai/whisper/blob/15ab54826343c27cfaf44ce31e9c8fb63d0aa775/whisper/decoding.py#L87-L88

https://platform.openai.com/docs/guides/speech-to-text/prompting

https://github.com/openai/whisper/discussions/355

https://github.com/openai/whisper/discussions/117

https://huggingface.co/blog/fine-tune-whisper

https://discuss.huggingface.co/t/adding-custom-vocabularies-on-whisper/29311/2?u=nbroad

Using FasterWhisper on Ubuntu

faster-whisper is a faster implementation of OpenAI’s Whisper.

https://github.com/guillaumekln/faster-whisper

Someone else has added a “front end” to it so we can just about use it as a drop-in replacement for Whisper.

https://github.com/jordimas/whisper-ctranslate2

We can easily install it with pip.

pip install -U faster-whisper
pip install -U whisper-ctranslate2

For some reason the quality was initially worse than vanilla Whisper. Adding the "--compute_type float32" option improved the quality to the point where there was no noticeable difference between the two.
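So the command we ended up using looks something like this (audio.mp3 is a placeholder):

whisper-ctranslate2 audio.mp3 --model medium.en --compute_type float32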

Setting up Databricks Dolly on Ubuntu with GPU

This is a quick guide for getting Dolly running on an Ubuntu machine with Nvidia GPUs.

You’ll need a good internet connection and around 35GB of hard drive space for the Nvidia driver, Dolly (the 12b model), and extras. You can use the smaller models to take up less space: the 7 billion parameter model uses about 14GB of space, while the 3 billion parameter one is around 6GB.

Install Nvidia Drivers and CUDA

sudo apt install nvidia-driver-530 nvidia-cuda-toolkit

Reboot to activate the Nvidia driver

reboot

Install Python

Python should already be installed, but we do need to install pip.

Once pip is installed, we need to install numpy, accelerate, and transformers.

sudo apt install python3-pip
pip install numpy
pip install "accelerate>=0.12.0" "transformers[torch]==4.25.1"
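Before going further, it is worth checking that PyTorch can actually see the GPU; this one-liner should print True:

python3 -c "import torch; print(torch.cuda.is_available())"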

Run Dolly

Run a Python console. If you run it as root, it may be faster.

python3

Run the following commands to set up Dolly.

import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

# Alternatively, if you want to use a smaller model, run

generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

Notes:

  1. If you have issues, you may want/need to specify an offload folder with offload_folder="./offloadfolder", as shown after this list. An SSD is preferable.
  2. If you have lots of RAM, you can take out the "torch_dtype=torch.bfloat16".
  3. If you do NOT have lots of RAM (>32GB), then you may only be able to run the smallest model.
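For reference, a sketch of note 1 applied to the pipeline call above (the folder name is just an example):

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto", offload_folder="./offloadfolder")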

Alternatively, if we don’t want to use trust_remote_code, we can download the instruct_pipeline.py file from the Dolly repository on Hugging Face and run the following

from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-12b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-12b", device_map="auto")

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)

Now we can ask Dolly a question.

generate_text("Your question?")

Example:

>>> generate_text("Tell me about Databricks dolly-v2-3b?")
'Dolly is the fully managed open-source engine that allows you to rapidly build, test, and deploy machine learning models, all on your own infrastructure.'

Further information is available at the following links.

https://github.com/databrickslabs/dolly
https://huggingface.co/databricks/dolly-v2-3b
https://huggingface.co/databricks/dolly-v2-12b

Install and Use OpenAI Whisper

These commands are for Ubuntu. They should be simple to adapt for other Linux distros.

Install Nvidia and CUDA drivers

sudo apt install nvidia-driver-530 nvidia-cuda-toolkit

Reboot so the system uses the driver.

Install pip and ffmpeg

sudo apt install python3-pip
sudo apt install ffmpeg

Now we can install whisper with

pip install -U openai-whisper

Run Whisper

After it is installed, you should be able to run it like

whisper audio.mp3 --model medium

Change out medium for the model you would like to use. Whisper will download the model and then get to work transcribing. The .en models, e.g. medium.en, seem to perform better than the multilingual ones, if you are transcribing English, that is.

If you receive a “Command ‘whisper’ not found” error, you may not have ~/.local/bin in your PATH. Either add ~/.local/bin to your PATH, or run whisper with the full path:

~/.local/bin/whisper audio.mp3 --model medium
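To add it to your PATH for the current shell session (append the same line to ~/.bashrc to make it permanent):

export PATH="$HOME/.local/bin:$PATH"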

OpenAI Whisper GitHub link.
https://github.com/openai/whisper

Setting up Databricks Dolly on Windows with GPU

The whole process of setting up Dolly can take a while. You’ll need a good internet connection and around 50GB of hard drive space.

Install Nvidia CUDA Toolkit

You’ll need to install the CUDA Toolkit to take advantage of the GPU. The GPU is much faster than just using the CPU.

https://developer.nvidia.com/cuda-downloads

Install Git

Install git from the following site.

https://git-scm.com/downloads

Download Dolly

Download Dolly with git.

git lfs install 
git clone https://huggingface.co/databricks/dolly-v2-12b

Install Python

We’ll also need Python installed if it is not already.
https://www.python.org/downloads/release/

Next we’ll need the following installed

py.exe -m pip install numpy
py.exe -m pip install "accelerate>=0.12.0" "transformers[torch]==4.25.1"
py.exe -m pip install numpy --pre torch --force-reinstall --index-url https://download.pytorch.org/whl/nightly/cu117 --user

The last one is needed to get Dolly to utilize a GPU.
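As a quick check that the GPU install worked, this should print True:

py.exe -c "import torch; print(torch.cuda.is_available())"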

Run Dolly

Run a Python console. If you run it as administrator, it may be faster.

py.exe

Run the following commands to set up Dolly.

import torch
from transformers import pipeline

generate_text = pipeline(model="databricks/dolly-v2-3b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

# Or to use the full model run

generate_text = pipeline(model="databricks/dolly-v2-12b", torch_dtype=torch.bfloat16, trust_remote_code=True, device_map="auto")

Note: if you have issues, you may want/need to specify an offload folder with offload_folder=".\offloadfolder". An SSD is preferable.
Also, if you have lots of RAM, you can take out the "torch_dtype=torch.bfloat16".

Alternatively, if we don’t want to use trust_remote_code, we can download the instruct_pipeline.py file and run the following

from instruct_pipeline import InstructionTextGenerationPipeline
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("databricks/dolly-v2-3b", padding_side="left")
model = AutoModelForCausalLM.from_pretrained("databricks/dolly-v2-3b", device_map="auto")

generate_text = InstructionTextGenerationPipeline(model=model, tokenizer=tokenizer)

Now we can ask Dolly a question.

generate_text("Your question?")

Example:

>>> generate_text("Tell me about Databricks dolly-v2-3b?")
'Dolly is the fully managed open-source engine that allows you to rapidly build, test, and deploy machine learning models, all on your own infrastructure.'

Further information is available at the following two links.

https://github.com/databrickslabs/dolly
https://huggingface.co/databricks/dolly-v2-3b