This post is essentially a walk-through of this shell script.
If you’re reading this, I’m assuming that you’ve already downloaded and installed Kaldi and successfully trained a GMM acoustic model along with a decoding graph.
If you’ve run one of the Kaldi run.sh scripts from the example directory egs/, then you should be ready to go.
This post was prompted by a comment on my Kaldi notes post, which basically asked, “Now that I’ve trained a [GMM] model, how can I start using it?” I think this is a very relevant question for people who want to use Kaldi to build a speech recognition system for some application. The Kaldi scripts are currently set up in a researcher-focused way, and so I think this more applied question is a good one. With this in mind, I decided to write a small post on how to use an existing Kaldi model and graph to generate transcriptions for new audio.
We normally generate transcriptions for new audio with the Kaldi testing and scoring scripts, so I simply dug out the most important parts of these scripts to demonstrate in a concise way how decoding can work.
What you see here is what I gather to be the simplest way to do decoding in Kaldi; it is by no means guaranteed to be the best way.
Things you need
The first file you need is wav.scp. This is the only file that you need to make for your new audio files. All the other files listed below should have already been created during the training phase.
This should have the same format as the wav.scp file generated during training and testing. It is a two-column file, with the utterance ID in the left column and the path to the audio file in the right column.
I’m just going to decode one audio file, so my wav.scp file is one line long, and it looks like this:
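To make the format concrete, here is a hypothetical one-line wav.scp; both the utterance ID and the audio path are placeholders to replace with your own data:

```shell
# Hypothetical wav.scp: one "<utterance-id> <path-to-audio>" pair per line.
# The ID and path below are placeholders, not real files.
cat > wav.scp <<'EOF'
utterance-id-1 /path/to/audio/utterance-id-1.wav
EOF
cat wav.scp
```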
Next, you should have a configuration file specifying how to extract MFCCs. You need to extract exactly the same features for this new audio as you did in training; otherwise, the new feature vectors won't have the dimensionality that the existing GMM acoustic model expects. Comparing the two would be like asking where a 3-D point exists in 2-D space: it doesn't make sense. So, don't adjust anything in the config file you used for training. I used MFCCs, and my config file looks like this:
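As a sketch, here is a minimal mfcc.conf in the style of the Kaldi egs/ recipes; the exact options (and their values) must match whatever your training run used, so treat these two lines as placeholders:

```shell
# Hypothetical mfcc.conf in the style of the Kaldi egs/ recipes.
# Use the options from your own training run, not necessarily these.
cat > mfcc.conf <<'EOF'
--use-energy=false
--sample-frequency=16000
EOF
cat mfcc.conf
```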
Next, you need a trained GMM acoustic model, such as final.mdl. This should have been produced in your training phase, and should be located somewhere like egs/your-model/your-model-1/exp/triphones_deldel/final.mdl. It doesn’t make too much sense to a human, but here’s what the head of the file looks like:
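The model is normally stored in Kaldi's binary format, so if you want to peek at it yourself, you can convert it to text with gmm-copy. This assumes Kaldi's binaries are on your PATH, and the model path shown is just the example location from above:

```shell
# Convert the binary GMM model to text form and show the first lines.
# Requires Kaldi's binaries on your PATH; adjust the model path to your setup.
gmm-copy --binary=false exp/triphones_deldel/final.mdl - | head
```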
The compiled decoding graph, HCLG.fst, is a key part of the decoding process: it combines the HMM structure (H), the phonetic context-dependency (C), the pronunciation lexicon (L), and the language model (G) into a single graph. This file, like the acoustic model shown above, doesn't make much sense to humans, but in any case, here's what the head of mine looks like:
Lastly, if we want to be able to read our transcriptions as an utterance of words instead of a list of integers, we need to provide the mapping from word IDs to the words themselves. HCLG.fst operates on the integers representing words without worrying about what the words are. As such, we need words.txt to map the list of integers we get from decoding to something readable.
This file should have been generated during the data preparation (training) phase.
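To make the format concrete, here is a toy words.txt; the words and IDs are invented for illustration, but the two-column word-to-integer layout (with `<eps>` conventionally at 0) is what Kaldi's data preparation produces:

```shell
# Toy words.txt: each line maps one word to its integer ID.
# These entries are invented for illustration only.
cat > words.txt <<'EOF'
<eps> 0
hello 1
world 2
EOF
cat words.txt
```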
Assuming you’ve got all the files listed above in the right place, I’m now going to go step-by-step through the decoding process.
Audio -> Feature Vectors
First, we’re going to extract MFCCs from the audio according to the specifications listed in the mfcc.conf file. At this point, we give as input (1) our configuration file and (2) our list of audio files, and we get as output (1) ark and scp feature files.
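This step boils down to a single call to compute-mfcc-feats. The directory layout below (a transcriptions/ working directory and config/mfcc.conf) is my assumption, so adjust the paths to your setup:

```shell
# Extract MFCCs for every file listed in wav.scp, writing ark and scp outputs.
# Paths are assumptions; requires Kaldi's binaries on your PATH.
compute-mfcc-feats \
    --config=config/mfcc.conf \
    scp:transcriptions/wav.scp \
    ark,scp:transcriptions/feats.ark,transcriptions/feats.scp
```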
Next, since I trained my GMM acoustic model with delta + delta-delta features, we need to add them to our vanilla MFCC feature vectors. We give as input (1) the MFCC feature vectors generated above and receive as output (1) extended feature vectors with delta + delta-delta features.
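This corresponds to one call to add-deltas; again, the transcriptions/ paths are my assumption:

```shell
# Append delta and delta-delta coefficients to the raw MFCC vectors.
# Paths are assumptions; requires Kaldi's binaries on your PATH.
add-deltas \
    scp:transcriptions/feats.scp \
    ark:transcriptions/delta-feats.ark
```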
Trained GMM-HMM + Feature Vectors -> Lattice
Now that we have feature vectors from our new audio in the appropriate shape, we can use our GMM acoustic model and decoding graph to generate lattices of hypothesized transcriptions. This program takes as input (1) our word-to-symbol table, (2) a trained acoustic model, (3) a compiled decoding graph, and (4) the features from our new audio, and we are returned (1) a file of lattices.
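The program in question is gmm-latgen-faster. The paths below (the graph and model under exp/triphones_deldel, and a transcriptions/ working directory) are assumptions to adapt to your own setup:

```shell
# Decode the features into lattices using the GMM model and HCLG graph.
# The exp/triphones_deldel and transcriptions/ paths are assumptions.
gmm-latgen-faster \
    --word-symbol-table=exp/triphones_deldel/graph/words.txt \
    exp/triphones_deldel/final.mdl \
    exp/triphones_deldel/graph/HCLG.fst \
    ark:transcriptions/delta-feats.ark \
    ark,t:transcriptions/lattices.ark
```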
Lattice -> Best Path Through Lattice
Some people might be happy to stop with the lattice, and do their own post-processing, but I think many people will want a single best-guess transcription for the audio. The following program takes as input (1) the generated lattices from above and (2) the word-to-symbol table and returns (1) the best path through the lattice.
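The program here is lattice-best-path; as before, the exp/triphones_deldel and transcriptions/ paths are my assumptions:

```shell
# Extract the single best path (as a sequence of word IDs) from each lattice.
# Paths are assumptions; requires Kaldi's binaries on your PATH.
lattice-best-path \
    --word-symbol-table=exp/triphones_deldel/graph/words.txt \
    ark:transcriptions/lattices.ark \
    ark,t:transcriptions/one-best.tra
```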
Best Path Integers -> Best Path Words
The best path that we get above is displayed as a line of integers for each transcription. This isn't very useful for most applications, so here is how we can substitute the integers with the words they represent.
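In the Kaldi scripts this substitution is done by utils/int2sym.pl. To make the idea concrete without a Kaldi checkout, here is a self-contained awk sketch of the same substitution over toy data (the words, IDs, and file names are invented):

```shell
# Toy word-to-ID table, as produced during data preparation.
cat > words.txt <<'EOF'
<eps> 0
hello 1
world 2
EOF
# Toy best-path output: an utterance ID followed by word IDs.
cat > one-best.tra <<'EOF'
utterance-id-1 1 2
EOF
# First pass reads words.txt into a lookup table; the second pass keeps
# field 1 (the utterance ID) and maps every remaining field to its word.
awk 'NR==FNR {word[$2]=$1; next}
     {out=$1; for (i=2; i<=NF; i++) out=out" "word[$i]; print out}' \
    words.txt one-best.tra > one-best-hypothesis.txt
cat one-best-hypothesis.txt
```

With a real Kaldi checkout you would instead run something like `utils/int2sym.pl -f 2- words.txt one-best.tra > one-best-hypothesis.txt`, where `-f 2-` tells the script to leave the first field (the utterance ID) untouched.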
If you run all the above programs successfully, you should end up with a new file transcriptions/one-best-hypothesis.txt, which will list your files and their transcriptions.
I hope this was helpful!
If you have any feedback or questions, don’t hesitate to leave a comment!