disambig - disambiguate text tokens using an N-gram model
disambig translates a stream of tokens from a vocabulary V1 to a corresponding
stream of tokens from a vocabulary V2,
according to a probabilistic, 1-to-many mapping.
Ambiguities in the mapping are resolved by finding the V2 sequence with
the highest posterior probability given the V1 sequence.
This probability is computed from pairwise conditional probabilities P(V1|V2),
as well as a language model for sequences over V2.
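Schematically (notation informal; LMW and MAPW are the weights set by the
-lmw and -mapw options described below), the decoder searches for

    V2* = argmax over w2_1 ... w2_n of  P(w2_1 ... w2_n)^LMW * prod_i P(w1_i | w2_i)^MAPW

that is, a hidden Markov model in which the V2 tokens are the hidden states
and the V1 tokens are the observations.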
Each filename argument can be an ASCII file, or a
compressed file (name ending in .Z or .gz), or ``-'' to indicate
stdin/stdout.
- -help
Print option summary.
- -version
Print version information.
- -text file
Specifies the file containing the V1 sentences.
- -map file
Specifies the file containing the V1-to-V2 mapping information.
Each line of file contains the mapping for a single word in V1:
w1 w21 [p21] w22 [p22] ...
Here w1 is a word from V1, which has possible mappings w21, w22, ... from V2.
Optionally, each of these can be followed by a numeric string for the probability,
which defaults to 1.
The number is used as the conditional probability P(w1|w21),
but the program does not depend on these numbers being properly normalized.
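For illustration, a hypothetical map file for part-of-speech disambiguation
(words, tags, and probabilities invented) might contain:

    can     MD 0.9   NN 0.08  VB 0.02
    the     DT 1
    book    NN 0.7   VB 0.3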
- -escape string
Set an ``escape string.''
Input lines starting with string are not processed and are passed unchanged to stdout instead.
This allows associated information to be passed to scoring scripts etc.
- -text-map file
Processes a combined text/map file.
The format of file is the same as for -map files,
except that the first field on each line is interpreted as a word instance
rather than a word type.
Hence, the V1 text input consists of all words in column 1 of file,
in order of appearance.
This is convenient if different instances of a word have different mappings.
There is no implicit insertion of begin/end sentence tokens in this
mode. Sentence boundaries should be indicated explicitly by
lines of the form

    <s>     <s>
    </s>    </s>
An escaped line (see -escape) also implicitly marks a sentence boundary.
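For illustration, a hypothetical text-map fragment for a one-sentence input,
with explicit boundary lines (tags and probabilities invented):

    <s>     <s>
    can     MD 0.2   VB 0.8
    it      PRP 1
    </s>    </s>

Here this particular instance of ``can'' receives a mapping specific to its occurrence.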
- -classes file
Specifies the V1-to-V2 mapping information in classes-format(5).
Class labels are interpreted as V2 words, and expansions as V1 words.
Multi-word expansions are not allowed.
- -scale
Interpret the numbers in the mapping as P(w21|w1).
This is done by dividing the map probabilities by the unigram probabilities of the V2 words,
obtained from the V2 language model.
- -logmap
Interpret numeric values in the map file as log probabilities, not probabilities.
- -lm file
Specifies the V2 language model as a standard ARPA N-gram backoff model file in ngram-format(5).
The default is not to use a language model, i.e., choose V2 tokens
based only on the probabilities in the map file.
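A typical invocation (file names are illustrative) combines a map file with a
trigram model over V2:

    disambig -text input.txt -map v1v2.map -lm v2.3gram.lm -order 3 > output.txt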
- -use-server S
Use a network LM server (typically implemented by ngram(1) with the -server-port
option) as the main model.
The server specification S
can be an unsigned integer port number (referring to a server port running on
the local host),
a hostname (referring to default port 2525 on the named host),
or a string of the form port@host, where port
is a port number and host
is either a hostname ("dukas.speech.sri.com")
or an IP number in dotted-quad format ("126.96.36.199").
For server-based LMs, the -order
option limits the context length of N-grams queried by the client
(with 0 denoting unlimited length).
Hence, the effective LM order is the minimum of the client-specified value
and any limit implemented in the server.
When -use-server is specified, the arguments to the options -mix-lm, -mix-lm2,
etc. are also interpreted as network LM server specifications, provided
they contain an '@' character and do not contain a '/' character.
This allows the creation of mixtures of several file- and/or network-based LMs.
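For example (host and file names are illustrative), an LM could be served by
ngram(1) on port 2525 and queried from disambig:

    ngram -lm v2.3gram.lm -server-port 2525 &
    disambig -text input.txt -map v1v2.map -use-server 2525@localhost -order 3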
- -cache-served-ngrams
Enables client-side caching of N-gram probabilities to eliminate duplicate
network queries, in conjunction with -use-server.
This may result in a substantial speedup
but requires memory in the client that may grow linearly with the
amount of data processed.
- -order n
Set the effective N-gram order used by the language model to n.
Default is 2 (use a bigram model).
- -mix-lm file
Read a second N-gram model for interpolation purposes.
- -factored
Interpret the files specified by -lm, -mix-lm, etc.
as a factored N-gram model specification.
- -count-lm
Interpret the model specified by -lm
as a count-based LM.
- -lambda weight
Set the weight of the main model when interpolating with -mix-lm.
Default value is 0.5.
- -mix-lm2 file
- -mix-lm3 file
- -mix-lm4 file
- -mix-lm5 file
- -mix-lm6 file
- -mix-lm7 file
- -mix-lm8 file
- -mix-lm9 file
Up to 9 more N-gram models can be specified for interpolation.
- -mix-lambda2 weight
- -mix-lambda3 weight
- -mix-lambda4 weight
- -mix-lambda5 weight
- -mix-lambda6 weight
- -mix-lambda7 weight
- -mix-lambda8 weight
- -mix-lambda9 weight
These are the weights for the additional mixture components, corresponding
to -mix-lm2 through -mix-lm9.
The weight for the -mix-lm model is 1 minus the sum of -lambda and
-mix-lambda2 through -mix-lambda9.
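For example, with -lambda 0.6 and -mix-lambda2 0.3, the main model receives
weight 0.6, the -mix-lm2 model weight 0.3, and the -mix-lm model the
remainder, 1 - 0.6 - 0.3 = 0.1.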
- -bayes length
Set the context length used for Bayesian interpolation.
The default value is 0, giving the standard fixed interpolation weight specified by -lambda.
- -bayes-scale scale
Set the exponential scale factor on the context likelihood in conjunction with the -bayes option.
Default value is 1.0.
- -lmw W
Scales the language model probabilities by a factor W.
Default language model weight is 1.
- -mapw W
Scales the likelihood map probability by a factor W.
Default map weight is 1.
Note: For Viterbi decoding (the default) it is equivalent to use -lmw W or -mapw 1/W,
but not for forward-backward computation.
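The equivalence holds because Viterbi decoding only ranks hypotheses: the
argmax of

    LMW * log P(V2) + MAPW * log P(V1|V2)

is unchanged when both weights are scaled by a common factor, so only the
ratio LMW/MAPW matters. Forward-backward posteriors, in contrast, depend on
the absolute magnitudes of the weights, not just their ratio.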
- -tolower1
Map input vocabulary (V1) to lowercase, removing case distinctions.
- -tolower2
Map output vocabulary (V2) to lowercase, removing case distinctions.
- -keep-unk
Do not map unknown input words to the <unk> token.
Instead, output the input word unchanged.
This is like having an implicit default mapping for unknown words to
themselves, except that the word will still be treated as <unk> by the language model.
Also, with this option the LM is assumed to be open-vocabulary
(the default is closed-vocabulary).
- -vocab-aliases file
Reads vocabulary alias definitions from file,
consisting of lines of the form

    alias word

This causes all V2 tokens alias to be mapped to word,
and is useful for adapting mismatched language models.
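For instance, a hypothetical alias file mapping variant spellings used by the
LM onto a single V2 form:

    colour    color
    labour    labor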
- -no-eos
Do not assume that each input line contains a complete sentence.
This prevents end-of-sentence tokens </s> from being appended automatically.
- -continuous
Process all words in the input as one sequence of words, irrespective of line breaks.
Normally each line is processed separately as a sentence.
V2 tokens are output one-per-line.
This option also prevents sentence start/end tokens (<s> and </s>)
from being added to the input.
- -fb
Perform forward-backward decoding of the input (V1) token sequence.
Outputs the V2 tokens that have the highest posterior probability,
for each position.
The default is to use Viterbi decoding, i.e., the output is the
V2 sequence with the highest joint posterior probability.
- -fw-only
Similar to -fb, but uses only the forward probabilities for computing posteriors.
This may be used to simulate on-line prediction of tags, without the
benefit of future context.
- -totals
Output the total string probability for each input sentence.
- -posteriors
Output the table of posterior probabilities for each
input (V1) token and each V2 token, in the same format as
required for the -map file.
If -fb is also specified the posterior probabilities will be computed using
forward-backward probabilities; otherwise an approximation will be used
that is based on the probability of the most likely path containing
a given V2 token at a given position.
- -nbest N
Output the N best hypotheses instead of just the first best when
doing Viterbi search.
If N > 1, then each hypothesis is prefixed by the tag NBEST_n x,
where n is the rank of the hypothesis in the N-best list and x
its score, the negative log of the combined probability of transitions
and observations of the corresponding HMM path.
- -write-counts file
Outputs the V2-V1 bigram counts corresponding to the tagging performed on
the input data.
If -fb was specified these are expected counts; otherwise they reflect the 1-best tagging.
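For example (file names are illustrative), expected counts can be accumulated
under forward-backward decoding:

    disambig -text input.txt -map v1v2.map -lm v2.3gram.lm -fb -write-counts v2v1.counts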
- -write-vocab1 file
Writes the input vocabulary from the map (V1) to file.
- -write-vocab2 file
Writes the output vocabulary from the map (V2) to file.
The vocabulary will also include the words specified in the language model.
- -write-map file
Writes the map back to a file for validation purposes.
- -debug level
Sets debugging output level.
The -continuous and -text-map options effectively disable -keep-unk,
i.e., unknown input words are always mapped to <unk>.
Also, -continuous doesn't preserve the positions of escaped input lines relative to
the regular input lines.
ngram-count(1), ngram(1), hidden-ngram(1), training-scripts(1),
ngram-format(5), classes-format(5).
Andreas Stolcke <firstname.lastname@example.org>,
Anand Venkataraman <email@example.com>.
Copyright 1995-2007 SRI International