In information theory, linguistics, and computer science, the Levenshtein distance is a string metric for measuring the difference between two sequences. Informally, the Levenshtein distance between two words is the minimum number of single-character edits (insertions, deletions or substitutions) required to change one word into the other. It is named after the Soviet mathematician Vladimir Levenshtein, who considered this distance in 1965.
Levenshtein distance may also be referred to as edit distance, although that term may also denote a larger family of distance metrics known collectively as edit distance. It is closely related to pairwise string alignments.
The Levenshtein distance between two strings a and b (of length |a| and |b| respectively) is given by lev(a, b), where

    lev(a, b) = |a|                                 if |b| = 0,
                |b|                                 if |a| = 0,
                lev(tail(a), tail(b))               if a[0] = b[0],
                1 + min( lev(tail(a), b),
                         lev(a, tail(b)),
                         lev(tail(a), tail(b)) )    otherwise,

where the tail of some string x is a string of all but the first character of x, and x[n] is the nth character of the string x, starting with character 0.
Note that the first element in the minimum corresponds to deletion (from a to b), the second to insertion, and the third to replacement.
This definition corresponds directly to the naïve recursive implementation.
For example, the Levenshtein distance between "kitten" and "sitting" is 3, since the following three edits change one into the other, and there is no way to do it with fewer than three edits:

1. kitten → sitten (substitution of "s" for "k"),
2. sitten → sittin (substitution of "i" for "e"),
3. sittin → sitting (insertion of "g" at the end).
The Levenshtein distance has several simple upper and lower bounds. These include:

- It is at least the absolute value of the difference of the sizes of the two strings.
- It is at most the length of the longer string.
- It is zero if and only if the strings are equal.
- If the strings have the same size, the Hamming distance is an upper bound on the Levenshtein distance.
- The Levenshtein distance between two strings is no greater than the sum of their Levenshtein distances from a third string (triangle inequality).
An example where the Levenshtein distance between two strings of the same length is strictly less than the Hamming distance is given by the pair "flaw" and "lawn". Here the Levenshtein distance equals 2 (delete "f" from the front; insert "n" at the end). The Hamming distance is 4.
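This pair can be checked directly; the short script below is an illustrative sketch (the function names are not from any particular library), with the Levenshtein computation following the recursive definition given above:

```python
def levenshtein(s, t):
    # Naive recursion following the definition above; fine for short strings.
    if not s:
        return len(t)
    if not t:
        return len(s)
    if s[0] == t[0]:
        return levenshtein(s[1:], t[1:])
    return 1 + min(levenshtein(s, t[1:]),      # insertion
                   levenshtein(s[1:], t),      # deletion
                   levenshtein(s[1:], t[1:]))  # substitution

def hamming(s, t):
    # Number of positions at which two equal-length strings differ.
    assert len(s) == len(t)
    return sum(a != b for a, b in zip(s, t))

print(levenshtein("flaw", "lawn"))  # 2
print(hamming("flaw", "lawn"))      # 4
```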
In approximate string matching, the objective is to find matches for short strings in many longer texts, in situations where a small number of differences is to be expected. The short strings could come from a dictionary, for instance. Here, one of the strings is typically short, while the other is arbitrarily long. This has a wide range of applications, for instance, spell checkers, correction systems for optical character recognition, and software to assist natural language translation based on translation memory.
The Levenshtein distance can also be computed between two longer strings, but the cost to compute it, which is roughly proportional to the product of the two string lengths, makes this impractical. Thus, when used to aid in fuzzy string searching in applications such as record linkage, the compared strings are usually short to help improve speed of comparisons.
In linguistics, the Levenshtein distance is used as a metric to quantify linguistic distance, or how different two languages are from one another. It is related to mutual intelligibility: the higher the linguistic distance, the lower the mutual intelligibility, and the lower the linguistic distance, the higher the mutual intelligibility.
There are other popular measures of edit distance, which are calculated using a different set of allowable edit operations. For instance,

- the Damerau–Levenshtein distance allows insertion, deletion, substitution, and the transposition of two adjacent characters;
- the longest common subsequence (LCS) distance allows only insertion and deletion, not substitution;
- the Hamming distance allows only substitution, hence it only applies to strings of the same length;
- the Jaro distance allows only transposition.
Edit distance is usually defined as a parameterizable metric calculated with a specific set of allowed edit operations, and each operation is assigned a cost (possibly infinite). This is further generalized by DNA sequence alignment algorithms such as the Smith-Waterman algorithm, which make an operation's cost depend on where it is applied.
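As an illustration of this parameterized form, the sketch below (a hypothetical helper, not a standard library function) generalizes the recurrence with a fixed cost per operation:

```python
def weighted_edit_distance(s, t, ins=1, dele=1, sub=1):
    # Dynamic programming over prefixes, with a fixed cost per operation.
    # Setting sub higher than ins + dele effectively forbids substitution,
    # since a deletion plus an insertion is then always cheaper.
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * dele
    for j in range(1, n + 1):
        d[0][j] = j * ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if s[i - 1] == t[j - 1] else sub
            d[i][j] = min(d[i - 1][j] + dele,       # deletion
                          d[i][j - 1] + ins,        # insertion
                          d[i - 1][j - 1] + cost)   # substitution (or match)
    return d[m][n]
```

With unit costs this is the ordinary Levenshtein distance; with a prohibitive substitution cost it reduces to the LCS distance.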
This is a straightforward, but inefficient, recursive Haskell implementation of an
lDistance function that takes two strings, s and t, and returns the Levenshtein distance between them:

lDistance :: Eq a => [a] -> [a] -> Int
lDistance [] t = length t  -- If s is empty, the distance is the number of characters in t
lDistance s [] = length s  -- If t is empty, the distance is the number of characters in s
lDistance (a : s') (b : t') =
  if a == b
    then lDistance s' t'  -- If the first characters are the same, they can be ignored
    else
      1 + minimum  -- Otherwise try all three possible actions and select the best one
        [ lDistance (a : s') t'  -- Character is inserted (b inserted)
        , lDistance s' (b : t')  -- Character is deleted (a deleted)
        , lDistance s' t'        -- Character is replaced (a replaced with b)
        ]
This implementation is very inefficient because it recomputes the Levenshtein distance of the same substrings many times.
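One standard remedy is to memoize the recursion so that each pair of suffixes is computed only once. A sketch in Python (using functools.lru_cache; the function name is illustrative):

```python
from functools import lru_cache

def l_distance(s, t):
    @lru_cache(maxsize=None)
    def go(i, j):
        # Distance between the suffixes s[i:] and t[j:].
        if i == len(s):
            return len(t) - j
        if j == len(t):
            return len(s) - i
        if s[i] == t[j]:
            return go(i + 1, j + 1)
        return 1 + min(go(i, j + 1),      # insertion
                       go(i + 1, j),      # deletion
                       go(i + 1, j + 1))  # substitution
    return go(0, 0)
```

This brings the running time down from exponential to O(mn), the same bound as the tabular methods described next.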
A more efficient method would never repeat the same distance calculation. For example, the Levenshtein distance of all possible prefixes might be stored in an array d[i][j], where d[i][j] is the distance between the first i characters of string s and the first j characters of string t. The table is easy to construct one row at a time starting with row 0. When the entire table has been built, the desired distance is in the table in the last row and column, representing the distance between all of the characters in s and all the characters in t.
Computing the Levenshtein distance is based on the observation that if we reserve a matrix to hold the Levenshtein distances between all prefixes of the first string and all prefixes of the second, then we can compute the values in the matrix in a dynamic programming fashion, and thus find the distance between the two full strings as the last value computed.
This is a straightforward pseudocode implementation of a function
LevenshteinDistance that takes two strings, s of length m and t of length n, and returns the Levenshtein distance between them:
function LevenshteinDistance(char s[1..m], char t[1..n]):
    // for all i and j, d[i, j] will hold the Levenshtein distance between
    // the first i characters of s and the first j characters of t
    declare int d[0..m, 0..n]

    set each element in d to zero

    // source prefixes can be transformed into empty string by
    // dropping all characters
    for i from 1 to m:
        d[i, 0] := i

    // target prefixes can be reached from empty source prefix
    // by inserting every character
    for j from 1 to n:
        d[0, j] := j

    for j from 1 to n:
        for i from 1 to m:
            if s[i] = t[j]:
                substitutionCost := 0
            else:
                substitutionCost := 1

            d[i, j] := minimum(d[i-1, j] + 1,                   // deletion
                               d[i, j-1] + 1,                   // insertion
                               d[i-1, j-1] + substitutionCost)  // substitution

    return d[m, n]
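The pseudocode translates almost line for line into Python (an illustrative sketch, not a library function; note the shift to 0-based string indexing):

```python
def levenshtein_distance(s, t):
    m, n = len(s), len(t)
    # d[i][j] holds the distance between the first i characters of s
    # and the first j characters of t.
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i  # delete all i characters of the source prefix
    for j in range(1, n + 1):
        d[0][j] = j  # insert all j characters of the target prefix
    for j in range(1, n + 1):
        for i in range(1, m + 1):
            substitution_cost = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,                      # deletion
                          d[i][j - 1] + 1,                      # insertion
                          d[i - 1][j - 1] + substitution_cost)  # substitution
    return d[m][n]
```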
The invariant maintained throughout the algorithm is that we can transform the initial segment
s[1..i] into
t[1..j] using a minimum of
d[i, j] operations. At the end, the bottom-right element of the array contains the answer.
It turns out that only two rows of the table are needed for the construction if one does not want to reconstruct the edited input strings (the previous row and the current row being calculated).
The Levenshtein distance may be calculated iteratively using the following algorithm:
function LevenshteinDistance(char s[0..m-1], char t[0..n-1]):
    // create two work vectors of integer distances
    declare int v0[n + 1]
    declare int v1[n + 1]

    // initialize v0 (the previous row of distances)
    // this row is A[0][i]: edit distance from an empty s to t;
    // the distance is just the number of characters to delete from t
    for i from 0 to n:
        v0[i] = i

    for i from 0 to m - 1:
        // calculate v1 (current row distances) from the previous row v0

        // first element of v1 is A[i + 1][0]
        // edit distance is delete (i + 1) chars from s to match empty t
        v1[0] = i + 1

        // use formula to fill in the rest of the row
        for j from 0 to n - 1:
            // calculating costs for A[i + 1][j + 1]
            deletionCost := v0[j + 1] + 1
            insertionCost := v1[j] + 1
            if s[i] = t[j]:
                substitutionCost := v0[j]
            else:
                substitutionCost := v0[j] + 1

            v1[j + 1] := minimum(deletionCost, insertionCost, substitutionCost)

        // copy v1 (current row) to v0 (previous row) for next iteration
        // since data in v1 is always invalidated, a swap without copy could be more efficient
        swap v0 with v1

    // after the last swap, the results of v1 are now in v0
    return v0[n]
This two-row variant is suboptimal: the amount of memory required may be reduced to one row and one (index) word of overhead, for better cache locality.
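A sketch of that one-row reduction (illustrative, not a library function): the row is overwritten in place, and a single scalar carries the previous row's diagonal entry before it is clobbered.

```python
def levenshtein_one_row(s, t):
    # row[j] holds the previous row's value until position j is overwritten;
    # `diagonal` preserves the previous row's row[j - 1] after it is replaced.
    row = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        diagonal, row[0] = row[0], i
        for j, b in enumerate(t, 1):
            diagonal, row[j] = row[j], min(row[j] + 1,           # deletion
                                           row[j - 1] + 1,       # insertion
                                           diagonal + (a != b))  # substitution
    return row[len(t)]
```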
The dynamic programming variant is still not the ideal implementation. An adaptive approach may reduce the amount of memory required and, in the best case, may reduce the time complexity to linear in the length of the shortest string; in the worst case, it is no more than quadratic in the length of the shortest string. The idea is that one can use efficient library functions (such as
std::mismatch) to check for common prefixes and suffixes and only dive into the dynamic programming part on mismatch.
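A sketch of that trimming step in Python (the helper name is illustrative; the two scanning loops play the role of std::mismatch):

```python
def trim_common_affixes(s, t):
    # Shared prefixes and suffixes never contribute to the distance,
    # so they can be stripped before running the dynamic program.
    start = 0
    while start < len(s) and start < len(t) and s[start] == t[start]:
        start += 1
    end_s, end_t = len(s), len(t)
    while end_s > start and end_t > start and s[end_s - 1] == t[end_t - 1]:
        end_s -= 1
        end_t -= 1
    return s[start:end_s], t[start:end_t]

print(trim_common_affixes("levenshtein", "levenstein"))  # ('h', '')
```

Here only the single-character remainder need be handed to the dynamic program, so the distance is 1 almost for free.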
The Levenshtein distance between two strings of length n can be approximated to within a factor (log n)^O(1/ε), where ε > 0 is a free parameter to be tuned, in time O(n^(1+ε)).