A port of Hunspell for .NET.
Download and install with NuGet: WeCantSpell.Hunspell
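If you use the .NET CLI, the package can be added to a project like so (standard NuGet install command; project file name is whatever your own project uses):

```shell
dotnet add package WeCantSpell.Hunspell
```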
License: "It's complicated"
This library was ported from the original Hunspell source and as a result is covered by its MPL, LGPL, and GPL tri-license. Read the LICENSE file to be sure you can use this library.
```csharp
using WeCantSpell.Hunspell;

var dictionary = WordList.CreateFromFiles(@"English (British).dic");

bool notOk = dictionary.Check("Color");
var suggestions = dictionary.Suggest("Color");
bool ok = dictionary.Check("Colour");
```
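`Suggest` returns a sequence of candidate spellings, so a simple check-then-suggest flow can be sketched as below (a minimal sketch; the dictionary path and the `SpellExample` class name are just placeholders for your own):

```csharp
using System;
using WeCantSpell.Hunspell;

class SpellExample
{
    static void Main()
    {
        // Load a dictionary; the matching .aff file is located automatically.
        var dictionary = WordList.CreateFromFiles(@"English (British).dic");

        var word = "Color";
        if (!dictionary.Check(word))
        {
            // Enumerate the suggested replacements for the misspelled word.
            foreach (var suggestion in dictionary.Suggest(word))
            {
                Console.WriteLine(suggestion);
            }
        }
    }
}
```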
This port performs competitively on modern .NET runtimes compared with the original NHunspell binaries.
| Benchmark | .NET 8 | .NET 4.8 | NHunspell |
|---|---|---|---|
| Check | 5,980 μs | 18,268 μs | 6,121 μs |
| Suggest | 358 ms | 807 ms | 1,903 ms |

Note: Measurements taken on an AMD 5800H.
Construct from a list:
```csharp
var words = "The quick brown fox jumps over the lazy dog".Split(' ');
var dictionary = WordList.CreateFromWords(words);
bool notOk = dictionary.Check("teh");
```
Construct from streams:
```csharp
using var dictionaryStream = File.OpenRead(@"English (British).dic");
using var affixStream = File.OpenRead(@"English (British).aff");

var dictionary = WordList.CreateFromStreams(dictionaryStream, affixStream);
bool notOk = dictionary.Check("teh");
```
The .NET Framework includes many encodings that can be handy when opening dictionary or affix files that do not use a UTF-8 encoding or were incorrectly given a UTF BOM. On the full framework platform this works out great, but on .NET Core or .NET Standard those encodings may be missing. If you suspect an issue when loading dictionary and affix files, check the `dictionary.Affix.Warnings` collection to see if there was a failure when parsing the encoding specified in the file, such as "Failed to get encoding: ISO-8859-15" or "Failed to parse line: SET ISO-8859-15". To enable these encodings, reference the `System.Text.Encoding.CodePages` package and then use `Encoding.RegisterProvider(CodePagesEncodingProvider.Instance)` to register them before loading files.
```csharp
using System.Text;
using WeCantSpell.Hunspell;

class Program
{
    // Register the code-page encodings before any dictionary files are loaded.
    static Program() => Encoding.RegisterProvider(CodePagesEncodingProvider.Instance);

    static void Main(string[] args)
    {
        var dictionary = WordList.CreateFromFiles(@"encoding.dic");
        bool notOk = dictionary.Check("teh");
        var warnings = dictionary.Affix.Warnings;
    }
}
```
This port wouldn't have been feasible for me to produce or maintain without the live testing functionality in NCrunch. Getting near-instant feedback from tests saved me from countless typos, porting bugs, and even bugs from upstream. I was very relieved to see that NCrunch survived the release of "Live Unit Testing" in Visual Studio. If you want to try live testing but have been dissatisfied with the native implementation in Visual Studio, please give NCrunch a try. Without NCrunch I would likely stop maintaining this port; it really is that critical to my workflow.
I initially started this port so I could revive my old C# spell check tool, but I ended up so distracted and burnt out from the port that I never got around to writing the Roslyn analyzer. Eventually Visual Studio got its own spell checker, and VS Code has a plethora of them too, so I doubt I will develop such an analyzer in the future. Others have taken up that task, so give their projects a look:
For details on contributing, see the contributing document. Check the hunspell-origin submodule to see how up to date this library is compared with the upstream source.