dictation-toolbox / aenea

Client-server library for using voice macros from Dragon NaturallySpeaking and Dragonfly on remote/non-windows hosts.
GNU Lesser General Public License v3.0

Plugin-based chaining grammars #19

Closed calmofthestorm closed 10 years ago

calmofthestorm commented 10 years ago

Grammars such as multiedit and verbal_emacs work by building up a bunch of commands, each of which the user would think of as a single command, and then creating a repetition rule. This allows the user to string together commands, rather than having to wait for each to be recognized before speaking the next (or at the very least without pausing between commands). To Dragon, this looks like one large command each time the user speaks.
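To make the chaining idea concrete, here is a toy model (illustrative names only, not Dragonfly's real API): what the user thinks of as several commands is, to Dragon, one long phrase matched against a repetition rule, recognized all or nothing.

```python
# Toy model of chaining: a spoken phrase is a sequence of known "atoms",
# but the engine accepts or rejects the WHOLE phrase as one command.
ATOMS = {"up": "<up>", "down": "<down>", "dell": "<delete>"}  # hypothetical atoms

def recognize(utterance, max_chain=16):
    """Return the parsed atom sequence, or None if the phrase as a whole fails."""
    atoms = utterance.split()
    if not atoms or len(atoms) > max_chain or any(a not in ATOMS for a in atoms):
        return None  # all-or-nothing: no partial recognition
    return [ATOMS[a] for a in atoms]

recognize("up up dell")           # three atoms spoken as one phrase
recognize(" ".join(["up"] * 17))  # None: over the chain limit, whole phrase dropped
```

The `max_chain` parameter mirrors the repetition bound discussed later in this thread: exceeding it loses the entire utterance, not just the overflow.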

This is great for usability, and it is wonderful that Dragon can support such complicated commands. I would argue that it is this capability alone that makes editing text possible in this way. Unfortunately, it tends to conflate the natural concept of a command, from the user's perspective, with the logic necessary to support the chaining. This leads to issues such as the duplication of code between verbal_emacs and multiedit. As the project grows, and more people want to use different languages, different programs, etc., I feel this architecture will become more and more unwieldy.

I propose the following as a long-term design for how to solve this. I basically see three distinct problems that need to be composed in various combinations to make any particular grammar along the lines of verbal_emacs, multiedit, shell commands, etc:

This is how I would build it in the absence of constraints. Unfortunately, I need to look into what constraints Natlink/Dragonfly/Dragon impose, and I suspect those constraints are likely to dominate the design. In particular, I suspect some of the issues I ran into with verbal_emacs, where nested grammars were thrown out for being too complex, will reappear here: Dragon seems to object more to grammars that are too deep (even when unambiguous) than to grammars that recognize a large number of commands, although the latter is what I would expect to matter based on how most parsing tends to go.

This is also complicated by #17.

I welcome feedback, both on my design in ideal land and on the realities anyone has run into with similar projects. @poppe1219 @nirvdrum

calmofthestorm commented 10 years ago

@grayjay

poppe1219 commented 10 years ago

I wasn't aware of the problem with nested grammars. I guess my grammars haven't reached that level of complexity. Since my focus right now is on making use of your server-client solution to control Linux, with a multi-screen mousegrid, it's not very likely I will bump into these problems anytime soon.

But it will be very interesting to see what conclusions you come to and what solutions you find.

poppe1219 commented 10 years ago

Apart from the problem with Dragonfly's limitations on nested grammars, I haven't grasped what you want to achieve. And due to lack of time and my example-driven brain, I haven't understood your verbal_emacs scripts yet. Maybe it's a pain for me to ask, but perhaps if you could give a usage example of how you would like the solution to be used, I would understand the problem better? A usage example that causes nesting of the kind that made you reach Dragonfly's limits?

calmofthestorm commented 10 years ago

I can in fact give two real-world examples I ran into when writing verbal_emacs. First, let me say that it is not just about depth, but more generally about "complexity" as defined by Dragon (so far, the only way I've found to measure it is binary search -- either Dragon refuses to load a grammar or it doesn't).

The first thing I ran into was how I handled letters of the alphabet, digits, etc. Initially, I wanted to have a sub-mode for spelling -- so there would be a command "letters", after which you could say any number of letters, followed by "done" (or somesuch). This would be a single atom in the chaining loop -- meaning I could give a couple of movement commands, enter some letters, and then keep giving commands, all without pausing. Note that this is actually a fairly simple grammar -- inside spelling mode, Dragon need only track something like 52 literals.

The problem was that dragonfly simply threw an error when I tried to write this grammar, saying it was too complex. No explanation, nothing in the dragonfly source to explain either. What I ended up going with was making each letter of the alphabet an atom. This sucks for several reasons -- recognition is slowed and accuracy decreased because Dragon has 52 more atoms in the main chain it has to recognize. Likewise, it means that you can only enter max_chain_length letters in one go, which brings me to the second problem: chain length.

In multiedit you can speak up to 16 atoms in one command. This sounds like a lot, and it is if each atom is a variable name, a looping construct, a movement command, etc. In verbal_emacs, you are limited to 10 atoms. Usually this is still enough, but suppose you want to spell a bunch of letters, combined with a movement command or two. You can run up against this limit without realizing it, especially as you gain practice with using it and tend to speak more complex phrases. In practice I find that I have to interrupt my flow frequently to avoid losing things -- if you go for longer than 10 atoms, it can lose the entire sequence.

Note that nothing about this issue would address this problem, as far as I know. Short of digging into Natlink (I already understand Dragonfly reasonably well) and getting lucky, I am not sure whether this can be fixed. It is quite possible/likely that this constraint comes from Dragon.

Rather, I mentioned the problem here because it is likely to be a big limit on what I am able to do.

calmofthestorm commented 10 years ago

Why I want this architecture:

A big part of this project for me is enabling people to bring voice to their current setup. Obviously the weirder your setup the more adaptation you're going to have to do yourself:-).

poppe1219 commented 10 years ago

Well I don't have any useful ideas on the architecture at this point.

But when it comes to the particular problem of spelling a sequence as a single command, my thoughts immediately go to trying to reuse Dragon's built-in Spell mode. But I haven't found a way to trigger the different modes. I found this in natlinkmain:

    DNSmode = 0  # can be changed in grammarX by the setMode command to
                 # 1 dictate, 2 command, 3 numbers, 4 spell
                 # commands currently from _general7,
                 # is reset temporarily in DisplayMessage function.
                 # it is only safe when changing modes is performed through
                 # this setMode function

But I haven't found a way to actually use this and switch between these modes. I have tried to use Mimic, Mimic("Start spell mode"), but I have never gotten Mimic to do anything sensible for me (probably because I haven't understood it). And even if I did get that to work, there should be a much more direct way of switching between the modes. If it could be done, it's quite possible that a command for spelling an arbitrary number of letters/characters could be built and is still treated as a single atom, where Dragon itself does all the heavy lifting.

calmofthestorm commented 10 years ago

The problem is that that would require a pause between modes, at least using DNSmode. If we are willing to tolerate a pause between modes, I can think of a number of ways to solve this problem. In particular, one thing I did in multiedit was to have "finishing rules" -- once you say "letters", the rest of the chain will be interpreted as a sequence of letters (this is similar to how "literal" works for entering reserved words). Unfortunately, once you enter this mode you must pause before saying something that is not a letter. This makes sense for "literal", since it is intended to allow any word to be typed literally, but I would like to be able to say "end letters" and keep going.

In general, the problem I have found with relying on built-in Dragon capabilities is that their design emphasizes discoverability and memorability over usability, taken to an extreme. This means a much less steep learning curve, which is great for casual users, but it severely limits what power users can do. (Think about how slow Dragon's built-in editing commands are, since you must pause after each one.)

Perhaps I should put off designing the architecture until I have thoroughly studied Natlink to see just what is possible. Another thing I would like to do is integrate a test architecture for grammars, so that we can write automated unit tests for them. I believe mimic is the way to do this but like you I have not been able to get it to work.

One thing you mentioned to me that I appreciated is how to disable built-in Dragon commands, so that I can appropriate those words for my own use.

poppe1219 commented 10 years ago

I was actually thinking that, if the mode was changed directly by using Natlink, there would be no pause. But that probably wouldn't matter anyway because the entire spoken sequence would be interpreted before the mode switch would be executed.

calmofthestorm commented 10 years ago

Yeah, that seems equivalent in behavior to the finishing rule. I am actually kind of surprised that the hackery for chaining is necessary; I would expect Dragon to natively support stringing commands together. Given that this doesn't seem to work for any built-in commands, and you cannot program it in Professional Edition, I am pessimistic about avoiding the manual chaining.

What might be possible is completely rethinking how Natlink discovers and loads grammars to be friendlier to modules and plug-ins.

calmofthestorm commented 10 years ago

The new dynamic vocabulary system does 90% of this in a very simple way, and the dynamics from dragonfly-scripts can do something similar, though without integrating into other grammars. I can't think of any user stories for the more general form, and I'm not even sure NatLink could handle them, so I'm closing this.

sboosali commented 9 years ago
  1. Can you explain what the new dynamic vocabulary system handles, and how? And thanks for all your work -- I was trying to set this up myself :-) but I got sad when I read this about nested grammars :-( because I had this whole awesome complex grammar worked out.
  2. I'm new to NatLink, but if I understand correctly that the problem is grammar depth, could automatically "inlining" finite rules help?

e.g. the nice modular grammar:

<command> exported
 = <number> <command>
 | <action> <region>
;
<number> = ([<tens>] <ones>) | <special_number>;
<special_number> = zero | eleven | twelve | ...;
<ones> = one | two | three | ...;
<tens> = twenty | thirty | ...;
<action> = prev | next | del | ...;
<region> = char | word | line | ...;

becomes (inlining <number>'s children):

<command> exported
 = <number> <command>
 | <action> <region>
;
<number> = one | two | three | ... | eleven | twelve | ... | twenty one | twenty two | ... | thirty one | ...;
<action> = prev | next | del | ...;
<region> = char | word | line | ...;

becomes (inlining <action> and <region>):

<command> exported
 = <number> <command>
 | prev char
 | next char
 | del char
 ...
 | del line
 ...
;
<number> = ...;

becomes (eliminating recursion):

<command> exported = <number> <command_>
<command_> = ...;

becomes (inlining <number> and <command_>):

<command>
 = prev char
 | del line 
 ...
 | one prev char
 ...
 | ninety nine del line
 ...
;

with some loss of generality in the recursion. you would then need to parse the output, without relying on rule callbacks.

or is breadth as well as depth a problem?

given 100 <number>s, 20 <action>s, 20 <region>s, that's ~40,000 variants. We could try to "balance" the tree (well, DAG) if we knew the breadth/depth constraints. if we use this grammar:

<command> exported = [<number>] <editing>;
<number> = zero | ... | ninety nine;
<editing> = prev char | ... | del line;

we have a breadth of 400 and a depth of 2.

thoughts?
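To make the inlining rewrite concrete, here is a small self-contained sketch (a hypothetical transform, not aenea or NatLink code) that inlines one finite rule at a time and lets you count the resulting flat alternatives:

```python
# Inline every reference to `rule` into the rules that use it, then drop it.
# A lossless rewrite: the grammar recognizes the same language, but flatter.
def inline(grammar, rule):
    alts = grammar[rule]
    out = {}
    for name, productions in grammar.items():
        if name == rule:
            continue  # the inlined rule disappears from the grammar
        new = []
        for prod in productions:
            expansions = [[]]
            for sym in prod:
                if sym == rule:
                    # cross every partial expansion with each alternative
                    expansions = [e + alt for e in expansions for alt in alts]
                else:
                    expansions = [e + [sym] for e in expansions]
            new.extend(expansions)
        out[name] = new
    return out

# a tiny version of the example grammar above
g = {
    "<command>": [["<action>", "<region>"]],
    "<action>": [["prev"], ["next"], ["del"]],
    "<region>": [["char"], ["word"], ["line"]],
}
flat = inline(inline(g, "<region>"), "<action>")
# flat["<command>"] now has 3 * 3 = 9 flat alternatives, e.g. ["del", "line"];
# with 100 <number>s, 20 <action>s, and 20 <region>s the same rewrite yields
# 100 * 20 * 20 = 40,000 variants, versus a breadth of 400 in the factored form.
```

The trade-off in the comment is exactly the breadth-versus-depth question above: full inlining multiplies alternatives, while the factored `[<number>] <editing>` form keeps breadth at max(100, 400) with depth 2.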

sboosali commented 9 years ago

Initially, I wanted to have a sub mode for spelling -- so there would be a command "letters", after which you could say any number of letters, followed by "done" (or somesuch).

my first plan. drat.

I found this in natlinkmain:

DNSmode = 0 # can be changed in grammarX by the setMode command to
# 1 dictate, 2 command, 3 numbers, 4 spell
# commands currently from _general7,
# is reset temporarily in DisplayMessage function.
# it is only safe when changing modes is performed through
# this setMode function

my second plan, in case the first plan didn't work. double drat.

I feel like I'm three steps behind!

My question: is there a way to do something like letters {letter}+ done being nested in other commands, with the update?

sboosali commented 9 years ago

if you go for longer than 10 atoms, it can lose the entire sequence.

also, maybe you could recover the first 10 atoms (what exactly do you mean by "atom"?) by saving hypotheses. I haven't been able to trigger gotHypothesis yet; you enable triggering the callback with load(hypothesis=1)

calmofthestorm commented 9 years ago

The main thing the vocabulary system handles is decoupling vocabulary (things that are important words for a particular language/user/etc. Examples include Eclipse shortcuts, Python keywords, even common variable names in a project) from how to type them (in VIM, for example, we must enter insert mode, type the word, then return to normal mode). This decoupling should ensure that people can add new grammars that will work with existing vocabularies (whether user-custom or included with the project), and (especially) that people can add new vocabularies that will then work with existing (or new) grammars, even if they don't know about them.

It also allows the user to enable and disable different vocabularies as appropriate (so, eg, you don't get Python keywords when you're working in C++ or whatnot), though I'd consider this a less crucial feature. This is the reason for static vs dynamic grammars -- static ones are a tiny bit more powerful but require reloading the grammar to update. Dynamic ones can be switched on and off at will. The distinction between the two is not super important for understanding the high level need for the feature.

For a simple example, consider Python keywords "lambda" and "def". I want to be able to use them in any Python file, regardless of whether I'm using VIM or multiedit. VIM and multiedit both have ways of entering text. Multiedit just has a "loop" of commands, and any command can be to enter a keyword, which it does by typing it. VIM is a bit more complex -- it also has the loop, but there's also the mode switch.

If you envision a grammar as a tree, vocabularies are a way for grammar authors to define places in the tree that users can add custom vocabulary. VIM exposes a tag for arbitrary keywords it should recognize.

Now suppose I decide I want to write Ruby. I can just write a vocabulary file with the language's keywords, and it will automagically work with existing grammars. Likewise, I could write a new editing grammar (emacs, for example) that if written against the vocabulary system would work with any custom vocabularies someone may write.
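A toy model of this decoupling (the names, spoken forms, and keystrokes here are illustrative inventions, not aenea's actual API): vocabularies map spoken forms to text, and each grammar supplies its own way of typing that text.

```python
# Hypothetical vocabulary: spoken form -> text to produce.
python_vocab = {"lambda": "lambda", "deaf": "def"}

def multiedit_type(text):
    # multiedit's command loop just emits the keyword's keystrokes directly
    return text

def vim_type(text):
    # VIM must enter insert mode, type the word, then Escape back to normal mode
    return "i" + text + "\x1b"

def handle(spoken, vocab, typer):
    # the vocabulary decides WHAT to type; the grammar decides HOW to type it
    return typer(vocab[spoken])

handle("deaf", python_vocab, vim_type)  # types "def" via insert mode
```

A new vocabulary (say, Ruby keywords) is just another dict, and it works with every `typer` unchanged; a new grammar is just another `typer`, and it works with every vocabulary.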

calmofthestorm commented 9 years ago

Inlining -- If I'm understanding you correctly, there are basically two ways of doing inlining. One is more or less a lossless grammar rewrite (transforming the grammar's structure but not the language it recognizes). The second is to change (probably enlarge) the language recognized by simplifying the grammar, then do post-processing to determine the appropriate action.

The first form could at least in principle help, but I suspect I'd need better understanding of Natlink/Dragon internals, and it's also worth pointing out that Dragon is in the best position to do such automated simplification.

I don't like the second form because Dragon uses the structure of the grammar to determine which words were said (or to make an analogy, lexing takes the grammar into account, rather than the two being independent passes). By allowing the grammar to recognize phrases we don't want, we will hurt recognition performance and accuracy, and also sometimes find ourselves with a phrase we're not sure what to do with.

The main obstacle to my understanding is that I would expect performance (both speed and accuracy) of processing to be a function of the size of the language -- the more possible valid phrases you can say, the harder it is to tell them apart. Dragon seems to hurt more from deep grammars than it does from size of the language.

To be clear, the issue with a spelling mode is a purely performance driven one -- indeed, for a fairly simple editing grammar you could probably have a mode just as you describe that would work without issues. The problem is that Dragon will reject grammars that are too "complex". Increasing max sequence increases complexity, and having a spelling mode dramatically increases complexity.

calmofthestorm commented 9 years ago

Why long sequences drop: Remember that grammars like my vim, multiedit, etc. work by treating every phrase you say as one giant command parsed all at once. You may think of a sequence as a list of commands, but Dragonfly, Natlink, and Dragon see it as one phrase. Thus, if your phrase is too long, it simply fails to recognize.

As a simple example, consider a grammar that recognizes any one, two, or three digit number. If you give it a four digit number, it won't enter the first three digits -- it will fail to recognize the phrase.
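A sketch of that example as a toy recognizer (a regex stands in for the grammar; this is only an illustration of the all-or-nothing behavior, not how Dragon works internally):

```python
import re

# Accept one to three spoken digits as a single phrase, the way Dragon
# treats a whole utterance as one command.
digit = r"(?:zero|one|two|three|four|five|six|seven|eight|nine)"
phrase = re.compile(rf"{digit}(?: {digit}){{0,2}}")

phrase.fullmatch("one two three")       # matches: the phrase is recognized
phrase.fullmatch("one two three four")  # None: nothing is entered, not even the first three digits
```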

Maybe you could do something with saving hypotheses? That seems really hacky, and of dubious benefit to me. I'm not really seeing the angle here.

calmofthestorm commented 9 years ago

(by atom I mean what you think of as a single command. Multiedit etc commands are a sequence of atoms. Examples of atoms include "up 10", "with statement" (if using Python vocabulary), and "score hello world").

sboosali commented 9 years ago

Thanks!

"and it's also worth pointing out that Dragon is in the best position to do such automated simplification."

I was thinking that myself, but Dragon would be in the best position to document their API, or to open-source their engine too :p

also, are static vocabularies Natlink rules and dynamic vocabularies Natlink lists?

I've been doing my own experiments, and I haven't been able to trigger a BadGrammar: too complex error. Could you send me either the high-level grammar that triggered the error for you, or even the low-level Natlink grammar that it was compiled down to?

e.g. this grammar was not rejected on complexity and did successfully recognize long chains.

<dgndictation> imported;

<command> exported
 = <phrase_9>
 ;

<phrase_9> = <phrase_cons> <phrase_8> | <phrase_0>;
<phrase_8> = <phrase_cons> <phrase_7> | <phrase_0>;
<phrase_7> = <phrase_cons> <phrase_6> | <phrase_0>;
<phrase_6> = <phrase_cons> <phrase_5> | <phrase_0>;
<phrase_5> = <phrase_cons> <phrase_4> | <phrase_0>;
<phrase_4> = <phrase_cons> <phrase_3> | <phrase_0>;
<phrase_3> = <phrase_cons> <phrase_2> | <phrase_0>;
<phrase_2> = <phrase_cons> <phrase_1> | <phrase_0>;
<phrase_1> = <phrase_cons> <phrase_0> | <phrase_0>;
<phrase_0> = <dgndictation>;

<phrase_cons>
 = <casing>
 | <joiner>
 | <surround>
 | <letter>+
 ;

<casing>
 = lower
 | upper
 | capper
 ;

<joiner>
 = camel
 | class
 | file
 | snake
 | list
 | dash
 | squeeze
 ;

<surround>
 = string
 | circle
 | square
 | braced
 | diamond
 | spaced
 ;

<letter>
 = ay
 | bee
 | sea
 | dee
 | ee
 | eff
 | gee
 | aych
 | i
 | jay
 | kay
 | el
 | em
 | en
 | oh
 | pea
 | Q
 | are
 | ess
 | tea
 | you
 | vee
 | dub
 | ex
 | why
 | zee
 ;
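the unrolled `<phrase_9>` .. `<phrase_1>` chain above can be generated mechanically rather than written by hand; a sketch (hypothetical helper, not part of aenea):

```python
# Emit the unrolled repetition rules that emulate a bounded chain
# without recursion: each <phrase_n> consumes one <phrase_cons> and
# delegates to <phrase_{n-1}>, or bottoms out at <phrase_0>.
def unroll(depth):
    return "\n".join(
        f"<phrase_{n}> = <phrase_cons> <phrase_{n-1}> | <phrase_0>;"
        for n in range(depth, 0, -1)
    )

print(unroll(9))  # reproduces the nine hand-written rules above
```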

"lexing takes the grammar into account, rather than the two being independent passes"

that's a great point.

calmofthestorm commented 9 years ago

I'm currently in the process of moving and don't have any microphones with me at the moment, and Dragon won't even start without one:/ That said, try taking multiedit from https://github.com/dictation-toolbox/aenea-grammars/blob/master/_multiedit/_multiedit.py and changing max=16 to something larger on line 238. IIRC 32 is enough to trigger it, but try higher and come down.

If this doesn't work, I'll see if I can produce an example once I get my mic set up again.

calmofthestorm commented 9 years ago

I was thinking that myself, but Dragon would be in the best position to document their API, or to open-source their engine too :p

Full disclosure: I haven't dug too much into the lower levels of the Dragon -> Natlink -> Dragonfly stack. I've studied Dragonfly a bit, but I mostly treat Natlink as a black box. It's entirely possible that limitations I think are present aren't actually there, so don't take my "I don't think it can be done"s too seriously:-) The main reason I haven't looked deeper is that aenea already does basically everything I want, and I stopped finding reverse engineering low-level binary formats fun about 15 years ago:-)

also, are static vocabularies Natlink rules and dynamic vocabularies Natlink lists?

Exactly.

calmofthestorm commented 9 years ago

I just attempted to reproduce the "grammar too complex" issue but was unable to do so. By increasing repeat counts, etc., I did notice a decrease in recognition accuracy and speed, but not a hard failure as I recalled.

The last time I encountered this issue it was about a year ago when I was working on my VIM bindings. Initially I envisioned a deeper grammar with verbal modes and cancellations ("enter letters mode a b c leave letters mode" or whatnot), and ran into it then. Unfortunately I don't seem to have committed the grammars that caused the issue.

Based on this, I guess I understand this issue even less well than I previously did. Sorry I couldn't be more helpful.

sboosali commented 9 years ago

Thanks for the follow-up.

Deeply nested grammars sound like the right way to make grammars composable. if you do implement something supporting:

"enter letters mode a b c leave letters mode"

in the framework, let me know!

jgarvin commented 9 years ago

I've been rolling my own server/client to use Dragon on Linux (I started my project before I knew about aenea) and stumbled on this thread trying to find whether anyone knew the exact conditions that trigger natlink.BadGrammar complaining about the grammar being too complex. AFAICT it is a matter of raw size, not just nesting. I had a grammar that was working fine until I added two new entries; now it's too big. To reproduce, try repetitions of large mapping rules. I have a set of voice commands for emacs in python mode and one for lisp mode, where the only difference in the grammar is the number of language keywords supported; 4 phrases are added to the mapping rule for every keyword. The python one works fine; the lisp one is now too big. They have the same level of nesting.

sboosali commented 9 years ago

that's interesting. can you send a link to your two grammars?

(this message was composed with dictation: charitably interpret typos) -- Sam Boosalis

jgarvin commented 9 years ago

Here's the 128 keyword list for the lisp mode: https://bpaste.net/show/8d97200e37f9

Note that some elements are lists of two strings rather than a string -- this is for when the spoken and written form should differ.

From the list I would generate a mapping rule that had 4 rules for each keyword in the list: "future []", "prior []", "key", "new". So there would be 4 * 128 entries -- which, being a power of 2, makes sense as a limit; could be it always dies at 512. It died when I added the future/prior commands. That was too big, apparently. Moving the keyword list into its own rule and the 4 possible commands into their own rule, making the new grammar just " []", made the error go away -- which ironically allows more possibilities, because before, "key" and "new" didn't have a number after, and now they do. My guess is Dragon barfs if you try to make an alternative with 512 entries, since I assume that's what MappingRule builds for you.
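Back-of-the-envelope arithmetic for the hypothesis above, using the message's own (approximate) numbers:

```python
# 4 command templates (future, prior, key, new) crossed with ~128 keywords.
keywords, commands = 128, 4

flat_entries = commands * keywords    # one MappingRule entry per pair
factored_rules = commands + keywords  # two small rules after the refactor
```

The flat form hits 512 alternatives in a single rule, right at a suspiciously round power of 2, while the factored form needs only 132 alternatives spread across two rules.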
jgarvin commented 9 years ago

Actually, I forgot to subtract out the leading lines: there are only 116 keywords; that the source file containing them was 128 lines is pure luck :p Still, adding future/prior would have pushed it over 256, which could be a limit.

sboosali commented 9 years ago

I tried a list of 1000 terminals (http://simple.wikipedia.org/wiki/Wikipedia:List_of_1000_basic_words) in a rule; it did not throw any BadGrammar errors and it successfully recognized them when spoken.

Can you reproduce this:

https://bpaste.net/show/aa6c2117c91e

if you drop this file into your MacroSystem folder (that's a NatLink thing, not an Aenea thing), it should disable the other grammars and activate itself. There are other commented-out grammars that should all work too, if you're interested. I think the only problem I had was with a recursive grammar, which didn't throw a BadGrammar error but just crashed Dragon. lol.

The limit might also depend on the system and version and stuff. Like RAM? I'm running Dragon 13 in a VM that has two CPUs and 4 GB of RAM fwiw.

sboosali commented 9 years ago

@poppe1219

this worked for me:

        natlink.recognitionMimic(["Start","spell","mode"])

since it takes a list, not a string: https://github.com/sboosali/NatLink/blob/9545436181f23652224041afa2035f12fa60d949/NatlinkSource/natlink.txt#L209

curse you dynamic types!

;)

you might need to deactivate your own grammars:

 # natlink, not dragonfly
 self.activateSet([], exclusive=0)

or explicitly handle the recognition yourself with:

# natlink, not dragonfly
def initialize(self):
    self.load(self.gramSpec, allResults=1)
    self.activateSet(["..."], exclusive=1)
    ...

def gotResultsObject(self, recognitionType, resultsObject):
    if recognitionType == 'other':
        # a different grammar has recognized the utterance,
        # e.g. Dragon's built-in spelling grammar
        ...

(and of course, you still can't embed this in the middle of another rule)