opencog / link-grammar

The CMU Link Grammar natural language parser
GNU Lesser General Public License v2.1

Open work items for 5.12.5 #1454

linas opened 1 year ago

linas commented 1 year ago

See comment in https://github.com/opencog/link-grammar/pull/1446#issuecomment-1441397457 for pending work items for 5.12.1

I think it also makes sense to start a 5.13.0 branch that will include proposals #1450, #1453 and #1452, and maybe #1449, depending on how that goes. And if #1449 can happen easily, then it would be version 6.0.

linas commented 1 year ago

The emscripten issues are in #1361, #1374 and #1377.

ampli commented 1 year ago

For 6.0 I have many PRs, and I would like to include at least some of them:

  1. Dict token insertion (need to find the issue number).
  2. Tokenization drastic speed improvements.
  3. Generator drastic speedup.
  4. Generator API.
  5. Cross-links implementation (I need your answers to my old questions + more discussion, in order to complete it).
  6. Implement power-prune for expressions in order to make power_prune() much faster.
  7. Simplify expressions before converting them to disjuncts (it speeds up building the disjuncts). (The code was ready for a PR, but then I changed Exp_struct before I sent it, and its conversion to the new struct turned out to be buggy, so I need to work on it some more.)
  8. More power-pruning! It removed an additional ~5% of the disjuncts. (This new power pruning worked, but then I introduced a bug without committing the working code, so again I need to continue debugging.)
  9. Rewritten post-processing, for a drastic postprocessing speedup and a drastic increase in the number of good linkages per linkage_limit.
  10. Tests for link-parser.
  11. Graphical link-parser (Python).
  12. Local hard costs (we need to discuss this).
  13. Segmentation according to the dict.
  14. Partial parsing infrastructure.
  15. Phantom word handling.
  16. Capitalization handling by dict definitions.
linas commented 1 year ago

Re tokenization speed: in one of my Atomese use-cases, on an older, slower machine, I see the following performance:

The above was obtained using sentences that are all exactly 12 words long. Dictionary lookup times are not included in the tokenization. Linkage limit = 15K.
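
A minimal sketch of this kind of measurement, assuming only the standard public C API (dictionary_create_lang, sentence_split, sentence_parse, parse_options_set_linkage_limit); the "en" dictionary, the sentence text and the printed timings are placeholders, not the Atomese setup or the figures described above:

/* Rough timing harness: time tokenization and parsing separately,
 * with the linkage limit set to 15000 as in the comment above. */
#include <stdio.h>
#include <time.h>
#include <link-grammar/link-includes.h>

int main(void)
{
    Dictionary dict = dictionary_create_lang("en");     /* placeholder dict */
    Parse_Options opts = parse_options_create();
    parse_options_set_linkage_limit(opts, 15000);

    /* Placeholder 12-word sentence. */
    const char *text = "this is a sample sentence of exactly twelve words for timing purposes";
    Sentence sent = sentence_create(text, dict);

    clock_t t0 = clock();
    sentence_split(sent, opts);                          /* tokenization only */
    clock_t t1 = clock();
    int num_linkages = sentence_parse(sent, opts);       /* parsing proper */
    clock_t t2 = clock();

    printf("tokenize: %.3f s  parse: %.3f s  linkages: %d\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC,
           (double)(t2 - t1) / CLOCKS_PER_SEC,
           num_linkages);

    sentence_delete(sent);
    parse_options_delete(opts);
    dictionary_delete(dict);
    return 0;
}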

linas commented 1 year ago

More about tokenization: with the Atomese dicts, the dict can grow after every sentence. Thus, I call condesc_setup(dict); after tokenization, before parsing. It took me two days to discover that it runs in about 1 sec at first, growing to 10 sec after a while. Thus, it accounts for 1/3 of the grand-total sentence time at first, growing to 80% after a while.

I need to find some way of doing what it does incrementally, possibly by telling it exactly which expressions were added. -- fixed in #1459
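
For illustration, the call order being described, as a sketch only: condesc_setup() is internal to the library (it is not part of the public API), so this assumes code living inside the library tree or the Atomese dict backend, with the in-tree headers reachable, rather than an ordinary API client:

/* Illustrative call order only; header paths and placement are assumptions. */
#include "link-includes.h"
#include "connectors.h"                 /* condesc_setup() (internal) */

static int tokenize_and_parse(Dictionary dict, Sentence sent, Parse_Options opts)
{
    sentence_split(sent, opts);         /* tokenization; the Atomese dict may grow here */
    condesc_setup(dict);                /* rebuild connector-descriptor tables;
                                           the 1 sec -> 10 sec cost discussed above */
    return sentence_parse(sent, opts);  /* parsing proper */
}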

linas commented 1 year ago

I published version 5.12.1 -- I couldn't wait, certain automation scripts depend on the published tarballs.

SoapGentoo commented 1 year ago

Hi @linas, I tried updating to 5.12.2 in Gentoo but am getting build failures:

In file included from /var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.cpp:1:
/var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.hpp:23:83: error: 'X_node' does not name a type
   23 |                     const std::vector<int>& er, const std::vector<int>& el, const X_node *w_xnode, Parse_Options opts)
      |                                                                                   ^~~~~~
In file included from /var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.cpp:1:
/var/tmp/portage/dev-libs/link-grammar-5.12.2/work/link-grammar-5.12.2/link-grammar/sat-solver/word-tag.hpp:82:9: error: 'X_node' does not name a type
   82 |   const X_node *word_xnode;
      |         ^~~~~~

which we haven't seen in 5.12.0

linas commented 1 year ago

build failures:

I'm looking. The recommended fix is to disable the build of the sat-solver code. Since it's disabled by default, your build scripts must have turned it on. (Just run ../configure without any options.)

The recommendation is to disable it because the SAT parser is slower than the regular parser in all situations; in some cases, it is 10x or 20x slower. I've been considering deleting it permanently, although Amir convinced me that it can be fixed up. And so... it's in limbo.

@SoapGentoo If you are willing to carry patches, I just pushed a fix here: ffdf5d8da583b3158656dfe46ed6f8bd12b3bc25

Otherwise, wait for 5.12.3 ... which might appear in a few weeks? (I have plans for "urgent" Atomese fixes which necessitate an LG release.)

linas commented 1 year ago

@SoapGentoo Version 5.12.3 is now out, with the fix you reported above.

SoapGentoo commented 1 year ago

@linas after confirming that 5.12.3 does indeed work, I proceeded to pass --disable-sat-solver to ./configure to disable the SAT solver, as per your recommendation. Thanks :+1:

linas commented 1 year ago

Cool, OK. FWIW, the SAT solver is already disabled by default (configure.ac lines 365ff), so if it was on for you, then somehow you were carrying a config setting from long ago? Keep in mind that ./configure does not start with a clean state; it remembers flags from prior invocations. (This also reveals that my testing is incomplete.)

SoapGentoo commented 1 year ago

In general, we like to specify all options to ./configure, since it makes our configuration more robust to changes in default settings. In this case, --enable-sat-solver=bundled was added due to a conflict with the system minisat: https://bugs.gentoo.org/593662

linas commented 1 year ago

Hm, OK. SAT was disabled to discourage its use. In all situations it is slower, sometimes by factors of 10x or 100x. Amir says that, in fact, this can be fixed up and repaired, which might make SAT faster than the regular parser, maybe.

Whether this is worth the effort or not depends mostly on future applications, rather than on the current situation. For the present English, Russian, Thai, etc. dictionaries, reviving SAT seems pointless: the current parser is good enough. However, I'm working with brand-new dicts which have a radically different structure and different performance profiles, and which make different demands on the parser. For those, maybe the SAT parser could be faster or more space-efficient. Maybe, or maybe not. Unexplored.