pawanjay176 opened this issue 9 years ago
The problem is that the requirements.txt file doesn't pin versions for the dependencies. I've since updated/rewritten these libraries, and now redshift doesn't compile.
The offending change seems to be this one:
https://github.com/syllog1sm/redshift/commit/1487f64051b358315e90d20b7279c8e05729b417
Instead of updating the pin to the correct version, I removed the version pin. Damn.
To solve this, the correct versions need to be identified and specified in requirements.txt.
The following libraries need version pinning:
murmurhash
cymem
preshed
thinc
I've made lots of releases of thinc, and a couple of preshed; cymem and murmurhash have been more stable. Looking through the commit history of thinc and matching up the dates, it looks like v1.73 is a likely candidate.
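For reference, a pinned requirements.txt would then look something like the sketch below. Only the thinc pin follows from the dates above; the other version numbers are placeholders to be replaced with whichever releases match the commit.
murmurhash==X.Y    # placeholder, replace with the release that matches the commit date
cymem==X.Y         # placeholder, replace with the release that matches the commit date
preshed==X.Y       # placeholder, replace with the release that matches the commit date
thinc==1.73        # likely candidate based on the commit dates above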
Found the right versions. Now compiles for me. Give it a go.
Btw, just checking that you know that this library is code from my research, and that the maintained library is spaCy? Probably this library is only useful for replicating or extending one of my papers, especially the disfluency detection one. If you're doing that, let me know if you have any questions.
But if you're just looking for a good parsing library, try spaCy :). http://spacy.io
I tried installing it again. Gives me the following errors now http://pastebin.com/7T7tV0xN
I just wanted a fast POS tagging utility. I was not aware of spaCy. Using that now. Thanks a lot for the quick reply :)
http://nlp.stanford.edu/software/tagger.shtml
spaCy's tagger is MIT licensed, can be used from Python, and is much faster than Stanford's :). Accuracy should be the same as well.
How can I use spaCy or Stanford for parsing to identify only the nouns and verbs? I want to put the nouns and verbs into a database, because I want to use them for search purposes in a search engine.
I'm not sure I understand your question. But this will load spaCy's default English model, analyse some text, and return only the nouns and verbs. The parser is disabled here for efficiency.
>>> from spacy.en import English
>>> from spacy.attrs import NOUN, VERB
>>> nlp = English(parser=False)
>>> doc = nlp(u'An example sentence, that has two nouns. This sentence contains three nouns.')
>>> print([w.text for w in doc if w.pos in (NOUN, VERB)])
[u'sentence', u'has', u'nouns', u'contains', u'nouns']
See the spaCy docs for details. You should ask further questions there as well :)
Great. I want to make a search engine for my BSc final-year project; it is an ontology-based semantic search engine. It depends on verbs: the main class for search is the verb, and the subclass is the noun. So I need to identify verbs and nouns and search within a specific domain. Do you understand my project? I really need parsing for future work.
Thanks a ton! The tagger is really accurate and way faster than the Stanford tagger.
But I do not understand how I can use it for a search engine. Can you kindly describe it, please?
You can POS-tag any sentence like this:

from spacy.en import English, LOCAL_DATA_DIR
import spacy.en
import os, time

data_dir = os.environ.get('SPACY_DATA', LOCAL_DATA_DIR)
nlp = English(parser=False, tagger=True, entity=False)

def print_fine_pos(token):
    # token.tag_ is the fine-grained part-of-speech string (e.g. 'NN', 'VBZ')
    return token.tag_

def pos_tags(sentence):
    sentence = unicode(sentence, "utf-8")
    tokens = nlp(sentence)
    tags = []
    for tok in tokens:
        tags.append((tok, print_fine_pos(tok)))
    return tags

print pos_tags("This is a sentence")
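If you then want to put the nouns and verbs into a database for your search engine, a minimal sketch could look like the following. It assumes an SQLite file with a made-up table name, and reuses the w.pos check from the earlier example.

import sqlite3
from spacy.en import English
from spacy.attrs import NOUN, VERB

nlp = English(parser=False)

# 'index.db' and the 'terms' table are made-up names for illustration
conn = sqlite3.connect('index.db')
conn.execute('CREATE TABLE IF NOT EXISTS terms (doc_id INTEGER, word TEXT, pos TEXT)')

def index_document(doc_id, text):
    # keep only the nouns and verbs of the document and store them
    doc = nlp(text)
    for w in doc:
        if w.pos == VERB:
            conn.execute('INSERT INTO terms VALUES (?, ?, ?)', (doc_id, w.text, 'VERB'))
        elif w.pos == NOUN:
            conn.execute('INSERT INTO terms VALUES (?, ?, ?)', (doc_id, w.text, 'NOUN'))
    conn.commit()

index_document(1, u'The engine searches documents by their verbs and nouns.')

At query time you would tag the query the same way and look its verbs and nouns up in the same table.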
How can I install spaCy on Ubuntu 14.04? I installed Canopy before.
Running fab make test gives me a bunch of errors:
http://pastebin.com/kw20DhbE
I am running on Ubuntu 14.04. How do I fix this?