nasa / concept-tagging-api

Contains code for the API that takes in text and predicts concepts & keywords from a list of standardized NASA keywords. API is for exposing models created with the repository `concept-tagging-training`.
MIT License

module dsconcept missing #5

Open bluetyson opened 4 years ago

bluetyson commented 4 years ago

sorry, should have put it here 👎

~/concept-tagging-api/tests$ python context.py
Traceback (most recent call last):
  File "context.py", line 8, in <module>
    import app
  File "/home/ubuntu/concept-tagging-api/service/app.py", line 6, in <module>
    import dsconcept.get_metrics as gm
ModuleNotFoundError: No module named 'dsconcept'

and re: other repo

fatal: unable to access 'https://developer.nasa.gov/DataSquad/classifier_scripts.git/': Could not resolve host: developer.nasa.gov
ERROR: Command errored out with exit status 128: git clone -q https://developer.nasa.gov/DataSquad/classifier_scripts.git

abuonomo commented 4 years ago

Did you try installing the library with this command?

pip install git+https://github.com/nasa/concept-tagging-training.git@v1.0.3-open_source_release#egg=dsconcept

What was the exact command you executed that resulted in the fatal: unable to access error?
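
If that install goes through, a quick way to confirm the module resolves is to repeat the import that failed in the traceback above:

```python
# Sanity check after the pip install above: the same import that failed in
# service/app.py should now succeed without a ModuleNotFoundError.
import dsconcept.get_metrics as gm

print("dsconcept loaded from:", gm.__file__)
```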

bluetyson commented 4 years ago

Trying to run the app ... no, I have not tried that one. I will give it a shot later today, thanks.

bluetyson commented 4 years ago

That appears to have worked, thank you Anthony. Now I just have to wait to be allowed to access it to try it.

abuonomo commented 4 years ago

Glad to hear it. Sorry about those misleading links. Would love to hear how it ends up working.

bluetyson commented 4 years ago

Anthony, I have it running now. I will have to test it seriously soon - I probably have a project full of documents coming too.

bluetyson commented 4 years ago

Speaking of which, do you have a recommended size limit for firing a batch at it?

bluetyson commented 4 years ago

For context: this generally works on short abstracts, and I want to start throwing long reports at it to see what comes out. What is a reasonable length for the model to handle, that sort of thing - a 10MB text dump of a report being like 10K 200-word abstracts?

bluetyson commented 4 years ago

At the moment it seems approx. 4MB might be a limit (at least at this machine's capacity) for a single text payload. The 7000-odd documents I am trying range from 1KB up to 7MB or so.

This is one example: https://mer-env.s3-ap-southeast-2.amazonaws.com/ENV01111.pdf-textract-text.txt
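
In case it is useful, here is a rough sketch of how a large text dump could be split into payloads under that ceiling. The ~4MB figure is just the practical limit observed on this machine, not a documented API limit:

```python
# Split an extracted text dump into chunks that stay under a byte budget,
# breaking on paragraph boundaries where possible. MAX_BYTES reflects the
# ~4MB practical limit observed above, not a documented limit of the API.
MAX_BYTES = 4 * 1024 * 1024

def split_text(text, max_bytes=MAX_BYTES):
    chunk, size = [], 0
    for para in text.split("\n\n"):
        para_size = len(para.encode("utf-8")) + 2  # +2 for the separator
        if chunk and size + para_size > max_bytes:
            yield "\n\n".join(chunk)
            chunk, size = [], 0
        chunk.append(para)
        size += para_size
    # Note: a single paragraph larger than max_bytes is still yielded whole.
    if chunk:
        yield "\n\n".join(chunk)
```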

abuonomo commented 4 years ago

To optimize speed, my tests (using abstracts) indicated that a batch size of about 700 documents is ideal. More than that gave diminishing returns on speed. If you are working with longer documents, the ideal batch size may be lower. However, I would recommend automatically summarizing the text before sending it to the API, as other tests indicated that this is best for getting the most accurate tags.
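
For illustration only, batching at roughly that size might look like the sketch below. The endpoint path and payload shape here are placeholders, not the documented API, so adjust them to match the actual service:

```python
# Rough sketch of sending documents to the tagging service in batches of ~700,
# the sweet spot reported above for abstract-length documents.
# ASSUMPTIONS: the URL and the {"text": [...]} payload shape are hypothetical
# placeholders; check the service's own documentation for the real interface.
import requests

API_URL = "http://localhost:5000/findterms"  # hypothetical endpoint
BATCH_SIZE = 700

def tag_documents(docs, batch_size=BATCH_SIZE):
    results = []
    for i in range(0, len(docs), batch_size):
        batch = docs[i:i + batch_size]
        resp = requests.post(API_URL, json={"text": batch})
        resp.raise_for_status()
        results.extend(resp.json())
    return results
```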

bluetyson commented 4 years ago

Thanks Anthony, so maybe a megabyte or so in that case?

I am going to run the summarizer on the same set as well and compare.
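
As a rough idea of what that summarizing step could look like (not a recommendation from this thread), here is a simple frequency-based extractive pass with spaCy, assuming the en_core_web_sm model mentioned later:

```python
# Naive extractive summarizer: keep the sentences whose words are most frequent
# in the document. Just a heuristic sketch for shrinking long reports before
# tagging; not the specific summarizer referred to above.
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")  # requires: python3 -m spacy download en_core_web_sm

def summarize(text, max_sentences=20):
    # Very long texts may need nlp.max_length raised or pre-chunking.
    doc = nlp(text)
    freqs = Counter(
        tok.lemma_.lower() for tok in doc if tok.is_alpha and not tok.is_stop
    )
    scored = sorted(
        doc.sents,
        key=lambda s: sum(freqs[t.lemma_.lower()] for t in s if t.is_alpha),
        reverse=True,
    )
    keep = sorted(scored[:max_sentences], key=lambda s: s.start)
    return " ".join(s.text for s in keep)
```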

abuonomo commented 4 years ago

I'm not totally sure, but that sounds about right.

bluetyson commented 4 years ago

It seems like documents of around 3MB (presumably with a lot of words) can exhaust 32GB of memory and core dump a machine.

RichardScottOZ commented 12 months ago

Notes on installing this repo: change scm_version to False to get it to install; also sudo apt-get install cargo for the Rust package manager, for example.
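
Presumably that refers to the setuptools_scm flag in setup.py; a minimal sketch of the change, assuming standard setuptools usage (names and version below are illustrative, not taken from the repo):

```python
# setup.py sketch: disable SCM-based versioning so the package installs outside
# a git checkout. Package name and version here are illustrative assumptions.
from setuptools import setup, find_packages

setup(
    name="dsconcept",        # assumed; match the actual package name
    use_scm_version=False,   # the note above: set this to False
    version="1.0.3",         # with SCM versioning off, pin a version explicitly
    packages=find_packages(),
)
```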

RichardScottOZ commented 12 months ago

And if running locally: python3 -m spacy download en_core_web_sm