Open shoebjoarder opened 11 months ago
Can't tell what commands you're running locally or what your verbosity setting is. The worker process is started with log level warning:
I am using the following command locally on my Windows machine: celery -A server worker -l info -P eventlet
I changed the bin/worker file to: celery -A interest_miner_api worker -c 1 -l info
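One thing worth checking: the "Loading ELMO Weight File..." banner appears to come from bare print() calls, which bypass Celery's -l/--loglevel entirely and may be lost to output buffering in Docker. A minimal sketch (the function name and message text are hypothetical, not from the RIMA code) of routing such messages through the logging module instead, so they show up reliably in the worker logs:

```python
import logging
import sys

# Hypothetical helper: emit model-loading progress through the logging
# module rather than print(), so messages carry a level and are flushed
# through the worker's configured log handlers.
logger = logging.getLogger("model_loader")

def announce_model(path):
    """Log the model path at WARNING so it appears even at -l warning."""
    msg = f"Loading ELMo weight file: {path}"
    logger.warning(msg)
    return msg

if __name__ == "__main__":
    logging.basicConfig(level=logging.WARNING, stream=sys.stdout)
    announce_model("/home/app/.model/elmo/elmo.hdf5")
```

If the loading code keeps using print(), running Python with -u (or setting PYTHONUNBUFFERED=1 in the container) at least rules out buffering as the reason the lines never appear.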
The logs now show the following:
rima-backend-worker-1 | ============================
rima-backend-worker-1 | Loading ELMO Weight File...
rima-backend-worker-1 | /home/app/.model/elmo/elmo.hdf5
rima-backend-worker-1 | ============================
rima-backend-worker-1 | 2023-11-12 12:21:38.098223: W tensorflow/stream_executor/platform/default/dso_loader.cc:59] Could not load dynamic library 'libcudart.so.10.1'; dlerror: libcudart.so.10.1: cannot open shared object file: No such file or directory
rima-backend-worker-1 | 2023-11-12 12:21:38.098260: I tensorflow/stream_executor/cuda/cudart_stub.cc:29] Ignore above cudart dlerror if you do not have a GPU set up on your machine.
rima-backend-worker-1 | coreNLP: <interests.Keyword_Extractor.Algorithms.embedding_based.sifrank.taggers.stanford_core_nlp_tagger.StanfordCoreNLPTagger object at 0x7f7591ec5d90>
rima-backend-worker-1 | None
rima-backend-worker-1 |
rima-backend-worker-1 | -------------- celery@1bd72ed55c0e v4.3.0 (rhubarb)
rima-backend-worker-1 | ---- **** -----
rima-backend-worker-1 | --- * *** * -- Linux-6.4.16-linuxkit-x86_64-with-debian-12.1 2023-11-12 12:21:46
rima-backend-worker-1 | -- * - **** ---
rima-backend-worker-1 | - ** ---------- [config]
rima-backend-worker-1 | - ** ---------- .> app: interest_miner_api:0x7f75d3ba5c90
rima-backend-worker-1 | - ** ---------- .> transport: redis://backend-redis:6379//
rima-backend-worker-1 | - ** ---------- .> results: redis://backend-redis:6379/
rima-backend-worker-1 | - *** --- * --- .> concurrency: 1 (prefork)
rima-backend-worker-1 | -- ******* ---- .> task events: OFF (enable -E to monitor tasks in this worker)
rima-backend-worker-1 | --- ***** -----
rima-backend-worker-1 | -------------- [queues]
rima-backend-worker-1 | .> celery exchange=celery(direct) key=celery
rima-backend-worker-1 |
rima-backend-worker-1 |
rima-backend-worker-1 | [tasks]
rima-backend-worker-1 | . getConnectedAuthorsData
rima-backend-worker-1 | . getRefCitAuthorsPapers
rima-backend-worker-1 | . import_papers
rima-backend-worker-1 | . import_papers_for_user
rima-backend-worker-1 | . import_tweets
rima-backend-worker-1 | . import_tweets_for_user
rima-backend-worker-1 | . import_user_citation_data
rima-backend-worker-1 | . import_user_data
rima-backend-worker-1 | . import_user_paperdata
rima-backend-worker-1 | . import_user_papers
rima-backend-worker-1 | . interests.publication.publication_utils.process_publication
rima-backend-worker-1 | . manual_regenerate_long_term_model
rima-backend-worker-1 | . regenerate_interest_profile
rima-backend-worker-1 | . regenerate_short_term_interest_model
rima-backend-worker-1 | . update_long_term_interest_model
rima-backend-worker-1 | . update_long_term_interest_model_for_user
rima-backend-worker-1 | . update_short_term_interest_model
rima-backend-worker-1 | . update_short_term_interest_model_for_user
rima-backend-worker-1 |
rima-backend-worker-1 | [2023-11-12 12:21:46,513: INFO/MainProcess] Connected to redis://backend-redis:6379//
rima-backend-worker-1 | [2023-11-12 12:21:46,519: INFO/MainProcess] mingle: searching for neighbors
rima-backend-worker-1 | [2023-11-12 12:21:47,531: INFO/MainProcess] mingle: all alone
rima-backend-worker-1 | [2023-11-12 12:21:47,544: INFO/MainProcess] celery@1bd72ed55c0e ready.
Still, the logs don't show that it was able to load the models downloaded into the .model folder.
I have tried to install the Python packages from the Pipfile locally on my Ubuntu machine, and it fails to install dependencies such as scikit-learn and tensorflow: pip reports mismatches in the dependencies being installed and cannot find scikit-learn and tensorflow versions for Python 3.7. I'm just wondering how the Docker container was able to build without any issues...
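The usual explanation for "builds in Docker, fails locally" is that the image pins an interpreter and wheel set your host doesn't have. A stdlib-only sketch for comparing the two environments, run once on the host and once via docker compose exec inside the container (the package list is an assumption based on the libraries mentioned in this thread):

```python
import importlib.metadata as md
import platform

def env_report(packages=("scikit-learn", "tensorflow", "gensim", "numpy")):
    """Return the interpreter version and the installed version of each
    package, or 'not installed' if it cannot be found."""
    report = {"python": platform.python_version()}
    for pkg in packages:
        try:
            report[pkg] = md.version(pkg)
        except md.PackageNotFoundError:
            report[pkg] = "not installed"
    return report

if __name__ == "__main__":
    for name, version in env_report().items():
        print(f"{name}: {version}")
```

If the container reports Python 3.7 and the host reports something newer, that alone explains why pip cannot resolve the pinned scikit-learn/tensorflow versions locally.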
Can't really help with https://github.com/ude-soco/RIMA/tree/development. gensim 3.8.3 and the old numpy versions don't work on ARM CPUs yet, so I can only build more recent versions.
In the development branch, I am unable to pinpoint why the logs differ between running the development server with and without Docker. Here is the log when we start the development server without Docker:
We can clearly see that stanfordcorenlp is recognized and running on port 9002:

corenlp 4064 11140 java -Xmx4g -cp "C:\Users\shoeb\Desktop\RIMA\RIMA-Backend\model\stanford-corenlp\*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer -port 9002

The logs for rima-backend-worker-1 at least provide some information, namely that TensorFlow cannot find a GPU on the machine, but the rest of the logs regarding pytorch, allennlp, and stanfordcorenlp are not shown when the server is started. Moreover, the rima-backend-api-1 container logs are quite different, not even showing the TensorFlow GPU issue.
I need help understanding why exactly these files are not being recognized by the backend libraries when running in Docker.