OkunaOrg / okuna-api

🤖 The Okuna Social Network API
https://okuna.io
MIT License
239 stars 81 forks

admin/scheduler is empty #751

Closed alsiPanda closed 3 years ago

alsiPanda commented 3 years ago

I was originally facing an issue where the top posts and trending posts were always empty. I tried reducing the number of reactions and comments required to 1 and still got the same problem. I checked admin/scheduler in the browser, where I found the cron jobs, repeatable jobs, and scheduled jobs sections, but those 3 are also empty inside, which could be why the top posts are always empty. I wanted to know how to start running curate_top_posts and the other functions in openbook_posts > jobs? I read the README but couldn't figure it out.

lifenautjoe commented 3 years ago

You need to add these jobs yourself. The README also says what the recommended interval is.

For example:

openbook_posts.jobs.curate_top_posts curates the top posts, which end up in the explore tab.

It should be run every 5 minutes or so.

In the Django admin, put openbook_posts.jobs.curate_top_posts in the task field and set the interval to 5 minutes.

alsiPanda commented 3 years ago

Thanks @lifenautjoe, I was able to create all the tasks. The problem now is that there are still no top posts. I have posted a new post from a dummy account, which has 2 reactions and 2 comments. I have lowered the threshold for reaction_count and comments_count from the previous 5 to 1, directly in jobs.py instead of doing it in settings.py. In addition, I have also removed the community-related conditions from the query, so that world posts would be taken into account. But the database tables for TopPosts and TrendingPosts are still empty. Are there any specific steps I need to take after creating the scheduler tasks, or after editing jobs.py?
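
For what it's worth, the core of what a curation job like this does can be sketched as a simple engagement-threshold filter. This is a minimal, self-contained sketch with illustrative names, not Okuna's actual jobs.py (which also checks community visibility, post age, and more):

```python
from dataclasses import dataclass

# Illustrative thresholds; the thread above lowers these from 5 to 1.
MIN_REACTIONS_COUNT = 1
MIN_COMMENTS_COUNT = 1

@dataclass
class Post:
    id: int
    reactions_count: int
    comments_count: int

def curate_top_posts(posts):
    """Keep only posts that clear both engagement thresholds."""
    return [
        post for post in posts
        if post.reactions_count >= MIN_REACTIONS_COUNT
        and post.comments_count >= MIN_COMMENTS_COUNT
    ]

posts = [Post(1, 2, 2), Post(2, 0, 3), Post(3, 1, 0)]
print([p.id for p in curate_top_posts(posts)])  # → [1]
```

Note that even when posts clear the thresholds, nothing is written to the table unless the scheduled job actually executes.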

kitlu007 commented 3 years ago

@lifenautjoe could you help us fix this please.

alsiPanda commented 3 years ago

Hi @lifenautjoe, I also tried reducing the interval from 5 minutes to 2, then put it back to 5 after an hour. Top posts and trending posts are still blank. Are there any specific commands that need to be executed before or after creating the scheduled tasks?

lifenautjoe commented 3 years ago

Hi both. Are you running both the rq scheduler and the rq workers? Both are needed to run jobs periodically.

You need to run one scheduler for each queue. We have high, low and default.

See https://github.com/OkunaOrg/okuna-api/blob/master/.docker/scheduler/supervisord.conf for reference
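
For reference, that supervisord.conf runs one rqscheduler process per queue. A trimmed sketch of that shape (program names and options here are illustrative, not copied from the actual file):

```ini
[program:scheduler_default]
command=python manage.py rqscheduler --queue=default
autorestart=true

[program:scheduler_low]
command=python manage.py rqscheduler --queue=low
autorestart=true

[program:scheduler_high]
command=python manage.py rqscheduler --queue=high
autorestart=true
```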

alsiPanda commented 3 years ago

@lifenautjoe, I had run all three schedulers before adding the tasks. Should I run them again? Also, is the worker the same as the tasks added via the admin/scheduler panel in the browser, or is there a different command for it that needs to be run in the Docker container?

kitlu007 commented 3 years ago

Hey @lifenautjoe, thanks for the response. However, I have the same doubt as @alsiPanda. Could you clarify, please?

alsiPanda commented 3 years ago

Thanks @lifenautjoe, I ran the three python manage.py rqscheduler --queue=default/low/high commands, and each shows birth registered. I'm assuming that's how it's supposed to go, because after each command I had to exit the container and enter it again. But the top and trending posts tables in the database are still empty.

kitlu007 commented 3 years ago

I'm doing the same as @alsiPanda above and getting the same result. @lifenautjoe, help us out please.

alsiPanda commented 3 years ago

[screenshot: the /django-rq dashboard showing the queues]

@lifenautjoe, I found this when I was looking around in /django-rq. All the tasks are queued but none completed. Could this be where the problem lies? This is the task I had created: [screenshot: the scheduled task]

alsiPanda commented 3 years ago

@lifenautjoe, I also tried to follow what was in thread #510, but since I couldn't figure out where to run the stop command, I ran it in all three containers: okuna-api, okuna-worker, okuna-scheduler. I got a "No such file or directory" error for /usr/local/bin/supervisorctl everywhere. The only place supervisorctl is mentioned is in .docker/.../supervisord.conf. Do I need to make any changes there?

lifenautjoe commented 3 years ago

Do you have your workers running too? With python manage.py rqworker --queue=default/low/high?

alsiPanda commented 3 years ago

@lifenautjoe, thanks. I tried running the above command, python manage.py rqworker --queue=default, inside both okuna-api and okuna-worker, and I'm getting the following error:

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 381, in execute_from_command_line
    utility.execute()
  File "/usr/local/lib/python3.7/site-packages/django/core/management/__init__.py", line 375, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 323, in run_from_argv
    self.execute(*args, **cmd_options)
  File "/usr/local/lib/python3.7/site-packages/django/core/management/base.py", line 364, in execute
    output = self.handle(*args, **options)
  File "/usr/local/lib/python3.7/site-packages/django_rq/management/commands/rqworker.py", line 86, in handle
    w = get_worker(*args, **worker_kwargs)
  File "/usr/local/lib/python3.7/site-packages/django_rq/workers.py", line 47, in get_worker
    'queue_class': queue_class})
  File "/usr/local/lib/python3.7/site-packages/django_rq/queues.py", line 189, in get_queues
    return [get_queue(*queue_names, **kwargs)]
  File "/usr/local/lib/python3.7/site-packages/django_rq/queues.py", line 147, in get_queue
    queue_class = get_queue_class(QUEUES[name], queue_class)
  File "/usr/local/lib/python3.7/site-packages/django_rq/queues.py", line 47, in get_queue_class
    queue_class = import_attribute(queue_class)
  File "/usr/local/lib/python3.7/site-packages/rq/utils.py", line 151, in import_attribute
    module_name, attribute = name.rsplit('.', 1)
ValueError: not enough values to unpack (expected 2, got 1)

Apparently the rqworker not working is the main issue for now.
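
The last frame of the traceback shows what goes wrong mechanically: rq's import_attribute splits a dotted path like module.ClassName on its last dot, so a value with no dot in it (for example, a queue class setting that isn't a full dotted path) can't be unpacked into two parts. A minimal reproduction of just that step (split_class_path is an illustrative stand-in, not the real rq function):

```python
def split_class_path(name):
    """Mimic the first step of rq.utils.import_attribute: split a dotted
    path such as 'django_rq.queues.DjangoRQ' into (module, attribute)."""
    module_name, attribute = name.rsplit('.', 1)
    return module_name, attribute

# A proper dotted path splits fine.
print(split_class_path('django_rq.queues.DjangoRQ'))
# → ('django_rq.queues', 'DjangoRQ')

# A bare name with no dot raises the exact error from the traceback.
try:
    split_class_path('DjangoRQ')
except ValueError as e:
    print(e)  # → not enough values to unpack (expected 2, got 1)
```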

alsiPanda commented 3 years ago

@lifenautjoe, the problem was solved. I am not sure why, but when I reran the setup (down-full and then up-full), the rqworkers started working. Thanks a lot for patiently guiding us so far.