Koed00 / django-q

A multiprocessing distributed task queue for Django
https://django-q.readthedocs.org
MIT License
1.81k stars 280 forks

Windows compatibility #50

Open titusz opened 8 years ago

titusz commented 8 years ago

On PyPI the metadata states that this package is OS Independent. After spending half an hour reading the documentation and setting up a test installation, I found out that django-q does not support Windows. After adding it to the Django project settings, any management command fails with:

ImportError: No module named _curses

It seems this is due to the dependency on blessed. Please spare others the same frustration and update the PyPI metadata and/or add a note about operating system support to your readme.

If you ever want to support Windows, I can recommend https://pypi.python.org/pypi/colorama for cross-platform colored terminal output. I can also recommend http://www.appveyor.com/ if you need Travis-like free Windows CI/testing for the project.

Otherwise this looks like a great project. Thanks for making it open source.

Koed00 commented 8 years ago

I don't really have a Windows machine to test on, but since celery depends on curses too, I sort of figured I'd be good there. Thanks for the feedback. Have you tried the curses binaries at http://www.lfd.uci.edu/~gohlke/pythonlibs/#curses ?

titusz commented 8 years ago

Wow that was a fast response.

Celery has no hard dependency on curses; celery and django-celery run fine on Windows. I tested django-q with the curses Windows binaries from Christoph Gohlke, but starting the worker with python manage.py qcluster fails with:

AttributeError: 'module' object has no attribute 'getppid'

see: https://docs.python.org/2/library/os.html#os.getppid

titusz commented 8 years ago

Just tested and it seems you can get the parent pid crossplatform via psutil like this:

self.parent_pid = psutil.Process(os.getpid()).ppid()
Koed00 commented 8 years ago

I'm kinda depending on the community for feedback on other OS's. In the few months this library has been up, I haven't had a single comment from a Windows user. So let's see if we can make a Windows-compatible version, or at least some documentation on how to make it compatible, if you're willing to help.

I can add some code tomorrow that will use ppid from psutil if it's installed.
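
A sketch of what that conditional import could look like (assuming psutil's `Process.ppid()` API, with a plain `os.getppid()` fallback; `get_parent_pid` is a hypothetical helper name, not django-q's actual code):

```python
import os

# Hedged sketch: prefer psutil's cross-platform ppid() when installed,
# otherwise fall back to os.getppid() (POSIX, and Windows on Python 3.2+).
try:
    import psutil
except ImportError:
    psutil = None

def get_parent_pid():
    if psutil is not None:
        return psutil.Process(os.getpid()).ppid()
    return os.getppid()

print(get_parent_pid())
```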

Koed00 commented 8 years ago

ah :) we came to the same thing. I'll add some conditional imports for that in the morning.

titusz commented 8 years ago

I'm happy to help with testing. If we get this working on Windows, it might make sense to set up a Windows CI. I know cross-platform can be a real pain. I have one tiny project that is tested on Linux/Windows (but not on Mac). See: https://github.com/titusz/onixcheck

Koed00 commented 8 years ago

Can you check if https://github.com/Koed00/django-q/pull/51 works for you with psutil installed? It's in the dev branch

titusz commented 8 years ago

Yes, it fixes the os.getppid error. Now I get a PicklingError. Would have been too easy if that made it work :). Here is the full traceback in all its beauty :)

c:\dev\envs\realtime\lib\site-packages\blessed\terminal.py:28: UserWarning: One or more of the modules: 'termios', 'fcntl', and 'tty' are not found on your platform 'win32'. The following methods of Terminal are dummy/no-op unless a deriving class overrides them: setraw, cbreak, kbhit, height, width
  warnings.warn(msg_nosupport)
Process Process-1:
Traceback (most recent call last):
  File "C:\Python27\Lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
  File "C:\Python27\Lib\multiprocessing\process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 130, in __init__
    self.start()
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 134, in start
    self.spawn_cluster()
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 209, in spawn_cluster
    self.pusher = self.spawn_pusher()
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 168, in spawn_pusher
    return self.spawn_process(pusher, self.task_queue, self.event_out, self.list_key, self.r)
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 164, in spawn_process
    p.start()
  File "C:\Python27\Lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Python27\Lib\multiprocessing\forking.py", line 277, in __init__
    dump(process_obj, to_child, HIGHEST_PROTOCOL)
  File "C:\Python27\Lib\multiprocessing\forking.py", line 199, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Python27\Lib\pickle.py", line 224, in dump
    self.save(obj)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 562, in save_tuple
    save(element)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 748, in save_global
    (obj, module, name))
PicklingError: Can't pickle <function <lambda> at 0x027AC0B0>: it's not found as redis.client.<lambda>
c:\dev\envs\realtime\lib\site-packages\blessed\terminal.py:28: UserWarning: One or more of the modules: 'termios', 'fcntl', and 'tty' are not found on your platform 'win32'. The following methods of Terminal are dummy/no-op unless a deriving class overrides them: setraw, cbreak, kbhit, height, width
  warnings.warn(msg_nosupport)
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python27\Lib\multiprocessing\forking.py", line 381, in main
    self = load(from_parent)
  File "C:\Python27\Lib\pickle.py", line 1378, in load
    return Unpickler(file).load()
  File "C:\Python27\Lib\pickle.py", line 858, in load
    dispatch[key](self)
  File "C:\Python27\Lib\pickle.py", line 880, in load_eof
    raise EOFError
EOFError
titusz commented 8 years ago

I have been setting up a test environment. Running the tests gives the same PicklingError. I guess this is due to the additional restrictions for multiprocessing on Windows. See: https://docs.python.org/2/library/multiprocessing.html#windows

Koed00 commented 8 years ago

I've removed the redis client connection argument from the pusher process. This was an educated guess and I hope it fixes the problem.

I'm trying to set up a windows CI but getting redis to run turns out to be problematic.

Koed00 commented 8 years ago

Do you have an appveyor.yml I can use?

titusz commented 8 years ago

Ok... the pickle error is fixed with your last commit. Next we have an AppRegistryNotReady:

Traceback (most recent call last):
  File "C:\Python27\Lib\multiprocessing\process.py", line 258, in _bootstrap
    self.run()
  File "C:\Python27\Lib\multiprocessing\process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 130, in __init__
    self.start()
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 135, in start
    self.guard()
  File "c:\dev\envs\realtime\lib\site-packages\django_q\cluster.py", line 215, in guard
    logger.info(_('{} guarding cluster at {}').format(current_process().name, self.pid))
  File "c:\dev\envs\realtime\lib\site-packages\django\utils\functional.py", line 136, in __wrapper__
    res = func(*self.__args, **self.__kw)
  File "c:\dev\envs\realtime\lib\site-packages\django\utils\translation\__init__.py", line 84, in ugettext
    return _trans.ugettext(message)
  File "c:\dev\envs\realtime\lib\site-packages\django\utils\translation\trans_real.py", line 327, in ugettext
    return do_translate(message, 'ugettext')
  File "c:\dev\envs\realtime\lib\site-packages\django\utils\translation\trans_real.py", line 304, in do_translate
    _default = _default or translation(settings.LANGUAGE_CODE)
  File "c:\dev\envs\realtime\lib\site-packages\django\utils\translation\trans_real.py", line 206, in translation
    _translations[language] = DjangoTranslation(language)
  File "c:\dev\envs\realtime\lib\site-packages\django\utils\translation\trans_real.py", line 116, in __init__
    self._add_installed_apps_translations()
  File "c:\dev\envs\realtime\lib\site-packages\django\utils\translation\trans_real.py", line 164, in _add_installed_apps_translations
    "The translation infrastructure cannot be initialized before the "
AppRegistryNotReady: The translation infrastructure cannot be initialized before the apps registry is ready. Check that you don't make non-lazy gettext calls at import time.
Koed00 commented 8 years ago

if you have WSGIHandler() in your uwsgi configuration, try changing it to get_wsgi_application().

titusz commented 8 years ago

well here is the one I use: https://github.com/titusz/onixcheck/blob/master/appveyor.yml But I did not bother understanding all the details... :) I just used a ready-to-go Python project template: https://github.com/ionelmc/cookiecutter-pylibrary (which worked surprisingly well...)

titusz commented 8 years ago

You mean the wsgi.py... yes it is get_wsgi_application().

Koed00 commented 8 years ago

I meant your uwsgi configuration file, probably uwsgi.ini, or maybe you are using a different wsgi server. The error you mention happens when the webserver circumvents the Django app loading procedure. I don't think it's a Windows-specific problem.

titusz commented 8 years ago

I am not using uwsgi... just trying to run a dev setup. The traceback is from a python manage.py qcluster call in my virtualenv

Koed00 commented 8 years ago

Ok, this could be a Django bug but I can't figure out if it's OS specific. Can you give me your Django and Python versions?

titusz commented 8 years ago

Django 1.8.4 on Python 2.7.10. I removed the gettext_lazy wrappings from the logging calls and that seems to work.
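
The timing difference matters here: an eager gettext call evaluates at import time, before the app registry is ready, while a lazy wrapper defers evaluation until the string is actually used. A generic sketch of that distinction (this mimics, and is not, Django's actual `lazy()` proxy):

```python
_ready = False  # stands in for Django's app-registry readiness

def translate(msg):
    # Eager "translation": blows up if called before the registry is ready.
    if not _ready:
        raise RuntimeError("translation infrastructure not ready")
    return msg.upper()

class lazy_translate:
    # Lazy wrapper: stores the message, translates only on str().
    def __init__(self, msg):
        self.msg = msg
    def __str__(self):
        return translate(self.msg)

s = lazy_translate("guarding cluster")  # safe at "import time"
_ready = True                           # registry becomes ready later
print(str(s))
```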

titusz commented 8 years ago

I started seeing many passing tests, but test_cluster.test_timeout gets stuck indefinitely.

https://github.com/titusz/django-q/commit/d4fb8d2d72854e3e0d82ec976ba686522634a5d6

Koed00 commented 8 years ago

That test uses a timed thread to send a multiprocessing event. I'm sure Windows does not like this one bit. I'm setting up a Windows virtual machine for this, cause just deleting the translations for Windows is not really an option. I'm trying to determine if this is a Windows-specific Django bug.

titusz commented 8 years ago

My guess is, that this is due to the differences in the multiprocessing module on windows.

Koed00 commented 8 years ago

Just to give you an update: everything seems to work fine on my Windows virtual machine, once I sidestep the translations. I'm still trying to find a solution for this, but I might have to file a Django ticket for it. In that case I'll write something that will disable translations on Windows, for the time being.

Koed00 commented 8 years ago

I've got it so far that you could probably develop with it on Windows, but I can't get it stable enough for production work. Being able to fork processes is essential to the current architecture and Windows just doesn't support it. The sentinel just can't reliably poll the processes at the moment, which means that recycles and timeouts aren't reliable or flat out don't work.

The question now is: do I merge these changes so it runs partially on Windows and we take it from there, or do I strike Windows as a supported platform?

titusz commented 8 years ago

I just ran the tests on your current dev branch but it still hangs on AppRegistryNotReady. You will have to decide. I originally wanted to check out this project because it didn't use os.fork, and I had the impression that it should work on Windows. If Linux-style forking is essential to the current architecture, you might want to strike Windows as a supported platform.

Koed00 commented 8 years ago

Try the windows branch instead. Even though it doesn't directly use os.fork, multiprocessing does when starting a process on Unix: https://docs.python.org/3/library/multiprocessing.html#contexts-and-start-methods . As you can see, Windows uses the spawn method by default, since fork is not available. I will try to run some tests while forcing the spawn method and see what comes up.

titusz commented 8 years ago

Sorry, I did not see the windows branch. I get all tests passing except test_timeout* and test_recycle, which hang indefinitely. I am available for testing. If I find some time I will also try to look a bit more closely into a solution for Windows.

goodrichj commented 8 years ago

I just wanted to say I'm also very interested in getting this working in Windows as it would make my development process a lot easier.

Koed00 commented 8 years ago

I'll see what I can do to make it at least work well enough to develop with. You could always set the sync option. This will make everything run synchronously, so you don't need the cluster.

aparsons commented 8 years ago

I am attempting to use the Django ORM as the broker, and with the windows branch, as soon as I start up the qcluster it wants to ping redis. On the plus side, I was getting the pickling error, and using the windows branch seemed to resolve that issue.

Koed00 commented 8 years ago

Unfortunately the windows branch is quite old and still only uses the Redis broker. I'd rather delete it and see if we can't solve your problem in the main branch. Would be nice if you could at least develop on windows.

aparsons commented 8 years ago

Thanks for the prompt response. Getting rid of the windows branch makes sense. Below is the stack trace I am getting when launching the qcluster via the manage.py command. This is likely the same pickling error titusz was experiencing.

Traceback (most recent call last):
  File "manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\__init__.py", line 351, in execute_from_command_line
    utility.execute()
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\__init__.py", line 343, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\base.py", line 394, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\base.py", line 445, in execute
    output = self.handle(*args, **options)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django_q\management\commands\qcluster.py", line 23, in handle
    q.start()
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django_q\cluster.py", line 56, in start
    self.sentinel.start()
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\context.py", line 313, in _Popen
    return Popen(process_obj)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class '_thread.lock'>: attribute lookup lock on _thread failed
Koed00 commented 8 years ago

To be honest, I'm not super confident that anything involving forking will ever work on Windows, but we can have another stab at it. The only real difference between the two versions is the pusher process getting a broker instance passed along. That's the first process that gets forked, and it needs to be pickled before that, which obviously fails. Adding:

    def __getstate__(self):
        state = dict(self.__dict__)
        del state['broker']
        return state

to the Sentinel class would omit the broker instance from being pickled. That might just work. You want to give that a try?
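
To see why excluding the attribute helps, here is a standalone sketch with a hypothetical Worker class standing in for the Sentinel (a `threading.Lock` plays the role of the unpicklable broker connection; this is not django-q's actual code):

```python
import pickle
import threading

class Worker:
    def __init__(self):
        self.name = "pusher"
        self.lock = threading.Lock()   # unpicklable, like a live connection

    def __getstate__(self):
        state = dict(self.__dict__)
        del state["lock"]              # drop the unpicklable attribute
        return state

    def __setstate__(self, state):
        self.__dict__.update(state)
        self.lock = threading.Lock()   # recreate it on the other side

# Round-trips cleanly; without __getstate__ this raises a pickling error.
w = pickle.loads(pickle.dumps(Worker()))
print(w.name)
```

The `__setstate__` half matters too: the child process needs its own fresh instance of whatever was dropped.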

aparsons commented 8 years ago

Unfortunately the change didn't work. When launching the qcluster from the dev branch the stack trace is the following.

Traceback (most recent call last):
  File "manage.py", line 10, in <module>
    execute_from_command_line(sys.argv)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\__init__.py", line 351, in execute_from_command_line
    utility.execute()
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\__init__.py", line 343, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\base.py", line 394, in run_from_argv
    self.execute(*args, **cmd_options)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django\core\management\base.py", line 445, in execute
    output = self.handle(*args, **options)
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django_q\management\commands\qcluster.py", line 23, in handle
    q.start()
  File "C:\Users\User\Documents\GitHub\project\py3env\lib\site-packages\django_q\cluster.py", line 56, in start
    self.sentinel.start()
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\context.py", line 313, in _Popen
    return Popen(process_obj)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
_pickle.PicklingError: Can't pickle <class '_thread.lock'>: attribute lookup lock on _thread failed
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\spawn.py", line 106, in spawn_main
    exitcode = _main(fd)
  File "c:\users\user\appdata\local\programs\python\python35\Lib\multiprocessing\spawn.py", line 116, in _main
    self = pickle.load(from_parent)
EOFError: Ran out of input
Koed00 commented 8 years ago

Well I guess I'll have to install Windows somewhere. The sacrifices I make for open source ¯\_(ツ)_/¯

Koed00 commented 8 years ago

I have a virtual image with windows running now and I managed to install all the stuff.

Now when I set:

Q_CLUSTER = {'orm': 'default', 'sync': DEBUG}

Things work just fine because the cluster is not being used and every async call is just handled inline by a worker. So at least you can develop on Windows that way. This is covered in the docs at http://django-q.readthedocs.org/en/latest/install.html#windows and I'm happy to see it's still true.

I'll have to spend some more time on windows to see if I can get the actual cluster to run.

pypetey commented 7 years ago

I've installed the newest version of Q and set my settings on Windows to Q_CLUSTER = {'orm': 'default', 'sync': True}, but I get:

  File "c:\python35-32\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "c:\python35-32\Lib\multiprocessing\context.py", line 212, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "c:\python35-32\Lib\multiprocessing\context.py", line 313, in _Popen
    return Popen(process_obj)
  File "c:\python35-32\Lib\multiprocessing\popen_spawn_win32.py", line 66, in __init__
    reduction.dump(process_obj, to_child)
  File "c:\python35-32\Lib\multiprocessing\reduction.py", line 59, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "c:\python35-32\Lib\multiprocessing\spawn.py", line 106, in spawn_main
    exitcode = _main(fd)
  File "c:\python35-32\Lib\multiprocessing\spawn.py", line 116, in _main
    self = pickle.load(from_parent)
EOFError: Ran out of input
srj55 commented 7 years ago

@pypetey I haven't tried this in Django-Q, but I can confirm that I receive the same exception and identical stack trace (for EOFError: Ran out of input) when using Huey on Windows.

I'm guessing that the object can't be pickled, and therefore a process can't be spawned.

The nice thing about Huey (maybe Q has this as well?) is that you can change the worker type to: process, thread, or greenlet. So on Windows, thread and greenlet work fine.
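
Thread workers sidestep the whole pickling problem because threads share the parent's memory and nothing crosses a process boundary. A minimal thread-worker sketch (a generic illustration, not Huey's or django-q's implementation):

```python
import queue
import threading

# Threads share memory, so tasks need no pickling -- which is why
# thread-based workers run fine on Windows where spawn-based process
# workers fail on unpicklable state.
tasks = queue.Queue()
results = []

def worker():
    while True:
        func, args = tasks.get()
        if func is None:        # sentinel: shut the worker down
            break
        results.append(func(*args))

t = threading.Thread(target=worker)
t.start()
tasks.put((pow, (2, 10)))
tasks.put((None, None))
t.join()
print(results)
```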

alexsilva commented 7 years ago

Windows 10 / Python 2.7 / Django 1.11.2

pip install django-q

Settings:

INSTALLED_APPS = [..., "django_q", ...]
Q_CLUSTER = {'orm': 'default', 'sync': True}

manage.py migrate
manage.py qcluster

Traceback (most recent call last):
  File "F:\Program Files\JetBrains\PyCharm 2017.1.3\helpers\pycharm\django_manage.py", line 43, in <module>
    run_module(manage_file, None, '__main__', True)
  File "C:\Python27\Lib\runpy.py", line 176, in run_module
    fname, loader, pkg_name)
  File "C:\Python27\Lib\runpy.py", line 82, in _run_module_code
    mod_name, mod_fname, mod_loader, pkg_name)
  File "C:\Python27\Lib\runpy.py", line 72, in _run_code
    exec code in run_globals
  File ...manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "...\lib\site-packages\django\core\management\__init__.py", line 363, in execute_from_command_line
    utility.execute()
  File "...\lib\site-packages\django\core\management\__init__.py", line 355, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "...\lib\site-packages\django\core\management\base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "...\lib\site-packages\django\core\management\base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "...\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "...\lib\site-packages\django_q\cluster.py", line 58, in start
    self.sentinel.start()
  File "C:\Python27\Lib\multiprocessing\process.py", line 130, in start
    self._popen = Popen(self)
  File "C:\Python27\Lib\multiprocessing\forking.py", line 277, in __init__
    dump(process_obj, to_child, HIGHEST_PROTOCOL)
  File "C:\Python27\Lib\multiprocessing\forking.py", line 199, in dump
    ForkingPickler(file, protocol).dump(obj)
  File "C:\Python27\Lib\pickle.py", line 224, in dump
    self.save(obj)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 562, in save_tuple
    save(element)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 419, in save_reduce
    save(state)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 649, in save_dict
    self._batch_setitems(obj.iteritems())
  File "C:\Python27\Lib\pickle.py", line 681, in _batch_setitems
    save(v)
  File "C:\Python27\Lib\pickle.py", line 331, in save
    self.save_reduce(obj=obj, *rv)
  File "C:\Python27\Lib\pickle.py", line 396, in save_reduce
    save(cls)
  File "C:\Python27\Lib\pickle.py", line 286, in save
    f(self, obj) # Call unbound method with explicit self
  File "C:\Python27\Lib\pickle.py", line 748, in save_global
    (obj, module, name))
pickle.PicklingError: Can't pickle <type 'thread.lock'>: it's not found as thread.lock
dancaron commented 6 years ago

Django Q looks amazing. However, I'm on Windows 10 and seeing a few different errors related to the workaround configuration settings. I couldn't find any that worked. Django version 1.11.2.

settings.py

Q_CLUSTER = {
    'orm': 'default'
}

python manage.py qcluster

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 363, in execute_from_command_line
    utility.execute()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 355, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\cluster.py", line 54, in start
    self.sentinel.start()
  File "C:\Python36\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Python36\Lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python36\Lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 99, in spawn_main
    new_handle = reduction.steal_handle(parent_pid, pipe_handle)
  File "C:\Python36\Lib\multiprocessing\reduction.py", line 87, in steal_handle
    _winapi.DUPLICATE_SAME_ACCESS | _winapi.DUPLICATE_CLOSE_SOURCE)
PermissionError: [WinError 5] Access is denied

[EDIT]: I realize now that the "Access is denied" error was due to me using SQLite. I switched to Postgres and that error is gone; however, I'm getting the following:

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 363, in execute_from_command_line
    utility.execute()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 355, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\cluster.py", line 54, in start
    self.sentinel.start()
  File "C:\Python36\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Python36\Lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python36\Lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
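For what it's worth, this pair of errors is characteristic of Windows' `spawn` start method: unlike `fork` on Linux, `spawn` pickles the `Process` object (here the cluster's sentinel) to hand it to the child interpreter, and any attribute holding a thread lock makes that pickling fail. The `EOFError: Ran out of input` in the child is the other side of the same failure: the parent never finished writing the pickled payload, so the spawned interpreter reads an empty pipe. A minimal sketch of the failure mode, independent of django-q (the `Sentinel` class below is a hypothetical stand-in, not django-q's actual class):

```python
import pickle
import threading

class Sentinel:
    """Stand-in for an object handed to multiprocessing.Process.

    On Windows, multiprocessing uses the 'spawn' start method, which
    pickles the process object to send it to the child interpreter;
    under 'fork' on Linux no pickling happens, so the bug never shows.
    """
    def __init__(self):
        self.lock = threading.Lock()  # thread locks are not picklable

err = None
try:
    pickle.dumps(Sentinel())
except TypeError as exc:
    err = exc
# Message wording varies by Python version, e.g.
# "can't pickle _thread.lock objects" on 3.6/3.7
print("pickling failed:", err)
```

So any fix for Windows support would mean either stripping unpicklable state from the objects passed to child processes, or restructuring the cluster startup so those objects are created inside the child instead.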

Also...

settings.py

Q_CLUSTER = {
    'sync': True
}

python manage.py qcluster

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 363, in execute_from_command_line
    utility.execute()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 355, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\cluster.py", line 54, in start
    self.sentinel.start()
  File "C:\Python36\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Python36\Lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python36\Lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input

And...

settings.py

Q_CLUSTER = {
    'sync': DEBUG
}

python manage.py qcluster

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 363, in execute_from_command_line
    utility.execute()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 355, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\cluster.py", line 54, in start
    self.sentinel.start()
  File "C:\Python36\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Python36\Lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python36\Lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.lock objects
(venv-win) PS D:\Dropbox\git\poachr\poachr> Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 99, in spawn_main
    new_handle = reduction.steal_handle(parent_pid, pipe_handle)
  File "C:\Python36\Lib\multiprocessing\reduction.py", line 82, in steal_handle
    _winapi.PROCESS_DUP_HANDLE, False, source_pid)
OSError: [WinError 87] The parameter is incorrect

And...

settings.py

Q_CLUSTER = {
    'orm': 'default',
    'sync': True
}

python manage.py qcluster

Traceback (most recent call last):
  File "manage.py", line 22, in <module>
    execute_from_command_line(sys.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 363, in execute_from_command_line
    utility.execute()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\__init__.py", line 355, in execute
    self.fetch_command(subcommand).run_from_argv(self.argv)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 283, in run_from_argv
    self.execute(*args, **cmd_options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django\core\management\base.py", line 330, in execute
    output = self.handle(*args, **options)
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\management\commands\qcluster.py", line 22, in handle
    q.start()
  File "D:\Dropbox\git\poachr\venv-win\lib\site-packages\django_q\cluster.py", line 54, in start
    self.sentinel.start()
  File "C:\Python36\Lib\multiprocessing\process.py", line 105, in start
    self._popen = self._Popen(self)
  File "C:\Python36\Lib\multiprocessing\context.py", line 223, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\context.py", line 322, in _Popen
    return Popen(process_obj)
  File "C:\Python36\Lib\multiprocessing\popen_spawn_win32.py", line 65, in __init__
    reduction.dump(process_obj, to_child)
  File "C:\Python36\Lib\multiprocessing\reduction.py", line 60, in dump
    ForkingPickler(file, protocol).dump(obj)
TypeError: can't pickle _thread.RLock objects
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 105, in spawn_main
    exitcode = _main(fd)
  File "C:\Python36\Lib\multiprocessing\spawn.py", line 115, in _main
    self = reduction.pickle.load(from_parent)
EOFError: Ran out of input
dancaron commented 6 years ago

Could the new Windows Subsystem for Linux be of help here? I believe it supports forking.

"As an example, the Linux fork() syscall has no direct equivalent call documented for Windows. When a fork system call is made to the Windows Subsystem for Linux, lxcore.sys does some of the initial work to prepare for copying the process. It then calls internal Windows NT kernel APIs to create the process with the correct semantics, and completes copying additional data for the new process."

https://blogs.msdn.microsoft.com/wsl/2016/04/22/windows-subsystem-for-linux-overview/

dancaron commented 6 years ago

Yup. Bash on Ubuntu on Windows works.

Here's a decent video on getting started with Django under Bash on Windows: https://godjango.com/123-django-with-bash-for-windows/

And installation of some basic dependencies:

sudo apt-get update
sudo apt-get install python3 python3-venv python3.5-dev build-essential redis-server
python3 -m venv /path/to/venv
source /path/to/venv/bin/activate
pip install django
pip install django_q
python manage.py runserver

And finally...

python manage.py qcluster
03:14:03 [Q] INFO Q Cluster-30787 starting.
03:14:04 [Q] INFO Process-1:1 ready for work at 30793
03:14:04 [Q] INFO Process-1:2 ready for work at 30794
03:14:04 [Q] INFO Process-1:3 ready for work at 30795
03:14:04 [Q] INFO Process-1:4 ready for work at 30796
03:14:04 [Q] INFO Process-1:5 ready for work at 30797
03:14:04 [Q] INFO Process-1:6 ready for work at 30798
03:14:04 [Q] INFO Process-1:7 ready for work at 30799
03:14:04 [Q] INFO Process-1:8 ready for work at 30800
03:14:04 [Q] INFO Process-1:9 monitoring at 30801
03:14:04 [Q] INFO Process-1 guarding cluster at 30792
03:14:04 [Q] INFO Q Cluster-30787 running.
03:14:04 [Q] INFO Process-1:10 pushing tasks at 30802
mpibpc-mroose commented 5 years ago

Is there any way to fix this problem, to give Windows developers at least the possibility of developing apps in Windows IDEs? When using PyCharm, the WSL is not an option. Docker may work, but only for the Professional version.

carltongibson commented 5 years ago

VSCode supports working with the WSL

mpibpc-mroose commented 5 years ago

Maybe the documentation should be updated. From reading it, I would expect no problems when developing on Windows, but it seems django-q only works in *NIX environments (such as the WSL). According to the documentation, setting sync to True should be all that's needed to get it working on Windows, but it isn't.

Has anyone ever dug into the depths of the code to find out what causes these pickle errors?

randlet commented 4 years ago

I've had some success running django-q on Windows with a very small change: https://github.com/Koed00/django-q/compare/master...qatrackplus:qatrackplus

and the following settings:

Q_CLUSTER = {
    'name': 'foo',
    'workers': 1,
    'cpu_affinity': 1,
    'sync': True,
    'timeout': 60,
    'catch_up': True,
    'recycle': 20,
    'compress': False,
    'save_limit': 250,
    'queue_limit': 500,
    'label': 'Django Q',
    'orm': 'default',
}

I haven't run into a pickle error yet. Using Python 3.6.x / Django 2.1.

aragentum commented 4 years ago

I'm developing my project in a Docker container (because it will run in production on a Linux server) and django-q works without problems. If you have the same setup, you can run django-q in a separate container:

Dockerfile

ARG APP_NAME=web
ARG WORKDIR=/usr/src/app
ARG REQUIREMENTS=${APP_NAME}/requirements/development.txt

FROM python:3.7
ARG APP_NAME
ARG WORKDIR
ARG REQUIREMENTS

WORKDIR ${WORKDIR}

COPY ./${APP_NAME} ./${APP_NAME}
COPY ./manage.py ./
COPY ./run.sh ./

RUN chmod +x ./run.sh
RUN pip install -r ${REQUIREMENTS}

run.sh

#!/bin/sh

set -e

until python manage.py migrate; do
  echo "Migration problems, possibly DB server is unavailable"
  sleep 5
done

# Alternatively, comment out the line below, uncomment 'tty: true' and
# 'stdin_open: true' in docker-compose.yml, and rename 'entrypoint' to
# 'command'; then you can connect to the container and run this manually.
python manage.py qcluster

docker-compose.yml

version: "3.7"

services:
  dev-web:
    container_name: dev-web
    networks:
      - web-network
    build:
      context: .
      dockerfile: Dockerfile
    restart: always
    depends_on:
      - dev-db
    links:
      - dev-db
    entrypoint: ["./run.sh"]
#    tty: true
#    stdin_open: true
    ports:
      - "8000:8000"

  dev-db:
    container_name: dev-db
    networks:
      - web-network
    image: postgres:10.7
    restart: always
    environment:
      - POSTGRES_USER=django_web
      - POSTGRES_PASSWORD=django_web
      - POSTGRES_DB=django_web
    command: ["--autovacuum=off"]
    ports:
      - "5432:5432"

networks:
  web-network: