my8100 / scrapydweb

Web app for Scrapyd cluster management, Scrapy log analysis & visualization, Auto packaging, Timer tasks, Monitor & Alert, and Mobile UI. DEMO :point_right:
https://github.com/my8100/files

Clean install on clean Ubuntu VM. Whatever I do it is not working. #212

Closed: DanBoyDan closed this issue 1 year ago

DanBoyDan commented 1 year ago

**Describe the bug**
Clean install on a clean Ubuntu VM. Tried several times; whatever I do, it does not work. This is my best progress.

**To Reproduce**

  1. Try to log in to localhost:5000 right after installation

**Expected behavior**
I expected it to work :)

**Logs**

```
500 (INTERNAL SERVER ERROR): 500 Internal Server Error: The server encountered an internal error and was unable to complete your request. Either the server is overloaded or there is an error in the application.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/dan/.local/lib/python3.10/site-packages/flask/app.py", line 2073, in wsgi_app
    response = self.full_dispatch_request()
  File "/home/dan/.local/lib/python3.10/site-packages/flask/app.py", line 1518, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/home/dan/.local/lib/python3.10/site-packages/flask/app.py", line 1516, in full_dispatch_request
    rv = self.dispatch_request()
  File "/home/dan/.local/lib/python3.10/site-packages/flask/app.py", line 1502, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**req.view_args)
  File "/home/dan/.local/lib/python3.10/site-packages/flask/views.py", line 83, in view
    self = view.view_class(*class_args, **class_kwargs)  # type: ignore
  File "/home/dan/.local/lib/python3.10/site-packages/scrapydweb/views/index.py", line 10, in __init__
    super(IndexView, self).__init__()
  File "/home/dan/.local/lib/python3.10/site-packages/scrapydweb/views/baseview.py", line 221, in __init__
    for job in self.scheduler.get_jobs(jobstore='default'))
  File "/home/dan/.local/lib/python3.10/site-packages/apscheduler/schedulers/base.py", line 574, in get_jobs
    jobs.extend(store.get_all_jobs())
  File "/home/dan/.local/lib/python3.10/site-packages/apscheduler/jobstores/sqlalchemy.py", line 89, in get_all_jobs
    jobs = self._get_jobs()
  File "/home/dan/.local/lib/python3.10/site-packages/apscheduler/jobstores/sqlalchemy.py", line 141, in _get_jobs
    selectable = select(self.jobs_t.c.id, self.jobs_t.c.job_state).\
  File "<string>", line 2, in select
  File "<string>", line 2, in __init__
  File "/home/dan/.local/lib/python3.10/site-packages/sqlalchemy/util/deprecations.py", line 139, in warned
    return fn(*args, **kwargs)
  File "/home/dan/.local/lib/python3.10/site-packages/sqlalchemy/sql/selectable.py", line 3107, in __init__
    util.raise_(
  File "/home/dan/.local/lib/python3.10/site-packages/sqlalchemy/util/compat.py", line 182, in raise_
    raise exception
sqlalchemy.exc.ArgumentError: columns argument to select() must be a Python list or other iterable
```
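For context: the failing call in APScheduler's SQLAlchemy jobstore, `select(self.jobs_t.c.id, self.jobs_t.c.job_state)`, uses the SQLAlchemy 1.4/2.0 calling convention, while the `ArgumentError` above is what SQLAlchemy 1.3 raises when `select()` is not given a list of columns. That suggests an incompatible SQLAlchemy/APScheduler pairing in the installed environment rather than anything wrong with the VM itself. A minimal sketch of the signature mismatch, using free-standing `column()` objects rather than scrapydweb's actual tables:

```python
# Hypothetical illustration of the select() signature mismatch seen in the
# traceback; the column names mirror APScheduler's jobs table.
from sqlalchemy import column, select

try:
    # SQLAlchemy 1.4/2.0 style (what recent APScheduler releases call):
    # columns passed positionally. On SQLAlchemy 1.3 this raises the
    # ArgumentError shown in the traceback.
    stmt = select(column("id"), column("job_state"))
except Exception as exc:
    print(exc)  # columns argument to select() must be a Python list or other iterable
    # SQLAlchemy 1.3 (legacy) style: columns wrapped in a single list.
    stmt = select([column("id"), column("job_state")])

print(stmt)
```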


- **OS:** Linux-5.19.0-38-generic-x86_64-with-glibc2.35
- **Python:** 3.10.6
- **ScrapydWeb:** 1.4.0
- **LogParser:** 0.8.2
- **Scrapyd servers amount:** 2
- **User-Agent:** Mozilla/5.0 (X11; Linux x86_64; rv:103.0) Gecko/20100101 Firefox/103.0
- **Request Method:** GET
- **Request Args:** ImmutableMultiDict([])
- **Form Data:** ImmutableMultiDict([])
- **Files Data:** ImmutableMultiDict([])
- **Request Referer:** None

my8100 commented 1 year ago

Please try the latest scrapydweb v1.4.1, which supports Python 3.6 to 3.9.
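Assuming a pip-based install like the one in the report, the upgrade might look something like this (a sketch, not official instructions):

```console
pip install --upgrade "scrapydweb>=1.4.1"
pip show scrapydweb   # confirm the installed version
```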

my8100 commented 1 year ago

Fixed in scrapydweb v1.4.1