my8100 / scrapydweb

Web app for Scrapyd cluster management, Scrapy log analysis & visualization, Auto packaging, Timer tasks, Monitor & Alert, and Mobile UI. DEMO :point_right:
https://github.com/my8100/files
GNU General Public License v3.0

Cannot save state when restarting scrapydweb #21

Closed · jdespatis closed 5 years ago

jdespatis commented 5 years ago

I'm running scrapydweb in Docker. I can start a job and then see some statistics; I also see the finished jobs, which is perfect.

However, when I restart my container, I lose this state. For example, I no longer see the finished jobs. What data should I persist in my Docker container so that I can see everything again after a restart?

I've tried to persist /usr/local/lib/python3.6/site-packages/scrapydweb/data, but it doesn't seem to do the trick.

SpiderKeeper keeps all its state in a SpiderKeeper.db file, which is perfect for keeping state across container restarts.

Any idea how to achieve the same with scrapydweb?

Thanks again for your work!

my8100 commented 5 years ago

Actually, it's an issue with Scrapyd. I'll address it in the next release. For the time being, you can check the logs of finished jobs on the Logs page, and there is no need to persist the data folder of ScrapydWeb.

jdespatis commented 5 years ago

Yes, it would be awesome to support this feature! Will it be possible to see the job graph after a scrapydweb restart?

my8100 commented 5 years ago

I just implemented a snapshot mechanism for the Dashboard page so that you can still see its last view in case the Scrapyd service is restarted. What do you mean by 'job graph'?

jdespatis commented 5 years ago

I mean the graph that shows the number of items stored per minute and the number of pages crawled per minute, as well as the other graph showing the cumulative totals of crawled pages / stored items. Very handy to have after a scrapydweb restart.

my8100 commented 5 years ago

The stats and graphs of a job remain available as long as the JSON file generated by LogParser or the original log file exists. You may need to adjust jobs_to_keep and finished_to_keep in the config file of Scrapyd.
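A minimal sketch of those two options in scrapyd.conf (the values below are only examples; /somepath is a placeholder):

```ini
[scrapyd]
# Directory where Scrapyd writes log files (and where LogParser stores its JSON next to them).
logs_dir         = /somepath
# Finished-job log files kept per spider before the oldest are deleted (Scrapyd default: 5).
jobs_to_keep     = 100
# Finished processes kept by the launcher and reported by listjobs.json (Scrapyd default: 100).
finished_to_keep = 500
```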

my8100 commented 5 years ago

But why emphasize "after a ScrapydWeb restart"? Is there anything wrong with v1.1.0?

jdespatis commented 5 years ago

Well, indeed, I launched a job and it finished; then I restarted both scrapydweb and scrapyd. After that, scrapydweb no longer shows the finished job, so I can no longer get the stats and graph of that job.

I imagine that if scrapydweb persists the finished jobs (next release), then I'll also be able to see the graph that was built in real time.

Is that right? I'll be happy to test this new release ;)

jdespatis commented 5 years ago

Well, I've just noticed that I can see the graph of the job by going to the Files > Logs section; a nice column shows the graph for every log file, which is perfect for me!

With a snapshot of the dashboard, it will be even better!

my8100 commented 5 years ago

As I told you before: "You can check out the log of finished jobs in the Logs page for the time being"

my8100 commented 5 years ago

Also note that the JSON files generated by LogParser are removed by Scrapyd when it deletes the original log files.

my8100 commented 5 years ago

v1.2.0: Persist job information in the database.
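A minimal sketch of keeping that data across container restarts by mounting the ScrapydWeb data directory as a Docker volume. The container path below is taken from the first comment and depends on your Python version and install location; the image name is just a placeholder:

```bash
docker run -d \
  -p 5000:5000 \
  -v "$(pwd)/scrapydweb-data:/usr/local/lib/python3.6/site-packages/scrapydweb/data" \
  your-scrapydweb-image   # placeholder image name
```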

Digenis commented 5 years ago

Hi, scrapyd uses SQLite only as a concurrently accessed queue. The persistence of scheduled jobs that you see right now was not intentional. scrapyd should have used https://docs.python.org/3/library/queue.html to implement the spider queue instead of SQLite.

I think the best approach is to make scrapyd more modular so that developers like @my8100 can easily plug in custom components, e.g. a persistent job table (rough sketch below).
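A purely hypothetical sketch of what such a pluggable persistent job table could look like, using only the standard library (none of these names are existing scrapyd API):

```python
import json
import sqlite3


class SqliteJobStore:
    """Hypothetical persistent job table that a more modular scrapyd could plug in."""

    def __init__(self, path="jobs.db"):
        self.conn = sqlite3.connect(path)
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS jobs (id TEXT PRIMARY KEY, state TEXT, data TEXT)"
        )

    def save(self, job_id, state, data):
        # Upsert the record so job history survives a scrapyd restart.
        self.conn.execute(
            "INSERT OR REPLACE INTO jobs VALUES (?, ?, ?)",
            (job_id, state, json.dumps(data)),
        )
        self.conn.commit()

    def finished(self):
        rows = self.conn.execute("SELECT id, data FROM jobs WHERE state = 'finished'")
        return [(job_id, json.loads(data)) for job_id, data in rows]
```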

goshaQ commented 5 years ago

Are there any plans to add this feature in future releases? There are a lot of cases where it's nice to be able to restore a failed parser right from where it stopped, so that already scheduled requests won't be lost, just like it's implemented in SpiderKeeper.

my8100 commented 5 years ago

@goshaQ What do you mean by "restore a failed parser right from where it stopped"?

goshaQ commented 5 years ago

@my8100 The same as in the first comment. SpiderKeeper lets you save the state of the queue that contains scheduled requests, so if a spider stops (because of a user request or anything else), it can resume rather than start from scratch. But now I think there are some considerations that make it hard to provide such functionality, and it's not so hard to do it yourself given the specifics of a particular use case.
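For example, at the Scrapy level the scheduler queue can already be persisted with the JOBDIR setting, so a stopped spider resumes instead of starting from scratch (the spider and directory names below are just examples):

```bash
# First run: scheduled requests and the dupefilter state are written to crawls/myspider-run1.
scrapy crawl myspider -s JOBDIR=crawls/myspider-run1

# After the spider stops, running the same command again resumes from where it left off.
scrapy crawl myspider -s JOBDIR=crawls/myspider-run1
```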

Btw, I just noticed that the Items section shows an error if scrapyd doesn't return any items, which is normal if the results are written to a database. It seems to me that, for the same reason, the Jobs section keeps showing the red tip telling me to install logparser in order to show the number of parsed items, even after I've installed and launched logparser. Or am I doing something wrong? Sorry for the unrelated question.

my8100 commented 5 years ago

  1. pip install scrapydweb==1.3.0. Both the Classic view and the Database view of the Jobs page are provided; that's why I closed this issue in v1.2.0.

  2. Set SHOW_SCRAPYD_ITEMS to False to hide the Items link in the sidebar (see the sketch after this list). https://github.com/my8100/scrapydweb/blob/a449dbf0137b8c15928fbbe664f77ddd8eec6fe3/scrapydweb/default_settings.py#L155-L159

  3. What's the result of visiting http://127.0.0.1:6800/logs/stats.json
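For point 2, a minimal sketch, assuming a scrapydweb user settings file (the actual filename, e.g. scrapydweb_settings_vN.py, depends on your version):

```python
# Hide the Items link in the sidebar when items are written to a database
# instead of Scrapyd's item feed files.
SHOW_SCRAPYD_ITEMS = False
```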

goshaQ commented 5 years ago

Thanks, that's what I was looking for. But it appears that there is no stats.json on the server:

  1. What's the result of visiting http://127.0.0.1:6800/logs/stats.json

The reply is No Such Resource.

my8100 commented 5 years ago

I've installed logparser and launched it.

Restart logparser and post the full log.

goshaQ commented 5 years ago

Restart logparser and post the full log.

[2019-08-08 07:23:22,926] INFO     in logparser.run: LogParser version: 0.8.2
[2019-08-08 07:23:22,927] INFO     in logparser.run: Use 'logparser -h' to get help
[2019-08-08 07:23:22,927] INFO     in logparser.run: Main pid: 20297
[2019-08-08 07:23:22,927] INFO     in logparser.run: Check out the config file below for more advanced settings.

****************************************************************************************************
Loading settings from /usr/local/lib/python3.6/dist-packages/logparser/settings.py
****************************************************************************************************

[2019-08-08 07:23:22,928] DEBUG    in logparser.run: Reading settings from command line: Namespace(delete_json_files=False, disable_telnet=False, main_pid=0, scrapyd_logs_dir='/somepath', scrapyd_server='127.0.0.1:6800', sleep=10, verbose=False)
[2019-08-08 07:23:22,928] DEBUG    in logparser.run: Checking config
[2019-08-08 07:23:22,928] INFO     in logparser.run: SCRAPYD_SERVER: 127.0.0.1:6800
[2019-08-08 07:23:22,928] INFO     in logparser.run: SCRAPYD_LOGS_DIR: /somepath
[2019-08-08 07:23:22,928] INFO     in logparser.run: PARSE_ROUND_INTERVAL: 10
[2019-08-08 07:23:22,928] INFO     in logparser.run: ENABLE_TELNET: True
[2019-08-08 07:23:22,928] INFO     in logparser.run: DELETE_EXISTING_JSON_FILES_AT_STARTUP: False
[2019-08-08 07:23:22,928] INFO     in logparser.run: VERBOSE: False

****************************************************************************************************
Visit stats at: http://127.0.0.1:6800/logs/stats.json
****************************************************************************************************

[2019-08-08 07:23:23,294] INFO     in logparser.utils: Running the latest version: 0.8.2
[2019-08-08 07:23:26,299] WARNING  in logparser.logparser: New logfile found: /somepath/2019-08-08T07_19_39.log (121355 bytes)
[2019-08-08 07:23:26,299] WARNING  in logparser.logparser: Json file not found: /somepath/2019-08-08T07_19_39.json
[2019-08-08 07:23:26,299] WARNING  in logparser.logparser: New logfile: /somepath/2019-08-08T07_19_39.log (121355 bytes) -> parse
[2019-08-08 07:23:26,331] WARNING  in logparser.logparser: Saved to /somepath/2019-08-08T07_19_39.json
[2019-08-08 07:23:26,332] WARNING  in logparser.logparser: Saved to http://127.0.0.1:6800/logs/stats.json
[2019-08-08 07:23:26,332] WARNING  in logparser.logparser: Sleep 10 seconds
[2019-08-08 07:23:36,343] WARNING  in logparser.logparser: Saved to http://127.0.0.1:6800/logs/stats.json
[2019-08-08 07:23:36,343] WARNING  in logparser.logparser: Sleep 10 seconds
[2019-08-08 07:23:46,350] WARNING  in logparser.logparser: Saved to http://127.0.0.1:6800/logs/stats.json
[2019-08-08 07:23:46,351] WARNING  in logparser.logparser: Sleep 10 seconds

my8100 commented 5 years ago

Check if SCRAPYD_LOGS_DIR/stats.json exists. Visit http://127.0.0.1:6800/logs/stats.json again.
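For example, assuming SCRAPYD_LOGS_DIR is /somepath as shown in the log above:

```bash
ls -l /somepath/stats.json                  # the aggregated file LogParser should write
curl http://127.0.0.1:6800/logs/stats.json  # should return JSON, not "No Such Resource"
```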

goshaQ commented 5 years ago

There is a .json file, but it's named the same as the .log file. The reply is the same: No Such Resource.

my8100 commented 5 years ago

Check if SCRAPYD_LOGS_DIR/stats.json exists.

my8100 commented 5 years ago

Did you see the comment below? https://github.com/my8100/logparser/blob/711786042aece827be87acf0286fb68bfe5ebd20/logparser/settings.py#L20-L26
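For reference, a sketch of the intended setup: LogParser writes stats.json into SCRAPYD_LOGS_DIR, and Scrapyd serves /logs/ from its logs_dir, so the two should point at the same directory (the path below is just the example from the log above):

```python
# logparser/settings.py (the file linked above)
SCRAPYD_LOGS_DIR = '/somepath'  # should be the same directory as the logs_dir option in scrapyd.conf
```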