jasonvagner opened 5 years ago
I made some host changes and updated the firewall settings, and this seems to have either 1) resolved the issue, or 2) brought me a bit closer.
I think I have a functionality question now. Should live logging stream the log into the Live Job Event Log portion of the "view job" screen, under the Job Progress, CPU Usage and Memory Usage charts?
Because I got this message in the "Live Job Event Log":

Log Watcher: Connecting to server: http://[hostname].azure.com:3012...
Log Watcher: Connected successfully!

...and then nothing more.
But if I click the "View Full Log" link and refresh the resulting window, that page updates with log content as the job progresses.
Thanks!
That is really strange behavior. I cannot explain why the log watcher is connecting successfully but not showing any output. Normally I would suggest something like not flushing your log file handle after each write, but you say the log is visible when you click the link to view it. So I am kind of at a loss here.
The only thing I can offer is, the next major release of Cronicle has a completely redesigned log watcher system that uses the main WebSocket that connects to the master server, and the master server proxies a connection to the slave for streaming log updates. It no longer connects directly to the slave from the UI. Hopefully, this will resolve your issue.
This would be very helpful indeed, to have the master tunnel the live log. I am facing the exact same issue with our deployment on Google Cloud, where the LAN IPs are not reachable from outside and the live log times out. I even tried to "add server" a slave with its public IP (with port 3012 open), but Cronicle ultimately resolves the private IP and uses that instead. Would you have a rough estimate of when the new major version will be out? Or is there a way to force Cronicle to use the provided public IP, rather than overriding it with the resolved private IP? Thanks!
@atellier Have you tried configuring Cronicle to use hostnames for WebSockets instead of IPs? If your hostname resolves to the public IP, this should work for you:
Setting this parameter to `true` will force Cronicle's Web UI to connect to the back-end servers using their hostnames rather than IP addresses. This includes both AJAX API calls and WebSocket streams. You should only need to enable this in special situations where your users cannot access your servers via their LAN IPs, and you need to proxy them through a hostname (DNS) instead. The default is `false` (disabled), meaning connect using IP addresses. This property only takes effect if `web_direct_connect` is also set to `true`.
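For example, under a default install both of these settings live in conf/config.json; a rough sketch of the change (paths assume the standard /opt/cronicle layout):

```sh
# Minimal sketch, assuming a default install under /opt/cronicle.
# Back up the config before editing:
cp /opt/cronicle/conf/config.json /opt/cronicle/conf/config.json.bak

# In /opt/cronicle/conf/config.json, set both top-level properties:
#   "web_direct_connect": true,
#   "web_socket_use_hostnames": true

# Restart Cronicle so the new settings take effect:
/opt/cronicle/bin/control.sh restart
```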
I do apologize but I really don't have an ETA for a v1.0 release. My TODO list is still a mile long, and I have been taking a break to work on other projects. I'm trying to get back into working on Cronicle, but I don't know how long it's going to take to redesign and implement live log tunneling through the master server. Sorry 😞
Summary
I can't seem to configure Cronicle in Azure to enable/support live logging.
I've used truncated hostnames and full DNS names. I've also used the local IPs and the public IPs, but even so, running "node -e 'console.log(require("os").hostname());'" on either the master or the slave always returns just the short hostname (e.g. "master1" and "slave1").
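A possible angle, assuming Node's os.hostname() simply reflects the system hostname: set the machine's hostname to the fully qualified name (the FQDN below is a placeholder):

```sh
# Untested sketch for Ubuntu 18.04; substitute your real FQDN.
sudo hostnamectl set-hostname master1.example.azure.com

# Verify what Node.js reports afterwards:
node -e 'console.log(require("os").hostname());'
```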
I've tested switching web_socket_use_hostnames from false to true.
Other things I've tested:
Using the full public DNS hostname for the master renders it unable to start up. I can use the short hostname for the master and the fully qualified public DNS name for the slave, and it starts up. The jobs run fine, but live logging times out. Curiously, if I curl http://[full-public-dns-hostname]:3012 from the master, it returns the Cronicle page content; so curl works, but the live logging attempt does not. Is there another URL construction I can test?
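One test that might be closer to what the log watcher actually does, assuming the live log connection in 0.8.x starts as a socket.io handshake on port 3012 (the EIO version below is an assumption):

```sh
# Replace the bracketed placeholder with your real hostname before running.

# Plain page fetch (this already works from the master):
curl -v "http://[full-public-dns-hostname]:3012/"

# socket.io polling handshake -- roughly what the live log watcher
# would open first, if it uses socket.io 2.x (EIO=3):
curl -v "http://[full-public-dns-hostname]:3012/socket.io/?EIO=3&transport=polling"
```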
Steps to reproduce the problem
Your Setup
Azure
Operating system and version?
Ubuntu 18.04.2 LTS
Node.js version?
8.10.0
Cronicle software version?
Version 0.8.28
Are you using a multi-server setup, or just a single server?
One master, one slave
Are you using the filesystem as back-end storage, or S3/Couchbase?
Filesystem default, as installed
Can you reproduce the crash consistently?
Log Excerpts
Log Watcher: Connecting to server: http://[hostname]:3012...
Log Watcher: Server Connect Error: timeout (http://[hostname]:3012)