Open aaaaalbert opened 8 years ago
We keep the node manager state in memory anyway, in nmmain's module space. So can't we just read it from there?
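A minimal sketch of that idea, assuming nmmain keeps the parsed nodeman.cfg in a module-level variable (the name nmmain.configuration and the 'service_vessel' key are assumptions for illustration, not the actual identifiers):

    # Hypothetical: have servicelogger ask the running node manager for
    # its already-loaded state instead of re-reading nodeman.cfg from disk.
    import nmmain   # assumed to hold the parsed config at module level

    def get_servicevessel_from_memory():
      # 'configuration' and 'service_vessel' are illustrative names only.
      try:
        return nmmain.configuration['service_vessel']
      except (AttributeError, KeyError):
        # Fall back to the current directory, mirroring the existing
        # last-resort behavior in servicelogger.
        return '.'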
On Thu, Jul 30, 2015 at 10:45 AM, aaaaalbert notifications@github.com wrote:
The servicelogger currently uses a rather complicated way to find out where it should put its logfiles:
- The servicevessel public key is hardcoded (https://github.com/SeattleTestbed/nodemanager/blob/master/servicelogger.py#L109-L110), which is a problem in itself (https://github.com/SeattleTestbed/custominstallerbuilder/issues/13) when people fork their own Seattle-like testbed,
- it is looked up in the Seattle install's vesseldict (https://github.com/SeattleTestbed/nodemanager/blob/master/servicelogger.py#L134),
- or, if that didn't work out, "." (the current directory) is used.
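For reference, that lookup boils down to roughly the following (a condensed paraphrase of the linked servicelogger.py code, with the key literal elided; how exactly the public key is matched against the vesseldict entries is simplified here):

    import persist

    # The hardcoded service vessel public key (see servicelogger.py
    # L109-L110); forks of the testbed inherit this literal.
    servicevesselpubkey = {'e': ..., 'n': ...}   # elided

    def get_servicevessel(maindirectory='.'):
      # Try to find the vessel that the hardcoded key is assigned to.
      try:
        vesseldict = persist.restore_object(maindirectory + '/vesseldict')
        for vesselname, vesselinfo in vesseldict.items():
          if servicevesselpubkey in vesselinfo['userkeys']:
            return vesselname
      except (IOError, OSError, KeyError):
        pass
      # Nothing worked out: use the current directory.
      return '.'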
The first two items could be implemented in a more general way by simply persist.restore_object-ing nodeman.cfg, which plainly states the name of the servicevessel.
The theoretical downside I can see is that if the servicevessel owner decided to give up the vessel, and/or the owner of another vessel on the node transferred ownership to the servicevessel pubkey, then nodeman.cfg would not reflect the new assignment, whereas vesseldict would, and the service logs would end up in the wrong vessel. (I don't think this has ever happened, though; the clearinghouse does nothing like that.)
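For concreteness, the proposed simplification could look like this (a sketch only; it assumes nodeman.cfg carries an entry naming the servicevessel, and the 'service_vessel' key name is an assumption):

    import persist

    def get_servicevessel(maindirectory='.'):
      # Read the node manager config and take the servicevessel name
      # straight from it; no hardcoded public key, no vesseldict scan.
      try:
        configuration = persist.restore_object(maindirectory + '/nodeman.cfg')
        return configuration.get('service_vessel', '.')
      except (IOError, OSError):
        # No config found, e.g. when running outside a Seattle install.
        return '.'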
Yes, we can do that for nodemanager's use of servicelogger (assuming we refactor the init functions a bit).

A bigger problem exists in repy.py's use of it (via tracebackrepy). repy.py doesn't load the NM config, yet some internal Repy errors could get logged in the nodemanager log, which is why servicelogger contains the logic to find the service vessel.
I think that the Repy sandbox should not take care of setting up log files itself, but rather use standard streams, and have the nodemanager redirect them to the appropriate places. I'll open another issue for this.
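Roughly what I have in mind, with the nodemanager owning the log files and the sandbox only writing to its standard streams (the file names and the exact repy.py invocation are illustrative assumptions):

    import subprocess

    def start_vessel(vesseldirectory, restrictionsfn, program):
      # The nodemanager decides where the logs live. repy.py itself
      # never needs to locate the service vessel; it just writes to
      # stdout/stderr, which are redirected here.
      out = open(vesseldirectory + '/vessel.log', 'a')
      err = open(vesseldirectory + '/vessel.err', 'a')
      return subprocess.Popen(
          ['python', 'repy.py', restrictionsfn, program],
          stdout=out, stderr=err)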
What if this runs without a node manager? For example, as a standalone Repy program.
In the usual running-local case, python repy.py restrictionsfile my_program.r2py, the command-line arguments to use servicelogger aren't given. Thus, logging goes to stdout.
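So the dispatch inside the sandbox could remain as simple as this (a sketch only; the flag handling and function names are assumptions rather than repy.py's actual interface):

    import sys

    def log_internal_error(message, servicelog=False, logname=None):
      # If the embedder (e.g. the nodemanager) requested service
      # logging, hand the message to servicelogger; otherwise, as in
      # the standalone case above, write it to stdout.
      if servicelog and logname:
        import servicelogger
        servicelogger.init(logname)
        servicelogger.log(message)
      else:
        sys.stdout.write(message + '\n')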