Closed lampwins closed 7 years ago
Upstream issue in junos-pyez: https://github.com/Juniper/py-junos-eznc/issues/750

The proposed upstream solution is to include `uptime` in the routing engine dicts in `re_info['default']`, which will always be available no matter the fpc location of the routing engines.
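The proposed lookup could be sketched roughly as below. The `facts` shapes are invented for illustration only; the real junos-pyez output differs:

```python
# Hypothetical sketch of the proposed lookup. The facts shapes below are
# assumptions for illustration, not actual junos-pyez output.
def get_uptime(facts):
    """Prefer re_info['default'], which would be populated regardless of
    which fpc slot hosts the routing engine."""
    re_default = (facts.get("re_info") or {}).get("default") or {}
    # the proposal: uptime lives directly in the per-RE dicts here
    for re_entry in re_default.values():
        if isinstance(re_entry, dict) and "up_time" in re_entry:
            return re_entry["up_time"]
    # fall back to the old RE0-based lookup
    re0 = facts.get("RE0") or {}
    return re0.get("up_time", -1)

# REs sit in fpc3/fpc4, so RE0 is None -- the case from this issue
facts = {
    "RE0": None,
    "re_info": {"default": {"3": {"up_time": "10 days", "status": "OK"}}},
}
print(get_uptime(facts))  # -> 10 days
```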
Thanks for reporting this @lampwins - as you have the environment to reproduce this, would you be able to submit the fix please?
@mirceaulinic sure, I would be happy to. Until the upstream issue is resolved, are you okay with setting the facts `uptime` to `None` when it is unavailable from the junos-pyez output?
@lampwins use `-1` instead of `None`.
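As a sketch, that interim fallback might look like the following (the facts shape is assumed; this is not the actual napalm-junos code):

```python
# Interim workaround sketch: return -1 when RE0 is missing or None.
# The facts shape is an assumption, not the real napalm-junos structure.
def safe_uptime(facts):
    re0 = facts.get("RE0") or {}   # RE0 can be None on such stacks
    return re0.get("up_time", -1)

print(safe_uptime({"RE0": None}))                    # -> -1
print(safe_uptime({"RE0": {"up_time": "10 days"}}))  # -> 10 days
```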
I had a similar issue with stacked switches where `get_facts` would time out. It seems too many interfaces caused this.
@sincerywaing there is a `timeout` attribute you can pass when creating the driver object. Try it, and if it doesn't work, please open a new issue.
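For reference, the timeout can be passed when instantiating the driver. This is a sketch against a live device; the hostname and credentials are placeholders:

```python
from napalm import get_network_driver

driver = get_network_driver("junos")
# timeout is a standard NetworkDriver constructor argument, in seconds;
# raise it for large stacks where fact gathering is slow
device = driver("198.51.100.1", "admin", "secret", timeout=120)
device.open()
facts = device.get_facts()
device.close()
```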
Description of Issue/Question

When calling `get_facts()` on an EX switch stack where the master and backup routing engines are in node positions greater than 0, napalm bombs because `RE0` in the facts output is `None`. This is mostly an upstream issue in junos-pyez, but napalm should nonetheless check for this when trying to gather the uptime.

Did you follow the steps from https://github.com/napalm-automation/napalm#faq

[ x ] Yes [ ] No
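A minimal sketch of the failure mode (the facts shape is assumed for illustration):

```python
# When the REs live in fpc slots > 0, junos-pyez leaves RE0 as None,
# so a naive subscript lookup blows up. Shapes here are assumptions.
facts = {"RE0": None, "RE3": {"up_time": "10 days"}}

try:
    uptime = facts["RE0"]["up_time"]  # what an RE0-only lookup does
except TypeError as err:
    print(err)  # -> 'NoneType' object is not subscriptable
```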
Setup

napalm-junos version
(Paste verbatim output from `pip freeze | grep napalm-junos` between quotes below)

JunOS version
(Paste verbatim output from `show version and haiku` between quotes below)

This was observed on two stacks: one EX4200 stack and one EX4300 stack. Their versions are, respectively:

Steps to Reproduce the Issue
Error Traceback

(Paste the complete traceback of the exception between quotes below)

The raw output from junos-pyez facts tells the story here.

For the EX4200 stack, where fpc3 and fpc4 are the routing engines:

And for the EX4300 stack, where fpc1 and fpc2 are the routing engines: