Closed: Ch3LL closed this issue 6 years ago.
Could you re-test this against this please? https://github.com/saltstack/salt/pull/34456
I was actually testing based on that commit already, as shown in my versions report: edd6b95. I also tested on 2016.3.1 and saw this behavior as well, though.
I get the same error in the master's log when I run `salt-run manage.list_state` or `salt-run manage.not_alived`.
2016-08-07 05:46:35,784 [salt.transport.ipc][ERROR ][5521] Exception occurred while handling stream: [Errno 0] Success
Salt Version:
Salt: 2016.3.1
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 2.5.3
gitdb: 0.6.4
gitpython: 1.0.2
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: 1.4.4
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: Not Installed
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.5 (default, Nov 20 2015, 02:00:19)
python-gnupg: Not Installed
PyYAML: 3.10
PyZMQ: 14.7.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.0.5
System Versions:
dist: centos 7.2.1511 Core
machine: x86_64
release: 3.10.0-327.10.1.el7.x86_64
system: Linux
version: CentOS Linux 7.2.1511 Core
Hi, I am seeing a similar exception message when I run a custom runner that sends an email via the Salt smtp module when a state fails or makes changes, but I can't pinpoint the source of these exception messages:
Aug 10 16:56:15 gru-mc salt-master: [ERROR ] Exception occurred while handling stream: [Errno 0] Success
Aug 10 16:56:15 gru-mc salt-master: [ERROR ] Exception occurred while handling stream: [Errno 0] Success
Aug 10 16:56:15 gru-mc salt-master: [ERROR ] Exception occurred while handling stream: [Errno 0] Success
Aug 10 16:56:15 gru-mc salt-master: [ERROR ] Exception occurred while handling stream: [Errno 0] Success
Aug 10 16:56:15 gru-mc salt-master: [ERROR ] Exception occurred while handling stream: [Errno 0] Success
Aug 10 16:56:15 gru-mc salt-master: [ERROR ] Exception occurred while handling stream: [Errno 0] Success
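For reference, the runner described above is along these lines. This is only a minimal sketch with hypothetical names: a real Salt runner would call `smtp.send_msg` through the loader, so the sending step is stubbed out here and just the "failed or changed" filtering logic is shown.

```python
# Hypothetical sketch of a runner that reports on failed or changed states.
# notify() would call Salt's smtp module in the real runner; here `send`
# defaults to print so the logic can run standalone.

def interesting_states(state_return):
    """Return the state results that failed or reported changes."""
    return {
        state_id: data
        for state_id, data in state_return.items()
        if not data.get("result") or data.get("changes")
    }

def notify(minion, state_return, send=print):
    """Build and 'send' a report for any failed/changed states."""
    hits = interesting_states(state_return)
    if hits:
        body = "\n".join(
            "%s: result=%s changes=%s" % (sid, d.get("result"), d.get("changes"))
            for sid, d in sorted(hits.items())
        )
        send("State report for %s:\n%s" % (minion, body))
    return hits
```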
versions report:
Salt Version:
Salt: 2016.3.1
Dependency Versions:
cffi: 0.8.6
cherrypy: 3.2.2
dateutil: 1.5
gitdb: 0.6.4
gitpython: 1.0.1
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: 0.21.0
libnacl: Not Installed
M2Crypto: 0.21.1
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.7
mysql-python: 1.2.3
pycparser: 2.14
pycrypto: 2.6.1
pygit2: 0.21.4
Python: 2.7.5 (default, Oct 11 2015, 17:47:16)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 14.7.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.0.5
System Versions:
dist: redhat 7.2 Maipo
machine: x86_64
release: 3.10.0-327.22.2.el7.x86_64
system: Linux
version: Red Hat Enterprise Linux Server 7.2 Maipo
I am also receiving this same error from Salt commands issued from the API. I'm using Salt 2016.3.2.
Same here, also using the API with 2016.3.2. We recently upgraded from 2015.8.10 and it was not happening on that version.
A little more information. The error isn't specific to the API. I can send an event on the command line for a reactor to handle and that also produces the error.
+1 I am seeing this as well. I am calling runners from ext_pillars and getting lines and lines of this error.
Minion:
Salt Version:
Salt: 2016.11.1
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: Not Installed
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.8
mysql-python: 1.2.3
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.5 (default, Nov 20 2015, 02:00:19)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 14.3.1
RAET: Not Installed
smmap: Not Installed
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 3.2.5
System Versions:
dist: centos 7.2.1511 Core
machine: x86_64
release: 3.10.0-327.el7.x86_64
system: Linux
version: CentOS Linux 7.2.1511 Core
Master:
Salt Version:
Salt: 2016.11.1
Dependency Versions:
cffi: 0.8.6
cherrypy: Not Installed
dateutil: 2.5.3
gitdb: Not Installed
gitpython: Not Installed
ioflo: Not Installed
Jinja2: 2.8
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.8
mysql-python: 1.2.3
pycparser: 2.14
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.5 (default, Nov 20 2015, 02:00:19)
python-gnupg: Not Installed
PyYAML: 3.10
PyZMQ: 14.3.1
RAET: Not Installed
smmap: Not Installed
timelib: 0.2.4
Tornado: 4.2.1
ZMQ: 3.2.5
System Versions:
dist: centos 7.2.1511 Core
machine: x86_64
release: 3.10.0-327.el7.x86_64
system: Linux
version: CentOS Linux 7.2.1511 Core
Any updates on the cause of this, or how to work around it?
Still seeing these messages after updating to 2016.11.3
Salt Version:
Salt: 2016.11.3
Dependency Versions:
cffi: 1.6.0
cherrypy: 3.2.2
dateutil: 1.5
gitdb: 0.6.4
gitpython: 1.0.1
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: 0.21.1
Mako: Not Installed
msgpack-pure: Not Installed
msgpack-python: 0.4.8
mysql-python: 1.2.5
pycparser: 2.14
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.5 (default, Aug 2 2016, 04:20:16)
python-gnupg: Not Installed
PyYAML: 3.11
PyZMQ: 15.3.0
RAET: Not Installed
smmap: 0.9.0
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.1.4
System Versions:
dist: redhat 7.3 Maipo
machine: x86_64
release: 3.10.0-514.el7.x86_64
system: Linux
version: Red Hat Enterprise Linux Server 7.3 Maipo
Same here, I get the errors too. I am not using the API either but I am also sending events via scripts to the event bus for the reactor.
Minion logs get so huge that they take up the entire disk and the service goes down.
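As a stopgap while the root cause is open, it can help to quantify how fast these lines are filling the log before rotating it. A small sketch (the log path in the comment is the usual default, but an assumption here):

```python
# Count occurrences of the IPC error line in a Salt log.
# /var/log/salt/minion is the common default path, assumed here.
ERROR_MARKER = "Exception occurred while handling stream: [Errno 0] Success"

def count_ipc_errors(lines):
    """Count log lines containing the known IPC error message."""
    return sum(1 for line in lines if ERROR_MARKER in line)

# Example usage against a log file:
# with open("/var/log/salt/minion") as log:
#     print(count_ipc_errors(log))
```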
I'm also seeing this error message a lot. For me it's happening at times when I'm not running anything, so it seems like it's part of the scheduled tasks in the minion.
Got tons of this
[salt.transport.ipc][ERROR ][25540] Exception occurred while handling stream: [Errno 0] Success
Any news on this? I'm getting this in my salt-master log:
2017-05-18 14:15:50,470 [salt.transport.ipc][ERROR ][30514] Exception occurred while handling stream: [Errno 0] Success
I think I have managed to narrow it down to the network_info beacon (from /etc/salt/minion.d/beacons.conf):
beacons:
  network_info:
    - ens3:
      - interval: 10
      - type: greater
      - bytes_sent: 0
      - bytes_recv: 0
I'm running Salt packages at version 2016.11.4+ds-1 on Ubuntu 16.04.2 LTS. It seems those events don't really make it through to the salt-master event queue, because I see no mention of them when running:
salt-run state.event pretty=True
On a side note, I think getting these beacons to run every X seconds is a bit finicky. I want them to run so I can relay the data to InfluxDB and render it with Grafana.
Let me know if there is any other info that might be helpful, because this is driving me nuts.
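For what it's worth, the relay step mentioned above mostly comes down to turning beacon event data into InfluxDB line protocol. A hypothetical sketch of just the formatting (field names mirror the network_info beacon; the actual HTTP write to InfluxDB is not shown):

```python
def to_line_protocol(measurement, tags, fields):
    """Format one point in InfluxDB line protocol (no timestamp).

    Tags and fields are sorted for a deterministic output; values are
    assumed to need no escaping in this simple sketch.
    """
    tag_str = ",".join("%s=%s" % (k, v) for k, v in sorted(tags.items()))
    field_str = ",".join("%s=%s" % (k, v) for k, v in sorted(fields.items()))
    return "%s,%s %s" % (measurement, tag_str, field_str)

# Example: a network_info beacon reading for interface ens3.
line = to_line_protocol(
    "network_info",
    {"host": "gru-mc", "iface": "ens3"},
    {"bytes_recv": 123, "bytes_sent": 456},
)
```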
Finally managed to make it work. I have configured my beacons like this (using the salt-minion from Ubuntu 16.04, package version "2016.11.4+ds-1"):
# vim: expandtab ts=2 sw=2 softtabstop=2
beacons:
  load:
    interval: 10
    1m:
      - 10.0
      - 100000.0
    5m:
      - 10.0
      - 100000.0
    15m:
      - 10.0
      - 100000.0
  memusage:
    - percent: 0%
    - interval: 10
  network_info:
    interval: 10
    ens3:
      type: greater
      bytes_sent: 0
      bytes_recv: 0
      packets_sent: 0
      packets_recv: 0
      errin: 0
      errout: 0
      dropin: 0
      dropout: 0
  diskusage:
    - interval: 10
    - /: 0%
Pay no attention to most of the values. I want it to report every 10 seconds, and this seems to work...
Sorry about all the noise... It's still showing up in the log, but at least with this setup everything seems to work nonetheless. I'm back at square one with the debugging, so I give up.
I'm looking into this issue and beginning to suspect the exception is a false positive. For anyone seeing this one: can you confirm whether events are going missing, or whether something is generally not working when it should be? I ran the quick test that @Ch3LL mentioned at the start of the issue, grabbed the event IDs, and confirmed that they were making it to the event bus. Thanks!
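To make "events going missing" concrete, one way to check is to fire uniquely tagged test events and diff what was sent against what a listener (for example `salt-run state.event`) observed. A trivial sketch of just the bookkeeping, with hypothetical tag names:

```python
def missing_events(sent_tags, observed_tags):
    """Return the sent event tags that never showed up on the event bus."""
    return sorted(set(sent_tags) - set(observed_tags))
```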
Hey, just came across this thread. We are seeing events go missing when these log messages occur, and we get many of these messages per day. We're not using salt-api, but we do send events to be picked up by a reactor.
Salt Version:
Salt: 2016.3.5
Dependency Versions:
cffi: Not Installed
cherrypy: Not Installed
dateutil: 1.5
gitdb: 0.5.4
gitpython: 0.3.2 RC1
ioflo: Not Installed
Jinja2: 2.7.2
libgit2: Not Installed
libnacl: Not Installed
M2Crypto: Not Installed
Mako: 0.9.1
msgpack-pure: Not Installed
msgpack-python: 0.4.6
mysql-python: 1.2.3
pycparser: Not Installed
pycrypto: 2.6.1
pygit2: Not Installed
Python: 2.7.6 (default, Oct 26 2016, 20:30:19)
python-gnupg: Not Installed
PyYAML: 3.10
PyZMQ: 14.0.1
RAET: Not Installed
smmap: 0.8.2
timelib: Not Installed
Tornado: 4.2.1
ZMQ: 4.0.4
System Versions:
dist: Ubuntu 14.04 trusty
machine: x86_64
release: 4.4.0-78-generic
system: Linux
version: Ubuntu 14.04 trusty
@b3hni4
Also, in our case, it seems to have escalated over the last few days. We had never noticed this before, but since last week we've lost a number of events (and we use events to trigger reactors and runners for all sorts of internal stuff).
Actually, this specific machine had a number of packages updated on 2017-05-17, and we started noticing the issue on 2017-05-18.
Packages upgraded were:
Start-Date: 2017-05-17 21:00:20
Upgrade: bash:amd64 (4.3-7ubuntu1.5, 4.3-7ubuntu1.7)
End-Date: 2017-05-17 21:00:22
Start-Date: 2017-05-17 21:00:26
Upgrade: hv-kvp-daemon-init:amd64 (3.13.0.117.127, 3.13.0.119.129)
End-Date: 2017-05-17 21:00:27
Start-Date: 2017-05-17 21:00:34
Upgrade: linux-tools-common:amd64 (3.13.0-117.164, 3.13.0-119.166)
End-Date: 2017-05-17 21:00:35
Start-Date: 2017-05-17 21:00:42
Upgrade: git-man:amd64 (1.9.1-1ubuntu0.4, 1.9.1-1ubuntu0.5)
End-Date: 2017-05-17 21:00:45
Start-Date: 2017-05-17 21:00:53
Upgrade: linux-cloud-tools-common:amd64 (3.13.0-117.164, 3.13.0-119.166)
End-Date: 2017-05-17 21:00:54
Start-Date: 2017-05-17 21:01:07
Upgrade: git:amd64 (1.9.1-1ubuntu0.4, 1.9.1-1ubuntu0.5)
End-Date: 2017-05-17 21:01:08
Start-Date: 2017-05-17 21:01:19
Upgrade: openjdk-7-jre-headless:amd64 (7u121-2.6.8-1ubuntu0.14.04.3, 7u131-2.6.9-0ubuntu0.14.04.1)
End-Date: 2017-05-17 21:01:24
Start-Date: 2017-05-17 21:01:37
Upgrade: passwd:amd64 (4.1.5.1-1ubuntu9.4, 4.1.5.1-1ubuntu9.5)
End-Date: 2017-05-17 21:01:40
Start-Date: 2017-05-17 21:01:52
Upgrade: linux-libc-dev:amd64 (3.13.0-117.164, 3.13.0-119.166)
End-Date: 2017-05-17 21:01:53
Start-Date: 2017-05-17 21:02:04
Upgrade: login:amd64 (4.1.5.1-1ubuntu9.4, 4.1.5.1-1ubuntu9.5), linux-libc-dev:amd64 (3.13.0-117.164, 3.13.0-119.166)
End-Date: 2017-05-17 21:02:07
Start-Date: 2017-05-17 21:02:17
Install: linux-cloud-tools-3.13.0-119:amd64 (3.13.0-119.166, automatic), linux-cloud-tools-3.13.0-119-generic:amd64 (3.13.0-119.166, automatic)
Upgrade: linux-cloud-tools-virtual:amd64 (3.13.0.117.127, 3.13.0.119.129)
End-Date: 2017-05-17 21:02:18
Start-Date: 2017-05-17 21:02:26
Install: linux-tools-3.13.0-119-generic:amd64 (3.13.0-119.166, automatic), linux-tools-3.13.0-119:amd64 (3.13.0-119.166, automatic)
Upgrade: linux-tools-virtual:amd64 (3.13.0.117.127, 3.13.0.119.129)
End-Date: 2017-05-17 21:02:27
Start-Date: 2017-05-17 21:02:34
Install: linux-image-4.4.0-78-generic:amd64 (4.4.0-78.99~14.04.2, automatic), linux-image-extra-4.4.0-78-generic:amd64 (4.4.0-78.99~14.04.2, automatic), linux-headers-4.4.0-78:amd64 (4.4.0-78.99~14.04.2, automatic), linux-headers-4.4.0-78-generic:amd64 (4.4.0-78.99~14.04.2, automatic)
Upgrade: linux-generic-lts-xenial:amd64 (4.4.0.75.62, 4.4.0.78.63), linux-headers-generic-lts-xenial:amd64 (4.4.0.75.62, 4.4.0.78.63), linux-image-generic-lts-xenial:amd64 (4.4.0.75.62, 4.4.0.78.63)
End-Date: 2017-05-17 21:03:45
+@b3hni4
The fix for this has been merged. Going to close this one out. If the problem persists, please feel free to reopen the issue. Thanks!
Description of Issue/Question
When using salt-api to call a runner I am seeing the following error:
Setup
Steps to Reproduce Issue
curl -sS localhost:8000/run -H 'Accept: application/x-yaml' -d client='runner' -d fun='jobs.active' -d username='saltdev' -d password='saltdev' -d eauth='pam'
I also saw this error more frequently when using this custom runner:
Also tested 2016.3.1