plone / plone.recipe.codeanalysis

Provides static code analysis for Buildout-based Python projects, including flake8, JSHint, CSS Lint, and other code checks
https://pypi.org/project/plone.recipe.codeanalysis/

IOError: [Errno 11] Resource temporarily unavailable #196

Open idgserpro opened 7 years ago

idgserpro commented 7 years ago

I'm trying to use plone.recipe.codeanalysis with csslint. csslint produces a lot of output (about 2300 lines). When I run the code analysis, I get this error:

/opt/programas/jenkins/workspace/SDDUX/SDDS3/serpro.portalsProcess Process-1:
Traceback (most recent call last):
  File "/usr/local/lib/python2.7/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/usr/local/lib/python2.7/multiprocessing/process.py", line 114, in run
    self._target(*self._args, **self._kwargs)
  File "/opt/programas/python-cache/eggs/plone.recipe.codeanalysis-2.2-py2.7.egg/plone/recipe/codeanalysis/__init__.py", line 219, in taskrunner
    if not check.run():
  File "/opt/programas/python-cache/eggs/plone.recipe.codeanalysis-2.2-py2.7.egg/plone/recipe/codeanalysis/analyser.py", line 238, in run
    return self.parse_output(output_file, process.returncode)
  File "/opt/programas/python-cache/eggs/plone.recipe.codeanalysis-2.2-py2.7.egg/plone/recipe/codeanalysis/csslint.py", line 42, in parse_output
    return super(CSSLint, self).parse_output(output_file, return_code)
  File "/opt/programas/python-cache/eggs/plone.recipe.codeanalysis-2.2-py2.7.egg/plone/recipe/codeanalysis/analyser.py", line 206, in parse_output
    self.process_output(output_file.read())
  File "/opt/programas/python-cache/eggs/plone.recipe.codeanalysis-2.2-py2.7.egg/plone/recipe/codeanalysis/analyser.py", line 66, in log
    print(msg)
IOError: [Errno 11] Resource temporarily unavailable

I am using Python 2.7.9 on Red Hat 6.7.

My settings:

[code-analysis]
recipe = plone.recipe.codeanalysis[recommended]
directory = ${buildout:directory}/src/serpro
csslint = True
csslint-bin = bin/csslint
jshint = True
jshint-bin = bin/jshint
flake8-exclude = bootstrap.py,bootstrap-buildout.py,docs,*.egg.,omelette
flake8-max-complexity = 15
pre-commit-hook = False
multiprocessing = True

.csslintrc:

--format=compact
--quiet
--exclude-list= ...
hvelarde commented 7 years ago

please update your .csslintrc file to this and test again:

--format=compact
--quiet
--ignore=adjoining-classes,floats,font-faces,font-sizes,ids,important,qualified-headings,unique-headings
idgserpro commented 7 years ago

@hvelarde I stopped running csslint and ran only jshint. The output was 260 lines and the error didn't occur. One thing I forgot to mention is that I'm running this in Jenkins; on my local machine the error does not occur. It may be some incompatibility with the way Jenkins handles its processes. Any suggestions?

do3cc commented 7 years ago

A similar problem happened here: https://bugs.chromium.org/p/gerrit/issues/detail?id=467. If it is feasible, I'd try to patch /opt/programas/python-cache/eggs/plone.recipe.codeanalysis-2.2-py2.7.egg/plone/recipe/codeanalysis/analyser.py to split up msg into smaller batches and see if printing the batches helps.
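For reference, a rough sketch of the kind of batching I mean (not the actual analyser.py code; the chunk size and the log_in_batches name are made up for illustration):

import sys

CHUNK_SIZE = 4096  # arbitrary size, tune as needed

def log_in_batches(msg, chunk_size=CHUNK_SIZE):
    # Write the message in smaller pieces instead of one big print() call.
    for start in range(0, len(msg), chunk_size):
        sys.stdout.write(msg[start:start + chunk_size])
        sys.stdout.flush()
    sys.stdout.write('\n')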

idgserpro commented 7 years ago

@do3cc what I think is occurring is that one process is trying to print a file that another process is still writing. Splitting will not work.
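If concurrent prints really were the cause, one way to test that hypothesis would be to serialize them with a shared lock, roughly like this (a hypothetical sketch, not the recipe's code):

from multiprocessing import Lock

print_lock = Lock()  # would need to be created before and shared with every checker process

def locked_print(msg):
    # Only one process at a time writes to stdout.
    with print_lock:
        print(msg)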

agb80 commented 7 years ago

Same problem here using csslint and jshint when the output is very big. In my case I'm using GitLab.

hvelarde commented 7 years ago

@idgserpro ask @tisto for advice on Jenkins configuration.

gforcada commented 7 years ago

In general, if the problem is that the output is too big, then either run the checks once manually and clean up the errors, or make code-analysis skip the files that throw lots of errors.

The point of p.r.codeanalysis is that, if the package is clean, you see a new code-analysis error right away as soon as you introduce it; if you already have 1000+ errors and your next commit introduces 2 more, you will never notice them, and the problem gets worse and worse over time.
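For example, for csslint the directories that throw lots of errors can be excluded in .csslintrc with the option already shown above (the path here is just a placeholder):

--format=compact
--quiet
--exclude-list=src/serpro/static/vendor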

jim90272 commented 5 years ago

I have been able to reproduce the problem in Python by doing a non-blocking read on an empty pipe. So maybe the problem is that there is no data, rather than too much data. One way I was able to make the problem go away was to add a "try" clause to the Python program, e.g.

import errno
import time

try:
    result = x.stdout.read()  # x is the subprocess object whose stdout we read, as in the original snippet
except IOError as e:          # catch only EAGAIN instead of a bare except, so real errors still surface
    if e.errno != errno.EAGAIN:
        raise
    time.sleep(1)             # nothing ready yet; wait and fall back to an empty result
    result = ''
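A minimal reproduction along those lines (assuming an empty pipe whose read end has been switched to non-blocking mode):

import errno
import fcntl
import os

# Create a pipe and put the read end into non-blocking mode.
read_fd, write_fd = os.pipe()
flags = fcntl.fcntl(read_fd, fcntl.F_GETFL)
fcntl.fcntl(read_fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

try:
    os.read(read_fd, 1024)  # nothing has been written yet
except (IOError, OSError) as e:
    # A non-blocking read with no data raises EAGAIN (errno 11);
    # on Python 3 this surfaces as BlockingIOError, a subclass of OSError.
    print('read failed with errno %d (%s)' % (e.errno, errno.errorcode[e.errno]))
finally:
    os.close(read_fd)
    os.close(write_fd)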