pyinvoke / invoke

Pythonic task management & command execution.
http://pyinvoke.org
BSD 2-Clause "Simplified" License

Command output not displayed in Windows #201

Closed - syu-id closed this issue 9 years ago

syu-id commented 9 years ago

In Windows, command output is not displayed in terminal, even with the simple task below.

from invoke import run, task

@task
def test():
    """output test"""
    run('echo hello invoke')

The --echo command line option also doesn't work in Windows. Things are fine in Linux though.

Windows 7 32bit, Python 2.7.9, Invoke 0.9.0

pfmoore commented 9 years ago

This is because the monkeypatched subprocess.Popen doesn't support the hide argument on Windows. Making it do so probably requires extra lower-level code, similar to that already implemented for Unix.

bitprophet commented 9 years ago

Yup, what @pfmoore said. I am open to patches, I just can't dev them myself due to Windows aversion :)

syu-id commented 9 years ago

Thank you guys. It's a pity, because echoing the output is such a basic feature. Hope someone will solve this.

pfmoore commented 9 years ago

Honestly, monkeypatching subprocess.Popen seems like a horrible approach. @bitprophet what exactly is that monkeypatching intended to achieve? It may be that there's a 3rd party module that does what you need in a cross-platform way. I'm certainly not willing to dive into the guts of Popen myself - there be dragons :-)

pfmoore commented 9 years ago

@bitprophet AFAICT, the monkeypatching exists solely so that runner.run() can support the hide parameter. Would it not be less fragile to just do the select loop directly in runner.run rather than using communicate? That way you wouldn't need the monkeypatching (which, BTW, is duplicating code that has changed between Python 2.7 and 3.4 :-(). It'd need to be Unix only, as select doesn't work on stdout and stderr under Windows, but that's fine - just use the communicate approach for now on Windows; it's no worse than the current situation.

If you did it that way, I could write a PR that used threads to support displaying the output and the hide option for Windows.

Let me know if you want to go down that route. I'm not particularly keen on writing the PR if it's not going to be accepted...

(It might make sense to factor the code out to a function in platform.py, that took 2 file objects and multiplexed the output to stdout/stderr, capturing it as it went past. That would keep the ugly platform specific stuff in one place.)
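
Roughly the shape I have in mind - a select-based, Unix-only sketch; the multiplex name, signature and hide handling are purely illustrative, not existing Invoke code:

import os
import select
import sys

def multiplex(out_pipe, err_pipe, hide=()):
    # Mirror the child's stdout/stderr pipes onto our own streams while
    # capturing everything that goes past.
    captured = {out_pipe: [], err_pipe: []}
    dests = {out_pipe: sys.stdout, err_pipe: sys.stderr}
    shown = {out_pipe: 'out' not in hide, err_pipe: 'err' not in hide}
    open_pipes = [out_pipe, err_pipe]
    while open_pipes:
        readable, _, _ = select.select(open_pipes, [], [])
        for pipe in readable:
            data = os.read(pipe.fileno(), 1000)
            if not data:  # EOF on this pipe
                open_pipes.remove(pipe)
                continue
            captured[pipe].append(data)
            if shown[pipe]:
                os.write(dests[pipe].fileno(), data)
    return b''.join(captured[out_pipe]), b''.join(captured[err_pipe])

# e.g. out, err = multiplex(proc.stdout, proc.stderr, hide=('err',))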

pfmoore commented 9 years ago

Sigh. I really can't work out what the monkeypatching is doing, it seems to be also making communicate return strings even though universal_newlines isn't set.

I'd love to fix this, but it feels like the existing code is so Windows-hostile that even trying is going to end up in a world of pain :-( So I'm going to leave it, I'm afraid. Ping me if you ever decide to remove the monkeypatching, and I'll take another look.

pfmoore commented 9 years ago

@bitprophet The following implementation of Local.run will work for Windows as well as Unix, and means that the subprocess monkeypatching can be removed. If you want to avoid using threads on Unix, you could either code a select-based approach for Unix and use this for Windows, or just use this for Windows and leave the existing monkeypatching in place for Unix.

Hope this is of use.

from subprocess import Popen, PIPE
import os
import sys
import threading

def run(self, command, warn, hide):
    process = Popen(
        command,
        shell=True,
        stdout=PIPE,
        stderr=PIPE,
    )

    def display(src, dst, cap):
        # Pump the child's pipe to our own stream in chunks, capturing a copy as we go.
        while True:
            data = os.read(src.fileno(), 1000)
            if not data: break
            os.write(dst.fileno(), data)
            cap.append(data)

    stdout = []
    stderr = []

    threads = []
    if 'out' not in hide:
        t = threading.Thread(target=display, args=(process.stdout, sys.stdout, stdout))
        threads.append(t)
        t.start()
    if 'err' not in hide:
        t = threading.Thread(target=display, args=(process.stderr, sys.stderr, stderr))
        threads.append(t)
        t.start()

    process.wait()
    for t in threads:
        t.join()

    stdout = b''.join(stdout).decode('utf-8', 'replace')
    stderr = b''.join(stderr).decode('utf-8', 'replace')
    return stdout, stderr, process.returncode, None

I've decoded the bytes objects to strings using UTF-8 in the same way the monkeypatching does. I'm not sure that using UTF-8 is right, though - what if the user has a different locale set? It should probably use sys.getfilesystemencoding(). UTF-8 is typically wrong for Windows.

Sorry I've not raised a PR for this, but I don't want to guess which approach you would prefer.

bitprophet commented 9 years ago

Thanks a lot for digging, @pfmoore - I don't remember offhand how necessary the monkeypatching truly is, just that at the time I started poking at things it seemed like the least awful route (which, given how terrible monkeypatching is, implies it seemed like the ONLY route).

I'll definitely look at your suggestions soon to see if past-me was being a dummy, and I also agree that shuffling things around for xplatform compatibility is a good idea, orthogonal to the monkeypatch issue.

bitprophet commented 9 years ago

I poked around and can't find a record of why the subclassing-Popen route seemed superior to doing things manually. Could've just been the first thing I tried that worked :( "Get it working basically, then iterate" has been my overall MO here - which is also why things such as naive approaches to text encoding are present.

If I slot in your above snippet, many of the tests relating to output hiding & returning fail. I'll dig into it and try to figure out why - it might highlight why the monkeypatch approach was a thing.

pfmoore commented 9 years ago

@bitprophet It's quite possible I got something wrong around the hide options. I didn't really put much effort into that side of it, mostly just made sure that output was displayed and captured at the same time. That's the difference between dumping some sample code here and putting together a proper PR :-)

If I get some spare time (no promises!) I might play with it a bit more. Don't hold your breath, though - I might need to build a Unix box/VM first, though, I don't know if the tests even work on Windows, and I don't want to end up getting distracted trying to make them :-)

bitprophet commented 9 years ago

Oh sure, wasn't blaming you, the tl;dr here is the high level "why did we monkeypatch, is it possible a less dirty approach still works on Unix (and then also makes it easier to work on Windows)" stuff. At the very least I want to come out of this knowing the reasons for the current impl - that'll still be an improvement.

Anyway I'm in the middle of digging so I'll record my findings here & let you know when I have something I want your feedback on :)

Re: tests, until Travis-CI supports Windows (sounds like 'eventually'?) there's no real point mucking with them sadly. Maybe run them to see what that reveals at a high level, but I wouldn't waste time making them all pass if there's nontrivial wacky stuff going on.

bitprophet commented 9 years ago

@pfmoore I've found one wrinkle (beyond functionality updates to fix hide+capture) - was there a particular reason you used os.write(file_obj.fileno(), ...) instead of file_obj.write(...)? This breaks on file-like-but-not-really-file objects, including the ones used for testing purposes.

On my end, when I simply swap those calls with their method counterparts, things still appear to work fine (and the tests then pass). Obviously my worry is that there's a Windows-level distinction I'm not seeing.

Here's a link to highlight the two basic changes I needed to make for things to work on my end: https://github.com/pyinvoke/invoke/compare/7f030f5e1c9ccb7efb99cb3088706ebccbb27de8...54a66af1c2daea7646b6f9c4eff822ccc30c34fe

If you grab HEAD of that branch (https://github.com/pyinvoke/invoke/tree/xplatform-friendly-run-201) I'm curious to see if it works for you.

pfmoore commented 9 years ago

The reason I used os.write was simply to match os.read, which was needed because IIRC anything higher level on the read side doesn't grab what's available without blocking. And that's important if you want the output to appear on screen as soon as it's available. But if you're happy to read using the low-level function and write using a higher-level one, that should be fine.

One concern I'd have with mixing the levels is that os.read always returns bytes, whereas what file_obj.write() takes depends on whether file_obj was opened in binary mode or not. I've seen too much code that seems to work on Python 2 but breaks on Python 3 to like taking that risk :-) I don't know if your primary platform is Python 2 or Python 3 - I use Python 3 almost exclusively (Windows + Python 3, how minority can I get? :-)) - so bytes/string confusion is something I'm very conscious of.
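
A tiny self-contained illustration of the trap (Python 3; just an example, not Invoke code):

import os
import subprocess
import sys

proc = subprocess.Popen("echo hello", shell=True, stdout=subprocess.PIPE)
data = os.read(proc.stdout.fileno(), 1000)   # os.read always hands back bytes
# sys.stdout.write(data)                     # Python 3: TypeError - text streams want str
sys.stdout.buffer.write(data)                # fine: the underlying binary buffer takes bytes
proc.wait()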

Re tests, are you aware of Appveyor? It's similar to Travis, but with Windows builders.

I'll try to get some time to try out that branch over the weekend.

bitprophet commented 9 years ago

Ah yea, I haven't run this branch under Python 3 yet (my default is still 2.6, as it's the baseline of what I support for my various projects). I was meaning to (some of monkey.py does have Py3-related code). Will do that soon.

The reading isn't as important for me, either, the main issue is writing - the test harness replaces sys.stdout with an object capable of carbon-copying input (similar to what run itself needs to do, actually) so when it's not written to correctly, the tests asserting stuff like "I expect that it wrote blah to stdout" fail. So if we find issues under Python 3 I'd be okay to switch reading back to the os call.
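
Something shaped roughly like this (a simplified illustration, not the actual test harness class):

import sys

class CarbonCopy(object):
    # Passes writes through to a real stream while keeping a copy for assertions.
    # Note there's no fileno(), which is exactly why os.write(obj.fileno(), ...)
    # blows up against it.
    def __init__(self, stream):
        self.stream = stream
        self.captured = []

    def write(self, data):
        self.captured.append(data)
        self.stream.write(data)

    def flush(self):
        self.stream.flush()

# The harness swaps something like this in for the duration of a test:
# sys.stdout = CarbonCopy(sys.__stdout__)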

pfmoore commented 9 years ago

@bitprophet That branch doesn't work for me. I tried the following files:

tasks.py

from invoke import run, task

@task
def build():
    run("python child.py")

child.py

import time

for i in range(10):
    print("Item", i, flush=True)
    time.sleep(2)

Running inv build does not produce any output immediately - as I mentioned above, it looks like the read is blocking until it gets the requested amount of data (in spite of the child flushing the output).

And when it does come back with output, I get

>>>inv build
Exception in thread Thread-1:
Traceback (most recent call last):
  File "C:\Apps\Python34\Lib\threading.py", line 921, in _bootstrap_inner
    self.run()
  File "C:\Apps\Python34\Lib\threading.py", line 869, in run
    self._target(*self._args, **self._kwargs)
  File "c:\work\projects\invoke\invoke\runner.py", line 70, in display
    dst.write(data)
TypeError: must be str, not bytes

which is the issue of opening in binary mode or not.

This is Python 3.4 on Windows 7, 64-bit.

Switching to os.read/os.write works perfectly. Switching to os.read/dst.write fails with "input must be str, not bytes". Hacking it with dst.write(data.decode(dst.encoding)) works, but I guess your tests will be just as likely to fail because your dummy streams don't have an encoding attribute. I suppose data.decode(getattr(dst, 'encoding', 'ascii'), 'replace') is an option, but it's getting pretty ugly at that point...

bitprophet commented 9 years ago

Yea, I clearly wasn't paying close enough attention; I can reproduce similar behavior here on other real-world tasks under Python 2 (though interestingly, the one I had been using to test originally - invoke's own test command - seemed to work OK).

I'll poke some more to see if I can get things working cleanly (& test under Python 3 as well). Thanks for the suggestions.

bitprophet commented 9 years ago

Back on this. Confirmed that I can recreate the delayed read/print using your example, under both Python 2 and 3, and that I get the TypeError on 3. Gonna review the discussion & code and see where I can get to.

bitprophet commented 9 years ago

Attempting to summarize (mostly for myself), since there's actually a lot of stuff going on here:

That brings us back to Paul's last comment. When I compare the new code to the old I notice that we (unsurprisingly) already wrestled with this general concern and those tweaks (mostly just accessing the stdout/stderr objects' buffer attribute under Python 3) aren't present in the new code.
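
The tweak is roughly this sort of thing (an illustration - the helper is made up, but the "write bytes to .buffer under Python 3" dance matches what the old code did):

import sys

def binary_stream(stream):
    # Python 3's sys.stdout/stderr are text streams; raw bytes have to go to the
    # underlying .buffer. Python 2's file objects (and our pipes) take bytes directly.
    return getattr(stream, 'buffer', stream)

binary_stream(sys.stdout).write(b'raw bytes\n')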

Once I add that in, the encoding issue goes away, but we're still left with Python 3 not buffering correctly, even when using os.read, where Python 2 functions as expected. Dropping the read byte count to 1 makes no difference, either.

HOWEVER, this Python 2/3 buffering split behavior is already present on the master branch - meaning we're no worse off than before, and the code should ostensibly be Windows compatible now (and a lot less ugly).

The buffering problem is far from ideal and I don't want Python 3 to be another 2nd-class citizen alongside Windows, but at least it lets us spin that off into an orthogonal issue about "why exactly are file.read/write blocking under Py3 and is there another non-fileno-requiring method that doesn't block?". I should see if I have older notes about this, because it may well have already been investigated & punted on.

Will doublecheck, tidy up and push in a bit.

bitprophet commented 9 years ago

@pfmoore I've pushed those couple of changes to the branch (still xplatform-friendly-run-201) and would love a retest on your end.

Again, under Python 3, it still buffers too much (feels like file objects have issues with this on both ends on one or both interpreters) but if things otherwise work for you it'll still be a good milestone at least.

Also curious if you've the ability to test this on Python 2 on Windows, since it'd be nice to know for sure that it works there for when we publish the end result.

I will continue to poke into the buffering issue because ugh it's annoying. Really don't want to do some horrible thing like a test-specific execution branch (e.g. only use file.write() when under test, use os.write() otherwise).

bitprophet commented 9 years ago

Aha, I was right about it coming up prior: this (Python 3 subprocesses - but apparently also other things, even bash scripts; I just tested and they have the same issue - incorrectly buffering their output) is a known issue: #178.

If I change our test script above to do run("python child.py", pty=True) then things print correctly under both interpreters.

Still extremely annoying and it clearly is something easy to forget about / not know about so I wonder if there's not a workaround besides switching pty back to default True. EDIT: and that's actually not gonna help because the PTY solution (pexpect) doesn't even function on Windows.

bitprophet commented 9 years ago

See also #43 whose comments have a lot of the background regarding the work we did for buffering when adding Python 3 support.

bitprophet commented 9 years ago

More digging:

So this backs up the other notes about Python 3 being more aggressive about buffering during writes. The implication before was that this meant Python 3 subprocesses buffered too hard, but that's silly: it affects any writes - by the subprocess or by us - so the problem is our own Python 3 process's writes. Which is exactly what I found above.


My hope is that we can just call stream.flush() explicitly every time we write, and rely on the read-side buffering to make things non-awful in high throughput situations, including in Python 2. EDIT: this appears to work fine.
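
Concretely, the write side would just become something like this (sketch only):

def write_chunk(dst, data):
    # Push each chunk out immediately instead of letting Python 3's buffered
    # streams sit on it until they feel like flushing.
    dst.write(data)
    dst.flush()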

Alternately, we could limit this to Python 3 only, but the fewer code branches the better, I think.

Orthogonally, we could make this controllable in the rare case that a user wants the more aggressive buffering behavior from Python 3, but I think that can wait until it's proven multiple folks care.

bitprophet commented 9 years ago

@pfmoore Pushed the above change as well. Hoping there aren't any Windows-specific pitfalls with these changes (e.g. if there's a major difference in behavior or interface for BufferedWriter).

pfmoore commented 9 years ago

@bitprophet I haven't looked at the code at all, but I just did a test with the xplatform-friendly-run-201 branch, and it seems to work. (Windows 7, 64-bit, Python 3.4). Here's my test files:

slowout.py:

from time import sleep

for ch in "Hello, world!\nGoodbye, cruel world\n":
    print(ch, end='', flush=True)
    sleep(0.25)

tasks.py:

from invoke import task, run

@task
def example():
    r = run("py slowout.py")
    print("Result is", repr(r.stdout))

Running inv example displays the output character by character, and when finished the result is displayed, confirming that it's been captured as well.

Encoding is broken, though. If I add a £ sign to the string, it works fine when I run slowout.py directly (the £ sign is supported on my console code page which is cp437) but running via invoke shows:

->inv example
Hello, world!
Goodbye, cruel úworld
Result is Traceback (most recent call last):
  File "C:\Users\uk03306\AppData\Local\Temp\VE-2\Scripts\inv-script.py", line 9, in <module>
    load_entry_point('invoke==0.9.0', 'console_scripts', 'inv')()
  File "c:\work\personal\invoke\invoke\cli.py", line 352, in main
    dispatch(sys.argv)
  File "c:\work\personal\invoke\invoke\cli.py", line 345, in dispatch
    return executor.execute(*tasks)
  File "c:\work\personal\invoke\invoke\executor.py", line 98, in execute
    task=task, name=name, args=args, kwargs=kwargs, config=config
  File "c:\work\personal\invoke\invoke\executor.py", line 148, in _execute
    result = task(*args, **kwargs)
  File "c:\work\personal\invoke\invoke\tasks.py", line 111, in __call__
    result = self.body(*args, **kwargs)
  File "C:\work\scratch\tasks.py", line 6, in example
    print("Result is", repr(r.stdout))
  File "C:\Users\uk03306\AppData\Local\Temp\VE-2\lib\encodings\cp437.py", line 19, in encode
    return codecs.charmap_encode(input,self.errors,encoding_map)[0]
UnicodeEncodeError: 'charmap' codec can't encode character '\ufffd' in position 33: character maps to <undefined>

Note the ú character, which is not correct - it looks like the echoing of the stdout is not preserving the bytes (it's probably decoding and encoding with different encodings). Also, the Unicode error on the final print demonstrates that the captured output has corrupted the Unicode character.

I can set up a test for Python 2.7, but TBH I'd rather get the encoding issues sorted out under 3.x so we know they are fixed. I tend to find that working with 3.x forces me to actually understand my encoding issues, and once I do, 2.x just works without any further effort :-)

pfmoore commented 9 years ago

@bitprophet Oh doh. It's obvious what's wrong - you can't do

            if six.PY3:
                dst = dst.buffer
            while True:
                data = os.read(src.fileno(), 1000)
                if not data:
                    break
                if not hide:
                    dst.write(data)

unless dst is opened with the same encoding as is being used by src - which you don't check at all.

Also,

        stdout = b''.join(stdout).decode('utf-8', 'replace')
        stderr = b''.join(stderr).decode('utf-8', 'replace')

is completely wrong, as there's no way stdout and stderr are guaranteed to be UTF-8, even on Unix!

(If you want to test this stuff without needing to get a Windows box, I'd suggest running some tests with your locale set to something other than UTF-8. You should see basically the same as me if you set your locale to CP437, for example).

pfmoore commented 9 years ago

@bitprophet I could probably supply a patch that fixed the encoding issues but as I don't understand your issues with the testing infrastructure that's probably just a waste of time as you wouldn't be able to apply it.

BTW, in case it wasn't obvious, the existing code works perfectly when only given ASCII to deal with, so I'd be entirely comfortable with it going in as it stands, and we raise a separate issue to look at the encoding problems.

pfmoore commented 9 years ago

OK, just to clarify the maze of different encodings involved. Under Python 3, if you don't specify an encoding when opening a file, you will get os.device_encoding(fd.fileno()) if the file is a terminal (actually, it always tries that, but AIUI, device_encoding returns None for anything but a terminal). If that doesn't work, it uses locale.getpreferredencoding().

On my Windows PC (which is probably typical for many English Windows systems) the terminal encoding in a console is cp437. The locale preferred encoding is cp1252.

So when I run a child Python program its output is to a pipe and so will be in cp1252 (because stdout isn't a terminal). However, stdout in the invoke process is to the terminal and so will be expecting cp437. So copying the child's stdout to the screen unmodified will result in mojibake.
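
You can see the split for yourself with a few lines like these (the cp437/cp1252 values are just what my machine reports):

import locale
import os
import sys

print("terminal:", os.device_encoding(sys.stdout.fileno()))  # cp437 here (None if stdout isn't a terminal)
print("locale:  ", locale.getpreferredencoding(False))        # cp1252 here
print("stdout:  ", sys.stdout.encoding)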

Converting the child's (byte) output to characters using utf-8 is just plain wrong.

The pass-through of the child stdout should decode using locale.getpreferredencoding() and re-encode using sys.stdout.encoding. (With the proviso that most code typically doesn't use the locale module directly - it should really use the text IO layer.) This assumes that the child's output is intended as text, but that's a reasonable assumption given that we're displaying it for the user. It needs a bit of care to avoid splitting partial characters if you're using a multibyte encoding like utf8, but it should be possible.

And just a final note - while I'll understand if your reaction is "stupid Windows systems", this really isn't a Windows issue. It would probably be hard to provoke the same behaviour on a Unix system (where programs tend to all use the same locale and it's often UTF-8) but you could probably do it by running a child with an environment where the locale is deliberately set to something other than the parent. That might help in setting up tests for the issue, rather than relying on what information I can provide.

pfmoore commented 9 years ago

Here's some pseudocode that covers most of the corner cases:

import locale
import sys

# Assumes proc is an existing subprocess.Popen opened with stdout=PIPE.
while True:
    # This breaks if the child writes more than 1000 bytes without flushing,
    # is using a multibyte character set, and a multibyte character is split
    # *precisely* on the 1000-byte boundary.
    # There's no obvious fix for this as read1() doesn't include a means to
    # say "read everything buffered"...
    ch = proc.stdout.read1(1000)
    if not ch:
        break
    # Note the following assumes that the subprocess writes its output
    # in the locale preferred encoding. That is a reasonable assumption
    # given that we are expecting to write the output to the terminal.
    # A more robust implementation would include a method of explicitly
    # saying what encoding the child is using.
    # It may also be worth using the 'replace' error handler here to
    # continue even if the child output can't be encoded to be written
    # on stdout.
    sys.stdout.write(ch.decode(locale.getpreferredencoding(False)))
    sys.stdout.flush()

Oh, and just to be clear on the other bit:

encoding = locale.getpreferredencoding(False)
stdout = b''.join(stdout).decode(encoding, 'replace')
stderr = b''.join(stderr).decode(encoding, 'replace')

pfmoore commented 9 years ago

OK, I've made this into a PR #211 with a couple more changes. I've fixed up the issue with split characters, and I've added an encoding parameter (undocumented for now) to the run() method that lets you override the default encoding assumed for the child. I added an unused encoding parameter to run_pty as well - it would probably be good to use that rather than the hard-coded default of utf-8 but I'm not going to touch that as it's Unix-only.
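
Usage would be something like this (hypothetical call; exact behaviour as per the PR):

from invoke import run

# Declare explicitly what encoding the child writes its output in, rather than
# assuming the locale's preferred encoding.
result = run("python child.py", encoding="cp1252")
print(result.stdout)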

The advantage of it being a PR is that we can see what Travis thinks :-)

pfmoore commented 9 years ago

Looks good on Python 3. It appears that Python 2 doesn't have a "read1" method for files. I don't really have the interest to fight the complicated combination of Python 2's string model, its file APIs, and your test infrastructure's mocking of files, so I'm going to leave that to you, I'm afraid. The principle remains fine; I just don't know how to say in Python 2 what read1 says in Python 3.

By the way, if I don't want to use the invoke tasks for running the tests (because I can't, they use ptys AFAICT) how do I run the tests? It may be futile to even try to run the tests on Windows, but running them via travis every time I fix a typo is painful :-(

bitprophet commented 9 years ago

Encoding is broken, though.

this really isn't a Windows issue

I even saw the "just encode as UTF-8" at the bottom of the method this time around and was all "that's still very wrong for uncommon cases". Re: why that's there, I'll just refer back to https://github.com/pyinvoke/invoke/issues/201#issuecomment-73298928 :)

IOW, I agree we should make at least a good attempt to solve this, because it is definitely not just a Windows problem; it's just that my env & use cases are extremely simple/mainline, so that's what I get working first.

Re: my "bluh to Windows" stance, that's largely because of a) my inability to test locally and b) the fact that, of all users of my software, Windows users tend to be quite in the minority, so the effort cost is much higher. In this case I should be able to test at least some encoding/locale issues locally; no idea if there are more Windows users vs non-Windows-but-also-non-UTF-8 users tho :)

It needs a bit of care to avoid splitting partial characters if you're using a multibyte encoding like utf8

This is something I think came up in the past, at first I thought it was why the old impl was read/writing 1 byte at a time, but that doesn't make sense as it would just ensure multibyte stuff never prints correctly. Hrm.

Don't see a clear solution unless we do something kinda fugly like keeping a mapping of known multibyte encodings and ensuring our read/write chunk size is a multiple of their respective character size? Feels ripe for problems still. Wonder how similar tools/code handle this; prior art is usually good.
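
(One piece of prior art I should look at: the stdlib's incremental decoders, which hold onto a trailing partial character between chunks - a quick sketch, not the current code:)

import codecs

decoder = codecs.getincrementaldecoder('utf-8')(errors='replace')

# Chunk boundaries can fall anywhere; the decoder buffers the dangling half of a
# multibyte character until the rest shows up.
chunks = [b'\xc2', b'\xa5 and \xc2\xa3']   # yen/pound signs split across reads
text = ''.join(decoder.decode(chunk) for chunk in chunks)
text += decoder.decode(b'', final=True)
print(text)   # ¥ and £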

stuff about non-ASCII, non-UTF-8 encodings/locales

I'll need to see if I can recreate this on my end using locale switching as you recommend; simply dropping special characters like ¥ or £ into the output isn't enough to trigger the issue for me.

stuff about mismatching child/parent encodings/locales

Huh, that's a wrinkle I hadn't even considered but which makes sense in retrospect. Thanks a lot for deconstructing and PRing all this!

By the way, if I don't want to use the invoke tasks for running the tests (because I can't, they use ptys AFAICT) how do I run the tests?

Just run the spec command (no args) :) the Invoke task is just extra abstraction for things like selecting a different runner, selecting a specific submodule of the test suite, etc.

bitprophet commented 9 years ago

pfmoore commented 9 years ago

Never mind, I was being dense - there's no reason to use read1 over os.read, it was the write side of things that couldn't be low-level from what you said before. So I've updated my PR #211, let's see what Travis thinks.

pfmoore commented 9 years ago

... yep, all tests green with that change :-)

pfmoore commented 9 years ago

For testing on Mac, I was thinking of making the subprocess a script that sets the locale variable (LANG?) and then runs a Python program. That makes the child process and the parent invoke process have differing locales. But I don't know if that would work the same on Unix as on Windows.
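
Something along these lines, say (untested; the locale name has to be one the box actually has installed):

import os
import subprocess

# Give the child a deliberately different locale from the parent process, so its
# output encoding no longer matches what the parent expects.
env = dict(os.environ, LANG="en_US.ISO8859-1", LC_ALL="en_US.ISO8859-1")
subprocess.call(["python", "-c", "print(u'\\xa3 sign')"], env=env)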

bitprophet commented 9 years ago

Oh goodie, glad to hear that :D sadly I get a UnicodeDecodeError on Python 2 with my version of the test script, but I'll see what I can do about that (and hopefully make a real test out of it).

Re: testing, I'm mostly still not sure how to come up with a useful/valid test that changes to a different-enough locale to trigger the types of issues you would be interested in catching.


Also, I skipped over your mention of AppVeyor before; I checked it out briefly and it doesn't seem to have "easy" Python support (especially given that things like https://github.com/ogrisel/python-appveyor-demo exist, which appear to do their own Python env setup). Have you used it yourself to do CI for Python code?

My feeling is that it'd be nice to have some sort of early warning system for Windows problems, but as with Windows support generally, I can't commit the time/effort to maintain a second (very different from what I'm used to) CI environment. If you or another Windows user eventually finds time to get an Invoke-friendly env working on AppVeyor, though, I'd probably bless it with links from the site / IRC integration / whatnot.

bitprophet commented 9 years ago

tl;dr for the below:

Original ramble follows:


Yea, looks like it's actually a violation of the locale-encoding concepts you outlined above; my child.py is encoded as UTF-8 (required for the ¥ symbol to work right, otherwise the child Python interpreter throws a SyntaxError trying to import it), but locale.getpreferredencoding() in the parent (Invoke) is giving me US-ASCII, despite the invoking shell having a locale of en_US.UTF-8.

Doing the same in a direct Python shell yields UTF-8, implying (in my limited knowledge) that the issue is that Invoke's source code is stored in files w/o an explicit encoding header and thus defaults to ASCII.

Strangely, if I remove the False from getpreferredencoding (so that it calls setlocale, according to the docs; I'm assuming your environment gets unhappy if that happens?) it tells me UTF-8 correctly, but I still get the UnicodeDecodeError complaining that it's using the 'ascii' encoding.

I verified that the iterdecode is doing...something...insofar as ¥ goes from \xc2 to \xa5, but clearly the result isn't valid. (That's with getpreferredencoding(True); going back to False just means the encoded new value is \ufffd - still not UTF-8 friendly.)

I also checked and the popen pipe stream has encoding None, while the stdout stream (our proc's stdout) is UTF-8. This again sounds like the same overall class of problem your code solves for (at least under Python 3).

Finally, if I temporarily remove the use of iterdecode, things work fine (implying the 'US-ASCII' part is a red herring and that any attempt to decode is somehow breaking things here - the source data is already appropriate for my terminal).

pfmoore commented 9 years ago

Hmm, the docs for getpreferredencoding says "On some systems, it is necessary to invoke setlocale() to obtain the user preferences" (where omitting the False is what causes setlocale() to be called). But doing so (if I follow the docs correctly) can result in threadsafety issues, and as we're using threads here I prefer not to do that (also, using False is what the IO stack does, so I consider it more "correct" (in the sense of matching the default behaviour).

Congratulations, looks like your locale behaviour is even more confused than mine :-)

I tend to avoid ever using non-ASCII in source, and use \N{YEN SIGN} or whatever.
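
For example, either of these keeps the source file pure ASCII:

# Same character, no raw UTF-8 bytes needed in the source:
print(u"\N{YEN SIGN}")   # ¥
print(u"\u00a5")         # ¥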

Just out of curiosity, assuming you know (for certain!) what encoding the child is using to write its output, what happens if you explicitly set encoding=<that value> in the run() call?

bitprophet commented 9 years ago

After faffing about, I've found that this may be a common issue where performing the write to stdout causes Python to try and re-encode the text we're handing it, and that re-encoding is what's defaulting to ascii. To wit:

Also, just got your reply, and yea I'm wondering now if I'm just confusing things even further by sticking the special char into my source. Certainly I'm almost at my "fuck it" threshold :(

pfmoore commented 9 years ago

Re testing, I could probably set up Appveyor on my clone of invoke, to see how it works. But to be blunt, I find your test harness too complicated to follow, and I'm not willing to spend time fighting Unix-centric assumptions just to get the tests running. Specifically, using invoke to run its own tests, and the fact that inv test runs using ptys which Windows doesn't support, is not something I want to start unraveling. Also, I don't follow the spec test output, so I find it hard to see what code corresponds to what tests.

If something standard like nosetests worked, I'd be willing to see if I can get the tests going that way. (Personally, I don't use nose so it's a new framework for me to learn, but I'd be willing to do that). A lot of tests would probably fail and need looking at to see if skipping them on Windows was the right choice, or whether there's an actual bug to fix. I'd need your help with that too.

But the result would be a test suite that ran on Windows, coupled with an Appveyor setup which I could provide to automate tests.

bitprophet commented 9 years ago

Backwards:

pfmoore commented 9 years ago

What is sys.stdout.encoding in the invoke process (where the write is failing)? It sounds like it's ASCII, which would tie in with the fact that locale.getpreferredencoding() gives ASCII (as that's what open() uses by default for the encoding).

Did I mention that the Python 2 string model is an abomination? ;-)

pfmoore commented 9 years ago

One thought, if you can give me a self-contained test script that shows your failure on Unix, I can fire up a Linux test VM and see how it fares there. (Of course if it's an OSX only issue, that won't help much, but I guess I could advise you to get a proper OS like Windows in that case, just for a laugh :-))

bitprophet commented 9 years ago

Naw, it's UTF-8 always in this case :) The problem seems to be that Python 2's innards default to ascii when doing an implicit "up-coding" in file.write(), when it's handed what it thinks is a str but wants to be writing unicode (as I understand it).

Again, though, I'm willing to chalk this up to me being dumb re: the non-ascii data in a text file. If the current code works in your environment I'm thinking we just call it good for now, and hope that subsequent updates don't break it too frequently...

pfmoore commented 9 years ago

Cool, I'm happy to say it's OK as long as you are. If you're mucking with that code in future, give me a shout I'd be happy to do a test for you.

On testing, I might open an issue "Tests should work on Windows" or some such, and use that for any conversation we have where I whine at you with questions about the test suite...

bitprophet commented 9 years ago

A ticket for Windows testing sounds fair, sure, & thanks for the offer of manual re-testing, it's appreciated :) (For example - I will be updating this module in the near future as I build out Fabric 2.x, where subclasses of Runner will be doing this exact sort of thing but with SSH stream file objects instead of Popen ones.)

bitprophet commented 9 years ago

OK, I just tidied things up a tiny bit and merged. Thanks so much for all of this & being patient with my complete lack of understanding of this particular problem domain! :smiley_cat: