microsoft / WSL

Issues found on WSL
https://docs.microsoft.com/windows/wsl
MIT License

obviate typing the filename extension for Windows executables #2003

Closed raymod2 closed 6 years ago

raymod2 commented 7 years ago

In a Windows command shell I can type "notepad.exe" or "notepad" and in both cases it will launch the same executable binary. In Bash on Windows I must use "notepad.exe". Is it feasible to get "notepad" working on Bash? If "notepad" is not found in the path then search for "notepad.exe", "notepad.com", etc., just like the Windows command shell.

benhillis commented 7 years ago

This is the expected behavior for Linux usermode. You can get around this by creating aliases for the commands you want to run and adding them to your .bashrc file, for example:

alias notepad=/mnt/c/Windows/System32/notepad.exe
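
For instance, to make that alias permanent you might append it to ~/.bashrc and reload the file (a minimal sketch, assuming the default notepad.exe location shown above):

echo 'alias notepad=/mnt/c/Windows/System32/notepad.exe' >> ~/.bashrc
source ~/.bashrc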

raymod2 commented 7 years ago

To what are you referring when you say "Linux usermode"? Presumably you mean the bash shell? Well, bash shells running under Linux don't allow you to execute Windows binaries at all, while the bash shells from Cygwin and MSYS2 let you execute Windows binaries without typing the filename extension. Bash on Ubuntu on Windows is the exception, so this is quite the opposite of expected behavior.

Your recommendation regarding aliases is unsatisfactory. It takes more time to type "alias notepad=notepad.exe && notepad" than it does to simply type "notepad.exe"! And putting aliases in your .bashrc is impractical because it is an enormous task and it presumes you know ahead of time all the Windows executables you will ever run from the command line.

benhillis commented 7 years ago

@raymod2 - By Linux usermode I'm referring to whatever shell you are running. I'll go into a little background on how this works and explain the design. When you type "notepad" at the command prompt, your shell goes through each element in your $PATH environment variable, calling the exec system call on each candidate path. For example:

echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games
exec(/usr/local/sbin/notepad) = ENOENT
exec(/usr/local/bin/notepad) = ENOENT
exec(/usr/sbin/notepad) = ENOENT
exec(/usr/bin/notepad) = ENOENT
exec(/sbin/notepad) = ENOENT
exec(/bin/notepad) = ENOENT
exec(/usr/games/notepad) = ENOENT
exec(/usr/local/games/notepad) = ENOENT

The stated goal of WSL is to provide a development environment that is as similar to native Linux as possible using unmodified Linux user mode binaries, not to be a feature-complete replacement for Cygwin. The two options for implementing the behavior you describe are:

  1. Modify Linux usermode to automatically add ".exe" before the call to the exec system call (similar to what NT usermode does in the CreateProcess API). This would violate our goal of using native Linux bits and introduce potential compatibility problems.
  2. Modify the exec system call to add ".exe" to the supplied filename, which would violate the exec syscall ABI contract and also introduce compatibility problems.

If you don't like my previous suggestion, you could also add a custom command_not_found_handle to do this automatically:

https://unix.stackexchange.com/questions/74917/how-to-locally-redefine-command-not-found-handle

raymod2 commented 7 years ago

It sounds like you've put some thought into this. However, you might want to reconsider the tradeoff between compatibility and user convenience. I suspect this is one of the first things with which new WSL users struggle (it was for me).

In the meantime, your second suggestion to work around this issue is much better. I've added the following to my ~/.bashrc and it seems to be working well:

# rename any existing command_not_found_handle (e.g. Ubuntu's) so we can chain to it
eval "$(echo "orig_command_not_found_handle()"; declare -f command_not_found_handle | tail -n +2)"
command_not_found_handle()
{
   local cmd=$1
   shift
   local args=( "$@" )

   # walk $PATH looking for a Windows-style executable with the same basename
   local saveIFS=$IFS
   IFS=:
   for dir in $PATH; do
      for executable in "$dir/$cmd.exe" "$dir/$cmd.com" "$dir/$cmd.bat"; do
         if [ -x "$executable" ]; then
            IFS=$saveIFS
            "$executable" "${args[@]}"
            return
         fi
      done
   done
   IFS=$saveIFS

   # fall back to the original handler (typically prints "command not found")
   orig_command_not_found_handle "$cmd" "${args[@]}"
}
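
A rough sketch of how this behaves at the prompt (assuming WSL's default behavior of appending the Windows directories to $PATH; output omitted):

notepad        # no Linux 'notepad' exists, so the handler finds and runs notepad.exe
notepad.exe    # still works exactly as before
ls             # a normal Linux binary is found first; the handler never runs
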
mateusmedeiros commented 7 years ago

@raymod2 Sorry, I'm really not trying to be rude, nor to sound like what you say has no merit, but I believe the user convenience depends on what you're using WSL for. It would, for example, be a major inconvenience for me if I called python and it by default ran the Windows python.exe binary instead of the Linux one. Also, a lot of scripts that I directly or indirectly use would probably break.

Because different users use WSL in different ways, with different intentions (and different possible inconveniences), I believe keeping the default behavior of Linux is a way to stay neutral.

Hope that made sense to you.

raymod2 commented 7 years ago

@mateusmedeiros: When you type "python" your PATH environment variable will be searched (in order) to find the first match. There may be more than one match. For example, you might have both Python 2.x and Python 3.x installed on your system but you want the Python 2.x version to be selected by default. So you configure your PATH variable so the 2.x version comes first. Scripts are generally written using absolute paths for executable binaries so that changing your PATH environment variable does not break them.

Keep in mind that the issues you mention exist even under a purely Linux environment. Windows interoperability does not change this!
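
To illustrate that ordering, a small sketch (the python2 directory is just an example path, not something any distribution necessarily ships):

type -a python                       # list every 'python' bash can see, in $PATH order
export PATH=/opt/python2/bin:$PATH   # put the directory of the interpreter you prefer first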

aseering commented 7 years ago

@raymod2 -- if I type "notepad.exe" on my native-Linux box, it is in fact perfectly happy to launch Notepad. And if I type "notepad", it's not. (Thank you wine :-) )

I think this issue depends to some extent on whether WSL is aiming to match the expectations of people who mostly use Windows and need Linux, or people who mostly use Linux and need Windows.

raymod2 commented 7 years ago

@aseering: You make a good point. But regardless of your background (Linux or Windows), it is easier to type 'notepad' than 'notepad.exe'. Typing "net use" is more natural than "net.exe use", etc. When MS-DOS was created, filename extensions were used to identify executable files. When the Bourne shell was created, execute file permissions were used instead, so you didn't need a filename extension. In both cases the end result was the same and the user never had to memorize and type an awkward filename extension for every command they executed. So why force Bash on Windows users to do this?

aseering commented 7 years ago

Honestly, I never actually remember either one. I just type "note" and hit tab, and let bash autocomplete it for me :-) If a ".exe" appears unexpectedly, that reminds me that the program won't accept Linux paths; I have to write them in Windows form.

But that's just me. I do see the value in your approach as well. While it would be a disruptive change for me, it would be helpful for you. And there's the trade-off :-)

mateusmedeiros commented 7 years ago

@raymod2 "Scripts are generally written using absolute paths for executable binaries so that changing your PATH environment variable does not break them." I believe this is not entirely true; it depends a lot on the nature of the script. And scripts are not the only problem. Take Neovim, for example: it will try to use the Python on my system, both 2 and 3. If running the "python" command actually ran "python.exe", who knows what kind of errors I could get, maybe cryptic ones that would take me some time to understand.

You do make a good point on the PATH order: if I have python on both Windows and Linux and put the Linux paths first in the PATH variable, that would solve the issue, and I wouldn't run Windows binaries accidentally.

But think about software that behaves differently depending on whether python is in the PATH or not, in a situation where python is not installed on the Linux side. It will find my Python on Windows and try to use it, and things will probably go awry.

The issues I talked about do exist in a normal Linux environment, like software trying to use python expecting Python 2 and finding Python 3. But Python is really just an example; with most tools it would be much harder to cause problems like that (and software that calls Python usually already has a lot of tricks up its sleeve for the Python 2 vs 3 situation). I believe the problem with allowing Windows binaries to be called from WSL without the .exe extension is mostly that it makes that kind of problem much easier to happen, IMHO.

Ultimately, I think it all comes down to exactly what @aseering said: the expected and ideal behavior is different depending on whether you mostly use Linux and need Windows or mostly use Windows and need Linux.

I do understand your approach would have its benefits. It would be inconvenient for me, but that's not the end of the world either; I can think of a couple of workarounds I could use to alleviate the problem for me.

If something like this gets implemented, all I ask Microsoft, if possible, is for the behavior to be configurable through a reg key. I think this would be the best way to please the largest number of people. 😬

CherryDT commented 7 years ago

I do understand the convenience benefits, but I actually think it would be a false friend, just like register_globals in PHP.

In general I believe WSL should behave as much like a real Linux machine as possible. Now, imagine how I would run a Windows executable under real Linux: by using Wine. I would register .exe with binfmt so that it runs with Wine automatically. If I explicitly wanted something available under a name without .exe, I'd create an alias or script.

This would behave exactly like WSL does now.

It would in fact be very surprising if Windows binaries were invokable in the same way as Linux binaries that have the same filename, because they support different things (paths, for example). It would be a time bomb, because you wouldn't have control over which one actually runs.
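
For reference, the Wine-style binfmt registration described above might look roughly like this on a real Linux box (a sketch only; it assumes binfmt_misc is mounted and Wine lives at /usr/bin/wine, and most distributions ship a packaged equivalent):

# hand PE ("MZ") binaries to Wine automatically
sudo sh -c 'echo ":DOSWin:M::MZ::/usr/bin/wine:" > /proc/sys/fs/binfmt_misc/register'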

therealkenc commented 7 years ago

You could probably also do an LD_PRELOAD thing on glibc exec.

But like Ben was saying, violating the syscall contract isn't at all viable, so no matter how you look at it, it isn't really a WSL Thing. If on Real Linux™ you think .sh or .py scripts should run without including the extension, that's basically the same ask with the same solutions.

Maybe there would be a reasonable case to be made for the Linux Powershell folks to consider this as a feature, if they haven't already.

fpqc commented 7 years ago

@therealkenc the Linux powershell guys actually had a huge debate over the merits of using the WinPowershell aliases for internal commands vs using the local system's executables (for example, ls in WinPowershell is an alias for Get-ChildItem). Last time I looked, it hadn't been resolved to anyone's total satisfaction.

raymod2 commented 7 years ago

@aseering: I think automatic conversion between Linux style paths and Windows style paths is another feature request that has merit. The developer toiling away in a bash shell shouldn't have to waste mental energy doing the conversion himself (and figuring out when to do it).

@mateusmedeiros: I really don't think your concerns would materialize. I suggest you try adding the code snippet I pasted above to your .bashrc and see what happens. I believe it would be transparent to you. The Windows .exe, .com, and .bat files will not be invoked if an executable without an extension is found anywhere in the path. In other words, the old behavior would be an error message saying the command doesn't exist. The new behavior would be to search for a Windows-style executable before giving up. Hardly an inconvenience!

@CherryDT: It is your opinion that WSL should emulate Linux emulating Windows? That's quite a convoluted position!

CherryDT commented 7 years ago

@raymod2

About paths: The problem with path conversion is that there is no interface definition that could tell WSL what is a path and what is not. Maybe somebody can come up with an ingenious idea, but at the moment I think it's rather hard, and what would be worse than no translation would be intermittent or even unexpected translation (like you get with Git Bash, where you have to do weird extra escaping to send /x parameters to Windows executables, for example).

About the rest: Well, the problem is that you are then mixing two realms in a way that can lead to unexpected results. For one, there are things in the Windows path which have the same name as things in Linux, and I'm not only talking about things like Python, but totally unrelated commands like calc, clip, comp, net, dwm, klist, replace, runas, sfc, setx...

Now, I picked commands here which belong to Linux packages that are not installed by default. This shows the problem: with the proposed behavior, they would invoke the Windows executables, which would make it hard for Linux applications to realize that the net in the path is actually a Windows application. But the opposite is worse: you, or maybe some scripts, start using the Windows tools under their normal names, e.g. net, and then you need to install samba-common-bin because it's a dependency of something, and whoops, suddenly your net does something entirely different. And you can't even fix it by changing the path, because that would also make all the other .exe-less Windows commands unusable. You would then probably start to write .exe explicitly in scripts etc., at which point we would be back where we started.
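
As a rough way to see how many such collisions you already have (a sketch only; it assumes the default C: mount at /mnt/c and only checks against commands currently installed on the Linux side):

for f in /mnt/c/Windows/System32/*.exe; do
   name=$(basename "${f%.exe}")
   type -P "$name" > /dev/null && echo "collision: $name"
done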

And yes, I think Linux emulating Windows is a similar situation of two worlds clashing, and it's even the same two worlds, so it can be a good starting point. Ultimately WSL should be better than that, of course, but at the end of the day, people who need to use Windows and Linux together (in a way not involving a VM) would otherwise have used that scenario as well.

aaronsvk commented 7 years ago

I've made another implementation of the command_not_found_handle function in my ~/.bashrc which handles Windows executables and Windows built-in commands as well. Since it is based on the command_not_found_handle function, Windows executables (or commands) are invoked only when there is no Unix executable with the same name.

command_not_found_handle() {
    # ask cmd.exe whether $1 is a Windows executable ('where') or a cmd.exe
    # built-in ('help', filtering out its "Try..." not-found message); if so, run it
    if cmd.exe /c "(where $1 || (help $1 |findstr /V Try)) >nul 2>nul && ($* || exit 0)"; then
        return $?
    else
        # otherwise fall back to the distro's command-not-found helper, if present
        if [ -x /usr/lib/command-not-found ]; then
           /usr/lib/command-not-found -- "$1"
           return $?
        elif [ -x /usr/share/command-not-found/command-not-found ]; then
           /usr/share/command-not-found/command-not-found -- "$1"
           return $?
        else
           printf "%s: command not found\n" "$1" >&2
           return 127
        fi
    fi
}

Now it's working like a charm!
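
For example, commands along these lines now resolve through the handler (a sketch; output omitted, and it assumes the usual Windows system directories are on the Windows %PATH%):

ver          # a cmd.exe built-in, detected via 'help'
ipconfig     # ipconfig.exe, found by 'where' and executed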

benhillis commented 7 years ago

@aaronsvk - that's excellent - great job!

raymod2 commented 7 years ago

@aaronsvk: Your version adds the ability to run built-in commands, although I can't think of any I would want to run from a Bash window (other than "ver"). But it requires two additional cmd.exe processes to be created for every Windows command you run, so it will be considerably slower than my implementation. Also, it's probably not a good idea to redefine the original command_not_found_handle() function in your .bashrc because it could break with future Linux versions.

cbuschardt commented 7 years ago

Despite how fun this sounds, this isn't a good idea :(

Consider the case where you have windows python3 installed -- but not Linux python3. Now consider running python3 myscript.py from your home directory. Python will complain that it can't find the file -- the Windows version doesn't understand Unix paths. Worse, it doesn't even have access to your Linux file-system.

The existing behavior ensures that old scripts are not broken. It also ensures that the syscall interface will remain compatible with Linux.

mateusmedeiros commented 7 years ago

@cbuschardt Yup, that's exactly what I was trying to say.

If that python3 (or whatever it is; Python was my first example just because it was the first thing that popped into my head) is run not directly by a script but deep within some software, the errors it prints might end up being unclear, and I fear it could take a while for me to understand that the fix is to install the Linux Python so it stops trying to use my Windows one (again, Python is just an example; it could be anything).

Scripts and other software usually test whether a tool is on $PATH and give the user a meaningful message if it's missing and required (if it's optional, they may just silently disable the functionality that needs it). If one has the Windows version of said tool BUT NOT the Linux one, and this is implemented, those scripts and that software will mistakenly think the user has the tool when they don't, try to run it, and break because of the paths, the different filesystems, and everything else.

My work on WSL is mostly Linux-based (though not entirely!) and very deep on the Linux side. I use a lot of Linux tools and software all the time, and 99% of everything I do on WSL is on VolFs (aka the "fake" Linux filesystem) instead of DrvFs, so, unfortunately, I'm quite sure stuff would break for me. Not that I am that important myself, but surely I'm not the only one that uses WSL like that.

raymod2 commented 7 years ago

The objections from a few people on this thread are getting a little ridiculous. If you want a pure Linux environment uninstall Windows and install a Linux distribution! Or dual boot. Or run Linux in a VM. WSL is about Windows/Linux interoperability. People have been running Cygwin and MSYS for decades and the sky has not yet fallen because Windows commands can be run from a bash shell without typing the filename extension!

CherryDT commented 7 years ago

The difference from Cygwin and MSYS is that WSL aims to provide an almost-native environment for Linux programs, so the Linux programs don't need to know that they are under WSL at all. Hence my point that, from the perspective of a Linux program, the system has to look and feel like real Ubuntu, and Windows-related stuff would then need to work more like Wine does under Linux.

Changing how some things work is OK as long as all parts of the pipeline are in on it. If you are working with unmodified Linux binaries, though, you get parts which are definitely not in on it.

mateusmedeiros commented 7 years ago

@raymod2 Chill, man. WSL and Cygwin have different objectives; Cygwin didn't solve my problems, and I used a Linux VM for half a decade because of that, but now WSL does.

If it starts to be more like Cygwin, providing less native emulation and more of a hybrid Windows/Linux system, the sky will not fall; it will just stop being a solution for me. That's all I'm saying.

I'm not the one that came up with the idea of WSL being a layer of Linux userspace syscall emulation, but it is what it is. People have different needs; the point you make is a valid one for your uses. I just tried to raise concerns for some people who use it differently (including me), and your answer was "if you use it like that, go use a VM, or dual boot".

Like I said in the very beginning, I believe keeping the default behavior of Linux is a way to stay neutral. This way Microsoft doesn't have to guess which side to please every time; they can just focus on following the idea of WSL, which is to emulate the userspace of Linux, and for that it only makes sense to keep Linux's default behavior. Of course it's not my call, though; I'm just voicing opinions here.

Last but not least, I ask Microsoft one more time: if something like this gets implemented, please keep the behavior configurable through a reg key if possible.

Peace ✌️

raymod2 commented 7 years ago

@CherryDT: The goals of WSL are very similar to the goals of Cygwin and MSYS. Namely, to create a development environment under Windows that provides access to the Linux utilities that many developers rely upon (bash, grep, sed, autotools, etc.). The biggest difference is the strategy used to attain those goals. MSYS, for instance, takes the source for the GNU utilities and compiles them to run natively on Windows. WSL, on the other hand, takes the Linux binaries and executes them directly using a shim layer that translates system calls between POSIX and Windows. The number one feature request submitted by users of WSL was the ability to run Windows binaries from inside a WSL bash shell. I think that makes it pretty clear how the majority of users are using WSL and Microsoft responded by implementing that feature in the Creators Update. This request is simply a logical extension of that feature by making it easier to run Windows binaries in a WSL bash shell. I suspect that most WSL users would be in favor of this despite the loud opposition from a few users here.

@mateusmedeiros: I am guessing you didn't follow my suggestion and add my function to your .bashrc to see if anything breaks for you. You would have more credibility if you reported actual problems rather than theoretical ones.

benhillis commented 7 years ago

Let's keep this discussion civil. It's clear there are differing opinions on what is the right thing to do.

@raymod2 - for some context this feature has been available for a few months via Insider builds and this is the first negative feedback we've heard about having to specify .exe

CherryDT commented 7 years ago

@raymod2 Right, I agree with you there. But as you said, here the ultimate goal (being able to use Linux development utilities under Windows) is achieved through the intermediate goal of running unmodified Linux binaries (or scripts) natively. Therefore, adding something which can break the expectations and therefore function of unmodified Linux programs as a default configuration is in conflict with the intermediate goal. This, in turn, is then in conflict with the ultimate goal.

This is what makes me arrive at my conclusion that I don't think WSL should do it, at least not by default. (And as a non-default, you can already achieve it anyway.) I am in no way against adding features which make interoperability more seamless, if and only if they don't endanger compatibility with Linux tools. I agree that it will not cause things to blow up for people immediately, which is why nobody has shown you an example yet where things actually broke, but it is clear that they can break, and if you look around this issue tracker you will see tons of "edge cases" where people ran into issues caused by tiny differences between WSL and real Linux, and they are not dismissed just because they happen only 0.1% of the time. A time bomb bug is never good. For example, in #966 Perl actually deletes files it isn't supposed to delete because of a small incompatibility...

By the way, it doesn't really matter to me who is "right". I just don't think objections based on technical problems which you can already "smell" when imagining the scenario should be so easily dismissed as "most users will like it anyway", because it's easy to demand something when its possible unwanted side effects aren't fully understood.

(Also peace ✌️)

raymod2 commented 7 years ago

Probably the best option is for Microsoft to add something similar to what I or aaronsvk wrote to the default .bashrc. For most users this will make WSL easier to use, and it will improve the adoption rate. The rare users who don't want it can simply remove it.

aseering commented 7 years ago

@raymod2 , for context / as a counterexample to one of your previous comments: In a previous job (not currently), I was working on a team that wrote Linux-only software, but I was required to use a particular brand of computer hardware for my personal computer that did not have great Linux driver support, and to run various Windows-only applications (most notably, MS Lync, though others as well) that don't work under Wine and have A/V issues when run in a Windows VM on a Linux machine whose graphics drivers are less-than-stable.

I couldn't realistically use Cygwin for many reasons -- it handles Linux-specific edge cases in ways that differ from real-Linux and that would break our code; we needed to test for glibc-specific bugs and Cygwin doesn't support glibc at all; etc.

I didn't want to use a VM because the software was performance-intensive and performed operations that are expensive to virtualize, so running in a VM was slow. Plus that project had a large source tree, and while DrvFs is certainly not perfect, my experience with VM shared filesystems has been much worse...

I didn't want to dual-boot because restarting my computer just to take a phone call or instant message someone on my team gets annoying in a hurry.

You might claim that this company's constraints were silly. I won't comment on that, beyond (1) saying that they had reasons that I haven't discussed here, and (2) pointing out that I no longer work there :-) But at the end of the day, it really doesn't matter whether the company is silly. What matters is that it's (1) quite profitable, (2) not willing to change its policies, and (3) huge. If Microsoft were to cut a deal to provide Windows+WSL with any kind of support to even 1% of that company's employees, I would expect that one sales deal to cover the salaries of most or all of the current WSL team. And, based on my experience there, I think there are other big companies out there with similar problems -- targeting native Linux but refusing (or otherwise making it difficult) to let devs run native Linux on their workstations.

Of course, any commentary about Microsoft's sales strategy on my part is purely speculative. I don't work there; I have no idea what they have planned. My point is just that there does exist a set of problems that are solved neither by Cygwin nor by a VM, and that there exist sizeable teams of real engineers who would benefit from a new and distinct tool that solves those problems. Microsoft could target that niche and have basically a monopoly there, or they could try to reimplement and compete against Cygwin, or they could do something else. Regardless, I think it's not entirely fair of you to claim that these objections are getting ridiculous. I admit that I would have called them ridiculous myself a decade ago. But then I learned the hard way why they're not :-)

raymod2 commented 7 years ago

@aseering: And yet, nobody has provided a single example of how this feature request would cause any problems whatsoever. The bash function I wrote above only invokes a 'foo.exe' binary when no 'foo' binary can be found, which is exactly what the user wants. It breaks nothing. Furthermore, it is only applied when a human is typing commands into a bash shell. All the hand waving about broken scripts and Linux binaries is nonsense. Add my function to your .bashrc and then create a script that attempts to invoke 'notepad'. It will fail because you didn't use 'notepad.exe', just like it does now.

CherryDT commented 7 years ago

Maybe my Linux knowledge is insufficient, but how does this differentiate between an actual user and a shell script? What about things which pipe data to bash for execution (like some remote installers do, even though this is discouraged for security reasons)? (I can't test it right now)

If it is really limited to actual user input, it may be less problematic, you are right, but on the other hand it would be unexpected that things that work at the prompt do not work in a script (even though that limitation exists precisely to prevent issues).

Thanks for bringing that up, it didn't occur to me that it would be possible like that.

raymod2 commented 7 years ago

@CherryDT: Bash functions are not accessible to child processes (such as a script executed from a shell prompt) unless you export them first (export -f foo_function).
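
A quick sketch of the difference (foo_function is just an example name):

foo_function() { echo "hello from foo_function"; }
bash -c 'foo_function'     # fails: the child bash doesn't see the function
export -f foo_function
bash -c 'foo_function'     # works: exported functions are passed via the environment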

aseering commented 7 years ago

@raymod2 -- actually, examples have been posted to this bugtracker. The problem is that they haven't made it to this ticket :-)

I'm on a cellphone right now and github's mobile search is rather limited... I'll try to circle back, but it may be a day or two, so if someone else has a chance to go looking, it would be much appreciated.

But the scenarios that I recall have to do with NodeJS being installed for both Linux and Windows. npm in particular tends to add multiple binaries for individual Node packages to $PATH (or %PATH%). And node tends to call its own binaries, and to expect that they can call each other. This is, in fact, not the case: First, there's an asymmetry in that Linux binaries can see the Windows PATH but Windows binaries can't see the Linux PATH. You could argue that WSL should fix that. But it gets worse: If Windows npm installs a node package, even if that package is pure JavaScript, Linux node cannot use that package. The Windows npm binary is hardcoded to output text files using Windows newlines, and the Linux node binary is hardcoded to reject files that use Windows newlines. That's a deliberate application-level decision, not directly within WSL's control.

This problem can be observed even today because many node packages bundle, even on Windows, shell-script launchers that also get added to PATH. So WSL will in fact launch the Windows versions of these packages today in some cases.

You would think that this might Just Work(tm), and result in a wonderful new form of Windows/Linux interop. You would be wrong :-) You might instead think that node would at least fail fast and force you to go install the Linux version of your dependency. Unfortunately, you would also be wrong. The failures can be so Byzantine that at least one node user has advocated vigorously that WSL needs to drop Windows %PATH% from Linux $PATH entirely.

To be fair, IMO node should go clean up their stuff :-) But node is a very prevalent environment these days; realistically, we have to put up with it.
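
For anyone who does want to drop the Windows %PATH% entirely, newer WSL releases can be told not to append it via /etc/wsl.conf (a sketch; it assumes a WSL version that honors [interop] settings, and it only takes effect after the distro is restarted):

# stop appending the Windows %PATH% to $PATH in this distro
sudo tee -a /etc/wsl.conf > /dev/null <<'EOF'
[interop]
appendWindowsPath = false
EOF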

aaronsvk commented 7 years ago

@raymod2 - I edited my example, now it requires only one cmd.exe process.

Also, some users may want to use Windows commands other than ver.

ci70 commented 7 years ago

Has this been fixed? Typing .exe all the time is so time consuming!

CherryDT commented 7 years ago

@joeparkerz: It's not a bug but expected behavior; it works the same when using exe files in real Linux, e.g. using Wine. See the other comments in this thread to understand why (and why not everybody agrees), and for various workarounds with their up- and downsides; pick the one that best fits your workflow.

Enzojz commented 7 years ago

Without this feature it's simply impossible to compile OCaml with cl in WSL, and it's impossible to modify the script to add .exe everywhere. A pity that I still need Cygwin, which is so slow!

therealkenc commented 7 years ago

I have not seen the script (and you did not link it), but if it needs to be modified in more than one place then the foo.exe names need to be refactored into $(FOO) variables. Doing so is a one-time pull request. Calling that "impossible" is a gross mischaracterisation. It borders on being a bug in the script.

It is unlikely you'll find WSL faster than Cygwin. Any emulation gymnastics that make Cygwin slow, WSL has to do all the same, just on the other side of the syscall barrier.

Enzojz commented 7 years ago

Well, I am not facing just one script but hundreds of scripts. The bootstrap script of OCaml is very complicated; it doesn't only call cl.exe and link.exe but also about 30 other components of its own compiler, plus the build scripts for some other heavy extensions of the language. That's why I call it "impossible". WSL is obviously much quicker: when I do a make clean of the OCaml bootstrap, WSL takes only 30 seconds but Cygwin takes me around ten minutes. If you would like to look at the problem I am facing, here is an example: https://github.com/ocaml/ocaml/releases/tag/4.05.0

Serhii-the-Dev commented 6 years ago

@aaronsvk Maybe you may help, how can I adapt your solution for zsh/oh-my-zsh?

aaronsvk commented 6 years ago

For zsh just put this code into ~/.zshrc

command_not_found_handler() {
    if cmd.exe /c "(where $1 || (help $1 |findstr /V Try)) >nul 2>nul && ($* || exit 0)"; then
        return $?
    else
        [[ -x /usr/lib/command-not-found ]] || return 1
        /usr/lib/command-not-found --no-failure-msg -- ${1+"$1"} && :
    fi
}

Serhii-the-Dev commented 6 years ago

Awesome, thank you for help!

danyx23 commented 6 years ago

I come at this from a slightly different angle but am dealing with the same problem. I don't particularly care that I have to type .exe or create an alias for normal shell use, but it is a problem when you want to use Linux developer scripts that call tools you have installed under Windows, namely e.g. the kubectl tool to interact with Kubernetes.

My ideal setup is to install the Kubernetes tools under Windows but be able to use the Win32 kubectl.exe without the .exe suffix from bash scripts in WSL (e.g. the kubectx/kubens scripts or the zsh autocompletion for kubectx). For such cases, it would be ideal if I could tell WSL to redirect kubectl inside WSL to the kubectl.exe I use under Windows.

I realize that this is not always what you want (e.g. with the python example mentioned above). So maybe this could be done on a per-case basis somehow?

sartan commented 6 years ago

@danyx23 Yes! This is exactly my angle. I have shell scripts that work fine on MacOS, Linux, and Windows (via git bash), but break under WSL because of the ".exe" suffix. Sure, there are workarounds, but that's not really the point. The issue is interoperability of scripts, not the tedium of having to manually type ".exe".

danyx23 commented 6 years ago

@sartan A workaround I found is to symlink on the WSL side from /usr/bin/kubectl to /mnt/c/.../kubectl.exe - this works for shell scripts etc. and can be done on a case-by-case basis.
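
Concretely, something like this (the kubectl.exe location is just a placeholder; point it at wherever your Windows binary actually lives):

# give the Windows binary a WSL-side name without the .exe suffix
sudo ln -s /mnt/c/tools/kubectl.exe /usr/bin/kubectl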

Tina-otoge commented 6 years ago

Bash will not automatically add the extension to a file.

If you create foo.py and run the command foo, Bash will never try to execute foo.py.

Due to this, I think that (1) you should not expect Bash to try to run notepad.exe when typing notepad and (2) this is out of the scope of WSL.

WSL is not Bash, but Bash can run on WSL, and Bash is usually the default shell when installing WSL. You may want to find or write a shell that automatically adds extensions when you type a non-existing command, or configure Bash to do so (ways to do this have been described in this issue).

therealkenc commented 6 years ago

Yes, this got marked by-design in April 2017 but wasn't closed while the discussion ran its course. It just got lost in the noise.

xtrm0 commented 5 years ago

This is my current solution for zsh:

command_not_found_handler()
{
   cmd=$1
   shift
   args=( "$@" )

   saveIFS="$IFS"
   IFS=:
   for dir in ${(@s/:/)PATH}; do
      for executable in "$dir/$cmd.exe" "$dir/$cmd.com" "$dir/$cmd.bat"; do
         if [ -x "$executable" ]; then
            IFS="$saveIFS"
            "$executable" "${args[@]}"
            return
         fi
      done
   done

   IFS="$saveIFS"
   if [ -x /usr/lib/command-not-found ]; then
      /usr/lib/command-not-found -- "$cmd" "${args[@]}"
      return $?
   elif [ -x /usr/share/command-not-found/command-not-found ]; then
      /usr/share/command-not-found/command-not-found -- "$cmd" "${args[@]}"
      return $?
   else
      printf "%s: command not found\n" "$cmd" >&2
      return 127
   fi
}

For bash, change the name of the function to command_not_found_handle and it should work

mrbianchi commented 4 years ago

(Quoting the zsh solution from the comment above, including "For bash, change the name of the function to command_not_found_handle and it should work.")

Yours is not working for me: -bash: ${(@s/:/)PATH}: bad substitution

xtrm0 commented 4 years ago

I said this is my current solution for zsh; your error ("-bash: ${(@s/:/)PATH}: bad substitution") is coming from bash.

Change that ${(@s/:/)PATH} to something equivalent that works in bash, e.g.: $(echo $PATH | sed "s/:/ /g")

mrbianchi commented 4 years ago

I tried changing ${(@s/:/)PATH} to $(echo $PATH | sed "s/:/ /g") and am still researching, but it's not working, with no errors.

UPDATE: my working solution on bash 🥇

command_not_found_handle()
{
   cmd=$1
   shift
   args=( "$@" )

   saveIFS="$IFS"
   IFS=:
   for dir in $PATH; do
      for executable in "$dir/$cmd.exe" "$dir/$cmd.com" "$dir/$cmd.bat"; do
         if [ -x "$executable" ]; then
            IFS="$saveIFS"
            "$executable" "${args[@]}"
            return
         fi
      done
   done

   IFS="$saveIFS"
   if [ -x /usr/lib/command-not-found ]; then
      /usr/lib/command-not-found -- "$cmd" "${args[@]}"
      return $?
   elif [ -x /usr/share/command-not-found/command-not-found ]; then
      /usr/share/command-not-found/command-not-found -- "$cmd" "${args[@]}"
      return $?
   else
      printf "%s: command not found\n" "$cmd" >&2
      return 127
   fi
}

and also adding export -f command_not_found_handle so that child bash shells inherit the function

Anyway, hours wasted, but the cordova subprocess doesn't import this function 👎 Any idea?