nbeaver / why-linux-is-better

Objective reasons to prefer Linux to Windows.

Some corrections and counters from a Windows geek :) #7


MatejKafka commented 3 years ago

Virtualization and driver limitations.

I'd argue that installing only the drivers necessary for the platform, rather than bundling lots of bloat into the kernel, is an advantage. Nevertheless, the result is, as you correctly state, that Windows usually won't boot after some hardware configuration changes.

UTF-16, not UTF-8.

Correct me if I'm wrong, but isn't most of Linux still pretending that all strings are ASCII? Not saying that UTF-16 is ideal, but the Unicode situation seems much better on Windows than Linux from my (admittedly limited) experience with both.

File extensions are the sole determiner of filetype.

Gotta say, detecting file type based on a magic number doesn't seem like a good way of doing it to me. With extensions, I take a single look at a file name and instantly know what will happen when I try to open/run it, as opposed to Linux, where I run a .py file, only for Linux to launch it as a shell script because someone put a bash shebang at the top.

Ineffectual read-only permissions semantics.

I believe you're talking about the read-only attribute, which is file-only and afaik just a legacy carry-over from earlier systems. If you want to make a folder read-only, just use ACLs, which are much more powerful than Linux rwx permissions. So, this one is factually incorrect, as you can trivially make a folder read-only with a single ACL entry.

Limited default debugging tools.

By default, yeah, but installing WinDbg takes about 10 seconds in MS Store, and it's a much more capable debugger than gdb. DebugView is portable, you can just take the .exe with you on a flash drive.

Lack of granular execution access control

Having a single root account is a really bad idea in domain environments, and this decision carried over to personal Windows editions. Allowing binaries to automatically run as root has created many security vulnerabilities on Linux, where a binary with the SUID bit set accidentally allows you to run shellcode as root. So, yeah, convenience is nice, but here it's at the expense of security.

Default software.

Default software is bloat, especially development tools that 99% of Windows users never use. Not saying Windows doesn't have enough of its own bloat, but I don't see any reason to add more. PowerShell 5 is installed by default on Windows 10 - you can't compare modern Linux with a version of Windows from 2012. Also, Windows GUI system config is good enough that again, most normal users don't need to use a shell.

Software configuration: registries and text files.

While I mostly have to agree with your assessment of the situation, most Windows apps are quite well-behaved. It is rare for an app to use anything other than AppData\Roaming\<appname>, AppData\Local\<appname>, or the HKCU\SOFTWARE registry directory for storing data/config. Also, modern UWP apps have quite stringent requirements on how to store data, and the paths are well-defined.

While I agree that the Registry is not a great implementation of the underlying idea, it still imo seems cleaner than dumping random text files into /etc, each with a different format. Also, the blame doesn't really lie with Microsoft here, but with app developers who refuse to write their apps the way they are supposed to according to system standards.

Package manager with signed binaries.

Package management on Windows is still terrible, no question. But with winget and its upcoming support for MS Store, there is finally an official package manager which is gaining traction quite fast. Also, UWP is the best realization of a sandboxed package environment I've seen to date, the issue is mostly with distribution.

Fixing configuration problems with commands instead of GUIs.

  1. for normal users, GUI settings are much less error-prone
  2. lsblk, xrandr, lsusb, sudo /etc/init.d/networking restart, ... - am I configuring a system here, or summoning a curse unto humankind?

Remote administration.

This is just plain wrong.

  1. There is PowerShell remoting, which is a much more full-featured alternative to ssh. Still, if you really want ssh, you can install it with a single command.
  2. Remote desktop server & client are built into Windows Pro and above, and it's still the best implementation of remote desktop I've seen. I'd much rather remote into a machine than use ssh for text commands.
  3. Not responding to pings is quite a sane default for public networks.

Public bug trackers.

There's Feedback Hub now, which is quite good. I'd argue the issue is mostly about users - when most of your system's user base are nerds and developers, public bug trackers work much better. Ever looked at GitHub issues of an OSS project used by non-technical users?

Malware.

So, the whole point of this is that nobody uses desktop Linux, so there isn't any malware targeted at it? Well, yeah.

alexbobp commented 3 years ago

I will respond point by point.

Drivers A typical linux system does not have several dozen megabytes of unused drivers all in one blob. Those dozens of megabytes of unused drivers are in the form of kernel modules, on disk, which only get loaded when the relevant hardware is detected. There are more customizable distros that allow you to avoid even bloating your disk with drivers you don't need, but either way, the disk space cost of the extra drivers is light compared to other sources of bloat in operating system installed size on disk, and there is no cost at runtime to the extra drivers since they are simply not loaded. This might seem odd in contrast to the windows driver model, and linux can be configured more statically, but a typical desktop linux install (e.g. Ubuntu) is dynamically probing your hardware and loading the appropriate drivers every bootup. (As you might be wondering: yes, this does have a cost on bootup times, and there's a difference between what your typical desktop linux install looks like and what a more optimized install looks like, but the cost is not very severe.)

"Correct me if I'm wrong, but isn't most of Linux still pretending that all strings are ASCII?" You're wrong. Not sure how to elaborate on this. I've just long been used to all my tools handling UTF-8 perfectly, including in filenames and such.

File Extensions Honestly, I think you have a good point on this one. The practical reality in linux-land is that we still use filename extensions to distinguish most of our file formats, and our graphical file browsers use those filename extensions when we click the files, so it's not as different from windows-land as the essay makes it seem. The one big exception is executable status, which is a specific piece of metadata, and this still strikes me as a win for linux. The ability to double click something and have it result in executed code seems worth keeping sacred. This also allows linux to do things like mount an untrusted storage medium with the no-execute flag. I actually don't have a problem with filename extensions themselves, and I think this point is oddly made in the original essay. The biggest criticism I'd have of windows on this point is that it has a default setting of hiding filename extensions, and that is the part that makes it most likely to lead to dangerous user error.

Read-Only I think you're totally correct on this point. The original essay doesn't seem to mention ACLs at all.

Debugging Tools I think you have a good point here. I'm not familiar with WinDbg, but I'll assume you're right about it having comparable capabilities. If it's a first party tool and available for free, then whether it's installed by default seems to be unimportant.

Granular Access Control You're mistaken to think windows does not have the equivalent of the linux root account. You're right that windows has security domains it keeps outside the convenient reach of a normal admin... however that's largely an illusion. An administrative user can easily elevate to those higher levels, eg, running a cmd shell as SYSTEM, or loading arbitrary code as a system service or a kernel driver. On the other hand, if what you really need is an operating system that is able to robustly enforce granular access controls even against processes running as root, linux does have multiple solutions, like SELinux and AppArmor.

Default Software Is Bloat This argument just seems a bit silly when the installed size-on-disk of any linux distro with the standard array of office tools and other "bloat" is still far smaller than that of a naked windows install. But putting aside the comparison, your average linux distro doesn't devote that much space to bundled userland software. The heftiest pre-packaged things you typically see are LibreOffice, and a web browser that the end user might actually want to use. And I think most users still have use for a full-featured office suite, especially one they get for free. Either way, it's easy enough to uninstall these things, or to install a minimal desktop setup and go from there, but overall I think your average linux distro strikes a very reasonable balance of default software that takes up a modest install footprint to provide a lot of basic functionality users actually want in a computer. And again, it's just an easily observable fact that even full-featured linux distros are less bloated than a vanilla windows install with much less functionality out of the box.

Registries vs Text Files The registry isn't inherently a bad idea. In fact, the two leading families of desktop environments for linux, gnome and KDE, both also have their own version of the registry. What I fundamentally like about the flat files in /etc is that I know how to edit them very quickly and easily, as well as back them up on an individualized basis, restore them, etc. What a centralized database system could provide us is robust data integrity protection, backup and state management and snapshotting on the level of individual applications you configure, and a snazzy editor that makes editing a bunch of hierarchical configuration data convenient and not horrifying. Right now windows gives us none of these things, and the blame is on microsoft. They standardized the idea of configuration in a hierarchical registry, but never supported it properly. The registry is not more convenient to edit than applications having their own config files, even though by all rights it should be, and it ends up providing a single point of failure for corruption, as well. A bunch of files in /etc might seem disorganized, but the simplicity allows you to manipulate it with very standard tools (text editors and basic file management), which is good enough for a lot of people. Despite its shortcomings, it makes it easy to see how to meet needs like backing up, distributing, or programmatically modifying or generating application configuration.

Package manager with signed binaries It's definitely true that windows has similar technologies available, but I think the real point here is that on linux, users have a much more realistic expectation of getting most or all of the software they install through such repositories. This means that the frequency with which a user will have to make the decision to trust a 3rd party website to download software is less, and therefore, one can only hope, those users will be more careful about when it's actually worth it to trust 3rd party sources to install software from. There are also sandboxed app distribution environments on linux like Flathub, though I'm really not qualified to compare whether that or UWP is better.

This wasn't mentioned, but my favorite benefit of linux package managers is dependency handling and the usage of system-installed libraries, instead of every program installing its own copies of every library it uses. To my knowledge, the package managers available for windows don't solve that, aside from ones that repackage linux-like environments, like cygwin.

Commands vs GUIs I'd rather have to enter a command because there's no gui for something, than have to click a gui because there's no command for something... which is how I often feel when administrating windows. Commands can be thrown into a script, and then easily turned into a gui, even if the gui is just a folder with multiple scripts you can click on. That said, this is still a valid criticism. It will be good for the linux community to improve the availability of good guis for administrating various system components. Hopefully that comes in time.

I do want to push back against your claim that GUIs are less error-prone: a lot of times, fixing things on either operating system comes down to searching the internet and following instructions. When the instructions are edits to config files or specific commands to run, the user can perform them via direct copy-and-paste, which is more precise than trying to follow instructions involving manipulating regedit or other configuration GUIs.

Remote Administration I think you mostly have it right here, other than claiming that the remote administration capabilities of windows are more full-featured. There are a variety of good remote access solutions for linux. Personally I think the biggest argument for linux here connects with the previous point... remote access through a command line is more universally stable and reliable than through a GUI (and yes, as bandwidth and technology improve, this will seem like a non-issue to more people). The reason linux users traditionally do remote administration on the command line isn't due to a lack of availability of linux servers for vnc, rdp, nx, etc, but rather that they already feel satisfied with their ability to administrate a linux system via the command line. There's also sshfs, which provides the ability to map a remote filesystem to a local mount point over ssh, which allows the use of locally installed GUI editors or configuration tools on remote systems.

Public Bug Trackers I think you're right here. Most software vendors have some way of taking user feedback. Microsoft's problem isn't a lack of feedback, it's that their interests are not aligned with those of the users!

Malware What you typed above is a complete strawman. The main point the article actually makes is the same as what I said above about package management... that windows lacks centralized trusted repositories that users can actually rely on to install most or all of their software, and windows users are largely trained to download installers from individual 3rd party websites and run them.

MatejKafka commented 3 years ago

First of all, thanks for the insightful replies. I may have come across as a Linux hater, but I actually quite like many aspects of it and generally enjoy using it; I just dislike the hate Windows is getting from Linux users who don't really understand it. :)

Drivers

I'm not sure about this, but I think Windows also only loads drivers for hardware that is currently present (at least, given that it can dynamically download and run drivers for newly connected peripherals, I don't see a reason why it wouldn't do the same with already-downloaded drivers).

File extensions

Hiding file extensions is imo really dumb, and I'm not really sure why File Explorer does it by default. For the executable bit, there are interesting edge cases - if you have Python as the associated app for .py and you click a Python file in a GUI file explorer, are you executing it, or opening it? Windows "solves" this by not having an execute bit and instead just always running the associated program. Still not entirely sure if it's a "better" solution, but it's quite convenient as long as you know what each extension does.

Granular Access Control

  1. Technically, SYSTEM is not root - it's an ordinary account, which "just" has a lot of access around system files. Services (daemons) usually run under it.
  2. By domain, I meant networked Windows installations with Active Directory etc., not security domains. Sorry, should have been more specific.
  3. I'm not really arguing that administrators are better than a single root account for personal devices, I just tried to explain why Windows uses the account model it does and where it's appropriate.
  4. Agreed, AppArmor and SELinux are really nice and I'd love to have a similar customizable sandboxing mechanism on Windows.

Registries vs Text Files

Agreed, tooling around the Windows registry is just plain bad, and the registry itself isn't much better. PowerShell helps, but still nothing to write home about. Linux config files are easy to edit by hand, but what if I want to modify them in a script? I have to figure out which particular text format the config is using (the usual absence of useful file extensions doesn't help), and then find a parser for it. With the registry, I have a unified interface. I believe that in the future, with the evolving needs of modern apps and tooling around it, standardized binary config will work great, but it needs high-level support from language APIs for developers to use it well.
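To make the contrast concrete, here's a rough Python sketch; the config file name, registry key, and setting are hypothetical examples, not any real app's:

```python
import configparser

# Text-file side: first figure out the format (INI here), then parse and edit.
config = configparser.ConfigParser()
config.read("myapp.conf")            # hypothetical INI-style config
config["server"] = {"port": "8080"}
with open("myapp.conf", "w") as f:
    config.write(f)

# Registry side: one uniform API regardless of the app (Windows-only,
# hence commented out here):
# import winreg
# with winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"SOFTWARE\MyApp") as key:
#     winreg.SetValueEx(key, "Port", 0, winreg.REG_DWORD, 8080)
```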

Package manager with signed binaries

Re: dependency handling - I'd love to have nix on Windows, it's great. I'm actually working on something a bit similar for Windows in my spare time, but it will take a lot of time to make it work well.

Commands vs GUIs

Even as a power user, I quite enjoy using GUIs for one-off settings, but I agree, all config should be scriptable, and Windows fails miserably here. PowerShell lets you configure quite a lot of stuff, but it's not as good as Linux tooling here. For normal users, GUIs are in my experience much more understandable - it's quite hard for them to deal with minor differences in syntax changing the meaning of the whole command, as it's not something you encounter in daily life unless you're a developer. For sharing config, I agree CLI is better.

Remote Administration

I didn't claim (or at least I didn't intend to) that Linux does not have remote GUI options, just that in my experience, they're fragmented and not used much. For mounting remote filesystems, there's SMB, which is standard across all windows devices and works quite well (and also integrates with other network domain features, which is quite important for corporate environments).

alexbobp commented 3 years ago

Fair enough! I didn't take you for a linux hater, but I do still think that some of the windows-hate is a bit more deserved than you think ;)

Drivers yes, it's similar on both operating systems. Except that having to wait for your OS to connect to the internet and maybe find your drivers, instead of already having them ready-to-go, is a hassle.

File Extensions Actually, when it comes to scripts, linux has another good solution... the shebang. A script on linux starts with #! /path/to/interpreter, and when an executable text file is executed, linux uses this designator to find the correct interpreter. Things like double-clicking a .py file are a non-issue, because the file extension association is only used for editors, and your file manager won't execute the script if it's not marked executable. You can still manually run such a script (ie, python script.py), but it won't be happening by mistake from a double-click action.
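For instance, a minimal executable Python script (the same mechanism works for any interpreter path in the shebang):

```python
#!/usr/bin/env python3
# Saved as hello.py and marked executable with: chmod +x hello.py
# Running ./hello.py makes the kernel read the first line and hand the
# file to python3. Without the executable bit, a double-click or
# ./hello.py won't run it, though "python3 hello.py" still works.
print("launched via the interpreter named in my shebang")
```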

Granular Access Control You say "Technically, SYSTEM is not root." Frankly, this is a word-game at best. SYSTEM is the functional equivalent of root. It can read and write all other users' files, and system files, and again, it can load kernel-space code, which runs unrestricted by the CPU itself. If you think I'm incorrect, please explain what defence windows has against an attacker with SYSTEM access. SELinux does actually exist in real life, though. SELinux can actually limit the root user's ability to load kernel modules. Windows has absolutely nothing to limit an administrator from loading kernel-mode code. Kernel-mode code means a complete takeover, since the operating system's access controls no longer matter at that point.

Registries vs Text Files While I still agree that, in theory, updates to a database are the cleaner way to make programmatic updates, I'm not sure why you're acting like programmatic changes to text-based configs are difficult. Lots of tooling is already out there based on automatically modifying or generating config files. Just look at the very successful Let's Encrypt certbot for one example, which will edit SSL support right into your existing apache or nginx or other webserver's configuration, automagically. Ultimately I agree with your argument but you're overblowing how much of a problem this is in practice.

Commands vs GUIs Putting aside my main point, that command line UI works as a good basis for building a GUI but not the other way around, and that I'd rather have a command but no GUI than the other way around... if we judge purely by availability of GUI-based configuration, I think windows might have a slight lead in some aspects, but it's probably narrower than you think, and closing fast. There are plenty of GUI-based administration tools for linux, in fact, that you might not know about simply because so many linux administrators stick to the command-line tools by choice, so GUI options often aren't as publicized or don't get mentioned in tutorials. So if anything this is a cultural/ecosystem issue, of linux ecosystems needing to work a little harder to welcome users who prefer GUI. But again, I think the community is making great progress.

Remote Administration You say that linux GUI remote administration options are "fragmented and not used much"? I really can't guess what you'd mean by "fragmented," but if your complaint is that they're "not used much," I have to ask, why do you care what other admins choose to use? Do you actually think that they don't know about VNC, or are somehow unable to use VNC, if they wanted to? I think the fact that most linux admins find GUI-based administration to be unnecessary and unappealing says a lot. It's good to have GUI options, but I think many of us will always feel more in control with our commands that we can put into scripts.

As for SMB, it's a fine option on a trusted network, but do you really expose SMB ports to the public internet or mount SMB shares over the public internet? I wouldn't. Maybe you know something I don't know. SSH, at least, I feel more comfortable with, and it's a single TCP connection which is easier to tunnel through other hosts if necessary, as compared to a p2p protocol.

FormerlyChucks commented 3 years ago

I see Gates is up to his old tricks again.

alexbobp commented 3 years ago

That seems like an unfair reaction. I think Matej was commenting in good faith.

nbeaver commented 3 years ago

@MatejKafka Thanks for providing your feedback. I think a lot of it comes down to differences in priorities, but I certainly learned some things about Windows from the points you brought up below. They also reminded me of some advantages to Linux that I had not previously written down.

Thanks also for (mostly) avoiding the "Windows application x is better than Linux application y" style of argument. It's a testament to the power of platform lock-in that most users only consider application support when evaluating an operating system rather than examining the operating system itself.

Below I've put your comments block-quoted in italics, like this:

Some corrections and counters from a Windows geek

Virtualization and driver limitations.

I'd argue that installing only the drivers necessary for the platform, and not just bundling lots of bloat in kernel is an advantage.

One possibly surprising advantage of the Linux kernel's approach to drivers is that overall they are actually less "bloated", i.e. they require less code and less storage space on disk for the same functionality. This is due to a well-known drawback of closed-source drivers: by necessity, they pass up opportunities for combining and consolidating driver code.

Another key disadvantage of the closed nature of Windows drivers is that large amounts of important subsystem code are duplicated (in different forms) across different Windows drivers. This duplication is also due to the small number of people who have access to the code: because no one person can see all the code, it is very difficult to factor out and optimize common subsystem code.

https://www.linuxfoundation.org/events/2008/06/the-linux-driver-model-a-better-way-to-support-devices/

There are loads of different USB data acquisition devices out in the world, and one German company sent me a driver a while ago to support their devices. It turns out that I was working on a separate driver for a different company that did much the same thing. So, we worked together and merged the two together, and we now have a smaller kernel. That one driver turned out to work for a few other companies' devices too, so they simply had to add their device id to the driver and never had to write any new code to get full Linux support. The original German company is happy as their devices are fully supported, which is what their customers wanted, and all of the other companies are very happy, as they really didn't have to do any extra work at all. Everyone wins.

http://www.kroah.com/log/linux/ols_2006_keynote.html

This is why Linux kernel maintainers make seemingly improbable claims about their drivers being one third the size of Windows drivers:

Power management is now handled by the core of the kernel, so you don’t have to add that. You just have to add a few hooks in your driver and then you’re done.

The average driver for Linux is about one third the size of an equivalent driver for another operating system, so you have less code to write and maintain.

[ . . . ]

As I said before, your driver for Linux is one third the size of your driver for Windows, so even at this rate of change, writing a driver for Linux is less work than it is for other operating systems.

In Linux, we’ve re-written our USB stack three or four times. Windows has done the same thing, but they had to keep their old USB stack and a lot of their old codes in order to work for those old drivers. So, their maintenance burden goes up over time while ours doesn’t.

https://howsoftwareisbuilt.com/2009/11/18/interview-with-greg-kroah-hartman-linux-kernel-devmaintainer/

The general idea is that driver code is kernel code, so closed-source Linux drivers violate the requirements of the GPLv2 license. The rationale for this development model is explained more here:

The kernel to userspace interface is the one that application programs use, the syscall interface. That interface is very stable over time, and will not break.

[ . . . ]

You think you want a stable kernel interface, but you really do not, and you don't even know it. What you want is a stable running driver, and you get that only if your driver is in the main kernel tree.

https://www.kernel.org/doc/html/v4.10/_sources/process/stable-api-nonsense.txt

Note that the Linux kernel can also be made almost arbitrarily small by excluding modules; this is how things like Tiny Core Linux are done.

https://superuser.com/questions/370586/how-can-a-linux-kernel-be-so-small/370588

https://elinux.org/Kernel_Size_Tuning_Guide

Nevertheless, the result is, as you correctly state, that Windows usually won't boot with some hardware configuration changes.

The differences between Windows and Linux go beyond not booting with different hardware. Some of this is due to licensing restrictions, as anyone who has tried transferring the hard drive from an existing OEM-licensed Windows machine to a new machine will have noticed sooner or later.

UTF-16, not UTF-8.

Correct me if I'm wrong, but isn't most of Linux still pretending that all strings are ASCII? Not saying that UTF-16 is ideal, but the unicode situation seems much better on Windows than Linux from my (admittedly limited) experience with both.

For things like filenames, the Linux kernel is encoding-agnostic: strings are arbitrary byte sequences terminated by a null. Filenames can contain any bytes except nulls or /. This is standard for POSIX kernels.

3.170 Filename

A sequence of bytes consisting of 1 to {NAME_MAX} bytes used to name a file. The bytes composing the name shall not contain the <NUL> or <slash> characters.

https://pubs.opengroup.org/onlinepubs/9699919799/basedefs/V1_chap03.html

Since UTF-16 or UTF-32 may contain nulls (a.k.a. zero bytes), they cannot in general be used to encode Linux filenames. However, there are many other encodings, such as Windows-1252 or ISO 8859-1, that do not contain nulls and so can be used for filenames.
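You can see this in practice from Python, which accepts raw bytes as paths on POSIX systems; a small sketch, assuming a writable /tmp:

```python
import os

# A filename that is valid Latin-1 but not valid UTF-8 (a lone 0xE9 byte):
raw_name = b"/tmp/caf\xe9.txt"
with open(raw_name, "wb") as f:
    f.write(b"the kernel stores the name as-is; no encoding involved\n")

# Listing with a bytes argument returns the raw bytes back:
print([name for name in os.listdir(b"/tmp") if name.startswith(b"caf")])

os.remove(raw_name)
```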

For Linux userspace, such as GTK+ or Qt, the default locale encoding is UTF-8. UTF-8 is a superset of ASCII, so it is backward-compatible with it, and it never introduces null bytes except to encode the null character itself. Libraries like GLib and GTK+ default to UTF-8 but provide override mechanisms.

GLib uses UTF-8 for its strings, and GUI toolkits like GTK+ that use GLib do the same thing.

https://developer.gnome.org/glib/stable/glib-Character-Set-Conversion.html

File extensions are the sole determiner of filetype.

Gotta say, detecting file type based on a magic number doesn't seem like a good way of doing it to me.

Magic numbers are primarily a fallback when e.g. a filename doesn't have a known extension or two different mimetypes have the same extension. To quote the FreeDesktop specification:

  • If a MIME type is provided explicitly (eg, by a ContentType HTTP header, a MIME email attachment, an extended attribute or some other means) then that should be used instead of guessing.

  • If no explicit type is present, the glob rules should be applied to the name to get the type.

  • If no glob rules match, the magic rules should be tried next.

https://specifications.freedesktop.org/shared-mime-info-spec/0.11/ar01s03.html

Note that this is the "recommended checking order", but in practice all the implementations I know of check the glob rule (usually a file extension) first.

In practical terms, this means that, for example, on Linux both public-key files (application/pgp-keys) and Microsoft Publisher files (application/vnd.ms-publisher) can co-exist on the same machine with the same .pub file extension.
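As a rough illustration of that checking order (this is a sketch, not the shared-mime-info implementation; the magic table holds just two sample entries):

```python
import mimetypes

MAGIC = {
    b"%PDF-": "application/pdf",
    b"\x89PNG\r\n\x1a\n": "image/png",
}

def guess_mime(path, explicit=None):
    if explicit:                    # e.g. from a Content-Type header
        return explicit
    guessed, _ = mimetypes.guess_type(path)  # glob rules (extension)
    if guessed:
        return guessed
    with open(path, "rb") as f:     # finally, fall back to magic numbers
        head = f.read(16)
    for magic, mime in MAGIC.items():
        if head.startswith(magic):
            return mime
    return "application/octet-stream"
```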

With extensions, I take a single look at a file name and instantly know what will happen when I try to open/run it, as opposed to Linux, where I run a .py file, only for Linux to launch it as a shell script because someone put a bash shebang at the top.

It's interesting that you mention Python, because the Python 3 interpreter actually does support shebangs on Windows. I was curious, so I tried this myself: using my Windows 10 machine, I made a simple Tkinter script with a Python shebang and ran it from Windows Explorer, then made a copy and changed the shebang to bash (as you described). The script with the wrong shebang exited with an error instead of running as a proper Python script, which is exactly the behavior you describe as undesirable, but on Windows 10 instead of Linux.

In the case of an executable Python script lacking a shebang altogether, Linux will run it with the default interpreter (usually /bin/sh), whereas Windows will use the file extension, and so here the Windows behavior may be less surprising.

I'll admit I'd never heard of anyone putting a bash shebang on a Python script before; is there a longer story here?

Ineffectual read-only permissions semantics.

I believe you're talking about the read-only attribute, which is file-only and afaik just a legacy carry-over from earlier systems.

Yes, the title should be different for this section; all I'm really complaining about is the read-only attribute. I used Windows for many years before trying Linux, and this always seemed to me like it was misleading at best and an unfixable design wart at worst.

If you want to make a folder read-only, just use ACLs, which are much more powerful than Linux rwx permissions.

OK, but let's not pretend Linux is limited to rwx permissions. As best I can tell, Linux has supported standard POSIX ACLs since 2002 or so. SELinux is even older, dating back to 2000, and AppArmor is older still, first used in 1998. The chmod-style rwx Unix permissions are from the early 1970s, and include the ability to make read-only directories.
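For example, a read-only directory takes nothing but those 1970s-era permission bits; a sketch to try as a non-root user on any POSIX system:

```python
import os
import tempfile

d = tempfile.mkdtemp()
os.chmod(d, 0o555)   # r-xr-xr-x: listable and enterable, but not writable

try:
    open(os.path.join(d, "new.txt"), "w")
except PermissionError as e:
    print("write refused, as expected:", e)

os.chmod(d, 0o755)   # make it writable again so cleanup succeeds
os.rmdir(d)
```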

So, this one is factually incorrect, as you can trivially make a folder read-only with a single ACL entry.

The description I gave is perhaps unclear, but not incorrect: the claim is about the read-only attribute, not that read-only folders are impossible to achieve on Windows. The prior paragraphs indicate this limited scope.

In any case, wouldn't it be nice if we could actually make a folder have read-only behavior using the read-only attribute in Windows Explorer? That would be even easier than making an ACL entry.

Limited default debugging tools.

By default, yeah, but installing WinDbg takes about 10 seconds in MS Store, and it's a much more capable debugger than gdb.

The original argument hinges on what software is available by default... so I guess we agree on this point?

Otherwise this sounds like yet another "application x is better than application y" argument that has no clear decision criteria.

The point I was making is that the basic tools required to diagnose and repair problems are available on every Linux box, but not on every Windows box.

DebugView is portable, you can just take the .exe with you on a flash drive.

Default software still matters, even in an era of ubiquitous internet and large hard drives.

If we're talking about a personal machine with a good internet connection and permissions to install whatever we want, then sure, installing non-default software isn't a big deal. But many people use desktop machines maintained by a corporation, university, non-profit, government agency, etc.

If you're familiar with using computers in an institutional setting such as a national laboratory, you may know that plugging in any kind of flash drive is a big no-no that could result in suspension of user access. Sometimes system administrators literally fill in the USB ports with epoxy or silicone to prevent them from being used.

Posey notes that “one of the oldest and most effective techniques for controlling the use of USB storage devices involves pumping the workstation’s USB ports full of epoxy. This makes it physically impossible for a user to plug a USB device into his or her workstation.”

https://fedtechmagazine.com/article/2017/07/4-ways-prevent-leaks-usb-devices

Sean Greene, a security consultant at Evidence Solutions, advises his clients to use a clear silicone caulk and fill every USB port on every PC to prevent USB attachments.

https://www.cio.com/article/2400017/how-to-prevent-thumb-drive-security-disasters.html

Many IT departments also do things like restricting some machines to local intranet only, and they certainly don't let regular users download and install any software they want.

Consider a Windows box at a national lab set up to run an electron microscope or a Raman spectrometer. Suppose a user is collecting data, but in the middle of a scan the data collection application becomes unresponsive, and so the user calls up their local IT department for help. What tools would a regular user have to rely on for debugging the misbehaving application?

Lack of granular execution access control

Having a single root account is a really bad idea in domain environments, and this decision carried over to personal Windows editions. Allowing binaries to automatically run as root has created many security vulnerabilities on Linux, where a binary with the SUID bit set accidentally allows you to run shellcode as root.

I'm not sure which specific vulnerability you're referring to, so it's a little hard to address your concern. If bugs in setuid on Linux allowed unprivileged users with no sudo capabilities to run arbitrary executables as root, that would be quite bad. But I don't know of any case where this has actually happened:

Bugs that a user with no sudo access at all can exploit are essentially unheard of.

https://security.stackexchange.com/questions/223154/why-not-use-sudo-instead-of-setuid-setgid

If I'm understanding the argument here, it's that Linux has a per-executable setuid flag, which can be set wrong, whereas Windows does not have a per-executable setuid flag, so it can't be set wrong.

So, yeah, convenience is nice, but here, it's at the expense of security.

I'm not convinced there's a tradeoff between convenience and security here, at least not in a way where Windows has an advantage over Linux.

Let's talk about why the setuid flag exists in the first place: programs like ping need to open a raw socket, which requires root privileges. Rather than not letting regular users use ping, system administrators set a special flag on the /bin/ping binary that changes the effective user ID to root's user ID, allowing the process to open a raw socket.

Recent versions of Linux have relaxed the restriction on ICMP Echo sockets, so ping no longer requires setuid to be set.
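If you're curious which binaries on a given machine still rely on the mechanism, the bit is easy to check; a small sketch (paths vary by distro, hence the guard):

```python
import os
import stat

for path in ("/bin/ping", "/usr/bin/sudo", "/usr/bin/mount"):
    try:
        mode = os.stat(path).st_mode
    except FileNotFoundError:
        continue   # path differs on this distro
    flag = "setuid" if mode & stat.S_ISUID else "no setuid"
    print(path, flag)
```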

And how does Windows 10 let ordinary users run ping? I'm not sure. Maybe a friendly Windows geek can tell us. :-)

However, I do know that Windows has analogous trade-offs when running a process as a different user. One common method involves storing credentials on disk in advance and then accessing that administrator password to run the program, which I think most security-minded folks would agree is more problematic than setting a flag on an executable.

Once you /savecred, you're saving your admin password to the users profile, UNCONDITIONALLY, for them to use any time, any way they like. That means, once saved, they can launch a console window (CMD prompt), type in "runas /savecred /user:administrator cmd.exe" and instantly launch a new command console with full admin rights to do anything they want. You probably do not want them to be able to do this!

https://superuser.com/questions/581548/runas-savecred-ask-for-password-if-another-user-runs-the-same-batch-file

So this sounds more like a "multi-user permissions are hard to configure correctly on any OS" problem, not a "setuid is bad" problem. SUID is still a useful tool in the toolbag for system administrators of POSIX-style operating systems, and on Linux it's essential for commands like mount and sudo.

Default software.

Default software is bloat, especially development tools that 99% of Windows users never use.

The examples I gave are mainly development tools, but POSIX utilities aren't necessarily development tools: they are general-purpose tools such as date, cd, ls, and grep. (Yes, Windows provides findstr, but its name and usage are different from grep's.)

After learning shell commands like grep, exploratory or one-off tasks like finding all dictionary words ending in "gry" or extracting a list of unique email addresses suddenly become much easier. These aren't "development" tasks in the usual sense.
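For instance, the "gry" task, assuming the usual wordlist location on Linux (the shell version is shorter still: grep 'gry$' /usr/share/dict/words):

```python
# /usr/share/dict/words is the conventional wordlist path; install a
# "words" package if your distro doesn't ship one.
with open("/usr/share/dict/words") as f:
    words = [line.strip() for line in f]
print([w for w in words if w.endswith("gry")])   # e.g. angry, hungry
```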

I also don't agree that frequency of use determines bloat. For example, fsck is not a standard POSIX tool and is a command I almost never use, but when dealing with a failing hard drive it is essential. (CHKDSK is the Windows equivalent.)

Not saying Windows doesn't have enough of its own bloat, but I don't see any reason to add more.

POSIX tools aren't background processes that take up CPU or RAM, they are standalone utilities that run when called and then exit. Cygwin more or less achieves a POSIX environment on Windows, and takes up less than a hundred megabytes on disk for the full base package. If you can't spare 100 MB of disk space, there are other options like Gow that do things like removing debugging symbols and so require less than 20 MB. I'd hardly call that "bloat"; the latest version of Microsoft Visual C++ Redistributable alone requires more than 20 MB, and that's just for a single version.

The reason for providing POSIX utilities by default is simple: it provides a consistent cross-platform set of tools that users can rely on. (System administrators and developers are users, too.)

PowerShell 5 is installed by default on Windows 10 - you can't compare modern Linux with a version of Windows from 2012.

The article is dated 2014, and so is a comparison between 2014-era Windows and 2014-era Linux. I intentionally chose to make comparisons on things that weren't likely to change, though, and indeed not much has changed.

Windows 8 was relatively new at the time, and most folks were hesitant to upgrade. Windows 10 wasn't released until July 2015, and Windows 7 was still supported until about a year ago (January 2020).

Also, Windows GUI system config is good enough that again, most normal users don't need to use a shell.

Linux users, including me, also use graphical configuration programs and graphical package managers. However, sometimes the shell is indispensable, such as when running programs with debugging flags enabled or checking for error messages in console output. This is why it's a good thing that both cmd.exe and PowerShell are installed by default. I think we can agree on that.

Software configuration: registries and text files.

While I agree that the Registry is not a great implementation of the underlying idea, it still imo seems cleaner than dumping random text files into /etc, each with a different format.

Cleaner in what way?

The files in /etc/ have different purposes, and so it makes sense that e.g. /etc/fstab has a different format than /etc/localtime. (Note that /etc/localtime is a binary format, not a text file.)

The structure of registry entries also varies greatly, so I'm not sure I buy the "each with a different format" argument in the general case.

Also, the blame doesn't really lie with Microsoft here, but with app developers who refuse to write their apps the way they are supposed to according to system standards.

Perhaps, but the point isn't whose fault it was, but which OS has the better overall outcome.

Linux also has issues with application developers refusing to adhere to the XDG Base Directory Specification.

Package manager with signed binaries.

Package management on Windows is still terrible, no question.

For Python packages, I'd say the situation is equally bad on Linux and Windows.

For general-purpose packaging like system updates, C library updates, or setting up a web server, I'd say most Linux distributions are better off than Windows 10.

But with winget and its upcoming support for MS Store, there is finally an official package manager which is gaining traction quite fast. Also, UWP is the best realization of a sandboxed package environment I've seen to date, the issue is mostly with distribution.

That's nice, but can you use winget to do Windows OS updates?

Linux distributions don't just use package managers to install and update applications or extra libraries, they use them to update and configure the entire distribution, including the OS itself. Security updates to the kernel, web server, the C standard library, and everything else all happen through the same mechanism. (This is one area I plan to emphasize more in the main article.)

Fixing configuration problems with commands instead of GUIs.

for normal users, GUI settings are much less error-prone

Depends on the setting and the user. Here's how I have to enable middle click on the touchpad of a Thinkpad T420 running Windows 10:

  1. Open regedit.

  2. Search for "TrackPointMode". (Check "Values" and "Match whole string only").

  3. Change "TrackPointMode" to hex 2214 (decimal 8724).

  4. Search for "MiddleButtonAction".

  5. Change "MiddleButtonAction" to hex 4 (decimal 4).

  6. Log out / login again, or reboot.

This is technically a GUI setting, but I would find it hard to argue it's easier than running a script. In fact, I wish I knew enough about Windows configuration to write a script for this; it's always annoying.
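Since Python ships a registry module in its standard library, here's my attempt at such a script, offered as a hedged sketch: I don't know the exact key path for the TrackPoint driver (it seems to vary by driver version), so it mirrors regedit's search step by walking HKEY_LOCAL_MACHINE. Run it as administrator, and on a test machine first:

```python
import winreg

# The value names and data from the regedit steps above:
TARGETS = {"TrackPointMode": 0x2214, "MiddleButtonAction": 0x4}

def has_value(key, name):
    try:
        winreg.QueryValueEx(key, name)
        return True
    except OSError:
        return False

def walk(root, path=""):
    try:
        key = winreg.OpenKey(root, path)        # read-only handle
    except OSError:
        return                                  # no permission; skip subtree
    with key:
        hits = [n for n in TARGETS if has_value(key, n)]
        if hits:                                # reopen writable only on a hit
            with winreg.OpenKey(root, path, 0, winreg.KEY_SET_VALUE) as wkey:
                for name in hits:
                    winreg.SetValueEx(wkey, name, 0, winreg.REG_DWORD,
                                      TARGETS[name])
                    print("set", name, "under", path)
        i = 0
        while True:                             # recurse into subkeys
            try:
                sub = winreg.EnumKey(key, i)
            except OSError:
                break
            walk(root, path + "\\" + sub if path else sub)
            i += 1

walk(winreg.HKEY_LOCAL_MACHINE)
```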

There's also muscle memory and version changes to consider. I've seen a lot of changes in how the Control Panel is arranged between Windows 95 and Windows 10, but Windows configuration commands stay the same. (In any case, major desktop environments like GNOME, KDE, and Xfce provide graphical methods for changing settings in a manner analogous to the Windows Control Panel.)

The FreeDesktop mimeapps.list file, for all its shortcomings, can transfer hundreds of default applications in a single text file. Replicating those default applications by hand on Windows 10 is much more painful and error prone.
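And since mimeapps.list is plain INI-style text, even a stock Python install can read it; a small sketch, assuming the common per-user location:

```python
import configparser
import os

# Typical per-user location; system-wide copies live under /usr/share.
path = os.path.expanduser("~/.config/mimeapps.list")

# Entries look like: application/pdf=org.gnome.Evince.desktop
parser = configparser.ConfigParser(delimiters=("=",), interpolation=None,
                                   strict=False)
parser.read(path)
if parser.has_section("Default Applications"):
    for mime, desktop in parser.items("Default Applications"):
        print(mime, "->", desktop)
```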

Remote administration.

There is PowerShell remoting, which is a much more full-featured alternative to ssh. Still, if you really want ssh, you can install it with a single command.

I'm not exactly sure what you mean by "full-featured" in this context, but one of the main purposes of using ssh is to log into a remote machine and perform an OS update for e.g. security patches, and if necessary reboot the machine afterward. As best I understand, by design this is not possible via PowerShell remoting due to the choice of security model. I'll admit I only have secondhand knowledge of this, however.

Remote desktop server & client are built into Windows Pro and above, and it's still the best implementation of remote desktop I've seen. I'd much rather remote into a machine than use ssh for text commands.

OK, but we're talking about remote administration here, and not all servers have a graphical desktop. There are plenty of Linux servers that don't, and even some Windows servers are headless and so can't be administered graphically.

Not responding to pings is quite a sane default for public networks.

Maybe, but by default Windows 10 doesn't respond on private networks either. From experience, I've found that pinging a Windows 10 laptop with an Ethernet connection on a home network gets the same response as when using the wifi at a coffee shop. (Yes, this can be changed in the firewall settings, but we're talking about defaults here.)

Public bug trackers.

There's Feedback Hub now, which is quite good.

As best I can tell, Feedback Hub is a Windows-only proprietary desktop application, not a publicly accessible bug tracker like Mozilla Bugzilla or Google Monorail.

A publicly accessible bug tracker is often embarrassing to the developers of a project. It's a comprehensive list of security problems, defects, regressions, and user complaints, all on display for the world to see. Since it's on the web, anyone can share a permanent link to it, save a copy, or store it in the Internet Archive. Not only can you see the bugs, you can see how long they've been open and how long the developers took to fix them. It completely changes the developer/user accountability power dynamic.

From this perspective, the fact that large organizations like Debian, the Linux kernel, Red Hat, Ubuntu, Chromium, and Mozilla still use public bug trackers is kind of amazing.

I'd argue the issue is mostly about users - when most of your system's user base are nerds and developers, public bug trackers work much better. Ever looked at GitHub issues of an OSS project used by non-technical users?

I don't accept the premise of this argument, because:

  1. There is no "technical" and "non-technical" binary, but rather countless overlapping fields of knowledge. For example, a user may be highly competent in Excel but know little about system administration or databases.

  2. In many areas of computing, there is an unhealthy attitude of contempt for inexperienced users. Blaming the user and making PEBKAC jokes is easy, but making robust and accessible software is hard. Even the most naive and poorly explained bug report can still provide insight into a user's experience and how to make better software. (This assumes good faith; bad faith bug reports are a different topic.)

  3. I've seen the "technical users" stereotype used by IT departments to deny support for Linux users, roughly "if someone knows enough to install Linux, they know enough to fix their own computer". At one point my Linux laptop was experiencing intermittent 30-second packet delays, which my university's IT department tried to convince me was the fault of my machine since my Windows laptop was working fine. Later they admitted that my Linux laptop's IP address had been allocated to a DHCP pool at the same time as it was statically allocated to a different device. There are many issues that a user cannot readily debug or fix on their own, no matter how "technical" the user is.

  4. Flame wars amongst nerds and developers are vicious and pervasive. Seriously, it's not just acrimonious GitHub issues from non-technical users, it's a serious detriment to community cooperation in free and open source software projects everywhere. People are drawn to controversy; I don't think it's an accident that my most-starred GitHub project is an aging, lopsided opinion piece about Linux and Windows that is of little practical use to anyone.

Malware.

So, the whole point of this is that nobody uses desktop Linux, so there isn't any malware targeted at it? Well, yeah.

No, that isn't the point at all. Desktop Linux is used en masse at universities, military facilities, government research labs, and other high-value targets. Accordingly, there's plenty of sophisticated malware for desktop Linux; for some recent examples, consider the Drovorub rootkit developed by APT28 or the RansomEXX ransomware trojan.

The distinction is in how software is installed: major Linux distributions use package managers, and the packages are closely monitored by maintainers. Windows users run .exe and .msi files downloaded from various websites. (The Windows Store is missing major software and it has its own malware issues.)

On Windows, even installing open source software like GIMP or nmap is fraught with peril, and I sincerely wish it wasn't.

Conclusion and final notes.

Thanks for your thoughtful "corrections and counters", as you put it. Note that other folks have suggested that there should be a dedicated document with reasons why Windows is preferable to Linux. I'm not the right person to do that, but you might be. :-)