ogham / exa

A modern replacement for ‘ls’.
https://the.exa.website/
MIT License

Exa is terribly slow with '-l' (long) mode in a specific environment #141

Open wookayin opened 7 years ago

wookayin commented 7 years ago

Exa 0.4.0 on Linux x86-64 (Ubuntu 14.04)

$ exa --version
exa 0.4.0
$ uname -a
Linux XXXXXXXX 3.16.0-60-generic #80~14.04.1-Ubuntu SMP Wed Jan 20 13:37:48 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

exa -l is terribly slow in this environment.

~/.dotfiles$ time exa | wc -l
17

real    0m0.076s
user    0m0.001s
sys     0m0.076s

~/.dotfiles$ time exa -l | wc -l
17

real    0m1.788s
user    0m0.005s
sys     0m1.841s

~/.dotfiles$ time command ls -l  | wc -l
18

real    0m0.004s
user    0m0.002s
sys     0m0.004s

This can be reproduced both in a git repository and in a non-git repository. I suspect that in long mode, exa tries to access some file metadata and that access is very slow (but I do not know which call is the culprit).

I do not experience the performance degradation on my laptop (macOS 10.11.6). In addition, on another Linux system I have access to (Linux 4.8.6-x86_64, Ubuntu 16.04), the issue does not occur (exa -alh is very fast). It looks like this issue is environment-specific.

wookayin commented 7 years ago

I inspected the strace log.

$ time strace exa -lah

... (omitted) ...

futex(0x7f585f027644, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027640, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f0240f0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f027644, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027640, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f0240f0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f027004, FUTEX_WAIT_PRIVATE, 1, NULL) = 0
futex(0x7f585f024000, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f585f024390, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
futex(0x7f585f024390, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f585f0275c4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f0275c0, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f024420, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f024480, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
futex(0x7f585f024480, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f585f027584, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027580, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f024510, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f024570, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
futex(0x7f585f024570, FUTEX_WAKE_PRIVATE, 1) = 0
futex(0x7f585f027544, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027540, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f024600, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f027504, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027500, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f0246f0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f0274c4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f0274c0, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f0247e0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f027484, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027480, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f0248d0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f027444, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027440, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f0249c0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f027404, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027400, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f024ab0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f0273c4, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f0273c0, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f024ba0, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f027384, FUTEX_WAKE_OP_PRIVATE, 1, 1, 0x7f585f027380, {FUTEX_OP_SET, 0, FUTEX_OP_CMP_GT, 1}) = 1
futex(0x7f585f024c90, FUTEX_WAKE_PRIVATE, 1) = 1
futex(0x7f585f024cf0, FUTEX_WAIT_PRIVATE, 2, NULL) = 0
futex(0x7f585f024cf0, FUTEX_WAKE_PRIVATE, 1) = 0

... (omitted) ...

munmap(0x7f586127e000, 8192)            = 0
exit_group(0)                           = ?
+++ exited with 0 +++
strace exa -lah  0.01s user 1.92s system 100% cpu 1.909 total

The futex calls appear repeatedly and account for most of the run time, so this looks like a concurrency problem (the threads are contending on a lock).

wookayin commented 7 years ago

I also built exa from the current master:

$ cargo build --release --no-default-features --verbose

which also shows the performance degradation with -l. However, the strace output reads differently:

...
clone(child_stack=0x7facf07feeb0, flags=CLONE_VM|CLONE_FS|CLONE_FILES|CLONE_SIGHAND|CLONE_THREAD|CLONE_SYSVSEM|CLONE_SETTLS|CLONE_PARENT_SETTID|CLONE_CHILD_CLEARTID, parent_tidptr=0x7facf07ff9d0, tls=0x7facf07ff700, child_tidptr=0x7facf07ff9d0) = 34386
mmap(NULL, 2101248, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS|MAP_STACK, -1, 0) = 0x7faceffff000
mprotect(0x7faceffff000, 4096, PROT_NONE) = 0
...
wookayin commented 7 years ago

Another odd thing: if we use the -G option along with -l, exa becomes very fast.

ogham commented 7 years ago

There have been some changes to the source code recently (finally), and updates to libraries. I’ll try to grab some Ubuntu images and see if this thing is still happening.

ogham commented 7 years ago

Here's a clue:

Another odd thing: if we use the -G option along with -l, exa becomes very fast.

There is actually no parallelisation in the grid-details view! Currently, the code that stats files in parallel is very closely tied to the code that populates the table, so by filling its table differently, the grid-details view has to forgo the parallelisation.

There is a mutex being used behind the scenes: it locks a cache of users when displaying users and groups. This means that either rendering the user/group cell is taking too long, or fetching a user's or group's details is taking too long. Or something else entirely...

ogham commented 7 years ago

GitHub, “May fix #141” is not the same as “fix #141”!

wookayin commented 7 years ago

Is it really fixed? I just checked out the latest master branch (tagged v0.7.0), but it doesn't seem to have gotten significantly faster.

❯❯❯ target/release/exa --version
exa 0.7.0
❯❯❯ time target/release/exa -alh > /dev/null
target/release/exa -alh > /dev/null  0.00s user 2.67s system 131% cpu 2.031 total
ogham commented 7 years ago

Damn, I was hoping that the double-mutex was causing it. 2.67s is definitely still broken. Oh well, thanks for the update.

I was unable to reproduce it using an Ubuntu VM, so I’ll have to try a live CD next.

ogham commented 6 years ago

Time marches on.

The other day I spun up a DigitalOcean droplet with like 12 cores and had exa list the entire filesystem in a tree just to see if I could reproduce this. I'm sorry to say that I could not.

exa now has some debugging output (EXA_DEBUG=1). Combined with something like ts to timestamp each line, this could give me a bit more insight. Could I get you to try running it again?

wookayin commented 6 years ago

Sure. With the latest master, I still experience the same problem.

❯❯❯ git rev-parse HEAD
b85f8f249bea321e97ab4f1aed93f924e324eae2
❯❯❯ EXA_DEBUG=1 target/release/exa -v
exa v0.7.0

The output:

❯❯❯ time (EXA_DEBUG=1 target/release/exa -l 2>&1 | ts)
 9월 29 02:27:05 DEBUG:exa: Dir action from arguments: List
 9월 29 02:27:05 DEBUG:exa: Filter from arguments: FileFilter {
 9월 29 02:27:05     list_dirs_first: false,
 9월 29 02:27:05     sort_field: Name(
 9월 29 02:27:05         AaBbCc
 9월 29 02:27:05     ),
 9월 29 02:27:05     reverse: false,
 9월 29 02:27:05     dot_filter: JustFiles,
 9월 29 02:27:05     ignore_patterns: IgnorePatterns {
 9월 29 02:27:05         patterns: []
 9월 29 02:27:05     }
 9월 29 02:27:05 }
 9월 29 02:27:05 DEBUG:exa: View from arguments: Details(
 9월 29 02:27:05     Options {
 9월 29 02:27:05         table: Some(
 9월 29 02:27:05             Table(Columns {
 9월 29 02:27:05                 time_types: TimeTypes {
 9월 29 02:27:05                     accessed: false,
 9월 29 02:27:05                     modified: true,
 9월 29 02:27:05                     created: false
 9월 29 02:27:05                 },
 9월 29 02:27:05                 inode: false,
 9월 29 02:27:05                 links: false,
 9월 29 02:27:05                 blocks: false,
 9월 29 02:27:05                 group: false,
 9월 29 02:27:05                 git: false
 9월 29 02:27:05             })
 9월 29 02:27:05         ),
 9월 29 02:27:05         header: false,
 9월 29 02:27:05         xattr: false
 9월 29 02:27:05     }
 9월 29 02:27:05 )
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "."
 9월 29 02:27:05 INFO:exa::fs::dir: Reading directory "."
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./Cargo.toml"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./target"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./LICENCE"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./xtests"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./Makefile"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./README.md"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./contrib"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./Vagrantfile"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./screenshots.png"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./Cargo.lock"
 9월 29 02:27:05 DEBUG:exa::fs::file: Statting file "./src"
 9월 29 02:27:09 .rw-r--r--  24k wookayin 29  9월  2:20 Cargo.lock
 9월 29 02:27:09 .rw-r--r-- 1.1k wookayin 29  9월  2:20 Cargo.toml
 9월 29 02:27:09 drwxr-xr-x    - wookayin 29  9월  2:20 contrib
 9월 29 02:27:09 .rw-r--r-- 1.1k wookayin  5  7월 18:30 LICENCE
 9월 29 02:27:09 .rw-r--r-- 3.0k wookayin  5  7월 18:30 Makefile
 9월 29 02:27:09 .rw-r--r-- 6.0k wookayin 29  9월  2:20 README.md
 9월 29 02:27:09 .rw-r--r-- 454k wookayin  5  7월 18:30 screenshots.png
 9월 29 02:27:09 drwxr-xr-x    - wookayin 29  9월  2:20 src
 9월 29 02:27:09 drwxr-xr-x    - wookayin  8  7월  2:21 target
 9월 29 02:27:09 .rw-r--r--  27k wookayin 29  9월  2:20 Vagrantfile
 9월 29 02:27:09 drwxr-xr-x    - wookayin 29  9월  2:20 xtests
( EXA_DEBUG=1 target/release/exa -l 2>&1 | ts; )  0.04s user 0.00s system 1% cpu 3.297 total

The log up to the last "Statting file" line is printed immediately. Sorry to say that the debug output doesn't print anything while the process is hung.

wookayin commented 6 years ago

One additional observation: while the hang is happening (during the repeated strace entries of futex and clone+mmap...), multiple threads seem to be spawned and doing something. Part of the lsof output:

exa       184961 184962   wookayin  cwd       DIR                8,3     4096 14566582 /home/wookayin/exa
exa       184961 184962   wookayin  rtd       DIR                8,3     4096        2 /
exa       184961 184962   wookayin  txt       REG                8,3  3246709 14571955 /home/wookayin/exa/target/release/exa
exa       184961 184962   wookayin  mem       REG                8,3  8463488   793134 /usr/lib/locale/locale-archive
exa       184961 184962   wookayin  mem       REG                8,3  1840928  8126563 /lib/x86_64-linux-gnu/libc-2.19.so
exa       184961 184962   wookayin  mem       REG                8,3    92504  8126737 /lib/x86_64-linux-gnu/libgcc_s.so.1
exa       184961 184962   wookayin  mem       REG                8,3   141574  8126564 /lib/x86_64-linux-gnu/libpthread-2.19.so
exa       184961 184962   wookayin  mem       REG                8,3    31792  8126566 /lib/x86_64-linux-gnu/librt-2.19.so
exa       184961 184962   wookayin  mem       REG                8,3    14664  8126494 /lib/x86_64-linux-gnu/libdl-2.19.so
exa       184961 184962   wookayin  mem       REG                8,3   149120  8126560 /lib/x86_64-linux-gnu/ld-2.19.so
exa       184961 184962   wookayin    0u      CHR             136,91      0t0       94 /dev/pts/91
exa       184961 184962   wookayin    1w     FIFO                0,8      0t0 69982105 pipe
exa       184961 184962   wookayin    2w     FIFO                0,8      0t0 69982105 pipe
gyscos commented 6 years ago

I can confirm I have the same issue on a machine with 24 cores (48 threads).

Here is the complete output of strace -C exa -l in a directory with only one empty file: https://pastebin.com/1B4e9a3N.

As the timings show, the slow parts are the mmap and mprotect calls, more than the futex calls. Note: this problem does not happen on another machine with the same number of threads, and I'm not sure how to identify the differences. Both are on Rust stable 1.21. The machine where exa is slow is not quite as up to date, but both are Ubuntu 16.04 systems.

EDIT: Pastebin'ed.

EDIT #2: Holy moly, I didn't change a thing, and now exa runs instantly on both machines. It is a strange world.

ghuls commented 5 years ago

$ exa --version
exa v0.8.0

#
$ time ls /usr
bin  games  include  lib  libexec  local  sbin  share  src

real    0m0.004s
user    0m0.000s
sys 0m0.000s

# Similar in time to ls.
$ time exa --color=never /usr
bin  games  include  lib  libexec  local  sbin  share  src

real    0m0.006s
user    0m0.000s
sys 0m0.004s

$ time ls -l /usr
total 128
drwxr-xr-x   2 root root 61440 Sep 23 06:47 bin
drwxr-xr-x   2 root root  4096 Apr 24 13:26 games
drwxr-xr-x  82 root root 20480 Aug 30 13:23 include
drwxr-xr-x 113 root root 12288 Sep 28 06:27 lib
drwxrwxr-x   3 root root  4096 Jun 27  2017 libexec
drwxr-xr-x  13 root root  4096 Dec 13  2017 local
drwxr-xr-x   2 root root 12288 Oct  3 06:27 sbin
drwxr-xr-x 221 root root  4096 Sep 27 16:04 share
drwxr-xr-x   9 root root  4096 Aug 26 06:48 src

real    0m0.005s
user    0m0.004s
sys 0m0.000s

# 23 times slower than "ls -l"
$ time exa -l -g --color=never /usr
drwxr-xr-x - root root 23 Sep  6:47 bin
drwxr-xr-x - root root 24 Apr 13:26 games
drwxr-xr-x - root root 30 Aug 13:23 include
drwxr-xr-x - root root 28 Sep  6:27 lib
drwxrwxr-x - root root 27 Jun  2017 libexec
drwxr-xr-x - root root 13 Dec  2017 local
drwxr-xr-x - root root  3 Oct  6:27 sbin
drwxr-xr-x - root root 27 Sep 16:04 share
drwxr-xr-x - root root 26 Aug  6:48 src

real    0m0.116s
user    0m0.008s
sys 0m0.188s

# Trace exa and its child processes.
strace -f exa -l -g --color=never /usr
...
[pid 80524] exit_group(0)               = ?
[pid 80566] futex(0x7fad7da1e300, FUTEX_WAIT_PRIVATE, 2, NULL <ptrace(SYSCALL):No such process>
[pid 80566] +++ exited with 0 +++
[pid 80540] +++ exited with 0 +++
[pid 80543] +++ exited with 0 +++
[pid 80545] +++ exited with 0 +++
[pid 80544] +++ exited with 0 +++
[pid 80541] +++ exited with 0 +++
[pid 80539] +++ exited with 0 +++
[pid 80553] +++ exited with 0 +++
[pid 80554] +++ exited with 0 +++
[pid 80552] +++ exited with 0 +++
[pid 80538] +++ exited with 0 +++
[pid 80537] +++ exited with 0 +++
[pid 80536] +++ exited with 0 +++
[pid 80535] +++ exited with 0 +++
[pid 80534] +++ exited with 0 +++
[pid 80533] +++ exited with 0 +++
[pid 80547] +++ exited with 0 +++
[pid 80546] +++ exited with 0 +++
[pid 80549] +++ exited with 0 +++
[pid 80555] +++ exited with 0 +++
[pid 80548] +++ exited with 0 +++
[pid 80557] +++ exited with 0 +++
[pid 80556] +++ exited with 0 +++
[pid 80567] +++ exited with 0 +++
[pid 80560] +++ exited with 0 +++
[pid 80563] +++ exited with 0 +++
[pid 80561] +++ exited with 0 +++
[pid 80568] +++ exited with 0 +++
[pid 80562] +++ exited with 0 +++
[pid 80559] +++ exited with 0 +++
[pid 80558] +++ exited with 0 +++
[pid 80565] +++ exited with 0 +++
[pid 80551] +++ exited with 0 +++
[pid 80571] +++ exited with 0 +++
[pid 80550] +++ exited with 0 +++
[pid 80573] +++ exited with 0 +++
[pid 80569] +++ exited with 0 +++
[pid 80570] +++ exited with 0 +++
[pid 80542] +++ exited with 0 +++
[pid 80582] +++ exited with 0 +++
[pid 80564] +++ exited with 0 +++
[pid 80575] +++ exited with 0 +++
[pid 80532] +++ exited with 0 +++
[pid 80574] <... futex resumed> )       = ? <unavailable>
[pid 80572] +++ exited with 0 +++
[pid 80577] <... futex resumed> )       = ? <unavailable>
[pid 80577] +++ exited with 0 +++
[pid 80574] +++ exited with 0 +++
[pid 80576] +++ exited with 0 +++
[pid 80531] +++ exited with 0 +++
[pid 80578] +++ exited with 0 +++
[pid 80584] +++ exited with 0 +++
[pid 80530] +++ exited with 0 +++
[pid 80579] +++ exited with 0 +++
[pid 80583] +++ exited with 0 +++
[pid 80529] +++ exited with 0 +++
[pid 80581] +++ exited with 0 +++
[pid 80528] +++ exited with 0 +++
[pid 80527] +++ exited with 0 +++
[pid 80526] +++ exited with 0 +++
[pid 80525] +++ exited with 0 +++
[pid 80580] +++ exited with 0 +++
+++ exited with 0 +++

exa seems to spawn a thread for each core it finds; on this server it spawns 60 threads. It would be nice if exa provided an option to limit the number of threads it spawns, or if it spawned at most 4 threads or so by default, to avoid this problem.

saethlin commented 5 years ago

I may have solved this in #510, it would be nice to know if my PR fixes the complaints here.

I've added a threadpool for making the lstat and stat syscalls, which can be very slow in the environment where I use exa. By default my PR launches one thread per CPU, but the number of threads can be controlled manually by setting EXA_IO_THREADS.

skoskie commented 5 years ago

I can confirm that the issue remains in 0.9.0, and also on macOS (10.14.6) with only 4 cores.

My test directory has only 32 files. The command is simply my default alias (line breaks added for readability) ...

exa --color auto 
    --all --group-directories-first 
    --long 
    --binary 
    --group 
    --header 
    --modified  
    --sort=name  
    --git  
    --time-style=long-iso  
    --classify

It took 7.5s to run, even after multiple consecutive runs. (Note: the time command in the screenshot seems to truncate the full command that was run.)

Screen Shot 2019-08-27 at 10 29 29 AM

Adding the -G option drops the time to 0.02s, but also drops other flags, like the headers and git info, etc.

Running dtruss (the Mac equivalent of strace), it's clear that the major offender is the psynch_mutexwait calls.

2019-08-27 EXA dtruss output.txt

Hopefully one more data point helps a bit.

saethlin commented 5 years ago

@skoskie Did you run dtruss -f? It looks like all you recorded is the main thread managing a threadpool, so it's no surprise that it spends a lot of time waiting for the workers to finish jobs.

thomasf commented 4 years ago

I have also bumped into this issue with exa on some networked file systems, where ls takes about half a second and exa takes up to 30 seconds to complete the same wide view.

I think we need tests for this. There should be some simulated FUSE file system out there that exhibits the same "bad" behaviour in a repeatable way.

A speculative, brief observation of the behaviour:

Taking a very simplistic overview, it seems like exa is "wasting" time on the wrong syscalls even when the information isn't used(?).

Also, ls -l only uses lstat, while exa -l uses statx with stx_mask=STATX_ALL, which I guess means fetching everything.

I haven't done any kind of real comparison here, and I haven't even read the code in exa or ls to confirm exactly how they work, but the syscall traces sure seem to indicate that exa could fetch less data.

▶22 ~/tmp/test
:49 tf@whale$ ls
hello.c  profile.zip                                   profile.zip.2019-12-19T01-03-14.950275+01-00  symling
main.go  profile.zip.2019-12-19T00-58-55.600109+01-00  profiling-127.0.0.1:9000.pprof
▶22 ~/tmp/test
:49 tf@whale$ exa
hello.c  profile.zip                                   profile.zip.2019-12-19T01-03-14.950275+01-00  symling
main.go  profile.zip.2019-12-19T00-58-55.600109+01-00  profiling-127.0.0.1:9000.pprof
▶22 ~/tmp/test
:49 tf@whale$ strace ls 2>&1 | grep stat
fstat(3, {st_mode=S_IFREG|0644, st_size=237735, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=163240, ...}) = 0
fstat(3, {st_mode=S_IFREG|0755, st_size=2025032, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=539176, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=18816, ...}) = 0
fstat(3, {st_mode=S_IFREG|0755, st_size=158288, ...}) = 0
statfs("/sys/fs/selinux", 0x7ffee96fb980) = -1 ENOENT (No such file or directory)
statfs("/selinux", 0x7ffee96fb980)      = -1 ENOENT (No such file or directory)
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=8634688, ...}) = 0
fstat(3, {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
fstat(1, {st_mode=S_IFIFO|0600, st_size=0, ...}) = 0
▶22 ~/tmp/test
:49 tf@whale$ strace exa 2>&1 | grep stat
fstat(3, {st_mode=S_IFREG|0644, st_size=237735, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=108928, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=18816, ...}) = 0
fstat(3, {st_mode=S_IFREG|0755, st_size=158288, ...}) = 0
fstat(3, {st_mode=S_IFREG|0644, st_size=100888, ...}) = 0
fstat(3, {st_mode=S_IFREG|0755, st_size=2025032, ...}) = 0
fstat(3, {st_mode=S_IFREG|0444, st_size=0, ...}) = 0
statx(0, NULL, AT_STATX_SYNC_AS_STAT, STATX_ALL, NULL) = -1 EFAULT (Bad address)
statx(AT_FDCWD, ".", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFDIR|0775, stx_size=4096, ...}) = 0
fstat(3, {st_mode=S_IFDIR|0775, st_size=4096, ...}) = 0
statx(AT_FDCWD, "./profile.zip.2019-12-19T00-58-55.600109+01-00", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=379, ...}) = 0
statx(AT_FDCWD, "./main.go", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=532, ...}) = 0
statx(AT_FDCWD, "./profile.zip.2019-12-19T01-03-14.950275+01-00", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=23985, ...}) = 0
statx(AT_FDCWD, "./profile.zip", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=56998, ...}) = 0
statx(AT_FDCWD, "./symling", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFLNK|0777, stx_size=11, ...}) = 0
statx(AT_FDCWD, "./profiling-127.0.0.1:9000.pprof", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFREG|0600, stx_size=56824, ...}) = 0
statx(AT_FDCWD, "./hello.c", AT_STATX_SYNC_AS_STAT|AT_SYMLINK_NOFOLLOW, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=86, ...}) = 0
statx(AT_FDCWD, "./profile.zip", AT_STATX_SYNC_AS_STAT, STATX_ALL, {stx_mask=STATX_ALL, stx_attributes=0, stx_mode=S_IFREG|0664, stx_size=56998, ...}) = 0
▶22 ~/tmp/test
:49 tf@whale$ exa -l
.rw-rw-r--  86 thomasf 23 Jan 22:47 hello.c
.rw-rw-r-- 532 thomasf 23 Jan 22:47 main.go
.rw-rw-r-- 56k thomasf 23 Jan 22:47 profile.zip
.rw-rw-r-- 379 thomasf 23 Jan 22:47 profile.zip.2019-12-19T00-58-55.600109+01-00
.rw-rw-r-- 23k thomasf 23 Jan 22:47 profile.zip.2019-12-19T01-03-14.950275+01-00
.rw------- 56k thomasf 23 Jan 22:47 profiling-127.0.0.1:9000.pprof
lrwxrwxrwx  11 thomasf 23 Jan 22:49 symling -> profile.zip
ariasuni commented 3 years ago

Well, statx wasn’t used in 2017 since that’s a recent change in Rust’s stdlib, and in long mode we use most of this information.

GNU ls 8.32 uses statx, and it uses the mask feature to be more efficient. Obviously, it would be a lot of work to call statx directly instead of relying on Rust's abstractions, because we would also need to maintain other implementations for older Linux versions and other operating systems, and we have no idea if this is actually noticeably slowing down exa.