Would it be possible to add SBCL to the language pool?
I am aware that SBCL is included on The Computer Language Benchmarks Game site, but unfortunately the site maintainer is refusing to benchmark codes written by certain programmers, therefore the comparison results are not particularly valid. There are codes sitting in the closed issues that are several times faster than the ones included in the result table.
Keep up the good work.
Thanks and regards
Sure, I can set up a workflow for SBCL, and contributions are welcome.
Perfect!
Thanks
Please wait for SBCL 2.1.8 to be released. It should be out in a few days and has some major improvements in certain areas.
How do I submit the codes?
@bpecsek
How do I submit the codes?
Please refer to this commit and just send PRs. I've added an nbody solution.
To verify your changes locally, please refer to the readme.
Please wait for SBCL 2.1.8 to be released.
I set up the github action workflow with roswell to always install the latest sbcl, so the upgrade should happen automatically.
Please note that by default roswell uses core compression, resulting in considerably slower startup times that can affect short processes. This can be deactivated with the ros build --disable-compression switch.
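For example, a build with compression disabled would look something like this (app.ros here is just a placeholder script name; see the roswell docs for the exact invocation):
ros build app.ros --disable-compression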
Could you please also install (clone) sb-simd (https://github.com/marcoheisig/sb-simd) into ~/.roswell/local-projects folder?
Would it be possible to use longer processes and 4 cores instead of 2? Very short run times favor certain languages. For spectral-norm, for example, 1000 is way too short in my opinion; it should be something like 10000 on 4 cores.
Thanks
This can be deactivated with the ros build --disable-compression switch.
I only use roswell to install the latest sbcl, not for bundling; the build process is defined in bench_lisp.yaml.
Basically, it is
sbcl --non-interactive --no-userinit --load bundle.cl
to generate a standalone executable to bench against. There's another testcase that doesn't generate a standalone exe but instead runs
sbcl --non-interactive --no-userinit --load compile.cl
to get app.fasl, and then benches against
sbcl --non-interactive --no-userinit --load run.cl
which is slower but shows the overhead without the optimizations done by save-lisp-and-die.
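For context, a bundle.cl for this kind of setup is typically just a few lines like the sketch below; the file name app.cl and the entry function main are placeholder assumptions, not necessarily what the repo uses.
(load "app.cl") ; compile and load the benchmark source (placeholder name)
;; dump a standalone executable named "app" with MAIN as its entry point
(sb-ext:save-lisp-and-die "app"
                          :executable t
                          :toplevel #'main
                          :save-runtime-options t)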
Could you please also install (clone) sb-simd
You can just add files to bench/include/lisp, and they will be placed alongside the code during build as described above.
BTW, personally, I don't like manual optimizations with simd (or, similarly, FFI calls to gmp), although the benchmarks game allows that. I feel it's then no longer benchmarking what the compiler/runtime provides by default, but benchmarking the speed of simd and its overhead, if any. If the purpose is to benchmark the overhead of using SIMD in different langs, maybe we can create a dedicated problem.
Would it be possible to use longer processes and 4 cores instead of 2?
It just uses whatever machine/VM spec github provides; there's no custom CI agent unless someone offers one.
Thank you for the quick reply.
When language speeds are benchmarked against each other and one language can use SIMD, then the others had better do the same, otherwise the comparison is not valid. C also gets an advantage from auto-vectorization of simple loops over arrays.
I'm not against optimization, since that is a must to get fast speeds, and SIMD optimization is a form of it. I am against it, though, when it is overdone to the level that you can hardly recognize the language anymore and the code is closer to assembly than to the high-level language applied.
The fastest C codes for nbody and spectral-norm use a ridiculous amount of SIMD intrinsics, to the level that they are closer to assembly than C. I don't intend to write such code, just to use simd.
sb-simd needs quicklisp/asdf set up properly to load, therefore could you please set that up?
Unfortunately there is a bug in sb-simd at the moment that prevents generating a core image. I hope it will be fixed soon.
Until then, compiling and loading the fasl file is the only option.
Could you please allow a longer run time at least for spectral-norm, where 1000 is clearly too short on modern CPUs?
Thanks again
I have cloned the repository and am trying to build the website as described under the Development heading, but I am getting this:
$ cd website
$ yarn
00h00m00s 0/0: : ERROR: There are no scenarios; must have at least one.
$ yarn generate
00h00m00s 0/0: : ERROR: [Errno 2] No such file or directory: 'generate'
$ yarn dev
00h00m00s 0/0: : ERROR: [Errno 2] No such file or directory: 'dev'
What am I doing wrong?
What am I doing wrong?
What's the output of yarn --version? It should print 1.x; classic yarn can be downloaded here.
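One common way to get classic 1.x yarn, assuming npm is already installed:
npm install -g yarn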
please allow longer run time for at least spectral-norm
Sure, you can update the testcase parameter in bench.yaml in your PR.
have quicklisp/asdf set up
Not really familiar with common lisp, will investigate later
Thank you. I had an old yarn version.
$ yarn --version
0.32+git
I have installed yarn v1.22.11, however now I get:
$ yarn generate
yarn run v1.22.11
$ nuxt generate
ℹ Parsed 23 files in 0,0 seconds @nuxt/content 17:41:36
ℹ NuxtJS collects completely anonymous data about usage. 17:41:36
This will help us improve Nuxt developer experience over time. Read more on https://git.io/nuxt-telemetry
● Client █████████████████████████ building (25%) 133/136 modules 3 active
css-loader › postcss-loader › sass-loader › assets/css/site.scss
◯ Server
ERROR (node:93172) [DEP0148] DeprecationWarning: Use of deprecated folder mapping "./" in the "exports" field module resolution of the package at /home/bpecsek/common-lisp/Programming-Language-Benchmarks/website/node_modules/@nuxt/postcss8/node_modules/postcss/package.json.
Update this package.json to use a subpath pattern like "./*".
(Use node --trace-deprecation ... to show where the warning was created)
node: ../src/coroutine.cc:134: void* find_thread_id_key(void*): Assertion `thread_id_key != 0x7777' failed.
Aborted (core dumped)
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
For installing quicklisp package manager please see https://www.quicklisp.org/beta/
For installing quicklisp package manager please see https://www.quicklisp.org/beta/
I think quicklisp is installed together with roswell. Regarding sb-simd, is it possible to put it just under include/lisp?
BTW, I saw an open issue that sb-simd does not work with save-lisp-and-die yet; can you please disable it for now until that is fixed?
To customize the installation process, you can also add the below lines to bench.yml:
cd $HOME/.roswell/local-projects
git clone https://github.com/marcoheisig/sb-simd.git
We need to execute sbcl --eval "(ql:quickload :sb-simd)" --eval "(exit)" once after cloning to build sb-simd properly. Can I put it after the git clone https://github.com/marcoheisig/sb-simd.git line?
cd $HOME/.roswell/local-projects
git clone https://github.com/marcoheisig/sb-simd.git
sbcl --eval "(ql:quickload :sb-simd)" --eval "(exit)"
Looks like this line failed. My suggestion is to split the sb-simd thing into a separate PR and try the proper github action setup in your own fork (otherwise workflow runs need my approval), to unblock the rest of the changes. Does that make sense?
How do I disable the core generation workflow?
Do I have to remove the
- os: linux
compiler: sbcl/exe
version: latest
include: lisp
build: sbcl --non-interactive --no-userinit --load bundle.cl
after_build:
- cp app out
out_dir: out
compiler_version_command: sbcl --version
run_cmd: app
section from the bench_lisp.yaml file?
Is there a way to comment the lines out? Is it #?
Also, it's maybe better to split the input: 5000 change into a separate one, as it may take too long for other langs like python. As a compromise I would probably do this: remove 500, try 3000, and exclude python, and maybe ruby:
tests:
- input: 1000
- input: 3000
exclude_langs:
- python
Is it #?
Yes
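For example, to temporarily disable the sbcl/exe testcase you could comment its lines out like this (just the first few lines, as a sketch):
# - os: linux
#   compiler: sbcl/exe
#   version: latest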
Could you please also reply to https://github.com/hanabi1224/Programming-Language-Benchmarks/issues/144#issuecomment-905647086 regarding the failed yarn generate?
Could you please also reply to #144 (comment) regarding the failed yarn generate?
TBH I've no idea what's going wrong; it works fine locally and in the CI build your PR triggered 2 hours ago.
I would try removing yarn.lock, node_modules and .nuxt and trying again.
CI build steps can be found here; please note that yarn build and yarn generate use the same underlying command nuxt generate, which is defined in package.json.
Since roswell is used only to install sbcl, if sb-simd is installed under ~/.roswell/local-projects, sbcl will not find it, only ros run will. Therefore, it should rather be installed under ~/quicklisp/local-projects.
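i.e. the setup lines would become something like this (assuming the default quicklisp location):
cd $HOME/quicklisp/local-projects
git clone https://github.com/marcoheisig/sb-simd.git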
Could you please also reply to #144 (comment) regarding the failed yarn generate?
TBH I've no idea what's going wrong; it works fine locally and in the CI build your PR triggered 2 hours ago.
I would try removing yarn.lock, node_modules and .nuxt and trying again.
CI build steps can be found here; please note that yarn build and yarn generate use the same underlying command nuxt generate, which is defined in package.json.
I did all that you suggested; now I get this:
$ yarn generate
yarn run v1.22.11
$ nuxt generate
ℹ Parsed 391 files in 0,1 seconds @nuxt/content 20:48:41
ℹ Production build 20:48:41
ℹ Bundling for server and client side 20:48:41
ℹ Target: static 20:48:41
ℹ Using components loader to optimize imports 20:48:41
ℹ Discovered Components: node_modules/.cache/nuxt/components/readme.md 20:48:41
✔ Builder initialized 20:48:41
cpp: 7 benchmark results 20:48:41
go: 4 benchmark results 20:48:41
crystal: 6 benchmark results 20:48:41
csharp: 6 benchmark results 20:48:41
haxe: 7 benchmark results 20:48:41
fortran: 7 benchmark results 20:48:41
javascript: 42 benchmark results 20:48:41
julia: 16 benchmark results 20:48:41
kotlin: 34 benchmark results 20:48:41
nim: 5 benchmark results 20:48:41
ocaml: 11 benchmark results 20:48:41
lua: 14 benchmark results 20:48:41
lisp: 6 benchmark results 20:48:41
python: 38 benchmark results 20:48:41
swift: 9 benchmark results 20:48:41
ruby: 47 benchmark results 20:48:41
wren: 9 benchmark results 20:48:41
rust: 123 benchmark results 20:48:41
✔ Nuxt files generated 20:48:42
● Client █████████████████████████ building (24%) 121/126 modules 5 active
css-loader › postcss-loader › sass-loader › assets/css/site.scss
◯ Server
node: ../src/coroutine.cc:134: void* find_thread_id_key(void*): Assertion `thread_id_key != 0x7777' failed.
Aborted (core dumped)
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
bpecsek@bpecsek-Lenovo-Y520-15IKBM:~/common-lisp/Programming-Language-Benchmarks/website$
node:internal/process/promises:246
triggerUncaughtException(err, true /* fromPromise */);
^
RpcIpcMessagePortClosedError: Cannot send the message - the message port has been closed for the process 174364.
at /home/bpecsek/common-lisp/Programming-Language-Benchmarks/website/node_modules/fork-ts-checker-webpack-plugin/lib/rpc/rpc-ipc/RpcIpcMessagePort.js:47:47
at processTicksAndRejections (node:internal/process/task_queues:82:21) {
code: undefined,
signal: undefined
}
I don't know what is going on.
Parsed 391 files in 0,1 seconds
Can you try keeping only a few folders under content? The error message is not informative and looks transient; see if a smaller content workload works.
Parsed 391 files in 0,1 seconds
Can you try keeping only a few folders under content? The error message is not informative and looks transient; see if a smaller content workload works.
Same result with only 28 files.
This can be the problem. I have v16.7.0; I'm going to try installing 14.17.5. Could you please also see my message regarding quicklisp installation: https://github.com/hanabi1224/Programming-Language-Benchmarks/issues/144#issuecomment-905781061 Thanks
Could you please also see my message regarding quicklisp installation.
Sure, it's fine as long as it works, since I'm not a lisp expert. My only recommendation would be to split the workflow setup change out into a minimal self-contained one, so that it can be easily tested with your own fork and the change can be referred to in the future.
Could you please explain how the measurement is actually done and what is actually measured? It is strange to see Rust lagging behind in quite a few codes where that should not be the case. I also see a large speed difference for SBCL between sbcl/exe 2.1.7 and sbcl 2.1.7.
how the measurement is actually done
Basically it's quite similar to time {run_cmd}; run_cmd is defined in bench*.yaml.
Take lisp as an example. For the fasl testcase:
sbcl --non-interactive --no-userinit --load compile.cl
time sbcl --non-interactive --no-userinit --load run.cl 500000
and for the standalone-executable testcase:
sbcl --non-interactive --no-userinit --load bundle.cl
time ./app 500000
It is strange to see Rust lagging behind
Which one?
Why is the compilation timed? This might be problematic when the program uses external libraries. When external libraries are used, --no-userinit cannot be used, since the libraries as well as quicklisp need to be loaded, and those loading times, which can be substantial, would also be included in the timing. Therefore, only the generated executable, which includes everything, should be timed. I have to push to fix sb-simd to be able to save an executable for sure. I have to think about this a bit more.
The spectral-norm and mandelbrot Rust programs are strangely slow.
Why is the compilation timed?
It's not timed, sorry for my typo.
The spectral-norm and mandelbrot Rust programs are strangely slow.
Maybe simd is not enabled with the current cargo build options; will investigate and fix, thanks for pointing it out! The numeric-array doc says:
When used with RUSTFLAGS = "-C opt-level=3 -C target-cpu=native", then Rust and LLVM are smart enough to autovectorize almost all operations into SIMD instructions
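For reference, those flags are passed via the environment when invoking cargo, e.g.:
RUSTFLAGS="-C opt-level=3 -C target-cpu=native" cargo build --release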
@bpecsek I did some investigation on rust perf numbers
The spectral-norm and mandelbrot Rust programs are strangely slow.
They both use the numeric_array crate for simd; however, this crate does not explicitly use any simd intrinsics in its code but just relies on free compiler/LLVM optimizations, which might not be reliable. I tried simply replacing numeric_array with core_simd and got much better numbers.
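For reference, the replacement boils down to the kind of kernel sketched below. It is written against the current nightly portable-SIMD names (std::simd, f64x4, reduce_sum); the core_simd crate used at the time had slightly different names, so treat this as an illustration of the approach, not the exact code of the change.

#![feature(portable_simd)]
use std::simd::prelude::*;

// Dot product, 4 f64 lanes at a time; the pattern behind the
// spectral-norm inner loops.
fn dot(a: &[f64], b: &[f64]) -> f64 {
    let mut acc = f64x4::splat(0.0);
    let n = a.len() / 4 * 4;
    for i in (0..n).step_by(4) {
        acc += f64x4::from_slice(&a[i..]) * f64x4::from_slice(&b[i..]);
    }
    // Reduce the vector accumulator, then add the scalar tail.
    acc.reduce_sum() + a[n..].iter().zip(&b[n..]).map(|(x, y)| x * y).sum::<f64>()
}

fn main() {
    let a = [1.0, 2.0, 3.0, 4.0, 5.0];
    let b = [5.0, 4.0, 3.0, 2.0, 1.0];
    println!("{}", dot(&a, &b)); // prints 35
}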
Regarding spectral-norm, simd seems to be a negative optimization there (at least in the github CI environment); I can get much better numbers by only doing parallelization without simd. Note that 8.rs on nightly rust is using core_simd but still not performing well.
p.s. I've already added RUSTFLAGS = "-C opt-level=3 -C target-cpu=native" for numeric_array as instructed, but it's still not working. Did not check the generated asm though.
Looks much better when it is working though.
I am still getting this when building the website:
ERROR [BABEL] Note: The code generator has deoptimised the styling of /home/bpecsek/common-lisp/Programming-Language-Benchmarks/website/node_modules/.cache/nuxt/router.js as it exceeds the max of 500KB.
<--- Last few GCs --->
[7363:0x628ac50] 164385 ms: Mark-sweep 4080.4 (4117.4) -> 4077.9 (4118.9) MB, 3858.8 / 0.0 ms (average mu = 0.121, current mu = 0.001) allocation failure scavenge might not succeed
[7363:0x628ac50] 167774 ms: Mark-sweep 4085.9 (4118.9) -> 4080.8 (4121.9) MB, 3381.7 / 0.0 ms (average mu = 0.071, current mu = 0.002) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xa25510 node::Abort() [/usr/bin/node]
2: 0x9664d3 node::FatalError(char const*, char const*) [/usr/bin/node]
3: 0xb9a8be v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
4: 0xb9ac37 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
5: 0xd56ca5 [/usr/bin/node]
6: 0xd5782f [/usr/bin/node]
7: 0xd6566b v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
8: 0xd6922c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
9: 0xd3790b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/bin/node]
10: 0x107fbef v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long, v8::internal::Isolate*) [/usr/bin/node]
11: 0x1426919 [/usr/bin/node]
Aborted (core dumped)
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
After adding
// Build Configuration: https://go.nuxtjs.dev/config-build
build: {
  ...
  babel: { compact: true },
}
in nuxt.config.ts, the client got built with no issue but the server is still failing with this:
Server █████████████████████████ building (65%) 683/734 modules 51 active content/csharp/csharp_linux_netcore_3.1_default_binarytrees_2_18.json
<--- Last few GCs --->
[7914:0x5adac50] 167508 ms: Mark-sweep (reduce) 4076.3 (4101.4) -> 4075.6 (4102.9) MB, 3034.7 / 0.0 ms (average mu = 0.124, current mu = 0.000) allocation failure scavenge might not succeed
[7914:0x5adac50] 171994 ms: Mark-sweep (reduce) 4076.6 (4101.9) -> 4076.0 (4103.2) MB, 4484.2 / 0.0 ms (average mu = 0.059, current mu = 0.000) allocation failure scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
1: 0xa25510 node::Abort() [/usr/bin/node]
2: 0x9664d3 node::FatalError(char const*, char const*) [/usr/bin/node]
3: 0xb9a8be v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
4: 0xb9ac37 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [/usr/bin/node]
5: 0xd56ca5 [/usr/bin/node]
6: 0xd5782f [/usr/bin/node]
7: 0xd6566b v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [/usr/bin/node]
8: 0xd6922c v8::internal::Heap::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [/usr/bin/node]
9: 0xd3790b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [/usr/bin/node]
10: 0x107fbef v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long, v8::internal::Isolate*) [/usr/bin/node]
11: 0x1426919 [/usr/bin/node]
Aborted (core dumped)
error Command failed with exit code 134.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.
I did export NODE_OPTIONS="--max-old-space-size=8192" and now it builds without issues.
I am getting errors like this during the benchmark:
2021/08/29 16:54:15.577 | chmod: changing permissions of '/home/bpecsek/Development/Benchmark/Programming-Language-Benchmarks/bench/build/c_linux_gcc_latest_default_spectral-norm_5/app': Operation not permitted
chmod: changing permissions
That does not matter, just ignore it
Thanks
Added a couple more really fast cpp spectral-norm codes.
But please change -march=ivybridge to -march=native and remove -mfpmath=sse and -msse2 from the cpp compilation flags in app.rsp, to allow compiling to AVX.
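i.e. roughly this change to the flags in app.rsp (a sketch of just the flags in question; everything else in the file stays as it is):
before: -march=ivybridge -mfpmath=sse -msse2
after: -march=native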
I have also noticed that with AVX instructions 4 cores, for some reason, can make a hell of a difference.
Sure, just update app.rsp and send a PR, thanks!
I’ve just done that.
The results with the spectral-norm #7 and #8 codes are strange. On my computers they run about 2x faster than #6, though I might have to add that all of mine are relatively modern Intel CPUs of 6th generation and up, and I'm running them on 4 cores (HT is switched off). Those codes are using AVX ymm registers. Do we know what CPU is used for running the benchmark? If older AMD CPUs with bad AVX support are used, then I can see the reason; otherwise something is off.
Something is very strange. I have checked the actual benchmarking process run yesterday, 20hrs ago, as you can see here: https://github.com/hanabi1224/Programming-Language-Benchmarks/runs/3455746511?check_suite_focus=true
2021/08/29 17:39:21.577 |
[AVG] (lisp_linux_sbcl_exe_latest_default_spectral-norm_1)lisp:spectral-norm:3000 [2 cores]time: 491.3521ms, stddev: 0ms, cpu-time: 800ms, cpu-time-user: 800ms, cpu-time-kernel: 0ms, peak-mem: 33460KB
and the numbers shown there are completely different from the ones of the run 14hrs ago shown in the benchmark listing:
2021/08/29 23:11:41.868 |
[AVG] (lisp_linux_sbcl_exe_latest_default_spectral-norm_1)lisp:spectral-norm:3000 [2 cores]time: 1068.7861ms, stddev: 32.98090203440958ms, cpu-time: 1999.9999ms, cpu-time-user: 1976.6666ms, cpu-time-kernel: 23.3333ms, peak-mem: 35094KB
Could you please check it out? I am getting completely different relative speeds on my CPU.
Why do we have such huge differences between running the generated executable from the terminal and the runs done in the container and shown on the home page? I am running it on a laptop with an i7-7700HQ CPU with 4 cores active (HT switched off).
From terminal:
.../lisp_linux_sbcl_exe_latest_default_spectral-norm_1$ time ./app 3000
1.274224153
real 0m0,188s
user 0m0,644s
sys 0m0,008s
Shown on the generated home page:
spectral-norm
Input: 3000
lang | code | time | stddev | peak-mem | time(user) | time(kernel) | compiler/runtime
lisp | 1.cl | 322ms | 3.5ms | 25.0MB | 607ms | 13ms | sbcl/exe 2.1.7
I see very large differences for the other languages as well.
I've just rebuilt and rerun the benchmarks with 4 cores (HT switched off on i7-7700HQ, 16GB RAM). Now the numbers are consistently similar between container and terminal, though the relative speeds compared to your list are quite different. The C# 3.cs one is definitely suspicious, though (I've checked in the terminal and it gives an unhandled exception). I am not sure what's happened with the Rust codes :(
$ uname -a
Linux bpecsek-Lenovo-Y520-15IKBM 5.11.0-31-generic #33-Ubuntu SMP Wed Aug 11 13:19:04 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
bpecsek@bpecsek-Lenovo-Y520-15IKBM:~/Development/Benchmark/Programming-Language-Benchmarks/bench/build/lisp_linux_sbcl_exe_latest_default_spectral-norm_1$ time ./app 3000
1.274224153
real 0m0,171s
user 0m0,631s
sys 0m0,004s
bpecsek@bpecsek-Lenovo-Y520-15IKBM:~/Development/Benchmark/Programming-Language-Benchmarks/bench/build/go_linux_go_rc_default_spectral-norm_4$ time ./app 3000
1.274224153
real 0m0,151s
user 0m0,551s
sys 0m0,005s
bpecsek@bpecsek-Lenovo-Y520-15IKBM:~/Development/Benchmark/Programming-Language-Benchmarks/bench/build/csharp_linux_dotnet_6_default_spectral-norm_3$ time ./_app 3000
1.274224153
real 0m0,225s
user 0m0,615s
sys 0m0,013s
bpecsek@bpecsek-Lenovo-Y520-15IKBM:~/Development/Benchmark/Programming-Language-Benchmarks/bench/build/cpp_linux_g++_latest_default_spectral-norm_…$ time ./app 3000
1.274224153
real 0m0,063s
user 0m0,237s
sys 0m0,000s
I am getting these for Rust, and lots of errors at the end.
2021/08/30 18:15:04.580 | Command[shell:False,print:True,async:False]:: cargo +nightly build --release --features nightly --target-dir /tmp/rsn/target -v
2021/08/30 18:15:04.593 | error: no such subcommand: `+nightly`
2021/08/30 18:15:04.593 | Command[shell:False,print:True,async:False]:: sudo mv /tmp/rsn/target/release/_app out
2021/08/30 18:15:04.600 | mv: cannot stat '/tmp/rsn/target/release/_app': No such file or directory
2021/08/30 18:15:04.705 | Command[shell:False,print:True,async:False]:: cargo +stable build --release --target-dir /tmp/rs/target -v
2021/08/30 18:15:04.718 | error: no such subcommand: `+stable`
2021/08/30 18:15:04.718 | Command[shell:False,print:True,async:False]:: sudo mv /tmp/rs/target/release/_app out
2021/08/30 18:15:04.726 | mv: cannot stat '/tmp/rs/target/release/_app': No such file or directory
though the relative speeds compared to your list are quite different.
That's worth investigating. In the meanwhile, that may happen across different environments; nowadays many server applications are deployed with docker/k8s, so it's always better to do the benchmark in a real production server environment than on a local dev machine, and this tool can actually facilitate that.
I am getting these for Rust and lots of error at the end.
Did you install rust with rustup? Both stable and nightly channels are needed.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
rustup update nightly
For speed-critical applications (we are benchmarking for a reason), knowing the hardware the application is running on, and having solid control over it to make sure that speed is not affected negatively, is a must in my opinion.
Those speed-critical codes are sometimes even optimized for particular hardware, so not knowing anything about the environment is no good; when it can potentially change constantly, that is a killer.
How can we compare language speeds in this case? What I am seeing is something completely different from what you are seeing. And what you are seeing might even change day by day if there is no control over the hardware whatsoever.
And I am not talking about minor differences. The differences are very major indeed, to the level that making any kind of conclusion is impossible. Just look at the two lists and compare language speeds. They give two completely different results.
In my opinion, for this type of benchmark, the hardware should be fixed on a relatively recent CPU architecture, at least a 7th-generation Intel or Zen 2 AMD CPU; that would make speeds at least consistent and comparison possible and meaningful.
Is it possible to do it on your setup?
I've had rustup installed, but something must have gone wrong. Thanks for the info regarding Rust. Now it works, except the http-server one, which gets stuck.
Here are the two new lists that include Rust as well.
Could you please also comment on the first half of https://github.com/hanabi1224/Programming-Language-Benchmarks/issues/144#issuecomment-908361968
Have you seen that sbcl-2.1.8 is out? $ ros install sbcl should install it.
I've just seen the new benchmark run numbers. What's happened? Now the spectral-norm numbers are more in line with what I am seeing, except something happened with the java-based languages and c#: now they are much slower. The second half of the list looks odd.
Have you seen that sbcl-2.1.8 is out?
I've updated the setup to use the latest sbcl with this commit.
the numbers shown there are completely different from the ones of the run 14hrs ago shown in the benchmark listing
There's a TODO to export the CI machine/VM cpu/mem info; GH might provide machines/VMs with different specs across different runs. For now it just assumes the machine/VM spec to be consistent, which might not be true and needs further verification.
Although per its doc, it should be consistent:
GitHub hosts Linux and Windows runners on Standard_DS2_v2 virtual machines in Microsoft Azure with the GitHub Actions runner application installed. The GitHub-hosted runner application is a fork of the Azure Pipelines Agent. Inbound ICMP packets are blocked for all Azure virtual machines, so ping or traceroute commands might not work. For more information about the Standard_DS2_v2 machine resources, see "Dv2 and DSv2-series" in the Microsoft Azure documentation. GitHub hosts macOS runners in GitHub's own macOS Cloud.
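Something as simple as an extra workflow step running the standard tools below would capture that info per run (a sketch):
lscpu
free -h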
If this is indeed the case, then the speeds should be relatively consistent. The other thing is that all of those CPUs have quite good AVX2 support, though the 8273CL is better, therefore I am puzzled about the program speed inconsistencies. Codes using ymm registers should run much faster than the ones using xmm, so spectral-norm 7.cpp and 8.cpp should run close to 2x faster than 6.cpp, like on my CPU.
Is it worthwhile to keep both SBCL speed numbers? I would keep the sbcl/exe one only, since with quicklisp libraries the other one is not really meaningful.