oven-sh / bun

Incredibly fast JavaScript runtime, bundler, test runner, and package manager – all in one
https://bun.sh

Discord.js segfault after login #11489

Closed: Parth3930 closed this issue 4 months ago

Parth3930 commented 4 months ago

How can we reproduce the crash?

All I did was use fs to load my commands from commands/messageCommands/{subfolders}; everything works fine when I run the bot with npm.

JavaScript/TypeScript code that reproduces the crash?

import { Client, GatewayIntentBits } from "discord.js";
import { env } from "bun";
import fs from "fs";
import path from "path";

const client = new Client({
  intents: [
    GatewayIntentBits.Guilds,
    GatewayIntentBits.GuildMessages,
    GatewayIntentBits.MessageContent,
  ],
});

client.on("ready", () => {
  if (client.user) {
    console.log(`Logged in as ${client.user.tag}`);
  }
});

// Recursively walk the commands directory and register each command module.
const readFiles = (dir: string) => {
  const files = fs.readdirSync(dir);
  files.forEach(file => {
    const filePath = path.join(dir, file);
    const stat = fs.lstatSync(filePath);
    if (stat.isDirectory()) {
      readFiles(filePath);
    } else if (file.endsWith(".ts")) {
      const command = require(filePath);
      command.setup(client);
    }
  });
};

readFiles(path.join(__dirname, "commands/messageCommands"));

client.login(env.DISCORD_TOKEN);

Relevant log output

Bun v1.1.7 (b0b7db5c) Windows x64
Args: "C:\Users\{removed the username}\.bun\bin\bun.exe", "run", "main.ts"
Features: jsc Bun.stdin(2) dotenv fetch transpiler_cache(3) tsconfig(4) WebSocket 
Builtins: "bun:main" "node:buffer" "node:events" "node:fs" "node:fs/promises" "node:http" "node:path" "node:string_decoder" "node:timers" "node:timers/promises" "node:url" "node:util" "node:util/types" "node:zlib" "node:worker_threads" "undici" "ws" 
Elapsed: 1399ms | User: 93ms | Sys: 78ms
RSS: 0.16GB | Peak: 0.16GB | Commit: 0.18GB | Faults: 39320

panic(main thread): Segmentation fault at address 0xFFFFFFFFFFFFFFFF
oh no: Bun has crashed. This indicates a bug in Bun, not your code.

To send a redacted crash report to Bun's team,
please file a GitHub issue using the link below:

 https://bun.report/1.1.7/wr1b0b7db5AqoiggV+g4g1CuxgyvC82uje41ngeg7kgekp9xG2zn3uC6y+pF0x1qF40uvFA2DD

error: script "start" exited with code 3

Stack Trace (bun.report)

Bun v1.1.7 (b0b7db5) on windows x86_64 [RunCommand]

Segmentation fault at address 0xFFFFFFFFFFFFFFFF

Sayrix commented 4 months ago

Try upgrading to the latest version, then check whether the error is still there.

Parth3930 commented 4 months ago

I am on the latest version. The error still persists.


Jarred-Sumner commented 4 months ago

Likely related to Worker, which is not stable yet.

shivero commented 4 months ago

For me the issue was the same when I tried to use bun upgrade to go from 1.1.11 to 1.1.12.

Jarred-Sumner commented 4 months ago

> For me the issue was the same when I tried to use bun upgrade to go from 1.1.11 to 1.1.12.
>
>   • resolved by just manually running powershell -c "irm bun.sh/install.ps1|iex"

While these are both segmentation faults, the cause is different, sort of like how two unrelated bugs can both surface as `undefined is not an object`.

Jarred-Sumner commented 4 months ago

This is either a duplicate of https://github.com/oven-sh/bun/issues/11617 (a bug on Windows when an error is thrown while reading a file via Bun.file()), or it was fixed via https://github.com/oven-sh/bun/pull/11635 and https://github.com/oven-sh/bun/pull/11494.

If you're still running into this issue on Bun v1.1.13 or later (or, in the meantime, with bun upgrade --canary), please let us know and we'll re-open the issue.