Rolstenhouse opened 9 months ago
Same issue
Would you be open to sharing a minimal example? I cannot reproduce it.
Sure - here's some more context
Snippet
mediaUrl = "https://api.twilio.com/2010-04-01/Accounts/ACfbfe2e1e70ce74b02a4151bf91b23693/Messages/MM3fa6329883117973ec3cd7b180c6caca/Media/ME76f45b7483238aac2516ab5429c5018a"
try {
ort.env.debug = true;
ort.env.logLevel = "warning";
logger.info("Removing background for image", { mediaUrl });
const localPath = `file://${process.cwd()}/public/imgly/`;
logger.info("localPath", { localPath });
const blob: Blob = await removeBackground(mediaUrl, {
publicPath:
process.env.NODE_ENV === "production"
? "file:///myapp/public/imgly/"
: localPath,
// publicPath: "https://stickerfy.xyz/imgly/",
debug: true,
model: "small",
progress: (key, current, total) => {
logger.warn(`Downloading ${key}: ${current} of ${total}`);
},
});
buffer = Buffer.from(await blob.arrayBuffer());
} catch (error) {
logger.error("Error while removing background for image", {
mediaUrl,
error,
errorMessage: error.message,
errorStack: error.stack,
errorName: error.name,
});
}
}
// Write the buffer to S3
if (buffer) {
// Upload to S3
logger.info("Uploading image to S3", {
info: {
key: mediaSid!,
contentType: "image/png",
userId: user?.autoId || 0,
buffer: buffer.length,
},
});
backgroundRemovedImage = await uploadImageToS3({
key: mediaSid!,
buffer,
contentType: "image/png",
userId: user?.autoId || 0,
});
}
Here's a screenshot of the logs (I've also included a CSV with the log output).
Also note: this snippet includes the local file path, but I also ran into this issue when referencing the hosted model.
Deployed server is running on fly.io btw (not sure if that might be an issue)
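In case it helps with reproducing, here is a stripped-down sketch of the same call. The image URL and paths below are placeholders, not my real values, and it assumes the model assets have been copied to ./public/imgly:

// Minimal repro sketch (assumes an accessible image URL and local model assets under ./public/imgly).
import { removeBackground } from "@imgly/background-removal-node";
import fs from "fs/promises";

const imageUrl = "https://example.com/input.png"; // placeholder

const blob = await removeBackground(imageUrl, {
  publicPath: `file://${process.cwd()}/public/imgly/`,
  model: "small",
  debug: true,
});

await fs.writeFile("output.png", Buffer.from(await blob.arrayBuffer()));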
My server is running on DigitalOcean with the same issue. Droplet info: Ubuntu 23.10 x64, Node 20, no GPU.
Hi, I get the same error on WSL2 (Ubuntu).
Could it be related to "onnxruntime-node" or WASM, and to the fact that TensorFlow and the model need a GPU, which is not available on a server or in a remote environment?
I noticed that in the source, the function:
async function createOnnxSession(model, config) {
  if (config.debug) {
    ort.env.debug = true;
    ort.env.logLevel = "verbose";
    console.debug("ort.env.wasm:", ort.env.wasm);
  }
}
on my WSL2 environment actually prints an empty object to the console:
fetch /models/medium 100%
ort.env.wasm: {}
free(): invalid size
Thanks!
onnxruntime-node should work without a GPU. Checking ort.env.wasm seems wrong here: the Node version does not yet support the WASM backend, so it seems OK that it's empty.
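If it helps narrow this down, here is a small diagnostic sketch (just an assumption about what might be worth logging, not part of the library) that prints the onnxruntime env on the affected machine:

// Diagnostic sketch: log process and onnxruntime env info on the server where the crash happens.
// In Node, ort.env.wasm is expected to be mostly empty since the WASM backend is not used there.
import * as ort from "onnxruntime-node";

ort.env.debug = true;
ort.env.logLevel = "verbose";

console.log("node:", process.version, process.platform, process.arch);
console.log("ort.env.logLevel:", ort.env.logLevel);
console.log("ort.env.wasm:", ort.env.wasm);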
I have no access to such a machine at the moment, so unfortunately I cannot reproduce the error. Also, I have no idea what the cause is.
Thanks for looking into it. For other devs that might encounter this, I used a different package: rembg on replicate and just paid the small out of pocket cost.
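For reference, that workaround is roughly the following, using Replicate's Node client. This is only a sketch: the model slug and version below are placeholders, and the current rembg model identifier has to be looked up on replicate.com.

// Hedged sketch of the rembg-on-Replicate workaround mentioned above.
import Replicate from "replicate";

const replicate = new Replicate({ auth: process.env.REPLICATE_API_TOKEN });

const output = await replicate.run(
  "some-owner/rembg:VERSION_HASH", // placeholder model + version
  { input: { image: "https://example.com/input.png" } }
);

console.log(output); // URL of the background-removed image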
npm ERR! code 1
npm ERR! path /home1/freeback/nodevenv/imageremove/18/lib/node_modules/onnxruntime-node
npm ERR! command failed
npm ERR! command sh -c node ./script/install
npm ERR! Downloading "https://github.com/microsoft/onnxruntime/releases/download/v1.17.3/onnxruntime-linux-x64-gpu-1.17.3.tgz"...
npm ERR! node:internal/deps/undici/undici:7534
npm ERR! return await WebAssembly.instantiate(mod, {
npm ERR!              ^
npm ERR!
npm ERR! RangeError: WebAssembly.instantiate(): Out of memory: wasm memory
npm ERR!     at lazyllhttp (node:internal/deps/undici/undici:7534:32)
npm ERR!
npm ERR! Node.js v18.18.2
npm ERR! A complete log of this run can be found in: /home1/freeback/.npm/_logs/2024-08-27T13_49_31_015Z-debug-0.log
I'm getting this error on cPanel; it works fine locally, so could someone help me out?
Hello, I am having the same issue. I am using the background remover locally without any problems. When I deploy my code to Docker I get: munmap_chunk(): invalid pointer
Docker version: 27.2.1, Node: node:22-bookworm-slim image, Package version: 1.4.5
Is there a fix for this? Thanks
Getting the same issue. I am using the background remover locally without any problems, but when I deploy my code to Docker I get free(): invalid size.
This sucks because the npm package is too big for a Lambda layer, and even when using a container it doesn't work.
I made a simple removeBackground API endpoint with Express as a test, and it runs without any problems with the same Node image on Docker. So maybe it's conflicting with some other package I use in my main project.
Anyway, I use this as a workaround now. Here is the code, if you are interested:
import { removeBackground } from "@imgly/background-removal-node";
import express from "express";
import multer from "multer";
import bodyParser from "body-parser";
import fs from "fs/promises";
import path from "path";

const app = express();
const port = 3838;
const apiKey = process.env.REMOVE_BG_API_KEY;
const upload = multer({ dest: 'uploads/' });

app.post('/removeBackground', upload.single('image'), async (req, res) => {
  if (!apiKey) {
    return res.status(500).send('API key is not set');
  }
  if (req.body.apiKey !== apiKey) {
    return res.status(401).send('Unauthorized');
  }
  try {
    const imagePath = req.file.path;
    const outputImagePath = path.join('uploads', `removed-background-${Date.now()}.png`);
    const imageBuffer = await fs.readFile(imagePath);
    const imageBlob = new Blob([imageBuffer], { type: 'image/png' });
    const removedBackground = await removeBackground(imageBlob);
    const arrayBuffer = await removedBackground.arrayBuffer();
    const removedBackgroundBuffer = Buffer.from(arrayBuffer);
    await fs.writeFile(outputImagePath, removedBackgroundBuffer);
    // Clean up the temp files only after the response has finished streaming
    res.sendFile(outputImagePath, { root: './' }, async () => {
      await fs.unlink(imagePath);
      await fs.unlink(outputImagePath);
    });
  } catch (error) {
    console.error(error);
    res.status(500).send('Error processing image');
  }
});

app.listen(port, () => {
  console.log(`Server is running on port ${port}`);
});
FROM node:22-bookworm-slim
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install
COPY . .
EXPOSE 3838
CMD ["node", "server.js"]
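In case it is useful, here is a rough sketch of how the endpoint above could be called from another Node service. It assumes Node 18+ (built-in fetch, FormData, and Blob), the host/port from the snippet above, and placeholder file names:

// Sketch of a client call against the workaround endpoint above.
import fs from "fs/promises";

const form = new FormData();
form.append("apiKey", process.env.REMOVE_BG_API_KEY ?? "");
form.append(
  "image",
  new Blob([await fs.readFile("input.png")], { type: "image/png" }),
  "input.png"
);

const res = await fetch("http://localhost:3838/removeBackground", {
  method: "POST",
  body: form,
});

await fs.writeFile("output.png", Buffer.from(await res.arrayBuffer()));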
When I attempt to remove the background from a file on my server, I get one of the following errors:
corrupted size vs. prev_size
OR
free(): invalid size
OR
munmap_chunk(): invalid pointer
I have not been able to identify what triggers which, but it feels like it might be an issue with the ML model (I'm using the small one).
Docker host: 20.10.12 linux x86_64
Node version: Node.js v21.6.2
Package version: 1.4.4