google-ai-edge / mediapipe-samples

Issue with importing mediapipe library using ES6 modules in Workers #174

Open mabab opened 1 year ago

mabab commented 1 year ago

I am writing to report an issue I encountered while importing the mediapipe library using ES6 modules in Workers. When attempting to import mediapipe, I consistently receive the following error: "TypeError: Failed to execute 'importScripts' on 'WorkerGlobalScope': Module scripts don't support importScripts()." This error occurs when I set the 'type' attribute to 'module' when creating the Worker, as shown in the code snippet below:

new Worker(this.workerScriptUrl, { type: 'module' });

After investigating, it became apparent that the mediapipe library relies on importScripts to load its WebAssembly (wasm) files. Module scripts in Workers do not support importScripts, which produces the error above, and as a result I am unable to use the mediapipe library in Workers with ES6 modules.
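
For clarity, a minimal contrast between the two worker flavors (the script URL here is just a placeholder):

// Classic worker: importScripts() is available inside it, so MediaPipe's
// wasm loading works.
const classicWorker = new Worker("/workers/mediapipe.worker.js");

// Module worker: importScripts() throws the TypeError quoted above.
const moduleWorker = new Worker("/workers/mediapipe.worker.js", { type: "module" });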

I kindly request your assistance in resolving this matter. I propose an optional workaround: allow an alternative method, such as fetch, to load the wasm files instead of relying on importScripts. This adjustment would let the mediapipe library integrate cleanly with ES6 modules in Workers, enabling developers to effectively leverage its functionality.
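
As a rough sketch of the idea (plain WebAssembly APIs and a placeholder URL, not MediaPipe's actual loader):

// Generic illustration only: fetch-based instantiation works in module workers,
// where importScripts() is unavailable. MediaPipe's real loader also evaluates
// a JS glue script, so an actual fallback would be more involved than this.
const wasmUrl = "https://example.com/tasks.wasm"; // placeholder
const { instance } = await WebAssembly.instantiateStreaming(fetch(wasmUrl), {});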

Thank you for your attention to this matter. I eagerly await your response and any guidance you can provide to help resolve this issue.

AhmedKorim commented 11 months ago

@mabab Any update on this issue or a workaround?

mabab commented 11 months ago

@mabab Any update on this issue or a workaround?

My workaround: use a classic worker for mediapipe, and from inside that classic worker spawn a new module worker (for three.js, in my case).

Example: https://gist.github.com/mabab/9ab9d505ef5b1da694262de370b74314#file-worker-js
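
The shape of that workaround, roughly (file names and the bundle path are illustrative):

// mediapipe.worker.js — created WITHOUT { type: "module" }, so it is a classic
// worker and importScripts() works for MediaPipe's wasm loading.
importScripts("/vendor/mediapipe_bundle.js"); // illustrative path

// From inside the classic worker, spawn a nested module worker for the
// ES-module-only code (three.js in this case).
const threeWorker = new Worker("three.worker.js", { type: "module" });

// Forward MediaPipe results to the module worker.
function forwardResult(result) {
    threeWorker.postMessage(result);
}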

dev-tarun-nw commented 4 months ago
  1. worker file
import * as ComLink from "comlink";
import TaskGenAI from "../on-device-review/utils/mediapipe/task-genai.js";

class ModelHandlers {
    constructor() {
        this.taskGenAI = new TaskGenAI();
    }

    /**
     * Stores a file blob in the Service Worker cache.
     *
     * @param {Blob} blob - The file blob to be stored in the cache.
     * @returns {Promise<void>} A promise that resolves when the file is successfully cached.
     * @throws {Error} If there's an error during the caching process.
     */
    async storeFileInSWCache(blob, fileName = "model.bin") {
        try {
            const modelCache = await caches.open("models");
            await modelCache.put(fileName, new Response(blob));

            console.log("Model file cached in sw-cache.");
        } catch (err) {
            console.error(err.name, err.message);
            // Re-throw so failures surface to callers, as documented above.
            throw err;
        }
    }

    /**
     * Retrieves a file blob from the Service Worker cache.
     *
     * @returns {Promise<Blob|null>} A promise that resolves with the cached
     * file blob, or null if the file is not in the cache.
     */
    async restoreFileFromSWCache(fileName = "model.bin") {
        const modelCache = await caches.open("models");
        const response = await modelCache.match(fileName);
        if (!response) {
            // Not cached yet; callers treat null as "needs download".
            return null;
        }
        const file = await response.blob();
        console.log("Cached model file found in sw-cache.");
        return file;
    }

    generateAnswerWithGemma(_cachedModelBlob) {
        console.log("using gemma-2b model... 1");
        let cachedGenAI = null;
        let cachedLlmInference = null;
        const cachedModelBlob = _cachedModelBlob;
        // Return a Comlink-proxied function so the main thread can call it
        // repeatedly and receive the answer through a proxied callback.
        return ComLink.proxy(async (question, callback) => {
            console.log("using gemma-2b model... 2");
            if (!cachedGenAI) {
                cachedGenAI = await this.taskGenAI.getGenAI();
            }
            if (!cachedLlmInference) {
                cachedLlmInference = await this.taskGenAI.getLlmInference(
                    cachedModelBlob
                );
            }
            const answer = await cachedLlmInference.generateResponse(question);
            callback(answer);
            return answer;
        });
    }

    async downloadAndStoreModel(url, name) {
        if (!url) {
            throw new Error("No URL provided.");
        }
        const response = await fetch(url);
        const blob = await response.blob();
        // Bug fix: storeFileInSWCache is a method on this class, so it needs this.
        await this.storeFileInSWCache(blob, name);
        return blob;
    }
}

ComLink.expose(ModelHandlers);
  2. mediapipe util file
import { FilesetResolver, LlmInference } from "@mediapipe/tasks-genai";

export default class TaskGenAI {
    async getGenAI() {
        if (!this.cachedGenAI) {
            this.cachedGenAI = await FilesetResolver.forGenAiTasks(
                "https://cdn.jsdelivr.net/npm/@mediapipe/tasks-genai@latest/wasm"
            );
        }
        // Without this return, callers awaiting getGenAI() receive undefined.
        return this.cachedGenAI;
    }

    async getLlmInference(cachedModelBlob) {
        if (!this.cachedLlmInference) {
            await this.getGenAI();
            this.cachedLlmInference = await LlmInference.createFromOptions(
                this.cachedGenAI,
                {
                    baseOptions: {
                        modelAssetBuffer: cachedModelBlob.stream().getReader(),
                    },
                    maxTokens: 2000,
                    topK: 40,
                    temperature: 0.8,
                    randomSeed: 1,
                }
            );
        }
        return this.cachedLlmInference;
    }
}

@mabab

I'm still encountering this issue, even though I'm using the MediaPipe methods and exposing them from a util class.

mabab commented 4 months ago

@dev-tarun-nw How do you call new Worker()?

dev-tarun-nw commented 3 months ago

@dev-tarun-nw How do you call new Worker()?

Sorry for replying late.

I'm using Comlink to wrap the new Worker instance and call the exposed methods like normal class methods:

import { wrap } from "comlink";

const ModelWorker = wrap(
    new Worker(new URL("../workers/model.worker.js", import.meta.url), {
        type: "module",
    })
);
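
For completeness, roughly how the wrapped class can then be used (the model URL is a placeholder; Comlink constructs exposed classes with await new, and callbacks crossing the worker boundary must be wrapped with proxy()):

import { proxy } from "comlink";

// Comlink-exposed classes are constructed remotely with "await new"
// (top-level await assumes this runs in a module).
const handlers = await new ModelWorker();

// Placeholder model URL; downloadAndStoreModel caches the blob and returns it.
const modelBlob = await handlers.downloadAndStoreModel(
    "https://example.com/gemma-2b.bin",
    "model.bin"
);

// generateAnswerWithGemma returns a proxied function; the callback must be
// wrapped with proxy() so the worker can call back into this thread.
const ask = await handlers.generateAnswerWithGemma(modelBlob);
await ask("Hello!", proxy((answer) => console.log(answer)));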

Thanks for responding.