zen-fs / core

A filesystem, anywhere
https://zen-fs.github.io/core/
MIT License

When `openSync` with `r+` flag on PortFS that file doesn't sync with remote #60

Closed atty303 closed 4 months ago

atty303 commented 4 months ago

main.ts

import * as zenfs from "@zenfs/core";
import * as Comlink from "comlink";
import { WebStorage } from "../fs.ts";
import Worker from "./worker.ts?worker";

const worker = new Worker();
const fs = await zenfs.resolveMountConfig({
  backend: WebStorage,
});
zenfs.attachFS(worker, fs);

Comlink.wrap(worker).main();

worker.ts

import * as zenfs from "@zenfs/core";
import * as Comlink from "comlink";

Comlink.expose({
  async main() {
    await zenfs.configure({
      mounts: {
        "/user": {
          name: "LocalStorage",
          backend: zenfs.Port,
          port: self as unknown as any,
        },
      },
    });

    zenfs.writeFileSync("/user/test.txt", "");
    const fd = zenfs.fs.openSync("/user/test.txt", "r+");
    const buf = new Uint8Array(5);
    buf.set([71, 72, 73, 74, 75]);
    zenfs.fs.writeSync(fd, buf, 0, 5);
    zenfs.fs.closeSync(fd);
  },
});

Executing this code causes the contents written by `writeSync` to be lost. In local storage, the key `4346683818588372979` should hold the file's contents, but it is empty.


If I then re-run the code, I get an error in `crossCopy`:

Uncaught (in promise) Error: ENOENT: No such file or directory, '/test.txt'
    at _ErrnoError.With (error.js:249:16)
    at StoreFS.openFile (fs.js:225:30)
    at async handleRequest (:5173/node_modules/.···v=c0111006:10943:17)
    at PortFS.rpc (fs.js:112:20)
    at PortFS.openFile (fs.js:132:21)
    at PortFS.crossCopy (filesystem.js:205:46)
    at async PortFS.crossCopy (filesystem.js:201:21)
    at async PortFS.ready (filesystem.js:137:17)
    at async PortFS.ready (fs.js:120:9)
    at async resolveMountConfig (config.js:44:5)
    at async Module.configure (config.js:44:5)
    at async Object.main (worker.ts:6:5)

If I open the file with `w+` instead, the write works normally.

src/emulation/sync.ts:156-167

The root cause seems to be in the code above: when the flag is `w+`, `createFileSync` is called, but when it is `r+`, `openFileSync` is called.
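The dispatch described above can be sketched as a minimal model. This is illustrative only, not the actual ZenFS source; the real logic lives in `src/emulation/sync.ts:156-167` and parses flags more carefully:

```typescript
// Toy model (not the real ZenFS code) of the flag dispatch described above:
// flags with create/truncate semantics take the createFileSync path, while
// plain read/write flags take the openFileSync path.
function dispatchOpen(flag: string): "createFileSync" | "openFileSync" {
  // 'w', 'w+', 'a', 'a+' create the file if needed; 'r', 'r+' only open it.
  return flag.startsWith("w") || flag.startsWith("a")
    ? "createFileSync"
    : "openFileSync";
}

console.log(dispatchOpen("w+")); // "createFileSync" -> file bound to the async FS
console.log(dispatchOpen("r+")); // "openFileSync"  -> file bound to _sync only
```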

src/filesystem.ts:361-368

In the `createFileSync` case, the AsyncFileSystem's `createFileSync` is executed and returns a `PreloadedFile` whose `fs` is the AsyncFileSystem itself, so closing it correctly synchronizes with the Store.

src/filesystem.ts:370-372

However, in the `openFileSync` case, a `PreloadedFile` whose `fs` is `AsyncFileSystem#_sync` is returned. This file is never synchronized with the Store, even when it is closed, because its link to the AsyncFileSystem is severed.
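The severed link can be illustrated with a toy model. Every name below (`ToyAsyncFS`, `openSevered`, `openLinked`) is a hypothetical stand-in, not the real ZenFS internals; the point is only that a handle whose `close()` knows just the `_sync` mirror never reaches the backing store, while a handle that keeps a reference to the outer filesystem can flush back on close:

```typescript
// Hypothetical miniature of the bug: writes go to an in-memory _sync mirror;
// whether they reach the real store depends on what close() does.
class ToyAsyncFS {
  store = new Map<string, string>(); // the remote/async backing store
  _sync = new Map<string, string>(); // in-memory synchronous mirror

  // Bug shape: the handle only knows about _sync, so close() does nothing
  // for the store -- the write is lost.
  openSevered(path: string) {
    const mirror = this._sync;
    return {
      write: (data: string) => void mirror.set(path, data),
      close: () => {}, // link to the outer filesystem is severed
    };
  }

  // Fix shape: the handle keeps a reference to the outer filesystem and
  // flushes the mirror's contents back to the store on close().
  openLinked(path: string) {
    return {
      write: (data: string) => void this._sync.set(path, data),
      close: () => void this.store.set(path, this._sync.get(path) ?? ""),
    };
  }
}

const fs1 = new ToyAsyncFS();
const severed = fs1.openSevered("/test.txt");
severed.write("GHIJK");
severed.close();
console.log(fs1.store.get("/test.txt")); // undefined -- write was lost

const fs2 = new ToyAsyncFS();
const linked = fs2.openLinked("/test.txt");
linked.write("GHIJK");
linked.close();
console.log(fs2.store.get("/test.txt")); // "GHIJK" -- write survives
```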

I was not sure how to fix this properly, so I hope you can address it.

atty303 commented 4 months ago

I'm also implementing `WebStorage` myself, since @zenfs/dom doesn't keep up with the latest core. I'd be happy to see it updated accordingly.

james-pre commented 4 months ago

@atty303 @zenfs/dom v0.2.8 has the latest core. Sorry I didn't get it out earlier.

james-pre commented 4 months ago

Also, this error is worrisome. It explains why some of the Port tests fail. They are currently disabled, which probably shouldn't be done going forward.

james-pre commented 4 months ago

@atty303 Does v0.11.2 fix the issues?

atty303 commented 4 months ago

@james-pre The writing problem has been resolved! However, I still have a problem with reading: reading after a write works, but reading immediately after startup returns empty content.

import * as zenfs from "@zenfs/core";
import * as Comlink from "comlink";

Comlink.expose({
  async main() {
    await zenfs.configure({
      mounts: {
        "/user": {
          name: "LocalStorage",
          backend: zenfs.Port,
          port: self as unknown as any,
        },
      },
    });

    console.log(zenfs.fs.readFileSync("/user/test.txt", "utf8"));
  },
});
james-pre commented 4 months ago

@atty303 I think this read-after-startup issue could be related to #61. I will track it there, and I am closing this issue since the original bug has been fixed.

Thank you so much for contributing!