andreubotella opened this issue 3 years ago
I don't know if we would need to have extensions to Deno.core. Wouldn't we just need something like:
const denoSerialize = Symbol("Deno.serialize");
const denoDeserialize = Symbol("Deno.deserialize");

class DOMException {
  static [denoSerialize](instance) {
    // serialize steps that return a v8 serialization
  }
  static [denoDeserialize](value) {
    // returns a hydrated instance from a v8 serialization
  }
}
And if we put denoSerialize and denoDeserialize in the web platform helpers, and don't use Symbol.for(), that would ensure they are unique and not accessible from userspace.
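To illustrate the Symbol.for() point with a quick sketch: registry symbols are reachable from any userspace code that knows the key, while a plain Symbol() that only the web platform helpers hold a reference to is unforgeable.

// Symbol.for() goes through the global symbol registry:
const s1 = Symbol.for("Deno.serialize");
const s2 = Symbol.for("Deno.serialize");
s1 === s2; // true: any code that knows the key gets the same symbol

// A plain Symbol() is unique per call; if it's only referenced from the web
// platform helpers, userspace has no way to reconstruct it:
const denoSerialize = Symbol("Deno.serialize");
Symbol("Deno.serialize") === denoSerialize; // false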
That wouldn't be enough, at least for serializable interfaces, because the serializable platform objects might be deep in the object graph. Deno.core.createHostObject() is the only current way to implement custom serialization – with v8::ValueSerializerImpl::write_host_object() (although see https://bugs.chromium.org/p/v8/issues/detail?id=11927 for adding an easier way to fix this in v8).
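To make the object-graph point concrete, here's a minimal sketch of why a per-class hook alone can't run: the serializer only discovers the platform object mid-traversal, so the interception point has to be v8's host-object callback. (The worker variable below is an assumed Worker instance.)

// The Blob is buried inside a plain object; nothing at the JS level sees it
// before serialization starts.
const payload = {
  meta: { id: 1 },
  files: [new Blob(["hello"])],
};

// When v8's ValueSerializer reaches the Blob during graph traversal, it calls
// write_host_object(), which is where Deno would run the serialize steps.
worker.postMessage(payload); // assuming `worker` is a Worker instance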
Edit: Rereading the post above, I see that maybe @kitsonk meant adding custom symbols in addition to creating the object with Deno.core.createHostObject(). And yeah, that might work.
I hadn't, but now I understand the point about write_host_object() better; that's something I hadn't appreciated.
Curious whether deep-frozen objects can safely be considered Transferable, and if not, what the limitation would be?
User-created objects, whether frozen or not, are automatically serializable (not transferable, which in this context means that the original object is no longer usable after you transfer it, as happens with MessagePort), but the resulting clone won't be frozen. And as usual with the structured clone algorithm, the prototype won't be preserved. This isn't something we would want to change, since that's how browsers do things.
But this issue is about the web APIs that Deno implements, and how their behavior with the structured clone algorithm doesn't match browsers. For these APIs, browsers do preserve their prototype, as well as associated state – and Deno should do the same.
So one of the problems with the symbols that @kitsonk suggested is that we need to be able to find the right class when deserializing an object – if you create a subclass of, say, File, and then serialize it, the deserialization must call File[denoDeserialize](), not FileSubclass[denoDeserialize](). And this must happen even if the user has deleted globalThis.File.
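A short sketch of the required behavior (what browsers do today, and what Deno should do once File is serializable):

class FileSubclass extends File {}
const f = new FileSubclass([], "a.txt");

// Per spec, a structured clone round-trip drops the subclass:
const clone = structuredClone(f);
clone instanceof File;         // true
clone instanceof FileSubclass; // false

// And deserialization must keep working even after this:
delete globalThis.File;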
A map registry of transferable prototypes? The only problem with that is they wouldn't have access to private fields, though I believe we don't use those for web platform prototypes, but instead use symbols again.
Since registering an interface as serializable or transferable is going to be needed either way, why not register it together with (de)serialization functions?
Deno.core.registerPlatformInterface({
  name: "Blob",
  serialize(instance) {
    // TODO
  },
  deserialize(value) {
    // TODO
  },
  // Transferable interfaces would have the `transfer` and `transferRecv` functions.
});
and Deno.core.createHostObject() could be changed to take the interface's name, which it'd store as the object's embedder field.
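A minimal sketch of how the core side could wire this up; platformInterfaces and the function names below are hypothetical illustrations, not an existing Deno.core API:

// Internal map captured at registration time, so userspace deleting
// globalThis.Blob can never break (de)serialization.
const platformInterfaces = new Map(); // name -> { serialize, deserialize, ... }

function registerPlatformInterface(desc) {
  platformInterfaces.set(desc.name, desc);
}

// Called from v8's write_host_object(): the interface name comes from the
// embedder field that createHostObject() stored on the object.
function serializeHostObject(name, instance) {
  return platformInterfaces.get(name).serialize(instance);
}

// Called from v8's read_host_object(): the name travels with the payload, so
// the right deserialize steps are found regardless of the global scope.
function deserializeHostObject(name, value) {
  return platformInterfaces.get(name).deserialize(value);
}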
Doing the serialization based on an object's prototype hierarchy would lead to fakeBlob below serializing as a Blob, which shouldn't happen per the spec:
const fakeBlob = {};
Object.setPrototypeOf(fakeBlob, Blob.prototype);
Deno.core.serialize(fakeBlob);
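One way the embedder-field approach avoids this, sketched here with a WeakSet standing in for the internal field (hypothetical, not actual Deno internals): only objects that actually went through createHostObject() are branded, so swapping prototypes can't forge one.

// Populated internally whenever createHostObject() creates a platform object;
// purely an illustrative stand-in for the embedder field.
const hostObjects = new WeakSet();

function assertSerializable(obj) {
  if (!hostObjects.has(obj)) {
    throw new DOMException("Object could not be cloned", "DataCloneError");
  }
}

assertSerializable(fakeBlob); // throws, despite Blob.prototype in its chain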
We make extensive use of both Workers and ReadableStreams (in the form of request and response bodies), so being able to transfer them between workers and the main thread directly would be immensely helpful to us. Any news or updates on this would be much appreciated.
Hit a roadblock because ReadableStream<Uint8Array> wasn't transferable... instead I think I have to make a custom reader/writer based on MessageChannel 😞
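For anyone hitting the same wall, a minimal sketch of that workaround: pump the stream's chunks through a MessagePort and rebuild a ReadableStream on the other side (this assumes Uint8Array chunks, and transfers each chunk's buffer instead of copying it):

const { port1, port2 } = new MessageChannel();

// Sender side: read every chunk and post it, transferring the buffer.
async function pumpToPort(readable, port) {
  for await (const chunk of readable) {
    port.postMessage(chunk, [chunk.buffer]);
  }
  port.postMessage(null); // end-of-stream sentinel
}

// Receiver side: rebuild a ReadableStream from the port's messages.
function portToReadable(port) {
  return new ReadableStream({
    start(controller) {
      port.onmessage = (evt) => {
        if (evt.data === null) controller.close();
        else controller.enqueue(evt.data);
      };
    },
  });
}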
IMHO, serializable support for File, Blob, etc. will be useful for Deno KV, so it may be a good idea to give it a higher priority.
Hmm, aren't File and Blob transferable either? What a bummer...

Anyhow, if anybody wishes to build a polyfill for transferring blobs and files in a synchronous manner, and also a fix for cloning blobs with structuredClone today without waiting for Deno to fix it (or just wants to support older versions for backwards compatibility), then I've got just the thing for you... I have created await-sync, which can perform async operations in a separate thread and return the value while the main thread is blocked.

As it turns out, workers share object URLs (unlike in Node.js, where they don't – which is a bug).
import { createWorker } from 'https://cdn.jsdelivr.net/gh/jimmywarting/await-sync/mod.js'

const awaitSync = createWorker()

// Create our little util function (the callback must be async, since it awaits)
const readUrlSync = awaitSync(async url => {
  const res = await fetch(url)
  const ab = await res.arrayBuffer()
  return new Uint8Array(ab)
})

// Create a blob
const blob = new Blob(['abc'])
const url = URL.createObjectURL(blob)

// Read it while the main thread gets blocked
const uint8array = readUrlSync(url)

// Create a copy (clone)
const ab = uint8array.buffer
const copy = new Blob([ab])

// Don't forget to clean up
URL.revokeObjectURL(url)

// Should be the same data
await blob.text() === await copy.text() // true

// Send the data in your own (de)serializable format
// (assuming `worker` is a Worker you have already created)
worker.postMessage({
  $blob: {
    data: ab,
    type: blob.type
  }
}, [ ab ]) // Transfer the ArrayBuffer instead of copying it

// worker.js
onmessage = evt => {
  const b = evt.data.$blob
  const blob = new Blob([b.data], { type: b.type })
  // olé - you have copied a blob over
}
Of course, this is a really inefficient way of copying the whole blob. A blob should be nothing more than a handle that points to the data where it's located, in memory or on disk, and you shouldn't have to read the contents of the file in order to transfer it over postMessage.
I opened another issue which is effectively a duplicate of this one, after noticing the limitations on what can be stored in Deno KV compared to what should/could be. There, I listed the web APIs that should be cloneable but aren't.
This should be easier to implement now with https://github.com/denoland/deno/pull/21358 and its dependencies having landed.
Deno's implementation of structured serialize currently supports serializing the JS built-in SharedArrayBuffer, and transferring the JS built-in ArrayBuffer as well as the MessagePort web API. With #11823 it will also support serializing the wasm built-in WebAssembly.Module.

But Deno also implements some web APIs that per the spec should be serializable or transferable, but aren't in Deno's implementation. In particular, DOMException, File, Blob and CryptoKey should be serializable; and ReadableStream, WritableStream and TransformStream should be transferable.

Implementing this would probably need adding some Deno.core API to define serialization and transfer steps, possibly in connection to the existing Deno.core.createHostObject(). This would also allow refactoring the implementation of MessagePort.
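An illustrative summary of the gap as runnable checks; structuredClone here stands in for any structured-clone entry point (such as worker postMessage), and the failure is assumed to surface as a DataCloneError, as it does in browsers for non-serializable values:

// Works today: SharedArrayBuffer serializes, ArrayBuffer transfers.
const sab = new SharedArrayBuffer(8);
structuredClone(sab);

const ab = new ArrayBuffer(8);
structuredClone(ab, { transfer: [ab] });

// Per spec these should work too, but currently fail in Deno:
structuredClone(new Blob(["hi"]));                   // should serialize
const { readable } = new TransformStream();
structuredClone(readable, { transfer: [readable] }); // should transfer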