The core JSON-LD API algorithms are intended to work in memory, which is incompatible with the needs of medium to large datasets.
Of note are the discussions in https://github.com/w3c/json-ld-syntax/issues/366 and https://github.com/rubensworks/jsonld-streaming-parser.js/issues/65. Best practices should include the principles of JSON-LD streaming, where keys appear in a strict order (e.g., `@context` and `@type` before other properties), as well as use of the streaming document profile so that clients understand the processing-order considerations when expanding JSON-LD and transforming it into RDF.
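For illustration, a minimal sketch of consuming such a document with jsonld-streaming-parser.js (assuming a Node.js environment; the file name is a placeholder, and the `streamingProfile` option lets the parser assume the input follows the streaming profile):

```typescript
import * as fs from "fs";
import { JsonLdParser } from "jsonld-streaming-parser";

// Assume the input follows the streaming JSON-LD profile
// (e.g. @context and @type precede other keys), so quads can be
// emitted as they are read instead of buffering the whole document.
const parser = new JsonLdParser({ streamingProfile: true });

fs.createReadStream("large-dataset.jsonld") // hypothetical input file
  .pipe(parser)
  .on("data", (quad) => console.log(quad)) // one RDF/JS quad at a time
  .on("error", console.error)
  .on("end", () => console.log("All quads emitted."));
```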
Alternatively, serializations can be limited to a flattened representation so that state does not need to be managed over a large set of embedded nodes (this is probably the most natural way to blindly serialize a large dataset, in any case).
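As a sketch of what that looks like with jsonld.js (the nested `doc` is a made-up example; note that `jsonld.flatten` itself is one of the in-memory API algorithms, so here it only illustrates the shape a flattened serialization would take):

```typescript
import * as jsonld from "jsonld";

// A nested document: the author node is embedded in the article,
// so a serializer would have to track it as state.
const doc = {
  "@context": { "@vocab": "http://schema.org/" },
  "@id": "http://example.org/article/1",
  "@type": "Article",
  "author": { "@id": "http://example.org/person/1", "name": "Jane Doe" }
};

async function main() {
  // Flattening lifts every node object to the top level of @graph,
  // so each node is self-contained and can be emitted one at a time.
  const flattened = await jsonld.flatten(doc);
  console.log(JSON.stringify(flattened, null, 2));
}

main().catch(console.error);
```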
cc/ @wouterbeek @rubensworks