Open aliko-str opened 3 years ago
I can update this :+1:
Take it away, Alexander!
@aliko-str would this be the right example?
```js
#!/usr/local/bin/node
(() => {
  let payloadSize = null;

  // A queue to store the chunks as we read them from stdin.
  // This queue can be flushed when `payloadSize` data has been read
  let chunks = [];

  // Only read the size once for each payload
  const sizeHasBeenRead = () => Boolean(payloadSize);

  // Record leftover data to add back into chunks
  let leftoverData = stringData.slice(payloadSize + 4); // Note: flushChunksQueue() zeroes payloadSize

  // Reset the read size and the queued chunks
  flushChunksQueue();

  // Add leftover data back into the buffer and repeat processing
  if (leftoverData.length) {
    chunks.push(leftoverData);
    processData();
  }

  const processData = () => {
    // Create one big buffer with all the chunks
    const stringData = Buffer.concat(chunks);

    // The browser will emit the size as a header of the payload,
    // if it hasn't been read yet, do it.
    // The next time we'll need to read the payload size is when all of the data
    // of the current payload has been read (i.e. data.length >= payloadSize + 4)
    if (!sizeHasBeenRead()) {
      payloadSize = stringData.readUInt32LE(0);
    }

    // If the data we have read so far is >= to the size advertised in the header,
    // it means we have all of the data sent.
    // We add 4 here because that's the size of the bytes that hold the payloadSize
    if (stringData.length >= (payloadSize + 4)) {
      // Remove the header
      const contentWithoutSize = stringData.slice(4, payloadSize + 4);

      // Reset the read size and the queued chunks
      flushChunksQueue();

      const json = JSON.parse(contentWithoutSize);
      // Do something with the data...
    }
  };

  process.stdin.on('readable', () => {
    // A temporary variable holding the nodejs.Buffer of each
    // chunk of data read off stdin
    let chunk = null;

    // Read all of the available data
    while ((chunk = process.stdin.read()) !== null) {
      chunks.push(chunk);
    }

    processData();
  });
})();
```
I'm not sure you've attached the code that you meant...

I think I'd do something like the code below, though it could probably be improved (MDN shouldn't show imperfect code):

* `sizeHasBeenRead` seems unnecessary: it doesn't have a practically smaller overhead than re-reading `payloadSize`, and if anything, repeating `Buffer.concat` every time costs more.
* Flushing the queue and then putting data back into `chunks` feels wrong.
* Recursion (`processData` calling `processData`) usually means I was too lazy to do things properly.
* `chunks` can be a `const`.
* `payloadSize` can be kept inside `processData` if it's re-read each time; `flushChunksQueue` is then unnecessary and reduces to `chunks.splice(0);`.
* And maybe something else.
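A rough sketch of what those suggestions could add up to (my reading of them, not text from the page; `onMessage` is a hypothetical callback standing in for "do something with the data"). It would slot into the same `process.stdin.on('readable', ...)` handler as the examples in this thread:

```javascript
// Sketch of the suggested simplification: `chunks` is a const, the
// payload size is re-read on every pass, and a loop replaces both the
// recursion and flushChunksQueue().
const chunks = [];

const processData = (onMessage) => {
  while (true) {
    const data = Buffer.concat(chunks);
    // Wait until at least the 4-byte size header has arrived.
    if (data.length < 4) return;
    const payloadSize = data.readUInt32LE(0);
    // Wait until the whole advertised payload has arrived.
    if (data.length < payloadSize + 4) return;
    // Strip the header, parse the message, hand it to the caller.
    onMessage(JSON.parse(data.slice(4, payloadSize + 4)));
    // Keep only the bytes of any following message, then loop again
    // so several complete messages in one read are all handled.
    chunks.splice(0, chunks.length, data.slice(payloadSize + 4));
  }
};
```

The loop makes the leftover-data case fall out naturally: after each complete message the remaining bytes become the new queue contents and the next iteration re-reads the size from them.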
```js
(() => {
  let payloadSize = null;

  // A queue to store the chunks as we read them from stdin.
  // This queue can be flushed when `payloadSize` data has been read
  let chunks = [];

  // Only read the size once for each payload
  const sizeHasBeenRead = () => Boolean(payloadSize);

  // All the data has been read, reset everything for the next message
  const flushChunksQueue = () => {
    payloadSize = null;
    chunks.splice(0);
  };

  const processData = () => {
    // Create one big buffer with all the chunks
    const stringData = Buffer.concat(chunks);

    // The browser will emit the size as a header of the payload,
    // if it hasn't been read yet, do it.
    // The next time we'll need to read the payload size is when all of the data
    // of the current payload has been read (i.e. data.length >= payloadSize + 4)
    if (!sizeHasBeenRead()) {
      payloadSize = stringData.readUInt32LE(0);
    }

    // If the data we have read so far is >= to the size advertised in the header,
    // it means we have all of the data sent.
    // We add 4 here because that's the size of the bytes that hold the payloadSize
    if (stringData.length >= (payloadSize + 4)) {
      // Remove the header
      const contentWithoutSize = stringData.slice(4, payloadSize + 4);

      // If FF sent more than one message, record leftover data to be added back into chunks.
      // Note: flushChunksQueue() zeroes payloadSize, which is why we save leftoverData here.
      const leftoverData = stringData.slice(payloadSize + 4);

      // Reset the read size and the queued chunks
      flushChunksQueue();

      const json = JSON.parse(contentWithoutSize);
      // Do something with the data...

      // Add the leftover data back into the buffer and repeat processing
      if (leftoverData.length) {
        chunks.push(leftoverData);
        processData();
      }
    }
  };

  process.stdin.on('readable', () => {
    // A temporary variable holding the nodejs.Buffer of each
    // chunk of data read off stdin
    let chunk = null;

    // Read all of the available data
    while ((chunk = process.stdin.read()) !== null) {
      chunks.push(chunk);
    }

    processData();
  });
})();
```
@aliko-str would you like to submit a PR with your changes?
MDN URL: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging
What information was incorrect, unhelpful, or incomplete?
The nodejs example of the native app processing messages from FF.
The current example flushes all of the data (`chunks.splice(0);`) after one message has been read, even if leftover data/messages remain in the buffer array.
Specific section or headline?
"App side"
What did you expect to see?
A quick fix would be to push the leftover data back into the buffer array after flushing, and then run `processData` again, as in the code above.
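For anyone wanting to reproduce the multi-message case without running a browser: native messaging frames each message as a 32-bit length (in native byte order; the examples in this thread assume little-endian) followed by the UTF-8 JSON. A hypothetical helper, not part of the MDN page, can build such frames to pipe into the app's stdin:

```javascript
// Hypothetical test helper: frames a value the way the browser does for
// native messaging, i.e. a 4-byte little-endian length header followed
// by the UTF-8-encoded JSON. Concatenating several encodeMessage()
// results and feeding them to the app's stdin reproduces the case where
// more than one message arrives in a single read.
const encodeMessage = (value) => {
  const json = Buffer.from(JSON.stringify(value), 'utf8');
  const header = Buffer.alloc(4);
  header.writeUInt32LE(json.length, 0);
  return Buffer.concat([header, json]);
};
```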
Did you test this? If so, how?
Yes. Run my extension and saw that the issue of disappearing/unprocessed messages has gone away.
MDN Content page report details
* Folder: `en-us/mozilla/add-ons/webextensions/native_messaging`
* MDN URL: https://developer.mozilla.org/en-US/docs/Mozilla/Add-ons/WebExtensions/Native_messaging
* GitHub URL: https://github.com/mdn/content/blob/master/files/en-us/mozilla/add-ons/webextensions/native_messaging/index.html
* Last commit: https://github.com/mdn/content/commit/3e5d8947cf03d1ae758aff8926c04c38eca2db2a
* Document last modified: 2021-02-01T10:33:54.000Z