Rzpeg opened 1 year ago
I'm interested in this too. I believe Anoop mentioned in this other thread that it's not yet allowed for security reasons, but that the goal would be to allow users to explicitly enable access.
My use case is that in my pre-request script, I want to read a token that is stored/updated in a specific file on my filesystem to make http calls to a Vault instance, to then retrieve secrets for use in fetching OAuth tokens that I can use in my requests.
If there are any updates on this, please let us know. If you're comfortable opening this up to a contribution, I'd be happy to investigate and make an attempt at a PR.
Edit: I tried to implement a workaround locally, but ran into issues using the node-vault package, which apparently internally depends on the node tty package (https://nodejs.org/api/tty.html). I'm tempted to ask if we can enable it like Anoop suggested we could enable stream, path, url, and util in the issue I linked above, but I'm afraid we might end up in a game of infinite whack-a-mole where there are constantly packages to be whitelisted. Instead, I might suggest that we allow users to specifically enable/disable certain built-in packages in their VM, or even allow an option for running the VM with a host context instead of sandbox, perhaps with an appropriate warning about the security implications?
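For illustration, both ideas map onto real vm2 NodeVM options; here is a hedged sketch of what that opt-in could look like (this is not Bruno's actual implementation):

const { NodeVM } = require('vm2');

const vm = new NodeVM({
  require: {
    external: true,
    // whitelist specific Node built-ins inside the sandbox
    builtin: ['stream', 'path', 'url', 'util'],
    // 'sandbox' keeps modules contextified; 'host' would resolve them
    // in the host context, with the security trade-offs noted above
    context: 'sandbox'
  }
});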
Could it be secure to access files inside collections? I mean, having a module that only has access to files stored in the same path as the .bru files.
There are two kinds of access that we need to consider when executing a script inside vm2: network access and filesystem access.
We decided to allow network access to the script by default, since it is a really common thing that people who use Bruno need.
In the future, we will provide a toggle to disable network access in bruno.json.
We decided to disable filesystem access by default.
The user can, however, enable it by adding the following to bruno.json:
{
  "filesystemAccess": {
    "allow": true
  }
}
Could it be secure to access files inside collections? I mean, having a module that only has access to files stored in the same path as the .bru files.
Yes, we will have to build a wrapper around fs before passing it to vm2.
Just got this completed. Cutting a new release now.
Available in GUI v0.19.0
Not yet available in CLI
Go give it a whirl and let me know if your case is solved.
@helloanoop Amazing. My case is solved, as I can now do the 2 things I have been missing.
Now we just need this supported in the CLI :)
Thanks!
@Rzpeg Can you share your script? You can replace the URLs and other things for privacy. Your script would be super helpful for community members who want to do file uploads using Bruno.
@helloanoop Sure, here it goes:
const fs = require('fs');
const path = require('path');

// read the attachment from the collection directory as a base64 string
const attachmentFilename = "debug-data.bin";
const attachmentPath = path.join(bru.cwd(), attachmentFilename);
const attachment = fs.readFileSync(attachmentPath, "base64");

// note: this is the length of the base64 string, not the raw file size
const attachmentLength = attachment.length;

req.setHeader("Content-Type", "application/octet-stream");
req.setHeader("Content-Disposition", `attachment; filename="${attachmentFilename}"`);
req.setHeader("Content-Length", attachmentLength);
req.setBody(attachment);
@helloanoop Does the CLI already support running scenarios that use the fs module?
Edit: Just did a test run on CLI 0.12.0, and it's not supported yet.
Great job @Rzpeg. I have tried this, changing "application/octet-stream" to "multipart/form-data", but it does not work.
This is because the multipart/form-data content type has a different body structure/specification.
You need to define boundaries and construct the request in the proper manner: https://www.w3.org/TR/html401/interact/forms.html#h-17.13.4.2
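For instance, a minimal sketch using the form-data package (assuming it is available to the script runtime): the library generates the boundary and assembles the multipart body for you.

const fs = require('fs');
const path = require('path');
const FormData = require('form-data');

const form = new FormData();
// streams the file into a multipart part named 'file'
form.append('file', fs.createReadStream(path.join(bru.cwd(), 'debug-data.bin')));

// getHeaders() yields { 'content-type': 'multipart/form-data; boundary=...' }
req.setHeader('Content-Type', form.getHeaders()['content-type']);
req.setBody(form);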
@eferro70 Once you figure out how to solve your use case, it would be great if you could share it in Scriptmania - https://github.com/usebruno/bruno/discussions/385
Hello, I got here via this issue. Please tell me if I should rather open a new ticket, but I found this thread quite relevant.
My problem: I am attempting to access secrets in Azure Key Vault using @azure/keyvault-secrets and @azure/identity in a pre-request script, but I am facing module issues: Error invoking remote method 'send-http-request': VMError: Cannot find module 'os'. I managed to sneak past this one by installing this exported version, but only to face the next one in line: Error invoking remote method 'send-http-request': VMError: Cannot find module 'crypto'.
What is the current plan on resolving issues like this? Maybe a list of whitelisted packages from node could be included in bruno.json?
Yes, I agree with the whitelisting approach: a moduleWhitelist.
Also curious to know which tool you were using before Bruno :) And how were you managing secrets before?
@helloanoop Whitelisting is a neat idea. Also, while you are here, do you happen to have any ETA for fs support in the CLI?
2 hrs :)
That's... At least 2 days quicker than anticipated. :) Take a weekend and... REST :P
Awesome! In all honesty my current workflow is the good old copy/paste, which I'm trying to get away from. I am currently using Insomnia a bit on and off. There is an azure key vault plugin that I didn't get to work last time I tried (my fault no doubt). I stumbled across Bruno today and I got curious if I could get it working 😄
@jonasgheer @Rzpeg I have published Bru CLI v0.13.0 and Bru GUI v0.21.0. These have support for whitelisting and filesystem access. There is a breaking change in the config:
{
  "version": "1",
  "name": "bruno-testbench",
  "type": "collection",
  "scripts": {
    "moduleWhitelist": [
      "crypto"
    ],
    "filesystemAccess": {
      "allow": true
    }
  }
}
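With "crypto" whitelisted as above, a request script should then be able to require it directly; a small sketch (the header name is just illustrative):

const crypto = require('crypto');

// e.g. hash a payload and attach the digest as a request header
const digest = crypto.createHash('sha256').update('payload').digest('hex');
req.setHeader('X-Payload-Digest', digest);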
Let me know if you face any further issues. Also @jonasgheer Once you get Azure Key Vault working, please share your script for the community reference in Scriptmania - https://github.com/usebruno/bruno/discussions/385
Let's keep this ticket open. Need to update docs.
Oh wow, that was quick! Unfortunately I'm still facing some issues. After whitelisting "os" and "crypto" I ran into this error: Error: Error invoking remote method 'send-http-request': VMError: Operation not allowed on contextified object. I had a go at grabbing the VM code in an attempt to narrow down the problem (on the way I even bumped into an issue you filed on the vm2 repo, so thanks for that one 😄). Unfortunately the only thing I managed to conclude is that the script crashes somewhere in the require call: const { SecretClient } = require("@azure/keyvault-secrets");. From what I have gathered, I'm guessing the code in there tries to manipulate something it's not allowed to.
Granted, the module requires that you have the Azure CLI installed and that you are logged in. So I'm not even sure if it is really possible to have it run in a somewhat "safe" manner? I'm not sure if these sorts of modules more or less require full-blown host access or not 🤔 I'm flailing a bit in the dark here. My limited knowledge of both vm2 and @azure/keyvault-secrets has me a bit stumped.
@helloanoop
I might be making some progress here, hold your horses 😄
I have identified some problems. I'll start with the biggest one, which sadly might make this whole thing a no-go.
@azure/keyvault-secrets pulls in 3 libraries, amongst a bunch of others, and these packages all have a line of code in them calling util.inherits:
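The pattern in question is the classic pre-ES6 inheritance idiom; an illustrative sketch (not the exact library source) looks like this:

const util = require('util');
const { EventEmitter } = require('events');

function Poller() {
  EventEmitter.call(this);
}

// under vm2, built-ins like events are proxied into the sandbox, and
// extending a proxied class this way is what blows up with
// "Operation not allowed on contextified object"
util.inherits(Poller, EventEmitter);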
I found a related issue in the vm2 repo (it links further to other related issues). As far as I have gathered, I guess the conclusion is that the internal node modules are not "real" but proxied (I'm on thin ice here), and as a result you are not allowed to extend other objects using them. This is, as far as I can tell, "Known issue" nr 2 in the readme:
It is not possible to define a class that extends a proxied class. This includes using a proxied class in Object.create.
I guess this might be something to pick up again after the project has moved to isolated-vm? Unless I have completely missed the mark here and there is an easier solution right in front of me that I can't think of 😅
I guess this might be something to pick up again after the project has moved to isolated-vm (https://github.com/usebruno/bruno/issues/263)?
Yes, that makes sense.
Now that the CLI route didn't work, I recommend trying the API route. I created a separate ticket to track doing this via the API: https://github.com/usebruno/bruno/issues/454
@jonasgheer Can you take a stab at it?
I've tried to download a .zip file (from a local server) using post-response scripting.
This is my post-response script:
const fs = require('fs');
const path = require('path');

// interpret the body as binary data and buffer it
const buffer = Buffer.from(res.getBody(), 'binary');
const zipFilename = "output.zip";
const zipPath = path.join(bru.cwd(), zipFilename);

// sanity check: compare the bytes received against the declared content-length
const contentLength = res.headers['content-length'];
if (buffer.length !== parseInt(contentLength)) {
  throw new Error('Downloaded content length does not match the expected content length.');
}

fs.writeFileSync(zipPath, buffer);
However, I run into the Error every time. The file is a stream (with transfer-encoding: chunked). If I don't use the post-response script, I get a response with binary text. Postman allows downloading request responses. Is there anything wrong in the script, or is Bru messing up the download somewhere?
I think this is due to the underlying vm2 module not allowing write access to the filesystem. I feel we should be able to fix this at the code level by passing an fs wrapper into the script runtime here.
The idea is that vm2 delegates the write task back to the electron layer, which has access to write to the filesystem.
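A minimal sketch of that idea, assuming vm2's require.mock option (the wrapper shape is hypothetical, not Bruno's actual code):

const fs = require('fs');
const { NodeVM } = require('vm2');

// a thin wrapper: the sandboxed script calls these methods, while the
// actual reads/writes run on the host (electron) side
const fsWrapper = {
  writeFileSync: (filepath, data) => fs.writeFileSync(filepath, data),
  readFileSync: (filepath, options) => fs.readFileSync(filepath, options)
};

const vm = new NodeVM({
  require: {
    // require('fs') inside the sandbox now resolves to the wrapper
    mock: { fs: fsWrapper }
  }
});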
If anyone has bandwidth in the community, please pick this up.
cc @DivyMohan14
Writing kinda works. If I rewrite the code and ignore the error, the file gets written, but it is corrupt. Trying to unzip it using unzip (from apt) returns this:
error [output.zip]: missing 291 bytes in zipfile
(attempting to process anyway)
error [output.zip]: start of central directory not found;
zipfile corrupt.
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
Makes sense @helloanoop let me have a look at this ...
@DivyMohan14 I confirm that writing indeed works
const fs = require('fs');
const path = require('path');
const filepath = path.join(bru.cwd(), 'log.txt');
fs.writeFileSync(filepath, JSON.stringify(res.getBody()));
@DivyMohan14 When you have some time, please see if you are able to implement streaming event listening on the res object. We should then be able to use something like this:
let responseData = '';

res.on('data', (chunk) => {
  responseData += chunk;
});

res.on('end', () => {
  try {
    let jsonData = JSON.parse(responseData);
    // write to file
  } catch (error) {
    // handle a body that isn't valid JSON
  }
});
I did the setup that @Xceron might have, and yeah, indeed the zip file write does not work as expected, while, as you said @helloanoop, the normal write works as expected.
The problem here does not seem to be with the filesystem or the write access; it looks to be some issue with the response configuration. I am on it and expect to find the root cause in some time.
Looks like I found the issue: it is related to axios returning binary data. Dealing with that seems to be a problem, so an easier solution is to get the response as an arraybuffer instead.
I wrote logic in the request preparation step to check if the Content-Type is application/zip and, if so, set responseType to arraybuffer in the axios config.
A good resource for the issue: link here
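Roughly, the described check looks like this (a sketch; the request object and header casing here are assumptions, not the exact PR code):

const axios = require('axios');

const axiosConfig = {
  url: request.url,
  method: request.method,
  headers: request.headers
};

if ((request.headers['Content-Type'] || '').includes('application/zip')) {
  // ask axios for raw bytes instead of decoding the body as a string
  axiosConfig.responseType = 'arraybuffer';
}

axios(axiosConfig).then((response) => {
  // response.data now arrives as a Buffer/ArrayBuffer rather than a string
});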
@helloanoop please have a look at the PR.
@Xceron you can try to run the server from my branch in the meantime to check if this solves the issue?
@DivyMohan14 thank you for the quick PR! I've tried running your version but still run into the same issue with my code mentioned here. What is your post-response script to download the zip?
@Xceron Interesting, I was running the same post-response script that you are using. Can you maybe send the server code which you are running? I had set up a mock server which streams a zip file. Btw, in the request headers did you mention application/zip as the Content-Type, just to cross-check?
My zip file stream mock setup:

var express = require('express');
var router = express.Router();
var path = require('path');
var fs = require('fs');

router.get("/", async (req, res, next) => {
  let filepath = path.join("png.zip");
  let stat = fs.statSync(filepath);
  console.log("request received");
  res.set({
    "Content-Type": "application/zip",
    "Content-Length": stat.size,
    "File-Name": `${path.basename(filepath)}`
  });
  var readStream = fs.createReadStream(filepath);
  readStream.pipe(res);
});

module.exports = router;
The server is a non-public Spring Boot (Java) server, so I cannot share it, sadly.
The headers of the response are content-type: application/zip and transfer-encoding: chunked.
I will try to set up your mock server to test. I'm not a JS guy, so it'll take some time. Will report back if I manage to pinpoint the problem.
Cool, no issues. In the PR, if you noticed, I was using 'Content-Type' as the identifier.
So in Bruno, I believe you are setting Content-Type as application/zip in the request headers; if not, can you please check after setting this?
And can you confirm one more thing: in the response tab, is it still binary data for you?
Don't know whether I can follow you, let me explain:
My request is a file upload (which I do using the pre-request script shared here), which takes multiple files as input and uploads them to a server. The server uses those files, does some things, and sends back a .zip.
I do not set the request headers manually (or at all), but I tried setting content-type: application/zip and content-type: multipart/form-data in the request, with no success in either case.
The files which the server receives are correct and work perfectly fine, so the request side of Bru works without any problem. The server then sends back the .zip and sets content-type: application/zip in the response. Bru shows the binary data in the response pane. The binary data is also mostly correct, as I can see the correct file names every time. The zip is just missing some bytes to be a valid zip.
You run the script you mentioned here after you get the response from the server, right? Since it said post-response script..?
I will share my bru file here...
meta {
  name: Test zip file download issue
  type: http
  seq: 2
}

get {
  url: http://127.0.0.1:3002/
  body: none
  auth: none
}

headers {
  Content-Type: application/zip
}

script:post-response {
  const fs = require('fs');
  const path = require('path');
  const buffer = Buffer.from(res.getBody(), 'binary');
  const zipFilename = "output.zip";
  const zipPath = path.join(bru.cwd(), zipFilename);
  const contentLength = res.headers['content-length'];
  if (buffer.length !== parseInt(contentLength)) {
    throw new Error('Downloaded content length does not match the expected content length.');
  }
  fs.writeFileSync(zipPath, buffer);
}
Can you also share your bru file if possible? It will be easier to debug..
meta {
  name: Upload Files and zip them
  type: http
  seq: 11
}

post {
  url: http://127.0.0.1:8081/upload_files
  body: multipartForm
  auth: none
}

body:form-urlencoded {
  firstFile: undefined
  secondFile: undefined
}

script:pre-request {
  const fs = require('fs');
  const path = require('path');
  const FormData = require('form-data');
  const firstFile = "first.txt";
  const secondFile = "second.txt";
  const firstPath = path.join(bru.cwd(), firstFile);
  const secondPath = path.join(bru.cwd(), secondFile);
  const form = new FormData();
  form.append('firstFile', fs.createReadStream(firstPath), firstFile);
  form.append('secondFile', fs.createReadStream(secondPath), secondFile);
  req.setBody(form);
}

script:post-response {
  const fs = require('fs');
  const path = require('path');
  const buffer = Buffer.from(res.getBody(), 'binary');
  const zipFilename = "output.zip";
  const zipPath = path.join(bru.cwd(), zipFilename);
  const contentLength = res.headers['content-length'];
  if (buffer.length !== parseInt(contentLength)) {
    throw new Error('Downloaded content length does not match the expected content length.');
  }
  fs.writeFileSync(zipPath, buffer);
}
Yeah, can you please copy the headers section from my bru file into yours and then try once?
Just tried that, still running into the Error
In the response section, are you still seeing the response in binary format? If yes, then it is a problem; it should have changed to an arraybuffer response if you are running from my branch.
No, with the headers set I just run into the error and get no response.
Got it, thanks, let me check. One thing: maybe the transfer-encoding: chunked might be a differentiator here.. I did not look at that.
Can you share the response headers @Xceron? I will check that.
| Name | Value |
| --- | --- |
| content-type | application/zip |
| transfer-encoding | chunked |
| date | Wed, 11 Oct 2023 12:46:22 GMT |
| connection | close |
@Xceron as you can see, if the transfer-encoding is chunked, then the server will not return a content-length, so that check will technically always fail... Can you please remove that check and run the script again, if you have not already done it on this new branch?
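For example, the length check in the post-response script could be guarded so it only runs when a content-length header is actually present (a sketch of the suggested change; the rest of the script stays as-is):

// chunked responses carry no content-length header, so only compare
// lengths when the server actually declared one
const contentLength = res.headers['content-length'];
if (contentLength && buffer.length !== parseInt(contentLength)) {
  throw new Error('Downloaded content length does not match the expected content length.');
}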
If I just remove the check, the request goes through and I see binary output in the response, but the written zip is still corrupt. This happens both with and without the headers set.
Cool, thanks. Can you help me with some questions? Is the server sending the response with transfer-encoding: chunked? And in the response tab, do you see something like { "0":80,"1":75,"2":3,"3":4,"4":20 }, basically as a buffer? I believe that is not happening for you; you are still seeing a binary response?
The server uses Spring's InputStreamResource to send the read file in chunks, so it streams it. I don't get any buffer. Could this be related to the file size? The .zip files in my tests are tiny (~3 KB) and the requests finish instantly.
Hmm, interesting. I also set up a server which streams zip files, but the same setup seems to be working fine for me.
@Xceron when trying to run your code I get an error: "Error invoking remote method 'send-http-request': VMError: Cannot find module 'form-data'". Any idea what is wrong?
Please include the fs modules (fs, fs-extra) in the distribution and allow accessing the filesystem. This is needed to generate attachments from files in pre-request scripts.