kylebebak / Requester

Powerful, modern HTTP/REST client built on top of the Requests library
https://kylebebak.github.io/Requester/
MIT License
307 stars · 10 forks

Feature suggestion: allow "manual" (or step-by-step) chaining of the requests #31

Open igor-kupczynski opened 3 years ago

igor-kupczynski commented 3 years ago

The docs specify how to chain requests:

    get('http://httpbin.org/get')
    get('http://httpbin.org/cookies', cookies={'url': Response.json()['url']})

or

    get('httpbin.org/get', name='first_response')
    get('google.com', allow_redirects=False)
    get('httpbin.org/cookies', cookies={'url': first_response.json()['url']})

The problem is that you have to:

  1. Select all of the requests
  2. Open the command palette (⌘⇧P) and run Requester: Run Requests Serially.

If you want to run them one by one instead, you'll get an error (screenshot omitted).

I think I understand why: the env is per run. If you run the requests one by one, the second run has a separate env, without the Response.

It would be nice to allow some form of persistence for such exploratory runs, when you prefer to run the requests one by one. What do you think?

igor-kupczynski commented 3 years ago

Also, when you try to export such a file to curl, it fails :( (screenshot omitted)

Do you think this can be fixed? Do you have any hints on how to do that?

kylebebak commented 2 years ago

> I think I understand why: the env is per run. If you run the requests one by one, the second run has a separate env, without the Response.

This is exactly right. It might be possible to add another arg, like persist_response, that keeps the named response in memory shared across Requester calls, so it can be referenced in future calls.
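As a rough sketch of what a shared in-memory store might look like (RESPONSES and run_request are illustrative names, not Requester's actual internals):

```python
# Hypothetical sketch: a module-level registry that outlives a single run,
# because the plugin module stays loaded between Requester invocations.
RESPONSES = {}

def run_request(name, response, persist_response=False):
    """Store the response under its name when persist_response is set."""
    if persist_response:
        RESPONSES[name] = response
    return response

# First run: persist the response under its name.
run_request("first_response", {"url": "http://httpbin.org/get"},
            persist_response=True)

# A later run, with a fresh env, could still look it up by name.
assert "first_response" in RESPONSES
```

The trade-off is that anything kept this way lives until the plugin is reloaded, so the store would probably need some size or age limit.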

A simpler option is not to share memory across Requester calls, but to serialize the response object with something like pickle, so it can be written to disk and pulled back into memory on a future call.
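A minimal sketch of the pickle approach, assuming a temp-file store (STORE, persist, and load_all are hypothetical names, not Requester's API):

```python
import os
import pickle
import tempfile

# Hypothetical: persist named responses to disk between Requester runs.
STORE = os.path.join(tempfile.gettempdir(), "requester_responses.pkl")

def load_all():
    """Load previously persisted responses, or an empty dict on first run."""
    try:
        with open(STORE, "rb") as f:
            return pickle.load(f)
    except (OSError, pickle.PickleError):
        return {}

def persist(name, response):
    """Serialize a named response so a later, separate run can reload it."""
    responses = load_all()
    responses[name] = response
    with open(STORE, "wb") as f:
        pickle.dump(responses, f)

# First run: persist the response (a plain dict stands in here for a real
# requests.Response, which is also picklable once its content is consumed).
persist("first_response", {"url": "http://httpbin.org/get"})

# A later run, with a fresh env, can pull it back into memory by name.
first_response = load_all()["first_response"]
print(first_response["url"])  # http://httpbin.org/get
```

This avoids any shared in-process state, at the cost of a disk read/write per run and the usual caveat that pickle files should only be loaded from trusted locations.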

I think this is an edge case, but would be valuable for "exploratory runs" as you suggest. PR welcome