Closed brandontoups closed 2 years ago
I am getting the same error all of a sudden. It seems the res.text is getting cut off and not pulling the full string.
I am guessing this is due to some sort of upstream change with Knack.
I did confirm that calling the native API reproduces this error, so it does look like an upstream problem. Sorry for raising the issue here. We'll reach out to Knack.
```python
import requests

page = 1  # page number to fetch; increment to walk through all pages

url = (
    "https://api.knack.com/v1/objects/object_7/records"
    "?filters=[{\"field\":\"field_75\",\"operator\":\"is after\",\"value\":\"10/14/2021\"},"
    "{\"field\":\"field_75\",\"operator\":\"is before\",\"value\":\"12/14/2021\"}]"
    "&rows_per_page=1000&page=" + str(page)
)

headers = {
    "X-Knack-Application-Id": "REDACTED",
    "X-Knack-REST-API-Key": "REDACTED",
    "Content-Type": "application/json",
    "Cookie": "WITHDRAWN",
}

response = requests.get(url, headers=headers)
print(response.text)
```
The expected total_records is ~3500. I'm using rows_per_page=1000, which matches what knackpy uses and matches the documentation here.
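For context, a minimal sketch of following Knack's pagination to completion. This is not knackpy's internal implementation, just an illustration; it assumes the v1 response body contains `records` and `total_pages` keys, which is how Knack's REST API documents its paginated responses:

```python
import math
import requests

ROWS_PER_PAGE = 1000  # matches the value used in the request above


def total_pages(total_records, rows_per_page=ROWS_PER_PAGE):
    """Number of page requests needed to fetch every record."""
    return math.ceil(total_records / rows_per_page)


def fetch_all(base_url, headers):
    """Walk every page of a Knack records endpoint and collect the results.

    `base_url` is assumed to end with `...&page=` so the page number can
    be appended, as in the snippet above.
    """
    records, page = [], 1
    while True:
        resp = requests.get(base_url + str(page), headers=headers, timeout=30)
        resp.raise_for_status()
        body = resp.json()
        records.extend(body["records"])
        if page >= body["total_pages"]:
            break
        page += 1
    return records
```

With ~3500 records at 1000 rows per page, `total_pages(3500)` gives 4 requests, so a truncated `res.text` on any single page would explain missing records.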
Sorry for bringing this up in your issues and not with Knack directly
@brandontoups sorry for the slow reply. glad to hear this isn't a knackpy issue! closing.
@brandontoups if you do hear from Knack, i'd love to understand what's happening here. we have occasional ETLs that pull tens of thousands of records without issue. we only test knackpy against an enterprise plan, but i would be surprised if that was related.
I'm not sure what the issue with the API was, but they resolved it. I reduced the number of rows per pull in the code while it was a problem, but I've since returned it to 1000 and it is working.
👍 thanks!
@verifiedathletics @brandontoups if y'all haven't already, please ⭐ our repo if it's getting the job done for you. it helps us build internal support for the project.
@johnclary I asked about it, but it seemed to start working about a week later, so I never followed up with Knack. Sorry for never following up here.
Howdy
I'm seeing some issues with both the old and new versions of knackpy's .get functionality erroring out. Both of these fetches occasionally succeed, but 99% of the time they don't.
We can't tell if it's an issue with pagination, improper responses from Knack, or something else.
I've confirmed using Knack's native API that the knackpy GETs are attempting a fetch of 143 pages and 3561 total records, with an estimated total size of 21 MB. That doesn't seem large enough to warrant the errors we're seeing.
For repro, I was having trouble tracking down an exact setup that would trigger this every time. It wasn't a specific subset of records, as it would happen intermittently. It seemed like the number of records returned was the most likely culprit, but again, we've successfully pulled more than 4k records plenty of times before. This feels somewhat new, and I don't think response errors should be bubbling up to the native Python libraries.
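Since the failures are intermittent, one possible workaround while the upstream problem is investigated is to retry transient HTTP errors with backoff. This is only a sketch using the standard `requests` + `urllib3` retry machinery, not anything knackpy does itself:

```python
import requests
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry


def make_session(retries=5, backoff=1.0):
    """Build a requests.Session that retries transient failures.

    Retries GETs on common transient status codes (429 and 5xx) with
    exponential backoff: sleeps of backoff * 2**(attempt - 1) seconds.
    """
    retry = Retry(
        total=retries,
        backoff_factor=backoff,
        status_forcelist=[429, 500, 502, 503, 504],
        allowed_methods=["GET"],
    )
    session = requests.Session()
    session.mount("https://", HTTPAdapter(max_retries=retry))
    return session
```

Swapping `requests.get(...)` for `make_session().get(...)` in the repro scripts would at least distinguish a flaky endpoint from a hard failure.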
Sorry if this is actually an issue upstream of knackpy.
System
Issue/Repro for knackpy==1.0.20
The following code
produced the following error
With the same code I've also gotten different errors akin to
Issue/Repro for knackpy==0.1.1
Code:
Error: