Closed: gyrter closed this issue 1 year ago
Could you explain first why the query result is so huge? What is actually in the trace?
I will try to find the largest doc in the index, decode it, and send you an example trace.
Do you mean that the problematic trace came from a backend job, and that it was not an API or normal web render trace?
This seems possible. But let's discuss based on the real data.
@gyrter I deleted the message for you. Please don't post your env data in this public channel. It could leak more information than you expect, as traces usually include IP:port pairs and server relationships.
Which part are you failing to read? I could guide you through the method, but you had better decode it in your private env only.
Elasticsearch returns a heavy document. Example query:
```
curl -X GET "localhost:9200/sw_segment-20230927,sw_segment-20230928,sw_segment-20230929,sw_segment-20230930,sw_segment-20231001,sw_segment-20231002,sw_segment-20231003,sw_segment-20231004/_search?ignore_unavailable=true&expand_wildcards=open&allow_no_indices=true" -H 'Content-Type: application/json' -d'{"from":0,"size":20,"query":{"bool":{"must":[{"range":{"time_bucket":{"gte":20230927000000,"lte":20231004235959}}},{"term":{"service_id":"b2JpX2xvY2Fs.1"}}]}},"sort":[{"start_time":{"order":"desc"}}]}'
```
And I have problems with the response:
```json
{
  "took": 141,
  "timed_out": false,
  "_shards": {
    "total": 15,
    "successful": 15,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 3,
      "relation": "eq"
    },
    "max_score": null,
    "hits": [
      {
        "_index": "sw_segment-20231003",
        "_type": "_doc",
        "_id": "33659203614752764205590757610046136365",
        "_score": null,
        "_source": {
          "start_time": 1696338779995,
          "trace_id": "140293663296148403565889061609312210859",
          "data_binary": ....
          ....
```
The SkyWalking server shows me an error while accessing these documents. I inspected the data with jq:
```
zcat out.json.gz | jq '.hits.hits[] | ._source.data_binary | length'
21704012
21704012
21704016
```
As you can see, the data_binary field is too large. The Jackson library has a limit of 20000000 characters for a String, but I have 21704016.
> Which part are you failing to read? I could guide you through the method, but you had better decode it in your private env only.

How can I decode the data_binary field?
You could use `SegmentObject segmentObject = SegmentObject.parseFrom(segment.getDataBinary());` to decode it (a fuller sketch follows below).
Ref from
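Putting those pieces together, here is a minimal, self-contained sketch of that decoding step. It assumes the data_binary value has been copied out of the Elasticsearch `_source` into a text file, and that the value is the Base64 string Elasticsearch uses to serve binary-typed fields; the class name and file argument are hypothetical, and `SegmentObject` is the generated protobuf class from SkyWalking's data-collect protocol (package name per SkyWalking 9.x, adjust for your version):

```java
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.Base64;

// Generated protobuf class from SkyWalking's data-collect protocol.
import org.apache.skywalking.apm.network.language.agent.v3.SegmentObject;

public class DecodeSegment {
    public static void main(String[] args) throws Exception {
        // Hypothetical input: the raw data_binary value saved to a file,
        // passed as the first argument.
        String base64 = Files.readString(Paths.get(args[0])).trim();

        // Elasticsearch returns binary-typed fields as Base64 strings,
        // so decode before handing the bytes to protobuf.
        byte[] bytes = Base64.getDecoder().decode(base64);

        SegmentObject segmentObject = SegmentObject.parseFrom(bytes);

        // Print a summary rather than the whole segment; toString() on a
        // ~20 MB segment would produce an enormous dump.
        System.out.println("traceId:    " + segmentObject.getTraceId());
        System.out.println("service:    " + segmentObject.getService());
        System.out.println("span count: " + segmentObject.getSpansCount());
    }
}
```

As suggested above, run this only in your private environment, since the decoded spans can contain addresses and other topology details.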
> @gyrter I deleted the message for you. Please don't post your env data in this public channel. It could leak more information than you expect, as traces usually include IP:port pairs and server relationships.
It was from a Docker Compose environment with ephemeral IPs and credentials.
I think it was a strange fluctuation. Everything works fine on the latest master build.
Search before asking
Description
Hello there. I'm trying your product with our Magento PHP project and got an error with the Elasticsearch storage. Our traces exceed the Jackson and Armeria limits.
It would be a good idea to add a trace limiter or something like that.
Use case
Firstly I got this error. I added `-Dcom.linecorp.armeria.defaultMaxResponseLength=0` to `JAVA_OPTS`, but then I got this one. As I understand it, I need to change the `com.fasterxml.jackson.databind.ObjectMapper` configuration here (see the sketch below). But I don't think that is the right way. I think the PHP agent needs a trace limit or something like that.
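For reference, this is roughly what that change would look like. A minimal sketch, assuming Jackson 2.15+, where the string-length cap lives in `StreamReadConstraints` rather than on `ObjectMapper` itself; the 50,000,000 figure is an arbitrary value chosen to sit above the 21704016 observed here, not a recommended setting:

```java
import com.fasterxml.jackson.core.JsonFactory;
import com.fasterxml.jackson.core.StreamReadConstraints;
import com.fasterxml.jackson.databind.ObjectMapper;

public class RelaxedMapper {
    // Sketch: lift Jackson's default 20,000,000-char string cap so a
    // 21,704,016-char data_binary value can be deserialized at all.
    public static ObjectMapper build() {
        JsonFactory factory = JsonFactory.builder()
                .streamReadConstraints(StreamReadConstraints.builder()
                        .maxStringLength(50_000_000) // arbitrary; just above the observed size
                        .build())
                .build();
        return new ObjectMapper(factory);
    }
}
```

As said above, though, raising client-side limits only papers over the problem; capping the trace size at the agent is the sounder fix.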
Related issues
No response
Are you willing to submit a pull request to implement this on your own?
Code of Conduct