cryptoenthu1 opened this issue 3 years ago
Can you run graph indexer rules get all
and put the output into a file or pastebin and send it here?
Also, can you grab the logs from the indexer agent with pino-pretty and send a file here with one hour's worth of logs?
npm install -g pino-pretty
docker logs indexer-agent 2>&1 | pino-pretty -c -t | sed -r "s/[[:cntrl:]]\[[0-9]{1,3}m//g" | tee agent.log &> /dev/null &
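For anyone reading along, a quick breakdown of what that pipeline does (the ANSI-stripping sed expression is the part most likely to trip people up, so it's demonstrated on a sample colored line below):

```shell
# The full pipeline, stage by stage (shown as comments, not re-run here):
#   docker logs indexer-agent 2>&1   -> merge stderr into stdout
#   pino-pretty -c -t                -> pretty-print JSON logs (-c colorize, -t translate timestamps)
#   sed -r "s/.../g"                 -> strip ANSI color escape codes so the file is plain text
#   tee agent.log &> /dev/null &     -> write to agent.log, discard terminal output, background it

# The sed expression on its own, demonstrated on a sample colorized line:
printf '\033[32mINFO\033[39m Indexer ready\n' \
  | sed -r "s/[[:cntrl:]]\[[0-9]{1,3}m//g"
# -> INFO Indexer ready
```

Note that `sed -r` is the GNU flag for extended regexes; on BSD/macOS sed the equivalent is `-E`.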
graph indexer rules get all : https://pastebin.com/JnEGmSDb
Sending logs soon
Wow, this is bad. How come you have 30K GRT allocated instead of the 120k you have in your rules table? Did you ever try to allocate these 30K GRT before going with that higher amount?
I allocated 30k to each subgraph a few days back using a small script, like this: for id in $(cat /home/graph/subgraphs.txt); do graph indexer rules set $id allocationAmount 30000 decisionBasis always; done
But due to a lack of GRT, it seems the 30k may not have been allocated to all subgraphs at that time.
Now, after the reset, the moment I staked the new 5M in the network, the remaining subgraphs started getting allocated 30k each. This means the rules set by the command a few days back had halted due to lack of GRT and resumed the moment I staked GRT in the network.
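A slightly safer version of that loop might skip blank lines and echo each command before running it, so a partially completed run is visible afterwards. This is only a sketch: the `graph indexer rules set` invocation is taken verbatim from the message above, while the function wrapper and the DRY_RUN guard are my additions.

```shell
# set_rules FILE AMOUNT: for each non-empty line (deployment ID) in FILE,
# echo the `graph indexer rules set` command that would be run.
# With DRY_RUN=0 it also actually executes the CLI; default is echo-only.
set_rules() {
  file="$1"; amount="$2"
  while IFS= read -r id; do
    [ -z "$id" ] && continue   # skip blank lines in the subgraphs file
    echo "graph indexer rules set $id allocationAmount $amount decisionBasis always"
    if [ "${DRY_RUN:-1}" = "0" ]; then
      graph indexer rules set "$id" allocationAmount "$amount" decisionBasis always
    fi
  done < "$file"
}
# Usage: set_rules /home/graph/subgraphs.txt 30000
```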
Agent log : agent.log
Yes, it is, unfortunately, the expected behavior, as all of your rules will have an internal queue unless they're dropped. Meaning if you allocated with a script, it will add all these allocations to be sent in transactions one by one until they're done.
For example if you go global always then global never you will have two full sets of transactions that you can't drop unless you turn off your agent and get rid of the rules.
It's not really handling this too well right now. I'm sure it can be improved.
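To illustrate why the second set of rules doesn't cancel the first, here is a toy model (my assumption about the behavior described above, not the agent's actual code): each reconciliation pass appends actions to a FIFO queue that drains one transaction at a time, so a later rule change queues new actions behind the old ones instead of replacing them.

```shell
#!/usr/bin/env bash
# Toy FIFO-queue model of the queuing behavior described above.
QUEUE=""
enqueue() { QUEUE="${QUEUE}${1}"$'\n'; }

# A "global always" pass queues one allocate per subgraph...
for sg in sub1 sub2 sub3; do enqueue "allocate $sg"; done
# ...then a later "global never" pass queues unallocates BEHIND them:
for sg in sub1 sub2 sub3; do enqueue "unallocate $sg"; done

printf '%s' "$QUEUE"   # all six actions are still pending, in order
```

In this model nothing short of clearing the queue (stopping the agent and deleting the rules, per the comment above) prevents both full sets of transactions from being sent.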
Thank you. So which value is the correct/effective GRT allocation per subgraph now? The one shown in the explorer (30k), or the one shown on the server via graph indexer rules get all (131500.0)?
You should have new allocations replacing those once the epoch passes soon ™. Keep an eye on the explorer and see if that changes
Thank you
So? Is it fixed? @cryptoenthu1
Seems it's not fixed @trader-payne... This is what the explorer is showing now:
It would help if you also share a file containing your cli settings
No, the indexer-cli rules, sorry 😄
Also, can you upgrade to graphprotocol/indexer-agent:sha-dfd5963 to see if it makes any difference?
I am seeing 2 issues here. 1.) After staking the 5M tokens following the network reset, I allocated 131500 to each subgraph: graph indexer rules set $id allocationAmount 131500 decisionBasis always
The graph indexer status output and the Explorer display show different values. Which one should I consider correct?
graph indexer status output :
CLI Display(graph indexer status) output.xlsx
Explorer display : Explorer Image.PNG
2.) Another very interesting observation: I allocated 30k to each subgraph a few days back using a small script, like this: for id in $(cat /home/graph/subgraphs.txt); do graph indexer rules set $id allocationAmount 30000 decisionBasis always; done
But due to a lack of GRT, it seems the 30k may not have been allocated to all subgraphs at that time.
Now, after the reset, the moment I staked the new 5M in the network, the remaining subgraphs started getting allocated 30k each. This means the rules set by the command a few days back had halted due to lack of GRT and resumed the moment I staked GRT in the network.
Please review these.