openiomon / hds2graphite

A utility to query metrics from Hitachi Vantara block storage and transfer them to a Graphite backend
http://www.openiomon.org
GNU General Public License v3.0
7 stars · 0 forks

Dashboard non real time graph missing #7

Closed Trolls closed 2 years ago

Trolls commented 2 years ago

Hello,

I'm using hds2graphite with the export tool and realtime (HIAA). Many thanks for this tool!

The realtime dashboards all show correct graphs, but on all the others, for example Pool statistics, nothing appears.

In the logs, the export tool and raidcom seem to work correctly:

2021/10/20 20:40:17 [INFO] (/opt/hds2graphite/bin/hds2graphite-worker.pl:653) main::importmetric > Importing: /opt/hds2graphite/out/12345/LDEVEachOfCU_dat/LDEV_Write_TransRate/LDEV_Write_TransRate85.csv
2021/10/20 20:40:17 [DEBUG] (/opt/hds2graphite/bin/hds2graphite-worker.pl:1486) main::logscriptstats > Logging stats: metric.count.LDEV.WRITE_TRANSFER for current run!

I don't understand what is wrong.

Many thanks for help.

munokar commented 2 years ago

Hi,

can you verify the content of the export tool files? Is there actual data in them? For example in /opt/hds2graphite/out/12345/LDEVEachOfCU_dat/LDEV_Write_TransRate/LDEV_Write_TransRate85.csv

Trolls commented 2 years ago

Yes, the CSVs are filled. I'll take CU 50 as an example; CUs 80 and 85 are dedicated to snapshots :)

"No.","time","00:50:00X","00:50:01X","00:50:02X","00:50:03X","00:50:04X","00:50:05X","00:50:06X","00:50:07X","00:50:08X","00:50:09X","00:50:0AX","00:50:0BX","00:50:0CX","00:50:0DX","00:50:0EX","00:50:0FX","00:50:10X","00:50:11X","00:50:12X","00:50:13X","00:50:14X","00:50:15X","00:50:16X","00:50:17X","00:50:18X","00:50:19X","00:50:1AX","00:50:1BX","00:50:1CX","00:50:1DX","00:50:1EX","00:50:1FX","00:50:20X","00:50:21X","00:50:22X","00:50:23X","00:50:24X","00:50:25X","00:50:26X","00:50:27X","00:50:28X","00:50:29X","00:50:2AX","00:50:2BX","00:50:2CX","00:50:2DX","00:50:2EX","00:50:2FX","00:50:30X","00:50:31X","00:50:32X","00:50:33X","00:50:34X","00:50:35X","00:50:36X","00:50:37X","00:50:38X","00:50:39X","00:50:3AX","00:50:3CX","00:50:3DX","00:50:3EX","00:50:3FX","00:50:40X","00:50:41X","00:50:42X","00:50:43X","00:50:44X","00:50:45X","00:50:46X","00:50:47X","00:50:48X","00:50:49X","00:50:4AX","00:50:4BX","00:50:4CX","00:50:4DX","00:50:4EX","00:50:4FX","00:50:50X","00:50:51X","00:50:52X","00:50:53X","00:50:54X","00:50:55X","00:50:56X","00:50:57X","00:50:58X","00:50:59X","00:50:5AX","00:50:5BX","00:50:5CX","00:50:5DX","00:50:5EX","00:50:5FX","00:50:60X","00:50:61X","00:50:62X","00:50:63X","00:50:64X","00:50:65X","00:50:66X","00:50:67X","00:50:68X","00:50:69X","00:50:6AX","00:50:6BX","00:50:6CX","00:50:6DX","00:50:6EX","00:50:6FX","00:50:70X","00:50:71X","00:50:72X","00:50:73X","00:50:74X","00:50:75X","00:50:76X","00:50:77X","00:50:78X","00:50:79X","00:50:7AX","00:50:7BX","00:50:7CX","00:50:7DX","00:50:7EX","00:50:7FX","00:50:80X"
"1","2021/10/20 17:28",0,0,22,0,32,1,14,0,16,0,1,0,0,0,0,0,0,0,216,0,251,0,0,0,0,69,0,0,0,11,1,33,0,1,52,0,0,0,0,34,0,37,0,0,169,37,1,9,20,7,0,1,17,1,16,0,45,1,0,0,1,0,13,0,0,0,0,135,191,13,0,0,6,1,1,6,1,5,1,5,1,5,1,5,0,1,25,1,0,1,53,0,0,0,0,21,0,0,211,12,0,206,0,166,0,201,0,174,92,0,0,155,0,140,5,1,1,0,5,0,8,196,266,215,194,234,119,223
"2","2021/10/20 17:29",0,0,22,0,28,1,13,0,16,0,1,0,0,0,0,0,0,0,221,0,233,0,0,0,0,68,0,0,0,39,3,32,0,1,16,0,0,0,0,37,0,34,0,0,154,38,1,10,16,7,0,1,11,1,13,0,25,1,0,0,12,0,18,0,0,0,0,135,181,23,0,0,5,1,1,5,1,5,1,5,1,5,1,5,0,1,23,1,0,1,34,0,0,0,0,18,0,0,199,39,0,198,0,158,0,200,0,160,85,0,0,141,0,145,5,1,1,0,5,0,9,184,266,206,184,221,111,204

mamoep commented 2 years ago

Can you check your carbon log for errors? Maybe the metric import fails.

Trolls commented 2 years ago

The creation looks OK:

20/10/2021 21:04:34 :: creating database file /var/lib/carbon/whisper/hds/perf/physical/g1500/XXXXX/LU/CL4-D/HPXXXXX_00/029-00:00:62/WRITE_LATENCY.wsp (archive=[(60, 10080)] xff=0.5 agg=average)

mamoep commented 2 years ago

You can try executing the command "whisper-dump" on the .wsp file to see if the values/timestamps match the CSV. You can also try the Explore feature in Grafana to check whether you get values for that series.

Series: hds.perf.physical.g1500.XXXXX.LU.CL4-D.HPXXXXX_00.029-00:00:62.WRITE_LATENCY

If both things are fine, it is most likely a misconfiguration of the datasource in the dashboard.
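If whisper-dump is not installed, the .wsp header can also be read directly. A minimal sketch, assuming whisper's standard on-disk layout (16-byte metadata header packed as "!2LfL", then one 12-byte "!3L" record per archive); this is purely illustrative and not part of hds2graphite:

```python
import struct

def read_whisper_header(path):
    """Read a whisper file's metadata and archive headers.

    Assumes the standard whisper layout: metadata is aggregation type,
    max retention, xFilesFactor, and archive count ("!2LfL"), followed
    by one (offset, seconds_per_point, points) record ("!3L") per archive.
    """
    with open(path, "rb") as f:
        agg_type, max_retention, xff, archive_count = struct.unpack("!2LfL", f.read(16))
        archives = [dict(zip(("offset", "seconds_per_point", "points"),
                             struct.unpack("!3L", f.read(12))))
                    for _ in range(archive_count)]
    return {"aggregation_type": agg_type, "max_retention": max_retention,
            "xff": xff, "archives": archives}
```

For the file created in the log above (archive=[(60, 10080)]), this should report one archive with seconds_per_point=60 and points=10080.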

Trolls commented 2 years ago

It's very strange: in the HSD details or port details views, I get some graphs for HSDs or ports, but not for all of them.

I will reset the configuration and recreate a new one.

About the Graphite data source, do I need to set any particular settings (name, etc.)?

munokar commented 2 years ago

Do you see port performance or multiprocessor details for all of your ports and MPs? Is it just LDEV and LU information that is missing?

Trolls commented 2 years ago

For MPs, nothing appears. For ports, I get only 2 ports :)

When I try to explore, for example, the capacity, I can't select the pool:

hds.capacity.g1500.XXXXX.ldev.DP.001..

munokar commented 2 years ago

Which carbon are you using? Can you check how many metrics are inserted? It would be interesting to understand whether the metrics are persisted by carbon or whether they just don't arrive at the carbon level.

Trolls commented 2 years ago

This storage box hosts both mainframe and open-systems volumes, a lot of them. Do you think that could be the cause?

I use graphite WEBAPP_VERSION = '0.9.15'.

When I check from the Graphite web explorer, some capacity data series are missing.

For realtime, all items/objects are present.

Trolls commented 2 years ago

Are the export tool performance values inserted in the same place as the realtime values?

munokar commented 2 years ago

The difference between realtime and "historical" performance (export tool) is that realtime continuously imports a moderate number of metrics, while "historical" imports a very high number of metrics in a short time.

I have experienced that the "normal" carbon (python-carbon in RHEL) service is too slow at persisting this amount of data. That's why I'm using go-carbon.

You could try reducing the number of metrics per minute to 100,000 or below:

max_metrics_per_minute = 100000

Carbon itself should also show you some stats about itself which indicate whether there is a long delay in file creation etc.

Trolls commented 2 years ago

I don't see any entry or example for this parameter in /etc/carbon/carbon.conf.

I will install a VM with the latest Grafana and go-carbon and test again.

Thanks for the advice :)

I'll keep you informed.

munokar commented 2 years ago

The parameter is part of hds2graphite.conf... hds2graphite has internal throttling to prevent overloading the carbon service.
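The effect of such a per-minute throttle can be sketched in Python (illustrative only; hds2graphite itself is written in Perl, and its internal mechanism may differ):

```python
import time

def send_throttled(metrics, send, max_per_minute,
                   sleep=time.sleep, clock=time.monotonic):
    """Send metrics, pausing whenever the per-minute budget is used up.

    Hypothetical sketch of a max_metrics_per_minute-style throttle;
    `send` is any callable that delivers one metric to carbon.
    """
    window_start = clock()
    sent_in_window = 0
    for metric in metrics:
        now = clock()
        if now - window_start >= 60:
            # A new one-minute window has started; reset the budget.
            window_start, sent_in_window = now, 0
        if sent_in_window >= max_per_minute:
            # Budget exhausted: wait out the rest of the window.
            sleep(60 - (now - window_start))
            window_start, sent_in_window = clock(), 0
        send(metric)
        sent_in_window += 1
```

With max_per_minute=100000, an export of 1,000,000 metrics is spread over at least 10 minutes, which is exactly the 10x slowdown mentioned below.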

Trolls commented 2 years ago

OK, sorry.

Parameter set; I will test. Thanks.

Trolls commented 2 years ago

The import of metrics has not finished yet.

munokar commented 2 years ago

If you reduce the value from 1,000,000 to 100,000 it will take at least 10 times longer to import. If data is shown after the import, we know this is related to carbon performance. Which timeframe are you trying to import?

Trolls commented 2 years ago

Yes, sure, only 1 hour.

I'm searching the web for how to install go-carbon instead of carbon. I found this link: https://github.com/go-graphite/go-carbon Can I follow it to install go-carbon?

Many thanks,

munokar commented 2 years ago

Yes, you can use go-carbon and install it via RPM. Configuration is easy. You can keep your storage schemas from python-carbon.

Trolls commented 2 years ago

Do I need to install the "standard carbon" with graphite as well?

munokar commented 2 years ago

go-carbon is "just another implementation" of carbon and replaces python-carbon. Both can be installed in parallel, but only one should be active at a time.

munokar commented 2 years ago

Normally I would recommend go-carbon in combination with graphite-api, which is part of the EPEL release. This fully replaces graphite-web, but you can also continue to use graphite-web.

Trolls commented 2 years ago

I got an error with go-carbon:

[2021-10-22T13:49:14.905+0200] ERROR [persister] mkdir failed {"dir": "/var/lib/graphite/whisper/hds/perf/physical/g1500/XXXX/PRCS/MPB8/LDEV/00:13:39", "error": "mkdir /var/lib/graphite: permission denied", "path": "/var/lib/graphite/whisper/hds/perf/physical/g1500/XXXXX/PRCS/MPB8/LDEV/00:13:39/REALTIME_WRITE_XFER_RATE.wsp"}

Why can't it create the path?

munokar commented 2 years ago

Who is the owner of these directories? The go-carbon user is configured in /etc/go-carbon/go-carbon.conf and the default is user = "carbon", so perhaps this user (carbon:carbon) doesn't have permissions in the already existing directories, since you are reusing the existing whisper structure?

mamoep commented 2 years ago

Check the path: it wants to create "/var/lib/graphite". Your old carbon wrote to "/var/lib/carbon".
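For reference, both the runtime user and the persistence path live in /etc/go-carbon/go-carbon.conf. A fragment with the usual package defaults (verify against your installed file, since defaults can vary between packages):

```toml
[common]
user = "carbon"

[whisper]
data-dir = "/var/lib/graphite/whisper"
schemas-file = "/etc/go-carbon/storage-schemas.conf"
enabled = true
```

Either point data-dir at the old /var/lib/carbon/whisper tree, or make sure the new tree exists and is owned by the carbon user.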

Trolls commented 2 years ago

Yes, I created the path with carbon as owner. But now another error:

[2021-10-22T14:00:25.112+0200] INFO [tcp] parse failed {"error": "bad message: \"hds.perf.physical.g1500.xxxx.PORT.CL8-A.REALTIME_CHA_NAME \\"CHA-2PC\\" 1634903940\"", "peer": "127.0.0.1:38230"}

munokar commented 2 years ago

This is a known issue. Please pull a new copy of g1500_realtime_metrics.conf and put it in the /opt/hds2graphite/conf/metrics/ folder. https://github.com/openiomon/hds2graphite/blob/master/conf/metrics/g1000_realtime_metrics.conf
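For background: carbon's plaintext line protocol expects each message to be exactly "metric.path value timestamp" with a numeric value, so a metric whose value is a quoted string like "CHA-2PC" is rejected as a bad message. A quick validator sketch (a hypothetical helper, not part of hds2graphite):

```python
def valid_plaintext_metric(line):
    """Check whether a line matches carbon's plaintext format:
    'metric.path value timestamp' with numeric value and timestamp."""
    parts = line.split()
    if len(parts) != 3:
        return False
    _path, value, timestamp = parts
    try:
        float(value)      # value must be numeric, not e.g. "CHA-2PC"
        int(timestamp)    # timestamp must be an integer epoch
    except ValueError:
        return False
    return True
```

An empty line fails the same check, which is likely what the later `bad message: ""` log entries are: blank lines on the socket.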

Trolls commented 2 years ago

Thanks, the port error should disappear.

But I got new errors:

[2021-10-22T14:13:53.517+0200] ERROR [persister] fail to update metric {"path": "/var/lib/graphite/whisper/hds/perf/physical/g1500/XXXXX/LDEV/DP/001/00:05:31/REALTIME_READ_IO_RATE.wsp", "error": "Failed to propagate\ngoroutine 60 [running]:\nruntime/debug.Stack(0xc0003ac6b0, 0xd788a0, 0x10139d0)\n\t/home/travis/.gimme/versions/go1.15.5.linux.amd64/src/runtime/debug/stack.go:24 +0x9f\ngithub.com/go-graphite/go-whisper.(Whisper).UpdateManyForArchive.func1(0xc0003acaf0)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/vendor/github.com/go-graphite/go-whisper/whisper.go:691 +0x57\npanic(0xd788a0, 0x10139d0)\n\t/home/travis/.gimme/versions/go1.15.5.linux.amd64/src/runtime/panic.go:969 +0x1b9\ngithub.com/go-graphite/go-whisper.(Whisper).archiveUpdateMany(0xc005c441c0, 0xc005c3d300, 0xc0056892e0, 0x1, 0x1, 0xc0056892e0, 0x1)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/vendor/github.com/go-graphite/go-whisper/whisper.go:782 +0x756\ngithub.com/go-graphite/go-whisper.(Whisper).UpdateManyForArchive(0xc005c441c0, 0xc0056892e0, 0x1, 0x1, 0xffffffffffffffff, 0x0, 0x0)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/vendor/github.com/go-graphite/go-whisper/whisper.go:718 +0x436\ngithub.com/go-graphite/go-whisper.(Whisper).UpdateMany(...)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/vendor/github.com/go-graphite/go-whisper/whisper.go:684\ngithub.com/go-graphite/go-carbon/persister.(Whisper).updateMany(0xc000560000, 0xc005c441c0, 0xc005c405b0, 0x69, 0xc0056892e0, 0x1, 0x1)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/persister/whisper.go:172 +0xcf\ngithub.com/go-graphite/go-carbon/persister.(Whisper).store(0xc000560000, 0xc000ce6d20, 0x4b)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/persister/whisper.go:323 +0x3e7\ngithub.com/go-graphite/go-carbon/persister.(Whisper).worker(0xc000560000, 0xc000310300)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/persister/whisper.go:402 
+0x14e\ngithub.com/go-graphite/go-carbon/helper.(Stoppable).StartFunc.func1.1(0xc0002097b0, 0xc000310300, 0xc000560000)\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/helper/stoppable.go:42 +0x30\ncreated by github.com/go-graphite/go-carbon/helper.(*Stoppable).StartFunc.func1\n\t/home/travis/gopath/src/github.com/go-graphite/go-carbon/helper/stoppable.go:41 +0x7d\n"}

[2021-10-22T14:16:18.018+0200] ERROR [persister] create new whisper file failed {"path": "/var/lib/graphite/whisper/hds/perf/physical/g1500/XXXXX/PRCS/MPB0/LDEV/00:B1:67/REALTIME_READ_IO_RATE.wsp", "error": "open /var/lib/graphite/whisper/hds/perf/physical/g1500/XXX/PRCS/MPB0/LDEV/00:B1:67/REALTIME_READ_IO_RATE.wsp: no space left on device", "retention": "60s:30d,1h:5y", "schema": "default", "aggregation": "default", "xFilesFactor": 0.5, "method": "average", "compressed": false}

--> What happened? The 280 GB disk filled up in 5 minutes lol

mamoep commented 2 years ago

Whisper allocates the full file size for the configured retention on the first write of the .wsp. Check your retention settings. Copy your previously configured schema to /etc/go-carbon/storage-schemas.conf.

mamoep commented 2 years ago

WSPs have to be recreated; there is no in-flight change of retention.

munokar commented 2 years ago

Really easy... Whisper preallocates the capacity needed to store the data based on your retention settings. So a single whisper file might be 1 MB in size. If you have thousands of LDEVs and 10 counters per LDEV, you end up needing some GB of space. If you want to decrease the capacity, you need to shorten the intervals for which performance data is kept, especially the interval where data is kept at high granularity.
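The preallocation can be estimated up front. A sketch assuming whisper's standard layout (16-byte metadata header, 12 bytes per archive header, and 12 bytes per data point: a 4-byte timestamp plus an 8-byte double):

```python
# Rough on-disk size of one whisper file for a given retention string.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800, "y": 31536000}

def duration_seconds(token):
    """Convert a retention token like '7d' or '1m' to seconds."""
    return int(token[:-1]) * UNITS[token[-1]]

def whisper_file_size(retentions):
    """Estimate bytes for a retention spec like '1m:7d,5m:30d,1h:1y'."""
    archives = [r.split(":") for r in retentions.split(",")]
    points = sum(duration_seconds(keep) // duration_seconds(step)
                 for step, keep in archives)
    return 16 + 12 * len(archives) + 12 * points
```

whisper_file_size("1m:7d,5m:30d,1h:1y") comes out at about 330 KB per file, so 10,000 LDEV series with 10 counters each already need roughly 33 GB, which matches the disk filling up observed above.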

So change, for example, from:

[hds.realtime]
pattern = ^hds.perf..*.REALTIME
retentions = 1m:7d

[hds.perf]
pattern = ^hds.perf.
retentions = 1m:7d,5m:30d,1h:1y

to:

[hds.realtime]
pattern = ^hds.perf..*.REALTIME
retentions = 1m:3d

[hds.perf]
pattern = ^hds.perf.
retentions = 1m:3d,5m:15d,1h:1y

or reduce the number of metrics being collected by editing the metrics.conf and removing unneeded metrics.

Trolls commented 2 years ago

Thanks, I set these values.

I get only this error: [2021-10-22T15:20:09.345+0200] INFO [tcp] parse failed {"error": "bad message: \"\"", "peer": "127.0.0.1:60374"}

But there is no information about the hds collection in the go-carbon log, and nothing is graphed in Grafana.

munokar commented 2 years ago

Which component are you using for the graphite API? If you are using your existing graphite-web, have you reconfigured it to use the files in /var/lib/graphite instead of /var/lib/carbon, which was used before?

Trolls commented 2 years ago

This config: /bin/graphite-build-index:WHISPER_DIR="/var/lib/carbon/whisper" ?

munokar commented 2 years ago

Which service/package have you installed? Is this graphite-web? https://graphite.readthedocs.io/en/stable/ Where is your Grafana datasource pointing to?

I'm using graphite-api since it is a much lighter implementation: it doesn't contain a full-blown web server or a web UI, but only provides the communication between Grafana and the whisper files.

Trolls commented 2 years ago

I installed both; I'm still discovering these tools. It's easier for me to check data in the Graphite web GUI.

munokar commented 2 years ago

OK, if graphite-api is configured, can you please provide the content of the following files:

/etc/sysconfig/graphite-api & /etc/graphite-api.yaml

The output of systemctl status graphite-api would also be nice.

It would also be great to see a screenshot of your datasource configuration in Grafana.

Trolls commented 2 years ago

Hi,

Please find following files:

[root@vm-grafana hds2graphite]# cat /etc/sysconfig/graphite-api

# Address to bind the graphite-api-gunicorn to
GRAPHITE_API_ADDRESS=127.0.0.1

# Port to bind the registry to
#GRAPHITE_API_PORT=8888
GRAPHITE_API_PORT=8888

# Number of workers to handle the connections
GUNICORN_WORKERS=4

[root@vm-grafana hds2graphite]# systemctl status go-carbon

● go-carbon.service - Golang implementation of Graphite/Carbon server.
   Loaded: loaded (/usr/lib/systemd/system/go-carbon.service; enabled; vendor preset: disabled)
   Active: active (running) since lun. 2021-10-25 11:55:34 CEST; 1min 31s ago
     Docs: https://github.com/go-graphite/go-carbon
  Process: 2279 ExecStart=/usr/bin/go-carbon -config /etc/go-carbon/go-carbon.conf -pidfile /var/run/go-carbon.pid -daemon (code=exited, status=0/SUCCESS)
 Main PID: 2287 (go-carbon)
   CGroup: /system.slice/go-carbon.service
           └─2287 /usr/bin/go-carbon -config /etc/go-carbon/go-carbon.conf -pidfile /var/run/go-carbon.pid -daemon

oct. 25 11:55:34 vm-grafana systemd[1]: Starting Golang implementation of Graphite/Carbon server....
oct. 25 11:55:34 vm-grafana systemd[1]: Started Golang implementation of Graphite/Carbon server..

[root@vm-grafana hds2graphite]# systemctl status graphite-api

● graphite-api.service - Run graphite-api as gunicorn service
   Loaded: loaded (/usr/lib/systemd/system/graphite-api.service; enabled; vendor preset: disabled)
   Active: active (running) since lun. 2021-10-25 11:35:12 CEST; 21min ago
 Main PID: 1796 (gunicorn)
   CGroup: /system.slice/graphite-api.service
           ├─1796 /usr/bin/python /usr/bin/gunicorn --access-logfile - -b 127.0.0.1:8888 -w 4 graphite_api.app:app
           ├─1801 /usr/bin/python /usr/bin/gunicorn --access-logfile - -b 127.0.0.1:8888 -w 4 graphite_api.app:app
           ├─1802 /usr/bin/python /usr/bin/gunicorn --access-logfile - -b 127.0.0.1:8888 -w 4 graphite_api.app:app
           ├─1807 /usr/bin/python /usr/bin/gunicorn --access-logfile - -b 127.0.0.1:8888 -w 4 graphite_api.app:app
           └─1816 /usr/bin/python /usr/bin/gunicorn --access-logfile - -b 127.0.0.1:8888 -w 4 graphite_api.app:app

oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"index_path": "/var/lib/graphite-api/index", "duration": 3.600120544433594e-05, "total_entries": 0, "event": "search index reloaded"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"timezone": "Europe/Paris", "event": "configured timezone"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"path": "/etc/graphite-api.yaml", "event": "loading configuration"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"index_path": "/var/lib/graphite-api/index", "event": "reading index data"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"index_path": "/var/lib/graphite-api/index", "duration": 3.910064697265625e-05, "total_entries": 0, "event": "search index reloaded"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"timezone": "Europe/Paris", "event": "configured timezone"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"path": "/etc/graphite-api.yaml", "event": "loading configuration"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"index_path": "/var/lib/graphite-api/index", "event": "reading index data"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"index_path": "/var/lib/graphite-api/index", "duration": 3.910064697265625e-05, "total_entries": 0, "event": "search index reloaded"}
oct. 25 11:35:12 vm-grafana gunicorn[1796]: {"timezone": "Europe/Paris", "event": "configured timezone"}

[root@vm-grafana hds2graphite]# cat /etc/graphite-api.yaml

---

# Graphite-API configuration, see also /etc/sysconfig/graphite-api

# search_index
#
# The location of the search index used for searching metrics. Note that it
# needs to be a file that is writable by the Graphite-API process.
search_index: /var/lib/graphite-api/index

# finders
#
# A list of python paths to the storage finders you want to use when fetching
# metrics.
finders:
  - graphite_api.finders.whisper.WhisperFinder

# functions
#
# A list of python paths to function definitions for transforming / analyzing
# time series data.

functions:
  - graphite_api.functions.SeriesFunctions
  - graphite_api.functions.PieFunctions

# whisper
#
# The configuration information for whisper. Only relevant when using
# WhisperFinder. Simply holds a directories key listing all directories
# containing whisper data.

whisper:
  directories:
    - /var/lib/carbon/whisper

# time_zone
#
# The time zone to use when generating graphs. By default, Graphite-API tries
# to detect your system timezone. If detection fails it falls back to UTC. You
# can also manually override it if you want another value than your system's
# timezone.

#time_zone: UTC

# carbon
#
# Configuration information for reading data from carbon's cache. Items:
#
#    hosts
#        List of carbon-cache hosts, in the format hostname:port[:instance].
#    timeout
#        Socket timeout for carbon connections, in seconds.
#    retry_delay
#        Time to wait before trying to re-establish a failed carbon connection,
#        in seconds.
#    hashing_keyfunc
#        Python path to a hashing function for metrics. If you use Carbon with
#        consistent hashing and a custom function, you need to point to the
#        same hashing function.
#    carbon_prefix
#        Prefix for carbon's internal metrics. When querying metrics starting
#        with this prefix, requests are made to all carbon-cache instances
#        instead of one instance selected by the key function. Default: carbon.
#    replication_factor
#        The replication factor of your carbon setup. Default: 1.

#carbon:
#  hosts:
#    - 127.0.0.1:7002
#  timeout: 1
#  retry_delay: 15
#  carbon_prefix: carbon
#  replication_factor: 1

# sentry_dsn
#
# This is useful if you want to send Graphite-API's exceptions to a Sentry
# instance for easier debugging.

#sentry_dsn: https://key:secret@app.getsentry.com/12345

# allowed_origins
#
# Allows you to do cross-domain (CORS) requests to the Graphite API. Say you
# have a dashboard at dashboard.example.com that makes AJAX requests to
# graphite.example.com, just set the value accordingly:

#allowed_origins:
#  - dashboard.example.com

# You can specify as many origins as you want. A wildcard can be used to allow
# all origins:

#allowed_origins:
#  - *

# cache
#
# Lets you configure a cache for graph rendering. This is done via Flask-Cache
# which supports a number of backends including memcache, Redis, filesystem or
# in-memory caching.
#
# Cache configuration maps directly to Flask-Cache's config values. For each
# CACHE_* config value, set the lowercased name in the cache section, without
# the prefix. Example:

#cache:
#  type: redis
#  redis_host: localhost
#  default_timeout: 60
#  key_prefix: 'graphite-api'

# statsd
#
# Attaches a statsd object to the application, which can be used for
# instrumentation. Currently Graphite-API itself doesn't use this, but some
# backends do, like Graphite-Influxdb.

#statsd:
#    host: 'statsd_host'
#    port: 8125  # not needed if default

# render_errors
#
# If True (default), full tracebacks are returned in the HTTP response in case of application errors.

#render_errors: True

Trolls commented 2 years ago

I got only this error in go-carbon logs:

[2021-10-25T12:05:26.770+0200] INFO [tcp] parse failed {"error": "bad message: \"\"", "peer": "127.0.0.1:58112"}

Trolls commented 2 years ago

The datasource in Grafana:

[screenshot: Grafana datasource configuration]

mamoep commented 2 years ago

Your graphite-api service is running on port 8888, so your datasource URL must be changed to http://localhost:8888 to use it.

Trolls commented 2 years ago

No more graphs:

[screenshot]

mamoep commented 2 years ago

Does the directory in the graphite-api.yaml match your target directory in go-carbon?

whisper:
  directories:
    - /var/lib/carbon/whisper

Trolls commented 2 years ago

yes

[root@vm-grafana hds2graphite]# ll /var/lib/carbon/whisper/
drwxrwxrwx 3 carbon carbon 20 22 oct.  10:52 carbon
drwxr-xr-x 5 carbon carbon 54 22 oct.  16:22 hds

munokar commented 2 years ago

But from the logs above we can see that go-carbon is configured to store data at:

/var/lib/graphite/whisper

Trolls commented 2 years ago

Yes, right. I'll keep searching for the reason.

Trolls commented 2 years ago

Hi,

I found the correct configuration for go-carbon: the whisper path in graphite-api pointed to carbon instead of graphite.

/etc/graphite-api.yaml

whisper:
  directories:
#    - /var/lib/carbon/whisper
    - /var/lib/graphite/whisper

Everything works fine now! Thanks a lot!

Trolls commented 2 years ago

Have you already tested this setting in go-carbon's storage-schemas.conf?

compressed = yes

mamoep commented 2 years ago

I haven't used it so far because it is marked "experimental".

All questions answered? Can we close the issue?

Trolls commented 2 years ago

Yes, the issue can be closed.