elastic / kibana

Your window into the Elastic Stack
https://www.elastic.co/products/kibana

Load saved objects from filesystem #2310

Open coderlol opened 9 years ago

coderlol commented 9 years ago

In Kibana 1.3.x, I could put my dashboard JavaScript files directly in the .../app/dashboards directory. How do I do that with Kibana 4? I have a number of auto-generated dashboards that I need to "upload" to Kibana, and the preferred means of uploading is to drop the .js files into the app/dashboards directory. Would that be possible with Kibana 4?

rashidkpc commented 9 years ago

Dashboards are stored exclusively in elasticsearch for the time being. You can take a look at the .kibana index, or in Settings -> Objects if you'd like to see the format.
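
The pointer above ("take a look at the .kibana index") can be scripted. Here is a minimal sketch that lists the saved objects Kibana 4 keeps in that index; it assumes Elasticsearch on http://localhost:9200 and the default `.kibana` index name, and the function names are my own, not part of any Kibana API:

```python
# Sketch: inspect the saved objects Kibana 4 stores in the .kibana index.
# Assumption: Elasticsearch reachable at http://localhost:9200.
import json
import urllib.request

ES_URL = "http://localhost:9200"

def saved_objects_query(obj_type=None, size=100):
    """Build a search body for listing saved objects of one type
    (e.g. 'dashboard', 'visualization', 'search'), or all types."""
    body = {"size": size}
    if obj_type:
        body["query"] = {"term": {"_type": obj_type}}
    return body

def list_saved_objects(obj_type=None):
    """Fetch (type, id, title) tuples from the .kibana index."""
    req = urllib.request.Request(
        f"{ES_URL}/.kibana/_search",
        data=json.dumps(saved_objects_query(obj_type)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        hits = json.load(resp)["hits"]["hits"]
    return [(h["_type"], h["_id"], h["_source"].get("title")) for h in hits]

if __name__ == "__main__":
    for obj_type, obj_id, title in list_saved_objects("dashboard"):
        print(obj_type, obj_id, title)
```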

coderlol commented 9 years ago

Is it possible with the current Kibana to load dashboards from the filesystem, from .../app/dashboards? We have quite a few dashboards developed using templates and JavaScript. What are we going to do with them in Kibana 4?

ranji2612 commented 9 years ago

"Dashboards are stored exclusively in elasticsearch for the time being." Is that intended to change in the future? I'm currently creating individual searches and indices using scripts that add the data to Elasticsearch, thereby automatically making them available in Kibana.

coderlol commented 9 years ago

Can you share how you upload your queries straight into Elasticsearch? Would it be possible to upload Kibana 3 dashboards into Elasticsearch and have Kibana 4 make use of them?

ranji2612 commented 9 years ago

@coderlol To create an index pattern:

This includes all the fields, so it's big; you can trim it down to just the fields you need.

curl -XPUT 'http://localhost:9200/.kibana/index-pattern/_guid_-logstash-_' -d '{"title":"guid-logstash-_","timeFieldName":"timestamp","customFormats":"{}","fields":"[{\"type\":\"string\",\"indexed\":false,\"analyzed\":false,\"name\":\"_source\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"dsid\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"type\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"@version\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"ecid\",\"count\":0},{\"type\":\"date\",\"indexed\":true,\"analyzed\":false,\"doc_values\":false,\"name\":\"timestamp\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":false,\"name\":\"_type\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"level\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":false,\"name\":\"_id\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"userId\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"path\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"tid\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"logMessage\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"application\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"module\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"host\",\"count\":0},{\"type\":\"string\",\"indexed\":false,\"analyzed\":false,\"name\":\"_index\",\"count\":4},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"messageType\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"message\",\"count\":0},{\"type\":\"date\",\"indexed\":true,\"analyzed\":false,\"doc_values\":false,\"name\":\"@timestamp\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"server\",\"count\":0},{\"type\":\"string\",\"indexed\":true,\"analyzed\":true,\"doc_values\":false,\"name\":\"messageId\",\"count\":0}]"}'

To create a search:

curl -XPUT 'http://localhost:9200/.kibana/search/_guid_' -d '{"title":"guid","description":"Search panel for guid Logs","hits":0,"columns":["_type","messageType","application","logMessage","module"],"kibanaSavedObjectMeta":{"searchSourceJSON":"{\"query\":{\"query_string\":{\"query\":\"\"}},\"filter\":[],\"index\":\"guid-logstash-\"}"}}'

A similar approach works for dashboards too.
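
For the "fixed template, different names" workflow this comment describes, the stamping step can be sketched in Python. The template fields below follow the saved-search JSON shown in the curl command, but `SEARCH_TEMPLATE`, `make_search`, and the `-logstash-*` index pattern are illustrative assumptions of mine:

```python
# Sketch: generate saved-search documents from one fixed template,
# substituting a placeholder name everywhere it appears.
import json

SEARCH_TEMPLATE = {
    "title": "TEMPLATE",
    "description": "Search panel for TEMPLATE logs",
    "hits": 0,
    "columns": ["_type", "messageType", "application", "logMessage", "module"],
    "kibanaSavedObjectMeta": {
        "searchSourceJSON": json.dumps({
            "query": {"query_string": {"query": ""}},
            "filter": [],
            "index": "TEMPLATE-logstash-*",  # assumed index-pattern shape
        })
    },
}

def make_search(name):
    """Return (doc_id, body) for a saved search derived from the template."""
    body = json.loads(json.dumps(SEARCH_TEMPLATE).replace("TEMPLATE", name))
    return name, body

# Each (doc_id, body) can then be PUT into the .kibana index, e.g.:
#   curl -XPUT "http://localhost:9200/.kibana/search/<doc_id>" -d '<body>'
```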

coderlol commented 9 years ago

I appreciate that, thanks. Alas, my dashboards are JavaScript files, not templates, so I'm not sure that will work in Kibana 4.

coderlol commented 9 years ago

@ranji2612, this works well. It's a bit of a pain, but it works. Thanks again.

ranji2612 commented 9 years ago

@coderlol My scenario was creating more searches and index patterns with different names based on a fixed template. The JSON for the template can be obtained from the edit-index page, or from existing searches; I just copy-pasted them :+1: Can you add one of your .js files here if possible? I'd like to have a look.

coderlol commented 9 years ago

Here you go ... { "title": "overview-all", "description": "", "hits": 0, "columns": ["log", "thread", "source", "method", "data", "path", "host"], "kibanaSavedObjectMeta": { "searchSourceJSON": "{\"query\":{\"query_string\":{\"query\":\"\"}},\"highlight\":{\"pre_tags\":[\"@kibana-highlighted-field@\"],\"post_tags\":[\"@/kibana-highlighted-field@\"],\"fields\":{\"*\":{}}},\"filter\":[],\"index\":\"logstash-*\"}" } }

coderlol commented 9 years ago

@ranji2612, my workflow is as follows:

  1. Go to Kibana, create searches, visualizations, dashboards, etc
  2. Go to elasticsearch, extract the js files
  3. Use curl to re-upload the js files to Elasticsearch (at a later time or for a different installation)
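
The three steps above can be sketched as a small script. The `.kibana` document URLs follow the storage scheme shown in the curl commands earlier in this thread; the filename convention (`type--id.json`) is my own:

```python
# Sketch: export saved objects from the .kibana index to JSON files,
# then re-upload them later or into a different installation.
import json
import pathlib
import urllib.request

def object_filename(obj_type, obj_id):
    """Encode type and id into the on-disk name, e.g. 'dashboard--overview.json'."""
    return f"{obj_type}--{obj_id}.json"

def parse_filename(name):
    """Recover (type, id) from a name produced by object_filename."""
    obj_type, obj_id = pathlib.Path(name).stem.split("--", 1)
    return obj_type, obj_id

def export_object(es_url, obj_type, obj_id, out_dir):
    """Step 2: download one object and write its _source to disk."""
    with urllib.request.urlopen(f"{es_url}/.kibana/{obj_type}/{obj_id}") as resp:
        source = json.load(resp)["_source"]
    path = pathlib.Path(out_dir) / object_filename(obj_type, obj_id)
    path.write_text(json.dumps(source, indent=2))
    return path

def import_object(es_url, path):
    """Step 3: re-upload a previously exported file with a PUT."""
    obj_type, obj_id = parse_filename(path)
    req = urllib.request.Request(
        f"{es_url}/.kibana/{obj_type}/{obj_id}",
        data=pathlib.Path(path).read_bytes(),
        headers={"Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```
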

apogre commented 9 years ago

New URL format eliminates the need for templated and scripted dashboards

lgoldstein commented 9 years ago

I agree with @coderlol ("What are we going to do with them in Kibana 4?"). Why is this issue labeled an "enhancement" when it is obviously a regression? It used to work and no longer works. Which also brings us to issue #3486: why is it closed? After all, it isn't fixed, and it too is a regression of a (very useful) feature that used to work and now no longer does.

P.S. If you are trying to make Kibana popular, you are going about it the wrong way...

davux commented 9 years ago

Agreed with @lgoldstein. This is not an enhancement request at all; it's a bug report, since it used to work and now it doesn't. If Kibana 4 is no longer marked as beta, then it should provide at least the same functionality as Kibana 3.

lambodar commented 9 years ago

Team, any idea when this feature will be released? FYI, I am currently using Kibana 4.1.0.

bounddog commented 9 years ago

I'm looking for a solution too, but there doesn't seem to be a good one so far. The visualizations were designed based on index patterns, and the current Kibana 4.1.1 only supports exporting/importing visualization and dashboard objects; what about the index patterns? We have to back up the whole .kibana index and restore it on a new production environment.

f15a commented 7 years ago

Hi everyone, do you know if it is possible to load/refresh a shared dashboard in Kibana 5 automatically every period of time (e.g. 1h)? My problem so far: how do I push the dashboard shared via URL into the right folder that Kibana 5 loads dashboards from, on Debian or another Linux?

Regards

turtlemonvh commented 7 years ago

It's not very difficult, but unfortunately it doesn't look like there is an option for doing this automatically right now. I wrote a simple Python program that loads saved Kibana objects (dashboards or visualizations) from a file here:

https://gist.github.com/turtlemonvh/36511794f7dff3bc9fd4156e0a29c5a1

stacey-gammon commented 6 years ago

We have the ability to import/export dashboards, both through the UI and through an API. Is that sufficient for your use cases?

webwurst commented 6 years ago

@Stacey-Gammon Do you have a link to some documentation around that API?

ppf2 commented 6 years ago

+1 Given that we have started encouraging users to use this new API, we will want to document this (https://github.com/elastic/kibana/pull/11632), as well as https://github.com/elastic/kibana/issues/14872. If it's experimental, we can mark it as such.

szechyjs commented 6 years ago

It would be great if there were at least a CLI interface to this API so that a JSON file could easily be imported. A common use case we had in Kibana 3 was building prepackaged dashboards into our system installs; this was as simple as making sure a file got placed in the correct directory, using tools like Docker or Omnibus. It was also easy to update these dashboards, because all we had to do was ship a new file.

awh commented 6 years ago

We have the ability to import/export dashboards, both through the UI and through an API. Is that sufficient for your use cases?

@Stacey-Gammon It's sufficient, but far from ideal for anyone wanting to do revision control and configuration management of their dashboards, index patterns etc. As convenient and clean as this decision is from your perspective as an implementer, anyone wanting to use PaaS automation and leverage orchestrator features such as Kubernetes ConfigMaps is going to have to kludge around it to avoid annoying one-off manual configuration steps in their flow.

Grafana has gone through a similar evolution, where for example it was only possible to configure a datasource through the UI; we ended up having to create a separate service that continually pokes the desired config into the API endpoints. Thankfully now it's possible to load dashboards and datasources as configuration from the filesystem (grafana/grafana#9504) and we can throw that away.

So yes, whilst it is of course possible for end-users with these requirements to individually cobble together scripts to do once-and-only-once initial configuration, watch the filesystem for changes, and push them into your API endpoints, I would respectfully submit that it would be better for you to do that yourselves and ship it with Kibana for everyone to use, so that we can all benefit from the same bugfixes and from the solution being kept in sync with upstream changes.

Ideally we would have the following items read from static configuration in the filesystem, watched for changes:

At this point one can keep all of the above in git in a Kubernetes ConfigMap or equivalent and have it mounted into the filesystem of the Kibana Docker container by the orchestrator at the end of a CD pipeline. The UI stuff is great for playing around and exploring, but when it comes to production deployments strong versioning and repeatability are paramount - let us use the tools we already have to obtain them.

Thank you for a great product, and for listening!

sandstrom commented 6 years ago

@spalger @w33ble @ppisljar Would be great to hear your thoughts on this matter!

With docker, kubernetes and the widespread use of chef/puppet/ansible/salt being able to configure these things via the filesystem would be very useful! 😄

ppisljar commented 6 years ago

@awh Just to be sure: we are talking about loading dashboards/searches/... directly from the filesystem (instead of from the .kibana index in elasticsearch), and not just about adding a CLI interface to our saved objects API?

Also, you don't want the objects read from these static files to be indexed in elasticsearch? (That's what you could already achieve with our saved object API and a simple custom script.)

I can see this could be very useful, and I don't see (atm) any reason why we shouldn't do it. @Stacey-Gammon @spalger ?

spalger commented 6 years ago

Having saved objects that are not stored in Elasticsearch means that some objects would be loaded from outside the SavedObject client, or that we would need to load objects from disk and Elasticsearch in the SavedObject client.

I don't like either of those options and would much rather Kibana offered the ability to automatically index dashboards that were found on disk. If they were required to specify an ID or something then Kibana could know if they already existed and not wipe out changes or anything. This would still probably conflict with #18473

cc: @kobelb
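
The ID-based idea above (only index dashboards found on disk when they aren't already present, so user edits are never wiped) could look roughly like this. This is a hypothetical companion sketch, not an existing Kibana feature, and the file layout and function names are my own:

```python
# Sketch: read saved-object JSON files carrying explicit IDs and plan
# to create only the ones whose ID does not already exist.
import json
import pathlib

def load_disk_objects(directory):
    """Each *.json file must carry an explicit 'id' plus its object body."""
    objects = {}
    for path in sorted(pathlib.Path(directory).glob("*.json")):
        doc = json.loads(path.read_text())
        objects[doc["id"]] = doc
    return objects

def plan_imports(disk_objects, existing_ids):
    """Decide which objects to create: skip IDs that already exist,
    leaving any user-modified copies untouched."""
    return [obj for oid, obj in sorted(disk_objects.items())
            if oid not in existing_ids]
```

The actual indexing step (a PUT per planned object) is deliberately omitted; the interesting part is that the plan never includes an ID that is already present.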

sandstrom commented 6 years ago

(my point of view may not reflect the needs of others who've expressed interest in the functionality discussed here)

@spalger For us, automatically adding index-patterns (and dashboards) found on disk would work well. By philosophy we have immutable infrastructure (i.e. no long-running chef or similar, nodes are only setup once).

Our current problem is that we need complicated polling scripts in bash that wait for Kibana nodes to boot up completely and then inject the needed index patterns. This is brittle (sometimes the Kibana node won't be fully booted in time for our bash script).

Auto-adding index-patterns from disk (based on a config-file) would totally solve our problem. Having an ID to avoid overrides sounds like a good idea.
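
The brittle "wait for Kibana to boot" polling described above can at least be made bounded and testable. In this sketch the status URL, timeout values, and the injectable `probe` hook are all assumptions of mine:

```python
# Sketch: poll Kibana's status endpoint with a bounded deadline instead
# of a fixed sleep, before injecting index patterns.
import time
import urllib.error
import urllib.request

def wait_for_kibana(status_url="http://localhost:5601/api/status",
                    timeout=120, interval=2, now=time.monotonic, probe=None):
    """Return True once the status endpoint responds, False on timeout.
    `probe` is injectable for testing; by default it does a real HTTP GET."""
    if probe is None:
        def probe():
            try:
                urllib.request.urlopen(status_url, timeout=5)
                return True
            except (urllib.error.URLError, OSError):
                return False
    deadline = now() + timeout
    while now() < deadline:
        if probe():
            return True
        time.sleep(interval)
    return False
```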

epixa commented 6 years ago

I see this as essentially the same thing as import/export, and I would expect the format of these imports to be exactly the same. The only addition here is that Kibana would somehow discover exports from the local file system and import them automatically.

The open question to me is to what extent we should handle changes in those exports. @spalger mentioned one possible way by tracking things by id, but other people in this thread specifically mentioned the need to track changes.

epixa commented 6 years ago

I'm re-categorizing this as platform since we'd implement this as a generic saved object thing rather than exclusive to dashboards.

TagadaPoe commented 5 years ago

I also am interested in this feature, which would allow me to define dashboards using a Kubernetes ConfigMap, just like I do with Grafana.

Securitybits-io commented 5 years ago

This would be awesome for me as well, as I am figuring out how to import from the filesystem using Salt! It would be a lot easier to just file.managed a file onto the filesystem and then have Elastic import it.

turtlemonvh commented 5 years ago

@Securitybits-io we use Salt to manage Kibana, and the approach I mentioned above is what I designed to handle this problem: https://github.com/elastic/kibana/issues/2310#issuecomment-277479413

Securitybits-io commented 5 years ago

@turtlemonvh Yeah, that would work, and in Salt you can put that curl in a cmd.run state file.

The problem is that your script seems to be stateless, in that it will run every time you apply a state file and re-import unless there's a collision.

There's also the use case where there's an update to a dashboard. Taking Grafana as an example: you drop a basic dashboard into /var/lib/grafana/dashboards, and it will load any dashboard identified as new or changed. Having the same functionality for ELK would be awesome.
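
The Grafana-style "load anything new or changed" behaviour requested here boils down to change detection. A sketch using content hashes follows; the directory layout and function names are my own, not part of any existing tool:

```python
# Sketch: scan a dashboards directory and report which files are new
# or changed since the last scan, using content hashes.
import hashlib
import pathlib

def scan(directory):
    """Map filename -> content hash for every .json dashboard in the dir."""
    return {p.name: hashlib.sha256(p.read_bytes()).hexdigest()
            for p in pathlib.Path(directory).glob("*.json")}

def diff(previous, current):
    """Return filenames that are new or whose content changed."""
    return sorted(name for name, digest in current.items()
                  if previous.get(name) != digest)
```

A loader would call `scan` periodically, feed `diff` against the previous snapshot, and import only the returned files.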