Open milan-koudelka opened 11 years ago
Even though you're getting those errors, it might still work. See why here: http://splunk-base.splunk.com/answers/85635/shuttl-archiving-errors/85832 In short, Shuttl uses the same S3 libraries that Hadoop uses, and they are outdated but should be well tested. There's a patch to update the S3 library, but it's not in a release yet.
As for the buckets not being at the root, here's my guess (which might be off by one or two):
To try:
Note:
Thanks and let me know if it doesn't work,
Hi Petter, thank you for your fast response. The data isn't actually being loaded to S3: only data from the test made it there, not data from the real process. I can't try that through the UI, because I have pure indexers without a UI.
In that case, since I don't have a UI on the indexers, it would probably be better to use a shell script with the s3cmd command directly.
Best regards Milan Koudelka
You can control your indexers' Shuttl apps through your Search Head, as long as the Shuttl app is installed on all Splunk instances. Any command you run on your Search Head will also be executed on your connected Search Peers/Indexers.
Otherwise, if you want to test Shuttl in isolation on an indexer, you can call the Shuttl server's REST endpoints for thawing directly. Example of calling Shuttl's thaw endpoint:
POST parameters:
where "from" and "to" parameters are optional.
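To make the shape of such a request concrete, the snippet below just assembles an example call. The port (9090) and the endpoint path are assumptions of mine, not confirmed by this thread, so check your Shuttl server configuration before using them:

```shell
# Hypothetical endpoint: port 9090 and the path below are assumptions, not
# confirmed by this thread; "from" and "to" are the optional POST parameters.
endpoint="http://localhost:9090/shuttl/rest/archiver/bucket/thaw"
echo "curl -X POST $endpoint -d index=dev-os -d from=2013-05-21 -d to=2013-05-22"
```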
That's nice! I've installed it on the search head as well. Both the Thaw and Flush pages are showing a 404:
The path '/en-US/custom/shuttl/Archiving/show' was not found.
There are a few successful archived buckets on the main page, but all of them are from the testing script :-/
To Shuttl some real Splunk buckets fast, you can do the following:
[]
homePath = $SPLUNK_HOME/var/lib/splunk/archiver-test-index/db
coldPath = $SPLUNK_HOME/var/lib/splunk/archiver-test-index/colddb
thawedPath = $SPLUNK_HOME/var/lib/splunk/archiver-test-index/thaweddb
coldToFrozenScript = $SPLUNK_HOME/etc/apps/shuttl/bin/coldToFrozenScript.sh
rotatePeriodInSecs = 5
frozenTimePeriodInSecs = 15
maxWarmDBCount = 1
maxDataSize = 1
$SPLUNK_HOME/var/lib/splunk/<index name>/colddb
Here's a command for adding data to the indexer:
$SPLUNK_HOME/bin/splunk add oneshot <1+MB file> -index <index name> -auth admin:<passwd>
I'm going to be offline for the rest of the day. I'll help you more tomorrow!
- Petter
Hi, yes, sorry, I've restarted Splunk and those pages are accessible now. However, the index field always shows "Loading".
I'm testing this on a real index. I didn't have maxWarmDBCount set there, but I don't think that was important.
My current configuration is:
[dev-os]
homePath = $SPLUNK_DB/dev-os/db
thawedPath = $SPLUNK_DB/dev-os/thaweddb
coldPath = $SPLUNK_DB/dev-os/colddb
maxTotalDataSizeMB = 19800
maxWarmDBCount = 5
frozenTimePeriodInSecs = 604800
coldToFrozenScript = $SPLUNK_HOME/etc/apps/shuttl/bin/coldToFrozenScript.sh
In the log, there is still the same error:
2013-07-30 22:50:12,410 INFO com.splunk.shuttl.archiver.archive.BucketFreezer: will="Attempting to archive bucket" index="dev-os" path="/mnt/ebs/splunk/dev-os/colddb/db_1369127814_1369119469_180"
2013-07-30 22:50:12,567 ERROR com.splunk.shuttl.archiver.model.MovesBuckets: did="Attempted to move bucket" happened="move failed" bucket="LocalBucket [getDirectory()=/mnt/ebs/splunk/dev-os/colddb/db_1369127814_1369119469_180, getName()=db_1369127814_1369119469_180, getIndex()=dev-os, getFormat()=SPLUNK_BUCKET, getPath()=/mnt/ebs/splunk/dev-os/colddb/db_1369127814_1369119469_180, getEarliest()=Tue May 21 08:57:49 CEST 2013, getLatest()=Tue May 21 11:16:54 CEST 2013, getSize()=127725]" destination="/mnt/tmp/shuttl_archiver/data/safe-buckets/dev-os"
And there aren't any new data on S3.
The move that is failing is a normal java.io.File#renameTo(), which should be equivalent to a Unix mv.
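One thing worth checking, going by the paths in the log above: the source (/mnt/ebs/...) and the destination (/mnt/tmp/...) look like different mount points, and a plain rename(2), which is what java.io.File#renameTo uses on Unix, cannot move a file across filesystems. A quick shell check (the same_fs helper name is my own):

```shell
# same_fs: compare the device IDs of two paths with GNU stat; if they differ,
# a plain rename(2) (what java.io.File#renameTo uses) will fail for that move
same_fs() { [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]; }

if same_fs /tmp /tmp; then echo "same filesystem"; else echo "different filesystems"; fi
```

Run it with the colddb path and the shuttl_archiver path instead of /tmp to test the real setup.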
I'm not sure why the index field always shows "Loading". Hopefully you can find something in the logs or by debugging the page in the browser.
Hi Petter, I've tried reinstalling it from scratch. I installed it through the manager, so the manager put the app in /mnt/shared_storage/etc/apps/shuttl. That's the path for the search head pooling shared storage.
I've only configured splunk.xml for the start and tried restarting the server.
The same issue remains: the Flush and Thaw pages still show "Loading" in the list of indexes.
And I can see these errors in the logs, even after I tried copying the files to the correct path:
08-15-2013 00:02:58.819 +0200 ERROR SearchResults - Failed to remove "/mnt/shared_storage/etc/users/milan.koudelka/shuttl/history/splunk-sh1-dev.csv.tmp": No such file or directory
08-15-2013 00:04:22.430 +0200 ERROR FrameworkUtils - Incorrect path to script: /opt/splunk/etc/apps/shuttl/bin/coldToFrozenRetry.sh. Script must be located inside $SPLUNK_HOME/bin/scripts.
08-15-2013 00:04:22.430 +0200 ERROR ExecProcessor - Ignoring: "/opt/splunk/etc/apps/shuttl/bin/coldToFrozenRetry.sh"
08-15-2013 00:04:22.430 +0200 ERROR FrameworkUtils - Incorrect path to script: /opt/splunk/etc/apps/shuttl/bin/start.sh. Script must be located inside $SPLUNK_HOME/bin/scripts.
08-15-2013 00:04:22.430 +0200 ERROR ExecProcessor - Ignoring: "/opt/splunk/etc/apps/shuttl/bin/start.sh"
08-15-2013 00:04:22.430 +0200 ERROR FrameworkUtils - Incorrect path to script: /opt/splunk/etc/apps/shuttl/bin/warmToColdRetry.sh. Script must be located inside $SPLUNK_HOME/bin/scripts.
08-15-2013 00:04:22.430 +0200 ERROR ExecProcessor - Ignoring: "/opt/splunk/etc/apps/shuttl/bin/warmToColdRetry.sh"
[root@splunk-sh1-dev:/opt/splunk] echo $SPLUNK_HOME
/opt/splunk
[root@splunk-sh1-dev:/opt/splunk] cat /opt/splunk/etc/system/local/distsearch.conf
[searchhead:splunk-sh1-dev]
mounted_bundles = true
bundles_location = /mnt/shared_storage/etc/
I think it isn't capable of running on an architecture with shared storage for search head pooling. It probably expects the whole installation to be in $SPLUNK_HOME, but that's not possible when I'm using search head pooling.
Do you have any advice, please?
I haven't seen this one before: "Incorrect path to script: /opt/splunk/etc/apps/shuttl/bin/coldToFrozenRetry.sh. Script must be located inside $SPLUNK_HOME/bin/scripts."
I should fix that!
I think I assume somewhere that the app is installed at $SPLUNK_HOME/etc/apps/shuttl. I should probably not assume that.
My suggestions are:
- Install the app at $SPLUNK_HOME/etc/apps/shuttl
- Copy shuttl/bin/*.sh to $SPLUNK_HOME/bin/scripts
- Edit shuttl/default/inputs.conf so the script paths point to $SPLUNK_HOME/bin/scripts instead of shuttl/bin/
It's not possible to install the app at $SPLUNK_HOME/etc/apps/shuttl. If you are using search head pooling and mounted bundles, apps in that path are ignored, as Splunk only uses the apps on shared storage.

I've copied those files there, but there isn't any change, even after a restart. The paths in inputs.conf were already as you wrote; I've changed them to the real location of the application on shared storage. I've tried rewriting all of these $SPLUNK_HOME references to the correct path for a search head pooling environment, but it's everywhere :-/ It isn't possible to use this with search head pooling because of the expectation that the app will be in $SPLUNK_HOME.

Maybe I can try a little hack: create a symlink from $SPLUNK_HOME/etc/apps/shuttl to the shared storage.
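The symlink hack described above could look like the following sketch; it uses temporary directories standing in for the real /mnt/shared_storage and $SPLUNK_HOME paths so it can be run safely:

```shell
# Stand-ins for the shared storage and $SPLUNK_HOME paths from this thread
shared=$(mktemp -d)
splunk_home=$(mktemp -d)
mkdir -p "$shared/etc/apps/shuttl" "$splunk_home/etc/apps"

# Point the location Shuttl expects at the pooled copy on shared storage
ln -s "$shared/etc/apps/shuttl" "$splunk_home/etc/apps/shuttl"

[ -L "$splunk_home/etc/apps/shuttl" ] && echo "symlink in place"
```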
With the symlink, there are other errors :-)
08-16-2013 02:15:25.827 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" 16.8.2013 2:15:25 com.sun.jersey.api.core.PackagesResourceConfig init
08-16-2013 02:15:25.827 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" INFO: Scanning for root resource and provider classes in the packages:
08-16-2013 02:15:25.827 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" com.splunk.shuttl.server.mbeans.rest
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" 16.8.2013 2:15:25 com.sun.jersey.api.core.ScanningResourceConfig logClasses
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" INFO: Root resource classes found:
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.ShuttlServerRest
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.FlushEndpoint
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.ListThawEndpoint
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.ShuttlConfigurationEndpoint
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.CopyBucketEndpoint
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.ThawBucketsEndpoint
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.ArchiveBucketEndpoint
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" class com.splunk.shuttl.server.mbeans.rest.ListBucketsEndpoint
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" 16.8.2013 2:15:25 com.sun.jersey.api.core.ScanningResourceConfig init
08-16-2013 02:15:25.902 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" INFO: No provider classes found.
08-16-2013 02:15:26.015 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" 16.8.2013 2:15:26 com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
08-16-2013 02:15:26.015 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" INFO: Initiating Jersey application, version 'Jersey: 1.11 12/09/2011 11:05 AM'
08-16-2013 02:16:27.600 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" 16.8.2013 2:16:27 com.sun.jersey.spi.container.ContainerResponse mapMappableContainerException
08-16-2013 02:16:27.600 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" SEVERE: The RuntimeException could not be mapped to a response, re-throwing to the HTTP container
08-16-2013 02:16:27.600 +0200 ERROR ExecProcessor - message from "/mnt/shared_storage/etc/apps/shuttl/bin/start.sh" java.lang.RuntimeException: java.net.ConnectException: Call to localhost/127.0.0.1:9000 failed on connection exception: java.net.ConnectException: Connection refused
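Note the last line: the failure is a plain "Connection refused" to localhost:9000, so before blaming the symlink it may be worth checking whether anything is listening on that port at all (my guess, not confirmed by the thread: 9000 is the classic Hadoop HDFS namenode port, which Shuttl's Hadoop backend would talk to):

```shell
# Probe localhost:9000 using bash's /dev/tcp; prints whether the port is open
if (exec 3<>/dev/tcp/localhost/9000) 2>/dev/null; then
  echo "port 9000: open"
else
  echo "port 9000: closed"
fi
```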
I had no idea "search head pooling and mounted bundles" was a thing :) I need to ask around to see how people deal with this. I can probably hit some Splunk endpoint to figure out where the app lives. Thanks for finding this! The fix shouldn't be too hard, so expect it in a week or two. Remind me again if I haven't fixed it by then!
Amazing. If you need some help with that, let me know; maybe I can help you somehow. This is a great app, and I'm looking forward to using it.
Hi, I think the best way to find the path of the apps location is to read this file:
[root@splunk-sh:~] cat /opt/splunk/etc/system/local/distsearch.conf
# http://docs.splunk.com/Documentation/Splunk/latest/Deploy/Mounttheknowledgebundle
[searchhead:splunk-sh]
mounted_bundles = true
bundles_location = /mnt/shared_storage/etc/
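Since the app would need to parse that file, here is a minimal sketch of pulling bundles_location out of a distsearch.conf-style file. The parsing approach is my own illustration, not Shuttl code:

```shell
# Write a sample distsearch.conf (contents from this thread) and extract
# the bundles_location value from it with awk
conf=$(mktemp)
cat > "$conf" <<'EOF'
[searchhead:splunk-sh]
mounted_bundles = true
bundles_location = /mnt/shared_storage/etc/
EOF

awk -F' *= *' '$1 == "bundles_location" { print $2 }' "$conf"
rm -f "$conf"
```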
I'm looking into this right now, @milan-koudelka.
To support search head pooling:
- inputs.conf's scripts need to have the path "script://./bin/script.sh", without the $SPLUNK_HOME/etc/apps/
- Figure out the best way to get where my app is installed. If I can't, I'll have to do something about the custom configuration that Shuttl has (for historical reasons). That'll hopefully be it though!
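To make the first point concrete, a sketch of the relative form in inputs.conf; the script name start.sh is taken from the errors earlier in the thread, and the interval value is purely illustrative:

```
[script://./bin/start.sh]
interval = 60
```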
Reply to this email directly or view it on GitHub: https://github.com/splunk/splunk-shuttl/issues/131#issuecomment-24628589
Hi, do you have any update on this?
Best regards
Yes! I've updated the develop branch with fixes to search head pooling. Would you like to try it out?
Clone the repository and run ./buildit.sh; you'll then have a build at ./build/shuttl.tgz
Great, I will test it asap on my DEV environment.
Have you looked into using s3ql (see http://groups.google.com/group/s3ql) for the filesystem? It's the best filesystem-on-s3 implementation that I've ever seen, and comes the closest to making it look like "just another" filesystem that is mounted.
The core of the code is still single-threaded, but I think that's true of a lot of filesystem code that is still in use today.
Just a thought. Thanks!
Hi, I've tried to use it, but it's not working. There are errors even when I'm running the test script. The test script uploaded some files, but they aren't in the correct path; they're in the root folder.
My configuration:
<?xml version="1.0" encoding="UTF-8" standalone="yes"?>