This add-on exports your Splunk search results to remote destinations so you can do more with your Splunk data. It provides search commands and alert actions to export, push, upload, or share your data to multiple destinations of each type. The app must be configured via the Setup dashboard before use. The Setup dashboard includes a connection test feature in the form of a "Browse" action for all file-based destinations.
We offer paid Commercial Support for Export Everything and our other published Splunk apps using GitHub Sponsors or through a direct support agreement. Contact us for more information.
Free community support is also available, but not recommended for production use cases. In the event of an issue, email us and we'll help you sort it out. You can also reach the author on the Splunk Community Slack.
We welcome your feature requests, which can be submitted as issues on GitHub. Feature requests from paid support customers receive priority.
Use the Credentials tab to manage usernames, passwords, and passphrases (used for private keys) within the Splunk secret store. Certain use cases (such as private key logins) may not require a password, but Splunk requires one to be entered anyway. For passphrases, type any description into the username field. API credentials such as those for AWS use the username field for the access key ID and the password field for the secret access key. Due to the way Splunk manages credentials, the username field cannot be changed once it is saved.
Add read capabilities for each command to users who require access to use the search command or alert action. Add write capability to allow them to make changes to the configuration. By default, admin/sc_admin has full access and power has read-only access. Credential permissions must be granted separately, but are required to use each command that depends on them.
All file-based destinations support keywords for the output filenames. The keywords have double underscores before and after. The keyword replacements are based on Python expressions, so we can add more as they are requested. Those currently available are shown below:
__now__ = epoch value
__nowms__ = epoch value in milliseconds
__nowft__ = timestamp in yyyy-mm-dd_hhmmss format
__today__ = date in yyyy-mm-dd format
__yesterday__ = yesterday's date in yyyy-mm-dd format
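Since the replacements are Python-based, their behavior can be sketched with a small Python helper (a hypothetical function named replace_keywords, shown for illustration only; the add-on's actual implementation may differ):

```python
from datetime import datetime, timedelta

def replace_keywords(filename, now=None):
    """Substitute the filename keywords with timestamp values.

    A minimal sketch of the keyword-replacement behavior, not the
    app's actual code. `now` can be passed in for reproducibility.
    """
    now = now if now is not None else datetime.now()
    replacements = {
        "__now__": str(int(now.timestamp())),                      # epoch seconds
        "__nowms__": str(int(now.timestamp() * 1000)),             # epoch milliseconds
        "__nowft__": now.strftime("%Y-%m-%d_%H%M%S"),              # formatted timestamp
        "__today__": now.strftime("%Y-%m-%d"),                     # today's date
        "__yesterday__": (now - timedelta(days=1)).strftime("%Y-%m-%d"),
    }
    for keyword, value in replacements.items():
        filename = filename.replace(keyword, value)
    return filename
```

For example, an outputfile value of export___today__.csv would be written as export_2023-01-15.csv on that date.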
The following arguments are common to all search commands in this app:
Syntax: target=<target name/alias>
Description: The name/alias of the destination connection
Default: The target specified as the default within the setup dashboard
The following arguments are common to search commands with file-based destinations in this app:
Syntax: outputfile=<[folder/]file name>
Description: The name of the file to be written to the destination. If compress=true, a .gz extension will be appended. If compress is not specified and the filename ends in .gz, compression will be applied automatically. Keyword replacements are supported (see above).
Default: app_username___now__.ext (e.g. search_admin_1588000000.log), where the extension is determined by outputformat: json=.json, csv=.csv, tsv=.tsv, pipe=.log, kv=.log, raw=.log
Syntax: outputformat=[json|raw|kv|csv|tsv|pipe]
Description: The format for the exported search results
Default: csv
Syntax: fields="field1, field2, field3"
Description: Limit the fields to be written to the exported file. Wildcards are supported.
Default: All (*)
Syntax: blankfields=[true|false]
Description: Include blank fields in the output. Applies to JSON and KV output modes.
Default: False
Syntax: internalfields=[true|false]
Description: Include Splunk internal fields in the output. Individual fields can be overridden with fields. Currently these include: _bkt, _cd, _si, _kv, serial, _indextime, _sourcetype, splunk_server, splunk_server_group, punct, linecount, _subsecond, timestartpos, timeendpos, _eventtype_color
Default: False
Syntax: datefields=[true|false]
Description: Include the default date_* fields in the output. Individual fields can be overridden with fields.
Default: False
Syntax: compress=[true|false]
Description: Create the file as a .gz compressed archive
Default: Specified within the target configuration
Export Splunk search results to AWS S3-compatible object storage. Connections can be configured to authenticate using OAuth credentials or the assumed role of the search head EC2 instance.
<search> | epawss3
target=<target name/alias>
bucket=<bucket>
outputfile=<output path/filename>
outputformat=[json|raw|kv|csv|tsv|pipe]
fields="<comma-delimited fields list>"
blankfields=[true|false]
internalfields=[true|false]
datefields=[true|false]
compress=[true|false]
Syntax: bucket=<bucket name>
Description: The name of the destination S3 bucket
Default: Specified within the target configuration
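For example, the following exports failed-request events as compressed JSON; the target name my_s3 and bucket name splunk-exports are hypothetical:

```
index=web status=500
| epawss3 target=my_s3 bucket=splunk-exports outputfile="errors/__today__.json" outputformat=json compress=true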
Export Splunk search results to Azure Blob or Data Lake v2 object storage. Configure connections to authenticate using storage account keys or Azure Active Directory app credentials.
<search> | epazureblob
target=<target name/alias>
container=<container name>
outputfile=<output path/filename>
outputformat=[json|raw|kv|csv|tsv|pipe]
fields="<comma-delimited fields list>"
blankfields=[true|false]
internalfields=[true|false]
datefields=[true|false]
compress=[true|false]
append=[true|false]
Syntax: container=<container name>
Description: The name of the destination container
Default: Specified within the target configuration
Syntax: append=[true|false]
Description: Append the search results to an existing AppendBlob object. This setting will omit output headers for CSV, TSV, and Pipe-delimited output formats. Does not support JSON or compressed (gz) file types.
Default: false (overwrite)
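For example, the following appends CSV rows (without headers) to an existing AppendBlob object; the target name my_blob and container name exports are hypothetical:

index=sales
| epazureblob target=my_blob container=exports outputfile="daily/sales___today__.csv" outputformat=csv append=true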
Export Splunk search results to Box cloud storage. Box must be configured with a Custom App that uses Server Authentication (with JWT) and a generated keypair. The app must then be submitted to the Box enterprise administrator for approval. The administrator should create a folder within the app's account and share it with the appropriate users.
<search> | epbox
target=<target name/alias>
outputfile=<output path/filename>
outputformat=[json|raw|kv|csv|tsv|pipe]
fields="<comma-delimited fields list>"
blankfields=[true|false]
internalfields=[true|false]
datefields=[true|false]
compress=[true|false]
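For example, the following writes login events to a shared Box folder; the target name my_box and folder path are hypothetical:

index=audit action=login
| epbox target=my_box outputfile="shared/logins___nowft__.csv" outputformat=csv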
Export Splunk search results to SFTP servers.
<search> | epsftp
target=<target name/alias>
outputfile=<output path/filename>
outputformat=[json|raw|kv|csv|tsv|pipe]
fields="<comma-delimited fields list>"
blankfields=[true|false]
internalfields=[true|false]
datefields=[true|false]
compress=[true|false]
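For example, the following exports selected fields as compressed TSV; the target name my_sftp and field names are hypothetical:

index=network sourcetype=netflow
| epsftp target=my_sftp outputfile="uploads/netflow___now__.tsv" outputformat=tsv fields="src_ip, dest_ip, bytes" compress=true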
Export Splunk search results to SMB file shares.
<search> | epsmb
target=<target name/alias>
outputfile=<output path/filename>
outputformat=[json|raw|kv|csv|tsv|pipe]
fields="<comma-delimited fields list>"
blankfields=[true|false]
internalfields=[true|false]
datefields=[true|false]
compress=[true|false]
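For example, the following writes raw events to a file share; the target name my_smb and share path are hypothetical:

index=main sourcetype=syslog
| epsmb target=my_smb outputfile="exports/syslog___yesterday__.log" outputformat=raw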
Stream Splunk search results to a Splunk HTTP Event Collector (HEC) or Cribl Stream HEC endpoint.
<search> | ephec
target=<target name/alias>
host=[host_value|$host_field$]
source=[source_value|$source_field$]
sourcetype=[sourcetype_value|$sourcetype_field$]
index=[index_value|$index_field$]
Syntax: host=[host_value|$host_field$]
Description: Field or string to be assigned to the host field on the pushed event
Default: $host$, or if not defined, the hostname of the sending host (from inputs.conf)
Syntax: source=[source_value|$source_field$]
Description: Field or string to be assigned to the source field on the pushed event
Default: $source$, or if not defined, it is omitted
Syntax: sourcetype=[sourcetype_value|$sourcetype_field$]
Description: Field or string to be assigned to the sourcetype field on the pushed event
Default: $sourcetype$, or if not defined, json
Syntax: index=[index_value|$index_field$]
Description: The remote index in which to store the pushed event
Default: $index$, or if not defined, the remote endpoint's default.
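For example, the following streams results to a HEC endpoint, taking the host value from each event's host field and hard-coding the sourcetype and index; the target name my_hec and field values are hypothetical:

index=firewall action=blocked
| ephec target=my_hec host=$host$ sourcetype=fw:blocked index=security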
The following binaries are written in C and are required by multiple Python modules used within this app:
The following binaries are customized within this app to conform to Splunk AppInspect requirements: