TheFausap closed this issue 3 years ago
Hi @TheFausap, thank you for creating this issue. Can you provide more information about your Splunk instance? This application is used in my testing environment and in a live environment, and I have never seen this issue.
A new version will come soon with the integration of TheHive and a review of the internal behavior. I will consider bundling this library if there is no legitimate reason for this bug.
Hi @LetMeR00t, I am using Splunk 7.3.6 (under CentOS) with the Python 2.7 shipped with the installation. I "fixed" the error with a manual install of the ctypes module in the app's "lib" directory, but when I run the simple SPL
| cortexjobs
I get this exception:
10-02-2020 13:42:03.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': DEBUG:command_cortex_jobs.log:LEVEL changed to DEBUG according to the configuration
10-02-2020 13:42:03.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': DEBUG:command_cortex_jobs.log:Fields found = []
10-02-2020 13:42:03.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': DEBUG:command_cortex_jobs.log:filter_data: , filter_datatypes: *, filter_analyzers: *
10-02-2020 13:42:03.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': INFO:command_cortex_jobs.log:Query is: {}
10-02-2020 13:42:03.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': /opt/splunk/etc/apps/TA_cortex/bin/lib2/cortex4py/api.py:24: UserWarning: You are using Python 2.x. That can work, but is not supported.
10-02-2020 13:42:03.106 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': warnings.warn('You are using Python 2.x. That can work, but is not supported.')
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': Traceback (most recent call last):
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': File "/opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py", line 77, in <module>
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': cortex = Cortex(configuration.getURL(), configuration.getApiKey(), settings["sid"], logger)
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': File "/opt/splunk/etc/apps/TA_cortex/bin/cortex.py", line 43, in __init__
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': self.api.analyzers.find_all({}, range='all')
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': File "/opt/splunk/etc/apps/TA_cortex/bin/lib2/cortex4py/controllers/analyzers.py", line 16, in find_all
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': return self._wrap(self._find_all(query, **kwargs), Analyzer)
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': File "/opt/splunk/etc/apps/TA_cortex/bin/lib2/cortex4py/controllers/abstract.py", line 18, in _find_all
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': return self._api.do_post(url, {'query': query or {}}, params).json()
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': File "/opt/splunk/etc/apps/TA_cortex/bin/lib2/cortex4py/api.py", line 107, in do_post
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': self.__recover(ex)
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': File "/opt/splunk/etc/apps/TA_cortex/bin/lib2/cortex4py/api.py", line 54, in __recover
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': raise CortexError("Unexpected exception")
10-02-2020 13:42:03.110 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': cortex4py.exceptions.CortexError: Unexpected exception
10-02-2020 13:42:03.120 ERROR script - sid:1601638919.2488 External search command 'cortexjobs' returned error code 1.
I checked the Cortex API and it's working properly, so maybe this is a problem with the Python version.
thanks, Fausto
The warning message is not an issue: cortex4py was designed for Python 3.x by default and I modified it to run under Python 2.x. The problem here is that the Cortex library itself seems to have an issue. Could you give me your setup of TheHive/Cortex?
I didn't set up TheHive/Cortex; I am using an already existing installation. If you need something specific, I can ask. The API works over HTTPS on port 443, but there is a reverse proxy in front of the server. I don't know if this can cause issues.
Do you mean that you are using an online version of Cortex that I can reach to perform some tests?
No, it's a local installation in our internal network. Could this be due to an untrusted certificate, since I am using HTTPS? If I try the job API with curl, for example, I have to specify the "-k" option to skip certificate verification.
If this is the case, you should try to perform the same requests using only port 80, if possible.
What I cannot understand is why it's calling do_post:
10-02-2020 14:32:07.356 ERROR ScriptRunner - stderr from '/opt/splunk/bin/python /opt/splunk/etc/apps/TA_cortex/bin/cortex_jobs.py': return self._api.do_post(url, {'query': query or {}}, params).json()
The cortexjobs command should return all the jobs, so it should be a GET call.
If, for example, I run curl -k -H 'Authorization: Bearer TOKEN' https://cortex.internal.org:443/api/job
I get the complete job list.
@TheFausap, it's not a GET request :) When I designed this app, I chose to retrieve all analyzers once authentication is done. To do so, I use a POST request that fetches all analyzers with an empty filter (a "filter" parameter is sent, which is why it's a POST request). Do you have access to Wireshark to capture the network traffic and see what we can find? (You will have to use port 80 to do so.) I will try to install an HTTPS certificate on my side to perform this test, but in my view a certificate should not cause any issue (except maybe if the certificate is expired, which was not my case).
What is strange is that this is something not covered by my application, but by the Cortex library itself ...
OK, now I understand the confusion. When I tried the API to get the analyzers, I used this call: curl -H 'Authorization: Bearer API_KEY' 'https://CORTEX_APP_URL:443/api/analyzer', hence the confusion between POST and GET. It does not seem to be a certificate issue, though, because I modified the requests calls with verify=False and I still get the same error. Maybe my account doesn't have any associated analyzer, and that's why I get the exception?
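For reference, the verify=False change mentioned above maps onto the requests library like this. This is a minimal sketch under the assumption that the calls go through a requests session; the URL and token are placeholders from the thread, not real values:

```python
import requests

# Sketch only: the equivalent of "curl -k" with the requests library.
# "CORTEX_APP_URL" and "API_KEY" are placeholders, not real values.
session = requests.Session()
session.verify = False  # skip TLS certificate validation, like curl's -k flag

req = requests.Request(
    "GET",
    "https://CORTEX_APP_URL/api/analyzer",
    headers={"Authorization": "Bearer API_KEY"},
)
# Preparing the request does not touch the network;
# session.send(prepared) would actually perform the call.
prepared = session.prepare_request(req)
```

Note that disabling verification on the session affects every request it sends, which matches the "same error with verify=False" observation: if the failure persists, the certificate is unlikely to be the cause.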
In this case, I assume you would get an empty response from Cortex ... When you are in Cortex, you should see your analyzers, and since you have the same rights via the API, it should not be a problem.
I checked with Postman, calling the api/analyzer query, and I got results, but I used a GET call. I want to test your way with POST; maybe there's some problem in our Cortex implementation. Are you using this kind of call?
curl -XPOST -H 'Authorization: Bearer **API_KEY**' -H 'Content-Type: application/json' 'https://CORTEX_APP_URL:9001/api/analyzer/_search' -d '{
"query": {}
}'
OK, if I use GET to get the list of enabled analyzers, it works. With POST, I get this error (I'm trying with Postman):
A client error occurred on POST /api/analyzer/_search : No CSRF token found for application/json body
Anyway, I did some tests with Postman and I was able to perform a POST request (with the reverse proxy in front) by disabling the cookie jar and deleting all the cookies that existed in the query. So if you could modify the code to disable the cookie jar and prevent adding cookies to the first request, this should work.
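The cookie-jar workaround described above could be sketched with the requests library as follows. This is an assumption about how a session might be configured, not the app's actual code: with allowed_domains=[] the policy rejects every Set-Cookie header, so no session cookie is stored or replayed on a later POST, which is what disabling the cookie jar in Postman achieved.

```python
import requests
from http.cookiejar import DefaultCookiePolicy

# Sketch (assumption): a session whose cookie jar refuses to store any
# cookie the server sets, mirroring "disable cookie jar" in Postman.
policy = DefaultCookiePolicy(allowed_domains=[])  # no domain may set cookies
session = requests.Session()
session.cookies.set_policy(policy)
# Any POST made through this session now carries no stored cookies, so a
# CSRF check that fires on cookie-bearing JSON POSTs would not be triggered.
```

Since the Cortex API is authenticated with the Bearer token header rather than a session cookie, dropping cookies should not break the authentication itself.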
Hi, I'm not sure I catch everything here. Are you asking me to prevent adding cookies to the first POST request, that is, the request that performs the authentication, or the one that gets the analyzers? Anyway, you have to specify the cookies (at least the session cookie) to access this information, right? Are you saying that at least one cookie causes this issue? In any case, the problem is linked to the library itself and not to my Splunk app, right? ... We should open an issue on the Cortex GitHub repository.
Hello,
yes, you are correct, the problem is in the API library itself. As a quick workaround, I was thinking of forcing the POST request to disable the cookie jar and not manage cookies at all, like I did in Postman.
I have the same issue: the thehivecases and cortexjobs tabs are not working, with 'External search command 'thehivecases' returned error code 1.' Can anyone help me?
Hi, this issue was resolved in another issue. Could you specify the TA version and your OS environment? Thank you.
OS: Ubuntu 18.04. At first I used the newest version, TA 1.1.3, but both 'thehivecases' and 'cortexjobs' returned error code 1. After I installed 1.1.1, 'cortexjobs' worked but 'thehivecases' still returned an error.
I recommend you only work with 1.1.3. Did you fill in all the information on the configuration page? And did you then add your inputs for each of your instances? If so, please run a search, and when you get the error, open the job log, check the end of the messages for any Python error, and copy/paste it here. Thank you.
I filled in all the information. The cortexjobs search returns the correct result, but when I run the thehivecases search, it returns an error code.
tail -f /opt/splunk/var/log/splunk/command_thehive_cases.log
2020-12-18 06:58:14,868 INFO thehive_search_cases:28 - Parameter "keyword" not found, using default value=""
2020-12-18 06:58:14,869 INFO thehive_search_cases:28 - Parameter "status" not found, using default value=""
2020-12-18 06:58:14,869 INFO thehive_search_cases:28 - Parameter "severity" not found, using default value=""
2020-12-18 06:58:14,869 INFO thehive_search_cases:28 - Parameter "tags" not found, using default value=""
2020-12-18 06:58:14,870 INFO thehive_search_cases:28 - Parameter "title" not found, using default value=""
2020-12-18 06:58:14,870 INFO thehive_search_cases:28 - Parameter "assignee" not found, using default value=""
2020-12-18 06:58:14,870 INFO thehive_search_cases:28 - Parameter "date" not found, using default value=" TO "
2020-12-18 06:58:14,870 INFO thehive_search_cases:28 - Parameter "max_cases" not found, using default value="100"
2020-12-18 06:58:14,870 INFO thehive_search_cases:28 - Parameter "sort_cases" not found, using default value="-startDate"
2020-12-18 06:58:14,870 INFO thehive_search_cases:116 - Query is: {}
Could you rerun the test with the DEBUG logging mode set (under the Configuration page)? For now, from what I see, the script is working well, but maybe the HTTP response has an issue. Which version of TheHive do you have?
I'm using TheHive 4.0.2-1.
I have set debug mode and rerun the search. Everything seems to be working properly, but the Splunk search does not return a result.
2020-12-18 07:31:28,832 DEBUG common:45 - LEVEL changed to DEBUG according to the configuration
2020-12-18 07:31:28,877 INFO thehive_search_cases:28 - Parameter "keyword" not found, using default value=""
2020-12-18 07:31:28,877 INFO thehive_search_cases:28 - Parameter "status" not found, using default value=""
2020-12-18 07:31:28,877 INFO thehive_search_cases:28 - Parameter "severity" not found, using default value=""
2020-12-18 07:31:28,878 INFO thehive_search_cases:28 - Parameter "tags" not found, using default value=""
2020-12-18 07:31:28,878 INFO thehive_search_cases:28 - Parameter "title" not found, using default value=""
2020-12-18 07:31:28,878 INFO thehive_search_cases:28 - Parameter "assignee" not found, using default value=""
2020-12-18 07:31:28,878 INFO thehive_search_cases:28 - Parameter "date" not found, using default value=" TO "
2020-12-18 07:31:28,878 INFO thehive_search_cases:28 - Parameter "max_cases" not found, using default value="100"
2020-12-18 07:31:28,878 INFO thehive_search_cases:28 - Parameter "sort_cases" not found, using default value="-startDate"
2020-12-18 07:31:28,879 DEBUG thehive_search_cases:68 - filterKeyword: , filterStatus: , filterSeverity: , filterTags: , filterTitle: , filterAssignee: , filterDate: TO , max_cases: 100, sort_cases: -startDate
2020-12-18 07:31:28,879 INFO thehive_search_cases:116 - Query is: {}
2020-12-18 07:31:28,946 DEBUG thehive_search_cases:124 - Get case ID "~24648"
2020-12-18 07:31:28,946 DEBUG thehive_search_cases:125 - Case details: {'_id': '~24648', 'id': '~24648', 'createdBy': 'xxx@gmail.com', 'updatedBy': 'xxx@gmail.com', 'createdAt': 1607501500499, 'updatedAt': 1608104366542, '_type': 'case', 'caseId': 3, 'title': 'My first case', 'description': 'This is my first empty case using TheHive!', 'severity': 2, 'startDate': 1607501460000, 'endDate': 1607502272729, 'impactStatus': 'NoImpact', 'resolutionStatus': 'TruePositive', 'tags': [], 'flag': False, 'tlp': 2, 'pap': 2, 'status': 'Open', 'summary': 'no', 'owner': 'xxx@gmail.com', 'customFields': {}, 'stats': {}, 'permissions': ['manageShare', 'manageAnalyse', 'manageTask', 'manageCaseTemplate', 'manageCase', 'manageUser', 'managePage', 'manageObservable', 'manageConfig', 'manageAlert', 'accessTheHiveFS', 'manageAction']}
Hi, do you have an example of your search? Is the TheHive dashboard (navigation bar) working?
I tried the search | makeresults | thehivecases and it returned an error. The TheHive dashboard is not working. The cortexjobs search and dashboard work properly.
You said that the script raised an error code when you ran the command, right? The search reports an error, right? If so, you must have a Python error in your job.log after the logs you sent me ...
When I search Splunk with the query above, I think Python shows no error and returns the result as logged. But Splunk returned 'External search command 'thehivecases' returned error code 1.' Here is splunkd.log:
12-18-2020 10:15:23.560 +0000 WARN HttpListener - Socket error from 127.0.0.1:55718 while accessing /servicesNS/nobody/TA-thehive-cortex/storage/collections/data/kv_cortex_analyzers/: Connection closed by peer
12-18-2020 10:15:29.514 +0000 INFO LMStackMgr - license_warnings_update_interval=auto has reached the minimum threshold 10. Will not reduce license_warnings_update_interval beyond this value
This is not an error message but a warning. If the search says an error occurred, then you should have an error message in the job.log. Be sure to check the job log, not just the file containing the script logs. Could you send me the job.log by email at letmer00t@gmail.com?
Hi @datkps17, did you manage to solve the issue?
Hello, I configured the app in Splunk 7.x but I get an error about a missing ctypes module. I tried to run the command "cortexjobs" and I got this in the job log.
Do I have to add this module manually?
thanks, Fausto