In 1.15 I created an endpoint with this script:
def create_new_endpoint(product_id, path, host='localhost', tags=[]):
    print(f"Creation new ENDPOINT: {host}/{path}")
    return dd.create_endpoint(product_id, path, host=host, tags=tags).data['id']

def create_endpoint(self, product_id, path, protocol="http", host="localhost", port=0, tags=None, fragment="", query=" "):
    data = {'protocol': protocol, "product": product_id, "path": "/" + path, "host": host, 'port': port,
            "fragment": fragment, "query": query}
    if tags:
        data['tags'] = tags
    return self._request("POST", 'endpoints/', data=data)
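For context, here is a minimal sketch (my own illustration, not part of the original script) of the payload the wrapper above builds for the endpoint shown below; note the leading slash that gets prepended to the path:

```python
# Payload built by create_endpoint() above for product 56, path "bulletins",
# host "tip.tip" (matching the endpoint JSON below).
data = {
    'protocol': 'http',
    'product': 56,
    'path': '/' + 'bulletins',   # sent as "/bulletins"
    'host': 'tip.tip',
    'port': 0,
    'tags': ['bulletins:tip/tip'],
    'fragment': '',
    'query': ' ',                # the wrapper's default query is a single space
}
print(data['path'])  # -> /bulletins
```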
The result for my host is False: is this the intended behavior?
My endpoint data is:
{
"id": 42,
"tags": [
"bulletins:tip/tip"
],
"protocol": "http",
"userinfo": null,
"host": "tip.tip",
"port": 0,
"path": "/bulletins",
"query": "",
"fragment": "",
"mitigated": false,
"product": 56,
"endpoint_params": [],
"endpoint_status": [
351
]
}
In my case the path is incorrect, not the hostname. And I think the migration reports "All good" because it is not checking the path field.
FYI @kiblik
@kiblik why do you pass only endpoint.host to Endpoint.from_uri during the migration? I added some debug output to the migration:
if not re.match(r'^[A-Za-z][A-Za-z0-9\.\-\+]+$', endpoint.host) or re.match(r'^/', endpoint.path):  # is old host valid FQDN?
    try:
        validate_ipv46_address(endpoint.host)  # is old host valid IPv4/6?
    except ValidationError:
        try:
            if '://' in endpoint.host:  # is the old host full uri?
                parts = Endpoint.from_uri(endpoint.host)
                # can raise exception if the old host is not valid URL
            else:
                parts = Endpoint.from_uri('//' + endpoint.host)
                # can raise exception if there is no way to parse the old host
            print("PARTS IS:", parts.host, parts.port, parts.path)
And I get this in the output:
ENDPOINT: tip.tip /intelligence
PARTS IS: tip.tip None None
Why do you check only the host and not the path?
My error is here:
url = hyperlink.parse(url="//tip.tip:0//auth")
print(url.path)
('', 'auth')
After the update, I get this URL. Before the update it was http://tip.tip:0/auth.
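As a side note, here is a minimal sketch (a hypothetical helper, not DefectDojo code) of how stripping leading slashes from the stored path avoids the doubled slash when the URL is rebuilt:

```python
# Hypothetical helper: strip leading slashes from the stored path before joining,
# so "//tip.tip:0" plus "//auth" cannot turn into "//tip.tip:0//auth".
def build_url(host, port, path, protocol="http"):
    clean_path = path.lstrip("/")                        # "/auth" or "//auth" -> "auth"
    netloc = f"{host}:{port}" if port is not None else host
    return f"{protocol}://{netloc}/{clean_path}"

print(build_url("tip.tip", 0, "//auth"))                 # http://tip.tip:0/auth
```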
AND the SECOND BUG: I changed all endpoints. The image shows a valid endpoint. If I upload scan results with an endpoint_id, DD creates a new endpoint with the same attributes!! These images show two different endpoints. I described it earlier!
@zakrush, thank you for reporting, perfect description and analysis.
Regarding if not re.match(r'^[A-Za-z][A-Za-z0-9\.\-\+]+$', endpoint.host): yes, this is the current behavior. We are looking for non-standard hosts (previous versions of DD supported formats like foo.bar:80 or a full URL, so we needed to fix them).
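To illustrate the check (the example host values below are my own, not taken from the migration), the FQDN pattern quoted above is what separates "standard" hosts from the legacy formats that need fixing:

```python
import re

# The FQDN pattern quoted above; hosts that do not match it are treated as
# legacy/non-standard values and go through the fixing logic.
HOST_RE = re.compile(r'^[A-Za-z][A-Za-z0-9\.\-\+]+$')

for legacy_host in ["tip.tip", "foo.bar:80", "http://foo.bar/app"]:
    status = "valid FQDN" if HOST_RE.match(legacy_host) else "needs fixing"
    print(f"{legacy_host}: {status}")
# tip.tip: valid FQDN
# foo.bar:80: needs fixing (the colon is not allowed by the pattern)
# http://foo.bar/app: needs fixing (full URL)
```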
Cleaning of the endpoint (e.g. removing the slash from the front of the path) should be done later (same file but a couple of lines lower): https://github.com/DefectDojo/django-DefectDojo/blob/48f5bb79183befd86de3afffca301216a5f84823/dojo/endpoint/utils.py#L186
However, .clean() is not possible during migrations (and I didn't know about it until now; it is normally called from .save()). I fixed both mistakes in #4887.
Regarding Endpoint.from_uri(): this call parses any string into a valid Endpoint. In the case you mention, it is used only for fixing wrong hosts. As I mentioned, all other cleaning is done in .clean().
"Second bug". Can you send output of API call api/v2/endpoints/{id1}
and api/v2/endpoints/{id2}
, please?
@kiblik Here is the first endpoint:
{
"id": 10,
"tags": [
"auth:tip/tip"
],
"protocol": "http",
"userinfo": null,
"host": "tip.tip",
"port": 0,
"path": "auth",
"query": "",
"fragment": "",
"mitigated": false,
"product": 1,
"endpoint_params": [],
"endpoint_status": [
1354,
1355,
1356,
1357,
1358,
1359,
1360,
1361,
1362,
1363,
1364,
1365,
1366,
1367,
1368,
1369,
1935,
1936,
1937,
1938,
1939,
1940,
1941,
1942,
1943,
1944,
1945,
1946,
1947,
1948,
1949,
1950
]
}
Second, the one that is created when I upload a scan report with endpoints:
{
"id": 48,
"tags": [],
"protocol": "http",
"userinfo": null,
"host": "tip.tip",
"port": 0,
"path": "auth",
"query": null,
"fragment": null,
"mitigated": false,
"product": 1,
"endpoint_params": [],
"endpoint_status": [
2807,
2808,
2809,
2810,
2811,
2812,
2813,
2814,
2815,
2816,
2817,
2818,
2819,
2820,
2821,
2822
]
}
Thank you @zakrush
Well, null is not "".
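As a quick illustration (toy dictionaries, not the actual ORM lookup), an exact field-by-field comparison treats null and "" as different values, which is why a second endpoint row gets created:

```python
# Toy illustration: the existing endpoint (id 10) stores query/fragment as "",
# while the imported one (id 48) sends them as null/None, so an exact match fails.
existing = {"host": "tip.tip", "port": 0, "path": "auth", "query": "", "fragment": ""}
incoming = {"host": "tip.tip", "port": 0, "path": "auth", "query": None, "fragment": None}
print(existing == incoming)  # False -> a new endpoint is created instead of reusing id 10
```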
Fixing these issues is done in .clean(), but as I wrote, there was a bug in calling .clean() during the migration process, so it wasn't cleaned properly. As I mentioned, I fixed this call, so it should work now.
If you are able to test #4887 in your environment and confirm it works, it can really help us.
I will try to reproduce it on a copy of my environment.
@kiblik This is what I get when I run manage.py migrate. I added your changes by hand and repeated all the stages.
docker exec -it defectdojo_uwsgi_1 ./manage.py migrate
[29/Jul/2021 14:31:02] INFO [dojo.models:3613] enabling audit logging
Traceback (most recent call last):
File "./manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 85, in wrapped
res = handle_func(*args, **kwargs)
File "/usr/local/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 92, in handle
executor = MigrationExecutor(connection, self.migration_progress_callback)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/executor.py", line 18, in __init__
self.loader = MigrationLoader(self.connection)
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 53, in __init__
self.build_graph()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/loader.py", line 255, in build_graph
self.graph.validate_consistency()
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/graph.py", line 195, in validate_consistency
[n.raise_error() for n in self.node_map.values() if isinstance(n, DummyNode)]
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/graph.py", line 195, in <listcomp>
[n.raise_error() for n in self.node_map.values() if isinstance(n, DummyNode)]
File "/usr/local/lib/python3.8/site-packages/django/db/migrations/graph.py", line 58, in raise_error
raise NodeNotFoundError(self.error_message, self.key, origin=self.origin)
django.db.migrations.exceptions.NodeNotFoundError: Migration dojo.0119_endpoint_host_migration_rerun dependencies reference nonexistent parent node ('dojo', '0118_remove_finding_images')
Also, from the endpoint migration command:
Traceback (most recent call last):
File "./manage.py", line 11, in <module>
execute_from_command_line(sys.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 401, in execute_from_command_line
utility.execute()
File "/usr/local/lib/python3.8/site-packages/django/core/management/__init__.py", line 395, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 330, in run_from_argv
self.execute(*args, **cmd_options)
File "/usr/local/lib/python3.8/site-packages/django/core/management/base.py", line 371, in execute
output = self.handle(*args, **options)
File "/app/dojo/management/commands/endpoint_migration.py", line 22, in handle
clean_hosts_run(apps=apps, change=bool(options.get('dry_run')))
File "/app/dojo/endpoint/utils.py", line 99, in clean_hosts_run
for endpoint in Endpoint_model.objects.order_by('id'):
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 287, in __iter__
self._fetch_all()
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 1308, in _fetch_all
self._result_cache = list(self._iterable_class(self))
File "/usr/local/lib/python3.8/site-packages/django/db/models/query.py", line 53, in __iter__
results = compiler.execute_sql(chunked_fetch=self.chunked_fetch, chunk_size=self.chunk_size)
File "/usr/local/lib/python3.8/site-packages/django/db/models/sql/compiler.py", line 1156, in execute_sql
cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 66, in execute
return self._execute_with_wrappers(sql, params, many=False, executor=self._execute)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 75, in _execute_with_wrappers
return executor(sql, params, many, context)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/utils.py", line 90, in __exit__
raise dj_exc_value.with_traceback(traceback) from exc_value
File "/usr/local/lib/python3.8/site-packages/django/db/backends/utils.py", line 84, in _execute
return self.cursor.execute(sql, params)
File "/usr/local/lib/python3.8/site-packages/django/db/backends/mysql/base.py", line 73, in execute
return self.cursor.execute(query, args)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 206, in execute
res = self._query(query)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/cursors.py", line 319, in _query
db.query(q)
File "/usr/local/lib/python3.8/site-packages/MySQLdb/connections.py", line 259, in query
_mysql.connection.query(self, query)
django.db.utils.OperationalError: (1054, "Unknown column 'dojo_endpoint.userinfo' in 'field list'")
I took the risk and ran it on production. During initialization I got the same error, but the endpoint migration was successful, and scan uploads no longer create a new endpoint. But:
I have deduplication configured:
'Dependency Check Scan': ['cve', 'cwe', 'file_path', "endpoints"],
And now the findings have different hash_codes. I ran ./manage.py dedupe --hash_code_only but it did not help.
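Conceptually (a simplified sketch of the idea, not the exact DefectDojo hash_code implementation), a field-based dedupe hash changes as soon as any configured field differs, e.g. the file_path or the attached endpoints:

```python
import hashlib

# Simplified sketch of a field-based dedupe hash: any difference in a configured
# field (here file_path) produces a different hash_code, so the two findings
# below would not deduplicate against each other.
def dedupe_hash(finding, fields=("cve", "cwe", "file_path", "endpoints")):
    parts = []
    for field in fields:
        value = finding.get(field)
        if isinstance(value, (list, tuple)):
            parts.extend(str(v) for v in value)
        else:
            parts.append(str(value))
    return hashlib.sha256("|".join(parts).encode("utf-8")).hexdigest()

old = {"cve": "CVE-2019-10757", "cwe": 89, "file_path": "auth/knex:0.15.2", "endpoints": [40]}
new = {"cve": "CVE-2019-10757", "cwe": 89, "file_path": "knex:0.15.2", "endpoints": [40]}
print(dedupe_hash(old) == dedupe_hash(new))  # False
```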
Here are the 2 findings:
{
"id": 724,
"images": [],
"tags": [
"auth:tip/tip"
],
"request_response": {
"req_resp": []
},
"accepted_risks": [],
"push_to_jira": false,
"age": 52,
"sla_days_remaining": 68,
"finding_meta": [],
"related_fields": null,
"jira_creation": null,
"jira_change": null,
"display_status": "Active, Verified",
"finding_groups": [],
"title": "[AUTH] Knex:0.15.2 | knex.js Versions Before 0.19.5 Are Vulnerable to SQL Injection Attack. Identifiers Are Escaped Incorrectly as Part of the MSSQL Dialect, Allowing Attackers to Craft a Malicious Query to the Host DB.(in Knex:0.15.2)",
"date": "2021-06-07",
"sla_start_date": null,
"cwe": 89,
"cve": "CVE-2019-10757",
"cvssv3": "",
"cvssv3_score": null,
"url": null,
"severity": "Low",
"description": "knex.js versions before 0.19.5 are vulnerable to SQL Injection attack. Identifiers are escaped incorrectly as part of the MSSQL dialect, allowing attackers to craft a malicious query to the host DB.",
"mitigation": "Обновить knex в auth модуле до версии 0.19.5 или выше",
"impact": "",
"steps_to_reproduce": "` npm ls knex`\r\nВ выводе версия 0.15.2",
"severity_justification": "",
"references": "name: https://snyk.io/vuln/SNYK-JS-KNEX-471962\r\nsource: CONFIRM\r\nurl: https://snyk.io/vuln/SNYK-JS-KNEX-471962\r\n\r\n\r\nhttps://github.com/knex/knex/pull/3382",
"is_template": false,
"active": true,
"verified": true,
"false_p": false,
"duplicate": false,
"out_of_scope": false,
"risk_accepted": false,
"under_review": false,
"last_status_update": "2021-06-08T09:11:12.574746Z",
"under_defect_review": false,
"is_mitigated": false,
"thread_id": 0,
"mitigated": null,
"numerical_severity": "S3",
"last_reviewed": "2021-06-23T13:27:57.270262Z",
"line_number": null,
"sourcefilepath": null,
"sourcefile": null,
"param": null,
"payload": null,
"hash_code": "d6f7562cf9803960fb58992129047fb1511a1a95168b07d9d2f86f365586ab80",
"line": null,
"file_path": "auth/knex:0.15.2",
"component_name": "knex",
"component_version": "0.15.2",
"static_finding": true,
"dynamic_finding": false,
"created": "2021-06-07T18:03:46.178981Z",
"scanner_confidence": null,
"unique_id_from_tool": null,
"vuln_id_from_tool": null,
"sast_source_object": null,
"sast_sink_object": null,
"sast_source_line": null,
"sast_source_file_path": null,
"nb_occurences": null,
"publish_date": null,
"test": 50,
"duplicate_finding": null,
"review_requested_by": null,
"defect_review_requested_by": null,
"mitigated_by": null,
"reporter": 1,
"last_reviewed_by": 2,
"sonarqube_issue": null,
"endpoints": [
40
],
"endpoint_status": [
329
],
"reviewers": [],
"notes": [],
"files": [],
"found_by": [
118
],
"prefetch": {}
}
{
"id": 3638,
"images": [],
"tags": [],
"request_response": {
"req_resp": []
},
"accepted_risks": [],
"push_to_jira": false,
"age": 0,
"sla_days_remaining": 7,
"finding_meta": [],
"related_fields": null,
"jira_creation": null,
"jira_change": null,
"display_status": "Active",
"finding_groups": [],
"title": "knex:0.15.2 | knex.js Versions Before 0.19.5 Are Vulnerable to SQL Injection Attack. Identifiers Are Escaped Incorrectly as Part of the MSSQL Dialect, Allowing Attackers to Craft a Malicious Query to the Host DB.(in knex:0.15.2)",
"date": "2021-07-29",
"sla_start_date": null,
"cwe": 89,
"cve": "CVE-2019-10757",
"cvssv3": null,
"cvssv3_score": null,
"url": null,
"severity": "Critical",
"description": "knex.js versions before 0.19.5 are vulnerable to SQL Injection attack. Identifiers are escaped incorrectly as part of the MSSQL dialect, allowing attackers to craft a malicious query to the host DB.",
"mitigation": "Update knex:0.15.2 to at least the version recommended in the description",
"impact": null,
"steps_to_reproduce": null,
"severity_justification": null,
"references": "name: https://snyk.io/vuln/SNYK-JS-KNEX-471962\nsource: CONFIRM\nurl: https://snyk.io/vuln/SNYK-JS-KNEX-471962\n\n",
"is_template": false,
"active": true,
"verified": false,
"false_p": false,
"duplicate": false,
"out_of_scope": false,
"risk_accepted": false,
"under_review": false,
"last_status_update": "2021-07-29T14:55:43.946995Z",
"under_defect_review": false,
"is_mitigated": false,
"thread_id": 0,
"mitigated": null,
"numerical_severity": "S0",
"last_reviewed": "2021-07-29T14:55:42.435313Z",
"line_number": null,
"sourcefilepath": null,
"sourcefile": null,
"param": null,
"payload": null,
"hash_code": "408513151a2440c95370ecb3a517363edf3f9ab715549e9d4a3bfa7f9e36bb1a",
"line": null,
"file_path": "knex:0.15.2",
"component_name": "knex",
"component_version": "0.15.2",
"static_finding": true,
"dynamic_finding": false,
"created": "2021-07-29T14:55:43.841086Z",
"scanner_confidence": null,
"unique_id_from_tool": null,
"vuln_id_from_tool": null,
"sast_source_object": null,
"sast_sink_object": null,
"sast_source_line": null,
"sast_source_file_path": null,
"nb_occurences": null,
"publish_date": null,
"test": 312,
"duplicate_finding": null,
"review_requested_by": null,
"defect_review_requested_by": null,
"mitigated_by": null,
"reporter": 2,
"last_reviewed_by": 2,
"sonarqube_issue": null,
"endpoints": [
40
],
"endpoint_status": [
1821
],
"reviewers": [],
"notes": [],
"files": [],
"found_by": [
118
],
"prefetch": {}
}
I think I found why deduplication is not working after the migration. Here is the new finding: and here is the old one: The dates are different. Maybe the date is part of the hash code calculation process?
@kiblik I found the problem with deduplication. Here is my finding after the scan import:
{
"id": 73,
"tags": [
"settings:tip/tip"
],
"request_response": {
"req_resp": []
},
"accepted_risks": [],
"push_to_jira": false,
"age": 0,
"sla_days_remaining": 30,
"finding_meta": [],
"related_fields": null,
"jira_creation": null,
"jira_change": null,
"display_status": "Active",
"finding_groups": [],
"title": "[SETTINGS] find-my-way:2.2.4 | This Affects the Package Find-My-Way Before 2.2.5, From 3.0.0 and Before 3.0.5. It Accepts the Accept-Version' Header by Default, and if Versioned Routes Are Not Being Used, This Could Lead to a Denial of Service. Accept-Ve",
"date": "2021-07-29",
"sla_start_date": null,
"cwe": 444,
"cve": "CVE-2020-7764",
"cvssv3": null,
"cvssv3_score": null,
"url": null,
"severity": "High",
"description": "This affects the package find-my-way before 2.2.5, from 3.0.0 and before 3.0.5. It accepts the Accept-Version' header by default, and if versioned routes are not being used, this could lead to a denial of service. Accept-Version can be used as an unkeyed header in a cache poisoning attack.",
"mitigation": "Update find-my-way:2.2.4 to at least the version recommended in the description",
"impact": null,
"steps_to_reproduce": null,
"severity_justification": null,
"references": "name: https://snyk.io/vuln/SNYK-JS-FINDMYWAY-1038269\nsource: MISC\nurl: https://snyk.io/vuln/SNYK-JS-FINDMYWAY-1038269\n\nname: https://github.com/delvedor/find-my-way/commit/ab408354690e6b9cf3c4724befb3b3fa4bb90aac\nsource: MISC\nurl: https://github.com/delvedor/find-my-way/commit/ab408354690e6b9cf3c4724befb3b3fa4bb90aac\n\n",
"is_template": false,
"active": true,
"verified": false,
"false_p": false,
"duplicate": false,
"out_of_scope": false,
"risk_accepted": false,
"under_review": false,
"last_status_update": "2021-07-29T17:29:44.755360Z",
"under_defect_review": false,
"is_mitigated": false,
"thread_id": 0,
"mitigated": null,
"numerical_severity": "S1",
"last_reviewed": "2021-07-29T17:29:44.016211Z",
"line_number": null,
"sourcefilepath": null,
"sourcefile": null,
"param": null,
"payload": null,
"hash_code": "a6b08417e8f77b141b66f738a6be287b808b49ed775dbf45335066c427263d89",
"line": null,
"file_path": "settings/find-my-way:2.2.4",
"component_name": "find-my-way",
"component_version": "2.2.4",
"static_finding": true,
"dynamic_finding": false,
"created": "2021-07-29T17:29:44.632861Z",
"scanner_confidence": null,
"unique_id_from_tool": null,
"vuln_id_from_tool": null,
"sast_source_object": null,
"sast_sink_object": null,
"sast_source_line": null,
"sast_source_file_path": null,
"nb_occurences": null,
"publish_date": null,
"test": 10,
"duplicate_finding": null,
"review_requested_by": null,
"defect_review_requested_by": null,
"mitigated_by": null,
"reporter": 1,
"last_reviewed_by": 1,
"sonarqube_issue": null,
"endpoints": [
1
],
"endpoint_status": [
64
],
"reviewers": [],
"notes": [],
"files": [],
"found_by": [
118
],
"prefetch": {}
}
And its hash_code is not the same as the original one.
Then I ran ./manage.py dedupe --hash_code_only
and the hash code became: "hash_code": "ffca640496f890b18e7309cf8dbe4594a9c9f56d8532df313e73092cac93bc20",
the same as the original!!!
The migration itself is OK: I ran the migrations, and new findings now upload correctly.
@zakrush, ufff, running these not-yet-accepted changes on production was a really risky step (I hoped you would load a backup into some dev environment).
If you run 0119_endpoint_host_migration_rerun in production and it is not merged as-is (maybe there will be some changes, e.g. https://github.com/DefectDojo/django-DefectDojo/pull/4887#issuecomment-889461018), you could have a problem with migrations in the future.
Btw, I'm not sure that you should set endpoints as part of dedupe. Have you already seen it somewhere else?
> Btw, I'm not sure that you should set endpoints as part of dedupe. Have you already seen it somewhere else?
@kiblik This is the only method that lets me deduplicate. Before this bug everything worked correctly with endpoints in the dedup config; for me that was 1.15. Standard dedup does not work for me, see here: https://github.com/DefectDojo/django-DefectDojo/issues/4888. I can't understand why the hash_code changes and is not calculated correctly during scan import. I think there is an error in the algorithm for uploading scan results with endpoints: the finding should be uploaded, the endpoint mapped, and only after that should the hash be calculated (see the sketch below). I will also use endpoints for other tools, because the developers use a monorepo, and that gives me a lot of problems. :(
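In rough code, the ordering being suggested here would look like this (all names are illustrative stand-ins, not the real DefectDojo importer functions):

```python
import hashlib

# Illustrative stand-ins only - not the real DefectDojo importer API.
def compute_hash_code(finding):
    material = "|".join([finding["cve"], finding["file_path"], *map(str, sorted(finding["endpoints"]))])
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

def import_finding(raw_finding, endpoint_ids):
    finding = dict(raw_finding)                         # 1. save the imported finding
    finding["endpoints"] = endpoint_ids                 # 2. map the requested endpoints
    finding["hash_code"] = compute_hash_code(finding)   # 3. only then compute the dedupe hash
    return finding

print(import_finding({"cve": "CVE-2019-10757", "file_path": "auth/knex:0.15.2"}, [40])["hash_code"])
```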
> If you run 0119_endpoint_host_migration_rerun in production and it is not merged as-is (maybe there will be some changes, e.g. #4887 (comment)), you could have a problem with migrations in the future.
I merged all of it. All OK. After the merge request this problem should be resolved.
Can this be closed?
I suppose this bug was fixed in #4887 @zakrush, do you agree?
Slack us first! My problem is described here: https://owasp.slack.com/archives/C2P5BA8MN/p1627056588112000
Be informative: I upgraded from 1.15 to 2.0.3. I have deduplication on endpoints, and when I import a scan I add the id of the endpoint. After the update, when the script imports a scan it creates a new endpoint instead of using the needed one. My deduplication is broken.
Then I saw that I needed to apply the migration. The results of the migration are here:
My endpoints are shown here: the endpoints with tags are from 1.15, and the ones without tags were created during the scan import. You can see that the old endpoints contain //; that is incorrect, and the migration did not fix it.
Bug description See the description above. The migration is not applied after the update. When I import a scan report with findings, it creates another endpoint for the findings.
The id_endpoint here is 28, but in the finding I get a new endpoint_id of 90. Deduplication with endpoints is not working.
Steps to reproduce Steps to reproduce the behavior:
Expected behavior The migration edits my endpoints, the hash codes of the findings are recalculated, the import of findings uploads correctly, and new findings get their dedup status set.
Deployment method (select with an X)
[X] Docker
Environment information
git show -s --format="[%ci] %h: %s [%d]"
Sample scan files (optional) If applicable, add sample scan files to help reproduce your problem.
Screenshots (optional) If applicable, add screenshots to help explain your problem.
Console logs (optional) If applicable, add console logs to help explain your problem.
Additional context (optional) Add any other context about the problem here.