scylladb / scylla-manager

The Scylla Manager
https://manager.docs.scylladb.com/stable/

Manager restore error (kms_error AccessDeniedException) for multiDC cluster with EAR enabled #3871

Open mikliapko opened 2 months ago

mikliapko commented 2 months ago

Issue description

Manager restore operation returns kms_error AccessDeniedException for multiDC cluster with EAR enabled.

Full error message: Found exception: kms_error (AccessDeniedException: User: arn:aws:iam::797456418907:role/qa-scylla-manager-backup-role is not authorized to perform: kms:Decrypt on the resource associated with this ciphertext because the resource does not exist in this Region, no resource-based policies allow access, or a resource-based policy explicitly denies access)

The issue has been observed just recently, after we tried to switch the Manager SCT tests to run against Scylla 2024.1 instead of 2022.

Impact

The restore operation returns this error multiple times during a single run, but the whole process finishes successfully. I suppose the reason lies in KMS availability for some nodes of the cluster.

How frequently does it reproduce?

Every restore operation performed in such a configuration.

Installation details

SCT Version: 31ff1e87d830ce7fe2587e0c609d113d2f66f8a4
Scylla version (or git commit hash): 2024.1.3-20240401.64115ae91a55

Logs

mikliapko commented 2 months ago

@fruch The question here - is it a valid case to have a multiDC cluster with the Encryption at Rest feature enabled and expect all the Manager features, like restore, etc., to be fully functional? If yes, this issue should be addressed and fixed in scope of the Manager activities; otherwise, some fixes to SCT for these Manager tests are required.

@karol-kokoszka @Michal-Leszczynski @rayakurl FYI

vponomaryov commented 2 months ago

@mikliapko

So, as a first step, check whether your scenario violates the per-region sstables limitation. If it does not, then more details are needed about the steps that get done in scope of the mgmt restore operation.

mikliapko commented 2 months ago

Hey @vponomaryov,

Thanks for your reply.

vponomaryov commented 2 months ago
  • Just to clarify, if the keys are the same (I mean content) but uploaded to different regions, will they be considered by AWS as different, and will that basically result in the restore issue I described above?

No. Scylla gets configured with KMS keys by their aliases; it doesn't check key equality. Just keep in mind that encrypted sstables may be decrypted only with the proper private/encryption key.

  • You mentioned that SCT uses per-region KMS keys. Then probably the approach of using a kind of multi-region key (https://docs.aws.amazon.com/kms/latest/developerguide/multi-region-keys-overview.html) should help to avoid this issue for a multiDC setup. What do you think about it?

Multi-region keys will help to work around the incorrect approach to restore - yes. But the approach to the backup+restore operations in the test needs to be fixed. The question is not about the KMS keys, the question is about the usage of sstables. Move sstables only within the scope of a single region - for each of the regions.

Again, we need to understand the steps done in the test and why it is so. Is it expected that sstables are stored in an encrypted state? Why, then, are they stored without a private/decryption key reference?

I don't see proof that the mgmt test approach is correct. The argument for the per-region KMS keys - if a customer chooses this way, then he will not be able to use the manager? That doesn't sound serious.
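For reference, the multi-region key idea above would mean creating a primary key in one region and replicating it to the other regions. A rough AWS CLI sketch (the description, alias, and regions are illustrative, not what SCT currently provisions):

aws kms create-key --multi-region --description "scylla-ear-example-key"
aws kms replicate-key --key-id <primary-key-id> --replica-region us-west-2
aws kms create-alias --alias-name alias/scylla-ear-example-key --target-key-id <primary-key-id>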

mikliapko commented 2 months ago

Again, we need to understand the steps done in the test and why it is so. Is it expected that sstables are stored in an encrypted state? Why, then, are they stored without a private/decryption key reference?

I got your point. Okay, I need to dig deeper into the test itself, because all I did was change the version of Scylla (from 2022 to 2024) for the test_manager_sanity test, which worked fine on 2022. So, the initial test didn't expect this EaR feature to be present. But now it should be adjusted to it properly.

I don't see proof that the mgmt test approach is correct. The argument for the per-region KMS keys - if a customer chooses this way, then he will not be able to use the manager? That doesn't sound serious.

Agree.

mikliapko commented 2 months ago

@vponomaryov I'm wondering, does the current implementation allow disabling Scylla Encryption?

From what I see in the code the encryption will be enabled by default for:

https://github.com/scylladb/scylla-cluster-tests/blob/caec2aae51d82dd3cbc0870ec59821ddec759401/sdcm/tester.py#L810

And I don't see any way to correctly disable it except by providing a kms_host parameter value that is not equal to auto.

vponomaryov commented 2 months ago

@vponomaryov I'm wondering, does the current implementation allow disabling Scylla Encryption?

From what I see in the code the encryption will be enabled by default for:

  • Enterprise version >= '2023.1.3';
  • AWS cluster;
  • non-mixed db type;
  • in absence of custom encryption options.

That's basically the configuration used in the Manager tests, and Encryption becomes enabled by default.

https://github.com/scylladb/scylla-cluster-tests/blob/caec2aae51d82dd3cbc0870ec59821ddec759401/sdcm/tester.py#L810

And I don't see any way to correctly disable it except by providing a kms_host parameter value that is not equal to auto.

Just update the config in the following way:

scylla_encryption_options: "{'key_provider': 'none'}"

mikliapko commented 2 months ago

Just update the config in the following way:

scylla_encryption_options: "{'key_provider': 'none'}"

Got it, thanks

fruch commented 2 months ago

Just to emphasize, the scylla causing the issue is the one installed on the monitor node for manager

We never tested it with KMS enabled; since the SCT code is written to enable KMS by default on the supported versions, it got enabled.

There is no reason it shouldn't be working. I'm guessing it's just a configuration error picking the wrong region in the scylla.yaml configuration, and it should be fixed.
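For context, the region binding lives in the per-node KMS section of scylla.yaml; a rough sketch (the host name, region, and key alias are illustrative, not taken from the SCT setup):

kms_hosts:
  my-kms-host:
    aws_region: us-east-1  # must be a region where the key (or a replica of it) exists
    master_key: alias/my-scylla-ear-key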

mikliapko commented 2 months ago

Just to emphasize, the scylla causing the issue is the one installed on the monitor node for manager

Hm, I thought that the problem is in the cluster nodes (in one of the regions), because while going through the logs I've seen this error for manager-regression-manager--db-node-52191a6f-1 and manager-regression-manager--db-node-52191a6f-2, which are located in one region, and no errors for the node in the other region. I'm looking into this pipeline, job 11.

@fruch How can I tell that the problem here is related to the monitor node?

fruch commented 2 months ago

Just to emphasize, the scylla causing the issue is the one installed on the monitor node for manager

Hm, I thought that the problem is in the cluster nodes (in one of the regions), because while going through the logs I've seen this error for manager-regression-manager--db-node-52191a6f-1 and manager-regression-manager--db-node-52191a6f-2, which are located in one region, and no errors for the node in the other region. I'm looking into this pipeline, job 11.

@fruch How can I tell that the problem here is related to the monitor node?

I take it back, I was confused because the node's name had "manager" in it.

Yes, it's the DB nodes, and yes, the expectation is that the manager puts the sstables back on the same nodes, or at least in the same region.

I don't know what the cloud did by default, but if it's not multi-region keys, restore for a multi-region setup would be broken.

We can't disable KMS before understanding the situation

mikliapko commented 2 months ago

and yes, the expectation is that the manager puts the sstables back on the same nodes, or at least in the same region

@karol-kokoszka Could you please elaborate on this?

karol-kokoszka commented 2 months ago

Yes, it's the DB nodes, and yes, the expectation is that the manager puts the sstables back on the same nodes, or at least in the same region.

To make it work in such a way that SSTables are sent to the same region (DC), you must specify the DC when adding a location to the restore task: https://manager.docs.scylladb.com/stable/sctool/restore.html#l-location

"The format is [:]:. The parameter is optional. It allows you to specify the datacenter whose nodes will be used to restore the data from this location in a multi-dc setting, it must match Scylla nodes datacenter. By default, all live nodes are used to restore data from specified locations."

If the DC is not specified in the location, then it may be sent to any node.

I guess you must restore a multiDC cluster DC by DC when encryption at rest is enabled.
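For illustration, a DC-scoped restore would look roughly like this (the cluster ID, DC name, bucket, and snapshot tag are placeholders):

sudo sctool restore -c <cluster-id> --restore-tables --location <dc-name>:s3:<bucket> --snapshot-tag <snapshot-tag>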

mikliapko commented 2 months ago

I guess you must restore a multiDC cluster DC by DC when encryption at rest is enabled.

It means that the backup should also be done with the --location option specified for every cluster DC, right?

karol-kokoszka commented 2 months ago

It means that the backup should also be done with the --location ([<dc>:]<provider>:<bucket>) option specified for every cluster DC, right?

I don't think it's necessary. It's needed when you want to have a separate backup bucket per DC.
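For illustration, a backup with a separate bucket per DC would be started roughly like this (the DC names and buckets are placeholders):

sudo sctool backup -c <cluster-id> --location <dc-1>:s3:<bucket-for-dc-1>,<dc-2>:s3:<bucket-for-dc-2>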

mikliapko commented 2 months ago

It means that the backup should also be done with the --location ([<dc>:]<provider>:<bucket>) option specified for every cluster DC, right?

I don't think it's necessary. It's needed when you want to have a separate backup bucket per DC.

In such a case, I suppose I need to know which DC to use during the restore and, specifically, which key was used to encrypt the SSTables during the backup, right? Does sctool provide such a capability?

karol-kokoszka commented 2 months ago

In such a case, I suppose I need to know which DC to use during the restore and, specifically, which key was used to encrypt the SSTables during the backup, right? Does sctool provide such a capability?

SCTool does not concern itself with encryption at rest and is not aware of the keys used to encrypt SSTables. Therefore, it is unnecessary for you to know which keys were used during the backup process.

It is the responsibility of the Scylla server to manage decryption of the data. When SM (presumably Scylla Manager) employs the load & stream feature for restoration, it calls the Scylla server and passes the SSTable. Subsequently, Scylla is tasked with identifying the appropriate node to which the SSTable should be streamed.

I presume that Scylla must first decrypt the SSTable in order to determine the correct destination for streaming. In the scenario you described with this issue, there is a possibility that an SSTable encrypted with a key stored in a different region was sent to a node lacking access to the Key Management Service (KMS) in that region.

To mitigate this issue, it is advisable to restore data center (DC) by data center (DC), ensuring that SSTables encrypted with a specific key (e.g., key A) are decrypted with the corresponding key A.

mikliapko commented 2 months ago

To mitigate this issue, it is advisable to restore data center (DC) by data center (DC), ensuring that SSTables encrypted with a specific key (e.g., key A) are decrypted with the corresponding key A.

Thanks a lot for the detailed explanation, I'll experiment.

mikliapko commented 2 months ago

@karol-kokoszka could you please take a look?

I made an attempt to restore specifying two locations - one for each DC. sctool returns an error that the location is specified multiple times.

Command: 'sudo sctool restore -c 39298668-5b05-4338-96e9-3f0b9425dff4 --restore-tables --location us-eastscylla_node_east:s3:manager-backup-tests-us-east-1,us-west-2scylla_node_west:s3:manager-backup-tests-us-east-1  --snapshot-tag sm_20240425231055UTC'
Exit code: 1
Stdout:
Stderr:
Error: create restore target, units and views: init target: location us-west-2scylla_node_west:s3:manager-backup-tests-us-east-1 is specified multiple times
Trace ID: GKvUjE6lRCWpY2NO5K1jfQ (grep in scylla-manager logs)

Michal-Leszczynski commented 2 months ago

@mikliapko this is somewhat of an SM limitation/bug - you can't specify a given location with many DCs and another location with another DC.

How many DCs do you have in the restore destination cluster? If only the 2 mentioned, then you can run the restore with a single location without a DC specified (it will use all nodes with access to the location for restoring the data).

karol-kokoszka commented 2 months ago

@Michal-Leszczynski the goal is to restore DC by DC, and to send node data from DC A to the nodes of DC A.

@mikliapko Please just use separate restore tasks, one per DC.
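For clarity, that suggestion amounts to two separate invocations along these lines, reusing the DC names from the failing command above (the bucket and snapshot tag are placeholders); the follow-up comments discuss a current SM limitation with this approach:

sudo sctool restore -c <cluster-id> --restore-tables --location us-eastscylla_node_east:s3:<bucket> --snapshot-tag <snapshot-tag>
sudo sctool restore -c <cluster-id> --restore-tables --location us-west-2scylla_node_west:s3:<bucket> --snapshot-tag <snapshot-tag>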

Michal-Leszczynski commented 2 months ago

But something like this is not supported by SM right now. When a location is specified, the nodes with access to it (or the nodes from the specified DC with access) restore the whole backup data from this location. So one would need truly separate backup locations for this purpose.

karol-kokoszka commented 2 months ago

But something like this is not supported by SM right now. When a location is specified, the nodes with access to it (or the nodes from the specified DC with access) restore the whole backup data from this location. So one would need truly separate backup locations for this purpose.

If so, then this is a bug that we must address in one of the upcoming releases. It doesn't fit the backup specification that we advertise in our documentation: https://manager.docs.scylladb.com/stable/backup/specification.html

The backup location structure explicitly defines the tree path down to the exact DC.

Restore must take advantage of it. We may have problems with multiDC EaR without it.
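For reference, the data path in that specification includes the DC as an explicit component, roughly (names are placeholders, path abridged):

s3://<bucket>/backup/sst/cluster/<cluster-id>/dc/<dc-name>/node/<node-id>/keyspace/<keyspace>/table/<table>/...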