Aren't you missing step 4.1 to delete the PVCs?
Oh, I see what you mean, but that's not supported: the Services you deleted held the ClusterIPs, so in theory this could bootstrap with new ClusterIPs, but replacing seed nodes is not supported yet, and for now the replace operation is manual only.
I see. So how can I perform this operation manually while keeping the same PVCs? Thanks.
Currently you can't change the IPs of seed nodes, so your best bet would be to recreate the Kubernetes Service objects with the same ClusterIPs they used to have (one per Scylla node). I haven't tried it myself, though, so test it on a toy cluster first.
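For illustration, here is a minimal sketch of recreating a member Service pinned to its old ClusterIP. The Service name, selector, and port are assumptions modeled on a typical per-member Service; only spec.clusterIP matters for this workaround:

kubectl apply -n scylla -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: scylla-cluster-us-east-1-us-east-1a-0   # hypothetical member Service name
  namespace: scylla
spec:
  type: ClusterIP
  clusterIP: 10.100.70.138                      # the ClusterIP this member had before the reinstall
  selector:
    statefulset.kubernetes.io/pod-name: scylla-cluster-us-east-1-us-east-1a-0
  ports:
  - name: cql
    port: 9042
EOF

The real member Services carry more ports and labels than this, so the snippet is only meant to show how a ClusterIP can be pinned when recreating the object.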
I hit the same scenario and got the cluster to start by purging the existing seed list.
In order to do this, I went to every node and started it outside the ring (a scripted sketch of the same steps follows the list):
1) vi /etc/scylla/scylla.yaml and add the line load_ring_state: false
2) supervisorctl restart scylla
3) Delete all the peers left over from the old deployment:
cqlsh
SELECT peer, rpc_address FROM system.peers;
DELETE FROM system.peers WHERE peer='the old non-existing ip here';
4) Join the ring again:
vi /etc/scylla/scylla.yaml and delete the line load_ring_state: false
5) supervisorctl restart scylla
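Here is an untested consolidation of the same steps into a single script that could be run on each node; it assumes scylla.yaml does not already contain a load_ring_state entry, and OLD_PEER_IP is a placeholder for each stale peer address:

# take the node out of the ring without loading the stored ring state
echo "load_ring_state: false" >> /etc/scylla/scylla.yaml
supervisorctl restart scylla

# list and delete stale peers left over from the old deployment
cqlsh -e "SELECT peer, rpc_address FROM system.peers;"
cqlsh -e "DELETE FROM system.peers WHERE peer='OLD_PEER_IP';"   # repeat for every stale peer

# re-join the ring
sed -i '/^load_ring_state: false$/d' /etc/scylla/scylla.yaml
supervisorctl restart scylla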
There's also an alternative "brutal" solution:
rm -rf /var/lib/scylla/data/system/peer*
supervisorctl restart scylla
I've tried to recreate this issue and it seems to be working fine now. Steps I've taken to reproduce it:
cassandra-stress write no-warmup n=100000 cl=ONE -mode native cql3 connectionsPerHost=1 -col n=FIXED\(5\) size=FIXED\(64\) -pop seq=1..10000000 -node "scylla-cluster-client.scylla.svc" -rate threads=50 -log file=/cassandra-stress.load.data -schema "replication(factor=1)" -errors ignore; cat /cassandra-stress.load.data
Then I read the data back with cassandra-stress. No problems there, so no data was lost. @rzetelskik would you mind sending a PR with an e2e test covering this case before we close it?
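For reference, the read-back step could look roughly like this, mirroring the write invocation above; the exact options here are an assumption rather than the command that was actually run:

cassandra-stress read no-warmup n=100000 cl=ONE -mode native cql3 connectionsPerHost=1 -col n=FIXED\(5\) size=FIXED\(64\) -pop seq=1..10000000 -node "scylla-cluster-client.scylla.svc" -rate threads=50 -log file=/cassandra-stress.read.data -errors ignore; cat /cassandra-stress.read.data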
Describe the bug: I use EBS volumes on AWS. When I uninstall the Scylla cluster with Helm and reinstall it with the same EBS volumes, it tries to connect to the old node IP addresses.
To Reproduce (steps to reproduce the behavior):
Expected behavior: Getting a new cluster up without losing data.
Old Service IP List
10.100.70.138, 10.100.197.104, 10.100.118.129
New Service IP List
10.100.113.44, 10.100.39.182, 10.100.31.7
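If it helps, the ClusterIPs of the per-member Services can be listed with kubectl (assuming the cluster is deployed in the scylla namespace):

kubectl get svc -n scylla -o custom-columns=NAME:.metadata.name,CLUSTER-IP:.spec.clusterIP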
Logs
Environment: