The locking issue was just a slip of the hand. Could you debug the variables actualTarget and availableTargetNames? That sounds a bit odd.
I have debugged the variables availableTargetNames and actualTarget as you suggested. availableTargetNames contains the logical table name agreement. actualTarget contains a single physical table name agreement_2. From this, it seems that my custom sharding algorithm is correctly mapping the logical table to the appropriate physical table. However, my original question was about why my configuration works even though I didn't specify the physical table names explicitly in the actual-data-nodes property. Could it be that ShardingSphere relies on the sharding algorithm to resolve the physical table names from the logical table name, even when the actual-data-nodes doesn't explicitly list all the tables? Is this behavior expected?
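For reference, those two names line up with the inputs and output of the standard sharding SPI: the collection passed into doSharding is built from actual-data-nodes (so here it only contains agreement), and the String the method returns is what ends up as the routed actual table. Below is a rough sketch of where these values can be inspected, assuming ShardingSphere 5.x; the class and the mapping are simplified placeholders, not my real algorithm.

```java
import java.util.Collection;

import org.apache.shardingsphere.sharding.api.sharding.standard.PreciseShardingValue;

// Fragment only, to show where the two debugged values surface:
// availableTargetNames is derived from actual-data-nodes, and the returned
// String is the table the statement is actually routed to.
final class ShardingDebugFragment {

    String doSharding(final Collection<String> availableTargetNames,
                      final PreciseShardingValue<Long> shardingValue) {
        // With actual-data-nodes: ds0.agreement this prints [agreement].
        System.out.println("availableTargetNames = " + availableTargetNames);

        // Placeholder for the custom mapping logic.
        String actualTable = shardingValue.getLogicTableName() + "_" + 2;

        // This is the value that showed up as agreement_2 while debugging.
        System.out.println("actual target table = " + actualTable);
        return actualTable;
    }
}
```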
Thank you for your help.
> Is this behavior expected?

actualDataNodes is empty. The actual target tables for routing are generated by the logic of the custom sharding algorithm class.
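Here is a minimal sketch of what such a class-based algorithm can look like, assuming ShardingSphere 5.x's StandardShardingAlgorithm SPI, a numeric sharding column, and a simple modulo-4 rule; none of these details come from this issue, and depending on the exact version further SPI methods (for example init or props handling) may also need to be implemented.

```java
import java.util.Collection;
import java.util.LinkedHashSet;

import org.apache.shardingsphere.sharding.api.sharding.standard.PreciseShardingValue;
import org.apache.shardingsphere.sharding.api.sharding.standard.RangeShardingValue;
import org.apache.shardingsphere.sharding.api.sharding.standard.StandardShardingAlgorithm;

/**
 * Illustrative custom algorithm: the physical table name is derived from the
 * sharding value rather than looked up in availableTargetNames, so
 * actual-data-nodes can list only the logic table (ds0.agreement) and routing
 * still works. The names and the modulo-4 rule are assumptions for this sketch.
 */
public final class AgreementShardingAlgorithm implements StandardShardingAlgorithm<Long> {

    private static final long SHARD_COUNT = 4;

    @Override
    public String doSharding(final Collection<String> availableTargetNames,
                             final PreciseShardingValue<Long> shardingValue) {
        // Builds e.g. "agreement_2" directly from the key; availableTargetNames
        // (here just "agreement") is not consulted.
        long suffix = Math.floorMod(shardingValue.getValue(), SHARD_COUNT);
        return shardingValue.getLogicTableName() + "_" + suffix;
    }

    @Override
    public Collection<String> doSharding(final Collection<String> availableTargetNames,
                                         final RangeShardingValue<Long> shardingValue) {
        // For range queries, fan out to all four physical tables.
        Collection<String> result = new LinkedHashSet<>();
        for (long i = 0; i < SHARD_COUNT; i++) {
            result.add(shardingValue.getLogicTableName() + "_" + i);
        }
        return result;
    }

    @Override
    public String getType() {
        // SPI type name used to reference the algorithm in the configuration.
        return "AGREEMENT_CUSTOM";
    }
}
```

Because the returned names are built from the logic table name plus a suffix, the physical tables never have to be enumerated in actual-data-nodes, which matches the behavior described below.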
I have observed some unexpected behavior in ShardingSphere's table sharding configuration: my configuration doesn't explicitly specify all of the physical tables, yet it works correctly. I'm trying to understand why this is possible.
Here's my current configuration and sharding code:
In my database, I have four physical tables: agreement_0, agreement_1, agreement_2, and agreement_3. However, in my configuration, I've only specified actual-data-nodes: ds0.agreement instead of actual-data-nodes: ds0.agreement_$->{0..3}.
Despite this, my sharding setup works correctly. The system successfully routes queries to the appropriate physical tables.
My question is:
Is there some default behavior in ShardingSphere that I'm not aware of, or is this a result of my custom sharding algorithm? Thank you.
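For completeness, here is a rough sketch of an equivalent rule configuration expressed through the Java API rather than Spring properties, assuming ShardingSphere 5.2+ and the built-in CLASS_BASED algorithm type; the sharding column, algorithm name, and algorithm class are placeholders, not the configuration from this issue.

```java
import java.util.Properties;

import org.apache.shardingsphere.infra.config.algorithm.AlgorithmConfiguration;
import org.apache.shardingsphere.sharding.api.config.ShardingRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.rule.ShardingTableRuleConfiguration;
import org.apache.shardingsphere.sharding.api.config.strategy.sharding.StandardShardingStrategyConfiguration;

public final class AgreementRuleConfigSketch {

    public static ShardingRuleConfiguration build() {
        ShardingRuleConfiguration shardingRule = new ShardingRuleConfiguration();

        // Only the logic table is listed, mirroring actual-data-nodes: ds0.agreement;
        // agreement_0..agreement_3 are never enumerated here.
        ShardingTableRuleConfiguration agreement =
                new ShardingTableRuleConfiguration("agreement", "ds0.agreement");
        agreement.setTableShardingStrategy(
                new StandardShardingStrategyConfiguration("agreement_id", "agreement-custom"));
        shardingRule.getTables().add(agreement);

        // CLASS_BASED delegates to a user-written standard sharding algorithm class
        // (the class name is a placeholder).
        Properties props = new Properties();
        props.setProperty("strategy", "STANDARD");
        props.setProperty("algorithmClassName", "com.example.AgreementShardingAlgorithm");
        shardingRule.getShardingAlgorithms().put("agreement-custom",
                new AlgorithmConfiguration("CLASS_BASED", props));

        return shardingRule;
    }
}
```

The physical table names appear only in the output of the algorithm class, never in the rule configuration itself, which is why the setup works with actual-data-nodes listing just ds0.agreement.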