Open kiss81 opened 2 years ago
I believe the plot filter is based on a hash of the plot ID and the challenge (and maybe something else). Different plots have to have different plot IDs, and you can't pick two plot IDs that will consistently hash to the same value when combined with a variety of challenges. I believe the system was specifically designed to prevent the strategy you're proposing.
Kind of make sense of course. But if some kind of filtering is possible before completing the plot that would be great.
```python
@staticmethod
def passes_plot_filter(
    constants: ConsensusConstants,
    plot_id: bytes32,
    challenge_hash: bytes32,
    signage_point: bytes32,
) -> bool:
    plot_filter: BitArray = BitArray(
        ProofOfSpace.calculate_plot_filter_input(plot_id, challenge_hash, signage_point)
    )
    return plot_filter[: constants.NUMBER_ZERO_BITS_PLOT_FILTER].uint == 0

@staticmethod
def calculate_plot_filter_input(plot_id: bytes32, challenge_hash: bytes32, signage_point: bytes32) -> bytes32:
    return std_hash(plot_id + challenge_hash + signage_point)
```
The plot id, challenge hash, and signage point are all concatenated and then hashed, and the result is checked against the filter setting. Because the overall hash cannot be predicted from the plot id alone, there is no way to control plot generation, or to group plots, so that they all pass the filter, all fail it, or even mostly do either.
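A minimal self-contained sketch of that check, assuming `std_hash` is SHA-256 (as in chia-blockchain) and a 9-zero-bit filter (the 1-in-512 mainnet setting at the time); the numbers here are illustrative, not taken from a running node:

```python
import hashlib
import os

NUMBER_ZERO_BITS_PLOT_FILTER = 9  # mainnet value at the time: a 1-in-512 filter

def passes_plot_filter(plot_id: bytes, challenge_hash: bytes, signage_point: bytes) -> bool:
    # std_hash in chia-blockchain is SHA-256 over the concatenation
    digest = hashlib.sha256(plot_id + challenge_hash + signage_point).digest()
    # The plot passes iff the leading filter bits of the digest are all zero
    value = int.from_bytes(digest, "big")
    return value >> (256 - NUMBER_ZERO_BITS_PLOT_FILTER) == 0

# For one fixed plot id, roughly 1 in 512 random challenges should pass
plot_id = os.urandom(32)
passes = sum(
    passes_plot_filter(plot_id, os.urandom(32), os.urandom(32))
    for _ in range(100_000)
)
print(passes)  # expect roughly 100_000 / 512, i.e. somewhere near 195
```

Since the digest behaves like a random function of all three inputs, no choice of plot id can bias the outcome across future challenges.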
As far as I know, big disks should run and little disks can potentially be shut down to save power. Where that boundary is depends on your concern about the life of the disk and your ability or willingness to create large plots.
Great explanation! What I could do is sort the plots on my disks. But to improve it to the maximum I'd need to either go with very large plots (k38 / k39) and / or sort them... If I had a disk full of filter "1" plots, that would work as well as one large plot.
The point is that no, you can't sort the plots. Say you have plots A, B, and C. For challenge one, maybe plots A and B pass the filter. For challenge two, it could be A and C. Then for challenge three just C passes. The challenge, the signage point, and the plot id are all included in the filter check. You have to handle many challenges and signage points, and since you cannot predict them, you cannot pre-sort your plots into groups that will consistently pass or fail the filter.
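This can be seen in a quick simulation. The sketch below is hypothetical: it folds the challenge hash and signage point into a single `challenge` value and uses an illustrative 1-in-16 filter (4 zero bits) instead of mainnet's 1-in-512, so that passes show up in a small number of iterations:

```python
import hashlib

FILTER_BITS = 4  # illustrative 1-in-16 filter; mainnet used 9 bits (1 in 512)

def passes(plot_id: bytes, challenge: bytes) -> bool:
    # Simplified stand-in for the real check: challenge here represents
    # challenge_hash + signage_point combined
    digest = hashlib.sha256(plot_id + challenge).digest()
    return int.from_bytes(digest, "big") >> (256 - FILTER_BITS) == 0

# Three hypothetical plots A, B, C with fixed, distinct plot ids
plots = {name: hashlib.sha256(name.encode()).digest() for name in "ABC"}

# Which plots pass varies unpredictably from challenge to challenge
for i in range(16):
    challenge = i.to_bytes(32, "big")
    winners = [name for name, pid in plots.items() if passes(pid, challenge)]
    print(f"challenge {i}: {winners}")
```

Over many challenges each plot passes about equally often, but the *set* of passing plots changes every time, which is exactly why no static grouping onto disks can line up with the filter.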
That makes sense. So the only way to achieve the fewest possible spin-ups per day is super large plots... That raises the question of whether it will even be possible to make a k38 / k39 plot :p
The largest plot I am aware of was a single k38 that took 43 days. Presumably it could be somewhat quicker now. With larger drives the power consumption is less of an issue. As far as I know 18TB drives aren't generally more power hungry than 1TB drives, so they end up roughly 18x more efficient anyway. With 18TB drives you are probably looking at maybe 500W/PB, though it all depends on your setup. So the drives that save the most power per TB are the smaller drives, where the smaller k34 etc. are still a relevant reduction. A few people have been looking at this for smaller drives. It might be worth checking out #farming on https://keybase.io/team/chia_network.public if you want to discuss it.
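The 500W/PB figure is roughly what falls out of a back-of-the-envelope calculation. The per-drive wattage below is an assumption (around 9W active-idle for a 3.5" drive, which varies by model), not a measured value:

```python
DRIVE_TB = 18
DRIVE_IDLE_WATTS = 9.0  # assumed draw of a spinning 3.5" drive; varies by model

drives_per_pb = 1000 / DRIVE_TB           # ~56 drives for one petabyte
watts_per_pb = drives_per_pb * DRIVE_IDLE_WATTS
print(round(watts_per_pb))                # 500 (W/PB, under these assumptions)

# With 1TB drives at the same per-drive draw, a petabyte would need
# 1000 drives: 18x the power for the same capacity.
```

This also shows why spinning down matters far more for fleets of small drives than for a rack of 18TB units.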
Had no idea how to describe this in a proper way, so here is the background: although I know spin-downs of hard disks are a controversial topic, I want to let my Chia hard disks spin down. At first I was thinking of making k39 plots, but that's not really feasible. Therefore I want to put (k33 or k34) plots with the same filter index on the same drive. By doing so the disks have a lot of time to spin down.
So here is the real question: is it possible to create plots with a constant filter index, so I can choose it beforehand?