Closed NateThornton closed 7 years ago
Good catch.
But your fix is equivalent to saying that zbd_lblock_size is 512. It is 4K in your case, isn't it? Checking the code again, I noticed that line 1307 is wrong anyway:
dev->zbd_info.zbd_lblocks = fdev->zbd_nr_zones * zone_sz;
zone_sz is in 512B sector units, so this ends up assuming that logical blocks are 512B. With this line fixed, the rest of the code should be correct as is. That is:
dev->zbd_info.zbd_lblocks = (fdev->zbd_nr_zones * zone_sz) / dev->zbd_info.zbd_lblock_size;
should fix the problem. This also points to a missing check that the zone size aligns to the logical block size of the backing device.
I will push a fix ASAP.
Doesn't that make zbd_lblocks a ratio of 512-sector unit over 4K byte size?
So to truly get the zbd_lblocks you would need:
dev->zbd_info.zbd_lblocks = (fdev->zbd_nr_zones * zone_sz * 512) / dev->zbd_info.zbd_lblock_size;
Thanks
Yes, correct. I was trying to correct a bug with another bug :)
I just pushed some modifications to address the problem, and also to enforce checks for the set zones operation. Please check and confirm that this fixes the problem for you. Everything seems OK on my side with 512n, 512e and 4K drives, as well as with regular files.
I made a comment - I think the zone descriptors also need to be in logical sector size. Right now they are in 512-byte blocks (the zone_sz unit).
https://github.com/hgst/libzbc/commit/700d1b55dd927c5e61af93eebea3f5f46b208c51
You are correct: a disk would use the logical block size (which may not be 512B) as the unit for a zone descriptor's zone start, length and write pointer. But the fake backend directly uses the API-unified 512B sector size in the metadata to avoid conversions between 512B and the logical block size. This is just a simplification of the code, and in no way does it reflect the working of a real disk. This is a "fake" disk after all!
Thank you for looking into the code and helping to improve it.
This still doesn't seem right to me. The sum of the zone descriptor lengths should equal the number of logical blocks; now we're mixing 4K logical blocks in the header and 512-byte sectors in the zone descriptors. Shouldn't we emulate the zone descriptors accurately, in the logical sector size as defined by T10?
The device info structure has the field zbd_sectors, which represents the capacity of the disk in 512B sector units and is equal to the sum of all the zone descriptor lengths. Since v5, all functions of libzbc have been moved to use 512B sector units, regardless of the disk LBA size. The values in the zone descriptors returned by zbc_report_zones() are no exception: the start, length and wp fields of the zone descriptors are always converted to 512B sector units (see for example zbc_scsi.c around line 445 and the use of zbc_dev_lba2sect()).
The fake backend driver's purpose is to implement this API on top of a file or a regular block device, not to perfectly emulate a ZBC disk as defined by T10. Hence the direct use of the 512B sector unit in the zone descriptors rather than the chosen LBA size (which may be arbitrary, or reuse the backing store's LBA size). That is maybe a little confusing, and better comments in the code could avoid this. I will add more explanation.
If you are more interested in a real, full emulation of a ZBC disk (following the T10 specs), I recommend that you take a look at tcmu-runner, which as of a few weeks ago has a ZBC handler to emulate a host-aware or host-managed ZBC disk over a file.
https://github.com/open-iscsi/tcmu-runner
This is a fully compliant emulation, so the kernel SCSI stack sees a well-behaved ZBC disk through the target code connection. Please see the tcmu-runner README file for more information on how to set up the ZBC handler.
Thanks for the thorough explanation. 👍
When using fake disks on a drive with a logical sector size of 4K, the capacity calculation performed during a set zones operation incorrectly uses the 4K sector size instead of 512.