PrimoAngelo opened 1 year ago
Good, I'm not going crazy.
Ripping the disc with the number of sectors to read set to the same value as the original ISO results in usable ECC correction data, but if the ECC data was generated on Ubuntu and the disc was ripped on Windows (with dvdisaster), the ECC isn't recognized. (RS03)
I'm switching to RS02 for my backups for now. RS03 needs a little work on the augmented side.
Hello @PrimoAngelo ,
First, you seem to be using an old version of the software (0.79 devel 1); can you update to the latest version to see if your SATA drives are better detected? That being said, I'm not sure I would be able to fix this issue, as I'm not really a Windows guru, and my own 3 SATA drives are indeed detected under W10 AFAICT.
Now, about the size of the BD ISOs. The numbers you're seeing on your blank discs are the capacities after subtracting the space reserved for the "defect management" feature of Blu-rays. This reserved space comes into play in case your blank disc has a few defects: your burner will transparently relocate data to it, instead of failing the burn and giving you a coaster.
In recent versions of dvdisaster, I've added an option to use this space and disable the defect management feature of your discs. This means that if the disc is not perfect, the burn will fail, but if it is, you get some extra gigabytes of dvdisaster ECC data on your disc, as it takes up normally reserved/unused space. I don't think I added this option to the GUI, but it is available in the command-line version as the --no-bdr-defect-management option.
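For a rough idea of what that buys on a 25G BD-R, here is a back-of-the-envelope calculation; a sketch only, assuming the option lets the augmented image target the full unformatted capacity, with the sector counts taken from later in this thread:

#include <stdio.h>

int main(void)
{
    /* Unformatted BD-R 25G capacity vs. the sector count dvdisaster assumes
       by default for this media type (both quoted later in this thread). */
    const long long full_capacity      = 12219392;   /* sectors */
    const long long dvdisaster_default = 11826176;   /* sectors */
    const long long sector_bytes       = 2048;

    long long gained = (full_capacity - dvdisaster_default) * sector_bytes;
    printf("%lld bytes (~%.0f MiB) of extra room for ECC data\n",
           gained, gained / (1024.0 * 1024.0));      /* ~768 MiB (~805 MB) */
    return 0;
}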
@Mo1sture can you explain in more detail what your issue is, maybe opening a new ticket in the process? It seems your issue might be different from the one @PrimoAngelo is talking about.
As I'm just planning to burn a bunch of Blu-Rays, I was following this issue to decide whether to use RS02 or RS03. As a Windows user, I also use ImgBurn and tried to research this reserved space for defect management.
I had no idea about this, but ImgBurn can enable it if you use the Erase Disc feature on a BD-R, as you would on a BD-RE. However, there are usually other options available for the formatting, such as 12088320 sectors - meaning you don't have to lose all reserved sectors even if you do want to use the drive's defect management. In other words, a single on/off switch isn't enough for all use cases.
Here is the thread in which I found this information (it's at the end): https://forum.imgburn.com/topic/19688-imgburn-bd-r-defect-management/
I still have to test burning an ISO after going through this process. ImgBurn's developer speculates there could be a trade-off where defect management lowers your burn quality because the disc spins faster (though it is unclear if every burner will do this - if they don't, the effective burning speed is halved).
Maybe things are different under Linux, but I'd guess a lot of people use ImgBurn on Windows and they'd never take advantage of this defect management feature, since it requires multiple steps (changing a setting AND formatting every disc). If you don't do these things, the burner will not do anything - it will just burn the disc as normal (ImgBurn assumes you'll do software verification, but they have different purposes).
I don't know how Windows itself burns ISOs. However, I did notice that the available space reported by File Explorer increased after using Erase Disc with "Maximum" setting (12088320 sectors).
Currently, RS03 does not fill the medium by default anyway, unless you follow the steps to format the disc and/or use some specific burning solution that does what it expects. Maybe that's the issue @Mo1sture is facing?
Maybe RS03 could use the same UI settings as RS02 unless "Use at most..." is set? I don't know, really.
And another question (somewhat unrelated): the table on the project page says RS03 is more robust against corruption in the correction data, but I'm not sure if that's taking into account the fact that it's only compatible with linear mode. RS02 might be less resilient, but it's compatible with adaptive reading - is it still worse than RS03 even with this advantage?
About the burning speed with or without defect management, I can say that in my experience, burning indeed takes exactly twice as long when defect management is enabled, because the drive immediately tries to read what it has just burned to ensure that it corresponds exactly to what it tried to write. I think this is the same regardless of the OS, because this is handled by the drive's firmware logic directly and not at the software or OS level.
For RS03, the codec's design means that it cannot be given a custom number of sectors (as RS02 can), because the number of sectors is a piece of information that needs to be implicitly known in order to search for and find the RS03 roots when attempting to repair a damaged image. This is, in effect, the "price to pay" for a higher resilience against corruption of the RS03 data itself. This is why the original author hardcoded the number of sectors for each media type for RS03, so that dvdisaster knows which number of sectors was given as input to the RS03 algorithm during the generation phase just by looking at the media type (CD, DVD, BD with 1/2/3/4 layers).
Now, it is technically possible to specify any number of sectors to augment an image with RS03, but you would need to remember this number of sectors, years later, when attempting to verify/repair such an augmented disc, and specify it to dvdisaster to override what it would otherwise assume from the medium type. Otherwise, it won't find the RS03 roots and won't be able to repair anything. One way would be to write the number of sectors that was given to dvdisaster to build the RS03 data directly on the label of the media. Note that overriding the number of sectors manually for RS03 is already possible with the command-line version; this is mainly used for integration tests, but it works, and is undocumented because the consequences really need to be understood.
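To make the "implicitly known" part concrete, here is a tiny illustrative lookup; a sketch only, with made-up names, and the BD values taken from the sector counts quoted later in this thread rather than from dvdisaster's sources:

#include <stdio.h>
#include <stdint.h>

/* Hypothetical media-type enum for illustration; dvdisaster's real sources
   use their own constants. The point is only that RS03 derives the assumed
   image size from the medium type instead of storing it, so a table like
   this has to exist somewhere. */
typedef enum { BD_1_LAYER, BD_2_LAYERS, BD_3_LAYERS, BD_4_LAYERS } bd_media;

static uint64_t rs03_assumed_sectors(bd_media m)
{
    switch (m) {
        case BD_1_LAYER:  return 11826176;   /* BD25  */
        case BD_2_LAYERS: return 23652352;   /* BD50  */
        case BD_3_LAYERS: return 47305728;   /* BD100 */
        case BD_4_LAYERS: return 60403712;   /* BD128 */
    }
    return 0;
}

int main(void)
{
    printf("assumed size for a single-layer BD: %llu sectors\n",
           (unsigned long long)rs03_assumed_sectors(BD_1_LAYER));
    return 0;
}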
About adaptive reading and RS03, that is a valid point. When using RS02, dvdisaster won't try to read more data than is needed on a damaged medium, because it'll stop as soon as enough original sectors + correction data have been read. If the medium is augmented with RS03 instead, adaptive reading won't be able to do that; however, it'll still use the so-called "divide and conquer" algorithm for reading, as opposed to linear reading. It just will NOT automatically stop as soon as enough original sectors + RS03 data have been read. You may, however, stop it manually if you know you already have enough sectors, and directly attempt a verify to see if this is the case. If it's not, you may resume reading in linear or adaptive mode until it is. It's up to you to decide whether RS02 or RS03 is better for your use case. I'll add this to the project page.
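As a side note, the "enough has been read" threshold that intelligent adaptive reading relies on can be sketched roughly like this; an illustration of the Reed-Solomon erasure property, not dvdisaster's actual code:

#include <stdio.h>
#include <stdbool.h>

/* Sketch only: with nroots parity sectors per 255-sector RS02 ecc block,
   erasure decoding succeeds as long as at most nroots sectors of a block
   are still unread/unreadable. "Intelligent" adaptive reading can stop
   once this holds for every block. */
static bool repair_already_possible(const int *missing_per_block, int nblocks, int nroots)
{
    for (int i = 0; i < nblocks; i++)
        if (missing_per_block[i] > nroots)
            return false;   /* this block still needs more sectors read */
    return true;            /* every block recoverable: reading can stop */
}

int main(void)
{
    int missing[] = { 3, 0, 7 };   /* unread sectors per ecc block (example) */
    int nroots    = 32;            /* example redundancy value */
    printf("%s\n", repair_already_possible(missing, 3, nroots)
                       ? "enough read, can stop" : "keep reading");
    return 0;
}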
Thanks a lot for those clarifications!
I think this is the same regardless of the OS, because this is handled by the drive's firmware logic directly and not at the software or OS level.
Sorry, I didn't mean to say that defect management was different under Linux. I was referring to the fact that many Windows users might use ImgBurn with default settings and thus lose those extra gigabytes while having no defect management done by the drive. I think ImgBurn's defaults are not optimal, because "Verify Not Required" won't really do anything if you didn't "format" the disc, and if you did format it, then that setting makes you waste the spare area you just went out of your way to create. As these defaults are unlikely to change, others might have the same question when they see dvdisaster leaving "empty" space for BD-Rs.
That was also related to my speculation regarding the other problem. The project page states that RS03 must fill the medium, so I thought it would cause problems if you didn't use the full size of the BD and also didn't "format" it for defect management. In this situation, the medium won't be full, as there are sectors left over. However, if different software or a different OS burns the medium with the recovery sectors, then it would have filled all available sectors, fulfilling the "must fill the medium" requirement.
But I guess the project page is just stating it in a simple manner that the number of sectors RS03 worked with must be known, so it doesn't really matter if it filled the media as long as you know how many sectors it was working with. (With your explanation, I believe my assumption was entirely wrong. But I thought I'd explain it just in case.)
It's up to you to decide whether RS02 or RS03 is better for your use case.
I would just like to avoid picking a clearly inferior alternative. My reasoning right now is that I would like to use defect management with fewer spare sectors, because if the media failed that badly it's probably better to burn it again anyway. This would also allow for around 700 MB of additional dvdisaster recovery data. I guess there's no way to jump into the future and figure out whether RS03 or RS02 would have worked better for a particular corruption pattern, but as it's easier to set up dvdisaster to use this additional recovery space with RS02, and it keeps the ability to use adaptive reading, maybe RS02 won't be that much worse.
but you would need to remember this number of sectors, years later
This is such a good point! I agree there should be no easy way for most people to change that, since you're effectively losing all recovery data if you forget it.
I'm planning on printing the dvdisaster logo on my discs just to remember they have correction data. I used dvdisaster around 15 years ago, and I had forgotten the name of the software itself. I moved away from optical media for a time, and that was enough to make me forget how I used to protect my discs. Staking all your recovery data on an arbitrary number that you have to remember a decade later is a big risk.
I was using PAR files, which are easy to remember as they would be right in your face inside the disc. But they failed to recover one of my media, so I found dvdisaster again after looking for a better alternative. I was very glad to see it was still alive thanks to your efforts.
But I guess the project page is just stating it in a simple manner that the number of sectors RS03 worked with must be known, so it doesn't really matter if it filled the media as long as you know how many sectors it was working with.
Exactly. I tried to amend the README so that it's clearer, what do you think? https://github.com/speed47/dvdisaster/compare/readmers03
I would just like to avoid picking a clearly inferior alternative. My reasoning right now is that I would like to use defect management with fewer spare sectors, because if the media failed that badly it's probably better to burn it again anyway. This would also allow for around 700 MB of additional dvdisaster recovery data. I guess there's no way to jump into the future and figure out whether RS03 or RS02 would have worked better for a particular corruption pattern, but as it's easier to set up dvdisaster to use this additional recovery space with RS02, and it keeps the ability to use adaptive reading, maybe RS02 won't be that much worse.
Clearly there's no "obviously" inferior alternative; RS02 and RS03 are both good codecs. If adaptive reading is important to you (which is a very good point), then go the RS02 route indeed! I don't think we'll have what I call "intelligent" adaptive reading (auto-stopping when enough data has been read for the repair to be possible) for RS03, as upstream development has stalled, and the original author said that this would require a rework of RS03, to the point that it might have been called RS04.
This is such a good point! I agree there should be no easy way for most people to change that, since you're effectively losing all recovery data if you forget it.
Exactly. I might add the feature to the GUI at some point, but I would have to add a fair number of big warnings so that the user understands that they'll need to write down this number somewhere and be able to specify it to dvdisaster 15 years later when image recovery is needed.
I'm planning on printing the dvdisaster logo on my discs just to remember they have correction data. I used dvdisaster around 15 years ago, and I had forgotten the name of the software itself. I moved away from optical media for a time, and that was enough to make me forget how I used to protect my discs. Staking all your recovery data on an arbitrary number that you have to remember a decade later is a big risk.
Obviously! And the few added megabytes of recovery data would probably not be worth the risk of just not being able to use it later.
I was using PAR files, which are easy to remember as they would be right in your face inside the disc. But they failed to recover one of my media, so I found dvdisaster again after looking for a better alternative. I was very glad to see it was still alive thanks to your efforts.
Well, if you're really paranoid, you can use both (I sometimes do ;) ), and par2 is also an easy and quick way to verify that all the files are readable and correct. In any case, I'm glad ensuring that dvdisaster is still usable in 2024 is of some use to someone :)
I tried to amend the README so that it's clearer, what do you think? https://github.com/speed47/dvdisaster/compare/readmers03
The trade-offs are much clearer now.
The only thing that I think is a bit confusing is the "Speed" metric. On media read, it says intelligent adaptive reading can be up to 90% faster (which I didn't know!), and that it's only for RS02, but then it also says RS03 is faster on modern hardware as it's multicore. As I understand, this is looking at each step (processing speed and read speed) as separate, but recovering damaged media will always require both, right? Even if you have the recovery data elsewhere, you still have to read the media.
Perhaps RS02 deserves one extra "Speed" star for that (just because it looks weird that it can allow for 90% faster read yet only get one star)? Or if you have some anecdotes for "Total Recovery Time" for all three algorithms, that could be a helpful substitute so "Speed" isn't just about processing. I realize this is tricky as comparing different systems and different damaged media is nearly impossible, but I can't think of a better way to illustrate the speed differences.
As we're talking about the future, I also think CPUs in 10-15 years are likely to be faster, while my BD readers will not... and are far more likely to be more difficult to replace when they break, so I want to avoid wear on them as much as possible. If I am correct in assuming that RS02 could cause less wear on my drive thanks to its intelligent reading, then these changes are good, because I had no idea this could be the case! :)
Well, if you're really paranoid, you can use both (I sometimes do ;) ), and par2 is also an easy and quick way to verify that all the files are readable and correct.
Should I be embarrassed that I just use SFV/CRC32 for verification? I figured that, since I'm not actually fighting an adversary that might want to poison the data without changing the checksum, that this was enough...
The only thing that I think is a bit confusing is the "Speed" metric. On media read, it says intelligent adaptive reading can be up to 90% faster (which I didn't know!), and that it's only for RS02, but then it also says RS03 is faster on modern hardware as it's multicore. As I understand, this is looking at each step (processing speed and read speed) as separate, but recovering damaged media will always require both, right? Even if you have the recovery data elsewhere, you still have to read the media.
Ah I see, this can be confusing indeed. What was meant here by the "speed" metric is only the computational time it takes to build the RS0x correction data from the (supposedly already read) image data to protect. So this is purely about the CPU time it takes, and only for the creation of the corrective data, not at all for the verify/recovery. I'll amend that.
About the 90% faster adaptive reading: this is what the original author's PDF says where the functionality is described. To be honest, this is probably the best-case scenario, where you have a damaged medium that your drive has a hard time reading, sector by sector. In that case, with RS02, adaptive reading will not attempt to read more data than what is strictly required to repair the image: as soon as you've crossed the threshold of good sectors + RS02 data sufficient for a repair, it stops reading altogether and hands off to the CPU to rebuild/repair the image from the successfully read data + successfully read corrective data, cutting out all the time it would have taken to read the rest of the disc (or at least the remaining sectors your drive can read). When you have a damaged medium, reading sectors is so slow that giving up at the earliest possible moment and rebuilding from the corrective data is always several orders of magnitude faster than attempting to read the whole disc and possibly not having to repair at all. I'll try to find a way to explain that with fewer words :)
Perhaps RS02 deserves one extra "Speed" star for that (just because it looks weird that it can allow for 90% faster read yet only get one star)? Or if you have some anecdotes for "Total Recovery Time" for all three algorithms, that could be a helpful substitute so "Speed" isn't just about processing. I realize this is tricky as comparing different systems and different damaged media is nearly impossible, but I can't think of a better way to illustrate the speed differences.
Yes, separating "corrective data generation speed" and "repair speed" (taking into account the potentially long time to read a damaged media) might be a good idea to avoid the confusion.
As we're talking about the future, I also think CPUs in 10-15 years are likely to be faster, while my BD readers will not... and are far more likely to be more difficult to replace when they break, so I want to avoid wear on them as much as possible. If I am correct in assuming that RS02 could cause less wear on my drive thanks to its intelligent reading, then these changes are good, because I had no idea this could be the case! :)
That is an excellent point, assuming that you'll use adaptive reading with RS02. Note that you can still use adaptive reading with RS03 and roughly calculate yourself the number of sectors that might be enough, then hit the stop button when it seems right (e.g. if you know your image file is 20G and you added 5G of corrective data, you know that you should only have to read ~20G of data + corrective data to be able to rebuild the original data, maybe a bit more depending on your "luck"). Then hit the repair button and see whether dvdisaster says it can recover the data or not. If not, you can resume reading for a bit, hit stop again, attempt a repair, and so on until it works. This is still less efficient than having dvdisaster do it for you, as it does with RS02, obviously.
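A minimal sketch of that rule of thumb, working the 20G example above; illustration only, since actual recoverability also depends on how the unreadable sectors are spread across the RS03 ecc blocks, which is why the verify attempt is still needed:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
    /* Rule of thumb: ~20G of original data plus ~5G of RS03 data means
       roughly 20G worth of successfully read sectors (data + ecc combined)
       should be enough to attempt a repair. Heuristic only. */
    const uint64_t gib            = 1024ULL * 1024 * 1024;
    const uint64_t data_bytes     = 20 * gib;
    const uint64_t sector_bytes   = 2048;
    uint64_t rough_stop_threshold = data_bytes / sector_bytes;

    printf("hit stop once roughly %llu sectors have been read successfully\n",
           (unsigned long long)rough_stop_threshold);
    return 0;
}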
Should I be embarrassed that I just use SFV/CRC32 for verification? I figured that, since I'm not actually fighting an adversary that might want to poison the data without changing the checksum, that this was enough...
Honestly, you should not. Using dvdisaster is already a sign that you're somewhat paranoid about your data (how many people used it even back in the optical media era?). SFV-like verification is enough at the file level, and if there's damage, you can use dvdisaster to repair your image, assuming you augmented it beforehand of course. I was just saying that to point out the difference between par2 (file-level) and dvdisaster (image-level), and that theoretically you can use both, even if it's probably not very efficient (dvdisaster will add correction data to protect the par2 correction data... which is less efficient than having a smaller image without par2, leaving room for a higher dvdisaster redundancy setting).
Amended the comparison table, hopefully this is a bit clearer without being too complicated https://github.com/speed47/dvdisaster/tree/readmers03?tab=readme-ov-file#mag-comparison-table
Amended the comparison table, hopefully this is a bit clearer without being too complicated https://github.com/speed47/dvdisaster/tree/readmers03?tab=readme-ov-file#mag-comparison-table
I think it's perfect now. Although this may seem like a bit of "information overload" - as the table grew considerably in size - I believe, as you noted, that most people looking into dvdisaster are more paranoid/careful than most and will appreciate this level of depth.
Thanks a lot for hearing out my suggestions and concerns.
Note that you can still use adaptive reading with RS03 and roughly calculate yourself the number of sectors that might be enough, then hit the stop button when it seems right (e.g. if you know your image file is 20G and you added 5G of corrective data, you know that you should only have to read ~20G of data + corrective data to be able to rebuild the original data, maybe a bit more depending on your "luck").
I thought about this point, but then I also thought that I might not remember how much data is on the disc (assuming a really badly damaged one). If you can read the disc well enough to enumerate the files but can't read some of them (a more common scenario, really), estimating this is probably very easy. In that regard, RS03 and RS02 are very similar.
I think my personal decision right now will be to use RS02 for BD-Rs and RS03 if I decide to use some double-layer DVDs, mostly because of how many more DVD drives are available if I need some extra wear to get the data back, and because DVDs don't have this whole formatting business.
To get more on topic, I have also confirmed that "Erasing" BD-Rs in ImgBurn with "Maximum" spare areas enables the drive to use 12088320 sectors for BD-R, 24307712 sectors for BD-R DL and 48854016 for TLs while still remaining compatible with defect management. If my math is correct, this allows for 1.2 GB of additional correction data in double-layer discs and around 2.8 GB for triple-layer discs using custom sizes in RS02.
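A quick check of that arithmetic; illustration only, assuming the baseline is the default sector counts quoted elsewhere in this thread, with the differences simply multiplied by 2048 bytes per sector:

#include <stdio.h>

int main(void)
{
    /* Extra sectors gained by formatting to the "Maximum" spare-area option
       versus the default sector counts quoted in this thread. */
    long long dl = (24307712LL - 23652352LL) * 2048;   /* BD-R DL */
    long long tl = (48854016LL - 47305728LL) * 2048;   /* BD-R TL */

    printf("DL: %lld bytes (~%.2f GiB)\n", dl, dl / (1024.0 * 1024.0 * 1024.0));
    printf("TL: %lld bytes (~%.2f GiB)\n", tl, tl / (1024.0 * 1024.0 * 1024.0));
    return 0;
}

That lands at roughly 1.25 GiB for DL and 2.95 GiB for TL, in the same ballpark as the figures above.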
This appears to be a standard, too. For DLs, I have CMCMAG-DI6-000 and MEI-T01-001, and they both have the same formatting options. Same for the 25G BD-Rs from Ritek and CMCMAG-BA5. I guess a possible improvement - instead of "custom" sizes - would be to have a toggle between these formatting options, as you do in ImgBurn (Preferred (Default)/Minimum/Maximum/No Spare Area), and then have a command-line-only override. At worst I guess you would have to try all four options if you forgot? I think this is strictly a problem for BDs.
I don't know how much work would be required to implement that, but if you ever decide to tackle this, maybe that's the way. I'm happy enough using RS02, even implementing my own multi-core solution in the form of running multiple dvdisaster instances. :)
This is the reference from my media:
CMCMAG-BA5-000
FT: 0x32 - NB: 11826176 (0x00B47400) - TDP: 0
FT: 0x32 - NB: 5796864 (0x00587400) - TDP: 0
FT: 0x32 - NB: 12088320 (0x00B87400) - TDP: 0
CMCMAG-DI6-000 50G
FT: 0x32 - NB: 23652352 (0x0168E800) - TDP: 0
FT: 0x32 - NB: 11200512 (0x00AAE800) - TDP: 0
FT: 0x32 - NB: 24307712 (0x0172E800) - TDP: 0
MEI-T01-001 50G
FT: 0x32 - NB: 23652352 (0x0168E800) - TDP: 0
FT: 0x32 - NB: 11200512 (0x00AAE800) - TDP: 0
FT: 0x32 - NB: 24307712 (0x0172E800) - TDP: 0
VERBAT-IMk-000 100G
FT: 0x32 - NB: 47305728 (0x02D1D400) - TDP: 0
FT: 0x32 - NB: 30790656 (0x01D5D400) - TDP: 0
FT: 0x32 - NB: 48854016 (0x02E97400) - TDP: 0
Thanks for dumping the different format capacities available for your media; this is interesting and shows that for BD-R(E), the number of sectors is really not that standard. In another recent ticket (#97), another user reports yet other numbers of available sectors:
CMCMAG-CN2-000 (BD-RE, 25G)
DT: 0x02 - NB: 12219392 (0x00BA7400) - TDP: 0
FT: 0x00 - NB: 11826176 (0x00B47400) - TDP: 12288
FT: 0x30 - NB: 11826176 (0x00B47400) - TDP: 12288
FT: 0x30 - NB: 11564032 (0x00B07400) - TDP: 20480
FT: 0x30 - NB: 12088320 (0x00B87400) - TDP: 4096
FT: 0x31 - NB: 12219392 (0x00BA7400) - TDP: 2048
For RS03, defaulting to the lowest common denominator seems to be a good idea, to avoid frustration and requiring users to have a deep understanding of the formatting mechanics of BD-R(E), at the cost of sacrificing possibly a few gigabytes of additional ECC data that could have been added with some discs.
Your idea of teaching dvdisaster, at least for the verify/repair phase, all the "seen in the wild" numbers of sectors per media type seems like a good idea, but the problem is that RS03 would seem to work in the verify phase as long as the roots are found, BUT will FAIL in the repair phase should a repair be needed. I had forgotten that, but luckily my past self left a note in the code about this issue:
So, this can really be nasty if you forget the number of sectors you've used (writing it down with a marker on the media itself is probably a good idea). Well, for now this can't be customized, and this complexity is probably better left to adventurous users who understand the implications, I suppose.
EDIT: This is actually only a problem if the metadata of the image is damaged, which means that there is no way to understand the structure of the image itself. This is where the high resiliency of RS03 kicks in, and in this precise case we need to know how many sectors long the image was, so we can infer the location of our ecc data (or, more accurately, the number and size of the slices we used to compute the correction data). So, this might not be as dramatic as it sounds.
The 11564032 value appears to be the outlier, as 12219392 is the "unformatted" capacity. It's also way more reasonable than the 5796864 value for [my] BD-Rs. Still, it's enough to introduce doubts as to how "standard" these sector counts are, and I agree that sticking with reliable defaults is the better alternative -- especially for code that you expect to save you when things are already failing.
It's only for the 100 GB discs that I think the loss is quite severe - perhaps because they thought it was better to have a larger spare area for these discs since each layer is 33 GB. Nevertheless, my USB burner doesn't want to touch these discs above 2x, so burning with defect management takes 7 hours at effective 1x speed, and I don't plan on getting these discs again after mine are used up.
I also somehow think this is all fascinating.
EDIT: This is actually only a problem if the metadata of the image is damaged, which means that there is no way to understand the structure of the image itself.
A bit of a similar situation to what I mentioned above regarding adaptive reading, perhaps? If your media is so damaged that you don't know how much you need to read with RS03, RS02 probably would have failed you as well!
7 hours is indeed quite long! It's probably not a bad idea to just give up on defect management in that case and use dvdisaster to augment the image instead, but as you said, with the default params, you would be missing out on a few gigabytes.
A bit of a similar situation to what I mentioned above regarding adaptive reading, perhaps? If your media is so damaged that you don't know how much you need to read with RS03, RS02 probably would have failed you as well!
You're not wrong. Actually, I just took a look at the code that performs the exhaustive RS03 search (when the "normal" and fast method has failed because the image is very broken), and the original author left an interesting note, with commented-out and unfinished code (omitted below):
Verbose("-- whole medium/image scanned; %d layers remain untested\n", untested_layers);
Verbose("-- giving now up as ecc-based search is not yet implemented\n");
free_recognize_context(rc);
return FALSE;
/*
* TODO: Assemble all ecc blocks and see whether the error corrction
* succeeds for a certain number of roots
*
*/
I've patched the code a bit (not the unfinished one, the current one) to make it try all the possible layer sizes, and it's able to test more than 100 per second, so this means trying all the possible medium sizes (in increments of 1 sector, 2048 bytes), from 0 to the size of a BDXL (128G), in half an hour. The code can probably be optimized to cut this down to minutes or even seconds. So, this would entirely remove the "must not forget the medium size when using RS03 for repair" constraint, which would be good. This would probably be less efficient than the code the original author started to write and never finished, but given our current CPUs, it's probably not really that much of a problem.
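For what it's worth, the brute force is conceptually just a loop over candidate sizes; a rough sketch of the idea, where rs03_roots_found_for() is a made-up stand-in for dvdisaster's actual recognition routine, not its real API:

#include <stdint.h>

/* Hypothetical stand-in: in the real code this would attempt to locate the
   RS03 roots under the assumption that the medium is 'sectors' sectors long. */
static int rs03_roots_found_for(uint64_t sectors)
{
    (void)sectors;
    return 0;   /* placeholder so the sketch compiles */
}

/* Sketch of the brute force: walk every candidate size in 1-sector steps up
   to the largest medium of interest (a 128G BDXL here) and stop at the first
   size for which the RS03 roots are recognized. */
static uint64_t brute_force_medium_size(uint64_t max_sectors)
{
    for (uint64_t sectors = 1; sectors <= max_sectors; sectors++)
        if (rs03_roots_found_for(sectors))
            return sectors;   /* this was the size used at creation time */
    return 0;                 /* nothing recognized */
}

int main(void)
{
    return brute_force_medium_size(62500864) ? 0 : 1;   /* 62500864 = BD128 sectors */
}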
I've patched the code a bit (not the unfinished one, the current one) to make it try all the possible layer sizes, and it's able to test more than 100 per second, so this means trying all the possible medium sizes (in increments of 1 sector, 2048 bytes), from 0 to the size of a BDXL (128G), in half an hour. The code can probably be optimized to cut this down to minutes or even seconds. So, this would entirely remove the "must not forget the medium size when using RS03 for repair" constraint, which would be good.
That sounds great! I think, as long as the search for "known" values is prioritized before trying to "brute force," it will still be really fast in most cases.
I'm assuming you made this change to fix the issue in #97, but that ticket now gives me a 404 error.
It's probably not a bad idea to just give up on defect management in that case and use dvdisaster to augment the image instead, but as you said, with the default params, you would be missing out on a few gigabytes.
I ended up burning everything with defect management (so at an effective 1x) with either RS02 or RS03. RS02 was used on most discs, but I decided to go with RS03 on a few that had more free space (like a 33G image burned to a DL disc), as I figured there was already so much correction data that the extra bits were less helpful than the resilience.
For TLs, though, I used RS02 with 48854016 sectors. I usually didn't have enough room to give up nearly 3 gigabytes of correction data - it's why I was using a TL, after all.
I have a couple of problems. The first is that none of my internal SATA drives are visible in the program (under W10), while they are correctly recognized by other burning or ripping programs. All USB drives are recognized. The second is the incorrect "automatic" RS03 disc size for BDs (with RS02 I can select the correct size manually). For example, with a 25GB BD reporting Free Sectors: 12.219.392 and Free Space: 25.025.314.816 bytes, the program generates only an 11.776.657-sector ISO, basically wasting about 805MB of free space on the disc. Is there a fix for that?
Difference in size:
I haven't tried yet with BD50/100/128GB to check whether the size is also incorrect. Tried with a BD50: same problem, it uses the "default" sector number. So the image is 1803MiB smaller in the initial "preview" and ends up 1451MiB smaller with the final files.
For reference, the numbers I have with all my Blu-rays are:
BD25: Free Sectors: 12.219.392 Free Space: 25.025.314.816 bytes (default dvdisaster sector value 11826176)
BD50: Sectors: 24.438.784 Size: 50.050.629.632 bytes (default dvdisaster sector value 23652352)
BD100: Sectors: 48.878.592 Size: 100.103.356.416 bytes (default dvdisaster sector value 47305728)
BD128: Free Sectors: 62.500.864 Free Space: 128.001.769.472 bytes (default dvdisaster sector value 60403712)
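For a rough idea of how much space those defaults leave unused on each type, a quick illustrative calculation from the sector counts just listed (differences multiplied by 2048 bytes per sector):

#include <stdio.h>

int main(void)
{
    /* Free sectors reported for each blank vs. the default dvdisaster sector
       value, both taken from the list above. */
    struct { const char *type; long long free_sectors, dvdisaster_default; } bd[] = {
        { "BD25",  12219392, 11826176 },
        { "BD50",  24438784, 23652352 },
        { "BD100", 48878592, 47305728 },
        { "BD128", 62500864, 60403712 },
    };

    for (int i = 0; i < 4; i++) {
        long long unused = (bd[i].free_sectors - bd[i].dvdisaster_default) * 2048LL;
        printf("%-5s: %lld bytes (~%.2f GiB) unused\n",
               bd[i].type, unused, unused / (1024.0 * 1024.0 * 1024.0));
    }
    return 0;
}

The BD25 difference comes out to the ~805MB mentioned earlier, and the BD128 one to a flat 4 GiB.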
The only BD128 ever in production is SONY-NQ1-001 (found in 3 packages: Optical Disc Archive, Consumer 25 cakebox and Enterprise 25 cakebox, all with the same Disc/Media ID).
Thank you