klturi421 opened 6 years ago
It will be the size of the drive.
What OS are you using?
I’m running Linux Ubuntu 18.04 LTS.
Edited to reflect Ubuntu instead of Linux.
He means Ubuntu.
@andlabs, due to my currently restricted disk space, is there a way to split the img and work through it piece by piece, or does it need to be all in one file?
If reallymine supports output to stdout (I don't know if it does), then you could pipe it into something like "split -b 100000000" to make 100MB chunks. But if you break up the image, you will have to put it back together again to mount it. The dmsetup utility can map all the pieces into a virtual complete drive. You'll have to read the manual if you go that route, because I am no expert. Or later, when you have a new disk, you could concatenate the pieces onto the new drive.
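A minimal sketch of the split-and-reassemble idea on a small stand-in file (whether reallymine can write to stdout is unconfirmed above, so this splits an existing file rather than a pipe; on a real image you would use something like `split -b 100000000 disk.img piece.`):

```shell
head -c 1048576 /dev/urandom > disk.img   # 1 MB stand-in for the real image
split -b 262144 -d disk.img piece.        # 256 KB chunks: piece.00 .. piece.03
cat piece.* > rejoined.img                # shell glob order restores the original
cmp -s disk.img rejoined.img && echo "reassembled OK"
```

Concatenating with `cat` requires enough free space for a second full copy; `dmsetup` avoids that by mapping the pieces in place, as noted above.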
I decided to go ahead and get a larger (4TB) drive to copy the image to. I began to decrypt the original drive and noticed that the size of the img stopped growing around 360 GB. Is it normal for (I'm guessing) the decryption to stop or slow down at a certain point?
I don't think so.
I'm currently attempting to decrypt with the binary from issue #38. Is it stable enough for regular use or should I stick to the release from 2016?
I don't know. Ask @andlabs
You've probably found a bug with it if it's not going past 360GB. What do the first 4096 bytes look like?
@andlabs forgive my lack of understanding; as for the first 4096 bytes, is that found by running the `dumpfirst` command?
Yes. Send the output to `xxd` and go up to `00001000`.
I'm sorry to sound like an idiot, but what would the command to run be? I've found a few examples that include `xxd`, but I'm not seeing ones that refer to this particular request.
Would something like this work? `xxd -p -c 16 kb0.bin > kb0.hex`, but replacing `kb0.bin` and `kb0.hex` with `outfile.bin` and `outfile.hex`.
Yes, but without the `-p` and `-c 16`. Then you can open `outfile.hex` as a text file and copy the beginning here, up to the line that begins with `00001000`.
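To illustrate the `xxd` step, a runnable sketch on a stand-in file (substitute the real `dumpfirst` output for the random data): the first 4096 bytes are exactly the 256 hex lines before the one beginning `00001000`.

```shell
head -c 8192 /dev/urandom > outfile.bin     # stand-in for dumpfirst's output
head -c 4096 outfile.bin | xxd > outfile.hex
wc -l < outfile.hex                          # 256 lines, 16 bytes per line
```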
After running `dumpfirst` and `xxd`, here is what I came up with, up until the line `00001000`. Something tells me that from what I'm seeing so far things aren't looking too great, but I may be wrong; I hope I am.
Instead of pasting it, I've decided to upload the hex file due to the length it would have added to this comment. I saved it as `.txt` since the upload does not permit `.hex`.
Also, as of this writing: I started running the previous release (2016) at around 2:30 PM, and as of 12:30 AM it has only decrypted roughly 19 GB. The updated version that runs quicker had completed around 360.3 GB in about an hour. Is it expected that the previous release will take quite a long time to decrypt the 3TB?
Hm, I'm not noticing any obvious bugs in reallymine there, other than what appears to be some undecrypted zeroes around 4400 or so (but those could be correct)... Will definitely have to investigate further when I can.
And yes, the old binary will be slow :( Sorry
I re-ran the concurrent binary all last night and woke up to `error running decrypt: read /dev/sda: input/output error`, and the file size is again 360.3 GB on the nose.
Even though it keeps stopping at 360.3 GB out of 3TB, is it still possible to attempt mounting the file to see if any files can be pulled off?
Sounds like a bad block on the drive. Try github.com/themaddoctor/linux-mybook-tools; there is a PDF there with instructions for doing it in Linux.
And if you have the JMS538S chip, I would like a copy of your keyblock, please.
@andlabs Not trying to steal your guy, but he asked.
Not a problem. I was going to suggest running `badblocks` to see if the drive actually was damaged or not. If it is, you're better off using GNU ddrescue before reallymine.
I did not previously have a backup of the disk, so I decided to go ahead and run ddrescue on the drive to create one. With that in mind, I'm guessing I will likely need a second drive of 3TB or larger to decrypt the information to.
I'm at work at the moment, but I will run `badblocks` on it tonight and upload the results then.
@themaddoctor What's interesting is that I started my journey of attempting to decrypt the drive by using your guide, but ran into a few issues, which is how I ended up with @andlabs's tutorial. The problem I believe I am having is that once I get to the mounting section of your guide, I get an error (going off memory at the moment) along the lines of `mount: wrong fs type, bad superblock`. I've been able to follow your tutorial, and I have the Symwave chip (non-XTS). Would you prefer I start an issue thread on yours as well?
Also so you both are aware, I am fully comfortable with the fact that I may have forever lost the data that is on the drive. I acquired it from my father who had a WD My Book that the USB board stopped working. It was connected to a Windows computer afterwards and a quick format was performed before I got to the drive. At this point, I am merely exploring these options to see what, if anything can be recovered.
Well, if it was formatted, even quickly, that explains why you can't mount it. You have to decrypt it and then do data recovery on the decrypted image. Good luck.
Once I've got a decrypted image, is there any software that is recommended to use to attempt recovery (Windows or Ubuntu)?
Understood. I was just curious and looking to explore any opportunity that I can.
I'm wondering: running the non-concurrent binary of reallymine, what is the average time it takes to decrypt? I'm guessing quite a long time? I've been running it for about 30 hours and have only decrypted 61 GB of 3TB. At the current rate, at least from what I've been able to calculate, it will take ~75 days (3000 GB at ~40 GB per 24 hours ≈ 75 days). Is that right?
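As a cross-check of that arithmetic: the assumed ~40 GB/day does give 3000 / 40 = 75 days, but the measured rate quoted above (61 GB in 30 hours) extrapolates a bit lower:

```shell
# ETA at the measured rate: 61 GB in 30 h over a 3000 GB disk
awk 'BEGIN { rate = 61 / 30; printf "%.0f days\n", 3000 / rate / 24 }'
```

So the realistic range is roughly 61 to 75 days, depending on which rate holds.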
@andlabs I have a ddrescue `.img` of the drive and am now going to attempt to decrypt the file, but I'm a little lost with the commands. From what I have read so far, I will have to use `decryptfile` while also including the `dek` and `steps`. Is there a way to use the concurrent binary to decrypt the file?
You can use the standard `decrypt` command to decrypt those images too.
When I attempted to use the standard `decrypt` command, the file size is 0 and stays at 0. But when I try running `decryptfile`, the file size begins to increase. Could I be doing something wrong?
I run the command as follows: `sudo ~/reallymine decrypt /media/klturi421/Backup/Backup.img /media/klturi421/Backup/Decrypted.img`. I have attempted to use the concurrent and non-concurrent releases, but neither will increase the file size, and no errors show in the terminal.
Due to disk space issues on my backup drive, only 2.7 of 3 TB was able to be copied over. Could this be the reason why the `decrypt` option isn't running on the img?
No, the `decrypt` option is trying to find that key sector. If you have the key manually, you can use `decryptfile`, but it isn't concurrent yet.
(I should probably write all this from scratch again...)
Could it be possible that the key sector was not copied over in that last 0.3 GB?
The key sector is in the last few MB. Yes.
Get the key sector from the original disk.
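One way to grab just the tail of the disk, sketched on a stand-in file ("last 1 MB" is an assumption for the demo; the thread only says the key sector is in the last few MB). On the real disk you would swap `disk.img` for the device node and run with sudo:

```shell
head -c 4194304 /dev/urandom > disk.img        # 4 MB stand-in for the real disk
size=$(stat -c %s disk.img)
dd if=disk.img of=tail.bin bs=1M skip=$(( size / 1048576 - 1 )) status=none
stat -c %s tail.bin                            # size of the extracted tail
```

The extracted tail could then be appended at the right offset to complete the truncated image.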
I was able to "complete" the image and am now able to use the `decrypt` command. After about 10 minutes of running, I am at about 20 GB. I expect it should be finished decrypting around 11:30 AM tomorrow.
The .img has been completely decrypted. At this point, I need to check whether the files can be recovered. Is there any recommended software (Windows/Linux) that can be used to attempt recovery?
@andlabs is there a way to determine if the decryption was successful besides having another .img created?
You can use loopback devices to treat the decrypted image as a disk; Linux should automatically detect all the partitions.
With the ddrescue img, does the USB bridge need to be connected, or can the drive be connected directly to the PC using SATA cables?
Just to lay a few things out: when I have the drive connected via USB, ddrescue stops reading the disk at exactly 360.3 GB (just like when I run the concurrent version of reallymine). When I disconnect the board and connect directly to the PC with SATA, it generates the full image. The concurrent reallymine still will only read 360.3 GB. I forget which image I attempted to recover files from, but using some software on Windows I was able to view a few files. I know there is some hope in here somewhere; I'm just trying to figure out the combination that works.
Currently I have attempted using R-Studio (demo) to read the img files (the ddrescue img and the reallymine-decrypted img of it), but am coming up with unreadable files.
@andlabs - upon some further testing, I was able to determine that the non-concurrent release of reallymine does in fact decrypt the drive. I tested this with the drive connected both via SATA and via USB. I have even run the test on the ddrescue img that I recovered. What I also found is that the concurrent release of reallymine does work faster but does not properly decrypt the drive.
For example, I used DMDE to examine the images that reallymine created and found that the non-concurrent release identifies the FS as NTFS [4K], and we know the drive was factory-set at 4K blocks. When I use DMDE to examine the imgs the concurrent release created, the FS shows as a straight NTFS. This is true over either SATA or USB.
As of right now, I am going to spend the next 2 months running the non-concurrent reallymine to decrypt the 3TB drive and recover the data.
If you do decide to further develop reallymine and to potentially further speed it up, is there anything I can contribute from my disk that may help?
I have a 4TB drive using the JMS538S chip. The latest run I had stopped at 2.6TB after 10 days and 10 hours. It encountered winapi error #8.
I noticed that the maximum data being written to disk was about 5 MB/s on my first run; that's why I set `const NumSectorsAtATime = 5120`, as my previous run stopped at 1.9TB with `const NumSectorsAtATime = 102400` without any error message. It was maxing out my 16GB of RAM as well.
System Specs for reference:
If my memory serves me right, I only used 2TB of the 4TB, so essentially it was already decrypting free space before it stopped.
PassMark OSFMount 2.0.1001 is seeing some partition when I tried mounting the partially completed decrypted IMG.
Will continue to share my experiences with this tool should the author wish to develop it further. For now, I will research winapi error #8. =)
@Groundeffects Can you post your keyblock?
@themaddoctor - I was wondering if this is what you were referring to:
bridge type: JMicron
DEK: F1E53C2287877B6A807B97925D63366547CC4CE1EC96E0E4DA6737429C8EB3E1
decryption steps: reverse decrypt reverse
I meant the sector from the drive that holds the key in an encrypted form.
Referring to issue #45 (Keyblock locations), here's what I got:
Thank you. If it's not too much trouble, could you copy and paste it as text, instead of a graphic?
Here you go:
I enabled virtual memory (pagefile.sys), which I disabled a long time ago to prolong my SSDs' life (I guess my SSDs will even outlive me...). Hope this will fix winapi error #8.
I re-ran the original program with `const NumSectorsAtATime = 102400`. So far, the decryption process is not maxing out my RAM after 8 hours of running.
Noticed that the decryption process is writing to disk at 2-6 MB/s (on average around 250 GB per day). At this rate, I'll be finished decrypting my 4TB disk in 16 days if there are no hiccups.
I am currently in the process of decrypting the drive and generating a .img file. I am aware that the file will grow in size, but I am curious to know whether it will be the same size as my drive or the size of the available data. I ask because I have a 3TB drive that I am attempting to decrypt but do not have another 3TB or larger drive to copy the data to, at least at the moment.
Also, once the .img file has been completed, are there any particular instructions on how to mount the file and view the contents?