1000001101000 / Debian_on_Buffalo

Tools for Installing/Running Debian on Buffalo ARM based Linkstation/Terastation/Kurobox/Cloudstor devices.

unbootable: LS441DE - Host unreachable after installation and reboot #111

Open daya opened 3 years ago

daya commented 3 years ago

Hi, I followed the instructions from here and loaded the installer using the stock firmware.

I have 4 drives, two 2TB and two 500GB, arranged as 2TB, 500GB, 2TB, 500GB in the LS441DE. The device is connected to the router via an Ethernet cable, but the router does not show this device's MAC address.

I created the partitions as recommended. I don't have any pictures of the partition layout to show, but the installer was showing them as RAID1.

I am not sure if /boot being ext4 is the problem?

[screenshot]

Is there a way to restore this to factory defaults, or to boot from USB with an ISO?

Any help is much appreciated. Thank you in advance.

daya commented 3 years ago

Is there a way to install the stock firmware in my current state?

There doesn't seem to be an option to boot from a USB drive; any idea what my options are at this point?

1000001101000 commented 3 years ago

Good to hear from you!

/boot has to be ext3 for the bootloader to load boot files from it. That’s probably why it’s failing to boot.
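
A quick way to check, as a sketch assuming the boot partition shows up as /dev/sda1 (an assumed name, confirm with fdisk -l):

```sh
blkid /dev/sda1
# e.g. /dev/sda1: UUID="..." TYPE="ext4"   <- must be TYPE="ext3" to boot
```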

You might be able to get back to stock quickly via the quick setup procedure: https://www.buffalotech.com/knowledge-base/Initial-Setup-of-LinkStation-421DE-or-441DE-Diskless-Enclosure

You may need to blank the drives first; it's been a while since I did that.

daya commented 3 years ago

Hi @1000001101000, thanks for your quick reply. I followed those steps just before I got your reply and also tried the factory reset method, but neither worked. In each case I waited over 20 minutes for the reset/restore to finish and the LED lights to become steady. The router doesn't show that device as connected to the LAN, and I can't ping it either.

Should I try taking out the drive and manually partitioning it with /boot as ext3? Do you think that will work? Any other ideas?

I am assuming the stock firmware initrd and uImage are long gone by now, along with their backups, which were just copies on the same filesystem.

daya commented 3 years ago

Update:

I took out all the drives and restarted the NAS, and automagically it connected to the router and I am able to ping it.

Now I am trying to attempt the process all over again. Will report back.

1000001101000 commented 3 years ago

Nice!

Let me know how it goes.

daya commented 3 years ago

Update:

I am able to telnet into the NAS and log in as root without a password. But there were no bootable images in /boot, so I copied them from Debian_on_Buffalo/Buster/installer_images/armhf_devices and rebooted.

But that didn't work, i.e. no SSH port was opened, so I couldn't log in as installer to start the install process. So I telnetted in again and saw that /boot is empty again, even though I had run sync.

1000001101000 commented 3 years ago

Sounds like you're in EM mode; that runs from RAM, so /boot isn't persistent. I'm guessing your boot partition is still ext4 and needs to be reformatted. You could try doing that from within EM mode using mkfs.ext3, or use LSUpdater to force a full stock reformat/reinstall.
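
Something like this from the EM mode shell, as a minimal sketch assuming the boot partition shows up as /dev/sda1 (verify with fdisk -l first):

```sh
# Reformat the boot partition as ext3 so the bootloader can read it.
# /dev/sda1 is an assumed device name; this destroys whatever is on it.
umount /dev/sda1 2>/dev/null   # ignore the error if it wasn't mounted
mkfs.ext3 /dev/sda1
mount /dev/sda1 /mnt           # then re-copy the installer images and sync
```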

daya commented 3 years ago

> or use LSUpdater

Where can I find that?

daya commented 3 years ago

Another update:

The moment I put any drive in the bay and reboot (because I don't know how to auto-detect the insertion of a new drive), the system doesn't boot at all and I am forced to go back to EM mode.

1000001101000 commented 3 years ago

LSUpdater is part of the firmware updater, which you can get from Buffalo's site: http://buffalo.jp/support_ap/support/products/ls441de.html

1000001101000 commented 3 years ago

You would follow this process to get back to the stock firmware (with an appropriate /boot), then start the Debian install over from there: https://buffalonas.miraheze.org/wiki/Restoring_Stock_Firmware_without_TFTP

daya commented 3 years ago

So does this mean I won't be able to proceed with the Debian installer unless I restore the stock firmware?

To manually create /boot as ext3 I still need to mount a hard drive, but I am not sure how to detect its presence in the bay, as lsblk doesn't list the hard drive.

daya commented 3 years ago

> LSUpdater is part of the firmware updater, which you can get from Buffalo's site: http://buffalo.jp/support_ap/support/products/ls441de.html

And this requires a Mac or Windows machine to start the restore process, right?

1000001101000 commented 3 years ago

You don't need to start from stock; you could just make an ext3 partition on one of the drives (connected to a Linux machine). The GitHub wiki page talks about that option. Many find it easier to start from the stock firmware, but it certainly isn't required.

LSUpdater also works using Wine on Linux. That's how I typically use it.

1000001101000 commented 3 years ago

I have a note about some wine options here: https://buffalonas.miraheze.org/wiki/Enable_Debug_mode

daya commented 3 years ago

Update:

I was able to partition the 2TB HDD using a SATA connector and GParted, so now /boot is ext3, and I was able to boot into it and install Debian Buster successfully. After all that I am stuck at this:

[screenshot]

So how do I solve the above?

After I solve that, I still have the issue of unused space on drive 1; how do I make it part of some RAID array? I have 2x2TB and 2x500GB drives, and the first is currently the bootable one.

@1000001101000 thank you so much for your tips, they were very helpful indeed. I understand the above question goes beyond the scope of this issue, but any pointers would be much appreciated.

daya commented 3 years ago

Update:

I was able to mount all 3 drives via SATA and create GPT partitions on them, then put them back in the NAS. The system booted fine and the disks are recognized by fdisk. I then created an ext4 filesystem on each of the 3 disks (and on the 4th, only on the free space).

But I still have to set up a RAID array, and I think I missed a step during install where mdadm would be used to set up RAID. Can I set up RAID post-install, given the current drives and partitions?

[screenshot]

daya commented 3 years ago

Update:

With the above setup (no RAID yet), the system sometimes tries to boot from the 2nd drive, on which there is no boot partition. Is this because I did not set up RAID1 or RAID6, and on boot the system tries to find /boot on the smaller drives sdc and sdd?

I am kinda lost here, any ideas?

1000001101000 commented 3 years ago

I've gotten a little confused about which questions are still active, so I'll try to answer all the current ones. Let me know if there are more.

These devices are... a little weird about how they decide which disk to boot from at startup. Since we don't have access to the bootloader source code, or even console output for this device, we don't really know the details. Based on what I've seen from older devices, they seem to look at the disk partition tables for a signature indicating whether they were created by the stock installer for that particular device, look at the timestamps on the boot files, and make some sort of decision about which to boot. On newer devices like this one it seems to go out of its way to choose the wrong disk. In this case I think what is happening is that it scans "partition 1" on each disk, but the scan crashes because some of the partitions are too big for it to scan (the limit is just a few GB).

How you move forward depends on how you want your device to work when you're done.

My typical recommendation is pretty much the same as what the stock firmware uses: RAID1 for /boot, rootfs, and swap, each spanning all the disks. In that case you should probably wipe all the disks except the one you boot the installer from, then set up the RAID arrays within the installer so that everything gets configured as desired. Or wipe the disks, let the device do the partitioning and build the RAID arrays as part of the stock install process, then overwrite them with Debian.

If you want to have Debian running on a single disk with other disks attached you'll need to make sure the other disks don't have a "partition 1". I've had another user successfully run with that configuration by just creating their partitions starting with "partition 2" via fdisk/etc.
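
For example, here is a sketch using sgdisk instead of fdisk, with /dev/sdb as an assumed name for one of the extra disks:

```sh
# Create a single data partition numbered 2, leaving no "partition 1"
# for the bootloader to scan. /dev/sdb is an assumption.
sgdisk --new=2:0:0 --typecode=2:8300 /dev/sdb   # partition 2, all free space
mkfs.ext4 /dev/sdb2
```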

daya commented 3 years ago

Update: (on renumbering the partition option)

I was able to renumber the partition on the other 3 disks to number 2, with ext4 on them; upon reboot things are working fine now with all the drives in.

Now comes the hard part of setting up the correct RAID from my current state.

[screenshot]

For that I am thinking of following this blog post, but Debian Buster doesn't have sfdisk for me to clone the partitions, which I assume is necessary for RAID1. Am I right?

On the other hand I am leaning towards RAID10, but I understand I may have to have RAID1 for /boot, rootfs, and swap. Is it possible to have a bootable RAID10, or is RAID1 my only option?

1000001101000 commented 3 years ago

The bootloader (uboot) doesn't understand RAID; it can only work with bare partitions or with RAID1 using metadata version 0.9. In that configuration the RAID metadata is at the end of the disk, so from uboot's standpoint RAID1 and bare partitions are basically the same (this works since it only reads from the disks). This is why RAID1 is the only real option for the boot portion.

I recommend RAID1 for swap and rootfs as well, since you should then be able to boot off any single disk, which is handy when messing with RAID changes on your data volumes. You can use a different RAID level on your data volumes as desired.
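
For reference, a minimal sketch of creating such a boot array (device names and the md number are assumptions):

```sh
# RAID1 with legacy 0.9 metadata: the metadata sits at the end of each
# member, so to uboot each member looks like a plain ext3 partition.
mdadm --create /dev/md0 --level=1 --metadata=0.9 \
      --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mkfs.ext3 /dev/md0
```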

Trying to implement RAID on a running system that is non-RAID is a pain. That guide does seem to cover a lot of the basics, which should work for setting up your data volume, though the system volumes would give you trouble unless you generate a new initramfs that includes the changes to fstab/mdadm.conf etc. Setting them up within the Debian installer simplifies a lot of that since it will handle all that configuration for you.
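
On Debian the configuration step described above usually boils down to something like this sketch:

```sh
# Record the new arrays so they assemble at boot, then rebuild the
# initramfs so the early-boot environment knows about them.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u
# ...and update /etc/fstab to mount the new /dev/mdX devices.
```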

For cloning partition tables I typically use gdisk. Actually I use gdisk for basically everything.
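
With sgdisk (the scriptable companion to gdisk, same package), cloning a partition table looks roughly like this; /dev/sda as source and /dev/sdb as target are assumptions:

```sh
# Note the argument order: the target disk is the -R argument,
# the source disk comes last.
sgdisk -R /dev/sdb /dev/sda   # replicate sda's table onto sdb
sgdisk -G /dev/sdb            # randomize GUIDs so the disks stay distinct
```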

Since you seem to have some Linux skill already, you might consider using this method: https://github.com/1000001101000/Debian_on_Buffalo/wiki/Alternate-install-method-via-debootstrap-script

That allows you to generate a disk image with the RAID partitions already built for you, which you can then boot and extend to additional disks. The wiki page includes some details about copying partition tables and that sort of thing as well.

daya commented 3 years ago

Update:

I was finally able to reinstall Debian with a bootable RAID1 without restoring the stock firmware. So here is my current state: I have 4 SATA drives (2TB, 0.5TB, 2TB, 0.5TB), a total of 5TB.

[screenshot]

The problem I still have (and had during install) is the seriously low amount of space available for actual backups :frowning_face:

Here are some screenshots of the problems I faced during install

So my questions now are

@1000001101000 thanks for your help in this long issue and I understand the above questions are beyond the scope of this issue but any pointers will be much appreciated.

1000001101000 commented 3 years ago

RAID1 and RAID5/6 generally require that all the drives are the same size; if they differ, the array size will be set to match the smallest drive. They also don't support being expanded... even if you replace all the drives in a RAID1 with bigger drives, the array will remain the same size.

All of the above is very frustrating.

You could probably add both sets of matching drives into RAID1 arrays and then combine the arrays into a big RAID0 and get the desired mix of size and fault tolerance. It wouldn't support future expansion though.
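
As a sketch with mdadm, partition names assumed to follow the 2TB/500GB/2TB/500GB bay order described above:

```sh
# Mirror each matched pair, then stripe the two mirrors together.
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda3 /dev/sdc3  # 2TB pair
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sdb3 /dev/sdd3  # 500GB pair
mdadm --create /dev/md3 --level=0 --raid-devices=2 /dev/md1 /dev/md2    # stripe
mkfs.ext4 /dev/md3
```

md's RAID0 handles unequal members, so the mismatched mirrors combine to roughly 2TB + 500GB = 2.5TB usable, and each pair still tolerates a single disk failure.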