thea1ien opened 5 years ago
The debootstrap process is pretty obsolete at this point. I think I made it sort of work with Stretch with a massive set of workarounds but ultimately the firmware just can't handle a modern version of debootstrap anymore.
There's not much reason to want to do it that way anyway with the installer images available. The ls-wxl images should work fine (I think I did it with mine at one point).
Make sure you're not using the testing or daily images; those tend to have problems. Here is a link to the current stable images: http://ftp.nl.debian.org/debian/dists/buster/main/installer-armel/current/images/kirkwood/network-console/buffalo/ls-wxl/
If you still get errors let us know what the errors are, odds are we can help.
I've tried now 2 times, using the stable image as provided. First off, I have a spare drive I imaged from a drive I use in bay/slot #2 (in case trying this fails to work). The way the Buffalo method works, this disk expects to boot in bay#2. I'm doing this using only this drive (bay #1 is empty).
Anyway, I'm removing and slaving this drive, and placing the 3 image files (replacing 2 that already exist) in md0 on the drive (I use PartedMagic to mount the drive and put the image files in). I'm pretty sure this isn't a problem. When I reboot with the drive, I can use SSH under the installer account as documented. Everything appears to go well, I choose the default/recommended install choices, and I can see it download packages, partman runs and shows the partition scheme it will use, etc.
However, once it completes and gives me the option to reboot, it reboots the unit but just boots back into the installer image I placed on, instead of booting into a new install environment. I'm not sure why it fails to boot into the new installation like it should.
Any thoughts? If there should be any logs created I should look at, please toss me a pointer.
Also, the instructions mention running a second SSH shell in order to change the metadata. After I had this issue on my first attempt, I tried to follow those steps. However, if I connect with a second SSH session, the first session halts setup with an out-of-memory error. My guess is the Buffalo doesn't have enough physical memory to allow a second SSH session while setup is running, and the installer image probably isn't using a swap partition.
That makes sense.
Partman isn't aware of the requirements for setting up these devices, so using the recommended settings will always fail. You need to manually specify your partitions and ensure that md0 (or sda1) is set as /boot. You also need to ensure md0 uses metadata v0.90; if it was created by the Buffalo firmware it will already be correct, but partman/modern distros won't use that version by default.
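If you want to confirm the metadata version from the installer's shell before rebooting, something like this should work (a sketch; it assumes the array is assembled as /dev/md0):

```shell
# Check the superblock metadata version of the boot array
# (assumption: the array is assembled as /dev/md0).
mdadm --detail /dev/md0 | grep -i version
# A u-boot-bootable array should report: Version : 0.90
```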
I've never run into that memory issue opening a second shell. I've recently done it on the ls-gl and TeraStation II Pro, which both also have 128 MB of RAM. One thing that may help is setting up the swap in partman (you should do that regardless).
Let me know if you still have trouble, I have a cx-wxl I could pull out and experiment with if needed.
I gave it another try tonight. First I used GParted to take an inventory of the drive partitions (Buffalo firmware setup):
- sda1 = boot? (1 GB) -> md0
- sda2 = / (root) (4.77 GB) -> md1
- sda3 = 1 MB
- sda4 = 1 MB
- sda5 = swap (1 GB) -> md10
- sda6 = data storage -> md22
Obviously md0, md1, md10, and md22 are the RAID arrays for the corresponding drive partitions. I assumed sda1/md0 would be boot. I am not sure what sda3 and sda4 are; I'm guessing some sort of integrity logs for the RAID arrays, but that's only a guess.
I went into partman during the install, selected manual, and told it to write to the respective RAID arrays (i.e., the md devices, not the sd partitions) as I listed above. I assume there is also no separate home partition. I selected ext3 for the boot, root, and data filesystems, and told it to erase data in the root partition. I allowed setup to complete and reboot. However, the unit never comes up as remotely accessible. I have tried to ping and SSH into it (I have the unit set to a reserved IP address, btw). It also does not show up in Buffalo NAS Navigator (I expected it not to, but I still gave it a try). The status LED on the unit is solid lit, which normally indicates the unit has booted up.
I'm currently at a loss from here on what I should try next.
When you say reserved IP, what do you mean? If you mean reserved in DHCP on your router, that should ensure it comes up at the expected address; the fact that the installer came up where expected is usually a good sign of that.
If the light is solid, it must have booted. One thing to try is to leave it online for an hour or two and see if you can connect. If that works, it's likely because sshd has been waiting for the RNG to initialize. If that is the case, installing haveged will keep it from happening.
Let me know if you still have trouble; I should get a chance to try with mine this weekend if needed.
After allowing it to power on and boot for over 4 hours, I still am unable to remotely get into the unit. I'm not sure what kind of logging options there are. Maybe something can be set up so I can see if it really is loading and how far along it is getting? Let me know if you come up with anything.
I took an existing disk from one of my other devices running Buster and set it up with the lswxl dtb. Everything appears to work just fine. I'll give the installer a try later and see what happens.
If you connect the disk to a device you can take a look at /var/log/syslog and see what is there. Depending on where it is getting stuck it could tell you exactly what's going wrong or there could be nothing there at all.
I have a couple of ideas for things that could be going wrong though most are highly unlikely or wouldn't result in the LED turning solid. One such scenario would be the device somehow getting a different IP address. This should not happen but it would be worth scanning your network for devices with port 22 open just in case.
I'll let you know what I figure out once I've had a chance to try it with the installer.
Well, I had a chance to locate the logs, and I can see that it was indeed booting up. It turns out that under the Buffalo firmware it requested DHCP using its MAC address, but under Debian it requests using a much longer unique ID, thus getting a different IP address than the one intended. I've added the new address so that it will be reserved, and I can now SSH in and log in.

One thing to note: flipping the power switch to OFF does not appear to do anything. I imagine that flipping it off normally sends a "press power button" signal or similar, and the Buffalo firmware would then initiate shutdown (with the light blinking until shutdown completed). With Debian, however, this appears to be ignored and the unit never shuts down. I'll try to test a few things, such as making sure the shutdown command works.

I don't suppose, btw, there is any kind of interface available to allow easy configuration of drive shares, setup, etc., like Buffalo had (which was web based)?

Now that I have this sorted and it appears to be working, I'd like to look into getting 2 large drives to put in; currently I've been testing with a single 250 GB drive. I'm not sure of the best way to proceed with this. It would also be nice to have a better way to build up from scratch (currently I have a Clonezilla image of a 250 GB drive with Buffalo firmware), ideally something easier or smaller, in case I get new drives or hit some kind of corruption and need to start over. Is there anything you recommend for managing the drives, setting up RAID, and such? I've never really messed much with Linux software RAID before. Thanks!
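If you'd rather have Debian identify itself by MAC address like the stock firmware did (so the original router reservation keeps working), one option, assuming the system uses ISC dhclient, is a line like this in /etc/dhcp/dhclient.conf:

```shell
# /etc/dhcp/dhclient.conf fragment (assumption: ISC dhclient is in use).
# Send the hardware (MAC) address as the DHCP client identifier instead
# of the longer unique ID, so a MAC-based reservation matches again.
send dhcp-client-identifier = hardware;
```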
I guess getting a different ip via DHCP wasn't as unlikely as I thought.
Some folks use OpenMediaVault (OMV) to provide the type of web interface you're looking for. I don't know how well it would work on this device since it only has 128 MB of memory, but it's something you could try.
You could move to larger drives by:
- destroying your existing data array, if you have one
- removing it from fstab and mdadm.conf
- using mdadm to remove one of the drives from the root/boot/swap arrays
- replacing it with a bigger drive and creating the same-size partitions on it
- using mdadm to add the drive to each of the root/boot/swap arrays
- repeating to replace the other drive
- setting up your new, larger data array
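Sketched as commands, that procedure might look like this. All device and array names here are assumptions based on the Buffalo layout described earlier; double-check yours with `cat /proc/mdstat` first:

```shell
# Sketch only: md/sd names are assumptions, adapt to your own layout.
mdadm --stop /dev/md22                    # stop/destroy the data array...
mdadm --zero-superblock /dev/sda6 /dev/sdb6   # ...and wipe its superblocks
# (also delete its lines from /etc/fstab and /etc/mdadm/mdadm.conf)

# Pull one disk out of the boot/root/swap arrays:
mdadm /dev/md0  --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md1  --fail /dev/sdb2 --remove /dev/sdb2
mdadm /dev/md10 --fail /dev/sdb5 --remove /dev/sdb5

# Swap in the larger drive, recreate the same-size partitions, then:
mdadm /dev/md0  --add /dev/sdb1
mdadm /dev/md1  --add /dev/sdb2
mdadm /dev/md10 --add /dev/sdb5
cat /proc/mdstat                          # wait for the resync to finish

# Repeat for the other drive, then build the new data array, e.g.:
mdadm --create /dev/md22 --level=1 --raid-devices=2 /dev/sda6 /dev/sdb6
```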
Thanks for the assistance and suggestions. I looked into OMV; however, as you pointed out, it likely would not be possible to set it up on this unit due to the limited memory. I imagine it's possible to have something that would still work (since Buffalo obviously had an interface), but I realize that might require a build from the ground up. At least it seems I have managed to get a light Debian install set up. I already have some experience with Linux, just not so much with Linux software RAID. While a nice easy-to-use UI would be good to have, this is also a useful chance to learn.

I just attempted to work on my problem of making a mini-image to rebuild from. It took some time to delete the data/storage RAID (both md22 and sda6), and I think I have succeeded in making a new, much smaller one. I am not sure what Buffalo is expecting right now, so I have yet to boot off the drive and confirm it's right. It had an existing label that is longer than I appear to be able to set, and I'm not sure why that is. My understanding is that the data filesystem is XFS, which doesn't support resizing (or at least shrinking), so it looks like it always needs to be destroyed and recreated.

Would you by chance happen to know what the 2 partitions (sda3 and sda4) are for? They are only 1 MB in size. Really small, I know, but if they aren't needed, I imagine I could get rid of them under Debian. I also assume the Buffalo device expects certain partitions to be set up a certain way; with Debian, though, I'm guessing that other than the boot and root partitions I can set up and use whatever I wish. I plan to look into the differences between ext3/ext4 and XFS, especially the pros and cons.

For the web UI, maybe it's possible to look at the Buffalo one and figure it out so it could be tailored. Thanks again for the help; if you have further input, please feel free to let me know. Thanks!
I believe the partitions 3 and 4 that you're seeing are just placeholders. Some versions of the Buffalo firmware have used them for things in the past, but I think they keep them for consistency, so that partition 6 is always data, etc. This allows them to reuse a lot of their code between different generations of devices.
For your purposes, the only partition/filesystem with any real restrictions is the one used as /boot. It has to be contained in the first partition; it has to be formatted either ext2 or ext3 (use ext3); if it's a RAID array it must be RAID 1 with metadata version 0.90; and it must be relatively small. All of this is to accommodate the version of u-boot that the device uses as its boot loader.
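Creating a boot array that satisfies those restrictions might look like this (a sketch only; the partition names are assumptions based on the layout described earlier):

```shell
# Sketch: create a u-boot-compatible /boot array.
# Partition names (sda1/sdb1) are assumptions; adapt to your layout.
mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 \
      /dev/sda1 /dev/sdb1
mkfs.ext3 /dev/md0    # this u-boot only understands ext2/ext3
```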
You can set up the other filesystems however you like, at least within the limits of what Debian supports. I've never found a good benchmark for determining what the best filesystem configuration is. I've tried just about every possible variation of ext3/ext4/xfs with most possible raid chunk sizes and stripe-widths and have largely been unable to detect any difference.
I forgot to mention one thing. When working on the ts2pro I ran into some systemd issues related to the max size of /run being too small. It turns out that the default is 10% of your RAM size, which comes out to 12M, but systemd wants 20M available for some operations.
you can configure the system to set a larger max and keep systemd happy by adding the following to your fstab:
tmpfs /run tmpfs nosuid,noexec,size=26M,nr_inodes=4096 0 0
I do this automatically in my ts2pro installer and so far I haven't seen any side effects (though I don't know that anyone else has actually used my ts2pro installer yet).
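The arithmetic behind that default is easy to check (128 MB is the RAM size of these devices; 10% is the tmpfs default mentioned above):

```shell
# Default tmpfs size for /run is 10% of RAM; on a 128 MB device:
ram_mb=128
default_run_mb=$((ram_mb / 10))
echo "${default_run_mb}M"   # prints 12M, short of the ~20M systemd wants
```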
This might seem like a dumb question (but they say the only dumb question is the one you don't ask): do the Buffalos boot using GRUB? I set up a drive's partitions, set it up with iSCSI, and copied files to it. Later, I added a 2nd drive to the unit. I then ran commands that first copied the partition setup from sda to sdb, then added the respective partitions on sdb to the RAID arrays and allowed them to resync.
I was pretty tired that night; I think I recall later taking out sda, booting up, and it booting correctly, which I suspect means the Buffalo just boots from the first partition. The key thing is making sure I know the procedure for when I need to replace a drive.
Not a dumb question. These devices work a little differently than a PC, since the BIOS/bootloader relationship isn't the same. They use u-boot instead of GRUB, and it is located in the flash memory on the device. This is why we go to all the trouble of making sure the boot files are stored on the first partition with those specific file names: those values are hardcoded in the u-boot environment and are difficult to modify (and making a mistake while changing them can render the device unable to boot).
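For what it's worth, if you ever need to inspect (or very carefully change) those hardcoded values from a running Debian system, the u-boot-tools package provides fw_printenv/fw_setenv. They need an /etc/fw_env.config describing where the environment lives in flash, which is device-specific, so treat this as a sketch:

```shell
# Sketch: inspect the u-boot environment from Debian.
# Assumes a correct /etc/fw_env.config for this device; wrong offsets
# can show garbage, and fw_setenv with wrong offsets can brick the unit.
apt-get install u-boot-tools
fw_printenv            # dump all u-boot environment variables
fw_printenv bootcmd    # show just the boot command
```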
@thea1ien, @1000001101000: Have you progressed on it?
Note that for Bookworm, there is now:
I'm hoping I can use this on a Buffalo CloudStor CS-WX. It's a 2-bay Buffalo NAS with a Feroceon/Kirkwood CPU. From what I understand, it stores all of its firmware and OS on the hard drive (I think some models have onboard flash?). I'm able to gain SSH access and place the OpenLinkstation scripts on it. However, when I run the first script, it gets to the first stage, debootstrap, and fails with:
FATAL: kernel too old
Segmentation fault
/mnt/disk2 DEBOOTSTRAP failed.
I also attempted to write the miniDebian files for the LS-WXL which I found online, which seem to be similar to what OpenLinkstation is. The initial images boot, but the install process seems to fail. (Ref: https://miniconf.debian.or.jp/assets/files/Debian%20Installer%20for%20Buffalo%20Linkstation%20NAS.pdf)
Any assistance would be helpful. I'd really like to replace the outdated firmware that Buffalo has for this unit. Thanks!