Kulim13 / alt-f

Automatically exported from code.google.com/p/alt-f

"RAID Creation and Maintenance" fails to display properly #147

Closed: GoogleCodeExporter closed this issue 9 years ago

GoogleCodeExporter commented 9 years ago
The "RAID Creation and Maintenance" page looks weird on my system.

First of all, the page shows "fdisk: device has more than 2^32 sectors, can't use all of them" twice at the top, because I have 3TB drives. This can be remedied by redirecting stderr of this line to /dev/null ("2> /dev/null"):
https://code.google.com/p/alt-f/source/browse/trunk/alt-f/customroot/usr/www/cgi-bin/raid.cgi#157
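
For illustration, the change amounts to adding a stderr redirect to the fdisk call on that line (a sketch with a made-up variable name, not the actual raid.cgi code):

# before: for >2TB disks, fdisk prints the 2^32-sectors warning on stderr,
# which ends up in the page output
fdisk -l /dev/$disk

# after: same command with stderr discarded; stdout (and the page) is otherwise unchanged
fdisk -l /dev/$disk 2> /dev/null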

The second problem is that the "RAID Maintenance" section was not getting 
populated -- just the header of it was getting filled.

I tracked it down to the fact that my curdev variable is "md/0".  Here's my 
"mdadm --examine --scan" line:
ARRAY /dev/md/0 metadata=1.2 UUID=5fa28ae1:01f01677:793a4426:5673aff5 name=foobar:0

On my system /dev/md/0 is a symlink to /dev/md0.
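This is easy to confirm with readlink, which resolves the link back to the canonical node:

$ readlink -f /dev/md/0
/dev/md0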

I'm not sure how it got like this as I used all the wizards to get it going.

Anyway, the fix for me was to add this line:
"if test -h /dev/$mdev; then mdev=$(basename $(readlink -f /dev/$mdev)); fi"
just above here:
https://code.google.com/p/alt-f/source/browse/trunk/alt-f/customroot/usr/www/cgi-bin/raid.cgi#234
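
In case it helps, here is a small standalone sketch of how that check fits into the scan handling (illustrative only, not the actual raid.cgi code; the loop and variable names are my own):

# walk the arrays that mdadm reports and normalize their names
mdadm --examine --scan | while read -r kw dev rest; do
    mdev=${dev#/dev/}                              # "md/0" with metadata 1.2, "md0" with 0.90
    # proposed fix: if /dev/$mdev is a symlink, resolve it to the real node
    if test -h /dev/$mdev; then
        mdev=$(basename $(readlink -f /dev/$mdev)) # "md/0" -> "md0"
    fi
    echo "maintaining $mdev"                       # downstream code now always sees "md0"
done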

I have a DNS-323 with two 3TB drives (ST3000DM001-9YN166), running RC3, although these problems still appear to be present in SVN head (which is the revision I linked above).

Thanks for making this!

Original issue reported on code.google.com by neoman...@gmail.com on 14 Aug 2013 at 10:44

GoogleCodeExporter commented 9 years ago
> On my system /dev/md/0 is a symlink to /dev/md0.

This is really odd, since you say you used the Disk Wizard, and nobody else has complained (except Martin Melissen in the forum).

Can you please post the output of

cat /proc/mdstat
mdadm --examine /dev/sda2 /dev/sdb2
mdadm --detail /dev/md0

Thanks

PS: Please use https://sourceforge.net/p/alt-f/tickets/ for new bug reports.

Original comment by whoami.j...@gmail.com on 16 Aug 2013 at 1:01

GoogleCodeExporter commented 9 years ago
Sorry about the wrong bug tracker... here's the information requested. Note that one of the drives has recently failed. I'm about to install two new drives and restart the process; I'll try to keep things as clean as possible so I can reproduce the issue.

$ cat /proc/mdstat 
Personalities : [linear] [raid1] 
md0 : active raid1 sda2[1]
      2929740112 blocks super 1.2 [2/1] [_U]
      bitmap: 4/22 pages [16KB], 65536KB chunk

unused devices: <none>
$ mdadm --examine /dev/sda2 /dev/sdb2
/dev/sda2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 5fa28ae1:01f01677:793a4426:5673aff5
           Name : foobar:0  (local to host foobar)
  Creation Time : Tue Aug 13 11:44:57 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5859480496 (2794.02 GiB 3000.05 GB)
     Array Size : 5859480224 (2794.02 GiB 3000.05 GB)
  Used Dev Size : 5859480224 (2794.02 GiB 3000.05 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : clean
    Device UUID : 8066c46a:63b6e685:1dcff960:2920b5b4

Internal Bitmap : 8 sectors from superblock
    Update Time : Sun Aug 18 10:58:23 2013
       Checksum : 5b9bb084 - correct
         Events : 43679

   Device Role : Active device 1
   Array State : .A ('A' == active, '.' == missing)
/dev/sdb2:
          Magic : a92b4efc
        Version : 1.2
    Feature Map : 0x1
     Array UUID : 5fa28ae1:01f01677:793a4426:5673aff5
           Name : foobar:0  (local to host foobar)
  Creation Time : Tue Aug 13 11:44:57 2013
     Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 5859480496 (2794.02 GiB 3000.05 GB)
     Array Size : 5859480224 (2794.02 GiB 3000.05 GB)
  Used Dev Size : 5859480224 (2794.02 GiB 3000.05 GB)
    Data Offset : 2048 sectors
   Super Offset : 8 sectors
          State : active
    Device UUID : a0292d0c:a995030d:4ff8d412:92765634

Internal Bitmap : 8 sectors from superblock
    Update Time : Wed Aug 14 20:30:29 2013
       Checksum : b5977b28 - correct
         Events : 4909

   Device Role : Active device 0
   Array State : AA ('A' == active, '.' == missing)
$ mdadm --detail /dev/md0
/dev/md0:
        Version : 1.2
  Creation Time : Tue Aug 13 11:44:57 2013
     Raid Level : raid1
     Array Size : 2929740112 (2794.02 GiB 3000.05 GB)
  Used Dev Size : 2929740112 (2794.02 GiB 3000.05 GB)
   Raid Devices : 2
  Total Devices : 1
    Persistence : Superblock is persistent

  Intent Bitmap : Internal
    Update Time : Sun Aug 18 10:58:48 2013
          State : active, degraded
 Active Devices : 1
Working Devices : 1
 Failed Devices : 0
  Spare Devices : 0

           Name : foobar:0  (local to host foobar)
           UUID : 5fa28ae1:01f01677:793a4426:5673aff5
         Events : 43681

    Number   Major   Minor   RaidDevice State
       0       0        0        0      removed
       1       8        2        1      active sync   /dev/sda2

Original comment by neoman...@gmail.com on 18 Aug 2013 at 3:02

GoogleCodeExporter commented 9 years ago
Hi! I have posted a related bug report on SourceForge:
https://sourceforge.net/p/alt-f/tickets/10/
Can you please take a look at it when you have the chance? Thanks a lot (and keep up the good work, alt-f is really great!)

Stephane

Original comment by stephane...@gmail.com on 2 Sep 2013 at 6:23

GoogleCodeExporter commented 9 years ago
Closed by SVN commit 2369:

Disk RAID: closes issue 147
-fdisk: device has more than 2^32 sectors 
 Reason: fdisk warning for greater than 2TB disks
-the "RAID Maintenance" section was not getting populated -- just the header of 
it was getting filled.
 Reason: for metadata 1.2, mdadm reports /dev/md/0, /dev/md/1... which are symlinks to /dev/md0, /dev/md1...

RAID metadata 1.2 is automatically used on disks larger than 2TB, which is why I never saw this before.

Thanks for the report, diagnosis and proposed fix

Original comment by whoami.j...@gmail.com on 3 Sep 2013 at 3:41

GoogleCodeExporter commented 9 years ago
Hi, I have stable RC3 and just installed 2x4TB drives to upgrade from my 2x1TB. I started by shutting the DNS-323 down, ejecting both 1TB drives and installing the 2x4TB drives, then let the Wizard create a new md0 out of them (it worked with no errors). However, when I pull up the RAID page, I get the fdisk error stated above. I believe the proposed fix would work for me too, but I'm not sure how to get the additional line of code in without hosing things. Can someone share the proper steps to insert this code into the RAID cgi?
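
From reading the thread, I think the steps are roughly the following, but I'd appreciate confirmation before I try it. (If I understand the Alt-F layout correctly, the running system lives in RAM, so a hand edit like this is lost on reboot; upgrading to a build that includes SVN commit 2369 would be the permanent fix.)

# ssh into the box, make a backup, then edit the script in place
cp /usr/www/cgi-bin/raid.cgi /root/raid.cgi.bak
vi /usr/www/cgi-bin/raid.cgi
# add the one-liner from the report, just above the spot the first post links to
# (the exact line number may differ between releases, so check the surrounding code):
# if test -h /dev/$mdev; then mdev=$(basename $(readlink -f /dev/$mdev)); fi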

Thanks

Original comment by cemb...@gmail.com on 25 Dec 2013 at 5:00