-
Hello,
I have been considering migrating from GlusterFS to LizardFS for a variety of reasons. That said, I have already started migrating some of my company's high-value data to MooseFS CE since replic…
-
I have a cluster of several servers running 3.12 (they also handle other tasks, like a GaleraDB cluster, among others). My plan is to replace one server at a time. The old servers are running Deb…
-
After upgrading the chunkservers to 3.13.0~rc1, I'm afraid I won't get away without massive data loss: `mfsmaster` logs `replication status: IO error` constantly, and as replication progresses, the CGI's …
-
I just wanted to open a general topic asking how people who use LizardFS are using it today. It would be valuable to know things like how big your cluster is (number of nodes, cluster storage ca…
-
Situation: a chunkserver with 30 million chunks and several hard disks is restarted with `HDD_HIGH_SPEED_REBALANCE_LIMIT = 3`:
This is what happens after scanning of local HDDs:
```
Jan 31 09:01:…
-
I've built a 2.5-node Proxmox cluster: two VM hosts and a third small node that only handles quorum. I can replace the third node once I need more compute/RAM.
The cluster currently runs GlusterFS to…
-
Just for kicks I thought I would share one of my crazy scripts. This is a cron task that paints the main local console with status for our Corosync and LizardFS master (masters and shadows), so as to kno…
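The original script isn't shown here, but the idea it describes can be sketched in a few lines. Everything below is illustrative, not the author's actual code: the daemon names (`corosync`, `mfsmaster`), the use of `pgrep` to detect them, and the output layout are all assumptions.

```python
#!/usr/bin/env python3
"""Minimal sketch of a cron-driven console status painter.

Hypothetical example: checks whether the Corosync and LizardFS master
daemons are running on this node and prints one status line per service.
On the real system this would run from cron and write to the console.
"""
import subprocess


def service_up(name: str) -> bool:
    """Return True if a process with exactly this name is running.

    Uses `pgrep -x`, which exits 0 when at least one match is found.
    """
    result = subprocess.run(["pgrep", "-x", name], capture_output=True)
    return result.returncode == 0


def format_status(name: str, up: bool) -> str:
    """Render one padded status line, e.g. 'mfsmaster    UP'."""
    return f"{name:<12} {'UP' if up else 'DOWN'}"


if __name__ == "__main__":
    # Daemons one might watch on a master/shadow node (illustrative list).
    for svc in ("corosync", "mfsmaster"):
        print(format_status(svc, service_up(svc)))
```

A real version would presumably also distinguish master from shadow (e.g. by querying the admin interface) and redraw the console in place rather than appending lines.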
-
QUESTION for the COMMUNITY/Developers
System information: 3.11.2 running on CentOS
Operating system (distribution) and kernel version: CentOS 7
Hardware / network configuration, and underlying…
-
I recently built a new single-node LizardFS cluster on an ARM platform (ODroid HC2) for testing, with the intent of expanding it to 4 nodes later if things went well.
Unfortunately I can only seem t…
-
I'm investigating an error reported on one of the disks in the CGI, and found the following in the logs:
~~~~
03:51:02 node01 mfschunkserver[225550]: hdd_io_begin: file:/var/lib/mfs/SSD400/ext4//86/chu…