Let’s back up here.
- You tested the mount at the command line, mounting on /mnt
- You umounted it
- You added the entry in fstab
- You then tested mounting it via the fstab entry.
Is this the correct sequence?
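For reference, that sequence in commands would look roughly like this (assuming the array is /dev/md0 and /mnt was the test mount point):

mount /dev/md0 /mnt    # manual test mount
umount /mnt            # unmount the test
# add the entry to /etc/fstab, then test it through fstab:
mount -a               # mounts everything listed in fstab that isn't already mounted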
OK so I
Ok…
,nofail
Adding the nofail option prevents the mount from blocking system boot if the array isn’t available.
With this in place, restart so it comes up cleanly.
like so?
/dev/md0 /mount/disks/Plex ext4 auto,defaults,rw,nofail 1 2
If those directories already exist, then yes.
If they do not exist, the mount will fail.
Yes, that is the correct way to include the nofail option.
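If the mount point from your entry doesn’t exist yet, create it first (path taken from your fstab line):

mkdir -p /mount/disks/Plex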
Should there be a space after /mount? I know I’m new to this, but that makes me think it’s part of the path. Or am I reading that wrong?
The format is:
/device/spec/no/spaces <space> /mount/point/no/spaces <space> fs-type <space> options <space> dump-flag <space> fsck-pass
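Mapped onto your line, the fields break down roughly like this (the # lines are just annotations; fstab treats them as comments):

# device    mount point        type  options                  dump  fsck-pass
/dev/md0    /mount/disks/Plex  ext4  auto,defaults,rw,nofail  1     2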
OK, so now that fstab is updated, is there anything else I can do until the recovery finishes? I read that I should not touch mdadm.conf until that process is finished.
Also, I shouldn’t transfer any files yet, right?
First, let’s get the mounting correct. After that, we can handle the media itself.
OK, it appears it is going to take another 6 hours to finish building the RAID, judging by /proc/mdstat.
that makes sense.
Just trying to find things I can do while I wait, lol. What do I need to put into mdadm.conf once the RAID is finished?
Nothing you can do until it’s ready.
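If you just want to keep an eye on the rebuild while you wait, something like this works (the 60-second refresh is arbitrary):

watch -n 60 cat /proc/mdstat    # live view of rebuild progress
mdadm --detail /dev/md0         # array state, rebuild percentage, member disks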
Well, since this won’t finish until after midnight tonight, I won’t be up when it does. I will have to work on it in the morning. Can you tell me what my next steps are?
I know I have to edit mdadm.conf, and I will have to set permissions; last time I had to mess with facl lists (think that was the command). Then I need to make sure it is accessible from my Windows machine that is also on this network, which, if I recall, is just me mapping the server’s IP address.
After it finishes, mount it (check with df -h) and chmod as needed. Directories get 755.
OK, thanks for the help. I will update here in the morning. Fingers crossed, without any issues.
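As a rough sketch of the morning steps once the rebuild is done — the mdadm.conf path and initramfs command vary by distro (CentOS-style shown; Debian/Ubuntu uses /etc/mdadm/mdadm.conf and update-initramfs -u), and the username in the ACL line is just taken from your prompt:

mdadm --detail --scan >> /etc/mdadm.conf      # record the assembled array so it is known at boot
dracut -f                                     # rebuild the initramfs (CentOS); update-initramfs -u on Debian/Ubuntu
mount -a                                      # mount everything in fstab
df -h                                         # confirm the array is mounted
chmod 755 /mount/disks/Plex                   # directories get 755
setfacl -R -m u:jason:rwx /mount/disks/Plex   # ACL example; adjust the user as needed

Windows access is a separate step; that’s normally a Samba share defined in /etc/samba/smb.conf rather than anything in fstab.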
After I did a reboot this morning, the array changed from md0 to md127.
It also did not mount the drives.
root@Server:/home/jason# df -h
Filesystem Size Used Avail Use% Mounted on
udev 32G 0 32G 0% /dev
tmpfs 6.3G 9.4M 6.3G 1% /run
/dev/sda2 395G 6.8G 368G 2% /
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/sda1 511M 132K 511M 1% /boot/efi
tmpfs 6.3G 20K 6.3G 1% /run/user/1000
root@Server:/home/jason# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 465.8G 0 disk
├─sda1 8:1 0 512M 0 part /boot/efi
├─sda2 8:2 0 401.3G 0 part /
└─sda3 8:3 0 64G 0 part [SWAP]
sdb 8:16 0 3.7T 0 disk
└─md127 9:127 0 7.3T 0 raid5
sdc 8:32 0 3.7T 0 disk
└─md127 9:127 0 7.3T 0 raid5
sdd 8:48 0 3.7T 0 disk
└─md127 9:127 0 7.3T 0 raid5
Change the fstab entry from md0 to the current md127.
Yes.
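So the entry would become something like this, followed by a re-mount to confirm (still assuming the mount point from earlier):

# /etc/fstab
/dev/md127 /mount/disks/Plex ext4 auto,defaults,rw,nofail 1 2

mount -a
df -h /mount/disks/Plex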
I have ZERO idea what the heck is happening with your CentOS, but work with it.
It thinks it knows best.
lol
nofail is nice, isn’t it?
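If the device name keeps jumping around between boots, a more durable option is to mount by filesystem UUID instead of the md number. blkid will print it, and the fstab line then looks roughly like this (the UUID below is a placeholder; use the one blkid reports):

blkid /dev/md127

# /etc/fstab
UUID=<uuid-from-blkid> /mount/disks/Plex ext4 auto,defaults,rw,nofail 1 2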