Constant buffering

Where is /dev/nvme0n1?

Observe:

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/nvme0n1p2 during installation
#UUID=f60ef19a-35cf-4432-9b81-4e80a714e0d3 /               xfs     defaults        0       0
# /boot/efi was on /dev/nvme0n1p1 during installation
#UUID=49B7-3D1E  /boot/efi       vfat    umask=0077      0       1
# /home was on /dev/nvme0n1p4 during installation
#UUID=6617e293-eb35-467c-9cbb-50810b615e7b /home           xfs     defaults        0       0
# swap was on /dev/nvme0n1p3 during installation
#UUID=74094dff-1a82-47ab-b1a7-778f33d7898f none            swap    sw              0       0
UUID=33dc7666-f7ff-42ee-8f8b-71fae9db1d72 /               ext4    errors=remount-ro 0       1

It is nvme0n1p2:

nvme0n1                                                1.8T          
├─nvme0n1p1 /boot/efi                                  512M vfat     0425-70B4
└─nvme0n1p2 /                                          1.8T ext4     33dc7666-f7ff-42ee-8f8b-71fae9db1d72

Where is that listed in FSTAB? :thinking:
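A read-only way to cross-check which UUID the running system actually mounted at / against what fstab asks for (these are standard util-linux commands; nothing is modified):

```shell
# List every block device with its filesystem type and UUID
lsblk -f

# Show the exact source device and UUID mounted at /
findmnt -no SOURCE,UUID /

# Show what fstab asks for, minus the comment lines
grep -v '^#' /etc/fstab
```

If the UUID findmnt reports for / doesn't appear anywhere in fstab's active lines, the mount is coming from somewhere other than the entry you're looking at.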

FWIW: I used dd when I moved from my SATA SSD to my NVMe one.

It should be this, right?

# <file system> <mount point>   <type>  <options>       <dump>  <pass>
# / was on /dev/sdg2 during installation
UUID=33dc7666-f7ff-42ee-8f8b-71fae9db1d72 /               ext4    errors=remount-ro 0       1

The UUID is the same.

From a smaller partition to a larger one on the SSD?

It was the same sized drive. Both were 2TB (1.8TB) drives.

you literally did

sudo dd if=/dev/sdX  of=/dev/nvme0n1 bs=XXXX

?

That looks about right, yeah, after I ran some tests to figure out the optimal bs. I did a little research that said that should be fine, but you know the internet, so I may have been led astray.
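For anyone repeating those bs tests, a non-destructive way is to read from the source drive into /dev/null and compare the rates dd prints (the device name /dev/sdX is a placeholder, as above; this only reads, nothing is written):

```shell
# Read ~1 GiB at each candidate block size; dd prints the transfer rate.
# iflag=direct bypasses the page cache so the numbers reflect the disk itself.
for bs in 1M 4M 16M; do
    echo "bs=$bs:"
    sudo dd if=/dev/sdX of=/dev/null bs=$bs count=$((1024 / ${bs%M})) iflag=direct 2>&1 | tail -n 1
done
```

The count is scaled so each pass reads the same 1 GiB, which keeps the runs comparable.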

Let’s assume:

  1. Booted the USB live thumb
  2. Launched a terminal session
  3. USB HDD became /dev/sdb
  4. NVME SSD will stay right where it is

In this scenario, optimal is simple.

sudo dd if=/dev/sdb of=/dev/nvme0n1 bs=4M

SSD pages are 4K each.
HDD sectors are 512 Bytes.

The math works out perfectly for optimal alignment in all regards.
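That alignment claim checks out arithmetically; a quick sanity check:

```shell
# bs=4M is an exact multiple of both the 4 KiB SSD page and the
# 512 B HDD sector, so every dd transfer starts on a page and
# sector boundary with zero remainder.
bs=$((4 * 1024 * 1024))                    # 4 MiB = 4194304 bytes
echo "pages per block:   $((bs / 4096))"   # 1024, remainder 0
echo "sectors per block: $((bs / 512))"    # 8192, remainder 0
```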

Yep, that's probably pretty accurate as to what I did to clone the drive. I think my bs was bigger, but I don't recall the exact command I used, to be honest. I believe I used the page here:

I am VERY bothered by you missing 500 GB.

HDDs and SSDs really aren't meant to be copied from one to another that way, BUT you can get away with it with minimal waste.

what does sudo parted /dev/nvme0n1 print show?

Is all the space accounted for there?

You can also look here.

[chuck@lizum ~.2002]$ cd  /dev/disk/by-uuid/
[chuck@lizum by-uuid.2003]$ ls -la
total 0
drwxr-xr-x 2 root root 160 Feb  1 23:43 ./
drwxr-xr-x 8 root root 160 Feb  1 04:30 ../
lrwxrwxrwx 1 root root  10 Feb  1 23:43 142f70e7-5387-4f61-8d90-c92559c0d252 -> ../../sda1
lrwxrwxrwx 1 root root  10 Feb  1 23:43 446c8f7a-858b-47c5-94da-5fd8da44a18f -> ../../sdb1
lrwxrwxrwx 1 root root  15 Feb  1 23:43 49B7-3D1E -> ../../nvme0n1p1
lrwxrwxrwx 1 root root  15 Feb  1 23:43 6617e293-eb35-467c-9cbb-50810b615e7b -> ../../nvme0n1p4
lrwxrwxrwx 1 root root  15 Feb  1 23:43 74094dff-1a82-47ab-b1a7-778f33d7898f -> ../../nvme0n1p3
lrwxrwxrwx 1 root root  15 Feb  1 23:43 f60ef19a-35cf-4432-9b81-4e80a714e0d3 -> ../../nvme0n1p2
[chuck@lizum by-uuid.2004]$ 

It was an SSD to SSD if that makes a difference.

Number Start End Size File system Name Flags
1      1049kB 538MB 537MB fat32 EFI System Partition boot,esp
2      538MB 2000GB 2000GB ext4

That's how it's partitioned.
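One thing worth ruling out before chasing the missing space: parted reports decimal units (GB) while lsblk and df report binary units (TiB/GiB), which alone makes the same partition look smaller on paper. A quick conversion, just arithmetic:

```shell
# parted: 2000 GB = 2000 * 10^9 bytes (decimal)
# lsblk:  1 TiB   = 2^40 bytes (binary)
awk 'BEGIN {
    bytes = 2000e9                                  # parted size in bytes
    printf "2000 GB = %.2f TiB\n", bytes / 1024^4   # what lsblk shows as 1.8T
}'
```

So the 2TB-vs-1.8T difference in the listings above is purely display units. That's nowhere near 500 GB, though, so the fsck and df checks still matter.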

From here: disk usage - Missing ~500GB of hard drive space - Ask Ubuntu

Where you show 1.3TB in use, you gotta fsck and fix that.

Did you copy the file system LIVE (while it was running) ?

Having a sneaking suspicion that the i/o wait is getting sucked up because Linux’s free block list is all hosed up. (It’s looking for blocks down a 500GB long list of unaccounted for pages)
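For reference, the fsck suggested above has to run against an unmounted filesystem, i.e. from the live USB rather than the running system (device name as elsewhere in this thread):

```shell
# From the live USB session, with the NVMe root NOT mounted:
sudo umount /dev/nvme0n1p2 2>/dev/null   # no-op if it's already unmounted
sudo fsck -f /dev/nvme0n1p2              # -f forces a full check even if marked clean
```

Running fsck on a mounted filesystem can corrupt it, which is why the live-USB step matters.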

I did not. I did a USB disc and did it from there :slight_smile: I just finished an fsck and it did make some modifications. It didn't take too long, but it was saying there were a number of optimizations to make, so I let it do them. Let's see how it looks now.

It doesn’t seem to have changed the missing space as far as I can see so far. IO wait does seem to be about half what it was, but maybe that just takes time.

I re-ran the dd command and it came out more or less the same, a tad better.

20000+0 records in
20000+0 records out
83886080000 bytes (84 GB, 78 GiB) copied, 52.8339 s, 1.6 GB/s
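Working backward from that output, the run looks like it used bs=4M count=20000 (an assumption from the byte count; the exact command wasn't posted), and the numbers are self-consistent:

```shell
# 20000 records * 4 MiB/record should equal the reported byte count,
# and bytes / seconds should reproduce the reported ~1.6 GB/s.
bs=$((4 * 1024 * 1024))
echo "total bytes: $((20000 * bs))"       # 83886080000, as reported
awk 'BEGIN { printf "rate: %.1f GB/s\n", 83886080000 / 52.8339 / 1e9 }'
```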

The dd rate must be the spec of your SSD. My SSD is a Samsung. This one does about 2.5-2.7 GB/s and my other Samsung does 3.5 GB/s. It all depends on the tech used.

Not wasting more effort on the raw perf; 1.5/1.6 GB/s is good enough for now.
If the IO wait is down, GOOD. That'll go a long way.

How does PMS behave now? A bit crisper loading up sections?

The sections seem to load fine, but I'm still getting buffering issues across multiple episodes on multiple different drives, which I wasn't getting until a couple of weeks ago.

Plex Media Server Logs_2022-02-03_05-21-39.zip (4.9 MB)
(if it helps)