@BulletZ
I’m sorry I didn’t see this until now. Since this topic touches on a number of things at this point, I’d like to make a sorta ‘explanatory’ post to hopefully shed some light on what’s going on.
I am not referring to the SSDP protocol itself.
I am referring to how PMS handles unexpected results from SSDP queries (e.g. malformed XML, which occurs quite often) returned by nodes.
Normally, PMS will output one line acknowledging the device is found or gone idle.
When it gets a bad reply, depending on the actual reply, as many as 15 lines can be written.
While PMS logs are always generous, those extra lines add up.
One caution I need to state:
- The defaults are DEBUG: ON, VERBOSE: OFF
- If VERBOSE is manually enabled, thinking it will help, log file output can be 10-20x greater. At 10 MB per log file, with a total retention of 60 MB that will ‘roll off’ in 2-3 minutes, you have roughly 1.2 GB/hr wasted on Verbose log file writing. This multiplies further when streaming because EVERY packet exchange between Server and Player is logged.
- On systems which are totally “production stable”, it’s perfectly OK to turn off DEBUG, with the caveat that DEBUG must be enabled again and the issue re-created before posting logs in the forum for us to review. INFO/WARN/ERROR (when DEBUG is OFF) is never enough for us to diagnose with.
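To put rough numbers on that VERBOSE churn, here’s a back-of-envelope sketch using the figures above (the 2-3 minute roll-off is an estimate; I’m using 3 minutes here):

```shell
# Back-of-envelope VERBOSE log churn, using the figures above:
# 60 MB of retained logs rolling off roughly every 3 minutes.
RETENTION_MB=60
ROLL_MINUTES=3

MB_PER_HOUR=$(( RETENTION_MB * 60 / ROLL_MINUTES ))
echo "Roughly ${MB_PER_HOUR} MB (~1.2 GB) of log writes per hour"
```

At a 2-minute roll-off the same arithmetic gives 1.8 GB/hr, so 1.2 GB/hr is the conservative end.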
An alternative to PMS writing in the Logs directory is to use SYSLOG.
On normal Linux, this is an option; however, we don’t support diagnosing syslogs. (Would you want to sift through months of someone’s syslog trying to find one problem?)
On Synology, we don’t have that luxury.
Speaking to the “Disk Activity” itself: everything PMS does when it’s not in use is listed in the Scheduled Tasks and Library sections.
If you click SHOW ADVANCED in both, you’ll see where PMS will:
- Monitor your media directories for any changes and rescan when changes are detected.
- Optionally also scan periodically.
- Unless “Partial Scan” is enabled (only of benefit when all media is local), it will rescan all the media directories. Enabling “Partial Scan” is, imho, a MUST-ENABLE option on NAS boxes. It also significantly reduces database and log file activity.
- The list of scheduled tasks. Most of these are enabled by default and constitute a substantial amount of disk activity during the maintenance period (shown at the top of the Scheduled Tasks page):
Backup database every three days
Optimize database every week
Remove old bundles every week
Remove old cache files every week
Refresh local metadata every three days
Update all libraries during maintenance
Upgrade media analysis during maintenance
Refresh music library metadata periodically
Perform extensive media analysis during maintenance
Perform refresh of program guide data
Fetch missing location names for items in photo sections
Analyze and tag photos
The media analysis will read every bit of indexed media and profile it for auto bit rate use (Remote Access).
Audio will have loudness analyzed as well.
I’m not trying to defend what PMS is doing. I know it pushes hard on the system during maintenance, and I know that if you’re going to use an SSD, don’t get a cheap one. Use a “Pro”-rated SSD and get a big one. SSD life depends on the total number of storage pages on the device. There is a finite limit to how many times a storage page can be written. Obviously, the more free pages on the device, the longer it will last (wear leveling in the device’s ASIC).
I use the Samsung 970 Pro 1TB. It has a TBW (TeraBytes Written) rating of 1200 TBW (1.2 petabytes). Yes, it’s not cheap: $350 USD list price.
On my QNAP (where I keep my main PMS and all the QA test media), metadata takes 42 GB:
[/share/PlexData/Plex Media Server] # du -ms .
42835 .
[/share/PlexData/Plex Media Server] #
Total number of files indexed:
[chuck@lizum /vie.166]$ find movie* qa tv* *music* -type f -print | wc -l
166035
[chuck@lizum /vie.167]$
If I start doing rough math (VERY rough):
- I can rebuild the entire Plex database 23 times before getting to 1 TB of usage.
- I can roll though 13,000 full sets of logs (at 75 MB / full set) before getting to 1 TB of usage when in DEBUG logging mode.
Let’s assume:
10 sets of logs / day (750 MB) for 1000 days = 750 GB
1000 full rebuilds of the Plex database (43 GB each) = 43,000 GB
That’s 43.75 TB against the total life of a typical 600 TBW SSD.
1000 days @ 1 rebuild per day = nearly 3 years.
Under normal use, you will have grown out of the NAS long before.
Between the two above, I need over 10 years to wear out the SSD.
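The rough math above can be spelled out as a quick shell calculation (all figures are the ones from this post; nothing new assumed):

```shell
# Rough SSD-wear arithmetic from the figures above.
LOG_SET_MB=75       # one full DEBUG log set
SETS_PER_DAY=10
DAYS=1000
DB_GB=43            # one full metadata/database rebuild
REBUILDS=1000
TBW_RATING=600      # typical consumer SSD endurance, in TB written

LOG_GB=$(( LOG_SET_MB * SETS_PER_DAY * DAYS / 1000 ))  # 750 GB of logs
DB_TOTAL_GB=$(( DB_GB * REBUILDS ))                    # 43,000 GB of rebuilds
TOTAL_GB=$(( LOG_GB + DB_TOTAL_GB ))                   # 43,750 GB, i.e. ~43.75 TB
echo "${TOTAL_GB} GB written over ${DAYS} days, against a ${TBW_RATING} TBW budget"
```

At that pace, exhausting 600 TBW would take roughly (600 / 43.75) x 1000 days, which is well past a decade; that’s where the “over 10 years” figure comes from.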
This raises the question:
What else is being written to the SSDs?
Where’s the Transcoder temp directory? On my Synology, I keep it in the Plex share: /volume1/Plex/tmp_transcoding.
If the entire Plex share is on SSD and the transcoder temp is still at its default location – “Houston, we have a problem”
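A quick way to sanity-check this on your own box (the path below is my Synology layout from above; substitute whatever your transcoder temp directory setting points at, under Settings > Transcoder):

```shell
# Show where transcode temp files are landing and how much space they use.
# Path is my Synology example from above; adjust to your own setting.
TRANSCODE_TMP="/volume1/Plex/tmp_transcoding"

if [ -d "$TRANSCODE_TMP" ]; then
    du -sh "$TRANSCODE_TMP"
else
    echo "Not found: $TRANSCODE_TMP (check your transcoder temp setting)"
fi
```

If that directory is sitting on the same SSD as everything else, every transcode session is eating into the drive’s write budget.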
Lastly, back to the point of disk activity, which I’ve deviated far from:
- Drives don’t go bad by using them
- Drives, very much like light bulbs, fail from being turned off (they cool down) and from rapid heating when turned back on. (Bulbs fail when you turn them on and rarely fail if left on continuously, right?)
- My drives are left on continuously. I don’t hear what PMS is doing. All my equipment is in the closet behind a closed door.
For what it’s worth, my drives have been left on continuously since the last upgrade (the Syno got the then-older & smaller 6 TB drives and the QNAP got the newer 8 TB drives).
What I’m trying to say here is:
- PMS does a lot of work behind the scenes.
- It figures out, in advance, how to stream under both local & remote situations.
- It keeps metadata up to date (it does change from time to time, though that’s not the norm).
- It will work its way through the entire media content until everything has been fully “imported” (analyzed in depth).
- If NAS-rated drives are being used, don’t worry about them being on all the time. It’s actually better for them in the long haul (thermal stability + heads remain out over the platters). If Desktop drives are being used, Danger Will Robinson!
- If SSD wear is an issue, let’s work on that. This is largely an educational issue.