Why is this thread still marked as popular instead of very popular? Based on the stats you would think this would not be the case.
[Popular] Server-Side Speed Limits/Caps for Shared/Subscribed Users | 1,145 points
[Very Popular] Plex Sync for Desktop Clients | 1,093 points
[Very Popular] Live TV support in Plex (Media Server) | 959 points
[Very Popular] myPlex: buffer content Youtube-style | 871 points
[Very Popular] Show who is streaming what and when | 867 points
Oh, I think the Plex team knows full well that this is a heavy hitter for a lot of people, regardless of how it's tagged in the title.
They know not only what we want from this feature, but how other existing or requested features will tie in with it.
I just think it's odd that the developers have not weighed in on this topic at all, from what I can see…
+1 This is needed.
+1
I'd like this feature as well
Maybe with the way the media server is designed, this feature is harder to implement than we think? It would be nice to hear what the Plex team thinks about this feature and whether it's even feasible.
I would like to see this feature, along with the ability to restrict the times at which subscribers can access your content, which would also help with parental controls.
+1
Definitely!
Force transcoding for all remote addresses would be great
+1 for this feature.
@justinnjess said:
Force transcoding for all remote addresses would be great
No, not at all. Direct play is easier on the server and gives a better experience on the client, especially when seeking.
@justinnjess said:
Force transcoding for all remote addresses would be great
Only if you don't want your server to handle more than a couple of streams at a time. If you share out to just a few people, and they only use a few streams total, then this will work for you. If you share out to more than 10 or 20, and they all tend to hit at roughly the same times, this will never give them a good experience.
The only way to ensure everyone views everything without buffering is to provide them with alternate versions that can be selected on the server side, for the connection each user has. The client negotiates the speed with the server, the server looks to see if the right bitrate is available, and only transcodes if it can't find a version that fits the requirements.
Transcoding is really CPU intensive. And those with smaller CPUs aren't able to handle even one transcode, let alone 2 or more.
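If it helps to picture it, here's a rough sketch of the kind of selection logic I mean (the names and structure are purely illustrative on my part, not anything Plex actually exposes):

```python
# Illustrative only -- shows the decision order, not Plex's actual internals.
def pick_version(available_versions, client_max_bitrate_kbps):
    """Prefer a pre-made version the client can Direct Play; transcode only as a last resort.

    available_versions: list of dicts like {"path": ..., "bitrate_kbps": ...}
    client_max_bitrate_kbps: the cap negotiated with (or set for) this client
    """
    # Keep only the versions that fit under the client's cap
    playable = [v for v in available_versions
                if v["bitrate_kbps"] <= client_max_bitrate_kbps]

    if playable:
        # Direct Play the best quality version that still fits the pipe
        best = max(playable, key=lambda v: v["bitrate_kbps"])
        return ("direct_play", best)

    # Nothing fits, so fall back to transcoding the smallest source down to the cap
    smallest = min(available_versions, key=lambda v: v["bitrate_kbps"])
    return ("transcode", smallest)
```

The transcoder is the fallback, not the default.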
I currently have 9 total remote streams going and 7 of them are transcoding without issue. The other two are direct play. My CPU is an i7 in a mac mini. The best performance I see is when everyone is transcoding - less going through the pipe at the same time.
How are you determining that you are getting better performance with more transcoding happening? What industry standards are you using to test this with?
If you aren't applying any industry-standard tools, but are relying on your friends reporting things to you, then you aren't really getting quantifiable results. Your results are "gut feeling" and may very well be giving you a warm fuzzy right up until the machine actually fails catastrophically.
Ideally, you want to look at things like CPU usage, memory usage, disk reads/writes, drive up time, spin time, etc. SMART data on your drives and, yes, network throughput are also important metrics that need to be looked at.
Drive failure is more typically going to happen during a write than during a read. Transcoding means the media is converted from one bitrate or codec to another, and the "new" file is written to disk before it's read again to be sent out over the wire. Just the act of transcoding the same file multiple times for various users can hasten the failure of a drive. Maintaining a premade second copy of the file that fits that bitrate is less wear and tear on the system, as it's only written once, instead of 2, 20, or 200 times for the same media.
From my own personal testing, I can show that Direct Play of a 1080p/10Mbps stream uses roughly 2-4% of a 4-core CPU (with a total of 400% available). Transcoding a 20Mbps stream down to 10Mbps uses 300-350% of the 400% available. Transcoding is much harder on the system. More CPU cycles need to be allocated to the task, and more disk writes and reads take place to transcode the media. More heat is generated in the system, and we all know heat is a bad thing when it comes to electronics.
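To put those figures in concrete terms, here's the back-of-the-envelope math they imply (just my rounded numbers from above, nothing more):

```python
# Rough capacity estimate from the figures above (4 cores = 400% total).
total_cpu = 400          # percent available
direct_play_cost = 4     # ~2-4% per 1080p/10Mbps Direct Play stream (upper end)
transcode_cost = 325     # ~300-350% per 20Mbps -> 10Mbps transcode (midpoint)

print(total_cpu // direct_play_cost)  # ~100 simultaneous Direct Play streams
print(total_cpu // transcode_cost)    # 1 simultaneous transcode, and that's it
```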
If you have a copy of the media in the correct bitrate for the user's request, then your system should not need to transcode the media at all. It's going to Direct Play it, at the requested rates, with lower overall performance requirements than transcoding it on demand. The stream that's pre-transcoded to the lower bitrate is also going to be higher quality, and produce better results on your friend's device.
If you have a single media file at 1080p/10Mbps, you are going to need a fair amount of space and work to transcode it on the fly, but if you have the same file plus one at 720p/4Mbps, the requirements to maintain that second copy are much lower. Yes, the overall required disk space increases, but the on-demand load decreases. Transcoding from that 720p/4Mbps copy, when an even lower bitrate is needed, reduces the on-demand requirements considerably as well.
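For a sense of scale, here's roughly what that second copy costs in disk space, assuming a 2-hour movie and ignoring audio and container overhead:

```python
# Approximate file size for a 2-hour movie at the bitrates discussed above.
def size_gb(bitrate_mbps, hours=2):
    # Mbps * seconds = megabits; / 8 = megabytes; / 1000 = gigabytes (roughly)
    return bitrate_mbps * hours * 3600 / 8 / 1000

print(round(size_gb(10), 1))  # ~9.0 GB for the 1080p/10Mbps original
print(round(size_gb(4), 1))   # ~3.6 GB extra to keep the 720p/4Mbps copy
```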
Direct Play will always be less resource intensive on the system than transcoding will ever be. But to ensure you have the media to fit the requested bitrates, you have to pre-transcode for those bitrates, either through the OM feature or other applications that make a second (or third, or even fourth) version to fit the limits you want each user to use. This requires something from you to set it up, and additional space to store the extra copies on…
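If you'd rather script those extra versions yourself instead of using the OM feature, something along these lines will produce that 720p/4Mbps copy. This is only a sketch: it assumes ffmpeg is installed, and the paths and quality settings are placeholders you'd adjust for your own library.

```python
# Sketch: pre-transcode a 720p/4Mbps copy with ffmpeg (paths below are placeholders).
import subprocess

def make_720p_copy(source_path, output_path):
    subprocess.run([
        "ffmpeg", "-i", source_path,
        "-vf", "scale=-2:720",                    # scale to 720p, keep aspect ratio
        "-c:v", "libx264", "-b:v", "4M",
        "-maxrate", "4M", "-bufsize", "8M",       # keep the peaks near 4Mbps
        "-c:a", "aac", "-b:a", "128k",
        output_path,
    ], check=True)

make_720p_copy("/media/Movies/Example (2016)/Example.mkv",
               "/media/Movies-720p/Example (2016)/Example.mkv")
```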
4K media is going to present its own set of problems when it comes to transcoding, because the frame is actually 4x the pixels of the 1080p we have been used to seeing. That means bitrates are going to be at least 4x as well (and likely higher!). So most people are going to want to store their media in the H265 codec instead of H264 to save on disk space. That codec alone is going to require 1.5x to 2x the Passmark score to transcode, as there aren't many native H265 players out there yet. Now factor in the 4x bitrates, and suddenly your i7 doesn't have enough resources to handle even a single stream of 4K H265 down to the 4Mbps the client requests.
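Rough numbers, just applying the multipliers above (the per-transcode Passmark figure is the commonly quoted rule of thumb, not a benchmark of any particular CPU):

```python
# Rule-of-thumb scaling for 4K H265, using the multipliers above.
hd_bitrate_mbps = 10                              # the 1080p source we've been discussing
uhd_bitrate_mbps = hd_bitrate_mbps * 4            # 4x the pixels -> at least 4x the bitrate
h264_1080p_passmark = 2000                        # commonly quoted figure per 1080p transcode
h265_1080p_passmark = h264_1080p_passmark * 2     # 1.5x-2x penalty for H265 (upper end)

print(uhd_bitrate_mbps)          # ~40 Mbps source to push down to a 4Mbps client
print(h265_1080p_passmark * 4)   # ~16000 Passmark, a rough guess for one 4K H265 transcode
```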
Just setting a cap for your users isn't an all-encompassing answer. We need adaptive bitrates, tied in with multiple versions, to give the best results. Results that can be measured, repeated, and quantified in a known manner. Gut feeling is going to have to go out the window, because it's going to lead to some pretty dramatic failures on people's systems as they try to do things their machines just can't do.
I'm not questioning that you feel you are getting better results. But how are they being measured, and are your results repeatable 100% of the time? Will they continue to be relevant if/when you go to 4K media? I don't believe they are repeatable, nor relevant for 4K, or even 1080p. I think you aren't looking at the whole picture, but filtering it down to what you want to see, not the reality your system is experiencing.
You're right, I'm relying on those watching the remote streams (including myself) to report back as to whether they are buffering every 10 seconds, buffering every few minutes, or not buffering at all. When I said "performance" I was really referring to buffering vs. not buffering.

The bottleneck in my setup is "up" speed (150/20 via Comcast; switching to FIOS 500/500 soon, so transcoding might not be as big of an issue), not the CPU (quad-core i7) and not the disk arrays (4 Pegasus Thunderbolt P6s). Hence smaller files via on-the-fly transcoding (and my "throttle" setting is set at 10 minutes) mean less data going through the pipe. In my setup, I have roughly 41 of 60 TB used, so converting the files to separate versions is not really a viable option, and the CPU is the cheapest hardware in the chain, and thus the easiest to replace if it fails. My transcoding goes to a RAM drive, so writes aren't actually physical disk writes. Those streaming remotely are willing to sacrifice video quality to avoid buffering, so most turn those settings down pretty low (I use 720 or 320 kbps from a hotel on a flat-screen TV via Chromecast).

Now keep in mind I have been running this server in pretty much this same configuration 24/7 for the past several years (the library has obviously grown over that time, but the hardware was pre-sized for this to happen). You did mention heat, and yes, heat is bad for any electronics, but my equipment is in a climate-controlled room in a cooled rack-mount system (my own mini data center, if you will).
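To put rough numbers on why the upload side is the limit (assuming every stream actually uses its full bitrate):

```python
# Rough stream counts for my current 20 Mbps upload vs. the upcoming 500 Mbps line.
def max_streams(upload_mbps, per_stream_mbps):
    return int(upload_mbps // per_stream_mbps)

print(max_streams(20, 10))   # 2 untouched 1080p/10Mbps Direct Play streams on the 150/20 line
print(max_streams(20, 2))    # ~10 streams if everything is transcoded down to ~2 Mbps
print(max_streams(500, 10))  # ~50 Direct Play streams once the FIOS 500/500 is in
```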
Now don't get me wrong, I agree with you that the ultimate solution is adaptive bitrates like those used by Netflix, Amazon, and Hulu. The ability to have networked servers handle additional load balancing might also be helpful, but one step at a time.
I use netdata, which is a relatively new Linux app for monitoring my system. It shows me CPU load and gives a breakdown of the load by application, configurable via a configuration file. It runs 24/7 and also gives info on network traffic through the NAS, disk traffic, etc.
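If anyone wants to pull the same numbers programmatically, netdata also exposes its charts over a local REST API. Here's a quick sketch; the port, chart name, and field layout may differ depending on your netdata version and configuration:

```python
# Sketch: fetch the last minute of system CPU data from a local netdata instance.
import json
import urllib.request

url = ("http://localhost:19999/api/v1/data"
       "?chart=system.cpu&after=-60&format=json")

with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

print(data["labels"])   # column names: "time" followed by each CPU state (user, system, ...)
print(data["data"][0])  # one sample row: a timestamp plus a value per state
```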
Here are a couple of screen caps of Plex processes in use on my NAS. Both of these are the same resolution and bitrate (1080p/7Mbps), but one is direct played, the other is transcoded down from a 1080p/10Mbps. This illustrates exactly what I've tried to explain above:
Direct Play:
Transcoded:
As you can see, there is significant load for the transcoded file until it reaches the throttled state; after that, the work drops to basically maintenance levels, staying far enough ahead of the actual timeline of the movie. These were captured on an i3-4330 CPU with a Passmark score of just over 5K. The largish spikes in the Direct Play capture are from opening the media info page to verify the stats of the file while I prepared the capture. (I did this a couple of times, to see how much just opening the media page in the web app actually affected resources.)
If you notice the scale on each capture, you start to realize how much impact transcoding can actually have on a system. There is significant CPU usage during a transcode, meaning heat, disk reads/writes, and an overall busy system that might not be able to handle requests if the CPU is busy doing something else. I know this CPU can handle 2+ transcodes of 1080p/10Mbps media. But why would I want to put that kind of strain on the system?
I'm not going to debate storing multiple versions, and how having a 2nd or even 3rd version of your media at a lower bitrate just makes sense long term. If you want, we can get into it, and I can show you how to make a working bitrate limit with the tools we already have in Plex. I think I have already covered it in this thread (but I could be wrong).
If you are really interested in doing it, send me a PM and I'll go into details. But I warn you, it's going to require another drive or two to handle the extra storage. It's also going to require making at least one other library, and larger systems will likely take a while to make the required versions of the media. It's still more than doable; it's just going to take an investment of time and money.
MikeG6.5, I do understand what you are saying, and you obviously know a lot about this topic (and your sharing that with all of us is greatly appreciated). I am curious what your "Transcoder Quality" is set at? Mine is set to "Faster Encoding," as video quality isn't really as important to the remote users as a "smoother ride" is. When I set it to "Make my CPU hurt" it cripples my machine, but on the lowest setting it seems to run OK. This is a dedicated box running only the PMS software aside from the native OS X processes.

DASD is more expensive when working with RAID arrays, so keeping multiple versions is just not as cost effective as running the CPU into the ground and then replacing it with the latest and greatest when that happens (which hasn't happened yet on this one I've been running since 2012, knock on wood). You have me curious as to what my actual metrics are, so I may look into some software to measure this. I've been kinda hoping to replace this CPU by next year with a 12-core Mac Pro (at which point I will care about not killing the CPU).
So would you say that the best solution right now would be to increase the size of the outbound pipe and try to direct play or direct stream whenever possible?
Sending you a PM, as some people may not appreciate the hijacking of this thread…
At the risk of being spammy… I'll add my +1 here as well; would love to have this.
Iāve got the CPUs, not the bandwidth. I want forced transcoding for all users.
@MikeG6.5 said:
@justinnjess said:
Force transcoding for all remote addresses would be great

Only if you don't want your server to handle more than a couple of streams at a time. If you share out to just a few people, and they only use a few streams total, then this will work for you. If you share out to more than 10 or 20, and they all tend to hit at roughly the same times, this will never give them a good experience.
The only way to ensure everyone views everything without buffering is to provide them with alternate versions that can be selected on the server side, for the connection each user has. The client negotiates the speed with the server, the server looks to see if the right bitrate is available, and only transcodes if it can't find a version that fits the requirements.
Transcoding is really CPU intensive. And those with smaller CPUs aren't able to handle even one transcode, let alone 2 or more.