The transcode is stopped if the player exits playback. The issue is more that the hardware transcoder typically runs 2-10x faster than playback, so it can get way ahead of an active session. This is probably a good thing for seeking during playback, but maybe a bad thing for temp space usage versus the old behavior of throttling the transcoder once it got more than a few minutes ahead of the playback position.
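To make that pacing concrete, here's a minimal sketch of playback-relative throttling with pause/resume hysteresis. Every name and number in it is illustrative, not Plex's actual code:

```python
# Hypothetical sketch of playback-relative throttling; none of these
# names or numbers come from Plex, they just illustrate the idea.
THROTTLE_BUFFER_SECS = 180   # pause once the transcode is this far ahead
RESUME_BELOW_SECS = 90       # hysteresis: resume only once the lead shrinks

def should_pause(transcoded_pos: float, playback_pos: float, paused: bool) -> bool:
    """Return True if the transcoder should be paused right now."""
    lead = transcoded_pos - playback_pos
    if not paused:
        return lead > THROTTLE_BUFFER_SECS   # far enough ahead: pause
    return lead > RESUME_BELOW_SECS          # stay paused until the lead shrinks

# A hardware transcoder running ~8x realtime against 1x playback
# hits the pause threshold within half a minute:
transcoded, played, paused = 0.0, 0.0, False
for _ in range(60):
    if not paused:
        transcoded += 8.0   # HW transcode produces ~8s of output per second
    played += 1.0           # the viewer consumes at 1x
    paused = should_pause(transcoded, played, paused)
print(f"after 60s: transcoded {transcoded:.0f}s, played {played:.0f}s, paused={paused}")
```

The two thresholds (pause high, resume low) are just the usual trick to avoid flapping between states every tick.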
I'm aware. If it didn't stop transcoding when the player is closed, that would be doubly stupid, lol.
I tested this last night on a P4. I found that if I started 4-5 streams right after each other, it would stall. However, if I left around 30 seconds between opening streams, I managed 15-16 1080p-to-720p transcodes and it still had room to start more (this was where I stopped, as having 2 Shields, 2 tablets, a phone, and 10 windows open on my PC seemed ridiculous). I honestly don't think that part of it is a big problem. Bear in mind that once it has finished converting a file (5-10 min depending on the file) it stops running NVENC. In stress testing I noticed it would have xx transcodes running while NVENC only showed X.
I can agree that the transcode cache filling up is excessive, but it's also great for users with large caches/RAM disks, as it stops the disk from being read throughout playback, almost like a tiered storage setup.
There's already a setting for how much you want to transcode before throttling… your argument would only make sense if that setting didn't exist.
Correct, which is why in an earlier reply I stated: "however, on a bug level the transcode buffer should respect the option of 'Transcode default throttle buffer' for those with smaller RAM disks."
In your previous comment, you basically made an argument that the new way would be great. But that argument is invalid because nothing changes in that regard; it's the same as before. It's like they cancel each other out WHILE adding the disk space problem.
If you're going to make a positive argument about the new change, it has to be something new that wasn't available before. You can't just say "it's good because now I'm able to do this" because you were able to do that before…
This is now ignored as of the latest beta - that's the issue!
I think you should re-read everything I have written. I said I like the idea of a "go nuts" toggle; setting the transcode buffer to 99999999 seconds is a bit rubbish. I agreed with all the sentiments that the transcode buffer should be respected on all counts.
Yes, he was trying to spin the problem as a positive, lol. It isn't a positive if it wasn't a problem beforehand. You can't say this is better for people with large caches, because it's not better; it's exactly the same as before…
But wait, there's more! We now have this problem of disk space issues and crashing, lol. So no, you can't spin this as somehow better.
ALL:
Folks, Engineering has heard the feedback.
They are re-evaluating the change and the unexpected impact it's had.
While I'd like to share what's being discussed, I can't until Engineering has finalized their plans and put a date on it.
What I can say is that I like the current proposal.
Again, I will share more when it's finalized and they've given permission for me to share those details.
Is it OK to ask what problem there was with the buffer throttle limit in regard to transcodes in the first place?
The original problem with the throttle was that it never actually stopped the transcoder.
If you look at your old logs, you will see "sloth" mode.
It would continue to transcode no matter what was actually happening.
Left paused long enough, or with a fast enough GPU (as is common now), the temp space (/dev/shm) fills up and fails just as it does now.
As it filled, it would purge blocks. Most of those blocks were either just ahead of or just behind your playback position, which made seeking difficult and failure-prone. At minimum it introduced considerable latency while those now-missing blocks were transcoded again.
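To picture why that hurts, here is a hypothetical illustration (not Plex's actual code; the `Segment` type and all numbers are made up) of purging the oldest blocks when a fast transcoder has raced far ahead:

```python
# Hypothetical illustration (not Plex's code) of why purging blocks
# near the playback position made seeking failure-prone.
from dataclasses import dataclass

@dataclass
class Segment:
    start: int       # media timestamp (seconds) the segment begins at
    created_at: int  # when it was transcoded, relative to session start
    size: int        # bytes held in tmpfs

def purge_oldest(segments: list[Segment], bytes_needed: int) -> list[Segment]:
    """Evict the oldest-created segments until enough space is freed.
    With a fast transcoder, the oldest segments are exactly the ones
    nearest the playback position, so a seek backward (or a rebuffer)
    now misses and those seconds must be transcoded again."""
    remaining = sorted(segments, key=lambda s: s.created_at)
    freed = 0
    while remaining and freed < bytes_needed:
        freed += remaining.pop(0).size
    return remaining

# An ~8x transcoder after 100s of playback has segments out to ~800s.
cache = [Segment(start=t, created_at=t // 8, size=1) for t in range(0, 800, 4)]
kept = purge_oldest(cache, bytes_needed=25)
print(f"earliest surviving segment starts at {kept[0].start}s; "
      f"playback is at ~100s, so any seek backward re-transcodes")
```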
As is now obvious, that first attempt doesnât work for all scenarios.
Again, given I cannot speak for Engineering:
- The discussion so far is going in a direction I think is a great step
- When I know their final design and can speak, you'll know.
This has me confused. The problem was that they weren't being throttled, so the fix was to explicitly not throttle them? Am I not understanding something here?
That's pretty much how it came out.
There are three types of video transcoding happening:
- Streaming transcoding (when you play a video/audio which needs it)
- Static job transcoding (when you optimize or download a video)
- DVR transcoding (when video is recorded by the DVR)
As I understand it:
- The goal was to improve static job transcoding when using HW transcoding.
- The implementation was a little too broad, in that it also removed throttling from streaming transcoding.
As we all see now, the removal of throttling for streaming also impacted the tmpfs transcoder temp buffer management logic, because HW transcoding is much faster than the normal video consumption rate (8x vs 1x playback).
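If it helps to picture the likely shape of the fix, here is a purely hypothetical sketch (not the actual Engineering change; every name is invented) of scoping the unthrottle to static jobs instead of applying it unconditionally:

```python
# Hypothetical sketch (not the actual Engineering change) of scoping
# the unthrottle to static jobs instead of applying it unconditionally.
from enum import Enum, auto

class JobType(Enum):
    STREAMING = auto()   # live playback session
    STATIC = auto()      # optimize / download job
    DVR = auto()         # scheduled recording

def throttle_enabled(job: JobType, hw_accelerated: bool) -> bool:
    """Only static HW jobs run unthrottled: there is no viewer to pace
    against, so letting the GPU run flat-out is pure win. Streaming
    keeps the playback-relative throttle so tmpfs cannot be overrun."""
    if job is JobType.STATIC and hw_accelerated:
        return False
    return True

assert throttle_enabled(JobType.STREAMING, hw_accelerated=True)
assert not throttle_enabled(JobType.STATIC, hw_accelerated=True)
```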
The first step of the Engineering changes looks good (the code).
I'll know more when the next build is produced and I can test it.
That does make more sense now. Thank you.
Thanks for the report and the discussion, @ChuckPa. I reported this in a separate thread about the Mac version of PMS, which stores the transcode chunks in a "Caches" folder in the user's profile. Glad your team is on top of it!
My thanks go out to all of you for being so prompt and cooperative in reporting it.
With everyoneâs help, it was easy to quantify, identify, and get it to Engineering for rapid attention.
Yeah, this makes it a lot clearer why this happened. Thank you.
I see a new beta release has been posted with this line in the notes.
- (Transcoder) Unconditional unthrottling of HW transcodes would affect streaming transcodes (PM-2703)
Does that mean this had been adjusted to not affect streaming transcoding and to only affect static job transcoding now?
I installed 1.41.4.9421 and am no longer having a problem with overrunning my RAM disk for transcoding.