Transcoding Throttle Removal Problems - An Example

The transcode is stopped if the player exits playback. The issue is more that the hardware transcoder typically runs 2-10x faster than playback, so it can get way ahead of an active session. This is probably a good thing when it comes to seeking during playback, but maybe a bad thing for temp space usage compared with the old way of throttling the transcoder once it got more than a few minutes ahead of the playback position.
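To make the old behavior concrete, here's a rough sketch (illustrative only, not Plex's actual code; `max_ahead_seconds` just stands in for a throttle-buffer-style setting) of what playback-relative throttling amounts to:

```python
# Illustrative sketch of a playback-relative throttle; not Plex's actual code.
# max_ahead_seconds stands in for a "throttle buffer" style setting.

def should_throttle(transcoded_pos_s: float, playback_pos_s: float,
                    max_ahead_seconds: float = 180.0) -> bool:
    """Pause the transcoder once it is more than max_ahead_seconds
    ahead of the viewer's current playback position."""
    return (transcoded_pos_s - playback_pos_s) > max_ahead_seconds

# Example: the transcoder is 10 minutes in while the viewer is at 2 minutes.
print(should_throttle(transcoded_pos_s=720, playback_pos_s=120))  # True -> pause
```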

1 Like

I'm aware. If it didn't stop transcoding when the player is closed, that would be doubly stupid, lol.

I tested this last night on a P4. I found that if I started 4-5 streams right after each other, it would stall. However, if I left around 30 seconds between opening streams, I managed 15-16 1080p-to-720p transcodes and it still had room to start more (this was where I stopped, as having 2 Shields, 2 tablets, a phone and 10 windows open on my PC seemed ridiculous). I honestly don't think that part of it is a big problem. Bear in mind that once it's finished converting a file (5-10 min depending on the file) it stops running NVENC. In stress testing I noticed it would have xx transcodes running, however NVENC only showed X.
I can agree the transcode cache filling up is excessive, but it's also great for users with large caches/RAM disks, as it stops the source disk being read throughout playback, almost like a tiered storage setup.

There's already a setting for how much you want to transcode before throttling.
Your argument would only make sense if that setting didn't exist.

Correct, which is why in an earlier reply I stated: "however on a bug level the transcode buffer should respect the option of 'Transcode default throttle buffer' for those with smaller RAM disks."

In your previous comment you basically made an argument that the new way would be great. But that argument is invalid because nothing changes in that regard; it's the same as before. It's like they cancel each other out WHILE adding the disk space problem.

If you're going to make a positive argument about the new change, it has to be something new that wasn't available before. You can't just say "it's good because now I'm able to do this," because you were able to do that before.


1 Like

This is now ignored as of the latest beta; that's the issue!

I think you should re-read everything I have written. I said I like the idea of a "go nuts" toggle; setting the transcode buffer to 99999999 seconds is a bit rubbish. I agreed with all the sentiments that the transcode buffer should be respected in all cases.
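In other words, something like this purely illustrative sketch (the names here are made up, not real Plex settings): honor the configured buffer, and make "unthrottled" an explicit switch rather than a 99999999-second buffer.

```python
# Hypothetical illustration of the suggestion: keep the throttle buffer honored,
# and expose an explicit "unthrottled" toggle instead of abusing a huge value.
# Names (throttle_buffer_s, unthrottled) are invented for this sketch.

def effective_limit(throttle_buffer_s: float, unthrottled: bool) -> float:
    """Return how far ahead (in seconds) the transcoder may run."""
    return float("inf") if unthrottled else throttle_buffer_s

print(effective_limit(60.0, unthrottled=False))  # 60.0 -> buffer respected
print(effective_limit(60.0, unthrottled=True))   # inf  -> explicit "go nuts" mode
```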

1 Like

Yes, he was trying to spin the problem as a positive, lol. It isn't a positive if it wasn't a problem beforehand. You can't say this is better for people with large caches, because it's not better; it's exactly the same as before.


But wait, there's more! We now have this problem of disk space issues and crashing, lol. So no, you can't spin this as somehow better.

3 Likes

ALL:

Folks, Engineering has heard the feedback.

They are re-evaluating the change and the unexpected impact it's had.

While I’d like to share what’s being discussed, I can’t until Engineering has finalized their plans and put a date on it.

What I can say is that I like the current proposal.

Again, I will share more when it’s finalized and they’ve given permission for me to share those details.

10 Likes

Is it OK to ask what problem there was with the buffer throttle limit, with regard to transcodes, in the first place?

3 Likes

The original problem with the throttle was that it actually never stopped.

If you look at your old logs, you will see ‘sloth’ mode.
It would continue to transcode no matter what was actually happening.

Left paused long enough, or with a fast enough GPU as is common now, the temp space (/dev/shm) fills up and fails just as it does now.

As it filled, it would purge blocks. Most of those blocks were either just ahead of or just behind where your playback was, which made seeking difficult / failure-prone. At minimum it introduced considerable latency as those now-missing blocks were transcoded again.
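If you want to watch this happening on a Linux host, a small sketch like the one below (assumption: your transcoder temp directory really is /dev/shm; adjust the path otherwise) will show the temp space being consumed during a session:

```python
# Minimal watcher for tmpfs usage; assumes the transcoder temp dir is /dev/shm.
import shutil
import time

def report(path: str = "/dev/shm") -> None:
    total, used, free = shutil.disk_usage(path)
    print(f"{path}: {used / 2**20:.0f} MiB used of {total / 2**20:.0f} MiB "
          f"({free / 2**20:.0f} MiB free)")

if __name__ == "__main__":
    for _ in range(10):      # sample ten times, five seconds apart
        report()
        time.sleep(5)
```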

As is now obvious, that first attempt doesn’t work for all scenarios.

Again, given I cannot speak for Engineering:

  1. The discussion so far is going in a direction I think is a great step.
  2. When I know their final design and can speak, you'll know.
8 Likes

This has me confused. The problem was that they weren't being throttled, so the fix was to explicitly not throttle them? Am I not understanding something here?

@jebrum

That’s pretty much how it came out.

There are three types of video transcoding happening.

  1. Streaming transcoding - (when you play a video / audio which needs it)
  2. Static job transcoding - (when you optimize or download a video)
  3. DVR transcoding - (when video is recorded by the DVR)

As I understand it,

  • The goal was to improve the static job transcoding when using HW transcoding.

  • The implementation was a little too broad in that it removed throttling from streaming transcoding as well.

As we all see now, the removal of throttling for streaming also impacted the tmpfs transcoder temp buffer management logic, because HW transcoding is much faster than the normal video consumption rate (8x vs 1x playback).
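To put rough numbers on that, here is a back-of-the-envelope estimate. The 8x speed, bitrate, and tmpfs size are illustrative assumptions, as is the simplification that segments behind the playhead get reclaimed at playback speed:

```python
# Back-of-the-envelope: how fast a tmpfs transcode temp dir fills when the
# transcoder runs faster than playback. All numbers here are illustrative.

def seconds_until_full(tmpfs_bytes: float, stream_mbps: float,
                       transcode_speed: float = 8.0) -> float:
    """Net growth rate = output bitrate * (speed - 1), assuming playback only
    consumes (and frees) segments at 1x real time."""
    bytes_per_s = stream_mbps * 1_000_000 / 8          # Mbps -> bytes/sec
    net_growth = bytes_per_s * (transcode_speed - 1)   # bytes/sec of growth
    return tmpfs_bytes / net_growth

# Example: a 1 GiB /dev/shm and a 10 Mbps transcode output running at 8x.
print(f"{seconds_until_full(1 * 2**30, 10) / 60:.1f} minutes")  # ~2.0 minutes
```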

The first step of the Engineering changes looks good (the code).
I’ll know more when the next build is produced and I can test it.

3 Likes

That does make more sense now. Thank you.

Thanks for the report and the discussion, @ChuckPa. I reported this in a separate thread about the Mac version of PMS, which stores the transcode chunks in a “Caches” folder in the user’s profile. Glad your team is on top of it!

My thanks go out to all of you for being so prompt and cooperative in reporting it.

With everyone’s help, it was easy to quantify, identify, and get it to Engineering for rapid attention.

6 Likes

Yeah, this makes it a lot clearer why this happened. Thank you.

I see a new beta release has been posted with this line in the notes.

  • (Transcoder) Unconditional unthrottling of HW transcodes would affect streaming transcodes (PM-2703)

Does that mean this has been adjusted so that it no longer affects streaming transcoding and only affects static job transcoding now?

1 Like

I installed 1.41.4.9421 and am no longer having a problem with overrun of my RAM disk for transcoding.

1 Like