Streaming broke recently for VC-1 files...

I can’t quite figure out what changed, but I recently upgraded Plex and VC-1 files can no longer stream remotely; they stutter and pause continuously. I downgraded Plex Media Server and nothing changed. I am back to running 1.2.3.2914.

Yes, these worked before… I’m running my Plex server virtualized; the host processor is a Xeon E3-1276 v3. I’m only trying to transcode a single stream, with no other processes running.

Please help…

I have narrowed it down to VC-1 files as everything else works fine.

It helps to have only a single stream running when you reproduce it. If you don’t have the log files from when it happened, would you please recreate the event? When you do, please grab the whole bunch, tar or zip them, and attach them here with your next post. We’ll take a look and see what’s happening.

What level of logging should I have enabled?

I tested using my cellular connection.

The issues start at 00:58 on 10/29, which is when I logged in and attempted to stream a file.

Lots of these appear:

Oct 29, 2016 00:58:36.017 [3476] WARN - Held transaction for too long (…\Sync\SyncItemGenerator.cpp:139): 0.703125 seconds


I narrowed it down further… I don’t have these issues while streaming locally, which also requires transcoding; it only happens when streaming remotely…

Look a little further down your logs and you will find the real reason.

Oct 28, 2016 15:00:35.146 [0396] WARN - SLOW QUERY: It took 203.125000 ms to retrieve 7 items.

This is telling you that the database itself (internally, not the file on disk) is fragmented and needs optimizing. This happens when a lot of media is added, removed, or moved around in a short time.
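I can’t say exactly which statements the built-in optimize runs, but a standard SQLite defragment-and-reanalyze pass looks roughly like this; a minimal sketch, assuming the default Windows database path and that the server is stopped before you touch the file:

```python
import sqlite3
from pathlib import Path

# Default library database location on a Windows PMS install (assumption; adjust
# if you have relocated your Plex data directory). Stop Plex Media Server first.
db_path = Path.home() / ("AppData/Local/Plex Media Server/"
                         "Plug-in Support/Databases/com.plexapp.plugins.library.db")

con = sqlite3.connect(str(db_path))
con.execute("VACUUM")    # rewrite the file, removing internal fragmentation
con.execute("ANALYZE")   # refresh the statistics the query planner relies on
con.close()
```

The Scheduled Tasks optimize should be doing the equivalent nightly; running something like this by hand mainly tells you whether fragmentation is really the culprit.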

If you’re not familiar with where you do this:

I’ve run the optimize a few times… I just ran it again… Isn’t this set to automatically happen during nightly maintenance?

Anyway, I tested again after optimizing. I still get a ton of the “Held transaction for too long” messages…

Settings - Server - Scheduled Tasks is where you enable database optimization.

@natelabo said:
I’ve run the optimize a few times… I just ran it again… Isn’t this set to automatically happen during nightly maintenance?

Anyway, I tested again after optimizing. I still get a ton of the “Held transaction for too long” messages…

You have the same problem that I had. Actually, it’s more of a problem with Plex. I had a PM discussion with a Plex dev and we confirmed that even a rebuild of the database would not help.

The problem is with your sync items; or rather, that Plex runs pretty complex queries against them. In my case, the combination of a rather big library (the database is almost 500 MB) and around 70 different sync jobs caused this problem.

It turns out Plex runs these queries almost every time a transcode is started, at least in my case. For me this caused a CPU spike from pms.exe on the server, which on my meager dual core was enough to make the transcode stutter. Obviously your CPU is more powerful, but maybe checking this wouldn’t be a bad idea? Maybe the transcode is on the same core? I think I’ve heard VC-1 transcodes are limited to one core, so maybe that’s why? Just guessing.
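If you want to check that, here’s a rough sketch using the third-party psutil package; the process names are the ones a Windows install typically shows (assumptions on my part, adjust as needed):

```python
import time
import psutil  # third-party: pip install psutil

# Typical process names on a Windows PMS install (assumption; adjust as needed).
WATCH = {"Plex Media Server.exe", "PMS.exe", "Plex Transcoder.exe"}

procs = [p for p in psutil.process_iter(["name"]) if p.info["name"] in WATCH]
for p in procs:
    p.cpu_percent(None)  # prime the per-process counters

for _ in range(60):      # sample for about a minute while you start a stream
    time.sleep(1.0)
    for p in procs:
        try:
            # 100% here means one full core is busy; watch for the server
            # pegging a core at the same moment the transcoder needs it.
            print(f"{p.info['name']:>24}: {p.cpu_percent(None):6.1f}%")
        except psutil.NoSuchProcess:
            pass
```

Re-run it once playback has actually started so the transcoder process is in the list; if the server eats a full core right when the transcoder ramps up, that’s the smoking gun.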

Anyway, at the end of the day I just deleted my sync items and that made the problem go away. Let’s hope you don’t have to, but it kinda seems like the same thing.

What I am finding is that there are file types that don’t seem to multithread well. I have Blu-ray rips that are MKV VC-1 / DTS and only use one core of a six-core VM… while other MKV Blu-ray rips seem to use multiple cores.

This behavior is consistent across real-time transcoding as well as syncing. I have not set the Brain, and this is on my internal network.

MANY of these issues, I believe, started during the NEW Plex transcoder transition… I never had issues running 0.9.xx, but once I ‘moved up’ to Plex Pass and the 1.x releases, the wheels came off the bus.

Here are the issues as I see them from afar…

First and foremost… the ‘project’ seems far too loosely managed from a development-cycle standpoint, and software QA seems nonexistent. It’s a code ■■■■ and throw it at the wall (Plex Pass users) to see what sticks.

Plex needs to define what formats and containers the backend can both ingest and spit out… from both a transcode and direct stream/play standpoint.

The second issue is that Plex has not had a front end since abandoning PHT. Let’s face it… there are way too many people using clients that Plex no longer controls… how can you code a backend when there is no defined target to stream to? Kinda difficult…

From a user’s standpoint, there are way too many confusing and poorly documented settings and options scattered about the server interface, both in the server settings and the individual library settings… I have been using Plex since the beginning, so I have grown up with these and know pretty much how to squeeze that lemon, but many noobs here don’t… More importantly, the code should have matured by now to do away with a lot of these ‘developer test’ options and make the GUI more intuitive and user-friendly…

PHT is so far out of date with current 1.X server code that most folks should stop using it…

OpenPHT is a different story, but here again I think Plex is doing things and changing things on the backend that the devs over at OpenPHT are never made aware of… once again, disconnects will happen between the two codebases…

Plex’s ongoing, seemingly lifetime beta/alpha of its only home-theater player, PMP, is grossly behind in features, and the mobile platforms seem like the only ones actively being improved…

Got news for Plex… most people this into media are more interested in the experience on their $2,000 TVs than on their iPhones…

This project needs to get its priorities straight…

It needs to publish to the user base its standards for the video formats that it is ENGINEERING its product to handle… and a definitive guide to its media devices and players as well…

Then actually TEST your products against those formats BEFORE release, rather than pushing updates every other day with errors and bugs that the USER base has to straighten out for you…

These are just a couple of rants from someone who did software engineering for the DoD…

I love Plex and will remain a diehard fan, but you need to up your game and become more professional now that this is no longer a FREE effort but one that charges $150 for its product…

@ChuckPa said:
Settings - Server - Scheduled Tasks is where you enable database optimization.

I already had this set… Thanks for checking!

@dragonmel said:
It needs to publish to the user base its standards for the video formats that it is ENGINEERING its product to handle… and a definitive guide to its media devices and players as well…

Then actually TEST your products against those formats BEFORE release, rather than pushing updates every other day with errors and bugs that the USER base has to straighten out for you…

I just wanted to quickly comment on this, because it’s a fairly unrealistic thing to ask for. There are an insane number of different codecs and formats, as well as a crazy number of various flavors/versions of encoders (not to mention literally millions of different possible settings for an encoder). We actually do have a fairly broad set of media samples we use to test our products with, but it’s truly impossible to capture all the different nuances which can – sadly – contribute to issues.

The best way to help us is to make (small) samples of files which demonstrate specific issues. Then we can test with them, and see what’s going on!
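For what it’s worth, the usual way to cut such a sample without disturbing the encode is a stream copy; a rough sketch along these lines (the file names are placeholders, and ffmpeg needs to be on your PATH):

```python
import subprocess

# Copy 60 seconds of the problem file without re-encoding, so the sample keeps
# the exact VC-1 bitstream (and container quirks) that trigger the issue.
subprocess.run([
    "ffmpeg",
    "-ss", "0",                 # start point; move it if the problem shows up mid-file
    "-i", "problem_movie.mkv",  # placeholder input name
    "-t", "60",                 # sample length in seconds
    "-c", "copy",               # stream copy: no transcode of video or audio
    "sample.mkv",
], check=True)
```

Because nothing is re-encoded, the sample reproduces the original encoder’s output exactly, which is what matters for this kind of report.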

@d2freak said:
Anyway, at the end of the day I just deleted my sync items and that made the problem go away. Let’s hope you don’t have to, but it kinda seems like the same thing.

I removed all sync items. Are you referring to items that sync to devices? The problem still exists…

I’m not sure how large my database is… but my library consists of 60,000 songs, 1,200 movies, and 500 TV shows.
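For anyone who wants to check theirs, the size can be read straight off the database file; a minimal sketch, assuming the default Windows data directory (the path is an assumption for a relocated install):

```python
from pathlib import Path

# Default location of the PMS library database on Windows (assumption).
db = Path.home() / ("AppData/Local/Plex Media Server/"
                    "Plug-in Support/Databases/com.plexapp.plugins.library.db")

print(f"{db.name}: {db.stat().st_size / 1024 / 1024:.1f} MB")
```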

I’ve checked core usage and it often does not go over 50%… Actually, while it’s not working, PlexPy reports that playback is paused. Transcoding speed never goes above 1.

Devices are now reporting that the server doesn’t have enough power to convert.

This wasn’t an issue a week ago… Super frustrated.

I’m probably going to spin up a test VM and see if it’s just this server install…

Thanks for your insight.

@dragonmel said:
MANY of these issues, I believe, started during the NEW Plex transcoder transition… I never had issues running 0.9.xx, but once I ‘moved up’ to Plex Pass and the 1.x releases, the wheels came off the bus.

I love Plex and will remain a diehard fan, but you need to up your game and become more professional now that this is no longer a FREE effort but one that charges $150 for its product…

I’m not trying to make this a rant about Plex and their seemingly Google ways (i.e., a product always in beta)… I enjoy the product very much.

I just wish that my core functionality was working as it was a week ago.

I guess that IS my point…

Their CORE product is broken…

The backend has been flaky since leaving 0.9.12-16 or so.

The front-end development is a disaster, and anyone with a home theater PC is either using a way-outdated PHT that was abandoned years ago, or OpenPHT, a third-party effort that (especially on the Pi) works much better than the Plex Pass-only PMP front end.

Non-Plex-Pass users have NO official PC Plex client available at all…

I keep seeing these posts by Plex asking for feature wish lists… OK, here is mine:

A backend that WORKS and is QA’d before leaving your shop.
A historical download portal, so I can choose what level of broken I want to download and install.
A larger push at giving paid users a front-end PMP that is MORE functional and EASIER to use than the old PHT… the current PMP leaves a lot to be desired.

@dragonmel

If I might offer my personal viewpoint here. This is me, not Plex, speaking only.

I have spent my career (30 years) doing DoD systems and software engineering, so I understand where you’re coming from: everything from support systems, to mission-critical, to ‘the other stuff’. I’d like to offer my personal perspective.

I had a hard time translating our world of software engineering when I finally left and came out to the commercial world. Having been both the engineer in the trenches and the guy who has to sign off on it, I know the process well. We’re used to “it must meet the spec perfectly” when it goes out the door in almost every case. What I’ve had trouble with is learning to relax, even though yes, it’s a P.I.T.A. The other thing I’ve had to put in perspective is the challenge of mapping technology to product development, to resources, to schedule. Would you or I have even considered developing to a moving specification or changing needs from the customer(s)? No, we wouldn’t. We’d draw a line in the sand, spec it out completely, develop to it, test it, fix it, then deliver it BEFORE considering the next phase/milestone. That’s not how it works here. If it did, PMS wouldn’t have a third of what it is capable of.

Since I know a little about the company, what these guys do is amazing. You and I couldn’t even think of pulling this off in our world with the same level of resources and schedule. These guys make a Skunk Works team look lazy.

I do respectfully hope, before ranting further about a product and DoD software engineering, that you have also voiced your objections to a certain aircraft manufacturer for putting guns in an aircraft which the firmware can’t fire when the pilot pulls the trigger; for blaming changing/conflicting needs from different customers, and other such ‘not-created-here’ issues, for being years behind schedule and billions over budget on an aircraft whose predecessor was functionally scrapped in development due to screwups. (I was there; I saw those screwups. You don’t put all four strands of 1553, your ‘redundancy’, in the same wire bundle, located such that a single shot can take out all four.) Oh wait, that’s how DoD contracting works now… screw it up and get paid anyway. Equally, I do hope you weren’t a member of one of those DoD contractors.

If you feel you have skills which will contribute to Plex, send them your resume. I did.

/personal viewpoint

puts on kevlar in preparation for flame

I think the number of features that Plex pushes out is great. All the new developments are brilliant… but none are anywhere near the level they should be.

Plex tries to be everywhere. It has limited resources and should focus on a more complete delivery of a smaller product set. That’s my huge gripe… invest the business’s spare resources in maturing the existing product set before developing new products.

Once again you are making my point for me…

First and foremost, I am trying hard not to bash Plex; again, I have been here since almost the very start… but back then it was free, and helping them develop and test was a community effort.

However, that is the latest business model these days… look how many companies have done this over the last 10 years… get others to develop and test your ‘community’ project, then close it off and start charging… or sell it to a commercial concern… not cool to those who ‘worked’ for free…

But as you so precisely nailed it… this has nothing to do with DoD specs… it has to do with this generation’s idea of what engineering is…

After coding for the AF in the early ’90s, I got to fly jets for 15 years of my career… the aircraft we built in the ’50s were more reliable and better (smarter) engineered than the ones we have today… it has a bit to do with complexity… but to be honest, Plex, just like some DoD projects, got too complex because of a lack of engineering’s ability to plan and simplify before starting… You have to plan before you code; today everyone just wants to sit at a console and code / patch / code / get complaints / patch / code… rinse, repeat… There is no planning; that is the boring, hard part… good systems analysis first, coding in the middle, and, most important, a testing plan that tests at a minimum the critical functionality.

Your point about pulling redundancy along the same part of the airframe is a perfect example… the spec calls for four redundant channels… a smart engineer knows that means the four channels should be survivable and separates them… a lazy one pulls all four in the same bundle… both met the spec, but only one is a solution…

This is not just a problem at Plex… VMware of late is just as guilty… their code is riddled with bugs, their current release is not manageable with their own front-end tools, the different management interfaces all have different layouts, and the performance-monitoring charts all have different scales… disk activity in the host app might be KB/s while the VCSA appliance displays the same metric in MB/s… just poor design; the product should be seamless… just a lack of planning and coordination…

Now… back to Plex… if you can’t make the basics work… then stop development on new features and fix the base system first… or have a new team and a parallel code and release base.

If you are going to build new features, then baseline the working code and make that available to those who don’t want to have their install destroyed.

Right now the only options for a Plex Pass user are the bleeding edge of instability with the Plex Pass version, or the general release without Plex Pass functionality…

If you can’t keep up… then make the previous versions or revisions available…

Fix bugs in the baselined prior build while also advancing other forks…

So let’s say that we are happy-ish with 1.2.3.xxxx… then you call 1.2.3.xxxx the baseline and keep bug-fixing only that baseline while you continue to release others that may change actual functionality… but you have to be able to download prior versions if a full test plan is not in place, so that downgrading to official code is possible… Having to pull previous versions from third parties is too risky these days, especially when you have to enter Plex Pass credentials… so Plex must start allowing us to download past versions…

This oversimplified download strategy is not going to serve customers when even basic things get released with issues…

A perfect example is the latest snafu with the web interface for editing tags… that should have been caught… it has nothing to do with media versions or moving targets… it is a basic core function that was broken, never tested, and pushed out to the install base… So now only online users with accounts and access to plex.tv can fix and edit tags until the next release, since there is no repo from which to download the working prior version…

Oh, and the prior version corrupted people’s databases with incorrect XML data during media analysis, so we have to go back two or three releases to get to one that worked… See my point…

You and I are of the same cloth. I’m also a retired AF “Herc” driver, and I returned to my engineering roots after those days. Software engineering isn’t what it used to be. Hell, NONE of engineering is what it used to be. Schools teach ‘software’ by throwing it at the kids; “Engineering” is just the word printed on their degree. You and I can probably look at code and directly count the CPU cycles it takes to do something. That’s not taught anymore. We knew how to write a DFSM in our sleep. Not anymore. When something didn’t fit, we “worked smarter, not harder”. Most of the new-hire engineers on my teams were of the belief “get a bigger processor” or “add more memory”… Times changed and, in many ways, not for the better. Yeah, it sucks. It is what it is.

In spite of all that, this team is still freaking awesome. The guys will tell you, I speak out too, and not always in ways which are ‘comfortable to hear’. :smiley: I will say this: every valid, objective point which is made is taken seriously. I also pour my time back into making things better wherever I can.

Regarding your assertion of a “basic core function that was broken, never tested, and pushed out to the install base”: it isn’t entirely true. It got tested. Was it tested for every conceivable combination? That’s not possible until the fictitious day when Plex has amassed every conceivable variable to test against. You and I had the luxury of FQT for each customer. Each customer got their own set of specs, config, build, and, in most cases, team.

Procedure here has become quite formal since 1.0.0; the rapidly changing version numbers are evidence of that. When I find and fix an issue which affects only the NAS units, it’s written up and submitted, and my sandbox with the changes is reviewed by two other teams for potential impact to any other platform before being put into a build. First-pass testing is done before it is even given to QA. QA has final say on it (and they’re not shy about rejecting it either).

Something else you and I had was a SERIOUSLY constrained scope. If you or I say “Repeat sensor video to both left and right seat displays”, we already know how constrained that is. We also had control over the hardware. None of that exists here. Every time Plex says “The minimum is:”, somebody bitches. How many different video formats are there? How many different tools are out there to encode video? Does everyone encode their video to exactly the same standard with the exact same procedure? The number of variables is insane, and the permutations even worse.

The point of all I am writing here is this:

While you and others, just like me, get annoyed or even frustrated when things break due to a regression, let’s take a proactive posture. Let’s try to create the definitive test case showing how it can most easily be reproduced, and then submit it. Having a 1-2-3 procedure is a shitload better than “it’s broke” and throwing log files at someone like me to figure out WTH is happening.

Yes, Plex is a product, but we’re also a family. Yeah, kinda dysfunctional at times, but we get it done and pull together when it counts and do our part to make it better.

Sound like a plan?

@natelabo said:
I removed all sync items. Are you referring to items that sync to devices? The problem still exists…

I’m not sure how large my database is… but my library consists of 60,000 songs, 1,200 movies, and 500 TV shows.

I’ve checked core usage and it often does not go over 50%… Actually, while it’s not working, PlexPy reports that playback is paused. Transcoding speed never goes above 1.

Regarding sync items, I mean the sync “jobs”, I guess: the things you set up to create a sync, like when you go to Plex Web and click the Sync tab in the upper right (under “Status”). I had around 70 jobs there (one for each music artist I had synced to Android). I don’t know if the number mattered, but I removed all of the sync jobs just to be sure, and my problem went away. And yeah, our libraries look more or less similar size-wise.

To be clear, I’m talking specifically about the “Held transaction for too long” messages from the sync item generator. When this message appeared in my logs, Plex Media Server.exe ate up an entire 3.9 GHz core for about a full minute. That left only one core for transcoding (Plex Transcoder.exe), and it would sometimes not be enough to keep the transcoding speed over 1.

So, do you still get the message in the log after deleting your sync items? You say that your transcoding speed never goes above 1; that tells me your server can’t transcode the file fast enough, which could be the case if the sync items cause a spike on the same core as the transcode.
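A quick way to check is to count those warnings in the current server log right after a test stream; a minimal sketch, assuming the default Windows log location (adjust the path for a relocated install):

```python
from pathlib import Path

# Default PMS log location on Windows (assumption; adjust for your install).
log_path = Path.home() / "AppData/Local/Plex Media Server/Logs/Plex Media Server.log"

held = slow = 0
for line in log_path.read_text(errors="ignore").splitlines():
    if "Held transaction for too long" in line:
        held += 1
    elif "SLOW QUERY" in line:
        slow += 1

print(f"'Held transaction for too long' warnings: {held}")
print(f"'SLOW QUERY' warnings:                    {slow}")
```

If the counts stay high even with all sync jobs removed, then the sync item queries probably aren’t your bottleneck after all.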

Otherwise, since it’s only happening with VC-1 files, and since that is the only format I know of that uses just one core for transcoding, we have to assume that either a single core of your CPU is too slow for your particular transcode, or that something suddenly demands the attention of that core at the same time. Which is why I keep nagging about the sync items :slight_smile: