I don’t understand what makes you think I claimed you were asking for a Linux version. But since you called the lack of a Linux version humbug, I pointed out that you could just ask for a Linux version.
I merely tried to share something helpful, and you ended up accusing me of lacking comprehension and reading skills.
As I said, I’m amused by your attempt to be right, but I’ve never accused you of being wrong. I only pointed out that if you miss having a Linux version, you can simply ask the programmer, who provides the whole thing for free, whether he will provide one. As I said, what you make of that is up to you.
Where your entitled attitude and your insinuation of laziness come from - regarding a project that is under an open source license - only you can explain.
Since I don’t feel like I’m accomplishing anything meaningful with you here, this will be my last post in our shared discussion.
Oh, I don’t know. Perhaps it’s because of statements you made like “I forgot that asking for a linux version is already an obligation”. Nobody asked for a Linux version now, did they? Where do you come up with that? I was merely stating that I’m under no obligation to contribute to the project or to pester anybody for a Linux version. I merely said that there’s a Windows and a Mac version and no Linux version, and that sucks. You then got bent out of shape for unknown reasons.
And you can be amused to death, as one English rock star put it. I couldn’t care less.
So here’s the thing… we don’t just take an idea and run with it here at Plex. Nope, we try to make things magical. Take intro detection as an example. I am sure that a bunch of services with a specific set of titles can go through and pay someone to find the start and end timestamps of all credit sequences in their discrete library and build whatever metadata is needed for proper skipping of intros. Nice, but how did we solve it? We can’t afford the luxury of paying someone to do that for all titles everywhere in existence, and since we don’t know what is on servers we couldn’t even do that if we wanted to anyway. So some much smarter people than me figured out a way to fingerprint specific intro segments and then run that process at the server level and boom! A way to find intros on any title on any server, no matter the encoding or configuration. Sure, there are still a few spots where this can be improved, and we are working to make it even better all the time - but it is still a pretty amazing, and possibly even magical, feature.
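For the curious, the fingerprinting idea can be illustrated with a toy sketch - this is not Plex’s actual code, just the general shape of the technique: reduce each short audio window to a coarse hash, then look for a long run of consecutive matching hashes between two episodes, which is where a shared intro shows up.

```python
# Toy illustration of audio fingerprint matching -- NOT Plex's actual
# implementation, just the general idea. Each short window of audio is
# reduced to a coarse hash; a shared intro then appears as a long run
# of consecutive matching hashes between two episodes.

def fingerprints(samples, window=1000):
    """Hash each window by the sign pattern of its band energies."""
    prints = []
    q = window // 4
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        # split the window into 4 bands and compare neighbouring energies
        energies = [sum(s * s for s in chunk[i * q:(i + 1) * q])
                    for i in range(4)]
        prints.append(tuple(int(energies[i + 1] > energies[i])
                            for i in range(3)))
    return prints

def longest_common_run(a, b):
    """Return (offset_a, offset_b, length) of the longest matching run."""
    best = (0, 0, 0)
    for ia in range(len(a)):
        for ib in range(len(b)):
            n = 0
            while ia + n < len(a) and ib + n < len(b) and a[ia + n] == b[ib + n]:
                n += 1
            if n > best[2]:
                best = (ia, ib, n)
    return best
```

Real fingerprinters (Chromaprint, Shazam-style systems) hash spectral peaks rather than raw band energies, but the structure of the search - hash windows, find the longest matching run - is the same.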
Why am I telling this story? Mainly because we hear you on subtitles, and we think there is definitely a magical way to make this happen as well. And this isn’t just some theoretical idea - all I can say is that hackathons with incredibly talented engineers are more akin to magic shows than engineering presentations, and this is something we are very excited about heading into next year! Stay tuned!
The way Plex implemented this feature is completely useless for everyone who uses subtitles with the second audio stream: “This is accomplished by having the Plex Media Server analyze the primary (first) audio stream in the video file to detect when dialog occurs.”
My primary audio stream is always my native language. But I watch movies with the second audio stream, which is always the original language. If I want to enable auto sync, I can forget about it.
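For context, “detect when dialog occurs” in the quoted description could be as simple as energy-based voice-activity detection. A toy sketch of that idea - my guess at the general technique, not Plex’s actual algorithm:

```python
# Toy voice-activity detection -- a guess at what "detect when dialog
# occurs" might mean in practice, not Plex's actual algorithm. It just
# marks each fixed-size window of audio samples as loud or quiet.

def voice_activity(samples, window=400, threshold=0.01):
    """Return one bool per window: True where mean energy exceeds threshold."""
    active = []
    for start in range(0, len(samples) - window + 1, window):
        chunk = samples[start:start + window]
        energy = sum(s * s for s in chunk) / window
        active.append(energy > threshold)
    return active
```

Note that a detector like this sees only *that* someone is speaking, not *what* they say - which is why it would find dialog in a dub at roughly the same places as in the original track.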
Just wondering… even with a dubbed video, won’t there be dialogue in the same places?
Yes, I get that non-native subtitles will have segments for dialogue as well as certain on-screen texts. The way I’ve been understanding the feature, it shouldn’t be thrown off by that.
If you have specific situations where this is causing trouble, please share those.
As far as I know, there should at least be some kind of matching algorithm that not only recognizes when dialogue occurs but also compares the spoken dialogue with the text-based subtitles. Otherwise it would be extremely inaccurate. When I tried out this feature, I noticed that the subtitles and the spoken words were not in sync at all.
I can experiment further and see if I have simply misunderstood it.
Okay, I’ve tested it with a movie and an unsynced srt file. My test setup: a movie with two audio tracks. The first track is German and the second track is English.
I put in two separate srt files, one for the German audio track and one for the English audio track. Both are unsynced: one (English) is about 6 minutes off, and the other (German) is about 4 minutes off.
I enabled the auto sync feature while having the English audio track and the English srt selected. Turns out the feature does nothing.
Then I enabled the German audio track and the German srt. Works like a charm.
So it works the way I expected it to: only against the first audio track. Guess I will stick to syncing my subtitles beforehand with subsync.
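For anyone fixing a known offset by hand instead: shifting every timestamp in an .srt file by a fixed amount is straightforward. A minimal sketch (subsync does far more - it estimates the offset automatically; this only applies one you already know):

```python
import re

# Minimal sketch: shift every timestamp in an .srt by a fixed number of
# milliseconds (negative = earlier). Tools like subsync estimate the
# offset automatically; this only applies a known one.

TS = re.compile(r"(\d{2}):(\d{2}):(\d{2}),(\d{3})")

def shift_srt(text, offset_ms):
    def repl(m):
        h, mnt, s, ms = (int(g) for g in m.groups())
        total = max(0, ((h * 60 + mnt) * 60 + s) * 1000 + ms + offset_ms)
        s, ms = divmod(total, 1000)
        mnt, s = divmod(s, 60)
        h, mnt = divmod(mnt, 60)
        return f"{h:02}:{mnt:02}:{s:02},{ms:03}"
    return TS.sub(repl, text)
```

For example, a file that is about 4 minutes late would be corrected with `shift_srt(text, -240000)`.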
But why did it work once I selected the first audio track, then? Maybe I’m misreading the value subsync displays and it’s only seconds off. I don’t know. It still only worked for me as long as I had the first audio track selected, not the second one. I’m not going so far as to remove the first audio track just to make English the first one and prove my point.
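For completeness: you wouldn’t need to remove a track to test that - ffmpeg can reorder streams without re-encoding. A sketch that only builds the command (the file names are placeholders; uncomment the last line to actually run it):

```python
import subprocess

def reorder_audio_cmd(src, dst):
    """Build an ffmpeg command that swaps the first two audio tracks.

    Uses stream copy (-c copy), so nothing is re-encoded.
    `src` and `dst` are placeholder file names.
    """
    return [
        "ffmpeg", "-i", src,
        "-map", "0:v",      # all video streams, unchanged
        "-map", "0:a:1",    # second audio track becomes the first
        "-map", "0:a:0",    # old first audio track moves to second
        "-map", "0:s?",     # keep any subtitle streams, if present
        "-c", "copy",
        dst,
    ]

# subprocess.run(reorder_audio_cmd("movie.mkv", "movie-english-first.mkv"), check=True)
```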