How to prepare for differences between clients?

As pointed out to me in one of my other posts, different clients appear to have separate issues from each other, and after a bit of testing on a few of them I can definitely confirm it's true. :(
 
So how does one prepare for such situations? Right now I have a channel that suffers from different issues on each client, and I do not know what is, or is not, to be expected, nor whether there are any workarounds for them.

Plex Web:
1) When blending DirectoryObjects & VideoClipObjects within the same ObjectContainer, the user-defined order is broken by the client and DirectoryObjects are grouped at the bottom. Iteration is confirmed successful as the index numbers in the title follow in the correct alphabetical order, regardless of their overall positioning.
2) No difference in view when setting different view types (List & InfoList always look the same...could this be an expected limitation in Web?)
 
Plex Home Theatre:

 

Default Plex Skin:

  1. A large percentage of title/summary metadata does not display; I suspect encoding issues...
  2. When navigating left/right the view rotates on each keystroke. Once you reach the far right it ends on a list view where up/down navigation works fine. It's impossible to use this way, and it appears to happen only in the default Plex skin.

MediaStream Skin:

  1. A large percentage of title/summary metadata does not display; I suspect encoding issues...

 
Plex for Android:

  1. Assuming auto-page loading is a feature of NextPageObject(): it auto-scrolls to the next page instead of displaying a next-page button. Works fine until the last page, which does not load. (I will double-check my code for errors, but with a page button present it never fails.)
  2. When pressing the area where the page button would be expected to display, the channel crashes (happens on the last page only; it might be coincidental timing when the last page fails to load...so far there is nothing in the logs to confirm either way).
  3. Only displays in a thumbnail view, even though it is set to 'List'. (Could this be an expected limitation?)
  4. I can't seem to force use of the page button, which is what I'd prefer. (Could this be an expected limitation?)

 

These issues are all client-specific, and with PHT there appear to be further issues when switching between skins. It could very well be that I am not aware of certain limitations, or of workarounds for known issues, but either way I am not sure how to adjust my channel to compensate for them.
 
The missing title/summary metadata in PHT is a pretty big issue for me, and if I were to guess I would say it's an encoding-related issue. My initial assumption was that if it works in one client, similar logic would be used in the rest; if that's not the case, then how does one properly encode strings to ensure they are compatible across all Plex clients, or at least understand what is happening when something displays in one but not in another?
 
It's possible that the last page issue on Android might be on my end, but I can confirm that it works correctly when the page button is used in place of auto-paging. Either way, if I cannot find a fix for it, I already have a fallback I created before I was made aware of NextPageObject().

As usual, all help is appreciated!

Thumbnails did not attach properly...

In the multi-platform ecosystem that Plex provides, there will always be differences from one platform to the next. It's neither realistic nor prudent for channel developers to try to bridge the gap between them all. However, it's definitely worth reporting any behaviour that differs from what's expected. Some of it may be by design or a limitation of the platform's developer API. The rest may be bugs, or open to change.

In regards to your specific concerns:

Plex Web:

  1. The next version of the web interface should already have fixed the failure to respect the given sort order. You can test it by pointing your browser at plex.tv/web
  2. Setting view types in channels is likely going to be deprecated in the future. Most clients ignore that and choose "the best" view type based on their own set of criteria.

Plex Home Theatre:

Default Plex Skin:

  1. If there are entire fields of data missing, that may be either an oversight or by design. It would be good to confirm either way. If you have encoding issues with the strings, that can wreak all kinds of havoc. Good to track those down.
  2. IIRC, PHT has different display types depending on the highlighted item. If you're using mixed-type directories, that would explain the display type changing back and forth as you move between items. I would find that a less than pleasant UX. I'll ask about a fix, but it would likely be a compromise at best since the different display types are geared towards the content type they represent.

MediaStream Skin:

  1. The MediaStream skin is deprecated and no longer being maintained by the Plex team. It is open source and as such other developers are welcome to take over maintenance or patch it as they see fit. I wouldn't worry too much about trying to make your plugin work well there for now.

Plex for Android:

  1. I suspect that your code likely has an error which leads to appending an empty NextPageObject to the last page of results. That could result in weird behaviour on any client which makes use of the auto-scroll feature.
  2. As above.
  3. As mentioned above, view_types set in channels are generally ignored. You're probably stuck with the tiled thumbnails there until such time as the Android app undergoes a redesign.
  4. Any app which supports the use of the auto-scroll feature is expected to do so. Forcing the Next Page button instead is not a supported option.

In regards to the encoding issues, can you provide a sample of the XML from the channel which shows up with missing data in PHT?

Thanks for the reply Mikedm139,

Plex Web:

  1. The next version of the web interface should already have fixed the failure to respect the given sort order. You can test it by pointing your browser at plex.tv/web
  2. Setting view types in channels is likely going to be deprecated in the future. Most clients ignore that and choose "the best" view type based on their own set of criteria.

1. I will give that a go very soon. Last night I found a much larger documentary site, http://watchdocumentary.com, so I am in the middle of updating the code to accommodate it.

2. Thanks for the information, very helpful!

*UPDATE* I tried loading my channel through plex.tv/web, but it always locks up my browser and I get a message stating that the script on the page has stopped responding. I also get it when I run my older DocuHeaven version, so I currently have no way to verify whether the order issue was resolved. :( *UPDATE*

Plex Home Theatre:

Default Plex Skin:

  1. If there are entire fields of data missing, that may be either an oversight or by design. It would be good to confirm either way. If you have encoding issues with the strings, that can wreak all kinds of havoc. Good to track those down.
  2. IIRC, PHT has different display types depending on the highlighted item. If you're using mixed-type directories, that would explain the display type changing back and forth as you move between items. I would find that a less than pleasant UX. I'll ask about a fix, but it would likely be a compromise at best since the different display types are geared towards the content type they represent.

MediaStream Skin:

  1. The MediaStream skin is deprecated and no longer being maintained by the Plex team. It is open source and as such other developers are welcome to take over maintenance or patch it as they see fit. I wouldn't worry too much about trying to make your plugin work well there for now.

Default Plex Skin:

1. I am definitely running into encoding issues; here is the traceback from the log. At first glance it appears I am getting a curly ' character instead of a standard ', which has caused me grief in the past. Without access to 3rd-party tools, do you have any suggestions as to how I can replace said characters?

2014-03-25 15:17:31,683 (1c80) :  CRITICAL (core:561) - Exception setting attribute 'summary' of object to: And there was just a hole and black smoke. It just looked like a cigar standing up on end with burning tip and black smoke. Which again, why black smoke? It’s ready to go out. And then it went on and the building was just burning away and it wasn’t too exciting until all of the sudden it’s disappearing and there was no real sign, there was no sign that this was going to go down because of all the black smoke. And the black smoke is really indicative that the fire is out. (type: <type 'str'>)
Traceback (most recent call last):
  File "C:\Users\No1\AppData\Local\Plex Media Server\Plug-ins\Framework.bundle\Contents\Resources\Versions\2\Python\Framework\modelling\objects.py", line 71, in _set_attribute
    el.set(convert_name(name), value)
  File "lxml.etree.pyx", line 699, in lxml.etree._Element.set (...\src\lxml\lxml.etree.c:34531)
  File "apihelpers.pxi", line 563, in lxml.etree._setAttributeValue (...\src\lxml\lxml.etree.c:15781)
  File "apihelpers.pxi", line 1366, in lxml.etree._utf8 (...\src\lxml\lxml.etree.c:22211)
ValueError: All strings must be XML compatible: Unicode or ASCII, no NULL bytes or control characters

I've tried .encode("utf-8") and .decode("utf-8"), but tbh I am not too sure what the "right way" to deal with this is. I do know from the page headers that the encoding is already UTF-8, so I'm kinda lost on this one...
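
For reference, the direction I'm leaning is a small sanitizer along these lines (clean_for_xml is just a placeholder name I made up; the character ranges mirror the lxml error above):

# -*- coding: utf-8 -*-
import re

# control characters that lxml rejects (everything below 0x20 except \t \n \r)
CONTROL_CHARS = re.compile(u'[\x00-\x08\x0b\x0c\x0e-\x1f]')

def clean_for_xml(raw):
    if isinstance(raw, str):                  # raw bytes in Python 2
        raw = raw.decode('utf-8', 'replace')  # never raises; bad bytes become U+FFFD
    return CONTROL_CHARS.sub(u'', raw)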

2. (In conjunction with the Plex Web issues) I can understand what you're saying about gearing them towards their content, but in all fairness the only alternative I can think of requires extra windows to be present. Instead of adding a single VideoClipObject to the container for non-playlist items, I would have to add them into a DirectoryObject, then load a window with only a single list item, and then click it to get to the VideoClipObject page. I'm not sure if anyone else shares this opinion, but it's a pretty annoying extra step when there's only one option to select from! ;) The reason I attempted to blend them first was to avoid the necessity of the extra window for a single video. So far, the other clients do not appear to suffer from this issue, so it would be greatly appreciated if something could be done to allow Plex Web the same control! :)

Aside from just playlists, another use I have made of the blend of objects is an ErrorMessage(header, message) function; this way I can still retain the full list of videos to match the site, but gracefully load an empty container with a header/message if an error occurs, then return to the list. So far it works quite well, but since they're also DirectoryObjects, they suffer from the same grouping/view issues as the playlists. :(

@route("/video/watchdocumentary/categorymenu/documentarymenu/errormessage", header=str, message=str)
def ErrorMessage(header, message):
    return ObjectContainer(header=header, message=message)
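
For illustration, this is roughly how I hook it in when a video fails to parse ('oc' and 'title' come from the calling menu function; the header/message text is just an example):

oc.add(DirectoryObject(
    key = Callback(ErrorMessage, header = 'Video Unavailable', message = '"%s" could not be loaded.' % title),
    title = title
))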

MediaStream Skin:

1. The funny thing is, other than the missing data, everything else works fine. Order is retained, and there are no jumping views when browsing items. Maybe something here can help resolve the issue in the default Plex skin? That aside, MediaStream looks very familiar...is it ported over from XBMC? If so, and although I'm sure much has changed over the years, I used to skin for XBMC back in the Xbox days (MC360 skin, & custom skin mods) so maybe I'll take a look at it later on if there's enough interest in it.

Plex for Android:

1/2/4. Very possible, but when the next page button is used in other clients, the button hides as expected on the last page, which suggests otherwise? I'm not too worried about this issue; if it's on my end, I'll find it soon enough. If not, I have a custom setup to append a next button to the end of the list, so I will get it working one way or another! ;)

3. Fair enough, was just hoping to use a list/infolist to get a bit more of the titles displaying on my tiny lil screen, no biggie!

With regards to the xml, I'll get one posted up shortly as I'm cleaning up the updated version for WatchDocumentary.com (which advertises a whopping +8000 videos, versus +2000 at DocumentaryHeaven.com). I do have a copy of a previous log with the encoding issue, which is posted above.

Thanks again Mikedm139, you've been a great help so far!

Regarding the encoding issues:

I would try forcing unicode encoding to see if that fixes it.

summary = ... #grab the summary from the html source
summary = unicode(summary) #cast the summary as a unicode string

Regarding the mixed-content view-types:

I am by no means saying that the current system is perfect. Just that there is unlikely to be a quick fix. I don't expect to regain the capability for channels to assign view-types, but the strobe-effect as you scroll through a mixed-content directory is hardly a good UX.

Regarding PHT Mediastream:

IIRC, it was originally ported from XBMC back in the Plex 0.8 (or earlier) days and has gone through several iterations along the way. It has been superseded by the Plex skin, which was designed from the ground up for Plex Home Theater. That's not to say that Mediastream is expected to behave badly, but it is not being updated to handle new features and under-the-hood changes to PHT. So, it may quickly become out of date.

Regarding the Android app and NextPageObjects:

If you can't find any errors that might be causing the issues, I'll make sure to bring it to the attention of the developers so that they can look into it.

Regarding the encoding issues:

I would try forcing unicode encoding to see if that fixes it.

summary = ... #grab the summary from the html source
summary = unicode(summary) #cast the summary as a unicode string

Simple enough, will give that a shot.

*UPDATE 1*

While I was writing this I stopped to give it a shot with unicode(); items are still missing. The summary area can be a bit tricky to parse with xpath due to the random formatting tags in between, so I am going to grab an area containing the summary and use Regex() to extract what I need. Out of curiosity, is there any benefit to using Regex().findall() instead of re.findall()? With the Regex() function I am unsure how to tell it to use re.DOTALL to span multiple lines?
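
In the meantime I'm falling back on the standard library, where at least I know how the flag is passed (a quick sketch with made-up HTML):

from re import findall, DOTALL

source = '<div class="x">line one\nline two</div>'

# the flags go in as the third positional argument...
parts = findall('<div class="x">(.*?)</div>', source, DOTALL)

# ...or can be embedded in the pattern itself with (?s), which works even
# if a wrapper like Regex() only accepts a pattern string
parts = findall('(?s)<div class="x">(.*?)</div>', source)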

*UPDATE 2*

The missing summary issue is resolved! After further digging into more sources from the site, it appears the formatting used in the section you would xpath the description from is too complicated to rely solely on xpath. After adding from re import findall, DOTALL

I changed:

unicode(html.xpath("//div[@class=' detail_litem']/div[@class='borders vspace']/text()")[1].strip())#.encode(DEFAULT_ENCODING)

to:

String.StripTags(findall('class="borders vspace">(.*?)', HTML.StringFromElement(html.xpath("//div[@class=' detail_litem']/div[@class='borders vspace']")[0]), DOTALL)[0].strip()).replace("Description
			", "")

and now everything is displaying fine!
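
As a side note, a generic whitespace collapse might be sturdier than replacing that one literal "Description" block; something like (tidy is just a name I picked):

from re import sub

def tidy(text):
    # collapse any run of spaces/tabs/newlines left behind by StripTags
    return sub(r'\s+', ' ', text).strip()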

Regarding the mixed-content view-types:

I am by no means saying that the current system is perfect. Just that there is unlikely to be a quick fix. I don't expect to regain the capability for channels to assign view-types, but the strobe-effect as you scroll through a mixed-content directory is hardly a good UX.

That's fair enough. The view-type assignment is not really a big concern, but the strobe-effect issue really is a buzz killer. lol As an alternative to blending object types, is it possible to create a directory object with a single item, and upon load have it automatically initiate the single object instead of forcing you to click on it manually? My initial attempts failed because I don't know how to programmatically invoke the loading of the first object in a container...if that's even possible?

Regarding the Android app and NextPageObjects:

If you can't find any errors that might be causing the issues, I'll make sure to bring it to the attention of the developers so that they can look into it.

Once I have the new site implemented, I will do another check on this area just to be safe. As previously mentioned, the same code running in Plex Web and PHT hid the page button when it should; when auto-scrolling on Android, given 3 pages, the second page auto-loads fine but the last page is where the error occurs.

Stripping out some irrelevant code, my paging code looks like this...

from math import ceil

# limit to XX per page
limit = 20

# mock list of items to fill in for the real dict of items, only needed for len()
items = range(243)

# given the above, max_page below will equal 13
max_page = int(ceil(float(len(items)) / float(limit)))

# after adding in all video objects, and provided the current page is not
# the last page (page #13), add the NextPageObject()
if page < max_page:
    oc.add(NextPageObject(key = Callback(DocumentaryMenu, genre=genre, items=data["data"][genre], page=page+1, limit=limit), title = L('Page %s' % (page+1))))

Glad to hear you've got the summary issue sorted out. I've run into my fair share of sites that include extra formatting in the summaries. They can be a real pain to work with. Often times, I've found that you can grab all the text contents as a list and then join them together. Like so,

for item in page_source.xpath('//item'): #where 'item' is the tag enclosing the entire item listing
    summary_parts = item.xpath('./summary/text()') #where 'summary' is the outermost tag that encloses the entire summary/description
    summary = '' # create an empty string as the starting point for our summary
    summary = summary.join(summary_parts) # join all the parts with an empty string in between them
    # if necessary, further string manipulation can still be used to replace problematic characters, force encoding, etc.

As an alternative to blending object types, is it possible to create a directory object with a single item, and upon load have it automatically initiate the single object instead of forcing you to click on it manually? My initial attempts failed because I don’t know how to programmatically invoke the loading of the first object in a container…if that’s even possible?

That's not possible AFAIK. Perhaps there's another work-around. Can you elaborate a little on why you end up with a mix of video objects and directory objects (why are the directory objects necessary)?

Your pagination routine looks complicated to me. I usually just use a simple offset and counter. Like so:

@route(PREFIX + '/shows', offset=int)
def ShowList(offset=0):
    for entry in show_list[offset:offset+20]:
        ...

    if offset+20 < len(show_list):
        oc.add(NextPageObject(key=Callback(ShowList, offset=offset+20)))

    return oc

That being said, I don't see why yours wouldn't work. I just try to avoid importing any extra modules whenever I can.


1. Thanks, I still have the encoding issue to deal with but at least now the text is there ;). Using unicode() did not help but I think I know where the problems might be. I'm thinking that replacing the troublesome characters might be the solution.

2. The reason for having a mix of items is because some urls are single videos, while others are playlists (usually YouTube). When a playlist url is detected I do further processing to add the playlist items to a child window using a separate function, PlaylistMenu(). If I add the single videos to a DirectoryObject, I then have an extra (and unnecessary) window to click through, which annoys me greatly! :P If you blend DirectoryObjects and VideoClipObjects within the same ObjectContainer(), the extra window for a single video is avoided. Everything about it but the sort order works as expected, and (imho) the ability to remove extra window clicks provides a much more fluid experience for the user.
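
To make the branching concrete, here's a stripped-down sketch of what my menu loop does (url_is_playlist() and PlaylistMenu() stand in for my real helpers):

for title, url in video_list:
    if url_is_playlist(url):
        # playlists need a child window, so a DirectoryObject is unavoidable
        oc.add(DirectoryObject(key = Callback(PlaylistMenu, url = url), title = title))
    else:
        # single videos go straight into the container -- no extra window to click through
        oc.add(VideoClipObject(url = url, title = title))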

3. With regards to my complicated pagination setup, my channel works a bit differently than the few that I learned from. Because the site I'm scripting is quite large (+8000 videos), and each video comes from a random site, the time to load the information from each url by walking the site is just ridiculous. My first attempt resulted in a lot of timing out, so I made use of the Data() function and mapped the metadata to a dict and stored it for quicker reference. The site has a link that lists all videos on one page, within their respective genres/categories. Instead of doing the standard walk-the-site method, I start off by parsing the whole list of videos and saving them to Data. This creates the base list of categories/titles/page urls, and is enough to create the first page of the channel where you select the category, not to mention it speeds up searching dramatically! From that point forward, each time you load a page the metadata for each item is saved to Data(). This results in a slower first load, but after that it no longer needs to crawl random urls each time, so future load times for each page become much quicker. The other advantage is with the search service. Upon first attempt I was able to get a search working, but my god was it ever slow! Having all titles/data stored in Data() (although not fully tested just yet) will speed up searches dramatically. I have a lot of improving to do with my update methods, but so far it's working decently well. I can also assume this would help lower the overall data usage if the script is not jumping site to site each time? All that said, the complication with my pagination setup is that it is user-defined, and the number of items is variable depending on the limit set. Currently the limit is hard-coded to 20, but when I have a chance to add prefs, I will look to add the option for user control.

Hopefully that helps clarify why my script appears to be a bit more confusing than necessary, but I really have an issue with slow loading...well, slow anything...so if I can take an extra step to help increase the performance, imho it's worth the extra time. ;)
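
For what it's worth, the storage side boils down to something like this sketch (the cache key and parse_video_page() are placeholders for my actual code; Data.SaveObject/LoadObject persist a pickled dict across plugin relaunches):

def get_metadata(url):
    cache = Data.LoadObject('metadata_cache') if Data.Exists('metadata_cache') else {}
    if url not in cache:
        cache[url] = parse_video_page(url)  # the slow second request, done once per title
        Data.SaveObject('metadata_cache', cache)
    return cache[url]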

1. Thanks, I still have the encoding issue to deal with but at least now the text is there  ;). Using unicode() did not help but I think I know where the problems might be. I'm thinking that replacing the troublesome characters might be the solution.

I've gone that route before. A simple replace routine to strip out characters that cause trouble.

 2. The reason for having a mix of items is because some urls are single videos, while others are playlists (usually YouTube). When a playlist url is detected I do further processing to add the playlist items to a child window using a separate function, PlaylistMenu(). If I add the single videos to a DirectoryObject, I then have an extra (and unnecessary) window to click through, which annoys me greatly! :P If you blend DirectoryObjects and VideoClipObjects within the same ObjectContainer(), the extra window for a single video is avoided. Everything about it but the sort order works as expected, and (imho) the ability to remove extra window clicks provides a much more fluid experience for the user.

I think I understand now. Being able to treat YouTube playlist urls as VideoObjects would be a huge boon. I'm inclined to agree that your method seems like a solid work-around. Since the issue in Plex/Web is already sorted out in the next version of the app, we just need to get PHT to behave better.

 3. With regards to my complicated pagination setup, my channel works a bit differently than the few that I learned from. Because the site I'm scripting is quite large (+8000 videos), and each video comes from a random site, the time to load the information from each url by walking the site is just ridiculous. My first attempt resulted in a lot of timing out, so I made use of the Data() function and mapped the metadata to a dict and stored it for quicker reference. The site has a link that lists all videos on one page, within their respective genres/categories. Instead of doing the standard walk-the-site method, I start off by parsing the whole list of videos and saving them to Data. This creates the base list of categories/titles/page urls, and is enough to create the first page of the channel where you select the category, not to mention it speeds up searching dramatically! From that point forward, each time you load a page the metadata for each item is saved to Data(). This results in a slower first load, but after that it no longer needs to crawl random urls each time, so future load times for each page become much quicker. The other advantage is with the search service. Upon first attempt I was able to get a search working, but my god was it ever slow! Having all titles/data stored in Data() (although not fully tested just yet) will speed up searches dramatically. I have a lot of improving to do with my update methods, but so far it's working decently well. I can also assume this would help lower the overall data usage if the script is not jumping site to site each time? All that said, the complication with my pagination setup is that it is user-defined, and the number of items is variable depending on the limit set. Currently the limit is hard-coded to 20, but when I have a chance to add prefs, I will look to add the option for user control.

Hopefully that helps clarify why my script appears to be a bit more confusing than necessary, but I really have an issue with slow loading...well, slow anything...so if I can take an extra step to help increase the performance, imho it's worth the extra time. ;)

 

Now that you've gone to all the work of figuring out how to create your own cache, did you know that you can tell the plugin framework how long to cache web pages for your channel? I realize that there's a difference between caching the web page(s) and how you're storing the retrieved metadata; it's just a thought that dawned on me as I read your description of the situation. That, and you're taking a much more involved approach than I would have done. Since every client now offers/includes a variation of the pre-play screen, which is a much nicer presentation of full metadata, it's generally not necessary to grab a ton of metadata for display in the long list of all the available videos. Asynchronous loading of thumbnails is possible, so it's reasonable to have each list item parse an extra page to grab its thumbnail image. Otherwise, I would generally expect that if a user is curious about a video, they would visit the preplay page for more info. At that point, since you only need to load the info for one video at a time, it's far less likely to generate timeouts fetching the metadata.
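
If memory serves, the built-in caching is just a global knob (plus a per-request override), along these lines:

def Start():
    HTTP.CacheTime = CACHE_1DAY  # framework constant; CACHE_1HOUR, CACHE_1WEEK, etc. also exist

# or per request:
html = HTML.ElementFromURL(url, cacheTime = CACHE_1WEEK)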

Just my $0.02

I've gone that route before. A simple replace routine to strip out characters that cause trouble.

Do you by chance have any sample code available? :) Once upon a long, long time ago I remember creating one with (if memory serves) ordinal, but I can't remember how that worked, nor have I had any luck finding a good sample that does not require the use of 3rd-party modules.

I think I understand now. Being able to treat YouTube playlist urls as VideoObjects would be a huge boon. I'm inclined to agree that your method seems like a solid work-around. Since the issue in Plex/Web is already sorted out in the next version of the app, we just need to get PHT to behave better.

I'm glad you agree. It just seemed a waste to keep skipping the playlists, so I used the gdata link to parse the information for the playlist and added the items in as another level to browse through. It works great so far, and if you're interested in the code I posted some example code here a little while back. ;) The first function handles parsing the YouTube id and type (video/playlist); the second parses the playlist, adds the objects to an ObjectContainer, and returns it. It's basic in its metadata handling, but does the job just fine. ;)

Now that you've gone to all the work to figure out how to create your own cache, did you know that you can tell the plugin framework how long to cache web pages for you channel? I realize that there's a difference between caching the web page(s) versus how you're storing the retrieved metadata. It's just a thought that dawned on me as I read your description of the situation. That and, you're taking a much more involved approach than I would have done. Since every client now offers/includes a variation of the pre-play screen, which is a much nicer presentation of full metadata, it's generally not necessary to grab a ton of metadata for displaying with the long list of all the available videos. Asynchronous loading of thumbnails is possible so, it's reasonable to have each list item need to parse an extra page to grab a thumbnail image. Otherwise, I would generally expect that if a user is curious about a video, they would visit the preplay page for more info. At that point, since you only need to load the info for one video at a time, it's far less likely to generate timeouts fetching the metadata.

Yeah... In all fairness I did see the cache options, but I did not find any real information about them to know whether they would work across the board as I needed. The main reason for my custom caching is to be able to provide the same metadata to the search function to speed up the results. My initial thought was to store the data, then have the Search Service check for, and use, the stored data first. If data is not found for the item, then parse it on the fly. Although the site does not have a proper API, they do have a main URL that indexes all +8000 videos on a single page. So instead of parsing each category to search, I can make a single call and obtain the majority of the information for the search. The big problem with conventional methods is the fact that the video urls/summaries are not available on the main page, so to complete the object I need to parse each page to obtain them. That said, if my initial query results in 300 matches, the time it will take to parse each page for the additional details would be absolutely insane, but if it is set up to retain the metadata each time, then the searches become quicker and quicker as you explore. Because all titles are hosted on a single page, filtering out non-matching titles is extremely fast. Now with all that said, I did some further testing and it appears this was all for nothing anyhow...Data is simply not accessible through the Search Service, so I have no way to access the data anyhow...Which kinda bytes... :( Now you mentioned that caching the pages is different than what I'm doing, but if the whole page can be cached indefinitely, then that would more or less give me the same results? I'm also curious to know if my accompanying search would also make use of these cached pages?

Also, just to clarify, the "pre-play" screen is the one that shows up when you click on some form of media object? Where you have the pretty thumb on the left, play button below, title and summary on the right, etc.? I do agree it is pretty, but I'm having other issues with the metadata display on that window as well; please see here for more info. The big problem is that WatchDocumentary hosts videos from other sites, which contain their own metadata for each object that is unrelated to the content seen on WD. I'm trying to use the metadata from WatchDocumentary, but that pre-play screen automatically uses the information gathered from the URL Service. This leaves my channel looking quite inconsistent with the site, and to be honest, WD's metadata is much cleaner than the random ugly crap that can be found in YouTube descriptions (as demonstrated in the screenshots). To do this I first need a way to edit/overwrite the url service object (if that's even possible?), and I would also need to retain the metadata from WD to inject into the object, which is where my caching comes in.

Also if it's not too much trouble, where can I get more info regarding the "Asynchronous loading of thumbnails"?

Thanks again!

So to save some repetition, I found this post from a while back that I think answers some of my questions. As I encounter the same error, I assume this also applies to Search Services?

By design, URL Services do not have access to stored data and do not cache data. If you're making that many requests to your URL Service, you're probably not using it correctly.

Loading a list of videos in the plugin should not make any calls to the URL Service aside from checking that a Service exists for any URLs provided (done automatically). Ideally, an URL Service should only be called via the plugin when the play request is made. All other metadata should be gathered in the plugin. That may seem like some code duplication in that both the plugin and URL Service are expected to return metadata but that's the way it was designed and there was good reason to do so. Not every channel will fit the mold for which URL Services were designed but most can be made to work well within the scheme.

So, if your plugin is timing out due to calls to the URL Service, consider why it's doing so before starting to design your own cache.

In a perfect scenario, you are correct to assume the small requests shouldn't affect the load time, but in reality they do. Until this post, I honestly thought I was the first to come up with the concept, but just about everything familyvance was saying fits the same scenario I'm currently dealing with, and both of us drew the same conclusions based on the available functions provided by Plex. Generally load times should be quick, but if a single random site is (for whatever reason) slow, it hinders the load times of the channel, as well as all results returned from a search. I'm also required to double-request to obtain all metadata, and like familyvance, it's not optional. If both of us instantly drew the same conclusion and worked on similar solutions, then there might just be a bigger issue than originally suspected. ;)

Before I list some reasons why accessing data should be possible from within any service, I'd first like to ask...If Data() is not meant to store metadata, nor be accessible from any of the channel's accompanying services, what is it supposed to be used for? It really does beg the question...what is the actual purpose/intent of Data()..? ;)

Let me break down my design a bit further so you can see why I went with the Data route.

I start with this page here, where all titles and their respective urls are located with one lovely request...instead of the usual walk-the-site structure. After reading up on the Data() function, it appeared that the solution to a slow load/search was to use the information provided on the single page that references all available videos, rather than crawl the site looking for them as if you were browsing in Firefox. As I had no reason to initially assume that the Data() function would not be accessible from within its hosting channel, I took into consideration a few factors when rethinking my initial design. Note that nothing at all on the doc page that describes the Data/Dict functions implies that you cannot access stored data from within services; what it does loosely imply is that "The dictionary is automatically persisted across plug-in relaunches, and can safely be accessed from any thread.". As my search is technically a part of the plugin, the initial assumption is that this will work. It would be greatly appreciated if someone would update these docs accordingly! :)

Initial reasoning for Data Storage:

  1. With +8000 videos to choose from, a search is MANDATORY!! ;)
  2. This site does not provide an API for advanced queries. All information for each video is available from the site, but multiple sources need to be accessed to obtain it
  3. The full list of documentaries provides an easy way for me to obtain all titles within a single request. The advantage is that while searching, comparing queries with the titles will be MUCH faster than the alternative
  4. The full list of documentaries provides an easy way for me to quickly update the list of titles upon each load
  5. The full list of documentaries also provides the urls for each title, where the rest of the necessary metadata for each item is located
  6. Data() gives me a place to permanently store a dictionary of the data that is usually parsed by the second request. Provided that the video's metadata has previously been loaded, this avoids the need to make the second url request at all.
  7. On mobile devices, this might lower overall data usage, depending on whether channels store the data on each client or simply run it off the server?
  8. Only smaller strings are saved to disk instead of caching the whole page. I'm not sure how well caching will hold up when there are +8000 videos that at some point might be cached? It also might help save system storage on mobile devices if data/caches are stored locally per client?
  9. Allows for custom control over the display of content. If Data is stored, I can manipulate the sort order any way I please, as well as the items per page, whereas if I walked the site instead, I am limited to 12 items per page. With sections containing +1000 videos, there should be a way to re-sort by name, date, asc/desc, etc. (and if prefs will allow for this?). At the least, raising the per-page limitation is an absolute must, otherwise you would never get to the end of the larger categories
  10. Allows me to quickly replace the (potentially) ugly metadata provided by sites like YouTube or Vimeo with the clean metadata provided by the main site, which is specifically designed to bring a nicer experience to the user than the originating sites usually offer, as I demonstrated with the screenshots in this post.
  11. Last but not least, it will bring validity to your Data/Dict function(s) as viable tools. If they are not accessible from services defined within the same channel, then why are these services being included as a part of the channel in the first place? It just seems more logical to separate them like you do with metadata agents if they are already so detached from the rest of the channel bundle? I really don't mean to rag on anyone about this :P but these functions do not appear to have any real use other than storing metadata? And if you are already storing it, then imho it should be accessible from anywhere within your bundle, as the data is all inter-related?

In the long run it only serves to benefit the overall experience for the user, so regardless of how my code ends up, please understand that it's their experience, and not my logic, that holds the most importance here. ;)

Do you by chance have any sample code available? :) Once upon a long, long time ago I remember creating one with (if memory serves) ordinal, but I can't remember how that worked, nor have I had any luck finding a good sample that does not require the use of 3rd-party modules.

It depends on what (and how many different) characters are giving you grief. At its most basic, I've just done a simple string replace for a small number of bothersome characters.

fixed_string = broken_string.replace(bad_char, good_char)
#or
fixed_string = broken_string.replace(bad_char, '')

I've gone so far as to chain multiple replaces together as well.

fixed_string = broken_string.replace(bad_char, good_char).replace(bad_char2, good_char2).replace(bad_char3, good_char3)
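
When the list of offenders grows, it can be tidier to drive the replaces from a table; a small sketch (the mappings shown are just the usual suspects):

REPLACEMENTS = {
    u'\u2019': "'",   # curly apostrophe
    u'\u201c': '"',   # left curly double quote
    u'\u201d': '"',   # right curly double quote
    u'\u2013': '-',   # en dash
}

def fix_string(text):
    for bad, good in REPLACEMENTS.iteritems():  # use .items() outside the Python 2 framework
        text = text.replace(bad, good)
    return text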

On a more advanced level, one plugin I wrote had issues with several HTML entities which were not getting caught and handled properly by the framework. There I had to include a custom method to clean it. See here.

Also, just to clarify, the "pre-play" screen is the one that shows up when you click on some form of media object? Where you have the pretty thumb on the left, play button below, title and summary on the right, etc.? I do agree it is pretty, but I'm having other issues with the metadata display on that window as well; please see here for more info. The big problem is that WatchDocumentary hosts videos from other sites, which contain their own metadata for each object that is unrelated to the content seen on WD. I'm trying to use the metadata from WatchDocumentary, but that pre-play screen automatically uses the information gathered from the URL Service. This leaves my channel looking quite inconsistent with the site, and to be honest, WD's metadata is much cleaner than the random ugly crap that can be found in YouTube descriptions (as demonstrated in the screenshots). To do this I first need a way to edit/overwrite the url service object (if that's even possible?), and I would also need to retain the metadata from WD to inject into the object, which is where my caching comes in.

Yes, that is the preplay screen. In regards to your metadata issues, I responded in your other thread. Short version: write an url service to feed your own metadata and then pass video handling off to other url services as necessary.

Also if it's not too much trouble, where can I get more info regarding the "Asynchronous loading of thumbnails"?

From the google doc linked in this sticky thread: Missing from the Official Plugin Documentation

Special handling for images

because loading images can be considerably slower than loading text-based metadata, the plugin framework allows for asynchronous loading of image files (thumb, art). This drastically reduces the likelihood of time-outs while loading menus with numerous items, each having a unique thumb (or background art). Without the asynchronous loading, the client app cannot render the ObjectContainer until all metadata including images is fully downloaded. With the asynchronous loading, the app will display the text-based metadata immediately, then images will be displayed as they complete downloading. The way to take advantage of this asynchronous load mechanism is to assign a callback as the value for the 'thumb' (or 'art') attribute of the object in question. The callback can be a method elsewhere in the channel code, for example if determining the final url for an appropriate image takes one or more extra HTTP requests. Or, it can be the plugin framework's built-in method which allows for a graceful fallback if the image url fails to load. i.e.:

thumb = Resource.ContentsOfURLWithFallback(url=url, fallback=fallback)

In other words, for your channel, you have a long list of video titles with associated links. Assigning the title to the VideoClipObject is very fast, but following the link to grab the thumb image is pretty slow, especially when combined with several dozen (or thousand) other requests. Instead, use a custom callback to fetch the image and the client apps will load the menu without images and populate the images asynchronously as they are returned.

@route(PREFIX + '/videos')
def VideoMenu():
    list_page = HTML.ElementFromURL(page_url)
    oc = ObjectContainer(title1="Videos")
    for item in list_page.xpath('blah'):
        title = item.xpath('./blahblah')[0].text
        link = item.xpath('./meh')[0].get('href')
        oc.add(VideoClipObject(url=link, title=title, thumb=Callback(GetThumb, url=link)))

    return oc

@route(PREFIX + '/getthumb')
def GetThumb(url):
    thumb_page = HTML.ElementFromURL(url)
    thumb_url = thumb_page.xpath('//image')[0].get('href')
    return Redirect(thumb_url)

So to save some repetition, I found this post from a while back that I think answers some of my questions. As I encounter the same error, I assume this also applies to Search Services?

Correct. Services (URL or Search) are isolated from the cache and Data, etc.

I'm also required to double-request to obtain all metadata, and like familyvance, it's not optional.

I disagree that it's not optional. I'll grant you that it may not be ideal to only provide a title and thumbnail for each item in the list, but as long as more information is available to the user upon request, I don't see it as a big issue.

If Data() is not meant to store metadata, nor be accessible from any of the channel's accompanying services, what is it supposed to be used for? It really does beg the question...what is the actual purpose/intent of Data()...? ;)

To be perfectly honest, I would not be surprised to see the Data methods become deprecated. Their use has been discouraged for several iterations of the plugin framework. Any channel which makes use of them is highly unlikely to be accepted into the official Channel Directory. I believe that the exposed Data methods are leftovers from earlier days. The framework makes use of them under the hood, but realistically channels are not expected to do so. This is of course just my opinion, so take it with a grain of salt.

It would be greatly appreciated if someone would update these docs accordingly! :)

The docs being outdated (and incomplete) is a well-known issue. See the sticky thread at the top of this forum. It's my understanding that when the next major revision of the plugin framework is released (whenever that will be), new docs will accompany it.

Initial reasoning for Data Storage:

  1. With +8000 videos to choose from, a search is MANDATORY!! ;)
  2. This site does not provide an API for advanced queries. All information for each video is available from the site, but multiple sources need to be accessed to obtain it
  3. The full list of documentaries provides an easy way for me to obtain all titles within a single request. The advantage is that while searching, comparing queries with the titles will be MUCH faster than the alternative
  4. The full list of documentaries provides an easy way for me to quickly update the list of titles upon each load
  5. The full list of documentaries also provides the urls for each title, where the rest of the necessary metadata for each item is located
  6. Data() gives me a place to permanently store a dictionary of the data that is usually parsed by the second request. Provided that the video's metadata has previously been loaded, this avoids the need to make the second url request at all.
  7. On mobile devices, this might lower overall data usage, depending on whether channels store the data on each client or simply run it off the server?
  8. Only smaller strings are saved to disk instead of caching the whole page. I'm not sure how well caching will hold up when there are +8000 videos that at some point might be cached? It also might help save system storage on mobile devices if data/caches are stored locally per client?
  9. Allows for custom control over the display of content. If Data is stored, I can manipulate the sort order any way I please, as well as the items per page, whereas if I walked the site instead, I am limited to 12 items per page. With sections containing +1000 videos, there should be a way to re-sort by name, date, asc/desc, etc. (and if prefs will allow for this?). At the least, raising the per-page limitation is an absolute must, otherwise you would never get to the end of the larger categories
  10. Allows me to quickly replace the (potentially) ugly metadata provided by sites like YouTube or Vimeo with the clean metadata provided by the main site, which is specifically designed to bring a nicer experience to the user than the originating sites usually offer, as I demonstrated with the screenshots in this post.
  11. Last but not least, it will bring validity to your Data/Dict function(s) as viable tools. If they are not accessible from services defined within the same channel, then why are these services being included as a part of the channel in the first place? It just seems more logical to separate them like you do with metadata agents if they are already so detached from the rest of the channel bundle? I really don't mean to rag on anyone about this :P but these functions do not appear to have any real use other than storing metadata? And if you are already storing it, then imho it should be accessible from anywhere within your bundle, as the data is all inter-related?

I'm not going to address this point by point. Just a few comments.

  • The site provides a search, what sort of advanced queries are you wanting to achieve that you are currently unable to?
  • In general it sounds to me like you're trying to make your channel make up for the shortcomings of the web site. If the website does not provide an adequate interface or API to achieve what you want, you're always going to be fighting an uphill battle.
  • As I mentioned above, I suspect that Data/Dict are likely to take a further backseat rather than gain further integration to channels and services.

Thanks again for your support Mikedm139, you've been a great help so far! I also want to thank you for toughing it through the lengthy post/debate, I know it can get a bit tedious ;)

Quote:

I've gone so far as to chain multiple replaces together as well.

fixed_string = broken_string.replace(bad_char, good_char).replace(bad_char2, good_char2).replace(bad_char3, good_char3)

On a more advanced level, one plugin I wrote had issues with several HTML entities which were not getting caught and handled properly by the framework. There I had to include a custom method to clean it. See here.

 

Quote:

Yes, that is the preplay screen. In regards to your metadata issues, I responded in your other thread. Short version: write an url service to feed your own metadata and then pass video handling off to other url services as necessary.

Thanks! I will check both of those out after I'm done replying.

From the google doc linked in this sticky thread: Missing from the Official Plugin Documentation

Quote:

Special handling for images

because loading images can be considerably slower than loading text-based metadata, the plugin framework allows for asynchronous loading of image files (thumb, art). This drastically reduces the likelihood of time-outs while loading menus with numerous items, each having a unique thumb (or background art). Without the asynchronous loading, the client app cannot render the ObjectContainer until all metadata including images is fully downloaded. With the asynchronous loading, the app will display the text-based metadata immediately, then images will be displayed as they complete downloading. The way to take advantage of this asynchronous load mechanism is to assign a callback as the value for the 'thumb' (or 'art') attribute of the object in question. The callback can be a method elsewhere in the channel code, for example if determining the final url for an appropriate image takes one or more extra HTTP requests. Or, it can be the plugin framework's built-in method which allows for a graceful fallback if the image url fails to load. i.e.:

thumb = Resource.ContentsOfURLWithFallback(url=url, fallback=fallback)

In other words, for your channel, you have a long list of video titles with associated links. Assigning the title to the VideoClipObject is very fast, but following the link to grab the thumb image is pretty slow, especially when combined with several dozen (or thousand) other requests. Instead, use a custom callback to fetch the image and the client apps will load the menu without images and populate the images asynchronously as they are returned.

@route(PREFIX + '/videos')
def VideoMenu():
    list_page = HTML.ElementFromURL(page_url)
    oc = ObjectContainer(title1="Videos")
    for item in list_page.xpath('blah'):
        title = item.xpath('./blahblah')[0].text
        link = item.xpath('./meh')[0].get('href')
        oc.add(VideoClipObject(url=link, title=title, thumb=Callback(GetThumb, url=link)))

    return oc

...

@route(PREFIX + '/getthumb')
def GetThumb(url):
    thumb_page = HTML.ElementFromURL(url)
    thumb_url = thumb_page.xpath('//image')[0].get('href')
    return Redirect(thumb_url)

So Callbacks() are similar in nature to thread.start_new_thread()? In this case it initiates the thumb loading in a separate thread to allow the objects to finish loading their metadata and return it to the UI. I already had the feeling a URL Service was the way to go, so I gave myself a head start on it last night. I saw a very similar example of the above when browsing the source of a few channels (I think it was YouTube...or Yahoo?), but I believe your further explanation just clarified a few additional questions. As mentioned in the docs, making url calls during the functions is discouraged, but as an alternative you can use Callbacks for further processing if it's an unavoidable situation...Unfortunately it didn't state much past that, so I wasn't quite sure what I was supposed to do.

Quote 1:

Correct. Services (URL or Search) are isolated from the cache and Data, etc.

Quote 2:

I disagree that it's not optional. I'll grant you that it may not be ideal to only provide a title and thumbnail for each item in the list, but as long as more information is available to the user upon request, I don't see it as a big issue.

Quote 3:

To be perfectly honest, I would not be surprised to see the Data methods become deprecated. Their use has been discouraged for several iterations of the plugin framework. Any channel which makes use of them is highly unlikely to be accepted into the official Channel Directory. I believe that the exposed Data methods are left overs from earlier days. The framework makes use of them below the service but realistically channels are not expected to do so. This is of course just my opinion so take it with a grain of salt.

Quote 4:

The docs being outdated (and incomplete) are a well know issue. See the sticky thread at the top of this forum. It's my understanding that when the next major revision of the plugin framework is released (whenever that will be), new docs will accompany it.

I get your opinion, I just feel it sets limitations on what we can or cannot do for our channel users...and a better user experience cannot harm Plex in any way. ;) I also understand that the more convoluted the construction is, the harder it will be for others to keep it maintained, but this warning was also loosely given in the docs, where it further stated that the creator would be left responsible for their own maintenance. That said, and no disrespect meant, but I really don't want my work maintained by anyone else but myself anyhow, so this works out well for all parties ;) It just seems like those who care to put the extra effort into the user's experience, and are willing to maintain their own work, are sorta being left behind? :(

Imho there's a significant benefit to having accessible data storage available to channel devs, but there's no doubt you have a better understanding of the current framework than I do to know what is best for the future of Plex. In the long run I just don't get how deprecating functions that could greatly improve the overall experience for the user is beneficial? Just a thought, but if it's simply a matter of the maintenance of the channel, then an addition to the channel structure to include a 3rd-party list might be a viable solution? Just declare under the 3rd-party channel section that these channels are maintained by their respective developers and all relevant questions and/or issues with said channels should be taken up with their respective creators.

With regards to updating the docs, I admit I probably should have been more specific with what I meant. :P I was merely suggesting adding a one-liner to the page to warn channel developers that these features are accessible from the __init__.py only, and not from within any accompanying services. In light of your statement above, I might also suggest a second line mentioning that these functions might be deprecated in the future. ;) I'm suggesting this to help save us channel devs from wasting time creating more intricate methods that by all implications should work. In almost any other scenario, it's natural to assume that such functions would be accessible from within any function that's defined within your channel's code. Although what you're stating is a bit vague as to why this approach is being discouraged/deprecated, it makes enough sense to know that thought has already been put into it and decisions have already been made...but if there's something I can contribute to help encourage these features to be implemented further, or help with the design of an updated storage method, I'd be willing to help out where I can. :)

I'm not going to address this point by point. Just a few comments.

  • The site provides a search; what sort of advanced queries are you wanting to achieve that you currently can't?
  • In general, it sounds to me like you're trying to make your channel make up for the shortcomings of the website. If the website does not provide an adequate interface or API to achieve what you want, you're always going to be fighting an uphill battle.
  • As I mentioned above, I suspect that Data/Dict are likely to take a further backseat rather than gain further integration into channels and services.

As mentioned above, it's not really a matter of the advanced features I have in mind; as you mostly called it in your following point, it's really a mix of the site's shortcomings, as well as the (not-yet-fully-recognized) necessity to walk the secondary urls if well-rounded storage support is unavailable.

I can't stress this enough: the reason for walking the secondary urls is not the extra metadata; that's just an added bonus while I'm already there. It's because the necessary urls for the player are not located on the full list page for me to add the videos to the container at the right time. Upon start-up, the first step is to select the genre. After selecting the genre, the second step is a per-page list of the available titles. Currently the 'Science' genre alone contains 947 titles, which is equivalent to 79 pages at 12 titles/page. Now from a user's perspective, who would ever get anywhere close to page 79 with that many pages available?! :P By increasing the per-page limit to 25, that would already decrease the page count down to 38 pages, and would also provide a feasible amount of results to display on a single page, as 12 titles just looks way too bare on most Plex clients. Now to stress the necessity of secondary requests even further, I must point out that if you previously agreed that I have a viable workaround for handling playlists with a blend of objects, then I must be able to access the player urls to distinguish which are playlists and which are single videos...and if they are single videos, then I must be able to add the video url to the container during step 2 to actually accomplish the blended workaround... It's a horrible catch-22, I know!!
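
To make the flow concrete, here's a rough sketch of what that second step looks like in channel code. The site, route, and markup below are all hypothetical; the point is that every title on a page costs one extra request just to recover its player url:

```python
BASE_URL = 'http://example.com'  # hypothetical site

@route('/video/example/genre', page=int)
def GenreTitles(genre, page=1):
    oc = ObjectContainer(title2=genre)
    html = HTML.ElementFromURL('%s/genre/%s?page=%d' % (BASE_URL, genre, page))

    for link in html.xpath('//div[@class="title"]/a'):  # hypothetical markup
        detail_url = link.get('href')
        # The player url only lives on the detail page, so each of the
        # 12 titles listed here triggers a secondary request.
        detail = HTML.ElementFromURL(detail_url)
        player_url = detail.xpath('//a[@class="player"]/@href')[0]
        oc.add(VideoClipObject(url=player_url, title=link.text))

    # 947 'Science' titles at 12/page means 79 pages of exactly this.
    oc.add(NextPageObject(
        key=Callback(GenreTitles, genre=genre, page=page + 1),
        title='Next Page...'
    ))
    return oc
```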

Now with regards to the search, the same 12-per-page restriction applies, and for a search this can result in a much larger problem than a bare-looking page. A perfect example query is the word "universe", as it results in 241 matches. That is equivalent to 21 page walks to obtain all possible results, and except for the last page, each walk incurs an additional 12 web requests to obtain the player urls, which results in roughly 240 web requests! Although completely impractical, there's still a potential for 735 page walks with so many titles available, and at even an eighth of that it's still way too much processing to make the search anywhere close to practical.

...And then there's the site's broken, but not technically broken, pagination system on top of everything else! Although 241 titles are reported as matched for "universe", only 14 pages are actually available. If the reported numbers are correct, then approximately 6 pages of "universe" results are missing. My suspicion is that their search function is poorly layered on top of their full list page, and if that's the case then some titles will be listed multiple times depending on their defined categories. This is also something I had already taken care of in my channel from the beginning. ;)
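
For what it's worth, the duplicate handling amounts to keeping a seen-set while walking the list; a minimal sketch, assuming each scraped title carries a hypothetical 'detail_url' field that uniquely identifies it:

```python
def dedupe(titles):
    """Keep only the first occurrence of each title; the site appears to
    list a title once per category it belongs to, so repeats are common."""
    seen = set()
    unique = []
    for title in titles:
        key = title['detail_url']  # hypothetical unique identifier
        if key not in seen:
            seen.add(key)
            unique.append(title)
    return unique
```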

The point is that with access to Data() across the board, not only are all of these issues resolved with a single page request and incremental caching, but (at least for me) the logic needed to cache the data and use it, not only to fix the site's deficiencies but to add custom features on top, was actually quite simple. Without it, concepts like a favorites list, further sorting options, an increased number of items per page, etc. could not be implemented at all, and the overall quality of the channel would be horrible!
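
A rough sketch of that incremental caching inside the channel (where Dict is available); all names and markup here are hypothetical, and I'm assuming an unset Dict key can be tested for with 'in':

```python
def GetDetails(detail_url):
    # Dict persists across requests and restarts, so each detail page
    # is fetched at most once and reused on every later visit.
    cache = Dict['detail_cache'] if 'detail_cache' in Dict else {}

    if detail_url not in cache:
        detail = HTML.ElementFromURL(detail_url)
        cache[detail_url] = {
            'player_url': detail.xpath('//a[@class="player"]/@href')[0],
            'summary': detail.xpath('//div[@class="summary"]/text()')[0],
        }
        Dict['detail_cache'] = cache
        Dict.Save()  # flush to disk

    return cache[detail_url]
```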

Don't get me wrong, I can still make use of Data within the channel itself, but if the data is already there, then it just seems wasteful not to be able to reuse it for the accompanying service(s). The advanced features I have in mind only came to mind because the solution to the slowness issue already collects the necessary details for you. With my current setup, pages that have already been scanned load almost instantly in the channel. With the same logic applied to the search, this could no doubt lead to one of the fastest searches available, and I can always document/functionize it for easier implementation by others. It would also have the same benefits for accompanying Metadata Agents and URL Services, which were initially part of the plan. I'm thinking the reluctance shown towards this is due to technical issues that make it more trouble than it's worth, and I can definitely understand it from that perspective. ;) I just don't think full removal of the storage concept is the right answer. Maybe in the future giving channels a place in the Plex db to add custom data, or a step further, giving each channel the ability to create their own db (YES! YES!! YES!!!) would be the practical solution?

With regards to an "uphill battle"... With access to the full list from their site, my so-called "uphill battle" was more like a couple-inch incline, so there really aren't any hard parts just yet. The truth is, and I'm afraid to come off as a jerk for pointing it out, but...the only substantial "uphill battle" I have faced so far has unfortunately been the great battle of the Plex docs, and the applicable undocumented information.

If I can assist with either the concept or the design of a new storage method, please let me know... I'll always do my best to put my pennies where my tongue flaps, so I'd be happy to help out where I can!

So Callbacks() are similar in nature to thread.start_new_thread()? In this case it initiates the thumb loading within a separate thread to allow the objects to finish loading the metadata and return it to the UI. I already had the feeling a URL Service was the way to go, so I gave myself a head start on it last night. I saw a very similar example of the above when browsing the source of a few channels (I think it was YouTube...or Yahoo?), but I believe your further explanation just clarified a few additional questions. As mentioned in the docs, making url calls during these functions is discouraged, but as an alternative you can use Callbacks for further processing if it's an unavoidable situation... Unfortunately the docs didn't state much past that, so I wasn't quite sure what I was supposed to do.

Yes, Callbacks allow for delaying the request/loading/processing of certain aspects of the plugin. IIRC, they can be used for the object's "key" and "thumb" attributes only.
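
As a rough illustration of the thumb case (the markup, route, and helper names below are hypothetical): the object is returned to the client immediately, and the callback only runs when a client actually asks for the artwork:

```python
@route('/video/example/thumb')
def Thumb(detail_url):
    # Runs lazily, only when a client requests this item's artwork.
    detail = HTML.ElementFromURL(detail_url)
    thumb_url = detail.xpath('//img[@class="poster"]/@src')[0]
    return Redirect(thumb_url)

def BuildVideo(title, detail_url):
    return VideoClipObject(
        url=detail_url,
        title=title,
        # Deferred via Callback, so building the list stays fast.
        thumb=Callback(Thumb, detail_url=detail_url)
    )
```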

Quote: "I get your opinion, I just feel it sets limitations on what we can or cannot do for our channel users... In the long run, I just don't get how deprecating functions that could greatly improve the overall experience for the user is beneficial."

I don't want to speak for the developers because I don't know their thoughts, and they're of course capable of changing their minds. I believe that some of the limitations that have been put in place (sandboxing plugins, using restricted Python, etc.) are there to limit possible security holes, and some are there because certain functions may not fit with their vision of how channels interact with the Plex ecosystem, now or in the future. Specifically for the case of Data/Dict usage in URL Services, I know that the services are intended to be able to completely stand alone. Whether they exist in the channel bundle or the Services.bundle is irrelevant to the local instance of PMS; for better or worse, they are treated the same. The need for them to be capable of acting as stand-alone services is due to their use beyond the simple case of channel playback. Services are involved in (off the top of my head): the PlexIt bookmarklet, sharing/recommending videos, casting videos between devices, the global Plex search, and of course channels.
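
To illustrate what "stand-alone" means in practice, a URL Service boils down to a few functions that must work from the url alone, with no channel state to lean on. A minimal sketch, with hypothetical site markup and stream details:

```python
# ServiceCode.pys for a hypothetical URL Service.

def NormalizeURL(url):
    # Reduce equivalent urls to one canonical form.
    return url.split('?')[0]

def MetadataObjectForURL(url):
    # Everything must be derived from the url itself -- no Data/Dict.
    page = HTML.ElementFromURL(url)
    return VideoClipObject(
        title=page.xpath('//h1/text()')[0],
        summary=page.xpath('//div[@class="summary"]/text()')[0]
    )

def MediaObjectsForURL(url):
    return [
        MediaObject(
            container=Container.MP4,
            video_codec=VideoCodec.H264,
            audio_codec=AudioCodec.AAC,
            parts=[PartObject(key=Callback(PlayVideo, url=url))]
        )
    ]

@indirect
def PlayVideo(url):
    # Resolve the actual stream url at play time.
    page = HTML.ElementFromURL(url)
    stream_url = page.xpath('//source/@src')[0]
    return IndirectResponse(VideoClipObject, key=stream_url)
```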

I don't disagree about the value of accessible data storage for channels. I use it quite a bit in a number of my unsupported channels. It's quite possible that those methods won't be removed from channel support. If there's a good case for allowing channels to continue using them, I'm sure the devs will consider it. I have no evidence to support my theory that they will be deprecated, just a hunch based on off-hand comments here and there.

Quote: "I was merely suggesting adding a one-liner to the page to warn channel developers that these features are accessible from the __init__.py only, and not from within any accompanying services... if there's something I can contribute to help encourage these features to be implemented further, or to help with the design of an updated storage method, I'd be willing to help out where I can."

I understand what you mean about updating the docs. My point wasn't that it was a bad idea, just that there are a lot of minor updates that need doing, and unfortunately updating the docs generally takes a backseat to further development. My understanding is that the next major framework revision will include more complete and automated documentation. Sadly, the current docs are unlikely to receive any updates; they haven't really been touched since they were first released.

Quote: "...it's really a mix of the site's shortcomings, as well as the (not-yet-fully-recognized) necessity to walk the secondary urls if well-rounded storage support is unavailable... I must be able to access the player urls to distinguish which are playlists and which are single videos... The point is that with access to Data() across the board, not only are all of these issues resolved with a single page request and incremental caching..."

I agree that the need for proper playlist handling seems like a big hurdle for your (and others') situation. If playlists were properly handled by the YouTube (or other relevant) service, you wouldn't need to parse extra pages when loading lists. Since Data() access isn't available (and likely won't ever be) to services, you're caught between a rock and a hard place.

Quote: "Maybe in the future giving channels a place in the Plex db to add custom data, or a step further, giving each channel the ability to create their own db (YES! YES!! YES!!!) would be the practical solution?"

A number of channel developers (myself included) have asked about having PMS treat channel content like local library content. IIRC, the answer is that it would require a pretty major overhaul of the PMS DB schema (or something similarly non-trivial). Not to say it will never happen, but it's likely not on the near horizon.

Quote: "...the only substantial 'uphill battle' I have faced so far has unfortunately been the great battle of the Plex docs, and the applicable undocumented information."

I realize that it's little consolation but you are not alone in your frustration with the state of the current documentation. As I mentioned previously, it's been an issue for a while and we do what we can to try to fill the gaps until a proper solution surfaces.

I was reading over the conversation and just thought I would add my two cents to a couple topics.

First, as far as client apps and development go, I have found that Roku can handle everything I throw at it in the Channel API. So I tend to design and test my channel code on the Roku first, and then, once I have it working the way I want, I go back and test it with other clients like PHT and Plex/Web to find any client-related issues.

As for your statement about limitations to the channel API and how more choices allow you to create the best user experience: I love any code that allows developers to create and offer new features to Plex channels, but there is a lot to be said for the KISS method. The more working parts you have in your plugin, the more chances there are for it to break. Even if you do not care for the channel to be added to the channel store, and you are the one who will fix or update your channel, this is still a volunteer community where everyone has a real life and responsibilities. And when real life gets in the way of those issues getting fixed immediately, the end users complain.

And despite the fact that there is lots of documentation on the forums and in the help documents explaining that all channels are maintained by volunteers, and that unsupported apps are only supported and maintained by an individual, the end user still doesn't get it. They do not understand that Plex just provides the API, and that the volunteer community can, if they wish, use it to create and share plugins with others. They do not understand that a website used for a channel might change its code often and cause issues with the channel, and that the channel developer has no control over that. The end user does not grasp who creates these plugins, how these plugins work, or how they are updated and who updates them. In their mind, they installed Plex, and everything they install as part of Plex should work all the time. And if it doesn't, they complain, a lot.

And when the channel breaks too often, or it hasn't worked for a period of time because "real life" keeps you from fixing an issue immediately, it impacts how users see all Plex channels, and even the Plex brand itself. So I understand why certain features of the channel development API might be removed if Plex deems them to cause too many issues within the channels that use them.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.