Thanks again for your support Mikedm139, you've been a great help so far! I also want to thank you for toughing it out through the lengthy post/debate, I know it can get a bit tedious ;)
Quote:
I've gone so far as to chain multiple replaces together as well.
fixed_string = broken_string.replace(bad_char, good_char).replace(bad_char2, good_char2).replace(bad_char3, good_char3)
On a more advanced level, one plugin I wrote had issues with several HTML entities which were not getting caught and handled properly by the framework. There I had to include a custom method to clean it. See here.
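For the HTML-entity case, a tidier variant of the same chained-replace idea is to loop over a replacement map. This is only a sketch of the kind of custom cleanup method mentioned above; the function name and the entities listed are illustrative, not the actual ones that plugin handled:

```python
# Sketch of a custom cleanup method for HTML entities that the framework
# isn't catching; the entity map below is illustrative only.
def clean_string(broken_string):
    replacements = {
        '&amp;': '&',
        '&quot;': '"',
        '&#39;': "'",
        '&nbsp;': ' ',
    }
    fixed_string = broken_string
    for bad, good in replacements.items():
        # Same effect as chaining .replace() calls, but easier to extend
        # as you discover more problem entities.
        fixed_string = fixed_string.replace(bad, good)
    return fixed_string
```

Adding a newly discovered bad entity then only means adding one line to the map instead of another `.replace()` in the chain.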
Quote:
Yes, that is the preplay screen. In regards to your metadata issues, I responded in your other thread. Short version: write a URL service to feed your own metadata and then pass video handling off to other URL services as necessary.
Thanks! I will check both of those out after I'm done replying
From the google doc linked in this sticky thread: Missing from the Official Plugin Documentation
Quote
Special handling for images
Because loading images can be considerably slower than loading text-based metadata, the plugin framework allows for asynchronous loading of image files (thumb, art). This drastically reduces the likelihood of time-outs while loading menus with numerous items, each having a unique thumb (or background art). Without the asynchronous loading, the client app cannot render the ObjectContainer until all metadata, including images, is fully downloaded. With the asynchronous loading, the app will display the text-based metadata immediately, then images will be displayed as they complete downloading. The way to take advantage of this asynchronous load mechanism is to assign a callback as the value for the ‘thumb’ (or ‘art’) attribute of the object in question. The callback can be a method elsewhere in the channel code, for example if determining the final url for an appropriate image takes one or more extra HTTP requests. Or, it can be the plugin framework’s built-in method which allows for a graceful fallback if the image url fails to load, i.e.:
thumb = Resource.ContentsOfURLWithFallback(url=url, fallback=fallback)
In other words, for your channel, you have a long list of video titles with associated links. Assigning the title to the VideoClipObject is very fast, but following the link to grab the thumb image is pretty slow, especially when combined with several dozen (or thousand) other requests. Instead, use a custom callback to fetch the image, and the client apps will load the menu without images and populate the images asynchronously as they are returned.
@route(PREFIX + '/videos')
def VideoMenu():
    list_page = HTML.ElementFromURL(page_url)
    oc = ObjectContainer(title1="Videos")
    for item in list_page.xpath('blah'):
        title = item.xpath('./blahblah')[0].text
        link = item.xpath('./meh')[0].get('href')
        oc.add(VideoClipObject(url=link, title=title, thumb=Callback(GetThumb, url=link)))
    return oc

...

@route(PREFIX + '/getthumb')
def GetThumb(url):
    thumb_page = HTML.ElementFromURL(url)
    thumb_url = thumb_page.xpath('//image')[0].get('href')
    return Redirect(thumb_url)
So Callbacks() are similar in nature to thread.start_new_thread()? In this case it initiates the thumb loading within a separate thread, allowing the objects to finish loading their metadata and return it to the UI. I already had the feeling a URL Service was the way to go, so I gave myself a head start on it last night. I saw a very similar example of the above while browsing the source of a few channels (I think it was YouTube...or Yahoo?), but I believe your further explanation just clarified a few additional questions. As mentioned in the docs, making URL calls during these functions is discouraged, but as an alternative you can use Callbacks for further processing if it's an unavoidable situation...Unfortunately it didn't state much past that, so I wasn't quite sure what I was supposed to do.
Quote 1:
Correct. Services (URL or Search) are isolated from the cache and Data, etc.
Quote 2:
I disagree that it's not optional. I'll grant you that it may not be ideal to only provide a title and thumbnail for each item in the list, but as long as more information is available to the user upon request, I don't see it as a big issue.
Quote 3:
To be perfectly honest, I would not be surprised to see the Data methods become deprecated. Their use has been discouraged for several iterations of the plugin framework. Any channel which makes use of them is highly unlikely to be accepted into the official Channel Directory. I believe that the exposed Data methods are leftovers from earlier days. The framework makes use of them below the surface, but realistically channels are not expected to do so. This is of course just my opinion, so take it with a grain of salt.
Quote 4:
The docs being outdated (and incomplete) is a well-known issue. See the sticky thread at the top of this forum. It's my understanding that when the next major revision of the plugin framework is released (whenever that will be), new docs will accompany it.
I get your opinion, I just feel it sets limitations on what we can or cannot do for our channel users...and a better user experience cannot harm Plex in any way. ;) I also understand that the more convoluted the construction is, the harder it will be for others to maintain, but this warning was also loosely given in the docs, where it further stated that the creator would be left responsible for their own maintenance. That said, and no disrespect meant, but I really don't want my work maintained by anyone but myself anyhow, so this works out well for all parties ;) It just seems like those who care to put the extra effort into the user's experience, and are willing to maintain their own work, are sort of being left behind? :(
Imho there's a significant benefit to having accessible data storage available to channel devs, but there's no doubt you have a better understanding of the current framework than I do, and of what is best for the future of Plex. In the long run I just don't get how deprecating functions that could greatly increase the overall experience for the user is beneficial? Just a thought, but if it's simply a matter of channel maintenance, then an addition to the channel structure to include a third-party list might be a viable solution? Just declare under the third-party channel section that these channels are maintained by their respective developers, and all relevant questions and/or issues with said channels should be taken up with their respective creators.
With regards to updating the docs, I admit I probably should have been more specific about what I meant. :P I was merely suggesting adding a one-liner to the page to warn channel developers that these features are accessible from the __init__.py only, and not from within any accompanying services. In light of your statement above, I might also suggest a second line mentioning that these functions might be deprecated in the future. ;) I'm suggesting this to help save us channel devs from wasting time creating more intricate methods that by all implications should work. In almost any other scenario, it's natural to assume that such functions would be accessible from within any function defined in your channel's code. Although what you're stating is a bit vague as to why this approach is being discouraged/deprecated, it makes enough sense to know that thought has already been put into it and decisions have already been made...but if there's something I can contribute to help encourage these features to be implemented further, or to help with the design of an updated storage method, I'd be willing to help out where I can. :)
I'm not going to address this point by point. Just a few comments.
- The site provides a search; what sort of advanced queries are you hoping to achieve that you currently can't?
- In general, it sounds to me like you're trying to make your channel make up for the shortcomings of the website. If the website does not provide an adequate interface or API to achieve what you want, you're always going to be fighting an uphill battle.
- As I mentioned above, I suspect that Data/Dict are likely to take a further backseat rather than gain further integration to channels and services.
As mentioned above, it's not really a matter of the advanced features I have in mind; as you mostly called it in your following point, it's really a mix of the site's shortcomings and the (not-yet-fully-recognized) necessity of walking the secondary URLs if rounded storage support is unavailable.
I can't stress this enough: the reason for walking the secondary URLs is not the extra metadata, that's just an added bonus while I'm already there. It's because the URLs the player needs are not located on the full list page, so I can't add the videos to the container at the right time. Upon start-up, the first step is to select the genre. After selecting the genre, the second step is a per-page list of the available titles. Currently the 'Science' genre alone contains 947 titles, which is equivalent to 79 pages at 12 titles/page. Now, from a user's perspective, who would ever get anywhere close to page 79 with that many pages available?! :P Increasing the per-page limit to 25 would already cut the page count down to 38 pages, and would also provide a feasible amount of results to display on a single page, as 12 titles just looks way too bare on most Plex clients. To stress the necessity of secondary requests even further: if you previously agreed that I have a viable workaround for handling playlists with a blend of objects, then I must be able to access the player URLs to distinguish which are playlists and which are single videos...and if they are single videos, then I must be able to add the video URL to the container during step 2 to actually accomplish the blended workaround....It's a horrible catch-22, I know!!
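The page math above is easy to sanity-check. A quick sketch, using the title counts quoted above (the helper name is mine, just for illustration):

```python
import math

def page_count(titles, per_page):
    """Number of list pages needed to show a given title count."""
    return math.ceil(titles / per_page)

# The 'Science' genre quoted above: 947 titles.
print(page_count(947, 12))  # 79 pages at the site's 12 titles/page
print(page_count(947, 25))  # 38 pages at a 25 titles/page limit
```

So bumping the per-page limit from 12 to 25 roughly halves the page count while keeping each page a reasonable size.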
Now with regards to the search, the same 12-per-page restriction applies, and for a search this can result in a much larger problem than a bare-looking page. A perfect example query is the word "universe", as it results in 241 matches. That's equivalent to 21 page walks to obtain all possible results, and, except for the last page, each walk requires an additional 12 web requests to obtain the player URLs; which results in roughly 240 web requests! Although completely impractical, there's still a potential for 735 page walks with so many titles available, and at even an eighth of that it's still way too much processing to make the search anywhere close to practical.
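To put a rough number on the full search walk described above (one request per result page, plus one per matched title to grab its player URL), a small sketch with the counts quoted above; the function is hypothetical, just to show the blow-up:

```python
import math

def search_request_count(matches, per_page=12):
    """Rough total HTTP requests to walk every result page and then
    fetch the player URL for each matched title."""
    pages = math.ceil(matches / per_page)  # 21 page walks for 241 matches
    return pages + matches                 # plus one request per title

# The "universe" query quoted above: 241 reported matches.
print(search_request_count(241))  # 262 requests for a single search
```

That lines up with the ~240 secondary requests mentioned above, once the 21 page walks themselves are counted in.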
...And then there's the site's broken, but not technically broken, pagination system on top of everything else! Although 241 titles are reported as matching "universe", only 14 pages are actually available. If the reported numbers are correct, then approximately 6 pages of "universe" results are missing? My suspicion is that their own search function is poorly based on their full list page, and if that's the case then some titles will be listed multiple times depending on their defined categories. This is also something I had already taken care of in my channel from the beginning. ;)
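Deduplicating a category-based listing like that comes down to a seen-set over whatever uniquely identifies a title. A minimal sketch (keying on the video URL is an assumption about what's unique; the function name is mine):

```python
def dedupe_titles(items):
    """Drop repeat listings of the same title (e.g. one listing per
    category), keeping the first occurrence. items are (title, url)
    pairs; the url is assumed to uniquely identify a video."""
    seen = set()
    unique = []
    for title, url in items:
        if url not in seen:
            seen.add(url)
            unique.append((title, url))
    return unique
```

Run once over the scraped full list, this also gives honest counts to paginate against, instead of the site's inflated ones.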
The point is that with access to Data() across the board, not only are all of these issues resolved with a single page request and incremental caching, but (at least for me) the logic needed to cache the data, fix the site's deficiencies, and further add custom features was actually quite simple. Without it, concepts like a favorites list, further sorting options, increased items per page, etc. could not be implemented at all, and the quality of the overall channel would be horrible!
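The incremental-caching idea reads roughly like this in plain Python. In an actual channel the cache would be persisted across runs via the framework's Data storage rather than the in-memory dict assumed here; the helper and its names are illustrative only:

```python
# In-memory stand-in for persisted channel storage. In a real channel this
# dict would be loaded from / saved to the framework's Data store instead.
_cache = {}

def fetch_page(url, fetcher):
    """Return the scraped details for a page, hitting the network only on
    the first access; later lookups come straight from the cache."""
    if url not in _cache:
        _cache[url] = fetcher(url)  # the expensive scrape happens once per URL
    return _cache[url]
```

Pages that have been walked once then load instantly, which is exactly the behavior described above, and the same cache can back a fast search.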
Don't get me wrong, I can still make use of Data within the channel itself, but if the data is already there then it just seems wasteful not to be able to reuse it for the accompanying service(s). The advanced features I have in mind only came to mind because the solution to the slowness issue already collects the necessary details for you. With my current setup, pages that have already been scanned in load almost instantly in the channel. With the same logic applied to the search, this could no doubt lead to one of the fastest searches available, and I can always document it and wrap it in functions for easier implementation by others. It would also have the same benefits for accompanying Metadata Agents and URL Services, which were initially part of the plan. I'm thinking the reluctance shown towards this is due to technical issues that make it more trouble than it's worth, and I can definitely understand it from that perspective ;). I just don't think full removal of the storage concept is the right answer. Maybe in the future, giving channels a place in the Plex db to add custom data, or going a step further by giving each channel the ability to create its own db (YES! YES!! YES!!!), would be the practical solution?
With regards to an "uphill battle"... With access to the full list from their site, my so-called "uphill battle" was more like a couple-inch incline, so there really haven't been any hard parts just yet. The truth is, and I'm afraid to come off as a jerk for pointing it out, but...the only substantial "uphill battle" I have faced so far has unfortunately been the great battle of the Plex docs, and the applicable undocumented information.
If I can assist with either concept or design towards a new storage method, please let me know... I'll always do my best to put my pennies where my tongue flaps so I'd be happy to help out where I can!