I would like to suggest allowing for local (Ollama) AI endpoints for use with Sonic Sage. This would be nice for generating movie and TV playlists/recommendations as well. As I understand it, Ollama and others mirror the OpenAI API, so I feel like letting users add an API key and point the URL at a local host might be a light lift.
Another direction might be to use Open WebUI or something similar as an intermediary, using something like their Pipelines integration.
Besides the URL and the API key, I think there'd also be a need to feed the PMS media contents into the model somehow, and that could require some engineering. If your PMS has anything like a lot of titles, you would quickly run out of token space trying to feed them all in as prompt context. So you would need to do some RAG, and keeping that indexed RAG DB up to date with additions and deletions from your media library would also need to be dealt with. I don't think any of this is insurmountable, but for a local AI to be useful, there's more to it than just letting you connect to a locally hosted model.
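To make the shape of that concrete, here's a bare-bones sketch of what the indexing side could look like, assuming plexapi to read the library, an Ollama embedding model (nomic-embed-text) behind its OpenAI-compatible endpoint, and a throwaway in-memory index rather than a real vector DB:

```python
# Rough sketch only -- not a finished design. Server URL, token, and model
# names are placeholders; a real setup would use a proper vector store and
# re-index whenever the library changes.
import numpy as np
from openai import OpenAI
from plexapi.server import PlexServer

PLEX_URL, PLEX_TOKEN = "http://localhost:32400", "YOUR_PLEX_TOKEN"  # placeholders
client = OpenAI(base_url="http://localhost:11434/v1", api_key="ollama")

# 1. Pull album titles out of the PMS music library.
plex = PlexServer(PLEX_URL, PLEX_TOKEN)
albums = [f"{a.parentTitle} - {a.title}" for a in plex.library.section("Music").albums()]

# 2. Embed them locally instead of stuffing every title into the prompt.
emb = client.embeddings.create(model="nomic-embed-text", input=albums)
index = np.array([d.embedding for d in emb.data])

# 3. At query time, embed the request and keep only the closest matches
#    as prompt context for the chat model.
def top_matches(query: str, k: int = 20) -> list[str]:
    q = np.array(client.embeddings.create(model="nomic-embed-text", input=[query]).data[0].embedding)
    scores = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    return [albums[i] for i in np.argsort(scores)[::-1][:k]]
```

You'd then hand only those top matches to the chat model as context, and re-run the embedding step when titles are added or removed.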
Yes, I agree. I wonder how easy it would be to allow Ollama to be used in lieu of, or in addition to, the OpenAI API. I've been trying out Sonic Sage, so I have that set up, but I've also been experimenting with Ollama, Open WebUI, and Stable Diffusion locally. So I think the suggestion by AlieFoSho is a good one.
This should be fairly simple to implement. Ollama offers an OpenAI-compatible API. Considering the option is already available for OpenAI, adding an editable endpoint URL and making the API key optional should solve the issue.
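To show how small the client-side change is, here's a minimal sketch of the same pattern (not Plexamp's actual code): the official openai Python SDK talking to a local Ollama instance just by swapping the base URL. The model name and prompts are placeholders for whatever you run locally.

```python
# Minimal sketch: point the openai SDK at Ollama's OpenAI-compatible /v1
# endpoint instead of api.openai.com. Ollama ignores the API key, but the
# SDK requires a non-empty value.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:11434/v1",  # local Ollama endpoint
    api_key="ollama",                      # placeholder; not validated by Ollama
)

response = client.chat.completions.create(
    model="llama3.1",  # whichever model you have pulled locally
    messages=[
        {"role": "system", "content": "You are a music recommendation assistant."},
        {"role": "user", "content": "Build me a mellow late-night playlist."},
    ],
)
print(response.choices[0].message.content)
```

The same base-URL swap works for any client built against the OpenAI API, which is why an editable endpoint plus an optional key is the whole ask.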
Squeaky wheel here. Seriously, this is not a difficult feature to ask for.
With some of the moves Plex has been making lately, I'm starting to wonder if they have a deal with OpenAI, which of course in 2025 would include selling data, as always.
This would really be ideal for Plexamp as a "play similar songs" feature, but it could also be used more generally as a recommendation engine for media you may enjoy based on the content you've viewed. It would be really nice to have.
This entire discussion boils down to one simple, powerful change:
Please make the AI endpoint URL in Plexamp a configurable setting.
That’s it. That’s the entire “ask.”
The “Why” (This is a Sunk Cost, Not a New Feature)
All the hard work is already done. A massive investment was made to build the client-side UI and logic in Plexamp for the OpenAI/TIDAL integration.
Right now, that is a sunk cost. It’s a premium feature, part of the paid Plex Pass offering, that is currently “rotting” on the vine because its original backend connection is gone.
As paying customers, we see the broken potential every time we use the app.
The “Easy Effort” (The Win-Win Business Case)
This is the lowest-hanging fruit on your roadmap, and it’s a perfect “win-win.”
Plex’s Cost (The “Low Effort”):
One text field: Add a single “Custom AI Endpoint URL” field in the Plexamp advanced settings.
One disclaimer: Make it a Plex Pass feature and add a note: “This is an unsupported, advanced feature. Use at your own risk.”
Plex’s Gain (The “High Return”):
Instantly Salvage an Investment: You immediately turn that “rotting” code into one of the most powerful, active features in Plexamp.
Add Massive Plex Pass Value: This is the ultimate power-user feature. It’s a huge, unique selling point for your most dedicated, paying subscribers.
Zero Support Burden: The community will take on 100% of the work. We will build the RAG backends, set up our local LLMs, and write the guides for each other. You just need to give us the “key” (the configurable field).
Plex is trying to build a sustainable business, and we’re here to support it. Please don’t let a major, valuable, premium-tier feature die. This simple change unlocks it for all your paying users and costs you almost nothing.
Not judging your request in any way, but if that endpoint existed right now and you were able to have it start querying your local LLM, what context would your model have to respond to queries?