A fully local automated subtitle generator for Plex users (NAS-friendly, Whisper-based)

Hi everyone — I wanted to share a small open-source tool I built that might help fellow Plex users who rely on NAS setups or automated media libraries.

Most of us have run into at least one of these issues:
• A new episode or movie gets added → no subtitles
• Online subtitle sources are inconsistent or missing
• Sync issues with downloaded subtitles
• Privacy concerns with cloud STT services
• Running Whisper manually for each file is tedious
• NAS hardware is often underutilized

So I built NAS Subtitler, a lightweight, fully local subtitle generation service designed to run on Synology, QNAP, Unraid, TrueNAS, OMV, Asustor, or any Docker-capable NAS.

Project URL: https://subtitlesdog.com/en/nas-subtitler

GitHub: subtitlesdog/NAS-Subtitler

🔹 What it does
• Watches your media folders for new content
• Automatically generates .srt subtitles
• Runs entirely offline — no cloud APIs
• Uses Whisper-compatible models (high accuracy)
• Works with CPU or GPU
• Subtitles drop right next to your video files
• Plex recognizes them instantly (“Same folder, same name”)
• Supports multi-language detection and timestamp correction
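The "same folder, same name" convention Plex uses for sidecar subtitles can be sketched as a small helper. This function (hypothetical, not part of the project's code) derives the .srt path Plex will pick up next to a video file:

```python
from pathlib import Path

def sidecar_srt_path(video_path: str, lang: str = "en") -> str:
    """Return the sidecar .srt path Plex expects: same folder, same
    base name, with an optional language code before the extension."""
    p = Path(video_path)
    return str(p.with_name(f"{p.stem}.{lang}.srt"))

print(sidecar_srt_path("/media/Movies/Heat (1995)/Heat (1995).mkv"))
# /media/Movies/Heat (1995)/Heat (1995).en.srt
```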

🔹 Why it’s useful for Plex

Plex is amazing at organizing and streaming, but subtitles are always that one manual step left in the automation chain.

With NAS Subtitler: Sonarr/Radarr → Plex → Done
No more subtitle hunting, no risky online sources, and no manual STT runs.

It basically turns your NAS into a background AI worker that fixes subtitles before Plex even scans the file.

🔹 Who it’s for
• Users with large and frequently updated libraries
• People who want 100% local subtitle generation
• Privacy-conscious users
• NAS owners who want to use their hardware more efficiently
• Anyone leveraging Plex, Sonarr, Radarr, Overseerr, Bazarr, or Jellyfin ecosystems

🔹 Platforms tested
• Synology DSM 7 (Docker)
• QNAP QTS (Container Station)
• Unraid (Docker template + Compose)
• TrueNAS SCALE (Apps / Docker / K8s)
• OMV (Compose)
• Asustor ADM (Docker)

If Plex is running on top of any of these platforms, subtitles will show up automatically.

I’m happy to answer deployment/setup questions.
If you have feature requests or edge cases Plex users commonly run into, I’d love to hear them — subtitle workflows vary a lot depending on the library setup.

Hope this helps someone who wants a fully automated solution without relying on external subtitle providers.

Hi there!

I was interested in this project and deployed it, but I’m only seeing OpenAI in the settings pane? I’d like to generate subtitles locally rather than rely on an API.

Is this still a WIP feature, or did I miss something in the documentation that lets this mode become available?

Thanks in advance!

This sounds amazing. Will give it a go one of the coming days.

I have the same question as you. @z6bbq could you tell us how to make it 100% local?

same feedback from me, i’d rather pass it to a local AI container

So is this actually not local?
Can you point it at a self-hosted AI that exposes an OpenAI-compatible API?

I did some digging in their code and opened a pull request, to which they haven’t responded yet.

Basically, their frontend is hardcoded to validate the entered API key against openai.com’s servers, but the backend already accepted an env variable for supplying a custom backend URL. I updated their code so the frontend uses that env variable too.

So I did get it working in Docker Compose with a local Whisper server in the end. I’ll dig up the docker-compose file if anyone’s interested. My fixed code can still be found on their pull requests tab.
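For anyone who wants to try this before I dig up my file, a docker-compose sketch along those lines might look like the following. The image tags and the `OPENAI_BASE_URL` / `OPENAI_API_KEY` variable names are assumptions for illustration, not confirmed from the project’s code — check the repo (and the pull request) for the actual env variable it reads:

```yaml
services:
  whisper:
    # any local Whisper server exposing an OpenAI-compatible API;
    # this image name is illustrative
    image: fedirz/faster-whisper-server:latest-cpu
    ports:
      - "8000:8000"
  nas-subtitler:
    image: subtitlesdog/nas-subtitler:latest   # illustrative tag
    environment:
      # exact variable names are placeholders — see the project repo
      - OPENAI_BASE_URL=http://whisper:8000/v1
      - OPENAI_API_KEY=local-dummy-key
    volumes:
      - /volume1/media:/media
```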

That being said, this still feels more like a proof of concept than a finished project: for example, it has no Plex integration at all, and it doesn’t use something like ffprobe to check which subtitles are already present.

It simply checks your disk for a .srt it generated itself (not something like title.eng.srt).
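To illustrate what an ffprobe-based check could look like (again, this is not something the project does — it’s a sketch of the missing step), here’s a hypothetical helper that lists the language tags of embedded subtitle streams from ffprobe’s JSON output:

```python
import json
import subprocess

def subtitle_streams(ffprobe_json: str) -> list[str]:
    """Parse `ffprobe -print_format json -show_streams` output and
    return the language tags of any embedded subtitle streams."""
    data = json.loads(ffprobe_json)
    return [
        s.get("tags", {}).get("language", "und")
        for s in data.get("streams", [])
        if s.get("codec_type") == "subtitle"
    ]

def probe(path: str) -> list[str]:
    # requires ffprobe on PATH; -show_streams lists every stream
    out = subprocess.run(
        ["ffprobe", "-v", "quiet", "-print_format", "json",
         "-show_streams", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return subtitle_streams(out)
```

A tool could skip transcription entirely when `probe(path)` already reports the wanted language.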

The algorithm that splits the audio into chunks before transcribing works fairly well. It isn’t perfect, but it’s still better than running Whisper on the whole file at once, which has issues in long-form contexts.

Thanks for the additional information. I’ll let it cook a little longer then, since I have a lot of subs already grabbed and synced by Bazarr that are probably better than what this comes up with. I was hoping to supplement Bazarr with this tool to complete the library, but it looks like that’s not a good match for what it is right now.

If you’re looking to supplement your library alongside Bazarr, I’d recommend SubGen on GitHub. It’s not perfect, since it doesn’t do the time slicing this does, but it integrates nicely with Bazarr.