Derailed topic that turned into GDrive API errors - ON HOLD but likely resolved

Server Version#: Latest stable public circa 14May2019
Player Version#: The built-in web interface
Log excerpt: Plex Logs

So… I am tackling a multi-front issue. I have migrated from Dropbox for Business to GSuite Drive for Business because, for the same price, I keep unlimited space and gain a host of other features.

The problem is the web interface keeps dying. This is a fresh Debian 9.9 install with a fresh Plex installed via mrworf's plexupdate script on GitHub. I also currently have several rate-limit issues to tackle, even though each client has its own client ID and secret generated in the Google Developers Console.

An aside on what I mean by "each client" - here's the setup. I have rclone on my laptop just for manual checks. Rate limit. I have a Feral Hosting box that performs the downloading; after files are sorted locally, rclone moves them to their correct location. Rate limit. I have a box at Vultr that performs a cp --attributes-only to recreate the directory tree and rsyncs it to the Feral box, so Sonarr and Radarr can keep track of what I have without consuming more than a megabyte of disk space. Finally, there's the PMS box itself.
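The metadata-only replica trick is simple enough to sketch in a few lines of bash. The paths and the seedbox target below are assumptions - GNU cp's --attributes-only copies names, permissions, and timestamps but none of the file data, so every file in the replica ends up zero bytes:

```shell
#!/usr/bin/env bash
# Sketch of the "zero-byte replica" trick; all paths are placeholders.

# Recreate SRC's directory tree under DST with every file empty.
replicate_tree() {
  local src=$1 dst=$2
  mkdir -p "$dst"
  # --attributes-only skips the file data entirely, so the replica
  # keeps names/timestamps/permissions at near-zero disk cost.
  cp -r --attributes-only --preserve=timestamps,mode "$src/." "$dst/"
}

# Example usage (hypothetical paths/host):
#   replicate_tree /mnt/gdrive/media /mnt/replica
#   rsync -a --delete /mnt/replica/ seedbox:/home/user/media/
```

Sonarr and Radarr only stat the files to decide whether an episode is "on disk", so the empty stubs are enough to stop re-downloads.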

Then there is this:

It (the PMS web interface) won't load at all, either by direct IP or through the plex.tv interface. I also noticed in the logs that my server keeps trying to hit 192.168.1.7:8080/upnp and I don't know why - that's a separate Debian server that solely runs my UniFi controller for my Wi-Fi AP. On that box I've taken the liberty of modifying iptables to block all traffic from the Plex server (192.168.1.19).

Does anyone see any other glaring issues here? I can't be restarting the media server all the time. As it is, it has been unable to scan my files and build a library because it just seems to zombie out after a few hours of operation. I've scripted a watchdog for my rclone mount: if a .test_file stops being readable, it posts to Slack and restarts the systemd service to reactivate the mount.
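That watchdog can be sketched in a few lines of bash. The sentinel path, systemd unit name, and webhook variable below are all assumptions - substitute your own:

```shell
#!/usr/bin/env bash
# Minimal sketch of the mount watchdog described above.
MOUNT_TEST_FILE="${MOUNT_TEST_FILE:-$HOME/mnt/gdrive/.test_file}"
RCLONE_UNIT="${RCLONE_UNIT:-rclone-mount.service}"   # hypothetical unit name
SLACK_WEBHOOK="${SLACK_WEBHOOK:-}"                   # your incoming-webhook URL

# The mount is considered healthy if the sentinel file can be read.
mount_healthy() {
  [ -r "$1" ]
}

if ! mount_healthy "$MOUNT_TEST_FILE"; then
  # Alert first, then bounce the mount service.
  if [ -n "$SLACK_WEBHOOK" ]; then
    curl -s -X POST -H 'Content-type: application/json' \
      --data '{"text":"rclone mount unhealthy - restarting"}' "$SLACK_WEBHOOK"
  fi
  systemctl restart "$RCLONE_UNIT" 2>/dev/null || true
fi
```

Run it from cron or a systemd timer every few minutes; reading a tiny sentinel file is cheap enough not to matter against the API quota.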

Any assistance, greatly appreciated. If I need to provide more information, let me know. I’m an AWS certified systems admin with 15 years of Linux experience but damn if Plex hasn’t got me scratching my head. Thank you everyone.

Edit: Yes, I’ve noticed the HTTP 408 curl timeouts. I just am not quite sure what to do with them.

Is there anyone who can help me figure out why Plex keeps becoming unresponsive?

Update: I opened a ticket with Google. Per usual, Support has no idea what the API limit is, but they did put me onto something. If you have a GSuite account like I do - and moreover if you bought 5 seats for unlimited storage - storing your Plex library in a Team Drive removes the owner API limit on the drive itself, and then only the per-user limits apply. So if your Plex installation is actually a 3-server cluster like mine (one for automation/replication, one for downloading/uploading, one for the PMS itself), then using a separate user account for each (not just separate client IDs) effectively increases your API limit.

How’s that for an idea I’ve not seen posted before? :slight_smile:

Once I’ve migrated my 35 TB to a team drive and reconfigured all the servers through each of my family member’s accounts, I’ll post back with a new screenshot of my API usage and hopefully have a high amount of traffic with minimal to no errors.

To provide further information for anyone who might follow this topic, or in my footsteps: using the Drive web interface to move a multi-terabyte library is extraordinarily time-consuming, and Google offers no method - even for GSuite - to monitor the "data migration", as they've labeled it.

I probably should have done this through rclone on my server, but then I'd hit the 750 GB/day upload limit, since rclone would be downloading and then re-uploading in chunks, if I understand its mechanisms correctly.
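For a sense of scale, the 750 GB/day cap makes re-uploading the library a weeks-long affair. A quick back-of-envelope calculation (assuming binary units, 1 TB = 1024 GB):

```shell
# Days needed to re-upload a 35 TB library at Google's 750 GB/day upload cap.
TOTAL_GB=$((35 * 1024))   # 35 TB in GB (binary units; close enough for an estimate)
LIMIT_GB=750
# Ceiling division: round partial days up.
DAYS=$(( (TOTAL_GB + LIMIT_GB - 1) / LIMIT_GB ))
echo "$DAYS"   # -> 48
```

So roughly a month and a half even if the transfer saturated the cap every single day - which explains putting up with the Drive web interface instead.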

Just to ping the community again, is there anyone who can look at the above logs and tell me why the web interface keeps becoming non-responsive?

Update: Running each client with its own ID, but authenticated as a different user, results in far fewer errors on the Google Drive API:

This was achieved by using each of my family member’s accounts to authorize rclone to interact with GDrive API. One account per client. Currently using 3.

I'm starting to suspect that when GDrive errors out and tells Plex it can't access the files, the returned result is an unhandled exception and it causes the web interface to hang.

Though I can't help directly, I'm totally fascinated by your setup. Is there any reason for the whole scenario of setting up the servers on different accounts?

Particularly this?

I have a single GSuite account with unlimited storage, and the same API secret and key point to the same 85 TB of media on Google. This is on both a dedicated server and a seedbox that each run Plex, Sonarr, Radarr etc. via an rclone mount… All those apps constantly scan the files on GDrive.
There are no API errors (let alone bans) ever… Well, at least not in the last month. I know I must be missing something with regards to what your crazily (imho) complex setup offers you, beyond a multitude of areas to troubleshoot.

There is absolutely a reason for this. I have GSuite for Business - so when I have a question, I get Google on the damn phone. :wink: The technicians I spoke to say that the API limit is per user - not per IP, nor per client ID. So since I've used one of my family members' accounts to authorize rclone on each client, I now see a 0% error rate on the API.


This hasn't done anything to help me solve the issue of why Plex keeps hanging almost daily, if not more frequently.

Before I consulted with Google and split each server onto its own account (and this thread is veering WAY off topic, but whatever)… I did it like this:

  • seedbox - hosted by Feral Hosting
    - Sonarr
    - Radarr
    - Jackett
    - Transmission/rTorrent/Deluge
    - Plex (reserved for future use - not currently production)
    - I don’t actually download my files TO my seedbox. Once they’re here, they get sorted into a post-processing folder. This folder is recursively and continuously uploaded into my encrypted GDrive - the rclone flags ignore files under a few kilobytes because of what happens on my automation box.

  • automation box - hosted by Vultr
    - This box serves ALL my cloud automation purposes, but for Plex it’s doing the following:
    - Same rclone drive > cache > crypt chain as my PMS
      - It recursively performs a cp --attributes-only /mnt/source /mnt/target to create a minimally sized replica of the directory structure
      - It rsyncs this structure to where the seedbox organizes the files, so that Sonarr doesn’t keep trying to download shows once they’re uploaded off the box.

  • plex server - a local VM with a drive > cache > crypt mount

I honestly have no clue how you’re not hitting the API limits for your user.

Would you mind taking a screenshot of your Plex library settings page and posting what you're using for mounting the remote filesystem? I'm sure mine isn't the most efficient, but I'm learning my way through this.

I know some people use Plexdrive, but that GitHub repo hasn't had a commit in a while - so I opted to go purely rclone and FUSE. No unionfs/Plexdrive or anything else.

This is whilst actually scanning around 350 TV shows into Plex on a new box.
It's doing the Analyze part now without issue.
I'm using rclone & FUSE on 3 seedboxes, and I think the dedi that's getting cancelled was using mergerfs.

You did actually make me think of something though regarding Plex settings. Do you still have Video Preview thumbnail generation turned on in Plex?

I would also say that even your last screenshot shows you’re getting constant errors. Something is up.

Absolutely not. That takes up a large amount of disk space, and my local Plex server runs as a limited VM on ESXi, on an 8-drive RAID-10 Dell PowerEdge:


I am still getting some errors. I think you’d be quite able to help with that. If you’re up to a screen-share or something, I’d love to have you take an inside look at what I’ve done.

I can host MS Teams or Webex - would you be willing?

At work at the moment and to be honest a lot of your set up would be totally lost on me anyway. I like to keep things simple.

I’d love to keep it simple but there is so much erroneous and deprecated documentation out there…

How about posting your rclone mount commands, and any special flags you’ve specified in rclone.conf?

All very basic.
rclone mount --allow-non-empty --allow-other --cache-db-purge gcache: ~/mnt/gdrive &
Just what was recommended by the seedbox provider.
All default settings in rclone config, if I recall.

I do not have this in my mount command. I’m going to add it and see what happens.

But you do have cache set up?
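For anyone retracing this thread: the mount line quoted above leans entirely on defaults, and --allow-non-empty in particular is worth dropping, since it lets the mount silently shadow files already in the directory. A fuller invocation to experiment with might look like the following - a sketch only, where the remote name and every flag value are assumptions to tune, not recommendations from this thread:

```shell
# Sketch only - remote name, mount point, and flag values are assumptions.
rclone mount gcache: ~/mnt/gdrive \
  --allow-other \
  --dir-cache-time 72h \
  --buffer-size 64M \
  --log-level INFO \
  --log-file ~/rclone-mount.log &
```

The --log-file output in particular would show whether the mount itself is erroring when Plex hangs, which is the open question in this thread.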