Derailed topic that turned into GDrive API errors - ON HOLD but likely resolved

I’ve got these:

[Unit]
Description=rclone-cache
After=network-online.target rclone-gdrive.service
[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/bin/rclone mount --allow-non-empty --allow-other GCache: /mnt/GCache
ExecStop=/bin/fusermount -uz /mnt/GCache
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target

[Unit]
Description=rclone-crypt
After=network-online.target rclone-gcache.service
[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/bin/rclone mount GCrypt: /mnt/GCrypt --allow-non-empty --allow-other
ExecStop=/bin/fusermount -uz /mnt/GCrypt
Restart=always
RestartSec=2
[Install]
WantedBy=multi-user.target

[Unit]
Description=rclone-drive
After=network-online.target
[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/bin/rclone mount --allow-non-empty --allow-other GDrive: /mnt/GDrive
ExecStop=/bin/fusermount -uz /mnt/GDrive
Restart=always
RestartSec=1
[Install]
WantedBy=multi-user.target
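A side note on the units above (systemd behaviour, not something stated in the thread): `After=` only orders units, it never starts them, so the crypt unit was not actually launching the other two. It does not need to, either, because rclone resolves chained remotes internally (GCrypt: reads through GCache:, which reads through GDrive:). A single consolidated unit is likely enough; a sketch reusing the names above:

```ini
[Unit]
Description=rclone-crypt
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
User=root
Group=root
ExecStart=/usr/bin/rclone mount GCrypt: /mnt/GCrypt --allow-other
ExecStop=/bin/fusermount -uz /mnt/GCrypt
Restart=always
RestartSec=2

[Install]
WantedBy=multi-user.target
```

`--allow-non-empty` is dropped here on purpose; it can silently mask an already-mounted or non-empty mountpoint.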

The only service that is actually active is the Crypt one: mounting it alone appears to cover all three remotes, without needing three separate mounts.

[GDrive]
type = drive
client_id = security first
client_secret = security first
scope = drive
team_drive = security first
token = security first
use_trash = false
chunk_size = 16M

[GCrypt]
type = crypt
remote = GCache:
filename_encryption = standard
directory_name_encryption = true
password = security first
password2 = security first

[GCache]
type = cache
remote = GDrive:
plex_url = http://127.0.0.1:32400
plex_username = security first
plex_password = security first
info_age = 2d
plex_token = security first
db_path = /mnt/GCacheDB
chunk_path = /mnt/GCacheDB
chunk_size = 16M
chunk_total_size = 20G
db_purge = true

So … What do you think?

Do you even need to bother with the cache layer these days?

I run mine just using vfs and it seems to be fine.

I use both an encrypted mount for some stuff and another unencrypted for other stuff.

But I don’t write directly to the mount; I just use rclone to copy files to my gDrive.

An example of my mount command:

rclone mount gDrive: /path/to/my/mount --allow-other --read-only --buffer-size 256M --dir-cache-time 6072h --drive-chunk-size 128M --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --use-mmap

and my encrypted one is the same but with the gDriveEnc: config instead of the gDrive: config


I began bothering with this only recently because of an incredibly high error rate on the Google API side. It was over 30%, which was affecting Plex’s ability to scan and my seedbox’s ability to upload.

Can I just wedge these VFS options into my current config?

Not sure, as I don’t use cache; I thought VFS is sort of an alternative to it, so adding those options on top of your cache config might not help.

I also use feral to upload to my gDrive, just using rclone copy.

I do set a tps-limit on my copy command to prevent some errors, but I think that is more to do with rclone listing the drive, as I have quite a lot of media.

I will post my copy command shortly. (am not in front of the right computer at the moment)

I do add new stuff daily and don’t usually see any API issues (apart from an issue last week, but I think that was down to Google).

Although I have not done a brand new plex setup in a long time.

I look forward to your reply

This is what I use on my slot to copy stuff:

rclone copy -v -P --transfers=10 --min-size 1k --fast-list --tpslimit 5 ~/path/to/files/ gDrive:plex

I have a sort of mirror of all the files on feral, but the old ones (the ones already copied to my gDrive) are just empty placeholders for Sonarr and Radarr. That’s why I have --min-size 1k: it’s important to my setup, to make sure the real ones do not get overwritten by the placeholders.
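A quick local illustration (throwaway directory and filenames, nothing from the thread) of why --min-size 1k keeps the placeholders from being copied over the real files:

```shell
# 0-byte placeholder vs. a "real" file, and what a 1 KiB size floor keeps.
mkdir -p /tmp/minsize-demo
: > /tmp/minsize-demo/placeholder.mkv                 # 0-byte placeholder for Sonarr/Radarr
head -c 2048 /dev/zero > /tmp/minsize-demo/real.mkv   # a "real" 2 KiB file
# rclone's --min-size 1k ignores anything under 1 KiB; find shows the same split:
find /tmp/minsize-demo -type f -size +1k              # prints /tmp/minsize-demo/real.mkv
```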

I’m doing something very similar, my upload looks like this:

#!/bin/bash
while true; do
    /media/sdah1/duvadmin/bin/rclone move -v --transfers 12 --min-size 1k --size-only /media/sdah1/duvadmin/private/post-processing/ GCrypt:
    sleep 300   # pause between passes so an empty run does not hammer the API
done

I use placeholder files too.

I need to go read what --fast-list and --tpslimit are for.

OMFG…
I have been using the Gooby script for so, so long on a dedi where everything is just set up for you. Now I’m back testing on a seedbox and I had totally forgotten about VFS mode.
A thousand times thank you. :grinning:
Time to test it out.


Glad we could be of service. :stuck_out_tongue:

Yeah, don’t forget to disable your cache remote though.
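For the config posted up top, disabling the cache remote would roughly mean repointing GCrypt straight at GDrive and deleting the [GCache] section. A sketch (same remote names and placeholders as the original post, not a tested config):

```ini
# Sketch only: GCrypt now wraps GDrive directly; [GCache] removed.
[GDrive]
type = drive
client_id = security first
client_secret = security first
scope = drive
team_drive = security first
token = security first
use_trash = false
chunk_size = 16M

[GCrypt]
type = crypt
remote = GDrive:
filename_encryption = standard
directory_name_encryption = true
password = security first
password2 = security first
```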

So if you’re using vfs you can’t use the cache?

There should be no need, and it’s the cache remote that slows things down, if I recall.

Speed isn’t my concern. Without the cache, the Google API console said I had a ~35% error rate.

If you would, kindly go look at the screenshot in the original post up top.

But the VFS should change that.
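Applied to the original mount unit, the ExecStart line might then look something like this (a sketch borrowing the VFS flags quoted earlier in the thread; the values are suggestions, not tested on that setup):

```shell
/usr/bin/rclone mount GCrypt: /mnt/GCrypt \
    --allow-other \
    --buffer-size 256M \
    --dir-cache-time 72h \
    --drive-chunk-size 128M \
    --vfs-read-chunk-size 128M \
    --vfs-read-chunk-size-limit off
```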

I certainly don’t use cache.

I have my gDrive mounted on both my nas and dedicated server.

And I run both Plex and Emby on both, all getting their media from my gDrive and scanning at least daily.

And looking in the Google console, there are (almost) no errors.

Yeah, fairly certain you MUSTN’T use both together.
Like I said, it’s been a while. :laughing: