The culmination of my experience with rclone is a completely Dockerized solution for auto-mounting cloud storage. See the source here on my GitHub, but read on if you want to hear more about my journey.
Since its inception, rclone has become synonymous with connecting cloud storage to devices running Linux, Windows, or macOS. Thanks to its compatibility with a wide range of hosts and cloud providers, it is seen in many homelab setups. My original goal was to get rclone configured and running on a headless server I had running Ubuntu.
Setup was pretty painless: the interactive config generator makes this easy; just follow the provider-specific guide (OneDrive, in my case). But I ran into a dead end using rclone's "headless" mode: unfortunately, it seems that the size of the authorization response you have to paste back into the terminal causes issues in some terminals. Luckily, there's a simple workaround: generate the config on a device with a web browser and then just SFTP it to your headless machine. We're in business!
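For example, assuming default config locations and a reachable host I'll call `headless` (adjust names and paths to your setup):

```
# On the machine with a browser, find where rclone keeps its config:
rclone config file
# Copy it to the headless box (scp speaks SFTP on modern OpenSSH);
# the destination directory must already exist:
scp ~/.config/rclone/rclone.conf headless:.config/rclone/rclone.conf
```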
The next step is configuring a filesystem mount and having it auto-mount when the system boots. The options I found were to either add an entry to /etc/fstab or create systemd unit(s). I decided to set both up just to compare functionality. My conclusions are:

- fstab supports the `_netdev` parameter, which delays the mount until networking is active. Using systemd units, you can additionally time other services around when the mount happens.
- The `x-systemd.automount` option in fstab, or generating both `.automount` and `.mount` units, delays the actual mount until the mount point is accessed (e.g. `ls` or `cd` on the directory). Make sure this is the functionality you want; it was not for me.
- Mounts created either way don't work with the `--rc` parameter (there is an open issue on rclone's repo for this). I am not sure if the `--rcd` parameter is also affected.

Below are some examples. To summarize how these function differently: if you add `x-systemd.automount` prior to `args2env`, the mount will be delayed until you access the directory.

```
OneDrive1:Data /onedrive/onedrive_1 rclone rw,nofail,_netdev,args2env,vfs_cache_mode=full,allow_other,uid=1000,gid=1000,config=/etc/rclone/rclone.conf,cache_dir=/var/cache/rclone,daemon-wait=600 0 0
```

Note that an fstab entry has to live on a single line.
After adding the above to your fstab, run `mount -av` or reboot your host.
And the equivalent systemd mount unit:

```
[Unit]
Description=Mount OneDrive with rclone
Wants=network-online.target docker.service
After=network-online.target docker.service

[Mount]
Type=rclone
What=OneDrive1:Data
Where=/onedrive/onedrive_1
Options=rw,nofail,_netdev,args2env,vfs_cache_mode=full,allow_other,uid=1000,gid=1000,config=/etc/rclone/rclone.conf,cache_dir=/var/cache/rclone,daemon-wait=600

[Install]
WantedBy=multi-user.target
```
After adding the above to /etc/systemd/system/onedrive-onedrive_1.mount (note that your file name needs to match the path in the "Where" clause of the unit), run `systemctl daemon-reload`, followed by `systemctl enable onedrive-onedrive_1.mount`, and then either reboot your host or run `systemctl start onedrive-onedrive_1.mount`.
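If you're not sure what the file name should be, `systemd-escape` can derive it from the mount path for you:

```
systemd-escape -p --suffix=mount /onedrive/onedrive_1
# prints: onedrive-onedrive_1.mount
```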
And here is the optional automount unit:

```
[Unit]
Description=Automount OneDrive with rclone
Wants=onedrive-onedrive_1.mount

[Automount]
Where=/onedrive/onedrive_1

[Install]
WantedBy=multi-user.target
```
Note: this needs to be paired with onedrive-onedrive_1.mount, but instead of running `systemctl enable onedrive-onedrive_1.mount` you would run `systemctl enable onedrive-onedrive_1.automount`.
Specifically, we'll be talking about setting up the rclone Web GUI, because it also exposes the remote control API that the auto-mount automation depends on. See the rclone service in this example docker-compose.yml for how to get set up.

The UI allows for generating rclone.conf files, or adding remotes to an existing one. For remotes using OAuth 2.0 with the auth-code flow, I recommend creating the config on a device with a web browser and then SFTP-transferring it over to the headless machine, if applicable.
The compose file is pretty self-explanatory, but I do want to call out a few important parts.

I mount the entire host filesystem with `/:/hostfs:rshared`, but you can use a different directory such as `/mnt` if you prefer. The caveats are:

- Your volume mapping would become `/mnt:/hostfs:shared`, and your mount points then need to live under that host directory.
- You still need `/hostfs` as the container mount point for the mount automation script to work.

The docker-compose.yml as configured will allow access from a reverse proxy that is also on the `reverse-proxy-network` Docker bridge network. If you don't use a reverse proxy, you would want to remove this network (leave `rclone-net` in place to allow communication with the auto-mount container) and instead bind the port supplied in `--rc-addr` to a port on the host.
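For reference, here is a stripped-down sketch of what such a service might look like. The image tag, credentials, port, and network names here are illustrative; treat the repo's docker-compose.yml as the source of truth:

```yaml
services:
  rclone:
    image: rclone/rclone:latest
    command: rcd --rc-web-gui --rc-addr=:5572 --rc-user=admin --rc-pass=changeme
    volumes:
      - /:/hostfs:rshared                        # or /mnt:/hostfs:shared
      - ./rclone.conf:/config/rclone/rclone.conf # default config path in the image
    devices:
      - /dev/fuse       # FUSE device, required for mounting
    cap_add:
      - SYS_ADMIN       # also required for FUSE mounts
    security_opt:
      - apparmor:unconfined
    networks:
      - reverse-proxy-network   # drop this if you don't run a reverse proxy
      - rclone-net

networks:
  reverse-proxy-network:
    external: true
  rclone-net:
```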
Here's the meat and potatoes of what I actually contributed to this whole setup! The majority of the auto-mount logic is in rclone_initializer.py. What is happening: the initializer waits until rclone's remote control server is reachable, then asks it to perform each configured mount. The startup ordering is partially controlled by the compose file:
```yaml
...
    depends_on:
      - rclone
...
```
However, even with this, it is possible that the rclone container is running but the remote control server isn't up yet when the rclone_initializer starts. To handle this, we wait for a successful response from the remote control server:
```python
def is_rclone_ready():
    # Probe rclone's no-op rc endpoint; a 2xx/3xx response means
    # the remote control server is up and accepting requests.
    try:
        response = requests.options(f"{RCLONE_URL}/rc/noopauth", auth=AUTH)
        return response.ok
    except requests.exceptions.RequestException as e:
        logging.error(f"Error checking rclone readiness: {e}")
        return False
```
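The polling loop around that check isn't shown here; a minimal sketch of one (the function name and timeouts are my own, not necessarily what the repo uses) could look like:

```python
import time

def wait_for_rclone(timeout_s=120, poll_s=2):
    # Poll is_rclone_ready() until it succeeds or the deadline passes.
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if is_rclone_ready():
            return True
        time.sleep(poll_s)
    return False
```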
Refer to mounts.json for an example of how to format what you want auto-mounted. A full list of the available options and their defaults (as of Jan 2024) for `mountOpt` and `vfsOpt` can be found in the rclone Config Options documentation.
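As a rough illustration, an entry might look like the following; the field names come from rclone's `mount/mount` remote control call, and the paths are examples, so check the repo's mounts.json for the exact schema it expects:

```json
[
  {
    "fs": "OneDrive1:Data",
    "mountPoint": "/hostfs/onedrive/onedrive_1",
    "mountOpt": { "AllowOther": true },
    "vfsOpt": { "CacheMode": "full" }
  }
]
```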
The initializer will allow individual mounts to fail while it continues trying the remaining ones. The overall process only logs a success if all mounts succeed.
```python
def mount_payloads(mount_payloads):
    # Ask the rc server to perform each mount; one failure doesn't
    # stop the rest, but it does mark the overall run as failed.
    all_mount_success = True
    for mount_payload in mount_payloads:
        try:
            logging.info(f"Mounting {mount_payload['fs']} to {mount_payload['mountPoint']}")
            response = requests.post(f"{RCLONE_URL}/mount/mount", json=mount_payload, headers=HEADERS, auth=AUTH)
            if response.ok:
                logging.info("Mount successful.")
            else:
                logging.error(f"Failed to mount {mount_payload['fs']}: Status code: {response.status_code}, Response: {response.text}")
                all_mount_success = False
        except requests.exceptions.RequestException as e:
            logging.error(f"Request failed: {e}")
            all_mount_success = False
    return all_mount_success
```
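Tying it together, the overall flow is roughly the following. This is hypothetical glue code using the `wait_for_rclone` sketch from above; the real rclone_initializer.py may be structured differently:

```python
import json
import logging

def main():
    if not wait_for_rclone():
        logging.error("rclone remote control never became ready; giving up.")
        return
    with open("mounts.json") as f:  # path inside the container; adjust as needed
        payloads = json.load(f)
    if mount_payloads(payloads):
        logging.info("All mounts succeeded.")
    else:
        logging.error("One or more mounts failed; see errors above.")

if __name__ == "__main__":
    main()
```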
Docker has a few options for handling how containers restart, but unfortunately any policy that starts the container when the host reboots will also restart it every time the script exits, while leaving restart off means it won't start on boot at all. This is partially because the base Python container exits with a code once the script completes. To summarize the options:

- `no`: the container will exit when the script completes (code 0 or otherwise) and will not start on boot.
- `on-failure`: the container will exit if all mounts are successful; otherwise it will continuously retry.
- `always`: the container will continuously restart and re-execute the mount process.
- `unless-stopped`: similar to `always`, but the cycle will end if the Docker daemon is sent a command to stop the container.
To work around this, we include this in the Dockerfile:

```dockerfile
CMD python ./rclone_initializer.py && tail -f /dev/null
```

The `tail -f /dev/null` will essentially keep the container active but idle, using minimal resources.
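With the container now staying alive, a restart policy can safely be layered on so everything comes back after a host reboot. The service name and build context here are illustrative:

```yaml
  rclone_initializer:
    build: ./rclone_initializer
    restart: unless-stopped
    depends_on:
      - rclone
```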
By setting this up in Docker, we apply the benefits of containerization (ease of deployment across hosts, backup simplicity, and cross-platform functionality) to our cloud storage mounts. We can now additionally leverage both the rclone Web GUI and the remote control API for configuring mounts.
Happy Home Labbing!