Filesystem in Userspace (FUSE) is a simple interface that lets user space programs export a filesystem to the Linux kernel. In this example we will mount an S3 bucket from StorageGRID on an Ubuntu Linux machine. After the setup is complete, we should be able to ingest an object from one Linux machine and fetch it from a different Linux machine, cache objects in a temporary folder, encrypt objects using SSE, and restrict object access using SSE-C. Apart from FUSE, we also need a plug-in that acts as an interface between FUSE and S3. A few available options are goofys, s3backer, S3Proxy, s3ql, and YAS3FS; the plug-in we will be using for the purposes of this blog is s3fs-fuse.
Prerequisites:
- Two Ubuntu virtual machines.
- StorageGRID deployed, with a bucket to mount on your Ubuntu machines.
Follow the steps below to mount your S3 bucket on your Ubuntu VM instance:
Install FUSE on your Ubuntu VM
$ sudo apt install fuse
Install s3fs-fuse on your Ubuntu VM
$ sudo apt install s3fs
Verify s3fs installation
$ s3fs --version
If you are using commercial certificates, you can skip this step. If you are using a self-signed certificate, add the StorageGRID certificate to Ubuntu.
$ cd /usr/local/share/ca-certificates
$ sudo mkdir fuse-cert; cd fuse-cert/
$ vi sg-fuse-cert.crt
--> paste your StorageGRID certificate here
$ sudo update-ca-certificates
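To confirm the certificate was picked up (a quick sanity check; the sg-fuse-cert name simply matches the file created above), list the system certificate bundle:
$ ls /etc/ssl/certs | grep sg-fuse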
Add StorageGRID user credentials to file ${HOME}/.passwd-s3fs
$ echo ACCESS_KEY_ID:SECRET_ACCESS_KEY > ${HOME}/.passwd-s3fs
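If you plan to mount more than one bucket with different credentials, s3fs also accepts per-bucket entries in the same file, with the bucket name prefixed to the keys (the bucket name below is a placeholder):
$ echo <bucket-name>:ACCESS_KEY_ID:SECRET_ACCESS_KEY >> ${HOME}/.passwd-s3fs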
Modify permissions on ${HOME}/.passwd-s3fs
$ chmod 600 ${HOME}/.passwd-s3fs
Create a directory where you want to mount your S3 bucket
$ mkdir media
Mount your s3 bucket to Ubuntu
$ s3fs <bucket-name> <path to directory> -o passwd_file=${HOME}/.passwd-s3fs \
-o url=https://<StorageGRID endpoint> -o use_path_request_style
Verify the mount by copying something into the folder and doing an Object Metadata Lookup on StorageGRID
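As a quick sanity check (the media folder and sample.jpg file below are just examples), confirm that s3fs appears in the mount table and copy a test file into the mountpoint:
$ grep s3fs /proc/mounts
$ cp sample.jpg media/
$ ls -l media/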
Now follow the same steps to mount the bucket on the second Ubuntu VM as well.
NOTE: s3fs does not usually give any output messages when the mount fails or succeeds. Add “-o dbglevel=info” to your s3fs mount command and look into “journalctl -r” for hints.
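For example, a mount command with debugging enabled could look like the following (same placeholders as above), after which the log messages can be filtered out of the journal:
$ s3fs <bucket-name> <path to directory> -o passwd_file=${HOME}/.passwd-s3fs \
-o url=https://<StorageGRID endpoint> -o use_path_request_style -o dbglevel=info
$ journalctl -r | grep s3fs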
Once your S3 bucket is mounted successfully, you should be able to list, copy, move, and delete files in the mountpoint folder we’ve created. You should also be able to share files between the two Ubuntu machines. Now let's look at some of the advanced options that s3fs has to offer. First we will have to unmount the bucket, because these features are enabled at the time the bucket is mounted.
Unmount s3fs mount
$ sudo umount <mountpoint folder path>
Cache objects from a bucket into a temp folder
s3fs lets you cache objects from StorageGRID in a local folder for easy access. You can enable this using the following command.
$ s3fs <bucket-name> <mountpoint folder path> -o passwd_file=${HOME}/.passwd-s3fs \
-o url=https://<StorageGRID endpoint> -o use_path_request_style \
-o use_cache=/home/netapp/temp -o del_cache
s3fs creates the cache folder for you and copies objects into it as they are accessed. Using the del_cache option is not mandatory, but it cleans up the cache folder whenever s3fs starts or exits.
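To see the cache in action, read a file through the mountpoint and then list the cache directory; s3fs keeps cached copies under a subfolder named after the bucket (the paths below match the example command above):
$ cat <mountpoint folder path>/<object name> > /dev/null
$ ls /home/netapp/temp/<bucket-name>/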
Enable SSE and SSE-C on s3fs with StorageGRID
SSE (server-side encryption with StorageGRID-managed keys): When you issue a copy command to store an object to your mount with SSE enabled, StorageGRID encrypts the object with a unique key. When you issue a request to retrieve the object from your mount, StorageGRID uses the stored key to decrypt the object. You can enable this by using the “-o use_sse” option on s3fs.
$ s3fs <bucket name> <mountpoint folder path> -o passwd_file=${HOME}/.passwd-s3fs \
-o url=https://<StorageGRID endpoint> -o use_path_request_style -o use_sse
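If you have the AWS CLI installed and a profile configured with the same StorageGRID credentials (both assumptions; the bucket, object, and profile names below are placeholders), you can confirm the setting by looking at the object's metadata after copying a file into the mount. The response should include a ServerSideEncryption field set to AES256.
$ aws s3api head-object --bucket <bucket name> --key <object name> \
--endpoint-url https://<StorageGRID endpoint> --profile <profile name>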
SSE-C (server-side encryption with customer-provided keys): When you issue a copy request to store an object to your mount, you provide your own encryption key. When you retrieve an object, you provide the same encryption key as part of your request. If the two encryption keys match, the object is decrypted and your object data is returned. To generate your own encryption key, you can use openssl, copy just the key from the openssl output, and save it in a .txt file.
Generate SSE-C key
$ openssl enc -aes-128-cbc -k secret -P
Copy the key to a file
$ echo F7BD368C697900AFD270373E0A6DE963 > key.txt
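If you prefer, the two steps above can be combined into one line that extracts just the key= value from the openssl output (purely a convenience; the manual copy works just as well):
$ openssl enc -aes-128-cbc -k secret -P | grep ^key | cut -d= -f2 > key.txt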
Mount s3 bucket with SSE-C enabled
$ s3fs <bucket name> <mountpoint folder path> -o passwd_file=${HOME}/.passwd-s3fs \
-o url=https://<StorageGRID endpoint> -o use_path_request_style \
-o use_sse=custom:<path to your key file>.txt
Now try copying a file to your mountpoint and run a list (ls -l) command from the second Ubuntu machine, which does not have an SSE-C-enabled mountpoint. The output would look something like this:
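An illustrative listing is shown below (file names, sizes, and dates are made-up examples); the question marks appear because the second machine cannot read the metadata of the SSE-C object without the key:
$ ls -l media/
ls: cannot access 'media/encrypted.jpg': Permission denied
-rw-r--r-- 1 netapp netapp 204800 Jan 10 10:15 plain.jpg
-????????? ? ?      ?           ?            ? encrypted.jpg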
The file that has question marks next to it was uploaded using SSE-C from the first Ubuntu machine.
Summary
In conclusion, s3fs is a simple tool for mounting a StorageGRID S3 bucket on virtual machines, servers, laptops, or containers. Mounting S3 as drive storage can be very useful for creating distributed, file-system-like setups with minimal effort to handle media and content-oriented workloads. Having StorageGRID as the S3 target in this case lets you create strong ILM policies to protect your data and use storage efficiently.