This document is a technical guide for the Storage Manager app.
Part of the problem with Linux systems is the varied ways in which user data is stored. Because of a long legacy of POSIX standards, data ends up being placed in varied locations. As an example, take a look at the following table, which provides the data locations for some ClearOS apps:
|< 80% 30% 30% 40% >|
^ App ^ Data Location ^ Description ^
| Web Proxy | /var/spool/squid | Web proxy cache |
| MySQL Server | /var/lib/mysql | MySQL databases |
| System Database | /var/lib/system-mysql | System MySQL databases (Zarafa) |
Creating separate /var and /home partitions is one tactic used to separate some of the data, but this approach has its drawbacks.
Using the magic of bind mounts, it is possible to keep the path to the legacy data directories while storing the data on a completely separate partition or hard disk.
To help describe the bind mount concepts, we are going to walk through an example. We have installed ClearOS on a system with a 50 GB solid state drive and a 6 TB RAID array. ClearOS is installed only on the solid state drive, while the RAID array is mounted to /store as an empty data drive. This particular system is a dedicated Web Proxy and Content Filter gateway for a school.
The Web Proxy requires data storage for its cache, and since this system is running in a school, a large cache is desired. The underlying software (Squid) uses the /var/spool/squid directory to store all the cached files. Through mount bind, this /var/spool/squid cache directory can also be mapped to /store/web_proxy/spool. This mapping is done by specifying the following entry in /etc/fstab:
/store/web_proxy/spool /var/spool/squid none bind,rw 0 0
The actual data for the web proxy cache now lives on the RAID disk, but the mount bind configuration in /etc/fstab also makes the data available in /var/spool/squid. The web proxy system is unaware that the actual cache data exists elsewhere on the file system. Linux magic.
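The fstab entry above is simply the app's storage directory joined to the base directory, mapped onto the legacy path. A minimal sketch of how such an entry is composed (the variable names here are illustrative, not part of ClearOS):

```shell
# Sketch: compose a bind-mount fstab entry from the storage base and the
# app's relative data directory (values match the web proxy example above).
BASE="/store"                  # storage base directory
DIR="web_proxy/spool"          # app's relative directory under the base
TARGET="/var/spool/squid"      # legacy path the app expects

# fstab fields: source, mount point, fs type ("none" for a bind), options, dump, pass
printf '%s %s none bind,rw 0 0\n' "$BASE/$DIR" "$TARGET"
```

The same mapping can also be tried interactively with mount --bind /store/web_proxy/spool /var/spool/squid before committing it to /etc/fstab.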
Though this example only covered the web proxy cache, any ClearOS app that needs storage can automatically create a mount bind entry in /etc/fstab. The mechanism for storage mappings is described in the next section.
The main configuration file - /etc/clearos/storage.conf - provides two simple options: the base directory for storage (base, default: /store) and the on/off switch for the whole system (enabled).
If you want to enable the Storage Manager on a system that is already running, please be careful, do a backup, and follow the instructions further below.
When an app like Web Proxy needs a storage location mapped, it drops a configlet file into /etc/clearos/storage.d/. The meaty part of the web proxy configlet looks like:
$storage['/var/spool/squid']['base'] = $base;
$storage['/var/spool/squid']['directory'] = 'web_proxy/spool';
$storage['/var/spool/squid']['permissions'] = '0750';
$storage['/var/spool/squid']['owner'] = 'squid';
$storage['/var/spool/squid']['group'] = 'squid';
The $base variable is taken from the primary configuration file (default: /store).
If you want to override the defaults here, please do not overwrite the default file! Instead, copy the default web_proxy_default.conf to web_proxy.conf and edit the parameters in the new file. For example, you might want to put the web proxy cache on a separate solid state disk to increase throughput, so your configuration file would look something like:
$storage['/var/spool/squid']['base'] = '/my_cache_drive';
$storage['/var/spool/squid']['directory'] = 'web_proxy';
$storage['/var/spool/squid']['permissions'] = '0750';
$storage['/var/spool/squid']['owner'] = 'squid';
$storage['/var/spool/squid']['group'] = 'squid';
In this case, /my_cache_drive is the mount point for the separate solid state disk.
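The copy-then-edit step can be sketched as follows. The configlet directory is assumed to be /etc/clearos/storage.d/ (mentioned above), but the sketch uses a temporary directory so it is harmless to run anywhere:

```shell
# Sketch: never edit web_proxy_default.conf in place; copy it and edit the copy.
# A temp dir stands in for /etc/clearos/storage.d/ so this is safe to dry-run.
STORAGE_D=$(mktemp -d)
echo '$storage["/var/spool/squid"]["base"] = $base;' > "$STORAGE_D/web_proxy_default.conf"

# The actual override step:
cp "$STORAGE_D/web_proxy_default.conf" "$STORAGE_D/web_proxy.conf"
# ...now edit web_proxy.conf and change the parameters as desired.
ls "$STORAGE_D"
```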
This section of the document walks through how to implement Storage Manager on a running system. If you have jumped to this section of the document first, please go back and read the earlier sections!
Starting with ClearOS 6.4.0, the Storage Manager became available for the Amazon EC2 ClearOS builds. You can also install the app on any other ClearOS system, but doing so requires moving data around on the command line. To install the software, run:
yum install app-storage
Point your web browser over to <navigation>System|Storage|Storage Manager</navigation> in the web-based interface. You will see something similar to the screenshot below.
In the above screenshot, you can see that ClearOS is installed on the 9 GB /dev/sda drive. The /dev/sdb and /dev/sdc drives are unformatted and available. If you are using hard disks from a previous install, the Storage Manager may chicken out and set the In Use status. Only disks without partitions will show up in the web-based interface at this time. If you want to blow away the data on a disk, you can use the dd command with extreme caution:
dd if=/dev/zero of=/dev/xyz
Before we can enable the Storage Manager, we need to first format and mount the data drive. Following the example in the above screenshot, the /dev/sdc drive is formatted using the following command:

mkfs.ext4 /dev/sdc
Feel free to partition the disk, use LVM tools, etc. The only requirement is to have a mounted disk for data. The next step is to create the /etc/fstab entry to mount the new disk to /store:
/dev/sdc /store ext4 defaults 1 2
And then create the mount point and mount /store:

mkdir -p /store
mount /store
Now that /store is ready, you can enable the Storage Manager by changing the enabled parameter in /etc/clearos/storage.conf. Then run the following to notify the storage engine to check the mappings. Don't worry, nothing destructive happens here:

storage
Go back to ClearOS web interface to see if any of the storage mappings are enabled. In our example shown in the screenshot below, both the Web Proxy and Users apps are now mapped to our new storage locations:
The Storage Manager was able to create these mappings because the target directories were still empty and unused. In other words, no users existed and the Web Proxy had never been started. Do not be surprised if none of the mappings are enabled on your system since there is a good chance that data already exists for all the apps.
Looking in /etc/fstab on our example system, two new entries exist:
# Storage engine - start
/store/users/home /home none bind,rw 0 0
/store/web_proxy/spool /var/spool/squid none bind,rw 0 0
# Storage engine - end
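Because the storage engine brackets its entries with start/end markers, its block is easy to inspect with sed. This sketch runs against a sample file so it can be tried anywhere; on a live system you would point it at /etc/fstab instead:

```shell
# Sketch: extract only the storage-engine section of an fstab.
# A sample file stands in for /etc/fstab so this is safe to run anywhere.
FSTAB=$(mktemp)
cat > "$FSTAB" <<'EOF'
/dev/sda1 / ext4 defaults 1 1
# Storage engine - start
/store/users/home /home none bind,rw 0 0
/store/web_proxy/spool /var/spool/squid none bind,rw 0 0
# Storage engine - end
EOF

# Print everything between the start and end markers (inclusive)
sed -n '/# Storage engine - start/,/# Storage engine - end/p' "$FSTAB"
```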
In our example, the System Database was already in use by the reports engine, so the storage system refused to perform the mapping. To get the mapping working, we need to migrate the data from the old directory - /var/lib/system-mysql/ - to the new storage directory.
# Shutdown the system database
service system-mysqld stop

# Move the database data to a temporary location
mkdir -p /var/tmp/system-mysql
mv /var/lib/system-mysql/* /var/tmp/system-mysql/

# Initialize storage engine
storage
At this point, the System Database storage mapping should be happy. Check the /etc/fstab file and output from mount as a sanity check. If there's a problem, check the status in the ClearOS web-based interface.
The next step is to move the database data from the old location to the new storage location. Feel free to copy the files instead of moving them, but remember to delete the old /var/tmp files once you are comfortable with the new storage location. The following command may look like you are moving the files back to the old location, but in fact the actual data is now going to the /store disk. Since this is a disk-to-disk move, it can take some time.
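If you go the copy route, cp -a preserves ownership and permissions, which the database service user depends on. A sketch of the copy-then-verify variant, using temporary directories so it can be dry-run safely (the real paths are /var/tmp/system-mysql and /var/lib/system-mysql):

```shell
# Sketch: copy-then-verify variant of the migration, dry-runnable with temp dirs.
OLD=$(mktemp -d)   # stands in for /var/tmp/system-mysql
NEW=$(mktemp -d)   # stands in for /var/lib/system-mysql (bind-mounted to /store)
touch "$OLD/ibdata1"

# -a preserves ownership, permissions and timestamps; the trailing /. copies
# the directory contents rather than the directory itself.
cp -a "$OLD/." "$NEW/"

ls "$NEW"
# Once satisfied that the copy is complete, remove the old files:
rm -rf "$OLD"
```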
mv /var/tmp/system-mysql/* /var/lib/system-mysql
And finally, restart the system database:
service system-mysqld start