In both ClearOS and ClearBOX you may need to manipulate mount points in order to use your storage properly and efficiently, the way you want. The following is intended both as a guide for changing those mount points and data structures and as a framework for a future app based on these considerations.
This document relies heavily on the architecture proposed and used in practice for ClearBOX with ClearOS version 5.
The challenge in Linux is that data can live in a variety of places. This makes storage planning difficult because we are typically left with two options, both less than ideal. The first option is to make one big partition and be done with it. This was the default for a standard ClearOS 5.x install (ClearBOX was not set up this way). The downside of this solution is that either the users or the system can fill up the entire disk on which the operating system ALSO resides. This can cause huge problems. In ClearOS 5.x this symptom is most easily recognized by Webconfig asking for authentication over and over with each click.
The other method is to divide the drive into several partitions with a variety of mount points. This lets you place data that grows on partitions other than the root partition. The downside to this method is that if you cannot predict exactly how much space each partition will need (and who can?), you will end up with wasted space.
Fortunately a middle ground exists: place all your growing data and user data on a separate partition and use bind mount points to divide that single block of storage across a variety of locations.
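For example, a bind mount exposes an existing directory at a second path. Here is a quick sketch using paths that appear later in this guide:

mount --bind /store/data0/live/server1/home /home

After this command, /home and /store/data0/live/server1/home are the same directory on disk; files written to one are visible at the other.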
So here is what we recommend. You will want at least 3 physical partitions on your system:
It is a good idea to keep /boot on its own partition. That way it remains a plain old partition which GRUB and other systems can read easily without needing to dissect any LVM structures. With version 6 we recommend at least 500 Megabytes for your /boot partition.
The second partition will be set up with LVM at a size of 51.2 Gigabytes. We will end up putting system data here.
Last we will throw everything else into a big LVM partition. LVM gives us incredible flexibility which is even further leveraged by our use of mount points later in this guide.
For our install we can just set the size to 20.48 Gigabytes. Why so small? LVM is very easy to grow, and we would rather grow the volume later than allocate everything up front; we recommend that you reserve some free space, just in case.
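To illustrate just how easy growing is, here is a sketch of adding 10 Gigabytes to a logical volume and its ext4 filesystem (the volume name is taken from the examples later in this guide; adjust the size to your needs):

lvextend -L +10G /dev/mapper/data-data0
resize2fs /dev/mapper/data-data0

Both commands can run while the volume is mounted; resize2fs will grow a mounted ext4 filesystem online.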
Here is how we will further divide the two LVM partitions labeled main and data:
Back in the day it was recommended that you have swap space equal to double your RAM. However, with larger memory pools and other considerations it is unlikely that your system will ever use more than 2 Gigabytes of swap.
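You can check how much swap a running system actually uses with the standard tools:

free -m
swapon -s

If swap usage stays near zero under normal load, the 2 Gigabyte figure is plenty.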
ClearOS doesn't need much space; 5 gig is more than enough for the root filesystem, as long as you set up the other structures. The only thing that should need to change here is the addition of apps, and these are typically pretty small even for some of the most robust and complicated software you can run on ClearOS. If you should need more later, never fear, we've left some room to grow. This is LVM, after all!
The /var partition can use anywhere from a little space to vast amounts of it, and much of our bind mount strategy will focus here. Even setting aside your various services, /var will change more than the '/' (root) partition, so we give it more space. 20 gig should do it.
The log files can grow immensely if something is wrong. We deliberately keep this partition somewhat small. Why? Because runaway logs can crash the system. Set it to 8 Gigabytes; we can grow it a little later if you really need to. But if you are exceeding 8 gig, it is likely that you need to address what is going wrong or collect your data in a better way.
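If /var/log does start filling, a quick way to find the offending logs is:

du -sh /var/log/* | sort -h | tail

This lists the largest log files and directories last, pointing you at whatever needs to be addressed.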
The /store/data0 LVM partition is where the real magic happens. It will store all the real user data, and this is where we will place our data structures. Moreover, and this is really cool, we will use this same paradigm for any additional disks, SAN storage (iSCSI, et al.), connected NAS storage, USB devices, and other such storage. We will explain this later.
LVM stands for Logical Volume Manager. It allows partitions to be managed with great flexibility. Some of the things that LVM can do are:
LVM will NOT:
If you've configured these partitions when you installed the system, you will be able to see and manipulate them. Here are some commands that will be useful:
[root@clearos ~]# pvs
  PV         VG   Fmt  Attr PSize   PFree
  /dev/sdb2  main lvm2 a--   24.41g 2.53g
  /dev/sdb3  data lvm2 a--  207.98g 6.09g
[root@clearos ~]# vgs
  VG   #PV #LV #SN Attr   VSize   VFree
  data   1   1   0 wz--n- 207.98g 6.09g
  main   1   4   0 wz--n-  24.41g 2.53g
[root@clearos ~]# lvs
  LV    VG   Attr     LSize   Pool Origin Data%  Move Log Copy%  Convert
  data0 data -wi-ao-- 201.89g
  logs  main -wi-ao--   4.88g
  root  main -wi-ao--  10.00g
  swap  main -wi-ao--   2.00g
  var   main -wi-ao--   5.00g
After you have installed, your system's /etc/fstab might look something like this:
/dev/mapper/main-root                      /             ext4    defaults        1 1
UUID=5abcde29-abc9-abcd-abcd-1abcd19abcdf  /boot         ext4    defaults        1 2
/dev/mapper/data-data0                     /store/data0  ext4    defaults        1 2
/dev/mapper/main-var                       /var          ext4    defaults        1 2
/dev/mapper/main-logs                      /var/log      ext4    defaults        1 2
/dev/mapper/main-swap                      swap          swap    defaults        0 0
tmpfs                                      /dev/shm      tmpfs   defaults        0 0
devpts                                     /dev/pts      devpts  gid=5,mode=620  0 0
sysfs                                      /sys          sysfs   defaults        0 0
proc                                       /proc         proc    defaults        0 0
To get started with any volume attached to the system, we will stick to a couple of standards. First, the volume immediately adjacent to the system partition will always be called data0. Other than this name, all other names can be selected at will.
You should name each logical volume consistently with the directory mount point you create for it. For example:
lvcreate -l 1280 data -n data0
This command would have created the 'data0' logical volume in the volume group 'data'. We will keep this standard of naming the volume the same as its mount point for all attached volumes, whether they are NAS devices, iSCSI targets, USB drives with non-LVM partitions, or whatever the case may be. It is up to the administrator to make sure that attached devices do not overlap in namespace.
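As a hypothetical illustration of this convention, a second disk (here assumed to be /dev/sdc with a single LVM partition /dev/sdc1) could be prepared as a 'data1' volume mounted at /store/data1:

pvcreate /dev/sdc1
vgcreate data1 /dev/sdc1
lvcreate -l 100%FREE -n data1 data1
mkfs.ext4 /dev/mapper/data1-data1
mkdir -p /store/data1

The device and volume group names above are examples only; the point is that the logical volume name ('data1') matches the mount point (/store/data1).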
Once a drive is prepared we can add the entry to the /etc/fstab and attempt to mount it. The following is an example of data0's mount point entry in /etc/fstab.
/dev/mapper/data-data0 /store/data0 ext4 defaults 1 2
To mount this device run the following:
mount /store/data0
On a new volume, an inspection of this device will show only the lost+found directory:
[root@cbox6 ~]# ls -la /store/data0/
total 28
drwxr-xr-x.  4 root root  4096 Jun  8 18:14 .
drwxr-xr-x.  5 root root  4096 Sep 20 13:24 ..
drwx------.  2 root root 16384 Jun  8 18:09 lost+found
Each drive, regardless of its mount point, should have the same basic structure so that future ClearOS servers can utilize the data properly. Perform the following to create that structure:
mkdir /store/data0/live/
mkdir /store/data0/backup/
mkdir /store/data0/log/
mkdir /store/data0/sbin/
The name for all localhost data should be 'server1'. This convention will allow for exported volumes to be properly processed in the Central User Data paradigm. To designate a volume space as NON-exportable, create the following:
mkdir /store/data0/live/server1
mkdir /store/data0/backup/server1
Typical bind mount suggestions are:
/store/data0/live/server1/home          /home                  none bind,rw 0 0
/store/data0/live/server1/root-support  /root/support          none bind,rw 0 0
/store/data0/live/server1/shares        /var/flexshare/shares  none bind,rw 0 0
/store/data0/live/server1/cyrus-imap    /var/spool/imap        none bind,rw 0 0
/store/data0/live/server1/kopano        /var/lib/kopano        none bind,rw 0 0
/store/data0/live/server1/zarafa        /var/lib/zarafa        none bind,rw 0 0
/store/data0/live/server1/system-mysql  /var/lib/system-mysql  none bind,rw 0 0
/store/data0/live/server1/mysql         /var/lib/mysql         none bind,rw 0 0
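Note that the source directories must exist before these entries will mount. A sketch for the /home entry (the same pattern applies to each line above):

mkdir -p /store/data0/live/server1/home
mount /home

If the target directory already contains data (for example, existing home directories), migrate it first using the same stop-copy-bind procedure shown for the Squid cache below.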
You can verify the bind mounts in a couple of ways:

mount | grep data
/dev/mapper/data-data0 on /store/data0 type ext4 (rw)
/store/data0/live/server1/home on /home type none (rw,bind)
/store/data0/live/server1/root-support on /root/support type none (rw,bind)
-OR-
[root@system ~]# df -h
Filesystem              Size  Used Avail Use% Mounted on
/dev/mapper/main-root   9.9G  1.5G  8.0G  16% /
tmpfs                   3.9G     0  3.9G   0% /dev/shm
/dev/md1                117M   47M   64M  43% /boot
/dev/mapper/data-data0  644G  7.2G  604G   2% /store/data0
/dev/mapper/main-logs   4.9G  207M  4.4G   5% /var/log
As a worked example, let's move the Squid proxy cache from /var/spool/squid onto the data volume. First, create the target directory:

mkdir /store/data0/live/server1/squid-cache
Gather information about the source directory's ownership and permissions:
ls -la /var/spool/squid/ | head -n2
total 3092
drwxr-x--- 18 squid squid 4096 Dec  6 10:06 .
Run the following commands to match those permissions on the target:
chown --reference /var/spool/squid /store/data0/live/server1/squid-cache
chmod --reference /var/spool/squid /store/data0/live/server1/squid-cache
Validate the results:
ls -la /store/data0/live/server1/squid-cache/ | head -n2
total 8
drwxr-x--- 2 squid squid 4096 Dec  6 23:14 .
Stop the services that use this directory:

service squid stop
service dansguardian-av stop
Likewise, stop any other services that access data you plan to move, for example:

service httpd stop
service smb stop
Install rsync if it is not already present, then copy the data across:

yum -y install rsync
rsync -av --delete /var/spool/squid/* /store/data0/live/server1/squid-cache/.
Verify that the data arrived intact:

ls /store/data0/live/server1/squid-cache/
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean
Once you are satisfied the copy is complete, remove the original data:

rm -rf /var/spool/squid/*
Add the bind mount entry to /etc/fstab:

/store/data0/live/server1/squid-cache  /var/spool/squid  none bind,rw 0 0
Then mount it:

mount /var/spool/squid
mount | grep '/var/spool/squid'
/store/data0/live/server1/squid-cache on /var/spool/squid type none (rw,bind)
Both paths now show the same contents:

ls /store/data0/live/server1/squid-cache/
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean
ls /var/spool/squid
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean
You can prove the bind is live by creating a test file on one side and watching it appear on the other:

touch /store/data0/live/server1/squid-cache/test
ls /var/spool/squid
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean  test
rm -f /var/spool/squid/test
ls /store/data0/live/server1/squid-cache
00  01  02  03  04  05  06  07  08  09  0A  0B  0C  0D  0E  0F  swap.state  swap.state.clean
Finally, restart the services:

service squid start
service dansguardian-av start
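If you want to confirm everything came back up, check the service status and watch for fresh activity (the log path below assumes the default Squid log location):

service squid status
tail /var/log/squid/access.log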