ClearCenter's Remote Data Backup app provides a secure, simple, and reliable backup service to the cloud. All data is encrypted, both in transit and on the storage clusters managed by ClearCenter.
Simple yet powerful snapshots allow the administrator to control the retention period for all backups, making this solution ideal for disaster recovery, hardware failure, or user error (e.g. a deleted file).
Deduplication algorithms reduce the bandwidth and storage space required by skipping data that has not changed or been added since previous snapshots.
If your system does not have this app available, you can install it via the Marketplace.
You can find this feature in the menu system at the following location:
<navigation>System|Backup|ClearCenter Remote Server Backup</navigation>
Before you can begin to use the Remote Backup and Restore service you must set the volume encryption key. The volume encryption key (a secret password known only by you) is used to encrypt data on the backup servers.
Enter a key, re-enter it to verify, and click on Set Key. Your key is now part of the local server configuration; however, only a hash of the key is stored.
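The exact hashing scheme is internal to the app, but conceptually the stored value is a one-way digest of the passphrase. A minimal Python sketch (the algorithm and passphrase shown are illustrative assumptions, not what the app actually uses):

```python
import hashlib

# Illustrative only: the app stores a hash of the volume key, not the
# key itself, so the plaintext passphrase never appears in the server's
# configuration. SHA-256 here is an assumption for demonstration.
passphrase = "correct horse battery staple"   # hypothetical volume key
stored_hash = hashlib.sha256(passphrase.encode()).hexdigest()
print(stored_hash)   # a 64-character hex digest, not the passphrase
```

Because only the digest is kept, the plaintext key cannot be recovered from the configuration file.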
If you have upgraded your system hardware, or a catastrophic event has forced you to re-install your server and access your remote backups for restoration, you will follow a slightly different process. The Remote Backup Service will automatically detect that there is data stored on the remote cloud servers and will ask you for the key that exactly matches the original encryption key used when you first configured the service.
Enter your key and click on Validate Key Against Existing Snapshot. If the key matches (i.e. it can decrypt the remote storage), your key will be hashed and saved to your Remote Server configuration settings.
If the key is incorrect, you will be prompted to re-enter the key.
If you do not remember your key and you can live without your backup data, you can click on the Reset button. Resetting will effectively delete (irrecoverably) all snapshots previously saved on the remote cloud servers. Do not select this option unless you understand that no data from prior backups will be available in the event you wish to restore.
An indication of whether the service is enabled on the cloud-based storage clusters. By default, the service will automatically be enabled if it is determined that you have a valid subscription for the service. There may be cases when you wish to stop automated backups; you can do this from either the cloud or local settings.
An indication of how much cloud-based storage you have purchased. If you require additional space, follow the instructions on purchasing incremental storage from the Marketplace.
A graphical representation of how much storage capacity your data backup and snapshots have actually used on the remote servers.
Backup (or restore) jobs can take a long time if a large amount of data is being transferred over the network. The Most Recent Service Status fields give you an idea of what operation the service performed last, what the result was and how long it took.
Configuring the remote backup service is straightforward. Click on Settings from the main overview page. The following fields can be configured by selecting Edit from the form summary.
The app is configured to allow an administrator to quickly back up data related to typical services run on the system. The following section describes the categories one can select from.
System configuration includes all standard system settings, including users and groups. Setting this selector to enabled stores a copy of the archive created by the Configuration Backup app.
Users' home directories (/home).
Data stored in the Flexshare shares (/var/flexshare/shares).
Data pertaining to websites hosted on the server (/var/www).
Data related to the FTP service (/var/proftpd).
To enable daily automatic backups, set Automatic Backup to enabled. If you prefer to back up your data manually, set it to disabled.
If automatic backups are enabled, this setting determines the window in which the backup will start.
Throttling (limiting) the amount of bandwidth the remote backup service uses can ensure quality of service for other services, which is especially important if your organisation operates 24 hours a day.
Click on the Advanced button under the Settings menu to access additional settings of the service.
If you have installed applications or storage that are not included when enabling one or more of the quick pick selectors under the general settings options, enable the Custom Folders field. Upon saving your configuration, you will find a link where you can browse through the file system of the server, selecting folders (or files - see below) to be included with each backup snapshot.
Set to enabled if, when browsing the custom folder selection utility, you would like to be able to select from both folders and individual files on the file system.
While enabling this option gives you a higher degree of selection, there are two drawbacks.
Number of daily snapshots to keep on the server. Once reached, future snapshots will cycle - the oldest will drop off (be deleted) to make room for the newest.
Number of weekly snapshots to keep on the server. Once reached, future snapshots will cycle - the oldest will drop off (be deleted) to make room for the newest.
Number of monthly snapshots to keep on the server. Once reached, future snapshots will cycle - the oldest will drop off (be deleted) to make room for the newest.
Number of yearly snapshots to keep on the server. Once reached, future snapshots will cycle - the oldest will drop off (be deleted) to make room for the newest.
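The cycling behaviour described for each retention level can be sketched in a few lines of Python (the function and names here are illustrative, not part of the app):

```python
# Illustrative sketch of fixed-size snapshot cycling: once the retention
# count is reached, the oldest snapshot is dropped (deleted on the remote
# server) to make room for the newest.
def add_snapshot(snapshots, new_snapshot, keep):
    snapshots.append(new_snapshot)
    while len(snapshots) > keep:
        snapshots.pop(0)   # oldest snapshot drops off
    return snapshots

daily = []
for day in range(1, 10):                  # nine consecutive daily backups
    add_snapshot(daily, "day-%d" % day, keep=7)
print(daily)   # only the seven most recent snapshots remain
```

The same logic applies independently to the daily, weekly, monthly, and yearly retention counts.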
Send a summary of each backup via e-mail.
Send an e-mail in the event an automated backup fails.
The e-mail address to which all notifications configured above are sent.
Snapshot management configuration is an important part of an administrator's duty to back up the data residing on the server/gateway. A balance must be struck between how many snapshots to retain and the overhead (the amount of cloud-based storage) one needs to purchase and maintain. Fewer snapshots result in lower cloud-based storage requirements; the downside is that one could still lose files in a data loss event.
Retaining many snapshots greatly reduces the chance of losing data, but comes at the higher cost of maintaining more cloud-based storage.
The key is to strike a balance between these scenarios that is both acceptable to the owner of the data and cost-effective.
Perhaps the best way to explain how snapshots are created, how cloud-based storage is consumed and how data could potentially be irrecoverably lost is to go through a typical scenario of file management and see how various snapshot management strategies handle the situation.
Imagine a scenario where a user's home directory is targeted for backup. On February 22, 2012, the service is enabled, and all files/folders in their home directory are successfully backed up.
On March 9th, the user accidentally deletes a critical file but does not realise it, and so the deleted file goes unnoticed.
On April 20th, the user requires the file and finds it has gone missing. They turn to the remote backup service for help. Let's see how each of the following backup strategies plays out.
The simplest backup strategy: a rolling 7-day snapshot cycle where a snapshot is generated each day, Monday through Sunday, inclusive. Once a full cycle has taken place, newer snapshots replace the oldest ones. On any given day, you can go back 7 days and expect to restore any data that existed in those 7 days.
In our scenario, the deleted file(s) would be irrecoverable, since the deletion went unnoticed well past the 7-day window. Every snapshot that contained the file (those taken before March 9) had been replaced by more recent snapshots within a week of the deletion.
In addition to the 7 daily snapshots, 1 weekly is added.
With one weekly snapshot configured, the weekly snapshot is overwritten every Sunday.
In the scenario above, the deleted file(s) would still be irrecoverable: all daily snapshots would have cycled through, and a second Sunday passed between deleting the file and noticing it was missing, so the weekly snapshot had been overwritten and no longer contained the data we need.
With this strategy, we have a much longer window where we can guarantee all data can be recovered in the event of loss of data.
7 daily snapshots coincide with 4 weekly snapshots (every Sunday). We now add 1 monthly snapshot which will get replaced on the first of each month.
In our mini case-study, however, we would still lose our file. Here's why: we started the service on February 22, and the first monthly snapshot (March 1) successfully backs up our critical file(s). However, by the time the second monthly snapshot comes around (April 1), the user has already deleted the file. This monthly snapshot no longer contains the critical file we need restored, and it replaces the March 1 snapshot. All daily snapshots have already cycled through, and more than 4 Sundays have passed since the deletion (weekly snapshots were created March 11, 18, 25 and April 1), so none of those snapshots contains the missing file.
Ultimately, if you want your data to be recoverable for more than a year, this strategy is recommended. Of the 4 strategies outlined here, this is the only one that could successfully restore the file deleted on March 9 and found missing on April 20.
Moving to 12 monthly backups ensures that we don't overwrite the March 1 backup, which contains the file(s) deleted without the user realising it.
While it is true that this cycle requires the most snapshots, if you need your data to be recoverable for up to 2 years back, this strategy - optionally with an increased number of yearly snapshots - is essential.
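The arithmetic behind the scenario above can be checked in a few lines of Python (the dates come from the scenario; the "reach" figures for each strategy are a simplification for illustration, not the app's exact behaviour):

```python
from datetime import date

# Dates from the scenario above.
deleted = date(2012, 3, 9)     # file accidentally deleted
noticed = date(2012, 4, 20)    # loss discovered
gap = (noticed - deleted).days

print(gap)                     # 42 days between deletion and discovery
print(gap <= 7)                # 7 dailies only: False, file unrecoverable
print(gap <= 7 + 7)            # + 1 weekly (~14-day reach): still False
print(gap <= 7 + 4 * 7)        # + 4 weeklies (~35-day reach): still False
# Only a retained monthly snapshot taken before March 9 (e.g. the
# March 1 snapshot kept under a 12-monthly strategy) still holds the file.
```

With a 42-day gap, only the 12-monthly strategy keeps a pre-deletion snapshot alive long enough to recover the file.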
In reading the above section, you may be under the impression that a lot of remote storage will be necessary to store 20 or more rolling snapshots of your server. Indeed, if it were not for file linking, you would need to plan for x times the amount of data you require to back up - x being the number of snapshots, which depends on your snapshot strategy. Having 50GB of data would theoretically require over 1TB of remote storage!
Fortunately, this is far from reality due to the use of file linking - a simple mechanism on the cloud-based servers that determines whether a file has changed since a previous snapshot. If a change is detected, a new copy of the file is stored, which requires additional space to keep the different copies (one might think of them as 'versions' of a file). If the file has not changed from a previous backup, a link is created to the prior snapshot - requiring almost no additional space. When a snapshot is deleted (either manually or through the process of recycling), the next file in line instantly becomes the master, with any existing links pointing to it.
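The effect of file linking can be demonstrated with ordinary hard links - a sketch of the general concept, not ClearCenter's actual on-disk implementation:

```python
import os
import tempfile

# Concept sketch only: hard links let two snapshot entries share a
# single copy of unchanged data, and deleting one entry does not
# destroy the data as long as another link to it remains.
workdir = tempfile.mkdtemp()
snap1 = os.path.join(workdir, "snapshot1-report.txt")
snap2 = os.path.join(workdir, "snapshot2-report.txt")

with open(snap1, "w") as f:
    f.write("quarterly report contents")

os.link(snap1, snap2)   # second snapshot is a link: almost no extra space
os.remove(snap1)        # recycle the older snapshot...

with open(snap2) as f:
    print(f.read())     # ...the newer snapshot still holds the data
```

This is why deleting a snapshot frees only the space unique to it: data shared with other snapshots survives through the remaining links.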
If your Cloud Storage becomes full and there is no additional room for a backup, the automated backup will fail. You will need to delete one or more snapshots for automated backup to run normally.
You may ask: “Shouldn't it detect that it needs to delete one of the backups to continue?” Our answer is that an auto backup routine should never delete snapshots - only replace them as per the backup strategy/settings. In this case, a daily or weekly snapshot might eventually be replaced depending on your settings, but the backup cannot run at all because the cloud storage capacity has been exceeded.
The best course of action is to proactively watch the capacity and delete unneeded snapshots as necessary to keep auto backup running as normal or purchase additional cloud storage.
Purchasing additional cloud-based storage for the Remote Server Backup app can be done through the Marketplace. Click on the Marketplace link in webconfig and enter into the search term filter the keyword backup. A selection of additional storage increments ranging from 5GB to 100GB are available. Each Marketplace purchase of a storage allotment is cumulative - if you purchase 5GB today and in 6 months, you purchase an additional 10GB, you would have 15GB total moving forward.
Once in progress, a backup will continue in the background, and an administrator can navigate away from the remote backup app configuration page or close the browser and return at a later time - neither of these actions will stop a backup in progress.
You can view the progress at any time by returning to the remote backup app page. If an operation (restore, backup, etc.) is in progress, your browser will automatically reload to the progress view.
To have backups performed automatically, enable auto backup.
Restoring data from the Remote Backup Service cloud-based servers is much like configuring a backup with two additional steps:
From the main overview page, click on Restore. You can view the settings that will be applied to the restore operation in the form labelled Restored.
Select the volume you wish to restore from in addition to the location.
Once your restore settings are set, simply click on the Start Restore button. Upon confirmation, your restore operation will begin. You can follow the progress via the progress view. Just as in a backup operation which may take hours to complete depending on the bandwidth available and amount of data, you can move on to other activities, close your browser etc. and the process will continue to run on the server in the background.
Deleting snapshots manually is generally ill-advised…better to let the system take care of rolling over snapshots that are no longer required. However, if deleting a snapshot is required, it can be done through the app's UI.
From the main overview page, click on Restore. Below the restoration settings form you will see a table with a list of snapshots. Clicking on the Refresh button simply ensures that the snapshots on the cloud-based storage server are in sync with the enumerated list on your server.
Each entry contains information that will help you identify a snapshot and determine how much additional storage space (storage used column) will become available if it is deleted.
Running operations from the command line should *only* be done by experienced users who have advanced use-cases (i.e. automation, scripting, etc.).
To get a list of commands for the Remote Backup Service (RBS) client, run (as root):
For more information, refer to the command line client documentation.
Your data is your data. It is that simple.
This is why ClearCenter designs the storage of your data to best protect it from intrusions and threats. ClearCenter uses strong encryption in both the transmission and the storage of your data: the handshake process uses RSA 2048-bit (or better) public/private key authentication, the data channel is secured with a randomly generated 256-bit (or better) AES session key, the shared secret key (host key) used for authentication is also a 256-bit AES key, and data at rest is encrypted by the dm-crypt module using a 256-bit AES key. The technology is designed in such a way that only the transmitting ClearOS server can decrypt the data. The volume key is never stored anywhere by ClearCenter; if a client loses or forgets their volume key, we cannot recover their backup data. It is your responsibility to ensure that this key is complex and difficult to compromise, and to remember it. The client key (which is set by the user of this service, located on the server in question, and revealable only by the user or their server) represents the only tangible method known to us for accessing the user's data.
While it is possible for an entity with enough resources to crack any encryption method, ClearCenter has and will take every precaution to ensure that the tightest methods available to us are implemented within best security practices. In the event that a general failure, backdoor, loophole, exploit or other mechanism is discovered, ClearCenter will make every effort to ensure that the failure, backdoor, loophole, exploit or other mechanism is fixed, updated, or repaired. In the event that ClearCenter cannot ensure security of the transmission or storage of your data, ClearCenter will discontinue the service and refund the balance of the remaining service at a pro-rated rate based on the remaining duration of the service term purchased.
All Remote Server Backup data is stored outside of the United States of America. All data centers used by ClearOS for the purposes of Remote Server Backup and customer data comply with PCI standards and industry best practices. Encrypted data stored on ClearCenter servers is subject to the local laws of the countries in which that data is stored. This can include any of the following countries: Canada, New Zealand, and the United Kingdom; by default, data is located in Canada. If you desire that your RBS data be stored in a country not listed, or in a particular country that is listed, please contact ClearCenter. To date, ClearCenter and its related companies have never received any request from any legal authority requesting customer data (encrypted or otherwise) under subpoena or other legal instrument.
Remote Server Backup Privacy and Security Statement version 2.0 (updated 1 January 2014) David Loper Vice President of Technology Representing ClearCenter Corp.