The Bandwidth and QoS (quality of service) Manager app is used to shape or prioritize network traffic.
If your system does not have this app available, you can install it via the Marketplace.
You can find this feature in the menu system at the following location:
<navigation>Network|Bandwidth Control|Bandwidth and QoS Manager</navigation>
The QoS application works by allocating traffic to seven priority buckets. Lower-numbered buckets are allowed to take bandwidth from higher-numbered buckets: bucket 1 can take bandwidth from buckets 2-7, bucket 2 only from buckets 3-7, and so on. Any single bucket can take all of the bandwidth if no other traffic is using it.
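The borrowing behaviour can be pictured with a toy model (an illustration of the scheduling idea only, not the actual ClearOS engine): lower-numbered buckets are satisfied first, and whatever bandwidth is left flows down to the higher-numbered buckets.

```shell
# Toy model of priority-bucket borrowing (illustration only, not the real engine).
total=10000                        # link bandwidth in kbit/s
demand=(4000 1000 0 0 0 0 8000)    # demand of buckets 1..7
alloc=()
left=$total
# Serve lower-numbered buckets first; higher-numbered buckets get the remainder.
for d in "${demand[@]}"; do
  take=$(( d < left ? d : left ))
  alloc+=("$take")
  left=$(( left - take ))
done
echo "allocations: ${alloc[*]}"    # buckets 1 and 2 are fully served
```

Here bucket 7 demands 8000 kbit/s but only receives the 5000 kbit/s left over after buckets 1 and 2 are served.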
The upstream and downstream rates for your external (Internet) interfaces must be specified in order to optimize the underlying QoS engine. If you set these values below your actual upload/download rates, then you will find your bandwidth capped by these lower values.
We recommend the SpeedTest.net online tool for measuring actual bandwidth. Please perform these tests when network traffic is low (off hours) and without a web proxy running.
To do this, navigate to <navigation>Network|Settings|IP Settings</navigation> in Webconfig. Then click on the Speedtest icon, located next to your external interface, as shown below.
A dialog will appear; click 'Run Speed Test' to start the test.
Once this is completed, you may make changes as recommended above or return to the 'Bandwidth and QoS Manager' app.
If you land on a screen like this, you will need to add your interface to the QoS engine. If you have not yet set the upstream and downstream bandwidth limits for the interface, you will be redirected to the IP Settings screen to run a speed test (or enter the limits manually):
QoS has 7 priority bands with 7 being the lowest. Any traffic which does not match any of the QoS rules gets allocated to band 7.
It is recommended that Priority 1 be left alone. The default Priority 1 rules cover ICMP (pings etc.) and DNS, both of which are important for smooth traffic flow. In the background there is also a custom rule covering small packets such as TCP ACK packets.
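The small-packet rule typically matches bare TCP ACK segments using standard firewall switches. An illustrative match of that kind (not necessarily the exact built-in rule) looks like:

```
# Match bare TCP ACK segments of up to 64 bytes (illustrative iptables-style match)
-p tcp --tcp-flags SYN,RST,ACK ACK -m length --length :64
```

Prioritizing these tiny acknowledgement packets keeps downloads fast even when the upstream link is saturated.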
Priority 2 is possibly best left alone. It exists because sysadmins may at times need rapid access to the server to sort out issues. User rules should start at Priority 3 (or possibly 2).
It is fairly common to require higher QoS for a particular remote IP or network. For example, many VoIP solutions use Internet SIP servers for providing services, and these servers should be given high priority. Below is an example step-by-step guide for providing high priority to IP 18.104.22.168.
A similar rule needs to be added for downstream QoS:
With the QoS engine enabled, traffic to and from 18.104.22.168 will be given priority over other traffic.
Typical high priority traffic is VoIP or other telephony. Traffic like file transfers would normally be considered as low priority but it really depends on the individual use case.
This can be a single IP address or a subnet in either CIDR notation (192.168.0.0/xx) or Network/Netmask notation (192.168.0.0/w.x.y.z).
The port can be a single port, a port range separated by a colon (eg 1000:2000), or a list of up to 15 ports separated by commas (e.g. 80,443,8080,8443).
If you want to create a rule for a whole LAN subnet, the Source Address and Destination Address boxes accept subnets in both the 172.17.2.0/24 form and in the 172.17.2.0/255.255.255.0 form.
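Both notations describe the same subnet. If you want to double-check an entry, a CIDR prefix length can be converted to its dotted netmask with a little shell arithmetic (the 172.17.2.0/24 subnet here is just the example from above):

```shell
# Convert a CIDR prefix length to a dotted netmask, e.g. 24 -> 255.255.255.0
prefix=24
mask=$(( 0xFFFFFFFF << (32 - prefix) & 0xFFFFFFFF ))
netmask=$(( mask >> 24 & 255 )).$(( mask >> 16 & 255 )).$(( mask >> 8 & 255 )).$(( mask & 255 ))
echo "172.17.2.0/$prefix == 172.17.2.0/$netmask"
```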
It is possible to add custom rules. To do so, edit /etc/clearos/qos.conf directly and add them to the QOS_PRIOMARK4_CUSTOM setting. Many of the firewall switches can be used in the <param> section. See the example in the file and the two TCP/ACK rules.
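The <param> section accepts standard iptables-style match switches. A couple of illustrative matches (verify the surrounding field layout against the example entry shipped in qos.conf before copying):

```
# Illustrative firewall-style matches for the <param> section
-p udp --dport 5060                     # SIP signalling
-p tcp -m multiport --dports 80,443     # web traffic
```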
If you need to add port ranges or more complex network matching, you can contact ClearCARE support. The support team will be able to provide custom configuration rules for the QoS engine.
It is possible to limit the amount of bandwidth available to each priority bucket. By default each one can take up to 100% of the available bandwidth. Similarly, each bucket starts with an equal share of the available bandwidth (before the lower-numbered buckets take from the higher-numbered ones), but this can be varied. For more details see /etc/clearos/qos.conf
<!-- Before getting started with the QoS configuration, it is important to know about best practices. There are two ways to approach bandwidth management:
* Limit low priority traffic in an effort to improve speeds for high priority traffic
* Reserve bandwidth for high priority traffic which will shuffle low priority traffic aside
It is impossible to predetermine what types of traffic will be low priority, but typically quite easy to identify important traffic (VoIP being an obvious one). Therefore, **reserving bandwidth for high priority traffic** is the best way to proceed with QoS management. You can always add rules for low priority traffic as well, but it is often not necessary. -->
Having a web proxy configured, either on a ClearOS gateway or on some other local proxy server, complicates matters. As soon as a web request is made via the proxy, the source IP address of the request is lost. In other words, bandwidth rules that match an IP address on your local network will have no effect on traffic going through the proxy. See the examples for ways to limit bandwidth to your proxy server.
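One common workaround is to match the proxy's own traffic by port rather than by client IP. Assuming Squid on its default port of 3128 (this port number is an assumption; check your proxy configuration), an iptables-style match of the following form will catch client-to-proxy traffic:

```
# Traffic from LAN clients to the proxy itself (3128 is Squid's default port;
# adjust if your proxy listens elsewhere)
-p tcp --dport 3128
```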
Please see this forum post