QoS: Keeping it Simple

Quality of Service (QoS) is a topic that causes a lot of debate and confusion in the networking community. Over the years, I’ve seen many implementations that were not effective for various reasons, mostly because they were either over-engineered or they fell short and did not protect the critical flows adequately. As a result, I have settled on an approach that is simple yet effective and provides a QoS policy that is “just right” and can grow if required. The goal of this series of posts is not to teach you the ins and outs of implementation but to educate you enough about QoS so that you can build a QoS policy that is just right for your network.

What Is QoS?

The definition of QoS I found is: “The ability of the network to provide better or ‘special’ service to a set of users and applications to the detriment of others.”

Let’s start by dissecting this definition: “The ability of the network” means that this is done by the network infrastructure, meaning switches and routers. It is not a traffic-shaping server sitting on your network but the network infrastructure itself. Next, “to provide better or ‘special’ service” means that some traffic flows will be given preferential treatment and other traffic flows will be given what is left over, if anything. “To a set of users and applications” means we can choose which traffic flows get preferential treatment and which traffic flows get detrimental treatment or less bandwidth. “To the detriment of others” means that with a finite amount of bandwidth, if I give more bandwidth to my preferred traffic, there will be less available for the remaining traffic.

Now that we are on the same page about what QoS is, how do we implement it? Well, in my experience the best approach is to keep it simple. I have seen many implementations that were over-engineered, some to the point that they caused more problems than having no QoS at all. I have also seen the other extreme, where QoS helped but didn’t do enough to protect all of the critical services or traffic flows. So what is the “just right” amount of QoS? The right amount of QoS keeps all your critical services flowing, even when the network is completely congested, yet is simple enough that you can identify problem traffic in a very short amount of time. Basically, I seek a solution that is simple and yet effective.

Let’s address some common questions that always come up. This will establish a foundation on which we can build. We will conclude with a brief summary of my step-by-step procedure for developing and applying a “Just Right” QoS policy.

What Do I Have If There Is No QoS?

The default queuing method is FIFO, which stands for “First In, First Out.” This is the default on all switches and routers. It is the simplest of all queuing methods: every packet is treated the same and processed in the order in which it arrived. The problem with this type of queuing is that some traffic flows are more aggressive than others and will consume all of the resources (queue space and bandwidth), occupying the entire queue and starving out the less aggressive flows. This is not a situation that we want to allow.
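
To make that starvation problem concrete, here is a minimal sketch in Python (the packet sources and the queue depth are made up for illustration, and this is a conceptual model, not device behavior) of a single tail-drop FIFO queue being filled by one aggressive flow:

    from collections import deque

    QUEUE_DEPTH = 64   # hypothetical hardware queue depth, in packets
    fifo = deque()     # a single FIFO: every packet waits in the same line

    def enqueue(packet):
        # Tail-drop FIFO: if the queue is full, the arriving packet is simply discarded.
        if len(fifo) >= QUEUE_DEPTH:
            return False   # dropped -- FIFO has no notion of "important" vs. "bulk" traffic
        fifo.append(packet)
        return True

    # An aggressive bulk transfer arrives in a burst and fills the queue...
    for i in range(100):
        enqueue(("bulk-download", i))

    # ...so a voice packet arriving right behind it is dropped, no matter how critical it is.
    print("voice packet accepted:", enqueue(("voice", 0)))   # False: the queue is already full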

How Many Classes or Queues Should I Have?

What if we could create or enable multiple FIFO queues? We can, and this is exactly what we are going to do on each interface. On some devices, namely switches, all we have to do is enable them, which gives us four egress queues. On routers, we have to create them, and we have the flexibility to create as many as we want. We want to keep it simple, so, for consistency, we will create the same number of egress queues on our routers that we have in the rest of our network. Once we have these four queues, we can assign certain traffic types to each of them.

By creating classes, network administrators get to choose what traffic is assigned to each of the different FIFO queues. For example, voice traffic from my phone can be put into Queue #1, my traffic to the corporate Oracle server can be assigned to Queue #2, and my general web surfing can be assigned to Queue #3. By creating multiple queues, we can manage each queue individually, meaning we can adjust its size and how often we service it. This is where most people get carried away and over-complicate things. Don’t do this. Keep it simple and be consistent throughout your entire network. This will be a life saver in the long run. I can’t stress that enough!
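
As a purely conceptual illustration (not any vendor’s configuration syntax), the Python sketch below sorts packets into four hypothetical queues; the application names and queue assignments simply mirror the example above:

    from collections import deque

    # Four egress FIFO queues per interface, as described above.
    queues = {1: deque(), 2: deque(), 3: deque(), 4: deque()}

    def classify(packet):
        # Map a traffic type to a queue number (the assignments are illustrative).
        if packet["app"] == "voice":
            return 1   # voice from my phone
        if packet["app"] == "oracle":
            return 2   # traffic to the corporate Oracle server
        if packet["app"] == "web":
            return 3   # general web surfing
        return 4       # everything else

    def enqueue(packet):
        queues[classify(packet)].append(packet)

    enqueue({"app": "voice",  "payload": "rtp"})
    enqueue({"app": "oracle", "payload": "sql"})
    enqueue({"app": "web",    "payload": "http"})
    print({q: len(d) for q, d in queues.items()})   # {1: 1, 2: 1, 3: 1, 4: 0}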

Now we have four FIFO queues into which we can sort our traffic flows, and we can adjust each queue’s size as well as how often and for how long we service it. I think you can see that this is already a huge improvement over the default of a single FIFO queue, and we are just getting started.
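
To show what “how often and for how long we service each queue” can mean in practice, here is a rough weighted round-robin sketch; the per-queue weights are made-up numbers, and real platforms implement this in hardware with their own scheduler settings:

    from collections import deque

    # Hypothetical weights: how many packets each queue may send per scheduling pass.
    weights = {1: 4, 2: 3, 3: 2, 4: 1}
    queues  = {1: deque(), 2: deque(), 3: deque(), 4: deque()}

    def service_once():
        # One scheduler pass: drain up to "weight" packets from each queue in turn.
        sent = []
        for q, weight in weights.items():
            for _ in range(weight):
                if not queues[q]:
                    break
                sent.append(queues[q].popleft())
        return sent

    # Load the queues unevenly and watch the preferred queue get served first.
    for i in range(10):
        queues[4].append(("bulk", i))   # an aggressive, low-priority flow
    queues[1].append(("voice", 0))      # a single voice packet

    print(service_once())   # the voice packet goes out in the first pass despite the bulk backlog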

Next week we’ll finish up with a review of the classes.

Reproduced from Global Knowledge White Paper: QoS, Keeping It Simple

Related Courses
QOS — Implementing Cisco Quality of Service
CVOICE — Implementing Cisco Unified Communications Voice over IP and QoS v8.0
ICOMM — Introducing Cisco Voice and UC Administration v8.0

QoS, Keeping it Simple Series

  • QoS: Keeping it Simple