
Quality of Service Essay


In computer networking, Quality of Service (QoS) is a traffic engineering term that has little (or nothing) to do with service quality in the everyday sense. QoS is a set of technologies used to manage network traffic economically in order to improve the overall experience for both home and corporate users. It can be defined as the ability of a network to give different priority to different applications, users and/or data flows.

QoS allows users to measure their connection bandwidth, detect changes in network conditions and prioritize network traffic. Furthermore, implementing QoS solutions is important for a whole new generation of applications, for instance Voice over IP (VoIP).


Quality of Service Parameters

As discussed earlier, the goal of QoS is to guarantee, within the limits of the network's capacity, the results that applications expect from it.

Several factors affect QoS, and they can be divided into ‘human’ and ‘technical’ factors.

a-      Human Factors

These factors are the following:

i.            Stability of network service.

ii.            Availability of network service.

iii.            Delays in service.

iv.            Critical user information. (Vegesna, S. 2001)

b-      Technical factors

Technical factors, on the other hand, are more comprehensive and play a much greater role than human factors. They reflect the fact that many things can happen to packets as they travel from their origin to their final destination, resulting in a number of problems. These factors are also known as the performance dimensions (or parameters) of QoS, and they can be measured and monitored to determine the level of service being offered or received. They consist of the following:

i-                    Network availability

Network availability is, at bottom, the availability of the network's many components, for instance redundant interfaces, processor cards, networking protocols and physical connections.

These components need to be implemented with appropriate degrees of redundancy to ensure adequate network availability. Network availability therefore has a significant effect on QoS: unavailability for even brief periods can lead to disastrous network performance and hence adversely affect QoS. (Chao, J. 2002)
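
As a rough illustration that is not drawn from the essay's sources, the effect of redundancy on availability can be estimated with the standard serial/parallel availability formulas; the Python sketch below uses hypothetical per-component availability figures.

    # Rough availability estimate for a path built from components, using the
    # standard formulas A_serial = product(A_i) and A_parallel = 1 - product(1 - A_i).
    # All figures below are hypothetical.

    def serial(*availabilities):
        # Every component must be up for the path to be up.
        result = 1.0
        for a in availabilities:
            result *= a
        return result

    def parallel(*availabilities):
        # The path is up if at least one redundant component is up.
        down = 1.0
        for a in availabilities:
            down *= (1.0 - a)
        return 1.0 - down

    redundant_link = parallel(0.999, 0.999)            # two redundant physical connections
    path = serial(0.9999, redundant_link, 0.9995)      # interface card + links + processor card

    print(f"redundant link availability: {redundant_link:.6f}")
    print(f"end-to-end availability:     {path:.6f}")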

ii-                  Bandwidth (Throughput)

Bandwidth (also termed throughput) is another major parameter affecting QoS. It is the bit rate (or maximum throughput) provided to the different data streams; it can become too low when all data streams have the same scheduling priority and the users sharing the network place varying loads on it.

Bandwidth allocation is further sub-divided into the following two types:

–          Available Bandwidth

Empirical evidence has shown that network operators usually oversubscribe bandwidth in order to maximize the return on their investment in the network infrastructure, or on bandwidth leased from another provider. This oversubscribed bandwidth is not the same as the actual bandwidth delivered to the user, and it leads users to compete with one another for the available bandwidth. The amount of bandwidth a user actually obtains therefore depends on the network traffic. (Xiao, X. 2008)

ADSL networks frequently use the available-bandwidth approach. If a customer signs up for a 256 kbps service whose service level agreement (SLA) states that no QoS guarantee is provided, this means that under low load the user can obtain the promised bandwidth, but as the load increases the available bandwidth decreases. This effect is noticeable at particular times of day when more users are accessing the network.
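
To make the oversubscription effect concrete, the sketch below uses purely hypothetical figures (a 2,048 kbps uplink shared by 50 subscribers who each signed up for 256 kbps) to show how the per-user share shrinks as more subscribers become active at the same time.

    # Hypothetical example: 50 subscribers sold 256 kbps each, sharing a 2,048 kbps
    # uplink, giving an oversubscription ratio of 50 * 256 / 2048 = 6.25:1.
    SOLD_RATE_KBPS = 256
    UPLINK_KBPS = 2048
    SUBSCRIBERS = 50

    ratio = SUBSCRIBERS * SOLD_RATE_KBPS / UPLINK_KBPS
    print(f"oversubscription ratio: {ratio:.2f}:1")

    for active in (4, 8, 16, 32, 50):
        # Fair share if 'active' users transmit at full rate simultaneously;
        # nobody receives more than the rate they signed up for.
        share = min(SOLD_RATE_KBPS, UPLINK_KBPS / active)
        print(f"{active:2d} active users -> about {share:.0f} kbps each")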

–          Guaranteed bandwidth

This type of bandwidth is guaranteed by the provider (i.e. the network operator). The SLA states that the provider guarantees a minimum bandwidth to the user. Because the bandwidth is guaranteed, the price is considerably higher than it would otherwise be. (Marchese, M. 2007)

Network operators might also separate their subscribers onto different physical networks, e.g. via VLANs for guaranteed-bandwidth subscribers. In other cases, available-bandwidth and guaranteed-bandwidth subscribers share the same network; this is common where establishing new networks is expensive or where bandwidth is leased from another service provider. In such cases it is advisable that network operators prioritize the traffic of guaranteed-bandwidth users over that of available-bandwidth users, so that when network congestion occurs the SLA requirements of the guaranteed-bandwidth users are still met. (Evans, J. 2007)

Burst bandwidth is simply the excess bandwidth above the guaranteed rate. QoS mechanisms can discard this excess (burst) traffic and so ensure that the service being provided stays in line with what was agreed in the SLA.
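
One common way of enforcing such an SLA is a token-bucket policer. The sketch below is a minimal illustration rather than any particular vendor's implementation; the guaranteed rate and burst allowance are hypothetical values.

    import time

    class TokenBucketPolicer:
        """Drop traffic that exceeds the guaranteed rate plus an allowed burst."""

        def __init__(self, rate_bytes_per_s, burst_bytes):
            self.rate = rate_bytes_per_s      # guaranteed (committed) rate
            self.capacity = burst_bytes       # bucket depth = allowed burst
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            # Refill tokens for the elapsed time, never beyond the bucket depth.
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes
                return True        # conforming traffic: forward
            return False           # burst beyond the SLA: drop (or re-mark)

    # Hypothetical use: 256 kbps guaranteed rate with an 8 KB burst allowance.
    policer = TokenBucketPolicer(rate_bytes_per_s=256_000 / 8, burst_bytes=8_192)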

iii-                Network Delays (Latency)

Network delay (latency) is the time traffic takes to transit the network from its ingress point to its egress point. Delay can cause significant QoS problems for real-time applications and for other applications such as SNA and fax transmission.

Some applications can still work when delays are small; beyond a certain level, however, they stop working altogether and QoS is compromised as a result. An example is a VoIP gateway, which can provide some local buffering to absorb small network delays.

Network delays can be either fixed or variable; a rough worked example follows the lists below.

Some examples of fixed network delays are as follows:

–          Application based delays.

–          Serialization delays (i.e. data transmission).

–          Propagation delays.

Some examples of variable network delays are as follows:

–          Ingress queuing delay: incurred by traffic waiting to enter a network node.

–          Egress queuing delay: incurred by traffic waiting to leave a network node.
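
As a rough worked example of the fixed components (the link length, link rate and packet size are hypothetical), propagation and serialization delay can be estimated directly; variable queuing delay is then added on top at each node.

    # Hypothetical fixed-delay estimate for one hop.
    LINK_LENGTH_KM = 1_000          # fibre distance
    PROPAGATION_SPEED = 200_000     # km/s, roughly two thirds of the speed of light in fibre
    LINK_RATE_BPS = 10_000_000      # 10 Mbps link
    PACKET_BITS = 1_500 * 8         # one 1,500-byte packet

    propagation_ms = LINK_LENGTH_KM / PROPAGATION_SPEED * 1_000
    serialization_ms = PACKET_BITS / LINK_RATE_BPS * 1_000

    print(f"propagation delay:   {propagation_ms:.2f} ms")    # ~5.00 ms
    print(f"serialization delay: {serialization_ms:.2f} ms")  # ~1.20 ms
    # Queuing delay then varies with the load at each node and is added on top.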

iv-                Jitter

Jitter can be defined as the variation in delay between consecutive packets of a given traffic flow. It can have a significant effect on real-time applications that are sensitive to delay, for example voice and video. Most real-time applications require packets to be sent and received at a steady rate with as little delay as possible. Slight jitter might be acceptable, but if it increases to significant levels the application can be rendered completely unusable.

Some applications, however, can compensate for small amounts of jitter, for instance voice gateways or VoIP endpoints. A voice application requires audio to be played back at a constant rate: if a packet does not arrive in time, the previous voice packet is replayed until the next one arrives, and if the next packet takes considerably longer to arrive, the result is a small interruption in the audio playback.

Nevertheless, all networks experience some jitter because of the variability in delay introduced at each node. As long as this jitter can be bounded, QoS can be maintained. (Cisco.com, n.d.)
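
One widely used way of quantifying jitter is the smoothed interarrival-jitter estimator defined for RTP in RFC 3550; the sketch below applies it to a few hypothetical packet timestamps.

    def interarrival_jitter(send_times, recv_times):
        """Smoothed jitter estimate in the style of RFC 3550 (J += (|D| - J) / 16)."""
        jitter = 0.0
        prev_transit = None
        for sent, received in zip(send_times, recv_times):
            transit = received - sent
            if prev_transit is not None:
                d = abs(transit - prev_transit)
                jitter += (d - jitter) / 16.0
            prev_transit = transit
        return jitter

    # Hypothetical packets sent every 20 ms but arriving with variable delay (seconds).
    sent = [0.000, 0.020, 0.040, 0.060, 0.080]
    received = [0.050, 0.072, 0.089, 0.113, 0.130]
    print(f"estimated jitter: {interarrival_jitter(sent, received) * 1000:.2f} ms")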

v-                  Loss

Sometimes packets are misdirected, mistakenly combined, or corrupted while en route. The receiver has to detect this, and if a packet is dropped the sender must retransmit it. Errors can also be introduced by the physical transmission medium and so lead to losses. Empirical evidence has shown that most landline connections have low loss, measured as the Bit Error Rate (BER), i.e. they have a low BER. Most wireless connections, on the other hand, have been found to have a high BER, partly due to environmental and geographical factors, for instance rain and fog.

Apart from this, loss can also result from congested nodes dropping packets. Different networking protocols exist that can mitigate packet loss; for example, the Transmission Control Protocol (TCP) offers protection against packet loss by retransmitting packets that were dropped or corrupted by the network. Continuous congestion leads to deteriorating network performance and numerous TCP retransmissions. A better way to deal with this problem is to deploy congestion avoidance mechanisms, of which Random Early Discard (RED) is an example. RED randomly chooses packets and intentionally drops them once the network traffic reaches a configured threshold. It takes advantage of TCP's throttling behavior and provides more efficient management of data flows. It should be kept in mind, however, that RED only provides effective congestion management for protocols with TCP-like throttling mechanisms.
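
A minimal sketch of the RED idea follows; the thresholds, averaging weight and maximum drop probability are illustrative values rather than recommended settings. Once the average queue depth crosses the minimum threshold, packets are dropped at random with a probability that grows until the maximum threshold is reached.

    import random

    class RedQueue:
        """Simplified Random Early Discard: probabilistic drops between two thresholds."""

        def __init__(self, min_th=20, max_th=60, max_p=0.1, weight=0.002):
            self.min_th, self.max_th, self.max_p = min_th, max_th, max_p
            self.weight = weight          # EWMA weight for the average queue size
            self.avg = 0.0
            self.queue = []

        def enqueue(self, packet):
            # Exponentially weighted moving average of the instantaneous queue depth.
            self.avg = (1 - self.weight) * self.avg + self.weight * len(self.queue)
            if self.avg < self.min_th:
                drop = False
            elif self.avg >= self.max_th:
                drop = True
            else:
                # Drop probability rises linearly from 0 to max_p between the thresholds.
                p = self.max_p * (self.avg - self.min_th) / (self.max_th - self.min_th)
                drop = random.random() < p
            if not drop:
                self.queue.append(packet)
            return not drop               # False means the packet was discarded early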

c-      Other Factors

There are other factors which, although they cannot be measured, provide effective traffic management mechanisms. They consist of the following:

i-                    Emission priorities

Emission priorities determine the sequence in which traffic is forwarded as it exits a network node. Traffic with a higher emission priority is forwarded ahead of traffic with a lower emission priority. A related aspect of emission priorities is that they determine how much latency the queuing mechanism of the network node introduces into the traffic.

Applications that can tolerate network delays, for example e-mail, can be configured with lower emission priorities. Such traffic is typically buffered while delay-sensitive traffic is transmitted, as the scheduler sketch below illustrates.
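
The sketch below illustrates one simple way emission priorities might be honored at an egress interface (the traffic classes and priority values are hypothetical): the highest-priority non-empty queue is always served first.

    from collections import deque

    # Lower number = higher emission priority (values are illustrative).
    PRIORITIES = {"voice": 0, "video": 1, "web": 2, "email": 3}

    class StrictPriorityScheduler:
        def __init__(self):
            self.queues = {p: deque() for p in sorted(PRIORITIES.values())}

        def enqueue(self, packet, traffic_class):
            self.queues[PRIORITIES[traffic_class]].append(packet)

        def dequeue(self):
            # Always serve the highest-priority queue that has traffic waiting.
            for priority in sorted(self.queues):
                if self.queues[priority]:
                    return self.queues[priority].popleft()
            return None   # nothing to send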

Discard priorities:

Discard priorities determine the sequence in which traffic is discarded. Traffic can be dropped either because a network node is congested or because the traffic has exceeded its allotted bandwidth for a certain period of time. Under congestion, traffic with a higher discard priority is dropped earlier than traffic with a lower discard priority. Traffic classes with otherwise similar QoS can be distinguished using discard priorities: when the network node is not congested they receive similar performance, while under congestion the discard priority determines which traffic is eligible to be dropped first.
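
As a toy illustration of that ordering (not a real queuing implementation), a congested node can be modeled as keeping the packets it would rather preserve and dropping the most discard-eligible traffic first:

    def drop_until_fits(backlog, capacity):
        """Drop the most discard-eligible packets first until the backlog fits.

        'backlog' is a list of (packet, discard_priority) pairs; a higher
        discard priority means the packet is dropped earlier under congestion.
        """
        # Keep the packets with the lowest discard priority, up to the capacity.
        survivors = sorted(backlog, key=lambda item: item[1])[:capacity]
        dropped = len(backlog) - len(survivors)
        return survivors, dropped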

QoS Mechanisms

Alternatives to QoS mechanisms have surfaced over the years: high-quality communication can also be provided by over-provisioning a network so that its capacity is based on peak traffic load. Such an approach is relatively simple and economical for networks with predictable, light traffic loads, and it is adequate for a great many applications.

a-      Integrated Services (IntServ)

IntServ is a QoS mechanism built around the idea of reserving network resources. In computer networking terms, IntServ is an architecture that specifies the elements needed to guarantee QoS on a network. It can, for instance, be used to allow audio and video to reach the receiver without significant interruption. It is a fine-grained QoS system and is often contrasted with DiffServ.

The idea behind IntServ is that every router in the system implements IntServ, and every application that needs some kind of guarantee has to make an individual reservation. These reservations are described by flow specifications (FlowSpecs).

A FlowSpec answers two questions:

i-                    What does the network traffic look like?

ii-                  What guarantees does the network traffic need?

The first question is answered by the Traffic Specification (TSPEC), whereas the second is answered by the Request Specification (RSPEC).

A TSPEC specifies both a token rate and a bucket depth. For instance, a video with a refresh rate of 105 frames per second, each frame requiring 25 packets, would specify a token rate of 2,625 Hz and a bucket depth of just 25. Some conversations need a lower token rate but a greater bucket depth; in other words, the bucket depth has to be increased to accommodate burstier traffic.
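
The arithmetic behind that example is simply the per-second packet rate, with the bucket depth sized to absorb the burst of a single frame:

    # The video example above: 105 frames per second, 25 packets per frame.
    frames_per_second = 105
    packets_per_frame = 25

    token_rate = frames_per_second * packets_per_frame   # 2,625 tokens per second
    bucket_depth = packets_per_frame                      # 25 tokens: one frame's burst
    print(f"token rate: {token_rate} Hz, bucket depth: {bucket_depth}")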

Under this model, applications use the Resource Reservation Protocol (RSVP) to request and reserve resources through a network. RSVP is a transport-layer protocol that normally operates over the IPv4 or IPv6 Internet layer. It can be used by hosts and routers alike to request and/or deliver specified levels of QoS for application data streams and flows.
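
As a conceptual illustration of the per-flow state an IntServ-style router has to keep (a simplified model, not RSVP itself; the field names and capacity figure are hypothetical), admission control can be pictured as follows:

    from dataclasses import dataclass

    @dataclass
    class FlowSpec:
        token_rate: float     # TSPEC: average packets (or bytes) per second
        bucket_depth: float   # TSPEC: allowed burst
        reserved_rate: float  # RSPEC: bandwidth the flow asks the network to guarantee

    class IntServRouter:
        """Toy admission control: accept a reservation only if capacity remains."""

        def __init__(self, capacity_bps):
            self.capacity_bps = capacity_bps
            self.reservations = {}          # per-flow state held by every router on the path

        def reserve(self, flow_id, spec: FlowSpec):
            committed = sum(s.reserved_rate for s in self.reservations.values())
            if committed + spec.reserved_rate > self.capacity_bps:
                return False                # admission control rejects the request
            self.reservations[flow_id] = spec
            return True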

Although RSVP is not itself a routing protocol, it was designed to interoperate with both current and future routing protocols.

Nevertheless, IntServ ran into considerable problems as the Internet grew, largely because core routers would have to accept and keep track of many thousands of individual reservations, which is difficult and does not scale well.

An RSPEC, on the other hand, specifies the requirements of the data flow. The simplest setting is ordinary Internet ‘best effort’, in which case no reservation is needed; this is typically used for web pages, file transfers (FTP) and similar traffic.

b-      Differentiated Services (DiffServ)

In this model, packets are marked according to the type of service they need. DiffServ is a computer networking architecture that specifies a simple, coarse-grained mechanism for classifying and managing network traffic and for providing QoS guarantees on current IP networks.
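
In practice, DiffServ marking is carried in the DSCP bits of the IP header. As a small illustration, an application on most platforms can request a DSCP value for its own traffic through the socket TOS/traffic-class option; whether routers honor the marking depends entirely on network policy.

    import socket

    EF_DSCP = 46                     # "Expedited Forwarding", commonly used for voice
    tos = EF_DSCP << 2               # DSCP occupies the upper six bits of the old TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)
    # Packets sent from this socket now carry DSCP 46; DiffServ-aware routers can
    # map that marking onto a low-latency per-hop behavior if policy allows it.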

References

Chao, J. (2002). Quality of service control in high-speed networks. Wiley-IEEE

Cisco.com. (n.d.). Internetworking Technology Handbook: Quality of Service. Retrieved 24 July 2010 from: http://www.cisco.com/en/US/docs/internetworking/technology/handbook/QoS.html

Evans, J. (2007). Deploying IP and MPLS QoS for Multiservice Networks: Theory and Practice. Morgan Kaufmann

Marchese, M. (2007). QoS Over Heterogeneous Networks. Wiley

Vegesna, S. (2001). IP quality of service. Cisco Press

Xiao, X. (2008). Technical, Commercial and Regulatory Challenges of QoS: An Internet Service Model Perspective. Morgan Kaufmann