performance degradation. This is due to the
policing function (CAR rate-limit), and the
effect is packet loss (not shown in the figure).
Streaming is then the next class to suffer performance
degradation. As long as the offered rate for
this queue is lower than the scheduler rate, the
average latency is almost as low as for VoIP
(the difference is 0.4 ms). But this load interval is
rather small, and when the scheduler rate becomes
too low, performance degrades quickly.
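The behaviour described above can be illustrated with a minimal fluid-model sketch (all rates are hypothetical and not taken from the measurement scenario): a class queue stays empty while its offered (feed) rate is below the scheduler (drain) rate, but backlog and latency grow quickly once the feed rate exceeds it.

```python
# Idealised fluid model (hypothetical parameters): approximate queueing
# latency of one class after a fixed observation interval, given its
# offered (feed) rate and scheduler (drain) rate.

def average_latency_ms(feed_mbps, drain_mbps, horizon_s=1.0):
    """Latency in ms caused by the backlog built up over `horizon_s` seconds."""
    if feed_mbps <= drain_mbps:
        return 0.0  # queue stays empty in this idealised model
    backlog_mbit = (feed_mbps - drain_mbps) * horizon_s
    return backlog_mbit / drain_mbps * 1000.0  # time to drain the backlog

for feed in (8, 10, 12, 14):  # drain rate fixed at 10 Mbit/s
    print(feed, "Mbit/s offered ->", average_latency_ms(feed, 10), "ms")
```

The model ignores packet-level effects and buffer limits, but it shows the same qualitative shape as the figure: no differentiation at low load, then a sharp knee when drain rate falls below feed rate.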

Although the example given is not a realistic
traffic load scenario, it shows that the load interval
in which we get some kind of performance
differentiation can be rather small, and that we
need some control mechanism to guarantee performance
to some extent. With low load (no
congestion) there will be no differentiation
between the service classes. Only in case of
congestion can we see a difference between the
classes. This difference depends on the relation
between the relative share of offered traffic and
the weight given to the class by the scheduler
(drain rate). If the configured rate does not
match the actual traffic, we may, for example, see
the BE class get the best performance! Even the
performance guarantee of the priority class relies
on the offered traffic staying below the rate
limit for this class.
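The rate-limit on the priority class is typically enforced by a token-bucket policer (of which CAR is one realisation). The sketch below, with hypothetical rate and burst parameters, shows why the priority guarantee holds only while offered traffic stays below the configured limit: non-conforming packets are simply dropped.

```python
# Hypothetical token-bucket policer, analogous to a CAR rate-limit on the
# priority (VoIP) class: traffic above the configured rate is dropped, so
# the class keeps its low latency only while it stays below the limit.

class TokenBucket:
    def __init__(self, rate_bps, burst_bytes):
        self.rate = rate_bps / 8.0   # refill rate in bytes per second
        self.burst = burst_bytes     # bucket depth (maximum burst)
        self.tokens = burst_bytes
        self.last = 0.0

    def conforms(self, now, packet_bytes):
        """True if the packet conforms and is forwarded; False if policed."""
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if packet_bytes <= self.tokens:
            self.tokens -= packet_bytes
            return True   # forward in the priority queue
        return False      # exceed action: drop (the packet loss noted above)

tb = TokenBucket(rate_bps=8_000, burst_bytes=1_000)
# Three back-to-back 500-byte packets: the third exceeds the burst allowance.
print([tb.conforms(0.0, 500) for _ in range(3)])
```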

A problem with DiffServ is that control of the
traffic from a given customer is based on the
SLA/SLS, which is normally given as a total
volume for each class to and from that customer.
That is, we can give an upper limit for the traffic
(for each class) entering the network, but we do
not know how the traffic is distributed. This may
create congested points in the network while
other parts are under-utilised. Furthermore, dimensioning
of the network cannot be based solely on the
SLS parameters, since these are upper limits
and would give an expensive worst-case design. A
more sophisticated control framework is needed
to support a well-dimensioned network that can
differentiate between service classes and at the
same time give some performance support. This
can be based on admission control, or on the use
of bandwidth reservation at an aggregated level
combined with traffic measurements, e.g. by use
of MPLS. MPLS also supports fast recovery
in case of failure, which may be required
by some services.

The Use of Measurements for Control and Capacity Planning
The utilisation of MPLS simplifies the task of
monitoring the traffic on each trunk and building
a picture of the load on the network. With the
aid of this information it should be possible to


  • Manage the network more effectively to
    obtain better end-to-end performance and
    more efficient use of resources;

  • Guide connection admission control (CAC);

  • Build traffic matrices for capacity planning
    purposes.
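The last point can be sketched as follows: per-LSP byte counters read at each ingress map directly to entries of an ingress-egress traffic matrix, since each LSP pins down both endpoints. Router names and counter values below are purely illustrative.

```python
# Sketch: building an ingress-egress traffic matrix from per-LSP byte
# counters measured at the LSP ingress (names and values are hypothetical).

from collections import defaultdict

def traffic_matrix(lsp_counters):
    """lsp_counters: iterable of (ingress, egress, bytes) tuples, one per LSP.

    Several LSPs may share the same ingress-egress pair; their counts add up.
    """
    matrix = defaultdict(float)
    for ingress, egress, count in lsp_counters:
        matrix[(ingress, egress)] += count
    return dict(matrix)

samples = [("PE1", "PE3", 4.0e9), ("PE1", "PE3", 1.0e9), ("PE2", "PE3", 2.0e9)]
print(traffic_matrix(samples))
```

This is precisely the simplification MPLS brings: without LSPs, the egress of a packet stream must be inferred from routing state, whereas here it is explicit in the trunk identity.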


This monitoring should be done at the entrance
to the MPLS network, i.e. at the LSP ingress in
the edge routers, giving a clearer picture of the
available resources in the network. By making
use of this information in the CAC process, it
should be possible to allow higher utilisation
without risking congestion.
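A minimal form of such measurement-based admission control can be sketched as below; the target utilisation and capacities are hypothetical, and a real CAC would also account for burstiness and measurement uncertainty.

```python
# Measurement-based CAC sketch (thresholds hypothetical): admit a new flow
# only if the measured trunk load plus the requested rate stays below a
# target utilisation of the trunk capacity.

def admit(measured_mbps, requested_mbps, capacity_mbps, target_util=0.9):
    """True if the flow can be admitted without exceeding the target load."""
    return measured_mbps + requested_mbps <= target_util * capacity_mbps

print(admit(80, 5, 100))   # 85 Mbit/s fits under 90 % of 100 Mbit/s
print(admit(88, 5, 100))   # 93 Mbit/s would exceed the target; reject
```

Because the decision uses measured load rather than the sum of SLS upper limits, it admits more traffic than a worst-case design would, which is exactly the gain argued for above.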

[Figure 3: Average latency distribution. X-axis: total offered traffic (Mbit/s), 139.5 to 172.6; Y-axis: average latency (μs), log scale from 100 to 1,000,000. Curves: voice, streaming, business, best-effort. Annotations: "No differentiation"; "Tx-buffer gets filled"; "Business queue starts to grow, drain rate < feed rate"; "Streaming queue starts to grow, drain rate < feed rate"; Tx-buffer = 30 particles.]