We now distinguish between coordinated and uncoordinated control.
Coordinated control. Given a set of paths c, a coordinated controller actively balances load over all the paths in c, taking into account the states of the paths. Our understanding of, and ability to design, such controllers rests on a significant advance made by Kelly et al.,8 which maps this problem into
one of utility optimization. In the case of coordinated congestion control, the objective is to maximize the “social welfare,” that is, to

maximize $\sum_{s} \sum_{c \in C(s)} N_c U_s(\lambda_c)$

over $(\lambda_{cr} \geq 0)$, subject to the capacity constraints

$\sum_{r \ni l} L_r \leq C_l \quad \text{for each resource } l,$

where $U_s$ is the utility function of class $s$, $N_c$ is the number of class-$s$ sessions using path set $c$, and $\lambda_{cr}$ is the sending rate of a class-$s$ session that is using path $r$ in $c \in C(s)$. We will find it useful to represent the total rate contributed by class-$s$ sessions that use path $r \in R(s)$ as $L_r = \sum_{c \ni r} N_c \lambda_{cr}$, and the aggregate rate achieved by a single class-$s$ session over all paths in $c$ as $\lambda_c = \sum_{r \in c} \lambda_{cr}$.
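To make the optimization concrete, the following sketch runs a Kelly-style primal iteration on a toy instance. The two-path topology, capacities, log utility, step size, and soft price function are all illustrative assumptions rather than the algorithm analyzed above: each per-path rate moves in the direction of the marginal utility $U'(\lambda_c)$ minus a congestion price on its path.

```python
# A minimal numerical sketch, assuming a single session (N_c = 1) with a
# path set c = {r1, r2}, each path crossing one dedicated resource, and
# log utility U_s = log. All parameters here are illustrative assumptions.
C = [1.0, 2.0]        # capacity C_l of the resource on each path
lam = [0.1, 0.1]      # per-path rates lambda_cr
step = 0.01

def price(load, cap):
    # Soft congestion price: positive once load exceeds capacity.
    return max(0.0, load - cap) * 50.0

for _ in range(20000):
    total = sum(lam)              # lambda_c = aggregate session rate
    marginal = 1.0 / total        # U'(lambda_c) for U = log
    for r in range(2):
        q = price(lam[r], C[r])   # price of path r (dedicated resource)
        lam[r] = max(0.0, lam[r] + step * (marginal - q))

# The aggregate rate settles near C[0] + C[1] = 3: both paths saturate.
print(round(sum(lam), 2))
```

At the fixed point the marginal utility equals the price on every path carrying traffic, which is exactly the optimality condition of the welfare problem above.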
3. LOAD BALANCING PROPERTIES OF MULTIPATH
Multipath has been put forward as a mechanism that, when used by all sessions, can balance traffic loads in the Internet. It is impossible to determine whether this is universally true. However, in this section we present a simple scenario where the issue can be definitively resolved: there are N resources, each with unit capacity (Cl ≡ 1).
To provide a concrete interpretation, the resources can be thought of as servers, or as relay or access nodes (see Figure 2). There are aN users. Each user selects b resources uniformly at random from the N available, where b is an integer greater than one (the same resource may be sampled several times). We shall look at the worst-case rate allocation of users in two scenarios. In the first scenario, users implement uncoordinated multipath congestion control: there is no coordination between the b distinct connections of each user, so a connection sharing a resource that handles X connections overall achieves a rate allocation of exactly 1/X. In the second scenario, each user implements coordinated multipath congestion control.
We take the worst-case user rate allocation (or throughput) as the load-balance metric. One can show13 that the more “unfair” the allocation, the greater the expected time to download a unit of data.
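The uncoordinated scenario is easy to check numerically. A minimal Monte Carlo sketch (the values of N, a, and b, and the use of a single random sample, are illustrative assumptions):

```python
import random

random.seed(1)
N, a, b = 1000, 1, 2   # N servers, a*N users, b choices per user (assumed)

def worst_case_rate():
    # Each of the a*N users picks b servers uniformly at random;
    # the same server may be sampled more than once.
    users = [[random.randrange(N) for _ in range(b)] for _ in range(a * N)]
    load = [0] * N
    for picks in users:
        for s in picks:
            load[s] += 1
    # Uncoordinated control: a connection on a server handling X
    # connections gets exactly 1/X; a user's throughput is the sum
    # over its b connections.
    return min(sum(1.0 / load[s] for s in picks) for picks in users)

print(worst_case_rate())
```

The worst-case throughput here is set by users all of whose b choices land on heavily loaded servers, which is precisely the imbalance a coordinated controller can shift traffic away from.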
Figure 2. Load balancing example: there are N servers, aN users, and each user selects b > 1 servers at random.