We introduce the notion of normalized entropic vectors, which differ slightly from the standard definition in the literature in that entropy is normalized by the logarithm of the alphabet size. We argue that this definition is more natural for determining the capacity region of networks and, in particular, that it smooths out the irregularities of the space of non-normalized entropy vectors and renders the closure of the resulting space convex (and compact). Furthermore, the closure of the space remains convex even under constraints imposed by memoryless channels internal to the network. It therefore follows that, for a large class of acyclic memoryless networks, the capacity region for an arbitrary set of sources and destinations can be found by maximizing a linear function over the convex set of channel-constrained normalized entropic vectors subject to some linear constraints. While this may not necessarily make the problem simpler, it certainly circumvents the "infinite-letter characterization" issue, as well as the nonconvexity of earlier formulations, and exposes the core of the problem. We show that the approach allows one to obtain the classical cutset bounds via a duality argument. Furthermore, the approach readily shows that, for acyclic memoryless wired networks, one need only consider the space of unconstrained normalized entropic vectors, thus separating channel and network coding, a result very recently recognized in the literature.
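The central object, the normalized entropic vector, collects the joint entropy of every nonempty subset of the random variables, each divided by the logarithm of the alphabet size. A minimal sketch of that computation for discrete variables over a common alphabet is given below; the helper `normalized_entropy_vector` and the two-bit example are illustrative assumptions, not code from the paper.

```python
import itertools
import math

def normalized_entropy_vector(pmf, alphabet_size):
    """Return {subset: H(X_S) / log(alphabet_size)} for all nonempty subsets S.

    `pmf` maps outcome tuples (x_1, ..., x_n) to probabilities. Dividing each
    subset entropy H(X_S) by log(alphabet_size) bounds the entry by |S|, which
    is what makes the closure of the space of such vectors compact.
    """
    n = len(next(iter(pmf)))          # number of random variables
    log_q = math.log(alphabet_size)
    vec = {}
    for r in range(1, n + 1):
        for subset in itertools.combinations(range(n), r):
            # Marginalize the joint pmf onto the coordinates in `subset`.
            marginal = {}
            for outcome, p in pmf.items():
                key = tuple(outcome[i] for i in subset)
                marginal[key] = marginal.get(key, 0.0) + p
            # Shannon entropy of the marginal, then normalize by log q.
            h = -sum(p * math.log(p) for p in marginal.values() if p > 0)
            vec[subset] = h / log_q
    return vec

# Example: X uniform on {0, 1} and Y = X (fully dependent binary pair).
pmf = {(0, 0): 0.5, (1, 1): 0.5}
v = normalized_entropy_vector(pmf, alphabet_size=2)
# Normalized entries: H(X)/log 2 = 1, H(Y)/log 2 = 1, H(X, Y)/log 2 = 1.
```

For this fully dependent pair the joint entry equals each marginal entry, whereas two independent fair bits would give a joint entry of 2; the capacity question described in the abstract amounts to optimizing a linear functional over the closure of the set of all such vectors.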