Question 1 |

A digital communication system transmits a block of N bits. The probability of error in
decoding a bit is \alpha. The error event of each bit is independent of the error events of
the other bits. The received block is declared erroneous if at least one of its bits
is decoded wrongly. The probability that the received block is erroneous is

N(1-\alpha ) | |

\alpha ^N | |

1-\alpha ^N | |

1-(1-\alpha )^N |

Question 1 Explanation:

Probability of error in decoding a single bit =\alpha

Then the probability of no error in a bit is 1-\alpha.

With N bits transmitted, the probability of no error in the received block is

=(1-\alpha )(1-\alpha )\cdots N \text{ times}

=(1-\alpha) ^{N}

The probability that the received block is erroneous is =1-(1-\alpha) ^{N}

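The formula can be cross-checked numerically: the complement form 1-(1-\alpha)^{N} must agree with summing the binomial probabilities of having at least one bit error (a minimal sketch; the values of N and \alpha are arbitrary choices for illustration):

```python
from math import comb

def block_error_prob(N, alpha):
    """P(block erroneous) = 1 - P(all N bits correct) = 1 - (1 - alpha)**N."""
    return 1 - (1 - alpha) ** N

def block_error_prob_binomial(N, alpha):
    """Same probability, summed over the number of erroneous bits k >= 1."""
    return sum(comb(N, k) * alpha**k * (1 - alpha)**(N - k)
               for k in range(1, N + 1))

N, alpha = 8, 0.1   # illustrative values, not from the question
assert abs(block_error_prob(N, alpha) - block_error_prob_binomial(N, alpha)) < 1e-12
```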

Question 2 |

A linear Hamming code is used to map 4-bit messages to 7-bit codewords. The encoder mapping is linear. If the message 0001 is mapped to the codeword 0000111, and the message 0011 is mapped to the codeword 1100110, then the message 0010 is mapped to

10011 | |

1100001 | |

1111000 | |

1111111 |

Question 2 Explanation:

Since the encoder mapping is linear, codewords combine the same way messages do. The message 0010 = 0011 \oplus 0001, so its codeword is the XOR of the two given codewords: 1100110 \oplus 0000111 = 1100001.
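The linearity argument can be verified directly by XOR-ing the given codewords (a minimal sketch; `xor_bits` is a hypothetical helper for illustration):

```python
def xor_bits(a: str, b: str) -> str:
    """Bitwise XOR of two equal-length binary strings."""
    return "".join(str(int(x) ^ int(y)) for x, y in zip(a, b))

# In a linear code, c(m1 XOR m2) = c(m1) XOR c(m2).
# 0010 = 0011 XOR 0001, so its codeword is the XOR of the two given codewords.
assert xor_bits("0011", "0001") == "0010"
codeword = xor_bits("1100110", "0000111")  # -> "1100001"
```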

Question 3 |

Consider a binary channel code in which each codeword has a fixed length of 5 bits. The
Hamming distance between any pair of distinct codewords in this code is at least 2. The
maximum number of codewords such a code can contain is _________.

15 | |

16 | |

17 | |

18 |

Question 3 Explanation:

According to the Plotkin bound,

\begin{aligned} d_{\min } &\leq \frac{n \, 2^{k-1}}{2^{k}-1}\\ n&= \text{length of each codeword}\\ k&= \text{length of each message word} \\ \text{Given } d_{\min }&=2 \text{ and } n=5,\\ \frac{2^{k}}{2\left(2^{k}-1\right)} &\geq \frac{2}{5} \\ \Rightarrow \frac{2^{k}}{2^{k}-1} &\geq \frac{4}{5} \end{aligned}

This bound is satisfied for every value of k. But since n > k and n is fixed at 5, the maximum value of k that can be selected is 4.

So, the maximum number of codewords possible is 2^{4} = 16.

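One construction that attains 16 codewords is the length-5 even-weight (single-parity-check) code; a quick brute-force check confirms it has minimum pairwise Hamming distance 2 (a minimal sketch, construction chosen for illustration):

```python
from itertools import product

# All 5-bit words of even Hamming weight: the single-parity-check code.
code = [w for w in product((0, 1), repeat=5) if sum(w) % 2 == 0]

def hamming(u, v):
    """Number of positions in which two equal-length tuples differ."""
    return sum(a != b for a, b in zip(u, v))

min_dist = min(hamming(u, v)
               for i, u in enumerate(code) for v in code[i + 1:])
assert len(code) == 16 and min_dist == 2
```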

Question 4 |

Consider a binary memoryless channel characterized by the transition probability diagram shown in the figure.

The channel is

Lossless | |

Noiseless | |

Useless | |

Deterministic |

Question 4 Explanation:

Given that

\left[P\left(\frac{Y}{X}\right)\right]=\left[\begin{array}{ll} 0.25 & 0.75 \\ 0.25 & 0.75 \end{array}\right]

If the mutual information I(X;Y)=0 for every possible input distribution, the channel is called a useless (or zero-capacity) channel.

\begin{aligned} \text{Let } [P(X)]&=[\alpha \quad (1-\alpha)] \\ \text{Then } H(X)&=-\alpha \log _{2} \alpha-(1-\alpha) \log _{2}(1-\alpha)\text{ bits/symbol}\\ [P(Y)]&=[P(X)]\left[P\left(\frac{Y}{X}\right)\right]=[0.25\; \; 0.75] \end{aligned}

\begin{array}{c} [P(X, Y)]=\left[\begin{array}{cc} \frac{\alpha}{4} & \frac{3 \alpha}{4} \\ \frac{(1-\alpha)}{4} & \frac{3(1-\alpha)}{4} \end{array}\right] \\ \left[P\left(\frac{X}{Y} \right ) \right ]=\frac{[P(X,Y)]}{[P(Y)]_{d}}=\left[\begin{array}{cc} \alpha & \alpha \\ (1-\alpha) & (1-\alpha ) \end{array} \right ]\\ H\left(\frac{X}{Y}\right)=-\sum_{i} \sum_{j} P\left(x_{i}, y_{j}\right) \log _{2} P\left(\frac{x_{i}}{y_{j}}\right) \text{ bits/symbol} \\ =-\alpha \log _{2} \alpha -(1-\alpha) \log _{2}(1-\alpha) \text{ bits/symbol} \\ I(X ; Y)=H(X)-H\left(\frac{X}{Y}\right)=0 \end{array}

So, the given binary memoryless channel is a "useless" channel.

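The claim that I(X;Y)=0 for every input distribution can be checked numerically for this transition matrix (a minimal sketch; `mutual_information` is a hypothetical helper, and the values of \alpha are arbitrary test points):

```python
from math import log2

def mutual_information(px, channel):
    """I(X;Y) in bits for input pmf px and row-stochastic matrix P(Y|X)."""
    py = [sum(px[i] * channel[i][j] for i in range(len(px)))
          for j in range(len(channel[0]))]
    I = 0.0
    for i, pxi in enumerate(px):
        for j, pyj in enumerate(py):
            pxy = pxi * channel[i][j]     # joint probability P(x_i, y_j)
            if pxy > 0:
                I += pxy * log2(pxy / (pxi * pyj))
    return I

channel = [[0.25, 0.75], [0.25, 0.75]]   # identical rows: output independent of input
for a in (0.1, 0.3, 0.5, 0.9):
    assert abs(mutual_information([a, 1 - a], channel)) < 1e-12
```

Because both rows of the matrix are identical, the output distribution is [0.25, 0.75] no matter what is transmitted, which is exactly why the capacity is zero.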

Question 5 |

Which one of the following graphs shows the Shannon capacity (channel capacity) in bits of a memoryless binary symmetric channel with crossover probability P?

A | |

B | |

C | |

D |

Question 5 Explanation:

The channel capacity of a memoryless binary symmetric channel can be expressed as,

C=1+p \log _{2} p+(1-p) \log _{2}(1-p)

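Evaluating C(p)=1+p\log_{2}p+(1-p)\log_{2}(1-p) at a few points shows the shape of the correct graph: C=1 at p=0 and p=1, C=0 at p=0.5, and the curve is symmetric about p=0.5 (a minimal sketch):

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Capacity (bits/use) of a binary symmetric channel with crossover p."""
    if p in (0.0, 1.0):      # avoid log2(0); the binary entropy is 0 there
        return 1.0
    return 1 + p * log2(p) + (1 - p) * log2(1 - p)

assert bsc_capacity(0.0) == 1.0
assert abs(bsc_capacity(0.5)) < 1e-12
assert abs(bsc_capacity(0.11) - bsc_capacity(0.89)) < 1e-12  # symmetric about 0.5
```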

Question 6 |

Let (X_{1},X_{2}) be independent random variables. X_{1} has mean 0 and variance 1, while X_{2} has mean 1 and variance 4. The mutual information I(X_{1};X_{2}) between X_{1} and X_{2} in bits is

1 | |

2 | |

3 | |

0 |

Question 6 Explanation:

Mutual information of two random variables is a
measure of the mutual dependence of the two variables.

Given that X_{1} and X_{2} are independent. Hence, I(X_{1};X_{2})=0.

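The same fact can be illustrated with any joint distribution that factorizes into its marginals; here is a discrete example (a minimal sketch; the marginal pmfs are arbitrary choices, not the distributions from the question):

```python
from math import log2

# Independence means p(x, y) = p(x) * p(y), so every log-ratio term vanishes.
px = [0.2, 0.8]
py = [0.4, 0.6]
pxy = [[a * b for b in py] for a in px]  # outer product: independent joint pmf

I = sum(pxy[i][j] * log2(pxy[i][j] / (px[i] * py[j]))
        for i in range(2) for j in range(2))
assert abs(I) < 1e-12
```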

Question 7 |

A voice-grade AWGN (additive white Gaussian noise) telephone channel has a bandwidth of 4.0 kHz and two-sided noise power spectral density \frac{\eta }{2}=2.5 \times 10^{-5} Watt per Hz. If information at
the rate of 52 kbps is to be transmitted over this channel with arbitrarily small bit error rate, then the minimum bit-energy E_{b} (in mJ/bit) necessary is __________

11.25 | |

22.75 | |

31.50 | |

44.50 |

Question 7 Explanation:

\begin{aligned} C&=B \log _{2}\left(1+\frac{S}{N}\right)\\ S&=E_{b} R_{b} \quad\left(E_{b}=\text{bit energy, } R_{b}=\text{information rate in bits/sec}\right)\\ N&=N_{0}B\\ \Rightarrow \quad C&=B \log _{2}\left(1+\frac{E_{b} R_{b}}{N_{0} B}\right) \end{aligned}

For transmission with arbitrarily small bit error rate, C must be at least R_{b}. Setting C=R_{b},

\begin{aligned} \frac{R_{b}}{B}&=\log _{2}\left(1+\frac{E_{b} R_{b}}{N_{0} B}\right)\\ E_{b} &=\left(2^{R_{b}/B}-1\right) \frac{N_{0} B}{R_{b}} \\ &=\left(2^{52/4}-1\right) \times \frac{5 \times 10^{-5} \times 4 \times 10^{3}}{52 \times 10^{3}} \\ E_{b} &\approx 31.5 \mathrm{\ mJ/bit} \end{aligned}

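The final number can be reproduced directly from the capacity relation, using the values stated in the question (note the one-sided PSD is N_{0}=2\times 2.5\times 10^{-5} W/Hz):

```python
B = 4.0e3        # channel bandwidth in Hz
Rb = 52.0e3      # information rate in bits/sec
N0 = 2 * 2.5e-5  # one-sided noise PSD: two-sided N0/2 = 2.5e-5 W/Hz

# Setting C = Rb in C = B*log2(1 + Eb*Rb/(N0*B)) and solving for Eb:
Eb = (2 ** (Rb / B) - 1) * N0 * B / Rb   # joules per bit
print(round(Eb * 1e3, 2))                # Eb in mJ/bit, approximately 31.5
```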

Question 8 |

An analog baseband signal, bandlimited to 100 Hz, is sampled at the Nyquist rate. The samples are quantized into four message symbols that occur independently with probabilities p1 = p4 = 0.125 and p2 = p3. The information rate (bits/sec) of the message source is __________

180.6 | |

90.8 | |

362.2 | |

320.5 |

Question 8 Explanation:

\begin{aligned} f_{m} &=100 \mathrm{\ Hz} \\ f_{s} &=2 f_{m}=200 \text{ samples/sec} \\ P_{1} &=P_{4}=\frac{1}{8}, \quad P_{2}=P_{3} \\ P_{1}+P_{2}+P_{3}+P_{4} &=1 \\ \Rightarrow \quad 2 P_{2}&= 2 P_{3}=1-\frac{1}{4}=\frac{3}{4} \\ P_{2}&= P_{3}=\frac{3}{8} \\ H &=\sum_{i=1}^{4} P_{i} \log_{2} \frac{1}{P_{i}} \\ &=\frac{1}{8} \log _{2} 8+\frac{1}{8} \log _{2} 8+\frac{3}{8} \log _{2} \frac{8}{3}+\frac{3}{8} \log _{2} \frac{8}{3} \\ H &=1.811 \text{ bits/sample}\\ \text{Information rate } R&=H(\text{bits/sample}) \times f_{s}(\text{samples/sec}) \\ &=1.811 \times 200 =362.2 \mathrm{\ bits/sec} \end{aligned}
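The entropy and information rate above can be reproduced numerically (a minimal sketch using the probabilities and Nyquist rate from the question):

```python
from math import log2

probs = [0.125, 0.375, 0.375, 0.125]   # p1 = p4 = 1/8, p2 = p3 = 3/8
fs = 2 * 100                            # Nyquist rate: 200 samples/sec

H = sum(p * log2(1 / p) for p in probs)  # entropy in bits/sample
R = H * fs                               # information rate in bits/sec
assert abs(H - 1.811) < 1e-3
assert abs(R - 362.2) < 0.1
```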

Question 9 |

A binary communication system makes use of the symbols "zero" and "one". There are channel errors. Consider the following events:

x_{0}: a "zero" is transmitted

x_{1} : a "one" is transmitted

y_{0} : a "zero" is received

y_{1} : a "one" is received

The following probabilities are given: P(x_{0})=\frac{1}{2},P(y_{0}|x_{0})=\frac{3}{4}, and P(y_{0}|x_{1})=\frac{1}{2}. The information in bits that you obtain when you learn which symbol has been received (while you know that a "zero" has been transmitted) is ________


0.4 | |

0.8 | |

1.6 | |

2 |

Question 9 Explanation:

We want the information (in bits) obtained on learning which symbol has been received, given that a "zero" has been transmitted. Two cases are possible:

Case-I: y_{0} is received and x_{0} is transmitted

Case-II: y_{1} is received and x_{0} is transmitted

Thus, the average information obtained is

\begin{aligned} H=& P\left(y_{0} \mid x_{0}\right) \log _{2}\left(\frac{1}{P\left(y_{0} \mid x_{0}\right)}\right) \\ &+P\left(y_{1} \mid x_{0}\right) \log _{2}\left(\frac{1}{P\left(y_{1} \mid x_{0}\right)}\right) \\ =&\left[\frac{3}{4} \log _{2}\left(\frac{4}{3}\right)+\frac{1}{4} \log _{2}(4)\right] \text { bits } \\ =& 0.811 \text { bits } \end{aligned}
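This conditional entropy H(Y|x_{0}) can be computed directly from the two conditional probabilities (a minimal sketch):

```python
from math import log2

p_y0_given_x0 = 3 / 4
p_y1_given_x0 = 1 - p_y0_given_x0   # = 1/4

# Average information obtained on learning the received symbol, given x0 sent.
H = (p_y0_given_x0 * log2(1 / p_y0_given_x0)
     + p_y1_given_x0 * log2(1 / p_y1_given_x0))
assert abs(H - 0.811) < 1e-3
```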

Question 10 |

A discrete memoryless source has an alphabet \{a_{1},a_{2},a_{3},a_{4}\}
with corresponding probabilities \{\frac{1}{2},\frac{1}{4},\frac{1}{8},\frac{1}{8}\}. The minimum required average codeword length in bits to represent this source for error-free reconstruction is ________

0.5 | |

1 | |

1.5 | |

1.75 |

Question 10 Explanation:

Minimum required average codeword length in bits for error-free reconstruction:

\begin{aligned} L_{\min } &=H(\text { Entropy }) \\ H=\frac{1}{2} \log _{2} 2+\frac{1}{4} \log _{2} 4 &+\frac{1}{8} \log _{2} 8+\frac{1}{8} \log _{2} 8 \\ &=\frac{1}{2}+\frac{1}{2}+\frac{3}{8}+\frac{3}{8}=1.75 \\ \Rightarrow \quad L_{\text {min }} &=1.75 \mathrm{bits} / \text { word } \end{aligned}

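Because the probabilities are dyadic (negative powers of 2), a Huffman code meets the entropy bound exactly; both the entropy and the average length of such a code can be checked numerically (a minimal sketch; the codeword lengths 1, 2, 3, 3 correspond to a Huffman code such as 0, 10, 110, 111):

```python
from math import log2

probs = [1/2, 1/4, 1/8, 1/8]

H = sum(p * log2(1 / p) for p in probs)          # source entropy in bits
lengths = [1, 2, 3, 3]                            # Huffman codeword lengths
L = sum(p, l := None) if False else sum(p * l for p, l in zip(probs, lengths))
assert abs(H - 1.75) < 1e-12 and L == 1.75        # average length equals entropy
```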

There are 10 questions to complete.