The Channel Coding Theorem, or Shannon's second theorem, gives the maximum rate at which information can be transmitted over a noisy channel with an arbitrarily small probability of error. In practice, the probability of error should be around \(10^{-6}\) or even lower for transmission of the symbols to be considered reliable. Channel coding increases the resistance of a digital communication system to channel noise in order to achieve a high performance level, and it does so by adding redundancy to the code.
For a Discrete Memoryless Source (DMS), there exists a coding scheme such that the source output can be transmitted over the channel and reconstructed with an arbitrarily small probability of error, provided that:
$$ \frac{H(S)}{T_S} \le \frac{C}{T_C} $$
Where:
- \(H(S)\) is the entropy of the DMS with alphabet \(S\)
- \(T_S\) is the time interval at which the source emits symbols
- \(\frac{C}{T_C}\) is the critical rate, where \(C\) is the channel capacity and \(T_C\) is the time taken per channel use; when \(\frac{H(S)}{T_S} = \frac{C}{T_C}\), the system is said to be signalling at the critical rate
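The condition above can be checked numerically. The following sketch uses hypothetical values (a 4-symbol equiprobable source and an assumed channel capacity; none of these numbers come from the text) to compare the source's information rate with the channel's capacity per unit time:

```python
from math import log2

# Hypothetical numbers for illustration only: a DMS with 4 equiprobable
# symbols, so H(S) = log2(4) = 2 bits/symbol.
H_S = log2(4)
T_S = 0.001     # source emits one symbol every 1 ms (assumed)
C = 0.5         # channel capacity in bits per channel use (assumed)
T_C = 0.0002    # one channel use every 0.2 ms (assumed)

source_rate = H_S / T_S    # bits per second produced by the source
channel_rate = C / T_C     # bits per second the channel can carry

# Shannon's condition: reliable transmission is possible if
# H(S)/T_S <= C/T_C
print(source_rate, channel_rate, source_rate <= channel_rate)
```

With these numbers the source produces 2000 bits/s while the channel supports 2500 bits/s, so the condition holds and reliable transmission is possible in principle.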
For a binary symmetric channel with an equiprobable binary source, \(H(S) = 1\) bit/symbol, so the condition above reduces to the following formula:
$$ r \le C $$
Where:
- \(r\) is the code rate of the channel encoder, defined as \(\frac{T_C}{T_S}\)
- \(C\) is the channel capacity
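For a binary symmetric channel, the capacity is \(C = 1 - H_b(p)\), where \(H_b\) is the binary entropy function and \(p\) the crossover probability. A minimal sketch, using assumed timings and an assumed 10% crossover probability, checks the condition \(r \le C\):

```python
from math import log2

def bsc_capacity(p: float) -> float:
    """Capacity of a binary symmetric channel with crossover
    probability p: C = 1 - H_b(p), the binary entropy function."""
    if p in (0.0, 1.0):
        return 1.0
    return 1.0 + p * log2(p) + (1 - p) * log2(1 - p)

# With H(S) = 1 bit/symbol, H(S)/T_S <= C/T_C reduces to
# r = T_C/T_S <= C.  Hypothetical timings for illustration:
T_S = 0.003   # one source bit every 3 ms (assumed)
T_C = 0.001   # one coded bit every 1 ms (assumed) -> r = 1/3
r = T_C / T_S

C = bsc_capacity(0.1)   # assumed 10% crossover probability
print(round(C, 4), r <= C)
```

Here \(C \approx 0.531\) bits per channel use, so a rate-1/3 code satisfies \(r \le C\) and reliable transmission is possible in principle.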
A channel encoder is the device that puts this theorem into practice, introducing controlled redundancy into the source output.
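As a toy illustration of how a channel encoder adds redundancy (the repetition code is just one simple example, not the only scheme), each source bit can be repeated \(n\) times so that a majority-vote decoder can outvote a limited number of channel bit flips:

```python
def repetition_encode(bits, n=3):
    """Toy channel encoder: repeat each bit n times (a rate-1/n code).
    The added redundancy lets the decoder tolerate some bit flips."""
    return [b for bit in bits for b in [bit] * n]

def repetition_decode(coded, n=3):
    """Majority-vote decoder for the repetition code."""
    return [int(sum(coded[i:i + n]) > n // 2)
            for i in range(0, len(coded), n)]

msg = [1, 0, 1]
coded = repetition_encode(msg)          # [1,1,1, 0,0,0, 1,1,1]
coded[1] ^= 1                           # channel noise flips one bit
print(repetition_decode(coded) == msg)  # majority vote corrects it
```

The repetition code has code rate \(r = 1/n\); stronger codes achieve the same error protection at rates much closer to channel capacity.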