
Wikipedia:

Huffman coding is optimal among all methods in any case where each input symbol is a known independent and identically distributed random variable having a probability that is dyadic. Prefix codes, and thus Huffman coding, in particular, tend to have inefficiency on small alphabets, where probabilities often fall between these optimal (dyadic) points.

How can one prove that increasing the alphabet size improves the efficiency of Huffman coding in cases where the probabilities are not dyadic?
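One way to see the effect numerically is to code the k-th extension of a non-dyadic source: treat blocks of k symbols as one super-symbol, build a Huffman code over the enlarged alphabet, and compare the expected length per original symbol against the entropy. A minimal sketch, assuming an i.i.d. binary source with P = {0.6, 0.4}; `huffman_lengths` is an illustrative helper written for this example, not a library function:

```python
import heapq
import itertools
import math

def huffman_lengths(probs):
    # Return the Huffman code length of each symbol, given its probability.
    # Each heap entry is (probability, tiebreak counter, member symbols).
    heap = [(q, i, [i]) for i, q in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    counter = len(probs)
    while len(heap) > 1:
        q1, _, s1 = heapq.heappop(heap)
        q2, _, s2 = heapq.heappop(heap)
        # Merging two subtrees adds one bit to every symbol under them.
        for sym in s1 + s2:
            lengths[sym] += 1
        heapq.heappush(heap, (q1 + q2, counter, s1 + s2))
        counter += 1
    return lengths

p = [0.6, 0.4]  # non-dyadic source
H = -sum(q * math.log2(q) for q in p)  # entropy, ~0.971 bits/symbol

for k in (1, 2, 3, 4):
    # k-th extension: every length-k block becomes one super-symbol.
    block_probs = [math.prod(c) for c in itertools.product(p, repeat=k)]
    L = sum(q, ) if False else sum(
        q * l for q, l in zip(block_probs, huffman_lengths(block_probs)))
    print(f"k={k}: {L / k:.4f} bits per original symbol (entropy {H:.4f})")
```

The source-coding bound for extensions guarantees H ≤ L/k < H + 1/k, so the per-symbol redundancy vanishes as k grows. Note the improvement need not be monotone step by step (for this source k=2 happens to give the same 1.0 bits/symbol as k=1), but the 1/k bound forces convergence to the entropy.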

Update

Following a suggestion by @MarcusMüller, another framing of the question is: why do larger alphabets tend toward a dyadic probability distribution? (And how does this tendency translate into coding efficiency?)
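A sketch of the standard argument (my own summary, not a full proof): a binary Huffman code over positive probabilities is complete, so its lengths $l_i$ satisfy Kraft's inequality with equality, and $q_i = 2^{-l_i}$ is itself a dyadic probability distribution. The redundancy is then exactly a divergence from that dyadic distribution:

$$\bar L - H(P) = \sum_i p_i l_i + \sum_i p_i \log_2 p_i = \sum_i p_i \log_2 \frac{p_i}{q_i} = D(P \,\|\, Q) \ge 0,$$

with equality iff $P$ is already dyadic. For the $k$-th extension of an i.i.d. source, $H(P^k) = kH(P)$ and Huffman's optimality gives $\bar L_k < H(P^k) + 1$, hence per original symbol

$$\frac{D(P^k \,\|\, Q_k)}{k} = \frac{\bar L_k - kH(P)}{k} < \frac{1}{k} \to 0.$$

So the precise sense in which "larger alphabets tend toward dyadic" is that the dyadic distribution implied by the optimal code lengths approaches the true block distribution (in normalized KL divergence), which is the same thing as the per-symbol redundancy vanishing.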

Gideon Genadi Kogan
  • adding the sentence before the sentence you cited, because it directly addresses your question! Not quite sure what the question that arises is: Is it why do larger alphabets tend more towards dyadic distributions, or is it why do dyadic distributions lead to better Huffman coding compression? – Marcus Müller Sep 04 '22 at 07:52
  • @MarcusMüller, please see my edit. The "why do dyadic distributions lead to better Huffman coding compression?" part seems to be straightforward. Since I am not sure that the proof that I am looking for exists, I asked for intuition too. – Gideon Genadi Kogan Sep 04 '22 at 13:00
  • same question as before: is your question a) why do larger alphabets tend more towards dyadic distributions, or b) why do dyadic distributions lead to better Huffman coding compression? – Marcus Müller Sep 04 '22 at 13:13
  • @MarcusMüller, (a) – Gideon Genadi Kogan Sep 04 '22 at 13:57

0 Answers