Region 1 is the larger one.

It includes 80% of the pixels.

And region 2 includes the remaining 20% of the pixels.

Region 1 is dominated by white pixels: 95% of them are white.

So if we compute the entropy of region 1, it is equal to the number shown here.

Region 2 is dominated by black pixels and the corresponding entropy is shown here.

Now the total entropy of the source is the weighted sum

of the two entropies of the regions

where the weights are proportional to the number of pixels in each region.

So in other words the total entropy is 0.8H1 + 0.2H2, and this is the resulting number.

So what we see is that by dividing the document into two regions,

the entropy has been reduced to almost half of the original entropy.
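The calculation can be sketched with assumed figures: the lecture gives 80% of pixels in region 1 and 95% white there, but not the exact black fraction in region 2, so 90% black is assumed below for illustration.

```python
from math import log2

def binary_entropy(p):
    """Entropy in bits/pixel of a binary source with P(white) = p."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * log2(p) - (1 - p) * log2(1 - p)

# Region 1: 80% of the pixels, 95% of them white (from the lecture).
# Region 2: 20% of the pixels, assumed 90% black (not given in the lecture).
w1, p1_white = 0.8, 0.95
w2, p2_white = 0.2, 0.10

H1 = binary_entropy(p1_white)   # entropy of region 1
H2 = binary_entropy(p2_white)   # entropy of region 2

# Total entropy when each region is coded with its own model:
H_split = w1 * H1 + w2 * H2     # 0.8*H1 + 0.2*H2

# Entropy of the whole image coded with a single model:
p_white = w1 * p1_white + w2 * p2_white
H_whole = binary_entropy(p_white)

print(f"H1 = {H1:.3f}, H2 = {H2:.3f}")
print(f"split: {H_split:.3f} bits/pixel, whole: {H_whole:.3f} bits/pixel")
```

With these assumed proportions, the per-region weighted entropy comes out well below the single-model entropy of the whole image, matching the roughly two-fold reduction described above.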

And therefore, the use of multiple coders, which adapt to the context

of the symbol being coded, is beneficial, and

this same idea has been used extensively, of course in a more

refined form that I will explain later, when meta-coders are used.

I should explain here that this is a case of a small alphabet,

a small alphabet with skewed symbol probabilities.

So we know that, for this case, Huffman coding has poor performance,

while a source like this is very well suited to arithmetic coding.
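To see why, consider a binary source with P(white) = 0.95, the region-1 figure from above. A Huffman code for a two-symbol alphabet must spend at least one whole bit per symbol, while the entropy, which an arithmetic coder can approach, is far lower. A minimal sketch:

```python
from math import log2

p = 0.95  # P(white): the skewed probability from region 1

# Entropy: the lower bound on bits/symbol for any code.
H = -p * log2(p) - (1 - p) * log2(1 - p)

# Huffman on a two-symbol alphabet: each symbol gets a 1-bit codeword,
# so the rate is exactly 1 bit/symbol regardless of the skew.
huffman_rate = 1.0

print(f"entropy          : {H:.3f} bits/symbol")
print(f"Huffman rate     : {huffman_rate:.3f} bits/symbol")
print(f"Huffman overhead : {huffman_rate / H:.1f}x the entropy")
```

Here Huffman spends more than three times the theoretical minimum, whereas an arithmetic coder's rate approaches H as the sequence grows.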

A case where context-adaptive arithmetic coding is used is the JBIG standard.

JBIG actually stands for Joint Bi-level Image Experts Group.

So this is a group of experts that established the standard for

the progressive transmission of bi-level images.

JBIG is therefore a combination of progressive transmission and

a lossless coding algorithm.
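The context-adaptive idea can be illustrated with a toy model — a simplification, not the actual JBIG context template or its QM-coder: keep symbol counts per context, estimate P(next bit | context), and charge -log2(p) bits per symbol, which is the rate an adaptive arithmetic coder would approach.

```python
from math import log2
from collections import defaultdict

def adaptive_code_length(bits, order=2):
    """Estimated bits needed by an adaptive arithmetic coder that
    conditions each bit on the previous `order` bits (toy context
    model for illustration, not the actual JBIG template)."""
    counts = defaultdict(lambda: [1, 1])  # Laplace-smoothed counts per context
    total = 0.0
    ctx = (0,) * order                    # pad the initial context with zeros
    for b in bits:
        c0, c1 = counts[ctx]
        p = (c1 if b else c0) / (c0 + c1) # P(this bit | context)
        total += -log2(p)                 # ideal arithmetic-coding cost
        counts[ctx][b] += 1               # adapt the model after coding
        ctx = ctx[1:] + (b,)
    return total

# A run-structured bi-level row: long white and black runs, as in documents.
row = [0] * 60 + [1] * 60
cost = adaptive_code_length(row)
print(f"{cost:.1f} bits for {len(row)} pixels (vs. {len(row)} bits uncoded)")
```

On this run-heavy row the adaptive model needs under 20 bits instead of 120, because inside a run the conditional probability of repeating the previous color quickly approaches 1.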