Most lossless compression programs do two things in sequence: the first step generates a statistical model for the input data, and the second step uses this model to map input data to bit sequences in such a way that "probable" (e.g. frequently encountered) data will produce shorter output than "improbable" data.

The primary encoding algorithms used to produce bit sequences are Huffman coding and arithmetic coding. Arithmetic coding achieves compression rates close to the best possible for a particular statistical model, which is given by the information entropy, whereas Huffman compression is simpler and faster but produces poor results for models that deal with symbol probabilities close to 1.

There are two primary ways of constructing statistical models: in a static model, the data is analyzed and a model is constructed, then this model is stored with the compressed data. This approach is simple and modular, but it has the disadvantage that the model itself can be expensive to store, and that it forces a single model to be used for all the data being compressed, so it performs poorly on files containing heterogeneous data. Adaptive models dynamically update the model as the data is compressed: both the encoder and decoder begin with a trivial model, yielding poor compression of initial data, but as they learn more about the data, performance improves. Most popular types of compression used in practice now use adaptive coders.
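As a concrete illustration of this two-step pipeline, the sketch below builds a static frequency model over the input bytes and then derives Huffman codes from it, so that frequent symbols map to shorter bit sequences. This is a minimal sketch rather than a production codec; the function names and the use of Python's heapq module for the priority queue are implementation choices made here, not part of any particular format.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict[int, str]:
    """Build a static frequency model over `data` and derive Huffman codes.

    Returns a map from byte value to a bit string; frequent bytes receive
    shorter codes, so "probable" input produces shorter encoded output.
    """
    freq = Counter(data)                      # step 1: the statistical model
    # Each heap entry: (frequency, tie_breaker, {symbol: code_so_far})
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                        # degenerate case: one distinct symbol
        return {sym: "0" for sym in heap[0][2]}
    tie = len(heap)
    while len(heap) > 1:                      # step 2: merge the two rarest subtrees
        n1, _, c1 = heapq.heappop(heap)
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

def encode(data: bytes) -> str:
    codes = huffman_codes(data)
    return "".join(codes[b] for b in data)

if __name__ == "__main__":
    sample = b"abracadabra"
    print(huffman_codes(sample))  # 'a', the most frequent byte, gets the shortest code
    print(encode(sample))         # noticeably shorter than 8 bits per input byte
```

An adaptive coder would instead start both encoder and decoder from the same trivial frequency table and update it after every symbol, so no model needs to be stored alongside the compressed data.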
Lossless compression methods may be categorized according to the type of data they are designed to compress. While, in principle, any general-purpose lossless compression algorithm (general-purpose meaning that it can accept any bitstring) can be used on any type of data, many are unable to achieve significant compression on data that is not of the form for which they were designed. Many of the lossless compression techniques used for text also work reasonably well for indexed images.

Statistical modeling algorithms for text (or text-like binary data such as executables) include the Burrows-Wheeler transform, a block-sorting preprocessing step that makes compression more efficient (a sketch follows below).

Techniques designed for images take advantage of their specific characteristics, such as the common phenomenon of contiguous 2-D areas of similar tones.
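To make the block-sorting idea mentioned above concrete, here is a minimal sketch of the Burrows-Wheeler transform and its inverse, assuming a sentinel character that never occurs in the input and using the naive sort-all-rotations construction rather than the suffix-array methods used in practice. The transform itself does not compress; it reorders the block so that characters from similar contexts end up adjacent, which makes a subsequent run-length or entropy coding stage more effective.

```python
def bwt(text: str, sentinel: str = "\x00") -> str:
    """Burrows-Wheeler transform via the naive rotation sort.

    Assumes `sentinel` does not appear in `text`; it marks the end of the
    block so that the transform can be inverted.
    """
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))  # block sorting
    return "".join(rot[-1] for rot in rotations)              # last column

def inverse_bwt(transformed: str, sentinel: str = "\x00") -> str:
    """Invert the transform by repeatedly prepending the last column and sorting."""
    table = [""] * len(transformed)
    for _ in range(len(transformed)):
        table = sorted(transformed[i] + table[i] for i in range(len(transformed)))
    original = next(row for row in table if row.endswith(sentinel))
    return original.rstrip(sentinel)

if __name__ == "__main__":
    block = "banana bandana"
    out = bwt(block)
    print(repr(out))                  # similar characters tend to cluster together
    assert inverse_bwt(out) == block  # the transform is fully reversible
```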