Science Journal of Education
Volume 3, Issue 4-1, August 2015, Pages: 17-20

1. Introduction

Compression is a process intended to yield a compact digital representation of a signal. In the literature, the terms bit rate reduction, data compression, signal compression, and bandwidth compression are all used to refer to this process. Signal compression is a key technology for multimedia applications. Many current applications treat sound, images, and even video as uncompressed bit-streams, but this situation will change as their use becomes widespread. The central problem of compression is to minimize the bit rate of the digital representation. In recent years, there have been significant advances in algorithms and architectures for the processing of image, video, and audio signals. These developments have proceeded along several directions. From the algorithmic point of view, new techniques and methods have been developed to reduce the size of image, video, and audio data. Such methods are indispensable in many applications that manipulate and store digital and medical data.

Informally, we refer to this process of size reduction as compression. One of the exciting prospects of these advances is that multimedia information incorporating image, video, and audio has the potential to become just another data type. This implies that multimedia information will be digitally encoded so that it can be stored and transmitted along with other digital data types. For such data usage to become widespread, it is essential that the data encoding is standard across different platforms and applications. Furthermore, standardisation can lead to the development of cost-effective implementations, which in turn will promote the widespread use of multimedia information. This is the primary motivation behind the emergence of image and video compression standards.

The main idea of frame compression algorithms is to compress video frames before they are stored into external memory and to decompress them after they are fetched. Adopting frame compression algorithms has two advantages. First, they can be easily integrated into various video coding systems, since they are independent of memory types and video coding standards. Second, they reduce the data access cycles rather than the DRAM penalty cycles, and are therefore orthogonal to previous works that reduce the DRAM penalty cycles. Frame compression algorithms can be either lossy or lossless. Lossy algorithms can guarantee a target data reduction ratio (DRR) at the expense of video quality loss. Lossless algorithms, on the other hand, preserve video quality. As video resolution increases and display technology advances, video quality becomes increasingly important; therefore, a growing number of studies propose lossless frame compression algorithms.
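As a toy illustration of this store/fetch idea (our own sketch, not taken from any cited work), the Python snippet below wraps an in-memory store so that frames are compressed on write and decompressed on read. zlib stands in for a real lossless frame codec, the class and method names are hypothetical, and the DRR helper uses one common convention (original size divided by compressed size).

    import zlib

    class CompressedFrameBuffer:
        """Stand-in for compressing frames into / out of external memory."""

        def __init__(self):
            self._store = {}  # frame_id -> compressed bytes ("DRAM" stand-in)

        def write_frame(self, frame_id, raw_bytes):
            # Compress on the way into "external memory".
            self._store[frame_id] = zlib.compress(raw_bytes)

        def read_frame(self, frame_id):
            # Decompress on the way out; lossless, so bytes round-trip exactly.
            return zlib.decompress(self._store[frame_id])

        def data_reduction_ratio(self, frame_id, original_size):
            # One common DRR convention: original size / compressed size.
            return original_size / len(self._store[frame_id])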

Since computers play a vital role in solving complex problems of daily life, they have found applications in medical diagnostics, including magnetic resonance imaging (MRI) and computed tomography (CT). These diagnostic machines produce their results as sequences of digital images. Such sequences require large disk space for storage and take considerable time to transmit over a network. Image compression is a solution to this problem.

There are many state-of-the-art algorithms, such as CALIC, LOCO-I, JPEG-LS, and JPEG 2000, that compress images in such a way that no information is lost after decompression. However, these algorithms deal only with single images and do not exploit the correlation among the frames of a sequence, which yields a very low compression ratio.

A common characteristic of most images is that neighbouring pixels are correlated and therefore contain redundant information. The primary task is to find a less correlated representation of the image. Two fundamental components of image or data compression are redundancy reduction and irrelevancy reduction.

a. Redundancy reduction aims at removing duplication from the signal source (image/video).

b. Irrelevancy reduction omits parts of the signal that will not be noticed by the signal receiver, namely the human visual system.

In an image, three types of redundancy are exploited in order to compress the file size of the data. They are:

a. Coding redundancy: fewer bits are used to represent frequently occurring symbols.

b. Inter-pixel redundancy: neighbouring pixels have almost the same value (illustrated by the sketch after this list).

c. Psychovisual redundancy: the human visual system cannot simultaneously distinguish all colours.
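As a small numerical illustration of the first two redundancies (our own sketch, not from the cited literature), the Python snippet below builds a smooth synthetic signal, predicts each value from its left neighbour, and compares the empirical entropy of the raw values with that of the prediction residuals; the residuals need far fewer bits per symbol, which is exactly what predictive coders exploit.

    import numpy as np

    def entropy_bits(values):
        # Empirical zeroth-order entropy in bits per symbol.
        _, counts = np.unique(values, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # Synthetic smooth "scanline": neighbouring pixels are highly correlated.
    rng = np.random.default_rng(0)
    row = np.cumsum(rng.integers(-2, 3, size=10000)) % 256

    residuals = np.diff(row)  # predict each pixel by its left neighbour
    print("raw pixels: %.2f bits/symbol" % entropy_bits(row))
    print("residuals:  %.2f bits/symbol" % entropy_bits(residuals))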

2. Framework on the Proposed Work

Compressing an image or video is significantly different from compressing raw binary data. General-purpose compression algorithms can of course be used to compress images, but the result is less than optimal, because images have certain statistical properties that can be exploited by encoders specifically designed for them. Moreover, in lossy settings some of the finer details in the image can be sacrificed to save a little more bandwidth or storage space: images need not always be reproduced exactly, and an approximation of the original image is sufficient for many purposes, as long as the error between the original and the compressed image is tolerable. The proposed algorithm introduced in this paper overcomes these difficulties and achieves a higher compression ratio.

2.1. Analysing the Video Frames

Most compression programs use a single-file compression technique and do not exploit the correlation between images. The aim of our proposed work is to improve the compression ratio by using inter-frame correlation. In our proposed work, the super-spatial structure prediction (SSP) algorithm is used for compressing the image/video. An MRI video is given as the input, and the input video is split into a number of frames. The first frame is always compressed using the SSP algorithm, and the output file of the SSP algorithm is further compressed using the Huffman coding scheme. Once the first frame is compressed, the second frame becomes the current frame and the first frame becomes its reference frame. The workflow for lossless image compression is described in Figure 1 and sketched in code below. In the existing state-of-the-art algorithms, the neighbourhood constraints limit the compression efficiency. Super-spatial structure prediction is used to find the optimal prediction of structure components within the encoded images.

Figure 1. Flow process for lossless image compression.
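The following Python sketch captures only the control flow of this pipeline. ssp_compress, huffman_encode, and motion_compensate_residual are hypothetical placeholders for the SSP coder, the Huffman coder, and the inter-frame stage; the inter-frame path is simplified relative to the full method described in Section 2.3.

    def compress_sequence(frames, ssp_compress, huffman_encode,
                          motion_compensate_residual):
        """Frame-level flow: intra-code the first frame, then code each
        later frame against the previous one as its reference."""
        bitstream = []
        reference = None
        for frame in frames:
            if reference is None:
                # First frame: intra-coded with SSP, then entropy coded.
                payload = huffman_encode(ssp_compress(frame))
            else:
                # Later frames: predict from the reference frame and code
                # only the residual (simplified inter-frame path).
                residual = motion_compensate_residual(frame, reference)
                payload = huffman_encode(ssp_compress(residual))
            bitstream.append(payload)
            reference = frame  # current frame becomes the next reference
        return bitstream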

2.2. Performing the Block Matching Using IDs

Generally speaking, one key characteristic of medical imaging is the limited difference between frames. In order to reduce the number of search points, we use the SDSP as the primary search shape. The steps resemble the traditional diamond search (DS), as the method consists of the same two basic search patterns (LDSP and SDSP) [5]. The entire process is represented in Figures 2 and 3; the algorithm is detailed below, and a code sketch follows Figure 4:

Step 1: First, use the SDSP as the search shape and check all five points (including the central point), calculating the BDM for each point individually. If the minimum block distortion (MBD) falls on the central point, proceed to Step 3; if not, go to Step 2.

Step 2: Use the point where the MBD was found as the new central point and perform another SDSP search. Likewise, determine the BDM for each point. If the MBD appears at the central point, go to Step 3; if not, repeat Step 2 until the MBD falls on the central point.

Step 3: Now use the LDSP as the search shape instead to locate the MBD. The point where the MBD lies gives the motion vector.

Figure 2. Small Diamond Search Pattern.

The LDSP is a large diamond search pattern formed from a total of nine points: a central point surrounded by eight points. The SDSP is a small five-point pattern comprising a central point surrounded by four points [5]. A block distortion measure is needed to decide whether two blocks match each other.

Figure 3. Large Diamond Search Pattern.

We calculate the block distortion measure (BDM) between the N x N current block C and a displaced N x N reference block R as the mean square error (MSE):

MSE = (1 / N^2) * sum_{i=1}^{N} sum_{j=1}^{N} [C(i, j) - R(i, j)]^2

Figure 4. Flow chart for inverse diamond search.
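The sketch below is one straightforward Python reading of Steps 1-3 (our illustration, not the authors' implementation): the SDSP is refined until the MBD settles at the centre, after which a single LDSP pass selects the motion vector, with the BDM computed as the MSE defined above.

    import numpy as np

    SDSP = [(0, 0), (0, 1), (0, -1), (1, 0), (-1, 0)]           # 5 points
    LDSP = [(0, 0), (0, 2), (0, -2), (2, 0), (-2, 0),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]                 # 9 points

    def bdm(cur, ref, bx, by, dx, dy, n):
        """BDM: MSE between the current block at (bx, by) and the
        reference block displaced by (dx, dy)."""
        x, y = bx + dx, by + dy
        if x < 0 or y < 0 or x + n > ref.shape[0] or y + n > ref.shape[1]:
            return np.inf  # candidate block falls outside the frame
        cur_blk = cur[bx:bx + n, by:by + n].astype(np.float64)
        ref_blk = ref[x:x + n, y:y + n].astype(np.float64)
        return float(np.mean((cur_blk - ref_blk) ** 2))

    def inverse_diamond_search(cur, ref, bx, by, n=8):
        cx, cy = 0, 0                                # search centre (offset)
        while True:                                  # Steps 1-2: SDSP loop
            costs = {(dx, dy): bdm(cur, ref, bx, by, cx + dx, cy + dy, n)
                     for dx, dy in SDSP}
            best = min(costs, key=costs.get)
            if best == (0, 0):                       # MBD at centre -> Step 3
                break
            cx, cy = cx + best[0], cy + best[1]      # re-centre and repeat
        # Step 3: one LDSP pass around the converged centre.
        costs = {(dx, dy): bdm(cur, ref, bx, by, cx + dx, cy + dy, n)
                 for dx, dy in LDSP}
        best = min(costs, key=costs.get)
        return (cx + best[0], cy + best[1])          # motion vector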

2.3. Performing Super Spatial Structure Prediction

Motion estimation and motion compensation are performed on these two frames. After motion estimation and compensation, the image difference is computed from the intensity of each pixel in the reference block and the current block. We then use a block-based image classification method to classify the image blocks into structure and non-structure blocks. Structure blocks are compressed using SSP, and non-structure blocks are compressed using the CALIC algorithm [1]. We perform GAP prediction on the original image and calculate the prediction error for each block; if the prediction error is greater than a threshold value, the block is considered a structure block, otherwise it is a non-structure block (a sketch of this rule follows). The LZ8 compression algorithm is then applied to achieve a good compression ratio while also reducing resource usage, and the SSP algorithm exploits the correlation in the images.
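The snippet below sketches this classification rule in Python (an illustration under stated assumptions, not the paper's code): a simple left/top-average predictor stands in for the full GAP predictor, and the block size and threshold are arbitrary placeholder values.

    import numpy as np

    def classify_blocks(image, block=8, threshold=100.0):
        """Label each block 'structure' (route to SSP) or 'non-structure'
        (route to CALIC) by its mean squared prediction error."""
        img = image.astype(np.float64)
        # Stand-in predictor: average of the left and top neighbours.
        pred = np.zeros_like(img)
        pred[1:, 1:] = (img[1:, :-1] + img[:-1, 1:]) / 2.0
        pred[0, :] = img[0, :]   # copy borders so they contribute no error
        pred[:, 0] = img[:, 0]
        err = (img - pred) ** 2

        labels = {}
        h, w = img.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                mse = err[i:i + block, j:j + block].mean()
                labels[(i, j)] = "structure" if mse > threshold else "non-structure"
        return labels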

2.4. Results and Discussions

To evaluate the performance of the proposed algorithm, we tested it on a single image and on a sequence of images. The results were also compared with JPEG-LS and JPEG 2000; the compression ratio and bits per pixel are shown in Table 1 (the two metrics are computed as in the sketch after the table). The results show that the proposed algorithm produces improved results, achieves a higher compression ratio, and exploits the correlation in the images.

Table 1. Compression ratio for MRI sequences.

METHOD               Compression Ratio
JPEG2000             2.59
JPEG-LS              2.72
PROPOSED             7
Gain over JPEG2000   141%
Gain over JPEG-LS    129.4%
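For reference, the two quantities reported above are conventionally computed as follows; the numbers in the usage example are illustrative only, chosen to show a ratio near the proposed method's value of 7.

    def compression_ratio(original_bytes, compressed_bytes):
        # Original size divided by compressed size.
        return original_bytes / compressed_bytes

    def bits_per_pixel(compressed_bytes, width, height):
        # Compressed size in bits divided by the number of pixels.
        return 8.0 * compressed_bytes / (width * height)

    # e.g. an 8-bit 256x256 MRI slice (65536 bytes) compressed to 9362 bytes:
    print(compression_ratio(65536, 9362))   # ~7.0
    print(bits_per_pixel(9362, 256, 256))   # ~1.14 bits per pixel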

3. Conclusion and Future Enhancement

A high memory bandwidth requirement for accessing display frames usually degrades the performance of a video decoder, so a high-performance lossless frame compression algorithm is used to reduce the size of display frames. The inverse diamond search algorithm reduces the time taken for the block matching process. Since the purpose of this algorithm is to exploit the correlation among frames to achieve a higher compression ratio, we tested it on the same MRI images used in the experiment. The results of other algorithms tested on these images were also considered, and the comparison shows that the proposed algorithm yields better results than the previous work. In future work, fast and efficient algorithms can be developed to reduce the complexity of super-spatial prediction. The LZ8 algorithm can also be applied with other compression methods, enabling the compression to be repeated more than three times and providing a high compression ratio for medical images.


References

  1. X. Wu and N. Memon, "Context-based adaptive lossless image coding", IEEE Trans. Commun., vol. 45, no. 4, pp. 437-444, Apr. 1997.
  2. A. Singla and K. Singla, "A new lossless compression scheme for medical images", EXCEL International Journal of Multidisciplinary Management Studies (EIJMMS), vol. 3, no. 7, July 2013.
  3. M. Moorthi and Dr. Amudha, "A near lossless compression method for medical images", IEEE International Conference on Advances in Engineering, Science and Management (ICAESM-2012), March 2012.
  4. S. J. Pinto and J. P. Gawande, "Performance analysis of medical image compression techniques", IEEE Transactions on Image Processing, 2012.
  5. S. Zhu and K. K. Ma, "A new diamond search algorithm for fast block-matching motion estimation", IEEE Trans. Image Process., vol. 9, pp. 287-290, 2000.
  6. L. M. Po, C. W. Ting, K. M. Wong, and K. H. Ng, "Novel point-oriented inner searches for fast block motion estimation", IEEE Trans. Multimedia, vol. 9, pp. 9-15, 2007.
  7. P. Franti, "A fast and efficient compression method for binary images", 1993.
  8. B. Brindha and G. Raghuraman, "Region based lossless compression for digital images in telemedicine application", International Conference on Communication and Signal Processing, April 2013.
  9. P. Sudha, "Image compression with scalable ROI using adaptive Huffman coding", IJCSMC, vol. 2, issue 4, 2012.
  10. R. Rajeshwari and R. Rajesh, "WBMP Compression", International Journal of Wisdom Based Computing, vol. 1, no. 2, 2011.
