Compressing Color Computer-Generated Hologram Using Gradient Optimized Quantum-Inspired Neural Network

Abstract: In existing electronic communication systems, fast transmission of three-dimensional image information requires the compression and encoding of holographic images. In this paper


Introduction
In computer holography, computer-generated holograms (CGHs) contain a large amount of information about three-dimensional (3D) objects, including the amplitude and phase information of the 3D object. Therefore, to accelerate the transmission of CGHs and achieve real-time 3D image display, the CGHs should be compressed and transmitted quickly. At the same time, the object information recorded in the CGHs should not be destroyed during transmission [1].
Several studies on digital hologram compression have been reported. In Ref. [2], a back-propagation (BP) neural network was used to compress CGHs, offering a more flexible compression ratio and higher reproduction quality than traditional encoding methods. In Ref. [3], compressive sensing was also applied to compress CGHs.
Inspired by the principles of quantum computing, the quantum-inspired neural network (QINN) [4,5] has been used in the field of image processing. The QINN uses the principle of quantum superposition states to provide strong data parallelism: an N-qubit register can store 2^N states of information, which expands the representation and optimization space and enhances the nonlinear ability of the network. In Refs. [6-11], researchers have demonstrated the superior performance of the QINN in function optimization and image quality restoration. In Ref. [1], the QINN is used to compress grayscale Fresnel CGHs, and the method requires fewer training epochs than the BP neural network. In Ref. [12], the initial values of the quantum-inspired neural network are optimized to accelerate the convergence of the network and, at the same time, improve the quality of the reconstructed images.
Previous works have applied the quantum-inspired neural network to ordinary images and grayscale CGHs, but this work focuses on applying the QINN to compress and reconstruct color CGHs. Different from grayscale CGHs, the data amount of a color CGH is three times that of a grayscale CGH, and when reconstructing the original object information, each channel must achieve comparable reconstruction quality to avoid chromatic aberration. Although the QINN has many advantages over ordinary neural networks, the QINN itself still needs further optimization: the gradient descent algorithm determines the speed and convergence of model training. Therefore, it is necessary to go beyond the stochastic gradient descent (SGD) method and find the most suitable gradient optimization algorithm for the QINN.
In this paper, double-phase CGHs at different wavelengths are obtained by the angular spectrum method (ASM) in the RGB color space [13]. The color CGHs are compressed and reconstructed using the optimized QINN. The convergence speed of the network, the quality of the decompressed holograms, and the quality of the reconstructed original color images are evaluated.

Band-Limited Double-Phase Computer-Generated Hologram
Following Refs. [14,15], the band-limited ASM is adopted in this paper. This method solves the sampling problem in the ASM by limiting the bandwidth and truncating unnecessary high-frequency signals in the input source field, thereby avoiding the aliasing error of the transfer function.
Let $u(x, y, z)$ denote the complex amplitude distribution of the light field on the observation screen. It can be expressed as the convolution of the complex amplitude distribution $u(x, y, 0)$ of the light field on the back surface of the diffraction screen with the system impulse response $h(x, y, z)$. Using the convolution theorem, this can be written as
$$u(x, y, z) = F^{-1}\left\{ F[u(x, y, 0)]\, H(u, v) \right\}, \quad H(u, v) = \exp\left( i 2\pi z \sqrt{1/\lambda^2 - u^2 - v^2} \right),$$
where $F$ denotes the Fourier transform, and $u$, $v$, and $w = \sqrt{1/\lambda^2 - u^2 - v^2}$ are the Fourier frequencies in the $x$, $y$, and $z$ directions, respectively. The hologram is obtained from the resulting complex field $u(x, y, z)$. To avoid aliasing error, the range of the transfer function must be limited. According to the Nyquist theorem, the frequency range should satisfy
$$|u| \le \frac{1}{\lambda\sqrt{(2\Delta u\, z)^2 + 1}}, \quad |v| \le \frac{1}{\lambda\sqrt{(2\Delta v\, z)^2 + 1}},$$
where $\Delta u$ and $\Delta v$ are the sampling intervals in the frequency domain [14]. The CGH can then be defined by its amplitude $A(x, y)$ and phase $\varphi(x, y)$.
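The band-limited ASM propagation described above can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' implementation: the function name `asm_propagate`, the square sampling grid, and the exact form of the band limit (taken from the standard band-limited ASM of Ref. [14]) are assumptions.

```python
import numpy as np

def asm_propagate(u0, wavelength, z, pitch):
    """Band-limited angular-spectrum propagation (illustrative sketch)."""
    ny, nx = u0.shape
    # Fourier frequencies in the x and y directions (sampling interval = pitch)
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Longitudinal frequency w = sqrt(1/lambda^2 - u^2 - v^2); evanescent waves clipped
    w_sq = 1.0 / wavelength**2 - FX**2 - FY**2
    w = np.sqrt(np.maximum(w_sq, 0.0))
    H = np.exp(1j * 2 * np.pi * z * w)
    # Band limit from the Nyquist condition to suppress aliasing of H
    f_limit_x = 1.0 / (wavelength * np.sqrt((2 * z / (nx * pitch))**2 + 1))
    f_limit_y = 1.0 / (wavelength * np.sqrt((2 * z / (ny * pitch))**2 + 1))
    H = H * ((np.abs(FX) <= f_limit_x) & (np.abs(FY) <= f_limit_y))
    # Convolution theorem: filter the angular spectrum, then transform back
    return np.fft.ifft2(np.fft.fft2(u0) * H)
```

At $z = 0$ the transfer function reduces to unity inside the band limit, so the input field is returned unchanged, which is a convenient sanity check.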
The amplitude information of the CGH is encoded into the phase information, which can be expressed as
$$H(x, y) = W_1(x, y)\,\theta_1(x, y) + W_2(x, y)\,\theta_2(x, y), \quad \theta_{1,2}(x, y) = \varphi(x, y) \pm \cos^{-1}[A(x, y)],$$
where $W_1$ and $W_2$ are gratings in the shape of complementary checkerboards of size $x \times y$; thus a double-phase CGH contains both the amplitude and phase information of the object light waves [14].
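A minimal sketch of the double-phase encoding above, assuming the common decomposition $\theta_{1,2} = \varphi \pm \arccos(A)$ with complementary checkerboard masks; the function name, normalization, and mask convention are illustrative, not the paper's code.

```python
import numpy as np

def double_phase_encode(u):
    """Encode a complex field into a phase-only double-phase CGH (sketch)."""
    A = np.abs(u)
    A = A / A.max()              # normalize amplitude to [0, 1] so arccos is defined
    phi = np.angle(u)
    theta1 = phi + np.arccos(A)  # the two phase components whose average carries A
    theta2 = phi - np.arccos(A)
    ny, nx = u.shape
    yy, xx = np.indices((ny, nx))
    W1 = (xx + yy) % 2 == 0      # complementary checkerboard gratings W1, W2 = ~W1
    return np.where(W1, theta1, theta2)   # real-valued phase-only hologram
```

Interleaving works because $e^{i\theta_1} + e^{i\theta_2} = 2A\,e^{i\varphi}$, so a low-pass filter over neighboring pixels recovers the complex field up to a constant.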

Compressing Color CGH with Gradient Optimized QINN

Quantum-Inspired Neural Network
In quantum computing theory, a quantum state can be described with a complex number as $f(\theta) = \cos\theta + i\sin\theta$. The quantum-inspired neural network is composed of quantum neurons, whose computational process [1] is as follows:
$$u = \sum_{l=1}^{L} f(\theta_l)\, f(x_l) - f(\lambda), \quad y = \frac{\pi}{2} g(\delta) - \arg(u), \quad O = f(y),$$
where $g(x) = 1/[1 + \exp(-x)]$, $\arg(\cdot)$ denotes the phase of a complex number, $L$ is the number of inputs of the quantum neuron, and $O$ is the output of the quantum neuron. The phase parameters $\theta$ and the threshold $\lambda$ correspond to one-bit rotation gates, and the reversal parameter $\delta$ corresponds to the two-bit controlled-NOT gate.
The input data of the quantum-inspired neural network are first normalized to [0, 1] and then converted to the phase range [0, π/2] of a quantum state, so the data of the input layer can be expressed as $x_l = \frac{\pi}{2} I_l$, where $I_l$ is the normalized input value. The output value of the $n$th quantum neuron in the last layer is expressed as the probability of $|1\rangle$, i.e., $O_n = \sin^2(y_n)$.
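The forward pass of a single qubit neuron can be sketched as follows. This is a reconstruction from the qubit-neuron model cited as Ref. [1]; the function names, parameter shapes, and the $\sin^2$ output readout are assumptions consistent with the equations above, not the authors' code.

```python
import numpy as np

def qubit_state(theta):
    """f(theta) = cos(theta) + i*sin(theta), the phase state of a qubit."""
    return np.exp(1j * theta)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def quantum_neuron(x_phases, theta, lam, delta):
    """Forward pass of one qubit neuron (sketch).

    x_phases: input phases in [0, pi/2]; theta: per-input rotation-gate phases;
    lam: threshold phase (rotation gate); delta: reversal parameter (CNOT-like gate).
    """
    u = np.sum(qubit_state(theta) * qubit_state(x_phases)) - qubit_state(lam)
    y = (np.pi / 2) * sigmoid(delta) - np.angle(u)
    return np.sin(y) ** 2    # probability of measuring |1>

# Input data normalized to [0, 1] is mapped to quantum phases in [0, pi/2]:
x = (np.pi / 2) * np.random.rand(64)
out = quantum_neuron(x, theta=np.zeros(64), lam=0.1, delta=0.0)
```

The output is always a probability in [0, 1], which is what lets the decompressed hologram pixels be read off the output layer directly.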
The training loss is the mean square error between the network output and the target:
$$E = \frac{1}{2B}\sum_{b=1}^{B}\sum_{n=1}^{N}\left(t_n^{b} - O_n^{b}\right)^2,$$
where $B$ is the number of sub-image blocks, $N$ is the number of output neurons, and $t_n^{b}$ and $O_n^{b}$ are the target and actual output values of the $n$th neuron for the $b$th block.

Quantum-Inspired Neural Network with Gradient Descent Optimization Algorithms
In the SGD algorithm, a larger learning rate makes the QINN gradient decrease quickly but may overshoot the minimum point of the loss function, whereas a smaller learning rate makes the QINN converge slowly. Therefore, several experiments are needed to choose an appropriate learning rate. At a given learning rate, the threshold parameters of all quantum gates are updated synchronously, but different data characteristics may require different gradient update magnitudes, which SGD cannot provide. In addition, a key issue of SGD training is that the network may be trapped at a saddle point and fail to escape, so it cannot reach the minimum point of the error function.
In the backpropagation process of the quantum-inspired neural network, the update of a quantum neuron parameter can be expressed as
$$\theta \leftarrow \theta - \eta \frac{\partial E}{\partial \theta},$$
where $\eta$ is the learning rate. In this paper, the Momentum, Adagrad, Adadelta, and Adam algorithms [16,17] are used to analyze the gradient descent of the QINN. The parameter $\theta$ of the phase-shift quantum gate $R(\theta)$ is taken as an example to introduce the update; the parameters $\gamma$ and $\delta$ of the controlled-NOT gate $U(\gamma)$ are updated in the same way as $\theta$. Momentum is an optimization algorithm that suppresses gradient oscillation and accelerates gradient descent in the relevant direction of the neural network. The parameters are updated according to
$$v_t = \gamma v_{t-1} + \eta \nabla_{\theta} E, \quad \theta \leftarrow \theta - v_t,$$
where $\gamma$ is usually set to 0.9 and $v_t$ is the exponentially weighted average of the gradient.
Adagrad adaptively adjusts the learning rate for each quantum rotation-gate parameter. It does not require manual tuning of the learning rate, and $\eta = 0.01$ is adopted in most tasks. The historical gradient information is considered when updating:
$$\theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{G_t + \varepsilon}}\, g_t,$$
where $G_t$ is the sum of the squares of the historical gradients and $\varepsilon$ is a smoothing factor with value $10^{-8}$.
Adadelta reduces the monotonically decreasing learning rate of Adagrad by restricting the accumulation window of past gradients to a fixed size instead of accumulating all past squared gradients. The moving average of the squared gradients at each step depends only on the average of the previous step and the current gradient:
$$E[g^2]_t = \rho\, E[g^2]_{t-1} + (1 - \rho)\, g_t^2,$$
where $E$ denotes the mathematical expectation (running average).
Adam is another algorithm that computes an adaptive learning rate for each parameter. In addition to storing a moving average of the squared gradients, Adam also maintains an exponentially decaying average of past gradients. It is essentially the Adadelta algorithm with a momentum term, and after bias correction the effective learning rate of each step varies within a bounded range.
The $m_t$ and $v_t$ are the biased first-order and second-order moment estimates of the gradient:
$$m_t = \beta_1 m_{t-1} + (1 - \beta_1)\, g_t, \quad v_t = \beta_2 v_{t-1} + (1 - \beta_2)\, g_t^2,$$
where $\beta_1$ and $\beta_2$ are the exponential decay rates of the first-order and second-order moment estimates, respectively, with values in [0, 1]. To control the range of variation of the learning rate, the bias-corrected moment estimates are used in the update:
$$\hat{m}_t = \frac{m_t}{1 - \beta_1^t}, \quad \hat{v}_t = \frac{v_t}{1 - \beta_2^t}, \quad \theta_{t+1} = \theta_t - \frac{\eta}{\sqrt{\hat{v}_t} + \varepsilon}\, \hat{m}_t.$$
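The four update rules above can be sketched as single-step functions on a generic parameter. This is a textbook-style sketch with standard default hyperparameters, not the paper's training code; the function names and state-passing convention are assumptions.

```python
import numpy as np

def momentum_step(theta, grad, v, eta=0.01, gamma=0.9):
    v = gamma * v + eta * grad            # exponentially weighted gradient average
    return theta - v, v

def adagrad_step(theta, grad, G, eta=0.01, eps=1e-8):
    G = G + grad**2                       # accumulated squared gradients G_t
    return theta - eta * grad / (np.sqrt(G) + eps), G

def adadelta_step(theta, grad, Eg2, Edx2, rho=0.95, eps=1e-8):
    Eg2 = rho * Eg2 + (1 - rho) * grad**2           # running average of g^2
    dx = -np.sqrt(Edx2 + eps) / np.sqrt(Eg2 + eps) * grad
    Edx2 = rho * Edx2 + (1 - rho) * dx**2           # running average of updates
    return theta + dx, Eg2, Edx2

def adam_step(theta, grad, m, v, t, eta=0.001, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad          # biased first-moment estimate
    v = b2 * v + (1 - b2) * grad**2       # biased second-moment estimate
    m_hat = m / (1 - b1**t)               # bias correction, t starts at 1
    v_hat = v / (1 - b2**t)
    return theta - eta * m_hat / (np.sqrt(v_hat) + eps), m, v
```

In the QINN each of $\theta$, $\lambda$, $\gamma$, and $\delta$ would carry its own optimizer state, which is what lets different quantum-gate parameters receive different effective update magnitudes.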

Compressing Color Computer-Generated Hologram
A three-layer QINN structure [18-20] is constructed to compress color CGHs. The number of hidden neurons (e.g., K) is smaller than the number of input neurons (e.g., L), and the number of input neurons equals the number of output neurons. The color CGH is compressed from the input layer to the hidden layer and decompressed from the hidden layer to the output layer: the output of the hidden layer is the compressed color CGH, and the output of the output layer is the decompressed color CGH.
First, the double-phase color CGH is computed in RGB space. In the data preprocessing stage, the color CGH is split into its three RGB channels. The CGHs (e.g., X×Y pixels) are normalized from [0, 255] to [0, 1] and divided into sub-image blocks (e.g., x×y pixels, B = (X×Y)/(x×y)). The sub-image blocks are then reshaped into column vectors (e.g., L×1, with L = x×y) as the final network input.
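The preprocessing steps above can be sketched with NumPy reshaping. The function name `to_blocks` and the row-major block ordering are assumptions for illustration.

```python
import numpy as np

def to_blocks(cgh, bx=8, by=8):
    """Split a normalized CGH channel (X x Y) into column vectors of length bx*by."""
    X, Y = cgh.shape
    blocks = (cgh.reshape(X // bx, bx, Y // by, by)
                 .transpose(0, 2, 1, 3)      # group the bx x by tiles together
                 .reshape(-1, bx * by))      # one flattened tile per row
    return blocks.T   # shape (L, B): L = bx*by inputs, B = (X*Y)/(bx*by) blocks

# Example: a 256x256 channel normalized from [0, 255] to [0, 1]
img = np.random.randint(0, 256, (256, 256)) / 255.0
cols = to_blocks(img)   # 64x1 column vectors for each of the B = 1024 blocks
```

With 8×8 blocks on a 256×256 hologram this yields exactly the 64×1 network input and B = 1024 blocks per channel described in the experiments.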
The process of compressing a color CGH by the QINN can be represented in Figure 1.

Calculation and Reconstruction of Color Computer-Generated Hologram
The authors of Ref. [1] used a quantum-inspired neural network to compress a single grayscale Fresnel hologram. They demonstrated experimentally that the quantum-inspired neural network converges to the expected loss value faster than the traditional BP neural network and also obtains a higher PSNR for the reconstructed image. In this paper, a three-channel quantum-inspired neural network structure for compressing color computer-generated holograms is proposed, and the compression method is extended from a single image to a training set to enhance the network's generalization ability.
The specific approach of the experiments in this paper is to divide the holograms of each RGB color channel into separate datasets and to train three different network models, one per channel, so that each quantum-inspired neural network learns the features of the corresponding color hologram. The advantage of this approach is that each model learns the features of its dataset more uniformly, yielding a distinct compression model for each color channel and thus higher-quality decompressed color CGHs.
The calculation conditions of the color CGHs computed by the ASM are as follows: the wavelengths of red, green, and blue light are 633, 532, and 450 nm, respectively; the sizes of the original image and the hologram are both 256×256 pixels; and the distance between the color original image and the CGH is 0.3 m. The pixel sizes corresponding to the RGB channels are

Quality Evaluation Index
The indexes to evaluate the effect of network compression include holographic image compression rate (CR) and loss reduction speed.The indexes to evaluate the quality of compressed CGHs and reconstructed CGHs are the mean square error (MSE), the peak-signal-to-noise ratio (PSNR), and the structural similarity index measure (SSIM).
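The three image-quality indexes can be sketched as follows. The PSNR uses the peak-to-peak value of the CGH data as defined later in the paper; the SSIM shown here is a simplified single-window (global-statistics) version with the usual stabilizing constants, whereas the standard SSIM is computed over local windows, so the function names and this simplification are assumptions.

```python
import numpy as np

def mse(f, g):
    """Mean square error between two images."""
    return np.mean((f - g) ** 2)

def psnr(f, g, peak=1.0):
    """PSNR in dB; `peak` is the peak-to-peak value of the image data."""
    return 10 * np.log10(peak ** 2 / mse(f, g))

def ssim_global(x, y, c1=0.01**2, c2=0.03**2):
    """Simplified SSIM over the whole image (windowed SSIM is the standard form)."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx**2 + my**2 + c1) * (vx + vy + c2))
```

Identical images give SSIM = 1, and a uniform offset of 0.01 on data with unit peak gives a PSNR of 40 dB, which is a quick check that the formulas are wired correctly.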

Compressing Color Computer-Generated Hologram with Quantum-Inspired Neural Network
The color CGHs are divided into the three RGB channels, and the CGHs are compressed and decompressed in each channel. The QINN is initialized with randomly generated thresholds. The CSIQ dataset [21] is computed as double-phase CGHs, and 20 CGHs are selected as the training set for the QINN. The input CGH size is unified to 256×256 pixels and divided into sub-image blocks of 8×8 pixels, so the final input data size is 64×1. Therefore, the number of quantum neurons in the input and output layers is 64, and the number of quantum neurons in the hidden layer is set to 32, 16, 8, and 4, respectively. The image compression ratios corresponding to the four network structures are 0.5, 0.25, 0.125, and 0.0625. The Barbara image is used as the test image.
Table 1 shows the converged MSE loss values and the PSNR and SSIM of the decompressed CGHs for different compression ratios. Comparing the image quality of each channel, the PSNR and SSIM of the decompressed CGH decrease as the CR decreases; when the compression ratio is 0.5, the PSNRs of the three RGB channels are the highest. Comparing the color channels, the PSNR and SSIM of the decompressed red-light CGHs are the highest, and those of the green-light CGHs are the lowest. The reason for this is the difference in pixel values and distributions of the CGHs at different wavelengths, which leads to differences in network training and reconstructed image quality.

Table 2 lists the PSNR and SSIM of the original images reconstructed from the decompressed CGHs for different channels and compression ratios. The index value of each reconstructed image is lower than that of the corresponding decompressed CGH because image noise is introduced during the reconstruction of the CGH; however, the index values of the reconstructed images are positively correlated with those of the decompressed CGHs. The quality of the reconstructed original images is also highest when the CR is 0.5.

Figure 2 shows the loss reduction curves in the different channels, and Figure 3 shows the decompressed CGHs and the reconstructed original images in the three channels. The pixel distributions of the CGHs in different color channels vary greatly; the pixel distribution of the green-light holograms is the most complex, which leads to different loss reduction speeds and convergence to different error values in the different color channels. When the compression ratio decreases, the ability of a network of the same structure to characterize the pixel distribution decreases, and a training set with a more complex pixel distribution has a greater impact on network performance. Therefore, the green and blue channel loss curves converge to larger errors, and their reconstructed image quality is lower.

Compressing Color Computer-Generated Hologram with Gradient Optimized Quantum-Inspired Neural Network
Different gradient optimization algorithms bring different gradient convergence rates and eventually converge to different loss values. The gradient descent curves of the different optimization algorithms in the RGB channels are shown in Figure 4. Table 3 compares, for CR = 0.5, the image quality evaluation indexes of the holograms decompressed in each channel by the different gradient optimization algorithms. The loss value of the Adam algorithm is the lowest, and its PSNR and SSIM values are the highest. Among the three channels, the quality of the decompressed CGHs in the R channel is the highest.
Table 4 shows the PSNR and SSIM of the reconstructed original images for the different gradient optimization algorithms in each color channel when CR = 0.5. In each channel, the reconstructed image quality of the Adam method is the best, and the PSNR values of each gradient optimization algorithm in the R channel are better than those of the SGD algorithm. As seen in Figure 4, the red curve is the Adam algorithm, which has the fastest gradient descent and the best convergence. The convergence error of SGD is similar to that of Adam, but its convergence speed is the slowest. The convergence speed of the Momentum algorithm is similar to that of Adam, but its final convergence error is larger. The convergence speeds of Adagrad and Adadelta are similar and both faster than SGD, but the final convergence error of Adadelta is poor. From the perspective of human visual observation, the images reconstructed by the Adam and Momentum algorithms have higher clarity; the color of the image reconstructed by Adam is closer to Figure 5(a), its color difference is smaller than that of Momentum, and the reconstruction noise of Momentum is greater than that of Adam. The colors of the original images reconstructed by Adagrad, Adadelta, and SGD are similar and more violet than those of the other two algorithms, which is consistent with the numerical results in Table 4. The image reconstructed from the green CGH is noisier, and its noise distribution differs from those of the red and blue CGHs; the red and blue channel noise therefore fuses into purple, while the green channel noise is not fused and thus appears directly in the final color reconstructed image.

Conclusion
In this paper, a gradient optimized QINN algorithm for compressing color CGHs in RGB space is proposed. In the RGB color space, the band-limited angular spectrum method is used to compute the double-phase hologram. Because this method uses a real-valued CGH to record the phase and amplitude information of the spatial complex function without introducing additional bias terms, it reduces the noise introduced during reconstruction; therefore, higher image quality and a smaller color difference can be achieved when reconstructing from the decompressed color CGH. Four different gradient optimization algorithms for the QINN were evaluated in the compression algorithm. The experimental results show that the gradient optimized QINN has faster convergence, lower convergence error, and higher-quality reconstructed original color images. When CR = 0.5, the color images reconstructed from the CGHs decompressed by the QINN with the Adam algorithm show a PSNR improvement of 0.388 dB and an SSIM improvement of 0.03 compared with the traditional QINN. Therefore, compared with the traditional QINN, utilizing different gradient optimization algorithms has great potential.
Here, $t_n$ and $O_n$ are the target output value and the actual output value of the $n$th neuron of the output layer.

Figure 1 .
Figure 1.Color CGH compression using QINN based on RGB channels.
The compression ratio is defined as $CR = S_c / S_0$, where $S_c$ and $S_0$ are the sizes of the compressed CGH and the original CGH. The MSE and PSNR are
$$\mathrm{MSE} = \frac{1}{XY}\sum_{x=1}^{X}\sum_{y=1}^{Y}\left[f(x, y) - \hat{f}(x, y)\right]^2, \quad \mathrm{PSNR} = 10\log_{10}\frac{x_{\mathrm{peak}}^2}{\mathrm{MSE}},$$
where $f(x, y)$ and $\hat{f}(x, y)$ are the original CGH and the reconstructed CGH, and $x_{\mathrm{peak}}$ is the peak-to-peak value of the CGH data.

The SSIM is defined as the product of the comparison functions
$$\mathrm{SSIM}(x, y) = l(x, y)\, c(x, y)\, s(x, y),$$
where $l(x, y)$, $c(x, y)$, and $s(x, y)$ are the image brightness, contrast, and structure comparison functions, respectively; $\sigma_x^2$ is the variance of $x$, $\sigma_y^2$ is the variance of $y$, and $\sigma_{xy}$ is the covariance of $x$ and $y$. The $c_1$, $c_2$, and $c_3$ are set to constants. The SSIM is a number between 0 and 1; a larger SSIM means less difference between the two images and better image quality.

Figure 2 .
Figure 2. The variation of training loss with the increment of training epochs.

Figure 4 .
Figure 4. Gradient descent curves of different gradient optimization algorithms in R, G, and B channel.

Figure 5 .
Figure 5. Reconstructed image of different optimized algorithm.

Table 1 .
MSE, PSNR and SSIM of decompressed CGHs with different compression ratios.

Table 2 .
PSNR and SSIM of reconstructed image with different compression ratios.

Table 3 .
MSE, PSNR and SSIM of decompressed CGHs with different optimization algorithm.

Table 4 .
PSNR of reconstructed image with different optimization algorithm.