An image denoising method is proposed for ultrasonic logging images with severe noise. The proposed method works within a variational Bayesian framework using a block sparse prior. First, the sparse coefficients are simulated by a more appropriate distribution, the Laplacian distribution. Then a variational Bayesian denoising model in which the Laplacian distribution is used as a prior term on the sparse coefficients is proposed. Finally, semiquadratic regularization is used to solve the model with a simplified process. Moreover, during the denoising process, a relaxation factor is introduced to improve the accuracy. In the vast majority of cases, the proposed method obtains better results in both visual quality and objective evaluation. It achieves better denoising performance than existing denoising methods when the edge details of the images are contaminated by noise, especially severe noise. The experimental results show that the proposed method is practical for ultrasonic logging images.
1. Introduction

With the increasing demand for oil and gas in the 21st century, ultrasonic logging imaging will be used more and more widely due to its intuitive nature. During logging, a motor drives the transducer and magnetometer to rotate around the axis of the instrument at a fixed rate to scan the entire borehole wall. However, the ultrasonic logging image is inevitably contaminated by noise. It is necessary to remove the noise and improve the image quality, which guarantees the performance of subsequent image processing such as fracture segmentation, hole recognition, and reservoir interpretation. The purpose of denoising is to preserve image details while removing noise. In the past decades, scholars have proposed a variety of denoising methods, including the average filter, total variation, sparse coding, and deep learning.1–4 The average filter is an effective method for removing noise, but the edges become blurry after denoising. Garnett et al.5 presented a new trilateral filter based on the bilateral filter to remove noise. A weighting cost function was designed to calculate the weights of neighbor pixels and operate the filter. It effectively removed noise while preserving the image edges when the noise intensity was small. However, the denoised results became very poor when the noise intensity was severe. Li and Suen6 proposed a new nonlocal means method based on grey theory. The experiments showed that their method has a superior denoising ability. The method accurately discriminated between information and noise while effectively reducing pseudo-Gibbs artifacts. Unfortunately, when the noise was severe, some pseudo-Gibbs artifacts were still visible after denoising. Then, Li and Wang7 presented an improved wavelet threshold denoising method combined with non-local mean filtering. However, it had difficulty extracting the redundancy of visual images, especially for images with severe noise.
Recently, to address the problem that the early lesions of COVID-19 are not obvious and the generated images are easily contaminated with noise, Guo et al.8 proposed an adaptive two-stage filtering method for COVID-19 CT images. The results demonstrated that their method achieved satisfactory denoising performance when the images were contaminated with impulse noise. However, its performance on Gaussian noise still needs to be improved. In the last decade, total variation has been one of the most popular methods in image denoising. The total variation method was first proposed by Rudin et al.;9 it achieves a trade-off between noise removal and edge preservation. However, the method easily generates block effects. A number of methods based on total variation have been proposed to suppress the block effects. You and Kaveh10 proposed a fourth-order partial differential equation for noise removal. In their method, a cost function was proposed based on the image intensity function. Then the minimization of the cost function was solved by the time evolution of the partial differential equation. Chan et al.11 proposed an improved model by adding a nonlinear fourth-order diffusive term to the Euler-Lagrange equations of the total variation model, and Shahdoosti et al.12 proposed a new hybrid denoising scheme using total variation and the ripplet transform. The visual quality demonstrated that their schemes provided sharper edges. Recently, Wang et al.13 proposed a vector total fractional-order variation for image denoising. Furthermore, they introduced a regularization term for describing texture space to improve the denoising performance. Guo and Chen found that the main shortcoming of traditional total variation is that it cannot reflect local features because different orientations of the total variation receive the same weight.
Therefore, they proposed a nonconvex anisotropic total variation for image denoising.14 The results showed that it was very effective for suppressing staircase effects. Overall, the total variation method can achieve good performance, but minimizing its cost function is difficult because of the nondifferentiability of the model. Sparse coding is a relatively new image representation method in signal processing.15,16 It shows that signals can be exactly reconstructed from fewer coefficients than the traditional methods require. In recent years, sparse coding has been studied quite extensively for image denoising. Existing image denoising methods based on sparse coding involve two steps. First, image blocks are expressed by a linear combination of a few coefficients taken from the basis functions. Second, the noise is removed from the image because the noise distribution generally does not meet the sparsity assumption. Recently, Dong et al.17 proposed a nonlocally centralized sparse representation denoising method combining nonlocal self-similarity and sparse representation of images. It has a very powerful denoising performance. However, the K-means and principal component analysis (PCA) steps used in this method are difficult to solve. Inspired by Dong's method, Nejati et al.18 proposed a denoising method based on sparsity and low-rank representation that also took advantage of nonlocal self-similarity and image sparsity. Liu et al.19 proposed a correlation adaptive sparse model for image denoising. Their model adaptively handles different image data using local and nonlocal correlation. In addition, Zhang and Li20 presented an image denoising and repair model based on compressed sensing theory, and the results demonstrated that their model effectively removed noise. Deep learning was first used for image denoising in 2015.21 It does not require parameters to be set manually in the denoising process.
Then, Mao et al.22 introduced multiple convolutions to remove noise and obtain a high-resolution image, and Zhang et al.23 proposed a flexible neural network for denoising by taking different noise levels and the image patch as the input. Scetbon et al.24 presented an end-to-end deep architecture with the exact K-singular value decomposition (K-SVD) computational path and trained it to optimize the denoising performance. The deep learning methods can improve denoising results; however, they impose high hardware requirements for training deep networks. Although various image denoising methods have been applied to numerous fields, few methods have been proposed for ultrasonic logging images. Motivated by this research, we propose a novel variational Bayesian inference framework for ultrasonic logging image denoising using a sparsity prior term. This prior assumes that the outputs of the local sparse coefficients of the ultrasonic logging image obey a Laplacian distribution, which is demonstrated by many simulations and tests. The main contributions of this work lie in three aspects. First, we use the Laplacian distribution to simulate the sparse coefficients of ultrasonic logging images. Second, we introduce the sparse prior term into the variational Bayesian model to improve the denoising performance. Finally, semiquadratic regularization is used to solve the model. Moreover, during the denoising process, a relaxation factor is introduced to further improve the accuracy. The experiments demonstrate that our proposed algorithm obtains a competitive performance compared with existing denoising algorithms, especially for ultrasonic logging images with severe noise. This paper is organized as follows. Section 2 presents the image model and sparse coefficients of ultrasonic logging images. The variational Bayesian denoising model is detailed in Sec. 3. Experimental results are shown in Sec.
4, and finally conclusions and future research directions are presented in Sec. 5.

2. Image Denoising Model and Sparse Coefficients of Image

2.1. Image Denoising Model

For convenience but without loss of generality, we use 1D notation to represent the imaging model, as shown in the following equation:

$$y = x + n,$$

where $y$ denotes the observed image, $x$ denotes the original image, and $n$ denotes the additive noise. To utilize the image sparsity, we define filters $f_i$, $i = 1, \ldots, K$ [assumed to form an orthonormal basis, e.g., the discrete cosine transform (DCT) or a wavelet basis], and the outputs of these filters are

$$w_i = f_i \ast x,$$

where $w_i$ satisfies the sparsity constraint. Now we determine the appropriate statistical model to simulate the distribution of $w_i$.

2.2. Statistical Model of Sparse Coefficients

Historically, sparse coding can be traced back to the proposition of the wavelet transform25 and multiresolution analysis.26 Since then, sparse coding has been successfully employed in image processing. To date, studies on sparsity can be divided primarily into two kinds: basis functions and sparse coefficients. For basis functions, the main aim is to represent the sparse vector under a certain basis function. For sparse coefficients, the aim is to find an appropriate statistical model to express the sparse coefficients. We assume that the sparse coefficients are independent and identically distributed with parameters $\theta$, so the prior of $w$ is expressed as

$$p(w \mid \theta) = \prod_i p(w_i \mid \theta).$$

As for the ultrasonic logging images, the distribution of the sparse coefficients obeys a heavy-tailed distribution, as shown in Fig. 1. First, the heavy-tailed distribution of the sparse coefficients results from the filter outputs of the image edge areas usually being small. This heavy-tailed distribution can be regarded as a prior and used as a regularization term for the denoising model.
Second, unlike visual images, whose heavy-tailed sparse coefficients are usually modeled by a Gaussian distribution, the sparse coefficients of ultrasonic logging images obey a different distribution.

Fig. 1 Heavy-tailed distribution of sparse coefficients of an ultrasonic logging image: (a) ultrasonic logging image and (b) the distribution of sparse coefficients (with the horizontal axis representing the value of the sparse coefficients and the vertical axis representing the probability density).

The Gaussian distribution, the Laplacian distribution, and the Student-t distribution are all candidate statistical models for a heavy-tailed distribution. To determine the appropriate statistical model to simulate the distribution of sparse coefficients, we randomly choose 100 ultrasonic logging images (samples are shown in Fig. 2) and use the above three distributions to simulate their sparse coefficients. The simulated results are shown in Fig. 3.

Fig. 3 Simulation of the sparse coefficients of the ultrasonic logging images in Fig. 2: (a)–(f) the simulations of Figs. 2(a)–2(f), respectively (with the horizontal axis representing the value of the sparse coefficients and the vertical axis representing the probability density).

As seen from Fig. 3, the three simulated results differ obviously. The Gaussian distribution matches the center of the original data well, and the Student-t distribution matches the kurtosis of the original data well. By contrast, the Laplacian distribution matches both the center and the kurtosis of the original data well. Moreover, we use the root mean square error (RMSE) to quantitatively evaluate the simulated results. RMSE is defined as

$$\mathrm{RMSE} = \sqrt{\frac{1}{\mathrm{Num}} \sum_{i=1}^{\mathrm{Num}} (\hat{p}_i - p_i)^2},$$

where $\hat{p}_i$ are the simulated values, $p_i$ are the original values, and Num is the length of the data.
As seen from Table 1, the results simulated by the Laplacian distribution are better than those of the other two distributions. Therefore, we use the Laplacian distribution as the prior term on the sparse coefficients, expressed as

$$p(w_i) = \frac{1}{2b} \exp\left(-\frac{|w_i - \mu|}{b}\right),$$

where $\mu$ is the mean and $b$ is the scale parameter.

Table 1 RMSE of the simulated results of all images. (The smallest RMSE is marked in bold.)
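The distribution comparison above can be reproduced numerically. The following is a minimal Python sketch (not the authors' code): it fits Gaussian, Laplacian, and Student-t models to heavy-tailed samples standing in for the sparse filter outputs of an ultrasonic logging image, then scores each fit with the RMSE between the fitted density and the empirical histogram.

```python
import numpy as np
from scipy.stats import norm, laplace, t as student_t

rng = np.random.default_rng(0)
# Heavy-tailed samples standing in for the sparse (filter-output) coefficients
# of an ultrasonic logging image.
coeffs = rng.laplace(loc=0.0, scale=5.0, size=20_000)

# Empirical probability density of the coefficients.
hist, edges = np.histogram(coeffs, bins=101, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

def rmse(simulated, original):
    """Root mean square error between simulated and original density values."""
    return np.sqrt(np.mean((simulated - original) ** 2))

scores = {}
for name, dist in [("gaussian", norm), ("laplacian", laplace), ("student-t", student_t)]:
    params = dist.fit(coeffs)                      # maximum-likelihood fit
    scores[name] = rmse(dist.pdf(centers, *params), hist)

# The distribution with the smallest RMSE matches the data best.
for name, score in sorted(scores.items(), key=lambda kv: kv[1]):
    print(f"{name:10s} RMSE = {score:.5f}")
```

On such heavy-tailed data, the Laplacian fit scores a clearly lower RMSE than the Gaussian fit, which mirrors the comparison in Table 1.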
Existing models always assume the mean $\mu = 0$, which violates the actual results. Therefore, in our model, the mean $\mu \neq 0$. However, solving with this prior term is difficult because its normalization constant cannot be found in closed form. One contribution of this paper is that a constrained variational Bayesian model is derived to solve this problem. This algorithm is described in detail next.

3. Variational Bayesian Model for Denoising

3.1. Hierarchical Bayesian Analysis

The model Eq. (1) is converted into a new form via Eq. (2): applying the filters to both sides and using the commuting property, we rewrite the model as

$$v_i = w_i + u_i,$$

where $v_i$ denotes the filter outputs of the observed image $y$ and $u_i$ denotes the filter outputs of the additive noise $n$. As for the Laplacian prior, two parameters must be estimated. To simplify the computation, we assume the parameters ($\mu$, $b$) are directly calculated from the sparse coefficients. Then the Laplacian prior term is written in a simplified form, and the maximum a posteriori estimator is obtained. As for the scale parameter, Jeffrey's prior has been proposed as an appropriate choice, that is,

$$p(b) \propto \frac{1}{b}.$$

Therefore, Eq. (9) is translated into a new form using Eqs. (7), (8), and (10), with a cost function to be minimized. For Jeffrey's prior, the weight $1/b$ diverges as $b \to 0$, which makes Eq. (11) unstable. So we use $b + \varepsilon$ instead of $b$, where $\varepsilon$ is a very small number, to improve the stability of Eq. (11). In this case, the model Eq. (11) is written in a stabilized form. For the Laplacian prior term, the resulting weights form a diagonal matrix that normalizes the image block.
Then the Bayesian model is converted into the joint estimation of the sparse coefficients and the prior parameters. Unlike Chantas et al.,27 who used the Student-t distribution as the prior term and solved the variational Bayesian model by approximating a lower bound on the logarithm of the marginal likelihood, or Shanthi et al.,28 who used the Gaussian scale mixture distribution as the prior term and converted their model into an L2-norm, we explicitly use the general Laplacian distribution as the prior term and convert the sparse model into an L1-norm. In fact, compared with the L2-norm, the L1-norm is more suitable for image sparsity. Such a sparse model is appealing and motivates us to further exploit its solution.

3.2. Solving the Variational Bayesian Model

For the sparse coefficients, it is necessary to extract similar blocks from the observed image, whose sparse coefficients should be simulated by the same prior. Thus, these similar blocks are combined to extend Eq. (13), where the collections of blocks, coefficients, and parameters replace their single-block counterparts. For the model Eq. (14), there exists a very popular minimization approach called alternating minimization: one starts with some initial guess and then successively obtains an alternating sequence of conditional minimizers.

3.2.1. Optimization of the First Subproblem

The first subproblem is simplified by fixing the other variables, as shown in Eq. (15). Moreover, Eq. (15) is simplified as Eq. (16). To be more intuitive, the equation is rewritten as Eq. (17). The optimization is thereby transformed into the minimization of a quadratic function, which is further decomposed into a series of subproblems, as in Eq. (18). Obviously, each subproblem is solved by taking the derivative and equating it to zero, as in Eq. (19). In the first case, the solution of Eq. (19) gives the minimizer directly, as in Eq. (20). In the other case, the function is monotone.
Then the minimum is attained at the boundary of the feasible domain, and the optimization of the first subproblem is given by Eq. (21).

3.2.2. Optimization of the Second Subproblem

The second subproblem is simplified by fixing the remaining variables, as shown in Eq. (22). To solve Eq. (22), the algorithm is derived from the well-known variable-splitting and penalty techniques. Therefore, according to semiquadratic regularization, we introduce a relaxation factor $\beta$ to transform Eq. (22) into an equivalent form, Eq. (23).

Proposition 1. As the parameter $\beta \to \infty$, the optimization of Eq. (23) is equivalent to the optimization of Eq. (22).

Proof: As $\beta \to \infty$, if the splitting constraint does not hold, the penalty term diverges, which is inconsistent with the optimization of Eq. (23). So the constraint holds, and the two problems share the same minimizer. Namely, when $\beta \to \infty$, Eq. (23) is equivalent to Eq. (22).

The optimization of Eq. (23) is decomposed into two steps: the optimization of the auxiliary variable and the optimization of the sparse coefficients. With the sparse coefficients fixed, the optimization of Eq. (23) simplifies to Eq. (24), whose unique minimum is given by the two-dimensional shrinkage equation, Eq. (25). In contrast, with the auxiliary variable fixed, the optimization of Eq. (23) simplifies to Eq. (26). Obviously, Eq. (26) is quadratic, and its minimum is given by the normal equations, Eq. (27), which involve the identity matrix; because the matrix to invert is diagonal, its inverse can be easily computed. By Eqs. (21), (25), and (27), the denoised result of the noisy image is given by Eq. (28).

3.3. Steps of the Method and Parameter Settings

The steps of this method are summarized below.
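As a rough illustration of the alternating scheme, the shrinkage step, and the increasing relaxation factor, here is a minimal 1D Python sketch based on standard half-quadratic splitting for an L1-regularized model. The quadratic data term, the difference operator, and the schedule constants are illustrative assumptions rather than the exact model above.

```python
import numpy as np

def shrink(v, tau):
    # Soft-thresholding: closed-form minimizer of tau*|d| + 0.5*(d - v)^2.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def denoise_1d(y, lam=1.5, beta0=1.0, beta_max=256.0, growth=2.0, inner_iters=5):
    """Alternating minimization with an increasing relaxation factor beta:
    min_x 0.5*||x - y||^2 + lam*||D x||_1, with D the forward-difference
    operator, split as d ~ D x and penalized by (beta/2)*||d - D x||^2."""
    n = len(y)
    D = np.eye(n, k=1) - np.eye(n)                  # forward differences
    D[-1, :] = 0.0                                  # no difference at the last sample
    x = y.copy()
    beta = beta0
    while beta <= beta_max:
        for _ in range(inner_iters):
            d = shrink(D @ x, lam / beta)           # shrinkage (auxiliary) step
            A = np.eye(n) + beta * D.T @ D          # normal equations
            x = np.linalg.solve(A, y + beta * D.T @ d)
        beta *= growth                              # larger beta -> tighter equivalence
    return x

rng = np.random.default_rng(1)
clean = np.repeat([0.0, 4.0, 1.0, 3.0], 32)         # piecewise-constant test signal
noisy = clean + rng.normal(0.0, 0.8, clean.size)
denoised = denoise_1d(noisy)
print("noisy RMSE   :", np.sqrt(np.mean((noisy - clean) ** 2)))
print("denoised RMSE:", np.sqrt(np.mean((denoised - clean) ** 2)))
```

The growing β sequence mirrors the schedule described in Sec. 3.3: small β early on makes each subproblem easy, and large β late in the run enforces the equivalence of Eqs. (22) and (23).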
The noise parameter is empirically estimated, as shown in Refs. 29 and 30. In addition, as discussed in Sec. 3.2.2, when $\beta \to \infty$, Eq. (23) is equivalent to Eq. (22). However, the larger $\beta$ is, the slower the convergence rate is. Considering the convergence rate and the equivalence of the function, the parameter $\beta$ is set as an increasing sequence: the initial value is 1, and after every iteration of the two subproblems, $\beta$ is increased until it reaches its maximum value.

4. Experimental Results and Analysis

The experiments are now presented to demonstrate the denoising performance for ultrasonic logging images. Moreover, we compare our method with three other denoising methods.
4.1. Denoising for Synthetic Noisy Images with Different Noise

In this section, we apply three reference methods (WGSM, TWMF, and NCATV) and our proposed method [called Laplacian prior and sparse regularization (LPSR)] to six images contaminated by different noises. The six original ultrasonic logging images are shown in Fig. 2. To comprehensively evaluate the denoised image quality, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are used as evaluation criteria. Several experiments are reported to validate the denoising performance. As stated above, we test the three other denoising algorithms on these ultrasonic logging images and show a sample of the results for visual comparison. Generally speaking, when the standard deviation of the noise is more than 50, the noisy image is considered severely noisy. In our experiments, the six images are contaminated by different noise levels with standard deviations of 20, 40, 60, and 80. The PSNR and SSIM of all denoised images are shown in Table 2, with the best results marked in bold.

Table 2 PSNR and SSIM of the algorithms for all test images at different noise levels. (The best results are marked in bold.)
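The two evaluation criteria can be computed directly. Below is a minimal numpy sketch using a synthetic stand-in image and a simplified single-window SSIM (standard implementations such as scikit-image use local windows); it is an illustration of the metrics, not the evaluation code used for Table 2.

```python
import numpy as np

def psnr(reference, test, data_range=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def global_ssim(x, y, data_range=255.0):
    """Simplified single-window SSIM: global means/variances with the
    standard stabilizing constants (library versions use local windows)."""
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(2)
original = rng.integers(0, 256, size=(128, 128)).astype(float)  # stand-in image
for sigma in (20, 40, 60, 80):                                  # noise levels in Sec. 4.1
    noisy = original + rng.normal(0.0, sigma, original.shape)
    print(f"sigma={sigma:2d}: PSNR={psnr(original, noisy):6.2f} dB, "
          f"SSIM={global_ssim(original, noisy):.4f}")
```

Both scores decrease as the noise standard deviation grows, which is the trend visible across the columns of Table 2.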
As seen from Table 2, compared with the other algorithms, our proposed LPSR method obtains the best denoising performance. Across all denoised images, the highest PSNR and SSIM values in the majority of cases belong to the proposed LPSR. In addition, the PSNR and SSIM of the WGSM and TWMF methods are significantly lower than those of the NCATV method and LPSR (our proposed method), which demonstrates that WGSM and TWMF are ineffective on these ultrasonic logging images, especially the images with severe noise ($\sigma = 60$ and $\sigma = 80$). Several reasons explain these improvements. First, the WGSM and TWMF methods follow the hypothesis that the local sparse coefficients of an image obey a Gaussian distribution. However, as demonstrated in Sec. 2, the Laplacian distribution is more appropriate for ultrasonic logging images. Therefore, our method introduces the Laplacian distribution as a prior term and obtains better performance. Second, the NCATV method, like most total variation methods, generates some block effects, which degrade the evaluation values. In terms of subjective vision, the denoised images of Fig. 2(d) at medium noise are shown in Fig. 4. Two zoomed regions are shown to the right of each result. As seen from Fig. 4, the denoised results of all algorithms have good subjective visual quality at medium noise, especially our proposed LPSR method. These denoised images have few artifacts in the smooth regions. Of course, there are slight differences among the algorithms. The WGSM and TWMF methods more easily generate blurred edges because they cannot preserve the edge details effectively.

Fig. 4 Denoising performance comparison on Fig. 2(d) with medium noise corruption. (a) Original image; (b) noisy image; denoised images by (c) WGSM, (d) TWMF, (e) NCATV, and (f) LPSR (our method).
Similarly, the denoised images of Fig. 2(f) at severe noise are shown in Fig. 5. It is observed that the proposed LPSR outperforms the other three algorithms in preserving smooth regions and image edges and obtains the most visually pleasant results, with fewer artifacts and clearer edges. We also observe that the improvement is easier to distinguish when the noise contamination is severe, especially in the zoomed regions of the denoised images.

Fig. 5 Denoising performance comparison on Fig. 2(f) with severe noise corruption. (a) Original image; (b) noisy image; denoised images by (c) WGSM, (d) TWMF, (e) NCATV, and (f) LPSR (our method).

4.2. Denoising for Real Ultrasonic Logging Images

In this section, we test our method on various real ultrasonic logging images. All images are denoised using WGSM, TWMF, NCATV,14 and our proposed LPSR. The real ultrasonic logging images and their denoised results are shown in Figs. 6–8.

Fig. 6 Denoising performance comparison on real ultrasonic logging images with noise. (a) Noisy image; denoised images by (b) WGSM, (c) TWMF, (d) NCATV, and (e) LPSR (our method).

Fig. 7 Denoising performance comparison on real ultrasonic logging images with noise. (a) Noisy image; denoised images by (b) WGSM, (c) TWMF, (d) NCATV, and (e) LPSR (our method).

Fig. 8 Denoising performance comparison on real ultrasonic logging images with noise. (a) Noisy image; denoised images by (b) WGSM, (c) TWMF, (d) NCATV, and (e) LPSR (our method).

First, for Fig. 6, all denoised results exhibit clearer image details compared with the noisy image. However, as seen from the enlarged rectangle, the result of WGSM [Fig. 6(b)] still contains significant noise. In Figs. 6(c) and 6(d), the details in the denoised results of TWMF and NCATV are oversmoothed. In contrast, our proposed method [Fig. 6(e)] produces a visually better result while keeping the image smooth.
Second, for Fig. 7, the small fractures in our denoised results [Fig. 7(e)] are the clearest, especially in the enlarged rectangle. The same denoising effects are also shown in Fig. 8. The denoising of ultrasonic logging images is particularly challenging; however, our results exhibit sharper details and fewer ringing artifacts compared with the other methods.

5. Conclusions and Future Work

In this paper, an image denoising method is proposed for ultrasonic logging images with severe noise. The denoised images of our method have clearer edges and fewer artifacts. The success of our method benefits from three aspects. First, the sparse coefficients are simulated by a more appropriate distribution, the Laplacian distribution. Second, we use the Laplacian distribution as a prior term and propose the variational Bayesian denoising model. Finally, a relaxation factor is introduced to solve the proposed model. Numerical experiments demonstrate that the proposed algorithm outperforms previous algorithms in terms of both visual quality and objective evaluation. However, due to the complexity of solving the variational Bayesian model, the denoising takes too much time when the noisy images become large. For example, it takes about 280 s to remove medium noise from an image. Obviously, this defect limits the extension of our method, and thus future research will focus on developing parallel technology to decrease the running time.

Acknowledgments

We would like to thank the National Natural Science Foundation of China (Grant Nos. 61901059 and 51978079).

References

M. Rajalakshmi and K. Annapurani,
“Performance evaluation of various filters for noise removal on near infrared palm dorsal vascular images,” in 3rd Int. Conf. Intell. Sustain. Syst. (ICISS) (2020).
P. Li et al., “Joint image denoising with gradient direction and edge-preserving regularization,” Pattern Recognit. 125, 108506 (2022). https://doi.org/10.1016/j.patcog.2021.108506
O. Yang et al., “Single image denoising via multi-scale weighted group sparse coding,” Signal Process. 200, 108650 (2022). https://doi.org/10.1016/j.sigpro.2022.108650
J. Chi et al., “Single low-dose CT image denoising using a generative adversarial network with modified U-Net generator and multi-level discriminator,” IEEE Access 8, 133470–133487 (2020). https://doi.org/10.1109/ACCESS.2020.3006512
R. Garnett et al., “A universal noise removal algorithm with an impulse detector,” IEEE Trans. Image Process. 14(11), 1747–1754 (2005). https://doi.org/10.1109/TIP.2005.857261
H. Li and C. Y. Suen, “A novel non-local means image denoising method based on grey theory,” Pattern Recognit. 49, 237–248 (2016). https://doi.org/10.1016/j.patcog.2015.05.028
F. Li and J. Wang, “Research on image denoising algorithm based on improved wavelet threshold and non-local mean filtering,” in IEEE 6th Int. Conf. Signal and Image Process. (ICSIP) (2021). https://doi.org/10.1109/ICSIP52628.2021.9688900
S. Guo et al., “COVID-19 CT image denoising algorithm based on adaptive threshold and optimized weighted median filter,” Biomed. Signal Process. Control 75, 103552 (2022). https://doi.org/10.1016/j.bspc.2022.103552
L. I. Rudin, S. Osher and E. Fatemi, “Nonlinear total variation based noise removal algorithms,” Physica D Nonlinear Phenom. 60(1–4), 259–268 (1992). https://doi.org/10.1016/0167-2789(92)90242-F
Y. L. You and M. Kaveh, “Fourth-order partial differential equations for noise removal,” IEEE Trans. Image Process. 9(10), 1723 (2000). https://doi.org/10.1109/83.869184
T. Chan, A. Marquina and P. Mulet, High-Order Total Variation-Based Image Restoration, Society for Industrial and Applied Mathematics (2000).
H. R. Shahdoosti and S. M. Hazavei, “Combined ripplet and total variation image denoising methods using twin support vector machines,” Multimedia Tools Appl. 77, 1–19 (2017). https://doi.org/10.1007/s11042-017-4618-9
W. Wang et al., “Vector total fractional-order variation and its applications for color image denoising and decomposition,” Appl. Math. Modell. 72, 155–175 (2019). https://doi.org/10.1016/j.apm.2019.03.010
J. Guo and Q. Chen, “Image denoising based on nonconvex anisotropic total-variation regularization,” Signal Process. 186, 108124 (2021). https://doi.org/10.1016/j.sigpro.2021.108124
D. L. Donoho, “Compressed sensing,” IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). https://doi.org/10.1109/TIT.2006.871582
S. Agapiou et al., “Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems,” Inverse Prob. 34, 045002 (2018). https://doi.org/10.1088/1361-6420/aaacac
W. Dong et al., “Nonlocally centralized sparse representation for image restoration,” IEEE Trans. Image Process. 22(4), 1620–1630 (2013). https://doi.org/10.1109/TIP.2012.2235847
M. Nejati et al., “Denoising by low-rank and sparse representations,” J. Vis. Commun. Image Represent. 36, 28–39 (2016). https://doi.org/10.1016/j.jvcir.2016.01.004
H. Liu, J. Zhang and R. Xiong, “CAS: correlation adaptive sparse modeling for image denoising,” IEEE Trans. Comput. Imaging 7, 638–647 (2021). https://doi.org/10.1109/TCI.2021.3083135
X. Zhang and Y. Li, “Research on image denoising algorithm based on compressed sensing,” in 2nd Int. Conf. Consum. Electron. Comput. Eng. (ICCECE) (2022).
J. Liang and R. Liu, “Stacked denoising autoencoder and dropout together to prevent overfitting in deep neural network,” in 8th Int. Congr. Image and Signal Process. (CISP), 697–701 (2015).
X. Mao et al., “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Adv. Neural Inf. Process. Syst., 2802–2810 (2016).
K. Zhang et al., “FFDNet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process. 27(9), 4608–4622 (2018). https://doi.org/10.1109/TIP.2018.2839891
M. Scetbon, M. Elad and P. Milanfar, “Deep K-SVD denoising,” IEEE Trans. Image Process. 30(6), 5944–5955 (2021). https://doi.org/10.1109/TIP.2021.3090531
I. Daubechies, “Orthonormal bases of compactly supported wavelets,” Commun. Pure Appl. Math. 41, 909–996 (1988). https://doi.org/10.1002/cpa.3160410705
S. Mallat, “Multiresolution approximations and wavelet orthonormal bases of L2(R),” Trans. Am. Math. Soc. 315, 69–87 (1989). https://doi.org/10.2307/2001373
G. Chantas et al., “Variational Bayesian image restoration based on a product of t-distributions image prior,” IEEE Trans. Image Process. 17(10), 1795–1805 (2008). https://doi.org/10.1109/TIP.2008.2002828
S. Amala Shanthi, C. H. Sulochana and T. Latha, “Image denoising in hybrid wavelet and quincunx diamond filter bank domain based on Gaussian scale mixture model,” Comput. Electr. Eng. 46, 384–393 (2015). https://doi.org/10.1016/j.compeleceng.2015.02.002
L. Y. Zhou and Z. B. Zhang, “No-reference image quality assessment based on noise, blurring and blocking effect,” Optik – Int. J. Light Electron. Opt. 125(19), 5677–5680 (2014). https://doi.org/10.1016/j.ijleo.2014.07.010
S. Sudharson, T. Pratap and P. Kokil, “Noise level estimation for effective blind despeckling of medical ultrasound images,” Biomed. Signal Process. Control 68, 102744 (2021). https://doi.org/10.1016/j.bspc.2021.102744
Biography

Hu Deng received his PhD in drilling engineering from Southwest Petroleum University, Chengdu, China, in 2004. Currently, he is a senior engineer at the Drilling and Production Engineering Technology Research Institute, CNPC Chuanqing Drilling Engineering Co. Ltd., China. He is mainly engaged in the research and management of superior fast drilling, geo-engineering integration, and drilling automation and intelligence.

Guoquan Liu received his BE degree in automation from Jianghan Petroleum Institute, Jingzhou, China, in 1991. Currently, he is a senior engineer at the Engineering Technology Research Institute, CNPC Logging Co. Ltd., China. He is engaged in the research and development of production logging instruments and logging signal processing.

Luoyu Zhou received his BS degree in optical information science and technology from the University of Science and Technology of China, Hefei, China, in 2008 and his PhD in optical engineering from Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, in 2013. Currently, he is an associate professor at the School of Electronics and Information, Yangtze University, China. His current research interests include image processing and artificial intelligence.