Open Access
14 January 2023

Ultrasonic logging image denoising algorithm based on variational Bayesian and sparse prior
Hu Deng, Guoquan Liu, Luoyu Zhou
Abstract

An image denoising method is proposed for ultrasonic logging images with severe noise. The proposed method works on a variational Bayesian framework using block sparse prior. First, the sparse coefficients are simulated by a more appropriate distribution—Laplacian distribution. Then the variational Bayesian denoising model in which Laplacian distribution is used as a prior term of sparse coefficients is proposed. Finally, semiquadratic regularization is used to solve the model with a simplified process. Moreover, during the denoising process, a relaxation factor is introduced to improve the accuracy. In the vast majority of cases, the proposed method obtained better results in both the visual quality and the objective evaluation. It achieves better denoising performance than the existing denoising methods when the edge details of the images are contaminated by noise, especially severe noise. The experimental results show that the proposed method is practical in ultrasonic logging images.

1. Introduction

With the increasing demand for oil and gas in the 21st century, ultrasonic logging imaging will be used more and more widely due to its intuitive feature. During logging, a motor drives the transducer and magnetometer to rotate around the axis of the instrument at a fixed rate to scan the entire borehole wall. However, the ultrasonic logging image is inevitably contaminated by noise. It is necessary to remove noise and improve the image quality, which will guarantee subsequent image processing performance, such as fracture segmentation, hole recognition, and reservoir interpretation. The purpose of denoising is to preserve image details while removing noise. In the past decades, scholars have proposed a variety of denoising methods, including the average filter, total variation, sparse coding, and deep learning.1–4

The average filter is an effective method for removing noise, but the edges become blurry after denoising. Garnett et al.5 presented a new trilateral filter based on the bilateral filter to remove noise. A weighting cost function was designed to calculate the weights of neighbor pixels and operate the filter. It effectively removed noise while preserving the image edges when the noise intensity was small. However, the denoised results became very poor when the noise intensity was severe. Li and Suen6 proposed a new nonlocal means method based on grey theory. The experiments showed that their method has a superior denoising ability. The method accurately discriminated between information and noise while effectively reducing the pseudo-Gibbs artifacts. Unfortunately, when the noise was severe, some pseudo-Gibbs artifacts were still visible after denoising. Then, Li and Wang7 presented an improved wavelet threshold denoising method combined with non-local mean filtering. However, it had difficulty extracting the redundancy of visual images, especially for images with severe noise. Recently, to address the problem that the early lesions of COVID-19 are not obvious and the generated images are easily contaminated with noise, Guo et al.8 proposed an adaptive two-stage filtering method for COVID-19 CT images. The results demonstrated that their method achieved satisfactory denoising performance when the images were contaminated with impulse noise. However, its denoising performance on Gaussian noise still needs to be improved.

In the last decade, total variation has been one of the most popular methods in image denoising. The total variation method was first proposed by Rudin et al.;9 it achieved a trade-off between noise removal and edge preservation. However, the method easily generated block effects. A number of methods based on total variation have been proposed to suppress the block effects. You and Kaveh10 proposed a fourth-order partial differential equation for noise removal. In their method, a cost function was proposed based on the image intensity function. Then the minimization of the cost function was solved by the time evolution of the partial differential equation. Chan et al.11 proposed an improved model by adding a nonlinear fourth-order diffusive term to the Euler-Lagrange equations of the total variation model, and Shahdoosti et al.12 proposed a new hybrid denoising scheme using total variation and the ripplet transform. The visual quality demonstrated that their schemes provided sharper edges. Recently, Wang et al.13 proposed a vector total fractional-order variation for image denoising. Furthermore, they introduced a regularization term describing texture space to improve the denoising performance. Guo and Chen found that the main shortcoming of traditional total variation is that it cannot reflect local features because it assigns the same weight to every orientation of the total variation. Therefore, they proposed a nonconvex anisotropic total variation for image denoising.14 The results showed that it is very effective for suppressing staircase effects. Overall, the total variation method can achieve good performance, but it has difficulty minimizing the cost function because of the nondifferentiability of the model.

Sparse coding is a recently developed image representation method in signal processing.15,16 It shows that signals can be exactly reconstructed from fewer coefficients than the traditional methods require. In recent years, sparse coding has been studied quite extensively for image denoising. The existing image denoising methods based on sparse coding include two steps. First, image blocks are expressed by a linear combination of a few coefficients taken from the basis functions. Second, the noise is removed from the image because the noise distribution does not meet the sparse assumption in general. Recently, Dong et al.17 proposed a nonlocally centralized sparse representation denoising method combining nonlocal self-similarity and sparse representation of images. It has very powerful denoising performance. However, the K-means clustering and principal component analysis (PCA) used in this method are difficult to solve. Inspired by Dong's method, Nejati et al.18 proposed a denoising method based on sparsity and low-rank representation that also took advantage of nonlocal self-similarity and image sparsity. Liu et al.19 proposed a correlation adaptive sparse model for image denoising. Their model adaptively selects different image data using local and nonlocal correlation. In addition, Zhang and Li20 presented an image denoising and repair model based on compressed sensing theory, and the results demonstrated that their model effectively removes noise.

Deep learning was first used for image denoising in 2015.21 It does not require parameters to be manually set during the denoising process. Then, Mao et al.22 introduced multiple convolutions to remove noise and obtain a high-resolution image, and Zhang et al.23 proposed a flexible neural network for denoising that takes different noise levels and the image patch as input. Scetbon et al.24 presented an end-to-end deep architecture with the exact K-singular value decomposition (K-SVD) computational path and trained it to optimize denoising performance. Deep learning methods can improve denoising results; however, they place high hardware requirements on training deep networks.

Although various image denoising methods have been applied in numerous fields, few methods have been proposed for ultrasonic logging images. Motivated by this, we propose a novel variational Bayesian inference framework for ultrasonic logging image denoising using a sparse prior term. This prior assumes that the outputs of local sparse coefficients of the ultrasonic logging image obey a Laplacian distribution, which is demonstrated by many simulations and tests. The main contributions of this work lie in three aspects. First, we use the Laplacian distribution to simulate the sparse coefficients of ultrasonic logging images. Second, we introduce the sparse prior term into the variational Bayesian model to improve the denoising performance. Finally, semiquadratic regularization is used to solve the model. Moreover, during the denoising process, a relaxation factor is introduced to further improve the accuracy. The experiments demonstrate that our proposed algorithm achieves competitive performance compared with existing denoising algorithms, especially for ultrasonic logging images with severe noise.

This paper is organized as follows. Section 2 shows the image model and sparse coefficients of the ultrasonic logging images. The variational Bayesian denoising model is detailed in Sec. 3. Experimental results are shown in Sec. 4, and finally conclusions and future research directions are presented in Sec. 5.

2. Image Denoising Model and Sparse Coefficients of Image

2.1. Image Denoising Model

For convenience but without loss of generality, we use 1D notation to represent the imaging model, as shown in the following equation:

Eq. (1)

g=f+n,
where g denotes the observed image, f denotes the original image, and n denotes the additive noise.

To utilize the image sparsity, we define K filters D_k [assumed to form an orthonormal basis, e.g., the discrete cosine transform (DCT) or wavelets], and the outputs of these filters are given as

Eq. (2)

\varepsilon_k = D_k f, \qquad k = 1, \dots, K,
where ε_k satisfies the sparsity constraint and ε = [ε_1, …, ε_K]. Now we determine the appropriate statistical model to simulate the distribution of ε_k.
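As a concrete sketch of Eq. (2), the filter outputs can be computed with an orthonormal 2D DCT, one of the bases the text mentions; this is an illustrative choice, not necessarily the exact filter bank used in the paper, and the function names are our own:

```python
import numpy as np
from scipy.fft import dctn, idctn

def sparse_coefficients(f):
    """Filter outputs eps = D f of Eq. (2), using an orthonormal 2D DCT."""
    return dctn(f, norm="ortho")

def reconstruct(eps):
    """Inverse transform; for an orthonormal basis, f = D^T eps."""
    return idctn(eps, norm="ortho")

rng = np.random.default_rng(0)
f = rng.standard_normal((64, 64))
eps = sparse_coefficients(f)
# Orthonormality: the transform is perfectly invertible and energy-preserving.
assert np.allclose(reconstruct(eps), f)
assert np.isclose(np.sum(eps**2), np.sum(f**2))
```

For natural or logging images (as opposed to the random array above), most DCT coefficients are close to zero, which is the sparsity the model exploits.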

2.2. Statistical Model of Sparse Coefficients

Historically, sparse coding can be traced back to the proposition of wavelet transform25 and multiresolution analysis.26 Further, sparse coding can be successfully employed in image processing. To date, studies on sparsity can be divided primarily into two kinds: basis function and sparse coefficients. For basis functions, the main aim is to represent the sparse vector under a certain basis function. For sparse coefficients, the aim is to find an appropriate statistical model to express sparse coefficients.

We assume that the sparse coefficients ε_k are independent and identically distributed, with parameters θ_k = [θ_{k1}, θ_{k2}, …]. So the prior of ε_k is expressed as

Eq. (3)

P(\varepsilon) = \prod_k P(\varepsilon_k), \qquad P(\varepsilon_k) = \int_0^{\infty} P(\varepsilon_k \mid \theta_k)\, \mathrm{d}\theta_k.

As for the ultrasonic logging images, the distribution of the sparse coefficients obeys a heavy-tailed distribution, as shown in Fig. 1. First, the heavy-tailed distribution of the sparse coefficients arises because the filter outputs are usually small except in image edge areas; this heavy-tailed distribution can be regarded as a prior and used as a regularization term in the denoising model. Second, unlike natural (visual) images, whose sparse coefficients are often modeled by a Gaussian distribution, the sparse coefficients of ultrasonic logging images obey a different distribution.

Fig. 1

Heavy-tailed distribution of sparse coefficients of an ultrasonic logging image: (a) ultrasonic logging image and (b) the distribution of sparse coefficients (with the horizontal axis representing the value of sparse coefficients and the vertical axis representing the probability density distribution).


As a family of statistical models, the Gaussian distribution, Laplacian distribution, and Student-t distribution all obey a heavy-tailed distribution. To determine the appropriate statistical model to simulate the distribution of sparse coefficients, we randomly choose 100 ultrasonic logging images (samples are shown in Fig. 2) and utilize the above three distributions to simulate their sparse coefficients. The simulated results are shown in Fig. 3.

Fig. 2

(a)–(f) Samples of ultrasonic logging images.


Fig. 3

Simulation of sparse coefficients of the ultrasonic logging images in Fig. 2. (a) The simulation of Fig. 2(a), (b) the simulation of Fig. 2(b), (c) the simulation of Fig. 2(c), (d) the simulation of Fig. 2(d), (e) the simulation of Fig. 2(e), and (f) the simulation of Fig. 2(f) (with the horizontal axis representing the value of sparse coefficients and the vertical axis representing the probability density distribution).


As seen from Fig. 3, the three simulated results differ obviously. The Gaussian distribution matches well with the center of the original data, and the Student-t distribution matches well with the kurtosis of the original data. By contrast, the Laplacian distribution matches well with both the center and the kurtosis of the original data. Moreover, we use the root mean square error (RMSE) to quantitatively evaluate the simulated results. RMSE is defined as

Eq. (4)

\mathrm{RMSE} = \sqrt{\frac{1}{Num}\sum_i (Y_{si} - Y_{oi})^2},
where Y_{si} are the simulated values, Y_{oi} are the original values, and Num is the length of Y_{si}. As seen from Table 1, the results simulated by the Laplacian distribution are better than those of the other two distributions. Therefore, we use the Laplacian distribution as the prior term of the sparse coefficients, expressed as

Eq. (5)

P(\varepsilon_k \mid \theta_k) = \frac{1}{2\theta_{k1}}\exp\!\left(-\frac{|\varepsilon_k - \theta_{k2}|}{\theta_{k1}}\right).

Table 1

RMSE of the simulated results of all images. (The smallest RMSE is marked in bold.)

| Images | Gaussian simulation | Laplacian simulation | Student-t simulation |
|---|---|---|---|
| Fig. 2(a) | 6.572 | 1.329 | 9.943 |
| Fig. 2(b) | 2.336 | 2.189 | 11.832 |
| Fig. 2(c) | 9.908 | 3.830 | 9.419 |
| Fig. 2(d) | 2.536 | 0.929 | 9.925 |
| Fig. 2(e) | 4.093 | 1.090 | 10.070 |
| Fig. 2(f) | 12.371 | 3.052 | 4.135 |
| Average of all images | 5.125 | 2.319 | 8.532 |

All RMSE values are in units of 10^−3.

Existing models always assume the mean θ_{k2} = 0, which is inconsistent with the actual data. Therefore, in our model, the mean θ_{k2} ≠ 0. However, solving this prior term is difficult because its normalization constant cannot be found in closed form. One contribution of this paper is that a constrained variational Bayesian model is derived for solving this problem. This algorithm is described in detail next.
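The model-selection step described above can be sketched as follows: fit each candidate distribution to the coefficients and compare the RMSE of Eq. (4) between the fitted density and the empirical histogram. The synthetic coefficients and the `density_rmse` helper are illustrative stand-ins for the paper's real logging data:

```python
import numpy as np
from scipy import stats

def density_rmse(samples, dist, bins=101):
    """RMSE (Eq. 4) between the empirical density of the coefficients
    and a density fitted by maximum likelihood."""
    params = dist.fit(samples)
    hist, edges = np.histogram(samples, bins=bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    model = dist.pdf(centers, *params)
    return np.sqrt(np.mean((model - hist) ** 2))

rng = np.random.default_rng(0)
# Stand-in for the filter outputs of a logging image: heavy-tailed,
# with a small nonzero mean (theta_k2 != 0, as argued in the text).
coeffs = rng.laplace(loc=0.05, scale=0.4, size=20000)

rmse_gauss = density_rmse(coeffs, stats.norm)
rmse_laplace = density_rmse(coeffs, stats.laplace)
# A Laplacian fit should match heavy-tailed coefficients more closely.
assert rmse_laplace < rmse_gauss
```

On real coefficients the same comparison would reproduce the kind of ranking reported in Table 1.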

3. Variational Bayesian Model for Denoising

3.1. Hierarchical Bayesian Analysis

The model Eq. (1) is converted into the following form via Eq. (2):

Eq. (6)

D_k g = D_k f + D_k n, \qquad k = 1, 2, \dots, K.

So we use the commuting property and rewrite the model as

Eq. (7)

w_k = \varepsilon_k + n_k, \qquad k = 1, 2, \dots, K,
where w_k = D_k g denotes the filter outputs of the observed image and n_k denotes the filter outputs of the additive noise, with n_k ∼ N(0, σ_n^2). As for the Laplacian prior, two parameters θ_k = [θ_{k1}, θ_{k2}] must be estimated. To simplify the computation, we assume the parameters (θ_{k1} = θ_k, θ_{k2} = u_k) are directly calculated from the sparse coefficients. Then the Laplacian prior term is written in a simplified form as

Eq. (8)

P(\varepsilon_k \mid \theta_k) = \frac{1}{2\theta_k}\exp\!\left(-\frac{|\varepsilon_k - u_k|}{\theta_k}\right).

The maximum a posteriori estimator is obtained as

Eq. (9)

P(g, \varepsilon, \theta) = P(g \mid \varepsilon, \theta)\, P(\varepsilon, \theta) = P(g \mid \varepsilon, \theta)\, P(\varepsilon \mid \theta)\, P(\theta).

As for the parameters, Jeffreys' prior has been proposed as an appropriate choice, that is,

Eq. (10)

P(\theta_k) = \frac{1}{\theta_k}.

Therefore, Eq. (9) is transformed into a new form using Eqs. (7), (8), and (10):

Eq. (11)

J(\varepsilon, \theta) = \arg\max \log[P(g \mid \varepsilon, \theta) P(\varepsilon \mid \theta) P(\theta)] = \arg\min_{\varepsilon, \theta} \frac{1}{2\sigma_n^2}\|Dg - \varepsilon\|_2^2 + \sum_k \log(2\theta_k) + \sum_k \frac{|\varepsilon_k - u_k|}{\theta_k} + \sum_k \log\theta_k,
where J is a cost function. For Jeffreys' prior, \log\theta_k \to -\infty as \theta_k \to 0, which makes Eq. (11) unstable. So we use \log(\theta_k + \delta) instead of \log(\theta_k), where \delta is a very small number that improves the stability of Eq. (11). In this case, the model Eq. (11) is written in the following form:

Eq. (12)

J(\varepsilon, \theta) = \arg\min_{\varepsilon, \theta} \frac{1}{2\sigma_n^2}\|Dg - \varepsilon\|_2^2 + 2\sum_k \log(\theta_k + \delta) + \sum_k \frac{|\varepsilon_k - u_k|}{\theta_k}.

For the Laplacian prior term, we let ε = ρυ and u = ρμ, where ρ = diag(θ_k) is a diagonal matrix that normalizes ε of the image block. Then the Bayesian model is converted into the joint estimation J(υ, θ) as

Eq. (13)

J(\upsilon, \theta) = \arg\min_{\upsilon, \theta} \frac{1}{2\sigma_n^2}\|Dg - \rho\upsilon\|_2^2 + 2\sum_k \log(\theta_k + \delta) + \|\upsilon - \mu\|_1.

Unlike Chantas et al.,27 who utilized the Student-t distribution as the prior term and solved the variational Bayesian model by approximating a lower bound on the logarithm of the marginal likelihood, or Shanthi et al.,28 who utilized the Gaussian scale mixture distribution as the prior term and converted their model into an L2-norm, we explicitly utilize the general Laplacian distribution as the prior term and convert the sparse model into an L1-norm. In fact, compared with the L2-norm, the L1-norm is better suited to image sparsity. Such a sparse model is appealing and motivates us to further exploit the solution.

3.2. Solving the Variational Bayesian Model

For the sparse coefficients, it is necessary to extract similar blocks from the observed image, whose sparse coefficients ε should be simulated by the same prior. Thus, these similar blocks are combined to extend Eq. (13) as

Eq. (14)

J(Y, \theta) = \arg\min_{Y, \theta} \frac{1}{2\sigma_n^2}\|DG - \rho Y\|_2^2 + 2\sum_k \log(\theta_k + \delta) + \|Y - M\|_1,
where G = [g_1, g_2, …, g_n] is the collection of n similar blocks. Accordingly, Y = [υ_1, υ_2, …, υ_n] and M = [μ_1, μ_2, …, μ_n] are the collections of υ and μ, respectively.

For the model Eq. (14), there exists a very popular approach for minimizing J called alternating minimization, that is, one starts with some initial guess, and then one successively obtains the alternating sequence of conditional minimizers.

3.2.1. Optimization of θ

For the optimization of θ, it is simplified by fixing Y as shown in the following equation:

Eq. (15)

\theta = \arg\min_{\theta} \frac{1}{2\sigma_n^2}\|DG - \rho Y\|_2^2 + 2\sum_k \log(\theta_k + \delta).

Moreover, Eq. (15) is simplified as

Eq. (16)

\theta = \arg\min_{\theta} \frac{1}{2\sigma_n^2}\Big\|W - \sum_{k=1}^{K} \upsilon_k \theta_k\Big\|_2^2 + 2\sum_k \log(\theta_k + \delta),
where W = DG. For a more intuitive form, the equation is rewritten as

Eq. (17)

\theta = \arg\min_{\theta} f(\theta) = \arg\min_{\theta} \sum_k \big[a_k\theta_k^2 + b_k\theta_k + c\log(\theta_k + \delta)\big].

The optimization of θ is transformed into the minimization of the function f(θ). The function is further decomposed into a series of subproblems as

Eq. (18)

\theta_k = \arg\min_{\theta_k} f(\theta_k) = \arg\min_{\theta_k} a_k\theta_k^2 + b_k\theta_k + c\log(\theta_k + \delta).

Obviously, this is solved by taking the derivative of f(θ_k) with respect to θ_k and setting it to zero, as follows:

Eq. (19)

\frac{\mathrm{d}f(\theta_k)}{\mathrm{d}\theta_k} = 2a_k\theta_k + b_k + \frac{c}{\theta_k + \delta} = 0.

Then, when Δ = b_k^2 − 8a_k c ≥ 0, the solution of Eq. (19) is given as

Eq. (20)

\theta_{k1,2} = \frac{-b_k \pm \sqrt{b_k^2 - 8a_k c}}{4a_k}.

So the minimum of f(θ_k) is min(f(θ_{k1}), f(θ_{k2})). In contrast, if Δ = b_k^2 − 8a_k c < 0, the function f(θ_k) is monotone. Then the minimum of f(θ_k) is f(0), and the optimization of θ_k is given as

Eq. (21)

\theta_k = \begin{cases} 0, & \Delta = b_k^2 - 8a_k c < 0 \\ \arg\min_{\theta_k}\{f(\theta_{k1}), f(\theta_{k2})\}, & \Delta = b_k^2 - 8a_k c \ge 0. \end{cases}
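The θ-update of Eqs. (19)–(21) can be sketched as follows, with δ kept in the objective but neglected in the roots, as in Eq. (20); the function name `optimize_theta_k` and its inputs a_k, b_k, c are named here for illustration:

```python
import numpy as np

def optimize_theta_k(a_k, b_k, c, delta=1e-8):
    """Minimizer of f(theta) = a*theta^2 + b*theta + c*log(theta + delta)
    over theta >= 0, following Eqs. (19)-(21)."""
    f = lambda t: a_k * t**2 + b_k * t + c * np.log(t + delta)
    disc = b_k**2 - 8.0 * a_k * c          # the discriminant Delta
    if disc < 0:                           # no real stationary point: theta_k = 0
        return 0.0
    roots = [(-b_k + np.sqrt(disc)) / (4.0 * a_k),
             (-b_k - np.sqrt(disc)) / (4.0 * a_k)]
    candidates = [t for t in roots if t > 0]
    if not candidates:
        return 0.0
    return min(candidates, key=f)          # argmin over {f(theta_k1), f(theta_k2)}

# With a=1, b=-4, c=1, the stationary points are (4 +- sqrt(8))/4,
# and the larger root attains the smaller objective value.
theta_star = optimize_theta_k(1.0, -4.0, 1.0)
assert abs(theta_star - (4.0 + np.sqrt(8.0)) / 4.0) < 1e-6
assert optimize_theta_k(1.0, 1.0, 1.0) == 0.0   # Delta < 0 case
```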

3.2.2. Optimization of Y

For the optimization of Y, it is simplified by fixing θ, shown as

Eq. (22)

Y = \arg\min_{Y} \frac{1}{2\sigma_n^2}\|DG - \rho Y\|_2^2 + \|Y - M\|_1.

To solve Eq. (22), this algorithm is derived from the well-known variable-splitting and penalty techniques. Therefore, according to semiquadratic regularization, we introduce a relaxation factor to transform Eq. (22) into an equivalent form as follows:

Eq. (23)

(Y, z) = \arg\min_{Y, z} \frac{1}{2\sigma_n^2}\|DG - \rho Y\|_2^2 + \frac{\beta}{2}\|z - (Y - M)\|_2^2 + \|z\|_1.

Proposition 1.

As β → ∞, the optimization of Eq. (23) is equivalent to the optimization of Eq. (22).

Proof:

As β → ∞, if ‖z − (Y − M)‖_2^2 → 0 does not hold, then β‖z − (Y − M)‖_2^2 → ∞. This is inconsistent with the minimization of Eq. (23). So ‖z − (Y − M)‖_2^2 → 0 holds, and thus z → Y − M. Namely, when β → ∞, Eq. (23) is equivalent to Eq. (22).

The optimization of Eq. (23) is decomposed into two steps: the optimization of z and the optimization of Y.

  • (1) The optimization of z

For a fixed Y, the optimization of Eq. (23) is further simplified into the following form:

Eq. (24)

z = \arg\min_{z} \frac{\beta}{2}\|z - (Y - M)\|_2^2 + \|z\|_1.

For this, the unique minimum is given by the following two-dimensional shrinkage equation:

Eq. (25)

z = \max\left\{\|Y - M\| - \frac{1}{\beta},\, 0\right\}\frac{Y - M}{\|Y - M\|}.
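The shrinkage of Eq. (25) can be sketched as follows; this version applies the magnitude shrinkage element-wise to the residual Y − M, which is one common reading of the operator:

```python
import numpy as np

def shrink(v, tau, eps=1e-12):
    """Shrinkage of Eq. (25): z = max(|v| - tau, 0) * v / |v|,
    applied to the residual v = Y - M with threshold tau = 1/beta."""
    mag = np.abs(v)
    return np.maximum(mag - tau, 0.0) * v / np.maximum(mag, eps)

v = np.array([-2.0, -0.3, 0.0, 0.4, 3.0])
z = shrink(v, 0.5)   # -> [-1.5, 0.0, 0.0, 0.0, 2.5]
```

Values whose magnitude falls below the threshold 1/β are set exactly to zero, which is what produces the sparsity of z.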

  • (2) The optimization of Y

In contrast, for a fixed z, the optimization of Eq. (23) is also simplified as shown in the following equation:

Eq. (26)

Y = \arg\min_{Y} \frac{1}{2\sigma_n^2}\|DG - \rho Y\|_2^2 + \frac{\beta}{2}\|z - (Y - M)\|_2^2.

Obviously, Eq. (26) is quadratic for Y, and the minimum of Y is given by the normal equations:

Eq. (27)

Y = \frac{\rho^{\mathrm{T}}DG + \sigma_n^2\beta(z + M)}{\rho^{\mathrm{T}}\rho + \sigma_n^2\beta I},
where I is the identity matrix. Noting that ρ = diag(θ_k) is diagonal, ρ^T ρ + σ_n^2 β I is also a diagonal matrix, and therefore its inverse can be easily computed.
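Because ρ is diagonal, the Y-update of Eq. (27) reduces to an element-wise division; a sketch, with `W = DG` assumed precomputed and the function name chosen for illustration:

```python
import numpy as np

def update_Y(theta, W, z, M, sigma_n, beta):
    """Closed-form Y-update of Eq. (27). With rho = diag(theta), the matrix
    rho^T rho + sigma_n^2*beta*I is diagonal, so the normal equations
    are solved element-wise; W = DG are the filter outputs of the blocks."""
    t = theta[:, None]                       # broadcast over the n blocks
    return (t * W + sigma_n**2 * beta * (z + M)) / (t**2 + sigma_n**2 * beta)

# Sanity check: the result should zero the gradient of the quadratic
# objective of Eq. (26).
rng = np.random.default_rng(1)
theta = np.array([0.5, 2.0])
W, z, M = (rng.standard_normal((2, 3)) for _ in range(3))
Y = update_Y(theta, W, z, M, sigma_n=0.5, beta=2.0)
grad = theta[:, None] * (theta[:, None] * Y - W) / 0.5**2 + 2.0 * (Y - z - M)
assert np.allclose(grad, 0.0)
```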

By Eqs. (21), (25), and (27), the denoised result of the noisy image is shown as

Eq. (28)

f = D\rho Y.

3.3. Steps of the Method and Parameters Setting

The steps of this method are summarized below.

  • (1) Initialize f and D.

  • (2) Obtain the initial guess for Y using Y = [υ_1, υ_2, …, υ_n], ε = ρυ, and ε = Df.

  • (3) Optimize θ using Eq. (21).

  • (4) Set the initial value β = 1.

  • (5) Optimize z and Y using Eqs. (25) and (27); then set β = 3β.

  • (6) If β < 10^7, go to step (5).

  • (7) If ‖Y^{n+1} − Y^n‖/‖Y^{n+1}‖ > ξ, go to step (3).

  • (8) Obtain the denoised result f using Eq. (28).

The parameter σ_n is empirically estimated, as shown in Refs. 29 and 30. In addition, as discussed in Sec. 3.2.2, when β → ∞, Eq. (23) is equivalent to Eq. (22). However, the larger β is, the slower the convergence rate is. Considering both the convergence rate and the equivalence of the functions, the parameter β is set as an increasing sequence. The initial value is 1, and the maximum value is 10^6 to 10^7. After every iteration of z and Y, the value is updated as β = 3β.
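The steps above can be sketched as a toy loop on a single coefficient vector, with θ held fixed so that only the inner z/Y passes and the β continuation schedule are shown; all names and the test signal below are illustrative, not the paper's implementation:

```python
import numpy as np

def shrink(v, tau, eps=1e-12):
    """Shrinkage of Eq. (25) applied element-wise to the residual."""
    mag = np.abs(v)
    return np.maximum(mag - tau, 0.0) * v / np.maximum(mag, eps)

def lpsr_iterations(W, theta, M, sigma_n, xi=1e-4, beta_max=1e7, max_outer=50):
    """Sketch of steps 2 and 4-7 with theta held fixed (step 3 assumed done):
    inner z/Y passes under the schedule beta <- 3*beta, repeated until the
    relative change of Y falls below xi."""
    Y = W / np.maximum(theta, 1e-8)          # step 2: initial guess
    for _ in range(max_outer):
        Y_old = Y.copy()
        beta = 1.0                            # step 4
        while beta < beta_max:                # steps 5 and 6
            z = shrink(Y - M, 1.0 / beta)                        # Eq. (25)
            Y = (theta * W + sigma_n**2 * beta * (z + M)) / (
                theta**2 + sigma_n**2 * beta)                    # Eq. (27)
            beta *= 3.0
        if np.linalg.norm(Y - Y_old) <= xi * np.linalg.norm(Y):  # step 7
            break
    return Y

rng = np.random.default_rng(0)
theta = np.full(16, 0.8)
eps_true = np.zeros(16)
eps_true[3] = 2.0                             # one large sparse coefficient
W = eps_true + 0.1 * rng.standard_normal(16)  # noisy filter outputs
Y = lpsr_iterations(W, theta, M=np.zeros(16), sigma_n=0.1)
```

Small coefficients are shrunk toward zero while the large coefficient survives, which is the qualitative behavior the full method relies on.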

4. Experimental Results and Analysis

Now the experiments are presented to demonstrate the denoising performance for ultrasonic logging images. Moreover, we compare our method with three other denoising methods.

  • (1) The image denoising method based on the Gaussian scale mixture model proposed in Ref. 28, called wavelet and Gaussian scale mixture (WGSM).

  • (2) The method based on an adaptive threshold and optimized weighted median filter proposed in Ref. 8, called threshold and weighted median filter (TWMF).

  • (3) The method using nonconvex anisotropic total variation regularization proposed in Ref. 14, called nonconvex anisotropic total variation (NCATV).

4.1. Denoising for Synthetic Noisy Images with Different Noise

In this section, we employ the above three methods and our proposed method [called Laplacian prior and sparse regularization (LPSR)] on six images contaminated by different noise levels. These six original ultrasonic logging images are shown in Fig. 2. To comprehensively evaluate the denoised image quality, peak signal to noise ratio (PSNR) and structural similarity (SSIM) are used as the evaluation criteria.
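PSNR can be computed directly from its definition; a minimal sketch for 8-bit images (SSIM is usually taken from an existing implementation, e.g., scikit-image, and is omitted here):

```python
import numpy as np

def psnr(clean, denoised, peak=255.0):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE)."""
    diff = clean.astype(np.float64) - denoised.astype(np.float64)
    mse = np.mean(diff ** 2)
    return 10.0 * np.log10(peak**2 / mse)

clean = np.full((8, 8), 100.0)
noisy = clean + 5.0              # constant error of 5 gray levels
# MSE = 25, so PSNR = 10*log10(255^2/25) ~ 34.15 dB
val = psnr(clean, noisy)
```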

In this section, several experiments are reported to validate the denoising performance. As stated above, we test the other three denoising algorithms on these ultrasonic logging images and show samples of the results for a visual comparison.

Generally speaking, when the standard deviation of the noise is more than 50, the noisy image is considered severely noisy. In our experiments, these six images are contaminated by different noise levels with standard deviations of 20, 40, 60, and 80. The PSNR and SSIM of all denoised images are shown in Table 2, with the best results marked in bold.

Table 2

PSNR and SSIM of the algorithms for all test images at different noise levels. (The best results are marked in bold.)

Each entry is PSNR/SSIM.

| Method | Images | σn=20 | σn=40 | σn=60 | σn=80 |
|---|---|---|---|---|---|
| WGSM | Fig. 2(a) | 28.9091/0.9880 | 20.8334/0.9162 | 16.4061/0.8072 | 13.4069/0.6658 |
| TWMF | Fig. 2(a) | 24.5454/0.9712 | 23.7430/0.9571 | 22.7220/0.9455 | 21.0336/0.9327 |
| NCATV | Fig. 2(a) | 32.0974/0.9941 | 29.3981/0.9897 | 27.4945/0.9860 | 24.8067/0.9636 |
| LPSR | Fig. 2(a) | 33.0667/0.9956 | 30.6525/0.9924 | 28.7487/0.9900 | 25.9856/0.9715 |
| WGSM | Fig. 2(b) | 29.2242/0.9875 | 20.8794/0.9145 | 16.5157/0.7963 | 13.4717/0.6435 |
| TWMF | Fig. 2(b) | 25.8456/0.9770 | 24.2517/0.9652 | 22.6098/0.9503 | 20.7090/0.9324 |
| NCATV | Fig. 2(b) | 33.1817/0.9957 | 29.8544/0.9912 | 27.4599/0.9845 | 24.5384/0.9651 |
| LPSR | Fig. 2(b) | 33.9031/0.9963 | 30.9061/0.9929 | 28.6020/0.9884 | 25.6038/0.9763 |
| WGSM | Fig. 2(c) | 26.5659/0.9868 | 20.4561/0.9419 | 16.2538/0.8486 | 13.2866/0.7089 |
| TWMF | Fig. 2(c) | 23.6687/0.9815 | 22.7173/0.9763 | 21.7972/0.9700 | 20.9506/0.9643 |
| NCATV | Fig. 2(c) | 30.0280/0.9942 | 27.4387/0.9903 | 25.1755/0.9846 | 23.4203/0.9765 |
| LPSR | Fig. 2(c) | 30.8152/0.9952 | 28.2170/0.9918 | 26.1760/0.9877 | 24.4681/0.9821 |
| WGSM | Fig. 2(d) | 30.4465/0.9951 | 20.6872/0.9514 | 16.1782/0.8652 | 13.2636/0.7334 |
| TWMF | Fig. 2(d) | 26.8365/0.9915 | 26.3157/0.9909 | 25.8247/0.9902 | 25.9286/0.9908 |
| NCATV | Fig. 2(d) | 34.1903/0.9981 | 31.3503/0.9965 | 29.2408/0.9947 | 26.9758/0.9911 |
| LPSR | Fig. 2(d) | 35.5036/0.9985 | 32.6126/0.9973 | 30.3315/0.9956 | 27.7447/0.9924 |
| WGSM | Fig. 2(e) | 30.3808/0.9858 | 21.0097/0.8978 | 16.4960/0.7715 | 13.4282/0.6239 |
| TWMF | Fig. 2(e) | 25.6654/0.9680 | 24.3234/0.9496 | 22.5329/0.9291 | 20.0456/0.9039 |
| NCATV | Fig. 2(e) | 33.1045/0.9944 | 30.8025/0.9907 | 28.6164/0.9826 | 25.4824/0.9600 |
| LPSR | Fig. 2(e) | 34.0373/0.9951 | 31.7769/0.9923 | 29.7165/0.9883 | 26.7546/0.9715 |
| WGSM | Fig. 2(f) | 26.6814/0.9786 | 20.8414/0.9183 | 16.4542/0.7910 | 13.2892/0.6250 |
| TWMF | Fig. 2(f) | 22.0750/0.9509 | 21.0595/0.9375 | 19.9551/0.9224 | 18.2940/0.9041 |
| NCATV | Fig. 2(f) | 29.2758/0.9885 | 27.0489/0.9816 | 25.7069/0.9747 | 23.1234/0.9572 |
| LPSR | Fig. 2(f) | 29.8557/0.9892 | 27.8344/0.9842 | 26.4979/0.9793 | 24.3927/0.9663 |

As seen from Table 2, compared with the other algorithms, our proposed LPSR method obtained the best denoising performance: in the majority of cases, the highest PSNR and SSIM values belong to the proposed LPSR. In addition, the PSNR and SSIM of the WGSM and TWMF methods are significantly lower than those of the NCATV method and LPSR (our proposed method), which demonstrates that WGSM and TWMF are ineffective on these ultrasonic logging images, especially the images with severe noise (σn=60 and σn=80). Several reasons for these improvements were found. First, the WGSM and TWMF methods follow the hypothesis that the local sparse coefficients of an image obey a Gaussian distribution. However, as demonstrated in Sec. 2, for ultrasonic logging images, a Laplacian distribution is more appropriate. Therefore, our method introduces the Laplacian distribution as a prior term and obtains the better performance. Second, the NCATV method, like most total variation methods, generates some block effects, which lower the evaluation values.

In terms of subjective vision, the denoised images of Fig. 2(d) at medium noise (σn=20) are shown in Fig. 4. Two zoomed regions of these images are shown to the right of each result. As seen from Fig. 4, the denoised results of all algorithms have a good subjective visual quality at the medium noise level, especially for our proposed LPSR method. These denoised images have few artifacts in the smooth regions. Of course, there are slight differences among these algorithms. The WGSM and TWMF methods more easily generate blurred edges because they cannot preserve the edge details effectively.

Fig. 4

Denoising performance comparison on Fig. 2(d) with noise corruption. (a) Original image; (b) noisy image (σn=20); denoised images by (c) WGSM (PSNR=30.4665); (d) TWMF (PSNR=26.8365); (e) NCATV (PSNR=34.1903); and (f) LPSR (our method, PSNR=35.5036).


Similarly, the denoised images of Fig. 2(f) at severe noise (σn=80) are shown in Fig. 5. It is observed that the proposed LPSR outperforms the other three algorithms in preserving smooth regions and image edges and obtains the most visually pleasant results that have fewer artifacts and clearer edges. We also observe that the improvement is easier to distinguish when the noise contamination is severe, especially for the zoomed regions of these denoised images.

Fig. 5

Denoising performance comparison on Fig. 2(f) with noise corruption. (a) Original image; (b) noisy image (σn=80); denoised images by (c) WGSM (PSNR=13.2892); (d) TWMF (PSNR=18.2940); (e) NCATV (PSNR=23.1234); and (f) LPSR (our method, PSNR=24.3927).


4.2. Denoising for Real Ultrasonic Logging Images

In this section, we test our method on various real ultrasonic logging images. All images are denoised using the WGSM, TWMF, and NCATV14 methods and our proposed LPSR. The real ultrasonic logging images and their denoised results are respectively shown in Figs. 6–8.

Fig. 6

Denoising performance comparison on the real ultrasonic logging images with noise. (a) Noisy image, denoised images by (b) WGSM, (c) TWMF, (d) NCATV, and (e) LPSR (our method).


Fig. 7

Denoising performance comparison on the real ultrasonic logging images with noise. (a) Noisy image; denoised images by (b) WGSM; (c) TWMF; (d) NCATV; and (e) LPSR (our method).


Fig. 8

Denoising performance comparison on the real ultrasonic logging images with noise. (a) Noisy image; denoised images by (b) WGSM, (c) TWMF, (d) NCATV, and (e) LPSR (our method).


First, for Fig. 6, all denoised results exhibit clearer image details compared with the noisy image. However, it is seen from the enlarged rectangle that the result of WGSM [Fig. 6(b)] still contains significant noise. In Figs. 6(c) and 6(d), the details in the denoised results are oversmoothed. In contrast, our proposed method [Fig. 6(e)] produces a visually better result over the smooth regions. Second, for Fig. 7, the small fractures in our denoised result [Fig. 7(e)] are the clearest, especially in the enlarged rectangle. The same denoising effects are also shown in Fig. 8. In particular, the denoising of ultrasonic logging images is challenging; nevertheless, our results exhibit sharper details and fewer ringing artifacts compared with the other methods.

5. Conclusions and Future Work

In this paper, an image denoising method is proposed for ultrasonic logging images with severe noise. The denoised images of our method have clearer edges and fewer artifacts. The success of our method benefits from three aspects. First, the sparse coefficients are simulated by a more appropriate distribution, the Laplacian distribution. Second, we use the Laplacian distribution as a prior term and propose the variational Bayesian denoising model. Finally, a relaxation factor is introduced to solve the proposed model. Numerical experiments demonstrate that the proposed algorithm outperforms previous algorithms in terms of both visual quality and objective evaluation.

However, due to the complexity of solving the variational Bayesian model, removing noise takes too much time when the noisy image is large. For example, it takes about 280 s to denoise a 512×512 image with medium noise (σn=20). This drawback limits the applicability of our method, so future research will focus on parallel implementations to decrease the running time.

Acknowledgments

We would like to thank the National Natural Science Foundation of China (Grant Nos. 61901059 and 51978079).

References

1. M. Rajalakshmi and K. Annapurani, "Performance evaluation of various filters for noise removal on near infrared palm dorsal vascular images," in 3rd Int. Conf. Intell. Sustain. Syst. (ICISS) (2020).

2. P. Li et al., "Joint image denoising with gradient direction and edge-preserving regularization," Pattern Recognit. 125, 108506 (2022). https://doi.org/10.1016/j.patcog.2021.108506

3. O. Yang et al., "Single image denoising via multi-scale weighted group sparse coding," Signal Process. 200, 108650 (2022). https://doi.org/10.1016/j.sigpro.2022.108650

4. J. Chi et al., "Single low-dose CT image denoising using a generative adversarial network with modified U-Net generator and multi-level discriminator," IEEE Access 8, 133470–133487 (2020). https://doi.org/10.1109/ACCESS.2020.3006512

5. R. Garnett et al., "A universal noise removal algorithm with an impulse detector," IEEE Trans. Image Process. 14(11), 1747–1754 (2005). https://doi.org/10.1109/TIP.2005.857261

6. H. Li and C. Y. Suen, "A novel non-local means image denoising method based on grey theory," Pattern Recognit. 49, 237–248 (2016). https://doi.org/10.1016/j.patcog.2015.05.028

7. F. Li and J. Wang, "Research on image denoising algorithm based on improved wavelet threshold and non-local mean filtering," in IEEE 6th Int. Conf. Signal and Image Process. (ICSIP) (2021). https://doi.org/10.1109/ICSIP52628.2021.9688900

8. S. Guo et al., "COVID-19 CT image denoising algorithm based on adaptive threshold and optimized weighted median filter," Biomed. Signal Process. Control 75, 103552 (2022). https://doi.org/10.1016/j.bspc.2022.103552

9. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D 60(1–4), 259–268 (1992). https://doi.org/10.1016/0167-2789(92)90242-F

10. Y. L. You and M. Kaveh, "Fourth-order partial differential equations for noise removal," IEEE Trans. Image Process. 9(10), 1723 (2000). https://doi.org/10.1109/83.869184

11. T. Chan, A. Marquina, and P. Mulet, High-Order Total Variation-Based Image Restoration, Society for Industrial and Applied Mathematics (2000).

12. H. R. Shahdoosti and S. M. Hazavei, "Combined ripplet and total variation image denoising methods using twin support vector machines," Multimedia Tools Appl. 77, 1–19 (2017). https://doi.org/10.1007/s11042-017-4618-9

13. W. Wang et al., "Vector total fractional-order variation and its applications for color image denoising and decomposition," Appl. Math. Modell. 72, 155–175 (2019). https://doi.org/10.1016/j.apm.2019.03.010

14. J. Guo and Q. Chen, "Image denoising based on nonconvex anisotropic total-variation regularization," Signal Process. 186, 108124 (2021). https://doi.org/10.1016/j.sigpro.2021.108124

15. D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory 52(4), 1289–1306 (2006). https://doi.org/10.1109/TIT.2006.871582

16. S. Agapiou et al., "Sparsity-promoting and edge-preserving maximum a posteriori estimators in non-parametric Bayesian inverse problems," Inverse Probl. 34, 045002 (2018). https://doi.org/10.1088/1361-6420/aaacac

17. W. Dong et al., "Nonlocally centralized sparse representation for image restoration," IEEE Trans. Image Process. 22(4), 1620–1630 (2013). https://doi.org/10.1109/TIP.2012.2235847

18. M. Nejati et al., "Denoising by low-rank and sparse representations," J. Vis. Commun. Image Represent. 36, 28–39 (2016). https://doi.org/10.1016/j.jvcir.2016.01.004

19. H. Liu, J. Zhang, and R. Xiong, "CAS: correlation adaptive sparse modeling for image denoising," IEEE Trans. Comput. Imaging 7, 638–647 (2021). https://doi.org/10.1109/TCI.2021.3083135

20. 

X. Zhang and Y. Li, “Research on image denoising algorithm based on compressed sensing,” in 2nd Int. Conf. Consum. Electron. Comput. Eng. (ICCECE), (2022). Google Scholar

21. 

J. Liang and R. Liu, “Stacked denoising autoencoder and dropout together to prevent overfitting in deep neural network,” in 8th Int. Congr. Image and Signal Process. (CISP), 697 –701 (2015). Google Scholar

22. 

X. Mao et al., “Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections,” in Adv. Neural Inf. Process. Syst., 2802 –2810 (2016). Google Scholar

23. 

K. Zhang et al., “Ffdnet: toward a fast and flexible solution for CNN-based image denoising,” IEEE Trans. Image Process., 27 (9), 4608 –4622 https://doi.org/10.1109/TIP.2018.2839891 IIPRE4 1057-7149 (2018). Google Scholar

24. 

M. Scetbon, M. Elad and P. Milanfar, “Deep K-SVD denoising,” IEEE Trans. Image Process., 30 (6), 5944 –5955 https://doi.org/10.1109/TIP.2021.3090531 IIPRE4 1057-7149 (2021). Google Scholar

25. 

I. Daubechies, “Orthonormal bases of compactly supported bases,” Commun. Pure Appl. Math., 41 909 –996 https://doi.org/10.1002/cpa.3160410705 CPMAMV 0010-3640 (1988). Google Scholar

26. 

S. Mallat, “Multiresolution approximations and wavelet orthonormal bases of l2(r),” Trans. Am. Math. Soc., 315 69 –87 https://doi.org/10.2307/2001373 (1989). Google Scholar

27. 

G. Chantas et al., “Variational Bayesian image restoration based on a product of t-distributions image prior,” IEEE Trans. Image Process., 17 (10), 1795 –1805 https://doi.org/10.1109/TIP.2008.2002828 IIPRE4 1057-7149 (2008). Google Scholar

28. 

S. Amala Shanthi, C. H. Sulochana and T. Latha, “Image denoising in hybrid wavelet and quincunx diamond filter bank domain based on Gaussian scale mixture model,” Comput. Electr. Eng., 46 (C), 384 –393 https://doi.org/10.1016/j.compeleceng.2015.02.002 CPEEBQ 0045-7906 (2015). Google Scholar

29. 

L. Y. Zhou and Z. B. Zhang, “No-reference image quality assessment based on noise, blurring and blocking effect,” Optik – Int. J. Light Electron. Opt., 125 (19), 5677 –5680 https://doi.org/10.1016/j.ijleo.2014.07.010 (2014). Google Scholar

30. 

S. Sudharson, T. Pratap and P. Kokil, “Noise level estimation for effective blind despeckling of medical ultrasound images,” Biomed. Signal Process. Control, 68 102744 https://doi.org/10.1016/j.bspc.2021.102744 (2021). Google Scholar

Biography

Hu Deng received his PhD in drilling engineering from Southwest Petroleum University, Chengdu, China, in 2004. Currently, he is a senior engineer at Drilling and Production Engineering Technology Research Institute, CNPC Chuanqing Drilling Engineering Co. Ltd., China. He is mainly engaged in the research and management of superior fast drilling, geo-engineering integration, and drilling automation and intelligence.

Guoquan Liu received his BE degree in automation from Jianghan Petroleum Institute, Jingzhou, China, in 1991. Currently, he is a senior engineer at the Engineering Technology Research Institute, CNPC Logging Co. Ltd., China. He is engaged in the research and development of production logging instruments and logging signal processing.

Luoyu Zhou received his BS degree in optical information science and technology from the University of Science and Technology of China, Hefei, China, in 2008 and his PhD in optical engineering from Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun, China, in 2013. Currently, he is an associate professor at the School of Electronics and Information, Yangtze University, China. His current research interests include image processing and artificial intelligence.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Hu Deng, Guoquan Liu, and Luoyu Zhou "Ultrasonic logging image denoising algorithm based on variational Bayesian and sparse prior," Journal of Electronic Imaging 32(1), 013004 (14 January 2023). https://doi.org/10.1117/1.JEI.32.1.013004
Received: 4 September 2022; Accepted: 28 December 2022; Published: 14 January 2023
Keywords: Ultrasonics, Denoising, Image denoising, Tunable filters, Simulations, Visualization, Contamination