Automatic inspection and reading system of pointer instrument based on computer vision
Open Access Paper, published 2 February 2023
Proceedings Volume 12462, Third International Symposium on Computer Engineering and Intelligent Communications (ISCEIC 2022); 1246209 (2023); https://doi.org/10.1117/12.2660824
Event: International Symposium on Computer Engineering and Intelligent Communications (ISCEIC 2022), 2022, Xi'an, China
Abstract
To address the background interference and incorrect viewing angles in images collected by inspection robots, a computer vision-based automatic inspection and reading system for pointer-type instruments is proposed, which automatically obtains pointer readings after instrument image detection and correction. First, the Centernet algorithm detects the pointer instrument in the image, and the instrument region is cropped out according to the detected position, removing the background. Then, key points are detected: a pair of symmetric key points is used to rotate the instrument upright with an affine transformation, after which two pairs of symmetric key points, matched against a template, are used to correct the dial's inclination with a perspective transformation. Finally, the reading is obtained with Otsu segmentation together with Hough circle and line detection. The experimental results show that the proportion of images requiring correction is 93%, the average error rate of the corrected instrument images is reduced by 10.19%, and the average reading accuracy reaches 97.02%, which meets practical application requirements.

1. INTRODUCTION

Large numbers of pointer instruments are used in mines, substations, chemical plants and other places to monitor equipment. Since most pointer instruments have no communication interface, their readings are usually recorded manually[1-2]. Manual reading is time-consuming and costly, and the accuracy of the recorded values depends on the working state and experience of the reading personnel. It is therefore of great significance to study a system that can read pointer instruments automatically.

Researchers at home and abroad have carried out research on the detection and reading of pointer-type meters to varying degrees. Li Jinhong et al.[3] used the Faster RCNN algorithm to detect dials and pointers, and applied binary segmentation, the Hough transform and other operations to the detected pointer images to obtain readings. Xu Li et al.[4] proposed an iterative maximum inter-class variance algorithm to address the fact that instrument images collected by inspection robots are easily affected by light and dust, extracting pointers under different interference conditions. Jiang Tao et al.[5] used bilateral filtering and a single-scale Retinex algorithm for feature enhancement and noise reduction, located the meter with a Mask RCNN convolutional neural network, and completed the reading with Hough transform detection. Ma Bo et al.[6] used prior knowledge to generate virtual samples for training and then combined a CNN with the virtual samples for automatic instrument identification. Shao et al.[7] proposed a pointer-type meter recognition method based on an improved EAST algorithm, replacing its backbone with the lightweight neural network MOGA to reduce the number of parameters. Robert Sablatnig et al.[8] proposed using the Hough transform to detect circular dials under specific constraints such as industrial environments. Corrêa Alegria et al.[9] proposed collecting two images of the same pointer-scale meter from the same angle and position, subtracting them, computing the rotation angle between the two extracted pointer regions, and using this angle to calibrate the meter. Wasiq Khan et al.[10] proposed a pointer recognition method based on convolution operations to address the impact of low resolution and illumination on pointer detection.

With the continuous development of technology, inspection robots have begun to be used for instrument image acquisition. Using inspection robots can greatly reduce costs and allows images to be acquired in complex environments and severe weather, but the collected instrument images still suffer from instrument position offset, background interference, dial tilt and other problems.

To solve the above problems, an automatic inspection and reading system for pointer-type meters based on computer vision is proposed. The system consists of four parts: instrument panel detection, instrument key point detection, instrument image correction and instrument reading recognition. The inspection robot collects images and sends them back to a workstation, where the images are detected and read. The method can be applied to a variety of instrumentation scenarios to meet practical needs.

2. OVERALL DESIGN PROCESS OF THE INSTRUMENTATION DETECTION SYSTEM

After the inspection robot starts working, the images it acquires are transmitted to the workstation in real time. First, the Centernet algorithm is used to detect the position of the meter dial, and then it is used again to detect the two pairs of key points that are symmetric about the dial centre. The correction operations eliminate the reading difficulties caused by angular deflection and tilt. After pre-processing the corrected instrument image with mean filtering and greyscaling, the dial is located with Hough circle detection, the pointer is segmented with the Otsu algorithm and detected with Hough line detection, and finally the reading is obtained by the angle method. The overall design flow of the instrument detection system is shown in Figure 1.

Figure 1. Flow chart of the meter reading system.

3. INSTRUMENT READING IDENTIFICATION METHOD

This paper proposes an instrument reading recognition method comprising instrument dial detection, instrument key point detection, instrument image correction and instrument reading recognition.

3.1 Instrument dial detection

In recent years, deep learning has been widely used in computer vision and has achieved many results in image segmentation, image classification, 3D reconstruction and object detection[11-13]. ResNet[14] introduced the idea of residual connections, shows good results in feature extraction, and has been widely used in image classification, segmentation and object detection. In this paper, ResNet-50 is used as the backbone network and the Centernet[15] algorithm is used to detect the instrument panel.

3.2 Instrument key point detection

Traditional processing methods for instrument key point detection are not satisfactory. A traditional detection algorithm mainly consists of three stages. First, a sliding window scans the entire image to generate target proposal boxes; this requires a large amount of computation, is inefficient and performs many repeated calculations. Then, features of the proposal boxes are extracted with hand-crafted descriptors; because of the diversity of instrument panel shapes and lighting conditions, it is difficult to design robust features, and the quality of the extracted features directly affects classification accuracy. Finally, a classifier completes the classification. Compared with traditional detection algorithms, the Centernet algorithm detects instrument key points efficiently, with less computation, strong robustness and good recognition results. Therefore, this paper uses a Centernet-based algorithm to detect the key points of the instrument, selecting two pairs of scale numbers that are symmetric about the centre as key points.

The predecessor of Centernet is Cornernet, proposed by Law et al.[16] in 2018, which introduced the idea of key point estimation into object detection. Cornernet is a single-stage detection method that treats a detection target as a pair of key points, namely the upper-left and lower-right corners of the bounding box, and obtains the prediction box by detecting these two corner points. No anchors are used, which removes the limitation that anchors impose on object size. The recognition accuracy of Cornernet surpasses that of Retinanet[17], but its speed has no advantage because of the large backbone network and complex post-processing.

Centernet consists of two main parts: the first is the feature extraction network, which uses ResNet-50; the second comprises heat map prediction, centre point offset prediction and width-height prediction, from which the detected target box is obtained. The input image is 512×512×3. Its features are extracted by ResNet-50, and a high-resolution output is then obtained through three upsampling (deconvolution) operations; after each deconvolution the height and width of the feature map double, so after the three deconvolutions a 128×128×64 high-resolution feature map is obtained. Three convolutions are applied to this feature map to obtain a heat map with C categories, a centre point offset prediction map with 2 channels, and a width-height prediction map with 2 channels. The structure of the algorithm is shown in Figure 2.
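As a minimal PyTorch sketch of the structure just described (not the authors' implementation), the decoder and heads could look as follows; a recent torchvision ResNet-50 is assumed, and the intermediate deconvolution widths (256 and 128) and head layer sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet50

class CenternetSketch(nn.Module):
    """Sketch: ResNet-50 features -> three deconvolutions -> heatmap/offset/size heads.
    For a 512x512x3 input the final feature map is 128x128x64, as in the paper."""
    def __init__(self, num_classes):
        super().__init__()
        backbone = resnet50(weights=None)
        # keep conv1..layer4 (stride 32): 512x512 input -> 2048 x 16 x 16 features
        self.backbone = nn.Sequential(*list(backbone.children())[:-2])
        # three deconvolutions, each doubling resolution: 16 -> 32 -> 64 -> 128
        layers, in_ch = [], 2048
        for out_ch in (256, 128, 64):
            layers += [nn.ConvTranspose2d(in_ch, out_ch, 4, stride=2, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            in_ch = out_ch
        self.deconv = nn.Sequential(*layers)

        def head(out_channels):
            return nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                                 nn.Conv2d(64, out_channels, 1))
        self.heatmap = head(num_classes)  # C-class key-point heat map
        self.offset = head(2)             # centre point offset (2 channels)
        self.size = head(2)               # box width and height (2 channels)

    def forward(self, x):
        feat = self.deconv(self.backbone(x))
        return torch.sigmoid(self.heatmap(feat)), self.offset(feat), self.size(feat)
```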

Figure 2. Centernet network structure diagram.

The loss function of the heat map is an improvement on Focal loss, where N is the number of key points in the image, α and β are hyperparameters set to 2 and 4 respectively, $Y_{xyc}$ is the ground-truth value of the sample, and $\hat{Y}_{xyc}$ is the predicted value. The heat map loss function is

$$L_{k}=\frac{-1}{N}\sum_{xyc}\begin{cases}\left(1-\hat{Y}_{xyc}\right)^{\alpha}\log\left(\hat{Y}_{xyc}\right), & Y_{xyc}=1\\ \left(1-Y_{xyc}\right)^{\beta}\left(\hat{Y}_{xyc}\right)^{\alpha}\log\left(1-\hat{Y}_{xyc}\right), & \text{otherwise}\end{cases}$$

The key points on the heat map are represented by a two-dimensional Gaussian kernel: the closer a location is to a key point, the closer its value is to 1, and vice versa. When $Y_{xyc}=1$, an easy-to-classify instrument sample has $\hat{Y}_{xyc}$ close to 1, so $\left(1-\hat{Y}_{xyc}\right)^{\alpha}$ is small and the resulting loss is small, which keeps the instrument key points correct; for a hard-to-classify instrument sample, the predicted value $\hat{Y}_{xyc}$ is close to 0, $\left(1-\hat{Y}_{xyc}\right)^{\alpha}$ is larger, and the corresponding training weight is increased. When $Y_{xyc}$ is not 1, $\left(\hat{Y}_{xyc}\right)^{\alpha}$ is used as a penalty term to prevent the predicted value from approaching 1; however, the factor $\left(1-Y_{xyc}\right)^{\beta}$ becomes smaller the closer the location is to the centre point of the meter, which further reduces the corresponding penalty.

The target box size and centre point offset both use the L1 loss function, where N is the number of instrument key points, $s_{k}$ is the real size of the instrument box, and $\hat{S}_{k}$ is the predicted instrument size. The target box size loss function is

$$L_{size}=\frac{1}{N}\sum_{k=1}^{N}\left|\hat{S}_{k}-s_{k}\right|$$

The total loss function of the Centernet network is the sum of the three partial losses corresponding to the heat map, the target box size and the centre point offset. Because the target box size loss is large, it is multiplied by a hyperparameter to reduce its contribution: $\lambda_{size}$ is set to 0.1 and $\lambda_{off}$ to 1. The loss function of the Centernet network is

$$L_{det}=L_{k}+\lambda_{size}L_{size}+\lambda_{off}L_{off}$$
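A sketch of how these three losses could be combined in PyTorch is given below; the tensor layout (dense feature-map predictions with a binary `mask` marking the N ground-truth key-point locations) and the clamping constant are assumptions, not details from the paper.

```python
import torch
import torch.nn.functional as F

def centernet_losses(pred_hm, gt_hm, pred_wh, gt_wh, pred_off, gt_off, mask,
                     alpha=2, beta=4, lambda_size=0.1, lambda_off=1.0):
    """Penalty-reduced focal loss on the heat map plus L1 losses for size and offset."""
    pos = gt_hm.eq(1).float()                 # ground-truth key-point locations
    neg = gt_hm.lt(1).float()                 # all other locations, Gaussian-weighted below
    num_pos = pos.sum().clamp(min=1)

    pred_hm = pred_hm.clamp(1e-6, 1 - 1e-6)   # avoid log(0)
    pos_loss = ((1 - pred_hm) ** alpha) * torch.log(pred_hm) * pos
    neg_loss = ((1 - gt_hm) ** beta) * (pred_hm ** alpha) * torch.log(1 - pred_hm) * neg
    heatmap_loss = -(pos_loss.sum() + neg_loss.sum()) / num_pos

    num_kp = mask.sum().clamp(min=1)
    size_loss = (F.l1_loss(pred_wh, gt_wh, reduction="none") * mask).sum() / num_kp
    offset_loss = (F.l1_loss(pred_off, gt_off, reduction="none") * mask).sum() / num_kp

    return heatmap_loss + lambda_size * size_loss + lambda_off * offset_loss
```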

3.3 Instrument image correction

When an instrument is first installed, the installation interface and the inspection shooting position mean that most instruments are not aligned with the central symmetry line, so the dial appears rotated. In addition, because the camera of the inspection robot and the instrument are not on the same horizontal line, the collected instrument image is inclined. The acquired instrument images therefore suffer from both rotation and tilt. This paper selects the initial-scale and maximum-scale points as key points and performs rotation correction with an affine transformation; after rotation correction, the positions of two pairs of scale key points in the new instrument image are computed, and this new key point information is used in a perspective transformation to complete tilt correction.

3.3.1 Instrument rotation correction

After detecting a pair of key points (the initial scale and the maximum scale) symmetric about the central symmetry line, the two points are connected and the angle between this line and the horizontal direction is calculated. With this rotation angle, the centre point of the image is chosen as the rotation centre and the rotation correction of the instrument is performed. The instrument rotation angle is shown in Figure 3.

Figure 3. Instrument rotation angle.

The principle of the affine transformation is to map an image linearly from one two-dimensional plane to another, leaving the relative positional relationships of its contents unchanged. The corresponding transformation matrix is shown below, where θ is the required rotation angle and x and y are the horizontal and vertical coordinates of a key point obtained by target detection.

$$\begin{bmatrix}x_{1}\\ y_{1}\\ 1\end{bmatrix}=\begin{bmatrix}\cos\theta & -\sin\theta & 0\\ \sin\theta & \cos\theta & 0\\ 0 & 0 & 1\end{bmatrix}\begin{bmatrix}x\\ y\\ 1\end{bmatrix}$$

Expanding the formula gives the coordinates $x_{1}$ and $y_{1}$ of the corresponding point in the rotated image:

$$x_{1}=x\cos\theta-y\sin\theta$$
$$y_{1}=x\sin\theta+y\cos\theta$$
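A minimal OpenCV sketch of this rotation correction is shown below, assuming the two key points are the (x, y) pixel coordinates of the initial-scale and maximum-scale points returned by the Centernet detector; the function name and the use of cv2.getRotationMatrix2D/cv2.warpAffine are illustrative, not the authors' code.

```python
import cv2
import numpy as np

def rotation_correct(image, pt_start, pt_end):
    """Bring the line joining the two symmetric scale key points back to the
    horizontal by an affine rotation about the image centre."""
    (x0, y0), (x1, y1) = pt_start, pt_end
    # angle between the key-point line and the horizontal (image coordinates, y down)
    theta = np.degrees(np.arctan2(y1 - y0, x1 - x0))
    h, w = image.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), theta, 1.0)  # 2x3 affine rotation matrix
    return cv2.warpAffine(image, M, (w, h)), M
```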

3.3.2 Instrument tilt correction

After the affine transformation, the instrument image may still be tilted, so a perspective transformation is used to correct the tilt. Compared with the affine transformation, the perspective transformation can transform an elliptical dial into a circular one. Its principle is to project a two-dimensional image onto a three-dimensional plane and then map it back onto a new two-dimensional plane. The transformation requires the positions of four key points before the transformation and of the corresponding four key points after it. The former are obtained from the rotation-corrected image; the latter come from a standard photograph of the instrument collected in advance, from which the horizontal and vertical coordinates of its four key points are taken, allowing the transformation matrix to be computed. The general expression of the perspective matrix is

$$A=\begin{bmatrix}A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\end{bmatrix}$$

Here $x_{1}$ and $y_{1}$ are the key point coordinates before the perspective transformation, $x_{2}$, $y_{2}$ and $z$ are the coordinates after projection onto the three-dimensional plane, and $A_{33}$ is set to 1:

$$\begin{bmatrix}x_{2} & y_{2} & z\end{bmatrix}=\begin{bmatrix}x_{1} & y_{1} & 1\end{bmatrix}\begin{bmatrix}A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33}\end{bmatrix}$$

$x_{3}$ and $y_{3}$ are the horizontal and vertical coordinates of the key points in the image after the perspective transformation. Substituting the four pairs of key points into the corresponding equations yields the perspective matrix; the two elements $A_{13}$ and $A_{23}$ are the ones that give the image its perspective distortion.

$$x_{3}=\frac{x_{2}}{z},\qquad y_{3}=\frac{y_{2}}{z}$$
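A brief OpenCV sketch of this tilt correction is given below, assuming the four key points of the rotated image and the four corresponding key points of the standard template image are available as (x, y) arrays; the function name and output size parameter are assumptions.

```python
import cv2
import numpy as np

def tilt_correct(rotated_img, src_pts, template_pts, out_size):
    """Map the four scale key points of the rotated image onto the key points of
    the standard template image with a perspective transform.
    src_pts and template_pts are 4x2 arrays of (x, y) coordinates."""
    src = np.float32(src_pts)
    dst = np.float32(template_pts)
    A = cv2.getPerspectiveTransform(src, dst)   # 3x3 perspective matrix, A33 = 1
    return cv2.warpPerspective(rotated_img, A, out_size)
```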

The instrument images before and after tilt correction with the perspective transformation are shown in Figure 4: Figure 4(a) is the instrument image before the perspective transformation, Figure 4(b) is the instrument image after the perspective transformation, and Figure 4(c) is the standard instrument image used for template matching.

Figure 4. Instrument tilt correction.

3.4 Instrument reading recognition

For the instrument image obtained after target detection, key point detection and image correction, image preprocessing, dial detection, pointer segmentation, pointer detection and angle calculation are performed to complete the meter reading. Preprocessing uses mean filtering, which preserves the frame while averaging pixels within a small neighbourhood to erode small-area noise. To improve processing speed, the colour image is converted to a greyscale image.

After the greyscale image is obtained, Hough circle detection is used to remove redundant background information and to obtain the position of the dial centre. Its principle is that every non-zero pixel in the image is treated as a potential point on a circle; a voting procedure accumulates evidence in a two-dimensional parameter plane, and the detected dial is obtained according to a set threshold.
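The preprocessing and dial-location steps could be sketched in OpenCV as follows; the mean-filter kernel size and the Hough parameters are illustrative assumptions rather than the paper's settings.

```python
import cv2
import numpy as np

def locate_dial(meter_img):
    """Mean filtering, greyscaling and Hough circle detection of the dial."""
    blurred = cv2.blur(meter_img, (5, 5))                    # mean filter
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)         # greyscale for speed
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=gray.shape[0] // 2,
                               param1=100, param2=50,
                               minRadius=gray.shape[0] // 4,
                               maxRadius=gray.shape[0] // 2)
    if circles is None:
        return None
    cx, cy, r = np.round(circles[0, 0]).astype(int)          # strongest circle found
    return (cx, cy), r
```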

The pointer is extracted with Otsu's segmentation algorithm. Applying the Otsu algorithm only once does not separate the meter pointer well, so it must be applied several times. The Otsu algorithm, also called the maximum inter-class variance method, computes the grey-level histogram of the image, divides the image into foreground and background, and statistically selects the threshold that maximises the variance between foreground and background, completing the segmentation. Let $W_{L}$ be the proportion of low-grey pixels in the image, $W_{H}$ the proportion of high-grey pixels, $E_{L}$ the mean of the low-grey pixels, $E_{H}$ the mean of the high-grey pixels, and $E$ the overall mean of the image. $T$ is the boundary separating the low and high pixels; candidate boundaries are traversed from 0 to 255, the value maximising $\rho$ is taken, and the required boundary $T$ is thus determined. The calculation formula is

$$\rho=W_{L}\left(E_{L}-E\right)^{2}+W_{H}\left(E_{H}-E\right)^{2}$$
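A minimal sketch of this repeated Otsu thresholding is given below, assuming the dial has already been cropped to a greyscale image; the number of passes and the way the foreground is carried between passes are assumptions, not the paper's procedure.

```python
import cv2

def segment_pointer(gray_dial, passes=2):
    """Apply Otsu thresholding more than once, each pass restricted to the
    foreground of the previous pass, to isolate the dark pointer."""
    region = gray_dial.copy()
    binary = None
    for _ in range(passes):
        _, binary = cv2.threshold(region, 0, 255,
                                  cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
        region = cv2.bitwise_and(region, region, mask=binary)  # keep only the foreground
    return binary
```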

The gauge needle is detected with Hough line detection, which represents straight lines of the Cartesian coordinate system in polar coordinates: a point in the polar space corresponds to a straight line in the Cartesian space, and all straight lines through a given Cartesian point form a curve in the polar space. Intersection points in the polar space whose vote count exceeds a set threshold correspond to the desired straight lines. After the pointer line is obtained, a Cartesian coordinate system is established with the centre point of the instrument as the origin, and the angle α between the pointer and the initial scale is calculated. Since the angle between the maximum scale and the initial scale of the dial is 270°, the meter reading follows from the dial scale. The relational expression is shown below, where μ is the calculated meter reading and $\gamma_{max}$ is the maximum scale of the meter.

$$\mu=\frac{\alpha}{270^{\circ}}\times\gamma_{max}$$
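The line detection and angle-method reading could be sketched as follows; the Hough parameters, the way one detected segment is taken as the pointer, and the assumption that the initial scale lies at the lower left (225° in standard orientation) of a 270° dial are all illustrative, not the authors' settings.

```python
import cv2
import numpy as np

def read_meter(pointer_mask, center, full_scale, span_deg=270.0):
    """Find the pointer with a probabilistic Hough transform and convert its
    angle from the initial scale into a reading: mu = alpha / 270 deg * full scale."""
    lines = cv2.HoughLinesP(pointer_mask, 1, np.pi / 180, threshold=50,
                            minLineLength=pointer_mask.shape[0] // 4, maxLineGap=10)
    if lines is None:
        return None
    x1, y1, x2, y2 = lines[0][0]          # take one segment (a fuller version would pick the longest)
    cx, cy = center
    # treat the endpoint farther from the dial centre as the pointer tip
    tip = max([(x1, y1), (x2, y2)], key=lambda p: (p[0] - cx) ** 2 + (p[1] - cy) ** 2)
    pointer_deg = np.degrees(np.arctan2(cy - tip[1], tip[0] - cx)) % 360
    alpha = (225.0 - pointer_deg) % 360   # clockwise angle from the assumed initial-scale direction
    return alpha / span_deg * full_scale
```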

4. EXPERIMENT AND RESULT ANALYSIS

4.1 Automatic inspection experimental environment

The head of the inspection robot is equipped with a high-definition camera to collect instrument images. The pan/tilt head can rotate 180° left and right and 130° up and down, with a maximum elevation angle of 40° and a minimum depression angle of 90°. The wheeled base of the robot moves the camera for mobile shooting at an inspection speed of 3 km/h. An automatic inspection environment was simulated in a laboratory corridor: 10 meters at different heights, under different light intensities and at different distances were placed for inspection and collection, and the collected meter images were transmitted to the host for subsequent detection, correction and reading. The automatic inspection robot is shown in Figure 5.

Figure 5. Automatic inspection robot.

The training and test experiments used an NVIDIA GeForce RTX 2060 GPU (8 GB), the Windows 10 operating system, 16 GB of memory, and the PyTorch 1.2.0 deep learning framework.

4.2 Instrument key point detection experiment

Since no large public data set of instruments is available, the data set required for this experiment was produced by the authors. Images of the instruments in different environments were simulated, and the inspection robot was used to capture instrument images with different light intensities, tilt angles and rotation angles. A total of 8 types of instrument data were collected, and labelImg was used to annotate the key points of the 2778 collected photos and produce xml files. The data set was divided at a ratio of 9:1, with 2221 photos in the training set, 247 in the validation set and 275 in the test set. During network training, pre-trained weights from the PASCAL-VOC2012 dataset were used. The batch size was 8, the learning rate 0.001, the decay coefficient 0.0005, and the optimizer Adam. For epochs below 100, the backbone network was frozen and only the remaining layers were fine-tuned; after epoch 100 the backbone was unfrozen and the learning rate reduced to 0.0001 to further reduce the loss. The network converged by epoch 300.
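A sketch of this two-stage schedule is shown below, assuming the model exposes its ResNet-50 backbone as `model.backbone` and interpreting the 0.0005 decay coefficient as weight decay; it is illustrative, not the authors' training script.

```python
import torch

def configure_training(model, epoch):
    """Freeze the backbone and fine-tune at lr 1e-3 for epochs < 100,
    then unfreeze the backbone and drop the learning rate to 1e-4."""
    freeze = epoch < 100
    for p in model.backbone.parameters():
        p.requires_grad = not freeze
    lr = 1e-3 if freeze else 1e-4
    return torch.optim.Adam((p for p in model.parameters() if p.requires_grad),
                            lr=lr, weight_decay=5e-4)
```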

Instrument images with blur, angular deflection and uneven illumination were collected and their key points extracted to test the robustness of the trained model; the results are shown in Figure 6. The experimental results show that the trained network is robust and achieves good key point extraction under these different conditions.

Figure 6. Image key point extraction.

4.3 Instrument image correction experiment

Once key point detection is complete, rotation correction is performed first, followed by tilt correction. The closer the inspection robot is to being level with the instrument, the better the instrument image corrected with the affine and perspective transforms after target detection.

Experiments were carried out using the reading formula in Section 3.4 to calculate the meter readings and error rates before and after correction; the average error rate of the meter is calculated as

$$\bar{e}=\frac{1}{N}\sum_{i=1}^{N}\frac{\left|\mu_{real,i}-\mu_{test,i}\right|}{\mu_{real,i}}\times 100\%$$

where N is the number of test meter images, $\mu_{real}$ is the real meter reading, $\mu_{test}$ is the reading obtained in the experimental test, and $\bar{e}$ is the average error rate.
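For reference, the evaluation above amounts to the following computation (a sketch with plain Python lists as inputs):

```python
def average_error_rate(real, test):
    """Per-image relative error against the true reading, averaged over the
    N test images; returned in percent, as in Table 1."""
    errors = [abs(r - t) / r * 100.0 for r, t in zip(real, test)]
    return sum(errors) / len(errors)
```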

In this experiment, the inspection environment was simulated and the inspection robot was used to collect 10 sets of instrument photos with different illumination, deflection angles and degrees of blur, whose readings and error rates were then tested. The images before and after correction are shown in Figure 7, and the measurement results are listed in Table 1. Figure 7(a) is the image before rotation correction, Figure 7(b) is the instrument image after rotation with the affine transformation, and Figure 7(c) is the instrument image after the perspective transformation. The key point positions of the template image are extracted, and high-quality meter images for subsequent reading are obtained through the two correction steps.

Figure 7. Images before and after instrument calibration.

Table 1. Test results of readings before and after meter calibration.

Serial number | Actual value | Pre-calibration reading | Reading after calibration | Pre-calibration error rate (%) | Error rate after calibration (%)
1 | 0.15 | 0.093 | 0.138 | 38.00 | 8.00
2 | 0.27 | 0.201 | 0.258 | 25.56 | 4.44
3 | 0.40 | 0.341 | 0.381 | 14.75 | 4.75
4 | 0.55 | 0.574 | 0.559 | 4.36 | 1.64
5 | 0.70 | 0.642 | 0.665 | 8.29 | 5.00
6 | 0.83 | 0.756 | 0.816 | 8.92 | 1.69
7 | 0.90 | 0.856 | 0.879 | 4.89 | 2.33
8 | 1.20 | 1.181 | 1.206 | 1.58 | 0.50
9 | 1.35 | 1.159 | 1.336 | 14.15 | 1.04
10 | 1.50 | 1.332 | 1.506 | 11.20 | 0.40
Average error rate | | | | 13.17 | 2.98

The experimental results show that the average error rate after calibration is 2.98%, which is 10.19 percentage points lower than the 13.17% average error rate of the uncalibrated images, corresponding to an average accuracy of 97.02% for the meter readings.

5. CONCLUSION

This paper proposes a computer vision-based automatic inspection and reading system for pointer-type instruments. The system consists of five parts: image collection by the inspection robot, instrument panel detection, instrument key point detection, instrument image correction and instrument reading recognition. Although an inspection robot cannot, unlike a human operator, adjust the shooting angle so that the captured instrument is centred and upright, the detection and correction method proposed in this paper removes the background so that the instrument is centred in the image, and the two correction operations greatly reduce the influence of the shooting angle. Compared with traditional methods for extracting dials and key points, using deep learning for dial and key point detection and then rotating and untilting the meter according to the detected key points gives better correction results and reduces the reading error rate. For the corrected instrument image, the pointer is segmented and the pointer line extracted with traditional image processing methods to complete the reading. The experimental results show that the proposed automatic inspection and reading system is robust, corrects images well and reads with high accuracy, and therefore has strong practical value.

ACKNOWLEDGMENT

The authors are grateful to the anonymous reviewers for their valuable comments. This work was supported by the Doctoral Foundation of Inner Mongolia University of Technology under Grant No. BS201935, the Basic Scientific Research Expenses Program of Universities directly under the Inner Mongolia Autonomous Region under Grant No. JY20220327, the Natural Science Foundation of Inner Mongolia of China (2019MS06005), and the National Natural Science Foundation of China (No. 61962044).

REFERENCES

[1] Liu Yang, Liu Jun, Ke Yichen, "Identification of pointer meter readings based on substation inspection robot," Chemical Automation and Instrumentation, 46(08), 636-639+658 (2019).
[2] Peng G, Du B, Cao C, et al., "Pointer-type instrument positioning method of intelligent inspection system for substation," Journal of Electronic Imaging, 31(1), 013001 (2022). https://doi.org/10.1117/1.JEI.31.1.013001
[3] Li JH, Xiong JP, Chen ZH, Zhu LY, "Pointer meter recognition based on convolutional neural network," Computer Systems & Applications, 30(09), 85-91 (2021). https://doi.org/10.15888/j.cnki.csa.008090
[4] Xu L, Shi W, Fang T, "A pointer meter reading recognition system in inspection robots," Journal of Instrumentation, 38(07), 1782-1790 (2017). https://doi.org/10.19650/j.cnki.cjsi.2017.07.027
[5] Jiang T, Tao JC, Fan X, "Research on pointer meter reading based on deep learning in gas collection stations," Automation Instrumentation, 43(02), 18-22 (2022). https://doi.org/10.16086/j.cnki.issn1000-0380.2021060021
[6] Ma Bo, Cai Weidong, Zheng Fanfan, "A priori knowledge-guided generation of virtual samples for pointer gauge recognition," Journal of Computer-Aided Design and Graphics, 31(09), 1549-1557 (2019).
[7] Shao L, Chen Y, Xu X, et al., "Research on recognition of pointer meter based on improved East algorithm," 2021 IEEE International Conference on Mechatronics and Automation (ICMA), 287-292 (2021).
[8] Sablatnig R, Kropatsch W G, "Application constraints in the design of an automatic reading device for analog display instruments," Proceedings of IEEE, 205-212 (1994).
[9] Alegria E C, Serra A C, "Automatic calibration of analog and digital measuring instruments using computer vision," IEEE Transactions on Instrumentation and Measurement, 49(1), 94-99 (2000). https://doi.org/10.1109/19.836317
[10] Khan W, Ansell D, Kuru K, et al., "Automated aircraft instrument reading using real time video analysis," 2016 IEEE 8th International Conference on Intelligent Systems (IS), 416-420 (2016).
[11] Hesamian M H, Jia W, He X, et al., "Deep learning techniques for medical image segmentation: achievements and challenges," Journal of Digital Imaging, 32(4), 582-596 (2019). https://doi.org/10.1007/s10278-019-00227-x
[12] Perez L, Wang J, "The effectiveness of data augmentation in image classification using deep learning," arXiv preprint arXiv:1712.04621 (2017).
[13] Mohanty S P, Hughes D P, Salathé M, "Using deep learning for image-based plant disease detection," Frontiers in Plant Science, 7, 1419 (2016). https://doi.org/10.3389/fpls.2016.01419
[14] He K, Zhang X, Ren S, et al., "Deep residual learning for image recognition," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 770-778 (2016).
[15] Duan K, Bai S, Xie L, et al., "CenterNet: keypoint triplets for object detection," Proceedings of the IEEE/CVF International Conference on Computer Vision, 6569-6578 (2019).
[16] Law H, Deng J, "CornerNet: detecting objects as paired keypoints," Proceedings of the European Conference on Computer Vision (ECCV), 734-750 (2018).
[17] Lin T Y, Goyal P, Girshick R, et al., "Focal loss for dense object detection," Proceedings of the IEEE International Conference on Computer Vision, 2980-2988 (2017).
Liming Xue, Junping Qin, Tong Gao, and Chao Wang "Automatic inspection and reading system of pointer instrument based on computer vision", Proc. SPIE 12462, Third International Symposium on Computer Engineering and Intelligent Communications (ISCEIC 2022), 1246209 (2 February 2023); https://doi.org/10.1117/12.2660824
KEYWORDS: Inspection, Image segmentation, Robots, Detection and tracking algorithms, Target detection, Calibration, Hough transforms
