Endoscopy is a medical technology used to inspect the inner surface of organs such as the colon. During endoscopic
inspection of the colon or colonoscopy, a tiny video camera generates a video signal, which is displayed on a monitor for
manual interpretation by physicians. In practice, these images are not typically captured, which may be attributed to the lack of tools for automatic capture, automatic analysis of important content, and quick and easy access to that content.
However, this lack of tools is being addressed by recent research efforts. This paper presents the description and
evaluation results of novel software that automates the capture of all images of a single colonoscopy into a single
digitized video file. The system uses metrics based on color and motion over time to determine whether the images are
derived from inside a single patient. During testing, our system extracted 173 videos totaling 70 hours of endoscopic
video, out of 230 hours of raw video, with a segment-based sensitivity of 100% and specificity of 99%. No procedures
were missed. Two video files contained only a non-patient video signal. The features of our system are robust enough to
be suitable for day-to-day use in medical practice.
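The abstract above describes a system that decides, from color and motion over time, whether a video signal comes from inside a patient. A minimal sketch of that idea follows; the frame representation, metrics, and thresholds here are illustrative assumptions, not the values used by the actual system:

```python
# Sketch of a frame-level "in-patient" test using color and motion cues.
# Frames are tiny RGB grids (lists of (r, g, b) tuples); the thresholds
# are illustrative assumptions, not those of the published system.

def red_ratio(frame):
    """Fraction of pixels where red dominates green and blue,
    a crude proxy for the reddish hue of colonic mucosa."""
    reddish = sum(1 for (r, g, b) in frame if r > g and r > b)
    return reddish / len(frame)

def motion(prev, curr):
    """Mean absolute per-channel difference between consecutive frames."""
    diffs = [abs(a - b) for p, q in zip(prev, curr) for a, b in zip(p, q)]
    return sum(diffs) / len(diffs)

def looks_endoscopic(prev, curr, red_thresh=0.5, motion_thresh=1.0):
    """A frame pair is taken as endoscopic if it is reddish and changing."""
    return red_ratio(curr) >= red_thresh and motion(prev, curr) >= motion_thresh

# A mostly-red, changing pair of frames passes; a static gray pair does not.
red_a = [(200, 50, 40)] * 16
red_b = [(190, 60, 50)] * 16
gray = [(90, 90, 90)] * 16
print(looks_endoscopic(red_a, red_b))  # True
print(looks_endoscopic(gray, gray))    # False
```

A real implementation would of course aggregate such per-frame decisions over time to segment the raw signal into per-procedure video files.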
In the US, colorectal cancer is the second leading cause of all cancer deaths behind lung cancer. Colorectal polyps are the precursor lesions of colorectal cancer. Therefore, early detection of polyps, together with removal of these precancerous lesions, is one of the most important goals of colonoscopy. To objectively document detection and removal of colorectal polyps for quality purposes, and to facilitate real-time detection of polyps in the future, we have initiated a computer-based research program that analyzes video files created during colonoscopy. For computer-based detection of polyps, texture-based techniques have been proposed. A major limitation of the existing texture-based analytical methods is that they depend on a fixed-size analytical window. Such a window may work for still images, but is not efficient for analysis of colonoscopy video files, where a single polyp can have different relative sizes and color features depending on the viewing position and distance of the camera. In addition, the existing methods do not consider shape features. To overcome these problems, here we propose a novel polyp region segmentation method based primarily on the elliptical shape that nearly all small polyps and many larger polyps possess. Experimental results indicate that our proposed polyp detection method achieves a sensitivity and specificity of 93% and 98%, respectively.
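The core geometric idea above is that a candidate region boundary should be well approximated by an ellipse. One standard way to quantify this, sketched below under assumptions of our own (this is not the paper's algorithm), is to fit a general conic to boundary points by least squares and use the residual as an "ellipse-likeness" score:

```python
import numpy as np

def ellipse_fit_error(xs, ys):
    """Fit a general conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0 to
    boundary points by least squares (the smallest right singular vector
    of the design matrix) and return the mean absolute algebraic residual.
    Near-zero values mean the boundary is close to an ellipse."""
    D = np.column_stack([xs * xs, xs * ys, ys * ys, xs, ys, np.ones_like(xs)])
    _, _, vt = np.linalg.svd(D)
    coeffs = vt[-1]  # unit-norm coefficient vector minimizing ||D @ v||
    return float(np.mean(np.abs(D @ coeffs)))

# Boundary samples of an ellipse fit almost exactly; a square outline does not.
t = np.linspace(0.0, 2.0 * np.pi, 40, endpoint=False)
ellipse_err = ellipse_fit_error(3.0 * np.cos(t), 2.0 * np.sin(t))
s = np.linspace(-1.0, 1.0, 10)
ones = np.ones_like(s)
square_err = ellipse_fit_error(np.concatenate([s, s, ones, -ones]),
                               np.concatenate([ones, -ones, s, s]))
print(ellipse_err < square_err)  # True: the ellipse is far closer to a conic
```

A shape-based detector could threshold such a score per candidate region, avoiding the fixed-size window that limits purely texture-based methods.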
Advances in video technology are being incorporated into today’s healthcare practice. For example, colonoscopy is an important screening tool for colorectal cancer. Colonoscopy allows for the inspection of the entire colon and provides the ability to perform a number of therapeutic operations during a single procedure. During a colonoscopic procedure, a tiny video camera at the tip of the endoscope generates a video signal of the internal mucosa of the colon. The video data are displayed on a monitor for real-time analysis by the endoscopist. Other endoscopic procedures include upper gastrointestinal endoscopy, enteroscopy, bronchoscopy, cystoscopy, and laparoscopy. However, a significant number of out-of-focus frames are included in these videos, since current endoscopes are equipped with a single, wide-angle lens that cannot be focused. The out-of-focus frames do not hold any useful information. To reduce the burden on further processing, such as computer-aided image analysis or examination by human experts, these frames need to be removed. We call an out-of-focus frame a non-informative frame and an in-focus frame an informative frame. We propose a new technique to classify video frames into two classes, informative and non-informative, using a combination of the Discrete Fourier Transform (DFT), texture analysis, and k-means clustering. The proposed technique can evaluate the frames without any reference image, and does not need any predefined threshold value. Our experimental studies indicate that it achieves over 96% on four different performance metrics (precision, sensitivity, specificity, and accuracy).
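Two ingredients named above can be sketched concretely: a DFT-based sharpness score (blur suppresses high-frequency energy) and a two-cluster k-means that separates informative from non-informative frames without a hand-set threshold. This is a toy illustration under our own assumptions, not the paper's pipeline, which also uses texture features:

```python
import numpy as np

def high_freq_energy(img):
    """Fraction of spectral energy outside a small low-frequency block
    of the centered 2-D DFT. Blurry (non-informative) frames lose
    high-frequency detail, so their score is close to zero."""
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    cy, cx = img.shape[0] // 2, img.shape[1] // 2
    low = mag[cy - 2:cy + 3, cx - 2:cx + 3].sum()  # 5x5 low-frequency block
    return 1.0 - low / mag.sum()

def two_means(scores, iters=20):
    """1-D k-means with k = 2: splits sharpness scores into two clusters,
    so no predefined threshold is needed. Returns 1 for the sharp cluster."""
    lo, hi = min(scores), max(scores)
    for _ in range(iters):
        a = [s for s in scores if abs(s - lo) <= abs(s - hi)]
        b = [s for s in scores if abs(s - lo) > abs(s - hi)]
        if not a or not b:
            break
        lo, hi = sum(a) / len(a), sum(b) / len(b)
    return [int(abs(s - lo) > abs(s - hi)) for s in scores]

# A checkerboard stands in for an in-focus frame; a flat frame for a blurred one.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
flat = np.full((32, 32), sharp.mean())
labels = two_means([high_freq_energy(f) for f in (sharp, flat, sharp, flat)])
print(labels)  # [1, 0, 1, 0]: sharp frames land in one cluster, flat in the other
```

The clustering step is what makes the method reference-free: the decision boundary adapts to each video's own score distribution.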
Colonoscopy is an important screening procedure for colorectal cancer. During this procedure, the endoscopist visually inspects the colon. Human inspection, however, is not without error. We hypothesize that colonoscopy videos may contain additional valuable information missed by the endoscopist. Video segmentation is the first necessary step in content-based video analysis and retrieval, providing efficient access to the important images and video segments in a large colonoscopy video database. Based on the unique characteristics of colonoscopy videos, we introduce a new scheme to detect and remove blurry frames and segment the videos into shots based on their contents. Our experimental results show that the average precision and recall of the proposed scheme are over 90% for the detection of non-blurry images. The proposed method of blurry frame detection and shot segmentation is extensible to videos captured during other endoscopic procedures such as upper gastrointestinal endoscopy, enteroscopy, cystoscopy, and laparoscopy.
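Shot segmentation, the second step named above, is commonly done by flagging boundaries where a frame-to-frame content measure jumps. The sketch below uses a simple color-histogram distance with an illustrative threshold; it is a generic shot-boundary detector, not the content-based scheme of the paper:

```python
def shot_boundaries(frames, bins=4, cut=0.5):
    """Mark a shot boundary wherever the intensity histograms of
    consecutive frames differ by more than `cut` (illustrative value).
    Frames are flat lists of 8-bit intensity values."""
    def hist(frame):
        h = [0] * bins
        for v in frame:
            h[min(v * bins // 256, bins - 1)] += 1
        return [c / len(frame) for c in h]

    def dist(h1, h2):
        """Half the L1 distance between normalized histograms, in [0, 1]."""
        return sum(abs(a - b) for a, b in zip(h1, h2)) / 2

    hs = [hist(f) for f in frames]
    return [i for i in range(1, len(hs)) if dist(hs[i - 1], hs[i]) > cut]

# Two dark frames followed by two bright ones: one boundary, at index 2.
dark = [10] * 64
bright = [240] * 64
print(shot_boundaries([dark, dark, bright, bright]))  # [2]
```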
Recent years have seen a tremendous growth of interest in streaming continuous media such as video over the Internet. This growth creates an enormous increase in demand on various server and networking resources. To minimize service delays and to reduce the loads placed on these resources, we propose an Overlay Caching Scheme (OCS) for overlay networks. OCS utilizes virtual cache structures to coordinate distributed overlay caching nodes along the delivery path between the server and the clients. OCS establishes and adapts these structures dynamically according to clients' locations and request patterns. Compared with existing video caching techniques, OCS offers better performance in terms of average service delay, server load, and network load in most cases in our study.
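The basic mechanism behind path-based overlay caching can be illustrated with a toy chain of cache nodes between clients and the origin server: a miss travels upstream, and the content is cached on the way back so later requests are served closer to the client. The class names, LRU policy, and hop counting below are our own simplifying assumptions, not the OCS design itself:

```python
class CacheNode:
    """Overlay node with a small LRU cache; `upstream` is the next node
    toward the server (None only at the origin, which holds the library)."""
    def __init__(self, name, capacity, upstream=None, library=None):
        self.name, self.capacity, self.upstream = name, capacity, upstream
        self.store = []                  # LRU order: most recently used last
        self.library = library or set()  # full content set at the origin only

    def request(self, video):
        """Return (serving node name, hops traveled upstream from here)."""
        if video in self.library:        # origin server always has the content
            return self.name, 0
        if video in self.store:          # overlay cache hit
            self.store.remove(video)
            self.store.append(video)     # refresh LRU position
            return self.name, 0
        served_by, hops = self.upstream.request(video)  # miss: go upstream
        self.store.append(video)         # cache on the way back down the path
        if len(self.store) > self.capacity:
            self.store.pop(0)            # evict the least recently used video
        return served_by, hops + 1

origin = CacheNode("server", 0, library={"v1", "v2", "v3"})
mid = CacheNode("mid", 2, upstream=origin)
edge = CacheNode("edge", 2, upstream=mid)
print(edge.request("v1"))  # ('server', 2): first request reaches the origin
print(edge.request("v1"))  # ('edge', 0): the repeat is served at the edge
```

Serving repeats at the edge is what reduces both server load and network load; OCS additionally coordinates and adapts such structures according to client locations and request patterns.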