Imaging and qualitative depth analysis with a portable nonmydriatic fundus camera using oblique illumination
Neelam Kaushik, Takashi Sasaki, Kana Takeyama, Kazuhiro Hane, Toru Nakazawa
Abstract

The photographic assessment of the optic nerve has been one of the original and most extensively used methods to evaluate patients for glaucoma. The depth evaluation of the optic disc in the retinal fundus is important for the early detection of glaucoma. Conventional fundus cameras have a limited field-of-view for imaging of the retina and its peripheral areas. In this article, we report the design and fabrication of a non-mydriatic wide-field fundus camera using contact-free trans-scleral illumination that is capable of taking 3D images of the optic disc using oblique illumination. We demonstrate that, using oblique illumination through the sclera, a quasi 3D image of the optic disc along with its shadow can be obtained. The shadow provides important information on the shape and depth of the optic disc. The depth values of the optic disc obtained by our proposed method using shadow length measurements are in good agreement with the values obtained using an optical coherence tomography device. The results indicate that our fabricated fundus camera could be an easy-to-handle and low-cost tool for remote detection and diagnosis of ocular diseases without the need for an ophthalmologist.

1. Introduction

Visual damage is a global health concern that has negative impacts on physical and mental health. Approximately 40% of the human population suffers from eye-related diseases. For example, glaucoma is the second leading cause of blindness in the world.1 The number of people with glaucoma worldwide is estimated to increase by 74% to 111.8 million by 2040.2 The timely screening and diagnosis of eye diseases are crucial for avoiding visual damage and blindness. Recent clinical practice guidelines recommend annual or biennial complete eye examinations.3 Presently, there is a shortage of ophthalmologists worldwide, especially in developing nations, despite the rapid increase in eye diseases.4 A commonly used alternative to a comprehensive eye examination is remotely interpreted fundus photography, known as “teleretinal screening.”5 A multipurpose portable fundus camera is important for developing telemedicine screening and point-of-care examination of various eye diseases.

The precise and consistent assessment of the optic disc is key in the management of glaucoma patients. The structure and appearance of the optic nerve have been important features for understanding glaucoma.6 Recently, digital image processing techniques and mobile technology have been used to assist in the clinical evaluation of images obtained by imaging devices for the diagnosis of various ocular diseases.7,8 Fundus cameras have been used for imaging the optic nerve for the diagnosis and monitoring of glaucoma.9 For the diagnosis of glaucoma, most clinicians rely on 2D fundus images to estimate the boundaries of the optic disc and cup. The depth analysis of the optic nerve head in the retinal fundus helps in quantitatively evaluating optic disc cupping and is important for the early detection of glaucoma.10 According to the reported literature, the average volume, area, and depth of the optic cup were found to be larger in glaucomatous eyes than in normal eyes.11 The measurement of optic cup volume asymmetry provides an important additional diagnostic parameter. It has been reported that cup volumes in excess of 1 mm³ have definite diagnostic value; the cup volume varies directly with the degree of field loss and is correlated with the cup/disc diameter.12 Optical coherence tomography (OCT) is used to measure three-dimensional (3D) depth information, but it only gives a profile image at a certain cross-section. Another instrument that is used for depth analysis of the optic disc is the Heidelberg retina tomograph, which is based on a scanning laser technique and cannot provide a 3D image. Numerous computerized methods for 3D optic disc reconstruction from stereoscopic pairs of retinal fundus images have been developed.13,14 It has been shown that such techniques are helpful for the analysis of the 3D structure of the optic nerve head.15,16 Stereoscopic imaging technology shows potential as an accurate method for optic cup segmentation, but its requirements of dilated pupils, reduced portability, and dependence on skilled ophthalmic technicians limit its role as a glaucoma screening device. Moreover, most handheld fundus cameras do not provide stereoscopic capabilities, so it is difficult to estimate the depth of the optic disc to diagnose the progression of glaucoma and related eye diseases.

Traditional fundus cameras are still considered to be standard devices for eye checkups in both ophthalmology and optometry. In conventional fundus cameras, ring-shaped transpupillary illumination has been commonly used; this delivers illumination through the edge of the pupil, and reflected light is collected through the central area of the pupil.17 To eliminate corneal and crystalline lens reflections from the retinal image, only a small central pupillary area can be used for imaging, and the optical system has to be optimized to ensure that the imaged retinal area is covered homogeneously by the illumination. Hence, the field-of-view of a conventional fundus camera is typically limited to between 30 deg and 45 deg in the external angle.18 Camera technology has progressed remarkably in recent years. New developments in handheld fundus cameras are cost-effective, and recent devices designed to be used with smartphone cameras have further reduced cost and increased potential adoption.19 Non-mydriatic fundus photography provides a non-invasive, fast, and handy method of screening that does not require pupillary dilation.20 High quality images with quick capture from portable handheld non-mydriatic cameras are appealing for the application of these devices to eye disease screening.21 However, such applications still require a skilled and experienced person to operate the device.

Wide-field fundus photography is preferred for the diagnosis of various ocular diseases that can generate morphological abnormalities in the outer areas of the retina.8,22 Over the past 10 years, wide-field photography up to an 88.43 deg visual angle has been achieved with traditional illumination methods in some fundus cameras, such as the RetCam (Natus Medical Inc., Pleasanton, California, United States). With some modern scanning laser ophthalmoscopes, such as the Optos (Optos, Dunfermline, United Kingdom), a 142.8 deg visual angle is now imaged routinely.23 To achieve wide-field fundus examination that does not require pharmacologic pupil dilation, trans-scleral illumination has been proposed as an alternative illumination method.24 In the trans-scleral illumination method, light is delivered through the outer region of the pupil, and thus it can increase the field-of-view for fundus imaging. Clinical implementation of trans-scleral illumination was unsuccessful due to numerous limiting factors, such as the direct contact of the illumination and imaging parts with the eyeball, which caused discomfort for patients. Recently, for wide-angle photography, trans-palpebral illumination using an optical fiber in contact with the eyelid was reported.25 A benchtop contact-free snapshot fundus camera based on trans-pars-planar illumination with a wide angle (60 deg) was also reported.26 Very recently, a 142.8 deg wide-field portable camera using trans-pars-planar illumination through an optical fiber was reported.27 However, the clinical employment of trans-palpebral and trans-pars-planar illumination-based devices remains difficult as they require the separate adjustment and optimization of imaging and illumination sub-systems.

In this article, we report on the fabrication of a handy, low-cost, and portable fundus camera with contact-free trans-scleral illumination for wide-field fundus photography. Without the use of any complicated contact optics or pupil dilation, this fundus camera can be used by oneself for regular eye monitoring to help prevent eye diseases. The transmission of light through the sclera creates oblique illumination over a large area of the retina and helps to estimate the depth of the optic disc. The contrast of phase objects can be improved by oblique illumination, also known as off-axis illumination.28–30 This type of illumination provides a quasi 3D image and importantly improves the contrast of the image.28 It was reported that, using oblique back illumination, phase gradient imaging of thick scattering samples is also possible.31 Imaging using oblique illumination is inexpensive and simple; additionally, it is non-computational, so it allows for observing samples through a camera in real time. Meanwhile, in microscopy, various off-axis illumination methods using an LED array,29,32 a projector,33 and a transmission liquid crystal display34 have been proposed and used for differential phase contrast imaging of transparent specimens with various illumination patterns of the light source. Trans-scleral optical phase imaging can provide high contrast images of the retinal layers at the cellular level to detect early retinal diseases.35 Depth analysis of the optic disc with retinal imaging using oblique illumination has not been reported yet. In this report, we demonstrate that, using oblique/asymmetrical illumination through the sclera, a quasi 3D image of the optic disc can be obtained, and the depth of the optic disc can be estimated from the shadow created by the oblique illumination.

2. Materials and Methods

2.1. Optical Design for Fundus Camera

In our designed fundus camera, the oblique illumination was created by the transmission of light through the sclera using two LEDs with an 850-nm nominal wavelength and a spectral bandwidth of 30 nm full-width at half-maximum. The light that reaches the retina is not delivered directly; only a portion of the light (about 40%) is transmitted through the sclera, and the exposure is in accordance with ISO eye safety standards for light safety limits. The two near-infrared (NIR) LEDs could be illuminated at the same time or separately to get an image with one or both illumination points. The LEDs were placed at the designed angular positions to avoid illumination through the pupil and to create oblique illumination through the sclera. Figure 1(a) shows the basic design of the fabricated fundus camera, and Fig. 1(b) shows a schematic illustration of the illumination used in the fabricated fundus camera to obtain non-mydriatic wide-field fundus photography. The beam angle for each LED was around 20 deg (0.35 rad). When both LEDs were used, the light passing through the sclera was diffused, and it illuminated the intraocular area homogeneously. The illumination light path in the eye is shown in Fig. 1(c). The bright white part enclosed in the red circle in Fig. 1(c) shows the illumination area. Two small dots on the pupil are reflections from the ophthalmic lens. The pupil size increased as the eyeshade covered the eye. The pupil size in Fig. 1(c) is 5.4 mm, and as the camera came closer to the eye, the pupil size increased to 7 mm in diameter [Fig. 1(d)], showing that the system is non-mydriatic; it requires no drug for pupil dilation and will be less traumatizing for patients. For demonstration purposes, the author used her own eye to show the illumination path through the sclera.

Fig. 1

(a) Basic optical design of the fabricated wide-angle fundus camera, (b) schematic illustration of trans-scleral illumination using IR-LEDs, (c) photograph of the eye showing the illumination light path through the sclera and the pupil size, (d) photograph of the eye showing a larger pupil size of 7 mm in front of the camera without using any pupil dilation drug, (e) photograph of the prototype fundus camera, and (f) photograph showing the self-imaging capability of the fundus camera.


Figure 1(e) shows a photograph of the fabricated prototype fundus camera. For an ergonomic design, the portable fundus camera components were placed in a 3D-printed ABS plastic enclosure designed in SolidWorks (Dassault Systemes) and printed using a 3D printer (Flash Forge Inventor II). A black-colored eyeshade with a rubber eyepiece was attached to the printed enclosure to allow for a natural pupil opening during NIR imaging. The camera is palm sized and lightweight. For wide-field photography, a 78D ophthalmic lens (Righton, Japan) was used to collect light coming out of the pupil. An achromatic lens (lens 2) was used to relay the fundus image onto the sensor of the camera. A small web camera (Logitech HD Pro webcam C920) with full HD 1080p video recording capability was used to capture fundus images. The web camera was modified for NIR imaging by replacing the color filter on the camera sensor with an NIR filter. Our fabricated fundus camera prototype could be used in a handheld and portable manner by connecting the camera to a laptop computer. Still images or live video can be recorded by positioning the camera in front of and close to the eye with the person looking directly into the camera. The imaging position can easily be checked by oneself by looking at the computer screen. Figure 1(f) shows the self-imaging capability of the fundus camera.

The field-of-view with respect to the internal eye angle was measured using a fabricated model eye. According to the reported literature, the equatorial diameter of the retina in the human eye is about 24 mm, and at the ora serrata (end of the retinal tissue), the average internal diameter of the retina is about 20 mm.36 Thus, a 180 deg visual angle field-of-view corresponds to the whole retinal area of 1148 mm². Figure 2(a) is a schematic diagram of the eye showing the peripheral retina, ora serrata, and total internal eye angle for the retina (241 deg). The model eye used for the measurement was 24 mm in diameter, and we used an achromatic lens (f = 24 mm) as the cornea. Figure 2(b) is a photograph of the model eye. To show the field-of-view of the fabricated fundus camera, we pasted a measurement scale in millimeters on the posterior surface of the model eye. The image taken by the fundus camera is shown in Fig. 2(c). The marked dotted line shows the boundary of the equator of the model eye. To measure the maximum angular field-of-view, the farthest clearly readable marker on the vertical (50 mm) and horizontal (45 mm) scales was determined. The average of the two readings, 47.5 mm, was used to calculate the eye angle. The equator of the globe and the line representing the ora serrata (4 mm above the equatorial diameter) are clearly visible. According to the reported literature, the visual angle and eye angle can be converted to each other.37,38 Using the conversion factor38 (1.37) from the eye angle to the visual angle, the measured angular field-of-view of the fundus camera was 165.6 deg.
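For reference, the conversion can be reproduced with a short calculation (a minimal sketch; it assumes that the 47.5-mm scale reading corresponds to an arc length on the 24-mm-diameter globe):

    # Hedged sketch: field-of-view estimate from the model-eye scale reading.
    # Assumes the 47.5-mm reading is an arc length on a 24-mm-diameter globe.
    import math

    diameter_mm = 24.0        # internal diameter of the model eye
    arc_mm = 47.5             # average readable scale length
    eye_angle_deg = math.degrees(arc_mm / (diameter_mm / 2))  # arc / radius, in degrees
    visual_angle_deg = eye_angle_deg / 1.37                   # conversion factor from Ref. 38
    print(round(eye_angle_deg, 1), round(visual_angle_deg, 1))
    # ~226.8 deg eye angle and ~165.5 deg visual angle (the paper reports 165.6 deg)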

Fig. 2

(a) Schematic illustration of the eye showing the locations of the peripheral retina, ora serrata, and total internal angle for the retina, (b) photograph of the model eye, and (c) image of the model eye captured by the fabricated fundus camera showing a 165.6 deg ultra-wide angular field-of-view. The dotted line shows the equator of the globe.


Enucleated swine eyes were used to check the imaging ability of the fabricated fundus camera. The enucleated swine eyes for the experiment were bought from Sendai Central Meat Wholesale Market Co. Ltd. For experiments using animal parts, we followed the regulations specified by Tohoku University. For initial imaging, we used an enucleated swine eye. The depth measurement results obtained from our fabricated device were compared with those obtained from a commercial OCT device (NIDEK, model RS-3000 Advance). The axial resolution of the employed OCT device was 4 μm, and the operating wavelength was 880 nm. The stage of the OCT was modified for animal eye imaging. The eye was fixed on a sample holder that mimicked the human face, and the camera was placed in front of and close to the eye. The eye was illuminated through the sclera by two NIR LEDs attached to the outer cover of the ophthalmic lens at pre-specified angular positions.

3. Results

3.1. Near-Infrared Fundus Imaging with Portable Fundus Camera

The color fundus image is considered the gold standard for ophthalmic documentation, but NIR/IR imaging has some advantages over color fundus imaging.39 Oxygenated hemoglobin, deoxygenated hemoglobin, and water are the main fundus chromophores that absorb IR light, and melanin is a strong reflector. NIR imaging is possible with less light because NIR can penetrate into deeper layers. IR reflectance imaging provides information on alterations in the retinal structure, sub-retinal lesions, and the accumulation of fluid within and beneath the retina and the retinal epithelium. Another potential benefit of IR illumination is that it minimizes the confounding effects of intra-individual differences in pigmentation.40 Moreover, NIR/IR imaging can help detect chronic changes even in the presence of a hemorrhage in the eye that may be undetected with other imaging systems.

The LED light operation can be controlled separately to create oblique illumination and obtain quasi 3D images of the retina and optic disc. Figure 3(a) shows a photograph of an enucleated swine eye (sample 1) used for imaging. Figure 3(b) shows the large field-of-view coverage of the constructed fundus camera. The optic disc and choroidal vessels are clearly visible. Imaging of the choroidal vessels is important for checking the eye’s health condition. The choroidal thickness varies among disease states and changes with increasing age and axial length. Choroidal vessels are seen more readily with NIR imaging than at other wavelengths.41 Figure 3(c) gives another image (sample 2) taken by the same fundus camera. Blood vessels in and around the optic disc are visible.

Fig. 3

(a) Photograph of the enucleated swine eye showing the pupil size, (b) image of the retina of this swine eye (sample 1) showing the optic disc and choroidal vessels, (c) retina of another enucleated swine eye (sample 2) showing the blood vessels and optic disc, and (d) photograph of sample 2 after removing the cornea and lens to verify the operation of the fundus camera.


To confirm the reliability and ultra-wide field imaging capability of our fundus camera, we removed the cornea and crystalline lens of the enucleated swine eye and took a photograph of the cut eye with a smartphone camera. Figure 3(d) is the photograph of the enucleated swine eye taken after removing the cornea and crystalline lens. Figures 3(c) and 3(d) show similar features, confirming the ultra-wide field coverage of our fundus camera. The contrast of the blood vessels is not very high in the NIR image. The reason is that several factors influence the appearance and size of retinal vessels as a function of wavelength. The most important factor is the optical absorption by the blood columns and vessels, as they have low contrast against the fundus background for wavelengths longer than 600 nm. Another reason is that, with NIR illumination, light is scattered multiple times over a wide region of the fundus, so only heavily pigmented structures are visible.

The scattered light that acted as noise in the image was removed using an aperture at the pupil conjugate plane. Figure 4 shows images taken with and without the aperture. Figure 4(a) shows an image taken without the aperture. Figures 4(b) and 4(c) are images taken with apertures of varying sizes. The images became clearer, and their contrast improved as the noise was reduced.

Fig. 4

(a) Retina image without using any aperture. (b) and (c) Images of retina taken with varying sizes of aperture.


3.2. Oblique Illumination and Imaging

Oblique illumination is a well-known technique that has been described by many scientists to improve the contrast of transparent specimens by introducing a pseudo-relief effect.42,43 In oblique or anaxial illumination, the specimen is unevenly illuminated, and the effect can be interpreted in the context of Abbe’s theory of diffraction.42 According to Abbe’s theory, if a narrow anaxial illuminating beam is used instead of a broad illuminating beam, directionally shadowed differential phase contrast images that are very similar to differential interference contrast images are produced. In our designed fundus camera, using an LED with a narrow beam angle of 20 deg, anaxial illumination was created at the retinal plane.

When both LEDs were illuminated at the same time, wide field-of-view coverage was obtained. By switching to another lens with a slightly narrower field-of-view, the area near the optic disc could be observed more clearly. With only one-sided illumination, quasi 3D images were obtained. Figure 5(a) shows the wide-angle image of the swine eye’s retina. Figures 5(b) and 5(c) are the quasi 3D images obtained using illumination from only one LED. Figure 5(d) shows more detailed features around the optic disc, and Fig. 5(e) is the corresponding quasi 3D image of the pig’s retina with a slightly narrower field-of-view. This quasi 3D effect can be introduced gradually without having to replace the LED. For the detection of glaucoma, images of the optic disc are very important. The shape of the optic disc in a glaucomatous eye changes with the progression of the disease.44 The quasi 3D effect is more prominent in Figs. 5(c) and 5(e). The depth of field is greater in both quasi 3D images compared with the image in Fig. 5(a), and the depth of the optic disc of the swine eye can be estimated from them by shadow length measurement.

Fig. 5

(a) Wide-angled image of the swine eye. (b) and (c) Quasi 3D images obtained using oblique illumination. (d) Narrow-field image showing the detailed area near the optic disc and (e) corresponding quasi 3D image of the narrow-field image.


3.3. Measurement of Optic Disc's Depth

We measured the depth of the optic disc from the shadow created by oblique-angle illumination. The method that we used in the present study is generally applied to measure the depth of craters on planetary bodies. This method is complementary to other methods as it requires only one image to measure the depth. The shape of the crater is considered to be a paraboloid. When light (for example, sunlight) falls on the crater, a shadow is formed inside the crater. The size of the shadow, or the shadow length (L), depends on the incident angle of the sun rays, i.e., the illumination angle (θ), and it decreases with increasing θ. The shadow disappears when the light falls directly over the crater, i.e., θ = 90 deg. The depth d of the crater is obtained by measuring the length of the shadow using the following equation:45

Eq. (1)

d = \frac{D}{4\left(1 - \frac{L}{D}\right)}\tan\theta,
where D is the diameter of the parabolic crater and L is the shadow length. The shape of the optic disc in the eye is similar to that of a paraboloid crater. Thus, the same method could be applied for the measurement of the optic disc depth. However, the dimensions in the two cases are quite different. Additionally, this method has not previously been applied to measure the depth of the optic disc. A basic experiment was performed to confirm the accuracy of the proposed method. A model eye with a pattern resembling the optic disc was fabricated using a 3D printer. The schematic diagram of the measurement setup is shown in Fig. 6(a), and the schematic of the pattern resembling the optic disc of the model eye is shown in Fig. 6(b). The diameter D and depth d of the pattern are 3 and 1.5 mm, respectively. These dimensions are approximately twice those of a real eye and were chosen to make the fabrication and measurement easier. To make the model realistic, the model eye was filled with water, and an achromatic lens with a focal length of 24 mm was used as the eye lens. The model eye was illuminated with NIR light at an angle θ of 45 deg.
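As a quick worked check (a minimal sketch using only the stated values D = 3 mm, d = 1.5 mm, and θ = 45 deg), the shadow length that this geometry should produce can be predicted from Eq. (1). With tan 45 deg = 1, Eq. (1) reduces to d = D/[4(1 − L/D)], which rearranges to

L = D\left(1 - \frac{D}{4d}\right) = 3\,\mathrm{mm}\times\left(1 - \frac{3\,\mathrm{mm}}{4\times 1.5\,\mathrm{mm}}\right) = 1.5\,\mathrm{mm} = \frac{D}{2}.

That is, the shadow is expected to cover half of the cup diameter, which is the prediction tested in the measurement described below.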

Fig. 6

(a) Schematic of the depth measurement setup using a model eye and oblique illumination, (b) schematic of the pattern resembling the optic disc inside the model eye, where D is the diameter of the pattern, d is the depth, and θ is the illumination angle (for this measurement, θ = 45 deg); the diameter of the pattern resembling the optic disc was 3 mm. (c) An overhead view of the shadow created by the illumination angle, where L is the length of the shadow. (d)(i) Image of the model eye with the pattern resembling the optic disc taken by the fundus camera, where the diameter of the crater-like structure is 3 mm. (d)(ii) Enhanced image of the area marked in panel (d)(i), showing the shadow created by oblique illumination and the calculation of the shadow length.


The NIR light was chosen in view of its lower absorption compared with visible light. An illumination angle of θ = 45 deg allows for direct imaging of the optic disc and its shadow through the eye lens. The depth calculation is also simple when the incident angle is 45 deg; although the value of the tangent function (tanθ) can vary sharply with small changes in angle at steeper incidences, around 45 deg a variation of ±5 deg does not change the calculated values significantly, so a minor change in angle will not change the depth measurement values [Fig. 6(c)]. Based on the geometry of our model eye's optic disc and the illumination angle θ, the formation of the shadow and its length can be predicted from Eq. (1). The expected length L of the shadow should be half of the diameter of the optic disc [as shown in Fig. 6(c)]. Figure 6(d) shows the calculation of the shadow length on the model eye. The dotted area marked in Fig. 6(d)(i) was enhanced using ImageJ to clearly see the boundaries of the shadow. The surface of the model eye is rough because it was printed with a 3D printer. Figure 6(d)(ii) shows the enhanced image with marked dimensions for calculating the shadow length. The distance between A (x = R) and B (x = Xc + R) is the shadow length L, where R is the radius of the cup-like structure. The depth calculated using Eq. (1) matched the designed depth.

In digital photography, it is often difficult to recognize shadows in an image. Shadow regions are formed when there is an obstruction along the illumination path, resulting in local changes in image intensity. A shadow consists of two regions, the umbra and the penumbra.46 On non-flat/curved surfaces, shadows with wide boundaries, known as penumbrae, are observed. It is therefore important to detect shadows accurately in digital images while maintaining the original local and textural information. Several methods for the detection of shadows have been reported.47,48 Clustering-based algorithms can partition image pixels into a certain number of similar/dissimilar groups and can be applied for the detection of shadows in an image. K-means clustering is one of the most popular algorithms,49 and it is already implemented in the OpenCV (cv2) module for the Python 3 programming language. We used it for the detection of the shadow in the present study. Before applying K-means clustering, the color image [as shown in Fig. 7(a)] was converted to an 8-bit image, and the pixel intensity was normalized to the range of 0 to 255. Contrast-limited adaptive histogram equalization was applied to the image to make it clearer [Fig. 7(b)].
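A minimal OpenCV sketch of this preprocessing and clustering step is given below. The file name, the grayscale conversion, the CLAHE settings, and K = 10 are illustrative assumptions rather than the exact parameters used in the study.

    # Hedged sketch of the shadow-segmentation preprocessing and K-means step.
    import cv2
    import numpy as np

    img = cv2.imread("model_eye_fundus.png")                 # hypothetical file name
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)             # convert to 8-bit grayscale
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))   # contrast-limited AHE
    enhanced = clahe.apply(gray)

    # K-means clustering of pixel intensities (K in the 10-20 range, 1000 iterations).
    pixels = enhanced.reshape(-1, 1).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 1000, 0.1)
    K = 10
    _, labels, centers = cv2.kmeans(pixels, K, None, criteria, 10, cv2.KMEANS_RANDOM_CENTERS)

    # Rebuild a quantized image; the lowest-intensity clusters correspond to the shadow.
    quantized = centers[labels.flatten()].reshape(enhanced.shape).astype(np.uint8)
    cv2.imwrite("kmeans_clusters.png", quantized)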

Fig. 7

(a) Original RGB image of the optic disc like pattern of the model eye, (b) intensity normalized image of the original image of the model eye, (c) image obtained with the K-means clustering algorithm, and (d) differential K-means image to calculate the shadow length.


K-means clustering was performed with different numbers of clusters. Running 1000 iterations with the number of clusters in the range of 10 to 20 was sufficient for detecting the shadow of the optic disc in the image. For example, the image shown in Fig. 7(c) was obtained using a total of 10 clusters. To precisely measure the shadow length, the image obtained with K-means clustering was differentiated [Fig. 7(d)]. The area marked by the dotted circle in Fig. 7(d) shows the actual shadow region consisting of the umbra and penumbra regions. The length of the shadow measured at the center is 1.49 ± 0.02 mm. The depth calculated using Eq. (1) is 1.50 mm, which matches well with the designed depth of the model eye. When the incident angle was varied within 45 ± 0.5 deg, the measured results remained in good accordance with the actual value. These results clearly confirm that it is possible to determine the depth of an optic disc from its shadow.
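A minimal numerical sketch of this check, using the quoted values and the form of Eq. (1) given above (the angle variation is included to illustrate the stated insensitivity around 45 deg), is as follows:

    # Hedged sketch: model-eye cup depth from the measured shadow length via Eq. (1).
    # Values are those quoted in the text (D = 3 mm, L = 1.49 mm, theta = 45 deg).
    import math

    def paraboloid_depth(D_mm, L_mm, theta_deg):
        """Depth of a paraboloid cup from its shadow length, Eq. (1)."""
        return D_mm * math.tan(math.radians(theta_deg)) / (4.0 * (1.0 - L_mm / D_mm))

    D, L = 3.0, 1.49
    print(round(paraboloid_depth(D, L, 45.0), 2))   # ~1.49 mm vs. the designed 1.5-mm depth
    # Sensitivity to a small angle variation around 45 deg:
    print([round(paraboloid_depth(D, L, t), 2) for t in (44.5, 45.5)])   # ~1.46 to 1.52 mm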

For qualitative analysis and to compare the depth measurement results with optical coherence tomography (OCT), we imaged enucleated swine eyes using the OCT device. Figure 8 shows OCT images of three pig eye samples with measured depths at different cross-sections. The arrows in Figs. 8(a), 8(d), and 8(g) denote the location and direction of the cross-sectional OCT scan. Figures 8(b), 8(e), and 8(h) are cross-sectional images of the optic nerve head. The measured depths of these samples are shown in Figs. 8(c), 8(f), and 8(i). The depth at different cross-sectional points varied from 183 to 490 μm for the various enucleated pig eyes.

Fig. 8

(a), (d), and (g) Optic disc images of three different enucleated swine eyes. (b), (e), and (h) Cross-sectional images of the optic nerve head taken at the positions and in the directions of the arrows marked in panels (a), (d), and (g). The depths at the marked arrows are shown in panels (c), (f), and (i).


Figure 9(a) shows a cropped image of the optic disc of an enucleated swine eye taken using angled illumination. A shadow in the optic disc created by the oblique illumination can easily be observed in the image. The shape of the optic disc of the swine eye is not circular or paraboloid; it is elliptical. The shadow obtained by imaging the pig's optic disc fits well with an ellipse. For elliptical shapes, the depth is calculated from the shadow using the following equation:45

Eq. (2)

d = \frac{L}{2\left(1 - \frac{L}{D}\right)}\tan\theta.

Fig. 9

(a) Cropped original RGB image of the enucleated swine eye obtained using the fundus camera. (b) Intensity normalized image of the original RGB image. (c) Image obtained with the K-means clustering algorithm and (d) differential K-means image to calculate the shadow length.


Figure 9(b) is the intensity-normalized image with 256 levels of intensity. Similar to the model-eye analysis, the K-means clustering algorithm was applied to the normalized swine eye image to detect the shadow area more clearly. Figure 9(c) is the image obtained after performing 1000 iterations of the clustering algorithm. As a result, pixels of identical normalized intensity merged into common clusters, and the clusters with the lowest intensity values belong to the shadow. To accurately measure the length of the shadow, the image obtained after K-means clustering was differentiated. Figure 9(d) is the differential K-means image. The areas marked with dotted circles (1) and (2) are the shadow regions. Dotted circle (1) is the shadow area inside the optic disc that was used to measure the shadow length. To compare the results with the OCT measurements, the shadow length was measured at different cross-sections similar to the OCT scans. The diameter of the optic disc of the swine eye was measured manually by dissecting the eye and using an optical microscope; it was about 2.1 mm. The angle of oblique illumination for this image was 45 deg ± 5 deg. Using Eq. (2), the depth of the optic disc at various cross-sections was calculated. Table 1 compares the depth values obtained with the OCT device and with our proposed method. We used five enucleated pig eyes for measuring the depth of the optic disc with the OCT device, and the same eyes were used to measure the depth with our proposed method. The depth values varied from 0.179 to 0.350 mm, which is in good accordance with the depth values obtained from the OCT measurements. The precision of a set of measurements can be characterized by its standard deviation. In our proposed method, the standard deviation of each set of shadow length measurements was 0.028 mm, with 11 values taken in each set.

Table 1

Comparison of depth values of the optic disc obtained by the OCT device and proposed method.

Sample No.    OCT device (mm)    Proposed method (mm) (standard deviation = 0.028 mm)
1             0.183              0.179
2             0.209              0.198
3             0.269              0.250
4             0.420              0.342
5             0.490              0.350
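As a supplementary sketch (not part of the reported analysis), Eq. (2) can be inverted to show the shadow lengths implied by the tabulated depths for D = 2.1 mm and θ = 45 deg, which puts the 0.028-mm standard deviation of the shadow length measurements in context:

    # Hedged sketch: shadow lengths implied by the tabulated depths via Eq. (2),
    # assuming theta = 45 deg and the measured disc diameter D = 2.1 mm.
    import math

    def elliptical_depth(D_mm, L_mm, theta_deg):
        """Depth of an elliptical cup from its shadow length, Eq. (2)."""
        return L_mm * math.tan(math.radians(theta_deg)) / (2.0 * (1.0 - L_mm / D_mm))

    def shadow_from_depth(D_mm, d_mm, theta_deg=45.0):
        """Invert Eq. (2) for the shadow length L."""
        t = math.tan(math.radians(theta_deg))
        return 2.0 * d_mm / (t + 2.0 * d_mm / D_mm)

    D = 2.1
    for d in (0.179, 0.198, 0.250, 0.342, 0.350):   # proposed-method depths from Table 1
        L = shadow_from_depth(D, d)
        print(round(L, 3), round(elliptical_depth(D, L, 45.0), 3))   # shadows of ~0.31-0.53 mm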

The depth of the elliptical/parabolic optic disc can be successfully estimated using this simple method. These depth values are useful for calculating the cup volume and area. According to the reported literature, the shape and dimensions of structures can be obtained from the cast shadow.50 The shapes obtained from the shadows can be used for 3D surface reconstruction.51 Estimating the shape and 3D reconstruction of the optic disc could be helpful in detecting the progression of various eye diseases, including glaucoma.
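As one illustrative approximation (a rough estimate under the assumption of a paraboloid-shaped cup, not a measured quantity), the cup volume can be related to the measured depth and diameter as half the volume of the circumscribing cylinder,

V \approx \frac{1}{2}\cdot\frac{\pi D^{2}}{4}\,d = \frac{\pi D^{2} d}{8},

so, for example, D = 2.1 mm and d = 0.35 mm correspond to V ≈ 0.6 mm³.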

3.4. Light Safety

During screening and regular medical checkups of the eye, radiation may have to be directed toward the eye, which can pose potential photochemical and thermal hazards to the retina. For our prototype, most of the light passes through the sclera and illuminates the retina. For an 850-nm wavelength, the transmission through the sclera is 40% and the absorption is 5%.52 To ensure the safety of the prototype, the photochemical and thermal exposures should comply with the ISO standards ISO 15004-2:2007 and ISO 10940:2009. For ocular tissues, we evaluated the light exposure values at the sclera, lens, cornea, and retina and compared them with their respective exposure limits as specified in the ISO 15004-2:2007 standard.53 For skin exposure limits, the 2013 ICNIRP guidelines on limits of exposure to incoherent visible and IR radiation for skin were followed.54

3.4.1. Eye exposure conditions

The maximum radiant flux of the LED at a center wavelength of 850 nm was 75 mW. We calculated the irradiance at the scleral surface considering the maximum power of the LED. According to the calculations, the irradiance at the scleral surface was Emax = 0.5419 W/cm², and the maximum radiant exposure was Hmax = 10.83 J/cm².

For the worst-case scenario, the radiant exposure on the eye for 300 s was calculated. For a radiant exposure of 60 s (power of 1 mW), Hact1 = 0.433 J/cm², and for 300 s, Hact5 = 2.167 J/cm². The retinal irradiance for accidental transpupillary illumination was calculated using the beam converging angle. The calculated irradiance at the scleral surface was Ep = 7.22 mW/cm². The irradiance at the posterior retina was 3.5 mW/cm², and the intended trans-scleral exposure (40% × Ep) was 2.89 mW/cm².
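The arithmetic behind these values can be reproduced as follows; the illuminated scleral spot area is not stated explicitly and is inferred here from the quoted 75 mW and 0.5419 W/cm² values, so it should be treated as an assumption.

    # Hedged sketch of the exposure arithmetic; the spot area is inferred from the
    # quoted maximum values (75 mW -> 0.5419 W/cm^2) and is therefore an assumption.
    area_cm2 = 0.075 / 0.5419            # ~0.138 cm^2 illuminated scleral spot
    E_max = 0.075 / area_cm2             # 0.5419 W/cm^2 at the maximum 75-mW flux
    E_imaging = 0.001 / area_cm2         # ~7.22e-3 W/cm^2 at the 1-mW imaging power
    H_60s = E_imaging * 60               # ~0.433 J/cm^2 radiant exposure over 60 s
    H_300s = E_imaging * 300             # ~2.167 J/cm^2 over the 300-s worst case
    E_transscleral = 0.40 * E_imaging    # ~2.89e-3 W/cm^2 after 40% scleral transmission
    print(E_max, E_imaging, H_60s, H_300s, E_transscleral)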

To compare the calculated exposure values with the ISO standards, applicable limits for the continuous NIR LED light source were calculated and are as follows.

  • 1. Unweighted corneal and lenticular IR irradiance = 7.22 mW/cm². The maximum permissible limit for Group 1 instruments is 20 mW/cm².

  • 2. Weighted retinal visible and IR radiation thermal irradiance = 1.445 mW/cm². The permissible limit for Group 1 instruments in this category is 0.7 W/cm².

  • 3. Unweighted anterior segment visible and IR radiation irradiance = 2.89 mW/cm². The permissible limit for this category is 4 W/cm².

The radiation exposure values calculated above for the continuous NIR LED light source can be compared directly with the respective limits, and they are much lower than the limits specified for Group 1 instruments ("ophthalmic instruments for which no potential light hazard exists").

3.4.2. Eyelid/skin

There is little or no reporting in the literature on the degree of IR transmission through the eyelid. However, because the biological structures of the eyelid and the body skin are quite similar, it is possible to estimate the IR light transmission. According to the ICNIRP skin limits for visible and IR thermal injury, a radiant exposure limit is specified only for exposures shorter than 10 s, and no limit is provided for longer exposure durations. With much longer exposure durations, there are concerns about heat stress, so longer exposures should be avoided. In our system, the actual LED power used for imaging is very low, and we estimated the radiant exposure on the eye for a maximum time of 300 s. The calculated values are much lower than the limits provided by the ISO standards, and there is no thermal hazard concern during imaging.

4. Discussion

This paper presented the design and fabrication of a portable fundus camera that uses trans-scleral illumination for imaging the retina and optic disc of the eye. Trans-palpebral and trans-pars-planar illumination using fiber optics have been reported for wide-field fundus examination, but these devices need separate adjustment of the illumination sub-system, making them inconvenient to use.25–27 In our fabricated fundus camera, the trans-scleral illumination is contact-free and can be switched to oblique illumination. These features distinguish it from the reported devices. For a long time, stereo disc photography has been used to record structural abnormalities in glaucomatous eyes. Such a method is susceptible to inter- and intra-subject variability and is time consuming when working with a large number of patients over a long period of time.13 In our proposed method, a quasi 3D image of the optic disc was obtained using oblique illumination, and the depth of the optic disc was calculated from the shadow length measurement. Clustering-based algorithms, such as K-means clustering, can efficiently detect shadows in digital images and were applied in the present study to detect the shadows created by the oblique illumination. The shadow length can vary with slight changes in the angle of illumination, optical aberrations, and the size of the eyeball. In our fabricated device, we fixed the position of the LED light at the designed angle (45 deg), so variation due to the illumination angle was controlled. The size of the eyeball changes with eye conditions such as myopia or hyperopia. In such cases, images with and without oblique illumination should be analyzed to avoid errors in the diagnosis. The size of the optic disc can also vary with the presence of glaucoma, which will change the depth values. Thus, by comparing the measured depth values with those of disease-free eyes, the proposed method can be used for the preliminary screening of eye diseases in which the depth of the optic disc is an important factor. Depending on the results, the person can be examined further with more specialized devices. To prevent damage to the eyes, the safety of the prototype was ensured by verifying that the calculated exposures remain within the thermal limits of the ISO standards.

4.1. Limitations

Despite the advantages of directly obtaining quasi 3D images by oblique illumination and the ease of calculating depth values from the shadow length using a single image, our proposed method has some limitations. For example, certain structures in the image may easily be misinterpreted when off-axis illumination is used alone. Because of the greater depth of field and the presence of visual cues to depth structure in the images, this method is useful only for the initial screening of diseases. For practical application in clinics, images taken with and without oblique illumination should be analyzed together. For further improvement in glaucoma diagnosis, the implementation of artificial intelligence using images obtained with our fundus camera and the depth measurements is being considered.

5. Conclusions

In summary, a portable, simple, low-cost, and non-mydriatic prototype fundus camera with a wide field-of-view was constructed using all off-the-shelf components. Oblique/off-axis illumination is a simple and cost-effective way to improve contrast and to obtain quasi 3D images. Oblique illumination in our fabricated fundus camera was created using an NIR LED with a narrow beam angle. The depth of the optic disc of a swine eye was calculated using the shadow created by oblique illumination. The reliability of the depth calculation using shadow length measurements from quasi 3D images was confirmed with a model eye, and the results were compared with OCT measurements. These depth values may be useful as supporting parameters for ophthalmologists in diagnosing the progression of glaucoma.

Acknowledgment

The research leading to these results was supported by Japan Science and Technology Agency (JST) Center of Innovation program (COI) (Grant No. JPMJCE1303). No potential conflicts of interest were reported by the authors.

References

1. H. A. Quigley and A. T. Broman, "The number of people with glaucoma worldwide in 2010 and 2020," Br. J. Ophthalmol., 90(3), 262–267 (2006). https://doi.org/10.1136/bjo.2005.081224
2. Y. C. Tham et al., "Global prevalence of glaucoma and projections of glaucoma burden through 2040: a systematic review and meta-analysis," Ophthalmology, 121(11), 2081–2090 (2014). https://doi.org/10.1016/j.ophtha.2014.05.013
3. "Retina/vitreous panel preferred practice pattern® guidelines, diabetic retinopathy," www.aao.org/ppp (2016).
4. S. Resnikoff et al., "The number of ophthalmologists in practice and training worldwide: a growing gap despite more than 200000 practitioners," Br. J. Ophthalmol., 96, 783–787 (2012). https://doi.org/10.1136/bjophthalmol-2011-301378
5. S. Rachapelle et al., "The cost-utility of telemedicine to screen for diabetic retinopathy in India," Ophthalmology, 120, 566–573 (2013). https://doi.org/10.1016/j.ophtha.2012.09.002
6. S. P. Solish et al., "Optic nerve and retinal nerve fiber imaging," https://eyewiki.aao.org/ (2021).
7. R. A. Karlsson et al., "Automatic fundus image quality assessment on a continuous scale," Comput. Biol. Med., 129, 104114 (2021). https://doi.org/10.1016/j.compbiomed.2020.104114
8. Y. Shen et al., "Domain-invariant interpretable fundus image quality assessment," Med. Image Anal., 61, 101654 (2020). https://doi.org/10.1016/j.media.2020.101654
9. G. L. Spaeth and S. C. Reddy, "Imaging of the optic disk in caring for patients with glaucoma: ophthalmoscopy and photography remain the gold standard," Surv. Ophthalmol., 59, 454–458 (2014). https://doi.org/10.1016/j.survophthal.2013.10.004
10. M. Iester, F. S. Mikelberg and S. M. Drance, "The effect of optic disc size on diagnostic precision with the Heidelberg Retina Tomograph," Ophthalmology, 104(3), 545–548 (1997). https://doi.org/10.1016/S0161-6420(97)30277-2
11. G. L. Portney, "Photogrammetric analysis of volume asymmetry of the optic nerve head cup in normal, hypertensive and glaucomatous eyes," Am. J. Ophthalmol., 80(1), 51–55 (1975). https://doi.org/10.1016/0002-9394(75)90868-5
12. O. C. Holm et al., "Volume of the optic disc cup," Am. J. Ophthalmol., 73(6), 876–881 (1972). https://doi.org/10.1016/0002-9394(72)90456-4
13. E. Corona et al., "Digital stereo image analyzer for generating automated 3D measures of optic disc deformation in glaucoma," IEEE Trans. Med. Imaging, 21, 1244–1253 (2002). https://doi.org/10.1109/TMI.2002.806293
14. J. Xu et al., "Automated assessment of optic nerve head on stereo disc photographs," Invest. Ophthalmol. Vis. Sci., 49, 2512–2517 (2008). https://doi.org/10.1167/iovs.07-1229
15. V. R. Algazi, J. L. Keltner and C. A. Johnson, "Computer analysis of the optic cup in glaucoma," Invest. Ophthalmol. Visual Sci., 26(12), 1759–1770 (1985).
16. M. Okutomi, O. Yoshizaki and G. Tomita, "Color stereo matching and its application to 3D measurement of optic nerve head," in Proc. 11th IAPR Int. Conf. Pattern Recognit., 509–513 (1992). https://doi.org/10.1109/ICPR.1992.201611
17. K. Tran et al., "Construction of an inexpensive hand-held fundus camera through modification of a consumer point-and-shoot camera," Invest. Ophthalmol. Visual Sci., 53(12), 7600–7607 (2012). https://doi.org/10.1167/iovs.12-10449
18. K. V. Chalam, V. S. Brar and R. Keshavamurthy, "Evaluation of modified portable digital camera for screening of diabetic retinopathy," Ophthalmic Res., 42, 60–62 (2009). https://doi.org/10.1159/000219687
19. P. Li et al., "Usability testing of a smartphone-based retinal camera among first-time users in the primary care setting," BMJ Innov., 5(4), 120–126 (2019). https://doi.org/10.1136/bmjinnov-2018-000321
20. J. D. Cavallerano et al., "Non-mydriatic digital imaging alternative for annual retinal examination in persons with previously documented no or mild diabetic retinopathy," Am. J. Ophthalmol., 140, 667.e1–667.e8 (2005). https://doi.org/10.1016/j.ajo.2005.03.075
21. B. Y. Shen and S. Mukai, "A portable, inexpensive, nonmydriatic fundus camera based on the Raspberry Pi computer," J. Ophthalmol., 2017, 4526243 (2017). https://doi.org/10.1155/2017/4526243
22. M. T. Witmer and S. Kiss, "Wide-field imaging of the retina," Surv. Ophthalmol., 58(2), 143–154 (2013). https://doi.org/10.1016/j.survophthal.2012.07.003
23. A. Capone, Jr. et al., "Wide field imaging in pediatric retinal disease," 42–45 (2015).
24. O. Pomerantzeff, "Illumination system for ophthalmoscope," US Patent US4061423A (1977).
25. D. Toslak et al., "Trans-palpebral illumination: an approach for wide-angle fundus photography without the need for pupil dilation," Opt. Lett., 41, 2688–2691 (2018). https://doi.org/10.1364/OL.41.002688
26. B. Wang et al., "Contact-free trans-pars-planar illumination enables snapshot fundus camera for nonmydriatic wide field photography," Sci. Rep., 8, 8768 (2018). https://doi.org/10.1038/s41598-018-27112-x
27. D. Toslak et al., "Trans-pars-planar illumination enables a 200 deg ultra-wide field pediatric fundus camera for easy examination of the retina," Biomed. Opt. Express, 11(1), 68–76 (2020). https://doi.org/10.1364/BOE.11.000068
28. Z. Hostounsky and R. Pelc, "An efficient way of high contrast, quasi-3D cellular imaging: off-axis illumination," J. Biochem. Biophys. Methods, 68, 23–30 (2006). https://doi.org/10.1016/j.jbbm.2006.03.016
29. G. Zheng, C. Kolner and C. Yang, "Microscopy refocusing and dark field imaging using a simple LED array," Opt. Lett., 36(20), 3987–3989 (2011). https://doi.org/10.1364/OL.36.003987
30. S. B. Mehta and C. J. R. Sheppard, "Quantitative phase-gradient imaging at high resolution with asymmetric illumination-based differential phase contrast," Opt. Lett., 34(13), 1924–1926 (2009). https://doi.org/10.1364/OL.34.001924
31. T. N. Ford, K. K. Chu and J. Mertz, "Phase-gradient microscopy in thick tissue with oblique back-illumination," Nat. Methods, 9(12), 1195–1197 (2012). https://doi.org/10.1038/nmeth.2219
32. L. Tian, J. Wang and L. Waller, "3D differential phase-contrast microscopy with computational illumination using an LED array," Opt. Lett., 39(5), 1326–1329 (2014). https://doi.org/10.1364/OL.39.001326
33. R. Yokoe and W. Watanabe, "Multi-contrast imaging of femtosecond laser-induced modifications in glass by variable illumination with a projector-based microscope," Optik, 150, 48–53 (2017). https://doi.org/10.1016/j.ijleo.2017.09.086
34. K. Guo et al., "Microscopy illumination engineering using a low-cost liquid crystal display," Biomed. Opt. Express, 6, 574–579 (2015). https://doi.org/10.1364/BOE.6.000574
35. T. Laforest et al., "Transscleral optical phase imaging of the human retina," Nat. Photonics, 14, 439–445 (2020). https://doi.org/10.1038/s41566-020-0608-y
36. B. R. Straatsma, M. B. Landers and A. E. Kreiger, "The ora serrata in the adult human eye," Arch. Ophthalmol., 80, 3–20 (1968). https://doi.org/10.1001/archopht.1968.00980050005002
37. Y. Kato, M. Inoue and A. Hirakata, "Quantitative comparisons of ultra-wide field images of model eye obtained with Optos 200Tx and Optos California," BMC Ophthalmol., 19, 115 (2019). https://doi.org/10.1186/s12886-019-1125-y
38. X. Yao et al., "Understanding the relationship between visual-angle and eye-angle for reliable determination of the field-of-view in ultra-wide field fundus photography," Biomed. Opt. Express, 12(10), 6651–6659 (2021). https://doi.org/10.1364/BOE.433775
39. A. Ajaz et al., "Agreement study between color and IR retinal images based on retinal vasculature morphological parameters," BMC Ophthalmol., 19, 27–35 (2019). https://doi.org/10.1186/s12886-018-0997-6
40. A. E. Elsner et al., "Infrared imaging of sub-retinal structures in the human ocular fundus," Vis. Res., 36(1), 191–205 (1996). https://doi.org/10.1016/0042-6989(95)00100-E
41. D. L. Nickla and J. Wall, "The multifunctional choroid," Progr. Retinal Eye Res., 29, 144–168 (2010). https://doi.org/10.1016/j.preteyeres.2009.12.002
42. T. Stephanides, The Microscope and the Practical Principles of Observation, Faber and Faber, London (1947).
43. M. Francon, Progress in Microscopy, Pergamon Press (1961).
44. G. Wollstein et al., "Identifying early glaucomatous changes: comparing between clinical assessment of optic disc photographs and confocal scanning ophthalmoscopy," Ophthalmology, 107(12), 2272–2277 (2000). https://doi.org/10.1016/S0161-6420(00)00363-8
45. J. E. Chappelow and V. L. Sharpton, "An improved shadow measurement technique for constraining the morphometry of simple impact craters," Meteorit. Planet. Sci., 37, 479–486 (2002). https://doi.org/10.1111/j.1945-5100.2002.tb00834.x
46. E. Arbel and H. Hel-Or, "Shadow removal using intensity surfaces and texture anchor points," IEEE Trans. Pattern Anal. Mach. Intell., 33(6), 1202–1216 (2011). https://doi.org/10.1109/TPAMI.2010.157
47. G. D. Finlayson et al., "Removing shadows from images," Lect. Notes Comput. Sci., 2353, 823–836 (2002). https://doi.org/10.1007/3-540-47979-1_55
48. E. Salvador, A. Cavallaro and T. Ebrahimi, "Cast shadow segmentation using invariant color features," Comput. Vis. Image Underst., 95(2), 238–259 (2004). https://doi.org/10.1016/j.cviu.2004.03.008
49. Y. Cheng, "Mean shift, mode seeking, and clustering," IEEE Trans. Pattern Anal. Mach. Intell., 17(8), 790–799 (1995). https://doi.org/10.1109/34.400568
50. J. E. Chappelow, "Simple impact crater determination from shadow," Meteorit. Planet. Sci., 48(10), 1863–1872 (2013). https://doi.org/10.1111/maps.12201
51. M. Daum and G. Dudek, "On 3-D surface reconstruction using shape from shadows," in Proc. IEEE Comput. Soc. Conf. Comput. Vis. and Pattern Recognit., 461–468 (1998). https://doi.org/10.1109/CVPR.1998.698646
52. A. Vogel et al., "Optical properties of human sclera and their consequences for trans-scleral applications," Lasers Surg. Med., 11, 331–340 (1991). https://doi.org/10.1002/lsm.1900110404
53. International Organization for Standardization, "ISO 15004-2," (2007). https://www.iso.org/obp/ui/#iso:std:iso:15004:-2:ed-1:v1:en
54. International Commission on Non-Ionizing Radiation Protection (ICNIRP), "ICNIRP guidelines on limits of exposure to incoherent visible and infrared radiation," Health Phys., 105(1), 74–96 (2013). https://doi.org/10.1097/HP.0b013e318289a611

Biography

Neelam Kaushik received her PhD in material science and engineering from Tohoku University, Sendai, Japan, in 2010. She was a research associate from 2010 to 2016 in WPI-AIMR, Tohoku University. She was an assistant professor in the Department of Fine Mechanics, Tohoku University, from 2017 to 2021. Since April 2021, she has been at the Department of Ophthalmology, Tohoku University. Her research interests include the development of portable health monitoring systems and new materials for MEMS applications.

Takashi Sasaki received his MS degree and PhD in mechanical engineering from Tohoku University, Japan, in 2009 and 2012, respectively. From 2012 to 2021, he was an assistant professor at the Graduate School of Engineering, Tohoku University. From 2018 to 2019, he was a visiting scholar at Stanford University, United States. Currently, he is a staff scientist at Silicon Austria Labs, Austria. His research interests focus on microelectromechanical systems for sensor applications.

Kana Takeyama received her MS degree in mechanical engineering from the Graduate School of Engineering, Tohoku University, in 2020. Currently, she is an engineer at Denso Hokkaido Corporation, Japan.

Kazuhiro Hane received his MS degree and PhD from Nagoya University, Japan, in 1980 and 1983, respectively. From 1983 to 1994, he was with the Department of Electrical Engineering, Nagoya University. From 1994 to 2021, he was a professor at the Graduate School of Engineering, Tohoku University, Japan. Currently, he is a professor at New Industry Creation Hatchery Center, Tohoku University. His research interests include the development of optical micro sensors and optical microelectromechanical systems.

Toru Nakazawa received his PhD in ophthalmology in 2002. He spent the following 3 years at the Massachusetts Eye and Ear Infirmary as a research fellow. He has been a professor and chairman of the Department of Ophthalmology at Tohoku University since 2011. His research interests include the development of clinical therapies based on the results of basic research. He has published more than 441 articles in peer-reviewed international journals.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Neelam Kaushik, Takashi Sasaki, Kana Takeyama, Kazuhiro Hane, and Toru Nakazawa "Imaging and qualitative depth analysis with a portable nonmydriatic fundus camera using oblique illumination," Journal of Optical Microsystems 3(2), 024502 (12 June 2023). https://doi.org/10.1117/1.JOM.3.2.024502
Received: 6 March 2023; Accepted: 25 May 2023; Published: 12 June 2023
KEYWORDS: Eye, Light sources and illumination, Cameras, Shadows, Portability, Retina, Eye models
