Computational modeling of visual attention is a research field focused on emulating the behavior of biological visual systems in a given scenario, using mechanisms developed for fixation prediction or salient region detection. In the literature, different approaches have been proposed to emulate the interactions that occur in the early visual system of biological organisms. However, mathematical modeling of these systems using theories related to fractional operators could outperform the existing models. In this paper, we present a fractional bio-inspired filter for salient color detection in natural scenes, based on the behavior and distribution of cone photoreceptor cells in the retina. The filter was compared with two classic saliency algorithms on a natural color image dataset in terms of saliency prediction and processing time, using the Similarity (SIM) score and runtime performance, respectively. Our approach reaches the second-best result in saliency prediction, with a SIM of 48.9% against ground-truth fixation maps, and the fastest response time, averaging 0.12 s per high-resolution image, 25% faster than the algorithm of Itti et al., one of the most widely used in robotic vision tasks.
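For reference, the SIM score used in this comparison is the standard histogram-intersection similarity between two saliency maps treated as probability distributions. A minimal sketch of the metric (not of the filter itself), assuming NumPy arrays of equal shape:

```python
import numpy as np

def sim_score(saliency_map, fixation_map):
    """Histogram-intersection similarity between two saliency maps.

    Both maps are normalized to sum to 1 so they can be compared as
    probability distributions; SIM is the sum of the element-wise
    minimum (1.0 = identical maps, 0.0 = no overlap).
    """
    p = saliency_map / saliency_map.sum()
    q = fixation_map / fixation_map.sum()
    return np.minimum(p, q).sum()
```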
Visual attention models are usually tested on collections of natural images that contain intentionally salient objects and obvious contextual information. In contrast, few algorithms in the literature have considered datasets without contextual information for modeling attention. Moreover, visual attention models have not been thoroughly evaluated across both contextless and context-aware environments. In this paper, we compare the performance of several well-known bottom-up visual attention models on contextless and context-aware datasets, using the Pearson Correlation Coefficient to assess the efficiency of each model in terms of accuracy and eye-fixation prediction. The best algorithm outperforms the others, reaching 59.1% and 43.8% correlation with ground-truth information on the contextless and context-aware datasets, respectively.
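For reference, the Pearson Correlation Coefficient here treats the predicted saliency map and the ground-truth fixation map as paired pixel samples. A minimal sketch, assuming NumPy arrays of equal shape and non-constant maps:

```python
import numpy as np

def pearson_cc(saliency_map, fixation_map):
    """Pearson correlation between a predicted saliency map and a
    ground-truth fixation map, computed over flattened pixels
    (+1 = perfect linear agreement, 0 = no linear relation)."""
    s = saliency_map.ravel().astype(float)
    f = fixation_map.ravel().astype(float)
    s = (s - s.mean()) / s.std()   # standardize both maps
    f = (f - f.mean()) / f.std()
    return np.mean(s * f)
```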
Using multiparametric MRI (mpMRI) protocols to monitor prostate cancer could provide new insights into the biological mechanisms of developing tumours. Automatically discriminating tumour regions is an active area of research due to the complexity and plurality of cancer behaviour. This work evaluates four Magnetic Resonance Imaging (MRI) modalities, namely Diffusion-Weighted Imaging evaluated at b = {0, 100, 1000}, Apparent Diffusion Coefficient, and Dynamic Contrast-Enhanced MRI, by extracting texture and functional features and then selecting the optimal ones to discriminate anatomical prostate regions in each modality. The images were acquired prior to radiotherapy from eight patients previously diagnosed with moderate risk of recurrent cancer. Finally, we compared the relevance of each modality for discriminating between healthy tissue and tumour cells.
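As background, the Apparent Diffusion Coefficient modality is conventionally derived from the Diffusion-Weighted series via a mono-exponential decay fit across b-values. The sketch below illustrates that standard computation (the general technique, not necessarily the exact pipeline used in this work), assuming NumPy arrays:

```python
import numpy as np

def fit_adc(signals, b_values):
    """Least-squares fit of the apparent diffusion coefficient (ADC)
    from DWI signals at several b-values, using the mono-exponential
    model S(b) = S0 * exp(-b * ADC), i.e. ln S(b) = ln S0 - b * ADC.

    signals : array of shape (n_b, ...), voxel intensities per b-value
    b_values: array of shape (n_b,), e.g. [0, 100, 1000]
    """
    b = np.asarray(b_values, dtype=float)
    log_s = np.log(np.maximum(signals, 1e-8))   # avoid log(0)
    # Linear regression of ln S against b, one fit per voxel.
    slope = np.polyfit(b, log_s.reshape(len(b), -1), 1)[0]
    return (-slope).reshape(signals.shape[1:])  # ADC map
```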
The detection, segmentation, and quantification of multiple sclerosis (MS) lesions on magnetic resonance images (MRI) has been a very active field for the last two decades because of the need to correlate these measures with the effectiveness of pharmacological treatment. A myriad of methods has been developed, and most of these are non-specific to the lesion type, segmenting acute and chronic lesions together. Radiologists, on the other hand, are able to distinguish between several stages of the disease on different types of MRI images. The main motivation of the work presented here is to computationally emulate the visual perception of the radiologist by using modeling principles of the neuronal centers along the visual system. With this approach we are able to detect the lesions in the majority of the images in our population sample. This type of approach also allows us to study and improve the analysis of brain networks by introducing a priori information.
Diagnosing a histopathology glass slide is a complex process that involves accurate recognition of several structures, their function in the tissue, and their relations with other structures. The way in which the pathologist represents the image content and the relations between those objects yields better and more accurate diagnoses. Therefore, an appropriate semantic representation of the image content will be useful in several analysis tasks such as cancer classification, tissue retrieval, and histopathological image analysis, among others. Nevertheless, automatically recognizing those structures and extracting their semantic meaning remain very challenging tasks. In this paper we introduce a new semantic representation that describes histopathological concepts suitable for classification. The approach identifies local concepts using a dictionary learning method, i.e., the algorithm learns the most representative atoms from a set of randomly sampled patches, and then models the spatial relations among them by counting the co-occurrences between atoms while penalizing the spatial distance. The proposed approach was compared with a bag-of-features representation in a tissue classification task. For this purpose, 240 histological microscopical fields of view, 24 per tissue class, were collected. These images were used to train a Support Vector Machine classifier per class, using 120 images as the training set and the remaining ones for testing, maintaining the same proportion of each concept in the training and test sets. The classification results, averaged over 100 random partitions of training and test sets, show that our approach is on average almost 6% more sensitive than the bag-of-features representation.
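A minimal sketch of the representation described above; the atom count, the dominant-atom assignment, and the exponential form of the distance penalty are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

def concept_cooccurrence(patches, positions, n_atoms=32, sigma=50.0):
    """Dictionary-based concept descriptor with a spatial penalty.

    1. Learn a dictionary of visual 'atoms' from randomly sampled
       patches (rows of `patches`).
    2. Assign each patch to its dominant atom.
    3. Accumulate atom co-occurrences, down-weighted by the spatial
       distance between patch centers (`positions`, shape (n, 2)).
    """
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, random_state=0)
    codes = dico.fit_transform(patches)        # sparse codes, (n, n_atoms)
    labels = np.abs(codes).argmax(axis=1)      # dominant atom per patch

    cooc = np.zeros((n_atoms, n_atoms))
    for i in range(len(labels)):
        for j in range(i + 1, len(labels)):
            d = np.linalg.norm(positions[i] - positions[j])
            w = np.exp(-d / sigma)             # penalize distant pairs
            cooc[labels[i], labels[j]] += w
            cooc[labels[j], labels[i]] += w
    return cooc / max(cooc.sum(), 1e-12)       # normalized descriptor
```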
This paper introduces an automated method for finding diagnostic regions of interest (RoIs) in histopathological images. The method is based on the cognitive process of visual selective attention that arises during a pathologist's image examination. Specifically, it emulates the first examination phase, which consists of a coarse search for tissue structures at a "low zoom" to separate the image into relevant regions. The pathologist's cognitive performance depends on inherent image visual cues (bottom-up information) and on acquired clinical medicine knowledge (top-down mechanisms). Our pathologist's visual attention model integrates these two components. The selected bottom-up information includes local low-level features such as intensity, color, orientation, and texture. Top-down information is related to the anatomical and pathological structures known by the expert. A coarse approximation to these structures is achieved by an oversegmentation algorithm inspired by psychological grouping theories, whose parameters are learned from an expert pathologist's segmentation. Top-down and bottom-up integration is achieved by calculating a unique index for each of the low-level characteristics inside a region; relevancy is estimated as a simple average of these indexes. Finally, a binary decision rule determines whether or not a region is interesting. The method was evaluated on a set of 49 images using a perceptually-weighted evaluation criterion, finding a quality gain of 3 dB when compared with a classical bottom-up model of attention.
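A minimal sketch of the integration step described above; the [0, 1] scaling of the feature indexes and the threshold value are illustrative assumptions:

```python
import numpy as np

def region_relevance(feature_indexes, threshold=0.5):
    """Integrate bottom-up cues for one oversegmented region.

    feature_indexes: per-feature saliency indexes for the region
    (e.g. intensity, color, orientation, texture), each assumed to
    be scaled to [0, 1]. Relevancy is their simple average; the
    binary rule flags the region as a diagnostic RoI when the
    average exceeds the (assumed) threshold.
    """
    relevance = float(np.mean(feature_indexes))
    return relevance, relevance >= threshold
```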