C-BER - Indexed Articles in Journals
Browsing C-BER - Indexed Articles in Journals by Author "Adrian Galdran"
-
Item: End-to-end Adversarial Retinal Image Synthesis (2017)
Authors: Costa, P.; Adrian Galdran; Maria Inês Meyer; Niemeijer, M.; Abramoff, M.; Ana Maria Mendonça; Aurélio Campilho
Abstract: In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model that attempts to classify its output as real or synthetic. In particular, we propose an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a Generative Adversarial Network. Both models require the optimization of almost-everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, together with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying reasonable visual quality.
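The two-stage sampling pipeline described above (latent code to vessel tree, vessel tree to color image, plus latent interpolation) can be illustrated with a minimal PyTorch sketch. VesselDecoder and ImageGenerator are hypothetical stand-ins for the trained adversarial-autoencoder decoder and the GAN generator; all layer sizes are illustrative, not the paper's architecture.

```python
# Minimal sketch of two-stage retinal image sampling, assuming 64x64
# images and a 64-dim latent space (both illustrative choices).
import torch
import torch.nn as nn

class VesselDecoder(nn.Module):
    """Hypothetical decoder of the adversarial autoencoder: latent code -> vessel map."""
    def __init__(self, latent_dim=64, side=64):
        super().__init__()
        self.side = side
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 512), nn.ReLU(),
            nn.Linear(512, side * side), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 1, self.side, self.side)

class ImageGenerator(nn.Module):
    """Hypothetical GAN generator: vessel map -> color retinal image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, vessels):
        return self.net(vessels)

decoder, generator = VesselDecoder(), ImageGenerator()
with torch.no_grad():
    z0, z1 = torch.randn(1, 64), torch.randn(1, 64)  # samples from the imposed prior
    for t in torch.linspace(0, 1, 5):                # smooth latent-space interpolation
        z = (1 - t) * z0 + t * z1
        image = generator(decoder(z))                # vessels -> color retinal image
```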
-
Item: Fusion-Based Variational Image Dehazing (2017)
Authors: Adrian Galdran; Vazquez Corral, J.; Pardo, D.; Bertalmio, M.
Abstract: We propose a novel image-dehazing technique based on the minimization of two energy functionals and a fusion scheme that combines the outputs of both optimizations. The proposed fusion-based variational image-dehazing (FVID) method is a spatially varying image enhancement process that first minimizes a previously proposed variational formulation maximizing contrast and saturation on the hazy input. The iterates produced by this minimization are kept, and a second energy that shrinks the intensity values of well-contrasted regions faster is minimized, allowing a set of difference-of-saturation (DiffSat) maps to be generated by observing the shrinking rate. The iterates produced in the first minimization are then fused with these DiffSat maps to produce a haze-free version of the degraded input. The FVID method does not rely on a physical model from which to estimate a depth map, nor does it need a training stage on a database of human-labeled examples. Experimental results on a wide set of hazy images demonstrate that FVID better preserves the image structure in nearby regions that are less affected by fog, and it compares favorably with other current methods in the task of removing haze degradation from faraway regions.
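The fusion step described above lends itself to a short illustration: iterates from the first minimization are blended pixel-wise, with the DiffSat maps acting as weights. The sketch below is a toy normalized-weight fusion under assumed array shapes, not the paper's exact formulation; fuse_iterates is a hypothetical helper.

```python
# Toy sketch of the FVID fusion stage: blend dehazing iterates using
# DiffSat maps as pixel-wise weights (illustrative formulation only).
import numpy as np

def fuse_iterates(iterates, diffsat_maps, eps=1e-8):
    """Pixel-wise weighted fusion of dehazing iterates.

    iterates:     list of HxWx3 float arrays produced by the minimization
    diffsat_maps: list of HxW weight maps, one per iterate
    """
    num = np.zeros_like(iterates[0])
    den = np.zeros(iterates[0].shape[:2])
    for img, w in zip(iterates, diffsat_maps):
        num += img * w[..., None]   # weight each iterate by its DiffSat map
        den += w
    return num / (den[..., None] + eps)  # normalize so weights sum to one
```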
-
Item: Retinal image quality assessment by mean-subtracted contrast-normalized coefficients (2018)
Authors: Adrian Galdran; Teresa Finisterra Araújo; Ana Maria Mendonça; Aurélio Campilho
Abstract: The automatic assessment of visual quality in eye fundus images is an important task in retinal image analysis. A novel quality assessment technique is proposed in this paper. We propose to compute Mean-Subtracted Contrast-Normalized (MSCN) coefficients on local spatial neighborhoods of a given image and analyze their distribution. It is known that for natural images this distribution behaves normally, while distortions of different kinds perturb this regularity. The combination of MSCN coefficients with a simple measure of local contrast allows us to design a simple but effective retinal image quality assessment algorithm that successfully discriminates between good- and low-quality images while delivering a meaningful quality score. The proposed technique is validated on a recent database of quality-labeled retinal images, obtaining results aligned with state-of-the-art approaches at a low computational cost.
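The MSCN computation itself is compact: subtract a local mean estimate and divide by a local contrast estimate. Below is a minimal NumPy/SciPy sketch; the Gaussian window scale sigma and the stabilizing constant c are illustrative choices, not parameters taken from the paper.

```python
# Minimal sketch of MSCN coefficient computation for a grayscale image.
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(image, sigma=7/6, c=1e-3):
    """Mean-Subtracted Contrast-Normalized coefficients (assumed parameters)."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma)                    # local mean
    var = gaussian_filter(image * image, sigma) - mu**2   # local variance
    std = np.sqrt(np.maximum(var, 0.0))                   # local contrast
    return (image - mu) / (std + c)                       # normalized coefficients
```

Per the abstract's premise, the coefficients of a clean image should be approximately Gaussian-distributed, so deviations of the empirical distribution from normality can signal degraded quality.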
-
Item: A Weakly-Supervised Framework for Interpretable Diabetic Retinopathy Detection on Retinal Images (2018)
Authors: Costa, P.; Adrian Galdran; Smailagic, A.; Aurélio Campilho
Abstract: Diabetic retinopathy (DR) detection is a critical retinal image analysis task in the context of early blindness prevention. Unfortunately, training a model to accurately detect DR from the presence of different retinal lesions typically requires a dataset with medical experts' annotations at the pixel level. In this paper, a new methodology based on the multiple instance learning (MIL) framework is developed to overcome this requirement by leveraging the implicit information present in annotations made at the image level. In contrast to previous MIL-based DR detection systems, the main contribution of the proposed technique is the joint optimization of the instance encoding and the image classification stages. In this way, more useful mid-level representations of pathological images can be obtained. The explainability of the model's decisions is further enhanced by a new loss function enforcing appropriate instance and mid-level representations. The proposed technique achieves results comparable to or better than other recently proposed methods, with 90% area under the receiver operating characteristic curve (AUC) on Messidor, 93% AUC on DR1, and 96% AUC on DR2, while improving the interpretability of the produced decisions.
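The joint-optimization idea can be sketched as a single differentiable model: a shared encoder maps image patches (instances) to features, a scorer assigns per-instance lesion evidence, and a pooling step yields the image-level prediction, so both stages train end-to-end from image-level labels alone. The PyTorch sketch below uses max-pooling over instances and illustrative layer sizes; it is not the paper's network or loss.

```python
# Minimal MIL sketch for image-level DR detection: instances are patches
# from one retinal image (the bag); only the bag label supervises training.
import torch
import torch.nn as nn

class MILClassifier(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(          # shared instance encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.scorer = nn.Linear(feat_dim, 1)   # per-instance lesion evidence

    def forward(self, patches):                # patches: (n_instances, 3, H, W)
        scores = self.scorer(self.encoder(patches))
        return torch.max(scores), scores       # max-pool -> image-level score

model = MILClassifier()
patches = torch.randn(36, 3, 64, 64)           # 36 patches from one image
image_score, instance_scores = model(patches)  # instance scores aid interpretability
loss = nn.functional.binary_cross_entropy_with_logits(
    image_score.view(1), torch.ones(1))        # image-level label only
```

Because the per-instance scores are produced explicitly, they can be mapped back to patch locations, which is one way the model's decisions can be made interpretable.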