C-BER - Indexed Articles in Journals

Recent Submissions

Now showing 1 - 5 of 78
  • Item
    Conventional Filtering Versus U-Net Based Models for Pulmonary Nodule Segmentation in CT Images
    (2020) Joana Maria Rocha; António Cunha; Ana Maria Mendonça
  • Item
    Review on Deep Learning Methods for Chest X-Ray based Abnormality Detection and Thoracic Pathology Classification
    (2021) Joana Maria Rocha; Ana Maria Mendonça; Aurélio Campilho
    Backed by more powerful computational resources and optimized training routines, deep learning models have demonstrated unprecedented performance and several benefits in extracting information from chest X-ray data. Chest X-ray is one of the most common imaging exams, and its increasing demand adds to radiologists’ workload. Consequently, healthcare would benefit from computer-aided diagnosis systems to prioritize certain exams and further identify possible pathologies. Pioneering work in chest X-ray analysis has focused on the identification of specific diseases, but to the best of the authors' knowledge no paper has specifically reviewed relevant work on abnormality detection and multi-label thoracic pathology classification. This paper focuses on those issues, selecting the leading chest X-ray based deep learning strategies for comparison. In addition, the paper discloses the current annotated public chest X-ray databases, covering the common thorax diseases.
  • Item
    Lightweight multi-scale classification of chest radiographs via size-specific batch normalization
    (2023) Sofia Cardoso Pereira; Rocha, J.; Campilho, A.; Sousa, P.; Mendonça, A. M.
    Background and Objective: Convolutional neural networks are widely used to detect radiological findings in chest radiographs. Standard architectures are optimized for images of relatively small size (for example, 224 × 224 pixels), which suffices for most application domains. However, in medical imaging, larger inputs are often necessary to analyze disease patterns. A single scan can display multiple types of radiological findings varying greatly in size, and most models do not explicitly account for this. For a given network, whose layers have fixed-size receptive fields, smaller input images result in coarser features, which better characterize larger objects in an image. In contrast, larger inputs result in finer-grained features, beneficial for the analysis of smaller objects. By compromising on a single resolution, existing frameworks fail to acknowledge that the ideal input size will not necessarily be the same for classifying every pathology of a scan. The goal of our work is to address this shortcoming by proposing a lightweight framework for multi-scale classification of chest radiographs, where finer and coarser features are combined in a parameter-efficient fashion. Methods: We experiment on CheXpert, a large chest X-ray database. A lightweight multi-resolution (224 × 224, 448 × 448 and 896 × 896 pixels) network is developed based on a DenseNet-121 model where batch normalization layers are replaced with the proposed size-specific batch normalization. Each input size undergoes batch normalization with dedicated scale and shift parameters, while the remaining parameters are shared across sizes. Additional external validation of the proposed approach is performed on the VinDr-CXR data set. Results: The proposed approach (AUC 83.27±0.17, 7.1M parameters) outperforms standard single-scale models (AUC 81.76±0.18, 82.62±0.11 and 82.39±0.13 for input sizes 224 × 224, 448 × 448 and 896 × 896, respectively, 6.9M parameters). It also achieves performance similar to an ensemble of one individual model per scale (AUC 83.27±0.11, 20.9M parameters), while relying on significantly fewer parameters. The model leverages features of different granularities, resulting in a more accurate classification of all findings, regardless of their size, highlighting the advantages of this approach. Conclusions: Different chest X-ray findings are better classified at different scales. Our study shows that multi-scale features can be obtained with nearly no additional parameters, boosting performance. © 2023 The Author(s)
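The core idea of the size-specific batch normalization described above can be sketched in a few lines: each supported input size gets its own scale (gamma) and shift (beta) parameters, while all other network parameters would be shared across sizes. The following NumPy sketch is illustrative only (class name, shapes, and training-mode statistics are assumptions, not the paper's implementation; running statistics are omitted for brevity):

```python
import numpy as np

class SizeSpecificBatchNorm:
    """Batch normalization with a dedicated (gamma, beta) pair per
    input size; all other parameters would be shared across sizes."""
    def __init__(self, num_features, sizes, eps=1e-5):
        self.eps = eps
        # one affine pair per supported input resolution
        self.params = {s: (np.ones(num_features), np.zeros(num_features))
                       for s in sizes}

    def __call__(self, x, size):
        gamma, beta = self.params[size]
        # normalize per channel over batch and spatial dims (N, H, W)
        mean = x.mean(axis=(0, 2, 3), keepdims=True)
        var = x.var(axis=(0, 2, 3), keepdims=True)
        x_hat = (x - mean) / np.sqrt(var + self.eps)
        return gamma[None, :, None, None] * x_hat + beta[None, :, None, None]

bn = SizeSpecificBatchNorm(num_features=8, sizes=(224, 448, 896))
x = np.random.randn(4, 8, 16, 16)
y = bn(x, size=448)   # selects the 448-specific affine parameters
print(y.shape)        # (4, 8, 16, 16)
```

The parameter overhead is tiny: only the per-size gamma/beta vectors are duplicated, which is consistent with the abstract's 7.1M vs. 6.9M parameter counts.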
  • Item
    Mapping Cashew Orchards in Cantanhez National Park (Guinea-Bissau)
    (2022) Sofia Cardoso Pereira; Lopes, C.; João Pedro Pedroso
    The forests and woodlands of Guinea-Bissau, a biodiversity hotspot under threat, are progressively being replaced by cashew tree orchards. While the exports of cashew nuts significantly contribute to the gross domestic product and support local livelihoods, the country's natural capital is under significant pressure due to unsustainable land use. In this context, official entities strive to counter deforestation, but the problem persists, and there are currently no systematic or automated means for objectively monitoring and reporting the situation. Furthermore, previous remote sensing approaches failed to distinguish cashew orchards from forests and woodlands due to the significant spectral overlap between the land cover types and the highly intertwined structure of the cashew tree patches. This work contributes to overcoming this difficulty. It develops an affordable, reliable, and easy-to-use procedure based on machine learning models and Sentinel-2 images, automatically detecting cashew orchards with a Dice coefficient of 82.54%. The results of this case study designed for the Cantanhez National Park are proof of concept and demonstrate the viability of mapping cashew orchards. Therefore, the work is a stepping stone towards wall-to-wall operational monitoring in the region. © 2022 Elsevier B.V.
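The Dice coefficient reported above measures the overlap between a predicted segmentation mask and a reference mask: twice the intersection divided by the sum of the two mask areas. A minimal sketch of the standard metric (the epsilon guard and toy masks are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred = np.array([[1, 1, 0], [0, 1, 0]])    # 3 predicted pixels
target = np.array([[1, 0, 0], [0, 1, 1]])  # 3 reference pixels, 2 overlap
print(round(float(dice_coefficient(pred, target)), 3))  # 0.667
```

A score of 82.54% therefore means the detected orchard pixels and the reference orchard pixels overlap substantially, despite the spectral similarity to surrounding woodland.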
  • Item
    End-to-end Adversarial Retinal Image Synthesis
    (2017) Costa, P.; Adrian Galdran; Maria Inês Meyer; Niemeijer, M.; Abramoff, M.; Ana Maria Mendonça; Aurélio Campilho
    In medical image analysis applications, the availability of large amounts of annotated data is becoming increasingly critical. However, annotated medical data is often scarce and costly to obtain. In this paper, we address the problem of synthesizing retinal color images by applying recent techniques based on adversarial learning. In this setting, a generative model is trained to maximize a loss function provided by a second model attempting to classify its output into real or synthetic. In particular, we propose to implement an adversarial autoencoder for the task of retinal vessel network synthesis. We use the generated vessel trees as an intermediate stage for the generation of color retinal images, which is accomplished with a Generative Adversarial Network. Both models require the optimization of almost everywhere differentiable loss functions, which allows us to train them jointly. The resulting model offers an end-to-end retinal image synthesis system capable of generating as many retinal images as the user requires, with their corresponding vessel networks, by sampling from a simple probability distribution that we impose on the associated latent space. We show that the learned latent space contains a well-defined semantic structure, implying that we can perform calculations in the space of retinal images, e.g., smoothly interpolating new data points between two retinal images. Visual and quantitative results demonstrate that the synthesized images are substantially different from those in the training set, while also being anatomically consistent and displaying a reasonable visual quality.
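The "smooth interpolation" operation mentioned above works on latent codes, not pixels: two images are mapped to codes in the imposed latent distribution, intermediate codes are produced by linear mixing, and each is decoded into a new image. A minimal sketch of the latent-space step (the function name, code dimension, and use of plain linear interpolation are assumptions for illustration; the decoder is not shown):

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps):
    """Linearly interpolate between two latent codes; each resulting
    code would be passed through the trained decoder/generator to
    synthesize one image along the transition."""
    alphas = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - a) * z_a + a * z_b for a in alphas])

rng = np.random.default_rng(0)
z_a = rng.standard_normal(32)     # latent code of image A
z_b = rng.standard_normal(32)     # latent code of image B
path = interpolate_latents(z_a, z_b, steps=5)
print(path.shape)  # (5, 32): five codes, endpoints included
```

Because sampling and interpolation happen in a simple imposed distribution (e.g., a standard Gaussian), any such code is a valid generator input, which is what lets the system produce arbitrarily many image/vessel-network pairs.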