7 research outputs found

    UAV Image Multi-Labeling with Data-Efficient Transformers

    No full text
    In this paper, we present an approach for the multi-label classification of remote sensing images based on data-efficient transformers. During the training phase, we generated a second view of each image in the training set using data augmentation. Both the image and its augmented version were then reshaped into a sequence of flattened patches and fed to the transformer encoder. The encoder extracts a compact feature representation from each image with the help of a self-attention mechanism, which can handle the global dependencies between different regions of a high-resolution aerial image. On top of the encoder, we mounted two classifiers: a token classifier and a distiller classifier. During training, we minimized a global loss consisting of two terms, one for each classifier. In the test phase, we took the average of the two classifiers' outputs as the final class labels. Experiments on two datasets acquired over the cities of Trento and Civezzano, with a ground resolution of two centimeters, demonstrated the effectiveness of the proposed model.
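
    The two-head training and averaging scheme described above can be summarized in a few lines. The following PyTorch sketch is an illustrative assumption of how such a model might be wired, not the authors' implementation: the encoder interface, the head names, and the choice of a per-class binary cross-entropy loss for the multi-label setting are all hypothetical.

```python
import torch
import torch.nn as nn

class TwoHeadTransformer(nn.Module):
    """Transformer encoder with a token head and a distiller head."""

    def __init__(self, encoder: nn.Module, embed_dim: int, num_labels: int):
        super().__init__()
        # `encoder` is assumed to return a (B, N+2, D) token sequence
        # whose first two tokens are the class and distillation tokens.
        self.encoder = encoder
        self.token_head = nn.Linear(embed_dim, num_labels)
        self.dist_head = nn.Linear(embed_dim, num_labels)

    def forward(self, x):
        tokens = self.encoder(x)
        return self.token_head(tokens[:, 0]), self.dist_head(tokens[:, 1])

# Multi-label setting: one independent sigmoid per class label.
criterion = nn.BCEWithLogitsLoss()

def training_step(model, img, img_aug, labels):
    # Global loss = token-head term + distiller-head term, accumulated
    # over the original image and its augmented view.
    loss = torch.zeros(())
    for view in (img, img_aug):
        logits_tok, logits_dist = model(view)
        loss = loss + criterion(logits_tok, labels) + criterion(logits_dist, labels)
    return loss

@torch.no_grad()
def predict(model, img, threshold=0.5):
    # Test phase: average the two classifiers' outputs, then threshold.
    logits_tok, logits_dist = model(img)
    probs = (torch.sigmoid(logits_tok) + torch.sigmoid(logits_dist)) / 2
    return (probs > threshold).float()
```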

    Vision Transformers for Remote Sensing Image Classification

    No full text
    In this paper, we propose a remote-sensing scene-classification method based on vision transformers. These networks, now recognized as state-of-the-art models in natural language processing, do not rely on convolution layers as standard convolutional neural networks (CNNs) do. Instead, they use multihead attention mechanisms as the main building block to derive long-range contextual relations between pixels in images. In a first step, the images under analysis are divided into patches, which are then converted to a sequence by flattening and embedding. To retain positional information, position embeddings are added to the patch embeddings. The resulting sequence is then fed to several multihead attention layers to generate the final representation. At the classification stage, the first token of the sequence is fed to a softmax classification layer. To boost classification performance, we explore several data augmentation strategies to generate additional training data. Moreover, we show experimentally that we can compress the network by pruning half of its layers while keeping competitive classification accuracies. Experimental results on different remote-sensing image datasets demonstrate the promising capability of the model compared to state-of-the-art methods. Specifically, the Vision Transformer obtains average classification accuracies of 98.49%, 95.86%, 95.56% and 93.83% on the Merced, AID, Optimal31 and NWPU datasets, respectively, while the compressed version obtained by removing half of the multihead attention layers yields 97.90%, 94.27%, 95.30% and 93.05%, respectively.
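
    The patchify-embed-classify pipeline described above maps naturally onto a compact PyTorch module. The sketch below is a minimal illustration under assumed dimensions; the class name, layer sizes, and the every-other-block pruning rule are hypothetical stand-ins for the paper's actual configuration (the head returns logits, with softmax applied by the loss or at inference).

```python
import torch
import torch.nn as nn

class TinyViT(nn.Module):
    """Minimal ViT-style classifier: patchify, embed, attend, classify."""

    def __init__(self, img_size=224, patch=16, dim=384, depth=12,
                 heads=6, num_classes=30):
        super().__init__()
        num_patches = (img_size // patch) ** 2
        # A non-overlapping convolution flattens and embeds each patch.
        self.patch_embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))
        self.blocks = nn.ModuleList(
            nn.TransformerEncoderLayer(dim, heads, 4 * dim, batch_first=True)
            for _ in range(depth)
        )
        self.head = nn.Linear(dim, num_classes)

    def forward(self, x):
        x = self.patch_embed(x).flatten(2).transpose(1, 2)  # (B, N, D)
        cls = self.cls_token.expand(x.size(0), -1, -1)
        x = torch.cat([cls, x], dim=1) + self.pos_embed     # add positions
        for blk in self.blocks:
            x = blk(x)
        return self.head(x[:, 0])  # first token -> class logits

def prune_half(model: TinyViT) -> TinyViT:
    # One plausible "remove half the layers" rule: keep every other block.
    model.blocks = nn.ModuleList(list(model.blocks)[::2])
    return model
```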

    Deep Learning Approach for COVID-19 Detection in Computed Tomography Images

    Full text link

    SARS-CoV-2 vaccination modelling for safe surgery to save lives: data from an international prospective cohort study

    No full text
    Background: Preoperative SARS-CoV-2 vaccination could support safer elective surgery. Vaccine numbers are limited, so this study aimed to inform their prioritization by modelling. Methods: The primary outcome was the number needed to vaccinate (NNV) to prevent one COVID-19-related death in 1 year. NNVs were based on postoperative SARS-CoV-2 rates and mortality in an international cohort study (surgical patients), and on community SARS-CoV-2 incidence and case fatality data (general population). NNV estimates were stratified by age (18-49, 50-69, 70 or more years) and type of surgery. Best- and worst-case scenarios were used to describe uncertainty. Results: NNVs were more favourable in surgical patients than in the general population. The most favourable NNVs were in patients aged 70 years or more needing cancer surgery (351; best case 196, worst case 816) or non-cancer surgery (733; best case 407, worst case 1664). Both exceeded the NNV in the general population (1840; best case 1196, worst case 3066). NNVs for surgical patients remained favourable at a range of SARS-CoV-2 incidence rates in sensitivity analysis modelling. Globally, prioritizing preoperative vaccination of patients needing elective surgery ahead of the general population could prevent an additional 58 687 (best case 115 007, worst case 20 177) COVID-19-related deaths in 1 year. Conclusion: As the global roll-out of SARS-CoV-2 vaccination proceeds, patients needing elective surgery should be prioritized ahead of the general population.
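
    For readers unfamiliar with the metric, NNV is the reciprocal of the absolute risk reduction. The short Python sketch below illustrates the arithmetic only; the infection rate, case fatality, and vaccine effectiveness values are hypothetical placeholders, not the study's inputs.

```python
def nnv(infection_rate: float, case_fatality: float,
        vaccine_effectiveness: float) -> float:
    """Number needed to vaccinate to prevent one COVID-19-related death."""
    baseline_mortality = infection_rate * case_fatality
    absolute_risk_reduction = baseline_mortality * vaccine_effectiveness
    return 1.0 / absolute_risk_reduction

# Hypothetical inputs: a 1% postoperative SARS-CoV-2 rate, 25% case
# fatality, and 90% vaccine effectiveness against death give
# NNV = 1 / (0.01 * 0.25 * 0.9) ~= 444.
print(round(nnv(0.01, 0.25, 0.9)))
```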