Federated Deep Learning With Prototype Matching for Object Extraction From Very-High-Resolution Remote Sensing Images


Abstract

Deep convolutional neural networks (DCNNs) have become the leading tools for object extraction from very-high-resolution (VHR) remote sensing images. However, label scarcity in local datasets limits the prediction performance of DCNNs, and traditional deep learning schemes often raise privacy concerns over remote sensing data. To address these problems, we propose a novel federated learning scheme with prototype matching (FedPM) that collaboratively learns a richer DCNN model by leveraging remote sensing data distributed among multiple clients. The scheme performs federated optimization of DCNNs by aggregating clients’ knowledge in the gradient space without compromising data privacy. Specifically, a prototype matching method is developed to regularize local training with prototypical representations while reducing the distribution divergence across heterogeneous image data. Furthermore, the derived deviations between local and global prototypes are used to quantify each local model’s effect on the decision boundary and to optimize global model updating via an attention-weighted aggregation scheme. Finally, the sparse ternary compression (STC) method is applied to reduce communication costs. Extensive experiments on VHR aerial and satellite image datasets verify that FedPM dramatically improves the prediction performance of DCNNs on object extraction at lower communication costs. To the best of our knowledge, this is the first time federated learning has been applied to remote sensing visual tasks.
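Two of the mechanisms named in the abstract can be illustrated concretely: attention-weighted aggregation driven by prototype deviations, and sparse ternary compression of gradients. The sketch below is illustrative only, not the paper's implementation: the function names, the softmax-over-negative-deviations form of the attention weights, and the use of flat Python lists as model updates are all assumptions; the STC function simply follows the general top-k ternarization idea behind sparse ternary compression.

```python
import math

def attention_weights(deviations):
    # Softmax over negative prototype deviations: clients whose local
    # prototypes drift further from the global prototypes get less weight.
    # (The exact weighting function in the paper may differ.)
    exps = [math.exp(-d) for d in deviations]
    total = sum(exps)
    return [e / total for e in exps]

def aggregate_updates(updates, deviations):
    # Attention-weighted average of per-client model updates,
    # each represented here as a flat vector of floats.
    w = attention_weights(deviations)
    dim = len(updates[0])
    return [sum(w[c] * updates[c][j] for c in range(len(updates)))
            for j in range(dim)]

def sparse_ternary_compress(grad, sparsity=0.01):
    # Sparse ternary compression: keep only the top-k entries by
    # magnitude and replace each with a signed shared magnitude mu,
    # zeroing the rest, so the update is cheap to communicate.
    k = max(1, int(len(grad) * sparsity))
    top = sorted(range(len(grad)), key=lambda i: abs(grad[i]),
                 reverse=True)[:k]
    mu = sum(abs(grad[i]) for i in top) / k
    out = [0.0] * len(grad)
    for i in top:
        out[i] = mu if grad[i] > 0 else -mu
    return out
```

In this sketch, a client with deviation 0 contributes its full softmax share, while a strongly deviating client is down-weighted before its update enters the global model; the compressed gradient transmits only k index-sign pairs plus one magnitude.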

Publication
IEEE Transactions on Geoscience and Remote Sensing 61, 1-16
Xiaokang Zhang
PhD

My research interests include remote sensing, computer vision and deep learning.