Published in

Hindawi, Security and Communication Networks, 2019, pp. 1-10

DOI: 10.1155/2019/9169802

Laplace Input and Output Perturbation for Differentially Private Principal Components Analysis

Journal article published in 2019 by Yahong Xu, Geng Yang, and Shuangjie Bai
This paper is made freely available by the publisher.


Abstract

With the widespread application of big data, privacy-preserving data analysis has become a topic of increasing significance. Current research mainly focuses on privacy-preserving classification and regression. However, principal component analysis (PCA) is also an effective data analysis method for reducing data dimensionality, commonly used in data processing, machine learning, and data mining. To implement approximate PCA while preserving data privacy, we apply the Laplace mechanism to propose two differentially private principal component analysis algorithms: Laplace input perturbation (LIP) and Laplace output perturbation (LOP). We evaluate the performance of LIP and LOP in terms of noise magnitude and approximation error, both theoretically and experimentally. In addition, we explore how the performance of the two algorithms varies with parameters such as the number of samples, the target dimension, and the privacy parameter. Theoretical and experimental results show that LIP adds less noise and has lower approximation error than LOP. To verify the effectiveness of LIP, we compare it with other algorithms. The experimental results show that LIP provides a strong privacy guarantee and good data utility.
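The two perturbation strategies named in the abstract can be illustrated with a short sketch: input perturbation adds Laplace noise to the covariance matrix before the eigendecomposition, while output perturbation runs exact PCA and then adds noise to the resulting subspace. This is a minimal illustration of the general technique, not the paper's algorithms; the sensitivity scales used below (assuming each data row has L2 norm at most 1) are simplifying assumptions, and the paper's own noise calibration and analysis should be consulted for the exact formulas.

```python
import numpy as np

def lip_pca(X, k, epsilon, rng=None):
    """Sketch of Laplace input perturbation: noise the covariance
    matrix, then take the top-k eigenvectors of the noisy matrix.
    The noise scale is an assumed sensitivity bound, not the paper's."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    A = X.T @ X / n                        # sample covariance (rows assumed norm <= 1)
    scale = d / (n * epsilon)              # assumed L1-sensitivity calibration
    E = rng.laplace(0.0, scale, size=(d, d))
    E = (E + E.T) / 2                      # symmetrize so eigenvalues stay real
    vals, vecs = np.linalg.eigh(A + E)
    order = np.argsort(vals)[::-1][:k]     # indices of the k largest eigenvalues
    return vecs[:, order]

def lop_pca(X, k, epsilon, rng=None):
    """Sketch of Laplace output perturbation: run exact PCA first,
    then add Laplace noise directly to the top-k eigenvector matrix.
    Again, the noise scale is an illustrative assumption."""
    rng = np.random.default_rng() if rng is None else rng
    n, d = X.shape
    A = X.T @ X / n
    vals, vecs = np.linalg.eigh(A)
    order = np.argsort(vals)[::-1][:k]
    V = vecs[:, order]
    scale = d * k / (n * epsilon)          # assumed (looser) sensitivity bound
    return V + rng.laplace(0.0, scale, size=(d, k))

# Usage: project normalized data onto the privately estimated subspace.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 5))
X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1.0)  # enforce norm <= 1
V_lip = lip_pca(X, 2, epsilon=1.0, rng=rng)
V_lop = lop_pca(X, 2, epsilon=1.0, rng=rng)
```

Note that LOP's noise is drawn in the d x k output space with a larger scale, which matches the abstract's finding that input perturbation incurs less noise for the same privacy budget.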