
Published in

Association for Computing Machinery (ACM), ACM Transactions on Software Engineering and Methodology, 31(1), pp. 1-36, 2021

DOI: 10.1145/3464969


Automating App Review Response Generation Based on Contextual Knowledge

Journal article published in 2021 by Cuiyun Gao, Wenjie Zhou, Xin Xia, David Lo, Qi Xie, Michael R. Lyu
This paper was not found in any repository, but could be made available legally by the author.

Full text: Unavailable

Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving forbidden
Data provided by SHERPA/RoMEO

Abstract

User experience of mobile apps is an essential ingredient that can influence the user base and app revenue. To ensure good user experience and assist app development, several prior studies analyze app reviews, a data source that directly reflects user opinions about apps. Accurately responding to app reviews is one way to address user concerns and thus improve user experience. However, the response quality of the existing method depends on features pre-extracted by external tools, including manually labelled keywords and predicted review sentiment, which may limit its generalizability and flexibility. In this article, we propose a novel neural network approach, named CoRe, which naturally incorporates contextual knowledge without relying on external tools. Specifically, CoRe integrates two types of contextual knowledge from the training corpus: official app descriptions from the app store and the responses of retrieved semantically similar reviews. This enhances the relevance and accuracy of the generated review responses. Experiments on practical review data show that CoRe outperforms the state-of-the-art method by 12.36% in terms of BLEU-4, an accuracy metric widely used to evaluate text generation systems.
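
The abstract mentions two key ideas: retrieving semantically similar reviews to provide contextual knowledge, and evaluating generated responses with BLEU-4. The following minimal Python sketch illustrates both in simplified form. It uses TF-IDF cosine similarity for retrieval and NLTK's BLEU implementation for scoring; the toy corpus, function names, and retrieval method are illustrative assumptions only, not CoRe's actual neural architecture.

```python
# Illustrative sketch only: CoRe is a neural approach; this simplification
# uses TF-IDF retrieval to convey the "retrieved similar review" idea.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy training corpus of (review, developer response) pairs -- hypothetical data.
corpus = [
    ("app crashes on startup",
     "Sorry for the trouble. Please update to the latest version."),
    ("love the new dark mode",
     "Thanks for the kind words! Glad you enjoy it."),
]

reviews = [review for review, _ in corpus]
vectorizer = TfidfVectorizer()
review_vecs = vectorizer.fit_transform(reviews)

def retrieve_response(new_review: str) -> str:
    """Return the response paired with the most similar training review."""
    query_vec = vectorizer.transform([new_review])
    sims = cosine_similarity(query_vec, review_vecs)[0]
    return corpus[sims.argmax()][1]

candidate = retrieve_response("the app keeps crashing when I open it")
reference = "Sorry for the trouble. Please update to the latest version."

# BLEU-4: geometric mean of 1- to 4-gram precisions between the candidate
# response and the reference, smoothed to handle short texts.
bleu4 = sentence_bleu(
    [reference.split()],
    candidate.split(),
    weights=(0.25, 0.25, 0.25, 0.25),
    smoothing_function=SmoothingFunction().method1,
)
print(f"BLEU-4: {bleu4:.4f}")
```

In the paper's actual pipeline, the response of the retrieved review is fed into a neural generator as contextual knowledge rather than returned directly; the BLEU-4 computation above mirrors the standard evaluation protocol the abstract refers to.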