Published in

MDPI, Journal of Imaging, 8(6), p. 152, 2022

DOI: 10.3390/jimaging8060152

Coded DNN Watermark: Robustness against Pruning Models Using Constant Weight Code

Journal article published in 2022 by Tatsuya Yasui, Takuro Tanaka, Asad Malik, Minoru Kuribayashi
This paper is made freely available by the publisher.


Preprint: archiving allowed
Postprint: archiving allowed
Published version: archiving allowed
Data provided by SHERPA/RoMEO

Abstract

Deep Neural Network (DNN) watermarking techniques are increasingly being used to protect the intellectual property of DNN models. In essence, DNN watermarking inserts side information into a DNN model without significantly degrading performance on its original task. A pruning attack is a threat to DNN watermarking, wherein the less important neurons in the model are pruned to make it faster and more compact. As a side effect, pruning can remove the watermark from the DNN model. This study investigates a channel coding approach to protect DNN watermarking against pruning attacks. The channel model differs completely from conventional models involving digital images, and determining suitable encoding methods for DNN watermarking remains an open problem. Here, we present a novel encoding approach using constant weight codes to protect DNN watermarks against pruning attacks. The experimental results confirmed that the robustness against pruning attacks could be controlled by carefully setting two thresholds for binary symbols in the codeword.
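The abstract's core ideas can be illustrated with a short sketch: a constant weight code maps each message to a fixed-length binary codeword containing exactly `w` ones, and decoding can use two thresholds so that noisy extracted values falling between them are treated as erasures and then resolved using the known weight `w`. This is a generic illustration of those two ideas, not the authors' actual embedding or decoding scheme; the function names, the combinatorial ranking used for encoding, and the erasure-resolution rule are all assumptions made for the example.

```python
from math import comb

def encode_cw(index, n, w):
    """Map a message index in [0, C(n, w)) to a length-n binary codeword
    of constant Hamming weight w, via lexicographic (combinatorial) ranking."""
    assert 0 <= index < comb(n, w)
    codeword = [0] * n
    remaining, ones = index, w
    for pos in range(n):
        if ones == 0:
            break
        # Number of remaining codewords that place a 1 at this position.
        c = comb(n - pos - 1, ones - 1)
        if remaining < c:
            codeword[pos] = 1
            ones -= 1
        else:
            remaining -= c
    return codeword

def decode_cw(values, w, t0, t1):
    """Threshold-decode noisy real values into bits. Values >= t1 become 1,
    values <= t0 become 0, and values in between are erasures, which are
    resolved using the constant-weight constraint (exactly w ones)."""
    bits, erasures = [], []
    for i, v in enumerate(values):
        if v >= t1:
            bits.append(1)
        elif v <= t0:
            bits.append(0)
        else:
            bits.append(None)
            erasures.append(i)
    # The erased positions with the largest values fill the missing ones.
    need = w - bits.count(1)
    erasures.sort(key=lambda i: values[i], reverse=True)
    for j, i in enumerate(erasures):
        bits[i] = 1 if j < need else 0
    return bits

# Round trip: encode, simulate one ambiguous extracted value, decode.
cw = encode_cw(5, 8, 3)                      # weight-3 codeword of length 8
noisy = [0.9 if b else 0.1 for b in cw]
noisy[5] = 0.5                               # falls between the thresholds
recovered = decode_cw(noisy, 3, t0=0.3, t1=0.7)
assert recovered == cw
```

The two thresholds `t0` and `t1` play the role described in the abstract: widening the gap between them makes more symbols erasures, trading raw decisions for the side information that the codeword's weight is known exactly.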