Ippei HAMAMOTO
Yamaguchi University
Masaki KAWAMURA
Yamaguchi University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Ippei HAMAMOTO, Masaki KAWAMURA, "Image Watermarking Technique Using Embedder and Extractor Neural Networks" in IEICE TRANSACTIONS on Information and Systems,
vol. E102-D, no. 1, pp. 19-30, January 2019, doi: 10.1587/transinf.2018MUP0006.
Abstract: An autoencoder has the potential ability to compress and decompress information. In this work, we regard the process of generating a stego-image from an original image and watermarks as compression, and the process of recovering the original image and watermarks from the stego-image as decompression. We propose embedder and extractor neural networks based on the autoencoder. The embedder network learns the mapping from the DCT coefficients of the original image and a watermark to those of the stego-image. The extractor network learns the mapping from the DCT coefficients of the stego-image to the watermark. Once the proposed neural networks have been trained, they can embed watermarks into unlearned test images and extract them again. We investigated the relation between the number of neurons and network performance by computer simulations and found that the trained neural network could provide high-quality stego-images and watermarks with few errors. We also evaluated the robustness against JPEG compression and found that, with suitable parameters, the watermarks were extracted with an average BER below 0.01 and an image quality above 35 dB when the quality factor Q was over 50. We also investigated how our neural network represents the watermarks in the stego-image. There are two possibilities: distributed representation and sparse representation. From an investigation of the output of the stego layer (3rd layer), we found that the distributed representation emerged at an early learning step and the sparse representation appeared at a later step.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2018MUP0006/_p
@ARTICLE{e102-d_1_19,
author={Ippei HAMAMOTO and Masaki KAWAMURA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Image Watermarking Technique Using Embedder and Extractor Neural Networks},
year={2019},
volume={E102-D},
number={1},
pages={19-30},
abstract={An autoencoder has the potential ability to compress and decompress information. In this work, we consider the process of generating a stego-image from an original image and watermarks as compression, and the process of recovering the original image and watermarks from the stego-image as decompression. We propose embedder and extractor neural networks based on the autoencoder. The embedder network learns the mapping from the DCT coefficients of the original image and a watermark to those of the stego-image. The extractor network learns the mapping from the DCT coefficients of the stego-image to the watermark. Once the proposed neural network has been trained, the network can embed the watermark into unlearned test images and extract it from them. We investigated the relation between the number of neurons and network performance by computer simulations and found that the trained neural network could provide high-quality stego-images and watermarks with few errors. We also evaluated the robustness against JPEG compression and found that, when suitable parameters were used, the watermarks were extracted with an average BER lower than 0.01 and image quality over 35 dB when the quality factor Q was over 50. We also investigated how to represent the watermarks in the stego-image by our neural network. There are two possibilities: distributed representation and sparse representation. From the results of investigation into the output of the stego layer (3rd layer), we found that the distributed representation emerged at an early learning step and then sparse representation came out at a later step.},
keywords={},
doi={10.1587/transinf.2018MUP0006},
ISSN={1745-1361},
month={January},
}
TY - JOUR
TI - Image Watermarking Technique Using Embedder and Extractor Neural Networks
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 19
EP - 30
AU - Ippei HAMAMOTO
AU - Masaki KAWAMURA
PY - 2019
DO - 10.1587/transinf.2018MUP0006
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E102-D
IS - 1
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - January 2019
AB - An autoencoder has the potential ability to compress and decompress information. In this work, we consider the process of generating a stego-image from an original image and watermarks as compression, and the process of recovering the original image and watermarks from the stego-image as decompression. We propose embedder and extractor neural networks based on the autoencoder. The embedder network learns the mapping from the DCT coefficients of the original image and a watermark to those of the stego-image. The extractor network learns the mapping from the DCT coefficients of the stego-image to the watermark. Once the proposed neural network has been trained, the network can embed the watermark into unlearned test images and extract it from them. We investigated the relation between the number of neurons and network performance by computer simulations and found that the trained neural network could provide high-quality stego-images and watermarks with few errors. We also evaluated the robustness against JPEG compression and found that, when suitable parameters were used, the watermarks were extracted with an average BER lower than 0.01 and image quality over 35 dB when the quality factor Q was over 50. We also investigated how to represent the watermarks in the stego-image by our neural network. There are two possibilities: distributed representation and sparse representation. From the results of investigation into the output of the stego layer (3rd layer), we found that the distributed representation emerged at an early learning step and then sparse representation came out at a later step.
ER -