The original paper is in English. Non-English content has been machine-translated and may contain typographical errors or mistranslations. ex. Some numerals are expressed as "XNUMX".
Taito MANABE
Nagasaki University
Yuichiro SHIBATA
Nagasaki University
Kiyoshi OGURI
Nagasaki University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Taito MANABE, Yuichiro SHIBATA, Kiyoshi OGURI, "FPGA Implementation of a Real-Time Super-Resolution System Using Flips and an RNS-Based CNN" in IEICE TRANSACTIONS on Fundamentals,
vol. E101-A, no. 12, pp. 2280-2289, December 2018, doi: 10.1587/transfun.E101.A.2280.
Abstract: Super-resolution technology is one solution for filling the gap between high-resolution displays and lower-resolution images. Various algorithms exist to interpolate the lost information, one of which uses a convolutional neural network (CNN). This paper presents an FPGA implementation and a performance evaluation of a novel CNN-based super-resolution system that can process moving images in real time. We apply horizontal and vertical flips to input images instead of enlargement. This flip method prevents information loss and enables the network to make the best use of its patch size. In addition, we adopted the residue number system (RNS) in the network to reduce FPGA resource utilization. Efficient multiplication and addition with LUTs increased the scale of the network that can be implemented on the same FPGA by approximately 54% compared to an implementation with fixed-point operations. The proposed system can perform super-resolution from 960×540 to 1920×1080 at 60 fps with a latency of less than 1 ms. Despite the resource restrictions of the FPGA, the system can generate clear super-resolution images with smooth edges. The evaluation results also revealed superior quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, compared to systems using other methods.
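The residue number system mentioned in the abstract represents an integer as a tuple of residues modulo several pairwise-coprime moduli, so addition and multiplication decompose into independent small operations, which map well onto FPGA LUTs. The sketch below is a minimal illustration of this arithmetic; the moduli and helper names are made up for the example and are not the paper's actual design choices.

```python
# Minimal RNS arithmetic sketch. MODULI are illustrative, not the
# paper's actual choice; any pairwise-coprime set works.
MODULI = (7, 11, 13)  # dynamic range M = 7 * 11 * 13 = 1001

def to_rns(x, moduli=MODULI):
    """Encode an integer as a tuple of residues, one per modulus."""
    return tuple(x % m for m in moduli)

def rns_add(a, b, moduli=MODULI):
    """Addition is channel-wise: each residue is handled independently."""
    return tuple((x + y) % m for x, y, m in zip(a, b, moduli))

def rns_mul(a, b, moduli=MODULI):
    """Multiplication is also channel-wise; the small operand widths
    are what make LUT-based implementation cheap on an FPGA."""
    return tuple((x * y) % m for x, y, m in zip(a, b, moduli))

def from_rns(r, moduli=MODULI):
    """Decode back to an integer via the Chinese remainder theorem."""
    M = 1
    for m in moduli:
        M *= m
    x = 0
    for ri, mi in zip(r, moduli):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)  # modular inverse (Python 3.8+)
    return x % M
```

For instance, `from_rns(rns_mul(to_rns(23), to_rns(17)))` recovers `391`, since the product stays within the dynamic range of 1001; a hardware design would size the moduli so that all intermediate CNN accumulations fit.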
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E101.A.2280/_p
@ARTICLE{e101-a_12_2280,
author={Taito MANABE and Yuichiro SHIBATA and Kiyoshi OGURI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={FPGA Implementation of a Real-Time Super-Resolution System Using Flips and an RNS-Based CNN},
year={2018},
volume={E101-A},
number={12},
pages={2280-2289},
abstract={The super-resolution technology is one of the solutions to fill the gap between high-resolution displays and lower-resolution images. There are various algorithms to interpolate the lost information, one of which is using a convolutional neural network (CNN). This paper shows an FPGA implementation and a performance evaluation of a novel CNN-based super-resolution system, which can process moving images in real time. We apply horizontal and vertical flips to input images instead of enlargement. This flip method prevents information loss and enables the network to make the best use of its patch size. In addition, we adopted the residue number system (RNS) in the network to reduce FPGA resource utilization. Efficient multiplication and addition with LUTs increased a network scale that can be implemented on the same FPGA by approximately 54% compared to an implementation with fixed-point operations. The proposed system can perform super-resolution from 960×540 to 1920×1080 at 60fps with a latency of less than 1ms. Despite resource restriction of the FPGA, the system can generate clear super-resolution images with smooth edges. The evaluation results also revealed the superior quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, compared to systems with other methods.},
keywords={},
doi={10.1587/transfun.E101.A.2280},
ISSN={1745-1337},
month={December},}
TY - JOUR
TI - FPGA Implementation of a Real-Time Super-Resolution System Using Flips and an RNS-Based CNN
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 2280
EP - 2289
AU - Taito MANABE
AU - Yuichiro SHIBATA
AU - Kiyoshi OGURI
PY - 2018
DO - 10.1587/transfun.E101.A.2280
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E101-A
IS - 12
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - 2018/12
AB - The super-resolution technology is one of the solutions to fill the gap between high-resolution displays and lower-resolution images. There are various algorithms to interpolate the lost information, one of which is using a convolutional neural network (CNN). This paper shows an FPGA implementation and a performance evaluation of a novel CNN-based super-resolution system, which can process moving images in real time. We apply horizontal and vertical flips to input images instead of enlargement. This flip method prevents information loss and enables the network to make the best use of its patch size. In addition, we adopted the residue number system (RNS) in the network to reduce FPGA resource utilization. Efficient multiplication and addition with LUTs increased a network scale that can be implemented on the same FPGA by approximately 54% compared to an implementation with fixed-point operations. The proposed system can perform super-resolution from 960×540 to 1920×1080 at 60fps with a latency of less than 1ms. Despite resource restriction of the FPGA, the system can generate clear super-resolution images with smooth edges. The evaluation results also revealed the superior quality in terms of the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index, compared to systems with other methods.
ER -