Yuki FUJIMURA (Kyoto University)
Motoharu SONOGASHIRA (Kyoto University)
Masaaki IIYAMA (Kyoto University)
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yuki FUJIMURA, Motoharu SONOGASHIRA, Masaaki IIYAMA, "Simultaneous Estimation of Object Region and Depth in Participating Media Using a ToF Camera" in IEICE TRANSACTIONS on Information and Systems,
vol. E103-D, no. 3, pp. 660-673, March 2020, doi: 10.1587/transinf.2019EDP7219.
Abstract: Three-dimensional (3D) reconstruction and scene depth estimation from two-dimensional (2D) images are major tasks in computer vision. However, conventional 3D reconstruction techniques become challenging in participating media such as murky water, fog, or smoke. We have developed a method that uses a continuous-wave time-of-flight (ToF) camera to estimate an object region and depth in participating media simultaneously. The scattered light observed by the camera is saturated, so it does not depend on the scene depth. In addition, received signals bouncing off distant points are negligible due to light attenuation, and thus the observation of such a point contains only a scattering component. These phenomena enable us to estimate the scattering component in an object region from a background that contains only the scattering component. The problem is formulated as robust estimation where the object region is regarded as outliers, which enables the simultaneous estimation of an object region and depth on the basis of an iteratively reweighted least squares (IRLS) optimization scheme. We demonstrate the effectiveness of the proposed method using images captured with a ToF camera in real foggy scenes and evaluate its applicability with synthesized data.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2019EDP7219/_p
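The abstract's core idea, robust estimation in which one region of the data (here, the object region) is treated as outliers and solved via IRLS, can be illustrated with a generic sketch. This is not the authors' implementation; the Huber weight function, the toy line-fitting problem, and all names below are illustrative assumptions, standing in for the paper's background/scattering model:

```python
import numpy as np

def irls_robust_fit(A, b, n_iter=20, c=1.345, eps=1e-8):
    """Robustly solve A x ~ b with iteratively reweighted least squares
    (IRLS), down-weighting large residuals via Huber weights so that
    outliers barely influence the final estimate."""
    # Ordinary least-squares initialization.
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(n_iter):
        r = b - A @ x                              # residuals
        s = np.median(np.abs(r)) / 0.6745 + eps    # robust scale (MAD)
        u = np.abs(r) / s
        # Huber weight: 1 for small residuals, c/u for large ones.
        w = np.minimum(1.0, c / np.maximum(u, 1e-12))
        sw = np.sqrt(w)
        # Weighted least-squares update.
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x, w  # estimate and final weights (low weight = likely outlier)

# Toy usage: fit y = a*t + b0 where every 10th sample is a gross outlier,
# playing the role of the "object region" amid clean background samples.
t = np.linspace(0.0, 1.0, 50)
y = 2.0 * t + 1.0
y[::10] += 5.0                                     # inject outliers
A = np.stack([t, np.ones_like(t)], axis=1)
x, w = irls_robust_fit(A, y)
```

After convergence, the recovered slope and intercept are close to the clean values (2.0 and 1.0), and the final weights at the outlier indices are near zero, so thresholding `w` segments the "outlier" region — the analogue of jointly recovering the object region and the model parameters in the paper.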
@ARTICLE{e103-d_3_660,
author={Yuki FUJIMURA and Motoharu SONOGASHIRA and Masaaki IIYAMA},
journal={IEICE TRANSACTIONS on Information and Systems},
title={Simultaneous Estimation of Object Region and Depth in Participating Media Using a ToF Camera},
year={2020},
volume={E103-D},
number={3},
pages={660-673},
abstract={Three-dimensional (3D) reconstruction and scene depth estimation from two-dimensional (2D) images are major tasks in computer vision. However, conventional 3D reconstruction techniques become challenging in participating media such as murky water, fog, or smoke. We have developed a method that uses a continuous-wave time-of-flight (ToF) camera to estimate an object region and depth in participating media simultaneously. The scattered light observed by the camera is saturated, so it does not depend on the scene depth. In addition, received signals bouncing off distant points are negligible due to light attenuation, and thus the observation of such a point contains only a scattering component. These phenomena enable us to estimate the scattering component in an object region from a background that contains only the scattering component. The problem is formulated as robust estimation where the object region is regarded as outliers, which enables the simultaneous estimation of an object region and depth on the basis of an iteratively reweighted least squares (IRLS) optimization scheme. We demonstrate the effectiveness of the proposed method using images captured with a ToF camera in real foggy scenes and evaluate its applicability with synthesized data.},
keywords={},
doi={10.1587/transinf.2019EDP7219},
ISSN={1745-1361},
month={March},}
TY - JOUR
TI - Simultaneous Estimation of Object Region and Depth in Participating Media Using a ToF Camera
T2 - IEICE TRANSACTIONS on Information and Systems
SP - 660
EP - 673
AU - Yuki FUJIMURA
AU - Motoharu SONOGASHIRA
AU - Masaaki IIYAMA
PY - 2020
DO - 10.1587/transinf.2019EDP7219
JO - IEICE TRANSACTIONS on Information and Systems
SN - 1745-1361
VL - E103-D
IS - 3
JA - IEICE TRANSACTIONS on Information and Systems
Y1 - March 2020
AB - Three-dimensional (3D) reconstruction and scene depth estimation from two-dimensional (2D) images are major tasks in computer vision. However, conventional 3D reconstruction techniques become challenging in participating media such as murky water, fog, or smoke. We have developed a method that uses a continuous-wave time-of-flight (ToF) camera to estimate an object region and depth in participating media simultaneously. The scattered light observed by the camera is saturated, so it does not depend on the scene depth. In addition, received signals bouncing off distant points are negligible due to light attenuation, and thus the observation of such a point contains only a scattering component. These phenomena enable us to estimate the scattering component in an object region from a background that contains only the scattering component. The problem is formulated as robust estimation where the object region is regarded as outliers, which enables the simultaneous estimation of an object region and depth on the basis of an iteratively reweighted least squares (IRLS) optimization scheme. We demonstrate the effectiveness of the proposed method using images captured with a ToF camera in real foggy scenes and evaluate its applicability with synthesized data.
ER -