Peter GECZY, Shiro USUI, "Novel First Order Optimization Classification Framework," in IEICE TRANSACTIONS on Fundamentals, vol. E83-A, no. 11, pp. 2312-2319, November 2000.
Abstract: Numerous scientific and engineering fields extensively utilize optimization techniques for finding appropriate parameter values of models. Various optimization methods are available for practical use. Optimization algorithms are classified primarily according to their rates of convergence. Unfortunately, it is often the case in practice that a particular optimization method with specified convergence rates performs substantially differently on diverse optimization tasks. The theoretical classification by convergence rates then lacks relevance in the context of practical optimization. It is therefore desirable to formulate a novel classification framework relevant to the theoretical concept of convergence rates as well as to practical optimization. This article introduces such a classification framework. The proposed classification framework enables the specification of optimization techniques and optimization tasks. It also makes explicit its inherent relationship to convergence rates. The novel classification framework is applied to categorizing the tasks of optimizing polynomials and the problem of training multilayer perceptron neural networks.
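As background for the classification by convergence rates mentioned in the abstract, the short Python sketch below is illustrative only and is not taken from the paper; the polynomial, the step size, and all variable names are assumptions made for this example. It runs plain gradient descent, a first-order method, on a one-dimensional polynomial and prints the ratio of successive errors, which settles at a constant below one, the signature of a linear rate of convergence.

# Illustrative sketch only: gradient descent (a first-order method) applied to
# the polynomial f(w) = (w - 3)^2, printing the empirical error ratio per step.
def f_grad(w):
    return 2.0 * (w - 3.0)      # derivative of f(w) = (w - 3)^2

w = 0.0                         # assumed starting point
step = 0.1                      # assumed fixed step size
w_star = 3.0                    # known minimizer of this polynomial
prev_err = abs(w - w_star)
for k in range(15):
    w -= step * f_grad(w)       # first-order update: w <- w - step * f'(w)
    err = abs(w - w_star)
    print(f"iteration {k:2d}  error {err:.3e}  ratio {err / prev_err:.3f}")
    prev_err = err
# The printed ratio approaches 0.8 = 1 - 2 * step, i.e. linear convergence.

A superlinearly or quadratically convergent method would drive this ratio toward zero instead; the abstract's observation is that such theoretical labels alone do not always predict how a method behaves on practical tasks such as training multilayer perceptrons.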
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/e83-a_11_2312/_p
@ARTICLE{e83-a_11_2312,
author={Peter GECZY and Shiro USUI},
journal={IEICE TRANSACTIONS on Fundamentals},
title={Novel First Order Optimization Classification Framework},
year={2000},
volume={E83-A},
number={11},
pages={2312-2319},
abstract={Numerous scientific and engineering fields extensively utilize optimization techniques for finding appropriate parameter values of models. Various optimization methods are available for practical use. Optimization algorithms are classified primarily according to their rates of convergence. Unfortunately, it is often the case in practice that a particular optimization method with specified convergence rates performs substantially differently on diverse optimization tasks. The theoretical classification by convergence rates then lacks relevance in the context of practical optimization. It is therefore desirable to formulate a novel classification framework relevant to the theoretical concept of convergence rates as well as to practical optimization. This article introduces such a classification framework. The proposed classification framework enables the specification of optimization techniques and optimization tasks. It also makes explicit its inherent relationship to convergence rates. The novel classification framework is applied to categorizing the tasks of optimizing polynomials and the problem of training multilayer perceptron neural networks.},
keywords={},
doi={},
ISSN={},
month={November},}
TY - JOUR
TI - Novel First Order Optimization Classification Framework
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 2312
EP - 2319
AU - Peter GECZY
AU - Shiro USUI
PY - 2000
DO -
JO - IEICE TRANSACTIONS on Fundamentals
SN -
VL - E83-A
IS - 11
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - November 2000
AB - Numerous scientific and engineering fields extensively utilize optimization techniques for finding appropriate parameter values of models. Various optimization methods are available for practical use. Optimization algorithms are classified primarily according to their rates of convergence. Unfortunately, it is often the case in practice that a particular optimization method with specified convergence rates performs substantially differently on diverse optimization tasks. The theoretical classification by convergence rates then lacks relevance in the context of practical optimization. It is therefore desirable to formulate a novel classification framework relevant to the theoretical concept of convergence rates as well as to practical optimization. This article introduces such a classification framework. The proposed classification framework enables the specification of optimization techniques and optimization tasks. It also makes explicit its inherent relationship to convergence rates. The novel classification framework is applied to categorizing the tasks of optimizing polynomials and the problem of training multilayer perceptron neural networks.
ER -