JAXA Repository / AIREX

There are no files associated with this item.

Title: Approximating many valued mappings using a recurrent neural network
Full text (external site): http://dspace.lib.kanazawa-u.ac.jp/dspace/bitstream/2297/6814/1/TE-PR-NAKAYAMA-K-1494.pdf
Reference URL: http://hdl.handle.net/2297/6814
Authors (English): Tomikawa, Y.; Nakayama, Kenji
Date of issue: 1998-05
Publisher: IEEE (Institute of Electrical and Electronics Engineers)
Publication: IEEE & INNS Proc. of IJCNN'98, Anchorage
Volume: 2
Start page: 1494
End page: 1497
Publication date: 1998-05
Language: eng
Description: In this paper, a recurrent neural network (RNN) is applied to approximating one-to-N many-valued mappings. The RNN described here adds a feedback loop from an output to an input to a conventional multilayer neural network (MLNN). The feedback loop gives the output dynamic properties, and the convergence behavior of these dynamics can be exploited for the approximation problem. To avoid conflicts among the multiple target values y* that overlap at the same input x*, the input pair (x*, y*) and the target y* are presented to the network in the learning phase. Through this learning, a network function f(x, z) satisfying y* = f(x*, y*) is formed. In the recall phase, solutions y of y = f(x, y) are found through the feedback dynamics of the RNN; different solutions for the same input x can be obtained by changing the initial output value y. Our previous paper showed that the RNN can approximate many-valued continuous mappings by introducing a differential condition into the learning. However, if the mapping has discontinuities or the number of values changes, the network sometimes behaves undesirably. In this paper, an integral condition is proposed to prevent spurious convergence and to widen the regions of attraction around the approximation points.
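The learn-and-recall mechanism in the abstract can be pictured with a short numerical sketch. The following Python snippet is an illustrative assumption, not code from the paper: it trains a small two-input network on the two-valued mapping y = ±√x so that f(x*, y*) ≈ y*, then recalls the branches by iterating the output-to-input feedback y ← f(x, y) from different initial output values. The differential and integral conditions that the paper introduces to shape the regions of attraction are omitted here.

```python
import numpy as np

# Minimal sketch (not the authors' code): learn the two-valued mapping
# y = +/-sqrt(x) on x in (0, 1] with a tiny network f(x, y), presenting
# each pair (x*, y*) as input with y* as target so that f(x*, y*) ~ y*.
# Recall then iterates the output-to-input feedback y <- f(x, y).

rng = np.random.default_rng(0)

# Training set: both branches of the mapping share the same input x*.
grid = np.linspace(0.05, 1.0, 40)
xs = np.repeat(grid, 2)
ys = np.concatenate([[np.sqrt(v), -np.sqrt(v)] for v in grid])

H = 20                                    # hidden units
W1 = rng.normal(scale=0.5, size=(H, 2)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(1, H)); b2 = np.zeros(1)

def f(x, y):
    """Network function f(x, y): external input x plus the fed-back output y."""
    h = np.tanh(W1 @ np.array([x, y]) + b1)
    return float((W2 @ h + b2)[0])

# Plain stochastic gradient descent on the fixed-point targets f(x*, y*) = y*.
lr = 0.05
for epoch in range(2000):
    for x, y in zip(xs, ys):
        inp = np.array([x, y])
        h = np.tanh(W1 @ inp + b1)
        out = (W2 @ h + b2)[0]
        err = out - y                     # squared-error gradient
        gW2 = err * h[None, :]; gb2 = np.array([err])
        gh = err * W2[0] * (1.0 - h ** 2)
        gW1 = np.outer(gh, inp); gb1 = gh
        W2 -= lr * gW2; b2 -= lr * gb2
        W1 -= lr * gW1; b1 -= lr * gb1

def recall(x, y0, steps=200):
    """Recall phase: relax the feedback loop y <- f(x, y) from an initial guess y0."""
    y = y0
    for _ in range(steps):
        y = f(x, y)
    return y

# Different initial outputs may converge to different branches of the mapping.
print(recall(0.49, y0=+1.0))
print(recall(0.49, y0=-1.0))
```

Because this sketch omits the paper's stability conditions, nothing prevents the trained network from learning a near-identity map in y or from converging to spurious fixed points; that is precisely the failure mode the differential and integral conditions are meant to address.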
Document type: Conference Paper
Author version flag: publisher
URI: https://repository.exst.jaxa.jp/dspace/handle/a-is/610216


Items in this repository are protected by copyright, unless otherwise indicated.