Article Details
Learning Concepts from a Sequence of Experiences by Reinforcement Learning Agents
Author(s):
- Farzad Rastegar
- Majid Nili Ahmadabadi
Conference: 12th Annual International Conference of the Computer Society of Iran (CSICC)
Abstract:
In this paper, we propose a novel approach whereby a reinforcement learning agent attempts to understand its environment through meaningful temporally extended concepts in an unsupervised way. Our approach is inspired by findings in neuroscience on the role of mirror neurons in action-based abstraction. Since in many cases the best decision cannot be made from instant sensory data alone, in this study we seek a framework for learning temporally extended concepts from sequences of sensory-action data. To direct the agent to gather fertile information for concept learning, a reinforcement learning mechanism utilizing the agent's experience is proposed. Experimental results demonstrate the capability of the proposed approach in retrieving meaningful concepts from the environment. The concepts, and the way they are defined, are designed so that they can not only be applied to ease decision making but also be utilized in other applications, as elaborated in the paper.
Price:
- Site members: 100,000 Rials
- Student members of the society: 20,000 Rials
- Regular members of the society: 40,000 Rials