Learning the Condition of Satisfaction of an Elementary Behavior in Dynamic Field Theory
2015
In order to proceed along an action sequence, an autonomous agent has to recognize that the intended final condition of the previous action has been achieved. In previous work, we have shown how a sequence of actions can be generated by an embodied agent using a neural-dynamic architecture for behavioral organization, in which each action has an intention and a condition of satisfaction. These components are represented by dynamic neural fields and are coupled to the motors and sensors of the robotic agent. Here, we demonstrate how the mappings between intended actions and their resulting conditions may be learned, rather than pre-wired. We use reward-gated associative learning, in which, over many instances of externally validated goal achievement, the agent learns the perceptual conditions that are expected to accompany achievement of the goal. After learning, the external reward is no longer needed to recognize that the expected outcome has been achieved. This method was implemented using dynamic neural fields and tested on a real-world E-Puck mobile robot and a simulated NAO humanoid robot.
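
As a rough illustration of the reward-gated associative learning described above, the sketch below shows how weights from an active intention onto a perceptual (condition-of-satisfaction) field could be strengthened only when an external reward confirms goal achievement, and how the learned weights alone can later signal that the expected outcome is present. The field size, learning rate, nonlinearity, and function names are illustrative assumptions, not the architecture used in the paper.

```python
import numpy as np

FIELD_SIZE = 100   # number of sites in the perceptual field (assumed)
ETA = 0.1          # learning rate (assumed)

def field_output(u, beta=4.0):
    """Sigmoidal output nonlinearity typical of dynamic neural fields."""
    return 1.0 / (1.0 + np.exp(-beta * u))

# Associative weights from the intention node onto the
# condition-of-satisfaction field; unstructured before learning.
weights = np.zeros(FIELD_SIZE)

def learning_step(intention_active, field_activation, reward):
    """One reward-gated Hebbian update.

    Weights change only while the intention is active AND an external
    reward confirms that the goal was achieved, binding the currently
    perceived outcome to that intention.
    """
    global weights
    if intention_active and reward > 0.0:
        post = field_output(field_activation)        # field output per site
        weights += ETA * reward * (post - weights)   # gated, self-normalizing Hebb rule

def expected_outcome_present(field_activation, threshold=0.5):
    """After learning, the weights alone detect the learned outcome,
    so the external reward is no longer required."""
    overlap = np.dot(weights, field_output(field_activation))
    return overlap > threshold * (np.sum(weights) + 1e-9)
```

In this sketch, repeated calls to learning_step during rewarded trials concentrate weight at the field sites that are reliably active at goal achievement; expected_outcome_present then acts as the learned condition-of-satisfaction detector.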
Reference Key | matthew2015learningpaladyn |
Authors | Matthew Luciw; Sohrob Kazerounian; Konstantin Lahkman; Mathis Richter; Yulia Sandamirskaya |
Journal | Paladyn: Journal of Behavioral Robotics |
Year | 2015 |
DOI | not found |
URL | |
Keywords |