Contingency, contiguity, and causality in conditioning: Applying information theory and Weber's Law to the assignment of credit problem.
2019
Abstract
Contingency is a critical concept for theories of associative learning and for the assignment-of-credit problem in reinforcement learning. Measuring and manipulating it has, however, been problematic. The information-theoretic definition of contingency (normalized mutual information) makes it a readily computed property of the relation between reinforcing events, the stimuli that predict them, and the responses that produce them. When necessary, the dynamic range of the required temporal representation divided by the Weber fraction gives psychologically realistic plug-in estimates of the entropies. There is no measurable prospective contingency between a peck and reinforcement when pigeons peck on a variable-interval schedule of reinforcement. There is, however, a perfect retrospective contingency between reinforcement and the immediately preceding peck. Degrading the retrospective contingency by gratis reinforcement reveals a critical value (.25) below which performance declines rapidly. Contingency is time-scale invariant, whereas the perception of proximate causality depends, we assume, on there being a short, fixed, psychologically negligible critical interval between cause and effect. Increasing the interval between a response and the reinforcement it triggers degrades the retrograde contingency, leading to a decline in performance that restores the contingency to at or above its critical value. Thus, there is no critical interval in the retrospective effect of reinforcement. We conclude with a short review of the broad explanatory scope of information-theoretic contingencies when regarded as causal variables in conditioning. We suggest that the computation of contingencies may supplant the computation of the sum of all future rewards in models of reinforcement learning. (PsycINFO Database Record (c) 2019 APA, all rights reserved.)
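The two quantities the abstract leans on can be sketched in a few lines. The sketch below is illustrative only and is not the authors' published computation: contingency is taken as mutual information normalized by an entropy, and the entropy of a temporal representation is estimated from the number of Weber-fraction-wide bins that tile its dynamic range. The function names, the 0.15 Weber fraction, and the logarithmic binning formula are all assumptions made for the example.

```python
import math

def interval_entropy(t_max, t_min, weber_fraction=0.15):
    """Plug-in entropy estimate (bits) for a represented temporal interval.

    Assumption for illustration: intervals are discriminable only up to
    a Weber fraction w, so bin width grows in proportion to the interval
    and the number of discriminable bins over [t_min, t_max] is
    log(t_max / t_min) / log(1 + w). The entropy is log2 of that count.
    """
    n_bins = math.log(t_max / t_min) / math.log(1.0 + weber_fraction)
    return math.log2(n_bins)

def contingency(mutual_info_bits, entropy_bits):
    """Normalized mutual information I(X;Y) / H(Y), bounded in [0, 1].

    1.0 = perfect contingency (e.g., reinforcement always immediately
    preceded by a peck); 0.0 = no contingency.
    """
    return mutual_info_bits / entropy_bits

# Example: an interval representation spanning a 100:1 dynamic range
h = interval_entropy(t_max=100.0, t_min=1.0)          # about 5 bits
c = contingency(mutual_info_bits=0.5 * h, entropy_bits=h)  # 0.5
```

Under this reading, "degrading the retrospective contingency by gratis reinforcement" lowers the mutual-information term while the entropy term stays fixed, driving the ratio toward the critical value of .25 reported in the abstract.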
| Field | Value |
|---|---|
| Reference Key | gallistel2019contingencypsychological |
| Authors | Gallistel, C. R.; Craig, Andrew R.; Shahan, Timothy A. |
| Journal | Psychological Review |
| Year | 2019 |
| DOI | 10.1037/rev0000163 |
| URL | |
| Keywords | |