Contingencies of reinforcement
          | Fixed              | Variable
Interval  | every 3 minutes    | averaging every 3 minutes
Ratio     | every 4th response | averaging every 4th response
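To make the four cells of the table concrete, here is a minimal Python sketch of each schedule expressed as a decision rule. The function names and parameter values are illustrative assumptions, not drawn from the source:

```python
import random

def fixed_interval_due(elapsed_s, interval_s=180):
    """Fixed interval: a reinforcer becomes available every interval_s
    seconds (e.g. every 3 minutes), however many responses occur."""
    return elapsed_s >= interval_s

def variable_interval_wait(mean_interval_s=180):
    """Variable interval: the wait before the next reinforcer is
    unpredictable but averages mean_interval_s seconds
    (e.g. averaging every 3 minutes)."""
    return random.uniform(0, 2 * mean_interval_s)  # mean = mean_interval_s

def fixed_ratio_due(response_count, ratio=4):
    """Fixed ratio: exactly every ratio-th response is reinforced
    (e.g. every 4th response)."""
    return response_count % ratio == 0

def variable_ratio_due(ratio=4):
    """Variable ratio: each response has a 1-in-ratio chance of being
    reinforced, so reinforcement arrives on average every ratio-th
    response (e.g. averaging every 4th response)."""
    return random.random() < 1 / ratio
```

On a fixed ratio of 4, responses 4, 8, 12, and so on are reinforced, so the responder can count down to the next reward; on a variable ratio of 4, any particular response may be reinforced, which is exactly why the next reward cannot be predicted.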
Different contingency schedules have different effects on learning and on the production of behavior.
(1) Fixed schedules of reinforcement become known to the responder, who can predict when the next reinforcer will occur. Such schedules tend to promote very rapid learning of a behavior. However, the rate of response tends to be inconsistent: it slows immediately after reinforcement and then builds until the next reinforcer is received. Further, behavior acquired through fixed reinforcement tends to extinguish quickly if the reinforcement stops.
(2) Variable schedules of reinforcement cannot be predicted by the responder. As a result, the rate of response tends to remain relatively stable over time. However, learning under variable reinforcement tends to be slower than learning under fixed reinforcement. The trade-off is that behavior reinforced on a variable schedule, while harder to acquire, is much more resistant to extinction, and is therefore often preferred.
A very good example of the reinforcing power of variable ratio schedules of reinforcement is the behavior of gamblers (each wager is a response, and wins arrive after an unpredictable number of wagers). If individuals lost money every time they placed a wager, almost all people would stop. However, gamblers do occasionally win. These occasional wins reinforce the behavior, because the individual is aware that any wager may win. Even after long periods of not winning, people continue to gamble, which demonstrates the resistance to extinction created by a variable reinforcement schedule.
Before continuing, stop and consider
activities in your life that are reinforced.
What types of schedules can you identify?
Because behavior is learned quickest on a fixed schedule, this is often the approach individuals choose. However, to promote lasting behavior, a variable schedule is better. A technique known as thinning is used to accomplish both objectives. When thinning, we start by reinforcing the behavior on every occurrence. This continuous, fixed schedule produces a rapid response and learning curve. Once the individual consistently displays the desired behavior, the schedule remains fixed, but the ratio is increased so that the reward arrives less often; for example, instead of reinforcing every occurrence, only every other occurrence is rewarded.
Over time, the fixed ratio is gradually increased, so that the individual can still expect reinforcement, but not as often as during the original learning. Then the schedule of reinforcement is changed from fixed to variable. In making this change, the average rate of reinforcement may be increased slightly, but because the schedule is now variable, the learner cannot predict when a reinforcer will occur. Over time, the average rate of reinforcement is decreased until, eventually, no reinforcer is provided. If, after the reinforcer has been discontinued, the behavior rate drops below a desirable threshold, it can be quickly raised again by reintroducing reinforcement on a variable schedule for a short period.
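The staging just described can be sketched in Python. The step size (thinning one step every 25 responses) and the specific ratios are assumptions made for illustration; the source does not specify them:

```python
import random

def thinning_schedule(total_responses=200):
    """Sketch of thinning: start by reinforcing every response,
    gradually raise the fixed ratio, switch to a variable ratio,
    then thin it further until no reinforcer is provided."""
    ratio = 1          # continuous reinforcement: every response rewarded
    variable = False   # start on a fixed schedule
    for n in range(1, total_responses + 1):
        if variable:
            # Variable: each response has a 1-in-ratio chance of reward.
            reinforced = ratio > 0 and random.random() < 1 / ratio
        else:
            # Fixed: exactly every ratio-th response is rewarded.
            reinforced = n % ratio == 0
        yield n, reinforced
        if n % 25 == 0:            # thin the schedule one step
            if not variable and ratio >= 4:
                variable = True    # switch from fixed to variable
            elif variable and ratio >= 8:
                ratio = 0          # discontinue reinforcement entirely
            elif ratio > 0:
                ratio *= 2         # reward less often: 1 -> 2 -> 4 -> 8

for n, reinforced in thinning_schedule():
    if reinforced:
        print(f"response {n}: reinforced")
```

If the behavior later drops below the desired threshold, the same generator could be restarted briefly on a variable ratio to bring the response rate back up, mirroring the booster step described above.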