Learning To Blog & Blogging To Learn

Blog With Me?

What happens when you introduce something to look forward to into a situation that is bleak and both a psychological and physiological struggle?  Having something to look forward to can be the difference that motivates patients with medical difficulties to adhere to their treatment regimens.  In fact, the two articles we will discuss this week look at methods for improving adherence to treatment of chronic diseases in children via positive reinforcement.  The importance of these studies stems from the fact that poor adherence is a common problem which, in many cases, is the underlying cause of poor clinical outcomes.  This may therefore “lead to misconceptions regarding the efficacy of the treatment,” sometimes creating what merely “appears to be a ‘treatment resistant’ disease” (Luersen et al. 5).  The treatment is then perceived to be the problem when, in actuality, the problem is the patient’s failure to stick to the treatment faithfully and correctly.

Luersen et al. look at this problem of variable adherence and its importance as a factor in clinical outcomes.  They review studies that use sticker charts to instrumentally train children with chronic diseases to adhere to their respective treatment regimens.  Each time the child completes a specific treatment or reaches a specified intake level, the child receives a sticker.  This type of positive reinforcement via the application of a sticker is thought to do two things: (1) it provides the children with an immediate sense of gratification, and (2) it helps remind the children (and, in some cases, the parents) whether a treatment has been completed and when the next dose is due.  I will only lightly touch upon the studies Luersen et al. focus on, so please feel free to look into the specifics at http://onlinelibrary.wiley.com/doi/10.1111/j.1525-1470.2012.01741.x/pdf.  Following is an example of a sticker chart:

[Image: example of a sticker chart]

In a study by Stark et al., cystic fibrosis patients aged 4-12 years are rewarded with a sticker each time a calorie goal is met.  Compared to control groups, results include greater increases in caloric intake, weight gain, and BMI, which are maintained at follow-up testing.

In another study by Stark et al., juvenile rheumatoid arthritis patients aged 4-10 years are rewarded in a stepwise fashion, with one sticker per increase in calcium intake per meal and for meeting specified calcium goals.  Compared to control groups, results include increased calcium adherence and greater bone mass.  Patients aged 5-12 years with inflammatory bowel disease also participate in this same type of sticker program, which similarly results in an increase in calcium intake and a greater percentage of patients achieving their calcium goals.

The study by Cass et al. involves tuberculosis patients aged 1-14 years.  The introduction of sticker charts, in which children receive one sticker for each daily dose of tuberculosis medication ingested, makes them 2.4 times more likely to complete their latent tuberculosis infection treatment.

Slifer et al. study patients aged 3-5 years with breathing disorders and find that sticker charts increase nightly BiPAP (bilevel positive airway pressure) use.

In a study by Penica et al., a 2-year-old child with hemophilia is rewarded with a sticker for adherence during the IV infusion of clotting factors.  This results in both a decrease in negative behavior and an increase in positive behavior during the infusion process.

Luersen et al.’s findings therefore support their claim that sticker charts are effective at increasing adherence to therapy in children with chronic disease.  Moreover, the effect is robust, working across a wide variety of chronic pediatric diseases.  What is important is that these interventions improve clinical outcomes, which is the ultimate goal of any adherence-modifying method.  What is the reason for the apparent success of this approach?  One can speculate that introducing a reward program gives patients something to look forward to and to work for.  It is something positive in their already adverse and tiresome lives.  Medical difficulties, especially chronic diseases, not only burden the body but also have significant psychological ramifications.  It is not normal for a child to have to go through demanding treatments, to be restricted from everyday activities, or to constantly worry about his or her health.  Therefore, giving them some kind of encouragement, something rewarding, something positive, is extremely important.  The desire to adhere to their treatments is instilled through this instrumental positive reward.

To further understand this type of instrumental conditioning, specifically positive reinforcement, let us now go in depth into one study that reports findings similar to those above.  This four-patient study by Magrab et al. includes one 11-year-old, two 13-year-olds, and one 18-year-old, all of whom are in the pediatric hemodialysis unit receiving dialysis treatment 2-3 times a week.  A serious problem among all four subjects, which is actually common for many pediatric patients undergoing dialysis due to renal failure, is adherence to strict dietary restrictions.  Failure to follow the prescribed diets results in “increase risk for fluid overload and congestive heart failure, hypertension, hyperkalemia, azotemia, and bone disease” (Magrab et al. 573).  Because few techniques have been found to maintain compliance with the dietary restrictions, Magrab et al. wish to test the success of a token economy on this type of patient.  Their token economy is a bit different from the conventional one in that only reinforcement is delivered (nothing is ever taken away).  To receive a reinforcement of 2-3 points, weight gain between dialysis sessions must be equal to or less than 2 pounds.  Weight gain is used as a measure of fluid intake, and 2 pounds represents the largest amount of fluid considered safe for dialysis.  Eighteen points are equivalent to $2.00 for purchasing prizes or rewards.  What is unique about the reward system of this study, compared to the previous studies discussed, is that it is a “specially for you” system: the children are allowed to construct their own prize lists and indicate specifically which rewards they want to work for (as long as it is not food).  In addition, a chart recording their respective point achievements is posted in a highly visible location in the unit for further motivation and recognition.  Each week the child’s point count is tallied, and he or she is allowed to exchange the earned points for the predetermined prizes.
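To make the contingency concrete, here is a minimal sketch of how that point system could be tallied. The 2-pound criterion and the 18-points-per-$2.00 exchange come straight from the description above; the exact rule for awarding 2 versus 3 points per session is not specified in the post, so the flat 2-point award and the function names below are my own simplifying assumptions.

```python
# Minimal sketch of the Magrab et al. token economy (illustrative assumptions noted below).
# Points are earned only when inter-session weight gain is <= 2 lbs; nothing is ever taken away.
# 18 points can be exchanged for $2.00 toward the child's self-chosen prize list.

POINTS_PER_SUCCESS = 2        # study reports a 2-3 point range; a flat 2 points is assumed here
WEIGHT_GAIN_CRITERION = 2.0   # pounds of inter-session weight gain considered safe for dialysis
DOLLARS_PER_18_POINTS = 2.00

def points_for_session(weight_gain_lbs: float) -> int:
    """Award points only if the inter-session weight gain meets the 2-lb criterion."""
    return POINTS_PER_SUCCESS if weight_gain_lbs <= WEIGHT_GAIN_CRITERION else 0

def weekly_exchange(weight_gains: list[float]) -> tuple[int, float]:
    """Total the week's points and convert them to prize dollars (18 points = $2.00)."""
    total_points = sum(points_for_session(g) for g in weight_gains)
    prize_dollars = total_points / 18 * DOLLARS_PER_18_POINTS
    return total_points, prize_dollars

# Example: three dialysis sessions in one week with gains of 1.8, 2.5, and 1.2 lbs
print(weekly_exchange([1.8, 2.5, 1.2]))   # -> (4, ~0.44 dollars toward the chosen prizes)
```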

Thankfully, the results of Magrab et al. are promising.  Compared to the baseline data, a reduction in the amount of weight gained between dialysis sessions is found: the average baseline gain is 2.18 lbs, while the average treatment gain is 0.97 lbs.  In addition, the percentage of sessions in which patients exceed acceptable weight gains decreases substantially, from 47% at baseline to 20% during treatment.  It is also noted that the extreme weight fluctuations that occur during baseline are reduced during treatment.

Again, it can be hypothesized that this type of reward system succeeds because many psychosocial adjustments must be made when one is ill, and “severe restrictions or major changes in eating style may become further barriers to normal social activity” (Magrab et al. 573).  Many negative things are thus introduced into the lives of these patients.  By introducing something positive, these children have something to look forward to, something to work for.  This essentially enhances the quality of their lives and provides a type of “life motivation” (Magrab et al. 578).  A reward system can also give the patients a sense of control in a situation over which they otherwise have little control.  They gain a sense of power over their present states.  Moreover, the special thing about this case is that the children are able to choose exactly what they want their reward to be.  This tweaks the system so that the motivation to complete the task, to adhere to their dietary restrictions, is greatest.  In the other studies, the value of a sticker may differ from one patient to another; in this study, the value of the reward is high for all patients.

 

Magrab, P. R. & Papadopoulou, Z. L. (1977) The effect of a token economy on dietary compliance for children on hemodialysis. Journal of Applied Behavior Analysis, 10(4), 573-578.

 

Luersen, K., Davis, S., Kaplan, S., Abel, T., Winchester, W., & Feldman, S. (2012) Sticker charts: A method for improving adherence to treatment of chronic diseases in children. Pediatric Dermatology, 1-6.

Last week, we looked at patterns of reinforcement and discussed the experiment proposing that intermittent reinforcement produces greater resistance to extinction than continuous reinforcement.  This week, we will focus on the specific characteristics of the reinforcement and examine an experiment that takes advantage of embedded reinforcement.  Before explaining what is meant by embedded reinforcement, I must first make Hanley, Tiger, and Ingvarsson’s experiment familiar to you.

Hanley et al. focus on free-play periods in the preschool setting.  Free-play periods are characterized by “children initiated engagement [and] provide children with opportunities to choose from a variety of simultaneously available activities that are presumably consistent with their interests and abilities” (Hanley et al. 33). This type of free play is commonly seen as an opportunity to develop social and academic skills.  The experiment of Hanley et al. emerges from the finding that “selection and engagement of materials in instructional, literacy, and science zones [are] consistently low compared to [that] in dramatic play, computers, blocks, manipulatives, games and art activities” (Hanley et al. 33).  Therefore, Hanley et al. propose two strategies to promote the selection of these less preferred activities:

1)   Satiation: By providing prolonged access to the preferred activities and keeping these activities constant (lacking any type of novelty), participation in these activities should decrease due to satiation or habituation.

2)   Embedded reinforcement: By adding attractive qualities to the locations of the less preferred activities, subjects will be lured to those activity zones and subsequently engage in those activities.

In the experimental setup, there are nine zones the children can choose to play in.

  • Dramatic play: Pretend play toys (e.g., dress-up clothes, doctor set, flower shop, barber shop)
  • Computer: Two computers with a variety of CD-ROM games (e.g., Clifford’s Counting, Jumpstart Kindergarten)
  • Blocks: Toys to occasion large motor movement (e.g., train sets, large blocks, bowling set, basketball)
  • Manipulatives: Small toys on table tops to occasion small motor movement (e.g., building blocks, animals, tinker toys, Lincoln Logs)
  • Games: Age-appropriate board games and large puzzles (e.g., Candyland, Memory, dominos)
  • Art: Open-ended art activities on table top (e.g., paint, crayons, Play-Doh)
  • Science: An open-ended activity for children to explore and use their senses (e.g., digging for dinosaurs in sand; pouring water through sieves)
  • Instructional zone: One-on-one direct instruction. Each child has individualized skills and relevant materials selected and stored in the area
  • Library: A variety of age-appropriate books

The last three zones are those least preferred during baseline.  During baseline, dramatic play, blocks, art, games, manipulatives, and science materials are rotated daily. Because of the large number of books in the library and the individualized nature of the instructional zone, the materials in these areas are rotated weekly. A wide variety of computer games are located by the computers; therefore, these materials do not rotate (Hanley et al. 35).  In the satiation phase, the materials in all zones other than the science, instructional, and library zones are kept constant.  In the embedded reinforcement phase, the following zones change:

Instructional zone: the chairs and cubbies of the instructional zone are redecorated with popular children’s cartoon characters; when possible, a teacher sits in the instructional zone prior to children selecting the instructional zone; small trinkets are placed intermittently in each child’s bin of individualized instructional materials, which are available to children who select and sit in the instructional zone (Hanley et al. 37).

Library area: the table and chairs are replaced with four plush pillows and a carpet in the library; a book of the week is selected and displayed with thematically related toys; when possible, a teacher sits in the library prior to children selecting the area (Hanley et al. 37).

Science: Science-related activities that are thought to provide more reinforcement are arranged in the science area. To increase the likelihood that children will select the science area, teachers present each new science activity during group instruction the day prior to its inclusion in the science area (Hanley et al. 37).

Hanley et al. conduct the experiment by first running a baseline phase, then the satiation phase, a return to baseline, and finally the embedded reinforcement phase.  The results seem to indicate that the embedded reinforcement procedure produces more success overall.  After the satiation phase, there is a slow decline in allocation to dramatic play and blocks but no significant change in any of the other zones (other than the three of interest: instructional, science, and library).  The instructional and science zones feel the indirect effects, such that preference for them increases; however, there is no change for the library zone.  After the embedded reinforcement phase, there is an immediate increase in attendance at all three zones of particular interest: the library, science, and instructional zones.  In addition, these effects are sustained and present during follow-up testing.

These results are interesting, yet I question whether the apparent success of the embedded reinforcement is due to the fact that satiation occurs first.  Maybe it is not just the embedded reinforcement that causes the desired results.  Maybe there is an effect of the two together (as long as they occur in close temporal proximity) that causes the increase in the instructional, library, and science zones.  It could be that we interpret the effect as due to embedded reinforcement merely because embedded reinforcement was the second strategy performed.  To address this, further research could switch the order so that embedded reinforcement comes before satiation.  If the same effect is found, such that embedded reinforcement causes stronger, more sustained results, then the effect really is due to embedded reinforcement.  The two strategies could also be tested separately using a between-subjects design.  All of these conditions could be combined to see whether the effects are due to one strategy or the other, or whether combining the two creates even better results.

Another thing to discuss is the method of embedded reinforcement itself.  As I read the Hanley et al. article, this type of reinforcement is meant simply to attract the children over to the library, instructional, and science zones.  The children do not necessarily have to partake in the activities to be rewarded.  Therefore, this method depends on the probability that, if they are in those zones, they will interact with the respective activities because they are nearby.  The embedded reinforcers are there just to call the children over and get them to pay more attention to the zones they would normally prefer less.  They are not rewarded for taking part in those activities; they are rewarded for being in those zones.  The experiment of Hanley et al. shows that, apparently, this is successful.  It could then be suggested that reinforcement is a very powerful tool, since even when the reinforcement is not delivered directly on the behavior of interest, it still produces the desired, indirect effects on that behavior.

Hanley, G. P., Tiger, J. H., & Ingvarsson, E. T. (2009) Influencing preschoolers’ free-play activity preferences: an evaluation of satiation and embedded reinforcement. Journal of Applied Behavior Analysis, 42(1), 33-41.

Thus we begin the second half of the quarter…

We have already discussed the success of instrumental conditioning, specifically, that of reinforcement training.  To go one step deeper, we will now focus on more detailed aspects of this strategy.  We will look at how exactly reinforcement is administered.  An experimenter by the name of Bijou focuses on the specifics of reinforcement and hypothesizes that “intermittent reinforcement (whether fixed or irregular in pattern) markedly increases resistance to extinction as compared to continuous reinforcement” (Bijou, 47).  To discuss this, it is helpful to define the most common types of reinforcement patterns as quoted in the article:

1. Continuous reinforcement: In the experimentally defined situation, a response is reinforced on each occasion of its occurrence.

2. Intermittent reinforcement: A reinforced occurrence of a response is preceded or succeeded on at least one occasion by an unreinforced occurrence of the response.  No differentiation is made among the terms descriptive of this procedure; namely, intermittent reinforcement, partial reinforcement and periodic conditioning.

a) Interval intermittent reinforcement: The pattern of a reinforcement is controlled by temporal events in the external environment.

b) Ratio intermittent reinforcement: The pattern of a reinforcement is dependent on the subject’s behavior and follows a specific ratio.

c) Fixed and Variable patterns of intermittent reinforcement: The relationship between the reinforced and nonreinforced is either fixed or variable.  Both interval and ratio may be either fixed or variable in pattern.

Bijou’s experiment focuses on the following question: “For a given number of reinforcements, is there a difference in the extinctive behavior of two groups of preschool children when the training of one group is on a continuous reinforcement pattern, and the training of the other group is on a variable intermittent schedule with reinforcement following 20% of the responses?” (Bijou, 48).  To answer this, Bijou uses a box with two holes, one above the other, such that the top is the input hole and the bottom is the output hole.  The subjects are allowed to place a rubber ball in the top hole, which is the action that is rewarded.  A motor-driven machine is used to dispense trinkets as rewards.  The experimenters explain to the subjects how the box works and allow the children to play with the apparatus.  One group of children, group A, is rewarded with trinkets on six consecutive responses.  The second group, group B, is rewarded with trinkets six times spread over 30 responses, specifically on trials 1, 6, 13, 17, 23, and 30.  For both groups, the extinction period follows immediately, during which the children receive no reinforcement for three and a half minutes regardless of their behavior.
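The two training schedules can be written out explicitly. The sketch below encodes group A’s continuous schedule (six consecutive reinforced responses) and group B’s variable intermittent schedule (reinforcement on responses 1, 6, 13, 17, 23, and 30 of 30) exactly as described above; the function names are mine.

```python
# Group A: continuous reinforcement -- every one of the six training responses is reinforced.
# Group B: variable intermittent reinforcement -- 6 of 30 responses are reinforced.

GROUP_B_REINFORCED_TRIALS = {1, 6, 13, 17, 23, 30}

def reinforced_group_a(trial: int) -> bool:
    """Group A receives a trinket on each of its six training trials."""
    return 1 <= trial <= 6

def reinforced_group_b(trial: int) -> bool:
    """Group B receives a trinket only on the six listed trials out of 30."""
    return trial in GROUP_B_REINFORCED_TRIALS

# Both groups receive the same number of reinforcements (6), but at very different densities:
print(sum(reinforced_group_a(t) for t in range(1, 7)))    # 6 of 6 responses reinforced (100%)
print(sum(reinforced_group_b(t) for t in range(1, 31)))   # 6 of 30 responses reinforced (20%)
```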

From the data collected, it is clear that the rate of extinction is greater for the group that was continuously reinforced (group A, 100% of responses reinforced) than for the group that was reinforced intermittently (group B, 20% of responses reinforced).  The mean number of responses during the extinction period is 15.3 for group A and 22.0 for group B.

Bijou goes on to conduct a second experiment in which the trinket dispenser is accentuated by a buzzer.  Everything else is identical to the first experiment.  The same general trend occurs: the continuously reinforced group shows a greater extinction rate than the intermittently reinforced group.  The mean number of responses during the extinction period is 13.0 for group A and 26.2 for group B.

From Bijou’s experiments, we are provided with two interesting points.  First, “for a given number of reinforcements, a variable ratio intermittent distribution is associated with more resistance to extinction than a continuous schedule” (Bijou, 52).  This is interesting because one would intuitively think that a continuous schedule would better reinforce a behavior than would an intermittent schedule.  An increase in contingency usually is related to greater learning.  Therefore, one might assume the robustness and resiliency of such a strongly learned behavior would prevail and resist extinction.  However, this is not the case.  What could account for this?

Hypothesis 1) Because an intermittent type of reinforcement does not reward the subject on every trial, the subject could simply assume it is just one of those trials on which he or she will not be rewarded.  The subjects realize that, to be rewarded, they have to endure trials with no reward, and therefore they cannot tell that they have entered an extinction phase or that anything has changed.  However, if the continuously reinforced subjects are not rewarded, this is not “normal,” and they realize that something has changed.  The “surprise” factor is different.

Hypothesis 2) The intermittent type of reinforcement already establishes a moderate amount of frustration in the subjects.  They are a bit upset that they do not receive rewards all the time, but this negative feeling is minor.  The continuous type of reinforcement establishes no frustration in the other group of subjects, since they are always rewarded.  Once no reward is administered, both group A and group B will be greatly frustrated, but the change from initial to final levels of frustration will differ between the groups: group A will feel a greater change in frustration, while group B, already a bit frustrated, will feel a smaller change.

The second point is that the difference in the mean number of responses between group A and group B is larger in the second experiment than in the first (Bijou, 52).  This means that the increased distinctiveness of the auditory stimulus serves as a stronger conditioned reinforcer.  The buzzer increases the salience of the reward, which is why the difference between the means is greater.

As we conclude this post, there is one thing I would like to mention.  This study held the number of rewarded trials constant, such that both group A and group B received 6 rewards.  However, because group B was rewarded intermittently, group B participated in the experiment for a longer amount of time and for more trials (group A had only 6 trials, all of which were rewarded, while group B had 30 trials, of which 6 were rewarded).  This difference, rather than the type of reinforcement pattern, could account for the results found.  To investigate this, future research could hold the total number of trials, the sum of the reinforced and nonreinforced, constant, so that the duration of the experiment is the same for both groups.  Comparing that data to the data in Bijou’s article could then explain this phenomenon further and more accurately.

 

Bijou, S. W. (1957). Patterns of reinforcement and resistance to extinction in young children. Child Development, 28(1), 47-54.

Last week, we focused on the parent, specifically the mother, as a social reinforcer for certain behaviors in children.  Keeping with the theme of an external individual as the means of ultimately altering the child, we will now look at the child’s peers as the social reinforcers.  However, the matters we discuss in this post have prominent differences and are of quite a different nature than those of the last post.  One obvious difference is the age gap between the reinforcer and the child: the child’s peers are usually the same age or around the same age as the child.  Another difference is the social role peers play.  While the mother’s role is to raise the child, the peer’s role is merely to provide companionship and support and to be a social equal.  The relationship between child and peer greatly contrasts with the relationship between child and parent.  Therefore, how the child views the reinforcer differs greatly depending on whether the reinforcer is a peer or a parent.  The time spent with the reinforcer also differs.  As children grow up and begin to go to school or become involved in extracurricular activities, they sometimes begin to spend more time with their peers than with their parents.  All of these are important differences to keep in mind as this post unfolds.

Patterson and Anderson’s article reveals how peers can serve as agents dispensing social reinforcers.  They hypothesize that “after extended experience with a peer group, the child responsive to social reinforcers from the peer group would be expected to show high frequency of behaviors valued by this group” because those are the behaviors that are most likely to elicit social reinforcers from the peer (Patterson & Anderson, 952).  Therefore, their first step is to prove that peers can in fact serve as social reinforcers.

To do so, Patterson and Anderson show that a child’s peer can serve as the social reinforcer conditioning a simple motor response.  They use a box with two identical holes on the top into which the child can drop marbles.  The child is instructed to pick up the marbles one at a time and drop them into any hole they want; they can drop as many marbles as they want and in any order.  The frequency of responses into either hole is recorded with respect to time.  After 100 marbles are dropped into the box, the experimenters establish a minimally stable baseline estimate of choice behavior; in other words, they determine which hole the subject prefers.  The subject then sits with the box right in front of him or her while facing one of their peers.  The peer serves as the reinforcing agent and is instructed by the experimenter, through an earpiece, to say the words “good,” “yes,” “great,” “ok,” “fine,” or “very good” when the subject places a marble into the least preferred hole.  Patterson and Anderson obtain the measure of preference change by computing X/(A+B), where X is the frequency of responses to the least preferred hole and A and B are the frequencies of responses to each of the two holes, before and after the reinforcement from the peer takes place.  The difference between these two values then “provides a measure of the magnitude of shift in preference” (Patterson & Anderson, 954).  Their data, which show a large shift in preference, support their hypothesis that a peer can act as a social reinforcer, such that “the peer is clearly effective in changing the behavior of the subject” (Patterson & Anderson, 955).
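A quick worked example of the preference measure may help. The sketch below computes X/(A+B), the proportion of marbles dropped into the initially least preferred hole, before and after the peer’s reinforcement, and takes the difference as the magnitude of the shift. The counts are invented for illustration and are not Patterson and Anderson’s data.

```python
def preference_proportion(least_preferred: int, other: int) -> float:
    """X / (A + B): the share of responses directed at the initially least preferred hole."""
    return least_preferred / (least_preferred + other)

def shift_in_preference(baseline: tuple[int, int], after: tuple[int, int]) -> float:
    """Difference between the post-reinforcement and baseline proportions."""
    return preference_proportion(*after) - preference_proportion(*baseline)

# Hypothetical counts (least_preferred, other): 30 of 100 marbles at baseline,
# 70 of 100 after the peer says "good", "great", etc. for the least preferred hole.
print(shift_in_preference((30, 70), (70, 30)))   # 0.70 - 0.30 = 0.40 shift toward that hole
```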

Patterson and Anderson then extrapolate this phenomenon: if “the child is responsive to social reinforcers delivered by the peer group,” then “the child will show an acquisition of the behaviors valued by the peer group” (Patterson & Anderson, 958).  The child will be rewarded depending on the desirability of his or her behaviors, and thus the more desirable the behavior, the more frequent it will become.

Although this is a very interesting and exciting hypothesis, I would like to point out some limitations.  They are not meant to depreciate Patterson and Anderson’s findings or discredit their argument; they are simply things to keep in mind, which future researchers can address.  First, testing only peers as reinforcers may not provide sufficient data.  The authors do not, for comparison, use a parent, teacher, stranger, or other non-peer as an agent dispensing the same reinforcement.  Therefore, it is difficult to claim that the success of the reinforcement is due specifically to the reinforcer being a peer; it could merely be because someone, anyone, is reinforcing the child’s behavior.  Regarding the apparatus used, the measure of hole preference might be inaccurate.  If the child wishes to place the marbles in the holes following a certain pattern (for example: right, right, left, right, right, left…), this can give the impression that he or she prefers one hole (the right hole) when in actuality this is not the case; the child merely prefers the pattern.  In addition, the reinforcement delivered in the experimental setting is a bit unrealistic compared to the reinforcement the child would receive in a natural setting.  The peer reinforcer only states mundane words (“good,” “yes,” “great,” “ok,” “fine,” or “very good”) in the experiment, while in a real-life setting a peer might respond much more actively, with more colorful and descriptive verbal approval, expressive body gestures, and more personal interaction.

—————UPDATE—————

Lastly, it is important to always have a control group with which to rule out any spontaneous shift in preference toward the other hole.  For example, a child may seem to respond to the peer reinforcement when, in actuality, the child may just feel like putting the marble in the hole that happens to be the least preferred one.

Patterson, G. R., & Anderson, D. (1964). Peers as social reinforcers. Child Development, 35(3), 951-960.

The two previous articles suggest that a reward system using differential reinforcement of other behavior (DRO) may produce better results than a punishment system.  This hypothesis has not been proven (although in science, nothing is ever “proven,” only supported), and for every argument it is important to present the other side.  The article by Conyers et al. presents an experiment whose results support response cost, a punishment-based procedure, as a better means than DRO.

Conyers et al. compare two procedures aimed at reducing disruptive behavior in a preschool class.  In the Response-Cost procedure, children start with 15 tokens and are told that a token will be taken away each time disruptive behavior is exhibited.  In the DRO procedure, children start with no tokens and are told that they will receive a token if they do not engage in any disruptive behavior.  Under both procedures, the children receive a candy reward as long as they have at least 12 tokens at the end of each trial.
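As a concrete illustration, here is a minimal sketch of the two token arrangements as described above: response cost starts at 15 tokens and removes one per disruptive episode, DRO starts at zero and adds one for each interval without disruption, and either way the candy criterion is 12 or more tokens at the end of the session. The interval bookkeeping is simplified and the function names are mine.

```python
CANDY_CRITERION = 12  # tokens needed at the end of a session to earn the candy reward

def response_cost(disruptive_episodes: int, starting_tokens: int = 15) -> int:
    """Response cost: begin with 15 tokens and lose one for each disruptive behavior."""
    return max(starting_tokens - disruptive_episodes, 0)

def dro(intervals_without_disruption: int) -> int:
    """DRO: begin with no tokens and earn one for each interval free of disruptive behavior."""
    return intervals_without_disruption

def earns_candy(tokens: int) -> bool:
    return tokens >= CANDY_CRITERION

# Example: 2 disruptive episodes under response cost vs. 13 disruption-free intervals under DRO
print(earns_candy(response_cost(2)))   # True  (15 - 2 = 13 tokens remain)
print(earns_candy(dro(13)))            # True  (13 tokens earned)
```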

At baseline, disruptive behavior occurs at a mean level of 64%.  The Response-Cost procedure decreases this mean to 5%, while the DRO procedure decreases it to 27%.  Conyers et al. therefore find “response cost more effective than DRO” (Conyers et al. 413).  The application of a punishment (token removal) is here seen as more successful at diminishing disruptive behavior in children than is the rewarding of other behavior.  This opposes the findings of the two previous articles, which illustrates that there are no set rules for learning.  One method will not necessarily prove more effective than another across the board or in all types of situations.  In learning, there are no golden rules.  Each circumstance is unique and therefore requires a type of learning tweaked to its own conditions.

Wahler et al. provide a second article that further demonstrates the complexities of learning.  The three previous articles focus on the direct manipulation of the child’s behavior.  Wahler et al., however, go beyond this centralization on the child and recognize the behavior of a parent, specifically that of the mother, as a “powerful class of reinforcers for her child’s deviant as well as normal behavior” (Wahler, 113).  They propose the use of the DRO technique on the child through the mother or primary caretaker of that child.  In other words, they aim to produce specific changes in the behavior of the mother as a means of indirectly improving the behavior of her child.

Wahler et al. state that since the parents’ “behaviors serve a large variety of stimulus functions” and “compose the most influential part of [the child’s] natural environment,” the parents become the “source of eliciting stimuli and reinforcers which [produce] and [maintain] the child’s behavior” (Wahler, 114).  It thus becomes logical that modifying the child’s deviant behavior must involve a change in parental behavior.  More specifically, Wahler et al. highlight the importance of “eliminating the contingencies which currently support their child’s deviant behavior, and [providing] new contingencies to produce and maintain more normal behaviors which compete with the deviant behavior” (Wahler, 114).

—————– UPDATE (begin) —————–

In the three cases studied, the mothers’ baseline behavior differs based on the type of deviant behavior their child expresses.

Case 1) Danny is an extremely demanding child who virtually determines his own bedtime, the foods he eats, when his parents play with him, and other household activities.  Danny’s mother initially is “unable to refuse his demands and rarely attempts to ignore or punish him.  On the few occasions when she refused him, she quickly relented when he began to shout or cry” (Wahler, 117).

Case 2) Johnny is a very dependent child who is physically abusive toward others when he receives no attention.  Johnny’s mother initially is very responsive to this type of dependency, and she feels more comfortable when he is at her side or at least within sight.  Before the study, she encourages this type of “dependence on others for direction and support” (Wahler, 119).

Case 3) Eddie exudes extreme stubbornness, ignoring his mother’s commands and requests or doing the opposite of what he is told or asked to do. Initially, all of Eddie’s mother’s interactions with him are restricted to his oppositional behavior, such that she “rarely plays games with him, reads to him, or talks to him” (Wahler, 121).

In these three cases, we see how each mother’s baseline behavior differs.  Danny’s mother rewards Danny’s deviant behavior by relenting after he throws tantrums; she rewards his demands by giving in to them.  Johnny’s mother rewards Johnny’s unhealthy dependent behavior by giving him the attention he desires whenever he expresses his dependency.  Eddie’s mother rewards Eddie’s stubbornness by giving him attention only during the times he behaves negatively; other than during those times, she does not acknowledge him, a form of punishment.

—————– UPDATE (end) —————–

To ultimately improve the behavior of these three children, the experimenters first teach the mothers to recognize the child’s deviant behavior.  Then, because the aim is to eliminate the reinforcers produced by the mother, the experimenters instruct the mother to ignore any deviant behavior and respond to any cooperative or socially desired behavior.  More specifically, the mother is required to make no verbal or nonverbal contact with her child, and to ostensibly read a book, whenever the child exhibits the deviant behavior.  This positively reinforces the desired behavior while withholding reinforcement from the deviant behavior.  In the experimental setting, the mothers are informed of the correct response to their child’s behavior via a cueing light controlled by experimenters in another room.  Because the “response rates of the children’s deviant and incompatible behavior [are] weakened when their mothers’ contingent behavior [is] eliminated, and strengthened when they [are] replaced,” Wahler et al. find that their data “supports the contention that [the mother’s] behavior changes are responsible for the changes” in the child’s behavior (Wahler, 123).  The children’s behavior is under the control of their mothers’ behavior.

This article parallels the saying, “Give a man a fish, and you feed him for a day.  Teach a man to fish, and you feed him for a lifetime.”  Most programs focus on the direct manipulation of the child’s actions.  They are able to change the child’s behavior in a specific experimental setting; however, once the child is placed back in real-world situations, the environmental cues and stimuli differ drastically, and the child exhibits little lasting improvement in behavior.  In that approach, the problem is the child.  In the approach discussed above, the problem is not the child but the environment in which the behavior is exhibited.  By changing the conditions and responses, improvements in child behavior can be extrapolated and transferred to varying situations.  Changing the child is like giving the man a fish: only the immediate problem is solved.  The child learns good behavior in a specific context but cannot maintain such conduct.  Changing the parent’s responses to the child’s behavior is like teaching the man to fish: now the man can obtain fish whenever he wants, from whichever sea, during any time of day.  Likewise, the child’s behavior can be shaped across multiple and diverse situations.

This all leads me to speculate that the key to learning lies not only in the individual but in everything that influences the individual as well.  Successful behavior-improvement procedures must take the whole picture into account.  Learning is not an isolated entity; it is neither straightforward nor simple.  Instead, learning is complex, multifaceted, and involves many intricacies that must be acknowledged.

Conyers, C., Miltenberger, R., Maki, A., Barenz, R., Jurgens, M., Sailer, A., Haugen, M., & Kopp, B. (2004). A comparison of response cost and differential reinforcement of other behavior to reduce disruptive behavior in a preschool classroom. Journal of Applied Behavior Analysis, 37(3), 411-415.

Wahler, R., Winkel, G., Peterson, R., & Morrison, D. (1965). Mothers as behavior therapists for their own children. Behaviour Research and Therapy, 3(2), 113-124.

Hello World!

As a newbie to the world of blogging, I find that I must first learn how to communicate with you in this foreign yet exciting technological form of discourse.  The concept of learning, however, is something quite familiar not only to me but to almost all individuals.  Learning is what allows an individual to survive, what allows a species to continue.  Learning comes in many different shapes, sizes, and forms, and influences our lives from the very day we are born.  My ensuing posts will revolve around one specific type of learning: instrumental conditioning.  Furthermore, I will focus on the effects of instrumental conditioning on child rearing.

Instrumental behavior, the end product of instrumental learning, is set apart from the effects of habituation, sensitization, and classical conditioning by the fact that it is goal-directed.  This means that the behavior “occurs because it was previously instrumental in producing certain consequences” (Domjan 144).  In instrumental learning, the entity being learned is the “association between the response and the stimuli present at the time of the response” (Domjan 146).  Instrumental learning procedures fall under two criteria with two levels each: whether the response-outcome contingency is positive or negative, and whether the procedure increases or decreases a specific response rate.  Table 5.1 in Domjan’s text illustrates the properties of the four resulting instrumental conditioning procedures.
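For reference, here is a small sketch of that 2 x 2 classification as it is commonly presented (I am reproducing the standard scheme rather than Domjan’s exact wording): the procedures differ in whether the response produces the outcome or removes/prevents it, and in whether responding goes up or down as a result.

```python
# The four instrumental conditioning procedures, organized by the two criteria in the text:
# (response-outcome contingency, effect on response rate) -> procedure name.
INSTRUMENTAL_PROCEDURES = {
    ("positive contingency", "response increases"): "positive reinforcement",
    ("positive contingency", "response decreases"): "punishment (positive punishment)",
    ("negative contingency", "response increases"): "negative reinforcement (escape/avoidance)",
    ("negative contingency", "response decreases"): "omission training (DRO / negative punishment)",
}

# e.g., the procedure where responding removes or prevents the outcome and the response rate drops:
print(INSTRUMENTAL_PROCEDURES[("negative contingency", "response decreases")])
```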

Two articles, one by Weiher and Harman and the other by Azrin, Sneed, and Foxx, analyze the effects of instrumental conditioning procedures on children’s behavior.

Let’s start with Azrin, Sneed, and Foxx.  Their study of enuretic children, children who lack the ability to control their urination, is prompted by the limitations of the urine-alarm technique.  The Urine-Alarm technique utilizes a loud buzzing sound when urination is detected by a bed pad.  However, this procedure requires weeks, even months, to reduce bedwetting and also has a high relapse rate.  Azrin et al. thus propose the Dry-Bed procedure, in which, when an accident occurs, the individual “receives verbal disapproval, [is] required to change the bed sheets, and [is] required to practice arising from the bed to walk to the toilet” (Azrin et al. 147).  At the bathroom, the child is also asked whether he or she can inhibit urination for another hour.  If the child responds with a yes and does so successfully, praise is administered.  If the child responds with a no, he or she is persuaded to inhibit urination for a few minutes; if this is done successfully, praise is administered and the child is allowed to urinate.  The child again receives praise for correct toileting after urination.  After correct urination, the child is also praised for having kept the sheets dry upon returning to bed and feeling the sheets.  The next day, praise is administered again and continues at multiple times throughout the day.  Azrin et al. find that this Dry-Bed technique eliminates enuresis much more quickly, and more lastingly, than the Urine-Alarm technique.
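To lay the contingencies out side by side, here is a rough sketch of the nightly Dry-Bed decision flow as I read it from the description above. The function and step names are mine, the post does not describe the consequence when the child fails to inhibit urination (so that branch is omitted), and the full published protocol may include additional components not covered here.

```python
def dry_bed_night(accident: bool, says_can_wait_an_hour: bool, inhibits_successfully: bool) -> list[str]:
    """Rough sketch of the consequences described for one night of Dry-Bed training."""
    events = []
    if accident:
        # Mild punishment components: disapproval, changing sheets, practicing the trip to the toilet.
        events += ["verbal disapproval", "child changes the bed sheets",
                   "practice arising from bed and walking to the toilet"]
    # At the bathroom, the child is asked whether he or she can hold urination a while longer.
    if inhibits_successfully:
        if says_can_wait_an_hour:
            events.append("praise for inhibiting urination for another hour")
        else:
            events += ["praise for inhibiting urination for a few minutes",
                       "child is allowed to urinate"]
        # Positive reinforcement components: praise for correct toileting and for dry sheets.
        events += ["praise for correct toileting after urination",
                   "praise for dry sheets upon returning to bed and feeling them",
                   "praise repeated several times the next day"]
    return events

print(dry_bed_night(accident=True, says_can_wait_an_hour=False, inhibits_successfully=True))
```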

My question is: why is the Dry-Bed technique faster and more effective than the Urine-Alarm technique?  Although these issues are not discussed in the article itself, I hypothesize that it is because the Urine-Alarm technique implements only punishment, while the Dry-Bed technique implements punishment but, more so, positive reinforcement.  The Dry-Bed technique does punish the individual by disturbing sleep and forcing him or her to change the bed and walk to the bathroom; however, it focuses mainly on praise for control of urination.  Is there something about positive reinforcement that makes a technique better than one relying on punishment?  To try to figure out what is going on, the second article, by Weiher and Harman, must be discussed.

Weiher and Harman’s study arises from the problem of self-injurious behavior (SIB) in the retarded child.  As with the Urine-Alarm technique, the limitations of a less effective technique for SIB prompt their investigation.  Employing an aversive stimulus via a mild electric shock initially reduces the amount of SIB.  However, when the shock is no longer administered, SIB returns.  In addition, the suppression of SIB is successful only in selective environments and in the presence of specific therapists.  Therefore, Weiher and Harman approach the problem in a different way.  They utilize omission training: if the child engages in SIB, the therapist withholds a reward, in this case half a teaspoon of applesauce.  When the child refrains from exhibiting SIB for a certain amount of time, he or she is given the desired applesauce.  The time the child must refrain from SIB is then increased, and the same procedure is followed until SIB is eliminated.  This technique proves to be just as rapid as the electric-shock technique; however, it produces greater durability in the sense that its effects are long lasting and apply to a broad range of environments.
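Here is a minimal sketch of that omission (DRO) schedule as described: the applesauce reinforcer is delivered only if no SIB occurs during the current interval, any SIB within the interval forfeits the reinforcer, and the required SIB-free interval is gradually lengthened. The specific interval lengths and increment below are invented for illustration; Weiher and Harman’s actual values are not given in this post.

```python
def dro_session(sib_free_durations: list[float],
                initial_interval: float = 30.0,
                increment: float = 30.0) -> list[bool]:
    """For each trial, deliver the applesauce only if the child refrained from SIB for the
    whole required interval; lengthen the requirement after each success.

    sib_free_durations: seconds the child went without SIB on each trial.
    initial_interval / increment: illustrative values (seconds), not from the study.
    """
    required = initial_interval
    outcomes = []
    for duration in sib_free_durations:
        earned = duration >= required      # omission contingency: SIB within the interval forfeits the reward
        outcomes.append(earned)
        if earned:
            required += increment          # gradually raise the bar until SIB is eliminated
    return outcomes

# Example: the child meets the 30 s requirement, then 60 s, then falls short of 90 s
print(dro_session([35, 70, 80]))   # [True, True, False]
```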

Just as the Azrin et al. study found, a positive reinforcement technique proves more successful in eliminating an unwanted behavior.  Weiher and Harman’s study may hold the answer.  It is important to note that SIB is a “learned, operant, or instrumental social behavior” the “strength of which bears a functional relationship to the presentation and withdrawal of social reinforcement” (Weiher and Harman 262).  The child is therefore using the behavior as a means of producing a certain outcome.  If another behavior can replace it to produce an even more desirable outcome, then the initial behavior can be reduced, maybe even eliminated.  The “other responding, which actively [competes] with resumption of the response under focus of response elimination, [is] maintained by omission reinforcement” and thus becomes the predominant behavior.  The reinforcement, by the applesauce, of the other behavior, which is the absence of SIB, allows the SIB to be suppressed.  The subject learns “not to emit SIB and to emit other, more appropriate behaviors as a means of gaining attention” via this positive reinforcement (Weiher and Harman 267). Both articles thus exhibit the effects of differential reinforcement of other behavior (DRO), which may explain the efficiency and durability of the Dry-Bed technique and the applesauce technique.  These instrumental conditioning procedures deliver a positive reinforcer, the praise or the applesauce, when the individual exhibits the other behavior, the absence of bedwetting or of SIB.  This other behavior then seems to replace the undesired behavior, which is why these techniques are so successful.

——————————————- Update ——————————————-

It should be mentioned that we cannot directly conclude that the reinforcement procedure is solely responsible for the success of the procedures discussed.  For example, the Dry-Bed technique not only involves the reinforcement of other behavior but also differs in the level of punishment utilized compared to the Buzzer technique.  In the Buzzer technique, the child is merely woken by the buzzer whenever he or she urinates in bed; immediately after, the child may fall back to sleep with little to no disturbance.  In the Dry-Bed technique, the child is woken by an actual person, must take off the dirty sheets and replace them with clean sheets, and is taken to the bathroom, where he or she must stay until correct urination is performed.  The child cannot go back to sleep until these actions are completed, which the parent or experimenter makes sure of.  It is arguable that being woken briefly is less intense than being woken and forced to function in such a manner.  Therefore, two variables may be responsible for the greater success of the Dry-Bed technique.  To distinguish which variable produces this increase in efficiency, future research could eliminate the praise element.  If the child still shows a greater response to the Dry-Bed technique, that data would support the hypothesis that the technique’s greater success is due to the greater intensity of punishment.  Researchers could also alter the Dry-Bed technique so that the child is awakened by the buzzer only (identical to the Buzzer technique), but praise is still administered when urination is correctly inhibited.  If the child shows a greater response to this modified technique, that data would support the hypothesis that the increased success is due to the DRO reinforcement aspect.

 

 

 

 

References:

Azrin, N. H., Sneed, T. J., & Foxx, R. M. (1974). Dry-bed training: rapid elimination of childhood enuresis. Behaviour Research and Therapy, 12(3), 147-156.

Domjan, M. (2006). Principles of learning and behavior. Belmont, CA: Wadsworth.

Weiher, R. G., & Harman, R. E. (1975). The use of omission training to reduce self-injurious behavior in a retarded child. Behavior Therapy, 6(2), 261-268.
