EQUINE CLICKER TRAINING: using precision and positive reinforcement to teach horses and people
Clicker Expo 2014: Norfolk, Virginia. These are my notes from the lectures and labs I attended. I try to share the information as it was presented by the speakers, but there are places where I have added extra comments or filled in a few details. If you have any questions about the material presented here, please feel free to contact me. The notes are all on one page so you can just start reading, or you can use the links below to get to each one directly. If you'd like more information on Clicker Expo, you can go to www.clickerexpo.com and read all about it.

Kathy Sdao: It’s a Good Fit! Operant and Classical Conditioning
I’ve heard Kathy talk on the subjects of operant conditioning and classical conditioning before, but I attended this talk because I was interested in what she had to say about how they related to each other. When I first learned about operant conditioning (OC) and classical conditioning (CC), they were presented as two separate modes of training. You were either using one or the other. It turns out that this is a bit of an oversimplification because the reality is that they are both happening at the same time. As trainers we can choose to emphasize one over the other, but we really need to be aware that both types of learning are happening. In this talk Kathy emphasized that there is also a progression from CC to OC.
Kathy started off by defining OC and CC, because you do have to start by understanding each separately. As clicker trainers, we are all familiar with the idea that behavior is driven by its consequences. The standard formula in operant (or Skinnerian) conditioning is A -> B -> C, where A = antecedent, B=behavior, C=consequence. In operant conditioning, the likelihood of B being repeated is determined by the consequences, which can either reinforce (increase) or punish (decrease) the behavior. As clicker trainers, we add something the animal wants after the behavior happens, and that strengthens the behavior so it is more likely to happen in the future.
In classical conditioning, the trainer is using a procedure that changes the animal’s response (and associated emotions) from one stimulus to another. It is usually written out as CS + US -> B (reflex). The CS (conditioned stimulus) has no intrinsic meaning. The US (unconditioned stimulus) has intrinsic meaning. When you pair them together by presenting the CS and then the US, you can condition the CS so that it triggers the same emotional response as the US. If this is making your head spin, just think about the clicker itself. The first time you click, the click has no meaning to the horse. After you click and treat a few times, the horse now associates the click with food and gets excited when it hears the click. That is classical conditioning at work.
Kathy pointed out that while classical conditioning is very strong, it is not a long term solution. In many cases she starts with CC and then switches over to OC because most trainers are interested in training specific behaviors and this is done more effectively with OC. At the same time, she emphasized that people often underestimate just how powerful CC can be. This is partly because CC doesn’t usually have the same “epiphany” moment where the animal “gets it.” CC is more of a gradual process.
So those are the differences. This talk was about integrating them together so she pointed out how we use CC when we are working primarily with OC. Her list included “charging” the clicker, pausing between click and treat, creating new reinforcers, adding cues to behaviors and modifying emotional responses. I mentioned that the clicker takes on meaning through CC, and here she emphasized why it is click THEN treat, not click/treat at the same time. In order for CC to work, the conditioned stimulus must come before the unconditioned stimulus. This is why it’s so important to keep the click and food delivery separate.
Her first example of using CC as a foundation for OC was her dog Effie, who was afraid of buzzing insects. When Effie heard a buzzing insect, she trembled, panted, cowered and fled. These are all behaviors that happened automatically when she heard the bee. She had no choice. To help Effie get over her fear of bees, Kathy needed to change her emotional response to bees so that she no longer needed to flee. She could do this with a CC procedure called desensitization.
Desensitization is a process of “repeatedly presenting a fear-evoking stimulus (trigger) at low intensity until the animal reacts without fear, then slightly increasing intensity and repeating the process until the animal is comfortable with full-strength trigger.” This is different than habituation which is “decreasing the intensity or probability of a reflex resulting from repeated exposure to an eliciting stimulus.”
The key to desensitization work is working at the appropriate intensity where the animal can learn a new response to an existing trigger. There are 5 ways to change the intensity of something. They are loudness, duration, distance, frequency (how often), and distractions. In Effie’s case, she could use buzz-like sounds and change the intensity in different ways. By pairing the buzzing sounds with something Effie liked, she could slowly change Effie’s “automatic” reaction from one of fear to one of happy anticipation (something good is going to happen).
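To make the procedure concrete, here is a minimal sketch of my own (not from Kathy's talk) of a desensitization loop in Python. The function names and the idea of representing intensity as a list of levels are assumptions for illustration; in real life the "is the animal relaxed?" check is the trainer's observation skill.

```python
def desensitize(present_trigger, animal_is_relaxed, intensity_levels):
    """Sketch of a desensitization ladder: present the trigger at a low
    intensity, repeat until the animal stays relaxed, then step up a level.

    present_trigger(level) and animal_is_relaxed() are placeholders for
    what the trainer actually does and observes."""
    for level in intensity_levels:
        while True:
            present_trigger(level)      # e.g. play a recorded buzz at this volume
            if animal_is_relaxed():     # trainer judges the animal's response
                break                   # comfortable here; move to the next level
    # If we finish the loop, the animal is comfortable with the full-strength trigger.

# Hypothetical usage: volume levels from barely audible to full strength.
# desensitize(play_buzz_at, check_body_language, [0.1, 0.25, 0.5, 0.75, 1.0])
```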
Once Effie is happily anticipating something good will happen, then Kathy can switch to OC to shape exactly the behavior she wants. In the long run, this means that when Effie hears buzzing, she offers a specific behavior to Kathy, who can then reinforce it. This is an example of what is called classical counter-conditioning because Kathy started with a stimulus that had an unpleasant association, as opposed to being neutral.
She also had an example of using classical conditioning with a neutral stimulus. Her dog Nick didn’t have any interest in Frisbees until she started showing him the Frisbee before letting him out to eat apples. Over time the sight of the Frisbee elicited the same emotional response as being released to go eat apples. This brings up an important point which is that many people stop using CC at the point at which the animal is comfortable with the stimulus, but really a better goal would be to keep going until the animal is excited and happy about seeing the stimulus. That is the point at which you can definitively show that you have changed how the animal feels about the stimulus.
Why don’t more people do this? Perhaps it’s because they don’t realize the power of CC, or perhaps it’s because they don’t know how to do CC well. Classical conditioning sounds simple, but there are some important details. I mentioned earlier that the order and timing matters. The order is conditioned stimulus -> PAUSE -> unconditioned stimulus. If you do both at once, it won’t work. If you do them in the wrong order, it either won’t work, or you can “poison” your unconditioned stimulus by associating it with something aversive. Here are some other tips:
The interval between the CS and US matters.
The CS should precede the US by 1-2 seconds
The first pairings are the most important. Get your momentum going.
Longer inter-trial intervals are important. You don’t want the animal to be expecting the stimulus, so this also means… Avoid rhythmic trials. People have a tendency to fall into patterns so if you are doing a number of trials in a set period of time, plan the timing so the stimulus comes at random intervals.
Avoid competing stimuli and avoid weak unconditioned stimuli (use something the dog really likes).
Don’t forget to transition to operant conditioning when the dog is ready. (I’ve added a little sketch below showing how the timing tips fit together.)
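Here is a minimal sketch of a pairing schedule that follows these tips: CS first, a pause of about 1-2 seconds, then the US, with randomized inter-trial intervals. This is my own illustration, not something from the talk; the present_cs and present_us callables are placeholders for whatever the trainer actually does (play a sound, deliver a treat).

```python
import random
import time

def run_pairing_block(present_cs, present_us, trials=10,
                      cs_us_gap=1.5, min_iti=5.0, max_iti=20.0):
    """Run one block of classical conditioning pairings.

    The CS always comes first, followed by a short pause and then the US.
    The wait between trials is randomized so the trials are not rhythmic
    and the animal cannot predict when the next pairing will happen."""
    for _ in range(trials):
        present_cs()                                   # conditioned stimulus first
        time.sleep(cs_us_gap)                          # roughly 1-2 second gap
        present_us()                                   # then the unconditioned stimulus
        time.sleep(random.uniform(min_iti, max_iti))   # variable inter-trial interval

# Hypothetical usage with stand-in stimuli:
if __name__ == "__main__":
    run_pairing_block(lambda: print("click"), lambda: print("treat"),
                      trials=3, min_iti=0.5, max_iti=2.0)
```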
She finished with a little discussion on how operant techniques like DRA, DRI, DRO, stimulus control and using an LRS fit into the picture. Basically these are all ways to start to sort through and shape the behavior you want once the animal is offering something other than its initial response to the trigger.
I want to make some comments on this material and how to apply it to horses because I found it triggered a series of related thoughts about using classical conditioning with my own horses. Rather than mix them in with her material, I thought I would share them at the end.
I do want to start by repeating something from yesterday. In her talk Kathy’s focus was on the progression from classical conditioning to operant conditioning as part of a long term training plan. So she was talking about using them sequentially, one and then the other. It’s one way to think about combining the two types of learning. But it’s also important to recognize that in most cases the two are happening at the same time. As Bob Bailey would say “Pavlov is always on your shoulder.” So there are two ways to think about using both classical conditioning and operant conditioning as part of your training plan.
If you want to learn more about the overlap between classical conditioning and operant conditioning, I wrote about it last year after attending Clicker Expo 2013 (see my notes on my website). This subject was discussed by Jesús Rosales-Ruiz who looked at the differences between classical and operant conditioning and argued that it’s hard to separate them. In many of the examples of classical conditioning that he showed, there was an operant element too. He had a video of using counter-conditioning to change a small dog’s reaction to a previously frightening stimulus. The dog was fed using a standard counter-conditioning protocol and its response to the trigger changed.
But it didn’t change in random ways. As the session progressed, it was clear that some behaviors were now happening more than others. Even though the intent was not to select out or shape a particular behavior (using operant conditioning), there was operant learning going on during a classical counter-conditioning session. He had a few other examples, all showing that when you are working on operant conditioning, classical conditioning is also happening and vice versa.
I see this with my own horses when I try to feed without clicking. I think that once an animal knows about a marker signal and how the “clicker training game” works, then they are very quick to pick up on any information that tells them what behavior is being reinforced. So if I really want to avoid reinforcing a particular behavior, I have to be very careful about my body language, food delivery, and monitor the horse’s behavior.
This doesn’t mean it is “wrong” if a particular behavior is being reinforced more, just that it’s helpful to be aware of it. And there are advantages. It means that I get some of the benefits of operant conditioning when I am working on classical conditioning and vice versa. This is something I already knew, even though I didn’t think about it quite that way. Have you ever noticed that the objects that are used in clicker training exercises become important in their own way? I think most of us have experienced a horse that brightens up or gets excited at seeing a target stick, mat, or some other object that is associated with a specific behavior. Those objects have taken on value through classical conditioning, which was happening while we were training behaviors with operant conditioning. If you’ve ever had a horse eagerly take you to a mat, then you know the power of classical conditioning. Since some classical conditioning is already happening, is that enough? Or are there benefits to focusing more on classical conditioning? One of the things that Kathy emphasized was that we don’t want to use classical conditioning just to get the animal to tolerate or accept a previously aversive stimulus. We want to use classical conditioning until the animal is excited about seeing it. I love this idea and now I am wondering why I haven’t used classical conditioning more. Thinking about it, I think there were a few sticking points.
One of the sticking points for me with using classical conditioning was that I was pretty wedded to the “don’t treat without clicking” mantra. This is partly because that “rule” was one of the things that convinced me to give clicker training a try. I liked the idea that the horse would not expect food without being clicked, especially since I had small kids and possibly nibbly ponies. I think this rule is really important to avoid muggy horses and for a long time I really stuck to it (meals excluded). But it did mean that training protocols that used food without clicking were ones I tended to shy away from.
However, in recent years I have played around with just using food (no click) and I have not seen any problem with doing it. I found that my horses were so polite around food, from all the years of clicking before treating, that just using food was not an issue. I do have to be careful about my mechanics (if I get sloppy, they get sloppy) and there are some situations where I choose to place food in a bucket or on the ground instead of hand feeding. But overall I like being able to deliver food with or without a click as it gives me more flexibility and I think it also made my horses even more polite about food.
The other sticking point was that I think I was confusing using food to create positive associations with conditioning a new stimulus. When I first started using classical conditioning, I would use food as a way to change how an animal felt about something that was happening. So if I wanted to work on trailer loading, I would load the horse and feed it while it was in the trailer. Or I might feed it while the vet was there so that having the vet there was a good thing. I think there is value in this, but I don’t think it is quite the same thing as more focused classical conditioning where the trainer identifies the conditioned stimulus and deliberately transfers the associations from the unconditioned stimulus. Doing the latter is a much more focused approach and requires a lot of pairings (with the correct order and a pause between), but it also has a much greater likelihood of really changing how an animal feels when the stimulus is presented.
Recently I came across an old handout from my early days of giving presentations on clicker training. As part of it I have a little chart titled “Four Ways to Use Food.” They are listed as operant conditioning, classical conditioning, bribery, and indiscriminate hand feeding. I used to present this information so that people could start to recognize the different ways they use food. It was a nice way to get people to expand their thinking a little if they were in the “all food treats are bad” camp, or if they had never thought about when and why they fed treats. At that time I spent all my presentation time on the operant conditioning part, but next time I think I’ll take a few moments and explain more about classical conditioning now that I know more about how the two work together.
The next set of notes is from Susan Friedman’s talk. A few things stood out to me. One is that Susan says she does teach the quadrants with their scientific names, even though there is some confusion over terminology because it conflicts with the way the words are commonly used. She thinks it’s important that we all use the same words so we can understand each other. Later she said there are really only about 20 words you need to learn to understand the technical jargon. That was meant to be encouraging.
When she looks at behavior with a view toward analyzing the ABCs (antecedent, behavior, consequence), she asks three questions:
1. What is the focal behavior? (Yes, you have to pick ONE behavior to analyze.)
2. Do you predict the animal will do the behavior more (reinforcement) or less (punishment) in the future?
3. Was something added as a result of (contingent on) the behavior (+) or removed (-)?

A few comments on the ABCs:
In addition to having the antecedent that immediately precedes the behavior, there can be distant antecedents. These can come from an animal’s history and past learning. One way to think about antecedents is that they “set the occasion for the behavior.”
She mentioned the importance of predicting behavior. When we can predict what will happen, then we can plan for it. One way to get a better understanding of the quadrants is to watch a video clip of an interaction and pick out specific behaviors to observe. As you watch the video, can you see which behaviors are being reinforced and which behaviors are being punished? I thought this was a great suggestion because we all tend to make assumptions about how behavior is affected by different consequences. We might find we are more accurate if we spend more time observing first, or at least video our training sessions so we can observe what really happened. It’s so easy to think you know what is going on and then watch video and see that something entirely different is happening.
In a typical ABC analysis, the emphasis is on observable behavior. This may make it sound like there is no place for considering the emotional response of the animal, but this is not so. The emotions associated with a behavior will show up as an observable part of the behavior. We just have to learn to look for them. She used the word “valence” to describe the emotional component of a behavior.
I looked it up and got this definition: “Valence, as used in psychology, especially in discussing emotions, means the intrinsic attractiveness (positive valence) or aversiveness (negative valence) of an event, object, or situation.” Valence is not a word I had used before, but also listed was “ambivalence,” which I think we all recognize. I thought it was interesting that the word ambivalence (mixed feelings) is commonly used but the word valence is not.
Susan asked “Does Valence Matter?” YES!
Valence is important, but it’s important to be accurate about how we evaluate it. There is a temptation to try and read the minds of our animals and this is not valid or possible. We can only evaluate valence by observing behavior. If you want to learn more about emotions in animals and how to understand them from a scientific point of view, look up the work of Dr. Jaak Panksepp. I have some notes on his talk at ORCA on my website which will get you started, and then there are more resources available on the internet. Susan’s point was that yes, valence matters, but we need to evaluate it objectively.
Do you remember that the title of this talk was about oxymorons? An oxymoron is a word or phrase that seems to contradict itself, like jumbo shrimp or negative reinforcement. Or… using reinforcement to reduce behavior. This was one of the key points of her talk. She showed several ways that we can use reinforcement to reduce unwanted behavior. This included noncontingent reinforcement and differential reinforcement.
Noncontingent reinforcement is a slightly controversial term because reinforcement is contingent by definition, but it has been adopted to mean a training protocol where the reinforcement is delivered independent of the behavior. The idea is to break the connection between an unwanted behavior and any possible reinforcement for it by providing the reinforcer on a regular basis, but for other behaviors. This takes away the functionality of the unwanted behavior.
The reinforcement is often provided at regular intervals. She had an example of a puppy on a mat. The trainer wanted the puppy to lie down on the mat so she reinforced her for being on the mat at regular intervals, regardless of her position on the mat. There were some criteria (mostly just being on the mat) but they were very broad and the reinforcer was delivered at regular intervals regardless of what the puppy was doing (as long as she was on the mat). The food was presented to encourage a down, but it was not contingent upon the puppy being down.
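As a way to picture the mechanics, here is a minimal sketch of my own (assuming a simple fixed-time delivery loop) of noncontingent reinforcement. The deliver_treat and meets_broad_criteria callables are placeholders for the trainer's actions and judgment.

```python
import time

def noncontingent_reinforcement(deliver_treat, meets_broad_criteria,
                                interval_s=10.0, duration_s=120.0):
    """Deliver the reinforcer on a fixed-time schedule, independent of the
    specific behavior the animal is doing at that moment. The only check is
    a very broad criterion (e.g. 'the puppy is somewhere on the mat')."""
    end_time = time.time() + duration_s
    while time.time() < end_time:
        time.sleep(interval_s)        # wait the fixed interval
        if meets_broad_criteria():    # broad check, not a specific behavior
            deliver_treat()           # reinforcement is keyed to time, not behavior

# Hypothetical usage:
# noncontingent_reinforcement(toss_treat_on_mat, puppy_is_on_mat)
```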
She listed the following steps for using noncontingent reinforcement:
Identify the reinforcer that maintains the problem behavior.
Another way to reduce behavior with reinforcement is to use differential reinforcement. Differential reinforcement refers to a general procedure and then there are some specific “types” of differential reinforcement protocols that are usually written out as initials. DRO is differential reinforcement of OTHER behavior. DRI is differential reinforcement of INCOMPATIBLE behavior. DRA is differential reinforcement of ALTERNATIVE behavior.
Differential reinforcement is two procedures working in tandem. One behavior is being strengthened (reinforced) and the other behavior is getting weaker (extinction). All behavior is the result of differential selection of behavior by consequences. She called differential reinforcement “the ‘other’ kind of natural selection.” We are biologically designed to change our behavior in response to consequences.
Differential reinforcement is part of shaping and stimulus control. When we stop reinforcing certain approximations because we have changed the criteria, we are using differential reinforcement to select out those approximations that do meet criteria. When we only reinforce behaviors that happen on cue, then we are using differential reinforcement to select out cued responses and extinguish uncued responses.
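To make the “two procedures in tandem” idea concrete, here is a small sketch of my own (not from the talk) showing a DRI-style consequence rule: one behavior earns reinforcement, everything else earns nothing and is left to extinguish.

```python
def dri_consequence(observed_behavior,
                    reinforced_behavior="stand with all four feet still"):
    """Differential reinforcement of an incompatible behavior (DRI) in a
    nutshell: the chosen behavior is reinforced, while the unwanted behavior
    (e.g. pawing) earns nothing and weakens through extinction."""
    if observed_behavior == reinforced_behavior:
        return "click and treat"
    return None  # no consequence delivered; extinction does the weakening

# Hypothetical usage:
print(dri_consequence("stand with all four feet still"))  # click and treat
print(dri_consequence("pawing"))                          # None
```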
As part of the discussion on getting rid of unwanted behavior, she did talk a little bit about other ways to reduce behavior (LRS, punishment, and extinction). LRS (least reinforcing scenario) is a way of responding to an incorrect response to a cue. The trainer pauses for a moment (very brief), just enough to break the flow of behavior and then immediately recues a well-known behavior to get reinforcement going again. A correctly done LRS makes it less likely that an incorrect response will be inadvertently reinforced (this can happen if you just continue on) and reduces extinction induced aggression.
The use of positive punishment has been well studied and it has been shown that it is not a good way to reduce unwanted behavior. It is generally the most intrusive and has side effects such as:
Apathy
Extinction is another option for reducing behavior. It works because in extinction you decrease a behavior by permanently withholding the maintaining reinforcer, but it also has unwanted side effects and can be hard to do (resurgence and recovery are problems). You have to be able to withhold all reinforcement and sometimes the animal finds its own reinforcement. She calls this “bootlegging reinforcers.” Extinction is more intrusive than reinforcement, causes an initial burst of intensity (extinction burst) and can be hard to do with behaviors that are reinforced intermittently. It is most effective if you can ignore the behavior the first time it happens so that you never give it a function. I will add, from personal experience, that it can be really hard to control all the reinforcers for a behavior when working in an animal’s normal environment. When you can’t control all the reinforcers, then the “Matching Law” comes into effect. The matching law says that “Given a choice between two behaviors, the relative proportion of responses matches the relative proportion of reinforcers earned by that alternative. “ Animals do what pays the best. If an animal walks away, it means we are not reinforcing enough.
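Since the Matching Law is stated as a proportion, it is easy to show with numbers. Here is a small worked sketch of my own, with made-up reinforcement rates, of the strict matching prediction: the share of responses allocated to a behavior matches the share of reinforcers it earns.

```python
def matching_share(reinforcers_a, reinforcers_b):
    """Strict matching: responses_a / (responses_a + responses_b) is
    predicted to equal reinforcers_a / (reinforcers_a + reinforcers_b)."""
    total = reinforcers_a + reinforcers_b
    if total == 0:
        raise ValueError("need at least one reinforcer to make a prediction")
    return reinforcers_a / total

# Made-up example: behavior A pays 3 treats per session, behavior B pays 9.
# Strict matching predicts about 25% of responses go to behavior A.
print(f"{matching_share(3, 9):.0%}")  # 25%
```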
So with all this information about ways to reduce unwanted behavior, how do we proceed?
Step 1: operationalize the behavior.
She had some examples of this with lions and goats and showed how to write up a behavior change model chart listing antecedents, possible behaviors and consequences. The chart starts with the existing ABC sequence but then you can add alternative paths to different consequences. Then she showed a video clip of a boy working with a parrot that had a history of biting him. He started by just feeding the parrot while it was in its cage and by the end of the training (over a period of weeks), he was holding the parrot and asking for behaviors.
Here are a few other quotes from the talk that I wanted to share. Susan Friedman’s website is www.behaviorworks.org. You can find more information on her work, articles and courses on her site.
This was the first of the shorter (45 min) talks that Karen Pryor gave. Extinction is something that often gets mentioned as a side note in discussions on operant conditioning and learning theory, but this was an in-depth look at what it is and whether or not we should be (and are) using it.
Karen started with the statement “Extinction takes place when reinforcement stops. The term “extinction” refers to the behavior that was being reinforced: theoretically the behavior the reinforcement is supporting stops, too.” We say a behavior is going into extinction when it is no longer being reinforced and the frequency at which it occurs decreases, hopefully until it is not happening at all.
I think most of us become aware of extinction by experiencing or observing an extinction burst. This is the escalation of behavior that occurs when reinforcement first stops and the animal (or person) has an emotional outburst because something that had been working is no longer doing so. During an extinction burst the frequency of the previously reinforced behavior increases (there is more of it, often accompanied by a change in intensity) and it will be accompanied by a variety of emotions, none of them good. Karen says it is common to feel annoyance, anxiety, anger, rage and despair. This occurs across many species. She had examples of extinction bursts happening with rats, pigeons, dogs, and fish. Examples with people included the ever familiar examples of kicking the vending machine and trying to get your car to start.
In addition to the emotions that accompany the extinction burst itself, extinction can lead to the feeling of grief. This is the sense of loss that can affect future behavior. Examples of extinction that can lead to grief are the break-up of a relationship, losing your job, and leaving home. B.F. Skinner is the one who pointed out that leaving home means that a lot of extinction is happening at once because you are leaving so many familiar things behind, and that includes all the reinforcement that accompanied them. This grief can be quite long lasting. Karen had a story about a dolphin that sulked for 3 days after an accidental extinction event.
The rest of the talk focused on three questions about extinction: Should we use extinction in training? Can we build resistance to extinction? What are alternatives to extinction?

Should we use extinction in training?
Extinction is about decreasing (or stopping) a behavior by removing reinforcement. Right off, as a clicker trainer, this should raise a little red flag in your mind. It’s always better to focus on what YOU DO WANT than it is to train by thinking about what you don’t want. You can see this more clearly by looking at Karen’s list of ways that extinction is used by some trainers.
As a research tool (on/off experiments)
You can see that most of these (potentially all) are training methods that are going to be emotionally difficult for the animal and have possible adverse effects. Not only that, but it’s not clear that they are even effective. Karen said that “the learner always remembers what used to work. The memory is always there.” You can’t replace learning. You can only cover it up with new learning, so extinction is never complete. It really only works “to all intents and purposes.”
Punishment in particular is very ineffective. Karen clearly talks about both +P and –P. She said that taking away reinforcement for a behavior is –P. Since extinction is about decreasing behavior, this makes sense and you would get a lot of the same emotional fallout that accompanies punishment.
Punishment tends to suppress all behavior or just change the environmental context or triggers for the behavior. The animal may learn not to do the behavior if someone is watching, but continue to do it in other circumstances. If the underlying reason for the behavior or reinforcement is not changed, then punishment can end up suppressing one behavior, but the animal might replace the undesired behavior with another equally undesirable behavior.
When you use punishment, emotional responses are inevitable and they include:
A loss of trust
The bottom line is that we don’t want to intentionally use extinction in our training… but there’s a little catch here. According to Jesús Rosales-Ruiz, extinction is part of the shaping process. Karen mentioned that old textbooks talk about using extinction in shaping. When you stop reinforcing a behavior because you have changed the criteria, the old behavior goes into extinction. When I first learned about shaping, it was described as “riding the extinction bursts. “ I would click for a response a few times and then withhold the click and see if the horse offered something else.
I think the thing to keep in mind here is that extinction happens all the time as we learn new things; it is probably part of the learning process. The important thing is how the animal feels about what is happening. If the animal understands the shaping process and is still working toward the behavior, then the temporary lack of reinforcement becomes information and is not accompanied by the emotional angst that can accompany extinction. This is yet another argument for shaping in small steps so as to avoid stressing the animal.
This leads us right into…
Can we build resistance to extinction?
If we accept the last statement that there is some extinction as part of the shaping process, and we also recognize that there will be times when we need an animal to work for longer periods of time before being reinforced, then we have to look at extinction not just as something to be avoided, but as something that is part of learning. That means that we need to train our animals so that they understand how to handle a change in the type of reinforcement, a thinner schedule of reinforcement, or the absence of reinforcement at moments in the shaping process (I could argue this one, but I’m not going to get into that here).
Karen said that this is where variable reinforcement schedules are often used. Some trainers use them so that the animal will continue to keep working even if a behavior is not reinforced each time. The animal is trained that reinforcement does not happen for every correct answer, but only for some correct answers. This leads to more persistence because when a behavior doesn’t pay, the animal will try it again, hoping that it will pay the next time. It’s the same idea as gambling or trying to get an old car to start. You keep trying because sometimes it works. The reality is that a lot of things in life are on variable reinforcement schedules. She used the examples of playing solitaire (sometimes you win), batting practice (sometimes you hit it), your boyfriend’s phone call (sometimes he calls?), and the dog’s response to cues. None of those things happen every time you try, but they happen often enough that you keep trying.
Variable reinforcement schedules are used to teach the animal that if it keeps offering behaviors, reinforcement will eventually happen. And they can be very powerful. Most addictions are maintained by variable reinforcement schedules (gambling, smoking, etc.). Variable reinforcement schedules can be important for some types of working dogs where reinforcement happens at the end of a long sequence of behaviors. Karen used the example of search and rescue dogs. I suspect those dogs get quite a bit of reinforcement along the way, but their main reinforcement is finding someone at the end of the search.
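Here is a minimal sketch of my own (assuming a simple probabilistic model) of how a variable ratio schedule differs from reinforcing every response: each individual response may or may not pay off, but over many responses the payoff rate averages out.

```python
import random

def reinforced_on_vr(mean_ratio=5):
    """On a variable ratio (VR) schedule, any single response pays off
    unpredictably, averaging one reinforcer per mean_ratio responses."""
    return random.random() < 1.0 / mean_ratio

# Over 1000 responses on a VR 5 schedule, roughly 200 pay off, but there is
# no way to predict which ones -- that unpredictability is what keeps the
# learner trying again after an unreinforced response.
payoffs = sum(reinforced_on_vr(5) for _ in range(1000))
print(payoffs)
```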
Just as a little note, not everyone agrees on the value of variable reinforcement schedules. Bob Bailey says he doesn’t use them. There have been various talks at past Clicker Expos about the difference between variable reinforcement schedules and reinforcement variety. You might want to read up on that material (it’s on my website) if you are curious about variable reinforcement vs. reinforcement variety.

What are alternatives to extinction?
From what I have written above, you can probably guess that intentionally putting a behavior into extinction is not often a good training choice. If you want to get rid of unwanted behavior, there are better ways. Karen lists some alternatives to extinction (and punishment) here:
1. Teach an incompatible behavior
2. Bring the unwanted behavior under stimulus control
At the end of the talk she pointed out that the use of extinction as part of many laboratory experiments is changing and researchers are looking for alternative ways to conduct experiments so the subjects don’t experience as much stress.
Michele Pouliot, “Pace, Place and More”
The subtitle for this lecture/lab was “Strategic reinforcement delivery.” It was about the importance of learning how to deliver your reinforcement. She said that every reward is a “delivery event” and the reward event greatly influences behavior. She illustrated this with two equations:
Effective Reward Location + Effective Delivery Method = Powerful Training.
When adding reinforcement strategy into your training, it’s important to take into account the following considerations:
The individual animal: your relationship, the value of reinforcers available, mechanics of taking rewards
In addition, it’s important to note that Strategic reinforcement does NOT ask for additional behavior. After the click, the “delivery event” flows smoothly and does not “require” more behavior before the dog receives the reward. For example, if the dog is clicked for heel position but moves out of heel after the click, the handler should not require the dog to move back into heel position before giving the reward.
Reward location is the key to effective reinforcement strategy. There are several location strategies that she talked about and showed in the lab including:
· resets dog (places dog in best location for next repetition)
· rewards where dog was at the time of the click (adding value to that location)
· rewards at a specific location (moves the dog) which supports the goal behavior
· rewards where the dog is at the time of delivery (may be a different location from where the click happened)
The Speed of Reward delivery is also important.
TOO FAST: dog is startled by hand movement or doesn’t see where the food goes
Between TOO FAST and TOO SLOW, there is room for a variety of speeds. The handler can influence the dog’s energy by moving more quickly so that her movement builds the dog’s enthusiasm. Or she can deliberately slow down so that the reward event lowers the dog’s energy. In addition to the handler moving, the reward itself can be thrown. This works better with rewards that are easy to see, stay in one piece and have a low “bounce” factor. It also helps if the handler has reasonably accurate throws (practice, practice, practice!).
She shared a number of specific food delivery strategies that she uses. I have grouped them according to her four general categories above because that made it easier for me to understand different applications of the same general strategy. It’s possible she wouldn’t group them exactly the same way I did, and it’s good to remember that a trainer can use the same food delivery technique for different effect, depending upon how the rest of the exercise is structured. Resets:
Delivery process: Come n Get it! Upon click the dog comes to the reward at its location (commonly the handler). This is most often used as a reset, but could be used to reinforce a location or goal, depending upon the behavior.

Feeding to promote the goal behavior:
Reward location goal: reward at the completion point. The final behavior position is more heavily reinforced. For example, the trainer can feed down and back between the front legs to shape a bow.

Feeding to reinforce the clicked behavior (or location):
Delivery process: Pizza delivery, no need to travel. Direct delivery prompts the dog to remain at the click location. You can also use a reward location that encourages the dog to check in with the handler. This is helpful if training behaviors in the presence of distractions.

Feeding where the animal is (even if it moves after the click):
Delivery process: Protected contact. The animal remains in position behind a barrier and waits for the reward to be delivered.
Notes from the lab:
One of the activities in the lab was to teach the dog to put its head in a pop tube. A pop tube is a plastic tube that can be lengthened or shortened and curved into different shapes. When teaching a dog to put its head in the pop tube, she had the trainer feed by putting her hand through the pop tube so the treat came out of the pop tube. This encouraged the dog to orient toward the pop tube in such a way that it made the desired behavior (head in tube) more likely.
She had everyone teach one behavior doing the pop tube and then each person with a dog got to pick a new behavior to train (or one to improve) and she had suggestions for training it and how to deliver the food. Some of the ones I observed were:
The bow: feed low and back between the dog’s front legs to encourage it to drop down more.
There were a few other behaviors where the treat delivery was more about taking advantage of the environment and dog’s natural tendencies. If the dog tends to orient toward your treat pocket, then you can use that. Use walls, corners, etc… to make the dog more likely to orient in a specific direction. If the dog is staring at you, and not interacting with the object, then feed so the dog is not facing you and then you can move out of their space so that the object is the first thing they see when they are done eating.
Additional tips and comments:
The food reward hand should have no information.
What to do if it’s not going well….If you aren’t making progress, stop and THINK. Clicker training done well should be fast. We have a tendency to keep training and hope it will get better. Don’t lower the criteria to get things happening – stop and THINK. (I’m going to add a little note here that this comment should be taken in context. I don’t think Michele is saying all behaviors can be trained fast, but shaping a clickerwise dog to do a simple behavior should be quick).
Kay Laurence, “The Art of Practice”
Does the idea of “practicing” make you cringe? As a mother, I know that all I have to do is suggest someone practice something and they are headed the other way. Why is that? Is it the word itself which has become associated with repeating boring drills? Or is it that we are not taught how to practice well so that practice is both productive and enjoyable? This was the subject of Kay Laurence’s lecture on practicing. What is practice? She says “It is the purposeful and single-minded application with the intent to get better.” Not only that, but “It isn’t the thing you do once you are good. It is the thing you do that makes you good.” (Malcolm Gladwell, “Outliers”)
In talking about practice, Kay is including many different types of behaviors. There are those skills we practice because they are important for our performance goals. There are also those skills we practice because they are life skills. Practicing is not limited to pet or performance dog training. It is about developing competence, familiarity, fluency, focus, understanding, consistency. It is also about preparation. It takes time and effort to train behaviors to meet these criteria and a big part of it is learning to see the environment as the dog sees it. She said her new slogan is going to be “Be More Dog” which she got from a commercial. I’ll put the link in a comment if you want to see it.
So why do people hate to practice? Here are some common reasons:
Poor technique: if you practice with good technique and the science is sound, you will make progress. On the other hand, if your technique is poor, practice will be frustrating and the lack of progress will make you less likely to continue with it.
Training for the wrong goal: If you are training under a time limit or to please someone else, then your ability to practice well will be compromised. Time is a bad reason to push something along, whether it’s because you only have 5 minutes or because you need to be ready for a competition tomorrow.
Joyless repetitions: Some people think the dog doesn’t like repeating behaviors, but they do if they know the behavior and get paid for it. A big part of this is that the trainer learns to deliver the treat with delight. If the trainer acts bored with the repetitions, the dog will pick up on it.
Unprepared: It’s important to take the time to prepare the environment. Make sure that all the equipment you need is ready and that your treats are prepared. Preparing also includes having a training plan.
Selective practice: Some people do practice, but they tend to practice the things that they do well and avoid practicing the things that are more challenging. So they may have some things they like to practice and some things that they hate practicing.
Any combination of the above reasons can lead to avoidance. The trainer doesn’t put the time into practicing that is necessary. Kay said that the 10,000 hour rule also applies to practicing. You have to do 10,000 hours of practicing to achieve mastery in something.
She gave some tips to make yourself more motivated to practice, and more successful when you do it. The first part is making a plan. There should be a plan for you and a plan for the dog. The plan should identify what needs to be prepared ahead of time. This could be setting up the environment so that you have everything you need. It should also include removing potential distractions.
As part of preparation, you should rehearse mentally and physically. There are a lot of trainer skills that can be practiced before the dog comes out. She recommends practicing throwing treats until you can easily throw a treat and have it land where you intended. You should also do some mental preparation so that you start practicing with a good attitude and are fully committed to what you are doing. She used the phrase “bring your whole-self.”
You should have a way to document the session (record data or videotape) so that you can analyze how things went. She pointed out that it’s important to be honest about what really happened (trainer errors vs. dog errors vs. other random stuff). If you are videotaping, it’s helpful to watch the video a few times, focusing on one thing each time. Then apply what you have observed to the next training session. This requires self-discipline and self-awareness. The dog has no knowledge of the outcome so the trainer needs to pay attention to the bits along the way. Plan to train in small bits and batches so you can pay attention to detail and balance.
In a session, the details are important. Your motor skills need to be fluent. Good mechanics include being able to click with good timing, deliver food appropriately, manage and manipulate targets, proper use of signals, being disciplined about correct use of cues, and balance. An example of a detail that matters is how you position the dog so that you can observe the behavior. When teaching a bow, she clicks when the dog’s elbows touch the floor. If the dog is in front of her, she can’t see it. She has to position the dog sideways relative to her so that she can time the click correctly.
She had some specific comments about clicking. Dogs pick up on uncertainty, so don’t click if you are uncertain whether or not you should click. If you do click, then deliver the reinforcement promptly. There should be no “yes, buts” following the click, where you click and then ask the dog to do something else, or waffle over delivering the food because of the way the dog takes it. If there’s a problem with some element in the training that’s not related to your specific goal for that session, then make a note and work on it another day.
Most of this has been about the “science” of practice. What about the “art” of practice? The art of practice includes:
complement the learner’s pace
“Practicing is very much about focusing on one thing at a time and working gradually toward the experience of the whole.” (Madeline Bruser)
Kay writes “The talent is being able to combine the science, good technique and the natural instinct of listening to your learner and providing what they need at this precise time. Attractive solutions are often avoiding doing the work. But…it is the work, the understanding, the practice that is the solution.”
I am reminded of a quote from Sue Ailsby that I read long ago. She was listing important steps that lead to success in training and one was very simply “Do the work.”
But…keep in mind that Kay also says “Training should be a joy.” It’s our job to reconcile these two things so that practicing is a joy for all parties involved.
If this has piqued your interest in practicing, there are plenty of good resources out there on the subject. Kay often recommends a book called “The Art of Practicing” by Madeline Bruser. I haven’t read it, but I’ll put it on my list. A few years back I read a book called “The Talent Code” by Daniel Coyle and he writes a lot about “deep practice.”
Here’s the Be More Dog commercial: https://www.youtube.com/watch?v=iMzgl0nFj3s.
Jesús Rosales-Ruiz, “Using Resurgence to Your Advantage.”
Jesús started with some definitions. If some of this terminology is new to you, I would just read them, read the rest of the article, and then come back to them later. They will probably make more sense once you have seen examples of them. Also, if you haven’t read the notes from Karen Pryor’s talk on extinction, I suggest you do so. They add a few more details on extinction.
What is resurgence?
1. Resurgence refers to the reappearance of previously learned behaviors when the present behavior is not capable of getting reinforcement. Usually the learner reverts to older forms of the response which were once effective.
2. Also known as regression.
3. It’s a retreat from one’s more recently acquired behavior to that of an earlier period. (From Keller & Schoenfeld, 1950.)

Those were the “original” definitions. Since then, behavior analysts have made a few distinctions between resurgence and regression.

Modern definitions:

Regression: the reappearance of previously extinguished behavior during the extinction of more recently reinforced behavior. (Catania, 2013)

Resurgence: the occurrence of a previously reinforced behavior (Behavior 1) when a more recently reinforced behavior (Behavior 2) is undergoing extinction. (Cleland, Foster, & Temple, 2000; e.g. Epstein, 1983; Wilson and Hayes, 1996)
The significant thing to note here is that in regression, the behavior that reappears is one that had previously been extinguished (intentionally allowed to go to extinction), while in resurgence, the behavior that reappears is one that had been reinforced and was never deliberately extinguished.

Since the difference between regression and resurgence is related to extinction, he defined extinction. Extinction is one way to get rid of behavior, but it can come with an emotional cost. This is because the process of extinction goes through several steps. These are:

Response bursting

Extinction bursts (the “response bursting”) are seen in all species. Pigeons will coo, flap their wings and engage in other emotional behavior. Humans will try harder, often increasing the intensity of the behavior, accompanied by emotional outbursts such as a loud voice, hitting something, etc.
Jesús compared the feelings that accompany extinction with those of loss or grief. If the behavior has been heavily reinforced in the past, the learner may go through the stages of grief (shock and denial, anger, depression and detachment, dialog and bargaining, and acceptance).

Not every extinction event creates the same emotional outburst. There are a number of factors that can affect how the learner reacts when a behavior no longer leads to reinforcement. One of the most significant factors is whether the learner has choices. The more other choices there are that could lead to reinforcement, the more quickly the learner will find other ways of earning reinforcement, and the more quickly she will abandon the behavior that no longer works.

Jesús also mentioned that variable reinforcement schedules can minimize the effects of extinction. If the learner is used to a pretty lean reinforcement schedule, then it is going to be less affected when reinforcement stops, compared to a learner that expected reinforcement for every correct effort.

I am adding a note here that variable reinforcement schedules also make a behavior more resistant to extinction. The fact that they make the learner less emotional is probably related to the fact that extinction is actually less likely to be happening in the absence of reinforcement when the learner is on a VR schedule. So the learner may not experience the effects of extinction until there has been a very long period of no reinforcement.
So extinction is bad? Well…not exactly. There are degrees of extinction, and most learning involves some level of extinction as the criteria are changed so that previously reinforced behaviors no longer get reinforcement. The reality is that extinction is going on all the time when we learn new behavior. It is also happening when we add stimulus control to a behavior, because there will now be incidences of that behavior that are no longer reinforced.

Animals can learn to deal with extinction as long as other opportunities for reinforcement are available. This is one of the big differences between studying extinction in the lab vs. in real life. In the lab, the learners often have few choices. When a behavior undergoes extinction, it means that ALL reinforcement stops or that the animal has to learn something totally new to get reinforcement going again.

His mention of lab vs. real life made me remember that Karen Pryor mentioned reversal studies as examples of training that uses extinction. In a reversal study, the animal is taught to do one behavior (behavior A) to get reinforcement. This might be touching a key. Then the researcher changes the setup so that the animal has to touch a different key (behavior B). Once it learns to do behavior B, the researcher changes back to only reinforcing behavior A, and so on, until the animal can go back and forth on cue (yes, there are cues). This involves extinction every time the researcher changes which key leads to reinforcement, and it is very stressful on the learner. These animals often exhibit a lot of stress behavior.
Those are examples of intentionally using extinction in training, but extinction can also happen “accidentally” in training. Jesús showed a few video clips of dog training where the dog was “accidentally” put into extinction. In one of the video clips, the dog was “put into extinction” because it didn’t offer the desired behavior. During extinction the dog offered a previously reinforced behavior and then the behavior the trainer wanted. She clicked the correct response. When she repeated the exercise, the dog did both behaviors. This is an example of how behaviors that are offered during extinction bursts can “tag along” with desired behaviors.

I always thought of extinction as being about a specific behavior, but you can also have a period of extinction where no behavior is reinforced. In that case, a number of behaviors are being extinguished at the same time. Maybe this is always true, because a learner in an extinction burst is going to try a lot of different behaviors. But I think there might be a difference between how the learner responds to a period of extinction vs. letting one behavior extinguish.
To study this more, he did some experiments with PORTL (portable operant research and teaching lab) using fixed ratio schedules. The trainer shaped a behavior and put it on a fixed ratio schedule of 5 (FR 5). That means every 5th response was reinforced. Then the trainer switched to an FR 30 schedule (big jump – don’t do this!). This effectively created a period of extinction when no behavior was being reinforced, and the learner tried a lot of different things. What happened was that after a few repetitions, the learner was now consistently including a lot of extra behavior that it would offer between correct repetitions of the target behavior.
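For anyone who likes to see the arithmetic, here is a tiny sketch of my own (not part of the PORTL experiment) of a fixed ratio schedule. It just shows why jumping from FR 5 to FR 30 creates such a long unreinforced stretch for the learner.

```python
def fixed_ratio_payoffs(ratio, total_responses):
    """On a fixed ratio (FR) schedule, every `ratio`-th correct response
    is reinforced and everything in between earns nothing."""
    return total_responses // ratio

# In a run of 30 correct responses:
print(fixed_ratio_payoffs(5, 30))   # FR 5  -> 6 reinforcers
print(fixed_ratio_payoffs(30, 30))  # FR 30 -> 1 reinforcer, after 29 unreinforced tries
```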
Jesús didn’t use this term, but I am sure this is what is often called superstitious behavior. Superstitious behavior is behavior that becomes “attached” to the target behavior because the animal has been indirectly reinforced for doing it. It often happens when the trainer is looking for one behavior and doesn’t notice that the behavior is preceded by other behaviors in a consistent way. Instead of reinforcing the final behavior, the trainer ends up reinforcing a chain or sequence of behaviors.

The talk spent a lot of time on extinction because you have to understand extinction before you can really understand resurgence. In the last part of the talk he showed some video about different ways that behavior can “re-appear” during resurgence.

In the first video clip, the trainer taught the learner 4 behaviors. I can’t remember exactly what they were, but they were manipulating small objects so I am going to call them ring, button, cube, disc. The learner was taught them in that specific order, so she got reinforcement for manipulating each one in a specific way until there was a trained behavior associated with each object. When she was done being shaped to do one behavior, the object was removed and they started with the next object. The only extinction that happened was during the shaping process for each behavior, not between learning different behaviors.

When all four behaviors had been learned, the trainer placed all 4 objects back out. Then she was put into a period of extinction (no reinforcement for any behavior). During extinction, she went back and repeated the behaviors in the order in which they were taught, starting with the ring, then the button, the cube and the disc.
In the second video clip, the trainer taught the learner to touch objects that were placed out in a circle. The learner was reinforced for touching each object a certain number of times, and then the trainer stopped reinforcing that behavior and waited for her to touch another object in the circle (the next one around). So extinction was used here to get the learner to move to another object. Once the learner had worked her way around the circle, there was a period of extinction where nothing was reinforced. In this case, the learner worked backward: she repeated the last reinforced behavior, then the one before it, and so on, working backward around the circle.

The difference between these two experiments was that in the second video clip, the learner had moved from one behavior to the next after a period of extinction (the trainer just stopped reinforcing touching one object), but in the first video clip, the previous objects had just been removed (no extinction, just training a new behavior from scratch). This is actually a predictable outcome based on the way the behaviors had been trained.

I hope this makes sense without seeing the video. The point here is that in extinction, previously reinforced behaviors can come back in predictable ways depending upon their past history. This is the part that we can take and apply to our training. We can set up situations where the learner returns to a previously reinforced behavior to shape a new behavior.

Clear as mud? Here’s the example Jesús showed:

In PORTL, a small toy chair is placed on the table and the learner is reinforced for touching the chair. The learner touches the chair in predictable ways (touches the seat, tips it, etc.). The object itself influences how the learner chooses to interact with it (what does one do with a chair?). But the trainer wants the learner to push the chair. Nope…
The trainer removes the chair and replaces it with a toy car. The learner is reinforced for touching the car and is shaped to push the car. This is easy because pushing a toy car is a pretty normal behavior to do with a toy car. The learner gets a lot of reinforcement for pushing the car.

The trainer removes the car and puts the chair back. The learner goes back to touching and tipping the chair, even if there is no reinforcement.

The trainer removes the chair and puts the car back. The learner pushes the car and gets reinforced.

The trainer removes the car and puts out a small block. I can’t remember if the learner immediately pushed the block or tried a few things first, but pretty quickly the learner figures out to push the block (this is generalizing the “push” behavior).

The trainer removes the block and puts out the chair. The learner pushes the chair!

So what happened here? The trainer set it up so that a previously reinforced behavior (pushing) would occur in the absence of reinforcement for other behaviors. The example shows one way to get the learner to try something different and how you can “prime” the learner to try the behavior you want.
Jesús finished up with a few examples from Dr. Epstein's work on creativity and some real-life training examples that showed how you can take advantage of the reappearance of previously reinforced behavior in the absence of reinforcement. Resurgence can be used to shape new behaviors as well as to combine behaviors. The chair/car example shows how to shape the new behavior of pushing the chair. He had some video of Alex and Robin showing how she used resurgence to combine behaviors: Robin added the pose to the trot to get the new behavior of pose and trot. He also had video of Sola training passage by combining several different components so that Ember added them together to get a new behavior.
When I was watching this, I was thinking that it was a brilliant way to explain to new trainers how to help learners when they get "stuck." Occasionally when I am working with one of my horses, I get into a situation where the horse is so sure it knows the right answer that it ignores any information to the contrary. I have seen this happen with people in the training game too. I have learned that continuing on without making any changes is counterproductive, because they just try harder to do what they think is the right answer.
I might go work on something else or present the information in a different way, and then come back to it. Or I might do something else that gets them thinking in "the right direction," and then come back to it. I will also do something similar for more complicated behaviors, where I teach separate pieces and then let the horse put them together. That would be using resurgence to get new behaviors. With the information from this talk, I think I will be able to use resurgence more effectively and explain what I am doing to new trainers.
Next week I will post notes on Kay Laurence's lab on "Repurposing Default Behaviors." Kay takes advantage of resurgence in her training, so the report from that lab will have a few more examples of ways to use resurgence in real life.
This was the first time I had heard Hannah Branigan talk. She is a KPA Certified Training Partner (KPA CTP) from North Carolina. I found this information about her on the Clicker Expo website: "As the owner of Wonderpups, LLC, Hannah is committed to training both dogs and people with positive reinforcement methods. She has titled her dogs in conformation, obedience, schutzhund, agility, and rally."
Social interactions and focus are included in the fundamentals. Puppies should be allowed to play and engage with other dogs and puppies in well-supervised ways. She had some nice video of a puppy class where the puppies were doing parallel play, playing games in the presence of other puppies and exploring an environment that was full of interesting objects and things to do. The first interactions with other dogs and people are important. Puppies can be taught early on that interaction is contingent on appropriate behavior and that checking back in with the trainer is a highly reinforceable behavior. She showed some food games she plays with puppies that teach the puppy to explore and then check back.
Laura Sharkey, “Give Pups a Chance: working with aggressive puppies.”
In order to talk about aggression, we have to define it. Most puppies exhibit some degree of mouthiness and play behaviors that can get a little overwhelming at times, but she defines aggression as "a willingness to threaten and/or bite under mild duress." What is mild duress? A person could put a puppy under mild duress by doing any of the following:
Picking it up
Every puppy is an individual, so there are variations in responses, and she looks at these puppies as being at different places along an aggression scale. On her aggression scale, an "angel puppy" is on the left and an "off the chart aggressive puppy" is on the right. Snarky puppies are somewhere in the middle. The scale also has a line for "guarding," as this is one of the indicators of a puppy's potential for aggression. Guarding can be minor, but if it is not addressed, it can lead to true aggression.
Joey was an Amish farm puppy who showed intense aggression for no apparent reason. She had some pretty scary video of Joey freezing and then biting with no obvious trigger. He was dog-dog aggressive, dog-people aggressive, and seemed to have very few puppy qualities. She adopted him from his family and spent a long time rehabbing him (6 months – 1 year?), with significant improvements in some areas but not in others. He was eventually put to sleep when it became clear he was not going to be able to cope with life without putting other people and dogs at risk.
You must be committed to +R only.
You must be willing to deliver the truth – if the puppy is not going to work out for that family, you need to tell them.
You need to identify when a situation is unsafe.
There is absolutely no place for aversives in dealing with aggression.
Kay Laurence, “Freedom to Learn: training in the absence of formal competition.”
Get down – OK, this is probably not a cue many of us teach, but it is certainly used by many dog owners. Kay's point was that this phrase and the tone that often goes with it are not how you would speak to a family member who just wants to say hello.
Greeting is important to dogs. They want to say hello to us when we come back after being away. We should take the time to train an appropriate greeting ritual that meets the needs of both parties.
Can we replace traditional ideas about what they should look like and how they should be trained with a more dog-centric version? This is Kay's "Be More Dog."
Kay Laurence, “Repurposing Default Behaviors” – lab
One of the reasons she teaches this is that she has found that learning the cues for behaviors is one of the hardest parts of training. In her training, she only trains behaviors that are part of a dog's natural repertoire, so the behaviors themselves are not something the dog has to learn. What the dog has to learn is when to do them, and this puts more of the training focus on getting behavior under good stimulus control. The more behaviors you have, the more important this becomes.
1. The behavior you want puts the dog in conflict with an instinctive response. An example would be asking a dog to lie down on an uncomfortable surface.
2. The behavior might be environmentally inappropriate. That means there is something in the environment that says it's not a good idea to do that behavior when you ask. In many cases, the dog will be saying "not right now," and will comply if given a little time or if an adjustment is made. She described how a student was training a small dog to sit and was unintentionally stepping toward it while he cued it. The dog felt threatened by the man's forward movement, so it would scoot back instead of sitting. Once the man learned to keep his feet still, the dog would sit quite willingly.
3. The behavior costs the dog something because it has to give up something it already has. If the dog is already engaged in a reinforcing behavior, it might be reluctant to stop and do something else unless it has a long history of these trades being worthwhile.
When you set up a default behavior, you want to think about one that will be functional in the situation in which you want to use it. I know this sounds obvious, but it actually takes a little thought (GIMT – a Kay phrase which stands for Give It More Thought). With dogs, sit is often used as a default behavior, but some behaviors are harder to do from a sit (the dog has to get up first), and if the dog is sitting all the time, then it becomes hard to practice sitting on cue.
1. Throw food so the dog goes out away from you. Pay attention to where you are throwing the food, because you want to encourage the dog to re-orient to you after it finds it. She likes to throw the food into the corner of the room so the dog naturally turns. If you are not good at throwing food so the dog can track it, or if you have bad aim, then practice.
2. When the dog comes back, click as it approaches you, but while it is still moving. This is especially important with dogs that have been taught to return and sit (or do some behavior other than stand). If in doubt, do not click. You can always repeat the food throw to try again.
3. Make sure your body language is not asking for anything specific. A lot of dogs have been trained to react to the trainer's posture when returning, so be casual and vary your stance/position/posture a bit so the dog does not attach the stand behavior to any particular body language cue.
4. It can be helpful at certain points to slow down the food delivery. This prolongs the "waiting in anticipation," which is the behavior you actually want.
5. Once the dog is reliably returning and stationing in front of you, you can start to cue behaviors. She doesn't teach eye contact as a specific behavior. Instead she waits until the dog connects with her and shows it is ready, and then she cues a behavior. This builds a natural response of orienting toward the trainer when the dog is ready.
6. It's important to mix in some nonsense words with the cues. You want to make sure the dog is really listening to the words and not guessing. Even the inhale before you speak can become a cue, so you can practice changes in breathing. She noted that for many dogs, "silence is a killer." If you are quiet, they are going to start trying to do things. Working on nonsense cues is one way to teach the dog to wait longer between actual cues, but without the frustration that can happen if you try to build duration by just standing there and waiting.
7. Keep practicing sending the dog out (by throwing food) and returning to the station in front position, even after you start working on cues. This will break things up a bit and also give you some information about whether the dog is getting tired and needs a break.
8. You are going to reinforce both the default position (stand in front) and correct responses to cues. In the beginning, you may have to reinforce the default position several times between cues.
9. You can add a "wait" cue to the stand in front if you want. Default behaviors can have specific cues as well as environmental or context cues.
Teaching Parking:
1. Parking is a useful behavior for greeting people. If you've never heard of "parking," Kay has written and talked about it before. There is some information in my Clicker Expo notes from last year on Connected Walking, and it's also in her book "Every Dog, Every Day."
2. In parking, the handler holds the dog's collar and steps on the leash so that the dog is limited in how much it can move. The leash does not pull the dog down; it just provides a boundary.
In the course of the lab, she shared some other useful tidbits:
Kay Laurence, “The Craft of Fine Slicing.” – lab
She doesn’t use food to lure the dog on to the mat. You want the dog to step on the mat with confidence.
Click the dog for touching the target. Over time you can move the target so the dog has to cross one leg over the other to touch the target.
When you are tugging, make sure you and the dog are at an appropriate distance so he’s not in your body space and you’re not on top of him.
If you have any questions or comments about these topics or about Clicker Expo in general, feel free to email me. I'll try to answer them or point you toward more resources.
Katie Bartlett, 2014 - please do not copy or distribute without my permission