Class 11 Psychology Notes Chapter 6 (Learning) – Introduction to Psychology Book
Alright class, let's delve into Chapter 6: Learning. This is a fundamental chapter in Psychology, and understanding its concepts is crucial, not just for your Class 11 exams but also for various government examinations where Psychology or General Awareness questions might touch upon behavioural principles.
Learning is a key process that allows us to adapt to our environment. We'll explore how we acquire new behaviours, skills, and knowledge.
Chapter 6: Learning - Detailed Notes for Exam Preparation
1. Nature of Learning
- Definition: Learning is defined as any relatively permanent change in behaviour or behavioural potential produced by experience.
- Note the key terms: 'relatively permanent' (excludes temporary changes due to fatigue, drugs, illness, maturation) and 'experience' (learning happens through interaction with the environment, observation, practice).
- Inferred Process: Learning itself is not directly observable; we infer it has occurred based on changes in observable behaviour (performance). Performance is the observed behaviour/response, which may or may not reflect all the learning that has occurred.
- Learning vs. Maturation: Maturation refers to changes primarily due to biological growth and development, independent of specific experiences (e.g., walking). Learning relies on experience.
2. Paradigms of Learning
A. Classical Conditioning (Pavlovian Conditioning / Respondent Conditioning)
- Pioneer: Ivan Pavlov (Russian physiologist, Nobel Prize winner).
- Experiment: Pavlov's famous experiments with dogs.
- Before Conditioning:
- Food (Unconditioned Stimulus - UCS) naturally elicits Salivation (Unconditioned Response - UCR).
- Bell (Neutral Stimulus - NS) elicits no specific response (apart from, perhaps, an orienting response).
- During Conditioning (Acquisition Phase):
- Bell (NS) is repeatedly paired with Food (UCS). Bell presented just before the food.
- After Conditioning:
- Bell alone (now Conditioned Stimulus - CS) elicits Salivation (now Conditioned Response - CR).
- Key Concepts:
- Unconditioned Stimulus (UCS): A stimulus that naturally and automatically triggers a response without prior learning (e.g., food).
- Unconditioned Response (UCR): The unlearned, naturally occurring response to the UCS (e.g., salivation to food).
- Neutral Stimulus (NS): A stimulus that does not elicit the response of interest before conditioning (e.g., bell initially).
- Conditioned Stimulus (CS): An originally neutral stimulus that, after association with a UCS, comes to trigger a conditioned response (e.g., bell after pairing).
- Conditioned Response (CR): The learned response to the previously neutral (but now conditioned) stimulus (e.g., salivation to the bell). Usually, CR is similar to UCR but may differ in intensity or form.
- Principles/Determinants of Classical Conditioning:
- Time Relations: Best conditioning occurs when CS precedes UCS (Forward conditioning). Simultaneous and Backward conditioning are less effective. Trace conditioning (CS ends before UCS starts) can also work if the interval is brief.
- Type of Stimuli: Stronger UCS and more distinct CS lead to faster conditioning.
- Acquisition: The initial stage of learning the CS-UCS association. Requires repeated pairings.
- Extinction: The diminishing of a CR when the UCS no longer follows the CS. The learned association weakens.
- Spontaneous Recovery: The reappearance, after a pause, of an extinguished CR. Shows the learning wasn't completely erased.
- Generalization: The tendency for stimuli similar to the CS to elicit similar responses (e.g., dog salivating to bells of slightly different tones).
- Discrimination: The learned ability to distinguish between a CS and other stimuli that do not signal the UCS (e.g., dog learns to salivate only to a specific bell tone, not others, if only that tone is paired with food).
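The acquisition and extinction curves described above can be sketched with a toy associative-strength model (a simplified Rescorla-Wagner-style update rule; the learning rate of 0.3 and the trial counts are illustrative assumptions, not values from the chapter):

```python
# Toy model of classical conditioning: associative strength V grows
# during CS-UCS pairings and declines when the CS appears alone.
# alpha (learning rate) and trial counts are illustrative choices.

def run_trials(v, alpha, lam, n):
    """Update associative strength V toward the asymptote lam
    (lam=1.0: CS paired with UCS; lam=0.0: CS presented alone)."""
    history = []
    for _ in range(n):
        v += alpha * (lam - v)   # delta-rule update
        history.append(v)
    return v, history

v = 0.0                                              # neutral stimulus: no association yet
v, acq = run_trials(v, alpha=0.3, lam=1.0, n=10)     # acquisition: bell + food
v, ext = run_trials(v, alpha=0.3, lam=0.0, n=10)     # extinction: bell alone

print(f"after acquisition: {acq[-1]:.2f}")  # approaches 1.0
print(f"after extinction:  {ext[-1]:.2f}")  # falls back toward 0.0
```

Note that this simple sketch predicts complete erasure during extinction, whereas spontaneous recovery shows the association is suppressed rather than erased, so the model captures acquisition and extinction curves but not recovery.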
B. Operant Conditioning (Instrumental Conditioning / Skinnerian Conditioning)
- Pioneers: E.L. Thorndike (Law of Effect) and B.F. Skinner.
- Core Idea: Learning occurs because behaviours are influenced by their consequences. Organisms 'operate' on the environment; the consequences determine if the behaviour is repeated. Behaviour is instrumental in obtaining desired outcomes or avoiding unpleasant ones.
- Thorndike's Law of Effect: Behaviours followed by satisfying consequences become more likely, and behaviours followed by unsatisfying consequences become less likely.
- Skinner's Experiments: Used the "Skinner Box" (Operant Chamber) with rats or pigeons. Animals learn to press a lever/peck a disk to receive food (positive reinforcement) or to escape/avoid shock (negative reinforcement).
- Key Concepts:
- Reinforcement: Any consequence that increases the likelihood of a behaviour being repeated.
- Positive Reinforcement: Adding a desirable stimulus after a behaviour (e.g., giving praise for a correct answer).
- Negative Reinforcement: Removing an aversive (unpleasant) stimulus after a behaviour (e.g., fastening a seatbelt to stop the annoying beeping sound; taking medicine to remove pain). Note: This is NOT punishment. It strengthens behaviour by removing something bad.
- Punishment: Any consequence that decreases the likelihood of a behaviour being repeated.
- Positive Punishment (Punishment by Application): Adding an aversive stimulus after a behaviour (e.g., scolding for misbehaviour).
- Negative Punishment (Punishment by Removal / Omission Training): Removing a desirable stimulus after a behaviour (e.g., taking away TV privileges for breaking rules).
- Reinforcers: Stimuli that act as reinforcement (Primary: inherently satisfying like food, water; Secondary/Conditioned: learned value like money, grades).
- Schedules of Reinforcement: Rules determining how often a behaviour will be reinforced.
- Continuous Reinforcement: Reinforcing the desired response every time it occurs. Learning is rapid, but extinction is also rapid.
- Intermittent (Partial) Reinforcement: Reinforcing a response only part of the time. Learning is slower, but resistance to extinction is greater.
- Fixed-Ratio (FR): Reinforcement after a specific number of responses (e.g., paid for every 10 units produced). High response rate, pause after reinforcement.
- Variable-Ratio (VR): Reinforcement after an unpredictable number of responses (e.g., gambling, fishing). High, steady response rate, very resistant to extinction.
- Fixed-Interval (FI): Reinforcement for the first response after a specific time interval has elapsed (e.g., checking mail near delivery time). Scalloped response pattern (increase near interval end).
- Variable-Interval (VI): Reinforcement for the first response after unpredictable time intervals (e.g., checking email randomly). Slow, steady response rate.
- Shaping: Reinforcing successive approximations toward a desired behaviour. Used to teach complex behaviours.
- Extinction (in Operant Conditioning): Behaviour decreases when reinforcement stops.
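The difference between fixed-ratio and variable-ratio schedules can be made concrete with a minimal sketch of when each schedule delivers reinforcement (the ratio of 5, the 50-response run, and the uniform draw for the variable schedule are illustrative assumptions):

```python
import random

def fixed_ratio(n_responses, ratio):
    """FR schedule: reinforce exactly every `ratio`-th response."""
    return [i % ratio == 0 for i in range(1, n_responses + 1)]

def variable_ratio(n_responses, mean_ratio, rng):
    """VR schedule: reinforce after an unpredictable number of
    responses averaging roughly `mean_ratio` (uniform draw here
    is an illustrative choice)."""
    rewards = []
    count, next_at = 0, rng.randint(1, 2 * mean_ratio - 1)
    for _ in range(n_responses):
        count += 1
        if count >= next_at:
            rewards.append(True)
            count, next_at = 0, rng.randint(1, 2 * mean_ratio - 1)
        else:
            rewards.append(False)
    return rewards

rng = random.Random(0)
fr = fixed_ratio(50, ratio=5)
vr = variable_ratio(50, mean_ratio=5, rng=rng)
print("FR-5 rewards at responses:", [i + 1 for i, r in enumerate(fr) if r])
print("VR-5 rewards at responses:", [i + 1 for i, r in enumerate(vr) if r])
```

The FR-5 rewards land at perfectly predictable points (5, 10, 15, ...), which is why responding pauses after each reinforcement; the VR-5 rewards are unpredictable, which is why responding stays high and steady and resists extinction.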
C. Observational Learning (Social Learning / Modeling)
- Pioneer: Albert Bandura.
- Core Idea: Learning occurs by observing others (models) and imitating their behaviour. Does not require direct experience or reinforcement.
- Bandura's Bobo Doll Experiment: Children observed an adult model behaving aggressively (hitting a Bobo doll) or non-aggressively towards the doll. Children who saw the aggressive model were significantly more likely to behave aggressively towards the doll themselves.
- Key Processes:
- Attention: Must pay attention to the model's behaviour and its consequences.
- Retention: Must store a mental representation of the observed behaviour (memory).
- Motor Reproduction: Must be physically capable of reproducing the observed behaviour.
- Motivation: Must be motivated to perform the behaviour. Observation of reinforcement/punishment received by the model (vicarious reinforcement/punishment) influences this. Self-efficacy (belief in one's ability) also plays a role.
- Applications: Learning social norms, skills, attitudes; influence of media.
D. Cognitive Learning
- Focuses on the role of mental processes (thinking, understanding, perceiving, problem-solving) in learning. Challenges the purely behaviourist view (S-R connections).
- Insight Learning (Wolfgang Köhler): Studied chimpanzees (Sultan). Learning occurs through a sudden understanding or grasping of the relationships in a problem situation ('Aha!' moment), not just trial-and-error. Involves cognitive restructuring of the problem.
- Latent Learning (Edward Tolman): Studied rats in mazes. Learning can occur without immediate reinforcement and may not be apparent (latent) until there is an incentive to demonstrate it. Rats formed 'cognitive maps' (mental representations) of the maze even without rewards.
E. Verbal Learning
- Learning involving words, symbols, language. Primarily studied in humans.
- Methods of Study:
- Paired-Associate Learning: Learning pairs of items (e.g., capitals-states, foreign words-English equivalents).
- Serial Learning: Learning items in a specific sequence (e.g., a poem, steps in a process). Subject to the Serial Position Effect (better recall for items at the beginning - primacy effect, and end - recency effect).
- Free Recall: Learning a list of items in any order. Organization/clustering aids recall.
- Determinants: Meaningfulness of material, length of list, amount of practice, distribution of practice (spaced practice generally better than massed practice).
F. Skill Learning
- Acquisition of complex motor skills or procedures (e.g., cycling, typing, driving).
- Phases (Fitts):
- Cognitive Phase: Understanding the task, identifying required actions, relying on instructions. Performance is slow, error-prone, requires conscious attention.
- Associative Phase: Practice leads to smoother execution, errors decrease, links between cues and responses strengthen. Less conscious effort needed.
- Autonomous Phase: Skill becomes automatic, requires minimal conscious attention, performance is efficient and smooth. Can often perform other tasks simultaneously.
3. Factors Facilitating Learning
- Motivation: Intrinsic (internal satisfaction) or extrinsic (external rewards). Generally, intrinsic motivation leads to more sustained learning. Needs and goals drive learning.
- Preparedness: Biological predisposition to learn certain associations more easily than others (e.g., taste aversion learning).
- Attention & Perception: Focusing on relevant stimuli is crucial.
- Learner Characteristics: Aptitude, prior knowledge, learning style.
- Task Characteristics: Difficulty, meaningfulness.
- Learning Methods: Spaced vs. massed practice, whole vs. part learning, feedback.
4. Learning Disabilities
- Neurologically-based processing problems that interfere with learning basic skills (reading, writing, math) or higher-level skills.
- Not indicative of low intelligence.
- Examples: Dyslexia (reading), Dysgraphia (writing), Dyscalculia (math). Require specific interventions.
5. Applications of Learning Principles
- Behaviour Modification: Using operant conditioning principles (reinforcement, shaping, extinction) to change undesirable behaviours and promote desirable ones (e.g., in therapy, classrooms, workplaces). Token economies are an example.
- Treating Phobias/Anxiety: Using classical conditioning principles (e.g., systematic desensitization, exposure therapy).
- Education: Classroom management, instructional design.
- Child Rearing: Using reinforcement effectively, modeling appropriate behaviour.
- Health: Promoting healthy habits, managing pain.
- Advertising: Associating products with positive stimuli (classical conditioning).
Multiple Choice Questions (MCQs)
1. In Pavlov's experiments on dogs, the food served as the:
a) Conditioned Stimulus (CS)
b) Unconditioned Stimulus (UCS)
c) Conditioned Response (CR)
d) Neutral Stimulus (NS)
2. Learning that is not immediately demonstrated in behaviour until there is an incentive to do so is known as:
a) Insight Learning
b) Observational Learning
c) Latent Learning
d) Classical Conditioning
3. A student is praised by the teacher every time they answer a question correctly. This is an example of:
a) Positive Reinforcement
b) Negative Reinforcement
c) Positive Punishment
d) Negative Punishment
4. Which schedule of reinforcement leads to the highest and most consistent rate of response and is very resistant to extinction?
a) Fixed Interval (FI)
b) Variable Interval (VI)
c) Fixed Ratio (FR)
d) Variable Ratio (VR)
5. Bandura's Bobo doll experiment is a classic example demonstrating:
a) Operant Conditioning
b) Classical Conditioning
c) Insight Learning
d) Observational Learning
6. Extinction in classical conditioning occurs when the ________ is repeatedly presented without the ________.
a) UCS; CS
b) CS; UCS
c) UCR; CR
d) NS; UCS
7. Köhler's experiments with chimpanzees provided evidence for:
a) The Law of Effect
b) Shaping
c) Insight Learning
d) Negative Reinforcement
8. Remembering items better from the beginning and end of a list compared to the middle is called the:
a) Paired-Associate Effect
b) Clustering Effect
c) Serial Position Effect
d) Law of Effect
9. Fastening your seatbelt to stop the annoying car alarm is an example of:
a) Positive Reinforcement
b) Negative Reinforcement
c) Positive Punishment
d) Negative Punishment
10. The final stage of skill learning, where the skill becomes automatic and requires minimal conscious attention, is called the:
a) Cognitive Phase
b) Associative Phase
c) Autonomous Phase
d) Motivational Phase
Answer Key for MCQs:
1. b) Unconditioned Stimulus (UCS)
2. c) Latent Learning
3. a) Positive Reinforcement
4. d) Variable Ratio (VR)
5. d) Observational Learning
6. b) CS; UCS
7. c) Insight Learning
8. c) Serial Position Effect
9. b) Negative Reinforcement
10. c) Autonomous Phase
Make sure you understand the distinctions between classical and operant conditioning, the different types of reinforcement and punishment, and the processes involved in observational learning. These are frequently tested areas. Good luck with your preparation!