PDF Summary: The Psychology Book, by Catherine Collin, Nigel Benson, et al.
Book Summary: Learn the key points in minutes.
Below is a preview of the Shortform book summary of The Psychology Book by Catherine Collin, Nigel Benson, et al. Read the full comprehensive summary at Shortform.
1-Page PDF Summary of The Psychology Book
Many college students take a Psychology 101 class, but not many of them actually absorb and remember the material. If you like the study of the mind but would rather pass on dry lectures, The Psychology Book offers a broad overview of psychology and how it’s changed over time, exploring its various fields and the major figures and theories in those fields. The authors bring a wide range of expertise to the book, including clinical psychology, business psychology, journalism, writing and editing, and music.
In our guide, we’ll explore the origins of psychology before moving chronologically through six major fields: behaviorism, psychoanalysis, cognitive psychology, social psychology, developmental psychology, and differential psychology. We’ll also add historical context, criticisms and limitations of certain theories, and updated research that supports or contradicts some theories.
(continued)...
Carl Rogers: Person-Centered Therapy
According to the authors, another major player in the field of psychotherapy was American psychologist Carl Rogers, who pioneered a type of psychotherapy called person-centered therapy. Until around the mid-1950s, psychotherapy was largely focused on curing or treating illness, but Rogers felt it didn’t make sense to view mental health as something that could be permanently achieved, nor did he believe anyone could be mentally or psychologically defective. Instead, he believed that creating a healthy human experience is an ongoing process, the goal of which should be to grow rather than to cure.
This involves living in the moment, allowing your experiences to form your personality—as opposed to trying to make your experiences fit your perception of your personality—and taking responsibility for your own life. In person-centered therapy, the therapist lets the patient identify and explore their own problems rather than steering the session in a direction the therapist chooses. Much of modern therapy still follows this model.
(Shortform note: Criticisms of Rogers’ person-centered therapy include the claim that his view of humanity is heavily biased toward the positive and that it reflects naïveté and a failure to understand and account for human evil. Others suggest that his ideas led to the popularization of “psychobabble,” or the use of seemingly meaningless phrases like “go with the flow” or “get in touch with your feelings.” He’s also been criticized for the suggestion that his person-centered therapy was effective for all problems, with some critics claiming that the therapy hasn’t been shown to help with conditions like schizophrenia and autism. Still, some research suggests that person-centered therapy tends to have lower rates of dropouts and relapse than other forms of treatment.)
Cognitive Psychology: The Study of Mental Processes
The authors say that after World War II, psychology’s main focus shifted from behavior and psychoanalysis to the study of mental processes like emotion and memory in what’s referred to as the “cognitive revolution.” Following the development of computer systems and artificial intelligence, some psychologists began to view the brain as an information processor, and this view—along with advances in neuroscience—gave them a model for how to study mental processes directly instead of interpreting them through behavior. This led to the field of cognitive psychology, which remains the dominant approach to psychology today. Below we’ll discuss some of cognitive psychology’s most influential concepts.
(Shortform note: The fields of cognitive psychology and computer science continue to be closely intertwined, as much of the research and advancements in artificial intelligence—the purpose of which is to program computers to have abilities similar to those of the human brain—is based on concepts from cognitive psychology. Universities now offer combined majors in computer science and cognitive psychology because of the close link between the two fields.)
Beliefs and Cognitive Dissonance
Some cognitive psychologists study beliefs, the authors explain. In the 1950s, Leon Festinger studied how people deal with information that clashes with their beliefs. He noticed that people whose beliefs are challenged by evidence experience an uncomfortable feeling called “cognitive dissonance.”
People experiencing cognitive dissonance will seek a way to reconcile the differences between their beliefs and the contradictory evidence. Someone with an open mind may change their beliefs, but people who hold very strong beliefs are likely to distort or ignore contradictory evidence in order to protect their beliefs. This is particularly true when the person has invested a lot in their beliefs (like time, emotion, or money).
Cognitive Dissonance and Misinformation
People’s desire to resolve the cognitive dissonance created by conflicting beliefs and evidence can make them susceptible to misinformation like fake news. The need for cognitive consistency makes us prone to confirmation bias, or the tendency to ignore information that conflicts with our beliefs in favor of information that confirms them. As a result, we accept information that may be inaccurate or even entirely false to avoid changing our beliefs.
Research suggests that you can combat the discomfort of cognitive dissonance—and the tendency toward accepting misinformation—by exposing yourself to many alternative viewpoints, thinking critically and skeptically about all new information, and approaching your beliefs with a sense of humor.
Memory: Storing and Categorizing
According to the authors, other cognitive psychologists studied how the brain organizes and stores memories.
Psychologist Endel Tulving identified two types of long-term memory: episodic memory, which is memory of experiences and events, and semantic memory, which is memory of data and facts. In his experiments, Tulving demonstrated that we group pieces of semantic memory into meaningful categories to remember them better (like grouping the words “red” and “blue” into the category of “colors”). To remember experiences, however, we organize episodic memories according to the circumstances—such as time, location, and sensory input—under which they occurred. This is how a date or a smell might become associated with an experience in our memories.
(Shortform note: Knowing how we organize episodic memories can make it easier to summon them to mind later in life, particularly in cases involving memory impairment. People with dementia can use sensory stimuli to trigger episodic memories from their past, which can be a pleasant and empowering experience for them. Additionally, episodic memory and semantic memory may be less distinct than Tulving’s original theory suggests: More recent theories indicate that semantic memory is also closely linked to circumstantial context such as sensory input. To store semantic memory, the brain may engage in semantic encoding, converting sensory stimuli into meaningful data that’s easier to retrieve later.)
Applications for Legal Systems
According to the authors, some aspects of cognitive psychology became particularly useful in fields like forensics and law. In the 1970s, Israeli-American psychologist Daniel Kahneman (Thinking, Fast and Slow) and Israeli psychologist Amos Tversky observed that people tend to base their decisions on anecdotal evidence rather than statistical data or probability (contrary to popular belief at the time). Their observations had major implications for decisions made by juries.
(Shortform note: Additional research since the 1970s shows there are also other factors that determine how much credence we give to facts versus anecdotal evidence. People are equally likely to believe both anecdotal evidence and facts in discussions about nonmoral issues, but when the topic is centered around a moral disagreement, anecdotal evidence is given greater consideration. This is particularly true when the issue involves harm of some kind. This helps explain the harsh political divide between groups on moral issues like gun control, climate change, and immigration.)
Soon after, American psychologist Elizabeth Loftus demonstrated that people’s memories could be distorted or even fabricated by emotions, other experiences, or the way in which the person is asked to recall them. This became known as “false memory syndrome.” Loftus used these findings to criticize memory retrieval techniques like dream therapy and hypnosis and also to insist that eyewitness testimony is not sufficient evidence to prove a crime in court—even though jurors tend to give it more credence than any other type of evidence. Still, her findings have been adopted in a number of legal systems today.
(Shortform note: There’s debate as to whether eyewitness testimony based on recovered repressed memories should be admissible as evidence in court. Some point out that testimony from individuals who experience delusions is admissible, so it would be inconsistent to exclude recovered memories. Experts who assess the validity of repressed memories in legal settings look for clues that a memory may be false, including the individual’s age at the time of the remembered event, who the individual was with when the memory was recovered, and whether someone like a therapist may have implanted the memory.)
Social Psychology: Society and the Individual
The scope of psychology broadened in the 20th century, leading to the emergence of new fields like social psychology. In the 1930s, some scientists began to study how individuals behave within groups, as well as how those groups impact the individuals within them. They also looked at relationships among individuals in groups and among groups themselves. This gave them insight into how social change occurs, humans’ tendency to conform, and the psychology behind obedience, among other things. We’ll discuss some of the most prominent ideas below.
Field Theory and Social Change
The authors say that one of the earliest figures in social psychology was German-American psychologist Kurt Lewin. He developed field theory, which focuses on how a group or individual’s psychology influences a situation. He established a model of change suggesting that anyone who wants to create change in a person or group must understand the psychological and environmental factors that influence them.
For example, if a doctor’s office wanted to switch from in-office visits to remote telehealth appointments, they would need to understand many factors, including whether patients benefit from physically being in the office, whether patients and staff have the technology and knowledge needed for video visits, and whether patients can measure their own vital signs or if they need a professional to do it for them. Failing to account for any of these factors could prevent the desired change from being effectively implemented.
Lewin’s model involves three steps for effecting change:
1) Unfreezing, in which the people involved are made to understand that change is needed (which is difficult because people tend to be naturally resistant to change)
2) Changing, in which the new system is put into place (which in groups requires strong leadership and support, and in individuals requires willingness to take on a new mindset)
3) Freezing, in which the new system becomes routine
Other Models of Change
Lewin’s model has been criticized for discounting the agency of humans in the process of change, with critics saying, for example, that humans must be willing to accept the need to change and can’t be coerced into it. Since Lewin established his model, many other models have followed that do account for this human factor. Some are designed to facilitate organizational change, like John Kotter’s 8 Steps for Leading Change. Kotter’s model involves creating a feeling that change is urgently needed (similar to unfreezing), building a large group of volunteers to lead the change, getting rid of barriers to change, and establishing the change (similar to freezing).
Other models are geared toward catalyzing personal change, such as Grant Van Ulbrich’s “SCARED-SO WHAT” model. This consists of two major stages, each characterized by six elements (represented by the acronyms SCARED and SO WHAT). The SCARED stage begins with surprise at the need for change and ends with a decision to either move forward or reject the change (if you decide to reject the change, you’re likely to repeat the SCARED stage until you can make a decision to move forward).
The SO WHAT stage begins with a strategy for creating the change and ends with taking ownership of the change you’ve established. Both this model and Kotter’s model are much more detailed and actionable than Lewin’s model but still draw from some of the same principles of field theory.
Conformity and Obedience
The authors note that while Lewin focused on change, other scientists like Polish-American psychologist Solomon Asch studied humans’ tendency toward conformity. He conducted an experiment known as the Asch Paradigm, in which participants were placed into groups and asked to identify which of three lines on one card matched the length of a line on another card. Each participant completed 18 such questions, or “trials,” and didn’t know that the other members of the group were actors instructed to give incorrect answers.
Asch found that 75% of participants conformed to the group’s incorrect answers on at least one trial but that none of them conformed on every trial. He also found that 26% of participants never conformed, always giving the right answer despite the group’s unanimous disagreement, which suggests that certain people are less prone to conformity than others. Additionally, he found that if just one of the actors provided a correct answer, the actual subjects were much more likely to also provide the correct answer—this suggests that when even a small minority diverges from the group, others become much less likely to conform as well.
Why Do We Conform?
Others have further explored the factors that affect conformity in groups. Some research suggests that people who don’t conform to a group are often ostracized by that group and that the pressure to conform tends to be highest during adolescence, when the desire to be liked by peers is at its peak.
Some researchers have also distinguished between different types of conformity. For instance, informational conformity occurs when the conformist believes others have more information and are thus better equipped to make decisions about how to behave. The type of conformity revealed by Asch’s Paradigm, however, is called normative conformity, which occurs when the conformist agrees with the group even though they know the group is wrong. People often engage in normative conformity to show cohesiveness or avoid conflict, but it can have dangerous consequences when it leads individuals to look the other way while other members of their group do things they know are wrong.
Zimbardo’s Prison Experiment
According to the authors, one of the leading questions that arose in social psychology after World War II was how ordinary people could be convinced to carry out acts of extreme cruelty, as occurred during the Holocaust. In 1971, American psychologist Philip Zimbardo conducted further research into conformity and what prompts people to behave cruelly with his famous Stanford Prison Experiment. For this experiment, Zimbardo placed 24 male college students who were deemed mentally healthy into a fake prison scenario. The students were randomly assigned the role of either prisoner or guard, and the guards were given total control over the prisoners.
All of the guards quickly began abusing their authority and mistreating the prisoners, and conditions grew so bad that the experiment had to be ended early after less than a week. Zimbardo’s experiment suggests that people will adjust their behavior to fit social roles that they’re assigned, and that even ordinary, mentally healthy people will behave cruelly when given positions of total authority.
Implications of Obedience and Conformity for Nazism
Due to the ethical issues that arose, Zimbardo’s experiment had to be cut short and couldn’t be replicated, raising questions about how representative his sample population was. One commonality the participants shared was that they had elected to sign up for a study about life in prison rather than a more general psychological study. Additional research suggests that people who chose to sign up for such a prison study were more likely to be aggressive, to approve of social hierarchies and submit to authorities, and to view themselves as superior. They were also less likely to be empathetic or altruistic than the general population.
According to some psychologists, this research suggests that Nazis may not have been as “ordinary” as they claimed, that people with cruel and antisocial tendencies may have been more likely to become Nazis, and that the claim that they were “just following orders”—also known as the Nuremberg defense—can’t adequately explain or excuse the atrocities they committed.
Developmental Psychology: The Mind From Infancy to Adulthood
The authors explain that prior to the 1930s, the general belief was that children are just miniature versions of adults and that the main psychological difference between children and adults was a lack of knowledge due to age. This belief was challenged by Swiss psychologist Jean Piaget, who argued that children’s cognitive processes are different from those of adults. This led to the field of developmental psychology, or the study of how the brain changes over the course of a lifetime, including the forming of attachments, learning, and developmental disorders, among other things. We’ll discuss some of the most influential concepts and figures below.
(Shortform note: Because the historical view of children was that they were miniature adults, the concept of childhood wasn’t established in Western society until around the 1600s. Before that, children as young as seven were treated as small adults, with the accompanying expectations for labor. Marriage and reproduction before the age of 18 were also the norm, particularly for girls. The emergence of developmental psychology helped distinguish children from adults, and some say this distinction has led to greater protections of children’s rights and safety.)
Piaget: The Father of Developmental Psychology
Piaget theorized that children pass through stages of cognitive development, using their senses and natural curiosity to learn through trial and error. The four stages he identified were:
1) The sensorimotor stage (birth to age 2). Children use their senses and physical actions to learn about their environment, and they’re unable to understand perspectives other than their own.
2) The pre-operational stage (ages 2-7). Children begin to understand things like symbols and language.
3) The concrete operational stage. Children gain the ability to think logically about physical objects.
4) The formal operational stage. Children learn to conceptualize ideas and can reason about abstract concepts.
(Shortform note: The authors don’t give an age range for Piaget’s third and fourth stages, but according to other sources, the concrete operational stage ranges from 7 to 11 years old and the formal operational stage ranges from 11 years old through adulthood.)
Piaget’s theories shifted the focus of the education system from adult-centered (trying to teach children to be adults) to child-centered (meeting children at their level of development and encouraging individuality, creativity, and exploration).
Criticisms and Misinterpretations of Piaget’s Theory
Some have criticized Piaget’s theories for overestimating the abilities of adolescents, underestimating the abilities of infants, and reflecting potential bias because Piaget studied his own children as subjects. There are also criticisms of the stages themselves: Some suggest that not everyone moves through all four stages and that some people may stay in the concrete operational stage throughout adulthood. Some also dispute the idea that children in the first two stages can’t understand the perspective of others.
Additionally, some have misinterpreted Piaget’s suggestion that education should be child-centered by assuming that direct instruction should never be used, despite cognitive science research suggesting the contrary.
Learning Through Social Interaction and Modeling
The authors explain that later researchers adapted and expanded on Piaget’s initial ideas. While Piaget believed that children learn mostly through interaction with their environment, Russian psychologist Lev Vygotsky felt that learning was more dependent on social interaction. He also developed the theory of the zone of proximal development: the idea that children need help from an adult or older child to learn how to do certain things. The zone of proximal development is the range of learning that’s accessible to a student with such assistance (as opposed to what the learner can do without assistance or what they can’t do even with assistance). This theory prompted a greater emphasis on cooperative learning in the education system.
(Shortform note: Vygotsky’s theory of learning and the zone of proximal development have led to an emphasis on the practice of scaffolding in education. Educators scaffold by providing activities to children that are challenging but within their capabilities, while controlling the factors that are beyond the child’s capabilities. This type of teaching has to be highly individualized and requires a deep knowledge of a child’s abilities and cognitive level. It also requires a large time commitment for planning, which can make it difficult to implement effectively in a classroom setting.)
Canadian-American psychologist Albert Bandura was also interested in how children learn, particularly how they pick up behaviors such as aggression, which he felt couldn’t be adequately explained by behaviorist theories like operant conditioning. He believed that humans learn behavior by watching others and mimicking their actions, an idea known as social learning theory (in contrast to reinforcement theory in behaviorism, which suggests that people learn from rewards and punishments).
(Shortform note: Subsequent experiments that Bandura carried out demonstrated that punishment may play a role in how likely someone is to mimic a behavior they’ve seen modeled: When children saw someone behaving aggressively and then being punished for it, they were less likely to imitate the behavior. However, when asked to mimic the behavior, they did, suggesting that they had learned the behavior; seeing someone punished for it simply taught them that they shouldn’t engage in it.)
Differential Psychology: The Study of Human Differences
The field of differential psychology, which arose in the 20th century, examines differences in people’s personalities and intelligence. This has led to modern-day IQ and personality tests, trait theory, and research into personality disorders. We’ll discuss some of the most prominent figures and concepts below.
Differences in Intelligence
In the late 1800s, German scientist Wilhelm Wundt theorized that all living creatures have a “mental life,” which includes intelligence. The authors explain that he formed the idea of an intelligence quotient (IQ), which led him and other scientists to seek to measure a creature’s intelligence.
Later, in the 1890s and early 1900s, French psychologist Alfred Binet observed that it was only possible to measure someone’s mental abilities in a specific time and context, and that these abilities could change over time, meaning intelligence was not fixed at any particular level. Binet and fellow scientist Théodore Simon devised a set of IQ tests called the Binet-Simon Scale to assess a child’s abilities based on the abilities of other children of the same age.
While the Binet-Simon Scale was originally intended to identify children who needed accommodations for disabilities, later scientists co-opted the test and used it instead to identify children who were considered genetically inferior (despite Binet’s insistence that intelligence was not hereditary or fixed) or to single out students deemed suited only for menial work as adults. Still, the Binet-Simon Scale remains the basis for most modern IQ tests.
Flaws and Misuses of Intelligence Testing
The idea that a person’s intelligence can be measured with a single IQ score is considered by some to be inherently flawed. Wundt acknowledged that any assessment of intelligence would vary depending on a person’s culture, and because he felt it was impossible to fully account for these differences, he believed there was no way to study intelligence in a laboratory setting—which, in experimental psychology, suggested it couldn’t be studied at all.
Some modern critics of intelligence tests note that the tests’ validity seems to vary with the age of the subjects and assert that they’re poor predictors of academic and intellectual performance in people in early adulthood onward. They also suggest that classifying children in schools based on their intelligence creates a self-fulfilling prophecy in which “gifted” students are given more enriching and challenging experiences that aren’t offered to their peers, leading to an even wider achievement gap between the two groups.
Also, particularly in the mid-20th century, some took the misuse of the Binet-Simon Scale to deadly levels. In 1924, Virginia legalized the forced sterilization of low-IQ individuals, and during the Holocaust, it became legal in Germany to murder children with low IQs or other disabilities.
Decades later, when differential psychology emerged as its own distinct field, other scientists continued to refine ideas of intelligence. In the 1960s, American psychologist J.P. Guilford suggested that the Binet-Simon Scale was flawed because it couldn’t measure creativity, which he considered an essential part of intelligence and mental ability. He identified two types of thinking: convergent thinking, which encompasses things like memory and solving simple problems (and which could be measured by IQ tests), and divergent thinking, which refers to creative problem-solving. Guilford designed tests to assess divergent thinking.
Creativity and Neurodivergence
Guilford’s theories of creative intelligence and convergent and divergent thinking are especially relevant for people with neurodivergent conditions, whose difficulties with convergent thinking often cause them to be labeled as unintelligent. Guilford’s theories suggest that their intelligence is not necessarily lesser but simply different. Neurodivergence refers to people whose brains naturally process information differently from the way most people—neurotypical people—do. Neurodivergent conditions include autism, ADHD, OCD, and many others.
As the term suggests, neurodivergent people, and especially autistic people and people with ADHD, have been shown to be particularly adept at divergent thinking, despite often scoring lower on tests of convergent thinking. They tend to score higher on tests like those developed by Guilford. Some say thinking of intelligence as only convergent thinking leads to a cultural perception of neurodivergent people as intellectually inferior, which can lead to institutional bias, a denial of academic and employment opportunities, and other types of ableism and marginalization.
Differences in Personality
Other scientists studying differential psychology focused on differences in personality, the authors explain. American psychologist Gordon Allport, one of the pioneering researchers in personality theory, identified 4,500 adjectives that could describe personality traits. Analyzing these further, he identified three different types of traits: cardinal traits, which are essential to a person and shape how they live their life; common traits, which develop from the influence of others and help shape behavior; and secondary traits, which depend on context and can be changed. His work led other researchers to develop personality tests that heavily influenced the ones we still use today.
(Shortform note: Though highly influential, Allport’s theory has been criticized for a lack of supporting empirical evidence. Some critics say trait theory isn’t an accurate way of predicting behavior because Allport’s conclusions were based largely on observations and generalizations, and the traits he identified weren’t always consistent in the behavior of people who had them. Some critics also suggest that the theory’s focus on the present means it offers no explanation of personality development or of how traits change over time, and that it doesn’t account for the way certain traits can be present to varying degrees in different people.)