What is information chunking? What does it have to do with your ability to remember what you learn?
Information chunking is the process in which your memories get consolidated into “chunks” of related information in your brain. As your brain’s file management system, it’s a vital part of the learning process. You can use chunking to learn new problem-solving techniques that will come in handy—especially as you learn math and science.
Continue reading to learn about information chunking.
Understanding Memory and Information Chunking
According to Oakley, you use both your working memory and your long-term memory to learn math and science, so it’s helpful to understand the distinction between them. Information chunking is the process in which your memories get consolidated into “chunks” of related information in your brain. From Oakley’s writing, we infer that chunking helps lay the foundation for understanding how your memory works because your working and long-term memory depend on your brain’s ability to organize information into chunks.
Working Memory Is Your Brain’s Workspace
As Oakley points out, working memory holds the information that your mind is actively processing. You use it to solve problems in math and science when you focus on the problem and think about the principles you would use to solve it. You also use it when you take in new information and try to make sense of it.
While Oakley doesn’t address this explicitly, there seems to be a strong connection between working memory and focused-mode thinking, since working memory holds information that you are focusing on. She does mention that people who have larger working memories tend to be more naturally disposed to focused-mode thinking, which reinforces this idea.
Your working memory has a limited capacity. On average, it can only hold about four “chunks” of information at a time, although this varies from person to person.
(Shortform note: Not all sources agree as to exactly how many “chunks” of information an average person’s working memory can hold. The number four comes from Cowan’s research, which was published in 2001 and largely supplanted an earlier estimate of seven based on Miller’s work in 1956. Another study by Gilchrist and Cowan determined that working memory capacity varies from about two to about six units of information, depending on the person. In 2003, Gobet and Clarkson published a study revising the estimate to between two and three. The important takeaway is that your working memory can only hold a few units of information.)
Oakley asserts that if your working memory cannot hold all the information it needs to solve a problem, mental “choking” occurs, preventing you from solving the problem. The “chunking” process helps to reduce your risk of choking by condensing information, leaving room for your brain to load all the concepts you need at once.
Oakley also points out that your working memory requires continual input of energy to retain information. She explains that chemical reactions in your brain continually clear your working memory to prevent it from filling up with trivial information, and they eventually erase anything your mind isn’t using.
She does not discuss whether these reactions play any role in alternating between focused and diffuse modes of thinking. However, if focused mode operates on the information in your working memory, and switching to diffuse mode is complete when the thought disappears from your working memory, then the speed of these reactions arguably determines how fast you can switch from focused mode to diffuse mode.
|Perspectives on Choking|
In cognitive psychology, most research on “choking” has focused on “choking under pressure,” where your working memory cannot hold enough information because one or more of your working memory slots are filled up with anxiety.
New research suggests that only people with higher-than-average working memory capacity are subject to choking under pressure. It is as if these people have an extra working memory bank that gets filled up or cannot be accessed under stress, while regular working memory remains accessible, both for them and for people who don’t have the extra memory bank.
Can your brain still choke if you’re not under pressure? In principle, yes. Based on Oakley’s description of choking, some problems might simply require enough information to solve that you can’t comprehend the whole solution until your brain has had a chance to condense some of the information into more compact chunks.
In practice, studies of information overload have proved inconclusive: It seems that your brain is naturally capable of sorting through large amounts of information to pick out what is relevant for solving a problem, provided you have time to do so.
Long-Term Memory Is Your Brain’s Library
Oakley explains that long-term memory stores information for future reference. To learn math and science, you have to take the new concepts that you’ve received into working memory and save them to your long-term memory so you can recall them when you need them (such as on an exam). We’ll talk more about how you transfer new information from working to long-term memory in the next two parts, but for now, let’s go over your long-term memory’s capabilities.
Oakley explains that, unlike your working memory, your long-term memory can store billions of chunks of information. Furthermore, the information does not dissipate as it does in working memory. However, chunks that you don’t access repeatedly may get buried under other chunks and become difficult to retrieve.
|Diffuse Mode Memory|
In light of this contrast with working memory, how does long-term memory relate to the two modes of thinking? Oakley doesn’t discuss this. Focused mode can retrieve information from long-term memory and load it into working memory, where it can make use of it. But does diffuse mode operate directly on data in your long-term memory? Or does diffuse mode have its own unconscious version of working memory that absorbs information from your focused-mode working memory so diffuse mode can operate on it later?
Given Oakley’s assertion that diffuse mode eventually runs out of information to process, the second explanation seems to fit better. Furthermore, Gilchrist and Cowan have identified that there are both conscious and unconscious aspects of working memory.
Oakley explains that long-term memory is susceptible to a phenomenon called “knowledge collapse.” This happens when your brain has to reorganize the chunks in your long-term memory to restructure your understanding of something. As you learn a subject, you’ll reach certain points where some restructuring is necessary for your understanding to continue to grow. While the restructuring is going on, you may feel as though everything you knew about the subject has suddenly evaporated, but once it is complete, your understanding will be stronger than ever.
As an analogy for knowledge collapse, picture a warehouse where boxes are stacked on top of each other in orderly piles. Then the warehouse manager decides to install shelving. The boxes have to be taken down and moved out of the way to install the shelves, and so the inside of the warehouse looks like complete chaos during the installation. Yet once the shelves are installed and the boxes are stacked on the shelves, they are more orderly and more accessible than ever.
Elsewhere in the book, Oakley affirms that persistence is more important than innate intelligence when it comes to learning a difficult subject. If carried to its logical conclusion, the concept of knowledge collapse reinforces this: Since knowledge collapse is a natural part of the learning process, it’s normal to struggle with new material and periodically feel like you don’t understand it. Persevere through periods of knowledge collapse.
|The Origins of Knowledge Collapse Theory|
Oakley may have coined the phrase “knowledge collapse,” as it does not appear in other available literature about this phenomenon. However, she didn’t originate the theory of knowledge levels temporarily declining at certain points of the learning process. She cites the work of Fischer and Bidell, who discuss in detail the different phases of learning.
Fischer and Bidell observed that when you learn a skill, your progress follows a pattern that depends on your initial level of proficiency. If you’re completely new to the subject, chances are your knowledge of it will require such frequent restructuring that your progress seems chaotic: As soon as you think you are starting to understand it, you learn something new that shatters your current understanding of the concept. For example, suppose you are suddenly transported to a foreign country where nobody speaks English and you don’t know the local language. At first, you’ll just be guessing blindly to figure out what words mean as you try to communicate with people.
Fischer and Bidell report that as you reach an intermediate skill level, your progress follows a “scalloped pattern” where growth is repeatedly followed by sudden decline and then rapid regrowth. This is the phase of learning where their study best fits Oakley’s description of knowledge collapse.
Finally, according to Fischer and Bidell, once you become an expert in something, your progress hits a plateau, where your proficiency is high and remains relatively constant over time. At this point, you no longer experience knowledge collapse in the sense that Oakley describes it, although Fischer and Bidell still observed occasional tiny dips in the plateau of proficiency. Turning back to our language example, let’s say you now speak the language fluently. One of these knowledge dips might be hearing a word that you don’t know and having to look it up in the dictionary.
Information Chunking Is Your Brain’s File Management System
Oakley explains that everything you know is stored in your memory as “chunks” of information. The information chunking process condenses information so that it will fit in your working memory, and gives it structure so that it can be organized in your long-term memory. Let’s look at what a chunk is, and then we will discuss how your brain builds chunks.
According to Oakley, a “chunk” physiologically consists of a group of neurons connected by synapses that fire in sync with each other. Learning is the process of forming these neural connections.
(Shortform note: Oakley references a study by Guida et al. in which scientists used neuroimaging to study both novices and experts learning or performing certain tasks. They observed distinct patterns in brain activity that differed between the novices and the experts: In the case of novices, researchers correlated these patterns to chunks forming in working memory; in the case of experts, they correlated these patterns to chunks being retrieved from long-term memory. By mapping the synchronized activity of the neurons that made up these chunks, this study appears to provide the basis for Oakley’s physiological description of a “chunk.”)
Conceptually, a “chunk” consists of a group of ideas or bits of information that are bound together in your mind through meaning. As you take in information, your brain tries to assemble it such that it makes sense: If the new information relates to something you already know, your brain makes the connection. If not, it still looks for unifying themes that make the facts easier to keep track of. Oakley asserts that the more deeply chunked something is in your mind, the more intuitive it becomes, and the less space it takes up in your memory.
As an example of the information chunking process, think about learning to drive a car with a manual transmission: To shift gears, you have to push in the clutch, move the stick to the right position, and keep the engine RPMs in the right range while you let the clutch back out. You also have to control the steering wheel, operate the brake, keep track of your speed, and so forth. However, once you learn to drive a stick-shift, you don’t think specifically about all these little tasks. You just think, “I need to drive to the grocery store,” and pretty soon you’re cruising down the road in third gear, without any conscious recollection of how you shifted from first gear into second or second into third. This is because your brain has condensed all the little tasks into a single chunk of information that you can use intuitively.
|The Discovery of Chunking|
The discovery of chunking is generally attributed to George Miller, who studied working memory capacity in the 1950s. Miller observed that people could only distinguish between sensory inputs to a finite degree of precision. Specifically, most people could only distinguish between about seven levels of a given stimulus, such as tones of sound or shades of color.
Miller initially tried to describe his results in terms of digital information theory. When a computer measures a sensor input, it uses a series of electronic comparators wired to a digital memory bank. This generates a binary number in the memory bank that corresponds to the value of the measurement. The larger the memory bank, the more comparators we can wire to it and the more precise the measurement can be.
The size of a digital memory bank is expressed in “bits,” where a “bit” is a memory slot that can hold either a one or a zero. When Miller applied this analogy to humans’ working memory, he calculated that our brain’s sensory memory bank needed to hold about 2.8 bits of information to distinguish between seven levels of sensory input (since the number of distinguishable levels equals two raised to the power of the number of bits, and 2^2.8 ≈ 7).
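Miller’s arithmetic can be checked directly: solving levels = 2^bits for bits gives bits = log2(levels). A quick sketch in Python (the variable names are illustrative):

```python
import math

# Miller observed that people distinguish about seven levels of a stimulus
levels = 7

# If levels = 2 ** bits, then bits = log2(levels)
bits = math.log2(levels)
print(f"{bits:.1f} bits")  # → 2.8 bits
```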
Miller also observed that people could only keep about seven items in mind at once, such as a string of seven numbers, letters, or words. This provided another measure of working memory capacity that gave the same result. However, in this case, Miller could not say that working memory had a capacity of “2.8 bits,” because letters and words take more bits than this to encode in binary.
Moreover, a string of seven words is made up of more than seven letters, and contains more information than a string of seven letters. Thus, if measured in terms of binary bits, the capacity of working memory seemed to vary, depending on what kind of information it held.
Miller coined the term “chunks” to describe the units of information in working memory, since “bits” didn’t provide a consistent measure: Based on Miller’s observations, your working memory could hold about seven chunks, but an almost unlimited number of bits.
To explain this, Miller theorized that your brain could organize information into increasingly complex chunks over time. As you learn something, your working memory quickly fills up with information, but once you process this information into a coherent chunk, it takes up only one slot in your working memory instead of all of them. Then, you can take in more new information and append it to the same chunk. In this respect, Oakley reiterates Miller’s description of how chunking makes room for more information in your working memory.
Oakley and Miller both describe chunking as a mental process that occurs naturally as part of the learning process. However, others have explored applications of artificial chunking, both for humans and for computers.
In his book, Moonwalking with Einstein, Joshua Foer describes chunking as a mnemonic device: You can remember longer strings of numbers by breaking them up into groups of digits. This could be called “arbitrary chunking,” since you arbitrarily impose the grouping and artificially create a unifying meaning for each chunk of numbers.
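This digit-grouping trick is easy to picture in code. Below is a minimal sketch (the function name and group size of three are illustrative choices, not anything from Foer’s book):

```python
def chunk_digits(number_string, size=3):
    """Split a string of digits into fixed-size groups (arbitrary chunking)."""
    return [number_string[i:i + size] for i in range(0, len(number_string), size)]

# Fifteen digits become five memorable three-digit chunks
print(chunk_digits("314159265358979"))  # → ['314', '159', '265', '358', '979']
```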
Meanwhile, computer scientists Laird, Rosenbloom, and Newell took the concept of chunking and applied it to the problem of developing machine learning and artificial intelligence. They started with a problem-solving algorithm that was able to process specified goals and generate subgoals that it could solve. They then programmed the computer to save “chunks” consisting of subgoals that it had solved, packaged with their solutions. The computer could refer back to these chunks when solving new problems that had similar goals or subgoals, instead of generating the same solutions all over again.
Without chunking, the problem solver took 1731 processor operations to solve a certain benchmark problem. With their chunking algorithm, the computer was able to solve the same problem with only 485 operations on the first run, and only 7 operations when they ran the same problem again. Thus, it looks like artificial chunking has the potential to dramatically improve computers’ problem-solving capabilities, especially for repetitive problem solving.
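The core idea (save each solved subgoal with its solution, then reuse it) amounts to caching, or memoization. Here is a simplified sketch of that idea; the names, the cost model, and the 10-operation “solve” cost are all hypothetical, and this is not Laird, Rosenbloom, and Newell’s actual algorithm:

```python
# Sketch of chunking as subgoal caching (hypothetical names and costs;
# a simplification of the idea, not the original algorithm).
chunk_store = {}   # maps a subgoal to its saved solution ("chunk")
operations = 0     # counts how much work a solve costs

def solve_subgoal(subgoal):
    """Solve a subgoal, reusing a cached chunk when one exists."""
    global operations
    operations += 1                   # one lookup counts as one operation
    if subgoal in chunk_store:
        return chunk_store[subgoal]   # chunk hit: no further work needed
    operations += 10                  # simulate expensive problem solving
    solution = f"solution-for-{subgoal}"
    chunk_store[subgoal] = solution   # save the chunk for next time
    return solution

def solve_problem(subgoals):
    """Solve a problem by solving each subgoal, and report the total cost."""
    global operations
    operations = 0
    results = [solve_subgoal(g) for g in subgoals]
    return results, operations

_, first_cost = solve_problem(["A", "B", "C"])   # every subgoal solved fresh
_, second_cost = solve_problem(["A", "B", "C"])  # every subgoal is a chunk hit
print(first_cost, second_cost)  # → 33 3
```

The rerun is cheap for the same reason the researchers’ second run took only 7 operations: the work is already packaged as chunks, so solving collapses into retrieval.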
Simplify Concepts to Expedite Chunking
Oakley explains that the more you distill a concept down to its essence, or figure out what it means at an intuitive level, the more tightly that meaning binds the chunk together in your mind. To help this process along, Oakley recommends trying to simplify concepts that you’re learning enough that you could explain them to someone who has little background in the subject. Because it helps with the information chunking process, this exercise often enhances your own understanding. She notes that many teachers of her acquaintance say they didn’t really understand their subjects until they began preparing to teach them, and consequently simplified the subjects for the benefit of their students.
(Shortform note: In Ultralearning, Scott Young describes a similar technique for learning a new concept by pretending to explain it to someone else, which he attributes to physicist Richard Feynman. According to Young, the “Feynman Technique” consists of writing out an explanation of the concept you are trying to understand as if you are explaining it to someone else. If, at any point, you don’t feel you can explain it clearly, you review your sources to clarify your understanding, and then finish writing your explanation.)
|The Link Between Teaching and Recall|
Four years after Oakley wrote A Mind for Numbers, a study determined that the effectiveness of teaching as a learning tool is closely related to the practice of intentional recall. Researchers concluded that the element of recall involved in teaching may be what makes teaching so effective for learning.
Given the results of this study, perhaps the act of teaching improves your understanding of something by making it more accessible in your memory, rather than by simplifying it.
Build Chunks From the Bottom Up
According to Oakley, information chunking is a “bottom-up” learning process because it starts with the details and progresses towards the big picture: First you take in the information. Then you connect the information by understanding the underlying concept. Finally, you establish the context of the chunk, so your memory can file it where it will be most relevant. As Oakley points out, the more you understand why something works, the more easily you can understand how it works. Understanding how the chunk applies across a variety of situations also establishes multiple neural pathways to it, making it accessible when you need it.
As an illustration of the bottom-up chunking process, consider our example of learning to drive a stick-shift. The different gear positions and the method of letting out the clutch so you don’t stall the engine are pieces of information. As you develop these skills and learn to coordinate them, they become united in a chunk embodying the concept of shifting gears. As you gain experience, you develop a better understanding of when to shift into what gear (the context of the chunk). For example, you might normally go 55 mph in fifth gear, but use fourth gear at this speed if you’re driving uphill or pulling a trailer.
(Shortform note: Joshua Foer’s use of chunking to remember longer strings of numbers by breaking them down into groups of digits illustrates that “bottom-up” does not always mean continually building up. Sometimes, you have to disassemble information before you can start making connections between pieces of it and building the chunks back up. This is still part of the bottom-up chunking process.)
When you learn a new problem-solving technique, Oakley suggests following these steps to make sure the new technique is strongly chunked in your memory:
- Step 1: Work through a representative problem, making sure you understand the purpose of each step.
- Step 2: Work the same problem again. Repetition is important for the learning process.
- Step 3: Switch to diffuse-mode thinking by taking a break.
- Step 4: Work the same problem again before going to bed.
- Step 5: Get some sleep.
- Step 6: Work the same problem again as soon as possible after you wake up.
- Step 7: Go for a walk or do something physically active and mentally review the key steps involved in the problem while you walk.
- Repeat these steps for a few other problems to further strengthen the chunk and build context.
|Chunking Strategy: A New Approach to Homework?|
Oakley’s process as outlined above emphasizes learning a technique thoroughly by completing just a few problems, each of which you work through four times. By contrast, many math and science curriculums teach problem-solving techniques through a larger number of problems that you only solve once.
If Oakley’s technique represents the most efficient way to learn problem-solving skills, then perhaps many math and science professors could simultaneously help their students learn better and reduce their own workload by assigning about a fourth as many homework problems, and advising their students to revisit each problem as Oakley does. As yet, it seems this approach to homework has not been formally studied, or at least conclusive studies have not yet been published.
Build Context for Chunks With Top-Down Learning
As we’ve noted, chunking is a bottom-up learning process, but top-down learning also contributes to building your brain’s library of chunks. Oakley explains that top-down learning follows a similar sequence to the chunking process, but from the opposite direction: First, you learn the context in which a concept applies, then, you learn the concept, and finally, you flesh out the concept with detailed information. In other words, you progress from the big picture down to the details.
For example, Oakley suggests skimming a book or chapter before you read it in detail so that your mind organizes your expectations of what you will be learning before you fill in the details of each new concept. This is analogous to building a mental filing cabinet with a separate drawer where your brain can file each new piece of information when you go back and read it in detail. You absorb information better when your brain already has a place to put the chunk or an existing chunk to append it to.
(Shortform note: For this reason, skimming a book before you read it in detail is a common strategy taught in speed reading courses. In How to Read a Book, 20th-century philosopher Mortimer Adler presents it as a way to understand the crux of a book within 15 minutes and better remember what you read.)
When should you use top-down versus bottom-up learning strategies? Oakley doesn’t say, but she does emphasize that both are important for learning math and science. From this, and the advantages she lists for top-down learning, we infer that it is efficient to use top-down learning whenever possible, but that a top-down approach is not always available.
|Metalearning: An Extended Form of Top-Down Learning|
In Ultralearning, Scott Young takes the concept of top-down learning to a higher level. He discusses “metalearning,” a method for learning a specific subject. Metalearning is a form of top-down learning that starts at the highest level, building context for the subject itself and for the major concepts that make it up before working down to any of the details.
To practice metalearning, Young recommends you do the following before you begin to study a subject:
- Understand your motivation for learning it.
- Investigate the approaches that other people have used to gain proficiency in it.
- Make a list of the concepts and skills that you will need to learn to master the subject.