Fellow Johns Hopkins Cognitive Science department alumna (and friend) Tamara Nicol Medina just made the news for some really interesting research findings. In short, kids learn language not by gradually homing in on correct word associations over time but through more focused moments of insight.
One of the more interesting aspects of the research is the logic the research team had for doubting what was assumed by many to be key for language learning:
"The current, long-standing theory suggests that children learn their first words through a series of associations; they associate words they hear with multiple possible referents in their immediate environment. Over time, children can track both the words and elements of the environments they correspond to, eventually narrowing down what common element the word must be referring to."

"This sounds very plausible until you see what the real world is like," Gleitman said. "It turns out it's probably impossible."

...

"The theory is appealing as a simple, brute force approach," Medina said. "I've even seen it make its way into parenting books describing how kids learn their first words."

A small set of psychologists and linguists, including members of the Penn team, have long argued that the sheer number of statistical comparisons necessary to learn words this way is simply beyond the capabilities of human memory. Even computational models designed to compute such statistics must implement shortcuts and do not guarantee optimal learning.

"This doesn't mean that we are bad at tracking statistical information in other realms, only that we do this kind of tracking in situations where there are a limited number of elements that we are associating with each other," Trueswell said. "The moment we have to map the words we hear onto the essentially infinite ways we conceive of things in the world, brute-force statistical tracking becomes infeasible. The probability distribution is just too large."

See the full article here. If you're interested in how kids learn language, this is a must-read.
The research conclusions themselves can deliver one of those "Aha!" moments that immediately sink in once you think about them. The conclusions touch on several key issues, including:
- Losing information (memories) can sometimes be a very good thing. As researcher Lila Gleitman said, "It's the failure of memory that's rescuing you from remaining wrong for the rest of your life."
- The human mind has many strategies for working around limitations in acquiring, processing, and storing information.
- The more artificial intelligence takes the brain's limitations into account, the better it will be able to mimic how the human mind works.
Later, I'll share some fascinating visual cognition research that has provided surprising insights into the limitations of human cognitive processing, even when we feel we are "fully" processing the world around us.