In sum, everyone has an “inner pickpocket” lurking inside them.
This finding not only explains why this form of thievery is effective, but it also serves a practical purpose: it allows a person, for instance, to judge how an item will feel on them just by looking at it.
According to the researchers, this ability stems from how humans handle the steady stream of data arriving from their senses. The brain divides that nonstop flow of information into discrete chunks for easier management.
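This kind of chunking can be illustrated with a toy sketch (my own illustration, not the study's actual model): treat the sensory stream as a string of symbols and place a chunk boundary wherever the transition from one symbol to the next is statistically unreliable.

```python
from collections import Counter

# Toy sketch of statistical chunking: a boundary is placed wherever the
# transition probability P(next | current), estimated from the stream
# itself, drops below a threshold. Names and threshold are illustrative.

def segment(stream, threshold=0.8):
    pairs = list(zip(stream, stream[1:]))
    pair_counts = Counter(pairs)            # counts of each transition
    first_counts = Counter(stream[:-1])     # counts of each "current" symbol
    chunks, current = [], [stream[0]]
    for a, b in pairs:
        p = pair_counts[(a, b)] / first_counts[a]   # estimated P(b | a)
        if p < threshold:                           # unreliable transition:
            chunks.append("".join(current))         # close the current chunk
            current = []
        current.append(b)
    chunks.append("".join(current))
    return chunks

# The stream is built from the "words" ab, cd and ef: within-word
# transitions are perfectly predictable, between-word ones are not.
print(segment("abcdefabefcdabcdef"))
# -> ['ab', 'cd', 'ef', 'ab', 'ef', 'cd', 'ab', 'cd', 'ef']
```

The stream is never marked with explicit boundaries, yet the recurring building blocks fall out of the transition statistics alone, in the spirit of the pattern-finding Lengyel describes below.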
For example, a window shopper's visual system interprets photons reflected off the objects in the storefront. Similarly, a pickpocket interprets a series of slight pressure changes on their fingertips as a sequence corresponding to an object in a pocket or bag.
Humans rely on their senses to interact with their surroundings, often using sight and touch to identify objects in a cluttered pile. Through these senses, they can predict what an object will feel like using sight, or what it will look like using touch.
The brain meets those demands by running statistical analyses of earlier experience. Based on the existing data, it can infer the properties of a newly encountered object. It can also immediately guess the identity of an item even when obvious cues, such as well-defined shapes, are absent.
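A minimal sketch of this kind of inference (my own illustration, not the researchers' model): each remembered encounter pairs a visible feature with how the object felt, and the conditional frequencies from that experience give the best guess about a new object by sight alone.

```python
from collections import Counter

# Hypothetical past encounters: (visible feature, felt property).
# The feature names are invented for illustration.
experience = [
    ("shiny", "hard"), ("shiny", "hard"), ("shiny", "soft"),
    ("matte", "soft"), ("matte", "soft"), ("matte", "hard"),
]

def predict_feel(look):
    """Conditional frequencies P(feel | look) estimated from experience."""
    counts = Counter(feel for seen, feel in experience if seen == look)
    total = sum(counts.values())
    return {feel: n / total for feel, n in counts.items()}

print(predict_feel("shiny"))  # 'hard' comes out twice as likely as 'soft'
```

The point is simply that no special-purpose rule is needed: tallying past co-occurrences is enough to predict an unseen property of a new object.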
In the study, co-lead author Mate Lengyel and his team examined how the brain takes in the continuous flow of sensory information and segments it into discrete objects, comparing prior experience against new sensory input.
“The common view is that the brain receives [specialized] cues, such as edges or occlusions, about where one [thing] ends and another thing begins, but we've found that the brain is a really smart statistical machine: It looks for patterns and finds building blocks to construct objects,” Lengyel said.
The research team set up scenes of abstract shapes that lacked distinct boundaries between them. Their subjects either watched the objects on a screen or tugged them apart along a tear line that ran either through or between the objects. Next, the participants performed tests that evaluated their ability to predict the visual and haptic characteristics of these jigsaw-like scenes.
The visual tests measured how familiar the jigsaw pieces seemed compared with abstract shapes assembled from parts of two different objects. Meanwhile, the haptic tests measured how difficult it was to physically tear new scenes apart in different directions.
The team found that participants built the correct mental model of the jigsaw pieces from visual or touch-based experience alone. Moreover, the subjects could immediately predict the haptic properties of an object from its visible characteristics, and vice versa.
“These results challenge classical views on how we extract and learn about objects in our environment,” explained Lengyel. “Instead, we've [shown] that general-purpose statistical computations known to operate in even the youngest infants are sufficiently powerful for achieving such cognitive feats.”