Very recently, I did what I have been doing a lot of lately and visited our good friend and neighbour Christina. It was a particularly warm night and she had the balcony sliding doors open.
Before long, a swarm of what seemed like mosquitoes covered the ceilings inside her house.
Out came the can of Mortein, and the ensuing smell forced us outside to finish off a very good bottle of wine.
I thought there was something odd about these mozzies – after all, they weren't biting anyone and weren't even buzzing near our ears. "These aren't mozzies at all – probably flying ants," I offered by way of explanation.
Christina, however, was adamant. We even put a bet on it: pancakes.
The main thrust of my argument was that they simply weren't behaving like mozzies. Sure, they looked like mozzies – notwithstanding the bad light, and eyesight nothing like it was in years gone by – but it just didn't seem right.
Later, I gave the matter a bit more thought. How do we make meaning of what we see? How do we recognise objects and ascribe labels to them?
Object recognition, we have come to learn, is the ability to perceive an object's physical properties (such as shape, colour and texture) and to apply semantic attributes to it – an understanding of its use, our previous experience with the object, and how it relates to other objects.
A neuropsychological account of object recognition lets us divide the process into four stages, usually carried out rapidly and with little or no conscious effort in cognition.
Stage 1: Processing of basic object components, such as shape, colour, size and depth.
Stage 2: These basic components are grouped on the basis of similarity, giving the visual form distinct edges. For example, we make out the wings as distinct from the body.
Stage 3: The visual representation is matched against structural descriptions in memory – the mental models of things we have seen before, mosquitoes included.
Stage 4: Semantic attributes are applied to the visual representation, providing meaning and thereby recognition.
This is where Christina and I parted ways: at Stage 4. The bulk of her recognition heuristic seemed to rest on shape, size and so on, as well as other, I'm guessing, more subtle cues, including the weather and the way they seemed to gravitate towards the light.
I couldn't help needing further evidence in the way of behaviour. That may be because the bulk of my work in recent years has been in applied behaviour analysis. It's not surprising that so many signals go into making meaning of things – and not surprising, therefore, that artificial intelligence and artificial recognition systems face the challenges they do.
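The four stages above can be sketched as a toy matcher. Everything here – the feature names, the stored "mental models" and the semantic labels – is an illustrative assumption rather than a real vision system; the point is only that behavioural cues (biting, buzzing) can tip the match one way or the other, just as they did on the balcony.

```python
# Toy sketch of the four-stage recognition process described above.
# All models, features and scores are illustrative assumptions.

# Stage 3 material: "mental models" -- structural descriptions held in memory.
MENTAL_MODELS = {
    "mosquito":   {"shape": "slender", "size": "tiny", "biting": True,  "buzzing": True},
    "flying ant": {"shape": "slender", "size": "tiny", "biting": False, "buzzing": False},
}

# Stage 4 material: semantic attributes applied once a match is made.
SEMANTICS = {
    "mosquito":   "pest; bites; reach for the Mortein",
    "flying ant": "harmless; drawn to light on warm nights",
}

def recognise(observed):
    """Match observed features (the output of Stages 1-2) against memory
    (Stage 3), then attach semantic attributes (Stage 4)."""
    def score(model):
        # Count how many observed features agree with the stored model.
        return sum(observed.get(k) == v for k, v in model.items())
    best = max(MENTAL_MODELS, key=lambda name: score(MENTAL_MODELS[name]))
    return best, SEMANTICS[best]

# Stages 1-2: basic components already extracted and grouped by the viewer.
swarm = {"shape": "slender", "size": "tiny", "biting": False, "buzzing": False}
label, meaning = recognise(swarm)
print(label, "->", meaning)  # behaviour tips the match towards "flying ant"
```

On shape and size alone the two models tie, which is roughly where Christina stopped; it is the behavioural features that break the tie.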