Thursday, July 10, 2008

Overreliance on models

As humans, we tend to have a desire to explain everything. If something does not fit our existing model, we build a new model that can accommodate the new data. We need a model to explain everything. We learn and let our models grow, enriching our knowledge and understanding of the world.

So far, so good. But centuries of (implicit) model fitting have dulled our ability to account for completely unforeseen possibilities. As Nassim Taleb puts it in "The Black Swan", we restrict our views to our imagination. Most people cannot imagine things that contrast with the existing model yet have no fundamental reason for not being true. We often overlook the limitations of the model and believe it to be an accurate representation of the world.

Take the black swan itself: until Europeans encountered black swans in Australia in 1697, it was believed that swans could only be white. Only then did people realise that nothing in their model of swans actually ruled out other colours; the belief rested solely on the absence of counterexamples.

We do a lot of implicit model fitting. The judgement "this female is pretty" is generated after the brain takes in sensory data, fits it to an internal ideal, finds the matching coefficient, and produces a result in a form describable to other humans. Likewise, a lion on the lookout for prey glances at a herd of deer and decides to pursue one particular deer: it analyses each deer's features and size, computes the expected benefit minus cost, and finally pursues the deer with the largest positive gain (a toy version of this computation is sketched below). This, again, is implicit model fitting. These models can be created in two ways: by interaction with the surroundings, and through the hard-wiring of the genes. In fact, as I look at it, genes provide only the basic framework for the decisions; all the models are built to meet the guidelines outlined by the genes. I am not aware of any study that tested an animal's reaction when its newborn was raised in an environment that contrasted with its native learning environment. Maybe one of my readers could help me with this.
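For concreteness, here is a minimal sketch of the benefit-minus-cost decision rule described above. The prey attributes, the cost weights, and the numbers are all hypothetical illustrations, not a claim about how lions actually compute.

```python
from dataclasses import dataclass

@dataclass
class Deer:
    name: str
    size: float      # expected benefit, in arbitrary units
    speed: float     # contributes to the cost of the chase
    distance: float  # chase distance, also contributes to cost

def expected_gain(deer: Deer) -> float:
    """Implicit model: gain = expected benefit - expected cost.
    The weights are made-up illustration values."""
    benefit = deer.size
    cost = 0.7 * deer.speed + 0.3 * deer.distance
    return benefit - cost

herd = [
    Deer("A", size=8.0, speed=9.0, distance=2.0),
    Deer("B", size=5.0, speed=4.0, distance=1.0),
    Deer("C", size=6.0, speed=5.0, distance=6.0),
]

# Pursue the deer with the largest positive gain; if no gain is
# positive, the model says: don't chase at all.
best = max(herd, key=expected_gain)
if expected_gain(best) > 0:
    print(f"chase {best.name} (gain={expected_gain(best):.1f})")
else:
    print("no chase worth the effort")
```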

But the essential thing to keep in mind while using models is that, well, they are models after all. A model is meant to take into account most of the available data and predict an outcome with a certain degree of accuracy. It CANNOT predict the future. Once we get used to the idea that our mind uses models (most probably far more sophisticated neural networks than any ANN (artificial neural network) in use today) to predict outcomes, and that models CAN go wrong, we will be in better control of things.
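As a toy illustration of this point, consider a model fitted to past observations that looks perfectly accurate until a single unforeseen observation arrives. The data and the "regime change" below are fabricated for illustration; the failure mode is the point.

```python
import numpy as np

# Past observations: everything seen so far fits a simple pattern,
# much like "every swan observed so far has been white".
x_past = np.arange(10, dtype=float)
y_past = 2.0 * x_past + 1.0

# Fit a model to everything observed so far.
slope, intercept = np.polyfit(x_past, y_past, deg=1)

def predict(x: float) -> float:
    return slope * x + intercept

# In-sample, the model is essentially perfect...
print(max(abs(predict(x) - y) for x, y in zip(x_past, y_past)))  # ~0.0

# ...then a "black swan" observation arrives from outside the regime
# the model was fitted on (a fabricated break at x = 20).
x_new, y_new = 20.0, -15.0
print(f"model predicted {predict(x_new):.1f}, reality was {y_new:.1f}")
```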

