The idea that artificial intelligence is very different from human intelligence is only now gaining currency. Yet we rarely understand our own motivations and desires. Think about it: do we really know what we like? I remember a discussion at the office about the choice of toilet soaps. No one could give a clear reason for preferring their brand beyond the obvious responses – 'It works for me', 'it smells better', 'it lathers in all kinds of water', 'it keeps me feeling fresh' – things that advertising put into their minds in the first place. When pressed further, they were simply confused. Eventually, factors like 'Mum always bought it' were offered as reasons.
So, if we cannot explain simple preferences like soap, shampoo, and moisturiser brands, how do we get machines to understand what preferences are? If physical properties like 'tall' and 'short' turn vague the moment you change the context (tall among basketball players vs tall among the general populace), imagine how hard it is to define desire – which shifts with status, too. So when we want machines to think (whatever that means), we constantly confuse thinking with emotional connection – the essence of being human. There are plenty of terms – humanoids, robots – and some of the world's best directors have explored the concepts in film, but we are no closer to understanding how to construct such machines.
Sure, fuzzy logic washing machines and microwaves claim to understand which cycle to use, depending on how dirty the clothes are or what needs to be cooked. But the variables still have to be clearly defined. What is dirty vs very dirty vs insanely dirty? Incidentally, how would you define dirt? Dust, grease, curry, oil? How much of the clothes' surface area must it cover to qualify? See where this is going?
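To see why 'dirty' has to be pinned down before a machine can act on it, here is a minimal sketch of how a fuzzy-logic appliance might grade dirtiness. Every threshold, label, and sensor value below is invented for illustration – real machines use calibrated sensors and tuned membership curves.

```python
# A hypothetical fuzzy grading of "how dirty" a load of laundry is.
# All numbers and labels are illustrative assumptions, not a real appliance's logic.

def membership(x, lo, peak, hi):
    """Triangular membership: 0 outside (lo, hi), rising to 1 at peak."""
    if x <= lo or x >= hi:
        return 0.0
    if x < peak:
        return (x - lo) / (peak - lo)
    return (hi - x) / (hi - peak)

def grade_dirtiness(turbidity):
    """Map a water-turbidity reading (0-100) to overlapping fuzzy labels."""
    return {
        "lightly dirty": membership(turbidity, -1, 0, 40),
        "dirty":         membership(turbidity, 20, 50, 80),
        "very dirty":    membership(turbidity, 60, 100, 101),
    }

# A reading of 65 belongs partly to "dirty" and partly to "very dirty";
# the machine picks the cycle for whichever label scores highest.
grades = grade_dirtiness(65)
cycle = max(grades, key=grades.get)
```

The point of the sketch is exactly the article's complaint: the fuzziness is only in the overlapping curves; the boundaries themselves still had to be chosen by a human who decided, arbitrarily, where 'dirty' ends and 'very dirty' begins.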