adam gopnik had a decent book review in the new yorker about artificial intelligence and intelligence in general last week.  the gist of the review, published in the april 4, 2011 issue and not available in full unless you’re a subscriber, is simple: the machines are still eons away from catching up with us.  however, contrary to the popular opinion of the academicians, and in line with the opinion of anyone who writes or appreciates dystopian literature, he concludes that the silicon bastards may catch up before we even realize it, and that, perhaps, the only reason we’re ahead is that we constantly change our definitions of “intelligence” and “being smart”…  well, this is something i’ve been thinking about a lot since i learned how to tie my own shoes, so i figured i should take his analysis a few paragraphs further and vent my two cents’ worth…

mr. gopnik starts off by pointing out the obvious: “for centuries memory was intelligence”, which, even though it’s as out of fashion as doing the can-can for any reason other than irony, is still true for the idiots that make up mensa.  mensa, which, by the way, means “table” in latin, as in “we’re the knights of the intelligence round table, where all of us are equal in intelligence but we’re still engaged in an eternal pissing contest”, is an organization of idiots who believe IQ can be measured by how much trivial knowledge a particular jerk-off possesses and can display on cue.

that maxim, and all the underlying IQ tests supporting it, sounded like utter bullshit to me.  being able to store volumes of disassociated and trivial knowledge in your mind for no practical purpose is not intelligent at all.  it is of course good not to forget what you hear, see and read, and to utilize it whenever needed, whether you’re in trouble or in company, but it sure ain’t intelligence.  as a matter of fact, refraining from such stupid behavior may be a sign of true intelligence in itself.  nevertheless, the morons of mensa preach otherwise– because they jammed more into their otherwise dull brains, they think they’re “intelligent”.  such folly.

i have met quite a few mensa members in my time, either directly, in conversation and socially, or indirectly, thanks to their mensa bumper stickers, but i have yet to meet one “intelligent” mensa member: they’re all world-class idiots…  in my experience with mensa members, the stereotype of the “comic book guy” holds true: unintelligent morons full of trivial information that gives them a delusion of superiority.  none would survive if intelligence were a requisite for modern human survival.

now, to give them the benefit of the doubt, i am sure mensa started off as an organization of intelligent people but withered down to the morons full of trivia.  just like the sca, which had some smart founders in berkeley way back in 1966, but is now a clearinghouse for comic book guys and their female counterparts, whatever the fuck they’re called…

well, that is the current state of “smartness” in the US of A.  we have the mensa morons, who sincerely believe that cramming knowledge into your cranium makes you smart, just like scientists believed in the 18th century, and we have this paranoia that the machines will get smarter than us and clean us out in a fairly near dystopian future.

i beg to differ on both counts: smartness, or being intelligent, has nothing to do with how much information you cram into your gray cells, and the machines cannot be intelligent– unless we define “intelligence” the way the mensa crew does.

intelligence is about how you use information, not how much of it you retain, and about how you react to information, events, emotions, etc.  it is how you tackle problems and solve them.  it is about creativity.  it is about thinking outside of the box.  show me how a machine can see the venetian light like monet did and create its own impressions, i’ll concede.  show me how a machine can create “ok computer”, i’ll concede.  show me how a machine performs a long con, and i’ll concede.  show me how a machine comes up with the mastercard priceless campaign, and i’ll concede.

machines can break codes faster than us and play chess better than us.  that is a no-brainer.  you can program that.  but, as mr. gopnik states, machines can’t play poker better than us.  unless they’re playing other machines and you have programmed how the other machines “think” into each one.  then they can play.  the only way you could perhaps program a machine to play poker against humans better is by programming their tells, their styles, etc. into the machine and then running facial/body recognition software so that it can track the tells and other signs.  even that, by itself, may not be enough.  but, even if you figure it out, go program the same data for 6 billion human beings.
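
just to show how brittle that route is, here’s a toy sketch of what “programming their tells into the machine” would look like.  every name and every tell below is made up for illustration; the point is that the moment somebody the machine wasn’t programmed for sits down at the table, it’s back to guessing.

```python
# a toy sketch of the "program every opponent's tells into the machine" idea.
# all names and tells here are invented; a real system would need a profile
# like this for every human it might ever sit across from.

TELL_PROFILES = {
    "uncle hank": {"scratches ear": "bluffing", "stacks chips fast": "strong hand"},
    "middle aged midwestern woman": {"long pause": "drawing hand"},
}

def read_opponent(name, observed_tell):
    """guess what an observed tell means -- only works for pre-programmed people."""
    profile = TELL_PROFILES.get(name)
    if profile is None:
        return "no profile: the machine is just guessing"
    return profile.get(observed_tell, "unknown tell: the machine is just guessing")

print(read_opponent("uncle hank", "scratches ear"))       # -> bluffing
print(read_opponent("a stranger at the table", "smiles"))  # -> no profile: the machine is just guessing
```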

and that is where the fallacy of artificial intelligence lies: all current artificial intelligence work relies on enormous data entry (the mensa model of intelligence) and stereotypes.  let me explain: to create “intelligent” machines, engineers and scientists cram all the data they can think of into the machines.  then they write code showing the machine how to index (or catalog), use and select the necessary data.  the more data you can enter and the more efficient your code is, the smarter your machine.
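
for what it’s worth, here is a minimal sketch of that “mensa model” in code, with made-up facts: cram data in, index it, and spit it back on cue.  nothing in it understands anything; it just stores and retrieves.

```python
# a minimal sketch of the "mensa model" of machine intelligence described above:
# cram in data, index it, retrieve it on cue.  the facts are made-up examples.

knowledge_base = {}

def cram(fact_key, fact):
    """data entry: the more of this you do, the 'smarter' the machine looks."""
    knowledge_base[fact_key] = fact

def recall(fact_key):
    """retrieval on cue -- storage and lookup, not understanding."""
    return knowledge_base.get(fact_key, "i don't know")

cram("mensa", "latin for 'table'")
cram("gopnik review", "new yorker, april 4, 2011")

print(recall("mensa"))           # -> latin for 'table'
print(recall("venetian light"))  # -> i don't know: nothing outside the entered data exists for it
```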

for choosing what data to enter and how to write the code that maps out how the machine will “think”, they rely on cognitive psychologists and linguists.  the holy trinity of modern artificial intelligence is computer science, cognitive psychology and linguistics.

the cognitive psychologists do their studies and come up with both quantitative and qualitative data about how people perceive, recognize, react, solve, feel, etc.  and there we venture into stereotypes.  the cognitive psychologists and linguists do most of their studies on students or other closed groups.  what they come up with is nothing but generalizations, stereotypes.  what they’re programming into the computers is nothing but a coded version of the “reasonable person” standard discussed elsewhere in a different post.

programming the machines this way is called “learning” and in theory it is no different than human learning, where, as humans, we slowly accumulate lots of data, catalog it and learn how to use it.  in theory it is similar, but in practice it is eons apart.  this is because no human being is the same as another.  even though the methods used to teach us may be the same, we all learn differently.  and we all use our knowledge differently.

mr. gopnik alludes to the “turing test” as a qualifier of machine intelligence– the turing test hides a computer behind a curtain and has a real human being enter into a conversation with it without knowing it is a computer.  if the human being is fooled into thinking he or she is talking to another human, then the machine is smart.

this, in my opinion, should not be very hard to achieve given the acceleration of technology we are enjoying.  with enough data entered and the correct code written, it should be possible to con a human being into believing that he is conversing with another human.  albeit a very, very standard human.
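
to make that concrete, here’s a toy sketch of what such a con could look like: canned, stereotyped replies keyed on whatever the judge happens to type.  the phrases are invented here purely for illustration, and they show exactly how “standard” the resulting human is.

```python
# a toy sketch of "conning a human with enough data and the correct code":
# canned replies keyed on topics the judge mentions.  all phrases are invented.

CANNED_REPLIES = {
    "weather": "oh, it's been lovely out lately, hasn't it?",
    "family": "the kids keep me busy, you know how it is.",
    "work": "same old, same old. can't complain.",
}

def reply(judge_says: str) -> str:
    """return a stereotyped response; fall back to a universal dodge."""
    for topic, canned in CANNED_REPLIES.items():
        if topic in judge_says.lower():
            return canned
    return "that's interesting, tell me more."  # the universal dodge

print(reply("how's the weather over there?"))                 # -> the weather line
print(reply("what do you make of monet's venice paintings?")) # -> the dodge
```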

as i discussed above, because of the uniqueness of human beings, the programmers and their linguist and psychologist pals will only be able to program one person at a time into the system.  and that person’s character traits will be dictated either by the data the psychologists and linguists have gathered (a stereotype), or by the character traits of one (or more) actual individuals.

the human being behind the curtain will believe he is, let’s say, having a pleasant conversation with a middle aged midwestern woman, but he will never believe he is shooting the shit with his uncle hank.  now, you can program the computer to emulate uncle hank, but then the discussion will be limited by uncle hank’s mental faculties.  you can program many different people and personalities into the machine, but how is the machine going to decide which personality to activate?  also, we human beings know how to take different tones with different people or on different occasions.  we all do it differently.  again, you can teach a machine how some people change tones, but not all people.
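
if you want to see where that breaks down, here’s a rough sketch with a couple of invented personas.  storing them is the easy part; the function that is supposed to decide which one to activate, for which listener, on which occasion, is exactly the part nobody knows how to write, so it is left unimplemented below.

```python
# a rough sketch of the personality problem above: several stored personas
# (both invented here), and an empty slot where the real intelligence would go.

PERSONAS = {
    "midwestern woman": lambda msg: "well, bless your heart. " + msg,
    "uncle hank": lambda msg: "back in my day... anyway, " + msg,
}

def respond(message, persona_name):
    """answer in the voice of a pre-programmed persona."""
    persona = PERSONAS.get(persona_name)
    if persona is None:
        raise ValueError("no such persona programmed")
    return persona(message)

def choose_persona(message, listener):
    # the open question: which personality, for which listener, on which
    # occasion?  there is no principled rule to put here.
    raise NotImplementedError("nobody has written this part")

print(respond("nice talking to you.", "uncle hank"))
```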

at the end of the day, all you’ll program will be either stereotypes (like mr. gopnik’s well-intentioned but ill-conceived attempt to emulate how teenage girls speak) or something very specific, based on one individual’s character traits.  either way it’s a losing proposition.

however, if you go with the former, you can create a wonderful cast for a soap opera or a horrendous direct-to-video film with all kinds of stereotypes.  it wouldn’t be any different than a D-list actor playing a teenage girl (or uncle hank, for that matter).

if there were a way to program all 6 billion residents of planet earth into one single machine and make it pick the right person for the right occasion, then it would be a little bit closer to success, but that is a practical impossibility.

this is the goal of artificial intelligence, but i don’t think what they will achieve, if they achieve it, will be intelligence.  not in the real sense.  in the mensa sense, yeah, they should have it.  but not intelligence as humans are intelligent.

here is a way to achieve it: maybe someday technology will advance enough to create a chip that can transmit human brain activity in its entirety.  that way, by implanting chips in every newborn, perhaps we can create a “shadow drive” or a “back-up” of our human brains.  as we learn, from infant to adult, everything we learn, feel, react to, etc. is copied onto the shadow drive.  by processing that information, perhaps the machines can finally “imitate” all human beings.  however, even if that nightmare happens, it will still be an “imitation”.

it will be an “imitation” because all that is achieved will be storing, indexing and applying data.  human intelligence is more than that.  as long as there is someone who is able to think outside of the box, as long as there is someone with some creativity left, as long as we have our instincts, we will still be smarter and more intelligent.  granted, the machines will know more than us, but we will still be smarter.  unless we continue to underestimate what “intelligence” and “being smart” really mean.

the only real threat of a dystopian future where the machines rule us is not coming from the machines– at least not directly.  the threat is us: by relying on the machines and overestimating them, we are actually losing our smartness and intelligence.  perhaps the most cliché example would be simple calculations and our inability to perform them because of our reliance on calculators.  if we keep going this way, if we rely too much on the machines and what they offer us, then yes, the machines can overtake us.  not because they’re better but because we are worse.

it is easy to envision human devolution– if we’re using less of our brains or our limbs, we may start losing them.  slowly but surely.  from a purely logical standpoint, as quickly as we evolved and advanced, we can turn back the clock and devolve.  science has always said that because of our intelligence we do not need to rely on our physical powers that much, that future humans will probably need less of that power, and that perhaps our bodies will evolve accordingly.

however, if we do not challenge our intelligence and our brains, if we rely more and more on machines, we can lose both our physical and mental powers.  now that would be a devolution in my book.  we may rely on the machines so much that we may even forget that it was us who designed and programmed them.  and then the machines will win.  otherwise, if we keep on thinking, creating, challenging our gray cells, using our instincts, staying aware of our emotions, we have nothing to fear, mr. gopnik– the machines can only do a small percentage of what we can, and they cannot be human…