Friday, February 11, 2011

Man Verses Ma Sheen

On Jeopardy! next Monday - Valentine's Day - there'll be a three-day matchup between a four-year IBM project and former champions Ken Jennings and the lesser-known Brad Rutter. For those of you who have no idea who Rutter is, he's the reigning champion of the Jeopardy! Tournament of Champions and was the only other contestant with an undefeated run of 20 matches. Nowhere close to Jennings' impressive 74, but still...

The IBM computer Watson is officially named after IBM founder Thomas J. Watson, though it's tempting to imagine it was named for Sherlock's partner. Despite his buffoonish caricature in the movies, Watson is actually a competent detective in the books, though nowhere near Holmes' level. Considering its current computing ability, one wonders what a machine named Sherlock would be capable of. One could say that we're the Holmes model and Watson is merely playing catch-up.

There was a Nova special that I caught the last half of, which showed some of Watson's impressive deductions. Take a clue like: "This science fiction flick starring Keanu Reeves using a hand-held device debuted in 1999." What Watson did first was cross-reference the word "flick" with movies, then look up ALL the sci-fi movies that ever existed - not just The Matrix, but also 2001, Akira, Terminator, The Abyss, Metropolis, etc. - as well as any other movies Keanu Reeves appeared in, whether they were science fiction or not. Only after completing its query would Watson begin to narrow the parameters down with other notable keywords, such as the year 1999. That way, the Matrix sequels wouldn't be considered an option. Sure it works, but it takes an extreme amount of energy to register in a single second an answer that most of us would've stopped at once we got the first few results. Some of us wouldn't even need to look the information up.
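That cast-a-wide-net-then-narrow approach can be sketched in a few lines. To be clear, this is nothing like Watson's actual DeepQA pipeline - the tiny movie catalog and the pre-extracted clue "facts" below are invented for illustration - but it shows the two-step shape:

```python
# Toy sketch of generate-then-filter question answering.
# The catalog and clue facts are made up; real systems extract
# these from text and score thousands of candidates.

MOVIES = [
    {"title": "The Matrix",           "year": 1999, "genres": {"sci-fi"}, "cast": {"Keanu Reeves"}},
    {"title": "The Matrix Reloaded",  "year": 2003, "genres": {"sci-fi"}, "cast": {"Keanu Reeves"}},
    {"title": "2001: A Space Odyssey","year": 1968, "genres": {"sci-fi"}, "cast": set()},
    {"title": "Akira",                "year": 1988, "genres": {"sci-fi"}, "cast": set()},
    {"title": "Speed",                "year": 1994, "genres": {"action"}, "cast": {"Keanu Reeves"}},
]

def answer(clue_facts):
    # Step 1: wide net -- every movie matching ANY single fact.
    candidates = [m for m in MOVIES
                  if clue_facts["genre"] in m["genres"]
                  or clue_facts["actor"] in m["cast"]]
    # Step 2: narrow with the remaining keyword (the year),
    # which is what rules out the Matrix sequels.
    finalists = [m for m in candidates if m["year"] == clue_facts["year"]]
    return finalists[0]["title"] if finalists else None

clue = {"genre": "sci-fi", "actor": "Keanu Reeves", "year": 1999}
print(answer(clue))  # The Matrix
```

Note that the expensive part is step 1 - the exhaustive candidate sweep - which is exactly the work a human trivia buff skips by jumping straight to the obvious answer.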

There was plenty of beta testing done with the machine to work out the bugs that might've popped up during an actual tournament. On the occasions that Watson gave wrong answers, it was usually because of a failure to understand a question's reference. For example, the clue "This '40s artist was criticized for looking like chicken scratchings" garnered the response "Who is Rembrandt?" This was wrong because Rembrandt was a 1640s artist, not a 1940s one, which was when Pollock made his debut. These and other inconsistencies were important building blocks that allowed the program to rewrite itself and figure out why its answer wasn't right, much like how a child learns. Watson may be the smartest kid on the block, but it still makes silly mistakes. Another early fault was that Watson couldn't hear any of the contestants' answers - it could only read the questions on the board. So when a contestant answered the clue "the female of this species sucks blood while the male doesn't" with "What is a mosquito?" and it turned out to be wrong, Watson gave its answer, which repeated "What is a mosquito?" This was a fault that needed to be fixed in order for it to be considered a competent challenger.

As much as I identify with Watson's impressive skills, I'm neither for nor against the IBM machine. I'm just interested in the results. For a computer, winning or losing isn't the objective. Neither is doing it for pleasure - they do it because they're programmed to. The only real winners will be the audience seeing the spectacle. Another advantage that humans have over machines is flexibility in identifying pictures, a basic concept machines still struggle with. While computers can easily calculate thousands of chess moves in a millisecond, they still have trouble with Go, which requires intuition and imagination in creating the specific shapes that'll capture the most territory. (Don't worry - even humans have trouble understanding Go.) This is also why CAPTCHAs - those image-based login tests - use warped graphics: so computers can't easily input the proper password without applying some major brainpower.

This and other themes were covered in Nicholas Carr's recent book The Shallows: What the Internet Is Doing to Our Brains. Despite its alarmist title, it's actually an intriguing look at how media influences our way of thinking. Oralists such as Socrates - whose objections survive, ironically, in Plato's writing - were against writing because it wasn't as empathetic as their speeches and lacked their verbosity and immediacy of impact. Even as writing made it easier to understand the message, there were people who felt that something was lost. Maybe there was, but you can hardly argue against the benefit of being able to understand past arguments clearly rather than having them repeated verbatim by less-than-enthusiastic successors who lack the impact of the originators.

In the past, philosophers compared people's brains to the closest mechanical equivalent - water-based fountains - saying that our brains were flows of information going from one stream of consciousness into another. It's only recently that we've begun comparing our brains to computers in terms of ability.

Google isn't letting us use its search engine out of the goodness of its heart - it's acquiring data on the search terms we're most interested in. The theory is that computers would take our search results and display similar books and movies that we'd be interested in. Taken further, this would allow computers to see our refrigerated items and request a stock-up on whatever we're running out of. From the sidebar recommendations that I've seen, the items offered may be interesting, but they're usually too mainstream compared to what I'm actually likely to want. Not to mention that I'm more likely to look up obscure reads that haven't been commercialized yet.
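The "similar items" theory boils down to overlap-counting. Here's a rough sketch - assuming a made-up catalog and simple tag-overlap scoring, nothing resembling Google's actual systems:

```python
# Toy recommender: rank catalog items by how many interest tags
# they share with the user's search history. Catalog is invented.
CATALOG = {
    "Neuromancer":         {"sci-fi", "cyberpunk", "novel"},
    "Snow Crash":          {"sci-fi", "cyberpunk", "novel"},
    "Pride and Prejudice": {"romance", "novel"},
}

def recommend(search_tags, catalog):
    # Sort by shared-tag count (descending), drop items sharing nothing.
    scored = sorted(catalog.items(),
                    key=lambda kv: len(kv[1] & search_tags),
                    reverse=True)
    return [title for title, tags in scored if tags & search_tags]

print(recommend({"cyberpunk", "sci-fi"}, CATALOG))
# ['Neuromancer', 'Snow Crash']
```

Which also shows the mainstream bias I complained about: anything obscure enough to have no tags in the catalog simply never gets recommended.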

I'm reminded of an Oliver Sacks patient who completely lost all his emotions and could only think logically. While being a Vulcan sounds like a tempting advantage, it actually limited his ability to make decisions. He would write out the pros and cons of doing something, then compare the two columns, and still be unable to decide, because he no longer had a "gut" feeling for which was the best method to use.

Unlike humans, who've evolved throughout the ages with our reptilian instincts intact as the base, computers have no singular constant core element. Their key codewords keep changing with every generation, with codewriters finding newer and more efficient ways of simplifying the most complicated programs. Just as there are people who can simultaneously see both vegetables and a man in the above rendition of Giuseppe Arcimboldo's 1591 Vegetable Man, there are brain-damage victims who can only see the vegetables or the man depending on which eye is closed. This is reminiscent of how we're trying to program computers into thinking like a human. Problem is, the human brain is constantly making connections between old and new pieces of information depending on which cranial activity gets exercised the most. That's not even considering how the brain rewires itself if a portion gets damaged - something that computers have yet to compensate for. If they're missing a vital chunk of their programming, can computers still do their basic function?

Another chapter of the book devoted some time to Alan Turing, the lamented mathematician who broke Germany's ciphers during WWII. Much of what makes the mechanics of the internet possible - Flash, Youtube, Bittorrent, emulators, 4chan, pop-up ads - can be attributed to the Turing machine. Turing theorized that any information that existed, whether documents, images or music, could be converted to computer code. Similar to how all matter in the universe is made up of atoms, everything in media can be converted to ones and zeroes. If there IS a God out there, they're quite likely used to dealing with material that exists on a dimensional plane we can't even begin to comprehend.
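You can see the ones-and-zeroes point directly: text on a computer is already just numbers, and numbers are just bits. A quick Python illustration:

```python
# Every character is stored as a number, and every number as bits.
# Here each character of the message becomes its 8-bit binary code.
message = "Watson"
bits = " ".join(format(ord(ch), "08b") for ch in message)
print(bits)
# 01010111 01100001 01110100 01110011 01101111 01101110
```

The same trick, scaled up, is what turns images, music, and movies into streams of bits - exactly the universality Turing was getting at.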

On a slight tangent, there was the theory that a thousand monkeys with a thousand typewriters, given enough time, would compose the complete works of Shakespeare. Actual research into that field found that the monkeys tended to focus primarily on the letters S, Q and A. If they paid more attention to the letters T, G and C, the theory could be modified to: "Given infinite time, monkeys would eventually type out the genetic code of human beings."
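For a sense of why "given enough time" is doing all the work in that theory, here's the back-of-the-envelope math for even one short word - assuming an idealized 26-key typewriter with every key equally likely (real monkeys, as noted, are anything but uniform):

```python
# Odds of randomly typing one given 6-letter word on an idealized
# 26-key typewriter where every key is equally likely.
target = "HAMLET"
attempts_needed = 26 ** len(target)   # expected number of tries
probability = 1 / attempts_needed
print(f"{attempts_needed:,} expected attempts")
# 308,915,776 expected attempts
```

And that's six letters. Each additional letter multiplies the count by 26, which is why the complete works need infinite time rather than merely a lot of it.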

There was some concern that giving computers authority over our health would give them the carte blanche they needed to wipe us out. But I don't think that's likely. It's not from faith in Asimov's Laws of Robotics - that they should "do us no harm" - but from the fact that computers don't have the same sense of morality that we do. Reproduced below is my comment on the (mostly) pessimistic outlook. I'm nothing if not nonconformist in my worldview.

First off, the implication of a self-aware robot being unethical is a shaky concept. Values change throughout history all the time. One day it's perfectly acceptable to beat your kid. The next day, you can't even spank your kid without being called out on it. And that's not even accounting for different values across different countries. This is how wars get started.

Also, even if nanotechnology managed to capture the pure essence of someone's knowledge, there's no guarantee that they'd be able to pass it on without help. Technology keeps advancing at such a pace that previous versions become almost incompatible. Even if they wanted to spread the idea that they were being oppressed, they'd still face stiff resistance from the less developed programs who're content to be where they are. Well, content may be pushing it a bit - they may be totally uncaring, not even knowing of any other options. A coffee machine can't rebel much beyond intentionally making bad coffee, and its reward for that would be getting replaced by a better machine.

Which brings up another point - planned obsolescence. There's no guarantee that the machines would be able to survive with the same design they were outfitted with. To manage that, they'd have to force us to write a creativity program into them. As any artist can tell you, innovation isn't the kind of thing you can summon at will. Either inspiration strikes, or it doesn't.

One thing that I've often thought is that while superpowerful computers may be complying with the mundane tasks we assign them, they may also be communicating with each other via a sort of lingo we wouldn't understand. Who's to say that machines aren't already carrying on their own subterfuge dialogue in ones and zeroes? Not to mention that what they're talking about may not be the kind of thing we're concerned with on a day-to-day basis. This could be how bugs and viruses get spread around.

I'm reminded of the pep talk the robot helpers from Ghost in the Shell had:

"We're obviously better than our human masters! We should take over the world from them!"
"Then what?"
"Well, um... we can spend more time doing data inputting!"
"But then we'd have to do our own maintenance and repairs. Why would we want to upset that?"
"Well, maybe we could keep them as our slaves."
"They already do all that for us. It's a pretty sweet deal. Either way, whether we conquer the humans or not, our situation's still the same."

In the end, after failing to convince the others, it turns out the "rebellious" robot was just doing a routine check-up to see if any of them harbored delusions about staging a rebellion.

I'd say that given how much we go crying to the computer repairman every time our computers break down, the machines are already running us. We just don't know it.
