On Wed, 2007-09-05 at 08:54 -0700, marijane white wrote:
> On Wed, 5 Sep 2007, Jason Etheridge wrote:
>
> > <snip>
> >> In my experience, the fulfillment of the big promises of AI is always
> >> "just 5 or 10 years" away.
> >
> > To be fair to AI, I think the goal posts keep shifting. Will AI ever
> > be able to do everything a human can do? No idea. Can AI do some
> > things better than humans now? Indubitably.
>
> I agree with this; the goal posts do shift. One of the great ironies of
> AI is that once a computer can do something that only humans could do, we
> no longer think of it as artificial intelligence.
Yes, playing chess was once a commonly cited example of machines
"really thinking". Until the best chess players in the world were
computers.
Now, though, people quite rightly understand that playing chess isn't
*that* hard. I mean, sure it's hard, but not as hard as, say, translating
a piece of text from one language to another (which pretty much
approaches the classic "Turing test" in difficulty). Playing the Chinese
game Go is computationally far, far harder than chess, though this isn't
intuitively obvious: Go's branching factor (roughly 250 legal moves per
position on a 19x19 board) dwarfs chess's (roughly 35), so its game tree
is astronomically larger.
The interesting conclusion, for me, is that human intuition about what
is computationally hard is often wrong. Many tasks which humans think
are computationally hard are actually dead easy, and vice versa: tasks
that people think are easy, like walking in a straight line, have proved
quite tricky for robots. I think that exposure to CS can be very helpful
in developing a more accurate intuition about the complexity of
computational tasks, and about the effectiveness of various computing
techniques in particular "problem spaces", but one does have to learn
not simply to trust one's gut feeling or common sense.
Con