[Sigia-l] Findability
chick foxgrover
cflists at foxpath.com
Mon Jul 21 16:08:43 EDT 2003
Couldn't one say, though, that understanding and intelligence are best
understood in relation to a task of some sort?
And that the tasks, or the intentions behind a series of tasks and their ends,
come from us. We may have machines that find solutions we then have to study to
understand, if we care to. But whether or not we know/understand/remember how
to keep food fresh in a refrigerator, the only reason a device was ever
designed, or "learned," to keep food fresh is OUR (or the designers') need to
solve a particular problem. Problem creation, posing, solving, and intention
evolve continually in us, and are perhaps in this sense far more "scalable"
than the idea that humans are "computers" that must evolve through generations
to grow or change. So we may keep up after all.
As one AI researcher put it:
The question is not to create a program that can play chess but to create one
that wants to.
Does this help?
>A lot of what we've been talking about boils down to how we define
>'understanding' and 'intelligence.'
>
>The natural way we define these things is in a human-centric way.
>
>We say things like "a machine will understand something when it
>understands things the way a human does." The implication then is
>that the machine must, internally, either function like a human
>functions or achieve an extraordinary capacity for mimicking human
>abilities.
>
>But is this a good way to define 'understanding' and 'intelligence?'
>
>Perhaps not.
-------------------------------------
Chick Foxgrover
mailto:cfoxgrover at foxpath.com
718-369-7102
-------------------------------------
917-661-6758 day