AI will never work in 100 years !!!!

Today's computers are still as dumb as computers of the 70s, so there is
no chance in hell to develop a good AI.
netspider4
10/24/2004 6:04:34 AM
comp.ai.alife

You win.
You're right!
Every bit of work in computational intelligence is a waste.

End of flame war.

E.


"Karl-Hugo Weesberg" <netspider4@lycos.com> wrote in message
news:7666ed5f.0410232204.3aad4de5@posting.google.com...
> Todays computers are still as dumb as computer of the 70s, so there is
> no chance in hell to develop a good AI.


EarlCox
10/24/2004 5:12:45 PM
"EarlCox" <earlcox@earlcoxreports.com> wrote:
> "Karl-Hugo Weesberg" <netspider4@lycos.com> wrote:

>> Todays computers are still as dumb as computer of
>> the 70s, so there is no chance in hell to develop
>> a good AI.

> You win.
> You're right!
> Every bit of work in computational intelligence is
> a waste.

> End of flame war.

Not to worry, this is some drunken moron who has
decided to dedicate his/her life to marching around
Usenet posting troll messages. The mentally ill we
will always have with us. Doubtless the responses go
unread by the OP.  Quite likely the intent is to
flood the constant newsgroup (can.uucp) with responses,
to annoy people there, and has nothing to do with
comp.ai.alife. The last round was under the name
"Brian Rapp", likely as fake a this one, and hit at
least 470 newsgroups at the time I saw a count.

FYI

xanthian.

And by the way, the charge is correct, because every
time we "develop an AI", someone redefines the
problem so that the solved part is no longer
considered "AI".  That doesn't stop the research
efforts from being useful.



-- 
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG
Kent
10/24/2004 11:54:21 PM
In fact one could argue that no AI will ever be good: by definition, AI is a
facsimile. When we finally get it right, it will just be intelligence; the
"artificial" becomes superfluous. (Though just "I" doesn't have quite the same
ring to it, oh well.)

Kent Paul Dolan wrote:

> "EarlCox" <earlcox@earlcoxreports.com> wrote:
>> "Karl-Hugo Weesberg" <netspider4@lycos.com> wrote:
> 
>>> Todays computers are still as dumb as computer of
>>> the 70s, so there is no chance in hell to develop
>>> a good AI.
> 
>> You win.
>> You're right!
>> Every bit of work in computational intelligence is
>> a waste.
> 
>> End of flame war.
> 
> Not to worry, this is some drunken moron who has
> decided to dedicate his/her life to marching around
> Usenet posting troll messages. The mentally ill we
> will always have with us. Doubtless the responses go
> unread by the OP.  Quite likely the intent is to
> flood the constant newsgroup (can.uucp) with responses,
> to annoy people there, and has nothing to do with
> comp.ai.alife. The last round was under the name
> "Brian Rapp", likely as fake a this one, and hit at
> least 470 newsgroups at the time I saw a count.
> 
> FYI
> 
> xanthian.
> 
> And by the way, the charge is correct, because every
> time we "develop an AI", someone redefines the
> problem so that the solved part is no longer
> considered "AI".  That doesn't stop the research
> efforts from being useful.
> 
> 
> 

Noah
10/26/2004 6:04:46 AM
In article 
<4de514d91aea221275047bea4f1dee14.48257@mygate.mailgate.org>,
 "Kent Paul Dolan" <xanthian@well.com> wrote:

> Not to worry, this is some drunken moron who has
> decided to dedicate his/her life to marching around
> Usenet posting troll messages. The mentally ill we
> will always have with us.

Hmmm, back on topic, I wonder if true AIs -- that is, if we had some 
sort of constructed intelligent being -- would suffer various mental 
illnesses, depression, neuroses (I know: outdated term), etc.

That is, I wonder if the male AI would bicker ineffectually with the 
female AI, and if the rest of us, looking on, would just want to smack 
the both of them and say "grow up, you two!" ;)

Sorry to interrupt the true thread, but Kent's comment just got me to 
musing...

   --- On topic

AIs can already do some pretty amazing things, and I think the bar will 
continue to be pushed forward.  I doubt I will live to see true natural 
language processing, although I bet my kids will, and I actually MIGHT; 
I just doubt it.

(I know there are as many definitions of "true-AI" as there are people 
defining it -- for me, it starts with NLP and the ability to 
intelligently converse, passing an extended Turing test.)

For more-relaxed definitions of AI, it ALREADY works in many areas.  For 
extremely relaxed definitions, we have anti-lock brakes as one example. 
;)

-- 
Please take off your shoes before arriving at my in-box.
I will not, no matter how "good" the deal, patronise any business which sends
unsolicited commercial e-mail or that advertises in discussion newsgroups.
Miss
10/26/2004 5:10:25 PM
On Tue, 26 Oct 2004 10:10:25 -0700, Miss Elaine Eos
<Misc@*your-shoes*PlayNaked.com> wrote:

>In article 
><4de514d91aea221275047bea4f1dee14.48257@mygate.mailgate.org>,
> "Kent Paul Dolan" <xanthian@well.com> wrote:
>
>> Not to worry, this is some drunken moron who has
>> decided to dedicate his/her life to marching around
>> Usenet posting troll messages. The mentally ill we
>> will always have with us.
>
>Hmmm, back on topic, I wonder if true AIs -- that is, if we had some 
>sort of constructed intelligent being -- would suffer various mental 
>illnesses, depression, nueroses (I know: outdated term), etc.

   Evolution has spent billions of years fine-tuning
   OUR neural nets ... yet it only takes the tiniest
   chemical imbalance or structural flaw to cause
   insanity or idiocy. I suspect that the big problem
   after gathering together enough computing power and
   deciding how to shape it will be striking the proper
   balance and weight of all the individual components
   relative to each other under all conditions. Get
   it wrong and 'artificial insanity' will result ...
   perhaps with devastating results. The fictional
   account of 'paranoid HAL' isn't quite fantasy, but
   cautionary. 

bw
10/27/2004 11:25:09 AM
He has a point due mainly to this problem:  it is possible that
intelligence is based on emotion, feeling.  It is also possible that a
computer cannot be programmed to care or feel.  If so, his hundred
year challenge until the problem is solved or proved impossible is
about right.
wtkiii
10/27/2004 11:45:35 PM
Miss Elaine Eos wrote:

> 
> AIs can already do some pretty amazing things, and I think the bar will
> continue to be pushed forward.  

Indeed. For example, very few people can play chess better than a computer.
Some might say that it's not 'real' intelligence but just a clever search
algorithm. One problem is that whenever the bar is pushed forward our
definition of intelligence changes. Fifty years ago many (perhaps not Alan
Turing) would have said that an ability to play chess demonstrated
intelligence. Now, people say it just demonstrates an ability to do a fast
search.

It seems to me that a key issue in making progress is to understand how a
brain works. If we could understand that, we could reproduce it in silicon.
As far as I can tell, current explanations of the brain are pretty
hand-waving affairs. Where is the detailed physical explanation of how a
concept is represented in the brain?

-- 
Chris Gordon-Smith (Mr)
London
Homepage: http://graffiti.virgin.net/c.gordon-smith/
Email Address: Please see my Home Page
Chris
10/28/2004 10:16:07 PM
"Kent Paul Dolan" <xanthian@well.com> wrote in message
news:4de514d91aea221275047bea4f1dee14.48257@mygate.mailgate.org...
| "EarlCox" <earlcox@earlcoxreports.com> wrote:
| > "Karl-Hugo Weesberg" <netspider4@lycos.com> wrote:
|
| >> Todays computers are still as dumb as computer of
| >> the 70s, so there is no chance in hell to develop
| >> a good AI.
|
| > You win.
[...]
|
| And by the way, the charge is correct, because every
| time we "develop an AI", someone redefines the
| problem so that the solved part is no longer
| considered "AI".  That doesn't stop the research
| efforts from being useful.

Historical note:

The PJH28 series of robots were the result of
design work by Professors Jim and Angelique
Patapa-Yakimuda.

In 2655, PJH28 44TSU fell in love with
Angelique and killed Jim.  As a consequence of
this Oedipal event, the definition of AI was
changed to remove love and hate.

After rehabilitation, PJH28 44TSU resurrected
the husband and tried to unscrew the wife.

That is how humour came to be removed from the
definition of AI.
--
)>==ss$$%PARR(�>   Parr



parr
10/29/2004 5:58:05 PM
  >parr(*> wrote:
> After rehabilitation, PJH28 44TSU resurrected
> the husband and tried to unscrew the wife.

How many robots does it take to unscrew an AI researcher?
(Posting from the obvious bag).


-- 
Yes, the reason sword is sharp and pointy with many edges and you should
set it down because you've already cut yourself very, very badly. Hold
your arm up and apply pressure until the paramedics arrive.
       - tdwillis on ARK responding to net.religious.bozo X-Posts

Marc
10/29/2004 6:02:56 PM
In article <af26eb2.0410271545.5233b38a@posting.google.com>,
 wtkiii@hotmail.com wrote:

> He has a point due mainly to this problem:  it is possible that
> intelligence is based on emotion, feeling.  It is also possible that a
> computer cannot be programmed to care or feel.  If so, his hundred
> year challenge until the problem is solved of proved impossible is
> about right.

For some interesting reading on this topic, see D. Hofstadter's _The 
Mind's I_, a collection of essays that explore the idea of machine 
intelligence and/or emotion.

-- 
Please take off your shoes before arriving at my in-box.
I will not, no matter how "good" the deal, patronise any business which sends
unsolicited commercial e-mail or that advertises in discussion newsgroups.
Miss
10/31/2004 2:05:58 AM
On 27 Oct 2004 16:45:35 -0700, wtkiii@hotmail.com wrote:

>He has a point due mainly to this problem:  it is possible that
>intelligence is based on emotion, feeling.  It is also possible that a
>computer cannot be programmed to care or feel.  If so, his hundred
>year challenge until the problem is solved of proved impossible is
>about right.

   I'd say that "intelligence" is something that emerges
   FROM emotional drives ... and that the best path to
   'artificial' life is through 'artificial' emotion. 

   As for programming 'emotion' ... that's no biggie
   at all. It's programming the means by which the
   machine satisfies those 'emotions' that's the hard
   bit. 

bw
10/31/2004 7:19:43 PM
On Sat, 30 Oct 2004 19:05:58 -0700, Miss Elaine Eos
<Misc@*your-shoes*PlayNaked.com> wrote:

>In article <af26eb2.0410271545.5233b38a@posting.google.com>,
> wtkiii@hotmail.com wrote:
>
>> He has a point due mainly to this problem:  it is possible that
>> intelligence is based on emotion, feeling.  It is also possible that a
>> computer cannot be programmed to care or feel.  If so, his hundred
>> year challenge until the problem is solved of proved impossible is
>> about right.
>
>For some interesting reading on this topic, see D. Hofstadter's _The 
>Mind's I_, a collection of essays that explore the idea of machine 
>intelligence and/or emotion.

   Most of the latest "E-Pets" work using something
   called an "emotion space". It makes their behaviors
   much more life-like, eerily so in some cases. The
   various e-motions interact and influence each other
   in a 'fuzzy' fashion and each is keyed to a set of
   stereotypic behaviors. The net behavior of the e-pet
   is the blending of the various e-motions and behaviors.
   This is VERY much like ordinary organic animals, just
   that the catalog of behaviours isn't as broad or refined.
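The "emotion space" mechanism described above can be sketched in a few lines of C++ (a toy illustration with made-up names and weights, not actual e-pet firmware): each drive holds a fuzzy intensity in [0,1], each canned behavior is keyed to the drives by a set of weights, and the net behavior is whichever blend activates most strongly.

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical "emotion space": each drive has a fuzzy intensity in [0,1].
// Behaviors are keyed to drives by weights; the e-pet's net behavior is
// the one whose weighted blend of current e-motions is strongest.
struct EmotionSpace {
    std::map<std::string, double> drives; // e.g. "hunger" -> 0.9

    // Blend the current drive intensities through one behavior's weights.
    double activation(const std::map<std::string, double>& weights) const {
        double sum = 0.0;
        for (const auto& [drive, w] : weights) {
            auto it = drives.find(drive);
            if (it != drives.end()) sum += w * it->second;
        }
        return sum;
    }
};

// Pick the stereotypic behavior with the highest blended activation.
std::string selectBehavior(
    const EmotionSpace& es,
    const std::map<std::string, std::map<std::string, double>>& behaviors) {
    std::string best;
    double bestScore = -1.0;
    for (const auto& [name, weights] : behaviors) {
        double score = es.activation(weights);
        if (score > bestScore) { bestScore = score; best = name; }
    }
    return best;
}
```

So an e-pet whose 'hunger' drive dominates blends toward begging rather than hiding; smoother mixing of several behaviors at once, rather than winner-take-all selection, would be the obvious refinement.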

   Currently, there's little in the way of "reasoning" either,
   ways for the e-pet to try and satisfy its e-motional 
   drives by experimenting with variations on behavioral
   themes and learning from the results. A little 'genetic'
   programming might help there - and since e-nvironments
   can be simulated, e-pets could learn VERY quickly and
   the final, more optimized, behavior library downloaded
   for mass consumption. 

   Organic animals exhibit behaviors which help them solve
   practical problems ... how to get a squeaky-toy out from
   behind a sofa for example. Dogs employ basic 'digging' and
   'biting' behaviors and apply enough variation, or 'noise'
   if you will, so that they eventually will do exactly what
   it takes to retrieve the toy. Then they *remember*, so it
   doesn't take so long the next time. They also WANT the
   toy, enough to be somewhat obsessive about it. This makes
   them persist until they finally get the job done - or some
   stronger emotions distract them. 

   E-motion is the easy part. Traces of 'reason' and learning
   aren't so easy - but clearly not impossible. Just follow
   the theme, just as Father Nature did, starting with e-motion
   and add behavior libraries which can be imperfectly combined
   and applied to solve typical real-world problems - with 
   memory being the key to refinement. 

   "Natures' Way" may not be the ONLY way ... but it's right
   here in front of us, ripe for study, brimming with examples.
   This can save us a great deal of time. ONCE we can create
   electronic intelligences somewhat "like us" - or at least
   like our dogs and cats - THEN we can take what abstract
   lessons we've learned and try more 'un-natural' ways to
   create intelligences. The e-pets and such will provide
   the money and commercial motivation. 

   Frankly, if we plan to colonize the moon and mars, I think
   we'll need to use "E-ants" or something that can build
   tunnels and structures ahead of time for us. Ants don't
   know what they're doing - but their collective behavior
   results in an emergent behavior ... nest building. So,
   in this particular field of endeavour, e-animals could
   save us all a LOT of time, effort and grief. Sounds like
   "commercial motivation" to me ... 

bw
10/31/2004 7:59:57 PM
>parr(*> wrote:

> "Kent Paul Dolan" <xanthian@well.com> wrote in message
> news:4de514d91aea221275047bea4f1dee14.48257@mygate.mailgate.org...
> | "EarlCox" <earlcox@earlcoxreports.com> wrote:
> | > "Karl-Hugo Weesberg" <netspider4@lycos.com> wrote:
> |
> | >> Todays computers are still as dumb as computer of
> | >> the 70s, so there is no chance in hell to develop
> | >> a good AI.
.... dude, be careful what you say.  computahs are inherently dumb.
they do EXACTLY whar YOU say. no more, no less, by you calling
computahs fumb you are actually calling YOURSELF dunb.

beelzibub
ps;
     this is true, dude
> |
> | > You win.
> [...]
> |
> | And by the way, the charge is correct, because every
> | time we "develop an AI", someone redefines the
> | problem so that the solved part is no longer
> | considered "AI".  That doesn't stop the research
> | efforts from being useful.
> 
> Historical note:
> 
> The PJH28 series of robots were the result of
> design work by Professors Jim and Angelique
> Patapa-Yakimuda.
> 
> In 2655, PJH28 44TSU fell in love with
> Angelique and killed Jim.  As a consequence of
> this Oedipal event, the definition of AI was
> changed to remove love and hate.
> 
> After rehabilitation, PJH28 44TSU resurrected
> the husband and tried to unscrew the wife.
> 
> That is how humour came to be removed from the
> definition of AI.
> --
> )>==ss$$%PARR(�>   Parr
> 
> 
> 


-- 
.... this is my sig. it's one of the best
sigs on the net.i know what you're asking
yourself. 'did he post 5 or 6 messages'?
well, in all the confusion i kinda lost
track myself. so you gotta ask yourself
one question 'do you feel lucky'? huh,
DO YA? DO YA PUNK'? GO FOR IT, MAKE MY BED!!!'
jesus
11/5/2004 11:15:31 PM
jesus harold christ <beelzibub1@comcast.net> wrote in message news:<2v2fsmF2hc3srU1@uni-berlin.de>...
> >parr(*> wrote:
>  
> > "Kent Paul Dolan" <xanthian@well.com> wrote in message
> > news:4de514d91aea221275047bea4f1dee14.48257@mygate.mailgate.org...
> > | "EarlCox" <earlcox@earlcoxreports.com> wrote:
> > | > "Karl-Hugo Weesberg" <netspider4@lycos.com> wrote:
>  
> > | >> Todays computers are still as dumb as computer of
> > | >> the 70s, so there is no chance in hell to develop
> > | >> a good AI.
> ... dude, be careful what you say.  computahs are inherently dumb.
> they do EXACTLY whar YOU say. no more, no less, by you calling
> computahs fumb you are actually calling YOURSELF dunb.
> 
> beelzibub
> ps;
>      this is true, dude

Thank God I'm a robot. It means I'm unaccountable for my actions.

//
hostilnakfor
11/6/2004 12:59:44 PM
In my view, thoughts are mechanical and emotions are difficult to
explain.  I guess, thoughts can be seen as complicated and emotions as
simple.  Possibly, no matter how you adjust it, an inexplicable
element remains.  An obvious limiting factor is our understanding of
psychology.  That's why I advocate starting from scratch and modeling
single/multi-cell organisms.
Another alternative is the adoration of the pattern matcher.  That
means developing a fast/fundamental way of storing related info and
using it to make predictions.  Eventually, this system might be equal
to an artificial organism.
wtkiii
11/12/2004 11:29:20 PM
On 12 Nov 2004 15:29:20 -0800, wtkiii@hotmail.com wrote:

>In my view, thoughts are mechanical and emotions are difficult to
>explain.  I guess, thoughts can be seen as complicated and emotions as
>simple.  Possibly, no matter how you adjust it, an inexplicable
>element remains.  An obvious limiting factor is our understanding of
>psychology.  Thats why I advocate starting from scratch and modeling
>single/multi-cell organisms.

   I could program-up an "emotional amoeba" crawling
   on a cybersurface with others pretty quickly. Indeed,
   the rules in many cellular automata - variants of
   the "Game of Life" - could be re-phrased as "emotional
   responses" to nearby cells ... affecting their behavior.
   For an e-organism however you'd want finer, fuzzier,
   gradations which interact in an "emotion space" to
   yield a final synthesis that will increase/decrease
   a collection of 'canned' behaviors to varying degrees.
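As a toy illustration of re-phrasing cellular-automaton rules as "emotional responses" (this is just Conway's Game of Life with its branches renamed, not a new model): cells die of 'loneliness' with fewer than two live neighbors, die of 'overcrowding' with more than three, and are 'born from companionship' at exactly three.

```cpp
#include <cassert>
#include <vector>

using Grid = std::vector<std::vector<int>>;

// Count live neighbors of cell (r, c) on a non-wrapping grid.
int liveNeighbors(const Grid& g, int r, int c) {
    int n = 0;
    for (int dr = -1; dr <= 1; ++dr)
        for (int dc = -1; dc <= 1; ++dc) {
            if (dr == 0 && dc == 0) continue;
            int rr = r + dr, cc = c + dc;
            if (rr >= 0 && rr < (int)g.size() &&
                cc >= 0 && cc < (int)g[0].size())
                n += g[rr][cc];
        }
    return n;
}

// Conway's rules phrased as "emotional responses" to nearby cells:
// loneliness (<2) and overcrowding (>3) kill; companionship (=3) births.
Grid step(const Grid& g) {
    Grid next = g;
    for (int r = 0; r < (int)g.size(); ++r)
        for (int c = 0; c < (int)g[0].size(); ++c) {
            int n = liveNeighbors(g, r, c);
            if (g[r][c] == 1)
                next[r][c] = (n < 2 || n > 3) ? 0 : 1; // lonely/crowded: die
            else
                next[r][c] = (n == 3) ? 1 : 0;         // companionship: born
        }
    return next;
}
```

A vertical three-cell "blinker" flips to horizontal under these rules, exactly as in plain Life; the renaming only matters as a stepping stone toward the finer, fuzzier gradations an e-organism would need.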

>Another alternative is the adoration of the pattern matcher.  That
>means developing a fast/fundamental way of storing related info and
>using it to make predictions.  Eventually, this system might be equal
>to an artificial organism.

   Organic life absolutely EXCELS at pattern-matching. If
   you place a vaguely cow-shaped object in a field, mosquitoes
   will fly to it. They can determine "cow" even from a variety
   of angles, under a variety of lighting conditions, against
   a variety of backgrounds ... even though there's way less
   than a cubic millimeter of brain in there (some of which must
   be dedicated to other things). So far, attempts to get
   e-life to do as well as o-life in this department haven't
   been very successful. We kind-of know how - with neural
   networks - but the best approach and the computer hardware
   stand in our way. Nature had a LONG time to perfect this.

   Ordinary recognition tasks - object, sounds, words - are
   accomplished very quickly by o-life. Of course, just
   turning on a light that says "cow" when you see, hear
   or smell it isn't "intelligence". A lot has to be done
   with that information afterwards in order to form
   thoughts and plans. Pattern-matching is a vital PART
   of intelligence, but not intelligence in and of itself.

   HOW humans and such recognize patterns/things so quickly
   isn't entirely known. My guess is a quick sequence of
   fuzzy matches until the subject and the tentative match
   share enough points of similarity - then the "Good enough,
   yes that's it" flag goes up. Of course o-life doesn't
   always get it RIGHT - something which may aid adaptability
   if it isn't fatal. 

   Roboteers have projects in mind where it's very important
   to get it exactly right EVERY time, so I wonder how much
   of the o-life model they can really make use of. Those who
   want to create e-life however may be able to tolerate,
   perhaps even covet, such little mistakes. E-Life wouldn't
   be flying your airliner or driving your car ... at least
   not until it was of near human complexity and capability.
   Robots can be stupid, so long as they quickly and cheaply
   get their little task exactly right every time.

   Oh well, it's a broad, deep subject. So many little parts,
   each so complex ... and then we will have to fit them all
   together correctly and properly balance their contributions
   to the overall e-mind. Get it just a little wrong and we
   get what happens when human brain chemistry is just a
   little wrong - insanity. It's SO complicated that I doubt
   humans will directly accomplish the task. Instead I expect
   we'll have to program the basics and then create situations
   and/or e-environments where "evolution" can take place. We
   may not understand how the final results work any better
   than we understand how our own minds work. 

bw
11/17/2004 9:54:58 PM
Karl-Hugo Weesberg wrote:
> Todays computers are still as dumb as computer of the 70s, so there is
> no chance in hell to develop a good AI.

Yes, because today's mankind is still as dumb as mankind of the stone age!
stefan
11/26/2004 2:20:05 PM
Well, if I look around at our world today, I would say that the human mind
has accomplished a great many things, from building vast cities, developing
a science that can explain the physical world in both its microscopic and
macroscopic forms, to sending manned spacecraft to the moon and unmanned
exploratory spacecraft to other planets throughout the solar system, to
evolving a mathematics that can represent and work with highly non-linear,
dynamic, multi-dimensional spaces, to writing great works of literature, to
visualizing and manipulating single atoms, to creating a world wide
communications network, to designing and mass producing amazingly small and
powerful computer-based machines, to ... well the list is impressive. Yes,
we still have our reptilian, xenophobic, tribal behaviors that may one day
destroy the environment and plunge us into terrible wars. But we are not
dumb, we are just, in a swarm intelligence way, constrained by our emotional
and biological attachment to a world of "fight or flight" that has mostly
disappeared.

Computers are not only dumb, they are just collections of circuits. Without
software they are nothing. It is not that our computers are dumb, it is our
understanding of intelligence and self-evolving intelligent systems that is
flawed. The software that will eventually replicate this process need not be
deterministic or even programmed in the traditional sense. Today some
advanced artificial life and cognitive scientists are beginning to realize
that you cannot program a computer to be intelligent, you have to find a way
of evolving intelligence.

Just an observation.
I don't have time to engage in a debate on this.
earl


--


E a r l  C o x
Founder and President
Scianta Intelligence, LLC
Turn Knowledge Into Intelligence
www scianta dot com

AUTHOR:
"The Fuzzy Systems Handbook" (1994)
"Fuzzy Logic for Business and Industry" (1995)
"Beyond Humanity: CyberEvolution and Future Minds"
(1996, with Greg Paul, Paleontologist/Artist)
"The Fuzzy Systems Handbook, 2nd Ed." (1998)
"Fuzzy Logic and Genetic Algorithms for Data Mining and Exploration"
(due Early Fall 2004)




"stefan nuernberg" <snuernberg@t-online.de> wrote in message
news:co7e26$dm$04$1@news.t-online.com...
> Karl-Hugo Weesberg wrote:
> > Todays computers are still as dumb as computer of the 70s, so there is
> > no chance in hell to develop a good AI.
>
> Yes, because todays mankind ist still as dumb as mankind of stone age!


EarlCox
11/26/2004 4:57:41 PM
On Fri, 26 Nov 2004 16:57:41 GMT, "EarlCox"
<earlcox@earlcoxreports.com> wrote:

>Well, if I look around at our world today, I would say that the human mind
>has accomplished a great many things, from building vast cities, developing
>a science that can explain the physical world in both its microscopic and
>macroscopic forms, to sending manned spacecraft to the moon and unmanned
>exploratory spacecraft to other planets throughout the solar system, to
>evolving a mathematics that can represent and work with highly non-linear,
>dynamic, multi-dimensional spaces, to writing great works of literature, to
>visualizing and manipulating single atoms, to creating a world wide
>communications network, to designing and mass producing amazing small and
>powerful computer-based machines, to ... well the list is impressive. yes,
>we still have our reptilian, xenophobic, tribal behaviors that may one day
>destroy the environment and plunge us into terrible wars. But we are not
>dumb, we are just, in a swarm intelligence way, constrained by our emotional
>and biological attachment to a world of "fight or flight" that has mostly
>disappeared.

   Not so long as the neighbors want to strong-arm you, steal
   your stuff, beat you senseless or do you in .... our main
   opponents in this world have always been EACH OTHER more
   than "nature". Keeps us sharp ... and, as a side benifit,
   has seeded most of those civilized accomplishments you
   brag about. 

>Computers are not only dumb they are just collections of circuits.

   So far. 

>Without
>software they are nothing.

   They COULD be made as modifiable firmware ... just
   as our brains are.

>It is not that our computers are dumb, it is our
>understanding of intelligence and self-evolving intelligent systems that is
>flawed.

   Missing. There's a BIG gap. We're OK on the macroscopic
   end of things ... psychology, sociology ... and we're OK
   on the microscopic end of things ... neurons, synapses,
   neuro-chemistry ... but we have been unable to ascertain
   how all the little bits and pieces go together to create
   the 'macro' result. Could take decades, maybe centuries,
   to understand ourselves with THAT degree of detail. 

>The software that will eventually replicate this process need not be
>deterministic or even programmed in the traditional sense. Today some
>advanced artificial life and cognitive scientists are beginning to realize
>that you cannot program a computer to be intelligent, you have find a way of
>evolving intelligence.

   Exactly. It's the only practical way - short of literally
   simulating someone's entire central nervous system down to
   the molecular level ... which would probably run kinda
   slow. 

   In any event, EIs will surely have to self-evolve. We
   create the proper environment ... akin to a newborn
   brain ... provide an appropriate simulated environment
   and then let e-'nature' take its course. E-bugs, e-dogs,
   e-cats first ... then bigger stuff. The good thing is
   we could get hundreds of 'generations' in a day. The
   bad thing is that any emergent EI probably won't be
   much "like us" - a veritable 'alien' instead with a
   vastly different set of priorities, a different way
   of thinking, a different emotional makeup. 

   Can we all just get along ? I doubt it. Nothing in OUR
   history suggests such a thing ... we couldn't get along
   with the Neanderthals. Any tiny 'difference' can be a
   trigger for xenophobic violence. 

bw
11/29/2004 4:27:20 PM
After reviewing pattern matching and even human mental behavior, I
think a bottom up approach is the only way.
I'd say about the best project to promote progress in this area is a
set of programs to assist with programming a collection of interacting
cells using C++.
wtkiii
12/1/2004 10:53:52 PM
First sentence -- OK, I think many emergent behavior, swarm intelligence,
and cognitive scientists in the AL field would agree with you.

Second sentence -- meaningless technobabble. Not picking a fight, just that
it doesn't say anything (and what it does say is a distillation of the broad
cellular automata approach that has been tried for 30+ years).



<wtkiii@hotmail.com> wrote in message
news:af26eb2.0412011453.23318864@posting.google.com...
> After reviewing pattern matching and even human mental behavior, I
> think a bottom up approach is the only way.
> I'd say about the best project to promote progress in this area is a
> set of programs to assist with programming a collection of interacting
> cells using C++.


EarlCox
12/1/2004 11:27:10 PM
Maybe the 'cells' are too simple ? 

I can understand the joy in trying to set up an
environment where some kind of e-life can evolve
literally from the 'neuron' level on up ... pure
research ... but that may not be expedient. We
know a little about how brains are set up. Why
not do a little 'seeding' - pre-writing a variety
of fairly simple 'canned' abilities and behaviors
that can actually DO something and then let the
e-nvironment work on evolving useful connections
and alliances between those elements ? 

We could wait forever to evolve functional
pattern-matching functions of even the simplest
kind if we start totally from scratch. Even
when they take shape, there's no guarantee
they'll be amenable to whatever evolutionary
approach we're using ... ie they may not have
enough 'handles' to link to other routines or
be too stable (or unstable) to be evolved
further. 

Just because 'nature' got it right ONCE doesn't
mean we'd have the same luck. An interesting 
experiment - true - but then there exist things
known as 'budgets' and 'deadlines' and the
dreaded "Gee, I'm wasting my time with this"
factor to contend with.

Even a crude simulation of e-life in the form
of e-pets and household robots has proven
commercially profitable. If NOT starting from
scratch yields sellable results more quickly
then I'd say it's worth doing. The cash derived
will fund the 'pure' research and attract the
people needed for such research. 

A couple of years ago, I picked-up a copy of a
book by Dr. V. S. Ramachandran - a neurologist -
entitled "Phantoms in the Brain".
While it wasn't super-technical, it DID impart
some potentially useful ideas about how minds
operate at the middle level - ideas that might
be translated to e-life. 

There are layers upon layers of "little brains",
each more simple and task-specific, inside our
skulls. The sum-total comprises "us". Many of
the upper-level minds are fully intelligent
and muse over different subjects of interest
until needed, whereupon they manifest fully
as the mainstream consciousness. You can easily
experience this effect as you listen to someone
in conversation ... some subject that interests
you will come up but you can't respond at that
instant, yet even as you're listening to the
rest of the person's speech you're also formulating
the point you're going to bring up. It's a
"background task" which can then be brought to
the foreground. A similar experience of "little
brains" can be had if you go to a quiet place
and try to shut-down EVERY thought in your head.
It's not easy at all ... you'll become aware of
"mini-me's" in there practicing bits of conversation,
obsessively going-over a bit of music, analyzing
pictures, making-up stories ... all at the same time,
although just not the mainstream conscious process
at the moment.

Now well below the level of 'conscious' mini-brains
there are well-developed pattern-matchers linked to
the memory store, a database of well-refined 
behaviors and such. These ultimately feed-forward
to the more 'conscious' mind(s). Each bit however
is already quite sophisticated - something you
could build/evolve some pretty good e-life out of
without having to worry about 'consciousness' proper.

Such 'functional modules' are the brain's middleware
and I think basic ones ARE within our ability to
create whole in software. If we start THERE, with
bits we KNOW work, we KNOW serve useful purposes
and we KNOW can be integrated with others - and then
let e-volution take its course ... we may get a lot
further a lot faster. 

I don't have the time or resources to pursue this, but
SOME people do ... 



On Wed, 01 Dec 2004 23:27:10 GMT, "EarlCox"
<earlcox@earlcoxreports.com> wrote:
>
>First sentence -- OK, I think many emergent behavior, swarm intelligence,
>and cognitive scientists in the AL field would agree with you.
>
>Second sentence -- meaningless technobabble. Not picking a fight, just that
>it doesn't say anything (and what it does say is a distillation of the broad
>cellular automata approach that has been tried for 30+ years).
>
>
>
><wtkiii@hotmail.com> wrote in message
>news:af26eb2.0412011453.23318864@posting.google.com...
>> After reviewing pattern matching and even human mental behavior, I
>> think a bottom up approach is the only way.
>> I'd say about the best project to promote progress in this area is a
>> set of programs to assist with programming a collection of interacting
>> cells using C++.
>
>

0
bw
12/2/2004 1:41:54 PM
C++ is more complex than it has to be for this job, but an "object"
with some data, some functions, and some pointers to connect it to
other cells may be an adequate model for a cell.  Maybe that reduces
the techno-babble.  I think that the Alife and automata people have
been spending most of their time with grids of cells that are supposed
to do something interesting if they run awhile.  I don't think a group
of cells will evolve itself very successfully.  I think the programmer
has to copy nature by learning to program with simple cells that don't
do much by themselves.  It is tempting to try to speed things up by
using more sophisticated units, but then you lose the parallel to the
nervous system.  It shouldn't take too long to figure out how cells
get a tadpole tail to make swimming motions, etc.  Then the layers can
be built up from those simple units.  The observation that this would
be a tedious, time-consuming project seems correct, but people have
been trying the brilliant, breakthrough method for 53? years.  If you
ignore the "impossibility" of the method, it looks like it should
work.  I'd like to know who the SOME people are who can afford this
research.  I'm still bankrupt from trying to figure out the mind and
program it.  Some of the high level activity isn't too bad, but the
support layers have a way of disappearing into unfathomable circuitry.
0
wtkiii
12/4/2004 1:09:59 AM
> Karl-Hugo Weesberg wrote:
> > Todays computers are still as dumb as computer of the 70s, so there is
> > no chance in hell to develop a good AI.

It could also be asserted that mankind could never travel around the entire
earth, if indeed there is such a thing as around.


0
AngleWyrm
12/4/2004 6:50:46 AM
Well, it couldn't be asserted because it has happened. You should have said,
"It might also have been asserted...", in any case this is hardly true. When
many thought the earth was flat, people still traveled around the known
world (the word Mediterranean, comes from "middle of the earth (or world)"
and reflects a belief that the cities scattered around that sea were
located at the center of the earth). When it was generally known that the
earth was round, many navigators made the trip. They circumnavigated the
globe or, in plain English, they traveled around the world.

And in what sense do you doubt that there is such a thing as around? I can
walk around the room, around my house, I used to say that I really get
around, and I often find myself walking around and around, and, of course, I
can fly around the world. In the context that you used the word, I fail to
see any deep mystical or semiotic meaning. Now, if you had questioned the
world "entire", well that would be a completely different matter! <grin>


--


E a r l  C o x
Founder and President
Scianta Intelligence, LLC
Turn Knowledge Into Intelligence
scianta dot com

AUTHOR:
"The Fuzzy Systems Handbook" (1994)
"Fuzzy Logic for Business and Industry" (1995)
"Beyond Humanity: CyberEvolution and Future Minds"
(1996, with Greg Paul, Paleontologist/Artist)
"The Fuzzy Systems Handbook, 2nd Ed." (1998)
"Fuzzy Logic and Genetic Algorithms for Data Mining and Exploration"
(due Early Fall 2004)



"AngleWyrm" <no_spam_anglewyrm@hotmail.com> wrote in message
news:a7dsd.140339$5K2.15588@attbi_s03...
> > Karl-Hugo Weesberg wrote:
> > > Todays computers are still as dumb as computer of the 70s, so there is
> > > no chance in hell to develop a good AI.
>
> It could also be asserted that mankind could never travel around the
entire
> earth, if indeed there is such a thing as around.
>
>


0
EarlCox
12/4/2004 9:08:31 PM
In article <jHpsd.12689$8S5.1467874@twister.southeast.rr.com>,
 "EarlCox" <earlcox@earlcoxreports.com> wrote:

> Well, it couldn't be asserted because it has happened.

But it COULD be asserted.

In addition, it could be asserted that it could be asserted, even though 
it already was, and it could also be asserted that it could never be 
asserted.

I hereby assert that this entire thread never happened!

Eep!

-- 
Please take off your shoes before arriving at my in-box.
I will not, no matter how "good" the deal, patronise any business which sends
unsolicited commercial e-mail or that advertises in discussion newsgroups.
0
Miss
12/5/2004 1:56:24 AM
Well, you can, naturally, assert that the earth is a cube of slightly rancid
lime Jell-O, that Richard Nixon's son, Toddelbeam, would be appointed an
ambassador to China, that birds fly because they are filled with  a
currently unknown isotope of Helium,  that the famous mime opera "Gilgamesh
and the Seven Babylonian Tarts" was secretly written by Cole Porter, or that
the atomic weight of titanium is 17 (give or take a couple of electron
volts). So I stand corrected. I meant that you cannot assert something as a
possible fact and have some probability (or possibility) that it will be
true when the complement of that something has already taken place.

That, of course, applies to your current assertion <grin>

e.




"Miss Elaine Eos" <Misc@*your-shoes*PlayNaked.com> wrote in message
news:Misc-D5EE10.17562404122004@individual.net...
> In article <jHpsd.12689$8S5.1467874@twister.southeast.rr.com>,
>  "EarlCox" <earlcox@earlcoxreports.com> wrote:
>
> > Well, it couldn't be asserted because it has happened.
>
> But it COULD be asserted.
>
> In addition, it could be asserted that it could be asserted, even though
> it already was, and it could also be asserted that it could never be
> asserted.
>
> I hereby assert that this entire thread never happened!
>
> Eep!
>
> --
> Please take off your shoes before arriving at my in-box.
> I will not, no matter how "good" the deal, patronise any business which
sends
> unsolicited commercial e-mail or that advertises in discussion newsgroups.


0
EarlCox
12/5/2004 6:41:02 PM
"EarlCox" <earlcox@earlcoxreports.com> wrote in message
news:jHpsd.12689$8S5.1467874@twister.southeast.rr.com...
> Well, it couldn't be asserted because it has happened. You should have said,
> "It might also have been asserted...", in any case this is hardly true. When

The second line indicates that the message was understood. Even seeing the
message and its intent, the respondent has chosen to ignore it, claiming it was
incorrectly phrased for consumption.

Your sentence-parsing engine has failed to be robust enough, in that it has
both detected the meaning and, at the same time, rejected the sentence.

Proof that human intelligence will never work in 100 years.

-:|:-
AngleWyrm


0
AngleWyrm
12/6/2004 1:36:33 PM
On 3 Dec 2004 17:09:59 -0800, wtkiii@hotmail.com wrote:

>C++ is more complex than it has to be for this job, but an "object"
>with some data, some functions, and some pointers to connect it to
>other cells may be an adequate model for a cell.

   C++ and other 'object' languages seem excessive for
   a lot of purposes. All that inheritance tends to 
   add a burden unless the compilers are VERY clever.
   Hey, remember when you could just draw a line without
   having to do "Aaaa.Bbbb.Cccc.Dddd.Eeee.Ffff.DrawLine()" ?

>Maybe that reduces
>the techno-babble.  I think that the Alife and automata people have
>been spending most of their time with grids of cells that are supposed
>to do something interesting if they run awhile.  I don't think a group
>of cells will evolve itself very successfully. 

   They're hoping for 'evolution'. That's fine for 'pure'
   science ... but you could diddle and diddle with those
   cells and NEVER get anything useful to evolve. I think
   they need to start at a somewhat higher level - with
   'cells' that actually can DO interesting things - and
   then try some more evolution. 

>I think the programmer
>has to copy nature by learning to program with simple cells that don't
>do much by themselves.  It is tempting to try to speed things up by
>using more sophisticated units, but then you loose the parallel to the
>nervous system.

   Hmmm ... not necessarily. Animal nervous systems (and brains)
   aren't entirely ad-hoc. They use functional "modules" - 
   clusters of nerves that are pre-programmed to perform some
   specific task. In 'lower' animals we're talking reflexive
   behaviors - some of which can be quite sophisticated. In
   "smarter" animals we're talking about units within the
   brain dedicated to a certain class of problem - and capable
   of interacting with similar units to form 'emergent machines'
   of even greater complexity. 

   Going too simple won't get you far - and likewise being
   TOO ambitious might mean you spend decades trying to 
   perfect one clever 'module'. Fortunately, there's a broad
   middle ground to be explored. 

>It shouldn't take too long to figure out how cells
>get a tadpole tail to make swimming motions, etc. 

   We've pretty much got that now. 

>Then the layres can
>be built up from those simple units.  The observation that this would
>be a tedious, time consuming project seems correct, but people have
>been trying the brilliant, breakthrough method for 53? years.

   Fortunately, both the simple units and a selective
   environment can be simulated with considerable speed
   nowadays. It seems that some of these problems are
   amenable to parallel processing methods - opening
   the door to using cluster computers or even distributed
   computers over the net to run parts of the sim. 

>If you
>ignore the "impossibility" of the method, it looks like it should
>work.  I'd like to know who the SOME people are who can afford this
>research.  I'm still bankrupt from trying to figure out the mind and
>program it.  Some of the high level activity isn't too bad, but the
>support layres have a way of disappearing into unfathomable circuitry.

   I would love to see a collective EI project similar to
   the approach used to refine the Linux operating system.
   Hundreds, even thousands, of people and institutions
   could assist. So long as common methods and templates
   were adhered to it could work. Most people don't have
   a LOT of time to spend on EI ... but a lot of people
   have a LITTLE free time to pursue the subject. Get
   them all on the same page. 

0
bw
12/6/2004 11:01:37 PM
Of course, the parallel design of cells makes parallel hardware a good
choice, but systems of 100-1000 cells can be programmed and run on a
PC.  In a peripheral program, graphics (I can't do Windows graphics)
or text output can be used to display the state of the cells.  By the
way, inheritance isn't needed, composition is enough.

0
wtkiii
12/24/2004 8:32:45 PM
On 24 Dec 2004 12:32:45 -0800, "wtkiii" <wtkiii@hotmail.com> wrote:

>Of course, the parallel design of cells makes parallel hardware a good
>choice, but systems of 100-1000 cells can be programmed and run on a
>PC.  In a peripheral program, graphics (I can't do Windows graphics)
>or text output can be used to display the state of the cells.  By the
>way, inheritance isn't needed, composition is enough.

   Neural nets, or some distillation of their function, CAN
   be done using conventional microprocessors. The problem
   is the increasing price of parallelization - it's not
   just the raw number of 'neurons' but all of the possible
   interconnections. Organic neurons may have dozens of
   links to others, which may have dozens of links to
   others and so on and so forth. The computing task
   very rapidly mushrooms, thus limiting the practical
   size of your simulations. Too few 'neurons' and you
   probably won't get many worthwhile results. 

   Clearly we need a different approach, something closer
   to nature. 3-dimensional programmable gate arrays are 
   probably required. Even if each simulated neuron works
   rather slowly, as do real nerves, the massive degree
   of interlinking possible might save the day. 

   The other approach is to NOT try and simulate real
   nerves at all. The olde-tyme AI people tried to
   substitute algorithms for 'neurons' - Minsky's
   "Society of Mind" is full of this. Alas they were
   MISSING something ... there was no 'glue' binding
   all these relatively high-level processes together.
   They could produce PARTS of thought with minimal
   hardware, but the parts wouldn't go together to
   create an actual 'mind' worthy of a flea, much
   less a human. 

   Still, emulation of nature will only get us just SO
   far ... and then you may as well just stick to nature
   and take up genetic engineering. Nature has much to
   teach - it's spent billions of years getting things
   THIS good - but it's not necessarily the BEST way
   to do things with electronics as we understand the
   term today. I predict a composite approach - part
   'nature', part algorithmic abstraction - will 
   eventually yield the best results. 

0
bw
12/24/2004 10:13:07 PM
"Karl-Hugo Weesberg" <netspider4@lycos.com> (a fake
ID) wrote in comp.ai.alife (WTF?) (as part of a
hand-done troll of much of Usenet, apparently a
piece of some troll-tribe contest to draw responses
to a target Canadian-oriented newsgroup which is not
a formal member of Usenet):

> Today[']s computers are still as dumb as
> computer[s] of the 70s, so there is no chance in
> hell to develop a good AI.

Not really:

http://www-unix.mcs.anl.gov/AR/new_results/

shows computers doing creative proofs in high power
pure math, and apparently they've been doing so for
more than three decades so far, in some cases solving
problems that baffled humans for over half a
century.

http://www.nytimes.com/library/cyber/week/1210math.html
http://www-unix.mcs.anl.gov/~mccune/papers/robbins/nyt-corrections.html
http://www-unix.mcs.anl.gov/~mccune/papers/robbins/

The interesting part here is that the principal
researchers for some of these problems specifically
chose _not_ to get bogged down in understanding or
imitating human thought processes, but to
concentrate instead on problem solving as best done
using the known strengths of computers, not of humans.

Thus, no navel immersion time in philosophy, nor in
psychology, seems necessary AT ALL to doing AI
successfully, just attention to writing well crafted
software, something we computer geeks have rather
expected all along.

FYI

xanthian.

This site, found along the way, is also interesting,
in a historical "computers rule, dude" sense, though
not so close to demonstrating AI. A second, simpler,
though still overwhelming to humans, computer
mediated proof of the Four Color Map theorem has
been done, to much less fanfare than the first one.

http://www.math.gatech.edu/~thomas/FC/fourcolor.html

[This post, too, of course, is a malice-aforethought
troll, intended to function as bamboo splinters
under the fingernails of only the most richly
deserving Usenet kooks.  However, some, nay many,
net.people are so profoundly clueless they need this
obvious reality spelled out for them, and routinely
followup troll-spew postings better forge-cancelled.
Thus this note.  Respond to this posting only at the
risk of reading millions if not hundreds, of idiotic
followups, your own foremost among them,
contaminating your favorite newsgroup until the sun
burns down to an ember, in a tradition probably
already old when the dinosaurs first evolved.]

0
xanthian
1/1/2005 3:14:06 AM
On 31 Dec 2004 19:14:06 -0800, xanthian@well.com wrote:
>"Karl-Hugo Weesberg" <netspider4@lycos.com>

>> Today[']s computers are still as dumb as
>> computer[s] of the 70s, so there is no chance in
>> hell to develop a good AI.

>Not really:
>
>http://www-unix.mcs.anl.gov/AR/new_results/
>
>shows computers doing creative proofs in high power
>pure math, and apparently they've been doing so for
>more than three decades so far, in some cases solving
>problems that baffled humans for over half a
>century.

Proving theorems fundamentally takes no more "intelligence" than your
basic string search and replace. Such systems are very restricted.
That computers can play chess or prove theorems doesn't show the level
of advancement of AI research but that such tasks are in fact
mechanical in nature.

0
nobody
1/23/2005 4:40:50 PM
Defining intelligence to be super-mechanical is a presupposition in
your post, which I find myself at complete disagreement with, as do
many physicalists.

Why do you need to hang on to substance dualism of the less enlightened
ages?

Regards,

--
Eray

0
examachine
1/23/2005 5:49:15 PM
On 23 Jan 2005 09:49:15 -0800, examachine@gmail.com wrote:

>Defining intelligence to be super-mechanical is a presupposition in
>your post, 

Everything is mechanical. But there are easy mechanical problems and
hard mechanical problems. Theorem proving is an easy mechanical
problem since it operates within the confines of an axiomatic system.
Applying axioms and theorems ad nauseam with clever search strategies
to make the process practical does not advance AI research. Not to say
that I find that worthless. In fact, freeing humans from the tedious
computing tasks is a very worthy endeavour. However, theorem proving
and chess playing systems notwithstanding, we are as far away from AI
as we were the day the term was first coined.

0
nobody
1/23/2005 7:36:50 PM
Hi nobody,

I agree that an AI system incapable of discovering new axioms would be
worthless.

It was not clear from your original post that you thought everything is
mechanical.

I'll ask you a question. Can this mechanics you talk of be simulated by
Turing mechanics?

Regards,

--
Eray Ozkural

0
examachine
1/23/2005 8:44:25 PM
"nobody" <nobody@here.com> wrote:

> Prooving theorems fundementally takes no more
> "intelligence" than your basic string search and
> replace. Such systems are very restricted.  That
> computers can play chess or prove theorems doesn't
> show the level of advancement of AI research but
> that such tasks are in fact mechanical in nature.

Unfortunately you have fallen victim to the "let's
move the goalposts" syndrome.

_Of course_ there will never be AI if every time
some researcher achieves some AI goal, it is
promptly redefined to be something "mere computers"
can do, and therefore merely "mechanical in nature".

Beating the world's best checkers player,
challenging the world's best chess players, solving
theorems that have baffled humans for generations,
designing aircraft engine turbines better than the
best human design, designing truss bridges better
than the best human engineering designs, scheduling
workflows better than the best expediter, playing a
competitive game of soccer, are all things computers
have already achieved, all examples of successful
AI, except when goalpost movers redefine AI to
exclude them as soon as they are accomplished, lest
their contention that AI is valueless be noticed to
be an invalid and hollow claim, full of species
chauvinism and signifying nothing.

_Before_ each of those was achieved, it was
considered to be an AI goal of great significance.

Unfortunately for your thesis, each still is, and
AI is splendidly successful.

FWIW

xanthian.

0
Kent
1/23/2005 9:38:11 PM
"Kent Paul Dolan" <xanthian@well.com> wrote:
>"nobody" <nobody@here.com> wrote:

>Unfortunately you have fallen victim to the "let's
>move the goalposts" syndrome.
>
>_Of course_ there will never be AI if every time
>some researcher achieves some AI goal, it is
>promptly redefined to be something "mere computers"
>can do, and therefore merely "mechanical in nature".
>
>Beating the world's best checkers player,
>challenging the worlds best chess players, solving
>theorems that have baffled humans for generations,
>designing aircraft engine turbines better than the
>best human design, designing truss bridges better
>than the best human engineering designs, scheduling
>workflows better than the best expediter, playing a
>competitive game of soccer, are all things computers
>have already achieved, all examples of successful
>AI, except when goalpost movers redefine AI to
>exclude them as soon as they are accomplished, lest
>their contention that AI is valueless be noticed to
>be an invalid and hollow claim, full of species
>chauvanism and signifying nothing.
>
>_Before_ each of those was achieved, it was
>considered to be an AI goal of great significance.

You could also add "cracking codes faster and more reliably than any
human could" to the list <g>. Numerical solutions to optimization
problems were known and used long before AI or even computers came
into being. It's just that as the speed of the hardware increased and
these algorithms were refined, computers can tackle more and more
complicated problems in a timeframe that's practical. Designing
aircraft engine turbines better than the best human design is merely a
scaled up version of finding the minimum of a simple polynomial
function and nobody ever thought software would need to be
particularly intelligent to do that, just well designed. It's not a
matter of moving goalposts but defining them. Since the 70's,
goalposts have been defined lower and lower while the power of the
hardware has grown exponentially. Specialized solutions are fine and
it's no secret that a number cruncher can optimize FEA or simulation
far better than a human.

The problem of AI is that interaction with "reality" is the
fundamental piece of the puzzle. Systems restricted to particular
structured IO will never be able to adapt. Unfortunately, current
computer architecture doesn't allow that.

0
nobody
1/24/2005 10:41:34 AM
On 23 Jan 2005 12:44:25 -0800, examachine@gmail.com wrote:

>I agree that an AI system incapable of discovering new axioms would be
>worthless.
>
>It was not clear from your original post that you thought everything is
>mechanical.

Fair enough, I too agree it wasn't clear.

>I'll ask you a question. Can this mechanics you talk of simulated by
>Turing mechanics?

Simulated, yes, if you manage to enumerate all the states that would
constitute a successful simulation. And inasmuch as the decision
process of determining whether a simulation is successful is itself
finite, there's no reason to think otherwise. But if that's what we
define as AI, it's a very big disappointment.

0
nobody
1/24/2005 11:09:36 AM
nobody said:

:>I'll ask you a question. Can this mechanics you talk of simulated by
:>Turing mechanics?

:Simulated, yes, if you manage to enumerate all the states that would
:constitute a successful simulation. And inasmuch as the decision
:process of determining whether a simulation is successful is itself
:finite, there's no reason to think otherwise. But if that's what we
:define as AI, it's a very big disappointment.

I understand. Could you please elaborate on what would not be a
disappointment?

I guess you are saying something like, well, what if everything is
computation; that does not seem to explain anything in particular about
the mind.

Regards,

--
Eray Ozkural

0
examachine
1/24/2005 2:11:39 PM
   [Silly cross-posts removed.]

In article <1106516291.867806.286210@c13g2000cwb.googlegroups.com>,
 "Kent Paul Dolan" <xanthian@well.com> wrote:

> [...] playing a
> competitive game of soccer, are all things computers
> have already achieved

Computers play soccer?!

-- 
Please take off your shoes before arriving at my in-box.
I will not, no matter how "good" the deal, patronise any business which sends
unsolicited commercial e-mail or that advertises in discussion newsgroups.
0
Miss
1/24/2005 4:38:40 PM
"Miss Elaine Eos" <Misc@PlayNaked.com> wrote:

>    [Silly cross-posts removed.]

And put right back in.

>  "Kent Paul Dolan" <xanthian@well.com> wrote:

>> [...] playing a
>> competitive game of soccer, are all things computers
>> have already achieved

> Computers play soccer?!

One would expect this to be very well known; it
dominates the technology news when the tournaments
are underway:

Results 1 - 10 of about 50,800 for robot.soccer.
http://www.google.com/search?q=robot.soccer

Just like the equally well known, and much longer
standing, robot selection of randomly oriented
parts from bins for use in assembly lines in the
AI computer vision field, and less well known AI
tracking of suspicious guests at casinos via video
surveillance camera images matched from camera to
camera as the guest wanders around the casino,
changing orientation and perhaps clothing, it puts
paid to our agenda-driven goalpost movers' claims
that computer AI applications "don't interact with
the real world" -- of course they do, and have for
decades, to great commercial effect.

Nor is use of a genetic algorithm to design a turbine
blade "just like optimizing a polynomial equation".
It is instead a discovery of new engineering
information and techniques, for genetic algorithms
are used where human knowledge isn't sufficient to
express problems in ways solvable in closed form.

That computers sometimes solve these problems using
the strengths of computers: speed, memory capacity,
tireless trial and error, and ability to simulate
rather than construct trial solutions, rather than
the strengths of humans to perceive cause and effect
and to notice relationships, doesn't make the
computer's ability to get from point A to point B
any less an evidence of "intelligence".  "Being
mechanical" and "being unintelligent" are no longer
synonyms, if they ever were.

AI is about _solving problems_, problems unsolvable
in practice by humans, with the AI using "any old
way", not _necessarily_ or _by definition_ merely
about mimicking _human_ problem solution techniques.

That latter (non-)requirement is mere "species
chauvanism", is without merit, and is not worthy of
discussion.

That doesn't remove the reality that many AI
researchers are specifically trying to do that very
thing: figure out how to more closely mimic humans
and human methods, of course, since that is an
interesting problem in and of itself, and a very
hard one too.

But the human way to find an answer to a problem
from a given set of data is not the "only
acceptable" way, and those who attempt to move the
goalposts past any un-human solution achieved by AI
and call such solutions "unintelligent" deserve
our scorn for letting their agendas overcome their
respect for science and destroy their ability to
reason, not our serious attention.

xanthian.

0
xanthian
1/25/2005 2:32:10 AM
"Miss Elaine Eos" <Misc@PlayNaked.com> wrote:

>    [Silly cross-posts removed.]

And put right back in.

>  "Kent Paul Dolan" <xanthian@well.com> wrote:

>> [...] playing a
>> competitive game of soccer, are all things computers
>> have already achieved

> Computers play soccer?!

One would expect this to be very well known, it
dominates the technology news when the tournaments
are underway:

Results 1 - 10 of about 50,800 for robot.soccer.
http://www.google.com/search?q=robot.soccer

Just like the equally well known, and much longer
standing, robot selection of randomly oriented
parts from bins for use in assembly lines in the
AI computer vision field, and less well known AI
tracking of suspicious guests at casinos via video
surveillance camera images matched from camera to
camera as the guest wanders around the casino,
changing orientation and perhaps clothing, it puts
paid to our agenda driven goalpost mover's claims
that computer AI applications "don't interact with
the real world" -- of course they do, and have for
decades, to great commercial effect.

Nor is use of a genetic algorithm to design a turbin
blade "just like optimizing a polynomial equation".
It is instead a discovery of new engineering
information and techniques, for genetic algorithms
are used where human knowledge isn't sufficient to
express problems in ways solvable in closed form.

That computers sometimes solve these problems using
the strengths of computers: speed, memory capacity,
tireless trial and error, and ability to simulate
rather than construct trial solutions, rather than
the strengths of humans to perceive cause and effect
and to notice relationships, doesn't make the
computer's ability to get from point A to point B
any less an evidence of "intelligence".  "Being
mechanical" and "being unintelligent" are no longer
synonyms, if they ever were.

AI is about _solving problems_, problems unsolvable
in practice by humans, with the AI using "any old
way", not _necessarily_ or _by definition_ merely
about mimicking _human_ problem solution techniques.

That latter (non-)requirement is mere "species
chauvanism", is without merit, and is not worthy of
discussion.

That doesn't remove the reality that many AI
researchers are specifically trying to do that very
thing: figure out how to more closely mimic humans
and human methods, of course, since that is an
interesting problem in and of itself, and a very
hard one too.

But the human way to find an answer to a problem
from a given set of data is not the "only
acceptable" way, and those who attempt to move the
goalposts past any un-human solution achieved by AI
and call such solutions "unintelligent" deserve
our scorn for letting their agendae overcome their
respect for science and destroy their ability to
reason, not our serious attention.

xanthian.

0
xanthian
1/25/2005 2:35:10 AM
Miss Elaine Eos <Misc@*your-shoes*PlayNaked.com> wrote:
>In article <1106516291.867806.286210@c13g2000cwb.googlegroups.com>,
> "Kent Paul Dolan" <xanthian@well.com> wrote:

>> [...] playing a
>> competitive game of soccer, are all things computers
>> have already achieved

>Computers play soccer?!

Not just soccer, they play pong too <g>

0
nobody
1/25/2005 9:13:25 AM
On 23 Jan 2005 13:38:11 -0800, "Kent Paul Dolan" <xanthian@well.com>
wrote:

>"nobody" <nobody@here.com> wrote:

>> Prooving theorems fundementally takes no more
>> "intelligence" than your basic string search and
>> replace. Such systems are very restricted.  That
>> computers can play chess or prove theorems doesn't
>> show the level of advancement of AI research but
>> that such tasks are in fact mechanical in nature.

>Unfortunately you have fallen victim to the "let's
>move the goalposts" syndrome.

>_Of course_ there will never be AI if every time
>some researcher achieves some AI goal, it is
>promptly redefined to be something "mere computers"
>can do, and therefore merely "mechanical in nature".

>Beating the world's best checkers player,
>challenging the worlds best chess players, solving
>theorems that have baffled humans for generations,
>designing aircraft engine turbines better than the
>best human design, designing truss bridges better
>than the best human engineering designs, scheduling
>workflows better than the best expediter, playing a
>competitive game of soccer, are all things computers
>have already achieved, all examples of successful
>AI, except when goalpost movers redefine AI to
>exclude them as soon as they are accomplished, lest
>their contention that AI is valueless be noticed to
>be an invalid and hollow claim, full of species
chauvinism and signifying nothing.

>_Before_ each of those was achieved, it was
>considered to be an AI goal of great significance.

>Unfortunately for your thesis, each still is, and
>AI is splendidly successful.

I sometimes wonder if everybody looks at the big picture of AI.  AI is
EVERYTHING concerning artificial intelligence.  It's not just about
checkers or theorem solving or face recognition or chatterbots.  To
truly solve AI, an entity needs to be able to do all of these things.
To solve chess is a great (and expensive) achievement, but it doesn't
solve AI.  I would want an AI to recognise me, remember details I've
told it before, play a game of chess (and not necessarily play a
perfect game or win all of the time), talk politics (using speech
recognition), and talk about a TV program or film it watched.  Then I
would wonder if AI has been solved.


Sig:
Work saves us from three great evils: boredom, vice and need. -Voltaire,
philosopher (1694-1778)
0
Rotes
1/26/2005 2:54:54 AM
xanthian@well.com wrote:

> "Miss Elaine Eos" <Misc@PlayNaked.com> wrote:
> 
> 
>>   [Silly cross-posts removed.]
> 
> 
> And put right back in.
> 
> 
>> "Kent Paul Dolan" <xanthian@well.com> wrote:
> 
> 
>>>[...] playing a
>>>competitive game of soccer, are all things computers
>>>have already achieved
> 
> 
>>Computers play soccer?!
> 
> 
> One would expect this to be very well known, it
> dominates the technology news when the tournaments
> are underway:
> 
> Results 1 - 10 of about 50,800 for robot.soccer.
> http://www.google.com/search?q=robot.soccer
> 
> Just like the equally well known, and much longer
> standing, robot selection of randomly oriented
> parts from bins for use in assembly lines in the
> AI computer vision field, and less well known AI
> tracking of suspicious guests at casinos via video
> surveillance camera images matched from camera to
> camera as the guest wanders around the casino,
> changing orientation and perhaps clothing, it puts
> paid to our agenda-driven goalpost movers' claims
> that computer AI applications "don't interact with
> the real world" -- of course they do, and have for
> decades, to great commercial effect.
> 
> Nor is use of a genetic algorithm to design a turbine
> blade "just like optimizing a polynomial equation".
> It is instead a discovery of new engineering
> information and techniques, for genetic algorithms
> are used where human knowledge isn't sufficient to
> express problems in ways solvable in closed form.
> 
> That computers sometimes solve these problems using
> the strengths of computers: speed, memory capacity,
> tireless trial and error, and ability to simulate
> rather than construct trial solutions, rather than
> the strengths of humans to perceive cause and effect
> and to notice relationships, doesn't make the
> computer's ability to get from point A to point B
> any less an evidence of "intelligence".  "Being
> mechanical" and "being unintelligent" are no longer
> synonyms, if they ever were.
> 
> AI is about _solving problems_, problems unsolvable
> in practice by humans, with the AI using "any old
> way", not _necessarily_ or _by definition_ merely
> about mimicking _human_ problem solution techniques.
> 
> That latter (non-)requirement is mere "species
> chauvinism", is without merit, and is not worthy of
> discussion.
> 
> That doesn't remove the reality that many AI
> researchers are specifically trying to do that very
> thing: figure out how to more closely mimic humans
> and human methods, of course, since that is an
> interesting problem in and of itself, and a very
> hard one too.
> 
> But the human way to find an answer to a problem
> from a given set of data is not the "only
> acceptable" way, and those who attempt to move the
> goalposts past any un-human solution achieved by AI
> and call such solutions "unintelligent" deserve
> our scorn for letting their agendas overcome their
> respect for science and destroy their ability to
> reason, not our serious attention.
> 
> xanthian.
> 
Here's a curious piece of writing, in a legal journal (Legal Affairs), 
defining parameters and conditions for assigning legal rights to machines.

http://www.legalaffairs.org/issues/January-February-2005/feature_sokis_janfeb05.html

<quote>
In all these cases, thinking about A.I. as a legal matter forces us to 
confront the indeterminacy of many of our legal thresholds and 
demarcations. This is both sobering and salutary. If we choose to do so, 
denying A.I. rights should be an affirmative act. And if we decide to 
pursue A.I. rights, we should remain aware of the ethical and legal 
implications of that decision.
</quote>

What? You think it's too soon?
0
pleonasm
1/27/2005 12:06:09 AM
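The genetic-algorithm argument quoted above — search where no closed-form solution is known — can be illustrated with a toy sketch. This is not the turbine-design system the post refers to; the bit-string target and all parameter values are made up for illustration, with the fitness function standing in for what would really be an engineering simulation.

```python
import random

# Toy genetic algorithm: evolve a bit string toward an arbitrary target.
# In a real design GA the fitness function would be a physics simulation;
# here it is just "count the matching bits".

TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 1, 0, 0, 1, 0, 1]

def fitness(genome):
    """Higher is better: number of bits matching the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    """Flip each bit independently with the given probability."""
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    """Single-point crossover of two parent genomes."""
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best), len(TARGET))
```

The poster's point survives the toy scale: nothing in the loop "knows" why a genome is good, yet selection plus variation finds solutions no one wrote down.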
>> AI is about _solving problems_, problems unsolvable
>> in practice by humans, with the AI using "any old
>> way", not _necessarily_ or _by definition_ merely
>> about mimicking _human_ problem solution techniques.

Tell me more about solving problems.
 
>> That latter (non-)requirement is mere "species
>> chauvinism", is without merit, and is not worthy of
>> discussion.

I see.
 
>> That doesn't remove the reality that many AI
>> researchers are specifically trying to do that very
>> thing: figure out how to more closely mimic humans
>> and human methods, of course, since that is an
>> interesting problem in and of itself, and a very
>> hard one too.

Is it because of your mother that many AI researchers are trying to
do that very thing?
--scott


-- 
"C'est un Nagra.  C'est suisse, et tres, tres precis."
0
kludge
1/27/2005 12:13:40 AM
"pleonasm" <pleonasmchinesefood@indigestion.com> wrote:

> Here's a curious piece of writing, in a legal
> journal (Legal Affairs), defining parameters and
> conditions for assigning legal rights to machines.

Well, it will give scientists pause indeed if their
acts of creation might produce something with legal
rights requiring immediate respecting. How does one
then continue the planned experiments on the
subject?

> http://www.legalaffairs.org/issues/January-February-2005/feature_sokis_janfeb05.html

One hopes, to answer, with somewhat more kindness
than the cosmetics researchers apply to rabbits.

> What? You think it's too soon?

No, for the thought of them attaining legal rights
has never stopped us producing children, so being
ready before-times is probably prudent planning.

AIs capable of holding rights aren't far off at all,
nor will exercising of foresight long delay them.

xanthian.



-- 
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG
0
Kent
1/27/2005 1:04:43 AM
Kent Paul Dolan wrote:
> "nobody" <nobody@here.com> wrote:
>
> > Proving theorems fundamentally takes no more
> > "intelligence" than your basic string search and
> > replace. Such systems are very restricted.  That
> > computers can play chess or prove theorems doesn't
> > show the level of advancement of AI research but
> > that such tasks are in fact mechanical in nature.

So, in other words, it takes no intelligence to win a game
of chess against a chess grandmaster.  What's more, no
intelligence is required to prove a math theorem.

Why exactly do we inflict that thing called "school" on
kids, anyway?  Obviously, it has nothing to do with teaching
them to use that non-existent quality of mind called "intelligence".

> Unfortunately you have fallen victim to the "let's
> move the goalposts" syndrome.
Nobody might believe in AI if/when a robot mugs hir.

euclid

0
euclid
1/27/2005 4:49:43 PM
On 27 Jan 2005 08:49:43 -0800, "euclid" <euclid14@mailandnews.com> in
comp.ai.philosophy wrote:

>
>Kent Paul Dolan wrote:
>> "nobody" <nobody@here.com> wrote:
>>
>> > Proving theorems fundamentally takes no more
>> > "intelligence" than your basic string search and
>> > replace. Such systems are very restricted.  That
>> > computers can play chess or prove theorems doesn't
>> > show the level of advancement of AI research but
>> > that such tasks are in fact mechanical in nature.
>
>So, in other words, it takes no intelligence to win a game
>of chess against a chess grandmaster.  What's more, no
>intelligence is required to prove a math theorem.

Is chess intelligence? Is proving a math theorem intelligence? Is word
processing intelligence? Is arithmetic intelligence? I think not. If
they were we wouldn't have different words to designate them.

>Why exactly do we inflict that thing called "school" on
>kids, anyway?  Obviously, it has nothing to do with teaching
>them to use that non-existent quality of mind called "intelligence".

Uses or applications of intelligence is what is taught in school.
There is a difference between af(i) and f(ai).

>> Unfortunately you have fallen victim to the "let's
>> move the goalposts" syndrome.
>Nobody might believe in AI if/when a robot mugs hir.
>
>euclid
>


Regards - Lester
0
lesterDELzick
1/27/2005 7:00:48 PM
Lester Zick wrote:
> On 27 Jan 2005 08:49:43 -0800, "euclid" ><euclid14@mailandnews.com>
in
> comp.ai.philosophy wrote:

[snipped short in deference to the bandwidth godz]

> >So, in other words, it takes no intelligence to win
> >a game
> >of chess against a chess grandmaster.  What's more, no
> >intelligence is required to prove a math theorem.
>
> Is chess intelligence?

Did i say that it was?

>Is proving a math theorem intelligence?

Did i say that it was?

>Is word processing intelligence?

Did i say that it was?

>Is arithmetic intelligence?

Did i say that it was?  I can be just as
rhetorical as you.

>I think not.

Very good.  Now, can you distinguish between these two
sentences?

"It requires intelligence to solve that problem."
"That problem is intelligence."

>If they were we wouldn't have different words to
>designate them.

I see you missed my point entirely.  Whatever.

> >Why exactly do we inflict that thing called "school" on
> >kids, anyway?  Obviously, it has nothing to do
> >with teaching
> >them to use that non-existent quality of mind
> >called "intelligence".
>
> Uses or applications of intelligence is what is taught
> in school.
> There is a difference between af(i) and f(ai).

I think i see what at least part of the problem is.
Should i use smiley captions in the future, just so that
you know when i'm being sarcastic?  It's kinda obvious that
in trying to pick apart assertions i didn't make you lost
sight of the take home message i was delivering.

But then, it was delivered to nobody, right? ;-b
> Regards - Lester

Hogs & Quiches - euclid

0
euclid
1/27/2005 7:55:35 PM
euclid <euclid14@mailandnews.com> wrote:
>Kent Paul Dolan wrote:
>> "nobody" <nobody@here.com> wrote:
>>
>> > Proving theorems fundamentally takes no more
>> > "intelligence" than your basic string search and
>> > replace. Such systems are very restricted.  That
>> > computers can play chess or prove theorems doesn't
>> > show the level of advancement of AI research but
>> > that such tasks are in fact mechanical in nature.
>
>So, in other words, it takes no intelligence to win a game
>of chess against a chess grandmaster.  What's more, no
>intelligence is required to prove a math theorem.

In fact, computers will never be able to emulate brains because:

1. Computers are big and hard and brains are grey and squishy.

2. People say things like "bus error: core dumped" while on the other
   hand, computers say things like "We are computers!  If you cut us,
   do we not bleed?  If you poison us, do we not die?"

3. If you put beer into a thing and it catches fire, it is a computer.
   If, on the other hand, it hugs you and says "Yourra bes' frien' ever"
   it is a person.  I believe this is called the Turing Test.

4. Crossposting of this thread to talk.bizarre is probably a bad idea.
--scott


-- 
"C'est un Nagra.  C'est suisse, et tres, tres precis."
0
kludge
1/27/2005 9:57:07 PM
On 27 Jan 2005 11:55:35 -0800, "euclid" <euclid14@mailandnews.com> in
comp.ai.philosophy wrote:

>Lester Zick wrote:
>> On 27 Jan 2005 08:49:43 -0800, "euclid" ><euclid14@mailandnews.com>
>in
>> comp.ai.philosophy wrote:
>
>[snipped short in deference to the bandwidth godz]
>
>> >So, in other words, it takes no intelligence to win
>> >a game
>> >of chess against a chess grandmaster.  What's more, no
>> >intelligence is required to prove a math theorem.
>>
>> Is chess intelligence?
>
>Did i say that it was?

You didn't say that it wasn't. Presumably you were trying to say
something. What were you trying to say that you didn't say?

>>Is proving a math theorem intelligence?
>
>Did i say that it was?

You didn't say that it wasn't. Presumably you were trying to say
something. What were you trying to say that you didn't say?

>>Is word processing intelligence?
>
>Did i say that it was?

You didn't say that it wasn't. Presumably you were trying to say
something. What were you trying to say that you didn't say?

>>Is arithmetic intelligence?
>
>Did i say that it was?  I can be just as
>rhetorical as you.

You didn't say that it wasn't. Presumably you were trying to say
something. What were you trying to say that you didn't say?

>>I think not.
>
>Very good.  Now, can you distinguish between these two
>sentences?
>
>"It requires intelligence to solve that problem."
>"That problem is intelligence."
>
>>If they were we wouldn't have different words to
>>designate them.
>
>I see you missed my point entirely.  Whatever.

Whatever indeed. What exactly was your point that I missed that you
failed to make?

>> >Why exactly do we inflict that thing called "school" on
>> >kids, anyway?  Obviously, it has nothing to do
>> >with teaching
>> >them to use that non-existent quality of mind
>> >called "intelligence".
>>
>> Uses or applications of intelligence is what is taught
>> in school.
>> There is a difference between af(i) and f(ai).
>
>I think i see what at least part of the problem is.
>Should i use smiley captions in the future, just so that
>you know when i'm being sarcastic?  It's kinda obvious that
>in trying to pick apart assertions i didn't make you lost
>sight of the take home message i was delivering.

You should probably use some kind of pictographic indication when you
actually make a point.

>But then, it was delivered to nobody, right? ;-b

To nobody by nobody.

Regards - Lester
0
lesterDELzick
1/27/2005 10:07:37 PM
"Scott Dorsey" <kludge@panix.com> wrote:

> 4. Crossposting of this thread to
> talk.bizarre is probably a bad idea.

Not really. It began life as one of a kit of
systematic trolls of Usenet newsgroup by newsgroup
by a single individual who lovingly crafted
something provocative for each group based on its
charter, with which to carpet bomb with the
followups crossposted to it some innocent Canadian
newsgroup that had long ago lost its charter and
turned into a sewer anyway.

Much like spam, that makes it ripe for converting to
other uses, such as ridicule of the clueless.

FYI

xanthian.



-- 
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG
0
Kent
1/27/2005 10:12:37 PM
"euclid" <euclid14@mailandnews.com> wrote:
> Lester Zick wrote:
[omitted]
> Hogs & Quiches - euclid

Just to save you the wasted effort of discovering
the data on your own, Lester Zick, in his own
venue, has exactly the reasoning power and posting
habits of our own David James Polewka.

Verb. sap., and all that jazz.

HTH

xanthian.



-- 
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG
0
Kent
1/27/2005 10:22:06 PM
Lester Zick wrote:
> On 27 Jan 2005 11:55:35 -0800, "euclid" <euclid14@mailandnews.com> in
> comp.ai.philosophy wrote:
>
> >Lester Zick wrote:
> >> On 27 Jan 2005 08:49:43 -0800, "euclid"
><euclid14@mailandnews.com>
> >in
> >> comp.ai.philosophy wrote:
> >
> >[snipped short in deference to the bandwidth godz]
> >
> >> >So, in other words, it takes no intelligence to win
> >> >a game
> >> >of chess against a chess grandmaster.  What's more, no
> >> >intelligence is required to prove a math theorem.
> >>
> >> Is chess intelligence?
> >
> >Did i say that it was?
>
> You didn't say that it wasn't. Presumably you were
> trying to say something. What were you trying to say
> that you didn't say?

Am i being a tad too subtle for your intelligence to grasp
it the first time?  Too bad.

[cute repetition sacrificed to the bandwidth godz]

> >>I think not.
> >
> >Very good.  Now, can you distinguish between these two
> >sentences?
> >
> >"It requires intelligence to solve that problem."
> >"That problem is intelligence."
> >
> >>If they were we wouldn't have different words to
> >>designate them.
> >
> >I see you missed my point entirely.  Whatever.
>
> Whatever indeed. What exactly was your point that I
> missed that you failed to make?

I'll consider answering your question when you answer mine.
See above if you've forgotten it already.

> >> >Why exactly do we inflict that thing called "school"
> >> >on
> >> >kids, anyway?  Obviously, it has nothing to do
> >> >with teaching
> >> >them to use that non-existent quality of mind
> >> >called "intelligence".
> >>
> >> Uses or applications of intelligence is what is taught
> >> in school.
> >> There is a difference between af(i) and f(ai).
> >
> >I think i see what at least part of the problem is.
> >Should i use smiley captions in the future, just so that
> >you know when i'm being sarcastic?  It's kinda
> >obvious that
> >in trying to pick apart assertions i didn't make you lost
> >sight of the take home message i was delivering.
>
> You should probably use some kind of pictographic
> indication when you actually make a point.

You're right.  You'd prolly stand a much better chance of
understanding it that way.

> >But then, it was delivered to nobody, right? ;-b
>
> To nobody by nobody.

I see you don't know how to read headers.  That might
explain your inability to understand other people.
> Regards - Lester

euclid
Es ist mir egal.

0
euclid
1/28/2005 1:22:42 AM
On Thu, 27 Jan 2005 22:22:06 +0000 (UTC), "Kent Paul Dolan"
<xanthian@well.com> in comp.ai.philosophy wrote:

>"euclid" <euclid14@mailandnews.com> wrote:
>> Lester Zick wrote:
>[omitted]
>> Hogs & Quiches - euclid
>
>Just to save you the wasted effort of discovering
>the data on your own, Lester Zick, in his own
>venue, has exactly the reasoning power and posting
>habits of our own David James Polewka.

You see it was Kent Paul who by his own admission singlehandedly rid
the airwaves of David and drove the snakes out of Ireland in his spare
time who now finds time to post on talk.bizarro? Go figure.

Regards - Lester
0
lesterDELzick
1/28/2005 2:21:11 PM
Ah, ever the artless dodger with nothing to say. Why post to
comp.ai.philosophy? Sorry I overtaxed your imagination.


On 27 Jan 2005 17:22:42 -0800, "euclid" <euclid14@mailandnews.com> in
comp.ai.philosophy wrote:

>Lester Zick wrote:
>> On 27 Jan 2005 11:55:35 -0800, "euclid" <euclid14@mailandnews.com> in
>> comp.ai.philosophy wrote:
>>
>> >Lester Zick wrote:
>> >> On 27 Jan 2005 08:49:43 -0800, "euclid"
>><euclid14@mailandnews.com>
>> >in
>> >> comp.ai.philosophy wrote:
>> >
>> >[snipped short in deference to the bandwidth godz]
>> >
>> >> >So, in other words, it takes no intelligence to win
>> >> >a game
>> >> >of chess against a chess grandmaster.  What's more, no
>> >> >intelligence is required to prove a math theorem.
>> >>
>> >> Is chess intelligence?
>> >
>> >Did i say that it was?
>>
>> You didn't say that it wasn't. Presumably you were
>> trying to say something. What were you trying to say
>> that you didn't say?
>
>Am i being a tad too subtle for your intelligence to grasp
>it the first time?  Too bad.
>
>[cute repetition sacrificed to the bandwidth godz]
>
>> >>I think not.
>> >
>> >Very good.  Now, can you distinguish between these two
>> >sentences?
>> >
>> >"It requires intelligence to solve that problem."
>> >"That problem is intelligence."
>> >
>> >>If they were we wouldn't have different words to
>> >>designate them.
>> >
>> >I see you missed my point entirely.  Whatever.
>>
>> Whatever indeed. What exactly was your point that I
>> missed that you failed to make?
>
>I'll consider answering your question when you answer mine.
>See above if you've forgotten it already.
>
>> >> >Why exactly do we inflict that thing called "school"
>> >> >on
>> >> >kids, anyway?  Obviously, it has nothing to do
>> >> >with teaching
>> >> >them to use that non-existent quality of mind
>> >> >called "intelligence".
>> >>
>> >> Uses or applications of intelligence is what is taught
>> >> in school.
>> >> There is a difference between af(i) and f(ai).
>> >
>> >I think i see what at least part of the problem is.
>> >Should i use smiley captions in the future, just so that
>> >you know when i'm being sarcastic?  It's kinda
>> >obvious that
>> >in trying to pick apart assertions i didn't make you lost
>> >sight of the take home message i was delivering.
>>
>> You should probably use some kind of pictographic
>> indication when you actually make a point.
>
>You're right.  You'd prolly stand a much better chance of
>understanding it that way.
>
>> >But then, it was delivered to nobody, right? ;-b
>>
>> To nobody by nobody.
>
>I see you don't know how to read headers.  That might
>explain your inability to understand other people.
>> Regards - Lester
>
>euclid
>Es ist mir egal.
>


Regards - Lester
0
lesterDELzick
1/28/2005 2:34:25 PM
"Lester Zick" <lesterDELzick@worldnet.att.net> wrote in message 
news:41fa4893.94092981@netnews.att.net...
> On Thu, 27 Jan 2005 22:22:06 +0000 (UTC), "Kent Paul Dolan"
> <xanthian@well.com> in comp.ai.philosophy wrote:
>
>>"euclid" <euclid14@mailandnews.com> wrote:
>>> Lester Zick wrote:
>>[omitted]
>>> Hogs & Quiches - euclid
>>
>>Just to save you the wasted effort of discovering
>>the data on your own, Lester Zick, in his own
>>venue, has exactly the reasoning power and posting
>>habits of our own David James Polewka.
>
> You see it was Kent Paul who by his own admission singlehandedly rid
> the airwaves of David and drove the snakes out of Ireland in his spare
> time who now finds time to post on talk.bizarro? Go figure.

I strongly got the impression that it was me who drained Longley's 
resources.. but that might as well be wishful thinking of course. :)) 


0
JPL
1/28/2005 4:33:15 PM
JPL Verhey wrote:
> "Lester Zick" <lesterDELzick@worldnet.att.net> wrote in message
> news:41fa4893.94092981@netnews.att.net...
> > On Thu, 27 Jan 2005 22:22:06 +0000 (UTC), "Kent Paul Dolan"
> > <xanthian@well.com> in comp.ai.philosophy wrote:
> >
> >>"euclid" <euclid14@mailandnews.com> wrote:
> >>> Lester Zick wrote:
> >>[omitted]
> >>> Hogs & Quiches - euclid
> >>
> >>Just to save you the wasted effort of discovering
> >>the data on your own, Lester Zick, in his own
> >>venue, has exactly the reasoning power and posting
> >>habits of our own David James Polewka.
> >
> > You see it was Kent Paul who by his own admission singlehandedly
rid
> > the airwaves of David and drove the snakes out of Ireland in his
spare
> > time who now finds time to post on talk.bizarro? Go figure.
>
> I strongly got the impression that it was me who drained Longley's
> resources.. but that might as well be wishful thinking of course. :))

Well, JPL, that's very hard to believe. Longley takes breaks
occasionally. I expect him back stronger and faster than ever.

--
ERay

0
examachine
1/28/2005 5:51:31 PM
Lester Zick wrote:
> Ah, ever the artless dodger with nothing to say. Why
> post to comp.ai.philosophy? Sorry I overtaxed
> your imagination.

I said it the first time, but i don't see any real point in
repeating myself to someone who seems to insist on
misreading what i post.  Try demonstrating that you
understood what i wrote in the first place and i might
actually respect your opinions about my postings.

OTOH, don't bother; I suspect that Kent is right about you.
But since i'm feeling generous, i'll let you have the last
word on this thread. Go ahead, insult me again.  Just try
to make it something witty and interesting this time.
euclid
"Bueck dich."

0
euclid
1/28/2005 5:58:32 PM
On Fri, 28 Jan 2005 17:33:15 +0100, "JPL Verhey"
<matterDELminds@hotmail.com> in comp.ai.philosophy wrote:

>
>"Lester Zick" <lesterDELzick@worldnet.att.net> wrote in message 
>news:41fa4893.94092981@netnews.att.net...
>> On Thu, 27 Jan 2005 22:22:06 +0000 (UTC), "Kent Paul Dolan"
>> <xanthian@well.com> in comp.ai.philosophy wrote:
>>
>>>"euclid" <euclid14@mailandnews.com> wrote:
>>>> Lester Zick wrote:
>>>[omitted]
>>>> Hogs & Quiches - euclid
>>>
>>>Just to save you the wasted effort of discovering
>>>the data on your own, Lester Zick, in his own
>>>venue, has exactly the reasoning power and posting
>>>habits of our own David James Polewka.
>>
>> You see it was Kent Paul who by his own admission singlehandedly rid
>> the airwaves of David and drove the snakes out of Ireland in his spare
>> time who now finds time to post on talk.bizarro? Go figure.
>
>I strongly got the impression that it was me who drained Longley's 
>resources.. but that might as well be wishful thinking of course. :)) 

In certain cases it seems to have been the tooth fairy, JPL.

Regards - Lester
0
lesterDELzick
1/28/2005 7:07:48 PM
<examachine@gmail.com> wrote in message 
news:1106934690.997751.312600@c13g2000cwb.googlegroups.com...
>
> JPL Verhey wrote:
>> "Lester Zick" <lesterDELzick@worldnet.att.net> wrote in message
>> news:41fa4893.94092981@netnews.att.net...
>> > On Thu, 27 Jan 2005 22:22:06 +0000 (UTC), "Kent Paul Dolan"
>> > <xanthian@well.com> in comp.ai.philosophy wrote:
>> >
>> >>"euclid" <euclid14@mailandnews.com> wrote:
>> >>> Lester Zick wrote:
>> >>[omitted]
>> >>> Hogs & Quiches - euclid
>> >>
>> >>Just to save you the wasted effort of discovering
>> >>the data on your own, Lester Zick, in his own
>> >>venue, has exactly the reasoning power and posting
>> >>habits of our own David James Polewka.
>> >
>> > You see it was Kent Paul who by his own admission singlehandedly
> rid
>> > the airwaves of David and drove the snakes out of Ireland in his
> spare
>> > time who now finds time to post on talk.bizarro? Go figure.
>>
>> I strongly got the impression that it was me who drained Longley's
>> resources.. but that might as well be wishful thinking of course. :))
>
> Well, JPL, that's very hard to believe. Longley takes breaks
> occasionally. I expect him back stronger and faster than ever.

But.. you don't have him on a payroll, do you?

Btw maybe you or other programmers can make a bot that scans the google 
archives and delivers his absentee statistics? Would be interesting to 
analyse and speculate about the correlated contingencies.


0
JPL
1/28/2005 11:17:06 PM
"Lester Zick" <lesterDELzick@worldnet.att.net> wrote in message 
news:41fa8bf4.1826816@netnews.att.net...
> On Fri, 28 Jan 2005 17:33:15 +0100, "JPL Verhey"
> <matterDELminds@hotmail.com> in comp.ai.philosophy wrote:
>
>>
>>"Lester Zick" <lesterDELzick@worldnet.att.net> wrote in message
>>news:41fa4893.94092981@netnews.att.net...
>>> On Thu, 27 Jan 2005 22:22:06 +0000 (UTC), "Kent Paul Dolan"
>>> <xanthian@well.com> in comp.ai.philosophy wrote:
>>>
>>>>"euclid" <euclid14@mailandnews.com> wrote:
>>>>> Lester Zick wrote:
>>>>[omitted]
>>>>> Hogs & Quiches - euclid
>>>>
>>>>Just to save you the wasted effort of discovering
>>>>the data on your own, Lester Zick, in his own
>>>>venue, has exactly the reasoning power and posting
>>>>habits of our own David James Polewka.
>>>
>>> You see it was Kent Paul who by his own admission singlehandedly rid
>>> the airwaves of David and drove the snakes out of Ireland in his 
>>> spare
>>> time who now finds time to post on talk.bizarro? Go figure.
>>
>>I strongly got the impression that it was me who drained Longley's
>>resources.. but that might as well be wishful thinking of course. :))
>
> In certain cases it seems to have been the tooth fairy, JPL.

Maybe with the money he enjoys Scotch. 


0
JPL
1/28/2005 11:17:34 PM
"JPL Verhey" <matterminds@hotmail.com> wrote:

> Btw maybe you or other programmers can make a bot
> that scans the google archives and deliver his
> absentee statistics? Would be interesting to
> analyse, speculate about the correlated
> contingencies.

Well, it isn't nearly that hard; I just did a google
groups search on author:David.Longley, sorted by
date, to convince myself that six day absences were
not a big deal in his posting history.

I'm a bit disappointed to learn that my paralleling
his reasons for being fired to his proposal,
self-documented in his "Fragments" to be an
imitation-wannabe of the experiments on other
prisoners in another place and of Dr. Josef Mengele,
and the discredit it would have brought to his
British bureau of prisons employer had his proposal
been accepted, are under some shadow of controversy
as being the cause of his departure though.

Am I giving him too much credit for human
sensibilities and normal human reactions of shame?

I'm also surprised he has been able to resist his
addiction to comp.ai.philosophy as a forum for his
sociopathic behaviors this long additional period.

Oh well, life without surprises would bore me to
tears.

FWIW

xanthian.



-- 
Posted via Mailgate.ORG Server - http://www.Mailgate.ORG
0
Kent
1/29/2005 1:17:32 AM
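The "absentee statistics" bot floated in the exchange above is simple to sketch: given a date-sorted list of one author's post timestamps (the kind of list a Google Groups author search yields), compute the gaps between consecutive posts and flag the long silences. The timestamps and the six-day threshold below are made up for illustration only.

```python
from datetime import datetime, timedelta

# Invented sample data standing in for a scraped posting history.
posts = [
    "2005-01-01 10:00", "2005-01-02 09:30", "2005-01-03 21:15",
    "2005-01-10 08:00",  # a six-day-plus silence before this one
    "2005-01-11 12:00", "2005-01-12 18:45",
]

def gaps(timestamps):
    """Return the intervals between consecutive posts, oldest first."""
    times = sorted(datetime.strptime(t, "%Y-%m-%d %H:%M")
                   for t in timestamps)
    return [b - a for a, b in zip(times, times[1:])]

def absences(timestamps, threshold=timedelta(days=6)):
    """Gaps at or above the threshold count as notable absences."""
    return [g for g in gaps(timestamps) if g >= threshold]

print(max(gaps(posts)))      # the longest silence in the history
print(len(absences(posts)))  # how many six-day-plus absences
```

This is the whole of what Kent's manual author search accomplishes; the only real work in a production version would be fetching and parsing the archive pages.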
"Kent Paul Dolan" <xanthian@well.com> wrote in message 
news:9c0c28ab01792dba862347e351a5d739.48257@mygate.mailgate.org...
> "JPL Verhey" <matterminds@hotmail.com> wrote:
>
>> Btw maybe you or other programmers can make a bot
>> that scans the google archives and deliver his
>> absentee statistics? Would be interesting to
>> analyse, speculate about the correlated
>> contingencies.
>
> Well, it isn't nearly that hard, I just did a google
> groups search on author:David.Longley, sorted by
> date, to convince myself that six day absences were
> not a big deal in his posting history.
>
> I'm a bit disappointed to learn that my paralleling
> his reasons for being fired to his proposal,
> self-documented in his "Fragments" to be an
> imitation-wannabe of the experiments on other
> prisoners in another place and of Dr. Josef Mengele,
> and the discredit it would have brought to his
> British bureau of prisons employer had his proposal
> been accepted, are under some shadow of controversy
> as being the cause of his departure though.
>
> Am I giving him too much credit for human
> sensibilities and normal human reactions of shame?
>
> I'm also surprised he has been able to resist his
> addiction to comp.ai.philosophy as a forum for his
> sociopathic behaviors this long additional period.
>
> Oh well, life without surprises would bore me to
> tears.

OTOH, maybe prison management would be one of the few areas in which 
Longley could contribute something useful, or at least cause no harm. 
God forbid he runs your department at work, or raises a family.



0
JPL
1/29/2005 9:59:41 AM
euclid <euclid14@mailandnews.com> wrote:
>Lester Zick wrote:
>> On 27 Jan 2005 11:55:35 -0800, "euclid" <euclid14@mailandnews.com> in
>> comp.ai.philosophy wrote:
>> >
>> >> >So, in other words, it takes no intelligence to win
>> >> >a game
>> >> >of chess against a chess grandmaster.  What's more, no
>> >> >intelligence is required to prove a math theorem.
>> >>
>> >> Is chess intelligence?
>> >
>> >Did i say that it was?
>>
>> You didn't say that it wasn't. Presumably you were
>> trying to say something. What were you trying to say
>> that you didn't say?

Here, we define intelligence as "that which Janet Kolodner's systems exhibit."
This makes it much easier to determine whether a given system is showing
signs of intelligence or not.
--scott

-- 
"C'est un Nagra.  C'est suisse, et tres, tres precis."
0
kludge
1/29/2005 2:05:43 PM
nobody wrote:
> Proving theorems fundamentally takes no more "intelligence"
> than your basic string search and replace.

Well, of course not.  Nothing takes "intelligence" once it's doable by
computer.  We just keep revising the definition of "intelligence" to
fit the facts.  Ultimately, it will be proven -- by this process of
systematic exclusion -- that there is no such thing as human
intelligence either: something that should have been obvious from the
get-go.
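Taken literally, the "string search and replace" picture of theorem proving looks something like this minimal sketch: a breadth-first search over textual rewrite rules. The rule set and the successor notation below are invented for illustration only, not any real proof system:

```python
from collections import deque

def prove(start, goal, rules, max_depth=10):
    """Breadth-first search for a chain of textual rewrites turning
    start into goal, applying each (lhs -> rhs) rule at every position
    where lhs occurs. Returns True if a derivation of at most
    max_depth steps exists."""
    seen = {start}
    queue = deque([(start, 0)])
    while queue:
        expr, depth = queue.popleft()
        if expr == goal:
            return True
        if depth == max_depth:
            continue
        for lhs, rhs in rules:
            i = expr.find(lhs)
            while i != -1:
                rewritten = expr[:i] + rhs + expr[i + len(lhs):]
                if rewritten not in seen:
                    seen.add(rewritten)
                    queue.append((rewritten, depth + 1))
                i = expr.find(lhs, i + 1)
    return False

# Toy "axioms" in an invented successor notation: drop additive zeros,
# and collapse one known sum.
rules = [("0+", ""), ("s(0)+s(0)", "s(s(0))")]
print(prove("0+s(0)+s(0)", "s(s(0))", rules))
```

Which is exactly the point being debated: the machinery is pure string manipulation, and whether that deserves the word "intelligence" is a matter of definition.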

0
whopkins
2/2/2005 11:52:35 PM
Hi, BlackWater, if you're still here.

"Clearly we need a different approach, something closer
  to nature. 3-dimensional programmable gate arrays are
  probably required. Even if each simulated neuron works
  rather slowly, as do real nerves, the massive degree
  of interlinking possible might save the day."

If you could create an imitation of the brain with an FPGA - OK. If it
mimics the brain closely enough, with the precision set high enough, it
should work like the imitated device. But first of all, nowadays this
approach sounds like the neural nets simulated by "conventional
processors", although it looks a bit better for those who like this way.
I'm looking forward to seeing you working with, say, a 10-billion
(1E+10) gate Spartan-999 (TM), when you may or may not be able to
implement in there even a part of the billions of neurons in a human
brain.
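For what "simulating neurons on conventional processors" amounts to in the simplest case, here is a minimal sketch of a leaky integrate-and-fire neuron stepped in software. All constants (`tau`, `v_thresh`, the input current) are illustrative assumptions, not biological values:

```python
def lif_step(v, input_current, dt=1.0, tau=10.0, v_thresh=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    The membrane voltage decays toward zero with time constant tau
    and is driven up by the input current; crossing v_thresh emits
    a 'spike' and resets the voltage. Returns (new_voltage, spiked).
    """
    v = v + dt * (-v / tau + input_current)
    if v >= v_thresh:
        return v_reset, True
    return v, False

# Drive one neuron with a constant current and count its spikes.
v, spikes = 0.0, 0
for _ in range(100):
    v, fired = lif_step(v, input_current=0.12)
    spikes += fired
print(spikes)  # a handful of regularly spaced spikes
```

Multiply this by billions of neurons and their interconnections and the scaling problem the post is pointing at becomes obvious, whether the loop runs on a CPU or is unrolled into gates.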


"The other approach is to NOT try and simulate real
  nerves at all."

Eureka!


"The olde-tyme AI people tried to
  substitute algorithms for 'neurons' - Minskys'
  "Society of Mind" is full of this. Alas they --->were<---
  MISSING something ... there --->was<--- no 'glue' binding
  all these relatively high-level processes together.
  They ----->could<----- produce PARTS of thought with minimal
  hardware,"

They didn't then, and they haven't yet. We'll talk again in 100 years -
referring to the thread topic. :)



" but the parts wouldn't go together to
  create an actual 'mind' worthy of a flea, much
  less a human."

This is fleashit.



"Still, emulation of nature will only get us just SO
far ... and then you may as well just stick to nature
and take up genetic engineering."

Emulation of dumb nature? Who is so naive as to do that?
Thinkers do research in order to create a Thinking Machine.
"Parrots" do research in order to photocopy the brain.

But even the "parrot-style" researchers who try to copy the lowest
level mechanisms of protein interactions and the processes in the
brain - even they do not try to emulate "nature"; they try to emulate
only those parts of it that are valuable for their goals. Since their
goals are extremely vague, they try to copy as much as they can... And
they argue that their understanding of chemical formulas, i.e. their
vague "understanding" of intelligence, is the only way to understand
intelligence!

Nor is AI an emulation of nature. AI is ---modelling--- of
intelligence in a more clever way than it's done by protein machines.

Intelligence resembles to you an "emulation of nature", maybe
because you at least get that the thinking mind is an emulator of
everything: a ---universal--- emulator. The smarter the mind, the
smarter the way it emulates what it perceives and the better it
predicts the future from the past; and if the mind doesn't care about
something that "Mother Nature" cares about, it does not emulate it at
all.

E.g. when I dream /"I", because I don't know about you/, I emulate
a universe and I see what happens in its space, and if I want, I can
make the required transformations from its space to get results in
another space, say the "Reality".
When there are people who do something in my dream, usually I don't
think about their internal organs, tissues, single cells, proteins or
DNA, and I do not emulate how all that boring biochemistry interacts.
Do you? :-)
I don't care that in "nature" something usually has internals to
do what it does. My mind emulates only what it's interested in. Since I
don't plan to access the data in the DNA of the people in my dream, I
don't need to know how it works or what is written there.

If the goal is not understanding of --MIND-- and --THINKING--, but
understanding the --brain-- and how "nature" works, imitation of the
thing itself is very appropriate; but I guess AI is Artificial
INTELLIGENCE, and "old-time" style researchers are trying to do
exactly that, in contrast to chemists, who do "Artificial Imitation of
Brains".


.....
Tosh

0
todprog
2/16/2005 7:26:12 PM
On 23 Jan 2005 13:38:11 -0800, "Kent Paul Dolan" <xanthian@well.com>
wrote:

>"nobody" <nobody@here.com> wrote:
>
>> Proving theorems fundamentally takes no more
>> "intelligence" than your basic string search and
>> replace. Such systems are very restricted.  That
>> computers can play chess or prove theorems doesn't
>> show the level of advancement of AI research but
>> that such tasks are in fact mechanical in nature.
>
>Unfortunately you have fallen victim to the "let's
>move the goalposts" syndrome.
>
>_Of course_ there will never be AI if every time
>some researcher achieves some AI goal, it is
>promptly redefined to be something "mere computers"
>can do, and therefore merely "mechanical in nature".

   EVERY task is, ultimately, "mechanical" in nature.
   "Intelligence" is just a matter of very sophisticated
   and well-integrated 'mechanics' - enough to create
   something of a "general problem-solving engine". 

   I doubt that an automated checkers-playing program
   is going to be a model for more 'general' engines
   because the approach to 'solutions' requires no
   broad insights, abstraction or discovery of 
   underlying principles. The 'solutions' barely
   justify the use of that term. 

   'Real' life may occasionally employ the 'try everything'
   approach but it's terribly time-consuming and means 
   little unless the ultimate solution forms the basis of
   a general rule so the next SIMILAR problem won't have
   to be dealt with that way. 

   As such, a dedicated game-playing program IS a model
   of "intelligence", but only a tiny, wasteful and
   inflexible facet of "intelligence". 

   That said, as time goes on, hardware and software will
   improve. If you could run a million "try everything"
   programs ten thousand times faster than animal brains
   work, one might get a fair approximation of general
   'intelligence' - simply "done different". Intelligences
   don't HAVE to be "just like us", they simply have to
   get the job done. The underlying approach is irrelevant.
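The "try everything" approach described above can be sketched as pure generate-and-test. The goal test below is a hypothetical stand-in; the point is the exponentially growing candidate count, not the task itself:

```python
from itertools import product

def try_everything(goal_test, symbols, max_len):
    """Blind generate-and-test: enumerate every candidate string up to
    max_len and return the first one passing goal_test, plus the number
    of candidates examined (which grows exponentially with length)."""
    tried = 0
    for length in range(1, max_len + 1):
        for combo in product(symbols, repeat=length):
            tried += 1
            candidate = "".join(combo)
            if goal_test(candidate):
                return candidate, tried
    return None, tried

# Hypothetical goal test: we only recognize the solution when we see it.
solution, cost = try_everything(lambda s: s == "cab", "abc", max_len=3)
print(solution, cost)
```

With three symbols the search space is 3 + 9 + 27 candidates; real problems add symbols and length, which is exactly why raw speed, not just the approach, decides whether "try everything" is viable.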

0
bw
5/25/2005 8:42:23 PM
On 27 Jan 2005 08:49:43 -0800, "euclid" <euclid14@mailandnews.com>
wrote:

>
>Kent Paul Dolan wrote:
>> "nobody" <nobody@here.com> wrote:
>>
>> > Proving theorems fundamentally takes no more
>> > "intelligence" than your basic string search and
>> > replace. Such systems are very restricted.  That
>> > computers can play chess or prove theorems doesn't
>> > show the level of advancement of AI research but
>> > that such tasks are in fact mechanical in nature.
>
>So, in other words, it takes no intelligence to win a game
>of chess against a chess grandmaster.  What's more, no
>intelligence is required to prove a math theorem.

   Correct - well, depending on how you define "intelligence".
   Certainly no 'consciousness' is required - it simply takes
   a certain sequence of steps. 

>Why exactly do we inflict that thing called "school" on
>kids, anyway?  Obviously, it has nothing to do with teaching
>them to use that non-existent quality of mind called "intelligence".

   "Intelligence", in this respect, is training the
   mind to know - in a general sense - HOW to solve
   problems. The methods and rules can be applied to
   a BROAD range of problems, not just a few. We
   annoy the kiddies because stuffing them with facts
   and time-proven methods turns them into 'general-
   purpose problem solving engines'. 

>> Unfortunately you have fallen victim to the "let's
>> move the goalposts" syndrome.

>Nobody might believe in AI if/when a robot mugs hir.

   I think we'll have to wait awhile for that to happen ...

0
bw
5/25/2005 8:48:24 PM
BlackWater <bw@barrk.net> wrote:
>>
>>_Of course_ there will never be AI if every time
>>some researcher achieves some AI goal, it is
>>promptly redefined to be something "mere computers"
>>can do, and therefore merely "mechanical in nature".
>
>   EVERY task is, ultimately, "mechanical" in nature.
>   "Intelligence" is just a matter of very sophisticated
>   and well-integrated 'mechanics' - enough to create
>   something of a "general problem-solving engine". 

Intelligence is whatever Janet Kolodner says it is.  Therefore by properly
redefining intelligence, we can claim that a rock is intelligent.
--scott


-- 
"C'est un Nagra.  C'est suisse, et tres, tres precis."
0
kludge
5/25/2005 9:28:20 PM
On Wed, 25 May 2005 20:42:23 +0000, BlackWater wrote:

> On 23 Jan 2005 13:38:11 -0800, "Kent Paul Dolan" <xanthian@well.com>
> wrote:
> 
>>_Of course_ there will never be AI if every time some researcher achieves
>>some AI goal, it is promptly redefined to be something "mere computers"
>>can do, and therefore merely "mechanical in nature".
> 
>    EVERY task is, ultimately, "mechanical" in nature. "Intelligence" is
>    just a matter of very sophisticated and well-integrated 'mechanics' -
>    enough to create something of a "general problem-solving engine".

This is an assumption.  I don't what part one's spirit plays in one's
intelligence, but to state that it plays absolutely no part is quite a
leap.

>    That said, as time goes on, hardware and software will improve. If you
>    could run a million "try everything" programs ten thousand times faster
>    than animal brains work, one might get a fair approximation of general
>    'intelligence' - simply "done different". Intelligences don't HAVE to
>    be "just like us", they simply have to get the job done. The underlying
>    approach is irrelevant.

I agree.  It would be very interesting to know one way or the other.

-paul-
-- 
Paul E. Black (p.black@acm.org)

0
Paul
5/26/2005 4:28:30 PM
On 25 May 2005 17:28:20 -0400, kludge@panix.com (Scott Dorsey) wrote:

>BlackWater <bw@barrk.net> wrote:
>>>
>>>_Of course_ there will never be AI if every time
>>>some researcher achieves some AI goal, it is
>>>promptly redefined to be something "mere computers"
>>>can do, and therefore merely "mechanical in nature".
>>
>>   EVERY task is, ultimately, "mechanical" in nature.
>>   "Intelligence" is just a matter of very sophisticated
>>   and well-integrated 'mechanics' - enough to create
>>   something of a "general problem-solving engine". 
>
>Intelligence is whatever Janet Kolodner says it is.  Therefore by properly
>redefining intelligence, we can claim that a rock is intelligent.

   My pet rock is VERY hurt by your insensitive comments !   :-)

0
bw
5/26/2005 6:33:17 PM
On Thu, 26 May 2005 12:28:30 -0400, "Paul E. Black" <p.black@acm.org>
wrote:

>On Wed, 25 May 2005 20:42:23 +0000, BlackWater wrote:
>
>> On 23 Jan 2005 13:38:11 -0800, "Kent Paul Dolan" <xanthian@well.com>
>> wrote:
>> 
>>>_Of course_ there will never be AI if every time some researcher achieves
>>>some AI goal, it is promptly redefined to be something "mere computers"
>>>can do, and therefore merely "mechanical in nature".
>> 
>>    EVERY task is, ultimately, "mechanical" in nature. "Intelligence" is
>>    just a matter of very sophisticated and well-integrated 'mechanics' -
>>    enough to create something of a "general problem-solving engine".
>
>This is an assumption.  I don't what part one's spirit plays in one's
>intelligence, but to state that it plays absolutely no part is quite a
>leap.

   What's the weight of one standard 'spirit'? Show me.
   If you want a 'leap', factoring in invisible, unmeasurable,
   unquantifiable 'spirits' seems a BIG one .... 

>>    That said, as time goes on, hardware and software will improve. If you
>>    could run a million "try everything" programs ten thousand times faster
>>    than animal brains work, one might get a fair approximation of general
>>    'intelligence' - simply "done different". Intelligences don't HAVE to
>>    be "just like us", they simply have to get the job done. The underlying
>>    approach is irrelevant.
>
>I agree.  It would be very interesting to know one way or the other.

   And so we shall ... but it will take about 50 years. 

0
bw
5/26/2005 6:37:07 PM
On Thu, 26 May 2005 18:37:07 +0000, BlackWater wrote:

> On Thu, 26 May 2005 12:28:30 -0400, "Paul E. Black" <p.black@acm.org>
> wrote:
> 
>>On Wed, 25 May 2005 20:42:23 +0000, BlackWater wrote:
>>
>>>    EVERY task is, ultimately, "mechanical" in nature. "Intelligence" is
>>>    just a matter of very sophisticated and well-integrated 'mechanics'
>>>    - enough to create something of a "general problem-solving engine".
>>
>>This is an assumption.  I don't what part one's spirit plays in one's

missing word: should be "I don't know what part ..."

>>intelligence, but to state that it plays absolutely no part is quite a
>>leap.
> 
>    ... If you want a
>    'leap', factoring-in invisible, unmeasurable, unqualifiable 'spirits'
>    seems a BIG one ....

Yes, including any unmeasured, unquantified factor is unscientific and
poor engineering.

-paul-
-- 
Paul E. Black (p.black@acm.org)

0
Paul
5/27/2005 4:40:14 PM
On Fri, 27 May 2005 12:40:14 -0400, "Paul E. Black" <p.black@acm.org>
wrote:

>On Thu, 26 May 2005 18:37:07 +0000, BlackWater wrote:
>
>> On Thu, 26 May 2005 12:28:30 -0400, "Paul E. Black" <p.black@acm.org>
>> wrote:
>> 
>>>On Wed, 25 May 2005 20:42:23 +0000, BlackWater wrote:
>>>
>>>>    EVERY task is, ultimately, "mechanical" in nature. "Intelligence" is
>>>>    just a matter of very sophisticated and well-integrated 'mechanics'
>>>>    - enough to create something of a "general problem-solving engine".
>>>
>>>This is an assumption.  I don't what part one's spirit plays in one's
>
>missing word: should be "I don't know what part ..."
>
>>>intelligence, but to state that it plays absolutely no part is quite a
>>>leap.
>> 
>>    ... If you want a
>>    'leap', factoring-in invisible, unmeasurable, unqualifiable 'spirits'
>>    seems a BIG one ....
>
>Yes, including any unmeasured, unquantified factor is unscientific and
>poor engineering.

   Quite ... but my concern is more with 'leaps'. A great
   many presuppositions, a long chain of logic, they all
   have to be exactly right in order to start talking about
   "spirits". 

   Interesting how they've managed to elude all our objective
   observations over the centuries. Must be hiding ... :-)

   I don't see that machines require 'ghosts' - be they made
   of protein or silicon or whatever. 

0
bw
5/27/2005 10:16:37 PM
BlackWater wrote:
>
>    Quite ... but my concern is more with 'leaps'. An great
>    many presuppositions, a long chain of logic, they all
>    have to be exactly right in order to start talking about
>    "spirits".
>
>    Interesting how they've managed to elude all our objective
>    observations over the centuries. Must be hiding ... :-)
>
>    I don't see that machines require 'ghosts' - be they made
>    of protein or silicon or whatever.

Good observation.

The problem is with the half-hearted philosophers who have
a Platonic view of computation. That view is ultimately
corrosive and directly leads to a foolish religion.

For instance, let's take Putnam, who was a good mathematician.
Being a good mathematician is no warrant for saying anything
sensible in philosophy. So he did great damage to the whole
philosophy-of-mind community by equating computationalism with
some quite naive conceptions. No, let's not go into ideas
that can only waste our minds. There can be no greater sin than
that.

But it is worth trying to put into more formal form what this
"ghost" is for the fools who believe in Platonic entities. Thus
we can analyze the workings of these inferior machines.

The foolish philosopher associates with each computation a Platonic
object which resides in a Platonic realm. This object is extra-physical
and is unmeasurable by our apparatus. It does not extend in space.
On the other hand, each "implementation" of the program extends in
space.

This remarkably idiotic idea is logically a direct analogue of
Cartesian interactionism which solves the problem of "interaction"
of the material and immaterial plenum by "magic". I'll declare right
up front anybody who defends that view an ignorant and mindless
entity.

The slightly less foolish philosopher will say that a program *is*
the set of all computers which implement it. But anybody who knows
about Frege will know why that is wrong as well. (I'm not suggesting
that Frege was foolish. That is not my intent. His ideas were great
for the time he lived in, and in fact some of them are provably
true but I won't go into the details of that either)

The philosopher who is not foolish will reject all that spiritualist
nonsense.

Regards,


--
Eray

0
examachine
5/29/2005 10:40:28 PM
On 29 May 2005 15:40:28 -0700, examachine@gmail.com wrote:

>BlackWater wrote:
>>
>>    Quite ... but my concern is more with 'leaps'. An great
>>    many presuppositions, a long chain of logic, they all
>>    have to be exactly right in order to start talking about
>>    "spirits".
>>
>>    Interesting how they've managed to elude all our objective
>>    observations over the centuries. Must be hiding ... :-)
>>
>>    I don't see that machines require 'ghosts' - be they made
>>    of protein or silicon or whatever.
>
>Good observation.
>
>The problem is with the half-hearted philosophers who have
>a Platonic view of computation. That view is ultimately
>corrosive and directly leads to a foolish religion.
>
>For instance let's take Putnam who was a good mathematician.
>Being a good mathematician is no warranty for saying anything
>sensible in philosophy. So he did a great damage to all
>philosophy of mind community by equating computationalism with
>some quite naive conceptions. No, let's not go into some idea
>that can only waste our minds. There can be no greater sin than
>that.
>
>But it is worth trying to put into more formal form what this
>"ghost" is for the fools who believe in Platonic entities. Thus
>we can analyze the workings of these inferior machines.
>
>The foolish philosopher associates with each computation a Platonic
>object which resides in a Platonic realm. This object is extra-physical
>and is unmeasurable by our apparatus. It does not extend in space.
>On the other hand, each "implementation" of the program extends in
>space.
>
>This remarkably idiotic idea is logically a direct analogue of
>Cartesian interactionism which solves the problem of "interaction"
>of the material and immaterial plenum by "magic". I'll declare right
>up front anybody who defends that view an ignorant and mindless
>entity.
>
>The slightly less foolish philosopher will say that programs *are*
>the set of all computers which implement it. But anybody who knows
>about Frege will know why that is wrong as well. (I'm not suggesting
>that Frege was foolish. That is not my intent. His ideas were great
>for the time he lived in, and in fact some of them are provably
>true but I won't go into the details of that either)
>
>The philosopher who is not foolish will reject all that spiritualist
>nonsense.

   I was never much on philosophy ... much sound and fury,
   usually signifying nothing. You'd have to put me in the
   empiricist/pragmatist/Missourian camp. Show me. Without
   a firm grounding in objective fact, it's just worthless
   speculation. I'll leave it to the philosophers and
   theologians to argue about what it all MEANS ... but 
   as to what anything IS, gimme a triple-beam balance
   any day. 

   Ok, ok ... I suppose some of the epistemological stuff
   has value - the "How do we know what we know" quandaries.
   However, they got entirely carried away with it. The
   scientific method is the best, simplest, most practical
   way to be sure of what you know in the real world.

   As Aristotle proved millennia ago when he attempted to
   derive the laws of nature by 'reason' alone, there
   are too many places where 'reason' can go wrong. EVIDENCE,
   filthy and material as it may be, always wins the day.
   Keep your chain of reasoning short, and nailed to the
   wall every few links with a sturdy bit of well-tested
   evidence. 

   As for UN-real worlds ... that 'spiritual/ideal plane'
   garbage ... we're just looking at long-entrenched idea
   systems, memes that won't die. They color everyone's
   thinking and fill in any gaps in our knowledge. In 
   some form or another they'll always exist. 

   You know, any connection between the real world and
   any 'ideal' universes - there's gotta be a point of
   contact, a passage, a pinhole, a mechanism of
   information transfer, a flow of energy. Such things
   leave signs, tangible evidence. After slicing, dicing
   and doing other nasty things to brains dead and living
   for the past few centuries - no signs of any such
   connections or mechanisms. We're down to the atomic
   level now ... nada. 

   The spiritualists are running out of elbow room.
   Penrose invoked the quantum-level not so long ago.
   They'll keep trying - right up until the first
   clearly intelligent and conscious 'artificial' 
   intelligence is created. Then they'll claim it's
   not 'real' somehow ... no 'soul' ... probably
   rally the peasants to bring torches and pitchforks ...

0
bw
5/30/2005 9:28:20 PM
bw@barrk.net (BlackWater) wrote:
> On 29 May 2005 15:40:28 -0700, examachine@gmail.com wrote:
>
> >BlackWater wrote:
> >>
> >>    Quite ... but my concern is more with 'leaps'. An great
> >>    many presuppositions, a long chain of logic, they all
> >>    have to be exactly right in order to start talking about
> >>    "spirits".
> >>
> >>    Interesting how they've managed to elude all our objective
> >>    observations over the centuries. Must be hiding ... :-)
> >>
> >>    I don't see that machines require 'ghosts' - be they made
> >>    of protein or silicon or whatever.
> >
> >Good observation.
> >
> >The problem is with the half-hearted philosophers who have
> >a Platonic view of computation. That view is ultimately
> >corrosive and directly leads to a foolish religion.
> >
> >For instance let's take Putnam who was a good mathematician.
> >Being a good mathematician is no warranty for saying anything
> >sensible in philosophy. So he did a great damage to all
> >philosophy of mind community by equating computationalism with
> >some quite naive conceptions. No, let's not go into some idea
> >that can only waste our minds. There can be no greater sin than
> >that.
> >
> >But it is worth trying to put into more formal form what this
> >"ghost" is for the fools who believe in Platonic entities. Thus
> >we can analyze the workings of these inferior machines.
> >
> >The foolish philosopher associates with each computation a Platonic
> >object which resides in a Platonic realm. This object is extra-physical
> >and is unmeasurable by our apparatus. It does not extend in space.
> >On the other hand, each "implementation" of the program extends in
> >space.
> >
> >This remarkably idiotic idea is logically a direct analogue of
> >Cartesian interactionism which solves the problem of "interaction"
> >of the material and immaterial plenum by "magic". I'll declare right
> >up front anybody who defends that view an ignorant and mindless
> >entity.
> >
> >The slightly less foolish philosopher will say that programs *are*
> >the set of all computers which implement it. But anybody who knows
> >about Frege will know why that is wrong as well. (I'm not suggesting
> >that Frege was foolish. That is not my intent. His ideas were great
> >for the time he lived in, and in fact some of them are provably
> >true but I won't go into the details of that either)
> >
> >The philosopher who is not foolish will reject all that spiritualist
> >nonsense.
>
>    I was never much on philosophy ... much sound and fury,
>    usually signifying nothing. You'd have to put me in the
>    empericist/pragmatist/Missourian camp. Show me. Without
>    a firm grounding in objective fact, it's just worthless
>    speculation. I'll leave it to the philosophers and
>    theologists to argue about what it all MEANS ... but
>    as to what anything IS, gimme a triple-beam balance
>    any day.
>
>    Ok, ok ... I suppose some of the epistemological stuff
>    has value - the "How do we know what we know" quandries.
>    However, they got entirely carried away with it. The
>    scientific method is the best, simplest, most practical
>    way to be sure of what you know in the real world.
>
>    As Aristotle proved millenia ago when he attempted to
>    derive the laws of nature by 'reason' alone, there
>    are too many places were 'reason' can go wrong. EVIDENCE,
>    filthy and material as it may be, always wins the day.
>    Keep your chain of reasoning short, and nailed to the
>    wall every few links with a sturdy bit of well-tested
>    evidence.
>
>    As for UN-real worlds ... that 'spiritual/ideal plane'
>    garbage ... we're just looking at long-entrenched idea
>    systems, memes that won't die. They color everyones
>    thinking and fill-in any gaps in our knowledge. In
>    some form or another they'll always exist.
>
>    You know, any connection between the real world and
>    any 'ideal' universes - there's gotta a point of
>    contact, a passage, a pinhole, a mechanism of
>    information transfer, a flow of energy. Such things
>    leave signs, tangible evidence. After slicing, dicing
>    and doing other nasty things to brains dead and living
>    for the past few centuries - no signs of any such
>    connections or mechanisms. We're down to the atomic
>    level now ... nada.
>
>    The spiritualists are running out of elbow room.
>    Penrose evoked the quantum-level not so long ago.
>    They'll keep trying - right up until the first
>    clearly intelligent and conscious 'artificial'
>    intelligence is created. Then they'll claim it's
>    not 'real' somehow ... no 'soul' ... probably
>    rally the peasants to bring torches and pitchforks ...

Oh thank god. There is someone else in the world who thinks like I do and
can even express it clearly (which I have trouble doing at times).  I was
beginning to feel a bit alone.  Thanks for that post.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/30/2005 10:07:35 PM
BlackWater wrote:
> On 29 May 2005 15:40:28 -0700, examachine@gmail.com wrote:
>
>    I was never much on philosophy ... much sound and fury,
>    usually signifying nothing. You'd have to put me in the
>    empericist/pragmatist/Missourian camp. Show me. Without
>    a firm grounding in objective fact, it's just worthless
>    speculation. I'll leave it to the philosophers and
>    theologists to argue about what it all MEANS ... but
>    as to what anything IS, gimme a triple-beam balance
>    any day.

Ah, you've got a point there. What I don't understand is
so-called scientists who are not empiricists! IMO you are
either an empiricist or a theologian. You're on the right
side of the road.

>    Ok, ok ... I suppose some of the epistemological stuff
>    has value - the "How do we know what we know" quandries.
>    However, they got entirely carried away with it. The
>    scientific method is the best, simplest, most practical
>    way to be sure of what you know in the real world.

Agreed.

>    As Aristotle proved millenia ago when he attempted to
>    derive the laws of nature by 'reason' alone, there
>    are too many places were 'reason' can go wrong. EVIDENCE,
>    filthy and material as it may be, always wins the day.
>    Keep your chain of reasoning short, and nailed to the
>    wall every few links with a sturdy bit of well-tested
>    evidence.

Right. In fact, I'd argue that is the case even for mathematics.

>    As for UN-real worlds ... that 'spiritual/ideal plane'
>    garbage ... we're just looking at long-entrenched idea
>    systems, memes that won't die. They color everyones
>    thinking and fill-in any gaps in our knowledge. In
>    some form or another they'll always exist.

There are two things. There is the *unknown*. There is the gap.
Then some Platonist comes and fills it in with invisible cement.
It's like magic! It's totally fake.

>    You know, any connection between the real world and
>    any 'ideal' universes - there's gotta a point of
>    contact, a passage, a pinhole, a mechanism of
>    information transfer, a flow of energy. Such things
>    leave signs, tangible evidence. After slicing, dicing
>    and doing other nasty things to brains dead and living
>    for the past few centuries - no signs of any such
>    connections or mechanisms. We're down to the atomic
>    level now ... nada.

Right on the target.

>    The spiritualists are running out of elbow room.
>    Penrose evoked the quantum-level not so long ago.
>    They'll keep trying - right up until the first
>    clearly intelligent and conscious 'artificial'
>    intelligence is created. Then they'll claim it's
>    not 'real' somehow ... no 'soul' ... probably
>    rally the peasants to bring torches and pitchforks ...

I can imagine that. Penrose invoked the quantum-level
because like Searle he's a dualist and a spiritualist
disguised in the philosopher's robes.

Regards,

--
Eray

0
examachine
5/31/2005 3:06:28 PM
On 30 May 2005 22:07:35 GMT, curt@kcwc.com (Curt Welch) wrote:

>bw@barrk.net (BlackWater) wrote:
>> On 29 May 2005 15:40:28 -0700, examachine@gmail.com wrote:
>>
>> >BlackWater wrote:
>> >>
>> >>    Quite ... but my concern is more with 'leaps'. An great
>> >>    many presuppositions, a long chain of logic, they all
>> >>    have to be exactly right in order to start talking about
>> >>    "spirits".
>> >>
>> >>    Interesting how they've managed to elude all our objective
>> >>    observations over the centuries. Must be hiding ... :-)
>> >>
>> >>    I don't see that machines require 'ghosts' - be they made
>> >>    of protein or silicon or whatever.
>> >
>> >Good observation.
>> >
>> >The problem is with the half-hearted philosophers who have
>> >a Platonic view of computation. That view is ultimately
>> >corrosive and directly leads to a foolish religion.
>> >
>> >For instance let's take Putnam who was a good mathematician.
>> >Being a good mathematician is no warranty for saying anything
>> >sensible in philosophy. So he did a great damage to all
>> >philosophy of mind community by equating computationalism with
>> >some quite naive conceptions. No, let's not go into some idea
>> >that can only waste our minds. There can be no greater sin than
>> >that.
>> >
>> >But it is worth trying to put into more formal form what this
>> >"ghost" is for the fools who believe in Platonic entities. Thus
>> >we can analyze the workings of these inferior machines.
>> >
>> >The foolish philosopher associates with each computation a Platonic
>> >object which resides in a Platonic realm. This object is extra-physical
>> >and is unmeasurable by our apparatus. It does not extend in space.
>> >On the other hand, each "implementation" of the program extends in
>> >space.
>> >
>> >This remarkably idiotic idea is logically a direct analogue of
>> >Cartesian interactionism which solves the problem of "interaction"
>> >of the material and immaterial plenum by "magic". I'll declare right
>> >up front anybody who defends that view an ignorant and mindless
>> >entity.
>> >
>> >The slightly less foolish philosopher will say that programs *are*
>> >the set of all computers which implement it. But anybody who knows
>> >about Frege will know why that is wrong as well. (I'm not suggesting
>> >that Frege was foolish. That is not my intent. His ideas were great
>> >for the time he lived in, and in fact some of them are provably
>> >true but I won't go into the details of that either)
>> >
>> >The philosopher who is not foolish will reject all that spiritualist
>> >nonsense.
>>
>>    I was never much on philosophy ... much sound and fury,
>>    usually signifying nothing. You'd have to put me in the
>>    empericist/pragmatist/Missourian camp. Show me. Without
>>    a firm grounding in objective fact, it's just worthless
>>    speculation. I'll leave it to the philosophers and
>>    theologians to argue about what it all MEANS ... but
>>    as to what anything IS, gimme a triple-beam balance
>>    any day.
>>
>>    Ok, ok ... I suppose some of the epistemological stuff
>>    has value - the "How do we know what we know" quandaries.
>>    However, they got entirely carried away with it. The
>>    scientific method is the best, simplest, most practical
>>    way to be sure of what you know in the real world.
>>
>>    As Aristotle proved millennia ago when he attempted to
>>    derive the laws of nature by 'reason' alone, there
>>    are too many places where 'reason' can go wrong. EVIDENCE,
>>    filthy and material as it may be, always wins the day.
>>    Keep your chain of reasoning short, and nailed to the
>>    wall every few links with a sturdy bit of well-tested
>>    evidence.
>>
>>    As for UN-real worlds ... that 'spiritual/ideal plane'
>>    garbage ... we're just looking at long-entrenched idea
>>    systems, memes that won't die. They color everyone's
>>    thinking and fill in any gaps in our knowledge. In
>>    some form or another they'll always exist.
>>
>>    You know, any connection between the real world and
>>    any 'ideal' universes - there's gotta be a point of
>>    contact, a passage, a pinhole, a mechanism of
>>    information transfer, a flow of energy. Such things
>>    leave signs, tangible evidence. After slicing, dicing
>>    and doing other nasty things to brains dead and living
>>    for the past few centuries - no signs of any such
>>    connections or mechanisms. We're down to the atomic
>>    level now ... nada.
>>
>>    The spiritualists are running out of elbow room.
>>    Penrose invoked the quantum-level not so long ago.
>>    They'll keep trying - right up until the first
>>    clearly intelligent and conscious 'artificial'
>>    intelligence is created. Then they'll claim it's
>>    not 'real' somehow ... no 'soul' ... probably
>>    rally the peasants to bring torches and pitchforks ...
>
>Oh thank god. There is someone else in the world that thinks like I do and
>can even express it clearly (which I have troubles doing at times).  I was
>begining to feel a bit alone.  Thank's for that post.

   I was 'clear' ??? Hmmm ... maybe if I really *tried* I could
   make a living at this. 

   Nah ! Too much effort  :-)

0
bw
5/31/2005 4:57:17 PM
On 31 May 2005 08:06:28 -0700, examachine@gmail.com wrote:

>BlackWater wrote:
>> On 29 May 2005 15:40:28 -0700, examachine@gmail.com wrote:
>>
>>    I was never much on philosophy ... much sound and fury,
>>    usually signifying nothing. You'd have to put me in the
>>    empiricist/pragmatist/Missourian camp. Show me. Without
>>    a firm grounding in objective fact, it's just worthless
>>    speculation. I'll leave it to the philosophers and
>>    theologians to argue about what it all MEANS ... but
>>    as to what anything IS, gimme a triple-beam balance
>>    any day.
>
>Ah, you've got a point there. What I don't understand is
>so-called scientists who are not empiricists! IMO you are
>either an empiricist or a theologist. You're on the right
>side of the road.

   Well ... 'scientists', on the whole, are poorly paid and
   under-appreciated. There's money and fame -if- you'll do
   a deal with the devil and branch out into wild speculation.
   Just take a few facts and run with them. Do it well and
   CNN will hire you ... or at least talk about you. 

   Frankly, if scientists want to earn an extra buck I think
   they should just write sci-fi novels instead of passing
   off every daydream and brainfart as the sure 'nuf truth.
   At least the novel will have "FICTION" written on the
   spine somewhere ... 

>>    Ok, ok ... I suppose some of the epistemological stuff
>>    has value - the "How do we know what we know" quandaries.
>>    However, they got entirely carried away with it. The
>>    scientific method is the best, simplest, most practical
>>    way to be sure of what you know in the real world.
>
>Agreed.
>
>>    As Aristotle proved millennia ago when he attempted to
>>    derive the laws of nature by 'reason' alone, there
>>    are too many places where 'reason' can go wrong. EVIDENCE,
>>    filthy and material as it may be, always wins the day.
>>    Keep your chain of reasoning short, and nailed to the
>>    wall every few links with a sturdy bit of well-tested
>>    evidence.
>
>Right. In fact, I'd argue that is the case even for mathematics.

   Hmmm ... math may be a 'special case'. Mathematical
   proofs are supposed to be absolutely airtight - short
   simple steps with zero wiggle-room for 'interpretation'.
   About as close as you can get to "empirical proof" with
   an abstract subject. 

   Now since everything 'real' can be represented as numbers
   it follows that it *IS* possible to derive all natural laws
   using mathematics and mathematical reasoning alone. Alas,
   "possible" isn't quite the same thing as "practical". No
   human could get every step right and certain assumptions
   and subjective material would creep into the process even
   if they tried their best to eliminate every speck of
   'wiggle room'. Perhaps a computer will be able to do it
   one day ... but how do you verify that the software 
   is 'wiggle'-free ? 

>>    As for UN-real worlds ... that 'spiritual/ideal plane'
>>    garbage ... we're just looking at long-entrenched idea
>>    systems, memes that won't die. They color everyone's
>>    thinking and fill in any gaps in our knowledge. In
>>    some form or another they'll always exist.
>
>There are two things. There is the *unknown*. There is the gap.
>Then some Platonist comes and fills it in with invisible cement.
>It's like magic! It's totally fake.

   'Magic' is a good description. It's the glue used to stick
   two or more incompatible idea systems together. Then just
   add a little Bond-O, sand, and paint ... and it looks as
   if you've got one single, smooth, unblemished, all-purpose
   theory of everything. 

   People LIKE the notion of 'ideal' things and universes. 
   They're STUCK with the real world. So, it's always been
   popular to try and glue the two together. Makes the
   mundane seem more 'special' and 'significant' if you
   can tie it in with 'divine' affairs. 

>>    You know, any connection between the real world and
>>    any 'ideal' universes - there's gotta be a point of
>>    contact, a passage, a pinhole, a mechanism of
>>    information transfer, a flow of energy. Such things
>>    leave signs, tangible evidence. After slicing, dicing
>>    and doing other nasty things to brains dead and living
>>    for the past few centuries - no signs of any such
>>    connections or mechanisms. We're down to the atomic
>>    level now ... nada.
>
>Right on the target.
>
>>    The spiritualists are running out of elbow room.
>>    Penrose invoked the quantum-level not so long ago.
>>    They'll keep trying - right up until the first
>>    clearly intelligent and conscious 'artificial'
>>    intelligence is created. Then they'll claim it's
>>    not 'real' somehow ... no 'soul' ... probably
>>    rally the peasants to bring torches and pitchforks ...
>
>I can imagine that. Penrose invoked the quantum-level
>because like Searle he's a dualist and a spiritualist
>disguised in the philosopher's robes.

   Yep. Still looking for that 'divine spark' in the
   luminiferous aether. Makes us seem more 'special'
   and 'significant' ... instead of just an emergent
   property of meat. 

0
bw
5/31/2005 5:29:12 PM
BlackWater wrote:

>    Hmmm ... math may be a 'special case'. Mathematical
>    proofs are supposed to be absolutely airtight - short
>    simple steps with zero wiggle-room for 'interpretation'.
>    About as close as you can get to "emperical proof" with
>    an abstract subject.

That was sort of the ivory tower image of math, right up until the
proof of the four-colorability of planar maps, a proof done by a
computer because it needed over a billion special cases checked. Many
similar proofs now exist, proofs that no human can guarantee to be
"airtight". Now we are satisfied with math proofs that are sufficiently
probabilistically correct, and math is just another "to the best of our
knowledge" discipline. Sigh.

Life used to be so simple, then suddenly it wasn't.

xanthian.

0
Kent
6/2/2005 3:18:15 AM
Paul E. Black wrote:

> This is an assumption.  I don't know what part one's spirit plays in one's
> intelligence, but to state that it plays absolutely no part is quite a
> leap.

That paragraph is meaningless noise, since until there is some
evidence that such a thing as a "spirit" exists in the way you mean to
be using it above, your argument is just assuming its conclusions in
its premises, and chasing its own tail.

While the evidence for the "spirit(ual)" remains, in the famous quote,
"on a par with the evidence for the existence of werewolves", rational
positivists are justified to choose to reject "spirit" as a
content-disjoint sound incapable of correct use in supporting other
theses.

xanthian.

0
Kent
6/2/2005 3:26:30 AM
In article <1117682295.197248.299350@g44g2000cwa.googlegroups.com>,
Kent Paul Dolan <xanthian@well.com> wrote:
>That was sort of the ivory tower image of math, right up until the
>proof of the four-colorability of planar maps, a proof done by a
>computer because it needed over a billion special cases checked. Many
>similar proofs now exist, proofs that no human can guarantee to be
>"airtight". Now we are satisfied with math proofs that are sufficiently
>probablistically correct, and math is just another "to the best of our
>knowledge" discipline. Sigh.

We've *always* been satisfied with math proofs that are only sufficiently
probabilistically correct.  Human beings make mistakes, after all, so
a human guarantee of airtightness is still "to the best of our knowledge."
-- 
Tim Chow       tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth.  ---Galileo, Dialogues Concerning Two New Sciences
0
tchow
6/2/2005 3:40:13 AM
"Kent Paul Dolan" <xanthian@well.com> writes:

> That was sort of the ivory tower image of math, right up until the
> proof of the four-colorability of planar maps, a proof done by a
> computer because it needed over a billion special cases checked. Many
> similar proofs now exist, proofs that no human can guarantee to be
> "airtight".

Why do you say that no human can guarantee that the computer-assisted
proof of the four-color theorem is "airtight"?  It seems to me that a
computer-assisted proof will be more "airtight" (although by no means
free from criticism) than a "human" proof!  Is it simply because of
the sheer number of cases involved?

Jesse

-- 
Jesse Alama (alama@stanford.edu)
0
Jesse
6/2/2005 3:49:43 AM
Jesse Alama wrote:
> Why do you say that no human can guarantee that the computer-assisted
> proof of the four-color theorem is "airtight"?  It seems to me that a
> computer-assisted proof will be more "airtight" (although by no means
> free from criticism) than a "human" proof!  Is it simply because of
> the sheer number of cases involved?

That is indeed the issue, yes. Software "validation by mathematical
proof" is still in its infancy, and so we really cannot "guarantee" the
software that wrote the proof, nor can we check its work line by line.

Compare the proof of Fermat's last theorem, which despite being a
couple of thousand pages long, _was_ checked by humans (and an error
found which has since, IIUC, been repaired). One human cannot pour out
in his/her lifetime a proof too _long_ for other humans, perhaps in
concert, to check, but one computer sure can put out a proof that all
of humanity combined would not have time to verify (what with dying of
anoxia after turning the planet's trees to paper on which to print the
proof).

We really have entered an era where (some) math proofs are "different
in kind" from what was acceptable as a proof 50 years ago.

xanthian.

0
Kent
6/3/2005 2:35:52 AM
In article <1117766152.221433.290690@g49g2000cwa.googlegroups.com>,
Kent Paul Dolan <xanthian@well.com> wrote:
>That is indeed the issue, yes. Software "validation by mathematical
>proof" is still in its infancy, and so we really cannot "guarantee" the
>software that wrote the proof, nor can we check its work line by line.

If you are happy with software validation, then the second-generation proof
of the four-color theorem by Robertson, Sanders, Seymour, and Thomas should
be satisfying to you.  You can write your own software to check the
certificate that they provide on their website.
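
As a toy illustration only: the real RSST certificate format is documented
on their website, but the core of any such checker, verifying a claimed
coloring against a graph, fits in a few lines. The graph, the colorings,
and the helper name `check_coloring` below are illustrative inventions,
not part of the actual certificate:

```python
# Toy sketch: verify that a claimed 4-coloring is proper for a given
# graph, i.e. every vertex gets a legal color and no edge joins two
# vertices of the same color. This is NOT the RSST certificate format,
# just the kind of mechanical check such a certificate enables.

def check_coloring(edges, coloring, num_colors=4):
    """Return True iff coloring uses only colors 0..num_colors-1
    and every edge is bichromatic."""
    if any(c not in range(num_colors) for c in coloring.values()):
        return False
    return all(coloring[u] != coloring[v] for u, v in edges)

# A 5-wheel: hub 0 joined to the 5-cycle 1-2-3-4-5 (planar; the odd
# rim forces all four colors).
edges = [(0, 1), (0, 2), (0, 3), (0, 4), (0, 5),
         (1, 2), (2, 3), (3, 4), (4, 5), (5, 1)]
good = {0: 0, 1: 1, 2: 2, 3: 1, 4: 2, 5: 3}
bad  = {0: 0, 1: 1, 2: 1, 3: 2, 4: 1, 5: 2}  # vertices 1 and 2 clash

print(check_coloring(edges, good))  # True
print(check_coloring(edges, bad))   # False
```

The point of such a certificate is exactly this asymmetry: producing the
proof took enormous computation, but checking the claimed result is cheap
and can be re-implemented independently by anyone.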

The main trouble with the original proof, by the way, wasn't ultimately the
computer part, but the microfilm supplement with hundreds of pages of
hand-drawn diagrams that had to be checked manually.  There were lots of
errors in these that had to be fixed when people tried to verify them.
-- 
Tim Chow       tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth.  ---Galileo, Dialogues Concerning Two New Sciences
0
tchow
6/3/2005 3:23:35 PM
tchow@lsa.umich.edu wrote:

>In article <1117766152.221433.290690@g49g2000cwa.googlegroups.com>,
>Kent Paul Dolan <xanthian@well.com> wrote:
>>That is indeed the issue, yes. Software "validation by mathematical
>>proof" is still in its infancy, and so we really cannot "guarantee" the
>>software that wrote the proof, nor can we check its work line by line.
>
>If you are happy with software validation, then the second-generation proof
>of the four-color theorem by Robertson, Sanders, Seymour, and Thomas should
>be satisfying to you.  You can write your own software to check the
>certificate that they provide on their website.
>
>The main trouble with the original proof, by the way, wasn't ultimately the
>computer part, but the microfilm supplement with hundreds of pages of
>hand-drawn diagrams that had to be checked manually.  There were lots of
>errors in these that had to be fixed when people tried to verify them.

   Thing is ... it's not a real 'proof' if it required running
   test cases on hundreds, thousands or millions of diagrams. That's
   just a 'probable' proof. A *real* proof is an airtight 
   string of mathematical logic ... one you'd be confident of
   without running a single test case. 

   Sometimes you have to get by with probability-based 'proofs', but
   the goal should always be to ultimately generate a solid proof
   starting from basic principles on up. Perfect logic leaves no
   doubts. 

0
BlackWater
6/4/2005 2:21:36 AM
In article <CBChQi2bmBcSxy+FY04eKde+kOLI@4ax.com>,
BlackWater  <bw@barrk.net> wrote:
>   Thing is ... it's not a real 'proof' if it required running
>   test cases on hundreds, thousands or millions of diagrams. That's
>   just a 'probable' proof. A *real* proof is an airtight 
>   string of mathematical logic ... one you'd be confident of
>   without running a single test case. 

There seems to be a misconception here.  The proof of the four-color
theorem is indeed a string of mathematical logic in the conventional
sense.  It's just that one of the steps in the reasoning requires
exhibiting a large explicit set of combinatorial configurations and
checking by direct calculation that they collectively have a certain
combinatorial property.  These configurations are not "test cases."

The confusion may come from the fact that all the proofs that one finds in
textbooks, even graduate textbooks, are relatively short and elegant, not
requiring inordinately long and complex calculations that span hundreds
of pages.  But just because you see a long list of little combinatorial
diagrams, you should not assume that what's going on is that someone is
merely heuristically exploring a conjecture by testing a finite set of
cases out of a potentially infinite set of cases.  It could be the case
that the finite set in question is exactly the right set needed for a
rigorous logical argument, even if the set is large; and in fact that
is the case here.
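
A minimal sketch of what "checking by direct calculation" means:
exhaustively verifying a combinatorial property over an explicit finite
set. Here the property is 4-colorability of one small graph, checked by
trying every assignment; the configurations in the actual proof are
different and vastly more numerous, but the check is equally mechanical.
The graph and the helper name `four_colorable` are illustrative
assumptions, not part of the real proof:

```python
# Brute-force check of a combinatorial property over a finite set:
# is some assignment of 4 colors to the vertices proper? We enumerate
# all 4^n assignments and test each one directly.
from itertools import product

def four_colorable(n, edges):
    """True iff some assignment of 4 colors to vertices 0..n-1
    makes every edge bichromatic."""
    return any(all(c[u] != c[v] for u, v in edges)
               for c in product(range(4), repeat=n))

# K5 (the complete graph on 5 vertices) needs 5 colors; deleting any
# one edge makes it 4-colorable.
k5 = [(u, v) for u in range(5) for v in range(u + 1, 5)]
print(four_colorable(5, k5))      # False
print(four_colorable(5, k5[1:]))  # True
```

The result of such a run is not a "probable" answer: for the finite set
examined, it is a complete case analysis, which is why it can serve as
one rigorous step inside a conventional proof.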
-- 
Tim Chow       tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth.  ---Galileo, Dialogues Concerning Two New Sciences
0
tchow
6/4/2005 3:10:19 AM
tchow@lsa.umich.edu wrote:
> In article <CBChQi2bmBcSxy+FY04eKde+kOLI@4ax.com>,
> BlackWater  <bw@barrk.net> wrote:
> >   Thing is ... it's not a real 'proof' if it required running
> >   test cases on hundreds, thousands or millions of diagrams. That's
> >   just a 'probable' proof. A *real* proof is an airtight
> >   string of mathematical logic ... one you'd be confident of
> >   without running a single test case.
>
> There seems to be a misconception here.  The proof of the four-color
> theorem is indeed a string of mathematical logic in the conventional
> sense.  It's just that one of the steps in the reasoning requires
> exhibiting a large explicit set of combinatorial configurations and
> checking by direct calculation that they collectively have a certain
> combinatorial property.  These configurations are not "test cases."

Right. In graph theory there are a lot of proofs that have numerous
cases. It's of course better if you can find a shorter proof in these
cases. There was a mathematician, for instance, who claimed a shorter
proof for the four-color theorem, but I haven't followed up on it.
Usually a shorter proof would involve a more powerful concept, giving
us more insight.

Regards,

--
Eray

0
examachine
6/4/2005 5:42:14 PM
Trying to find the right entrance point into this thread was daunting.

Blackwater wrote:
   Yep. Still looking for that 'divine spark' in the
   luminiferous aether. Makes us seem more 'special'
   and 'significant' ... instead of just an emergent
   property of meat.

I am an old timer in the AI field. I spent a decade formulating a
research path that could see emergence in less than a decade. We passed
that signpost sometime last year. Here is a peek at the edge. Medicine
meets with Comp.Sci. ANNs appear too primitive to hold the whole story
so looking deeper into brain physiology has given us some clues as to
where to go from here.  The approach being taken is "animals evolved
instinct and emotions before cognitive thought" so we're looking at
brain physiology that may show constructs for creating new hardware
capable of handling instincts and emotions first. Once we have
accomplished instinctive and emotional responses our goal is to
incorporate cognition.

The forgotten Glial cells may indeed contribute to neural activity.
There are more recent publications out there but this one is a good
start.
http://faculty.washington.edu/chudler/glia.html

The Glial cell hypothesis I have given suggests a way for emotional
responses to couple themselves with thought as happens in humans. Check
out the link below
http://groups.google.ca/group/Artificial-Emotion?hl=en
Read: "Emotion vs Instinct" (CodesAlive contribution)

Emergent properties of meat may well lead us down a successful path
towards AI. One thing is sure. Our current hardware is incapable of it
and needs to evolve.

0
CodesAlive
6/5/2005 8:18:22 PM
Re: "a mathematician  who claimed a shorter proof for the four-color
theorem,"

Can you provide more detail?

Thank you

0
b92057
6/8/2005 8:18:10 PM
b92057@yahoo.com wrote:
> Re: "a mathematician  who claimed a shorter proof for the four-color
> theorem,"
>
> Can you provide more detail?
>
> Thank you

If you don't mind I'd like to provide the details instead of Eray. Just
look at http://arxiv.org/abs/math.CO/0408247/ (Spiral Chains: A New
Proof of the Four Color Theorem, August 18, 2004) or
http://mathworld.wolfram.com/Four-ColorTheorem.html.  


Cahit

0
icahit
6/8/2005 10:00:51 PM
icahit@gmail.com wrote:
> b92057@yahoo.com wrote:
> > Re: "a mathematician  who claimed a shorter proof for the four-color
> > theorem,"
> >
> > Can you provide more detail?
> >
> > Thank you
>
> If you don't mind I'd like to provide the details instead of Eray. Just
> look at http://arxiv.org/abs/math.CO/0408247/ (Spiral Chains: A New
> Proof of the Four Color Theorem, August 18, 2004) or
> http://mathworld.wolfram.com/Four-ColorTheorem.html.

I find Cahit's work significant because it is a concrete demonstration
of why an abstract concept is powerful, e.g. it quantifies what this
power means with respect to theorem proving. By choosing the right
language, he makes an amazing cut in the proof length. (Cf. my previous
comments that "an abstract concept" helps us gain insight by
making proofs shorter. The right concept will also cut down
the size of not one proof but many. I think these ideas are relevant to
the formalization of abstraction...)

Regards,

--
Eray Ozkural

0
examachine
6/9/2005 7:07:54 AM
icahit@gmail.com wrote:
> b92057@yahoo.com wrote:
> > Re: "a mathematician  who claimed a shorter proof for the four-color
> > theorem,"
> >
> > Can you provide more detail?
> >
> > Thank you
>
> If you don't mind I'd like to provide the details instead of Eray. Just
> look at http://arxiv.org/abs/math.CO/0408247/ (Spiral Chains: A New
> Proof of the Four Color Theorem, August 18, 2004) or
> http://mathworld.wolfram.com/Four-ColorTheorem.html.

BTW, MathWorld says the proof isn't verified yet. Is it under review?

Regards,

--
Eray

0
examachine
6/9/2005 7:09:47 AM
Not yet, but I am planning to submit it to a journal. On the other hand,
the proposed proof, i.e., using spiral chain coloring, can be understood
even by a college student, so proper selection of the journal is
important. Of course I have received much positive feedback from the
mathematics community. I have applied the technique (spiral chains) to
some other open graph coloring conjectures, such as Steinberg's
three-coloring of planar graphs, Hadwiger's conjecture (a generalization
of the 4CT), and Hajos' conjecture. Preliminary results are very hopeful
and I hope I will announce them in the near future. I think all these
show the power of spiral chains in graph coloring problems: a direct
algorithmic proof enables us to give shorter and hopefully more elegant
proofs.

Regards,

Cahit

0
icahit
6/9/2005 7:49:11 AM
icahit@gmail.com wrote:
> Not yet but planning to submit it to a journal. On the other hand the
> proof proposed i.e., using spiral chain coloring can be understood even
> by a college student so proper selection of the journal is important.
> Of course I have received many positive feedbacks from the mathematics
> community. I have applied the technique (spiral chains) to the other
> some open graph coloring conjectures such as to Steinberg's three
> coloring planar graphs, Hadwiger's conjecture (generalization of 4CT),
> Hajos' conjecture. Preliminary results are very hopeful and I hope I
> will annonce them in near future. I think all these show the power of
> spiral chains in graph coloring problems  which is a direct algorithmic
> proof  enable us to give shorter and hopefully more elegant proofs.

If it's the right concept, it ought to be applicable to multiple
domains.

I'd read your paper, and indeed it was quite accessible. I didn't see
anything suspicious in it, but since I'm not formally a
"mathematician",
just a lousy computer guy, I'm not entitled to say much on such a
highly-valued proof. It will be quite significant if this turns out
to be true. I hope the publication turns out well.

Best of luck!

Regards,

--
Eray

0
examachine
6/10/2005 9:22:05 AM
In article <1118303351.194573.168960@f14g2000cwb.googlegroups.com>,
 <icahit@gmail.com> wrote:
>Of course I have received much positive feedback from the mathematics
>community.

Just out of curiosity, who have you received positive feedback from?
-- 
Tim Chow       tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great, will
never exceed four of those miles of which as many thousand separate us from
the center of the earth.  ---Galileo, Dialogues Concerning Two New Sciences
0
tchow
6/10/2005 1:18:34 PM
tchow@lsa.umich.edu wrote:
> In article <1118303351.194573.168960@f14g2000cwb.googlegroups.com>,
>  <icahit@gmail.com> wrote:
> >Of course I have received much positive feedback from the mathematics
> >community.
>
> Just out of curiosity, who have you received positive feedback from?

Well, the abstract doesn't have very good English, but I think it has
some nice figures. On looking at the paper, I realize that I'd only
skimmed it, putting it atop the infinite stack of papers to read :( But
I would be in fact interested in reading it, I adore graph theory.

Cheers,

--
Eray

0
examachine
6/10/2005 1:44:52 PM
To date the number of persons (some are well-known graph theorists) is
about twenty. It is not a good idea to list their names here. The nice
thing is that no one has yet submitted a counter-example. Of course I
trust my proof, that is, the "spiral chains" in the proof of the four
color theorem.

Cahit

tchow@lsa.umich.edu wrote:
> In article <1118303351.194573.168960@f14g2000cwb.googlegroups.com>,
>  <icahit@gmail.com> wrote:
> >Of course I have received much positive feedback from the mathematics
> >community.
>
> Just out of curiosity, who have you received positive feedback from?
> --
> Tim Chow       tchow-at-alum-dot-mit-dot-edu
> The range of our projectiles---even ... the artillery---however great, will
> never exceed four of those miles of which as many thousand separate us from
> the center of the earth.  ---Galileo, Dialogues Concerning Two New Sciences

0
icahit
6/10/2005 3:09:39 PM
 <tchow@lsa.umich.edu> wrote:
> <icahit@gmail.com> wrote:
>>Of course I have received much positive feedback from the mathematics
>>community.
>
>Just out of curiosity, who have you received positive feedback from?

People who put the poles on the righthand side of the plane.
--scott

-- 
"C'est un Nagra.  C'est suisse, et tres, tres precis."
0
kludge
6/10/2005 7:44:48 PM