Robotics, AI, and Ethics

Over the past half century, researchers and engineers have primarily
been interested in the technical aspects of artificial
intelligence and robotics. However, as technology becomes more
advanced, some are starting to examine a concept that is typically
restricted to the realm of human interactions - ethical behavior.

http://www.syntheticthought.com/st/robotics/59-general/48-robotics-ai-and-ethics
chuck.riley3@gmail.com
5/9/2009 2:54:50 AM
comp.ai.alife

On 9 May, 03:54, "chuck.ril...@gmail.com" <chuck.ril...@gmail.com>
wrote:
> Over the past half century, researchers and engineers have primarily
> been interested in the technical aspects of artificial
> intelligence and robotics. However, as technology becomes more
> advanced, some are starting to examine a concept that is typically
> restricted to the realm of human interactions - ethical behavior.
>
> http://www.syntheticthought.com/st/robotics/59-general/48-robotics-ai...

See my comment in Creating AI.

One such example is the Three Laws of Robotics first penned by author
Isaac Asimov over sixty years ago.  The laws state:
1) A robot may not injure a human being or, through inaction, allow a
human being to come to harm.
2) A robot must obey orders given to it by human beings, except where
such orders would conflict with the First Law.
3) A robot must protect its own existence as long as such protection
does not conflict with the First or Second Law.
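
Purely as a toy illustration - Asimov never gave an implementation, and
the boolean flags below are hypothetical stand-ins for judgements no
real robot can make - the strict priority ordering of the laws can be
written as an action filter in a few lines of Python:

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool = False     # injures a human, or permits harm by inaction
    disobeys_order: bool = False  # contradicts a human order
    endangers_self: bool = False  # risks the robot's own existence

def severity(a):
    """Rank of the worst law an action violates: First > Second > Third."""
    if a.harms_human:
        return 3
    if a.disobeys_order:
        return 2
    if a.endangers_self:
        return 1
    return 0

def choose(actions):
    # Breaking a lower law is acceptable only when every alternative
    # breaks a higher one - the "except where such orders would
    # conflict" clauses in miniature.
    return min(actions, key=severity)

stand_down = Action("refuse the order", disobeys_order=True)
open_fire = Action("obey and fire", harms_human=True)
print(choose([stand_down, open_fire]).name)   # -> refuse the order

The hard part, of course, is not the ordering but the predicates
themselves, which is exactly where the philosophical analysis is
missing.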

A lot of the effort in robotics is military.

http://groups.google.co.uk/group/creatingAI/browse_frm/thread/cf68ca9b956a91a9?hl=en
http://www.foster-miller.com/lemming.htm
http://news.cnet.com/military-tech/?keyword=Modular+Advanced+Armed+Robotic+System
http://www.foxnews.com/story/0,2933,509684,00.html "Big dog" looks
rather like something out of Star Wars
http://blogs.reuters.com/great-debate/2009/04/22/killer-robots-and-a-revolution-in-warfare/comment-page-4/#comment-13586

Bang go all of Asimov's laws. Should we be saying that MAARS is
protecting us against terrorists and other sorts of malefactor? In
short, are our wars just? MAARS represents an enormous force
multiplication. This force multiplication is AT LEAST comparable with
that of the machine gun in the latter part of the 19th century.

Stanley, of "Dr. Livingstone, I presume" fame, talked of us having the
Maxim gun while they had not.

http://www.buzzle.com/editorials/11-15-2004-61664.asp
http://www.bwefirearms.com/gatling.pdf

http://www.absoluteastronomy.com/topics/Maxim_gun

The Maxim gun was instrumental in the subjugation of the Belgian Congo
and the scramble for Africa.

http://209.85.229.132/search?q=cache:QhCPWNAQdnMJ:www.wsu.edu/~dee/TEXT/111/unit13.rtf+stanley+congo+maxim+gun&cd=3&hl=en&ct=clnk&gl=uk

The just war has been a subject of debate among theologians.

http://catholicism.about.com/od/beliefsteachings/p/Just_War_Theory.htm

This reference gives Catholic teaching in general terms. Pope Benedict,
both as Pope and as Cardinal Ratzinger, has made some very specific
remarks about Iraq.

http://www.iep.utm.edu/j/justwar.htm

This is a much more philosophical article. It talks about jus ante
bellum, jus in bello and jus post bellum. Latin, BTW, is not as dead as
all that!

While I agree in principle with Asimov, I think a lot more
philosophical analysis is needed. I do not believe that we can discuss
robotic ethics in the absence of a general "just war" discussion.
Quite clearly, if we use MAARS to support a war that is not just, then
we would simply be behaving like bullies - just like Stanley and the
Belgians.

http://209.85.229.132/search?q=cache:xBZPH4V-TsAJ:ethics.calpoly.edu/ONR_report.pdf+fully+autonomous+military+robots&cd=1&hl=en&ct=clnk&gl=uk

is an article of total futility. The first question to be decided is
"Is the war just?" The distinction between autonomous robotic
weapons and those with a human controller is false: both are force
multipliers. At the moment only a very limited autonomy is proposed -
the autonomy to strike at infrared sources. A 250 kg bomb flattens
everything in its vicinity, soldiers and civilians alike. If a robot
does sometimes fire at civilians, what's the difference?

The military are a bit cagey about MAARS. They say it is purely
human-controlled. If that were true, though, why are they going to such
great pains to give it knowledge of the disposition of friendly forces?

Iraq was unjust from beginning to end, yet the AI build-up is taking
place in Afghanistan. Robots are increasing there at an exponential
rate. Obama is shortly to have a "surge", yet the real surge is the
robotic surge. Because of this the US is going to be able to withdraw
a considerable number of troops prior to his re-election in 4 years'
time. He will spin it that the political situation has improved, but
he will be able to make that withdrawal even if it has not.

Will AI mean more war? Yes, I think it undoubtedly will. It will mean
that you can go to war with zero casualties. I don't think Obama is
going to sanction new wars, but an incoming Republican administration
could well.

How just is Afghanistan? Hard to say. At the present time Obama finds
himself trapped. In the 1980s the CIA supported jihad against the
USSR, working through the ISI (Inter-Services Intelligence) in
Pakistan. The ISI is now reported as being "out of control" and a
"Frankenstein monster".

Justice viewed from an Afghan/Pakistani perspective is quite
nonexistent. First the US forced out Najibullah, who, while not being
perfect, was better than anything subsequent. It connived at an
extremist takeover. After 9/11 it tried to do a U-turn and support
democracy.

Hamid Karzai was put into power. Now not only is Mr. Karzai totally
corrupt - his brother is a leading drug lord - but he is a bit of an
extremist himself. He passed a law on marital rape (in fact the law
went further than this and denied women the right to inherit), only
revising it after an international outcry. He also jailed the
translators of a Pathan version of the Qur'an approved by American
scholars. Obama tried to get rid of him but found he could not.

This brings me on to yet another point. AI must be looked at
holistically. Translation is in fact becoming a major part of AI, and
suppose we let AI loose on the Qur'an. Even the software I have
written myself is capable of instantly dispelling dictation by the
Archangel Gabriel. I have set up the Buckwalter dictionary using hash
tables, and it finds prefixes and suffixes by repeated lookup. It will
thus tell you whether a word is singular, dual or plural. Two women
with plural husbands spring to mind.
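
Roughly, the lookup works like the sketch below. The tables are tiny
invented stand-ins for the real Buckwalter lexicon, in rough Buckwalter
transliteration, but the mechanism - hash tables for prefixes, stems
and suffixes, tried in every combination - is the one I mean:

PREFIXES = {"": None, "Al": "DET", "wa": "CONJ"}
STEMS    = {"muslim": "Muslim", "kitAb": "book"}
SUFFIXES = {"": "singular", "Ani": "dual", "uwna": "plural", "At": "plural (fem.)"}

def analyze(word):
    """Try every prefix/suffix split; return (prefix, stem, number) hits."""
    hits = []
    for p in PREFIXES:
        for s in SUFFIXES:
            if word.startswith(p) and word.endswith(s):
                stem = word[len(p):len(word) - len(s) or None]
                if stem in STEMS:
                    hits.append((p, stem, SUFFIXES[s]))
    return hits

print(analyze("muslimAni"))      # -> [('', 'muslim', 'dual')]
print(analyze("Almuslimuwna"))   # -> [('Al', 'muslim', 'plural')]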

Professor Luxenberg has done research on the origins of the Qur'an.
Luxenberg is a pseudonym.

http://books.google.co.uk/books?id=EfFA4dF-Zg8C&dq=text+of+koran&pg=PP1&ots=TYaCFkE_fr&source=citation&sig=uaOdwbxKvQ22BEG1UuxwL139D9s&hl=en&sa=X&oi=book_result&resnum=13&ct=result#PPA5,M1

I consider it quite disgraceful that the life of an academic should be
threatened in this way.

There is an irony here: we look towards AI to aid exactly this type of
scholarly research.

http://news.softpedia.com/news/AI-Machine-Identifies-4-000-Year-Old-Language-Code-110044.shtml
This reports a computer analysis of the grammar of the Indus Valley
texts. The script cannot yet be read, but the analysis indicates that
the texts record a spoken language. The writing is like Chinese, in
that there is one pictogram per word. Using statistical techniques,
the computer was able to work out a grammar and the way words were
written. We now know what the part of speech of each word was, but
not its meaning.
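
The core measure in that widely reported study (Rao et al., Science,
2009) was the conditional entropy of one sign given the sign before it:
natural languages sit between rigid codes (near zero) and random
sequences (near the maximum). A sketch, with an invented toy sequence
standing in for the real corpus of sign strings:

import math
from collections import Counter

def conditional_entropy(tokens):
    """H(next | current) over a token sequence, in bits."""
    pairs = list(zip(tokens, tokens[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(a for a, _ in pairs)
    h = 0.0
    for (a, b), n in pair_counts.items():
        p_pair = n / len(pairs)        # P(a, b)
        p_cond = n / first_counts[a]   # P(b | a)
        h -= p_pair * math.log2(p_cond)
    return h

signs = "fish jar fish man jar fish jar man fish jar".split()
print(f"H(next|current) = {conditional_entropy(signs):.2f} bits")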

Karzai and his theologians are perfectly correct in saying that the
Arabic should always be available. What is in fact wanted is a
translator which will enable you to delve into the Syriac roots.

This is a bit of a digression, I know - though not completely. Just war
involves ante bellum, in bello and post bellum. All that roboticists
have ever considered is a very limited version of "in bello".

Ante bellum to me involves a proper analysis of policy, done with the
help of AI. Ante bellum involves, in particular, not supporting jihad
against Najibullah and not allowing jihadists to become strong in the
first place. This should be perfectly possible.

Obama has never promised any post bellum justice. Provided that
extremists do not attack the US, he is perfectly happy, for example, to
have women's rights trampled underfoot. Of course, when we have MAARS
and they do not, does this matter?


  - Ian Parker
Ian
5/9/2009 12:41:27 PM
On May 8, 10:54 pm, "chuck.ril...@gmail.com" <chuck.ril...@gmail.com>
wrote:
> Over the past half century, researchers and engineers have primarily
> been interested in the technical aspects of artificial
> intelligence and robotics. However, as technology becomes more
> advanced, some are starting to examine a concept that is typically
> restricted to the realm of human interactions - ethical behavior.

   Well, many of the more intelligent engineering people have already
given up on the uneducable, laser-controlled, and syncopated robotic
idiots, though.
   Which is why mostly they even built the AUVs, Cruise Missiles,
Drones, Phalanx, GPS, Digital-Terrain Mapping, Atomic Clock
Wristwatches, Optical Computers, MP3, MPEG, CD-ROM, DVD-ROM, XML, C++,
Flat-Screen HDTV Debuggers, Spam Blocking, Distributed Processing,
PV Cell Energy Arrays, Self-Assembling Robots, Self-Replicating
Machines, Holographic Systems, USB, Broadband Fiber Optics, Cell
Phones, Thermo-Electric Cooling, Microwave Cooling, Compact
Fluorescent Lighting, Light Sticks, On-Line Banking, On-Line Shopping,
and On-Line Publishing.

>
> http://www.syntheticthought.com/st/robotics/59-general/48-robotics-ai...

zzbunker
5/10/2009 1:21:16 AM
chuck.riley3@gmail.com wrote:
> Over the past half century, researchers and engineers have primarily
> been interested in the technical aspects of artificial
> intelligence and robotics. However, as technology becomes more
> advanced, some are starting to examine a concept that is typically
> restricted to the realm of human interactions - ethical behavior.
> 
> http://www.syntheticthought.com/st/robotics/59-general/48-robotics-ai-and-ethics

Morals and ethics (and also the goals of most religions) may be very
fine, and most people are good. The problem is that there is a very
small but unfortunately non-zero percentage of bad guys, who never
care about morals and ethics. Thus crimes, wars etc. cannot be
eradicated from this world through preaching morals and ethics, which
seems de facto to be a waste of time and effort, sadly. I read that
killing and warfare also occur in chimpanzees. Man's higher
intelligence apparently doesn't help him to fare better than lower
species.

M. K. Shen
Mok
5/10/2009 10:36:53 AM
Mok-Kong Shen wrote:
> chuck.riley3@gmail.com wrote:
[snip]

I'd like to reproduce a quote concerning peace from R. Bigelow,
The Dawn Warriors: Man's Evolution Toward Peace, London, 1969, p. 216:

    Only an even mightier juggernaut of even more complex and
    all-pervading social organization can establish and maintain
    global law and order.

M. K. Shen
Mok
5/10/2009 11:38:25 AM
On May 10, 6:38 am, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> Mok-Kong Shen wrote:
> > chuck.ril...@gmail.com wrote:
>
> [snip]
>
> I'd like to reproduce a quote concerning peace from R. Bigelow,
> The Dawn Warriors: Man's Evolution Toward Peace, London, 1969, p. 216:
>
>     Only an even mightier juggernaut of even more complex and
>     all-pervading social organization can establish and maintain
>     global law and order.
>
> M. K. Shen

Hmmmmmmmm....... I wonder what that could be???????????????????????
Don
5/10/2009 12:40:38 PM
On 10 May, 12:38, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> Mok-Kong Shen wrote:
> > chuck.ril...@gmail.com wrote:
>
> [snip]
>
> I'd like to reproduce a quote concerning peace from R. Bigelow,
> The Dawn Warriors: Man's Evolution Toward Peace, London, 1969, p. 216:
>
>     Only an even mightier juggernaut of even more complex and
>     all-pervading social organization can establish and maintain
>     global law and order.
>
Now a war to produce global peace and harmony may or may not be just.
Asimov put forward his "laws" without any reference to theories of a
just war. If he had been publishing a paper, as opposed to writing
science fiction, the referees would have picked up on it.

If we are to follow Asimov in any way we have to build on "just war"
philosophy, and ask ourselves whether it is possible to translate
these definitions into AI terms. This I think is the fundamental
academic point.

All the wars fought by the US in the Middle East and Central Asia were
fundamentally unjust.

http://en.wikipedia.org/wiki/Saddam_Hussein

Read the section on his rise to power. Christian jurists have pitched
their discussions in purely abstract terms. Of course, justice and
injustice are a matter of political judgement. If you put a bad guy in
power in the first place, then any subsequent war has to be unjust.

Afghanistan is unjust for precisely the same reason. Marital rape,
imprisonment for blasphemy and the like underline its injustice.

I do not believe that Karzai (or Obama) has any sort of "Mandate from
Heaven" in the Chinese sense. In fact the very advance of AI and its
application to scholarship could well mean that the settlement
produced by AI in the shape of MAARS is unravelled by the AI implicit
in Arabic scholarship! If a complete audit trail of the Qur'an is
available on the Web, Karzai is going to look just a bit silly.


  - Ian Parker
Ian
5/10/2009 3:43:41 PM
Don Stockbauer wrote:
> On May 10, 6:38 am, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
>> Mok-Kong Shen wrote:
>>> chuck.ril...@gmail.com wrote:
>> [snip]
>>
>> I'd like to reproduce a quote concerning peace from R. Bigelow,
>> The Dawn Warriors: Man's Evolution Toward Peace, London, 1969, p. 216:
>>
>>     Only an even mightier juggernaut of even more complex and
>>     all-pervading social organization can establish and maintain
>>     global law and order.
> 
> Hmmmmmmmm....... I wonder what that could be???????????????????????

In my understanding, the author meant that it is utopian, i.e.
entirely unrealistic, to expect that mankind would ever live
permanently in peace. The lesson of the Second World War is
practically null, in my humble view.

M. K. Shen


Mok
5/11/2009 8:53:23 AM
On 11 May, 09:53, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> Hmmmmmmmm....... I wonder what that could be???????????????????????
>
> In my understanding, the author meant that it is utopian, i.e.
> entirely unrealistic, to expect that mankind would ever live
> permanently in peace. The lesson of the Second World War is
> practically null, in my humble view.
>
I think this is unduly pessimistic. All countries realize that nuclear
war would be a total disaster in which the losses would completely
outweigh any gains. In war between fairly equal nation states this
rubric holds true. Peace CAN be negotiated through mutual interest.
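
The deterrence logic can be put as a back-of-envelope payoff check. The
numbers are invented; the only assumption doing any work is that
retaliation losses dwarf any first-strike gain:

G, L = 10, 1000   # hypothetical gain from striking first vs. loss from retaliation

# Payoffs to state A for each (A's move, B's move); B's are symmetric.
payoff = {
    ("peace",  "peace"):  0,
    ("attack", "peace"):  G - L,      # B retaliates with surviving forces
    ("peace",  "attack"): -L,
    ("attack", "attack"): G - 2 * L,
}

for b_move in ("peace", "attack"):
    best = max(("peace", "attack"), key=lambda a: payoff[(a, b_move)])
    print(f"If B plays {b_move}, A's best reply is {best}")

With L much larger than G both lines print "peace": it is deterrence,
not goodwill, that does the work.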

The moral questions arise when a strong nation is fighting a very much
weaker one. This is the real point about the morality of robotic
war. People talk about wars being fought between robots. This will
never be the case. Robots will be fighting on one side; flesh and
blood will be on the other. Science fiction stories talk about
the human race being invaded from space and fighting robots. Science
fact is that the technological nations of Earth are developing robots
in order to fight Third World countries. They are not being developed
to fight each other.


  - Ian Parker
Ian
5/11/2009 10:19:29 AM
On May 11, 6:19 am, Ian Parker <ianpark...@gmail.com> wrote:
> On 11 May, 09:53, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> > Hmmmmmmmm....... I wonder what that could be???????????????????????
> >
> > In my understanding, the author meant that it is utopian, i.e.
> > entirely unrealistic, to expect that mankind would ever live
> > permanently in peace. The lesson of the Second World War is
> > practically null, in my humble view.
>
> I think this is unduly pessimistic. All countries realize that nuclear
> war would be a total disaster in which the losses would completely
> outweigh any gains. In war between fairly equal nation states this
> rubric holds true. Peace CAN be negotiated through mutual interest.
>
> The moral questions arise when a strong nation is fighting a very much
> weaker one. This is the real point about the morality of robotic
> war. People talk about wars being fought between robots. This will
> never be the case. Robots will be fighting on one side; flesh and
> blood will be on the other. Science fiction stories talk about
> the human race being invaded from space and fighting robots. Science
> fact is that the technological nations of Earth are developing robots
> in order to fight Third World countries. They are not being developed
> to fight each other.

   Well, but most of the idiots understand so little about
technology-in-general, computers, communications, positioning,
logistics, technology-transfer, and expense, which is also why the
people with the non-zero technology brains build GPS, Cruise Missiles,
Drones, Holographics, On-Line Publishing, and Self-Replicating
Machines.


>
>   - Ian Parker

zzbunker
5/11/2009 11:45:21 AM
On May 11, 5:19 am, Ian Parker <ianpark...@gmail.com> wrote:
> On 11 May, 09:53, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> > Hmmmmmmmm....... I wonder what that could be???????????????????????
> >
> > In my understanding, the author meant that it is utopian, i.e.
> > entirely unrealistic, to expect that mankind would ever live
> > permanently in peace. The lesson of the Second World War is
> > practically null, in my humble view.
>
> I think this is unduly pessimistic. All countries realize that nuclear
> war would be a total disaster in which the losses would completely
> outweigh any gains. In war between fairly equal nation states this
> rubric holds true. Peace CAN be negotiated through mutual interest.
>
> The moral questions arise when a strong nation is fighting a very much
> weaker one. This is the real point about the morality of robotic
> war. People talk about wars being fought between robots. This will
> never be the case. Robots will be fighting on one side; flesh and
> blood will be on the other. Science fiction stories talk about
> the human race being invaded from space and fighting robots. Science
> fact is that the technological nations of Earth are developing robots
> in order to fight Third World countries. They are not being developed
> to fight each other.
>
>   - Ian Parker

That's what's so nice about the Global Brain.  It brings about Eternal
Peace automatically and, oddly enough, unavoidably.  Hmmmmmm.......
That sounds a bit like a religion, doesn't it?  But it's science
(cybernetics).  See the Principia Cybernetica for more optimism.
0
Don
5/11/2009 12:33:11 PM
On 11 May, 13:33, Don Stockbauer <donstockba...@hotmail.com> wrote:

> That's what's so nice about the Global Brain.  It brings about Eternal
> Peace automatically and, oddly enough, unavoidably.  Hmmmmmm.......
> That sounds a bit like a religion, doesn't it?  But it's science
> (cybernetics).  See the Principia Cybernetica for more optimism.

There is a whole lot here which I think needs picking over. There is
an extremely important point about the Internet, and it is this. All
our TV, entertainment and news is going to come via tailored feeds. It
will because this is what we prefer: we want to look at programs we
are interested in. A global brain would make sure that we got the
appropriate feeds and did not engage in terrorism, war etc. We
would all have a common experience and the nation state would be a
thing of the past.

This machine would be clever. It would constantly be giving us what we
wanted, or what we appeared to want, down to the best plot of "adult"
movie to titillate us. There is nothing way-out in this. In fact it is
already happening in a limited way with Google.
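
The mechanism is easy to sketch: rank what you see by an inferred
interest profile, then let each choice sharpen the profile. The tags,
weights and update rule below are invented for illustration - this is
not how Google actually ranked anything - but the narrowing loop is the
point:

interests = {"robotics": 0.8, "politics": 0.1, "sport": 0.1}

items = [
    ("MAARS field trial report",  {"robotics": 1.0, "politics": 0.3}),
    ("Election night highlights", {"politics": 1.0}),
    ("Cup final round-up",        {"sport": 1.0}),
]

def score(tags):
    return sum(interests.get(t, 0.0) * w for t, w in tags.items())

feed = sorted(items, key=lambda item: score(item[1]), reverse=True)
print([title for title, _ in feed])

# Reading an item pulls the profile further toward it.
clicked = feed[0][1]
for tag, w in clicked.items():
    interests[tag] = 0.9 * interests.get(tag, 0.0) + 0.1 * w
print(interests)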

There are a number of points. The first of these is: what would happen
if someone malevolent hijacked the system for their own ends? Suppose
the malevolent WANTED war, terrorism etc. for their own ends. Suppose
terrorism were "glorified" and presented as noble self-sacrifice. This
is, in effect, what happened in Wilhelmine and then in Nazi Germany.
The real question to ask is: could a code of ethics be imprinted on
Google which governments would find difficult to circumvent?

The second point is that in certain countries, not the major ones,
there is censorship and organized unreason. Our brain will probably
find a way of dealing with this. I have mentioned Arabic scholarship
and the Afghan Qur'an. Disambiguation in translation is closely
related to the ability to perform textual criticism. This may be a
trivial point, but -

1) For this discussion group it is extremely interesting even if a
sideline.

2) Obama is going to look a bit silly if the settlement that he
"imposes" on Afghanistan is totally undermined by "the Brain", as it
surely must be.

This leads me on to a third point, which is this: "How will politicians
react if the world is being governed by a Brain and not by them?" No
doubt the Brain will attempt to give politicians the illusion of
control. That will be part of its technique. Of course, in a
"democratic" country the Brain will decide who is elected.


  - Ian Parker
Ian
5/11/2009 1:53:27 PM
Ian Parker <ianparker2@gmail.com> wrote:
> On 11 May, 09:53, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> > Hmmmmmmmm....... I wonder what that could be???????????????????????
> >
> > In my understanding, the author meant that it is utopian, i.e.
> > entirely unrealistic, to expect that mankind would ever live
> > permanently in peace. The lesson of the Second World War is
> > practically null, in my humble view.
> >
> I think this is unduly pessimistic. All countries realize that nuclear
> war would be a total disaster in which the losses would completely
> outweigh any gains. In war between fairly equal nation states this
> rubric holds true. Peace CAN be negotiated through mutual interest.
>
> The moral questions arise when a strong nation is fighting a very much
> weaker one. This is the real point about the morality of robotic
> war. People talk about wars being fought between robots. This will
> never be the case. Robots will be fighting on one side; flesh and
> blood will be on the other.

That's just nonsense.

Robots are just more tools of war.  They have been fighting on both sides
ever since the first time some pre-man picked up a stick and used it in a
fight.

Robots are just better sticks.  But so are tanks, and guided missiles, and
bullets.

What do you think a spear is?  It's a knife on the end of a stick.  Why was
the knife put on the end of the stick?  To get the human further away from
the other guy with a knife so as to limit potential harm to himself.

All weapons of war are developed as a way of increasing the harm to the
enemy while minimizing harm to one's self.  Robots in all forms are just
more of the same.

As long as there is war, they will be fighting on both sides.

> Science fiction stories talk about
> the human race being invaded from space and fighting robots. Science
> fact is that the technological nations of Earth are developing robots
> in order to fight Third World countries. They are not being developed
> to fight each other.
>
>   - Ian Parker

Robots are just one more weapon of war being developed. The reason we see
robots being developed for war has nothing to do with who's fighting. It's
just because robots happen to be on the cutting edge of war technology at
this point in history.  If we had advanced nations fighting each other
(like Europe against the US), you would actually see a HUGE surge in robot
development.  The fact that the current battles are so one-sided reduces
the pressure to invest in new war technology, and as such it's far slower
right now than it would be if there were some real battles going on.  This
stuff isn't war from the side of the West. It's a police action.

The world is coming together into a stronger, united "global brain" as Don
likes to talk about.  However, because the world is still populated with
diverse cultures that have fundamentally different views about what's
important in life, and about how a culture should be structured, we have
friction to deal with.  The stronger cultures, as they always have in
history, are forcing their culture onto the weaker cultures, and the result
is exactly what is expected - the weaker cultures don't like it and are
fighting back.  They are using terrorist techniques to defend their culture
because that's all you can use when a weak force fights an overwhelmingly
stronger force.

The reason the strong is going against the weak is because the strong is
made up of cultures that have already had their battles in the past, worked
out most of their differences, and agreed on what form of culture to move
forward with.  The Middle East fell behind in the race of advancing
strength something like 800 years ago and has been mostly ignored since
then. But its control over oil, the single most valuable limited resource
on the planet, forces the combined cultures of the West to have to deal
with the Middle East.  If the Middle East had a culture that worked,
allowed the West to continue to get oil, and didn't in the process make
them so rich that they would be able to force their culture on the West,
then all would be fine.  But the culture of the Middle East hasn't worked
well enough to keep them out of trouble with the West.

The bottom line is that none of this conflict will be over until 1) we run
out of oil, making the deserts of the Middle East of no significant interest
to the developed nations (like much of Africa currently is), or 2) our
cultures merge well enough that we can coexist as one large functioning
culture.

In the end, the whole earth will merge together as one large culture, with
one large system of government, at which point there will be great peace
for an extended time.  But there will continue to be more disagreements
until that day, and some of those disagreements will rise to the level of
war as people try to defend their cultures.

None of this has anything to do with robots, which are just one more weapon
of war.  And if you think atomic bombs were nasty weapons of war, just wait
to see what's coming with AI.  A weapon with human-like intelligence or
greater built for war would not just wipe out a few cities; it could kill
(at a relatively low price) every human on a continent without hurting the
countryside or the natural resources of the land.  It wouldn't be like the
bombed-out buildings and destruction of past wars.  It would just be a wave
of millions and millions of smart machines swarming over the land and
killing everything that got in their way.  They might not even use
explosives or guns.  A simple blade might be the best weapon for such a
swarm of AIs.  The best form for these AIs might be bird-like creatures that
would simply swarm on humans and slice them up.  The humans go into hiding,
and the AI birds just wait for them to die, because you can't survive as a
race underground for long.

The real point is not that such a thing will happen like that, but that as
technology develops we are forced to operate as one large culture instead
of as many smaller cultures. That's because advancing technology causes our
desires to impinge on each other, and that leads to disagreements which,
if there's not a functioning culture to settle these little disagreements,
will always build into large disagreements which will then turn into
war.  AI is just one more technology that will force the world to merge
into one large culture, and that merging will always come with some pain;
sadly, the smaller, weaker cultures will always have to receive more of
that pain in the process.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
curt
5/11/2009 6:43:54 PM
Ian Parker <ianparker2@gmail.com> wrote:
> On 11 May, 13:33, Don Stockbauer <donstockba...@hotmail.com> wrote:
>
> > That's what's so nice about the Global Brain.  It brings about
> > Eternal Peace automatically and, oddly enough, unavoidably.
> > Hmmmmmm....... That sounds a bit like a religion, doesn't it?
> > But it's science (cybernetics).  See the Principia Cybernetica for
> > more optimism.
>
> There is a whole lot here which I think needs picking over. There is
> an extremely important point about the Internet, and it is this. All
> our TV, entertainment and news is going to come via tailored feeds. It
> will because this is what we prefer: we want to look at programs we
> are interested in. A global brain would make sure that we got the
> appropriate feeds and did not engage in terrorism, war etc. We
> would all have a common experience and the nation state would be a
> thing of the past.
>
> This machine would be clever. It would constantly be giving us what we
> wanted, or what we appeared to want, down to the best plot of "adult"
> movie to titillate us. There is nothing way-out in this. In fact it is
> already happening in a limited way with Google.

The "global brain" that Don talks about is already here.  It's just what
results when you link the needs and desires of people into a functioning
unit.  We build those links with our economy, our governments, our
communication systems, and all our social organizations.  It's just called
culture, or society.  And, BTW, it's already conscious (but that's a debate
for another thread).

What's happening over time is that the links are becoming stronger, and
wider reaching.  As we develop the technologies to allow it to happen, we
are being more closely linked into a larger, and stronger, and smarter,
global brain.

> There are a number of points. The first of these is: what would happen
> if someone malevolent hijacked the system for their own ends? Suppose
> the malevolent WANTED war, terrorism etc. for their own ends. Suppose
> terrorism were "glorified" and presented as noble self-sacrifice. This
> is, in effect, what happened in Wilhelmine and then in Nazi Germany.
> The real question to ask is: could a code of ethics be imprinted on
> Google which governments would find difficult to circumvent?

That's happened many times.  It's what happens when a strong dictator comes
into power.  They do it by finding a way to hijack the system to meet their
own desires.

We have developed "global brain" systems that minimize the odds of that
happening.  It's what a democracy is all about.  It's a system where every
person is given equal power to control the system (one person, one vote).
And we structure our society so that that control is the supreme power in
the land, so that some other power, like Google, can't take it away
from us.

The odds of one bad guy subverting the power of the US government (for
example) are very small.  But it's something we always have to be very
vigilant about, which is why someone like Saddam Hussein, Adolf Hitler, or
Kim Jong-il can scare us so much.

> The second point is that in certain countries, not the major ones,
> there is censorship and organized unreason.

Well, the US has lots of organized unreason as well.  We elected an idiot
to run the country for 8 years in one of the most unreasoned moves I've
seen in my life.  But that's the price you pay for living in a democracy.
The power is held by the people as a group, so you live and die by their
reasoned, and unreasoned, view.  The net win is that most people get things
"their way" most of the time, even if it's not the smartest way.  The goal
is to maximize the happiness of the group by minimizing the disagreement in
the mob. If most people want to let the idiot run the country, we let the
idiot run the country, because to do anything else would make more people
less happy.

> Our brain will probably
> find a way of dealing with this. I have mentioned Arabic scholarship
> and the Afghan Qur'an. Disambiguation in translation is closely
> related to the ability to perform textual criticism. This may be a
> trivial point but -
>
> 1) For this discussion group it is extremely interesting even if a
> sideline.
>
> 2) Obama is going to look a bit silly if the settlement that he
> "imposes" on Afghanistan is totally undermined by "the Brain", as it
> surely must be.

I don't understand your points about the Qur'an.

However, I do understand that some large percentage of the Middle East
believes that a very old and out-of-date book tells them what they should
do as a culture.

Though the US has been a mostly Christian nation and has strong Christian
beliefs flowing through our culture, the culture is not ruled by the bible.
It's ruled by our constitution - a far smaller, but far more important,
document.  It's the highest authority in our land in terms of binding us
together as a culture - as a functioning "global brain".  Next in power
are the state constitutions, and somewhere way down the line are the
religious beliefs like the bibles and the Qur'ans.  There's a strict
pecking order on what authority binds our culture together, and the
religious doctrines have always been way down the ladder.

This came about in the US because we were founded from such a wide mix of
different religions, at a time when many were looking to escape from
religion-based cultures - which are just another form of dictatorship.

Though religion still plays an important role in what the US is (sadly, in
my view), it's not the ultimate power.  And though it got a born-again
idiot elected to run the US for 8 years, the real power of the land, the
constitution, got him removed without anyone having to be killed.

The Middle East (the little I understand it) is still mostly a
religion-based culture.  It's what most of the West rejected as a bad way
to organize a society hundreds of years ago.  It's what's creating the
most friction between our cultures as history and fate force us to figure
out a way to coexist and, ultimately, to merge.

A fundamental tenet of western culture is that we do not treat religion
as the cornerstone of our society.  A constitution is.  But if you are
raised in a culture that puts religious beliefs as the ultimate
foundation of "right" in the land, it's not easy to accept, or even
understand, a culture that is organized around a constitution.  Sure, they
can understand how a government might be important, and how it needs a
constitution to operate if it's not going to be a dictatorship, but what
they won't understand is this idea that this stupid little paper that
tells the government what to do is to be treated as the highest authority
in the land - higher than any religious belief.

Though I understand very little about the Middle East and the cultures
there, I do get the strong impression that they, as a culture, can't grasp
or understand this, and certainly will not accept it any time soon.

But until they accept a constitution-based democracy as the ruling force
in the land, their culture won't be compatible with the West.  And as long
as they aren't compatible with the West, we can't merge as a strong united
"global brain" culture. And until we find a way to merge, the West is not
going to fully trust what happens there; and as long as the region is
important to us, because of oil, or because of general world stability, the
West will keep its "boot" on them.  They will be held slaves to the
desire of our "global brain culture" until they convert, or until we get
too weak to hold them.  And our desire is not to have them as slaves, or to
hold them hostage, but to have them as another healthy, functioning part of
the global brain.  But for that to happen, they have to learn how to create
a democracy, and how to create a culture run by voting citizens, not by
clerics and dictators.

I'm not sure how far away they are from that, but it seems they are
still very far away from it - maybe 1 or 2 generations of family members
away from it still.

I wrote all that because your comments about the Qur'an sound like you
believe the Qur'an and what it says are important in that society (which I
believe they are too).  My point with this long, drawn-out reply is that
until they learn to stop looking at the wording of some religious document
to lead them, and instead write their own constitution to form a democratic
government and let that lead them, they won't have a chance of being
compatible with the West.  The conflict between the Middle East and the
West won't end until either they change, or the West changes.  We will
operate as two separate sub-global-brains fighting with each other until
we become more compatible and are able to merge into one larger and
stronger global brain.

Whereas Bush was an old-school little brain trying to fight the other
brains just because they were different, and couldn't understand someone
who was different, Obama understands that the solution to our
disagreements and wars is to do whatever we can to help the merging of our
societies to occur.

> This leads me on to a third point which is this. "How will politicians
> react if the World is being governed by a Brain and not by them?"

The US has been ruled by a "global brain" for over 200 years now.  It's the
global brain formed by the voting population of the US.  The politicians do
not rule the US; the people do.  That's why one was kicked out and another
was put in place, regardless of what they might have wanted (or what the
other 20 that didn't get picked might have wanted).

The combined desires of the world population are what we want to rule the
world (in order to maximize global happiness and minimize global wars and
other problems).  Currently, the desires of US citizens have far too much
power over what happens to the rest of the world.  The rest of the world
deserves to have voting rights so we in the US don't stomp all over them
with our opinions and desires.  But that's not going to happen until they
first learn to be part of a democracy in their own land.  When they learn
to do that, then later we can look at merging them in with fair rights to
the "global brain" which is still trying to form.

> No
> doubt the Brain will attempt to give politicians the illusion of
> control. That will be part of its technique. Of course, in a
> "democratic" country the Brain will decide who is elected.
>
>   - Ian Parker

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/11/2009 8:49:59 PM
Curt Welch wrote:
> 
> Well, the US has lots of organized unreason as well.  We elected an idiot
> to run the country for 8 years in one of the most unreasoned moves I've
> seen in my life.  But that's the price you pay for living in a democracy.
> The power is held by the people as a group, so you live and die by their
> reasoned, and unreasoned, view.  The net win is that most people get
> things "their way" most of the time, even if it's not the smartest way.
> The goal is to maximize the happiness of the group by minimizing the
> disagreement in the mob. If most people want to let the idiot run the
> country, we let the idiot run the country, because to do anything else
> would make more people less happy.

I wonder if things would be improved if we had some type of minimum
standards for voting.  It seems the current voting bloc tends to vote
emotionally, not rationally.  Bush/Cheney were able to use good old
ignorant fear to their own ends.  Right now it seems we're ruled by people
who get their worldview from Entertainment Tonight.

Mike Ross

someone
5/11/2009 9:24:11 PM
Ian Parker wrote:
> Mok-Kong Shen wrote:
>> Hmmmmmmmm....... I wonder what that could be???????????????????????
>>
>> In my understanding, the author meant that it is utopian, i.e.
>> entirely unrealistic, to expect that mankind would ever live
>> permanently in peace. The lesson of the Second World War is
>> practically null, in my humble view.
>>
> I think this is unduly pessimistic. All countries realize that nuclear
> war would be a total disaster in which the losses would completely
> outweigh any gains. In war between fairly equal nation states this
> rubric holds true. Peace CAN be negotiated through mutual interest.
[snip]

The time when only nations were players in wars is bygone. Today
there are terrorists who are ready to sacrifice their own lives.
They wouldn't be able to make atomic bombs. They might even have
some difficulties with chemical weapons. But how about biological
weapons, which could be cheaply built once one has the know-how?
One knows e.g. that sophisticated labs can today rebuild the
notorious virus of the Spanish flu of 1918. Is there a really
sure way of preventing terrorists from acquiring such techniques
sometime, somehow?

M. K. Shen
Mok
5/11/2009 9:27:30 PM
someone@somewhere.net wrote:
> I wonder if things would be improved if we had some type of minimum
> standards for voting.

Oh, that would be Australia then, where, because people are
required to vote, people actually wake up a little once every
four years and give some thought to what policies are likely
to benefit them. This brings out the otherwise apathetic
middle ground, which is where the USA has such a huge hole
in its voting population, and so gets dominated by minorities
and "true believers". The AU system is much more stable and
less partisan.
Clifford
5/11/2009 10:35:55 PM
On May 11, 5:24 pm, some...@somewhere.net wrote:
> Curt Welch wrote:
>
> > Well, the US has lots of organized unreason as well.  We elected an
> > idiot to run the country for 8 years in one of the most unreasoned
> > moves I've seen in my life.  But that's the price you pay for living
> > in a democracy.  The power is held by the people as a group, so you
> > live and die by their reasoned, and unreasoned, view.  The net win is
> > that most people get things "their way" most of the time, even if
> > it's not the smartest way.  The goal is to maximize the happiness of
> > the group by minimizing the disagreement in the mob. If most people
> > want to let the idiot run the country, we let the idiot run the
> > country, because to do anything else would make more people less
> > happy.
>
> I wonder if things would be improved if we had some type of minimum
> standards for voting.  It seems the current voting bloc tends to vote
> emotionally, not rationally.  Bush/Cheney were able to use good old
> ignorant fear to their own ends.  Right now it seems we're ruled by
> people who get their worldview from Entertainment Tonight.

  Well, idiot Bush gets his aircraft information from Disney World.
  Which is why the people with actual technology brains even
  build stuff like Holographics, All-In-One Printers, XML, USB,
  On-Line Banking, On-Line Shopping, On-Line Publishing, C++,
  Distributed Processing, Light Sticks, Compact Fluorescent Lighting,
  Broadband Fiber Optics, Cell Phones, Drones, Cruise Missiles, AUVs,
  and Self-Assembling Robots.

  And idiot Cheney gets all his economic theories from Saudi Arabia.
  So that's why the people with actual non-zero technology brains
  build PV Cell Energy, Solar Energy, Neo Wind Energy, Biodiesel,
  MP3, MPEG, CD-ROM, DVD-ROM, Optical Computers, Thermo-Electric
  Cooling, Microwave Cooling, and Self-Replicating Machines.

>
> Mike Ross

zzbunker
5/11/2009 11:03:55 PM
On 11 May, 19:43, c...@kcwc.com (Curt Welch) wrote:

> > The moral questions arise when a strong nation is fighting a very much
> > weaker one. This is the real point about the morality of robotic
> > war. People talk about wars being fought between robots. This will
> > never be the case. Robots will be fighting on one side; flesh and
> > blood will be on the other.
>
> That's just nonsense.
>
> Robots are just more tools of war.  They have been fighting on both sides
> ever since the first time some pre-man picked up a stick and used it in a
> fight.
>
> Robots are just better sticks.  But so are tanks, and guided missiles, and
> bullets.

At the moment this is true. There is no chance of a "takeover" in
terms of the robots employed in Afghanistan. It is, however, making war
a lot more one-sided. It is rather depressing to see a man like Karzai
backed up by US technology.
>
> What do you think a spear is?  It's a knife on the end of a stick.  Why
> was the knife put on the end of the stick?  To get the human further away
> from the other guy with a knife so as to limit potential harm to himself.
>
> All weapons of war are developed as a way of increasing the harm to the
> enemy while minimizing harm to one's self.  Robots in all forms are just
> more of the same.
>
> As long as there is war, they will be fighting on both sides.
>
> > Science fiction stories talk about
> > the human race being invaded from space and fighting robots. Science
> > fact is that the technological nations of Earth are developing robots
> > in order to fight Third World countries. They are not being developed
> > to fight each other.
>
> >   - Ian Parker
>
> Robots are just one more weapon of war being developed. The reason we see
> robots being developed for war has nothing to do with who's fighting. It's
> just because robots happen to be on the cutting edge of war technology at
> this point in history.  If we had advanced nations fighting each other
> (like Europe against the US), you would actually see a HUGE surge in robot
> development.  The fact that the current battles are so one-sided reduces
> the pressure to invest in new war technology, and as such it's far slower
> right now than it would be if there were some real battles going on.  This
> stuff isn't war from the side of the West. It's a police action.
>
It is a rather brutal and biased police action - indeed, it cannot
really be considered a police action at all. A police force is there to
enforce the law, and I am concerned as well about the laws that are
being enforced: marital rape and blasphemy, to name just two. The
United Nations charter tells us that people should have the freedom to
choose whom they wish to marry and whether to marry at all. The fact of
the matter is that Afghans are being denied that freedom.

Up to 9/11 the police (so called) were enforcing jihadism. The CIA and
OBL were also instrumental in setting up the KLA and destabilizing
Yugoslavia.

> The world is coming together into a stronger, united "global brain" as Don
> likes to talk about.  However, because the world is still populated with
> diverse cultures that have fundamentally different views about what's
> important in life, and about how a culture should be structured, we have
> friction to deal with.  The stronger cultures, as they always have in
> history, are forcing their culture onto the weaker cultures, and the result
> is exactly what is expected - the weaker cultures don't like it and are
> fighting back.  They are using terrorist techniques to defend their culture
> because that's all you can use when a weak force fights an overwhelmingly
> stronger force.

I think this is oversimplified. I think it is true, but it ignores the
fact that the CIA supported, at least for a time, the values of the
"weaker culture".

It is also oversimplified because the Arab world does not speak with
one voice. There are Arabs who want secular, scientific government.
This was typified by the Ba'ath parties in Iraq and Syria and by Gamal
Abdel Nasser in Egypt. The US fought consistently against this
movement. The US put its weight instead behind the Saudi monarchy.

http://en.wikipedia.org/wiki/Saddam_Hussein

Arif and Al-Bakr were progressive Ba'athists and were removed by
Saddam Hussein with US connivance.
>
> The reason the strong is going against the weak is because the strong is
> made up of cultures that have already had their battles in the past,
> worked out most of their differences, and agreed on what form of culture
> to move forward with.  The Middle East fell behind in the race of
> advancing strength something like 800 years ago and has been mostly
> ignored since then. But its control over oil, the single most valuable
> limited resource on the planet, forces the combined cultures of the West
> to have to deal with the Middle East.  If the Middle East had a culture
> that worked, allowed the West to continue to get oil, and didn't in the
> process make them so rich that they would be able to force their culture
> on the West, then all would be fine.  But the culture of the Middle East
> hasn't worked well enough to keep them out of trouble with the West.
>
There is NO Arab culture, as I have said. The US wanted compliant
régimes who would pump the oil. The Al Saud family fitted the bill
perfectly.

The tragedy of the Middle East is that, unlike Europe, it did not have
institutions strong enough to resist the CIA.

> The bottom line is that none of this conflict will be over until 1) we run
> out of oil, making the deserts of the Middle East of no significant
> interest to the developed nations (like much of Africa currently is), or
> 2) our cultures merge well enough that we can coexist as one large
> functioning culture.
>
> In the end, the whole earth will merge together as one large culture, with
> one large system of government, at which point there will be great peace
> for an extended time.  But there will continue to be more disagreements
> until that day, and some of those disagreements will rise to the level of
> war as people try to defend their cultures.
>
The basic truth of Afghanistan is that the 1980s were jihad, jihad,
jihad. After the fall of the Berlin Wall the Taliban became rather an
embarrassment and were abandoned. The KLA were, however, useful in
terms of destabilizing Yugoslavia.

> None of this has anything to do with robots, which are just one more
> weapon of war.  And if you think atomic bombs were nasty weapons of war,
> just wait to see what's coming with AI.  A weapon with human-like
> intelligence or greater built for war would not just wipe out a few
> cities; it could kill (at a relatively low price) every human on a
> continent without hurting the countryside or the natural resources of
> the land.  It wouldn't be like the bombed-out buildings and destruction
> of past wars.  It would just be a wave of millions and millions of smart
> machines swarming over the land and killing everything that got in their
> way.  They might not even use explosives or guns.  A simple blade might
> be the best weapon for such a swarm of AIs.  The best form for these AIs
> might be bird-like creatures that would simply swarm on humans and slice
> them up.  The humans go into hiding, and the AI birds just wait for them
> to die, because you can't survive as a race underground for long.

MAARS, the Reaper and other robots are basically web-based weapons.
Their "intelligence" is basically that of the Web. The point is that
Afghanistan can be fought now entirely from within the borders of the
US. Each robot (now) has a human controller, the only exception being
MAARS firing at the warm barrels of snipers' rifles.

You are right in that long-term nanotechnology will be developed.
Nanobots COULD act exactly as you describe. A nano swarm will be
self-communicating. It will have an "intelligence" that basically
consists of an array of 8-bit (or even 4-bit) pico processors.
Thousands of pico processors will constitute an intelligence.

Views on this should, I think, be mixed. Academically it will be a
tremendous challenge; it will also be quite a significant development
in the art of computing. Let us look at an FPGA (Field-Programmable
Gate Array). By burning fuses you can produce some quite complex
devices. If we were to put pico processors on a chip, with the fuses
controlled by the pico processors themselves, think of what we could
do and of the architectures that could be supported. It would in fact
have quite a few similarities with the brain itself.
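
No such hardware exists, so what follows is only a loose software
sketch of the idea, with arbitrary parameters: a grid of "pico
processors", each holding one 8-bit value and averaging with its
neighbours over links which the cells themselves may burn - a crude
stand-in for processor-controlled fuses:

import random

N = 8
random.seed(1)
state = [[random.randrange(256) for _ in range(N)] for _ in range(N)]

# fuse[(a, b)] is True while the link between neighbouring cells a and b
# is intact.
fuse = {}
for y in range(N):
    for x in range(N):
        if x + 1 < N:
            fuse[((x, y), (x + 1, y))] = True
        if y + 1 < N:
            fuse[((x, y), (x, y + 1))] = True

def neighbours(cell):
    for (a, b), intact in fuse.items():
        if intact:
            if a == cell:
                yield b
            elif b == cell:
                yield a

for step in range(20):
    nxt = [row[:] for row in state]
    for y in range(N):
        for x in range(N):
            here = (x, y)
            vals = [state[ny][nx] for nx, ny in neighbours(here)]
            nxt[y][x] = sum(vals + [state[y][x]]) // (len(vals) + 1)
            # A cell burns the fuse to any very unlike neighbour,
            # reshaping its own interconnect as it runs.
            for nx, ny in list(neighbours(here)):
                if abs(state[ny][nx] - state[y][x]) > 200:
                    key = (here, (nx, ny))
                    if key not in fuse:
                        key = ((nx, ny), here)
                    fuse[key] = False
    state = nxt

print("cell (0,0) settled at", state[0][0])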

On the other hand, military use will produce the ultimate nightmare.
The military is extremely sensitive to the whole issue of AI. Some
posters in sci.space.policy are intent on ridiculing the whole idea.
Mind you, I can't see there being much of a future in manned space
flight without the presence of AI! I think the top brass is probably
aware, however dimly, of these facts.

One thing I feel I should add to what you have said is this: you have
totally disproved flying saucers as alien spacecraft. You see, if
aliens really HAD visited Earth, the technology you describe would be
what they would in practice be using - if not something totally
unimaginable to us.
>
> The real point is not that such a thing will happen like that, but that
> as technology develops we are forced to operate as one large culture
> instead of as many smaller cultures. That's because advancing technology
> causes our desires to impinge on each other, and that leads to
> disagreements which, if there's not a functioning culture to settle these
> little disagreements, will always build into large disagreements which
> will then turn into war.  AI is just one more technology that will force
> the world to merge into one large culture, and that merging will always
> come with some pain; sadly, the smaller, weaker cultures will always have
> to receive more of that pain in the process.
>
They are going to receive more pain than they need have done because
of their exploitation by the stronger cultures.

There are some points from other contributions which I will discuss
here. First and foremost: how did "the Shrub" manage to get elected
for 2 terms? He manipulated the information system, he smeared his
opponents, and he managed to put a favourable spin on things.

Hitler got into power in 1933 because he was supported by powerful
industrialists. Bush was elected at least in part because the CIA was
on his side.

Google is American but not affiliated to the government. Should the
CIA ever attempt to take Google over, WATCH OUT.


  - Ian Parker
Ian
5/12/2009 12:16:05 PM
Ian Parker <ianparker2@gmail.com> wrote:
> On 11 May, 19:43, c...@kcwc.com (Curt Welch) wrote:

> It is a rather brutal and biased police action.

Yes it is.  But I was talking more about big-picture ideas of how it
fits into what's happening in the history of the world.

> It cannot be
> considered a police action. A Police force is there to enforce the
> law. I am concerned as well about the laws that are being enforced.
> Marital rape, blasphemy to name just two. The United Nations charter
> tells us that people should have the freedom of who they wish to marry
> and wjhether to marry at all. The fact of the matter is that Afghans
> are being denied that freedom.
>
> Up to 9/11 the Police (so called) were enforcing jihadism. The CIA and
> OBL were also instrumental in setting up the KLA and destabilizing
> Yugoslavia.
>
> > The world is coming together into a stronger, united "global brain" as
> > Do=
> n
> > likes to talk about.  However, because the world is still populated
> > with diverse cultures that have fundamentally different views about
> > what's important in life, and about how a culture should be structured,
> > we have friction to deal with.  The stronger cultures are, as they
> > always have in history, are forcing their culture onto the weaker
> > cultures, and the resu=
> lt
> > is exactly what is expected - the weaker cultures don't like it and are
> > fight back.  They are using terrorist techniques to defend their
> > culture becuase that's all you can use when a weak force fights
> > overweeningly stronger force.
>
> I think this is oversimplified. I think it is true but it ignores the
> fact that the CIA supported, at least for a time, the values of the
> "weaker culture".
>
> It is also oversimplified because the Arab world does not speak with
> one voice. There are Arabs who want secular scientific government.
> This was typified by the Ba'ath parties in Iraq and Syria and by Gamal
> Nasser in Egypt. The US fought consistently against this movement. The
> US put its weight instead behind the Saudi Monarchy.
>
> http://en.wikipedia.org/wiki/Saddam_Hussein
>
> Arif and Al-Bakr were progressive Ba'athists and were removed by
> Saddam Hussein with US connivance
> >
> > The reason the strong is going against the weak is because the strong
> > is made up of cultures that have already had their battles in the past,
> > worked out most of their differences, and agreed on what form of
> > culture to move forward with.  The middle east fell behind in the race
> > of advancing strength something like 800 years ago and has been mostly
> > ignored since then. But their control over oil, the single most
> > valuable limited resource on the planet, forces the combined cultures
> > of the west to have to deal with the middle east.  If the middle east
> > had a culture that worked, and allowed the west to continue to get oil,
> > and didn't in the process make them so rich that they would be able to
> > force their culture on the west, then all would be fine.  But the
> > culture of the middle east hasn't worked well enough to keep them out
> > of trouble with the west.
> >
> There is NO Arab culture as I have said. The US wanted compliant
> régimes who would pump the oil. The Al-Saud family fitted the bill
> perfectly.
>
> The tragedy of the Middle East is that, unlike Europe, it did not have
> institutions strong enough to resist the CIA.
>
> > The bottom line is that none of this conflict will be over until 1) we
> > run out of oil, making the deserts of the middle east of no significant
> > interest to the developed nations (like much of Africa currently is),
> > or 2) our cultures merge well enough that we can coexist as one large
> > functioning culture.
> >
> > In the end, the whole earth will merge together as one large culture,
> > with one large system of government, at which point there will be great
> > peace for an extended time.  But there will continue to be more
> > disagreements until that day, and some of those disagreements will rise
> > up to the level of war as people try to defend their cultures.
> >
> The basic truth of Afghanistan is that the 1980s were jihad, jihad,
> jihad. After the fall of the Berlin wall the Taliban became rather an
> embarrassment and were abandoned. The KLA were however useful in terms
> of destabilizing Yugoslavia.
>
> > None of this has anything to do with robots, which are just one more
> > weapon of war.  And if you think atomic bombs were nasty weapons of
> > war, just wait to see what's coming with AI.  A weapon with human like
> > intelligence or greater built for war would not just wipe out a few
> > cities, it could kill (at a relatively low price) every human on a
> > continent without hurting the countryside or the natural resources of
> > the land.  It wouldn't be like the bombed out buildings and destruction
> > of past wars.  It would just be a wave of millions and millions of
> > smart machines swarming over the land and killing everything that got
> > in their way.  They might not even use explosives or guns.  A simple
> > blade might be the best weapon for such a swarm of AIs.  The best form
> > of these AIs might be bird like creatures that would simply swarm on
> > humans and slice them up.  The humans go into hiding, and the AI birds
> > just wait for them to die because you can't survive as a race
> > underground for long.
>
> MAARS, the Reaper and other robots are basically web based weapons.
> Their "intelligence" is basically that of the Web. The point is that
> Afghanistan can be fought now entirely from within the borders of the
> US. Each robot (now) has a human controller. The only exception to this
> is MAARS firing at the warm barrels of snipers' rifles.

Yeah, there isn't very much AI in them yet. They are just very long sticks
with spears on them, for the most part.  But more of the AI stuff is coming
soon enough.

> You are right in that long term nanotechnology will be developed.

Well, I was thinking about shorter term small robots vs nano tech, but that
too is probably coming in time.

> Nanobots COULD act exactly as you describe. A nano swarm will be
> self-communicating. They will have an "intelligence" that basically
> consists of an array of 8 (or even 4) bit pico processors. Thousands
> of pico processors will constitute an intelligence.
>
> Views on this should, I think, be mixed. Academically it will be a
> tremendous challenge; it will also be quite a significant development
> in the art of computing.

I think effective AI is not as compute intensive as it's believed to be.
It's far more of a software problem than a compute power problem.  People
like to believe lack of progress in AI is at least partially due to a lack
of computing power, but I think that's the far less significant part of the
equation.  Progress is slow because AI is just a tricky problem and
requires breakthroughs in software and in understanding that just take
time.

> Let us look at an FPGA (Field Programmable
> Gate Array). You can, by burning fuses, produce some quite complex
> devices. If we were to put pico processors on a chip, with fuses
> controlled by the pico processors, think of what we could do, the
> architectures that could be supported. It would in fact have quite a
> few similarities with the brain itself.
>
> On the other hand military use will produce the ultimate nightmare.
> The military is extremely sensitive to the whole issue of AI. Some
> posters in sci.space.policy are intent on ridiculing the whole idea.
> Mind you, I can't see there being much of a future in manned space
> flight without the presence of AI! I think the top brass is probably
> aware, however dimly, of these facts.
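
For concreteness, here is a minimal Python sketch of the fuse-controlled
pico-processor fabric described in the quoted paragraphs. Everything in it
is an illustrative assumption - the 4-bit processors are reduced to lookup
tables, and a rewritable routing table stands in for the fuses - so it is a
thought experiment in code, not a real device or toolchain:

class PicoProcessor:
    # A 4-bit processor reduced to its simplest form: a 16-entry lookup
    # table mapping a 4-bit input nibble to a 4-bit output nibble.
    def __init__(self, table):
        self.table = [v & 0xF for v in table]

    def step(self, nibble):
        return self.table[nibble & 0xF]

class Fabric:
    # routes[i] names the processor feeding processor i; this routing
    # table plays the role of the FPGA's fuses, except it stays writable.
    def __init__(self, processors, routes):
        self.processors = processors
        self.routes = routes
        self.state = [0] * len(processors)

    def tick(self):
        # Every processor reads its routed neighbour; all update in parallel.
        self.state = [p.step(self.state[self.routes[i]])
                      for i, p in enumerate(self.processors)]

    def reroute(self, i, src):
        # Rewiring at run time: the architecture itself is programmable.
        self.routes[i] = src

# Three processors in a ring: increment, bitwise invert, pass through.
f = Fabric([PicoProcessor([(v + 1) % 16 for v in range(16)]),
            PicoProcessor([~v & 0xF for v in range(16)]),
            PicoProcessor(list(range(16)))],
           routes=[2, 0, 1])
f.state = [3, 0, 0]
for _ in range(4):
    f.tick()
print(f.state)
f.reroute(0, 1)     # change the topology without touching the processors

The only point of the toy is that when the interconnect is itself data that
can be rewritten at run time, the array can take on new architectures
without new hardware, which is the brain-like property being claimed.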

Well, AI is already in space so I'm not sure what your point is.  As AI
gets better, we will send fewer and fewer humans out to space.  Humans
weren't built to live in space so it costs a lot of money to send a human
along with all the required support systems.  As AI gets better, we will
stop sending humans altogether simply because it would be a total waste
of money to do so - not to mention the danger to the life of the
astronauts.

> One thing I feel I should add to what you have said, and it is this.
> You have totally disproved flying saucers as alien spacecraft. You see,
> if aliens really HAD visited Earth, the technology you describe would
> be what they would in practice be using - if not something totally
> unimaginable to us.

That's only if they had a reason to kill us, right?  Any advanced
civilization would not see us as a threat, they would see us like we see a
family of chimps living in the trees.  We don't send our best tools of war
to wipe out the chimps.  Why would the aliens waste their best tools of
destruction on us?

Personally, my bet on alien robots is that they are nano bots and there
could be billions of them here monitoring this "zoo" planet and we would
never know it.  The best tool for finding UFOs might turn out to be the
scanning electron microscope.

> > The real point is not that such a thing will happen like that, but
> > that as technology develops, we are forced to operate as one large
> > culture instead of as many smaller cultures. That's because advancing
> > technology causes our desires to impinge on each other, and that leads
> > to disagreements, which, if there's not a functioning culture to
> > settle these little disagreements, they will always build into large
> > disagreements which will then turn into war.  AI is just one more
> > technology that will force the world to merge into one large culture
> > and that merging will always come with some pain, and sadly, the
> > smaller weaker cultures will always have to receive more of that pain
> > in the process.
> >
> They are going to receive more pain than they need have done because
> of their exploitation by the stronger cultures.
> > --
> There are some points from other contributions which I will discuss
> here. First and foremost: how did "the Shrub" manage to get elected
> for 2 terms? He manipulated the information system, he smeared his
> opponents, and managed to put a favourable spin on things.

He managed it because there were a lot of people who actually liked him.  I
didn't happen to be one of them, but I know a lot who did.  No amount of
cheap underhanded information manipulation tricks can get you elected if
the people don't basically want to hear what you feed them to start
with.  The majority of Americans wanted to hear what Bush was selling them.

> Hitler got into power in 1933 because he was supported by powerful
> industrialists. Bush was elected, at least in part because the CIA was
> on his side.
>
> Google is American but not affiliated to the government. Should the
> CIA ever attempt to take Google over WATCH OUT.
>
>   - Ian Parker

There seems to be this odd belief in the world outside the US that the CIA
is some huge nasty power on its own to be feared.  It's not.  It's just one
of the many instruments the American PEOPLE are using to get their way with
the rest of the world.  Presidents don't get elected because they have the
CIA "on their side".  That's absurd.  They get elected becuase they manage
to get the PEOPLE on their side.

The CIA gets its power (budget) from Congress, and Congress gets its
power and money from the people.  If the CIA is fucking someone over in
the world (as they have done), it's because it's the will of the American
people making it happen.  Even when we aren't directly aware of what's
being done (and might not approve if we knew about it), it's our collective
desires, and needs, and fears, that are causing it to happen.

CIA taking over Google?  Nothing could be more absurd.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/12/2009 3:45:04 PM
You have raised a lot of points, very good points in fact. I just want
to amplify a few of them.

1) AI is not simply a matter of getting processor power.

No, AI is at least in part a matter of communications. Google exists
through bringing together all the world's information. I think we have
in fact emphasised things like chess, whereas what we should have been
talking about a lot more is memory and reasoning from memory. I
personally believe that the understanding of language is an absolutely
vital part of AI. Crack language and you have AI. Language is after
all what will enable you to organize your database.

If you look at the Web you will find that most algorithms that are
needed have already been written. The question is finding them and
ensuring that they are compatible with other algorithms. I do not see
computers actually writing their own code; what I can see, though, is a
database of code and the use of a coordination language like Manifold
for writing further applications (a toy illustration follows below).

http://www.ercim.org/publication/Ercim_News/enw35/arbab.html

Being a coordination language, it is fully parallel.
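
Here is that toy illustration. It is deliberately not Manifold syntax
(which I won't reproduce from memory); it is a plain Python sketch under
the assumption that the pre-written algorithms are wrapped as black-box
stream processors and the coordinator merely wires their channels together:

from queue import Queue
from threading import Thread

def wire(func, inbox, outbox):
    # Wrap a pre-existing algorithm as a stream processor: read items,
    # apply the black box, pass results on. None is the shutdown sentinel.
    def run():
        while True:
            item = inbox.get()
            if item is None:
                outbox.put(None)
                return
            outbox.put(func(item))
    Thread(target=run).start()

# Two "library" algorithms we did not write ourselves.
normalize = str.lower
tokenize = str.split

# The coordination layer: it only describes connections, not computation.
a, b, c = Queue(), Queue(), Queue()
wire(normalize, a, b)
wire(tokenize, b, c)

a.put("Crack Language And You Have AI")
a.put(None)
while (out := c.get()) is not None:
    print(out)    # ['crack', 'language', 'and', 'you', 'have', 'ai']

The coordinator names no computation at all - it only says what connects
to what - which is the division of labour a coordination language is
supposed to give you.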

2) Space - You are absolutely right. The scientific value of manned
space flight is zilch. It has been so for some time now. However there
are humanistic reasons for manned space flight. Everest was climbed
because it was there. Mars and the Moon are there in the same way.
Mind you, Mallory did not command a $100 billion budget. The ISS is very
much a political entity and it can be defended on political grounds -
NOT, emphatically, NOT scientific.

As far as the history of the World is concerned, I think you have to be
right. It is very much the conclusion I have come to independently.
Wait a moment, if the Singularity is confidently predicted to be
around 2045 and if the history of the World is going to turn long
before that date, how would this affect West Point? This may seem a
trivial point, but WP cadets are not going to stand any chance at all
of becoming generals. The military will have to accept that it will be
a declining industry.

In fact World History moves in fits and starts. I see a crisis coming
in 2-3 years' time. The US will be out of recession, the banks will be
operating reasonably normally. Obama will have to produce a budget of
"rectitude". Rectitude meaning that debt does not increase any faster
(2-3%) than the overall growth of the economy. Obama will be forced to
present "rectitude" to a Democratic Congress who won't like it one
little bit. Obama has :-

1) Promised an expansion in Medicaid. He simply cannot retreat on
that.

2) Promised no rise in taxation for middle income earners. He still
has the option of large increases in taxation above $250,000. He might
put middle income taxes up a little bit by stealth, not much though.

3) A green stimulus. He cannot really retreat on that.

The only source of "rectitude" is savage cuts in military expenditure.
These savage cuts are going to terminate careers and make West Point
and Annapolis singularly unattractive propositions. What's more, if
young people twig this, they simply won't want to go into the military.
World History does indeed have a flow. The Republicans, when they
(eventually) win, are not likely to reverse cuts in military spending.

Last point, the CIA - one thing I want you to consider is this: "The
top military brass and the CIA are just as much a trade union as the
Auto Workers." Congress approves CIA funds. You are right. However,
Congress frequently does not know the facts. Congress should make it
its business to find out. The "Union" talks about a dangerous world.
People often don't have the facts.

Let's get students to write essays on that topic - see what they come
up with.


  - Ian Parker


0
Ian
5/12/2009 6:17:02 PM
On May 12, 8:45 am, c...@kcwc.com (Curt Welch) wrote:

> I think effective AI is not as compute intensive as
> it's believed to be.  It's far more of a software
> problem than a compute power problem.  People like
> to believe lack of progress in AI is at least
> partially due to a lack of computing power but I
> think that's the far less significant part of the
> equation.  Progress is slow because AI is just a
> tricky problem and requires breakthroughs in
> software and in understanding that just take time.

Certainly an understanding of complex "intelligence"
lags well behind the hardware, but it will still
require lots of hardware to crunch all the numbers
to carry out sensory analysis and motor synthesis,
even in the generic learning network arrangement you
favour. What you call "breakthroughs" amounts to
creativity, which in the past has involved a deep
search in the subject area of interest. It will
not come from a bunch of philosophizers exchanging
opinions about what will happen. Those who predict
the future are usually wrong for reasons they didn't
think of or know about at the time.

I have been working on different types of networks
and although the subject is learning tic tac toe I
have been keeping in mind the need for a generic
solution (not to code ttt specific stuff). One issue
that came up was: do I insist it learn the rules of
ttt, such as you can't place your mark in a non-blank
square? Well I decided it must learn those rules by
itself and it wasn't as hard as I anticipated. It
did however require immediate feedback that the move
was a loss, adjust your weights and try again, rather
than wait until the end of the game, which of course
would never happen if the program kept trying to put
its mark on a non-blank square. In other words a win
reward was insufficient for it to learn the rules
if a win never occurred because it wasn't following
the rules.
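
For concreteness, a minimal Python sketch of that retry
loop, with a table of (state, move) preferences standing
in for the real network; all details are illustrative:

import random
from collections import defaultdict

weights = defaultdict(float)            # (board, move) -> preference

def choose(board):
    best = max(weights[(board, m)] for m in range(9))
    return random.choice([m for m in range(9)
                          if weights[(board, m)] == best])

def play_move(board):
    # Keep proposing moves; an occupied square yields an instant penalty,
    # a weight update, and another try - the retry loop described above.
    while True:
        m = choose(board)
        if board[m] == ' ':
            return board[:m] + 'x' + board[m+1:]
        weights[(board, m)] -= 1.0      # immediate negative feedback

print(play_move('xo xo  o '))           # marks one of the blank squares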

JC


0
casey
5/12/2009 9:22:37 PM
On May 11, 2:43 pm, c...@kcwc.com (Curt Welch) wrote:
> Ian Parker <ianpark...@gmail.com> wrote:
> > On 11 May, 09:53, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> > > Hmmmmmmmm....... I wonder what that could be???????????????????????
>
> > > In my understanding, the author meant that it is utopic, i.e.
> > > entirely unrealistic, to expect that mankind would ever live
> > > permanently in peace. The lesson of the second world war is
> > > practically null in my humble view.
>
> > I think this is unduly pessimistic. All countries realize that nuclear
> > war would be a total disaster where the losses would completely
> > outweigh any gains. In war between fairly equal nation states this
> > rubric holds true. Peace CAN be negotiated through mutual interest.
>
> > The moral questions arise when a strong nation is fighting a very much
> > weaker one. This is the real point about the morality of robotic
> > war. People talk about wars being fought between robots. This will
> > never be the case. Robots will be fighting on one side. Flesh and
> > blood will be on the other side.
>
> That's just nonsense.
>
> Robots are just more tools of war.  They have been fighting on both
> sides ever since the first time some pre-man picked up a stick and used
> it in a fight.
>
> Robots are just better sticks.  But so are tanks, and guided missiles,
> and bullets.
>
> What do you think a spear is?  It's a knife on the end of a stick.  Why
> was the knife put on the end of the stick?  To get the human further
> away from the other guy with a knife so as to limit potential harm to
> himself.
>
> All weapons of war are developed as a way of increasing the harm to the
> enemy while minimizing harm to one's self.  Robots in all forms are
> just more of the same.
>
> As long as there is war, they will be fighting on both sides.
>
> > Science fiction stories talk about
> > the human race being invaded from space and fighting robots. Science
> > fact is that the technological nations of Earth are developing robots
> > in order to fight Third World countries. They are not being developed
> > to fight each other.
>
> >   - Ian Parker
>
> Robots are just one more weapon of war being developed. The reason we
> see robots being developed for war has nothing to do with who's
> fighting. It's just because robots happen to be on the cutting edge of
> war technology at this point in history.  If we had advanced nations
> fighting each other (like Europe against the US), you would actually
> see a HUGE surge in robot development.  The fact that the current
> battles are so one sided reduces the pressure to invest in new war
> technology and as such, it's far slower right now than it would be if
> there were some real battles going on.  This stuff isn't war from the
> side of the west. It's a police action.
>
> The world is coming together into a stronger, united "global brain" as
> Don likes to talk about.  However, because the world is still populated
> with diverse cultures that have fundamentally different views about
> what's important in life, and about how a culture should be structured,
> we have friction to deal with.  The stronger cultures are, as they
> always have in history, forcing their culture onto the weaker cultures,
> and the result is exactly what is expected - the weaker cultures don't
> like it and are fighting back.  They are using terrorist techniques to
> defend their culture because that's all you can use when a weak force
> fights an overweeningly stronger force.
>
> The reason the strong is going against the weak is because the strong
> is made up of cultures that have already had their battles in the past,
> worked out most of their differences, and agreed on what form of
> culture to move forward with.  The middle east fell behind in the race
> of advancing strength something like 800 years ago and has been mostly
> ignored since then. But their control over oil, the single most
> valuable limited resource on the planet, forces the combined cultures
> of the west to have to deal with the middle east.  If the middle east
> had a culture that worked, and allowed the west to continue to get oil,
> and didn't in the process make them so rich that they would be able to
> force their culture on the west, then all would be fine.  But the
> culture of the middle east hasn't worked well enough to keep them out
> of trouble with the west.
>
> The bottom line is that none of this conflict will be over until 1) we
> run out of oil, making the deserts of the middle east of no significant
> interest to the developed nations (like much of Africa currently is),
> or 2) our cultures merge well enough that we can coexist as one large
> functioning culture.
>
> In the end, the whole earth will merge together as one large culture,
> with one large system of government, at which point there will be great
> peace for an extended time.  But there will continue to be more
> disagreements until that day, and some of those disagreements will rise
> up to the level of war as people try to defend their cultures.
>
> None of this has anything to do with robots, which are just one more
> weapon of war.  And if you think atomic bombs were nasty weapons of
> war, just wait to see what's coming with AI.  A weapon with human like
> intelligence or greater built for war would not just wipe out a few
> cities, it could kill (at a relatively low price) every human on a
> continent without hurting the countryside or the natural resources of
> the land.  It wouldn't be like the bombed out buildings and destruction
> of past wars.  It would just be a wave of millions and millions of
> smart machines swarming over the land and killing everything that got
> in their way.  They might not even use explosives or guns.  A simple
> blade might be the best weapon for such a swarm of AIs.  The best form
> of these AIs might be bird like creatures that would simply swarm on
> humans and slice them up.  The humans go into hiding, and the AI birds
> just wait for them to die because you can't survive as a race
> underground for long.
>
> The real point is not that such a thing will happen like that, but that
> as technology develops, we are forced to operate as one large culture
> instead of as many smaller cultures. That's because advancing
> technology causes our desires to impinge on each other, and that leads
> to disagreements, which, if there's not a functioning culture to settle
> these little disagreements, they will always build into large
> disagreements which will then turn into war.  AI is just one more
> technology that will force the world to merge into one large culture
> and that merging will always come with some pain, and sadly, the
> smaller weaker cultures will always have to receive more of that pain
> in the process.
>
> --
> Curt Welch                                  http://CurtWelch.Com/
> c...@kcwc.com                               http://NewsReader.Com/
Hi Curt,

I've just discovered the "summarize" function on my computer and,
although I'd never use it for an important document, it seems like
just the thing we need to get your replies down to reasonable size.
Here we go:


The bottom line is that none of this conflict will be over until 1)
we run out of oil, making the deserts of the middle east of no
significant interest to the developed nations (like much of Africa
currently is), or 2) our cultures merge well enough that we can
coexist as one large functioning culture....  The humans go into
hiding, and the AI birds just wait for them to die because you can't
survive as a race underground for long. The real point is not that
such a thing will happen like that, but that as technology develops,
we are forced to operate as one large culture instead of as many
smaller cultures.
0
J
5/12/2009 10:27:55 PM
On May 11, 4:49 pm, c...@kcwc.com (Curt Welch) wrote:
> Ian Parker <ianpark...@gmail.com> wrote:
> > On 11 May, 13:33, Don Stockbauer <donstockba...@hotmail.com> wrote:
>
> > > That's what's so nice about the Global Brain.  It brings about
> > > Eternal Peace automatically and, oddly enough, unavoidably.
> > > Hmmmmmm....... That sounds a bit like a religion, doesn't it?
> > > But it's science (cybernetics).  See the Principia Cybernetica for
> > > more optimism.
>
> > There is a whole lot here which I think needs picking over. There is
> > an extremely important point about the Internet and it is this. All
> > our TV, entertainment and news is going to come via tailored feeds. It
> > will, because this is what we prefer. We want to look at programs we
> > are interested in. A global brain would make sure that we got the
> > appropriate feeds and did not engage in terrorism, war etc. etc. We
> > would all have a common experience and the nation state would be a
> > thing of the past.
>
> > This machine would be clever. It would constantly be giving us what we
> > wanted, or what we appeared to want, down to the best plot of "adult"
> > movie to titillate us. There is nothing way out in this. In fact it is
> > already happening in a limited way with Google.
>
> The "global brain" that Don talks about is already here.  It's just
> what results when you link the needs and desires of people into a
> functioning unit.  We build those links with our economy, our
> governments, our communication systems, and all our social
> organizations.  It's just called culture, or society.  And, BTW, it's
> already conscious (but that's a debate for another thread).
>
> What's happening over time is that the links are becoming stronger, and
> wider reaching.  As we develop the technologies to allow it to happen,
> we are being more closely linked into a larger, and stronger, and
> smarter, global brain.
>
> > There are a number of points. The first of these is what would happen
> > if someone malevolent hijacked the system for their own ends? Suppose
> > malevolence WANTED war, terrorism etc. for their own ends. Suppose
> > terrorism was "glorified" and presented as noble self sacrifice. This
> > is, in effect, what happened in Wilhelmite and then in Nazi Germany.
> > The real question to ask is could a code of ethics be imprinted on
> > Google which governments would find difficulty in circumventing?
>
> That's happened many times.  It's what happens when a strong dictator
> comes into power.  They do it by finding a way to hijack the system to
> meet their own desires.
>
> We have developed "global brain" systems that minimize the odds of that
> happening.  It's what a democracy is all about.  It's a system where
> every person is given equal power to control the system (one person,
> one vote).  And we structure our society so that that control is the
> supreme power in the land, so that some other power, like Google, can't
> take it away from us.
>
> The odds of one bad guy subverting the power of the US government (for
> example) is very small.  But it's something we always have to be very
> vigilant about, which is why someone like Saddam Hussein, Adolf Hitler,
> or Kim Jong-il can scare us so much.
>
> > The second point is that in certain countries, not the major ones,
> > there is censorship and organized unreason.
>
> Well, the US has lots of organized unreason as well.  We elected an
> idiot to run the country for 8 years in one of the most unreasoned
> moves I've seen in my life.  But that's the price you pay for living in
> a democracy.  The power is held by the people as a group, so you live
> and die by their reasoned, and unreasoned, view.  The net win is that
> most people get things "their way" most the time, even if it's not the
> smartest way.  The goal is to maximize the happiness of the group by
> minimizing the disagreement in the mob. If most people want to let the
> idiot run the country, we let the idiot run the country because to do
> anything else would make more people less happy.
>
> > Our brain will probably
> > find a way of dealing with this. I have mentioned Arabic scholarship
> > and the Afghan Qur'an. Disambiguation in translation is closely
> > related to the ability to perform textual criticism. This may be a
> > trivial point but -
>
> > 1) For this discussion group it is extremely interesting even if a
> > sideline.
>
> > 2) Obama is going to look a bit silly if the settlement that he
> > "imposes" on Afghanistan is totally undermined by "the Brain", as it
> > surely must be.
>
> I don't understand your points about the Qur'an.
>
> However, I do understand that some large percentage of the middle east
> believes that very old and out of date book tells them what they should
> do as a culture.
>
> Though the US has been a mostly Christian nation and has strong
> Christian beliefs flowing through our culture, the culture is not ruled
> by the bible.  It's ruled by our constitution - a far smaller, but far
> more important, document.  It's the highest authority in our land in
> terms of binding us together as a culture - as a functioning "Global
> Brain".  Next in power are the state constitutions, and somewhere way
> down the line are the religious beliefs like the bibles and the
> Qur'ans.  There's a strict pecking order on what authority binds our
> culture together, and the religious doctrines have always been way down
> the ladder.
>
> This came about in the US because we were founded from such a wide mix
> of different religions at a time when many were looking to escape from
> the religious based cultures - which are just another form of
> dictatorship.
>
> Though religion still plays an important role in what the US is (sadly
> in my view), it's not the ultimate power.  And though it got a born
> again idiot elected to run the US for 8 years, the real power of the
> land, the constitution, got him removed, without anyone having to be
> killed.
>
> The middle east (the little I understand it) is still mostly a
> religious based culture.  It's what most of the west rejected as a bad
> way to organize a society hundreds of years ago.  It's what's creating
> the most friction between our cultures as history and fate force us to
> figure out a way to coexist and ultimately, to merge.
>
> A fundamental tenet of western culture is that we do not respect
> religion as the corner stone of our society.  A constitution is.  But
> if you are raised in a culture that puts religious beliefs as the
> ultimate foundation of "right" in the land, it's not easy to accept, or
> even understand, a culture that is organized around a constitution.
> Sure, they can understand how a government might be important, and how
> it needs a constitution to operate if it's not going to be a
> dictatorship, but what they won't understand is this idea that this
> stupid little paper that tells the government what to do is to be
> treated as the highest authority in the land - higher than any
> religious belief.
>
> Though I understand very little about the middle east and the cultures
> there, I do get the strong impression they, as a culture, can't grasp
> or understand this, and certainly will not accept it any time soon.
>
> But until they accept a constitutional based democracy as the ruling
> force in the land, their culture won't be compatible with the west.
> And as long as they aren't compatible with the west, we can't merge as
> a strong united "global brain" culture. And until we find a way to
> merge, the west is not going to fully trust what happens there, and as
> long as the region is important to us, because of oil, or because of
> general world stability, the west will keep their "boot" on them.  They
> will be held slaves to the desire of our "global brain culture" until
> they convert, or until we get too weak to hold them.  And our desire is
> not to have them as slaves, or to hold them hostage, but to have them
> as another healthy functioning part of the global brain.  But for that
> to happen, they have to learn how to create a democracy, and learn how
> to create a culture run by voting citizens, and not run by clerics, and
> dictators.
>
> I'm not sure how far away they are from that, but it seems like they
> are still very far away from it - maybe 1 or 2 generations of family
> members away from it still.
>
> I wrote all that because your comments about the Qur'an sound like you
> believe the Qur'an and what it says is important in that society (which
> I believe it is too).  My point with this long drawn out reply is that
> until they learn to stop looking at the wording of some religious
> document to lead them, and instead write their own constitution to form
> a democratic government, and let that lead them, they won't have a
> chance of being compatible with the west.  The conflict between the
> middle east and the west won't end until either they change, or the
> west changes.  We will operate as two separate sub-global-brains
> fighting with each other, until we become more compatible and are able
> to merge into one larger and stronger global brain.
>
> Whereas Bush was an old school little brain trying to fight the other
> brains just because they were different and he couldn't understand
> someone if they are different, Obama understands that the solution to
> our disagreements and war is to do whatever we can to help the merging
> of our societies to occur.
>
> > This leads me on to a third point which is this. "How will politicians
> > react if the World is being governed by a Brain and not by them?"
>
> The US has been ruled by a "global brain" for over 200 years now.  It's
> the global brain formed by the voting population of the US.  The
> politicians do not rule the US, the people do.  That's why one was
> kicked out, and another was put in place, regardless of what they might
> have wanted (or the other 20 that didn't get picked might have wanted).
>
> The combined desires of the world population is what we want to rule
> the world (in order to maximize global happiness and minimize global
> wars and other problems).  Currently, the desires of US citizens have
> far too much power over what happens to the rest of the world.  The
> rest of the world deserves to have voting rights so we in the US don't
> stomp all over them with our opinions and desires.  But that's not
> going to happen until they first learn to be part of a democracy in
> their own land.  When they learn to do that, then later, we can look at
> merging them in with fair rights to the "global brain" which is still
> trying to form.
>
> > No
> > doubt the Brain will attempt to give politicians the illusion of
> > control. That will be part of its technique. Of course, in a
> > "democratic" country the Brain will decide who is elected.
>
> >   - Ian Parker
>
> --
> Curt Welch                                  http://CurtWelch.Com/
> c...@kcwc.com                               http://NewsReader.Com/

Summary:

My point with this long drawn out reply is that until they learn to
stop looking at the wording of some religious document to lead them,
and instead write their own constitution to form a democratic
government, and let that lead them, they won't have a chance of being
compatible with the west.
0
J
5/12/2009 10:30:36 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On May 12, 8:45 am, c...@kcwc.com (Curt Welch) wrote:

> I have been working on different types of networks
> and although the subject is learning tic tac toe I
> have been keeping in mind the need for a generic
> solution (not to code ttt specific stuff). One issue
> that came up was: do I insist it learn the rules of
> ttt, such as you can't place your mark in a non-blank
> square? Well I decided it must learn those rules by
> itself and it wasn't as hard as I anticipated. It
> did however require immediate feedback that the move
> was a loss, adjust your weights and try again, rather
> than wait until the end of the game, which of course
> would never happen if the program kept trying to put
> its mark on a non-blank square. In other words a win
> reward was insufficient for it to learn the rules
> if a win never occurred because it wasn't following
> the rules.
>
> JC

Making the machine learn to move only in open squares simply changes the
nature of the environment slightly.  It's not the right or wrong way, or
better or worse way.  It's just a slightly different learning task.  There
are an infinite number of environments with varying levels of learning
difficulty we can dream up to test the quality of our learning machines.  I
could think up 100 more variations if you want to try to make it even
harder.

You could let it draw an X or an O each time as well and force it to learn
that it must keep using the same mark until the game is over.  That would
be just one more thing it would have to learn.

You could define the board as a high resolution 2D surface, and allow it to
"stamp" the X or O at some X,Y coordinate.  That way it would have to learn
to put the mark at the correct location.

You could give it the power to draw lines on the 2D surface, and force it
to learn to draw an X, or draw an O as well.

You could make it a real time game so it would have to learn to wait its
turn before it made the next mark.

The number of variations of learning tasks is infinite. :)

On the issue of "needing" instant feedback in order for it to learn to only
pick open moves, that's not true.  It should be able to learn even if the
only reward you give it is for winning.  That's because even if it makes
random moves, and lots of them are invalid, it will at some point, luck
into winning (or luck into a tie).  And if it can do that, it will be able
to learn the rules.  The issue of course, is that it will take a lot longer
because it will have to play for a long time before it happens to win a
game if it's playing against someone that is trying to win.

One way to make it easier is to allow it to play against itself - or allow
it to play against someone who is making valid moves, but not trying to
win.  That will simply (again) change the difficulty of the learning task.
It will statistically allow it to get more rewards which will allow it to
learn faster.

There is no magic solution to make a generic learning system learn hard
problems faster.  Hard problems are always hard problems.  Problems so hard
that they will take a billion years to learn are simply problems that can't
be learned in a reasonable time.

Instant reward learning is mostly trivial.  So any time you set up an
environment that is able to give the agent instant rewards for an action
(and where only the last action was the cause of the reward), you have
created a learning problem that should be trivial for your agent.  Reward
based learning doesn't get really interesting until you start to deal with
delayed rewards - which is what all the well known RL algorithms are all
about.
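
As a toy illustration of the delayed reward case, here is textbook tabular
Q-learning on a made-up six-cell corridor where the only reward sits at the
far end, so the value of the early moves has to be learned from a signal
that arrives much later (the environment is invented; the update rule is
the standard one):

import random

N, EPISODES = 6, 500
alpha, gamma, eps = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N) for a in (-1, 1)}

def policy(s):
    # Epsilon-greedy, with ties broken randomly so early episodes explore.
    if random.random() < eps or Q[(s, 1)] == Q[(s, -1)]:
        return random.choice((-1, 1))
    return 1 if Q[(s, 1)] > Q[(s, -1)] else -1

for _ in range(EPISODES):
    s = 0
    while s < N - 1:
        a = policy(s)
        s2 = min(max(s + a, 0), N - 1)
        r = 1.0 if s2 == N - 1 else 0.0            # reward only at the goal
        nxt = 0.0 if s2 == N - 1 else max(Q[(s2, b)] for b in (-1, 1))
        Q[(s, a)] += alpha * (r + gamma * nxt - Q[(s, a)])
        s = s2

# The delayed reward has propagated back: "step right" wins in every cell.
print([round(Q[(s, 1)] - Q[(s, -1)], 2) for s in range(N - 1)])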

The interesting aspect of the domain humans work in is that 1) it is
reward rich - that is, there are lots of different events that generate
rewards, or generate a negative reward, and 2) it is solution rich - there
are lots of different behaviors that lead to an improvement of rewards.
As such, there are always lots of "good behaviors" for humans to stumble on
and learn, and as they find these, those good behaviors often
help make the next reward easier to find.

Rewarding a tic tac toe program for making valid moves, and rewarding it
more for winning is an example of making the environment more "reward
rich".  That is, there are multiple behaviors to be learned.  It's also a
little bit "solution rich" in the fact that there is more than one way to
win a game.  That is, there is not just one set of behaviors that will work
to allow it to win games.  This means any set of moves that lets it win will
be a valuable learning experience and because there are multiple paths to
the reward, it won't have to hunt as long to get a reward - which means
learning is quicker.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/12/2009 10:42:22 PM
"J.A. Legris" <jalegris@sympatico.ca> wrote:
> On May 11, 4:49 pm, c...@kcwc.com (Curt Welch) wrote:

> Summary:
>
> My point with this long drawn out reply is that until they learn to
> stop looking at the wording of some religious document to lead them,
> and instead, write their own constitution to form a democratic
> government, and let that lead them, they won't have a chance of being
> compatible with the west.

Gee, I wish my newsreader had that function built in!

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/12/2009 10:45:26 PM
On May 12, 3:42 pm, c...@kcwc.com (Curt Welch) wrote:
> I could think up 100 more variations if you want
> to try to make it even harder.

The goal is not to make it harder but to work out
methods, based on easy examples, that are not size
dependent - just as, for example, angles do not
change as the image gets larger.

> On the issue of "needing" instant feedback in
> order for it to learn to only pick open moves,
> that's not true.  It should be able to learn
> even if the only reward you give it is for
> winning.  That's because even if it makes random
> moves, and lots of them are invalid, it will at
> some point, luck into winning (or luck into a tie).

The network wasn't generating random outputs; rather,
it was converting an input to an output based on the
weights in the network. If the output didn't produce
any changes in the input, nothing changed: it never
reached any winning state, at which point it would
have changed its weights and thus future behaviour.


Imagine you were playing with a ttt-machine that
worked by pressing one of nine buttons. When you
press a blank button it changes to an 'x' and at
the same time another button takes on a 'o' symbol.
Now you press a button with an 'x' or an 'o' and
nothing happens. You don't find "nothing happens"
rewarding so you change your behaviour in that
game state and try another button. In other words
you acted on an immediate negative feedback signal.

While googling for ideas I came across this:

The Psychology of Human Thought, by Robert J. Sternberg
and Edward E. Smith:


[1] [2] [3] [4] [5] [6] [7] [8] [9]  cards

+---+---+---+
| 2 | 7 | 6 |
+---+---+---+
| 9 | 5 | 1 |
+---+---+---+
| 4 | 3 | 8 |
+---+---+---+

"Two players are to take turns picking up single cards.
When a player holds three cards that sum to exactly 15,
he or she puts them down and is declared the winner.
This problem is isomorphic in structure to the familiar
game of tic-tac-toe, but we tend to represent the two
games differently. When we think about tic-tac-toe,
we think about line, sides, corners, and the center.
When we think about the first game, we think about
needing three odd numbers of two evens and an odd, or
perhaps we thin about needing two numbers that sum
to 10, plus the 5. Our mental representations are
different, and this often leads us to notice only some
of the constraints in the problem as posed. Two such
isomorphic problems can differ in difficulty due to
temporary memory."

JC
0
casey
5/13/2009 12:57:13 AM
On May 12, 5:27 pm, "J.A. Legris" <jaleg...@sympatico.ca> wrote:
> On May 11, 2:43 pm, c...@kcwc.com (Curt Welch) wrote:
>
> [snip - Curt's post, quoted in full above]
>
> Hi Curt,
>
> I've just discovered the "summarize" function on my computer and,
> although I'd never use it for an important document, it seems like
> just the thing we need to get your replies down to reasonable size.
> Here we go:
>
> The bottom line is that none of this conflict will be over until 1)
> we run out of oil, making the deserts of the middle east of no
> significant interest to the developed nations (like much of Africa
> currently is), or 2) our cultures merge well enough that we can
> coexist as one large functioning culture....  The humans go into
> hiding, and the AI birds just wait for them to die because you can't
> survive as a race underground for long. The real point is not that
> such a thing will happen like that, but that as technology develops,
> we are forced to operate as one large culture instead of as many
> smaller cultures.

Gee, what could ever form this one large culture?  I have no idea.  I
wonder if it would involve peer-to-peer global telecommunication?  Will
that eventually overcome the Babel we have now?   Who knows?
0
Don
5/13/2009 1:52:06 AM
Dear Ian Parker, I need your help! I have sent some messages to your
mail on google groups. Please open your mail!

0
oki239pcl
5/13/2009 4:40:56 AM
J.A. Legris wrote:
[snip]

 > ......... as technology develops,
 > we are forced to operate as one large culture instead of as many
 > smaller cultures.

Indeed. We have globalization and currently the worldwide financial
collapse. Whether there is a causal relation between the two is, of
course, a matter of individual speculation.

BTW, I like to quote something from a German newspaper, given here in
English translation:

M. K. Shen

-----------------------------------------------------

Quoted from an article in Süddeutsche Zeitung, 24 April 2009, p. 8,
written by Ernst-Wolfgang Böckenförde, former judge of the Federal
Constitutional Court of Germany, entitled "Woran der Kapitalismus
krankt" ("What ails capitalism"):

    In this system the aim is to dismantle all regulation;
    the market itself is to be the regulating principle.

    One cannot escape the continuing relevance of Marx's
    prognosis.

    A restructuring requires a state authority capable of
    decisive action.

    Such a restructuring cannot be brought about purely by
    coordination, by way of all-round consensus building.


0
Mok
5/13/2009 6:04:26 AM
Dear Ian Parker, I need your help! I have sent some messages to your
mail on google groups. Please open your mail!

Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian Parker!Ian
Parker!Ian Parker!Ian Parker!Ian Parker!
0
oki239pcl
5/13/2009 8:08:13 AM
On May 13, 1:04 am, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> [snip the Süddeutsche Zeitung quotation given in full above]

God, at least have the decency to run it through Babel Fish, that
bastion of cutting-edge machine-translation AI technology:

In this system it is valid to reduce all regulations, regulation
principle should be the market. One can itself the Aktualität of the
prognosis of Marx do not withdraw. A change requires a government
authority capable of making decisions. Purely coordinatively, on the
way of all-round consent finding, such a change cannot be caused.
0
Don
5/13/2009 12:00:07 PM
On May 13, 8:00 am, Don Stockbauer <donstockba...@hotmail.com> wrote:
> On May 13, 1:04 am, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> > [snip]
>
> God, at least have the decency to run it through Babel Fish, that
> bastion of cutting-edge machine-translation AI technology:
>
> [snip the Babel Fish translation given above]

You see Curt? Don and Mok-Kong had not read your last post, but they
responded to the summary. I suppose that's a good thing.

--
Joe
0
J
5/13/2009 12:31:35 PM
On 12 May, 23:27, "J.A. Legris" <jaleg...@sympatico.ca> wrote:
> On May 11, 2:43 pm, c...@kcwc.com (Curt Welch) wrote:
> [snip]
>
> Hi Curt,
>
> I've just discovered the "summarize" function on my computer and,
> although I'd never use it for an important document, it seems like
> just the thing we need to get your replies down to reasonable size.
> Here we go:
>
> The bottom line, is that none of this conflict will be over until 1)
> we run out of oil making the deserts of the middle east of no
> significant interest to the developed nations (like much of Africa
> currently is), or 2) our cultures merge well enough that we can
> coexist as one large functioning culture. ... The humans go into
> hiding, and the AI birds just wait for them to die because you can't
> survive as a race underground for long. The real point is not that
> such a thing will happen like that, but that as technology develops,
> we are forced to operate as one large culture instead of as many
> smaller cultures.

You are making one very big assumption here: that Middle East wars
are over oil. There are many motives. Suppose I were to say that
Iraq, and to some extent Afghanistan, were wars over West Point and
Annapolis. By this I mean that the military is just as much a
commercial organization as GM, and it wants to sell its product just
like anyone else.

I mention West Point and Annapolis for this reason: if there were
cutbacks, cadets would be graduating without commissions, and that
would never do. We now know that Iraq was entered into on the
strength of cock and bull stories. Evidence of Al-Qaida links was
obtained by torture. People ask the question "Is it moral to torture
if there is a ticking bomb?" This is, of course, a hypothetical
question. In practice torture is performed to get the answer you
want - and it is performed on behalf of West Point and Annapolis.

Oil is frequently mentioned in left-wing circles. I believe that
view is wrong. If you are an oil producer you have to sell the
stuff; in itself it is no use to you - it is a sticky black
substance which needs refining. The spot market for oil is in
Rotterdam: to buy or sell oil you go to Rotterdam. Can you get it
cheaper with war? NO WAY. Iraq in fact pushed the price UP, not
down; $140 a barrel was reached BECAUSE OF Iraq.

The way to control the Rotterdam price is by competition: nuclear
power (peaceful), solar power, even coal - electrolysis of water
using solar power, and coal gasification. At the moment the price is
down because of the recession, an excellent example of supply and
demand. If you were looking at oil and oil only, you would get the
military to build solar power stations in the desert.

The West Point/Annapolis theory, which I believe to be correct, is
in many ways more frightening than oil. At least with oil some sort
of national interest was involved. With West Point/Annapolis the
military have in effect defrauded the public. They have indeed been
caught telling lie after lie.

The whole torture scenario raises serious questions about the
integrity of court-martial proceedings. To be sure, "superior
orders" is not a defence. However, when you are being judged by the
very people who are giving the superior orders, you have a
legitimate grievance. Furthermore, the courts martial that have
taken place raise serious questions in my mind about the integrity
of the whole organization.

Who would want to go to West Point or Annapolis anyway, when your
bosses are prepared to hang you out to dry?


   - Ian Parker
0
Ian
5/13/2009 1:14:35 PM
"J.A. Legris" <jalegris@sympatico.ca> wrote:
> On May 13, 8:00=A0am, Don Stockbauer <donstockba...@hotmail.com> wrote:
> > On May 13, 1:04=A0am, Mok-Kong Shen <mok-kong.s...@t-online.de> wrote:
> >
> >
> >
> > > J.A. Legris wrote:
> >
> > > [snip]
> >
> > > =A0> ......... as technology develops,
> > > =A0> we are forced to operate as one large culture instead of as many
> > > =A0> smaller cultures.
> >
> > > In fact. We have globalization and currently the worldwide financial
> > > collapse. Whether there is a causal relation between the two, is, of
> > > course, a matter of individual personal speculation.
> >
> > > BTW, I like to quote something from a German newspaper. In order to
> > > avoid errors of translation, it is given in original.
> >
> > > M. K. Shen
> >
> > > -----------------------------------------------------
> >
> > > Quoted from an article in S=FCddeutsche Zeitung, 24. April, 2009,
> > > p.8, written by Ernst-Wolfgang B=F6ckenf=F6rde, former judge of the
> > > Federal Court of Constitution of Germany, entitled "Woran der
> > > Kapitalismus krankt":
> >
> > > =A0 =A0 In diesem System gilt es, alle Regulative abzubauen,
> > > =A0 =A0 regulatives Prinzip soll der Markt sein.
> >
> > > =A0 =A0 Man kann sich der Aktuatlit=E4t der Prognose vom Marx
> > > =A0 =A0 nicht entziehen.
> >
> > > =A0 =A0 Ein Umbau erfordert eine entscheidungsf=E4hige Staatsgewalt.
> >
> > > =A0 =A0 Rein koordinativ, auf dem Wege allseitiger Konsensbildung,
> > > =A0 =A0 l=E4sst sich ein solcher Umbau nicht bewirken.
> >
> > God, at least have the decency to run it through Babel Fish, that
> > bastion of cutting-edge machine-translation AI technology:
> >
> > In this system it is valid to reduce all regulations, regulation
> > principle should be the market. One can itself the Aktuatlit=E4t of the
> > prognosis of Marx do not withdraw. A change requires a government
> > authority capable of making decisions. Purely coordinatively, on the
> > way of all-round consent finding, such a change cannot be caused.
>
> You see Curt? Don and Mok-Kong had not read your last post, but they
> responded to the summary. I suppose that's a good thing.

I noticed that very fact and it made me snicker to myself! :)

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/13/2009 4:27:25 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On May 12, 3:42 pm, c...@kcwc.com (Curt Welch) wrote:
> > I could think up 100 more variations if you want
> > to try to make it even harder.
>
> The goal is not to make it harder but to work out
> methods based on easy examples that are not size
> dependent. Just, for example, like angles do not
> change as the image gets larger.

Right.  Start with simple things, see how the algorithm works, and if it
doesn't work, fix it.

> > On the issue of "needing" instant feedback in
> > order for it to learn to only pick open moves,
> > that's not true.  It should be able to learn
> > even if the only reward you give it is for
> > winning.  That's because even if it makes random
> > moves, and lots of them are invalid, it will at
> > some point, luck into winning (or luck into a tie).
>
> The network wasn't generating random outputs rather
> it was converting an input to an output based on the
> weights in the network. If the output didn't produce
> any changes in the input nothing changed, it never
> reached any winning state at which point it would
> have changed its weights and thus future behaviour.

Sure, so based on how it worked, you are saying it would just get stuck in
an infinite loop playing the same invalid game over and over and never
learning.

That simply shows a weakness in that algorithm so you should look for ways
to make it better.

A fundamental idea of reward based learning is that the agent must _search_
for behaviors that produce higher rewards.  If your agent just kept playing
the same game over and over, it obviously wasn't searching.  And that's a
weakness in that design.  Adding occasional random moves is one simplistic
approach to making sure the agent keeps searching.

This is one of the prime problems of RL based learning.  Once the system
learns something about the environment, which gives it some knowledge that
behavior A is better than behavior B, should it always, from that point
forward, use only behavior A because it "knows" it's more likely to produce
higher rewards, or should it try behavior B every once in a while to make
sure what it thinks it knows about A, is true, (or is still true)?  This is
the exploration vs exploitation trade off that is part of all RL
algorithms.

If your agent was playing the same moves over and over, then it had the
behavior of never exploring, and always using what it thought was the best
move - a 100% exploitation agent.

In a more complex environment, all the randomness you need might actually
come from the environment itself.  If your ttt program was playing an
opponent that played a little differently every time, that alone might have
introduced enough search behavior into the system to allow your agent to
learn.  But still, it's better to have an agent that will try different
moves even if the environment doesn't.

The typical approach is to have the agent produce an estimate of how
much total future reward any choice will produce, and then bias the choice
of what behavior to pick, based on how much better it is.  If the agent has
two choices for example, A, and B, and it expects A to produce a total
future reward of 100, and B to produce a total future reward of 99, then
there is very little loss by trying B.  It should try B a lot to keep
testing and refining the estimate of how much it's really worth.  On the
other hand, if A is expected to produce rewards of 100, and B rewards of
only 1, then it becomes very costly for the agent to try B yet again (an
expected loss of 99 rewards).  In that case, it still needs to try B every
once in a while, but should do it only very rarely.  So the larger the
difference, the less the agent should try to explore by picking options
that seem to be "bad".
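
To make the idea concrete, here is a minimal Python sketch of that kind
of reward-gap-biased exploration (softmax/Boltzmann action selection -
one standard way to do what is described above, not necessarily the
exact scheme anyone in this thread uses; all names are made up):

    import math, random

    def pick_action(estimates, temperature=5.0):
        # Softmax selection: options with nearly equal estimated
        # future reward get tried almost equally often, while clearly
        # worse options are tried only rarely (but with nonzero
        # probability).
        actions = list(estimates)
        weights = [math.exp(estimates[a] / temperature) for a in actions]
        r = random.uniform(0, sum(weights))
        for action, w in zip(actions, weights):
            r -= w
            if r <= 0:
                return action
        return actions[-1]  # guard against floating-point leftovers

    # A 100-vs-99 gap explores B often; a 100-vs-1 gap explores B rarely.
    print(pick_action({'A': 100.0, 'B': 99.0}))
    print(pick_action({'A': 100.0, 'B': 1.0}))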

> Imagine you were playing with a ttt-machine that
> worked by pressing one of nine buttons. When you
> press a blank button it changes to an 'x' and at
> the same time another button takes on a 'o' symbol.
> Now you press a button with an 'x' or an 'o' and
> nothing happens. You don't find "nothing happens"
> rewarding so you change your behaviour to that
> game state and try another button. In other words
> you acted on an immediate negative feedback signal.

Well, informally, we can describe human behavior that way, but we need to
be more formal and careful when talking about how we want to code our
learning algorithm.

If the human finds the behavior "not very interesting", that's most
likely something he learned from experience and not a negative reward
hard-coded into the reward system.  He's learned from experience that
the odds of finding a reward are lower if his interaction with such a
machine produces no change, and he's learned, from experience, that
trying something else in response to "nothing happens" is a better path
to higher rewards.  But it's all learned, not part of the innate reward
system.

We can just as easily create a game where the correct answer is to push the
same button 100 times when it "does nothing" to get the best reward, and
where doing anything else leads to lower rewards.

The reward generator is (from the perspective of the learning agent) part
of the environment which it is trying to figure out.  It's not being
trained by the reward generator so much as it's trying to figure out how to
manipulate the environment to make the environment give it more rewards.

From designing RL agents for the generic case, I find it's best to
look at the task from that direction.

If you change your ttt program to give it a negative reward when it pushes
the wrong button, you have not improved or done anything to the learning
algorithm.  You have simply changed the environment it's working with to
one which gives rewards and punishments based on different rules.
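
One way to see the point is to note that in the standard RL framing the
reward rule lives inside the environment, so "punish illegal moves" is
literally a different environment, not a better learner. A minimal
sketch in Python (a hypothetical interface, not anyone's actual
program):

    class TTTEnvironment:
        # The environment owns the reward rule.  The agent only ever
        # sees (observation, reward).  Changing REWARD_ILLEGAL from
        # 0.0 to -1.0 changes the task, not the learning algorithm.
        REWARD_ILLEGAL = 0.0

        def __init__(self):
            self.board = [' '] * 9

        def step(self, move):
            if self.board[move] != ' ':
                # Illegal move: board unchanged, reward per the rule.
                return list(self.board), self.REWARD_ILLEGAL
            self.board[move] = 'x'
            # (Opponent reply and win/draw detection omitted here.)
            return list(self.board), 0.0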

> While googling for ideas I came across this:
>
> The Psychology of human thought By Robert J. Sternberg,
> Edward E. Smith
>
> [1] [2] [3] [4] [5] [6] [7] [8] [9]  cards
>
> +---+---+---+
> | 2 | 7 | 6 |
> +---+---+---+
> | 9 | 5 | 1 |
> +---+---+---+
> | 4 | 3 | 8 |
> +---+---+---+
>
> "Two players are to take turns picking up single cards.
> When a player holds three cards that sum to exactly 15,
> he or she puts them down and is declared the winner.
> This problem is isomorphic in structure to the familiar
> game of tic-tac-toe, but we tend to represent the two
> games differently. When we think about tic-tac-toe,
> we think about lines, sides, corners, and the center.
> When we think about the first game, we think about
> needing three odd numbers or two evens and an odd, or
> perhaps we think about needing two numbers that sum
> to 10, plus the 5. Our mental representations are
> different, and this often leads us to notice only some
> of the constraints in the problem as posed. Two such
> isomorphic problems can differ in difficulty due to
> temporary memory."
>
> JC

Yeah, that's a fairly advanced idea to apply to RL.  It talks to the type
of abstractions humans use to guide their actions.  A strong RL system
needs to generate such abstractions on its own.  Such as, how could you
build an RL system that could create the abstraction "only move in an empty
space", or "move in a corner before moving on an edge"?  What sort of
internal structures can the agent use to learn such generic rules of thumb
as it builds models of how the environment works?  That's an important
question of building strong RL algorithms.
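
The isomorphism in that quoted passage is easy to check mechanically:
lay the nine cards out as the 3x3 magic square shown above, and "three
cards summing to 15" becomes exactly "three in a row". A short Python
check (my own illustration, not code from the thread):

    from itertools import combinations

    # Every row, column and diagonal of the magic square
    #   2 7 6 / 9 5 1 / 4 3 8
    # sums to 15, and these are the only 3-card subsets that do.
    wins = [c for c in combinations(range(1, 10), 3) if sum(c) == 15]
    print(len(wins))  # 8 - the same count as tic-tac-toe's 8 lines
    print(wins)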

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/13/2009 5:43:11 PM
On May 13, 11:27 am, c...@kcwc.com (Curt Welch) wrote:
> [snip]
>
> I noticed that very fact and it made me snicker to myself! :)

Well, at least I'm good for something - making people snicker.
0
Don
5/13/2009 6:22:10 PM
On May 13, 10:43 am, c...@kcwc.com (Curt Welch) wrote:
> casey <jgkjca...@yahoo.com.au> wrote:
>> The network wasn't generating random outputs rather
>> it was converting an input to an output based on the
>> weights in the network. If the output didn't produce
>> any changes in the input nothing changed, it never
>> reached any winning state at which point it would
>> have changed its weights and thus future behaviour.
>
>
> Sure, so based on how it worked, you are saying it
> would just get stuck in an infinite loop playing the
> same invalid game over and over and never learning.

It played the same invalid move because, unless you
hard coded it so that was impossible, it had no way
ever to get a win feedback to change the weights that
generated the input/output reaction.

> That simply shows a weakness in that algorithm so
> you should look for ways to make it better.

Well I did find a way to make it better by providing
a signal that nothing was changing which the network
would find "boring" and that would be the signal to
change its weights, as it is in us.

> Adding occasional random moves is one simplistic
> approach to making sure the agent keeps searching.

But if it is a pure net design, no external move
generator, this occasional random move must be part
of how it works.

> This is the exploration vs exploitation trade off
> that is part of all RL algorithms.

It is also based on a payoff. Evolution will select
the right balance for any species. Predators spend
the first part of their lives learning things that
their prey don't need to learn and have their
development in other areas like walking delayed
while they fill in the blanks.


> In a more complex environment, all the randomness
> you need might actually come from the environment
> itself.  If your ttt program was playing an opponent
> that played a little differently every time, that
> alone might have introduced enough search behavior
> into the system to allow your agent to learn.

In previous discussions with you and writing the ttt
learning program I came to the view that random is
good as it explores more spaces. This was important
for TD-Gammon. Old human players avoided a certain
move as they had decided wrongly that it was not a
good move. TD-Gammon used it anyway and over many
games actual outcomes proved it was a good move.

But it also recognizes when it isn't working.
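
For reference, the learning rule behind TD-Gammon is temporal-
difference learning. A tabular TD(0) version of the update - a big
simplification of TD-Gammon, which used TD(lambda) with a neural
network - looks roughly like this in Python:

    def td0_update(V, state, next_state, reward, alpha=0.1, gamma=1.0):
        # Move V(state) toward the bootstrapped target
        # reward + gamma * V(next_state).  Over many games, moves that
        # humans dismissed can end up with high values if the actual
        # outcomes keep supporting them.
        target = reward + gamma * V.get(next_state, 0.0)
        V[state] = V.get(state, 0.0) + alpha * (target - V.get(state, 0.0))
        return V

The values, not any human's prior judgment, decide which moves look
good - which is how TD-Gammon rehabilitated the move the old players
had ruled out.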

Remember my dig at you with your belief that there
was AI-gold in them there temporal pulse sorting hills?
Where do people look for real gold? They look into
places where the geology is similar to where gold has
already been found. Where has intelligence been found?
In biological brains!

Biological brains seem to show that intelligence can
evolve at the top. That the lower level input/output
processors are parallel in nature and although they
can learn things they are used and programmed by the
higher system. It is not that the higher system could
not deal directly with the environment - the cortex
does have direct connections to our fingers - it is
that, apart from directing learning in the lower parts,
it is too slow. When we practice something we are
programming the lower centers, which work in
parallel, so you no longer have to pay attention to
where the keys are on the keyboard; you only have to
think what it is you want to write. What you want to
write is decided by the cortex. And what you want to
learn is decided by the cortex. But implementing that
requires access to the high speed parallel processing
units below.  An analogy is our use of a computer.
We can take our time to write instructions which the
computer will then carry out at great speed doing
what we could not do at the top but doing what was
instructed from a simple command, RUN. Or a mouse
click on an icon. We can do the same on the input
side where we get a computer to analyze data for us.
It is not that we couldn't do it ourselves; it is
just that computers work faster.

Yes sure you are interested in the high level learning
system not in its minions. But this system evolved
out of such minions and for practical reasons they
are an important part of what makes it work. I suspect
understanding what the cortex is doing will require
the context in which it operates.

Now I am not suggesting we need to duplicate the brain
in all its glory to get intelligent behaviors any more
than we need to duplicate a bird in all its glory to
have flight. But I think you do yourself a disservice
if you ignore them or try to make them fit your view
as to how it should work. Not all features on a bird
are relevant to flight but some are and those features
are incorporated in modern aircraft. Not all geological
features found with gold are relevant to finding gold
but some of them are.

I am interested in the same thing you are even if we
disagree on the best approach. And I am actually trying
to write learning programs which you claim you are
also trying to do rather than just waffle on about it.


JC

0
casey
5/13/2009 8:26:12 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On May 13, 10:43 am, c...@kcwc.com (Curt Welch) wrote:
> > casey <jgkjca...@yahoo.com.au> wrote:
> >> The network wasn't generating random outputs rather
> >> it was converting an input to an output based on the
> >> weights in the network. If the output didn't produce
> >> any changes in the input nothing changed, it never
> >> reached any winning state at which point it would
> >> have changed its weights and thus future behaviour.
> >
> >
> > Sure, so based on how it worked, you are saying it
> > would just get stuck in an infinite loop playing the
> > same invalid game over and over and never learning.
>
> It played the same invalid move because, unless you
> hard coded it so that was impossible, it had no way
> ever to get a win feedback to change the weights that
> generated the input/output reaction.

Yes, as I explained, that just shows you implemented NO exploration into
your agent which shows it's a REALLY BAD learning system.  If it always
picks the same move, and never tries the other options, you aren't
searching.

The trick is that the system will pick what it currently _thinks_ is the
"right" move, but it can never know if what it thinks is the best move
really is.  Even if it was shown by past experience to be the best move, it
might not be the best move anymore.  For example, if the system learns to
play differently on later moves, then what was best at one time early in a
game, might no longer be best.

For example in something a bit more complex like chess, a learning system
might learn how to play the game if it opens by pushing the king's pawn.
But it might not have much clue how to play with any other opening move.
As such, it will come to "believe" that one opening move is the "best" move
to make - it's the one that will give it the best odds of winning. But if it
tries other openings, it can learn to play them as well, and might find
that its odds of winning are even better, with a different opening.  But
it won't have a chance to learn how to play those other board positions, if
it never tries them.

Getting stuck believing that only the one opening move is the "best" move,
is like getting stuck on a local maximum.  To find out if there are other
hill tops that are even higher, the search system must be willing to make
some investment in search - to try known "bad" behaviors (walk down hill)
in order to see if there might be more "gold" somewhere else.

> > That simply shows a weakness in that algorithm so
> > you should look for ways to make it better.
>
> Well I did find a way to make it better by providing
> a signal that nothing was changing which the network
> would find "boring" and that would be the signal to
> change its weights, as it is in us.

Well, my points seem to have gone RIGHT OVER YOUR HEAD AGAIN.

You didn't make the learning algorithm better by giving it negative rewards
for illegal moves.  You CHANGED THE ENVIRONMENT to make the problem easier
to solve.  Giving your agent something easier to do is not how you make the
learning algorithm stronger.

> > Adding occasional random moves is one simplistic
> > approach to making sure the agent keeps searching.
>
> But if it is a pure net design, no external move
> generator, this occasional random move must be part
> of how it works.

There are conceptual ways to structure the code without having to think of
it as simple randomness.  One way is to attempt to estimate the error in
the network's current approximation of future rewards.  The more times a
path has been explored, the more accurate the reward prediction becomes.
By tracking how many times a given behavior has been used, estimates of how
much error might exist in the prediction can be created, so that paths less
taken are assumed to have more error.  If you assume the error takes on a
probability distribution of some type (bell curve) you can make the
assumption that there's some probability that what looks like the worse
option might actually be the best option.  This gives you a mathematical
justification for exploration - for picking options other than the current
best option, on the grounds that there is some probability that it's
actually the best option and we just don't know it because we haven't
explored that path often enough yet.  The net result is that the system
will try bad options occasionally, just to verify their accuracy - but the
more it's tried, and the more the fact that it's a bad option is confirmed,
the less it will be used.
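
What is described here is essentially "optimism in the face of
uncertainty", and a standard concrete form of it is the UCB1 rule - a
Python sketch of the idea (an illustration of the technique, not a
claim about anyone's actual code; the names are made up):

    import math

    def ucb1_choice(values, counts, total_pulls):
        # Add to each value estimate a bonus that shrinks as the option
        # is tried more often, so rarely-tested options are treated as
        # carrying more estimation error.
        best, best_score = None, float('-inf')
        for action in values:
            if counts[action] == 0:
                return action  # untried options get priority
            bonus = math.sqrt(2.0 * math.log(total_pulls) / counts[action])
            score = values[action] + bonus
            if score > best_score:
                best, best_score = action, score
        return best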

> > This is the exploration vs exploitation trade off
> > that is part of all RL algorithms.
>
> It is also based on a payoff. Evolution will select
> the right balance for any species. Predators spend
> the first part of their lives learning things that
> their prey don't need to learn and have their
> development in other areas like walking delayed
> while they fill in the blanks.

Payoff is what's being searched for by exploration - it's the "reward"
value.

> > In a more complex environment, all the randomness
> > you need might actually come from the environment
> > itself.  If your ttt program was playing an opponent
> > that played a little differently every time, that
> > alone might have introduced enough search behavior
> > into the system to allow your agent to learn.
>
> In previous discussions with you and writing the ttt
> learning program I came to the view that random is
> good as it explores more spaces. This was important
> for TD-Gammon. Old human players avoided a certain
> move as they had decided wrongly that it was not a
> good move. TD-Gammon used it anyway and over many
> games actual outcomes proved it was a good move.
>
> But it also recognizes when it isn't working.
>
> Remember my dig at you with your belief that there
> was AI-gold in them there temporal pulse sorting hills?
> Where do people look for real gold? They look into
> places where the geology is similar to where gold has
> already been found. Where has intelligence been found?
> In biological brains!

In temporal pulse sorting biological brains. :)

> Biological brains seem to show that intelligence can
> evolve at the top. That the lower level input/output
> processors are parallel in nature

? It's parallel at all levels, not just the low level.

> and although they
> can learn things they are used and programmed by the
> higher system. It is not that the higher system could
> not deal directly with the environment - the cortex
> does have direct connections to our fingers - it is
> that, apart from directing learning in the lower parts,
> it is too slow. When we practice something we are
> programming the lower centers, which work in
> parallel, so you no longer have to pay attention to
> where the keys are on the keyboard; you only have to
> think what it is you want to write. What you want to
> write is decided by the cortex. And what you want to
> learn is decided by the cortex. But implementing that
> requires access to the high speed parallel processing
> units below.  An analogy is our use of a computer.
> We can take our time to write instructions which the
> computer will then carry out at great speed doing
> what we could not do at the top but doing what was
> instructed from a simple command, RUN. Or a mouse
> click on an icon. We can do the same on the input
> side where we get a computer to analyze data for us.
> It is not that we couldn't do it ourselves; it is
> just that computers work faster.

Well, I'm glad you think that's how the brain works.  We have no evidence
to support it, however.  No one understands what the brain is doing yet, so
to suggest we do is just silly in my view.  The people trying to study the
brain are grasping at straws to try and make up shit about what it's doing
and how it works for the most part.

> Yes sure you are interested in the high level learning
> system not in its minions. But this system evolved
> out of such minions and for practical reasons they
> are an important part of what makes it work. I suspect
> understanding what the cortex is doing will require
> the context in which it operates.

For some reason, you think the brain is "high level learning" using "low
level hard coded modules".

Maybe the brain is built like that.

But it's not important.

What's important, is that before anyone is going to understand what the
brain is doing, they have to first understand the problem space - they have
to understand how _any_ approach can explain how the brain learns the
complexity of behaviors it is able to learn.

It's mostly pointless playing with low level modules if you have no clue
what the high level learning system needs in order to work since the high
level learning system is the key to making everything work.

The theory about how learning works period has to be understood, before you
start to build a learning system, and before you start to build the low
level modules that allow the learning system to work.

It's like trying to design an airplane without first understanding the
fundamentals of lift and drag, which explain how it's possible for something
heavier than air not to fall out of the sky.

Until we understand and master the fundamentals of how a parallel signal
processing network can learn, we aren't going to know what we are looking
at in the brain, or what sort of software or hardware we have to create to
duplicate those powers in a machine.

One of the reasons the Wright Brothers succeeded, is because they did take
the time to study and master an understanding of the fundamentals of heavier
than air flight by playing with wind tunnels not by wasting their time
playing with birds.

That is what I'm doing.  I'm trying to master an understanding of the
learning problem that brain solves.  The nature of the problem can be well
understood without having to dig into the very complex implementation
details of the human brain.  And until that understanding is mastered,
digging into the brain is only likely to add confusion, instead of adding
enlightenment.

In two posts in a row here, you seem to have failed to understand that by
adding additional rewards, you were NOT improving your learning algorithm,
but instead changing the environment to make it easier to learn so a weak
algorithm could work.  To me, this is an example of you not understanding
the fundamentals of the problem we are facing in AI.

> Now I am not suggesting we need to duplicate the brain
> in all its glory to get intelligent behaviors any more
> than we need to duplicate a bird in all its glory to
> have flight. But I think you do yourself a disservice
> if you ignore them or try to make them fit your view
> as to how it should work. Not all features on a bird
> are relevant to flight but some are and those features
> are incorporated in modern aircraft. Not all geological
> features found with gold are relevant to finding gold
> but some of them are.

I'm not ignoring the brain.  We KNOW it solves a learning problem which no
one has yet figured out _any_ way to duplicate.  Learning more about how
the brain is structured, or learning more about the complex chemistry at
work in the brain is highly unlikely to yield anything worthwhile for me.
I'd have to study aspects of biology and chemistry and neuroscience for 10
years even to get close to the point of being able to learn something
from the brain.  But before I or anyone else could really make any sense of
all that complexity, we would still have to FIRST understand how a machine
can solve this type of learning problem.

It's like trying to figure out how a bird flies, without first
understanding the basics of lift.  We might, for example, look at a bird,
and think it flies simply by flapping its wings - because we have no
understanding of how lift created by moving forward though the air can also
keep it aloft.  And then we waste endless years trying to build machines
that flap their wings because we failed to do first do the basic research
into the dynamics of flight first.  Lift is understood by studying how
different shaped objects react to air, not by studying the complex anatomy
of bird wings or feathers.

Only after getting a basic understanding of lift, can we then go back and
look at the bird wing and feathers and understand WHY that design works.

Until we understand how networks can learn, we won't have a clue what we
are looking at in the brain.

That's what I've been working on all these years.  Basic research in to how
a signal processing network can learn.

If this were already understood, and could be found in some book on the
subject, then we could turn to the brain to look for implementation
details and we could understand why the brain was built like it is.  But
without that basic understanding, we are lost - and most of the neuroscience
people I've read sure look lost to me in the wild-ass guesses and
speculation they seem to come up with on what these brain structures are
there for and how they came to be.

> I am interested in the same thing you are even if we
> disagree on the best approach. And I am actually trying
> to write learning programs which you claim you are
> also trying to do rather than just waffle on about it.
>
> JC

Well, I've not written much code in some time because I'm stuck at an
impasse in trying to find the next conceptual advancement which will give
me something worth writing and testing.  I waffle on because it's a key
technique for finding that next breakthrough and getting past the
impasse.  Spending time trying to explain things to you (or debating things
with you) helps me reorganize my own thoughts on the subject which, with
luck, will help break that log jam at some point.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/13/2009 10:38:21 PM
On May 13, 3:38 pm, c...@kcwc.com (Curt Welch) wrote:
>>> That simply shows a weakness in that algorithm so
>>> you should look for ways to make it better.
>>
>> Well I did find a way to make it better by providing
>> a signal that nothing was changing which the network
>> would find "boring" and that would be the signal to
>> change it weights as it is in us.
>
>
> You didn't make the learning algorithm better by
> giving it negative rewards for illegal moves.  You
> CHANGED THE ENVIRONMENT to make the problem easier
> to solve.  Giving your agent something easier to do
> is not how you make the learning algorithm stronger.


There is no rule I know of that the feedback cannot be
immediate such as when you place your hand on a sharp
point. I no more changed the environment than nature did
when it gave the brain "pain" feedback. The change is in
the learner (pain) not in the environment (sharp point).


>> Biological brains seem to show that intelligence can
>> evolve at the top. That the lower level input/output
>> processors are parallel in nature
>
>
> ? It's parallel at all levels, not just the low level.


Not everything can be done in a single sweep and how
much a brain or a computer can do in a single sweep
is limited by hardware. The human brain does have a
lot of parallel processing hardware, but it is not
unlimited; that is why, for example, we have to "look
around" to take in the details of a scene.

Some things are serial by nature. Typing this text is
a serial process initiated by the cortex and mediated
by the cerebellum under control of feedback from various
sensors in the muscles and skin. On the input side
reading is also a serial process. You cannot take in
this whole post at a single glance. You have to move
your eyes over the words and build up, one step at a
time, what the text is about by holding in memory a
transient trace of each input just as a serial adder
has to do with the carry bit. And like the brain it
needs circuitry to enable this serial process to
proceed in an orderly fashion.


> Until we understand and master the fundamentals of
> how a parallel signal processing network can learn,

Conditioning all the way up right?

> One of the reasons the Wright Brothers succeeded, is
> because they did take the time to study and master an
> understanding of the fundamentals of heavier than air
> flight by playing with wind tunnels not by wasting
> their time playing with birds.

I think it was Sir George Cayley who first understood
the principles of flight?

http://www.flyingmachines.org/cayl.html

I am not sure how much the Wright brothers understood the
principles of lift and drag from their wind tunnel tests or
how much they simply observed some shapes worked better
than others as someone might do playing with different types
of networks.

> In two posts in a row here, you seem to have failed to
> understand that by adding addition rewards, you were NOT
> improving your learning algorithm, but instead changing
> the environment to make it easier to learn so a weak
> algorithm could work.  To me, this is an example of you
> not understanding the fundamentals of the problem we are
> facing in AI.


It is all about feedback. Is a brain a weak system because
of its vast feedback system?


> Until we understand how networks can learn, we won't
> have a clue what we are looking at in the brain.


I think they both add to our understanding of both
kinds of networks.


> That's what I've been working on all these years.
> Basic research in to how a signal processing network
> can learn.


Over time flying machines improved and so too over time
the current batch of networks will improve.


> If this was already understood, and could be found in
> some book on the subject, then we could then turn to
> the brain to look for implementation details and we
> could understand why the brain was built like it is.

The experiments you believe gave you the answer as to what
was required - "conditioning" - came from observing rat brains at
work, and now you deny them as a source of further study?


> Well, I've not written much code in some time because
> I'm stuck at an impasse in trying to find the next
> conceptual advancement which will give me something
> worth writing and testing.

Breakthroughs can also be made by experimenters without
any need for "conceptual advancement". A lot of discoveries
were in fact serendipitous accidents.


http://en.wikipedia.org/wiki/Serendipity

JC
0
casey
5/14/2009 11:01:14 AM
casey <jgkjcasey@yahoo.com.au> wrote:
> On May 13, 3:38 pm, c...@kcwc.com (Curt Welch) wrote:
> >>> That simply shows a weakness in that algorithm so
> >>> you should look for ways to make it better.
> >>
> >> Well I did find a way to make it better by providing
> >> a signal that nothing was changing which the network
> >> would find "boring" and that would be the signal to
> >> change its weights as it is in us.
> >
> >
> > You didn't make the learning algorithm better by
> > giving it negative rewards for illegal moves.  You
> > CHANGED THE ENVIRONMENT to make the problem easier
> > to solve.  Giving your agent something easier to do
> > is not how you make the learning algorithm stronger.
>
> There is no rule I know of that the feedback cannot be
> immediate such as when you place your hand on a sharp
> point.

Of course there's no rule against it.  And if you want to make it as easy
as possible for the algorithm to learn something, you give it as much
reward information as you can.  But our goal is not to make it easy for the
learning system, but instead, to make a _stronger_ learning system, because
the one you wrote to play TTT isn't even good enough to play Backgammon,
let alone act like a human.

> I no more changed the environment than nature did
> when it gave the brain "pain" feedback. The change is in
> the learner (pain) not in the environment (sharp point).

Maybe it's best that you don't waste your time trying to create RL
algorithms. :)

The problem in front of us is to build a strong learning algorithm which is
directed by a reward signal.  As we try to create a strong generic
algorithm, we take the stance that we know nothing about the nature of the
environment - so as to create an algorithm strong enough to learn how to
manipulate any environment it comes across.  We do this because that's what
humans can do.  Our brain can learn how to manipulate environments for the
purpose of getting higher rewards in ways which evolution could not possibly
have hard-wired into us.

Evolution did not hard-wire our ability to use a pencil and fill in circles
on a sheet of paper so that an optical test scanner could read our answers
to a math test, so we can get a better grade, so that we can get into a
better college, so that we can get a better job, so that we can get food to
feed our family, so that we don't have to suffer the pain of hunger 30 years
down the road.  Humans are masters at learning how to manipulate (and
survive in) highly complex environments - environments that make a
single game of Backgammon look completely trivial - and environments where
most rewards are highly delayed from the actions that helped create them.

Even though the hardware that generates our reward signal is in us, it's
_not_ part of the learning system that learns how to respond to it.  The
hardware that generates the rewards defines the goal (which for humans is
a large set of different rewards, all selected because they motivate us to
keep our genes alive).  The learning system figures out what type of
behavior works best for making sure we have food in 30 years by filling in
little circles on a test today.

When we study, and attempt to design, stronger generic learning algorithms,
we assume we don't know the environment, or what we are trying to learn
about the environment as defined by the reward signal.  That's why it's
generic - because we, as the intelligent designer, need to be blind to what
problem our algorithm is going to solve.  If we are blind to it, and we build
a better algorithm that can attack _any_ problem, then it's got a better
chance of solving problems we have never seen, or thought about.  If we
instead make assumptions about what type of environment we are dealing with,
and only create an algorithm that can work with that one type of
environment, we will have limited the strength of our algorithm.

There is no single, finite-sized algorithm that can solve all problems.
The fact that we have to work with finite hardware (finite memory, finite
compute-per-second power) means we have to make some assumptions about the
nature of the environment we are dealing with.  So creating a pure generic
algorithm is impossible.  But the goal is to make it as generic as possible
- and as strong as possible - which means it makes as few assumptions as
possible about the nature of the problem it's trying to solve.

But in all this, from the perspective of the learning algorithm, the reward
signal is part of the environment, not part of the learning algorithm.

If for some odd reason you think all this is something I'm just pulling out
of my ass and making it up as I go along, try reading this page:

http://www.cs.ualberta.ca/~sutton/book/ebook/node28.html

  "The environment also gives rise to rewards"...

or this one:

http://www.cs.ualberta.ca/~sutton/book/ebook/node29.html

   "In reinforcement learning, the purpose or goal of the agent is
   formalized in terms of a special reward signal passing from the
   environment to the agent."

The agent in RL does NOT generate the reward signal.  It's got one fixed
goal which is to maximize long term rewards _from_ _the_ _environment_.

When you change the code to generate different rewards, you are changing
the environment in the RL problem, not the agent.  Our goal is not to make
our agents work better by giving them a simpler environment to work in, but
by finding new ways to code the agent, so that it can deal effectively with
more complex environments.

It's fine to put instant rewards in the environment.  But if you can take
the instant rewards out, and your agent can still solve the problem, that
shows you have a better agent.
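
To make the distinction concrete (a minimal sketch of my own, not code from
the book): the reward function sits on the environment's side of the
interface, so "punishing illegal moves" is a change to the world, not to
the learner:

  # Two versions of the same TTT world.  The agent code never
  # changes; only the environment's reward function does.
  class BareTTT:
      """Rewards only at the end of a game: +1 win, -1 loss."""
      def reward(self, move_was_legal, game_result):
          if game_result == "win":
              return +1.0
          if game_result == "loss":
              return -1.0
          return 0.0

  class ShapedTTT(BareTTT):
      """Same game, but the environment also hands out an
      instant punishment for illegal moves - an easier world."""
      def reward(self, move_was_legal, game_result):
          if not move_was_legal:
              return -1.0               # instant, undelayed reward
          return BareTTT.reward(self, move_was_legal, game_result)

An agent that can only learn in ShapedTTT is weaker than one that can
learn in BareTTT, where credit for the win has to be propagated back
through every move that produced it.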

Writing an RL-based TTT program that not only needs to learn which move
works best to win the game, but must also learn to move only in empty
squares (without the help of instant rewards), is TRIVIAL.  If you can't do
that, then there's no hope of you making any progress in creating RL
algorithms to solve the problems the brain can solve.

If the algorithm you wrote only worked when it was given instant rewards,
then your algorithm is weaker than the example algorithms given in the
Sutton book.

In this introduction chapter of Sutton's book, he explains how to write a
RL algorithm to play TTT.

http://www.cs.ualberta.ca/~sutton/book/ebook/node10.html

In it he asks the students some simple questions to make them think; this
is one of them:

   "Exercise 1.3: Greedy Play   Suppose the reinforcement learning player
   was greedy, that is, it always played the move that brought it to the
   position that it rated the best. Would it learn to play better, or
   worse, than a nongreedy player? What problems might occur?"

The answer to that exercise question - in the introductory chapter of a
book which is itself only an introduction to the whole field of RL - is the
problem your algorithm ran into, and which you "fixed", not by improving
your learning agent, but by simplifying the environment.

The algorithm he specifies in that chapter, had you used it in your
learning system, would have solved the problem of learning where to move
without having to be given instant rewards - by simply giving it only
rewards for winning.
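
The heart of that method fits in a few lines (my own rough Python
rendering of the value-table idea from that chapter - the helper names
are mine):

  # Value-table learner in the style of Sutton & Barto ch. 1.
  # V maps board states to estimated probability of winning.
  # After a greedy move, the previous state's value is nudged
  # toward the new state's value, so the single end-of-game
  # reward slowly propagates back through the whole line of
  # play.  No instant rewards are needed, and illegal moves
  # never arise because only legal afterstates are evaluated.
  import random

  V = {}                    # state -> estimated value
  ALPHA = 0.1               # step size
  EPSILON = 0.1             # fraction of exploratory moves

  def value(state):
      return V.setdefault(state, 0.5)

  def choose_move(legal_next_states):
      if random.random() < EPSILON:
          return random.choice(legal_next_states)   # explore
      return max(legal_next_states, key=value)      # exploit

  def td_update(state, next_state):
      # V(s) <- V(s) + alpha * [V(s') - V(s)]
      V[state] = value(state) + ALPHA * (value(next_state) - value(state))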

> >> Biological brains seem to show that intelligence can
> >> evolve at the top. That the lower level input/output
> >> processors are parallel in nature
> >
> >
> > ? It's parallel at all levels, not just the low level.
>
> Not everything can be done in a single sweep and how
> much a brain or a computer can do in a single sweep
> is limited by hardware. The human brain does have a
> lot of parallel processing hardware, but it is not
> unlimited, which is why, for example, we have to "look
> around" to take in the details of a scene.

No, the brain doesn't "look around".  It, in a single sweep, comes to the
conclusion that it's time to move the eyes a little to the right.  And then
in the next instant, it comes to the conclusion to move the eyes a little
further to the right.  And then in the next instant, it comes to the
conclusion to move the eyes down, etc.  Each decision being made in one
large parallel, "single sweep".

Just because you talk about your own behavior as "sweep around the room"
does not in any way mean the brain works that way.  That's just how you
like to talk about what you do.  And that's the classic error of AI - to
assume that the abstraction you use to describe behavior, is also the best
abstraction for describing what the hardware is doing.

This was something demonstrated very well by Brooks' work with subsumption
architecture designs.  He showed how a simple reaction machine, one which
had various hard-coded ways of reacting to the _current_ condition of the
environment, could produce goal-seeking behavior, even though there was no
"goal" hard-wired directly into the hardware.  His hardware was not
structured so as to create some internal "goal" signal to represent what
its "intention" currently was.  Instead, it was structured simply as a
list of how to react based only on the condition of the _current_
environment.

All human behavior can be explained like that.  The brain did not "decide"
to "scan the room". The brain simply fell into a pattern or reactions that
caused it to scan the room, until something (likely in the environment)
triggered it to start following a different path of reactions.

> Some things are serial by nature.

Yeah, like all human behavior.

> Typing this text is
> a serial process initiated by the cortex and mediated
> by the cerebellum under control of feedback from various
> sensors in the muscles and skin. On the input side
> reading is also a serial process. You cannot take in
> this whole post at a single glance. You have to move
> your eyes over the words and build up, one step at a
> time, what the text is about by holding in memory a
> transient trace of each input just as a serial adder
> has to do with the carry bit. And like the brain it
> needs circuitry to enable this serial process to
> proceed in an orderly fashion.

Yes, but the hardware that creates these serial processes (all human
behavior) needs to be configured by training.  So how do you suggest we do
that?

> > Until we understand and master the fundamentals of
> > how a parallel signal processing network can learn,
>
> Conditioning all the way up right?

right.

> > One of the reasons the Wright Brothers succeeded, is
> > because they did take the time to study and master an
> > understanding of the fundamentals of heavier than air
> > flight by playing with wind tunnels not by wasting
> > their time playing with birds.
>
> I think it was Sir George Cayley that first understood
> the principles of flight?
>
> http://www.flyingmachines.org/cayl.html
>
> I am not sure how much the Wright brothers understood the
> principles of lift and drag from their wind tunnel tests or
> how much they simply observed some shapes worked better
> than others as someone might do playing with different types
> of networks.

My understanding is that they learned it from a book (the one you reference
above?).  But the book, which everyone trusted as the authority on the
subject, had some fundamental errors in its formulas, and those errors
were uncovered by the Wright Brothers in their wind tunnel testing, which
greatly helped them in designing their aircraft - and in designing the
props, which was something they didn't learn from nature but was a key part
of their solution.

Ah yes, from Wikipedia (spotted it while I was looking for the date of
the first flight for comments I was making further down)...

http://en.wikipedia.org/wiki/Wright_Brothers

  "The poor lift of the gliders led the Wrights to question the accuracy of
  Lilienthal's data, as well as the "Smeaton coefficient" of air pressure,
  which had been in existence for over 100 years and was part of the
  accepted equation for lift."...

They realized the accepted wisdom was wrong, and did experiments to find
out what was right.

> > In two posts in a row here, you seem to have failed to
> > understand that by adding additional rewards, you were NOT
> > improving your learning algorithm, but instead changing
> > the environment to make it easier to learn so a weak
> > algorithm could work.  To me, this is an example of you
> > not understanding the fundamentals of the problem we are
> > facing in AI.
>
> It is all about feedback. Is a brain a weak system because
> of its vast feedback system?

Yes, it is all about feedback.  Dan likes to remind us of that. :)  Not
sure what that has to do with any of this however.

> > Until we understand how networks can learn, we won't
> > have a clue what we are looking at in the brain.
>
> I think they both add to our understanding of both
> kinds of networks.

Sure, we would have no clue at all what to do if people had not studied the
brain.  Brain research is important which is why I don't totally ignore it.
But it's a path I let other people take because I'm not a neuroscientist
and don't expect to become one in my life.  I also don't think any of them
are going to solve AI, though their contributions are key.  AI, as a theory
of machine learning will be solved first, and then the neuroscience will
uncover how it's implemented by the brain.

> > That's what I've been working on all these years.
> > Basic research in to how a signal processing network
> > can learn.
>
> Over time flying machines improved and so too over time
> the current batch of networks will improve.

Yes, but we don't have the first one off the ground yet.  And it's not
going to get off the ground if you keep working on machines that flap their
wings just because wing flapping is an important part of how evolution made
all birds fly.

Here it is, more than 100 years after the Wrights' first powered flight,
and we still don't use flapping wings in any of our highly evolved airplane
designs.  Just because evolution found some solution that worked well for it,
doesn't mean that solution will EVER work well for the type of hardware we
have to work with.

Solving the engineering problem of powered flight was not solved by
studying birds, or by studying the evolution of birds, even though both
those subjects mirror exactly every argument you make about AI.  It was
solved by someone looking for, identifying, and mastering, the PRINCIPLES
that governed the technology.

What evolution did for flight was to show us it was clearly possible for
objects heavier than air to fly.  But that was about the end of it.
Everything else was figured out through thought and experimentation, not by
dissecting birds.

The principles that are important to designing a flying machine must first
be uncovered and understood - one of them being the lift equation you
can find in the Wikipedia article.  That, combined with the fundamentals of
heat engines and power, meant a light enough power source could be built -
but most of that was already understood at that point in time.

For AI, understanding the fundamental principles of RL is the key, because
that's what we already know (and have known for over 50 years) the human
brain (and every animal brain that has a cortex) is - an RL machine that
finds useful survival behaviors on its own, by trial and error.

Like in the time of the Wright Brothers, the basic problem domain is
already known. They already had simple gliders that worked, and it was only
a matter of implementation to duplicate the wonders of flight we saw in the
birds.

Today, we already understand the problem domain of reinforcement learning,
and we have simple machines doing very interesting things in that domain.
But we don't have designs that are strong enough yet.

If you understood the RL problem domain, you would understand that creating
hard-coded signal pre-processing to look for edges or whatnot is NOT
working on the agent - it's changing the environment to make it easier for
the agent.  And if you also understood how these agents have to work, you
would understand that no amount of hard-coding the environment will make the
world simple enough for a weak RL algorithm to look intelligent.

To go back to the airplane parallel: what you are doing is taking a bad
glider, and trying to make it fly further by finding a higher cliff to
launch it off of.  You are making it produce better results (fly for longer
before hitting the ground) by changing the environment (finding a
higher cliff) rather than by improving the airplane.  No amount of looking
for higher cliffs is ever going to solve the RL problem of AI, and no
amount of adjusting the environment by giving it "easier rewards", or
"better pre-processed inputs", is going to turn a weak RL algorithm into a
strong one.

The only way to solve AI is to work on the learning agent, not on the
environment because we don't get to pick the environment. The environment
is called the universe, and until we create a learning agent with as much
power as the human brain when it interacts with the universe, we will not
have gotten "off the ground" on this problem.

> > If this was already understood, and could be found in
> > some book on the subject, then we could then turn to
> > the brain to look for implementation details and we
> > could understand why the brain was built like it is.
>
> The experiments you believe gave you the answer as to what
> was required - "conditioning" - came from observing rat brains at
> work, and now you deny them as a source of further study?

Well, first off, I've never said no one should study animals, or humans, or
the brain.  I've simply said that was not going to be the path that will
take us to AI.

Why it is you think the only way to invent something new is by copying
evolution I have no clue.  True inventors don't need to copy anything.
They invent because they have an RL based brain that learns by trial and
error, not just by mimicking.  That's what gives us our creativity in the
first place.  You invent by exploring new ideas, not by copying old ones.

> > Well, I've not written much code in some time because
> > I'm stuck at an impasse in trying to find the next
> > conceptual advancement which will give me something
> > worth writing and testing.
>
> Breakthroughs can also be made by experimenters without
> any need for "conceptual advancement". A lot of discoveries
> were in fact serendipitous accidents.
>
> http://en.wikipedia.org/wiki/Serendipity
>
> JC

Yes, but you would also be a fool to try and solve a problem by NOT
working on it.

If you simply don't have a clue how to make a better RL algorithm, then by
all means, work on something else related and maybe it will give you the
needed insight.  There's nothing wrong with that.  I of course don't care
what you choose to spend your time working on.

The nature of all my debates here is to try and get people (you) to
understand what the problem of AI actually is.  It's the problem of
building a machine that learns by reinforcement.  That's where all our
intelligent behavior comes from. It's not built into us by evolution; it
was created by a reinforcement learning process.  Evolution built into us
a brain that can adapt its own design, through a process of reinforcement
learning, to create these complex machines (humans) that produce all this
complex behavior we call intelligence.

Designing these sort of behaviors into our machines is way beyond our
ability.  Only in very limited scopes can we do that - like chess playing,
or simple question answering. The full complexity of what a human does is
based on our ability to learn.  Every time I write one of these messages,
my brain gets a little bit re-programmed, so that the next message I write
is a little different.  Our true intelligence is not in what we do, as much
as it's in how we change our behavior over time as we adapt to whatever
sort of environment we find ourselves in.

No amount of hard-coding behavior into a machine will explain how it's able
to reprogram itself, and that's the key to AI - learning how to build
machines that can constantly reprogram themselves to adapt to a changing
environment.

If you don't have a clue how to do that, work on something else. But don't
expect me to buy the argument that the "something else" is an important
part of the solution to AI.

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/
0
curt
5/14/2009 6:34:34 PM
On May 14, 11:34 am, c...@kcwc.com (Curt Welch) wrote:
> But our goal is not to make it easy for the learning
> system, but instead, to make a _stronger_ learning
> system, because the one you wrote to play TTT isn't
> even good enough to play Backgammon, let alone act
> like a human.

Well I haven't tried it on Backgammon yet :)

The brain I assume you think is fairly "strong" and yet
those without pain feedback break bones, damage their
bodies and so on. We don't have radiation sensors, so we
will happily play with radioactive stuff until it kills
us. I think the "strength" of a learning system is limited
by its feedback system. We can't learn everything because
we don't find everything rewarding. And we can't learn
about things we cannot sense in some way.

> Even though the hardware that generates our reward signal
> is in us, it's _not_ part of the learning system that
> learns how to respond to it.

You can separate out the reward system if you like but
without it there is no learning. And if the only reward
signal is a win in backgammon that is all it will learn
no matter how "strong" it is at the task.

The idea that most feedback is not immediate is wrong.
Games are peculiar in that way, but even then, we play
games because in fact we get rewards while playing the
game, independently of the win reward.

You do not get an education (move) and then give it a
high value when you get a good job (win signal) as in
backgammon. You get a good education because you give
it a good value by observing the association between
people with good jobs and their good education.

> If we instead, make assumptions about what type of
> environment we are dealing with, and only create an
> algorithm that can work with that one type of
> environment, we  will have limited the strength of
> our algorithm.

News flash. One environment: the 3D world of interacting objects.

> There is no single, finite sized algorithm, that can solve
> all problems. The fact that we have to work with finite
> hardware (finite memory, finite compute-per-second power)
> means we have to make some assumptions about the nature of
> the environment we are dealing with.  So creating a pure
> generic algorithm is impossible.  But the goal is to make
> it as generic as possible - and as strong as possible,
> which means it makes as few assumptions about the nature
> of the problem it's trying to solve.

At last, agreement! No pure generic algorithm possible.

> The agent in RL does NOT generate the reward signal.

That is right. The agent only determines if it is a reward
signal. The signal is only a reward signal because of how
the agent reacts to it. The environment generates lots of
signals but what is made of them is not determined by the
environment.


> In this introduction chapter of Sutton's book, he
> explains how to write a RL algorithm to play TTT.
>
> http://www.cs.ualberta.ca/~sutton/book/ebook/node10.html

I have read Sutton's book and implemented the TTT example.
You must have forgotten the exchanges on this subject?

> The algorithm he specifies in that chapter, had you
> used it in your learning system, would have solved
> the problem of learning where to move, without having
> to give it instant rewards - by simply giving it only
> rewards for winning.

I have used that algorithm and we had an exchange on
the need for exploratory moves to build up values
for each state before the system can start exploiting
those values.

> No, the brain doesn't "look around".  It, in a single
> sweep, comes to the conclusion that it's time to move
> the eyes a little to the right.  And then in the next
> instant, it comes to the conclusion to move the eyes
> a little further to the right.  And then in the next
> instant, it comes to the conclusion to move the eyes
> down, etc.  Each decision being made in one large
> parallel, "single sweep".

Just as the serial adder decides, in one single parallel
sweep, on the sum and the carry. It then uses that carry
to decide on the next output. But the whole adding
process is serial even if each step is done by parallel
circuitry and the adder requires extra circuitry to
deal with that.

The brain decides in one parallel sweep some things
about what it is looking at. It holds the results of
that first parallel process to decide what to do next.
For example you see a page of text. That is a parallel
output that may be used to determine the next step in
the serial process such as to direct the eyes to the
start of the text.

> This was something demonstrated very well by Brooks'
> work with subsumption architecture designs.  He showed
> how a simple reaction machine, one which had various
> hard-coded ways of reacting to the _current_ condition
> of the environment, could produce goal-seeking behavior,
> even though there was no "goal" hard-wired directly
> into the hardware.

Depends what you mean by hard coded. The path taken by
a light seeking robot is not hard coded but the reaction
to light is hard coded. In subsumption the hard coded
low level reactions are subsumed by higher level hard
coded reactions.
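
Something like this, as a toy sketch (the layer names and
the priority wiring are mine, not Brooks's):

  # Toy subsumption controller.  Each layer is a hard-coded
  # reaction to the *current* sensor readings; here the
  # obstacle reflex takes priority over (subsumes the output
  # of) the light-seeking layer.  There is no "goal" variable
  # anywhere, yet the robot behaves as if it seeks light.
  def avoid_layer(sensors):
      if sensors["obstacle_ahead"]:
          return "turn_right"           # reflex: don't crash
      return None                       # nothing to say

  def seek_light_layer(sensors):
      if sensors["light_left"] > sensors["light_right"]:
          return "turn_left"            # steer toward the light
      return "turn_right"

  def control(sensors):
      for layer in (avoid_layer, seek_light_layer):
          action = layer(sensors)
          if action is not None:        # first layer with an output wins
              return action
      return "forward"

  print(control({"obstacle_ahead": False,
                 "light_left": 0.8, "light_right": 0.2}))   # turn_left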

> ... the hardware that creates these serial processes
> (all human behavior) needs to be configured by training.
> So how do you suggest we do that?

Or the serial process can be computed. When you read
text or add numbers you follow a fixed procedure which
deals with different types of data, text or numbers.
This -general- procedure is learned. The ability to
drive a car is a procedure for converting the data,
the current visual/tactile input, into actions on the
steering wheel, accelerator and brake. If this procedure
fails to bring about the goal or sub-goal state, then
the learning process is applied to that procedure.

I would say most of our actions are learned procedures
which automatically adjust to different situations.
It is only when they fail that the higher centres are alerted
to a problem. The same applies to good programming
procedures. When an error occurs the program jumps to
an error processing routine.
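
In code form the analogy would look something like this
(a toy sketch of my own, with made-up names):

  # The learned procedure handles routine variation on its
  # own; only a failure escalates to the "higher centres".
  class ProcedureFailed(Exception):
      """The procedure could not reach its goal or sub-goal."""

  def run(procedure, situation):
      try:
          return procedure(situation)       # automatic, learned skill
      except ProcedureFailed:
          # higher centres alerted: apply learning to the procedure
          return relearn(procedure, situation)

  def relearn(procedure, situation):
      # stand-in for whatever learning process fixes the procedure
      raise NotImplementedError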


>> It is all about feedback. Is a brain a weak system
>> because of its vast feedback system?
>
>
> Yes, it is all about feedback.  Dan likes to remind us
> of that. :)  Not sure what that has to do with any of
> this however.

It has everything to do with it. What you call a "reward"
is feedback. Unlike a simple RL algorithm the brain has
a vast feedback system.


> Brain research is important which is why I don't totally
> ignore it. But it's a path I let other people take
> because I'm not a neuroscientist and don't expect to
> become one in my life.

Sure I understand that you are not, and never will be,
a neuroscientist but you do ignore the latest research
that doesn't require you to be a neuroscientist to
understand. Instead you base all your ideas on Skinner's
rat research.

> And it's not going to get off the ground if you keep
> working on machines that flap their wings just because
> wing flapping is an important part of how evolution
> made all birds fly.

And the equivalent of flapping wings IS an important
part of how to build a flying machine. It is how birds
implement powered flight, and we do the same using an
engine and a rotating wing just as we use engines and
rotating wheels for cars instead of muscles and legs.
However, we can build flying machines that flap their
wings (you can buy toys that work that way) and build
walking machines.


> Solving the engineering problem of powered flight
> was not solved by studying birds, or by studying
> the evolution of birds, even though both those
> subjects mirror exactly every argument you make
> about AI.  It was solved by someone looking for,
> identifying, and mastering, the PRINCIPLES that
> governed the technology.

And how did they know where to look for these principles?
Don't be so sure that they didn't have birds or rising
smoke as the inspiration to build gliding machines and
hot air balloons. You looked to Skinner's rats for your
conditioning principles.


> The only way to solve AI, is to work on the learning
> agent, not on the environment because we don't get to
> pick the environment. The environment is called the
> universe,

The "environment" is not really the Universe. The
environment is that part of the Universe that you have
not defined as the agent. In biology the dividing line
isn't that clear, for we cannot exist outside a biosphere.

Also, your claim that we don't get to pick the environment
is not the case. We change our environment all the
time to suit ourselves. Our environment of oxygen was
created by life. We act on our environment just as much
as it acts on us. We can even build extra feedback in
the environment such as a smoke alarm.

> True inventors don't need to copy anything. They invent
> because they have an RL based brain that learns by trial
> and error, not just by mimicking.  That's what gives us
> our creativity in the first place.  You invent by
> exploring new ideas, not by coping old ones.

So you think these "new" ideas just come from trial and
error? When you get a new idea, is it never based on
previous observations or learning? Are no new ideas the
modification of old ideas?

> Our true intelligence is not in what we do, as much as
> it's in how we change our behavior over time as we adapt
> to whatever sort of environment we find ourselves in.

I understand about learning and adaptation. I read about it
in the first book I bought on the subject a long time ago.

You do not help by referring to learning and adaptation
(relearning) as "true intelligence". That is not how the
word "intelligence" is used. In this case you have taken
the category label "intelligence" and applied it only
to one of its members, learning, leaving all the other
innate or already learned behaviors we call "intelligent"
without a label. It is like saying a "true" fruit is an
apple, leaving all the other items like oranges and bananas
without a category name.


> No amount of hard-coding behaviour into a machine will
> explain how it's able to reprogram itself, and that's
> the key to AI - learning how to build machines that
> can constantly reprogram themselves to adapt to a
> changing environment.

Learning and adaptation (relearning) is a desired behavior.
I have not disagreed with that. It is my interest as well.


JC


0
casey
5/15/2009 9:31:29 PM
casey <jgkjcasey@yahoo.com.au> wrote:
> On May 14, 11:34 am, c...@kcwc.com (Curt Welch) wrote:
> > But our goal is not to make it easy for the learning
> > system, but instead, to make a _stronger_ learning
> > system, because the one you wrote to play TTT isn't
> > even good enough to play Backgammon, let alone act
> > like a human.
>
> Well I haven't tried it on Backgammon yet :)
>
> The brain I assume you think is fairly "strong" and yet
> those without pain feedback break bones, damage their
> bodies and so on. We don't have radiation sensors, so we
> will happily play with radioactive stuff until it kills
> us. I think the "strength" of a learning system is limited
> by its feedback system.

I don't grasp how you are using the words "feedback system" here.

It's not something I've ever seen you use in this way before.

> We can't learn everything because
> we don't find everything rewarding. And we can't learn
> about things we cannot sense in some way.

Well, I'm not sure what point you think you are making here.

Some people think the human mind is some sort of great magical thing that
creates consciousness and makes us better than all the other life forms and
who the hell knows what else.  The things you say above seem to me like they
might apply to someone like that.

I think the brain is a fairly trivial reinforcement-based arm and leg
controller and nothing else.  So to try and suggest I think it's got
all-powerful magic powers is kinda odd in my view.  The organ just helps us
make movements with our body parts that improve our odds of surviving.
Everything else humans think is so special about the brain I consider to be
nonsense for the most part.  It's not perfect, in any sense.  It's a "best
guess" device at best.

> > Even though the hardware that generates our reward signal
> > is in us, it's _not_ part of the learning system that
> > learns how to respond to it.
>
> You can separate out the reward system if you like but
> without it there is no learning. And if the only reward
> signal is a win in backgammon that is all it will learn
> no matter how "strong" it is at the task.

Of course.

> The idea that most feedback is not immediate is wrong.
> Games are peculiar in that way, but even then, we play
> games because in fact we get rewards while playing the
> game, independently of the win reward.

So you are using the word "feedback" to mean "reward" now?

So, when we attempt to pick up a rock, and we use our eyes in a feedback
loop to guide our hand to the rock, that's not the type of "feedback" you
are talking about above?

> You do not get an education (move) and then give it a
> high value when you get a good job (win signal) as in
> backgammon. You get a good education because you give
> it a good value by observing the association between
> people with good jobs and their good education.

Well, you really haven't spent the time to understand the reinforcement
learning problem and the basics of operant conditioning.

The "delay" in "delayed reward" means that some of the actions that caused
the reward (or the punishment signal) were delayed from the signal.  For
humans, ALL rewards are delayed.

If I hit my hand with a hammer, the pain is instant.  But the behaviors I
had to produce to create that pain took a significant amount of time.  I had
to first walk into a room and get the hammer.  I had to put my arm in a
position where it could be hit by the hammer. I had to swing the hammer in
just the right way, to bring it down on my hand.  There was a delay from
ms, to seconds, to even minutes, between the pain, and the behaviors that
contributed to the pain.

In order for the brain to learn from this pain, it had to associate the
pain with the behaviors that came before the pain.

If you push a button, and that causes you to get an electric shock, the
brain generated a large and complex sequence of events that led to that
action.  It might have started as the brain produced the behavior to make
the head turn and the eye focus on the button.  The brain might have
reacted to the perception of the button with some internal signals that we
call "thought", which went something like "I bet that makes the bell ring,
let's find out", which then triggered the complex sequence of actions to
make the arm move and finger extend, all while using the feedback from the
eyes and the touch sensors on the skin to make this happen, which led to
the button being pushed.  But then the shock happened, and the sensors
picked it up, and a few ms later it finally showed up in the brain as
"pain".

The brain might have generated ONE MILLION micro-behaviors to make all that
happen. And when the pain happened, it had to correctly correlate that pain
with the behaviors that were responsible for it - but not correlate it with
the behaviors which were not responsible.

All human learning is by _delayed_ rewards.  The only type of RL problems
that don't have delayed rewards are the ones that aren't actually
interacting with the real world, but which are running in a simulated
environment.
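
One standard mechanism for making that correlation work is the eligibility
trace (see Sutton & Barto's chapter on eligibility traces; this toy Python
version is my own, and it uses the raw reward in place of the full TD error
to keep it short):

  # Every micro-behavior leaves a decaying trace.  When a long-
  # delayed reward (or pain) finally arrives, credit or blame
  # lands on all recently active behaviors, in proportion to
  # how recently they fired.
  GAMMA, LAMBDA, ALPHA = 0.99, 0.9, 0.1

  weights = {}       # behavior -> learned value
  traces = {}        # behavior -> current eligibility

  def acted(behavior):
      traces[behavior] = traces.get(behavior, 0.0) + 1.0

  def tick(reward):
      for b, e in list(traces.items()):
          weights[b] = weights.get(b, 0.0) + ALPHA * reward * e
          traces[b] = e * GAMMA * LAMBDA    # decay the trace
          if traces[b] < 1e-6:
              del traces[b]                 # forget stale behaviors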

> > If we instead, make assumptions about what type of
> > environment we are dealing with, and only create an
> > algorithm that can work with that one type of
> > environment, we  will have limited the strength of
> > our algorithm.
>
> News flash. One environment. 3d world of interacting objects.

Yes, time and space are a fairly constant constraint we have evolved in,
and as such, it's fair game for evolution to hard-code solutions to fit that
environment.

However, the same brain that deals with the 3D world of time and space,
also deals with the auditory world of sound.  And the world of sound is not
3D objects interacting by any measure.

It also deals with the domain of smell and taste, which is not a domain of
3D interacting objects.

The same brain deals with the domain of heat sensors, and balance sensors,
which, again, is not a world of 3D objects interacting.

Each of these domains creates a unique and very different challenge to the
brain.

And it's quite logical to argue that since these sensors have been
part of us for a very long time, evolution is free to hard-code some of the
processing that happens for each sensor.

But after that hard-coded processing, the generic learning problem still
has to pick up and solve problems that evolution has never seen before, and
could in no way have hard-coded a solution for.

Evolution did not give us hardware for reading sheet music and playing a
piano.  Someone that knows how to sight read sheet music and play the piano
has hardware in them that they were not born with.  Their brain has
configured itself to perform this highly complex, hand/eye/ear/foot
coordination process.  It doesn't work because evolution gave us hardware
to do it.  It works because, after whatever preprocessing hardware we do
have, the generic learning function of the brain was able to wire up
signals from all these different parts of the body (eyes, ears, touch
sensors all over the body, balance sensors so you don't fall off the bench
when you reach for those far-away keys, body part position sensors) into a
complex program to make all these parts work in real time (with a
resolution down to milliseconds to make the notes play at the correct
time), by its generic learning ability.

In life, _any_ sensory pattern, in any sensory domain, might need to
regulate any possible arm or leg movement a human can make.  This cross
wiring from sensory patterns to arm and leg movements is not defined
ahead of time.  The brain has to be able to cross-connect any of our
sensory domains, to any of our action domains, in a complex real-time
control system.

The patterns that are important, and the actions that are important, are
things evolution could not have known about, and as such, could not have
built hardware to deal with.  The learning system must be able to learn to
recognize any sensory pattern, including patterns that combine multiple
sensory domains, and produce any complex sequence of behaviors.

> > There is no single, finite sized algorithm, that can solve
> > all problems. The fact that we have to work with finite
> > hardware (finite memory, finite compute-per-second power)
> > means we have to make some assumptions about the nature of
> > the environment we are dealing with.  So creating a pure
> > generic algorithm is impossible.  But the goal is to make
> > it as generic as possible - and as strong as possible,
> > which means it makes as few assumptions about the nature
> > of the problem it's trying to solve.
>
> At last, agreement! No pure generic algorithm possible.

I've never said it was.  The fact that for some reason, after all I've
written, you thought I believed it was possible to create "one pure generic
solution for all problems" shows how little you understand of what I've
been writing.

The stuff I've been talking about here for 5 years is not rocket science.
It's well understood by many people.  The basics of what reinforcement
learning is, and what its limits are, is something people have understood
for a long time.

The delayed reward problem is something Minsky wrote about 50 YEARS AGO.
50 YEARS AGO he understood that the tough part of trial and error learning
was dealing with delayed rewards.  He called it the credit assignment
problem and I believe he was the one that coined the term.

In Sutton's and Barto's book, in the first paragraph of the first chapter
after the introduction, we find:

  "These two characteristics--trial-and-error search and delayed
  reward--are the two most important distinguishing features of
  reinforcement learning."

Understanding the problem of delayed rewards is the first thing to learn
about the nature of the beast - but it's something you still don't
understand.

Every algorithm talked about in the Sutton book is structured to _solve_
the delayed reward problem, because that's the problem that makes RL both
hard and interesting.

> > The agent in RL does NOT generate the reward signal.
>
> That is right. The agent only determines if it is a reward
> signal. The signal is only a reward signal because of how
> the agent reacts to it. The environment generates lots of
> signals but what is made of them is not determined by the
> environment.

John, I'm using the word "agent" above in the standard way it's used in the
field of RL (as short for "learning agent").  I'm using the word "reward
signal" in the standard way it's used by everyone who has every worked on
this problem for the past 50 years.

You are not.  You are talking nonsense.  I don't know if you are just being
defensive because I'm being a jerk, or if you are just that dumb.  But if
this is all the better you understand RL, you should give up.  Or more
accurately, I should give up trying to debate AI with you.

> > In this introduction chapter of Sutton's book, he
> > explains how to write a RL algorithm to play TTT.
> >
> > http://www.cs.ualberta.ca/~sutton/book/ebook/node10.html
>
> I have read Sutton's book and implemented the TTT example.
> You must have forgotten the exchanges on this subject?

And did you need to give it extra help to learn legal moves by giving it a
second punishment (negative reward) signal or not?  If you wrote it the way
it was described in the book, you would not have needed to give it that
extra reward signal - the simple "game win" reward was all that would have
been needed for your program to learn how to make legal moves.

> > The algorithm he specifies in that chapter, had you
> > used it in your learning system, would have solved
> > the problem of learning where to move, without having
> > to give it instant rewards - by simply giving it only
> > rewards for winning.
>
> I have used that algorithm and we had an exchange on
> the need for an exploratory moves to build up values
> for each state before the system can start exploiting
> those values.
>
> > No, the brain doesn't "look around".  It, in a single
> > sweep, comes to the conclusion that it's time to move
> > the eyes a little to the right.  And then in the next
> > instant, it comes to the conclusion to move the eyes
> > a little further to the right.  And then in the next
> > instant, it comes to the conclusion to move the eyes
> > down, etc.  Each decision being made in one large
> > parallel, "single sweep".
>
> Just as the serial adder decides, in one single parallel
> sweep, on the sum and the carry. It then uses that carry
> to decide on the next output. But the whole adding
> process is serial even if each step is done by parallel
> circuitry and the adder requires extra circuitry to
> deal with that.
>
> The brain decides in one parallel sweep some things
> about what it is looking at. It holds the results of
> that first parallel process to decide what to do next.
> For example you see a page of text. That is a parallel
> output that may be used to determine the next step in
> the serial process such as to direct the eyes to the
> start of the text.

Are you trying to suggest the brain moves forward in some frame-by-frame
computation cycle?  The evidence for that would be what, exactly?

> > This was something demonstrated very well by Brooks'
> > work with subsumption architecture designs.  He showed
> > how a simple reaction machine, one which had various
> > hard-coded ways of reacting to the _current_ condition
> > of the environment, could produce goal-seeking behavior,
> > even though there was no "goal" hard-wired directly
> > into the hardware.
>
> Depends what you mean by hard coded. The path taken by
> a light seeking robot is not hard coded but the reaction
> to light is hard coded. In subsumption the hard coded
> low level reactions are subsumed by higher level hard
> coded reactions.

Yes, that's what I mean by hard coded.  You seem to be using it the same
way I am.

> > ... the hardware that creates these serial processes
> > (all human behavior) needs to be configured by training.
> > So how do you suggest we do that?
>
> Or the serial process can be computed. When you read
> text or add numbers you follow a fixed procedure which
> deals with different types of data, text or numbers.
> This -general- procedure is learned. The ability to
> drive a car is a procedure for converting the data,
> the current visual/tactile input, into actions on the
> steering wheel, accelerator and brake. If this procedure
> fails to bring about the goal or sub-goal state, then
> the learning process is applied to that procedure.

The brain uses what, in machine learning, is called on-line learning.
This means it's always learning while acting.  If you study the
RL algorithms, you see everything the agent does creates some learning.  It
is NOT something that only happens when "a sub-goal fails to be reached".

A classic failure to understand RL is that people think it means we only
learn when we get a reward, or a punishment.  That's not true by any
measure.  Everything we do, and everything that happens to us is a learning
experience - as is directly paralleled by how all RL algorithms work.

> I would say most of our actions are learned procedures
> which automatically adjust to different situations.

Yeah, I mostly agree with that.  But I think it's far better to not think
of them as separate procedures because I'm sure there is no place you can
draw a line to separate them (logically or physically).  It's just one HUGE
procedure that defines how we react to the current state of the
environment.

I think the way we write and structure computer code fits fairly well with
your description but I think the way the brain is structured doesn't fit
the idea of separate procedures very well at all.

I also think that the way we like to think about, and talk about, the human
brain tends to fit well with your description, but again, not with the way
the brain actually works.

For example, we might talk about how we learned a procedure to use a key to
unlock the front door.  And we might talk about how we learned a procedure
to open a jar.  And how we learned a procedure to use a screwdriver to
remove a screw.  But in the brain, all three of these procedures overlap and
merge together in one set of holographic-like behavior encoding.  Each new
procedure we learn gets mixed in with the code of many past learned
procedures which share various things in common.  But unlike a strict
subroutine, we don't just make use of "some code" from a past learned
event; it's more like the new learned event is just a slight bending and
modifying of many past learned events.

We don't have any easy way to see or connect some new learned behavior
with all the past learned behaviors it is merged with - it's invisible
to us for the most part - so we often don't grasp it's happening.

If you learn to play one card game, and then learn a second, how much of
all the things we learned from the first are playing an important role in
our ability to play the second?  There's the obvious stuff we make use of,
like what we know about card decks.  But then there's the subtle stuff we
don't even realize is happening - like when we are trying to decide how to
play a hand that's got an 8 in it, and our decision to fold, or play, is
biased to some extent by our experience with hands that had an 8 in them,
from the other game, without us ever realizing it.

The parallel, associative memory look-up that links in odd ways all our
memories and learned behavior into one large parallel associative
memory system causes everything we have ever learned to be cross-linked,
and cross-associated, in ways that for the most part are completely
invisible to us.  What product I choose to pick off the shelf today might
have been influenced in some important way by something I learned 40 years
ago - yet I have no awareness that it played such a role in my decision
process today. That, however, is how the brain seems to work.  When you do
controlled tests on humans, you see this overlap happening.

> It is only when they fail that the higher centres are alerted
> to a problem. The same applies to good programming
> procedures. When an error occurs the program jumps to
> an error processing routine.

Well, I think that's just another learned reaction.  That is, what you call
the "higher center" is likely what I call where our speech behavior happens,
and what you call "higher level alerted" I call another learned reaction -
how the speech section of our learning brain has learned to react to the
environment.

> >> It is all about feedback. Is a brain a weak system
> >> because of its vast feedback system?
> >
> >
> > Yes, it is all about feedback.  Dan likes to remind us
> > of that. :)  Not sure what that has to do with any of
> > this however.
>
> It has everything to do with it. What you call a "reward"
> is feedback. Unlike a simple RL algorithm the brain has
> a vast feedback system.

You really don't seem to have a clue what RL algorithms are doing.

Their "reward feedback system" is not simple.  It's vast. It's what THE
WHOLE ALGORITHM IS ABOUT.  See the picture of the cover of the Sutton book:

http://www.cs.ualberta.ca/~sutton/book/the-book.html

where it's got those stylized tree diagrams?  It's on the cover of the book
because it represents the different types of feedback the various types of
RL algorithms use.  That is, the concept of reward feedback (how it
happens, and in what form it happens) is so important and fundamental to
all of RL, that it made the cover of the book.

Reward feedback is key to how the delayed reward problem is solved by each
algorithm, and since the delayed reward problem is one of the key problems
of RL, how it's solved using different feedback algorithms becomes key to
how each RL algorithm is different.

However, when I used the word "feedback" I wasn't talking about rewards.  I
didn't realize you thought "reward" and "feedback" were words that could
be used as if they were nearly synonymous.

> > Brain research is important which is why I don't totally
> > ignore it. But it's a path I let other people take
> > because I'm not a neuroscientist and don't expect to
> > become one in my life.
>
> Sure I understand that you are not, and never will be,
> a neuroscientist, but you do ignore the latest research
> that doesn't require you to be a neuroscientist to
> understand. Instead you base all your ideas on Skinner's
> rat research.

When I see something that indicates his conclusions need to be updated,
I'll update my ideas.  So far, nothing you or anyone else has told me about
recent neuroscience indicates Skinner was wrong.  What I see instead is
constant and continuous evidence that you and others never understood what
Skinner was saying.

> > And it's not going to get off the ground if you keep
> > working on machines that flap their wings just because
> > wing flapping is an important part of how evolution
> > made all birds fly.
>
> And the equivalent to flapping wings IS an important
> part of how to build a flying machine. It is how they
> implement powered flight and we do the same using an
> engine and a rotating wing just as we use engines and
> rotating wheels for cars instead of muscles and legs.
> However we can build flying machines that flap their
> wings (you can buy toys that work that way) and build
> walking machines.

Ah, true, there are toys that work by wing flapping now; I had forgotten
about those.  But they were not built that way to make a better flying
machine.  They are built that way (I assume) because it made a better toy -
because it looked and acted a little more like a bird.

> > Solving the engineering problem of powered flight
> > was not solved by studying birds, or by studying
> > the evolution of birds, even though both those
> > subjects mirror exactly every argument you make
> > about AI.  It was solved by someone looking for,
> > identifying, and mastering, the PRINCIPLES that
> > governed the technology.
>
> And how did they know where to look for these principles?

That's the hard part in this research.  It takes a long time to find and
uncover new principles.  That work was done for the Wright Brothers 100
years before they started to work on the problem.  It was not easy to find,
but it was found long before them.  The question to ask is why wasn't
everyone using the known facts about lift?  Anyone that understood how lift
worked would have understood that solving the problem of powered flight
was simply a problem of balancing power, weight, lift, and drag.  The Wright
brothers (and a few others) understood that.  They understood that by that
point in time, it was a simple engineering problem - which meant searching
for a design that was light enough, had enough lift, had enough power,
and had low enough drag.  Find a design that adjusted those parameters to
the right point, and the machine would fly.  So what did they do? They
experimented with materials and airfoils, and very carefully performed tests
to determine the weight, lift, and drag of their designs until they found
one which fit the bill.  I'm not sure, but I believe they also designed and
built their own aluminum-block internal combustion engine - again, with
full understanding of the power-to-weight problem they were trying to solve.

The principles that needed to be understood to solve the problem of powered
flight were hard to uncover, but once understood, turned the problem into a
straightforward engineering/invention problem.  But even though all that had
been known for 100 years, many people still had no clue, and thought it
was impossible, while others that wanted it to be possible were still
building machines with flapping wings - showing no understanding of the
lift/weight/power/drag problem that had to be solved.

Understanding AI to be a reinforcement learning problem is the principle
that was not at all obvious, and which was hard to find.  It's what Skinner
and others figured out more than 50 years ago, however.  Once they figured
that out, AI became an engineering problem and not a search for the correct
principle.  However, many still have no clue that's what AI is about.

Understanding that powered flight was possible, and what had to be done to
solve it, was not simple - many didn't understand it until after they saw
it working.  AI, however, is far trickier.  That's because we all have
brains, and everyone seems to feel they understand what their own brain is
doing, or not doing, well enough to know what they are looking at (or not
looking at).  Sadly, the brain's image of itself is almost always dead
wrong - it's very illusory in nature, and few seem able to look past the
illusions to see what we are really talking about.  So a lot more people
seem to fail to understand AI.

> Don't be so sure that they didn't have birds or rising
> smoke as the inspiration to build gliding machines and
> hot air balloons.

Well, what the Wright Brothers used for their inspiration I have no clue.

> You looked to Skinner's rats for your
> conditioning principles.
>
> > The only way to solve AI, is to work on the learning
> > agent, not on the environment because we don't get to
> > pick the environment. The environment is called the
> > universe,
>
> The "environment" is not really the Universe. The
> environment is that part of the Universe that you have
> not defined as the agent.

For agents that interact with the real world, the agent itself becomes part
of the environment as well (the robot can sense itself as being part of the
environment it's trying to deal with).

> In biology the dividing line
> isn't that clear, for we cannot exist outside a biosphere.

The universe is the environment we are given, John.  I don't really
understand how you can fail to understand this simple fact.

> Also, your claim that we don't get to pick the environment
> is not the case. We change our environment all the
> time to suit ourselves.

Sure you are right.  What was I thinking.  I can leave the universe any
time I see fit and go visit some other universe.

> Our environment of oxygen was
> created by life. We act on our environment just as much
> as it acts on us. We can even build extra feedback in
> the environment such as a smoke alarm.

Yes, the agent interacts with the environment.  That's what "interact"
means.  It means we have changed it.  If our interaction weren't changing
the environment, then it would not be called "interaction".  It would be
called "talking into a black hole".

> > True inventors don't need to copy anything. They invent
> > because they have an RL based brain that learns by trial
> > and error, not just by mimicking.  That's what gives us
> > our creativity in the first place.  You invent by
> > exploring new ideas, not by copying old ones.
>
> So you think these "new" ideas just come from trial and
> error? When you get a new idea it is it never based on
> previous observations or learning? Are no new ideas the
> modification of old ideas?

So you think "trial and error" means "trial is never based on past
experience"?

If so, you are, once again, showing a total lack of understanding of even
the basics of RL algorithms.
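
Even the simplest action-selection rules make the point: the trials are
grounded in past experience.  A sketch (epsilon-greedy, reusing the
assumed Q table from the sketch above):

    # Epsilon-greedy action selection: trials are mostly picked from
    # what past experience says is best, with occasional exploration.
    import random

    def choose_action(Q, state, actions, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(actions)                 # exploratory trial
        return max(actions, key=lambda a: Q[(state, a)])  # use experience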

> > Our true intelligence is not in what we do, as much as
> > it's in how we change our behavior over time as we adapt
> > to whatever sort of environment we find ourselves in.
>
> I understand about learning and adaptation. I read about it
> in the first book I bought on the subject a long time ago.
>
> You do not help by referring to learning and adaptation
> (relearning) as "true intelligence". That is not how the
> word "intelligence" is used. In this case you have taken
> the category label "intelligence" and applied it only
> to one of its members, learning, leaving all the other
> innate or already learned behaviors we call "intelligent"
> without a label. It is like saying a "true" fruit is an
> apple leaving all the other items like oranges and bananas
> without a category name.

Though you don't seem to agree, and also don't seem to understand (which
is probably at least 90% due to the fact that you don't agree, so you
don't bother to spend the time to understand), everything you seem to want
to call intelligence that is not learning is "just another learned
behavior" to me.

> > No amount of hard-coding behaviour into a machine will
> > explain how it's able to reprogram itself, and that's
> > the key to AI - learning how to build machines that
> > can constantly reprogram themselves to adapt to a
> > changing environment.
>
> Learning and adaptation (relearning) is a desired behavior.
> I have not disagreed with that. It is my interest as well.

Yeah, well that's always good.  We would not have gotten as far, or debated
these ideas as long, if that were not true. :)

-- 
Curt Welch                                            http://CurtWelch.Com/
curt@kcwc.com                                        http://NewsReader.Com/