The Halting Problem is based on an ill-formed question

If a yes or no question does not have a correct yes or no answer then 
there must be something wrong with this question.

More generally:
An ill-formed question is defined as any question that lacks a correct 
answer from the set of all possible answers.

The *only* reason that the self-reference form of the Halting Problem 
cannot be solved is that neither of the two possible final states of 
any potential halt decider TM corresponds to whether or not its input TM 
will halt on its input.

In other words for potential
halt decider H and input M:
---------------------------
Not<ThereExists>
<ElementOfSet>
FinalStatesOf_H
<MathematicallyMapsTo>
Halts(M, H, input)

Where M is defined as
---------------------
M(String H, String input):
if H(input, H, input) loop
else halt
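
This construction can be sketched in modern code. Below, Python stands in for a Turing machine, and `candidate_h` is a hypothetical stand-in for any claimed halt decider (no correct one can exist, which is the point of the construction):

```python
# Sketch of the diagonal construction above. `candidate_h` stands in
# for any claimed halt decider; here it is a trivially fixed stub,
# since by Turing's argument no correct one can exist.
def candidate_h(program, arg):
    """Hypothetical halt decider: True iff program(arg) halts."""
    return True  # any fixed verdict will do for the demonstration

def m(h):
    # The diagonal program M: do the opposite of what h predicts.
    if h(m, h):
        while True:   # h said "halts", so loop forever
            pass
    else:
        return        # h said "loops", so halt immediately

# candidate_h claims m(candidate_h) halts, but by construction m then
# loops forever -- so candidate_h is wrong on this input.
print(candidate_h(m, candidate_h))  # True
```

Whatever fixed verdict `candidate_h` gives, `m` is built to do the opposite, so the decider answers incorrectly on the input (M, H, M); flipping the stub to return False makes it wrong the other way.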

The only difference between asking a yes or no question and the 
invocation of a Turing Machine is a natural language interface.

Within a natural language interface the invocation of H(M, H, M) would 
be specified as:

“Does Turing Machine M halt on input of Turing Machine H
and Turing Machine M?”

Within a natural language interface the answer to this question would be 
specified as “yes” or “no” and map to the final states of H of accept or 
reject.

So the only reason that the self-reference form of the Halting Problem 
cannot be solved is that it is based on a yes or no question that lacks 
a correct yes or no answer, and is thereby an ill-formed question.

Peter
6/9/2012 1:43:58 AM
comp.theory


On Jun 9, 11:43 am, Peter Olcott <OCR4Screen> wrote:
> If a yes or no question does not have a correct yes or no answer then
> there must be something wrong with this question.
>
> More generally:
> An ill-formed question is defined as any question that lacks a correct
> answer from the set of all possible answers.
>
> The *only* reason that the self reference form of the Halting Problem
> can not be solved is that neither of the two possible final states of
> any potential halt decider TM corresponds to whether or not its input TM
> will halt on its input.
>
> In other words for potential
> halt decider H and input M:
> ---------------------------
> Not<ThereExists>
> <ElementOfSet>
> FinalStatesOf_H
> <MathematicallyMapsTo>
> Halts(M, H, input)
>
> Where M is defined as
> ---------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt
>



You need to FULLY FORMALISE this problem to make a proper argument.

Have you seen STRIPS PLAN SPECIFICATION LANGUAGE?

http://en.wikipedia.org/wiki/STRIPS
STRIPS (Stanford Research Institute Problem Solver) is an automated planner


A sample STRIPS problem

A monkey is at location A in a lab.
There is a box in location C.
The monkey wants the bananas that are hanging from the ceiling in
location B,
but it needs to move the box and climb onto it in order to reach them.

Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
Goal state:    Have(Bananas)

Actions:
               // move from X to Y
               _Move(X, Y)_
               Preconditions:  At(X), Level(low)
               Postconditions: not At(X), At(Y)

               // climb up on the box
               _ClimbUp(Location)_
               Preconditions:  At(Location), BoxAt(Location), Level(low)
               Postconditions: Level(high), not Level(low)

               // climb down from the box
               _ClimbDown(Location)_
               Preconditions:  At(Location), BoxAt(Location), Level(high)
               Postconditions: Level(low), not Level(high)

               // move monkey and box from X to Y
               _MoveBox(X, Y)_
               Preconditions:  At(X), BoxAt(X), Level(low)
               Postconditions: BoxAt(Y), not BoxAt(X), At(Y), not At(X)

               // take the bananas
               _TakeBananas(Location)_
               Preconditions:  At(Location), BananasAt(Location), Level(high)
               Postconditions: Have(bananas)
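
The problem above is small enough to solve by blind forward search over fact sets. A minimal Python sketch (states are frozensets of ground facts; ClimbDown is omitted for brevity since no shortest plan needs it):

```python
# A breadth-first STRIPS-style search for the monkey-and-bananas
# problem above. States are frozensets of ground facts; ClimbDown is
# omitted for brevity since no shortest plan ever needs it.
from collections import deque

LOCS = ("A", "B", "C")

def successors(state):
    """Yield (action name, next state) for every applicable action."""
    for x in LOCS:
        if f"At({x})" not in state:
            continue
        for y in LOCS:
            if y == x:
                continue
            if "Level(low)" in state:
                # Move(X, Y): monkey walks from X to Y
                yield (f"Move({x},{y})",
                       state - {f"At({x})"} | {f"At({y})"})
                if f"BoxAt({x})" in state:
                    # MoveBox(X, Y): monkey pushes the box along
                    yield (f"MoveBox({x},{y})",
                           state - {f"At({x})", f"BoxAt({x})"}
                                 | {f"At({y})", f"BoxAt({y})"})
        if {f"BoxAt({x})", "Level(low)"} <= state:
            yield (f"ClimbUp({x})",
                   state - {"Level(low)"} | {"Level(high)"})
        if {f"BananasAt({x})", "Level(high)"} <= state:
            yield (f"TakeBananas({x})", state | {"Have(Bananas)"})

def plan(initial, goal):
    """Breadth-first search from the initial facts to any state
    containing all goal facts; returns the action list."""
    start = frozenset(initial)
    frontier = deque([(start, [])])
    seen = {start}
    while frontier:
        state, path = frontier.popleft()
        if goal <= state:
            return path
        for name, nxt in successors(state):
            nxt = frozenset(nxt)
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, path + [name]))

print(plan({"At(A)", "Level(low)", "BoxAt(C)", "BananasAt(B)"},
           {"Have(Bananas)"}))
# a shortest plan: Move(A,C), MoveBox(C,B), ClimbUp(B), TakeBananas(B)
```

Real STRIPS planners use goal regression or heuristics rather than blind search, but on a four-fact state space breadth-first enumeration already finds the four-step plan.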


The Halt Proof conclusion definitely says NOTHING about a Halt
Function that works exclusively on a directed acyclic graph of
program references within the Halt parameters.

e.g.
PROGRAM1 ... HALT(program2) ...
PROGRAM2 ... HALT(program3) ...
...
PROGRAMn-1 ... HALT(programn) ...
PROGRAMn ... HALT(program1)

This CYCLE is easy to avoid in languages such as ZFC, using Axiom Of
Regularity.
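
The restriction proposed here amounts to requiring that the graph of HALT() references be acyclic. That is a plain cycle check over a call graph; a Python sketch (program names are illustrative):

```python
# Sketch: checking that HALT() references among programs form a DAG,
# as the restriction above requires. `calls` maps each program name to
# the programs it passes to HALT(); a cycle means the restriction fails.
def has_cycle(calls):
    WHITE, GRAY, BLACK = 0, 1, 2      # unvisited / on current chain / done
    color = {p: WHITE for p in calls}

    def visit(p):
        color[p] = GRAY               # p is on the current HALT() chain
        for q in calls[p]:
            if color.get(q, BLACK) == GRAY:
                return True           # back edge: a HALT() reference cycle
            if color.get(q, BLACK) == WHITE and visit(q):
                return True
        color[p] = BLACK
        return False

    return any(color[p] == WHITE and visit(p) for p in calls)

chain = {"program1": ["program2"], "program2": ["program3"], "program3": []}
cycle = {"program1": ["program2"], "program2": ["program1"]}
print(has_cycle(chain), has_cycle(cycle))  # False True
```

The chain of programs is fine; the two-program HALT() loop is exactly the cycle the Axiom-of-Regularity analogy is meant to rule out.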

Herc

6/9/2012 1:55:05 AM
On Jun 8, 9:43=A0pm, Peter Olcott <OCR4Screen> wrote:
> If a yes or no question does not have a correct yes or no answer then
> there must be something wrong with this question.
>
> More generally:
> An ill-formed question is defined as any question that lacks a correct
> answer from the set of all possible answers.
>
> The *only* reason that the self reference form of the Halting Problem
> can not be solved is that neither of the two possible final states of
> any potential halt decider TM corresponds to whether or not its input TM
> will halt on its input.
>
> In other words for potential
> halt decider H and input M:
> ---------------------------
> Not<ThereExists>
> <ElementOfSet>
> FinalStatesOf_H
> <MathematicallyMapsTo>
> Halts(M, H, input)
>
> Where M is defined as
> ---------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt
>
> The only difference between asking a yes or no question and the
> invocation of a Turing Machine is a natural language interface.
>
> Within a natural language interface the invocation of H(M, H, M) would
> be specified as:
>
> “Does Turing Machine M halt on input of Turing Machine H
> and Turing Machine M?”
>
> Within a natural language interface the answer to this question would be
> specified as “yes” or “no” and map to the final states of H of accept or
> reject.
>
> So the only reason that the self reference form of the Halting Problem
> can not be solved is that it is based on a yes or no question that lacks
> a correct yes or no answer, and thereby derives an ill-formed question.

No one would say that instances of the halting problem lack a yes or
no answer.  The issue, when one says "the halting problem 'cannot be
solved'" is that there is no Turing machine that accurately decides
the problem on every instance.

So yes--every instance either has an answer of "yes" or "no"--but not
every instance can be solved by a Turing machine.

It's not ill-formed--your "questions" have answers, they are just hard/
impossible to find.

Do you think that the question, "Is the answer to this question no?"
is ill-formed?

You've heard of Godel's Incompleteness Theorem, right?  The Halting
Problem is rather similar.
cplxphil
6/9/2012 2:04:40 AM
On 6/8/2012 9:04 PM, cplxphil wrote:
> On Jun 8, 9:43 pm, Peter Olcott<OCR4Screen>  wrote:
>> If a yes or no question does not have a correct yes or no answer then
>> there must be something wrong with this question.
>>
>> More generally:
>> An ill-formed question is defined as any question that lacks a correct
>> answer from the set of all possible answers.
>>
>> The *only* reason that the self reference form of the Halting Problem
>> can not be solved is that neither of the two possible final states of
>> any potential halt decider TM corresponds to whether or not its input TM
>> will halt on its input.
>>
>> In other words for potential
>> halt decider H and input M:
>> ---------------------------
>> Not<ThereExists>
>> <ElementOfSet>
>> FinalStatesOf_H
>> <MathematicallyMapsTo>
>> Halts(M, H, input)
>>
>> Where M is defined as
>> ---------------------
>> M(String H, String input):
>> if H(input, H, input) loop
>> else halt
>>
>> The only difference between asking a yes or no question and the
>> invocation of a Turing Machine is a natural language interface.
>>
>> Within a natural language interface the invocation of H(M, H, M) would
>> be specified as:
>>
>> “Does Turing Machine M halt on input of Turing Machine H
>> and Turing Machine M?”
>>
>> Within a natural language interface the answer to this question would be
>> specified as “yes” or “no” and map to the final states of H of accept or
>> reject.
>>
>> So the only reason that the self reference form of the Halting Problem
>> can not be solved is that it is based on a yes or no question that lacks
>> a correct yes or no answer, and thereby derives an ill-formed question.
> No one would say that instances of the halting problem lack a yes or
> no answer.
Yes this point has been missed for many years.

Since the final states {accept, reject} of H form the entire solution 
set (every possible answer that H can provide), and these states have 
been shown to mathematically map to yes or no, the inability of 
a Turing Machine to solve the halting problem is merely the inability to 
correctly answer a yes or no question that has no correct yes or no answer.

> The issue, when one says "the halting problem 'cannot be
> solved'" is that there is no Turing machine that accurately decides
> the problem on every instance.
>
> So yes--every instance either has an answer of "yes" or "no"--but not
> every instance can be solved by a Turing machine.
>
> It's not ill-formed--your "questions" have answers, they are just hard/
> impossible to find.
>
> Do you think that the question, "Is the answer to this question no?"
> is ill-formed?
Yes.

> You've heard of Godel's Incompleteness Theorem, right?  The Halting
> Problem is rather similar.
Yes, and Godel's Incompleteness Theorem errs for the same reason.
Peter
6/9/2012 3:17:44 AM
On Jun 9, 1:17 pm, Peter Olcott <OCR4Screen> wrote:
> On 6/8/2012 9:04 PM, cplxphil wrote:
>
> > You've heard of Godel's Incompleteness Theorem, right?  The Halting
> > Problem is rather similar.
>
> Yes, and Godel's Incompleteness Theorem errs for the same reason.

Well one could say Godel's Proof offers the SOLUTION METHOD to the
halting problem!

THEORY |-  !PROOF(G)
GODEL |-  PROOF( THEORY|-G )

SEPARATE THE PROGRAM FROM THE TEST HARNESS

PROGRAM1 |- PRINT "HELLO"
TESTHARNESS |- IF HALT(PROGRAM1) PRINT "P1 HALTS!"

**********************

Not only that, Godel tried to prove a FORMAL_DERIVATION
PROOF_PREDICATE was impossible to program!

DERIVABLE(THEOREM) <-> E(A) E(B) DERIVABLE(A) ^ DERIVABLE(B) ^ (A^B)->THEOREM

Now we have a FORMAL METHOD to prove such things as whether programs
Halt!


Herc
6/9/2012 3:32:09 AM
DERIVBLE(THRM) <-> E(A) E(B) DERIVBLE(A) ^ DERIVBLE(B) ^ (A^B)->THRM


^^^ FORMAL MATHEMATICS!   IT'S TRIVIAL! ^^^

PROVE IF A PROGRAM HALTS!
PROVE WHO YOUR PATERNAL GRANDMA IS!

(G->A) ^ (D^B->C)
->
(G^D^A->B)->C
since G^(G->A)^(A->B) -> B  *forward chaining
so now only D (and G..) are required to prove C

Backward chaining using PRV()
PRV(C)  <->  C v  PRV(D)^PRV(B)^(D^B->C)
PRV(D)^PRV(B)^(D^B->C) -> PRV(C)    //since C is not yet shown true
PRV(D) <-> D v PRV(x)^PRV(y)^(x^y->D)
D -> PRV(D)    //no other info about D
PRV(B) <-> B v PRV(A)^(A->B)   //Unary PRV() formula
PRV(A) <-> A v PRV(G)^(G->A)
ASSUMEG -> G
G->PRV(G)
ASSUMEG ->
  PRV(G)
  PRV(A) <- PRV(G)^(G->A)
  PRV(A) <- TRUE^(G->A)
  PRV(A) <- (G->A)
  PRV(A) <- TRUE
  PRV(A)
  PRV(B) <- PRV(A)^(A->B)
  PRV(B) <- (A->B)
  ASSUMEAImpliesB -> A->B
  ASSUMEAImpliesB ->
    PRV(B)
    PRV(D)^PRV(B)^(D^B->C) -> PRV(C)
    PRV(D)^(D^B->C) -> PRV(C)
    PRV(D)->PRV(C)

i.e. Given (G->A) ^ (D^B->C)
ASSUME G, A->B, D to Prove C
QED
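
The backward-chaining PRV() procedure sketched above can be rendered over propositional Horn clauses in a few lines of Python (a sketch; the rule and atom names follow the derivation, everything else is illustrative):

```python
# A sketch of the backward-chaining PRV() procedure above, over
# propositional Horn clauses. `rules` is a list of (body, head) pairs;
# prv(goal) succeeds if goal is a fact or the head of a rule whose
# whole body is provable (PRV(x) <-> x v PRV(body)^(body->x)).
def prv(goal, facts, rules, seen=None):
    seen = set() if seen is None else seen
    if goal in facts:
        return True
    if goal in seen:                  # guard against circular rules
        return False
    seen = seen | {goal}
    return any(head == goal
               and all(prv(b, facts, rules, seen) for b in body)
               for body, head in rules)

# Given (G->A) and (D^B->C), assume G, A->B, D to prove C:
rules = [(["G"], "A"), (["A"], "B"), (["D", "B"], "C")]
print(prv("C", {"G", "D"}, rules))  # True: A from G, B from A, then C
```

Dropping G or D from the fact set makes the chain fail, matching the point above that only D (and G) remain to be assumed once A->B is granted.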






Herc
--
http://tinyurl.com/BLUEPRINTS-PROOF
http://tinyurl.com/BLUEPRINTS-LOGIC
6/9/2012 3:46:49 AM
Peter Olcott <OCR4Screen> writes:

> If a yes or no question does not have a correct yes or no answer then
> there must be something wrong with this question.

I was just thinking "surely there's no point reading this; can there be
anything new?" when lo and behold it contains a *new* error:

<snip>
> In other words for potential
> halt decider H and input M:
> ---------------------------
> Not<ThereExists>
> <ElementOfSet>
> FinalStatesOf_H
> <MathematicallyMapsTo>
> Halts(M, H, input)
>
> Where M is defined as
> ---------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt
>
> The only difference between asking a yes or no question and the
> invocation of a Turing Machine is a natural language interface.
>
> Within a natural language interface the invocation of H(M, H, M) would
> be specified as:
>
> “Does Turing Machine M halt on input of Turing Machine H
> and Turing Machine M?”

No, it would be read as "What does the machine H say when invoked with
input (M, H, M)?".  But that has a simple yes/no answer so it does not
fit with PO's preconceived idea of what the answer should be, so he has
to... er, "misrepresent the truth".

Misrepresenting the result of an invocation of a "potential halt
decider" as a question about whether a machine *actually* halts
or not is the error at the core of all of the recent nonsense.

<snip>
-- 
Ben.
ben.usenet
6/9/2012 2:41:02 PM
"Ben Bacarisse" <ben.usenet@bsb.me.uk> wrote in message 
news:0.0643db3d6f08e5c88d3b.20120609154102BST.871ulo8s5d.fsf@bsb.me.uk...
> Peter Olcott <OCR4Screen> writes:
>
>> If a yes or no question does not have a correct yes or no answer then
>> there must be something wrong with this question.
>
> I was just thinking "surely there's no point reading this; can there be
> anything new?" when lo and behold it contains a *new* error:
>
> <snip>
>> In other words for potential
>> halt decider H and input M:
>> ---------------------------
>> Not<ThereExists>
>> <ElementOfSet>
>> FinalStatesOf_H
>> <MathematicallyMapsTo>
>> Halts(M, H, input)
>>
>> Where M is defined as
>> ---------------------
>> M(String H, String input):
>> if H(input, H, input) loop
>> else halt
>>
>> The only difference between asking a yes or no question and the
>> invocation of a Turing Machine is a natural language interface.
>>
>> Within a natural language interface the invocation of H(M, H, M) would
>> be specified as:
>>
>> "Does Turing Machine M halt on input of Turing Machine H
>> and Turing Machine M?"
>
> No, it would be read as "What does the machine H say when invoked with
> input (M, H, M)?".

Since you cannot possibly construct the Halting Problem from the above 
question, this proves that you are flatly incorrect. Since my version 
precisely maps to the Halting Problem, this proves that I am correct.

> But that has a simple yes/no answer so it does not
> fit with PO's preconceived idea of what the answer should be, so he has
> to... er, "misrepresent the truth".
>
> Misrepresenting the result of an invocation of a "potential halt
> decider" as being a questions about whether a machine *actually* halts
> or not is the error at core of all of the recent nonsense.
>
> <snip>
> -- 
> Ben. 


NoSpam271
6/9/2012 6:28:59 PM
On Jun 10, 12:41 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> Peter Olcott <OCR4Screen> writes:
> > If a yes or no question does not have a correct yes or no answer then
> > there must be something wrong with this question.
>
> I was just thinking "surely there's no point reading this; can there be
> anything new?" when lo and behold it contains a *new* error:
>
> <snip>
>
>
> > In other words for potential
> > halt decider H and input M:
> > ---------------------------
> > Not<ThereExists>
> > <ElementOfSet>
> > FinalStatesOf_H
> > <MathematicallyMapsTo>
> > Halts(M, H, input)
>
> > Where M is defined as
> > ---------------------
> > M(String H, String input):
> > if H(input, H, input) loop
> > else halt
>
> > The only difference between asking a yes or no question and the
> > invocation of a Turing Machine is a natural language interface.
>
> > Within a natural language interface the invocation of H(M, H, M) would
> > be specified as:
>
> > “Does Turing Machine M halt on input of Turing Machine H
> > and Turing Machine M?”
>
> No, it would be read as "What does the machine H say when invoked with
> input (M, H, M)?".  But that has a simple yes/no answer so it does not
> fit with PO's preconceived idea of what the answer should be, so he has
> to... er, "misrepresent the truth".
>
> Misrepresenting the result of an invocation of a "potential halt
> decider" as being a questions about whether a machine *actually* halts
> or not is the error at core of all of the recent nonsense.
>
> <snip>
> --
> Ben.

You are incapable of comment except disdain.

Can you answer BEN!

Whether this program can be used to form an equivalent HALTING PROOF
to Turing's?

10 IF HALT() THEN GOTO 10
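
For what it's worth, the one-liner can be sketched in Python with the HALT() verdict as a parameter; either fixed answer comes out wrong (a bounded simulation stands in for "runs forever", and the names are illustrative):

```python
# The BASIC one-liner above, in Python: the program loops exactly when
# HALT() (asked about this very program) answers "halts". A bounded
# simulation stands in for "runs forever".
def runs_forever(halt_answer, max_steps=5):
    """Simulate `10 IF HALT() THEN GOTO 10` for at most max_steps;
    True means it is still looping when the budget runs out."""
    steps = 0
    while halt_answer and steps < max_steps:
        steps += 1            # GOTO 10
    return steps == max_steps

# If HALT() answers True ("halts"), the program loops; if it answers
# False ("loops"), it falls through line 10 and halts.
print(runs_forever(True), runs_forever(False))  # True False
```

So the one-liner has the same diagonal shape as Turing's construction: each possible HALT() verdict forces the opposite behavior.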

GEORGE GREEN HAS USED THIS VARIANT HIMSELF

but is quiet on the issue.

If you cannot even COMPREHEND BASIC TRIVIAL EQUIVALENT EXAMPLES that
are the BASIS OF THE IRREFLEXIVE SCOPE DEFINITION of the PRO-HALT()
ARGUMENT.. then why is your word worth the 200 bytes of nonsense you
occupy.

ANSWER THE QUESTION - NOT AN AD HOM DODGE LIKE ALWAYS BEN!!  BB()

Herc
6/9/2012 10:06:18 PM
On Jun 9, 11:55 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> On Jun 9, 11:43 am, Peter Olcott <OCR4Screen> wrote:
> > In other words for potential
> > halt decider H and input M:
> > ---------------------------
> > Not<ThereExists>
> > <ElementOfSet>
> > FinalStatesOf_H
> > <MathematicallyMapsTo>
> > Halts(M, H, input)
>
> > Where M is defined as
> > ---------------------
> > M(String H, String input):
> > if H(input, H, input) loop
> > else halt
>


PETER'S FORMAL STYLE IS REALLY CLOSE TO STRIPS

a set of operators (i.e., actions); each operator is itself a
quadruple , each element being a set of conditions.

These four sets specify, in order,
  which conditions must be true for the action to be executable,
  which ones must be false,
  which ones are made true by the action and
  which ones are made false;

like a TEMPORAL PREDICATE SEARCH PROCEDURE!!


 Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
 Goal state:    Have(Bananas)

 Actions:
     // move from X to Y
     _Move(X, Y)_
     Preconditions:  At(X), Level(low)
     Postconditions: not At(X), At(Y)

     // take the bananas
     _TakeBananas(Location)_
     Preconditions:  At(Location), BananasAt(Location), Level(high)
     Postconditions: Have(bananas)

EG
THE FALSE POSTCONDITION of MOVE FROM X TO Y
is AT(X)
I.E.  AFTER you move away from X, NOT(AT(X))

>
> e.g.
> PROGRAM1 ... HALT(program2) ...
> PROGRAM2 ... HALT(program3) ...
> ...
> PROGRAMn-1 ... HALT(programn) ...
> PROGRAMN ... HALT(program1)
>
> This CYCLE is easy to avoid in languages such as ZFC, using Axiom Of
> Regularity.
>

This LOGIC NEWSGROUP is ENTIRELY BOGUS when not one comment is made on
the above proposal to refine the domain of HALT(); and the subject is
dismissed with AD HOMS instead.

Herc
6/9/2012 10:23:32 PM
On Jun 8, 11:17 pm, Peter Olcott <OCR4Screen> wrote:
> On 6/8/2012 9:04 PM, cplxphil wrote:
>
> > On Jun 8, 9:43 pm, Peter Olcott<OCR4Screen> wrote:
> >> If a yes or no question does not have a correct yes or no answer then
> >> there must be something wrong with this question.
>
> >> More generally:
> >> An ill-formed question is defined as any question that lacks a correct
> >> answer from the set of all possible answers.
>
> >> The *only* reason that the self reference form of the Halting Problem
> >> can not be solved is that neither of the two possible final states of
> >> any potential halt decider TM corresponds to whether or not its input TM
> >> will halt on its input.
>
> >> In other words for potential
> >> halt decider H and input M:
> >> ---------------------------
> >> Not<ThereExists>
> >> <ElementOfSet>
> >> FinalStatesOf_H
> >> <MathematicallyMapsTo>
> >> Halts(M, H, input)
>
> >> Where M is defined as
> >> ---------------------
> >> M(String H, String input):
> >> if H(input, H, input) loop
> >> else halt
>
> >> The only difference between asking a yes or no question and the
> >> invocation of a Turing Machine is a natural language interface.
>
> >> Within a natural language interface the invocation of H(M, H, M) would
> >> be specified as:
>
> >> “Does Turing Machine M halt on input of Turing Machine H
> >> and Turing Machine M?”
>
> >> Within a natural language interface the answer to this question would be
> >> specified as “yes” or “no” and map to the final states of H of accept or
> >> reject.
>
> >> So the only reason that the self reference form of the Halting Problem
> >> can not be solved is that it is based on a yes or no question that lacks
> >> a correct yes or no answer, and thereby derives an ill-formed question.
> > No one would say that instances of the halting problem lack a yes or
> > no answer.
>
> Yes this point has been missed for many years.
>
> Since the final states: {accept, reject} of H form the entire solution
> set, (every possible answer that H can provide) and these states have
> been shown to mathematically map to yes or no therefore the inability of
> a Turing Machine to solve the halting problem is merely the inability to
> correctly answer a yes or no question that has no correct yes or no answer.
>

Perhaps you've been over this before in your lengthy discussions, but
let me see if I have this straight.

You are saying that the question, "Does machine M halt on input I?"
may, for certain M and I, be impossible to answer either yes or no?

If it doesn't either halt or not halt on input I, what exactly does it
do?

Also, before this continues too long:  It sounds like you are quite
confident that you're right about this.  I am quite confident that you
are not.  What would it take to convince you that you are wrong?

For my part, I will be satisfied and agree that Turing's result is
somehow wrong if you can implement an algorithm that solves the
halting problem.  If you are going to say that the Halting problem is
ill-formed, then in order for me to agree with this, I would need to
see an example of a machine that both fails to halt and fails to not
halt.  (Good luck.)
cplxphil
6/9/2012 10:58:43 PM
On Jun 10, 8:58 am, cplxphil <cplxp...@gmail.com> wrote:
>
> You are saying that the question, "Does machine M halt on input I?"
> may, for certain M and I, be impossible to answer either yes or no?
>



BINGO!!!!   THAT IS YOUR HALTING PROOF!


6/9/2012 11:20:22 PM
On Jun 9, 1:55 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> On Jun 9, 11:43 am, Peter Olcott <OCR4Screen> wrote:
>
> > If a yes or no question does not have a correct yes or no answer then
> > there must be something wrong with this question.
>
> > More generally:
> > An ill-formed question is defined as any question that lacks a correct
> > answer from the set of all possible answers.
>
> > The *only* reason that the self reference form of the Halting Problem
> > can not be solved is that neither of the two possible final states of
> > any potential halt decider TM corresponds to whether or not its input TM
> > will halt on its input.
>
> > In other words for potential
> > halt decider H and input M:
> > ---------------------------
> > Not<ThereExists>
> > <ElementOfSet>
> > FinalStatesOf_H
> > <MathematicallyMapsTo>
> > Halts(M, H, input)
>
> > Where M is defined as
> > ---------------------
> > M(String H, String input):
> > if H(input, H, input) loop
> > else halt
>
> You need to FULLY FORMALISE this problem to make a proper argument.
>
> Have you seen STRIPS PLAN SPECIFICATION LANGUAGE?
>
> http://en.wikipedia.org/wiki/STRIPS
> STRIPS (Stanford Research Institute Problem Solver) is an automated
> planner
>
> A sample STRIPS problem
>
> A monkey is at location A in a lab.
> There is a box in location C.
> The monkey wants the bananas that are hanging from the ceiling in
> location B,
> but it needs to move the box and climb onto it in order to reach them.
>
> Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
> Goal state:    Have(Bananas)
>
> Actions:
>                // move from X to Y
>                _Move(X, Y)_
>                Preconditions:  At(X), Level(low)
>                Postconditions: not At(X), At(Y)
>
>                // climb up on the box
>                _ClimbUp(Location)_
>                Preconditions:  At(Location), BoxAt(Location), Level(low)
>                Postconditions: Level(high), not Level(low)
>
>                // climb down from the box
>                _ClimbDown(Location)_
>                Preconditions:  At(Location), BoxAt(Location), Level(high)
>                Postconditions: Level(low), not Level(high)
>
>                // move monkey and box from X to Y
>                _MoveBox(X, Y)_
>                Preconditions:  At(X), BoxAt(X), Level(low)
>                Postconditions: BoxAt(Y), not BoxAt(X), At(Y), not At(X)
>
>                // take the bananas
>                _TakeBananas(Location)_
>                Preconditions:  At(Location), BananasAt(Location), Level(high)
>                Postconditions: Have(bananas)
>
> The Halt Proof conclusion definitely says NOTHING about a Halt
> Function that works exclusively on a directed acyclic graph of
> programs references within the Halt parameters.
>
> e.g.
> PROGRAM1 ... HALT(program2) ...
> PROGRAM2 ... HALT(program3) ...
> ...
> PROGRAMn-1 ... HALT(programn) ...
> PROGRAMN ... HALT(program1)
>
> This CYCLE is easy to avoid in languages such as ZFC, using Axiom Of
> Regularity.
>
> Herc

Hi!

yes and who's climbing with those bananas too
n.m.keele
6/9/2012 11:50:11 PM
On Jun 10, 9:50 am, N <n.m.ke...@hotmail.co.uk> wrote:
> On Jun 9, 1:55 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> > A sample STRIPS problem
>
> > A monkey is at location A in a lab.
> > There is a box in location C.
> > The monkey wants the bananas that are hanging from the ceiling in
> > location B,
> > but it needs to move the box and climb onto it in order to reach them.
>
> > Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
> > Goal state:    Have(Bananas)
>
> > Actions:
> >                // move from X to Y
> >                _Move(X, Y)_
> >                Preconditions:  At(X), Level(low)
> >                Postconditions: not At(X), At(Y)
>
> >                // climb up on the box
> >                _ClimbUp(Location)_
> >                Preconditions:  At(Location), BoxAt(Location), Level(low)
> >                Postconditions: Level(high), not Level(low)
>
> >                // climb down from the box
> >                _ClimbDown(Location)_
> >                Preconditions:  At(Location), BoxAt(Location), Level(high)
> >                Postconditions: Level(low), not Level(high)
>
> >                // move monkey and box from X to Y
> >                _MoveBox(X, Y)_
> >                Preconditions:  At(X), BoxAt(X), Level(low)
> >                Postconditions: BoxAt(Y), not BoxAt(X), At(Y), not At(X)
>
> >                // take the bananas
> >                _TakeBananas(Location)_
> >                Preconditions:  At(Location), BananasAt(Location), Level(high)
> >                Postconditions: Have(bananas)
>
> > The Halt Proof conclusion definitely says NOTHING about a Halt
> > Function that works exclusively on a directed acyclic graph of
> > programs references within the Halt parameters.
>
> > e.g.
> > PROGRAM1 ... HALT(program2) ...
> > PROGRAM2 ... HALT(program3) ...
> > ...
> > PROGRAMn-1 ... HALT(programn) ...
> > PROGRAMN ... HALT(program1)
>
> > This CYCLE is easy to avoid in languages such as ZFC, using Axiom Of
> > Regularity.
>
> > Herc
>
> Hi!
>
> yes and who's climbing with those bananas too

GOAL: LIST X : CLIMBING (X, BANANAS)

_CLIMBING( CLIMBER, WITHOBJECT )_

  PRECONDITION: LEVEL(HIGH), HAS(CLIMBER, WITHOBJECT)
  POSTCONDITION: LEVEL(HIGHER), HIGHER>HIGH, HAS(CLIMBER, WITHOBJECT)

I imagine STRIPS is used in WINOGRAD'S Natural Language Engine, so
worth a look!

Computer: I'M NOT SURE WHAT YOU MEAN BY "ON TOP OF"
IN THE PHRASE "ON TOP OF GREEN CUBES".
DO YOU MEAN:
1 - DIRECTLY ON THE SURFACE
2 - ANYWHERE ON TOP OF?

Person: 2
Computer: THREE OF THEM.
Person: HAD YOU TOUCHED ANY PYRAMID BEFORE YOU PUT THE GREEN ONE ON
THE LITTLE CUBE?
Computer: YES, THE GREEN ONE.

Person: WHEN DID YOU PICK IT UP?
Computer: WHILE I WAS STACKING UP THE RED CUBE, A LARGE RED BLOCK AND
A LARGE GREEN CUBE.

Person: WHY?
Computer: TO GET RID OF IT.
Person: WHY DID YOU DO THAT?
Computer: TO CLEAN OFF THE RED CUBE.
Person: WHY DID YOU CLEAR OFF THAT CUBE?
Computer: TO PUT IT ON A LARGE GREEN CUBE.
Person: WHY DID YOU DO THAT?
Computer: TO STACK UP THE RED CUBE , A LARGE RED BLOCK AND A LARGE
GREEN CUBE.
Person: WHY DID YOU DO THAT?
Computer: BECAUSE YOU ASKED ME TO.

Person: HOW DID YOU DO IT?
Computer: BY PUTTING A..

HUMAN: WHY?
COMPUTER: BECAUSE YOU ASKED ME TO!!

Herc
6/10/2012 12:08:25 AM
On 6/9/2012 6:58 PM, cplxphil wrote:
> Perhaps you've been over this before in your lengthy discussions, but
> let me see if I have this straight.
>
> You are saying that the question, "Does machine M halt on input I?"
> may, for certain M and I, be impossible to answer either yes or no?

I think Peter believes that this is not the proper way to phrase the 
question really being asked. Although, when he was pushed into a corner, 
I think there was a tacit admission that the answer of such a question 
depends on who you are asking it of.

> Also, before this continues too long:  It sounds like you are quite
> confident that you're right about this.  I am quite confident that you
> are not.  What would it take to convince you that you are wrong?

Again, I think the answer is that he has no issue with proof, per se. 
His umbrage is with the interpretation of the result: he believes that 
it is possible to make a Halt decider that is "essentially" correct but 
fails in some cases which are "necessary" (use of quotation marks to 
indicate that the terms contained within are slippery and ill-defined). 
Lots of people have attempted to illustrate several alternate 
derivations to show that the incorrectness is not so well-contained, but 
they have all been ignored because either:
a) it happens to fall under the "necessary" failures,
b) it relies on a similar "incorrect" result (the uncountability of real 
numbers and Godel's incompleteness theorem have also been explicitly 
cited as incorrect proofs due to being similar [1]), or
c) he doesn't understand it, so it has to be either case a or case b 
because he is OBVIOUSLY right.

Trying to come up with a simple, alternate proof of the Halting problem 
that is understandable and skirts any other proofs that have anything 
smacking of diagonalization or self-reference is indeed hard, but I 
doubt he'd accept anything less. I also doubt that he'd accept even that 
much, though...

[1] This just makes it seem to me that he has a really hard time 
accepting that a proof by contradiction is indeed a valid proof.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
Pidgeot18
6/10/2012 1:39:57 AM
On Jun 10, 11:39 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> [1] This just makes it seem to me that he has a really hard time
> accepting that a proof by contradiction is indeed a valid proof.

... of the negation of your hypothesis.

The following is the same form as The Halt Proof.

ASSUME a function exists that is irreflexive (i.e. tests OTHER
functions) and works on all possible values.

f(f) = NO VALUE

CONTRADICTION

....  BULLSHIT FOLLOWS FROM DISPROOF OF INVALID HYPOTHESIS

Herc

--
P: If Halts(P) Then Loop Else Halt.
is obviously a paradoxical program if Halts() exists.
BUT IF IT WEREN'T NAMED P then it might not be:
Q: If Halts(P) Then Loop Else Halt.
is NOT paradoxical.
~ George Green  (sci.logic)
0
6/10/2012 2:31:53 AM
On 6/9/2012 5:58 PM, cplxphil wrote:
> On Jun 8, 11:17 pm, Peter Olcott<OCR4Screen>  wrote:
>> On 6/8/2012 9:04 PM, cplxphil wrote:
>>
>>
>>
>>
>>
>>
>>
>>> On Jun 8, 9:43 pm, Peter Olcott<OCR4Screen>    wrote:
>>>> If a yes or no question does not have a correct yes or no answer then
>>>> there must be something wrong with this question.
>>>> More generally:
>>>> An ill-formed question is defined as any question that lacks a correct
>>>> answer from the set of all possible answers.
>>>> The *only* reason that the self reference form of the Halting Problem
>>>> can not be solved is that neither of the two possible final states of
>>>> any potential halt decider TM corresponds to whether or not its input TM
>>>> will halt on its input.
>>>> In other words for potential
>>>> halt decider H and input M:
>>>> ---------------------------
>>>> Not<ThereExists>
>>>> <ElementOfSet>
>>>> FinalStatesOf_H
>>>> <MathematicallyMapsTo>
>>>> Halts(M, H, input)
>>>> Where M is defined as
>>>> ---------------------
>>>> M(String H, String input):
>>>> if H(input, H, input) loop
>>>> else halt
>>>> The only difference between asking a yes or no question and the
>>>> invocation of a Turing Machine is a natural language interface.
>>>> Within a natural language interface the invocation of H(M, H, M) would
>>>> be specified as:
>>>> "Does Turing Machine M halt on input of Turing Machine H
>>>> and Turing Machine M?"
>>>> Within a natural language interface the answer to this question would be
>>>> specified as "yes" or "no" and map to the final states of H of accept or
>>>> reject.
>>>> So the only reason that the self reference form of the Halting Problem
>>>> can not be solved is that it is based on a yes or no question that lacks
>>>> a correct yes or no answer, and thereby derives an ill-formed question.
>>> No one would say that instances of the halting problem lack a yes or
>>> no answer.
>> Yes this point has been missed for many years.
>>
>> Since the final states: {accept, reject} of H form the entire solution
>> set, (every possible answer that H can provide) and these states have
>> been shown to mathematically map to yes or no therefore the inability of
>> a Turing Machine to solve the halting problem is merely the inability to
>> correctly answer a yes or no question that has no correct yes or no answer.
>>
> Perhaps you've been over this before in your lengthy discussions, but
> let me see if I have this straight.
>
> You are saying that the question, "Does machine M halt on input I?"
> may, for certain M and I, be impossible to answer either yes or no?
>
> If it doesn't either halt or not halt on input I, what exactly does it
> do?

The input to H either halts or does not halt, depending upon how H is 
defined.

The Halting Problem is based on an ill-formed question because neither 
of the possible answers that H can provide (by transitioning to its own 
accept or reject state) mathematically maps to whether or not the input 
to H halts on its input.

> Also, before this continues too long:  It sounds like you are quite
> confident that you're right about this.  I am quite confident that you
> are not.  What would it take to convince you that you are wrong?
I will carefully examine every line-of-reasoning that attempts to show 
that my reasoning is incorrect. I will point out any errors that I find.

> For my part, I will be satisfied and agree that Turing's result is
> somehow wrong if you can implement an algorithm that solves the
> halting problem.  If you are going to say that the Halting problem is
> ill-formed, then in order for me to agree with this, I would need to
> see an example of a machine that both fails to halt and fails to not
> halt.  (Good luck.)

0
Peter
6/10/2012 3:32:41 AM
On 6/9/2012 8:39 PM, Joshua Cranmer wrote:
> On 6/9/2012 6:58 PM, cplxphil wrote:
>> Perhaps you've been over this before in your lengthy discussions, but
>> let me see if I have this straight.
>>
>> You are saying that the question, "Does machine M halt on input I?"
>> may, for certain M and I, be impossible to answer either yes or no?
>
> I think Peter believes that this is not the proper way to phrase the 
> question really being asked. Although, when he was pushed into a 
> corner, I think there was a tacit admission that the answer of such a 
> question depends on who you are asking it of.
>

When the question is asked of the entire set of every potential halt 
decider, and every element of this set is limited to providing its 
answer by transitioning to one of its own final states of {accept, 
reject}, then this question derives a yes or no question that lacks a 
correct yes or no answer and is thereby ill-formed.

Not<ThereExists>
<ElementOfSet>
   FinalStatesOf_H
<MathematicallyMapsTo>
     Halts(M, H, input)

Where M is defined as
---------------------
M(String H, String input):
if H(input, H, input) loop
else halt
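The pseudocode above can be played out concretely. Below is a minimal Python sketch (all names are mine; a Python function stands in for a Turing Machine, purely for illustration): for any candidate decider H, the machine M built from H does the opposite of whatever H predicts about M run on its own description, so the candidate's verdict is wrong on that one input.

```python
# A minimal sketch of the diagonal construction above (illustrative only;
# a Python function stands in for a TM, and the candidates are stubs).

def make_M(candidate_H):
    """Build M from a candidate halt decider, per the pseudocode above."""
    def M(arg):
        if candidate_H(M, arg):   # candidate predicts "M halts on arg"
            while True:           # ...so M loops forever instead
                pass
        return "halted"           # candidate predicts "loops", so M halts
    return M

def prediction_is_wrong(candidate_H):
    """Check that the candidate's verdict on (M, M) contradicts M's behaviour."""
    M = make_M(candidate_H)
    if candidate_H(M, M):
        # Candidate says M(M) halts; by construction M(M) would loop
        # forever, so the verdict is wrong (we do not actually run it).
        return True
    # Candidate says M(M) loops; running it shows it halts at once.
    return M(M) == "halted"

# Two arbitrary candidates (stand-ins; no real decider exists):
always_yes = lambda machine, arg: True
always_no = lambda machine, arg: False

print(prediction_is_wrong(always_yes), prediction_is_wrong(always_no))
```

Whichever boolean the candidate returns for (M, M), it disagrees with M's actual behaviour.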

>> Also, before this continues too long:  It sounds like you are quite
>> confident that you're right about this.  I am quite confident that you
>> are not.  What would it take to convince you that you are wrong?
>
> Again, I think the answer is that he has no issue with proof, per se. 
> His umbrage is with the interpretation of the result: he believes that 
> it is possible to make a Halt decider that is "essentially" correct 
> but fails in some cases which are "necessary" (use of quotation marks 
> to indicate that the terms contained within are slippery and 
> ill-defined). Lots of people have attempted to illustrate several 
> alternate derivations to show that the incorrectness is not so 
> well-contained, but they have all been ignored because either:
> a) it happens to fall under the "necessary" failures,
> b) it relies on a similar "incorrect" result (the uncountability of 
> real numbers and Godel's incompleteness theorem have also been 
> explicitly cited as incorrect proofs due to being similar [1]), or
> c) he doesn't understand it, so it has to be either case a or case b 
> because he is OBVIOUSLY right.
>
> Trying to come up with a simple, alternate proof of the Halting 
> problem that is understandable and skirts any other proofs that have 
> anything smacking of diagonalization or self-reference is indeed hard, 
> but I doubt he'd accept anything less. I also doubt that he'd accept 
> even that much, though...
>
> [1] This just makes it seem to me that he has a really hard time 
> accepting that a proof by contradiction is indeed a valid proof.
>

0
Peter
6/10/2012 3:43:02 AM
On Fri, 8 Jun 2012, Peter Olcott wrote:

> If a yes or no question does not have a correct yes or no answer then there
> must be something wrong with this question.
> 
Nonsense.  "Guilty or not guilty?" and the trial ends with a hung jury.

> More generally:
> An ill-formed question is defined as any question that lacks a correct answer
> from the set of all possible answers.

Therefore don't ask any research or philosophical questions that can't be 
answered or any question with a statistical answer such as "maybe".

0
marsh9355 (9)
6/10/2012 4:20:56 AM
On Jun 8, 9:43 pm, Peter Olcott <OCR4Screen> wrote:
> If a yes or no question does not have a correct yes or no answer then
> there must be something wrong with this question.

But if it is a yes or no question then you CANNOT SAY that the answer
has to come from some set other than the set {yes,no}.
Which you HAVE been doing routinely.  You have been asking a question
whose answer has to be a number and then
claiming that it is ill-formed because you have (impossibly) added a
FURTHER (non-existent) constraint that it ALSO be
a color.  Or something.


> More generally:
> An ill-formed question is defined as any question that lacks a correct
> answer from the set of all possible answers.

The set of possible answers is determined by the TYPE of the relevant
interrogative term in the question, NOT by some OTHER set that you
(impossibly) try to tack on afterwards as a constraint.


>
> The *only* reason that the self reference form of the Halting Problem
> can not be solved is that neither of the two possible final states of
> any potential halt decider TM corresponds to whether or not its input TM
> will halt on its input.

This is NOT the case.
You canNOT ask a SUBTRACTION question of an ADDITION tm.  The KIND of
question that can be posed to a tm DEPENDS ON THE KIND OF TM it is.
It depends on what the TM is programmed to do.  If the TM is NOT A
HALTS TM then it CANNOT BE ASKED  a halting question.
And since NO tm is a Halts TM, NO tm can be asked a halting question
IN THE GENERAL CASE.   Now, of course, there are plenty
of TMs that correctly decide halting questions for some simplified
subdomain of all TMs.  But there simply isn't one in the general case.
Posing a question to a subdomain/halting TM, when the input IS OUTSIDE
THE SUBDOMAIN over which the TM is known/programmed/specified to
answer halting questions, might or might not be "ill-formed" but the
point is, the TM was NOT DESIGNED OR PROGRAMMED to answer questions
outside its subdomain AT ALL,
so whatEVER happens, that happening (as an output) CANNOT BE
"incorrect".
In point of actual fact, NO behavior OF ANY tm IS EVER incorrect, nor
CAN it be incorrect.



> In other words for potential
> halt decider H and input M:

There is no such thing as a potential halt decider.
No machine IS EVER a halt decider in the general case, NOT EVEN
POTENTIALLY.
The fact that you WANT a machine to be a halt decider IS NEVER ANY
SORT OF CONSTRAINT whatsoever on the behavior of the machine,
"correct" or otherwise.
0
greeneg9613 (188)
6/10/2012 4:46:02 AM
On Jun 9, 10:41 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> Misrepresenting the result of an invocation of a "potential halt
> decider" as being a question about whether a machine *actually* halts
> or not is the error at core of all of the recent nonsense.

Of course it is.  You can ONLY pose  a "does it halt?" question TO a
Halts TM.
Since no such thing as a Halts TM exists, no matter WHAT TM you are
invoking,
you are NEVER ASKING THAT question.  Unless, of course, you have
specifically
chosen to limit the inputs to some finite level of complexity and
length of input.
Then the whole problem is finitary and of course you can write a TM
that correctly
answers all the halting questions IN ITS SIMPLIFIED SUBDOMAIN.
This will of course have exactly ZERO relevance to "the" halting
problem since "the"
halting problem was the problem of designing a TM that gave the
correct answer for ALL TMs
(including itself) on ALL inputs.
0
greeneg9613 (188)
6/10/2012 4:49:02 AM
> "Ben Bacarisse" <ben.use...@bsb.me.uk> wrote in message
> > No, it would be read as "What does the machine H say when invoked with
> > input (M, H, M)?".

On Jun 9, 2:28 pm, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
> Since you can not possibly construct the Halting Problem from the above
> question,

You're LYING, Peter.  The Halting Problem is constructed IN ENGLISH.
It is "can you construct a TM that tells whether a TM halts on an
input?"
It does not need to be "constructed from" ANY question.  The question
that Ben
has posed,

> > "What does the machine H say when invoked with
> > input (M, H, M)?".

is AN INSTANCE of the Halting problem, NOT something from which anyone
could ever even need TO TRY to "construct" the Halting problem.
The halting problem is "constructed" JUST BY ASKING THE QUESTION.  The
problem is to determine whether something does or doesn't exist.
The only relevant construction would be the construction of a Halts
TM.   Since the mere EXISTENCE of such a TM would entail a
contradiction,
there is CERTAINLY no point in anybody's bothering to try to CONSTRUCT
one.  As for constructing the problem, I repeat, MERELY ASKING THE
QUESTION *is* constructing the PROBLEM; the problem IS a question -- a
question about the [non-]existence of a TM having certain properties.
No TM has that collection of properties.
0
greeneg9613 (188)
6/10/2012 4:53:30 AM
On Jun 9, 11:43 pm, Peter Olcott <OCR4Screen> wrote:
> When the question is asked of the entire set of every potential halt
> decider

THAT would be NEVER.
THERE ARE NO Potential Halt Deciders.
From your standpoint, EVERY LAST TM ON EARTH would be a potential halt
decider.
From everybody else's, NO TM is a potential halt decider because THEY
ALL FAIL.
They don't just potentially fail, they factually fail.   And NOT A ONE
of them has ANY POTENTIAL WHATSOEVER
for EVER becoming a halt decider.
0
greeneg9613 (188)
6/10/2012 4:55:03 AM
On 6/9/2012 11:32 PM, Peter Olcott wrote:
> On 6/9/2012 5:58 PM, cplxphil wrote:
>> Also, before this continues too long:  It sounds like you are quite
>> confident that you're right about this.  I am quite confident that you
>> are not.  What would it take to convince you that you are wrong?
> I will carefully examine every line-of-reasoning that attempts to show
> that my reasoning is incorrect. I will point out any errors that I find.

So, in other words, it is impossible to convince you that you are wrong.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth


0
Pidgeot18 (1520)
6/10/2012 12:21:43 PM
"Joshua Cranmer" <Pidgeot18@verizon.invalid> wrote in message 
news:jr23h2$hil$1@dont-email.me...
> On 6/9/2012 11:32 PM, Peter Olcott wrote:
>> On 6/9/2012 5:58 PM, cplxphil wrote:
>>> Also, before this continues too long:  It sounds like you are quite
>>> confident that you're right about this.  I am quite confident that you
>>> are not.  What would it take to convince you that you are wrong?
>> I will carefully examine every line-of-reasoning that attempts to show
>> that my reasoning is incorrect. I will point out any errors that I find.
>
> So, in other words, it is impossible to convince you that you are wrong.

It would be impossible to convince me that I am wrong if I am *not* wrong, 
otherwise it is possible.
So far I have not seen anything resembling sound reasoning that correctly 
refutes my true position.
I have made very many errors in presenting this position, and from what I 
can tell most of these have been corrected.

Since my goal is to provide this reasoning using English that is 100% 
completely mathematically precisely correct, George may have pointed out a 
recent error. It may have been incorrect for me to use the term "potential 
halt decider".
When I used this term I was referring to any Turing Machine that attempts to 
be a halt decider, even though none could ever actually achieve this. Since 
none could ever actually achieve this, no actual potential exists.

Here is a possibly better statement of my position:

The reason why the self reference form of the Halting Problem can not be 
solved is:
1) The invocation of every Turing Machine that attempts to be a halt decider 
mathematically maps to a yes or no question (within a natural language 
interface).

2) The yes or no answer to this question that this set of Turing Machines 
can possibly provide (within a natural language interface) mathematically 
maps to its own final states of accept and reject, thus deriving the entire 
solution set of every possible answer.

3) Neither of these yes or no (accept or reject) answers is correct 
(mathematically maps to whether or not the input TM will halt on its input).

4) Therefore the reason that the self reference form of the Halting Problem 
can not be solved is that this problem is based on providing a yes or no 
answer to a question that has no correct yes or no answer.

Any attempted refutation should provide reasoning that refutes the above 
points individually. Stating that the above reasoning is nonsense merely 
indicates a failure of the respondent to comprehend and nothing more.
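Point 3 can be checked mechanically. The following is a brute-force sketch (illustrative only; the function and variable names are mine, and "M does the opposite" encodes the diagonal construction from earlier in the thread): enumerate both possible answers H could give about (M, M) and test whether either is correct.

```python
# Point 3 as a brute-force check (a sketch, not anyone's real decider).
# By construction M halts iff H answers "no", so we enumerate both
# possible answers H could give about (M, M) and test each for correctness.

def correct_answers():
    results = []
    for h_answer in (True, False):   # True = accept ("halts"), False = reject
        m_halts = not h_answer       # M is built to do the opposite
        results.append(h_answer == m_halts)
    return results

print(correct_answers())  # neither possible answer is correct
```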

>
> -- 
> Beware of bugs in the above code; I have only proved it correct, not tried 
> it. -- Donald E. Knuth
>
> 


0
NoSpam271 (937)
6/10/2012 2:21:13 PM
On Jun 10, 12:08 am, "|-| E R C" <herc.of.z...@gmail.com> wrote:
> On Jun 10, 9:50 am, N <n.m.ke...@hotmail.co.uk> wrote:
>
> > On Jun 9, 1:55 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> > > A sample STRIPS problem
>
> > > A monkey is at location A in a lab.
> > > There is a box in location C.
> > > The monkey wants the bananas that are hanging from the ceiling in
> > > location B,
> > > but it needs to move the box and climb onto it in order to reach them.
>
> > > Initial state: At(A), Level(low), BoxAt(C), BananasAt(B)
> > > Goal state:    Have(Bananas)
>
> > > Actions:
> > >         // move from X to Y
> > >         _Move(X, Y)_
> > >         Preconditions:  At(X), Level(low)
> > >         Postconditions: not At(X), At(Y)
>
> > >         // climb up on the box
> > >         _ClimbUp(Location)_
> > >         Preconditions:  At(Location), BoxAt(Location), Level(low)
> > >         Postconditions: Level(high), not Level(low)
>
> > >         // climb down from the box
> > >         _ClimbDown(Location)_
> > >         Preconditions:  At(Location), BoxAt(Location), Level(high)
> > >         Postconditions: Level(low), not Level(high)
>
> > >         // move monkey and box from X to Y
> > >         _MoveBox(X, Y)_
> > >         Preconditions:  At(X), BoxAt(X), Level(low)
> > >         Postconditions: BoxAt(Y), not BoxAt(X), At(Y), not At(X)
>
> > >         // take the bananas
> > >         _TakeBananas(Location)_
> > >         Preconditions:  At(Location), BananasAt(Location), Level(high)
> > >         Postconditions: Have(bananas)
>
> > > The Halt Proof conclusion definitely says NOTHING about a Halt
> > > Function that works exclusively on a directed acyclic graph of
> > > program references within the Halt parameters.
>
> > > e.g.
> > > PROGRAM1 ... HALT(program2) ...
> > > PROGRAM2 ... HALT(program3) ...
> > > ...
> > > PROGRAMn-1 ... HALT(programn) ...
> > > PROGRAMn ... HALT(program1)
>
> > > This CYCLE is easy to avoid in languages such as ZFC, using the
> > > Axiom Of Regularity.
>
> > > Herc
>
> > Hi!
>
> > yes and who's climbing with those bananas too
>
> GOAL: LIST X : CLIMBING (X, BANANAS)
>
> _CLIMBING( CLIMBER, WITHOBJECT )_
>
>   PRECONDITION:  LEVEL(HIGH), HAS(CLIMBER, WITHOBJECT)
>   POSTCONDITION: LEVEL(HIGHER), HIGHER>HIGH, HAS(CLIMBER, WITHOBJECT)
>
> I imagine STRIPS is used in WINOGRAD's Natural Language Engine, so
> worth a look!
>
> Computer: I'M NOT SURE WHAT YOU MEAN BY "ON TOP OF"
> IN THE PHRASE "ON TOP OF GREEN CUBES".
> DO YOU MEAN:
> 1 - DIRECTLY ON THE SURFACE
> 2 - ANYWHERE ON TOP OF?
> Person: 2
> Computer: THREE OF THEM.
> Person: HAD YOU TOUCHED ANY PYRAMID BEFORE YOU PUT THE GREEN ONE ON
> THE LITTLE CUBE?
> Computer: YES, THE GREEN ONE.
>
> Person: WHEN DID YOU PICK IT UP?
> Computer: WHILE I WAS STACKING UP THE RED CUBE, A LARGE RED BLOCK AND
> A LARGE GREEN CUBE.
>
> Person: WHY?
> Computer: TO GET RID OF IT.
> Person: WHY DID YOU DO THAT?
> Computer: TO CLEAN OFF THE RED CUBE.
> Person: WHY DID YOU CLEAR OFF THAT CUBE?
> Computer: TO PUT IT ON A LARGE GREEN CUBE.
> Person: WHY DID YOU DO THAT?
> Computer: TO STACK UP THE RED CUBE , A LARGE RED BLOCK AND A LARGE
> GREEN CUBE.
> Person: WHY DID YOU DO THAT?
> Computer: BECAUSE YOU ASKED ME TO.
>
> Person: HOW DID YOU DO IT?
> Computer: BY PUTTING A..
>
> HUMAN: WHY?
> COMPUTER: BECAUSE YOU ASKED ME TO!!
>
> Herc

I was trying to come up with some creative short movie ideas, so I was on
the lookout for a random word generator. It would be terrific to use
that program together with a random word generator sourcing from a web
search engine... I expect someone here has already done that, though!
The results could be fairly abstract but entirely informative.
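For reference, the STRIPS actions quoted earlier in this post can be mechanised in a few lines. Below is a sketch (the applier and the plan are mine; only the fact and action names come from the quoted post) that applies the four-step monkey-and-bananas plan.

```python
# A minimal STRIPS-style applier for the monkey-and-bananas actions quoted
# above (illustrative; fact strings follow the post, the code is a sketch).

def apply_action(state, pre, add, delete):
    """Apply an action if its preconditions hold; return the new state."""
    if not pre <= state:
        raise ValueError("preconditions not satisfied")
    return (state - delete) | add

state = {"At(A)", "Level(low)", "BoxAt(C)", "BananasAt(B)"}

# Plan: Move(A, C), MoveBox(C, B), ClimbUp(B), TakeBananas(B)
state = apply_action(state, {"At(A)", "Level(low)"},
                     {"At(C)"}, {"At(A)"})
state = apply_action(state, {"At(C)", "BoxAt(C)", "Level(low)"},
                     {"At(B)", "BoxAt(B)"}, {"At(C)", "BoxAt(C)"})
state = apply_action(state, {"At(B)", "BoxAt(B)", "Level(low)"},
                     {"Level(high)"}, {"Level(low)"})
state = apply_action(state, {"At(B)", "BananasAt(B)", "Level(high)"},
                     {"Have(Bananas)"}, set())

print("Have(Bananas)" in state)  # True: goal reached
```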
0
n.m.keele (172)
6/10/2012 5:40:59 PM
On Jun 10, 10:21 am, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
> Here is a possibly better statement of my position:
>
> The reason why the self reference form of the Halting Problem can not be
> solved is:

Well, THERE'S YOUR TROUBLE.  There IS NO SUCH THING AS "the self-
reference form of" the halting problem.
THE halting problem is called THE halting problem BECAUSE IT IS
UNIQUE.  It  ONLY HAS ONE form!  It is in the form of AN EXISTENCE
question.  It is in the form, "DOES THERE EXIST A TM" with a certain
property, namely, the property that, when it interprets its finite
input string as a finite specification or finite code-string for a TM,
AND (concatenated with) a finite input string for that TM, DOES THIS
(existent) TM ALWAYS RETURN The truth value of "the TM specified by
the code-string halts on the input-string that is the rest of the
input".  Well, does it?  CAN there even exist such a TM?  THAT IS THE
ONLY form of THE halting problem.
0
greeneg9613 (188)
6/10/2012 10:59:55 PM
On Jun 11, 8:59 am, George Greene <gree...@email.unc.edu> wrote:
> On Jun 10, 10:21 am, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
>
> > Here is a possibly better statement of my position:
>
> > The reason why the self reference form of the Halting Problem can not be
> > solved is:
>
> Well, THERE'S YOUR TROUBLE.  There IS NO SUCH THING AS "the self-
> reference form of" the halting problem.
> THE halting problem is called THE halting problem BECAUSE IT IS
> UNIQUE.  It ONLY HAS ONE form!  It is in the form of AN EXISTENCE
> question.  It is in the form, "DOES THERE EXIST A TM" with a certain
> property, namely, the property that, when it interprets its finite
> input string as a finite specification or finite code-string for a TM,
> AND (concatenated with) a finite input string for that TM, DOES THIS
> (existent) TM ALWAYS RETURN The truth value of "the TM specified by
> the code-string halts on the input-string that is the rest of the
> input".  Well, does it?  CAN there even exist such a TM?  THAT IS THE
> ONLY form of THE halting problem.


Right!  and the method to show it cannot exist is the self reference
case.

Herc

--
P: If Halts(P) Then Loop Else Halt.
is obviously a paradoxical program if Halts() exists.

BUT IF IT WEREN'T NAMED P then it might not be:
Q: If Halts(P) Then Loop Else Halt.
is NOT paradoxical.
~ GEORGE GREEN (sci.logic)
0
6/11/2012 12:14:21 AM
On Jun 10, 10:21 am, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
> "Joshua Cranmer" <Pidgeo...@verizon.invalid> wrote in message
>
> news:jr23h2$hil$1@dont-email.me...
>
> > On 6/9/2012 11:32 PM, Peter Olcott wrote:
> >> On 6/9/2012 5:58 PM, cplxphil wrote:
> >>> Also, before this continues too long:  It sounds like you are quite
> >>> confident that you're right about this.  I am quite confident that you
> >>> are not.  What would it take to convince you that you are wrong?
> >> I will carefully examine every line-of-reasoning that attempts to show
> >> that my reasoning is incorrect. I will point out any errors that I find.
>
> > So, in other words, it is impossible to convince you that you are wrong.
>
> It would be impossible to convince me that I am wrong if I am *not* wrong,
> otherwise it is possible.
> So far I have not seen anything resembling sound reasoning that correctly
> refutes my true position.
> I have made very many errors in presenting this position, and from what I
> can tell most of these have been corrected.
>
> Since my goal is to provide this reasoning using English that is 100%
> completely mathematically precisely correct, George may have pointed out a
> recent error. It may have been incorrect for me to use the term "potential
> halt decider".
> When I used this term I was referring to any Turing Machine that attempts to
> be a halt decider, even though none could ever actually achieve this. Since
> none could ever actually achieve this, no actual potential exists.
>
> Here is a possibly better statement of my position:
>
> The reason why the self reference form of the Halting Problem can not be
> solved is:
> 1) The invocation of every Turing Machine that attempts to be a halt decider
> mathematically maps to a yes or no question (within a natural language
> interface).
>
> 2) The yes or no answer to this question that this set of Turing Machines
> can possibly provide (within a natural language interface) mathematically
> maps to its own final states of accept and reject, thus deriving the entire
> solution set of every possible answer.
>
> 3) Neither of these yes or no (accept or reject) answers is correct
> (mathematically maps to whether or not the input TM will halt on its input).
>
> 4) Therefore the reason that the self reference form of the Halting Problem
> can not be solved is that this problem is based on providing a yes or no
> answer to a question that has no correct yes or no answer.
>
> Any attempted refutation should provide reasoning that refutes the above
> points individually. Stating that the above reasoning is nonsense merely
> indicates a failure of the respondent to comprehend and nothing more.
>
> > --
> > Beware of bugs in the above code; I have only proved it correct, not tried
> > it. -- Donald E. Knuth

I think I see what you're getting at.

Let's see if I can translate (into terms I'm comfortable with) and
then refute your 4 points:

1 > A Turing machine that could solve the halting problem must map to
a yes or no question.
2 > Such a Turing machine can only ever accept or reject.
3 > Neither answer to a "yes or no" halting question is correct.
(justification?)
4 > The halting problem has no answer.

Although I don't understand #3, the problem is with #1.  You suggest,
"a halt decider...maps to a yes or no question."  Yes, but only if it
works!  Every "potential halt decider" we can develop *fails to halt*
when given a sufficiently problematic instance of the halting
problem.  You are essentially starting out your argument by assuming
that the halting problem can be decided properly, which it cannot.

You are saying that every Turing machine must, via a natural language
interface (which--by the way--technically has no place in a
mathematical argument), map to a natural language question.  Wrong!
The mathematical *question* that you discuss must map to a yes or no
question, but a Turing machine is a computer program and a
mathematical object--and thus is not governed by your beliefs about
natural language.

The irony of your argument is that a modification of it actually
suggests that Turing's result is correct.  In fact, if Turing's result
were wrong, you would be correct.  Logically speaking, if you could
establish something as logically contradictory as the actual existence
of a  "potential halt decider," you would be able to prove anything--
even that the moon is made of cheese.

Moons made of cheese are neither here nor there, but in summary, your
fallacy is the claim that the *Turing machine* must map to a yes or no
question.  A Turing machine is free to do whatever it wants, including
never halt, even if you would like it to be a potential halt decider.

Ironically, your step #1 represents a contradiction that Turing
exploits in his proof.  You are assuming that the halt decider exists,
and that it maps to a yes or no question.  That is precisely what
Turing assumes--and using that false assumption and some self-
reference tricks, he produces precisely the result that you claim
doesn't make sense.

I've been a bit repetitive above, mainly for emphasis (and out of
laziness), but I hope I've made my point.  One last time:  A Turing
machine does not map to a yes or no question, it maps to "yes, no,
never halt."
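That three-way behaviour ("yes, no, never halt") can be illustrated with a step-bounded simulator, sketched below (all names are mine, and Python generators stand in for machines): it can confirm "halts" definitively, but in the other direction it can only ever report "unknown", never a definitive "loops".

```python
# A step-bounded partial decider (a sketch; generators model machines).
# It demonstrates the asymmetry: halting is observable in finite time,
# non-halting is not.

def run_bounded(program, arg, max_steps=1000):
    gen = program(arg)               # a machine modelled as a generator
    for _ in range(max_steps):
        try:
            next(gen)                # execute one "step"
        except StopIteration:
            return "halts"           # definitive: the machine stopped
    return "unknown"                 # could halt later, or loop forever

def halts_fast(_):
    yield                            # one step, then stop

def loops_forever(_):
    while True:
        yield

print(run_bounded(halts_fast, None), run_bounded(loops_forever, None))
```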

I have a sinking feeling you won't be convinced....
0
cplxphil (227)
6/11/2012 12:26:52 AM
On Jun 10, 10:21 am, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
> It would be impossible to convince me that I am wrong if I am *not* wrong,
> otherwise it is possible.

Liar.

> So far I have not seen anything resembling sound reasoning that correctly
> refutes my true position.

Liar.  You have seen sound reasoning showing you that there cannot
exist ANY h that bears ANY binary relation R to all and only those x's
that do not bear R to themselves.  You have seen that there is nothing
ill-formed in that reasoning.  Yet you persist in continuing to think
that there is something ill-formed about non-
existence claims following from this fact.  If there were a TM that
could decide halting in the general case, then you have seen sound
reasoning showing that there
would then LOGICALLY, NECESSARILY, ALSO exist a TM halting on all and
only those TMs that halted on their OWN code-strings as input-
strings.  Since the mere existence of THAT tm is a logical
impossibility, sound reasoning implies that the existence of a halt-
deciding TM is also a logical impossibility. Yet you persist in
claiming (falsely) that there is some sort of "question" in this
chain of reasoning (there is not) and that the question is "ill-
formed".  Well, to the extent that the question is about something
that cannot exist, maybe THAT question is ill-formed, but the PRIOR
question of WHETHER this thing (a halts TM) exists IS NOT ill-formed;
the answer is just clearly, logically, "No, it doesn't exist".
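The relational form of this argument (no h can bear R to all and only those x that do not bear R to themselves) can be verified exhaustively for small finite sets. The sketch below (function names are mine, purely illustrative) brute-forces every binary relation on a tiny domain and confirms no such h exists.

```python
# Brute-force check of the diagonal lemma on a small finite set: for
# EVERY binary relation R on {0..n-1}, no element h satisfies
#     R(h, x) <=> not R(x, x)   for all x
# (taking x = h already forces R(h, h) == not R(h, h), a contradiction).
from itertools import product

def diagonal_element_exists(n):
    """Does any relation on {0..n-1} have an h with R(h,x) <=> not R(x,x)?"""
    elems = range(n)
    pairs = [(a, b) for a in elems for b in elems]
    for bits in product([False, True], repeat=len(pairs)):
        R = dict(zip(pairs, bits))          # one candidate relation
        if any(all(R[(h, x)] == (not R[(x, x)]) for x in elems)
               for h in elems):
            return True
    return False

print(diagonal_element_exists(2))  # False: no such h for any relation
```

The same contradiction at x = h is what rules out a halt decider H that decides, in particular, which machines halt on their own descriptions.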
0
greeneg9613 (188)
6/11/2012 12:48:52 AM
On Jun 11, 10:26 am, cplxphil <cplxp...@gmail.com> wrote:
>
> 1 > A Turing machine that could solve the halting problem must map to
> a yes or no question.
> 2 > Such a Turing machine can only ever accept or reject.
> 3 > Neither answer to a "yes or no" halting question is correct.
> (justification?)

SEE
www.tinyurl.com/BLUEPRINTS-HALT

fot the 3rd option

Herc
0
6/11/2012 12:49:02 AM
On Jun 10, 8:14 pm, Graham Cooper <grahamcoop...@gmail.com> wrote:
> Right!  and the method to show it cannot exist is the self reference case.

This method is not unique.  There is no such thing as "the" method
here.
This is just the most obvious method.  If something in fact cannot
exist then
the necessarily&provably- false assumption that it does, like ANY
provably false assumption,
NECESSARILY BEGETS MYRIAD absurdities as consequences.  Other proofs
are possible
but as yet we have no motivation to go there because P.O. refuses to
stop lying about this one.

>
> Herc
>
> --
> P: If Halts(P) Then Loop Else Halt.
> is obviously a paradoxical program if Halts() exists.
>
> BUT IF IT WEREN'T NAMED P then it might not be:
> Q: If Halts(P) Then Loop Else Halt.
> is NOT paradoxical.


Well, it's not paradoxical as long as Q has a different output from P
on SOME input-string.


0
greeneg9613 (188)
6/11/2012 12:51:59 AM
On 6/10/2012 10:21 AM, Peter Olcott wrote:
> It would be impossible to convince me that I am wrong if I am *not* wrong,
> otherwise it is possible.

I can't think of a nice sugar-coated way to say this, so I'll be blunt: 
you're taking an egotistical approach to proofs, in that you are by 
default infallible until something is proved wrong. This is not a good 
approach to theorem proving; instead, you should attempt to attack your 
own proofs as much as possible and wait to see if you still see it 
holding after a barrage.

> So far I have not seen anything resembling sound reasoning that correctly
> refutes my true position.

I would posit that this is because you cannot see sound reasoning in the 
first place :-P.

> I have made very many errors in presenting this position, and from what I
> can tell most of these have been corrected.

Er... no. You're still exuding extremely elementary errors.

> Since my goal is to provide this reasoning using English that is 100%
> completely mathematically precisely correct, George may have pointed out a
> recent error. It may have been incorrect for me to use the term "potential
> halt decider".

Starting with the biggest one of all: your proofs are in effect using 
English terminology to anthropomorphize Turing machines. This results in 
all of your proof attempts amounting to extremely vague things which are 
somewhere between vacuously true and hopelessly wrong, since your 
interpretation colors what you think it actually means. Add in your 
nondesire to consider the notion that your proofs are actually rubbish, 
and you result in non sequiturs that can't be argued against since no 
one has any clue what you are saying.

> 1) The invocation of every Turing Machine that attempts to be a halt decider
> mathematically maps to a yes or no question (within a natural language
> interface).

One line into your proof and you've already shot your argument in the 
foot. What does it mean for a Turing machine to "attempt to be" 
something? This isn't a small nitpick: it completely determines the 
class of machines you are discussing, which has a major impact on 
argumentation against your proof.

It's also the kind of language which has absolutely no business being in 
anything that resembles serious scholarly writing, or even armchair 
introductions. No, it's the kind of language which is only suited for 
things like TRON or ReBoot, where machines are actually anthropomorphized.

Your parenthetical note is also ambiguous, since you never clarify what 
a "natural language interface" is.

> 2) The yes or no answer to this question that this set of Turing Machines
> can possibly provide (within a natural language interface) mathematically
> maps to its own final states of accept and reject, thus deriving the entire
> solution set of every possible answer.

The above note about ambiguous parenthetical notes remains true here as 
well. Otherwise, everything up to the comma is more or less correct, but 
it's missing something. What you have is not a surjective mapping, so 
what is meant by a Turing machine that fails to halt is not clear. This 
makes a logical interpretation of the phrase following the comma 
incorrect. Of course, that phrase has several possible logical 
interpretations, so you're sure as hell not being precise.

> 3) Neither of these yes or no (accept or reject) answers is correct
> (mathematically maps to whether or not the input TM will halt on its input).

And this is where you fall down. Let me list the sins in no particular 
order:
1. Going straight from generalization to specialization without 
describing context.
2. Stating the core of your argument without proof
3. Eschewing definitions.
4. Why? You never provide anything that smacks of answer to "why?"

> 4) Therefore the reason that the self reference form of the Halting Problem
> can not be solved is that this problem is based on providing a yes or no
> answer to a question that has no correct yes or no answer.

Attacking this individually is pointless, since you fall apart so badly 
at #3 that all the problems here are a continuation of the confusion 
resulting from the previous step.

> Any attempted refutation should provide reasoning that refutes the above
> points individually. Stating that the above reasoning is nonsense merely
> indicates a failure of the respondent to comprehend and nothing more.

Sometimes, the failure of a listener to understand isn't because the 
listener can't comprehend but because the speaker can't explain. And 
when you consider that no fewer than a dozen people have attempted to 
understand your proofs and come out feeling that they are nonsense, a 
basic application of Occam's Razor suggests that maybe the problem isn't 
that everyone else in this channel is an idiot who can't understand your 
ethereal glory but that you are someone who can't explain yourself 
to everybody else.

Let me put it in another, simpler way that I hope you will understand. 
Your logical argument is roughly equivalent to the following:

1. <Insert contemporary dictator here> is evil [Fact stated as "obvious" 
which actually relies on a mixture of subjectivity and the particular 
definition of a word]
2. Evil people should never be in power, so <insert contemporary 
dictator here> shouldn't have been in power.
3. Democracy did it. [Wait what? Where did this come from?]
4. Therefore, democracy is a bad form of government.

That's kind of the feeling I get when I try to understand your proofs. 
You make extraordinary claims and never give any strong justification 
for them--extraordinary claims require extraordinary evidence. There was 
one ... personality I knew who rejected any argument against his proofs 
unless it was 100% completely watertight and explicit; any sketch of a 
counterexample wouldn't be accepted. Yet you go further and reject any 
argument that you can't understand.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
6/11/2012 3:08:12 AM
Joshua Cranmer <Pidgeot18@verizon.invalid> writes:

> On 6/10/2012 10:21 AM, Peter Olcott wrote:
>> It would be impossible to convince me that I am wrong if I am *not* wrong,
>> otherwise it is possible.
>
> I can't think of a nice sugar-coated way to say this, so I'll be
> blunt: you're taking an egotistical approach to proofs, in that you
> are by default infallible until something is proved wrong.

And long after!  I, too, think being blunt is worthwhile here.  Peter
has no interest in persuading anyone (how could the agreement of someone
with inferior reasoning skill be of any benefit to him?).  What he needs
is to be engaged with -- to be part of an academic community that
discusses deep matters.  Of course, his points are rather trivial, but
we can't refute them (in his mind) so they must be deep.  It's a form of
"teach the controversy" for halting -- there is controversy so long as
the person making the silly claims says that there is.

I have no problem pointing out errors, but I've long ago decided that
debate is either impossible or pointless.

<snip>
>> I have made very many errors in presenting this position, and from what I
>> can tell most of these have been corrected.
>
> Er... no. You're still exuding extremely elementary errors.

I went to Google groups to try to find the start of this long series of
threads to see how far things had moved on, and I concluded that they
had not.  I was a little surprised that I did not remember the exact
details but all the elements were there -- it's all due to ill-formed
questions, "malignant" self-reference, and so on.

Then I saw the date: 2006.

There are similar posts in 2004, and in 2012 he told us he'd actually
acquired some books about this subject.  Given the history, that seems
like an uncharacteristically humble thing to do.  Maybe the next set of
threads in 2014 will include a formal statement of the problem.

<snip>
-- 
Ben.
0
ben.usenet (6790)
6/11/2012 12:23:17 PM
On Jun 11, 10:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> like an uncharacteristically humble thing to do.  Maybe the next set of
> threads in 2014 will include a formal statement of the problem.
>
> --
> Ben.

This sounds like a good starting block!  Without specifically using
TMs,

LET ITHALTS be a property of all <function, argument> tuples.

HYPOTHESIS 0
THE HALTING PROOF SHOWS THAT THE FUNCTION HALT() IS UNCOMPUTABLE

HYPOTHESIS 1
THE HALTING PROOF SHOWS THAT THE DOMAIN OF THE FUNCTION HALT() IS NOT
ALL <FUNCTION, ARGUMENT> TUPLES.


Discuss, brainstorm, further HYPOTHESES!

Herc
--
http://tinyurl.com/BLUEPRINTS-HALT
0
6/11/2012 8:52:49 PM
On 6/10/2012 7:26 PM, cplxphil wrote:
> On Jun 10, 10:21 am, "Peter Olcott"<NoS...@OCR4Screen.com>  wrote:
>> "Joshua Cranmer"<Pidgeo...@verizon.invalid>  wrote in message
>>
>> news:jr23h2$hil$1@dont-email.me...
>>
>>> On 6/9/2012 11:32 PM, Peter Olcott wrote:
>>>> On 6/9/2012 5:58 PM, cplxphil wrote:
>>>>> Also, before this continues too long:  It sounds like you are quite
>>>>> confident that you're right about this.  I am quite confident that you
>>>>> are not.  What would it take to convince you that you are wrong?
>>>> I will carefully examine every line-of-reasoning that attempts to show
>>>> that my reasoning is incorrect. I will point out any errors that I find.
>>> So, in other words, it is impossible to convince that you are wrong.
>> It would be impossible to convince me that I am wrong if I am *not* wrong,
>> otherwise it is possible.
>> So far I have not seen anything resembling sound reasoning that correctly
>> refutes my true position.
>> I have made very many errors in presenting this position, and from what I
>> can tell most of these have been corrected.
>>
>> Since my goal is to provide this reasoning using English that is 100%
>> completely mathematically precisely correct, George may have pointed out a
>> recent error. It may have been incorrect for me to use the term "potential
>> halt decider".
>> When I used this term I was referring to any Turing Machine that attempts to
>> be a halt decider, even though none could ever actually achieve this. Since
>> none could ever actually achieve this, no actual potential exists.
>>
>> Here is a possibly better statement of my position:
>>
>> The reason why the self reference form of the Halting Problem can not be
>> solved is:
>> 1) The invocation of every Turing Machine that attempts to be a halt decider
>> mathematically maps to a yes or no question (within a natural language
>> interface).
>>
>> 2) The yes or no answer to this question that this set of Turing Machines
>> can possibly provide (within a natural language interface) mathematically
>> maps to its own final states of accept and reject, thus deriving the entire
>> solution set of every possible answer.
>>
>> 3) Neither of these yes or no (accept or reject) answers is correct
>> (mathematically maps to whether or not the input TM will halt on its input).
>>
>> 4) Therefore the reason that the self reference form of the Halting Problem
>> can not be solved is that this problem is based on providing a yes or no
>> answer to a question that has no correct yes or no answer.
>>
>> Any attempted refutation should provide reasoning that refutes the above
>> points individually. Stating that the above reasoning is nonsense merely
>> indicates a failure of the respondent to comprehend and nothing more.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>> --
>>> Beware of bugs in the above code; I have only proved it correct, not tried
>>> it. -- Donald E. Knuth
> I think I see what you're getting at.
>
> Let's see if I can translate (into terms I'm comfortable with) and
> then refute your 4 points:
>
> 1>  A Turing machine that could solve the halting problem must map to
> a yes or no question.

That would be an incorrect paraphrase.

I think that I am beginning to see why people are not understanding me.
There is a completely different point of view between the timing of 
intentions of a software engineer and a mathematician.

Software engineers start with a problem to be solved, and from this 
devise a specification, and then translate this specification into an 
implementation. From this point of view deriving a Turing Machine that 
solves the Halting Problem derives an ill-formed question.

Mathematicians examine the set of all finite length string 
implementations (skipping the first two steps) and find that none of 
these derive a Halting Decider, thus the mathematician never asks the 
ill-formed question.

> 2>  Such a Turing machine can only ever accept or reject.
> 3>  Neither answer to a "yes or no" halting question is correct.
> (justification?)
> 4>  The halting problem has no answer.
>
> Although I don't understand #3, the problem is with #1.  You suggest,
> "a halt decider...maps to a yes or no question."  Yes, but only if it
> works!  Every "potential halt decider" we can develop *fails to halt*
> when given a sufficiently problematic instance of the halting
> problem.  You are essentially starting out your argument by assuming
> that the halting problem can be decided properly, which it cannot.
>
> You are saying that every Turing machine must, via a natural language
> interface (which--by the way--technically has no place in a
> mathematical argument), map to a natural language question.  Wrong!
> The mathematical *question* that you discuss must map to a yes or no
> question, but a Turing machine is a computer program and a
> mathematical object--and thus is not governed by your beliefs about
> natural language.
>
> The irony of your argument is that a modification of it actually
> suggests that Turing's result is correct.  In fact, if Turing's result
> were wrong, you would be correct.  Logically speaking, if you could
> establish something as logically contradictory as the actual existence
> of a  "potential halt decider," you would be able to prove anything--
> even that the moon is made of cheese.
>
> Moons made of cheese are neither here nor there, but in summary, your
> fallacy is the claim that the *Turing machine* must map to a yes or no
> question.  A Turing machine is free to do whatever it wants, including
> never halt, even if you would like it to be a potential halt decider.
>
> Ironically, your step #1 represents a contradiction that Turing
> exploits in his proof.  You are assuming that the halt decider exists,
> and that it maps to a yes or no question.  That is precisely what
> Turing assumes--and using that false assumption and some self-
> reference tricks, he produces precisely the result that you claim
> doesn't make sense.
>
> I've been a bit repetitive above, mainly for emphasis (and out of
> laziness), but I hope I've made my point.  One last time:  A Turing
> machine does not map to a yes or no question, it maps to "yes, no,
> never halt."
>
> I have a sinking feeling you won't be convinced....

0
Peter
6/12/2012 1:04:39 AM
Argh...you say that my paraphrase is incorrect, but then don't try to
supply examples of a better one!

If your prose were precise and clear, it would be reasonable to just
say "oh that's wrong," but given that it clearly isn't, why not take a
little time to improve your exposition?  You are not uncivil, but you
come across as a bit arrogant.  I say this in the spirit of trying to
be constructive, not an effort to be vindictive or criticize you in an
ad hominem fashion.

Why not work on some clearer definitions?  If you have the world's
most brilliant philosophical point in the world, but can't make it
clearly, you might as well be speaking Portuguese to a room full of
non-speakers.  I suspect that if you have something of value to say,
it's disguised in your exposition to the point that you are wasting
your time trying to get anyone to understand it.

If you are going to keep posting to the same group over and over with
the same point, I for one won't try to stop you.  But you would likely
make more progress if you clarified your definitions, phrased your
points as questions rather than groundbreaking discoveries, or sought
guidance from someone who understands your argument on how to make it
clearer.
0
cplxphil (227)
6/12/2012 2:29:15 AM
On Jun 12, 12:29 pm, cplxphil <cplxp...@gmail.com> wrote:
> Argh...you say that my paraphrase is incorrect, but then don't try to
> supply examples of a better one!
>
> If your prose were precise and clear, it would be reasonable to just
> say "oh that's wrong," but given that it clearly isn't, why not take a
> little time to improve your exposition?  You are not uncivil, but you
> come across as a bit arrogant.  I say this in the spirit of trying to
> be constructive, not an effort to be vindictive or criticize you in an
> ad hominem fashion.
>
> Why not work on some clearer definitions?  If you have the world's
> most brilliant philosophical point in the world, but can't make it
> clearly, you might as well be speaking Portuguese to a room full of
> non-speakers.  I suspect that if you have something of value to say,
> it's disguised in your exposition to the point that you are wasting
> your time trying to get anyone to understand it.
>
> If you are going to keep posting to the same group over and over with
> the same point, I for one won't try to stop you.  But you would likely
> make more progress if you clarified your definitions, phrased your
> points as questions rather than groundbreaking discoveries, or sought
> guidance from someone who understands your argument on how to make it
> clearer.

HOW MUCH CLEARER DO YOU NEED IT?

HYPOTHESIS 0
THE HALTING PROOF SHOWS THAT THE FUNCTION HALT() IS UNCOMPUTABLE

HYPOTHESIS 1
THE HALTING PROOF SHOWS THAT THE DOMAIN OF THE FUNCTION HALT() IS NOT
ALL <FUNCTION, ARGUMENT> TUPLES

Peter's proposal is the FINAL STATE TRANSITION of the HALT TM FINISHES
by outputting 1 on the tape for IT-HALTS and 0 for IT-HANGS

This is the DEFINITION OF SCOPE OF THE HALT TM - since it is designed
as a TEST HARNESS.

Feel free to comment on the discussion so far!

Herc
--
P: If Halts(P) Then Loop Else Halt.
is obviously a paradoxical program if Halts() exists.

BUT IF IT WEREN'T NAMED P then it might not be:

Q: If Halts(P) Then Loop Else Halt.
is NOT paradoxical.

~ GEORGE GREEN (sci.logic)

0
6/12/2012 8:17:19 AM
On 6/11/2012 6:04 PM, Peter Olcott wrote:

> I think that I am beginning to see why people are not understanding me.
> There is a completely different point of view between the timing of
> intentions of a software engineer and a mathematician.

This seems to suggest that you think only mathematicians disagree with
you and software engineers would agree. That is totally incorrect.

I am far better educated and more experienced as a software engineer
than as a mathematician.

Nothing in my education or experience as a software engineer supports
arbitrarily assuming an algorithm exists and is correct, and then
calling all the cases it gets wrong "ill-formed questions".

Patricia


0
pats (3556)
6/12/2012 8:18:42 AM
On Jun 12, 6:18 pm, Patricia Shanahan <p...@acm.org> wrote:
>
> Nothing in my education or experience as a software engineer supports
> arbitrarily assuming an algorithm exists and is correct, and then
> calling all the cases it gets wrong "ill-formed questions".
>
> Patricia

If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?

then what would you call that question considering there are cases it
gets wrong?


Herc
0
6/12/2012 8:57:45 AM
"Patricia Shanahan" <pats@acm.org> wrote in message 
news:rrOdnZsg-fr-ZkvSnZ2dnUVZ_v6dnZ2d@earthlink.com...
> On 6/11/2012 6:04 PM, Peter Olcott wrote:
>
>> I think that I am beginning to see why people are not understanding me.
>> There is a completely different point of view between the timing of
>> intentions of a software engineer and a mathematician.
>
> This seems to suggest that you think only mathematicians disagree with
> you and software engineers would agree. That is totally incorrect.
>
> I am far better educated and more experienced as a software engineer
> than as a mathematician.
>
> Nothing in my education or experience as a software engineer supports
> arbitrarily assuming an algorithm exists and is correct, and then
> calling all the cases it gets wrong "ill-formed questions".
>
> Patricia
>
>

Try applying Montague Grammar to the Liar Paradox and see where you get. 


0
NoSpam271 (937)
6/12/2012 9:07:09 AM
In comp.theory Graham Cooper <grahamcooper7@gmail.com> wrote:
> 
> If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?
> 
> then what would you call that question considering there are cases it
> gets wrong?

"Answered." 

-- 
Leif Roar Moldskred
0
leifm1143 (162)
6/12/2012 4:13:00 PM
On 6/12/2012 9:13 AM, Leif Roar Moldskred wrote:
> In comp.theory Graham Cooper<grahamcooper7@gmail.com>  wrote:
>>
>> If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?
>>
>> then what would you call that question considering there are cases it
>> gets wrong?
>
> "Answered."
>

This seems like the obvious, and obviously correct, answer.

Even a single input pair for which the algorithm gets a wrong answer or
fails to terminate is sufficient to prove the answer is "No.".
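Patricia's point is ordinary existential falsification: a single counterexample settles the universal question "does it work on all inputs?" in the negative. A minimal sketch (all names here are illustrative, not from the thread):

```python
def works_on_all(algorithm, reference, inputs):
    """Answer 'does algorithm agree with reference on every input?'"""
    for x in inputs:
        if algorithm(x) != reference(x):
            return False     # one failing input pair settles it: "No"
    return True              # holds only over the inputs actually checked

# A doubler that silently breaks for n >= 3:
buggy_double = lambda n: n + n if n < 3 else 0

print(works_on_all(buggy_double, lambda n: 2 * n, range(3)))  # True
print(works_on_all(buggy_double, lambda n: 2 * n, range(5)))  # False
```

Note the asymmetry: "No" is proved by one counterexample, while "Yes" over an infinite input set cannot be established by testing at all.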

There is one relevant difference between my mathematician and my
software engineer opinions. With my mathematician hat on, I am prepared
to consider the possibility of a proof that an algorithm exists that
does not involve exhibiting the algorithm and proving it correct. As a
software engineer, I would have no use for a proof that an algorithm
exists unless it tells me what the algorithm is, and why the algorithm
always works.

Patricia
0
pats (3556)
6/12/2012 8:46:02 PM
On Jun 13, 6:46 am, Patricia Shanahan <p...@acm.org> wrote:
> On 6/12/2012 9:13 AM, Leif Roar Moldskred wrote:
>
> > In comp.theory Graham Cooper<grahamcoop...@gmail.com>  wrote:
>
> >> If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?
>
> >> then what would you call that question considering there are cases it
> >> gets wrong?
>
> > "Answered."
>
> This seems like the obvious, and obviously correct, answer.
>
> Even a single input pair for which the algorithm gets a wrong answer or
> fails to terminate is sufficient to prove the answer is "No.".


So the function 1/X is also UnComputable??

For the input value X=0 the algorithm gets a wrong answer or fails,

this is sufficient to prove the answer is "No".

QUESTION: DOES THE ALGORITHM WORK ON ALL INPUTS?



Herc
--
?? - Now watch Patricia fail to answer My Question in this very post!
0
6/12/2012 9:12:56 PM
On 6/12/2012 2:12 PM, Graham Cooper wrote:
> On Jun 13, 6:46 am, Patricia Shanahan <p...@acm.org> wrote:
>> On 6/12/2012 9:13 AM, Leif Roar Moldskred wrote:
>>
>>> In comp.theory Graham Cooper<grahamcoop...@gmail.com>  wrote:
>>
>>>> If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?
>>
>>>> then what would you call that question considering there are cases it
>>>> gets wrong?
>>
>>> "Answered."
>>
>> This seems like the obvious, and obviously correct, answer.
>>
>> Even a single input pair for which the algorithm gets a wrong answer or
>> fails to terminate is sufficient to prove the answer is "No.".
>
>
> So the function 1/X is also UnComputable??

Before we can discuss computability of a function, we need a fully
defined function. What is the domain? If 0 is in the domain, what is the
value of 1/0?

Patricia
0
pats (3556)
6/12/2012 10:42:40 PM
On Jun 13, 8:42 am, Patricia Shanahan <p...@acm.org> wrote:
> On 6/12/2012 2:12 PM, Graham Cooper wrote:
>
> > On Jun 13, 6:46 am, Patricia Shanahan <p...@acm.org> wrote:
> >> On 6/12/2012 9:13 AM, Leif Roar Moldskred wrote:
>
> >>> In comp.theory Graham Cooper<grahamcoop...@gmail.com>  wrote:
>
> >>>> If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?
>
> >>>> then what would you call that question considering there are cases it
> >>>> gets wrong?
>
> >>> "Answered."
>
> >> This seems like the obvious, and obviously correct, answer.
>
> >> Even a single input pair for which the algorithm gets a wrong answer or
> >> fails to terminate is sufficient to prove the answer is "No.".
>
> > So the function 1/X is also UnComputable??
>
> Before we can discuss computability of a function, we need a fully
> defined function. What is the domain? If 0 is in the domain, what is the
> value of 1/0?
>
> Patricia


Just ASSUME A TM EXISTS that computes 1/X for all values of X.

This is YOUR LOGIC:

ALL FUNCTIONS HALT OR THEY DON'T HALT (1 OR 0)
->
A HALT DECIDER MUST WORK ON ALL INPUTS (ALL FUNCTIONS)
and answer in the range (1 OR 0)

....even though Halt() itself has no need to test if it itself halts.
....just like INVERSE(x) has no need to compute INVERSE(0)
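Herc's INVERSE analogy can be made concrete. A minimal sketch (the function name and its domain restriction are illustrative, not from the thread): a function is computable *on its stated domain*, and rejecting an input outside that domain is a domain restriction, not a failure of computability.

```python
def inverse(x):
    """Compute 1/x on its stated domain: the nonzero reals."""
    if x == 0:
        # 0 lies outside the domain of 1/x; refusing it is a domain
        # restriction, not evidence that 1/x is uncomputable.
        raise ValueError("0 is not in the domain of inverse")
    return 1.0 / x

print(inverse(4))   # 0.25
```

Whether Halts() may analogously exclude inputs is exactly what the thread disputes: the classical halting problem fixes the domain as all <program, input> pairs, which is what Hypothesis 1 above proposes to relax.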


Herc

0
6/12/2012 11:39:39 PM
On 6/10/2012 7:26 PM, cplxphil wrote:
> On Jun 10, 10:21 am, "Peter Olcott"<NoS...@OCR4Screen.com>  wrote:
>> "Joshua Cranmer"<Pidgeo...@verizon.invalid>  wrote in message
>>
>> news:jr23h2$hil$1@dont-email.me...
>>
>>> On 6/9/2012 11:32 PM, Peter Olcott wrote:
>>>> On 6/9/2012 5:58 PM, cplxphil wrote:
>>>>> Also, before this continues too long:  It sounds like you are quite
>>>>> confident that you're right about this.  I am quite confident that you
>>>>> are not.  What would it take to convince you that you are wrong?
>>>> I will carefully examine every line-of-reasoning that attempts to show
>>>> that my reasoning is incorrect. I will point out any errors that I find.
>>> So, in other words, it is impossible to convince that you are wrong.
>> It would be impossible to convince me that I am wrong if I am *not* wrong,
>> otherwise it is possible.
>> So far I have not seen anything resembling sound reasoning that correctly
>> refutes my true position.
>> I have made very many errors in presenting this position, and from what I
>> can tell most of these have been corrected.
>>
>> Since my goal is to provide this reasoning using English that is 100%
>> completely mathematically precisely correct, George may have pointed out a
>> recent error. It may have been incorrect for me to use the term "potential
>> halt decider".
>> When I used this term I was referring to any Turing Machine that attempts to
>> be a halt decider, even though none could ever actually achieve this. Since
>> none could ever actually achieve this, no actual potential exists.
>>
>> Here is a possibly better statement of my position:
>>
>> The reason why the self reference form of the Halting Problem can not be
>> solved is:
>> 1) The invocation of every Turing Machine that attempts to be a halt decider
>> mathematically maps to a yes or no question (within a natural language
>> interface).
>>
>> 2) The yes or no answer to this question that this set of Turing Machines
>> can possibly provide (within a natural language interface) mathematically
>> maps to its own final states of accept and reject, thus deriving the entire
>> solution set of every possible answer.
>>
>> 3) Neither of these yes or no (accept or reject) answers is correct
>> (mathematically maps to whether or not the input TM will halt on its input).
>>
>> 4) Therefore the reason that the self reference form of the Halting Problem
>> can not be solved is that this problem is based on providing a yes or no
>> answer to a question that has no correct yes or no answer.
>>
>> Any attempted refutation should provide reasoning that refutes the above
>> points individually. Stating that the above reasoning is nonsense merely
>> indicates a failure of the respondent to comprehend and nothing more.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>> --
>>> Beware of bugs in the above code; I have only proved it correct, not tried
>>> it. -- Donald E. Knuth
> I think I see what you're getting at.
>
> Let's see if I can translate (into terms I'm comfortable with) and
> then refute your 4 points:
>
> 1>  A Turing machine that could solve the halting problem must map to
> a yes or no question.

It is a yes or no question whether the TM can solve it or not, and the 
*only* reason that a TM can not solve it is that it is an incorrect yes 
or no question.

The mathematical mapping is not that hard; I can not see why most people 
are not getting it.

X = 7 + 5;
mathematically maps to:
Assign the value of the sum of seven plus five to the integer variable 
named X.
The math representation and the English representation have identical 
semantic meanings.

In the self reference form of the Halting Problem (more literally (but 
too clumsy to say) the self-reference form of the reason why the Halting 
Problem can not be solved), the Turing Machine is asked the question 
"Does your input TM halt on its input?"

From the software engineering point of view this is its question. From 
the software engineering perspective it is required to answer the 
question of its design specification. From a mathematicians point of 
view the design specification would be missing and instead one would 
examine the set of all finite length strings and find that none meet the 
intended goal.


> 2>  Such a Turing machine can only ever accept or reject.
> 3>  Neither answer to a "yes or no" halting question is correct.
> (justification?)
> 4>  The halting problem has no answer.
>
> Although I don't understand #3, the problem is with #1.  You suggest,
> "a halt decider...maps to a yes or no question."  Yes, but only if it
> works!  Every "potential halt decider" we can develop *fails to halt*
> when given a sufficiently problematic instance of the halting
> problem.  You are essentially starting out your argument by assuming
> that the halting problem can be decided properly, which it cannot.
>
> You are saying that every Turing machine must, via a natural language
> interface (which--by the way--technically has no place in a
> mathematical argument), map to a natural language question.  Wrong!
> The mathematical *question* that you discuss must map to a yes or no
> question, but a Turing machine is a computer program and a
> mathematical object--and thus is not governed by your beliefs about
> natural language.
>
> The irony of your argument is that a modification of it actually
> suggests that Turing's result is correct.  In fact, if Turing's result
> were wrong, you would be correct.  Logically speaking, if you could
> establish something as logically contradictory as the actual existence
> of a  "potential halt decider," you would be able to prove anything--
> even that the moon is made of cheese.
>
> Moons made of cheese are neither here nor there, but in summary, your
> fallacy is the claim that the *Turing machine* must map to a yes or no
> question.  A Turing machine is free to do whatever it wants, including
> never halt, even if you would like it to be a potential halt decider.
>
> Ironically, your step #1 represents a contradiction that Turing
> exploits in his proof.  You are assuming that the halt decider exists,
> and that it maps to a yes or no question.  That is precisely what
> Turing assumes--and using that false assumption and some self-
> reference tricks, he produces precisely the result that you claim
> doesn't make sense.
>
> I've been a bit repetitive above, mainly for emphasis (and out of
> laziness), but I hope I've made my point.  One last time:  A Turing
> machine does not map to a yes or no question, it maps to "yes, no,
> never halt."
>
> I have a sinking feeling you won't be convinced....

0
Peter
6/13/2012 1:43:02 AM
On 6/11/2012 9:29 PM, cplxphil wrote:
> Argh...you say that my paraphrase is incorrect, but then don't try to
> supply examples of a better one!

The best one was the original one:
“Does Turing Machine M halt on input of Turing Machine H and Turing 
Machine M?”

Where M is defined as
---------------------
M(String H, String input):
if H(input, H, input) loop
else halt
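The definition of M above is the standard diagonal construction, which can be sketched in Python. This is a hedged sketch only: `halts` is the *assumed* decider (no such total function exists, which is the point of the proof), and the stub passed in below merely makes the contradiction visible.

```python
# Hypothetical sketch of the diagonal construction. "halts" is the
# ASSUMED decider: halts(prog, arg) should return True iff prog(arg)
# eventually halts. No total such function exists; feeding M back to
# the decider is exactly how that impossibility is derived.

def make_M(halts):
    def M(arg):
        if halts(M, arg):     # decider predicts "M halts on arg"...
            while True:       # ...so M loops forever: prediction fails
                pass
        else:                 # decider predicts "M loops on arg"...
            return "halted"   # ...so M halts: prediction fails again
    return M

# With any stub standing in for the impossible decider, M visibly
# contradicts the stub's verdict about (M, M):
M = make_M(lambda prog, arg: False)   # stub claims "never halts"
print(M(M))                           # yet M halts: "halted"
```

Whatever answer the decider gives about M applied to itself, M does the opposite, so the assumption that a total `halts` exists must be dropped.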

> If your prose were precise and clear, it would be reasonable to just
> say "oh that's wrong," but given that it clearly isn't, why not take a
> little time to improve your exposition?  You are not uncivil, but you
> come across as a bit arrogant.  I say this in the spirit of trying to
> be constructive, not an effort to be vindictive or criticize you in an
> ad hominem fashion.
>
> Why not work on some clearer definitions?  If you have the world's
> most brilliant philosophical point in the world, but can't make it
> clearly, you might as well be speaking Portuguese to a room full of
> non-speakers.  I suspect that if you have something of value to say,
> it's disguised in your exposition to the point that you are wasting
> your time trying to get anyone to understand it.
>
> If you are going to keep posting to the same group over and over with
> the same point, I for one won't try to stop you.  But you would likely
> make more progress if you clarified your definitions, phrased your
> points as questions rather than groundbreaking discoveries, or sought
> guidance from someone who understands your argument on how to make it
> clearer.

Peter
6/13/2012 1:47:01 AM
On 6/12/2012 3:46 PM, Patricia Shanahan wrote:
> On 6/12/2012 9:13 AM, Leif Roar Moldskred wrote:
>> In comp.theory Graham Cooper<grahamcooper7@gmail.com>  wrote:
>>>
>>> If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?
>>>
>>> then what would you call that question considering there are cases it
>>> gets wrong?
>>
>> "Answered."
>>
>
> This seems like the obvious, and obviously correct, answer.
>
> Even a single input pair for which the algorithm gets a wrong answer or
> fails to terminate is sufficient to prove the answer is "No.".
>
H is asked the question:
"Does your input TM halt on its input?"

The *only* possible answers to the question posed to the TM are its own 
two final states, all other answers of every kind are not within the set 
of possible answers.


> There is one relevant difference between my mathematician and my
> software engineer opinions. With my mathematician hat on, I am prepared
> to consider the possibility of a proof that an algorithm exists that
> does not involve exhibiting the algorithm and proving it correct. As a
> software engineer, I would have no use for a proof that an algorithm
> exists unless it tells me what the algorithm is, and why the algorithm
> always works.
>
> Patricia

Peter
6/13/2012 1:57:35 AM
On Jun 13, 11:47 am, Peter Olcott <OCR4Screen> wrote:
> On 6/11/2012 9:29 PM, cplxphil wrote:
>
> > Argh...you say that my paraphrase is incorrect, but then don't try to
> > supply examples of a better one!
>
> The best one was the original one:
> “Does Turing Machine M halt on input of Turing Machine H and Turing
> Machine M?”
>
> Where M is defined as
> ---------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt
>


I don't get where the 3 parameter functions are coming from.

Every <PROGRAM, ARGUMENT> Pair has a property IT-HALTS or NOT(IT-HALTS)

Let's examine the ANTI-DIAGONAL METHOD USED IN HALT PROOF

 ---INPUT  1 2 3 4 5 6 7 8 9 10 11 12...
TM
1
2
3
4
5
6
7
8
9
10
11
12
....

If HALT is some TM-n, then all the OUTPUT VALUES of HALT appear on
some row.


 ---INPUT  1 2 3 4 5 6 7 8 9 10 11 12...
TM
1
2
3
4
5
6
7
8 .......... 0 1 0 0 0 0 0 1 1 1 0 0 0 1 1 ....
9
10
11
12
....

EG.
TM-8(1) = 0

or HALT(1) = 0

i.e TM-1 has property NOT(IT-HALTS)

When WE Run
TM-8(8)

we are performing the computation equivalent to asking:

"Does your input TM halt on its input?"


------

To obtain a contradiction you can create secondary functions or split
the input into a tuple to input any parameter into the TM itself.
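The flipped-diagonal step in the table above can be demonstrated on a finite analogue (the 0/1 values here are made up for illustration): the anti-diagonal row differs from every row of the table at that row's own column, so it cannot appear as any row.

```python
# A finite 0/1 "halting table": table[i][j] is row i's answer for input j.
table = [
    [0, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [1, 0, 0, 1],
]

# Flip each diagonal entry to build the anti-diagonal row.
diagonal = [table[i][i] for i in range(len(table))]
anti_diagonal = [1 - d for d in diagonal]

# The anti-diagonal row cannot equal row i: they differ at column i.
for i, row in enumerate(table):
    assert anti_diagonal[i] != row[i]
print(anti_diagonal)  # [1, 0, 0, 0]
```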

Herc

6/13/2012 2:03:15 AM
In comp.theory Graham Cooper <grahamcooper7@gmail.com> wrote:
 
> 
> So the function 1/X is also UnComputable??

To match the Halting problem, you need to rephrase the question to "Is
the function 1/x computable for all values of x?" to which the answer
is "no."

Newsgroups: and Followup-To: lines trimmed.

-- 
Leif Roar Moldskred
leifm1143 (162)
6/13/2012 4:05:44 AM
On 6/12/2012 9:05 PM, Leif Roar Moldskred wrote:
> In comp.theory Graham Cooper <grahamcooper7@gmail.com> wrote:
>
>>
>> So the function 1/X is also UnComputable??
>
> To match the Halting problem, you need to rephrase the question to "Is
> the function 1/x computable for all values of x?" to which the answer
> is "no."

With my computer scientist hat on, I'm not even going to attempt to
evaluate computability until I have a defined function.

Maybe the domain is IEEE 754 64-bit floating point numbers, in which
case 1/0 is positive infinity and the function is indeed computable - it
is implemented in hardware on many processors.

At this point, I don't even know whether 0 is in the domain, and if so
how 1/0 is defined.
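A small Python check of this point about domains (illustrative only): Python's own float division raises on 1.0/0.0, while IEEE 754 arithmetic defines the result as +infinity, so whether "1/x is computable at 0" depends entirely on which function was actually defined.

```python
import math

# Python's float division raises on 1.0 / 0.0 ...
try:
    1.0 / 0.0
    raised = False
except ZeroDivisionError:
    raised = True

# ... while IEEE 754 arithmetic defines 1/0 as +infinity, a value Python
# can still represent and test for:
inf = float('inf')
print(raised, math.isinf(inf))  # True True
```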

Patricia
pats (3556)
6/13/2012 3:42:20 PM
On Jun 12, 9:57 pm, Peter Olcott <OCR4Screen> wrote:
> H is asked the question:
> "Does your input TM halt on its input?"

*NO*,*DUMBMASS*, H *IS*NOT* asked THAT question because H CANNOT be
asked THAT question, because H
IS NOT A *HALTS* TM!  IN ORDER to be asked the question, "Does your
1st parameter halt on your 2nd?", you must FIRST BE
a HALTS tm.

This is NOT supposed to be HARD to understand!

Suppose I have an ADDITION tm.  Suppose I have a TM that expects to
get 2 numbers as input strings and is guaranteed to write the string
meaning THE SUM of the two input numbers ON THE TAPE AS OUTPUT (and
then halt).
THIS TM CANNOT BE ASKED ANY questions OTHER than ADDITION questions!
FOR ANY string input you give it, it is going to break the string into
a 1st and 2nd number, and then write the string for the sum of those
two numbers on the tape!  ALWAYS!  Therefore, ADDITION questions are
THE ONLY KINDS of questions that this machine CAN be asked!  You
canNOT ASK this machine questions about subtraction, division, or
halting, unless you first do some very fancy recoding.
BECAUSE it is an ADDITION tm, it CAN ONLY BE ASKED questions about
ADDITION PROBLEMS.

IN ORDER for H to be asked questions about halting, H would FIRST HAVE
TO *BE* a Halts TM.  SINCE H ISN'T a Halts TM, NO inputs to H EVER
constitute asking H a question about halting!  Unless, of course, they
are pre-restricted to some finite simplistic subdomain.  In that case,
ALL FINITARY questions ARE ALWAYS EASY for TMs in general, so H
*could* be interpreted as being asked (and answering) a halting
question IF it were a question about the halting of a machine MUCH
SMALLER AND SIMPLER THAN H itself.
But H itself ISN'T smaller or simpler than H.
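The addition-machine point above can be sketched as (hypothetical code, not from the post): the machine's wiring fixes which questions it can be asked, and any other string simply fails to be an addition problem.

```python
def addition_tm(tape: str) -> str:
    """A machine wired only for addition: it always parses its tape as
    two comma-separated numbers and writes their sum."""
    a, b = tape.split(",")
    return str(int(a) + int(b))

print(addition_tm("2,3"))  # 5

# Handing it a "halting question" does not ask that question; the string
# simply fails to parse as an addition problem.
try:
    addition_tm("does M halt on input I?")
except ValueError:
    print("not an addition problem")
```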
greeneg9613 (188)
6/13/2012 8:48:29 PM
On Jun 14, 6:48 am, George Greene <gree...@email.unc.edu> wrote:
> On Jun 12, 9:57 pm, Peter Olcott <OCR4Screen> wrote:
>
> > H is asked the question:
> > "Does your input TM halt on its input?"
>
> *NO*,*DUMBMASS*, H *IS*NOT* asked THAT question because H CANNOT be
> asked THAT question, because H
> IS NOT A *HALTS* TM!  IN ORDER to be asked the question, "Does your
> 1st parameter halt on your 2nd?", you must FIRST BE
> a HALTS tm.
>
> This is NOT supposed to be HARD to understand!
>
> Suppose I have an ADDITION tm.  Suppose I have a TM that expects to
> get 2 numbers as input strings and is guaranteed to write the string
> meaning THE SUM of the two input numbers ON THE TAPE AS OUTPUT (and
> then halt).
> THIS TM CANNOT BE ASKED ANY questions OTHER than ADDITION questions!
> FOR ANY string input you give it, it is going to break the string into
> a 1st and 2nd number, and then write the string for the sum of those
> two numbers on the tape!  ALWAYS!  Therefore, ADDITION questions are
> THE ONLY KINDS of questions that this machine CAN be asked!  You
> canNOT ASK this machine questions about subtraction, division, or
> halting, unless you first do some very fancy recoding.
> BECAUSE it is an ADDITION tm, it CAN ONLY BE ASKED questions about
> ADDITION PROBLEMS.
>
> IN ORDER for H to be asked questions about halting, H would FIRST HAVE
> TO *BE* a Halts TM.  SINCE H ISN'T a Halts TM, NO inputs to H EVER
> constitute asking H a question about halting!  Unless, of course, they
> are pre-restricted to some finite simplistic subdomain.  In that case,
> ALL FINITARY questions ARE ALWAYS EASY for TMs in general, so H
> *could* be interpreted as being asked (and answering) a halting
> question IF it were a question about the halting of a machine MUCH
> SMALLER AND SIMPLER THAN H itself.
> But H itself ISN'T smaller or simpler than H.


HALT(halt()) is asking the Question to Halt

"Does *your* TM halt on its input?"

This is Peter's argument or HYPOTHESIS.

You don't rebuke the Hypothesis, this is not a debate to score points.

If your problem with the argument in general is about size of H you
need to explain why.

certainly small algorithms can work on larger ones.


Herc
--
P: If Halts(P) Then Loop Else Halt.
is obviously a paradoxical program if Halts() exists.

BUT IF IT WEREN'T NAMED P then it might not be:
Q: If Halts(P) Then Loop Else Halt.
is NOT paradoxical.
~ GEORGE GREEN (sci.logic)
6/13/2012 11:32:00 PM
"George Greene" <greeneg@email.unc.edu> wrote in message 
news:a387b7af-5763-452d-a601-a58a6304df7d@z19g2000vbe.googlegroups.com...
On Jun 12, 9:57 pm, Peter Olcott <OCR4Screen> wrote:
> H is asked the question:
> "Does your input TM halt on its input?"

*NO*,*DUMBMASS*, H *IS*NOT* asked THAT question because H CANNOT be
asked THAT question, because H
IS NOT A *HALTS* TM!  IN ORDER to be asked the question, "Does your
1st parameter halt on your 2nd?", you must FIRST BE
a HALTS tm.

How is that not analogous to saying that it is impossible to ask a question 
that someone does not know the answer to?

If one examines the set of finite length strings, one will find within this 
set a Turing Machine that knows every natural language on the planet, and 
knows every single detail of everything that is currently known about 
anything, and has reasoning capability at least equal to the ability of each 
of the best human experts in each field of inquiry.

You are saying that this machine can not even be asked the question:
"Does this TM halt on its input?" 


NoSpam271 (937)
6/15/2012 11:26:19 AM
On Jun 15, 9:26 pm, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
>
> You are saying that this machine can not even be asked the question:
> "Does this TM halt on its input?"

George seems unable to distinguish between the topic and the halting
proof, and takes every opportunity to reverse engineer his answer
backwards from "NO!  NO!  NO!  You're not allowed to use the term HALT
in ANY ARGUMENT!"

George acknowledges you can *ASK* a TM "Does this TM halt?"

But when you explain the self reference to him...

"Does YOUR TM halt?"

he invokes the Result of the halting proof in order to argue why the
halting proof is correct!

You will never get anywhere with these die hard classic theorists who
" ..studied years and years .." all their bullshit!

10 IF HALT() GOTO 10

WHO ON EARTH WOULD BELIEVE THIS BULLSHIT??

IF I TOLD MY BOSS - SORRY WE CAN'T PROGRAM THE TEST HARNESS BECAUSE IF
IT WENT INTO AN INFINITE LOOP DECIDING IF IT HALTED OR NOT AND DOING
THE OPPOSITE IT WOULD BE UN-COMPUTABLE!

HE'S FIRE MY ASS AND HIRE A COMPETENT PROGRAMMER!


Herc
--
P: If Halts(P) Then Loop Else Halt.
is obviously a paradoxical program if Halts() exists.

BUT IF IT WEREN'T NAMED P then it might not be:
Q: If Halts(P) Then Loop Else Halt.
is NOT paradoxical.

~ GEORGE GREEN (sci.logic)
6/15/2012 2:00:49 PM
>
> IF I TOLD MY BOSS - SORRY WE CAN'T PROGRAM THE TEST HARNESS BECAUSE IF
> IT WENT INTO AN INFINITE LOOP DECIDING IF IT HALTED OR NOT AND DOING
> THE OPPOSITE IT WOULD BE UN-COMPUTABLE!
>
> HE'S FIRE MY ASS AND HIRE A COMPETENT PROGRAMMER!
>


HE'D FIRE MY ASS!

"Sorry Boss, can't program anything to check if the programs work,
imagine if we coded
10 IF HALT() THEN GOTO 10"

"Sorry Boss, can't program a Proof() Predicate to formally derive
anything, imagine if we coded
T|- G
G = !PROOF(G)

Herc
--
MATHEMATICS
E(Y) Y = {x|P(x)}
<->
PRVBLE( E(Y) Y = {x|P(x)} )

PRVBLE(T) <-> NOT(DERIVE(NOT(T)))
DERIVE(T) <-> E(a) E(b) DERIVE(a) ^ DERIVE(b) ^ (a^b)->T
6/15/2012 2:51:37 PM
On 6/15/2012 7:00 AM, Graham Cooper wrote:
....
> IF I TOLD MY BOSS - SORRY WE CAN'T PROGRAM THE TEST HARNESS BECAUSE IF
> IT WENT INTO AN INFINITE LOOP DECIDING IF IT HALTED OR NOT AND DOING
> THE OPPOSITE IT WOULD BE UN-COMPUTABLE!

And he would be right to do so, both for the attempt to build a general
halt decider into the test harness, and for failing to realize that
there are many useful tests that can be performed without depending on
the non-existent general halt decider.

The usual way of avoiding the problem in a practical test harness is to
include a timeout on the program under test.

In formal Turing machine terms, it is similar to deciding the set of TM
computations that terminate in no more than X steps, for some integer X.
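The bounded variant described above can be sketched in Python, modeling a machine as a generator that yields once per step (the names are mine, not from the post): deciding "halts within X steps" is straightforwardly computable, and a negative answer only means the budget ran out, not that the machine never halts.

```python
def halts_within(machine, max_steps):
    """Decide whether a machine (modeled as a generator function, one
    yield per step) reaches its end within max_steps steps. Unlike a
    general halt decider, this bounded question is always answerable."""
    gen = machine()
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the step budget
    return False          # still running; says nothing about later steps

def halts_after_three():
    for _ in range(3):
        yield

def runs_forever():
    while True:
        yield

print(halts_within(halts_after_three, 10))  # True
print(halts_within(runs_forever, 10))       # False
```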

Patricia
pats (3556)
6/15/2012 5:42:28 PM
On Jun 16, 3:42 am, Patricia Shanahan <p...@acm.org> wrote:
> On 6/15/2012 7:00 AM, Graham Cooper wrote:
> ...
>
> > IF I TOLD MY BOSS - SORRY WE CAN'T PROGRAM THE TEST HARNESS BECAUSE IF
> > IT WENT INTO AN INFINITE LOOP DECIDING IF IT HALTED OR NOT AND DOING
> > THE OPPOSITE IT WOULD BE UN-COMPUTABLE!
>
> And he would be right to do so, both for the attempt to build a general
> halt decider into the test harness, and for failing to realize that
> there are many useful tests that can be performed without depending on
> the non-existent general halt decider.
>
> The usual way of avoiding the problem in a practical test harness is to
> include a timeout on the program under test.
>
> In formal Turing machine terms, it is similar to deciding the set of TM
> computations that terminate in no more than X steps, for some integer X.
>
> Patricia

Sorry Boss, we can't program a Proof() Predicate to formally
derive anything, imagine if we coded
T|- G
G = !PROOF(G)

DERIVE(T) <-> E(a) E(b) DERIVE(a) ^ DERIVE(b) ^ (a^b)->T


Herc

--
http://tinyURL.com/BLUEPRINTS-TURING
http://tinyURL.com/BLUEPRINTS-CANTOR
http://tinyURL.com/BLUEPRINTS-GODEL
http://tinyURL.com/BLUEPRINTS-PROOF
http://tinyURL.com/BLUEPRINTS-LOGIC
http://tinyURL.com/BLUEPRINTS-MATHS
http://tinyURL.com/BLUEPRINTS-HALT
http://tinyURL.com/BLUEPRINTS-P-NP
http://tinyURL.com/BLUEPRINTS-GUT
6/15/2012 9:33:47 PM
On Jun 15, 7:26 am, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
> If one examines the set of finite length strings

This is almost meaningless.  To the extent that anyone understands
what it means, she understands that ONE DOES NOT (*ever*) do THIS
because
"the set of finite length strings" IS AN INFINITE SET.  THEREFORE, AT
NO TIME have the finite number of human beings EVER gotten around to
"examining" ALL of it,
NOR WILL WE EVER.
greeneg9613 (188)
6/16/2012 1:12:19 AM
On Jun 16, 11:12 am, George Greene <gree...@email.unc.edu> wrote:
> On Jun 15, 7:26 am, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
>
> > If one examines the set of finite length strings
>
> This is almost meaningless.  To the extent that anyone understands
> what it means, she understands that ONE DOES NOT (*ever*) do THIS
> because
> "the set of finite length strings" IS AN INFINITE SET.  THEREFORE, AT
> NO TIME have the finite number of human beings EVER gotten around to
> "examining" ALL of it,
> NOR WILL WE EVER.

did he say "if one has EXAMINED all the set of finite length
strings" ??

If you rebuke all terminology, rebuke a halt program because a program
might check whether itself halts, rebuke any argument since "hard
problems will always be impossible" then you proved yourself into a
closed paradox and sealed the door!

HAIL THE EDUCATION SYSTEM that gave George a degree in LoGIc!

Herc

--
P: If Halts(P) Then Loop Else Halt.
is obviously a paradoxical program if Halts() exists.
BUT IF IT WEREN'T NAMED P then it might not be:

Q: If Halts(P) Then Loop Else Halt.
is NOT paradoxical.
~ GEORGE GREEN (sci.logic)
6/16/2012 1:27:54 AM
On 6/15/2012 8:12 PM, George Greene wrote:
> On Jun 15, 7:26 am, "Peter Olcott"<NoS...@OCR4Screen.com>  wrote:
>> If one examines the set of finite length strings
> This is almost meaningless.  To the extent that anyone understands
> what it means, she understands that ONE DOES NOT (*ever*) do THIS
> because
> "the set of finite length strings" IS AN INFINITE SET.  THEREFORE, AT
> NO TIME have the finite number of human beings EVER gotten around to
> "examining" ALL of it,
> NOR WILL WE EVER.
Mindless automaton stuck in refute mode.
Peter
6/16/2012 2:11:12 AM
On Jun 16, 12:11 pm, Peter Olcott <OCR4Screen> wrote:
> On 6/15/2012 8:12 PM, George Greene wrote:
> > On Jun 15, 7:26 am, "Peter Olcott"<NoS...@OCR4Screen.com> wrote:
> >> If one examines the set of finite length strings
> > This is almost meaningless.  To the extent that anyone understands
> > what it means, she understands that ONE DOES NOT (*ever*) do THIS
> > because
> > "the set of finite length strings" IS AN INFINITE SET.  THEREFORE, AT
> > NO TIME have the finite number of human beings EVER gotten around to
> > "examining" ALL of it,
> > NOR WILL WE EVER.
>
> Mindless automaton stuck in refute mode.

How can an algorithm be "impossible" when even
the ORACLE VERSION of the Halt(p1) Program,
that randomly guesses the right halt() value,
can be twisted into a paradox?

If an Oracle-Halt() gave the right answer,
then LOOP IF O-HALTS()

How can you DESIGN a PROGRAM to check for INFINITE LOOPS,
then get a PROGRAM to CHECK ITSELF for an INFINITE LOOP
and do the opposite?

And call that - IMPOSSIBLE TO DETECT INFINITE LOOPS?

They don't even follow their own rules of TM computation!!!

1 Every <TM-n, i> Pair INFINITE LOOPS or NOT

2 ASSUME:  HALT(n,i)  WRITES  ["INFLOOP" | "HALT"] for all Pairs in [1]
   on the infinite turing tape and then HALTS

****

They all BREAK the 2nd STEP!

WE only assumed a single TEST HARNESS HALT PROGRAM EXISTED,
not that it could be utilised as a general purpose function.

Enforcing that the HALT() TM HALTS after giving its answer is part of
the specification of the algorithm.


Herc

6/16/2012 2:33:02 AM
Stick with constructivistic methods and discussions like yours vanish like gorillas in the mist.
6/16/2012 2:59:08 AM
On Jun 15, 10:11 pm, Peter Olcott <OCR4Screen> wrote:

> Mindless automaton stuck in refute mode.

Mindless automaton not even addressing the actual issue, which was
about
the fact that the question being asked depends on the programming of
the interpreter,
not the string being interpreted.
greeneg9613 (188)
6/16/2012 5:50:12 AM
On Jun 15, 9:27 pm, Graham Cooper <grahamcoop...@gmail.com> wrote:
> If you rebuke all terminology, rebuke a halt program because a program
> might check whether itself halts, rebuke any argument

Is your native language English?  You do not know the meaning of the
verb "rebuke".
greeneg9613 (188)
6/16/2012 5:52:06 AM
On Jun 16, 3:52 pm, George Greene <gree...@email.unc.edu> wrote:
> On Jun 15, 9:27 pm, Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > If you rebuke all terminology, rebuke a halt program because a program
> > might check whether itself halts, rebuke any argument
>
> Is your native language English?  You do not know the meaning of the
> verb "rebuke".

Ahh sorry, haven't seen you actually refute anything as yet!

Herc
6/16/2012 7:18:27 AM
On Jun 16, 3:50 pm, George Greene <gree...@email.unc.edu> wrote:
> On Jun 15, 10:11 pm, Peter Olcott <OCR4Screen> wrote:
>
> > Mindless automaton stuck in refute mode.
>
> Mindless automaton not even addressing the actual issue, which was
> about
> the fact that the question being asked depends on the programming of
> the interpreter,
> not the string being interpreted.

The Goal Shifting argument, endless ad nauseum ahead!

Everything is too hard <-> We don't know what we're saying!

Herc
6/16/2012 7:21:13 AM
> > > If you rebuke all terminology, rebuke a halt program because a program
> > > might check whether itself halts, rebuke any argument
>
> > Is your native language English?  You do not know the meaning of the
> > verb "rebuke".
>

So tell us George, do you have ANY IDEA what we mean by

THE_SELF_REFERENCE aspect of the Halting Proof?

before we are detoured any further?

Herc
--

P: If Halts(P) Then Loop Else Halt.
is obviously a paradoxical program if Halts() exists.
BUT IF IT WEREN'T NAMED P then it might not be:

Q: If Halts(P) Then Loop Else Halt.
is NOT paradoxical.
~ GEORGE GREEN (sci.logic)
6/16/2012 8:00:34 AM
Graham Cooper <grahamcooper7@gmail.com> writes:
<snip>
> How can an algorithm be "impossible" when even
> the ORACLE VERSION of the Halt(p1) Program,
> that randomly guesses the right halt() value,
> can be twisted into a paradox?
>
> If an Oracle-Halt() gave the right answer,
> then LOOP IF O-HALTS()

A TM with a halting oracle isn't a TM, so you can't apply the "usual"
proof.  You can't construct a TM that uses (in your terminology)
O-HALTS().

You can extend the model of computation, beyond TMs, to include oracle
machines.  If TM_OH is the set of TMs with a halting oracle, you can
show that there is no M in TM+OH that decides halting for that set of
machines, but you can now posit an oracle for this new decidability
problem.  You get an oracle for deciding halting of machines in TM_OH,
but, again, no contradiction can be derived because this new machine is
not in TM_OH.

There is an obvious "and so on".

> How can you DESIGN a PROGRAM to check for INFINITE LOOPS,
> then get a PROGRAM to CHECK ITSELF for an INFINITE LOOP
> and do the opposite?

Exactly -- you can't.

> And call that - IMPOSSIBLE TO DETECT INFINITE LOOPS?

There are only two choices: either the construction of the derived
machine is impossible, or the machine to decide halting is impossible.
Since the construction is trivial, what are you left with?

<snip>
-- 
Ben.
ben.usenet (6790)
6/16/2012 11:31:02 AM
On 6/16/2012 12:50 AM, George Greene wrote:
> On Jun 15, 10:11 pm, Peter Olcott<OCR4Screen>  wrote:
>
>> Mindless automaton stuck in refute mode.
> Mindless automaton not even addressing the actual issue, which was
> about
> the fact that the question being asked depends on the programming of
> the interpreter,
> not the string being interpreted.
You are a mindless automaton stuck in refute mode!
If a TM had all of the knowledge in the world encoded within it, then
the question:
"Does this TM instance halt on its input instance?"
depends upon:
(a) This string: "Does this TM instance halt on its input instance?"
(b) The input TM instance.
(c) The input TM instance's input instance.


Peter
6/16/2012 12:05:53 PM
On 6/16/2012 3:00 AM, Graham Cooper wrote:
>>>> If you rebuke all terminology, rebuke a halt program because a program
>>>> might check whether itself halts, rebuke any argument
>>> Is your native language English?  You do not know the meaning of the
>>> verb "rebuke".
> So tell us George, do you have ANY IDEA what we mean by
>
> THE_SELF_REFERENCE aspect of the Halting Proof?

I call it Pathological Self Reference because it derives an ill-formed 
question.

An ill-formed question is defined as any question that lacks a correct 
answer from the set of all possible answers.

>
> before we are detoured any further?
>
> Herc
> --
>
> P: If Halts(P) Then Loop Else Halt.
> is obviously a paradoxical program if Halts() exists.
> BUT IF IT WEREN'T NAMED P then it might not be:
>
> Q: If Halts(P) Then Loop Else Halt.
> is NOT paradoxical.
> ~ GEORGE GREEN (sci.logic)

Peter
6/16/2012 12:08:30 PM
On 6/16/2012 6:31 AM, Ben Bacarisse wrote:
> Graham Cooper<grahamcooper7@gmail.com>  writes:
> <snip>
>> How can an algorithm be "impossible" when even
>> the ORACLE VERSION of the Halt(p1) Program,
>> that randomly guesses the right halt() value,
>> can be twisted into a paradox?
>>
>> If an Oracle-Halt() gave the right answer,
>> then LOOP IF O-HALTS()
> A TM with a halting oracle isn't a TM, so you can't apply the "usual"
> proof.  You can't construct a TM that uses (in your terminology)
> O-HALTS().
That does not matter. The question behind the Pathological Self 
Reference form of the Halting Problem would be unsolvable even for an 
all-knowing mind because this question is ill-formed.

> You can extend the model of computation, beyond TMs, to include oracle
> machines.  If TM_OH is the set of TMs with a halting oracle, you can
> show that there is no M in TM_OH that decides halting for that set of
> machines, but you can now posit an oracle for this new decidability
> problem.  You get an oracle for deciding halting of machines in TM_OH,
> but, again, no contradiction can be derived because this new machine is
> not in TM_OH.
>
> There is an obvious "and so on".
>
>> How can you DESIGN a PROGRAM to check for INFINITE LOOPS,
>> then get a PROGRAM to CHECK ITSELF for an INFINITE LOOP
>> and do the opposite?
> Exactly -- you can't.
>
>> And call that - IMPOSSIBLE TO DETECT INFINITE LOOPS?
> There are only two choices: either the construction of the derived
> machine is impossible, or the machine to decide halting is impossible.
> Since the construction is trivial, what are you left with?
>
> <snip>

Peter
6/16/2012 12:11:25 PM
"Ben Bacarisse" <ben.usenet@bsb.me.uk> wrote in message 
news:0.73d1d062cab079c41ba4.20120616123102BST.87aa03wl1l.fsf@bsb.me.uk...
> Graham Cooper <grahamcooper7@gmail.com> writes:
<snip>

>> And call that - IMPOSSIBLE TO DETECT INFINITE LOOPS?
>
> There are only two choices: either the construction of the derived
> machine is impossible, or the machine to decide halting is impossible.
> Since the construction is trivial, what are you left with?

But the objection there is not really that such definitions are impossible, 
rather it is about which of these belong to the given collection of machines 
and which don't and require an enlargement of the domain.  IOW, which are 
machines, which are meta-machines, which are meta-meta-machines, and so 
on.  --  I am still looking around for different proofs of the unsolvability
of the halting problem, first- and second-order, but surely a proof where we 
posit a machine T that contradicts H to conclude that H cannot exist looks a 
very "weak" argument.

-LV
 

julio (505)
6/16/2012 1:04:52 PM
"Peter Olcott" <OCR4Screen> wrote in message 
news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
<snip>

> The question behind the Pathological Self Reference form of the Halting 
> Problem would be unsolvable even for an all-knowing mind because this 
> question is ill-formed.

Sorry, maybe I have missed the critical posts, but which question is this? 
Could you please state it?

I know the usual one: "Is there a TM that can tell the halting behavior of 
any TM-input pair?", and the answer is (provably) no.  Then one might 
question the proof, but the question seems valid.

-LV
 

julio (505)
6/16/2012 1:14:14 PM
On 6/16/2012 8:14 AM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
> <snip>
>
>> The question behind the Pathological Self Reference form of the 
>> Halting Problem would be unsolvable even for an all-knowing mind 
>> because this question is ill-formed.
>
> Sorry, maybe I have missed the critical posts, but which question is 
> this? Could you please state it?
>
> I know the usual one: "Is there a TM that can tell the halting 
> behavior of any TM-input pair?", and the answer is (provably) no.  
> Then one might question the proof, but the question seems valid.
>
> -LV
>
>
"Does the input TM instance halt on its input?"
Peter
6/16/2012 2:47:08 PM
"Peter Olcott" <OCR4Screen> wrote in message 
news:i_SdnUZNm_rxAUHSnZ2dnUVZ_qudnZ2d@giganews.com...
> On 6/16/2012 8:14 AM, LudovicoVan wrote:
>> "Peter Olcott" <OCR4Screen> wrote in message 
>> news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
>> <snip>
>>
>>> The question behind the Pathological Self Reference form of the Halting 
>>> Problem would be unsolvable even for an all-knowing mind because this 
>>> question is ill-formed.
>>
>> Sorry, maybe I have missed the critical posts, but which question is 
>> this? Could you please state it?
>>
>> I know the usual one: "Is there a TM that can tell the halting behavior 
>> of any TM-input pair?", and the answer is (provably) no.  Then one might 
>> question the proof, but the question seems valid.
>
> "Does the input TM instance halt on its input?"

That would be a sub-question, where you are constraining the input to be the 
given TM's code, so implicitly assuming that you have a coding for the TMs 
(which is something not necessarily implied by the original, more general 
question).  Spelled out, this would be the question:  "Is there a TM that 
can tell the halting behavior of any TM-input pair, where input is equal to 
the code of the given TM itself?"  I still cannot see anything invalid, not 
even with this (sub-)question: for instance, one could conceive a TM that 
does negation of its input: when the input corresponds to the code of this 
TM itself, nothing "invalid" happens.  As another example, the putative 
universal halt decider, even if assumed to be a TM, should have no problems 
in telling its own halting behavior: in fact, by construction hypothesis, it 
always halts.

That said, I still have to see a working and convincing proof (except maybe 
for the ruling out of impredicativity in the second-order proof; my fault 
for not having a final answer on this issue), but, as said, that would be an 
objection to the proof, not to the question (not to the original one and, 
most probably, not even to the sub-question).

-LV
 

julio (505)
6/16/2012 3:20:55 PM
On 6/16/2012 10:20 AM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:i_SdnUZNm_rxAUHSnZ2dnUVZ_qudnZ2d@giganews.com...
>> On 6/16/2012 8:14 AM, LudovicoVan wrote:
>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>> news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
>>> <snip>
>>>
>>>> The question behind the Pathological Self Reference form of the 
>>>> Halting Problem would be unsolvable even for an all-knowing mind 
>>>> because this question is ill-formed.
>>>
>>> Sorry, maybe I have missed the critical posts, but which question is 
>>> this? Could you please state it?
>>>
>>> I know the usual one: "Is there a TM that can tell the halting 
>>> behavior of any TM-input pair?", and the answer is (provably) no.  
>>> Then one might question the proof, but the question seems valid.
>>
>> "Does the input TM instance halt on its input?"
>
> That would be a sub-question, 
It is the core question behind the reason why the Pathological 
Self-Reference form of the Halting Problem can not be solved.

> where you are constraining the input to be the given TM's code, so 
> implicitly assuming that you have a coding for the TMs (which is 
> something not necessarily implied by the original, more general 
> question).

When one takes into account AI knowledge representation, then this TM 
could have encoded within it the sum total of everything known to date 
about everything and anything, along with reasoning at least equal to 
the best human in each field of inquiry. Taking this example to its 
logical extreme, the question then becomes:
Could any all-knowing mind solve the Halting Problem?

> Spelled out, this would be the question:  "Is there a TM that can tell 
> the halting behavior of any TM-input pair, where input is equal to the 
> code of the given TM itself?"  I still cannot see anything invalid, 
> not even with this (sub-)question: for instance, one could conceive a 
> TM that does negation of its input: when the input corresponds to the 
> code of this TM itself, nothing "invalid" happens.  As another 
> example, the putative universal halt decider, even if assumed to be a 
> TM, should have no problems in telling its own halting behavior: in 
> fact, by construction hypothesis, it always halts.
>
> That said, I still have to see a working and convincing proof (except 
> maybe for the ruling out of impredicativity in the second-order proof; 
> my fault for not having a final answer on this issue), but, as said, 
> that would be an objection to the proof, not to the question (not to 
> the original one and, most probably, not even to the sub-question).
>
> -LV
>
>

Peter
6/16/2012 3:55:43 PM
"Peter Olcott" <OCR4Screen> wrote in message 
news:JZ-dneL4K_TiMUHSnZ2dnUVZ_v6dnZ2d@giganews.com...
> On 6/16/2012 10:20 AM, LudovicoVan wrote:
>> "Peter Olcott" <OCR4Screen> wrote in message 
>> news:i_SdnUZNm_rxAUHSnZ2dnUVZ_qudnZ2d@giganews.com...
>>> On 6/16/2012 8:14 AM, LudovicoVan wrote:
>>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>>> news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
>>>> <snip>
>>>>
>>>>> The question behind the Pathological Self Reference form of the 
>>>>> Halting Problem would be unsolvable even for an all-knowing mind 
>>>>> because this question is ill-formed.
>>>>
>>>> Sorry, maybe I have missed the critical posts, but which question is 
>>>> this? Could you please state it?
>>>>
>>>> I know the usual one: "Is there a TM that can tell the halting behavior 
>>>> of any TM-input pair?", and the answer is (provably) no.  Then one 
>>>> might question the proof, but the question seems valid.
>>>
>>> "Does the input TM instance halt on its input?"
>>
>> That would be a sub-question,
>
> It is the core question behind the reason why the Pathological 
> Self-Reference form of the Halting Problem can not be solved.

But you now move to yet another different question.

>> where you are constraining the input to be the given TM's code, so 
>> implicitly assuming that you have a coding for the TMs (which is 
>> something not necessarily implied by the original, more general 
>> question).
>
> When one takes into account AI knowledge representation, then this TM 
> could have encoded within it the sum total of everything known to date 
> about everything and anything, along with reasoning at least equal to the 
> best human in each field of inquiry. Taking this example to its logical 
> extreme, the question then becomes:
> Could any all-knowing mind solve the Halting Problem?

I do not see the "continuity" from the original question: never mind, in 
this case, by definition, the answer would be "yes".  Of course, it isn't 
about a TM and not even about human minds.  OTOH, I still cannot see how 
the original question (the "halting problem") should be ill-formed.

-LV
 

0
julio (505)
6/16/2012 6:43:28 PM
On 6/16/2012 1:43 PM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:JZ-dneL4K_TiMUHSnZ2dnUVZ_v6dnZ2d@giganews.com...
>> On 6/16/2012 10:20 AM, LudovicoVan wrote:
>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>> news:i_SdnUZNm_rxAUHSnZ2dnUVZ_qudnZ2d@giganews.com...
>>>> On 6/16/2012 8:14 AM, LudovicoVan wrote:
>>>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>>>> news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
>>>>> <snip>
>>>>>
>>>>>> The question behind the Pathological Self Reference form of the 
>>>>>> Halting Problem would be unsolvable even for an all-knowing mind 
>>>>>> because this question is ill-formed.
>>>>>
>>>>> Sorry, maybe I have missed the critical posts, but which question 
>>>>> is this? Could you please state it?
>>>>>
>>>>> I know the usual one: "Is there a TM that can tell the halting 
>>>>> behavior of any TM-input pair?", and the answer is (provably) no.  
>>>>> Then one might question the proof, but the question seems valid.
>>>>
>>>> "Does the input TM instance halt on its input?"
>>>
>>> That would be a sub-question,
>>
>> It is the core question behind the reason why the Pathological 
>> Self-Reference form of the Halting Problem can not be solved.
>
> But you now move to yet another different question.

No, it has always been the same question for the several thousand times 
that I have stated it.

>
>>> where you are constraining the input to be the given TM's code, so 
>>> implicitly assuming that you have a coding for the TMs (which is 
>>> something not necessarily implied by the original, more general 
>>> question).
>>
>> When one takes into account AI knowledge representation, then this TM 
>> could have encoded within it the sum total of everything known to 
>> date about everything and anything, along with reasoning at least 
>> equal to the best human in each field of inquiry. Taking this example 
>> to its logical extreme, the question then becomes:
>> Could any all-knowing mind solve the Halting Problem?
>
> I do not see the "continuity" from the original question: never mind, 
> in this case, by definition, the answer would be "yes".  Of course, it 
> isn't about a TM and not even about human  minds.  OTOH, I still 
> cannot see how the original question (the "halting problem") should be 
> ill-formed.
>
> -LV
>
If no element of the set of Turing Machines can solve the self-reference 
form of the Halting Problem, then the element of this set that would 
derive a mind that knows everything currently known would also not be 
able to solve the HP. Since this element would necessarily also know the 
syntax and semantics of every human language that ever existed, it could 
be asked this question in English: "Will this TM instance halt on this 
input instance?"

If this TM can not answer this question then this question must 
necessarily be ill-formed.
0
Peter
6/16/2012 7:06:59 PM
"Peter Olcott" <OCR4Screen> wrote in message 
news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com...
<snip>

> If this TM can not answer this question then this question must 
> necessarily be ill-formed.

It cannot be a TM, while the question can be and has been answered.  I 
cannot follow your line of reasoning, in fact I cannot see any line of 
reasoning.

-LV
 

0
julio (505)
6/16/2012 7:15:05 PM
On 6/16/2012 2:15 PM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com...
> <snip>
>
>> If this TM can not answer this question then this question must 
>> necessarily be ill-formed.
>
> It cannot be a TM, while the question can be and has been answered.  I 
> cannot follow your line of reasoning, in fact I cannot see any line of 
> reasoning.
>
> -LV
>
>
Can a TM compute anything computable?
Can anything computable include knowledge?
Can knowledge include knowledge of the syntax and semantics of human 
languages?


0
Peter
6/16/2012 8:03:33 PM
On Jun 17, 1:20=A0am, "LudovicoVan" <ju...@diegidio.name> wrote:
> "Peter Olcott" <OCR4Screen> wrote in message
>
>
> > On 6/16/2012 8:14 AM, LudovicoVan wrote:
> >> "Peter Olcott" <OCR4Screen> wrote in message
> >>news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
> >> <snip>
>
> >>> The question behind the Pathological Self Reference form of the Halti=
ng
> >>> Problem would be unsolvable even for an all-knowing mind because this
> >>> question is ill-formed.
>
> >> Sorry, maybe I have missed the critical posts, but which question is
> >> this? Could you please state it?
>
> >> I know the usual one: "Is there a TM that can tell the halting behavio=
r
> >> of any TM-input pair?", and the answer is (provably) no. =A0Then one m=
ight
> >> question the proof, but the question seems valid.
>
> > "Does the input TM instance halt on its input?"
>
> That would be a sub-question, where you are constraining the input to be =
the
> given TM's code, so implicitly assuming that you have a coding for the TM=
s
> (which is something not necessarily implied by the original, more general
> question). =A0Spelled out, this would be the question: =A0"Is there a TM =
that
> can tell the halting behavior of any TM-input pair, where input is equal =
to
> the code of the given TM itself?" =A0I still cannot see anything invalid,=
 not



It's a self reference.

It's like saying...  you can never avoid infinite loops when
programming
because you could program an infinite loop (on the condition that it
detected itself didn't infinite loop)

yeh so?  DONT PROGRAM THAT!

Herc
0
6/16/2012 9:23:09 PM
On Jun 16, 9:31 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> Graham Cooper <grahamcoop...@gmail.com> writes:
>
> <snip>
>
> > How can an algorithm be "impossible" when even
> > the ORACLE VERSION of the Halt(p1) Program,
> > that randomly guesses the right halt() value,
> > can be twisted into a paradox?
>
> > If an Oracle-Halt() gave the right answer,
> > then LOOP IF O-HALTS()
>
> A TM with a halting oracle isn't a TM, so you can't apply the "usual"
> proof.  You can't construct a TM that uses (in your terminology)
> O-HALTS().
>
> You can extend the model of computation, beyond TMs, to include oracle
> machines.  If TM_OH is the set of TMs with a halting oracle, you can
> show that there is no M in TM_OH that decides halting for that set of
> machines, but you can now posit an oracle for this new decidability
> problem.  You get an oracle for deciding halting of machines in TM_OH,
> but, again, no contradiction can be derived because this new machine is
> not in TM_OH.
>
> There is an obvious "and so on".
>
> > How can you DESIGN a PROGRAM to check for INFINITE LOOPS,
> > then get a PROGRAM to CHECK ITSELF for an INFINITE LOOP
> > and do the opposite?
>
> Exactly -- you can't.
>
> > And call that - IMPOSSIBLE TO DETECT INFINITE LOOPS?
>
> There are only two choices: either the construction of the derived
> machine is impossible, or the machine to decide halting is impossible.
> Since the construction is trivial, what are you left with?
>


You don't even follow your own rules of TM construction.

1 Every <TM-n, i> Pair INFINITE LOOPS or NOT
2 ASSUME:  HALT(n,i)  WRITES  ["INFLOOP" | "HALT"] for all [1]
   on the infinite turing tape and then HALTS


You can construct a MONA-LISA Painting algorithm
using the code somewhere X++

Proving a fact about the un-paintability of the Mona Lisa by an
algorithm says nothing about the correctness of the sub-code X++

It's WHAT YOU DO with the HALT PROGRAM that is wrong!

Herc
0
6/16/2012 9:27:40 PM
"Peter Olcott" <OCR4Screen> wrote in message 
news:jrednTblnqYLe0HSnZ2dnUVZ5oKdnZ2d@giganews.com...
> On 6/16/2012 2:15 PM, LudovicoVan wrote:
>> "Peter Olcott" <OCR4Screen> wrote in message 
>> news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com...
>> <snip>
>>
>>> If this TM can not answer this question then this question must 
>>> necessarily be ill-formed.
>>
>> It cannot be a TM, while the question can be and has been answered.  I 
>> cannot follow your line of reasoning, in fact I cannot see any line of 
>> reasoning.
>
> Can a TM compute anything computable?

Yes, a Universal Turing Machine can do so.  (For all we know, "anything 
computable" does not include the halting problem.)

> Can anything computable include knowledge?

No: on the contrary, you could say that knowledge includes things computable 
(so that these are part of what can be known, they do not include or exhaust 
it).

> Can knowledge include knowledge of the syntax and semantics of human 
> languages?

Of course.

Strictly speaking, a TM has no "knowledge", except maybe for the knowledge 
underlying the algorithm it implements, which ultimately remains *our* 
knowledge.

-LV
 

0
julio (505)
6/16/2012 10:08:55 PM
On Jun 17, 8:08 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> "Peter Olcott" <OCR4Screen> wrote in message
>
> news:jrednTblnqYLe0HSnZ2dnUVZ5oKdnZ2d@giganews.com...
>
> > On 6/16/2012 2:15 PM, LudovicoVan wrote:
> >> "Peter Olcott" <OCR4Screen> wrote in message
> >>news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com...
> >> <snip>
>
> >>> If this TM can not answer this question then this question must
> >>> necessarily be ill-formed.
>
> >> It cannot be a TM, while the question can be and has been answered.  I
> >> cannot follow your line of reasoning, in fact I cannot see any line of
> >> reasoning.
>
> > Can a TM compute anything computable?
>
> Yes, a Universal Turing Machine can do so.  (For all we know, "anything
> computable" does not include the halting problem.)
>
> > Can anything computable include knowledge?
>
> No: on the contrary, you could say that knowledge includes things computable
> (so that these are part of what can be known, they do not include or exhaust
> it).
>
> > Can knowledge include knowledge of the syntax and semantics of human
> > languages?
>
> Of course.
>
> Strictly speaking, a TM has no "knowledge", except maybe for the knowledge
> underlying the algorithm it implements, which ultimately remains *our*
> knowledge.
>
> -LV

So your brain is doing something EXTRA than symbol manipulation?

SCORES(BASEBALL, YANKEES, 22, RED-SOX, 14, BLUE-STADIUM, 4/1/2012)

is not a FACT or KNOWLEDGE?



Herc
0
6/16/2012 10:25:37 PM
"Graham Cooper" <grahamcooper7@gmail.com> wrote in message 
news:32a505c1-413f-4e40-9cb1-6d89cc269b8b@s6g2000pbi.googlegroups.com...
> On Jun 17, 1:20 am, "LudovicoVan" <ju...@diegidio.name> wrote:
<snip>

>> I still cannot see anything invalid, not

You have snipped my examples.

> It's a self reference.

So what??

> It's like saying...  you can never avoid infinite loops when
> programming

No, it's not.  (And you can always avoid infinite loops when programming.)

-LV
 

0
julio (505)
6/16/2012 10:37:26 PM
"Graham Cooper" <grahamcooper7@gmail.com> wrote in message 
news:9af0e89d-a755-42ef-97d8-8d2b643101d5@po9g2000pbb.googlegroups.com...
> On Jun 17, 8:08 am, "LudovicoVan" <ju...@diegidio.name> wrote:
>> "Peter Olcott" <OCR4Screen> wrote in message
>> news:jrednTblnqYLe0HSnZ2dnUVZ5oKdnZ2d@giganews.com...
>> > On 6/16/2012 2:15 PM, LudovicoVan wrote:
>> >> "Peter Olcott" <OCR4Screen> wrote in message
>> >>news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com...
>> >> <snip>
>>
>> >>> If this TM can not answer this question then this question must
>> >>> necessarily be ill-formed.
>>
>> >> It cannot be a TM, while the question can be and has been answered.  I
>> >> cannot follow your line of reasoning, in fact I cannot see any line of
>> >> reasoning.
>>
>> > Can a TM compute anything computable?
>>
>> Yes, a Universal Turing Machine can do so.  (For all we know, "anything
>> computable" does not include the halting problem.)
>>
>> > Can anything computable include knowledge?
>>
>> No: on the contrary, you could say that knowledge includes things 
>> computable
>> (so that these are part of what can be known, they do not include or 
>> exhaust
>> it).
>>
>> > Can knowledge include knowledge of the syntax and semantics of human
>> > languages?
>>
>> Of course.
>>
>> Strictly speaking, a TM has no "knowledge", except maybe for the 
>> knowledge
>> underlying the algorithm it implements, which ultimately remains *our*
>> knowledge.
>
> So your brain is doing something EXTRA than symbol manipulation?

Maybe so, but the relevance to TMs and our discussion is ZERO.

> SCORES(BASEBALL, YANKEES, 22, RED-SOX, 14, BLUE-STADIUM, 4/1/2012)
>
> is not a FACT or KNOWLEDGE?

It is *part of* knowledge.  Reread my post after switching your calculators 
on.

-LV
 

0
julio (505)
6/16/2012 10:40:28 PM
On Jun 17, 8:40 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
>
> > On Jun 17, 8:08 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> >> "Peter Olcott" <OCR4Screen> wrote in message
> >>news:jrednTblnqYLe0HSnZ2dnUVZ5oKdnZ2d@giganews.com...
> >> > On 6/16/2012 2:15 PM, LudovicoVan wrote:
> >> >> "Peter Olcott" <OCR4Screen> wrote in message
> >> >>news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com...
> >> >> <snip>
>
> >> >>> If this TM can not answer this question then this question must
> >> >>> necessarily be ill-formed.
>
> >> >> It cannot be a TM, while the question can be and has been answered.  I
> >> >> cannot follow your line of reasoning, in fact I cannot see any line of
> >> >> reasoning.
>
> >> > Can a TM compute anything computable?
>
> >> Yes, a Universal Turing Machine can do so.  (For all we know, "anything
> >> computable" does not include the halting problem.)
>
> >> > Can anything computable include knowledge?
>
> >> No: on the contrary, you could say that knowledge includes things
> >> computable
> >> (so that these are part of what can be known, they do not include or
> >> exhaust
> >> it).
>
> >> > Can knowledge include knowledge of the syntax and semantics of human
> >> > languages?
>
> >> Of course.
>
> >> Strictly speaking, a TM has no "knowledge", except maybe for the
> >> knowledge
> >> underlying the algorithm it implements, which ultimately remains *our*
> >> knowledge.
>
> > So your brain is doing something EXTRA than symbol manipulation?
>
> Maybe so, but the relevance to TMs and our discussion is ZERO.
>
> > SCORES(BASEBALL, YANKEES, 22, RED-SOX, 14, BLUE-STADIUM, 4/1/2012)
>
> > is not a FACT or KNOWLEDGE?
>
> It is *part of* knowledge.  Reread my post after switching your calculators
> on.
>
> -LV

this bit?

  > Can anything computable include knowledge?

   No:
   a TM has no "knowledge"

So Propositions and Predicates(),
sentences and formula that can be TRUE or FALSE
are disjoint to "knowledge" ?

Herc
0
6/16/2012 10:53:31 PM
On Jun 17, 8:37 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
>
>
> <snip>
>
> >> I still cannot see anything invalid, not
>
> You have snipped my examples.
>
> > It's a self reference.
>
> So what??

how could I know?  you snipped the sentence under discussion.



>
> > It's like saying...  you can never avoid infinite loops when
> > programming
>
> No, it's not.  (And you can always avoid infinite loops when programming.)
>
> -LV

Not systematically and still program any computable function.

That *IS* the Halting Proof!

--=That no function can test for infinite loops=--

Obviously basing the notion of *UNCOMPUTABLE*
on a proof about INFINITE-LOOPS would be seen as nonsense, so you get
the neatened up HALT() version.

Herc
0
6/17/2012 12:23:21 AM
"Graham Cooper" <grahamcooper7@gmail.com> wrote in message 
news:ae59ea02-4125-4a6d-96b5-ddf5967dbb7e@l10g2000pbi.googlegroups.com...
> On Jun 17, 8:37 am, "LudovicoVan" <ju...@diegidio.name> wrote:
>> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
<snip>

>> > It's like saying...  you can never avoid infinite loops when
>> > programming
>>
>> No, it's not.  (And you can always avoid infinite loops when 
>> programming.)
>
> Not systematically and still program any computable function.

Yes, systematically: not *generally*.  Then I do not even know what you mean 
by "program[ming] any computable function".

> That *IS* the Halting Proof!
>
> --=That no function can test for infinite loops=--

That is *not* the halting problem, which is to have a function that decides 
halting *in general* (i.e. for any input).

> Obviously basing the notion of *UNCOMPUTABLE*
> on a proof about INFINITE-LOOPS would be seen as nonsense, so you get
> the neatened up HALT() version.

But this is wrong too when one looks nearer: you are assuming that any 
machine that does not halt on some input does so in a periodic manner 
("looping"), which is not necessarily the case.

-LV
 

0
julio (505)
6/17/2012 1:13:54 AM
On Jun 17, 11:13 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
>
> >> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
>
> <snip>
>
> >> > It's like saying...  you can never avoid infinite loops when
> >> > programming
>
> >> No, it's not.  (And you can always avoid infinite loops when
> >> programming.)
>
> > Not systematically and still program any computable function.
>
> Yes, systematically: not *generally*.  Then I do not even know what you mean
> by "program[ming] any computable function".
>
> > That *IS* the Halting Proof!
>
> > --=That no function can test for infinite loops=--
>
> That is *not* the halting problem, which is to have a function that decides
> halting *in general* (i.e. for any input).
>
> > Obviously basing the notion of *UNCOMPUTABLE*
> > on a proof about INFINITE-LOOPS would be seen as nonsense, so you get
> > the neatened up HALT() version.
>
> But this is wrong too when one looks nearer: you are assuming that any
> machine that does not halt on some input does so in a periodic manner
> ("looping"), which is not necessarily the case.
>
> -LV

How can a program hang without going into a loop?

Assuming the computer is working and a CPU has been allocated to the
process.

Herc
0
6/17/2012 1:30:23 AM
"Graham Cooper" <grahamcooper7@gmail.com> wrote in message 
news:f64b22e5-6d09-498e-815d-2bc84b0ae28b@po9g2000pbb.googlegroups.com...
> On Jun 17, 11:13 am, "LudovicoVan" <ju...@diegidio.name> wrote:
<snip>

>> [...] you are assuming that any
>> machine that does not halt on some input does so in a periodic manner
>> ("looping"), which is not necessarily the case.
>
> How can a program hang without going into a loop?

Think an "irrational" machine, i.e. one that goes through its states 
endlessly but in a non-periodic manner.  I think a valid example could be a 
machine that is programmed to print on the tape *every* digit of the 
fractional expansion of pi.  This machine would never terminate, although 
non-periodically.  (Someone please correct me if I am mistaken.)

-LV
 

0
julio (505)
6/17/2012 1:43:31 AM
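LudovicoVan's pi-printing machine can be made concrete. The sketch below is mine, for illustration, and is not code from the thread; it uses Gibbons' unbounded spigot algorithm. It is a finite program whose control flow cycles forever, yet whose state grows without bound, so it never terminates and never repeats a configuration:

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi forever (Gibbons' unbounded
    spigot algorithm).  The program text is finite and its control
    flow cycles through the same few lines, but the state (q, r, t, j)
    grows without bound, so no machine configuration ever repeats."""
    q, r, t, j = 1, 180, 60, 2
    while True:
        u = 3 * (3 * j + 1) * (3 * j + 2)
        y = (q * (27 * j - 12) + 5 * r) // (5 * t)
        yield y
        q, r, t, j = (10 * q * j * (2 * j - 1),
                      10 * u * (q * (5 * j - 2) + r - y * t),
                      t * u,
                      j + 1)

# Truncated only for demonstration; the generator itself never stops:
print(''.join(str(d) for d in islice(pi_digits(), 10)))  # prints 3141592653
```

Run without the islice truncation, this is exactly the kind of machine under discussion: non-halting, but not "stuck in a loop" in the periodic sense.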
On Jun 17, 11:43 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
> > On Jun 17, 11:13 am, "LudovicoVan" <ju...@diegidio.name> wrote:
>
> <snip>
>
> >> [...] you are assuming that any
> >> machine that does not halt on some input does so in a periodic manner
> >> ("looping"), which is not necessarily the case.
>
> > How can a program hang without going into a loop?
>
> Think an "irrational" machine, i.e. one that goes through its states
> endlessly but in a non-periodic manner.  I think a valid example could be a
> machine that is programmed to print on the tape *every* digit of the
> fractional expansion of pi.  This machine would never terminate, although
> non-periodically.  (Someone please correct me if I am mistaken.)
>
> -LV

you have oo-many commands to run on a fixed size S commands program.

Use the pigeon hole principle, how many times will the
WENDs and UNTIL Xs and RETURNs and GOTOs
send the Program Counter back up higher to the same earlier statement?

You're on a LIMB on a LIMB on a LIMB on a LIMB of an argument here!

CHECK-INFINITE-LOOP(program) is "UNCOMPUTABLE"

because you can LOOP IT!

It's like denying the patent for a Wheelie Bin because a Secretary
wrote [STATIONARY] on it!

Herc
0
6/17/2012 2:07:51 AM
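The pigeonhole observation above shows only that the program counter must eventually revisit the same statement; it does not show that the machine's whole configuration (statement plus variables plus tape) ever repeats, which is what a periodic infinite loop requires. A minimal sketch (the function name and line label are mine, hypothetical):

```python
def trace_configurations(steps):
    """Simulate a one-statement loop 'n = n + 1' and record the full
    configuration (program line, variable value) at each step.  The
    line number repeats every step, but the configuration never does,
    so 'the PC returns to an earlier statement' does not imply a
    periodic loop."""
    n = 0
    trace = []
    for _ in range(steps):
        trace.append(("line 10", n))  # PC revisits line 10 each step...
        n += 1                        # ...but n makes every state new
    return trace

trace = trace_configurations(1000)
assert len(set(trace)) == len(trace)  # no configuration ever repeats
```

With an unbounded tape there are infinitely many configurations, so the pigeonhole principle over the finite set of program lines does not force periodicity.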
"Graham Cooper" <grahamcooper7@gmail.com> wrote in message 
news:bb4361bb-845e-44a2-b2d1-a965881dcd9a@qz1g2000pbc.googlegroups.com...
> On Jun 17, 11:43 am, "LudovicoVan" <ju...@diegidio.name> wrote:
>> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
>> > On Jun 17, 11:13 am, "LudovicoVan" <ju...@diegidio.name> wrote:
>> <snip>
>>
>> >> [...] you are assuming that any
>> >> machine that does not halt on some input does so in a periodic manner
>> >> ("looping"), which is not necessarily the case.
>>
>> > How can a program hang without going into a loop?
>>
>> Think an "irrational" machine, i.e. one that goes through its states
>> endlessly but in a non-periodic manner.  I think a valid example could be 
>> a
>> machine that is programmed to print on the tape *every* digit of the
>> fractional expansion of pi.  This machine would never terminate, although
>> non-periodically.  (Someone please correct me if I am mistaken.)
>
> you have oo-many commands to run on a fixed size S commands program.

Not really: there are actual functions that can compute every digit of pi.

> Use the pigeon hole principle, how many times will the
> WENDs and UNTIL Xs and RETURNs and GOTOs
> send the Program Counter back up higher to the same earlier statement?
>
> You're on a LIMB on a LIMB on a LIMB on a LIMB of an argument here!
>
> CHECK-INFINITE-LOOP(program) is "UNCOMPUTABLE"
>
> because you can LOOP IT!
>
> It's like denying the patent for a Wheelie Bin because a Secretary
> wrote [STATIONARY] on it!

Yes, I understand your doubts here: I just haven't got a final answer on 
this myself.  --  I will and maybe you too should investigate other proofs 
than the one I have attempted...

-LV
 

0
julio (505)
6/17/2012 2:22:47 AM
On Jun 17, 12:22 pm, "LudovicoVan" <ju...@diegidio.name> wrote:
> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
>
> > On Jun 17, 11:43 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> >> "Graham Cooper" <grahamcoop...@gmail.com> wrote in message
> >> > On Jun 17, 11:13 am, "LudovicoVan" <ju...@diegidio.name> wrote:
> >> <snip>
>
> >> >> [...] you are assuming that any
> >> >> machine that does not halt on some input does so in a periodic manner
> >> >> ("looping"), which is not necessarily the case.
>
> >> > How can a program hang without going into a loop?
>
> >> Think an "irrational" machine, i.e. one that goes through its states
> >> endlessly but in a non-periodic manner.  I think a valid example could be
> >> a
> >> machine that is programmed to print on the tape *every* digit of the
> >> fractional expansion of pi.  This machine would never terminate, although
> >> non-periodically.  (Someone please correct me if I am mistaken.)
>
> > you have oo-many commands to run on a fixed size S commands program.
>
> Not really: there are actual functions that can compute every digit of pi.

WITHIN A LOOP!

This is a FIXED LENGTH PROGRAM

LINE 1
LINE 2
LINE 3
LINE 4
LINE 5

Now tell me... WHAT INFINITE LINE # SEQUENCE doesn't LOOP?

<1 4 5 2 3... >


>
> > Use the pigeon hole principle, how many times will the
> > WENDs and UNTIL Xs and RETURNs and GOTOs
> > send the Program Counter back up higher to the same earlier statement?
>
> > You're on a LIMB on a LIMB on a LIMB on a LIMB of an argument here!
>
> > CHECK-INFINITE-LOOP(program) is "UNCOMPUTABLE"
>
> > because you can LOOP IT!
>
> > It's like denying the patent for a Wheelie Bin because a Secretary
> > wrote [STATIONARY] on it!
>
> Yes, I understand your doubts here: I just haven't got a final answer on
> this myself.  --  I will and maybe you too should investigate other proofs
> than the one I have attempted...
>
> -LV

There's a Diagonal Version in The Emperor's New Mind.

The one we did was different.

Program a function

RUNIFHALTS(f,a)
  IF HALT(f,a) THEN RETURN f(a)

Then program a function

ADDONE(f, a)
  IF HALT(f,a) THEN RETURN f(a)+1

then you can find a contradiction by listing all programs.


THIS IS MY VERSION

ASSUME CHECK-INF-LOOP(program) EXISTS

program5
10 IF CHECK-INF-LOOP('program5') THEN GOTO 100
20 GOTO 10
100 PRINT "1 LINE LEFT UNTIL HALT"

you know the rest.. it's around here somewhere..

Herc
0
6/17/2012 2:34:55 AM
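Herc's program5 and the RUNIFHALTS/ADDONE pair are both instances of the standard diagonal construction. A minimal runnable sketch (the names make_paradox and says_never_halts are mine, for illustration): given any claimed total halting decider, one can build a program the decider must misjudge:

```python
def make_paradox(halts):
    """Given a claimed total halting decider halts(p) -> bool,
    build a program that does the opposite of the prediction."""
    def paradox():
        if halts(paradox):
            while True:      # predicted to halt, so loop forever
                pass
        # predicted to loop, so halt immediately
    return paradox

# A toy "decider" that always answers "never halts":
def says_never_halts(program):
    return False

p = make_paradox(says_never_halts)
p()                          # returns, i.e. p halts...
print(says_never_halts(p))   # ...yet the decider said False: wrong on p
```

Had the toy decider answered True instead, p would loop forever, again contradicting the prediction; that is the whole of the diagonal argument.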
On 6/16/2012 5:08 PM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:jrednTblnqYLe0HSnZ2dnUVZ5oKdnZ2d@giganews.com...
>> On 6/16/2012 2:15 PM, LudovicoVan wrote:
>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>> news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com...
>>> <snip>
>>>
>>>> If this TM can not answer this question then this question must 
>>>> necessarily be ill-formed.
>>>
>>> It cannot be a TM, while the question can be and has been answered.  
>>> I cannot follow your line of reasoning, in fact I cannot see any 
>>> line of reasoning.
>>
>> Can a TM compute anything computable?
>
> Yes, an Universal Turing Machine can do so.  (For all we know, 
> "anything computable" does not include the halting problem.)
>
>> Can anything computable include knowledge?
>
> No: on the contrary, you could say that knowledge includes things 
> computable (so that these are part of what can be known, they do not 
> include or exhaust it).
>
>> Can knowledge include knowledge of the syntax and semantics of human 
>> languages?
>
> Of course.
>
> Strictly speaking, a TM has no "knowledge", except maybe for the 
> knowledge underlying the algorithm it implements, which ultimately 
> remains *our* knowledge.
>
> -LV
>
>
That statement would not be true for every possible future technological 
innovation within computer science.
http://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning

A computing machine 10,000 years from now could learn new things by 
reading about them or watching them on TV, or even simply figuring them 
out for itself.

0
Peter
6/17/2012 3:40:39 AM
In comp.theory Graham Cooper <grahamcooper7@gmail.com> wrote:
 
> WITHIN A LOOP!
> 
> This is FIXED LENGTH PROGRAM
> 
> LINE 1
> LINE 2
> LINE 3
> LINE 4
> LINE 5
> 
> Now tell me... WHAT INFINITE LINE # SEQUENCE doesn't LOOP?

It depends on how you define "loop", really. I wouldn't think
of the following program as looping, for instance -- even though
it revisits earlier states:

 1 n -> 1
 2 goto 100 
10 print "do"
20 print "re" 
30 print "mi"
40 print "fa" 
50 print "so"
60 print "la"
70 print "si"
80 print ""

100 x -> 1 + calculateNthDigitOfPi( n )
110 n -> n + 1
120 on x goto 10, 20, 30, 40, 50, 60, 70, 80, 80, 80

-- 
Leif Roar Moldskred
0
leifm1143 (162)
6/17/2012 7:58:17 AM
On Jun 17, 5:58 pm, Leif Roar Moldskred <le...@dimnakorr.com> wrote:
> In comp.theory Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > WITHIN A LOOP!
>
> > This is a FIXED LENGTH PROGRAM
>
> > LINE 1
> > LINE 2
> > LINE 3
> > LINE 4
> > LINE 5
>
> > Now tell me... WHAT INFINITE LINE # SEQUENCE doesn't LOOP?
>
> It depends on how you define "loop", really. I wouldn't think
> of the following program as looping, for instance -- even though
> it revisits earlier states:
>
>  1 n -> 1
>  2 goto 100
> 10 print "do"
> 20 print "re"
> 30 print "mi"
> 40 print "fa"
> 50 print "so"
> 60 print "la"
> 70 print "si"
> 80 print ""
>
> 100 x -> 1 + calculateNthDigitOfPi( n )
> 110 n -> n + 1
> 120 on x goto 10, 20, 30, 40, 50, 60, 70, 80, 80, 80
>


Unfortunately we don't have sound here on ole usenet,
but what would the output of that program be?

I lost my voice from  shouting last week and had to learn to sing
again btw!

Herc
0
6/17/2012 9:03:05 AM
"Peter Olcott" <OCR4Screen> wrote in message 
news:MtGdnd3-rYMqzEDSnZ2dnUVZ_qednZ2d@giganews.com...
> On 6/16/2012 5:08 PM, LudovicoVan wrote:
<snip>

>> Strictly speaking, a TM has no "knowledge", except maybe for the 
>> knowledge underlying the algorithm it implements, which ultimately 
>> remains *our* knowledge.
>
> That statement would not be true for every possible future technological 
> innovation within computer science.

No, it is true, strictly true!  The day machines will be intelligent, that 
day machines will not be machines as we now conceive them: IOW, a *paradigm 
shift* is needed, without which machines getting intelligent (in 1 or 10K 
years, by plan or by accident: whatever) is only sci-fi.

-LV
 

0
julio (505)
6/17/2012 12:44:35 PM
On 6/17/2012 7:44 AM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:MtGdnd3-rYMqzEDSnZ2dnUVZ_qednZ2d@giganews.com...
>> On 6/16/2012 5:08 PM, LudovicoVan wrote:
> <snip>
>
>>> Strictly speaking, a TM has no "knowledge", except maybe for the 
>>> knowledge underlying the algorithm it implements, which ultimately 
>>> remains *our* knowledge.
>>
>> That statement would not be true for every possible future 
>> technological innovation within computer science.
>
> No, it is true, strictly true!  The day machine will be intelligent, 
> that day machines will not be machines as we now conceive them: IOW, a 
> *paradigm shift* is needed, without which machines getting intelligent 
> (in 1 or 10K years, by plan or by accident: whatever) is only sci-fi.
>
> -LV
>
>
So as soon as a Turing Computable algorithm acquires sufficient 
capability to exceed the human mind, even though it is Turing 
Computable, we arbitrarily and capriciously by fiat declare that this is 
no longer a Turing Machine?
0
Peter
6/17/2012 4:57:52 PM
On Jun 16, 9:31 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> Graham Cooper <grahamcoop...@gmail.com> writes:
>
> <snip>
>
> > How can an algorithm be "impossible" when even
> > the ORACLE VERSION of the Halt(p1) Program,
> > that randomly guesses the right halt() value,
> > can be twisted into a paradox?
>
> > If an Oracle-Halt() gave the right answer,
> > then LOOP IF O-HALTS()
>
> A TM with a halting oracle isn't a TM, so you can't apply the "usual"
> proof.  You can't construct a TM that uses (in your terminology)
> O-HALTS().
>
> You can extend the model of computation, beyond TMs, to include oracle
> machines.  If TM_OH is the set of TMs with a halting oracle, you can
> show that there is no M in TM_OH that decides halting for that set of
> machines, but you can now posit an oracle for this new decidability
> problem.  You get an oracle for deciding halting of machines in TM_OH,
> but, again, no contradiction can be derived because this new machine is
> not in TM_OH.
>
> There is an obvious "and so on".

I see how that works!

Your constructed program is not in the form of a TuringMachine

FUNCTION LOOP(x)
  IF HALT(x,x) RETURN TRUE

It may have an EQUIVALENT TURING MACHINE
but you can't just swap representations mid proof.

The only way to implement the RETURN function is with a
FUNCTION AND PARAMETER CALL RETURN STACK

Then you can trivially branch out of your OracleHalt() Induction.

oo
....
program-n+1 <... ( if OH(program-n) ... >
program-n   <... ( if OH(program-n-1) ... >
program-n-1 <... ( if OH(program-n-2) ... >
....
program-2 <... ( if OH(program-1) ... >
program-1 <... ( if OH(program-X) ... >

BY CHECKING THERE IS NOTHING ON THE STACK
I.E. NO OTHER FUNCTION IS CALLING THE FUNCTION BEING RUN




>
> > How can you DESIGN a PROGRAM to check for INFINITE LOOPS,
> > then get a PROGRAM to CHECK ITSELF for an INFINITE LOOP
> > and do the opposite?
>
> Exactly -- you can't.
>
> > And call that - IMPOSSIBLE TO DETECT INFINITE LOOPS?
>
> There are only two choices: either the construction of the derived
> machine is impossible, or the machine to decide halting is impossible.
> Since the construction is trivial, what are you left with?
>

You SWITCH BETWEEN A AND B TO SUIT.

A1 Every <TM-n, i> Pair INFINITE LOOPS or NOT
A2 ASSUME: HALT(n,i)  WRITES  ["INFLOOP" | "HALT"] for all [1]
   on the infinite turing tape and then HALTS

B1 Every <TM-n, i> Pair INFINITE LOOPS or NOT
B2 ASSUME: HALT(n,i)  RETURNS  ["INFLOOP" | "HALT"] for all [1]

C1 Every <TM-n, i> Pair INFINITE LOOPS or NOT
C2 ASSUME: HALT(n,i)  RETURNS
         ["LOOP" | "HALT" | "OutOfRange"] for all [1]

Only B2 is trivially contradictory

Herc
0
6/17/2012 6:13:59 PM
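The oracle-hierarchy exchange above turns on the usual diagonal construction. As a hedged sketch (Python callables standing in for TMs; the names `make_m` and `claims_all_halt` are invented for this illustration, not taken from the thread's formalism):

```python
def make_m(h):
    """Build the 'contrary' program M for a claimed halt decider h.

    h(prog, arg) is assumed to return True iff prog(arg) halts.
    """
    def m(x):
        if h(m, x):       # h claims "M halts on x" ...
            while True:   # ... so M loops forever instead
                pass
        return None       # h claims "M loops on x", so M halts at once
    return m

# A toy (necessarily wrong) decider: it claims everything halts.
def claims_all_halt(prog, arg):
    return True

m = make_m(claims_all_halt)
# The decider says m halts on 0, but by construction m(0) would then
# loop forever -- the decider is wrong on its own diagonal input.
print(claims_all_halt(m, 0))  # -> True
```

The same construction defeats any total `h` drawn from the same machine class, which is the point of Ben's "and so on": adding an oracle only moves the diagonal one level up.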
"Peter Olcott" <OCR4Screen> wrote in message 
news:64SdnaEnn_8NkUPSnZ2dnUVZ_jWdnZ2d@giganews.com...
> On 6/17/2012 7:44 AM, LudovicoVan wrote:
>> "Peter Olcott" <OCR4Screen> wrote in message 
>> news:MtGdnd3-rYMqzEDSnZ2dnUVZ_qednZ2d@giganews.com...
>>> On 6/16/2012 5:08 PM, LudovicoVan wrote:
>> <snip>
>>
>>>> Strictly speaking, a TM has no "knowledge", except maybe for the 
>>>> knowledge underlying the algorithm it implements, which ultimately 
>>>> remains *our* knowledge.
>>>
>>> That statement would not be true for every possible future technological 
>>> innovation within computer science.
>>
>> No, it is true, strictly true!  The day machine will be intelligent, that 
>> day machines will not be machines as we now conceive them: IOW, a 
>> *paradigm shift* is needed, without which machines getting intelligent 
>> (in 1 or 10K years, by plan or by accident: whatever) is only sci-fi.
>
> So as soon as a Turing Computable algorithm acquires sufficient capability 
> to exceed the human mind, even though it is Turing Computable, we 
> arbitrarily and capriciously by fiat declare that this is no longer a Turing 
> Machine?

So you don't know what "paradigm shift" means and cannot understand what I 
said.

Stop pretending and start learning, troll.

-LV
 

0
julio (505)
6/17/2012 6:39:37 PM
"LudovicoVan" <julio@diegidio.name> wrote in message 
news:jrl897$sfj$1@speranza.aioe.org...
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:64SdnaEnn_8NkUPSnZ2dnUVZ_jWdnZ2d@giganews.com...
>> On 6/17/2012 7:44 AM, LudovicoVan wrote:
>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>> news:MtGdnd3-rYMqzEDSnZ2dnUVZ_qednZ2d@giganews.com...
>>>> On 6/16/2012 5:08 PM, LudovicoVan wrote:
>>> <snip>
>>>
>>>>> Strictly speaking, a TM has no "knowledge", except maybe for the 
>>>>> knowledge underlying the algorithm it implements, which ultimately 
>>>>> remains *our* knowledge.
>>>>
>>>> That statement would not be true for every possible future 
>>>> technological innovation within computer science.
>>>
>>> No, it is true, strictly true!  The day machine will be intelligent, 
>>> that day machines will not be machines as we now conceive them: IOW, a 
>>> *paradigm shift* is needed, without which machines getting intelligent 
>>> (in 1 or 10K years, by plan or by accident: whatever) is only sci-fi.
>>
>> So as soon as a Turing Computable algorithm acquires sufficient 
>> capability to exceed the human mind, even though it is Turing Computable, 
>> we arbitrarily and capriciously by fiat declare that this is no longer a 
>> Turing Machine?
>
> So you don't know what "paradigm shift" means and cannot understand what I 
> said.
>
> Stop pretending and start learning, troll.
>
> -LV
>
>
Jackass! 


0
NoSpam271 (937)
6/17/2012 7:05:40 PM
"Peter Olcott" <NoSpam@OCR4Screen.com> wrote in message 
news:4LKdnXmgl7sbt0PSnZ2dnUVZ_sSdnZ2d@giganews.com...
> "LudovicoVan" <julio@diegidio.name> wrote in message 
> news:jrl897$sfj$1@speranza.aioe.org...
>> "Peter Olcott" <OCR4Screen> wrote in message 
>> news:64SdnaEnn_8NkUPSnZ2dnUVZ_jWdnZ2d@giganews.com...
>>> On 6/17/2012 7:44 AM, LudovicoVan wrote:
>>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>>> news:MtGdnd3-rYMqzEDSnZ2dnUVZ_qednZ2d@giganews.com...
>>>>> On 6/16/2012 5:08 PM, LudovicoVan wrote:
>>>> <snip>
>>>>
>>>>>> Strictly speaking, a TM has no "knowledge", except maybe for the 
>>>>>> knowledge underlying the algorithm it implements, which ultimately 
>>>>>> remains *our* knowledge.
>>>>>
>>>>> That statement would not be true for every possible future 
>>>>> technological innovation within computer science.
>>>>
>>>> No, it is true, strictly true!  The day machine will be intelligent, 
>>>> that day machines will not be machines as we now conceive them: IOW, a 
>>>> *paradigm shift* is needed, without which machines getting intelligent 
>>>> (in 1 or 10K years, by plan or by accident: whatever) is only sci-fi.
>>>
>>> So as soon as a Turing Computable algorithm acquires sufficient 
>>> capability to exceed the human mind, even though it is Turing 
>>> Computable, we arbitrarily and capriciously by fiat declare that this is 
>>> no longer a Turing Machine?
>>
>> So you don't know what "paradigm shift" means and cannot understand what 
>> I said.
>>
>> Stop pretending and start learning, troll.
>
> Jackass!

Idiot, you have not been able to put together a single argument in 
hundreds and hundreds of posts.  You are a troll and not even that good 
at it, although maybe good enough for sci.logic where there is not very much 
left to do in any case (apart from burning down the house and restarting 
from scratch).  In the meantime, keep enjoying your own company and that of 
those who have nothing better to do.

-LV
 

0
julio (505)
6/17/2012 7:10:24 PM
"LudovicoVan" <julio@diegidio.name> wrote in message 
news:jrl897$sfj$1@speranza.aioe.org...
> "Peter Olcott" <OCR4Screen> wrote in message 
> news:64SdnaEnn_8NkUPSnZ2dnUVZ_jWdnZ2d@giganews.com...
>> On 6/17/2012 7:44 AM, LudovicoVan wrote:
>>> "Peter Olcott" <OCR4Screen> wrote in message 
>>> news:MtGdnd3-rYMqzEDSnZ2dnUVZ_qednZ2d@giganews.com...
>>>> On 6/16/2012 5:08 PM, LudovicoVan wrote:
>>> <snip>
>>>
>>>>> Strictly speaking, a TM has no "knowledge", except maybe for the 
>>>>> knowledge underlying the algorithm it implements, which ultimately 
>>>>> remains *our* knowledge.
>>>>
>>>> That statement would not be true for every possible future 
>>>> technological innovation within computer science.
>>>
>>> No, it is true, strictly true!  The day machine will be intelligent, 
>>> that day machines will not be machines as we now conceive them: IOW, a 
>>> *paradigm shift* is needed, without which machines getting intelligent 
>>> (in 1 or 10K years, by plan or by accident: whatever) is only sci-fi.
>>
>> So as soon as a Turing Computable algorithm acquires sufficient 
>> capability to exceed the human mind, even though it is Turing Computable, 
>> we arbitrarily and capriciously by fiat declare that this is no longer a 
>> Turing Machine?
>
> So you don't know what "paradigm shift" means and cannot understand what I 
> said.
>

The problem with the Halting Problem is that, because of the 
Church/Turing thesis, it must apply to every possible future computing 
technology whatsoever, since it must apply to anything at all that is 
ever computable, paradigm shift or not.

If it is computable, then the Church/Turing thesis applies; otherwise it is 
*not* computable. If a mind far greater than the human mind is ever 
computable, then the Church/Turing thesis necessarily also applies to this 
mind too. 


0
NoSpam271 (937)
6/17/2012 7:19:03 PM
"Peter Olcott" <NoSpam@OCR4Screen.com> writes:
<snip>
> The problem with the Halting Problem is that, because of the 
> Church/Turing thesis, it must apply to every possible future computing 
> technology whatsoever, since it must apply to anything at all that is 
> ever computable, paradigm shift or not.
>
> If it is computable, then the Church/Turing thesis applies; otherwise it is 
> *not* computable. If a mind far greater than the human mind is ever 
> computable, then the Church/Turing thesis necessarily also applies to this 
> mind too. 

Oh dear.  First you mash up the halting problem, and now you turn your
attention to the Church-Turing thesis.

First, it is not a theorem, so you can't draw conclusions from it.  At
best you can make statements conditional on it.  Secondly, it does not
say what you think it says:

  "The Church-Turing thesis does not entail that the brain (or the mind,
  or consciousness) can be modelled by a Turing machine program, not
  even in conjunction with the belief that the brain (or mind, etc.) is
  scientifically explicable, or exhibits a systematic pattern of
  responses to the environment, or is 'rule-governed' (etc.)."

  "... one frequently encounters the view that psychology must be
  capable of being expressed ultimately in terms of the Turing machine
  (e.g. Fodor 1981: 130; Boden 1988: 259).  To one who makes the error,
  conceptual space will seem to contain no room for mechanical models of
  the mind that are not equivalent to Turing machines.  Yet it is
  certainly possible that psychology will find the need to employ models
  of human cognition that transcend Turing machines."

                   http://plato.stanford.edu/entries/church-turing/

-- 
Ben.
0
ben.usenet (6790)
6/17/2012 8:27:23 PM
On Jun 17, 3:19 pm, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
> The problem with the Halting Problem is that, because of the
> Church/Turing thesis, it must apply

You DON'T UNDERSTAND Church's Thesis.
If you did,  you would know that IT IS NOT RELEVANT to this problem!

> to every possible future computing  technology

NO, IT DOESN'T!!!! DAMN, YOU ARE *STUPID*!!!
The halting problem is the halting problem OF TMs, BY TMs, and FOR
TMs!
IT *ABSOLUTELY* IS *NOT* ABOUT " the human mind " or "all human
knowledge "
OR ANY OTHER technology!!!!!!!!!!!!
EXCEPT TMs!!
!!!!!!!!!DAMN!!!!!!!!!!!!!!!!!!!!!!
0
greeneg9613 (188)
6/17/2012 9:37:44 PM
On Jun 17, 4:27 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> Yet it is
> certainly possible that psychology will find the need to employ models
> of human cognition that transcend Turing machines.
This is not something that any self-respecting logician ought to be
caught saying.
If Psychologists were to "find the need" to do this, then that would
be AT BEST
a PSYCHOLOGICAL need and NOT a logical need.
The human brain IS FINITE AND TMs ARE UNBOUNDEDLY finite.
You would have to start theorizing long and hard about the pace of
evolution of the brain
and the quantum electrodynamics of its neurobiology BEFORE you could
even SUSPECT
that psychologists OR NEUROLOGISTS could EVER say ANYthing *coherent*
about
"transcending" TMs.
0
greeneg9613 (188)
6/17/2012 9:40:12 PM
On Jun 18, 7:37 am, George Greene <gree...@email.unc.edu> wrote:
> On Jun 17, 3:19 pm, "Peter Olcott" <NoS...@OCR4Screen.com> wrote:
>
> > The problem with the Halting Problem is that, because of the
> > Church/Turing thesis, it must apply
>
> You DON'T UNDERSTAND Church's Thesis.
> If you did, you would know that IT IS NOT RELEVANT to this problem!
>
> > to every possible future computing technology
>
> NO, IT DOESN'T!!!! DAMN, YOU ARE *STUPID*!!!
> The halting problem is the halting problem OF TMs, BY TMs, and FOR
> TMs!
> IT *ABSOLUTELY* IS *NOT* ABOUT " the human mind " or "all human
> knowledge "
> OR ANY OTHER technology!!!!!!!!!!!!
> EXCEPT TMs!!
> !!!!!!!!!DAMN!!!!!!!!!!!!!!!!!!!!!!

Then why does it - the disproof of H() - use Function definitions?
And a Function Return Stack?

Herc
0
6/17/2012 10:23:57 PM
On 6/17/2012 4:37 PM, George Greene wrote:
> On Jun 17, 3:19 pm, "Peter Olcott"<NoS...@OCR4Screen.com>  wrote:
>> The problem with the Halting Problem is that, because of the
>> Church/Turing thesis, it must apply
> You DON'T UNDERSTAND Church's Thesis.
> If you did,  you would know that IT IS NOT RELEVANT to this problem!
>
>> to every possible future computing  technology
> NO, IT DOESN'T!!!! DAMN, YOU ARE *STUPID*!!!
> The halting problem is the halting problem OF TMs, BY TMs, and FOR
> TMs!
> IT *ABSOLUTELY* IS *NOT* ABOUT " the human mind " or "all human
> knowledge "

Also, don't forget that this never happened:
http://www-03.ibm.com/innovation/us/watson/index.html

Also the Church/Turing thesis does not apply to how Watson was implemented, 
because Watson is *not* computable.

More on, More on More on...

> OR ANY OTHER technology!!!!!!!!!!!!
> EXCEPT TMs!!
> !!!!!!!!!DAMN!!!!!!!!!!!!!!!!!!!!!!

0
Peter
6/18/2012 2:29:32 AM
Peter Olcott wrote:
> If a yes or no question does not have a correct yes or no answer then
> there must be something wrong with this question.
>
> More generally:
> An ill-formed question is defined as any question that lacks a correct
> answer from the set of all possible answers.
>
> The *only* reason that the self reference form of the Halting Problem
> can not be solved is that neither of the two possible final states of
> any potential halt decider TM corresponds to whether or not its input TM
> will halt on its input.
>
> In other words for potential
> halt decider H and input M:
> ---------------------------
> Not<ThereExists>
> <ElementOfSet>
> FinalStatesOf_H
> <MathematicallyMapsTo>
> Halts(M, H, input)
>
> Where M is defined as
> ---------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt
>
> The only difference between asking a yes or no question and the
> invocation of a Turing Machine is a natural language interface.
>
> Within a natural language interface the invocation of H(M, H, M) would
> be specified as:
>
> "Does Turing Machine M halt on input of Turing Machine H
> and Turing Machine M?"
>
> Within a natural language interface the answer to this question would be
> specified as "yes" or "no" and map to the final states of H of accept or
> reject.
>
> So the only reason that the self reference form of the Halting Problem
> can not be solved is that it is based on a yes or no question that lacks
> a correct yes or no answer, and thereby derives an ill-formed question.

Peter, I really know nothing about the Entscheidungsproblem. It is out
of my league (I am below all leagues), but I've been thinking.

If the problem is that the thing is just ill-formed, why not re-form
it to well-form it?

http://www.mcescher.net/Escher/dragon.jpg

We can't do that because it'd amount to reforming all of math, which
might then end up not being called math at all? With permission, what
would you call it?

Tom
0
tkorna (20)
6/18/2012 9:29:46 AM
On 6/18/2012 4:29 AM, Tom wrote:
> Peter Olcott wrote:
>> If a yes or no question does not have a correct yes or no answer then
>> there must be something wrong with this question.
>>
>> More generally:
>> An ill-formed question is defined as any question that lacks a correct
>> answer from the set of all possible answers.
>>
>> The *only* reason that the self reference form of the Halting Problem
>> can not be solved is that neither of the two possible final states of
>> any potential halt decider TM corresponds to whether or not its input TM
>> will halt on its input.
>>
>> In other words for potential
>> halt decider H and input M:
>> ---------------------------
>> Not<ThereExists>
>> <ElementOfSet>
>> FinalStatesOf_H
>> <MathematicallyMapsTo>
>> Halts(M, H, input)
>>
>> Where M is defined as
>> ---------------------
>> M(String H, String input):
>> if H(input, H, input) loop
>> else halt
>>
>> The only difference between asking a yes or no question and the
>> invocation of a Turing Machine is a natural language interface.
>>
>> Within a natural language interface the invocation of H(M, H, M) would
>> be specified as:
>>
>> "Does Turing Machine M halt on input of Turing Machine H
>> and Turing Machine M?"
>>
>> Within a natural language interface the answer to this question would be
>> specified as "yes" or "no" and map to the final states of H of accept or
>> reject.
>>
>> So the only reason that the self reference form of the Halting Problem
>> can not be solved is that it is based on a yes or no question that lacks
>> a correct yes or no answer, and thereby derives an ill-formed question.
> Peter, I really know nothing about the Entscheidungsproblem. It is out
> of my league (I am below all leagues), but I've been thinking.
>
> If the problem is that the thing is just ill-formed, why not re-form
> it to well-form it?
>
> http://www.mcescher.net/Escher/dragon.jpg
>
> We can't do that because it'd amount to reforming all of math, which
> might then end up not being called math at all? With permission, what
> would you call it?
>
> Tom
I would call it Pete, for Pete's sake.
0
Peter
6/18/2012 12:35:17 PM
On 06/16/2012 03:06 PM, Peter Olcott wrote:
> If no element of the set of Turing Machines can solve the self reference
> form of the Halting Problem, then the element of this set that would
> derive a mind that knows everything currently known would also not be
> able to solve the HP. Since this element would necessarily also know the
> syntax and Semantics of every human language that ever existed, it could
> be asked this question in English: "Will this TM instance halt on this
> input instance?"
>
> If this TM can not answer this question then this question must
> necessarily be ill-formed.

Huh? There are *lots* of well-formed questions that can't be answered by 
"a mind that knows everything currently known", simply because it is not 
currently known how to answer them.

Similarly, there are well formed questions that can't be answered by a 
mind that knows everything that *will* ever be known, because many 
things will never be known (there are infinitely many things to know and 
the universe is finite).

What about a TM that knows everything that can *possibly* be known? Is 
that even possible?

No, because it would have to be able to answer all instances of "Will 
this TM instance halt on this input instance?", which are all well 
formed questions, your insistence to the contrary notwithstanding, and 
no TM can correctly answer them all.

Ralph Hartley
0
hartley (156)
6/18/2012 12:49:45 PM
On 6/17/2012 10:29 PM, Peter Olcott wrote:
> On 6/17/2012 4:37 PM, George Greene wrote:
>> On Jun 17, 3:19 pm, "Peter Olcott"<NoS...@OCR4Screen.com>  wrote:
>>> The problem with the Halting Problem is that, because of the
>>> Church/Turing thesis, it must apply
>> You DON'T UNDERSTAND Church's Thesis.
>> If you did,  you would know that IT IS NOT RELEVANT to this problem!
>>
>>> to every possible future computing  technology
>> NO, IT DOESN'T!!!! DAMN, YOU ARE *STUPID*!!!
>> The halting problem is the halting problem OF TMs, BY TMs, and FOR
>> TMs!
>> IT *ABSOLUTELY* IS *NOT* ABOUT " the human mind " or "all human
>> knowledge "
>
> Also, don't forget that this never happened:
> http://www-03.ibm.com/innovation/us/watson/index.html

Someone is the victim of marketing.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth


0
Pidgeot18 (1520)
6/18/2012 1:48:10 PM
On 6/18/2012 7:49 AM, Ralph Hartley wrote:
> On 06/16/2012 03:06 PM, Peter Olcott wrote:
>> If no element of the set of Turing Machines can solve the self reference
>> form of the Halting Problem, then the element of this set that would
>> derive a mind that knows everything currently known would also not be
>> able to solve the HP. Since this element would necessarily also know the
>> syntax and Semantics of every human language that ever existed, it could
>> be asked this question in English: "Will this TM instance halt on this
>> input instance?"
>>
>> If this TM can not answer this question then this question must
>> necessarily be ill-formed.
>
> Huh? There are *lots* of well-formed questions that can't be answered 
> by "a mind that knows everything currently known", simply because it 
> is not currently known how to answer them.
>
> Similarly, there are well formed questions that can't be answered by a 
> mind that knows everything that *will* ever be known, because many 
> things will never be known (there are infinitely many things to know 
> and the universe is finite).
>
> What about a TM that knows everything that can *possibly* be known? Is 
> that even possible?
>
> No, because it would have to be able to answer all instances of "Will 
> this TM instance halt on this input instance?", which are all well 
> formed questions, your insistence to the contrary notwithstanding, and 
> no TM can correctly answer them all.
>
> Ralph Hartley

Few here have any experience at all with "formal semantics" (treating 
human language as a mathematical formalism). Because of this they lack 
a sufficient basis to determine whether or not a statement or a question 
is semantically well-formed or ill-formed. Making claims one way or the 
other while lacking this basis can only produce an argument from 
ignorance.

The Specification of a TM to solve the Halting Problem:
A Turing Machine that can take as input the specification of another 
Turing Machine and the input to this other Turing Machine, and determine 
in all cases whether or not this other Turing Machine would halt on its 
input, and report this as one of its two final halting states of accept 
or reject.

To make my point clear we augment the TM with the capability of 
processing natural language such as English. This Natural Language input 
would be specified on the conventional Turing Machine's tape.

Now we can pose this question to elements of the set of Turing Machines:
Which of your two final states of accept or reject corresponds to your 
input TM halting on its input?

Since (in some cases) neither element forms the correct answer to this 
question, and these two elements constitute the entire solution set for 
the above question, this question (in some cases) matches the 
specification of an ill-formed question: any question lacking a correct 
answer from the set of possible answers.


0
Peter
6/18/2012 2:07:39 PM
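The post's central claim, that for the diagonal input neither of H's two final states is the correct answer, can be tabulated in a small sketch. This is an illustration only; `outcome_if_h_answers` is a hypothetical helper that just encodes M's definition (loop when H accepts, halt when H rejects):

```python
def outcome_if_h_answers(answer):
    """What M does, given that H(M, H, M) returns `answer`.

    M is defined to loop when H says "halts" (accept) and to halt
    when H says "loops" (reject).
    """
    return "loops" if answer == "accept" else "halts"

for answer in ("accept", "reject"):
    actual = outcome_if_h_answers(answer)
    # H answered correctly only if "accept" coincides with actually halting.
    correct = (answer == "accept") == (actual == "halts")
    print(answer, "->", actual, "| H correct?", correct)
# Both lines report "H correct? False": the two-element answer set
# {accept, reject} contains no correct answer for this particular input.
```

Whether that makes the *question* ill-formed, as the post argues, or merely makes every candidate H wrong, is exactly what the rest of the thread disputes.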
On 6/18/2012 10:07 AM, Peter Olcott wrote:
> Few here have any experience at all with "formal semantics" (treating
> human language as a mathematical formalism).

This includes you, apparently. Indeed, given my knowledge of formal 
semantics, you give a completely wrong definition of it (mathematicians 
eschew the implicit ambiguity of natural language for a more rigidly 
defined stream of symbols).

 > Because of this they lack
> sufficient basis to determine whether or not a statement or a question
> is semantically well-formed or ill-formed.

No, it's because you've been abusing these terms to high heaven.

> Lacking this sufficient basis
> and making claims one way or the other necessarily can only derive an
> argument from ignorance.

Pot calling the kettle black?

> The Specification of a TM to solve the Halting Problem:
> A Turing Machine that can take as input the specification of another
> Turing Machine and the input to this other Turing Machine, and determine
> in all cases whether or not this other Turing Machine would halt on its
> input, and report this as one of its two final halting states of accept
> or reject.

I don't offhand see any problem with this.

> To make my point clear we augment the TM with the capability of
> processing natural language such as English. This Natural Language input
> would be specified on the conventional Turing Machine's tape.

No, this is not to make your point clear. This is to make your point as 
unclear as possible, to distract us from the utter nonsense that you are 
proposing.

> Now we can pose this question to elements of the the set of Turing
> Machines:
> Which of your two final states of accept or reject corresponds to your
> input TM halting on its input?

I'm getting English parse errors here. My best error recovery would 
amount to "Am I intended to interpret halting in the accept state 
correspond to the input TM halting on its input and halting in the 
reject state corresponding to the input TM not halting on its input?", 
which is a vacuous question since the answer to this question (at least 
in the simplest specification) is an obvious "yes."

The question that is being asked to the purported TM is nothing more or 
less than "Does the input TM halt on its input?" with a final state of 
halting in the accept state indicating an affirmative answer, the reject 
state a negative answer, and failing to halt an erroneous situation.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth


0
Pidgeot18 (1520)
6/18/2012 2:27:06 PM
On 06/18/2012 10:07 AM, Peter Olcott wrote:
> On 6/18/2012 7:49 AM, Ralph Hartley wrote:
>> On 06/16/2012 03:06 PM, Peter Olcott wrote:
>>> If no element of the set of Turing Machines can solve the self reference
>>> form of the Halting Problem, then the element of this set that would
>>> derive a mind that knows everything currently known would also not be
>>> able to solve the HP. Since this element would necessarily also know the
>>> syntax and Semantics of every human language that ever existed, it could
>>> be asked this question in English: "Will this TM instance halt on this
>>> input instance?"
>>>
>>> If this TM can not answer this question then this question must
>>> necessarily be ill-formed.

[I have clipped my own reply to your argument above, since you ignore 
it, and make a completely different argument]

> To make my point clear we augment the TM with the capability of
> processing natural language such as English. This Natural Language input
> would be specified on the conventional Turing Machine's tape.

> Now we can pose this question to elements of the the set of Turing
> Machines:

> Which of your two final states of accept or reject corresponds to your
> input TM halting on its input?

We could pose that question, but why would we want to?

Lets call that the Peter Olcott (PO) question.

I will grant that the PO question is ill-formed (to the extent that I 
understand it at all, it is). So what?

You claimed above that some instances of "Will this TM instance halt on 
this input instance?" are ill-formed, but your justification for that 
claim seems to be that the PO question is ill-formed.

How is *that* not an instance of the fallacy of equivocation?

Ralph Hartley
0
hartley (156)
6/18/2012 3:49:51 PM
On 17/06/12 17:57, Peter Olcott wrote:
> So as soon as a Turing Computable algorithm acquires sufficient
> capability [...].

	It may be worth pointing out [tho' experience with this
thread is against it] that there is no need in CS to contemplate
*algorithms*.  We already know about universal TMs, inc some very
specific such animals, inc some quite small ones [completely
specified on one side of paper, for example].  Any question you
choose to formulate about TMs can be re-formulated as a question
about the properties of strings, where the intention is that the
strings will be fed as input to the specific UTM.  Instead of "is
there a TM such that ...?" or "is there an algorithm such that
....?" we have "is there a string such that ...?", or even "is
there an integer such that ...?".

	Note that such a formulation is not even inefficient --
we do it all the time in "real" computing when we supply code
in some standard programming language, or if we choose to regard
a sequence of 0s and 1s on a disc as a [many-digit] binary number
rather than as a text file.

	We don't have any problem with the general notion that
there are some questions of the form "is there an integer such
that ...?" for which the answer is "no";  even primes different
from 2, solutions to Fermat's Last Theorem, and so on.  So there
should be no general problem about the notion that there are
questions about algorithms to which the answer is that there is
no such algorithm.

-- 
Andy Walker,
Nottingham.
0
news3378 (22)
6/18/2012 4:32:03 PM
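Andy's reformulation above (machines as strings, strings as integers, all run on one fixed UTM) can be illustrated with Python's own interpreter playing the role of the UTM. A sketch under that stated analogy; the function `f` is just an arbitrary example "machine":

```python
# A 'machine', encoded as a string:
src = "def f(n):\n    return n * 2\n"

# The same encoding viewed as one many-digit integer:
code_as_int = int.from_bytes(src.encode(), "big")

# Recover the string from the integer and feed it to the fixed
# interpreter: every question about f is now a question about an integer.
n_bytes = (code_as_int.bit_length() + 7) // 8
decoded = code_as_int.to_bytes(n_bytes, "big").decode()
assert decoded == src

namespace = {}
exec(decoded, namespace)
print(namespace["f"](21))  # -> 42
```

Nothing is lost in the detour through the integer, which is Andy's point that the reformulation is not even inefficient.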
On 18 Jun, 14:35, Peter Olcott <OCR4Screen> wrote:

> I would call it Pete, for Pete's sake.

I'd call it "I".

Tom
0
tkorna (20)
6/18/2012 7:36:17 PM
On Jun 19, 5:36 am, Tom <tko...@wp.pl> wrote:
> On 18 Jun, 14:35, Peter Olcott <OCR4Screen> wrote:
>
> > I would call it Pete, for Pete's sake.
>
> I'd call it "I".
>
> Tom

no we'd be calling it "you"

hopefully we don't have to throw a "Sir" in there!

Herc
0
6/18/2012 7:46:45 PM
On Jun 19, 12:07 am, Peter Olcott <OCR4Screen> wrote:
>
> Now we can pose this question to the...  **PROSPECTIVE HALT-TM**
>
> Which of your two final states of accept or reject corresponds to your
> input TM halting on its input?
>

OK THEN - *THIS* is the much discussed ILL-FORMED-QUESTION.


A1 Every <TM-n, i> Pair INFINITE LOOPS or NOT
A2 ASSUME: HALT(n,i)  WRITES  ["INFLOOP" | "HALT"] for all [1]
   on the infinite turing tape and then HALTS

B1 Every <TM-n, i> Pair INFINITE LOOPS or NOT
B2 ASSUME: HALT(n,i)  RETURNS  ["INFLOOP" | "HALT"] for all [1]

C1 Every <TM-n, i> Pair INFINITE LOOPS or NOT
C2 ASSUME: HALT(n,i)  RETURNS
         ["LOOP" | "HALT" | "OutOfRange"] for all [1]

Only B2 is impossible via a recursive function definition HALT()
(not via the HALT TM in A2 - that *IS* possible)


REMEMBER: Turing switches computation models mid-proof!

TMs
Molecular-sub-TM
Lambda Calculus
Grammar LR Reduction Rules
HIgh Order Functions
Recursive Functions

Herc
0
6/18/2012 8:48:55 PM
On Jun 18, 7:29 pm, Tom <tko...@wp.pl> wrote:
> Peter Olcott wrote:
> > If a yes or no question does not have a correct yes or no answer then
> > there must be something wrong with this question.
>
> > More generally:
> > An ill-formed question is defined as any question that lacks a correct
> > answer from the set of all possible answers.
>
> > The *only* reason that the self reference form of the Halting Problem
> > can not be solved is that neither of the two possible final states of
> > any potential halt decider TM corresponds to whether or not its input TM
> > will halt on its input.
>
> > In other words for potential
> > halt decider H and input M:
> > ---------------------------
> > Not<ThereExists>
> > <ElementOfSet>
> > FinalStatesOf_H
> > <MathematicallyMapsTo>
> > Halts(M, H, input)
>
> > Where M is defined as
> > ---------------------
> > M(String H, String input):
> > if H(input, H, input) loop
> > else halt
>
> > The only difference between asking a yes or no question and the
> > invocation of a Turing Machine is a natural language interface.
>
> > Within a natural language interface the invocation of H(M, H, M) would
> > be specified as:
>
> > "Does Turing Machine M halt on input of Turing Machine H
> > and Turing Machine M?"
>
> > Within a natural language interface the answer to this question would be
> > specified as "yes" or "no" and map to the final states of H of accept or
> > reject.
>
> > So the only reason that the self reference form of the Halting Problem
> > can not be solved is that it is based on a yes or no question that lacks
> > a correct yes or no answer, and thereby derives an ill-formed question.
>
> Peter, I really know nothing about the Entscheidungsproblem. It is out
> of my league (I am below all leagues), but I've been thinking.
>
> If the problem is that the thing is just ill-formed, why not re-form
> it to well-form it?
>
> http://www.mcescher.net/Escher/dragon.jpg
>

In this 'Model Of Computation' the function is un-computable when the
Train crashes into its carriage!

http://wn.com/Universal_Turing_Machine_implemented_in_Minecraft_redstone_logic

Herc
0
6/18/2012 9:36:40 PM
On Jun 18, 10:07 am, Peter Olcott <OCR4Screen> wrote:
> Few here have any experience at all with "formal semantics" (treating
> human language as a mathematical formalism).

And You Do??   MBWAHAHAHHAHHAAAA!!!

> Because of this they lack sufficient basis to determine whether or not a statement or a question
> is semantically well-formed or ill-formed.

So far, you have ONLY presented YOUR OWN definitions of "semantically
ill-formed",
and they are semantically ill-formed.  If you want some respect for
knowing more about
"formal semantics" THAN I know, then PRESENT a definition FROM SOME
KNOWN ACADEMIC
SOURCE.


>  Lacking this sufficient basis

Exactly, you lack the basis of a formal semantics textbook or any
other COHERENT treatment of anything you are trying to say.
But WE ARE TALKING ABOUT TMs, so what YOU call "formal semantics" IS
NOT RELEVANT IN ANY CASE.
You do not yet know ENOUGH ABOUT TMs to BE ABLE to discuss their
formal semantics.  In particular, you don't
yet understand the way in which invoking a TM on an input string is
analogous to asking a question.
0
greeneg9613 (188)
6/18/2012 11:57:22 PM
On 18 Jun, 23:36, Graham Cooper <grahamcoop...@gmail.com> wrote:
> On Jun 18, 7:29 pm, Tom <tko...@wp.pl> wrote:
> In this 'Model Of Computation' the function is un-computable when the
> Train crashes into its carriage!
>
> http://wn.com/Universal_Turing_Machine_implemented_in_Minecraft_redst...
>
> Herc

Herc, with permission, what stopped you from becoming a professor of
mathematics?

It's the most serious question I've ever asked here.

Tom
0
tkorna (20)
6/19/2012 10:20:21 AM
On Jun 19, 8:20 pm, Tom <tko...@wp.pl> wrote:
> On 18 Jun, 23:36, Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > On Jun 18, 7:29 pm, Tom <tko...@wp.pl> wrote:
> > In this 'Model Of Computation' the function is un-computable when the
> > Train crashes into its carriage!
>
> >http://wn.com/Universal_Turing_Machine_implemented_in_Minecraft_redst...
>
> > Herc
>
> Herc, with permission, what stopped you from becoming a professor of
> mathematics?
>
> It's the most serious question I've ever asked here.
>
> Tom

oh thanks!  my Dad was a lecturer but totally different personality.

A 2 hour lecture takes 20 hours preparation.

I was half way through a 45 minute class teaching high school small
business, finished my lesson plan and had nothing else ready for the
remaining 20 minutes!  As I'm not trained in accounting, except maybe
EC102 and EC204, teaching 30 kids for 20 minutes off the top of your
head is not easy!

Contracting work is really good, as a guest the company treats you
well, are usually progressive and your consulting firm is a backup 3rd
party so there is no politics involved.

Herc
0
6/19/2012 10:50:08 PM
On 6/18/2012 6:57 PM, George Greene wrote:
> On Jun 18, 10:07 am, Peter Olcott <OCR4Screen> wrote:
>> Few here have any experience at all with "formal semantics" (treating
>> human language as a mathematical formalism).
> And You Do??   MBWAHAHAHHAHHAAAA!!!
>
>> Because of this they lack sufficient basis to determine whether or not a statement or a question
>> is semantically well-formed or ill-formed.
> So far, you have ONLY presented YOUR OWN definitions of "semantically
> ill-formed",
> and they are semantically ill-formed.  If you want some respect for
> knowing more about
> "formal semantics" THAN I know, then PRESENT a definition FROM SOME
> KNOWN ACADEMIC
> SOURCE.
>
>
>>   Lacking this sufficient basis
> Exactly, you lack the basis of a formal semantics textbook or any
> other COHERENT treatment of anything you are trying to say.

The Formal semantics of natural languages.

> But WE ARE TALKING ABOUT TMs, so what YOU call "formal semantics" IS
> NOT RELEVANT IN ANY CASE.

The reason why the self-reference Halting Problem cannot be solved can 
be correctly translated into an ill-formed English question posed to the 
potential Halt Decider:
Which of your final states can you return to indicate Halts(TM, input)?

http://en.wikipedia.org/wiki/Saul_Kripke
Here is another guy that agrees with me on the Liar paradox.

neither "This sentence is true" nor "This sentence is not true" receive 
truth-conditions; they are, in Kripke's terms, "ungrounded."

> You do not yet know ENOUGH ABOUT TMs to BE ABLE to discuss their
> formal semantics.  In particular, you don't
> yet understand the way in which invoking a TM on an input string is
> analogous to asking a question.


0
Peter
6/21/2012 2:29:33 AM
On Jun 22, 3:05 am, George Greene <gree...@email.unc.edu> wrote:
> On Jun 20, 10:29 pm, Peter Olcott <OCR4Screen> wrote:
>
> > That the reason why the self reference Halting Problem can not be
> > solved can be correctly translating into an ill-formed English question
> > posted to the potential Halt Decider:
> > Which of your final states can you return to indicate Halts(TM, input)?
>
> That IS NOT any kind of relevant question.
> If we define
> M(H,I) : If H(I,(H,I)) Then Loop Else Halt
> and "you" is the tm H in an invocation of M(H,M), which gives
> If H(M,(H,M)) Then Loop Else Halt,
> then H can return false to indicate that M(H,M) halts and true to
> indicate that it doesn't.
> That is an answer to your "which of your final states" question.
> It does not make H a Halts TM since a Halts TM can't exist.
> But there are many ways for a question to have no answer WITHOUT being
> ill-formed: when did you stop beating your wife?

Your entire notion of "UN-COMPUTABLE" rests on the assumption
that this 1 line program ought to be programmable.

START: IF NOT(infiniteLoop(START)) THEN GOTO START

is function infiniteLoop() programmable?
infiniteLoop(X) <-> TRUE  IFF  function X does not terminate
infiniteLoop(X) <-> FALSE IFF  function X terminates

Herc
0
6/21/2012 6:38:12 PM
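The M(H,I) construction quoted above, and the infiniteLoop() one-liner, can both be sketched concretely. The following Python is a hypothetical illustration (not code from the thread): it builds the diagonal machine M from any claimed halting decider and shows that the decider's own answer refutes it.

```python
# Hypothetical sketch of the M(H, I) construction quoted above:
# given any claimed halting decider h, build the machine M that does
# the opposite of whatever h predicts about M run on itself.

def make_m(h):
    """M: if h says its argument halts on itself, loop; otherwise halt."""
    def m(i):
        if h(i, i):          # h claims i(i) halts...
            while True:      # ...so M loops forever
                pass
        return "halted"      # h claims i(i) loops, so M halts
    return m

# A candidate decider that always answers "does not halt":
def h_says_no(f, x):
    return False

m = make_m(h_says_no)
print(m(m))  # "halted" -- contradicting h_says_no's verdict on (m, m)
```

The same trap catches a decider that always answers "halts" (its M would loop, again contradicting it), and, by the standard argument, every other total candidate as well.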
On Jun 21, 6:38 pm, Graham Cooper <grahamcoop...@gmail.com> wrote:
> On Jun 22, 3:05 am, George Greene <gree...@email.unc.edu> wrote:
>
>
>
>
>
>
>
>
>
> > On Jun 20, 10:29 pm, Peter Olcott <OCR4Screen> wrote:
>
> > > That the reason why the self reference Halting Problem can not be
> > > solved can be correctly translating into an ill-formed English question
> > > posted to the potential Halt Decider:
> > > Which of your final states can you return to indicate Halts(TM, input)?
>
> > That IS NOT any kind of relevant question.
> > If we define
> > M(H,I) : If H(I,(H,I)) Then Loop Else Halt
> > and "you" is the tm H in an invocation of M(H,M), which gives
> > If H(M,(H,M)) Then Loop Else Halt,
> > then H can return false to indicate that M(H,M) halts and true to
> > indicate that it doesn't.
> > That is an answer to your "which of your final states" question.
> > It does not make H a Halts TM since a Halts TM can't exist.
> > But there are many ways for a question to have no answer WITHOUT being
> > ill-formed: when did you stop beating your wife?
>
> Your entire notion of "UN-COMPUTABLE" rests on the assumption
> that this 1 line program ought to be programmable.
>
> START: IF NOT(infiniteLoop(START)) THEN GOTO START
>
> is function infiniteLoop() programmable?
> infiniteLoop(X) <-> TRUE  IFF  function X does not terminate
> infiniteLoop(X) <-> FALSE IFF  function X terminates
>
> Herc

then why teach any man to pick his nose))then pick his nose?

will I have !?
0
n.m.keele (172)
6/21/2012 8:20:03 PM
On Jun 22, 6:20 am, N <n.m.ke...@hotmail.co.uk> wrote:
> On Jun 21, 6:38 pm, Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > On Jun 22, 3:05 am, George Greene <gree...@email.unc.edu> wrote:
>
> > > On Jun 20, 10:29 pm, Peter Olcott <OCR4Screen> wrote:
>
> > > > That the reason why the self reference Halting Problem can not be
> > > > solved can be correctly translating into an ill-formed English question
> > > > posted to the potential Halt Decider:
> > > > Which of your final states can you return to indicate Halts(TM, input)?
>
> > > That IS NOT any kind of relevant question.
> > > If we define
> > > M(H,I) : If H(I,(H,I)) Then Loop Else Halt
> > > and "you" is the tm H in an invocation of M(H,M), which gives
> > > If H(M,(H,M)) Then Loop Else Halt,
> > > then H can return false to indicate that M(H,M) halts and true to
> > > indicate that it doesn't.
> > > That is an answer to your "which of your final states" question.
> > > It does not make H a Halts TM since a Halts TM can't exist.
> > > But there are many ways for a question to have no answer WITHOUT being
> > > ill-formed: when did you stop beating your wife?
>
> > Your entire notion of "UN-COMPUTABLE" rests on the assumption
> > that this 1 line program ought to be programmable.
>
> > START: IF NOT(infiniteLoop(START)) THEN GOTO START
>
> > is function infiniteLoop() programmable?
> > infiniteLoop(X) <-> TRUE  IFF  function X does not terminate
> > infiniteLoop(X) <-> FALSE IFF  function X terminates
>
> > Herc
>
> tthen why teach any man to pick his nose))then pick his nose?
>
> will I have !?

Why use AXIOMS to construct theorems and sets and not use them on
functions?

Herc
0
6/21/2012 8:24:23 PM
Graham Cooper wrote:
> > Tom wrote:
> > Herc, with permission, what stopped you from becoming a professor of
> > mathematics?
>
> > It's the most serious question I've ever asked here.
>
> > Tom
>
> oh thanks!  my Dad was a lecturer but totally different personality.
>
> A 2 hour lecture takes 20 hours preparation.
>
> I was half way through a 45 minute class teaching high school small
> business, finished my lesson plan and had nothing else ready for the
> remaining 20 minutes!  As I'm not trained in accounting, except maybe
> EC102 and EC204, teaching 30 kids for 20 minutes off the top of your
> head is not easy!
>
> Contracting work is really good, as a guest the company treats you
> well, are usually progressive and your consulting firm is a backup 3rd
> party so there is no politics involved.
>
> Herc

I see :D

Hope to talk to you about Goedel's incompleteness results soon
enough :D

Kindest regards,

Tom
0
tkorna (20)
6/23/2012 10:53:32 PM
On Jun 24, 8:53 am, Tom <tko...@wp.pl> wrote:
> Graham Cooper wrote:
> > > Tom wrote:
> > > Herc, with permission, what stopped you from becoming a professor of
> > > mathematics?
>
> > > It's the most serious question I've ever asked here.
>
> > > Tom
>
> > oh thanks!  my Dad was a lecturer but totally different personality.
>
> > A 2 hour lecture takes 20 hours preparation.
>
> > I was half way through a 45 minute class teaching high school small
> > business, finished my lesson plan and had nothing else ready for the
> > remaining 20 minutes!  As I'm not trained in accounting, except maybe
> > EC102 and EC204, teaching 30 kids for 20 minutes off the top of your
> > head is not easy!
>
> > Contracting work is really good, as a guest the company treats you
> > well, are usually progressive and your consulting firm is a backup 3rd
> > party so there is no politics involved.
>
> > Herc
>
> I see :D
>
> Hope to talk to you about Goedel's incompleteness results soon
> enough :D
>


OK, just start a new thread.

Homework for tonight is:

Work out the Godel Number of the Godel Statement using a 16 symbol HEX
alphabet.

ALPHABET = { 01 = ! ( ) , ^ v > A B C D E F }

REVISION: using a DECIMAL ALPHABET = { 0 1 A ( , ) ^ v ! = }

A0 = PROVE
A1 = 8203215

A1 = 11111010010101111001111
!A0(A1)

!A0(A1) is a Godel Statement.

NOT(PROVE(THIS-FORMULA-GODEL-NUMBER))

Herc
0
6/24/2012 3:54:21 AM
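The homework above (a Gödel number over a 16-symbol hex alphabet) can be sketched in Python. The symbol ordering below, and the reading of A0 and A1 as two symbols each, are my assumptions for illustration; the post does not fix either choice.

```python
# Sketch of a hex Godel numbering over a 16-symbol alphabet.
# ASSUMPTION: this particular symbol ordering (the post leaves it open).

ALPHABET = "01=!(),^v>ABCDEF"                    # 16 symbols -> hex digits 0..F
CODE = {s: format(i, "X") for i, s in enumerate(ALPHABET)}

def godel_number(formula):
    """Encode each symbol as its hex digit, then read the string as one integer."""
    return int("".join(CODE[s] for s in formula), 16)

# The post's Godel-style sentence, with A0/A1 read symbol by symbol:
n = godel_number("!A0(A1)")
print(hex(n))  # 0x3a04a15
```

Any injective symbol-to-digit map works equally well; the only requirement is that the numbering be decodable, so that a formula can talk about its own number.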
On 6/8/2012 8:43 PM, Peter Olcott wrote:
> If a yes or no question does not have a correct yes or no answer then 
> there must be something wrong with this question.
>
> More generally:
> An ill-formed question is defined as any question that lacks a correct 
> answer from the set of all possible answers.
>
> The *only* reason that the self reference form of the Halting Problem 
> can not be solved is that neither of the two possible final states of 
> any potential halt decider TM corresponds to whether or not its input 
> TM will halt on its input.
>
> In other words for potential
> halt decider H and input M:
> ---------------------------
> Not<ThereExists>
> <ElementOfSet>
> FinalStatesOf_H
> <MathematicallyMapsTo>
> Halts(M, H, input)
>
> Where M is defined as
> ---------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt
>
> The only difference between asking a yes or no question and the 
> invocation of a Turing Machine is a natural language interface.
>
> Within a natural language interface the invocation of H(M, H, M) would 
> be specified as:
>
> “Does Turing Machine M halt on input of Turing Machine H
> and Turing Machine M?”
>
> Within a natural language interface the answer to this question would 
> be specified as “yes” or “no” and map to the final states of H of
> accept or reject.
>
> So the only reason that the self reference form of the Halting Problem 
> can not be solved is that it is based on a yes or no question that 
> lacks a correct yes or no answer, and thereby derives an ill-formed 
> question.
>
The only reason that no one here has agreed with the above is that no 
one here has a sufficient understanding of the formal semantics of 
linguistics, what I have referred to as the mathematics of the meaning 
of words.
http://en.wikipedia.org/wiki/Formal_semantics_(linguistics)

When one is sufficiently precise when dealing with the meaning of words, 
then (and only then) one can see that the Halting Problem is 
constructed entirely on the basis of an ill-formed question.
0
Peter
8/5/2012 1:28:41 PM
On Aug 5, 11:28 pm, Peter Olcott <OCR4Screen> wrote:
> On 6/8/2012 8:43 PM, Peter Olcott wrote:
>
>
> > If a yes or no question does not have a correct yes or no answer then
> > there must be something wrong with this question.
>
> > More generally:
> > An ill-formed question is defined as any question that lacks a correct
> > answer from the set of all possible answers.
>
> > The *only* reason that the self reference form of the Halting Problem
> > can not be solved is that neither of the two possible final states of
> > any potential halt decider TM corresponds to whether or not its input
> > TM will halt on its input.
>
> > In other words for potential
> > halt decider H and input M:
> > ---------------------------
> > Not<ThereExists>
> > <ElementOfSet>
> > FinalStatesOf_H
> > <MathematicallyMapsTo>
> > Halts(M, H, input)
>
> > Where M is defined as
> > ---------------------
> > M(String H, String input):
> > if H(input, H, input) loop
> > else halt
>
> > The only difference between asking a yes or no question and the
> > invocation of a Turing Machine is a natural language interface.
>
> > Within a natural language interface the invocation of H(M, H, M) would
> > be specified as:
>
> > Does Turing Machine M halt on input of Turing Machine H
> > and Turing Machine M?
>
> > Within a natural language interface the answer to this question would
> > be specified as yes or no and map to the final states of H of
> > accept or reject.
>
> > So the only reason that the self reference form of the Halting Problem
> > can not be solved is that it is based on a yes or no question that
> > lacks a correct yes or no answer, and thereby derives an ill-formed
> > question.
>
> The only reason that no one here has agreed with the above is that no
> one here has a sufficient understanding of the formal semantics of
> linguistics, what I have referred to as the mathematics of the meaning
> of words. http://en.wikipedia.org/wiki/Formal_semantics_(linguistics)
>
> When one is sufficiently precise when dealing with the meaning of words,
> then (and only then) one can see that the Halting Problem is
> constructed entirely on the basis of an ill-formed question.


People who speak ill-formed questions also have deaf ears.

The Halting Problem is merely:

S:  IF STOPS(S) GOTO S

If S STOPS - then STOPS() is wrong
If S LOOPS - then STOPS() is wrong

somehow substituting the word STOP for HALT fooled every mathematician
on Earth for a century!

no reasonable person would defend the above "STOPPING PROOF",
but sci.logic confuses "IMPOSSIBLE" with "WE DON'T KNOW HOW" and
argues backwards, much like a Skeptic disproves Paranormal since it
"doesn't exist".


G. Cooper  (BInfTech)
--

http://tinyURL.com/BLUEPRINTS-QUESTIONS
http://tinyURL.com/BLUEPRINTS-POWERSET
http://tinyURL.com/BLUEPRINTS-THEOREM
http://tinyURL.com/BLUEPRINTS-FORALL
http://tinyURL.com/BLUEPRINTS-TURING
http://tinyURL.com/BLUEPRINTS-GODEL
http://tinyURL.com/BLUEPRINTS-PROOF
http://tinyURL.com/BLUEPRINTS-MATHS
http://tinyURL.com/BLUEPRINTS-LOGIC
http://tinyURL.com/BLUEPRINTS-BRAIN
http://tinyURL.com/BLUEPRINTS-REAL
http://tinyURL.com/BLUEPRINTS-SETS
http://tinyURL.com/BLUEPRINTS-HALT
http://tinyURL.com/BLUEPRINTS-PERM
http://tinyURL.com/BLUEPRINTS-P-NP
http://tinyURL.com/BLUEPRINTS-GUT
http://tinyURL.com/BLUEPRINTS-BB
http://tinyURL.com/BLUEPRINTS-AI

S: IF STOPS(S) GOTO S
THE ORIGIN OF "UNCOMPUTABLE" CHAITIN'S OMEGA

AND THE IDIOTIC SCI.MATH |R|>|N| (BIGGER THAN INFINITY)

************
| 5GL    /  WHY? WHEN?
| 4GL   /   WHAT? not HOW!     ?person(P)
| 3GL  /    FUNCTION STACK     proc(a,b)
| 2GL /     MNEMONICS          LDA 0101
| 1GL/      MACHINE CODE       101 0101
  ===
  CPU
0
8/7/2012 3:51:38 AM
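The two-case analysis in the post above ("If S STOPS, STOPS() is wrong; if S LOOPS, STOPS() is wrong") can be checked mechanically. This is a hypothetical sketch, not code from the thread: it enumerates both possible answers and confirms each one contradicts S's resulting behavior.

```python
# Sketch of the case analysis for  S: IF STOPS(S) GOTO S
# Whichever boolean STOPS(S) returns, S's actual behavior contradicts it.

def s_behavior(stops_answer):
    """What S does given STOPS(S)'s answer: True -> loops back, False -> falls through."""
    return "loops" if stops_answer else "halts"

for answer in (True, False):
    claimed = "halts" if answer else "loops"   # what the answer asserts about S
    actual = s_behavior(answer)                # what S then actually does
    assert actual != claimed                   # each possible answer refutes itself
print("no consistent boolean answer exists for S")
```

This exhaustive check is exactly the classical proof by contradiction: it shows no total STOPS can be consistent on S, not that the question "does S halt?" lacks meaning.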
On Aug 8, 6:21 am, George Greene <gree...@email.unc.edu> wrote:
> On Aug 5, 9:28 am, Peter Olcott <OCR4Screen> wrote:
>
> > The only reason that no one here has agreed with the above is that no
> > one here has a sufficient understanding of the formal semantics of
> > linguistics,
>
> This is simply not the case.  Linguistics IS NOT RELEVANT as a science
> here.
> Linguistics is about NATURAL language.  This whole discussion is about
> FORMAL
> languages.  What is basically going on here is that a certain mode of
> construction
> produces a collection that is recursively enumerable BUT NOT just-
> plain-recursive
> (i.e., its complement is NOT, ALSO, recursively enumerable).  This
> simply does
> not have a damn thing to do with linguistics.  You do not need to
> invoke any COMPLICATED
> understanding of linguistics to know that some phrases-in-natural-
> language that
> appear to have a grammatical form requiring that they refer, can,
> despite that, FAIL,
> in real life, to refer.  When that failure is "contingent" or
> dependent on facts in the
> real world, nobody has any problem with it.  If I ask "Who was your
> first wife?", that
> question does NOT become "ill-formed" just because you have never been
> married.
> That question remains well-formed DESPITE the fact that there is no
> phrase referring
> to any woman that would be a true answer for it (if you have never
> been married).
> It is obvious to all hearers that the problem in answering the
> question comes FROM
> THE OUTSIDE WORLD OF FACTS about your love-life AND NOT from anything
> intrinsic
> to language, linguistics, or the question.   You're just wrong.

You're just incompetent if you insist formalism cannot solve this
problem within computation!

S: IF STOPS(S) GOTO S

Herc
--
www.microPROLOG.com

0
8/8/2012 11:07:58 AM
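The "recursively enumerable BUT NOT just-plain-recursive" point quoted above can be illustrated with a step-bounded simulation. This sketch uses my own modeling assumption (a "program" is a zero-argument generator function, one yield per step); it shows why halting is semi-decidable: a "yes" can always be confirmed by running long enough, but no finite budget ever confirms a "no".

```python
# Sketch of semi-decidability under an assumed model: programs are
# zero-argument generator functions, and one yield counts as one step.

def halts_within(prog, steps):
    """Return True if prog() finishes within `steps` steps, else None (unknown)."""
    g = prog()
    for _ in range(steps):
        try:
            next(g)
        except StopIteration:
            return True          # halting observed: a definite "yes"
    return None                  # budget exhausted: "don't know", never "no"

def quick():                     # halts after three steps
    yield
    yield
    yield

def forever():                   # never halts
    while True:
        yield

print(halts_within(quick, 10))   # True
print(halts_within(forever, 10)) # None
```

Raising the budget turns more true "yes" instances into confirmed answers (dovetailing over all programs enumerates the halting ones), but the complement set never gets a confirming procedure, which is exactly the r.e.-but-not-recursive situation Greene describes.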
On Aug 8, 7:07 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> You're just incompetent if you insist formalism cannot solve this
> problem within computation!
>
> S: IF STOPS(S) GOTO S
>
> Herc


Well, YOU certainly can't solve this problem, so YOUR competence to
judge anybody ELSE'S "formalism"
is beneath notice.

0
greeneg9613 (188)
8/10/2012 12:28:51 AM
On Aug 10, 10:28 am, George Greene <gree...@email.unc.edu> wrote:
> On Aug 8, 7:07 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > You're just incompetent if you insist formalism cannot solve this
> > problem within computation!
>
> > S: IF STOPS(S) GOTO S
>
> > Herc
>
> Well, YOU certainly can't solve this problem, so YOUR competence to
> judge anybody ELSE'S "formalism"
> is beneath notice.

Unlike your actual rationale shown here.

you just detect periodic Vs chaotic behaviour,
maybe add datum program stubs as required.

Unrelated to Artificial Intelligence.

I wrote:
  Godels Impossible Proof(Theorem) Predicate,
  Busy Beaver uncomputable function,
  Countable Powerset of N function and
  Countable closed set of reals


Your brainwashing by Academia is so adamant you selectively blank out
to all of this.

Not to mention a simpler turing machine,
contradiction to ZFC,
logically stratified complete formalism,
permutation of infinite sets algorithm,
magnetic theory of sentience,
reducing P=NP from O(n^n) to O(2^n),
a 20 line intersecting polygon rendering engine,
a G.U.T. in 2000AD as seen by the Hawk
scored 7/11 on a mindreading test
taking a High Court Judge to High Court
and a 1997 patent on bipedal robots..

http://www.youtube.com/watch?v=DpYE8FtWXhA

G. Cooper  (BInfTech)
--

http://tinyURL.com/BLUEPRINTS-QUESTIONS
http://tinyURL.com/BLUEPRINTS-POWERSET
http://tinyURL.com/BLUEPRINTS-THEOREM
http://tinyURL.com/BLUEPRINTS-FORALL
http://tinyURL.com/BLUEPRINTS-TURING
http://tinyURL.com/BLUEPRINTS-GODEL
http://tinyURL.com/BLUEPRINTS-PROOF
http://tinyURL.com/BLUEPRINTS-MATHS
http://tinyURL.com/BLUEPRINTS-LOGIC
http://tinyURL.com/BLUEPRINTS-BRAIN
http://tinyURL.com/BLUEPRINTS-REAL
http://tinyURL.com/BLUEPRINTS-SETS
http://tinyURL.com/BLUEPRINTS-HALT
http://tinyURL.com/BLUEPRINTS-PERM
http://tinyURL.com/BLUEPRINTS-P-NP
http://tinyURL.com/BLUEPRINTS-GUT
http://tinyURL.com/BLUEPRINTS-BB
http://tinyURL.com/BLUEPRINTS-AI

S: IF STOPS(S) GOTO S

THE ORIGIN OF "UNCOMPUTABLE" CHAITIN'S OMEGA
AND THE IDIOTIC SCI.MATH

|R|>|N|

(BIGGER THAN INFINITY)

in George Greene's words  "unrefutable fantasy"
0
8/10/2012 2:29:50 AM
On 6/9/2012 9:41 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> If a yes or no question does not have a correct yes or no answer then
>> there must be something wrong with this question.
> I was just thinking "surely there's no point reading this; can there be
> anything new?" when lo and behold it contains a *new* error:
>
> <snip>
>> In other words for potential
>> halt decider H and input M:
>> ---------------------------
>> Not<ThereExists>
>> <ElementOfSet>
>> FinalStatesOf_H
>> <MathematicallyMapsTo>
>> Halts(M, H, input)
>>
>> Where M is defined as
>> ---------------------
>> M(String H, String input):
>> if H(input, H, input) loop
>> else halt
>>
>> The only difference between asking a yes or no question and the
>> invocation of a Turing Machine is a natural language interface.
>>
>> Within a natural language interface the invocation of H(M, H, M) would
>> be specified as:
>>
>> “Does Turing Machine M halt on input of Turing Machine H
>> and Turing Machine M?”
> No, it would be read as "What does the machine H say when invoked with
> input (M, H, M)?".  But that has a simple yes/no answer so it does not
> fit with PO's preconceived idea of what the answer should be, so he has
> to... er, "misrepresent the truth".
I have learned many of the recent developments in the mathematics of the 
meaning of words as expressed using predicate logic and developed by 
Montague and others since the last time that I talked to you.
http://en.wikipedia.org/wiki/Montague_grammar

This is my primary source:
Formal Semantics: An Introduction (Cambridge Textbooks in Linguistics) 
Ronnie Cann

The mistake of the Halting Problem can only be detected from the point 
of view of the mathematics of the meaning of words. Since this is 
outside of the field of Turing, Godel, and most everyone here, it is 
clear that you all have a good reason for not seeing this perspective.

The mistake that you just made was framing the question of the Halting 
Problem incorrectly. The reason that the Halting Problem is an 
ill-formed question is Pathological Self-Reference. Your incorrect 
statement of the question eliminated the self-reference, and thus 
changed the underlying semantics.

Turing Machine H is *not* asked: "What would you say?"

You have incorrectly switched the perspective from the perspective of 
Turing Machine H to that of an outside observer of Turing Machine H.  
The outside observer perspective lacks the essential self-reference 
required to derive the Halting Problem.

> Misrepresenting the result of an invocation of a "potential halt
> decider" as being a questions about whether a machine *actually* halts
> or not is the error at core of all of the recent nonsense.
>
> <snip>

0
Peter
8/19/2012 1:36:37 PM
On 6/9/2012 5:58 PM, cplxphil wrote:
> On Jun 8, 11:17 pm, Peter Olcott <OCR4Screen> wrote:
>> On 6/8/2012 9:04 PM, cplxphil wrote:
>>
>>
>>
>>
>>
>>
>>
>>> On Jun 8, 9:43 pm, Peter Olcott<OCR4Screen>  wrote:
>>>> If a yes or no question does not have a correct yes or no answer then
>>>> there must be something wrong with this question.
>>>> More generally:
>>>> An ill-formed question is defined as any question that lacks a correct
>>>> answer from the set of all possible answers.
>>>> The *only* reason that the self reference form of the Halting Problem
>>>> can not be solved is that neither of the two possible final states of
>>>> any potential halt decider TM corresponds to whether or not its input TM
>>>> will halt on its input.
>>>> In other words for potential
>>>> halt decider H and input M:
>>>> ---------------------------
>>>> Not<ThereExists>
>>>> <ElementOfSet>
>>>> FinalStatesOf_H
>>>> <MathematicallyMapsTo>
>>>> Halts(M, H, input)
>>>> Where M is defined as
>>>> ---------------------
>>>> M(String H, String input):
>>>> if H(input, H, input) loop
>>>> else halt
>>>> The only difference between asking a yes or no question and the
>>>> invocation of a Turing Machine is a natural language interface.
>>>> Within a natural language interface the invocation of H(M, H, M) would
>>>> be specified as:
>>>> “Does Turing Machine M halt on input of Turing Machine H
>>>> and Turing Machine M?”
>>>> Within a natural language interface the answer to this question would be
>>>> specified as “yes” or “no” and map to the final states of H of accept or
>>>> reject.
>>>> So the only reason that the self reference form of the Halting Problem
>>>> can not be solved is that it is based on a yes or no question that lacks
>>>> a correct yes or no answer, and thereby derives an ill-formed question.
>>> No one would say that instances of the halting problem lack a yes or
>>> no answer.
>> Yes this point has been missed for many years.
>>
>> Since the final states: {accept, reject} of H form the entire solution
>> set, (every possible answer that H can provide) and these states have
>> been shown to mathematically map to yes or no therefore the inability of
>> a Turing Machine to solve the halting problem is merely the inability to
>> correctly answer a yes or no question that has no correct yes or no answer.
>>
> Perhaps you've been over this before in your lengthy discussions, but
> let me see if I have this straight.
>
> You are saying that the question, "Does machine M halt on input I?"
> may, for certain M and I, be impossible to answer either yes or no?
>
> If it doesn't either halt or not halt on input I, what exactly does it
> do?
It is the slipperiness of inadvertently switching perspectives from the 
point of view of the potential halt decider (PHD) to the point of view 
outside that of the PHD that makes the error of the Halting Problem most 
difficult to see.

The error of the Halting Problem only exists from the perspective of the 
potential halt decider. When you switch to a perspective outside that of 
the PHD, you change the meaning of the question.

You might not ever see this until after you understand the mathematics 
of the meaning of words developed by Montague. 
http://en.wikipedia.org/wiki/Montague_grammar

Almost all of the current research in Formal Semantics (mathematics of 
the meaning of words) is fundamentally based on the seminal work of 
Montague.

>
> Also, before this continues too long:  It sounds like you are quite
> confident that you're right about this.  I am quite confident that you
> are not.  What would it take to convince you that you are wrong?
>
> For my part, I will be satisfied and agree that Turing's result is
> somehow wrong if you can implement an algorithm that solves the
> halting problem.  If you are going to say that the Halting problem is
> ill-formed, then in order for me to agree with this, I would need to
> see an example of a machine that both fails to halt and fails to not
> halt.  (Good luck.)

0
Peter
8/19/2012 1:49:35 PM
On 6/9/2012 8:39 PM, Joshua Cranmer wrote:
> On 6/9/2012 6:58 PM, cplxphil wrote:
>> Perhaps you've been over this before in your lengthy discussions, but
>> let me see if I have this straight.
>>
>> You are saying that the question, "Does machine M halt on input I?"
>> may, for certain M and I, be impossible to answer either yes or no?
>
> I think Peter believes that this is not the proper way to phrase the 
> question really being asked. Although, when he was pushed into a 
> corner, I think there was a tacit admission that the answer of such a 
> question depends on who you asking it of.
>
>> Also, before this continues too long:  It sounds like you are quite
>> confident that you're right about this.  I am quite confident that you
>> are not.  What would it take to convince you that you are wrong?
>
> Again, I think the answer is that he has no issue with proof, per se. 
> His umbrage is with the interpretation of the result: he believes that 
> it is possible to make a Halt decider that is "essentially" correct 
> but fails in some cases which are "necessary" (use of quotation marks 
> to indicate that the terms contained within are slippery and 
Necessity and Possibility form the fundamental basis of Modal Logic:
http://en.wikipedia.org/wiki/Modal_logic

> ill-defined). Lots of people have attempted to illustrate several 
> alternate derivations to show that the incorrectness is not so 
> well-contained, but they have all been ignored because either:
> a) it happens to fall under the "necessary" failures,
> b) it relies on a similar "incorrect" result (the uncountability of 
> real numbers and Godel's incompleteness theorem have also been 
> explicitly cited as incorrect proofs due to being similar [1]), or
> c) he doesn't understand it, so it has to be either case a or case b 
> because he is OBVIOUSLY right.
>
> Trying to come up with a simple, alternate proof of the Halting 
> problem that is understandable and skirts any other proofs that have 
> anything smacking of diagonalization or self-reference is indeed hard, 
> but I doubt he'd accept anything less. I also doubt that he'd accept 
> even that much, though...
>
> [1] This just makes it seem to me that he has a really hard time 
> accepting that a proof by contradiction is indeed a valid proof.
>
Ultimately to understand what I am saying requires an understanding of 
the mathematics of the meaning of words.
Since the last time that we spoke I have done extensive reading and 
found that the field called Formal Semantics in Linguistics provides the 
most complete specification of all knowledge of mathematics of the 
meaning of words.

This single source describes the basis for essentially all of this work:
Formal Semantics: An Introduction (Cambridge Textbooks in Linguistics) 
Ronnie Cann

0
Peter
8/19/2012 1:59:23 PM
Peter Olcott <OCR4Screen> writes:

> On 6/9/2012 9:41 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> If a yes or no question does not have a correct yes or no answer then
>>> there must be something wrong with this question.
>> I was just thinking "surely there's no point reading this; can there be
>> anything new?" when lo and behold it contains a *new* error:
>>
>> <snip>
>>> In other words for potential
>>> halt decider H and input M:
>>> ---------------------------
>>> Not<ThereExists>
>>> <ElementOfSet>
>>> FinalStatesOf_H
>>> <MathematicallyMapsTo>
>>> Halts(M, H, input)
>>>
>>> Where M is defined as
>>> ---------------------
>>> M(String H, String input):
>>> if H(input, H, input) loop
>>> else halt
>>>
>>> The only difference between asking a yes or no question and the
>>> invocation of a Turing Machine is a natural language interface.
>>>
>>> Within a natural language interface the invocation of H(M, H, M) would
>>> be specified as:
>>>
>>> “Does Turing Machine M halt on input of Turing Machine H
>>> and Turing Machine M?”
>> No, it would be read as "What does the machine H say when invoked with
>> input (M, H, M)?".  But that has a simple yes/no answer so it does not
>> fit with PO's preconceived idea of what the answer should be, so he has
>> to... er, "misrepresent the truth".
> I have learned many of the recent developments in the mathematics of
> the meaning of words as expressed using  predicate logic and developed
> by Montague and others since the last time that I talked to you.
> http://en.wikipedia.org/wiki/Montague_grammar
>
> This is my primary source:
> Formal Semantics: An Introduction (Cambridge Textbooks in Linguistics)
> Ronnie Cann
>
> The mistake of the Halting Problem can only be detected from the point
> of view of the mathematics of the meaning of words.

The Halting Problem is not defined in words, it's defined in the formal
syntax of mathematics.

> Since this is
> outside of the field of Turing, Godel, and  most everyone here, it is
> clear that you all have a good reason for not seeing this perspective.

Yes, these theorems are theorems of mathematics, not of English.  Your
English descriptions of them are in error if they lead you to the
conclusion that any of them are invalid.

> The mistake that you just made was framing the question of the Halting
> Problem incorrectly.

And your mistake was framing the question in English.

> The reason that the Halting Problem is an
> ill-formed question is Pathological Self-Reference. Your incorrect
> statement of the question eliminated the self-reference, and thus
> changed the underlying semantics.

Best not to use English to talk about mathematics then.  We could, if
you want, communicate almost unambiguously using maths, but I think you
prefer your words since they can't be refuted.

> Turing Machine H is *not* asked: "What would you say?"

No indeed.  It's not asked anything and it makes no answer.  It is
simply a 5-tuple drawn from a set, and it has properties formally
defined by a mathematical notation.

> You have incorrectly switched the perspective from the perspective of
> Turing Machine H to that of an outside observer of Turing Machine H.
> The outside observer perspective lacks the essential self-reference
> required to derive the Halting Problem.

You have incorrectly switched perspective to English words.  HP is a
statement in mathematics and we've all been guilty of using words to
talk about it because you are not comfortable with mathematics.  That is
an error that we should not repeat.

<snip>
-- 
Ben.
0
ben.usenet (6790)
8/19/2012 6:12:18 PM
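Ben's point above can be made concrete: whatever value H returns on (M, H, M) is a definite answer, and the construction quoted in the thread simply turns that answer against H. A minimal Python sketch of the M/H construction (all names are illustrative; the theorem is precisely that no correct total `halts` can exist):

```python
# Sketch of the M/H construction quoted above (illustrative names).
# Suppose halts(prog, arg) were a total, correct halt decider.

def make_m(halts):
    """Build the pathological program M from a claimed decider."""
    def m(x):
        if halts(m, x):        # ask the decider about M itself...
            while True:        # ...loop forever if it answers "halts"
                pass
        else:
            return "halted"    # ...halt if it answers "loops"
    return m

# Take the decider that always answers "does not halt":
def says_no(prog, arg):
    return False

m = make_m(says_no)
print(m(m))  # prints "halted", contradicting says_no's answer of False
```

Either fixed answer fails symmetrically: a decider that answered True would send m(m) into the infinite loop, again contradicting its own verdict.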
On 8/19/2012 1:12 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 6/9/2012 9:41 AM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> If a yes or no question does not have a correct yes or no answer then
>>>> there must be something wrong with this question.
>>> I was just thinking "surely there's no point reading this; can there be
>>> anything new?" when lo and behold it contains a *new* error:
>>>
>>> <snip>
>>>> In other words for potential
>>>> halt decider H and input M:
>>>> ---------------------------
>>>> Not<ThereExists>
>>>> <ElementOfSet>
>>>> FinalStatesOf_H
>>>> <MathematicallyMapsTo>
>>>> Halts(M, H, input)
>>>>
>>>> Where M is defined as
>>>> ---------------------
>>>> M(String H, String input):
>>>> if H(input, H, input) loop
>>>> else halt
>>>>
>>>> The only difference between asking a yes or no question and the
>>>> invocation of a Turing Machine is a natural language interface.
>>>>
>>>> Within a natural language interface the invocation of H(M, H, M) would
>>>> be specified as:
>>>>
>>>> “Does Turing Machine M halt on input of Turing Machine H
>>>> and Turing Machine M?”
>>> No, it would be read as "What does the machine H say when invoked with
>>> input (M, H, M)?".  But that has a simple yes/no answer so it does not
>>> fit with PO's preconceived idea of what the answer should be, so he has
>>> to... er, "misrepresent the truth".
>> I have learned many of the recent developments in the mathematics of
>> the meaning of words as expressed using  predicate logic and developed
>> by Montague and others since the last time that I talked to you.
>> http://en.wikipedia.org/wiki/Montague_grammar
>>
>> This is my primary source:
>> Formal Semantics: An Introduction (Cambridge Textbooks in Linguistics)
>> Ronnie Cann
>>
>> The mistake of the Halting Problem can only be detected from the point
>> of view of the mathematics of the meaning of words.
> The Halting Problem is not defined in words, it's defined in the formal
> syntax of mathematics.
>
>> Since this is
>> outside of the field of Turing, Godel, and  most everyone here, it is
>> clear that you all have a good reason for not seeing this perspective.
> Yes, these theorems are theorems of mathematics, not of English.  Your
> English descriptions of them are in error if they lead you to the
> conclusion that any of them are invalid.
>
>> The mistake that you just made was framing the question of the Halting
>> Problem incorrectly.
> And your mistake was framing the question in English.
>
>> The reason that the Halting Problem is an
>> ill-formed question is Pathological Self-Reference. Your incorrect
>> statement of the question eliminated the self-reference, and thus
>> changed the underlying semantics.
> Best not to use English to talk about mathematics then.  We could, if
> you want, communicate almost unambiguously using maths, but I think you
> prefer your words since they can't be refuted.
>
>> Turing Machine H is *not* asked: "What would you say?"
> No indeed.  It's not asked anything and it makes no answer.  It is
> simply a 5-tuple drawn from a set, and it has properties formally
> defined by a mathematical notation.
>
>> You have incorrectly switched the perspective from the perspective of
>> Turing Machine H to that of an outside observer of Turing Machine H.
>> The outside observer perspective lacks the essential self-reference
>> required to derive the Halting Problem.
> You have incorrectly switched perspective to English words.  HP is a
> statement in mathematics and we've all been guilty of using words to
> talk about it because you are not comfortable with mathematics.  That is
> an error that we should not repeat.
>
> <snip>
Only the mathematics of the meaning of words is sufficiently expressive 
to show that the self reference form of the Halting Problem is 
constructed entirely on the basis of an error of reasoning.
0
Peter
8/19/2012 6:39:41 PM
On Aug 20, 4:12 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> Peter Olcott <OCR4Screen> writes:
>
> > Turing Machine H is *not* asked: "What would you say?"
>
> No indeed.  It's not asked anything and it makes no answer.  It is
> simply a 5-tuple drawn from a set, and it has properties formally
> defined by a mathematical notation.
>

Is the non-existence of a Potential Halt TM formally defined?

Herc
0
8/19/2012 8:18:36 PM
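The non-existence Herc asks about is indeed formally stated in standard computability texts; one common rendering (hedged: the exact notation varies from text to text):

```latex
% Halting theorem, as usually stated: no Turing machine H decides halting.
\neg \exists H \in \mathrm{TM}\;\; \forall \langle M, w \rangle :
  \big( H \text{ accepts } \langle M, w \rangle \iff M \text{ halts on } w \big)
```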
On 8/19/2012 11:12 AM, Ben Bacarisse wrote:
....
> You have incorrectly switched perspective to English words.  HP is a
> statement in mathematics and we've all been guilty of using words to
> talk about it because you are not comfortable with mathematics.  That is
> an error that we should not repeat.

I also use English words in newsgroups because ASCII text does a far
better job of representing them than it does for mathematics. However,
they are intended as a circumlocution for the mathematics, not to stand
alone.

The meaning of a natural language word changes with dialect, context,
and time. That makes word meanings far too vague and unreliable to be
useful as a foundation for mathematics or logic.

Patricia
0
pats (3556)
8/20/2012 4:47:35 PM
On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote:
> On 8/19/2012 11:12 AM, Ben Bacarisse wrote:
> ...
>
> > You have incorrectly switched perspective to English words.  HP is a
> > statement in mathematics and we've all been guilty of using words to
> > talk about it because you are not comfortable with mathematics.  That is
> > an error that we should not repeat.
>
> I also use English words in newsgroups because ASCII text does a far
> better job of representing them than it does for mathematics. However,
> they are intended as a circumlocution for the mathematics, not to stand
> alone.
>


> The meaning of a natural language word changes with dialect, context,
> and time. That makes word meanings far too vague and unreliable to be
> useful as a foundation for mathematics or logic.

This has not been the case since the work of Montague, (previously
cited).

>
> Patricia

0
PeteOlcott (86)
8/20/2012 5:55:42 PM
On Aug 21, 4:01 am, Marshall <marshall.spi...@gmail.com> wrote:
> On Monday, August 20, 2012 10:55:42 AM UTC-7, PeteOlcott wrote:
> > On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote:
>
> > > The meaning of a natural language word changes with dialect, context,
> > > and time. That makes word meanings far too vague and unreliable to be
> > > useful as a foundation for mathematics or logic.
>
> > This has not been the case since the work of Montague, (previously
> > cited).
>
> So Montague made all dialects disappear, made words no longer
> have contextual dependencies, and made the meaning of words
> stop changing with time?
>
> Marshall


THE HALTING PROBLEM IS BASED ON STUPIDITY

S:  IF STOPS(S) GOTO S

Still waiting on the FORMAL PROOF BEN!

All I see is a bunch of programmers hypothesis about putting a 1GL
(Turing Machine) function spec. into a 3GL harness without any idea
what you are doing, and even dumber logicians adamant prancing about
that it can't work because it has to say only either of STOP or GO!


Herc

************
| 5GL    /  WHY? WHEN?
| 4GL   /   WHAT? not HOW!     ?person(P)
| 3GL  /    FUNCTION STACK     proc(a,b)     **FUNCTION CALL
| 2GL /     MNEMONICS          LDA 0101
| 1GL/      MACHINE CODE       101 0101      **TURING MACHINE
  ===
  CPU

0
8/20/2012 8:43:26 PM
On 8/20/2012 10:55 AM, PeteOlcott wrote:
> On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote:
>> On 8/19/2012 11:12 AM, Ben Bacarisse wrote:
>> ...
>>
>>> You have incorrectly switched perspective to English words.  HP is
>>> statement in mathematics and we've all been guilty of using words to
>>> talk about it because you are not comfortable with mathematics.  That is
>>> an error that we should not repeat.
>>
>> I also use English words in newsgroups because ASCII text does a far
>> better job of representing them than it does for mathematics. However,
>> they are intended as a circumlocution for the mathematics, not to stand
>> alone.
>>
>
>
>> The meaning of a natural language word changes with dialect, context,
>> and time. That makes word meanings far too vague and unreliable to be
>> useful as a foundation for mathematics or logic.
>
> This has not been the case since the work of Montague, (previously
> cited).

Huh? Which are you disagreeing with?

The meaning of a natural language word changes with:

A. dialect

B. context

C. time

Patricia
0
pats (3556)
8/21/2012 4:43:17 PM
On Aug 21, 11:43 am, Patricia Shanahan <p...@acm.org> wrote:
> On 8/20/2012 10:55 AM, PeteOlcott wrote:
>
>
>
>
>
> > On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote:
> >> On 8/19/2012 11:12 AM, Ben Bacarisse wrote:
> >> ...
>
> >>> You have incorrectly switched perspective to English words.  HP is a
> >>> statement in mathematics and we've all been guilty of using words to
> >>> talk about it because you are not comfortable with mathematics.  That is
> >>> an error that we should not repeat.
>
> >> I also use English words in newsgroups because ASCII text does a far
> >> better job of representing them than it does for mathematics. However,
> >> they are intended as a circumlocution for the mathematics, not to stand
> >> alone.
>
> >> The meaning of a natural language word changes with dialect, context,
> >> and time. That makes word meanings far too vague and unreliable to be
> >> useful as a foundation for mathematics or logic.
>
> > This has not been the case since the work of Montague, (previously
> > cited).
>
> Huh? Which are you disagreeing with?
>
> The meaning of a natural language word changes with:
>
> A. dialect
>
> B. context
>
> C. time
>
> Patricia

Montague developed a system such that the meaning of an utterance can
remain fixed. In this case the timeframe (if relevant) would be
explicitly specified along with all of the relevant details of the
context (if any), and the meanings of the lexical items are fixed as
specific nodes within an ontology.
0
PeteOlcott (86)
8/21/2012 5:52:11 PM
PeteOlcott <peteolcott@gmail.com> writes:

> On Aug 21, 11:43 am, Patricia Shanahan <p...@acm.org> wrote:
>> On 8/20/2012 10:55 AM, PeteOlcott wrote:
<snip>
>> > This has not been the case since the work of Montague, (previously
>> > cited).
>>
>> Huh? Which are you disagreeing with?
>>
>> The meaning of a natural language word changes with:
>>
>> A. dialect
>>
>> B. context
>>
>> C. time
>>
>> Patricia
>
> Montague developed a system such that the meaning of an utterance can
> remain fixed. In this case the timeframe (if relevant) would be
> explicitly specified along with all of the relevant details of the
> context (if any), and the meanings of the lexical items are fixed as
> specific nodes within an ontology.

No he didn't.  His analysis relates to the relationship between words.
What a cat is, and whether one can smooth it, are not part of his work.
What's worse, the results of Montague's analysis are *mathematical
formulae* explaining the relationships between the concepts denoted by
the words.  There's no advantage in doing that when the concepts and the
relationships are already simple to represent in exactly the same
mathematics.

By all means, show me I'm wrong.  Take any statement about halting or
Turing machines that has been misconstrued here and present the Montague
analysis of it.

-- 
Ben.
0
ben.usenet (6790)
8/21/2012 7:39:10 PM
On 8/21/2012 2:39 PM, Ben Bacarisse wrote:
> PeteOlcott <peteolcott@gmail.com> writes:
>
>> On Aug 21, 11:43 am, Patricia Shanahan <p...@acm.org> wrote:
>>> On 8/20/2012 10:55 AM, PeteOlcott wrote:
> <snip>
>>>> This has not been the case since the work of Montague, (previously
>>>> cited).
>>> Huh? Which are you disagreeing with?
>>>
>>> The meaning of a natural language word changes with:
>>>
>>> A. dialect
>>>
>>> B. context
>>>
>>> C. time
>>>
>>> Patricia
>> Montague developed a system such that the meaning of an utterance can
>> remain fixed. In this case the timeframe (if relevant) would be
>> explicitly specified along with all of the relevant details of the
>> context (if any), and the meanings of the lexical items are fixed as
>> specific nodes within an ontology.
> No he didn't.  His analysis relates to the relationship between words.
> What a cat is, and whether one can smooth it, are not part of his work.
> What's worse, the results of Montague's analysis are *mathematical
> formulae* explaining the relationships between the concepts denoted by
> the words.  There's no advantage in doing that when the concepts and the
> relationships are already simple to represent in exactly the same
> mathematics.
>
> By all means, show me I'm wrong.  Take any statement about halting or
> Turing machines that has been misconstrued here and present the Montague
> analysis of it.
>
I have to learn more about it first. I have only read one book and a 
dozen articles, so far.
He did not even attempt to formalize the notion of a question, yet 
others after him have.


0
Peter
8/21/2012 9:39:53 PM
On 8/21/2012 2:39 PM, Ben Bacarisse wrote:
> PeteOlcott <peteolcott@gmail.com> writes:
>
>> On Aug 21, 11:43 am, Patricia Shanahan <p...@acm.org> wrote:
>>> On 8/20/2012 10:55 AM, PeteOlcott wrote:
> <snip>
>>>> This has not been the case since the work of Montague, (previously
>>>> cited).
>>> Huh? Which are you disagreeing with?
>>>
>>> The meaning of a natural language word changes with:
>>>
>>> A. dialect
>>>
>>> B. context
>>>
>>> C. time
>>>
>>> Patricia
>> Montague developed a system such that the meaning of an utterance can
>> remain fixed. In this case the timeframe (if relevant) would be
>> explicitly specified along with all of the relevant details of the
>> context (if any), and the meanings of the lexical items are fixed as
>> specific nodes within an ontology.
> No he didn't.  His analysis relates to the relationship between words.
> What a cat is, and whether one can smooth it, are not part of his work.
> What's worse, the results of Montague's analysis are *mathematical
> formulae* explaining the relationships between the concepts denoted by
> the words.  There's no advantage in doing that when the concepts and the
> relationships are already simple to represent in exactly the same
> mathematics.
>
> By all means, show me I'm wrong.  Take any statement about halting or
> Turing machines that has been misconstrued here and present the Montague
> analysis of it.
>
I have had visions of ideas such as Montague's in my mind for 25 years.
This subject is the one that has most fascinated me for the last 25
years.
0
Peter
8/22/2012 1:36:10 AM
> On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote:
> > The meaning of a natural language word changes with dialect, context,
> > and time. That makes word meanings far too vague and unreliable to be
> > useful as a foundation for mathematics or logic.

On Aug 20, 1:55 pm, PeteOlcott <peteolc...@gmail.com> wrote:
> This has not been the case since the work of Montague, (previously
> cited).

One researcher doing some work obviously CANNOT AFFECT THE FACTS
about how people THEMSELVES use words!

It turns out that you are stupid about a lot more things than just
mathematical logic.
I truly wish I knew your age.
0
greeneg9613 (188)
8/22/2012 3:09:49 AM
> > By all means, show me I'm wrong.  Take any statement about halting or
> > Turing machines that has been misconstrued here and present the Montague
> > analysis of it.

On Aug 21, 5:39 pm, Peter Olcott <OCR4Screen> wrote:
> I have to learn more about it first. I have only read one book and a
> dozen articles, so far.
> He did not even attempt to formalize the notion of a question, yet
> others after him have.

More about WHAT?  Turing machines and formal languages or "The
Montague analysis"
of something?!?  Clue: Montague IS NOT RELEVANT PERIOD to ANY of THIS!
You are just bringing it up in the hope of being able to talk about
something that
you know more about than we do. Tragically for you, even this little
backwater of
a newsgroup is a big enough place that THAT is NOT going to happen.
0
greeneg9613 (188)
8/22/2012 3:16:20 AM
On 8/21/2012 10:16 PM, George Greene wrote:
>>> By all means, show me I'm wrong.  Take any statement about halting or
>>> Turing machines that has been misconstrued here and present the Montague
>>> analysis of it.
> On Aug 21, 5:39 pm, Peter Olcott <OCR4Screen> wrote:
>> I have to learn more about it first. I have only read one book and a
>> dozen articles, so far.
>> He did not even attempt to formalize the notion of a question, yet
>> others after him have.
> More about WHAT?  Turing machines and formal languages or "The
> Montague analysis"
> of something?!?  Clue: Montague IS NOT RELEVANT PERIOD to ANY of THIS!

You wouldn't know.  Ignorance is perceived by the ignorant mind as 
disagreement.

Ignorance can only be perceived as ignorance once the missing knowledge 
is provided, otherwise there is nothing to contrast it with.

> You are just bringing it up in the hope of being able to talk about
> something that
> you know more about than we do. Tragically for you, even this little
> backwater of
> a newsgroup is a big enough place that THAT is NOT going to happen.

0
Peter
8/22/2012 9:43:57 AM
On Aug 21, 2:39 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> PeteOlcott <peteolc...@gmail.com> writes:
> > On Aug 21, 11:43 am, Patricia Shanahan <p...@acm.org> wrote:
> >> On 8/20/2012 10:55 AM, PeteOlcott wrote:
> <snip>
> >> > This has not been the case since the work of Montague, (previously
> >> > cited).
>
> >> Huh? Which are you disagreeing with?
>
> >> The meaning of a natural language word changes with:
>
> >> A. dialect
>
> >> B. context
>
> >> C. time
>
> >> Patricia
>
> > Montague developed a system such that the meaning of an utterance can
> > remain fixed. In this case the timeframe (if relevant) would be
> > explicitly specified along with all of the relevant details of the
> > context (if any), and the meanings of the lexical items are fixed as
> > specific nodes within an ontology.
>
> No he didn't.  His analysis relates to the relationship between words.
> What a cat is, and whether one can smooth it, are not part of his work.
> What's worse, the results of Montague's analysis are *mathematical
> formulae* explaining the relationships between the concepts denoted by
> the words.  There's no advantage in doing that when the concepts and the
> relationships are already simple to represent in exactly the same
> mathematics.
>
One thing that has not even been touched on within the Theory of
Computation is:
What exactly is the correct mathematical model for a natural language
question?

Unless this is fully specified it is not possible to see that the self-
reference form of the Halting Problem is based on an ill-formed
question.

One of the key aspects that I was correct about is that a question
inherently specifies its own set of possible answers.

> By all means, show me I'm wrong.  Take any statement about halting or
> Turing machines that has been misconstrued here and present the Montague
> analysis of it.
>
> --
> Ben.

0
PeteOlcott (86)
8/22/2012 11:06:41 AM
On 8/22/2012 2:43 AM, Peter Olcott wrote:
....
> You wouldn't know.  Ignorance is perceived by the ignorant mind as
> disagreement.

You should quote that paragraph to yourself the next time you are
tempted to disagree with others about something about which you are
profoundly ignorant, such as the concept of decidability in computer theory.

Patricia

0
pats (3556)
8/22/2012 2:58:34 PM
On Aug 22, 9:58 am, Patricia Shanahan <p...@acm.org> wrote:
> On 8/22/2012 2:43 AM, Peter Olcott wrote:
> ...
>
> > You wouldn't know.  Ignorance is perceived by the ignorant mind as
> > disagreement.
>
> You should quote that paragraph to yourself the next time you are
> tempted to disagree with others about something about which you are
> profoundly ignorant, such as the concept of decidability in computer theory.
>
> Patricia

We won't know which side of this debate is lacking the key knowledge
until after I have represented my view in something like Montague
Semantics.
0
PeteOlcott (86)
8/22/2012 4:43:46 PM
PeteOlcott <peteolcott@gmail.com> writes:

> One thing that has not been even touched on within the Theory of
> Computation is:
> What exactly is the correct mathematical model for a natural language
> question?

Yes, there's a lot missing from the Theory of Computation.  There's no
mathematical model for metaphorical eulogising either.  Until such time
as there is, "my Turing machine is like a red red rose" will remain an
deeply ambiguous statement.  And the there's no model of TM
consciousness either, so we can't even ask if its read head hurts.  All
we can do ask what can and can't be computed by the narrow rules imposed
by such dull and constrained minds as Church, Kleene and Turing.

-- 
Ben.
0
ben.usenet (6790)
8/22/2012 10:47:07 PM
On 8/22/2012 5:47 PM, Ben Bacarisse wrote:
> PeteOlcott <peteolcott@gmail.com> writes:
>
>> One thing that has not been even touched on within the Theory of
>> Computation is:
>> What exactly is the correct mathematical model for a natural language
>> question?
> Yes, there's a lot missing from the Theory of Computation.  There's no
> mathematical model for metaphorical eulogising either.  Until such time
> as there is, "my Turing machine is like a red red rose" will remain an
> deeply ambiguous statement.  And the there's no model of TM
> consciousness either, so we can't even ask if its read head hurts.  All
> we can do ask what can and can't be computed by the narrow rules imposed
> by such dull and constrained minds as Church, Kleene and Turing.
>
Another fundamental limit to computation (analogous to the Halting Problem):
CAD systems will never be able to correctly represent square circles.
0
Peter
8/23/2012 1:11:17 AM
> > More about WHAT?  Turing machines and formal languages or "The
> > Montague analysis"
> > of something?!?  Clue: Montague IS NOT RELEVANT PERIOD to ANY of THIS!

On Aug 22, 5:43 am, Peter Olcott <OCR4Screen> wrote:
> You wouldn't know.

I WOULD SO TOO know.
I do actually have a relevant degree, dumbass.

>  Ignorance

I'm NOT ignorant.  You are.
You keep talking about questions having a set of "possible" answers.
You also keep ignoring factual examples of questions with no "true"
answers that ARE OBVIOUSLY
well-formed.
0
greeneg9613 (188)
8/23/2012 1:20:19 AM
On 8/22/2012 8:20 PM, George Greene wrote:
>>> More about WHAT?  Turing machines and formal languages or "The
>>> Montague analysis"
>>> of something?!?  Clue: Montague IS NOT RELEVANT PERIOD to ANY of THIS!
> On Aug 22, 5:43 am, Peter Olcott <OCR4Screen> wrote:
>> You wouldn't know.
> I WOULD SO TOO know.
> I do actually have a relevant degree, dumbass.
What is your degree in, animosity?

>
>>   Ignorance
> I'm NOT ignorant.  You are.
You have insufficient basis for making this determination.

> You keep talking about questions having a set of "possible" answers.
It turns out that Montague Semantics agrees.

> You also keep ignoring factual examples of questions with no "true"
> answers that ARE OBVIOUSLY
> well-formed.

0
Peter
8/23/2012 1:37:58 AM
> > I'm NOT ignorant.  You are.

On Aug 22, 9:37 pm, Peter Olcott <OCR4Screen> wrote:
> You have insufficient basis for making this determination.

Believe me, everything you are saying here constitutes a sufficient
basis FOR EVERYbody (except you)
to make THAT determination.
0
greeneg9613 (188)
8/23/2012 7:27:47 PM
On 8/23/2012 2:27 PM, George Greene wrote:
>>> I'm NOT ignorant.  You are.
> On Aug 22, 9:37 pm, Peter Olcott <OCR4Screen> wrote:
>> You have insufficient basis for making this determination.
> Believe me, everything you are saying here constitutes a sufficient
> basis FOR EVERYbody (except you)
> to make THAT determination.
The most significant human fallibility is the inability to distinguish 
presumption from truth.
"Everyone" also knew that Fulton would fail in his folly.
0
Peter
8/23/2012 9:08:50 PM
On 8/22/2012 5:47 PM, Ben Bacarisse wrote:
> PeteOlcott <peteolcott@gmail.com> writes:
>
>> One thing that has not been even touched on within the Theory of
>> Computation is:
>> What exactly is the correct mathematical model for a natural language
>> question?
> Yes, there's a lot missing from the Theory of Computation.  There's no
> mathematical model for metaphorical eulogising either.  Until such time
> as there is, "my Turing machine is like a red red rose" will remain an
> deeply ambiguous statement.  And the there's no model of TM
> consciousness either, so we can't even ask if its read head hurts.  All
> we can do ask what can and can't be computed by the narrow rules imposed
> by such dull and constrained minds as Church, Kleene and Turing.
>
The key aspect of the Halting Problem that is erroneous, and that cannot
be sufficiently expressed within the mathematics of the Theory of
Computation, is the claim that the Halting Problem somehow forms an
{actual limit to computation}.

The term: {actual limit to computation} can only be sufficiently 
expressed in languages as expressive as English.
Montague Grammar provides a way that I can express this such that 
misinterpretation can not occur.
0
Peter
8/23/2012 9:32:50 PM
Peter Olcott <OCR4Screen> writes:

> On 8/22/2012 5:47 PM, Ben Bacarisse wrote:
>> PeteOlcott <peteolcott@gmail.com> writes:
>>
>>> One thing that has not been even touched on within the Theory of
>>> Computation is:
>>> What exactly is the correct mathematical model for a natural language
>>> question?
>> Yes, there's a lot missing from the Theory of Computation.  There's no
>> mathematical model for metaphorical eulogising either.  Until such time
>> as there is, "my Turing machine is like a red red rose" will remain an
>> deeply ambiguous statement.  And the there's no model of TM
>> consciousness either, so we can't even ask if its read head hurts.  All
>> we can do ask what can and can't be computed by the narrow rules imposed
>> by such dull and constrained minds as Church, Kleene and Turing.
>>
> The key aspect of the Halting Problem that is erroneous, and that cannot
> be sufficiently expressed within the mathematics of the Theory of
> Computation, is the claim that the Halting Problem somehow forms an
> {actual limit to computation}.
>
> The term: {actual limit to computation} can only be sufficiently
> expressed in languages as expressive as English.
> Montague Grammar provides a way that I can express this such that
> misinterpretation can not occur.

The halting theorem is much easier to state in mathematics.  It just
asserts that a set (TM) does not contain any member with a particular
property (that of deciding halting).  How this simple fact relates to
whatever meaning you attribute to the phrase "actual limit to
computation" is a mystery to me, but if all that work with Montague
grammar helps you to understand your own words, then go for it.

However, if you think it might help explain things to others, you are in
for a shock.  No matter what machinery you employ to give entirely unambiguous
and clear meanings to English-language phrases, what will matter is
whether they are true or not, and that will be decided in the domain to
which they relate -- the theory of computation.  Any that concur with
the theory can be accepted as true, but any that don't are simply very
clear and unambiguous false statements.

-- 
Ben.
0
ben.usenet (6790)
8/24/2012 12:41:50 AM
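Ben's set-membership phrasing also makes clear what *is* computable: step-bounded halt checking. A hedged Python sketch (the toy machine and all names are mine, not from the thread):

```python
# A budget-limited halt checker is perfectly computable; only the
# unbounded question lacks a decider.

def halts_within(step_fn, state, budget):
    """True if the machine halts within `budget` steps; False means
    the budget ran out (an inconclusive, not a negative, verdict)."""
    for _ in range(budget):
        state = step_fn(state)
        if state is None:      # None marks a halting configuration
            return True
    return False

# Toy machine: counts down to zero, then halts.
def countdown_step(n):
    return None if n == 0 else n - 1

print(halts_within(countdown_step, 3, 10))  # True: halts in 4 steps
print(halts_within(countdown_step, 3, 2))   # False: budget exhausted
```

The halting theorem only rules out a checker that is right for every machine and input with no budget caveat; it says nothing against approximations like this one.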
On 8/20/2012 12:55 PM, PeteOlcott wrote:
> On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote:
>> The meaning of a natural language word changes with dialect, context,
>> and time. That makes word meanings far too vague and unreliable to be
>> useful as a foundation for mathematics or logic.
>
> This has not been the case since the work of Montague, (previously
> cited).

How about the verb "like"? This is one that has changed definitions in 
the past half-decade even, thanks to a certain monolithic social network.

People who claim that vocabulary doesn't change are ignorant of history. 
People who claim that vocabulary has stopped changing are completely 
blind to the world around them.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/24/2012 1:16:02 AM
On 8/21/2012 4:39 PM, Peter Olcott wrote:
> I have to learn more about it first. I have only read one book and a
> dozen articles, so far.

For someone who argues that all his detractors are floundering in 
ignorance, citing as sources things you admit you don't understand is a 
very odd way to try to go about your argument.

> He did not even attempt to formalize the notion of a question, yet
> others after him have.

To me, this would suggest you're barking up the wrong tree.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/24/2012 1:23:40 AM
On 8/23/2012 7:41 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 8/22/2012 5:47 PM, Ben Bacarisse wrote:
>>> PeteOlcott <peteolcott@gmail.com> writes:
>>>
>>>> One thing that has not been even touched on within the Theory of
>>>> Computation is:
>>>> What exactly is the correct mathematical model for a natural language
>>>> question?
>>> Yes, there's a lot missing from the Theory of Computation.  There's no
>>> mathematical model for metaphorical eulogising either.  Until such time
>>> as there is, "my Turing machine is like a red red rose" will remain a
>>> deeply ambiguous statement.  And then there's no model of TM
>>> consciousness either, so we can't even ask if its read head hurts.  All
>>> we can do is ask what can and can't be computed by the narrow rules imposed
>>> by such dull and constrained minds as Church, Kleene and Turing.
>>>
>> The key aspect of the Halting Problem that is erroneous that can not
>> be sufficiently expressed within the mathematics of the Theory of
>> Computation is the claim that the Halting Problem somehow forms an
>> {actual limit to computation}.
>>
>> The term: {actual limit to computation} can only be sufficiently
>> expressed in languages as expressive as English.
>> Montague Grammar provides a way that I can express this such that
>> misinterpretation can not occur.
> The halting theorem is much easier to state in mathematics.  It just
> asserts that a set (TM) does not contain any member with a particular
> property (that of deciding halting).  How this simple fact relates to
> whatever meaning you attribute to the phrase "actual limit to
> computation" is a mystery to me, but if all that work with Montague
> grammar helps you to understand your own words, then go for it.
>
> However, if you think it might help explain things to others, you are in
> for a shock.  No matter what machinery you employ to give entirely unambiguous
> and clear meanings to English-language phrases, what will matter is
> whether they are true or not, and that will be decided in the domain to
> which they relate -- the theory of computation.  Any that concur with
> the theory can be accepted as true, but any that don't are simply very
> clear and unambiguous false statements.
>
The cool thing about the possibilities that Montague Grammar unlocks is 
now for the first time we will have an unequivocal measure of analytical 
truth.
0
Peter
8/24/2012 2:34:13 AM
On 8/23/2012 8:16 PM, Joshua Cranmer wrote:
> On 8/20/2012 12:55 PM, PeteOlcott wrote:
>> On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote:
>>> The meaning of a natural language word changes with dialect, context,
>>> and time. That makes word meanings far too vague and unreliable to be
>>> useful as a foundation for mathematics or logic.
>>
>> This has not been the case since the work of Montague, (previously
>> cited).
>
> How about the verb "like"? This is one that has changed definitions in 
> the past half-decade even, thanks to a certain monolithic social network.
>
> People who claim that vocabulary doesn't change are ignorant of 
> history. People who claim that vocabulary has stopped changing are 
> completely blind to the world around them.
>
You provided an excellent example that immediately proved your point.

My end goal is to basically figure out how to teach a computer to read, 
in the same way that humans read, with increasing comprehension. That 
would provide the key missing piece of the CYC project:  http://cyc.com/
0
Peter
8/24/2012 2:37:58 AM
On 8/23/2012 8:23 PM, Joshua Cranmer wrote:
> On 8/21/2012 4:39 PM, Peter Olcott wrote:
>> I have to learn more about it first. I have only read one book and a
>> dozen articles, so far.
>
> For someone who argues that all his detractors are floundering in 
> ignorance, citing as sources things you admit you don't understand is 
> a very odd way to try to go about your argument.

I tend to be honest to a fault, and let the chips fall where they may.
My detractors are only ignorant of something that is outside of their field.

The error of the Halting Problem that I have been arguing all along can 
not be expressed within the mathematics of the Theory of Computation, 
thus it is not an error within the Theory of Computation itself. The 
error is at the next level of abstraction higher than the Theory of 
Computation.

>
>> He did not even attempt to formalize the notion of a question, yet
>> others after him have.
>
> To me, this would suggest you're barking up the wrong tree.
>
No, I have read what the others have said, and the best combination of 
existing theories is the same theory that I have had since long before I 
read about any of this. The key factor in the best adaptation of 
Montague Grammar to questions is (what I have been saying here all along) 
A question inherently specifies its own set of possible answers.
0
Peter
8/24/2012 2:46:46 AM
Peter Olcott <OCR4Screen> writes:
<snip>
> The cool thing about the possibilities that Montague Grammar unlocks
> is now for the first time we will have an unequivocal measure of
> analytical truth.

The number of things about which you know very little is growing.
Montague's work can obviously be added to that list.

-- 
Ben.
0
ben.usenet (6790)
8/24/2012 2:55:17 AM
On 8/23/2012 9:55 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
> <snip>
>> The cool thing about the possibilities that Montague Grammar unlocks
>> is now for the first time we will have an unequivocal measure of
>> analytical truth.
> The number of things about which you know very little is growing.
> Montague's work can obviously be added to that list.
>
I have had ideas about this subject since 1991, long before I ever heard 
of Montague (within the last six months).

Kurt Godel: (This is the foundation that Montague Meaning Postulates are 
based on)
By the theory of simple types I mean the doctrine which says that the 
objects of thought ... are divided into types, namely: individuals, 
properties of individuals, relations between individuals, properties of 
such relations, etc. (with a similar hierarchy for extensions),
0
Peter
8/24/2012 3:17:01 AM
On 8/23/2012 9:37 PM, Peter Olcott wrote:
> My end goal is to basically figure out how to teach a computer to read,
> in the same way that humans read, with increasing comprehension. That
> would provide the key missing piece of the CYC project:  http://cyc.com/

This would be the holy grail of natural language understanding and AI in 
general. And, if the last twenty years of advancements in natural 
language are any guideline, it's also completely unfeasible. All major 
recent developments that I'm aware of resort to primarily statistical 
determination and not any actual understanding of text.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/24/2012 3:35:39 AM
On 8/23/2012 10:35 PM, Joshua Cranmer wrote:
> On 8/23/2012 9:37 PM, Peter Olcott wrote:
>> My end goal is to basically figure out how to teach a computer to read,
>> in the same way that humans read, with increasing comprehension. That
>> would provide the key missing piece of the CYC project: http://cyc.com/
>
> This would be the holy grail of natural language understanding and AI 
> in general. And, if the last twenty years of advancements in natural 
> language are any guideline, it's also completely unfeasible. All major 
> recent developments that I'm aware of resort to primarily statistical 
> determination and not any actual understanding of text.
>
I myself can see how comprehension can be specified using some extension 
of Montague to form meaning postulates.

Now all we need is comprehension of reading.  The CYC project is correct 
in that there will be a certain threshold of knowledge that will 
effectively BootStrap real AI (with comprehension and reasoning at least 
as good as humans).

What is needed now is the set of meta knowledge about knowledge 
acquisition. The details of this set can be deduced.
0
Peter
8/24/2012 3:57:01 AM
On 8/23/2012 10:35 PM, Joshua Cranmer wrote:
> On 8/23/2012 9:37 PM, Peter Olcott wrote:
>> My end goal is to basically figure out how to teach a computer to read,
>> in the same way that humans read, with increasing comprehension. That
>> would provide the key missing piece of the CYC project: http://cyc.com/
>
> This would be the holy grail of natural language understanding and AI 
> in general. And, if the last twenty years of advancements in natural 
> language are any guideline, it's also completely unfeasible. All major 
> recent developments that I'm aware of resort to primarily statistical 
> determination and not any actual understanding of text.
>
We would begin the process of deducing the requirements of automated 
knowledge acquisition by specifying the structure of the set of 
conceptual knowledge. This is actually much simpler than it sounds **. I 
have had these ideas on my mind since 1991.

From the structure of the set of conceptual knowledge we now know 
exactly what holes to fill and the structure of how they will be filled.

** The structure of conceptual knowledge is enormously simpler than the 
structure of natural language.
0
Peter
8/24/2012 4:13:55 AM
On 8/23/2012 11:13 PM, Peter Olcott wrote:
> We would begin the process of deducing the requirements of automated
> knowledge acquisition by specifying the structure of the set of
> conceptual knowledge. This is actually much simpler than it sounds **. I
> have had these ideas on my mind since 1991.
>
> From the structure of the set of conceptual knowledge we now know
> exactly what holes to fill and the structure of how they will be filled.

This shows a very, VERY naive understanding of the current 
state-of-the-art limits in AI. It is well known by now how to, in 
effect, teach a system that knows about animals about mythological 
creatures. That is the sort of thing that already fits in its framework.

What we really don't know how to do is how to teach systems radically 
different things--we can't take a system that knows about animals and 
teach it about World War I. This ability is what I would consider the 
crux of AI research, just as P=NP is the crux of complexity theory research.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/24/2012 4:34:38 AM
On Aug 24, 1:35 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/23/2012 9:37 PM, Peter Olcott wrote:
>
> > My end goal is to basically figure out how to teach a computer to read,
> > in the same way that humans read, with increasing comprehension. That
> > would provide the key missing piece of the CYC project:  http://cyc.com/
>
> This would be the holy grail of natural language understanding and AI in
> general. And, if the last twenty years of advancements in natural
> language are any guideline, it's also completely unfeasible. All major
> recent developments that I'm aware of resort to primarily statistical
> determination and not any actual understanding of text.


Right, but go back 40 YEARS AGO and computers could read

Person: WHY DID YOU DROP IT?

Computer: BY "IT", I ASSUME YOU MEAN THE SUPERBLOCK.
I CAN'T EXPLAIN A NON-EXISTENT EVENT.

http://en.wikipedia.org/wiki/SHRDLU

One was that SHRDLU's world was so simple that the entire set of
objects and locations could be described by including as few as
perhaps 50 words: nouns like "block" and "cone", verbs like "place on"
and "move to", and adjectives like "big" and "blue". The possible
combinations of these basic language building blocks were quite
simple, and the program was fairly adept at figuring out what the user
meant.

Person: FIND A BLOCK WHICH IS TALLER THAN THE ONE YOU ARE HOLDING AND
PUT IT INTO THE BOX.
Computer: BY "IT", I ASSUME YOU MEAN THE BLOCK WHICH IS TALLER THAN
THE ONE I AM HOLDING.
Computer: OK. (does it)

Person: WHAT DOES THE BOX CONTAIN?
....

Person: HOW MANY THINGS ARE ON TOP OF GREEN CUBES?
Computer: I'M NOT SURE WHAT YOU MEAN BY "ON TOP OF " IN THE PHRASE "ON
TOP OF GREEN CUBES " .

DO YOU MEAN:
1 - DIRECTLY ON THE SURFACE
2 - ANWHERE ON TOP OF ?

Person: 2
Computer: THREE OF THEM.

Person: HAD YOU TOUCHED ANY PYRAMID BEFORE YOU PUT THE GREEN ONE ON
THE LITTLE CUBE?
Computer: YES, THE GREEN ONE.

Person: WHEN DID YOU PICK IT UP?
Computer: WHILE I WAS STACKING UP THE RED CUBE, A LARGE RED BLOCK AND
A LARGE GREEN CUBE.

Person: WHY?
Computer: TO GET RID OF IT.

Person: WHY DID YOU DO THAT?
Computer: TO CLEAN OFF THE RED CUBE.

Person: HOW DID YOU DO IT?
Computer: BY PUTTING A LARGE...

FIND, WHAT, HAD, HOW MANY, HOW, WHEN, WHY...

Herc
--
http://microPROLOG.com

[mP]-RULE ADDED
[mP]-[happy Dude] if [rich Dude].

[mP]-RULE ADDED
[mP]-[happy Dude] if [drunk Dude].
0
8/24/2012 6:31:19 AM
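The SHRDLU exchanges above turn on two tricks: a closed vocabulary over a tiny world, and resolving pronouns like "it" against dialogue history. A toy sketch of that second trick (purely illustrative; nothing like the real MICROPLANNER implementation, and all names here are made up):

```python
class BlocksWorld:
    """A toy SHRDLU-flavoured command interpreter: three commands over
    a tiny world, with "it" resolved to the last-mentioned object."""

    def __init__(self):
        self.on = {}               # block -> what it currently sits on
        self.last_mentioned = None

    def command(self, text):
        words = text.lower().rstrip("?.!").split()
        # Resolve the pronoun "it" against dialogue history.
        words = [self.last_mentioned if w == "it" else w for w in words]
        if words[:2] == ["pick", "up"]:
            self.last_mentioned = words[2]
            return f"OK, HOLDING THE {words[2].upper()}."
        if words[0] == "put" and "on" in words:
            obj, dest = words[1], words[words.index("on") + 1]
            self.on[obj] = dest
            self.last_mentioned = obj
            return f"OK, THE {obj.upper()} IS ON THE {dest.upper()}."
        if words[:2] == ["where", "is"]:
            return f"ON THE {self.on.get(words[2], 'table').upper()}."
        return "I DON'T UNDERSTAND."

w = BlocksWorld()
w.command("pick up pyramid")
w.command("put it on cube")            # "it" resolves to the pyramid
print(w.command("where is pyramid"))   # ON THE CUBE.
```

The point of the sketch is how little machinery "BY 'IT', I ASSUME YOU MEAN..." needs inside a 50-word world, and hence why the approach stopped scaling outside one.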
On 8/23/2012 11:34 PM, Joshua Cranmer wrote:
> On 8/23/2012 11:13 PM, Peter Olcott wrote:
>> We would begin the process of deducing the requirements of automated
>> knowledge acquisition by specifying the structure of the set of
>> conceptual knowledge. This is actually much simpler than it sounds **. I
>> have had these ideas on my mind since 1991.
>>
>> From the structure of the set of conceptual knowledge we now know
>> exactly what holes to fill and the structure of how they will be filled.
>
> This shows a very, VERY naive understanding of the current 
> state-of-the-art limits in AI. It is well-known by know how to, in 
> effect, teach a system that knows about animals about mythological 
> creatures. That is the sort of thing that already fits in its framework.
>
> What we really don't know how to do is how to teach systems radically 
> different things--we can't take a system that knows about animals and 
> teach it about World War I. This ability is what I would consider the 
> crux of AI research, just as P=NP is the crux of complexity theory 
> research.
>

Ultimately we can very easily** teach a computer system all of the 
details about any set of concepts by simply hand-coding these details 
into an ontology. The problem with this approach is that it takes an 
infeasible amount of time.

** Unless there exist limits to the expressiveness of the current set 
of ontology languages. It looks like some form of predicate logic may 
provide a sufficient basis.
0
Peter
8/24/2012 2:01:42 PM
In comp.lang.prolog Graham Cooper <grahamcooper7@gmail.com> wrote:
>> ... All major
>> recent developments that I'm aware of resort to primarily statistical
>> determination and not any actual understanding of text.
> Right, but go back 40 YEARS AGO and computers could read

Winograd's SHRDLU was a really interesting piece of work, but his spirit
of actually taking a shot at the true problem is probably more important
than the specifics of his implementation (which all evidence indicates
is a dead end).

FWIW, we're also working on this problem:

    http://www.cogbot.com/

Another interesting perspective on SHRDLU was how little was said in his
thesis about the algorithms used by SHRDLU to *generate* language.  My own
work has shown me that it's just as interesting and challenging a problem as
parsing and understanding input.

-- 
Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html
0
vandys (135)
8/24/2012 3:46:54 PM
On Aug 25, 1:46 am, van...@vsta.org wrote:
> In comp.lang.prolog Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> >> ... All major
> >> recent developments that I'm aware of resort to primarily statistical
> >> determination and not any actual understanding of text.
> > Right, but go back 40 YEARS AGO and computers could read
>
> Winograd's SHRDLU was a really interesting piece of work, but his spirit
> of actually taking a shot at the true problem is probably more important
> than the specifics of his implementation (which all evidence indicates
> is a dead end).
>
> FWIW, we're also working on this problem:
>
>     http://www.cogbot.com/
>
> Another interesting perspective on SHRDLU was how little was said in his
> thesis about the algorithms used by SHRDLU to *generate* language.  My own
> work has shown me that it's just as interesting and challenging a problem as
> parsing and understanding input.
>


He published a book, Natural Language Understanding, about 150 pages,
half of which are snippets of LISP.

My copy is in storage waiting until the day I get microPROLOG.com
online to port Winograd's Bot over to SQL/PROLOG.

Just glancing at COGBOT, it seems you've only started *TRANSLATING*
from 1 sentence form to another.

Winograd's bot is fully functional English Voice control that
translates everything it is DOING.

It is based on MICROPLANNER (1971). In 1972 PROLOG came out, a
general-purpose language which was really an attempt to diversify the
domain of Winograd's talking program beyond Blocks World.

http://en.wikipedia.org/wiki/Micro-Planner_(programming_language)

MICROPLANNER
Forward chaining (antecedently):
If assert P, assert Q

Backward chaining (consequently)
If goal Q, goal P

PROLOG
Backward chaining (consequently)
goal Q :- goal P

microPROLOG
[fact arg] if [fact arg]

Forward chaining takes up infinite memory and you can just work out
all the deductions backwards in PROLOG!

UNIFY(  f(A,b,C)   ,   f(x,Y, g(h))  )

A=x
Y=b
C=g(h)

There is only 1 Algorithm in AI, Unify() will search the database for
the functions it needs to solve anything.

It seems PROLOG was written especially for Winograd's Bot but its
breadth of applications didn't make it that far.


***

USING PROLOG FOR MICROPLANNER COMPONENT

eat(MONKEY, FOOD) :- have(MONKEY, FOOD), hungry(MONKEY).
have(MONKEY, FOOD) :- see(MONKEY, FOOD), take(MONKEY, FOOD).
take(MONKEY, FOOD) :- level(MONKEY, L), level(FOOD, L), at(MONKEY,
FOOD).

etc. etc.   monkey pushes crate under light switch and grabs bananas!!


Herc
--
http://microPROLOG.com

[mP]-RULE ADDED
[mP]-[scary SPIDER] if [big SPIDER] [hairy SPIDER].
0
8/25/2012 1:30:15 AM
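Herc's "only 1 Algorithm in AI" claim aside, the UNIFY example above is easy to make concrete. A minimal first-order unification sketch (an illustration, not Prolog's actual implementation; variables are capitalised strings, compound terms are tuples whose first element is the functor, and the occurs check is omitted for brevity):

```python
def is_var(t):
    # Prolog convention: identifiers starting with a capital are variables.
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to their current value.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def unify(a, b, subst=None):
    """Return a substitution unifying a and b, or None on failure."""
    subst = dict(subst or {})
    a, b = walk(a, subst), walk(b, subst)
    if a == b:
        return subst
    if is_var(a):
        subst[a] = b
        return subst
    if is_var(b):
        subst[b] = a
        return subst
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            subst = unify(x, y, subst)
            if subst is None:
                return None
        return subst
    return None  # functor/arity clash: terms cannot be unified

# Herc's example:  UNIFY( f(A,b,C) , f(x,Y,g(h)) )
bindings = unify(('f', 'A', 'b', 'C'), ('f', 'x', 'Y', ('g', 'h')))
print(bindings)   # {'A': 'x', 'Y': 'b', 'C': ('g', 'h')}
```

Backward chaining then amounts to repeatedly unifying a goal against rule heads and recursing on the rule bodies, which is exactly the MICROPLANNER-to-PROLOG shift described above.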
On Aug 25, 1:46=A0am, van...@vsta.org wrote:
> Another interesting perspective on SHRDLU was how little was said in his
> thesis about the algorithms used by SHRDLU to *generate* language. =A0My =
own
> work has shown me that it's just as interesting and challenging a problem=
 was
> parsing and understanding input.


They ran the program in a certain mode to generate all future outcomes
and it generated what 'could' be done.

SHRDLU could search back further through the interactions to find the
proper context in most cases when additional adjectives were supplied.
One could also ask questions about the history, for instance one could
ask "did you pick up anything before the cone?"

A side effect of this memory, and the original rules SHRDLU was
supplied with, is that the program could answer questions about what
was possible in the world and what was not. For instance, SHRDLU would
deduce that blocks could be stacked by looking for examples, but would
realize that triangles couldn't be stacked, after having tried it.

It generates language, just ask the question "what have you been up
to?"

Herc
0
8/25/2012 1:52:25 AM
> On Aug 25, 1:46 am, van...@vsta.org wrote:
>
> > Another interesting perspective on SHRDLU was how little was said in his
> > thesis about the algorithms used by SHRDLU to *generate* language.  My own
> > work has shown me that it's just as interesting and challenging a problem as
> > parsing and understanding input.



Actually I am approaching this problem from a different tack!

microPROLOG looks like it will be breadth first instead of the usual
PROLOG depth first Unify Search, same complexity for needle in a
haystack problems!

It's easier to query the database of facts in SQL just to grab all
matching records!  (BREADTH FIRST SEARCH)

PROLOG
?- lady(X, Y).

X = jane,  Y = blond;
X = gaga,  Y = redhead;
X = lucy,  Y = blond;

^^ one record at a time.

MICROPROLOG
[lady X Y]?

X      Y
jane  blond
gaga redhead
lucy  blond


microPROLOG output will be more like SQL QUERY RESULTS in Table Form.

OK, just a change in format you say??

It could also output the results IN THE SAME WAY FACTS ARE ADDED!

[lady X Y]?

       X       Y
[lady jane blond].
[lady gaga redhead].
[lady lucy blond].

So the OUTPUT from microProlog is the same Predicate format as the
INPUT.

This would make multiple agents in the domain able to communicate with
each other!

[mP1]-[location X 4 5]?     //anybody here?
[mP2]-[location mP2 4 5].   //I'm here

I just have to simplify the grammar slightly so microPROLOG can always
give its result in predicate form. Queries must be a single predicate
format not a Tail of Predicates, which means multiple queries must be
given as rules (table joins) before querying them.

microPROLOG Syntax
Rule 1   LINE. --> FACT.
Rule 2   LINE. --> FACT IF TAIL.
Rule 3   LINE? --> TAIL?
Rule 4   TAIL  --> FACT FACT ... FACT
Rule 5   FACT  --> term  |  VAR  |  [term TAIL]

Rule 3 would become

LINE? --> FACT?

to set the 'template' for the answer format.

OK.  Done!  ;)


Herc
0
8/25/2012 2:15:09 AM
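The "all matching records at once" query style described above is easy to sketch. This is my own toy reading of the microPROLOG idea, not Herc's implementation: facts and query results share the same `[predicate arg arg]` shape, and variables here act as plain wildcards (repeated variables are not constrained against each other, unlike real Prolog unification):

```python
# A tiny fact table, entered in [lady name hair] form.
facts = [
    ("lady", "jane", "blond"),
    ("lady", "gaga", "redhead"),
    ("lady", "lucy", "blond"),
]

def is_var(t):
    # Capitalised identifiers are variables (wildcards here).
    return t[:1].isupper()

def query(pattern):
    """Return every fact matching the pattern at once, SQL-style
    (breadth first), rather than one answer at a time."""
    return [
        fact
        for fact in facts
        if len(fact) == len(pattern)
        and all(is_var(p) or p == f for p, f in zip(pattern, fact))
    ]

# [lady X Y]?  -- output in the same predicate form as the input,
# so another agent could consume it directly as new facts.
for row in query(("lady", "X", "Y")):
    print("[" + " ".join(row) + "].")
# [lady jane blond].
# [lady gaga redhead].
# [lady lucy blond].
```

Because the answers come back in the same syntax facts are asserted in, the agent-to-agent exchange sketched above ("[location X 4 5]?" answered by "[location mP2 4 5].") falls out for free.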
On Aug 23, 10:46 pm, Peter Olcott <OCR4Screen> wrote:
> My detractors are only ignorant of something that is outside of their field.

You're a liar and a fool.
You don't know anywhere near as much about ANY of these fields,
especially any field relating to linguistics or semantics, AS I do OR
as MOST of the people you have been arguing with.  The main question
you are arguing (which is about whether there is a TM that ALWAYS
correctly tells whether any TM does or doesn't halt on any input) is
in fact MUCH SIMPLER THAN ANY of the "outside" fields you are TRYING
to invoke.  It is the fact that you can't even see how simple the
issue is, and think that "something outside their field" MIGHT EVEN BE
RELEVANT that is causing the problem.  The ISSUE ITSELF is in fact
WHOLLY INSIDE OUR field.

> The error of the Halting Problem that I have been arguing all along can
> not be expressed within the mathematics of the Theory of Computation,

Well, that's YOUR problem.  Since the PROBLEM ITSELF IS IN theory
of computation, it would HAVE to be expressible there.  If it doesn't
have any errors INSIDE that context, THEN IT DOESN'T HAVE ANY errors.


But the real, whole, total point is this: Strawson's Theorem.
THAT IS ALL that is going on, and it was a simple point even back when
it was
Russell's Paradox and even BEFORE then.

Suppose (hypothetically) that you thought that being conceited or
self-aggrandizing was BAD and you decided to make your little personal
contribution to lessening hubris in this world by deciding on the
following personal policy:
"I am going to praise all and only those who don't praise themselves".
This is NOT POSSIBLE (are you going to praise yourself OR NOT?),
and complications can arise when any question relating to this
situation is phrased in a way that presumes that doing this IS
possible (e.g. by asking a question about some attribute or property
of a/the person who IS doing this).
But none of the complications will ever imply that "the question is
ill-formed".
"Formed" IS ALREADY IN THE DICTIONARY.
0
greeneg9613 (188)
8/25/2012 2:52:21 AM
On Aug 23, 8:41 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> what will matter is
> whether they are true or not, and that will be decided in the domain to
> which they relate -- the theory of computation.

But in the case of the halting problem, even THAT much is NOT true,
because THAT problem is just a simple corollary of Russell's paradox,
which is purely logical and therefore field-INdependent.  IN EVERY
field, THERE SIMPLY CANNOT EXIST A THING that R's all and only those
things that don't R themselves.  THAT is true for EVERY binary
relation R in EVERY field.  THAT IS ALL THAT IS GOING ON here.
What is going on here therefore has NOTHING TO DO WITH HALTING OR TMs
OR COMPUTATION.

It's just BASIC MEANINGS OF WORDS and the notion of a transitive verb
(standing in here for a two-place relation).
0
greeneg9613 (188)
8/25/2012 2:58:47 AM
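George's "EVERY binary relation R in EVERY field" claim can be checked exhaustively on a small domain. A finite illustration only (the general fact is purely logical): over every one of the 2^(n*n) binary relations on a 2-element set, no element r satisfies R(r, x) <=> not R(x, x) for all x, since the instantiation x = r already forces R(r, r) <=> not R(r, r).

```python
from itertools import product

domain = [0, 1]
pairs = [(a, b) for a in domain for b in domain]

counterexamples = 0
for bits in product([False, True], repeat=len(pairs)):
    R = dict(zip(pairs, bits))      # one of the 16 relations on {0, 1}
    for r in domain:
        # Does r relate to exactly the non-self-relaters?
        if all(R[(r, x)] == (not R[(x, x)]) for x in domain):
            counterexamples += 1    # never happens: x = r contradicts
print(counterexamples)  # 0
```

The same check passes for any finite domain size, which is the field-independence point: nothing about halting, barbers, or praising is doing the work.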
On 8/24/2012 9:52 PM, George Greene wrote:
> On Aug 23, 10:46 pm, Peter Olcott <OCR4Screen> wrote:
>> My detractors are only ignorant of something that is outside of their field.
> You're a liar and a fool.
You have too much animosity and hostility.

> You don't know anywhere near as much about ANY of these fields,
> especially any field relating to linguistics or semantics, AS I do OR
> as MOST of the people you have been arguing with.  The main question
> you are arguing (which is about whether there is a TM that ALWAYS
> correctly tells whether any TM does or doesn't halt on any input) is
> in fact MUCH SIMPLER THAN ANY of the "outside" fields you are TRYING
> to invoke.  It is the fact that you can't even see how simple the
> issue is, and think that "something outside their field" MIGHT EVEN BE
> RELEVANT that is causing the problem.  The ISSUE ITSELF is in fact
> WHOLLY INSIDE OUR field.
>
>> The error of the Halting Problem that I have been arguing all along can
>> not be expressed within the mathematics of the Theory of Computation,
> Well, that's YOUR problem.  Since the PROBLEM ITSELF IS IN theory
> of computation, it would HAVE to be expressible there.  If it doesn't
> have any errors INSIDE that context, THEN IT DOESN'T HAVE ANY errors.
>
>
> But the real, whole, total point is this: Strawson's Theorem.
> THAT IS ALL that is going on, and it was a simple point even back when
> it was
> Russell's Paradox and even BEFORE then.
>
> Suppose (hypothetically) that you thought that being conceited or
> self-aggrandizing was BAD and you decided to make your little personal
> contribution to lessening hubris in this world by deciding on the
> following personal policy:
> "I am going to praise all and only those who don't praise themselves".
> This is NOT POSSIBLE (are you going to praise yourself OR NOT?),
> and complications can arise when any question relating to this
> situation is phrased in a way that presumes that doing this IS
> possible (e.g. by asking a question about some attribute or property
> of a/the person who IS doing this).
> But none of the complications will ever imply that "the question is
> ill-formed".
> "Formed" IS ALREADY IN THE DICTIONARY.

0
Peter
8/25/2012 3:32:51 AM
On 8/24/2012 9:58 PM, George Greene wrote:
> On Aug 23, 8:41 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> what will matter is
>> whether they are true or not, and that will be decided in the domain to
>> which they relate -- the theory of computation.
> But in the case of the halting problem, even THAT much is NOT true,
> because THAT problem is just a simple corollary of Russell's paradox,
> which is purely logical and therefore field-INdependent.  IN EVERY
> field, THERE SIMPLY CANNOT EXIST A THING that R's all and only those
> things that don't R themselves.  THAT is true for EVERY binary
> relation R in EVERY field.  THAT IS ALL THAT IS GOING ON here.
> What is going on here therefore has NOTHING TO DO WITH HALTING OR TMs
> OR COMPUTATION.
>
> It's just BASIC MEANINGS OF WORDS and the notion of a transitive verb
> (standing in here for a two-place relation).
Here is an analogy to the error of the Halting Problem:
The error of the Halting Problem is that it is considered to form an 
actual limit to computation.

Here are two examples of limits to computation:
a) No computer system will ever divide any number by zero and derive a 
correct result of five.
b) No computer system will be able to perform any sequence of 
computation in no time at all.

Which of the above forms an actual limit to computation?
Is the Halting Problem analogous to (a) ?
0
Peter
8/25/2012 3:41:52 AM
George Greene <greeneg@email.unc.edu> writes:

> On Aug 23, 8:41 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> what will matter is
>> whether they are true or not, and that will be decided in the domain to
>> which they relate -- the theory of computation.
>
> But in the case of the halting problem, even THAT much is NOT true,
> because THAT problem is just a simple corollary of Russell's paradox,
> which is purely logical and therefore field-INdependent.  IN EVERY
> field, THERE SIMPLY CANNOT EXIST A THING that R's all and only those
> things that don't R themselves.  THAT is true for EVERY binary
> relation R in EVERY field.  THAT IS ALL THAT IS GOING ON here.
> What is going on here therefore has NOTHING TO DO WITH HALTING OR TMs
> OR COMPUTATION.
>
> It's just BASIC MEANINGS OF WORDS and the notion of a transitive verb
> (standing in here for a two-place relation).

So in what way was what I said not true?

-- 
Ben.
0
ben.usenet (6790)
8/25/2012 3:25:27 PM
On 8/24/2012 10:41 PM, Peter Olcott wrote:
> Here is an analogy to the error of the Halting Problem:
> The error of the Halting Problem is that it is considered to form an
> actual limit to computation.
>
> Here are two examples of limits to computation:
> a) No computer system will ever divide any number by zero and derive a
> correct result of five.
> b) No computer system will be able to perform any sequence of
> computation in no time at all.
>
> Which of the above forms an actual limit to computation?
> Is the Halting Problem analogous to (a) ?

No, it is not. In case (a), division is simply not defined when the 
divisor is 0 [1].

You seem to believe that the Halting problem is a case where the 
"necessarily undecidable" results are limited to a few pathological 
inputs. This is very much not the case. Let me start by pointing out 
that the field I study is compiler theory, and compiler theory is full 
of several interesting undecidable problems: we're interested in 
nontrivial properties of programs specified by a Turing-complete 
language, so Rice's Theorem pretty much universally applies. In 
practice, the number of cases where we can accurately find the solution 
is a very, very small fraction of the search space.

Take array dependence analysis. This amounts to solving Diophantine 
equations and is undecidable in general. We can only solve it explicitly 
in the cases of linear equations. Even then, it's remarkably intractable 
since it degenerates into ILP, which is itself NP-complete.


[1] In a mathematical sense. Computers don't implement pure mathematical 
operations; on a computer, if you're doing integer division, you'll get 
a floating point error instead; division using IEEE 754 logic actually 
produces well-defined results: Infinity, -Infinity, and NaN, depending 
on the dividend.
-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/25/2012 3:44:32 PM
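Footnote [1] is easy to see concretely. Python's own `/` raises ZeroDivisionError rather than producing IEEE 754's special values, so the helper below mimics the results the footnote describes. This is a sketch of the described behaviour only, not an implementation of the standard (signed zeros and the sign of the divisor are ignored):

```python
import math

def ieee_divide(x, y):
    """Division with IEEE-754-style special results for a zero divisor:
    nonzero/0 -> +-Infinity (sign of the dividend), 0/0 -> NaN."""
    if y != 0.0:
        return x / y
    if x == 0.0:
        return math.nan                    # 0/0 is NaN
    return math.copysign(math.inf, x)      # +-x/0 is +-Infinity

print(ieee_divide(1.0, 0.0))    # inf
print(ieee_divide(-1.0, 0.0))   # -inf
print(ieee_divide(0.0, 0.0))    # nan
```

So case (a) in the analogy is not a "limit to computation" at all: the operation is either left undefined or given defined special values by convention, which is quite different from a well-posed total question that provably no machine answers.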
On Aug 25, 10:44 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/24/2012 10:41 PM, Peter Olcott wrote:
>
> > Here is an analogy to the error of the Halting Problem:
> > The error of the Halting Problem is that it is considered to form an
> > actual limit to computation.
>
> > Here are two examples of limits to computation:
> > a) No computer system will ever divide any number by zero and derive a
> > correct result of five.
> > b) No computer system will be able to perform any sequence of
> > computation in no time at all.
>
> > Which of the above forms an actual limit to computation?
> > Is the Halting Problem analogous to (a) ?
>
> No, it is not. In case (a), division is simply not defined when the
> divisor is 0 [1].
>

What about the limit to computation of not being able to divide the
integer four by two and derive a correct result of seventeen?
This *is* a limit to computation, but is it a limit of any consequence?

> You seem to believe that the Halting problem is a case where the
> "necessarily undecidable" results are limited to a few pathological
> inputs. This is very much not the case. Let me start by pointing out

Within the scope of the self-reference form of the Halting Problem,
this *is* the case.

> that the field I study is compiler theory, and compiler theory is full
> of several interesting undecidable problems: we're interested in
> nontrivial properties of programs specified by a Turing-complete
> language, so Rice's Theorem pretty much universally applies. In
> practice, the amount of cases where we can accurately find the solution
> is a very, very small fraction of the search space.
>
> Take array dependence analysis. This amounts to solving Diophantine
> equations and is undecidable in general. We can only solve it explicitly
> in the cases of linear equations. Even then, it's remarkably intractable
> since it degenerates into ILP, which is itself NP-complete.
>
> [1] In a mathematical sense. Computers don't implement pure mathematical
> operations; on a computer, if you're doing integer division, you'll get
> a floating point error instead; division using IEEE 754 logic actually
> produces well-defined results: Infinity, -Infinity, and NaN, depending
> on the dividend.
> --
> Beware of bugs in the above code; I have only proved it correct, not
> tried it. -- Donald E. Knuth

0
PeteOlcott (86)
8/25/2012 3:54:08 PM
On 8/25/2012 10:25 AM, Ben Bacarisse wrote:
> George Greene <greeneg@email.unc.edu> writes:
>
>> On Aug 23, 8:41 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>>> what will matter is
>>> whether they are true or not, and that will be decided in the domain to
>>> which they relate -- the theory of computation.
>> But in the case of the halting problem, even THAT much is NOT true,
>> because
>> THAT problem is just a simple corollary of Russell's paradox, which is
>> purely logical and
>> therefore field-INdependent.   IN EVERY field, THERE SIMPLY CANNOT
>> EXIST A THING
>> that R's all and only those things that don't R themselves.  THAT is
>> true for EVERY
>> binary relation R in EVERY  field.  THAT IS ALL THAT IS GOING ON here.
>> What is going on here therefore  has NOTHING TO DO WITH HALTING OR TMs
>> OR COMPUTATION.
>>
>> It's just BASIC MEANINGS OF  WORDS and the notion of a transitive verb
>> (standing in here
>> for a two-place relation).
> So in what way was what I said not true?
>
Russell's Paradox is also an error of reasoning similar to the error of 
the Halting Problem.

It is analogous to the command to go buy a candy bar from the very first 
store that you come to that never has and never will sell candy bars.
0
Peter
8/25/2012 4:52:58 PM
On Aug 26, 1:44 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/24/2012 10:41 PM, Peter Olcott wrote:
>
> > Here is an analogy to the error of the Halting Problem:
> > The error of the Halting Problem is that it is considered to form an
> > actual limit to computation.
>
> > Here are two examples of limits to computation:
> > a) No computer system will ever divide any number by zero and derive a
> > correct result of five.
> > b) No computer system will be able to perform any sequence of
> > computation in no time at all.
>
> > Which of the above forms an actual limit to computation?
> > Is the Halting Problem analogous to (a) ?
>
> No, it is not. In case (a), division is simply not defined when the
> divisor is 0 [1].
>
> You seem to believe that the Halting problem is a case where the
> "necessarily undecidable" results are limited to a few pathological
> inputs. This is very much not the case. Let me start by pointing out
> that the field I study is compiler theory, and compiler theory is full
> of several interesting undecidable problems: we're interested in
> nontrivial properties of programs specified by a Turing-complete
> language, so Rice's Theorem pretty much universally applies. In
> practice, the amount of cases where we can accurately find the solution
> is a very, very small fraction of the search space.
>
> Take array dependence analysis. This amounts to solving Diophantine
> equations and is undecidable in general. We can only solve it explicitly
> in the cases of linear equations. Even then, it's remarkably intractable
> since it degenerates into ILP, which is itself NP-complete.
>
> [1] In a mathematical sense. Computers don't implement pure mathematical
> operations; on a computer, if you're doing integer division, you'll get
> a floating point error instead; division using IEEE 754 logic actually
> produces well-defined results: Infinity, -Infinity, and NaN, depending
> on the dividend.
> --
> Beware of bugs in the above code; I have only proved it correct, not
> tried it. -- Donald E. Knuth


All cyclic arguments based on your dumb compilers.

Herc
0
8/25/2012 5:28:38 PM
On Aug 26, 12:05 am, George Greene <gree...@email.unc.edu> wrote:
On Aug 24, 11:41 pm, Peter Olcott <OCR4Screen> wrote:
>
> > Here is an analogy to the error of the Halting Problem:
> > The error of the Halting Problem is that it is considered to form an
> > actual limit to computation.
>
> There IS an actual limit to "computation".  The halting problem is
> JUST ONE example of
> a problem that is outside the limit.  The limit is purely by a
> COUNTING argument.
> There are only denumerably many (countably infinitely many) finite
> programs.
> But there are UNcountably infinitely many different infinite-families-
> of-yes-no-questions
> (i.e., uncountably many PROBLEMS, or answers-to-problems).
> There canNOT POSSIBLY exist a program for each problem.
> There are in fact far MORE problems than there are programs.
> The halting problem is just one of the problems that is MOST EASILY
> proved not
> to have a solution-program.
>
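The counting argument quoted above can be made concrete by diagonalization: given any enumeration of total 0/1-valued functions ("programs answering yes/no questions"), one can always construct a question-family no listed function answers. A minimal Python sketch (editorial illustration; the toy list of functions is my own):

```python
def diagonal(listing):
    """Given an enumeration of total functions N -> {0, 1}, return a
    function that differs from the n-th listed function at input n."""
    return lambda n: 1 - listing(n)(n)

# A toy "enumeration" of decision problems: constant 0, constant 1, parity.
funcs = [lambda n: 0, lambda n: 1, lambda n: n % 2]
d = diagonal(lambda i: funcs[i])

# d disagrees with every listed function at its own index, so it is not
# on the list -- no enumeration of programs can cover all problems.
for i, f in enumerate(funcs):
    assert d(i) != f(i)
```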


10 years from now the smart Google Bots will classify George's Meta-
Rational here as branching into delirium.

REALLY GEORGE??

UNCOUNTABLE MANY QUESTIONS?

AND ONLY COUNTABLE MANY (infinite) COMPUTER PROGRAMS TO ANSWER THEM?

------

Sort these 5 terms into 3 columns

FINITE   INFINITE&COUNTABLE   UNCOUNTABLE

 amount of natural numbers in N
 amount of GODEL NUMBERS OF ZFC
 amount of FUNCTIONS OF ZFC
 amount of CHOICE FUNCTIONS OF ZFC
 amount of SETS OF ZFC

> Note that one can say that there are more than |N| sets in ZFC

-------

How do you intend to WRITE these questions down?

Are they INFINITELY LONG STRINGS?

Are they using INFINITELY BIG CHARACTER SETS?

Why cannot you enumerate QUESTIONS now?

PULLEEEEZE!

Herc
--
S: IF STOPS(S) GOTO S
OMEGA + |C| > |C|
|R| > |N|
S: IF STOPS(S) GOTO S
0
8/25/2012 5:38:14 PM
On 8/25/2012 10:54 AM, PeteOlcott wrote:
> What about the limit to computation of not being able to divide the
> integer four by two and derive a correct result of seventeen?
> This *is* a limit to computation, but is it a limit of any consequence?

You're talking nonsense here.

>> You seem to believe that the Halting problem is a case where the
>> "necessarily undecidable" results are limited to a few pathological
>> inputs. This is very much not the case. Let me start by pointing out
>
> Within the scope of the self-reference form of the Halting Problem,
> this *is* the case.

Oh, how quaint. The response by a self-admitted amateur is to tell a 
professional that the professional is wrong with no proof given. I've 
heard more persuasive arguments out of five-year olds.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/25/2012 5:50:12 PM
On 8/25/2012 12:50 PM, Joshua Cranmer wrote:
> On 8/25/2012 10:54 AM, PeteOlcott wrote:
>> What about the limit to computation of not being able to divide the
>> integer four by two and derive a correct result of seventeen?
>> This *is* a limit to computation, but is it a limit of any consequence?
>
> You're talking nonsense here.
>
>>> You seem to believe that the Halting problem is a case where the
>>> "necessarily undecidable" results are limited to a few pathological
>>> inputs. This is very much not the case. Let me start by pointing out
>>
>> Within the scope of the self-reference form of the Halting Problem,
>> this *is* the case.
>
> Oh, how quaint. The response by a self-admitted amateur is to tell a 
> professional that the professional is wrong with no proof given. I've 
> heard more persuasive arguments out of five-year olds.
>
You don't yet have the tools (based on Montague Grammar) to understand 
the proof.
0
Peter
8/25/2012 6:14:08 PM
On 8/25/2012 11:14 AM, Peter Olcott wrote:
> On 8/25/2012 12:50 PM, Joshua Cranmer wrote:
>> On 8/25/2012 10:54 AM, PeteOlcott wrote:
>>> What about the limit to computation of not being able to divide the
>>> integer four by two and derive a correct result of seventeen?
>>> This *is* a limit to computation, but is it a limit of any consequence?
>>
>> You're talking nonsense here.
>>
>>>> You seem to believe that the Halting problem is a case where the
>>>> "necessarily undecidable" results are limited to a few pathological
>>>> inputs. This is very much not the case. Let me start by pointing out
>>>
>>> Within the scope of the self-reference form of the Halting Problem,
>>> this *is* the case.
>>
>> Oh, how quaint. The response by a self-admitted amateur is to tell a
>> professional that the professional is wrong with no proof given. I've
>> heard more persuasive arguments out of five-year olds.
>>
> You don't yet have the tools (based on Montague Grammar) to understand
> the proof.

Given the range of skills and knowledge among the readers of these
newsgroups, I'm sure there is someone here who has studied Montague's
work and could evaluate the proof for both validity and relevance to
theory of computation.

I suggest putting it up on a web page, which will make it available to
even more experts, and posting a link here.

Patricia

0
pats (3556)
8/25/2012 9:46:44 PM
In comp.lang.prolog Graham Cooper <grahamcooper7@gmail.com> wrote:
> Just glancing at COGBOT, it seems you've only started *TRANSLATING*
> from 1 sentence form to another.

I don't know why you think that.  It is fully conversational.
In fact, I can't think of any translation examples on my blog.

Oh, I guess sometimes I mentioned one of its regression testing
modes where it says back (from underlying meaning) what you just
said to it.  But that's just a testing mode, not the actual
target of our technology.  Finds some interesting bugs, too.

-- 
Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html
0
vandys (135)
8/25/2012 10:23:50 PM
In comp.lang.prolog Graham Cooper <grahamcooper7@gmail.com> wrote:
> On Aug 25, 1:46 am, van...@vsta.org wrote:
>> Another interesting perspective on SHRDLU was how little was said in his
>> thesis about the algorithms used by SHRDLU to *generate* language.
> It generates language, just ask the question "what have you been up
> to?"

I didn't say it couldn't generate language.  I said that it's an interesting
(and non-trivial) part of the problem, and his thesis hardly touches on what
it takes to generate language in his system.  As compared to input parsing,
which has quite a lot of detail.

-- 
Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html
0
vandys (135)
8/25/2012 10:25:39 PM
On 8/25/2012 4:46 PM, Patricia Shanahan wrote:
> On 8/25/2012 11:14 AM, Peter Olcott wrote:
>> On 8/25/2012 12:50 PM, Joshua Cranmer wrote:
>>> On 8/25/2012 10:54 AM, PeteOlcott wrote:
>>>> What about the limit to computation of not being able to divide the
>>>> integer four by two and derive a correct result of seventeen?
>>>> This *is* a limit to computation, but is it a limit of any consequence?
>>>
>>> You're talking nonsense here.
>>>
>>>>> You seem to believe that the Halting problem is a case where the
>>>>> "necessarily undecidable" results are limited to a few pathological
>>>>> inputs. This is very much not the case. Let me start by pointing out
>>>>
>>>> Within the scope of the self-reference form of the Halting Problem,
>>>> this *is* the case.
>>>
>>> Oh, how quaint. The response by a self-admitted amateur is to tell a
>>> professional that the professional is wrong with no proof given. I've
>>> heard more persuasive arguments out of five-year olds.
>>>
>> You don't yet have the tools (based on Montague Grammar) to understand
>> the proof.
>
> Given the range of skills and knowledge among the readers of these
> newsgroups, I'm sure there is someone here who has studied Montague's
> work and could evaluate the proof for both validity and relevance to
> theory of computation.
>
> I suggest putting it up on a web page, which will make it available to
> even more experts, and posting a link here.
>
> Patricia
>
Certainly no one here (or anywhere else) has the ability to critique it 
before it has been presented.
0
Peter
8/25/2012 10:33:54 PM
On 8/25/2012 3:33 PM, Peter Olcott wrote:
> On 8/25/2012 4:46 PM, Patricia Shanahan wrote:
>> On 8/25/2012 11:14 AM, Peter Olcott wrote:
....
>>> You don't yet have the tools (based on Montague Grammar) to understand
>>> the proof.
>>
>> Given the range of skills and knowledge among the readers of these
>> newsgroups, I'm sure there is someone here who has studied Montague's
>> work and could evaluate the proof for both validity and relevance to
>> theory of computation.
>>
>> I suggest putting it up on a web page, which will make it available to
>> even more experts, and posting a link here.
>>
>> Patricia
>>
> Certainly no one here (or anywhere else) has the ability to critique it
> before it has been presented.

Of course. That is why the default for a claimed proof that has not yet
been presented and reviewed is that it should be treated as not
existing, and is no better than "I think that ..." as support for any
argument.

If you want to convince anyone of anything through the claimed existence
of a proof you need to present it and persuade some relevant experts to
review it.

Patricia

0
pats (3556)
8/25/2012 11:34:05 PM
On Aug 26, 8:25 am, van...@vsta.org wrote:
> In comp.lang.prolog Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > On Aug 25, 1:46 am, van...@vsta.org wrote:
> >> Another interesting perspective on SHRDLU was how little was said in his
> >> thesis about the algorithms used by SHRDLU to *generate* language.
> > It generates language, just ask the question "what have you been up
> > to?"
>
> I didn't say it couldn't generate language.  I said that it's an interesting
> (and non-trivial) part of the problem, and his thesis hardly touches on what
> it takes to generate language in his system.  As compared to input parsing,
> which has quite a lot of detail.
>
>

www.COGBOT.com

It's very good at keying on common conversational constructs, and
giving you a witty response.  But it clearly isn't "tracking" what you
mean.

I have a dog.
{'Ok.'}

He is Fido.
{'Ok.'}

He ran to the store.
{'Ok.'}

Where is he?
{'He may be at the store.'}

Where is Fido?
{'He may be at the store.'}

While nowhere near as flashy as Cleverbot, you can see several
mechanisms.  Introduction, dogs, then inferring of gender based on
pronoun use.  A past tense motion event involving this dog, and then
a question about current location based on past movements (that's why
Cogbot answers with "may").  Notice that Cogbot is correctly tracking
mention of the dog via its name as well as the appropriate pronoun.


**********

So you still haven't got an internal representation.

I assume Winograd's sentence *construction* can be worked out given
how input sentences are modelled

1 parse sentence
2 into the database

You seem to have worked out how he did 1 and not 2, hence you are
missing

3 construct sentence

At a guess!  Will give you a hoy if I get a Nat Lang parser running
myself.

1st I'll be putting this PROLOG ADVENTURE GAME into microPROLOG.

http://www.amzi.com/AdventureInProlog/a1start.php

  You are in the kitchen.
  You can see: apple, table, broccoli
  You can go to: cellar, office, dining room

  > go to the cellar
  You can't go to the cellar because it's dark in the cellar,
  and you're afraid of the dark.

  > turn on the light
  You can't reach the switch and there's nothing to stand on.

  > go to the office
  You are in the office.
  You can see the following things: desk
  You can go to the following rooms: hall, kitchen

  > open desk
  The desk contains:
    flashlight
    crackers

  > take the flashlight
  You now have the flashlight

Once I get that far I should be better qualified to attack Winograd's
work.

getting these bots running on websites will be better for
collaboration!

Herc
--
www.microPROLOG.com

[scary SPIDER] if [big SPIDER] [hairy SPIDER].
RULE ADDED
0
8/26/2012 12:30:05 AM
On 8/25/2012 6:34 PM, Patricia Shanahan wrote:
> On 8/25/2012 3:33 PM, Peter Olcott wrote:
>> On 8/25/2012 4:46 PM, Patricia Shanahan wrote:
>>> On 8/25/2012 11:14 AM, Peter Olcott wrote:
> ...
>>>> You don't yet have the tools (based on Montague Grammar) to understand
>>>> the proof.
>>>
>>> Given the range of skills and knowledge among the readers of these
>>> newsgroups, I'm sure there is someone here who has studied Montague's
>>> work and could evaluate the proof for both validity and relevance to
>>> theory of computation.
>>>
>>> I suggest putting it up on a web page, which will make it available to
>>> even more experts, and posting a link here.
>>>
>>> Patricia
>>>
>> Certainly no one here (or anywhere else) has the ability to critique it
>> before it has been presented.
>
> Of course. That is why the default for a claimed proof that has not yet
> been presented and reviewed is that it should be treated as not
> existing, and is no better than "I think that ..." as support for any
> argument.
>
> If you want to convince anyone of anything through the claimed existence
> of a proof you need to present it and persuade some relevant experts to
> review it.
>
> Patricia
>
I have to first build the language of the terms of the proof.

I think that the key aspect of the proof will be exactly what the full 
and complete meaning of the concept of the term {analogy} is.

This will require a full set of Meaning Postulates that pertain to the 
meaning of the term: {analogy}.
0
Peter
8/26/2012 2:52:56 AM
> On 8/25/2012 4:46 PM, Patricia Shanahan wrote:
> > Given the range of skills and knowledge among the readers of these
> > newsgroups, I'm sure there is someone here who has studied Montague's
> > work and could evaluate the proof for both validity and relevance to
> > theory of computation.
> > Patricia

On Aug 25, 6:33 pm, Peter Olcott <OCR4Screen> wrote:
> Certainly no one here (or anywhere else) has the ability to critique it
> before it has been presented.

Of course we do.
We can critique it (and you) for presuming to invoke Montague in a
context where Montague is not relevant to begin with.
You cannot presume to need Montague to explain why
"There IS SO TOO an r that R's all and only those things that don't R
themselves" is a toddler's temper tantrum.
0
greeneg9613 (188)
8/26/2012 4:16:08 AM
On Aug 25, 10:52 pm, Peter Olcott <OCR4Screen> wrote:
> I have to first build the language of the terms of the proof.

No, you don't.
Logic ALREADY HAS some terms.
It already has enough terms to say
Ar[ ~Ax[ rRx <--> ~xRx]  ].

If you want to go to 2nd order you can put a
"for all R" in front of all that.
That ALREADY has a very trivial, easy, straightforward, SOPHOMORE IN
HIGH SCHOOL
proof.  NO FURTHER EFFORT BY YOU (let alone Montague) REQUIRED.
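The "straightforward proof" referred to here is a one-step universal instantiation; sketched in standard notation (editorial gloss on the ASCII schema above, not part of the post):

```latex
\forall r\;\neg\,\forall x\,\bigl(\,rRx \leftrightarrow \neg\, xRx\,\bigr)
% Proof sketch: suppose some r satisfied \forall x\,(rRx \leftrightarrow \neg xRx).
% Instantiating x := r yields  rRr \leftrightarrow \neg rRr,  a contradiction.
% Hence no such r exists, for any binary relation R whatsoever.
```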



> I think that the key aspect of the proof will be exactly what the full
> and complete meaning of the concept of the term {analogy} is.

We have a very slimmed-down version of that (involving the fact that
any binary relation R whatsoever is automatically analogous TO ANY AND
EVERY OTHER ONE,
IN the RELEVANT way) in the schema above.

> This will require a full set of Meaning Postulates that pertain to the
> meaning of the term: {analogy}.

No, really, it won't.
It will just require you to notice a logical contradiction
when one is staring you in the face.
0
greeneg9613 (188)
8/26/2012 4:21:01 AM
It's that old define set as complete, find thing from items in set which is
not in set, so show set is not complete, then either infer set is not complete
and add new computed item, or insist set complete and blerb on about how
there's a bigger set than the one which is complete, some of which is by
circular definition not able to be computed as it's not in the first set
because of the pre-assumption that it's complete which it obviously isn't as
you found a way to generate other things which can be in it.

Cheers
0
jackokring (1001)
8/26/2012 4:27:53 AM
Peter Olcott wrote:
> On 8/22/2012 5:47 PM, Ben Bacarisse wrote:
> > PeteOlcott <peteolc...@gmail.com> writes:
>
> >> One thing that has not been even touched on within the Theory of
> >> Computation is:
> >> What exactly is the correct mathematical model for a natural language
> >> question?
> > Yes, there's a lot missing from the Theory of Computation.  There's no
> > mathematical model for metaphorical eulogising either.  Until such time
> > as there is, "my Turing machine is like a red red rose" will remain a
> > deeply ambiguous statement.  And then there's no model of TM
> > consciousness either, so we can't even ask if its read head hurts.  All
> > we can do is ask what can and can't be computed by the narrow rules imposed
> > by such dull and constrained minds as Church, Kleene and Turing.
>
> The key aspect of the Halting Problem that is erroneous that can not be
> sufficiently expressed within the mathematics of the Theory of
> Computation is the claim that the Halting Problem somehow forms an
> {actual limit to computation}.

"The key aspect" of un-moderated Usenet groups is that negative
contributors, such as you Peter Olcott, can and do go on with your
blather despite definitive refutations, even such delightfully clever
refutations as Ben Bacarisse posted.

You, Peter, went off on theory of computation not touching on natural
language, so Ben came back with how the color to Turing machines
is also missing. I mean, come on, that's funny. 'Cept that it kills a
joke when I have to explain it.
0
8/26/2012 4:35:28 AM
On Aug 26, 2:35 pm, Bryan <bryanjugglercryptograp...@yahoo.com> wrote:
>
> "The key aspect" of un-moderated Usenet groups is that negative
> contributors, such as you Peter Olcott, can and do go on with your
> blather despite definitive refutations, even such delightfully clever
> refutations as Ben Bacarisse posted.
>

Rubbish!  None of you have a clue about computations.

Why do you use a TURING MACHINE to define the program Halt?

why not SPECIFY A FUNCTION HALT!

THEN you can USE THAT FUNCTION inside OTHER FUNCTIONS.

At the moment, the Turing Machine version of the Halting Proof is
clearly, evidently and simply not a proof of anything.

S: IF STOPS(S) GOTO S

There is no way in the world you could convince a NON-BAPTISED
MATHEMATICIAN that that program proves any limit to what computers can
and cannot do, and certainly not a method to find a witness
uncomputable REAL NUMBER that there are MORE THAN INFINITY Numbers!

Start with a FUNCTION SPECIFICATION

Otherwise you have proven nothing.

There is no formal proof, if so.  POST IT!

Herc
0
8/26/2012 5:25:37 AM
On Aug 26, 3:38 am, Graham Cooper <grahamcoop...@gmail.com> wrote:
> On Aug 26, 12:05 am, George Greene <gree...@email.unc.edu> wrote
> >
> >
> > There IS an actual limit to "computation".  The halting problem is
> > JUST ONE example of
> > a problem that is outside the limit.  The limit is purely by a
> > COUNTING argument.
> > There are only denumerably many (countably infinitely many) finite
> > programs.
> > But there are UNcountably infinitely many different infinite-families-
> > of-yes-no-questions
>


   UNCOUNTABLE MANY QUESTIONS?

   AND ONLY COUNTABLE MANY (infinite) COMPUTER PROGRAMS
   TO ANSWER THEM?

*******************************************

Perhaps this post is uncountable too?

George can you answer the Question!

Do you RETRACT or can you SUPPORT your idea that there are
uncountable many YES/NO questions?

If you can SUPPORT YOUR ASSERTIONS for once,

then it should be trivial for you to answer the following on
which items belong to the category of uncountable sizes.


 ------

 Sort these 5 terms into 3 columns

 FINITE   INFINITE&COUNTABLE   UNCOUNTABLE

  amount of natural numbers in N
  amount of GODEL NUMBERS OF ZFC
  amount of FUNCTIONS OF ZFC
  amount of CHOICE FUNCTIONS OF ZFC
  amount of SETS OF ZFC

WELL????

You just spout wizardry all day and give us a serve!!

Herc
0
8/26/2012 6:00:27 AM
On 8/25/2012 11:35 PM, Bryan wrote:
> Peter Olcott wrote:
>> On 8/22/2012 5:47 PM, Ben Bacarisse wrote:
>> > PeteOlcott <peteolc...@gmail.com> writes:
>>
>>>> One thing that has not been even touched on within the Theory of
>>>> Computation is:
>>>> What exactly is the correct mathematical model for a natural language
>>>> question?
>>> Yes, there's a lot missing from the Theory of Computation.  There's no
>>> mathematical model for metaphorical eulogising either.  Until such time
>>> as there is, "my Turing machine is like a red red rose" will remain a
>>> deeply ambiguous statement.  And then there's no model of TM
>>> consciousness either, so we can't even ask if its read head hurts.  All
>>> we can do is ask what can and can't be computed by the narrow rules imposed
>>> by such dull and constrained minds as Church, Kleene and Turing.
>> The key aspect of the Halting Problem that is erroneous that can not be
>> sufficiently expressed within the mathematics of the Theory of
>> Computation is the claim that the Halting Problem somehow forms an
>> {actual limit to computation}.
> "The key aspect" of un-moderated Usenet groups is that negative
> contributors, such as you Peter Olcott, can and do go on with your
> blather despite definitive refutations, even such delightfully clever
> refutations as Ben Bacarisse posted.
>
> You, Peter, went off on theory of computation not touching on natural
> language, so Ben came back with how the color to Turing machines
> is also missing. I mean, come on, that's funny. 'Cept that it kills a
> joke when I have to explain it.
The Halting Problem places an actual limit on computation in an 
analogous way that inability to correctly represent a square circle 
limits the capabilities of CAD systems.  Until we have a mathematical 
model of the term {analogous} we can not sufficiently either prove or 
refute this claim.
0
Peter
8/26/2012 1:16:48 PM
On 8/26/2012 1:00 AM, Graham Cooper wrote:
>   Sort these 5 terms into 3 columns
>
>   FINITE   INFINITE&COUNTABLE   UNCOUNTABLE
>
Countable:
>    amount of natural numbers in N
>    amount of GODEL NUMBERS OF ZFC

Uncountable:
>    amount of FUNCTIONS OF ZFC
>    amount of CHOICE FUNCTIONS OF ZFC
>    amount of SETS OF ZFC


-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/26/2012 3:09:49 PM
On 8/26/2012 8:16 AM, Peter Olcott wrote:
> The Halting Problem places an actual limit on computation in an
> analogous way that inability to correctly represent a square circle
> limits the capabilities of CAD systems.  Until we have a mathematical
> model of the term {analogous} we can not sufficiently either prove or
> refute this claim.

No, we can do it without that mathematical model, via informal proofs, 
which (surprise, surprise) seem to have satisfied us for most of these 
threads.

A "square circle" isn't a definable geometric object. If you view the 
CAD system as a mathematical function, this "square circle" remains 
outside the domain, so asking it to draw such a thing is complete and 
utter nonsense.

The Halting problem, in complete contrast, is a very well-defined 
language. Thus, it is in the domain of all languages, so the inability 
of a Turing machine to represent this language is a clear constraint 
that Turing machines can't define all languages.

The core of your argument remains as fallacious as ever; no amount of 
attempting to dress it up in mathematical notation will change it.
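The undecidability claim here rests on the usual diagonal construction: for any claimed total halting decider h, one can build a program that does the opposite of whatever h predicts about it, so h must be wrong somewhere. A runnable Python sketch (editorial illustration; the "deciders" tested are trivial stand-ins, since no real one can exist):

```python
def make_contrary(h):
    """Given any claimed halting decider h(program) -> bool, build a
    program that halts exactly when h predicts it loops forever."""
    def contrary():
        if h(contrary):       # h says "halts" -> loop forever
            while True:
                pass
        # h says "loops forever" -> halt immediately
    # What contrary() would actually do, read off from its construction
    # (we never run it, to avoid the infinite loop):
    actually_halts = not h(contrary)
    return contrary, actually_halts

# Whatever a candidate decider answers, it is wrong on its contrary input.
for candidate in (lambda p: True, lambda p: False):
    prog, truth = make_contrary(candidate)
    assert candidate(prog) != truth
```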

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/26/2012 3:35:16 PM
On 8/26/2012 8:35 AM, Joshua Cranmer wrote:
....
> The Halting problem, in complete contrast, is a very well-defined
> language. Thus, it is in the domain of all languages, so the inability
> of a Turing machine to represent this language is a clear constraint
> that Turing machines can't define all languages.

Turing machines can't even define all languages for which a decision
procedure would be of practical value. That is a real constraint on
computation by any reasonable standard.

Patricia
0
pats (3556)
8/26/2012 4:14:50 PM
On 8/26/2012 10:35 AM, Joshua Cranmer wrote:
> On 8/26/2012 8:16 AM, Peter Olcott wrote:
>> The Halting Problem places an actual limit on computation in an
>> analogous way that inability to correctly represent a square circle
>> limits the capabilities of CAD systems.  Until we have a mathematical
>> model of the term {analogous} we can not sufficiently either prove or
>> refute this claim.
>
> No, we can do it without that mathematical model, via informal proofs, 
> which (surprise, surprise) seem to have satisfied us for most of these 
> threads.
>
> A "square circle" isn't a definable geometric object. If you view the 
> CAD system as a mathematical function, this "square circle" remains 
> outside the domain, so asking it to draw such a thing is complete and 
> utter nonsense.
>
Although you correctly pointed out an aspect where the two are not 
analogous, to conclude that two things lack any analogy on the basis of 
these two things lacking one aspect of an analogy would be incorrect.

Two analytical impossibilities constrain the solution space in the same 
(analytically impossible) way.

> The Halting problem, in complete contrast, is a very well-defined 
> language. Thus, it is in the domain of all languages, so the inability 
> of a Turing machine to represent this language is a clear constraint 
> that Turing machines can't define all languages.
>
> The core of your argument remains as fallacious as ever; no amount of 
> attempting to dress it up in mathematical notation will change it.
>

0
Peter
8/26/2012 4:20:05 PM
On 8/26/2012 11:14 AM, Patricia Shanahan wrote:
> On 8/26/2012 8:35 AM, Joshua Cranmer wrote:
> ...
>> The Halting problem, in complete contrast, is a very well-defined
>> language. Thus, it is in the domain of all languages, so the inability
>> of a Turing machine to represent this language is a clear constraint
>> that Turing machines can't define all languages.
>
> Turing machines can't even define all languages for which a decision
> procedure would be of practical value. That is a real constraint on
> computation by any reasonable standard.
>
> Patricia
Certainly determining whether or not a machine will halt is of practical 
value; it is an aspect of proof of correctness.
0
Peter
8/26/2012 4:22:44 PM
On 8/26/2012 11:20 AM, Peter Olcott wrote:
> On 8/26/2012 10:35 AM, Joshua Cranmer wrote:
>> On 8/26/2012 8:16 AM, Peter Olcott wrote:
>>> The Halting Problem places an actual limit on computation in an
>>> analogous way that inability to correctly represent a square circle
>>> limits the capabilities of CAD systems.  Until we have a mathematical
>>> model of the term {analogous} we can not sufficiently either prove or
>>> refute this claim.
>>
>> No, we can do it without that mathematical model, via informal proofs,
>> which (surprise, surprise) seem to have satisfied us for most of these
>> threads.
>>
>> A "square circle" isn't a definable geometric object. If you view the
>> CAD system as a mathematical function, this "square circle" remains
>> outside the domain, so asking it to draw such a thing is complete and
>> utter nonsense.
>>
> Although you correctly pointed out an aspect where the two are not
> analogous, to conclude that two things lack any analogy on the basis of
> these two things lacking one aspect of an analogy would be incorrect.

I did not defeat all analogies between the two, but nor did I intend to. 
You gave an unqualified claim of analogy: the lack of qualification 
means that you clearly intend the two to be broadly similar except in 
the case of a few "minor" differences. It is this analogy in particular 
that I attacked.

In other words, it's the difference between "heaven is like hell" and 
"heaven is like hell in that they are both places you go after you die." 
One will get you in trouble if you try to say it in most places of 
worship; the other will likely just get people rolling their eyes at you.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/26/2012 6:08:59 PM
On Aug 26, 12:22 pm, Peter Olcott <OCR4Screen> wrote:
> Certainly determining whether or not a machine will halt is of practical
> value, it is an aspect of proof of correctness.

Not always.
A completely correct universal TM will not always halt.  You cannot
always tell or prove that it is going to halt.
Sometimes not halting is the correct behavior, but you cannot prove,
at all of those times, that non-halting will
in fact occur.  Sometimes you have to do a correctness proof ANOTHER
WAY.


0
greeneg9613 (188)
8/26/2012 6:20:29 PM
On 8/26/2012 1:08 PM, Joshua Cranmer wrote:
> On 8/26/2012 11:20 AM, Peter Olcott wrote:
>> On 8/26/2012 10:35 AM, Joshua Cranmer wrote:
>>> On 8/26/2012 8:16 AM, Peter Olcott wrote:
>>>> The Halting Problem places an actual limit on computation in an
>>>> analogous way that inability to correctly represent a square circle
>>>> limits the capabilities of CAD systems.  Until we have a mathematical
>>>> model of the term {analogous} we can not sufficiently either prove or
>>>> refute this claim.
>>>
>>> No, we can do it without that mathematical model, via informal proofs,
>>> which (surprise, surprise) seem to have satisfied us for most of these
>>> threads.
>>>
>>> A "square circle" isn't a definable geometric object. If you view the
>>> CAD system as a mathematical function, this "square circle" remains
>>> outside the domain, so asking it to draw such a thing is complete and
>>> utter nonsense.
>>>
>> Although you correctly pointed out an aspect where the two are not
>> analogous, to conclude that two things lack any analogy on the basis of
>> these two things lacking one aspect of an analogy would be incorrect.
>
> I did not defeat all analogies between the two, but nor did I intend 
> to. You gave an unqualified claim of analogy: the lack of 
> qualification means that you clearly intend the two to be broadly 
> similar except in the case of a few "minor" differences. It is this 
> analogy in particular that I attacked.
>
That would simply be a false assumption. It would also be an 
unreasonable assumption. Analogies do not typically function as great 
similarities across several dimensions. Analogies typically function as 
great similarities along a single dimension.

The two examples are exactly analogous in that they both denote 
analytical impossibilities that constrain the solution space in the 
same (analytically impossible) way. All analytical impossibilities are 
exactly equally analytically impossible and thus completely identical 
along the dimension of possible versus impossible.
0
Peter
8/26/2012 6:29:50 PM
On Aug 26, 2:29 pm, Peter Olcott <OCR4Screen> wrote:
> That would simply be a false assumption. It would also be an
> unreasonable assumption. Analogies do not typically function as great
> similarities across several dimensions. Analogies typically function as
> great similarities along a single dimension.

And you would just be a liar who is not familiar with analogies in
general.

Seriously, counterexamples ABOUND.
Here's ONE:
"Harrison Ford is like one of those sports cars that advertise
acceleration from 0 to 60 m.p.h. in three or four seconds. He can go
from slightly broody inaction to ferocious reaction in approximately
that same time span. And he handles the tight turns and corkscrew
twists of a suspense story without losing his balance or leaving skid
marks on the film. But maybe the best and most interesting thing about
him is that he doesn't look particularly sleek, quick, or powerful;
until something or somebody causes him to gun his engine, he projects
the seemly aura of the family sedan."

Here's another point: analogies usually involve FOUR things, NOT two:
A is to B AS C is to D.

That actually tends to support your point A LITTLE since the other 2
things tend to narrow down the "dimension" along which the two things
are being compared.
But they don't always narrow it ENOUGH.  They often still leave REALMS
of dimensions invoked.
0
greeneg9613 (188)
8/26/2012 6:37:49 PM
On 8/26/2012 1:29 PM, Peter Olcott wrote:
> That would simply be a false assumption. It would also be an
> unreasonable assumption. Analogies do not typically function as great
> similarities across several dimensions. Analogies typically function as
> great similarities along a single dimension.
>
> The two examples are exactly analogous in that they both denote
> analytical impossibilities that  constrain the solution space in the
> same (analytically impossible) way. All analytical impossibilities are
> exactly equally analytically impossible and thus completely identical
> along the dimension of possible versus impossible.

You're trying so hard to prove the assertion you've made that you're 
twisting it away from the actual statement you're trying to convince us 
of: that the undecidability of the Halting problem is "restricted" to a 
"few" "necessarily impossible" instances. In that light, your analogy is 
still completely wrong, and let me give you a much better one.

Imagine a CAD system that could only construct things by means of 
compass and straightedge. It turns out that such a system would not be 
able to trisect an angle. It is to this system that the Halting problem 
is most analogous: in both cases, we can clearly define a success metric 
for when the problem is solved. We can also prove in both cases that our 
system (Turing machines in the Halting problem case, 
compass-and-straightedge in the CAD case) is insufficiently powerful to 
be able to solve it for all inputs. And both leave open the possibility 
that more powerful forms of computation can solve the problem: give CAD 
a protractor and it can easily solve it, while most hypercomputational 
models can solve the Halting problem for Turing machines.


-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/26/2012 7:02:48 PM
On Aug 27, 1:56 am, George Greene <gree...@email.unc.edu> wrote:
> On Aug 25, 12:52 pm, Peter Olcott <OCR4Screen> wrote:
>
> > Russell's Paradox is also an error of reasoning
>
> No, it isn't.
>
> > It is analogous to the command to go buy a candy bar from the very first
> > store that you come to that never has and never will sell candy bars.
>
> Well, perhaps.  That is a command that cannot be carried out.
> The command "Construct me a TM that halts on all and only those TMs
> that don't halt on themselves"
> ALSO cannot be carried out.

YES IT CAN!

A TURING MACHINE CANNOT BE EMBEDDED INSIDE ANOTHER TURING MACHINE

SO THERE IS NO CONTRADICTION!

Why don't you ACKNOWLEDGE THE ARGUMENT before dismissing it.

TM SPECIFICATION:
   INPUT TAPE NUMBER OF TM TO EMULATE
   IF A UTM WOULD HALT BY EMULATING THAT TM
      WRITE 1 AND HALT
   ELSE
      WRITE 0 AND HALT


FUNCTION SPECIFICATION
   FUNCTION HALT(INPUT)
   RETURN TRUE IF TM-INPUT HALTS
   OTHERWISE RETURN FALSE

IT - IS - COMPLETELY - AND - UTTERLY - FCKING - WRONG

TO - EQUATE - THESE - 2 - SPECIFICATIONS

Herc
--
A DISPROOF OF TARSKI'S NO-TRUTH-PREDICATE PROOF
http://tinyurl.com/tarski-proof
http://tinyurl.com/BLUEPRINTS-TARSKI

0
8/26/2012 8:30:07 PM
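The two specifications Herc contrasts can be made concrete with the standard diagonal construction. The Python sketch below is illustrative only: `make_troublemaker` and the candidate deciders are invented names, and the returned string "LOOPS" stands in for genuine non-termination (no real decider exists to plug in). For any candidate decider h, one can build a program that h must misjudge.

```python
# Diagonal construction: for ANY candidate halting decider
# h(prog, arg) -> bool, build a program that consults h about
# itself and then does the opposite of h's prediction.
# "HALTS"/"LOOPS" are stand-ins for halting / running forever.

def make_troublemaker(h):
    def troublemaker(arg):
        if h(troublemaker, arg):    # h predicts "halts"...
            return "LOOPS"          # ...so (pretend to) loop forever
        return "HALTS"              # h predicts "loops", so halt
    return troublemaker

def decider_is_right_about_its_troublemaker(h):
    t = make_troublemaker(h)
    prediction = h(t, None)               # True means "t halts"
    actually_halts = t(None) == "HALTS"
    return prediction == actually_halts

# Two trivial candidate deciders; both fail on their troublemaker.
always_yes = lambda prog, arg: True
always_no = lambda prog, arg: False
```

Every candidate h fails this test, which is the content of the undecidability theorem: each machine/input pair has a correct yes/no answer, but no single machine computes all of them.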
On Aug 27, 1:09 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/26/2012 1:00 AM, Graham Cooper wrote:
>
> >   Sort these 5 terms into 3 columns
>
> >   FINITE   INFINITE&COUNTABLE   UNCOUNTABLE
>
> Countable:
> >    amount of natural numbers in N
> >    amount of GODEL NUMBERS OF ZFC
>
> Uncountable:
>
> >    amount of FUNCTIONS OF ZFC
> >    amount of CHOICE FUNCTIONS OF ZFC
> >    amount of SETS OF ZFC
>

Thanks!

What about:

amount of FUNCTIONS OF ZFC
amount of TURING MACHINES


FINITE, INFINITE&COUNTABLE, or UNCOUNTABLE ???

Herc
0
8/26/2012 8:32:55 PM
On 8/26/2012 3:32 PM, Graham Cooper wrote:
> On Aug 27, 1:09 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
>> On 8/26/2012 1:00 AM, Graham Cooper wrote:
>>
>>>    Sort these 5 terms into 3 columns
>>
>>>    FINITE   INFINITE&COUNTABLE   UNCOUNTABLE
>>
>> Countable:
>>>     amount of natural numbers in N
>>>     amount of GODEL NUMBERS OF ZFC
>>
>> Uncountable:
>>
>>>     amount of FUNCTIONS OF ZFC
>>>     amount of CHOICE FUNCTIONS OF ZFC
>>>     amount of SETS OF ZFC
>>
>
> Thanks!
>
> What about:
>
> amount of FUNCTIONS OF ZFC

I mentioned that above, apparently you're blind.

> amount of TURING MACHINES

countable.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/27/2012 4:13:10 AM
On Aug 27, 2:13 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/26/2012 3:32 PM, Graham Cooper wrote:
>
> > On Aug 27, 1:09 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> >> On 8/26/2012 1:00 AM, Graham Cooper wrote:
>
> >>>    Sort these 5 terms into 3 columns
>
> >>>    FINITE   INFINITE&COUNTABLE   UNCOUNTABLE
>
> >> Countable:
> >>>     amount of natural numbers in N
> >>>     amount of GODEL NUMBERS OF ZFC
>
> >> Uncountable:
>
> >>>     amount of FUNCTIONS OF ZFC
> >>>     amount of CHOICE FUNCTIONS OF ZFC
> >>>     amount of SETS OF ZFC
>
> > Thanks!
>
> > What about:
>
> > amount of FUNCTIONS OF ZFC
>
> I mentioned that above, apparently you're blind.
>
> > amount of TURING MACHINES
>
> countable.
>


So your definition of function is different to Will Hughes then?

(you see why I have to repeat everything)

There are more Functions than Turing Machines?


 FINITE   INFINITE&COUNTABLE   UNCOUNTABLE
          TURING MACHINE       FUNCTIONS


Herc
0
8/27/2012 4:31:32 AM
On 8/26/2012 11:31 PM, Graham Cooper wrote:
> So your definition of function is different to Will Hughes then?
>
> (you see why I have to repeat everything)
>
> There are more Functions than Turing Machines?

Turing Machines only correspond to computable functions, which are a 
strict subset of functions.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/27/2012 5:07:27 AM
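The countability claims here can be illustrated with Cantor's diagonal argument: Turing machines are finite strings over a finite alphabet and hence enumerable, but for any enumeration of functions from N to {0,1} one can define a function the enumeration misses. A sketch in Python, where the three-element list is a toy stand-in for an infinite enumeration:

```python
# Cantor diagonalization: given an enumeration i -> f_i of
# functions from N to {0,1}, d(n) = 1 - f_n(n) differs from
# every f_i at input i, so no enumeration covers all functions.

def diagonal(enumeration):
    return lambda n: 1 - enumeration(n)(n)

# Toy "enumeration": a finite prefix standing in for the full list.
fs = [lambda n: 0, lambda n: n % 2, lambda n: 1]
d = diagonal(lambda i: fs[i])
```

Since programs are enumerable and total functions are not, the computable functions form a strict subset of all functions, which is Cranmer's point above.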
On Aug 27, 3:07 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/26/2012 11:31 PM, Graham Cooper wrote:
>
> > So your definition of function is different to Will Hughes then?
>
> > (you see why I have to repeat everything)
>
> > There are more Functions than Turing Machines?
>
> Turing Machines only correspond to computable functions, which are a
> strict subset of functions.


So in ZFC there are uncountable many uncomputable choice functions
1 for each set right?

A non-lexicographical choice function, more than |N| in total
that when applied to a ZFC SET will output 1 element.

Like  MIDPOINT(0,1) except you can't actually write it down?

What category are THEOREMS of ZFC?

What category are YES/NO QUESTIONS?

You seem to agree with George here:

 There are only denumerably many (countably infinitely many)
 finite programs.
 But there are UNcountably infinitely many different infinite-
families-
 of-yes-no-questions


Herc
0
8/27/2012 6:55:29 AM
On Aug 26, 1:08 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/26/2012 11:20 AM, Peter Olcott wrote:
>
>
>
>
>
> > On 8/26/2012 10:35 AM, Joshua Cranmer wrote:
> >> On 8/26/2012 8:16 AM, Peter Olcott wrote:
> >>> The Halting Problem places an actual limit on computation in an
> >>> analogous way that inability to correctly represent a square circle
> >>> limits the capabilities of CAD systems.  Until we have a mathematical
> >>> model of the term {analogous} we can not sufficiently either prove or
> >>> refute this claim.
>
> >> No, we can do it without that mathematical model, via informal proofs,
> >> which (surprise, surprise) seem to have satisfied us for most of these
> >> threads.
>
> >> A "square circle" isn't a definable geometric object. If you view the
> >> CAD system as a mathematical function, this "square circle" remains
> >> outside the domain, so asking it to draw such a thing is complete and
> >> utter nonsense.
>
> > Although you correctly pointed out an aspect where the two are not
> > analogous, to conclude that two things lack any analogy on the basis of
> > these two things lacking one aspect of an analogy would be incorrect.
>
> I did not defeat all analogies between the two, but nor did I intend to.
> You gave an unqualified claim of analogy: the lack of qualification
> means that you clearly intend the two to be broadly similar except in
> the case of a few "minor" differences. It is this analogy in particular
> that I attacked.
>
You apparently don't know much about how analogies work. To state that
two concepts are analogous is to state that they are comparable along
at least one dimension.

When I specifically provide the dimension along which the two elements are
comparable, you make sure to ignore this. This would tend to indicate
an argumentative mindset, and thus not a seeker of truth.

> In other words, it's the difference between "heaven is like hell" and
> "heaven is like hell in that they are both places you go after you die."
> One will get you in trouble if you try to say it in most places of
> worship; the other will likely just get people rolling their eyes at you.
>
> --
> Beware of bugs in the above code; I have only proved it correct, not
> tried it. -- Donald E. Knuth

0
PeteOlcott (86)
8/27/2012 11:27:53 AM
PeteOlcott <peteolcott@gmail.com> writes:
<snip everything because I'm not commenting on the sub-argument about
what analogies are>

I thought your analogy was quite good -- it makes it clear to new
readers why your thinking about halting is so wrong.

First, it makes a category error.  Your analogy was about asking a
question of some software, but the halting problem is about the
existence of something much like software.  To correct this, you might
have said "does a CAD system *exist* that can output a square circle".
But now you can see that we've lost the notion of input upon which the
output should be based.  A better analogy (by which I mean one closer to
your erroneous thinking) would have been: "does a program exist that
prints an integer with exactly half the value of the input".

Secondly (and I can't believe you've got me to say this again!) the
output you want from your supposedly analogous system does not exist.
There are no "square circles" nor is there a correct answer to every
instance of the arithmetic question I've just proposed.  But every
instance of the halting problem has a correct answer -- it's either yes
or no.  It really is the case that for every encoded machine and input
pair, there is a correct yes/no answer the corresponds to whether that
machine would halt on that input.

It's revealing that you have both accepted and refuted this simple fact
in the recent threads on the subject.  I think that it's only by
believing it to be both true and false that the huge threads were
possible.  I don't suppose you care to give a definitive answer now?

-- 
Ben.
0
ben.usenet (6790)
8/27/2012 4:18:20 PM
In comp.lang.prolog Graham Cooper <grahamcooper7@gmail.com> wrote:
> It's very good at keying on common conversational constructs, and
> giving you a witty response.  But it clearly isn't "tracking" what you
> mean.
> 
> I have a dog.
> {"Ok."}
> 
> He is Fido.
> {"Ok."}
> 
> He ran to the store.
> {"Ok."}
> 
> Where is he?
> {"He may be at the store."}

Motion in the past only (statistically) tells location in the present.
I'm open to suggestions on when it should just assume no subsequent motion,
and thus answer definitely.

> So you still haven't got an internal representation.

It knew what was said, realized that past-present was bridged, and had
to answer statistically.  You can tell that it back-referenced its assembled
knowledge, since it mentioned the location.

-- 
Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html
0
vandys (135)
8/27/2012 4:19:20 PM
On Aug 28, 2:19 am, van...@vsta.org wrote:
> In comp.lang.prolog Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > It's very good at keying on common conversational constructs, and
> > giving you a witty response.  But it clearly isn't "tracking" what you
> > mean.
>
> > I have a dog.
> > {"Ok."}
>
> > He is Fido.
> > {"Ok."}
>
> > He ran to the store.
> > {"Ok."}
>
> > Where is he?
> > {"He may be at the store."}
>
>
> Motion in the past only (statistically) tells location in the present.
> I'm open to suggestions on when it should just assume no subsequent motion,
> and thus answer definitely.
>
> > So you still haven't got an internal representation.
>
> It knew what was said, realized that past-present was bridged, and had
> to answer statistically.  You can tell that it back-referenced its assembled
> knowledge, since it mentioned the location.
>

OK, blocks world is a closed world.

Put the red pyramid on the box.

Where is the red pyramid?

C:> on the box

***

I have to overhaul microPROLOG, add a redundancy into the database
table to simplify the SQL queries.

pred  pos  term  arg
----------------------
  1     1     f
  1     2     a
  1     3     b    2
  1     4     c

  2     1     b
  2     2     x
  2     3     y


This is  f(a, b(x,y), c)

Row b is duplicated but it's a much simpler table to work with.

Then I will be able to process the bulk of PROLOG UNIFY() in SQL
which should be lightning fast!

So I just have to switch all the parsing code over form the Primary
Key FID to the new Composite Key

[PRED POS TERM ARG TYP]
 --------------


Herc
0
8/27/2012 4:58:40 PM
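The flattened (pred, pos, term, arg) layout Herc describes can be prototyped without SQL. A Python sketch (row values follow the post's example; the helper name is invented) shows that the duplicated subterm row is enough to rebuild the nested term:

```python
# Each row is (pred, pos, term, arg): pred groups the rows of one
# term, pos orders them, and a non-None arg links to the pred id
# of a nested subterm. This encodes f(a, b(x, y), c).

rows = [
    (1, 1, 'f', None),
    (1, 2, 'a', None),
    (1, 3, 'b', 2),     # arg=2: subterm stored under pred id 2
    (1, 4, 'c', None),
    (2, 1, 'b', None),
    (2, 2, 'x', None),
    (2, 3, 'y', None),
]

def rebuild(pred):
    """Reassemble the nested term string for one pred id."""
    parts = sorted(r for r in rows if r[0] == pred)
    functor = parts[0][2]
    args = [rebuild(r[3]) if r[3] else r[2] for r in parts[1:]]
    return f"{functor}({', '.join(args)})"
```

The same shape carries over to a real SQL table with a composite key on (pred, pos), which is what makes set-at-a-time queries over terms possible.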
In comp.lang.prolog Graham Cooper <grahamcooper7@gmail.com> wrote:
> OK, blocks world is a closed world.
> Put the red pyramid on the box.
> Where is the red pyramid?
> C:> on the box

Ok, but think about some difficulties even within a closed world:
    Where is it? (pronoun resolution)
    Where is the pyramid? (recentness disambig)

and especially:
    Where was the red pyramid? (construction and traversal of history)
    Where has the red pyramid been?

Andy
0
vandys (135)
8/27/2012 5:03:50 PM
On Aug 28, 2:18 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>
> Secondly (and I can't believe you've got me to say this again!) the
> output you want from your supposedly analogous system does not exist.
> There are no "square circles" nor is there a correct answer to every
> instance of the arithmetic question I've just proposed.  But every
> instance of the halting problem has a correct answer -- it's either yes
> or no.  It really is the case that for every encoded machine and input
> pair, there is a correct yes/no answer that corresponds to whether that
> machine would halt on that input.

Yes and no!

tinyurl.com/blueprints-halting

TM SPECIFICATION:
   INPUT TAPE NUMBER OF TM TO EMULATE
   IF A UTM WOULD HALT BY EMULATING THAT TM
      WRITE 1 AND HALT
   ELSE
      WRITE 0 AND HALT

There is no contradiction for this TM specification.

**********************

   there is a correct yes/no answer that corresponds to whether that
   machine would halt on that input.

OK let's THINK BACKWARDS!  <<<<<

UNDEFINE THE DEFINITION OF COMPUTABLE IF WE MAY

S: IF HALTS(S) GOTO S

IF the above program is to work
THEN HALT() will require 3 values to always be correct.

Note HALT() is used_differently to the TM SPECIFICATION which only
outputs 2 values.

Herc
0
8/27/2012 5:08:10 PM
On Aug 28, 3:03 am, van...@vsta.org wrote:
> In comp.lang.prolog Graham Cooper <grahamcoop...@gmail.com> wrote:
>
> > OK, blocks world is a closed world.
> > Put the red pyramid on the box.
> > Where is the red pyramid?
> > C:> on the box
>
> Ok, but think about some difficulties even within a closed world:
>     Where is it? (pronoun resolution)
>     Where is the pyramid? (recentness disambig)
>
> and especially:
>     Where was the red pyramid? (construction and traversal of history)
>     Where has the red pyramid been?
>
> Andy

I'm not up to that far yet!

But WHO, WHAT, WHERE, HOW, WAS are all parts of querying the PLANNER
routines and records of the PLANS carried out so far.

putontop(BLOCK1, BLOCK2) :- hold(BLOCK1), clear(BLOCK2).

clear(BLOCK) :- remove(BLOCKX), ison(BLOCKX, BLOCK).

This is in PROLOG which was based on the backward chaining component -
UNIFY() - of microPLANNER used in Winograds bot.

Herc
0
8/27/2012 5:15:02 PM
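A minimal backward chainer in the spirit of those planner clauses can be sketched in Python. This propositional version (hypothetical facts and rules, no variables or unification) only shows the goal-reduction shape of UNIFY-style proving:

```python
# Backward chaining: a goal is proved if it is a known fact, or if
# some rule with that goal as head has an entirely provable body.

facts = {('hold', 'red'), ('clear', 'box')}

rules = {
    ('putontop', 'red', 'box'): [('hold', 'red'), ('clear', 'box')],
}

def prove(goal):
    """True if goal is a fact or reducible to facts via a rule."""
    if goal in facts:
        return True
    return any(all(prove(sub) for sub in body)
               for head, body in rules.items() if head == goal)
```

Real Prolog adds variables and unification on top of exactly this recursion, which is why the quoted clauses read as goal reductions.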
Graham Cooper <grahamcooper7@gmail.com> writes:

> On Aug 28, 2:18 am, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>>
>> Secondly (and I can't believe you've got me to say this again!) the
>> output you want from your supposedly analogous system does not exist.
>> There are no "square circles" nor is there a correct answer to every
>> instance of the arithmetic question I've just proposed.  But every
>> instance of the halting problem has a correct answer -- it's either yes
>> or no.  It really is the case that for every encoded machine and input
>> pair, there is a correct yes/no answer that corresponds to whether that
>> machine would halt on that input.
>
> Yes and no!

No.

> tinyurl.com/blueprints-halting
>
> TM SPECIFICATION:
>    INPUT TAPE NUMBER OF TM TO EMULATE
>    IF A UTM WOULD HALT BY EMULATING THAT TM
>       WRITE 1 AND HALT
>    ELSE
>       WRITE 0 AND HALT
>
> There is no contradiction for this TM specification.
>
> **********************
>
>    there is a correct yes/no answer that corresponds to whether that
>    machine would halt on that input.
>
> OK let's THINK BACKWARDS!  <<<<<
>
> UNDEFINE THE DEFINITION OF COMPUTABLE IF WE MAY
>
> S: IF HALTS(S) GOTO S
>
> IF the above program is to work
> THEN HALT() will require 3 values to always be correct.
>
> Note HALT() is used_differently to the TM SPECIFICATION which only
> outputs 2 values.

I can't attribute a meaning to what you've written.  This may be because
I've stopped reading what you post; maybe some of these things are
defined in other posts.  Direct replies to me are scored up, so I had a go
with this post but I got nothing.  Sorry.

If it helps: three-way quasi halt deciders are obviously possible, as
are two-way deciders that aren't always right.

-- 
Ben.
0
ben.usenet (6790)
8/27/2012 6:19:24 PM
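Ben's closing remark about three-way quasi deciders is easy to make concrete: bound the simulation by a step budget and answer "halts" only when the simulation actually finishes, otherwise "unknown". A sketch, where the step-function encoding of a machine is an illustrative convention:

```python
# Three-way quasi halt decider: simulate up to `limit` steps.
# It is never wrong; it just sometimes refuses to commit.

def quasi_halts(step_fn, state, limit=1000):
    """step_fn(state) -> next state, or None once halted."""
    for _ in range(limit):
        state = step_fn(state)
        if state is None:
            return "halts"
    return "unknown"

countdown = lambda n: None if n == 0 else n - 1   # halts for n >= 0
spin = lambda s: s                                 # never halts
```

Such a decider is computable precisely because it is allowed a third answer; forcing it down to two answers on all inputs is what the undecidability theorem rules out.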
On Aug 28, 4:19 am, Ben Bacarisse wrote:
> > TM SPECIFICATION:
> >    INPUT TAPE NUMBER OF TM TO EMULATE
> >    IF A UTM WOULD HALT BY EMULATING THAT TM
> >       WRITE 1 AND HALT
> >    ELSE
> >       WRITE 0 AND HALT
>
>
> I can't attribute a meaning to what you've written.  This may be because

Then why are you posting in a HALTING PROOF thread?

Herc
0
8/27/2012 9:51:45 PM
On 8/27/2012 11:18 AM, Ben Bacarisse wrote:
> PeteOlcott <peteolcott@gmail.com> writes:
> <snip everything because I'm not commenting on the sub-argument about
> what analogies are>
>
> I thought your analogy was quite good -- it makes it clear to new
> readers why your thinking about halting is so wrong.
>
> First, it makes a category error.  Your analogy was about asking a
> question of some software, but the halting problem is about the
> existence of something much like software.  To correct this, you might
> have said "does a CAD system *exist* that can output a square circle".
These are all analogous in that they are all based on analytical 
impossibilities:
a) A CAD system can not possibly exist that can represent a square circle.
b) The self-reference (Pathological Self-Reference) form of the Halting 
Problem has no possible solution.
c) How many feet long is the color of your car? Has no correct answer.

> But now you can see that we've lost the notion of input upon which the
> output should be based.  A better analogy (by which I mean one closer to
> your erroneous thinking) would have been: "does a program exist that
> prints an integer with exactly half the value of the input".
>
> Secondly (and I can't believe you've got me to say this again!) the
> output you want from your supposedly analogous system does not exist.
> There are no "square circles" nor is there a correct answer to every
> instance of the arithmetic question I've just proposed.  But every
> instance of the halting problem has a correct answer -- it's either yes
> or no.  It really is the case that for every encoded machine and input
> pair, there is a correct yes/no answer that corresponds to whether that
> machine would halt on that input.
>
> It's revealing that you have both accepted and refuted this simple fact
> in the recent threads on the subject.  I think that it's only by
> believing it to be both true and false that the huge threads were
> possible.  I don't suppose you care to give a definitive answer now?
>

0
Peter
8/27/2012 11:02:22 PM
Peter Olcott <OCR4Screen> writes:

> On 8/27/2012 11:18 AM, Ben Bacarisse wrote:
>> PeteOlcott <peteolcott@gmail.com> writes:
>> <snip everything because I'm not commenting on the sub-argument about
>> what analogies are>
>>
>> I thought your analogy was quite good -- it makes it clear to new
>> readers why your thinking about halting is so wrong.
>>
>> First, it makes a category error.  Your analogy was about asking a
>> question of some software, but the halting problem is about the
>> existence of something much like software.  To correct this, you might
>> have said "does a CAD system *exist* that can output a square circle".
> These are all analogous in that they are all based on analytical
> impossibilities:
> a) A CAD system can not possibly exist that can represent a square circle.
> b) The self-reference (Pathological Self-Reference) form of the
> Halting Problem has no possible solution.
> c) How many feet long is the color of your car? Has no correct answer.

Except you are wrong about b.  All forms of the halting problem have
correct yes/no answers.  Please don't post the example you usually post
in reply to this simple claim.  It's a naive bait-and-switch to another
kind of question which has not persuaded anyone so far and I can't see
why it would be different today.

-- 
Ben.
0
ben.usenet (6790)
8/27/2012 11:34:49 PM
On Aug 27, 7:02 pm, Peter Olcott <OCR4Screen> wrote:
> These are all analogous in that they are all based on analytical
> impossibilities:
> a) A CAD system can not possibly exist that can represent a square circle.
> b) The self-reference (Pathological Self-Reference) form of the Halting
> Problem has no possible solution.

Wrong.  b) does not even exist at all. b) is a figment of your
imagination.
The thing that WOULD be analogous to a) would be,
"A TM cannot possibly exist that can correctly answer all halting
questions."


> c) How many feet long is the color of your car? Has no correct answer.

That's not analogous either.  Colors in fact CAN be alleged to have
lengths, since they are WAVELENGTHS OF THE RELEVANT COLOR *OF LIGHT*.
The wavelengths of visible light are all between about 3800 angstroms
and about double that.  An angstrom is a 10-millionth of a millimeter.

Another reason why these 3 cannot be analogous to each other is that
THE FIRST TWO ARE STATEMENTS BUT THE THIRD IS A QUESTION.
If you  REALLY wanted to make all 3 of them analogous then you would
make all 3 of them questions, or all 3 of them answers.

It is indeed the case that the mere EXISTENCE of a TM that correctly
answers all halting questions is an analytical impossibility.  But
that fact DOES NOT imply that  "The Halting Problem is based on an ill-
formed Question".
It RATHER implies that THE WELL-FORMED ANSWER TO the WELL-formed
question in question IS *NO*.


"What is the height of red?" or "What is the color of an inch?"
Are COMPLETELY DIFFERENT NON-analogous kinds of unanswerable
questions, and "When did you stop beating your wife" is unanswerable
for a THIRD kind of reason, again NOT analogous to EITHER of the
reasons (for unanswerability) of the previous two.  You are so BAD at
drawing analogies that sidelines about the meaning of "analogy" or the
meaning of "question" are entirely beside the point.

Given that YOU have ALREADY now CONCEDED that a "Halts() TM" IS AN
ANALYTICAL IMPOSSIBILITY, YOU HAVE ALREADY CONCEDED that YOU have been
wrong the whole time.
0
greeneg9613 (188)
8/27/2012 11:37:28 PM
On 8/27/2012 6:34 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 8/27/2012 11:18 AM, Ben Bacarisse wrote:
>>> PeteOlcott <peteolcott@gmail.com> writes:
>>> <snip everything because I'm not commenting on the sub-argument about
>>> what analogies are>
>>>
>>> I thought your analogy was quite good -- it makes it clear to new
>>> readers why your thinking about halting is so wrong.
>>>
>>> First, it makes a category error.  Your analogy was about asking a
>>> question of some software, but the halting problem is about the
>>> existence of something much like software.  To correct this, you might
>>> have said "does a CAD system *exist* that can output a square circle".
>> These are all analogous in that they are all based on analytical
>> impossibilities:
>> a) A CAD system can not possibly exist that can represent a square circle.
>> b) The self-reference (Pathological Self-Reference) form of the
>> Halting Problem has no possible solution.
>> c) How many feet long is the color of your car? Has no correct answer.
> Except you are wrong about b.  All forms of the halting problem have
> correct yes/no answers.
That is the part that everyone here fails to comprehend:
I say that within the scope of pathological self-reference there is no 
correct answer and you (and everyone else) say that it is not true 
because there is an answer outside of the scope of pathological 
self-reference.

>   Please don't post the example you usually post
> in reply to this simple claim.  It's a naive bait-and-switch to another
> kind of question which has not persuaded anyone so far and I can't see
> why it would be different today.
>

0
Peter
8/28/2012 12:01:53 AM
On 8/27/2012 6:02 PM, Peter Olcott wrote:
> b) The self-reference (Pathological Self-Reference) form of the Halting
> Problem has no possible solution.

Ah yes, I had forgotten that you were convinced of this particular 
misconception. A misconception that arises out of treating a particular 
informal version of the proof as the actual full, formal version of the 
proof, and then injecting your beliefs about what the author is 
intending to say in the proof as what he is saying. Then you impose an 
interpretation of particular context and say that, as a result, the 
theorem is really wrong, and, obviously, everyone who disagrees with you 
is a stupid idiot who doesn't know what they're talking about, which makes 
you the biggest genius since $FAMOUS_HISTORICAL_GENIUS.

Because the alternative, that you are misinterpreting an informal 
version of a proof, and that no fewer than three other people who 
probably have more experience in this field than you do are all correct 
in identifying the same misinterpretation, is much less likely to be true.

While eating supper, I thought of yet another rebuttal to your 
viewpoint. Then I realized that you are so dead-set in insisting on your 
fallacious interpretation (which makes less and less sense the more I 
think about it) that it's not worth the cost of transmitting the bits to 
send it: you'll reject it without trying to rebut it with a stock phrase 
that has been replied to a dozen times previously that only shows 
just how persistently you miss the point, as well as how unable you are 
to comprehend what others write. I've got better things to do with my time.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/28/2012 12:30:35 AM
On 8/27/2012 7:30 PM, Joshua Cranmer wrote:
> On 8/27/2012 6:02 PM, Peter Olcott wrote:
>> b) The self-reference (Pathological Self-Reference) form of the Halting
>> Problem has no possible solution.
>
> Ah yes, I had forgotten that you were convinced of this particular 
> misconception. A misconception that arises out of treating a 
> particular informal version of the proof as the actual full, formal 
> version of the proof, and then injecting your beliefs about what the 
> author is intending to say in the proof as what he is saying. 
This is why a mathematical model of {analogy} is needed.

> Then you impose an interpretation of particular context and say that, 
> as a result, the theorem is really wrong, and, obviously, everyone who 
> disagrees with you are stupid idiots who don't know what they're 
> talking about, which makes you the biggest genius since 
> $FAMOUS_HISTORICAL_GENIUS.
>
> Because the alternative, that you are misinterpreting an informal 
> version of a proof, and that no fewer than three other people who 
> probably have more experience in this field than you do are all 
> correct in identifying your same misinterpretation is much less likely 
> to be true.
>
Yes and these same people are also experts in translating any natural 
language statement into its mathematical formalism, thus can point out 
using extension of Montague Semantics exactly where I went wrong in my 
analogy.

Alternatively, the mathematical formalisms of the Theory of Computation 
are insufficiently expressive to represent the view that I am proposing 
within the mind, and thus must be augmented.

> While eating supper, I thought of yet another rebuttal to your 
> viewpoint. Then I realized that you are so dead-set in insisting on 
> your fallacious interpretation (which makes less and less sense the 
> more I think about it) that it's not worth the cost of transmitting 
> the bits to send it: you'll reject it without trying to rebut it with 
> a stock phrase that has been replied to a dozen times previously 
> that only shows just how persistently you miss the point, as well as 
> how unable you are to comprehend what others write. I've got better 
> things to do with my time.
>

0
Peter
8/28/2012 1:00:55 AM
Peter Olcott <OCR4Screen> writes:

> On 8/27/2012 6:34 PM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 8/27/2012 11:18 AM, Ben Bacarisse wrote:
>>>> PeteOlcott <peteolcott@gmail.com> writes:
>>>> <snip everything because I'm not commenting on the sub-argument about
>>>> what analogies are>
>>>>
>>>> I thought your analogy was quite good -- it makes it clear to new
>>>> readers why your thinking about halting is so wrong.
>>>>
>>>> First, it makes a category error.  Your analogy was about asking a
>>>> question of some software, but the halting problem is about the
>>>> existence of something much like software.  To correct this, you might
>>>> have said "does a CAD system *exist* that can output a square circle".
>>> These are all analogous in that they are all based on analytical
>>> impossibilities:
>>> a) A CAD system can not possibly exist that can represent a square circle.
>>> b) The self-reference (Pathological Self-Reference) form of the
>>> Halting Problem has no possible solution.
>>> c) How many feet long is the color of your car? Has no correct answer.
>> Except you are wrong about b.  All forms of the halting problem have
>> correct yes/no answers.
> That is the part that everyone here fails to comprehend:
> I say that within the scope of pathological self-reference there is no
> correct answer and you (and everyone else) say that it is not true
> because there is an answer outside of the scope of pathological
> self-reference.

I am glad you agree that there is an answer to all such questions.

<snip>
-- 
Ben.
0
ben.usenet (6790)
8/28/2012 1:13:32 AM
On 8/27/2012 8:13 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 8/27/2012 6:34 PM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 8/27/2012 11:18 AM, Ben Bacarisse wrote:
>>>>> PeteOlcott <peteolcott@gmail.com> writes:
>>>>> <snip everything because I'm not commenting on the sub-argument about
>>>>> what analogies are>
>>>>>
>>>>> I thought your analogy was quite good -- it makes it clear to new
>>>>> readers why your thinking about halting is so wrong.
>>>>>
>>>>> First, it makes a category error.  Your analogy was about asking a
>>>>> question of some software, but the halting problem is about the
>>>>> existence of something much like software.  To correct this, you might
>>>>> have said "does a CAD system *exist* that can output a square circle".
>>>> These are all analogous in that they are all based on analytical
>>>> impossibilities:
>>>> a) A CAD system can not possibly exist that can represent a square circle.
>>>> b) The self-reference (Pathological Self-Reference) form of the
>>>> Halting Problem has no possible solution.
>>>> c) How many feet long is the color of your car? Has no correct answer.
>>> Except you are wrong about b.  All forms of the halting problem have
>>> correct yes/no answers.
>> That is the part that everyone here fails to comprehend:
>> I say that within the scope of pathological self-reference there is no
>> correct answer and you (and everyone else) say that it is not true
>> because there is an answer outside of the scope of pathological
>> self-reference.
> I am glad you agree that there is an answer to all such questions.
>
> <snip>
No is no possible correct answer to the Pathological Self Reference form 
of the Halting Problem.
0
Peter
8/28/2012 1:26:17 AM
On 8/27/2012 6:26 PM, Peter Olcott wrote:
....
> No is no possible correct answer to the Pathological Self Reference form
> of the Halting Problem.

Fortunately, there is no self-reference in the real proof, only in your
misunderstanding of an informal version, so we don't even have to try to
find out what, if anything, "Pathological" means in this context.

Patricia

0
pats (3556)
8/28/2012 1:55:53 AM
On Aug 27, 9:00 pm, Peter Olcott <OCR4Screen> wrote:
> This is why a mathematical model of {analogy} is needed.

No, it isn't.  Your decision to invoke "Montague", "analogy",
"question",
and tons of redefinitions of terms that ARE BESIDE THE POINT is just
public masturbation.   You are just taking the discussion afield
because you
HAVE ALREADY LOST the original issue -- indeed, as soon as you said
"analytical impossibliity", you HAD ALREADY CONCEDED the original
issue --
about THE HALTING PROBLEM.  OUR WHOLE POINT WAS ALWAYS that
WE HAD A PROOF that it WAS an analytical impossibility that "a totally
correct
halt-deciding TM exists."  NOTICING THAT THAT  *IS* an analytical
impossibility
IS the solution to the halting problem, or is the proof that it has NO
solution.
IN NEITHER case is the problem "ill-formed".
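
The diagonal construction behind this proof can be sketched in a few lines. This is an illustrative sketch, not anything posted in the thread: `make_counterexample`, `always_no`, and `d` are made-up names, and a Python function stands in for a Turing machine. Given any candidate decider h, the constructed program d does the opposite of whatever h predicts about d itself:

```python
def make_counterexample(h):
    """Given any candidate decider h(prog) -> bool, intended to answer
    "does prog halt when run with itself as input?", build a program d
    that behaves contrary to h's verdict on d itself."""
    def d():
        if h(d):
            while True:    # h said "d halts", so d loops forever: h was wrong
                pass
        return "halted"    # h said "d loops", so d halts at once: h was wrong

    return d

# A decider that always answers "loops" is refuted by a d that halts:
always_no = lambda prog: False
d = make_counterexample(always_no)
```

Since h was arbitrary, every candidate decider gets d wrong; that is the analytical impossibility described above. Note that "does d halt?" still has a definite yes/no answer for each concrete d; it is just never the answer h gives.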
0
greeneg9613 (188)
8/28/2012 1:55:58 AM
On Aug 27, 9:26 pm, Peter Olcott <OCR4Screen> wrote:
> No is no possible correct answer to the Pathological Self Reference form
> of the Halting Problem.

This sentence is not even grammatical.
WILL YOU EVER LEARN *TO*SPEAK*ENGLISH* correctly???

A *PROBLEM* does NOT have an ANSWER!
A PROBLEM has (or lacks) a SOLUTION!!
A QUESTION has an ANSWER!!
PICK A SIDE!!!!!!!!!!!!!!!!!!!!!!!!!!


0
greeneg9613 (188)
8/28/2012 1:57:10 AM
On 8/27/2012 8:55 PM, Patricia Shanahan wrote:
> On 8/27/2012 6:26 PM, Peter Olcott wrote:
> ...
>> No is no possible correct answer to the Pathological Self Reference form
>> of the Halting Problem.
>
> Fortunately, there is no self-reference in the real proof, only in your
> misunderstanding of an informal version, so we don't even have to try to
> find out what, if anything, "Pathological" means in this context.
>
> Patricia
>
We need a mathematical model of the term: {analogy} to accurately divide 
the true borders of this dispute.
Even making this mathematical model could (if it is good enough) qualify 
for publication.


0
Peter
8/28/2012 2:24:11 AM
"Peter Olcott" <OCR4Screen> wrote in message 
news:yuadnUMJ06FRtqHNnZ2dnUVZ_hGdnZ2d@giganews.com...

> We need a mathematical model of the term: {analogy} to accurately divide 
> the true borders of this dispute.
> Even making this mathematical model could (if it is good enough) qualify 
> for publication.

Troll.

-LV
 

0
julio (505)
8/28/2012 2:31:32 AM
Peter Olcott <OCR4Screen> writes:

> On 8/27/2012 8:55 PM, Patricia Shanahan wrote:
>> On 8/27/2012 6:26 PM, Peter Olcott wrote:
>> ...
>>> No is no possible correct answer to the Pathological Self Reference form
>>> of the Halting Problem.
>>
>> Fortunately, there is no self-reference in the real proof, only in your
>> misunderstanding of an informal version, so we don't even have to try to
>> find out what, if anything, "Pathological" means in this context.
>>
>> Patricia
>>
> We need a mathematical model of the term: {analogy} to accurately
> divide the true borders of this dispute.

No.  Nothing about halting decidability is based on analogy.  You
decided (up front) that you don't like one of the sound theorems of
mathematics, but your only available weapons are bad analogies and some
poorly defined terms you've made up.

And there is no "dispute" -- just you railing against the truth.

> Even making this mathematical model could (if it is good enough)
> qualify for publication.

You don't have time, surely?  Is your multi-million pound, revolutionary
OCR software finished already?  It's been some time coming. 

-- 
Ben.
0
ben.usenet (6790)
8/28/2012 2:45:24 AM
On 8/27/2012 9:45 PM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 8/27/2012 8:55 PM, Patricia Shanahan wrote:
>>> On 8/27/2012 6:26 PM, Peter Olcott wrote:
>>> ...
>>>> No is no possible correct answer to the Pathological Self Reference form
>>>> of the Halting Problem.
>>> Fortunately, there is no self-reference in the real proof, only in your
>>> misunderstanding of an informal version, so we don't even have to try to
>>> find out what, if anything, "Pathological" means in this context.
>>>
>>> Patricia
>>>
>> We need a mathematical model of the term: {analogy} to accurately
>> divide the true borders of this dispute.
> No.  Nothing about halting decidability is based on analogy.  You
> decided (up front) that you don't like one of the sound theorems of
> mathematics, but your only available weapons are bad analogies and some
> poorly defined terms you've made up.
>
> And there is no "dispute" -- just you railing against the truth.
>
>> Even making this mathematical model could (if it is good enough)
>> qualify for publication.
> You don't have time, surely?  Is your multi-million pound, revolutionary
> OCR software finished already?  It's been some time coming.
I do these postings for entertainment; putting great effort into 
solving some of the complexity in Linguistics is part of this 
entertainment.

www.OCR4Screen.com product will be complete soon.
0
Peter
8/28/2012 2:52:06 AM
On 8/27/2012 7:52 PM, Peter Olcott wrote:
....
> I do these postings for entertainment; putting great effort into
> solving some of the complexity in Linguistics is part of this
> entertainment.
>
> www.OCR4Screen.com product will be complete soon.

I've come up with a guess at your motivation, and I'm curious about
whether there is any truth to it.

My guess is that you have a great philosophical system you are working
on that just does not deal gracefully with impossibility results, such
as halting undecidability. Rather than changing your theory to match
reality, you are trying to change reality to fit the theory.

Patricia

0
pats (3556)
8/28/2012 8:02:42 AM
On 8/28/2012 3:02 AM, Patricia Shanahan wrote:
> On 8/27/2012 7:52 PM, Peter Olcott wrote:
> ...
>> I do these postings for entertainment; putting great effort into
>> solving some of the complexity in Linguistics is part of this
>> entertainment.
>>
>> www.OCR4Screen.com product will be complete soon.
>
> I've come up with a guess at your motivation, and I'm curious about
> whether there is any truth to it.
>
> My guess is that you have a great philosophical system you are working
> on that just does not deal gracefully with impossibility results, such
> as halting undecidability. Rather than changing your theory to match
> reality, you are trying to change reality to fit the theory.
>
> Patricia
>
Not at all. There is a nuance of meaning that is slipping through the 
cracks of the limited expressiveness of the language of the Mathematics 
of the Theory of Computation. If the language that you express thoughts 
within has limited expressiveness then some thoughts can not be 
expressed within this language. Thoughts that can not be expressed can 
not be understood.
0
Peter
8/28/2012 10:06:56 AM
Peter Olcott <OCR4Screen> wrote:
> Patricia Shanahan wrote:
>> Peter Olcott wrote:
>> ...
>>> I do these postings for entertainment; putting great effort into
>>> solving some of the complexity in Linguistics is part of this
>>> entertainment.
>>>    www.OCR4Screen.com product will be complete soon.
>>
>> I've come up with a guess at your motivation, and I'm curious about
>> whether there is any truth to it.
>>   My guess is that you have a great philosophical system you are working
>> on that just does not deal gracefully with impossibility results, such
>> as halting undecidability. Rather than changing your theory to match
>> reality, you are trying to change reality to fit the theory.
>> Patricia
>>
> Not at all. There is a nuance of meaning that is slipping through the 
> cracks of the limited expressiveness of the language of the Mathematics 
> of the Theory of Computation. If the language that you express thoughts 
> within has limited expressiveness then some thoughts can not be 
> expressed within this language. Thoughts that can not be expressed can 
> not be understood.

Google Sapir-Whorf hypothesis.
-- 
John Forkosh  ( mailto:  j@f.com  where j=john and f=forkosh )
0
john42 (342)
8/28/2012 12:29:37 PM
On 8/28/2012 5:29 AM, JohnF wrote:
> Peter Olcott <OCR4Screen> wrote:
....
>> Not at all. There is a nuance of meaning that is slipping through the
>> cracks of the limited expressiveness of the language of the Mathematics
>> of the Theory of Computation. If the language that you express thoughts
>> within has limited expressiveness then some thoughts can not be
>> expressed within this language. Thoughts that can not be expressed can
>> not be understood.
>
> Google Sapir-Whorf hypothesis.
>

Make sure you read some of the later research checking on the evidence,
not just the original case in favor of Sapir-Whorf.

Patricia

0
pats (3556)
8/28/2012 2:36:28 PM
On 8/28/2012 5:06 AM, Peter Olcott wrote:
> On 8/28/2012 3:02 AM, Patricia Shanahan wrote:
>> My guess is that you have a great philosophical system you are working
>> on that just does not deal gracefully with impossibility results, such
>> as halting undecidability. Rather than changing your theory to match
>> reality, you are trying to change reality to fit the theory.
>>
> Not at all. There is a nuance of meaning that is slipping through the
> cracks of the limited expressiveness of the language of the Mathematics
> of the Theory of Computation. If the language that you express thoughts
> within has limited expressiveness then some thoughts can not be
> expressed within this language. Thoughts that can not be expressed can
> not be understood.

I'm going to have to go with Patricia here. The theorem to which you are 
referring originates from the pure mathematical side of things and is 
not a translation of intuitive logic into formalized notation. Your 
"nuance of meaning" is arising from a natural language description of 
the theorem, which is the noncanonical form. By insisting on the 
translation that you do, it shows that you have a particular 
philosophical worldview you wish to uphold, instead of holding a 
worldview that comes from the pure mathematics.

Your last statement is also indicative of this worldview. As others have 
stated, it sounds like the Sapir-Whorf Hypothesis (in particular, the 
strong variant that claims that it controls whether something can be 
understood rather than how it will be understood). This hypothesis has 
largely been discredited, with most of its current proponents reduced to 
claiming merely that expressiveness in language plays a role in shaping 
its speakers' world-views.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/28/2012 3:07:43 PM
On 8/28/2012 8:07 AM, Joshua Cranmer wrote:
> On 8/28/2012 5:06 AM, Peter Olcott wrote:
>> On 8/28/2012 3:02 AM, Patricia Shanahan wrote:
>>> My guess is that you have a great philosophical system you are working
>>> on that just does not deal gracefully with impossibility results, such
>>> as halting undecidability. Rather than changing your theory to match
>>> reality, you are trying to change reality to fit the theory.
>>>
>> Not at all. There is a nuance of meaning that is slipping through the
>> cracks of the limited expressiveness of the language of the Mathematics
>> of the Theory of Computation. If the language that you express thoughts
>> within has limited expressiveness then some thoughts can not be
>> expressed within this language. Thoughts that can not be expressed can
>> not be understood.
>
> I'm going to have to go with Patricia here. The theorem to which you are
> referring originates from the pure mathematical side of things and is
> not a translation of intuitive logic into formalized notation. Your
> "nuance of meaning" is arising from a natural language description of
> the theorem, which is the noncanonical form. By insisting on the
> translation that you do, it shows that you have a particular
> philosophical worldview you wish to uphold, instead of holding a
> worldview that comes from the pure mathematics.

Also, if you consider the history of this series of threads, it is clear
that Peter Olcott is working from a conviction that there is something
wrong with halting undecidability, but that he does not care what.

He has been through a series of ideas, such as a machine like a Turing
machine but able to modify its state machine to add states, or a TM that
writes an encrypted answer on its tape, and some human being has the
decryption key. The "ill-formed question" view is just the latest. It is
vague and fuzzy enough to be far harder to squish than the earlier attempts.

It is not a case of seeing a flaw in the proof, and therefore rejecting
the conclusion. It is a case of a deeply held, unquestioning disbelief
in the conclusion, leading to a wide ranging search for an excuse to go
on disbelieving it despite the proof.

He even seems to assume that if he could argue his way out of the proof,
even if it were only by twisting the words of an informal version, that
would itself prove halting decidable.

Patricia

0
pats (3556)
8/28/2012 6:31:45 PM
On Aug 28, 1:31 pm, Patricia Shanahan <p...@acm.org> wrote:
> On 8/28/2012 8:07 AM, Joshua Cranmer wrote:
>
> > On 8/28/2012 5:06 AM, Peter Olcott wrote:
> >> On 8/28/2012 3:02 AM, Patricia Shanahan wrote:
> >>> My guess is that you have a great philosophical system you are working
> >>> on that just does not deal gracefully with impossibility results, such
> >>> as halting undecidability. Rather than changing your theory to match
> >>> reality, you are trying to change reality to fit the theory.
>
> >> Not at all. There is a nuance of meaning that is slipping through the
> >> cracks of the limited expressiveness of the language of the Mathematics
> >> of the Theory of Computation. If the language that you express thoughts
> >> within has limited expressiveness then some thoughts can not be
> >> expressed within this language. Thoughts that can not be expressed can
> >> not be understood.
>
> > I'm going to have to go with Patricia here. The theorem to which you are
> > referring originates from the pure mathematical side of things and is
> > not a translation of intuitive logic into formalized notation. Your
> > "nuance of meaning" is arising from a natural language description of
> > the theorem, which is the noncanonical form. By insisting on the
> > translation that you do, it shows that you have a particular
> > philosophical worldview you wish to uphold, instead of holding a
> > worldview that comes from the pure mathematics.
>
> Also, if you consider the history of this series of threads, it is clear
> that Peter Olcott is working from a conviction that there is something
> wrong with halting undecidability, but that he does not care what.
>
> He has been through a series of ideas, such as a machine like a Turing
> machine but able to modify its state machine to add states, or a TM that
> writes an encrypted answer on its tape, and some human being has the
> decryption key. The "ill-formed question" view is just the latest. It is
> vague and fuzzy enough to be far harder to squish than the earlier attempts.
>
> It is not a case of seeing a flaw in the proof, and therefore rejecting
> the conclusion. It is a case of a deeply held, unquestioning disbelief
> in the conclusion, leading to a wide ranging search for an excuse to go
> on disbelieving it despite the proof.
>
> He even seems to assume that if he could argue his way out of the proof,
> even if it were only by twisting the words of an informal version, that
> would itself prove halting decidable.
>
> Patricia

All of the examples of paradox of self-reference:
a) Liar Paradox
b) Barber Paradox
c) Gödel's Incompleteness Theorem
d) Halting Problem

contain very slippery semantic structures that have never been
sufficiently elaborated to fully understand their true meaning. Only a
formalism as robust as Montague Semantics could provide this
sufficient specification.
0
PeteOlcott (86)
8/28/2012 6:48:09 PM
PeteOlcott <peteolcott@gmail.com> writes:

> On Aug 28, 1:31 pm, Patricia Shanahan <p...@acm.org> wrote:
<snip>
>> Also, if you consider the history of this series of threads, it is clear
>> that Peter Olcott is working from a conviction that there is something
>> wrong with halting undecidability, but that he does not care what.
>>
>> He has been through a series of ideas, such as a machine like a Turing
>> machine but able to modify its state machine to add states, or a TM that
>> writes an encrypted answer on its tape, and some human being has the
>> decryption key. The "ill-formed question" view is just the latest. It is
>> vague and fuzzy enough to be far harder to squish than the earlier attempts.
>>
>> It is not a case of seeing a flaw in the proof, and therefore rejecting
>> the conclusion. It is a case of a deeply held, unquestioning disbelief
>> in the conclusion, leading to a wide ranging search for an excuse to go
>> on disbelieving it despite the proof.
>>
>> He even seems to assume that if he could argue his way out of the proof,
>> even if it were only by twisting the words of an informal version, that
>> would itself prove halting decidable.
>>
>> Patricia- Hide quoted text -
>>
>> - Show quoted text -
>
> All of the examples of paradox of self-reference:
> a) Liar Paradox
> b) Barber Paradox
> c) Gödel’s Incompleteness Theorem
> d) Halting Problem
>
> contain very slippery semantic structures that have never been
> sufficiently elaborated to fully understand their true meaning. Only a
> formalism as robust as Montague Semantics could provide this
> sufficient specification.

How do you know that?  Have you done the analysis in the hyper-exact
formalism of which you speak?  How did you know that in March, when you
were saying the same thing, and you certainly had not done it?

I think Patricia has it exactly right.  You are entirely convinced that
there is a problem without having any evidence for it at all.

-- 
Ben.
0
ben.usenet (6790)
8/28/2012 7:25:23 PM
On 8/28/2012 1:48 PM, PeteOlcott wrote:
> All of the examples of paradox of self-reference:
> a) Liar Paradox
> b) Barber Paradox
> c) Gödel’s Incompleteness Theorem
> d) Halting Problem
>
> contain very slippery semantic structures that have never been
> sufficiently elaborated to fully understand their true meaning. Only a
> formalism as robust as Montague Semantics could provide this
> sufficient specification.

And you once again show that you're going the wrong way. The first two 
examples are "valid" in some sense, in that they contain examples which 
show that naive handling of English prose leads to apparent paradoxes.

However, the last two do not share the same problems, since they use 
rigorously defined mathematical constructs and are carefully designed to 
avoid the pitfalls of self-reference. Both have been proven using formal 
verification software that can rely on the first principles of ZFC 
axioms. Their authors were very careful to use precise mathematical 
semantics and never rely on anything so slippery as a second-person 
reference. Gödel discovered how to give each statement and proof a 
corresponding number and (literally) let the numbers do the talking. 
Turing similarly gave every Turing machine a number and used that 
instead. To argue that these approaches are wrong is to argue that 
number theory itself is wrong.
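
The arithmetisation described above can be made concrete with the classic prime-power encoding. This is a sketch, not part of the thread: the helper names and the toy alphabet are made up, but the injectivity it relies on is just unique factorisation.

```python
def godel_number(symbols, alphabet):
    """Encode a sequence of symbols as one natural number: the i-th
    symbol, with 1-based code c in the alphabet, contributes a factor
    p_i ** c for the i-th prime p_i.  Unique factorisation makes the
    map injective, so statements about strings become statements
    about numbers."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    n = 1
    for i, s in enumerate(symbols):
        n *= primes[i] ** (alphabet.index(s) + 1)
    return n

def godel_decode(n, alphabet):
    """Recover the symbol sequence by reading off prime exponents."""
    primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
    out = []
    for p in primes:
        if n % p:          # first prime absent from n: sequence ended
            break
        c = 0
        while n % p == 0:  # count how many times p divides n
            n //= p
            c += 1
        out.append(alphabet[c - 1])
    return "".join(out)
```

For example, over the alphabet "ab", the string "ab" becomes 2**1 * 3**2 = 18, and decoding 18 recovers "ab"; this round trip is what lets the numbers "do the talking".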

The fallacy you give is that popular introductions to these results use 
informal language, and you assert that the theorems are specified by the 
informal language, so that discrediting the language discredits the theorem.
-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/28/2012 8:37:42 PM
On Aug 28, 3:37 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>
> > All of the examples of paradox of self-reference:
> > a) Liar Paradox
> > b) Barber Paradox
> > c) Gödel's Incompleteness Theorem
> > d) Halting Problem
>
> > contain very slippery semantic structures that have never been
> > sufficiently elaborated to fully understand their true meaning. Only a
> > formalism as robust as Montague Semantics could provide this
> > sufficient specification.
>
> And you once again show that you're going the wrong way. The first two
> examples are "valid" in some sense, in that they contain examples which
> show that naive handling of English prose leads to apparent paradoxes.
>
> However, the last two do not share the same problems, since they use
> rigorously defined mathematical constructs and are carefully designed to
> avoid the pitfalls of self-reference. Both have been proven using formal
> verification software that can rely on the first principles of ZFC
> axioms. Their authors were very careful to use precise mathematical
> semantics and never rely on anything so slippery as a second-person
> reference. Gödel discovered how to give each statement and proof a
> corresponding number and (literally) let the numbers do the talking.
> Turing similarly gave every Turing machine a number and used that
> instead. To argue that these approaches are wrong is to argue that
> number theory itself is wrong.
>
> The fallacy you give is that popular introductions to these results use
> informal language, and you assert that the theorems are specified by the
> informal language, so that discrediting the language discredits the theorem.
> --
> Beware of bugs in the above code; I have only proved it correct, not
> tried it. -- Donald E. Knuth

The problem is the same with all four. The semantics of these four are
not specified correctly as an acyclic graph. Whenever there are any
cycles in the semantic specification, the specification is erroneous.
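
For what it's worth, the acyclicity test this claim presupposes is itself straightforwardly computable: a depth-first search finds cycles in any directed graph. The dict encoding of a "semantic specification" below is purely illustrative; nothing in the thread defines how a semantics would actually be turned into a graph.

```python
def has_cycle(graph):
    """Detect a cycle in a directed graph given as {node: [successors]}.
    Standard three-colour DFS: a GRAY node reached again along the
    current path is a back edge, i.e. a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {}

    def visit(n):
        color[n] = GRAY
        for m in graph.get(n, ()):
            c = color.get(m, WHITE)
            if c == GRAY:              # back edge: cycle found
                return True
            if c == WHITE and visit(m):
                return True
        color[n] = BLACK
        return False

    return any(color.get(n, WHITE) == WHITE and visit(n) for n in graph)

# "This sentence is false" refers to itself: a one-node self-loop.
liar = {"S": ["S"]}
```

On this toy encoding the Liar sentence is a self-loop and is flagged, while any specification whose references bottom out is not.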
0
PeteOlcott (86)
8/28/2012 8:48:56 PM
On 8/28/2012 11:48 AM, PeteOlcott wrote:
....
> c) Gödel's Incompleteness Theorem
> d) Halting Problem
>
> contain very slippery semantic structures that have never been
> sufficiently elaborated to fully understand their true meaning. Only a
> formalism as robust as Montague Semantics could provide this
> sufficient specification.
>

If you have completed the analysis in question for either of those
theorems, please make it available for review. Don't worry about any
limitations of knowledge of participants in this discussion. I am sure
we will be able to find experts in whatever topics are needed.

If you have not completed the analysis, how, other than simple
prejudice, do you know what the results would be?

Patricia

0
pats (3556)
8/28/2012 9:01:37 PM
PeteOlcott <peteolcott@gmail.com> writes:
<snip>
>> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>>
>> > All of the examples of paradox of self-reference:
>> > a) Liar Paradox
>> > b) Barber Paradox
>> > c) Gödel’s Incompleteness Theorem
>> > d) Halting Problem
>>
>> > contain very slippery semantic structures that have never been
>> > sufficiently elaborated to fully understand their true meaning. Only a
>> > formalism as robust as Montague Semantics could provide this
>> > sufficient specification.

<snip>
> The problem is the same with all four. The semantics of these four is
> not specified correctly as an acyclic graph. Whenever there are any
> cycles in the semantic specification, the specification is erroneous.

Consider a "long-run" decider.  It answers yes/no to the question "does
machine M execute more than 1000 steps when given input I".  This is a
specification about machines, so any putative long-run decider (let's call
it L) can be applied to its own encoding:

  L(L, L)

Is this specification erroneous?
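
The "long-run" question, unlike full halting, is decidable by brute simulation, which is what makes the self-application L(L, L) unproblematic. A sketch, with assumed names, modelling a machine as a Python generator that yields once per step:

```python
def runs_more_than(machine, inp, bound=1000):
    """Decide the 'long-run' question: does `machine` execute more
    than `bound` steps on input `inp`?  Simulate at most bound + 1
    steps and report what was observed; no unbounded search is ever
    needed, so the decider is total even on its own encoding."""
    execution = machine(inp)
    for count, _ in enumerate(execution, start=1):
        if count > bound:
            return True     # step bound + 1 was reached: answer "yes"
    return False            # the machine finished within the bound

def spinner(_inp):          # a machine that never halts
    while True:
        yield

def brief(_inp):            # a machine that halts after 3 steps
    for _ in range(3):
        yield
```

Here runs_more_than(spinner, None) answers yes after simulating 1001 steps, and runs_more_than(brief, None) answers no after 3, infinite loop notwithstanding.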

-- 
Ben.
0
ben.usenet (6790)
8/28/2012 11:23:41 PM
> On 8/28/2012 3:02 AM, Patricia Shanahan wrote:
> > My guess is that you have a great philosophical system you are working
> > on that just does not deal gracefully with impossibility results,

That would be unexpected because he has ALREADY USED the phrase
"analytically impossible".
He knows logical impossibility when he sees it, at least in simple
cases.
Russell's paradox has a short proof (as Strawson's theorem).
The proof that RP really is paradoxical/contradictory IS SHORT AND SIMPLE.
It is analytically impossible for there to exist a thing that R's all
and only those things that don't R themselves. This DOES NOT REQUIRE
ANY sort of deep analysis of "analogy" OR ANY OTHER natural language
concept, LET ALONE the invocation of Montague semantics.  PO is
practicing a pure trolling tactic in continually trying to introduce
MORE complexity via Montague, analogy, and "nuance of meaning" from
natural language.

On Aug 28, 6:06 am, Peter Olcott <OCR4Screen> wrote:
> Not at all. There is a nuance of meaning that is slipping through the
> cracks of the limited expressiveness of the language of the Mathematics

NO THERE ISN'T, YOU *LYING* jackass!  The Halting Problem (READ THE
TITLE of the thread -- YOU titled it!!)  IS INHERENTLY *IN*AND*ABOUT*
FORMAL languages!  A TM is A FORMAL thing!  The verb "halt" has a
natural language meaning but the formal meaning IS EXACTLY THE SAME!
It does NOT require any ANALYSIS OF "ANALOGY", by Montague or any
other scholar of natural language, to deal with the FORMAL meaning!

It is astonishing that you can even TRY to invoke NATURAL language
concerns or consideration about a problem that is INHERENTLY IN AND
ABOUT A WHOLLY formal realm!!!!
greeneg9613 (188)
8/29/2012 12:15:29 AM
> > Peter Olcott <OCR4Screen> wrote:
> >> Thoughts that can not be expressed can
> >> not be understood.


But WE'RE NOT TALKING about THAT!
WE are talking about the Halting Problem and
Russell's Paradox, BOTH OF WHICH HAVE BEEN expressed and understood,
in BOTH formal AND INformal language!!
greeneg9613 (188)
8/29/2012 12:16:41 AM
On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> PeteOlcott <peteolc...@gmail.com> writes:
>
> <snip>
>
> >> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>
> >> > All of the examples of paradox of self-reference:
> >> > a) Liar Paradox
> >> > b) Barber Paradox
> >> > c) Gödel's Incompleteness Theorem
> >> > d) Halting Problem
>
> >> > contain very slippery semantic structures that have never been
> >> > sufficiently elaborated to fully understand their true meaning. Only a
> >> > formalism as robust as Montague Semantics could provide this
> >> > sufficient specification.
>
> <snip>
>
> > The problem is the same with all four. The semantics of these four is
> > not specified correctly as an acyclic graph. Whenever there are any
> > cycles in the semantic specification, the specification is erroneous.
>
> Consider a "long-run" decider.  It answers yes/no to the question "does
> machine M execute more than 1000 steps when given input I".  This is a
> specification about machines, so any putative long-run decider (let's call
> it L) can be applied to its own encoding:
>
>   L(L, L)
>
> Is this specification erroneous?
>
> --
> Ben.

The Pathological Self-Reference instances of this (if any) would be
erroneous.
I do not have memorized whether or not this can be transformed into
the Halting Problem.
PeteOlcott (86)
8/29/2012 10:24:32 AM
On Aug 28, 4:01 pm, Patricia Shanahan <p...@acm.org> wrote:
> On 8/28/2012 11:48 AM, PeteOlcott wrote:
> ...
>
> > c) Gödel's Incompleteness Theorem
> > d) Halting Problem
>
> > contain very slippery semantic structures that have never been
> > sufficiently elaborated to fully understand their true meaning. Only a
> > formalism as robust as Montague Semantics could provide this
> > sufficient specification.
>
> If you have completed the analysis in question for either of those
> theorems, please make it available for review. Don't worry about any
> limitations of knowledge of participants in this discussion. I am sure
> we will be able to find experts in whatever topics are needed.
>
> If you have not completed the analysis, how, other than simple
> prejudice, do you know what the results would be?
>
> Patricia

Can you see how Russell's Paradox is both analogous to the Halting
Problem and erroneous?

My analysis will be presented as a dialogue, interpolating upon the
specification required to become understood.
PeteOlcott (86)
8/29/2012 10:26:59 AM
On Aug 28, 3:37 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>
> > All of the examples of paradox of self-reference:
> > a) Liar Paradox
> > b) Barber Paradox
> > c) Gödel's Incompleteness Theorem
> > d) Halting Problem
>
> > contain very slippery semantic structures that have never been
> > sufficiently elaborated to fully understand their true meaning. Only a
> > formalism as robust as Montague Semantics could provide this
> > sufficient specification.
>
> And you once again show that you're going the wrong way. The first two
> examples are "valid" in some sense, in that they contain examples which
> show that naive handling of English prose leads to apparent paradoxes.
>
> However, the last two do not share the same problems, since they use
> rigorously defined mathematical constructs and are carefully designed to
> avoid the pitfalls of self-reference. Both have been proven using formal

This is where a mathematical model of {analogy} may be required.
Did they actually eliminate the problems of self-reference, or merely
hide them?
Most every analysis that I have seen indicates that c) is based on
self-reference.

> verification software that can rely on the first principles of ZFC
> axioms. Their authors were very careful to use precise mathematical
> semantics and never rely on anything so slippery as a second-person
> reference. G=F6del discovered how to give each statement and proof a
> corresponding number and (literally) let the numbers do the talking.
> Turing similarly gave every Turing machine a number and used that
> instead. To argue that these approaches are wrong is to argue that
> number theory itself is wrong.

Or merely that it is insufficiently expressive to show the problem
that I refer to.

>
> The fallacy you give is that popular introductions to these results use
> informal language, and you assert that the theorems are specified by the
> informal language, so that discrediting the language discredits the theorem.
> --
> Beware of bugs in the above code; I have only proved it correct, not
> tried it. -- Donald E. Knuth

PeteOlcott (86)
8/29/2012 10:30:58 AM
On Aug 28, 3:37 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>
> > All of the examples of paradox of self-reference:
> > a) Liar Paradox
> > b) Barber Paradox
> > c) Gödel's Incompleteness Theorem
> > d) Halting Problem
>
> > contain very slippery semantic structures that have never been
> > sufficiently elaborated to fully understand their true meaning. Only a
> > formalism as robust as Montague Semantics could provide this
> > sufficient specification.
>
> And you once again show that you're going the wrong way. The first two
> examples are "valid" in some sense, in that they contain examples which
> show that naive handling of English prose leads to apparent paradoxes.
>
> However, the last two do not share the same problems, since they use
> rigorously defined mathematical constructs and are carefully designed to
> avoid the pitfalls of self-reference.

So you are essentially claiming that the informal representation and
the formal representation each specify different sets of semantic
meaning that do not correspond to each other. Also you are claiming
that since there is a lack of analogy between the two, at least one of
these two must be less than truthfully represented.

> Both have been proven using formal
> verification software that can rely on the first principles of ZFC
> axioms. Their authors were very careful to use precise mathematical
> semantics and never rely on anything so slippery as a second-person
> reference. G=F6del discovered how to give each statement and proof a
> corresponding number and (literally) let the numbers do the talking.
> Turing similarly gave every Turing machine a number and used that
> instead. To argue that these approaches are wrong is to argue that
> number theory itself is wrong.
>
> The fallacy you give is that popular introductions to these results use
> informal language, and you assert that the theorems are specified by the
> informal language, so that discrediting the language discredits the theorem.
> --
> Beware of bugs in the above code; I have only proved it correct, not
> tried it. -- Donald E. Knuth

PeteOlcott (86)
8/29/2012 10:45:52 AM
PeteOlcott <peteolcott@gmail.com> writes:

> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> PeteOlcott <peteolc...@gmail.com> writes:
>>
>> <snip>
>>
>> >> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>>
>> >> > All of the examples of paradox of self-reference:
>> >> > a) Liar Paradox
>> >> > b) Barber Paradox
>> >> > c) Gödel’s Incompleteness Theorem
>> >> > d) Halting Problem
>>
>> >> > contain very slippery semantic structures that have never been
>> >> > sufficiently elaborated to fully understand their true meaning. Only a
>> >> > formalism as robust as Montague Semantics could provide this
>> >> > sufficient specification.
>>
>> <snip>
>>
>> > The problem is the same with all four. The semantics of these four is
>> > not specified correctly as an acyclic graph. Whenever there are any
>> > cycles in the semantic specification, the specification is erroneous.
>>
>> Consider a "long-run" decider.  It answers yes/no to the question "does
>> machine M execute more than 1000 steps when given input I".  This is a
>> specification about machines, so any putative long-run decider (let's call
>> it L) can be applied to its own encoding:
>>
>>   L(L, L)
>>
>> Is this specification erroneous?
>
> The Pathological Self-Reference instances of this (if any) would be
> erroneous.
> I do not have memorized whether or not this can be transformed into
> the Halting Problem.

That just about says it all.

Are you withdrawing your previous statement that "whenever there are any
cycles in the semantic specification, the specification is erroneous"?
Or do you understand what you yourself are saying so poorly that you
can't even tell if this simple specification has what you term
"Pathological Self-Reference"?  I hope you can see why people won't take
you seriously when you can't answer simple questions about your own
terms.

-- 
Ben.
ben.usenet (6790)
8/29/2012 12:55:06 PM
On 8/29/2012 5:45 AM, PeteOlcott wrote:
> On Aug 28, 3:37 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
>> However, the last two do not share the same problems, since they use
>> rigorously defined mathematical constructs and are carefully designed to
>> avoid the pitfalls of self-reference.
>
> So you are essentially claiming that the informal representation and
> the formal representation each specify different sets of semantic
> meaning that do not correspond to each other. Also you are claiming
> that since there is a lack of analogy between the two, at least one of
> these two must be less than truthfully represented.

Yes, I am. As I mentioned just one paragraph later:
>> The fallacy you give is that popular introductions to these results use
>> informal language, and you assert that the theorems are specified by the
>> informal language, so that discrediting the language discredits the theorem.

If you want a really good analogy for what you're doing, imagine a 
standard court case. Many big-publicity court cases have their arguments 
and their decisions published in regular newspapers. If a lawyer were to 
argue in court, citing this newspaper summary as precedent, he would be 
laughed out of court, as these summaries don't carry any legal weight. 
It's the actual court decisions that carry the weight and need to be 
used, since simpler summaries often miss the very fine nuances that come 
about in law (and sometimes the decisions are downright funny :D ).

Your argument deals entirely with very fine nuances on how things are 
specified and dealt with. These nuances are most likely to be changed 
when translating from the careful original proofs to more popular 
introductions. Therefore, you need to craft your arguments based on the 
real, proper formal mathematical proofs and not the informal style you 
are more used to seeing.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
Pidgeot18 (1520)
8/29/2012 2:46:28 PM
On 8/29/2012 5:30 AM, PeteOlcott wrote:
> On Aug 28, 3:37 pm, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
>> However, the last two do not share the same problems, since they use
>> rigorously defined mathematical constructs and are carefully designed to
>> avoid the pitfalls of self-reference. Both have been proven using formal
>
> This is where a mathematical model of {analogy} may be required.
> Did they actually eliminate the problems of self-reference, or merely
> hide them.
> Most every analysis that I have seen indicates that c) is based on
> self-reference.

Most presentations of Godel's First Incompleteness Theorem present the critical 
statement as "This sentence is not provable," as this gets the essential 
point across of what it's trying to say while sidestepping a very long 
discussion on how it's arrived at (the book I'll be citing from shortly 
takes over 400 pages to get to this point). The truth is that the "This 
sentence" portion of the statement is actually derived using similar 
techniques to the Recursion Theorem in computational theory [1]. The 
actual statement, as translated by "Godel, Escher, Bach" amounts to the 
following:

"There do not exist numbers a and a' such that both (1) a' is the 
Godel number of a derivation which proves the statement whose Godel 
number is a and (2) a' is the `arithmoquinification' of u." Here, 
"arithmoquinification" is a shorthand for saying "the Godel number of 
the formula arrived by replacing a free variable in the original formula 
with the Godel number of the original formula." As you might expect at 
this point, the variable `u' is the Godel number of the original form of 
the statement where `u' is a free variable.

This complex statement achieves the same effect as "This sentence is not 
provable," but takes much longer to explain (my paragraph is an attempt 
to reduce a discussion which takes 6 pages in the book). For most 
people, the simple statement together with the knowledge that Godel 
developed an exquisite way of getting a statement to say "this sentence" 
using only the ability to do arithmetic is enough to justify the 
theorem, but that appears to not be the case with you, who argues that 
"this sentence" is an erroneous construct.

Perhaps another way to describe this is that the core idea is based 
around a quine: a program that outputs its own source code. You can 
render the critical statement instead as:
", when preceded by itself in quotes, is not provable", when preceded by 
itself in quotes, is not provable.
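A concrete quine along these lines can be sketched in a few lines of Python (illustrative; `%r` string formatting plays the role of arithmoquinification, substituting the template into its own free slot):

```python
# The template contains one free slot (%r); filling the slot with the
# template itself yields a two-line program that rebuilds `program`.
template = 'template = %r\nprogram = template %% template'
program = template % template
print(program)
```

Executing the printed text defines a `program` string identical to the one that printed it, which is the fixed point that "This sentence" constructions rely on.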


[1] This probably isn't accidental, as the techniques in computational 
theory were probably originally heavily influenced by the ingenuity of 
Godel's proof.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
Pidgeot18 (1520)
8/29/2012 3:45:19 PM
On 8/29/2012 3:45 AM, PeteOlcott wrote:
....
> So you are essentially claiming that the informal representation and
> the formal representation each specify different sets of semantic
> meaning that do not correspond to each other. Also you are claiming
> that since there is a lack of analogy between the two, at least one of
> these two must be less than truthfully represented.

Correct. There are at least three styles of writing related to proofs,
listed in order of increasing length and also increasing precision:

1. An informal sketch. It is there to give people a general idea of what
is going on, at a major cost in precision. It is indeed "less than
truthfully represented" for the sake of simplicity.

I believe this is the only style of proof-related material that you
consider, and even there you strongly resist or ignore any attempt at
increasing precision.

2. The type of proof I wrote as a mathematics student or a computer
science graduate student. These proofs are supposed to be detailed
enough to convince a knowledgeable reader that the theorem is provable,
but can still use some natural language text, not just mathematical
symbols, and therefore may contain errors, suffer from ambiguity, and be
misinterpreted.

Textbooks often give both these styles for the same theorem. The sketch
is supposed to prime the reader to understand the actual proof.

3. A formal, machine-checkable proof from axioms. These are often
impractical, because they would be too long, and are hard for people to
understand. Despite that, if you really want to get into fine issues of
semantics, you need to study formal proofs. Everything else trades out
unambiguous precision for convenience, compactness, and ease of
understanding, to varying degrees.

If there were one way of representing a proof that was both completely
precise and unambiguous and also compact and easy for people to read,
that would be all we would use. Unfortunately, there is a trade-off.
You have been taking the extreme at one end, the version that is
optimized for quickly and easily giving a general impression, and
treating it as though it were the other extreme, the formal proof.

Patricia
pats (3556)
8/29/2012 5:07:01 PM
In comp.lang.prolog Graham Cooper <grahamcooper7@gmail.com> wrote:

Hey, my cofounder thought I should add one to the discussed scenario:

>  A boy named Joe.
>  {'Ok.'}
>  
>  He ran to the store.
>  {'Ok.'}
>  
>  Where is he?
>  {'Joe may be at the store.'}

Which I referenced as tense bridging.  I should've added:

>  Where was he?
>  {'Joe ran to the store.'}

So tense matching (past->past), plus of course the solution in both cases
involves the connection between motion and location.

-- 
Andy Valencia
Home page: http://www.vsta.org/andy/
To contact me: http://www.vsta.org/contact/andy.html
vandys (135)
8/29/2012 5:35:46 PM
On 8/29/2012 12:07 PM, Patricia Shanahan wrote:
> On 8/29/2012 3:45 AM, PeteOlcott wrote:
> ...
>> So you are essentially claiming that the informal representation and
>> the formal representation each specify different sets of semantic
>> meaning that do not correspond to each other. Also you are claiming
>> that since there is a lack of analogy between the two, at least one of
>> these two must be less than truthfully represented.
>
> Correct. There are at least three styles of writing related to proofs,
> listed in order of increasing length and also increasing precision:
>
If the different variations of the proof are not analogous in their 
reasoning and conclusions then it would seem that one of them would be 
lying to some degree. This would not be the case if one of the two ways 
was simply vague about things that the other one was specific about.

> 1. An informal sketch. It is there to give people a general idea of what
> is going on, at a major cost in precision. It is indeed "less than
> truthfully represented" for the sake of simplicity.
>
> I believe this is the only style of proof-related material that you
> consider, and even there you strongly resist or ignore any attempt at
> increasing precision.
>
> 2. The type of proof I wrote as a mathematics student or a computer
> science graduate student. These proofs are supposed to be detailed
> enough to convince a knowledgeable reader that the theorem is provable,
> but can still use some natural language text, not just mathematical
> symbols, and therefore may contain errors, suffer from ambiguity, and be
> misinterpreted.
>
> Textbooks often give both these styles for the same theorem. The sketch
> is supposed to prime the reader to understand the actual proof.
>
> 3. A formal, machine-checkable proof from axioms. These are often
> impractical, because they would be too long, and are hard for people to
> understand. Despite that, if you really want to get into fine issues of
> semantics, you need to study formal proofs. Everything else trades out
> unambiguous precision for convenience, compactness, and ease of
> understanding, to varying degrees.
>
> If there were one way of representing a proof that was both completely
> precise and unambiguous and also compact and easy for people to read,
> that would be all we would use. Unfortunately, there is a trade-off.
> You have been taking the extreme at one end, the version that is
> optimized for quickly and easily giving a general impression, and
> treating it as though it were the other extreme, the formal proof.
>
> Patricia

What other way is there to prove the Halting Problem besides what I call 
Pathological Self-Reference and Diagonalization?
Peter
8/29/2012 11:46:31 PM
Peter Olcott <OCR4Screen> writes:
<snip>
> What other way is there to prove the Halting Problem besides what I
> call Pathological Self-Reference and Diagonalization?

(a) An example was posted here if I recall.  For another, read up on the
Busy Beaver function.  I tried this tack before but you did not want to
loose focus -- a perfectly understandable concern: education often gets
in the way of irrational beliefs.  I am sure you'll continue to ignore
this advice.

(b) No one knows (not even you) what you mean by PSR, so asking about it
is pointless.  An attempt to get you to define it failed.  Even asking if
you can spot it in a simple example has so far failed.  Being undefined,
it has the benefit that you can spot it lurking anywhere you don't like
a proof, and claim it's absent when you do.  For this reason, I'm sure
you won't try too hard to pin it down.

(c) There's hubris -- knowing that you are right and so many experts are
wrong -- and then there's believing that simply naming something
"Pathological Self-Reference" alters the validity of a proof.  You can't
say what it is, yet it's the crucial thing that's wrong with at least
four famous proofs.  If such stunning arrogance had not become
commonplace on Usenet, it would beggar belief.

-- 
Ben.
ben.usenet (6790)
8/30/2012 2:13:23 AM
On Aug 29, 9:13 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> Peter Olcott <OCR4Screen> writes:
>
> <snip>
>
> > What other way is there to prove the Halting Problem besides what I
> > call Pathological Self-Reference and Diagonalization?
>
> (a) An example was posted here if I recall.  For another, read up on the
> Busy Beaver function.  I tried this tack before but you did not want to
> lose focus -- a perfectly understandable concern: education often gets
> in the way of irrational beliefs.  I am sure you'll continue to ignore
> this advice.
>

Is this an aspect of the formal proof of the Halting Problem?

> (b) No one knows (not even you) what you mean by PSR, so asking about it
> is pointless.  An attempt to get you to define it failed.  Even asking if
> you can spot it in a simple example has so far failed.  Being undefined,
> it has the benefit that you can spot it lurking anywhere you don't like
> a proof, and claim it's absent when you do.  For this reason, I'm sure
> you won't try too hard to pin it down.

Not<ThereExists>
<ElementOfSet>
FinalStatesOf_H
<MathematicallyMapsTo>
Halts(M, H, input)

Where M is defined as
--------------------------------
M(String H, String input):
if H(input, H, input) loop
else halt

This seems completely unambiguous to me, I see no possible way that it
could be misunderstood unless someone wanted to fake misunderstanding
for the purpose of maintaining a fake disagreement.
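The construction above can be rendered as runnable code, with Python functions standing in for Turing machines and `h` an assumed candidate halting decider (all names illustrative):

```python
# M from the post: given a candidate decider h(program, x) -> bool,
# build a machine m that does the opposite of whatever h predicts.
def make_m(h):
    def m(x):
        if h(m, x):          # h predicts m halts on x ...
            while True:      # ... so m loops forever
                pass
        return None          # h predicts looping, so m halts at once
    return m

def h_never(program, x):
    """A (wrong) candidate decider that always answers 'does not halt'."""
    return False

m = make_m(h_never)
# h_never claims m loops on input m, yet m(m) halts immediately,
# so h_never is refuted on this one input -- as any candidate h is.
```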

More telling is the fact that Daryl McCullough  provided a perfectly
analogous example of Pathological Self-Reference, (The Volunteer) yet
could not even comprehend his own example. Maybe he simply copied it
out of a book somewhere without ever really understanding it.

>
> (c) There's hubris -- knowing that you are right and so many experts are
> wrong -- and then there's believing that simply naming something
> "Pathological Self-Reference" alters the validity of a proof.  You can't
> say what it is, yet it's the crucial thing that's wrong with at least
> four famous proofs.  If such stunning arrogance had not become
> commonplace on Usenet, it would beggar belief.
>
> --
> Ben.

Ultimately I am either correct or incorrect, and any other judgments
are not relevant. By focusing on these other judgments one avoids
focusing on the real issues.
PeteOlcott (86)
8/30/2012 10:23:06 AM
PeteOlcott <peteolcott@gmail.com> writes:

> On Aug 29, 9:13 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>> <snip>
>>
>> > What other way is there to prove the Halting Problem besides what I
>> > call Pathological Self-Reference and Diagonalization?
>>
>> (a) An example was posted here if I recall.  For another, read up on the
>> Busy Beaver function.  I tried this tack before but you did not want to
>> lose focus -- a perfectly understandable concern: education often gets
>> in the way of irrational beliefs.  I am sure you'll continue to ignore
>> this advice.

> Is this an aspect of the formal proof of the Halting Problem?

Is what an aspect of the proof?  Is the definite article important --
do you think there is only "one true proof" and that's all you need to
attack?

Also, you might not mean "formal proof".  Patricia explained about
different kinds of proof, but I don't think you always read the replies
you get.

I have never seen a formal proof of this theorem.  I've seen a formal
proof of the un-countability of P(N), but not one of the un-decidability
of halting.  Formal proofs are quite rare.

By the way, the halting problem is a "problem" -- a statement of
requirement if you like -- you can't prove it.  What is proved is that
there is no machine that decides halting.
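For reference, what is proved can be stated precisely along these lines (a sketch of one standard formulation):

```latex
% One standard statement of the halting theorem: no machine H decides
% halting for every machine/input pair.
\neg \exists H \;\forall M \,\forall w :\;
  \bigl( H(\langle M \rangle, w) \text{ accepts} \iff M \text{ halts on } w \bigr)
```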


>> (b) No one knows (not even you) what you mean by PSR, so asking about it
>> is pointless.  An attempt to get you to define it failed.  Even asking if
>> you can spot it in a simple example has so far failed.  Being undefined,
>> it has the benefit that you can spot it lurking anywhere you don't like
>> a proof, and claim it's absent when you do.  For this reason, I'm sure
>> you won't try too hard to pin it down.
>
> Not<ThereExists>
> <ElementOfSet>
> FinalStatesOf_H
> <MathematicallyMapsTo>
> Halts(M, H, input)
>
> Where M is defined as
> --------------------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt
>
> This seems completely unambiguous to me,

This is not a definition of your mysterious term.  If you had a
definition of PSR you'd have been able to answer my question about
whether it's present in the example I posted.  Indeed, if you'd
published a definition, I would have been able to answer it myself.

And if you see the above as unambiguous, there's no hope of any mutual
understanding.  I suspect you just don't know how to write something
clearly using mathematical notation.

To illustrate...  What is Halts?  It seems it might be the mathematical
function that a halting decider must compute (effectively the "real"
answer).  But its domain is a triple, not a pair, so that seems wrong.

Is the statement about one H or all H?  I.e. is there an implied
"for all" in the above.  It's not uncommon to assume universal
quantification over otherwise free variables, but then we need to do the
same for M, too.  Ah!  But the final part suggests that M is functionally
dependent on H, so it's not free.  OK, but what's it doing in "Halts(M,
H, input)" then?  Maybe you meant "Halts(M(H), input)" instead?  I just
don't know.

And then we get to MathematicallyMapsTo.  Some sort of correspondence,
but what exactly?  You can't mean a static mapping because there is no
natural static mapping.  Maybe you mean the dynamic mapping -- that H
enters an accepting state iff Halts(...) and a rejecting state iff
!Halts(...).  Seems reasonable, but that's not far off being a statement
of the halting theorem, which can't be what you are trying to say...

> I see no possible way that it
> could be misunderstood unless someone wanted to fake misunderstanding
> for the purpose of maintaining a fake disagreement.

Yes, that's a big part of the problem.  Unless you can see the ambiguity
you won't be able to fix it.  Actually, you probably don't want to be
precise.  Every time someone has re-written what you've posted in a
clearer form, you reject it because the clarity highlights the error in
your thinking.

> More telling is the fact that Daryl McCullough  provided a perfectly
> analogous example of Pathological Self-Reference, (The Volunteer) yet
> could not even comprehend his own example. Maybe he simply copied it
> out of a book somewhere without ever really understanding it.
>
>>
>> (c) There's hubris -- knowing that you are right and so many experts are
>> wrong -- and then there's believing that simply naming something
>> "Pathological Self-Reference" alters the validity of a proof.  You can't
>> say what it is, yet it's the crucial thing that's wrong with at least
>> four famous proofs.  If such stunning arrogance had not become
>> commonplace on Usenet, it would beggar belief.
>
> Ultimately I am either correct or incorrect,

Not so.  It's possible you are saying meaningless things.  That's my
front runner, in fact.  I don't think you are wrong and I don't think
you are right; I don't think you, or anyone else, knows what you are
saying.  It's often satirised as "not even wrong".

(Of course, some of the things you've said are wrong, but some have been
right too.  I'm talking about the big picture -- the problem you have
with all these theorems.  You can't state what you think about them
clearly.  Every time someone clears up what they think you've said, it
shows that there is no problem, so that can't be what you meant!  Only
by being unclear can you maintain the illusion that you have something
to say about them.)

> and any other judgments
> are not relevant. By focusing on these other judgments one avoids
> focusing on the real issues.

That's why it was point (c) after the "real issues".  But even that's a
dishonest answer.  (c) must soon become more important than any other
issue, because it's ultimately why all reasoned debate will fail.

-- 
Ben.
ben.usenet (6790)
8/30/2012 12:28:06 PM
On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
> PeteOlcott <peteolc...@gmail.com> writes:
>
> <snip>
>
> >> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>
> >> > All of the examples of paradox of self-reference:
> >> > a) Liar Paradox
> >> > b) Barber Paradox
> >> > c) Gödel's Incompleteness Theorem
> >> > d) Halting Problem
>
> >> > contain very slippery semantic structures that have never been
> >> > sufficiently elaborated to fully understand their true meaning. Only a
> >> > formalism as robust as Montague Semantics could provide this
> >> > sufficient specification.
>
> <snip>
>
> > The problem is the same with all four. The semantics of these four is
> > not specified correctly as an acyclic graph. Whenever there are any
> > cycles in the semantic specification, the specification is erroneous.
>
> Consider a "long-run" decider.  It answers yes/no to the question "does
> machine M execute more than 1000 steps when given input I".  This is a
> specification about machines, so any putative long-run decider (let's call
> it L) can be applied to its own encoding:
>
>   L(L, L)
>
> Is this specification erroneous?
>
> --
> Ben.

From the details that have been provided Pathological Self-Reference
has not been defined for L, and the specification does not seem to be
otherwise erroneous.
PeteOlcott (86)
8/30/2012 2:41:54 PM
On 8/30/2012 5:23 AM, PeteOlcott wrote:
> Not<ThereExists>
> <ElementOfSet>
> FinalStatesOf_H
> <MathematicallyMapsTo>
> Halts(M, H, input)
>
> Where M is defined as
> --------------------------------
> M(String H, String input):
> if H(input, H, input) loop
> else halt

. . .

Seeing your attempt at a formal definition makes me believe that you 
have seen these sorts of things before, but never really understood 
them. A list of all of the problems here:
1. What is the exact model of Turing machine you're using here? You 
mention "FinalStatesOf_H" which suggests that you're not using a simple 
q_acc/q_rej model.
2. How is Halts defined? How is H defined? You seem to be trying for 
conventional terminology, but no conventional definition of Halts is a 
3-ary function.
3. You keep using the term "mathematically maps to." I have no idea what 
you mean by this. If what you mean is "if and only if", then use that. 
If that is not what you mean, then you need to come up with better 
terminology.
4. You use a quantifier without giving us a variable that it quantifies 
over.
5. This is an open formula (it has free variables).
6. What we're really asking for is a semi-formal description of the 
function IsPSR : Turing Machine -> boolean.

Here is an example of a formal definition (I just adapted this on the spot):
An acceptable computable history C for a given machine M is a sequence 
of configurations C1, C2, ..., Cn such that C1 is the initial 
configuration of M, Cn is a configuration where M is in the accept 
state, and, for all k > 1, the difference between Ck-1 and Ck is a legal 
step according to the rules of M.
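As an illustration of how such a definition can be made mechanical, here is a hedged sketch of a checker for the property Joshua defines, with configurations modeled as opaque values and the machine-specific parts passed in as predicates (all names here are illustrative):

```python
def is_acceptable_history(configs, initial, is_accept, legal_step):
    """Check the definition above: C1 is the initial configuration,
    Cn is an accepting configuration, and each adjacent pair of
    configurations is related by a legal step of the machine."""
    if not configs or configs[0] != initial:
        return False
    if not is_accept(configs[-1]):
        return False
    return all(legal_step(a, b) for a, b in zip(configs, configs[1:]))

# Toy machine: configurations are integers, start at 0, accept at 3,
# and a legal step increments the configuration by exactly 1.
ok = is_acceptable_history([0, 1, 2, 3], 0,
                           lambda c: c == 3,
                           lambda a, b: b == a + 1)
print(ok)  # True
```

The point of the example is the form: every term in the definition ("initial", "accept state", "legal step") is pinned down before it is used.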

> This seems completely unambiguous to me, I see no possible way that it
> could be misunderstood unless someone wanted to fake misunderstanding
> for the purpose of maintaining a fake disagreement.

Your attempt is missing basic boilerplate to explain what half your 
symbols are. This gives us as readers no context for half of what you 
mean, which you presumably know and hence causes it to look unambiguous.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/30/2012 3:21:56 PM
On 8/30/2012 5:28 AM, Ben Bacarisse wrote:
....
> I have never seen a formal proof of this theorem.  I've seen a formal
> proof of the un-countability of P(N), but not one of the un-decidability
> of halting.  Formal proofs are quite rare.

Fortunately, Peter Olcott also disbelieves Goedel's incompleteness
theorems, and that work is inherently interesting to people who study
formal proof. See e.g. http://r6.ca/Goedel/goedel1.html

That web page is nicely formatted. The actual formal proof is set off in
boxes, clearly distinguished from the surrounding text explaining and
discussing it.

Patricia

0
pats (3556)
8/30/2012 3:32:25 PM
On 8/29/2012 6:46 PM, Peter Olcott wrote:
> What other way is there to prove the Halting Problem besides what I call
> Pathological Self-Reference and Diagonalization?

There's reduction from any other of the set of equivalently-pessimistic 
problems (like Godel's Incompleteness Theorems, undecidability of 
Diophantine equations), although some of those may be conventionally 
derived from the undecidability proof and hence require alternative 
definitions.

You might be able to prove that the complement of halting is not 
Turing-recognizable and derive halting's undecidability from that, 
although the other way around is the conventional way and much easier.

You can also derive it from the uncomputability of several functions, 
including the Busy-Beaver and Chaitin's Complexity, which would be 
(trivially) computable if you could solve the Halting problem.

Halting is a corollary of Rice's Theorem, although that's normally 
proved by the halting problem anyways...

That's all I could think up in just 5 minutes. :-(
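The Busy-Beaver reduction in particular is easy to sketch. Assuming a hypothetical halting oracle `halts`, the maximum step count over a set of halting machines becomes computable (machines are again modeled as Python generators yielding once per step; everything here is illustrative, not a real oracle):

```python
def steps_to_halt(machine):
    """Run a machine known to halt to completion, counting its steps."""
    return sum(1 for _ in machine())

def busy_beaver(machines, halts):
    """With a halting oracle, this maximum is trivially computable:
    skip the non-halters, run the rest, take the largest step count."""
    return max((steps_to_halt(m) for m in machines if halts(m)), default=0)

def m_short():
    yield

def m_long():
    for _ in range(5):
        yield

def m_loop():
    while True:
        yield

# A toy "oracle" hard-wired for this three-machine universe:
oracle = lambda m: m is not m_loop
print(busy_beaver([m_short, m_long, m_loop], oracle))  # 5
```

Since no such oracle exists for real Turing machines, the uncomputability of Busy Beaver follows by contraposition.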

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/30/2012 3:33:21 PM
On 8/30/2012 7:28 AM, Ben Bacarisse wrote:
> I have never seen a formal proof of this theorem.  I've seen a formal
> proof of the un-countability of P(N), but not one of the un-decidability
> of halting.  Formal proofs are quite rare.

Googling for coq undecidability of halting yields this page:
<http://lampwww.epfl.ch/~cremet/halting_problem/with_proofs/halting_problem.html>.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/30/2012 3:42:04 PM
Patricia Shanahan <pats@acm.org> writes:

> On 8/30/2012 5:28 AM, Ben Bacarisse wrote:
> ...
>> I have never seen a formal proof of this theorem.  I've seen a formal
>> proof of the un-countability of P(N), but not one of the un-decidability
>> of halting.  Formal proofs are quite rare.
>
> Fortunately, Peter Olcott also disbelieves Goedel's incompleteness
> theorems, and that work is inherently interesting to people who study
> formal proof. See e.g. http://r6.ca/Goedel/goedel1.html
>
> That web page is nicely formatted. The actual formal proof is set off in
> boxes, clearly distinguished from the surrounding text explaining and
> discussing it.

Thank you for this link, and thank you Joshua for yours.  I'm not sure I
want to read either, but I am very happy to know they are there!

It's amazing how much Coq has advanced.  I used it (or maybe it's
predecessor) very briefly in the 90s for some security protocol checking
and I don't think it was anything like as sophisticated as it now seems
to be.

-- 
Ben.
0
ben.usenet (6790)
8/30/2012 3:56:16 PM
PeteOlcott <peteolcott@gmail.com> writes:

> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>> PeteOlcott <peteolc...@gmail.com> writes:
>>
>> <snip>
>>
>> >> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>>
>> >> > All of the examples of paradox of self-reference:
>> >> > a) Liar Paradox
>> >> > b) Barber Paradox
>> >> > c) Gödel’s Incompleteness Theorem
>> >> > d) Halting Problem
>>
>> >> > contain very slippery semantic structures that have never been
>> >> > sufficiently elaborated to fully understand their true meaning. Only a
>> >> > formalism as robust as Montague Semantics could provide this
>> >> > sufficient specification.
>>
>> <snip>
>>
>> > The problem is the same with all four. The semantics of these four is
>> > not specified correctly as an acyclic graph. Whenever there are any
>> > cycles in the semantic specification, the specification is erroneous.
>>
>> Consider a "long-run" decider.  It answers yes/no to the question "does
>> machine M execute more than 1000 steps when given input I".  This is a
>> specification about machines, so any putative long-run decider (let's call
>> it L) can be applied to its own encoding:
>>
>>   L(L, L)
>>
>> Is this specification erroneous?
>
> From the details that have been provided Pathological Self-Reference
> has not been defined for L, and the specification does not seem to be
> otherwise erroneous.

So what makes one OK and the other "pathological"?  How could I make
this determination for myself?

I suspect it's just that one is used in one of the proofs of a theorem
you don't like and the other is not, but maybe there is more to it than
that.

-- 
Ben.
0
ben.usenet (6790)
8/30/2012 4:02:59 PM
On Aug 30, 10:21 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> On 8/30/2012 5:23 AM, PeteOlcott wrote:
>
> > Not<ThereExists>
> > <ElementOfSet>
> > FinalStatesOf_H
> > <MathematicallyMapsTo>
> > Halts(M, H, input)
>
> > Where M is defined as
> > --------------------------------
> > M(String H, String input):
> > if H(input, H, input) loop
> > else halt
>
> . . .
>
> Seeing your attempt at a formal definition makes me believe that you
> have seen these sorts of things before, but never really understood
> them. A list of all of the problems here:
> 1. What is the exact model of Turing machine you're using here? You
> mention "FinalStatesOf_H" which suggests that you're not using a simple
> q_acc/q_rej model.
> 2. How is Halts defined? How is H defined?
H is defined as the set of TM's that correspond to the specification
below:

> You seem to be trying for
> conventional terminology, but no conventional definition of Halts is a
> 3-ary function.
> 3. You keep using the term "mathematically maps to." I have no idea what
> you mean by this. If what you mean is "if and only if", then use that.
> If that is not what you mean, then you need to come up with better
> terminology.
> 4. You use a quantifier without giving us a variable that it quantifies
> over.
> 5. This is an open formula (it has free variables).
> 6. What we're really asking for is a semi-formal description of the
> function IsPSR : Turing Machine -> boolean.
>
> Here is an example of a formal definition (I just adapted this on the spot):
> An acceptable computable history C for a given machine M is a sequence
> of configurations C1, C2, ..., Cn such that C1 is the initial
> configuration of M, Cn is a configuration where M is in the accept
> state, and, for all k, the difference between Ck and Ck-1 is a legal
> step according to the rules of M.
>
> > This seems completely unambiguous to me, I see no possible way that it
> > could be misunderstood unless someone wanted to fake misunderstanding
> > for the purpose of maintaining a fake disagreement.
>
> Your attempt is missing basic boilerplate to explain what half your
> symbols are. This gives us as readers no context for half of what you
> mean, which you presumably know and hence causes it to look unambiguous.
>
> --
> Beware of bugs in the above code; I have only proved it correct, not
> tried it. -- Donald E. Knuth

Not(<ThereExists> x (x <ElementOfSet>
      FinalStatesOf_H)<-->Halts(M, input) )

Where M is defined as
---------------------
M(String input):
if H(M, input) loop
else halt

FinalStatesOf_H
a) accept
b) reject
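For what it's worth, the pseudocode for M above can be sketched concretely. Modeling machines as Python callables and a candidate decider H as an ordinary boolean function (illustrative names only), the diagonal construction makes any fixed H's answer wrong either way:

```python
def make_M(H):
    """Build the machine M that does the opposite of whatever H predicts."""
    def M(inp):
        if H(M, inp):       # H claims M halts on inp...
            while True:     # ...so M loops forever instead
                pass
        else:               # H claims M loops on inp...
            return          # ...so M halts immediately
    return M

def naive_H(machine, inp):
    return True  # a deliberately naive candidate: claims everything halts

M = make_M(naive_H)
# naive_H(M, 0) returns True ("M halts"), yet M(0) would in fact loop
# forever; a decider answering False fails symmetrically, so no fixed H
# answers correctly on its own diagonal machine.
print(naive_H(M, 0))  # True
```

This is only a sketch of the standard construction, not a formalization of the disputed claim.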
0
PeteOlcott (86)
8/30/2012 4:27:37 PM
On 8/30/2012 11:27 AM, PeteOlcott wrote:
> Not(<ThereExists> x (x <ElementOfSet>
>        FinalStatesOf_H)<-->Halts(M, input) )
>
> Where M is defined as
> ---------------------
> M(String input):
> if H(M, input) loop
> else halt
>
> FinalStatesOf_H
> a) accept
> b) reject

Sigh. It seems that you have a persistent difficulty with responding to 
multiple points of criticism in a single post, so I'll make it easy for 
you and boil this down to one point:

Whatever you think this is, this is not a usable formal definition. This 
is a mishmash of mathematical symbols (or attempts to write math symbols 
without using them), some of them used incorrectly. A good formal 
definition requires English prose, and lots of it.

-- 
Beware of bugs in the above code; I have only proved it correct, not 
tried it. -- Donald E. Knuth
0
Pidgeot18 (1520)
8/30/2012 5:35:09 PM
On 8/30/2012 9:27 AM, PeteOlcott wrote:
> On Aug 30, 10:21 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
....
>> Here is an example of a formal definition (I just adapted this on the spot):
>> An acceptable computable history C for a given machine M is a sequence
>> of configurations C1, C2, ..., Cn such that C1 is the initial
>> configuration of M, Cn is a configuration where M is in the accept
>> state, and, for all k, the difference between Ck and Ck-1 is a legal
>> step according to the rules of M.
....
> Not(<ThereExists> x (x <ElementOfSet>
>        FinalStatesOf_H)<-->Halts(M, input) )
>
> Where M is defined as
> ---------------------
> M(String input):
> if H(M, input) loop
> else halt
>
> FinalStatesOf_H
> a) accept
> b) reject
>

Joshua's example started "An acceptable computable history C for a given
machine M is". That is, he said what he is defining, "An acceptable
computable history", any parameters, "a given machine M" and then the
word "is". It means that the subject of "is" is defined to be the same
thing as the complement.

A meaningful definition of "pathological self-reference" might start
something like: "A proof P exhibits pathological self-reference if, and
only if, ...." where "..." would be replaced by a condition that would tell
us whether or not P exhibits pathological self-reference.

I assumed above that pathological self-reference is a property of a
proof. It might be a property of a statement. The definition should make
it absolutely obvious what types of things can exhibit pathological
self-reference, as well as how we evaluate whether they actually exhibit it.

At best, you seem to be aiming for an example. An example is not a
definition. Typical proofs e.g. of Goedel's first incompleteness
theorem do not mention accept or reject states. Does that mean they do
not exhibit pathological self-reference?

Patricia
0
pats (3556)
8/30/2012 5:57:26 PM
On 8/30/2012 12:57 PM, Patricia Shanahan wrote:
> On 8/30/2012 9:27 AM, PeteOlcott wrote:
>> On Aug 30, 10:21 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> ...
>>> Here is an example of a formal definition (I just adapted this on 
>>> the spot):
>>> An acceptable computable history C for a given machine M is a sequence
>>> of configurations C1, C2, ..., Cn such that C1 is the initial
>>> configuration of M, Cn is a configuration where M is in the accept
>>> state, and, for all k, the difference between Ck and Ck-1 is a legal
>>> step according to the rules of M.
> ...
>> Not(<ThereExists> x (x <ElementOfSet>
>>        FinalStatesOf_H)<-->Halts(M, input) )
>>
>> Where M is defined as
>> ---------------------
>> M(String input):
>> if H(M, input) loop
>> else halt
>>
>> FinalStatesOf_H
>> a) accept
>> b) reject
>>
>
> Joshua's example started "An acceptable computable history C for a given
> machine M is". That is, he said what he is defining, "An acceptable
> computable history", any parameters, "a given machine M" and then the
> word "is". It means that the subject of "is" is defined to be the same
> thing as the complement.
>
> A meaningful definition of "pathological self-reference" might start
> something like: "A proof P exhibits pathological self-reference if, and
> only if, ...." where "..." would be replaced a condition that would tell
> us whether or not P exhibits pathological self-reference.
>
> I assumed above that pathological self-reference is a property of a
> proof. It might be a property of a statement. 
Yes the latter, it is not a property of a proof.

> The definition should make
> it absolutely obvious what types of things can exhibit pathological
> self-reference, as well as how we evaluate whether they actually 
> exhibit it.
>
All that I specified was an infinite set of instances of Halting Problem 
Pathological Self-Reference.

> At best, you seem to be aiming for an example. An example is not a
> definition. Typical proofs e.g. of Goedel's first incompleteness
> theorem do not mention accept or reject states. Does that mean they do
> not exhibit pathological self-reference?
>
> Patricia
How about we back up a step and work on an analogous problem and see 
where that leads?

Is the set of all sets that are not members of themselves within the 
Universal Set or not?

0
Peter
8/31/2012 12:22:04 AM
On 8/30/2012 5:22 PM, Peter Olcott wrote:
> On 8/30/2012 12:57 PM, Patricia Shanahan wrote:
>> On 8/30/2012 9:27 AM, PeteOlcott wrote:
>>> On Aug 30, 10:21 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
>> ...
>>>> Here is an example of a formal definition (I just adapted this on
>>>> the spot):
>>>> An acceptable computable history C for a given machine M is a sequence
>>>> of configurations C1, C2, ..., Cn such that C1 is the initial
>>>> configuration of M, Cn is a configuration where M is in the accept
>>>> state, and, for all k, the difference between Ck and Ck-1 is a legal
>>>> step according to the rules of M.
>> ...
>>> Not(<ThereExists> x (x <ElementOfSet>
>>>        FinalStatesOf_H)<-->Halts(M, input) )
>>>
>>> Where M is defined as
>>> ---------------------
>>> M(String input):
>>> if H(M, input) loop
>>> else halt
>>>
>>> FinalStatesOf_H
>>> a) accept
>>> b) reject
>>>
>>
>> Joshua's example started "An acceptable computable history C for a given
>> machine M is". That is, he said what he is defining, "An acceptable
>> computable history", any parameters, "a given machine M" and then the
>> word "is". It means that the subject of "is" is defined to be the same
>> thing as the complement.
>>
>> A meaningful definition of "pathological self-reference" might start
>> something like: "A proof P exhibits pathological self-reference if, and
>> only if, ...." where "..." would be replaced by a condition that would tell
>> us whether or not P exhibits pathological self-reference.
>>
>> I assumed above that pathological self-reference is a property of a
>> proof. It might be a property of a statement.
> Yes the latter, it is not a property of a proof.
>
>> The definition should make
>> it absolutely obvious what types of things can exhibit pathological
>> self-reference, as well as how we evaluate whether they actually
>> exhibit it.
>>
> All that I specified was an infinite set of instances of Halting Problem
> Pathological Self-Reference.
>
>> At best, you seem to be aiming for an example. An example is not a
>> definition. Typical proofs e.g. of Goedel's first incompleteness
>> theorem do not mention accept or reject states. Does that mean they do
>> not exhibit pathological self-reference?
>>
>> Patricia
> How about we back up a step and work on an analogous problem and see
> where that leads?
>
> Is the set of all sets that are not members of themselves within the
> Universal Set or not?
>

How about we stick to defining "pathological self-reference"?

You have made dramatic claims based on your claim that it exists in
certain proofs and is somehow a serious flaw. It's time to define it so
that we can find out what it is, or admit that it is merely your
prejudice against certain impossibility results.

Patricia
0
pats (3556)
8/31/2012 2:12:27 AM
Joshua Cranmer <Pidgeot18@verizon.invalid> writes:

> On 8/30/2012 7:28 AM, Ben Bacarisse wrote:
>> I have never seen a formal proof of this theorem.  I've seen a formal
>> proof of the un-countability of P(N), but not one of the un-decidability
>> of halting.  Formal proofs are quite rare.
>
> Googling for coq undecidability of halting yields this page:
> <http://lampwww.epfl.ch/~cremet/halting_problem/with_proofs/halting_problem.html>.

Thanks for posting this.  I think I found it the last time I searched
for this sort of thing, but I did not have time to look at it then (it's
not short!) and I then lost the link.

I thought it would be fun to try getting it running, but it turns out it's
written for an old version of Coq.  The differences seem to be largely
syntactic, so I have managed to get it to verify under Coq 8.3.  It
relies on surprisingly little logic (you don't need Coq.Logic.Classical
for example).

It occurs to me that if Peter wants a very high degree of precision and
an almost total lack of ambiguity when talking about this sort of thing,
the Coq source is surely one of the best ways to go about it.  For
example, he recently posted something (almost meaningless to me) which
he seemed to think could be written clearly and in some sort of symbolic
way.  I am sure it could be expressed in Coq.

-- 
Ben.
0
ben.usenet (6790)
8/31/2012 2:12:35 AM
On Aug 30, 9:12 pm, Patricia Shanahan <p...@acm.org> wrote:
> On 8/30/2012 5:22 PM, Peter Olcott wrote:
>
>
>
>
>
> > On 8/30/2012 12:57 PM, Patricia Shanahan wrote:
> >> On 8/30/2012 9:27 AM, PeteOlcott wrote:
> >>> On Aug 30, 10:21 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote:
> >> ...
> >>>> Here is an example of a formal definition (I just adapted this on
> >>>> the spot):
> >>>> An acceptable computable history C for a given machine M is a sequence
> >>>> of configurations C1, C2, ..., Cn such that C1 is the initial
> >>>> configuration of M, Cn is a configuration where M is in the accept
> >>>> state, and, for all k, the difference between Ck and Ck-1 is a legal
> >>>> step according to the rules of M.
> >> ...
> >>> Not(<ThereExists> x (x <ElementOfSet>
> >>>        FinalStatesOf_H)<-->Halts(M, input) )
>
> >>> Where M is defined as
> >>> ---------------------
> >>> M(String input):
> >>> if H(M, input) loop
> >>> else halt
>
> >>> FinalStatesOf_H
> >>> a) accept
> >>> b) reject
>
> >> Joshua's example started "An acceptable computable history C for a given
> >> machine M is". That is, he said what he is defining, "An acceptable
> >> computable history", any parameters, "a given machine M" and then the
> >> word "is". It means that the subject of "is" is defined to be the same
> >> thing as the complement.
>
> >> A meaningful definition of "pathological self-reference" might start
> >> something like: "A proof P exhibits pathological self-reference if, and
> >> only if, ...." where "..." would be replaced by a condition that would tell
> >> us whether or not P exhibits pathological self-reference.
>
> >> I assumed above that pathological self-reference is a property of a
> >> proof. It might be a property of a statement.
> > Yes the latter, it is not a property of a proof.
>
> >> The definition should make
> >> it absolutely obvious what types of things can exhibit pathological
> >> self-reference, as well as how we evaluate whether they actually
> >> exhibit it.
>
> > All that I specified was an infinite set of instances of Halting Problem
> > Pathological Self-Reference.
>
> >> At best, you seem to be aiming for an example. An example is not a
> >> definition. Typical proofs e.g. of Goedel's first incompleteness
> >> theorem do not mention accept or reject states. Does that mean they do
> >> not exhibit pathological self-reference?
>
> >> Patricia
> > How about we back up a step and work on an analogous problem and see
> > where that leads?
>
> > Is the set of all sets that are not members of themselves within the
> > Universal Set or not?
>
> How about we stick to defining "pathological self-reference"?
>
> You have made dramatic claims based on your claim that it exists in
> certain proofs and is somehow a serious flaw. It's time to define it so
> that we can find out what it is, or admit that it is merely your
> prejudice against certain impossibility results.
>
> Patricia

I think that I may have to build an analogical bridge from Russell's
Paradox to the Halting Problem.
Pathological Self-Reference has an analog within Russell's Paradox.
0
PeteOlcott (86)
8/31/2012 2:29:28 AM
On 8/30/2012 7:29 PM, PeteOlcott wrote:
....
> I think that I may have to build an analogical bridge from Russell's
> Paradox to the Halting Problem.
> Pathological Self-Reference has an analog within Russell's Paradox.
>

Are you really using a system of axioms in which Russell's Paradox is an
issue? If so, why?

The closest analogy I can see is the need to avoid defining into
one's reasoning entities that lead to paradoxes, such as a TM that
decides TM halting, but that is probably not what you meant.

However, we have already had a couple of cases of confusion caused by
use of analogies. Let's not go down that route. Please just go ahead and
define "pathological self-reference".

If you need some time to think it through, we can suspend this thread
while you work on it. That might also allow you some time to learn
mathematics and theory of computation.

Patricia

0
pats (3556)
8/31/2012 2:42:34 AM
On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote:
> On 8/30/2012 7:29 PM, PeteOlcott wrote:
> ...
>
> > I think that I may have to build an analogical bridge from Russell's
> > Paradox to the Halting Problem.
> > Pathological Self-Reference has an analog within Russell's Paradox.
>
> Are you really using a system of axioms in which Russell's Paradox is an
> issue? If so, why?

The error of the Halting Problem is exactly analogous to the error of
Russell's Paradox along exactly one dimension.

>
> The closest analogy I can see is the need to avoid defining into
> one's reasoning entities that lead to paradoxes, such as a TM that
> decides TM halting, but that is probably not what you meant.
>

All paradoxes are actually errors. The error of Russell's Paradox is
much easier to point out than the error of the Halting Problem.

> However, we have already had a couple of cases of confusion caused by
> use of analogies. Let's not go down that route. Please just go ahead and
> define "pathological self-reference".
>
> If you need some time to think it through, we can suspend this thread
> while you work on it. That might also allow you some time to learn
> mathematics and theory of computation.
>
> Patricia

0
PeteOlcott (86)
8/31/2012 3:03:40 AM
On 8/30/2012 10:01 PM, david petry wrote:
> On Thursday, August 30, 2012 7:12:27 PM UTC-7, Patricia Shanahan
> wrote:
>
>> How about we stick to defining "pathological self-reference"?
>
> I'm jumping into this discussion rather late, but have you guys
> satisfactorily defined "self-reference" itself yet?

Not yet. And it is a good question.

It is easy to show existence, for a given Turing machine M, of an
infinite family of Turing machines that have different
implementations from M, but produce the same outcome as M for any initial
tape contents. Does applying a TM to a distinct, but equivalent, TM
constitute "self-reference"?

There are also TMs that differ e.g. in what they do on empty initial
tape, but produce the same outcomes for each non-empty initial tape.
Does applying M to a TM that is equivalent for non-empty initial tape,
or for initial tape with more than some number of input symbols,
constitute "self-reference"?

Essentially, how similar, in what ways, do two machines M and N have to
be for applying M to N to constitute self-reference?

These are the sorts of issues that need to be resolved in a definition
of "pathological self-reference".
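The equivalence issue is easy to make concrete. The functions below (purely illustrative) show the two cases described above: the first pair computes the same result on every input with different implementations, and the third agrees with the first everywhere except on one "empty" input:

```python
def double_a(n):
    return n * 2

def double_b(n):
    return n + n  # different implementation, identical behavior

def double_c(n):
    if n == 0:
        raise ValueError("undefined on the 'empty' input")
    return n * 2  # agrees with double_a on every non-zero input

# Any workable definition of "self-reference" must say whether applying
# an analyzer of double_a to double_b (or to double_c) counts.
print(all(double_a(n) == double_b(n) for n in range(1, 100)))  # True
```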

Patricia

0
pats (3556)
8/31/2012 8:01:34 AM
On 8/30/2012 8:03 PM, PeteOlcott wrote:
> On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote:
>> On 8/30/2012 7:29 PM, PeteOlcott wrote:
>> ...
>>
>>> I think that I may have to build an analogical bridge from Russell's
>>> Paradox to the Halting Problem.
>>> Pathological Self-Reference has an analog within Russell's Paradox.
>>
>> Are you really using a system of axioms in which Russell's Paradox is an
>> issue? If so, why?
>
> The error of the Halting Problem is exactly analogous to the error of
> Russell's Paradox along exactly one dimension.

There are at least two dimensions in which they are exactly analogous.

1. In each case, assuming the existence of an entity (set of all sets;
halting decider) causes contradictions (a set that is a member of itself
if, and only if, it is not a member of itself; programs that halt if,
and only if, they do not halt).

2. The conventional solution, in each case, is to choose axioms that do
not imply the existence of the troublesome entity. ZFC, for example,
does not allow construction of the set of all sets. Theory of
computation researchers do not assume the existence of a halting
decider, and consider its existence disproved by the fact that assuming
its existence leads to contradictions.

There may be other dimensions in which they are analogous, but I doubt
if there are any analogies between them as direct and significant as
those two.
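The first analogy even has a direct computational shadow. Translating Russell's predicate into a Python function (a sketch, not a formal claim), self-application admits no consistent truth value, and actually running it just recurses without ever settling:

```python
def russell(pred):
    """R(x) = 'x is not satisfied by itself'."""
    return not pred(pred)

# russell(russell) would have to equal `not russell(russell)` -- the
# contradiction. Operationally, the call never returns a value at all:
try:
    russell(russell)
except RecursionError:
    print("no consistent answer: the self-application never settles")
```

This mirrors how assuming a halting decider yields a machine that halts if, and only if, it does not halt.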

When can we expect a definition of "pathological self-reference"?

Patricia

0
pats (3556)
8/31/2012 4:33:38 PM
On 8/30/2012 9:02 AM, Ben Bacarisse wrote:
> PeteOlcott <peteolcott@gmail.com> writes:
>
>> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>>> PeteOlcott <peteolc...@gmail.com> writes:
>>>
>>> <snip>
>>>
>>>>> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>>>
>>>>>> All of the examples of paradox of self-reference:
>>>>>> a) Liar Paradox
>>>>>> b) Barber Paradox
>>>>>> c) Gödel’s Incompleteness Theorem
>>>>>> d) Halting Problem
>>>
>>>>>> contain very slippery semantic structures that have never been
>>>>>> sufficiently elaborated to fully understand their true meaning. Only a
>>>>>> formalism as robust as Montague Semantics could provide this
>>>>>> sufficient specification.
>>>
>>> <snip>
>>>
>>>> The problem is the same with all four. The semantics of these four is
>>>> not specified correctly as an acyclic graph. Whenever there are any
>>>> cycles in the semantic specification, the specification is erroneous.
>>>
>>> Consider a "long-run" decider.  It answers yes/no to the question "does
>>> machine M execute more than 1000 steps when given input I".  This is a
>>> specification about machines, so any putative long-run decider (let's call
>>> it L) can be applied to its own encoding:
>>>
>>>    L(L, L)
>>>
>>> Is this specification erroneous?
>>
>>  From the details that have been provided Pathological Self-Reference
>> has not been defined for L, and the specification does not seem to be
>> otherwise erroneous.
>
> So what makes one OK and the other "pathological"?  How could I make
> this determination for myself?
>
> I suspect it's just that one is used in one of the proofs of a theorem
> you don't like and the other is not, but maybe there is more to it than
> that.
>

Also, I'm a little concerned about the wording "From the details that
have been provided". If pathological self-reference is a useful and
meaningful property of an expression, it should be possible to say
whether an expression exhibits it without any other details.

That tends to support your suspicion - if you were to use the long run
decider in some proof of a theorem Peter does not like, it would turn
out to be pathological self-reference.

Patricia

0
pats (3556)
8/31/2012 4:42:11 PM
On 8/31/2012 11:42 AM, Patricia Shanahan wrote:
> On 8/30/2012 9:02 AM, Ben Bacarisse wrote:
>> PeteOlcott <peteolcott@gmail.com> writes:
>>
>>> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>>>> PeteOlcott <peteolc...@gmail.com> writes:
>>>>
>>>> <snip>
>>>>
>>>>>> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>>>>
>>>>>>> All of the examples of paradox of self-reference:
>>>>>>> a) Liar Paradox
>>>>>>> b) Barber Paradox
>>>>>>> c) Gödel’s Incompleteness Theorem
>>>>>>> d) Halting Problem
>>>>
>>>>>>> contain very slippery semantic structures that have never been
>>>>>>> sufficiently elaborated to fully understand their true meaning. 
>>>>>>> Only a
>>>>>>> formalism as robust as Montague Semantics could provide this
>>>>>>> sufficient specification.
>>>>
>>>> <snip>
>>>>
>>>>> The problem is the same with all four. The semantics of these four is
>>>>> not specified correctly as an acyclic graph. Whenever there are any
>>>>> cycles in the semantic specification, the specification is erroneous.
>>>>
>>>> Consider a "long-run" decider.  It answers yes/no to the question 
>>>> "does
>>>> machine M execute more than 1000 steps when given input I". This is a
>>>> specification about machines, so any putative long-run decider 
>>>> (let's call
>>>> it L) can be applied to its own encoding:
>>>>
>>>>    L(L, L)
>>>>
>>>> Is this specification erroneous?
>>>
>>>  From the details that have been provided Pathological Self-Reference
>>> has not been defined for L, and the specification does not seem to be
>>> otherwise erroneous.
>>
>> So what makes one OK and the other "pathological"?  How could I make
>> this determination for myself?
>>
>> I suspect it's just that one is used in one of the proofs of a theorem
>> you don't like and the other is not, but maybe there is more to it than
>> that.
>>
>
> Also, I'm a little concerned about the wording "From the details that
> have been provided". If pathological self-reference is a useful and
> meaningful property of an expression, it should be possible to say
> whether an expression exhibits it without any other details.
>
If you ask me about the behavior of a function and only provide me with 
the function prototype and not the function body there is no way that my 
answer can be certain.

> That tends to support your suspicion - if you were to use the long run
> decider in some proof of a theorem Peter does not like, it would turn
> out to be pathological self-reference.
>
> Patricia
>

0
Peter
8/31/2012 5:09:40 PM
On 8/31/2012 10:09 AM, Peter Olcott wrote:
> On 8/31/2012 11:42 AM, Patricia Shanahan wrote:
>> On 8/30/2012 9:02 AM, Ben Bacarisse wrote:
>>> PeteOlcott <peteolcott@gmail.com> writes:
>>>
>>>> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>>>>> PeteOlcott <peteolc...@gmail.com> writes:
>>>>>
>>>>> <snip>
>>>>>
>>>>>>> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>>>>>
>>>>>>>> All of the examples of paradox of self-reference:
>>>>>>>> a) Liar Paradox
>>>>>>>> b) Barber Paradox
>>>>>>>> c) Gödel’s Incompleteness Theorem
>>>>>>>> d) Halting Problem
>>>>>
>>>>>>>> contain very slippery semantic structures that have never been
>>>>>>>> sufficiently elaborated to fully understand their true meaning.
>>>>>>>> Only a
>>>>>>>> formalism as robust as Montague Semantics could provide this
>>>>>>>> sufficient specification.
>>>>>
>>>>> <snip>
>>>>>
>>>>>> The problem is the same with all four. The semantics of these four is
>>>>>> not specified correctly as an acyclic graph. Whenever there are any
>>>>>> cycles in the semantic specification, the specification is erroneous.
>>>>>
>>>>> Consider a "long-run" decider.  It answers yes/no to the question
>>>>> "does
>>>>> machine M execute more than 1000 steps when given input I". This is a
>>>>> specification about machines, so any putative long-run decider
>>>>> (let's call
>>>>> it L) can be applied to its own encoding:
>>>>>
>>>>>    L(L, L)
>>>>>
>>>>> Is this specification erroneous?
>>>>
>>>>  From the details that have been provided Pathological Self-Reference
>>>> has not been defined for L, and the specification does not seem to be
>>>> otherwise erroneous.
>>>
>>> So what makes one OK and the other "pathological"?  How could I make
>>> this determination for myself?
>>>
>>> I suspect it's just that one is used in one of the proofs of a theorem
>>> you don't like and the other is not, but maybe there is more to it than
>>> that.
>>>
>>
>> Also, I'm a little concerned about the wording "From the details that
>> have been provided". If pathological self-reference is a useful and
>> meaningful property of an expression, it should be possible to say
>> whether an expression exhibits it without any other details.
>>
> If you ask me about the behavior of a function and only provide me with
> the function prototype and not the function body there is no way that my
> answer can be certain.

So whether L(L,L) is pathological self-reference or not depends not just
on L's functionality (which Ben did specify, in addition to the
prototype) but on how that functionality is implemented.

What aspects of the implementation matter? If you were given an L
implementation, what tests would you apply to it to determine whether
L(L,L) is pathological self-reference?

Patricia

0
pats (3556)
8/31/2012 6:56:09 PM
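Ben's "long-run" decider quoted in the exchange above is worth contrasting with a halt decider in code: because its question is bounded, it is trivially computable, self-application included. A minimal Python sketch, with "machines" modeled as generators that yield once per step (this encoding, and the helper names, are illustrative assumptions, not anything from the thread):

```python
def long_run(machine, machine_arg, input_arg, bound=1000):
    """Return True iff machine(machine_arg, input_arg) runs more than
    `bound` steps.  Always terminates: it simulates at most bound+1 steps."""
    steps = 0
    for _ in machine(machine_arg, input_arg):
        steps += 1
        if steps > bound:
            return True   # ran past the bound: answer "yes"
    return False          # halted within the bound: answer "no"

def counts_to(n):
    """A toy machine that runs exactly n steps, ignoring its arguments."""
    def m(_a, _b):
        for _ in range(n):
            yield
    return m

def L_as_machine(a, b):
    """A machine wrapping the decider itself, standing in for L(L, L)."""
    long_run(counts_to(5), None, None)   # one step that invokes the decider
    yield
```

Note that applying `long_run` to a machine that wraps `long_run` itself simply terminates; the bound guarantees an answer, which is why Ben's L(L, L) raises no paradox.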
On 8/31/2012 1:56 PM, Patricia Shanahan wrote:
> On 8/31/2012 10:09 AM, Peter Olcott wrote:
>> On 8/31/2012 11:42 AM, Patricia Shanahan wrote:
>>> On 8/30/2012 9:02 AM, Ben Bacarisse wrote:
>>>> PeteOlcott <peteolcott@gmail.com> writes:
>>>>
>>>>> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote:
>>>>>> PeteOlcott <peteolc...@gmail.com> writes:
>>>>>>
>>>>>> <snip>
>>>>>>
>>>>>>>> On 8/28/2012 1:48 PM, PeteOlcott wrote:
>>>>>>
>>>>>>>>> All of the examples of paradox of self-reference:
>>>>>>>>> a) Liar Paradox
>>>>>>>>> b) Barber Paradox
>>>>>>>>> c) Gödel’s Incompleteness Theorem
>>>>>>>>> d) Halting Problem
>>>>>>
>>>>>>>>> contain very slippery semantic structures that have never been
>>>>>>>>> sufficiently elaborated to fully understand their true meaning.
>>>>>>>>> Only a
>>>>>>>>> formalism as robust as Montague Semantics could provide this
>>>>>>>>> sufficient specification.
>>>>>>
>>>>>> <snip>
>>>>>>
>>>>>>> The problem is the same with all four. The semantics of these 
>>>>>>> four is
>>>>>>> not specified correctly as an acyclic graph. Whenever there are any
>>>>>>> cycles in the semantic specification, the specification is 
>>>>>>> erroneous.
>>>>>>
>>>>>> Consider a "long-run" decider.  It answers yes/no to the question
>>>>>> "does
>>>>>> machine M execute more than 1000 steps when given input I". This 
>>>>>> is a
>>>>>> specification about machines, so any putative long-run decider
>>>>>> (let's call
>>>>>> it L) can be applied to its own encoding:
>>>>>>
>>>>>>    L(L, L)
>>>>>>
>>>>>> Is this specification erroneous?
>>>>>
>>>>>  From the details that have been provided Pathological Self-Reference
>>>>> has not been defined for L, and the specification does not seem to be
>>>>> otherwise erroneous.
>>>>
>>>> So what makes one OK and the other "pathological"?  How could I make
>>>> this determination for myself?
>>>>
>>>> I suspect it's just that one is used in one of the proofs of a theorem
>>>> you don't like and the other is not, but maybe there is more to it 
>>>> than
>>>> that.
>>>>
>>>
>>> Also, I'm a little concerned about the wording "From the details that
>>> have been provided". If pathological self-reference is a useful and
>>> meaningful property of an expression, it should be possible to say
>>> whether an expression exhibits it without any other details.
>>>
>> If you ask me about the behavior of a function and only provide me with
>> the function prototype and not the function body there is no way that my
>> answer can be certain.
>
> So whether L(L,L) is pathological self-reference or not depends not just
> on L's functionality, which Ben did specify, in addition to the
> prototype, but on how that functionality is implemented.
>
> What aspects of the implementation matter? If you were given an L
> implementation, what tests would you apply to it to determine whether
> L(L,L) is pathological self-reference?
>
> Patricia
>
Pathological Self-Reference is self-reference that prevents the 
functional specification of the desired end-result from being achieved.
I have stated this countless times as analogous to the process of 
deriving an ill-formed question. The {answer} to this question is the 
desired functional end-result: Does TM x halt on input y?
0
Peter
8/31/2012 7:10:38 PM
On 8/31/2012 12:10 PM, Peter Olcott wrote:
....
> Pathological Self-Reference is self-reference that prevents the
> functional specification of the desired end-result from being achieved.

This seems to end the discussion. I am sure that pathological
self-reference is not something that would matter to most people, and
especially not to anyone who has worked on compilers.

It is certainly not something that can be made rigorous enough for use
in mathematical proofs.

Patricia

0
pats (3556)
8/31/2012 7:41:32 PM
On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
> On 8/31/2012 12:10 PM, Peter Olcott wrote:
> ...
>> Pathological Self-Reference is self-reference that prevents the
>> functional specification of the desired end-result from being achieved.
>
> This seems to end the discussion. I am sure that pathological
> self-reference is not something that would matter to most people, and
> especially not to anyone who has worked on compilers.
>
Maybe I can specify Pathological Self-Reference later on. Now that I 
have a good understanding of Montague Grammar, I have a language as 
expressive as English and as precise as Math.

I will have to specify a set of Meaning Postulates using something like 
Predicate Logic.

> It is certainly not something that can be made rigorous enough for use
> in mathematical proofs.
>
> Patricia
>

0
Peter
9/1/2012 1:09:42 AM
On 8/31/2012 11:33 AM, Patricia Shanahan wrote:
> On 8/30/2012 8:03 PM, PeteOlcott wrote:
>> On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote:
>>> On 8/30/2012 7:29 PM, PeteOlcott wrote:
>>> ...
>>>
>>>> I think that I may have to build an analogical bridge from Russell's
>>>> Paradox to the Halting Problem.
>>>> Pathological Self-Reference has an analog within Russell's Paradox.
>>>
>>> Are you really using a system of axioms in which Russell's Paradox 
>>> is an
>>> issue? If so, why?
>>
>> The error of the Halting Problem is exactly analogous to the error of
>> Russell's Paradox along exactly one dimension.
>
> There are at least two dimensions in which they are exactly analogous.
>
> 1. In each case, assuming the existence of an entity (set of all sets;
> halting decider) causes contradictions (a set that is a member of itself
> if, and only if, it is not a member of itself; programs that halt if,
> and only if, they do not halt).
>
The criterion measure then becomes:
{Assumptions that cause Contradictions}.

So the above two (RP and HP) would then be analogous to assuming the 
existence of a Square Circle:
The assumption of the entity {Square Circle} results in the 
contradiction of requiring mutually exclusive properties.

It would also be analogous to every other element in the set of 
{Assumptions that cause Contradictions}.

> 2. The conventional solution, in each case, is to choose axioms that do
> not imply the existence of the troublesome entity. ZFC, for example,
> does not allow construction of the set of all sets. Theory of
> computation researchers do not assume the existence of a halting
> decider, and consider its existence disproved by the fact that assuming
> its existence leads to contradictions.
>
> There may be other dimensions in which they are analogous, but I doubt
> if there are any analogies between them as direct and significant as
> those two.
>
> When can we expect a definition of "pathological self-reference"?
>
> Patricia
>

0
Peter
9/1/2012 1:45:35 AM
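Patricia's first dimension of analogy, quoted above, can be made concrete in code. A hedged sketch (Python functions stand in for sets and a predicate stands in for membership; every name here is illustrative, not from the thread): the Russell "set" applied to itself can settle on neither truth value, which in an untyped sketch surfaces as unbounded recursion rather than a bare contradiction.

```python
def russell(member_of):
    """Build the predicate for R = { x : not member_of(x, x) }."""
    def in_R(x):
        return not member_of(x, x)
    return in_R

# If membership is just "apply the predicate", then in_R(in_R) would have
# to be true exactly when it is false.  Operationally the self-application
# never settles: each evaluation of in_R(in_R) demands another.
in_R = russell(lambda x, s: s(x))
```

This is the same shape as assuming a halting decider: the assumed entity, applied to its own encoding, is forced into a value it cannot consistently take.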
On 8/31/2012 6:45 PM, Peter Olcott wrote:
> On 8/31/2012 11:33 AM, Patricia Shanahan wrote:
>> On 8/30/2012 8:03 PM, PeteOlcott wrote:
>>> On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote:
>>>> On 8/30/2012 7:29 PM, PeteOlcott wrote:
>>>> ...
>>>>
>>>>> I think that I may have to build an analogical bridge from Russell's
>>>>> Paradox to the Halting Problem.
>>>>> Pathological Self-Reference has an analog within Russell's Paradox.
>>>>
>>>> Are you really using a system of axioms in which Russell's Paradox
>>>> is an
>>>> issue? If so, why?
>>>
>>> The error of the Halting Problem is exactly analogous to the error of
>>> Russell's Paradox along exactly one dimension.
>>
>> There are at least two dimensions in which they are exactly analogous.
>>
>> 1. In each case, assuming the existence of an entity (set of all sets;
>> halting decider) causes contradictions (a set that is a member of itself
>> if, and only if, it is not a member of itself; programs that halt if,
>> and only if, they do not halt).
>>
> The criterion measure then becomes:
> {Assumptions that  cause Contradictions}.
>
> So the above two (RP and HP) would then be analogous to assuming the
> existence of a Square Circle:
> The assumption of the entity {Square Circle} results in the
> contradiction of the requirement of mutually exclusive properties.

Yup, assuming existence of a halting decider is just like assuming
existence of a set-of-all-sets. Either leads to contradictions. Perhaps
you are beginning to get it.

Patricia

0
pats (3556)
9/1/2012 1:53:16 AM
On 8/31/2012 8:53 PM, Patricia Shanahan wrote:
> On 8/31/2012 6:45 PM, Peter Olcott wrote:
>> On 8/31/2012 11:33 AM, Patricia Shanahan wrote:
>>> On 8/30/2012 8:03 PM, PeteOlcott wrote:
>>>> On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote:
>>>>> On 8/30/2012 7:29 PM, PeteOlcott wrote:
>>>>> ...
>>>>>
>>>>>> I think that I may have to build an analogical bridge from Russell's
>>>>>> Paradox to the Halting Problem.
>>>>>> Pathological Self-Reference has an analog within Russell's Paradox.
>>>>>
>>>>> Are you really using a system of axioms in which Russell's Paradox
>>>>> is an
>>>>> issue? If so, why?
>>>>
>>>> The error of the Halting Problem is exactly analogous to the error of
>>>> Russell's Paradox along exactly one dimension.
>>>
>>> There are at least two dimensions in which they are exactly analogous.
>>>
>>> 1. In each case, assuming the existence of an entity (set of all sets;
>>> halting decider) causes contradictions (a set that is a member of 
>>> itself
>>> if, and only if, it is not a member of itself; programs that halt if,
>>> and only if, they do not halt).
>>>
>> The criterion measure then becomes:
>> {Assumptions that  cause Contradictions}.
>>
>> So the above two (RP and HP) would then be analogous to assuming the
>> existence of a Square Circle:
>> The assumption of the entity {Square Circle} results in the
>> contradiction of the requirement of mutually exclusive properties.
>
> Yup, assuming existence of a halting decider is just like assuming
> existence of a set-of-all-sets. Either leads to contradictions. Perhaps
> you are beginning to get it.
>
> Patricia
>
Assuming the existence of an answer to the question:
What time is it {yes or no} ?

{Assumptions that cause Contradictions}. Same thing, right?
0
Peter
9/1/2012 2:07:48 AM
On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
> On 8/31/2012 12:10 PM, Peter Olcott wrote:
> ...
>> Pathological Self-Reference is self-reference that prevents the
>> functional specification of the desired end-result from being achieved.
>
> This seems to end the discussion. I am sure that pathological
> self-reference is not something that would matter to most people, and
> especially not to anyone who has worked on compilers.
>

{Assumptions that cause Contradictions} or the analogous {Requirements 
that cause Contradictions} would form one aspect of the specification 
of Pathological Self-Reference.

> It is certainly not something that can be made rigorous enough for use
> in mathematical proofs.
>
> Patricia
>

0
Peter
9/1/2012 3:30:16 AM
> On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
> > On 8/31/2012 12:10 PM, Peter Olcott wrote:
> > ...
> >> Pathological Self-Reference is self-reference that prevents the
> >> functional specification of the desired end-result from being achieved.

On Aug 31, 9:09 pm, Peter Olcott <OCR4Screen> wrote:
> > This seems to end the discussion.

NO, IT DOESN'T.

The problem in the case of the halting problem is that IT IS NOT the
pathological self-reference, EVEN IF SUCH A THING EXISTED,
that prevents "the functional specification of the desired end-result
from being achieved".
What prevents the desired end-result from being achieved is that
achievement of the desired end-result
IS AN ANALYTICAL IMPOSSIBILITY!
There is A PROOF (you have seen it 50 times) that the MERE EXISTENCE
of a totally correct halt-deciding
TM *LOGICALLY* & *ANALYTICALLY* *IMPLIES* a contradiction!!  The easiest
contradiction to produce is
Russellian and does in fact involve passing a program's own
code-string to that program as a parameter,
but that IS NOT *NECESSARY*!!  THE SAME paradox would arise if you
passed ANY OTHER code-string
of ANY OTHER program that just HAPPENED TO HAVE THE SAME I/O
behavior!  So there NEED not be
any SELF-reference at ALL!  THAT IS NOT what is causing the problem!

> > I am sure that pathological
> > self-reference is not something that would matter to most people, and
> > especially not to anyone who has worked on compilers.

It matters TO EVERYBODY IN THIS ROOM as long as you are going to keep
denigrating one of OUR CORE RESULTS
as "based on an ill-formed" whatever, OR as "pathological"!! WE JUST
AIN'T HAVIN' THAT!!


> Maybe I can specify Pathological Self-Reference later on.

Oh, BULLSHIT!  YOU HAVE BEEN USING IT SINCE 2004!!
If you did not specify it THEN, then YOU HAVE JUST BEEN LYING ALL THIS
time!!

> Now that I
> have a good understanding of Montague Grammar, I have a language as
> expressive as English and Precise as Math.

Gawd, you are SUCH a troll.
0
greeneg9613 (188)
9/1/2012 6:12:26 AM
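The proof George is pointing at can be sketched directly. Assuming any claimed total halt decider `halts(f, x)` (a hypothetical name; nothing of the sort can exist), the standard construction builds a program the decider must be wrong about. A minimal sketch under those assumptions:

```python
def make_contrarian(halts):
    """Given any claimed total halt decider halts(f, x), return a program
    that does the opposite of whatever halts predicts about the program
    run on its own encoding."""
    def contrarian(f):
        if halts(f, f):
            while True:       # decider said "halts": loop forever
                pass
        return "halted"       # decider said "loops": halt at once
    return contrarian

# A candidate decider that always answers "does not halt" is refuted
# immediately: its contrarian halts when run on itself.
c = make_contrarian(lambda f, x: False)
result = c(c)   # halts, contradicting the decider's answer
```

A candidate that answers "halts" is refuted symmetrically (its contrarian loops forever, which cannot be demonstrated by running it). And, as George stresses, substituting for `c` any other program with the same input/output behavior yields the same contradiction, so literal self-reference is not what makes the question undecidable.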
Peter Olcott <OCR4Screen> writes:

> On 8/31/2012 8:53 PM, Patricia Shanahan wrote:
>> On 8/31/2012 6:45 PM, Peter Olcott wrote:
>>> On 8/31/2012 11:33 AM, Patricia Shanahan wrote:
>>>> On 8/30/2012 8:03 PM, PeteOlcott wrote:
>>>>> On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote:
>>>>>> On 8/30/2012 7:29 PM, PeteOlcott wrote:
>>>>>> ...
>>>>>>
>>>>>>> I think that I may have to build an analogical bridge from Russell's
>>>>>>> Paradox to the Halting Problem.
>>>>>>> Pathological Self-Reference has an analog within Russell's Paradox.
>>>>>>
>>>>>> Are you really using a system of axioms in which Russell's Paradox
>>>>>> is an
>>>>>> issue? If so, why?
>>>>>
>>>>> The error of the Halting Problem is exactly analogous to the error of
>>>>> Russell's Paradox along exactly one dimension.
>>>>
>>>> There are at least two dimensions in which they are exactly analogous.
>>>>
>>>> 1. In each case, assuming the existence of an entity (set of all sets;
>>>> halting decider) causes contradictions (a set that is a member of
>>>> itself
>>>> if, and only if, it is not a member of itself; programs that halt if,
>>>> and only if, they do not halt).
>>>>
>>> The criterion measure then becomes:
>>> {Assumptions that  cause Contradictions}.
>>>
>>> So the above two (RP and HP) would then be analogous to assuming the
>>> existence of a Square Circle:
>>> The assumption of the entity {Square Circle} results in the
>>> contradiction of the requirement of mutually exclusive properties.
>>
>> Yup, assuming existence of a halting decider is just like assuming
>> existence of a set-of-all-sets. Either leads to contradictions. Perhaps
>> you are beginning to get it.
>>
>> Patricia
>>
> Assuming the existence of an answer to the question:
> What time is it {yes or no} ?
>
> {Assumptions that  cause Contradictions}. Same thing right?

Yes.  Can we end this now?

If you'd posted that all those months ago (and all those years ago when
last brought up your pet hate), there would have been none of this.

I predict two things.  One, that you won't understand why there is now
agreement; and two, that agreement is not what you seek, so you will go
back to saying wrong things.

-- 
Ben.
0
ben.usenet (6790)
9/1/2012 12:09:52 PM
Peter Olcott <OCR4Screen> writes:

> On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
>> On 8/31/2012 12:10 PM, Peter Olcott wrote:
>> ...
>>> Pathological Self-Reference is self-reference that prevents the
>>> functional specification of the desired end-result from being achieved.
>>
>> This seems to end the discussion. I am sure that pathological
>> self-reference is not something that would matter to most people, and
>> especially not to anyone who has worked on compilers.
>>
>
> {Assumptions that  cause Contradictions} or the analogous
> {Requirements that  cause Contradictions} would form one aspect of the
> specification of Pathological Self-Reference.

It's interesting that even though you have an idea that you are sure
is right (PSR) you still won't commit to what it is.  You'll state only
one aspect of it (and that only in the subjunctive!).

But that's all old hat now.  By an extraordinary twist in the tale (well
done Patricia) agreement has been reached.

<snip>
-- 
Ben.
0
ben.usenet (6790)
9/1/2012 12:18:27 PM
On 9/1/2012 7:18 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
>>> On 8/31/2012 12:10 PM, Peter Olcott wrote:
>>> ...
>>>> Pathological Self-Reference is self-reference that prevents the
>>>> functional specification of the desired end-result from being achieved.
>>> This seems to end the discussion. I am sure that pathological
>>> self-reference is not something that would matter to most people, and
>>> especially not to anyone who has worked on compilers.
>>>
>> {Assumptions that  cause Contradictions} or the analogous
>> {Requirements that  cause Contradictions} would form one aspect of the
>> specification of Pathological Self-Reference.
> It's interesting that even though you have an idea that you are sure
> is right (PSR) you still won't commit to what it is.  You'll state only
> one aspect of it (and that only in the subjunctive!).
>
> But that's all old hat now.  By an extraordinary twist in the tale (well
> done Patricia) agreement has been reached.
>
> <snip>
Yet this is also the same thing as an ill-formed question.
So the Halting Problem *is* based on an ill-formed question, and 
everyone that disagreed was wrong?
0
Peter
9/1/2012 12:58:46 PM
Peter Olcott <OCR4Screen> writes:

> On 9/1/2012 7:18 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
>>>> On 8/31/2012 12:10 PM, Peter Olcott wrote:
>>>> ...
>>>>> Pathological Self-Reference is self-reference that prevents the
>>>>> functional specification of the desired end-result from being achieved.
>>>> This seems to end the discussion. I am sure that pathological
>>>> self-reference is not something that would matter to most people, and
>>>> especially not to anyone who has worked on compilers.
>>>>
>>> {Assumptions that  cause Contradictions} or the analogous
>>> {Requirements that  cause Contradictions} would form one aspect of the
>>> specification of Pathological Self-Reference.
>> It's interesting that even though you have an idea that you are sure
>> is right (PSR) you still won't commit to what it is.  You'll state only
>> one aspect of it (and that only in the subjunctive!).
>>
>> But that's all old hat now.  By an extraordinary twist in the tale (well
>> done Patricia) agreement has been reached.
>>
>> <snip>
> Yet this is also the same thing as an ill-formed question.
> So the Halting Problem *is* based on an ill-formed question, and
> everyone that disagreed was wrong?

No.  I predicted elsewhere that you would not know why people were
agreeing with you, but that you'd go back to saying wrong things very
soon.

-- 
Ben.
0
ben.usenet (6790)
9/1/2012 1:20:38 PM
On 9/1/2012 8:20 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 9/1/2012 7:18 AM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
>>>>> On 8/31/2012 12:10 PM, Peter Olcott wrote:
>>>>> ...
>>>>>> Pathological Self-Reference is self-reference that prevents the
>>>>>> functional specification of the desired end-result from being achieved.
>>>>> This seems to end the discussion. I am sure that pathological
>>>>> self-reference is not something that would matter to most people, and
>>>>> especially not to anyone who has worked on compilers.
>>>>>
>>>> {Assumptions that  cause Contradictions} or the analogous
>>>> {Requirements that  cause Contradictions} would form one aspect of the
>>>> specification of Pathological Self-Reference.
>>> It's interesting that even though you have an idea that you are sure
>>> is right (PSR) you still won't commit to what it is.  You'll state only
>>> one aspect of it (and that only in the subjunctive!).
>>>
>>> But that's all old hat now.  By an extraordinary twist in the tale (well
>>> done Patricia) agreement has been reached.
>>>
>>> <snip>
>> Yet this is also the same thing as an ill-formed question.
>> So the Halting Problem *is* based on an ill-formed question, and
>> everyone that disagreed was wrong?
> No.  I predicted elsewhere that you would not know why people were
> agreeing with you, but that you'd go back to saying wrong things very
> soon.
>
An ill-formed question is any question that lacks a correct answer from 
the set of all possible answers.

Since {Assumptions that cause Contradictions} are analogous to 
{Requirements that cause Contradictions}, in that an entity {Assumption 
or Requirement} causes a contradiction, questions whose required 
answers form contradictions are also elements of the set of 
{Entities that cause contradictions}.
0
Peter
9/1/2012 1:51:12 PM
Peter Olcott <OCR4Screen> writes:

> On 9/1/2012 8:20 AM, Ben Bacarisse wrote:
>> Peter Olcott <OCR4Screen> writes:
>>
>>> On 9/1/2012 7:18 AM, Ben Bacarisse wrote:
>>>> Peter Olcott <OCR4Screen> writes:
>>>>
>>>>> On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
>>>>>> On 8/31/2012 12:10 PM, Peter Olcott wrote:
>>>>>> ...
>>>>>>> Pathological Self-Reference is self-reference that prevents the
>>>>>>> functional specification of the desired end-result from being achieved.
>>>>>> This seems to end the discussion. I am sure that pathological
>>>>>> self-reference is not something that would matter to most people, and
>>>>>> especially not to anyone who has worked on compilers.
>>>>>>
>>>>> {Assumptions that  cause Contradictions} or the analogous
>>>>> {Requirements that  cause Contradictions} would form one aspect of the
>>>>> specification of Pathological Self-Reference.
>>>> It's interesting that even though you have an idea that you are sure
>>>> is right (PSR) you still won't commit to what it is.  You'll state only
>>>> one aspect of it (and that only in the subjunctive!).
>>>>
>>>> But that's all old hat now.  By an extraordinary twist in the tale (well
>>>> done Patricia) agreement has been reached.
>>>>
>>>> <snip>
>>> Yet this is also the same thing as an ill-formed question.
>>> So the Halting Problem *is* based on an ill-formed question, and
>>> everyone that disagreed was wrong?
>> No.  I predicted elsewhere that you would not know why people were
>> agreeing with you, but that you'd go back to saying wrong things very
>> soon.
>>
> An ill-formed question is any question that lacks a correct answer
> from the set of all possible answers.

OK, that's a reasonable definition, though I've never needed it myself.

> Since {Assumptions that cause Contradictions} are analogous to
> {Requirements that cause Contradictions} in that an entity {Assumption
> or Requirement} causes a contradiction, therefore questions that have
> required answers that form contradictions are also an element of the
> set of {Entities that cause contradictions}.

Yes, that's why I was prepared to agree before.  The key (if you are
still confused about why I seem to agree sometimes and not others) is
the phrase "based on".

Mind you, I don't like the language; it's far too informal since there's
no causality in (ordinary) logic.  Sets of formulae are either
consistent or they are not, and when they are not, you can't tell which
one of them "caused" the inconsistency.  I don't mind the usage in
informal discussions, but I suspect this informality is one reason you
so often go from imprecise but acceptable statements to ones that are
flat-out wrong.  I think, for example, that you believe you've found out
exactly what it is that makes halting undecidable (and that it's
something you can feel OK about).

Go to the page I cited recently and you can see a formal proof of
halting undecidability written out for the Coq proof assistant.  There
you can see, in gruesome precision, all the axioms, assumptions and
definitions that are needed for the theorem to be provable.  Take any one
away, and the theorem is no longer provable.

It's an interesting piece of work (based, originally, on this proof[1])
because it does not prove the theorem for TMs.  Instead, it abstracts
out those aspects of computability that are required for halting
undecidability.

[1] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.98.2911&rep=rep1&type=pdf

-- 
Ben.
0
ben.usenet (6790)
9/1/2012 2:38:45 PM
On 9/1/2012 9:38 AM, Ben Bacarisse wrote:
> Peter Olcott <OCR4Screen> writes:
>
>> On 9/1/2012 8:20 AM, Ben Bacarisse wrote:
>>> Peter Olcott <OCR4Screen> writes:
>>>
>>>> On 9/1/2012 7:18 AM, Ben Bacarisse wrote:
>>>>> Peter Olcott <OCR4Screen> writes:
>>>>>
>>>>>> On 8/31/2012 2:41 PM, Patricia Shanahan wrote:
>>>>>>> On 8/31/2012 12:10 PM, Peter Olcott wrote:
>>>>>>> ...
>>>>>>>> Pathological Self-Reference is self-reference that prevents the
>>>>>>>> functional specification of the desired end-result from being achieved.
>>>>>>> This seems to end the discussion. I am sure that pathological
>>>>>>> self-reference is not something that would matter to most people, and
>>>>>>> especially not to anyone who has worked on compilers.
>>>>>>>
>>>>>> {Assumptions that  cause Contradictions} or the analogous
>>>>>> {Requirements that  cause Contradictions} would form one aspect of the
>>>>>> specification of Pathological Self-Reference.
>>>>> It's interesting that even though you have an idea that are sure
>>>>> is right (PSR) you still won't commit to what it is.  You'll state only