If a yes or no question does not have a correct yes or no answer, then there must be something wrong with the question.

More generally:
An ill-formed question is defined as any question that lacks a correct answer from the set of all possible answers.

The *only* reason that the self-reference form of the Halting Problem can not be solved is that neither of the two possible final states of any potential halt decider TM corresponds to whether or not its input TM will halt on its input.

In other words, for potential halt decider H and input M:
---------------------------
Not<ThereExists>
<ElementOfSet>
FinalStatesOf_H
<MathematicallyMapsTo>
Halts(M, H, input)

Where M is defined as
---------------------
M(String H, String input):
   if H(input, H, input) loop
   else halt

The only difference between asking a yes or no question and the invocation of a Turing Machine is a natural language interface.

Within a natural language interface the invocation of H(M, H, M) would be specified as:

"Does Turing Machine M halt on input of Turing Machine H and Turing Machine M?"

Within a natural language interface the answer to this question would be specified as "yes" or "no" and map to the final states of H of accept or reject.

So the only reason that the self-reference form of the Halting Problem can not be solved is that it is based on a yes or no question that lacks a correct yes or no answer, and thereby derives an ill-formed question.
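The M construction above can be sketched concretely. Here is a minimal Python sketch (function names are illustrative, not from the thread) of the standard diagonal argument: given any candidate decider H, build M so that M does the opposite of whatever H predicts about M itself. Whichever verdict H returns, it is wrong about M:

```python
def make_M(H):
    """Build the diagonal program M from the post: M consults the
    candidate halt decider H about M itself, then does the opposite."""
    def M():
        if H(M, M):        # H predicts "M halts on M" ...
            while True:    # ... so M loops forever,
                pass
        else:              # H predicts "M never halts" ...
            return         # ... so M halts immediately.
    return M

# A candidate decider that always answers "never halts" (False).
# By construction, the M built from it halts, so the verdict is wrong.
def H_always_no(program, arg):
    return False

M = make_M(H_always_no)
M()  # returns immediately, contradicting H_always_no's prediction
```

The symmetric case, a decider that always answers "halts", makes M loop forever, again falsifying the prediction. The construction is uniform in H, which is why no candidate decider escapes it.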


6/9/2012 1:43:58 AM

On 6/8/2012 9:04 PM, cplxphil wrote:
> On Jun 8, 9:43 pm, Peter Olcott<OCR4Screen> wrote:
>> <snip -- full text of the original post, quoted above>
> No one would say that instances of the halting problem lack a yes or
> no answer.

Yes, this point has been missed for many years.

Since the final states {accept, reject} of H form the entire solution set (every possible answer that H can provide), and these states have been shown to mathematically map to yes or no, the inability of a Turing Machine to solve the halting problem is merely the inability to correctly answer a yes or no question that has no correct yes or no answer.

> The issue, when one says "the halting problem 'cannot be solved'", is
> that there is no Turing machine that accurately decides the problem on
> every instance.
>
> So yes--every instance either has an answer of "yes" or "no"--but not
> every instance can be solved by a Turing machine.
>
> It's not ill-formed--your "questions" have answers, they are just
> hard/impossible to find.
>
> Do you think that the question, "Is the answer to this question no?"
> is ill-formed?

Yes.

> You've heard of Godel's Incompleteness Theorem, right? The Halting
> Problem is rather similar.

Yes, and Godel's Incompleteness Theorem errs for the same reason.


6/9/2012 3:17:44 AM

On 6/9/2012 5:58 PM, cplxphil wrote:
> On Jun 8, 11:17 pm, Peter Olcott<OCR4Screen> wrote:
>> On 6/8/2012 9:04 PM, cplxphil wrote:
>>> On Jun 8, 9:43 pm, Peter Olcott<OCR4Screen> wrote:
>>>> <snip -- original post, quoted in full earlier in the thread>
>>> No one would say that instances of the halting problem lack a yes or
>>> no answer.
>> Yes, this point has been missed for many years.
>>
>> Since the final states {accept, reject} of H form the entire solution
>> set (every possible answer that H can provide), and these states have
>> been shown to mathematically map to yes or no, the inability of a
>> Turing Machine to solve the halting problem is merely the inability
>> to correctly answer a yes or no question that has no correct yes or
>> no answer.
> Perhaps you've been over this before in your lengthy discussions, but
> let me see if I have this straight.
>
> You are saying that the question, "Does machine M halt on input I?"
> may, for certain M and I, be impossible to answer either yes or no?
>
> If it doesn't either halt or not halt on input I, what exactly does it
> do?

The input to H either halts or does not halt, depending upon how H is defined. The Halting Problem is based on an ill-formed question because neither answer, out of every possible answer that H can provide (by transitioning to its own accept or reject state), mathematically maps to whether or not the input to H halts on its input.

> Also, before this continues too long: It sounds like you are quite
> confident that you're right about this. I am quite confident that you
> are not. What would it take to convince you that you are wrong?

I will carefully examine every line of reasoning that attempts to show that my reasoning is incorrect. I will point out any errors that I find.

> For my part, I will be satisfied and agree that Turing's result is
> somehow wrong if you can implement an algorithm that solves the
> halting problem. If you are going to say that the Halting Problem is
> ill-formed, then in order for me to agree with this, I would need to
> see an example of a machine that both fails to halt and fails to not
> halt. (Good luck.)


6/10/2012 3:32:41 AM

On 6/9/2012 8:39 PM, Joshua Cranmer wrote:
> On 6/9/2012 6:58 PM, cplxphil wrote:
>> Perhaps you've been over this before in your lengthy discussions, but
>> let me see if I have this straight.
>>
>> You are saying that the question, "Does machine M halt on input I?"
>> may, for certain M and I, be impossible to answer either yes or no?
>
> I think Peter believes that this is not the proper way to phrase the
> question really being asked. Although, when he was pushed into a
> corner, I think there was a tacit admission that the answer of such a
> question depends on who you are asking it of.

When the question is asked of the entire set of every potential halt decider, and every element of this set is limited to providing its answer by transitioning to one of its own final states of {accept, reject}, then this derives a yes or no question that lacks a correct yes or no answer and is thereby ill-formed.

Not<ThereExists>
<ElementOfSet>
FinalStatesOf_H
<MathematicallyMapsTo>
Halts(M, H, input)

Where M is defined as
---------------------
M(String H, String input):
   if H(input, H, input) loop
   else halt

>> Also, before this continues too long: It sounds like you are quite
>> confident that you're right about this. I am quite confident that you
>> are not. What would it take to convince you that you are wrong?
>
> Again, I think the answer is that he has no issue with the proof, per
> se. His umbrage is with the interpretation of the result: he believes
> that it is possible to make a Halt decider that is "essentially"
> correct but fails in some cases which are "necessary" (use of
> quotation marks to indicate that the terms contained within are
> slippery and ill-defined).
> Lots of people have attempted to illustrate several alternate
> derivations to show that the incorrectness is not so well-contained,
> but they have all been ignored because either:
> a) it happens to fall under the "necessary" failures,
> b) it relies on a similar "incorrect" result (the uncountability of
> real numbers and Godel's incompleteness theorem have also been
> explicitly cited as incorrect proofs due to being similar [1]), or
> c) he doesn't understand it, so it has to be either case a or case b
> because he is OBVIOUSLY right.
>
> Trying to come up with a simple, alternate proof of the Halting
> problem that is understandable and skirts any other proofs that have
> anything smacking of diagonalization or self-reference is indeed hard,
> but I doubt he'd accept anything less. I also doubt that he'd accept
> even that much, though...
>
> [1] This just makes it seem to me that he has a really hard time
> accepting that a proof by contradiction is indeed a valid proof.


6/10/2012 3:43:02 AM

On 6/10/2012 7:26 PM, cplxphil wrote:
> On Jun 10, 10:21 am, "Peter Olcott"<NoS...@OCR4Screen.com> wrote:
>> "Joshua Cranmer"<Pidgeo...@verizon.invalid> wrote in message
>> news:jr23h2$hil$1@dont-email.me...
>>> On 6/9/2012 11:32 PM, Peter Olcott wrote:
>>>> On 6/9/2012 5:58 PM, cplxphil wrote:
>>>>> Also, before this continues too long: It sounds like you are quite
>>>>> confident that you're right about this. I am quite confident that
>>>>> you are not. What would it take to convince you that you are wrong?
>>>> I will carefully examine every line-of-reasoning that attempts to
>>>> show that my reasoning is incorrect. I will point out any errors
>>>> that I find.
>>> So, in other words, it is impossible to convince you that you are
>>> wrong.
>> It would be impossible to convince me that I am wrong if I am *not*
>> wrong, otherwise it is possible.
>> So far I have not seen anything resembling sound reasoning that
>> correctly refutes my true position.
>> I have made very many errors in presenting this position, and from
>> what I can tell most of these have been corrected.
>>
>> Since my goal is to provide this reasoning using English that is 100%
>> completely mathematically precisely correct, George may have pointed
>> out a recent error. It may have been incorrect for me to use the term
>> "potential halt decider".
>> When I used this term I was referring to any Turing Machine that
>> attempts to be a halt decider, even though none could ever actually
>> achieve this. Since none could ever actually achieve this, no actual
>> potential exists.
>>
>> Here is a possibly better statement of my position:
>>
>> The reason why the self-reference form of the Halting Problem can not
>> be solved is:
>> 1) The invocation of every Turing Machine that attempts to be a halt
>> decider mathematically maps to a yes or no question (within a natural
>> language interface).
>>
>> 2) The yes or no answer to this question that this set of Turing
>> Machines can possibly provide (within a natural language interface)
>> mathematically maps to its own final states of accept and reject,
>> thus deriving the entire solution set of every possible answer.
>>
>> 3) Neither of these yes or no (accept or reject) answers is correct
>> (mathematically maps to whether or not the input TM will halt on its
>> input).
>>
>> 4) Therefore the reason that the self-reference form of the Halting
>> Problem can not be solved is that this problem is based on providing
>> a yes or no answer to a question that has no correct yes or no answer.
>>
>> Any attempted refutation should provide reasoning that refutes the
>> above points individually. Stating that the above reasoning is
>> nonsense merely indicates a failure of the respondent to comprehend
>> and nothing more.
>>
>>> --
>>> Beware of bugs in the above code; I have only proved it correct, not
>>> tried it. -- Donald E. Knuth
> I think I see what you're getting at.
>
> Let's see if I can translate (into terms I'm comfortable with) and
> then refute your 4 points:
>
> 1> A Turing machine that could solve the halting problem must map to
> a yes or no question.

That would be an incorrect paraphrase. I think that I am beginning to see why people are not understanding me. The software engineer and the mathematician approach the problem in opposite orders. Software engineers start with a problem to be solved, from this devise a specification, and then translate this specification into an implementation. From this point of view, deriving a Turing Machine that solves the Halting Problem derives an ill-formed question. Mathematicians examine the set of all finite-length string implementations (skipping the first two steps) and find that none of these derive a halt decider; thus the mathematician never asks the ill-formed question.
> 2> Such a Turing machine can only ever accept or reject.
> 3> Neither answer to a "yes or no" halting question is correct.
> (justification?)
> 4> The halting problem has no answer.
>
> Although I don't understand #3, the problem is with #1. You suggest,
> "a halt decider...maps to a yes or no question." Yes, but only if it
> works! Every "potential halt decider" we can develop *fails to halt*
> when given a sufficiently problematic instance of the halting
> problem. You are essentially starting out your argument by assuming
> that the halting problem can be decided properly, which it cannot.
>
> You are saying that every Turing machine must, via a natural language
> interface (which--by the way--technically has no place in a
> mathematical argument), map to a natural language question. Wrong!
> The mathematical *question* that you discuss must map to a yes or no
> question, but a Turing machine is a computer program and a
> mathematical object--and thus is not governed by your beliefs about
> natural language.
>
> The irony of your argument is that a modification of it actually
> suggests that Turing's result is correct. In fact, if Turing's result
> were wrong, you would be correct. Logically speaking, if you could
> establish something as logically contradictory as the actual existence
> of a "potential halt decider," you would be able to prove anything--
> even that the moon is made of cheese.
>
> Moons made of cheese are neither here nor there, but in summary, your
> fallacy is the claim that the *Turing machine* must map to a yes or no
> question. A Turing machine is free to do whatever it wants, including
> never halt, even if you would like it to be a potential halt decider.
>
> Ironically, your step #1 represents a contradiction that Turing
> exploits in his proof. You are assuming that the halt decider exists,
> and that it maps to a yes or no question. That is precisely what
> Turing assumes--and using that false assumption and some self-
> reference tricks, he produces precisely the result that you claim
> doesn't make sense.
>
> I've been a bit repetitive above, mainly for emphasis (and out of
> laziness), but I hope I've made my point. One last time: A Turing
> machine does not map to a yes or no question, it maps to "yes, no,
> never halt."
>
> I have a sinking feeling you won't be convinced....
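cplxphil's "yes, no, never halt" point can be made concrete: any machine we can actually build behaves as a *partial* decider. Below is a minimal Python sketch (all names illustrative) that steps a program for a bounded number of steps and answers either "halts" or "don't know"; the third behavior, never halting, is precisely the one no finite budget can soundly report:

```python
def bounded_halts(program, max_steps=1000):
    """A partial halt checker: step `program` (a generator) at most
    max_steps times.  Returns True if it finished within the budget,
    or None ("don't know") if the budget ran out.  It can never
    soundly answer "never halts"."""
    gen = program()
    for _ in range(max_steps):
        try:
            next(gen)
        except StopIteration:
            return True          # the program halted
    return None                  # budget exhausted: undecided

def halts_quickly():
    yield                        # one step, then halt

def loops_forever():
    while True:
        yield
```

No matter how large the budget, the `None` verdict never becomes a sound "never halts"; turning it into one is exactly what a total halt decider would have to do, and what the diagonal argument rules out.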


6/12/2012 1:04:39 AM

On 6/10/2012 7:26 PM, cplxphil wrote:
> On Jun 10, 10:21 am, "Peter Olcott"<NoS...@OCR4Screen.com> wrote:
>> <snip -- same exchange as quoted in the previous reply>
>> Here is a possibly better statement of my position:
>>
>> The reason why the self-reference form of the Halting Problem can not
>> be solved is:
>> 1) The invocation of every Turing Machine that attempts to be a halt
>> decider mathematically maps to a yes or no question (within a natural
>> language interface).
>>
>> 2) The yes or no answer to this question that this set of Turing
>> Machines can possibly provide (within a natural language interface)
>> mathematically maps to its own final states of accept and reject,
>> thus deriving the entire solution set of every possible answer.
>>
>> 3) Neither of these yes or no (accept or reject) answers is correct
>> (mathematically maps to whether or not the input TM will halt on its
>> input).
>>
>> 4) Therefore the reason that the self-reference form of the Halting
>> Problem can not be solved is that this problem is based on providing
>> a yes or no answer to a question that has no correct yes or no answer.
>>
>> <snip>
> I think I see what you're getting at.
>
> Let's see if I can translate (into terms I'm comfortable with) and
> then refute your 4 points:
>
> 1> A Turing machine that could solve the halting problem must map to
> a yes or no question.

It is a yes or no question whether the TM can solve it or not, and the *only* reason that a TM can not solve it is that it is an incorrect yes or no question. The mathematical mapping is not that hard; I can not see why most people are not getting it.

X = 7 + 5; mathematically maps to: "Assign the value of the sum of seven plus five to the integer variable named X." The math representation and the English representation have identical semantic meanings.

In the self-reference form of the Halting Problem (more literally, though too clumsy to say, the self-reference form of the reason why the Halting Problem can not be solved), the Turing Machine is asked the question: "Does your input TM halt on its input?"
From the software engineering point of view, this is its question. From the software engineering perspective, it is required to answer the question of its design specification. From a mathematician's point of view, the design specification would be missing; instead, one would examine the set of all finite-length strings and find that none meet the intended goal.

> 2> Such a Turing machine can only ever accept or reject.
> 3> Neither answer to a "yes or no" halting question is correct.
> (justification?)
> 4> The halting problem has no answer.
>
> <snip -- the remainder of cplxphil's reply, quoted in full in the
> previous exchange>


6/13/2012 1:43:02 AM

On 6/11/2012 9:29 PM, cplxphil wrote:
> Argh...you say that my paraphrase is incorrect, but then don't try to
> supply examples of a better one!

The best one was the original one:

"Does Turing Machine M halt on input of Turing Machine H and Turing Machine M?"

Where M is defined as
---------------------
M(String H, String input):
   if H(input, H, input) loop
   else halt

> If your prose were precise and clear, it would be reasonable to just
> say "oh that's wrong," but given that it clearly isn't, why not take a
> little time to improve your exposition? You are not uncivil, but you
> come across as a bit arrogant. I say this in the spirit of trying to
> be constructive, not an effort to be vindictive or criticize you in an
> ad hominem fashion.
>
> Why not work on some clearer definitions? If you have the most
> brilliant philosophical point in the world, but can't make it
> clearly, you might as well be speaking Portuguese to a room full of
> non-speakers. I suspect that if you have something of value to say,
> it's disguised in your exposition to the point that you are wasting
> your time trying to get anyone to understand it.
>
> If you are going to keep posting to the same group over and over with
> the same point, I for one won't try to stop you. But you would likely
> make more progress if you clarified your definitions, phrased your
> points as questions rather than groundbreaking discoveries, or sought
> guidance from someone who understands your argument on how to make it
> clearer.


6/13/2012 1:47:01 AM

On 6/12/2012 3:46 PM, Patricia Shanahan wrote:
> On 6/12/2012 9:13 AM, Leif Roar Moldskred wrote:
>> In comp.theory Graham Cooper<grahamcooper7@gmail.com> wrote:
>>> If your Question is DOES THE ALGORITHM WORK ON ALL INPUT PAIRS?
>>>
>>> then what would you call that question considering there are cases
>>> it gets wrong?
>>
>> "Answered."
>
> This seems like the obvious, and obviously correct, answer.
>
> Even a single input pair for which the algorithm gets a wrong answer
> or fails to terminate is sufficient to prove the answer is "No."

H is asked the question: "Does your input TM halt on its input?" The *only* possible answers to the question posed to the TM are its own two final states; all other answers of every kind are not within the set of possible answers.

> There is one relevant difference between my mathematician and my
> software engineer opinions. With my mathematician hat on, I am
> prepared to consider the possibility of a proof that an algorithm
> exists that does not involve exhibiting the algorithm and proving it
> correct. As a software engineer, I would have no use for a proof that
> an algorithm exists unless it tells me what the algorithm is, and why
> the algorithm always works.
>
> Patricia


6/13/2012 1:57:35 AM

On 6/15/2012 8:12 PM, George Greene wrote:
> On Jun 15, 7:26 am, "Peter Olcott"<NoS...@OCR4Screen.com> wrote:
>> If one examines the set of finite length strings
> This is almost meaningless. To the extent that anyone understands
> what it means, she understands that ONE DOES NOT (*ever*) do THIS
> because "the set of finite length strings" IS AN INFINITE SET.
> THEREFORE, AT NO TIME have the finite number of human beings EVER
> gotten around to "examining" ALL of it, NOR WILL WE EVER.

Mindless automaton stuck in refute mode.


6/16/2012 2:11:12 AM

Stick with constructivist methods, and discussions like yours vanish like gorillas in the mist.

On 6/16/2012 12:50 AM, George Greene wrote:
> On Jun 15, 10:11 pm, Peter Olcott<OCR4Screen> wrote:
>> Mindless automaton stuck in refute mode.
> Mindless automaton not even addressing the actual issue, which was
> about the fact that the question being asked depends on the
> programming of the interpreter, not the string being interpreted.

You are a mindless automaton stuck in refute mode! If a TM had all of the knowledge in the world encoded within it, then the question "Does this TM instance halt on its input instance?" depends upon:
(a) This string: "Does this TM instance halt on its input instance?"
(b) The input TM instance.
(c) The input TM instance's input instance.


6/16/2012 12:05:53 PM

On 6/16/2012 3:00 AM, Graham Cooper wrote:
>>>> If you rebuke all terminology, rebuke a halt program because a
>>>> program might check whether itself halts, rebuke any argument
>>> Is your native language English? You do not know the meaning of the
>>> verb "rebuke".
> So tell us George, do you have ANY IDEA what we mean by
>
> THE_SELF_REFERENCE aspect of the Halting Proof?

I call it Pathological Self-Reference because it derives an ill-formed question. An ill-formed question is defined as any question that lacks a correct answer from the set of all possible answers.

> before we are detoured any further?
>
> Herc
> --
>
> P: If Halts(P) Then Loop Else Halt.
> is obviously a paradoxical program if Halts() exists.
> BUT IF IT WEREN'T NAMED P then it might not be:
>
> Q: If Halts(P) Then Loop Else Halt.
> is NOT paradoxical.
> ~ GEORGE GREEN (sci.logic)
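The P-versus-Q point in the signature can be sketched: inverting a halt decider's verdict about some *other*, fixed program produces no paradox; the contradiction requires the program to invert the verdict about itself. A toy Python sketch (H, P, and Q are illustrative stand-ins, not real deciders):

```python
def make_inverter(H, target):
    """Build a program that does the opposite of whatever H predicts
    about `target`.  The paradox needs target to be the new program
    itself; inverting H's verdict about a different program is fine."""
    def prog():
        if H(target):          # H says target halts ...
            while True:        # ... so prog loops forever
                pass
        return                 # H says target loops, so prog halts
    return prog

def P():                       # a fixed program that plainly halts
    return

H = lambda prog: prog is P     # toy "decider": answers "halts" only for P

Q = make_inverter(H, P)        # George's Q: inverts H's verdict about P
# H(P) is True, so Q loops forever.  H(Q) is False ("Q never halts"),
# which happens to be correct here -- no contradiction arises.
```

Passing Q itself as the target restores the self-reference, and with it the guaranteed wrong verdict about the constructed program.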


6/16/2012 12:08:30 PM

On 6/16/2012 6:31 AM, Ben Bacarisse wrote:
> Graham Cooper<grahamcooper7@gmail.com> writes:
> <snip>
>> How can an algorithm be "impossible" when even
>> the ORACLE VERSION of the Halt(p1) Program,
>> that randomly guesses the right halt() value,
>> can be twisted into a paradox?
>>
>> If an Oracle-Halt() gave the right answer,
>> then LOOP IF O-HALTS()
> A TM with a halting oracle isn't a TM, so you can't apply the "usual"
> proof. You can't construct a TM that uses (in your terminology)
> O-HALTS().

That does not matter. The question behind the Pathological Self-Reference form of the Halting Problem would be unsolvable even for an all-knowing mind, because this question is ill-formed.

> You can extend the model of computation, beyond TMs, to include oracle
> machines. If TM_OH is the set of TMs with a halting oracle, you can
> show that there is no M in TM_OH that decides halting for that set of
> machines, but you can now posit an oracle for this new decidability
> problem. You get an oracle for deciding halting of machines in TM_OH,
> but, again, no contradiction can be derived because this new machine
> is not in TM_OH.
>
> There is an obvious "and so on".
>
>> How can you DESIGN a PROGRAM to check for INFINITE LOOPS,
>> then get a PROGRAM to CHECK ITSELF for an INFINITE LOOP
>> and do the opposite?
> Exactly -- you can't.
>
>> And call that - IMPOSSIBLE TO DETECT INFINITE LOOPS?
> There are only two choices: either the construction of the derived
> machine is impossible, or the machine to decide halting is impossible.
> Since the construction is trivial, what are you left with?
>
> <snip>


6/16/2012 12:11:25 PM

On 6/16/2012 8:14 AM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message
> news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com...
> <snip>
>
>> The question behind the Pathological Self Reference form of the
>> Halting Problem would be unsolvable even for an all-knowing mind
>> because this question is ill-formed.
>
> Sorry, maybe I have missed the critical posts, but which question is
> this? Could you please state it?
>
> I know the usual one: "Is there a TM that can tell the halting
> behavior of any TM-input pair?", and the answer is (provably) no.
> Then one might question the proof, but the question seems valid.
>
> -LV

"Does the input TM instance halt on its input?"


6/16/2012 2:47:08 PM

On 6/16/2012 10:20 AM, LudovicoVan wrote:
> "Peter Olcott" <OCR4Screen> wrote in message
> news:i_SdnUZNm_rxAUHSnZ2dnUVZ_qudnZ2d@giganews.com...
>> On 6/16/2012 8:14 AM, LudovicoVan wrote:
>>> <snip>
>>> Sorry, maybe I have missed the critical posts, but which question is
>>> this? Could you please state it?
>>>
>>> I know the usual one: "Is there a TM that can tell the halting
>>> behavior of any TM-input pair?", and the answer is (provably) no.
>>> Then one might question the proof, but the question seems valid.
>>
>> "Does the input TM instance halt on its input?"
>
> That would be a sub-question,

It is the core question behind the reason why the Pathological Self-Reference form of the Halting Problem can not be solved.

> where you are constraining the input to be the given TM's code, so
> implicitly assuming that you have a coding for the TMs (which is
> something not necessarily implied by the original, more general
> question).

When one takes into account AI knowledge representation, this TM could have encoded within it the sum total of everything known to date about everything and anything, along with reasoning at least equal to the best human in each field of inquiry. Taking this example to its logical extreme, the question then becomes: could any all-knowing mind solve the Halting Problem?

> Spelled out, this would be the question: "Is there a TM that can tell
> the halting behavior of any TM-input pair, where input is equal to the
> code of the given TM itself?"
I still cannot see anything invalid, > not even with this (sub-)question: for instance, one could conceive a > TM that does negation of its input: when the input corresponds to the > code of this TM itself, nothing "invalid" happens. As another > example, the putative universal halt decider, even if assumed to be a > TM, should have no problems in telling its own halting behavior: in > fact, by construction hypothesis, it always halts. > > That said, I still have to see a working and convincing proof (except > maybe for the ruling out of impredicativity in the second-order proof; > my fault for not having a final answer on this issue), but, as said, > that would be an objection to the proof, not to the question (not to > the original one and, most probably, not even to the sub-question). > > -LV > >


6/16/2012 3:55:43 PM

On 6/16/2012 1:43 PM, LudovicoVan wrote: > "Peter Olcott" <OCR4Screen> wrote in message > news:JZ-dneL4K_TiMUHSnZ2dnUVZ_v6dnZ2d@giganews.com... >> On 6/16/2012 10:20 AM, LudovicoVan wrote: >>> "Peter Olcott" <OCR4Screen> wrote in message >>> news:i_SdnUZNm_rxAUHSnZ2dnUVZ_qudnZ2d@giganews.com... >>>> On 6/16/2012 8:14 AM, LudovicoVan wrote: >>>>> "Peter Olcott" <OCR4Screen> wrote in message >>>>> news:eqidnbIl7qhw6kHSnZ2dnUVZ_o6dnZ2d@giganews.com... >>>>> <snip> >>>>> >>>>>> The question behind the Pathological Self Reference form of the >>>>>> Halting Problem would be unsolvable even for an all-knowing mind >>>>>> because this question is ill-formed. >>>>> >>>>> Sorry, maybe I have missed the critical posts, but which question >>>>> is this? Could you please state it? >>>>> >>>>> I know the usual one: "Is there a TM that can tell the halting >>>>> behavior of any TM-input pair?", and the answer is (provably) no. >>>>> Then one might question the proof, but the question seems valid. >>>> >>>> "Does the input TM instance halt on its input?" >>> >>> That would be a sub-question, >> >> It is the core question behind the reason why the Pathological >> Self-Reference form of the Halting Problem can not be solved. > > But you now move to yet another different question. No it has always been the same question for the several thousand times that I have stated it. > >>> where you are constraining the input to be the given TM's code, so >>> implicitly assuming that you have a coding for the TMs (which is >>> something not necessarily implied by the original, more general >>> question). >> >> When one takes into account AI knowledge representation, then this TM >> could have encoded within it the sum total of everything known to >> date about everything and anything, along with reasoning at least >> equal to the best human in each field of inquiry. Taking this example >> to its logical extreme, the question then becomes: >> Could any all-knowing mind solve the Halting Problem? 
> > I do not see the "continuity" from the original question: never mind, > in this case, by definition, the answer would be "yes". Of course, it > isn't about a TM and not even about human minds. OTOH, I still > cannot see how the original question (the "halting problem") should be > ill-formed. > > -LV > If no element of the set of Turing Machines can solve the self reference form of the Halting Problem, then the element of this set that would derive a mind that knows everything currently known would also not be able to solve the HP. Since this element would necessarily also know the syntax and Semantics of every human language that ever existed, it could be asked this question in English: "Will this TM instance halt on this input instance?" If this TM can not answer this question then this question must necessarily be ill-formed.


6/16/2012 7:06:59 PM

On 6/16/2012 2:15 PM, LudovicoVan wrote: > "Peter Olcott" <OCR4Screen> wrote in message > news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com... > <snip> > >> If this TM can not answer this question then this question must >> necessarily be ill-formed. > > It cannot be a TM, while the question can be and has been answered. I > cannot follow your line of reasoning, in fact I cannot see any line of > reasoning. > > -LV > > Can a TM compute anything computable? Can anything computable include knowledge? Can knowledge include knowledge of the syntax and semantics of human languages?


6/16/2012 8:03:33 PM

On 6/16/2012 5:08 PM, LudovicoVan wrote: > "Peter Olcott" <OCR4Screen> wrote in message > news:jrednTblnqYLe0HSnZ2dnUVZ5oKdnZ2d@giganews.com... >> On 6/16/2012 2:15 PM, LudovicoVan wrote: >>> "Peter Olcott" <OCR4Screen> wrote in message >>> news:taWdnZf2gKbJREHSnZ2dnUVZ5g2dnZ2d@giganews.com... >>> <snip> >>> >>>> If this TM can not answer this question then this question must >>>> necessarily be ill-formed. >>> >>> It cannot be a TM, while the question can be and has been answered. >>> I cannot follow your line of reasoning, in fact I cannot see any >>> line of reasoning. >> >> Can a TM compute anything computable? > > Yes, an Universal Turing Machine can do so. (For all we know, > "anything computable" does not include the halting problem.) > >> Can anything computable include knowledge? > > No: on the contrary, you could say that knowledge includes things > computable (so that these are part of what can be known, they do not > include or exhaust it). > >> Can knowledge include knowledge of the syntax and semantics of human >> languages? > > Of course. > > Strictly speaking, a TM has no "knowledge", except maybe for the > knowledge underlying the algorithm it implements, which ultimately > remains *our* knowledge. > > -LV > > That statement would not be true for every possible future technological innovation within computer science. http://en.wikipedia.org/wiki/Knowledge_representation_and_reasoning A computing machine 10,000 years from now could learn new things by reading about them or watching them on TV, or even simply figuring them out for itself.


6/17/2012 3:40:39 AM

On 6/17/2012 7:44 AM, LudovicoVan wrote: > "Peter Olcott" <OCR4Screen> wrote in message > news:MtGdnd3-rYMqzEDSnZ2dnUVZ_qednZ2d@giganews.com... >> On 6/16/2012 5:08 PM, LudovicoVan wrote: > <snip> > >>> Strictly speaking, a TM has no "knowledge", except maybe for the >>> knowledge underlying the algorithm it implements, which ultimately >>> remains *our* knowledge. >> >> That statement would not be true for every possible future >> technological innovation within computer science. > > No, it is true, strictly true! The day machine will be intelligent, > that day machines will not be machines as we now conceive them: IOW, a > *paradigm shift* is needed, without which machines getting intelligent > (in 1 or 10K years, by plan or by accident: whatever) is only sci-fi. > > -LV > > So as soon as a Turing Computable algorithm acquires sufficient capability to exceed the human mind, even though it is Turing Computable, we arbitrarily and capriciously by fiat declare that this is no longer a Turing Machine?


6/17/2012 4:57:52 PM

On 6/17/2012 4:37 PM, George Greene wrote: > On Jun 17, 3:19 pm, "Peter Olcott"<NoS...@OCR4Screen.com> wrote: >> The problem with the Halting Problem is that it that because of the >> Church/Turing thesis it must apply > You DON'T UNDERSTAND Church's Thesis. > If you did, you would know that IT IS NOT RELEVANT to this problem! > >> to every possible future computing technology > NO, IT DOESN'T!!!! DAMN, YOU ARE *STUPID*!!! > The halting problem is the halting problem OF TMs, BY TMs, and FOR > TMs! > IT *ABSOLUTELY* IS *NOT* ABOUT " the human mind " or "all human > knowledge " Also, don't forget that this never happened: http://www-03.ibm.com/innovation/us/watson/index.html Also the Church/Turing thesis does not apply to how Watson was implemented, because Watson is *not* computable. More on, More on More on... > OR ANY OTHER technology!!!!!!!!!!!! > EXCEPT TMs!! > !!!!!!!!!DAMN!!!!!!!!!!!!!!!!!!!!!!


6/18/2012 2:29:32 AM

On 6/18/2012 4:29 AM, Tom wrote: > Peter Olcott wrote: >> If a yes or no question does not have a correct yes or no answer then >> there must be something wrong with this question. >> >> More generally: >> An ill-formed question is defined as any question that lacks a correct >> answer from the set of all possible answers. >> >> The *only* reason that the self reference form of the Halting Problem >> can not be solved is that neither of the two possible final states of >> any potential halt decider TM corresponds to whether or not its input TM >> will halt on its input. >> >> In other words for potential >> halt decider H and input M: >> --------------------------- >> Not<ThereExists> >> <ElementOfSet> >> FinalStatesOf_H >> <MathematicallyMapsTo> >> Halts(M, H, input) >> >> Where M is defined as >> --------------------- >> M(String H, String input): >> if H(input, H, input) loop >> else halt >> >> The only difference between asking a yes or no question and the >> invocation of a Turing Machine is a natural language interface. >> >> Within a natural language interface the invocation of H(M, H, M) would >> be specified as: >> >> “Does Turing Machine M halt on input of Turing Machine H >> and Turing Machine M?” >> >> Within a natural language interface the answer to this question would be >> specified as “yes” or “no” and map to the final states of H of accept or >> reject. >> >> So the only reason that the self reference form of the Halting Problem >> can not be solved is that it is based on a yes or no question that lacks >> a correct yes or no answer, and thereby derives an ill-formed question. > Peter, I really know nothing about the Entscheidungsproblem. It is out > of my league (I am below all leagues), but I've been thinking. > > If the problem is that the thing is just ill-formed, why not re-form > it to well-form it?
> > http://www.mcescher.net/Escher/dragon.jpg > > We can't do that because it'd amount to reforming all of math, which > might then end up not being called math at all? With permission, what > would you call it? > > Tom I would call it Pete, for Pete's sake.


6/18/2012 12:35:17 PM

On 6/18/2012 7:49 AM, Ralph Hartley wrote: > On 06/16/2012 03:06 PM, Peter Olcott wrote: >> If no element of the set of Turing Machines can solve the self reference >> form of the Halting Problem, then the element of this set that would >> derive a mind that knows everything currently known would also not be >> able to solve the HP. Since this element would necessarily also know the >> syntax and Semantics of every human language that ever existed, it could >> be asked this question in English: "Will this TM instance halt on this >> input instance?" >> >> If this TM can not answer this question then this question must >> necessarily be ill-formed. > > Huh? There are *lots* of well-formed questions that can't be answered > by "a mind that knows everything currently known", simply because it > is not currently known how to answer them. > > Similarly, there are well formed questions that can't be answered by a > mind that knows everything that *will* ever be known, because many > things will never be known (there are infinitely many things to know > and the universe is finite). > > What about a TM that knows everything that can *possibly* be known? Is > that even possible? > > No, because it would have to be able to answer all instances of "Will > this TM instance halt on this input instance?", which are all well > formed questions, your insistence to the contrary notwithstanding, and > no TM can correctly answer them all. > > Ralph Hartley Few here have any experience at all with "formal semantics" (treating human language as a mathematical formalism). Because of this they lack sufficient basis to determine whether or not a statement or a question is semantically well-formed or ill-formed. Lacking this sufficient basis and making claims one way or the other necessarily can only derive an argument from ignorance. 
The Specification of a TM to solve the Halting Problem: A Turing Machine that can take as input the specification of another Turing Machine and the input to this other Turing Machine, and determine in all cases whether or not this other Turing Machine would halt on its input, and report this as one of its two final halting states of accept or reject. To make my point clear we augment the TM with the capability of processing natural language such as English. This Natural Language input would be specified on the conventional Turing Machine's tape. Now we can pose this question to elements of the set of Turing Machines: Which of your two final states of accept or reject corresponds to your input TM halting on its input? Since (in some cases) neither element forms the correct answer to this question and these two elements derive the entire solution set to the above question, therefore this question (in some cases) matches the specification of an ill-formed question: Any question lacking a correct answer from the set of possible answers.
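For readers who want the construction under dispute made concrete, here is a minimal sketch (my own illustration, not from the thread; Python functions stand in for encoded Turing Machines, and `candidate_h` is a hypothetical toy decider) of the thread's diagonal machine "M(String H, String input): if H(input, H, input) loop else halt":

```python
# Hypothetical sketch: Python functions stand in for Turing Machines.
# candidate_h is a toy "halt decider"; make_m builds the diagonal
# machine M that does the opposite of whatever h predicts about it.

def candidate_h(prog, arg):
    """Toy decider (assumption, not a real halt decider):
    it simply claims that every program halts on every input."""
    return True  # final state "accept" = predicts halting

def make_m(h):
    """Build M from a candidate decider h, as in the thread."""
    def m(arg):
        if h(m, arg):       # h predicts m halts on arg...
            while True:     # ...so m loops forever instead
                pass
        return "halted"     # h predicts looping, so m halts
    return m

m = make_m(candidate_h)
# H's verdict on the diagonal input (M, M):
print(candidate_h(m, m))    # True ("halts")
```

Note that actually calling `m(m)` here really would loop forever, which is why the sketch only prints H's prediction; the same inversion defeats any other candidate `h` substituted for the toy one.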


6/18/2012 2:07:39 PM

On 6/18/2012 6:57 PM, George Greene wrote: > On Jun 18, 10:07 am, Peter Olcott <OCR4Screen> wrote: >> Few here have any experience at all with "formal semantics" (treating >> human language as a mathematical formalism). > And You Do?? MBWAHAHAHHAHHAAAA!!! > >> Because of this they lack sufficient basis to determine whether or not a statement or a question >> is semantically well-formed or ill-formed. > So far, you have ONLY presented YOUR OWN definitions of "semantically > ill-formed", > and they are semantically ill-formed. If you want some respect for > knowing more about > "formal semantics" THAN I know, then PRESENT a definition FROM SOME > KNOWN ACADEMIC > SOURCE. > > >> Lacking this sufficient basis > Exactly, you lack the basis of a formal semantics textbook or any > other COHERENT treatment of anything you are trying to say. The Formal semantics of natural languages. > But WE ARE TALKING ABOUT TMs, so what YOU call "formal semantics" IS > NOT RELEVANT IN ANY CASE. That the reason why the self reference Halting Problem can not be solved can be correctly translated into an ill-formed English question posed to the potential Halt Decider: Which of your final states can you return to indicate Halts(TM, input)? http://en.wikipedia.org/wiki/Saul_Kripke Here is another guy that agrees with me on the Liar paradox: neither "This sentence is true" nor "This sentence is not true" receive truth-conditions; they are, in Kripke's terms, "ungrounded." > You do not yet know ENOUGH ABOUT TMs to BE ABLE to discuss their > formal semantics. In particular, you don't > yet understand the way in which invoking a TM on an input string is > analogous to asking a question.


6/21/2012 2:29:33 AM

On 6/8/2012 8:43 PM, Peter Olcott wrote: > If a yes or no question does not have a correct yes or no answer then > there must be something wrong with this question. > > More generally: > An ill-formed question is defined as any question that lacks a correct > answer from the set of all possible answers. > > The *only* reason that the self reference form of the Halting Problem > can not be solved is that neither of the two possible final states of > any potential halt decider TM corresponds to whether or not its input > TM will halt on its input. > > In other words for potential > halt decider H and input M: > --------------------------- > Not<ThereExists> > <ElementOfSet> > FinalStatesOf_H > <MathematicallyMapsTo> > Halts(M, H, input) > > Where M is defined as > --------------------- > M(String H, String input): > if H(input, H, input) loop > else halt > > The only difference between asking a yes or no question and the > invocation of a Turing Machine is a natural language interface. > > Within a natural language interface the invocation of H(M, H, M) would > be specified as: > > “Does Turing Machine M halt on input of Turing Machine H > and Turing Machine M?” > > Within a natural language interface the answer to this question would > be specified as “yes” or “no” and map to the final states of H of > accept or reject. > > So the only reason that the self reference form of the Halting Problem > can not be solved is that it is based on a yes or no question that > lacks a correct yes or no answer, and thereby derives an ill-formed > question. > The only reason that no one here has agreed with the above is that no one here has a sufficient understanding of the formal semantics of linguistics, what I have referred to as the mathematics of the meaning of words.
http://en.wikipedia.org/wiki/Formal_semantics_(linguistics) When one is sufficiently precise in dealing with the meaning of words, then (then and only then) one can see that the Halting Problem is constructed entirely on the basis of an ill-formed question.


8/5/2012 1:28:41 PM

On 6/9/2012 9:41 AM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> If a yes or no question does not have a correct yes or no answer then >> there must be something wrong with this question. > I was just thinking "surely there's not point reading this; can there be > anything new?" when low-and-behold it contains a *new* error: > > <snip> >> In other words for potential >> halt decider H and input M: >> --------------------------- >> Not<ThereExists> >> <ElementOfSet> >> FinalStatesOf_H >> <MathematicallyMapsTo> >> Halts(M, H, input) >> >> Where M is defined as >> --------------------- >> M(String H, String input): >> if H(input, H, input) loop >> else halt >> >> The only difference between asking a yes or no question and the >> invocation of a Turing Machine is a natural language interface. >> >> Within a natural language interface the invocation of H(M, H, M) would >> be specified as: >> >> “Does Turing Machine M halt on input of Turing Machine H >> and Turing Machine M?” > No, it would be read as "What does the machine H say when invoked with > input (M, H, M)?". But that has a simple yes/no answer so it does not > fit with PO's preconceived idea of what the answer should be, so he has > to... er, "misrepresent the truth". I have learned many of the recent developments in the mathematics of the meaning of words as expressed using predicate logic and developed by Montague and others since the last time that I talked to you. http://en.wikipedia.org/wiki/Montague_grammar This is my primary source: Formal Semantics: An Introduction (Cambridge Textbooks in Linguistics) Ronnie Cann The mistake of the Halting Problem can only be detected from the point of view of the mathematics of the meaning of words. Since this is outside of the field of Turing, Godel, and most everyone here, it is clear that you all have a good reason for not seeing this perspective. The mistake that you just made was framing the question of the Halting Problem incorrectly. 
The reason that the Halting Problem is an ill-formed question is Pathological Self-Reference. Your incorrect statement of the question eliminated the self-reference, and thus changed the underlying semantics. Turing Machine H is *not* asked: "What would you say?" You have incorrectly switched the perspective from the perspective of Turing Machine H to that of an outside observer of Turing Machine H. The outside observer perspective lacks the essential self-reference required to derive the Halting Problem. > Misrepresenting the result of an invocation of a "potential halt > decider" as being a questions about whether a machine *actually* halts > or not is the error at core of all of the recent nonsense. > > <snip>


8/19/2012 1:36:37 PM

On 6/9/2012 5:58 PM, cplxphil wrote: > On Jun 8, 11:17 pm, Peter Olcott <OCR4Screen> wrote: >> On 6/8/2012 9:04 PM, cplxphil wrote: >> >> >> >> >> >> >> >>> On Jun 8, 9:43 pm, Peter Olcott<OCR4Screen> wrote: >>>> If a yes or no question does not have a correct yes or no answer then >>>> there must be something wrong with this question. >>>> More generally: >>>> An ill-formed question is defined as any question that lacks a correct >>>> answer from the set of all possible answers. >>>> The *only* reason that the self reference form of the Halting Problem >>>> can not be solved is that neither of the two possible final states of >>>> any potential halt decider TM corresponds to whether or not its input TM >>>> will halt on its input. >>>> In other words for potential >>>> halt decider H and input M: >>>> --------------------------- >>>> Not<ThereExists> >>>> <ElementOfSet> >>>> FinalStatesOf_H >>>> <MathematicallyMapsTo> >>>> Halts(M, H, input) >>>> Where M is defined as >>>> --------------------- >>>> M(String H, String input): >>>> if H(input, H, input) loop >>>> else halt >>>> The only difference between asking a yes or no question and the >>>> invocation of a Turing Machine is a natural language interface. >>>> Within a natural language interface the invocation of H(M, H, M) would >>>> be specified as: >>>> “Does Turing Machine M halt on input of Turing Machine H >>>> and Turing Machine M?” >>>> Within a natural language interface the answer to this question would be >>>> specified as “yes” or “no” and map to the final states of H of accept or >>>> reject. >>>> So the only reason that the self reference form of the Halting Problem >>>> can not be solved is that it is based on a yes or no question that lacks >>>> a correct yes or no answer, and thereby derives an ill-formed question. >>> No one would say that instances of the halting problem lack a yes or >>> no answer. >> Yes this point has been missed for many years.
>> >> Since the final states: {accept, reject} of H form the entire solution >> set, (every possible answer that H can provide) and these states have >> been shown to mathematically map to yes or no therefore the inability of >> a Turing Machine to solve the halting problem is merely the inability to >> correctly answer a yes or no question that has no correct yes or no answer. >> > Perhaps you've been over this before in your lengthy discussions, but > let me see if I have this straight. > > You are saying that the question, "Does machine M halt on input I?" > may, for certain M and I, be impossible to answer either yes or no? > > If it doesn't either halt or not halt on input I, what exactly does it > do? It is the slipperiness of inadvertently switching perspectives from the point of view of the potential halt decider (PHD) to the point of view outside that of the PHD that makes the error of the Halting Problem most difficult to see. The error of the Halting Problem only exists from the perspective of the potential halt decider. When you switch to a perspective outside that of the PHD, you change the meaning of the question. You might not ever see this until after you understand the mathematics of the meaning of words developed by Montague. http://en.wikipedia.org/wiki/Montague_grammar Almost all of the current research in Formal Semantics (mathematics of the meaning of words) is fundamentally based on the seminal work of Montague. > > Also, before this continues too long: It sounds like you are quite > confident that you're right about this. I am quite confident that you > are not. What would it take to convince you that you are wrong? > > For my part, I will be satisfied and agree that Turing's result is > somehow wrong if you can implement an algorithm that solves the > halting problem. 
If you are going to say that the Halting problem is > ill-formed, then in order for me to agree with this, I would need to > see an example of a machine that both fails to halt and fails to not > halt. (Good luck.)
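The challenge in the post above ("a machine that both fails to halt and fails to not halt") can be examined with a runnable sketch. This is my own toy model, not the thread's formal TMs: the infinite-loop branch is replaced by a sentinel string so both branches can actually execute, and both deciders are hypothetical. The diagonal machine M always does exactly one of halt/loop; what fails on the diagonal input is the decider's fixed answer, whichever of its two final states it returns:

```python
# Toy model: the string "loops" stands in for a real infinite loop so
# the diagonal case is executable. Both deciders below are hypothetical.

def make_m(h):
    """Diagonal machine M: do the opposite of what decider h predicts."""
    def m(arg):
        if h(m, arg):          # h predicts "halts"
            return "loops"     # stand-in for looping forever
        return "halts"         # h predicts "loops", so M halts
    return m

for name, h in [("H_accept", lambda prog, arg: True),
                ("H_reject", lambda prog, arg: False)]:
    m = make_m(h)
    predicted = "halts" if h(m, m) else "loops"
    print(f"{name}: predicted {predicted}, M actually {m(m)}")
# H_accept: predicted halts, M actually loops
# H_reject: predicted loops, M actually halts
```

In both rows M's behavior is well defined; it is the decider's answer on the diagonal input that is contradicted either way, which is the situation both sides of the thread are arguing about.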


8/19/2012 1:49:35 PM

On 6/9/2012 8:39 PM, Joshua Cranmer wrote: > On 6/9/2012 6:58 PM, cplxphil wrote: >> Perhaps you've been over this before in your lengthy discussions, but >> let me see if I have this straight. >> >> You are saying that the question, "Does machine M halt on input I?" >> may, for certain M and I, be impossible to answer either yes or no? > > I think Peter believes that this is not the proper way to phrase the > question really being asked. Although, when he was pushed into a > corner, I think there was a tacit admission that the answer of such a > question depends on who you asking it of. > >> Also, before this continues too long: It sounds like you are quite >> confident that you're right about this. I am quite confident that you >> are not. What would it take to convince you that you are wrong? > > Again, I think the answer is that he has no issue with proof, per se. > His umbrage is with the interpretation of the result: he believes that > it is possible to make a Halt decider that is "essentially" correct > but fails in some cases which are "necessary" (use of quotation marks > to indicate that the terms contained within are slippery and Necessity and Possibility form the fundamental basis of Modal Logic: http://en.wikipedia.org/wiki/Modal_logic > ill-defined). Lots of people have attempted to illustrate several > alternate derivations to show that the incorrectness is not so > well-contained, but they have all been ignored because either: > a) it happens to fall under the "necessary" failures, > b) it relies on a similar "incorrect" result (the uncountability of > real numbers and Godel's incompleteness theorem have also been > explicitly cited as incorrect proofs due to being similar [1]), or > c) he doesn't understand it, so it has to be either case a or case b > because he is OBVIOUSLY right.
> > Trying to come up with a simple, alternate proof of the Halting > problem that is understandable and skirts any other proofs that have > anything smacking of diagonalization or self-reference is indeed hard, > but I doubt he'd accept anything less. I also doubt that he'd accept > even that much, though... > > [1] This just makes it seem to me that he has a really hard time > accepting that a proof by contradiction is indeed a valid proof. > Ultimately to understand what I am saying requires an understanding of the mathematics of the meaning of words. Since the last time that we spoke I have done extensive reading and found that the field called Formal Semantics in Linguistics provides the most complete specification of all knowledge of mathematics of the meaning of words. This single source describes the basis for essentially all of this work: Formal Semantics: An Introduction (Cambridge Textbooks in Linguistics) Ronnie Cann

0 |

8/19/2012 1:59:23 PM

On 8/19/2012 1:12 PM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> On 6/9/2012 9:41 AM, Ben Bacarisse wrote: >>> Peter Olcott <OCR4Screen> writes: >>> >>>> If a yes or no question does not have a correct yes or no answer then >>>> there must be something wrong with this question. >>> I was just thinking "surely there's not point reading this; can there be >>> anything new?" when low-and-behold it contains a *new* error: >>> >>> <snip> >>>> In other words for potential >>>> halt decider H and input M: >>>> --------------------------- >>>> Not<ThereExists> >>>> <ElementOfSet> >>>> FinalStatesOf_H >>>> <MathematicallyMapsTo> >>>> Halts(M, H, input) >>>> >>>> Where M is defined as >>>> --------------------- >>>> M(String H, String input): >>>> if H(input, H, input) loop >>>> else halt >>>> >>>> The only difference between asking a yes or no question and the >>>> invocation of a Turing Machine is a natural language interface. >>>> >>>> Within a natural language interface the invocation of H(M, H, M) would >>>> be specified as: >>>> >>>> “Does Turing Machine M halt on input of Turing Machine H >>>> and Turing Machine M?” >>> No, it would be read as "What does the machine H say when invoked with >>> input (M, H, M)?". But that has a simple yes/no answer so it does not >>> fit with PO's preconceived idea of what the answer should be, so he has >>> to... er, "misrepresent the truth". >> I have learned many of the recent developments in the mathematics of >> the meaning of words as expressed using predicate logic and developed >> by Montague and others since the last time that I talked to you. >> http://en.wikipedia.org/wiki/Montague_grammar >> >> This is my primary source: >> Formal Semantics: An Introduction (Cambridge Textbooks in Linguistics) >> Ronnie Cann >> >> The mistake of the Halting Problem can only be detected from the point >> of view of the mathematics of the meaning of words. 
> The Halting Problem is not defined in words, it's defined in the formal > syntax of mathematics. > >> Since this is >> outside of the field of Turing, Godel, and most everyone here, it is >> clear that you all have a good reason for not seeing this perspective. > Yes, these theorems are theorems of mathematics, not of English. Your > English descriptions of them are in error if they lead you to the > conclusion that any of then are invalid. > >> The mistake that you just made was framing the question of the Halting >> Problem incorrectly. > And your mistake was framing the question in English. > >> The reason that the Halting Problem is an >> ill-formed question is Pathological Self-Reference. Your incorrect >> statement of the question eliminated the self-reference, and thus >> changed the underlying semantics. > Best not to use English to talk about mathematics then. We could, if > you want, communicate almost unambiguously using maths, but I think you > prefer your words since they can't be refuted. > >> Turing Machine H is *not* asked: "What would you say?" > No indeed. It's not asked anything and it makes no answer. It is > simply a 5-tuple drawn from a set, and it has properties formally > defined by a mathematical notation. > >> You have incorrectly switched the perspective from the perspective of >> Turing Machine H to that of an outside observer of Turing Machine H. >> The outside observer perspective lacks the essential self-reference >> required to derive the Halting Problem. > You have incorrectly switched perspective to English words. HP is > statement in mathematics and we've all been guilty of using words to > talk about it because you are not comfortable with mathematics. That is > an error that we should not repeat. > > <snip> Only the mathematics of the meaning of words is sufficiently expressive to show that the self reference form of the Halting Problem is constructed entirely on the basis of an error of reasoning.


8/19/2012 6:39:41 PM

On 8/21/2012 2:39 PM, Ben Bacarisse wrote: > PeteOlcott <peteolcott@gmail.com> writes: > >> On Aug 21, 11:43 am, Patricia Shanahan <p...@acm.org> wrote: >>> On 8/20/2012 10:55 AM, PeteOlcott wrote: > <snip> >>>> This has not been the case since the work of Montague, (previously >>>> cited). >>> Huh? Which are you disagreeing with? >>> >>> The meaning of a natural language word changes with: >>> >>> A. dialect >>> >>> B. context >>> >>> C. time >>> >>> Patricia >> Montague developed as system such that the meaning of an utterance can >> remain fixed. In this case the timeframe (if relevant) would be >> explicitly specified along with all of relevant the details of the >> context, (if any) and the meaning of the lexical items are fixed as >> specific nodes within an ontology. > No he didn't. His analysis relates to the relationship between words. > What a cat is, and whether one can smooth it, are not part of his work. > What's worse, the result of Montague's analysis are *mathematical > formulae* explaining the relationships between the concepts denoted by > the words. There's no advantage in doing that when the concepts and the > relationships are already simple to represent in exactly the same > mathematics. > > By all means, show me I'm wrong. Take any statement about halting or > Turing machines that has been misconstrued here and present the Montague > analysis of it. > I have to learn more about it first. I have only read one book and a dozen articles, so far. He did not even attempt to formalize the notion of a question, yet others after him have.


8/21/2012 9:39:53 PM

On 8/21/2012 2:39 PM, Ben Bacarisse wrote: > PeteOlcott <peteolcott@gmail.com> writes: > >> On Aug 21, 11:43 am, Patricia Shanahan <p...@acm.org> wrote: >>> On 8/20/2012 10:55 AM, PeteOlcott wrote: > <snip> >>>> This has not been the case since the work of Montague, (previously >>>> cited). >>> Huh? Which are you disagreeing with? >>> >>> The meaning of a natural language word changes with: >>> >>> A. dialect >>> >>> B. context >>> >>> C. time >>> >>> Patricia >> Montague developed as system such that the meaning of an utterance can >> remain fixed. In this case the timeframe (if relevant) would be >> explicitly specified along with all of relevant the details of the >> context, (if any) and the meaning of the lexical items are fixed as >> specific nodes within an ontology. > No he didn't. His analysis relates to the relationship between words. > What a cat is, and whether one can smooth it, are not part of his work. > What's worse, the result of Montague's analysis are *mathematical > formulae* explaining the relationships between the concepts denoted by > the words. There's no advantage in doing that when the concepts and the > relationships are already simple to represent in exactly the same > mathematics. > > By all means, show me I'm wrong. Take any statement about halting or > Turing machines that has been misconstrued here and present the Montague > analysis of it. > I have had visions of ideas such as Montague's in my mind for 25 years. This subject is the one that has most fascinated me for the last 25 years.


8/22/2012 1:36:10 AM

On 8/21/2012 10:16 PM, George Greene wrote: >>> By all means, show me I'm wrong. Take any statement about halting or >>> Turing machines that has been misconstrued here and present the Montague >>> analysis of it. > On Aug 21, 5:39 pm, Peter Olcott <OCR4Screen> wrote: >> I have to learn more about it first. I have only read one book and a >> dozen articles, so far. >> He did not even attempt to formalize the notion of a question, yet >> others after him have. > More about WHAT? Turing machines and formal languages or "The > Montague analysis" > of something?!? Clue: Montague IS NOT RELEVANT PERIOD to ANY of THIS! You wouldn't know. Ignorance is perceived by the ignorant mind as disagreement. Ignorance can only be perceived as ignorance once the missing knowledge is provided, otherwise there is nothing to contrast it with. > You are just bringing it up in the hope of being able to talk about > something that > you know more about than we do. Tragically for you, even this little > backwater of > a newsgroup is a big enough place that THAT is NOT going to happen.


8/22/2012 9:43:57 AM

On 8/22/2012 5:47 PM, Ben Bacarisse wrote: > PeteOlcott <peteolcott@gmail.com> writes: > >> One thing that has not been even touched on within the Theory of >> Computation is: >> What exactly is the correct mathematical model for a natural language >> question? > Yes, there's a lot missing from the Theory of Computation. There's no > mathematical model for metaphorical eulogising either. Until such time > as there is, "my Turing machine is like a red red rose" will remain a > deeply ambiguous statement. And then there's no model of TM > consciousness either, so we can't even ask if its read head hurts. All > we can do is ask what can and can't be computed by the narrow rules imposed > by such dull and constrained minds as Church, Kleene and Turing. > Another fundamental limit to computation (analogous to the Halting Problem): CAD systems will never be able to correctly represent square circles.


8/23/2012 1:11:17 AM

On 8/22/2012 8:20 PM, George Greene wrote: >>> More about WHAT? Turing machines and formal languages or "The >>> Montague analysis" >>> of something?!? Clue: Montague IS NOT RELEVANT PERIOD to ANY of THIS! > On Aug 22, 5:43 am, Peter Olcott <OCR4Screen> wrote: >> You wouldn't know. > I WOULD SO TOO know. > I do actually have relevant degree, dumbass. What is your degree in animosity? > >> Ignorance > I'm NOT ignorant. You are. You have insufficient basis for making this determination. > You keep talking about questions having a set of "possible" answers. It turns out that Montague Semantics agrees. > You also keep ignoring factual examples of questions with no "true" > answers that ARE OBVIOUSLY > well-formed.


8/23/2012 1:37:58 AM

On 8/23/2012 2:27 PM, George Greene wrote: >>> I'm NOT ignorant. You are. > On Aug 22, 9:37 pm, Peter Olcott <OCR4Screen> wrote: >> You have insufficient basis for making this determination. > Believe me, everything you are saying here constitutes a sufficient > basis FOR EVERYbody (except you) > to make THAT determination. The most significant human fallibility is the inability to distinguish presumption from truth. "Everyone" also knew that Fulton would fail in his folly.


8/23/2012 9:08:50 PM

On 8/22/2012 5:47 PM, Ben Bacarisse wrote: > PeteOlcott <peteolcott@gmail.com> writes: > >> One thing that has not been even touched on within the Theory of >> Computation is: >> What exactly is the correct mathematical model for a natural language >> question? > Yes, there's a lot missing from the Theory of Computation. There's no > mathematical model for metaphorical eulogising either. Until such time > as there is, "my Turing machine is like a red red rose" will remain a > deeply ambiguous statement. And then there's no model of TM > consciousness either, so we can't even ask if its read head hurts. All > we can do is ask what can and can't be computed by the narrow rules imposed > by such dull and constrained minds as Church, Kleene and Turing. > The key aspect of the Halting Problem that is erroneous, and that cannot be sufficiently expressed within the mathematics of the Theory of Computation, is the claim that the Halting Problem somehow forms an {actual limit to computation}. The term: {actual limit to computation} can only be sufficiently expressed in languages as expressive as English. Montague Grammar provides a way that I can express this such that misinterpretation cannot occur.


8/23/2012 9:32:50 PM

On 8/23/2012 7:41 PM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> On 8/22/2012 5:47 PM, Ben Bacarisse wrote: >>> PeteOlcott <peteolcott@gmail.com> writes: >>> >>>> One thing that has not been even touched on within the Theory of >>>> Computation is: >>>> What exactly is the correct mathematical model for a natural language >>>> question? >>> Yes, there's a lot missing from the Theory of Computation. There's no >>> mathematical model for metaphorical eulogising either. Until such time >>> as there is, "my Turing machine is like a red red rose" will remain a >>> deeply ambiguous statement. And then there's no model of TM >>> consciousness either, so we can't even ask if its read head hurts. All >>> we can do is ask what can and can't be computed by the narrow rules imposed >>> by such dull and constrained minds as Church, Kleene and Turing. >>> >> The key aspect of the Halting Problem that is erroneous, and that cannot >> be sufficiently expressed within the mathematics of the Theory of >> Computation, is the claim that the Halting Problem somehow forms an >> {actual limit to computation}. >> >> The term: {actual limit to computation} can only be sufficiently >> expressed in languages as expressive as English. >> Montague Grammar provides a way that I can express this such that >> misinterpretation cannot occur. > The halting theorem is much easier to state in mathematics. It just > asserts that a set (TM) does not contain any member with a particular > property (that of deciding halting). How this simple fact relates to > whatever meaning you attribute to the phrase "actual limit to > computation" is a mystery to me, but if all that work with Montague > grammar helps you to understand your own words, then go for it. > > However, if you think it might help explain things to others, you are in > for a shock. No matter what machinery you employ to give entirely unambiguous > and clear meanings to English-language phrases, what will matter is > whether they are true or not, and that will be decided in the domain to > which they relate -- the theory of computation. Any that concur with > the theory can be accepted as true, but any that don't are simply very > clear and unambiguous false statements. > The cool thing about the possibilities that Montague Grammar unlocks is that now, for the first time, we will have an unequivocal measure of analytical truth.


8/24/2012 2:34:13 AM

On 8/23/2012 8:16 PM, Joshua Cranmer wrote: > On 8/20/2012 12:55 PM, PeteOlcott wrote: >> On Aug 20, 11:47 am, Patricia Shanahan <p...@acm.org> wrote: >>> The meaning of a natural language word changes with dialect, context, >>> and time. That makes word meanings far too vague and unreliable to be >>> useful as a foundation for mathematics or logic. >> >> This has not been the case since the work of Montague, (previously >> cited). > > How about the verb "like"? This is one that has changed definitions in > the past half-decade even, thanks to a certain monolithic social network. > > People who claim that vocabulary doesn't change are ignorant of > history. People who claim that vocabulary has stopped changing are > completely blind to the world around them. > You provided an excellent example that immediately proved your point. My end goal is to basically figure out how to teach a computer to read, in the same way that humans read, with increasing comprehension. That would provide the key missing piece of the CYC project: http://cyc.com/


8/24/2012 2:37:58 AM

On 8/23/2012 8:23 PM, Joshua Cranmer wrote: > On 8/21/2012 4:39 PM, Peter Olcott wrote: >> I have to learn more about it first. I have only read one book and a >> dozen articles, so far. > > For someone who argues that all his detractors are floundering in > ignorance, citing as sources things you admit you don't understand is > a very odd way to try to go about your argument. I tend to be honest to a fault, and let the chips fall where they may. My detractors are only ignorant of something that is outside of their field. The error of the Halting Problem that I have been arguing all along cannot be expressed within the mathematics of the Theory of Computation, thus it is not an error within the Theory of Computation itself. The error is at the next level of abstraction higher than the Theory of Computation. > >> He did not even attempt to formalize the notion of a question, yet >> others after him have. > > To me, this would suggest you're barking up the wrong tree. > No, I have read what the others have said, and the best combination of existing theories is the same theory that I have had since long before I read about any of this. The key factor in the best adaptation of Montague Grammar to questions is (what I have been saying here all along): a question inherently specifies its own set of possible answers.


8/24/2012 2:46:46 AM

On 8/23/2012 9:55 PM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > <snip> >> The cool thing about the possibilities that Montague Grammar unlocks >> is now for the first time we will have an unequivocal measure of >> analytical truth. > The number of things about which you know very little is growing. > Montague's work can obviously be added to that list. > I have had ideas about this subject since 1991, long before I ever heard of Montague (within the last six months). Kurt Godel: (This is the foundation that Montague Meaning Postulates are based on) By the theory of simple types I mean the doctrine which says that the objects of thought ... are divided into types, namely: individuals, properties of individuals, relations between individuals, properties of such relations, etc. (with a similar hierarchy for extensions),
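The Gödel quotation above describes stratified types: individuals at one level, properties of individuals above them, and so on. That discipline can be illustrated with a toy checker that only lets a predicate apply to an argument exactly one level below it, which is precisely the restriction that blocks self-application. The level-numbering scheme and names here are invented for illustration:

```python
# Toy sketch of the theory of simple types: individuals at level 0,
# properties of individuals at level 1, properties of properties at
# level 2, and so on. A predicate may only be applied to an argument
# exactly one level below its own level.

def apply(pred, pred_level, arg, arg_level):
    """Apply `pred` to `arg`, enforcing the type discipline."""
    if pred_level != arg_level + 1:
        raise TypeError("level %d predicate cannot take a level %d argument"
                        % (pred_level, arg_level))
    return pred(arg)

is_even = lambda n: n % 2 == 0      # a level-1 property of individuals
print(apply(is_even, 1, 4, 0))      # True

# Self-application is ruled out by the types, not by any special case:
try:
    apply(is_even, 1, is_even, 1)
except TypeError as e:
    print("blocked:", e)
```

The sketch shows why, under simple types, a predicate never ends up in its own domain.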


8/24/2012 3:17:01 AM

On 8/23/2012 10:35 PM, Joshua Cranmer wrote: > On 8/23/2012 9:37 PM, Peter Olcott wrote: >> My end goal is to basically figure out how to teach a computer to read, >> in the same way that humans read, with increasing comprehension. That >> would provide the key missing piece of the CYC project: http://cyc.com/ > > This would be the holy grail of natural language understanding and AI > in general. And, if the last twenty years of advancements in natural > language are any guideline, it's also completely unfeasible. All major > recent developments that I'm aware of resort to primarily statistical > determination and not any actual understanding of text. > I myself can see how comprehension can be specified using some extension of Montague to form meaning postulates. Now all we need is comprehension of reading. The CYC project is correct in that there will be a certain threshold of knowledge that will effectively BootStrap real AI (with comprehension and reasoning at least as good as humans). What is needed now is the set of meta knowledge about knowledge acquisition. The details of this set can be deduced.


8/24/2012 3:57:01 AM

On 8/23/2012 10:35 PM, Joshua Cranmer wrote: > On 8/23/2012 9:37 PM, Peter Olcott wrote: >> My end goal is to basically figure out how to teach a computer to read, >> in the same way that humans read, with increasing comprehension. That >> would provide the key missing piece of the CYC project: http://cyc.com/ > > This would be the holy grail of natural language understanding and AI > in general. And, if the last twenty years of advancements in natural > language are any guideline, it's also completely unfeasible. All major > recent developments that I'm aware of resort to primarily statistical > determination and not any actual understanding of text. > We would begin the process of deducing the requirements of automated knowledge acquisition by specifying the structure of the set of conceptual knowledge. This is actually much simpler than it sounds **. I have had these ideas on my mind since 1991. From the structure of the set of conceptual knowledge we now know exactly what holes to fill and the structure of how they will be filled. ** The structure of conceptual knowledge is enormously simpler than the structure of natural language.


8/24/2012 4:13:55 AM

On 8/23/2012 11:34 PM, Joshua Cranmer wrote: > On 8/23/2012 11:13 PM, Peter Olcott wrote: >> We would begin the process of deducing the requirements of automated >> knowledge acquisition by specifying the structure of the set of >> conceptual knowledge. This is actually much simpler than it sounds **. I >> have had these ideas on my mind since 1991. >> >> From the structure of the set of conceptual knowledge we now know >> exactly what holes to fill and the structure of how they will be filled. > > This shows a very, VERY naive understanding of the current > state-of-the-art limits in AI. It is well-known by now how to, in > effect, teach a system that knows about animals about mythological > creatures. That is the sort of thing that already fits in its framework. > > What we really don't know how to do is how to teach systems radically > different things--we can't take a system that knows about animals and > teach it about World War I. This ability is what I would consider the > crux of AI research, just as P=NP is the crux of complexity theory > research. > Ultimately we can very easily** teach a computer system all of the details about any set of concepts by simply hand-coding these details into an ontology. The problem with this approach is that it takes an infeasible amount of time. ** Unless there exist limits to the expressiveness of the current set of ontology languages. It looks like some form of predicate logic may provide a sufficient basis.
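The hand-coding approach under discussion can be made concrete with a toy triple-store ontology. The facts and relation names below are invented for illustration; real systems such as CYC use far richer logics, but the shape of the idea — facts as triples plus a transitive subsumption rule — is the same:

```python
# Toy ontology sketch: hand-coded knowledge as (subject, relation, object)
# triples, with transitive "isa" inference over the class hierarchy.
facts = {
    ("dog", "isa", "mammal"),
    ("mammal", "isa", "animal"),
    ("dog", "has", "fur"),
}

def isa(sub, sup, facts):
    """True if `sub` is a (possibly indirect) kind of `sup`."""
    if (sub, "isa", sup) in facts:
        return True
    return any(isa(mid, sup, facts)
               for (s, r, mid) in facts if s == sub and r == "isa")

print(isa("dog", "animal", facts))   # True, via dog -> mammal -> animal
```

The infeasibility complaint in the post is about scale: every triple must be authored by hand, and world knowledge needs vastly more of them than any team can write.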


8/24/2012 2:01:42 PM

On 8/24/2012 9:52 PM, George Greene wrote: > On Aug 23, 10:46 pm, Peter Olcott <OCR4Screen> wrote: >> My detractors are only ignorant of something that is outside of their field. > You're a liar and a fool. You have too much animosity and hostility. > You don't know anywhere near as much about ANY of these fields, > especially > any field relating to linguistics or semantics, AS I do OR as MOST of > the people > you have been arguing with. The main question you are arguing (which > is > about whether there is a TM that ALWAYS correctly tells whether any TM > does or doesn't halt on any input) is in fact MUCH SIMPLER THAN ANY > of the "outside" fields you are TRYING to invoke. It is the fact that > you can't > even see how simple the issue is, and think that "something outside > their field" > MIGHT EVEN BE RELEVANT that is causing the problem. The ISSUE ITSELF > is in fact WHOLLY INSIDE OUR field. > >> The error of the Halting Problem that I have been arguing all along can >> not be expressed within the mathematics of the Theory of Computation, > Well, that's YOUR problem. Since the PROBLEM ITSELF IS IN theory > of computation, it would HAVE to be expressible there. If it doesn't > have any errors INSIDE that context, THEN IT DOESN'T HAVE ANY errors. > > > But the real, whole, total point is this: Strawson's Theorem. > THAT IS ALL that is going on, and it was a simple point even back when > it was > Russell's Paradox and even BEFORE then. > > Suppose (hypothetically) that you thought that being conceited or self- > aggrandizing > was BAD and you decided to make your little personal contribution to > lessening > hubris in this world by deciding on the following personal policy: > "I am going to praise all and only those who don't praise themselves". > This is NOT POSSIBLE (are you going to praise yourself OR NOT?), > and complications can arise when any question relating to this > situation > is phrased in a way that presumes that doing this IS possible (e.g. by > asking a question about some attribute or property of a/the person who > IS doing this). > But none of the complications will ever imply that "the question is > ill-formed". > "Formed" IS ALREADY IN THE DICTIONARY.
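The praise policy in Greene's example can be checked mechanically: for the self-referential case the policy demands praise(me) == not praise(me), and a brute-force search over the two possible truth values shows neither is consistent. The encoding below is ours, not Greene's:

```python
# "I praise x" must hold exactly when x does not praise x.
# For x = me this demands a truth value v with v == (not v).
# Brute force over both candidates shows no consistent assignment exists.
consistent = [v for v in (True, False) if v == (not v)]
print(consistent)   # [] -- the policy cannot be carried out
```

This is the same one-line argument behind Russell's paradox and the halting-decider construction: no relation R can hold of exactly those things that do not bear R to themselves.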


8/25/2012 3:32:51 AM

On 8/24/2012 9:58 PM, George Greene wrote: > On Aug 23, 8:41 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote: >> what will matter is >> whether they are true or not, and that will be decided in the domain to >> which they relate -- the theory of computation. > But in the case of the halting problem, even THAT much is NOT true, > because > THAT problem is just a simple corollary of Russell's paradox, which is > purely logical and > therefore field-INdependent. IN EVERY field, THERE SIMPLY CANNOT > EXIST A THING > that R's all and only those things that don't R themselves. THAT is > true for EVERY > binary relation R in EVERY field. THAT IS ALL THAT IS GOING ON here. > What is going on here therefore has NOTHING TO DO WITH HALTING OR TMs > OR COMPUTATION. > > It's just BASIC MEANINGS OF WORDS and the notion of a transitive verb > (standing in here > for a two-place relation). Here is an analogy to the error of the Halting Problem: The error of the Halting Problem is that it is considered to form an actual limit to computation. Here are two examples of limits to computation: a) No computer system will ever divide any number by zero and derive a correct result of five. b) No computer system will be able to perform any sequence of computation in no time at all. Which of the above forms an actual limit to computation? Is the Halting Problem analogous to (a) ?


8/25/2012 3:41:52 AM

On 8/25/2012 10:25 AM, Ben Bacarisse wrote: > George Greene <greeneg@email.unc.edu> writes: > >> On Aug 23, 8:41 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote: >>> what will matter is >>> whether they are true or not, and that will be decided in the domain to >>> which they relate -- the theory of computation. >> But in the case of the halting problem, even THAT much is NOT true, >> because >> THAT problem is just a simple corollary of Russell's paradox, which is >> purely logical and >> therefore field-INdependent. IN EVERY field, THERE SIMPLY CANNOT >> EXIST A THING >> that R's all and only those things that don't R themselves. THAT is >> true for EVERY >> binary relation R in EVERY field. THAT IS ALL THAT IS GOING ON here. >> What is going on here therefore has NOTHING TO DO WITH HALTING OR TMs >> OR COMPUTATION. >> >> It's just BASIC MEANINGS OF WORDS and the notion of a transitive verb >> (standing in here >> for a two-place relation). > So in what way was what I said not true? > Russell's Paradox is also an error of reasoning similar to the error of the Halting Problem. It is analogous to the command to go buy a candy bar from the very first store that you come to that never has and never will sell candy bars.


8/25/2012 4:52:58 PM

On 8/25/2012 12:50 PM, Joshua Cranmer wrote: > On 8/25/2012 10:54 AM, PeteOlcott wrote: >> What about the limit to computation of not being able to divide the >> integer four by two and derive a correct result of seventeen? >> This *is* a limit to computation, but is it a limit of any consequence? > > You're talking nonsense here. > >>> You seem to believe that the Halting problem is a case where the >>> "necessarily undecidable" results are limited to a few pathological >>> inputs. This is very much not the case. Let me start by pointing out >> >> Within the scope of the self-reference form of the Halting Problem, >> this *is* the case. > > Oh, how quaint. The response by a self-admitted amateur is to tell a > professional that the professional is wrong with no proof given. I've > heard more persuasive arguments out of five-year olds. > You don't yet have the tools (based on Montague Grammar) to understand the proof.


8/25/2012 6:14:08 PM

On 8/25/2012 4:46 PM, Patricia Shanahan wrote: > On 8/25/2012 11:14 AM, Peter Olcott wrote: >> On 8/25/2012 12:50 PM, Joshua Cranmer wrote: >>> On 8/25/2012 10:54 AM, PeteOlcott wrote: >>>> What about the limit to computation of not being able to divide the >>>> integer four by two and derive a correct result of seventeen? >>>> This *is* a limit to computation, but is it a limit of any consequence? >>> >>> You're talking nonsense here. >>> >>>>> You seem to believe that the Halting problem is a case where the >>>>> "necessarily undecidable" results are limited to a few pathological >>>>> inputs. This is very much not the case. Let me start by pointing out >>>> >>>> Within the scope of the self-reference form of the Halting Problem, >>>> this *is* the case. >>> >>> Oh, how quaint. The response by a self-admitted amateur is to tell a >>> professional that the professional is wrong with no proof given. I've >>> heard more persuasive arguments out of five-year olds. >>> >> You don't yet have the tools (based on Montague Grammar) to understand >> the proof. > > Given the range of skills and knowledge among the readers of these > newsgroups, I'm sure there is someone here who has studied Montague's > work and could evaluate the proof for both validity and relevance to > theory of computation. > > I suggest putting it up on a web page, which will make it available to > even more experts, and posting a link here. > > Patricia > Certainly no one here (or anywhere else) has the ability to critique it before it has been presented.


8/25/2012 10:33:54 PM

On 8/25/2012 6:34 PM, Patricia Shanahan wrote: > On 8/25/2012 3:33 PM, Peter Olcott wrote: >> On 8/25/2012 4:46 PM, Patricia Shanahan wrote: >>> On 8/25/2012 11:14 AM, Peter Olcott wrote: > ... >>>> You don't yet have the tools (based on Montague Grammar) to understand >>>> the proof. >>> >>> Given the range of skills and knowledge among the readers of these >>> newsgroups, I'm sure there is someone here who has studied Montague's >>> work and could evaluate the proof for both validity and relevance to >>> theory of computation. >>> >>> I suggest putting it up on a web page, which will make it available to >>> even more experts, and posting a link here. >>> >>> Patricia >>> >> Certainly no one here (or anywhere else) has the ability to critique it >> before it has been presented. > > Of course. That is why the default for a claimed proof that has not yet > been presented and reviewed is that it should be treated as not > existing, and is no better than "I think that ..." as support for any > argument. > > If you want to convince anyone of anything through the claimed existence > of a proof you need to present it and persuade some relevant experts to > review it. > > Patricia > I have to first build the language of the terms of the proof. I think that the key aspect of the proof will be exactly what the full and complete meaning of the concept of the term {analogy} is. This will require a full set of Meaning Postulates that pertain to the meaning of the term: {analogy}.


8/26/2012 2:52:56 AM

On 8/25/2012 11:35 PM, Bryan wrote: > Peter Olcott wrote: >> On 8/22/2012 5:47 PM, Ben Bacarisse wrote:> PeteOlcott <peteolc...@gmail.com> writes: >> >>>> One thing that has not been even touched on within the Theory of >>>> Computation is: >>>> What exactly is the correct mathematical model for a natural language >>>> question? >>> Yes, there's a lot missing from the Theory of Computation. There's no >>> mathematical model for metaphorical eulogising either. Until such time >>> as there is, "my Turing machine is like a red red rose" will remain a >>> deeply ambiguous statement. And then there's no model of TM >>> consciousness either, so we can't even ask if its read head hurts. All >>> we can do is ask what can and can't be computed by the narrow rules imposed >>> by such dull and constrained minds as Church, Kleene and Turing. >> The key aspect of the Halting Problem that is erroneous that can not be >> sufficiently expressed within the mathematics of the Theory of >> Computation is the claim that the Halting Problem somehow forms an >> {actual limit to computation}. > "The key aspect" of un-moderated Usenet groups is that negative > contributors, such as you Peter Olcott, can and do go on with your > blather despite definitive refutations, even such delightfully clever > refutations as Ben Bacarisse posted. > > You, Peter, went off on theory of computation not touching on natural > language, so Ben came back with how the color of Turing machines > is also missing. I mean, come on, that's funny. 'Cept that it kills a > joke when I have to explain it. The Halting Problem places an actual limit on computation in an analogous way that inability to correctly represent a square circle limits the capabilities of CAD systems. Until we have a mathematical model of the term {analogous} we cannot sufficiently either prove or refute this claim.


8/26/2012 1:16:48 PM

On 8/26/2012 10:35 AM, Joshua Cranmer wrote: > On 8/26/2012 8:16 AM, Peter Olcott wrote: >> The Halting Problem places an actual limit on computation in an >> analogous way that inability to correctly represent a square circle >> limits the capabilities of CAD systems. Until we have a mathematical >> model of the term {analogous} we can not sufficiently either prove or >> refute this claim. > > No, we can do it without that mathematical model, via informal proofs, > which (surprise, surprise) seem to have satisfied us for most of these > threads. > > A "square circle" isn't a definable geometric object. If you view the > CAD system as a mathematical function, this "square circle" remains > outside the domain, so asking it to draw such a thing is complete and > utter nonsense. > Although you correctly pointed out an aspect where the two are not analogous, to conclude that two things lack any analogy on the basis of these two things lacking one aspect of an analogy would be incorrect. Two analytical impossibilities constrain the solution space in the same (analytically impossible) way. > The Halting problem, in complete contrast, is a very well-defined > language. Thus, it is in the domain of all languages, so the inability > of a Turing machine to represent this language is a clear constraint > that Turing machines can't define all languages. > > The core of your argument remains as fallacious as ever; no amount of > attempting to dress it up in mathematical notation will change it. >


8/26/2012 4:20:05 PM

On 8/26/2012 11:14 AM, Patricia Shanahan wrote: > On 8/26/2012 8:35 AM, Joshua Cranmer wrote: > ... >> The Halting problem, in complete contrast, is a very well-defined >> language. Thus, it is in the domain of all languages, so the inability >> of a Turing machine to represent this language is a clear constraint >> that Turing machines can't define all languages. > > Turing machines can't even define all languages for which a decision > procedure would be of practical value. That is a real constraint on > computation by any reasonable standard. > > Patricia Certainly determining whether or not a machine will halt is of practical value; it is an aspect of proof of correctness.
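Patricia's point — that a general halting decision procedure would be of real practical value — is exactly why practical verification tools settle for sound but incomplete analysis. A toy three-valued checker for an invented two-instruction language shows the usual compromise: never give a wrong yes/no, and say "unknown" for anything the analysis cannot model:

```python
# Sketch of a conservative halting checker for a toy language whose
# programs are lists of instructions: "step" (do some work and continue)
# or "loop" (enter an infinite loop). The checker answers "halts",
# "loops", or "unknown" -- it is sound but deliberately incomplete.
def analyze(program):
    for instr in program:
        if instr == "loop":
            return "loops"
        if instr != "step":
            return "unknown"   # give up on anything we cannot model
    return "halts"

print(analyze(["step", "step"]))      # halts
print(analyze(["step", "loop"]))      # loops
print(analyze(["step", "mystery"]))   # unknown
```

The undecidability theorem rules out eliminating the "unknown" answer for a Turing-complete language; it does not rule out useful partial checkers like this one.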


8/26/2012 4:22:44 PM

On 8/26/2012 1:08 PM, Joshua Cranmer wrote: > On 8/26/2012 11:20 AM, Peter Olcott wrote: >> On 8/26/2012 10:35 AM, Joshua Cranmer wrote: >>> On 8/26/2012 8:16 AM, Peter Olcott wrote: >>>> The Halting Problem places an actual limit on computation in an >>>> analogous way that inability to correctly represent a square circle >>>> limits the capabilities of CAD systems. Until we have a mathematical >>>> model of the term {analogous} we can not sufficiently either prove or >>>> refute this claim. >>> >>> No, we can do it without that mathematical model, via informal proofs, >>> which (surprise, surprise) seem to have satisfied us for most of these >>> threads. >>> >>> A "square circle" isn't a definable geometric object. If you view the >>> CAD system as a mathematical function, this "square circle" remains >>> outside the domain, so asking it to draw such a thing is complete and >>> utter nonsense. >>> >> Although you correctly pointed out an aspect where the two are not >> analogous, to conclude that two things lack any analogy on the basis of >> these two things lacking one aspect of an analogy would be incorrect. > > I did not defeat all analogies between the two, but nor did I intend > to. You gave an unqualified claim of analogy: the lack of > qualification means that you clearly intend the two to be broadly > similar except in the case of a few "minor" differences. It is this > analogy in particular that I attacked. > That would simply be a false assumption. It would also be an unreasonable assumption. Analogies do not typically function as great similarities across several dimensions. Analogies typically function as great similarities along a single dimension. The two examples are exactly analogous in that they both denote analytical impossibilities that constrain the solution space in the same (analytically impossible) way. 
All analytical impossibilities are exactly equally analytically impossible and thus completely identical along the dimension of possible versus impossible.


8/26/2012 6:29:50 PM

On 8/27/2012 11:18 AM, Ben Bacarisse wrote: > PeteOlcott <peteolcott@gmail.com> writes: > <snip everything because I'm not commenting on the sub-argument about > what analogies are> > > I thought your analogy was quite good -- it makes it clear to new > readers why your thinking about halting is so wrong. > > First, it makes a category error. Your analogy was about asking a > question of some software, but the halting problem is about the > existence of something much like software. To correct this, you might > have said "does a CAD system *exist* that can output a square circle". These are all analogous in that they are all based on analytical impossibilities: a) A CAD system can not possibly exist that can represent a square circle. b) The self-reference (Pathological Self-Reference) form of the Halting Problem has no possible solution. c) "How many feet long is the color of your car?" has no correct answer. > But now you can see that we've lost the notion of input upon which the > output should be based. A better analogy (by which I mean one closer to > your erroneous thinking) would have been: "does a program exist that > prints an integer with exactly half the value of the input". > > Secondly (and I can't believe you've got me to say this again!) the > output you want from your supposedly analogous system does not exist. > There are no "square circles" nor is there a correct answer to every > instance of the arithmetic question I've just proposed. But every > instance of the halting problem has a correct answer -- it's either yes > or no. It really is the case that for every encoded machine and input > pair, there is a correct yes/no answer that corresponds to whether that > machine would halt on that input. > > It's revealing that you have both accepted and refuted this simple fact > in the recent threads on the subject. I think that it's only by > believing it to be both true and false that the huge threads were > possible. I don't suppose you care to give a definitive answer now? >
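The existence claim Ben is pressing can be seen in the standard diagonal construction: from any claimed total halting decider `h`, one builds a program `d` that does the opposite of whatever `h` predicts about `d` itself. The sketch below is illustrative; `h_no` is one invented candidate decider, and no real total decider can exist:

```python
# Diagonalization sketch: given any claimed total halting decider h,
# build a program d that does the opposite of h's prediction about d.
def make_d(h):
    def d():
        if h(d):
            while True:     # h said "d halts", so d loops forever
                pass
        return "halted"     # h said "d loops", so d halts
    return d

# One candidate decider that answers "does not halt" for every program:
def h_no(prog):
    return False

d = make_d(h_no)
# h_no predicts that d loops, yet calling d() returns immediately,
# so h_no is wrong on its own diagonal program.
print(h_no(d), d())
```

The same construction defeats every candidate: a decider that answers True for `d` produces a `d` that loops, so each candidate is wrong on at least one input, which is all the theorem asserts.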


8/27/2012 11:02:22 PM

On 8/27/2012 6:34 PM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> On 8/27/2012 11:18 AM, Ben Bacarisse wrote: >>> PeteOlcott <peteolcott@gmail.com> writes: >>> <snip everything because I'm not commenting on the sub-argument about >>> what analogies are> >>> >>> I thought your analogy was quite good -- it makes it clear to new >>> readers why your thinking about halting is so wrong. >>> >>> First, it makes a category error. Your analogy was about asking a >>> question of some software, but the halting problem is about the >>> existence of something much like software. To correct this, you might >>> have said "does a CAD system *exist* that can output a square circle". >> These are all analogous in that they are all based on analytical >> impossibilities: >> a) A CAD system can not possibly exist that can represent a square circle. >> b) The self-reference (Pathological Self-Reference) form of the >> Halting Problem has no possible solution. >> c) How many feet long is the color of your car? Has no correct answer. > Except you are wrong about b. All forms of the halting problem have > correct yes/no answers. That is the part that everyone here fails to comprehend: I say that within the scope of pathological self-reference there is no correct answer and you (and everyone else) say that it is not true because there is an answer outside of the scope of pathological self-reference. > Please don't post the example you usually post > in reply to this simple claim. It's a naive bait-and-switch to another > kind of question which has not persuaded anyone so far and I can't see > why it would be different today. >

0 |

8/28/2012 12:01:53 AM

On 8/27/2012 7:30 PM, Joshua Cranmer wrote: > On 8/27/2012 6:02 PM, Peter Olcott wrote: >> b) The self-reference (Pathological Self-Reference) form of the Halting >> Problem has no possible solution. > > Ah yes, I had forgotten that you were convinced of this particular > misconception. A misconception that arises out of treating a > particular informal version of the proof as the actual full, formal > version of the proof, and then injecting your beliefs about what the > author is intending to say in the proof as what he is saying. This is why a mathematical model of {analogy} is needed. > Then you impose an interpretation of particular context and say that, > as a result, the theorem is really wrong, and, obviously, everyone who > disagrees with you are stupid idiots who don't know what they're > talking about, which makes you the biggest genius since > $FAMOUS_HISTORICAL_GENIUS. > > Because the alternative, that you are misinterpreting an informal > version of a proof, and that no fewer than three other people who > probably have more experience in this field than you do are all > correct in identifying your same misinterpretation is much less likely > to be true. > Yes, and these same people are also experts in translating any natural language statement into its mathematical formalism, and thus can point out, using an extension of Montague Semantics, exactly where I went wrong in my analogy. Alternatively, the mathematical formalisms of the Theory of Computation are insufficiently expressive to represent the view that I am proposing within the mind, and thus must be augmented. > While eating supper, I thought of yet another rebuttal to your > viewpoint. 
Then I realized that you are so dead-set in insisting on > your fallacious interpretation (which makes less and less sense the > more I think about it) that it's not worth the cost of transmitting > the bits to send it: you'll reject it without trying to rebut it with > a stock phrase that has been replied to do a dozen times previously > that only shows just how persistently you miss the point, as well as > how unable you are to comprehend what others write. I've got better > things to do with my time. >

0 |

8/28/2012 1:00:55 AM

On 8/27/2012 8:13 PM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> On 8/27/2012 6:34 PM, Ben Bacarisse wrote: >>> Peter Olcott <OCR4Screen> writes: >>> >>>> On 8/27/2012 11:18 AM, Ben Bacarisse wrote: >>>>> PeteOlcott <peteolcott@gmail.com> writes: >>>>> <snip everything because I'm not commenting on the sub-argument about >>>>> what analogies are> >>>>> >>>>> I thought your analogy was quite good -- it makes it clear to new >>>>> readers why your thinking about halting is so wrong. >>>>> >>>>> First, it makes a category error. Your analogy was about asking a >>>>> question of some software, but the halting problem is about the >>>>> existence of something much like software. To correct this, you might >>>>> have said "does a CAD system *exist* that can output a square circle". >>>> These are all analogous in that they are all based on analytical >>>> impossibilities: >>>> a) A CAD system can not possibly exist that can represent a square circle. >>>> b) The self-reference (Pathological Self-Reference) form of the >>>> Halting Problem has no possible solution. >>>> c) How many feet long is the color of your car? Has no correct answer. >>> Except you are wrong about b. All forms of the halting problem have >>> correct yes/no answers. >> That is the part that everyone here fails to comprehend: >> I say that within the scope of pathological self-reference there is no >> correct answer and you (and everyone else) say that it is not true >> because there is an answer outside of the scope of pathological >> self-reference. > I am glad you agree that there is an answer to all such questions. > > <snip> There is no possible correct answer to the Pathological Self-Reference form of the Halting Problem.

0 |

8/28/2012 1:26:17 AM

On 8/27/2012 8:55 PM, Patricia Shanahan wrote: > On 8/27/2012 6:26 PM, Peter Olcott wrote: > ... >> No is no possible correct answer to the Pathological Self Reference form >> of the Halting Problem. > > Fortunately, there is no self-reference in the real proof, only in your > misunderstanding of an informal version, so we don't even have to try to > find out what, if anything, "Pathological" means in this context. > > Patricia > We need a mathematical model of the term: {analogy} to accurately divide the true borders of this dispute. Even making this mathematical model could (if it is good enough) qualify for publication.

0 |

8/28/2012 2:24:11 AM

On 8/27/2012 9:45 PM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> On 8/27/2012 8:55 PM, Patricia Shanahan wrote: >>> On 8/27/2012 6:26 PM, Peter Olcott wrote: >>> ... >>>> No is no possible correct answer to the Pathological Self Reference form >>>> of the Halting Problem. >>> Fortunately, there is no self-reference in the real proof, only in your >>> misunderstanding of an informal version, so we don't even have to try to >>> find out what, if anything, "Pathological" means in this context. >>> >>> Patricia >>> >> We need a mathematical model of the term: {analogy} to accurately >> divide the true borders of this dispute. > No. Nothing about halting decidability is based on analogy. You > decided (up front) that you don't like one of the sound theorem of > mathematics, but your only available weapons are bad analogies and some > poorly defined terms you've made up. > > And there is no "dispute" -- just you railing against the truth. > >> Even making this mathematical model could (if it is good enough) >> qualify for publication. > You don't have time, surely? Is your multi-million pound, revolutionary > OCR software finished already? It's been some time coming. I do these postings for entertainment; (putting great effort into) solving some of the complexity in Linguistics is part of this entertainment. The www.OCR4Screen.com product will be complete soon.

0 |

8/28/2012 2:52:06 AM

On 8/28/2012 3:02 AM, Patricia Shanahan wrote: > On 8/27/2012 7:52 PM, Peter Olcott wrote: > ... >> I do these postings for entertainment, (putting great effort in to) >> solving some the the complexity in Linguistics is part of this >> entertainment. >> >> www.OCR4Screen.com product will be complete soon. > > I've come up with a guess at your motivation, and I'm curious about > whether there is any truth to it. > > My guess is that you have a great philosophical system you are working > on that just does not deal gracefully with impossibility results, such > as halting undecidability. Rather than changing your theory to match > reality, you are trying to change reality to fit the theory. > > Patricia > Not at all. There is a nuance of meaning that is slipping through the cracks of the limited expressiveness of the language of the Mathematics of the Theory of Computation. If the language that you express thoughts within has limited expressiveness then some thoughts can not be expressed within this language. Thoughts that can not be expressed can not be understood.

0 |

8/28/2012 10:06:56 AM

On 8/29/2012 12:07 PM, Patricia Shanahan wrote: > On 8/29/2012 3:45 AM, PeteOlcott wrote: > ... >> So you are essentially claiming that the informal representation and >> the formal representation each specify different sets of semantic >> meaning that do not correspond to each other. Also you are claiming >> that since there is a lack of analogy between the two, at least one of >> these two must be less than truthfully represented. > > Correct. There are at least three styles of writing related to proofs, > listed in order of increasing length and also increasing precision: > If the different variations of the proof are not analogous in their reasoning and conclusions then it would seem that one of them would be lying to some degree. This would not be the case if one of the two ways was simply vague about things that the other one was specific about. > 1. An informal sketch. It is there to give people a general idea of what > is going on, at a major cost in precision. It is indeed "less than > truthfully represented" for the sake of simplicity. > > I believe this is the only style of proof-related material that you > consider, and even there you strongly resist or ignore any attempt at > increasing precision. > > 2. The type of proof I wrote as a mathematics student or a computer > science graduate student. These proofs are supposed to be detailed > enough to convince a knowledgeable reader that the theorem is provable, > but can still use some natural language text, not just mathematical > symbols, and therefore may contain errors, suffer from ambiguity, and be > misinterpreted. > > Textbooks often give both these styles for the same theorem. The sketch > is supposed to prime the reader to understand the actual proof. > > 3. A formal, machine-checkable proof from axioms. These are often > impractical, because they would be too long, and are hard for people to > understand. 
Despite that, if you really want to get into fine issues of > semantics, you need to study formal proofs. Everything else trades out > unambiguous precision for convenience, compactness, and ease of > understanding, to varying degrees. > > If there were one way of representing a proof that was both completely > precise and unambiguous and also compact and easy for people to read, > that would be all we would use. Unfortunately, there is a trade-off. > You have been taking the extreme at one end, the version that is > optimized for quickly and easily giving a general impression, and > treating it as though it were the other extreme, the formal proof. > > Patricia What other way is there to prove the Halting Problem besides what I call Pathological Self-Reference and Diagonalization?

0 |

8/29/2012 11:46:31 PM

On 8/30/2012 12:57 PM, Patricia Shanahan wrote: > On 8/30/2012 9:27 AM, PeteOlcott wrote: >> On Aug 30, 10:21 am, Joshua Cranmer <Pidgeo...@verizon.invalid> wrote: > ... >>> Here is an example of a formal definition (I just adapted this on >>> the spot): >>> An acceptable computable history C for a given machine M is a sequence >>> of configurations C1, C2, ..., Cn such that C1 is the initial >>> configuration of M, Cn is a configuration where M is in the accept >>> state, and, for all k, the difference between Ck and Ck-1 is a legal >>> step according to the rules of M. > ... >> Not(<ThereExists> x (x <ElementOfSet> >> FinalStatesOf_H)<-->Halts(M, input) ) >> >> Where M is defined as >> --------------------- >> M(String input): >> if H(M, input) loop >> else halt >> >> FinalStatesOf_H >> a) accept >> b) reject >> > > Joshua's example started "An acceptable computable history C for a given > machine M is". That is, he said what he is defining, "An acceptable > computable history", any parameters, "a given machine M" and then the > word "is". It means that the subject of "is" is defined to be the same > thing as the complement. > > A meaningful definition of "pathological self-reference" might start > something like: "A proof P exhibits pathological self-reference if, and > only if, ...." where "..." would be replaced a condition that would tell > us whether or not P exhibits pathological self-reference. > > I assumed above that pathological self-reference is a property of a > proof. It might be a property of a statement. Yes the latter, it is not a property of a proof. > The definition should make > it absolutely obvious what types of things can exhibit pathological > self-reference, as well as how we evaluate whether they actually > exhibit it. > All that I specified was an infinite set of instances of Halting Problem Pathological Self-Reference. > At best, you seem to be aiming for an example. An example is not a > definition. Typical proofs e.g. 
of Goedel's first incompleteness > theorem do not mention accept or reject states. Does that mean they do > not exhibit pathological self-reference? > > Patricia How about we back up a step and work on an analogous problem and see where that leads? Is the set of all sets that are not members of themselves within the Universal Set or not?
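[Editor's note] The Russell question posed at the end of this post has a small runnable analogue. In the sketch below (a loose illustrative model, not ZFC), a "set" is represented by its membership predicate; the set of all sets that are not members of themselves then becomes a predicate whose self-application never produces a truth value:

```python
import sys

# Loose model (illustrative only): a "set" is its membership predicate.
# Russell's set R contains exactly those sets that are not members of
# themselves, i.e. R(s) = not s(s).
def russell(s):
    return not s(s)

# Asking "is R a member of itself?" unwinds into R(R) = not R(R) = ...
# Python surfaces the regress as a RecursionError instead of a truth value.
old_limit = sys.getrecursionlimit()
sys.setrecursionlimit(100)   # keep the failure quick
try:
    verdict = russell(russell)
except RecursionError:
    verdict = None           # no truth value was ever reached
finally:
    sys.setrecursionlimit(old_limit)

print(verdict)
```

The design point: in this model the question "is R in the Universal Set?" is unproblematic (R is a predicate, so yes under the model's terms); it is only the specific membership question R(R) that fails to bottom out. Whether that failure is analogous to the halting construction is exactly what is in dispute in this thread.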

0 |

8/31/2012 12:22:04 AM

On 8/31/2012 11:42 AM, Patricia Shanahan wrote: > On 8/30/2012 9:02 AM, Ben Bacarisse wrote: >> PeteOlcott <peteolcott@gmail.com> writes: >> >>> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote: >>>> PeteOlcott <peteolc...@gmail.com> writes: >>>> >>>> <snip> >>>> >>>>>> On 8/28/2012 1:48 PM, PeteOlcott wrote: >>>> >>>>>>> All of the examples of paradox of self-reference: >>>>>>> a) Liar Paradox >>>>>>> b) Barber Paradox >>>>>>> c) Gödel’s Incompleteness Theorem >>>>>>> d) Halting Problem >>>> >>>>>>> contain very slippery semantic structures that have never been >>>>>>> sufficiently elaborated to fully understand their true meaning. >>>>>>> Only a >>>>>>> formalism as robust as Montague Semantics could provide this >>>>>>> sufficient specification. >>>> >>>> <snip> >>>> >>>>> The problem is the same with all four. The semantics of these four is >>>>> not specified correctly as an acyclic graph. Whenever there are any >>>>> cycles in the semantic specification, the specification is erroneous. >>>> >>>> Consider a "long-run" decider. It answers yes/no to the question >>>> "does >>>> machine M execute more than 1000 steps when given input I". This is a >>>> specification about machines, so any putative long-run decider >>>> (let's call >>>> it L) can be applied to its own encoding: >>>> >>>> L(L, L) >>>> >>>> Is this specification erroneous? >>> >>> From the details that have been provided Pathological Self-Reference >>> has not been defined for L, and the specification does not seem to be >>> otherwise erroneous. >> >> So what makes one OK and the other "pathological"? How could I make >> this determination for myself? >> >> I suspect it's just that one is used in one of the proofs of a theorem >> you don't like and the other is not, but maybe there is more to it than >> that. >> > > Also, I'm a little concerned about the wording "From the details that > have been provided". 
If pathological self-reference is a useful and > meaningful property of an expression, it should be possible to say > whether an expression exhibits it without any other details. > If you ask me about the behavior of a function and only provide me with the function prototype and not the function body there is no way that my answer can be certain. > That tends to support your suspicion - if you were to use the long run > decider in some proof of a theorem Peter does not like, it would turn > out to be pathological self-reference. > > Patricia >
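[Editor's note] Ben's "long-run" decider is worth making concrete, because unlike a halting decider it is genuinely computable: the question is step-bounded, so simulation with a counter settles every instance. A minimal Python sketch, under the purely illustrative assumption that programs are modeled as generator functions that yield once per simulated step:

```python
def runs_more_than(prog, inp, bound=1000):
    """Decide whether prog(inp) executes more than `bound` steps.
    A 'program' is modeled as a generator function yielding once per
    step, so the decider can simulate it and cut off safely."""
    steps = 0
    for _ in prog(inp):
        steps += 1
        if steps > bound:
            return True
    return False

def quick(inp):      # halts after a handful of steps
    for _ in range(5):
        yield

def endless(inp):    # never halts on its own
    while True:
        yield

print(runs_more_than(quick, None))    # False
print(runs_more_than(endless, None))  # True
```

Self-application poses no trouble here: whatever a program does with the decider's verdict, the decider only ever needs to simulate at most bound + 1 steps before answering, so every instance, including L(L, L), has an answer the decider can return.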

0 |

8/31/2012 5:09:40 PM

On 8/31/2012 1:56 PM, Patricia Shanahan wrote: > On 8/31/2012 10:09 AM, Peter Olcott wrote: >> On 8/31/2012 11:42 AM, Patricia Shanahan wrote: >>> On 8/30/2012 9:02 AM, Ben Bacarisse wrote: >>>> PeteOlcott <peteolcott@gmail.com> writes: >>>> >>>>> On Aug 28, 6:23 pm, Ben Bacarisse <ben.use...@bsb.me.uk> wrote: >>>>>> PeteOlcott <peteolc...@gmail.com> writes: >>>>>> >>>>>> <snip> >>>>>> >>>>>>>> On 8/28/2012 1:48 PM, PeteOlcott wrote: >>>>>> >>>>>>>>> All of the examples of paradox of self-reference: >>>>>>>>> a) Liar Paradox >>>>>>>>> b) Barber Paradox >>>>>>>>> c) Gödel’s Incompleteness Theorem >>>>>>>>> d) Halting Problem >>>>>> >>>>>>>>> contain very slippery semantic structures that have never been >>>>>>>>> sufficiently elaborated to fully understand their true meaning. >>>>>>>>> Only a >>>>>>>>> formalism as robust as Montague Semantics could provide this >>>>>>>>> sufficient specification. >>>>>> >>>>>> <snip> >>>>>> >>>>>>> The problem is the same with all four. The semantics of these >>>>>>> four is >>>>>>> not specified correctly as an acyclic graph. Whenever there are any >>>>>>> cycles in the semantic specification, the specification is >>>>>>> erroneous. >>>>>> >>>>>> Consider a "long-run" decider. It answers yes/no to the question >>>>>> "does >>>>>> machine M execute more than 1000 steps when given input I". This >>>>>> is a >>>>>> specification about machines, so any putative long-run decider >>>>>> (let's call >>>>>> it L) can be applied to its own encoding: >>>>>> >>>>>> L(L, L) >>>>>> >>>>>> Is this specification erroneous? >>>>> >>>>> From the details that have been provided Pathological Self-Reference >>>>> has not been defined for L, and the specification does not seem to be >>>>> otherwise erroneous. >>>> >>>> So what makes one OK and the other "pathological"? How could I make >>>> this determination for myself? 
>>>> >>>> I suspect it's just that one is used in one of the proofs of a theorem >>>> you don't like and the other is not, but maybe there is more to it >>>> than >>>> that. >>>> >>> >>> Also, I'm a little concerned about the wording "From the details that >>> have been provided". If pathological self-reference is a useful and >>> meaningful property of an expression, it should be possible to say >>> whether an expression exhibits it without any other details. >>> >> If you ask me about the behavior of a function and only provide me with >> the function prototype and not the function body there is no way that my >> answer can be certain. > So whether L(L,L) is pathological self-reference or not depends not just > on L's functionality, which Ben did specify, in addition to the > prototype, but on how that functionality is implemented. > > What aspects of the implementation matter? If you were given an L > implementation, what tests would you apply to it to determine whether > L(L,L) is pathological self-reference? > > Patricia > Pathological Self-Reference is self-reference that prevents the functional specification of the desired end-result from being achieved. I have stated this countless times as analogous to the process of deriving an ill-formed question. The {answer} to this question is the desired functional end-result: Does TM x halt on input y?

0 |

8/31/2012 7:10:38 PM

On 8/31/2012 2:41 PM, Patricia Shanahan wrote: > On 8/31/2012 12:10 PM, Peter Olcott wrote: > ... >> Pathological Self-Reference is self-reference that prevents the >> functional specification of the desired end-result from being achieved. > > This seems to end the discussion. I am sure that pathological > self-reference is not something that would matter to most people, and > especially not to anyone who has worked on compilers. > Maybe I can specify Pathological Self-Reference later on. Now that I have a good understanding of Montague Grammar, I have a language as expressive as English and as precise as Math. I will have to specify a set of Meaning Postulates using something like Predicate Logic. > It is certainly not something that can be made rigorous enough for use > in mathematical proofs. > > Patricia >

0 |

9/1/2012 1:09:42 AM

On 8/31/2012 11:33 AM, Patricia Shanahan wrote: > On 8/30/2012 8:03 PM, PeteOlcott wrote: >> On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote: >>> On 8/30/2012 7:29 PM, PeteOlcott wrote: >>> ... >>> >>>> I think that I may have to build an analogical bridge from Russell's >>>> Paradox to the Halting Problem. >>>> Pathological Self-Reference has an analog within Russell's Paradox. >>> >>> Are you really using a system of axioms in which Russell's Paradox >>> is an >>> issue? If so, why? >> >> The error of the Halting Problem is exactly analogous to the error of >> Russell's Paradox along exactly one dimension. > > There are at least two dimensions in which they are exactly analogous. > > 1. In each case, assuming the existence of an entity (set of all sets; > halting decider) causes contradictions (a set that is a member of itself > if, and only if, it is not a member of itself; programs that halt if, > and only if, they do not halt). > The criterion measure then becomes: {Assumptions that cause Contradictions}. So the above two (RP and HP) would then be analogous to assuming the existence of a Square Circle: The assumption of the entity {Square Circle} results in the contradiction of requiring mutually exclusive properties. It would also be analogous to every other element in the set of {Assumptions that cause Contradictions}. > 2. The conventional solution, in each case, is to choose axioms that do > not imply the existence of the troublesome entity. ZFC, for example, > does not allow construction of the set of all sets. Theory of > computation researchers do not assume the existence of a halting > decider, and consider its existence disproved by the fact that assuming > its existence leads to contradictions. > > There may be other dimensions in which they are analogous, but I doubt > if there are any analogies between them as direct and significant as > those two. > > When can we expect a definition of "pathological self-reference"? 
> > Patricia >

0 |

9/1/2012 1:45:35 AM

On 8/31/2012 8:53 PM, Patricia Shanahan wrote: > On 8/31/2012 6:45 PM, Peter Olcott wrote: >> On 8/31/2012 11:33 AM, Patricia Shanahan wrote: >>> On 8/30/2012 8:03 PM, PeteOlcott wrote: >>>> On Aug 30, 9:42 pm, Patricia Shanahan <p...@acm.org> wrote: >>>>> On 8/30/2012 7:29 PM, PeteOlcott wrote: >>>>> ... >>>>> >>>>>> I think that I may have to build an analogical bridge from Russell's >>>>>> Paradox to the Halting Problem. >>>>>> Pathological Self-Reference has an analog within Russell's Paradox. >>>>> >>>>> Are you really using a system of axioms in which Russell's Paradox >>>>> is an >>>>> issue? If so, why? >>>> >>>> The error of the Halting Problem is exactly analogous to the error of >>>> Russell's Paradox along exactly one dimension. >>> >>> There are at least two dimensions in which they are exactly analogous. >>> >>> 1. In each case, assuming the existence of an entity (set of all sets; >>> halting decider) causes contradictions (a set that is a member of >>> itself >>> if, and only if, it is not a member of itself; programs that halt if, >>> and only if, they do not halt). >>> >> The criterion measure then becomes: >> {Assumptions that cause Contradictions}. >> >> So the above two (RP and HP) would then be analogous to assuming the >> existence of a Square Circle: >> The assumption of the entity {Square Circle} results in the >> contradiction of requiring mutually exclusive properties. > > Yup, assuming existence of a halting decider is just like assuming > existence of a set-of-all-sets. Either leads to contradictions. Perhaps > you are beginning to get it. > > Patricia > Assuming the existence of an answer to the question: What time is it, {yes or no}? {Assumptions that cause Contradictions}. Same thing, right?

0 |

9/1/2012 2:07:48 AM

On 8/31/2012 2:41 PM, Patricia Shanahan wrote: > On 8/31/2012 12:10 PM, Peter Olcott wrote: > ... >> Pathological Self-Reference is self-reference that prevents the >> functional specification of the desired end-result from being achieved. > > This seems to end the discussion. I am sure that pathological > self-reference is not something that would matter to most people, and > especially not to anyone who has worked on compilers. > {Assumptions that cause Contradictions} or the analogous {Requirements that cause Contradictions} would form one aspect of the specification of Pathological Self-Reference. > It is certainly not something that can be made rigorous enough for use > in mathematical proofs. > > Patricia >

0 |

9/1/2012 3:30:16 AM

On 9/1/2012 7:18 AM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> On 8/31/2012 2:41 PM, Patricia Shanahan wrote: >>> On 8/31/2012 12:10 PM, Peter Olcott wrote: >>> ... >>>> Pathological Self-Reference is self-reference that prevents the >>>> functional specification of the desired end-result from being achieved. >>> This seems to end the discussion. I am sure that pathological >>> self-reference is not something that would matter to most people, and >>> especially not to anyone who has worked on compilers. >>> >> {Assumptions that cause Contradictions} or the analogous >> {Requirements that cause Contradictions} would form one aspect of the >> specification of Pathological Self-Reference. > It's interesting that even though you have an idea that you are sure > is right (PSR) you still won't commit to what it is. You'll state only > one aspect of it (and that only in the subjunctive!). > > But that's all old hat now. By an extraordinary twist in the tale (well > done Patricia) agreement has been reached. > > <snip> Yet this is also the same thing as an ill-formed question. So the Halting Problem *is* based on an ill-formed question, and everyone that disagreed was wrong?

0 |

9/1/2012 12:58:46 PM

On 9/1/2012 8:20 AM, Ben Bacarisse wrote: > Peter Olcott <OCR4Screen> writes: > >> On 9/1/2012 7:18 AM, Ben Bacarisse wrote: >>> Peter Olcott <OCR4Screen> writes: >>> >>>> On 8/31/2012 2:41 PM, Patricia Shanahan wrote: >>>>> On 8/31/2012 12:10 PM, Peter Olcott wrote: >>>>> ... >>>>>> Pathological Self-Reference is self-reference that prevents the >>>>>> functional specification of the desired end-result from being achieved. >>>>> This seems to end the discussion. I am sure that pathological >>>>> self-reference is not something that would matter to most people, and >>>>> especially not to anyone who has worked on compilers. >>>>> >>>> {Assumptions that cause Contradictions} or the analogous >>>> {Requirements that cause Contradictions} would form one aspect of the >>>> specification of Pathological Self-Reference. >>> It's interesting that even though you have an idea that you are sure >>> is right (PSR) you still won't commit to what it is. You'll state only >>> one aspect of it (and that only in the subjunctive!). >>> >>> But that's all old hat now. By an extraordinary twist in the tale (well >>> done Patricia) agreement has been reached. >>> >>> <snip> >> Yet this is also the same thing as an ill-formed question. >> So the Halting Problem *is* based on an ill-formed question, and >> everyone that disagreed was wrong? > No. I predicted elsewhere that you would not know why people were > agreeing with you, but that you'd go back to saying wrong things very > soon. > An ill-formed question is any question that lacks a correct answer from the set of all possible answers. Since {Assumptions that cause Contradictions} are analogous to {Requirements that cause Contradictions} in that an entity {Assumption or Requirement} causes a contradiction, questions whose required answers form contradictions are also elements of the set of {Entities that cause contradictions}.

0 |

9/1/2012 1:51:12 PM