While on the subject of NP-completeness...

Since Cook's theorem, have there been any problems that were not known
to be in either P or NPC that were later determined to be in P?

Three problems that come to mind which seem hard, but which haven't
been shown to be in NPC or P, are factoring, graph isomorphism and
discrete logarithm.

Isn't it interesting that problems with this quality have been utilized
more in cryptography than those known to be in NPC?  Naively, one (at
least I) would think that problems known to be at least as hard, if not
harder, would lend themselves better to encryption systems.
fpluser <guest747@spamshield.hotmail.com> wrote:

: While on the subject of NP-completeness...
: Since Cook's theorem, have there been any problems that were not known
: to be in either P or NPC that were later determined to be in P?
: Three problems that come to mind which seem hard, but which haven't
: been shown to be in NPC or P, are factoring, graph isomorphism and
: discrete logarithm.

Primes and Linear Programming are two problems that were not known to
be in P when Garey and Johnson's "Computers and Intractability: A Guide
to the Theory of NP-Completeness" was first published, but that have
since been shown to be in P.  There are others, but these are the most
famous.

Stephen
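[An aside, not part of Stephen's post: the deterministic polynomial-time
algorithm behind the PRIMES result is AKS (2002), which is fiddly to
implement.  The sketch below shows the much older randomized
Miller-Rabin test instead, which already runs in time polynomial in the
number of bits of the input; the function name and parameters are mine,
for illustration only.]

import random

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test.

    Runs in time polynomial in the bit length of n, unlike trial
    division, whose cost grows with the value of n.  AKS gives a
    deterministic polynomial-time test but is considerably more
    involved.
    """
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    # write n - 1 as d * 2^s with d odd
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)          # modular exponentiation is polynomial-time
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False          # a witnesses that n is composite
    return True                   # n is probably prime

print(is_probable_prime(2**61 - 1))   # True (a Mersenne prime)
print(is_probable_prime(2**61 + 1))   # False (divisible by 3)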
On 3 Jul 2003 stephen@nomail.com wrote:

> Primes and Linear Programming are two problems that were not known to
> be in P when Garey and Johnson's "Computers and Intractability: A Guide
> to the Theory of NP-Completeness" was first published, but that have
> since been shown to be in P.  There are others, but these are the most
> famous.

Recently, recognition of Berge graphs/perfect graphs has been shown to
be in P, more than 40 years after they were first defined and studied.

On a somewhat different note, finding a maximum matching in a graph has
a similar flavour to finding a minimum vertex cover.  The paper giving
the first polynomial-time algorithm for maximum matching was, I believe,
the first (?) documented reference to a polynomial-time algorithm.  (I'm
not positive about the history of this; perhaps someone who knows can
add details here.  I just know that some argue that this motivated the
definition of an "efficient" algorithm.)

Jim
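[An aside, not part of Jim's post: the paper in question is presumably
Edmonds' 1965 "Paths, Trees, and Flowers", which gave a polynomial-time
maximum-matching algorithm for general graphs and explicitly argued for
polynomial time as the yardstick of efficiency.  Edmonds' blossom
algorithm is too long to sketch here; the code below covers only the
simpler bipartite special case, solved by repeatedly searching for
augmenting paths.  Function and variable names are mine, for
illustration.]

def max_bipartite_matching(adj, n_right):
    """Maximum matching in a bipartite graph via augmenting paths.

    adj[u] lists the right-side vertices adjacent to left vertex u.
    Returns the size of a maximum matching.  Edmonds' blossom
    algorithm extends the augmenting-path idea to general graphs.
    """
    match_right = [-1] * n_right      # match_right[v] = left partner of v, or -1

    def try_augment(u, seen):
        for v in adj[u]:
            if v in seen:
                continue
            seen.add(v)
            # v is free, or its current partner can be re-matched elsewhere
            if match_right[v] == -1 or try_augment(match_right[v], seen):
                match_right[v] = u
                return True
        return False

    matching = 0
    for u in range(len(adj)):
        if try_augment(u, set()):
            matching += 1
    return matching

# Example: left vertices 0..2, right vertices 0..2
print(max_bipartite_matching([[0, 1], [0], [1, 2]], 3))   # 3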
In article <frKcnYxfQsA3_56iXTWJig@comcast.com>,
fpluser <guest747@SPAMSHIELD.hotmail.com> wrote:

>Isn't it interesting that problems with this quality have been utilized
>more in cryptography than those known to be in NPC?  Naively, one (at
>least I) would think that problems known to be at least as hard, if not
>harder, would lend themselves better to encryption systems.

There's some discussion of this in Papadimitriou's book _Computational_
_Complexity_, and I've seen more thorough discussions elsewhere (no
references off the top of my head, though).  Part of the problem is
that NP-completeness is a "worst-case" property: many NP-complete
problems have plenty of easy instances, and what you typically need in
cryptography is a large family of instances, all of which are hard.

--
Tim Chow                          tchow-at-alum-dot-mit-dot-edu
The range of our projectiles---even ... the artillery---however great,
will never exceed four of those miles of which as many thousand separate
us from the center of the earth.  ---Galileo, Dialogues Concerning Two
New Sciences
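[An aside to make the worst-case point concrete (mine, not Tim's):
SUBSET-SUM is NP-complete, yet the textbook dynamic program decides any
instance in time roughly proportional to (number of items) times
(target value), so every instance whose numbers are small is easy; hard
instances need numbers exponentially large in the instance length.  A
minimal sketch, with illustrative names:]

def subset_sum(values, target):
    """Decide whether some subset of `values` sums to `target`.

    SUBSET-SUM is NP-complete, but this dynamic program runs in
    O(len(values) * target) time, so every instance with a small
    target (equivalently, small numbers) is easy.  Hardness only
    appears when the numbers are exponentially large in the length
    of the instance, which is one reason random instances cannot
    simply be assumed to be hard.
    """
    reachable = {0}
    for v in values:
        reachable |= {r + v for r in reachable if r + v <= target}
    return target in reachable

print(subset_sum([3, 34, 4, 12, 5, 2], 9))    # True  (4 + 5)
print(subset_sum([3, 34, 4, 12, 5, 2], 30))   # False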
> tchow@lsa.umich.edu wrote:
>> fpluser <guest747@SPAMSHIELD.hotmail.com> wrote:
>>> Isn't it interesting that problems with this quality have been utilized
>>> more in cryptography than those known to be in NPC?  Naively, one (at
>>> least I) would think that problems known to be at least as hard, if not
>>> harder, would lend themselves better to encryption systems.

The best bet is a lattice-based cryptosystem: for these there are fairly
strong hardness assumptions, and even worst-case to average-case results
that address the fact that many NP-complete problems are "easy" on
average.
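[For the curious, here is a toy sketch (mine, not from any post, and
emphatically not a secure implementation) of the LWE flavour of
lattice-based encryption being alluded to: the public key is a batch of
noisy linear equations in a secret vector, a ciphertext hides one bit
behind the noise, and the hardness of recovering the secret is backed
by worst-case lattice assumptions.  Parameters and names are
illustrative only.]

import random

# Toy LWE-style encryption of a single bit (insecure toy parameters).
q = 3329          # modulus
n = 16            # secret dimension
m = 64            # number of samples in the public key

def keygen():
    s = [random.randrange(q) for _ in range(n)]                       # secret vector
    A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]   # public matrix
    e = [random.randrange(-2, 3) for _ in range(m)]                   # small noise
    b = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]
    return s, (A, b)

def encrypt(pk, bit):
    A, b = pk
    S = [i for i in range(m) if random.random() < 0.5]   # random subset of samples
    c1 = [sum(A[i][j] for i in S) % q for j in range(n)]
    c2 = (sum(b[i] for i in S) + bit * (q // 2)) % q
    return c1, c2

def decrypt(s, ct):
    c1, c2 = ct
    d = (c2 - sum(c1[j] * s[j] for j in range(n))) % q
    # d is small noise for bit 0, or roughly q/2 plus noise for bit 1
    return 0 if min(d, q - d) < q // 4 else 1

s, pk = keygen()
for bit in (0, 1, 1, 0):
    assert decrypt(s, encrypt(pk, bit)) == bit
print("toy LWE round-trips OK")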
jaco@cs.tut.fi wrote on 07/04/2003 03:39 PM:

> tchow@lsa.umich.edu wrote:
>
>> fpluser <guest747@SPAMSHIELD.hotmail.com> wrote:
>>
>>> Isn't it interesting that problems with this quality have been utilized
>>> more in cryptography than those known to be in NPC?  Naively, one (at
>>> least I) would think that problems known to be at least as hard, if not
>>> harder, would lend themselves better to encryption systems.
>>
>> There's some discussion of this in Papadimitriou's book _Computational_
>> _Complexity_, and I've seen more thorough discussions elsewhere (no
>> references off the top of my head, though).  Part of the problem is
>> that NP-completeness is a "worst-case" property: many NP-complete
>> problems have plenty of easy instances, and what you typically need in
>> cryptography is a large family of instances, all of which are hard.
>
> Echoing this, I just happened to read the following fragment in the
> paper "NP-completeness: A retrospective" by Papadimitriou:
>
>   It is now common knowledge among computer scientists that
>   NP-completeness is largely irrelevant to public-key cryptography,
>   since in that area one needs sophisticated \emph{cryptographic
>   assumptions} that go beyond NP-completeness and worst-case
>   polynomial-time computation [19]; furthermore, cryptographic
>   protocols based on NP-complete problems have been ill-fated.
>   Fortunately, the founders of modern cryptography did not know this.
>
> [19] R. L. Rivest, "Cryptography", pp. 717-755 in "Handbook of
>      Theoretical Computer Science", 1990.

I wouldn't say that McEliece's cryptosystem, which is "based" on
Maximum-Likelihood Decoding (NP-hard), is ill-fated.

Best,
-Ari
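[A concrete aside on the "ill-fated" remark, mine rather than Ari's: the
best-known casualty is the Merkle-Hellman knapsack cryptosystem, built
on subset-sum (NP-complete).  Its private instances are superincreasing
and therefore solvable by a linear-time greedy scan, and Shamir broke
the disguised public instances in 1982.  A small illustrative sketch of
why the private instances are trivial, with names of my choosing:]

def solve_superincreasing(weights, target):
    """Solve subset-sum when the weights are superincreasing, i.e.
    each weight exceeds the sum of all previous ones, as in the
    private key of the Merkle-Hellman knapsack cryptosystem.

    A greedy scan from the largest weight down runs in linear time,
    even though general SUBSET-SUM is NP-complete.
    """
    chosen = []
    for w in reversed(weights):
        if w <= target:
            chosen.append(w)
            target -= w
    return sorted(chosen) if target == 0 else None

# superincreasing: 3 > 2, 7 > 2+3, 14 > 2+3+7, 30 > 2+3+7+14
print(solve_superincreasing([2, 3, 7, 14, 30], 37))   # [7, 30]
print(solve_superincreasing([2, 3, 7, 14, 30], 25))   # None (no solution)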