76

Do you know sensible algorithms that run in polynomial time in (input length + output length), but whose asymptotic running time in that measure has a really huge exponent/constant? (Or at least, where the best proven upper bound on the running time is of that form.)

  • See mathoverflow.net/questions/65412: "Worst known algorithm in terms of big-O or more precisely big-Theta." I posted an answer there. – Commented May 19, 2011 at 12:09
  • There's Reingold's LOGSPACE algorithm for connectivity (see a question concerning its time complexity), but I doubt it's sensible in the sense you mean here... – Commented May 19, 2011 at 14:35
  • @Joseph O'Rourke: I have the "fat rectangle" paper on my desk right now! – Commented May 19, 2011 at 16:11
  • Although the $n^{42}$ was a legitimate calculation (dynamic programming pumps it up), I included it in the conference version as something of a joke, a joke removed in the journal version. – Commented May 19, 2011 at 17:13
  • Recognition of perfect graphs is in $O(|V(G)|^9)$, and a breakthrough seems to be necessary to improve this. – Commented May 23, 2011 at 13:56

19 Answers

47

Algorithms based on the regularity lemma are good examples of polynomial-time algorithms with terrible constants (either in the exponent or as leading coefficients).

The regularity lemma of Szemerédi tells you that in any graph on $n$ vertices you can partition the vertices into sets such that the edges between pairs of sets are "pseudo-random" (i.e., densities of sufficiently large subsets look like densities in a random graph). This is a structure that is very nice to work with, and as a consequence there are algorithms that use the partition. The catch is that the number of sets in the partition is a tower of exponentials in the pseudo-randomness parameter (see http://en.wikipedia.org/wiki/Szemer%C3%A9di_regularity_lemma).

For some links to algorithms that rely on the regularity lemma, see, e.g.: http://www.cs.cmu.edu/~ryanw/regularity-journ.pdf
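
To get a feel for how fast a tower-type bound grows, here is a small illustration of my own (not taken from the linked papers); the `tower` helper below is just iterated exponentiation, which is the shape of the lemma's guarantee on the number of parts as the pseudo-randomness parameter $\epsilon$ shrinks.

```python
# Iterated exponentiation ("tower of twos"): tower(0) = 1, tower(h) = 2**tower(h-1).
# The regularity lemma's bound on the number of parts has this shape, with the
# height of the tower growing as the pseudo-randomness parameter shrinks.

def tower(height: int) -> int:
    value = 1
    for _ in range(height):
        value = 2 ** value
    return value

for h in range(1, 6):
    print(f"tower({h}) has {len(str(tower(h)))} decimal digits")
# tower(5) = 2**65536 already has 19,729 digits; tower(6) cannot even be
# written down, let alone used as a bound on the number of parts.
```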

  • Good point! Though I'm not aware of a computational problem with a corresponding lower bound of a tower of exponentials. Gowers proved such a lower bound for the regularity lemma itself, but I don't know of a computational problem where it applies. – arnab, Commented May 20, 2011 at 17:50
  • I believe that the flocking algorithms described by Chazelle in this paper (arxiv.org/abs/0905.4241) have matching lower bounds, with convergence time that is a tower of twos. – Commented Jan 24, 2012 at 22:39
  • In a recent paper (eccc.hpi-web.de/report/2014/018), I show some other algorithms using the (arithmetic) regularity lemma that have huge constants hidden by the O() notation. – arnab, Commented Jun 20, 2014 at 7:21

64

Here are two screenshots from An Energy-Driven Approach to Linkage Unfolding by Jason H. Cantarella, Erik D. Demaine, Hayley N. Iben, James F. O’Brien, SOCG 2004:

Corollary 1. The number of steps in our algorithm is at most $1752484608000\, n^{79} L^{25} / D(\Theta_0)^{26}$.

Corollary 2. The number of steps in our algorithm is at most $117607251220365312000\, n^{79} (\ell_{\max}/d_{\min}(\Theta_0))^{26}$.
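
Just to get a sense of the magnitude, here is a toy evaluation of Corollary 2's bound; the input values ($n = 10$ bars and a length ratio of 2) are arbitrary illustrative choices, not values from the paper.

```python
# Plug toy values into Corollary 2's bound: C * n^79 * (l_max / d_min)^26.
C = 117607251220365312000
n, ratio = 10, 2                                   # arbitrary illustrative values
steps = C * n**79 * ratio**26
print(f"roughly 10^{len(str(steps)) - 1} steps")   # roughly 10^106
```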

  • The constant is way more impressive than the power of $n$ :) – Commented May 27, 2011 at 3:18
  • This is an algorithm with huge exponent AND huge constant... – Commented May 27, 2011 at 3:18
  • The same bounds are true for Bubblesort. – Raphael, Commented Apr 3, 2013 at 21:06
  • How tight are these bounds? – Max, Commented Apr 5, 2014 at 15:45

62

News from SODA 2013: the Max-Bisection problem is approximable to within a factor of 0.8776 in around $O(n^{10^{100}})$ time.

42

Here is a recent result from the FUN 2012 paper Picture-Hanging Puzzles by Erik D. Demaine, Martin L. Demaine, Yair N. Minsky, Joseph S. B. Mitchell, Ronald L. Rivest and Mihai Patrascu.

We show how to hang a picture by wrapping rope around n nails, making a polynomial number of twists, such that the picture falls whenever any k out of the n nails get removed, and the picture remains hanging when fewer than k nails get removed.

Don't let the "polynomial number" fool you... it turns out to be $O(n^{43737})$.

  • That's $(618)^{\text{#gates in an AKS sorting network}}$ (!!) – Jeffε, Commented Apr 11, 2012 at 7:23

28

There exists a class of problems whose solutions are hard to compute, but approximating them to any desired accuracy is easy, in the sense that there are polynomial-time algorithms that can approximate the solution to within a factor of $(1+\epsilon)$ for any constant $\epsilon > 0$. However, there's a catch: the running time of the approximators may depend on $1/\epsilon$ quite badly, e.g., be $O(n^{1/\epsilon})$.

See more info here: http://en.wikipedia.org/wiki/Polynomial-time_approximation_scheme
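
As a quick numerical illustration (the values of $n$ and $\epsilon$ below are arbitrary, chosen only to show the trend), here is how an $n^{1/\epsilon}$ running time blows up as the accuracy requirement tightens.

```python
# How an n^(1/eps) PTAS running time grows as eps shrinks (toy numbers only).
n = 1000
for eps in (0.5, 0.2, 0.1, 0.05):
    print(f"eps = {eps}: n^(1/eps) = {n ** (1 / eps):.2e} steps")
```
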
24

The current best known algorithm for recognizing map graphs (a generalization of planar graphs) runs in $n^{120}$ time. Thorup, Map graphs in polynomial time.

Computing the equilibrium of the Arrow-Debreu market takes $O(n^6\log(nU))$ max-flow computations, where $U$ is the maximum utility. Duan, Mehlhorn, A Combinatorial Polynomial Algorithm for the Linear Arrow-Debreu Market.

  • I get an error from IEEE when I follow your link, but I assume you're referring to Thorup's FOCS '98 paper "Map graphs in polynomial time". – Commented Dec 7, 2012 at 22:51
  • I do mean that paper, and it loads fine for me. – adrianN, Commented Dec 12, 2012 at 19:45
  • Works for me from the U. – Commented Dec 12, 2012 at 20:06

22

Although the run-time for such algorithms has been subsequently improved, the original algorithm for sampling a point from a convex body had run time $\tilde{O}(n^{19})$.

Dyer, Frieze, and Kannan: http://portal.acm.org/citation.cfm?id=102783

21

If $L$ is a tabular modal or superintuitionistic logic, then the extended Frege and substitution Frege proof systems for $L$ are polynomially equivalent, and polynomially faithfully interpretable in the classical EF (this is Theorem 5.10 in this paper of mine). The exponent $c$ of the polynomial simulations is not explicitly stated in Theorem 5.10, but the inductive proof of the theorem gives $c=2^{O(|F|)}$, where $F$ is a finite Kripke frame which generates $L$, so it can be as huge as you want depending on the logic. (It gets worse in Theorem 5.20.)

18

The "convex skull" problem is to find the maximum-area convex polygon inside a given simple polygon. The fastest algorithm known for this problem runs in $O(n^7)$ time [Chang and Yap, DCG 1986].

16

The solution of Annihilation Games (Fraenkel and Yesha) has complexity $O(n^6)$.

16

Sandpile Transience Problem

Consider the following process. Take a thick tile and drop sand particles on it one grain at a time. A heap gradually builds up, and then a large portion of the sand slides off from the edges of the tile. If we continue to add sand particles, after a certain point in time the configuration of the heap repeats. Thereafter, the configuration becomes recurrent, i.e., it keeps revisiting states that were seen earlier.

Consider the following model of the above process. Model the tile as an $n \times n$ grid. Sand particles are dropped on the vertices of this grid. If the number of particles at a vertex exceeds its degree, then the vertex collapses and its particles move to adjacent vertices (in a cascading manner). A sand particle that reaches a boundary vertex disappears into a sink ("falls off"). This is known as the Abelian Sandpile Model.

Problem: How long does it take for the configuration to become recurrent in terms of $n$, assuming the worst-case strategy for dropping sand particles?

In SODA '07, László Babai and Igor Gorodezky proved this time to be polynomially bounded, but...

[the polynomial bound from their paper appeared here as an image]

In SODA '12, Ayush Choure and Sundar Vishwanathan improved this bound to $O(n^7)$.

This answer would have looked slightly better if not for their improvement :)
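
To make the toppling rule concrete, here is a minimal simulation of one common variant of the model (threshold 4 at every vertex, grains pushed off the grid vanish into the sink); this is my own illustration of the dynamics, not of the Babai–Gorodezky or Choure–Vishwanathan analyses.

```python
# Stabilise one sandpile configuration on an n-by-n grid: any vertex holding
# at least 4 grains topples, sending one grain to each neighbour; grains
# pushed off the boundary disappear into the sink.

def stabilise(grid):
    n = len(grid)
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                if grid[i][j] >= 4:
                    grid[i][j] -= 4
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < n and 0 <= nj < n:
                            grid[ni][nj] += 1
                    changed = True
    return grid

grid = [[0] * 5 for _ in range(5)]
grid[2][2] = 16          # drop 16 grains on the centre vertex
for row in stabilise(grid):
    print(row)
```
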
15

The Robertson–Seymour theorem, a.k.a. the Graph Minor Theorem, establishes among other things that for every fixed graph $G$ there exists an $O(n^3)$ algorithm that determines whether an arbitrary graph $H$ (of size $n$) has $G$ as a minor. The proof is nonconstructive, and the (I think non-uniform) multiplicative constant is probably so enormous that no formula for it can be written down explicitly (e.g., as a primitive recursive function of $G$).

http://en.wikipedia.org/wiki/Graph_minor_theorem#Polynomial_time_recognition

14

There are some nonconstructive algorithms, most notably those of Fellows and Langston, and Courcelle's theorem.

Also, Bodlaender's linear-time algorithm for tree-width and Courcelle's theorem are notoriously impractical.

12

In their ICALP 2014 paper, Andreas Björklund and Thore Husfeldt give the first (randomized) polynomial-time algorithm that computes two disjoint paths of minimum total length (the sum of the two paths' lengths) between two given pairs of vertices. The running time is $O(n^{11})$.

6

In Polygon rectangulation, part 2: Minimum number of fat rectangles, a practical modification of the rectangle-partition problem, motivated by concerns in VLSI, is presented:

Fat Rectangle Optimization Problem: Given an orthogonal polygon $P$, maximize the shortest side $\delta$ over all rectangulations of $P$. Among the partitions with the same $\delta$, choose the partition with the fewest rectangles.

As yet, only a theoretical algorithm exists, with a running time of $O(n^{42})$. (That is not a typo; it is obtained through a "natural" dynamic programming solution to the problem stated there.)

1

In the worst case, a diagram of the unknot with $c$ crossings requires at most $(236 c)^{11}$ Reidemeister moves to be reduced to the trivial diagram [Lackenby 2015].

0

One family of algorithms with huge constants is the "run all algorithms" type.

For example, suppose there exists a polynomial-time 3-SAT algorithm. If one exists, then here is another one:

Run all possible computer programs at once, devoting a $1/n^2$ fraction of the compute to the $n$-th program. Give all of these programs the unsolved instance as input, and check any solutions that the programs output.

If there is a polynomial-time 3-SAT algorithm at position $n$ in the enumeration, then a $1/n^2$ fraction of the compute will be dedicated to running it. Here $n \approx 2^{\text{bit length of the program}}$, so the constant is going to be rather large unless the 3-SAT solver is a VERY simple program.
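
Here is a minimal, runnable sketch of the "run everything and verify" idea; a short hand-picked list of candidate solvers stands in for the enumeration of all programs, and the $1/n^2$ time-slicing is elided. The real construction dovetails over every binary string, which is exactly what makes the constant astronomical.

```python
# Run several candidate "solvers" on a 3-SAT instance and verify whatever they
# output; because every answer is checked, a wrong candidate can never fool us.
# (The hand-picked candidate list and the elided 1/n^2 time-slicing are
# simplifications for the sake of a short runnable example.)

from itertools import product

def satisfies(formula, assignment):
    """CNF check: formula is a list of clauses, literals are +/- variable indices."""
    return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
               for clause in formula)

def brute_force(formula, num_vars):
    for bits in product([False, True], repeat=num_vars):
        assignment = dict(enumerate(bits, start=1))
        if satisfies(formula, assignment):
            return assignment
    return None

def bogus(formula, num_vars):
    return {v: True for v in range(1, num_vars + 1)}   # usually wrong

def universal_search(formula, num_vars, candidates):
    for solver in candidates:        # the real version dovetails over all programs
        assignment = solver(formula, num_vars)
        if assignment is not None and satisfies(formula, assignment):
            return assignment
    return None

# (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
formula = [[1, 2], [-1, 3], [-2, -3]]
print(universal_search(formula, 3, [bogus, brute_force]))
```
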
-2

Surprisingly, one of the most obvious answers has not been posted yet: finding a clique of size $c$ (in edges or vertices) apparently takes $O(n^c)$ time by the naive brute-force algorithm that enumerates all possibilities, or more accurately a number of steps proportional to $\binom{n}{c}$. (Strangely enough, this basic factoid seems to be rarely pointed out in the literature.) However, a strict proof that this much time is required would imply $\mathsf{P \neq NP}$, so this question is related to that famous open conjecture, virtually equivalent to it. Other NP-type problems can be parameterized in this way.
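
For concreteness, here is the brute-force enumeration referred to above, as a short sketch: try every $c$-subset of vertices and check whether it induces a clique, which is the $\binom{n}{c} \approx n^c/c!$ procedure (the small example graph is just an illustration).

```python
# Naive clique search: test every c-subset of vertices for pairwise adjacency.
from itertools import combinations

def has_clique(adj, c):
    """adj maps each vertex to the set of its neighbours."""
    return any(
        all(v in adj[u] for u, v in combinations(subset, 2))
        for subset in combinations(list(adj), c)
    )

# A 4-cycle has cliques of size 2 (its edges) but none of size 3.
cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(has_clique(cycle, 2), has_clique(cycle, 3))   # True False
```
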
  • 1. There is a (simple) algorithm that improves the exponent slightly. 2. This is a much stronger statement than P not equal to NP, just as ETH is stronger than P not equal to NP. I think algorithms like this have not been pointed out because it seems that the OP is not interested in exhaustive-search-type algorithms. – Commented Mar 14, 2012 at 5:40
  • Nitpicking: finding a clique with $c$ edges can be done in $n^{\sqrt{c}}$ or so time (since we are looking for a clique with $O(\sqrt{c})$ vertices). – Commented Mar 14, 2012 at 8:41
  • You should be careful in conjecturing that all NP-complete problems are solvable only via brute-force search. Like I said, finding a clique can be sped up with matrix multiplication. For many other problems we have exact exponential-time algorithms with better exponents than brute-force search. What you're talking of most closely resembles the exponential time hypothesis: the (much stronger than P neq NP) conjecture that for all $k > 2$, $k$-SAT requires $2^{s_k n}$ time where $s_k > 0$. – Commented Mar 14, 2012 at 16:04
  • There are many cases where brute-force search is not optimal for an NP-hard problem. Three classic examples: (1) a vertex cover of size $k$ can be found in time e.g. $2^k\cdot n$, (2) a path of length $k$ can be found in similar time, (3) an independent set of size $k$ in a planar graph can be found in time $2^{O(\sqrt{k})}\cdot n$. BTW: on the issue of ETH and the necessity of brute force, you might find our survey interesting: cs.bme.hu/~dmarx/papers/survey-eth-beatcs.pdf – Commented Mar 14, 2012 at 20:36
  • @vzn: $2^{O(\sqrt{n})}$ is much faster than $2^{O(n)}$. – Jeffε, Commented Mar 14, 2012 at 21:51

-2

Computing matrix rigidity [1] via brute-force/naive enumeration apparently takes $O(2^n)$ time for matrices with $n$ entries. This can be seen as the converging limit of a sequence of increasingly accurate estimates that take $\binom{n}{1}$, $\binom{n}{2}$, $\binom{n}{3}$, ... steps. In other words, each estimate runs in P-time $O(n^c)$ for an arbitrary fixed exponent $c$ (i.e., $\binom{n}{c}$ steps): the naive algorithm chooses some $c$ entries of the matrix to change and tests for the resulting reduction in rank. This is not totally surprising, given that matrix rigidity has been related to computing circuit lower bounds. It follows a pattern where many algorithms have a conjectured P-time solution for some parameter, but a solid proof of a lower bound would likely imply $\mathsf{P \neq NP}$ or something stronger.

[1] Questions about computing matrix rigidity
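
As an illustration of the enumeration described above, here is a toy version over GF(2) (over the reals one would also have to optimize the values of the changed entries, so this is a deliberate simplification): for $k = 0, 1, 2, \ldots$ it tries all ways to flip $k$ entries and checks whether the rank drops to the target.

```python
# Toy brute-force matrix rigidity over GF(2): the least number of entries of M
# that must be flipped to bring its rank down to at most r.
from itertools import combinations

def rank_gf2(rows):
    """Rank over GF(2); rows is a list of 0/1 lists (Gaussian elimination mod 2)."""
    rows = [r[:] for r in rows]
    rank = 0
    for col in range(len(rows[0])):
        pivot = next((i for i in range(rank, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and rows[i][col]:
                rows[i] = [a ^ b for a, b in zip(rows[i], rows[rank])]
        rank += 1
    return rank

def rigidity_gf2(matrix, r):
    cells = [(i, j) for i in range(len(matrix)) for j in range(len(matrix[0]))]
    for k in range(len(cells) + 1):            # try 0, 1, 2, ... flipped entries
        for subset in combinations(cells, k):  # roughly C(nm, k) subsets
            changed = [row[:] for row in matrix]
            for i, j in subset:
                changed[i][j] ^= 1             # over GF(2), "change" means flip
            if rank_gf2(changed) <= r:
                return k

identity3 = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(rigidity_gf2(identity3, 1))   # 2: zero out two of the diagonal ones
```
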
  • Not sure how this is different (for example) from trying to find a max clique by enumerating over all sets of size k, for increasing k. Each step is also P-time for fixed k. – Commented Mar 14, 2012 at 2:34
  • Yes, it's very similar, and reminds me of the Hartmanis isomorphism conjecture for NP sets. Even if the isomorphism conjecture is not true (current consensus/conventional wisdom seems to lean against it), it seems NP sets have some similar but perhaps somewhat weaker property, which also seems to require exhaustive search. – vzn, Commented Mar 14, 2012 at 15:09
