Hardness of Approximation Between P and NP. Aviad Rubinstein

      Recall Caratheodory's theorem: any point in the convex hull of n points in ℝ^d can be written as a convex combination of d + 1 of the points. In general, d + 1 points are necessary, but this number can be reduced drastically if we are willing to settle for approximation. In particular, Barman [2015] proves an approximate Caratheodory's theorem that requires only r = O(p/ε²) points to express a point v̂ such that ∥v − v̂∥_p < ε, assuming the n points belong to a unit ℓ_p-ball. In particular, v̂ can be written as an average over a multi-set of r out of the n points.
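      Barman's theorem has a short probabilistic proof: write v as a convex combination of the n points, sample r points i.i.d. from that distribution, and average them. Below is a minimal numerical sketch of this sampling argument for the ℓ_2 case; the function name, dimensions, and error check are illustrative choices, not from the text.

```python
import numpy as np

def approx_caratheodory(points, weights, r, rng):
    """Sample r of the n points i.i.d. according to the convex weights and
    average them; the average approximates v = sum_i weights[i] * points[i]."""
    idx = rng.choice(len(points), size=r, p=weights)
    return points[idx].mean(axis=0)

rng = np.random.default_rng(0)
n, d = 1000, 50
points = rng.normal(size=(n, d))
# Normalize so all points lie in the unit l2-ball.
points /= np.maximum(np.linalg.norm(points, axis=1, keepdims=True), 1.0)
weights = rng.random(n)
weights /= weights.sum()          # an arbitrary convex combination
v = weights @ points              # the exact point in the convex hull
v_hat = approx_caratheodory(points, weights, r=400, rng=rng)
print(np.linalg.norm(v - v_hat))  # error decays like O(1/sqrt(r)) for p = 2
```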

      Viewing the columns of A as n vectors in ℝ^n, Barman observes that (after proper normalization) the point v = Ay* is in their convex hull. If we only want to approximately solve the bilinear optimization (1.1), we can drastically reduce the dimension of the problem by enumerating over all choices of v̂, and for each candidate solving the optimization problem with Ay replaced by v̂. It turns out that in order to guarantee a good additive approximation of (1.1), we need to set p ≈ log n. The dominant term in the running time of this meta-algorithm is the enumeration over all choices of v̂ (all multi-sets of r of the columns of A), which takes approximately n^r = n^{O(log n/ε²)}, i.e., quasi-polynomial time.
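      To see the meta-algorithm end to end, here is a brute-force sketch, under the assumption that (1.1) is the bilinear problem max x^⊤Ay with x, y ranging over probability simplices (the setting of the Nash equilibrium application); each multi-set of r columns then defines a candidate ŷ, and the inner maximization over x is just the largest coordinate of Aŷ. Names and the tiny example are mine.

```python
import itertools
import numpy as np

def barman_meta(A, r):
    """Enumerate all multi-sets of r columns of A.  Each defines a candidate
    y (uniform over the multi-set); for x, y in the simplex, the inner
    max over x of x^T (A y) is the largest coordinate of A y.
    Enumeration costs roughly n^r: quasi-polynomial for r = O(log n / eps^2)."""
    n = A.shape[1]
    best_val, best_y = -np.inf, None
    for combo in itertools.combinations_with_replacement(range(n), r):
        y = np.zeros(n)
        for j in combo:
            y[j] += 1.0 / r          # uniform average over the multi-set
        val = (A @ y).max()          # inner maximization over x
        if val > best_val:
            best_val, best_y = val, y
    return best_val, best_y

# Tiny usage example on a 2x2 payoff-like matrix.
A = np.array([[0.2, 0.9], [0.8, 0.1]])
print(barman_meta(A, r=3))
```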

      The above meta-algorithm provides evidence that the approximate variant of all those problems is much easier than solving them exactly: in particular, we believe that NP-hard (respectively, PPAD-hard) problems like 3-SAT (resp. END-OF-A-LINE) require approximately 2^n time. This belief is formulated by the Exponential Time Hypothesis, or ETH [Impagliazzo et al. 2001] (resp. ETH for PPAD [Babichenko et al. 2016]). The reasons we believe in ETH are similar to the ones outlined in the previous section for our belief that END-OF-A-LINE is intractable: on one hand, we have a combination of unconditional lower bounds in analogous models and little or no progress on algorithms; on the other hand, we are “forced” to believe it because we have no idea how to prove it (in particular, ETH implies the much weaker P ≠ NP).

      However, quasi-polynomial time is still both unrealistically slow in practice and short of our gold standard of polynomial time in theory. The main question we address in this section is:

       Can the quasi-polynomial time be improved to polynomial time?

      For Sparse PCA this is indeed possible [Alon et al. 2013, Asteris et al. 2015, Chan et al. 2016]. But for several other problems we can prove that, assuming ETH, quasi-polynomial time is actually necessary:

      • Finding a k-subgraph that is almost a clique; this is one of the most fundamental approximation problems in theoretical computer science.

      • Finding and counting stable communities in a friendship graph; this problem received a lot of attention in recent years with the emergence of online social networks.

      • Finding an approximately optimal signaling scheme; this resolves a recent open question by Dughmi.

      • Computing the VC and Littlestone’s Dimensions of a binary concept class, two of the most fundamental quantities in learning theory.

      A common approach to all our proofs is the birthday repetition framework due to Aaronson et al. [2014]: construct a reduction from 3-SAT to any of the above problems, with reduction size N ≈ 2^{√n}. Then, assuming ETH, one needs approximately 2^n ≈ N^{log N} time to solve the problem on the larger instance. A key step in the reduction is to consider subsets of √n variables; then by the birthday paradox any two subsets are likely to share at least one variable (hence the name “birthday repetition”).
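      The quantitative heart of the framework is elementary: two independent uniformly random √n-subsets of the n variables are disjoint with probability roughly (1 − √n/n)^{√n} ≈ 1/e, so they intersect with constant probability. A quick empirical sanity check of this calculation (parameters are illustrative):

```python
import math
import random

def intersect_prob(n, trials=20000, seed=0):
    """Empirical probability that two independent random sqrt(n)-subsets
    of {0, ..., n-1} share at least one element."""
    rng = random.Random(seed)
    k = math.isqrt(n)
    hits = sum(
        bool(set(rng.sample(range(n), k)) & set(rng.sample(range(n), k)))
        for _ in range(trials)
    )
    return hits / trials

# Pr[disjoint] ~ (1 - k/n)^k -> 1/e, so the intersection probability
# should hover near 1 - 1/e ~ 0.63 for every n.
for n in (100, 400, 1600):
    print(n, intersect_prob(n))
```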

      1.2.1 Densest k-Subgraph

      k-CLIQUE is one of the most fundamental problems in computer science: given a graph, decide whether it has a fully connected induced subgraph on k vertices. Since it was proven NP-complete by Karp [1972], extensive research has investigated the complexity of its relaxations.

      We consider two natural relaxations of k-CLIQUE that have received significant attention from both the algorithmic and complexity communities. The first is to relax the “k” requirement, i.e., to look for a smaller clique: given an n-vertex graph G containing a clique of size k, find a clique of size at least δk, for some parameter 0 < δ < 1.

      The second natural relaxation is to relax the “Clique” requirement, replacing it with the more modest goal of finding a subgraph that is almost a clique: given an n-vertex graph G containing a clique of size k, find an induced subgraph of G of size k with (edge) density at least 1 − ε, for some parameter 0 < ε < 1.
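      To make the second relaxation concrete, the following small checker computes the edge density of an induced subgraph and tests the (1 − ε)-dense condition; the edge-list representation and the example graph are arbitrary choices for illustration.

```python
from itertools import combinations

def edge_density(edges, S):
    """Edge density of the subgraph induced by vertex set S: the fraction
    of the C(|S|, 2) possible pairs that are actually edges."""
    E = {frozenset(e) for e in edges}
    pairs = list(combinations(sorted(S), 2))
    return sum(frozenset(p) in E for p in pairs) / len(pairs)

def is_dense_subgraph(edges, S, eps):
    """Check the relaxed goal: S induces a (1 - eps)-dense subgraph."""
    return edge_density(edges, S) >= 1 - eps

# A 4-clique with one edge removed has density 5/6.
edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3)]
print(edge_density(edges, {0, 1, 2, 3}))                 # 0.8333...
print(is_dense_subgraph(edges, {0, 1, 2, 3}, eps=0.2))   # True
```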

      The first relaxation has been a motivating example throughout a long line of research that laid the foundations for NP-hardness of approximation [Arora and Safra 1998, Arora et al. 1998, Feige et al. 1996, Håstad 1999, Khot 2001, Zuckerman 2007]. In particular, we now know that it is NP-hard to distinguish between a graph that has a clique of size k, and a graph whose largest induced clique is of size at most k′ = δk, where δ = 1/n^{1−ε} [Zuckerman 2007]. Until our work, the computational complexity of the second relaxation remained largely open. There are a couple of (very different) quasi-polynomial algorithms that guarantee finding a (1 − ε)-dense k-subgraph in every graph containing a k-clique: the meta-algorithm by Barman, which we outlined above, and an older algorithm due to Feige and Seltser [1997]; but nothing non-trivial was known about hardness of approximation.

      In Chapter 12 we prove that, assuming ETH, even if one makes both relaxations the problem remains intractable. In particular, even if the graph contains a clique of size k, it takes quasi-polynomial time to find a (1 − ε)-dense δk-subgraph, for constant ε > 0 and δ = o(1).

       1.2.2 Community Detection

      Identifying communities is a central graph-theoretic problem with important applications to sociology and marketing (when applied to social networks), biology and bioinformatics (when applied to protein interaction networks), and more (see, e.g., Fortunato’s classic survey [Fortunato 2010]). Defining what exactly is a community remains an interesting problem on its own (see Arora et al. [2012] and Borgs et al. [2016] for excellent treatment from a theoretical perspective). Ultimately, there is no single “right” definition, and the precise meaning of community should be different for social networks and protein interaction networks.

      In Chapter 13 we focus on the algorithmic questions arising from one of the simplest and most canonical definitions, which has been considered by several theoretical computer scientists [Arora et al. 2012, Balcan et al. 2013, Braverman et al. 2017, Mishra et al. 2008].

       Definition 1.2

      (α, β)-community. Given an undirected graph G = (V, E), an (α, β)-community is a subset S ⊆ V that satisfies:

      Strong ties inside the community. For every v ∈ S, |({v} × S) ∩ E| ≥ α · |S|; and

      Weak ties to nodes outside the community. For every u ∉ S, |({u} × S) ∩ E| ≤ β · |S|.
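      Definition 1.2 translates directly into a polynomial-time verifiable condition. A minimal checker, sketched with the graph given as a neighbor-set dictionary (the representation and example are mine, not from the text):

```python
def is_alpha_beta_community(adj, S, alpha, beta):
    """Check Definition 1.2 on a graph given as {vertex: set(neighbors)}:
    every member of S needs at least alpha*|S| neighbors inside S, and
    every non-member at most beta*|S| neighbors inside S."""
    S = set(S)
    strong = all(len(adj[v] & S) >= alpha * len(S) for v in S)
    weak = all(len(adj[u] & S) <= beta * len(S) for u in adj if u not in S)
    return strong and weak

# Two triangles joined by a single edge: each triangle is a (2/3, 1/3)-community.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
       3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
print(is_alpha_beta_community(adj, {0, 1, 2}, alpha=2/3, beta=1/3))  # True
```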

      This problem has been considered by several researchers before: Mishra et al. [2008] gave a polynomial-time algorithm for finding (α, β)-communities

