Two simple questions

In how many ways can you distribute {n} distinct objects into {k} different bins?

Of course, for each object, there are {k} possible bins, so the total number of ways is {k^n}.
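For small values we can confirm this count by brute force. Here is a minimal Python sketch (the function name is just illustrative):

```python
# Enumerate every way of assigning each of n distinct objects to one of k bins
# and compare the count with k**n.
from itertools import product

def count_distinct_distributions(n, k):
    # Each assignment is a tuple (b_1, ..., b_n), where b_i is the bin of object i.
    return sum(1 for _ in product(range(k), repeat=n))

n, k = 5, 3
assert count_distinct_distributions(n, k) == k ** n  # 243
```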

Now consider the same question, except that the objects are no longer distinct. All that matters, then, is how many of them go into each bin, not which ones in particular.

The exact answer is slightly more complicated to calculate, but if we simplify things a bit and settle for a loose upper bound, we can argue as follows. The problem is equivalent to assigning one number between 0 and {n} to each bin so that the {k} assigned numbers sum to exactly {n}. If we completely ignore the constraint that the sum must be {n}, there are {(n+1)^k} ways of assigning the numbers, so the answer is at most {(n+1)^k}.
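For the curious, the exact count is the stars-and-bars number {\binom{n+k-1}{k-1}}. A small Python check (again with illustrative names) confirms both the exact value and the loose bound for small parameters:

```python
# Count the ways to put n identical objects into k bins by brute force,
# compare with the stars-and-bars formula C(n+k-1, k-1), and check the
# loose upper bound (n+1)**k obtained by dropping the "sum equals n" constraint.
from itertools import product
from math import comb

def count_identical_distributions(n, k):
    # Count tuples (a_1, ..., a_k) with 0 <= a_i <= n and a_1 + ... + a_k == n.
    return sum(1 for a in product(range(n + 1), repeat=k) if sum(a) == n)

n, k = 6, 4
exact = count_identical_distributions(n, k)
assert exact == comb(n + k - 1, k - 1)   # 84
assert exact <= (n + 1) ** k             # 84 <= 2401
```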

I am certain that I knew both these things back in high school, but it was quite recently that I realized that the first one is exponential in {n} and the second one is polynomial in {n}. That’s a huge difference. Just making the objects distinct changes the total search space from polynomial to exponential.
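To make the gap concrete, here is a tiny comparison for a fixed number of bins (the specific values of {k} and {n} are just for illustration):

```python
# With k fixed, k**n (distinct objects) blows up exponentially in n,
# while (n+1)**k (identical objects, loose bound) grows only polynomially.
k = 5
for n in (5, 10, 20, 40):
    print(n, k ** n, (n + 1) ** k)
```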


How hard is NP-hard?

Assuming P{\neq} NP, an NP-hard problem cannot be solved in polynomial time. This means that there cannot exist an algorithm that, for every possible input, computes the corresponding output in polynomial time. However, NP-hardness doesn't prohibit the existence of an efficient algorithm that works only on a subset of the possible inputs. For example, for any problem there is a constant-time algorithm that solves it on any fixed, finite set of instances: the algorithm simply has a lookup table in which the outputs for those finitely many inputs are stored.
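A sketch of that trivial lookup-table idea in Python (the table entries are of course hypothetical):

```python
# Precomputed answers for a fixed, finite set of instances.
PRECOMPUTED = {
    "instance-1": True,
    "instance-2": False,
    "instance-3": True,
}

def solve_constant_time(instance):
    # Constant-time answer for the finitely many stored instances;
    # anything outside the table is simply not covered by this "algorithm".
    return PRECOMPUTED[instance]
```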

But this is an extreme case. Can there exist an algorithm for an NP-hard problem that solves it in polynomial time on a significantly large set of input instances? More precisely, does there exist an algorithm that, for every {n}, solves the problem in polynomial time on {f(n)} inputs of size {n}, where {f(n)} is some fast-growing function of {n}?

How fast should {f(n)} grow? One interesting choice is to require {f(n)} to be large enough that the average-case complexity of the algorithm is polynomial, i.e., if inputs are chosen uniformly at random from the set of all inputs of a given size, then the algorithm takes polynomial time in expectation. Of course, {f(n)} will have to be fairly large for this to happen.
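Spelled out, the requirement is the standard notion of expected polynomial time: if {T(x)} denotes the running time of the algorithm on input {x}, then {\mathbb{E}_x[T(x)] = O(n^c)} for some constant {c}, where {x} is drawn uniformly at random from the inputs of size {n}.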

The interesting fact is that this is possible. For example, if you pick a graph uniformly at random from all graphs on {n} vertices, then there is an algorithm that decides whether the graph is Hamiltonian in expected polynomial time.
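To give a feel for how such algorithms are structured (this is only a toy sketch of the general shape, not the actual expected-polynomial-time algorithm for Hamiltonicity): run a cheap heuristic first, and fall back to exhaustive search only when it fails. The real result hinges on the heuristic succeeding so often on a uniformly random graph that the rare exponential fallback does not hurt the expectation.

```python
# Toy illustration: cheap greedy attempt, with brute force as a rare fallback.
# Graphs are given as a list of neighbor sets, adj[v] = set of neighbors of v.
from itertools import permutations

def greedy_hamiltonian_cycle(adj):
    # Polynomial-time attempt: grow a path greedily from vertex 0 and report
    # a cycle only if it covers every vertex and the endpoints are adjacent.
    # May fail even on Hamiltonian graphs, in which case it returns None.
    n = len(adj)
    path, used = [0], {0}
    while len(path) < n:
        nxt = next((v for v in adj[path[-1]] if v not in used), None)
        if nxt is None:
            return None
        path.append(nxt)
        used.add(nxt)
    return path if path[0] in adj[path[-1]] else None

def is_hamiltonian(adj):
    # Fast path: greedy heuristic.
    if greedy_hamiltonian_cycle(adj) is not None:
        return True
    # Slow path: exhaustive search over all vertex orderings.
    n = len(adj)
    return any(
        all(p[i + 1] in adj[p[i]] for i in range(n - 1)) and p[0] in adj[p[-1]]
        for p in permutations(range(n))
    )

# Example: the 4-cycle 0-1-2-3-0 is Hamiltonian.
four_cycle = [{1, 3}, {0, 2}, {1, 3}, {0, 2}]
print(is_hamiltonian(four_cycle))  # True
```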