This comment was posted to reddit on Dec 29, 2014 at 11:03 am and was deleted within 23 hour(s) and 20 minutes.

In computer science there is this idea of how 'quickly' one can solve a problem of a certain size. For example, sorting a list. If it takes some number of steps to sort a list with 10 elements, it will (under most circumstances) take *more* than twice as many steps to sort a list with 20 elements. The question is how fast the solving time grows as the size of the problem grows.
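To make the "more than twice as many steps" claim concrete, here is a small sketch (the sorting algorithm and the comparison-counting helper are my own illustration, not something from the comment). It counts comparisons made by a simple bubble sort on a worst-case (reversed) list of 10 vs. 20 elements:

```python
def bubble_sort_comparisons(lst):
    """Sort a copy of lst with bubble sort, counting element comparisons."""
    a = list(lst)
    count = 0
    n = len(a)
    for i in range(n):
        for j in range(n - 1 - i):
            count += 1                      # one comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return count

# Worst case: lists already in reverse order.
print(bubble_sort_comparisons(range(10, 0, -1)))  # 45 comparisons
print(bubble_sort_comparisons(range(20, 0, -1)))  # 190 comparisons
```

Doubling the list size from 10 to 20 more than quadruples the work (45 vs. 190 comparisons), because this algorithm's step count grows like n*(n-1)/2, a polynomial in n.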

The function f(x) = x^{2} is polynomial, but f(x) = x^{x} is not. When x = 2, both functions return 4. When x = 3, the polynomial function returns 9, but the non-polynomial function returns 27. When x = 4, the polynomial function returns 16, but the non-polynomial function returns 256. The non-polynomial function grows much faster than the polynomial one. If x represents the size of the problem and the result of the function represents the number of minutes required to solve a problem of size x, then even a modestly sized non-polynomial problem would need more minutes than the universe has to offer. (fyi, there are other examples of polynomial and non-polynomial functions; these are just relatively easy to show)
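The values above can be tabulated directly (a trivial sketch, just printing the two functions from the comment side by side):

```python
# Compare the polynomial f(x) = x^2 with the non-polynomial f(x) = x^x.
for x in range(2, 6):
    print(f"x = {x}:  x^2 = {x**2:>4}   x^x = {x**x:>6}")
# x = 2:  x^2 =    4   x^x =      4
# x = 3:  x^2 =    9   x^x =     27
# x = 4:  x^2 =   16   x^x =    256
# x = 5:  x^2 =   25   x^x =   3125
```

By x = 20, x^2 is only 400 while x^x already has 27 digits; the gap keeps widening without bound.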

Modern cryptography relies upon problems whose difficulty is believed to grow at a non-polynomial rate. 'Solving' a cryptography problem means decrypting the message without a piece of private information called a 'private key'. These are problems which can be solved relatively quickly if one has access to the private key, but take extremely large amounts of time to solve without it.
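One classic example of this asymmetry is the discrete logarithm problem (my choice of illustration; the comment doesn't name a specific problem). Computing g^secret mod p is fast even for huge numbers, but recovering the secret exponent from the result, without any shortcut, means trying exponents one by one. This toy sketch uses a deliberately tiny modulus so the brute force finishes; real systems use numbers hundreds of digits long, where the same search would outlast the universe:

```python
# Toy illustration only -- NOT real cryptography.
p = 1000003      # a small prime modulus (real systems use enormous primes)
g = 2            # a public base
secret = 123456  # the "private key": an exponent only one party knows

# The "easy" direction: modular exponentiation is fast (polynomial-time).
public = pow(g, secret, p)

def brute_force_log(g, target, p):
    """Recover an exponent x with g^x = target (mod p) by trying each one.

    The work grows with the size of the key space -- this is the "hard"
    direction that cryptography counts on being infeasible at real sizes.
    """
    x, value = 0, 1
    while value != target:
        value = (value * g) % p
        x += 1
    return x

found = brute_force_log(g, public, p)
print(pow(g, found, p) == public)  # True: the recovered exponent works
```

With a 7-digit modulus the search takes a fraction of a second; each extra digit on p multiplies the brute-force work by roughly 10, while the pow() in the easy direction barely slows down.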

There is a famous open question in computer science (known as P vs. NP) about whether problems like these secretly have a polynomial-time algorithm that we have simply failed to find because our toolbox of mathematics is incomplete. The widely held (though not at all universal) belief is that no such algorithm exists, but nobody has been able to prove it. If that belief turns out to be wrong, then the difficulty of the problems we use for things like cryptography could drop by quite a lot very quickly, which would probably be a problem.