### While loop time complexity O(logn)

I cannot understand why the time complexity for this code is O(logn):

```c
double n;
/* ... */
while (n > 1) {
    n *= 0.999;
}
```

At least it says so in my study materials.

## 3 Answers

### #1


Imagine if the code were as follows:

```c
double n;
/* ... */
while (n > 1) {
    n *= 0.5;
}
```

It should be intuitive to see how this is O(logn), I hope.

When you multiply by 0.999 instead, the loop runs slower by a constant factor, but the complexity is still O(logn).

### #2


You want to calculate how many iterations you need before `n` becomes equal to (or less than) 1.

If you call the number of iterations `k`, you want to solve

n * 0.999^k = 1

It goes like this:

n * 0.999^k = 1

0.999^k = 1/n

log(0.999^k) = log(1/n)

k * log(0.999) = -log(n)

k = -log(n)/log(0.999)

k = (-1/log(0.999)) * log(n)

For big-O we only care about "the big picture" so we throw away constants. Here `log(0.999)` is a negative constant so (-1/log(0.999)) is a positive constant that we can "throw away", i.e. set to 1. So we get:

k ~ log(n)

So the code is O(logn).

From this you can also notice that the value of the constant (i.e. 0.999 in your example) doesn't matter for the big-O calculation. All constant values greater than 0 and less than 1 will result in O(logn).

### #3


A logarithm takes two inputs, a base and a number, and returns the power you must raise the base to in order to reach that number. Here the base is 0.999, the first constant smaller than 1, and the scalar is n. The number of steps is the power you must raise 0.999 to so that, multiplied by n, the result drops below 1. That is exactly the definition of a logarithm, with which I started this answer.

EDIT:

Think about it this way: You have n as an input and you search for the first k where

n * .999^k < 1

You find this k by incrementing it one step at a time: if the value is l at some step, it becomes l * .999 at the next. Repeating this until the value drops below 1 takes a logarithmic number of steps, which gives the loop its O(logn) complexity.