Category Archives: Algorithms

Amortized time in simple words

Today, I stumbled onto this post on StackOverflow, which explains amortized time in plain, simple words.

From the post:

If you do an operation say a million times, you don’t really care about the worst-case or the best-case of that operation – what you care about is how much time is taken in total when you repeat the operation a million times.

So it doesn’t matter if the operation is very slow once in a while, as long as “once in a while” is rare enough for the slowness to be diluted away. Essentially amortised time means “average time taken per operation, if you do many operations”. Amortised time doesn’t have to be constant; you can have linear and logarithmic amortised time or whatever else.

Let’s take mats’ example of a dynamic array, to which you repeatedly add new items. Normally adding an item takes constant time (that is, O(1)). But each time the array is full, you allocate twice as much space, copy your data into the new region, and free the old space. Assuming allocates and frees run in constant time, this enlargement process takes O(n) time where n is the current size of the array.

So each time you enlarge, you take about twice as much time as the last enlarge. But you’ve also waited twice as long before doing it! The cost of each enlargement can thus be “spread out” among the insertions. This means that in the long term, the total time taken for adding m items to the array is O(m), and so the amortised time (i.e. time per insertion) is O(1).
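To make this concrete, here is a minimal sketch (with a hypothetical class name) of a dynamic int array that doubles its capacity whenever it fills up:

```java
// Minimal dynamic array sketch: doubles capacity when full, so n appends
// cost O(n) total element copies, i.e. O(1) amortized per append.
class DynArray {
    private int[] data = new int[1];
    private int size = 0;

    void add(int value) {
        if (size == data.length) {                  // full: enlarge, O(size) copy
            int[] bigger = new int[data.length * 2];
            System.arraycopy(data, 0, bigger, 0, size);
            data = bigger;
        }
        data[size++] = value;                       // the common O(1) case
    }

    int get(int i)  { return data[i]; }
    int size()      { return size; }
    int capacity()  { return data.length; }
}
```

After 1000 appends, copies happened only at sizes 1, 2, 4, ..., 512, for a total of 1023 element copies: under two copies per insertion, as the amortized argument predicts.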

Counting trailing zeros in a factorial

Problem:
Find the number of trailing zeros in the factorial of a given number.

Solution:
Let’s work with an example. Suppose the given number is 25, i.e. we need to find the number of trailing zeros in 25!. Zeros are contributed by pairs of 2 and 5, because 2 * 5 = 10.
Since factors of 2 are far more plentiful than factors of 5, all we need to do is count the multiples of 5 up to 25. Also note that while 5 contributes one 0, 25 = 5 * 5 contributes two 0s.

So, number of trailing zeros in 25! = 25/5 + 25/25 = 5 + 1 = 6

Below is a code snippet that does the same for you!

int countNum(int num)
{
	int count = 0;
	// Sum num/5 + num/25 + num/125 + ... until the divisor exceeds num.
	for (int i = 5; num / i > 0; i *= 5)
	{
		count = count + num / i;
	}
	return count;
}

Bug in the implementation of Binary Search and Merge Sort

Here is a nice post by Joshua Bloch that identifies a severe bug in typical implementations of Binary Search and Merge Sort.
To summarize the post, let’s consider an implementation of Binary Search:

        int binarySearch(int a[], int low, int high, int elem)
	{
		if (high < low)
			return -1;

		int mid = (low + high) / 2;
		if (a[mid] == elem)
			return mid;

		if (a[mid] < elem)
			return binarySearch(a, mid + 1, high, elem);
		else
			return binarySearch(a, low, mid - 1, elem);
	}

Everything is fine except for this line:

         int mid = (low + high)/2;

As per the definition of binary search, this line sets mid to the average of low and high, truncated down to the nearest integer.
This logic breaks for large arrays: once low + high exceeds the maximum value an int can hold (2^31 - 1), the sum overflows to a negative value, and it stays negative after the division by 2.

The array index then goes out of bounds, which in Java raises an ArrayIndexOutOfBoundsException.

A possible solution:
int mid = low + ((high - low) / 2);

Read the complete post for more solutions.
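A small sketch (with hypothetical method names) contrasting the overflowing midpoint with the safe one:

```java
// Demonstrates the midpoint overflow: for large low and high, the sum
// exceeds Integer.MAX_VALUE (2^31 - 1) and wraps around to a negative value.
class Midpoint {
    static int naiveMid(int low, int high) {
        return (low + high) / 2;        // overflows when low + high > 2^31 - 1
    }

    static int safeMid(int low, int high) {
        return low + (high - low) / 2;  // never overflows when 0 <= low <= high
    }

    public static void main(String[] args) {
        int low = 1_500_000_000, high = 1_600_000_000;
        System.out.println(naiveMid(low, high)); // negative: the sum wrapped around
        System.out.println(safeMid(low, high));  // 1550000000, as expected
    }
}
```

Java also allows `int mid = (low + high) >>> 1;`, where the unsigned right shift reinterprets the overflowed sum as a positive value and yields the correct midpoint.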

Powering a Number

Problem Statement:

Given a number X and an integer n, n>=0.
Compute: power(X,n)

Naive Algorithm:

The naive algorithm would multiply X by itself n times:
X * X * X ... (n times)

Time Complexity: O(n)
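The naive approach can be sketched as a simple loop (the class and method names here are just for illustration):

```java
// Naive power: multiply x into the result n times, O(n) multiplications.
class NaivePower {
    static int power(int x, int n) {
        int result = 1;
        for (int i = 0; i < n; i++)
            result *= x;    // one multiplication per iteration
        return result;
    }
}
```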

Now, let’s look at a more efficient way of computing this.

Divide and Conquer:

Idea:

Power(X,n) = Power(X,n/2) * Power(X,n/2)             ........ if n is even
           = Power(X,(n-1)/2) * Power(X,(n-1)/2) * X ........ if n is odd

Implementation:
The following function will do the needful:

public static int power(int x, int n) {
    if (n == 0)
        return 1;
    else if (n == 1)
        return x;
    else if (n % 2 == 0) {
        int y = power(x, n / 2);
        return y * y;
    } else {
        int y = power(x, (n - 1) / 2);  // parentheses matter: n - 1/2 evaluates to n
        return y * y * x;
    }
}

 

T(n) = T(n/2) + O(1)

Because we divide the problem into halves (ok, not exactly halves, but since we are using Big-Oh notation, we can be sloppy), and the conquer step takes constant time, as it involves at most 2 multiplications.

So the recurrence solves to: O(log n)

Which is way better than linear time!
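The same divide-and-conquer idea can also be written iteratively, scanning the bits of n: a sketch, with a hypothetical class name.

```java
// Iterative exponentiation by squaring: O(log n) multiplications.
class FastPower {
    static int power(int x, int n) {
        int result = 1;
        int base = x;
        while (n > 0) {
            if ((n & 1) == 1)    // current bit of n is set:
                result *= base;  // fold this power of x into the result
            base *= base;        // x, x^2, x^4, x^8, ...
            n >>= 1;
        }
        return result;
    }
}
```

Each iteration halves n, so the loop runs about log n times, matching the recursive analysis without the function-call overhead.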