The basic aim of this book is to help students understand how to design algorithms, how to analyze them, and how to implement them. It provides comprehensive and up-to-date coverage of the design and analysis of algorithms, and meets students' needs by addressing both the principles of design and the critical role of performance in choosing an algorithm.
Additional Info
  • Publisher: Laxmi Publications
  • Language: English
  • ISBN: 978-93-81159-58-3
  • Chapter 1

    INTRODUCTION TO ALGORITHM

    The etymology of the word Algorithm dates back to the 8th century A.D. The word is derived from the name of the Persian author “Abu Jafar Mohammad ibn Musa al Khowarizmi”. He was a great mathematician, born around 780 AD in Baghdad, who worked on algebra, geometry, and astronomy. An algorithm is a step-by-step procedure for solving a problem.
  • Chapter 2

    GROWTH ORDER

    The analysis of an algorithm can be performed based on the nature of the input. Thus, we have:
    • Worst case analysis
    • Average case analysis
    • Best case analysis
    Worst case: the conditions under which the algorithm consumes the maximum amount of resources; in other words, the maximum amount of resources the algorithm can consume for any input of a given problem size. In most cases we prefer worst case analysis. Best case: the conditions under which the algorithm consumes the minimum amount of resources. Average case: this lies between the worst case and the best case and is probabilistic in nature. Average-case running times are calculated by first arriving at an understanding of the average nature of the input, and then performing a running-time analysis of the algorithm for that configuration. Average case analysis is usually done by assuming that every input is equally likely. A short sketch contrasting the three cases follows.
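
    To make the three cases concrete, here is a minimal Python sketch (an illustration added here, not taken from the chapter) using linear search, where each case corresponds to where, or whether, the target occurs:

        def linear_search(items, target):
            """Return the index of target in items, or -1 if absent."""
            for i, value in enumerate(items):
                if value == target:
                    return i          # found: stop early
            return -1

        # Best case:    target is the first element -> 1 comparison.
        # Worst case:   target is absent (or last)  -> n comparisons.
        # Average case: assuming the target is equally likely to be at
        # any of the n positions, about (n + 1) / 2 comparisons.
        data = [7, 3, 9, 4, 1]
        print(linear_search(data, 7))   # best case: index 0
        print(linear_search(data, 8))   # worst case: -1 after scanning all 5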
  • Chapter 3

    ELEMENTARY DATA STRUCTURE

    In computer science, a data structure is a particular way of storing and organizing data in a computer so that it can be used efficiently. Different kinds of data structures are suited to different kinds of applications, and some are highly specialized to certain tasks. For example, B-trees are particularly well-suited for implementation of databases, while compiler implementations usually use hash tables to look up identifiers. Data structures are used in almost every program or software system. Specific data structures are essential ingredients of many efficient algorithms, and make possible the management of huge amounts of data, such as large databases and internet indexing services. Some formal design methods and programming languages emphasize data structures, rather than algorithms, as the key organizing factor in software design.
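
    As a toy illustration of the identifier lookup mentioned above (the names declare and lookup are hypothetical, not from the chapter), a hash table gives expected constant-time access; Python's built-in dict is hash-based:

        # A toy compiler symbol table backed by a hash table (Python's dict).
        # Insertion and lookup both take expected O(1) time.
        symbol_table = {}

        def declare(name, info):
            symbol_table[name] = info      # hash the identifier, store its info

        def lookup(name):
            return symbol_table.get(name)  # expected constant-time retrieval

        declare("count", {"type": "int", "offset": 0})
        declare("total", {"type": "float", "offset": 4})
        print(lookup("count"))    # {'type': 'int', 'offset': 0}
        print(lookup("missing"))  # None: identifier was never declared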
  • Chapter 4

    DIVIDE AND CONQUER STRATEGY

    In computer science, divide and conquer (D&C) is an important algorithm design paradigm. It works by recursively breaking a problem down into two or more sub-problems of the same (or a related) type, until these become simple enough to be solved directly. The solutions to the sub-problems are then combined to give a solution to the original problem. This technique is the basis of efficient algorithms for many kinds of problems, such as sorting (quick sort, merge sort) and the Fourier transform (FFT). The Divide and Conquer strategy can be viewed as having three steps. The first step, Divide, splits the given problem into smaller sub-problems of the same kind as the original, typically of roughly equal size. The second step, Conquer, solves these sub-problems recursively. The third step, Combine, merges the solutions of the sub-problems into a solution for the original problem, as the merge sort sketch below illustrates.
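
    Merge sort, named in the excerpt, exhibits all three steps; the following is a minimal Python sketch (an illustration, not code from the book):

        def merge(left, right):
            """Combine two sorted lists into one sorted list."""
            out, i, j = [], 0, 0
            while i < len(left) and j < len(right):
                if left[i] <= right[j]:
                    out.append(left[i])
                    i += 1
                else:
                    out.append(right[j])
                    j += 1
            out.extend(left[i:])   # one of the two halves may have leftovers
            out.extend(right[j:])
            return out

        def merge_sort(a):
            if len(a) <= 1:                 # base case: already sorted
                return a
            mid = len(a) // 2               # Divide: split into two halves
            left = merge_sort(a[:mid])      # Conquer: sort each half recursively
            right = merge_sort(a[mid:])
            return merge(left, right)       # Combine: merge the sorted halves

        print(merge_sort([5, 2, 8, 1, 9, 3]))   # [1, 2, 3, 5, 8, 9]

    The recurrence T(n) = 2T(n/2) + O(n) gives the familiar O(n log n) running time.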
  • Chapter 5

    GREEDY METHOD

    The greedy method chooses a subset of the data set as the solution set so as to obtain some profit. Consider a problem having n inputs; we are required to obtain a solution, a series of subsets that satisfy some constraints or conditions. Any subset that satisfies these constraints is called a feasible solution, and we seek the feasible solution that maximizes or minimizes the objective function. The feasible solution finally obtained is called the optimal solution. One can devise an algorithm that works in stages, considering one input at a time; at each stage, a decision is taken on whether including the chosen datum can still lead to an optimal solution. If it can, the datum is added to the partial solution set; if its inclusion would make the solution infeasible, the datum is discarded. In general, greedy algorithms have five pillars (a worked instance is sketched below):
    • A candidate set, from which a solution is created
    • A selection function, which chooses the best candidate to be added to the solution
    • A feasibility function, which is used to determine whether a candidate can contribute to a solution
    • An objective function, which assigns a value to a solution or a partial solution
    • A solution function, which indicates when we have discovered a complete solution
    Greedy algorithms produce good solutions on some mathematical problems, but not on others. Most problems for which they work well have two properties: the greedy-choice property and optimal substructure.
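
    As one concrete (illustrative) instance of these pillars, the classic activity-selection problem can be solved greedily; a minimal Python sketch, where each activity is a (start, end) pair:

        def select_activities(candidates):
            """Greedy: pick a maximum-size set of non-overlapping activities."""
            # Selection function: always take the candidate finishing earliest.
            solution, last_end = [], float("-inf")
            for start, end in sorted(candidates, key=lambda a: a[1]):
                if start >= last_end:        # feasibility: no overlap so far
                    solution.append((start, end))
                    last_end = end
            return solution                  # objective: maximize len(solution)

        print(select_activities([(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (8, 11)]))
        # [(1, 4), (5, 7), (8, 11)]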
  • Chapter 6

    DYNAMIC PROGRAMMING

    Dynamic programming is a stage-wise search method suitable for optimization problems whose solutions may be viewed as the result of a sequence of decisions. The most attractive property of this strategy is that, during the search for a solution, it avoids full enumeration by pruning early those partial decision sequences that cannot possibly lead to an optimal solution. In many practical situations this strategy reaches the optimal solution in a polynomial number of decision steps; in the worst case, however, it may still end up performing full enumeration. Dynamic programming takes advantage of duplicated sub-problems and arranges to solve each sub-problem only once, saving the solution (in a table, for example) for later reuse. The underlying idea is to avoid calculating the same result twice, usually by keeping a table of known results of sub-problems. Unlike divide and conquer, which solves sub-problems top-down, dynamic programming is typically a bottom-up technique.
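
    A standard toy illustration (not from the chapter) is computing Fibonacci numbers bottom-up, solving and tabulating each sub-problem exactly once:

        def fib(n):
            """Bottom-up dynamic programming: each sub-problem solved once."""
            if n < 2:
                return n
            table = [0, 1]                  # table of known sub-problem results
            for i in range(2, n + 1):
                table.append(table[i - 1] + table[i - 2])  # reuse, don't recompute
            return table[n]

        print(fib(10))   # 55

    The naive recursive version recomputes fib(k) exponentially many times; the table reduces the work to O(n) additions.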
  • Chapter 7

    NP-COMPLETENESS

    Before we start, recall our model of computation: all basic operations take unit time, we have unbounded memory at the register level, and every step proceeds deterministically as described in the algorithm. A polynomial algorithm is “faster” than an exponential algorithm: as n grows, a^n eventually grows faster than n^k beyond some integer n0, for any integer values of a > 1 and k. Even 2^n grows faster than n^1000 at some large value of n. The former functions are exponential and the latter are polynomial. It seems that for some problems we may simply not have any polynomial algorithm at all (as with the information-theoretic bound)! The theory of NP-completeness is about this issue, and computational complexity theory in general addresses it.
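
    The growth claim can be checked numerically. The sketch below (illustrative; it uses k = 10 rather than 1000 so the search stays short) finds the smallest n at which 2^n overtakes n^k:

        def crossover(k):
            """Smallest n at which 2**n exceeds n**k."""
            n = 2
            while 2 ** n <= n ** k:
                n += 1
            return n

        # 2**n eventually dominates n**k for every fixed k, however large.
        print(crossover(10))   # 59: from n = 59 onward, 2**n > n**10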

About the Author