

Introduction to Algorithms
Third Edition

Thomas H. Cormen
Charles E. Leiserson
Ronald L. Rivest
Clifford Stein

The MIT Press
Cambridge, Massachusetts   London, England

© 2009 Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. For information about special quantity discounts, please email special_sales@mitpress.mit.edu. This book was set in Times Roman and Mathtime Pro 2 by the authors. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data
Introduction to algorithms / Thomas H. Cormen ... [et al.].—3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN 978-0-262-03384-8 (hardcover : alk. paper)—ISBN 978-0-262-53305-8 (pbk. : alk. paper)
1. Computer programming. 2. Computer algorithms. I. Cormen, Thomas H.
QA76.6.I5858 2009
005.1—dc22
2009008593

Contents

Preface xiii

I Foundations
Introduction 3
1 The Role of Algorithms in Computing 5
  1.1 Algorithms 5
  1.2 Algorithms as a technology 11
2 Getting Started 16
  2.1 Insertion sort 16
  2.2 Analyzing algorithms 23
  2.3 Designing algorithms 29
3 Growth of Functions 43
  3.1 Asymptotic notation 43
  3.2 Standard notations and common functions 53
4 Divide-and-Conquer 65
  4.1 The maximum-subarray problem 68
  4.2 Strassen’s algorithm for matrix multiplication 75
  4.3 The substitution method for solving recurrences 83
  4.4 The recursion-tree method for solving recurrences 88
  4.5 The master method for solving recurrences 93
  ★ 4.6 Proof of the master theorem 97
5 Probabilistic Analysis and Randomized Algorithms 114
  5.1 The hiring problem 114
  5.2 Indicator random variables 118
  5.3 Randomized algorithms 122
  ★ 5.4 Probabilistic analysis and further uses of indicator random variables 130

II Sorting and Order Statistics
Introduction 147
6 Heapsort 151
  6.1 Heaps 151
  6.2 Maintaining the heap property 154
  6.3 Building a heap 156
  6.4 The heapsort algorithm 159
  6.5 Priority queues 162
7 Quicksort 170
  7.1 Description of quicksort 170
  7.2 Performance of quicksort 174
  7.3 A randomized version of quicksort 179
  7.4 Analysis of quicksort 180
8 Sorting in Linear Time 191
  8.1 Lower bounds for sorting 191
  8.2 Counting sort 194
  8.3 Radix sort 197
  8.4 Bucket sort 200
9 Medians and Order Statistics 213
  9.1 Minimum and maximum 214
  9.2 Selection in expected linear time 215
  9.3 Selection in worst-case linear time 220

III Data Structures
Introduction 229
10 Elementary Data Structures 232
  10.1 Stacks and queues 232
  10.2 Linked lists 236
  10.3 Implementing pointers and objects 241
  10.4 Representing rooted trees 246
11 Hash Tables 253
  11.1 Direct-address tables 254
  11.2 Hash tables 256
  11.3 Hash functions 262
  11.4 Open addressing 269
  ★ 11.5 Perfect hashing 277
12 Binary Search Trees 286
  12.1 What is a binary search tree? 286
  12.2 Querying a binary search tree 289
  12.3 Insertion and deletion 294
  ★ 12.4 Randomly built binary search trees 299
13 Red-Black Trees 308
  13.1 Properties of red-black trees 308
  13.2 Rotations 312
  13.3 Insertion 315
  13.4 Deletion 323
14 Augmenting Data Structures 339
  14.1 Dynamic order statistics 339
  14.2 How to augment a data structure 345
  14.3 Interval trees 348

IV Advanced Design and Analysis Techniques
Introduction 357
15 Dynamic Programming 359
  15.1 Rod cutting 360
  15.2 Matrix-chain multiplication 370
  15.3 Elements of dynamic programming 378
  15.4 Longest common subsequence 390
  15.5 Optimal binary search trees 397
16 Greedy Algorithms 414
  16.1 An activity-selection problem 415
  16.2 Elements of the greedy strategy 423
  16.3 Huffman codes 428
  ★ 16.4 Matroids and greedy methods 437
  ★ 16.5 A task-scheduling problem as a matroid 443
17 Amortized Analysis 451
  17.1 Aggregate analysis 452
  17.2 The accounting method 456
  17.3 The potential method 459
  17.4 Dynamic tables 463

V Advanced Data Structures
Introduction 481
18 B-Trees 484
  18.1 Definition of B-trees 488
  18.2 Basic operations on B-trees 491
  18.3 Deleting a key from a B-tree 499
19 Fibonacci Heaps 505
  19.1 Structure of Fibonacci heaps 507
  19.2 Mergeable-heap operations 510
  19.3 Decreasing a key and deleting a node 518
  19.4 Bounding the maximum degree 523
20 van Emde Boas Trees 531
  20.1 Preliminary approaches 532
  20.2 A recursive structure 536
  20.3 The van Emde Boas tree 545
21 Data Structures for Disjoint Sets 561
  21.1 Disjoint-set operations 561
  21.2 Linked-list representation of disjoint sets 564
  21.3 Disjoint-set forests 568
  ★ 21.4 Analysis of union by rank with path compression 573

VI Graph Algorithms
Introduction 587
22 Elementary Graph Algorithms 589
  22.1 Representations of graphs 589
  22.2 Breadth-first search 594
  22.3 Depth-first search 603
  22.4 Topological sort 612
  22.5 Strongly connected components 615
23 Minimum Spanning Trees 624
  23.1 Growing a minimum spanning tree 625
  23.2 The algorithms of Kruskal and Prim 631
24 Single-Source Shortest Paths 643
  24.1 The Bellman-Ford algorithm 651
  24.2 Single-source shortest paths in directed acyclic graphs 655
  24.3 Dijkstra’s algorithm 658
  24.4 Difference constraints and shortest paths 664
  24.5 Proofs of shortest-paths properties 671
25 All-Pairs Shortest Paths 684
  25.1 Shortest paths and matrix multiplication 686
  25.2 The Floyd-Warshall algorithm 693
  25.3 Johnson’s algorithm for sparse graphs 700
26 Maximum Flow 708
  26.1 Flow networks 709
  26.2 The Ford-Fulkerson method 714
  26.3 Maximum bipartite matching 732
  ★ 26.4 Push-relabel algorithms 736
  ★ 26.5 The relabel-to-front algorithm 748

VII Selected Topics
Introduction 769
27 Multithreaded Algorithms 772
  27.1 The basics of dynamic multithreading 774
  27.2 Multithreaded matrix multiplication 792
  27.3 Multithreaded merge sort 797
28 Matrix Operations 813
  28.1 Solving systems of linear equations 813
  28.2 Inverting matrices 827
  28.3 Symmetric positive-definite matrices and least-squares approximation 832
29 Linear Programming 843
  29.1 Standard and slack forms 850
  29.2 Formulating problems as linear programs 859
  29.3 The simplex algorithm 864
  29.4 Duality 879
  29.5 The initial basic feasible solution 886
30 Polynomials and the FFT 898
  30.1 Representing polynomials 900
  30.2 The DFT and FFT 906
  30.3 Efficient FFT implementations 915
31 Number-Theoretic Algorithms 926
  31.1 Elementary number-theoretic notions 927
  31.2 Greatest common divisor 933
  31.3 Modular arithmetic 939
  31.4 Solving modular linear equations 946
  31.5 The Chinese remainder theorem 950
  31.6 Powers of an element 954
  31.7 The RSA public-key cryptosystem 958
  ★ 31.8 Primality testing 965
  ★ 31.9 Integer factorization 975
32 String Matching 985
  32.1 The naive string-matching algorithm 988
  32.2 The Rabin-Karp algorithm 990
  32.3 String matching with finite automata 995
  ★ 32.4 The Knuth-Morris-Pratt algorithm 1002
33 Computational Geometry 1014
  33.1 Line-segment properties 1015
  33.2 Determining whether any pair of segments intersects 1021
  33.3 Finding the convex hull 1029
  33.4 Finding the closest pair of points 1039
34 NP-Completeness 1048
  34.1 Polynomial time 1053
  34.2 Polynomial-time verification 1061
  34.3 NP-completeness and reducibility 1067
  34.4 NP-completeness proofs 1078
  34.5 NP-complete problems 1086
35 Approximation Algorithms 1106
  35.1 The vertex-cover problem 1108
  35.2 The traveling-salesman problem 1111
  35.3 The set-covering problem 1117
  35.4 Randomization and linear programming 1123
  35.5 The subset-sum problem 1128

VIII Appendix: Mathematical Background
Introduction 1143
A Summations 1145
  A.1 Summation formulas and properties 1145
  A.2 Bounding summations 1149
B Sets, Etc. 1158
  B.1 Sets 1158
  B.2 Relations 1163
  B.3 Functions 1166
  B.4 Graphs 1168
  B.5 Trees 1173
C Counting and Probability 1183
  C.1 Counting 1183
  C.2 Probability 1189
  C.3 Discrete random variables 1196
  C.4 The geometric and binomial distributions 1201
  ★ C.5 The tails of the binomial distribution 1208
D Matrices 1217
  D.1 Matrices and matrix operations 1217
  D.2 Basic matrix properties 1222

Bibliography 1231
Index 1251

Preface

Before there were computers, there were algorithms. But now that there are computers, there are even more algorithms, and algorithms lie at the heart of computing.
This book provides a comprehensive introduction to the modern study of computer algorithms. It presents many algorithms and covers them in considerable depth, yet makes their design and analysis accessible to all levels of readers. We have tried to keep explanations elementary without sacrificing depth of coverage or mathematical rigor.
Each chapter presents an algorithm, a design technique, an application area, or a related topic. Algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The book contains 244 figures—many with multiple parts—illustrating how the algorithms work. Since we emphasize efficiency as a design criterion, we include careful analyses of the running times of all our algorithms.
The text is intended primarily for use in undergraduate or graduate courses in algorithms or data structures. Because it discusses engineering issues in algorithm design, as well as mathematical aspects, it is equally well suited for self-study by technical professionals. In this, the third edition, we have once again updated the entire book. The changes cover a broad spectrum, including new chapters, revised pseudocode, and a more active writing style. To the teacher We have designed this book to be both versatile and complete. You should find it useful for a variety of courses, from an undergraduate course in data structures up through a graduate course in algorithms. Because we have provided considerably more material than can fit in a typical one-term course, you can consider this book to be a “buffet” or “smorgasbord” from which you can pick and choose the material that best supports the course you wish to teach.xiv Preface You should find it easy to organize your course around just the chapters you need. We have made chapters relatively self-contained, so that you need not worry about an unexpected and unnecessary dependence of one chapter on another. Each chapter presents the easier material first and the more difficult material later, with section boundaries marking natural stopping points. In an undergraduate course, you might use only the earlier sections from a chapter; in a graduate course, you might cover the entire chapter. We have included 957 exercises and 158 problems. Each section ends with exer- cises, and each chapter ends with problems. The exercises are generally short ques- tions that test basic mastery of the material. Some are simple self-check thought exercises, whereas others are more substantial and are suitable as assigned home- work. The problems are more elaborate case studies that often introduce new ma- terial; they often consist of several questions that lead the student through the steps required to arrive at a solution. Departing from our practice in previous editions of this book, we have made publicly available solutions to some, but by no means all, of the problems and ex- ercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to these solutions. You will want to check this site to make sure that it does not contain the solution to an exercise or problem that you plan to assign. We expect the set of solutions that we post to grow slowly over time, so you will need to check it each time you teach the course. We have starred (?) the sections and exercises that are more suitable for graduate students than for undergraduates. A starred section is not necessarily more diffi- cult than an unstarred one, but it may require an understanding of more advanced mathematics. Likewise, starred exercises may require an advanced background or more than average creativity. To the student We hope that this textbook provides you with an enjoyable introduction to the field of algorithms. We have attempted to make every algorithm accessible and interesting. To help you when you encounter unfamiliar or difficult algorithms, we describe each one in a step-by-step manner. We also provide careful explanations of the mathematics needed to understand the analysis of the algorithms. If you already have some familiarity with a topic, you will find the chapters organized so that you can skim introductory sections and proceed quickly to the more advanced material. This is a large book, and your class will probably cover only a portion of its material. 
We have tried, however, to make this a book that will be useful to you now as a course textbook and also later in your career as a mathematical desk reference or an engineering handbook.Preface xv What are the prerequisites for reading this book? You should have some programming experience. In particular, you should un- derstand recursive procedures and simple data structures such as arrays and linked lists. You should have some facility with mathematical proofs, and especially proofs by mathematical induction. A few portions of the book rely on some knowledge of elementary calculus. Beyond that, Parts I and VIII of this book teach you all the mathematical techniques you will need. We have heard, loud and clear, the call to supply solutions to problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for a few of the problems and exercises. Feel free to check your solutions against ours. We ask, however, that you do not send your solutions to us. To the professional The wide range of topics in this book makes it an excellent handbook on algo- rithms. Because each chapter is relatively self-contained, you can focus in on the topics that most interest you. Most of the algorithms we discuss have great practical utility. We therefore address implementation concerns and other engineering issues. We often provide practical alternatives to the few algorithms that are primarily of theoretical interest. If you wish to implement any of the algorithms, you should find the transla- tion of our pseudocode into your favorite programming language to be a fairly straightforward task. We have designed the pseudocode to present each algorithm clearly and succinctly. Consequently, we do not address error-handling and other software-engineering issues that require specific assumptions about your program- ming environment. We attempt to present each algorithm simply and directly with- out allowing the idiosyncrasies of a particular programming language to obscure its essence. We understand that if you are using this book outside of a course, then you might be unable to check your solutions to problems and exercises against solutions provided by an instructor. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for some of the problems and exercises so that you can check your work. Please do not send your solutions to us. To our colleagues We have supplied an extensive bibliography and pointers to the current literature. Each chapter ends with a set of chapter notes that give historical details and ref- erences. The chapter notes do not provide a complete reference to the whole fieldxvi Preface of algorithms, however. Though it may be hard to believe for a book of this size, space constraints prevented us from including many interesting algorithms. Despite myriad requests from students for solutions to problems and exercises, we have chosen as a matter of policy not to supply references for problems and exercises, to remove the temptation for students to look up a solution rather than to find it themselves. Changes for the third edition What has changed between the second and third editions of this book? The mag- nitude of the changes is on a par with the changes between the first and second editions. As we said about the second-edition changes, depending on how you look at it, the book changed either not much or quite a bit. A quick look at the table of contents shows that most of the second-edition chap- ters and sections appear in the third edition. 
We removed two chapters and one section, but we have added three new chapters and two new sections apart from these new chapters. We kept the hybrid organization from the first two editions. Rather than organiz- ing chapters by only problem domains or according only to techniques, this book has elements of both. It contains technique-based chapters on divide-and-conquer, dynamic programming, greedy algorithms, amortized analysis, NP-Completeness, and approximation algorithms. But it also has entire parts on sorting, on data structures for dynamic sets, and on algorithms for graph problems. We find that although you need to know how to apply techniques for designing and analyzing al- gorithms, problems seldom announce to you which techniques are most amenable to solving them. Here is a summary of the most significant changes for the third edition: We added new chapters on van Emde Boas trees and multithreaded algorithms, and we have broken out material on matrix basics into its own appendix chapter. We revised the chapter on recurrences to more broadly cover the divide-and- conquer technique, and its first two sections apply divide-and-conquer to solve two problems. The second section of this chapter presents Strassen’s algorithm for matrix multiplication, which we have moved from the chapter on matrix operations. We removed two chapters that were rarely taught: binomial heaps and sorting networks. One key idea in the sorting networks chapter, the 0-1 principle, ap- pears in this edition within Problem 8-7 as the 0-1 sorting lemma for compare- exchange algorithms. The treatment of Fibonacci heaps no longer relies on binomial heaps as a precursor.Preface xvii We revised our treatment of dynamic programming and greedy algorithms. Dy- namic programming now leads off with a more interesting problem, rod cutting, than the assembly-line scheduling problem from the second edition. Further- more, we emphasize memoization a bit more than we did in the second edition, and we introduce the notion of the subproblem graph as a way to understand the running time of a dynamic-programming algorithm. In our opening exam- ple of greedy algorithms, the activity-selection problem, we get to the greedy algorithm more directly than we did in the second edition. The way we delete a node from binary search trees (which includes red-black trees) now guarantees that the node requested for deletion is the node that is actually deleted. In the first two editions, in certain cases, some other node would be deleted, with its contents moving into the node passed to the deletion procedure. With our new way to delete nodes, if other components of a program maintain pointers to nodes in the tree, they will not mistakenly end up with stale pointers to nodes that have been deleted. The material on flow networks now bases flows entirely on edges. This ap- proach is more intuitive than the net flow used in the first two editions. With the material on matrix basics and Strassen’s algorithm moved to other chapters, the chapter on matrix operations is smaller than in the second edition. We have modified our treatment of the Knuth-Morris-Pratt string-matching al- gorithm. We corrected several errors. Most of these errors were posted on our Web site of second-edition errata, but a few were not. Based on many requests, we changed the syntax (as it were) of our pseudocode. We now use “ D” to indicate assignment and “==” to test for equality, just as C, C++, Java, and Python do. 
Likewise, we have eliminated the keywords do and then and adopted “//” as our comment-to-end-of-line symbol. We also now use dot-notation to indicate object attributes. Our pseudocode remains procedural, rather than object-oriented. In other words, rather than running methods on objects, we simply call procedures, passing objects as parameters. We added 100 new exercises and 28 new problems. We also updated many bibliography entries and added several new ones. Finally, we went through the entire book and rewrote sentences, paragraphs, and sections to make the writing clearer and more active.xviii Preface Web site You can use our Web site, http://mitpress.mit.edu/algorithms/, to obtain supple- mentary information and to communicate with us. The Web site links to a list of known errors, solutions to selected exercises and problems, and (of course) a list explaining the corny professor jokes, as well as other content that we might add. The Web site also tells you how to report errors or make suggestions. How we produced this book Like the second edition, the third edition was produced in LATEX2". We used the Times font with mathematics typeset using the MathTime Pro 2 fonts. We thank Michael Spivak from Publish or Perish, Inc., Lance Carnes from Personal TeX, Inc., and Tim Tregubov from Dartmouth College for technical support. As in the previous two editions, we compiled the index using Windex, a C program that we wrote, and the bibliography was produced with BIBTEX. The PDF files for this book were created on a MacBook running OS 10.5. We drew the illustrations for the third edition using MacDraw Pro, with some of the mathematical expressions in illustrations laid in with the psfrag package for LATEX2". Unfortunately, MacDraw Pro is legacy software, having not been marketed for over a decade now. Happily, we still have a couple of Macintoshes that can run the Classic environment under OS 10.4, and hence they can run Mac- Draw Pro—mostly. Even under the Classic environment, we find MacDraw Pro to be far easier to use than any other drawing software for the types of illustrations that accompany computer-science text, and it produces beautiful output.1 Who knows how long our pre-Intel Macs will continue to run, so if anyone from Apple is listening: Please create an OS X-compatible version of MacDraw Pro! Acknowledgments for the third edition We have been working with the MIT Press for over two decades now, and what a terrific relationship it has been! We thank Ellen Faran, Bob Prior, Ada Brunstein, and Mary Reilly for their help and support. We were geographically distributed while producing the third edition, working in the Dartmouth College Department of Computer Science, the MIT Computer 1We investigated several drawing programs that run under Mac OS X, but all had significant short- comings compared with MacDraw Pro. We briefly attempted to produce the illustrations for this book with a different, well known drawing program. We found that it took at least five times as long to produce each illustration as it took with MacDraw Pro, and the resulting illustrations did not look as good. Hence the decision to revert to MacDraw Pro running on older Macintoshes.Preface xix Science and Artificial Intelligence Laboratory, and the Columbia University De- partment of Industrial Engineering and Operations Research. We thank our re- spective universities and colleagues for providing such supportive and stimulating environments. 
Julie Sussman, P.P.A., once again bailed us out as the technical copyeditor. Time and again, we were amazed at the errors that eluded us, but that Julie caught. She also helped us improve our presentation in several places. If there is a Hall of Fame for technical copyeditors, Julie is a sure-fire, first-ballot inductee. She is nothing short of phenomenal. Thank you, thank you, thank you, Julie! Priya Natarajan also found some errors that we were able to correct before this book went to press. Any errors that remain (and undoubtedly, some do) are the responsibility of the authors (and probably were inserted after Julie read the material). The treatment for van Emde Boas trees derives from Erik Demaine’s notes, which were in turn influenced by Michael Bender. We also incorporated ideas from Javed Aslam, Bradley Kuszmaul, and Hui Zha into this edition. The chapter on multithreading was based on notes originally written jointly with Harald Prokop. The material was influenced by several others working on the Cilk project at MIT, including Bradley Kuszmaul and Matteo Frigo. The design of the multithreaded pseudocode took its inspiration from the MIT Cilk extensions to C and by Cilk Arts’s Cilk++ extensions to C++. We also thank the many readers of the first and second editions who reported errors or submitted suggestions for how to improve this book. We corrected all the bona fide errors that were reported, and we incorporated as many suggestions as we could. We rejoice that the number of such contributors has grown so great that we must regret that it has become impractical to list them all. Finally, we thank our wives—Nicole Cormen, Wendy Leiserson, Gail Rivest, and Rebecca Ivry—and our children—Ricky, Will, Debby, and Katie Leiserson; Alex and Christopher Rivest; and Molly, Noah, and Benjamin Stein—for their love and support while we prepared this book. The patience and encouragement of our families made this project possible. We affectionately dedicate this book to them. THOMAS H. CORMEN Lebanon, New Hampshire CHARLES E. LEISERSON Cambridge, Massachusetts RONALD L. RIVEST Cambridge, Massachusetts CLIFFORD STEIN New York, New York February 2009Introduction to Algorithms Third EditionI FoundationsIntroduction This part will start you thinking about designing and analyzing algorithms. It is intended to be a gentle introduction to how we specify algorithms, some of the design strategies we will use throughout this book, and many of the fundamental ideas used in algorithm analysis. Later parts of this book will build upon this base. Chapter 1 provides an overview of algorithms and their place in modern com- puting systems. This chapter defines what an algorithm is and lists some examples. It also makes a case that we should consider algorithms as a technology, along- side technologies such as fast hardware, graphical user interfaces, object-oriented systems, and networks. In Chapter 2, we see our first algorithms, which solve the problem of sorting a sequence of n numbers. They are written in a pseudocode which, although not directly translatable to any conventional programming language, conveys the struc- ture of the algorithm clearly enough that you should be able to implement it in the language of your choice. 
The sorting algorithms we examine are insertion sort, which uses an incremental approach, and merge sort, which uses a recursive tech- nique known as “divide-and-conquer.” Although the time each requires increases with the value of n, the rate of increase differs between the two algorithms. We determine these running times in Chapter 2, and we develop a useful notation to express them. Chapter 3 precisely defines this notation, which we call asymptotic notation. It starts by defining several asymptotic notations, which we use for bounding algo- rithm running times from above and/or below. The rest of Chapter 3 is primarily a presentation of mathematical notation, more to ensure that your use of notation matches that in this book than to teach you new mathematical concepts.4 Part I Foundations Chapter 4 delves further into the divide-and-conquer method introduced in Chapter 2. It provides additional examples of divide-and-conquer algorithms, in- cluding Strassen’s surprising method for multiplying two square matrices. Chap- ter 4 contains methods for solving recurrences, which are useful for describing the running times of recursive algorithms. One powerful technique is the “mas- ter method,” which we often use to solve recurrences that arise from divide-and- conquer algorithms. Although much of Chapter 4 is devoted to proving the cor- rectness of the master method, you may skip this proof yet still employ the master method. Chapter 5 introduces probabilistic analysis and randomized algorithms. We typ- ically use probabilistic analysis to determine the running time of an algorithm in cases in which, due to the presence of an inherent probability distribution, the running time may differ on different inputs of the same size. In some cases, we assume that the inputs conform to a known probability distribution, so that we are averaging the running time over all possible inputs. In other cases, the probability distribution comes not from the inputs but from random choices made during the course of the algorithm. An algorithm whose behavior is determined not only by its input but by the values produced by a random-number generator is a randomized algorithm. We can use randomized algorithms to enforce a probability distribution on the inputs—thereby ensuring that no particular input always causes poor perfor- mance—or even to bound the error rate of algorithms that are allowed to produce incorrect results on a limited basis. Appendices A–D contain other mathematical material that you will find helpful as you read this book. You are likely to have seen much of the material in the appendix chapters before having read this book (although the specific definitions and notational conventions we use may differ in some cases from what you have seen in the past), and so you should think of the Appendices as reference material. On the other hand, you probably have not already seen most of the material in Part I. All the chapters in Part I and the Appendices are written with a tutorial flavor.1 The Role of Algorithms in Computing What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers? In this chapter, we will answer these questions. 1.1 Algorithms Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output. 
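The Part I introduction above mentions merge sort as an example of the recursive divide-and-conquer technique. As a concrete illustration of an algorithm in this sense, a well-defined procedure that transforms an input sequence into a sorted output, here is a minimal Python sketch of merge sort. It is our own illustrative code under that reading, not the book’s pseudocode; Chapter 2 develops the algorithm and its analysis properly.

```python
def merge_sort(seq):
    """Return a new sorted list containing the elements of seq.

    A minimal sketch of divide-and-conquer: divide the input in half,
    conquer each half recursively, and combine by merging the two
    sorted halves. Not the book's pseudocode.
    """
    if len(seq) <= 1:                # a sequence of length 0 or 1 is already sorted
        return list(seq)
    mid = len(seq) // 2
    left = merge_sort(seq[:mid])     # divide and conquer the two halves
    right = merge_sort(seq[mid:])

    # Combine: merge the two sorted halves into one sorted output.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([31, 41, 59, 26, 41, 58]))   # [26, 31, 41, 41, 58, 59]
```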
We can also view an algorithm as a tool for solving a well-specified computational problem. The statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.
For example, we might need to sort a sequence of numbers into nondecreasing order. This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

Input: A sequence of n numbers ⟨a_1, a_2, ..., a_n⟩.
Output: A permutation (reordering) ⟨a'_1, a'_2, ..., a'_n⟩ of the input sequence such that a'_1 ≤ a'_2 ≤ ... ≤ a'_n.

For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.
Because many programs use it as an intermediate step, sorting is a fundamental operation in computer science. As a result, we have a large number of good sorting algorithms at our disposal. Which algorithm is best for a given application depends on—among other factors—the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, the architecture of the computer, and the kind of storage devices to be used: main memory, disks, or even tapes.
An algorithm is said to be correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an incorrect answer. Contrary to what you might expect, incorrect algorithms can sometimes be useful, if we can control their error rate. We shall see an example of an algorithm with a controllable error rate in Chapter 31 when we study algorithms for finding large prime numbers. Ordinarily, however, we shall be concerned only with correct algorithms.
An algorithm can be specified in English, as a computer program, or even as a hardware design. The only requirement is that the specification must provide a precise description of the computational procedure to be followed.

What kinds of problems are solved by algorithms?

Sorting is by no means the only computational problem for which algorithms have been developed. (You probably suspected as much when you saw the size of this book.) Practical applications of algorithms are ubiquitous and include the following examples:

The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. Although the solutions to the various problems involved are beyond the scope of this book, many methods to solve these biological problems use ideas from several of the chapters in this book, thereby enabling scientists to accomplish tasks while using resources efficiently.
The savings are in time, both human and machine, and in money, as more information can be extracted from laboratory techniques.

The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel (techniques for solving such problems appear in Chapter 24), and using a search engine to quickly find pages on which particular information resides (related techniques are in Chapters 11 and 32).

Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures (covered in Chapter 31), which are based on numerical algorithms and number theory.

Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming, which we shall study in Chapter 29.

Although some of the details of these examples are beyond the scope of this book, we do give underlying techniques that apply to these problems and problem areas. We also show how to solve many specific problems, including the following:

We are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest? Here, we model the road map (which is itself a model of the actual roads) as a graph (which we will meet in Part VI and Appendix B), and we wish to find the shortest path from one vertex to another in the graph. We shall see how to solve this problem efficiently in Chapter 24.

We are given two ordered sequences of symbols, X = ⟨x_1, x_2, ..., x_m⟩ and Y = ⟨y_1, y_2, ..., y_n⟩, and we wish to find a longest common subsequence of X and Y. A subsequence of X is just X with some (or possibly all or none) of its elements removed. For example, one subsequence of ⟨A, B, C, D, E, F, G⟩ would be ⟨B, C, E, G⟩. The length of a longest common subsequence of X and Y gives one measure of how similar these two sequences are. For example, if the two sequences are base pairs in DNA strands, then we might consider them similar if they have a long common subsequence. If X has m symbols and Y has n symbols, then X and Y have 2^m and 2^n possible subsequences, respectively.
Selecting all possible subsequences of X and Y and matching them up could take a prohibitively long time unless m and n are very small. We shall see in Chapter 15 how to use a general technique known as dynamic programming to solve this problem much more efficiently; a small illustrative sketch appears below.

We are given a mechanical design in terms of a library of parts, where each part may include instances of other parts, and we need to list the parts in order so that each part appears before any part that uses it. If the design comprises n parts, then there are n! possible orders, where n! denotes the factorial function. Because the factorial function grows faster than even an exponential function, we cannot feasibly generate each possible order and then verify that, within that order, each part appears before the parts using it (unless we have only a few parts). This problem is an instance of topological sorting, and we shall see in Chapter 22 how to solve this problem efficiently.

We are given n points in the plane, and we wish to find the convex hull of these points. The convex hull is the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board. The convex hull would be represented by a tight rubber band that surrounds all the nails. Each nail around which the rubber band makes a turn is a vertex of the convex hull. (See Figure 33.6 on page 1029 for an example.) Any of the 2^n subsets of the points might be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull. Chapter 33 gives two good methods for finding the convex hull.

These lists are far from exhaustive (as you again have probably surmised from this book’s heft), but exhibit two characteristics that are common to many interesting algorithmic problems:

1. They have many candidate solutions, the overwhelming majority of which do not solve the problem at hand. Finding one that does, or one that is “best,” can present quite a challenge.

2. They have practical applications. Of the problems in the above list, finding the shortest path provides the easiest examples. A transportation firm, such as a trucking or railroad company, has a financial interest in finding shortest paths through a road or rail network because taking shorter paths results in lower labor and fuel costs. Or a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly. Or a person wishing to drive from New York to Boston may want to find driving directions from an appropriate Web site, or she may use her GPS while driving.
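The longest-common-subsequence problem mentioned above is a good place to see what dynamic programming buys. The following Python sketch is our own illustration, not the book’s Chapter 15 treatment: it memoizes the LCS length of two strings so that each pair of prefixes is examined once, instead of enumerating the 2^m and 2^n subsequences. The function names and the use of strings rather than general sequences are assumptions made for the example.

```python
from functools import lru_cache

def lcs_length(x: str, y: str) -> int:
    """Length of a longest common subsequence of x and y.

    A minimal memoized sketch of the dynamic-programming idea: it runs in
    O(m*n) time rather than enumerating all 2^m and 2^n subsequences.
    """
    @lru_cache(maxsize=None)
    def best(i: int, j: int) -> int:
        # best(i, j) = LCS length of the prefixes x[:i] and y[:j].
        if i == 0 or j == 0:
            return 0                                   # an empty prefix has an empty LCS
        if x[i - 1] == y[j - 1]:
            return best(i - 1, j - 1) + 1              # last symbols match: extend the LCS
        return max(best(i - 1, j), best(i, j - 1))     # otherwise drop one symbol or the other

    return best(len(x), len(y))

# ⟨B, C, E, G⟩ is a common subsequence of ⟨A, B, C, D, E, F, G⟩ from the text,
# and here it is in fact a longest one.
assert lcs_length("ABCDEFG", "BCEG") == 4
```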
Chapter 30 gives an efficient algorithm, the fast Fourier transform (commonly called the FFT), for this problem, and the chapter also sketches out the design of a hardware circuit to compute the FFT. Data structures This book also contains several data structures. A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them. Technique Although you can use this book as a “cookbook” for algorithms, you may someday encounter a problem for which you cannot readily find a published algorithm (many of the exercises and problems in this book, for example). This book will teach you techniques of algorithm design and analysis so that you can develop algorithms on your own, show that they give the correct answer, and understand their efficiency. Different chapters address different aspects of algorithmic problem solving. Some chapters address specific problems, such as finding medians and order statistics in Chapter 9, computing minimum spanning trees in Chapter 23, and determining a maximum flow in a network in Chapter 26. Other chapters address techniques, such as divide-and-conquer in Chapter 4, dynamic programming in Chapter 15, and amortized analysis in Chapter 17. Hard problems Most of this book is about efficient algorithms. Our usual measure of efficiency is speed, i.e., how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. Chapter 34 studies an interesting subset of these problems, which are known as NP-complete. Why are NP-complete problems interesting? First, although no efficient algo- rithm for an NP-complete problem has ever been found, nobody has ever proven10 Chapter 1 The Role of Algorithms in Computing that an efficient algorithm for one cannot exist. In other words, no one knows whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algo- rithm exists for any one of them, then efficient algorithms exist for all of them. This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do know of efficient algorithms. Computer scientists are intrigued by how a small change to the problem statement can cause a big change to the efficiency of the best known algorithm. You should know about NP-complete problems because some of them arise sur- prisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If you can show that the problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution. As a concrete example, consider a delivery company with a central depot. Each day, it loads up each delivery truck at the depot and sends it around to deliver goods to several addresses. At the end of the day, each truck must end up back at the depot so that it is ready to be loaded for the next day. To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by each truck. This problem is the well-known “traveling-salesman problem,” and it is NP-complete. 
It has no known efficient algorithm. Under certain assumptions, however, we know of efficient algorithms that give an overall distance which is not too far above the smallest possible. Chapter 35 discusses such “approximation algorithms.”

Parallelism

For many years, we could count on processor clock speeds increasing at a steady rate. Physical limitations present a fundamental roadblock to ever-increasing clock speeds, however: because power density increases superlinearly with clock speed, chips run the risk of melting once their clock speeds become high enough. In order to perform more computations per second, therefore, chips are being designed to contain not just one but several processing “cores.” We can liken these multicore computers to several sequential computers on a single chip; in other words, they are a type of “parallel computer.” In order to elicit the best performance from multicore computers, we need to design algorithms with parallelism in mind. Chapter 27 presents a model for “multithreaded” algorithms, which take advantage of multiple cores. This model has advantages from a theoretical standpoint, and it forms the basis of several successful computer programs, including a championship chess program.

Exercises

1.1-1
Give a real-world example that requires sorting or a real-world example that requires computing a convex hull.

1.1-2
Other than speed, what other measures of efficiency might one use in a real-world setting?

1.1-3
Select a data structure that you have seen previously, and discuss its strengths and limitations.

1.1-4
How are the shortest-path and traveling-salesman problems given above similar? How are they different?

1.1-5
Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is “approximately” the best is good enough.

1.2 Algorithms as a technology

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is yes, if for no other reason than that you would still like to demonstrate that your solution method terminates and does so with the correct answer.
If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice (for example, your implementation should be well designed and documented), but you would most often use whichever method was the easiest to implement.
Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.

Efficiency

Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software.
As an example, in Chapter 2, we will see two algorithms for sorting. The first, known as insertion sort, takes time roughly equal to c_1·n^2 to sort n items, where c_1 is a constant that does not depend on n. That is, it takes time roughly proportional to n^2. The second, merge sort, takes time roughly equal to c_2·n lg n, where lg n stands for log_2 n and c_2 is another constant that also does not depend on n.
Insertion sort typically has a smaller constant factor than merge sort, so that c_1 < c_2.

INSERTION-SORT(A)
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1..j-1].
4      i = j - 1
5      while i > 0 and A[i] > key
6          A[i+1] = A[i]
7          i = i - 1
8      A[i+1] = key

Loop invariants and the correctness of insertion sort

Figure 2.2 shows how this algorithm works for A = ⟨5, 2, 4, 6, 1, 3⟩. The index j indicates the “current card” being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1..j-1] constitutes the currently sorted hand, and the remaining subarray A[j+1..n] corresponds to the pile of cards still on the table. In fact, elements A[1..j-1] are the elements originally in positions 1 through j-1, but now in sorted order. We state these properties of A[1..j-1] formally as a loop invariant:

At the start of each iteration of the for loop of lines 1–8, the subarray A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.

We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:

Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

When the first two properties hold, the loop invariant is true prior to every iteration of the loop. (Of course, we are free to use established facts other than the loop invariant itself to prove that the loop invariant remains true before each iteration.) Note the similarity to mathematical induction, where to prove that a property holds, you prove a base case and an inductive step. Here, showing that the invariant holds before the first iteration corresponds to the base case, and showing that the invariant holds from iteration to iteration corresponds to the inductive step.
The third property is perhaps the most important one, since we are using the loop invariant to show correctness. Typically, we use the loop invariant along with the condition that caused the loop to terminate. The termination property differs from how we usually use mathematical induction, in which we apply the inductive step infinitely; here, we stop the “induction” when the loop terminates.
Let us see how these properties hold for insertion sort.

Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2.¹ The subarray A[1..j-1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted (trivially, of course), which shows that the loop invariant holds prior to the first iteration of the loop.

Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j-1], A[j-2], A[j-3], and so on by one position to the right until it finds the proper position for A[j] (lines 4–7), at which point it inserts the value of A[j] (line 8). The subarray A[1..j] then consists of the elements originally in A[1..j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant.

A more formal treatment of the second property would require us to state and show a loop invariant for the while loop of lines 5–7.
At this point, however, we prefer not to get bogged down in such formalism, and so we rely on our informal analysis to show that the second property holds for the outer loop.

¹When the loop is a for loop, the moment at which we check the loop invariant just prior to the first iteration is immediately after the initial assignment to the loop-counter variable and just before the first test in the loop header. In the case of INSERTION-SORT, this time is after assigning 2 to the variable j but before the first test of whether j ≤ A.length.

Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. Observing that the subarray A[1..n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.

We shall use this method of loop invariants to show correctness later in this chapter and in other chapters as well.

Pseudocode conventions

We use the following conventions in our pseudocode.

Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2–8, and the body of the while loop that begins on line 5 contains lines 6–7 but not line 8. Our indentation style applies to if-else statements² as well. Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity.³

The looping constructs while, for, and repeat-until and the if-else conditional construct have interpretations similar to those in C, C++, Java, Python, and Pascal.⁴ In this book, the loop counter retains its value after exiting the loop, unlike some situations that arise in C++, Java, and Pascal. Thus, immediately after a for loop, the loop counter’s value is the value that first exceeded the for loop bound. We used this property in our correctness argument for insertion sort. The for loop header in line 1 is for j = 2 to A.length, and so when this loop terminates, j = A.length + 1 (or, equivalently, j = n + 1, since n = A.length). We use the keyword to when a for loop increments its loop counter in each iteration, and we use the keyword downto when a for loop decrements its loop counter. When the loop counter changes by an amount greater than 1, the amount of change follows the optional keyword by.

²In an if-else statement, we indent else at the same level as its matching if. Although we omit the keyword then, we occasionally refer to the portion executed when the test following if is true as a then clause. For multiway tests, we use elseif for tests after the first one.

³Each pseudocode procedure in this book appears on one page so that you will not have to discern levels of indentation in code that is split across pages.

⁴Most block-structured languages have equivalent constructs, though the exact syntax may differ. Python lacks repeat-until loops, and its for loops operate a little differently from the for loops in this book.

The symbol “//” indicates that the remainder of the line is a comment.
A multiple assignment of the form i D j D e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j D e followed by the assignment i D j . Variables (such as i, j ,andkey) are local to the given procedure. We shall not use global variables without explicit indication. We access array elements by specifying the array name followed by the in- dex in square brackets. For example, AŒi indicates the ith element of the array A. The notation “::” is used to indicate a range of values within an ar- ray. Thus, AŒ1 : : j indicates the subarray of A consisting of the j elements AŒ1;AŒ2;:::;AŒj. We typically organize compound data into objects, which are composed of attributes. We access a particular attribute using the syntax found in many object-oriented programming languages: the object name, followed by a dot, followed by the attribute name. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write A:length. We treat a variable representing an array or object as a pointer to the data rep- resenting the array or object. For all attributes f of an object x, setting y D x causes y:f to equal x:f. Moreover, if we now set x:f D 3, then afterward not only does x:f equal 3,buty:f equals 3 as well. In other words, x and y point to the same object after the assignment y D x. Our attribute notation can “cascade.” For example, suppose that the attribute f is itself a pointer to some type of object that has an attribute g. Then the notation x:f:g is implicitly parenthesized as .x:f/:g. In other words, if we had assigned y D x:f,thenx:f:g is the same as y:g. Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL. We pass parameters to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure. When objects are passed, the pointer to the data representing the object is copied, but the object’s attributes are not. For example, if x is a parameter of a called procedure, the assignment x D y within the called procedure is not visible to the calling procedure. The assignment x:f D 3, however, is visible. Similarly, arrays are passed by pointer, so that22 Chapter 2 Getting Started a pointer to the array is passed, rather than the entire array, and changes to individual array elements are visible to the calling procedure. A return statement immediately transfers control back to the point of call in the calling procedure. Most return statements also take a value to pass back to the caller. Our pseudocode differs from many programming languages in that we allow multiple values to be returned in a single return statement. The boolean operators “and” and “or” are short circuiting. That is, when we evaluate the expression “x and y”wefirstevaluatex.Ifx evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE,wemustevaluatey to determine the value of the entire expression. Similarly, in the expression “x or y”weeval- uate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as “x ¤ NIL and x:f D y” without worrying about what happens when we try to evaluate x:f when x is NIL. 
The keyword error indicates that an error occurred because conditions were wrong for the procedure to have been called. The calling procedure is respon- sible for handling the error, and so we do not specify what action to take. Exercises 2.1-1 Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A Dh31; 41; 59; 26; 41; 58i. 2.1-2 Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of non- decreasing order. 2.1-3 Consider the searching problem: Input: A sequence of n numbers A Dha1;a2;:::;ani and a value . Output: An index i such that D AŒi or the special value NIL if does not appear in A. Write pseudocode for linear search, which scans through the sequence, looking for . Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties. 2.1-4 Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in2.2 Analyzing algorithms 23 an .n C 1/-element array C. State the problem formally and write pseudocode for adding the two integers. 2.2 Analyzing algorithms Analyzing an algorithm has come to mean predicting the resources that the algo- rithm requires. Occasionally, resources such as memory, communication band- width, or computer hardware are of primary concern, but most often it is compu- tational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, we can identify a most efficient one. Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process. Before we can analyze an algorithm, we must have a model of the implemen- tation technology that we will use, including a model for the resources of that technology and their costs. For most of this book, we shall assume a generic one- processor, random-access machine (RAM) model of computation as our imple- mentation technology and understand that our algorithms will be implemented as computer programs. In the RAM model, instructions are executed one after an- other, with no concurrent operations. Strictly speaking, we should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts? Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are de- signed. The RAM model contains instructions commonly found in real computers: arithmetic (such as add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time. The data types in the RAM model are integer and floating point (for storing real numbers). Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typ- ically assume that integers are represented by c lg n bits for some constant c 1. 
We require c 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time—clearly an unrealistic scenario.)24 Chapter 2 Getting Started Real computers contain instructions not listed above, and such instructions rep- resent a gray area in the RAM model. For example, is exponentiation a constant- time instruction? In the general case, no; it takes several instructions to compute xy when x and y are real numbers. In restricted situations, however, exponentiation is a constant-time operation. Many computers have a “shift left” instruction, which in constant time shifts the bits of an integer by k positions to the left. In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2, so that shifting the bits by k positions to the left is equiv- alent to multiplication by 2k. Therefore, such computers can compute 2k in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2k as a constant-time operation when k is a small enough positive integer. In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory. Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, and so they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines. Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, alge- braic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas. Even though we typically select only one machine model to analyze a given al- gorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important char- acteristics of an algorithm’s resource requirements, and suppresses tedious details. Analysis of insertion sort The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers takes longer than sorting three numbers. Moreover, INSERTION- SORT can take different amounts of time to sort two input sequences of the same size depending on how nearly sorted they already are. In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms “running time” and “size of input” more carefully.2.2 Analyzing algorithms 25 The best notion for input size depends on the problem being studied. 
For many problems, such as sorting or computing discrete Fourier transforms, the most nat- ural measure is the number of items in the input—for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algo- rithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study. The running time of an algorithm on a particular input is the number of primitive operations or “steps” executed. It is convenient to define the notion of step so that it is as machine-independent as possible. For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the ith line takes time ci ,whereci is a constant. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers.5 In the following discussion, our expression for the running time of INSERTION- SORT will evolve from a messy formula that uses all the statement costs ci to a much simpler notation that is more concise and more easily manipulated. This simpler notation will also make it easy to determine whether one algorithm is more efficient than another. We start by presenting the INSERTION-SORT procedure with the time “cost” of each statement and the number of times each statement is executed. For each j D 2;3;:::;n,wheren D A:length,welettj denote the number of times the while loop test in line 5 is executed for that value of j .Whenafor or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time. 5There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say “sort the points by x-coordinate,” which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more. 
That is, we separate the process of calling the subroutine—passing parameters to it, etc.—from the process of executing the subroutine.

INSERTION-SORT(A)                                             cost   times
1   for j = 2 to A.length                                     c₁     n
2       key = A[j]                                            c₂     n − 1
3       // Insert A[j] into the sorted sequence A[1..j−1].    0      n − 1
4       i = j − 1                                             c₄     n − 1
5       while i > 0 and A[i] > key                            c₅     Σ_{j=2..n} tⱼ
6           A[i+1] = A[i]                                     c₆     Σ_{j=2..n} (tⱼ − 1)
7           i = i − 1                                         c₇     Σ_{j=2..n} (tⱼ − 1)
8       A[i+1] = key                                          c₈     n − 1

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes cᵢ steps to execute and executes n times will contribute cᵢ·n to the total running time.⁶ To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining

T(n) = c₁n + c₂(n − 1) + c₄(n − 1) + c₅ Σ_{j=2..n} tⱼ + c₆ Σ_{j=2..n} (tⱼ − 1) + c₇ Σ_{j=2..n} (tⱼ − 1) + c₈(n − 1) .

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT, the best case occurs if the array is already sorted. For each j = 2, 3, …, n, we then find that A[i] ≤ key in line 5 when i has its initial value of j − 1. Thus tⱼ = 1 for j = 2, 3, …, n, and the best-case running time is

T(n) = c₁n + c₂(n − 1) + c₄(n − 1) + c₅(n − 1) + c₈(n − 1)
     = (c₁ + c₂ + c₄ + c₅ + c₈)n − (c₂ + c₄ + c₅ + c₈) .

We can express this running time as an + b for constants a and b that depend on the statement costs cᵢ; it is thus a linear function of n.

If the array is in reverse sorted order—that is, in decreasing order—the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1..j−1], and so tⱼ = j for j = 2, 3, …, n. Noting that

Σ_{j=2..n} j = n(n + 1)/2 − 1   and   Σ_{j=2..n} (j − 1) = n(n − 1)/2

(see Appendix A for a review of how to solve these summations), we find that in the worst case, the running time of INSERTION-SORT is

T(n) = c₁n + c₂(n − 1) + c₄(n − 1) + c₅(n(n + 1)/2 − 1) + c₆(n(n − 1)/2) + c₇(n(n − 1)/2) + c₈(n − 1)
     = (c₅/2 + c₆/2 + c₇/2)n² + (c₁ + c₂ + c₄ + c₅/2 − c₆/2 − c₇/2 + c₈)n − (c₂ + c₄ + c₅ + c₈) .

We can express this worst-case running time as an² + bn + c for constants a, b, and c that again depend on the statement costs cᵢ; it is thus a quadratic function of n.

Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting "randomized" algorithms whose behavior can vary even for a fixed input.

⁶ This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily reference mn distinct words of memory.

Worst-case and average-case analysis

In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.

The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.

For some algorithms, the worst case occurs fairly often.
For example, in search- ing a database for a particular piece of information, the searching algorithm’s worst case will often occur when the information is not present in the database. In some applications, searches for absent information may be frequent.28 Chapter 2 Getting Started The “average case” is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray AŒ1 : : j 1 to insert element AŒj ?Onaverage, half the elements in AŒ1 : : j 1 are less than AŒj , and half the elements are greater. On average, therefore, we check half of the subarray AŒ1 : : j 1,and so tj is about j=2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time. In some particular cases, we shall be interested in the average-case running time of an algorithm; we shall see the technique of probabilistic analysis applied to various algorithms throughout this book. The scope of average-case analysis is limited, because it may not be apparent what constitutes an “average” input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis and yield an expected running time. We explore randomized algorithms more in Chapter 5 and in several other subsequent chapters. Order of growth We used some simplifying abstractions to ease our analysis of the INSERTION- SORT procedure. First, we ignored the actual cost of each statement, using the constants ci to represent these costs. Then, we observed that even these constants give us more detail than we really need: we expressed the worst-case running time as an2 C bn C c for some constants a, b,andc that depend on the statement costs ci . We thus ignored not only the actual statement costs, but also the abstract costs ci . We shall now make one more simplifying abstraction: it is the rate of growth, or order of growth, of the running time that really interests us. We therefore con- sider only the leading term of a formula (e.g., an2), since the lower-order terms are relatively insignificant for large values of n. We also ignore the leading term’s con- stant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. For insertion sort, when we ignore the lower-order terms and the leading term’s constant coefficient, we are left with the factor of n2 from the leading term. We write that insertion sort has a worst-case running time of ‚.n2/ (pronounced “theta of n-squared”). We shall use ‚-notation informally in this chapter, and we will define it precisely in Chapter 3. We usually consider one algorithm to be more efficient than another if its worst- case running time has a lower order of growth. Due to constant factors and lower- order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower2.3 Designing algorithms 29 order of growth. But for large enough inputs, a ‚.n2/ algorithm, for example, will run more quickly in the worst case than a ‚.n3/ algorithm. Exercises 2.2-1 Express the function n3=1000 100n2 100n C 3 in terms of ‚-notation. 
2.2-2 Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in AŒ1. Then find the second smallest element of A, and exchange it with AŒ2. Continue in this manner for the first n1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in ‚-notation. 2.2-3 Consider linear search again (see Exercise 2.1-3). How many elements of the in- put sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in ‚-notation? Justify your answers. 2.2-4 How can we modify almost any algorithm to have a good best-case running time? 2.3 Designing algorithms We can choose from a wide range of algorithm design techniques. For insertion sort, we used an incremental approach: having sorted the subarray AŒ1 : : j 1, we inserted the single element AŒj into its proper place, yielding the sorted subarray AŒ1 : : j . In this section, we examine an alternative design approach, known as “divide- and-conquer,” which we shall explore in more detail in Chapter 4. We’ll use divide- and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort. One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that we will see in Chapter 4.30 Chapter 2 Getting Started 2.3.1 The divide-and-conquer approach Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related sub- problems. These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original prob- lem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem. The divide-and-conquer paradigm involves three steps at each level of the recur- sion: Divide the problem into a number of subproblems that are smaller instances of the same problem. Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner. Combine the solutions to the subproblems into the solution for the original prob- lem. The merge sort algorithm closely follows the divide-and-conquer paradigm. In- tuitively, it operates as follows. Divide: Divide the n-element sequence to be sorted into two subsequences of n=2 elements each. Conquer: Sort the two subsequences recursively using merge sort. Combine: Merge the two sorted subsequences to produce the sorted answer. The recursion “bottoms out” when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order. The key operation of the merge sort algorithm is the merging of two sorted sequences in the “combine” step. We merge by calling an auxiliary procedure MERGE.A;p;q;r/,whereA is an array and p, q,andr are indices into the array such that p q RŒj , then lines 16–17 perform the appropriate action to maintain the loop invariant. 
Termination: At termination, k D r C 1. By the loop invariant, the subarray AŒp : : k 1,whichisAŒp : : r, contains the k p D r p C 1 smallest elements of LŒ1::n1 C 1 and RŒ1::n2 C 1, in sorted order. The arrays L and R together contain n1 C n2 C 2 D r p C 3 elements. All but the two largest have been copied back into A, and these two largest elements are the sentinels.34 Chapter 2 Getting Started To see that the MERGE procedure runs in ‚.n/ time, where n D r p C 1, observe that each of lines 1–3 and 8–11 takes constant time, the for loops of lines 4–7 take ‚.n1 C n2/ D ‚.n/ time,7 and there are n iterations of the for loop of lines 12–17, each of which takes constant time. We can now use the MERGE procedure as a subroutine in the merge sort al- gorithm. The procedure MERGE-SORT.A;p;r/ sorts the elements in the subar- ray AŒp : : r.Ifp r, the subarray has at most one element and is therefore already sorted. Otherwise, the divide step simply computes an index q that par- titions AŒp : : r into two subarrays: AŒp : : q, containing dn=2e elements, and AŒq C 1::r, containing bn=2c elements.8 MERGE-SORT.A;p;r/ 1 if p1elements, we break down the running time as follows. Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D.n/ D ‚.1/. Conquer: We recursively solve two subproblems, each of size n=2, which con- tributes 2T .n=2/ to the running time. Combine: We have already noted that the MERGE procedure on an n-element subarray takes time ‚.n/,andsoC.n/ D ‚.n/. When we add the functions D.n/ and C.n/ for the merge sort analysis, we are adding a function that is ‚.n/ and a function that is ‚.1/. This sum is a linear function of n,thatis,‚.n/. Adding it to the 2T .n=2/ term from the “conquer” step gives the recurrence for the worst-case running time T .n/ of merge sort: T .n/ D ( ‚.1/ if n D 1; 2T .n=2/ C ‚.n/ if n>1: (2.1) In Chapter 4, we shall see the “master theorem,” which we can use to show that T .n/ is ‚.n lg n/, where lg n stands for log2 n. Because the logarithm func- tion grows more slowly than any linear function, for large enough inputs, merge sort, with its ‚.n lg n/ running time, outperforms insertion sort, whose running time is ‚.n2/, in the worst case. We do not need the master theorem to intuitively understand why the solution to the recurrence (2.1) is T .n/ D ‚.n lg n/. Let us rewrite recurrence (2.1) as T .n/ D ( c if n D 1; 2T .n=2/ C cn if n>1; (2.2) where the constant c represents the time required to solve problems of size 1 as well as the time per array element of the divide and combine steps.9 9It is unlikely that the same constant exactly represents both the time to solve problems of size 1 and the time per array element of the divide and combine steps. We can get around this problem by letting c be the larger of these times and understanding that our recurrence gives an upper bound on the running time, or by letting c be the lesser of these times and understanding that our recurrence gives a lower bound on the running time. Both bounds are on the order of n lg n and, taken together, give a ‚.n lg n/ running time.2.3 Designing algorithms 37 Figure 2.5 shows how we can solve recurrence (2.2). For convenience, we as- sume that n is an exact power of 2. Part (a) of the figure shows T .n/,whichwe expand in part (b) into an equivalent tree representing the recurrence. The cn term is the root (the cost incurred at the top level of recursion), and the two subtrees of the root are the two smaller recurrences T .n=2/. 
Part (c) shows this process carried one step further by expanding T .n=2/. The cost incurred at each of the two sub- nodes at the second level of recursion is cn=2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence, until the problem sizes get down to 1, each with a cost of c. Part (d) shows the resulting recursion tree. Next, we add the costs across each level of the tree. The top level has total cost cn, the next level down has total cost c.n=2/ C c.n=2/ D cn, the level after that has total cost c.n=4/Cc.n=4/Cc.n=4/Cc.n=4/ D cn, and so on. In general, the level i below the top has 2i nodes, each contributing a cost of c.n=2i /,sothat the ith level below the top has total cost 2i c.n=2i / D cn. The bottom level has n nodes, each contributing a cost of c, for a total cost of cn. The total number of levels of the recursion tree in Figure 2.5 is lg n C 1,where n is the number of leaves, corresponding to the input size. An informal inductive argument justifies this claim. The base case occurs when n D 1, in which case the tree has only one level. Since lg 1 D 0,wehavethatlgn C 1 gives the correct number of levels. Now assume as an inductive hypothesis that the number of levels of a recursion tree with 2i leaves is lg 2i C 1 D i C 1 (since for any value of i, we have that lg 2i D i). Because we are assuming that the input size is a power of 2, the next input size to consider is 2iC1. A tree with n D 2iC1 leaves has one more level than a tree with 2i leaves, and so the total number of levels is .i C 1/ C 1 D lg 2iC1 C 1. To compute the total cost represented by the recurrence (2.2), we simply add up the costs of all the levels. The recursion tree has lg n C 1 levels, each costing cn, for a total cost of cn.lg n C 1/ D cnlg n C cn. Ignoring the low-order term and the constant c gives the desired result of ‚.n lg n/. Exercises 2.3-1 Using Figure 2.4 as a model, illustrate the operation of merge sort on the array A Dh3; 41; 52; 26; 38; 57; 9; 49i. 2.3-2 Rewrite the MERGE procedure so that it does not use sentinels, instead stopping once either array L or R has had all its elements copied back to A and then copying the remainder of the other array back into A.38 Chapter 2 Getting Started cn cn … Total: cn lg n + cn cn lg n cn n c c c c c c c … (d) (c) cn T(n/2) T(n/2) (b) T(n) (a) cn cn/2 T(n/4) T(n/4) cn/2 T(n/4) T(n/4) cn cn/2 cn/4 cn/4 cn/2 cn/4 cn/4 Figure 2.5 How to construct a recursion tree for the recurrence T .n/ D 2T .n=2/ C cn. Part (a) shows T .n/, which progressively expands in (b)–(d) to form the recursion tree. The fully expanded tree in part (d) has lg n C 1 levels (i.e., it has height lg n, as indicated), and each level contributes a total cost of cn. The total cost, therefore, is cnlg n C cn,whichis‚.n lg n/.Problems for Chapter 2 39 2.3-3 Use mathematical induction to show that when n is an exact power of 2,thesolu- tion of the recurrence T .n/ D ( 2 if n D 2; 2T .n=2/ C n if n D 2k,fork>1 is T .n/ D n lg n. 2.3-4 We can express insertion sort as a recursive procedure as follows. In order to sort AŒ1 : : n, we recursively sort AŒ1 : : n1 and then insert AŒn into the sorted array AŒ1 : : n 1. Write a recurrence for the running time of this recursive version of insertion sort. 2.3-5 Referring back to the searching problem (see Exercise 2.1-3), observe that if the sequence A is sorted, we can check the midpoint of the sequence against and eliminate half of the sequence from further consideration. 
The binary search al- gorithm repeats this procedure, halving the size of the remaining portion of the sequence each time. Write pseudocode, either iterative or recursive, for binary search. Argue that the worst-case running time of binary search is ‚.lg n/. 2.3-6 Observe that the while loop of lines 5–7 of the INSERTION-SORT procedure in Section 2.1 uses a linear search to scan (backward) through the sorted subarray AŒ1 : : j 1. Can we use a binary search (see Exercise 2.3-5) instead to improve the overall worst-case running time of insertion sort to ‚.n lg n/? 2.3-7 ? Describe a ‚.n lg n/-time algorithm that, given a set S of n integers and another integer x, determines whether or not there exist two elements in S whose sum is exactly x. Problems 2-1 Insertion sort on small arrays in merge sort Although merge sort runs in ‚.n lg n/ worst-case time and insertion sort runs in ‚.n2/ worst-case time, the constant factors in insertion sort can make it faster in practice for small problem sizes on many machines. Thus, it makes sense to coarsen the leaves of the recursion by using insertion sort within merge sort when40 Chapter 2 Getting Started subproblems become sufficiently small. Consider a modification to merge sort in which n=k sublists of length k are sorted using insertion sort and then merged using the standard merging mechanism, where k is a value to be determined. a. Show that insertion sort can sort the n=k sublists, each of length k,in‚.nk/ worst-case time. b. Show how to merge the sublists in ‚.n lg.n=k// worst-case time. c. Given that the modified algorithm runs in ‚.nk C n lg.n=k// worst-case time, what is the largest value of k as a function of n for which the modified algorithm has the same running time as standard merge sort, in terms of ‚-notation? d. How should we choose k in practice? 2-2 Correctness of bubblesort Bubblesort is a popular, but inefficient, sorting algorithm. It works by repeatedly swapping adjacent elements that are out of order. BUBBLESORT.A/ 1 for i D 1 to A:length 1 2 for j D A:length downto i C 1 3 if AŒj < AŒj 1 4 exchange AŒj with AŒj 1 a. Let A0 denote the output of BUBBLESORT.A/. To prove that BUBBLESORT is correct, we need to prove that it terminates and that A0Œ1 A0Œ2 A0Œn ; (2.3) where n D A:length. In order to show that BUBBLESORT actually sorts, what else do we need to prove? The next two parts will prove inequality (2.3). b. State precisely a loop invariant for the for loop in lines 2–4, and prove that this loop invariant holds. Your proof should use the structure of the loop invariant proof presented in this chapter. c. Using the termination condition of the loop invariant proved in part (b), state a loop invariant for the for loop in lines 1–4 that will allow you to prove in- equality (2.3). Your proof should use the structure of the loop invariant proof presented in this chapter.Problems for Chapter 2 41 d. What is the worst-case running time of bubblesort? How does it compare to the running time of insertion sort? 2-3 Correctness of Horner’s rule The following code fragment implements Horner’s rule for evaluating a polynomial P.x/ D nX kD0 akxk D a0 C x.a1 C x.a2 CCx.an1 C xan/ // ; given the coefficients a0;a1;:::;an and a value for x: 1 y D 0 2 for i D n downto 0 3 y D ai C x y a. In terms of ‚-notation, what is the running time of this code fragment for Horner’s rule? b. Write pseudocode to implement the naive polynomial-evaluation algorithm that computes each term of the polynomial from scratch. 
What is the running time of this algorithm? How does it compare to Horner’s rule? c. Consider the following loop invariant: At the start of each iteration of the for loop of lines 2–3, y D n.iC1/X kD0 akCiC1xk : Interpret a summation with no terms as equaling 0. Following the structure of the loop invariant proof presented in this chapter, use this loop invariant to show that, at termination, y D Pn kD0 akxk. d. Conclude by arguing that the given code fragment correctly evaluates a poly- nomial characterized by the coefficients a0;a1;:::;an. 2-4 Inversions Let AŒ1 : : n be an array of n distinct numbers. If i AŒj , then the pair .i; j / is called an inversion of A. a. List the five inversions of the array h2; 3; 8; 6; 1i.42 Chapter 2 Getting Started b. What array with elements from the set f1;2;:::;ng has the most inversions? How many does it have? c. What is the relationship between the running time of insertion sort and the number of inversions in the input array? Justify your answer. d. Give an algorithm that determines the number of inversions in any permutation on n elements in ‚.n lg n/ worst-case time. (Hint: Modify merge sort.) Chapter notes In 1968, Knuth published the first of three volumes with the general title The Art of Computer Programming [209, 210, 211]. The first volume ushered in the modern study of computer algorithms with a focus on the analysis of running time, and the full series remains an engaging and worthwhile reference for many of the topics presented here. According to Knuth, the word “algorithm” is derived from the name “al-Khowˆarizmˆı,” a ninth-century Persian mathematician. Aho, Hopcroft, and Ullman [5] advocated the asymptotic analysis of algo- rithms—using notations that Chapter 3 introduces, including ‚-notation—as a means of comparing relative performance. They also popularized the use of re- currence relations to describe the running times of recursive algorithms. Knuth [211] provides an encyclopedic treatment of many sorting algorithms. His comparison of sorting algorithms (page 381) includes exact step-counting analyses, like the one we performed here for insertion sort. Knuth’s discussion of insertion sort encompasses several variations of the algorithm. The most important of these is Shell’s sort, introduced by D. L. Shell, which uses insertion sort on periodic subsequences of the input to produce a faster sorting algorithm. Merge sort is also described by Knuth. He mentions that a mechanical colla- tor capable of merging two decks of punched cards in a single pass was invented in 1938. J. von Neumann, one of the pioneers of computer science, apparently wrote a program for merge sort on the EDVAC computer in 1945. The early history of proving programs correct is described by Gries [153], who credits P. Naur with the first article in this field. Gries attributes loop invariants to R. W. Floyd. The textbook by Mitchell [256] describes more recent progress in proving programs correct.3 Growth of Functions The order of growth of the running time of an algorithm, defined in Chapter 2, gives a simple characterization of the algorithm’s efficiency and also allows us to compare the relative performance of alternative algorithms. Once the input size n becomes large enough, merge sort, with its ‚.n lg n/ worst-case running time, beats insertion sort, whose worst-case running time is ‚.n2/. 
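One way to get a feel for "large enough" is to compare the two leading terms directly. The sketch below is not from the book and uses invented constant factors; the point is only that, whatever positive constants multiply n² and n lg n, the n lg n curve falls below the n² curve from some input size onward.

    import math

    # Purely illustrative constant factors; they are not measurements.
    A_INSERTION = 2.0     # pretend insertion sort takes about 2 * n^2 steps
    A_MERGE = 50.0        # pretend merge sort takes about 50 * n * lg(n) steps

    def crossover():
        """Smallest n at which the hypothetical merge-sort cost drops below
        the hypothetical insertion-sort cost."""
        n = 2
        while A_MERGE * n * math.log2(n) >= A_INSERTION * n * n:
            n += 1
        return n

    print(crossover())    # about 190 for these made-up constants

With different constants the crossover point moves, but it always exists, which is exactly what the order-of-growth comparison promises.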
Although we can sometimes determine the exact running time of an algorithm, as we did for insertion sort in Chapter 2, the extra precision is not usually worth the effort of computing it. For large enough inputs, the multiplicative constants and lower-order terms of an exact running time are dominated by the effects of the input size itself. When we look at input sizes large enough to make only the order of growth of the running time relevant, we are studying the asymptotic efficiency of algorithms. That is, we are concerned with how the running time of an algorithm increases with the size of the input in the limit, as the size of the input increases without bound. Usually, an algorithm that is asymptotically more efficient will be the best choice for all but very small inputs. This chapter gives several standard methods for simplifying the asymptotic anal- ysis of algorithms. The next section begins by defining several types of “asymp- totic notation,” of which we have already seen an example in ‚-notation. We then present several notational conventions used throughout this book, and finally we review the behavior of functions that commonly arise in the analysis of algorithms. 3.1 Asymptotic notation The notations we use to describe the asymptotic running time of an algorithm are defined in terms of functions whose domains are the set of natural numbers N D f0; 1; 2; : : :g. Such notations are convenient for describing the worst-case running-time function T .n/, which usually is defined only on integer input sizes. We sometimes find it convenient, however, to abuse asymptotic notation in a va-44 Chapter 3 Growth of Functions riety of ways. For example, we might extend the notation to the domain of real numbers or, alternatively, restrict it to a subset of the natural numbers. We should make sure, however, to understand the precise meaning of the notation so that when we abuse, we do not misuse it. This section defines the basic asymptotic notations and also introduces some common abuses. Asymptotic notation, functions, and running times We will use asymptotic notation primarily to describe the running times of algo- rithms, as when we wrote that insertion sort’s worst-case running time is ‚.n2/. Asymptotic notation actually applies to functions, however. Recall that we charac- terized insertion sort’s worst-case running time as an2 CbnCc, for some constants a, b,andc. By writing that insertion sort’s running time is ‚.n2/, we abstracted away some details of this function. Because asymptotic notation applies to func- tions, what we were writing as ‚.n2/ was the function an2 C bn C c,whichin that case happened to characterize the worst-case running time of insertion sort. In this book, the functions to which we apply asymptotic notation will usually characterize the running times of algorithms. But asymptotic notation can apply to functions that characterize some other aspect of algorithms (the amount of space they use, for example), or even to functions that have nothing whatsoever to do with algorithms. Even when we use asymptotic notation to apply to the running time of an al- gorithm, we need to understand which running time we mean. Sometimes we are interested in the worst-case running time. Often, however, we wish to characterize the running time no matter what the input. In other words, we often wish to make a blanket statement that covers all inputs, not just the worst case. 
We shall see asymptotic notations that are well suited to characterizing running times no matter what the input. ‚-notation In Chapter 2, we found that the worst-case running time of insertion sort is T .n/ D ‚.n2/. Let us define what this notation means. For a given function g.n/, we denote by ‚.g.n// the set of functions ‚.g.n// Dff .n/ W there exist positive constants c1, c2,andn0 such that 0 c1g.n/ f .n/ c2g.n/ for all n n0g :1 1Within set notation, a colon means “such that.”3.1 Asymptotic notation 45 (b) (c)(a) nnn n0n0n0 f .n/ D ‚.g.n// f .n/ D O.g.n// f .n/ D .g.n// f .n/ f .n/f .n/ cg.n/ cg.n/ c1g.n/ c2g.n/ Figure 3.1 Graphic examples of the ‚, O,and notations. In each part, the value of n0 shown is the minimum possible value; any greater value would also work. (a) ‚-notation bounds a func- tion to within constant factors. We write f.n/ D ‚.g.n// if there exist positive constants n0, c1, and c2 such that at and to the right of n0,thevalueoff.n/always lies between c1g.n/ and c2g.n/ inclusive. (b) O-notation gives an upper bound for a function to within a constant factor. We write f.n/D O.g.n// if there are positive constants n0 and c such that at and to the right of n0,thevalue of f.n/always lies on or below cg.n/. (c) -notation gives a lower bound for a function to within a constant factor. We write f.n/D .g.n// if there are positive constants n0 and c such that at and to the right of n0,thevalueoff.n/always lies on or above cg.n/. A function f .n/ belongs to the set ‚.g.n// if there exist positive constants c1 and c2 such that it can be “sandwiched” between c1g.n/ and c2g.n/,forsuffi- ciently large n. Because ‚.g.n// is a set, we could write “f .n/ 2 ‚.g.n//” to indicate that f .n/ is a member of ‚.g.n//. Instead, we will usually write “f .n/ D ‚.g.n//” to express the same notion. You might be confused because we abuse equality in this way, but we shall see later in this section that doing so has its advantages. Figure 3.1(a) gives an intuitive picture of functions f .n/ and g.n/,where f .n/ D ‚.g.n//. For all values of n at and to the right of n0,thevalueoff .n/ lies at or above c1g.n/ and at or below c2g.n/. In other words, for all n n0,the function f .n/ is equal to g.n/ to within a constant factor. We say that g.n/ is an asymptotically tight bound for f .n/. The definition of ‚.g.n// requires that every member f .n/ 2 ‚.g.n// be asymptotically nonnegative, that is, that f .n/ be nonnegative whenever n is suf- ficiently large. (An asymptotically positive function is one that is positive for all sufficiently large n.) Consequently, the function g.n/ itself must be asymptotically nonnegative, or else the set ‚.g.n// is empty. We shall therefore assume that every function used within ‚-notation is asymptotically nonnegative. This assumption holds for the other asymptotic notations defined in this chapter as well.46 Chapter 3 Growth of Functions In Chapter 2, we introduced an informal notion of ‚-notation that amounted to throwing away lower-order terms and ignoring the leading coefficient of the highest-order term. Let us briefly justify this intuition by using the formal defi- nition to show that 1 2 n2 3n D ‚.n2/. To do so, we must determine positive constants c1, c2,andn0 such that c1n2 1 2n2 3n c2n2 for all n n0. Dividing by n2 yields c1 1 2 3 n c2 : We can make the right-hand inequality hold for any value of n 1 by choosing any constant c2 1=2. Likewise, we can make the left-hand inequality hold for any value of n 7 by choosing any constant c1 1=14. 
Thus, by choosing c1 D 1=14, c2 D 1=2,andn0 D 7, we can verify that 1 2 n2 3n D ‚.n2/. Certainly, other choices for the constants exist, but the important thing is that some choice exists. Note that these constants depend on the function 1 2 n2 3n; a different function belonging to ‚.n2/ would usually require different constants. We can also use the formal definition to verify that 6n3 ¤ ‚.n2/. Suppose for the purpose of contradiction that c2 and n0 exist such that 6n3 c2n2 for all n n0. But then dividing by n2 yields n c2=6, which cannot possibly hold for arbitrarily large n,sincec2 is constant. Intuitively, the lower-order terms of an asymptotically positive function can be ignored in determining asymptotically tight bounds because they are insignificant for large n.Whenn is large, even a tiny fraction of the highest-order term suf- fices to dominate the lower-order terms. Thus, setting c1 to a value that is slightly smaller than the coefficient of the highest-order term and setting c2 to a value that is slightly larger permits the inequalities in the definition of ‚-notation to be sat- isfied. The coefficient of the highest-order term can likewise be ignored, since it only changes c1 and c2 by a constant factor equal to the coefficient. As an example, consider any quadratic function f .n/ D an2 C bn C c,where a, b,andc are constants and a>0. Throwing away the lower-order terms and ignoring the constant yields f .n/ D ‚.n2/. Formally, to show the same thing, we take the constants c1 D a=4, c2 D 7a=4,andn0 D 2 max.jbj =a; p jcj =a/.You may verify that 0 c1n2 an2 C bn C c c2n2 for all n n0. In general, for any polynomial p.n/ D Pd iD0 ai ni , where the ai are constants and ad >0,we have p.n/ D ‚.nd / (see Problem 3-1). Since any constant is a degree-0 polynomial, we can express any constant func- tion as ‚.n0/,or‚.1/. This latter notation is a minor abuse, however, because the3.1 Asymptotic notation 47 expression does not indicate what variable is tending to infinity.2 We shall often use the notation ‚.1/ to mean either a constant or a constant function with respect to some variable. O-notation The ‚-notation asymptotically bounds a function from above and below. When we have only an asymptotic upper bound,weuseO-notation. For a given func- tion g.n/, we denote by O.g.n// (pronounced “big-oh of g of n” or sometimes just “oh of g of n”) the set of functions O.g.n// Dff .n/ W there exist positive constants c and n0 such that 0 f .n/ cg.n/ for all n n0g : We use O-notation to give an upper bound on a function, to within a constant factor. Figure 3.1(b) shows the intuition behind O-notation. For all values n at and to the right of n0, the value of the function f .n/ is on or below cg.n/. We write f .n/ D O.g.n// to indicate that a function f .n/ isamemberofthe set O.g.n//. Note that f .n/ D ‚.g.n// implies f .n/ D O.g.n//,since‚- notation is a stronger notion than O-notation. Written set-theoretically, we have ‚.g.n// O.g.n//. Thus, our proof that any quadratic function an2 C bn C c, where a>0,isin‚.n2/ also shows that any such quadratic function is in O.n2/. What may be more surprising is that when a>0,anylinear function an C b is in O.n2/, which is easily verified by taking c D a C jbj and n0 D max.1; b=a/. If you have seen O-notation before, you might find it strange that we should write, for example, n D O.n2/. In the literature, we sometimes find O-notation informally describing asymptotically tight bounds, that is, what we have defined using ‚-notation. 
In this book, however, when we write f .n/ D O.g.n//,we are merely claiming that some constant multiple of g.n/ is an asymptotic upper bound on f .n/, with no claim about how tight an upper bound it is. Distinguish- ing asymptotic upper bounds from asymptotically tight bounds is standard in the algorithms literature. Using O-notation, we can often describe the running time of an algorithm merely by inspecting the algorithm’s overall structure. For example, the doubly nested loop structure of the insertion sort algorithm from Chapter 2 immediately yields an O.n2/ upper bound on the worst-case running time: the cost of each it- eration of the inner loop is bounded from above by O.1/ (constant), the indices i 2The real problem is that our ordinary notation for functions does not distinguish functions from values. In -calculus, the parameters to a function are clearly specified: the function n2 could be written as n:n2,orevenr:r2. Adopting a more rigorous notation, however, would complicate algebraic manipulations, and so we choose to tolerate the abuse.48 Chapter 3 Growth of Functions and j are both at most n, and the inner loop is executed at most once for each of the n2 pairs of values for i and j . Since O-notation describes an upper bound, when we use it to bound the worst- case running time of an algorithm, we have a bound on the running time of the algo- rithm on every input—the blanket statement we discussed earlier. Thus, the O.n2/ bound on worst-case running time of insertion sort also applies to its running time on every input. The ‚.n2/ bound on the worst-case running time of insertion sort, however, does not imply a ‚.n2/ bound on the running time of insertion sort on every input. For example, we saw in Chapter 2 that when the input is already sorted, insertion sort runs in ‚.n/ time. Technically, it is an abuse to say that the running time of insertion sort is O.n2/, since for a given n, the actual running time varies, depending on the particular input of size n. When we say “the running time is O.n2/,” we mean that there is a function f .n/ that is O.n2/ such that for any value of n, no matter what particular input of size n is chosen, the running time on that input is bounded from above by the value f .n/. Equivalently, we mean that the worst-case running time is O.n2/. -notation Just as O-notation provides an asymptotic upper bound on a function, -notation provides an asymptotic lower bound. For a given function g.n/, we denote by .g.n// (pronounced “big-omega of g of n” or sometimes just “omega of g of n”) the set of functions .g.n// Dff .n/ W there exist positive constants c and n0 such that 0 cg.n/ f .n/ for all n n0g : Figure 3.1(c) shows the intuition behind -notation. For all values n at or to the right of n0,thevalueoff .n/ is on or above cg.n/. From the definitions of the asymptotic notations we have seen thus far, it is easy to prove the following important theorem (see Exercise 3.1-5). Theorem 3.1 For any two functions f .n/ and g.n/,wehavef .n/ D ‚.g.n// if and only if f .n/ D O.g.n// and f .n/ D .g.n//. As an example of the application of this theorem, our proof that an2 C bnC c D ‚.n2/ for any constants a, b,andc,wherea>0, immediately implies that an2 C bnC c D .n2/ and an2 CbnCc D O.n2/. 
In practice, rather than using Theorem 3.1 to obtain asymptotic upper and lower bounds from asymptotically tight bounds, as we did for this example, we usually use it to prove asymptotically tight bounds from asymptotic upper and lower bounds.3.1 Asymptotic notation 49 When we say that the running time (no modifier) of an algorithm is .g.n//, we mean that no matter what particular input of size n is chosen for each value of n, the running time on that input is at least a constant times g.n/, for sufficiently large n. Equivalently, we are giving a lower bound on the best-case running time of an algorithm. For example, the best-case running time of insertion sort is .n/, which implies that the running time of insertion sort is .n/. The running time of insertion sort therefore belongs to both .n/ and O.n2/, since it falls anywhere between a linear function of n and a quadratic function of n. Moreover, these bounds are asymptotically as tight as possible: for instance, the running time of insertion sort is not .n2/, since there exists an input for which insertion sort runs in ‚.n/ time (e.g., when the input is already sorted). It is not contradictory, however, to say that the worst-case running time of insertion sort is .n2/, since there exists an input that causes the algorithm to take .n2/ time. Asymptotic notation in equations and inequalities We have already seen how asymptotic notation can be used within mathematical formulas. For example, in introducing O-notation, we wrote “n D O.n2/.” We might also write 2n2 C3nC1 D 2n2 C‚.n/. How do we interpret such formulas? When the asymptotic notation stands alone (that is, not within a larger formula) on the right-hand side of an equation (or inequality), as in n D O.n2/,wehave already defined the equal sign to mean set membership: n 2 O.n2/. In general, however, when asymptotic notation appears in a formula, we interpret it as stand- ing for some anonymous function that we do not care to name. For example, the formula 2n2 C 3n C 1 D 2n2 C ‚.n/ means that 2n2 C 3n C 1 D 2n2 C f .n/, where f .n/ is some function in the set ‚.n/. In this case, we let f .n/ D 3n C 1, which indeed is in ‚.n/. Using asymptotic notation in this manner can help eliminate inessential detail and clutter in an equation. For example, in Chapter 2 we expressed the worst-case running time of merge sort as the recurrence T .n/ D 2T .n=2/ C ‚.n/ : If we are interested only in the asymptotic behavior of T .n/, there is no point in specifying all the lower-order terms exactly; they are all understood to be included in the anonymous function denoted by the term ‚.n/. The number of anonymous functions in an expression is understood to be equal to the number of times the asymptotic notation appears. For example, in the ex- pression nX iD1 O.i/ ;50 Chapter 3 Growth of Functions there is only a single anonymous function (a function of i). This expression is thus not the same as O.1/ C O.2/ C CO.n/, which doesn’t really have a clean interpretation. In some cases, asymptotic notation appears on the left-hand side of an equation, as in 2n2 C ‚.n/ D ‚.n2/: We interpret such equations using the following rule: No matter how the anony- mous functions are chosen on the left of the equal sign, there is a way to choose the anonymous functions on the right of the equal sign to make the equation valid. Thus, our example means that for any function f .n/ 2 ‚.n/, there is some func- tion g.n/ 2 ‚.n2/ such that 2n2 C f .n/ D g.n/ for all n. 
In other words, the right-hand side of an equation provides a coarser level of detail than the left-hand side. We can chain together a number of such relationships, as in 2n2 C 3n C 1 D 2n2 C ‚.n/ D ‚.n2/: We can interpret each equation separately by the rules above. The first equa- tion says that there is some function f .n/ 2 ‚.n/ such that 2n2 C 3n C 1 D 2n2 C f .n/ for all n. The second equation says that for any function g.n/ 2 ‚.n/ (such as the f .n/ just mentioned), there is some function h.n/ 2 ‚.n2/ such that 2n2 C g.n/ D h.n/ for all n. Note that this interpretation implies that 2n2 C 3n C 1 D ‚.n2/, which is what the chaining of equations intuitively gives us. o-notation The asymptotic upper bound provided by O-notation may or may not be asymp- totically tight. The bound 2n2 D O.n2/ is asymptotically tight, but the bound 2n D O.n2/ is not. We use o-notation to denote an upper bound that is not asymp- totically tight. We formally define o.g.n// (“little-oh of g of n”) as the set o.g.n// Dff .n/ W for any positive constant c>0, there exists a constant n0 >0such that 0 f .n/ < cg.n/ for all n n0g : For example, 2n D o.n2/,but2n2 ¤ o.n2/. The definitions of O-notation and o-notation are similar. The main difference is that in f .n/ D O.g.n//, the bound 0 f .n/ cg.n/ holds for some con- stant c>0,butinf .n/ D o.g.n//, the bound 0 f .n/ < cg.n/ holds for all constants c>0. Intuitively, in o-notation, the function f .n/ becomes insignificant relative to g.n/ as n approaches infinity; that is,3.1 Asymptotic notation 51 limn!1 f .n/ g.n/ D 0: (3.1) Some authors use this limit as a definition of the o-notation; the definition in this book also restricts the anonymous functions to be asymptotically nonnegative. !-notation By analogy, !-notation is to -notation as o-notation is to O-notation. We use !-notation to denote a lower bound that is not asymptotically tight. One way to define it is by f .n/ 2 !.g.n// if and only if g.n/ 2 o.f .n// : Formally, however, we define !.g.n// (“little-omega of g of n”) as the set !.g.n// Dff .n/ W for any positive constant c>0, there exists a constant n0 >0such that 0 cg.n/ < f .n/ for all n n0g : For example, n2=2 D !.n/,butn2=2 ¤ !.n2/. The relation f .n/ D !.g.n// implies that limn!1 f .n/ g.n/ D1; if the limit exists. That is, f .n/ becomes arbitrarily large relative to g.n/ as n approaches infinity. Comparing functions Many of the relational properties of real numbers apply to asymptotic comparisons as well. For the following, assume that f .n/ and g.n/ are asymptotically positive. 
Transitivity: f .n/ D ‚.g.n// and g.n/ D ‚.h.n// imply f .n/ D ‚.h.n// ; f .n/ D O.g.n// and g.n/ D O.h.n// imply f .n/ D O.h.n// ; f .n/ D .g.n// and g.n/ D .h.n// imply f .n/ D .h.n// ; f .n/ D o.g.n// and g.n/ D o.h.n// imply f .n/ D o.h.n// ; f .n/ D !.g.n// and g.n/ D !.h.n// imply f .n/ D !.h.n// : Reflexivity: f .n/ D ‚.f .n// ; f .n/ D O.f .n// ; f .n/ D .f .n// :52 Chapter 3 Growth of Functions Symmetry: f .n/ D ‚.g.n// if and only if g.n/ D ‚.f .n// : Transpose symmetry: f .n/ D O.g.n// if and only if g.n/ D .f .n// ; f .n/ D o.g.n// if and only if g.n/ D !.f .n// : Because these properties hold for asymptotic notations, we can draw an analogy between the asymptotic comparison of two functions f and g and the comparison of two real numbers a and b: f .n/ D O.g.n// is like a b; f .n/ D .g.n// is like a b; f .n/ D ‚.g.n// is like a D b; f .n/ D o.g.n// is like ab: We say that f .n/ is asymptotically smaller than g.n/ if f .n/ D o.g.n//,andf .n/ is asymptotically larger than g.n/ if f .n/ D !.g.n//. One property of real numbers, however, does not carry over to asymptotic nota- tion: Trichotomy: For any two real numbers a and b, exactly one of the following must hold: ab. Although any two real numbers can be compared, not all functions are asymptot- ically comparable. That is, for two functions f .n/ and g.n/, it may be the case that neither f .n/ D O.g.n// nor f .n/ D .g.n// holds. For example, we cannot compare the functions n and n1Csinn using asymptotic notation, since the value of the exponent in n1Csinn oscillates between 0 and 2, taking on all values in between. Exercises 3.1-1 Let f .n/ and g.n/ be asymptotically nonnegative functions. Using the basic defi- nition of ‚-notation, prove that max.f .n/; g.n// D ‚.f .n/ C g.n//. 3.1-2 Show that for any real constants a and b,whereb>0, .n C a/b D ‚.nb/: (3.2)3.2 Standard notations and common functions 53 3.1-3 Explain why the statement, “The running time of algorithm A is at least O.n2/,” is meaningless. 3.1-4 Is 2nC1 D O.2n/?Is22n D O.2n/? 3.1-5 Prove Theorem 3.1. 3.1-6 Prove that the running time of an algorithm is ‚.g.n// if and only if its worst-case running time is O.g.n// and its best-case running time is .g.n//. 3.1-7 Prove that o.g.n// \ !.g.n// is the empty set. 3.1-8 We can extend our notation to the case of two parameters n and m that can go to infinity independently at different rates. For a given function g.n;m/, we denote by O.g.n;m// the set of functions O.g.n;m// Dff.n;m/W there exist positive constants c, n0,andm0 such that 0 f.n;m/ cg.n; m/ for all n n0 or m m0g : Give corresponding definitions for .g.n; m// and ‚.g.n; m//. 3.2 Standard notations and common functions This section reviews some standard mathematical functions and notations and ex- plores the relationships among them. It also illustrates the use of the asymptotic notations. Monotonicity A function f .n/ is monotonically increasing if m n implies f.m/ f .n/. Similarly, it is monotonically decreasing if m n implies f.m/ f .n/.A function f .n/ is strictly increasing if m f .n/.54 Chapter 3 Growth of Functions Floors and ceilings For any real number x, we denote the greatest integer less than or equal to x by bxc (read “the floor of x”) and the least integer greater than or equal to x by dxe (read “the ceiling of x”). For all real x, x 1 0, dx=ae b D l x ab m ; (3.4) bx=ac b D j x ab k ; (3.5) la b m a C .b 1/ b ; (3.6) ja b k a .b 1/ b : (3.7) The floor function f.x/D bxc is monotonically increasing, as is the ceiling func- tion f.x/D dxe. 
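These floor and ceiling identities are easy to misremember by one, so a quick numeric spot check can be reassuring. The following sketch is only an illustration (the book itself works purely with pseudocode); it tests the identities above, including equations (3.4) through (3.7), on random nonnegative integers.

    import math
    import random

    def ceil_div(p, q):
        """Ceiling of p/q for a nonnegative integer p and a positive integer q."""
        return -(-p // q)

    # A numeric sanity check, not a proof.
    for _ in range(10_000):
        n = random.randint(0, 10**9)
        a = random.randint(1, 10**4)
        b = random.randint(1, 10**4)

        x = n + random.random()                 # an arbitrary nonnegative real
        assert x - 1 < math.floor(x) <= x <= math.ceil(x) < x + 1

        assert ceil_div(n, 2) + n // 2 == n     # ceil(n/2) + floor(n/2) = n
        assert ceil_div(ceil_div(n, a), b) == ceil_div(n, a * b)   # equation (3.4)
        assert (n // a) // b == n // (a * b)                        # equation (3.5)
        assert b * ceil_div(n, b) <= n + (b - 1)   # equation (3.6), multiplied through by b
        assert b * (n // b) >= n - (b - 1)         # equation (3.7), multiplied through by b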
Modular arithmetic For any integer a and any positive integer n,thevaluea mod n is the remainder (or residue) of the quotient a=n: a mod n D a n ba=nc : (3.8) It follows that 0 a mod n0.Foran asymptotically positive polynomial p.n/ of degree d,wehavep.n/ D ‚.nd /.For any real constant a 0, the function na is monotonically increasing, and for any real constant a 0, the function na is monotonically decreasing. We say that a function f .n/ is polynomially bounded if f .n/ D O.nk/ for some constant k. Exponentials For all real a>0, m,andn, we have the following identities: a0 D 1; a1 D a; a1 D 1=a ; .am/n D amn ; .am/n D .an/m ; aman D amCn : For all n and a 1, the function an is monotonically increasing in n.When convenient, we shall assume 00 D 1. We can relate the rates of growth of polynomials and exponentials by the fol- lowing fact. For all real constants a and b such that a>1, limn!1 nb an D 0; (3.10) from which we can conclude that nb D o.an/: Thus, any exponential function with a base strictly greater than 1 grows faster than any polynomial function. Using e to denote 2:71828 : : :, the base of the natural logarithm function, we have for all real x, ex D 1 C x C x2 2Š C x3 3Š C D 1X iD0 xi iŠ ; (3.11)56 Chapter 3 Growth of Functions where “Š” denotes the factorial function defined later in this section. For all real x, we have the inequality ex 1 C x; (3.12) where equality holds only when x D 0.Whenjxj 1, we have the approximation 1 C x ex 1 C x C x2 : (3.13) When x ! 0, the approximation of ex by 1 C x is quite good: ex D 1 C x C ‚.x2/: (In this equation, the asymptotic notation is used to describe the limiting behavior as x ! 0 rather than as x !1.) We have for all x, limn!1 1 C x n n D ex : (3.14) Logarithms We shall use the following notations: lg n D log2 n (binary logarithm) , ln n D loge n (natural logarithm) , lgk n D .lg n/k (exponentiation) , lg lg n D lg.lg n/ (composition) . An important notational convention we shall adopt is that logarithm functions will apply only to the next term in the formula,sothatlgn C k will mean .lg n/ C k and not lg.n C k/. If we hold b>1constant, then for n>0, the function logb n is strictly increasing. For all real a>0, b>0, c>0,andn, a D blogb a ; logc.ab/ D logc a C logc b; logb an D n logb a; logb a D logc a logc b ; (3.15) logb.1=a/ Dlogb a; logb a D 1 loga b ; alogb c D clogb a ; (3.16) where, in each equation above, logarithm bases are not 1.3.2 Standard notations and common functions 57 By equation (3.15), changing the base of a logarithm from one constant to an- other changes the value of the logarithm by only a constant factor, and so we shall often use the notation “lg n” when we don’t care about constant factors, such as in O-notation. Computer scientists find 2 to be the most natural base for logarithms because so many algorithms and data structures involve splitting a problem into two parts. There is a simple series expansion for ln.1 C x/ when jxj <1: ln.1 C x/ D x x2 2 C x3 3 x4 4 C x5 5 : We also have the following inequalities for x>1: x 1 C x ln.1 C x/ x; (3.17) where equality holds only for x D 0. We say that a function f .n/ is polylogarithmically bounded if f .n/ D O.lgk n/ for some constant k. We can relate the growth of polynomials and polylogarithms by substituting lg n for n and 2a for a in equation (3.10), yielding limn!1 lgb n .2a/lg n D limn!1 lgb n na D 0: From this limit, we can conclude that lgb n D o.na/ for any constant a>0. Thus, any positive polynomial function grows faster than any polylogarithmic function. 
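Identity (3.16) and the claim that any positive polynomial grows faster than any polylogarithm both lend themselves to a quick numerical illustration. The Python sketch below spot-checks a few arbitrary values (illustrative only, not a proof):

import math

# Spot-check identity (3.16): a**(log_b c) == c**(log_b a) for a, b, c > 0 with b != 1.
for a, b, c in [(2.0, 3.0, 5.0), (1.5, 10.0, 7.0), (4.0, 2.0, 9.0)]:
    assert math.isclose(a ** math.log(c, b), c ** math.log(a, b), rel_tol=1e-9)

# Illustrate lg^b n = o(n^a) with b = 3 and a = 1: the ratio lg^3(n) / n shrinks toward 0.
for n in [10, 10**3, 10**6, 10**9]:
    print(n, math.log2(n) ** 3 / n)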
Factorials The notation nŠ (read “n factorial”) is defined for integers n 0 as nŠ D ( 1 if n D 0; n .n 1/Š if n>0: Thus, nŠ D 1 2 3 n. A weak upper bound on the factorial function is nŠ nn, since each of the n terms in the factorial product is at most n. Stirling’s approximation, nŠ D p 2n n e n 1 C ‚ 1 n ; (3.18)58 Chapter 3 Growth of Functions where e is the base of the natural logarithm, gives us a tighter upper bound, and a lower bound as well. As Exercise 3.2-3 asks you to prove, nŠ D o.nn/; nŠ D !.2n/; lg.nŠ/ D ‚.n lg n/ ; (3.19) where Stirling’s approximation is helpful in proving equation (3.19). The following equation also holds for all n 1: nŠ D p 2n n e n e˛n (3.20) where 1 12n C 1 <˛n < 1 12n : (3.21) Functional iteration We use the notation f .i/.n/ to denote the function f .n/ iteratively applied i times to an initial value of n. Formally, let f .n/ be a function over the reals. For non- negative integers i, we recursively define f .i/.n/ D ( n if i D 0; f.f.i1/.n// if i>0: For example, if f .n/ D 2n,thenf .i/.n/ D 2i n. The iterated logarithm function We use the notation lg n (read “log star of n”) to denote the iterated logarithm, de- fined as follows. Let lg.i/ n be as defined above, with f .n/ D lg n. Because the log- arithm of a nonpositive number is undefined, lg.i/ n is defined only if lg.i1/ n>0. Be sure to distinguish lg.i/ n (the logarithm function applied i times in succession, starting with argument n) from lgi n (the logarithm of n raised to the ith power). Then we define the iterated logarithm function as lg n D min ˚ i 0 W lg.i/ n 1 : The iterated logarithm is a very slowly growing function: lg 2 D 1; lg 4 D 2; lg 16 D 3; lg 65536 D 4; lg.265536/ D 5:3.2 Standard notations and common functions 59 Since the number of atoms in the observable universe is estimated to be about 1080, which is much less than 265536, we rarely encounter an input size n such that lg n>5. Fibonacci numbers We define the Fibonacci numbers by the following recurrence: F0 D 0; F1 D 1; (3.22) Fi D Fi1 C Fi2 for i 2: Thus, each Fibonacci number is the sum of the two previous ones, yielding the sequence 0; 1; 1; 2; 3; 5; 8; 13; 21; 34; 55; : : : : Fibonacci numbers are related to the golden ratio and to its conjugate y,which are the two roots of the equation x2 D x C 1 (3.23) and are given by the following formulas (see Exercise 3.2-6): D 1 C p 5 2 (3.24) D 1:61803 : : : ; y D 1 p 5 2 D:61803 : : : : Specifically, we have Fi D i yi p 5 ; which we can prove by induction (Exercise 3.2-7). Since ˇˇy ˇˇ <1,wehave ˇˇyi ˇˇ p 5 < 1p 5 < 1 2 ; which implies that60 Chapter 3 Growth of Functions Fi D i p 5 C 1 2 ; (3.25) which is to say that the ith Fibonacci number Fi is equal to i = p 5 rounded to the nearest integer. Thus, Fibonacci numbers grow exponentially. Exercises 3.2-1 Show that if f .n/ and g.n/ are monotonically increasing functions, then so are the functions f .n/ C g.n/ and f .g.n//,andiff .n/ and g.n/ are in addition nonnegative, then f .n/ g.n/ is monotonically increasing. 3.2-2 Prove equation (3.16). 3.2-3 Prove equation (3.19). Also prove that nŠ D !.2n/ and nŠ D o.nn/. 3.2-4 ? Is the function dlg neŠ polynomially bounded? Is the function dlg lg neŠ polynomi- ally bounded? 3.2-5 ? Which is asymptotically larger: lg.lg n/ or lg.lg n/? 3.2-6 Show that the golden ratio and its conjugate y both satisfy the equation x2 D x C 1. 3.2-7 Prove by induction that the ith Fibonacci number satisfies the equality Fi D i yi p 5 ; where is the golden ratio and y is its conjugate. 
3.2-8 Show that k ln k D ‚.n/ implies k D ‚.n= ln n/.Problems for Chapter 3 61 Problems 3-1 Asymptotic behavior of polynomials Let p.n/ D dX iD0 ai ni ; where ad >0,beadegree-d polynomial in n,andletk be a constant. Use the definitions of the asymptotic notations to prove the following properties. a. If k d,thenp.n/ D O.nk/. b. If k d,thenp.n/ D .nk/. c. If k D d,thenp.n/ D ‚.nk/. d. If k>d,thenp.n/ D o.nk/. e. If k0,andc>1are constants. Your answer should be in the form of the table with “yes” or “no” written in each box. ABO o ! ‚ a. lgk nn b. nk cn c. pnnsinn d. 2n 2n=2 e. nlg c clg n f. lg.nŠ/ lg.nn/ 3-3 Ordering by asymptotic growth rates a. Rank the following functions by order of growth; that is, find an arrangement g1;g2;:::;g30 of the functions satisfying g1 D .g2/, g2 D .g3/, ..., g29 D .g30/. Partition your list into equivalence classes such that functions f .n/ and g.n/ are in the same class if and only if f .n/ D ‚.g.n//.62 Chapter 3 Growth of Functions lg.lg n/ 2lg n . p 2/lg n n2 nŠ .lg n/Š . 3 2 /n n3 lg2 n lg.nŠ/ 22n n1= lg n ln ln n lg nn 2n nlg lg n ln n1 2lg n .lg n/lg n en 4lg n .n C 1/Š p lg n lg.lg n/ 2 p2 lg n n2n n lg n22nC1 b. Give an example of a single nonnegative function f .n/ such that for all func- tions gi .n/ in part (a), f .n/ is neither O.gi .n// nor .gi .n//. 3-4 Asymptotic notation properties Let f .n/ and g.n/ be asymptotically positive functions. Prove or disprove each of the following conjectures. a. f .n/ D O.g.n// implies g.n/ D O.f .n//. b. f .n/ C g.n/ D ‚.min.f .n/; g.n///. c. f .n/ D O.g.n// implies lg.f .n// D O.lg.g.n///, where lg.g.n// 1 and f .n/ 1 for all sufficiently large n. d. f .n/ D O.g.n// implies 2f.n/ D O 2g.n/ . e. f .n/ D O ..f .n//2/. f. f .n/ D O.g.n// implies g.n/ D .f .n//. g. f .n/ D ‚.f .n=2//. h. f .n/ C o.f .n// D ‚.f .n//. 3-5 Variations on O and ˝ Some authors define in a slightly different way than we do; let’s use 1 (read “omega infinity”) for this alternative definition. We say that f .n/ D 1 .g.n// if there exists a positive constant c such that f .n/ cg.n/ 0 for infinitely many integers n. a. Show that for any two functions f .n/ and g.n/ that are asymptotically nonneg- ative, either f .n/ D O.g.n// or f .n/ D 1 .g.n// or both, whereas this is not true if we use in place of 1 .Problems for Chapter 3 63 b. Describe the potential advantages and disadvantages of using 1 instead of to characterize the running times of programs. Some authors also define O in a slightly different manner; let’s use O0 for the alternative definition. We say that f .n/ D O0.g.n// if and only if jf .n/j D O.g.n//. c. What happens to each direction of the “if and only if” in Theorem 3.1 if we substitute O0 for O but still use ? Some authors define eO (read “soft-oh”) to mean O with logarithmic factors ig- nored: eO.g.n// Dff .n/ W there exist positive constants c, k,andn0 such that 0 f .n/ cg.n/ lgk.n/ for all n n0g : d. Define e and e‚ in a similar manner. Prove the corresponding analog to Theo- rem 3.1. 3-6 Iterated functions We can apply the iteration operator used in the lg function to any monotonically increasing function f .n/ over the reals. For a given constant c 2 R,wedefinethe iterated function f c by f c .n/ D min ˚ i 0 W f .i/.n/ c ; which need not be well defined in all cases. In other words, the quantity f c .n/ is the number of iterated applications of the function f required to reduce its argu- ment down to c or less. For each of the following functions f .n/ and constants c, give as tight a bound as possible on f c .n/. 
f .n/ c f c .n/ a. n 10 b. lg n1 c. n=2 1 d. n=2 2 e. pn2 f. pn1 g. n1=3 2 h. n= lg n264 Chapter 3 Growth of Functions Chapter notes Knuth [209] traces the origin of the O-notation to a number-theory text by P. Bach- mann in 1892. The o-notation was invented by E. Landau in 1909 for his discussion of the distribution of prime numbers. The and ‚ notations were advocated by Knuth [213] to correct the popular, but technically sloppy, practice in the literature of using O-notation for both upper and lower bounds. Many people continue to use the O-notation where the ‚-notation is more technically precise. Further dis- cussion of the history and development of asymptotic notations appears in works by Knuth [209, 213] and Brassard and Bratley [54]. Not all authors define the asymptotic notations in the same way, although the various definitions agree in most common situations. Some of the alternative def- initions encompass functions that are not asymptotically nonnegative, as long as their absolute values are appropriately bounded. Equation (3.20) is due to Robbins [297]. Other properties of elementary math- ematical functions can be found in any good mathematical reference, such as Abramowitz and Stegun [1] or Zwillinger [362], or in a calculus book, such as Apostol [18] or Thomas et al. [334]. Knuth [209] and Graham, Knuth, and Patash- nik [152] contain a wealth of material on discrete mathematics as used in computer science.4 Divide-and-Conquer In Section 2.3.1, we saw how merge sort serves as an example of the divide-and- conquer paradigm. Recall that in divide-and-conquer, we solve a problem recur- sively, applying three steps at each level of the recursion: Divide the problem into a number of subproblems that are smaller instances of the same problem. Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner. Combine the solutions to the subproblems into the solution for the original prob- lem. When the subproblems are large enough to solve recursively, we call that the recur- sive case. Once the subproblems become small enough that we no longer recurse, we say that the recursion “bottoms out” and that we have gotten down to the base case. Sometimes, in addition to subproblems that are smaller instances of the same problem, we have to solve subproblems that are not quite the same as the original problem. We consider solving such subproblems as part of the combine step. In this chapter, we shall see more algorithms based on divide-and-conquer. The first one solves the maximum-subarray problem: it takes as input an array of num- bers, and it determines the contiguous subarray whose values have the greatest sum. Then we shall see two divide-and-conquer algorithms for multiplying n n matri- ces. One runs in ‚.n3/ time, which is no better than the straightforward method of multiplying square matrices. But the other, Strassen’s algorithm, runs in O.n2:81/ time, which beats the straightforward method asymptotically. Recurrences Recurrences go hand in hand with the divide-and-conquer paradigm, because they give us a natural way to characterize the running times of divide-and-conquer algo- rithms. A recurrence is an equation or inequality that describes a function in terms66 Chapter 4 Divide-and-Conquer of its value on smaller inputs. 
For example, in Section 2.3.2 we described the worst-case running time T .n/ of the MERGE-SORT procedure by the recurrence T .n/ D ( ‚.1/ if n D 1; 2T .n=2/ C ‚.n/ if n>1; (4.1) whose solution we claimed to be T .n/ D ‚.n lg n/. Recurrences can take many forms. For example, a recursive algorithm might divide subproblems into unequal sizes, such as a 2=3-to-1=3 split. If the divide and combine steps take linear time, such an algorithm would give rise to the recurrence T .n/ D T .2n=3/ C T .n=3/ C ‚.n/. Subproblems are not necessarily constrained to being a constant fraction of the original problem size. For example, a recursive version of linear search (see Exercise 2.1-3) would create just one subproblem containing only one el- ement fewer than the original problem. Each recursive call would take con- stant time plus the time for the recursive calls it makes, yielding the recurrence T .n/ D T.n 1/ C ‚.1/. This chapter offers three methods for solving recurrences—that is, for obtaining asymptotic “‚”or“O” bounds on the solution: In the substitution method, we guess a bound and then use mathematical in- duction to prove our guess correct. The recursion-tree method converts the recurrence into a tree whose nodes represent the costs incurred at various levels of the recursion. We use techniques for bounding summations to solve the recurrence. The master method provides bounds for recurrences of the form T .n/ D aT .n=b/ C f .n/ ; (4.2) where a 1, b>1,andf .n/ is a given function. Such recurrences arise frequently. A recurrence of the form in equation (4.2) characterizes a divide- and-conquer algorithm that creates a subproblems, each of which is 1=b the size of the original problem, and in which the divide and combine steps together take f .n/ time. To use the master method, you will need to memorize three cases, but once you do that, you will easily be able to determine asymptotic bounds for many simple recurrences. We will use the master method to determine the running times of the divide-and-conquer algorithms for the maximum-subarray problem and for matrix multiplication, as well as for other algorithms based on divide- and-conquer elsewhere in this book.Chapter 4 Divide-and-Conquer 67 Occasionally, we shall see recurrences that are not equalities but rather inequal- ities, such as T .n/ 2T .n=2/ C ‚.n/. Because such a recurrence states only an upper bound on T .n/, we will couch its solution using O-notation rather than ‚-notation. Similarly, if the inequality were reversed to T .n/ 2T .n=2/ C ‚.n/, then because the recurrence gives only a lower bound on T .n/, we would use -notation in its solution. Technicalities in recurrences In practice, we neglect certain technical details when we state and solve recur- rences. For example, if we call MERGE-SORT on n elements when n is odd, we end up with subproblems of size bn=2c and dn=2e. Neither size is actually n=2, because n=2 is not an integer when n is odd. Technically, the recurrence describing the worst-case running time of MERGE-SORT is really T .n/ D ( ‚.1/ if n D 1; T.dn=2e/ C T.bn=2c/ C ‚.n/ if n>1: (4.3) Boundary conditions represent another class of details that we typically ignore. Since the running time of an algorithm on a constant-sized input is a constant, the recurrences that arise from the running times of algorithms generally have T .n/ D ‚.1/ for sufficiently small n. Consequently, for convenience, we shall generally omit statements of the boundary conditions of recurrences and assume that T .n/ is constant for small n. 
For example, we normally state recurrence (4.1) as T .n/ D 2T .n=2/ C ‚.n/ ; (4.4) without explicitly giving values for small n. The reason is that although changing the value of T.1/ changes the exact solution to the recurrence, the solution typi- cally doesn’t change by more than a constant factor, and so the order of growth is unchanged. When we state and solve recurrences, we often omit floors, ceilings, and bound- ary conditions. We forge ahead without these details and later determine whether or not they matter. They usually do not, but you should know when they do. Ex- perience helps, and so do some theorems stating that these details do not affect the asymptotic bounds of many recurrences characterizing divide-and-conquer algo- rithms (see Theorem 4.1). In this chapter, however, we shall address some of these details and illustrate the fine points of recurrence solution methods.68 Chapter 4 Divide-and-Conquer 4.1 The maximum-subarray problem Suppose that you been offered the opportunity to invest in the Volatile Chemical Corporation. Like the chemicals the company produces, the stock price of the Volatile Chemical Corporation is rather volatile. You are allowed to buy one unit of stock only one time and then sell it at a later date, buying and selling after the close of trading for the day. To compensate for this restriction, you are allowed to learn what the price of the stock will be in the future. Your goal is to maximize your profit. Figure 4.1 shows the price of the stock over a 17-day period. You may buy the stock at any one time, starting after day 0, when the price is $100 per share. Of course, you would want to “buy low, sell high”—buy at the lowest possible price and later on sell at the highest possible price—to maximize your profit. Unfortunately, you might not be able to buy at the lowest price and then sell at the highest price within a given period. In Figure 4.1, the lowest price occurs after day 7, which occurs after the highest price, after day 1. You might think that you can always maximize profit by either buying at the lowest price or selling at the highest price. For example, in Figure 4.1, we would maximize profit by buying at the lowest price, after day 7. If this strategy always worked, then it would be easy to determine how to maximize profit: find the highest and lowest prices, and then work left from the highest price to find the lowest prior price, work right from the lowest price to find the highest later price, and take the pair with the greater difference. Figure 4.2 shows a simple counterexample, 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 120 110 100 90 80 70 60 Day 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 Price 100 113 110 85 105 102 86 63 81 101 94 106 101 79 94 90 97 Change 13 3 25 20 3 16 23 18 20 7125 22 15 47 Figure 4.1 Information about the price of stock in the Volatile Chemical Corporation after the close of trading over a period of 17 days. The horizontal axis of the chart indicates the day, and the vertical axis shows the price. The bottom row of the table gives the change in price from the previous day.4.1 The maximum-subarray problem 69 01234 11 10 9 8 7 6 Day 01 23 4 Price 10 11 7 10 6 Change 1 434 Figure 4.2 An example showing that the maximum profit does not always start at the lowest price or end at the highest price. Again, the horizontal axis indicates the day, and the vertical axis shows the price. Here, the maximum profit of $3 per share would be earned by buying after day 2 and selling after day 3. 
The price of $7 after day 2 is not the lowest price overall, and the price of $10 after day 3 is not the highest price overall. demonstrating that the maximum profit sometimes comes neither by buying at the lowest price nor by selling at the highest price. A brute-force solution We can easily devise a brute-force solution to this problem: just try every possible pair of buy and sell dates in which the buy date precedes the sell date. A period of n days has n 2 such pairs of dates. Since n 2 is ‚.n2/, and the best we can hope for is to evaluate each pair of dates in constant time, this approach would take .n2/ time. Can we do better? A transformation In order to design an algorithm with an o.n2/ running time, we will look at the input in a slightly different way. We want to find a sequence of days over which the net change from the first day to the last is maximum. Instead of looking at the daily prices, let us instead consider the daily change in price, where the change on day i is the difference between the prices after day i 1 and after day i. The table in Figure 4.1 shows these daily changes in the bottom row. If we treat this row as an array A, shown in Figure 4.3, we now want to find the nonempty, contiguous subarray of A whose values have the largest sum. We call this contiguous subarray the maximum subarray. For example, in the array of Figure 4.3, the maximum subarray of AŒ1 : : 16 is AŒ8 : : 11, with the sum 43. Thus, you would want to buy the stock just before day 8 (that is, after day 7) and sell it after day 11, earning a profit of $43 per share. At first glance, this transformation does not help. We still need to checkn1 2 D ‚.n2/ subarrays for a period of n days. Exercise 4.1-2 asks you to show70 Chapter 4 Divide-and-Conquer 13 1 –3 2 –25 3 20 4 –3 5 –16 6 –23 7 8 9 10 maximum subarray 11 18 12 20 13 –7 14 12 15 7 16 –5 –22 15 –4A Figure 4.3 The change in stock prices as a maximum-subarray problem. Here, the subar- ray AŒ8 : : 11,withsum43, has the greatest sum of any contiguous subarray of array A. that although computing the cost of one subarray might take time proportional to the length of the subarray, when computing all ‚.n2/ subarray sums, we can orga- nize the computation so that each subarray sum takes O.1/ time, given the values of previously computed subarray sums, so that the brute-force solution takes ‚.n2/ time. So let us seek a more efficient solution to the maximum-subarray problem. When doing so, we will usually speak of “a” maximum subarray rather than “the” maximum subarray, since there could be more than one subarray that achieves the maximum sum. The maximum-subarray problem is interesting only when the array contains some negative numbers. If all the array entries were nonnegative, then the maximum-subarray problem would present no challenge, since the entire array would give the greatest sum. A solution using divide-and-conquer Let’s think about how we might solve the maximum-subarray problem using the divide-and-conquer technique. Suppose we want to find a maximum subar- ray of the subarray AŒlow ::high. Divide-and-conquer suggests that we divide the subarray into two subarrays of as equal size as possible. That is, we find the midpoint, say mid, of the subarray, and consider the subarrays AŒlow ::mid and AŒmid C 1::high. 
As Figure 4.4(a) shows, any contiguous subarray AŒi : : j of AŒlow ::high must lie in exactly one of the following places: entirely in the subarray AŒlow ::mid,sothatlow i j mid, entirely in the subarray AŒmid C 1::high,sothatmid left-sum 6 left-sum D sum 7 max-left D i 8 right-sum D1 9 sum D 0 10 for j D mid C 1 to high 11 sum D sum C AŒj 12 if sum > right-sum 13 right-sum D sum 14 max-right D j 15 return .max-left; max-right; left-sum C right-sum/72 Chapter 4 Divide-and-Conquer This procedure works as follows. Lines 1–7 find a maximum subarray of the left half, AŒlow ::mid. Since this subarray must contain AŒmid,thefor loop of lines 3–7 starts the index i at mid and works down to low, so that every subarray it considers is of the form AŒi : : mid. Lines 1–2 initialize the variables left-sum, which holds the greatest sum found so far, and sum, holding the sum of the entries in AŒi : : mid. Whenever we find, in line 5, a subarray AŒi : : mid with a sum of values greater than left-sum, we update left-sum to this subarray’s sum in line 6, and in line 7 we update the variable max-left to record this index i. Lines 8–14 work analogously for the right half, AŒmidC1::high. Here, the for loop of lines 10–14 starts the index j at midC1 and works up to high, so that every subarray it considers is of the form AŒmid C 1::j. Finally, line 15 returns the indices max-left and max-right that demarcate a maximum subarray crossing the midpoint, along with the sum left-sumCright-sum of the values in the subarray AŒmax-left ::max-right. If the subarray AŒlow ::high contains n entries (so that n D high low C 1), we claim that the call FIND-MAX-CROSSING-SUBARRAY.A; low; mid; high/ takes ‚.n/ time. Since each iteration of each of the two for loops takes ‚.1/ time, we just need to count up how many iterations there are altogether. The for loop of lines 3–7 makes mid low C 1 iterations, and the for loop of lines 10–14 makes high mid iterations, and so the total number of iterations is .mid low C 1/ C .high mid/ D high low C 1 D n: With a linear-time FIND-MAX-CROSSING-SUBARRAY procedure in hand, we can write pseudocode for a divide-and-conquer algorithm to solve the maximum- subarray problem: FIND-MAXIMUM-SUBARRAY.A; low; high/ 1 if high == low 2 return .low; high;AŒlow/ // base case: only one element 3 else mid D b.low C high/=2c 4 .left-low; left-high; left-sum/ D FIND-MAXIMUM-SUBARRAY.A; low; mid/ 5 .right-low; right-high; right-sum/ D FIND-MAXIMUM-SUBARRAY.A; mid C 1; high/ 6 .cross-low; cross-high; cross-sum/ D FIND-MAX-CROSSING-SUBARRAY.A; low; mid; high/ 7 if left-sum right-sum and left-sum cross-sum 8 return .left-low; left-high; left-sum/ 9 elseif right-sum left-sum and right-sum cross-sum 10 return .right-low; right-high; right-sum/ 11 else return .cross-low; cross-high; cross-sum/4.1 The maximum-subarray problem 73 The initial call FIND-MAXIMUM-SUBARRAY.A;1;A:length/ will find a maxi- mum subarray of AŒ1 : : n. Similar to FIND-MAX-CROSSING-SUBARRAY, the recursive procedure FIND- MAXIMUM-SUBARRAY returns a tuple containing the indices that demarcate a maximum subarray, along with the sum of the values in a maximum subarray. Line 1 tests for the base case, where the subarray has just one element. A subar- ray with just one element has only one subarray—itself—and so line 2 returns a tuple with the starting and ending indices of just the one element, along with its value. Lines 3–11 handle the recursive case. Line 3 does the divide part, comput- ing the index mid of the midpoint. 
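Before continuing through the remaining lines, here is a direct Python transcription of the two procedures above. It is a sketch only: it uses 0-based indexing, substitutes float('-inf') for minus infinity, and breaks ties in the same left-then-right order as the pseudocode.

def find_max_crossing_subarray(A, low, mid, high):
    left_sum, total, max_left = float('-inf'), 0, mid
    for i in range(mid, low - 1, -1):            # consider A[i..mid] for i = mid down to low
        total += A[i]
        if total > left_sum:
            left_sum, max_left = total, i
    right_sum, total, max_right = float('-inf'), 0, mid + 1
    for j in range(mid + 1, high + 1):           # consider A[mid+1..j] for j = mid+1 up to high
        total += A[j]
        if total > right_sum:
            right_sum, max_right = total, j
    return max_left, max_right, left_sum + right_sum

def find_maximum_subarray(A, low, high):
    if high == low:
        return low, high, A[low]                 # base case: exactly one element
    mid = (low + high) // 2
    left = find_maximum_subarray(A, low, mid)
    right = find_maximum_subarray(A, mid + 1, high)
    cross = find_max_crossing_subarray(A, low, mid, high)
    return max(left, right, cross, key=lambda t: t[2])   # ties prefer left, then right

# The daily price changes from Figure 4.1; the result corresponds to A[8..11] (sum 43)
# in the book's 1-based indexing.
A = [13, -3, -25, 20, -3, -16, -23, 18, 20, -7, 12, -5, -22, 15, -4, 7]
print(find_maximum_subarray(A, 0, len(A) - 1))   # prints (7, 10, 43)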
Let’s refer to the subarray AŒlow ::mid as the left subarray and to AŒmid C 1::high as the right subarray. Because we know that the subarray AŒlow ::high contains at least two elements, each of the left and right subarrays must have at least one element. Lines 4 and 5 conquer by recur- sively finding maximum subarrays within the left and right subarrays, respectively. Lines 6–11 form the combine part. Line 6 finds a maximum subarray that crosses the midpoint. (Recall that because line 6 solves a subproblem that is not a smaller instance of the original problem, we consider it to be in the combine part.) Line 7 tests whether the left subarray contains a subarray with the maximum sum, and line 8 returns that maximum subarray. Otherwise, line 9 tests whether the right subarray contains a subarray with the maximum sum, and line 10 returns that max- imum subarray. If neither the left nor right subarrays contain a subarray achieving the maximum sum, then a maximum subarray must cross the midpoint, and line 11 returns it. Analyzing the divide-and-conquer algorithm Next we set up a recurrence that describes the running time of the recursive FIND- MAXIMUM-SUBARRAY procedure. As we did when we analyzed merge sort in Section 2.3.2, we make the simplifying assumption that the original problem size isapowerof2, so that all subproblem sizes are integers. We denote by T .n/ the running time of FIND-MAXIMUM-SUBARRAY on a subarray of n elements. For starters, line 1 takes constant time. The base case, when n D 1, is easy: line 2 takes constant time, and so T.1/D ‚.1/ : (4.5) The recursive case occurs when n>1. Lines 1 and 3 take constant time. Each of the subproblems solved in lines 4 and 5 is on a subarray of n=2 elements (our assumption that the original problem size is a power of 2 ensures that n=2 is an integer), and so we spend T .n=2/ time solving each of them. Because we have to solve two subproblems—for the left subarray and for the right subarray—the contribution to the running time from lines 4 and 5 comes to 2T .n=2/.Aswehave74 Chapter 4 Divide-and-Conquer already seen, the call to FIND-MAX-CROSSING-SUBARRAY in line 6 takes ‚.n/ time. Lines 7–11 take only ‚.1/ time. For the recursive case, therefore, we have T .n/ D ‚.1/ C 2T .n=2/ C ‚.n/ C ‚.1/ D 2T .n=2/ C ‚.n/ : (4.6) Combining equations (4.5) and (4.6) gives us a recurrence for the running time T .n/ of FIND-MAXIMUM-SUBARRAY: T .n/ D ( ‚.1/ if n D 1; 2T .n=2/ C ‚.n/ if n>1: (4.7) This recurrence is the same as recurrence (4.1) for merge sort. As we shall see from the master method in Section 4.5, this recurrence has the solution T .n/ D ‚.n lg n/. You might also revisit the recursion tree in Figure 2.5 to un- derstand why the solution should be T .n/ D ‚.n lg n/. Thus, we see that the divide-and-conquer method yields an algorithm that is asymptotically faster than the brute-force method. With merge sort and now the maximum-subarray problem, we begin to get an idea of how powerful the divide- and-conquer method can be. Sometimes it will yield the asymptotically fastest algorithm for a problem, and other times we can do even better. As Exercise 4.1-5 shows, there is in fact a linear-time algorithm for the maximum-subarray problem, and it does not use divide-and-conquer. Exercises 4.1-1 What does FIND-MAXIMUM-SUBARRAY return when all elements of A are nega- tive? 4.1-2 Write pseudocode for the brute-force method of solving the maximum-subarray problem. Your procedure should run in ‚.n2/ time. 
4.1-3 Implement both the brute-force and recursive algorithms for the maximum- subarray problem on your own computer. What problem size n0 gives the crossover point at which the recursive algorithm beats the brute-force algorithm? Then, change the base case of the recursive algorithm to use the brute-force algorithm whenever the problem size is less than n0. Does that change the crossover point? 4.1-4 Suppose we change the definition of the maximum-subarray problem to allow the result to be an empty subarray, where the sum of the values of an empty subar-4.2 Strassen’s algorithm for matrix multiplication 75 ray is 0. How would you change any of the algorithms that do not allow empty subarrays to permit an empty subarray to be the result? 4.1-5 Use the following ideas to develop a nonrecursive, linear-time algorithm for the maximum-subarray problem. Start at the left end of the array, and progress toward the right, keeping track of the maximum subarray seen so far. Knowing a maximum subarray of AŒ1 : : j , extend the answer to find a maximum subarray ending at in- dex j C1 by using the following observation: a maximum subarray of AŒ1 : : j C 1 is either a maximum subarray of AŒ1 : : j or a subarray AŒi : : j C 1,forsome 1 i j C 1. Determine a maximum subarray of the form AŒi : : j C 1 in constant time based on knowing a maximum subarray ending at index j . 4.2 Strassen’s algorithm for matrix multiplication If you have seen matrices before, then you probably know how to multiply them. (Otherwise, you should read Section D.1 in Appendix D.) If A D .aij / and B D .bij / are square n n matrices, then in the product C D A B,wedefinethe entry cij ,fori;j D 1;2;:::;n,by cij D nX kD1 aik bkj : (4.8) We must compute n2 matrix entries, and each is the sum of n values. The following procedure takes n n matrices A and B and multiplies them, returning their n n product C. We assume that each matrix has an attribute rows, giving the number of rows in the matrix. SQUARE-MATRIX-MULTIPLY.A; B/ 1 n D A:rows 2letC beanewn n matrix 3 for i D 1 to n 4 for j D 1 to n 5 cij D 0 6 for k D 1 to n 7 cij D cij C aik bkj 8 return C The SQUARE-MATRIX-MULTIPLY procedure works as follows. The for loop of lines 3–7 computes the entries of each row i, and within a given row i,the76 Chapter 4 Divide-and-Conquer for loop of lines 4–7 computes each of the entries cij , for each column j .Line5 initializes cij to 0 as we start computing the sum given in equation (4.8), and each iteration of the for loop of lines 6–7 adds in one more term of equation (4.8). Because each of the triply-nested for loops runs exactly n iterations, and each execution of line 7 takes constant time, the SQUARE-MATRIX-MULTIPLY proce- dure takes ‚.n3/ time. You might at first think that any matrix multiplication algorithm must take .n3/ time, since the natural definition of matrix multiplication requires that many mul- tiplications. You would be incorrect, however: we have a way to multiply matrices in o.n3/ time. In this section, we shall see Strassen’s remarkable recursive algo- rithm for multiplying n n matrices. It runs in ‚.nlg 7/ time, which we shall show in Section 4.5. Since lg 7 lies between 2:80 and 2:81, Strassen’s algorithm runs in O.n2:81/ time, which is asymptotically better than the simple SQUARE-MATRIX- MULTIPLY procedure. A simple divide-and-conquer algorithm To keep things simple, when we use a divide-and-conquer algorithm to compute the matrix product C D A B, we assume that n is an exact power of 2 in each of the n n matrices. 
We make this assumption because in each divide step, we will divide n n matrices into four n=2 n=2 matrices, and by assuming that n is an exact power of 2, we are guaranteed that as long as n 2, the dimension n=2 is an integer. Suppose that we partition each of A, B,andC into four n=2 n=2 matrices A D A11 A12 A21 A22 ;BD B11 B12 B21 B22 ;CD C11 C12 C21 C22 ; (4.9) so that we rewrite the equation C D A B as C11 C12 C21 C22 D A11 A12 A21 A22 B11 B12 B21 B22 : (4.10) Equation (4.10) corresponds to the four equations C11 D A11 B11 C A12 B21 ; (4.11) C12 D A11 B12 C A12 B22 ; (4.12) C21 D A21 B11 C A22 B21 ; (4.13) C22 D A21 B12 C A22 B22 : (4.14) Each of these four equations specifies two multiplications of n=2 n=2 matrices and the addition of their n=2 n=2 products. We can use these equations to create a straightforward, recursive, divide-and-conquer algorithm:4.2 Strassen’s algorithm for matrix multiplication 77 SQUARE-MATRIX-MULTIPLY-RECURSIVE.A; B/ 1 n D A:rows 2letC be a new n n matrix 3 if n == 1 4 c11 D a11 b11 5 else partition A, B,andC as in equations (4.9) 6 C11 D SQUARE-MATRIX-MULTIPLY-RECURSIVE.A11;B11/ C SQUARE-MATRIX-MULTIPLY-RECURSIVE.A12;B21/ 7 C12 D SQUARE-MATRIX-MULTIPLY-RECURSIVE.A11;B12/ C SQUARE-MATRIX-MULTIPLY-RECURSIVE.A12;B22/ 8 C21 D SQUARE-MATRIX-MULTIPLY-RECURSIVE.A21;B11/ C SQUARE-MATRIX-MULTIPLY-RECURSIVE.A22;B21/ 9 C22 D SQUARE-MATRIX-MULTIPLY-RECURSIVE.A21;B12/ C SQUARE-MATRIX-MULTIPLY-RECURSIVE.A22;B22/ 10 return C This pseudocode glosses over one subtle but important implementation detail. How do we partition the matrices in line 5? If we were to create 12 new n=2 n=2 matrices, we would spend ‚.n2/ time copying entries. In fact, we can partition the matrices without copying entries. The trick is to use index calculations. We identify a submatrix by a range of row indices and a range of column indices of the original matrix. We end up representing a submatrix a little differently from how we represent the original matrix, which is the subtlety we are glossing over. The advantage is that, since we can specify submatrices by index calculations, executing line 5 takes only ‚.1/ time (although we shall see that it makes no difference asymptotically to the overall running time whether we copy or partition in place). Now, we derive a recurrence to characterize the running time of SQUARE- MATRIX-MULTIPLY-RECURSIVE.LetT .n/ be the time to multiply two n n matrices using this procedure. In the base case, when n D 1, we perform just the one scalar multiplication in line 4, and so T.1/D ‚.1/ : (4.15) The recursive case occurs when n>1. As discussed, partitioning the matrices in line 5 takes ‚.1/ time, using index calculations. In lines 6–9, we recursively call SQUARE-MATRIX-MULTIPLY-RECURSIVE a total of eight times. Because each recursive call multiplies two n=2 n=2 matrices, thereby contributing T .n=2/ to the overall running time, the time taken by all eight recursive calls is 8T .n=2/.We also must account for the four matrix additions in lines 6–9. Each of these matrices contains n2=4 entries, and so each of the four matrix additions takes ‚.n2/ time. Since the number of matrix additions is a constant, the total time spent adding ma-78 Chapter 4 Divide-and-Conquer trices in lines 6–9 is ‚.n2/. (Again, we use index calculations to place the results of the matrix additions into the correct positions of matrix C, with an overhead of ‚.1/ time per entry.) 
The total time for the recursive case, therefore, is the sum of the partitioning time, the time for all the recursive calls, and the time to add the matrices resulting from the recursive calls: T .n/ D ‚.1/ C 8T .n=2/ C ‚.n2/ D 8T .n=2/ C ‚.n2/: (4.16) Notice that if we implemented partitioning by copying matrices, which would cost ‚.n2/ time, the recurrence would not change, and hence the overall running time would increase by only a constant factor. Combining equations (4.15) and (4.16) gives us the recurrence for the running time of SQUARE-MATRIX-MULTIPLY-RECURSIVE: T .n/ D ( ‚.1/ if n D 1; 8T .n=2/ C ‚.n2/ if n>1: (4.17) As we shall see from the master method in Section 4.5, recurrence (4.17) has the solution T .n/ D ‚.n3/. Thus, this simple divide-and-conquer approach is no faster than the straightforward SQUARE-MATRIX-MULTIPLY procedure. Before we continue on to examining Strassen’s algorithm, let us review where the components of equation (4.16) came from. Partitioning each n n matrix by index calculation takes ‚.1/ time, but we have two matrices to partition. Although you could say that partitioning the two matrices takes ‚.2/ time, the constant of 2 is subsumed by the ‚-notation. Adding two matrices, each with, say, k entries, takes ‚.k/ time. Since the matrices we add each have n2=4 entries, you could say that adding each pair takes ‚.n2=4/ time. Again, however, the ‚-notation subsumes the constant factor of 1=4, and we say that adding two n2=4 n2=4 matrices takes ‚.n2/ time. We have four such matrix additions, and once again, instead of saying that they take ‚.4n2/ time, we say that they take ‚.n2/ time. (Of course, you might observe that we could say that the four matrix additions take ‚.4n2=4/ time, and that 4n2=4 D n2, but the point here is that ‚-notation subsumes constant factors, whatever they are.) Thus, we end up with two terms of ‚.n2/, which we can combine into one. When we account for the eight recursive calls, however, we cannot just sub- sume the constant factor of 8. In other words, we must say that together they take 8T .n=2/ time, rather than just T .n=2/ time. You can get a feel for why by looking back at the recursion tree in Figure 2.5, for recurrence (2.1) (which is identical to recurrence (4.7)), with the recursive case T .n/ D 2T .n=2/C‚.n/. The factor of 2 determined how many children each tree node had, which in turn determined how many terms contributed to the sum at each level of the tree. If we were to ignore4.2 Strassen’s algorithm for matrix multiplication 79 the factor of 8 in equation (4.16) or the factor of 2 in recurrence (4.1), the recursion tree would just be linear, rather than “bushy,” and each level would contribute only one term to the sum. Bear in mind, therefore, that although asymptotic notation subsumes constant multiplicative factors, recursive notation such as T .n=2/ does not. Strassen’s method The key to Strassen’s method is to make the recursion tree slightly less bushy. That is, instead of performing eight recursive multiplications of n=2 n=2 matrices, it performs only seven. The cost of eliminating one matrix multiplication will be several new additions of n=2 n=2 matrices, but still only a constant number of additions. As before, the constant number of matrix additions will be subsumed by ‚-notation when we set up the recurrence equation to characterize the running time. Strassen’s method is not at all obvious. (This might be the biggest understate- ment in this book.) It has four steps: 1. 
Divide the input matrices A and B and output matrix C into n=2 n=2 subma- trices, as in equation (4.9). This step takes ‚.1/ time by index calculation, just as in SQUARE-MATRIX-MULTIPLY-RECURSIVE. 2. Create 10 matrices S1;S2;:::;S10, each of which is n=2 n=2 and is the sum or difference of two matrices created in step 1. We can create all 10 matrices in ‚.n2/ time. 3. Using the submatrices created in step 1 and the 10 matrices created in step 2, recursively compute seven matrix products P1;P2;:::;P7. Each matrix Pi is n=2 n=2. 4. Compute the desired submatrices C11;C12;C21;C22 of the result matrix C by adding and subtracting various combinations of the Pi matrices. We can com- pute all four submatrices in ‚.n2/ time. We shall see the details of steps 2–4 in a moment, but we already have enough information to set up a recurrence for the running time of Strassen’s method. Let us assume that once the matrix size n gets down to 1, we perform a simple scalar mul- tiplication, just as in line 4 of SQUARE-MATRIX-MULTIPLY-RECURSIVE.When n>1, steps 1, 2, and 4 take a total of ‚.n2/ time, and step 3 requires us to per- form seven multiplications of n=2 n=2 matrices. Hence, we obtain the following recurrence for the running time T .n/ of Strassen’s algorithm: T .n/ D ( ‚.1/ if n D 1; 7T .n=2/ C ‚.n2/ if n>1: (4.18)80 Chapter 4 Divide-and-Conquer We have traded off one matrix multiplication for a constant number of matrix ad- ditions. Once we understand recurrences and their solutions, we shall see that this tradeoff actually leads to a lower asymptotic running time. By the master method in Section 4.5, recurrence (4.18) has the solution T .n/ D ‚.nlg 7/. We now proceed to describe the details. In step 2, we create the following 10 matrices: S1 D B12 B22 ; S2 D A11 C A12 ; S3 D A21 C A22 ; S4 D B21 B11 ; S5 D A11 C A22 ; S6 D B11 C B22 ; S7 D A12 A22 ; S8 D B21 C B22 ; S9 D A11 A21 ; S10 D B11 C B12 : Since we must add or subtract n=2 n=2 matrices 10 times, this step does indeed take ‚.n2/ time. In step 3, we recursively multiply n=2n=2 matrices seven times to compute the following n=2 n=2 matrices, each of which is the sum or difference of products of A and B submatrices: P1 D A11 S1 D A11 B12 A11 B22 ; P2 D S2 B22 D A11 B22 C A12 B22 ; P3 D S3 B11 D A21 B11 C A22 B11 ; P4 D A22 S4 D A22 B21 A22 B11 ; P5 D S5 S6 D A11 B11 C A11 B22 C A22 B11 C A22 B22 ; P6 D S7 S8 D A12 B21 C A12 B22 A22 B21 A22 B22 ; P7 D S9 S10 D A11 B11 C A11 B12 A21 B11 A21 B12 : Note that the only multiplications we need to perform are those in the middle col- umn of the above equations. The right-hand column just shows what these products equal in terms of the original submatrices created in step 1. Step 4 adds and subtracts the Pi matrices created in step 3 to construct the four n=2 n=2 submatrices of the product C. We start with C11 D P5 C P4 P2 C P6 :4.2 Strassen’s algorithm for matrix multiplication 81 Expanding out the right-hand side, with the expansion of each Pi on its own line and vertically aligning terms that cancel out, we see that C11 equals A11 B11 CA11 B22 CA22 B11 CA22 B22 A22 B11 CA22 B21 A11 B22 A12 B22 A22 B22 A22 B21 CA12 B22 CA12 B21 A11 B11 CA12 B21 ; which corresponds to equation (4.11). Similarly, we set C12 D P1 C P2 ; and so C12 equals A11 B12 A11 B22 C A11 B22 C A12 B22 A11 B12 C A12 B22 ; corresponding to equation (4.12). Setting C21 D P3 C P4 makes C21 equal A21 B11 C A22 B11 A22 B11 C A22 B21 A21 B11 C A22 B21 ; corresponding to equation (4.13). 
Finally, we set C22 D P5 C P1 P3 P7 ; so that C22 equals A11 B11 CA11 B22 CA22 B11 CA22 B22 A11 B22 CA11 B12 A22 B11 A21 B11 A11 B11 A11 B12 CA21 B11 CA21 B12 A22 B22 CA21 B12 ;82 Chapter 4 Divide-and-Conquer which corresponds to equation (4.14). Altogether, we add or subtract n=2 n=2 matrices eight times in step 4, and so this step indeed takes ‚.n2/ time. Thus, we see that Strassen’s algorithm, comprising steps 1–4, produces the cor- rect matrix product and that recurrence (4.18) characterizes its running time. Since we shall see in Section 4.5 that this recurrence has the solution T .n/ D ‚.nlg 7/, Strassen’s method is asymptotically faster than the straightforward SQUARE- MATRIX-MULTIPLY procedure. The notes at the end of this chapter discuss some of the practical aspects of Strassen’s algorithm. Exercises Note: Although Exercises 4.2-3, 4.2-4, and 4.2-5 are about variants on Strassen’s algorithm, you should read Section 4.5 before trying to solve them. 4.2-1 Use Strassen’s algorithm to compute the matrix product 13 75 68 42 : Show your work. 4.2-2 Write pseudocode for Strassen’s algorithm. 4.2-3 How would you modify Strassen’s algorithm to multiply nn matrices in which n is not an exact power of 2? Show that the resulting algorithm runs in time ‚.nlg 7/. 4.2-4 What is the largest k such that if you can multiply 3 3 matrices using k multi- plications (not assuming commutativity of multiplication), then you can multiply n n matrices in time o.nlg 7/? What would the running time of this algorithm be? 4.2-5 V. Pan has discovered a way of multiplying 68 68 matrices using 132,464 mul- tiplications, a way of multiplying 70 70 matrices using 143,640 multiplications, and a way of multiplying 72 72 matrices using 155,424 multiplications. Which method yields the best asymptotic running time when used in a divide-and-conquer matrix-multiplication algorithm? How does it compare to Strassen’s algorithm?4.3 The substitution method for solving recurrences 83 4.2-6 How quickly can you multiply a knn matrix by an nknmatrix, using Strassen’s algorithm as a subroutine? Answer the same question with the order of the input matrices reversed. 4.2-7 Show how to multiply the complex numbers a C bi and c C di using only three multiplications of real numbers. The algorithm should take a, b, c,andd as input and produce the real component ac bd and the imaginary component ad C bc separately. 4.3 The substitution method for solving recurrences Now that we have seen how recurrences characterize the running times of divide- and-conquer algorithms, we will learn how to solve recurrences. We start in this section with the “substitution” method. The substitution method for solving recurrences comprises two steps: 1. Guess the form of the solution. 2. Use mathematical induction to find the constants and show that the solution works. We substitute the guessed solution for the function when applying the inductive hypothesis to smaller values; hence the name “substitution method.” This method is powerful, but we must be able to guess the form of the answer in order to apply it. We can use the substitution method to establish either upper or lower bounds on a recurrence. As an example, let us determine an upper bound on the recurrence T .n/ D 2T .bn=2c/ C n; (4.19) which is similar to recurrences (4.3) and (4.4). We guess that the solution is T .n/ D O.nlg n/. The substitution method requires us to prove that T .n/ cnlg n for an appropriate choice of the constant c>0. 
We start by assuming that this bound holds for all positive m3,the recurrence does not depend directly on T.1/. Thus, we can replace T.1/by T.2/ and T.3/ as the base cases in the inductive proof, letting n0 D 2. Note that we make a distinction between the base case of the recurrence (n D 1) and the base cases of the inductive proof (n D 2 and n D 3). With T.1/ D 1, we derive from the recurrence that T.2/ D 4 and T.3/ D 5. Now we can complete the inductive proof that T .n/ cnlg n for some constant c 1 by choosing c large enough so that T.2/ c2lg 2 and T.3/ c3lg 3. As it turns out, any choice of c 2 suffices for the base cases of n D 2 and n D 3 to hold. For most of the recurrences we shall examine, it is straightforward to extend boundary conditions to make the inductive assumption work for small n, and we shall not always explicitly work out the details. Making a good guess Unfortunately, there is no general way to guess the correct solutions to recurrences. Guessing a solution takes experience and, occasionally, creativity. Fortunately, though, you can use some heuristics to help you become a good guesser. You can also use recursion trees, which we shall see in Section 4.4, to generate good guesses. If a recurrence is similar to one you have seen before, then guessing a similar solution is reasonable. As an example, consider the recurrence T .n/ D 2T .bn=2c C 17/ C n; which looks difficult because of the added “17” in the argument to T on the right- hand side. Intuitively, however, this additional term cannot substantially affect the4.3 The substitution method for solving recurrences 85 solution to the recurrence. When n is large, the difference between bn=2c and bn=2c C 17 is not that large: both cut n nearly evenly in half. Consequently, we make the guess that T .n/ D O.nlg n/, which you can verify as correct by using the substitution method (see Exercise 4.3-6). Another way to make a good guess is to prove loose upper and lower bounds on the recurrence and then reduce the range of uncertainty. For example, we might start with a lower bound of T .n/ D .n/ for the recurrence (4.19), since we have the term n in the recurrence, and we can prove an initial upper bound of T .n/ D O.n2/. Then, we can gradually lower the upper bound and raise the lower bound until we converge on the correct, asymptotically tight solution of T .n/ D ‚.n lg n/. Subtleties Sometimes you might correctly guess an asymptotic bound on the solution of a recurrence, but somehow the math fails to work out in the induction. The problem frequently turns out to be that the inductive assumption is not strong enough to prove the detailed bound. If you revise the guess by subtracting a lower-order term when you hit such a snag, the math often goes through. Consider the recurrence T .n/ D T.bn=2c/ C T.dn=2e/ C 1: We guess that the solution is T .n/ D O.n/, and we try to show that T .n/ cn for an appropriate choice of the constant c. Substituting our guess in the recurrence, we obtain T .n/ c bn=2c C c dn=2e C 1 D cn C 1; which does not imply T .n/ cn for any choice of c. We might be tempted to try a larger guess, say T .n/ D O.n2/. Although we can make this larger guess work, our original guess of T .n/ D O.n/ is correct. In order to show that it is correct, however, we must make a stronger inductive hypothesis. Intuitively, our guess is nearly right: we are off only by the constant 1,a lower-order term. Nevertheless, mathematical induction does not work unless we prove the exact form of the inductive hypothesis. 
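(Numerically, the O(n) guess certainly looks right: taking T(1) = 1, a boundary value assumed here only for illustration, the recurrence evaluates to exactly 2n - 1, as the quick Python check below confirms for small n.)

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(floor(n/2)) + T(ceil(n/2)) + 1, with the assumed boundary value T(1) = 1.
    if n == 1:
        return 1
    return T(n // 2) + T((n + 1) // 2) + 1

# The iterated values equal 2n - 1 exactly, which is Theta(n), even though the naive
# inductive step above did not go through.
assert all(T(n) == 2 * n - 1 for n in range(1, 5000))
print("T(n) = 2n - 1 for 1 <= n < 5000")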
We overcome our difficulty by subtracting a lower-order term from our previous guess. Our new guess is T .n/ cn d,whered 0 is a constant. We now have T .n/ .c bn=2c d/C .c dn=2e d/C 1 D cn 2d C 1 cn d;86 Chapter 4 Divide-and-Conquer as long as d 1. As before, we must choose the constant c large enough to handle the boundary conditions. You might find the idea of subtracting a lower-order term counterintuitive. Af- ter all, if the math does not work out, we should increase our guess, right? Not necessarily! When proving an upper bound by induction, it may actually be more difficult to prove that a weaker upper bound holds, because in order to prove the weaker bound, we must use the same weaker bound inductively in the proof. In our current example, when the recurrence has more than one recursive term, we get to subtract out the lower-order term of the proposed bound once per recursive term. In the above example, we subtracted out the constant d twice, once for the T.bn=2c/ term and once for the T.dn=2e/ term. We ended up with the inequality T .n/ cn 2d C 1, and it was easy to find values of d to make cn 2d C 1 be less than or equal to cn d. Avoiding pitfalls It is easy to err in the use of asymptotic notation. For example, in the recur- rence (4.19) we can falsely “prove” T .n/ D O.n/ by guessing T .n/ cn and then arguing T .n/ 2.c bn=2c/ C n cn C n D O.n/ ; wrong!! since c is a constant. The error is that we have not proved the exact form of the inductive hypothesis, that is, that T .n/ cn. We therefore will explicitly prove that T .n/ cn when we want to show that T .n/ D O.n/. Changing variables Sometimes, a little algebraic manipulation can make an unknown recurrence simi- lar to one you have seen before. As an example, consider the recurrence T .n/ D 2T pn ˘ C lg n; which looks difficult. We can simplify this recurrence, though, with a change of variables. For convenience, we shall not worry about rounding off values, such as pn, to be integers. Renaming m D lg n yields T.2m/ D 2T .2m=2/ C m: We can now rename S.m/ D T.2m/ to produce the new recurrence S.m/ D 2S.m=2/ C m;4.3 The substitution method for solving recurrences 87 which is very much like recurrence (4.19). Indeed, this new recurrence has the same solution: S.m/ D O.mlg m/. Changing back from S.m/ to T .n/, we obtain T .n/ D T.2m/ D S.m/ D O.mlg m/ D O.lg n lg lg n/ : Exercises 4.3-1 Show that the solution of T .n/ D T.n 1/ C n is O.n2/. 4.3-2 Show that the solution of T .n/ D T.dn=2e/ C 1 is O.lg n/. 4.3-3 We saw that the solution of T .n/ D 2T .bn=2c/Cn is O.nlg n/. Show that the so- lution of this recurrence is also .n lg n/. Conclude that the solution is ‚.n lg n/. 4.3-4 Show that by making a different inductive hypothesis, we can overcome the diffi- culty with the boundary condition T.1/D 1 for recurrence (4.19) without adjusting the boundary conditions for the inductive proof. 4.3-5 Show that ‚.n lg n/ is the solution to the “exact” recurrence (4.3) for merge sort. 4.3-6 Show that the solution to T .n/ D 2T .bn=2c C 17/ C n is O.nlg n/. 4.3-7 Using the master method in Section 4.5, you can show that the solution to the recurrence T .n/ D 4T .n=3/ C n is T .n/ D ‚.nlog3 4/. Show that a substitution proof with the assumption T .n/ cnlog3 4 fails. Then show how to subtract off a lower-order term to make a substitution proof work. 4.3-8 Using the master method in Section 4.5, you can show that the solution to the recurrence T .n/ D 4T .n=2/ C n2 is T .n/ D ‚.n2/. 
Show that a substitution proof with the assumption T .n/ cn2 fails. Then show how to subtract off a lower-order term to make a substitution proof work.88 Chapter 4 Divide-and-Conquer 4.3-9 Solve the recurrence T .n/ D 3T .pn/ C log n by making a change of variables. Your solution should be asymptotically tight. Do not worry about whether values are integral. 4.4 The recursion-tree method for solving recurrences Although you can use the substitution method to provide a succinct proof that a solution to a recurrence is correct, you might have trouble coming up with a good guess. Drawing out a recursion tree, as we did in our analysis of the merge sort recurrence in Section 2.3.2, serves as a straightforward way to devise a good guess. In a recursion tree, each node represents the cost of a single subproblem somewhere in the set of recursive function invocations. We sum the costs within each level of the tree to obtain a set of per-level costs, and then we sum all the per-level costs to determine the total cost of all levels of the recursion. A recursion tree is best used to generate a good guess, which you can then verify by the substitution method. When using a recursion tree to generate a good guess, you can often tolerate a small amount of “sloppiness,” since you will be verifying your guess later on. If you are very careful when drawing out a recursion tree and summing the costs, however, you can use a recursion tree as a direct proof of a solution to a recurrence. In this section, we will use recursion trees to generate good guesses, and in Section 4.6, we will use recursion trees directly to prove the theorem that forms the basis of the master method. For example, let us see how a recursion tree would provide a good guess for the recurrence T .n/ D 3T .bn=4c/ C ‚.n2/. We start by focusing on finding an upper bound for the solution. Because we know that floors and ceilings usually do not matter when solving recurrences (here’s an example of sloppiness that we can tolerate), we create a recursion tree for the recurrence T .n/ D 3T .n=4/ C cn2, having written out the implied constant coefficient c>0. Figure 4.5 shows how we derive the recursion tree for T .n/ D 3T .n=4/ C cn2. For convenience, we assume that n is an exact power of 4 (another example of tolerable sloppiness) so that all subproblem sizes are integers. Part (a) of the figure shows T .n/, which we expand in part (b) into an equivalent tree representing the recurrence. The cn2 term at the root represents the cost at the top level of recursion, and the three subtrees of the root represent the costs incurred by the subproblems of size n=4. Part (c) shows this process carried one step further by expanding each node with cost T .n=4/ from part (b). The cost for each of the three children of the root is c.n=4/2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence.4.4 The recursion-tree method for solving recurrences 89 … … (d) (c)(b)(a) T .n/ cn2 cn2 cn2 T n 4 T n 4 T n 4 T n 16 T n 16 T n 16 T n 16 T n 16 T n 16 T n 16 T n 16 T n 16 cn2 c n 4 2c n 4 2c n 4 2 c n 4 2c n 4 2c n 4 2 c n 16 2c n 16 2c n 16 2c n 16 2c n 16 2c n 16 2c n 16 2c n 16 2c n 16 2 3 16 cn2 3 16 2 cn2 log4 n nlog4 3 T.1/T.1/T.1/T.1/T.1/T.1/T.1/T.1/T.1/T.1/T.1/T.1/T.1/ ‚.nlog4 3/ Total: O.n2/ Figure 4.5 Constructing a recursion tree for the recurrence T .n/ D 3T .n=4/ C cn2.Part(a) shows T .n/, which progressively expands in (b)–(d) to form the recursion tree. 
For example, let us see how a recursion tree would provide a good guess for the recurrence T(n) = 3T(⌊n/4⌋) + Θ(n^2). We start by focusing on finding an upper bound for the solution. Because we know that floors and ceilings usually do not matter when solving recurrences (here's an example of sloppiness that we can tolerate), we create a recursion tree for the recurrence T(n) = 3T(n/4) + cn^2, having written out the implied constant coefficient c > 0.

Figure 4.5 shows how we derive the recursion tree for T(n) = 3T(n/4) + cn^2. For convenience, we assume that n is an exact power of 4 (another example of tolerable sloppiness) so that all subproblem sizes are integers. Part (a) of the figure shows T(n), which we expand in part (b) into an equivalent tree representing the recurrence. The cn^2 term at the root represents the cost at the top level of recursion, and the three subtrees of the root represent the costs incurred by the subproblems of size n/4. Part (c) shows this process carried one step further by expanding each node with cost T(n/4) from part (b). The cost for each of the three children of the root is c(n/4)^2. We continue expanding each node in the tree by breaking it into its constituent parts as determined by the recurrence.

[Figure 4.5  Constructing a recursion tree for the recurrence T(n) = 3T(n/4) + cn^2. Part (a) shows T(n), which progressively expands in (b)–(d) to form the recursion tree. The per-level costs shown are cn^2, (3/16)cn^2, (3/16)^2 cn^2, ..., with Θ(n^(log_4 3)) at the leaves, for a total of O(n^2).]

The fully expanded tree in part (d) has height log_4 n (it has log_4 n + 1 levels).

Because subproblem sizes decrease by a factor of 4 each time we go down one level, we eventually must reach a boundary condition. How far from the root do we reach one? The subproblem size for a node at depth i is n/4^i. Thus, the subproblem size hits n = 1 when n/4^i = 1 or, equivalently, when i = log_4 n. Thus, the tree has log_4 n + 1 levels (at depths 0, 1, 2, ..., log_4 n).

Next we determine the cost at each level of the tree. Each level has three times more nodes than the level above, and so the number of nodes at depth i is 3^i. Because subproblem sizes reduce by a factor of 4 for each level we go down from the root, each node at depth i, for i = 0, 1, 2, ..., log_4 n − 1, has a cost of c(n/4^i)^2. Multiplying, we see that the total cost over all nodes at depth i, for i = 0, 1, 2, ..., log_4 n − 1, is 3^i c(n/4^i)^2 = (3/16)^i cn^2. The bottom level, at depth log_4 n, has 3^(log_4 n) = n^(log_4 3) nodes, each contributing cost T(1), for a total cost of n^(log_4 3) T(1), which is Θ(n^(log_4 3)), since we assume that T(1) is a constant.

Now we add up the costs over all levels to determine the cost for the entire tree:

T(n) = cn^2 + (3/16)cn^2 + (3/16)^2 cn^2 + ... + (3/16)^(log_4 n − 1) cn^2 + Θ(n^(log_4 3))
     = Σ_{i=0}^{log_4 n − 1} (3/16)^i cn^2 + Θ(n^(log_4 3))
     = ((3/16)^(log_4 n) − 1) / ((3/16) − 1) · cn^2 + Θ(n^(log_4 3))     (by equation (A.5)).

This last formula looks somewhat messy until we realize that we can again take advantage of small amounts of sloppiness and use an infinite decreasing geometric series as an upper bound. Backing up one step and applying equation (A.6), we have

T(n) = Σ_{i=0}^{log_4 n − 1} (3/16)^i cn^2 + Θ(n^(log_4 3))
     < Σ_{i=0}^{∞} (3/16)^i cn^2 + Θ(n^(log_4 3))
     = 1/(1 − 3/16) · cn^2 + Θ(n^(log_4 3))
     = (16/13)cn^2 + Θ(n^(log_4 3))
     = O(n^2).

Thus, we have derived a guess of T(n) = O(n^2) for our original recurrence T(n) = 3T(⌊n/4⌋) + Θ(n^2). In this example, the coefficients of cn^2 form a decreasing geometric series and, by equation (A.6), the sum of these coefficients is bounded from above by the constant 16/13. Since the root's contribution to the total cost is cn^2, the root contributes a constant fraction of the total cost. In other words, the cost of the root dominates the total cost of the tree.

In fact, if O(n^2) is indeed an upper bound for the recurrence (as we shall verify in a moment), then it must be a tight bound. Why? The first recursive call contributes a cost of Θ(n^2), and so Ω(n^2) must be a lower bound for the recurrence.

Now we can use the substitution method to verify that our guess was correct, that is, T(n) = O(n^2) is an upper bound for the recurrence T(n) = 3T(⌊n/4⌋) + Θ(n^2). We want to show that T(n) ≤ dn^2 for some constant d > 0. Using the same constant c > 0 as before, we have

T(n) ≤ 3T(⌊n/4⌋) + cn^2
     ≤ 3d⌊n/4⌋^2 + cn^2
     ≤ 3d(n/4)^2 + cn^2
     = (3/16)dn^2 + cn^2
     ≤ dn^2,

where the last step holds as long as d ≥ (16/13)c.
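The substitution step can be spot-checked numerically. The sketch below is illustrative only; the base case T(n) = c for n < 4 and the sample values of n are assumptions. It iterates T(n) = 3T(⌊n/4⌋) + cn² and tests it against dn² with d = 16c/13, the constant suggested by the proof.

```python
from functools import lru_cache

c = 1.0            # assumed constant in the Theta(n^2) driving term
d = 16 * c / 13    # the constant the substitution proof calls for

@lru_cache(maxsize=None)
def T(n):
    # T(n) = 3*T(floor(n/4)) + c*n^2, with T(n) = c for n < 4 (an assumed base case).
    if n < 4:
        return c
    return 3 * T(n // 4) + c * n * n

for n in (10, 100, 1000, 10_000, 100_000):
    print(f"n = {n:6d}   T(n) <= d*n^2 ?  {T(n) <= d * n * n}")
```

All of the sampled values satisfy the bound, and for n near powers of 4 they come close to it, which is consistent with 16/13 being essentially the right constant.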
In another, more intricate, example, Figure 4.6 shows the recursion tree for

T(n) = T(n/3) + T(2n/3) + O(n).

(Again, we omit floor and ceiling functions for simplicity.) As before, we let c represent the constant factor in the O(n) term. When we add the values across the levels of the recursion tree shown in the figure, we get a value of cn for every level.

[Figure 4.6  A recursion tree for the recurrence T(n) = T(n/3) + T(2n/3) + cn. The tree has height log_{3/2} n; total: O(n lg n).]

The longest simple path from the root to a leaf is n → (2/3)n → (2/3)^2 n → ⋯ → 1. Since (2/3)^k n = 1 when k = log_{3/2} n, the height of the tree is log_{3/2} n.

Intuitively, we expect the solution to the recurrence to be at most the number of levels times the cost of each level, or O(cn log_{3/2} n) = O(n lg n). Figure 4.6 shows only the top levels of the recursion tree, however, and not every level in the tree contributes a cost of cn. Consider the cost of the leaves. If this recursion tree were a complete binary tree of height log_{3/2} n, there would be 2^(log_{3/2} n) = n^(log_{3/2} 2) leaves. Since the cost of each leaf is a constant, the total cost of all leaves would then be Θ(n^(log_{3/2} 2)) which, since log_{3/2} 2 is a constant strictly greater than 1, is ω(n lg n). This recursion tree is not a complete binary tree, however, and so it has fewer than n^(log_{3/2} 2) leaves. Moreover, as we go down from the root, more and more internal nodes are absent. Consequently, levels toward the bottom of the recursion tree contribute less than cn to the total cost. We could work out an accurate accounting of all costs, but remember that we are just trying to come up with a guess to use in the substitution method. Let us tolerate the sloppiness and attempt to show that a guess of O(n lg n) for the upper bound is correct.

Indeed, we can use the substitution method to verify that O(n lg n) is an upper bound for the solution to the recurrence. We show that T(n) ≤ dn lg n, where d is a suitable positive constant. We have

T(n) ≤ T(n/3) + T(2n/3) + cn
     ≤ d(n/3) lg(n/3) + d(2n/3) lg(2n/3) + cn
     = (d(n/3) lg n − d(n/3) lg 3) + (d(2n/3) lg n − d(2n/3) lg(3/2)) + cn
     = dn lg n − d((n/3) lg 3 + (2n/3) lg(3/2)) + cn
     = dn lg n − d((n/3) lg 3 + (2n/3) lg 3 − (2n/3) lg 2) + cn
     = dn lg n − dn(lg 3 − 2/3) + cn
     ≤ dn lg n,

as long as d ≥ c/(lg 3 − (2/3)). Thus, we did not need to perform a more accurate accounting of costs in the recursion tree.
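To see concretely that only the top levels of this tree contribute a full cn, here is a rough sketch, not from the text; treating subproblems of size below 1 as leaves and taking c = 1 are assumptions. It expands T(n) = T(n/3) + T(2n/3) + cn level by level and reports each level's total cost.

```python
def level_costs(n, max_levels=40, c=1.0):
    """Expand T(n) = T(n/3) + T(2n/3) + c*n level by level and report the
    total cost contributed by each level (subproblems of size < 1 are
    treated as leaves and dropped)."""
    frontier = [float(n)]
    costs = []
    for _ in range(max_levels):
        if not frontier:
            break
        costs.append(c * sum(frontier))
        nxt = []
        for m in frontier:
            for child in (m / 3, 2 * m / 3):
                if child >= 1:
                    nxt.append(child)
        frontier = nxt
    return costs

for i, cost in enumerate(level_costs(1000)):
    print(f"level {i:2d}: {cost:9.1f}")
```

The first several levels each sum to n, and once the smallest subproblems disappear (around depth log_3 n) the per-level totals fall off, which is exactly the sloppiness the O(n lg n) guess tolerates.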
Exercises

4.4-1
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 3T(⌊n/2⌋) + n. Use the substitution method to verify your answer.

4.4-2
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = T(n/2) + n^2. Use the substitution method to verify your answer.

4.4-3
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 4T(n/2 + 2) + n. Use the substitution method to verify your answer.

4.4-4
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = 2T(n − 1) + 1. Use the substitution method to verify your answer.

4.4-5
Use a recursion tree to determine a good asymptotic upper bound on the recurrence T(n) = T(n − 1) + T(n/2) + n. Use the substitution method to verify your answer.

4.4-6
Argue that the solution to the recurrence T(n) = T(n/3) + T(2n/3) + cn, where c is a constant, is Ω(n lg n) by appealing to a recursion tree.

4.4-7
Draw the recursion tree for T(n) = 4T(⌊n/2⌋) + cn, where c is a constant, and provide a tight asymptotic bound on its solution. Verify your bound by the substitution method.

4.4-8
Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(n − a) + T(a) + cn, where a ≥ 1 and c > 0 are constants.

4.4-9
Use a recursion tree to give an asymptotically tight solution to the recurrence T(n) = T(αn) + T((1 − α)n) + cn, where α is a constant in the range 0 < α < 1 and c > 0 is also a constant.

4.5 The master method for solving recurrences

The master method provides a "cookbook" method for solving recurrences of the form

T(n) = aT(n/b) + f(n),     (4.20)

where a ≥ 1 and b > 1 are constants and f(n) is an asymptotically positive function. To use the master method, you will need to memorize three cases, but then you will be able to solve many recurrences quite easily, often without pencil and paper.

The recurrence (4.20) describes the running time of an algorithm that divides a problem of size n into a subproblems, each of size n/b, where a and b are positive constants. The a subproblems are solved recursively, each in time T(n/b). The function f(n) encompasses the cost of dividing the problem and combining the results of the subproblems. For example, the recurrence arising from Strassen's algorithm has a = 7, b = 2, and f(n) = Θ(n^2).

As a matter of technical correctness, the recurrence is not actually well defined, because n/b might not be an integer. Replacing each of the a terms T(n/b) with either T(⌊n/b⌋) or T(⌈n/b⌉) will not affect the asymptotic behavior of the recurrence, however. (We will prove this assertion in the next section.) We normally find it convenient, therefore, to omit the floor and ceiling functions when writing divide-and-conquer recurrences of this form.

The master theorem

The master method depends on the following theorem.

Theorem 4.1 (Master theorem)
Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence

T(n) = aT(n/b) + f(n),

where we interpret n/b to mean either ⌊n/b⌋ or ⌈n/b⌉. Then T(n) has the following asymptotic bounds:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Before applying the master theorem to some examples, let's spend a moment trying to understand what it says. In each of the three cases, we compare the function f(n) with the function n^(log_b a). Intuitively, the larger of the two functions determines the solution to the recurrence. If, as in case 1, the function n^(log_b a) is the larger, then the solution is T(n) = Θ(n^(log_b a)). If, as in case 3, the function f(n) is the larger, then the solution is T(n) = Θ(f(n)). If, as in case 2, the two functions are the same size, we multiply by a logarithmic factor, and the solution is T(n) = Θ(n^(log_b a) lg n) = Θ(f(n) lg n).

Beyond this intuition, you need to be aware of some technicalities. In the first case, not only must f(n) be smaller than n^(log_b a), it must be polynomially smaller. That is, f(n) must be asymptotically smaller than n^(log_b a) by a factor of n^ε for some constant ε > 0. In the third case, not only must f(n) be larger than n^(log_b a), it also must be polynomially larger and in addition satisfy the "regularity" condition that af(n/b) ≤ cf(n). This condition is satisfied by most of the polynomially bounded functions that we shall encounter.

Note that the three cases do not cover all the possibilities for f(n). There is a gap between cases 1 and 2 when f(n) is smaller than n^(log_b a) but not polynomially smaller. Similarly, there is a gap between cases 2 and 3 when f(n) is larger than n^(log_b a) but not polynomially larger. If the function f(n) falls into one of these gaps, or if the regularity condition in case 3 fails to hold, you cannot use the master method to solve the recurrence.
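The case analysis can be packaged as a small decision procedure. The sketch below is illustrative, not part of the text: the function name master_case and the restriction to driving functions of the form f(n) = n^k (lg n)^p are assumptions, and its handling of p > 0 uses the extension from Exercise 4.6-2 rather than the theorem itself.

```python
import math

def master_case(a, b, k, p=0):
    """Classify T(n) = a*T(n/b) + f(n) for driving functions of the form
    f(n) = n^k * (lg n)^p.  A sketch only: the real theorem allows any
    asymptotically positive f(n).  Returns a description of the bound, or
    None when f(n) falls into a gap between the cases."""
    crit = math.log(a) / math.log(b)        # the watershed exponent log_b a
    if math.isclose(k, crit, abs_tol=1e-9):
        if p == 0:
            return f"case 2: Theta(n^{crit:g} lg n)"
        if p > 0:
            # Not the master theorem proper: the extension of Exercise 4.6-2.
            return f"Theta(n^{crit:g} (lg n)^{p + 1})  [Exercise 4.6-2]"
        return None                         # e.g. f(n) = n^crit / lg n
    if k < crit:
        return f"case 1: Theta(n^{crit:g})"
    # k > crit: f(n) is polynomially larger, and for this form of f(n) the
    # regularity condition a*f(n/b) <= c*f(n) holds automatically.
    return f"case 3: Theta(n^{k:g}" + (f" (lg n)^{p:g})" if p else ")")

print(master_case(9, 3, 1))       # 9T(n/3) + n       -> case 1: Theta(n^2)
print(master_case(1, 1.5, 0))     # T(2n/3) + 1       -> Theta(n^0 lg n), i.e. Theta(lg n)
print(master_case(3, 4, 1, 1))    # 3T(n/4) + n lg n  -> case 3: Theta(n lg n)
print(master_case(2, 2, 1, 1))    # 2T(n/2) + n lg n  -> a gap for the theorem itself
```

The last call illustrates the gap between cases 2 and 3 discussed above; the sketch reports the Exercise 4.6-2 bound because the theorem alone does not apply there.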
Using the master method

To use the master method, we simply determine which case (if any) of the master theorem applies and write down the answer.

As a first example, consider

T(n) = 9T(n/3) + n.

For this recurrence, we have a = 9, b = 3, f(n) = n, and thus we have that n^(log_b a) = n^(log_3 9) = Θ(n^2). Since f(n) = O(n^(log_3 9 − ε)), where ε = 1, we can apply case 1 of the master theorem and conclude that the solution is T(n) = Θ(n^2).

Now consider

T(n) = T(2n/3) + 1,

in which a = 1, b = 3/2, f(n) = 1, and n^(log_b a) = n^(log_{3/2} 1) = n^0 = 1. Case 2 applies, since f(n) = Θ(n^(log_b a)) = Θ(1), and thus the solution to the recurrence is T(n) = Θ(lg n).

For the recurrence

T(n) = 3T(n/4) + n lg n,

we have a = 3, b = 4, f(n) = n lg n, and n^(log_b a) = n^(log_4 3) = O(n^0.793). Since f(n) = Ω(n^(log_4 3 + ε)), where ε ≈ 0.2, case 3 applies if we can show that the regularity condition holds for f(n). For sufficiently large n, we have that af(n/b) = 3(n/4) lg(n/4) ≤ (3/4)n lg n = cf(n) for c = 3/4. Consequently, by case 3, the solution to the recurrence is T(n) = Θ(n lg n).

The master method does not apply to the recurrence

T(n) = 2T(n/2) + n lg n,

even though it appears to have the proper form: a = 2, b = 2, f(n) = n lg n, and n^(log_b a) = n. You might mistakenly think that case 3 should apply, since f(n) = n lg n is asymptotically larger than n^(log_b a) = n. The problem is that it is not polynomially larger. The ratio f(n)/n^(log_b a) = (n lg n)/n = lg n is asymptotically less than n^ε for any positive constant ε. Consequently, the recurrence falls into the gap between case 2 and case 3. (See Exercise 4.6-2 for a solution.)

Let's use the master method to solve the recurrences we saw in Sections 4.1 and 4.2. Recurrence (4.7),

T(n) = 2T(n/2) + Θ(n),

characterizes the running times of the divide-and-conquer algorithm for both the maximum-subarray problem and merge sort. (As is our practice, we omit stating the base case in the recurrence.) Here, we have a = 2, b = 2, f(n) = Θ(n), and thus we have that n^(log_b a) = n^(log_2 2) = n. Case 2 applies, since f(n) = Θ(n), and so we have the solution T(n) = Θ(n lg n).

Recurrence (4.17),

T(n) = 8T(n/2) + Θ(n^2),

describes the running time of the first divide-and-conquer algorithm that we saw for matrix multiplication. Now we have a = 8, b = 2, and f(n) = Θ(n^2), and so n^(log_b a) = n^(log_2 8) = n^3. Since n^3 is polynomially larger than f(n) (that is, f(n) = O(n^(3 − ε)) for ε = 1), case 1 applies, and T(n) = Θ(n^3).

Finally, consider recurrence (4.18),

T(n) = 7T(n/2) + Θ(n^2),

which describes the running time of Strassen's algorithm. Here, we have a = 7, b = 2, f(n) = Θ(n^2), and thus n^(log_b a) = n^(log_2 7). Rewriting log_2 7 as lg 7 and recalling that 2.80 < lg 7 < 2.81, we see that f(n) = O(n^(lg 7 − ε)) for ε = 0.8. Again, case 1 applies, and we have the solution T(n) = Θ(n^(lg 7)).
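As a rough empirical cross-check, not part of the text (the base case T(1) = 1 and the sampled exponents are assumptions), the sketch below iterates two of these recurrences on exact powers of b and divides by the predicted bound; the ratios level off toward constants, the second one slowly.

```python
import math
from functools import lru_cache

@lru_cache(maxsize=None)
def T_case1(n):            # T(n) = 9 T(n/3) + n, predicted Theta(n^2)
    return 1 if n <= 1 else 9 * T_case1(n // 3) + n

@lru_cache(maxsize=None)
def T_case3(n):            # T(n) = 3 T(n/4) + n lg n, predicted Theta(n lg n)
    return 1 if n <= 1 else 3 * T_case3(n // 4) + n * math.log2(n)

for k in (5, 10, 15, 20):
    n1, n3 = 3 ** k, 4 ** k
    r1 = T_case1(n1) / n1 ** 2
    r3 = T_case3(n3) / (n3 * math.log2(n3))
    print(f"k = {k:2d}   T(n)/n^2 = {r1:.4f}   T(n)/(n lg n) = {r3:.4f}")
```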
Exercises

4.5-1
Use the master method to give tight asymptotic bounds for the following recurrences.
a. T(n) = 2T(n/4) + 1.
b. T(n) = 2T(n/4) + √n.
c. T(n) = 2T(n/4) + n.
d. T(n) = 2T(n/4) + n^2.

4.5-2
Professor Caesar wishes to develop a matrix-multiplication algorithm that is asymptotically faster than Strassen's algorithm. His algorithm will use the divide-and-conquer method, dividing each matrix into pieces of size n/4 × n/4, and the divide and combine steps together will take Θ(n^2) time. He needs to determine how many subproblems his algorithm has to create in order to beat Strassen's algorithm. If his algorithm creates a subproblems, then the recurrence for the running time T(n) becomes T(n) = aT(n/4) + Θ(n^2). What is the largest integer value of a for which Professor Caesar's algorithm would be asymptotically faster than Strassen's algorithm?

4.5-3
Use the master method to show that the solution to the binary-search recurrence T(n) = T(n/2) + Θ(1) is T(n) = Θ(lg n). (See Exercise 2.3-5 for a description of binary search.)

4.5-4
Can the master method be applied to the recurrence T(n) = 4T(n/2) + n^2 lg n? Why or why not? Give an asymptotic upper bound for this recurrence.

4.5-5 ⋆
Consider the regularity condition af(n/b) ≤ cf(n) for some constant c < 1, which is part of case 3 of the master theorem. Give an example of constants a ≥ 1 and b > 1 and a function f(n) that satisfies all the conditions in case 3 of the master theorem except the regularity condition.

⋆ 4.6 Proof of the master theorem

This section contains a proof of the master theorem (Theorem 4.1). You do not need to understand the proof in order to apply the master theorem.

The proof appears in two parts. The first part analyzes the master recurrence (4.20), under the simplifying assumption that T(n) is defined only on exact powers of b > 1, that is, for n = 1, b, b^2, .... This part gives all the intuition needed to understand why the master theorem is true. The second part shows how to extend the analysis to all positive integers n; it applies mathematical technique to the problem of handling floors and ceilings.

In this section, we shall sometimes abuse our asymptotic notation slightly by using it to describe the behavior of functions that are defined only over exact powers of b. Recall that the definitions of asymptotic notations require that bounds be proved for all sufficiently large numbers, not just those that are powers of b. Since we could make new asymptotic notations that apply only to the set {b^i : i = 0, 1, 2, ...}, instead of to the nonnegative numbers, this abuse is minor.

Nevertheless, we must always be on guard when we use asymptotic notation over a limited domain lest we draw improper conclusions. For example, proving that T(n) = O(n) when n is an exact power of 2 does not guarantee that T(n) = O(n). The function T(n) could be defined as

T(n) = n      if n = 1, 2, 4, 8, ...,
       n^2    otherwise,

in which case the best upper bound that applies to all values of n is T(n) = O(n^2). Because of this sort of drastic consequence, we shall never use asymptotic notation over a limited domain without making it absolutely clear from the context that we are doing so.

4.6.1 The proof for exact powers

The first part of the proof of the master theorem analyzes the recurrence (4.20)

T(n) = aT(n/b) + f(n),

for the master method, under the assumption that n is an exact power of b > 1, where b need not be an integer. We break the analysis into three lemmas. The first reduces the problem of solving the master recurrence to the problem of evaluating an expression that contains a summation. The second determines bounds on this summation. The third lemma puts the first two together to prove a version of the master theorem for the case in which n is an exact power of b.
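The piecewise function just described is easy to tabulate. The following sketch is illustrative only (the sample values of n are an assumption): the ratio T(n)/n stays at 1 on exact powers of 2 while jumping to n just off them, which is why only the O(n^2) bound holds over the full domain.

```python
def T(n):
    # n on exact powers of 2, n^2 everywhere else -- the function defined above.
    return n if n & (n - 1) == 0 else n * n

for n in (64, 65, 1024, 1025):
    print(f"n = {n:5d}   T(n) = {T(n):8d}   T(n)/n = {T(n) / n:8.1f}")
```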
Lemma 4.2
Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. Define T(n) on exact powers of b by the recurrence

T(n) = Θ(1)                if n = 1,
       aT(n/b) + f(n)      if n = b^i,

where i is a positive integer. Then

T(n) = Θ(n^(log_b a)) + Σ_{j=0}^{log_b n − 1} a^j f(n/b^j).     (4.21)

Proof  We use the recursion tree in Figure 4.7. The root of the tree has cost f(n), and it has a children, each with cost f(n/b). (It is convenient to think of a as being an integer, especially when visualizing the recursion tree, but the mathematics does not require it.) Each of these children has a children, making a^2 nodes at depth 2, and each of these nodes has cost f(n/b^2). In general, there are a^j nodes at depth j, and each has cost f(n/b^j). The cost of each leaf is T(1) = Θ(1), and each leaf is at depth log_b n, since n/b^(log_b n) = 1. There are a^(log_b n) = n^(log_b a) leaves in the tree.

[Figure 4.7  The recursion tree generated by T(n) = aT(n/b) + f(n). The tree is a complete a-ary tree with n^(log_b a) leaves and height log_b n. The cost of the nodes at each depth is shown at the right, and their sum is given in equation (4.21).]

We can obtain equation (4.21) by summing the costs of the nodes at each depth in the tree, as shown in the figure. The cost for all internal nodes at depth j is a^j f(n/b^j), and so the total cost of all internal nodes is

Σ_{j=0}^{log_b n − 1} a^j f(n/b^j).

In the underlying divide-and-conquer algorithm, this sum represents the costs of dividing problems into subproblems and then recombining the subproblems. The cost of all the leaves, which is the cost of doing all n^(log_b a) subproblems of size 1, is Θ(n^(log_b a)).

In terms of the recursion tree, the three cases of the master theorem correspond to cases in which the total cost of the tree is (1) dominated by the costs in the leaves, (2) evenly distributed among the levels of the tree, or (3) dominated by the cost of the root.

The summation in equation (4.21) describes the cost of the dividing and combining steps in the underlying divide-and-conquer algorithm. The next lemma provides asymptotic bounds on the summation's growth.
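Equation (4.21) can be checked exactly in code when T(1) = 1, since the leaf term is then exactly n^(log_b a) = a^(log_b n). The sketch below is illustrative only; the choice a = 3, b = 4, f(n) = n^2 and the range of test sizes are assumptions.

```python
def T(n, a, b, f):
    # The recurrence itself, for n an exact power of the integer b, with T(1) = 1.
    return 1 if n == 1 else a * T(n // b, a, b, f) + f(n)

def tree_sum(n, a, b, f):
    # Right-hand side of equation (4.21), with the leaf term written out
    # exactly as n^(log_b a) * T(1) = a^(log_b n).
    total, power_a, size = 0, 1, n
    while size > 1:
        total += power_a * f(size)     # a^j * f(n / b^j)
        power_a *= a
        size //= b
    return total + power_a             # plus the a^(log_b n) leaves

a, b, f = 3, 4, lambda m: m * m
for k in range(1, 7):
    n = b ** k
    assert T(n, a, b, f) == tree_sum(n, a, b, f)
print("equation (4.21) checks out for n = 4^1 .. 4^6 with a = 3, b = 4, f(n) = n^2")
```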
Lemma 4.3
Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. A function g(n) defined over exact powers of b by

g(n) = Σ_{j=0}^{log_b n − 1} a^j f(n/b^j)     (4.22)

has the following asymptotic bounds for exact powers of b:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then g(n) = O(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then g(n) = Θ(n^(log_b a) lg n).
3. If af(n/b) ≤ cf(n) for some constant c < 1 and for all sufficiently large n, then g(n) = Θ(f(n)).

Proof  For case 1, we have f(n) = O(n^(log_b a − ε)), which implies that f(n/b^j) = O((n/b^j)^(log_b a − ε)). Substituting into equation (4.22) yields

g(n) = O( Σ_{j=0}^{log_b n − 1} a^j (n/b^j)^(log_b a − ε) ).     (4.23)

We bound the summation within the O-notation by factoring out terms and simplifying, which leaves an increasing geometric series:

Σ_{j=0}^{log_b n − 1} a^j (n/b^j)^(log_b a − ε)
  = n^(log_b a − ε) Σ_{j=0}^{log_b n − 1} (ab^ε / b^(log_b a))^j
  = n^(log_b a − ε) Σ_{j=0}^{log_b n − 1} (b^ε)^j
  = n^(log_b a − ε) ((b^(ε log_b n) − 1) / (b^ε − 1))
  = n^(log_b a − ε) ((n^ε − 1) / (b^ε − 1)).

Since b and ε are constants, we can rewrite the last expression as n^(log_b a − ε) O(n^ε) = O(n^(log_b a)). Substituting this expression for the summation in equation (4.23) yields

g(n) = O(n^(log_b a)),

thereby proving case 1.

Because case 2 assumes that f(n) = Θ(n^(log_b a)), we have that f(n/b^j) = Θ((n/b^j)^(log_b a)). Substituting into equation (4.22) yields

g(n) = Θ( Σ_{j=0}^{log_b n − 1} a^j (n/b^j)^(log_b a) ).     (4.24)

We bound the summation within the Θ-notation as in case 1, but this time we do not obtain a geometric series. Instead, we discover that every term of the summation is the same:

Σ_{j=0}^{log_b n − 1} a^j (n/b^j)^(log_b a)
  = n^(log_b a) Σ_{j=0}^{log_b n − 1} (a / b^(log_b a))^j
  = n^(log_b a) Σ_{j=0}^{log_b n − 1} 1
  = n^(log_b a) log_b n.

Substituting this expression for the summation in equation (4.24) yields

g(n) = Θ(n^(log_b a) log_b n) = Θ(n^(log_b a) lg n),

proving case 2.

We prove case 3 similarly. Since f(n) appears in the definition (4.22) of g(n) and all terms of g(n) are nonnegative, we can conclude that g(n) = Ω(f(n)) for exact powers of b. We assume in the statement of the lemma that af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n. We rewrite this assumption as f(n/b) ≤ (c/a)f(n) and iterate j times, yielding f(n/b^j) ≤ (c/a)^j f(n) or, equivalently, a^j f(n/b^j) ≤ c^j f(n), where we assume that the values we iterate on are sufficiently large. Since the last, and smallest, such value is n/b^(j−1), it is enough to assume that n/b^(j−1) is sufficiently large.

Substituting into equation (4.22) and simplifying yields a geometric series, but unlike the series in case 1, this one has decreasing terms. We use an O(1) term to capture the terms that are not covered by our assumption that n is sufficiently large:

g(n) = Σ_{j=0}^{log_b n − 1} a^j f(n/b^j)
     ≤ Σ_{j=0}^{log_b n − 1} c^j f(n) + O(1)
     ≤ f(n) Σ_{j=0}^{∞} c^j + O(1)
     = f(n) (1/(1 − c)) + O(1)
     = O(f(n)),

since c is a constant. Thus, we can conclude that g(n) = Θ(f(n)) for exact powers of b. With case 3 proved, the proof of the lemma is complete.

We can now prove a version of the master theorem for the case in which n is an exact power of b.

Lemma 4.4
Let a ≥ 1 and b > 1 be constants, and let f(n) be a nonnegative function defined on exact powers of b. Define T(n) on exact powers of b by the recurrence

T(n) = Θ(1)                if n = 1,
       aT(n/b) + f(n)      if n = b^i,

where i is a positive integer. Then T(n) has the following asymptotic bounds for exact powers of b:

1. If f(n) = O(n^(log_b a − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b a)).
2. If f(n) = Θ(n^(log_b a)), then T(n) = Θ(n^(log_b a) lg n).
3. If f(n) = Ω(n^(log_b a + ε)) for some constant ε > 0, and if af(n/b) ≤ cf(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).

Proof  We use the bounds in Lemma 4.3 to evaluate the summation (4.21) from Lemma 4.2. For case 1, we have

T(n) = Θ(n^(log_b a)) + O(n^(log_b a)) = Θ(n^(log_b a)),

and for case 2,

T(n) = Θ(n^(log_b a)) + Θ(n^(log_b a) lg n) = Θ(n^(log_b a) lg n).

For case 3,

T(n) = Θ(n^(log_b a)) + Θ(f(n)) = Θ(f(n)),

because f(n) = Ω(n^(log_b a + ε)).
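The three behaviors of g(n) are easy to observe numerically. In the sketch below, which is illustrative only, the choices a = 4 and b = 2 (so the watershed exponent log_b a = 2) and the three driving functions are assumptions; dividing g(n) by the bound each case of Lemma 4.3 predicts yields ratios that settle near constants.

```python
import math

def g(n, a, b, f):
    """g(n) = sum_{j=0}^{log_b n - 1} a^j * f(n / b^j), for n an exact power of b."""
    total, power_a, size = 0.0, 1, n
    while size > 1:
        total += power_a * f(size)
        power_a *= a
        size //= b
    return total

a, b = 4, 2                                  # watershed exponent log_b a = 2
examples = [
    ("case 1: f(n) = n  ", lambda m: m,      lambda n: n ** 2),                 # -> O(n^2)
    ("case 2: f(n) = n^2", lambda m: m ** 2, lambda n: n ** 2 * math.log2(n)),  # -> Theta(n^2 lg n)
    ("case 3: f(n) = n^3", lambda m: m ** 3, lambda n: n ** 3),                 # -> Theta(f(n))
]
for label, f, bound in examples:
    ratios = [round(g(b ** k, a, b, f) / bound(b ** k), 3) for k in (6, 10, 14)]
    print(label, "  g(n)/bound(n) for n = 2^6, 2^10, 2^14:", ratios)
```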
4.6.2 Floors and ceilings

To complete the proof of the master theorem, we must now extend our analysis to the situation in which floors and ceilings appear in the master recurrence, so that the recurrence is defined for all integers, not for just exact powers of b. Obtaining a lower bound on

T(n) = aT(⌈n/b⌉) + f(n)     (4.25)

and an upper bound on

T(n) = aT(⌊n/b⌋) + f(n)     (4.26)

is routine, since we can push through the bound ⌈n/b⌉ ≥ n/b in the first case to yield the desired result, and we can push through the bound ⌊n/b⌋ ≤ n/b in the second case. We use much the same technique to lower-bound the recurrence (4.26) as to upper-bound the recurrence (4.25), and so we shall present only this latter bound.

We modify the recursion tree of Figure 4.7 to produce the recursion tree in Figure 4.8. As we go down in the recursion tree, we obtain a sequence of recursive invocations on the arguments

n, ⌈n/b⌉, ⌈⌈n/b⌉/b⌉, ⌈⌈⌈n/b⌉/b⌉/b⌉, ...

Let us denote the jth element in the sequence by n_j, where

n_j = n                 if j = 0,
      ⌈n_{j−1}/b⌉       if j > 0.     (4.27)

[Figure 4.8  The recursion tree generated by T(n) = aT(⌈n/b⌉) + f(n). The recursive argument n_j is given by equation (4.27). The tree has depth ⌊log_b n⌋, and the total cost is Θ(n^(log_b a)) + Σ_{j=0}^{⌊log_b n⌋ − 1} a^j f(n_j).]

Our first goal is to determine the depth k such that n_k is a constant. Using the inequality ⌈x⌉ ≤ x + 1, we obtain

n_0 ≤ n,
n_1 ≤ n/b + 1,
n_2 ≤ n/b^2 + 1/b + 1,
n_3 ≤ n/b^3 + 1/b^2 + 1/b + 1,
...

In general, we have

n_j ≤ n/b^j + Σ_{i=0}^{j−1} 1/b^i
    < n/b^j + Σ_{i=0}^{∞} 1/b^i
    = n/b^j + b/(b − 1).

Letting j = ⌊log_b n⌋, we obtain

n_{⌊log_b n⌋} < n/b^{⌊log_b n⌋} + b/(b − 1)
             < n/b^{log_b n − 1} + b/(b − 1)
             = n/(n/b) + b/(b − 1)
             = b + b/(b − 1)
             = O(1),

and thus we see that at depth ⌊log_b n⌋, the problem size is at most a constant. From Figure 4.8, we see that

T(n) = Θ(n^(log_b a)) + Σ_{j=0}^{⌊log_b n⌋ − 1} a^j f(n_j),     (4.28)

which is much the same as equation (4.21), except that n is an arbitrary integer and not restricted to be an exact power of b.

We can now evaluate the summation

g(n) = Σ_{j=0}^{⌊log_b n⌋ − 1} a^j f(n_j)     (4.29)

from equation (4.28) in a manner analogous to the proof of Lemma 4.3. Beginning with case 3, if af(⌈n/b⌉) ≤ cf(n) for n > b + b/(b − 1), where c < 1 is a constant, then it follows that a^j f(n_j) ≤ c^j f(n). Therefore, we can evaluate the sum in equation (4.29) just as in Lemma 4.3. For case 2, we have f(n) = Θ(n^(log_b a)). If we can show that f(n_j) = O(n^(log_b a)/a^j) = O((n/b^j)^(log_b a)), then the proof for case 2 of Lemma 4.3 will go through. Observe that j ≤ ⌊log_b n⌋ implies b^j/n ≤ 1. The bound f(n) = O(n^(log_b a)) implies that there exists a constant c > 0 such that for all sufficiently large n_j,

f(n_j) ≤ c (n/b^j + b/(b − 1))^(log_b a)
       = c (n/b^j (1 + (b^j/n)(b/(b − 1))))^(log_b a)
       = c (n^(log_b a)/a^j) (1 + (b^j/n)(b/(b − 1)))^(log_b a)
       ≤ c (n^(log_b a)/a^j) (1 + b/(b − 1))^(log_b a)
       = O(n^(log_b a)/a^j),

since c(1 + b/(b − 1))^(log_b a) is a constant. Thus, we have proved case 2. The proof of case 1 is almost identical. The key is to prove the bound f(n_j) = O(n^(log_b a − ε)), which is similar to the corresponding proof of case 2, though the algebra is more intricate.

We have now proved the upper bounds in the master theorem for all integers n. The proof of the lower bounds is similar.
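The bound on n_j is easy to verify directly. The sketch below is illustrative only; the choice b = 3 and the sample values of n are assumptions. It generates the sequence of equation (4.27) and checks n_j < n/b^j + b/(b − 1) at every depth, and that the final argument is at most the constant b + b/(b − 1).

```python
import math

def n_sequence(n, b):
    """The arguments n_0, n_1, ... of equation (4.27): repeatedly take the
    ceiling of division by b, down to depth floor(log_b n)."""
    depth = math.floor(math.log(n, b))
    seq = [n]
    for _ in range(depth):
        seq.append(math.ceil(seq[-1] / b))
    return seq

b = 3
for n in (50, 1000, 10 ** 6):
    seq = n_sequence(n, b)
    per_level_ok = all(seq[j] < n / b ** j + b / (b - 1) for j in range(len(seq)))
    print(f"n = {n:7d}   last argument = {seq[-1]}   "
          f"constant bound = {b + b / (b - 1):.2f}   per-level bound holds: {per_level_ok}")
```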
Exercises

4.6-1 ⋆
Give a simple and exact expression for n_j in equation (4.27) for the case in which b is a positive integer instead of an arbitrary real number.

4.6-2 ⋆
Show that if f(n) = Θ(n^(log_b a) lg^k n), where k ≥ 0, then the master recurrence has solution T(n) = Θ(n^(log_b a) lg^(k+1) n). For simplicity, confine your analysis to exact powers of b.

4.6-3 ⋆
Show that case 3 of the master theorem is overstated, in the sense that the regularity condition af(n/b) ≤ cf(n) for some constant c < 1 implies that there exists a constant ε > 0 such that f(n) = Ω(n^(log_b a + ε)).

Problems

4-1 Recurrence examples
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for n ≤ 2. Make your bounds as tight as possible, and justify your answers.

a. T(n) = 2T(n/2) + n^4.
b. T(n) = T(7n/10) + n.
c. T(n) = 16T(n/4) + n^2.
d. T(n) = 7T(n/3) + n^2.
e. T(n) = 7T(n/2) + n^2.
f. T(n) = 2T(n/4) + √n.
g. T(n) = T(n − 2) + n^2.

4-2 Parameter-passing costs
Throughout this book, we assume that parameter passing during procedure calls takes constant time, even if an N-element array is being passed. This assumption is valid in most systems because a pointer to the array is passed, not the array itself. This problem examines the implications of three parameter-passing strategies:

1. An array is passed by pointer. Time = Θ(1).
2. An array is passed by copying. Time = Θ(N), where N is the size of the array.
3. An array is passed by copying only the subrange that might be accessed by the called procedure. Time = Θ(q − p + 1) if the subarray A[p..q] is passed.

a. Consider the recursive binary search algorithm for finding a number in a sorted array (see Exercise 2.3-5). Give recurrences for the worst-case running times of binary search when arrays are passed using each of the three methods above, and give good upper bounds on the solutions of the recurrences. Let N be the size of the original problem and n be the size of a subproblem.

b. Redo part (a) for the MERGE-SORT algorithm from Section 2.3.1.

4-3 More recurrence examples
Give asymptotic upper and lower bounds for T(n) in each of the following recurrences. Assume that T(n) is constant for sufficiently small n. Make your bounds as tight as possible, and justify your answers.

a. T(n) = 4T(n/3) + n lg n.
b. T(n) = 3T(n/3) + n/lg n.
c. T(n) = 4T(n/2) + n^2 √n.
d. T(n) = 3T(n/3 − 2) + n/2.
e. T(n) = 2T(n/2) + n/lg n.
f. T(n) = T(n/2) + T(n/4) + T(n/8) + n.
g. T(n) = T(n − 1) + 1/n.
h. T(n) = T(n − 1) + lg n.
i. T(n) = T(n − 2) + 1/lg n.
j. T(n) = √n T(√n) + n.

4-4 Fibonacci numbers
This problem develops properties of the Fibonacci numbers, which are defined by recurrence (3.22). We shall use the technique of generating functions to solve the Fibonacci recurrence. Define the generating function (or formal power series) F as

F(z) = Σ_{i=0}^{∞} F_i z^i
     = 0 + z + z^2 + 2z^3 + 3z^4 + 5z^5 + 8z^6 + 13z^7 + 21z^8 + ⋯,

where F_i is the ith Fibonacci number.

a. Show that F(z) = z + zF(z) + z^2 F(z).

b. Show that

F(z) = z / (1 − z − z^2)
     = z / ((1 − φz)(1 − φ̂z))
     = (1/√5) (1/(1 − φz) − 1/(1 − φ̂z)),

where

φ = (1 + √5)/2 = 1.61803...   and   φ̂ = (1 − √5)/2 = −0.61803....

c. Show that

F(z) = Σ_{i=0}^{∞} (1/√5)(φ^i − φ̂^i) z^i.

d. Use part (c) to prove that F_i = φ^i/√5 for i > 0, rounded to the nearest integer. (Hint: Observe that |φ̂| < 1.)
4-5 Chip testing
Professor Diogenes has n supposedly identical integrated-circuit chips that in principle are capable of testing each other. The professor's test jig accommodates two chips at a time. When the jig is loaded, each chip tests the other and reports whether it is good or bad. A good chip always reports accurately whether the other chip is good or bad, but the professor cannot trust the answer of a bad chip. Thus, the four possible outcomes of a test are as follows:

Chip A says    Chip B says    Conclusion
B is good      A is good      both are good, or both are bad
B is good      A is bad       at least one is bad
B is bad       A is good      at least one is bad
B is bad       A is bad       at least one is bad

a. Show that if more than n/2 chips are bad, the professor cannot necessarily determine which chips are good using any strategy based on this kind of pairwise test. Assume that the bad chips can conspire to fool the professor.

b. Consider the problem of finding a single good chip from among n chips, assuming that more than n/2 of the chips are good. Show that ⌊n/2⌋ pairwise tests are sufficient to reduce the problem to one of nearly half the size.

c. Show that the good chips can be identified with Θ(n) pairwise tests, assuming that more than n/2 of the chips are good. Give and solve the recurrence that describes the number of tests.

4-6 Monge arrays
An m × n array A of real numbers is a Monge array if for all i, j, k, and l such that 1 ≤ i