# Lecture 12 -- Optimizing AddLast and AddFirst on ArrayLists

## Motivating Question

How efficient can we make `addLast` on array-based lists? Can we make `addFirst` just as efficient?

## Conceptual Study Questions

- What is the benefit to making arrays cyclic?
- Does someone using an array-based list class (like `ArrayList` in Java) know whether the array is being treated cyclically (as opposed to flat)?
- How does an amortized analysis help improve a worst-case run-time argument?

## Justifying Resize Policies

So far, you've learned about worst-case running time. When we add an element to an array list via `addLast`, the cost might be constant if the array has an open space, but it could also be linear in the number of elements if the array is full (because we need to copy all of the existing elements to a new, larger array). Thus in the worst case, `addLast` takes linear time.

How often we pay that worst cost, however, depends on how often the array is full. This in turn depends on how many elements we add each time we extend the array. In lecture, we mentioned that it is better to double the capacity of the array each time we resize, rather than to add a constant number of new slots (such as 1 or 2). How can we make a mathematical argument that doubling is better?

### Amortized Worst-Case Analysis

So far, when we've talked about worst-case running time, we've talked about *the cost of calling a method one time*. To understand why doubling is better, it will help to think about the cost of doing a *sequence of calls to `addLast`*, rather than just one. Looking at a sequence of calls is helpful in situations like `addLast` when some calls have a higher run-time than others. In these cases, we **look at the total time for the sequence, then divide that time by the number of calls**. By averaging the costs this way, we *amortize*---or distribute---the cost of the expensive calls across all of the calls.
#### Example 1: amortized analysis with +1 resizing

To see how this is done, let's do an amortized analysis under a policy where we add one space to the array on each resize. In this case, we resize on each call to `addLast`, so we expect the cost of `addLast` to be linear (in the number of elements) on each call. We will show the math for this case, however, because it is initially easier than reasoning about the case where we double the array capacity each time.

Assume we made $k$ calls to `addLast`, for some positive integer $k$. Let's try the worst possible situation: we create an initial array of size 1, and add one space to the array each time we resize. If we then add up the costs across all $k$ calls to `addLast`, we would get

$$1 + (1 + 1) + (2 + 1) + ... + (k-1 + 1)$$

The first $1$ inserts the first element into the array. The $(1 + 1)$ is the cost of copying the $1$ existing element plus the cost of inserting a new element. The $(2 + 1)$ is the cost of copying $2$ existing elements plus the cost of inserting a new element. And so on.

The above expression is the same as

$$k + \sum_{i=1}^{k-1} i$$

You may recall from a math course that the sum term is equivalent to $(k*(k-1))/2$. The total cost of $k$ operations is therefore

$$k + (k*(k-1))/2$$

which is $O(k^2)$. But this was for $k$ calls to `addLast`, not just one, so the cost per call averages out to $O(k^2)/k$, which is $O(k)$ (aka linear). Summarizing, the amortized worst-case run time formula is

$$total\ cost\ of\ k\ calls \over k$$

#### Example 2: amortized analysis with double resizing

What if we *doubled* the array size on each extension? What would change in our computation of total costs? Now, instead of copying all the cells every time (as the $(2 + 1)$ terms accounted for), we'll only copy some of the time. The problem is, we need a way to capture how many copies we do within the total-cost-of-calls expression.
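To sanity-check this analysis, here is a small cost-counting simulation (a sketch; the class and method names are our own, not from the notes). It charges one unit per element copied during a resize and one unit per insert, under the +1 resize policy starting from a size-1 array:

```java
// Counts the total unit cost (copies + inserts) of k calls to addLast
// when the array starts at size 1 and grows by exactly one slot per resize.
public class PlusOneResizeCost {
    public static long totalCostPlusOne(int k) {
        long cost = 0;
        int capacity = 1;
        int eltcount = 0;
        for (int call = 0; call < k; call++) {
            if (eltcount == capacity) {
                capacity = capacity + 1; // +1 resize policy
                cost += eltcount;        // copy every existing element
            }
            cost += 1;                   // insert the new element
            eltcount += 1;
        }
        return cost;
    }
}
```

Printing `totalCostPlusOne(k)` for a few values of $k$ shows the totals growing quadratically in $k$, so the per-call average is indeed linear.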
This expression is easier to set up if we start by thinking about the *eventual* size of the array after making $k$ calls to `addLast`. After $k$ calls, the array must have (at least) $k$ slots. To make things easier, we'll assume that the array has exactly $k$ slots. For simplicity, let's also assume that $k$ is a power of 2.

Knowing that we doubled the array capacity on each resize, if there are $k$ slots now, then we must have done a resize when the array had $k/2$ slots (because doubling $k/2$ would get us to $k$). We also must have done a resize when the array had $k/4$ slots, and before that $k/8$ slots. The total cost of copying would therefore be:

$$k/2 + k/4 + k/8 + ... + 1$$

In addition, for each of the $k$ calls, we spent $O(1)$ to insert the new element into the array. So the total cost of $k$ calls is:

$$k + k/2 + k/4 + k/8 + \ldots + 1$$

This expression sums to $2*k - 1$ (again, we assume you saw this geometric sum in a math class at some point). Now, we divide this cost by the number of calls:

$$(2*k - 1) \over k$$

This is less than 2, so the amortized cost for each call is a constant, and `addLast` is amortized $O(1)$.

#### Amortized analysis is not the same as average-case analysis

This is a fine point, but worth being clear about with respect to terminology. If you go on to an algorithms course, you will learn about "average case" analysis (as opposed to "worst case" analysis). Average case is roughly "what would happen on regular or average input, rather than worst-possible input". In contrast, the average that we computed here was over a sequence of *calls* to the method: it was not about variations in the input data in a single call to the method. Amortized analysis, as the name suggests, is about distributing the cost of an expensive instance of a computation over the less expensive instances. `addLast` is still linear time in the worst case. The amortized analysis is a way of improving on that worst-case estimate.
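Returning to the doubling policy of Example 2, the same kind of cost-counting simulation (again a sketch with names of our own choosing) confirms the constant amortized bound:

```java
// Counts the total unit cost (copies + inserts) of k calls to addLast
// when the array starts at size 1 and doubles its capacity on each resize.
public class DoublingResizeCost {
    public static long totalCostDoubling(int k) {
        long cost = 0;
        int capacity = 1;
        int eltcount = 0;
        for (int call = 0; call < k; call++) {
            if (eltcount == capacity) {
                capacity = capacity * 2; // doubling policy
                cost += eltcount;        // copy every existing element
            }
            cost += 1;                   // insert the new element
            eltcount += 1;
        }
        return cost;
    }
}
```

For $k$ a power of two, the total comes out to just under $2*k$ units, i.e., fewer than 2 units per call on average.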
It also gives us a mathematical way to do a finer-grained comparison of two different resize policies, each of which yields the same worst-case behavior.

## Providing `addFirst`

Phew, now that we have `addLast` working efficiently, let's go back and support `addFirst`. Since arrays need the items to be consecutive in memory, adding to the front of the array seems to require moving all of the elements down one space, then putting the new element at the top. This is again linear time (because, again, we need to copy all of the existing items to their new spots).

Or maybe not. What if we left ourselves some blank space at the front of the array to add new first elements? In the figure below (left), we set up an array of size 8 with the start at index 3. We use `addLast` to add four names, then we use `addFirst` to add "Annie". By putting the start in the middle of the array, we have room to do that, as shown in the right array. Note that we still have room on both ends of the array (where the `null` values are).

![illustration of start of array in center](https://docs.cs200.io/uploads/bf3a0e36-26a7-4f02-bdca-53bd6f7d8d63.png)

Let's keep going! Let's add two more names to the end of our array. Now what happens? Notice that the `end` marker has run off the end of the array, which means that there's no room for the name "Peter". We have to resize the array to fit it in.

![running off the end of the array](https://docs.cs200.io/uploads/f958e4d1-e900-41b9-97fb-b18c7f912926.png)

Or do we? Notice that we still have some extra space in the top two positions of the array. Could we somehow use those by "wrapping around" the end into the unused space at the top? We sure can! We just have to adjust the `end` marker so that it wraps around to the unused spaces:

![wrapping around the array edges](https://docs.cs200.io/uploads/3f86e67f-f6f0-4eb8-ad71-ddfe616c230e.png)

Let's focus on understanding this conceptually before we turn to the code.
It might help to think of the top and the bottom of the array "glued together" into a cylinder (or gear) of slots: we're just rotating the slots backwards to make room for the new element. It's as if there were a phantom index 8 that actually lies in position 0 and a phantom index 9 that actually lies in position 1. In the above picture, we've used red italic numbers on the right of the array to label the conceptual positions in the list, showing how they differ from the actual array indices on the left.

With this approach, we can allow addition on both ends of the array, while also making sure we've used all available space before extending the array with a costly resize.

## Implementing WrapAround Arrays

Let's look at how our code changes to allow this.

### The `get` method

Let's start with the `get` method. Before we supported wrapping around, that method looked like:

```=java=
public String get(int index) {
    if ((index >= 0) && (index < this.capacity)) {
        return this.theArray[index];
    }
    throw new IllegalArgumentException("array index " + index + " out of bounds");
}
```

Someone using our code might ask for the first name in the list (which should be Annie). Since they are asking for the first name, they will write `names.get(0)` (remember, we count positions from 0). We know that the list actually starts at index 2, however. So in response to `names.get(0)`, we should return `theArray[2]`. If the user asks for `names.get(1)`, we should return `theArray[3]`. And so on.

Generally speaking, our code has to convert from the position the user wants to the array index where that position is actually stored. We do that by adjusting the position that the user has asked for by adding the value of `start`:

```=java=
this.theArray[position + start]
```

What should happen if the user calls `names.get(6)`? Our current expression would then look up `theArray[6 + 2]`. But there is no index 8 into our array -- this would give an "out of bounds" error!
We need a way to compute the correct index while "wrapping around". A request for position 6 should retrieve the value at index 0 (according to the red labels on the right of the above picture---the italic red 6 is actually at index 0). If you remember modular arithmetic, that's all we need here. If you are unfamiliar or rusty with this concept, it boils down to the remainder under integer division.

Consider `position + start` where `position` is 6 and `start` is 2. Naively, we would ask for `theArray[8]`. The max index within the array is 7 (one less than the capacity of the array). Let's divide the naive index by the size of the array (here, $8/8$). The remainder in this division is 0. And that's the index of the desired element! If instead we had wanted the element in position 7, we would compute the remainder of $(7 + 2) / 8$, which is 1 (the index corresponding to position 7).

The remainder under integer division comes up frequently enough in programming (in cases such as this with arrays) that languages build in an operator, called *modulo*, for computing it. In Java, this is written with a percent sign. Our `get` method therefore needs to look like:

```=java=
public String get(int position) {
    if ((position >= 0) && (position < this.capacity)) {
        // return this.theArray[index];
        return this.theArray[(position + start) % this.capacity];
    }
    throw new IllegalArgumentException("array position " + position + " out of bounds");
}
```

Why did we change the term from "index" to "position" here? Because we have two different kinds of indices: the ones into the underlying array (which only the `ArrList` class knows) and the positions in the list (which the programmer who is using the class knows). We're going to use the term "position" for the programmer perspective, and the term "index" for the array perspective, to try to keep those distinct.
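As a quick check on the arithmetic, the position-to-index conversion can be isolated into a tiny helper (a standalone sketch; the class and method names are ours) and run against the worked numbers above:

```java
// Maps a list position to an array index in a cyclic array whose
// first element lives at array index `start`.
public class PositionToIndex {
    public static int toIndex(int position, int start, int capacity) {
        return (position + start) % capacity;
    }
}
```

With `start` at 2 in an 8-slot array, position 6 lands at index 0 and position 7 at index 1, matching the red labels in the picture.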
**Stop and Think:** Now that we have modular arithmetic, do we still need the statement that checks whether the position is within the size of the array?

### `addLast`

The wrap-around adjustment that we did using modular arithmetic in `get` has to be done in any part of the code that deals with indices into the array: if we might move off the edge of the array (on either side), we have to use modulo to put our indices back within the valid indices of the array. Both `addLast` and `addFirst` do such an adjustment when they adjust the `end` and `start` fields, respectively. Here's our updated `addLast` method:

```=java=
public void addLast(String newItem) {
    if (this.isFull()) {
        // add capacity to the array
        this.resize(this.capacity * 2);
        // now that the array has room, add the item
        this.addLast(newItem);
    } else {
        if (!(this.isEmpty())) {
            this.end = (this.end + 1) % this.capacity;
        }
        this.eltcount = this.eltcount + 1;
        this.theArray[this.end] = newItem;
    }
}
```

The change is on line 9. Rather than just add 1 to the `end` value, we add 1 then use `%` (modulo) to make sure the new value of `end` is within the array.

### `addFirst`

The `addFirst` method needs a similar adjustment on the `start` field. When adjusting `start` to add an element, we might expect to write

```=java=
start = (start - 1) % this.capacity;
```

If you write this, you will (probably) be surprised to get an array out of bounds error reporting that `start` is -1 when it moves off the top edge of the array. This is an artifact of how Java handles the `%` operator on a negative number: if `start` is 0, the above code will produce -1, even though that isn't the correct answer mathematically (it's the difference between implementing remainder and modulo). To fix this, we leverage the fact that for any number $n$, both $n$ and $n + c$ have the same remainder when divided by $c$.
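A quick standalone check (a sketch; the class and method names are ours) of Java's remainder behavior shows why the extra `+ this.capacity` is needed:

```java
// Demonstrates that Java's % operator yields a negative remainder for a
// negative left operand, and how adding the capacity first fixes the index.
public class NegativeModDemo {
    public static int naiveWrap(int start, int capacity) {
        return (start - 1) % capacity;              // can be negative!
    }

    public static int safeWrap(int start, int capacity) {
        return ((start - 1) + capacity) % capacity; // always in [0, capacity)
    }
}
```

With `start` 0 and capacity 8, `naiveWrap` produces -1 while `safeWrap` correctly wraps to 7. (Java also provides `Math.floorMod`, which computes the true mathematical modulo directly.)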
Thus, we write:

```=java=
start = ((start - 1) + this.capacity) % this.capacity;
```

Here's the complete `addFirst` method:

```=java=
public void addFirst(String newItem) {
    if (this.isFull()) {
        // add capacity to the array
        this.resize(this.capacity * 2);
        // now that the array has room, add the item
        this.addFirst(newItem);
    } else {
        if (!(this.isEmpty())) {
            this.start = ((this.start - 1) + this.capacity) % this.capacity;
        }
        this.eltcount = this.eltcount + 1;
        this.theArray[this.start] = newItem;
    }
}
```
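Putting the pieces together, here is one way the full cyclic `ArrList` might look. This is a sketch: the constructor, `isEmpty`, `isFull`, and `resize` are not shown in the notes, so the versions below are our own assumptions. In particular, this `resize` copies the elements in list order so that `start` goes back to index 0, and `get` checks positions against `eltcount` rather than `capacity` (one possible answer to the Stop and Think question above).

```java
// A sketch of a complete cyclic array list of Strings. The constructor,
// isEmpty, isFull, and resize are assumptions, not taken from the notes.
public class ArrList {
    private String[] theArray;
    private int capacity;  // length of theArray
    private int start;     // array index of list position 0
    private int end;       // array index of the last element
    private int eltcount;  // number of elements currently stored

    public ArrList(int initialCapacity) {
        this.capacity = initialCapacity;
        this.theArray = new String[initialCapacity];
        this.start = 0;
        this.end = 0;
        this.eltcount = 0;
    }

    public boolean isEmpty() { return this.eltcount == 0; }

    public boolean isFull()  { return this.eltcount == this.capacity; }

    // Copy the elements, in list order, into a larger array; the copy
    // "unwraps" the list so that start returns to index 0.
    private void resize(int newCapacity) {
        String[] bigger = new String[newCapacity];
        for (int pos = 0; pos < this.eltcount; pos++) {
            bigger[pos] = this.theArray[(this.start + pos) % this.capacity];
        }
        this.theArray = bigger;
        this.capacity = newCapacity;
        this.start = 0;
        this.end = (this.eltcount == 0) ? 0 : this.eltcount - 1;
    }

    public String get(int position) {
        if ((position >= 0) && (position < this.eltcount)) {
            return this.theArray[(position + this.start) % this.capacity];
        }
        throw new IllegalArgumentException("list position " + position + " out of bounds");
    }

    public void addLast(String newItem) {
        if (this.isFull()) {
            this.resize(this.capacity * 2);
            this.addLast(newItem);
        } else {
            if (!(this.isEmpty())) {
                this.end = (this.end + 1) % this.capacity;
            }
            this.eltcount = this.eltcount + 1;
            this.theArray[this.end] = newItem;
        }
    }

    public void addFirst(String newItem) {
        if (this.isFull()) {
            this.resize(this.capacity * 2);
            this.addFirst(newItem);
        } else {
            if (!(this.isEmpty())) {
                this.start = ((this.start - 1) + this.capacity) % this.capacity;
            }
            this.eltcount = this.eltcount + 1;
            this.theArray[this.start] = newItem;
        }
    }
}
```

A short usage run: after `addLast("Bonnie")`, `addLast("Clyde")`, and `addFirst("Annie")` on a capacity-4 list, `get(0)` returns "Annie" even though that string physically sits at array index 3.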