I seem more motivated to write about my failures. I sucked at round 3, and I immediately started writing this post.
And… for a failure, I will use a different format to describe what the challenge was and how exactly I failed at it.
Here is a rough timeline for what I was doing during the competition:
- 00:00 – 00:20: read the four problems and thought about them briefly. In the end I concluded that none of them looked easy, but that I should start with the first problem.
- 00:20 – 01:10: solve problem A.
- 01:10 – 01:20: solve the easy case of problem B.
- 01:20 – 02:00: solve the hard case of problem B, getting a Wrong Answer.
- 02:00 – 02:30: solve the easy case of problem C, while also debugging problem B.
In the end my solutions for both problem A and problem B worked, and the easy case of problem C was never going to be enough to get me into the top 25.
Problem A Zillionim
Zillionim is a turn-based game for two players. Initially, 10^12 coins are arranged end-to-end in a single line, numbered from 1 to 10^12 from left to right. During a turn, a player must select 10^10 consecutive coins and remove them. Two coins that were not originally consecutive do not become consecutive even if all of the coins in between them are removed.
On their turn, a player makes a valid move if possible, and then it is their opponent’s turn. If a player cannot make a valid move on their turn, they lose the game (and the opponent wins the game).
Because our engineers are still hard at work training our machine learning model to play Zillionim, we have created a simple AI that plays Zillionim by making random moves. The AI always gets the first turn. On each of the AI’s turns, the AI determines all valid moves and chooses one of them uniformly at random.
Can you beat this AI… at least most of the time (499 out of 500 games for the hardest case)?
The first part of solving this problem is writing a naïve solution. It involves all kinds of boilerplate code you are going to write anyway, so in some sense a naïve solution is one or two lines of naïve code plus lots of reusable, useful stuff.
A naïve solution also provides the right perspective on the problem. In this problem, the current state of the game is a set of intervals, each of length at least 10^10. A move picks a breaking point in an interval, and depending on where you pick and how long the interval is, the interval might break in two, shorten, or disappear.
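A minimal sketch of that state update, assuming intervals are stored as inclusive (start, end) pairs; the names here are my own for illustration, not from any actual solution code:

```python
REMOVE = 10**10  # coins removed per move

def apply_move(intervals, pick):
    """Remove REMOVE coins starting at position `pick` and return the new
    interval set. Pieces shorter than REMOVE can never be played in again,
    so they are dropped from the state entirely."""
    new_intervals = []
    for lo, hi in intervals:
        if not (lo <= pick and pick + REMOVE - 1 <= hi):
            new_intervals.append((lo, hi))   # untouched interval
            continue
        left = (lo, pick - 1)                # piece left of the removed run
        right = (pick + REMOVE, hi)          # piece right of the removed run
        for piece in (left, right):
            if piece[1] - piece[0] + 1 >= REMOVE:   # still playable
                new_intervals.append(piece)
    return new_intervals
```

Starting from the single interval (1, 10^12), a pick at position 1 shortens it, while a pick in the middle splits it in two.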
And it is also not hard to see that all intervals with length between 10^10 and 2×10^10 − 1 are functionally the same: picking anywhere in such an interval eliminates it. Let’s call such an interval “Type I”.
Intervals with length between 2×10^10 and 3×10^10 − 2, on the other hand, can either be eliminated or shortened to a Type I interval. Let’s call such an interval “Type II”.
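The classification follows directly from the arithmetic: eliminating an interval of length L requires both leftover pieces to be shorter than 10^10, and the pieces sum to L − 10^10. A tiny sketch (function name mine):

```python
REMOVE = 10**10  # coins removed per move

def classify(length):
    """Classify a playable interval (length >= REMOVE) as used in the
    analysis: Type I is always eliminated by any pick; Type II can be
    either eliminated or shortened to a Type I."""
    if length < 2 * REMOVE:
        return "Type I"    # both leftover pieces are always too short
    if length <= 3 * REMOVE - 2:
        return "Type II"   # pieces sum to at most 2*REMOVE - 2, so a pick
                           # splitting them evenly enough eliminates it
    return "long"
```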
A key observation is that you can produce short intervals of specific lengths, and the random AI is very unlikely to mess with your plan, since it is usually busy with the longer intervals. The end result is that when the original long sequence finally turns into a set of short intervals, the lengths of about half of them were decided by you.
The magic interval length I use is 3×10^10 − 2. What’s special about it? When the AI picks a breaking point in such an interval, it almost always shortens it to a Type I. On the other hand, I still have both options, meaning that I can still directly eliminate it when I need to do so.
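The “almost always” can be made precise with a little arithmetic: of the 2×10^10 − 1 possible picks in a magical interval, exactly one eliminates it (the pick that leaves two pieces of length 10^10 − 1 each); every other pick leaves exactly one Type I. A sketch, with names of my own choosing:

```python
REMOVE = 10**10
MAGIC = 3 * REMOVE - 2   # the magic interval length

def outcome_counts(length):
    """For an interval of `length` coins, count how many of the possible
    picks eliminate it outright versus leave something playable.
    O(1) arithmetic instead of looping over ~2*10**10 picks."""
    picks = length - REMOVE + 1
    # A pick at offset s (0-based) leaves pieces of sizes s and
    # length - REMOVE - s.  The interval is eliminated iff both pieces
    # are shorter than REMOVE:
    #   length - 2*REMOVE + 1 <= s <= REMOVE - 1
    lo = max(0, length - 2 * REMOVE + 1)
    hi = min(picks - 1, REMOVE - 1)
    eliminating = max(0, hi - lo + 1)
    return eliminating, picks - eliminating
```

So the random AI eliminates a magical interval with probability 1/(2×10^10 − 1), which is negligible over a few hundred games.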
My early-game strategy is to pump out as many magical intervals as I can. The middle-game strategy is to get rid of all non-magical Type II intervals, because the result is a simpler situation that is easier to analyze.
Now we arrive at the late game, already shaped by our early- and middle-game strategies. In the late game, each interval is either Type I or magical. Assuming the AI always takes the high-probability option when dealing with magical intervals, simple combinatorial game theory tells us that a position is losing if and only if the number of Type I intervals is even. As long as at least one magical interval remains, however, we can effectively “cheat” by invoking the elimination option of a magical interval, turning a losing position for us into a losing position for the random AI.
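The parity claim for the pure-Type-I endgame is easy to verify by brute force, since any pick in a Type I interval eliminates exactly one interval:

```python
def wins(type_i):
    """True iff the player to move wins a position consisting solely of
    `type_i` Type I intervals, where every move removes exactly one."""
    if type_i == 0:
        return False              # no valid move left: the player to move loses
    return not wins(type_i - 1)   # the only kind of move removes one interval
```

The recursion confirms that the player to move loses exactly when the count is even.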
All in all, this is an incredibly simple solution, because it basically throws all the complexity away and deals with what remains. It also turns out to be very reliable: I simulated 5,000 games and it won all of them. This is probably why this problem is the first problem, and why it has the lowest total score.
Problem B Pancake Pyramid
You have just finished cooking for some diners at the Infinite House of Pancakes. There are S stacks of pancakes in all, and you have arranged them in a line, such that the i-th stack from the left (counting starting from 1) has Pi pancakes.
Your supervisor was about to bring out the stacks to the customers, but then it occurred to her that a picture of the stacks might make for a good advertisement. However, she is worried that there might be too many stacks, so she intends to remove the L leftmost stacks and the R rightmost stacks, where L and R are nonnegative integers such that L + R ≤ S – 3. (Notice that at least 3 stacks of pancakes will remain after the removal.)
Your supervisor also thinks the remaining stacks will look aesthetically pleasing if they have the pyramid property. A sequence of N stacks of heights H1, H2, … , HN has the pyramid property if there exists an integer j (1 ≤ j ≤ N) such that H1 ≤ H2 ≤ … ≤ Hj-1 ≤ Hj and Hj ≥ Hj+1 ≥ … ≥ HN-1 ≥ HN. (It is possible that this sequence might not look much like a typical “pyramid” — a group of stacks of the same size has the pyramid property, and so does a group in which the stack heights are nondecreasing from left to right, among other examples.)
Note that the sequence of stacks remaining after your supervisor removes the L leftmost and R rightmost stacks might not yet have the pyramid property… but you can fix that by adding pancakes to one or more of the stacks! The pyramidification cost of a sequence of stacks is the minimum total number of pancakes that must be added to stacks to give the sequence the pyramid property.
While your supervisor is carefully deciding which values of L and R to choose, you have started to wonder what the sum of the pyramidification costs over all valid choices of L and R is. Compute this sum, modulo the prime 10^9 + 7 (1000000007).
The annoying fact is that I failed to think of a solution that works for the hard test case. I roughly knew the solution for the easy case, and that was it.
Before venturing deeper, note that the “pyramidification cost” is essentially the same thing as the volume of water that would be trapped by a container shaped like the histogram of the remaining stacks. They are very different ways to describe the same mathematics, but some people, like me, find the water metaphor easier to work with than the pyramidification metaphor.
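The equivalence can be sanity-checked with two independent brute-force implementations: one that tries every peak position, following the problem definition, and one that fills each stack up to the lower of the two running maxima (the water level). The function names are mine:

```python
def pyramid_cost(h):
    """Minimum pancakes to add, per the definition: try every peak j,
    raise h[0..j-1] to be nondecreasing, h[j+1..] to be nonincreasing,
    and the peak itself above both neighbors."""
    best = float("inf")
    n = len(h)
    for j in range(n):
        left = 0
        level = 0
        for k in range(j):                 # make h[0..j-1] nondecreasing
            level = max(level, h[k])
            left += level - h[k]
        lmax = level
        right = 0
        level = 0
        for k in range(n - 1, j, -1):      # make h[j+1..] nonincreasing
            level = max(level, h[k])
            right += level - h[k]
        rmax = level
        peak = max(lmax, rmax, h[j])       # peak must top both sides
        best = min(best, left + right + peak - h[j])
    return best

def water_cost(h):
    """Trapped-water formulation: each stack rises to the lower of the
    running maxima on its two sides."""
    n = len(h)
    pref = [0] * n
    suf = [0] * n
    m = 0
    for i in range(n):
        m = max(m, h[i])
        pref[i] = m
    m = 0
    for i in range(n - 1, -1, -1):
        m = max(m, h[i])
        suf[i] = m
    return sum(min(pref[i], suf[i]) - h[i] for i in range(n))
```

Both agree on every input, which is the point of the metaphor.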
Only when I started to write the O(n^2) solution did it turn out to be not so straightforward. I split the water volume into two kinds: “ascending” and “non-ascending”. They are not exactly symmetric, because we need to take into account the cases where there are multiple highest points, but for the sake of describing the algorithm the difference is minor. Therefore, I will only describe the ascending part.
Actually, splitting the problem into two parts is the hard part; dealing with each part is easy. First, enumerate all starting points, or, using the problem’s terminology, enumerate L. For each starting point, enumerate all ending points in ascending order, maintaining the maximum of the current interval as well as its “current water volume”. The “current water volume” only needs to be updated when the maximum changes.
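A sketch of the ascending half of that sweep, counting every window for simplicity (the contest only sums windows of length at least 3, and the non-ascending half with its tie handling is omitted). “Ascending” water here means the water strictly to the left of a window’s leftmost maximum:

```python
def ascending_sum(h):
    """Sum, over every window h[i..j], of the water sitting strictly to
    the left of the window's leftmost maximum.  O(n^2): for each fixed
    start, the volume only grows when a new strict maximum appears."""
    n = len(h)
    total = 0
    for i in range(n):
        cur_max = h[i]
        pend_cnt = pend_sum = 0   # elements seen after the current maximum
        asc = 0                   # ascending water of the current window
        for j in range(i + 1, n):
            if h[j] > cur_max:
                # everything pending fills up to the old maximum
                asc += cur_max * pend_cnt - pend_sum
                cur_max = h[j]
                pend_cnt = pend_sum = 0
            else:
                pend_cnt += 1
                pend_sum += h[j]
            total += asc
        # the length-1 window h[i..i] contributes 0, so it is skipped
    return total
```

For example, on [2, 1, 3] only the full window traps ascending water (one unit above the middle stack), and on [1, 3, 2, 4] the two windows ending at the 4 each trap one unit.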
O(n log n) solution
Once I had finished the O(n^2) solution, it became apparent that there were optimization opportunities. The water volume of the current interval is only updated when we encounter a new maximal height, and by “updated” we mean “added to”.
Similarly, the computations from different starting points largely overlap.
Taking this perspective, consider a single “ascending lake”. It turns out we know exactly how many times its volume gets counted! If the lake lies between positions i and j, then any interval that contains the whole lake, and contains nothing before i that is higher than or equal to the height at i, counts the volume of the lake exactly once.
To calculate the total contribution of this lake, we need, for each position, the leftmost position to its right that is higher than it, and the rightmost position to its left that is not lower than it. Both can be calculated in O(n log n) time with the help of an interval tree.
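For what it’s worth, both arrays can also be computed in O(n) with a monotonic stack, a common alternative to an interval tree for nearest-greater queries; a sketch (function names mine):

```python
def next_strictly_higher(h):
    """For each index i, the smallest j > i with h[j] > h[i], or None."""
    res = [None] * len(h)
    stack = []                        # indices with nonincreasing heights
    for j, x in enumerate(h):
        while stack and h[stack[-1]] < x:
            res[stack.pop()] = j      # j is the first strictly higher one
        stack.append(j)
    return res

def prev_not_lower(h):
    """For each index i, the largest j < i with h[j] >= h[i], or None."""
    res = [None] * len(h)
    stack = []
    for i, x in enumerate(h):
        while stack and h[stack[-1]] < x:
            stack.pop()               # strictly lower: can never answer i
        if stack:
            res[i] = stack[-1]
        stack.append(i)
    return res
```

Each index is pushed and popped at most once, hence the linear bound.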
Problem C Datacenter Duplex
Two companies, Apricot Rules LLC and Banana Rocks Inc., are sharing the same datacenter. The datacenter is a matrix of R rows and C columns, with each cell containing a single server tower. Each tower contains intellectual property belonging to exactly one of the two companies.
At first, they built walls on the edges between cells assigned to different companies. This allowed orthogonally adjacent cells belonging to the same company to remain connected. Also, two cells x and y are considered connected if x is connected to a cell that is, directly or indirectly, connected to y. With this definition, it was possible that two cells assigned to the same company were not connected, which was unacceptable.
Both companies agreed to build narrow hallways running through cell corners that allow two diagonally adjacent cells to be connected directly. Let us write (i, j) to represent the cell at row i and column j. At most one narrow hallway can be built through any given vertex, which means either (i, j) and (i + 1, j + 1) can be connected, or (i + 1, j) and (i, j + 1) can be connected, or neither pair, but not both. Of course, only hallways between cells assigned to the same company can be built.
Given a matrix where each cell is labeled A or B depending on which company it is assigned to, find a way to add connections through diagonal adjacencies such that all As are connected and all Bs are connected.
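Even without a solving strategy, a candidate answer is easy to verify with a union-find pass. A sketch, assuming a hypothetical answer format of my own choosing: `diag[i][j]` describes the vertex shared by cells (i, j), (i, j+1), (i+1, j), (i+1, j+1), with `'\'` connecting (i, j)–(i+1, j+1), `'/'` connecting (i+1, j)–(i, j+1), and `'.'` meaning no hallway (so at most one hallway per vertex by construction):

```python
def check(grid, diag):
    """Verify that, given the orthogonal adjacencies plus the chosen
    diagonal hallways, all A cells form one component and all B cells
    form one component.  `diag` is a hypothetical (R-1) x (C-1) grid of
    '\\', '/', or '.' characters, one per interior vertex."""
    R, C = len(grid), len(grid[0])
    parent = list(range(R * C))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for i in range(R):
        for j in range(C):
            # orthogonally adjacent cells of the same company are connected
            if i + 1 < R and grid[i][j] == grid[i + 1][j]:
                union(i * C + j, (i + 1) * C + j)
            if j + 1 < C and grid[i][j] == grid[i][j + 1]:
                union(i * C + j, i * C + j + 1)
            # at most one hallway through each interior vertex
            if i + 1 < R and j + 1 < C:
                d = diag[i][j]
                if d == '\\' and grid[i][j] == grid[i + 1][j + 1]:
                    union(i * C + j, (i + 1) * C + (j + 1))
                if d == '/' and grid[i + 1][j] == grid[i][j + 1]:
                    union((i + 1) * C + j, i * C + (j + 1))

    # every company's cells must end up in a single component
    for ch in 'AB':
        roots = {find(i * C + j) for i in range(R) for j in range(C)
                 if grid[i][j] == ch}
        if len(roots) > 1:
            return False
    return True
```

On the third sample input below, two backslash hallways (one joining the stranded B, one joining the stranded A) suffice, while doing nothing leaves both companies disconnected.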
My (Lack of) Solution
I had no idea how to handle the hard test case. The easy test case has at most 4 rows, which enables some contrived dynamic programming that stores the connectivity in its state. I tried that and failed to get it done in 30 minutes.
My thoughts on the problem are that it is probably easier than it looks. The fact that it is a planar grid really keeps its complexity in check. Here is the third sample input:
BBAA
BABA
BBAA
Funnily, I have been drawing more test cases on paper, and none of them are essentially more complex than this one. I think the correct route for this problem is to take this observation and do something with it. The solution should be something like greedy (in that it always takes one option when the opposite one is obviously useless) plus construction (in that after the greedy parts are done, the problem should boil down to a few patterns that can be solved by construction).
I will not write about the last problem: attempting it without finishing problem C would have been a huge mistake, and I did not interact with the problem in any meaningful way.
I feel I started incredibly slowly. Reading all the problems and picking one of them is a good strategy when I can solve at least one of them quickly. Otherwise, I might as well just dive into the first problem and solve it while warming myself up. Of course, there is also the question of whether I should write some code before the contest to warm up, but unfortunately I find that I tire rather quickly in coding competitions. If I have to make a trade-off between the two, then yes, I still feel I should have warmed up before the contest, as my mental stamina isn’t that bad.
I think I used the correct approach for problem A, but it turned out that I spent an incredible 40 minutes between implementing the naïve solution and the final solution. Some of that time was probably spent thinking about problem B, but that was itself a mistake: I should have focused on problem A anyway.
For problem B, it turned out that writing down the solution for the easy case helped me with the hard case. In other words, I knew how to solve the easy case, but I had not thought it through clearly on paper, and so I had to think about the same problem again. If I had analyzed it on paper as clearly from the start as I did when writing the easy-case solution, I might have saved valuable time for problem C.
… and then there is problem C. I should have noticed that solving the easy case was unlikely to help, especially when an easy-case-only solution is likely to be very contrived and to contain little to no code reusable for the hard case.
Instead, I panicked when I realized how much time I had wasted on problem A, and tried hard to grab any points I could. It proved futile, and detrimental to my overall chance of getting into the top 25. Which, to be honest, was never likely to happen anyway.