Manual Score Adjustments

The second Beginners Contest of 2020, based on the Classics Collection of this blog, lies behind us. I consider the contest a success – in the sense that participation and results were comparable to those of the previous contest (it was my intention to design the puzzles to be similar in difficulty). There is something on my mind, though.

In this contest, as in previous ones, there were a considerable number of incorrect solution code submissions. Naturally, some of them originate from genuine solving mistakes. Others, I believe, are due to either inattentiveness or miscomprehension of the instructions regarding the solution code. Simply put, the participant appears to have solved the puzzle correctly and just to have entered the wrong answer in the submission form.

Over the last few years I have been wondering how to deal with such occurrences. In my early contests I used to manually adjust the scores in these cases, sometimes after contacting the respective participants. In time I grew unhappy with this approach; I felt that it punished the solvers who took greater care to enter the codes correctly (typically at the expense of losing valuable solving time).

Now, first of all, it was announced beforehand that there would be no manual score adjustments in the Beginners Contest. Under these circumstances, correcting the scores anyway (except in the case of technical errors on the part of the competition host) should be out of the question. The contest conditions must be observed. But assuming no such announcement had been made, what is the best way to deal with incorrect submissions?

The general idea of solution codes is to verify that a solver has completed the puzzle; anyone who has solved the puzzle correctly is supposed to get the points (see an article of mine on this subject from April). And if the submitted entry deviates from the correct one in a way that suggests an incorrect solution of the actual puzzle, no adjustment is in order. However, how do we deal with other cases?

If we were to adjust scores, it would be key to investigate the reason behind the incorrect entries. We must ask if the puzzle itself has been completed in accordance with the rules (and the mistake was made only at the point of entering the solution code), or if the puzzle solution already contains errors.

Even with the nature of the solution entry occasionally pointing in one direction, there seems to be no other viable way to find out except by asking the participant. In this regard I regret to say that I have had some questionable encounters. Although it is my experience that puzzle solvers can generally be trusted, I have come to realize to what lengths people sometimes go to get their points.

In a fully electronic environment (by that I mean not just the solution code functionality, but also an integrated solving interface) there would be no margin for interpretation. The contest engine would view the solution of, say, a Sudoku as a two-dimensional data field with numeric entries. In such a setting, the numbers are either all correct or not, and there is no middle ground.
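
As a toy illustration, here is roughly what such a fully electronic check could look like; the function name and the data layout are my own assumptions and not taken from any actual contest engine:

```python
# Minimal sketch of a fully electronic check: the submitted grid is compared
# cell by cell against the intended solution. Names and data layout are
# illustrative only, not taken from any real contest software.

def grid_is_correct(submitted, solution):
    """Return True only if every cell matches the unique solution."""
    if len(submitted) != len(solution):
        return False
    for sub_row, sol_row in zip(submitted, solution):
        if sub_row != sol_row:
            return False
    return True

# A single wrong or empty cell makes the whole grid count as unsolved:
solution  = [[5, 3, 4], [6, 7, 2], [1, 9, 8]]   # toy 3x3 grid
submitted = [[5, 3, 4], [6, 7, 2], [1, 9, 0]]   # one cell off
print(grid_is_correct(submitted, solution))     # False
```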

Since this kind of environment is usually not at our disposal, we have to deal with what we have. I have already mentioned handwriting issues in the past, but even without those, there are a lot of things that can happen, especially in real-life events.

On the one hand, there are instances where the solution of the puzzle is not complete in a technical sense. Suppose only 80 out of 81 cells in a Sudoku are filled, with the last one either being empty or perhaps still containing several candidates. Or a loop is not fully closed; there is one line segment missing. Or a Magnets puzzle has one plate empty – that is, neither shaded nor containing a plus pole and a minus pole.

All these scenarios have occurred in past championships, and since the given task was not fulfilled, I think the puzzle had to be marked as unsolved. (One can argue that the puzzle is solved “for all practical purposes”, but such an argument only leads to a debate about the point at which it is considered unnecessary to complete the solution. Since there is no objective definition of solving steps or the like, the only real line that can be drawn is between a finished and an unfinished grid – no matter how small the negligence is.)

On the other hand, a different kind of scenario emerges if the completeness is subject to interpretation, perhaps due to the notation used. It has been customary for a long time to accept a different notation on the participant’s part, “as long as it is clear”. But what happens, for example, if a solver uses two different notations and switches between the two, as can be observed so very often in dissection puzzles?

Again, this has happened many times, and it is usually a mess for the scorers, to put it mildly. There have been attempts to clarify what the decision will be in certain specific situations. However, to my knowledge there is nothing even close to a standard for doubtful situations in general.

In this regard, solution codes are actually quite convenient because they force the solver to translate his own solution into a language the contest engine understands – a sequence of numbers (or other characters, depending on the puzzle style). And the engine does not care about handwriting, candidates or any kind of notation mix-up; it only interprets the string according to its programming and makes a binary decision.

However, this comes at a price, namely the risk of translation errors, and I have seen a great variety of translation mistakes. For example, in my Skyscrapers contests (using the preferred code for Latin Squares, namely the contents of a fixed selection of rows) I had to handle, among others, the following mishaps:

  • two numbers have been switched;
  • a number is missing;
  • a number has been entered twice;
  • only one of two rows has been entered;
  • the wrong row has been entered.

Let us keep in mind that this is one of the simplest categories of puzzles, when it comes to solution codes. In other puzzle styles, I have seen so many translation issues that it is impossible to list them here. And most of them were entirely consistent with a correctly solved puzzle. (For instance, someone had switched X’s and O’s for shaded/unshaded cells once. His entry was as much evidence of a correctly solved puzzle as any “correct” submission.)
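
To make this concrete, here is a minimal sketch of a row-based solution code check for a Latin-square-type puzzle, roughly in the spirit of the Skyscrapers example above; the chosen rows and the helper names are assumptions for illustration only:

```python
# Sketch of a row-based solution code check for a Latin-square-type puzzle.
# The selected rows and helper names are illustrative assumptions.

def expected_code(grid, rows):
    """Concatenate the contents of the selected rows into a code string."""
    return "".join(str(cell) for r in rows for cell in grid[r])

def check_submission(entry, grid, rows):
    """Binary decision: the entered string either matches exactly or it does not."""
    return entry.strip() == expected_code(grid, rows)

grid = [
    [1, 2, 3, 4],
    [3, 4, 1, 2],
    [4, 3, 2, 1],
    [2, 1, 4, 3],
]
rows = (1, 3)                                    # e.g. second and fourth row
print(check_submission("34122143", grid, rows))  # True  - correct translation
print(check_submission("34212143", grid, rows))  # False - two digits switched
print(check_submission("3412", grid, rows))      # False - only one row entered
```

The check cannot tell a translation slip from a genuinely wrong grid; both simply fail.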

A couple of years back I used to adjust the scores when it appeared likely that the puzzle itself was solved correctly. I don’t do that anymore. To be honest, I am tired of the discussions. People have often used the word “unfair” to complain about scoring decisions, which is why I have decided to no longer go down this road at all.

The point is, people bring forward all kinds of arguments why an incorrect solution should be treated as a correct one, and it is impossible to draw a fixed line here. Any flexible line, however, will lead to even more anger and frustration if people find themselves on the wrong side of said line, so I prefer not to be the one drawing it. Bill Bryson, one of my favorite writers, once wrote about the notion “that no matter what happens, someone else must be responsible”. I strongly believe that people must learn to accept responsibility for their mistakes in this context.

It has been remarked that, although the intent behind logical puzzles lies in the word “logical”, a solver can basically choose whatever strategy he likes to find the solution (as long as no forbidden means are employed). He can deduce, bifurcate or guess as he pleases; he can use or avoid any solving techniques according to his personal preferences. In the end, all that matters in a contest is whether he arrives at the solution or not.

Likewise, we can now say that it is entirely up to the participant what he does with the grid, since it is only the solution code that matters. He can try to solve a Sudoku, he can submit a random sequence of 18 digits, or he can try to stand on his head for 90 minutes. By extending the “task” from the solving of a puzzle to the translation of his solution into a code string, we leave the decision of how to approach this secondary task to him. It is his own choice, and therefore ultimately his responsibility, how much time he invests.

All this may sound harsh. But in my (current) opinion, this is the best way to deal with this matter. It may be the only truly impartial way to decide whether a solution is right or wrong.

1 reply on “Manual Score Adjustments”

The last time I held an online contest, I didn’t do any manual score corrections, with one exception: if a correct solution was entered for the wrong puzzle (which is usually easy to recognize beyond doubt), I manually awarded full points for the puzzle that was actually solved.

Somehow this feels different than an incorrect solution code where you can only guess whether a competitor made their mistake in the actual solution or while translating the solution into the solution code.
