Explanatory Information Is Revealed - Wordplay Blog - NYTimes.com
Source: The New York Times
Date: April 4, 2009
Byline: Jim Horne
Note: The following is excerpted from a longer article.
Matt Ginsberg is another of those software geniuses who also construct crosswords. Today's puzzle includes an encoded algorithm. Debug that code, run the algorithm, and it eventually all works out.
As if constructing a Saturday puzzle weren't tough enough, Mr. Ginsberg decided he had to up the degree of difficulty. The grid itself has one extra set of constraints, and the clues have yet another. Either would be cool. Together they raise this puzzle to the stratosphere.
I wondered how it was even possible to create such a puzzle, so I contacted the constructor for an interview:
Interview with Matt Ginsberg
Tell me about yourself.
First and foremost, I'm a husband and a father. I bring the average down a bit, but I'm still part of the best family in the world. Right now, we're on vacation in Hawaii -- Pamela and Skylar are getting rid of sand from a day at the beach, and Navarre is probably solving a KenKen. Nerd that I am, I get to spend the spare five minutes doing e-mail.
My "real" job is to run a small software company, On Time Systems. We help our clients optimize their operations; as an example, we save the Air Force about 20 million gallons of jet fuel annually. It's nice to have such a green job!
The time that's left over (such as it is), I spend making crosswords, flying around (upside down) in a plane I built in the '80s, and playing volleyball. Volleyball gets harder as I get older, and the other folks in the league don't seem to.
You played a key role at the American Crossword Puzzle Tournament this year. For the first time, the judges only had to mark up errors. You created a system that scanned the markups and calculated the scoring, making you the hero of every judge there. How did that come about?
I came to the tournament in 2008, and it seemed like a lot of the puzzle scoring work could be automated. I suggested what I thought would be a more efficient mechanism, which turned out to mean that I volunteered to put it together. It was an interesting project -- a bit of image manipulation stuff, and trying hard to ensure that the overall error rate went down as a result of my efforts.
Now that the dust has settled, how did it work out? Any surprises?
It was beta software, of course, since nothing had ever been tested. So there were surprises all over the place -- as an example, the judges marked squares that were wrong in yellow; if the mark itself was wrong, they went over it in green. The green highlighter I bought in Eugene and tested with wasn't as vibrant as the green highlighters in use at the A.C.P.T., though, which wreaked havoc with my algorithm for figuring out which squares were right and wrong! I was doing a lot of pretty frantic coding throughout the event and didn't sleep a lot. But it all seemed to work out, and I was delighted that the other folks were so pleased. This is a great community, and I'm always pleased when I can help.
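For the curious, here's a minimal sketch (my own illustration, not Mr. Ginsberg's actual code) of how a scoring program might classify a scanned answer square by the hue of its highlighter mark. The function name and the hue thresholds are my guesses, and that's precisely the rub he describes: a less vibrant green highlighter can shift the measured color enough to fool fixed thresholds.

```python
# Illustrative sketch only, assuming the Pillow imaging library.
# The hue cutoffs below are hypothetical and would need calibration
# against the actual highlighters in use.
import colorsys
from PIL import Image

def classify_square(img: Image.Image) -> str:
    """Guess whether a scanned square is unmarked, marked wrong (yellow),
    or corrected (green), by averaging the hue of strongly colored pixels."""
    hues = []
    for r, g, b in img.convert("RGB").getdata():
        h, s, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
        if s > 0.3 and v > 0.4:  # ignore paper white and pencil gray
            hues.append(h)
    if not hues:
        return "unmarked"
    mean_hue = sum(hues) / len(hues)
    # Hue runs 0..1 around the color wheel: yellow sits near 1/6, green near 1/3.
    if 0.10 <= mean_hue < 0.22:
        return "wrong"        # yellow judge's mark
    if 0.22 <= mean_hue < 0.45:
        return "corrected"    # green over-mark
    return "unmarked"
```

Run something like this on a crop of each grid square and you get the judge's verdict; as the tournament proved, calibrating those thresholds against the highlighters actually on hand is the step you skip at your peril.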
Another surprise was how much value the contestants got out of being able to see their scored sheets on the Web. We'll definitely be focusing on that next year -- the puzzles will be up sooner (we hope), and the images will be clearer and easier to download (they won't be upside down, we'll focus on the grid, etc.).
Are there modifications planned for next year?
Absolutely. We'll try to make it easier for the judges to score the dreaded puzzle No. 5, which many people get wrong more than they get right. And we'll make the whole thing run considerably more smoothly by avoiding the hiccups we had this time around.