The surprise of Division 4 so far is Wasp 3.2, clear first by half a point and with three NN scalps
to its credit. Leela is second and Deus X trails by another point. These three engines seem to make up the class of the field so far, but we’ve got more than half the tournament left to go.
One item of concern is that Leela is conceding draws to some of the lowest-ranked engines in the tournament. She needs to start closing the deal against the doormats.
Today lc0 has been rolled out to the main server. The getting started page has been updated.
The main server will soon switch to using lc0 for all training games. If you would like to prepare in advance, you can download the v16 client and lc0 releases now. Note that while you can start them immediately, they will fail every 30 seconds until the server becomes lc0-friendly; once it does, the old lczero client will no longer work for training.
Right now the main net is being trained. In a week, training will turn to a 20b network (stay tuned). You have a choice between running OpenCL and CUDA (which requires NVIDIA downloads).
Third Party Downloads
You can find out which libraries to download and install here.
Which Version is For You?
“cudnn is for recent nvidia gpus, opencl for older nvidia and all amd gpus, blas is for cpu. With opencl also the blas backend is available, enabled with --backend=blas and may be faster.”
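The guidance above boils down to a simple mapping from hardware class to the lc0 `--backend` flag. A small illustration (the helper function and hardware labels are mine, not part of lc0):

```python
# Hypothetical helper, not lc0 code: map a hardware class to the
# lc0 --backend flag suggested in the quote above.
BACKEND_FOR = {
    "recent NVIDIA GPU": "cudnn",   # needs the NVIDIA/cuDNN downloads
    "older NVIDIA GPU": "opencl",
    "AMD GPU": "opencl",
    "CPU only": "blas",             # also available under OpenCL builds
}

def backend_flag(hardware: str) -> str:
    """Return the lc0 command-line flag for a hardware class."""
    return "--backend=" + BACKEND_FOR[hardware]

print(backend_flag("AMD GPU"))   # --backend=opencl
```

If you have an OpenCL build, it may still be worth benchmarking `--backend=blas` against `--backend=opencl` on your machine, per the note above.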
Yesterday, after a bit of a brouhaha on Discord about the Deus X entry in TCEC,
cooler heads prevailed. TCEC issued a set of three statements: from Albert Silver
(the author of the Deus X NN), from the Leela Chess authors, and from the TCEC tournament director.
I thought the statement by the Leela Chess authors was particularly gracious.
The team is looking forward to exciting games in the upcoming Top Chess Engine Championship, and is very glad that both neural network based on “zero” principle and “non-zero” principle will participate this season. It’s going to be really interesting to observe their differences in style and strength. We hope to see even more neural network based engines in the next seasons of TCEC. It is now time for the highest level chess competition to begin and for the audience to enjoy the best chess in the world.
Again, lots of stuff going on, but we’ll focus on the burning question: when is lc0 getting rolled out to the main pipeline?
We were almost ready to move, but overfitting problems started (the net is very accurate on existing data, but bad on new data). We are trying various solutions, including increasing CPUCT as suggested in a series of articles by Oracle on AlphaZero and Connect 4 (part 5).
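For context, CPUCT is the exploration constant in the PUCT selection rule used by AlphaZero-style engines: each node's score is its value estimate plus a prior-weighted exploration bonus, and raising CPUCT weights that bonus more heavily, spreading visits across more moves. A minimal sketch (the function and parameter names are mine, not lc0's):

```python
import math

def puct_score(q, prior, parent_visits, child_visits, cpuct):
    """AlphaZero-style PUCT score: value estimate plus exploration bonus.

    The bonus grows with the move's prior probability and the parent's
    visit count, and shrinks as the move itself accumulates visits.
    A larger cpuct weights this bonus more heavily, pushing search
    toward less-visited moves.
    """
    u = cpuct * prior * math.sqrt(parent_visits) / (1 + child_visits)
    return q + u

# Same position, same statistics -- only cpuct differs.
low = puct_score(q=0.1, prior=0.3, parent_visits=100, child_visits=50, cpuct=1.0)
high = puct_score(q=0.1, prior=0.3, parent_visits=100, child_visits=50, cpuct=3.0)
print(low < high)  # True: higher cpuct means a bigger exploration bonus
```

The intuition for the overfitting fix: with more exploration, training games sample a wider variety of positions instead of repeatedly reinforcing the lines the net already prefers.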
It turns out that storage (for training games) is an issue we haven't yet solved. We expect to generate 110GB of data per day, and it would be good if that data were stored at at least two sites, for backup reasons. Solving this is in progress.
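Back-of-the-envelope math on those figures makes the scale of the problem clear (the two-site count comes from the paragraph above; the per-week and per-month roll-ups are my arithmetic):

```python
# Storage estimate: 110 GB of training games per day, mirrored at two sites.
GB_PER_DAY = 110
SITES = 2

per_week_gb = GB_PER_DAY * 7                  # 770 GB of new games per week
per_month_tb = GB_PER_DAY * 30 / 1000         # ~3.3 TB per month, per site
total_month_tb = per_month_tb * SITES         # ~6.6 TB per month across both sites

print(per_week_gb, round(total_month_tb, 1))
```

At that rate the archive grows by multiple terabytes a month, which is why this needs a real solution rather than a single volunteer's disk.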
We need to enable multi-GPU support for TensorFlow. This requires some non-trivial surgery on the model; otherwise we will fall further and further behind the massive amounts of generated data. In progress.
All activities in the work queue can be found on the project board.
Again, take all this with a giant grain of salt, but the TCEC crew are supposedly in
possession of a dual 1080 Ti machine running remote Windows Server 2006. They are doing
final testing for a tentative S13 start date of August 1st.
We tend to feature Leela's wins on this blog. But sometimes you have to take
a look at your losses, whether as a person or an engine.
IM William Paschall has been kind enough to analyze this loss at the WCCC with
black against TheGridGinko. A theme seems to emerge: while hanging on to your pieces
when you’re up can be good, refusing to trade when you’re down can be bad.
Maybe looking at some of the parameters to make Leela play more conservatively
would be good, especially against stronger opponents.