Saturday, December 21, 2013

A program solving at 83.6%

I haven't posted in a while, but this project will never die until I'm dead. I can usually put a few hours a month into it and the payback is great. I got an email from another computer solver and his program is solving at 83.6% on a thousand deals. His results can be found here, along with his source code:

https://github.com/ShootMe/Klondike-Solver/blob/master/Statistics.txt

His program gives up after 60 million states, so his deal-processing rate is much higher than mine: 8.2 seconds per deal versus my 19 seconds per deal (hardware considerations aside). Here are my results on a thousand games I ran today as a comparison:

80.2% solved, at roughly 19 seconds on average to process a deal. Some deals are ridiculously difficult to process; I've seen some take about 10 minutes before they prove unsolvable.

Why the discrepancy? A bug in either of our programs would certainly account for it. Another potential problem is random number generation: I use the STL mersenne_twister (std::mt19937) and he uses his own home-grown generator. I'll try his random number generator, and I asked him to try the STL one.
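For reference, dealing off the STL generator looks roughly like this. This is a minimal sketch with cards as plain indices 0..51; my real deck object is more involved, and the function name here is made up for illustration:

```cpp
#include <algorithm>
#include <numeric>
#include <random>
#include <vector>

// Shuffle a 52-card deck (cards as indices 0..51) with std::mt19937,
// the STL Mersenne Twister engine, via std::shuffle.
std::vector<int> dealDeck(unsigned aSeed)
{
   std::vector<int> lDeck(52);
   std::iota(lDeck.begin(), lDeck.end(), 0); // fill with 0..51
   std::mt19937 lRng(aSeed);
   std::shuffle(lDeck.begin(), lDeck.end(), lRng);
   return lDeck;
}
```

The nice thing about seeding explicitly is that a deal is reproducible: the same seed always produces the same deck, which makes comparing two solvers on identical deals possible.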

I'm working on making my solver interactive and hope to have it ready to post in a month or two. This will be a fully interactive, installable program on which you can play Klondike Solitaire (draw 3, infinite retries). You'll be able to ask the computer to attempt to solve any given deal that you lost and then step through the solution, have the computer make a move at any point in the game, and use Undo/Redo functionality to back up and go forward through moves. Version 2, the end product, will be modeled after Dr. Blackjack: you can have it warn you when the computer thinks you are making a mistake. I don't have a lot of time to spend on this but, surprisingly, I've made quite a bit of progress just looking at it a few hours a month.

Sunday, June 9, 2013

Random dot org

As much as I like the STL mersenne_twister, I feel the need to verify my win percentages with a different random number generator. Of course, you can get random numbers online from http://www.random.org/. This is a pretty cool site that uses atmospheric noise to generate random numbers. How good is that! It's a free service that doles out "bit allowances" for generating numbers. The API is described here:

http://www.random.org/clients/http/

Of course he will sell additional allowances, and I like his scheme. That said, I doubt I can pony up the dough for running a million deals. Like most Americans, I am nothing more than a wage/debt slave. The free bit allowance lets me play about 600 hands, so I played 500 and the game player won about 1 in 5.7 deals. This is in the ballpark of my usual 1 in 5.37 deals, so I am pretty happy with the results. I hope to email him, hat in hand, and see if he'll give me a million deals in the off hours or something.

So how hard was it to get random numbers off the internet? Not very. He has some samples posted and, of course, the link is stale. I googled the sample and found a good link. Here is a C++ program written by Doug Hague which will download the numbers:

https://github.com/doughague/random-dot-org

I pulled this down and tried to build it in VS2010. Of course it uses something called libcurl. Édouard Tallent's blog has a nice write-up on how to get it up and running under VS2010:

http://quantcorner.wordpress.com/2012/04/08/using-libcurl-with-visual-c-2010/

The thing also uses GNU option parsing, but I know I don't need that. #ifdef NOTUSED. Haha. I manually hack in the options I want. So I build the thing and of course there's a missing DLL. The ole libsasl.dll is missing; I hunt it down and find it here:

libsasl.dll

I run and test this standalone. Works a treat! I add a function in place of main() and change the project to a static library. I still need all the bonus DLLs, though. I add the static library project to my solitaire solution and I'm pretty close. I modify my deck object to deal using a sequence of numbers from random.org and there you have it: solitaire dealt off of atmospheric noise.
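For the curious, the deck change amounts to driving the shuffle with a supplied sequence instead of a PRNG. A minimal sketch, assuming the numbers have already been fetched from random.org (the download plumbing and the function name are my own for illustration):

```cpp
#include <numeric>
#include <utility>
#include <vector>

// Fisher-Yates shuffle driven by an externally supplied number
// sequence, e.g. 51 integers downloaded from random.org where
// arNumbers[i] is assumed to be in the range [0, 51 - i].
std::vector<int> dealFromSequence(const std::vector<int> &arNumbers)
{
   std::vector<int> lDeck(52);
   std::iota(lDeck.begin(), lDeck.end(), 0); // cards as indices 0..51
   for (std::size_t i = 0; i + 1 < lDeck.size() && i < arNumbers.size(); ++i)
   {
      // pick the card for position i from the remaining cards
      std::swap(lDeck[i], lDeck[i + arNumbers[i]]);
   }
   return lDeck;
}
```

The shuffle logic no longer cares where the numbers came from, so the same deck code works against the STL generator, random.org, or a hardware generator.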





Saturday, June 8, 2013

Taking it to the next level

I have ported the program to x64 and added multi-threading. I can keep eight cores going at 50% without increasing priority. Why do I do this? The holy grail is to prove that good deals are hard to lose, and my preliminary runs support that hypothesis. You'll be hard-pressed to lose a good deal regardless of the number of mistakes you make. I'll quantify this in upcoming posts.

From a computer geek point of view, reader/writer locks are your friends. I can't fully utilize the processor unless I use a reader/writer lock on the list of unique card stacks. What's a reader/writer lock? I had a guy at work who went ballistic over this concept, but it's pretty simple. If you are comparing card stacks, you can have as many threads as you want comparing card stacks. When you go to add a stack to the list, it is imperative that everyone be locked out of the list of card stacks. Thus a reader/writer lock lets all readers in while keeping writers out; when all pending readers are finished, the writer comes in and takes exclusive control of the list of card stacks. It works! She is your friend!
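In modern C++ terms the idea looks like this. A sketch using C++17's std::shared_mutex (not what was available in VS2010, where you'd reach for the Win32 SRW locks), with the stack state flattened to a string for simplicity:

```cpp
#include <mutex>
#include <set>
#include <shared_mutex>
#include <string>

// Reader/writer lock around the list of unique card stacks: any
// number of threads may search concurrently; adding a new stack
// takes the lock exclusively.
class StackList
{
public:
   bool contains(const std::string &arStack) const
   {
      std::shared_lock<std::shared_mutex> lLock(mMutex); // shared: many readers at once
      return mStacks.count(arStack) != 0;
   }
   bool add(const std::string &arStack)
   {
      std::unique_lock<std::shared_mutex> lLock(mMutex); // exclusive: one writer, no readers
      return mStacks.insert(arStack).second;             // true if the stack was new
   }
private:
   mutable std::shared_mutex mMutex;
   std::set<std::string> mStacks;
};
```

Since comparing stacks vastly outnumbers adding them, letting readers proceed in parallel is where the extra cores actually get used.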

Saturday, May 18, 2013

Solving rate down to 77%

I am down to a 77% solving rate. My initial 80% is attributable to bugs that I have since fixed. The new rate seems to be consistent, but I really haven't built up a large sample set. We're talking hundreds of games.

My list of lists isn't doing exactly what I thought it was; I was incorrectly collecting statistics. The number of stacks in a given list is not limited at all and grows quite large. Computational power is an issue.

I took a shot at trying to calculate how many ways a game can be solved. This is a really small sample set, and some deals couldn't finish because they ran out of memory; running 32-bit, my process is limited to 2 gig. This little foray led me to the following conclusions:

1. My card object is too large. For this reason, I encoded everything into a single byte: the rank is encoded in the low-order nibble, the suit in the next two bits, and the face-up state in the high-order bit. Stacks are much smaller now and stack comparisons are much faster.
2. I need to port this to 64 bit. 2 gig has no chance of dealing with this problem.
3. Even then, I will probably have to implement a caching scheme for writing stacks to disk.
4. I need to go multi-threaded and run this thing on eight cores.
5. I need to implement a twelve key map which uses the number of cards on all the stacks as the keys.
6. Make an internet solver. This would be a service that could be installed on computers where the main solver could dole out stacks to be solved. Sort of like the SETI at home thing. I'm certain there's a large community of people who would want to donate computing time in pursuit of meaningless endeavors like calculating the number of ways a deal can be won in solitaire! Er, uh, maybe not.
7. Utilize a database for caching stacks, managing solving and collecting statistics.
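The single-byte card from item 1 can be sketched like so. These helper names are my own for illustration, not the solver's actual card object:

```cpp
#include <cstdint>

// One card in a single byte: rank (1..13) in the low-order nibble,
// suit (0..3) in bits 4-5, face-up flag in the high-order bit.
// (Bit 6 is unused.)
inline uint8_t makeCard(int aRank, int aSuit, bool aFaceUp)
{
   return (uint8_t)((aRank & 0x0F) | ((aSuit & 0x03) << 4) | (aFaceUp ? 0x80 : 0));
}
inline int  cardRank(uint8_t aCard)   { return aCard & 0x0F; }
inline int  cardSuit(uint8_t aCard)   { return (aCard >> 4) & 0x03; }
inline bool cardFaceUp(uint8_t aCard) { return (aCard & 0x80) != 0; }
```

Besides shrinking the stacks, a byte-per-card layout means comparing two stacks is a straight memory compare, which is where the faster stack comparisons come from.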

This little hobby is taking on more dimensions than unique solitaire card stacks. The funny thing is, I can probably chip away at this in my spare time with obsolete computer equipment. When I first started programming, 64K RAM and an 8 bit processor wouldn't have gotten you very far with this problem.

Saturday, May 4, 2013

80% of solitaire deals are now solvable!

I was able to get the solver to solve around 80% of deals. It is extremely consistent. My sample sizes are only hundreds of deals, as it is solving at a rate of about 25 deals an hour. It makes me wonder how they were getting variation in the white-paper solver listed in the previous post.

As I suspected, the first version of the solver was flawed: my move logic was skipping permutations. At the cost of making the solver slower (the first version solved at a rate of about 40 deals per hour), I was able to get my solution rate up to 80%. I discarded the concept of grouping stack moves and went for a far simpler implementation. Stack moves are pretty much treated just like card plays off the play stack.

I also refined my strategy for check pointing stacks. My initial implementation would unconditionally check point a stack after a significant action that followed a skipped action. When I would go play another list, I would look for the stacks in the list of check pointed stacks and not play the stacks if they were found. I implemented a new strategy where I mark the stacks for a pending check point after a skipped action. I then continue to play the hand and try to check point the stacks after each significant action. If the stacks are found in the previously tried list, I leave the pending flag set and continue playing. In this manner, the program now seeks out new stacks to solve. Only when I am able to check point the stack (i.e. it doesn't exist in previously tried stacks) do I clear the pending flag. This means I spend more time comparing stacks but less time actually playing cards.

Lists of lists are good. My initial implementation of the list of check pointed stacks was a std::map<int, std::vector<std::vector<Stack>>>, keyed on the number of cards remaining on the play stack. I never analyzed this map, but I figured that some lists were getting exceedingly long and required many comparisons of the stacks. My new list of check pointed stacks is:

static std::map<int, std::map<int, std::map<int, std::vector<std::vector<Stack>>>>> sStacksPlayed;

It has three keys. I tried to do eight, but the compiler barfed on it, something about names being too long; three was the most I could get. I didn't bother to investigate further, but I suspect I could get eight if I got into the business of typedefs. Now I have a three-key map, and my keys are as follows:

std::vector<std::vector<Stack>> &lStackList(sStacksPlayed[mCardStacks[eStackPlay].size()][mCardStacks[eStack4].size()][mCardStacks[eStack7].size()]);

I arbitrarily chose the number of cards in the play stack, the fourth main stack, and the seventh main stack. After solving a deal I started analyzing my triple map and found that the maximum number of stacks in a list was consistently twelve. I had an "oh duh" moment when I realized that twelve is the most cards you can have on a main stack. This is good, as the most stack comparisons I will do on any attempted check point is twelve.
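If I ever do want more keys, one alternative would be a single map keyed on a std::tuple of stack sizes, which sidesteps the nested-map name-length blowup entirely. A sketch only, not what the solver currently does, and with the payload simplified to a count rather than the real checkpointed stack lists:

```cpp
#include <cstddef>
#include <map>
#include <tuple>

// One map keyed on a tuple of stack sizes instead of three (or
// eight) nested maps; std::tuple compares lexicographically, so
// it works as a map key out of the box.
typedef std::tuple<std::size_t, std::size_t, std::size_t> StackKey; // play, stack 4, stack 7
typedef std::map<StackKey, int> StackMap;

int &bucketFor(StackMap &arMap, std::size_t aPlay, std::size_t aFour, std::size_t aSeven)
{
   // operator[] default-constructs the payload (0) on first use
   return arMap[StackKey(aPlay, aFour, aSeven)];
}
```

Extending StackKey to all twelve stack sizes is just a longer tuple; the decorated name stays one map deep.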

Most deals solve or fail to solve fairly quickly. Periodically a deal comes up where hundreds of thousands of unique stack check points are generated, which really kills my throughput. I would like to collect stats on this.

The main recursion loop is as follows:

void PlayList::playList(const std::vector<Stack> &arCardStacks, int aCardsRemaining, int aLevel)
{
   if (arCardStacks.size() == 0)
   {
      return;
   }

   // -1 means we won't skip any significant actions on this game
   Game lGame(mSortStacks, -1, aLevel, false, true);
   if (lGame.play(arCardStacks))
   {
      //std::wstringstream lStream;
      //lStream << _T("Solved!");
      //displayString(lStream.str());
      mSolved++;
      // game stops on first solve - I'll try to have it keep going some day
      return;
   }
   // the number of significant actions in the previous attempt
   int lEndSkip(lGame.skipPlayMoveCounter());
   for (int i = 0; i < lEndSkip && !mSolved && !mStop; i++)
   {
      // i identifies the significant action to skip for this game
      Game lGame(mSortStacks, i, aLevel, false, true);
      mGamesPlayed++;
      if (aLevel == 1)
      {
         std::wstringstream lStream;
         lStream << _T("Level one step ") << i+1 << _T(" of ") << lEndSkip;
         displayString(lStream.str());
      }
      if (lGame.play(arCardStacks))
      {
         //std::wstringstream lStream;
         //lStream << _T("Solved!");
         //displayString(lStream.str());
         mSolved++;
      }
      else
      {
         if (!mSolved && !mStop && lGame.checkPointStacks().size() > 0)
         {
            playList(lGame.checkPointStacks(), lGame.cardsRemaining(), aLevel+1);
         }
      }
   }
}

This recursive implementation leads to concerns about stack space and stack overruns. According to the doco, the VS2008 default stack is 1 meg, and my implementation consumes about 10K of stack for each level of recursion, which gives me headroom for roughly 100 levels. I think I am OK, but I need to track my maximum level of recursion. I'm printing out stats every thousand attempted solves and it never seems to go above 30 levels of recursion. I am not sure why this is.

Monday, April 29, 2013

Solitaire solver

I have modified the game player to try to solve deals. It is currently running and solving in the range of around 73% (557 solves out of 763 deals). There is a white paper which claims success at a rate of 82 to 91.44%. You can find it here:

http://web.engr.oregonstate.edu/~afern/papers/solitaire.pdf

I think I found the source and I might try building it and reproducing their results (depends on time available).

My implementation is fairly straightforward and brute force. A game of solitaire can be broken down into a series of actions, which I'll classify into three basic groups based on my implementation of the game player. The first type of action is stack moves and exposing cards on the tableau stacks. The second is moving a card from the play stack (stock/talon) to the main stacks or ace stacks (foundation stacks). The third is shifting piles and moving cards to/from the main stacks and the ace stacks in hopes of creating a space for the card showing on the talon. The solver is really only concerned with the first two types: shifting stacks and moving cards doesn't really accomplish anything until you actually move a card off the talon.

The first step in modifying the program to solve deals was to track the first two types of actions, which I did with a simple counter. Once I was able to identify actions, I could envision making "mistakes" when a game is played, a "mistake" being that you fail to move a stack/card or fail to play a card from the talon. Skipping a card play from the talon is trivial. Skipping a move is non-trivial, as once you skip one move it can affect successive moves. Thus the stack moves identified in step one of my play algorithm were grouped into a series of moves. When I attempt to skip a move within a series of moves, I must take care not to make any other moves within the series that touch the stacks involved in the initial skipped move. The excluded stack list is cumulative, as any stack involved in a newly skipped move must be added to the list. I am worried that this logic is insufficient and may be the cause of my discrepancy with the white paper results above.

Once I have a list of significant actions, I can replay a lost game while skipping significant actions. For example, if the lost hand involved 35 significant actions, I can replay the hand 35 times, skipping a different action each time. It's easy to see that the permutations involved grow quickly, as each of these hands can then generate another list of games with skipped actions. Rather than build lists of skipped actions, the algorithm lends itself easily to a recursive solution. Take a lost hand and play a list, where playing a list involves skipping each of actions 1 through n. While playing these hands, after you skip an action and then make the next significant action, you have stacks which can be "check pointed" (i.e. I save off the state of the stacks). If that hand fails to win, you can then play a list of games off the check pointed stacks, creating all permutations of the initial skipped-action stacks. This clearly leads to massive numbers of potential solutions, and clearly there will be deals which cause the computer to go off into solve mode, never to return.

A definition of insanity is doing the same thing over and over and expecting different results; a definition of an insane computer program would be something similar. In order to lend some sanity to this solution, I implemented a std::map of stacks, using the number of cards remaining in the play stack after a failed try as the key. As I play lists, I save off the state of the stacks into the map. Before I play a new list, I check that this particular setup of stacks has not already been tried and is not currently being tried; if it has, there's no sense running a play list on it. With this approach, I am able to process deals at a rate of about 50 per hour. This is on a pretty old computer, running a single thread on 32 bit.
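The already-tried check can be sketched as follows. For simplicity the stack state is flattened to a string here; the real map stores the stack objects themselves, and the class name is mine:

```cpp
#include <map>
#include <set>
#include <string>

// Already-tried stack states, keyed by the number of cards
// remaining in the play stack after a failed try. A state that
// has been seen before is not worth playing a list on again.
class TriedStacks
{
public:
   // Records the state; returns true only if it was new.
   bool tryRecord(int aCardsRemaining, const std::string &arState)
   {
      return mTried[aCardsRemaining].insert(arState).second;
   }
private:
   std::map<int, std::set<std::string> > mTried;
};
```

Keying on cards remaining first means a lookup only ever compares against states that could possibly match, instead of scanning every state ever recorded.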

I've invested parts of four weekends on this little foray and I'm not certain I want to continue. The TODO list is starting to grow. Based on my order of priority:

1. Clean up source code and post it.
2. Carefully review my skipped move logic above.
3. Build the white paper source and try to verify their results.
4. Modify the solver to count up solutions to a deal rather than stopping after solving it once. Generate statistics about how many solutions exist on average for any given deal. This would give us an idea of how "forgiving" deals are (i.e. how punitive mistakes are). My intuition tells me that good deals are hard to lose.
5. Buy a hardware random number generator (cost of about $230?) and run my games with that.
6. Port to x64.
7. Enhance to use multiple threads. I should be able to get this running on a machine with 8 cores and I'll try running 7 threads.

I am committed to getting item 1 done next week; after that, I don't know.

Saturday, April 27, 2013

Odds of winning a game of solitaire


I have been playing solitaire as of late and was wondering what the exact odds of winning were. I decided to write my own simple C++ program that plays solitaire. It uses STL extensively and I whacked something together fairly quickly.

In the hopes of validating my results, here is my algorithm. The program implements 12 stacks of cards: the seven main stacks, the four ace stacks, and the one play stack. The algorithm does passes on the play stack. For each pass, I do the following:

1. Expose as many cards as possible by moving cards and stacks around the 7 main stacks and to the ace stacks as follows:
A. Move exposed aces and twos to the ace stack.
B. Move cards within the 7 main stacks.
C. Move cards from the main stacks to the ace stacks.
2. Attempt to play a card off the play stack as follows:
A. Move an ace card to the ace stack.
B. Move the card to one of the seven main stacks.
C. Move the card to one of the ace stacks.
D. Become desperate and do the following:
D1. Move cards from the seven main stacks to the ace stacks.
D2. Attempt to place the card on the seven main stacks.
D3. Move cards from the ace stacks to the main stacks.
D4. Attempt to place the card on the seven main stacks.
D5. Shift a stack within the seven main stacks, taking care not to repeat the same shift twice. An example of a shift would be moving everything stacked under a seven of clubs to an exposed seven of spades. If a unique shift occurs (i.e. I haven’t already done that particular shift), I’ll go back to step D1. This whole shifting scheme took me from about 1 in 5.8 games down to the current 1 in 5.4 games.

After going through the play stack three cards at a time, the pass ends and we go back to step 1 if we played at least one card off the play stack; otherwise the game is lost. I also implemented basic rules like never clearing whole stacks/cards unless kings are available.
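The pass structure above boils down to this control flow. Both helpers are stand-ins for the real move logic in steps 1 and 2, and the function is a sketch of the loop, not the actual player:

```cpp
#include <functional>

// Passes over the play stack continue as long as the previous pass
// played at least one card off the play stack; a pass that plays
// nothing means the game is over. Returns the total number of cards
// played off the play stack.
int playGame(const std::function<void()> &arExposeCards,   // step 1: move cards/stacks, expose
             const std::function<int()>  &arPlayPassCards) // step 2: one pass, three cards at a time
{
   int lTotal = 0;
   for (;;)
   {
      arExposeCards();                 // step 1
      int lPlayed = arPlayPassCards(); // step 2
      if (lPlayed == 0)
      {
         break; // no card came off the play stack this pass
      }
      lTotal += lPlayed;
   }
   return lTotal;
}
```

Structuring it this way keeps the termination rule (stop when a full pass plays nothing) in one place, separate from the messy per-move logic.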

I used the STL mersenne_twister to generate my deals. No small feat with Visual Studio 8, as that was a real pain. Here are some fun statistics that I have generated based on the program.

1. Odds of winning are about 1 in 5.4 games.
2. Odds of not being able to play a single card off the play stack (aka stock/talon) are about 1 in 55 games.
3. Odds of not being able to move a card on the main stacks (aka tableau) or play a single card off the play stack are about 1 in 377 games.
4. Odds of all seven main stacks showing the same color on a deal are about 1 in 101 deals.
5. The average number of cards remaining in the play stack in any given game is about 10.1.

The next step is to modify the program to try and solve a random deal. Solving the deal would involve playing all possible permutations of the deal to see if it can be won. I intend to implement this recursively.

Any help on making this program win more games would be greatly appreciated. I also have concerns about bugs which invalidate the above results.

If any of this vaguely interests you, I’ll be happy to read your email. Email rstefan912 at charter dot net or you can comment here directly.

Here is the source code:


Here is a screen shot of the program: