Saturday, May 18, 2013

Solving rate down to 77%

I am down to a 77% solving rate. My initial 80% was attributable to bugs that I have since fixed. The new rate seems to be consistent, but I really haven't built up a large sample set. We're talking hundreds of games.

My list of lists isn't doing exactly what I thought it was: I was collecting statistics incorrectly. The number of stacks in a given list is not limited at all and grows quite large, so computational power is an issue.
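For what it's worth, the measurement I should have been making is easy to bolt on. Here is a minimal sketch (the type and function names are just for illustration, shaped like the triple map from the previous post) that walks every list and reports the longest one:

#include <cstddef>
#include <map>
#include <vector>

typedef std::vector<unsigned char> Stack;             // stand-in for the real Stack
typedef std::vector<std::vector<Stack> > StackLists;  // one list of check pointed stacks
typedef std::map<int, StackLists> Map1;
typedef std::map<int, Map1> Map2;
typedef std::map<int, Map2> Map3;                     // same shape as sStacksPlayed

// Walk every list in the triple map and return the length of the longest one.
std::size_t maxListLength(const Map3 &arMap)
{
   std::size_t lMax = 0;
   for (Map3::const_iterator i = arMap.begin(); i != arMap.end(); ++i)
      for (Map2::const_iterator j = i->second.begin(); j != i->second.end(); ++j)
         for (Map1::const_iterator k = j->second.begin(); k != j->second.end(); ++k)
            if (k->second.size() > lMax)
               lMax = k->second.size();
   return lMax;
}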

I took a shot at trying to calculate how many ways a game can be solved. This is a really small sample set, and some deals couldn't finish because I ran out of memory. Running 32-bit, my process is limited to 2 gig. This little foray led me to the following conclusions:

1. My card object is too large. For this reason, I encoded everything into a single byte: the rank is encoded in the low-order nibble, the suit in the next two bits, and the up-card state in the high-order bit (see the sketch after this list). Stacks are much smaller now and stack comparisons are much faster.
2. I need to port this to 64 bit. 2 gig has no chance of dealing with this problem.
3. Even then, I will probably have to implement a caching scheme for writing stacks to disk.
4. I need to go multi-threaded and run this thing on eight cores.
5. I need to implement a twelve key map which uses the number of cards on all the stacks as the keys.
6. Make an internet solver. This would be a service that could be installed on computers, where the main solver could dole out stacks to be solved. Sort of like the SETI@home thing. I'm certain there's a large community of people who would want to donate computing time in pursuit of meaningless endeavors like calculating the number of ways a deal can be won in solitaire! Er, uh, maybe not.
7. Utilize a database for caching stacks, managing solving and collecting statistics.
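
As a concrete illustration of item 1, here is a minimal sketch of the sort of single-byte encoding I described (the helper names here are just for illustration):

// One card in one byte: rank in the low-order nibble, suit in the next
// two bits, up-card state in the high-order bit (bit 6 is unused).
typedef unsigned char Card;

inline Card makeCard(int aRank, int aSuit, bool aFaceUp)
{
   return static_cast<Card>((aRank & 0x0F) |
                            ((aSuit & 0x03) << 4) |
                            (aFaceUp ? 0x80 : 0x00));
}

inline int  cardRank(Card aCard)   { return aCard & 0x0F; }
inline int  cardSuit(Card aCard)   { return (aCard >> 4) & 0x03; }
inline bool cardFaceUp(Card aCard) { return (aCard & 0x80) != 0; }

A stack then becomes a plain byte array, which is presumably why the stack comparisons got so much faster.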

This little hobby is taking on more dimensions than unique solitaire card stacks. The funny thing is, I can probably chip away at this in my spare time with obsolete computer equipment. When I first started programming, 64K RAM and an 8 bit processor wouldn't have gotten you very far with this problem.

Saturday, May 4, 2013

80% of solitaire deals are now solvable!

I was able to get the solver to solve around 80% of deals. It is extremely consistent. My sample sizes are only hundreds of deals, as it is solving at a rate of about 25 deals an hour. It makes me wonder how they were getting variation in the white paper solver mentioned in the previous post.

As I suspected, the first version of the solver was flawed: my move logic was skipping permutations. At the cost of making the solver slower (the first version solved at a rate of about 40 deals per hour), I was able to get my solution rate up to 80%. I discarded the concept of grouping stack moves and went for a far simpler implementation in which stack moves are treated pretty much like card plays off the play stack.

I also refined my strategy for check pointing stacks. My initial implementation would unconditionally check point a stack after any significant action that followed a skipped action. When I would go play another list, I would look for the stacks in the list of check pointed stacks and not play them if they were found. My new strategy instead marks the stacks for a pending check point after a skipped action. I then continue to play the hand and try to check point the stacks after each significant action. If the stacks are found in the previously tried list, I leave the pending flag set and continue playing; only when I am able to check point the stacks (i.e. they don't exist in the previously tried stacks) do I clear the pending flag. In this manner, the program now seeks out new stacks to solve. It means I spend more time comparing stacks but less time actually playing cards.
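
To make the pending check point idea concrete, here is a rough standalone sketch of that flow; the names and the bare flag are just for illustration, since the real logic lives inside my Game class:

#include <algorithm>
#include <vector>

typedef std::vector<unsigned char> Stack;
typedef std::vector<Stack> Stacks;

static std::vector<Stacks> sPreviouslyTried;  // stacks already check pointed
static bool sCheckPointPending = false;

// After a skipped action: don't check point yet, just mark it pending.
void onSkippedAction()
{
   sCheckPointPending = true;
}

// After each significant action: try to check point while the flag is set.
void onSignificantAction(const Stacks &arStacks)
{
   if (!sCheckPointPending)
   {
      return;
   }
   if (std::find(sPreviouslyTried.begin(), sPreviouslyTried.end(), arStacks) !=
       sPreviouslyTried.end())
   {
      return;  // seen before - leave the flag set and keep playing
   }
   sPreviouslyTried.push_back(arStacks);  // new stacks, check point them
   sCheckPointPending = false;
}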

Lists of lists are good. My initial implementation of the list of check pointed stacks was a std::map<int, std::vector<std::vector<Stack>>>, using the number of cards remaining on the play stack as the key. I never analyzed this map, but I figured that some lists were getting exceedingly long and required many comparisons of the stacks. I implemented a new list of check pointed stacks. It is:

static std::map<int, std::map<int, std::map<int, std::vector<std::vector<Stack>>>>> sStacksPlayed;

It has three keys. I tried to do eight, but the compiler barfed on it, something about names being too long; three was the most I could get. I didn't bother to investigate further, but I suspect I could get eight if I got into the business of typedeffing (there's a sketch of that idea below). Now I have a three key map and my keys are as follows:

std::vector<std::vector<Stack>> &lStackList(
   sStacksPlayed[mCardStacks[eStackPlay].size()]
                [mCardStacks[eStack4].size()]
                [mCardStacks[eStack7].size()]);

I arbitrarily chose the number of cards in the play stack, the fourth main stack and the seventh main stack as the keys. I started analyzing my triple map after solving a deal and found that the maximum number of stacks in a list was consistently twelve. I had an "oh duh" moment when I realized that the maximum number of cards you can have on a main stack is twelve. This is good, as the most stack comparisons I will do on any attempted check point is twelve.
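
On the typedeffing idea above: I haven't tried it, but a plain typedef may not help, since it is only an alias for the same long decorated name. Wrapping each map level in a struct would give the compiler a genuinely short name at every level. An untested sketch:

#include <map>
#include <vector>

typedef std::vector<unsigned char> Stack;  // stand-in for the real Stack

// Each struct introduces a short new type name, so the decorated name
// never grows past one map level no matter how deep the nesting goes.
struct StackLists { std::vector<std::vector<Stack> > mLists; };
struct KeyLevel3  { std::map<int, StackLists> mMap; };
struct KeyLevel2  { std::map<int, KeyLevel3>  mMap; };
struct KeyLevel1  { std::map<int, KeyLevel2>  mMap; };
// ...repeat the pattern for as many keys as wanted

static KeyLevel1 sStacksPlayedDeep;

Lookup would read much like the raw nested map: sStacksPlayedDeep.mMap[k1].mMap[k2].mMap[k3].mLists.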

Most deals solve or don't solve fairly quickly. Periodically a deal comes up where hundreds of thousands of unique stack check points are generated. This really kills my throughput. I would like to collect stats on this.

The main recursion loop is as follows:

void PlayList::playList(const std::vector<Stack> &arCardStacks, int aCardsRemaining, int aLevel)
{
   if (arCardStacks.size() == 0)
   {
      return;
   }

   // -1 means we won't skip any significant actions on this game
   Game lGame(mSortStacks, -1, aLevel, false, true);
   if (lGame.play(arCardStacks))
   {
      //std::wstringstream lStream;
      //lStream << _T("Solved!");
      //displayString(lStream.str());
      mSolved++;
      // game stops on first solve - I'll try to have it keep going some day
      return;
   }
   // the number of significant actions in the previous attempt
   int lEndSkip(lGame.skipPlayMoveCounter());
   for (int i = 0; i < lEndSkip && !mSolved && !mStop; i++)
   {
      // i identifies the significant action to skip for this game
      Game lGame(mSortStacks, i, aLevel, false, true);
      mGamesPlayed++;
      if (aLevel == 1)
      {
         std::wstringstream lStream;
         lStream << _T("Level one step ") << i+1 << _T(" of ") << lEndSkip;
         displayString(lStream.str());
      }
      if (lGame.play(arCardStacks))
      {
         //std::wstringstream lStream;
         //lStream << _T("Solved!");
         //displayString(lStream.str());
         mSolved++;
      }
      else
      {
         if (!mSolved && !mStop && lGame.checkPointStacks().size() > 0)
         {
            playList(lGame.checkPointStacks(), lGame.cardsRemaining(), aLevel+1);
         }
      }
   }
}

This recursive implementation leads to concerns about stack space and stack overruns. According to the doco, the VS2008 default stack size is 1 meg. My implementation consumes about 10K of stack for each level of recursion, so I should have headroom for roughly 100 levels. I think I am OK, but I need to track my maximum level of recursion. I'm printing out stats every thousand attempted solves and it never seems to solve above 30 levels of recursion. I am not sure why this is.
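
Tracking the depth should be cheap, since playList() already carries aLevel. A sketch of the sort of counter I have in mind (the names are just illustrative):

// Running maximum of the recursion depth, to be called at the top of
// playList() and reported with the per-thousand-solves stats.
static int sMaxLevel = 0;

void trackLevel(int aLevel)
{
   if (aLevel > sMaxLevel)
   {
      sMaxLevel = aLevel;
   }
}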