# Numb3r – Part 2

Hi again! In part 1 I set out the goal of this project and the basics of the memory technique I wanted to automate. If you haven't read it yet, please have a look at that first. In this second part I am going to talk about the Python notebook I created that does the job, and the hardships I faced along the way.

## Concrete Nouns

The first step was to describe what I needed and how I wanted it to work. First off, I needed a list of words. Seems easy, right? Well… some words are easier to memorize than others. For example, using the same table as in part 1, we could convert 43 into multiple words like aRM or kaRMa. The first is what we call a concrete noun: it represents a physical object. As you might expect, an arm is much easier to picture and memorize than an abstract concept like karma. As such, I had to put together a simple list of concrete nouns myself (it is not very good but you can find it here).
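
To make the examples concrete, here is a minimal sketch of the letter table as I reconstructed it from the examples in these posts (4 → R, 3 → M, and so on). Digits 1 and 8 never appear in the examples, so I leave them out rather than guess; vowels and unmapped consonants are "free" letters that carry no digit, which is why aRM and kaRMa both encode 43.

```python
# Letter-to-digit table reconstructed from the examples in this post.
# Digits 1 and 8 do not appear in any example, so they are omitted here.
# Vowels and unmapped consonants are "free": they carry no digit.
LETTER_TO_DIGIT = {
    "n": "2", "m": "3", "r": "4",
    "f": "5", "v": "5",
    "b": "6", "p": "6",
    "t": "7", "g": "9",
    "z": "0", "s": "0",
}

def word_to_digits(word):
    """Return the digit string a word encodes, ignoring free letters."""
    return "".join(LETTER_TO_DIGIT[c] for c in word.lower()
                   if c in LETTER_TO_DIGIT)

def fits(word, digits):
    """True if `word` is a valid mnemonic for the given digit string."""
    return word_to_digits(word) == digits
```

With this table, `word_to_digits("arm")` and `word_to_digits("karma")` both return `"43"`, since the k and the vowels are free.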

## Frequency/Popularity

If you remember the example in part 1, we were trying to convert

454433902 –> RVRRMMGZ(S)N

into a set of words, and we came up with RiVeR aRMy MaGaZiNe. You might have realised by now that other solutions exist, like RiVeR aRM MaGaZiNe or even RiVeR aRM MaGic SoN. How will the computer ever know which of these solutions is better? That is where word frequency comes into play. For each word, a count of its occurrences in Wikipedia was computed, and this is used when comparing different solutions (more on that further ahead).
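
The post computed these counts over Wikipedia; as a minimal stand-in, here is a sketch of counting word occurrences in any plain-text corpus file. The function name and the idea of reading a local dump are my own assumptions, not the notebook's actual code.

```python
import re
from collections import Counter

def word_frequencies(path):
    """Count word occurrences in a plain-text corpus dump, line by line.

    A minimal stand-in for the Wikipedia counts described in the post;
    the real computation ran over a full Wikipedia dump.
    """
    counts = Counter()
    with open(path, encoding="utf-8") as corpus:
        for line in corpus:
            counts.update(re.findall(r"[a-z]+", line.lower()))
    return counts
```

The resulting `Counter` can then rank every candidate word by popularity when two solutions cover the same digits.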

## The algorithm

Now that we have a list of words to use and their frequency we can start playing around.

### The basic approach

The first solution that came to mind was to transform the digits into letters and then, reading left to right, try to fit in the biggest word. You can expand on this by giving each candidate word a score. This score can take into account not just length but also frequency (or popularity), and penalize words that have already been used (when you are memorising a sequence you want to avoid repeating the same word/object).
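
As a sketch of this greedy approach: at each position, try every word that fits, score it by block length plus log-frequency minus a reuse penalty, and commit to the best one. The mini-dictionary, the counts and the scoring weights below are all made up for illustration; they are not the notebook's actual values.

```python
import math

# Illustrative mini-dictionary: digit block -> (word, corpus count).
# Words and counts are made up for this sketch.
CANDIDATES = {
    "4": [("war", 12000), ("ray", 3000)],
    "3": [("ma", 500)],
    "43": [("arm", 9000), ("room", 7000)],
}

def word_score(block, count, already_used, repeat_penalty=5.0):
    # Longer blocks cover more digits and popular words are easier to
    # picture; repeating a word hurts recall, so it is penalized.
    score = len(block) + math.log(count)
    if already_used:
        score -= repeat_penalty
    return score

def greedy_solve(digits):
    """Left-to-right greedy: commit to the best-scoring word each step."""
    words, used, i = [], set(), 0
    while i < len(digits):
        best = None
        for j in range(len(digits), i, -1):  # try the longest block first
            block = digits[i:j]
            for word, count in CANDIDATES.get(block, []):
                s = word_score(block, count, word in used)
                if best is None or s > best[0]:
                    best = (s, j, word)
        if best is None:
            raise ValueError(f"no word fits at position {i}")
        _, i, word = best
        words.append(word)
        used.add(word)
    return words
```

For example, `greedy_solve("443")` takes `war` for the leading 4 and then `arm` for the 43.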

However, this greedy solution ignores the fact that choosing a smaller word first might let you pick a larger word later. Take for example:

65272572 –> B(P)F(V)NTNF(V)TN

This algorithm would pick the biggest, most popular word first and take PaVed. The best words following that decision would then be NeT kNiFe TowN. However, if the first word picked were Book, we could pick FouNdaTioN FuToN, making it only 3 words to memorise instead of 4, which should be easier.

### The complicated approach

It seems that an exhaustive search of all possible solutions would be necessary to solve this problem. This would mean trying out all the combinations of words that fit a sequence of letters. This rapidly explodes into an exponential search space. Take for example the number 43 and all the words that can be portrayed by its letters (RM):

aRMy, RooM, aRM, dRuM, RaM, cReaM, woRM

For just the digit ‘4’ we have 38 possible words (waR, for example) and for the digit ‘3’ we have 8. This means that the number of solutions for a simple 2-digit number is already 38*8 + 7 = 311: the 304 two-word combinations plus the 7 words that cover both digits at once. You can quickly see that this approach would not scale. However, we don’t really care about all the possible word combinations, not at first at least. We could first compute all the ‘blocks’ of digits we can fit in the number sequence. In the example above, 65272572, the sequence can be split into blocks in a number of different ways, for instance 65 27 25 72 or 6 5272 572.
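
Enumerating those splits is a small recursion; a sketch, assuming we only care about the cut positions and not yet about words. An n-digit number has 2**(n-1) possible splits, which is exactly why the search needs pruning.

```python
def block_splits(digits):
    """Yield every way of cutting a digit string into consecutive blocks.

    An n-digit number has 2**(n-1) splits: each of the n-1 gaps between
    digits is either a cut or not.
    """
    if not digits:
        yield []
        return
    for i in range(1, len(digits) + 1):
        # Take the first i digits as one block, then split the rest.
        for rest in block_splits(digits[i:]):
            yield [digits[:i]] + rest
```

For the 8-digit example 65272572 this already yields 128 splits, among them `65 27 25 72` and `6 5272 572`.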

We could then, for each block, get the most popular word. In the case of ’27’ the candidates include NeT, auNT, aNT, kNoT, NuT and hyaciNTh. ‘Net’ seems to be the most popular one, so we would pick it. Once we have done this for every block, we have the ‘best words’ for that sequence of blocks. If a block repeats, we just take the second most popular word the second time it appears, and so on. Just like before, once we convert this ‘block solution’ into a sequence of words, we can score it the same way.
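
The block-to-word step with the repeat fallback can be sketched like this; the candidate lists and counts are illustrative, assuming each block's words are already sorted by frequency.

```python
# Frequency-ranked candidates per block; the counts are illustrative.
BLOCK_WORDS = {
    "27": [("net", 8000), ("aunt", 6000), ("ant", 5000)],
    "72": [("town", 7000), ("tin", 2000)],
}

def words_for_blocks(blocks, block_words):
    """Turn a block split into words: each block gets its most popular
    word, and a repeated block falls back to the runner-up, then the
    third choice, and so on."""
    times_seen, solution = {}, []
    for block in blocks:
        rank = times_seen.get(block, 0)
        ranked = block_words[block]
        # If a block repeats more times than we have words, reuse the last.
        word, _count = ranked[min(rank, len(ranked) - 1)]
        solution.append(word)
        times_seen[block] = rank + 1
    return solution
```

So the split `27 72 27` becomes `net town aunt` rather than repeating `net`.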

This, however, did not seem to be enough. Once the number reaches a certain size, there are simply too many block combinations to evaluate.

### Divide and conquer

To overcome this problem I decided that a divide and conquer solution was needed. The last improvement to the algorithm was to break the big number down into smaller pieces that are solved independently. It does this by looking at all the possible words that fit, say, between positions 10 and 20 of the digit sequence and picking the best one. It then solves the sequence up to that point and repeats the whole process for what is left (or right – sorry for the pun). It will still spit out all the possible combinations of blocks, so we can repeat the process of converting the blocks to the best available words just like before. This makes sure that we do not repeat words in different cuts of the sequence. And it works! The question now is: did all this hard work pay off?
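
A simplified sketch of that recursion, assuming a frequency-sorted dictionary like the earlier snippets and a `base_solve` function that handles short sequences exhaustively. The window sizes, the tie-breaking and the parameter names here are my own illustrative choices, not the notebook's actual code.

```python
def divide_and_conquer(digits, dictionary, base_solve, max_len=10):
    """Cut a long digit string at the best word spanning its middle,
    then solve each side independently.

    `dictionary` maps digit blocks to frequency-sorted (word, count)
    lists; `base_solve` handles sequences short enough for exhaustive
    search. A simplification of the notebook's recursion.
    """
    if len(digits) <= max_len:
        return base_solve(digits)
    mid = len(digits) // 2
    best = None
    # Look for the best dictionary word covering the middle position.
    for start in range(max(0, mid - 5), mid + 1):
        for end in range(mid + 1, min(len(digits), mid + 6) + 1):
            block = digits[start:end]
            for word, count in dictionary.get(block, []):
                key = (len(block), count)  # prefer long, then popular
                if best is None or key > best[0]:
                    best = (key, start, end, word)
    if best is None:
        # No word spans the middle: just cut at the midpoint.
        return base_solve(digits[:mid]) + base_solve(digits[mid:])
    _, start, end, word = best
    return (divide_and_conquer(digits[:start], dictionary, base_solve, max_len)
            + [word]
            + divide_and_conquer(digits[end:], dictionary, base_solve, max_len))
```

Each recursive call only ever hands `base_solve` a piece short enough to search exhaustively, which is what keeps the exponential blow-up in check.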

## The results

As you can see, for a small sequence of numbers like the first 20 digits of PI, all 3 methods arrive at the same answer, which is great. However, you might notice that the basic solution (picking the very best word from left to right) is also incredibly fast.

Once you get to bigger sequences, like the first 60 digits of PI, the Divide and Conquer method does pick longer words. However, it does not manage to avoid repeating one: while in the basic solution repeated words get penalized and are not picked, the divide and conquer method looks at smaller sub-sequences one at a time and loses track of which words have already been used in the other sequences.

When we get to the biggest sequences, like the first 1000 digits of PI, we can see that both methods have their flaws. On one hand, Divide and Conquer returns a smaller number of words, which is great; on the other hand, it also repeats more words, which is not. If execution time mattered, the basic solution would also fare much better (60x faster).

## Conclusion

I think the most impressive (and depressing) conclusion to take from this small project was that it was really hard to define from the start what a good enough solution looked like. One could argue that the basic solution works perfectly well and comes very close to (if not better than) the more complicated, elaborate one. However, the idea that it was not the ‘perfect’ solution pushed me into spending a lot of time on it. That does not mean I did not have fun! I even tried some very cool things (all the functions are recursive, for example), but it was a nice example of how easily one can get bogged down in the pesky little details.

All in all, neither of the solutions could be any good without a decent vocabulary. The lists I am using here are quite limited and still contain words that are not concrete nouns. More work needs to be done on them, but I hope they are a good start for anyone considering doing something similar.

I hope you enjoyed this two-part post. It was a bit too technical and probably not useful to anyone but me, but this is definitely not a serious blog, so why should I care?

See you next time!