Monday, June 29, 2009

After Netflix

Well - a (combined) team has finally managed to get to the finishing line - many, many congratulations to them. I must admit I feel a mix of regret at not being slightly further up the leaderboard and relief that I can now (bar a few desperate throws of the dice) concentrate on taking the learnings from Netflix elsewhere.

The competition has been very good to me, and I'm now engaged on a variety of projects trying to leverage the skills learnt including:

  • Producing a film and television recommendation system (see "$8 Million For Stunning Personal TV Recommendation System")
  • Working for a number of dating agencies, trying to help them identify compatible people - the interesting twist here is that, as well as the person having to like the movie, the movie has to like the person as well (if you see what I mean)
  • Identifying who might have to go to the accident and emergency department of a hospital, so that care plans can be put in place to reduce the likelihood of an emergency admission, thereby reducing costs and improving patient satisfaction (the movie equivalent here is the treatments they received in the last year)
  • Working on a project to predict the prices of ... (I'm afraid I can't talk about this one just yet).
The interesting thing is that, in all these cases, the application of the Netflix algorithms makes a substantial improvement over the status quo - I think the learnings from the Netflix competition have enormous applications both within recommendation systems and elsewhere. Hats off to Netflix for producing such a valuable advance in both the science and (probably more importantly) the number of people who can now tackle these kinds of problems.

If any of the Netflix contestants are interested in working on "real problems" please don't hesitate to get in touch. I've more work than I can handle at the moment.

Friday, June 19, 2009

How Netflix predicts the price of wine

I, and I know a number of others, are beginning to be sidetracked into other things that we might do with the knowledge we have garnered from the Netflix prize, whilst we let our betters (go for it, Pragmatic Theory) battle for that last little bit of RMSE that will land them the $1 million prize.

I'll publish a number of ideas that I've been involved in over the last year. One that has surprised me is the ability to predict the price of a wine from comments collected from the web. It's early days yet, but a project I've been involved in is looking at whether we can predict the price of clarets (ranging from $3,000 a case down to $300 a case) based solely on wine reviews.

Slightly surprisingly, this is working very well. The picture above shows the fit (in £ UK) of the predicted prices of around 100 wines against their actual values. In Netflix terms, the RMSE of the prices is around £370 a case (once the mean price is subtracted); once you include the contributions from the words, the RMSE falls to around £140 a case, so the words cut the RMSE by well over half.
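The approach described above can be sketched as a simple bag-of-words regression. To be clear, everything below is a hypothetical illustration rather than the actual project code - the reviews, the prices, and the choice of ridge regression are all my assumptions - but the shape matches the description: centre the prices, regress the residual on word counts, and read the high-price words off the largest coefficients.

```python
import numpy as np

# Hypothetical mini-corpus of (review text, price per case in GBP).
reviews = [
    ("woody pencil cedar complex long finish", 2800.0),
    ("woody tobacco pencil elegant", 2400.0),
    ("cedar pencil structured tannin", 2100.0),
    ("fruity light easy drinking", 350.0),
    ("light soft simple", 320.0),
    ("simple fruity soft", 300.0),
]

# Bag-of-words matrix over the corpus vocabulary.
vocab = sorted({w for text, _ in reviews for w in text.split()})
X = np.array([[text.split().count(w) for w in vocab] for text, _ in reviews],
             dtype=float)
y = np.array([price for _, price in reviews])

# Centre the prices (the post subtracts the mean price first), then fit
# ridge regression: w = (X'X + lam*I)^-1 X'(y - mean).
lam = 1.0
w = np.linalg.solve(X.T @ X + lam * np.eye(len(vocab)), X.T @ (y - y.mean()))

# Words ranked by coefficient: the top entries are indicators of a high
# price, the bottom entries negative indicators - the list in the post.
ranked = sorted(zip(vocab, w), key=lambda t: -t[1])

baseline_rmse = np.sqrt(np.mean((y - y.mean()) ** 2))      # mean-only model
model_rmse = np.sqrt(np.mean((y.mean() + X @ w - y) ** 2))  # words included
print([word for word, _ in ranked[:3]])
print(round(baseline_rmse), round(model_rmse))
```

On this toy corpus the words that only appear in expensive wines (woody, pencil, cedar) come out with positive coefficients, and the word model's RMSE falls well below the mean-only baseline - the same pattern as the £370 → £140 drop reported above.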

What is also interesting is some of the key words that indicate a high price. These words are in order of importance with the words at the bottom of the list being negative indicators of price.


So woody and pencil are the words to look for when choosing expensive wines from Bordeaux. Try it when you next purchase a wine - it's already changed what I look for in a wine description.

Why do it? Well, it's a bit of a labour of love, to see if we can produce a system that can identify underpriced wines to buy. However, the success so far has suggested that, if we can find a more liquid market (no pun intended), there might be the potential to make some money by identifying underpriced opportunities, and we are currently exploring a few other ideas that are looking promising.

Tuesday, June 9, 2009

The psychological meaning of billions of parameters

The leaders in the Netflix competition have made great strides since my last post.

Essentially, my understanding is that they have done this by modelling thousands of factors on a daily basis, i.e. for each person they model (say) 2,000 factors on an individual-person and individual-day basis. The set of ratings provided for the competition gives enough information to work out that a particular person had a preference of a particular strength, on a particular day, to watch something funny - or, given that there are 2,000 or so factors, something rather more obscure (maybe watch something in sepia). The ratings also enable you to calculate how much a film meets those requirements (again on a particular day - what seemed funny in one time period may not seem funny in another).

By combining the two sets of factors you can then work out how a person will rate a particular movie and improve your score in the competition. This is an undoubtedly impressive feat from a statistical / machine learning viewpoint.
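That combination step can be sketched very simply. The names, numbers, and the use of only 3 factors (rather than ~2,000) below are all illustrative assumptions on my part; the point is just that the predicted rating is the global mean plus the dot product of that day's user-factor vector with the movie's factor vector.

```python
import numpy as np

rng = np.random.default_rng(0)
n_factors = 3  # the leaders reportedly use ~2,000; 3 keeps the sketch readable

# Hypothetical per-day user factors: p[(user, day)] is that person's
# preference vector on that day, e.g. "how much they want something funny".
p = {
    ("alice", "2009-06-01"): rng.normal(size=n_factors),
    ("alice", "2009-06-08"): rng.normal(size=n_factors),
}

# Movie factors: how strongly the film exhibits each quality. (In the full
# model these can also drift over time; they are held fixed here.)
q = {"some_comedy": rng.normal(size=n_factors)}

def predict(user: str, day: str, movie: str, mu: float = 3.6) -> float:
    """Predicted rating = global mean + (that day's user factors) . (movie factors)."""
    return mu + float(p[(user, day)] @ q[movie])

# Same person, same movie, different days -> different predictions,
# because the user's factor vector changes day by day.
r1 = predict("alice", "2009-06-01", "some_comedy")
r2 = predict("alice", "2009-06-08", "some_comedy")
print(r1, r2)
```

The daily user vectors are what make the parameter count explode into the billions: one vector per person per active day, each dotted against the movie factors at prediction time.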

It strikes me that this is also interesting from a psychological viewpoint - do we really believe that people have such nuanced preferences across such a large number of dimensions? I have an open mind about this - a priori, I would have thought people would use many fewer factors in arriving at a rating decision - certainly 2,000 factors (or even 20) can't all be combined consciously; the subconscious must be heavily involved. Maybe, on the other hand, there are only a few factors that we take into account - but they are different per person, and the only way they can be explained is by taking a mix of the 2,000 or so factors that are modelled.

It strikes me that, depending on your view on the above, your choice of research direction on the Netflix competition, on recommendation systems, and indeed on psychological processes in general will vary.

I'd welcome views.