A Marathon Won by a Hair’s Breadth: Our Story of the Netflix Prize

Every brand, every organization is built on a story. For Yusp by Gravity R&D, a provider of recommendation systems, not just the brand narrative but the company's very existence is rooted in the story of the Netflix Prize.

With its vast offering of movies online, Netflix relies heavily on accurate recommendations. So much so that according to its own estimates, as of 2016, its personalized recommendation system was worth one billion dollars. Per year.

This world-class personalization solution got a major boost a decade earlier with the Netflix Prize, when brilliant engineers and scientists, the founders of Gravity R&D among them, crunched tons of data and burned the midnight oil in a quest to improve the accuracy of Netflix movie recommendations. And, by the way, to win a million dollars.

This is what gave rise to Gravity’s origin story, itself verging on movie material. Read on to find out how four Hungarian data scientists went from newbies in the field of recommendation systems to being tied for first place in the Netflix Prize; how they did not get the fish but learned how to fish instead.

The Stakes 

“We’re quite curious, really. To the tune of one million dollars.” This is how Netflix introduced the initiative of a contest aimed at improving its movie recommendations back in 2006. Netflix was already a household name at the time. But instead of video streaming, which didn’t exist yet, the company was in the business of DVD rental by mail, with the website serving as its online library. 

Cinematch, the site’s recommendation system, was “doing pretty well”, Netflix reckoned, but could surely be made even better. To this end, Netflix offered a Grand Prize of one million dollars to anyone who could come up with an algorithm that beat the accuracy of Cinematch recommendations by a margin of ten percent. Accuracy referred to how closely predicted ratings of movies would match subsequent actual ratings.
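Concretely, the contest scored submissions by root mean squared error (RMSE) between predicted and actual star ratings: Cinematch's baseline RMSE was roughly 0.9514, so a ten-percent improvement meant reaching about 0.8563. A minimal sketch of the metric in Python (the ratings below are made up purely for illustration):

```python
import math

def rmse(predicted, actual):
    """Root mean squared error between predicted and actual ratings."""
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
    )

# Hypothetical ratings on Netflix's 1-5 star scale
actual = [4, 3, 5, 2, 4]
predicted = [3.8, 3.4, 4.6, 2.5, 4.1]
print(round(rmse(predicted, actual), 4))  # prints 0.3521
```

Because RMSE squares each error, a few badly wrong predictions hurt the score far more than many slightly-off ones, which is part of what made the final fractions of a percent so hard to win.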

The Challenge 

The Netflix Prize contest kicked off in October 2006. Enter the Gravity team: Gábor Takács, István Pilászy, and Bottyán Németh, headed by Domonkos Tikk, currently the CEO of Gravity R&D. They were all well versed in the field of machine learning and had ranked high in the 2005 and 2006 KDD (Knowledge Discovery and Data Mining) Cups. But they were facing tough competition: more than 41,000 teams from 186 countries, including some of the world’s best data scientists and computer engineers.

According to Domonkos, nobody had the slightest idea how big the task would be. The organizers might have had an inkling that it was a tall order: “We suspect the 10% improvement is pretty tough… It may take months; it might take years”, they wrote in the contest announcement.

In fact, the Netflix Prize contest lasted over two and a half years. All this time, team Gravity stayed in the highest ranks of the public leaderboard.

In retrospect, ten percent was a fairly arbitrary target, and indeed difficult to reach, Domonkos says. While in the initial stages of the contest the margin of improvement progressed in leaps and bounds, it slowed to a crawl towards the end.

A snapshot of the leaderboard from February 2007, with team Gravity in first position

The Endgame 

As the contest wore on, various teams started merging and combining their algorithms in a concerted effort to reach the elusive ten-percent threshold. In early 2009, on an initiative by Domonkos, team Gravity joined forces with a handful of other teams under the name Grand Prize Team, while their rivals formed the conglomerate BellKor’s Pragmatic Chaos. 

The latter was the first to declare, in the summer of 2009, that it had achieved the ten-percent improvement in rating prediction accuracy. According to the rules, the remaining contestants then had a 30-day period to outdo this result.

In a new merger, the Grand Prize Team and other top contenders formed an even bigger consortium called The Ensemble, with Gravity’s Gábor Takács at its helm. In a quickfire succession of new entries, the two leading rival conglomerates inched ahead side by side. As the final deadline drew close, on July 26, BellKor’s Pragmatic Chaos and The Ensemble both submitted winning entries in a “photo finish” situation: they produced identical test scores 20 minutes apart.  

It took a few days to sort out which team had actually won. Finally, BellKor’s Pragmatic Chaos was declared Grand Prize winner because their entry had arrived first. As Domonkos put it, “This was like a marathon that somebody won by a hair’s breadth”.

The Takeaway 

The Netflix Prize contest ended officially on September 1, 2009, after the winners went through the motions of validating their entry. Although the Gravity team fell short of winning the Grand Prize of a million dollars, the benefits of being the runner-up proved substantial.

The positive effects of the Netflix Prize were just the beginning. Today, Gravity R&D is a prominent player in the field of personalization engines, anchored in an international network of strategic partnerships and serving major clients worldwide. 

Thanks to Gravity’s powerful algorithm portfolio and over a decade of experience, it regularly outperforms its competition in A/B tests. It has six data centers across the globe, ensuring data security and real-time service.

The digital output of Gravity R&D has evolved into two distinct brands: Yusp, a tailor-made bundle of personalization solutions for enterprise clients in various industries; and Yuspify, its lightweight, ready-to-use version for small and medium e-commerce businesses.

On top of the tangible results, the Netflix Prize contest also provided an important lesson that has shaped the outlook of Domonkos and his colleagues ever since. They realized that, while location matters for running a business, in terms of professional achievement it doesn’t matter whether you’re in Silicon Valley or in humble Budapest. As long as you’re good, you can be anywhere, and still be a game-changer.