Personalization Showdown: Nara Logics vs. Collaborative Filtering

A benchmark report comparing Nara Logics' recommendation performance to the industry's favorite algorithm: collaborative filtering


At Nara Logics, we’re often asked how our neuroscience-based synaptic intelligence platform compares to other recommendation engine approaches. While we have comparison results from customers on specific business problems, we wanted to take things a step further and see how our platform performs against the most commonly used recommendation algorithms.

Our first challenger in the Personalization Showdown series is...collaborative filtering (CF)! This widely used technique is based on the principle that people who liked, purchased, watched and/or read the same “thing” will also be interested in the other “things” those people have bought or consumed. In fact, CF algorithms are behind much of the content we see online, from Amazon to Google, Netflix to iTunes, and the ever-evolving Facebook. Internet companies are always looking to improve these algorithms to make customer experiences more personal; Netflix famously awarded a $1 million prize in 2009 for an algorithm that improved its movie recommendations by 10%.
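To make the idea concrete, here is a minimal sketch of user-based collaborative filtering, assuming a small hypothetical ratings matrix and illustrative function names (cosine_similarity, recommend). It is not Nara Logics’ platform or any specific production system; it simply illustrates the “people like you liked this” principle described above.

```python
import numpy as np

# Hypothetical user-item ratings (rows = users, columns = items, 0 = unrated).
ratings = np.array([
    [5, 4, 0, 0],   # user 0
    [4, 5, 0, 1],   # user 1
    [0, 1, 5, 4],   # user 2
    [1, 0, 4, 5],   # user 3
], dtype=float)

def cosine_similarity(a, b):
    """Cosine similarity between two users' rating vectors."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b) / denom if denom else 0.0

def recommend(user, ratings, top_n=2):
    """Score the items `user` has not rated by the similarity-weighted
    ratings of all other users, and return the top-scoring item indices."""
    sims = np.array([cosine_similarity(ratings[user], ratings[other])
                     for other in range(len(ratings))])
    sims[user] = 0.0                       # ignore the user's own row
    scores = sims @ ratings                # weight each item by neighbor similarity
    scores[ratings[user] > 0] = -np.inf    # never re-recommend already-rated items
    return list(np.argsort(scores)[::-1][:top_n])

print(recommend(0, ratings))   # items favored by users who rate like user 0
```

Production CF systems differ mainly in scale and refinement (item-item variants, matrix factorization, implicit-feedback weighting), but the core intuition is the same as in this sketch.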


Our results showed that Nara Logics’ synaptic intelligence platform identified more of users’ highly rated items than the CF algorithms we tested, performing 32% better than the next-best-performing algorithm for Top 10 recommendations and over 100% better for Top 3 recommendations.

In this white paper you’ll learn:

  • which CF tools the industry currently uses to make recommendations
  • how Nara Logics’ platform differs from these CF methods
  • our methods and results for determining who makes the best recommendations
