How Proof of Concept Testing Makes for Better Personalization

Gabriella Vas
9 min read | June 8, 2022

In today’s booming startup culture, proof of concept (POC) has become a frequently used term, alongside its next of kin, the MVP and the prototype. It’s generally used in a business or product development context. However, in the field of SaaS, and more specifically personalization, proof of concept has some distinctive characteristics that many people may not be familiar with. Read on to learn more about these differences, and about the ultimate benefits of POC: why it makes sense to do proof of concept testing when deploying a new recommendation system or feature.

Understanding Proof of Concept in Personalization

In our field, proof of concept has a double meaning:  

  • It refers to a phase within the implementation process that involves testing the viability of a personalization solution on a smaller scope (a selection of recommendation placements or scenarios) and/or on a sample audience, for a limited time, and making the necessary adjustments before the full-scale rollout. 
  • The term is also interpreted as affirmative evidence resulting from this testing phase. 

Unlike in product development, a proof of concept period in personalization is often, but not always, necessary. Here are some possible scenarios: 

  • For companies already using a recommender system, whether as SaaS or built in-house, testing it against a competing product calls for a POC.
  • If a company’s existing personalization solution is clearly less sophisticated than the alternative it is considering, the proof of concept period will not involve A/B testing, only customizing (more on this later). 
  • In the unlikely event that an enterprise has no recommendation functions at all, and needs just a very basic “starter kit”, the POC can be omitted. 
  • In an ongoing collaboration between a client and its provider, minor changes in the personalization strategy can be implemented without going through the POC phase. But a major overhaul definitely requires proof of concept testing. 

In product development, the proof of concept forms the base of building a prototype or a minimum viable product (MVP). In a SaaS context, this logic is not applicable, simply because the product – in our case, the recommendation system – already exists. Here, the proof of concept period is a step in the process of tailoring the standard product to a specific client’s infrastructure and requirements. It’s preceded by an assessment and evaluation phase, as well as data integration and testing. (Read more about the details of deploying enterprise-grade personalization systems here). 

Types of Proof of Concept Testing

Proof of concept is often conflated with A/B testing. In fact, POC is a broader category, encompassing several ways to find the best personalization solution for the given client:  

  • Customizing – limited-scope or sample-audience testing of the standard recommendation product in order to adjust it to a new client’s specificities.
  • A/B testing – an objective, well-defined comparison of two recommendation setups, based on clear KPIs.
  • Experimenting – an iterative co-creation process involving both client and provider, with the aim of developing a brand new personalization function.
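
To make the A/B testing idea concrete, here is a minimal sketch of the kind of KPI comparison such a test rests on. The metric (click-through rate) and all figures are hypothetical illustrations, not real test data or Yusp’s actual methodology:

```python
# Hypothetical sketch of an A/B KPI comparison between two
# recommendation setups. Metric names and numbers are illustrative.

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: clicks per impression."""
    return clicks / impressions

def relative_uplift(rate_a: float, rate_b: float) -> float:
    """Relative improvement of variant B over variant A."""
    return (rate_b - rate_a) / rate_a

# Variant A: the existing setup; variant B: the challenger.
ctr_a = ctr(clicks=1_200, impressions=60_000)   # 0.02
ctr_b = ctr(clicks=1_500, impressions=60_000)   # 0.025

print(f"CTR A: {ctr_a:.3f}, CTR B: {ctr_b:.3f}")
print(f"Relative uplift: {relative_uplift(ctr_a, ctr_b):.1%}")  # 25.0%
```

In practice, the agreed KPIs may be conversion rate, revenue per visitor, or any other clearly defined metric; the point is that both variants are measured the same way over the same period.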

A Closer Look at Experimenting

Now, wait a minute. Didn’t we just say that in SaaS, proof of concept testing doesn’t involve building from scratch, because the product already exists? Nice catch – we did. 

Experimenting, however, is a bit of an exception. When an organization wants to develop a new digital tool with personalization potential, it enlists the help of a personalization provider from the very first stages of ideation. Together, they start building the user experience from the ground up. In this case, the product – the app or website with its native recommendation features – truly doesn’t exist yet. A long and winding road of iterations and testing leads towards its production, with many sinkholes along the way.  

Therefore, experimenting in the proof of concept phase is an exciting challenge for both parties involved. Typically, there are few existing references and no direct precedent to take cues from; no data or tangible details to build on. Starting with a blank slate also means more factors and input sources to consider. In order to come up with new recommendation scenarios or custom-made personalization models, flexibility and close collaboration are essential. 

Because there are so many unknown elements and moving parts, an experiment is much more fluid, less tightly defined than an A/B test. There is no preset, step-by-step process and the success criteria aren’t always clear-cut, either. If a concept fails, it could be due to a number of factors, of which personalization is just one.

Here’s a recent example from our records. Aware that eating out is a top spending category for its customers, a major Asian bank contemplated launching a dining guide app featuring geolocation-based restaurant recommendations, a personalized search function, reviews and rating options, as well as lifestyle content around the topic of gastronomy and wine. During the proof of concept period, which lasted about a year, the Yusp team and the client set up a roadmap, sketched out various recommendation placements, and refined algorithms. A demo version of the app was built and tested on a sample audience. The client eventually abandoned the project, having come to the conclusion that the dining guide concept wasn’t the best way forward. 

Proof of Concept Practicalities: Timeframe and Cost

In several senses, the above example is extreme – proof of concept tests that last many months only to result in a no-go are fortunately rare. In the case of A/B testing or more conventional experimenting (where a higher proportion of the factors shaping the outcome are known), the normal duration of the POC period is about eight weeks. This timeframe is long enough for the system to generate measurable impact, but not so long as to waste significant resources if the proposed solution is not working as it should.

Because, let’s be clear: proof of concept testing is an investment of talent, energy, and time on both sides. The amount of implementation work required depends on the complexity of the personalization solution being tested. Under exceptional circumstances, when a robust, well-documented recommendation system needs only a few tweaks, the minimal POC effort required can be free of charge. By default, though, proof of concept testing is provided as a paid service. Remember, during this phase, the recommendation system is already generating some results, and these usually indicate that a full-scale rollout is going to be ROI-positive. 

Proof of Concept Pitfalls and Prospects

We’ve mentioned failure in passing. But what exactly does it mean in the context of proof of concept testing in the personalization industry? What can go wrong, and how does that affect the outcome? 

According to our experts, one typical problem is poor data quality – if the client’s item or user catalog is unstructured, if it’s missing key elements, or if it’s not updated regularly. (Lack of frequent updates is especially acute in the case of online marketplaces with huge, decentralized, ever-changing item catalogs.) 

What can be done about it? There are fallback algorithms that kick into gear if the ones relying on item catalog data cannot function properly. Still, the relevance of the recommendations generated this way will reflect the deficiencies in the data. In order to bring the quality of the input (and, consequently, of the output) up to scratch, the provider team has to put in more hours, so the POC operation will last longer than usual.  
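
The fallback logic described above can be pictured as a simple chain: try the catalog-based recommender first, and hand over to a popularity-based fallback when the item’s metadata is missing. The following is a hypothetical sketch under that assumption, not Yusp’s actual implementation:

```python
# Hypothetical fallback chain: catalog-based similarity first,
# overall popularity when metadata is missing. Data is illustrative.

from typing import Optional

CATALOG = {
    # item_id -> metadata; category may be missing in a dirty catalog
    "A1": {"category": "wine"},
    "B2": {"category": "pasta"},
    "C3": {},  # missing category: catalog-based similarity can't run
}

POPULARITY = ["B2", "A1", "C3"]  # globally most-clicked items, in order

def catalog_based(item_id: str, k: int = 2) -> Optional[list]:
    """Recommend items sharing the viewed item's category, or None."""
    category = CATALOG.get(item_id, {}).get("category")
    if category is None:
        return None  # metadata missing: signal the caller to fall back
    similar = [i for i, meta in CATALOG.items()
               if i != item_id and meta.get("category") == category]
    return similar[:k] if similar else None

def recommend(item_id: str, k: int = 2) -> list:
    """Catalog-based first; popularity fallback if it can't function."""
    recs = catalog_based(item_id, k)
    if recs is not None:
        return recs
    return [i for i in POPULARITY if i != item_id][:k]

print(recommend("C3"))  # falls back to popularity: ['B2', 'A1']
```

A popularity fallback keeps recommendations flowing, but as noted above, its relevance reflects the gaps in the data – it cannot substitute for a clean, regularly updated catalog.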

Another issue that may arise during a proof of concept phase is a change in the objectives or priorities of the client for their personalization strategy. In this case, we’re shooting at a moving target, so to speak. In order to align with the new goals, the provider has to make adjustments in configuration or implementation, and this could also mean a longer POC period and less spectacular results. 

Lastly, here’s a problem affecting proof of concept testing that’s hardly typical, but worth mentioning because of its complication potential. Contrary to the textbook description of A/B testing recommendation systems, it is technically possible to compare two solutions with parameters that are not exactly identical. Upon explicit client request (for business or other reasons), the audience sample sizes, or the position and visibility of placements might be different for versions A and B. Although we can use complex formulas to compensate for this imbalance, the outcome of the test will be difficult to interpret accurately, which may lead to disagreements between the client and the provider. (It helps to mitigate this risk if the parties agree in advance not just on the KPIs of the test, but on who will measure the results and how: what method, what tools they’ll use to do so.) 
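
One standard way to compare conversion rates measured on audience samples of different sizes is a two-proportion z-test, which weights each variant’s rate by its own sample size. This is an illustration of one common statistical approach, not necessarily the formula Yusp uses in practice, and the figures are hypothetical:

```python
# Illustrative two-proportion z-test for comparing conversion rates
# measured on unequal sample sizes. One standard approach among many;
# not necessarily the exact method used in a real POC.

from math import sqrt, erf

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int):
    """Return (z statistic, two-sided p-value) for rates conv/n."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)       # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return z, p_value

# Variant A on 10,000 users, variant B on only 4,000 (unequal samples).
z, p = two_proportion_z(conv_a=200, n_a=10_000, conv_b=120, n_b=4_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

Even with such formulas available, agreeing up front on who measures what, and how, remains the safer way to avoid disputes over the outcome.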

For sure, these are setbacks in the proof of concept process, but they don’t amount to failure in the sense that the project will be abandoned as unviable. There is, however, a certain percentage of POC endeavors that do fall through. 

In our experience, this “mortality rate” varies according to the type of POC test. In the case of A/B testing, it’s practically zero, at least for Yusp: our recommendation engine has consistently outperformed rivals in all A/B tests conducted so far. In the case of customizing and moderate experimenting, the chance of failure is minimal. If it does occur, the reason isn’t always obvious, or it’s not clearly connected to the performance of the recommendation feature. With full-blown experiments, the odds of survival are low – but remember, many factors affect an idea’s viability besides personalization. 

Why Proof of Concept Testing Is a Worthwhile Investment

Having read about all the costs and risks above, you may be wondering: What’s the point? Why bother with new recommendation features if they must be tested? And why is it so important to obtain proof of concept, anyway? 

Compared with sticking to your existing personalization setup – whether it’s a recommender system built in-house or run by an external provider, or no personalization at all – trying out something new always involves a higher degree of uncertainty. And integrating a new solution, or even making a lesser adjustment, cannot happen without your active contribution. 

But the added value more than makes up for the investment. As a result of proof of concept testing or experimentation, we’re able to serve new personalization use cases, with more relevant recommendations that will ultimately improve your bottom line. What’s more, we’re able to develop innovative solutions that will benefit our clients as well as the personalization industry in the long run.

In addition, a proof of concept provides lasting evidence. At the outset, it proves that the tested recommendation feature is ROI-positive; and for a long time to come, it helps to dissipate any doubts about the added value of said feature. So if there’s a management change or an audit at the company, and someone asks, “Is this personalization system worth the money we’re paying for it?”, having the POC results on hand helps to settle the discussion. 

Even when its results are deemed to have expired, a proof of concept test can be repeated to reaffirm the personalization platform’s added value. This was the case with Cora Romania, a longtime Yusp client. When the hypermarket chain first ventured into digital, it had Yusp A/B test recommendation scenarios on its website against the original, unpersonalized setup. The recommendations won. 

Six years later, Cora built a new website, and asked Yusp to conduct the same type of A/B test again, to establish exactly how much added value our recommendation system generates. Once more, the outcome confirmed that personalization by Yusp was ROI-positive. In fact, the return on investment rate was slightly higher than the first time around, thanks to more traffic on the site generating more data, meaning more fuel for the recommendation engine.

Having obtained this proof of concept result, Cora decided against involving new providers, and continued using Yusp. 

Test Your Idea With Us

Here at Yusp we have extensive experience in proof of concept testing. Considering a change in your personalization strategy? Challenge our recommendation engine with an A/B test, or let’s roll up our sleeves and embark on a more complex experiment together. Just contact us for a kickoff call.
