When we first introduced personalization at Prepr, we did it with conviction. We believed that adaptive content and experiments could help teams build better digital experiences, experiences that respond to real people, not assumptions. And over time, that belief proved right. Teams started personalizing homepages, testing hero messages, and tailoring content to different audiences.
But from the very beginning, we also knew one important piece was missing.
With the release of Impact Goals for personalization and A/B testing, that missing piece is now in place. Impact Goals make it possible to measure what experiments actually achieve, directly inside Prepr. They complete a vision we started working toward years ago: not just running experiments, but clearly understanding their effect on meaningful outcomes.
Why CTR alone was never enough
For a long time, optimization in digital teams revolved around what was easiest to measure. Clicks. Views. Click-through rate. These metrics were visible, immediate, and simple to compare across variants. When you launched an A/B test on a homepage banner or personalized a hero message, CTR was usually the first signal you looked at.
And CTR does matter. It tells you whether something catches attention. It shows whether a message resonates enough for someone to interact. But in practice, a higher click-through rate did not always translate into better outcomes.
A variant could win on clicks and still fail to move users closer to a real goal. Visitors might click a button but never continue their journey. They might engage with a personalized message and still leave without signing up, requesting a quote, or returning later. In other words, surface-level engagement was easy to measure, but it didn’t explain what happened next.
To understand that deeper impact, marketers had to rely on external analytics tools: data was sent to platforms like Google Analytics, goals were recreated there, and results were matched back to experiments by hand. This made experimentation slower, more complex, and harder to trust.
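To make that old workflow concrete, here is a minimal TypeScript sketch of what the manual wiring typically looked like. It assumes GA4's standard gtag.js snippet is already loaded on the page; the event and parameter names (experiment_exposure, variant_id) are illustrative assumptions, not a Prepr or Google Analytics convention.

```ts
// Sketch of the pre-Impact Goals workflow: experiment data is pushed to an
// external tool (here GA4 via gtag.js), and conversions then have to be
// matched back to variants by hand. Names below are illustrative only.

// Assumes the standard GA4 gtag.js snippet is loaded on the page.
declare function gtag(
  command: "event",
  eventName: string,
  params: Record<string, string | number>
): void;

// 1. When a visitor is bucketed into a variant, forward the exposure to GA4.
function trackExperimentExposure(experimentId: string, variantId: string): void {
  gtag("event", "experiment_exposure", {
    experiment_id: experimentId, // hypothetical parameter names
    variant_id: variantId,
  });
}

// 2. The actual goal (e.g. a sign-up) is tracked as a separate event.
function trackSignup(): void {
  gtag("event", "sign_up", { method: "form" });
}

// 3. Joining exposures to goals happens later, in GA4 reports or exports,
//    per experiment, before any result can be read with confidence.
```

Every extra hop in that chain is a place where definitions drift and numbers stop matching, which is exactly the gap Impact Goals are meant to close.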
Over time, this created a clear split. Experiments were running inside Prepr, but the real answers lived somewhere else. As long as that gap existed, optimization was always based on partial insight.