Redesigning NPR Rail Suggestion Algorithm
to Lift AOV and Reduce Perishable Waste

Blinkit’s Next Product Recommendation, or NPR, rail is the strip of suggested products that appears after a customer adds something to cart. Its purpose is to help customers discover what they are most likely to want next.

The rail already existed, but the recommendations were often too narrow to be useful. In many cases, it surfaced products of the same type as whatever had just been added to cart. If someone added bread, they were still likely to see more bread, instead of products they were more likely to add next. That limited the rail's value as a cross-sell surface and left room to make it more useful for both customers and the business.

The challenge was not just deciding what to recommend, but building a system that could make those recommendations feel relevant to the cart and still work in a real business setting.

₹8

AOV uplift per order

+1.8

Additional items ATCd
in rail-driven baskets

16% → 25%

Rail Impression → ATC

~6%

reduction in
perishable disposal losses

ROLE

Senior Business Analyst /
Product Strategy

PROJECT TYPE

Recommendation
algorithm design

STAKEHOLDERS

Growth, Category,
Merchandising, Supply Chain

TIMELINE

~ 6 Weeks

TOOLS

SQL, BI dashboards,
PMI analysis, A/B testing

MY CONTRIBUTION

Defined scoring logic, signal weights, filter sequence, perishable overrides, and cold-start rules

The team's ask

Blinkit already had a Next Product Recommendation, or NPR, rail that appeared after a customer added something to cart. But the logic behind it was too narrow. It often surfaced products similar to what was already in the basket, instead of products the customer was more likely to add next. The team needed a better recommendation approach that could make the rail more useful as a cross-sell surface, while still accounting for store-level inventory and category priorities.

How might we recommend the right next product? One that fits the cart, reflects personal history, and actually gets added.

OLD LOGIC

Overall product popularity ranked

Items ranked by platform-wide popularity,
often from the same ptype suggested.

🛒 ADD TO CART : 🥯 BAGELS

🥛 MILK

🍞 BREAD

🥐 CROISSANT

🫓 PITA BREAD

NEAR-ZERO INCREMENTAL ATC

Why this was harder than it looked

This was not just a ranking problem. To make the recommendations work, the algorithm had to balance what felt like a natural next add for the customer, the right price point for impulse buying, live inventory at the store level, support for niche categories, and the business goal of reducing perishable disposals.

How I approached the problem

The starting point was understanding why the old rail failed. It wasn't a data problem; Blinkit had rich ATC and order-history data. It was a logic problem. The rail was using the wrong signal: item popularity instead of cart affinity.

The solution needed three things: a scoring system that combined multiple signals with the right weights, a time-aware layer for perishables, and guardrails that kept recommendations trustworthy at the region level.

The Scoring System

For each cart state, the system generated a shortlist of candidate products. Each one was scored using three weighted signals, with cart context carrying the most weight, followed by user purchase history and subtype patterns.

Cart State

Customer adds item to cart

Score ~30 candidates

PMI + Persona + Subtype affinity

Filter and rank

Perishable boost → Price evaluation → Inventory check

Top 15 tiles served

Ranked, filtered, real-time

Why these weights:

Cart context mattered most because the rail needed to respond to what was already in the basket.

Personalisation helped when history was available.

Subtype affinity acted as a fallback layer and supported broader category pairings.

Formula

Final score = (0.5 × PMI basket) + (0.3 × personalisation) + (0.2 × subtype affinity)
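As a minimal Python sketch of the blend (the signal values below are hypothetical and assumed pre-normalised to the 0-1 range):

```python
WEIGHTS = {"pmi": 0.5, "persona": 0.3, "subtype": 0.2}

def final_score(pmi_basket, personalisation, subtype_affinity):
    """Blend the three signals using the rail's weights."""
    return (WEIGHTS["pmi"] * pmi_basket
            + WEIGHTS["persona"] * personalisation
            + WEIGHTS["subtype"] * subtype_affinity)

# Strong cart affinity outweighs generic personal popularity:
in_context = final_score(0.9, 0.2, 0.4)       # e.g. strawberries after Nutella
out_of_context = final_score(0.1, 0.9, 0.5)   # e.g. milk: popular, but unrelated
```

With these weights, a candidate scoring 0.9 on cart affinity beats one scoring 0.9 on personalisation alone, which is exactly the behaviour the rail needed.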

How the ATC basket score works

Every SKU in the catalogue was scored against every other SKU based on how often they appeared in the same cart. But raw co-occurrence counts have a bias problem: universally popular items like milk and eggs inflate scores simply by appearing in enough carts. PMI corrects for this.

PROBLEM WITH RAW COUNTS

Popular items dominate

Milk co-occurs with almost everything, not because it pairs well, but because it's bought frequently.
Raw counts would surface milk for every cart regardless of context.

🛒 Cart: 🌰 Nutella → Raw top match: 🥛 Milk (high volume, low relevance)

PMI SOLUTION

Normalize for popularity

PMI measures how much more often two items appear together than chance would predict. A strong niche pairing scores higher than a generic popular one.

🛒 Cart: 🌰 Nutella → PMI top match: 🍓Strawberries (low volume, higher affinity)

Why PMI instead of raw co-occurrence

Raw counts tend to over-rank universally popular products like milk and eggs. PMI corrects for base popularity, so stronger but less obvious pairings can surface higher.

PMI(A,B) = log [ P(A,B) / ( P(A) × P(B) ) ]
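The correction can be sketched in Python over toy cart data (the carts below are invented for illustration; natural log assumed):

```python
import math
from collections import Counter
from itertools import combinations

def pmi_table(carts):
    """PMI for every SKU pair in a list of carts (each cart a set of SKUs).
    Positive PMI means the pair co-occurs more often than chance predicts."""
    n = len(carts)
    item_counts, pair_counts = Counter(), Counter()
    for cart in carts:
        item_counts.update(cart)
        pair_counts.update(frozenset(p) for p in combinations(sorted(cart), 2))
    table = {}
    for pair, count in pair_counts.items():
        a, b = tuple(pair)
        table[pair] = math.log((count / n) / ((item_counts[a] / n) * (item_counts[b] / n)))
    return table

# Milk appears in 8 of 10 carts; nutella + strawberries co-occur only twice,
# yet PMI ranks the niche pairing above the merely popular one:
carts = ([{"milk", "bread"}] * 4 + [{"milk", "eggs"}] * 3
         + [{"milk", "nutella"}] + [{"nutella", "strawberries"}] * 2)
pmi = pmi_table(carts)
```

In practice a table like this would be precomputed offline from order history and refreshed periodically; the sketch only shows the normalisation step.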

A smarter personalisation layer: user personas

Individual-level personalisation is noisy in quick-commerce. Most users order 2-4 times a month, so that's roughly 6-12 orders of history in 90 days, which is thin signal. Instead of scoring at the individual level, users are clustered into behavioural personas based on order patterns. Each persona creates a dense, reliable signal pool.

THE PERSONA FRAMEWORK
Users can belong to multiple personas; recommendations combine signals across all active personas
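One simple way to combine signals across active personas is to average a SKU's affinity over every persona the user belongs to. The persona names and lookup table below are invented for illustration; the real clustering and scoring are not public:

```python
def persona_score(sku, active_personas, affinity):
    """Blend a SKU's affinity across every persona the user belongs to.
    affinity: {persona: {sku: 0-1 score}} -- a hypothetical lookup table."""
    if not active_personas:
        return 0.0
    return sum(affinity[p].get(sku, 0.0) for p in active_personas) / len(active_personas)

# A user who is both a "baker" and a "breakfast_loader" (made-up personas):
affinity = {
    "baker": {"yeast": 0.9, "butter": 0.7},
    "breakfast_loader": {"milk": 0.8, "butter": 0.6},
}
score = persona_score("butter", ["baker", "breakfast_loader"], affinity)  # (0.7 + 0.6) / 2
```

Averaging keeps the score comparable for users with one persona and users with several, so the 30% weight behaves consistently.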

Handling new users: cold start

For users with no order history, persona classification happens after the first 2-3 orders. Until then, weights shift to subtype affinity.

The personalisation score (30%) was redistributed to subtype affinity, making effective weights 50% ATC + 50% subtype affinity. New users got strong cross-category recommendations without defaulting to generic bestsellers.

New user : 50% ATC · 0% Personal · 50% Subtype
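The weight shift can be sketched as a simple rule (the case study gives a 2-3 order threshold; the sketch pins it to 3 for illustration):

```python
PERSONA_MIN_ORDERS = 3  # persona classification kicks in after the first 2-3 orders

def signal_weights(order_count):
    """Return the blend weights for a user. Before persona classification,
    the 30% personalisation weight is redistributed to subtype affinity."""
    if order_count < PERSONA_MIN_ORDERS:
        return {"pmi": 0.5, "persona": 0.0, "subtype": 0.5}
    return {"pmi": 0.5, "persona": 0.3, "subtype": 0.2}
```

Either way the weights sum to 1, so scores stay comparable across new and established users.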

Post Scoring Filters - Applied in Order

Scoring ranks the candidates. Filters decide what actually gets served. These run sequentially before tiles are populated.
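A sketch of the filter sequence, assuming the order described in the flow (perishable boost, then price evaluation, then inventory check). The boost multiplier and price cap below are invented placeholders, not the production values:

```python
from datetime import datetime

PERISHABLE_BOOST = 1.2   # hypothetical multiplier; the real value isn't public
EVENING_HOUR = 19        # the 7 PM perishable override from the case study
MAX_PRICE_RATIO = 0.5    # hypothetical impulse-price cap relative to cart value
RAIL_SIZE = 15           # top tiles served

def serve_rail(candidates, stock, cart_value, now):
    """Run the post-scoring filters in order, then return the top tiles.
    candidates: list of {"sku", "score", "price", "perishable"} dicts."""
    served = []
    for c in candidates:
        score = c["score"]
        if c["perishable"] and now.hour >= EVENING_HOUR:
            score *= PERISHABLE_BOOST            # time-of-day override
        if c["price"] > MAX_PRICE_RATIO * cart_value:
            continue                             # too expensive for an impulse add
        if stock.get(c["sku"], 0) <= 0:
            continue                             # store-level availability guardrail
        served.append({**c, "score": score})
    served.sort(key=lambda c: c["score"], reverse=True)
    return served[:RAIL_SIZE]

# After 7 PM, an in-stock perishable can overtake a slightly higher-scored dry item:
cands = [
    {"sku": "paneer", "score": 0.60, "price": 80, "perishable": True},
    {"sku": "chips", "score": 0.65, "price": 30, "perishable": False},
    {"sku": "truffle_oil", "score": 0.90, "price": 900, "perishable": False},
    {"sku": "curd", "score": 0.80, "price": 50, "perishable": True},
]
stock = {"paneer": 4, "chips": 9, "truffle_oil": 2, "curd": 0}
rail = serve_rail(cands, stock, cart_value=300, now=datetime(2023, 5, 1, 20, 0))
```

Running the filters after scoring, rather than folding them into the score, keeps each guardrail independently tunable.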

A tradeoff I had to defend

Leadership concern

"Surfacing perishables after 7PM might feel like the app is offloading unwanted products which might hurt trust."

How I defended the approach

The override was invisible to customers: perishable items were ranked higher within an already-personalised list, not surfaced in a separate clearance experience.

Post-launch: no drop in repeat order rate or rail engagement. 5.8% disposal reduction confirmed.

The recommendation rail - in action

SIGNAL 1

Cross - ptype affinity

PMI normalised cart pairing

SIGNAL 2

Personalisation

Behavioural persona cluster

SIGNAL 3

Time of Day Boost

Perishable multiplier post 7PM

Validation and Pilot Learnings

Before national rollout, the algorithm was soft-launched to validate core scoring logic and surface edge cases. The pilot revealed both signal quality issues and the critical importance of the inventory guardrail.

Three carts, three outputs

The same algorithm produces meaningfully different results based on cart contents, user history, and time of day. Each card below shows how the scoring system adapts.

Impact Charts

The rail shipped to all stores. Conversion improved, AOV lifted, and perishable disposal fell. All without a separate customer-facing experience.

SNIGDHA NAGPAL
Built at Blinkit · 2023