
Amazon Rufus uses a combination of semantic AI processing, retrieval-augmented generation (RAG) across four confirmed training data sources, a personalisation layer based on individual account activity, and review sentiment analysis to decide which products to recommend - adding an AI layer that interprets why a customer is buying, not just what they typed, on top of the keyword-matching logic of Amazon's legacy A9 algorithm. Amazon's COSMO knowledge graph, deployed across Amazon's search applications, is widely understood within the industry to power the contextual intelligence behind Rufus's intent matching. With over 300 million users and nearly $12 billion in incremental annualised sales confirmed by Amazon's Q4 2025 earnings, Rufus is now the single most consequential factor in Amazon product discovery. Sellers who understand the decision architecture behind Rufus's recommendations gain a structural advantage over those still optimising for keyword density alone.
This article breaks down the specific signals, systems, and data sources Rufus uses to decide which products to surface - and which to ignore. It draws on Amazon's published COSMO research (ACM SIGMOD 2024), Amazon's official launch disclosures, independent empirical studies of Rufus recommendation patterns, and practitioner analysis from listing optimisation professionals. Where a claim is sourced from Amazon directly, we say so. Where it reflects industry observation and logical inference, we say that too.
Rufus is trained on four data sources that Amazon confirmed in its original launch announcement: Amazon's product catalogue, customer reviews, community Q&A, and information from across the web. A fifth dimension - individual account-level personalisation based on browsing and purchase history - was announced separately in November 2025 as a feature enhancement, not as an original training data source. This distinction matters: the four training inputs are what Rufus draws from to understand products generally, while the personalisation layer determines which products Rufus surfaces for a specific individual. Understanding both is the foundation of any serious optimisation strategy.
1. Product listing content is the most directly controllable of the four confirmed training inputs. Amazon confirmed that Rufus reads product listings - titles, bullet points, and product descriptions - to construct an understanding of what a product is, who it serves, and what problems it solves. Amazon's broader AI search infrastructure also uses retrieval-augmented generation across product page data, which means A+ Content and backend attributes likely contribute to the data environment Rufus draws from - though Amazon has not separately confirmed these as distinct Rufus inputs. Unlike Amazon's A9 algorithm - which indexed listings primarily for keyword presence - Rufus processes listing content semantically, evaluating clarity, completeness, specificity, and internal consistency rather than scanning for keyword matches. A title that says "Stainless Steel Water Bottle 32oz BPA-Free" gives Rufus fewer usable knowledge nodes than "32oz Double-Wall Vacuum-Insulated Stainless Steel Water Bottle - Keeps Drinks Cold 24 Hours, BPA-Free, Leak-Proof Lid, Designed for Gym and Outdoor Use." The second version provides material, insulation method, performance claim, safety specification, and two use contexts - each one a potential connection point in Amazon's COSMO knowledge graph.
2. Customer reviews function as an independent verification layer. Rufus mines review text for natural-language descriptions of how customers actually use a product, what problems it solved, and what disappointed them. When a shopper asks Rufus "is this blender quiet enough for early mornings?", Rufus can pull sentiment directly from reviews that mention noise levels - and it will surface the language customers actually used, not the language the seller wrote. This creates a critical dynamic that practitioner testing consistently reveals: when listing claims conflict with review consensus, Rufus tends to surface the review language rather than the seller's copy. Sellers whose bullets say "whisper-quiet motor" while reviews consistently describe "not silent but reasonably quiet" will often find Rufus citing the review phrasing instead.
3. Community Q&A provides another structured data layer. Rufus reads the questions shoppers have asked and the answers provided by both sellers and other customers. A robust Q&A section - particularly one where the seller has answered directly - gives Rufus additional factual anchors to draw from when responding to conversational queries. Listings with empty Q&A sections forfeit this entire data channel.
4. Web content provides the broadest contextual layer. Amazon confirmed in its launch announcement that Rufus draws on web-based information beyond its own marketplace data, though the specific sources and weighting are not publicly detailed. This likely includes product category knowledge, brand information, and general product education content that helps Rufus understand the broader context of a product category.
Beyond the four training data sources above, Rufus applies a personalisation layer that adjusts recommendations per individual shopper. In November 2025, Amazon announced that Rufus now incorporates account memory based on individual shopping activity - and that this memory will extend across Amazon's broader ecosystem (Kindle reading habits, Prime Video viewing, Audible listening) in the coming months. This is a distinct mechanism from the training data: where the four confirmed sources inform Rufus's understanding of products in general, account memory determines which products Rufus surfaces for a specific shopper based on their purchase patterns, category affinities, and behavioural signals. Two shoppers asking the same question receive different product suggestions as a result.
COSMO (Common Sense Knowledge Generation and Serving System) is Amazon's commonsense knowledge graph, designed to map the real-world relationships between products, customer intentions, and shopping contexts. Published as a peer-reviewed paper at ACM SIGMOD 2024, COSMO represents Amazon's most significant investment in moving product discovery beyond keyword matching. Amazon has confirmed that COSMO is deployed in search applications including search navigation, and the system's intent-mapping architecture is widely understood within the industry to inform Rufus's contextual intelligence - though Amazon has not published a single document explicitly stating "COSMO powers Rufus." The connection is strongly supported by the technical architecture (both systems serve the same discovery pipeline) and by Amazon's own descriptions of Rufus using knowledge-graph-style intent matching.
COSMO's knowledge graph is constructed primarily from two types of behavioural data: search-buy pairs (what customers searched for and subsequently purchased within a defined time window) and co-purchase pairs (products bought together in the same shopping session). An LLM processes these behavioural signals to generate hypotheses about the commonsense relationships between products and human intentions. Those hypotheses are then validated through a combination of machine-learning classifiers and human-in-the-loop annotation before being codified as knowledge graph triples.
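That pipeline can be sketched in miniature. Everything below is invented for illustration, not Amazon's implementation: the sample data, the hypothesise() function standing in for the LLM hypothesis step, and a simple support threshold standing in for classifier-plus-human validation.

```python
from collections import Counter

# Toy behavioural data: (search query, purchased product) pairs.
search_buy_pairs = [
    ("shoes for pregnant women", "slip-resistant shoes"),
    ("shoes for pregnant women", "slip-resistant shoes"),
    ("gift for a trail runner", "trail running shoes"),
]

def hypothesise(query, product):
    """Stand-in for the LLM step: propose a commonsense triple
    (head, relation, tail) explaining why the purchase fits the query."""
    if " for " in query:
        tail = query.split(" for ", 1)[1].strip()
        return (product, "used_for_audience", tail)
    return None

def validate(triple, counts, min_support=2):
    """Stand-in for classifier + human-in-the-loop review: keep only
    hypotheses supported by repeated shopping behaviour."""
    return counts[triple] >= min_support

counts = Counter(t for t in (hypothesise(q, p) for q, p in search_buy_pairs) if t)
knowledge_graph = [t for t in counts if validate(t, counts)]
print(knowledge_graph)
# [('slip-resistant shoes', 'used_for_audience', 'pregnant women')]
```

The real system generates hypotheses with a large language model and validates them with trained classifiers plus human annotation, but the output shape is the same: (head, relation, tail) triples grounded in behavioural evidence. Note how the one-off "trail runner" hypothesis is filtered out for lack of support.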
The COSMO paper defines 15 relationship types in total; the five primary ones that structure this knowledge are:
used_for_activity connects products to the activities they serve. Trail running shoes are linked to trail running; a camping stove is linked to outdoor cooking. Listings that name the specific activity a product serves create stronger nodes in this relationship category.
used_for_audience connects products to the people who use them. The paper's canonical example: slip-resistant shoes linked to pregnant women. Listings that explicitly name their target audience - rather than relying on shoppers to infer it - create addressable nodes that COSMO can match to intent queries.
used_with captures product compatibility relationships. A blender linked to frozen fruit, ice, and protein powder. Listings that name compatible inputs, accessories, and complementary products strengthen these co-purchase connections.
capable_of maps functional capabilities. A headlamp linked to increasing visibility for motorists. This relationship rewards specificity: "increases visibility" is weaker than "200-lumen beam visible at 150 metres" because the latter provides a testable, anchored claim.
isA provides taxonomic classification beyond Amazon's standard browse node hierarchy. A product might be classified conceptually - for example, as a "normal suit" - rather than by the rigid category tree alone.
The practical implication is significant. When a shopper asks Rufus "what's a good gift for someone who just started trail running?", COSMO's knowledge graph connects that intent to products with used_for_activity → trail running and used_for_audience → beginner runner relationships. An important distinction: COSMO builds its knowledge graph primarily from customer behaviour data (search-buy and co-purchase patterns), not by directly parsing your listing text. However, product catalogue information (titles, descriptions, attributes) is used as input when COSMO's knowledge is applied to downstream tasks like search relevance scoring. This means your listing content shapes how well COSMO's knowledge maps to your specific product - even though it is not the primary source of the knowledge itself.
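To make the mechanics concrete, here is a toy version of graph-style intent matching. The triples, product names, and query representation are all hypothetical - a sketch of the idea, not Amazon's retrieval logic.

```python
# Hypothetical knowledge graph triples in COSMO's (head, relation, tail) shape.
triples = [
    ("TrailMaster X1 shoe", "used_for_activity", "trail running"),
    ("TrailMaster X1 shoe", "used_for_audience", "beginner runner"),
    ("RoadPro racing flat", "used_for_activity", "road racing"),
]

def match_intent(required):
    """Return products whose facts satisfy every (relation, tail)
    pair in the shopper's decoded intent."""
    results = []
    for product in {h for h, _, _ in triples}:
        facts = {(r, t) for h, r, t in triples if h == product}
        if required <= facts:  # intent is a subset of the product's facts
            results.append(product)
    return results

# Shopper: "what's a good gift for someone who just started trail running?"
intent = {("used_for_activity", "trail running"),
          ("used_for_audience", "beginner runner")}
print(match_intent(intent))  # ['TrailMaster X1 shoe']
```

Notice that neither product listing needs to contain the word "gift": the match happens at the level of decoded intent, which is exactly what keyword matching cannot do.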
In Amazon's own evaluation, a cross-encoder model enhanced with COSMO knowledge graph data achieved a 60% improvement in macro F1 score for product relevance over a baseline model without COSMO data. Even with fine-tuned encoders (a more realistic production scenario), the improvement was 22-28%. COSMO has been deployed across 18 major Amazon product categories, with millions of high-quality knowledge assertions generated from only 30,000 annotated instructions.
The evidence that reviews matter is strong. Independent research consistently shows that Rufus overwhelmingly recommends products with strong review profiles, though the specific thresholds differ slightly between studies.
A study by Mars United Commerce and Profitero+, based on analysis of over 1,000 products recommended across approximately 300 Rufus prompts, found three clear patterns. First, Rufus only recommends products with a rating of 4 stars or higher. Second, the average number of reviews for recommended products is approximately 9,000. Third, products with very low review counts are rarely surfaced - items with only 1 review appeared in just 0.2% of recommendations.
A separate study by Amalytix, which analysed over 1,300 Rufus-recommended products across 500 generic U.S. search terms, found complementary patterns. The median star rating among recommended products was 4.4, with an interquartile range of 4.2 to 4.6, indicating that Rufus's recommendation set is tightly clustered around the 4-star-plus threshold. The study also found that 41.8% of recommended products carried the Amazon's Choice badge - a significant over-representation relative to the overall marketplace, though not a strict prerequisite.
For sellers, the implication is direct: review quality and quantity function as gating criteria for Rufus recommendations. A listing can be semantically optimised to perfection, but if its review profile falls below these thresholds, Rufus is unlikely to surface it in conversational recommendations. This makes review strategy a prerequisite to Rufus optimisation - not an afterthought.
Amazon's broader AI search infrastructure uses computer vision across product pages, and practitioner testing suggests Rufus processes visual content as part of its recommendation logic - though Amazon has not published specific documentation confirming that Rufus uses computer vision or OCR in its recommendation pipeline. What is observable is that product images correlate strongly with recommendation outcomes in empirical studies, and that Amazon's product page architecture treats images as structured data (with alt text, image classification, and visual search features) that feed into the wider discovery ecosystem.
The Amalytix study found that Rufus-recommended products carried a median of 7 images per listing, with an interquartile range of 6 to 9. Products with fewer than 4 images were rare among recommendations. For videos, the median was 3 per listing, though 34% of recommended products had no videos at all - suggesting that while video presence is advantageous, it is not a strict requirement.
Whether or not Rufus directly analyses image content, there is a strong practical case for treating images as data inputs rather than purely visual marketing. Infographic images with text overlays, feature callout images, comparison charts, and size diagrams provide additional readable content on the product detail page - content that Amazon's various AI systems can potentially extract and reference. A lifestyle image showing a product being used in a specific context (e.g., a headlamp being worn while trail running at dusk) reinforces the listing's text-based claims through visual evidence.
Alt text in A+ Content, which sellers historically ignored because it had minimal impact on A9 keyword ranking, is worth populating with descriptive content. While its specific role in Rufus's processing is unconfirmed, alt text provides structured metadata that Amazon's broader AI infrastructure can access. An alt text reading "Woman making green smoothie with high-speed blender using tamper tool to crush ice" communicates three product associations (green smoothies, ice crushing, tamper tool inclusion) that may contribute to the data environment informing product recommendations.
When multiple products satisfy a shopper's query, Rufus applies a multi-layered evaluation that goes well beyond the keyword relevance and sales velocity signals that dominated A9 ranking. Amazon has not published Rufus's internal ranking hierarchy, but a combination of Amazon's technical disclosures, the COSMO research paper, and systematic practitioner observation points to several evaluation dimensions that demonstrably influence which products Rufus surfaces.
Semantic relevance appears to be the primary filter, based on how RAG-based systems function and on observable Rufus behaviour. Rufus evaluates how well a listing's content answers the specific question the shopper asked - not how many keywords it contains. A shopper asking "what's the best knife for cutting raw chicken?" will see listings that explicitly address poultry preparation, blade sharpness for proteins, and food-safe materials ranked above listings that generically describe themselves as "kitchen knives."
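The difference between the two matching styles can be shown with a deliberately crude example. Real systems use learned embeddings rather than a hand-written synonym table; the table and the listing snippets below are invented purely for demonstration.

```python
# Keyword matching vs. a crude semantic expansion. The synonym table is a
# stand-in for what embedding models learn from data.
SYNONYMS = {"chicken": {"poultry"}, "cutting": {"slicing", "boning"}}

def keyword_score(query, listing):
    """A9-style: count shared words between query and listing."""
    q, l = set(query.lower().split()), set(listing.lower().split())
    return len(q & l)

def semantic_score(query, listing):
    """Rufus-style (toy): a query word also matches via related concepts."""
    l = set(listing.lower().split())
    score = 0
    for word in query.lower().split():
        if ({word} | SYNONYMS.get(word, set())) & l:
            score += 1
    return score

query = "best knife for cutting raw chicken"
generic = "professional kitchen knife set"
specific = "boning knife for poultry preparation"

print(keyword_score(query, generic), keyword_score(query, specific))    # 1 2
print(semantic_score(query, generic), semantic_score(query, specific))  # 1 4
```

Pure keyword overlap barely separates the two listings, while even this naive semantic expansion strongly prefers the listing that addresses poultry preparation - the same direction of effect described above.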
Claim specificity influences recommendation confidence. Practitioner testing consistently shows that Rufus cites specific bullet content when generating conversational responses - which means listings with verifiable, detailed claims give Rufus more material to work with. A listing that says "premium quality" provides no citable evidence; a listing that says "VG-10 stainless steel, 60 HRC hardness rating" provides a verifiable specification that Rufus can present to the shopper. This is why ZonGuru's Listing Engineering approach treats every listing as a structured product knowledge system - engineering specific, verifiable claims that give Rufus the data it needs to make confident recommendations.
Review-listing consistency appears to act as a trust signal, based on observable Rufus behaviour. When listing claims align with what customers say in reviews, Rufus demonstrates higher confidence in the product. When they conflict, Rufus tends to surface the review language instead. This observable dynamic creates a feedback loop: accurate listing claims generate aligned reviews, which reinforce recommendation confidence, which generates more recommendations. The inverse loop is equally consequential.
Engagement signals provide behavioural validation. Amazon has not published the specific engagement metrics Rufus evaluates, but industry analysis and practitioner observation suggest that dwell time on product detail pages, image interaction patterns (zooming, scrolling through carousel), Q&A engagement, and post-click behaviour all contribute to what amounts to a quality score. A high click-through rate followed by a quick exit can hurt rather than help, because it signals to Rufus that the listing did not deliver on its promise.
Personalisation layer adjusts recommendations per individual. Two shoppers asking the same question can receive different product recommendations based on their purchase history, category affinities, price sensitivity signals, and - increasingly - their broader Amazon ecosystem activity. This means that a product's visibility in Rufus is not a single fixed position but a dynamic score that shifts based on who is asking.
The available data strongly suggests that Prime eligibility is a significant factor in Rufus's recommendations, though Amazon has not confirmed it as a direct ranking signal.
The Amalytix study of 1,300+ recommended products included fulfilment type as one of its analysed variables, and the data showed that Prime-eligible, Amazon-fulfilled products dominated the recommendation set. This is consistent with Amazon's broader platform incentives - Prime eligibility correlates with faster delivery, easier returns, and higher customer satisfaction, all of which align with Rufus's apparent goal of recommending products that lead to positive shopping outcomes. However, as with all correlational findings, this pattern may reflect broader marketplace dynamics (Prime-eligible products tend to have higher sales velocity, more reviews, and better content) rather than Prime eligibility functioning as a direct ranking signal.
For sellers, the practical takeaway is that FBA fulfilment and Prime eligibility appear to function as strong correlates - if not baseline requirements - for meaningful Rufus visibility. Products fulfilled by merchant (FBM) without Prime badges may face a structural disadvantage, though isolating Prime as an independent variable is difficult given how tightly it correlates with other positive listing attributes.
The difference is architectural, not incremental. Amazon's A9/A10 algorithm and Rufus operate from fundamentally different paradigms - and understanding this distinction is what separates sellers who adapt successfully from those who continue optimising for a system whose share of Amazon's discovery traffic is being progressively supplemented by conversational AI.
A9 is a keyword-matching and ranking system. It evaluates whether a listing contains the terms a shopper searched for, then ranks results based on a combination of keyword relevance, conversion rate, sales velocity, pricing competitiveness, and fulfilment method. A9 answers the question: "Which products match this search term and sell well?"
Rufus is a semantic understanding and recommendation system. It evaluates whether a product satisfies a shopper's intent - often expressed as a natural-language question or conversational request - by processing structured and unstructured data across multiple modalities. Rufus answers a different question: "Which product best solves this shopper's specific problem, and can I confidently explain why?"
Both systems coexist. Amazon has not switched A9 off. As of early 2026, keyword relevance, conversion rates, sales velocity, and review quality remain foundational ranking signals for traditional search results. Rufus adds a semantic intelligence layer on top of this foundation, operating in parallel through the conversational interface that now mediates a substantial and growing share of Amazon shopping sessions.
The practical difference is what each system rewards. A9 rewards keyword coverage - the more search terms your listing is indexed for, the more queries it appears in. Rufus rewards knowledge density - the more specific, verifiable, intent-relevant information your listing communicates, the more confidently Rufus can recommend it.
Rufus demonstrates a high degree of consistency in its core recommendations, with meaningful variability around the edges. This is a deliberate design pattern, not a flaw.
The Amalytix study tested this directly by running 10 keywords through Rufus five times each and comparing results. The findings: 2-3 ASINs per keyword appeared in 100% of test runs, indicating a strong "anchor set" that Rufus consistently recommends. An additional 4-6 ASINs appeared inconsistently across runs, rotating in and out to provide shoppers with diverse options.
This pattern suggests that Rufus maintains a core set of highly confident recommendations for each query - products it has strong evidence to support - while introducing controlled variability to surface alternative options. For sellers, this means that earning a place in the anchor set requires strong, consistent signals across all of Rufus's evaluation criteria: semantic relevance, review quality, listing specificity, and engagement metrics. Products in the variable set may appear intermittently, potentially cycling in and out as Rufus experiments with different recommendations.
The personalisation layer adds another dimension: what appears "inconsistent" in aggregate testing may reflect different user profiles receiving different results based on their individual shopping history and preferences.
There is no single answer, because the timeline depends on which layer of Rufus's intelligence you are trying to influence.
Listing text changes (titles, bullets, descriptions) are the fastest to register. Amazon has confirmed that Rufus is a RAG system with responses enhanced by retrieving product information at inference time, which means changes to on-page text should be reflected relatively quickly as Amazon's systems re-index the listing. Precise timelines are not publicly documented, but practitioners generally report seeing changes reflected within days to weeks.
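The RAG pattern itself is straightforward to sketch: retrieve the most relevant listing text at question time, then hand it to a language model as context. The overlap-based retriever and the generate() stub below are illustrative stand-ins, not Rufus's actual components.

```python
def retrieve(question, documents, k=1):
    """Rank documents by naive word overlap with the question (real systems
    use vector embeddings) and return the top k."""
    q = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def generate(question, context):
    """Stand-in for the LLM call: a real system would prompt a model with
    the retrieved context and the shopper's question."""
    return f"Based on the listing: {context[0]}"

# Listing content, chunked as a retrieval index might store it.
listing_chunks = [
    "Keeps drinks cold for 24 hours via double-wall vacuum insulation",
    "Leak-proof lid with carry loop for travel",
]
question = "how long does it keep drinks cold"
print(generate(question, retrieve(question, listing_chunks)))
```

The property that matters for sellers falls out of the architecture: because retrieval happens at question time, an updated listing chunk is reflected as soon as it is re-indexed, with no model retraining required.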
COSMO knowledge graph updates take significantly longer. Because COSMO's knowledge is constructed from aggregated behavioural data (search-buy and co-purchase patterns), changes to your listing content only affect COSMO's model once enough new shopping behaviour accumulates to shift the graph's understanding of your product. The COSMO paper describes batch processing for knowledge graph construction, and industry practitioners report that COSMO-level changes can take several months to fully register.
Review profile changes are the slowest to influence. If your listing's review sentiment shifts (e.g., a manufacturing improvement eliminates a common complaint), Rufus will eventually reflect the new consensus - but only after enough new reviews accumulate to outweigh the historical sentiment signal.
The strategic implication: sellers should treat Rufus optimisation as a compounding investment rather than a one-time fix. The earlier you restructure your listing content for semantic relevance and knowledge density, the sooner the behavioural data that feeds COSMO begins to shift in your favour.
The most effective immediate action is to audit your listing against the specific signals Rufus evaluates - and close the gaps between what your listing communicates and what Rufus needs to make a confident recommendation.
Restructure bullet points as answers to customer questions. Practitioner testing consistently shows that Rufus cites specific bullet content when generating conversational responses - which means each bullet functions as a potential data source the AI can reference. Each bullet should answer at least one question a real shopper would ask - and it should answer it with specific, verifiable information, not generic marketing language. "Durable construction" gives Rufus nothing to cite. "18/10 stainless steel construction, tested to 20,000 squeeze cycles, dishwasher safe on top rack" gives Rufus three distinct, citable data points.
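One way to pressure-test your own bullets is a rough "citability" count: how many concrete figures and spec-style phrases a bullet contains. The heuristic below is invented for illustration (it is not a real Rufus scoring rule, and its exact counts are arbitrary), but the contrast it surfaces is the one that matters.

```python
import re

def citable_points(bullet: str) -> int:
    """Count rough 'citable' specifics: numeric figures (sizes, counts,
    ratings) plus a few spec-style phrases. Invented heuristic for
    illustration only - extend the phrase list for your own category."""
    figures = re.findall(r"\d[\d,./]*", bullet)
    phrases = re.findall(r"stainless steel|dishwasher safe|BPA-free",
                         bullet, re.IGNORECASE)
    return len(figures) + len(phrases)

vague = "Durable construction with premium quality materials"
specific = ("18/10 stainless steel construction, tested to 20,000 squeeze "
            "cycles, dishwasher safe on top rack")

print(citable_points(vague))     # 0
print(citable_points(specific))  # 4
```

A bullet that scores zero on even a crude check like this gives a conversational AI nothing concrete to quote back to a shopper.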
Map your listing to COSMO's 15 relationship types. Review your title, bullets, and description against the used_for_activity, used_for_audience, used_with, capable_of, and isA relationships at a minimum. If your listing does not explicitly name the activity your product serves, the audience it is designed for, compatible products, and specific functional capabilities, you are leaving knowledge nodes empty - and those empty nodes are discovery pathways Rufus cannot find.
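A simple audit script makes that gap analysis systematic. The cue phrases below are invented examples - in practice you would build the cue list from your own category's language - but the structure of the check mirrors the five primary relationship types.

```python
# Toy audit: which of COSMO's five primary relationship types does a listing
# explicitly address? Cue phrases are illustrative placeholders.
CUES = {
    "used_for_activity": ["for gym", "for camping", "for trail running"],
    "used_for_audience": ["for beginners", "for professionals", "for kids"],
    "used_with":         ["compatible with", "works with", "fits"],
    "capable_of":        ["keeps drinks cold", "crushes ice", "holds"],
    "isA":               ["water bottle", "blender", "headlamp"],
}

def audit(listing: str) -> dict:
    """Flag each relationship type as covered or not, by cue-phrase presence."""
    text = listing.lower()
    return {rel: any(cue in text for cue in cues)
            for rel, cues in CUES.items()}

listing = ("32oz Vacuum-Insulated Stainless Steel Water Bottle - Keeps Drinks "
           "Cold 24 Hours, Leak-Proof Lid, Designed for Gym and Outdoor Use")
gaps = [rel for rel, covered in audit(listing).items() if not covered]
print(gaps)  # ['used_for_audience', 'used_with']
```

For this example listing, the audit flags the audience and compatibility relationships as unaddressed - exactly the kind of empty node described above, and a concrete prompt for the next listing revision.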
Align listing claims with review evidence. Audit your top 50 reviews and identify the language customers actually use to describe your product. Where listing claims and review language diverge, update the listing to match verified reality. Over-promising relative to review evidence actively undermines Rufus's recommendation confidence.
Populate your Q&A section. Identify the 10 most common questions shoppers ask in your category (use competitor Q&A sections, customer reviews, and "People Also Ask" data as sources). Ensure your Q&A section addresses each one with a specific, helpful answer. An empty Q&A section means Rufus has one fewer data channel to draw from.
Invest in multimodal content. Ensure your image carousel includes infographic images with readable text, feature comparison charts, and lifestyle images showing the product in its primary use context. Add descriptive alt text to A+ Content images. Whether or not Rufus directly processes visual content through computer vision (which Amazon has not confirmed), image quality and quantity correlate strongly with recommendation outcomes in empirical studies - and the additional on-page content these images provide contributes to the broader data environment Amazon's AI systems draw from.
For sellers who want a systematic assessment rather than manual audit, ZonGuru's free COSMO Readiness Report scores your listing across the dimensions Rufus and COSMO evaluate - providing a structured diagnostic before optimisation begins. For comprehensive listing transformation, ZonGuru's COSMO Transformation Service applies the Listing Engineering methodology: ingesting product truth, analysing the competitive landscape, mapping attributes to COSMO's relationship architecture, and engineering structured product knowledge designed for both AI discoverability and human conversion.
Amazon Rufus is Amazon's generative AI-powered conversational shopping assistant, built on Amazon Bedrock using multiple large language models including Anthropic's Claude Sonnet, Amazon Nova, and a custom model trained specifically on Amazon's product catalogue and customer behaviour data. Rufus allows shoppers to ask questions, compare products, and receive personalised recommendations in natural language. Over 300 million customers have used Rufus as of early 2026.
Rufus generates recommendations by combining semantic analysis of four confirmed training data sources - product catalogue content, customer reviews, community Q&A, and web-based information - with an individual personalisation layer based on account-level browsing and purchase history. Amazon's COSMO knowledge graph, which maps commonsense relationships between products and human intentions, is deployed in Amazon's search applications and is widely understood to inform Rufus's contextual intelligence.
No - Rufus does not replace A9. The two systems operate in parallel: A9 handles traditional keyword-based search results, while Rufus powers the conversational AI shopping experience. Both influence product discovery, and optimising for both is necessary as of 2026.
Empirical research by Mars United Commerce and Profitero+ found that Rufus only recommends products with a 4-star rating or higher. Products below this threshold are effectively excluded from conversational recommendations.
The Mars United/Profitero+ study found an average of approximately 9,000 reviews among recommended products, with products carrying only 1 review appearing in just 0.2% of recommendations. While there is no confirmed minimum threshold, a substantial review count significantly increases recommendation likelihood.
COSMO (Common Sense Knowledge Generation and Serving System) is Amazon's AI-powered knowledge graph that maps relationships between products, customer intentions, and real-world contexts. Published at ACM SIGMOD 2024, COSMO is deployed in Amazon search applications including search navigation. The system's intent-mapping architecture is widely understood to inform Rufus's contextual intelligence - though Amazon has not published a single document explicitly stating "COSMO powers Rufus." The technical architecture and Amazon's own descriptions of Rufus's intent-matching capabilities strongly support the connection.
Amazon's broader AI search infrastructure uses computer vision across product pages, and image quality correlates strongly with Rufus recommendation outcomes in empirical studies. However, Amazon has not published specific documentation confirming that Rufus uses computer vision or OCR in its recommendation pipeline. Regardless of the specific mechanism, treating images as structured data inputs - with readable text overlays, clear feature callouts, and descriptive alt text - aligns with best practice for AI-driven discovery.
Analysis of 1,300+ Rufus-recommended products found a median of 7 images per listing, with an interquartile range of 6 to 9. Products with fewer than 4 images were rare among recommendations.
The available evidence strongly suggests that Prime eligibility matters. Prime-eligible, Amazon-fulfilled products dominated the recommendation set in empirical studies of Rufus outputs. While Amazon has not confirmed Prime as a direct Rufus ranking signal, and the correlation may partly reflect the fact that Prime-eligible products tend to have stronger review profiles and better content overall, FBA fulfilment appears to function as a strong correlate of meaningful Rufus visibility.
Listing text changes (titles, bullets, descriptions) are the fastest to register - Amazon has confirmed Rufus is a RAG system that retrieves product information at inference time, and practitioners generally report days to weeks for changes to reflect, though Amazon has not published specific timelines. COSMO knowledge graph updates take months, as they depend on accumulated shopping behaviour data. Review profile shifts are the slowest, requiring enough new reviews to outweigh historical sentiment.
Rufus maintains a core anchor set of 2-3 products per query that appear consistently, with an additional 4-6 products rotating for diversity. Personalisation further adjusts recommendations per individual shopper based on purchase history and behavioural signals.
The most damaging mistake is keyword stuffing. Rufus evaluates semantic coherence, not keyword density. A bullet filled with repetitive keywords provides Rufus with near-zero usable information and actively lowers the listing's trust signal. The second most damaging mistake is making benefit claims (e.g., "premium quality," "best in class") without anchoring them to specific, verifiable attributes.
Rufus mines review text for natural-language descriptions of product use cases, satisfaction drivers, and complaints. It synthesises this information into conversational summaries when responding to shoppers. Practitioner testing consistently shows that review language tends to take precedence over listing language when the two conflict.
Yes. Open the Amazon Shopping app, tap the Rufus chat icon, and ask the category-level and product-specific questions your target customers would ask. Document whether your product appears, what information Rufus presents about it, and whether any information is inaccurate or missing. Repeat monthly as Amazon's models evolve. For a systematic assessment, ZonGuru's free COSMO Readiness Report provides a structured evaluation across all major Rufus and COSMO scoring dimensions.
Listing Engineering is the discipline of structuring product listings as knowledge systems designed for both AI discoverability and human conversion - rather than as keyword-optimised marketing copy. ZonGuru's Listing Engineering methodology, powered by the Helix™ framework, maps product attributes to COSMO's relationship architecture, validates claims against review evidence, and engineers structured content that gives Rufus the data confidence it needs to recommend. It is the operational response to the shift from A9 keyword matching to COSMO-powered semantic discovery.
Discover opportunities. Maximize your sales. Grow your Amazon business!
Get started with ZonGuru, access all the tools with a FREE trial.