
Product-Market Fit with no product

Image by Hal Gatewood

Context & Challenges

SaferData.io was exploring how to bring a predictive AI model to market. The challenge wasn’t technical—it was defining customer needs and validating assumptions about product, market, and users without a real product to point to. Explaining the technology often confused participants, who struggled to think openly and instead tried to frame it through existing solutions—despite there being no direct competitors. High-fidelity visuals risked reinforcing this bias, anchoring feedback to surface preferences rather than uncovering deeper insights.

Needs:

Test the core value proposition of our novel statistical AI technology.

Keep discovery lightweight and unbiased.

Validate assumptions early to guide product direction.

Problems...



Large-scale population segmentation

Identify nuanced patterns across millions of consumer records, enabling users to bring together unrelated, non-linked, unidentified records from multiple sources for analysis.

Perception

Understood as “just another audience filter”

Compared to: Marketing automation tools (e.g., HubSpot, Marketo).

Image by Markus Winkler

Attribute generation

Enrich data sets with unknown attributes from the model or create attributes specific to the organization to have the model produce probability of association.

Perception

Interpreted as “profile auto-fill”

Compared to: CRM enrichment plug-ins and DAP enrichment tools.


Probabilistic inference

Estimate the likelihood of data-backed signals, like product uptake or churn.

Perception

Understood as “basic scoring” or “risk flags.”

Compared to: Credit-scoring systems built on machine learning rather than probabilistic models.
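To make the distinction from ML-based scoring concrete, probabilistic inference of this kind can be sketched as a small Bayes-rule calculation. This is an illustrative toy only: the signal names and rates below are invented, and the real model learns such associations from millions of records rather than hand-set values.

```python
# Toy sketch: estimating churn likelihood from observed consumer signals
# via Bayes' rule, assuming conditionally independent signals.
# All numbers are invented for illustration.

def churn_posterior(prior, likelihoods):
    """P(churn | signals).

    likelihoods: list of (P(signal | churn), P(signal | stay)) pairs
    for each observed signal.
    """
    p_churn, p_stay = prior, 1.0 - prior
    for p_given_churn, p_given_stay in likelihoods:
        p_churn *= p_given_churn
        p_stay *= p_given_stay
    return p_churn / (p_churn + p_stay)

# Two hypothetical observed signals: declining logins, a support complaint.
signals = [(0.7, 0.2), (0.4, 0.1)]
print(f"P(churn | signals) = {churn_posterior(0.05, signals):.2f}")
```

The output is a calibrated probability rather than an opaque score, which is the contrast participants missed when they mapped the capability onto credit-scoring tools.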

Image by Pierre Bamin

Imputation

Fill missing values with statistically faithful estimates, analyzing millions of records to fill in the blanks (impute) more precisely than existing tools on the market.

Perception

Dismissed as "Excel's fill-down" or auto-correct.

Compared to: Spreadsheet tools or data cleaning utilities.

Solution...

Research Framework

I created a modular, low-fidelity prototype to present in segments during discovery interviews. This strategy allowed me to:

Guide future development: The feedback and insights gathered from these interviews informed our project’s core requirements, serving as a blueprint for the technical team and validating our design vision.

Focus the conversation: By breaking the prototype into separate modules, we could guide the discussion to specific user problems and needs.

Validate core concepts: We confirmed our understanding of user pain points and their desired solutions without the distraction of premature design feedback.

Ideation

With the AI model's current capabilities in mind, we developed potential processes to demo for our users. I worked closely with the data scientist to ensure animations and designs did not misrepresent concepts; these ideation sketches let us coordinate which model features to develop further visually. Concurrently, we planned the specific questions and the Bolt-coded prototype that would accompany each segment, tailored by user type.

Development

I built a modular discovery prototype to showcase each capability of the Generative Consumer Model—segmentation, attribute generation, inference, and imputation—through simple, low-fidelity interfaces. Each module was paired with a scripted interview and targeted questions to isolate user feedback. In parallel, a complementary Bolt-coded prototype and guiding script stitched the modules together, allowing us to test product, market, customer, and pricing assumptions as a cohesive whole.

Large-scale population segmentation
Used varied data-UI representations of populations (distribution plots, tables, natural-language query, drag-and-drop, etc.) across questions to expand thinking beyond a single platform; each represented a different population examined from a different angle of analysis.

Attribute generation
Selected novel attributes not found in traditional banking datasets to demonstrate generative ability (e.g., a gamer population).

Probabilistic inference
Leveraged an LLM interface to elevate thinking beyond typical prediction outputs toward more nuanced conversations with data, such as share of wallet among regional organic shoppers.

Data imputation
Showed the model "thinking," displaying a confidence interval as each cell was imputed, serving as a visual aid when teaching our AI's imputation.
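The per-cell confidence interval the prototype animated can be sketched in miniature. This is a simplified stand-in, assuming a plain mean estimate with a bootstrap interval; the column values and the mean/percentile approach are illustrative, not the production model's method.

```python
# Minimal sketch: impute a missing cell with an estimate plus a
# bootstrap 95% confidence interval, the kind of per-cell readout
# the prototype animated. Approach and data are illustrative only.
import random
import statistics

def impute_with_ci(values, n_boot=1000, seed=0):
    """Impute a missing cell as the mean of observed values,
    with a bootstrap 95% confidence interval."""
    rng = random.Random(seed)
    observed = [v for v in values if v is not None]
    estimate = statistics.fmean(observed)
    # Resample the observed values to gauge uncertainty in the estimate.
    boots = sorted(
        statistics.fmean(rng.choices(observed, k=len(observed)))
        for _ in range(n_boot)
    )
    lo, hi = boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot)]
    return estimate, (lo, hi)

est, (lo, hi) = impute_with_ci([42.0, None, 38.5, 41.0, None, 39.5])
print(f"imputed {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

Surfacing the interval alongside the value is what separated the capability, in participants' eyes, from "Excel's fill-down."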

Script & Bolt Prototype: Each module was presented within a script, accompanied by targeted questions and a Bolt-coded prototype, to validate product, market, customer, and pricing assumptions.


Bringing it Together...

Product-Market Fit

Rolling Market Analysis

In parallel with discovery interviews, I conducted a deep market analysis—mapping competitors, strategies, market share, and emerging trends. This provided a structured view of the broader landscape and highlighted where a new predictive AI solution might create differentiated value.


User Research Findings

These insights were then merged with findings from user research. By consolidating participant feedback with market signals, I was able to distill which problems were most pressing, which capabilities resonated most strongly, and where adoption barriers might exist.


Together, this process surfaced viable paths to product–market fit and clarified the opportunities most worth pursuing, leading to the company's initial MVP.

Created by Rico Garcia
