Rapid AI MVP Product Development
How to move from a program increment mindset to short-term ROI in AI product development
Introduction
Large companies often approach the development of transformational AI MVPs through hefty platform builds and/or lengthy program increments. While my work has been mostly in financial services and health care, I would venture to guess that many industries outside of tech plan non-trivial AI/ML products the same way: lots of strategizing with a ton of decks, wireframes, rounds of user research, and meetings. Once a vision is set forth, multiple implementation squads run in parallel, and the build effort may take months before anyone can test the hypothesis that the proposed intelligence will actually give the product a source of competitive advantage. In typical software development projects, feedback on product features can be gathered almost instantaneously after each sprint, but that’s not possible with AI/ML products early in the MVP cycle. Read on to learn why, and what can be done to improve how AI MVPs first launch.
AI MVP building blocks
My AI firm is in its infancy, and one of the challenges I faced was how to define and build an MVP to search for product-market fit. Initially, since I thought AI/ML would be at the heart of my competitive advantage, I assumed I would need several quality AI/ML capabilities in my MVP. But then I watched a video from Y Combinator’s Startup School describing why that’s a terrible idea outside of specific, long-runway product domains such as space exploration or drug development. And tactically speaking, drawn-out AI MVP design and build cycles become a lot harder to plan for once you confront the basic requirement of any successful data science or research project: quality data.
Andrew Ng, the well-known deep learning scholar from Stanford, proposed a simple yet effective model for successful AI products. First, the product vision must be defined well enough to attract users. Then users, as they use the app, create data upon which models can be trained. Model insights can then be brought back into the product to increase its utility, informing the next steps of the product strategy and encouraging new users to join (or existing users to broaden their use). This is a virtuous cycle that can result in competitive advantage for AI products. (The diagram below reflects some of my enhancements.)
If you’re lucky enough to be in this virtuous cycle already, then congrats! But what if you’re not? How do you get it going in the MVP’s initial stages? According to YC’s school of thought, product managers, software engineers, and data scientists will need to work closely together to design and build an MVP within a couple of months at most.
Tackling the rapid AI MVP
Let’s first assume that everyone is already on the same page about what problem the product is solving. Next, we need to hypothesize how intelligence can help meet the user’s needs. This can take the form of recommendations, behavioral nudges, orchestrated journey steps, and so on. Product managers can vet these ideas with actual users via wireframes and clickable prototypes, and data scientists should work with UX designers to understand how the proposed model results will appear in the experience. Additionally, data scientists can partner with domain experts, using creative thinking and theories from the social sciences (and elsewhere) to extrapolate data requirements.
In terms of implementation plan (also illustrated below):
Begin with business architecture decomposition to determine what kinds of intelligence your app may benefit from. Create your product features as usual.
Work these product features into UX screens (wireframes and/or clickable prototypes). Instead of building in actual intelligence (because you probably don’t have the right data yet), allude to how the app can be made smarter, and, importantly, capture the data requirements that would make that intelligence possible.
Don’t train any models at this point. Instead, have data scientists work with domain experts to brainstorm data requirements. In parallel, test the utility of the product features through the UX: does the product as designed appear likely to attract users, based on qualitative feedback? If not, iterate until you get some users interested in the product.
After users start using the live product, carve out one or two simple models to deploy. This gives your data scientists and ML engineers an opportunity to test out the MVP’s AI/ML architecture without worrying too much about complex data processing pipelines.
Begin A/B tests on the app to determine if the product really does benefit from the hypothesized intelligence. Preferably you’ll have enough data to start quantitative testing, but qualitative feedback is helpful, too. At this point, if the results are not promising, you may need better data or better models — but more likely, you may need to iterate on the AI/ML value proposition. Probing the value proposition is key to making sure your competitive advantage materializes down the road.
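The "one or two simple models" in the deployment step above can be as plain as a popularity baseline served behind a function. Here is a minimal sketch, assuming a hypothetical event log of (user, item) interactions; the schema, function names, and sample events are all illustrative, not from any real product:

```python
from collections import Counter

def train_popularity_model(events):
    """"Train" a baseline model: rank items by how often any user engaged with them."""
    counts = Counter(item for _user, item in events)
    return [item for item, _n in counts.most_common()]

def recommend(ranking, seen, k=3):
    """Serve the top-k ranked items the user has not already interacted with."""
    return [item for item in ranking if item not in seen][:k]

# Hypothetical interaction log captured by the live MVP
events = [("u1", "a"), ("u1", "b"), ("u2", "a"),
          ("u3", "c"), ("u3", "a"), ("u2", "b")]

ranking = train_popularity_model(events)   # → ['a', 'b', 'c']
recs = recommend(ranking, seen={"a"})      # → ['b', 'c']
```

A model this simple exercises the whole serving path (ingest events, train, recommend) while deferring the complex pipeline work, which is the point of the step.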
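When enough data has accumulated for quantitative testing, the hypothesized lift from the intelligent variant can be checked with a standard two-proportion z-test. A minimal sketch, using made-up conversion counts (the numbers are purely illustrative):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates between variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)              # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value
    return z, p_value

# Hypothetical A/B result: 90/1000 users convert on the control,
# 120/1000 on the variant with the AI/ML feature enabled
z, p = two_proportion_ztest(90, 1000, 120, 1000)
significant = p < 0.05
```

If the difference is not significant after a reasonable sample, that is the signal to revisit the data, the models, or, most likely, the AI/ML value proposition itself.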
With this approach, you should know within a few months, rather than a year, whether your AI product will be as successful as you envision.