Make a Great Product out of your Product Proposal - Part II

11 minute read

In Part I of this post, I proposed a framework for organizing a product proposal; in Part II, I present a few sample paragraphs that I have used for each section. I hope the framework and these examples will help or inspire the entrepreneurs, consultants, and product managers out there - please don't be shy, and send me your thoughts and feedback.

A framework alone isn't very useful, so I'm giving you sanitized examples from prior proposals I've written. Please feel free to adapt them to your needs.

1. Personalized letter

<Date>

Dear <Names>,

Thank you for the invitation to discuss the <product idea> opportunity with you. I want to help you achieve <business goal> by targeting <customer segment> with a brand positioning inspired by <sample product>.

The <product> can be placed in <channel>, promoted to people with <incentives> to increase adoption, and priced according to <pricing hypothesis>.

I’ve been thinking a lot about <end user/customer>, and realized that the most critical <pain points> can be addressed by the following features:

  • <feature1: benefit1>
  • <feature2: benefit2>
  • <feature3: benefit3>

While there are some tradeoffs to consider, such as <list tradeoffs>, imagine a future where <product vision>.

I’m excited to work with you on this because I firmly believe in <sponsor's company>'s role in ensuring that <company purpose>.

Looking forward to a kickoff on <Date>.

Thank you,

<Your name and contact info>

2. Rundown

2.1 Situation

The <industry> is currently adopting <industry trends>. Increasing pressure (and opportunity) is coming from <substitute products> and new <competitors>. Customers now expect <sample feature> and have more market power than before.

2.2 Complication (problem statement)

Your organization is accountable for <business model> to its customers. You are now positioned to capture <opportunity size>.

<list the questions or most important user journey stories>

2.3 Resolution

By building <list of features>, we can address the customer needs <list of benefits>. I estimate that it will take <a few weeks/months> for a team of <2-10> people to deliver those features for a cost of <estimate>.

3. Outline of work

Based on your feedback, I propose a proof of concept phase, followed by a three-phased approach to build <product name>, from conception to quantifiable impact. At the end of each of the four phases, a stage gate meeting will be held to review completion of the work and make a decision on the best next steps. All of this will be done in close collaboration with <relevant teams>.

  1. Proof of concept (POC)
    1. Conduct Google Ventures Design Sprint.
    2. Evaluate current data sources and pipelines.
    3. Implement a proof of concept of <first feature set> with a base data set.
  2. User experience (UX), data gathering, and baseline tool
    1. Interview stakeholders to understand the desired end user experience.
    2. Gather, test and validate data (internal, external, and vendor).
    3. Build a prototype that provides <list private test users> with something tangible and useful.
    4. Validate that the prototype brings <desired benefit/value> to the end user.
  3. Feature expansion
    1. Develop <set of features>
    2. Set up the base processes (continuous integration/delivery, cyber security, quality assurance and testing, etc.).
    3. Release the product to a small set of test users, expanding from the initial private users.
    4. Validate UX hypothesis formulated in Phase 1 and adjust as necessary.
    5. Put in place metrics to start to quantitatively (and objectively) measure the operational impact.
  4. Finalize the product and launch
    1. Decide on the most valuable set of features to include in V0.1 of the product.
    2. Implement required processes <(cyber security, compliance, documentation, quality assurance, testing, etc.)>.
    3. Implement, monitor, and stabilize the product.
    4. Prepare deployment, and provide real-time, world-class support to ensure a smooth and sequenced roll-out to the organization.

4. Design and user insights

We will craft the user experience alongside engineering development, iterating just as frequently to keep things moving. We will start with a base level of clear access to the needed information, so that features are usable from the get-go, then continually revise and improve application usability based on user feedback. To make the most of <important feature>, we will consider things like <e.g., easy information entry, intuitive controls, and readable output of useful information>.

  1. Design. UX deliverable(s): Low and high fidelity prototypes
    1. Low-fidelity prototypes (wireframes). Illustrate how the content will be laid out on each screen. We will omit any aesthetic design details, focusing on creating a visual framework for stakeholders, designers, and developers. This allows all parties to get a feel for how and where content should be placed.
    2. Design review. The low-fidelity prototype is evaluated against its requirements to verify the outcomes of previous activities and identify issues before committing to further work (re-prioritizing if needed).
    3. High-fidelity prototypes. These will show all the intended visual and typographic design details of the application, as it would be on final output. This will be handed over to the Engineering team for development.
  2. Internal testing (within staging). UX deliverable(s): adjusted high-fidelity prototypes
    1. Design quality assurance. To keep the integrity of the user experience, we will conduct Design QA, in collaboration with the engineering team. This is a step during development, to review the coded version of the UI (prior to testing). This involves working with the engineers to make updates to the UI in code.
  3. Evaluation
    1. Post-launch usability testing. To optimize the application’s usage and the viability of its features, we can conduct a usability test, post-launch.
    2. Quantitative - We’ll look at the application’s analytics and evaluate the data that correlates to UX/UI pain points, such as the number of errors, number of clicks, or time taken to complete a task. From the data gathered, we’ll come up with suggestions for improvements and potential next steps.
    3. Qualitative - We’ll send out a survey and conduct interviews with the primary and secondary users to evaluate which features and functionality can be improved. We’ll propose enhancements for future iterations of the application.

Based on our preliminary study of <user segment>, here’s a sample user persona and an associated journey map.

<insert persona and journey map>

We believe that the key pain points are <pain point> and the benefits associated with relieving them are <benefits>. Therefore, a prioritized list of user stories/features we would like to tackle first includes:

  • <feature1: benefit1>
  • <feature2: benefit2>
  • <feature3: benefit3>

5. Understanding of implementation challenges

5.1 Architecture and technology stack

While we are flexible on the technology underlying your architecture, our understanding is that you have existing infrastructure in <Azure, AWS, GCP>. The architecture below is a starting point for addressing the implementation of your product.

<insert relevant diagrams, like an architecture diagram, data pipelines, etc.>

At the core is <e.g., the simulator engine>, written in <e.g., Python> and utilizing <e.g., AWS Lambda>. On top are the <e.g., Command Line Interface (CLI) application and an API that calls the engine>. The functionality of <e.g., the simulator engine> will be exposed via <e.g., the API Management service to both the CLI and Web applications>.

We are aware of legacy tools for <legacy products> built in <e.g., .Net and SQL server> that connect directly with <e.g., IBM Maximo>. Our approach is to expose our tools as APIs (e.g., using <e.g., Apigee>) to create a service architecture. This will also enable more efficient integration with third parties.

5.2 Existing data sources

We will study aspects of existing data sources such as centralization, accessibility, performance, structure, and more. This will be achieved through <e.g., a “data maturity assessment” (DMA)> developed by <e.g., TDWI>. This foundational exercise will give us an understanding of the current gaps and opportunities.

5.3 New data sources

Following the analysis of existing data sources, we will develop a plan to connect, structure, and extend to new data sources that are required.

6. Cost estimates

Here I provide you with a commentary instead of sample paragraphs. I propose three ways to estimate the costs.

6.1 Time and material estimate

First is the time and material estimate. Expect that a product team will have between 1 and 10 people at any given time. More than 10 risks excessive communication overhead. It is probably wiser to reduce the team size and trim the set of features. If that’s not possible, then consider splitting the product into two products.

Blended market hourly rates for outsourced people in India, the Philippines, or Eastern Europe will range from $20 to $80. If the work is performed on site in the US/Canada/Europe, expect to pay $60 to $250. Above that, you are most likely paying overhead costs to the staffing agency/consulting company. For instance, it is not unheard of to see hourly rates in the $1,000+ range for partners in prestigious consulting firms (similar to top law firms).
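To make the arithmetic concrete, here is a minimal sketch of a time-and-materials estimate. The team size, rates, duration, and 40-hour week below are illustrative assumptions, not quotes:

```python
def tm_estimate(team_size, hourly_rate, weeks, hours_per_week=40):
    """Simple time-and-materials cost estimate in dollars:
    team size x blended hourly rate x weeks x hours per week."""
    return team_size * hourly_rate * weeks * hours_per_week

# Example: 5 people on site at a $120/h blended rate for 12 weeks.
onsite = tm_estimate(team_size=5, hourly_rate=120, weeks=12)
print(f"On-site estimate:    ${onsite:,.0f}")    # $288,000

# Same team and duration, outsourced at a $50/h blended rate.
offshore = tm_estimate(team_size=5, hourly_rate=50, weeks=12)
print(f"Outsourced estimate: ${offshore:,.0f}")  # $120,000
```

Even a back-of-the-envelope calculation like this makes it obvious how sensitive the total is to the blended rate and the team size.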

6.2 Fixed cost estimate

The main advantage of a fixed cost estimate (e.g., broken down by phase) is its simplicity. The main drawback is that building a product is akin to doing research - the outcomes are always uncertain. While the time and material estimate is biased in favor of the people doing the work (because more time means more money), the fixed cost estimate is biased in favor of the company purchasing the product building services, as long as the bar on quality is maintained (which unfortunately is often not the case).

6.3 Target and cap cost

This is my preferred contract, but I have yet to meet a procurement team at a large enterprise that will accept it, because it’s hard to benchmark. The contract is based on two figures: the “target price” and the “cap”. The cap is the maximum that the company financing the work will pay. The target is lower than the cap, and the contract gives both parties a financial incentive to meet the target. If the provider of the work comes in under the target, the savings are shared equally between both parties. Likewise, if we come in over the target, the extra cost is shared evenly – but only up to the cap. If we reach the cap, it acts like a fixed price. The costs listed are target costs. Assume 25% extra for the cap.
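The settlement rule above can be sketched in a few lines. The dollar figures are hypothetical, chosen to show all three cases (under target, over target, and at the cap):

```python
def client_payment(actual_cost, target, cap):
    """Amount the client pays under a target-and-cap contract.

    Savings below the target are split 50/50; overruns above the
    target are split 50/50 up to the cap, which is a hard ceiling.
    """
    if actual_cost <= target:
        savings = target - actual_cost
        return actual_cost + savings / 2     # provider keeps half the savings
    overrun = actual_cost - target
    return min(target + overrun / 2, cap)    # cap is the hard maximum

# Example: $400k target with a 25% cap ($500k).
target, cap = 400_000, 500_000
print(client_payment(360_000, target, cap))  # 380000.0 - savings shared
print(client_payment(460_000, target, cap))  # 430000.0 - overrun shared
print(client_payment(700_000, target, cap))  # 500000.0 - cap reached, acts as fixed price
```

Note how the incentive is symmetric around the target: both parties gain from coming in under it and both absorb part of an overrun, until the cap converts the deal into a fixed price.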

6.4 Co-location costs

Estimated co-location costs for each team member are between <e.g., $2,000 for 1 week, or $4,300 for one month>. We include flights (round trip), travel insurance, accommodation, and per diem.

                                  For one week    For one month
  Flights                         $1,000          $1,000
  Accommodation                   $500            $1,500
  Insurance, Lyft/Uber, telecom   $175            $500
  Per diem                        $325            $1,300
  Estimated total                 $2,000          $4,300

You will be billed actual cost within reasonable deviations. These costs could be reduced, for instance, by leveraging your existing corporate housing, if available.

7. Team, tools and process

7.1 Team

Team member profiles should be short and to the point. Ideally you should include information such as:

  1. Name
  2. Role/function (and seniority level)
  3. Short bio and/or link to LinkedIn profile
  4. (Optional) Daily rate

7.2 QA and testing

Quality assurance, automation, and testing are crucial to implementing software with a high degree of quality and reliability. This process validates the software’s functioning through phases such as smoke tests (validating the main functionality), comprehensive tests (detailed manual testing and verification of low-volume edge cases and mission-critical components), and regression tests (validating that older parts of the software still work when new parts are introduced). Finally, QA defines the processes and tooling for testing and deployment automation to ensure an auditable release process.
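As a minimal sketch of how these phases map onto actual test code, here is an illustration using Python's `unittest` module. The `apply_discount` function is purely hypothetical, standing in for real product code:

```python
import unittest

def apply_discount(price, percent):
    """Hypothetical business logic under test."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class SmokeTests(unittest.TestCase):
    """Smoke phase: validate only the main, happy-path functionality."""
    def test_basic_discount(self):
        self.assertEqual(apply_discount(100.0, 10), 90.0)

class ComprehensiveTests(unittest.TestCase):
    """Comprehensive phase: edge cases and mission-critical behavior."""
    def test_zero_and_full_discount(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)
        self.assertEqual(apply_discount(50.0, 100), 0.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)
```

Running the full suite (e.g., `python -m unittest`) on every change, from a continuous-integration pipeline, is what turns these same tests into the regression phase.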

7.3 Cyber security

This is to be completed based on <company> Cyber Security requirements/processes (sanitized, best in class examples from other clients are available).

Best practice is to leverage the help of an expert consultant who is separate from <the development team> and provides an audit of the system independent of the implementation.

7.4 Productivity tools

We are open to adopting <company>'s preferred/existing set of collaboration tools <(e.g., Microsoft Teams, GitHub)>. We recommend <e.g., BitBucket, JIRA, and Confluence> for engaging with product and technical teams, and <e.g., Slack> for team communication.

7.5 Processes

As a general engagement model, we take an agile approach as the base process. However, more traditional models such as waterfall have their benefits, so the final shape of the process is a hybrid: quality planning is done at the milestone level, usually monthly or quarterly depending on the scope and length of the project, while biweekly sprints provide the agility and flexibility to shape priorities during development.

7.6 Metrics

To survive and thrive in the age of software, I recommend tracking a small set of metrics that measure the state of your delivery engine. Please do not fall into the Agile trap: it’s a lot easier to look Agile than to be Agile. Even Nokia, the poster child of Agile in the late 2000s with a market cap above $90B at the time, failed miserably. Writing large software is one of the most difficult (and humbling) exercises there is. I highly recommend that you read the following two books on the topic of software delivery:


You’ve reached the end. Now go write that proposal and let me know how it goes.