What is the data analysis MVP method? How to use it?

If you want your data work to actually deliver results, mastering the MVP method of data analysis is a necessary step. The author walks through the method version by version; let's take a look!

Many data practitioners are eager to make a real impact at work. Here we recommend the MVP method of data analysis, which can keep your projects on track. Settle in, and let's begin.

1. What is the MVP of data analysis?

MVP (Minimum Viable Product) was originally a method used in product design. It means that before officially launching a product, a simple version with core functions is launched to test user needs and feedback, so as to quickly determine whether the product meets market demand and make adjustments.

The MVP method of data analysis is to provide virtual data results based on data requirements and usage scenarios before the data is officially produced, so as to verify the validity of the data and discover the real data needs.

This method is extremely useful in the field of data analysis, because it addresses the core failure mode of data work: grinding away for a long time, yet producing nothing useful. With so many theories behind data analysis, such as statistics, mathematics, operations research, game theory, and machine learning, it is easy to get carried away.

The data team gets excited, crunches through every theory in the book, and then hears from the users:

  • "I knew that already!"
  • "What's the use of this?"
  • "How did you even get this?"

Three blows in a row, and the project is doomed to fail.

The purpose of the MVP method of data analysis is to sort out, in advance, the logic of how the data will actually help the business, and thereby avoid the tragedy above. In reality, plenty of data analyses look impressive but are actually useless...

2. Version 1.0 MVP

To give a simple example, the advertising sales team of an Internet platform proposes: "We need to build profiles of our salespeople, covering each one's gender, age, behavior, and conversion rate, in order to improve performance."

What should I do now?

With the MVP approach, you don't rush to run the numbers or list a pile of "standard user-portrait indicators". Instead, you take the business side's initial requirement exactly as stated: "gender, age, behavior, conversion rate, to improve performance", mock up a virtual result directly, and then go back to confirm: "If I really deliver these things, can you really improve performance?" Make them confirm it.
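The "virtual result" at this step can be as simple as a mocked-up table handed to the business side before any real query is run. A minimal sketch in Python; every name, column, and number here is an invented placeholder, not real data:

```python
# Mock up the "portrait" deliverable before touching any real data,
# so the business side can confirm whether it would change any decision.
import pandas as pd

# All values below are invented placeholders for the MVP conversation.
mock_portrait = pd.DataFrame({
    "salesperson": ["A", "B", "C"],       # placeholder names
    "gender": ["F", "M", "F"],
    "age": [28, 35, 31],
    "calls_per_day": [40, 25, 33],        # stand-in for "behavior"
    "conversion_rate": [0.05, 0.02, 0.04],
})

print(mock_portrait.to_string(index=False))
# The confirmation question to ask with this table in hand:
# "If I deliver exactly this, can you act on it to improve performance?"
```

Showing this table first forces the key question out into the open: if this is the deliverable, what decision actually changes?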


Judging from this one sentence alone, whatever conclusions the data analysis could output would be completely useless. If the version 1.0 MVP test fails, either drop the requirement or keep digging for the user's real pain points. That is how the work moves toward version 2.0.

3. Version 2.0 MVP

Looking closer, the problem with version 1.0 is the lack of a clear goal: there is a pile of so-called portrait indicators, but nobody knows what to do after looking at them. If we focus on a goal instead, say, identifying salespeople with good performance, things become much clearer.

More analysis is needed here, because "good" and "bad" themselves have to be defined:

  • Which indicators should measure it?
  • Is sustained performance better, or a single strong result?
  • Over what scope should the selection be made?

At this stage of the MVP, you can surface the foreseeable, thorny questions up front and work out responses with the business side in advance, instead of waiting until several rounds of data have been run before discussing them. The earlier the discussion, the more unnecessary work you avoid.

For example, there is the common problem of multiple evaluation indicators overlapping and contradicting each other on "good/bad" (as shown in the figure below).

For example, the problem of unstable performance (as shown below):
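The "sustained vs. one-off" question can be pinned down as an explicit rule agreed with the business side before any real data is pulled. A minimal sketch with invented revenue numbers and an assumed monthly quota: one salesperson hits quota every month, the other has a single spike:

```python
# Operationalize "good" as *sustained* quota attainment, not a one-off spike.
# All numbers and the quota are invented assumptions for the MVP discussion.
import pandas as pd

monthly = pd.DataFrame({
    "salesperson": ["Li Si"] * 3 + ["Wang Wu"] * 3,
    "month": ["2023-01", "2023-02", "2023-03"] * 2,
    "revenue": [120, 130, 125, 200, 40, 50],  # Wang Wu: one big spike, then slumps
})

TARGET = 100  # assumed monthly quota, to be confirmed with the business side

# "Good" = meets quota in every month of the window
good = monthly.groupby("salesperson")["revenue"].apply(
    lambda s: bool((s >= TARGET).all())
)
print(good)  # Li Si: True, Wang Wu: False
```

Writing the rule down this explicitly makes the tangled cases (overlapping indicators, unstable performance) concrete enough to debate up front.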

As for indicators irrelevant to the current goal, boldly subtract them and throw them out. When a new goal appears, reorganize the data around that goal. Avoid the habit of indiscriminately hoarding numbers first; it is exactly these trivialities that keep data analysts from leaving work on time.

After sorting these out, we arrive at the 2.0 version of the MVP (as shown below).

It is already much clearer than version 1.0: many invalid indicators are removed and a clear goal is in focus. Note that no data has been run yet; this is just a simulation based on experience. Yet it already exposes the conclusions everyone "has long known", filters out the indicators that are "actually useless", and turns the ambiguous areas into concrete cases for discussion, avoiding a great deal of trouble.

But note that this is still not a qualified MVP. Knowing who is good and who is bad, so what? Knowing that Li Si is truly excellent, can everyone become Li Si? Or is Li Si simply not replicable, so we need to recruit more people like him? These questions have no answers yet, so we still cannot conclude that this data can improve performance. The MVP test fails again; keep going!

4. Version 3.0 MVP

Simply telling people who is good and who is not will not improve performance. Performance is delivered by the front line, and what the front line needs is an SOP and ammunition. So the data needs further processing, for example:

1. The data indicators of top benchmarks (number of calls? time allocation? follow-up opportunities?)

2. The target customers of top benchmarks (are certain customers more likely to convert?)

3. The sales skills of top benchmarks (what scripts do they use? what materials/activities do they leverage?)

Note that this goes beyond data. Data can only tag behaviors and list them as indicators; the scripts, tone, and timing have to come from the training/business departments. So at this stage of the MVP, clarify with the business department directly: can outputting data alone meet the need? If not, bring the other departments in as early as possible instead of burying your head in the numbers.
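The indicator part of this step (point 1 above) can be sketched by comparing what top performers do differently from the rest. Mock data only; the column names and the `is_top` flag are assumptions carried over from the version 2.0 definition of "good":

```python
# Turn "who is good" into SOP material: contrast the average behavior of
# top performers against everyone else. All values are invented mock data.
import pandas as pd

df = pd.DataFrame({
    "salesperson": ["A", "B", "C", "D"],
    "is_top": [True, True, False, False],          # from the 2.0 "good" rule
    "calls_per_day": [45, 50, 20, 25],
    "followup_within_24h": [0.9, 0.85, 0.4, 0.5],  # share of fast follow-ups
})

benchmark = df.groupby("is_top")[["calls_per_day", "followup_within_24h"]].mean()
print(benchmark)
# Data can only surface *what* benchmarks do; the *how* (scripts, tone,
# timing) still has to come from the training/business departments.
```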

5. Version 4.0 MVP

Version 3.0 already looks powerful. However, it hides a bug: can others actually learn from it? This unknown will seriously hinder the business from recognizing the results of the analysis; when implementation falls flat, is the analysis conclusion wrong, or was the execution simply not in place? This must be arranged in advance, or you will take the blame in no time.

Therefore, a testing step must be added on top of the current version, to verify whether it actually works.

This, in turn, involves:

1. How large a scope should the test cover?

2. How long should the test period be?

3. How can other factors, such as holidays and events, be excluded?

4. What standard certifies the test result?

Once you think these things through, you will have version 4.0.

At this stage, the data requirements finally point at the business's expected outcome: "performance improvement". The final result is verified by recovering the test data, and even if the test fails, there is a fallback plan. Now you can run the data with confidence, and the results are sure to be useful.
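The evaluation step can be as simple as applying a pass criterion, agreed before the test starts, to the recovered test data. A minimal sketch; the conversion rates, group sizes, and threshold are all invented assumptions:

```python
# Compare test vs. control against a pre-agreed pass threshold, so "did it
# work" is settled by data recovery rather than argument after the fact.
import statistics

# Mock per-store conversion rates (invented numbers).
control = [0.030, 0.028, 0.032, 0.029, 0.031]  # stores running as before
test    = [0.036, 0.034, 0.038, 0.035, 0.037]  # stores given the new SOP

lift = statistics.mean(test) - statistics.mean(control)

# Agreed with the business side BEFORE the test, never after.
PASS_THRESHOLD = 0.003

print(f"lift = {lift:.4f}, pass = {lift >= PASS_THRESHOLD}")
```

Real tests would also need the scope, duration, and confounder controls from the list above; the point here is only that the certification standard is fixed in advance.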

6. Wide application of MVP testing

Note that MVP testing revolves tightly around user needs. The example above went through several versions because user expectations were high: they wanted direct results. When expectations are lower, the MVP test can be very simple.

For example:

  • The user's need is: no data at present → provide data as soon as possible
  • The user's need is: too much data at present → delete useless indicators
  • The user's need is: the target data is too messy → reorganize the logic
  • The user's need is: unsure where the problem lies → output quantifiable problem points

All of these can be handled by simulating the data in advance and confirming the requirements with a mock-up diagram.

For a slightly more complicated case, say the user wants accurate sales forecasts, it may take only two or three more rounds of narrowing the scope and improving usefulness (as shown in the figure below).

7. Why should we promote the MVP method?

In the field of data analysis, the "octopus school" has always been popular: regardless of whether it is useful, regardless of whether it is logical, just fling out a pile of indicators like an octopus's tentacles (as shown below).

This approach may look impressive, but it is actually the root cause of project failure. It leads data workers to mistake their job for homework: never considering the actual effect, just piling on more and more, until they are exhausted and still unappreciated.

In contrast, only by doing the following can we accumulate analytical experience faster and put data to better use:

  • Study the basic forms of the business's data
  • Uncover the actual data needs of multiple business lines
  • Repeatedly test whether the data is useful
  • Eliminate useless, empty, and showy indicators

Author: Down-to-earth Teacher Chen

WeChat public account: Down-to-earth Teacher Chen (gh_abf29df6ada8)
