Who are product analysts and why does a team need them?
Every company today loves "big data", and practically every one of them has some department of data analysts. Yet the industry still has no clear understanding of who a product analyst is and how the role differs from a data scientist or a UX researcher who focuses on quantitative methods.
Increasingly, companies carve out a dedicated division of product analysts that:
- set goals and metrics and determine the product's development vector
- investigate the nature of phenomena and uncover causal relationships
- build predictive algorithms
For example, this is roughly what such a structure looks like at Indeed:
In this article I want to set aside the specialists who work exclusively on machine learning and talk about how we see the role of the product analyst at Wrike and the tasks our product team deals with every day.
Qualitative vs. quantitative
As a rule, developers and companies like numbers: quantitative data helps capture the current state precisely, show dynamics, and assess market prospects. At the same time, it is often forgotten that numbers alone cannot answer questions about people's motivation, the root cause of their choices, and their further actions.
Qualitative before Quantitative: How Qualitative Methods Support Better Data Science
Therefore, at Wrike we do not draw a hard line between analysts who run qualitative research and those who run quantitative research. On the contrary, we believe that in a small team (about 10 people) these skills should be combined as much as possible, using quantitative methods to develop the ideas of qualitative analysis, which is often carried out together with a product manager and a designer.
In fact, when it comes to research, we have two expectations of the analyst. They should be able to:
- find promising growth points
- validate the problem by formulating and scaling it
Below we discuss these two expectations in more detail and show exactly how the analyst acts as the link between a business understanding of the problem and the quantitative methods that help scale and validate it.
1. Finding the product's growth points
An analyst is a person who finds promising points of growth for a product by scaling up problems and tasks.
The very first step for a product analyst in understanding any task is to determine which class of problems it belongs to. Three types of research are usually distinguished:
- Problem discovery - when we do not yet know what problems users have beyond a specific piece of product functionality. This is usually the interview stage.
- Problem validation - when we believe certain tasks exist but want to check that a genuinely large number of users have them. This is the stage of various surveys.
- Solution validation - when we test specific solutions that have been designed or prototyped. This is the prototyping or beta-testing stage.
The analyst participates in all three phases of research, but the main focus of the work is usually on validating problems and solutions. Suppose a product manager, together with an analyst and a marketer, conducted twenty interviews with different clients. How do we know these conclusions can be trusted and that the problems voiced are really relevant to all users? How do we keep the assessment of the discovered potential objective by estimating its scale? In other words, how do we verify that what we found in the interviews is really a potential growth point for the product?
This is where the tools and data skills that connect qualitative and quantitative research come into full use. Understanding the scale of a problem and finding the most appropriate way to measure it is the key competence of the product analyst. Here is just one small example of how an analytical approach let us change our process of collecting customer pains and rethink how the product team validates them.
Transcribing and analyzing conversations
Wrike has a team of account managers (customer success managers) whose main task is to support customers not for the purpose of sales but to improve their experience with the product. They get on video calls with customers, discuss their current pains, share best practices, suggest workflows, and report on the development status of new features. These conversations had been recorded for a long time but were hardly used by the product organization: teams preferred to talk to the account managers personally to get some general sense of the clients' pains. This could add an element of "broken telephone" and did not always convey the context in which the user had run into the problem.
One of product analytics' initiative projects was building a pipeline that turns these conversations into readable text. Using the Google Speech API, along with several additional models for punctuation placement, we were quickly able to estimate the scale of some problems and feature requests based on many manager-client conversations rather than a single interview. Thanks to this simple source, we could run a full-scale keyword search for a given piece of functionality or problem area, assess what kind of users were asking for a particular solution, and understand the context in which it most often came up. We are now also testing a sentiment analysis model that helps us automatically track the average level of satisfaction with individual parts of the product and notify the product team accordingly.
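To make the idea more concrete, here is a minimal sketch of what such a pipeline could look like. It assumes the Google Cloud Speech-to-Text Python client (`google-cloud-speech`); the bucket, file names, and keyword list are hypothetical placeholders, and the keyword search is a deliberately naive illustration rather than our production code.

```python
# A simplified sketch: transcribe recorded customer calls with Google Cloud
# Speech-to-Text and run a naive keyword search over the transcripts.
# The bucket, file list, and keywords below are hypothetical placeholders.
from google.cloud import speech

client = speech.SpeechClient()

def transcribe(gcs_uri: str) -> str:
    """Return a punctuated transcript for one call recording stored in GCS."""
    audio = speech.RecognitionAudio(uri=gcs_uri)
    config = speech.RecognitionConfig(
        language_code="en-US",
        enable_automatic_punctuation=True,  # punctuation keeps transcripts readable
    )
    # Calls are long, so use the asynchronous long-running recognition.
    operation = client.long_running_recognize(config=config, audio=audio)
    response = operation.result(timeout=3600)
    return " ".join(r.alternatives[0].transcript for r in response.results)

CALLS = ["gs://csm-calls/call_001.flac", "gs://csm-calls/call_002.flac"]  # hypothetical
KEYWORDS = ["calendar", "export", "invoice"]                              # hypothetical

for uri in CALLS:
    text = transcribe(uri).lower()
    mentions = {kw: text.count(kw) for kw in KEYWORDS if kw in text}
    if mentions:
        print(uri, mentions)
```

The real pipeline layers the additional punctuation models and the sentiment step described above on top of the raw transcripts; the point of the sketch is only that a few dozen lines already turn unused recordings into searchable text.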
2. Formulating, scaling and validating hypotheses
An analyst is a person who can formulate a problem at the proper level of abstraction, measure it, check it for significance, and offer recommendations for action.
Regardless of the research stage, there are different levels of hypotheses (described in detail below) that help evaluate how users interact with the product and plan further development. The task here is often to correctly determine the required level of a hypothesis and pick a tool for collecting information or validating it. In practice, the process looks like this:
- Formulation of hypotheses - for example: “for admin users from a certain cohort, it is important to be able to invoice based on a weekly report.”
- Collecting usage statistics - a classic analytics task - to understand whether the numbers can answer the hypotheses formulated above (a small sketch follows this list).
- Collecting feedback - running research through marketing, newsletters, or internal feedback tools.
- Analysis and validation of results - checking the results for statistical significance.
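As a small illustration of the second step, here is a pandas sketch that checks whether usage data says anything about the invoicing hypothesis above. The event log and its columns (user_id, user_role, cohort, event) are entirely made up for the example.

```python
# Hypothetical event log: one row per user action, with role and cohort columns.
import pandas as pd

events = pd.read_csv("events.csv")  # columns: user_id, user_role, cohort, event

# Narrow down to admin users from the cohort mentioned in the hypothesis.
admins = events[(events["user_role"] == "admin") & (events["cohort"] == "2019-Q4")]

# Share of those admins who both open a weekly report and export it:
# a rough behavioural proxy for "invoicing based on a weekly report".
per_user = admins.groupby("user_id")["event"].agg(set)
uses_flow = per_user.apply(lambda s: {"weekly_report_view", "report_export"} <= s)

print(f"{uses_flow.mean():.1%} of cohort admins show the report-to-export flow")
```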
Let us dwell on the third item, since it is often what distinguishes a product analyst from someone who is merely well versed in statistics.
Feedback collection
Many companies believe that once they set up a logging system and hook analytics services like Google Analytics into their product, preparing the platform for analytics ends there. Unfortunately, with this approach the most important element is forgotten: user feedback, the ability to ask users at the right moment about their tasks and the difficulties they face.
It is therefore critically important that the team has enough tools to unobtrusively poll users and collect feedback from them, not only through marketing surveys but also through an internal mechanism.
We use an internal tool, QFF (qualitative feedback form), to formulate and validate hypotheses, and we think of possible user-experience scenarios as a three-level pyramid (product → feature → interaction):
- Product level
- Level of functionality
- Level of specific interaction
Let us look at each level in a little more detail and show what metrics we use to understand the problems at each one.
1. Product Level
Here it is important for us to understand the broadest, most cross-functional parts of the user-experience funnel. We want answers to the most global questions, whether it is satisfaction with the product as a whole or with a set of features for solving a single task (for example, coordinating vacations may require calendars, task statuses, scheduling algorithms, etc. to work together).
There are no strictly prescribed metrics for such situations; there are always nuances. However, at this level of abstraction we are usually talking about NPS (net promoter score) or SUS (system usability scale). These metrics are not beyond dispute, but they remain industry standards and help with goal-setting on the scale of several quarters.
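For reference, the arithmetic behind these scores is simple. Below is a small sketch with the standard NPS and SUS formulas; the survey responses are made up.

```python
# Classic NPS: share of promoters (9-10) minus share of detractors (0-6) on a 0-10 scale.
def nps(scores):
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# SUS: 10 statements answered on a 1-5 scale; odd-numbered items contribute
# (score - 1), even-numbered items (5 - score); the sum is scaled to 0-100.
def sus(item_scores):
    contributions = [(s - 1) if i % 2 == 0 else (5 - s)
                     for i, s in enumerate(item_scores)]
    return 2.5 * sum(contributions)

print(nps([10, 9, 7, 6, 8, 10, 3, 9, 5, 9]))   # made-up answers -> 20.0
print(sus([5, 2, 4, 1, 5, 2, 4, 2, 5, 1]))     # made-up answers -> 87.5
```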
2. The level of functionality
At this level we ask more specific questions that relate directly to a particular feature. Continuing the example above, we can set aside the problem of "coordinating vacations" in general and take only a specific part of the product, for example calendars. How easy are they to read? Why do people use them?
Depending on the stage of our research, not only the questions but also the indicators we collect from users may differ. The simplest is the level of satisfaction, which can be captured from task to task using different scales (three emoticons or a Likert scale), and CES (customer effort score), which measures how difficult or easy it is for a user to accomplish certain tasks.
3. The level of interaction
The task at this level is to evaluate a specific interaction the user had with the product (for example, pressing a certain button). What matters is that the result of this interaction is some action or decision that we cannot measure or track ourselves. As a rule, we are talking here about satisfaction and subsequent decisions: for example, did the manager, looking at the calendar, manage to understand when an employee is on vacation? Was the data export format suitable for the user? Since all further actions happen either only in the user's head or outside of our product, we have no other way to evaluate the interaction.
In essence, interaction-level assessment is an attempt to measure CSAT (customer satisfaction), a metric often used in support and other services where a specific event needs to be rated. Metrics like CES can also be used here, but in a more "local" formulation.
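As with NPS above, the calculation itself is trivial; what matters is asking about a single, concrete event. A small sketch with made-up ratings for the calendar example:

```python
# CSAT for one interaction: share of positive answers ("did you manage to see
# when the employee is on vacation?") on a 1-5 scale, counting 4-5 as positive.
def csat(ratings, positive_from=4):
    return 100.0 * sum(1 for r in ratings if r >= positive_from) / len(ratings)

# CES in a "local" formulation: average reported effort for the same interaction,
# on a 1-7 scale where lower means less effort.
def ces(effort_scores):
    return sum(effort_scores) / len(effort_scores)

calendar_ratings = [5, 4, 2, 5, 3, 4]  # made-up responses
calendar_effort  = [2, 3, 1, 5, 2, 2]  # made-up responses
print(f"CSAT = {csat(calendar_ratings):.0f}%, CES = {ces(calendar_effort):.1f}")
# -> CSAT = 67%, CES = 2.5
```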
Analysis and validation of results
After we have fixed the hypotheses, formulated the questions, and run our validation surveys at the proper level of the user's experience with the product, a problem arises that again requires a particular talent from the analyst, this time in statistics and hypothesis testing.
In fact, after each survey the analyst must establish with what degree of confidence the results can be trusted, including the results of their own work. Does working at a large company influence the answer? What about the respondent's position?
All these hypotheses are thoroughly tested with the appropriate tools: as with a properly designed A/B test, it is up to the analyst to decide which approaches apply in each particular situation. Regression analysis can often be used, but it is not a universal solution, since it has its own areas of application and interpretation. The specific methods are always at the analyst's discretion.
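For example, one of the simplest checks for the "company size" question above is a chi-square test of independence between the segment and the answer. Below is a minimal sketch with made-up counts; in practice the choice of test, or of a regression model, stays with the analyst.

```python
# Does company size influence the answer? A minimal check: chi-square test of
# independence on made-up counts of positive/negative answers per segment.
from scipy.stats import chi2_contingency

#                positive  negative
contingency = [[  180,       70],   # respondents from large companies
               [  240,       60]]   # respondents from small companies

chi2, p_value, dof, expected = chi2_contingency(contingency)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Company size and the answers are unlikely to be independent.")
else:
    print("No evidence that company size affects the answers.")
```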
Instead of a conclusion
Above, we covered only two key cases in an analyst's work and deliberately did not describe every stage of it; a detailed description of all research types, hypothesis formulation, and proper data collection deserves separate articles. Still, we believe that even such a high-level formulation of expectations for analytics, together with fixing its key working methods, will significantly strengthen any product team and help build better products.
The ability to find growth points in data (no matter how unstructured), to shape them into well-formed hypotheses, and to scale and validate them for all current and future users is what distinguishes our product analysts. We know from experience that these requirements produce the most tangible results and keep the role from sliding into operational routine, which is why we so confidently recommend these principles to other teams.
And if you want to talk about quantitative analytics, big data, and the infrastructure that supports all analytics at Wrike, come to our meetup at the St. Petersburg office. Or just drop by for a visit.