How user factors influence ranking: 2 critical factors

Original author: Peter Meyers
  • Translation


We continue our series of translated articles on various aspects of behavioral factors and how they affect a site's position in search. To summarize the previous publications: behavioral factors are among the hardest signals to fake, and therefore one of the most reliable ways for search engines to judge the "quality" of a site. Simply put, the better your site's behavioral metrics, the higher it ranks in organic SERPs. In this post, originally published in 2012 on SEOMoz.com, Peter Meyers examines how Google and Bing account for user behavior; in our view, both systems continue to develop these algorithms today. SERPClick was built in part on the data discussed in this post: it lets you run campaigns aimed at improving your site's rankings by improving its behavioral-factor metrics. We thank the analytical department of ALTWeb Group for the translation.

The aftermath of the Panda update made us all worry about so-called user metrics and how they would affect optimization. Many began to fear that "bad" signals from analytics, especially a high bounce rate and a short time on site, could hurt a site's rankings.

I believe (and will explain why below) that Google does not look directly at our analytics, and I don't think it needs to, because there are two metrics that Bing and Google have direct access to:

(1) SERP CTR
(2) Dwell time (how quickly the user returns to the search results)

And I think these two metrics alone can tell search engines plenty about your site.

Google Analytics and optimization


Google's official position on this issue is that analytics data is not used for ranking. I won't take sides here; you can decide for yourself whether to believe it. I will only note that few things have been stated as emphatically by Matt Cutts. I think the arguments against using analytics directly for ranking are practical:

(1) Not everyone uses GA (Google Analytics)

It is hard to say exactly what percentage of sites currently use GA; a large study conducted in 2009 put the figure at 28%, and I have seen figures of around 40% in other sources. Either way, it is reasonable to assume that roughly two-thirds of sites do not have GA installed, so it is hard to see how Google could rank or penalize sites based on a factor present on only about a third of all web resources. What makes the picture even murkier: many of the largest sites do without GA because they can afford traditional enterprise analytics platforms (WebTrends, Omniture, and the like).

(2) GA can be installed incorrectly

Google cannot control the quality of the GA installation on the sites that use it. From my consulting experience and from the Q&A section here on Moz, I can say that analytics is often set up poorly. As a result, bounce rate and time-on-site numbers can look implausible, and the data becomes noisy and unreliable.

(3) GA data can be manipulated

As a deliberate, "black hat" version of (2), analytics can be installed incorrectly on purpose. Most user metrics can be manipulated this way, and Google has no way to verify the quality of every installation. Once the GA tags are in your hands, Google has little control over what they report.

In fairness, some believe that Google uses any data it can get, and I have even seen indirect evidence that bounce rate matters. My counterargument: Google and Bing do not need analytics data or bounce rates. They already have all the data they need in their own logs.

The main reason I don't believe it

The most common argument is that Google could not, even in theory, use a metric like bounce rate as a ranking signal, because bounce rate varies enormously depending on the type of site and cannot be interpreted unambiguously. I hear this so often that I want to dwell on it here, because I have a very clear reason not to believe it.

ANY ranking signal, taken by itself, is ambiguous. I don't know a single SEO who would say that the page title doesn't matter; yet the page title is also one of the easiest things to manipulate. The same can be said of every on-page factor, which is why Google added link-based signals to its ranking algorithm. Links, in turn, can also be spammed, so Google added social and user signals to the mix as well. That is how we end up with more than 200 ranking factors (Bing claims to use 1,000), and none of those hundreds of factors is perfect on its own.

Metric #1: SERP CTR


The main metric that, I believe, Google uses heavily is the CTR of results on the SERP. Whether or not a user clicks on a result is, for Google and Bing, the primary indicator of whether that result matches the query. We know that both Google and Bing collect this data because both refer to it indirectly. (This conclusion is also supported by the Yandex report we translated earlier, which describes how user clicks on a SERP can indirectly indicate the quality of both an individual result and the SERP as a whole. - translator's note.)

In Google Webmaster Tools, you can find click data under "Your site on the web" > "Search queries". It looks something like this:

[image: search query data in Google Webmaster Tools]

Bing shows similar data: in the "Dashboard" panel, click on "Traffic Summary":
[image: traffic summary in Bing Webmaster Tools]

Of course, we also know that Google relies on CTR when it assigns a quality score in paid search, and Bing has followed that example in recent years. The paid search algorithm differs significantly from the organic one, but CTR clearly matters there: more relevant results attract more clicks.
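To make the metric concrete, here is a minimal sketch in Python of how SERP CTR could be computed from the kind of click and impression counts these reports expose. The record layout and field names are my own illustration, not any search engine's or tool's actual format.

```python
# Minimal sketch: computing SERP CTR per query from click/impression counts.
# The record layout below is purely illustrative, not an actual report format.

records = [
    {"query": "blue widgets", "impressions": 12000, "clicks": 480},
    {"query": "widget reviews", "impressions": 3500, "clicks": 70},
]

def ctr(clicks: int, impressions: int) -> float:
    """Click-through rate: the share of impressions that led to a click."""
    return clicks / impressions if impressions else 0.0

for r in records:
    print(f'{r["query"]}: CTR = {ctr(r["clicks"], r["impressions"]):.2%}')
```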

Metric #2: Dwell Time


Last year, Duane Forrester of Bing wrote a post titled "How to Create Quality Content," in which he referred to a concept called dwell time (the time before a user returns to the search results):

Your goal should be that a visitor lands on your page, the content answers all of their questions and needs, and they stay with you. If the content doesn't compel them to stay, they leave, and the search engine records this as dwell time: the time between a user clicking through from the search results to your site and coming back to the results potentially says a lot about the quality of your content. A minute or two is good, since it can mean the visitor actually read your content. A couple of seconds, enough only for a quick glance, is a bad result.


Dwell time, in a sense, combines bounce rate and time on site: it measures how much time passes before the user returns to the SERP after leaving it for your page (and those numbers sit directly in the search engine's own logs).
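As a rough illustration of what a search engine could read from its own logs, here is a hypothetical sketch: it pairs each click-out from the SERP with the moment the same session returns to the results and treats the gap as dwell time. The log format, field names, and threshold are assumptions for the example, not anything Google or Bing has documented.

```python
from datetime import datetime

# Hypothetical SERP log: each session records a click on a result and
# (optionally) the moment the same user comes back to the results page.
log = [
    {"session": "a1", "clicked_at": "2012-03-01T10:00:00", "returned_at": "2012-03-01T10:00:04"},
    {"session": "b2", "clicked_at": "2012-03-01T10:05:00", "returned_at": None},  # never came back
]

def dwell_seconds(entry):
    """Seconds between leaving the SERP and returning to it; None if no return."""
    if entry["returned_at"] is None:
        return None
    out = datetime.fromisoformat(entry["clicked_at"])
    back = datetime.fromisoformat(entry["returned_at"])
    return (back - out).total_seconds()

SHORT_DWELL = 5  # illustrative threshold: "a couple of seconds" reads as a bad result
for entry in log:
    d = dwell_seconds(entry)
    label = "no return (good)" if d is None else ("short dwell (bad)" if d < SHORT_DWELL else "long dwell (good)")
    print(entry["session"], d, label)
```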

Google has never been explicit on this point, but circumstantial evidence suggests it uses some measure of dwell time (or a similar indicator). Last year, Google tested a feature where, if you clicked a result and then quickly returned to the SERP (i.e. dwell time was minimal), you were offered the option to block that site:
[image: option to block a site after quickly returning to the search results]

This feature is not currently available to a wide audience; with the launch of personalized search, Google temporarily abandoned it. But the fact that a quick return to the SERP was the trigger for showing the block option suggests that Google treats dwell time as a signal of site quality.

1 + 2 = A killer combination


The two metrics really shine in combination. CTR by itself can be gamed: you can write misleading titles and meta descriptions that have little to do with the page's actual content. But that kind of manipulation leads to very short dwell time: you inflate CTR, the page fails to meet the expectations the snippet created, and people go straight back to the search results. Together, CTR and dwell time give search engines a tangible way to judge the quality of pages and results using only two metrics. If both your CTR and your dwell time are high, your result is most likely relevant and satisfying.
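Purely to illustrate why the two signals complement each other, here is a toy scoring sketch (my own invention, not any search engine's described algorithm): a result only scores well when both CTR and typical dwell time clear a floor, so inflating CTR with a misleading snippet is undone by the short dwell time that follows.

```python
def quality_signal(ctr: float, median_dwell_s: float,
                   ctr_floor: float = 0.03, dwell_floor_s: float = 60.0) -> float:
    """Toy combined signal in [0, 1]: high only if BOTH metrics clear their floors.
    Thresholds are arbitrary, chosen just to show the interaction."""
    ctr_part = min(ctr / ctr_floor, 1.0)
    dwell_part = min(median_dwell_s / dwell_floor_s, 1.0)
    return ctr_part * dwell_part  # a misleading snippet boosts ctr_part but tanks dwell_part

print(quality_signal(ctr=0.08, median_dwell_s=120))  # relevant result: 1.0
print(quality_signal(ctr=0.08, median_dwell_s=3))    # clickbait snippet: 0.05
```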

Are other metrics taken into account?


I am not saying that bounce rate and other behavioral metrics are ignored. As I said, dwell time is related to (and probably correlates with) bounce rate and time on site. Glenn Gabe published a good post about the "real" bounce rate, in which he also explained why dwell time can reflect the situation more accurately than bounce rate. You should still pay attention to the traditional behavioral metrics you see in analytics, and not forget broader signals such as site speed and social signals, which are in turn connected with user behavior.

I would like you to take a broader view of behavioral factors and look at them from the search engine's point of view; it is worth stepping back from your own site analytics for a moment. I have recently seen cases where site owners removed or manipulated GA tags to make the numbers look better because they feared consequences from the search engines. In practice this only hurt them: they lost the reliability of their own data. I don't think Google or Bing uses data from our analytics, and even if they did, they would analyze it alongside data from their own logs and in combination with other factors.

So what should I do?


Write snippets that attract clicks to relevant pages (Important: this is advice for readers in 2012; today Google can choose the snippet text itself. - translator's note). Create pages that users stay on. In the end this is fairly obvious advice, and it benefits both SEO and conversion. Pay special attention to the combination: simply attracting clicks is useless (and may even hurt your rankings) if people leave the site immediately after clicking. Work toward a balance between relevant keywords and quality visits.

Translator's note: we also recommend SERPClick as a tool for improving your site's rankings by improving its behavioral-factor metrics.
