For a long time, keyword rankings were a staple part of any SEO campaign. In a lot of cases they were a primary metric used to judge performance.
Go back five or six years and we had so much more information on the keywords that users were searching for to reach our web content. All of this information was available transparently within Google Analytics, and you could get relatively accurate search volume estimates from within Google’s Keyword Tool.
The first major update that changed this was Google’s move to encrypted search and the dreaded appearance of “not provided” within Google Analytics.
This created a ripple effect across many SEO software providers, making their tools less effective and leaving marketers with a much tougher job measuring the impact of organic search at a granular level.
Next up, and more recently, was Google's decision to change the search volume estimates within its Keyword Planner tool to broad ranges. Instead of learning that a keyword is searched for 1,400 times each month, we're told it's searched between 1k and 10k times per month. This isn't overly helpful.
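To illustrate just how much precision is lost, here's a minimal sketch of how exact monthly volumes collapse into Keyword Planner-style ranges (the bucket boundaries below are assumptions based on the ranges Google displays, not an official specification):

```python
# Illustrative only: collapse exact monthly search volumes into
# broad Keyword Planner-style buckets (boundaries are assumed).
BUCKETS = [
    (0, 10, "0-10"),
    (10, 100, "10-100"),
    (100, 1_000, "100-1K"),
    (1_000, 10_000, "1K-10K"),
    (10_000, 100_000, "10K-100K"),
]

def volume_range(exact_volume: int) -> str:
    """Return the broad range a precise search volume falls into."""
    for low, high, label in BUCKETS:
        if low <= exact_volume < high:
            return label
    return "100K+"

# A keyword searched 1,400 times a month and one searched 9,800
# times a month look identical once bucketed.
print(volume_range(1_400))   # "1K-10K"
print(volume_range(9_800))   # "1K-10K"
```

Whether a keyword sits at the bottom or the top of that range is the difference between a niche term and a serious traffic opportunity, and the bucketed number can't tell you which.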
These changes have forced marketers to adapt their search strategy to focus less on individual keywords and shift to a topic-centric content strategy, especially for content sitting at the top of the funnel.
Keyword Rankings are Inaccurate
One of the major criticisms of keyword ranking data is the fact that it is largely inaccurate. Many industry leaders and even software providers of rank tracking data have admitted that this is the case.
The reasons behind this can be broken down into three broad buckets:

Personalization

Around the time of the launch of Google+, the SEO industry was talking a lot about personalization within search. Even after the death of Google+, personalization has remained a big consideration.
Bonus points if you remember Authorship snippets (circa 2012).
Ultimately, Google will deliver results that are personalized to a user based on their search history. This means that if I were to search for a query like "electric cars" and I'd previously been browsing the Tesla website, it's possible that Google would tailor the search results to show Tesla near the top.

This wouldn't be the case for someone who hasn't previously visited Tesla's website, which makes it very tough to determine which website actually ranks #1 (because it can be different from one person to the next).
Device and Location
Whilst personalization plays a part in the ambiguity of keyword rankings, it’s nothing compared to the role of implicit query factors like device and location.
One of Google's major advancements in search over the past five years has been its ability to take into account aspects of a search query that aren't explicitly stated. To make sense of what I've just said, let's take a query like "Boston restaurants".
Go back to 2010 and a search for "Boston restaurants" would yield a list of relatively generic websites that either talk about Boston restaurants or belong to individual restaurants.
Fast-forward to 2018 and a simple search for "Boston restaurants" arms Google with a whole lot more information than before. They're able to see which device you've searched from, where you're located whilst you're searching, and even whether you're currently on the move.
Let’s say that you searched on an iPhone and you’re walking around in the center of Boston at 11:30 am. Here’s what this query would actually look like to Google:
“Which restaurants are currently open for lunch within walking distance of my current location in the center of Boston, MA?”
They've gathered all of this information without the individual even having to type it. As a result, they're able to completely tailor the search results to this individual searcher's current situation.
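The way these implicit signals reshape a short query can be sketched roughly like this (the field names and the rewriting logic are purely illustrative assumptions; Google's actual query understanding is far more sophisticated and not public):

```python
from dataclasses import dataclass
from datetime import time

# Illustrative sketch only: how implicit context might expand a
# short query. Fields and rules are assumptions, not Google's.
@dataclass
class SearchContext:
    device: str        # e.g. "mobile"
    location: str      # e.g. "Boston, MA"
    local_time: time   # time of day where the searcher is
    moving: bool       # inferred from device sensors

def expand_query(query: str, ctx: SearchContext) -> str:
    parts = [query]
    if time(11, 0) <= ctx.local_time <= time(14, 0):
        parts.append("open now for lunch")
    if ctx.device == "mobile" and ctx.moving:
        parts.append("within walking distance")
    parts.append(f"near {ctx.location}")
    return " ".join(parts)

ctx = SearchContext("mobile", "Boston, MA", time(11, 30), True)
print(expand_query("Boston restaurants", ctx))
# "Boston restaurants open now for lunch within walking distance near Boston, MA"
```

Two searchers typing the same two words can effectively issue two completely different queries, which is exactly why a single "rank" for that keyword is so hard to pin down.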
So… answering the question of who ranks #1 for "Boston restaurants" becomes an even more challenging task.
Keyword Rankings are Directional at Best
Strong keyword rankings don’t always equate to high volumes of organic traffic, let alone improvements in revenue. As I mentioned at the beginning, we’ve lost a lot of visibility on search volume metrics, which makes it very difficult to accurately estimate the amount of traffic you can gain from an individual keyword. Factor in the changing appearance of the search engine results page (e.g. the widespread increase in featured snippets) and it becomes an even more daunting task.
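One way to see why rankings are only directional: the traffic a given position delivers depends on a click-through-rate curve that shifts whenever the SERP layout changes. Here's a hedged sketch (the CTR figures and the featured-snippet discount below are assumptions for illustration, not measured values):

```python
# Illustrative only: estimate organic clicks from rank position.
# CTR values and the featured-snippet discount are assumptions.
ASSUMED_CTR = {1: 0.28, 2: 0.15, 3: 0.10, 4: 0.07, 5: 0.05}

def estimated_clicks(monthly_volume: int, position: int,
                     featured_snippet_present: bool = False) -> int:
    ctr = ASSUMED_CTR.get(position, 0.02)
    if featured_snippet_present:
        ctr *= 0.7  # assume a snippet above you siphons off clicks
    return round(monthly_volume * ctr)

# With only a "1K-10K" volume range, ranking #1 could mean
# anything from ~280 to ~2,800 visits per month -- and less
# still if a featured snippet sits above the organic results.
print(estimated_clicks(1_000, 1))         # 280
print(estimated_clicks(10_000, 1))        # 2800
print(estimated_clicks(10_000, 1, True))  # 1960
```

The point isn't the specific numbers; it's that two unknowns (the true volume and the true CTR) multiply together, so a #1 ranking on its own tells you very little about the traffic or revenue it will produce.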