News of a Google algorithm leak has surfaced, unveiling a wealth of documents that shed light on Google Search “signals.” These records reveal an astonishing figure: more than 14,000 ranking factors, or signals. As the community delves into these documents, conducting its own research and testing, a flood of insights is bound to emerge in the coming days.
The biggest issue with this new documentation is that, for years, Google has insisted that Chrome data, clicks, and domain authority are not ranking factors.
One intriguing revelation is that Google has access to an extensive array of data tables covering webpages, domains, content, users, and potentially other third-party data. The question arises: how are these diverse data sets used in Google’s ranking algorithm? It’s a mystery to which only a select few within Google hold the key.
Should we conclude that Google had its representatives lie on purpose?
I believe, as I did when I wrote this post on Google’s Search Magic, which came to light in 2013, that it was simply a case of obfuscation. Think about all the data Google has on you. Do they purposely take that data and violate privacy policies? Probably not. But could they find a way to leverage the data without crossing any lines? I believe so. For example, could they mine every email order confirmation delivered to every inbox to learn which items are purchased online most often, and from which vendors? What do you think?
In 2013, as Big Data was becoming popular with marketing tools, I presented a session at Search Engine Strategies titled “Mad Men of Search: How They Leverage Data to Rock the Customer Experience” to make people aware of how companies like Google, Facebook, and Target could use big data, and how some solutions were becoming available to the general market. I posted that presentation to SlideShare at the time, and for some reason, all the images of the slides are no longer available. Do I believe this is some conspiracy theory? Possibly.
Here is a quick video of some of the critical slides from that presentation:
Search engines like Google aim to deliver what a human searcher would want to experience, and serving that intent has been the foundation of SEO since the early days of the internet.
These are the 5 Rs of SEO:
(1) being Relevant
(2) building your Reputation
(3) being Remarkable
(4) ensuring your content is Readable (or, with the amount of video content today, it could be called Consumability)
(5) having sufficient Reach and interest in your customers.
Search engine engineers use various signals in their algorithms to make this work. Almost a decade ago, Google revealed that it uses roughly ten times as many signals for personalization as it does for links, content, and code when trying to match a searcher’s query. If you were one of Google’s search engineers and had over 14,000 signals, plus possibly others not listed, how would you use them to achieve relevance for your users?
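To make that thought experiment concrete, here is a minimal sketch of one generic way an engineer might combine many normalized signals into a single relevance score. Every signal name and weight below is hypothetical, chosen purely for illustration; the leaked documents list attribute names, not the weights or formulas Google actually uses.

```python
# Hypothetical illustration only: the leak names signals, not how they are
# weighted or combined. This sketch shows a plain weighted-sum approach to
# turning many normalized signals into one score, then ranking by it.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    signals: dict[str, float]  # each signal normalized to the 0.0-1.0 range

# Invented weights for a handful of invented signals.
WEIGHTS = {
    "topical_relevance": 0.30,
    "site_reputation": 0.25,
    "content_quality": 0.20,
    "user_engagement": 0.15,  # e.g., click or interaction data
    "freshness": 0.10,
}

def score(page: Page) -> float:
    """Weighted sum of the page's signals; missing signals count as 0."""
    return sum(w * page.signals.get(name, 0.0) for name, w in WEIGHTS.items())

pages = [
    Page("example.com/a", {"topical_relevance": 0.9, "site_reputation": 0.4,
                           "content_quality": 0.8, "user_engagement": 0.6,
                           "freshness": 0.2}),
    Page("example.com/b", {"topical_relevance": 0.7, "site_reputation": 0.9,
                           "content_quality": 0.6, "user_engagement": 0.8,
                           "freshness": 0.9}),
]

# Rank highest score first, the way a results page would.
for page in sorted(pages, key=score, reverse=True):
    print(f"{page.url}: {score(page):.3f}")
```

In practice, a system with 14,000+ signals would almost certainly rely on learned models rather than hand-set weights like these, but the core idea is the same: each signal contributes evidence, and the engineer’s job is deciding how much each one should count toward relevance.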