The latest Google update is here, and I wanted to present a few ideas to help you take advantage of it. BERT was pre-trained on, among other sources, English Wikipedia (roughly 2,500M words), and it is now being used to help Google answer questions better in the search results.

After a long hiatus from writing anything SEO related, I’m back, because this topic has really got me interested again in what Google is up to.

I’ll cover what we, as SEOs, can do about the new BERT update and what we should be doing for our clients to future-proof their rankings.

Just as a note, I’ve based these thoughts largely on Google’s paper and GitHub page, sprinkled with what I’ve seen ‘in the wild’ over the last few years. References are at the end of the post.

What is BERT according to Google’s own page?

BERT is a method of pre-training language representations, meaning that we train a general-purpose “language understanding” model on a large text corpus (like Wikipedia), and then use that model for downstream NLP tasks that we care about (like question answering). BERT outperforms previous methods because it is the first unsupervised, deeply bidirectional system for pre-training NLP.
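
To make that concrete, here is a minimal sketch of what ‘question answering’ with a pre-trained BERT model looks like in practice. It assumes the Hugging Face transformers library and a publicly available BERT checkpoint fine-tuned on SQuAD; the context paragraph and question are made-up examples, and this is illustrative only, not how Google runs things in production.

```python
# pip install transformers torch
from transformers import pipeline

# A public BERT checkpoint fine-tuned for extractive question answering (SQuAD).
qa = pipeline(
    "question-answering",
    model="bert-large-uncased-whole-word-masking-finetuned-squad",
)

# Made-up page copy standing in for real website content.
context = (
    "Breakdown cover is an insurance product that pays for roadside assistance "
    "if your car breaks down. Most policies also include a tow to the nearest garage."
)

result = qa(question="What does breakdown cover pay for?", context=context)
print(result["answer"], round(result["score"], 3))
```

The model reads the question and the passage together and extracts the span of text it believes answers the question, which is exactly the kind of task the quote above describes as ‘downstream’.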

What is the BERT Google update for us, SEOs?

The BERT update is a strong move towards more Natural Language Processing from Google, showing us, as SEOs, that we can’t ignore the intent, context and semantics of any written content on our clients’ websites.

A few terms to understand first

Pre-training is the stage in which BERT was essentially taught its understanding of language; Google trained it on millions of sentences.

NLP = Natural Language Processing

OK, enough with the theory, what are the 5 actionables?

1 Google BERT is all about ‘question answering’, so give Google questions and answers.

Add FAQs to your key pages, list questions and answers – mark them up using FAQ schema.

Here’s why Google cares: voice search, and the data used to teach BERT. ‘Question answering’ is the task Google cares about most.

Source: https://github.com/google-research/bert

For example, MoneySuperMarket’s breakdown cover page includes an FAQ section in the page copy (see the schema sketch below):

https://www.moneysupermarket.com/breakdown-cover/
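
For reference, here is a cut-down sketch of what FAQPage structured data looks like, generated in Python. The questions, answers and the breakdown cover topic are placeholders I’ve made up for illustration; adapt them to the real questions your page answers.

```python
import json

# Hypothetical questions and answers for a breakdown cover page.
faqs = [
    ("What is breakdown cover?",
     "Breakdown cover pays for roadside assistance if your vehicle breaks down."),
    ("Does breakdown cover include a tow?",
     "Most policies include a tow to the nearest garage."),
]

faq_schema = {
    "@context": "https://schema.org",
    "@type": "FAQPage",
    "mainEntity": [
        {
            "@type": "Question",
            "name": question,
            "acceptedAnswer": {"@type": "Answer", "text": answer},
        }
        for question, answer in faqs
    ],
}

# Paste the output into a <script type="application/ld+json"> tag on the page.
print(json.dumps(faq_schema, indent=2))
```

The key point is that the questions and answers should also exist as visible copy on the page; the markup only describes what is already there.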

2 Optimising for entities and tokens: is your content ‘deep’ enough, or just shallow?

Google is really going after sentence prediction and named entity recognition. The more relevant entities you mention, the stronger and ‘deeper’ the support for your content will be.

(Devlin et al., 2018)

Here’s an example of the depth of content your competitors (the best-ranking ones) may have in their copy:

Let’s take ‘blue widgets’ as an example. The best-ranking competitors mention buying them, installing them, maintaining them, and cleaning a ‘blue widget’ three times per week – but with a special blue widget oil, diluted to 5% in water, for that extra blue widget shine – while ensuring that the key part of the ‘blue widget’, the blue widgetator, is tuned to 10 degrees south when the blue widget is operating.

If your text just says ‘we sell the best blue widgets and the oil to clean them’, and you then spend 300 words on how your blue widgets are cleaner and shinier than theirs, it’s not going to work. Google will look at their copy, analyse the entities, concepts and context it contains, understand that their copy goes far DEEPER into the topic and answers the most common questions that come up during the ‘blue widget’ buying journey, and rank theirs instead of yours.

So the whole notion of ‘you can’t improve your SEO to work together with Google’s algorithms’ is not true. Yes, we can improve it; it just takes a whole different approach to presenting data and helping users through your content.
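
One rough way to sanity-check how ‘deep’ your copy is versus a competitor’s is to run both through an off-the-shelf named entity recognition model and compare what it finds. Below is a quick sketch using spaCy; the file names are placeholders, and an open-source NER model is only a blunt proxy for whatever Google actually does.

```python
# pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

def entity_set(text: str) -> set:
    """Return the set of (entity text, entity label) pairs found in the copy."""
    return {(ent.text.lower(), ent.label_) for ent in nlp(text).ents}

our_copy = open("our_page.txt").read()                 # placeholder file
competitor_copy = open("competitor_page.txt").read()   # placeholder file

missing = entity_set(competitor_copy) - entity_set(our_copy)
print("Entities the competitor covers that we don't:")
for text, label in sorted(missing):
    print(f"  {text} ({label})")
```

If the competitor’s list is long and yours is short, that is usually a good hint the topic coverage gap is real.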

3 Ensure that all your targeting is in the main copy

Not just in headings, tables or lists: it MAY BE that Google pays closer attention to the main text data (sentences and paragraphs) than to list items, tables or headings.

(Devlin et al., 2018)

BERT was ‘pre-trained’ largely on ‘body copy’, with bullet lists and tables filtered out. My inclination is that Googlebot may treat your website in a similar way. Direct Answers do pull from tables and lists very often, but I suspect that content may carry less weight as a ranking signal in its own right.
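
A simple, hedged way to check where your target phrases actually live on a page is to split the HTML into paragraph copy versus headings, lists and table cells, and count mentions in each. Here is a sketch using requests and BeautifulSoup; the URL and phrase are placeholders, and the tag split is my own rough approximation of ‘main copy’ versus everything else.

```python
# pip install requests beautifulsoup4
import requests
from bs4 import BeautifulSoup

url = "https://www.example.com/blue-widgets/"   # placeholder URL
phrase = "blue widget"                          # placeholder target phrase

soup = BeautifulSoup(requests.get(url, timeout=10).text, "html.parser")

def count_in(tags):
    # Join the visible text of the given tags and count phrase occurrences.
    text = " ".join(el.get_text(" ", strip=True).lower() for el in soup.find_all(tags))
    return text.count(phrase)

print("Mentions in paragraphs:", count_in(["p"]))
print("Mentions in headings/lists/tables:", count_in(["h1", "h2", "h3", "li", "td", "th"]))
```

If almost all your targeting sits in the second bucket, it is worth weaving it into proper sentences as well.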

4 Write some of your content Wikipedia-like

As per the screenshot above from Google’s paper, BERT was taught largely using Wikipedia. Wikipedia is full of clear definitions, factual statements, named entities and dense interlinking between related topics.

These are the kinds of qualities that I can already see playing a large part in helping pages rank, and that we can take advantage of.

I’m not saying that every single page has to look like a Wikipedia page, but looking at some of the best-ranking sites across multiple industries, I can see how ‘heavy’ and meaningful their copy is, and how many facts and terms they present with good interlinking. This is not brand-new information; it’s just another move from Google towards rewarding that approach.

5 Sentences, paragraphs and next steps (next sentences)

In addition to optimising for keywords and intents, we should optimise the copy for sentences, context and the ‘next step’ a user may want to take.

Google has mentioned multiple times that BERT is all about understanding sentences better.

If your copy only talks in short bullets and doesn’t include ‘a flow of thought’, it may perform more weakly.

The paper also talks a lot about ‘next sentence prediction’.

(Devlin et al., 2018)
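
To illustrate what ‘next sentence prediction’ means, here is a minimal sketch using the pre-trained BERT next-sentence head from the transformers library. The two sentences are made-up examples; a high score just means the model thinks the second sentence plausibly follows the first, which is a decent intuition pump for writing copy with ‘a flow of thought’.

```python
# pip install transformers torch
import torch
from transformers import BertTokenizer, BertForNextSentencePrediction

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForNextSentencePrediction.from_pretrained("bert-base-uncased")

sentence_a = "We sell breakdown cover for cars and vans."
sentence_b = "European cover can be added to any policy at checkout."

inputs = tokenizer(sentence_a, sentence_b, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index 0 = "sentence B follows sentence A"; index 1 = "sentence B is random".
prob_follows = torch.softmax(logits, dim=1)[0, 0].item()
print(f"Probability that B follows A: {prob_follows:.2f}")
```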

I hope you enjoyed this read and that you can and will apply some of these ideas.

If you have any need for an SEO consultant (in London or anywhere else), drop me a message and let’s arrange a time to speak.

References:

Google’s paper: Devlin, J., Chang, M.W., Lee, K. and Toutanova, K., 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. https://arxiv.org/pdf/1810.04805.pdf

GitHub page for the project: https://github.com/google-research/bert

A great Medium post on BERT: https://medium.com/the-artificial-impostor/news-topic-similarity-measure-using-pretrained-bert-model-1dbfe6a66f1d

And part 2 from 2018 dissecting BERT: https://medium.com/dissecting-bert/dissecting-bert-part2-335ff2ed9c73 

Enough time has now passed since the September 2019 update to gauge what it targeted, and what changes you should make to regain your rankings before or during the next big core update.
What I present here are not individual fixes, as those depend on each website, but a concise approach that I have used in the past to recover multiple websites from ranking peril.

Why ‘Leave No Stone Unturned’?

I will also talk about a ‘leave no stone unturned’ approach to SEO auditing to ensure recovery from a penalty. It is inspired heavily by Glenn Gabe and his American version, the ‘kitchen sink’ approach (read more about it here). For some it’s a no-brainer that, with multiple Google algorithms at play, you need to cover everything on a site and not just cherry-pick your fixes. However, I now use Glenn’s articles (linked above) and research to explain to fellow SEOs and/or clients that this is a valid approach: a Google penalty often hits a site for a number of reasons, and if we only carry out one test, the chances of that one element being the only thing at play are minuscule.

I’ve been an SEO consultant in London for many years and I have never seen so much turmoil in Google’s updates as during 2019.

In a nutshell, no one but Google knows definitively what exactly triggered the penalty. They did state they’re looking for a big improvement in overall website quality. So by covering all the main aspects of SEO, and not only implementing the one or two tests that seem most likely, we have the biggest chance of seeing a reversal of any penalty or filter they have applied.

Why is it necessary to look at these updates ‘holistically’?

These big core updates should not be looked at as individual factors. We don’t get a ‘Penguin’ or a ‘Panda’ anymore; they’re no longer single, big changes to one factor.
These new core updates, just like the multiple smaller updates throughout the year, are a collection of sets of ranking adjustments, altogether designed to change the landscape.

Sure, you may say the ‘Medic’ update targeted one particular niche more than others; for me, it was targeting trust in the SERPs landscape, the accuracy of the data presented, the depth of topic coverage and (perhaps) the authority or trustworthiness of each individual author.

What does Google want from us?

Here are a few directions Google is moving towards, along with the individual events and focus areas I’ll be discussing:
* Security: HTTPS as a ranking factor, and the recent announcement on blocking mixed content
* Mobile experience: speed on mobile, the smartphone crawler being used, and prioritising the mobile experience over desktop for rankings
* Trust: schema and broader coverage of the author as part of the algorithm, the Knowledge Graph taking over more and more, and Q&A schema gaining prominence in the SERPs
The September 2019 Google core algorithm update should be looked at through the lens of these changes, as it’s part of a continued process to bring us mobile-first, secure, trustworthy search results.

Do you have any case studies of a site making a recovery during the September 2019 Google Core update?

Yes, I do. As a freelance SEO consultant it is my job to record and, after having asked permission from clients, present the results of SEO campaigns. Below is one of the clients I worked with prior to September 2019:

Google Medic recovery in September 2019 Core Update.

As you can see, they experienced a drop back in September 2018, then had a small and very short recovery around the March 2019 and May 2019 Google updates, but a slow decline followed.

What were the adjustments carried out to address the September 2019 Google Core update?

Here are the main themes of the updates carried out to improve the situation:
1 Improving the navigation across the site, the interlinking and parts of the architecture (there were some potential duplicates, and pages belonging to one section but sitting in a different folder due to the CMS).
2 Addressing the intent vs the search query – on multiple pages visitors were looking for one thing, say ‘apples’, and because our page was about ‘fruit’ in general we believed we met the intent, but we never really talked about ‘apples’ specifically. Updates were made to address keywords and queries more appropriately and in a more in-depth way.
3 Using the ‘leave no stone unturned’ approach and improving the overall website quality, tackling anything significant from the common ‘Google Quality Guidelines’ issues – site speed, content duplication, bad experience on the pages, lowering the bounce rate, etc.

What are the most important elements (in my opinion) to address on any site that is struggling after the 2018 Medic update OR the September 2019 Core update?

Here is the step-by-step process I use to review query-to-page relevancy and whether the intent is met:
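
The heart of that review is manual: reading the page against the query and the intent behind it. As a rough, illustrative aid, though, you can also score query-to-page similarity with a pre-trained sentence embedding model. The sketch below uses the sentence-transformers library; the model choice, file name and queries are my own placeholders, and the scores are only a guide for prioritising pages to review, not a ranking factor.

```python
# pip install sentence-transformers
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model choice

page_text = open("our_page.txt").read()                       # placeholder file
queries = ["buy blue widgets", "how to clean a blue widget"]  # placeholder queries

page_emb = model.encode(page_text, convert_to_tensor=True)
query_embs = model.encode(queries, convert_to_tensor=True)

# Cosine similarity between each query and the page copy.
for query, score in zip(queries, util.cos_sim(query_embs, page_emb)):
    print(f"{query}: similarity {score.item():.2f}")
```

A low score for a query you care about usually means the page never really addresses that topic directly, which is exactly the ‘apples’ vs ‘fruit’ problem described above.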

What are some of the most common technical onsite issues that combined may cause problems with Google core updates?

End notes

I hope this post and these ideas help you if you’ve been hit by one of the past Google updates. Feel free to get in touch if you’re a bit stuck on what to tackle next on your site – always happy to help.
