Why Wikipedia is still visible across Google’s SERPs in 2018

Google is always evolving. But some things in the world of search never change.

One such thing is the presence of Wikipedia across the Google SERPs. From queries about products and brands to celebrities and topical events, Wikipedia still features heavily across Google searches – even while our habits as search engine users change (with voice and mobile increasingly having an impact), and while Google itself works to make its results more intuitive and full of rich features.

Back in May my piece “No need for Google” argued that wikis themselves were fantastic search engines in their own right (check out wiki.com if you want search results that delve into the content on Wikipedia as well as numerous other wikis). Wikipedia’s visibility on Google is testament to the continuing value and usefulness of “the free encyclopedia anyone can edit.”

So how does Wikipedia manage to maintain this visibility in 2018?

Natural ranking

Even in 2018, Google’s SERPs are still dominated by the organic rankings – a list of web pages it deems relevant to your query based on a number of factors, such as the site’s size, the freshness of its content, and the number of other sites linking to it.

Unsurprisingly, Wikipedia’s pages still do the job when it comes to appearing in Google’s organic rankings. It has massive authority, having been established for nearly 20 years and now boasting almost 6 million content pages. There has been plenty of time for inbound links to build up, and the ever-growing number of pages on the domain gives other sites more reason to link back.

So Wikipedia is a massive, well-established site. It also does really well in the fresh-content stakes. Around 35 million registered users keep tweaking and adding to the site’s content, alongside countless more who make changes without signing in. Additionally, more than a thousand administrators check whether changes should be kept or reverted. This ensures the site is being amended around the clock – and Google is always keen to rank sites which are actively updated ahead of those which are published and never touched again.

Another element of Wikipedia’s natural ranking prowess is its on-site SEO. Here I’m referring to things like how it uses internal links to reference its own pages, which are handy for both users and Google’s crawlers. Internal links are super easy to add when editing Wikipedia pages – and thus appear peppered throughout most articles on the site. Also note the site’s use of title and header tags, as well as its clean URLs (a quick way to inspect these signals is sketched below).
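
A quick way to see these signals for yourself is to pull a single article and check its markup. Below is a minimal sketch, assuming Python 3 with the third-party requests and beautifulsoup4 packages installed and using the Plymouth article purely as an example, that prints the page’s title tag, its headers, and a count of its internal /wiki/ links.

```python
# Inspect basic on-site SEO signals on one Wikipedia article:
# the title tag, header tags, and internal /wiki/ links.
import requests
from bs4 import BeautifulSoup

url = "https://en.wikipedia.org/wiki/Plymouth"  # example article
resp = requests.get(url, headers={"User-Agent": "on-site-seo-sketch/0.1"})
soup = BeautifulSoup(resp.text, "html.parser")

print("title tag:", soup.title.string)
print("h1 heading:", soup.find("h1").get_text(strip=True))
print("h2 headings:", len(soup.find_all("h2")))

# Internal links are clean /wiki/ URLs; the ":" filter skips
# File:, Category:, Help: and other non-article namespaces.
internal_links = [
    a["href"]
    for a in soup.find_all("a", href=True)
    if a["href"].startswith("/wiki/") and ":" not in a["href"]
]
print("internal article links:", len(internal_links))
```

Nothing exotic here: the point is simply that every article exposes a descriptive title, a clear heading hierarchy, and a dense web of clean internal URLs for crawlers to follow.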

These features aren’t the toughest SEO hacks in the world, but you can see how added together they keep Wikipedia visible across Google’s organic rankings in the face of increasing competition and ever-emerging SEO tactics.

Featured snippets


Wikipedia is doing well at remaining visible in other parts of the SERPs too. Featured snippets are the box-outs on Google’s results pages which appear above the natural results. They seek to give a summary answer to the searcher’s question, without the user needing to click beyond the SERP.

Wikipedia is not including any special mark-up on its pages in order for its content to appear in featured snippets. Rather, it is the strength of the site’s content – each article typically opens with a concise, clear (read: edited) summary – that helps Google’s crawlers ascertain what information on the page would be useful to the user in that context.
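
To get a feel for how extraction-friendly those article leads are, here is a minimal sketch, assuming Python 3 and the requests package, that fetches the plain-text summary Wikipedia itself exposes for every article through its public REST API: the same concise, edited lead text a crawler finds at the top of the page. The article title is just an example.

```python
# Fetch the concise lead summary Wikipedia publishes for every article
# via its public REST API (the /page/summary/ endpoint).
import requests

title = "Plymouth"  # example article
resp = requests.get(
    "https://en.wikipedia.org/api/rest_v1/page/summary/" + title.replace(" ", "_"),
    headers={"User-Agent": "featured-snippet-sketch/0.1"},
)
data = resp.json()
print(data["title"])
print(data["extract"])  # the short, edited summary from the top of the article
```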

“People also ask”

It follows that if Google is including Wikipedia articles in its featured snippets, the site also retains visibility (albeit small, before a user makes a click) in the search engine’s “People also ask” boxes.

Again, Google is crawling and delivering this content programmatically. When searching for “midterms 2018,” Google’s algorithm is smart enough to understand that searchers are also asking longer-tail questions around that search term – and even if Wikipedia doesn’t have a presence in the organic listings (in this instance, most of those places are given over to news sites), it still receives some visibility and traffic by virtue of its clear, concise and crawlable content.

Knowledge graphs

Knowledge graphs appear towards the right-hand side of the Google SERPs. They typically feature a snippet of summary text, images and/or maps, and a plethora of scannable details and handy links.

They are generated in part from Google drawing on content the algorithm crawls programmatically (as with featured snippets), as well as content which is marked up to alert the search engine to useful details. Businesses can increase their chances of being included in knowledge graphs by signing up to Google My Business and adding the necessary information to their profile, as well as by using on-site markup.
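
“On-site markup” here usually means schema.org structured data. What follows is a hedged sketch of the kind of JSON-LD a business might embed in its pages to surface those useful details; all of the names and values below are hypothetical placeholders rather than anything taken from the article.

```python
# Emit schema.org JSON-LD for a business listing. All details below are
# hypothetical placeholders used only to illustrate the markup's shape.
import json

local_business = {
    "@context": "https://schema.org",
    "@type": "LocalBusiness",
    "name": "Example Bakery",
    "url": "https://www.example.com",
    "telephone": "+1-555-0100",
    "address": {
        "@type": "PostalAddress",
        "streetAddress": "1 Example Street",
        "addressLocality": "Plymouth",
        "addressCountry": "GB",
    },
    "openingHours": "Mo-Sa 08:00-17:00",
}

# The output would be embedded in the page's <head> inside a
# <script type="application/ld+json"> tag.
print(json.dumps(local_business, indent=2))
```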

As you can see from the examples above, Wikipedia content is frequently used by Google to populate knowledge graphs. Again, this is likely due to the natural authority of the site and its easily crawlable text, rather than any SEO-specific markup. But it is a good illustration of how visible the domain is thanks to the strength of its content. Frequently, as with the “landmarks in plymouth” query above, Google will opt to display the informational Wikipedia content (and elements from other sources) in the knowledge graph while giving over the rest of the SERPs to other pages – but it is still visible.

Site links

Another way Wikipedia grabs an extra bit of SERP real estate – as well as giving searchers more reason to click through to the domain – is by giving Google good reason to display its site links.

These are generated by a mixture of relevant links Google crawls from the page in question (“United States Senate elections” in the above example), as well as other related pages on the same domain (“Ras Baraka” is the re-elected Mayor of Newark, but his page is not linked from the elections page).

Wikipedia succeeds here, where the BBC doesn’t, by virtue of its flawless site structure and liberal use of internal linking – making it easy for Google to draw out the most relevant links for the query.
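
As a rough illustration of how machine-readable that internal linking is, here is a sketch (again assuming Python 3 and the requests package) that asks MediaWiki’s public Action API to list the pages linked from the Senate elections article mentioned above; the exact article title is an assumption, since the screenshot isn’t reproduced here.

```python
# List the pages linked from one article using MediaWiki's Action API.
# The article title is an assumption based on the example in the text.
import requests

params = {
    "action": "query",
    "prop": "links",
    "titles": "United States Senate elections, 2018",
    "pllimit": "50",
    "redirects": 1,
    "format": "json",
}
resp = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params=params,
    headers={"User-Agent": "sitelinks-sketch/0.1"},
)
pages = resp.json()["query"]["pages"]
for page in pages.values():
    for link in page.get("links", []):
        print(link["title"])  # other Wikipedia pages linked from this article
```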

Takeaways

There are a number of places in Google’s increasingly rich SERPs where Wikipedia doesn’t tend to appear as frequently, if at all. These include image packs, video results, news and social carousels, sponsored (and retail-oriented) content, and local results. The reason for this is obvious in most cases, but not in all. Images and video do, of course, feature across thousands of Wikipedia pages, but it is arguable that other sites are that bit better at optimizing this kind of content. After all, wiki software was established when much of the web was text-based, so we can understand why Google may be more likely to display this kind of content from more modern CMSs.

With that said, the degree to which Wikipedia is still visible across the SERPs not only highlights the opportunity for SEOs to find visibility amid increasingly competitive results pages, but also goes to show how important domain authority, good (updated, concise, edited, readable and crawlable) content and excellent internal linking are to acquiring and maintaining visibility on Google in 2018.



via Search Engine Watch
