Panda is the name of a Google algorithm update designed to reduce the prevalence of low-quality and duplicate content so that users are served only relevant, unique results. This filter came about as content farms became common and were heavily polluting the web.
Panda was first introduced on 23 February 2011. The algorithm assigns a quality rating to pages, which is used as a ranking factor in SERPs. Its classification criteria are above all modeled on human judgment: Do you think this site is trustworthy? Would you give it your bank details? Is the content relevant? Do ads interfere with the user experience? And so on.
Amit Singhal, then head of Google’s search team, even published the 23 guiding questions that Panda is based on. We find it worth sharing them with you because they largely reflect the algorithm’s criteria:
- Would you trust the information presented in this article?
- Is this article written by an expert or an enthusiast who knows the topic well?
- Does the website have duplicate, overlapping, or redundant articles on similar topics?
- Would you confidently make a payment on this website?
- Are the articles on the website well written? Are they free of spelling, stylistic, and grammatical errors?
- Are the topics driven by the readers’ needs or does the website generate content purely with the purpose of ranking in search results?
- Does the article provide exclusive content or information?
- Does the page offer substantial value compared to other pages in the search results?
- How much quality control is applied to the content?
- Does the article describe both sides of a story or is it purely subjective?
- Is the website recognized for its relevance in a topic?
- Does the content appear to be mass produced and/or distributed to a large number of websites?
- Is the article carefully edited and laid out, or does it look botched (no images, no structure, no paragraphs)?
- For a health-related query, would you trust the information on this website?
- Would you qualify this website as an authoritative source?
- Is this article comprehensive?
- Does this article contain in-depth analysis or is it superficial?
- Would you recommend this website and its articles?
- Does this article contain so many advertisements that they interfere with navigation and distract you from the original topic?
- Could this article be easily published in a magazine, encyclopedia, or book?
- Are the articles too short and not specific enough?
- Are the pages carefully designed?
- Would users complain when they visit a page on the website? (slowness, poor navigation...)
Panda now relies on machine learning to estimate how users would rate the content’s quality. This approach is far from unanimously accepted among SEOs, who find it increasingly difficult to perceive and understand Panda’s expectations over time.
In fact, unlike Penguin, all websites can potentially be in Panda's sights as it no longer only checks for duplicate content. It goes much further.
Websites with “thin” content
Panda tracks down poor pages that contain little text and few relevant resources. Today, it’s estimated that a minimum of 300 words is necessary to have any hope of satisfying Google. But this estimate is very generic. Some niche industries will easily stand out with 300 words, while others, much more competitive, will need 1,000 words or more.
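As a rough illustration, flagging “thin” pages can be as simple as stripping a page’s markup and counting the remaining words. The 300-word floor below is the generic estimate mentioned above, not a Google rule, and the parser is a minimal standard-library sketch:

```python
import re
from html.parser import HTMLParser

# Hypothetical threshold taken from the generic 300-word estimate above.
THIN_THRESHOLD = 300

class TextExtractor(HTMLParser):
    """Collects visible text, skipping <script> and <style> blocks."""
    def __init__(self):
        super().__init__()
        self.chunks = []
        self._skip = 0

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip += 1

    def handle_endtag(self, tag):
        if tag in ("script", "style") and self._skip:
            self._skip -= 1

    def handle_data(self, data):
        if not self._skip:
            self.chunks.append(data)

def word_count(html: str) -> int:
    """Count words in the visible text of an HTML page."""
    parser = TextExtractor()
    parser.feed(html)
    return len(re.findall(r"\w+", " ".join(parser.chunks)))

def is_thin(html: str, threshold: int = THIN_THRESHOLD) -> bool:
    return word_count(html) < threshold
```

Run such a check over a full crawl and you get a shortlist of candidate pages to enrich, keeping in mind (as discussed below) that raw word count is only a convenient heuristic, not a Panda criterion.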
Duplicate content
This is content copied from one website to another, which may appear multiple times in search results. Internal duplication is also very common: it occurs when two pages of the same website are too similar. E-commerce websites are particularly affected by this phenomenon because of faceted navigation, pagination, etc.
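Internal near-duplicates of the kind described above can be surfaced with a simple pairwise text comparison. This is only a sketch: the 0.9 similarity threshold is an arbitrary illustration, not a value Panda uses, and a real crawl would need a faster method than comparing every pair.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Similarity ratio between two extracted page texts, in [0, 1]."""
    return SequenceMatcher(None, a, b).ratio()

def find_near_duplicates(pages: dict, threshold: float = 0.9):
    """pages maps URL -> extracted text; returns suspicious URL pairs."""
    urls = sorted(pages)
    pairs = []
    for i, u in enumerate(urls):
        for v in urls[i + 1:]:
            if similarity(pages[u], pages[v]) >= threshold:
                pairs.append((u, v))
    return pairs
```

On an e-commerce site, faceted or paginated variants of the same listing typically show up immediately in such a report, which makes them good candidates for canonical tags.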
Low quality content
Panda is also fighting against pages that offer no added value and appear to be written only to “seduce” bots. These types of pages were ubiquitous in the days of content farms and “easy” SEO! Panda stalks the remaining few on a daily basis.
Lack of authority or reliability
Known as Google “E-A-T” (Expertise, Authoritativeness, and Trustworthiness), this criterion is one of the most important today. Content produced by sources that are not considered reliable represents a risk for the website’s position.
Low quality user-generated content
Some websites or blogs only post articles to fill space. The most common example is blogs that publish short guest articles, full of spelling and grammar mistakes and lacking in relevant information.
Overly intrusive advertising
Pages built primarily around paid advertising rather than original content are severely sanctioned by Panda. And for good reason: the ads not only disrupt the UX, but also bring no added value to the website. On its support pages, Google has published a guide to integrating ads into web pages.
Affiliate product feeds
Websites with affiliate links and paid ads are not directly targeted. However, some of them are regularly sanctioned by Panda. This is the case for websites whose content is based almost exclusively on product feeds or affiliate elements, which add nothing to the page.
Recovering after a Panda hit is both simple and complex.
Since Panda rewards websites with high-quality content, the solution is to improve the content’s quality and uniqueness. While this is not always easy, especially when the website has several thousand pages, it has been proven many times to be exactly what it takes to get out of a Panda penalty. So it’s well worth taking the time to go through all these pages and enrich them.
Remove low-quality content
It’s possible to bounce a website back up in the SERPs by removing poor, low-quality content that has never shown good results (high bounce and exit rates, very short time spent on the page). If for any reason you wish to keep this content, you must ensure that it’s enriched and cleared of all mistakes. If this is not possible (for lack of time or human resources), it’s better to do without: better to cut off the foot now than the whole leg later.
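The engagement signals mentioned above can drive a simple triage script. Field names and thresholds here are hypothetical; substitute the real numbers from your analytics export.

```python
# Illustrative sketch: shortlist pages to remove or rewrite based on
# engagement metrics. An 85% bounce rate and 10 seconds on page are
# arbitrary example thresholds, not values Google publishes.
def pages_to_prune(pages, max_bounce=0.85, min_time_on_page=10):
    """pages: list of dicts with 'url', 'bounce_rate', 'avg_time_sec' keys."""
    return [
        p["url"]
        for p in pages
        if p["bounce_rate"] > max_bounce and p["avg_time_sec"] < min_time_on_page
    ]
```

The resulting list is a starting point for manual review, not an automatic delete list: a page can bounce heavily and still answer its query perfectly.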
Fight duplicate content
Sometimes we are not the source of the duplicate content. Some bloggers or novice internet users may take our content because they like it or out of convenience, even though it’s illegal. To discourage this, you can disable right-clicking and text copying, but the text remains available in the page’s source code. The most sustainable method is to add a canonical tag to each of your pages. By doing so, you let Google know that the “original” content is yours.
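A canonical tag is simply a `<link rel="canonical" href="...">` element in the page’s `<head>`. One way to audit this at scale, sketched below with the standard library, is to check that each page declares a canonical URL pointing at its own preferred address (URLs here are illustrative):

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Records the href of the first <link rel="canonical"> encountered."""
    def __init__(self):
        super().__init__()
        self.canonical = None

    def handle_starttag(self, tag, attrs):
        d = dict(attrs)
        if tag == "link" and d.get("rel") == "canonical" and self.canonical is None:
            self.canonical = d.get("href")

def canonical_url(html: str):
    finder = CanonicalFinder()
    finder.feed(html)
    return finder.canonical

def has_self_canonical(html: str, page_url: str) -> bool:
    """True when the page declares itself as the canonical version."""
    return canonical_url(html) == page_url
```

Pages that fail this check, or that declare no canonical at all, are the ones most exposed when the content is copied elsewhere.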
However, while Google teams have repeatedly said that Panda encourages unique content, they also confirm that website owners shouldn’t stop there. What Panda is looking for is accurate information that offers unique value to users. So we are not only talking about the uniqueness of the wording, but also the uniqueness of the information.
According to John Mueller, Webmaster Trends Analyst at Google, removing technical duplicates is not even a priority compared with the relevance and uniqueness of the information. In his post, he invites you to "think about what makes a website different from the absolute best website in the niche you are in." But let’s be honest: the ideal is still to have unique pages AND highly relevant content.
Think about the number of words needed
Word count is another aspect of Panda that is often misunderstood by SEOs, advertisers, and bloggers. Some websites make the mistake of imposing a minimum number of words without taking into account the user’s needs and search intent. In fact, Google recommends letting user expectations determine how many words the content needs. If the page simply shows “Tokyo time”, no one is asking you to dive into the city’s history through the ages! Writing a long article would make no sense here, and indeed many pages with very little content rank well on Google. While word count can be a convenient way to identify undersized pages, it’s not a determining factor for Panda.
Improve the website’s structure
Technically, Panda focuses only on content and not on structure or performance. That said, these last two points can impact the website’s overall ranking, and should not be ignored. By improving the structure, you can, for example, correct duplicate content errors, which are monitored by Panda.
Algorithmic penalties are much more difficult to identify than manual penalties, which are notified directly by message in Google Search Console. A Panda penalty initially results in a drop in organic rankings, but this indicator alone is far from sufficient, because the drop in traffic can have other explanations (seasonality, trends, etc.). A comparison with the previous year can help you rule them out.
Once you’ve made sure that the drop in traffic is not due to an external factor, you need to identify which pages and groups of keywords are affected. You can do this using Search Console or other position-tracking tools such as SeeUrank. If the drop in traffic persists and positions do not recover, it’s very likely that the website has received an algorithmic penalty.
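One way to separate a seasonal dip from a persistent algorithmic drop, as suggested above, is to compare each month with the same month of the previous year. The sketch below assumes you have monthly organic-session counts exported from your analytics tool; the -30% threshold and the three-month streak are arbitrary illustrations, not official values.

```python
def yoy_change(this_year: dict, last_year: dict) -> dict:
    """Percent change per month versus the same month last year."""
    return {
        month: round((this_year[month] - last_year[month]) / last_year[month] * 100, 1)
        for month in this_year
        if month in last_year and last_year[month]
    }

def looks_algorithmic(changes: dict, drop_pct: float = -30, months: int = 3) -> bool:
    """A sustained drop over consecutive months is more suspicious than one bad month."""
    streak = best = 0
    for change in changes.values():
        streak = streak + 1 if change <= drop_pct else 0
        best = max(best, streak)
    return best >= months
```

A single bad month compared with last year usually points to seasonality or a trend; several consecutive months of steep year-over-year decline, starting near a known algorithm update, is a much stronger hint of a penalty.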
A Panda penalty can be quite severe, especially with duplicate content. But if you react quickly enough, and if the website has a good crawl rate, you should quickly resurface.
Google Panda is a rather harsh algorithm, which can penalize an entire website. However, websites that prioritize quality and unique content have little to fear. It can even become a real ally in some areas that lack relevant information. To please the Panda, all you have to do is think about new exclusive content, rather than produce low quality content by the dozen.