Most SEO experts flinch at the mention of the “duplicate content penalty.” Online marketers with little or no SEO experience love using this term, even though most of them are unaware of Google’s guidelines on duplicate content. They assume that if an article, or even just a paragraph, appears twice online, a Google penalty must be close behind.
Today, we will debunk three common myths about duplicate content that have been misleading people for years.
Myth #1: Unoriginal Content on Your Site Will Compromise Your Rankings Across Your Domain
In all my years offering SEO services, I have yet to see real evidence that non-original content affects a site’s ranking, with one extreme exception. In that case, a new website was launched, and someone at the contracted public relations company copy-pasted the home page text into a press release and distributed it to thousands of platforms, creating thousands of near-identical copies of the original page. The move caught Google’s attention, and the domain was manually blacklisted.
It was an ugly situation, since we were the web development company that had been hired to build the site, and we were blamed for the misfortune. Luckily, the domain was re-indexed after we filed a reconsideration request and explained the situation to Google’s reviewers.
Based on this example, there are three points to note:
- Volume: There were thousands of identical copies of the text on the web
- Timing: All the copies were published online at virtually the same time
- Context: The content was for a home page of a brand new domain
But this is not what people mean when they use the phrase “duplicate content.” A 1,000-word article duplicated on one page of a well-established site is not enough to get the site blacklisted. Most sites, including authority blogs, periodically repost articles that were first published elsewhere. Sure, they do not expect that content to rank, but they also know it will not hurt the credibility of the domain.
Myth #2: Scrapers Will Compromise Your Site
One of my friends, a blogger, is very keen on staying in Google’s good graces. Whenever a scraper site copies one of his blog posts, he rushes to Google Webmaster Tools and disavows the resulting links to avoid hurting the credibility of his domain. He has yet to read Google’s guidelines on disavowing links and duplicate content.
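For context, a disavow file is nothing exotic: it is a plain text file uploaded through Google’s Disavow Links tool, listing one domain or URL per line. A sketch of the format (the scraper domains below are hypothetical examples):

```text
# Lines beginning with # are comments.
# Disavow every link from an entire domain:
domain:spammy-scraper.example.com
# Disavow a single page:
https://another-scraper.example.com/copied-post/
```

As the rest of this section argues, scraped copies rarely warrant this effort; Google itself positions the tool for manipulative or unnatural links, and says most sites will never need it.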
In the past, I have checked the analytics of several major blogs, and surprisingly, their content gets scraped multiple times per day. The idea that they employ someone full time just to watch Google Webmaster Tools and disavow links is absurd. They know that duplicate content will not affect their credibility.
The bottom line is that scrapers will neither help nor hurt your domain or brand name. Most scrapers copy-paste the entire article, links included. Even though the links in the scraped version will not pass authority to your site, you may get occasional referral traffic from them.
However, if a scraped copy outranks your original, you should report the case to Google. Submit the complaint using their Scraper Report tool.
Digitally signing your content using Google Authorship helps the search engine know that you are the original owner of the content. No matter how many times an article is scraped, it will still be attributed to you if you signed it.
It is also important to note the difference between copyright infringement and scraped content. Someone might copy your entire site and claim it as their own creation.
Plagiarism is the practice of using someone else’s work and passing it off as your own. Scrapers rarely go that far, but some may sign their own name on your content. That is illegal, and it is the main reason you should have a copyright notice in your footer.
Myth #3: Republishing Your Guest Posts on Your Site Will Hurt Its Ranking
I write hundreds of guest posts per month, and it is highly unlikely that my audience sees them all. So I often republish the posts on my blog to get as much readership as possible. Personally, I make sure the content is 100% original, not out of fear of a penalty, but out of a desire to consistently offer value to my readers.
Have you ever written an article for an authority blog? I have, and they usually ask me to wait a few weeks after publication before republishing the post on my own site. Some will even ask you to add a small HTML tag to the republished post: the rel="canonical" tag.
“Canonical” roughly means “official version.” When you republish an article that first appeared on another site, a canonical tag lets you tell search engines which page hosts the original.
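As a minimal sketch, assuming the guest post originally appeared at a hypothetical URL on the authority blog, the republished copy on your own site would carry a canonical link in its head:

```html
<!-- Placed in the <head> of the REPUBLISHED copy on your own site. -->
<!-- The href (a hypothetical example URL) points to the original guest post, -->
<!-- telling search engines which version to treat as the official one. -->
<link rel="canonical" href="https://authority-blog.example.com/original-guest-post/" />
```

The tag is a hint, not a directive, but it is the standard way to consolidate ranking signals on the original article rather than splitting them across the two copies.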
Apply the Evil Twin Tactic
If the original article you are considering republishing is a “how to” post, you can turn it into a “how not to” post. Base the content on the original research and concept, but make sure you use different examples and offer additional value to readers. The “evil twin” will look similar to the original, but it will still be original content.
The Bottom Line
Googlebot crawls sites multiple times per day, so if it finds a copied version of an article on another website a week later, it can tell where the original was published. But does it get angry and impose a penalty? No. And that, in a nutshell, is everything you need to know about duplicate content.