Why Everything You Know About Duplicate Content Is A Lie

December 30, 2015 | SEO

Duplicate content is not always about copied content. Of course, there is plagiarism and outright copying and pasting, and republishing someone else’s work without permission can get you into legal trouble, even online. Yet there are less obvious ways that duplicate content can appear on a website. Let’s take a look at some of the more obvious, and less obvious, ways duplicate content can end up on a site.

Using The Same Content From Page To Page For Local SEO

This is one of the more obvious ones. You have a site optimized for local SEO, but as you add location pages, you reuse the same content on each one, swapping out only the local details. “Google won’t know the difference,” you say. A few weeks later, your site has dropped in the local results, and your traffic has dropped with it. What happened?

You may have been filtered out of the search results. Or, if the duplication is widespread enough, you could have suffered a Panda penalty.

Be sure to vary your content from page to page, and not just the local details. Create a unique piece of content for every page on your site. That is the path to quality SEO.

Not Observing Syndication or Attribution Best Practices

There are only two legitimate ways Google sees you using content from one website on another: through syndication, and through paraphrasing and attributing your research to its proper sources.

When you follow syndication best practices, you make sure that none of the sites taking part get dinged for duplicate content.

Best practices include:

  1. Using an attribution link

The weakest method of syndication is adding an attribution link. This link should point back to the site that created the article. While it is a way of telling Google who the original source is, it is unreliable: the link alone does not carry enough information to establish that you are the original source, and it creates other SEO issues as well, noisy ranking signals among them.
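For illustration, an attribution link is simply a visible link on the syndicated copy pointing back to the original article; the URL below reuses the somesite.com placeholder from later in this article:

    <p>This article originally appeared on <a href="https://www.somesite.com/pagename.html">Some Site</a>.</p>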

  2. Using the syndication-source meta tag

A better way to make sure your version ranks ahead of the syndicating site’s is to use the syndication-source meta tag. It tells Google that your URL, not the other site’s, carries the preferred version of the content. Google also has an original-source meta tag, which lets you point to the page where the content was first published. Using both tags helps syndication work cleanly and avoids duplicate content issues.
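As a sketch, both tags go in the page’s <head>, with the content attribute pointing at the URL that should be treated as the source; the URL here again uses the somesite.com placeholder, and since Google has described these news meta tags as experimental, treat them as hints rather than guarantees:

    <meta name="syndication-source" content="https://www.somesite.com/pagename.html">
    <meta name="original-source" content="https://www.somesite.com/pagename.html">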

  3. Using noindex

This is not the best option. You will not always get a response when you ask another site to noindex its copy of your content. The noindex directive keeps that second instance of the content out of Google’s index, so it cannot compete with your original. If you can reach the people in charge of the other site and confirm that they actually do it, this is an acceptable option.
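If the other site does agree, the standard way to do it is a robots meta tag in the <head> of their copy of the article:

    <meta name="robots" content="noindex">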

  4. Using rel=canonical

To syndicate content well, Google needs to know which version of the content is the original. To tell it, have the syndicating site add a rel=canonical tag on its copy that points back to your page. That way, the syndicated copy does not run the risk of being identified as duplicate content.
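For example, the syndicating site would add a link element like this to the <head> of its copy, with the href pointing at your original article (again using the somesite.com placeholder):

    <link rel="canonical" href="https://www.somesite.com/pagename.html">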

Not Unifying URLs Into One URL

Let’s take a look at a scenario. You have a website with 500 or so pages. You have made no redirects and no changes to the URL structure since the site went live, and the server has no redirect rules in place. One day you are browsing the site and notice that the same page is reachable at more than one URL. Let’s look at an example:

  1. https://www.somesite.com/pagename.html
  2. https://www.somesite.com/pagename.htm
  3. https://www.somesite.com/page-name.html
  4. https://www.somesite.com/page-name.htm
  5. https://somesite.com/pagename.html
  6. https://somesite.com/pagename.htm
  7. https://somesite.com/page-name.html
  8. https://somesite.com/page-name.htm

This is a situation that results in a massive SEO problem: many URLs all displaying the exact same content. From an SEO best practices standpoint, you want one URL displaying that content. The problem, from Google’s point of view, is that it can crawl all of these URLs by default. If each of your 500 pages is accessible at eight URL variations like this, those 500 pages balloon into 4,000 crawlable URLs, all serving the same content. That is what happens when your configuration is not correct.

To avoid this, create 301 redirects that send the other seven URL variations to the one URL you want. Then use rel=canonical on that page and make sure it points to the #1 URL in the list above, the one you want shown. Once this is done, you end up with a single URL no matter which variation you enter, because every variation redirects back to the one URL you chose.
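As a rough sketch, on an Apache server the variations from the list above could be collapsed with 301 rules in an .htaccess file like this; the exact rules depend on your server and URL patterns, so treat it as a starting point rather than a drop-in configuration:

    RewriteEngine On

    # Send the bare domain to the www version of the site
    RewriteCond %{HTTP_HOST} ^somesite\.com$ [NC]
    RewriteRule ^(.*)$ https://www.somesite.com/$1 [R=301,L]

    # Send the .htm and hyphenated variations of the page to the preferred URL
    RewriteRule ^pagename\.htm$ /pagename.html [R=301,L]
    RewriteRule ^page-name\.html?$ /pagename.html [R=301,L]

Pair this with a self-referencing rel=canonical tag on the destination page so there is no doubt about which URL you want indexed.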

Duplicate Content Is Not Always About Copied Content

Copied content is at the root of many duplicate content issues, but not all of them. Duplicate content can also result from lazy local SEO, spammy practices, poor syndication habits, and messy URL structures. This can be true even if you never intended to create duplicate content. Perform a site audit to identify the issues your site has; then you can actually fix them.

Source: http://searchengineland.com/syndicated-content-189097


