June 29, 2017


Google Panda and Now Google Penguin – What To Do Next

If you have closely tracked organic search traffic to your blog over the past 18 months, you have probably been on a roller coaster. Back in February 2011, Google rolled out an update called Google Panda, named after one of its engineers. The update attempted to weed bad content out of the search results and replace it with high-quality content. A lot was written about it over the past year, so I won't go over it again here.

Image credit: Labnol.org — their traffic graph above shows the effects of Panda. Luckily they made a quick recovery.

The effect of Google Panda was that some sites that had been doing well suddenly disappeared from the search results (or were pushed far down the list) while others were promoted into those spots. So some people saw a sharp decrease in traffic while others saw a very welcome increase.

Google is always striving to serve the best results to end users, so that when they search for something they get fast, high-quality information back, not a scraper site covered with ads and content that makes no sense.

As well as looking at content quality (with the Panda updates), Google also had a manual team that looked for violations of the Google webmaster guidelines. Those found stuffing hidden text onto pages, or using obscure linking practices, received a penalty and didn't get back into the results until the offending material was cleaned up. This leads on to the Google Penguin update, which appears to automate the job Google had been doing manually for a number of years. Specifically, if you were doing any of the following, you were likely to have been hit when Penguin rolled out in the past couple of weeks:

  • Link manipulation
  • Cloaking
  • Malware
  • Content stuffing
  • Sneaky redirects
  • Bad neighbourhoods
  • Doorway pages
  • Automated queries to Google

Google is now automatically trying to detect these kinds of violations. If Penguin works the way Panda did, there seems to be a threshold: Panda appeared to work on the percentage of bad content, and if a large share of your site was poor-quality content, the whole site got hit. If Penguin works the same way, you will either fall on the good side and keep good traffic, or fall on the bad side and see a lot less.
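To make the threshold idea concrete, here is a minimal sketch of how a percentage-based, site-wide penalty *might* work. This is purely illustrative: the scoring, the 50% cut-offs, and the `site_penalized` function are all invented for this example — Google has never published how Panda or Penguin actually score sites.

```python
# Hypothetical illustration of a threshold-based, site-wide penalty.
# All numbers and the scoring scheme are invented for illustration.

def site_penalized(page_quality_scores, bad_threshold=0.5):
    """Return True if the share of low-quality pages crosses the threshold.

    page_quality_scores: per-page scores in [0, 1], where a score
    below 0.5 counts as a "bad" page (an invented metric).
    """
    bad_pages = sum(1 for score in page_quality_scores if score < 0.5)
    bad_ratio = bad_pages / len(page_quality_scores)
    return bad_ratio >= bad_threshold

# A site where most pages score well stays on the "good" side:
print(site_penalized([0.9, 0.8, 0.7, 0.3]))  # False (1 of 4 pages is bad)

# A site dominated by poor pages gets hit site-wide:
print(site_penalized([0.2, 0.1, 0.9, 0.3]))  # True (3 of 4 pages are bad)
```

The key point this models is the all-or-nothing behaviour described above: the penalty applies to the whole site once the share of bad pages crosses the line, rather than demoting individual pages one by one.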

Why Is This Good News?

Depending on which side of the line you fall on, this could be good news. If you have worked hard on your site and kept to the guidelines, you should see healthy amounts of traffic. There are, of course, some unfortunate errors (false positives) where a good site gets hit and a bad site doesn't. These tend to be ironed out in later iterations.

If you haven't done well with this update, which rolled out a few weeks back, then step back and review your site against the general Google guidelines. Are you doing anything sneaky? Are you copying and pasting content, or building up far too many links from poor-quality sites? If so, look at ways to earn better, higher-quality links.

The reason this is good news is that if Google has got the update right, you should have a better chance of getting more traffic for your work. Post your observations in the comments, especially if you have been affected. My own site was hit by Panda and then showed signs of recovery several months later. With Penguin, I haven't seen any change at all just yet.
