Impact of Googlebot’s User Agent on Website Traffic
Getting more traffic to your website in the age of AI search engines is a bit tricky. You have to familiarize yourself with the technicalities of website ranking in order to increase your website’s visibility online. In December 2019 Google updated the Googlebot user agent strings to reflect the new browser version, and it now periodically updates the version numbers to match the Chrome release used in Googlebot. The sections below show how the various user agents are recognized in robots.txt.
If you want to block or allow Google’s crawlers access to some of your content, you can do so by specifying Googlebot as the user agent. For example, if you want all your pages to appear in Google Search and you want AdSense ads to appear on your pages, you don’t need a robots.txt file. Similarly, if you want to block some pages from Google altogether, blocking the user agent Googlebot will also block all of Google’s other user agents.
But if you want more fine-grained control, you can get more specific. For example, you might want all your pages to appear in Google Search, but you don’t want images in your personal directory to be crawled. In this case, use robots.txt to disallow the user agent Googlebot-Image from crawling the files in your /personal directory (while allowing Googlebot to crawl all files), like this:
User-agent: Googlebot
Disallow:

User-agent: Googlebot-Image
Disallow: /personal
To take another example, say that you want ads on all your pages, but you don’t want those pages to appear in Google Search. Here, you’d block Googlebot, but allow Mediapartners-Google, like this:
User-agent: Googlebot
Disallow: /

User-agent: Mediapartners-Google
Disallow:
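If you want to double-check rules like these before deploying them, Python’s built-in robots.txt parser applies the same per-user-agent grouping. This is only a quick sketch; the rules and URL below are the example above, not your real file:

```python
from urllib.robotparser import RobotFileParser

# The robots.txt rules from the example above: block Googlebot
# everywhere, but let Mediapartners-Google (the AdSense crawler) in.
rules = [
    "User-agent: Googlebot",
    "Disallow: /",
    "",
    "User-agent: Mediapartners-Google",
    "Disallow:",
]

parser = RobotFileParser()
parser.parse(rules)

# Googlebot is blocked from every path...
print(parser.can_fetch("Googlebot", "https://example.com/page.html"))
# ...while the AdSense crawler may fetch anything.
print(parser.can_fetch("Mediapartners-Google", "https://example.com/page.html"))
```

Note that Python matches user agent names as substrings, which is close to (but not identical to) Google’s own “most specific group wins” matching, so treat this as a sanity check rather than the final word.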
User agents in robots meta tags
Some pages use multiple robots meta tags to specify directives for different crawlers, like this:
<meta name="robots" content="nofollow">
<meta name="googlebot" content="noindex">
Googlebot user agents
Mobile:
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Desktop:
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
OR
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Safari/537.36
The new evergreen Googlebot and its user agent
As of December 2019, Google periodically updates the above user agent strings to reflect the version of Chrome used in Googlebot. In the following user agent strings, “W.X.Y.Z” is substituted with the Chrome version Googlebot is using; for example, instead of W.X.Y.Z you’ll see something similar to “76.0.3809.100”. This version number updates on a regular basis.
Mobile:
Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/W.X.Y.Z Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
Desktop:
Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
OR
Mozilla/5.0 AppleWebKit/537.36 (KHTML, like Gecko; compatible; Googlebot/2.1; +http://www.google.com/bot.html) Chrome/W.X.Y.Z Safari/537.36
How to test your site
Google has run an evaluation and expects that most websites will not be affected by the change.
Sites that follow Google’s recommendations to use feature detection and progressive enhancement instead of user agent sniffing should continue to work without any changes.
If your site looks for a specific user agent, it may be affected. You should use feature detection instead of user agent sniffing.
If you cannot use feature detection and need to detect Googlebot via the user agent, then look for “Googlebot” within the user agent.
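Google’s advice boils down to a simple substring check rather than matching the full user agent string (which breaks whenever the Chrome version changes). A minimal sketch in Python, with the mobile UA from above as sample input:

```python
def is_googlebot(user_agent: str) -> bool:
    # Look for the "Googlebot" token, case-insensitively, instead of
    # matching the whole string -- the Chrome version part now changes
    # with every Chrome release, so exact matches will break.
    return "googlebot" in user_agent.lower()

mobile_ua = ("Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) "
             "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.100 "
             "Mobile Safari/537.36 (compatible; Googlebot/2.1; "
             "+http://www.google.com/bot.html)")

print(is_googlebot(mobile_ua))                                                  # True
print(is_googlebot("Mozilla/5.0 (Windows NT 10.0) Chrome/76.0 Safari/537.36"))  # False
```

Remember that the user agent string alone can be spoofed; if it matters, verify the visitor really is Googlebot with a reverse DNS lookup as well.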
Some common issues you may encounter during the change:
Pages that present an error message instead of normal page contents. For example, a page may assume Googlebot is a user with an ad-blocker, and accidentally prevent it from accessing page contents.
Pages that redirect to a roboted (disallowed in robots.txt) or noindexed document.
If you’re not sure whether your website is affected or not, you can try loading your webpage in your browser using the new Googlebot user agent. In Chrome, you can override the user agent from the developer tools (the Network conditions panel).
Getting more visits from Googlebot
Thank you for another great article
You are welcome;-)
Google keeps changing its algorithm and services. It’s very tiresome.
You just need to keep up with search engine updates if you are doing online business:-)
If anybody wants to do business online, they need to follow Maria Johnsen’s articles. Thanks for sharing. I wasn’t aware of some of these changes.
That’s so sweet;-)
I read all your books. Any new book coming up, Maria? I can’t wait to read them.
Yes, I am working on my new blockchain book. It will be ready for sale in bookstores.
Oh boy! Google changes its user agent, watch out for new complications and problems with our visibility. But the good news is that Maria added a bunch of tips to avoid problems.
Thank you Lisa! 🙂
Thanks for the heads-up, Maria.
You are welcome:-)
How do I block bad bots from accessing my website, Maria? I appreciate your help.
You add a rule (annoyingbot can be the name of any bot you want to block from accessing your site). Add it to your robots.txt like this:
User-agent: annoyingbot
Disallow: /
Or in your .htaccess:
SetEnvIfNoCase User-Agent .*annoyingbot.* bad_bot
Order Allow,Deny
Allow from all
Deny from env=bad_bot
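If you want to check that the pattern catches the bots you mean to block before touching Apache, you can mirror the SetEnvIfNoCase match in a few lines of Python (the bot names here are just examples):

```python
import re

# Mirrors the SetEnvIfNoCase pattern above: a case-insensitive match
# of "annoyingbot" anywhere in the User-Agent header.
BAD_BOT = re.compile(r"annoyingbot", re.IGNORECASE)

def is_blocked(user_agent: str) -> bool:
    return BAD_BOT.search(user_agent) is not None

print(is_blocked("Mozilla/5.0 (compatible; AnnoyingBot/1.0)"))  # True
print(is_blocked("Mozilla/5.0 (compatible; Googlebot/2.1)"))    # False
```

Just be careful the pattern isn’t so broad that it also matches crawlers you want, like Googlebot.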
Thanks for letting us know about the new update. My website is still recovering from Google’s previous update.
Sorry to read that you have been through some problems. Let me know if you need any help;-)
I am a big fan, Maria. Thanks for sharing!