In another episode of #AskGooglebot on the Google Search Central YouTube channel, John Mueller answers a question submitted through Twitter, helping raise awareness among website owners worldwide.
The question, raised by @HamzaAli353, is: "I have often heard that UPPERCASE letters are not good for a website. Is that the case? Do my website rankings get affected by the case of the letters in the URL?"
What is meant by URL?
Before going into his explanation, let us look at what a URL is and what case sensitivity means.
URL, short for Uniform Resource Locator, is the address that identifies a resource on the web. When we submit a website to a search engine, the engine crawls and indexes the site through its URLs. A URL takes a form such as http://www.webanalyst.in/blog.html. Case sensitivity refers to whether uppercase and lowercase letters are treated as different characters, so that, for example, /Blog.html and /blog.html count as different paths.
The path portion of a URL is case sensitive, and trailing slashes (/) at the end also matter. If the same page is reachable under several variants, crawlers must fetch each variant separately, which slows crawling and can delay, or even prevent, some pages from being indexed. These duplicate URLs also make it harder for search engines to identify the original content. To tell the search engine which URL is the original, canonical implementation is used.
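As a quick illustration of that case sensitivity (a minimal sketch using Python's standard library; the URLs reuse this article's example domain), two paths that differ only in letter case are distinct, while hostnames compare equal once lowercased:

```python
# Minimal sketch: the path portion of a URL is case sensitive,
# while the hostname is not. The URLs below are illustrative.
from urllib.parse import urlparse

a = urlparse("http://www.webanalyst.in/Blog.html")
b = urlparse("http://www.webanalyst.in/blog.html")

# Hostnames are case-insensitive, so they match once lowercased.
print(a.netloc.lower() == b.netloc.lower())  # True

# Paths are case sensitive, so a crawler sees two different URLs here.
print(a.path == b.path)  # False
```

This is why mixing cases in links to the same page effectively creates duplicate URLs for a crawler.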
Canonicalization is declared in an HTML document to tell the search engine which version of a page is the original. The search engine then crawls and ranks the specified canonical page, so a canonical tag helps you avoid multiple URLs pointing at duplicate copies of the same content. It also helps when your content appears elsewhere: you can syndicate your content to any number of sites, and the search engine will still crawl and rank only the specific page marked by the canonical tag.
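As a sketch, the canonical declaration is a single link element in the page's head section; the URL below reuses this article's example address:

```html
<!-- Placed in the <head> of duplicate or syndicated copies of a page,
     this points search engines at the preferred original URL (illustrative). -->
<link rel="canonical" href="http://www.webanalyst.in/blog.html">
```

Every duplicate variant carries the same tag, so all ranking signals consolidate on the one preferred URL.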
Another place where URLs are used is the robots.txt file. In robots.txt, we can tell search engines which pages should not be crawled.
WHAT DOES ROBOTS.TXT DO ON A WEBPAGE?
A robots.txt file tells crawlers which URLs on your website they are allowed to visit. Its main purpose is to keep robots, or spiders, from overloading your server with requests. It is not a reliable way to keep a page out of search results, because a blocked URL can still be indexed if other sites link to it. To keep a page out of the index, use a noindex directive or protect the page with a password.
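The noindex mentioned above is a robots meta tag placed in the page's head section (a sketch; note the page must remain crawlable, since a URL blocked in robots.txt can never have its noindex tag seen):

```html
<!-- Lets search engines crawl this page but keep it out of the index. -->
<meta name="robots" content="noindex">
```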
The robots.txt file is used to manage the crawling traffic of Google's crawler when you feel the server would otherwise be overloaded. It can also keep images, videos, and other files off the search results page. Its format is applied as follows:
# Exception for webanalyst images
User-agent: webanalyst-image
Disallow: /images/dogs.jpg
# Note: this doesn't disallow /images/DOGS.JPG, because the path is case sensitive
Using internal linking, we can signal which pages we prefer. You can also mark duplicate pages for Google with rel="canonical".
So, uppercase and lowercase letters do matter in the URLs of a website, and they should not get mixed up when search engines crawl your pages. By using them consistently whenever you submit your site for indexing, you won't find it difficult to handle case in URLs. This consistency also helps bring organic traffic to your website from the search engine results page.