SEO can be a long and tedious process, and it demands patience. But that doesn't mean you can't get some quick wins. A few quick early wins can set you on an SEO journey that leads to success.
This article focuses on a tool that'll help you land those quick wins: Screaming Frog. Screaming Frog is an amazing SEO tool that can quickly surface flaws in a website. I personally love this tool, not just because of its name, but because it's so powerful yet so simple to use.
If you've been using Screaming Frog for a while, you might already know some or all of these tricks. But if you're new to Screaming Frog, you're in for a treat!
Alright, enough with praising the frog. Let’s get started.
Find out the “meh” internal links
The first trick helps you find broken or 301-redirecting internal links on a website. You don't want such internal links because they either hurt your SEO or slow down your path to ranking. All internal links should lead to pages that return a 200 OK.
This one’s quick. Once you’ve crawled the website, go to Bulk Export > All Inlinks.
After exporting the inlinks, filter for all destination URLs that return a non-200 status code. There you go: a list of all the broken and redirecting internal links.
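If the export is large, you can filter it with a short script instead of a spreadsheet. Here's a minimal sketch in Python, assuming the export's column headers are 'Source', 'Destination', and 'Status Code'; check the header row of your own CSV, since these names may vary by version.

```python
import csv

# Hypothetical column names -- verify against your own export's header row.
SOURCE, DEST, STATUS = "Source", "Destination", "Status Code"

def load_inlinks(path):
    """Read a Screaming Frog 'All Inlinks' CSV export into a list of dicts."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

def non_ok_links(rows):
    """Return (source, destination, status) for every inlink whose
    destination did not return 200 OK."""
    return [
        (r[SOURCE], r[DEST], r[STATUS])
        for r in rows
        if r.get(STATUS, "").strip() != "200"
    ]
```

Call `non_ok_links(load_inlinks("all_inlinks.csv"))` and you have your fix list, including both the broken destination and the page linking to it.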
Check the crawlability of your website
Okay, so you've created a beautiful website that you're really proud of. But is it of any use if the crawlers can't see it? If they can't, you're going to have a hard time ranking in the SERPs.
As you might know, crawlers scan your website's code and crawl every image (via alt text), link, and piece of text they can see. Here's the catch: crawlers can only crawl the code they can see, i.e. the pre-rendered HTML. So even if you've added a link to your webpage, if it isn't present in the pre-rendered HTML, it's as good as useless.
So, here's where the second trick helps. Using Screaming Frog, you can see exactly which links are present in the pre-rendered HTML code of your website.
Let's see how. Crawl the URL whose crawlability you want to check. Once that's done, right-click the URL and export the Outlinks.
The exported outlinks are all the URLs the crawler can see and crawl on that particular page. There you go: now all you have to do is go through the list and check whether any links are missing.
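You can run the same sanity check by hand: fetch the page's raw HTML (before any JavaScript runs) and extract the anchor links. The sketch below does this with Python's standard-library HTML parser; any link you expect that doesn't show up here is one a crawler may never see.

```python
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags in raw (pre-rendered) HTML."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href")
            if href:
                self.links.append(href)

def links_in_html(html):
    parser = LinkCollector()
    parser.feed(html)
    return parser.links

def missing_links(html, expected):
    """Which of the links you expect are absent from the raw HTML?"""
    found = set(links_in_html(html))
    return [url for url in expected if url not in found]
```

Links injected client-side by JavaScript won't appear in the raw HTML, which is exactly the gap this check (and the Outlinks export) is meant to expose.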
Find near duplicate content
The less duplicate content, the better it is for your website. Duplicate content makes it difficult for search engines to decide which of your pages to rank for a query. By default, Screaming Frog automatically identifies exact duplicate pages.
Let’s see how you can use the tool to find near duplicate content.
First, go to Config > Content > Duplicates to enable 'Near Duplicates', then set the similarity threshold. In the image below, the threshold is set to 90%, which means the SEO Spider will flag pages that are 90% similar.
After crawling the site, go to Crawl Analysis and click on Start.
Once the crawl analysis is done, the 'Closest Similarity Match' filter and the 'Near Duplicates' and 'No Near Duplicates' columns will be populated. Only URLs with content over the selected similarity threshold will show values; the others will remain blank.
You can see above that two near duplicates with 92% similarity were found on Screaming Frog's own website. That's how you find near duplicates on your site; you can then optimize those pages to remove the duplication.
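Screaming Frog computes similarity with its own algorithm, but the underlying idea can be illustrated with a simple word-shingle Jaccard similarity: break each page's text into overlapping runs of words and measure how much the two sets overlap. This is a rough illustration of the concept, not what the tool actually runs.

```python
def shingles(text, k=3):
    """Overlapping k-word runs ('shingles') from a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

def similarity(a, b):
    """Jaccard similarity of two texts' shingle sets: 0.0 to 1.0."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)
```

Two pages that share most of their shingles score near 1.0; a 90% threshold in the tool corresponds to flagging pairs whose measured similarity exceeds 0.9.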
Check sitemap for errors
It is important that your XML sitemap consists only of links that return a 200 OK response code. So, how can you ensure this? Well, of course there's the coverage errors report in Google Search Console. But crawling your XML sitemap with Screaming Frog will not only help you identify the errors; it'll also give you much more information to help you optimize it.
So, here it goes. Copy the URL of your XML sitemap, switch to List mode, go to Upload > Download XML Sitemap, paste the URL, and click OK.
Screaming Frog will then tell you the number of URLs it found; click OK and it will start crawling your XML sitemap.
The crawl will help you identify:
- Non-200 URLs
- Non-canonicalized URLs
- Incorrectly canonicalized URLs
And many more such things that'll help you fix your XML sitemap and get your website indexed better.
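The non-200 part of this check is easy to script as well: pull every `<loc>` out of the sitemap XML and request each one. A minimal sketch using only the Python standard library (a real sitemap may also be an index file pointing at child sitemaps, and some servers reject HEAD requests, so treat this as a starting point):

```python
import xml.etree.ElementTree as ET
from urllib.request import Request, urlopen
from urllib.error import HTTPError

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Extract every <loc> value from an XML sitemap."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def status_of(url):
    """HTTP status code for a URL (HEAD request; some servers refuse HEAD)."""
    req = Request(url, method="HEAD")
    try:
        return urlopen(req, timeout=10).status
    except HTTPError as e:
        return e.code
```

Anything where `status_of` returns something other than 200 either needs fixing or doesn't belong in the sitemap.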
Check redirections after migration
Have you recently migrated from HTTP to HTTPS? How would you know that you've redirected all the important URLs? If the migration isn't done properly, it can seriously harm your website's SEO. Don't worry though, Screaming Frog's got you covered.
All you have to do is copy all your old HTTP URLs, paste them into Screaming Frog's 'List' mode, and run the crawl.
If the old URLs return 404s or any response code other than 301, you'll know which URLs still need to be redirected.
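Exports from that List-mode crawl boil down to (old URL, status code) pairs, so flagging the stragglers is a one-liner; a second helper derives the HTTPS target each old URL should 301 to. A small sketch, assuming a plain HTTP-to-HTTPS migration with unchanged paths:

```python
def https_equivalent(url):
    """The HTTPS URL a plain HTTP URL should 301 to (same path assumed)."""
    prefix = "http://"
    return "https://" + url[len(prefix):] if url.startswith(prefix) else url

def still_unredirected(rows):
    """rows: (old_url, status_code) pairs from the List-mode crawl export.
    Anything that isn't a 301 still needs a redirect rule."""
    return [(url, code) for url, code in rows if code != 301]
```

Feed `still_unredirected` the export and you have the exact list of redirect rules left to write.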
Identify thin content
It is important to identify pages on your website that add little or no value. Such pages are said to have thin content, and you can quickly find them using Screaming Frog.
All you have to do is crawl your website with Screaming Frog. Once the crawl is complete, go to the 'Internal' tab, filter by HTML, then scroll right to the 'Word Count' column. Sort the 'Word Count' column from low to high to surface the pages with thin content.
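If you'd rather work from the exported data, the same sort is a few lines of Python. The 300-word cutoff below is an arbitrary illustration; what counts as "thin" depends on the page type and your niche.

```python
def thin_pages(rows, threshold=300):
    """rows: (url, word_count) pairs from the 'Internal' tab export.
    Return pages under the word-count threshold, thinnest first.
    The default threshold of 300 is arbitrary -- tune it to your site."""
    return sorted(((u, w) for u, w in rows if w < threshold), key=lambda p: p[1])
```

The result is a prioritized worklist: the pages at the top are the ones most in need of expansion, consolidation, or removal.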
Audit structured data
Implementing schema markup is an amazing way to provide searchers with valuable information about your site's content. Schema markup can get your pages displayed with eye-catching visuals that attract searchers' attention.
Screaming Frog can crawl, extract, and validate structured data directly during the crawl. It validates Microdata, JSON-LD, and RDFa structured data against Google's specifications and Schema.org's guidelines in real time as you crawl.
Let's see how you can leverage Screaming Frog to audit the structured data implementation on your client's website.
To access the structured data validation options, select them under Config > Spider > Advanced.
The Structured Data tab in the main interface lets you toggle between pages that have structured data, pages missing structured data, and pages with errors or warnings.
You can also export all the structured data issues in bulk by going to Reports > Structured Data > Validation Errors & Warnings.
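To see what the tool is actually extracting, consider that JSON-LD structured data lives in `<script type="application/ld+json">` blocks in the page source. The sketch below pulls those blocks out with the standard library; it only checks that the JSON is well-formed (a `json.loads` failure means broken markup), whereas Screaming Frog additionally validates against Google's and Schema.org's rules.

```python
import json
from html.parser import HTMLParser

class JSONLDExtractor(HTMLParser):
    """Collect parsed <script type="application/ld+json"> blocks from HTML."""
    def __init__(self):
        super().__init__()
        self._in_ld = False
        self._buf = []
        self.blocks = []

    def handle_starttag(self, tag, attrs):
        if tag == "script" and dict(attrs).get("type") == "application/ld+json":
            self._in_ld = True
            self._buf = []

    def handle_endtag(self, tag):
        if tag == "script" and self._in_ld:
            self._in_ld = False
            raw = "".join(self._buf).strip()
            if raw:
                # Raises ValueError on malformed JSON -- i.e. broken markup.
                self.blocks.append(json.loads(raw))

    def handle_data(self, data):
        if self._in_ld:
            self._buf.append(data)

def jsonld_of(html):
    parser = JSONLDExtractor()
    parser.feed(html)
    return parser.blocks
```

A page where `jsonld_of` returns an empty list is a "missing structured data" page; a parse error flags markup that no validator will accept.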
These tricks will help you quickly find issues on your client's website, and they can be a good start toward a search-engine-friendly site. Apart from these 7 tricks, there's a lot more that Screaming Frog can do. What are you waiting for? Go explore this amazing SEO tool!