Transcribed data from the slides:

 

There is nothing worse than working with limited data.

Search Console should be a bible for most people, but more than 60% of SEOs are still unaware that, using the API and tools like Search Analytics for Sheets, you can access far more data than the 1,000-row limit imposed by Google Search Console.

Not only that, but Search Console is rigid: it doesn’t allow multiple filters at once, whereas tools like Search Analytics for Sheets do.
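
If you’re comfortable with a little code, the bigger dataset is also available straight from the Search Console API. Below is a minimal sketch, assuming google-api-python-client is installed, OAuth has already been completed (“token.json” is a hypothetical saved credential) and the property URL is a placeholder – it pages far beyond 1,000 rows and applies two filters at once:

from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

# "token.json" is a hypothetical saved OAuth credential for the
# webmasters.readonly scope (obtaining it is not shown here).
creds = Credentials.from_authorized_user_file("token.json")
service = build("searchconsole", "v1", credentials=creds)

rows, start_row = [], 0
while True:
    resp = service.searchanalytics().query(
        siteUrl="https://www.example.com/",  # placeholder property
        body={
            "startDate": "2023-01-01",
            "endDate": "2023-12-31",
            "dimensions": ["query", "page"],  # the GROUP BY layers
            # Two filters applied simultaneously - something the GSC UI resists:
            "dimensionFilterGroups": [{"filters": [
                {"dimension": "query", "operator": "notContains", "expression": "brand"},
                {"dimension": "page", "operator": "notContains", "expression": "/blog/"},
            ]}],
            "rowLimit": 25000,      # API maximum per request
            "startRow": start_row,  # paginate far past the 1,000-row export cap
        },
    ).execute()
    batch = resp.get("rows", [])
    rows.extend(batch)
    if len(batch) < 25000:
        break
    start_row += 25000

print(f"{len(rows)} rows fetched")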

This guide will show you the basics of using this powerful tool and how you can maximise your SEO strategy with quick and easy-to-get data.

Connect with me – https://www.linkedin.com/in/daniel-foley-assertive/

 

SLIDE 2

To carry out this exercise you will need:

A Google account with access to a verified Search Console property.

Google Sheets.

The free Search Analytics for Sheets add-on.

 

 

 

SLIDE 3

Open up a new Google Sheet.

Click Add-ons.

Click Get Add-ons.

In the pop-up box, search for “Search Analytics for Sheets”.

Install.

 

SLIDE 4

Once the add-on is installed, simply select Add-ons > Search Analytics for Sheets > Open Sidebar.

SLIDE 5

The Google Account you are signed in to (which you’ve used to access Google Sheets) is the profile whose Search Console websites are pulled in under “Verified Sites”.

 

Remember – if you are using CHROME and are signed in, then you will only be able to use Search Analytics with that account UNLESS you open an incognito window, sign in to another account and access Google Sheets UNDER that account.

 

Select your verified website and the date range. On the following slide we look a bit closer at the other options.

 

SLIDE 6

Select your date range – for the first exercise I recommend you export data for 16 months (which is the default selected timeframe).

Leave search type as web unless you want to isolate IMAGE / VIDEO performance.

This allows us to aggregate the data either by page or by property.

Here we can control how much data is returned, along with which sheet it is added to.

 

SLIDE 7

GROUP BY is an important filter – please remember that GROUPING layers will exponentially increase the volume of data rows returned – for high-search-volume / large websites the request is likely to time out without the right filtering.

 

On the following slides we’ll elaborate on each filter.

Add Filter functionality allows us to filter by query, page, country, device etc., much like we would in Search Console. Again, we elaborate on this further in the next slides.

 

SLIDE 8

GROUP BY FILTER

The GROUP BY filter is important as it allows us to change the data set returned. 

For example, if we just selected QUERY we would ONLY get queries for the dates selected (subject to any further filtering).

On the following slide we look at how the data might look when applying different GROUPS.

SLIDE 9

GROUP BY FILTER

If we select GROUP BY QUERY only, then we get an overview of CLICKS / IMPRESSIONS / CTR by that GROUP filter alone.

So in the right-hand snapshot, the QUERY group-by means ONLY queries are returned, along with click / impression data.

SLIDE 10

GROUP BY FILTER

If we select QUERY and PAGE, notice how we now get the query and the returned page, along with the usual click, impression, CTR and position data.

 

SLIDE 11

GROUP BY FILTER – WHY IS THIS SO USEFUL?

Isolating QUERIES and PAGES makes it much easier to find cannibalisation; it also lets you quickly isolate UNDERPERFORMING content, keyword cross-over and much, much more!

BE MINDFUL!

Layering GROUPS will exponentially increase the volume of keyword rows. Why? Well, imagine you look at JUST queries: you will get one row per query. If you add QUERY + PAGE, the rows expand for every page a keyword is associated with – 5,000 queries averaging three pages each becomes roughly 15,000 rows.

 

SLIDE 12

FILTER BY

 

Filter by is a combination filter. It allows you to manipulate the GROUP BY data or the whole dataset.

 

For example, if you selected GROUP BY query you can then use Add Filter to manipulate which queries are returned.

 

The same goes for any other filter – with GROUP BY PAGE, you can add a filter to return a specific page.

 

SLIDE 13

FILTER BY

 

QUERY

 

Manipulate which queries are returned from your dataset.

 

PAGE

 

Manipulate which pages are returned from your dataset.

 

COUNTRY

 

Filter which country the data is returned for.

 

DEVICE

 

Specify the query / page data by device type.

 

SEARCH APPEARANCE

 

Specify data returned for the appearance type in search.

 

SLIDE 14

FILTER BY

 

Let’s say we only want to return the queries containing “tiling courses”.

 

We’d set the filter up as follows >

 

SLIDE 15

FILTER BY

 

Now, looking at the requested data, we only see rows where the query CONTAINS “tiling courses”.

 

Because we have the GROUP BY filter set to QUERY and PAGE, we can see all the “tiling courses” keywords along with their respective pages.
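
As an aside – if you export these rows and want to replicate the CONTAINS filter offline, it’s a one-liner in pandas. A quick sketch, assuming the export was saved as a hypothetical “gsc_export.csv” with query, page, clicks, impressions, ctr and position columns:

import pandas as pd

# Hypothetical export of the query + page rows shown on this slide.
df = pd.read_csv("gsc_export.csv")

# Keep only rows where the query CONTAINS "tiling courses".
tiling = df[df["query"].str.contains("tiling courses", case=False, na=False)]
print(tiling[["query", "page", "clicks", "impressions"]])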

 

SLIDE 16

FILTER BY

 

Let’s say now that we want to see all of the keywords in the domain profile that do NOT contain “tiling courses” – we simply set the filter to NOTCONTAINS.

 

SLIDE 17

FILTER BY

 

Here we can see all of the website’s queries that do NOT contain “tiling courses”.

 

Now you should be able to see how this kind of data filtering (with layering) can become tremendously powerful when analysing datasets.

 

SLIDE 18

FILTER BY

 

Let’s say now we want to EXCLUDE the website’s BLOGS from the mix.

 

So we know that the website’s blog structure means BLOG posts use a URL containing /blog/.

 

This means that if we EXCLUDE any data where the PAGE contains /blog/, we can filter this traffic out.

 

SLIDE 19

FILTER BY

 

Here we can see the dataset excludes blogs.

 

BUT, if we wanted, based on the screenshot we could also filter out URLs containing “how-to”, “tips” etc.

 

The possibilities are endless, whether you select page, query etc.

 

SLIDE 20

FILTER BY – WHY IS THIS SO POWERFUL?

 

Using FILTER BY, you can break up your Search Console profile data any which way.

 

Perhaps you want to look at the domain’s total QUERY, CLICK and IMPRESSION performance exclusive of “BRAND SEARCHES” & BLOGS (see the sketch after these questions).

 

Perhaps you want to see how many URLs are returned for a single query?

 

Perhaps you want to isolate BRAND searches only?
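
All three of those questions can be answered offline as well. A minimal pandas sketch, assuming the same hypothetical “gsc_export.csv” from earlier and using “acme” as a stand-in brand term:

import pandas as pd

df = pd.read_csv("gsc_export.csv")  # hypothetical query + page export

# Strip brand searches and blog pages, then total what's left.
non_brand = df[~df["query"].str.contains("acme", case=False, na=False)]
non_brand = non_brand[~non_brand["page"].str.contains("/blog/", na=False)]
print(non_brand[["clicks", "impressions"]].sum())

# How many URLs are returned for a single query?
print(non_brand.loc[non_brand["query"] == "tiling courses near me", "page"].nunique())

# Or isolate BRAND searches only.
brand_only = df[df["query"].str.contains("acme", case=False, na=False)]
print(brand_only[["clicks", "impressions"]].sum())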

 

SLIDE 21

BUT WAIT A MINUTE?

 

I CAN ALREADY DO THIS IN SEARCH CONSOLE?

 

WRONG!

 

You can’t add multiple filters, nor can you export past 1,000 rows, which massively limits the data you have access to.

 

SLIDE 22

So can I find out

 

Am I suffering from cannibalisation / canonicalisation issues?

 

YES!

 

Using simple data analysis, it’s much easier to filter your data and find out what is going on.

 

SLIDE 23

CROSS COMPETING PAGES

 

Pages that cross-compete can, and often will, lose average positions. This is usually visible in Search Console as more than one page being served simultaneously.

Look for more than one page being returned per query, with NO broken average-position line, and where there is sufficient impression share between the pages.

If one page has 1,000 impressions and another has 5, this isn’t an issue. If the competing pages’ impressions are sizeable enough, then it can be classed as a “cannibalising result”.

 

SLIDE 24

COMPLETE CANNIBALISATION

 

Fragmented average-position lines, with multiple pages showing click and impression data, are a clearer indication of cannibalisation.

 

Typically this is worse than “cross-competing pages”, as average positions are likely to be more erratic and lower, with periods of complete page drop-out.

 

Typically, Google is unable to determine if one or both pages being returned are ideal due to content and quality issues.

 

SLIDE 25

So, set your filters up to start looking for the issues

 

You can either focus on specific pages one at a time, OR work your way through an entire index check.

 

I recommend doing the latter.

 

Simply set your filters as follows and then click REQUEST DATA

 

Important!

 

If you have changed your website’s build / URL structure or made significant structural changes, it will impact the data, as Google Search Console stores click / impression / CTR / position data for historic URLs, which will be returned. If this is the case, shorten your DATE RANGE.

 

SLIDE 26

Freeze your top row of returned data

 

This makes data analysis that little bit easier.

 

Highlight ROW 1, click VIEW, Freeze > 1 ROW.

 

 

SLIDE 27

Now let’s sort the data by QUERY

 

Press CTRL + A in your sheet to highlight all the data.

 

Then select DATA > SORT RANGE.

 

SLIDE 28

Now let’s sort the data by QUERY

 

Tick “Data has header row”.

Sort by IMPRESSIONS.

 

Z > A (Highest to lowest) 

 

Click sort.

 

 

SLIDE 29

Now Let’s Find Potential Issues

 

There are various ways you can do this.

You can manually scroll through the list looking for KEYWORDS where there are IMPRESSIONS and variant URLs.

 

For example – in this screenshot already I can see an issue >

 

“Tiling courses near me” is returning 2 different URLs, both with a sizeable share of impressions.

 

SLIDE 30

Applying conditional formatting

 

We can apply CONDITIONAL formatting to find duplicates.

 

This gives us a better chance of isolating dual queries / dual pages.

Highlight the column (A) you want to search through.

 

Click Format > Conditional Formatting. 

 

The Conditional Formatting menu opens on the right.

 

SLIDE 31

Applying conditional formatting

 

In the Format cells if… drop-down, select Custom formula is

 

A new field appears below it.

Enter the following formula in the new field, adjusting the letters for the column range you selected:


=COUNTIF(A:A,A1)>1

 

SLIDE 32

Applying conditional formatting

 

Once you have entered the formula and selected the formatting style, you should see your QUERY column come to life.

 

SLIDE 33

 

Now you have your data

 

Now you’ve got your data – it’s clear to see what queries are duplicated.

 

If a QUERY appears more than once, then more than one URL is being returned for that QUERY.

 

BUT! BUT! BUT!

 

This alone isn’t indicative of cannibalisation or canonicalisation. The next part is finding 2 or more pages for 1 query where there are sufficient IMPRESSION volumes side by side.
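
If you’d rather automate that second step, the sketch below flags queries served by two or more pages where the runner-up page still takes a meaningful share of impressions. It assumes the same hypothetical “gsc_export.csv” of query + page rows, and the 20% threshold is an arbitrary starting point – tune it for your site:

import pandas as pd

df = pd.read_csv("gsc_export.csv")  # hypothetical query + page export

# Per query: distinct pages, total impressions, and the impression
# share taken by the second-strongest page.
stats = df.groupby("query").agg(
    pages=("page", "nunique"),
    impressions=("impressions", "sum"),
)
stats["runner_up_share"] = (
    df.sort_values("impressions", ascending=False)
      .groupby("query")["impressions"]
      .apply(lambda s: s.iloc[1] / s.sum() if len(s) > 1 else 0.0)
)

# Flag queries with 2+ pages where the runner-up takes >= 20% of impressions.
suspects = stats[(stats["pages"] >= 2) & (stats["runner_up_share"] >= 0.2)]
print(suspects.sort_values("impressions", ascending=False))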

SLIDE 34

Filter Data Further to Speed Up Efforts

 

Based on the “Tiling Courses Near Me” issue highlighted on the previous slide, I could now set up the filter to RETURN ALL matching queries (so I can see all the offending pages).

 

If, during your manual analysis you find offending terms, you can then start filtering your data further.

 

SLIDE 35

Here it is clear to see that “tiling courses near me” is returning various pages that are generating equally sizeable impression volumes.

 

DEPENDING ON YOUR TIME WINDOW, it could be a case of cannibalisation / canonical issues, or natural changes to the pages being returned as a result of your SEO / content activities.

 

SLIDE 36

If you find examples of the same query returning lots of pages with a good proportion of impressions, you can then revert to Search Console to ascertain whether the pages cross-compete or whether there is complete cannibalisation.

 

In the “tiling courses near me” example, I can see a static line, which indicates the pages are likely cross-competing (internally they are of similar relevance, so Google is unsure which to return, but it returns one or more at any one time).

 

SLIDE 37

So can I find out

 

Should I cull pages on my website?

 

YES!

 

Looking back over time and filtering your content – you can decide whether or not to cull pages based on performance.

 

SLIDE 38

If your website has content that doesn’t do anything – why keep it?

 

We’ve all been there. Whether we write long-form articles or blogs – it doesn’t matter.

 

Google wants sites to be useful for end users; typically this means a website should be taken care of (avoiding the accrual of low-value content).

 

SLIDE 39

Got content in your website that over 6, 12, 16 months generates sod all clicks or impressions?

 

Get rid of it.

 

It doesn’t matter if you think it’s of value.

 

If Search Console churns out loads of URLs from your website where, over time, there are low impression volumes and little to no clicks, then it’s time for a cull.

 

SLIDE 40

It’s quick and easy to do.

 

Simply reset your filters (from anything you’ve been doing previously).

 

Select GROUP BY PAGE.

 

Then select the longest time-frame (16 months) and click request data.

SLIDE 41

With the returned PAGE data, we now need to sort by impressions.

 

Press CTRL + A to select the entire sheet’s worth of data.

 

Click Data > Sort range.

Tick “Data has header row”.

Select IMPRESSIONS and then select A > Z (lowest to highest).

SLIDE 42

It’s likely you’ll have a load of index rubbish – now time to layer filtering

 

With the first batch of data you can begin to generate filters to get rid of it.

 

For most people running WordPress, you’ll get tag pages, old image and media references, and plugin folders, amongst other things.

 

You simply need to make a note of the STRINGS in the URLs that we can start excluding, i.e.

SLIDE 43

It’s likely you’ll have a load of index rubbish – now time to layer filtering

 

We can get rid of URLs containing:

 

Banner-cat

/page/

Edited_tiling

gallery/img

gallery/web

 

SLIDE 44

Using the Filter By function, we can add multiple exclusion filters for URL strings.

 

Notice how I have added FILTER > PAGE > NOTCONTAINS

 

Basically, any pages that contain these items will NOT be returned on the next data request.
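
If you want to preview what those exclusions will leave behind before re-requesting, you can apply the same NOTCONTAINS logic locally. A small sketch, assuming the page export was saved as a hypothetical “gsc_pages.csv”:

import pandas as pd

df = pd.read_csv("gsc_pages.csv")  # hypothetical GROUP BY PAGE export

# The same NOTCONTAINS exclusions, applied in one pass. These are plain
# substrings (no regex metacharacters), so joining with "|" is safe here.
junk = ["banner-cat", "/page/", "edited_tiling", "gallery/img", "gallery/web"]
clean = df[~df["page"].str.contains("|".join(junk), case=False, na=False)]
print(f"{len(df) - len(clean)} junk rows removed, {len(clean)} pages remain")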

 

You need to be patient – it’s likely you’ll have to cycle through REQUESTING DATA after each batch of filters is added before you get down to actual content.

 

SLIDE 45

Eventually, you’ll start ending up with a list of actual pages (be they blogs or service/product pages).

 

It’s from here that you’d need to make a call on whether the page could be scrapped.

 

If the page has existed for a long enough period of time but has failed to accrue impressions (in proportion to the background impression performance of other pages on the website), then it’s likely not going to do anything other than drag overall site performance backwards.

SLIDE 46

12 months old? 100 impressions. 16 months old? 69 impressions. 6 months old? 200 impressions.

 

This exercise is about proportionality. You have to understand the background search profile and impression counts, and work out whether a page genuinely isn’t accruing impressions because similar content elsewhere on the website is performing, or whether it doesn’t perform because the content is poor, old, wrong, or not well internally linked, amongst other things.
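
One way to put that proportionality into practice is to benchmark each page against the site’s own background numbers rather than an absolute figure. A rough pandas sketch, again assuming the hypothetical “gsc_pages.csv” export – the threshold is an arbitrary starting point, and page age isn’t in the export, so judge that separately:

import pandas as pd

df = pd.read_csv("gsc_pages.csv")  # hypothetical GROUP BY PAGE export

# Benchmark against the site's median impressions per page.
median_imps = df["impressions"].median()

# Candidates: well under the background impression level AND no clicks.
cull_candidates = df[
    (df["impressions"] < 0.1 * median_imps) & (df["clicks"] == 0)
]
print(cull_candidates.sort_values("impressions")[["page", "impressions", "clicks"]])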

 
