How to optimise your website performance for marketers and developers

WEBSITE PERFORMANCE

I signed up for a Learn Inbound Marketing event a few months ago and I must say the content of the 'Website Performance – A Marketing Priority' presentation was outstanding! It also complements my previous blog post on how to understand your website traffic data with Google Tag Manager very well.


This presentation delivered by Emily Grossman is divided into 6 topics:

  1. Definition and importance of web performance to marketers
  2. Why might it be valuable for SEO (Search Engine Optimisation)?
  3. Why do we suck at this?
  4. Measuring performance
  5. Auditing performance through lab tests and Real User Metrics (RUM) tests
  6. Optimising your site, your UX (user experience) and your business.

If you prefer listening to a podcast to reading, you will find the presentation recording below.

If you have a more visual memory, you will find the podcast transcript and PDF presentation further down in this article.

PODCAST TRANSCRIPT

1. Definition and importance of web performance to marketers

  • Definition

What is web performance? Performance is the speed at which web pages are downloaded and displayed in the user's web browser. Web Performance Optimisation (WPO), or website optimisation, is the field of knowledge about increasing web performance.

  • Why is that important to marketers?

Let's go back to Maslow's modernised hierarchy of needs with WiFi access added to the pyramid. People feel that slow WiFi is worse than no WiFi at all. Waiting for something to load is stressful and annoying. And as marketers, we generally try not to piss off the people who make us money. So you can see why this might be a problem.

But even if we look at it quantitatively, this could be a really big problem, like 10% of your audience lost. Luckily, the flipside of this is that when we do well with delivering great experiences to our customers at a fast pace, they also reward us. We get:

  • an increase in our conversion rate and engagement
  • a decrease in bounce rates and an increase in orders on e-commerce sites
  • an increase in conversions amongst new customers.

This can translate to real money. It can mean an increase in revenue and in customer spending. So performance can really be valuable for marketers.

2. Why might it be valuable for SEO?

In terms of SEO, earlier this year Google announced something called the 'Speed Update'. Basically, it is an update to the algorithm that, for the first time, adds a ranking impact based on a site's speed in mobile search results.

However, this update only impacted slow sites. The idea was that if you were really slow, you might get demoted; if you were super fast, it wouldn't impact you. Actually, I would say the speed of your site is critical for searchers, because it impacts their experience in an interesting way when they arrive from a search context.

If you imagine that all the sites in Google are like products in a grocery shop, you'll know that your competitors are sitting right next to you on the shelf. If your product is broken and busted, leaking all over the place, nobody wants to deal with it. Not only will you lose that customer, but they will probably put money right into the pocket of the competitors lurking next to you.

So, there are tons of reasons to care about performance. As marketers, you would think that the web would be blazingly fast, but that's not true. In fact, Nicola did an incredibly intensive study in the UK. She looked at 1,000 of the top UK domains and found that a lot of them were really struggling.

They were struggling to provide an interactive experience to users in less than 10 seconds. Irish websites can be on this struggle bus too, at getting navigation up for users in a reasonable amount of time on different networks.

3. Why do we suck at this?

It's hard. A developer evangelist posted a blog post detailing all the challenges developers are going through in 2018. A huge section is about optimising a website for performance.

I’d like to focus on 2 main issues:

  • Developers don't know what goals they need to aim for.

Indeed, developers do not have all the information about their user base and the impact their decisions have on it. Marketers, on the other hand, love collecting user base data and measuring impact.

  • How do we fix slow web performance?

Today, I would like to talk about involving marketing in this conversation around measuring performance, auditing performance and optimising performance.

4. Measuring website performance

Measurements are actually just a proxy for feelings. But how do we know what a fast experience feels like? Can we associate that with something else?

Google has done a good job of labelling the kinds of things users might be looking for: indicators, from their experience, that things are moving along quickly.

They want to know: ‘Is it happening? Is it useful? Is it usable?’ If we understand that these are our users’ expectations, we can start to associate various measurements with those feelings. Those measurements might have interesting different names, things like ‘First Contentful Paint’, ‘First Meaningful Paint’, ‘Time to Interactive’.

But what we are really trying to figure out for these users is: ‘Do they know that it is happening? Do they know that it is useful? Do they know that it’s usable?’
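As a minimal sketch, assuming a browser that supports the Paint Timing API, you can watch two of these measurements arrive yourself with the standard PerformanceObserver:

```typescript
// Observe paint timings with the standard PerformanceObserver API.
// 'first-contentful-paint' is the measurement behind "is it happening?";
// Time to Interactive needs heavier tooling (e.g. Lighthouse) and isn't shown.
const paintObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.name is 'first-paint' or 'first-contentful-paint'
    console.log(`${entry.name}: ${entry.startTime.toFixed(0)} ms`);
  }
});
paintObserver.observe({ type: 'paint', buffered: true });
```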

As marketers, getting involved in these conversations allows us to make our measurements truly meaningful to us when we bring them back to our engineers. It also helps engineers to know:

‘What matters at the marketing level? Does this content need a picture loaded for it to feel meaningful or is that image irrelevant?’ These are the kinds of decisions we have to make hand-in-hand with our developers.

5. Auditing website performance

We know what we want to measure, but how do we do that? This is one of the trickier parts of performance optimisation work. For this, we are going to use two different kinds of measurements:

  • Lab tests or simulated tests
  • Real User Metrics (RUM) tests.

  • Lab tests, sometimes referred to as 'simulated tests'

There are lots of different tools that will allow you to do these lab tests. Basically, what you are doing is inputting a URL. Then, you are getting out some information from a simulated test environment. There’s a machine somewhere that says:

‘We are going to try and simulate what a user might experience over various different connections or the connection that you set yourself. We will give you back some results.’

You might get back something like this from a lab test: a set of 'timings' that indicate some of the measurements we talked about before. You can certainly set those up yourself as well.

You might also get what looks like a film strip. The 'film strip' shows you what is visually happening while those calculations are made, at least in the case of WebPageTest, which is the tool I'm using to show you this information.

Another view you can get is a 'waterfall', which lets you see how large sites/pages are. Those little bars show you the requests your page made. You can see that in a lot of cases there's a lot of JavaScript, some CSS and some images. These are the building blocks that make up your site. These tools can help you segment each individual request so that you know how long each one is taking.

So, there is a benefit to running lab tests: there's almost no setup required. You can input a URL and go, which means it's also very easy to track your competitors.

Because you can test pages before they launch, you can see how certain pages are going to run ahead of time. You can also do interesting tests with controlled 'variables'. So, if you want to test something before it goes live, like adding or removing something, you can see what happens.

You don't have to deal with other variables in the real world. You can also test on multiple networks and compare how things change when you move from, say, a 4G network connection to a 3G network connection.

However, the problem with these lab tests is that they can be hard to scale and keep current. We are doing everything at the URL level. They can be automated, but that takes some manual labour. You often have to run multiple tests to get real results.

So, in WebPageTest, for example, we'll run 3 tests and take the median result to get rid of outliers. Because there are no real-world variables, we have issues understanding the real impact on our users. If we are testing on 4G but 75% of our users access the internet through a 3G connection, how much is that telling us?

It can also be really difficult to measure these pages when they are dynamic. When ads are changing sizes, we are also not getting an understanding of how things look for users. What we want to test with our users is their experience. It's actually what we were talking about with Google Tag Manager (GTM) before: we want to track how far our users get down our page.

  • Real User Metrics tests (RUM)

With Real-User Performance Monitoring, we want to check how far along the loading process our users are getting. So, the data you get back is a little bit different.

Suddenly, your performance metric is not a single number but a wide spread of numbers. You can break these down in various ways, but there's no real way around it: you are going to get a much bigger spread of numbers when you look at real users.

Sometimes it's easier to break this data down into a table. For example, in this table we can see that 10% of our users are not reaching Time to Interactive in less than 12.6 seconds. This is the kind of information we can use to truly understand what is going on with our user base in a real-world context.

There are pros and cons to this table.

  • Pros:

The pros are the inverse of the lab tests. It's very scalable. It's great for seeing customer pain in real time. We don't have to run the test every so often; the data just comes in as our users do.

  • Cons:

This is going to require a lot more engineering support to set up. You have to load some software and add some 'event tracking' to understand what's happening. You also have to deal with 'survivorship bias'. This is an issue where, for us to understand how long it took somebody to reach Time to Interactive, they actually have to get to Time to Interactive.

If your webpage is so slow that people abandon it, you are not going to get those data points, because they never waited for the page to load. This is important to understand and to measure against your lab tests as well. There are also some issues with variables, as there are a lot more of them involved in this real-world data.
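To give a sense of the engineering involved, here is a minimal RUM sketch, assuming a hypothetical /rum-collect endpoint: it beacons a few standard Navigation Timing milestones home from each real user, so they can be aggregated into percentile tables like the one above.

```typescript
// Collect basic RUM data: read the standard Navigation Timing entry on load
// and beacon it to a collection endpoint ('/rum-collect' is hypothetical).
window.addEventListener('load', () => {
  const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
  if (nav) {
    navigator.sendBeacon(
      '/rum-collect', // hypothetical endpoint that aggregates the numbers
      JSON.stringify({
        ttfb: nav.responseStart,            // time to first byte
        domInteractive: nav.domInteractive, // DOM parsed and usable
        loadEventEnd: nav.loadEventEnd,     // full page load finished
      })
    );
  }
});
```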

But if you are thinking it may be nice to look at this RUM data and the lab testing together, then you would be right. In fact, most organisations that do some sort of ongoing performance optimisation follow a cycle like this: they write code, test it in the lab to make sure it meets their standards, deliver it to their users, then validate with RUM that users are experiencing the lift predicted in the lab.

I also think it's important to combine your lab and RUM tests when it comes to auditing. And here is why.

Think about what your developers can do right now with lab test data: they can understand not only the real users' pains but also what this website could be, and where the potential issues are that show up in lab tests. Remember that developers can do all of that on their own; what they really need from us is information about who our users are, what our user base looks like, and the impact of their potential changes.

So, if you can, later on, look for the analytics information about:

  • the traffic to your site or maybe more specifically
  • the search traffic to your site
  • your conversion rates and maybe even your click-through rate (CTR) from Search Console.

You would then start understanding what's important and start helping developers to prioritise. You could also develop an 'effort' score with them. This made-up score will help you understand how much work it will take to improve your performance on those various pages.

Then, at the end of your audit, you have an understanding of how bad shit sucks, but also which pieces of content/page templates/URLs are the most important for you to try and fix first.

Today, I hope that you are able to understand that performance isn’t just about improving your site speed. This is only part of the performance optimisation process.

6. Optimising your website – actual speed

I also want to open your mind to the idea that site optimisation can be about optimising your business and its processes. This will ensure that, over time, you develop a culture that prioritises improving your performance metrics.

Now, when you are working on optimising your performance in your organisation, most of you are not going to be coding these improvements yourself. You are going to be working with a development team.

How to not motivate your developers:

The number one thing not to do with developers is to just hand them tasks and assignments; they'll resent you forever.

How to motivate your developers:

Remember that developers are problem-solvers. So, if you frame your request as a problem statement instead of a command, you will have much more success with your development team. Let them in on your goals and give them access to your users' information. That's what they want and need to be empowered and successful.

But if you are worried about what they are going to do when they get their hands on the site and start working on this goal of improved performance:

‘it mostly boils down to ship less stuff to your customers and what you do ship, try and deliver it in an optimal order.’

I love this quote by Patrick Meenan, creator of webpagetest.org, because if you go and read decades-old books on performance optimisation, so much of them still holds true.

I also want to spend some time talking about some of the noddy requests that we, as marketers, make to our development teams, because I want to make sure we are aware of the performance impact of those requests, so that when we make them, we understand what we are asking developers to do.

Images are still the number one cause of bloat on the web, because we love images. If you would like to know what it is like to optimise images on your site, please read this extensive guide. Read it all the way through and you'll find all the different ways developers have had to clean up after us and our giant image requests. It's really interesting.

But let's move on to something called 'Third-Party Scripts'. These are things like ads, analytics and widgets: things that can be embedded into any site and come from a third-party source. We, marketers, love to pop things into a website.

But remember that asking developers to do this is like asking them to put a loudspeaker on a finely tuned car. You can optimise the car as much as you like; it's not going to fix the fact that there's a loudspeaker on top. So, the real question we need to ask ourselves as marketers is: 'Do we really need the loudspeaker?' Before we go and make supplementary requests, we need to be aware that developers can't always control what a third-party script will do on the other end.

Now, a few days ago, someone in the SEO space made a great post about how we can go into the DevTools part of Chrome and check how many requests from our site are actually coming from third-party scripts. Through Chrome DevTools, you can also run a site speed audit and see, in a simulated test, how much site speed improvement you get from turning those scripts off. When you do this, you'll probably figure out just how much pain your users are feeling because of the extra scripts you keep adding to your site.

This is something you can also do in WebPageTest. You can see in side-by-side 'film strip views' how fast your site might be without your scripts. You can then go back, look at all the things you've requested on your site and clean them up.
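In the same spirit, a minimal sketch using the standard Resource Timing API will count your third-party requests straight from the browser console:

```typescript
// Count third-party requests with the standard Resource Timing API.
// Anything whose URL doesn't start with our own origin is third-party.
const resources = performance.getEntriesByType('resource') as PerformanceResourceTiming[];
const thirdParty = resources.filter((r) => !r.name.startsWith(location.origin));
console.log(`${thirdParty.length} of ${resources.length} requests are third-party`);
```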

The other thing that can sometimes be an issue with third-party scripts is when they are render-blocking. 'Render-Blocking Scripts' are special: they prevent the webpage from being displayed until they are downloaded and processed themselves. They are like roadblocks that come in and say 'Wait for me, I'm important'. You might actually want your CSS to be render-blocking, because you don't want your users to see a flash of unstyled text. You want them to see the page the way it's supposed to look.

But there are some other scripts we sometimes add to our sites that shouldn't be render-blocking, as they cause huge delays. Some of those are 'A/B Testing Scripts'. Most A/B testing tools default to rendering on the client side. What this means is your browser says 'Hey, there's a user here, send us the website', and goes to get the website from the server. The response comes into the browser, and the browser then edits the site: it runs the JavaScript the testing tool uses to make changes and only then renders the page for the user. This part can take some time to execute.

The other option that you might have is something called 'Server-Side Experimentation'. If you are doing A/B testing, you want to see if this is an option for you, because it can cut down substantially on load times. In this case, the experiment decisions are made on the server. Then, when the page gets sent back to the browser, the browser doesn't have to spend extra processing time making that decision.

Another thing I want to briefly mention is that Google Tag Manager can also sometimes be render-blocking. If you want to make sure that the decisions you are making in GTM aren't going to cause delays on your site, you need to make sure that not only is the Tag Manager loaded asynchronously (not render-blocking) but also that all the things it's doing aren't going to block render either.
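For illustration, here is a minimal sketch of loading a third-party script asynchronously, so it downloads in parallel instead of blocking render ('https://example.com/widget.js' is a placeholder URL):

```typescript
// Inject a third-party script without blocking render: the async flag lets
// the browser keep rendering while the script downloads.
const script = document.createElement('script');
script.src = 'https://example.com/widget.js'; // placeholder third-party URL
script.async = true;
document.head.appendChild(script);
```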

The other thing that you might come across as a marketer is these very interesting new websites built entirely with JavaScript frameworks. They have fun names such as React, Angular, Ember, Preact… You might consider working with your development team to figure out whether they should do something called 'Client-Side Rendering' (CSR) or 'Server-Side Rendering' (SSR).

  • Client-Side Rendering

I'd like to talk about the impact this has on loading. In a CSR situation, the server sends a minimal response and the browser is responsible for the rest: the browser downloads the JavaScript and executes it, and only then is the whole page viewable and interactive.

  • Server-Side Rendering

SSR is a little bit different. In this instance, the server sends already-rendered HTML to the browser, which the browser can render straight away. The browser then downloads the JavaScript and executes it, and now the page is interactive. You might perceive the SSR approach to be faster (the image shows up sooner), but we have to remember that there is potentially a delay between when the content is viewable and when the page is interactive. This means you can get something that looks like a visually ready page, but when you tap on a button, it doesn't actually respond to you.

This is the problem we sometimes run into with SSR content. To solve it, we need to do something called 'Code Splitting', which essentially breaks that JavaScript out into small pieces. The browser can then focus on executing one piece of interactivity at a time, so it can load something much faster than the whole JavaScript file.
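As a minimal code-splitting sketch, assuming a bundler such as webpack and a hypothetical './checkout' module: the dynamic import() becomes its own chunk, downloaded only when the user actually needs it.

```typescript
// Code splitting via dynamic import: './checkout' is bundled as a separate
// chunk and only fetched when the button is clicked, keeping the main
// JavaScript payload small.
const button = document.querySelector('#checkout-button'); // placeholder selector
button?.addEventListener('click', async () => {
  const { startCheckout } = await import('./checkout'); // hypothetical module
  startCheckout();
});
```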

The other thing you can do is optimise for 'Repeat Views'. If someone hits your website for the first time, there's not a lot you can do to serve them faster. But what if they are coming back for the second time? Is it possible for us to change things so that we don't actually have to go back to the internet every single time we want to get 'assets'? Can we actually save that information on their device?

There's a new technology called the 'Service Worker' API. It is about to be supported in Safari and allows us to do just that. With a Service Worker, you can actually intercept those requests and store some items in your Service Worker cache. Then, if the user needs them again, we can just go to the cache. This can save a lot of repeat load time.
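As a minimal sketch of the idea (not the speaker's exact setup), a Service Worker can answer requests from its cache and fall back to the network, storing a copy for next time:

```typescript
// sw.ts - a cache-first Service Worker sketch, compiled against the
// 'webworker' lib. 'static-v1' is an arbitrary cache name.
declare const self: ServiceWorkerGlobalScope;

self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then(
      (cached) =>
        cached ?? // serve from cache when we have it
        fetch(event.request).then((response) => {
          const copy = response.clone(); // a response body can only be read once
          caches.open('static-v1').then((cache) => cache.put(event.request, copy));
          return response;
        })
    )
  );
});
```

The page opts in once with navigator.serviceWorker.register('/sw.js'); after that, repeat views can be served from the cache.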

The last thing I want to leave you with in this section is a technique called 'Resource Hinting'. It uses our users' downtime to start downloading assets we know they are going to need on the next page.

So, imagine you own a business that sells cat toys and you have a giant page of cat toys. You know that at the end of that page, the user is probably going to click through to your check-out page, which contains a GIF image. You like that image and don't want to sacrifice it, and you know nobody gets to your check-out page from anywhere else; they have to be on the cat toys page first. So, while the user is spending time browsing, can you start to download that cat GIF for the next page and just save it until they click that button? Yes, you can, and that's through something called a 'Resource Hint'. If you can predict where the user is going to go next, you can actually start downloading assets for that next page ahead of time and save them.
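A minimal sketch of such a hint, with '/images/cat-toy.gif' as a placeholder path; it is equivalent to putting a <link rel="prefetch"> tag in the page head:

```typescript
// Ask the browser to prefetch the next page's image during idle time.
const hint = document.createElement('link');
hint.rel = 'prefetch';             // low-priority fetch for a future navigation
hint.as = 'image';
hint.href = '/images/cat-toy.gif'; // placeholder path to the checkout GIF
document.head.appendChild(hint);
```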

7. Optimising UX – user perception

I talked about how measurements are a proxy for feelings, and in some cases we may have difficulty influencing those metrics. But if we can impact the user's feelings, that's still OK. We may bypass the proxy, but we can still reap the end results: the improved conversions and engagement…

So, I want you to think about two different kinds of queues you have been in in your life: a queue that moves really slowly and another one that moves really fast. I think about two experiences: waiting for an hour and a half at Dublin airport, a painfully long process, versus going to a restaurant in London. In fact, the quoted waiting time is exactly the same at the airport and at the restaurant.

The difference is that at the restaurant they shuffle you between different places: outside, sitting down inside, then going to the bar to have a drink. Then they send you to a different bar before sitting you at a table. By the time you are done, you think 'Hey, that was really fast'. But it wasn't. It's just that you were constantly in an active state; things were still happening. If you are still walking and moving in that queue, you feel like it is going fast, even if you are waiting just as long. You can use this same tactic with your users.

So, the next time you log into Slack, think about what Slack does when it shuffles you through different states. By putting you in an active state, it makes you forget how long it's actually taking for the product to load.

This is also the same principle behind skeleton screens: you get a flash of something that looks like content and it changes your mind. You start thinking 'Hey, maybe I'm ready for content now'. It buys just that extra time to get users into a state where they feel they are not waiting that long. On an even more practical level, your standard progress bars can feel slower or faster depending on how they are designed. There's a great study that styled progress bars differently and tracked users' perception of them. It found that progress bars with backwards-moving animations felt faster to users than standard progress bars.

8. Optimising your business – priorities and process

The last thing I want to touch on is how to optimise your business for future success. It’s really important for your business that you rally everyone behind this effort.

So, that means you have to simplify your Key Performance Indicators (KPIs). You might want to measure everything, but what are the two KPIs that really affect your bottom line? Associate them with money, to make sure that everybody in your organisation understands what 200 milliseconds really means. Once you have this culture of everybody in the organisation knowing how important these 200 milliseconds are, you will find that people start asking questions like 'Can we afford it?'

When the marketing team wants a script implemented, everybody wants to know ‘What does that do to our load time? How much is that going to cost us in users?’

When you have those situations where you can't compromise, you have to compromise on something that isn't performance. That can be really challenging. But ultimately, when you are able to tie your performance decisions back to your bottom line, that's something you can do. Even the BBC says that at peak times, when their servers are overloaded and things are getting incredibly slow, they are willing to sacrifice a lot of marketing features on their site for the sake of performance. That's because they know that one second added is 10% of their audience lost.

So, I hope you can start thinking about what time can mean to you. Does it mean $300,000 in revenue? Does it mean £800 million every year in increased customer spending? How much are you leaving on the table by not investing in performance?

Finally, for those who would like to download the PDF document containing more visuals and her contact details, click on the link below:

Web performance PDF presentation

How to understand your website traffic data with Google Tag Manager


I signed up for a Learn Inbound Marketing event on Google Tag Manager data insights a few months ago!

The presentation delivered by Tom Bennett is divided into 5 topics:

  1. Understand and invest in your data
  2. The challenges of engagement traffic
  3. Google Tag Manager can help us improve our data collection
  4. Smarter segmentation
  5. Work with your developers.

Since it is quite technical, I recommend signing up for Google Tag Manager and following along with the process he talks us through.

If you have a more audio or visual memory, you will find the podcast transcript and PowerPoint presentation further down in this article.

PODCAST TRANSCRIPT

1. Understand and invest in your data

Google Tag Manager helps you measure success in Google Analytics.

If you take away only one thing from this evening, it’s understanding and investing in your data.

Google Analytics is designed to work well out of the box: an implementation with zero customisation is very easy to set up.

But let's be honest, the 'one size fits all' approach to marketing is rarely the best. Indeed, the needs of your business and the Key Performance Indicators (KPIs) of your website are unique.

Consequently, data collection is crucial for the entirety of the analysis process. It doesn’t matter:

  • how many segments you build
  • or how many goals you define,

if you mess up your data collection, it will screw up every other stage, too. So, the value of the insights your analytics software gives you is directly tied back to the investment you have made in data collection at the first stage of the whole process.

So, today I'm going to run through a few examples of how a smart implementation of Google Tag Manager (GTM) can dramatically improve the relevance and quality of the data available in Google Analytics.

There are no magic bullets, but I hope everyone here will be able to take away at least one technique they weren’t previously aware of and get some of the value from it.

2. The challenges of engagement traffic 

So, we are going to start with engagement tactics, specifically content engagement, because so many organisations are stuck trying to answer meaningless questions like 'Why is that Bounce Rate so high?'

The problem is that you see reports saying things like 'Our content is really good because our sitewide average bounce rate is down to 10%'. But this statement is worse than misleading; it is often inaccurate.

In fact, many people who use Bounce Rate as their primary KPI don't actually understand what Bounce Rate is measuring. The effect of this is that individuals are encouraged to fix the metric rather than the underlying problems, which are of course unique to your site.

So, let's refresh ourselves on the definition of Bounce Rate.

Google defines a bounce as a single-page session, calculated as a session with only a single request to the Analytics server. What that typically means is that a user arrives at and leaves your site via a single page, without doing anything on any other pages in between.

It's important to remember that sessions are really fictional constructs that Analytics comes up with when it processes your data.

Analytics doesn’t know how long a user spends looking at a particular page. It doesn’t set any kind of timer to measure when a session started and when it ended. All it has is this raw hit data:

  • pageviews
  • events
  • transactions.

From this data, it extrapolates and builds this arbitrary notion of a session, which ends after 30 minutes of inactivity (a time gap between hits), at midnight, or on a campaign change.

Now, incidentally, this is why if you commit the sin of tagging your internal links with UTM parameters, you generally see a very high Bounce Rate on most pages. Navigation via those links will result in a new session starting.

So, in order to calculate something like 'average time on page', GA actually measures how long it takes until the next hit is received. To get the session duration, it just measures the time between the first and the last 'hit' in that session.

So, for 'Bounces', GA doesn't have enough data to generate all those metrics it reports, such as average time on page.

Indeed, there is no second hit to measure against to calculate 'time on page', which is why Bounce Rate is not a really good metric to use as your sole KPI, especially when used in aggregate. It becomes meaningless because the questions we can't answer are substantial.

We don't know what the user did on the page, or how valuable they are to us as potential customers. We don't even know whether:

  • the website functions properly on that device
  • they read every single word of that content
  • or they bookmarked it to come back later.

Ultimately we lack data.

3. Google Tag Manager can help us improve our data collection

A smart implementation of Google Tag Manager (GTM) is necessary.

  • CONTROLLING AND TWEAKING THE BOUNCE RATE

So, we will stick to the ‘Bounce Rate’ for a while because it demonstrates some good points. You do have control over the bounce rate calculation.

Indeed, you can control which hits affect Bounce Rate (BR) and which don't. To illustrate this point, here is an example from a client I recently on-boarded. They had a 0% BR on most of their pages and couldn't figure out why.

Ultimately, what had happened is that the development team had configured not just the standard pageview but also an 'Event' that fired when all the dependent resources on the page were ready (images, skyscraper ads…).

Consequently, it was impossible to have a single-hit session, because every page viewed was firing two hits. That's the same principle behind why really bad WordPress implementations often see a low Bounce Rate: duplicate tracking code, i.e. two hits per page.

But don't worry, you can control which 'events' affect the Bounce Rate by using the 'Non-Interaction Hit' flag. You can set this very easily in GTM when you are configuring your 'event' tag, by setting 'Non-Interaction Hit' to 'True'. The BR for the page on which this 'event' fires will then be calculated as if the event wasn't there.

So, for example, if you absolutely have to fire an event when an auto-playing video starts, just set 'Non-Interaction Hit' to 'True' and the BR will be calculated as if that second hit wasn't there, making it more accurate.
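For reference, here is a minimal sketch of the equivalent hit in plain analytics.js, assuming the classic ga() tracker is already loaded on the page:

```typescript
// The hit GTM sends when 'Non-Interaction Hit' is set to 'True', expressed
// directly against analytics.js. Category/action/label values are examples.
declare function ga(...args: unknown[]): void; // provided by analytics.js

ga('send', 'event', 'Video', 'autoplay', 'hero-video', {
  nonInteraction: true, // this hit won't count against Bounce Rate
});
```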

This idea of using 'events' to control our BR plays nicely into the whole idea of 'On-page Engagement Tracking', on a single-page visit for example.

A lot of people have started using some of GTM's built-in triggers to try and manipulate the BR. For example, GTM has a 'Timer' trigger, and by using it you can avoid relying on GA's arbitrary 'time-on-page' calculations.

But one trigger I'm really fond of is the new 'Element Visibility' trigger. To illustrate my point, I picked random examples from the Learn Inbound website. Let's say you have strategically distributed 'Calls-to-Action', like this email sign-up widget, throughout your longer pieces of content.

You may be interested in who is getting to that position in your content, or in preventing people who got that far through your guides from being counted as bounces.

If you strategically position these kinds of elements at different positions throughout your various page types, then the ‘element visibility’ trigger can be a powerful way to take advantage of this.

So, we'll set up a trigger now. As you can see, it lets us target an element based on either an ID or a CSS selector. We have control over when this trigger will fire: we can set it to fire when the element is on-screen for a certain duration as the user scrolls through your content, or when a certain percentage of the element is visible in the viewport. You can even control how many times it fires if the element appears multiple times per page.

So, in this example, we use this trigger and others to fire an 'event' when someone starts scrolling through our content (obviously, that would be a 'Non-Interaction' hit), another when they view the 'call-to-action', and another when they reach the footer.
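Outside GTM, the same idea can be sketched with the standard IntersectionObserver API ('#email-signup' is a placeholder selector):

```typescript
// Push a Data Layer event once when the CTA widget is at least half visible.
declare const dataLayer: Record<string, unknown>[]; // provided by the GTM snippet

const cta = document.querySelector('#email-signup'); // placeholder selector
if (cta) {
  const io = new IntersectionObserver(
    (entries, observer) => {
      if (entries.some((e) => e.isIntersecting)) {
        dataLayer.push({ event: 'ctaVisible' }); // listened for in GTM
        observer.disconnect(); // fire once per page
      }
    },
    { threshold: 0.5 } // at least 50% of the element in the viewport
  );
  io.observe(cta);
}
```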

So, by drilling down to a particular page and then viewing this kind of 'event' data, we can get a very powerful sense of who is actually reading our content versus just bouncing immediately.

It can also be segmented by audience type and page to give us insight. This way, we can actually steer our internal linking or content strategy based on what we learn about which pages people are engaging with. It can be specific to your other page types. So, needless to say, this goes much further than tweaking the Bounce Rate.

  • TAILORING YOUR DATA COLLECTION METHOD AROUND THE PAGE TYPES

Your data collection method needs to be tailored not just to your business but to the different page types and the different types of content on your site.

As an example, we are going to look at 'Interactive Content'. This is an interactive piece of content marketing which lets users calculate the heating costs for their home. You can select your 'Room Type', 'Size' and 'Glazing'; then it will give you an approximate cost for heating.

Now, in a classic example of ineffective communication between marketing and developing teams, this was pushed out of the door with very little consideration given to its tracking requirements.

That is a shame, because GTM is really good at letting us track the highly relevant interactions taking place on a piece of content like this: interactions which are very relevant to the kind of audience we are trying to appeal to with this content.

One of the best ways to do that is with the 'Custom Event' trigger type. In practice, you ask your developers to implement a piece of JavaScript code in your application. This will push an 'Event' to the 'Data Layer'. All it does is provide us with something we can listen for at the other end in GTM.

In this instance, we have attached a dataLayer.push to the calculator and named the 'Event' 'CalculatorGo'. To listen for this as a trigger in GTM, all we do is set up a 'Custom Event' trigger and name the 'Event' that will appear in the 'Data Layer' 'CalculatorGo'. We can use this to fire a Google Analytics Event tag, so we know how many people are using the interactive.
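The developer side of this is tiny. A minimal sketch of the push described above, wired to the calculator's 'Go' button ('#calculator-go' is a placeholder selector):

```typescript
// Push a custom event to the Data Layer; GTM's Custom Event trigger listens
// for the name 'CalculatorGo'.
declare const dataLayer: Record<string, unknown>[]; // provided by the GTM snippet

function onCalculatorGo(): void {
  dataLayer.push({ event: 'CalculatorGo' });
}

document.querySelector('#calculator-go')?.addEventListener('click', onCalculatorGo);
```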

  • USING CUSTOM VARIABLES TO GET MORE GRANULAR

We want to know how people are using this content. The purpose of it is to appeal to a wide audience and drive more revenue. Ultimately, we want to know how people are engaging with this content we built.

So, let's say, for example, we want to know which options users are selecting when they use our calculator. We can supplement our 'Data Layer' event with two data variables: 'Room Type' and 'Glazing Type'. These simply populate the 'Data Layer' with variables reflecting the user's choices at the moment they hit 'Go'.

Then, we set these as data layer variables in GTM. This means they are now available for us to use in our tags; in our Google Analytics 'Event' tag, for example.
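Extending the sketch above, the same push can carry the user's choices; 'roomType' and 'glazingType' are the assumed Data Layer Variable names referenced in the GTM tag:

```typescript
// Push the event together with the user's selections so GTM can map them
// to Data Layer Variables.
declare const dataLayer: Record<string, unknown>[]; // provided by the GTM snippet

function onCalculatorGo(roomType: string, glazingType: string): void {
  dataLayer.push({
    event: 'CalculatorGo',
    roomType,    // e.g. 'living room'
    glazingType, // e.g. 'double'
  });
}
```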

So, here we have referenced those variables as the 'Event' action and label respectively. This will give us relevant data about:

  • our audience
  • what they are using our interactive content for
  • and what they are looking for.

We can use this to iterate not just on the layout and functionality of the page, but also as the basis for guiding our content strategy or improving our lead nurturing process.

You can extend this approach a long way by using 'Goals'. By segmenting to a particular campaign, for example, we can see how people are engaging with this content and analyse that in isolation.

Thanks to native ‘variable types’, we can get quite creative.

So, to keep the same example, we could set up an 'Event' which fires when someone engages with our piece of content, and set its value based on what we know about them as users.

We could come up with systems using 'Lookup Tables', or even 'Custom JavaScript' variables running in GTM, which assign an arbitrary value to users based on how valuable they are to us as 'leads'. Then we set this as the 'Goal' value in GA.
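As a minimal sketch of that lead-scoring idea (keys and values here are entirely hypothetical), a simple lookup translates a user's selection into a monetary 'Goal' value:

```typescript
// Map a calculator selection to an arbitrary lead value and push it for use
// as the GA Goal value.
declare const dataLayer: Record<string, unknown>[]; // provided by the GTM snippet

const leadValueByRoomType: Record<string, number> = {
  'living room': 10,
  conservatory: 25, // bigger heating job, more valuable lead
  bedroom: 5,
};

function scoreLead(roomType: string): void {
  dataLayer.push({
    event: 'leadScored', // hypothetical event name
    goalValue: leadValueByRoomType[roomType] ?? 1,
  });
}
```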

This will give us a sense of how valuable that traffic is in terms of potential customers. So, we can see the absolute number of conversions, but also an approximation of their fair value to us.

And of course, when segmented by a particular campaign, we can start to gauge the value of our content marketing efforts.

4. Smarter segmentation

The last area I want to explore is using GTM to better group our content.

  • CONTENT GROUPING

For example, if we wish to segment our content into different groups based on the author, we can do that with 'Content Grouping'. It's very easy to implement.

We create the 'Content Grouping' at the 'View' level. Then, we enable a tracking-code-based implementation and give it an 'Index Number' of '1'. Afterwards, we can set the actual author using a 'Data Layer Variable'.

By using the 'Data Layer', you can work much more smartly. We get our development team to implement the 'Blog Author' as a 'Data Layer Variable'.

This is the same principle as we used earlier for our interactive content; we can then reference that variable in our 'Pageview' tag. Under 'More Settings', we reference the 'Data Layer Variable', so that every pageview hit will fetch the author from the 'Data Layer' and fire that as the value for that 'Content Grouping'.
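A minimal sketch of the developer side, with 'blogAuthor' as the assumed variable name and 'Jane Doe' as a placeholder, typically rendered by the CMS above the GTM snippet:

```typescript
// Expose the post's author to GTM before the container loads, so the value
// is available on the initial pageview.
declare const dataLayer: Record<string, unknown>[]; // initialised ahead of GTM

dataLayer.push({ blogAuthor: 'Jane Doe' }); // placeholder, rendered per post
```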

As a result of this, you can view the aggregate performance of pages by particular authors and get a sense of how they perform as a whole. That's very useful data when it comes to assessing how well your content strategy is performing.

  • CUSTOM DIMENSIONS

To segment users further, let's look at particular groups of our audience based on 'Behaviours'.

For example, we might decide to track users who comment on our blog, then view that 'Audience' group as a separate segment of traffic using 'Custom Dimensions'.

Whereas 'Content Grouping' allows us to organise our pages into logical groups, 'Custom Dimensions' let us record extra, non-standard data on top of GA's standard dimensions. They are very flexible in how they let us do this, too.

Remember that every hit which goes to GA has a scope. For example, a 'Pageview' hit has a scope limited to that page view, but 'Landing Page' has a scope which applies to the whole session.

Now, it's the 'User Level Scope' we are interested in, because it lets us apply the data from that hit to the user and all of their subsequent interactions on the website.

So, we set it up at the 'Property' level, which gives us 20 'Dimensions' per 'Property'. We'll give it an 'Index Number' of '1' and set the 'Scope' to 'User'. Then, back in GTM, we are going to fire this 'Custom Dimension' as part of an 'Event' hit that is launched when someone comments on our blog.

Then, under 'More Settings', we can set the 'Custom Dimension'. We will put in an 'Index Number' of '1' and a 'Dimension Value' of 'Commenter'.

In terms of trigger, we can once again use a 'Data Layer' event. To run through what happens behind the scenes: a user submits a comment. That action pushes an 'Event' to the 'Data Layer', which we are listening for in GTM. GTM fires a normal GA 'Event' tag. That hit includes a 'Custom Dimension' which defines the user as a commenter, and that will apply to all of their subsequent actions on the site as well.
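A minimal sketch of that push, with '#comment-form' as a placeholder selector:

```typescript
// Flag commenters: push an event the moment the comment form is submitted.
declare const dataLayer: Record<string, unknown>[]; // provided by the GTM snippet

document.querySelector('#comment-form')?.addEventListener('submit', () => {
  dataLayer.push({ event: 'commentSubmitted' }); // GTM fires the GA Event tag
});
```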

As a result, we can now view the behaviour of our engaged users as a segment in GA and see how they differ from our wider readership. We can also use it as the primary dimension in a report to analyse the results in our funnel.

5. Work with your developers

It is important to collaborate with your development team when it comes to data collection.

It is really vital that you understand how these technologies work so that you can communicate effectively with your development team.

Google Tag Manager is kind of unique in that it's an indispensable tool for both marketers and developers. It is about tracking what users do and how valuable they are to us as customers, but Google Tag Manager is also a complex JavaScript application: you need some familiarity with JavaScript in order to work properly with it.

The 'Data Layer', which underpins a lot of the techniques we've run through today, sits in international waters. If you look at the kind of data encoded into the 'Data Layer', it's semantic information about:

  • our audience and our customers,
  • and what they are doing.

This enforces a shared language between teams.

A well-defined and maintained 'Data Layer' means that the data about your content and the interactions that take place is accessible in a format independent of any platform or technology. You are not reliant on scraping your HTML; you can instead make the data points you are interested in directly available.

However, you need to get your development team to implement it. Indeed, it is a very powerful tool that can easily break your website. The 'Data Layer' should be regarded as a prerequisite for good measurement.

I will leave you with a gift for your developers: the 'JavaScript Error' trigger. All it does is fire an 'Event' tag when the browser encounters an uncaught JavaScript error. This is information normally only available in the JavaScript console on your developers' machines. It lets you fire an 'Event' to GA whenever a user's browser encounters an error.

Thanks to the built-in variables for the error message, error URL and error line, information the user would never surface themselves, we can then send real-world usability issues to GA. Don't forget to set 'Non-Interaction Hit' to 'True'. This will take no more than 5 minutes to implement, and it will get you real-world data about the following (see the sketch after this list):

  • what’s breaking on your website
  • where
  • and for who.
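For comparison, here is a minimal sketch of the same idea outside GTM, assuming the classic analytics.js ga() tracker: report every uncaught error as a non-interaction event.

```typescript
// Report uncaught JavaScript errors to GA as non-interaction events.
declare function ga(...args: unknown[]): void; // provided by analytics.js

window.addEventListener('error', (e: ErrorEvent) => {
  ga('send', 'event', 'JavaScript Error', e.message, `${e.filename}:${e.lineno}`, {
    nonInteraction: true, // error hits shouldn't affect Bounce Rate
  });
});
```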

You can cross-reference it with the other built-in dimensions as well, like operating system and browser. You can give that information to your developers and segment it by page. And you will make your website more accessible and functional. The value of the insight you can get from your analytics software is tied to the investment you make in data collection.

By demonstrating success and by unlocking the kind of actionable insights that you need, you can justify whatever it is that you are looking for:

  • bigger budgets
  • more innovative projects
  • more development time for your team
  • and ultimately whatever you need to do your job better.

For those who would like to download the Powerpoint slides containing more visuals and his contact details, click on the link below:

Google Tag Manager Insights Powerpoint presentation