Building a learning agenda into your marketing program

We all want our marketing programs to improve continuously. Sure, part of that quest is about beating goals and earning bonuses. But most good marketers are genuinely inquisitive about what works and what doesn’t work.

All too often, though, we focus effort on tests that don’t increase our long-term knowledge, or we hold onto long-held, unchallenged beliefs about what works and what doesn’t. Marketers today need to organize their learning agenda and develop a testing roadmap that aligns with it.

What to learn

The first order of business is to define what you care about learning. There are thousands of insights you could glean and thousands of tests you could design to chase them. It is important to prioritize what you want to test, which means weighing the cost and effort of learning something new against the safety of sticking with a familiar, business-as-usual program.

Determining what to learn should be a function of your role in the customer life cycle. Be sure to develop a plan that checks, validates and improves the underlying assumptions that drive the way you market to your customers.

While the concept applies across channels, I’ll use email as an example. First, create a base knowledge map that serves as the foundation of your program. Record it, keep it in a centralized place, and reference it regularly to identify gaps in your learning, as well as knowledge you want to challenge again.

Foundational items to analyze, solidify and monitor include:

  • The speed of the confirmation/welcome email.
  • The ideal time of day to send an email. An evolution of this is sending by the recipient’s time zone rather than the marketer’s; a further evolution is per-recipient send-time optimization. Depending on your product or service, the answer may vary by day of the week or relative to an event.
  • The definition of an “inactive.” This means striking the right balance between risk and reward; it cannot be based on engagement alone. You need to consider source, time on file, last engagement date, domain and other factors (a sketch follows this list).
  • The best interval to send a cart or browse abandon email. Is it immediate? Is there a delay? How long is the delay?
  • The tone of the subject line, depending on distance to event and engagement segment.
  • The best frequency and sequence of messaging, based on the type of event. If you have truly time-sensitive messages or promotions, such as expiring sales or content, what works best? For example, do you promote the same day only? Or do you send a promotion message two days prior and then follow up with an “act now” message while the event is going on?
  • A decision between multimessage content vs. singular focus. For instance, does your email perform better if you have multiple offers? Or does a single focus and benefit better capture the recipient’s attention?
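
To make the “inactive” definition concrete, here is a minimal sketch in Python. The field names, thresholds and adjustments are hypothetical illustrations, not recommendations; they would need to be tuned against your own program’s data.

```python
from datetime import date

def is_inactive(subscriber: dict, today: date) -> bool:
    """A hypothetical multi-factor inactivity rule: engagement recency
    alone is not enough; source, tenure and domain also matter."""
    days_since_engaged = (today - subscriber["last_engagement_date"]).days
    days_on_file = (today - subscriber["signup_date"]).days

    if days_on_file < 30:
        return False  # too new to the file to judge fairly

    threshold = 180  # assumed baseline: six months without engagement
    if subscriber["source"] == "purchase":
        threshold += 90  # past buyers earn a longer leash
    if subscriber["domain"] in {"gmail.com", "yahoo.com"}:
        threshold -= 30  # engagement-filtered inboxes decay faster

    return days_since_engaged > threshold

# Example: a buyer who last engaged ~7 months ago still counts as
# "active" here, because the purchase source extends the threshold.
print(is_inactive(
    {"last_engagement_date": date(2019, 1, 5),
     "signup_date": date(2018, 3, 1),
     "source": "purchase", "domain": "gmail.com"},
    today=date(2019, 8, 1),
))
```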

The point is that there are several “plays” to define. As situations come up for new campaigns, you can leverage what you have learned and execute on what you know works — or you can guess again. But to get there you need to…

Catalog your learnings in a central place

Running a test is great, but run it at least once more to confirm the result. Once you are confident it is a genuine insight, document it.

Documentation means more than a test-specific write-up. You need a matrix of your program, one level up from the day-to-day calendar, into which these learnings get inserted. An easy construct for ad hoc emails would be:

|                                       | Email type 1 | Email type 2 | Email type 3 | Email type 4 | Email type 5 |
| ------------------------------------- | ------------ | ------------ | ------------ | ------------ | ------------ |
| Description                           |              |              |              |              |              |
| Target audience                       |              |              |              |              |              |
| Time of day to send                   |              |              |              |              |              |
| Relative time from event to send      |              |              |              |              |              |
| Subject line factors that matter      |              |              |              |              |              |
| Clarity of message                    |              |              |              |              |              |
| Multiproduct or single-product focus? |              |              |              |              |              |
| Animation enhances results?           |              |              |              |              |              |
| Responsive design enhances results?   |              |              |              |              |              |
| Live text buttons or image buttons?   |              |              |              |              |              |
| Preheader or no?                      |              |              |              |              |              |
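
To keep a matrix like this queryable rather than buried in a document, it can help to store it as structured data. Below is a minimal sketch in Python; the email types, attribute names and recorded “plays” are hypothetical placeholders, not findings from any actual program.

```python
# Hypothetical learnings matrix: keys are email types, inner keys are
# program attributes, values are the currently validated "play".
# None marks a gap in the learning agenda.
learnings = {
    "promotional": {
        "time_of_day_to_send": "10am recipient local time",
        "subject_line_factors": "urgency plus a single benefit",
        "multiproduct_or_single": "single product focus",
        "preheader": None,  # not yet tested -- a gap to schedule
    },
    "cart_abandon": {
        "time_of_day_to_send": "1 hour after abandonment",
        "subject_line_factors": "reference the abandoned item",
        "multiproduct_or_single": None,
        "preheader": "yes, restates the offer",
    },
}

# Surface the gaps so they feed directly into the testing roadmap.
for email_type, attrs in learnings.items():
    gaps = [attr for attr, value in attrs.items() if value is None]
    if gaps:
        print(f"{email_type}: untested -> {', '.join(gaps)}")
```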

This is not just a post-test consideration; design each test so that the insights it yields can be generalized across the other types of marketing you do.

For example, when testing browse-abandon tactics, make sure the data and test design let you examine the same impact by site segment. And look at the impact not just on opens and clicks, but also on site key performance indicators (KPIs). Define the audiences, journey stages and KPIs in the same terms across channels; a sketch of such shared definitions follows.
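
As one way to enforce “the same terms,” you could keep a single shared vocabulary that both your email tests and your site analytics reference. A minimal sketch, with hypothetical segment and KPI names:

```python
# Hypothetical shared vocabulary: every test, whether email or on-site,
# references these definitions so results stay comparable.
SEGMENTS = {
    "browse_abandoner": "viewed a product page, no cart add, last 7 days",
    "repeat_buyer": "2+ purchases in the trailing 12 months",
}
JOURNEY_STAGES = ["awareness", "consideration", "purchase", "retention"]
KPIS = {
    "email": ["open_rate", "click_rate"],
    "site": ["conversion_rate", "revenue_per_visit"],
}
```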

Revisit your beliefs from time to time

Time inevitably changes results: as the market shifts, customer expectations evolve and your competitors adapt, the continuing validity of what you’ve learned erodes.

Just because it got you here doesn’t mean it will get you there. Challenge your assumptions on a regular basis: at least twice per year for your core elements, and more or less often for other items. Keep in mind that you don’t always have control over the specifics of the content.

You can, however, challenge when it is sent, how the subject line reads and when it launches relative to key events. The situations are different in every company, so, of course, your approach must be adapted accordingly.

Don’t waste time on things that don’t add to your learning agenda

In every email, you could test several components: day of week, time of day, subject line, preheader, call to action, colors, layout, offer and so on. Each of these is technically a test. When devising your plan, be specific about the outcome you are trying to influence and how the test informs that outcome.

Don’t test just because someone wants to. Ask what you will do with the result. If the answer is to simply “remember” it for the future, don’t waste your time. But if you will catalog the test, interrogate the results and create an action plan from them, then it is worth running. And be sure to add that action plan to the learning agenda.



via Marketing Land
