Attribution Intro With Visual IQ CMO Bill Muller

Cross-channel attribution is a hot topic these days.  We’ve been asked by many clients recently what they need to know about attribution and how it could be used to help improve their marketing results.  To get answers, we went to industry leader (and current client) Visual IQ and sat down with their CMO, Bill Muller.  Bill’s responses to the key questions related to attribution can be found below.  This is a must-read for anyone new to attribution or for anyone considering investing in a cross-channel attribution platform.


Q: For folks new to attribution, can you explain how cross-channel attribution works? What are the main benefits of using a cross-channel attribution platform?

A: Cross-channel attribution, much like any discipline, can vary widely depending on the degree of sophistication and complexity of the platform that you use. It’s like asking, “How much does a car cost?” Well, it depends on whether it’s a Prius or a Ferrari.

The way we perform cross-channel attribution is a methodology called “algorithmic” or model-based attribution, which differs dramatically from rules-based methodologies that tend to be flawed and subjective. Algorithmic attribution works as a platform that ingests marketing performance data from both digital and non-digital sources. In the case of digital or “taggable” sources, we often use the ad server tracking that’s already being used by a client. We also use our own pixel to stitch together the various touchpoints that are involved in a user’s journey to a conversion.

That data is then fed into an attribution engine, which is a series of algorithms and machine-learning technologies that chew through the data and fractionally attribute credit for a conversion across the various touchpoints experienced by a user. Rather than simply looking at the order in which those touchpoints took place, the engine measures all of the individual components that make up those touchpoints; for example, channel, ad size, creative, keyword, or placement.

By doing this across an entire universe of users who are exposed to your marketing efforts, the software can calculate success metrics across all channels to show exactly how much credit each touchpoint and each channel deserves. Almost always, when that calculation gets performed, you get a very different picture of which channels, campaigns, and granular-level tactics are contributing to your overall success.
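Visual IQ’s actual engine is proprietary, but the idea of fractionally attributing credit based on touchpoint characteristics can be sketched with a toy model. All the feature weights below are invented assumptions for illustration; a real algorithmic engine learns them from conversion data rather than hard-coding them.

```python
# Toy illustration of fractional attribution: each touchpoint in a
# converting path gets a share of credit proportional to a score built
# from its features (channel, creative, etc.). The weights here are
# made-up assumptions, not values from any real attribution engine.

FEATURE_WEIGHTS = {
    ("channel", "display"): 0.8,
    ("channel", "paid_search"): 1.2,
    ("channel", "email"): 1.0,
    ("creative", "promo"): 1.3,
    ("creative", "generic"): 0.9,
}

def touchpoint_score(touchpoint):
    """Multiply the weights of every feature the touchpoint carries."""
    score = 1.0
    for feature in touchpoint.items():
        score *= FEATURE_WEIGHTS.get(feature, 1.0)
    return score

def fractional_credit(path):
    """Split one conversion's credit across the path's touchpoints."""
    scores = [touchpoint_score(tp) for tp in path]
    total = sum(scores)
    return [round(s / total, 3) for s in scores]

path = [
    {"channel": "display", "creative": "generic"},
    {"channel": "email", "creative": "promo"},
    {"channel": "paid_search", "creative": "promo"},
]
print(fractional_credit(path))  # fractions summing to ~1.0
```

Unlike a last-click rule, every touchpoint in the path receives some share of credit, and the shares depend on the touchpoint’s characteristics rather than its position.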

The main benefits are better decision-making and better allocation of budget. Ultimately what people do with the output of the attribution is reallocate budget to any channels, campaigns, and tactics that they previously undervalued. They then fund those by taking budget away from the channels that they’ve historically overvalued, the losers, and provide it to the winners.

Q: Does the platform tend to work better for certain industries?

A: To determine fit, we tend to look at “business models” more than “industries.” Until recently, attribution had been a direct-response endeavor, meaning that companies that use digital channels, alone or combined with offline, to produce hard-and-fast conversions, such as an e-commerce transaction, a lead, or a quote, benefit most from the software. There are many industries that align with this type of business model.

The business models that historically have been left out in the cold are companies that do not have those types of transactions in place. For them, marketing has primarily been about generating brand engagement, because they do not have a direct line to their conversion event.

Think about, for example, pharmaceutical companies. You are not buying a drug on their website or buying drugs as a result of seeing their TV advertisement, but there are marketing activities that are causing you to experience some brand engagement. Ultimately, you may be prescribed the drug and purchase it, but there is no linkage between their marketing and your purchase. There are no conclusions to draw.

This business model, as a result, has been difficult for attribution to conquer in the past because there hasn’t been a tie between media stimulation and the eventual consumption of an end product. Until recently.

Q: What kinds of recommendations will an attribution platform make?  Are they typically budget related or otherwise?  Are they typically real-time, on-going, or one-time recommendations?

A: The recommendations are typically budget-related, as we are talking about spending money on individual tactics: moving budget off of less successful ones and onto more successful ones. They are typically not real-time, but daily, because we can only make recommendations at the pace at which our attribution engine is fed performance data.

The recommendations do, however, absolutely need to be ongoing. Much like a search campaign, it’s not ‘set it and forget it.’ The environment in which you operate is not a static one. It constantly shifts based on the marketplace, on what competitors are doing, on econometric factors, on global events, etc. Your strategy needs to be adjusted continually to keep up with the dynamic nature of the marketplace. This is ongoing, not a one-time recommendation.

Q: How drastic will the recommended changes be?

A: The recommendations can be as granular as the characteristics of the data that is provided. When a lot of people think of attribution, they think only about the chronology of the touchpoints that have taken place in relation to the number of conversions. They think, ‘This happened first, this happened second, this happened third, and I really can’t control those things.’

What they often don’t realize is that these touchpoints are made up of various characteristics. If it was a display ad, there is size, placement, offer, and publisher to consider. If it’s the search channel, one can consider whether it was paid or organic, plus keywords, impressions, or clicks. So the recommendations that come out of our application are often things like, “Stop spending $500 a month on this ad, of this size, with this creative, on this publisher, on these days of the week. Now take that money and put it into this keyword, on this search engine, with this creative and this offer, on these days of the week.” We include every characteristic of every touchpoint in the model to find out which has the most impact on a client’s overall success.

The recommendations can also be as dramatic as, “Stop spending on certain placements altogether,” or the opposite. We had a client recently that was going to eliminate spending on one display publisher altogether. When they looked at their attribution results, they recognized that instead of it being their worst publisher, it was the publisher that most contributed to their success. They then tripled the amount of spend on the publisher that they were originally going to eliminate from their marketing mix.

Q: Are there channels (Paid Search, SEO, Offline, etc.) that repeatedly prove to drive more or less value than previously believed?

A: Yes. Many clients are highly invested in paid search, but we’ve found that paid search is one of the channels that tends to be universally overvalued in a last-click methodology.

In other words, most of the world is using a last-click methodology to assign conversion credit. If an individual has had four different touchpoints prior to a conversion, odds are you don’t have a methodology in place that can link those four touchpoints together. You don’t always know that the user touched four times; all you know is that a person converted as a result of a search and a click on a paid search term.

Attribution allows you to tie together the otherwise unknown factors. If somebody was exposed to impressions of a display ad five times prior to their click on a paid search ad, and it ultimately led to a conversion, we can see that.

Q: How does the attribution model handle view-through conversions?

A: Our methodology ingests not only touchpoints that resulted in clicks, but also touchpoints where there was only an impression. You do not have to click to be cookied, for example. When a touchpoint is analyzed, we look at all the constituent parts of it: its size, its publisher, its placement.

Using that data, our solution then calculates how much value a “mere” impression had in the grand scheme of things: What was the difference in performance between those people that were not exposed to the ad and eventually converted, compared to those that were exposed to the ad?
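That exposed-versus-unexposed comparison can be sketched as a simple lift calculation. All of the counts below are invented for illustration; a production platform would also control for differences between the two groups before reading the lift as causal.

```python
# Estimate view-through lift: compare the conversion rate of users who
# saw the display impression against a comparable unexposed group.
# All counts below are invented for illustration.

def conversion_rate(conversions, users):
    return conversions / users

exposed_rate = conversion_rate(conversions=450, users=30_000)    # saw the ad
unexposed_rate = conversion_rate(conversions=200, users=20_000)  # did not

lift = (exposed_rate - unexposed_rate) / unexposed_rate
print(f"exposed: {exposed_rate:.2%}, unexposed: {unexposed_rate:.2%}, lift: {lift:.0%}")
```

With these hypothetical counts, the exposed group converts at 1.50% versus 1.00% for the unexposed group, a 50% relative lift attributable to the “mere” impression.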

Q: Where do you see attribution technology evolving over the next five years?  What will we be able to measure and/or optimize better by 2020?

A: As I mentioned previously, until now, attribution has very much been a direct-response technology. Recently, however, Visual IQ released a methodology that allows us to extend our solutions well beyond direct-response business models. Instead of ingesting direct-response conversions, it uses brand engagement touches, such as first visits to a website, video views, and media asset downloads, to come up with a common brand engagement score. The attribution product then optimizes, or makes recommendations on how to maximize, that assigned brand engagement score.

Not only does this allow us to focus on companies that are pure brand engagement, but it also allows us to help the side of the house that has not been able to benefit from attribution in the past.  And frankly, at some companies brand spending far outweighs direct response spending.

Q: What makes Visual IQ different from the other cross-channel attribution vendors in the space?

A: Part of it is our legacy, in that we were one of the first attribution vendors in the space, and that we were the first attribution vendor to offer algorithmic attribution.

From the very beginning, we tackled granularity. We let the machine-learning and the mathematical science do the calculations so that the data we receive tells the story. Because we’ve done this since the beginning, we’ve been able to improve the level of sophistication of our product.

Visual IQ’s products are smarter products. We’ve continued to innovate in areas like brand attribution and offline media attribution. We have a television attribution product. We are consistently offering features, benefits, and value to our clients before our competitors.

We’ve also been working with enterprise-sized clients since the inception of our organization. The largest, most successful brands in the marketplace and some of the most demanding marketers in the world are using our products. We’ve developed our products over the past decade based on their needs and demands.

If we can bring in 17 different channels from one of the world’s largest credit card companies, across multiple countries and business units, and provide them with actionable business recommendations that they can act on to generate millions of dollars’ worth of media efficiency, then we certainly have the ability to handle 99 percent of the potential businesses out there. Without our legacy of innovation, our longevity, and our continued product improvement, we wouldn’t have that capability today.

Q: For those who are interested in learning more about your platform, what’s the best way for them to get in touch with you?

A: If you have any questions surrounding cross-channel attribution, or want to learn whether Visual IQ attribution software is right for your business, please email me at Bill.muller@visualiq.com.

For folks who are trying to better understand us in the attribution space, we have been at the top of the last three Wave Reports done on our marketplace. By talking to Visual IQ, you can rest assured that you are truly talking to the industry leader.

Why First Click Attribution is Critical For E-Commerce Companies

Picture yourself searching online for that special, “stand out” birthday gift for your dad, who is not exactly easy to impress.  You may start off searching for “best gifts for dad.” This search leads you to click on a paid search ad advertising the perfect watch for Dad.  Right then and there, you click “buy now,” plug in your credit card, and boom…you’re done.  While for some impulsive folks this may sound normal, most people don’t buy the first thing they click on.  The more common scenario might be that you do click on that PPC ad, but decide to take some time to think it over. After all, that watch isn’t exactly in your budget. The next day you see a remarketing banner for the watch, which reminds you that Dad’s birthday is fast approaching. You click on that ad, but still want to explore other options.  Two weeks later, you’re in a time crunch.  Dad’s birthday is next weekend.  You quickly type the name of the watch into Google and organically navigate to the site to purchase.

Right now, most digital marketers live in a “last click” world when it comes to optimizations and reporting. In this world, the last step or last interaction a user has before the conversion, in this case the organic search, gets all the credit. For many organizations this is a deeply flawed reporting methodology. There are five core traffic sources or “channels” that drive traffic to your website. The perception is that these channels operate by themselves and single-handedly generate conversions and sales.  While this is sometimes true, in the world of e-commerce, it is not common.

A user often interacts with multiple channels, like we saw in the example above, before becoming a customer.  So the question arises: which channel should get the credit for producing a conversion, or conversely, the blame for failing to produce one?  In the digital marketing world, it is crucial that we consider first click attribution as a primary attribution model when reporting and making optimizations.  You may say, OK, isn’t this just a different format of reporting or presenting data?  For e-commerce businesses, the answer is no.  When using first click attribution, we can see which specific keywords are driving traffic and, ultimately, producing revenue. This can lead us to revisit where we are allocating budget and, more importantly, it can reveal “low hanging fruit” optimization opportunities that are often masked when using last click attribution.

To see this in more detail, let’s take a look at an example.

[Chart: first click vs. last click revenue and ROI for a single non-branded keyword]

The example represents the revenue disparity between first click and last click performance for just one non-branded keyword.  Last click ROI and revenue volume wrongfully indicate that the term is inefficient (based on an ROI goal of 2) and would require significant bid reductions to cut overall spend and increase profitability.  For any digital marketing strategist, this would be an obvious optimization. If we look at first click attribution, however, it tells a significantly different story. As you can see, the keyword drives 163% higher revenue with a significantly better ROI. We can now use this data to take action. By increasing bids, we can drive incremental revenue and growth on a term we previously were not capitalizing on.
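The disparity can be reproduced with a quick calculation. The spend and revenue figures below are hypothetical stand-ins, not the chart’s actual numbers; first click revenue is simply set 163% higher to mirror the gap described above.

```python
# Compare a keyword's ROI under last-click and first-click attribution.
# Figures are hypothetical; first-click revenue is set 163% higher to
# mirror the disparity described in the text.

spend = 1_000.00
last_click_revenue = 1_500.00
first_click_revenue = last_click_revenue * 2.63  # 163% higher

def roi(revenue, cost):
    return revenue / cost

ROI_GOAL = 2.0
for model, revenue in [("last click", last_click_revenue),
                       ("first click", first_click_revenue)]:
    r = roi(revenue, spend)
    verdict = "meets goal" if r >= ROI_GOAL else "below goal"
    print(f"{model}: ROI {r:.2f} ({verdict})")
```

Under last click the keyword looks like a bid-down candidate (ROI 1.50 against a goal of 2), while first click shows it comfortably beating the goal: the same spend, two opposite optimization decisions.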

Spending time looking at different attribution models and finding out which model fits your company best is crucial for any digital marketer.  Identifying first click attribution as an option can be the first step in unlocking an abundance of revenue-driving opportunities.  For a relatively unknown e-commerce company, first click data can be key in discovering what is driving overall brand awareness and educating users about the company itself or the products it sells.  For example, a brand new coffee bean company may use first click attribution data to see what terms are generating interest in its product and driving users to its site. However, a massive company like Macy’s has already developed a brand, and is most likely only interested in what marketing channel is driving the final sale, not how a user originally found its site.

You can easily start looking at various attribution models in Google Analytics.  The Model Comparison Tool allows you to compare attribution models to see what keywords or campaigns are significantly contributing to revenue.  This easy-to-access insight can be important in helping ensure you’re making correct marketing decisions and maximizing the impact of each marketing channel.

4 Ways the Removal of Right-Hand Rail Ads Impacts PPC

In late February of this year, Google confirmed that it will no longer serve PPC ads in the right-hand rail of the search results. While this came as a shock to many, it is something Google has been testing since 2010 and only recently decided to roll out permanently. The online giant has a long-standing history of discreetly testing new updates to search engine results, and this one was no different: an anonymous Google employee leaked the permanent change to the media on February 19.

So what exactly does this change mean for paid search advertisers? What shift in results can digital marketers and advertisers expect to see over the next several months as this change in the search engine landscape rolls out? Below are 4 potential shifts to look out for with this recent update in the Google search results.

1) CPCs Might Increase

Over the next several months, as more marketers and clients alike begin to notice the change in Google search results, competition for the top 3-4 PPC search results is going to gain momentum. It is common knowledge in search that users tend not to spend much time scrolling to look at results below the fold, so marketers are going to increase bids to battle it out for the top paid search slots. There are a couple of different scenarios to consider here. CPCs have the potential to increase as marketers compete to own those top spots. Alternatively, it is possible that Google may change the minimum Ad Rank requirements so that ads show more often and rotate in more evenly. Some of our clients have seen around a 5% increase in CPCs since the update rolled out over the last couple of months. We will be interested to observe how CPCs shift over the next few months, after advertisers have had more time to settle in with this particular update.

2) Impression Share Could Be Harder to Maintain and QS May Carry More Weight

As more advertisers notice the change in SERP results, they will begin competing for the top 4 paid search spots which may make it more difficult for advertisers to maintain stronger impression share on their core terms. How will Google determine which ads to rotate in to those top 4 spots? How will that impact impression share? Will it be tougher to maintain strong impression share for your top terms or will Google loosen up the criteria for Ad Rank and rotate competitors in more evenly? One certainty here is that it will be critical to re-evaluate Quality Score on your most important terms to set yourself up for success with all the unknowns of Google’s next steps.

3) More Advertisers Will Likely Be Shifting into PPC

With this new change rolling out, the amount of paid ad space available on the SERP has decreased from up to 11 slots down to 7. There is, however, one additional spot available at the top of the page, for a total of 4 paid search slots, as opposed to 3 in the past. What does this mean for SEO results? They will be pushed further down the page, bringing a higher number of SEO results below the fold. Because of this shift in SEO positioning (and drop in traffic), more advertisers will likely look into setting up their own paid search campaigns to compete for the top-of-page spots. This may add another layer of competition to the paid search space, which could impact both CPCs and impression share.

4) E-commerce Advertisers Will Likely Invest More Heavily in PLAs, and Non-e-commerce Advertisers Will Be Awaiting Their Solution

While right-hand rail paid search ads are disappearing completely, Google has confirmed that this change will not impact the Knowledge Panel or the Product Listing Ads on the side rail of the SERP. The strong positioning of PLAs is optimal for e-commerce companies and retailers, who are likely already investing heavily in PLA advertising. This is great news for e-commerce businesses, but there is no alternative solution for B2B or B2C companies that do not have specific products for sale on their site.

There is currently a lot of speculation circling around the paid search world about how this major shift in search engine results is going to impact marketers and advertisers. Ultimately, the impact will depend upon how advertisers react to this change in landscape. Will they get more aggressive with bids right away, driving up CPCs? Will they take a step back to revise their keyword set and max out impression share on their most efficient terms? Whichever direction the reaction trends, marketers should take a step back to re-evaluate strategy and results to make sure no major dips in performance have occurred.

Some different types of analysis that may be helpful include segmenting traffic and leads by ‘top of page’ results versus ‘other’ both before and after the update to see if there is cause for worry.  Advertisers will also want to look into improving Quality Score since it may end up carrying even more weight. To improve QS, advertisers can try segmenting keywords out into more granular ad groups and looking into ad copy and landing page content that is more relevant to the keywords within those given ad groups.  To improve expected CTR, try testing queries on high volume terms to see how competitors are positioning themselves and adjust your copy to be more in line with the competition. Is there room to broaden your customer base? Are there unnecessary qualifiers currently in place within your ad copy? Improving overall QS should help minimize the impact of potential CPC increases, and hopefully lead to better overall positioning with negligible impact on CPCs.

Blogging Trends for Business Owners in 2016

Blogging has evolved greatly since its emergence in the 90s. What started simply as a lifecasting outlet for individual content creators has progressively matured into an integral part of every business’ digital strategy. Blogging is no longer limited to leisure and lifestyle. It is no longer exclusive to foodie writers, exercise gurus, and relationship connoisseurs. Today, blogging means business.

Over the years, blogging has blossomed into a full-time profession, a fruitful digital marketing tactic, and an essential measure for organizations looking to establish their authority online. For businesses especially, blogging has become a standard in the online landscape.

If you want to make it, you have to create it. Today, nearly every successful online business is employing content writing in some way. According to the 2016 B2B Content Marketing Report, 81 percent of businesses now consider blogging to be a core component of their content marketing strategy.

From large companies to small businesses, CEOs to subject matter experts, mom-and-pop shops to your very own mother-in-law, it seems that virtually everyone is now taking advantage of what blogging has to offer. How will you keep up, stand out, and rise above the competition?

Let’s take a look at five blogging trends that will change how you approach business blogging in 2016.

1.) Size Now Matters

More people are getting their content online now than ever before, making it more difficult for businesses to truly thrive there. Today, the web is flooded with futile blog posts, articles, and ‘click-bait’ designed to reel in readers. The question of 2016 has become: how can we, as businesses, cut through the noise and get our content in front of the right eyes? How can we deliver quality content to new prospects?

The answer lies right in the question. To get our blogs in front of new eyes, we must take the time to create and deliver great content. According to marketing expert Ann Handley, this year will be the year of highly relevant, high-quality, long-form material. She notes in an Orbit Media report:

“To thrive in an over-saturated content world, you’ll need to constantly write or produce (and syndicate) content with depth. Longer posts, more substantive content that people find useful and inspired.”

It appears as though long-form will become the new blogging norm. A couple hundred words will no longer appear as useful as an extensive, data-driven, thousand-word post. While the average blog today is about 900 words, 1 in 10 writers are consistently writing 1,500+ word posts. This year, size will matter.

2.) Quality Over Quantity

As long-form blogs take the stage, exceedingly succinct blogs will be pushed to the back burner. In the case of quantity vs. quality for blog writing in 2016, it seems that less is actually more.

Many businesses make the mistake of creating content for content’s sake. The consistent focus on churning out as many blogs as possible, however, can actually diminish the value of the writing over time. The fact is, readers are looking for quality content supported by stats (the more primary research you have, the better). They are looking for answers, for insights, for fresh ideas. They are looking for solutions. They are looking to be engaged.

3.) Engagement Will Become Your New Best Friend

In the past, content creation was largely about generating traffic. While organic traffic, page views, and unique visitors are all still important metrics, business bloggers will analyze success a bit differently in 2016. This year, the biggest measurement to pay attention to will be your audience’s engagement rate.

Are you able to attract readers with your content? More importantly, are you able to keep them on and engaged with your website? Are you giving your audience something to consider, something to act on, or something to return to in the future? Are readers likely to share your content through social, and get others talking about your brand, as well?

Can you say ‘yes’?

4.) Visual is Now in Focus

Blogging is not solely reliant on writing: how you frame your writing, and how you deliver your information, is vital to your blog posts’ success. In order to engage with your audience on a deeper level today, you must balance your blog text with a good structure and compelling graphics.

Recent research has shown that adding visual elements and graphics to your blog posts can help you generate up to 94 percent more views. It is no wonder why. As long-form blogs become more frequent, so will the need for graphics to break up the post. Text-dense articles can be heavy on the eyes. It is important, therefore, for business writers to balance their information with a multi-image format. If you are discussing a new product, then include multiple real-life, relatable photos to better engage your readers.

Images are not everything, though. This year, you may consider embracing new visually appealing elements, such as video, audio, quotes, and embedded social media, in your blog content. You may consider featuring free downloadable assets, infographics, eBooks, or sharing podcasts and webinars by your internal subject matter experts. Continuously sharing fresh, engaging content will not only break up heavy blog posts, but it will also lead to greater engagement among your readership.

5.) Time to Take Mobile Seriously

On average, people pick up their phones 150 to 200 times per day. That means that, in the United States, there are nearly 30 billion mobile moments in total each day. Still, some businesses have yet to acknowledge the importance of mobile as a part of their online strategy.

To efficiently generate clicks, leads, and sales through your website or blog, you must tailor it to the mobile user experience. Easy-to-read, digestible content, clickable links, and optimized images will make for the most cohesive and dynamic mobile design.

If you are not mobile-friendly, users will know it. Google now labels websites and blogs that are mobile-friendly right there in the search results on your mobile device. If your website is not mobile responsive, you may get left in the dark. Optimizing your website or blog to be mobile-friendly, therefore, must be a high priority in 2016. See if your website is mobile responsive using Google’s Mobile-Friendly Test.

Having a blog in place on your business website can help gain traction in the online landscape. Having a blog that is valuable, visually-compelling, and mobile-friendly, however, can help you attract qualified traffic, convert engaged readers, and secure new business online. Don’t fall behind the competition. Make blogging, and these core blogging trends, a priority in your digital marketing strategy this year.

If you’d like to learn more about Synapse SEM, please complete our contact form or call us at 781-591-0752.

2016’s Top Search Engine Marketing Trends

2016 will be a revolutionary year for the digital marketing industry.  After a historic 2015, a year in which we saw mobile searches overtake desktop searches, industry analysts are projecting that digital media spend will overtake traditional channels like TV for the first time.  Apart from these macro changes, there are more technical developments that will also be affecting digital marketers in the new year.  We share details on 5 critical trends that should be on your radar for 2016:


  • Google Penguin Update – Google Penguin is not a new name to search engine optimization professionals. For those less familiar, Google Penguin is a layer of Google’s organic algorithm that specifically evaluates link quality.  Penguin is designed to discount or even penalize disreputable and manually engineered external followed links.  Penguin was originally released in April 2012 and has since been refreshed around a half dozen times.  The forthcoming release of Penguin, slated to launch in early 2016, marks a major change for this algorithmic layer.  Instead of releasing periodic updates, Google will run the new Penguin in real time.  This is both good and bad for advertisers.  For websites suffering from historically spam-rich linking profiles, the benefits of any link cleansing work and disavowals will be felt more quickly.  In contrast, for websites that aggressively push the envelope on their link-building strategies, penalties and ranking drops will also be assessed and felt faster.  The updates coming to Penguin underscore what has already been a link acquisition best practice for several years: instead of building links, marketers should focus on cultivating links, organically generating link backs by promoting unique, engaging content.


  • Real-Time Personalization – Real-time personalization is a growing technology that allows content management systems and advertising platforms to dynamically serve customized content to different cohorts of users. The technology, which is offered through marketing automation solutions like Marketo and CMS platforms like Sitefinity, works by integrating with an organization’s CRM system.  A website visitor gets cookied, and the marketer can then define different personas or user groups.  One persona, say a C-level executive, can then be served a different website experience (different messaging, calls-to-action, etc.) than, say, a specialist-level user.  The same type of personalization can be embedded into pay-per-click ad copy and landing pages.  This is invaluable technology that can lead to significant improvements in conversion rate and online revenue.  If you’re a retailer, you can customize branded paid search ads around the previous purchases of repeat customers.  If you’re a B2B organization, you can tailor your website experience to the role of your visitor.  A researcher may be prompted to download white papers and industry reports, while a decision-maker like an executive could be served deep-funnel calls to action like a demo request.

 

  • RLSA – In mid-2015 Google AdWords expanded their RLSA or “Remarketing Lists for Search Ads” technology so that campaigns can leverage Google Analytics remarketing lists. RLSA allows marketers to integrate retargeting lists with their Search Network pay-per-click ads.  Marketers can specifically target (or exclude) past website visitors, based on the pages they visited, or their on-site behavior.  Past website visitors are typically more qualified users, so marketers can take a broader approach with their keyword set, and a more aggressive approach with their bidding strategy.  As an example, if you’re an online retailer that sells luxury watches, a keyword like “gifts for my husband” would likely yield highly irrelevant/unconvertible traffic.  However, with an RLSA campaign, we can aggressively bid on a keyword like “gifts for my husband” because we know the user has already expressed interest in our website/product.  Similarly, RLSA can be used to improve traffic quality on traditional Search Network campaigns.  For example, B2B SaaS websites often field significant traffic from existing customers who log in to the product through the website.  As a marketer you might be running a Branded search campaign aimed at demand generation.  Unfortunately, major amounts of your ad spend are likely being wasted on these existing customers who are simply trying to log in to their account.  With RLSA, we can create a remarketing list of all users who have reached a website’s login page.  We can then exclude that list from seeing our Branded ads in our AdWords campaign.

 

  • Mobile Conversion Optimization – Mobile users overtook desktop users for the first time in 2015. With mobile traffic becoming a bigger and bigger percentage of total traffic each year, it’s critical that marketers implement a mobile-specific conversion strategy on their website.  In addition to ensuring that your website is fully responsive, marketers can use device detection scripts to serve customized content.  For example, a marketer could set up a page to display consolidated messaging, shorter forms (fewer fields), or different navigation links when a user browses from a device with a smaller screen size like a smartphone or tablet.  These types of adjustments can be implemented at the page level and they can have a profound impact on your mobile conversion rates.
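A device-detection script of the kind described above can be as simple as a server-side User-Agent check. A minimal sketch, with the substring hints and form-field choices as illustrative assumptions (production sites typically pair this with responsive CSS or a maintained detection library):

```python
# Hypothetical server-side device detection via User-Agent substrings.
MOBILE_HINTS = ("iphone", "ipad", "android", "mobile")

def is_mobile(user_agent):
    ua = user_agent.lower()
    return any(hint in ua for hint in MOBILE_HINTS)

def form_fields(user_agent):
    # Smaller screens get a shorter form, per the strategy above.
    if is_mobile(user_agent):
        return ["email"]
    return ["name", "company", "email", "phone"]
```

A page template could call `form_fields()` with the request's User-Agent header to render the shorter mobile form.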

 

  • Dark Traffic – As mobile traffic continues to grow, so too does untrackable (not set) or “dark” traffic in our analytics platforms.  Dark traffic typically originates from mobile social media and messaging apps.  Many of these mobile apps open new windows when referral links are clicked.  From Google Analytics’ perspective, the user is navigating directly to your website; in reality, the user is arriving via a referral source.  For some B2C retail websites, dark traffic is becoming incredibly problematic, and in some extreme cases it comprises over 50% of total website traffic.  Many firms try to get around this issue by running landing page reports and making educated guesses about the original traffic source of the user.  That approach is imprecise at best.  Our firm has developed a sleeker solution.  The apps generating dark traffic kill the user’s original traffic source by opening new browser windows, and most of them also prevent marketers from manually tagging website links with UTM parameters.  There is, however, a workaround that can be employed with an interstitial redirect.  For example, let’s say a company includes a link to their website in their Instagram profile. The marketer can link to a unique URL with a delayed interstitial redirect that points to a URL (e.g. the homepage) tagged with UTM parameters communicating the user’s original traffic source.  In this example the redirect could point to www.acme.com/?utm_medium=referral&utm_source=Instagram&utm_campaign=InstagramProfileLinkClick.  This tells Google Analytics that the user came from the Referral medium, from the Instagram app/site, and from an Instagram profile link click.
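The UTM-tagged destination URL in this workaround can be assembled programmatically. A small sketch using Python's standard library, reusing the acme.com example above:

```python
from urllib.parse import urlencode

def utm_tagged_url(base_url, medium, source, campaign):
    """Append UTM parameters so analytics credits the true traffic source."""
    params = urlencode({
        "utm_medium": medium,
        "utm_source": source,
        "utm_campaign": campaign,
    })
    return f"{base_url}?{params}"

print(utm_tagged_url("https://www.acme.com/", "referral", "Instagram",
                     "InstagramProfileLinkClick"))
# https://www.acme.com/?utm_medium=referral&utm_source=Instagram&utm_campaign=InstagramProfileLinkClick
```

The interstitial page would then issue a delayed redirect to this tagged URL, preserving the source information the app would otherwise strip.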

How Google Determines Actual CPCs Will Surprise You

OK, so you’ve been investing in PPC advertising for years. You know how your KPIs are performing, and how much you’re spending each month. Your in-house team or agency reports back to you on overall performance and dazzles you with insightful and actionable analyses each week. You feel very comfortable with their PPC knowledge and then one day you ask one of the most basic PPC questions: how are CPCs in AdWords calculated? Their response is questionable at best, and you start to think their so-called “expertise” is a sham. They should be able to easily answer this question, right?

Well, before you judge your team too harshly, let us walk you through how CPCs are actually determined and why it’s not a question so easily answered.

How Are CPCs Actually Calculated?
Before we get into the specific calculations, we need to first talk about the AdWords auction and what influences your CPC, position and impression share, since these three metrics are all related. All three are determined by your Ad Rank, which combines your Quality Score, maximum CPC and the expected impact of ad formats. The advertiser that shows in position 1 (“Advertiser 1”) is the advertiser with the highest Ad Rank. Google first determines the position for each advertiser, and then calculates the actual CPC for each advertiser based on that position.

To really see how this plays out, let’s look at an example:

[Chart: hypothetical AdWords auction showing four advertisers’ max CPCs, Quality Scores, format impact, and resulting Ad Ranks]

In the chart above, Advertiser 1 will show in position 1 because they have the highest Ad Rank. Once position is determined, the AdWords system then determines the actual CPC that advertiser will pay. Keep in mind that the idea that each advertiser only pays $0.01 more than the next advertiser no longer applies (unless all advertisers have the exact same Quality Score and Format Impact). In fact, it is entirely possible for an advertiser in position 2 to pay a higher CPC than the advertiser in position 1. Advertiser 1’s actual CPC is the lowest amount they can pay while still achieving an Ad Rank higher than Advertiser 2. The CPCs for the other advertisers are calculated using the same logic. Since Advertiser 4 has by far the lowest Quality Score and max CPC, they are likely to be ineligible to show or have extremely limited impression share.
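To make the mechanics concrete, here is a minimal sketch of the widely cited simplified model, in which Ad Rank is max CPC times Quality Score and each advertiser pays just enough to keep their Ad Rank above the advertiser below them. The four advertisers and their numbers are hypothetical, and real Ad Rank also weighs the expected impact of ad formats:

```python
# Simplified model: Ad Rank = max CPC x Quality Score.
# (Real Ad Rank also includes the expected impact of ad formats.)
def ad_rank(max_cpc, quality_score):
    return max_cpc * quality_score

def actual_cpc(next_ad_rank, quality_score):
    # Pay just enough to keep your Ad Rank above the advertiser below you.
    return round(next_ad_rank / quality_score + 0.01, 2)

# Hypothetical advertisers: (name, max CPC, Quality Score)
advertisers = [("Adv 1", 2.00, 10), ("Adv 2", 4.00, 4),
               ("Adv 3", 6.00, 2), ("Adv 4", 8.00, 1)]
ranked = sorted(advertisers, key=lambda a: ad_rank(a[1], a[2]), reverse=True)
for (name, _, qs), (_, cpc_below, qs_below) in zip(ranked, ranked[1:]):
    print(name, actual_cpc(ad_rank(cpc_below, qs_below), qs))
```

Running this, Advertiser 1 pays $1.61 while Advertiser 2 pays $3.01: a worked instance of the point above that a lower position can carry a higher actual CPC.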

How Can I Use CPC Calculations to My Benefit?
Now that we know how CPCs are determined, how can this help you improve your performance? First of all, keep in mind that ad extensions do play a role in determining your Ad Rank (through the expected impact of ad formats), and Google is introducing new ad extensions all the time (they just recently announced structured snippets, for example). You should be using as many ad extensions as reasonably possible, and optimizing your ad extensions at least as often as you’re updating your main ad copy. This will help improve your Ad Rank, which can help reduce your CPCs and/or improve your position.

Also, this may be obvious, but you should be making regular ad copy testing a top priority. With expected click-through rate and ad relevance accounting for a majority of your Quality Score, it’s critical that you’re using relevant headlines and descriptions that are truly differentiated from your competition and highly enticing to your audience.

Lastly, keep in mind that it’s extremely difficult to run PPC ads profitably with low Quality Scores. Constantly inflating your max CPCs to drive impression share and high positions is not a sustainable strategy.  Other advertisers are typically setting their bids to meet profitability targets, and if they’re showing more often and more prominently, it likely means they have higher Quality Scores. If you end up paying significantly more per click, you should have a strong business case for doing so (e.g. significantly higher conversion rates, better lead close rates, higher customer lifetime value, etc.). You should also continually focus on ad improvements and ensure a relevant landing page experience. The steady, consistent path of testing and analysis (in place of or in addition to aggressive bid increases) will help you maintain efficiency as you expand and as competition increases.

If you’re interested in learning more about how CPCs are calculated, see a great video by Google’s chief economist, Hal Varian, or check out these two articles that cover CPCs for the Search network and CPCs for the Display network.

If you’d like to learn more about Synapse SEM, please complete our contact form or call us at 781-591-0752.

Advanced PPC Series: Your Test Results Can’t Be Trusted

Your Ad Copy Test Results Can’t Be Trusted: A Need-to-Read Article for Search Engine Marketers

If you are like us, you’re constantly running A/B ad copy tests in your AdWords campaigns.  It’s possible that over the last several years you’ve been making the wrong decisions based on very misleading data.

Many of us use metrics such as conversion rate, average order value (AOV) and revenue per impression to choose a winner in an A/B ad copy test.  Based on the statistically significant ad copy test results below, which ad would you choose to run?

Ad Iteration     AOV    Conversion Rate   ROI     Revenue/Impression
Ad A (control)   $225   3.15%             $7.79   $0.42
Ad B (test)      $200   2.65%             $6.79   $0.37

The answer couldn’t be clearer.  You should run ad copy A, right?  After all, it does have a higher AOV, a higher conversion rate, a higher ROI and it produces more revenue per impression than Ad B.  What on earth could possibly convince you otherwise?  The metrics above tell a very clear story.  But are these the right metrics to look at?

Measuring A/B Tests: What Metrics Should You Consider?

Conventional wisdom tells us that if we’re running a true A/B test, then impressions will be split 50/50 between the two ad iterations.  If this assumption holds true, then the metric we really should be focused on is revenue per impression.  This metric tells us how much revenue we’ll generate for every impression served, which accounts for differences in CTR, AOV and conversion rate.  If your business is focused on maximizing growth, then this may be the only metric to consider.  If you also are focused on efficiency, then you will consider ROI and choose the ad that you believe provides the optimal combination of revenue and efficiency.  While this approach is common, it is also fatally flawed.  Here’s why…

Why Google Can’t Guarantee True A/B Ad Copy Tests

Earlier, we made the assumption that impressions are split 50/50 in an A/B test.  However, when running our own A/B tests we noticed that certain ads were receiving well over 50% of the impressions, and in some cases, upwards of 70-90% of the impressions.  We experienced these results when selecting the ‘rotate indefinitely’ ad setting, as well as in AdWords Campaign Experiments (ACE) tests.  So why were we seeing an uneven impression split?  Did we do something wrong?  Well, yes: we made the mistake of assuming that impressions would be split 50/50.

How Google Serves Ads – And Why Quality Score Is Not a Keyword-Exclusive Metric

When you set up an A/B ad copy test in AdWords, Google will split eligible impressions 50/50, but served impressions are not guaranteed to be split 50/50, or even close to 50/50.  Eligible impressions will differ from served impressions when one ad produces a higher CTR than the other.  Since CTR is the primary determinant of Quality Score (and thus, Ad Rank), the AdWords system may actually serve a higher CTR ad more often than a lower CTR ad.  This happens because your keywords’ Quality Scores will change for each impression depending on which ad is eligible to show for that impression.  In other words, each time the lower CTR ad is eligible to show, the keyword that triggered the ad will have a lower Quality Score for that impression, and thus, a lower Ad Rank (because the expected CTR is lower with that ad), so the lower CTR ad will win the auction less often than the higher CTR ad.  Naturally, this results in more impressions for the higher CTR ad, even though the two ads each receive roughly 50% of eligible impressions.  If you use revenue per impression, one of the metrics we suggested earlier, then you will have failed to account for the discrepancy in impressions caused by varying CTRs.  So, does this mean that your A/B ad copy test results are now meaningless?  Not so fast.

Evaluating Test Results Is Easier Than You Think – Just Look at Revenue (or Revenue per Eligible Impression)

Let’s assume that your goal is to maximize revenue.  The simplest metric to look at in an A/B ad copy test is revenue, but you can also look at revenue per eligible impression.  Both metrics allow you to account for the variations in impressions due to different CTRs.  To calculate revenue per eligible impression for each ad, divide the revenue from that ad by the impressions from whichever ad produced the higher number of impressions.  Here’s an example:  let’s assume Ad A generated a CTR of 6% and received 50,000 impressions and Ad B generated a 4.5% CTR and received 30,000 impressions.  Between the two ads, Ad A received more impressions, so we can conclude that there were 100,000 total eligible impressions (twice the number of impressions generated by Ad A).   Ad B was not served for 20,000 of the eligible 50,000 impressions due to a lower CTR (which impacted the keywords’ Quality Scores and Ad Rank for those impressions).  If the revenue per impression metric is confusing, just focus on revenue: it will give you the same outcome.  Let’s revisit the test results we showed earlier, which now include additional data.
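The impression arithmetic above can be sketched in a few lines of Python. The impression and CTR figures come from the example; the revenue argument in the helper is whatever revenue each ad generated:

```python
# Hypothetical figures from the example above.
impr_a, ctr_a = 50_000, 0.06
impr_b, ctr_b = 30_000, 0.045

# Each ad was eligible for ~50% of all auctions, so per-ad eligible
# impressions equal the larger of the two served-impression counts.
eligible_per_ad = max(impr_a, impr_b)       # 50,000 eligible impressions each
total_eligible = 2 * eligible_per_ad        # 100,000 total eligible impressions
unserved_b = eligible_per_ad - impr_b       # Ad B missed 20,000 of its auctions

def revenue_per_eligible_impression(revenue):
    # Divide by eligible (not served) impressions to neutralize CTR effects.
    return revenue / eligible_per_ad

print(total_eligible, unserved_b)  # 100000 20000
```

Comparing each ad's `revenue_per_eligible_impression` (rather than revenue per served impression) accounts for the impressions the lower-CTR ad lost.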

Ad Iteration   Impressions   CTR     Revenue   Transactions   AOV    Conv. Rate   ROI     Revenue/Impression   Revenue/Eligible Impression
Ad A           114,048       5.95%   $48,095   214            $225   3.15%        $7.79   $0.42                $0.36
Ad B           135,000       7.00%   $50,085   250            $200   2.65%        $6.79   $0.37                $0.37

While Ad A outperformed Ad B based on its revenue per impression, it actually generated less revenue and less revenue per eligible impression than Ad B.  Ad A did generate a higher ROI, however, so the tradeoff between efficiency and revenue should also be taken into account.

Interestingly, Ad A’s 19% higher conversion rate and 13% higher AOV still couldn’t make up for Ad B’s 18% higher CTR.  This is because Ad A also received 16% fewer impressions than Ad B.  Remember, a lower CTR will lead to fewer clicks AND fewer impressions – the double whammy.

The Conclusion – Focus Less on Revenue/Impression and More on CTR

Historically we have treated CTR as a secondary metric when evaluating ad copy performance.  It’s easy to manipulate CTR with Keyword Insertion or misleading offers, but it’s quite difficult to generate more revenue and/or improve efficiency with new ad messaging.  However, with a renewed understanding of how CTR can impact impression share, we are now focused on CTR when testing new ads.  As we saw in the example above, if your new ad produces a significantly lower CTR than the existing ad, it will take massive increases in AOV and/or conversion rate to make up for the lost revenue due to fewer impressions and clicks.  Therefore, when writing new ads we recommend that you focus on improving CTR (assuming the ads still attract the right audience).  This will produce three distinct benefits:

  1. Greater click volume due to increased CTR
  2. Higher Quality Score due to increased CTR, which produces lower CPCs and/or higher ad position
  3. Increased click volume due to higher impression share

We are all familiar with the first two benefits, but the third benefit represents the most value and is the one most often overlooked.

Next time you run an A/B ad copy test be sure to consider the impact CTR and impression share have on your test results.  Avoid focusing on revenue/impression, AOV and conversion rate to determine a winner and instead focus on revenue/eligible impression or total revenue.  This will ensure that differences in impression share are accounted for, and, ultimately, that the higher revenue producing ad is correctly identified.  If efficiency is a key consideration, keep ROI in mind as well.

Oscar Predictions 2015 Recap

Last night’s Oscars proved to be quite the spectacle, with Neil Patrick Harris walking around in his underwear and everyone finding out what John Legend’s and Common’s real names are.  The results were interesting as well, with only a handful of upsets.  Having taken it all in, there are a few things I noticed regarding the selections made by some of the sites I polled, and the effectiveness of various winner-choosing methods.

First, how did I perform?  Well, I ended up winning 20 out of the 24 categories (or 83%).  That fared pretty well against the sites I polled: only one of the nine sites beat me, I tied with one other, and the rest all won fewer categories than I did, with Hollywood Reporter performing the worst at only 13 wins.

When I took a closer look at which sites performed well and which ones did not, one thing became immediately clear; the individuals or sites that used statistics vastly outperformed the sites that did not.  For example, Ben Zauzmer won 21 categories (beat me by 1) and GoldDerby’s predictions led to 20 category wins (tied me).  The other sites I polled averaged roughly 16.5 wins, which is about 18% worse than the stats-based sites.

As I mentioned in my original article, the film that wins Best Film also wins Best Director about 72% of the time.  Interestingly, 4 of the sites I polled actually chose two different films for Best Film and Best Director, which strongly indicates they were making decisions with their gut rather than with calculated probabilities.

I made the mistake of going with my gut when I chose “Joanna” for Best Documentary Short Subject, even though “Crisis Hotline” was a decisive favorite.  I chose “Joanna” because I saw both films and simply felt it was a better film than “Crisis Hotline.”  Unfortunately, there is no correlation between who I feel will win and who actually wins, so it was a poor decision on my part.  Ironically, at the Oscar party I attended I ended up tying for first instead of winning first because of the one pick where I strayed from the probabilistic approach.  I’ve learned my lesson as it pertains to selecting Oscar winners, and as a search engine marketer I was reminded that we cannot ignore what the data is telling us.  A probabilistic approach can provide huge advantages when making key optimization decisions within your digital marketing campaigns.

One last thing I’ll mention is that Ben Zauzmer, whom I mentioned earlier, made a very astute observation regarding the Best Original Screenplay category.  He noticed that the WGA correctly forecasts the Oscar winner for this category 70% of the time, which would have meant that “The Grand Budapest Hotel” was the favorite to win.  However, “Birdman” was ruled ineligible by the WGA so it didn’t have an opportunity to win this award.  Instead of blindly believing the numbers, he adjusted the model to account for the likelihood that “Birdman” would have won if it had been eligible, which resulted in predicting that “Birdman” would win Best Original Screenplay, which it did.  As marketers, we are required to constantly synthesize, and sometimes question, the data to ensure we’re making decisions based on signals rather than noise (shout out to Nate Silver).  This approach has shaped our campaign management strategies, and I’m hoping it will also help you make better marketing decisions moving forward.

Oscar Predictions 2015: Beat My Ballot and Win a Free PPC Audit

The Oscars are right around the corner.  If you’re like me, you’re jittery with anticipation.  After all, how many other nights of the year provide such an amazing opportunity to put probabilistic theory to work?

Now I know what you’re asking yourself: why on Earth is a search engine marketer writing about the Oscars?  Well, for one, choosing Oscar winners is a lot like choosing which landing page to use, or which ad copy to run; informed decisions require statistical insights and we use stats-based Bayesian models to help us make better marketing decisions for our clients on a regular basis.  We’re applying those same principles to help us choose Oscar winners.  Second (and the real motivator), I filled out an Oscar ballot last year for the first time and have been fascinated with the selection process ever since.

So before I reveal this year’s winners (or at least those that are favored to win), let’s lay the foundation for the logic behind choosing the winners.

First, I considered the popular opinion of top critics (GoldDerby does a great job of consolidating this information).  The greater the consensus was among these critics, the more confident I felt in my decision.  Second, I looked at previous winners to see if there are any trends or consistencies that could be applied to this year’s nominees.  For example, of the 86 films that have been awarded Best Picture, 62 (or 72%) have also been awarded Best Director.  So, if you think a film is going to win Best Picture, you should almost always pick the director of that film to win Best Director.  Also, the number and types of awards a film has already won are the strongest indicators of its success at the Oscars.  For example, most critics have chosen Birdman as the favorite to win Best Picture because it won the Directors Guild Award (among other awards), which is the strongest predictor of Oscar success for this category (over the last 15 years, the Best Picture winner in the Oscars also won the Directors Guild Award 80% of the time).

With these insights, I have carefully chosen this year’s Oscar winners for each category.  If you send me your predictions ahead of time and win more categories than I do, Synapse will provide you with a free PPC audit.  In the unlikely event that more than one ballot beats mine, the PPC audit will go to the lucky one who won the most categories.  So, without further ado, here are my predictions:

  • Best Picture: “Birdman” (it’s a slight favorite over “Boyhood”, but statistically it’s very close)
  • Best Director: “Birdman,” Alejandro Gonzalez (going with the 72% stat here, plus the fact that Gonzalez has already won the Directors Guild Award, which is the strongest predictor of who will win best director at the Oscars)
  • Best Lead Actor: Eddie Redmayne in “The Theory of Everything” (he’s nearly a 3:1 favorite over Michael Keaton, since he’s already won the SAGs, the BAFTAs and the Golden Globes)
  • Best Supporting Actor: J.K. Simmons in “Whiplash” (he’s over a 90% favorite to win)
  • Best Lead Actress: Julianne Moore in “Still Alice” (she swept the Golden Globes, the SAGs and the BAFTAs)
  • Best Supporting Actress: Patricia Arquette in “Boyhood” (is anyone voting for anyone else?)
  • Best Animated Feature: “How to Train Your Dragon 2” (it’s a sizable favorite, although some critics believe Big Hero 6 will win)
  • Best Documentary Feature: “Citizenfour” (all nine sites I polled chose Citizenfour)
  • Best Foreign-Language Film: “Ida,” Poland (“Ida” is a huge favorite)
  • Best Adapted Screenplay: “The Imitation Game” (its WGA victory puts it slightly ahead of the pack)
  • Best Original Screenplay: “Birdman” (this one is quite tricky because “Birdman” was ruled ineligible for WGA, but the WGA winner is typically a 70% favorite to win this category. Without this insight, the numbers would say “The Grand Budapest Hotel” should win, but Golden Globe and Critics Choice wins make “Birdman” the slight favorite)
  • Best Cinematography: “Birdman” (this is a slight favorite)
  • Best Costume Design: “The Grand Budapest Hotel” (heavy favorite over “Into The Woods”)
  • Best Film Editing: “Boyhood” (its win at the American Cinema Editors guild makes it the favorite)
  • Best Makeup and Hairstyling: “The Grand Budapest Hotel” (this is a heavy favorite based on its BAFTA and guild wins)
  • Best Original Score: “Theory of Everything,” Johann Johannsson (about a 2:1 favorite over “The Grand Budapest Hotel”)
  • Best Original Song: “Glory” (it won the Golden Globe and Critics Choice)
  • Best Production Design: “The Grand Budapest Hotel” (it won the BAFTA and the Art Directors Guild award, which make it a heavy favorite)
  • Best Sound Editing: “American Sniper” (one of the tightest races of this year’s Oscars, but “American Sniper” is a slight favorite)
  • Best Sound Mixing: “Whiplash” (this one is tight, but its BAFTA victory puts “Whiplash” slightly ahead of “American Sniper”)
  • Best Visual Effects: “Interstellar” (“Dawn of the Planet of the Apes” could upset, but “Interstellar” is the only film with more than two nominations in this category)
  • Best Animated Short Film: “Feast” (last year the favorite lost, so watch out for “The Dam Keeper”)
  • Best Live-Action Short Film: “The Phone Call” (heavily favored, although both Entertainment Weekly and IndieWire have predicted that “Boogaloo and Graham” will win)
  • Best Documentary Short Subject: “Joanna” (I’m going against the stats on this one because I saw this film and it was amazing, and in my opinion, better than “Crisis Hotline”)

So I’m going with a completely probabilistic approach, with the exception of the last category.  Based on the probabilities for each category, I am expected to win roughly 17-19 categories.  Think you can beat me?  Reply to this post or email me your selections at paul@synapsesem.com.  Let the best probabilistic mind win!
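For what it's worth, the "expected to win roughly 17-19 categories" figure falls out of linearity of expectation: the expected number of correct picks is simply the sum of the per-category win probabilities. A quick sketch with hypothetical probabilities (not the actual odds behind these picks):

```python
# Expected number of correct picks = sum of each category's win probability
# (linearity of expectation). The probabilities below are hypothetical
# stand-ins, not the actual odds used in this post.
win_probs = [0.75] * 24   # 24 categories, ~75% favorite in each
expected_wins = sum(win_probs)
print(expected_wins)  # 18.0
```

With an average favorite probability around 70-80% across 24 categories, an expectation in the 17-19 range is exactly what the arithmetic predicts.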

 

Sources

http://www.latimes.com/entertainment/envelope/la-et-mn-en-oscars-2015-ballot-predictions-20150211-column.html#page=1

http://www.ropeofsilicon.com/oscar-contenders/oscar-predictions/

http://www.theguardian.com/film/series/oscar-predictions-2015

http://www.bostonglobe.com/arts/movies/2015/02/18/oscarharvard-side/a24fmMCYN0ot5pZ8ZwAyYP/story.html

http://www.goldderby.com/odds/experts/200/

http://www.ew.com/article/2015/02/13/oscars-predictions-2015-who-will-win

http://www.indiewire.com/article/2015-oscar-predictions

http://www.hollywoodreporter.com/awards/predictions/oscars/2015/oscars-2112015

http://www.awardscircuit.com/oscar-predictions/

 

Are All Impressions Created Equally?

Cracking the Mystery Behind Budget-Limited Impression Distribution

Over the course of my career I’ve learned that the vast majority of paid search accounts will at some point be affected by budget limitations.  There are numerous reasons why budgets may become capped.  You may need to temporarily pull back spend to adhere to internal budgets. Maybe you add a new campaign to your account, which leaves existing campaigns fighting for fixed resources.  Or, an aggressive bid escalation strategy could make historical daily budgets insufficient.  Whatever the reason might be, limited daily budgets affect most accounts at some point or another.

For most marketers, campaign-level budget adjustments have become the go-to budget management strategy.  You need to drop your spend by 25%?  Sure, I’ll knock down your daily budget by 25%.  Problem solved!  It’s easy to understand why we’re so quick to change campaign budgets.  It’s a simple, reliable method of controlling spend, and in an industry where time is at a premium, it’s a change that takes just a couple of seconds to implement.  But do we really understand how we’re influencing performance when we change campaign budgets?

Interestingly, Google does not have much to say about the matter.  They explain that when budgets are capped “ads in the campaign can still appear, but [they] might not appear as often as they could.”  Talk about an exhaustive scientific conclusion!  Unfortunately, we too often assume that this impression rotation is going to occur on a pro rata basis.  In the past, I have aggressively dropped budgets with the expectation that I am going to see a linear drop-off in performance.  For example, if I drop my budget by 50%, then I would expect my impressions, clicks and conversions to all drop by the same 50%.  Click-through rate and conversion rate shouldn’t be affected if Google is reducing my impression share at random and on a pro rata basis.  Although this logic makes perfect sense, the data we’ve collected after limiting budgets has told a different story.

When we cap campaign budgets, the more important question we should be asking is “which impressions am I limiting?”  Let’s go back to our hypothetical budget cut.  If we drop campaign budgets by 50%, we said we would expect impressions to drop by 50%.  Stop right there.  While it’s true that a 50% reduction in budget will likely lead to a 50% drop in campaign-level impressions, we need to stop and consider which impressions are going to be affected.  The obvious first consideration is whether impressions will decline equally across the entire keyword set.  But there are also less tangible and less measurable dynamics to consider.  How will budget limitations impact the time of day my ads get served, the geography they get served in, and the devices on which my ads are showing? Google basically has free rein to decide where and how they are going to rotate our impressions.

Our experience has shown us that budget limitations (and even minor <15% limitations) can significantly impact the quality of our campaigns’ impressions.  Our hypothesis is that when excessive impressions are available, Google will serve your ads across segments (times of day, geographies, devices, etc.) that offer the lowest competition.  In other words, Google is not rotating impressions at random.  It’s in Google’s best financial interest to take this approach as they’re now creating a bigger market (with higher bids) when and/or where there was previously less activity.  Unfortunately, the lower competition segments are often less active because they relate to lower quality impressions that produce lower conversion rates.

So where’s the proof of this impression variability?  Let’s look at three real-life data sets from our client base:

    • Example 1: One of our non-profit clients ran state-level campaigns across the nation. They did not have the budget to support this effort, but they wanted some visibility in every state. Most campaigns were losing 25-40% of their impression share due to budget.  After explaining the risks of this strategy, we got the client to agree to an experimental increase in monthly budget so that we could run the campaigns with completely uncapped budgets.  Very few changes (optimizations, testing, etc.) were made to the campaigns during this time, and the client is not significantly impacted by seasonality.  Before and after results are shared below:

[Table: before/after performance results for Example 1]

    • Example 2: One of our B2B software clients was reluctant to significantly invest in a Branded advertising campaign. After a full year of running the campaign with a lost impression share of 31% due to budget, the client decided to maximize Branded spend.  It is worth noting that 95% of the traffic and impressions from this campaign originated from the exact match iteration of its brand name.  Again, very few changes were made to the campaigns during this time, and the client is not significantly impacted by seasonality.  Before and after results are shared below:

[Table: before/after performance results for Example 2]

    • Example 3: Another one of our B2B software clients introduced several new campaigns related to an expanding product line.  This required us to limit one of our existing campaign’s impressions by 15%.  Once again, very few changes were made to the campaigns during this time, and the client is not significantly impacted by seasonality.  Before and after results are shared below:

[Table: before/after performance results for Example 3]

This data clearly suggests that there is an inverse relationship between lost impression share and impression quality and efficiency.  It is critical that we consider the ramifications that campaign-level budget changes may have on performance.  Before dropping budgets, dive deeper into your accounts and cut the fat at the most granular level possible.  Even better, when building out new accounts, consider how your campaign structure might influence budget management.  Are there keywords that you will never want capped because they are exceptional performers?  If there are, they should be broken out at the campaign level.  Even if you didn’t structure your account like this when you launched, it’s not too late.  Scan your account for your top performing keywords and consider moving those terms to a new “high priority” campaign that you can ensure receives sufficient daily budget.
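As a sketch of what cutting the fat at a granular level might look like, here is a hypothetical ranking of keywords by cost per conversion; the keyword data is invented for illustration:

```python
# Rank keywords by cost per conversion (CPA) so spend cuts hit the least
# efficient spend first, instead of throttling the whole campaign budget.
keywords = [
    {"keyword": "kw exact", "cost": 500.0, "conversions": 25},
    {"keyword": "kw broad", "cost": 400.0, "conversions": 4},
    {"keyword": "kw phrase", "cost": 300.0, "conversions": 15},
]
for kw in keywords:
    kw["cpa"] = kw["cost"] / kw["conversions"]

# Highest-CPA keywords are the first candidates for pausing or bid cuts.
cut_candidates = sorted(keywords, key=lambda k: k["cpa"], reverse=True)
print(cut_candidates[0]["keyword"])
```

The same ranking, run over exported keyword data, can also surface the exceptional performers worth breaking out into an uncapped "high priority" campaign.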

Not all impressions are created equal.  Google has far too much autonomy in its impression rotation methodology for you to assume performance scales linearly with budget.  When facing budget limitations, take the time to cut unproductive spend by optimizing targeting settings and/or pruning keywords.  Break out top performers into separate uncapped campaigns.  Campaign-level budget reductions should be reserved as a last-resort optimization.
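The "break out top performers" step above can be sketched programmatically.  This is a minimal Python sketch, not our production tooling; the field names (`keyword`, `conversions`, `cpa`) and the thresholds are hypothetical placeholders you would adapt to your own keyword report export:

```python
# Identify top-performing keywords that deserve their own uncapped
# "high priority" campaign. Field names and thresholds are hypothetical;
# adjust them to match your own keyword report.

def find_breakout_candidates(keywords, cpa_goal, min_conversions=10):
    """Return keywords converting at or below the CPA goal with
    meaningful volume -- the ones you never want capped by budget."""
    candidates = [
        kw for kw in keywords
        if kw["conversions"] >= min_conversions and kw["cpa"] <= cpa_goal
    ]
    # Highest-volume performers first
    return sorted(candidates, key=lambda kw: kw["conversions"], reverse=True)

report = [
    {"keyword": "crm software", "conversions": 42, "cpa": 38.50},
    {"keyword": "crm tools", "conversions": 5, "cpa": 41.00},   # too little volume
    {"keyword": "free crm", "conversions": 18, "cpa": 92.75},   # CPA above goal
]

for kw in find_breakout_candidates(report, cpa_goal=50.00):
    print(kw["keyword"])  # candidates for the "high priority" campaign
```

Running a filter like this monthly keeps the list of budget-protected keywords current as performance shifts.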

Google Authorship: An Often Overlooked SEO Strategy

In the SEO world, quick and simple opportunities to boost performance are about as common as a cactus in Siberia.  The reality is that successful SEO strategies depend on intensive content development efforts, carefully optimized on-page tags, and thorough attention to your site's technical health.  Those tasks don't come easily, so when an opportunity as simple and impactful as Google Authorship comes along, SEO professionals should take notice.

Despite how little effort it requires, Google Authorship has surprisingly been overlooked by many marketers.  In this post we'll discuss the key benefits of the program and why you should spend the minimal time required to enroll.

The most tangible benefit of Google Authorship is its ability to differentiate your search engine listings and ultimately increase click-through rate.  When content on your site is written by authors enrolled in Google Authorship, the ranking page is eligible to appear with a headshot from the author’s Google Plus profile.  In addition, the listing will appear with a byline giving credit to the individual author.  Consider the results that appear for the Google search on the term “Bid Op Tool:”

[Image: Google search results for "Bid Op Tool" showing Authorship snippets]

These two factors increase the real estate of your listing and add visual appeal, both of which are likely to positively influence your click-through rate.  Google recently completed an eye-tracking study to measure the impact of Authorship tags on user behavior, finding that users had a "60% chance of fixating on the annotation when placed at the top of the snippet block."  In plain English, search results with an Authorship tag are likely to get clicked more than standard results.

It is still unconfirmed whether Google Authorship is an active signal in Google’s algorithm, but many SEO professionals feel that strong “Author Rank” will be an important ranking factor in the future.  Google’s Executive Chairman, Eric Schmidt, stated in his book that “information tied to verified online profiles will be ranked higher than content without such verification, which will result in most users naturally clicking on the top (verified) results. The true cost of remaining anonymous, then, might be irrelevance.”  If this is true, we can also suspect that domains are likely to benefit if Google sees “expert” authors regularly publishing on their site.  This would give organizations the incentive to build in-house content development teams focused on creating unique and valuable content.

These benefits are admittedly less tangible and harder to measure, but it is clear that Google values credibility and expertise.  As the metrics supporting these qualities become increasingly quantifiable, it would be prudent to anticipate their eventual impact on rankings.

Writers can sign up for Google’s Authorship Program in just a few minutes.  You’ll need to create a Google Plus profile (if you don’t already have one) with a recognizable headshot, and a work email linked to the domain(s) on which you regularly publish.
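Beyond the email verification route, Google's documentation at the time also supported establishing authorship by linking explicitly between your content and your Google Plus profile.  A minimal sketch of that markup (the author name and profile ID below are placeholders):

```html
<!-- Byline on the article page, linking to the author's Google Plus
     profile; the numeric profile ID here is a placeholder. -->
<p>By <a href="https://plus.google.com/112345678901234567890" rel="author">Jane Author</a></p>
```

For the link to count, the Google Plus profile's "Contributor to" section should link back to the domain where the content is published, completing the two-way verification.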

For such a simple process, Google Authorship can have a strong impact on your SEO strategies.  To learn more about building a Google Authorship strategy visit Google’s Inside Search Feature or contact us today.

Guest Blogging is Alive and Well

Overview
In a recent blog post, Google's Matt Cutts discusses the pervasiveness of spammy guest blogging activities.  The head of Google's Webspam team goes so far as to state, "So stick a fork in it: guest blogging is done; it's just gotten too spammy."  The purpose of this article is to clarify Cutts's statements and help marketers decide how to leverage guest blogging efforts moving forward.

Clarifying the Details
The main point of clarification is that guest blogging is still alive and well.  Cutts's article and prior videos on the topic were specifically targeting two audiences: blog owners and companies using guest blogs in ways that violate Google's quality guidelines.  Best practices related to guest blogging are summarized below for each audience.

Blog owners: Historically, numerous blog owners have accepted guest blog posts from various sources, including businesses with which they had no prior relationship.  Google is simply encouraging these websites to properly scrutinize submissions to ensure they are original, high quality, and relevant to their blog readers.  This suggestion helps website owners avoid damaging the reputation of their blogs.  From Google's perspective, the premise is simple: offer your readers high-quality content and your blog will be considered high quality; offer them low-quality content and it will be considered low quality.  Of course, higher-quality blogs will rank better than lower-quality blogs.
Guest bloggers violating Google's quality guidelines: Websites can violate Google's quality guidelines in several ways.  Cutts's article specifically calls out the following violations:

  • Buying links: This one is pretty cut and dried.  You shouldn't pay for links.
  • Requesting followed links: Google doesn’t like it if you specifically ask for a followed link.  Followed links should be a natural result of other efforts, including your content marketing initiatives.
  • Spinning articles: Article spinning (the practice of syndicating the same article or similar iterations of the same article to multiple websites) is frowned upon.  After all, how can you provide unique content if you’re syndicating the same content to multiple outlets?

In terms of link building, guest blogging should only be used as a means to acquire high quality links (as opposed to a large quantity of links).  Additionally, companies shouldn’t use guest blogs as a primary, or even secondary, method for acquiring links.  The issue that Google identifies is that many companies realized that they could drive significant inbound link volume by mass producing guest blogs. When you’re after volume, your content development efforts will inevitably yield lower quality, less unique content.  When this effort is multiplied by the thousands of companies engaged in guest blogging, the product is a sea of low quality, or even spammy, content that damages the integrity of search results.  Clearly, this is something Google would like to minimize or stop altogether.

Dos and Don’ts of Future Guest Blogging
To ensure your guest blogging efforts are providing value and not violating any of Google's quality guidelines, we recommend the following dos and don'ts of guest blogging.

Dos
Focus on quality, not quantity: This should hold true for all of your content development efforts.  High-quality, unique and compelling content is far more valuable than stale, mediocre content.  Strong content helps build links naturally, improves brand equity and can be used more effectively as part of an overall content marketing strategy (see "Think holistically" below).
Target the right audience: Your content is only valuable if it’s seen by the right audience: your target market.  Focus on outlets that allow you to gain visibility among your customers.
Build relationships with 3rd party websites: Once you find specific outlets that help you access your customers, build a strong relationship with them so you have the opportunity to promote content with them on an on-going basis.  Frequency is an important metric in the context of content marketing and brand equity.
Think holistically: Your content should be developed as part of an overall content marketing strategy designed to help you generate more business.  Don’t spend your time writing content simply to get a link; the link alone is rarely worth it.
If you’re a blog owner, have a solid review and submission policy for all guest posts: If you’re accepting guest blog post submissions, make sure you review the articles prior to publishing them. You should be looking for high quality, unique content that provides value to your readers.

Don’ts
Don’t pay for links: This is the most obvious violation.  Google has been pretty adamant about this for years.  Never pay for links.
Don’t request followed links: This is almost as obvious as the first point.  Followed links should happen naturally, but requesting them is a red flag to Google and should be a red flag to blog owners as well.
Don’t engage in article spinning: Repurposing the same article and submitting it to multiple blogs is a clear violation of Google’s quality guidelines.  If you’re even considering doing this, then you should rethink your entire content development strategy.
Don’t overreact: Google does not take kindly to “black hat” SEO practices and guest blogging is no exception.  However, if you are developing and syndicating content the right way (as discussed above), you should have no concerns regarding your guest blogging efforts.  Companies should be able and willing to collaborate with 3rd parties to develop unique, interesting and original content to share with their customers and other interested readers.

Conclusion
Keep in mind that Google’s ultimate goal is to provide the most relevant search results possible.  Over time, guest blogging has been abused so much for ranking purposes by certain marketers (for some that term is far too complimentary) that a shadow has been cast over the entire guest blogging community.  If you’ve been using guest blogging properly as part of an overall content marketing strategy, then you should have nothing to worry about.  If you’ve been using guest blogs in an effort to manipulate search results, then you should stop immediately and rethink your strategy.  Guest blogging isn’t dead; in fact, it’s more alive now than ever for those who choose to utilize it properly.

If you need help developing a comprehensive content marketing strategy, or if you’re interested in learning more about how guest blogging should be leveraged moving forward, please don’t hesitate to contact us.

Driving Sales From Your Facebook Followers

It seems that just about every company has a Facebook page these days. Many companies are even devoting serious resources to developing new and fresh social content. From our perspective, the motivation to join the Facebook fray often seems to be, in itself, social: companies need a Facebook page because their competitors have one, and hey, everyone has one!

All of these social efforts, whether strategic or not, have given many companies an impressive online following; it’s not uncommon for even small businesses to have upwards of 10,000 Facebook likes.

Facebook is, of course, a powerful communication tool. Companies can advertise their latest promotions, PR initiatives, and other relevant information to their followers vis-à-vis their Facebook page. But can Facebook pages be used outside of a communication function, and help to directly drive new sales?

First, let’s look at the composition of a typical Facebook audience. Facebook followers are predominantly existing customers. Facebook pages rarely rank organically for product/service-level keywords, and product/service research is not something that has historically been performed through social media. For these reasons, if users have navigated to your Facebook page, it is most likely because they are already aware of your brand. Second, if a user has gone so far as to “like” your page, they are demonstrating some pretty serious enthusiasm for your company. So, we can infer that the majority of your Facebook audience will be composed of enthusiastic existing or prospective customers. This is good news for the sales-minded business owner, because an enthusiastic audience that has already qualified its interest in your business is certainly ripe for a well-timed direct marketing or remarketing campaign.

So, how does a business reach these customers? The issue with Facebook followers is that they are anonymous. There’s no name, email, or phone number to associate with a follower, just a digitized “like.” The obvious answer is to reach these customers through Facebook wall posts, but this is a limited strategy for several reasons. Wall posts cannot be personalized to speak to specific segments of your customer base, and they are highly likely to be lost in the noise of an active news feed.

The solution to these issues is to be able to relate a “like” to a unique customer. Fortunately, we have developed a strategy to address this dilemma. Several Facebook apps on the market today help collect user information. For example, North Social’s “deal share” app is designed as a “gate” for a deal or promotion. The deal is advertised on the company’s Facebook page and accessed through the North Social app. To qualify for the deal, the user must complete a set of customizable criteria; for example, you could require customers to enter their name, email, and phone number to gain access. Alternatively, you can “gate” a deal by requiring users to like your page (if they haven’t already done so) or share the deal with their own friends. The user’s information is automatically logged in North Social’s interface, and the data can be downloaded to an Excel report at any time. Logically, the more enticing the deal, the more interaction it will receive. Over time, a company can effectively build a customer contact list by acquiring the contact information of its Facebook followers.

Once user information is obtained, companies can relate customer names with specific demographic segments or even past purchase history. Customized email campaigns can be launched to push frequency of purchase or cross-selling opportunities.

This “deal share” strategy is a great way to unlock the sales potential of your Facebook page. By turning anonymous Facebook followers into identifiable and unique users, a company can grow what is otherwise a communication portal into a powerful direct marketing and remarketing tool.

Is Your Anchor Text Diversified?

How to Evaluate Your Anchor Text Profile:
Google’s latest algorithmic update, dubbed “Penguin,” is carefully evaluating anchor text diversification when determining website rankings.  In this article we will demonstrate how to evaluate your anchor text profile.

Read the full article

Making Optimization Significant: The Role of Statistical Analysis

Paul Benson and Mark Casali, co-founders of the online marketing firm Synapse SEM, LLC, have been published in the Search Marketing Standard Magazine.  Following an intensive research project with Babson College Professor Dessislava Pachamanova, Ph.D., Synapse SEM has developed a new suite of statistical applications and tools.  The corresponding article “Making Optimization Significant: The Role of Statistical Analysis,” co-authored with Dr. Pachamanova, explains how these tools can be introduced to help search marketers make better decisions and obtain deeper insights from the data they analyze every day.

Read the full article

Paid Search Branded Campaigns: Effective or Excessive?

Every paid search campaign we build seems to elicit the same response.  No matter who the client is—a small local business or a Fortune 500 company—every client questions whether our campaigns should include branded keywords and ads.  In other words, should the campaign target keywords that include the company name and similar iterations?

We understand the skepticism, because paid search is often used as a vehicle to gain visibility on search terms that otherwise fail to rank organically.  Branded terms, of course, typically have the strongest organic visibility.  So, to reiterate the age-old question from our clients: why add a paid advertisement to a search results page that is almost guaranteed to have a first-position organic listing?

In our experience paid branded ads have multiple benefits:

  • Competition – Displaying branded ads can be viewed as a defensive mechanism to protect against competitors usurping prized positions on branded keywords.  Stated differently, a company should never be in a position where a competitor is the first ad/listing the user sees on the page.
  • Real Estate – With all of the ad extensions now available, paid search ads can take up some pretty impressive real estate on the results page.  Depending on the level of competition on branded keywords, it is entirely possible to dominate 100% of the above-the-fold results with a paid advertisement and first position organic listing.
  • Psychological Effects – Numerous studies have shown that when search results contain both a paid listing and an organic listing, a user is far more likely to click through to your website.  The two channels work symbiotically and enhance your site’s credibility.
  • Cost – Ultimately, the inclusion of branded terms in a paid search advertising campaign comes down to cost-benefit.  Cost-per-click bids on branded terms are typically very affordable.  Consequently, total cost incurred on branded terms should make up only a small portion of an overall paid advertising budget.

So, to offer a conclusive answer to our clients: we include branded keywords because it is the best thing for your campaigns.  Bidding on branded queries allows us to protect against competition, dominate above-the-fold results, and improve the credibility of your site, all within a very affordable budget.

Is Your Bid Op Tool Worth The Cost?

By Paul Benson
Thousands of online advertisers in the U.S. rely on bid optimization tools (BOTs) to make their paid search campaigns more efficient.  These software “solutions” or “platforms,” which typically leverage rules-based or algorithmic bidding, claim the upper hand against manual bidding strategies.  While it’s certainly true that in some cases these tools vastly outperform manual bidding, the challenge for advertisers is determining whether a BOT will prove cost-effective for their particular campaigns.

BOTs are perceived as superior to manual bidding because they are able to leverage historical data and automatically update bids based on the day of week, time of day, and other factors.  They also save time as advertisers no longer need to analyze performance and make bid changes on a relatively routine basis.  Unfortunately, just as BOTs create many efficiencies, they also generate significant costs.

On average, advertisers pay 5% of spend for BOTs.  For an advertiser spending $100,000 a month, this equates to $60,000 per year.  Some providers also charge a one-time implementation or launch fee of up to $10,000.  At this cost, a company could hire a new specialist simply to handle bidding for the account.  It is also true that BOTs require integration with existing platforms, which may cause ongoing frustration among advertisers looking for a truly “automated” solution.  This frustration is intensified when advertisers learn that BOTs require consistent manual oversight, and sometimes intervention.  In one past account audit, we determined that the advertiser’s automated tool had spent nearly $6,000 in two months on a single keyword that generated an ROAS of just 27% (significantly below the ROAS goal).  Neither the tool nor the keyword was a new addition to the campaign, which made this gross oversight on the part of the BOT even more perplexing.

For this same advertiser, we noticed over the course of several days that only 4% of converting terms, on average, received a daily bid change, and 88% of converting keywords didn’t receive a single bid change throughout the monitored time period.  Furthermore, keywords that comprised nearly 80% of all conversions were displayed in a position of 1.7 or better.  Nearly 94% of these terms had a CPA at or below goal, signifying that little optimization could be done from a bidding standpoint to improve performance on these terms.  After analyzing the number of bid changes, and the size of each bid change, it was concluded that this particular bid tool saved the client between $500 and $1,000 per month; the bid tool alone cost more than four times that amount to run.

Innovations within the primary advertising platforms, AdWords and AdCenter, make manual bidding all the more attractive.  For example, marketers formerly relied on BOTs to factor in day of week and time of day when making bid changes.  Now, however, Google AdWords’ Ad Scheduling functionality allows advertisers to adjust bids (up or down) during specified days or times of day.  AdWords also gives insight into performance by time of day or day of week (as long as you have a Google conversion pixel in place).  As a result, advertisers can leverage day parting to change bids by time of day and day of week just like a BOT can.  All of this functionality is available for free directly within the AdWords interface; you don’t need to access, pay for, or become familiar with a 3rd party interface.

Despite the aforementioned challenges, there are certainly scenarios where a BOT is still a valuable and unrivaled tool.  Ultimately, the decision to implement a BOT should be evaluated on a case-by-case basis.  Fortunately, there are three criteria that will help point you in the right direction.  First and foremost, consider your industry.  Advertisers in the retail space, especially those with thousands of SKUs, are more likely to benefit from a BOT, simply because a BOT’s value increases as the number of keywords increases.  Sophisticated BOTs can accurately determine the right CPC based not only on the historical performance of a particular keyword, but also on the historical performance of related keywords.  This insight, although small on an individual keyword basis, can add up to thousands of dollars across an account.

The second consideration can be referred to as the PPC Gini Coefficient.  A variant of its economic namesake, the PPC Gini Coefficient is calculated by dividing the number of converting keywords that are meeting or exceeding your goal and are in an average position better than 2 (1.9, 1.8, etc.) by the total number of converting keywords.  The higher your PPC Gini Coefficient, the less opportunity there is for bid optimizations, and therefore the less useful a BOT will be.  A coefficient of 70% is considered high, while 50% is considered relatively normal.  It’s possible that your campaigns contain keywords that are exceeding your goal but are in a position below 2 (2.1, 2.2, etc.) simply because the CPCs haven’t been properly adjusted.  If this is the case, these terms should be removed from the calculation.
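The PPC Gini Coefficient described above is straightforward to compute from a keyword report.  A minimal Python sketch, assuming a CPA goal and hypothetical field names (`conversions`, `cpa`, `avg_position`):

```python
# PPC Gini Coefficient: converting keywords that are meeting or exceeding
# the goal AND sit in an average position better than 2, divided by all
# converting keywords. Field names are hypothetical placeholders.

def ppc_gini(keywords, cpa_goal):
    converting = [kw for kw in keywords if kw["conversions"] > 0]
    if not converting:
        return 0.0
    optimized = [
        kw for kw in converting
        if kw["cpa"] <= cpa_goal and kw["avg_position"] < 2.0
    ]
    return len(optimized) / len(converting)

report = [
    {"conversions": 12, "cpa": 40.0, "avg_position": 1.4},
    {"conversions": 7, "cpa": 55.0, "avg_position": 1.8},   # above CPA goal
    {"conversions": 3, "cpa": 45.0, "avg_position": 2.6},   # position worse than 2
    {"conversions": 0, "cpa": 0.0, "avg_position": 3.1},    # non-converting, excluded
]

print(ppc_gini(report, cpa_goal=50.0))  # 1 of 3 converting keywords qualifies
```

Per the caveat above, keywords beating the goal from a position below 2 only because their CPCs were never raised should be dropped from `report` before the calculation.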

Finally, if you’re currently using a BOT, you should also consider the frequency and size of its bid changes.  This data, along with click volume, can give you a reasonable sense of how much money the BOT is saving you every month.  Keep in mind, however, that every time a BOT decreases a CPC to gain efficiency, it may also be sacrificing conversions.  It’s also important to understand whether the bid changes are more commonly max CPC increases or decreases.  Surprisingly, of all the bid changes implemented for the advertiser mentioned earlier, 94% were decreases in max CPCs.  Therefore, in this particular case, only cost savings were considered; potential revenue generated from increased CPCs was ignored.

If you’re still struggling with the decision of whether to implement a BOT, consider contacting one of the providers to request an estimate.  Marin Software has an internal tool that estimates your increased ROAS if you were to adopt their software.  While these estimates are commonly ‘optimistic,’ this additional information may aid in your decision.  It’s difficult to test a BOT, since significant upfront work is required to integrate the software with your account.  Therefore, you should collaborate with an experienced SEM expert or agency to conduct a thorough analysis based on historical campaign performance.  If you’re already using a BOT, don’t fall victim to the sunk cost fallacy.  Carefully evaluating the efficacy of your bid tool could save you thousands.

Google Introduces AdWords For Video

Earlier this spring, Google expanded their paid search advertising capabilities by launching a new video advertising platform, “AdWords for Video.”  The program’s launch immediately turned video content advertising into an affordable, targetable, and measurable medium.  AdWords for Video operates under a pay-per-view model where the advertiser is only charged when users have watched their video in its entirety, or for thirty seconds—whichever is shorter.

In terms of set-up, all an advertiser needs to get started is a YouTube account.  The AdWords for Video platform is programmed so that videos from any linked YouTube account can be pulled directly into new ads in the account.

AdWords for Video supports four different ad formats:

  • In-Search – As a featured ad above the YouTube search results (similar to the ad locations for search network text ads).  This ad format was formerly known as “promoted videos” on YouTube.
  • In-Slate – As an uninterrupted featured video that plays before targeted content
  • In-Display – As a suggestion to the right of a targeted YouTube video on the watch page
  • In-Stream – As a ‘skippable’ video that plays before targeted content

Upon learning about the new AdWords for Video platform, our team wondered how it differed from running traditional video ads on the display network.  In our experience so far, there are several key differences.  First, as described above, AdWords for Video allows for cost-per-view bidding; display network bidding only allows for CPC, CPM, or conversion-focused bidding.  Second, the display network allows non-TrueView-format videos to be incorporated into click-to-play or in-stream ads.  Display network video ads also currently allow for placements on YouTube, but Google has already hinted that these ad formats will be phased out to give way to AdWords for Video.  Third, the ability to target ads is different in AdWords for Video.  Instead of ad groups, AdWords for Video organizes ads into “targeting groups” set at the campaign level.  Targeting groups allow for demographic, topic, interest, placement, remarketing, contextual keyword, and search keyword inclusions and exclusions.  Finally, in AdWords for Video, one video ad can be applied to multiple ad formats; on the display network, each video ad has unique content and format.

So what benefits can AdWords for Video bring search marketers?  There are certainly long-term benefits associated with the efficiencies AdWords for Video brings to video content management and optimization, but there is also a more immediate advantage.  Because AdWords for Video is so new, it is unsaturated.  The paid video advertising market is just beginning to develop, and consequently cost-per-view prices in many industries are still just a couple of cents.  The cost-per-view bidding format, in and of itself, also offers a major advantage: since the advertiser is only charged when a viewer watches the entire video (or thirty seconds of it), there is the potential for a lot of “free branding.”  In other words, a partial video view is not necessarily a bad thing.  In fact, if your ads are engineered well and you can get your message across in the first five to ten seconds, you are likely to reach a significant number of customers for free.

Implementing Segmented ROI Analysis

By Mark Casali
To succeed in online marketing, an agency must be committed to adaptation, innovation, and, of course, optimization.  The industry seems to produce a never-ending array of methods to improve and refine account performance, yet as search marketers, our very own measure of success remains a stagnant and undeveloped concept.  Return on investment, or “ROI,” is typically applied as a blanket metric and only assessed at the account level.  This evaluation of ROI distorts goals and leaves perhaps the most obvious optimization opportunities unexposed.  In an industry that survives on advancements, it is essential that we begin assessing value and success with more comprehensive and revealing measures.

The easiest method of improving the ROI measurement is to move away from an account-level assessment.  Regardless of how the marketer ultimately defines ROI, each product/service category should be tracked with an individual return on investment goal.  Consider a sporting goods store with a mature paid search marketing account promoting baseball, football, and soccer ball sales.  The store incurs $1,000 of engine spend on each of the three product categories.  Revenue earned by the paid search account is $2,500, $2,000, and $800 for baseballs, footballs, and soccer balls, respectively.  The store has identified an ROI goal of 1.5 as its break-even threshold; this goal accounts for variables such as product cost, growth strategies, and lifetime value.  Account-level ROI is calculated as $5,300 of total revenue divided by $3,000 of total engine spend, for a return of approximately 1.8.

Most agencies will look at this data and determine that the account is healthy, with an ROI well over the goal of 1.5.  Additional optimization techniques could be performed within the account to improve click-through rates, strengthen conversion rates, and refine cost-per-click bids.  These strategies may well benefit account performance; however, a more immediate problem is exposed by refining our ROI calculation.  If category-level ROI metrics are analyzed, the marketer will find that returns equal 2.5, 2.0, and 0.8 for baseballs, footballs, and soccer balls, respectively.  With an ROI below the 1.5 threshold, the soccer ball category is generating incremental revenue but detracting from overall profitability.  In fact, the soccer ball category would have to nearly double its revenue to reach the desired ROI goal.  It is unlikely that traditional optimization techniques will bridge this gap, especially in a mature campaign; by comparison, traditional optimization in this account is insignificant.  Segmented ROI analysis has revealed a far more pressing performance issue, and until the soccer ball campaign is suspended, the account will significantly underachieve from a profitability standpoint.
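The sporting goods example is simple enough to reproduce in a few lines of Python; the category breakdown exposes the problem that the account-level figure hides:

```python
# Account-level vs. category-level ROI for the sporting goods example:
# $1,000 of spend per category against $2,500 / $2,000 / $800 of revenue,
# with a break-even ROI goal of 1.5.
spend = {"baseballs": 1000, "footballs": 1000, "soccer balls": 1000}
revenue = {"baseballs": 2500, "footballs": 2000, "soccer balls": 800}
ROI_GOAL = 1.5

account_roi = sum(revenue.values()) / sum(spend.values())
print(round(account_roi, 1))  # 1.8 -- looks healthy at the account level

# Category-level ROI reveals the underperformer the blanket metric hides
for category in spend:
    roi = revenue[category] / spend[category]
    status = "OK" if roi >= ROI_GOAL else "BELOW GOAL"
    print(category, round(roi, 1), status)
```

Soccer balls come back at 0.8, well below the 1.5 threshold, even though the blended account-level return of roughly 1.8 looks comfortably above goal.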

This example is admittedly basic, but the underlying principle of identifying and striving toward multiple ROI goals is a powerful optimization technique.  The sporting goods store has a simple offering of three types of equipment, but real companies market countless products and services.  Logically, the marketing campaigns promoting these products and services will achieve various levels of success, and some will even fail.  With such drastic differences in performance across an account, it seems counterintuitive to optimize around a one-size-fits-all ROI goal.  A better approach is to calculate separate ROI objectives for each product/service category.  This strategy is very much in line with the way search marketers update bids based on keyword performance: the marketer would never bid on a keyword with a negative contribution margin, so why apply budget to an entire product category that loses money?  This granular approach of assessing ROI at the category/product level is applicable to companies ranging from full-scale e-retailers with numerous product lines to institutions of higher education with multiple programs of study.

Once marketers move towards a more segmented ROI analysis, they should also reconsider how they set performance goals.  When establishing an ROI goal, the agency or client should develop customized objectives for each product/service category based on a variety of variables including historical performance, the product costs and gross margins of the related products/services, growth strategies, and the assessed lifetime value of a customer associated with a product/service.  For example, a marketer may determine that a particular campaign drives so many recurring purchases that a return on investment goal under 1.0 could, in fact, be appropriate and representative of long-term profitability.  When establishing ROI goals, the marketer or client should also acknowledge the motives of the campaign.  If the ROI goal is set too high, the profitability of the account will be strong, but the volume and reach of the campaign will be restricted.  Conversely, if the ROI goal is too low, the campaign will see stronger volume but lower margins.

Overall, the limitations of an account-level return on investment assessment are becoming more and more problematic for marketers.  Optimization opportunities are lost, and agencies are left striving toward goals that may or may not be advantageous to the client.  To correct this problem, companies should track performance for each product/service category and identify separate financial goals for each of these categories.  If this level of granularity is already available, the data should be shared with the company’s search agency so the paid search campaigns can be managed more accurately.  The resulting account will offer a deeper level of visibility into performance and help online marketers quickly and easily improve profitability.  This is one of the last pieces of low-hanging fruit available to paid search advertisers; don’t let it go to waste.