About a year and a half ago I set out to solve a painful problem I face several times a year: A close friend or a family member’s birthday is approaching — what gift do I buy them? I always wished I had this magic tool to suggest the perfect gift, a gift tailored specifically for that person. So I thought, why not hack it myself?
So I did. After some thinking I decided on the following approach: I’ll connect the user (gifter) via Facebook, ask for as many permissions as possible so I can get as much information about their friend (giftee), and semantically analyze the giftee’s profile (Thanks, Zemanta API). As a result I’ll have a set of themes (semantic entities), ordered by importance, that should represent the giftee’s personality. I will then use the semantic analysis to query against a gift catalogue and find the most suitable gift personally matched for the giftee. This should work like a charm, shouldn’t it?
I picked Amazon as the gift catalogue, of course. It has a huge inventory of products, a very useful API, and… an affiliate program.
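The original matching flow can be sketched roughly like this. All names here are illustrative, not the actual Gnift code; in the real service the search call would have wrapped the Amazon Product Advertising API.

```python
# A minimal sketch of the first approach: take the giftee's ranked
# semantic themes and query a product catalogue with each theme,
# keeping the most relevant hit per theme.

def suggest_gifts(themes, catalogue_search):
    """themes: list of semantic themes ordered by importance.
    catalogue_search: callable returning ranked products for a keyword."""
    suggestions = []
    for theme in themes:
        results = catalogue_search(theme)
        if results:
            suggestions.append(results[0])  # best match for this theme
    return suggestions

# Toy stand-in for the catalogue API:
fake_catalogue = {
    "Yoga": ["Yoga Mat", "Yoga Blocks"],
    "Dogs": ["Dog Training Book"],
    "Trekking": ["Camping Tent", "Hiking Poles"],
}
print(suggest_gifts(["Yoga", "Dogs", "Trekking"],
                    lambda t: fake_catalogue.get(t, [])))
# -> ['Yoga Mat', 'Dog Training Book', 'Camping Tent']
```

Simple enough, and as described below, it produced exactly this kind of superficial output.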
So I launched the 1st version of Gnift, and I immediately got very lucky. My submission to KillerStartups was picked up by someone from UrbanDaddy, and a story about the newly launched Gnift landed in the mailboxes of several thousand UrbanDaddy subscribers, resulting in several hundred new Gnift users – exactly the playground I needed to test and tune the magic algorithm.
After some time and some algorithm enhancements I could see very clearly what was happening: the personal analysis seemed to work extremely well. Much better than I expected. For example, I have a friend who goes to yoga classes, is raising two dogs, and treks on the weekends. Gnift analysed his Facebook profile as [Yoga, Dogs, Trekking]. In many cases Gnift even managed to order the semantic themes by significance, in accordance with the person's real-life preferences. So far so good.
However, the second part, the gift matching against the catalogue… well, to that same friend mentioned above, Gnift suggested [Yoga Mat, Dog Training Book, Camping Tent]. At first glance that looks fine, doesn't it? I mean, it's suitable, right? Contextually in place, semantically positioned, isn't it? Or maybe just a bit superficial?
So the person is practicing yoga, all right, but is a Yoga Mat really a good gift for them? Can you imagine someone unwrapping a package and joyfully discovering a Yoga Mat? A Dog Training Book, just because he's raising dogs? Isn't that a little forced? A Camping Tent? Is it possible that the gift matching process was missing something? Maybe it lacked… a soul?
It so happens that when I wanted to actually buy a gift for that friend of mine, I brainstormed with another friend; we came up with something original and finally bought him a Beer Machine. And… he simply loved it! How come Gnift couldn't come up with that suggestion based on the person's semantic analysis?
At that point I realized that you simply can't get the right gift by querying a gift catalogue API with the top semantic themes representing a person's interests. It just doesn't work.
But there must be a right way. So if it's not matching people's analysis to gifts, what can it be? Maybe matching people's analysis to… other people's analysis? I mean, let's say I stumble upon a person with a semantic analysis very similar to my friend's; this means they must have at least somewhat similar personalities. Does this mean that other person would love a Beer Machine as well?
This was something I really wanted to experiment with. Now I didn't need the gift catalogue anymore; instead, I needed to know what a subset of the users in my database consider a perfect gift for themselves. With that information, I would be able to match any person's semantic analysis against that subset, find the closest matches, and suggest the gifts they want to the person for whom a gift is being sought.
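The person-to-person idea can be sketched as a similarity search, assuming each profile is reduced to a weighted bag of semantic themes. The names, weights and pool entries below are made up for illustration; this is not the actual Gnift algorithm.

```python
# Match a giftee's theme profile against profiles of users who
# volunteered their own favourite gifts, and borrow the gifts of
# the most similar person.

def cosine(a, b):
    """Cosine similarity between two {theme: weight} dicts."""
    shared = set(a) & set(b)
    dot = sum(a[t] * b[t] for t in shared)
    norm = lambda v: sum(w * w for w in v.values()) ** 0.5
    return dot / (norm(a) * norm(b)) if a and b else 0.0

def suggest_from_similar(giftee, pool, k=1):
    """pool: list of (profile, loved_gifts) pairs; return the gifts
    of the k profiles most similar to the giftee's."""
    ranked = sorted(pool, key=lambda p: cosine(giftee, p[0]), reverse=True)
    return [gift for profile, gifts in ranked[:k] for gift in gifts]

friend = {"Yoga": 0.9, "Dogs": 0.7, "Trekking": 0.5}
pool = [
    ({"Yoga": 0.8, "Dogs": 0.6, "Beer": 0.4}, ["Beer Machine"]),
    ({"Cars": 0.9, "Golf": 0.8}, ["Golf Clubs"]),
]
print(suggest_from_similar(friend, pool))  # -> ['Beer Machine']
```

The point of the sketch: the Beer Machine never appears in the giftee's own themes, yet it surfaces through a similar person who loved it.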
At first I thought Amazon wishlists would do. Those contain information about what people actually want, what they wish they had, and in practice, what they want their friends to buy them for their birthdays. If I can uniquely connect an Amazon wishlist to a Facebook account in my database… I can't.
My second idea was Pinterest. Pinning something is like adding it to a wishlist, sort of. If I pin something and you are similar to me in personality, wouldn't you like to get what I've pinned as a gift? But Pinterest doesn't have an API…
So what could I do?
Having nothing else in mind, I decided I might as well simply ask people to provide the information voluntarily. Or almost so… So I revived Gnift, and right now, if you want to search for the perfect gift for any of your friends, there's something you're asked to give in return. You're asked to let Gnift know what you liked as a gift, or would have liked. Then your profile is analyzed and added to the pool of people Gnift can match against. When someone similar to you is searched for on Gnift, they are suggested the same gifts you liked.
Does it work better? Truly, I can’t tell yet. I’d be happy if you can give it a shot and maybe write some comments.
I’m a hacker. And a surfer (surprisingly, not such a rare combination). About a year ago I decided I’d had enough of the poor quality of real-time surf reports. Advanced surf forecasts are available, surf cams are installed on various beaches, and yet, without a precise view of the surf at any given moment, I found myself too often wasting precious time (otherwise spent hacking) driving to the beach only to find poor surfing conditions, or, on the other hand, dismissing the drive just to be sorry later when I got the “dude, it was pumpin so awesome, u should have come!” line from someone who did surf.
So I developed [the 1st version of] SwellPhone, a service for “producing and consuming real-time surf reports” in my declarative way of describing it. I reckoned that if I gave the surfing community a tool to share photos and videos of the surf, taken just before or after surfing, I would revolutionize real-time surf reporting. Hell, why not? You have a smartphone, right? Why not point it at the surf, take a video or photo, and help other surfers tell the surfing conditions?
It was a noble idea. It even got to the front page of HackerNews. But I was so naive… Everybody wanted to “consume” the surf reports, all right. Just about everybody. However, no one was willing to “produce” a surf report. Absolutely no one. And without anyone producing, there was nothing to actually consume.
I was trying to crowdsource surf reporting. But the crowd simply didn’t want to source. Crowdsourcing sucked.
I let SwellPhone linger and diverted my energy elsewhere…
Until one day, while on Instagram, I stumbled upon this pic of a cute little girl in the Maldives, and it made me laugh, so I Gimped it to express my thoughts:
Then I took another look at the pic and it hit me: Any surfer within a driving distance from this awesome tube who would get to see this photo in a timely manner after it had been taken, would simply leave everything and rush to surf there. Dude, this is not a pic of a cute girl. This is a surf report!
Wait, it’s taken from Instagram, do they have an API? Yes, they do!
Wait, do they geo tag the pics? Yes, they do!
Wait, do they timestamp the pics? Of course they do!
Wait, is there sufficient inventory of pics around known surf spots at any given time? Well, sometimes there is!
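The new approach boils down to a filter over other people's pics. Assuming each pic carries a geo tag and a unix timestamp (as Instagram's media objects did at the time), the logic can be sketched like this; the field names and numbers are illustrative, not the actual API schema.

```python
# Keep only pics that are both close to a known surf spot and fresh
# enough to count as a real-time surf report.
import math
import time

def haversine_km(lat1, lng1, lat2, lng2):
    """Great-circle distance between two points, in km."""
    to_rad = math.radians
    dlat, dlng = to_rad(lat2 - lat1), to_rad(lng2 - lng1)
    a = (math.sin(dlat / 2) ** 2
         + math.cos(to_rad(lat1)) * math.cos(to_rad(lat2))
         * math.sin(dlng / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(a))

def surf_report_pics(media, spot, radius_km=2, max_age_s=3600, now=None):
    """media: dicts with 'lat', 'lng', 'created_time' (unix seconds).
    spot: (lat, lng) of the surf break."""
    now = now if now is not None else time.time()
    return [m for m in media
            if haversine_km(m["lat"], m["lng"], spot[0], spot[1]) <= radius_km
            and now - m["created_time"] <= max_age_s]

recent_media = [
    {"id": "tube",     "lat": 4.175, "lng": 73.51, "created_time": 1000},
    {"id": "far_away", "lat": 32.0,  "lng": 34.8,  "created_time": 1000},
    {"id": "stale",    "lat": 4.175, "lng": 73.51, "created_time": -90000},
]
print([m["id"] for m in surf_report_pics(recent_media, (4.175, 73.51), now=2000)])
# -> ['tube']
```

The cute-girl pic from the Maldives passes both checks; a week-old pic of the same tube, or a fresh pic from a city beach far away, does not.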
Hooray! SwellPhone can rise again! This time though, without relying on the crowd to source pics to SwellPhone. People already take thousands of pics of their kids, tanned legs and bellies, gorgeous-looking girlfriends/boyfriends, and let’s not forget the empty beer bottles stuck in the sand. All of them have some view of the surf in the background. For us surfers, this is the foreground…
All in all, it was a lesson to me, and I learned something. Crowdsourcing can sometimes suck. But that doesn’t mean the data you need isn’t already sourced in some other manner.
I recently pitched Triond to a founder of another web company. As usual, I started by explaining the problem that Triond solves. When I got to the part where I stress that out of 133 million tracked blogs, only about 7 million are really active, my listener replied with – “and that’s 7 million too many… I bet you too get a lot of rubbish submissions…”
Apparently, he didn’t think highly of the quality of user generated content. This made me think (again) about the question of what quality content is, especially with regards to user generated content.
User generated content is very disruptive to the standard perception of content quality. Until just a few years ago, most of us were used to consuming only content that was produced by professional editorial systems. Whether on TV, in newspapers, in books, or even on the internet in its infant years, the content that end users consumed was filtered and edited by professionals who applied a somewhat narrow range of methodologies to their work. The limits were very clear and very accurately defined, and the result was a very unified style and spirit of content across all platforms.
The question of whether the content you were exposed to was quality content never arose in those times. It was clear that if the content was out there, then it was of at least a minimal quality; otherwise, it wouldn’t have been there. The only thing left for consumers to do in regards to evaluating content quality was to fine-tune their consumption standards within a very narrow spectrum. The brand under which the content was published became the content’s seal of quality, and, acting as the gatekeepers of our content world, professional producers and editors made our content consuming experience safe and secure.
They did, however, narrow our choice tremendously.
What user generated content did was allow anyone with content creation aspirations to walk past the gatekeepers and have their content out there, proposed to end users for consumption. Without the gatekeepers, everything suddenly became legitimate, and the filtering mission was handed over to the consumers themselves. Having no training at all in content quality evaluation, confused consumers needed to either avoid user generated content altogether, develop a sharp quality sense of their own, or start relying on the innovative tools that began to appear to help measure content quality.
Those came in many forms, starting with Google’s very basic PageRank algorithm, which measured quality by the number of incoming links, continuing with social bookmarking sites like del.icio.us, which measured quality by the number of people’s bookmarks, and later with social voting sites like digg, reddit and stumbleupon, which simply let the crowd push what they consider quality content to the top.
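The link-counting idea behind PageRank can be sketched as a tiny power iteration over a link graph. This is a toy version for intuition only, not Google's production algorithm; the damping factor 0.85 comes from the original paper.

```python
# Minimal PageRank: a page's quality score is fed by the scores of
# the pages linking to it, iterated until the scores settle.
def pagerank(links, d=0.85, iters=50):
    """links: {page: [pages it links to]}; returns {page: rank}."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {}
        for p in pages:
            # Each linking page q passes on a share of its own rank.
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * incoming
        rank = new
    return rank

# 'a' is linked to by both other pages, so it ends up ranked highest.
ranks = pagerank({"a": ["b"], "b": ["a"], "c": ["a"]})
print(max(ranks, key=ranks.get))  # -> 'a'
```

The same recursive flavour (quality flows from endorsers to the endorsed) underlies the bookmark counts and vote counts mentioned above; only the endorsement signal differs.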
Engagement volume, expressed as the number of comments or ratings for a unit of content, became another measurement of quality, and the latest trend is that people are becoming a content seal of quality themselves, simply by recommending content to their friends and followers on Facebook and Twitter.
If you stick to the traditional methods of content quality measurement, you would probably miss all of these posts. Are these content items of low quality? I’m not sure how a traditional editor would answer that. I’m sure though, that if you ask the hundreds of thousands of viewers of these articles or the thousands of the engaged people who took the time to comment or click ‘I Like It’, they would say “no.”
The traditional method of evaluating content quality is not dead. It is still in use by professional publications and it does a great job of quality assurance. It still acts as a seal of quality for a major portion of content consumption. It has, however, become just a single method, one among many used to measure content quality, and it is becoming less relevant as more people get used to – and become more willing to – consume content whose quality is measured differently.
So what is quality content? I don’t think I can answer this question. Once there were editors whom you could ask and they would determine the content’s quality. Today, I don’t think any single person can actually provide an answer. You have to take the content out there and let the web decide for itself.
The RightMedia Problem
The advertising exchange model is very promising and great in theory. Letting publishers put their display ad inventory up for bidding by advertisers really sounds like something that can improve efficiency on both the publishers’ and advertisers’ ends. In practice, however, things are different.
RightMedia is probably the largest advertising exchange today. Yet publishers still use it mostly for their remnant ad space, allocating their quality ad space to other advertisers and top-tier networks. Advertisers – completely aware of this fact – prefer to spend their display budgets with established and reputable ad networks, leaving RightMedia bidding to the smaller advertisers and networks. The result is that the average CPM on the exchange is low.
It seems that the exchange still hasn’t reached the critical mass that will bring the breakthrough in its performance. The only way for the exchange’s average CPM to increase is by bringing more advertisers and publishers into the game. Only then will publishers allocate quality ad space to the exchange, and large advertisers and networks will come along.
How does an ad exchange grow?
The Microsoft Lesson
According to Yaron Galai, who suggested that Microsoft offer a 200% rev-share to all publishers, the way to grow an advertising network – and I believe the rules apply to ad exchanges as well – is by growing the publisher base:
While the advertisers are the ones paying for everything, acquiring advertisers is a secondary concern for an ad network. A distant second. The #1 key to making an ad network work is the publisher side. Even though the publishers are being paid, it’s much more difficult to win publishers than it is to win paying advertisers.
Whether or not someone at Microsoft read Galai’s post, it seems they followed his advice. They realized the only way to win market share from Google is by attracting Google’s publishers – not advertisers. So the rumor about a high-paying AdSense alternative spread, and many publishers were eager to join once the private beta tag came off. A few weeks later though, when Microsoft opened pubCenter to the public, payouts sank dramatically.
RightMedia shouldn’t be learning anything from Microsoft about publisher retention. However, with acquisition in mind, Microsoft’s experience can serve as a good lesson for RightMedia. Tempting publishers works. If RightMedia could only find a way to tempt publishers to join, it would be able to grow its publisher base, and advertisers would follow.
But there’s a catch – unlike Microsoft, RightMedia can’t simply double or triple publisher payouts. Being just the exchange manager, RightMedia is not involved in the money flow. In the exchange, money flows directly from advertisers to publishers, not via RightMedia as a middleman. RightMedia is not in a position to offer any financial benefits to its publishers.
That’s a problem. And not only for RightMedia, but for any ad exchange out there. How can an ad exchange tempt publishers?
Why OpenX Will Win
If an ad exchange can’t compensate its publishers with money, it may never grow. Only an ad exchange that is able to offer another type of compensation – a service, or perhaps some added value – has a chance of appealing to publishers. OpenX is such an exchange.
Wait a second. Isn’t OpenX an ad serving solution?
Well, it is. OpenX is open source ad serving software that is becoming very popular. It disrupts the ad serving business by eliminating a great portion of ad serving costs. Disrupting is great, but OpenX is a business and needs to make money of its own. Already having a large (and steadily growing) number of publishers, with a very close affiliation to their advertising space, it was only natural for OpenX to eventually launch an ad network of some sort. Better yet, it decided to take the ad exchange path.
Though still small, I believe the OpenX exchange has a better chance of making it. It has a large base of publishers enjoying a free or very cheap ad serving solution, whether as a hosted service or as a standalone installation. Converting those publishers into exchange participants may be much easier for OpenX than it would be for RightMedia to find new publishers out in the cold market.
In a sense, OpenX is not an exchange that offers an added value. It’s an added value that offers an exchange. And that’s why they’ll eventually win.
Online-advertising targeting methods are great. As an advertiser, they allow me to narrow the distribution of my campaigns and reach out to my clients more effectively. So if I am after clients from Phoenix, AZ, I’d use geo-targeting. If – on the other hand – I look for people who are interested in dog training, I can keyword-target this term. The newly buzzed-about behavioral targeting can get me even further, enabling me to target clients according to their recently tracked online activity.
However, what if I actually know who the people I target are? What are my targeting options in this case?
First of all, you may argue that if I already know who I am targeting, I shouldn’t use advertising as the method of reaching them in the first place. I could try to contact them personally instead. Isn’t that what social networks are all about? Especially LinkedIn, which has the introduction mechanism so well implemented at its core.
To this I will answer that:
- The number of people to whom I want to communicate a single message can be too big to justify personal connection. It’s a simple scale issue.
- The time it takes to become friends or get introduced to those people may be long. In some cases, I need the message delivered immediately.
Current targeting options do not allow me to personally target advertising campaigns. I can use a mix of other targeting methods to try to narrow the distribution as much as possible, but that’s not really it. Let’s say that the common denominator of all the people I target is that they’re all video producers from NYC. But they’re not just video producers from NYC. They are very specific video producers that I have personally and manually selected. I can very easily launch a campaign that is geo-targeted to NYC and keyword-targeted around “video productions”. This campaign may well hit the exact people I am after, but it can also miss. And it may reach people that I never intended to reach. Their exposure to the ad, or even their response to it, can create useless noise for me, or even damage.
What I actually need in this case is a method of Personal Targeting, so I can communicate my message effectively only to the people I am interested in. I want to be able to put my ads on web pages that those NYC video producers – and only they – are viewing. That’s true targeting heaven.
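The serving decision itself could be very simple. This is entirely hypothetical (no such product exists, and all names below are made up): the ad server checks the identified visitor against each campaign's hand-picked target list before serving.

```python
# Personal targeting in miniature: a campaign either carries an
# explicit target list or is untargeted (served to everyone).
campaigns = [
    {"ad": "NYC video gig",
     "targets": {"alice@example.com", "bob@example.com"}},
    {"ad": "Generic video-production ad",
     "targets": None},  # None = no personal targeting
]

def ads_for(visitor_id):
    """Return the ads eligible for this identified visitor."""
    return [c["ad"] for c in campaigns
            if c["targets"] is None or visitor_id in c["targets"]]

print(ads_for("alice@example.com"))
# -> ['NYC video gig', 'Generic video-production ad']
print(ads_for("carol@example.com"))
# -> ['Generic video-production ad']
```

The hard part, of course, is not this lookup but reliably knowing that the visitor really is alice, which is exactly why social networks are the natural implementers.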
Imagine that while you’re browsing the web, the ads that you see are not ads that are targeted for your “type” but rather, directly to you.
Of course, the only way to practically achieve this is by knowing when it’s you who visits a page. And for this to happen, you must have identified yourself at least once along the path to the page where the ad is. The natural implementers of this targeting method should therefore be social networks. Social networks are the places where you willingly identify yourself by your real name, with the intention that it be publicly known that you are you.
There are surely some privacy issues here, although I am not sure they are much of an issue, since the only information the personal targeting vendor would use is the person’s name. It wouldn’t use any data the person shared on their social accounts, or any information the person may not want to expose to others. Users’ names are usually completely public on social networks.
Another issue is scalability. Personal targeting sounds like it involves very small numbers of ad impressions and future transactions or user actions. Advertising networks are used to working with multiples of tiny figures (costs) and huge numbers of impressions, clicks and actions, so why would any of them develop a targeting method that would allegedly shrink their business?
I think networks’ business wouldn’t shrink; rather, it would become more effective. Advertisers targeting individuals will be willing to pay a lot to reach the exact people they’re after (think Bill Gates), or for their future actions. So the multiples will be of relatively high figures (costs) with relatively small numbers of impressions, clicks and actions. The network will have to carry much less overhead. That’s a benefit.
All in all, having to deal with the pain caused by not having this targeting method, I can’t see many drawbacks to Personal Targeting. Can anyone else? I just hope the major social networks will develop Personal Targeting soon – or are they already developing it?
Many times, the first question I am asked about Triond is “What does Triond do?” Sometimes, I’d rather the first question be, “What problem does Triond solve?” After all, problem solving is how it all began for Triond.
To understand the problem that Triond solves, let’s begin with the birth of user generated content.
Introducing a near zero-cost distribution model, the web completely revolutionized the traditional publishing industry. Servers delivering webpages on demand to browsers around the world turned out to be much cheaper than the old method of printing, shipping, delivering and selling. The web enabled a much cheaper and more effective publishing process.
Now that publishing costs were down, a lot of web publishers arose. The demand for writers increased, and more writers than ever before were given the chance to have their writing published. In the meantime, users found themselves generating online content through their participation in communication applications, such as public email lists, forums and message boards. The concept of user generated content became more viable.
Yet, there wasn’t any online application that allowed you to express your creativity, knowledge and expertise with the express intent of consumption by end users. Publishers were still in control of this type of content generation. However, even with the boom of online publishers, there were many more people wishing to be published than there were publishers willing to publish them.
This growing demand encouraged the second revolution: Web 2.0.
No other activity marked the beginning of the web2.0 era more than blogging. While web1.0 eliminated the distribution costs, web2.0 eliminated the technology costs. The content management systems and web publishing tools that enabled online publishing were mostly proprietary and expensive during the first web era.
Web2.0 introduced free or nearly-free blogging platforms. All of a sudden, anyone with the slightest understanding of operating a computer and a web browser could operate their own publishing service. If you couldn’t find a web publisher willing to publish your work, you could just publish on your own. Better yet, now that you have the chance to publish on your own, why even bother looking for a publisher?
And so blogging began.
Has Blogging Proven to be Successful?
The blogging revolution has been tracked and analyzed by Technorati almost since it began. Every year, Technorati publishes the “State of the Blogosphere” report that analyzes blogging from many different aspects.
At first, you may be astounded to learn that Technorati has tracked 133 million blogs since 2002. That’s a very impressive number. But watch how the numbers shrink significantly when describing actual activity. In the 120 days before the report was published, as few as 7.4 million bloggers had posted new posts. That’s only 5.5% of the tracked blogosphere. Narrow the count to seven days, and the figure shrinks to only 1.5 million – a mere 1.1% of the blogosphere.
Those figures reveal two significant facts:
- Blogging is something that millions of people were willing to try.
- Most of them – however – churned.
125 million churns translates to 125 million disappointed individuals. That makes blogging one of the most disappointing activities on earth.
Generally, blogging is perceived to be rooted solidly in web culture. Well, apparently it is not. It did leave its mark on a huge number of people, and there are many successful blogs that have a very significant impact in their niche. However, a 95% fallout rate does not represent a phenomenon with a lot of traction. If email or instant messaging, for example, had the same churn rate, they wouldn’t be where they are today. It seems that even social networking – blogging’s younger web2.0 sibling – has experienced more traction.
What Makes Blogging So Disappointing?
People don’t get disappointed unless they have preliminary expectations that aren’t met. Understanding what bloggers expected from blogging may shed some light on the reasons for their general disappointment.
Technorati asked bloggers for the reasons they blog. Reasons and expectations are quite parallel in this instance:
Considering that more than 95% of bloggers were disappointed and as a result churned, we can assume that in 95% of the cases certain expectations weren’t met. So we can generalize that bloggers are disappointed because:
- They don’t feel that they are being read enough
- Their expertise and experiences are not being shared with as many people as they hoped
- They aren’t meeting and connecting with like-minded people
- They aren’t being published or featured in traditional media
- Their resumes are not being enhanced to the extent they desired
- They don’t make as much money as they were hoping to make
This is not so surprising. It is quite presumptuous to expect all those things to happen simply because you write something and publish it on your blog. Writing alone is not enough.
Bloggers are not Publishers. They are Writers.
Herein lies the failure of blogging as a method. It extracted the technology from traditional publishing and provided a platform anyone could use, but that’s the only thing it extracted. It did not provide all the other components that are vital for effective publishing – just the naked technology. Blogging provided the platform and expected bloggers to come up with the additional services themselves.
In other words, blogging forced writers to become publishers. Effective publishing incorporates a lot of elements: writing, editorial, marketing, distribution, sales, monetization, optimization, communication and much more. However, bloggers are not publishers. They are simply aspiring writers. Bloggers who weren’t willing to take on tasks other than writing, and furthermore, to become good at those tasks, didn’t stand a chance.
Triond: A New Approach for User Generated Content
With these millions of disappointed people in mind, my partners and I looked for a solution. We decided to implement a new approach for user generated content, something completely different from blogging. Something that would enable writers to be published effectively without forcing them to become full-scale publishers.
And so we created Triond, our approach to solving the problems associated with blogging.
Did we succeed? You tell me.
In-text advertising is a controversial niche advertising market that on one hand has huge reach and on the other lacks big players. It was a virtual paradise for in-text players during the last several flourishing years, but how will this niche market survive the downturn?
What is In-Text Advertising?
In-text advertising, according to Wikipedia, is:
A form of contextual advertising where specific words within the text of a webpage are associated with advertising content.
In-text advertising is not a new type of online advertising. In its primitive form, it dates back to the early 2000s, when a company named eZula distributed an adware client that turned words into links while surfing. Later on, VibrantMedia launched intelliTXT, probably the first ever online in-text advertising product. When eZula morphed into Kontera, they joined VibrantMedia as a leader in the in-text advertising market.
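The mechanics, in miniature, look something like this. This is a toy sketch of the general technique, not either vendor's actual code, and the sponsored keyword and URL are invented: scan the page text for sponsored keywords and wrap matches in ad links.

```python
# Turn sponsored keywords found in page text into ad links,
# the way in-text products decorate words with double-underlined links.
import re

sponsored = {"dog training": "https://ads.example.com/dog-training"}

def intext_linkify(html_text):
    """Wrap whole-word keyword matches in an ad link."""
    for phrase, url in sponsored.items():
        pattern = re.compile(r"\b(%s)\b" % re.escape(phrase), re.IGNORECASE)
        html_text = pattern.sub(
            r'<a class="intext-ad" href="%s">\1</a>' % url, html_text)
    return html_text

print(intext_linkify("A guide to dog training basics."))
```

In production this runs client-side via a JavaScript snippet the publisher embeds, which is what makes it so easy to adopt and so hard for the reader to distinguish from an editorial link.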
The basic idea behind in-text was to bridge the traditional separation between content and ad space inherited from the newspaper advertising industry. Advertising was merged into the content itself. Kontera explains this nicely on their website:
In 1982, to increase the sagging sales for Reese’s Pieces, Hershey’s accepted a product placement deal in Steven Spielberg’s “E.T.”. After Elliot used Reese’s Pieces to lure E.T. from his hiding place, Reese’s Pieces experienced a 65% increase in sales and succeeded in reinvigorating the brand. Though this wasn’t the first case of product placement, it is one of the best examples of increasing sales and supporting brand marketing objectives through contextually relevant product placement.
Low click-through rates on graphical banner ads, in contrast to the relatively high rates and conversions of CPA ads embedded as text links, gave an additional push to in-text innovation during the early days. In-text innovators tried to implement an online advertising service that could take advantage of those high click-through rates from textual links.
In-text advertising was, and still is, somewhat controversial. For the content consumer, it may not be obvious that a hyperlink is a sponsored ad. Some publishers and critics claim these links are deceptive in nature, and users’ reactions are mixed. I believe this is the reason in-text advertising is not embraced publicly. Market leaders are aware of this too.
Yet this controversy served another purpose. It appears to have prevented the big players from joining the market. The absence of Google, Yahoo, Microsoft and AOL allowed VibrantMedia and Kontera to increase their distribution and control over this niche-market. In fact, VibrantMedia and Kontera have become so dominant that regardless of their niche, they are both considered to be among the top advertising networks today. With impressive numbers such as 42% US reach for VibrantMedia and 34% US reach for Kontera, they have proven that publishers do end up implementing in-text advertising, regardless of how controversial it may be.
When the online advertising industry started to flourish we saw many other networks launching their in-text services. These networks were all trying to get in on one of the only niches from which the big players were absent. Exponential’s EchoTopic, AdBrite’s Inline Ads, Clicksor and others are examples of these smaller networks.
None of these companies threatened VibrantMedia and Kontera. But on the other hand, none of them relied solely on in-text.
The lack of real competition from the big guys had its drawbacks. It didn’t drive VibrantMedia and Kontera to increase their efficiency. One of the strangest things to me is that neither company introduced a self-service interface for advertisers. Both still rely on their sales forces and target only large advertisers. Moreover, they seem to prefer sharing their CPC revenue with other providers over earning higher rates themselves. Kontera is known to link its ads to Yahoo CPC ads:
The company also has access to thousands of advertisers through a partnership with Yahoo.
We can assume VibrantMedia does the same.
Now, with this economic downturn upon us, the in-text advertising market is fragile. Because it is a controversial niche market, it may be one of the first cut from advertisers’ budgets. Moreover, as their partners begin to focus on their core business rather than nurturing their in-text partnerships, VibrantMedia and Kontera will realize that relying on deals with other networks may have been a major error. Perhaps they should have developed direct relationships with advertisers as well.
Launching a self-service option for advertisers could help, but it may be too late. Advertiser traction is hard to gain in this environment.
The apparent solution for both companies is acquisition. Yes – their valuation may not be as high as it was last year, and yes – maybe they will not be hurt so badly from the downturn. But after all, they are VC funded start-ups looking for an exit and this downturn could be precisely what they need – a special opportunity for them to be acquired by a big player, despite the in-text controversy.
Lately, Google has demonstrated an increasing willingness to experiment with innovative, questionable and controversial advertising products. Just a few days after launching expandable ads, they also announced behavioral targeting. They are becoming more aggressive in penetrating advertising in new ways. This is understandable since Google started laying off employees and shutting projects down, proving it is not immune to the downturn. Don’t be surprised if Google enters the in-text market as well.
The question is, will Google acquire an existing player? Or will they develop their own product?
Google has shown that it acquires companies either for their distribution (YouTube, FeedBurner) or for their technology (Urchin). The bad news for VibrantMedia and Kontera is that Google needs neither their technology nor their distribution.
While in-text technology is not a known Google product, it may have already been developed. Even if it hasn’t, it’s not difficult for them to build. I don’t believe that Google will acquire VibrantMedia or Kontera for the amount they would like to see, especially when the technologies are so similar and could be implemented much better by Google themselves.
As for distribution, Google’s publishers network is much bigger than both VibrantMedia’s and Kontera’s. This certainly does not justify an acquisition.
So Google won’t be acquiring any of them. Who will, then?
Maybe another network will see value in their technology or distribution. Lately, Yahoo is a mystery, which leaves us with Microsoft – a great candidate for the first buyer in this market. They have the cash, they wish to expand their ad network reach, and they may be interested in in-text advertising. And they are already working with Kontera in some capacity, which could be a catalyst for acquisition.
The Future of In-Text Advertising
If Microsoft ends up acquiring Kontera, another player (AOL? IAC?) will follow and acquire VibrantMedia. In this case, Google won’t sit on the sidelines; they will launch their own in-text advertising solution for sure. By the time we’re out of this downturn, all major players will run in-text advertising, thus legitimizing it for the future.
So all in all, if these predictions play out, the downturn will serve in-text advertising well. It will no longer be led by relatively anonymous companies, and it may lose some of its charm, but it will become a legitimate way of advertising and will gain public acceptance.
And what if my predictions are wrong? Can the in-text market survive? What do you think?
In my last post, I wrote that CPA needs to be the next revolution in online advertising. I stressed that because CPA is the most efficient pricing method for advertisers, a wide adoption of CPA by publishers will drive the entire online advertising industry forward simply by helping it to gain market share over offline advertising. This is especially true in the current economic downturn, when advertising performance is becoming more and more significant.
I suggested that CPA is not widely adopted by publishers because there is no scalable system for deploying CPA ads on publishers’ sites in a way that generates high returns. A lot of innovation is required in order to make it scale.
However, I didn’t suggest any real innovative solution. I still don’t have a suggestion for a solution. But, I think I just found a company that does. Let’s take a look:
Effective CPA Implementation
CPA can be used very effectively today, but it doesn’t scale for most publishers. Filling pre-allocated ad spaces with graphical or textual CPA ads, contextual as they may be, simply doesn’t do the trick. Successful CPA publishers will tell you that the highest eCPM is gained when they manually embed a minimal number of text links and banners that point to the single most relevant product, in the right spot within the content.
Any solution for scaling CPA must answer these three questions with great accuracy, for every web page on which it is implemented:
- What product is the most relevant to be sold on the page?
- Where is the best location to link to the product within the content?
- Which linking method is the best for the product?
What product is the most relevant to be sold on the page?
Things that are very obvious to the human mind are not always so obvious to a machine. While a machine may be able to suggest some products after extracting keywords from a given text, it has no way of determining which of those keywords is a relevant product to sell. A machine can’t understand tone, humor and cynicism from the text. It can hardly tell if a product is mentioned in a positive or negative manner. Furthermore, it’s even harder for the machine to name a single product with the best chance of being sold on the web page.
In order for a machine to do all these things, it needs to understand the context of the page. One might suggest that Google already does this. After all, they run the best contextual advertising system out there. But even Google is limited. And the fact is, their system is not so contextual.
Google’s alleged contextual abilities are a derivative of its great search technology. Search technology is all about matching search queries to indexed pages and assigning a score to each match. When AdSense ads are embedded on a web page, the relevancy isn’t gained by understanding what the page is about, rather it is achieved through matching ads to pages as if the ads themselves were search queries. That’s about all that Google’s technology does.
In a sense, Google doesn’t try to find the most relevant ads for any web page. On the contrary – it finds the most relevant pages for any given ad.
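To make that inversion concrete, here is a toy sketch of search-style matching: each ad is treated as a query and scored against pages with a simple bag-of-words cosine. The pages, ads, and scoring are all invented for illustration; this is a rough caricature of the idea, not Google’s actual system.

```python
import math
from collections import Counter

def vectorize(text):
    """Bag-of-words term frequencies for a piece of text."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Hypothetical pages and ads -- illustrative data only.
pages = {
    "yoga-post": "morning yoga stretches and breathing for beginners",
    "car-review": "new sedan road test fuel economy and handling",
}
ads = ["yoga mat for beginners", "sedan tires discount"]

# Treat each ad as if it were a search query and find its best-matching page,
# rather than starting from the page and asking what it is about.
for ad in ads:
    best = max(pages, key=lambda p: cosine(vectorize(ad), vectorize(pages[p])))
    print(ad, "->", best)
```

Note that the loop iterates over ads, not pages: the relevance score flows from the ad outward, which is the point being made above.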
If Google can’t deliver the most relevant product right away, then there is still work to do. It seems we need a whole new technology in order to find the single most relevant product for a web page, something more contextual in its nature, and not based on search technology.
Where is the best location to link to the product within the content?
Let’s say we have a machine that understands what product to link to. Now, where should it embed the link? It’s a difficult decision for a machine, even harder than finding the relevant product itself.
It’s one thing to match ads to web pages (or web pages to ads) and fill pre-allocated ad spaces, but it’s another thing to actually allocate the ad space based on context. And we pretty much understand by now that even Google can’t do this.
This challenge doesn’t stop others from trying. In-text advertising is a somewhat new approach, implemented by companies like VibrantMedia, Kontera and even Amazon. All are based on the assumption that a good link from within the content is worth more than a bunch of ads around it. Yet those companies also don’t have the technology to determine the best spot for the link to be embedded. All they do is turn extracted keywords into links. Those are not necessarily the right words to link, and not necessarily in the right spot. Again, we see that technology cannot yet deliver what an advertiser needs – in this case, placement.
Which linking method is the best for the product?
Is this a product that requires a visual instead of a text link? And if so, which of the available visuals will perform better? Surely, no machine can provide the answer, yet…
It seems to me that machine-scalable CPA is a romantic idea that belongs to the future. The requirements are just too much for current technology. Nevertheless, we need a solution now. So what can be done? Crowd sourcing.
If the technology doesn’t exist yet, there is no other way to achieve scalable CPA but to outsource the job to people. This has already been done in other areas of online activity, for example by Digg, Delicious and uTest.
Zemanta Crowd Sources Contextual CPA Advertising
Since every page that contains content is generated by humans, what could be more natural than outsourcing the contextual CPA ad embedding to those humans – the authors themselves?
I can think of one problem: authors are not always commercially savvy. They are mostly concerned with writing, not with marketing.
What Zemanta does, however, is provide them with a tool to enrich their content with tags, images, links and more. Zemanta integrates smoothly into the author’s domain and provides them with great value. What if Zemanta offered some CPA links and banners in addition to its other offerings of pictures, links and tags? And what if those links and banners were seamlessly merged into Zemanta, thus providing a way for the author to integrate CPA in a truly organic fashion?
In an earlier post, I suggested that Zemanta would eventually look for revenue at the point of content consumption, even though they recently announced a paid API model. Andraz Tori, a co-founder of Zemanta, supports my assumption with his comment on CenterNetworks:
But we are definitely working on monetization.
For example when suggest link to Amazon and you specified your Affiliate ID, we insert it. If you haven’t specified it, we insert our own.
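As a minimal sketch of what the affiliate-ID insertion described in that comment could look like, here is a hypothetical helper that prefers the author’s own ID and falls back to the service’s. The function, the fallback `zemanta-20` tag, and the product URL are all invented for illustration; only the `tag` query parameter is the standard Amazon Associates convention.

```python
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

# Hypothetical fallback affiliate ID used when the author hasn't set one.
DEFAULT_TAG = "zemanta-20"

def add_affiliate_tag(url, author_tag=None):
    """Insert an Amazon Associates tracking tag into a product link,
    preferring the author's own affiliate ID over the service's."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query["tag"] = [author_tag or DEFAULT_TAG]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))

# The author specified an ID, so their tag wins:
print(add_affiliate_tag("https://www.amazon.com/dp/B000123", "author-20"))
# No author ID, so the service's own tag is inserted:
print(add_affiliate_tag("https://www.amazon.com/dp/B000123"))
```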
Presently, authors from around the world are using Zemanta and organically embedding Amazon CPA ads within their content. They embed ads for the right product, they embed them in the right spots within the content, and they decide whether to add a textual link or a product image. Best of all, the authors don’t do it with the intention of selling. They do it with the intention of increasing the value of their own content. As a result, ads are organically and naturally embedded within their content.
Of course this is just the tip of the iceberg. Those are just Amazon ads for now, and Zemanta’s method of revenue sharing is not yet solid. However, they have only recently started – and as far as I can see, Zemanta provides a great service, and its byproduct is crowd sourcing of contextual CPA advertising.
If that’s not CPA innovation, then what is?
As far as the online advertising market goes, we must remember that though the market has its own organic growth – which may have slowed down – online advertising is still a part of the whole advertising market. As such, it can generate growth simply by gaining market share over other advertising platforms.
The short nature of human memory makes us forget that online advertising’s ability to gain market share is exactly what drove the industry to its current size in the first place. Online Advertising gained market share over other advertising methods during bad economic times simply due to its ability to demonstrate better measurement and performance. The current slowdown, in my opinion, is a sign of a lack of innovation. This lack of innovation is what stops Online Advertising from growing even further.
Hey, maybe we should embrace the current crisis as a motivator for innovation. What kind of innovation? CPA, if you ask me. And here’s why:
Let’s begin with a look at the history of online advertising. CPM was the first and most simple pricing model for online advertising. It was a copycat of the traditional offline advertising CPM model, common in TV, printed media, billboards et al. All offline advertising methods price their inventory according to a quality variable (length of the TV ad, square inches of the newspaper ad, location of the billboard) multiplied by the anticipated number of eyeballs it will reach.
And so it was on the web – only the web had two advantages over traditional offline CPM. First, eyeballs were measured more precisely: counting an advertisement’s exposures in webserver logs is much more accurate than counting the cars passing through an intersection at rush hour to estimate daily averages for billboard advertising.
The second advantage online CPM introduced was accurate counting for clicks. For the first time, advertisers could measure engagement through feedback from the target audience. On offline media this kind of data was only achievable using expensive market surveys with questionable results.
Online CPM was one thing that drove the online advertising market to grow. Advertisers eventually realized that they should spend their CPM dollars where they have better accuracy and engagement measurement.
But that wasn’t all.
The real thing that pushed the market forward was the development of performance advertising. Introduced by Overture in 1998 and perfected by Google a few years later, CPC was something completely different.
For the first time, advertisers could pay just for engagement. Even better, for the first time, advertisers shared the burden of goal achievement with the publisher. Gone were the days of a publisher just allocating the ad space and then forgetting about it. With CPC, publishers were motivated to increase their audience engagement for advertisers. They could change the locations of ads, their appearances, help increase their relevancy by providing keywords and so on. The more the publishers made an effort, the more they earned. A unique cooperation model was born.
Very quickly advertisers realized that CPC helps them achieve their goals. Whether it was sales, sign ups or leads, CPC took them one step closer to success. They could lower their investment in converting views to clicks – that was left for the motivated publishers – and instead concentrate on converting clicks to goals. Performance advertising was born.
More than CPM, CPC significantly drove advertisers to allocate a portion of their budget to online advertising. Additionally, Google’s method enabled small advertisers to join the game. With TV, you couldn’t even think about advertising with less than a few thousand dollars. On Google, however, you could advertise your product with as little as 5 cents.
Of course CPC has its own problems. First and foremost is click fraud. Second, though it gets advertisers closer to their goals and helps them spend more effectively, it doesn’t bring them directly to achieving their goals the way CPA does.
CPA dates back to the same time as CPM: 1994. Thanks to Amazon’s affiliate program, CPA gained a reputation as an essential part of the company’s growth.
CPA is the best pricing model for advertisers. It provides certainty regarding the future ROI of their whole campaign, and it lessens the number of variables involved in calculating that ROI. When advertisers buy advertising using CPA, they know exactly what they will receive for each dollar spent. When they set their CPA rates, they are actually determining their own margins – as simple as that.
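The margin certainty can be shown with a toy calculation (all numbers invented for illustration): since the advertiser only pays when a sale happens, the per-sale margin is fixed in advance, no matter how many impressions or clicks it took to get there.

```python
# Hypothetical advertiser: sells a $50 product at a $30 gross margin,
# and offers a $10 CPA payout per completed sale.
product_price = 50.0
gross_margin = 30.0
cpa_payout = 10.0

# The advertising cost is incurred per sale, so the post-advertising
# margin is known before the campaign even starts.
margin_after_advertising = gross_margin - cpa_payout
print(margin_after_advertising)  # 20.0 per sale, guaranteed
```

Under CPM or CPC, by contrast, the cost per sale depends on uncertain click-through and conversion rates, so this number could only be estimated after the fact.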
CPA is Not Scalable
However, Google and its CPC concept took the lead. Not because CPC was a more effective method, but because it was a scalable one. Compared to Google’s scalable CPC solution, CPA implementation was poor. Why?
The same advantage that CPA holds for advertisers turns out to be a disadvantage for publishers. While advertisers are spared from converting views to clicks to goals, the burden falls on the publishers. And as it turns out, the eCPM that publishers generate from CPA advertising is usually lower than that which is generated by CPM or CPC, hence its narrow adoption – only 7% of the market.
However, when CPA is used effectively, it generates a much higher eCPM than CPC and CPM because it is more effective by nature. When advertisers save $X thanks to CPA, a portion of this $X is passed on to the publisher’s eCPM. In fact, when used effectively CPA is such a powerful revenue generator that an entire mini industry of CPC to CPA arbitraging is prospering.
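A back-of-the-envelope comparison (all rates invented for illustration) shows how a well-matched CPA placement can out-earn CPC on the same impressions, and why the arbitrage mentioned above is possible:

```python
def ecpm(revenue, impressions):
    """Effective CPM: revenue per thousand impressions."""
    return revenue / impressions * 1000

# One hypothetical month of traffic on a single page.
impressions = 100_000

# CPC scenario: 0.5% click-through rate at $0.40 per click.
cpc_revenue = impressions * 0.005 * 0.40

# CPA scenario: same 0.5% click-through, but a well-targeted product
# link converts 5% of clicks at a $10 payout per sale.
cpa_revenue = impressions * 0.005 * 0.05 * 10.0

print(ecpm(cpc_revenue, impressions))  # roughly $2.00
print(ecpm(cpa_revenue, impressions))  # roughly $2.50
```

With these made-up rates, CPA wins; drop the conversion rate to 2% and CPC wins, which is exactly why unoptimized CPA placements usually underperform.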
Yet, what prevents CPA from becoming a commodity for publishers is its lack of scalability. Those who fully utilize it are only small publishers – the ones who can manually optimize their CPA campaigns, a tedious process. Optimizing Google AdSense CPC isn’t such a hassle: Google provides the contextual elements, and publishers can scale location and appearance easily. A way to scale CPA that both spares publishers the manual optimization and generates a high eCPM is not yet available.
CPA is the Future
We must develop an innovative way to scale CPA. Imagine the possibilities if such a tool existed. If publishers adopted it widely, it could drive more advertisers to use the CPA model. Consequently, the whole market would act more effectively. Eventually, this would lead advertisers to re-allocate their offline budgets to online, and our industry will be saved!
Is there currently any innovation taking this route? Not really. Google is experimenting with it, and we could see something robust from them in the future. CPA networks don’t seem to be too innovative, and while there are a few startups tackling this model, none of them seem revolutionary enough.
However, we are only at the start of tough times. The longer this downturn lasts, the better the chances of seeing innovation, especially once entrepreneurs realize that whoever succeeds in innovating in this market might well be the next Google.
Lately, Twitter’s revenue-generating intentions are becoming clearer. Twitter already understands that businesses see great value in operating a Twitter account, and that is where they intend to look for revenue. It sounds like a great idea.
However, a phenomenon that was quite negligible until now has become significant: cybersquatting on Twitter. The value of having a branded Twitter account is clear to businesses and Twitter alike; it is also clear to opportunists who are squatting on the Twitter account names of big companies, probably with future monetization ideas in mind.
Try it for yourself, it’s fun! Just think of a big corporation and see if there is already a Twitter account with its name. While some businesses were fast enough to secure their Twitter accounts (@google, @facebook), some slower-reacting corporations have been squatted already. It’s hard to believe @microsoft, @apple, @chevrolet and @audi are really representatives of the corresponding companies.
I wonder when Twitter will start paying attention to this issue and how they intend to solve it. Any ideas?