| 0 comments ]

Mark Zuckerberg and mega-social media site Facebook have both been in the news quite a bit recently, between the company's disastrous IPO and the many, many people who are unhappy with it. Now, a class-action lawsuit is being brought against Zuckerberg by some of those unhappy investors, who claim the mogul unloaded a huge amount of stock based on inside information that it wasn't worth its estimated value.
Zuckerberg is no stranger to legal battles; last year he was sued over a Facebook page that garnered a wave of backlash from the Jewish community, and just a week after the company went public, it was sued for hiding "unfavorable growth forecasts" before the IPO. Oh, and there was that whole Winklevoss scandal.
This lawsuit, however, could get big very quickly. It claims that JP Morgan, Goldman Sachs, and Morgan Stanley all tipped off only the investors with the largest stake in the company, well before the IPO, that the stock was worth far less than its estimated value, leaving everyone else who invested their hard-earned dollars in the dust. As of Monday morning, shares had fallen to around $26.00 apiece, far below the initial $38.00 offering price.
Former Wall Street analyst Henry Blodget spoke up on behalf of investors, calling the practice of sharing analyst estimates with only a select few "absurd and unfair." He goes on to say that the SEC should change its rules about how such information is shared, asserting that every investor has the right to know what's going on with an IPO.
“This is an absurd and unfair practice,” he said. “The estimates themselves are material information–the consensus of smart, well-trained analysts who have worked with the company’s management to develop realistic forecasts. Most investors don’t even know that these estimates exist, let alone that they’re whispered verbally to only a handful of big investors. All potential investors should have easy access to these estimates, as well as to any logic underlying them. The SEC needs to change the rules here.”
While the exact value of the shares Zuckerberg sold isn't known, rumors put it at around a billion dollars, and that adds up to a lot of angry stockholders. There have been accusations that Zuckerberg himself is to blame for the disastrous IPO, on the grounds that he is an egomaniac who allowed the company to offer inflated projections in order to justify Facebook's $100 billion valuation.
The fact that Facebook's head honcho took off on a honeymoon right after the stock began to tank isn't sitting well with investors, either. It looks like this case could get nasty very quickly, as the very people who have been supporting Facebook demand answers and accountability from its founder.

RT @AskKissy LOL. Mark got over.. Aren’t you glad you couldn’t afford to buy the stock? http://t.co/mmDLvsoT

When are people going to realize Zuckerberg is a horrible person? “Facebook, Mark Zuckerberg, Banks Sued Over IPO” http://t.co/wJIwuKEN

I guess someone is having a more stressful day than me…Mark Zuckerberg Sued For Unloading Facebook Stock #WebProNews http://t.co/LyD1A4Ip

| 2 comments ]

Google announced that it will roll out some new tools for DoubleClick Ad Exchange buyers. These features, the company says, will help buyers purchase quality inventory and monitor their campaigns.

The DoubleClick Ad Exchange was launched in September as a real-time marketplace where online publishers and ad networks/agencies can buy and sell ad space for prices set in a real-time auction.

One new feature is called "Site Packs," which the company describes as "manually crafted collections of like sites based on DoubleClick Ad Planner and internal classifications, vetted for quality."

Google is also making changes to its Real-time Bidder. "The biggest change here is for Ad Exchange clients who work with DSPs," says DoubleClick Ad Exchange Product Manager Scott Spencer in an interview with AdExchanger.com (reposted to Google's DoubleClick Advertiser Blog). "Historically, Ad Exchange buyers were hidden from publishers behind their DSP. By introducing a way to segment out each individual client's ad calls, inventory can be sent exclusively to an Ad Exchange buyer even when that buyer uses a DSP. It increases transparency for publishers and potentially gives buyers more access to the highest quality inventory, like 'exclusive ad slots' – high quality inventory offered to only a few, select buyers as determined by the publisher."

Google will also roll out a beta of a feature called "Data Transfer," a report of all transactions clients buy and sell on the Ad Exchange.

| 0 comments ]

Rumors surfaced recently that Google was buying Like.com. Those rumors have now been confirmed, as Like.com has announced the news on their site. Founder and CEO Munjal Shah writes:

Since 2006, Like.com has been moving the frontiers of eCommerce forward one step at a time. We were the first to bring visual search to shopping, the first to build an automated cross-matching system for clothing, and more. We didn't stop there, and don't have plans to stop now. We see joining Google as a way to supersize our vision and supercharge our passion. This is something we are truly excited about.

Along the way we built a team that was not just hard working but obsessed with the mission at hand. We are so very proud of this team, and they deserve all the credit for how far we have come. In addition, there are many folks outside the company who have been pivotal to our success. All the Like.com alumni are incredible folks who left our little company better than they found it. Our investors were patient, insightful, and supportive of our plans to build a bigger platform. Our merchant partners were cutting edge and innovative, and in many cases they were willing to try new approaches and new technologies to better the user experience.

The company says it has developed technology that lets it understand what terms like "red high-heeled pumps" or "floral patterned sleeveless dress" mean, and has created algorithms to understand whether items will complement or clash with one another. The company also operates personalized shopping site Covet.com, user-generated fashion site Weardrobe.com, and Couturious.com, which it says "pushes the envelope on Rich Internet Architectures."

Google is obviously impressed with the technology behind Like.com, and it will be interesting to see what they do with it.

Financial details of the acquisition have not been disclosed, but TechCrunch says it's heard the price was over $100 million.

| 0 comments ]

Yesterday, news broke that Google was acquiring social Q&A site Aardvark for about $50 million. Aardvark sent its users an email today saying:
Dear friends,

Aardvark has just been acquired by Google!

Aardvark will remain fully operational and completely free, providing quick, helpful answers to all of your questions. For more information about how the acquisition affects Aardvark users, check out the FAQ that we've put together....


"We want social search to reach hundreds of millions people around the world, and joining with Google lets us reach that scale — we’re also excited to work with the team at Google: our company has a culture that was inspired by Google in many ways, and we have a lot of respect for the folks who work there," the company says in a blog post.

Aardvark is already available in Google Labs. Users will keep their existing Aardvark accounts, and the service will continue to work under Google.

The company says it will keep introducing new features, fixing bugs, and improving speed and quality. The main thing that will change, they say, is that they will be able to move faster with the support of Google.

User questions and answers will show up in search results on Google, Bing, Yahoo, and other search engines if you choose to share them publicly.

Ask (formerly Ask Jeeves) thinks Google is coming after its business.

| 1 comments ]

AT&T has introduced its FamilyMap App for the iPhone, which allows users to track the location of family members.

Users can download FamilyMap from the App Store on the iPhone or through iTunes. Users can track two phones on an account for $9.99 a month, or up to five phones for $14.99 per month. The FamilyMap app can also be used on most other AT&T smartphones. Previously, the service was only available via a desktop browser.


Features of the FamilyMap App include:

  • Interactive Map: View whereabouts within an interactive map, including surrounding landmarks such as schools and parks, and toggle between satellite and interactive street maps.
  • Personalize: Assign a name and photo to each device within an account, and label frequently visited locations such as "Bobby's house" and "School."
  • Schedule Checks: Use the app to see if a family member is on schedule. Parents can schedule and receive text and email alerts.
  • My Places: Set up and view a list of landmarks within the app. Users can display the landmark on the map, edit the landmark's details, and remove or add landmarks.

| 2 comments ]

You may have gotten some good links in the past, but don't count on them helping you forever. Old links go stale in the eyes of Google.

Google's Matt Cutts responded to a user-submitted question asking if Google removes PageRank coming from links on pages that no longer exist. The answer, unsurprisingly, is yes, but Cutts makes a statement within his response that may not be so obvious to everybody.

"In order to prevent things from becoming stale, we tend to use the current link graph, rather than a link graph of all of time," he says. (Emphasis added)

Now, this isn't exactly news, and to the seasoned search professional, probably not much of a revelation. However, to the average business owner looking to improve search engine performance (and not necessarily adapting to the ever-changing ways of SEO), it could be something that really hasn't resonated. Businesses have always been told about the power of links, but even if you got a lot of significant links a year or two ago, that doesn't mean your content will continue to perform well based on that. WebProNews has discussed the value of "link velocity" and Google's need for freshness in the past:

Link velocity refers to the speed at which new links to a webpage are formed, and by this term we may gain some new and vital insight. Historically, great bursts of new links to a specific page have been considered a red flag, the quickest way to identify a spammer trying to manipulate the results by creating the appearance of user trust. This led to Google’s famous assaults on link farms and paid link directories.

But the Web has changed, become more of a live Web than a static document Web. We have the advent of social bookmarking, embedded videos, links, buttons, and badges, social networks, real-time networks like Twitter and Friendfeed. Certainly the age of a website is still an indication of success and trustworthiness, but in an environment of live, real time updating, the age of a link as well as the slowing velocity of incoming links may be indicators of stale content in a world that values freshness.

So how do you keep getting "fresh" links?

If you want fresh links, there are a number of things you can do. For one, keep putting out content. Write content that has staying power. You can link to your old content when appropriate. Always promote the sharing of your content. Include buttons to make it easy for people to share your content on their social network of choice. You may want to make sure your old content is presented in the same template as your new content so it has the same sharing features. People still may find their way to that old content, and they may want to share it if encouraged.
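
For instance, Twitter's official Tweet Button is just a short markup snippet dropped into your page template. The sketch below is based on Twitter's published button code at the time of writing; check Twitter's own documentation for the current attributes:

    <!-- Tweet Button: the anchor is progressively enhanced by widgets.js -->
    <a href="http://twitter.com/share" class="twitter-share-button"
       data-count="horizontal">Tweet</a>
    <script type="text/javascript"
            src="http://platform.twitter.com/widgets.js"></script>

Putting a button like this in the shared template used by both new and old posts covers your entire archive in one step.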

Go back over old content, and look for stuff that is still relevant. You can update stories with new posts that add a fresher take and link back to the original. Encourage readers to follow the link and read the original article, which they may then link to themselves.

Leave commenting on for ongoing discussion. This can keep an old post relevant. Just because you wrote an article a year ago does not mean people won't still add to it, and sometimes people will link to articles based on comments that are left.

Share old posts through social networks if they are still about relevant topics. You don't want to just start flooding your Twitter account with tweets to all of your old content, but if you have an older article that is relevant to a current discussion, you may share it, as your take on the subject. A follower who has not seen it before, or perhaps has forgotten about it, may find it worth linking to themselves. Can you think of other ways to get more link value out of old content?

| 0 comments ]

Google has at some point quietly increased the number of sitemaps allowed in a single sitemap index file from 1,000 to 50,000. In a discussion on a Google Webmasters forum thread back in April of last year, Google employee Jonathan Simon said that each sitemap index file could include 1,000 sitemaps.

Just recently, however, David Harkness posted to that same thread, pointing to official Google documentation for sitemap errors, which says under the "Too many Sitemaps" error:

The list of Sitemaps in your Sitemap index exceeds the maximum allowed. A Sitemap index can contain no more than 50,000 Sitemaps. Split your Sitemap index into multiple Sitemap index files and ensure that each contains no more than 50,000 Sitemaps. Then, resubmit your Sitemap index files individually.

The larger number was confirmed by Simon, who came back to the conversation, saying, "Thanks for resurfacing this thread as we've improved our capacity a bit since then. The limit used to be 1,000. The Help Center article you point to is correct. The current maximum number of Sitemaps that can be referenced in a Sitemap Index file is 50,000."

As Barry Schwartz at Search Engine Roundtable, who stumbled across this post, points out, "This is a huge increase in capacity...Still, each Sitemap file can contain up to 50,000 URLs, so technically 50,000 multiplied by 50,000 is 2,500,000,000 or 2.5 billion URLs can be submitted to Google via Sitemaps."

In other words, you can have a lot of sitemaps in one sitemap index file. That's some good information to know, and it is a little surprising that there wasn't a bigger announcement made about this.
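
For anyone who hasn't worked with them, a sitemap index is just a small XML file that points at your individual sitemap files, per the sitemaps.org protocol. A minimal sketch, using placeholder example.com URLs:

    <?xml version="1.0" encoding="UTF-8"?>
    <sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
      <!-- Each <sitemap> entry points to one sitemap file, which can
           itself list up to 50,000 URLs -->
      <sitemap>
        <loc>http://www.example.com/sitemap1.xml.gz</loc>
        <lastmod>2010-01-15</lastmod>
      </sitemap>
      <sitemap>
        <loc>http://www.example.com/sitemap2.xml.gz</loc>
        <lastmod>2010-01-15</lastmod>
      </sitemap>
      <!-- ...a single index file may now reference up to 50,000 sitemaps -->
    </sitemapindex>

At 50,000 URLs per sitemap, completely filling one index file is what gets you to the 2.5 billion URL ceiling Schwartz describes.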

| 0 comments ]

RDFa, which stands for Resource Description Framework in attributes, is a W3C recommendation that adds a set of attribute-level extensions to XHTML for embedding rich metadata within web documents. While not everyone believes that W3C standards are incredibly necessary to operate a successful site, some see a great deal of potential for search engine optimization in RDFa.

In fact, this is the topic of a current WebProWorld thread, which was started by Dave Lauretti of MoreStar, who asks, "Are you working the RDFa Framework into your SEO campaigns?" He writes, "Now under certain conditions and with certain search strings on both Google and Yahoo we can find instances where the RDFa framework integrated within a website can enhance their listing in the search results."

Lauretti refers to an article from last summer at A List Apart by Mark Birbeck, who said that Google was beginning to process RDFa and Microformats as it indexes sites, using the parsed data to enhance the display of search results with "rich snippets". The result is Google listings like this:

RDFa in play

"It's a simple change to the display of search results, yet our experiments have shown that users find the new data valuable -- if they see useful and relevant information from the page, they are more likely to click through," Google said upon the launch of rich snippets.

Google says it is experimenting with markup for business and location data, but that it doesn't currently display this information, unless the business or organization is part of a review (hence the results in the above example). But when review information is marked up in the body of a web page, Google can identify it and may make it available in search results. When review information is shown in search results, this can of course entice users to click through to the page (one of the many reasons to treat customers right and monitor your reputation).

Currently Google uses RDFa for reviews; the search above also displays the date of the review, the star rating, the author, and the price range of an iPod, as Lauretti points out.
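
To make that concrete, here is a rough sketch of review markup using the data-vocabulary.org vocabulary from Google's rich snippets documentation; the product, reviewer, and values are invented for illustration:

    <div xmlns:v="http://rdf.data-vocabulary.org/#" typeof="v:Review">
      <!-- The item being reviewed -->
      <span property="v:itemreviewed">iPod nano 8GB</span>
      reviewed by <span property="v:reviewer">Jane Doe</span> on
      <span property="v:dtreviewed" content="2010-02-15">February 15, 2010</span>.
      <!-- The star rating Google can surface in the rich snippet -->
      Rating: <span property="v:rating">4.5</span> out of 5.
    </div>

The attributes simply label text the page already displays, so the visible content doesn't change; the crawler just gains a machine-readable layer.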

Best Buy's lead web development engineer reported that by adding RDFa, the company saw improved rankings for the respective pages, along with a 30% increase in traffic, and Yahoo evidently observed a 15% increase in click-through rates (via Steven Pemberton).

Implications for SEO

I'm not going to get into the technical side of RDFa here (see resources listed later in the article), but I would like to look at some of the implications that Google's use of RDFa could have on SEO practices. For one, rich snippets can show specific information related to products that are searched for. For example, a result for a movie search could bring up information like:

- Run time
- Release Date
- Rating
- Theaters that are showing it

"The implementation of RDFa not only gives more information about products or services but also increases the visibility of these in the latest generations of search engines, recommender systems and other applications," Lauretti tells WebProNews. "If accuracy is an issue when it comes to search and search results then pages with RDFa will get better rankings as there would be little to question regarding the page theme." (Source) He provides the following chart containing examples of the types of data that could potentially be displayed with RDFa:

RDFa Implications

"It is obvious that search marketers and SEOs will be utilizing this ability for themselves and their clients," says Lauretti. Take contact information specifically. "Using RDFa in your contact information clarifies to the search engine that the text within your contact block of code is indeed contact information." He says in this same light, "people information" can be displayed in the search results (usually social networking info). You could potentially show manufacturer information or author information.

RDFa actually has implications beyond just Google's regular web search. With respect to Google's Image search, the owner of images can also use RDFa to provide license information about the images they own. Google currently allows image searchers to have images displayed based on license type, and using RDFa with your images lets the search bots know under which licenses you are making your images available (via Mark Birbeck). There is also RDFa support for video.
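
One pattern for image licensing, roughly following Birbeck's write-up, attaches a license statement to the image file itself via RDFa's about attribute; the image path and license choice below are placeholders:

    <div about="http://www.example.com/images/photo.jpg">
      <img src="http://www.example.com/images/photo.jpg" alt="Example photo" />
      <!-- rel="license" states the license for the resource named in "about" -->
      This image is available under a
      <a rel="license" href="http://creativecommons.org/licenses/by/2.0/">
        Creative Commons Attribution license</a>.
    </div>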

Following are some resources where you can learn more about RDFa and how to implement it:

Google Introduces Rich Snippets
Introduction to RDFa
RDFa Primer
About RDFa (Google Webmaster Central)
RDFa to Provide Image License Info
RDFa Microformat Tagging For Your Website
For Businesses and Organizations
About Review Data (Google Webmaster Central)

Google's Matt Cutts has said in the past that Google has been kind of "white listing" sites to get rich snippets as it feels they are appropriate, but as Google grows more confident that such snippets don't hurt the user experience, it will likely roll the ability out more broadly. This is one thing to keep an eye on as the year progresses, and it is why those in the WebProWorld thread believe RDFa will become a bigger topic of discussion in 2010.
