Facebook’s IPO

Inside Facebook’s IPO: How the Social Web Will Reshape the Economy

By Sam Gustin | Time.com

Facebook’s initial public offering later this spring will create a billion-dollar windfall for its founders and early investors, so it’s easy to be cynical and view the IPO as another example of insiders cashing in on the latest web phenomenon. But the significance — and symbolism — of Facebook’s IPO goes much deeper. Facebook’s astonishing rise is an apt metaphor for the emergence over the last two decades of the Internet itself as a tool for individual self-expression and collective organization. It’s also a dramatic example of a generational shift taking place among the entrepreneurial class, one that elevates social change as a priority along with commercial success. Perhaps most importantly, Facebook’s success is a powerful testament to the transformational impact that a free, open Internet can have on society and commerce.

Facebook’s IPO — the largest in Internet history — is a once-in-a-generation milestone in the evolution of the web. It represents a coming-of-age moment for the second major phase in the development of the Internet as a widespread platform: the rise of the social web. In the initial phase, large portals such as America Online and Yahoo established the web as a repository for vast amounts of information. It wasn’t until Google indexed the web with its search engine that this phase reached maturity, allowing users to quickly find information across the Internet, beyond the borders of these centralized communities. But even as Google democratized the web by enabling ordinary people to exercise their voices and become accessible to the broader online world, the Internet remained a largely individualized experience.

The emergence of social networks like Friendster, MySpace and Facebook gave rise to the second major phase of the web by establishing it as a platform for social interaction. For the first time, these social networks allowed users to establish personal identities on highly scaled platforms in order to connect and share with others. These services delighted and empowered users by making them feel ownership of their own pieces of digital real estate, which they could use as personal calling cards to project themselves into cyberspace. Thanks to savvy marketing and engaging design, Facebook caught on like wildfire, first among college students, and later among the public at large. As it grew, Facebook leveraged the network effect of millions of people connected to each other to create an unstoppable momentum that vanquished its social networking foes.

Facebook’s ascendance demonstrates the power of the Internet as a force for social connection and, over the last few years, for social and political change. This new power has three main features: it’s democratic, because everyone in the free world has access to the same Internet; it’s meritocratic, because success on this platform is a function of the power of one’s ideas, not traditional categories of wealth or class; and it’s viral, because the web enables good ideas to spread at light-speed and reach millions. The democratic, meritocratic, and viral nature of the web has shaken existing power structures worldwide, so it’s not surprising that repressive regimes have responded by trying to crack down on web activity in order to clamp down on dissent. This push-and-pull between old-world, legacy power structures and upstart digital communities will continue for some time, but if you believe the digital idealists — and Zuckerberg clearly is one — history is on their side.

It’s no coincidence that Facebook’s recent rise as a cultural phenomenon has mirrored the Internet’s emergence as a powerful social and political tool. Reformers in Tunisia and Egypt used Facebook to organize unprecedented protests against dictators who had held power for decades. Activists in the United States used social networks to launch the Occupy Wall Street movement, which has focused national attention on income inequality and made it a central issue in the 2012 presidential election. Most recently, a viral, web-based campaign against anti-piracy legislation in the U.S. Congress helped convince dozens of lawmakers to withdraw their support. The Internet is emerging as a potent socio-political force in its own right — and social networks like Facebook and Twitter are the glue that holds this force together.

Facebook’s commercial success, meanwhile, demonstrates the potency of the Internet as a platform for building wildly successful businesses from the ground up, as well as a vehicle for established businesses to engage with existing customers and reach new ones. Facebook’s IPO prospectus contains several stunning figures that illustrate the immense reach of the service, and the potential for businesses to engage with users. The company now has 845 million monthly active users — or nearly half of the estimated 2 billion Internet users worldwide — and 483 million of them log in every day. The remarkable thing is that Facebook is just scratching the surface of its commercial possibilities. But Zuckerberg has made it clear that the singular pursuit of profit is not what’s driving him. “We don’t build services to make money,” he writes in Facebook’s IPO letter, “we make money to build better services.” Facebook’s service, in turn, is designed to further what he describes as a “social mission” — “to make the world more open and connected.”

Facebook has been criticized for encroaching on the boundaries of individual privacy, and the company’s increasing ubiquity will require vigilance by civil liberty watchdogs and, if necessary, the federal government. But Zuckerberg has been nothing if not consistent: He deeply believes that making the Internet more social will, in turn, make society more democratic. “People sharing more — even if just with their close friends or families — creates a more open culture and leads to a better understanding of the lives and perspectives of others,” Zuckerberg writes. In this respect, Zuckerberg is clearly an idealist of the first order, and he’s not alone. Increasingly, young entrepreneurs, particularly in technology, are looking to create businesses that contribute to the public good as well as make money. And it’s not just entrepreneurs who feel this way, but also consumers. “These days I think more and more people want to use services from companies that believe in something beyond simply maximizing profits,” he says.

Finally, Facebook’s triumph is an important reminder of the importance of a free, open Internet. When Zuckerberg launched the service as a Harvard sophomore in 2004, it’s unlikely he imagined that eight years later he would be leading a company poised to go public with over 800 million users and a $100 billion valuation. Such an outcome would never have been possible if the web were not the open, fertile environment for innovation that we’ve all come to take for granted. In less than two decades, the Internet has established itself as one of the most transformative economic and social platforms in history. It has become the spawning ground for disruptive new companies that have generated billions of dollars in wealth and economic activity, and changed the way we live forever. And as the web becomes more social, platforms like Facebook and Twitter will have greater impact on politics around the world. Facebook’s IPO is one of the most important business stories of the year, but the company’s success should also serve as a powerful reminder of why it’s crucial to maintain the Internet as a free, open platform. If we, as a society, can do that, there’s no telling where the next Mark Zuckerberg may appear.


Where does SEO and Social Media Optimization fit within your organization?

Where Does SEO & Social Media Optimization Fit Within Your Company? Hint: It’s Not All About Marketing

Lee Odden on Dec 5, 2011 | Content Marketing, Online Marketing, Online PR, Optimize Book, Social Media

When most people think of search engine optimization for a company, it’s usually as a marketing function. That makes sense, since SEO is such a low-cost, high-impact and measurable method of attracting new customers and revenue. However, I think looking at SEO and optimizing social media solely as a marketing function is like looking only at the eyeball of your favorite portrait. Take a step back and you’ll see a bigger, more interesting picture.

Think Outside the Marketing Search Box. Companies publish a variety of content besides products and services but usually rely only on people’s familiarity with the company website to find that information. Customers, employees, partners, job seekers, news media, industry analysts and investors need a variety of information from those companies, and with over 11 billion queries handled every month by Google, search is a prime channel for discovery. So is social. Facebook has over 800 million members, Twitter has over 200 million and LinkedIn over 100 million. If content has a purpose and an intended audience, why not optimize for findability and shareability?

Go Holistic. As companies make plans for resource allocation in the coming year, I’d suggest considering how to facilitate easy connection and sharing between the online content published by the brand (on and off the corporate website) and its intended audiences: in particular, the holistic application of search keywords and social topics across Marketing, PR, Customer Service, Talent Acquisition and HR.


ROI on Optimization Isn’t Just for Marketing. There are more departments that publish online content than those listed above, but they illustrate the opportunity companies have to make it easier, more efficient and more effective for intended audiences to discover and engage with a company’s content. If Marketing can show a return on investment based on content optimization that attracts website traffic and converts visitors into sales, then so can optimizing other types of business content with a specific purpose. For example, optimizing news content for discovery by journalists, bloggers and analysts who are doing research.

Amplify PR Investment with SEO. If a company is spending $10,000 per month on a Public Relations firm retainer and part of that budget is specifically for media relations and getting press coverage, then optimizing press releases, the newsroom, digital assets like PDFs, images, video and even social media content can help connect the brand story with journalists who are in need of sources. With effective PR content optimization, the impact of that $10k spend might be increased by 50%. We spend nothing on media relations (not that I don’t think we should) and gain about $5k in value of media coverage every month because our news content is discovered by journalists, analysts and bloggers through search and social media.

Make Connecting & Sharing Easier. So think of search and social media optimization not only as a way to boost leads and sales, but as a way to amplify the effect of communications between the brand and its community of customers, media, job seekers and employees. Whatever can be searched can be optimized. Facilitating search and social media based discovery of that content can mean attracting better employees, solving customer service problems online without phone calls and increasing credibility of the brand where key constituents are looking.

Are you optimizing for search and social media discovery outside of marketing? If you could show an impact on increased effectiveness or even a deflection of costs, where would you start?


CSS for Search Engine Optimization

How to Use CSS for Search Engine Optimization
By Mikhail Tuknov

Cascading Style Sheets (CSS) is a language that permits Web designers to attach styles such as spacing, color and fonts to HTML documents. A style sheet works like a template: a Web developer defines a style for an HTML element once and then applies it to as many Web pages as required. Thus, Cascading Style Sheets are collections of formatting rules that control the appearance of content in a Web page. With CSS you have great flexibility and control over the exact appearance of a page, from precise positioning of the layout to specific fonts and styles.

There are many benefits to using CSS. Maintaining a Web site built with CSS is much easier than maintaining one that is table based. Aside from letting you make extensive changes through one CSS file, the code it generates is simpler to update. With CSS, when you decide to make a change, you simply alter the style, and that element is updated automatically everywhere it appears on the site, saving you an enormous amount of time. Without CSS you’d have to edit each page independently. CSS also generally requires less code than a table-based layout, making your code lighter, cleaner and easier to maintain.

Cascading Style Sheets Benefits and Search Engine Optimization (SEO)
Another major benefit of CSS is that it makes your Web site SEO friendly. The reason is simple: search engine spiders don’t want to wade through bundles of presentational HTML to reach the content being indexed. Font tags and nested tables make HTML code cumbersome and bury the actual text. If you use external CSS files to define the design attributes, the HTML code stays clean, which can lead to better search engine rankings. With some knowledge of CSS you can also rearrange the code without changing the visual layout. For instance, you could make the main content of your site appear above the header or navigation menu in the source code of your Web pages, which signals the importance of that content to search engine crawlers. I have personally seen big ranking improvements on fully CSS-based Web sites, so when I see a site built with old-school HTML tags such as TABLE, TD, TR and FONT, I convert it to a CSS layout. There are many tools on the Internet that show the code-to-text ratio of your site. Search engines such as Google, Yahoo and MSN favor lightweight Web sites: they want to see your content — the text, not the code. CSS makes this straightforward, because you can move presentational code into an external file, leaving the actual page clean and simple.
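
As a minimal sketch of the source-ordering idea above (the element IDs and pixel values are illustrative, not taken from the article): the main content can come first in the HTML source, while CSS positions the header above it visually.

```html
<!-- Main content appears first in the source order, so crawlers reach it
     before the navigation chrome; CSS then places the header on top. -->
<div id="content">Primary page copy that search engines should see first.</div>
<div id="header">Site logo and navigation</div>

<style>
  /* Pin the header to the top of the page visually. */
  #header  { position: absolute; top: 0; left: 0; width: 100%; height: 80px; }
  /* Push the content down so it clears the absolutely positioned header. */
  #content { margin-top: 100px; }
</style>
```

The visual result is the same as a conventional header-first layout; only the order in the markup changes.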

Web Site Accessibility
CSS also makes your Web site more accessible. It has been estimated that by 2008 one-third of the world’s population will be using handheld devices to access the Internet, and it’s important that your site is accessible to them as well. You can create an additional CSS document specifically for handheld devices such as cell phones, which will be applied in place of the regular CSS document; this is not achievable with a tabular layout. CSS benefits accessibility chiefly by separating document structure from presentation.

Increases Download Speed of Your Website
CSS code downloads faster than tables. Browsers read through tables twice before displaying their contents: first to work out their structure and then to determine their content. Moreover, a table is rendered as a whole; no part of it is displayed until the entire table has been downloaded. Table layouts also typically rely on spacer images to assist with positioning, adding further weight. CSS generally requires less code than tables, and all of the layout code can be placed in an external CSS document that is fetched just once and then cached on the user’s computer, while a table layout embedded in each HTML document must be loaded anew with every page. With CSS you can also manage the order in which items download, making the text content appear before images, which tend to load more slowly.

Cross Browser Compatibility
To summarize, CSS makes your Web sites load faster, saves time and labor, lets you make links more attractive and dynamic, and allows rollovers without JavaScript. All of the current major browsers (Firefox, Internet Explorer and Netscape) recognize CSS.

Ajax explained


Ajax (pronounced /ˈeɪdʒæks/), shorthand for Asynchronous JavaScript and XML, is a group of interrelated web development techniques used on the client side to create interactive web applications.

With Ajax, web applications can retrieve data from the server asynchronously in the background without interfering with the display and behavior of the existing page. The use of Ajax techniques has led to an increase in interactive or dynamic interfaces on web pages. Data is usually retrieved using the XMLHttpRequest object. Despite the name, the use of XML is not actually required, nor do the requests need to be asynchronous.

Like DHTML and LAMP, Ajax is not a technology in itself, but a group of technologies. Ajax uses a combination of HTML and CSS to mark up and style information. The DOM is accessed with JavaScript to dynamically display, and to allow the user to interact with, the information presented. JavaScript and the XMLHttpRequest object provide a method for exchanging data asynchronously between browser and server to avoid full page reloads.
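
The exchange described above can be sketched in a few lines of JavaScript. This is a simplified illustration, not a production pattern: the XHR constructor is passed in as a parameter (an assumption made here for portability) so the same function works with the browser's native XMLHttpRequest or with a stand-in outside a browser.

```javascript
// Minimal sketch of the classic Ajax request/callback cycle.
function getAsync(url, onSuccess, XHR) {
  const xhr = new XHR();
  xhr.open('GET', url, true); // third argument true = asynchronous
  xhr.onreadystatechange = function () {
    // readyState 4 means the request is complete; status 200 means OK.
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Hand the response to the caller, which can update part of the
      // page via the DOM — no full page reload required.
      onSuccess(xhr.responseText);
    }
  };
  xhr.send(null);
}
```

In a browser this would be invoked as `getAsync('/data', render, XMLHttpRequest)`, where `render` is whatever function inserts the response into the page.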


In the 1990s, most web sites were based on complete HTML pages; each user action required that the page be re-loaded from the server (or a new page loaded). This process is not efficient, as reflected by the user experience (all page content disappears then reappears, etc.). Each time a page is reloaded due to a partial change, all of the content must be re-sent instead of just the changed information. This can place additional load on the server and use excessive bandwidth.

Asynchronous loading of content first became practical when Java applets were introduced in the first version of the Java language in 1995. These allowed compiled client-side code to load data asynchronously from the web server after a web page was loaded. In 1996, Internet Explorer introduced the IFrame element to HTML, which also enabled asynchronous loading.

In 1999, Microsoft created the XMLHTTP ActiveX control in Internet Explorer 5, which was later adopted by Mozilla, Safari, Opera and other browsers as the native XMLHttpRequest object. Microsoft has adopted the native XMLHttpRequest model as of Internet Explorer 7, though the ActiveX version is still supported.

The utility of background HTTP requests to the server and of asynchronous web technologies remained fairly obscure until they started appearing in full-scale online applications such as Outlook Web Access (2000)[6] and Oddpost (2002). Later, Google notably deployed Ajax on a wide scale with Gmail (2004) and Google Maps (2005).

The term “Ajax” was coined in 2005 by Jesse James Garrett. However, a patent application covering this type of user interface was filed on September 3, 2003, thus predating the term itself by two years. This application resulted in US Patent #7,523,401 being issued to Greg Aldridge of Kokomo, IN.

On April 5, 2006 the World Wide Web Consortium (W3C) released the first draft specification for the XMLHttpRequest object in an attempt to create an official web standard.


The term Ajax has come to represent a broad group of web technologies that can be used to implement a web application that communicates with a server in the background, without interfering with the current state of the page. In the article that coined the term Ajax, Jesse James Garrett explained that the following technologies are incorporated:

* HTML or XHTML and CSS for presentation
* The Document Object Model (DOM) for dynamic display of and interaction with data
* XML for the interchange of data, and XSLT for its manipulation
* The XMLHttpRequest object for asynchronous communication
* JavaScript to bring these technologies together

Since then, however, there have been a number of developments in the technologies used in an Ajax application, and the definition of the term Ajax. In particular, it has been noted that:

JavaScript is not the only client-side scripting language that can be used for implementing an Ajax application. Other languages such as VBScript are also capable of the required functionality.

However, JavaScript is the most popular language for Ajax programming due to its inclusion in, and compatibility with, the majority of modern web browsers.

XML is not required for data interchange and therefore XSLT is not required for the manipulation of data. JavaScript Object Notation (JSON) is often used as an alternative format for data interchange,[10] although other formats such as preformatted HTML or plain text can also be used.
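
The point about JSON is easy to see in code: a JSON response arrives as plain text and parses directly into JavaScript objects with the language's built-in `JSON.parse`, with no XML parsing or XSLT step. The payload below is an invented example, not from any real service.

```javascript
// A JSON payload as it might arrive in xhr.responseText.
const responseText = '{"user": "alice", "unreadMessages": 3}';

// One call turns the text into a ready-to-use JavaScript object.
const data = JSON.parse(responseText);
// data.user is "alice"; data.unreadMessages is 3
```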

Classic Ajax involves writing ad hoc JavaScript on the client. A simpler, if cruder, alternative is to use standard JavaScript libraries that can partially update a page, such as ASP.NET’s UpdatePanel. Tools (or web application frameworks) such as Echo and ZK enable fine-grained control of a page from the server, using only standard JavaScript libraries.

By introducing XMLHttpRequest and pseudo-multithreading, it is even possible to partially swap the roles of client and server in client-server models: the web browser may start behaving as a server, and the web server as a client.


1) Pages dynamically created using successive Ajax requests do not automatically register themselves with the browser’s history engine, so clicking the browser’s “back” button may not return the user to an earlier state of the Ajax-enabled page, but may instead return them to the last full page visited before it. Workarounds include using invisible IFrames to trigger changes in the browser’s history, or changing the URL fragment identifier (the portion of a URL after the ‘#’) as the application state changes and monitoring it for changes.

2) Dynamic web page updates also make it difficult for a user to bookmark a particular state of the application. Solutions to this problem exist, many of which use the URL fragment identifier (the portion of a URL after the ‘#’) to keep track of, and allow users to return to, the application in a given state.
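
The fragment-identifier workaround mentioned in drawbacks 1 and 2 can be sketched as a pair of helpers. The function names are illustrative, not a standard API; in a real page the encoded fragment would be assigned to `location.hash` and decoded again on page load or on a hashchange event.

```javascript
// Serialize application state into a URL fragment (the part after '#').
function stateToFragment(state) {
  return '#' + encodeURIComponent(JSON.stringify(state));
}

// Restore application state from a URL fragment; null if there is none.
function fragmentToState(hash) {
  if (!hash || hash.length < 2) return null; // empty string or bare '#'
  return JSON.parse(decodeURIComponent(hash.slice(1)));
}
```

Because the fragment never reaches the server, the page is bookmarkable and back-button friendly without triggering a reload.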

3) Depending on the nature of the Ajax application, dynamic page updates may disrupt user interactions, especially over an unstable Internet connection. For instance, editing a search field may trigger a query to the server for search completions, but the user may not know that a completion popup is forthcoming; if the connection is slow, the popup list may appear at an inconvenient moment, when the user has already moved on to something else.
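
One common mitigation for drawback 3 is to debounce keystrokes so the completion query fires only after the user pauses typing. The sketch below takes the timer functions as parameters (an assumption made here so the pattern can be exercised outside a browser); in a page they default to the standard setTimeout/clearTimeout.

```javascript
// Return a wrapped version of fn that runs only after delayMs of quiet.
function debounce(fn, delayMs, setTimer = setTimeout, clearTimer = clearTimeout) {
  let pending = null;
  return function (...args) {
    // A newer call supersedes any request still waiting to fire.
    if (pending !== null) clearTimer(pending);
    pending = setTimer(() => fn(...args), delayMs);
  };
}
```

Wrapping the server query with `debounce(queryServer, 300)` means rapid keystrokes cancel each other, and only the final, settled input produces a request.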

4) Because most web crawlers do not execute JavaScript code,[14] publicly indexable web applications should provide an alternative means of accessing the content that would normally be retrieved with Ajax, thereby allowing search engines to index it.

5) Any user whose browser does not support JavaScript or XMLHttpRequest, or simply has this functionality disabled, will not be able to properly use pages which depend on Ajax. Similarly, devices such as mobile phones, PDAs, and screen readers may not have support for the required technologies. Screen readers that are able to use Ajax may still not be able to properly read the dynamically generated content. The only way to preserve functionality for such users is to fall back on non-JavaScript methods. This can be achieved by making sure links and forms can be resolved properly and do not rely solely on Ajax.

6) The same origin policy prevents some Ajax techniques from being used across domains, although the W3C has a draft of the XMLHttpRequest object that would enable this functionality. Techniques exist to sidestep this limitation by using a special Cross Domain Communications channel embedded as an iframe within a page.

7) Ajax has its own set of vulnerabilities that developers must address. Developers familiar with other web technologies may have to learn new testing and coding methods to write secure Ajax applications.

8) Ajax-powered interfaces may dramatically increase the number of user-generated requests to web servers and their back-ends (databases, or other). This can lead to longer response times and/or additional hardware needs.