Friday, November 12, 2010



Case Summary:

Myspace became the most popular social networking site in the United States in June 2006.[8] According to comScore, Myspace was overtaken internationally by its main competitor, Facebook, in April 2008, based on monthly unique visitors. This rapid growth, and the subsequent failure to sustain leadership, holds lessons for managers.

The CHALLENGE: MySpace was one of the fastest-growing sites on the Internet, with 176 million member accounts in May 2007 and 260,000 new users registering each day. Often criticized for poor performance, MySpace has had to tackle scalability issues few other sites have faced.

How did they do it?

When growth spurts are this fast and unanticipated, the challenge for any business is tough. In such a scenario, plans fail so frequently that management starts feeling that no planning is better than planning. That myth invites the downfall.

Comparison Table of MySpace with Other Sites

Initial Architecture of MySpace:

In the initial phase, MySpace adopted a simple setup:

  • Two web servers
  • One database server running MS SQL Server

This was suitable for small to medium sites because of its simplicity. As membership grew, shortfalls in the initial setup became visible, and the MySpace team tried to address and resolve them as they appeared.

The Problems & Resolution

Problem 1: Database servers reached their I/O limits when MySpace reached 2 million accounts.

Effect: the site lagged behind on content updates.

Solution attempted: MySpace switched to a vertical partitioning model, where a separate database supported each distinct function, such as:

  • Log-in screens
  • User profiles
  • Blogs
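Vertical partitioning can be pictured as a routing table that maps each site function to its own database. The sketch below is illustrative only; the function names and connection strings are assumptions, not MySpace's actual configuration.

```python
# Vertical partitioning sketch: each site function gets a dedicated
# database, so heavy blog traffic cannot saturate the log-in database.
# All names and connection strings here are hypothetical.
FUNCTION_DATABASES = {
    "login":   "Server=db-login;Database=Login",
    "profile": "Server=db-profile;Database=Profiles",
    "blog":    "Server=db-blog;Database=Blogs",
}

def connection_string_for(function: str) -> str:
    """Route a request to the database dedicated to its function."""
    try:
        return FUNCTION_DATABASES[function]
    except KeyError:
        raise ValueError(f"no database configured for {function!r}")
```

The benefit is isolation by workload type; the cost is that any feature touching several functions now needs several database round trips.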

Problem 2: When MySpace reached 3 million accounts, some functions grew very large.

Effect: a single database server proved insufficient.

Resolution: The requirement was a scale-out strategy. MySpace moved to a distributed architecture, adding many cheaper servers to share the database workload, with the data of 1 million accounts per separate instance of SQL Server.
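With 1 million accounts per SQL Server instance, locating an account's data reduces to simple range-based sharding on the account ID. A minimal sketch, assuming sequential account IDs (the function name is hypothetical):

```python
ACCOUNTS_PER_SHARD = 1_000_000  # one SQL Server instance per million accounts

def shard_for_account(account_id: int) -> int:
    """Map an account to its database shard by ID range."""
    if account_id < 0:
        raise ValueError("account_id must be non-negative")
    return account_id // ACCOUNTS_PER_SHARD
```

Under this scheme account 999,999 lives on shard 0 and account 2,345,678 on shard 2; adding capacity means standing up another instance for the next million IDs.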

Problem 3: Growth in accounts led to performance issues, and waiting times increased drastically.

Resolution: In 2005, MySpace added a caching layer of servers between the database servers and the web servers. This reduced the load on the database servers.
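The idea of that intermediate layer can be sketched as a read-through cache: the web tier asks the cache first, and only a miss reaches the database. This is a minimal illustration, not MySpace's implementation; the class name and TTL policy are assumptions.

```python
import time

class CacheLayer:
    """Minimal read-through cache between the web and database tiers."""

    def __init__(self, ttl_seconds: float, fetch_from_db):
        self.ttl = ttl_seconds          # how long an entry stays fresh
        self.fetch = fetch_from_db      # callable that queries the database
        self._store = {}                # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self._store.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]             # cache hit: no database round trip
        value = self.fetch(key)         # cache miss: one database query
        self._store[key] = (value, time.monotonic() + self.ttl)
        return value
```

Repeated reads of a hot profile then cost one database query per TTL window instead of one per page view, which is exactly the load reduction the new tier provided.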

Problem 4: When MySpace crossed 25 million accounts, the effect was seen on performance and I/O speeds.

Resolution: Moved to 64-bit SQL Server to work around memory bottleneck issues. Their standard database server configuration used 64 GB of RAM.

Failure isolation: segment requests in the web servers by target database, and allow only 7 threads per database. If one database is slow, only its threads slow down while traffic on the other threads keeps flowing.
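The 7-thread cap described above is a bulkhead pattern: a bounded pool of slots per database so a slow backend makes its own callers fail fast instead of tying up the whole web server. A minimal sketch using a semaphore; the class and method names are illustrative.

```python
import threading

class DatabaseBulkhead:
    """Cap concurrent requests to one database (the 7-slot default
    mirrors the per-database thread limit described above)."""

    def __init__(self, max_threads: int = 7):
        self._slots = threading.BoundedSemaphore(max_threads)

    def query(self, run_query):
        # Fail fast when the database is saturated rather than queueing
        # and dragging down traffic bound for healthy databases.
        if not self._slots.acquire(blocking=False):
            raise RuntimeError("database saturated; failing fast")
        try:
            return run_query()
        finally:
            self._slots.release()
```

Each database gets its own `DatabaseBulkhead`, so one slow partition can exhaust only its own 7 slots.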

Further Tasks for Developers
MySpace still faces overloads more frequently than other sites.
Login errors occur at rates of 20 to 40%.

Site activity continues to challenge the technology. Developers continue to redesign the database software and storage system. The task is never-ending.

Conclusion:
Since the beginning, MySpace has operated in an ad-hoc, fire-fighting mode, evolving its architecture to oil whatever new squeaks presented themselves. The site continues to experience significant performance and reliability problems, but they have never been showstoppers. The lack of long-term planning by the MySpace team is reflected throughout the case, and it has cost them their leadership advantage. Facebook, Twitter and YouTube, with similar business models, have marched ahead.
