Monday, 29 April 2013

LinkedIn: Silicon Valley's Possibly Unhealthy Need for Speed

Kevin Scott’s desk at LinkedIn flanks what must be among the most exotic office coffee makers. The contraption replicates the functions of a French press while doing away with all of its inadequacies. It consists of a couple of beakers, a Corning PC-420D stirring hot plate and an electronic thermometer. When the water reaches exactly 94 degrees Celsius, Scott pours in some coffee grounds and the Corning device starts whirring them around. “All of this ensures a constant temperature and a vortex that circulates the grounds and expands their surface area with the water,” Scott says. “It’s my own little invention that I’m dialing in right now. I’m largely insane.”

Scott arrived at LinkedIn two years ago after working in academia and at Google, where he became one of the company’s big thinkers around building a modern computing infrastructure. Officially, he’s now LinkedIn’s senior vice president of engineering, which basically means that he’s the company’s top geek. He seems to approach most things in life as if they were engineering challenges that need conquering, and his grandest battle to date has been waged not against coffee but against humans.

In Internet years, LinkedIn is much older than other networking sites like Facebook and Twitter. It was founded in 2003 and built out its technology infrastructure on software and hardware used during the dot-com boom. As LinkedIn grew and grew and grew, its systems started to break down. The company’s engineers needed huge lead times to roll out new features or even just to fix buggy parts of the site. And when LinkedIn would try to add a bunch of new things at once, the site would crumble into a broken mess, requiring engineers to work long into the night to fix the problems.

Scott and a number of LinkedIn’s top engineers reached their breaking point with the company’s infrastructure in November 2011. This was just a few months after the company’s blockbuster IPO, and investors were still obsessing over LinkedIn’s every move. No matter. Scott and his team decided to begin Project Inversion, in which they would stop all engineering work on new features and have every able body focus on rebuilding the company’s infrastructure from scratch. Scott had a theory that something better would emerge. Truth be told, he wasn’t sure. “It was a scary thing,” he says.

Ultimately, it took about two months to pull off Project Inversion. The job required LinkedIn to embrace new, more modern database technology at the core of its computing systems. Just as importantly, though, LinkedIn created a whole suite of software and tools to streamline how it developed code for the site. Instead of waiting weeks for their new features to make their way onto LinkedIn’s main site, engineers could develop a new service, have a series of automated systems examine the code for bugs and for problems the service might have interacting with existing features, and then launch it right to the live LinkedIn site.
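For the technically curious, here is a rough sketch in Python of what such a pipeline might look like. Every stage name, command and file below is invented for illustration – the article doesn’t describe LinkedIn’s actual tooling at this level of detail.

```python
import subprocess
import sys

# Hypothetical stages of a continuous-deployment pipeline. Each command
# stands in for a real tool: a test runner, an integration checker, a
# deployment script. None of these are LinkedIn's actual commands.
PIPELINE = [
    ("unit tests", ["pytest", "tests/unit"]),
    ("integration checks", ["pytest", "tests/integration"]),
    ("deploy to production", ["./deploy.sh", "--target", "production"]),
]

def run_pipeline() -> None:
    for stage, command in PIPELINE:
        print(f"running stage: {stage}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Any failing stage halts the rollout, so code only reaches
            # the live site if every automated check passes.
            sys.exit(f"stage failed: {stage} -- aborting deployment")
    print("all checks passed; change is live")

if __name__ == "__main__":
    run_pipeline()
```

The point of the sketch is the shape of the thing: no human sits between a green build and production.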

What exactly does this mean? Well, even other very modern Web companies have far more process-laden procedures for updating the millions of lines of code that make up their sites. Some will cleave off big chunks of code, hand them out to separate teams and then come back and try to assemble the parts into a functioning whole. Companies like Facebook and Google also have special teams that review the lines of code written by developers. It’s these people who get to decide when a new feature is ready to make its way to their Web sites. Not LinkedIn. It has one huge stash of code that everyone works on; algorithms do the code reviewing. “Humans have largely been removed from the process,” Scott says. “Humans slow you down.”
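What might algorithmic code review look like in practice? A minimal, hypothetical sketch: a gate that lets a change into the shared trunk only when machine-checkable signals clear their thresholds. The signals and thresholds here are assumptions for illustration, not LinkedIn’s real criteria.

```python
from dataclasses import dataclass

@dataclass
class ChangeReport:
    """Machine-generated signals about a proposed change (illustrative)."""
    tests_passed: bool
    lint_errors: int
    coverage_percent: float
    breaks_existing_apis: bool

def approve(report: ChangeReport) -> bool:
    # No human sign-off: the change merges if and only if every
    # automated signal clears its (hypothetical) threshold.
    return (
        report.tests_passed
        and report.lint_errors == 0
        and report.coverage_percent >= 80.0
        and not report.breaks_existing_apis
    )

# Example: a clean change sails straight into the trunk.
print(approve(ChangeReport(True, 0, 92.5, False)))  # True
```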

As mentioned, Scott likes to engineer things to the extreme. Two months ago, for example, he made a ribeye confit by submerging six pounds of steak in four pounds of herb-infused melted butter for three hours. “I feel guilty to admit to doing such a horrible thing, but it was pretty awesome,” says Scott, who is not a thin man. With his confit-enhanced stomach, Hawaiian shirts, soul patch, and affably obsessive demeanor, he seems to be an inspiration to LinkedIn’s engineering corps, which performs major upgrades to the site three times a day.

You might wonder, “Is that a lot?” Yes, it is. Google might update its sites once a week, and Facebook has been the gold standard to date, updating its site once a day – sometimes twice. In Web engineer speak, LinkedIn is charging after what’s known as continuous development, which works like it sounds. Instead of having moments in time when upgrades occur, the idea is to turn a Web site into a constantly evolving organism.

You might also wonder, “Why keep messing with something that seems to work pretty well?”

Well, talk to some of the key figures behind Project Inversion, like senior directors for engineering Mohak Shroff and Dan Grillo, and they’ll explain that LinkedIn has thousands of tests running at any given time. The company could be testing twenty different looks for the smartphone app, or trying out different colors and shapes of menus to see what most pleases the eye. If one thing works better than another, the engineers make a fix as soon as possible.
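Running thousands of tests at once usually comes down to giving each user a stable, deterministic bucket in each experiment. Here is a minimal sketch of that idea; the experiment names and variants are made up, and the article doesn’t say how LinkedIn’s own system assigns users.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministically map a user to a variant for one experiment.

    Hashing the user ID together with the experiment name keeps each
    assignment stable across visits and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# Example: one user, two concurrent (hypothetical) experiments.
print(assign_variant("user-42", "menu-color", ["blue", "green", "red"]))
print(assign_variant("user-42", "app-layout", ["control", "compact"]))
```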

The result has been a flood of new features on LinkedIn. You’ve probably noticed things like Endorsements and Influencers popping up on the LinkedIn homepage and all sorts of new metrics appearing on your profile. Most of the LinkedIn users I’ve talked to feel like the site has grown too cluttered. “We wouldn’t say the site is cluttered,” says Shroff. “It’s very deliberately designed in this manner based on the feedback we have from users.”

Silicon Valley has a long-running obsession with speed. Ever since Intel co-founder Gordon Moore formulated what would become known as Moore’s Law, the tendency has been to make things go quicker, get smaller, and cost less. This used to apply mostly to hardware like chips and storage devices, but it now seems to be carrying over to the software side of the world. Make as many changes as you can, as fast as you can. Why? Because you can. And because someone might stay on your site for a few seconds longer.

Most of the big Web companies – Google, Facebook, LinkedIn – have developed their own custom tools for coding at record speed. They’re enabling a Web-site-feeding addiction and building the apparatus they need to get their fix. Will this same culture carry over to mainstream businesses? It’s tough to say, although the whole mantra around A/B testing – these constant experiments – seems to be catching on outside of Silicon Valley.

Jonathan Heiliger, a venture capitalist at North Bridge Venture Partners and the former vice president of technical operations at Facebook, urges caution around the continuous development movement. “We have been driven to a culture of performance and instantaneousness,” he says. “We expect to hit refresh and see a change. This carries over to developers who make features and then want to see them work. They think, ‘I should not have to wait a day to see what happens. I want to see it now.’”

The downside of this type of culture can be change for change’s sake. The engineers can forget what this constant stream of experiments looks like to the end user. “There is always a tradeoff between features, functions and performance,” Heiliger says. He remembers, for example, when Mark Zuckerberg, the Facebook co-founder and CEO, demanded that all of the image icons on the site have rounded corners. Facebook’s engineers had to develop a technique for modifying images on the fly, and it slowed the site down by 10 percent. “He demanded it, we did it, and then a couple months later, we reverted it,” Heiliger says.

Heiliger describes Scott as a “geek’s geek” and voices confidence that he will arrive at a well-thought-out plan for balancing all of the changes with what’s best for LinkedIn’s users. Both men are part of a group called Webmonsters, which is sort of a secret society of elite Internet engineers.

Project Inversion has certainly made life better for LinkedIn’s employees. Jay Kreps, a principal staff engineer, remembers the long days and nights before the infrastructure overhaul took place. Major code refreshes would turn into tortuous versions of parties. “You were there well past midnight trying to deal with the things going wrong,” Kreps says. “There was food and booze. It was an event.” Kreps misses the bonding that came with this struggle, but not the loss of sleep. “It’s more fun just to ship on time,” he says.
