A few days ago a former co-worker, Mike, and I met up and talked a bit about the state of the web, and this led into a discussion about what tools we were both using for development. Given that we were developing a Firefox plugin only a year ago, I expected Mike to say he still used Firefox, but I was surprised to find that he had switched to Google’s Chrome for everyday use, as had I.
For years the rumors of a Google-branded browser ran rampant in the technology community, yet I scoffed at the idea that Google would get into the game. After all, why would it make sense for a company that largely built web applications (Google Desktop being one of the few exceptions) to get into building desktop software? With Firefox, Safari, and Opera all pressuring Internet Explorer, why would Google need to enter the fray?
And yet two years ago Google launched their browser, Chrome, to fairly decent reviews and little fanfare. After all, the improvements that Chrome brought to the table over the competing Firefox and even Internet Explorer seemed minimal. Yet after using Chrome for a year or so, I’ve come to reevaluate my stance on the product itself, and on the strategy behind it.
First, let’s examine the strategy behind the browser and the results so far. When Google launched Chrome, Microsoft’s Internet Explorer held roughly 70% browser share, Mozilla’s Firefox roughly 20%, and Safari 5% or so (Opera and a few others made up the remainder). Browsers are important strategically for a few reasons:
- Browsers control search share. This one was pretty close to my heart as someone that has worked in search, and the basic reasoning is distribution: lots of users perform web searches through the default search engine in their browser. If you can get the browser to ship with your search engine, you are going to have a lot more searchers.
- Browsers determine what’s possible on the web. Okay, this is a simplistic argument, because what goes into a browser comes from standards bodies and the actual applications are created by web developers, but the basic point stands: a browser makes certain things possible for web developers.
My initial thinking with Chrome is that Google would have a hard time getting users to change and use their browser. I mentally divided the browser market into two sets of users: power users, who were fairly loyal Firefox users with extensions and the like, and “ordinary” users, who would simply use the default that shipped with their machine: Internet Explorer. I figured if Internet Explorer users wouldn’t switch to a significantly better Firefox, they’d be unlikely to switch to Google’s Chrome.
But Google made a series of tactical choices in Chrome that I believe positioned them well:
- Google forked the open-source WebKit as the rendering engine for Chrome, selecting it over the competing Gecko rendering engine that Firefox uses
- Google introduced the browser for Windows initially, and then added Mac and Linux versions
- Chrome was initially distributed via their home page and other Google sites
- Google pushed a “get out of the way” strategy, with the browser designed to take a back-seat to the actual web pages themselves. The browser’s frame is called the “chrome,” and Chrome is designed to make its chrome get out of the way
The first point, that Google chose WebKit for Chrome, was a brilliant move because it positioned them well to push updates to the core rendering engine that powered the iPhone (the browser on the iPhone, Safari, uses WebKit for rendering). By making changes to the core codebase they were able to add features that would work both on their own mobile platform – Android – and on the largest mobile threat, the iPhone. That Research in Motion, maker of the BlackBerry, subsequently adopted WebKit had to make this choice look all the more ingenious.
The second point was that Google focused on getting real users, which turn into real dollars. Each user they were able to take away from Firefox was one fewer user Google had to pay a revenue split on to the Mozilla Foundation. To understand the economics of this it helps to understand how traffic acquisition costs (TAC) work in search. Most browser users search with the default box at the top of their browser, which makes the distribution browsers have very valuable. The search companies bid against each other for the right to be the default engine in the various browsers, and when those users search, some percentage of the time they click on ads; the revenue generated from those ads is split between the browser maker and the search engine. Let’s take an example: if Firefox users search on Google and click $100M worth of ads annually, then Google and Mozilla share that $100M at some split. Because of the competitive bidding between search engines, the split is actually quite favorable to the browser maker, so the Mozilla Foundation may take home $75M (or even more!) of that $100M. But if Google can move users from Firefox to Chrome, that’s 75 cents on the dollar that Google now keeps for itself.
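The split arithmetic above can be sketched in a few lines of code. Note that the dollar figures and the 75% share are illustrative numbers from the example, not actual Google or Mozilla financials:

```javascript
// Illustrative TAC (traffic acquisition cost) math. The revenue figure and
// split are the made-up numbers from the example above, not real data.
function searchRevenueSplit(adRevenue, browserMakerShare) {
  const toBrowserMaker = adRevenue * browserMakerShare;
  const toSearchEngine = adRevenue - toBrowserMaker;
  return { toBrowserMaker, toSearchEngine };
}

// Firefox users click $100M of ads; Mozilla's split is 75%:
const viaFirefox = searchRevenueSplit(100e6, 0.75); // Google keeps $25M
// The same users on Chrome: no split owed, Google keeps the full $100M.
const viaChrome = searchRevenueSplit(100e6, 0.0);
```

The incremental value of each converted user is the difference between those two outcomes, which is why paying to distribute your own browser can beat paying TAC forever.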
The last point of brilliance I want to call attention to is the strategic value of pushing the other browser manufacturers to improve their products. Google makes web products, and part of their long-term strategy is to eat into the dominance of desktop products like Microsoft Office. Currently a lot of their web products don’t behave as smoothly as native desktop products, partly because current browsers don’t have enough functionality to build truly desktop-quality applications. Google has pushed the other browser manufacturers to iterate and add features, which will result in better web applications for everybody. Yes, of course Microsoft can make great web products as well, but if you’re a maker of web applications you are happy whenever the browser can do more.
When Google’s Chrome came out I saw it as a minor launch of a desktop product that didn’t make a lot of sense strategically. Two years later I realize I was very wrong: Chrome is a piece of brilliant business strategy and execution that has resulted in an excellent (and likely profitable) product!
The last few months my co-founder and I have been focused on developing a mobile application, and one of the biggest debates we had was whether we wanted to develop native applications for the iPhone and Android. Our initial perception was that we needed to develop a native app, because that’s where all the users are. After doing some more research, however, we decided that we would develop our initial version in HTML5. This post discusses why we made that decision.
First off, a little background about the state of mobile application development. There are two major ways to get your software to users on a mobile device:
- Build a native application
- Build a web version of your application – in HTML5
Building Native Apps
To build native apps you’ll need to build a version of your application for each mobile platform. The current dominant mobile platforms are Nokia’s Symbian (widely panned as not a true “smartphone platform”), Apple’s iOS, Google’s Android, and RIM’s BlackBerry OS. In addition, Palm’s WebOS may re-emerge now that HP has purchased them and may invest more in the platform, and Microsoft may roar back onto the scene with the launch of Windows Phone 7.
That gives us 4 major OS platforms to write for (leaving aside Windows Mobile, as it’s not long for this world), and 2 more looming on the horizon (Palm/HP’s WebOS and Microsoft’s Windows Phone 7). Is it easy to build for these platforms?
- Symbian applications are written in C++, but some apps are written in Python and Adobe’s Flash. Most US app developers ignore Symbian’s platform, as it’s widely considered a lower-tier smartphone OS
- Android applications are written in Java specifically for the OS. Google provides lots of support resources, but does not provide native UI elements for many functions, and thus many applications written for the platform look radically different from one another.
- iOS applications are written in Objective-C, with the native Cocoa Touch (UIKit) libraries providing most of the UI elements. Most iOS applications therefore have a very similar UI
- RIM’s BlackBerry OS Applications are written in Java
One of the problems with developing for each platform is that there are differences between the devices within each platform. This has yet to become a huge deal, largely because the platforms are only a few years old (I always pinch myself to remember: the iPhone launched only 3 years ago, in 2007!). But with Android there are differences in both OS versions (anywhere from version 1.5 to today’s current 2.2) and hardware devices; with iOS some devices run the latest version with multi-tasking and some do not; and with BlackBerry some devices have a touchscreen and some do not. As a developer, writing for all these variations can be quite daunting!
In addition to writing across different languages and platforms there is another challenge to writing any kind of native application: installation and distribution. Bear with me a second while I tell an anecdote about applications: I remember my first job, in 1999, when I worked for an e-commerce company named SelfCare.com. For bug-tracking we used an application that the consulting company we hired, Cambridge Technology Partners, had written for us in Visual Basic. This application ran on each developer’s desktop and connected to a central server to pull down the data on the individual bugs. This worked fairly well, except when we had to fix a problem with the desktop software. The application’s developer would fire up his IDE, fix the problem, and then send out a mass email with the new software attached as an executable. Some people would inevitably not install it, and would therefore be running old versions of the software with outdated features and problems. The same thing happens to native mobile applications these days: when you want to update your software (whether driven by market demand, a bug fix, or an innovation), some percentage of your users will ignore the change and blissfully go on using their old versions. Some apps have taken to battling this by disabling older versions until users update, but this is a terrible user experience – users don’t want to have to worry about updating their version of your software, they just want it to work!
To sum up the issues with writing native apps:
- Many different device platforms
- Differentiations in the devices and OS versions within the same platform
- Multiple languages to write software in, and different UIs across the different devices
- Deploying updates can be difficult because users do not want to download and install new versions
- One more issue specific to iOS: Apple has a review process in which they approve your application or not, and if they don’t like it you won’t get to distribute your application to their customers
Writing Web (HTML5) Apps
In Mark Suster’s strong piece, App is Crap, he argues that mobile web applications will be the way of the future. If you think back to my example about bug-tracking software it’s easy to see the parallel to mobile software. In 2010 (almost) nobody runs a native desktop version of bug-tracking software – they all run web-based tools like Bugzilla. The ubiquity of standards-compliant browsers has made application development about writing for the web standard, with anyone on a modern browser able to access the result. If a new hot operating system platform came out tomorrow (fat chance of that), as long as it had a modern browser, its users would have access to all the same web applications that users of Windows and OS X do.
HTML5 brings features that close much of the gap with native apps. One of these is the ability to detect touch and swipe gestures, specific to touch devices, which allows mobile web applications to respond to these events rather than thinking of the world only in clicks. Mobile web frameworks such as jqTouch and Sencha Touch allow mobile web developers to build better software with less code, and the proliferation of larger devices will lead to more searching on the device (as opposed to just downloading apps), which will result in more opportunities for developers to market their web-based apps.
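As a rough illustration of how gesture detection works under the hood (the thresholds and function name here are my own, not from any particular framework), you compare the coordinates of where a touch started and where it ended:

```javascript
// A minimal sketch of swipe/tap classification from touch coordinates.
// The 30px threshold is an arbitrary choice for illustration.
function classifyGesture(startX, startY, endX, endY, minDistance = 30) {
  const dx = endX - startX;
  const dy = endY - startY;
  // Too little movement in either direction: treat it as a tap.
  if (Math.abs(dx) < minDistance && Math.abs(dy) < minDistance) {
    return "tap";
  }
  // Whichever axis moved more determines the swipe direction.
  if (Math.abs(dx) >= Math.abs(dy)) {
    return dx > 0 ? "swipe-right" : "swipe-left";
  }
  return dy > 0 ? "swipe-down" : "swipe-up";
}

// In a browser you would feed it real touch events, e.g.:
// element.addEventListener("touchstart", (e) => { start = e.touches[0]; });
// element.addEventListener("touchend", (e) => {
//   const t = e.changedTouches[0];
//   handle(classifyGesture(start.clientX, start.clientY, t.clientX, t.clientY));
// });
```

Frameworks like jqTouch wrap this kind of logic up for you, so your application code only sees high-level events like “swipe left.”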
There are limits to what a web-based mobile application can do, however:
- Mobile web applications cannot access some device-specific features, like the camera, dialer, and contact list (note that the user’s location can be accessed from HTML5)
- Mobile web applications aren’t quite as responsive to users’ actions as native apps are, and some interactions are not available to mobile web apps
- There is no major app store for mobile web applications
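In practice, the first limitation means a mobile web app has to probe for what the browser exposes rather than assume it. A hedged sketch (the function and the capability names are my own, for illustration): passing the `navigator` object in as a parameter keeps the logic testable outside a browser.

```javascript
// Sketch of capability detection for a 2010-era mobile web app: probe the
// navigator object rather than sniffing user-agent strings.
function detectCapabilities(nav) {
  return {
    // The HTML5 Geolocation API is exposed as navigator.geolocation.
    geolocation: typeof nav.geolocation === "object" && nav.geolocation !== null,
    // No standard web API exposes the camera, dialer, or contact list
    // as of 2010 -- these remain native-only.
    camera: false,
    contacts: false,
  };
}

// In a browser: const caps = detectCapabilities(navigator);
// Then degrade gracefully, e.g. hide location features if !caps.geolocation.
```

The design choice here is feature detection over browser detection: checking for the actual API survives new devices and OS versions without code changes.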
The last point, the app store, seemed like the most compelling reason for my co-founder and me to go with native apps over mobile web apps. We figured that if we built an iOS app and put it in Apple’s App Store, people would download it and we’d have instant distribution. After meeting with a few people familiar with this space, we discovered this is just not the case. Almost universally we heard that distribution is “difficult,” that you need to “get lucky,” and that users “don’t search, they download from the top 10 lists.” We were most surprised to hear this, as it wiped out one of the biggest reasons to develop a native version of our app.
Ultimately there are pros and cons to building native and web-based mobile apps, at least in 2010. Coming from a web background, and subscribing to the “lean-startup” approach of iteration, I feel more comfortable building a web application, looking at what my users are telling me with their actions (measurable web data – in clicks and touches), and making changes based on that feedback. Time will tell if I’m right, but I am betting that mobile web apps – using HTML5 – are the future, and that native mobile apps, like desktop apps before them, will be afterthoughts for all but a few.