The shopping comparison service EarlyMiser is a classic example of a useful eCommerce application built using a set of third-party APIs. In this case study interview we speak with EarlyMiser founder Brian DeSpain, who gives us the background on this application, its business model, and some interesting insights from his experience working with web services from some of the leading eCommerce providers (including the Amazon E-Commerce API, the eBay API and the Shopping.com API).
Q: What is Earlymiser and why did you build it?
A: Earlymiser.com is a meta comparison shopping engine. It pulls the best prices on products from Shopping.com, eBay, Amazon and Yahoo Shopping. We built it because no single shopping engine covers all the outlets where you can buy a product. For example, auctions are completely neglected in most comparison shopping engines today, and some great deals can be found there. Ultimately earlymiser.com is there to help consumers find the best price on items and to cover enough of the market that users can be sure they are getting a good deal.
Q: What is your business model?
A: Our revenue model is a hybrid. The Shopping.com clicks are pay-per-click (PPC). For eBay and Amazon the revenue model is an affiliate model (pay-per-action, PPA). Actually, our affiliate revenues exceed our PPC revenue, which was a big surprise for me. In a way it’s not too surprising, given that our eBay Buy It Now results can find some great deals that are well below what the other shopping engines provide. We haven’t added Google AdSense to the mix yet, although that may change with the upcoming redesign.
Q: What APIs did you use and what was the best and the worst part of using them? Did you do any screen-scraping or other related techniques as well?
A: At first we looked at screen scraping Froogle/Google Shopping and decided against it. You can never tell when someone might shut you down, and it’s not a good idea to build a business model around stealing someone else’s content. At present we present store results from Shopping.com, Amazon, eBay and Yahoo Shopping. The best part of using APIs is that it drastically speeds development time. When using screen scrapers you need to custom-build each scraper and lovingly maintain it. Oh, and hope your application doesn’t become too popular and they decide your free ride is over. APIs make things easier.
That said, when working with APIs you need to be ready for varying degrees of reliability and stability. All of that needs to be hidden from the user, and you need to provide a consistent user experience that is pretty speedy. Remember, you are losing milliseconds each time you call a service. Multiply that by four to seven services and you can have real problems on the user side of things. You need to make sure that you cache properly so that users never notice. Another issue when building on multiple APIs is data normalization and taxonomy: the APIs won’t look the same, nor will they structure data in the same ways.
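The fan-out and caching pattern DeSpain describes can be sketched briefly. EarlyMiser itself was built in PHP, so the Python below is purely illustrative: every name in it (`cached_search`, the `price` field, the TTL value) is an assumption, not their actual code.

```python
import concurrent.futures
import time

_cache = {}          # query -> (timestamp, merged results); hypothetical in-memory cache
CACHE_TTL = 300      # seconds before a cached query is refetched (assumed value)

def cached_search(query, fetchers):
    """Fan one query out to several providers in parallel, merge the
    normalized results, and cache the merged list. The user then waits
    for the slowest single call, not the sum of all calls."""
    now = time.time()
    hit = _cache.get(query)
    if hit and now - hit[0] < CACHE_TTL:
        return hit[1]                       # served from cache: zero API calls
    results = []
    with concurrent.futures.ThreadPoolExecutor() as pool:
        futures = [pool.submit(f, query) for f in fetchers]
        for fut in concurrent.futures.as_completed(futures):
            try:
                results.extend(fut.result())
            except Exception:
                pass                        # one flaky provider shouldn't break the page
    results.sort(key=lambda r: r["price"])  # cheapest first, across all providers
    _cache[query] = (now, results)
    return results
```

Each fetcher here is assumed to already return records in a common shape (a dict with a `price` key); the normalization step that produces that shape is the separate taxonomy problem the interview mentions.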
Q: What programming languages and tools is this built with?
A: PHP/MySQL for the site and some Ajax for tagging and UI.
Q: What are the near-term and longer-term plans for this project?
A: We have a revenue model in the works for bloggers and people with social network profiles. For example, our TagOuts already work with MySpace, so people can add products to their MySpace profile. We will be sharing revenue with bloggers and social networkers.
Q: Do you foresee using other APIs as part of your service in the future?
A: We will be bringing two other APIs online in order to improve the service. One of these APIs modifies our eBay results. We will be bringing on at least one more shopping comparison engine.
Q: Using third-party APIs is seen by many as introducing business and technology risk. How do you view this issue and how do you account for it in your strategy and implementation?
A: For a startup it certainly can. To minimize your risk from a business perspective, take a look at what other applications have been built using that API. Many APIs (Yahoo is notorious for this) won’t allow you to build a commercial service using the API, which means quite frankly you are at their mercy. That’s a huge risk for a startup, especially when you want to monetize your service. For example, I approached Yahoo about a commercial relationship and they told me that they wanted an exclusive relationship. This was a real problem because the Yahoo Shopping API is the least developed and least reliable of the four we call. They haven’t developed extensive product taxonomies, nor can you do something as simple as search by UPC. They are working on making changes, but they are well behind the rest of the market.
Deciding what API to build your mashup on is an important decision. Take a look at how other people have used the API and make sure it fits your business goals. For us we have spread risk by having multiple APIs we can hit. But that meant making each API work with our application which added overhead.
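The overhead of "making each API work with our application" typically takes the form of one thin adapter per provider that maps provider-specific fields onto a shared internal record. A minimal sketch, again in Python rather than the site's PHP; all the field names here are invented for illustration, not taken from any real API response:

```python
# Hypothetical adapter layer: each provider structures its data
# differently, so one small mapping function per API converts it
# into the application's common record shape.

def from_ebay(item):
    return {"title": item["itemTitle"],
            "price": float(item["currentPrice"]),
            "source": "ebay"}

def from_shopping_com(item):
    return {"title": item["name"],
            "price": float(item["minPrice"]),
            "source": "shopping.com"}

ADAPTERS = {"ebay": from_ebay, "shopping.com": from_shopping_com}

def normalize(source, raw_items):
    """Convert one provider's raw results to the common schema.
    Supporting a new provider means writing one more mapper --
    that is the per-API overhead the interview mentions."""
    return [ADAPTERS[source](item) for item in raw_items]
```

The upside of paying this cost per provider is exactly the risk-spreading described above: if one API degrades or changes terms, only its adapter is affected.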
Q: From an API and mashup perspective, what are the main lessons learned from this project and/or is there any advice you’d like to share on the development, design or business side of mashups?
A: Well, on the technical side you should build basic monitoring and API usage tracking tools into your application. You should certainly use the reporting tools provided by your API provider. This will allow you to revisit your queries and optimize them. If your application ever gets popular, you are going to face a time when your demand exceeds your allotted queries.
Additionally, track user behavior very closely. Your expectations of how users will use your application and how they actually use it are very often at odds.
Q: And finally, besides your own, do you have a favorite mashup?
A: I like a mashup a friend of mine did at ISBNDB.com: he took a non-web, non-web-service resource (library MARC records), built a custom crawler for it, and then built a killer ISBN book database.