API Caching: The Secret Sauce of Continuity

Frank Ohlhorst, January 28th, 2014

APIs and data delivery go hand in hand, with APIs serving as the core mechanism for transferring information between dissimilar systems. Web applications have relied on APIs for a wide variety of tasks, making API integration a core competency and a critical element for those building Web applications.

The beauty of a properly constructed API and the code that leverages it is that, working as a symbiotic combination, the two can solve almost any programming challenge. In addition, leveraging APIs can create a rich Web app experience that matches what desktop applications offer.

Achieving those goals, however, can take some serious programming effort, and API development and selection are only one part of the equation. Simply put, APIs must work reliably for Web applications to leverage them and deliver a rich, productive experience.

Yet programmers rarely have control over the APIs they consume and are limited to a handful of operations, such as fetches, gets, puts and so forth. Such situations can lead to trouble in environments in which the APIs are responsible for returning results or data so that an application can continue to function.

One case in point is the U.S. government shutdown that ran from October 1 through October 16, 2013. For 16 days, the U.S. government was shut down over budgetary issues, and several government services (and their associated Web sites) were either taken offline or, at the very least, hampered by the lack of support. Those using Web applications that rely on data provided by APIs from government agencies, such as the Environmental Protection Agency, the Commerce Department, the Energy Department and many others, felt the pain of API failures.

However, much of the data provided by those agencies is static in nature and updated only on a schedule, which raises an interesting question: Why must Web apps rely on live API calls to retrieve data that changes infrequently? With some creative coding, the tight coupling between API-delivered data and Web application processing can be loosened, thanks to the concept of caching.
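
For instance, a Web app can hold a local copy of such a resource and consult the remote API only after the copy ages out. The following is a minimal sketch in Python; the endpoint URL and the once-a-day update schedule are illustrative assumptions, not details from any particular agency:

    import time
    import urllib.request

    # Hypothetical endpoint serving data that is refreshed roughly once a day.
    FEED_URL = "https://api.example.gov/air-quality/daily"

    _cache = {}  # url -> (fetched_at, body)
    TTL_SECONDS = 24 * 60 * 60  # match the source's known update schedule

    def fetch_with_cache(url=FEED_URL):
        """Serve a local copy while it is fresh; call the API only when it ages out."""
        entry = _cache.get(url)
        if entry is not None and time.time() - entry[0] < TTL_SECONDS:
            return entry[1]  # fresh enough; no remote call is made at all
        with urllib.request.urlopen(url) as response:
            body = response.read()
        _cache[url] = (time.time(), body)
        return body

The time-to-live should mirror the source's known update schedule; a resource refreshed daily gains nothing from being fetched every minute.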

Although caching has long been used as a method to increase performance, scale applications and reduce traffic, the technology can be extended to create active solutions to problems such as data availability and unreliable API responses.

Originally, the goal of caching was to never generate the same response twice; in other words, local copies of information could be reused instead of incurring the latency of repeatedly retrieving the information from a remote source.
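
HTTP bakes that goal into the protocol through conditional requests: a client that remembers a response's ETag can revalidate with an If-None-Match header and skip the transfer entirely when the server answers 304 Not Modified. A minimal Python sketch, assuming a hypothetical resource whose server emits ETag headers:

    import urllib.error
    import urllib.request

    # Hypothetical resource; the server is assumed to emit an ETag header.
    RESOURCE = "https://api.example.com/v1/stations"

    _etag = None
    _body = None

    def fetch_if_changed(url=RESOURCE):
        """Revalidate the cached copy instead of re-downloading it."""
        global _etag, _body
        request = urllib.request.Request(url)
        if _etag is not None:
            request.add_header("If-None-Match", _etag)
        try:
            with urllib.request.urlopen(request) as response:
                _body = response.read()
                _etag = response.headers.get("ETag")
        except urllib.error.HTTPError as err:
            if err.code != 304:  # 304 Not Modified: the cached copy is current
                raise
        return _body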

A reverse proxy that sits between the app and the API's origin can deliver that benefit, answering repeated requests from its cache to improve speed. Several vendors build reverse-proxy appliances, including F5 Networks, Blue Coat Systems, Astaro and others. Nevertheless, those appliances might not always be the answer; some investigative work must be done to see how long they will cache API data, which can be limited by storage space, length of time and other factors.

In some cases, application frameworks incorporate native reverse proxies to handle caching, while other frameworks rely on external hardware or software solutions. Thus, API caching requires a full understanding of the framework in place and how a cache would affect it. Despite the old axiom that “two is better than one,” APIs are often ill-served by having multiple caches between them and the content.
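
One way to keep a layered setup predictable is for the application itself to declare its caching policy, so the framework cache and any reverse proxy in front of it follow the same rules. The sketch below uses Flask and a hypothetical endpoint purely for illustration; the article does not prescribe a particular framework:

    from flask import Flask, jsonify  # assumes Flask is installed

    app = Flask(__name__)

    @app.route("/v1/report")
    def report():
        # Hypothetical endpoint; in practice the payload would come from the upstream API.
        response = jsonify({"status": "ok"})
        # One explicit policy for the whole chain: any cache may hold the
        # response for an hour, after which it must fetch a fresh copy.
        response.headers["Cache-Control"] = "public, max-age=3600"
        return response

    if __name__ == "__main__":
        app.run()  # development server only

With an explicit Cache-Control header, every cache in the chain ages the response out at the same moment, rather than each applying its own default.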

For some application engineers, a good solution is to look at the world of open source, where the Varnish project seems to reign supreme as the caching and Web-acceleration platform of choice. However, Varnish might not be for everyone; ultimately, the goal should be to improve performance and enhance continuity by deploying the correct API caching solution for a given environment, because environments change based on the needs of the application and the source of the information.

As a result, it all comes down to safety and information accuracy: the reverse proxy should cache the response returned from the API, and that cached response is then used to answer all subsequent requests for the same resource before the API is re-engaged.
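
Expressed as a sketch, that pattern serves from cache while the entry is fresh, re-engages the API when it expires and, for continuity, can fall back to the stale copy if the API is unreachable. The TTL and the stale-fallback policy below are assumptions to be tuned per environment:

    import time
    import urllib.error
    import urllib.request

    _cache = {}  # url -> (fetched_at, body)
    FRESH_FOR = 3600  # assumed TTL: answer from cache for an hour before re-engaging the API

    def proxy_fetch(url):
        """Answer from cache while fresh; if the API is down, fall back to stale data."""
        entry = _cache.get(url)
        if entry is not None and time.time() - entry[0] < FRESH_FOR:
            return entry[1]  # fresh cached response; the API is not contacted
        try:
            with urllib.request.urlopen(url) as response:
                body = response.read()
            _cache[url] = (time.time(), body)
            return body
        except urllib.error.URLError:
            if entry is not None:  # upstream failure: stale data preserves continuity
                return entry[1]
            raise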

Eventually, the designers of applications and APIs will have to determine the best methodology for enabling continuity when unreliable remote data sources are part of the equation. Fortunately, several vendors offer solutions, and others offer code, tips and tricks, and other resources to make API caching second nature when deploying applications. For example, Microsoft publishes techniques and tips on the MSDN site for those using ASP.NET, and other vendors, including Google and IBM, offer extensive information on API caching techniques for the platforms they support.

Finally, ProgrammableWeb offers a variety of resources that explain and extol the advantages of API caching; finding that information takes little more than executing a search, such as http://www.programmableweb.com/search/caching or http://www.programmableweb.com/search/cache, which returns an ever-growing list of results.
