The impetus for this note is a recent data scientist interview question I was asked. I used various sources to research this topic, most of which are hyperlinked inline, and some of which are enumerated below.
APIs, or Application Programming Interfaces, are a primary means by which computer programmers are able to build complex computer programs. Fundamentally, they are a way of clearly prescribing the manner in which various parts of a program communicate. They can be thought of as analogous to a Graphical User Interface, or GUI. The difference is that whereas GUIs help users interact with computer applications in a user-friendly way, APIs facilitate developers’ incorporation of underlying technologies in new software they create. Both APIs and GUIs are examples of abstraction layers that simplify something complex into something manageable. Clearly specified APIs are critical to the development of software systems, especially as those systems become increasingly complex.
There are various release policies, uses, and paradigms for APIs, discussed in the following sections.
There are three main types of API release policies: Open, Partner, and Private. Open APIs allow companies to publicly expose functionalities of their various systems and applications to third parties, which may or may not have a formal business relationship with them. Both the Twitter and Google ML examples I cite later are open APIs. Partner APIs are more exclusive, and are used to facilitate software communication between a company and its business partners. Finally, Private APIs are used internally only, to facilitate integration and cross-departmental collaboration within a single company.
Private APIs are by far the most common in practice, and they yield significant improvements in operational efficiency. A well-known graphic in the API world shows an iceberg: the visible tip represents publicly available "open" APIs, while the much larger submerged portion represents "private" APIs.
The uses of APIs have expanded dramatically over the years.
- 1985-2001: Operating System APIs - APIs enabled software developers to create applications for operating systems. APIs were limited to big software companies.
- 1990s: Application Services APIs - APIs began to be used to build new functionality between large companies.
- 2002: Infrastructure Services APIs - APIs were used to enable companies to externalize IT infrastructure. It allowed anyone easy access to computing power. One of the most well-known examples of infrastructure services is Amazon Web Services.
- 2006: Web Services APIs - APIs began to be used to share data internally and externally, via a more unified communication protocol. Web services APIs are accessible to any company and easily integrated.
- 2013: IoT APIs - APIs enable real-time information on usage of IoT devices.
Today, APIs are used to connect smartphones, laptops, tablets, servers, etc. Tomorrow, most objects and people will be able to exchange information resources through APIs.
This part of the note is the real reason I wrote it: it addresses the question I was asked in the interview, namely, what is the difference between REST and SOAP interfaces? The following sections unpack these differences.
A few pertinent definitions are necessary before defining REST and SOAP. An API “paradigm” is a general approach to building APIs. An implementation is something you actually download, install, and use to build an API. Specifications (or recommendations, or standards) are descriptions of implementations that help various implementations share functionality. APIs generally conform to one of three paradigms: “RPC”, “REST”, or “query language.”
SOAP stands for Simple Object Access Protocol. SOAP is a recommendation of the World Wide Web Consortium (W3C) that conforms to the “RPC” paradigm. A SOAP implementation is something like gSOAP.
SOAP is the successor of an earlier system called “XML-RPC,” a remote procedure call protocol that uses XML to encode its calls. A SOAP message is an XML document containing four elements:
- an envelope that identifies the XML document as a SOAP message,
- a header,
- a body containing call and response information, and
- a fault element that contains errors and status information.
The following example code was taken from w3schools, linked here.
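Since the original snippet is not reproduced here, the following is my own generic sketch of a SOAP envelope illustrating the four elements (element names follow the SOAP 1.2 specification; the namespace prefix and body contents are placeholders, not the w3schools code):

```xml
<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">
  <soap:Header>
    <!-- optional application-specific header entries -->
  </soap:Header>
  <soap:Body>
    <!-- call and response information goes here -->
    <m:GetPrice xmlns:m="https://example.com/prices">
      <m:Item>Apples</m:Item>
    </m:GetPrice>
    <!-- on error, the body instead carries a soap:Fault element
         holding error and status information -->
  </soap:Body>
</soap:Envelope>
```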
The four elements above distinguish SOAP APIs from more generic RPCs. A “Remote Procedure Call” is when a computer program causes a procedure to execute on a different computer on the same network. The communication is one-way, and is analogous to something like a function call. An example RPC call and response (taken from Martin Fowler’s excellent article on the subject) is shown below.
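Fowler's actual snippets are not reproduced here, but a sketch in the same spirit follows (his running example is a doctor's appointment service; the endpoint and XML element names below are illustrative). The client POSTs a request document to a single service endpoint:

```http
POST /appointmentService HTTP/1.1
Content-Type: application/xml

<openSlotRequest date="2010-01-04" doctor="mjones"/>
```

and the server replies with a document the client must interpret on its own:

```http
HTTP/1.1 200 OK
Content-Type: application/xml

<openSlotList>
  <slot start="1400" end="1450" doctor="mjones"/>
  <slot start="1600" end="1650" doctor="mjones"/>
</openSlotList>
```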
There is a lot going on in these HTTP messages, but the idea here is that communication is very one-way. The client must have a complete understanding of the context for the responses it receives from the server, and must construct its own workflow from those (comparatively unhelpful) responses.
REST stands for REpresentational State Transfer. It is an API paradigm that can be thought of as building upon RPC. The following code examples are taken from the same Martin Fowler article linked above.
The first extension is the introduction of individual resources.
With individual resources, posting to a particular slot is possible.
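Sketching this in the spirit of Fowler's appointment example (the URI and element names are illustrative): instead of sending everything to one service endpoint, the client now addresses a specific slot resource:

```http
POST /slots/1234 HTTP/1.1
Content-Type: application/xml

<appointmentRequest>
  <patient id="jsmith"/>
</appointmentRequest>
```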
The second addition Martin mentions is the inclusion of HTTP verbs. Again, his examples follow.
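A sketch in the spirit of those examples (URI illustrative): a query now uses the GET verb against a resource URI, rather than POSTing a request document to a generic endpoint:

```http
GET /doctors/mjones/slots?date=20100104 HTTP/1.1
```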
Now, the server responds with a response code of 409 to indicate an error. The 409 code takes the place of an error message in the reply body, which would be the alternative under RPC.
Alternatively, a 201 code would indicate the creation of a new resource. Note that it includes a location that a client could use in the future.
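Sketches of those two responses to a booking POST follow (again in the spirit of Fowler's examples; the Location value is illustrative). A conflicting booking:

```http
HTTP/1.1 409 Conflict
```

A successful booking, with the newly created resource's location:

```http
HTTP/1.1 201 Created
Location: /slots/1234/appointment
```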
The third introduction is the inclusion of hypermedia controls. Hypermedia controls tell the client what it can do next, and the URI of the resource to do it.
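A sketch of a response carrying hypermedia controls (the link elements and relation names below are my own illustration; Fowler's article uses a similar structure):

```http
HTTP/1.1 201 Created
Location: /slots/1234/appointment
Content-Type: application/xml

<appointment>
  <slot id="1234" doctor="mjones" start="1400" end="1450"/>
  <link rel="self" uri="/slots/1234/appointment"/>
  <link rel="cancel" uri="/slots/1234/appointment"/>
  <link rel="addTest" uri="/slots/1234/appointment/tests"/>
</appointment>
```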
The advantage of including these possible actions in the response is that it loosens the otherwise very tight-coupling of client and server under RPC. The server can add new possible actions and resources without “breaking” the client.
Martin breaks down these various additions as follows:
- Including individual resources tackles the question of handling complexity by using divide and conquer. It breaks a large service endpoint down into multiple resources.
- Including http verbs results in handling similar situations in the same way, removing unnecessary variation.
- Including hypermedia controls introduces discoverability, providing a way of making a protocol more self-documenting. I add that it also creates additional flexibility between client and server.
This is an incomplete treatment of a topic I only vaguely understand, but I found these examples, taken from Martin Fowler’s article, very helpful. For more details, see Martin Fowler’s far more complete and well-written article, linked for a third time, because I like it that much.
First of all, REST versus SOAP is a false dichotomy. REST is a paradigm, whereas SOAP is a slightly more concrete “recommendation,” or standard. That technical distinction aside, the fundamental differences between the API types follow.
|SOAP|REST|
|---|---|
|Invokes services by calling RPC method|Calls services via URL path|
|Less common recently|More common recently|
|XML only|XML, JSON, and others|
|More bandwidth|Less bandwidth|
The general consensus is that REST is preferred to SOAP for all but a few applications, though identical results can be achieved with both systems.
I used several “API calls” in the creation of my data analysis projects. Specifically, in my analysis of the “We Rate Dogs” Twitter feed, I leveraged the Tweepy Python library to access the Twitter API. As a further example, I recently used Machine Learning APIs exposed by Google, discussed further below.
As with almost all APIs, the first step is to obtain authorization from the relevant organization. There are a variety of possible means of authorization, each of which has its own drawbacks and advantages, but that is outside the scope of the current discussion. In the case of Twitter, obtaining the relevant API keys (or, more precisely, in this case, “oauth authentication”) is discussed here.
These tokens and secrets are used to create an OAuthHandler instance.
In any case, once the OAuthHandler is obtained, tweepy is able to access the API. I use a list of tweet IDs I have to obtain detailed information about each tweet, in the form of a JSON dump.
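A sketch of that step follows (the `api` object is the authenticated Tweepy handle from above, passed in as a parameter; the tweet IDs and output path in the usage line are placeholders):

```python
import json

def fetch_tweets(api, tweet_ids, path):
    """Write one line of JSON per tweet to `path`, skipping failures."""
    with open(path, "w") as f:
        for tweet_id in tweet_ids:
            try:
                # get_status returns a tweepy Status; ._json holds the raw dict
                status = api.get_status(tweet_id, tweet_mode="extended")
                f.write(json.dumps(status._json) + "\n")
            except Exception:
                # the tweet may have been deleted or made private; skip it
                continue

# usage (placeholder IDs):
# fetch_tweets(api, [892420643555336193, 892177421306343426], "tweet_json.txt")
```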
In this way, I obtain a large amount of information for each tweet. For more specifics, see the project page.
Another example from my recent work on Google’s Machine Learning with TensorFlow on Google Cloud Platform Specialization course follows. Similar to Twitter, the first step is to obtain an API Key. In this case, it is obtained from the GCP Console, and resembles the one below.
For this example, I invoke the Google vision API to perform OCR on an image, as follows.
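A sketch of that call using the Vision API's REST endpoint directly (the API key and image URI are placeholders; `TEXT_DETECTION` is the Vision API's OCR feature type):

```python
import json
import urllib.request

API_KEY = "YOUR_API_KEY"  # placeholder: obtained from the GCP Console
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

def build_ocr_request(image_uri):
    """Build the JSON body for a TEXT_DETECTION (OCR) annotate request."""
    return {
        "requests": [{
            "image": {"source": {"imageUri": image_uri}},
            "features": [{"type": "TEXT_DETECTION"}],
        }]
    }

def ocr_image(image_uri):
    """POST the annotate request to the Vision API and return the parsed reply."""
    body = json.dumps(build_ocr_request(image_uri)).encode("utf-8")
    req = urllib.request.Request(
        ENDPOINT, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# usage (placeholder URI):
# response = ocr_image("gs://my-bucket/chinese-sign.jpg")
```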
The image processed using the preceding code follows. It was obtained here.
Then, I print the response, which follows. ‘zh’ indicates this is Chinese text.
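The actual response is not reproduced here, but a Vision API text-detection response has roughly the following shape (values elided; the `locale` field is where the ‘zh’ language code appears):

```json
{
  "responses": [
    {
      "textAnnotations": [
        {
          "locale": "zh",
          "description": "...",
          "boundingPoly": {"vertices": [{"x": 0, "y": 0}]}
        }
      ]
    }
  ]
}
```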
In particular, this note also uses information taken from the following works.
- FABERNOVEL, Why Should I Care About APIs, December 2013
- Martin Fowler, Richardson Maturity Model - steps toward the glory of REST, March 2010
- Phil Sturgeon, Understanding RPC, REST and GraphQL, January 2018