Pedro Sttau

Oct 30, 2021

8 min read

The Foundations of the API Gateway

Understanding the scope of the API Gateway, its capabilities, boundaries and common implementation mistakes.

At its core, the function of an API Gateway is to act as an intermediary between different services and applications (not necessarily microservices). It is commonly used in the context of cloud-native architecture, but not exclusively: there are use cases that leverage an API Gateway as a critical component to move from a legacy monolithic architecture to APIs and microservices.

In either case, an API Gateway provides an abstraction layer that simplifies the process of exposing and consuming API endpoints and managing API traffic, while providing key performance and security features that enhance an API-driven architecture, making it simpler for developers to consume and expose technical assets.

An API Gateway is not an API management tool. It is, by design, an API traffic management component: the APIs themselves should be completely decoupled from the Gateway and are not directly manageable through it.

Typically, companies that build API Gateway products label them as API management tools because they provide full API lifecycle tooling beyond the native capabilities of a gateway. These tools are useful, but they are not the essence of what a gateway provides.

Another common misconception is that an API Gateway is a type of load balancer. While it offers load balancing capabilities, its main purpose is to optimize API traffic through optimal routing, using policies that are specifically tailored for APIs.

The job of a load balancer, on the other hand, is to manage load and route traffic to resources that are able to handle it. A common pattern is the use of a load balancer in front of API Gateway instances, particularly if these are on premises and unable to auto-scale. In addition, load balancers typically do not handle authentication between services, and they do not provide caching, throttling or any of the API-specific features we find in an API Gateway.

Companies with a large number of API endpoints and large user bases will benefit from all the features of an API Gateway. At that scale it is common for stakeholders to lose track of which endpoints are available to consume. An API Gateway provides a common denominator across all available APIs and can apply policy-driven security features equally to all of them, irrespective of their protocols and structure.

In the enterprise space, one of the main differences lies in the constraints these companies typically operate under, particularly those running on-premises infrastructure or a hybrid cloud.

When an API Gateway is implemented on on-premises infrastructure, its scale is limited to the hardware capacity provisioned upfront, a very different setting compared to a public cloud deployment with a theoretically unlimited ability to scale hardware to meet demand.

While in the public cloud we can use a single API Gateway replicated across multiple instances that scale dynamically to meet demand, in the context of on-premises we are dealing with predefined hardware capabilities.

When operating under these conditions we should guard against making the Gateway a single point of failure by decoupling as much responsibility from it as possible. In most of these use cases the best implementation of an API Gateway is as a pure reverse proxy, delegating rate limiting and other functionality away from the Gateway cluster.

Native features of the API Gateway

IP Whitelisting

Restrict or whitelist IP address ranges. Access is typically configured through an access policy manager, where restrictions can be implemented alongside how the gateway will handle limitations and display them to users. Using AWS's API Gateway as an example, the resource policies feature allows users to attach JSON policy documents to resources and determine which IAM group, user or role has access to them.
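At its simplest, the check behind a whitelist is a membership test of the client IP against a set of allowed CIDR ranges. A minimal sketch, assuming a hypothetical allowlist (the ranges below are illustrative, not from the article):

```python
import ipaddress

# Hypothetical allowlist of CIDR ranges, as might be configured
# in a gateway access policy.
ALLOWED_RANGES = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("203.0.113.0/24"),
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the client IP falls inside any allowed range."""
    ip = ipaddress.ip_address(client_ip)
    return any(ip in net for net in ALLOWED_RANGES)
```

A real gateway would apply this check before routing and typically return a 403 to blocked callers.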

Transit message encryption

Protect messages between services that go through the API Gateway, ensuring a common standard is applied to all information in transit. The Amazon API Gateway, for example, does not support unencrypted workloads, which sets a good minimum security standard.

Rate limiting

Rate limiting is the capability to limit the number of requests to an API, using a throttling mechanism to manage the queuing and, in some cases, the priority of requests. Configurations are defined at a policy level and apply to all requests and protocols. Rate limiting is also considered a security feature, as it limits the potential impact of a DDoS attack by capping the number of concurrent requests between services.
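A common way gateways implement this throttling is a token bucket: requests spend tokens, tokens refill at a fixed rate, and a burst up to the bucket capacity is allowed. A minimal sketch (the class and parameters are illustrative, not a specific product's API):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: refills `rate` tokens per
    second, allows bursts up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True  # request passes through
        return False     # request is throttled (e.g. HTTP 429)
```

Per-client buckets keyed by API key or IP give the per-consumer limits described above.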

If an API gateway has been deployed on-premises, enabling rate limiting inside the gateway is a potential risk as it could become a single point of failure if the consumption planning is not done adequately. This creates a monolithic dependency on a component that cannot meet scale demands dynamically due to hardware limitations.

As mentioned above, in this particular use case it is a good idea to keep the API Gateway as light as possible and not let it handle the contract between different microservices. Rate limiting is itself business logic that should sit within the microservices rather than be handed to an external component that cannot scale to handle it.

API Composition & Routing

The job of an API Gateway is not limited to routing requests from service to service; it also cares about performance and efficiency, which is where API composition comes into play. API composition, or aggregation, is the process of combining the results of queries to different services into a single response. While this is not ideal for large data sets, it is a very simple and efficient way of providing an optimal route from the requestor to the service.
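The idea can be sketched as a single composed endpoint fanning out to several backends and merging the results. The services and fields below are hypothetical stand-ins for what would be parallel HTTP calls in a real gateway:

```python
def get_order_summary(order_id: str) -> dict:
    """Compose one response from several hypothetical backend calls."""
    # In a real gateway these would be calls to separate services,
    # ideally made in parallel; stubbed here for illustration.
    order = {"id": order_id, "total": 42.0}      # e.g. GET /orders/{id}
    customer = {"name": "Ada"}                   # e.g. GET /customers/{id}
    shipping = {"status": "in transit"}          # e.g. GET /shipping/{id}

    # One round trip for the client instead of three.
    return {"order": order, "customer": customer, "shipping": shipping}
```

This is why composition suits small, chatty lookups better than large data sets: the gateway has to hold and merge every partial response before replying.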

Caching

Instead of implementing caching at the service level or allowing requests to hit the endpoints directly, it is possible to cache at the API Gateway level, avoiding the need for requests to reach the services altogether until the TTL expires. While caching is a native feature common to most API Gateways, the granularity of the configuration differs between products and implementations.
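The TTL behavior described above can be sketched in a few lines: the gateway returns a stored response while it is fresh and only calls the backend on a miss or after expiry. A minimal illustration (not any product's actual API):

```python
import time

class TTLCache:
    """Cache responses at the gateway until the TTL expires."""

    def __init__(self, ttl_seconds: float):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, cached_value)

    def get_or_fetch(self, key, fetch):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]          # cache hit: backend never called
        value = fetch()              # cache miss: hit the service
        self._store[key] = (time.monotonic() + self.ttl, value)
        return value
```

Real gateways add per-route TTLs, cache-key rules (path, query, headers) and explicit invalidation on top of this basic shape.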

Logging & tracing

Modern API Gateways typically provide out-of-the-box logging capabilities, enabling tracing of all API traffic going through the Gateway and surfacing metrics around requests to and from the Gateway, the URL and the parameters being requested, along with how they perform from the perspective of the Gateway, without offering the granularity that should sit within the domain that owns the API.

Other monitoring capabilities are also important. AWS, for example, uses CloudWatch to surface metrics from the API Gateway, such as REST API execution metrics; this holds a lot of value as it can provide comparative metrics between different APIs.

Products like Axway provide a standalone interface for monitoring, API Gateway Manager, which offers real-time inbound and outbound traffic metrics from the API Gateway along with dynamic tracing and logging capabilities.

API Versioning

Handling different versions of APIs is an important feature of an API Gateway. Kong handles versions through routing: the version is defined in the Uniform Resource Identifier or through header-based versioning. Products like Tyk.io offer full lifecycle management of APIs, where versions are set by unique names matched against version tags identified through a header key or query parameter.
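Both schemes boil down to the gateway extracting a version from the request before routing. A minimal sketch of URI-based versioning with a header fallback (the path layout, header name and default are illustrative assumptions, not Kong's or Tyk's actual conventions):

```python
def resolve_version(path: str, headers: dict) -> str:
    """Resolve the API version from the URI, falling back to a header.

    Hypothetical scheme: '/v2/orders'-style URI versioning, with an
    'Accept-Version' header as fallback and 'v1' as the default.
    """
    first = path.strip("/").split("/")[0]
    if first.startswith("v") and first[1:].isdigit():
        return first  # URI wins: /v2/orders -> "v2"
    return headers.get("Accept-Version", "v1")
```

The gateway would then route the request to the upstream registered for that version tag.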

API versioning is an important strategy that helps prevent breaking changes to existing APIs with consumers, allowing developers to gradually deprecate older versions of APIs that are still in use, minimizing impact to users. This is critical for internal API users, but perhaps even more important when working with third-party consumers of APIs who do not have direct access to the domain owners, that is, the actual developers.

The question of when a change requires a new version is debatable. In theory, any change to an API should produce a new version; however, this is not always practical, as generating too many versions adds complexity and users would need to sift through numerous APIs to understand what to consume.

There are a fair amount of decisions that have to be made when providing a versioned API. The most important decision is how to determine when the version should change. To be clear, not all API changes require a version change. Here’s the key determinant for a new version: are you changing the functionality in such a way that breaks current implementations? If the answer is yes, then it’s time for a new version; if no, then a new version isn’t necessary.

Nicholas C. Zakas

https://humanwhocodes.com/blog/2011/02/22/the-importance-of-being-versioned/

Extending beyond the API Gateway

Discovery

Service discovery becomes an important feature, especially in larger organizations where services exposed through APIs change frequently and where relying exclusively on communication between people inside the organization no longer scales.

There are different parts to service discovery. It begins with a service being exposed for consumption outside of the service owner's domain; this service needs to be registered somewhere (a service registry) in order to be searchable by consumers. Service discovery therefore implies a registration mechanism, which can be either programmatic or manual.

The discovery mechanism can be manual or automated. Manual discovery is implemented through workflow, and while it is certainly better than no discovery, it creates an overhead that will make it very difficult to scale over time.

Automated client-side service discovery tools like Eureka help microservices find each other without owners having to go through onboarding portals or hard-coding URLs into microservices. The Eureka server holds the ports and IPs of all the apps registered with it, essentially becoming a self-maintaining discovery point for all microservices.
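The register-then-lookup pattern at the heart of such a registry can be sketched in a few lines. This is an illustrative in-memory stand-in, not Eureka's actual API, which works over REST with heartbeats and lease expiry:

```python
class ServiceRegistry:
    """Minimal in-memory registry: services register their host/port
    under a name, consumers look instances up by that name."""

    def __init__(self):
        self._services = {}  # name -> list of (host, port)

    def register(self, name: str, host: str, port: int) -> None:
        self._services.setdefault(name, []).append((host, port))

    def lookup(self, name: str) -> list:
        instances = self._services.get(name)
        if not instances:
            raise LookupError(f"no instances registered for {name!r}")
        return instances
```

A client would pick one of the returned instances (often round-robin) and call it directly; a production registry also evicts instances that stop sending heartbeats.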

It's important to distinguish the boundary between service discovery and the API Gateway. A service discovery mechanism typically does not handle authentication or dynamic routing, nor does it secure any of the traffic that goes through it; its main job is to match an inquiry with the location of a service.

Conclusion

An API Gateway is a strategic technical component in modern architecture, but the decision to implement one needs to be driven by a use case; it should not be put in place by default.

In the context of a monolithic architecture, or when very few microservices are exposed, an API Gateway may not be needed; it could make more sense to use a simple reverse proxy or to allow direct client-to-microservice communication. Any new component that is added will require expertise and needs to be maintained, which adds overhead in cost and effort.

Introducing an API Gateway into a legacy architecture is not going to modernize it, but given the right use case it can be an excellent way to secure and improve the performance of API traffic and a way to simplify the interface between clients and microservices.

API Gateway Products