Why Out-of-Band API Security Solutions Cannot Protect Sensitive APIs

Author: Brian Joe
Published on: May 30, 2024
Read time: 8 min

Many security teams have tried first-generation API security solutions to improve their API security posture. These first-generation solutions moved the analysis of API traffic from the network perimeter to the cloud, which was a drastic improvement in terms of detection and visibility.

However, even though these solutions moved the processing and intelligence to the cloud, they were unable to find a way to distribute their findings effectively back to the perimeter, resulting in a poor experience.

In this blog post, we’ll go through a simple API security use case: protecting API endpoints that transmit sensitive data from requests made with API tokens that have shown signs of abuse.

We will then contrast that to how the same use case would be addressed using an inline API security solution such as the Impart Security platform.

What we will see is that out-of-band solutions end up creating a lot of unnecessary work for security teams.

Why out-of-band solutions don’t work

Early API security companies recognized the limitations of analyzing network traffic solely at the perimeter, as it often lacked comprehensive end-to-end context. To overcome this, they innovated by routing network traffic to the cloud, where it could be analyzed more thoroughly. This shift enabled the development of new insights and improved visibility that were previously unattainable, such as the detection of sensitive data and monitoring of API token usage in network traffic. However, implementing this approach in real-world settings is not as straightforward as it might seem. Let’s explore what this implementation process entails.

They’re hard to deploy

The first step is deploying a solution that will monitor our API traffic. This proves difficult for a few reasons:

  • API traffic scale—API traffic is a huge streaming data set, so it is a big networking challenge to figure out how to send the data to the places where it can be analyzed, and a big compute challenge to analyze it quickly.
  • Encryption is very tricky—To properly inspect API traffic, it must be decrypted first, which only happens in a few places in the network. Early companies tried to do this via network traffic mirroring, but mirroring doesn’t solve for encryption, which is why most companies now use eBPF for this type of monitoring.
  • Proxy settings and XFF headers—Most companies have multiple layers of proxies, such as CDNs, API gateways, application load balancers (ALBs), network load balancers (NLBs), and more—each of which can potentially update the source IP and XFF headers. These are tricky to keep track of, order, and manage (see the sketch after this list).
  • Privacy—Network traffic contains a lot of potentially sensitive data, such as secrets, tokens, and headers. Handling it in a compliant way is both critical and difficult.
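To make the proxy-layering point concrete, here is a minimal sketch of how a client IP might be resolved from an X-Forwarded-For chain when the trusted proxy hops are known. The trusted CIDR ranges below are placeholders, not a recommendation for any particular environment.

    import ipaddress

    # Hypothetical proxy ranges we trust (CDN edge, ALB, NLB hops).
    TRUSTED_PROXIES = [
        ipaddress.ip_network("10.0.0.0/8"),    # internal load balancers (placeholder)
        ipaddress.ip_network("192.0.2.0/24"),  # CDN egress range (placeholder)
    ]

    def is_trusted(ip: str) -> bool:
        addr = ipaddress.ip_address(ip)
        return any(addr in net for net in TRUSTED_PROXIES)

    def resolve_client_ip(xff_header: str, peer_ip: str) -> str:
        """Walk the X-Forwarded-For chain right to left, skipping trusted hops.

        The first untrusted address is treated as the real client. If every hop
        is trusted (or the header is empty), fall back to the TCP peer address.
        """
        hops = [h.strip() for h in xff_header.split(",") if h.strip()]
        for hop in reversed(hops):
            if not is_trusted(hop):
                return hop
        return peer_ip

    # Example: the client, then a CDN hop, then an internal load balancer hop.
    print(resolve_client_ip("203.0.113.9, 192.0.2.10, 10.1.2.3", "10.1.2.3"))
    # -> 203.0.113.9

Each proxy layer you add or reorder changes the answer this kind of logic gives, which is exactly why keeping the chain straight is so easy to get wrong.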

Now, let’s take a look at the reference architecture of how to deploy a typical solution like this in AWS.

Source: AWS

Here is a list of the changes you need to make to your infrastructure to get this solution to analyze traffic:

  • Deploy a “Sensor” to capture API traffic within your existing tech stack, which is likely to require site reliability engineering (SRE) and platform teams to get involved, and would need to navigate tricky TLS decryption safely.
  • Stand up new compute infrastructure to sanitize your API traffic (large enough to handle your full API traffic load), which is more work for SREs and also potentially very costly depending on your traffic volume.
  • Update your networking infrastructure to send the full volume of your captured API traffic to a data sanitization service, which means more work for SREs to pipe a copy of your north/south traffic within your virtual private cloud, can be very costly depending on traffic volume, and should be evaluated by compliance teams for safety.
  • Update your AWS egress settings to allow a duplicate copy of your API traffic to be sent to the cloud for out-of-band processing, which is yet another copy of your network traffic that is being transmitted. This traffic also needs to be absorbed by your cloud/SaaS provider, which is likely to be eventually passed back to you as a cost.

In summary, we’re looking at a minimum of two software deployments, three networking changes, and potentially three additional copies of your API traffic being replicated and transmitted to different places. All of this is new work for security, SRE, compliance, and networking teams.
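To give a sense of the SRE work behind just one of these steps, here is a rough boto3 sketch of the kind of VPC Traffic Mirroring setup an out-of-band deployment typically asks for. Every ID and ARN below is a placeholder, and a real rollout would also need mirror filter rules, IAM permissions, and a session for every workload ENI.

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Point mirrored traffic at the vendor's collector (placeholder NLB ARN).
    target = ec2.create_traffic_mirror_target(
        NetworkLoadBalancerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:loadbalancer/net/collector/abc123",
        Description="Out-of-band API security collector",
    )

    # An empty filter still needs ingress/egress rules added before anything flows.
    mirror_filter = ec2.create_traffic_mirror_filter(
        Description="Mirror north/south API traffic",
    )

    # One session per source ENI; this has to be repeated for every workload ENI.
    ec2.create_traffic_mirror_session(
        NetworkInterfaceId="eni-0abc123456789def0",  # placeholder source ENI
        TrafficMirrorTargetId=target["TrafficMirrorTarget"]["TrafficMirrorTargetId"],
        TrafficMirrorFilterId=mirror_filter["TrafficMirrorFilter"]["TrafficMirrorFilterId"],
        SessionNumber=1,
    )

Even then, the mirrored packets are still encrypted, which is the limitation that pushed many vendors toward eBPF-based capture in the first place.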

They only provide visibility

The main value proposition of an out-of-band solution is visibility: in other words, showing you findings and reports about your API security issues. This might have been valuable to security teams in the 2010s, when security products had terrible UX and reporting and were often little more than lightly packaged test scripts and filters.

However, in the 2020s, things are different. Most security tools have made great strides in reporting UX. For example, look at the improvements in the Web Application Firewall (WAF) UX landscape over the last 10 years.

Today, the problem isn’t a lack of visibility into security issues. The problem is that we’ve found so many issues that we don’t know what to do with them, and we can’t get any of them resolved. This has given rise to an entirely new product category called “Posture Management,” whose sole value proposition is to help prioritize all the issues we’ve found and send alerts telling other teams what to do.

Ross Haleliuk explains it well. Visibility isn’t enough.

They create too much noise

Putting aside the fact that visibility isn’t a solution to any security issue, what is often overlooked is that surfacing issues also creates a mountain of additional work for security teams to sort through.

Here’s a screenshot of an out-of-band solution’s overview dashboard that summarizes the number of OWASP API Top 10 issues found in the environment. Let’s take a look at the category called “Security Misconfiguration”: there are 789 issues within that very broad category, 671 of which are critical.

For each of these 671 critical issues, we now have to investigate. Let’s look at the existing workflow for one of these misconfigurations: sensitive data in an unauthenticated endpoint.

In this example, the investigation activity isn’t too extensive: it involves reviewing API request/response traffic for sensitive data payloads and indicators of weak authentication. However, this still means two new tasks for a security engineer to complete, in addition to the administrative overhead of keeping track of the issue status.

Furthermore, if the security engineer wants to resolve this issue, additional administrative tasks of creating a ticket for engineering and keeping track of the state of that ticket are created.

In summary, this potentially results in six different things a security engineer has to do for this one critical issue—all of which have to be repeated for each of the remaining critical issues. And what has been accomplished by the end of this?

Not a lot.

They can’t block traffic

While there have been across-the-board improvements in visibility within the cybersecurity realm, the core protection technology, particularly ModSecurity, has seen little to no advancement. Out-of-band solutions continue to depend on this antiquated system to facilitate blocking through webhooks and alerts. Relying on webhooks and alerts for blocking is fraught with issues: it is unreliable and cannot be trusted as a dependable security control.

Here’s why this is the case:

They can’t block more than an IP

Because out-of-band solutions are, by definition, not inline, they must rely on third-party integrations with existing infrastructure, such as a WAF or API gateway, to work. The problem is that these integrations are extremely limited.

Imagine we ended up with a list of sensitive endpoints and tokens and wanted to create a policy to block a specific endpoint or a specific API token.

Unfortunately, the only rule that an out-of-band solution can create via webhook to AWS WAF is a simple IP-based blocking request.

Still just an IP-based access-control list (ACL)

Looking at the example above, this WAF rule generated by an out-of-band solution has no expiry time, no endpoint detail, and no API token, because most WAFs do not offer the ability to specify that level of detail in a blocking policy via API call or webhook. The rule is just a simple IP address block.
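For illustration, here is roughly what that webhook-driven block amounts to in practice: a boto3 sketch (the IP set name, ID, and address are placeholders) that adds a single CIDR to an AWS WAF IP set. Notice that nothing in this call can express an endpoint path, an API token, or an expiry time.

    import boto3

    wafv2 = boto3.client("wafv2", region_name="us-east-1")

    # Fetch the current IP set; the LockToken is required for the update call.
    ip_set = wafv2.get_ip_set(Name="blocked-ips", Scope="REGIONAL", Id="example-ip-set-id")

    addresses = set(ip_set["IPSet"]["Addresses"])
    addresses.add("203.0.113.7/32")  # the only thing the webhook can express: one IP

    # Replace the address list wholesale; no path, token, or TTL fields exist here.
    wafv2.update_ip_set(
        Name="blocked-ips",
        Scope="REGIONAL",
        Id="example-ip-set-id",
        Addresses=sorted(addresses),
        LockToken=ip_set["LockToken"],
    )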

There are many reasons why blocking by IP address is problematic; for example, imagine a coffee shop or a university where a single malicious user is accessing your APIs. If you block the IP address, then all of the legitimate users at that location will also be blocked.

More detail and context are required to properly construct a blocking policy; otherwise, you will generate too many false positives to ever trust this type of solution to block production traffic.

They can’t validate blocking decisions

Those with deep familiarity with the runtime protection space know that avoiding false positives is one of the most important objectives. If security teams are going to block production traffic, they need to be confident in what they’re blocking and why it’s being blocked, and they need to be able to show stakeholders evidence for the decisions they made.

Now imagine you were trying to get this from your out-of-band solution, which was sending webhook requests to your WAF. How would you validate what was happening?

Because AWS WAF does not have a corresponding outbound data pipeline to feed information back to your out-of-band tool, that means that you are left having to analyze your WAF logs for this decision.

By default, the WAF logs are going to tell you, “I blocked this request because of rule 123.” To understand why that happened, you would need to correlate that log line with the rule set up in your out-of-band solution. Did it create rule 123? What was the policy for? And given the WAF’s limitations, the rule is most likely a single IP block, which you would then have to investigate to validate.
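As a rough illustration of that correlation exercise, the sketch below joins a WAF log entry’s terminating rule ID back to a hypothetical export of the out-of-band tool’s rule inventory. The field names follow AWS WAF’s JSON log format, but the rule mapping is made up for this example.

    import json

    # Hypothetical mapping exported from the out-of-band tool's rule inventory.
    OOB_RULES = {
        "rule-123": {"reason": "suspicious token activity", "action": "block ip"},
    }

    def explain_block(waf_log_line: str) -> str:
        event = json.loads(waf_log_line)
        rule_id = event.get("terminatingRuleId", "unknown")
        client_ip = event.get("httpRequest", {}).get("clientIp", "unknown")
        rule = OOB_RULES.get(rule_id)
        if rule is None:
            return f"Blocked {client_ip} by {rule_id}, but no matching out-of-band rule was found."
        # Even with a match, we only learn that an IP was blocked, not which token or endpoint.
        return f"Blocked {client_ip} by {rule_id}: {rule['reason']} ({rule['action']})."

    log_line = '{"terminatingRuleId": "rule-123", "httpRequest": {"clientIp": "203.0.113.7"}}'
    print(explain_block(log_line))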

Going back to our sensitive data use case—let’s say you wanted to block a suspicious API token heading to an endpoint that processed sensitive data.

Let’s say that, somehow, you were able to get around the AWS WAF ACL limitation and communicate which path and token combination to block. How would you validate that you blocked the right endpoint and token at the WAF level? The WAF log is not going to contain the API token that was blocked.

The answer is you can’t validate it. And this means you can’t trust this type of solution.

API protection with Impart Security

Now let’s look at how to solve the same use case using Impart’s API security platform.

Detect sensitive data endpoints with security functions

Impart comes with sensitive data detections out of the box. They are code-based detections that leverage a DSL to run automatically and generate lists of endpoints that process sensitive data, without you having to write, tune, or maintain queries. They are accurate, customizable, and can be ordered using a graph-based interface. This allows them to be layered in a way that reduces false positives—for example, only looking for credit card expiry fragments when a higher-confidence detection (such as an algorithmically validated credit card number) has already fired.
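To illustrate the layering idea (this is a simplified sketch, not Impart’s actual DSL or detection code), an expiry-date fragment on its own is weak evidence, but combined with a Luhn-valid card number in the same payload it becomes a high-confidence finding:

    import re

    CARD_RE = re.compile(r"\b(?:\d[ -]?){13,19}\b")
    EXPIRY_RE = re.compile(r"\b(0[1-9]|1[0-2])\s*/\s*\d{2,4}\b")

    def luhn_valid(number: str) -> bool:
        """Algorithmically validate a candidate card number with the Luhn checksum."""
        digits = [int(d) for d in re.sub(r"\D", "", number)]
        checksum = 0
        for i, d in enumerate(reversed(digits)):
            if i % 2 == 1:
                d *= 2
                if d > 9:
                    d -= 9
            checksum += d
        return len(digits) >= 13 and checksum % 10 == 0

    def detect_card_data(body: str) -> list[str]:
        findings = []
        has_valid_pan = any(luhn_valid(m.group()) for m in CARD_RE.finditer(body))
        if has_valid_pan:
            findings.append("credit_card_number")
            # Only look for the lower-confidence expiry fragment once a PAN is confirmed.
            if EXPIRY_RE.search(body):
                findings.append("credit_card_expiry")
        return findings

    print(detect_card_data('{"card": "4111 1111 1111 1111", "exp": "12/26"}'))
    # -> ['credit_card_number', 'credit_card_expiry']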

Detect abusive API tokens with over-time detections

Impart comes out of the box with a full suite of JWT detections that can be used to identify potential authorization anomalies or forgeries. In addition to these detections, Impart can also detect abusive behavior over time, such as API tokens that are used within a short period from two different geographical locations, API tokens that are seeing high volumes of error response codes for different endpoints, or API tokens that are seeing excessive usage. These detections come out of the box with Impart as Rule Templates, and can be easily customized to meet specific use cases. Similar to security functions, these detections can be layered using a graph interface to create highly accurate security policies that identify suspicious API tokens.
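As a simplified sketch of one such over-time detection (the window and threshold are arbitrary, and real geolocation would come from an IP lookup upstream), flagging a token seen from two different countries within a short window might look like this:

    from collections import defaultdict
    from dataclasses import dataclass

    @dataclass
    class Sighting:
        timestamp: float  # epoch seconds
        country: str      # assumed to come from a GeoIP lookup upstream

    WINDOW_SECONDS = 15 * 60  # flag if two countries appear within 15 minutes

    class ImpossibleTravelDetector:
        def __init__(self) -> None:
            self._sightings: dict[str, list[Sighting]] = defaultdict(list)

        def observe(self, token_id: str, timestamp: float, country: str) -> bool:
            """Record a request for this token; return True if it looks abusive."""
            history = self._sightings[token_id]
            # Drop sightings that have fallen out of the sliding window.
            history[:] = [s for s in history if timestamp - s.timestamp <= WINDOW_SECONDS]
            abusive = any(s.country != country for s in history)
            history.append(Sighting(timestamp, country))
            return abusive

    detector = ImpossibleTravelDetector()
    detector.observe("tok_abc", 1_700_000_000, "US")         # first sighting: not abusive
    print(detector.observe("tok_abc", 1_700_000_300, "BR"))  # True: two countries in 5 minutes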

Dynamic lists and one simple API Firewall rule

With sensitive data and abusive API token detections turned on, Impart can generate dynamic lists for each. As new sensitive data endpoints and potentially abusive tokens are detected, they’ll be added to the lists automatically.

These lists can be referenced in a single rule that limits requests to any endpoint on the sensitive data endpoints list, from any token on the abusive tokens list. This rule can be created and tuned quickly using Impart’s rule templates and rule editor, and doesn’t require logging request/response bodies or API tokens.
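Conceptually (this pseudocode uses placeholder names and does not reflect Impart’s actual rule syntax), the enforcement logic reduces to a single membership check against the two dynamic lists:

    # Conceptual sketch only: placeholder values, not Impart's rule DSL.
    sensitive_endpoints = {"/v1/users/export", "/v1/payments"}  # maintained by detections
    abusive_tokens = {"tok_abc", "tok_xyz"}                     # maintained by detections

    def should_block(path: str, token_id: str) -> bool:
        """Block requests from a flagged token to a flagged endpoint."""
        return path in sensitive_endpoints and token_id in abusive_tokens

    print(should_block("/v1/payments", "tok_abc"))   # True
    print(should_block("/v1/payments", "tok_good"))  # False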

Self-maintaining

Because all of these concepts are natively built into Impart, this entire process is self-maintaining. Endpoints with sensitive data are constantly cycled in and out of the endpoint list based on what data they are transmitting within a user-defined window. Similarly, abusive API tokens are constantly cycled in and out of the token list based on how they behave.
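One way to picture that cycling behavior (a sketch with an assumed user-defined window, not Impart’s internals): list entries carry a last-seen timestamp and silently expire once nothing has refreshed them within the window.

    import time

    class ExpiringList:
        """Set-like list whose entries expire after a user-defined window."""

        def __init__(self, window_seconds: float) -> None:
            self.window = window_seconds
            self._last_seen: dict[str, float] = {}

        def refresh(self, item: str) -> None:
            # A new detection (sensitive endpoint, abusive token) re-arms the entry.
            self._last_seen[item] = time.time()

        def members(self) -> set[str]:
            now = time.time()
            # Drop anything that has not been observed within the window.
            self._last_seen = {k: t for k, t in self._last_seen.items() if now - t <= self.window}
            return set(self._last_seen)

    endpoints = ExpiringList(window_seconds=24 * 3600)
    endpoints.refresh("/v1/payments")
    print("/v1/payments" in endpoints.members())  # True until 24h pass without a new detection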

Because this is all happening at runtime, this entire system can also detect things like new endpoints, new tokens, and new threats with very little maintenance or changes to firewall policies. Impart’s complete loop gives security teams the ability to detect, respond, and adapt to emerging security threats dynamically.

Wrapping up

If you’ve made it this far, I hope you leave with a much deeper understanding of the pain involved in trying to defend against API attacks with a WAF, and how Impart makes this easier. Please contact us at try.imp.art if you want to learn more about Impart’s runtime API security approach, and be sure to follow us on LinkedIn to stay up to date with the latest and greatest.
