How Serverless Technology is Changing the Security Paradigm

There are clear benefits to serverless technology with many enterprises already making or planning to make the change. Physical infrastructure and systems software are seemingly no longer issues that developers have to deal with, serverless applications are extremely elastic and easily scalable, and companies only have to pay for the resources they actually use.

What’s more, as Ayala Goldstein points out, going serverless is another example of how developers can leverage third-party services—much like smart open source code usage—to outsource work that isn’t core to the product they are building and release to market faster.

Like any paradigm shift, however, serverless technology introduces a new set of issues and application security challenges.
In this article, we’ll explore how serverless can improve security, where it can introduce risks, and how it changes traditional threats. The aim here is to give you a broad understanding of the advantages and disadvantages of this technology where security is concerned.

Serverless Technology: Too Good to Be True?

When you’re developing a product or service, you spend a great deal of time building and deploying apps—and a lot of time managing the servers and resources they use, too.

Serverless technology, with its abstraction of operating systems, servers, and infrastructure, solves this problem, eliminating many of the issues associated with provisioning or managing physical servers. This allows developers to get on with what they do best—writing code.

In a lot of ways, going serverless alleviates security concerns associated with managing your own servers and resources because you’re handing over much of the responsibility to your cloud provider.

This isn’t such a bad thing. In fact, it’s particularly reassuring since the big names in serverless technology—Amazon, Microsoft, Google, and IBM, which run the popular serverless platforms AWS Lambda, Azure Functions, Google Cloud Functions, and IBM Cloud Functions, respectively—take security very seriously.

But developers are still responsible for building robust applications and making sure that application code doesn’t introduce application layer vulnerabilities. Moreover, any configuration related to the application itself or to the cloud services it interacts with still needs to be secure—again, the responsibility of the developer.

In the serverless world of Functions as a Service, the developer and the cloud provider share security responsibilities:

Serverless Security Model

Security experts at Snyk highlighted this recently in an analysis of the world’s top 50 breaches, finding that 12 were caused by components with known vulnerabilities. Application-level vulnerabilities—including cross-site scripting, SQL injection, and CSRF—remain just as dangerous in serverless environments; attackers will simply move up the stack.

Further, AWS Solutions Architect Justin Pirtle advised in a presentation on security best practices for serverless applications that application security best practices (mandatory code review, static analysis, input validation and sanitization, and protection against SQL injection) still apply in serverless architectures.
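These practices translate directly into serverless code. Below is a minimal sketch of a Lambda-style handler combining whitelist input validation with a parameterized query; the handler name follows the standard AWS `(event, context)` convention, while the `users` table and the `username` parameter are invented for illustration, and SQLite stands in for a real database:

```python
import json
import re
import sqlite3

# Strict whitelist: only short alphanumeric/underscore usernames are accepted.
USERNAME_RE = re.compile(r"^[a-zA-Z0-9_]{1,32}$")

def lambda_handler(event, context):
    username = (event.get("queryStringParameters") or {}).get("username", "")

    # Input validation: reject anything outside the whitelist pattern
    # before it ever reaches the database layer.
    if not USERNAME_RE.match(username):
        return {"statusCode": 400, "body": json.dumps({"error": "invalid username"})}

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice')")

    # Parameterized query: the driver binds the value safely,
    # preventing SQL injection even if validation were bypassed.
    row = conn.execute(
        "SELECT name FROM users WHERE name = ?", (username,)
    ).fetchone()
    return {"statusCode": 200, "body": json.dumps({"found": row is not None})}
```

Note the defense in depth: even with validation in place, the query never concatenates user input into SQL.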

Pirtle’s presentation is well worth a read. So, too, is Protego CTO Hillel Solow’s article on the 9 serverless security best practices developers should adopt as part of the shift to serverless.

But let’s take a look at the broader serverless security picture and how things are changing.

Serverless Security: The Good, the Bad, and the Changing

The reduced cost and simplicity of serverless technology are not without drawbacks. Serverless introduces a new set of challenges that should be taken into consideration.

1. Provider Takes Care of Operating System Patches

Let’s start with one of the biggest advantages that’s often touted by proponents of serverless: you no longer have to deal with operating system patching.

Since most malware tries to compromise systems by using known vulnerabilities that operating systems have already patched, it’s best practice to apply OS patches as soon as they become available and actively monitor your system for missing patches.

With serverless, gone is the time spent worrying about patching because your provider takes care of it for you.
Take the recent Meltdown and Spectre attacks, for example. As Tom McLaughlin points out, AWS Lambda customers weren’t required to do anything at all in response to either vulnerability. AWS assumed responsibility for testing and rolling out patches.

Compare this to enterprises that spent time tracking patch announcements, testing, and rolling out patches (and rolling back in some cases), and the overall cost incurred as a result of the disclosure. A month after the attacks, just one-third of companies had patched over 25% of their hosts.

According to a Verizon Data Breach Investigations Report, known security vulnerabilities in unpatched servers and apps are the primary vector through which systems are commonly exploited. This is due to their frequency and broad deployment, and the fact that updating servers and apps at scale is hard.

When you switch to serverless, you immediately solve this headache and become significantly more secure.
It’s worth pointing out that enterprise managed WordPress hosting solutions like Pagely also take care of OS patching and actively monitoring for vulnerabilities so you don’t have to.

2. Denial of Service Becomes Denial of Wallet

Another big advantage of serverless technology is its extreme elasticity. Its immediate and seamless provisioning of functions means you can quickly scale your application to handle sudden and heavy demand.

It also means when there is no demand, you can have no servers running—and pay nothing.

But this elasticity introduces a new problem: Denial of Wallet (DoW).

Consider Denial of Service (DoS) attacks, which often try to take down systems with large volumes of compute or memory-intensive actions, maxing out server capacity and locking out legitimate users.

With serverless technology, you could potentially scale your way out of a DoS attack. More requests, whether they’re from an attacker or a genuine user, would simply make your provider provision more ad hoc servers so good users wouldn’t be impacted. However, it would cost you—you would still have to pay for the extra resources, which is why this new kind of attack has been labeled DoW.

Snyk’s co-founder and CEO Guy Podjarny writes that while serverless can help mitigate DoS, it doesn’t completely eliminate it. Platforms don’t have infinite capacity, and some types of DoS, like Distributed Denial of Service (DDoS), target network bandwidth or DNS rather than the application. To handle these issues, it’s worth considering a DDoS protection solution, such as those offered by web hosts and some cloud platforms.
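Platform-level controls such as AWS Lambda’s reserved concurrency can cap the maximum spend; at the application level, a per-client rate limiter can reject abusive traffic early. Here is a minimal in-memory sketch of the classic token bucket approach, assuming a single warm container (a real deployment would back the state with a shared store such as Redis):

```python
import time

class TokenBucket:
    """Per-client token bucket: allows `rate` requests per second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self._state = {}  # client_id -> (tokens, last_refill_time)

    def allow(self, client_id):
        now = time.monotonic()
        tokens, last = self._state.get(client_id, (self.capacity, now))
        # Refill tokens proportionally to elapsed time, capped at capacity.
        tokens = min(self.capacity, tokens + (now - last) * self.rate)
        allowed = tokens >= 1
        if allowed:
            tokens -= 1  # Spend one token for this request.
        self._state[client_id] = (tokens, now)
        return allowed
```

Rejecting a request this way still costs one invocation, but it stops an attacker from triggering your expensive downstream work (databases, third-party APIs) at scale.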

3. Greater Dependency On Third-Party Services

Serverless functions often include dependencies pulled in from npm (Node.js), PyPI (Python), Maven (Java), or other relevant repositories. Whatever language you use, it’s important to keep in mind that libraries and packages are prone to security vulnerabilities, whether they are deployed manually or in a sandbox.

The nature of serverless makes managing third-party dependencies manually particularly challenging. So it’s important to follow best practices in your code so as not to leave your application open to traditional attacks like denial of service and SQL injection.

Christopher Shoe, Director of Data Operations at Ease Inc., says static and dynamic security testing, input validation, and whitelisting should be favored whenever possible.

Securing application dependencies, writes Protego’s Solow, requires access to a good database and automated tools to prevent new vulnerable packages from being used and so you can be alerted about newly disclosed issues.

Solow also recommends minimizing the impact of vulnerable libraries by ensuring proper segmentation of your app into disparate services, and carefully applying the principle of least privilege.
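At its core, an automated dependency check compares your pinned versions against an advisory feed. The sketch below illustrates the idea; the advisory entry and package name are invented, and version parsing is simplified to dotted integers (real tools such as Snyk’s database or PyPI advisories supply this data continuously and handle full version semantics):

```python
# Hypothetical advisory feed; real tools pull this from a vulnerability
# database and refresh it continuously.
KNOWN_VULNERABLE = {
    "examplelib": {"fixed_in": (2, 1, 0), "advisory": "EXAMPLE-2024-001"},
}

def parse_version(version):
    """Simplified parser: handles plain dotted-integer versions only."""
    return tuple(int(part) for part in version.split("."))

def audit(dependencies):
    """Return advisories for any pinned dependency below its fixed version."""
    findings = []
    for name, version in dependencies.items():
        advisory = KNOWN_VULNERABLE.get(name)
        if advisory and parse_version(version) < advisory["fixed_in"]:
            findings.append((name, version, advisory["advisory"]))
    return findings
```

Running a check like this in CI, and alerting on newly disclosed issues against already-deployed functions, covers both halves of Solow’s recommendation.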

Ultimately, each third-party service you use is a potential point of compromise. To control this risk, Podjarny suggests following these steps:

  • Require a valid TLS certificate to validate that the service you’re using is indeed the one you think you are using (not to mention actually securing data in transit).
  • Apply input validation on responses from third-party services. Such responses are often processed blindly, even when user input is tightly managed.
  • Minimize and anonymize the data you send the service, keeping it to the information it needs to receive to properly operate.
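The first two points can be sketched as follows, assuming a hypothetical third-party JSON API that returns a numeric `rate` field: the standard library’s `urllib` verifies TLS certificates by default, and the response is validated like any other untrusted input rather than processed blindly:

```python
import json
import urllib.request

def fetch_rate(url):
    # urllib verifies the server's TLS certificate against the system trust
    # store by default; never disable verification for a third-party API.
    with urllib.request.urlopen(url, timeout=5) as resp:
        return validate_response(json.load(resp))

def validate_response(payload):
    """Treat the third-party response as untrusted: check type and range."""
    if not isinstance(payload, dict):
        raise ValueError("expected a JSON object")
    rate = payload.get("rate")
    if not isinstance(rate, (int, float)) or not 0 < rate < 1000:
        raise ValueError("rate missing or out of range")
    return float(rate)
```

Keeping validation in its own function also makes it easy to reuse the same checks across every function that consumes the service.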

4. Increased Attack Surface and Complexity

Functions pull data from a broad range of event sources, including HTTP APIs, cloud storage, message queues, and IoT device communications. This dramatically increases the attack surface, especially when messages use protocols and complex message structures that can’t be inspected by web application firewalls.

As Podjarny explains, while functions are technically independent of each other, most are only invoked in a handful of sequences within serverless apps. As a result, many functions start assuming another function ran before them and sanitized the data in some way. In other words, functions start trusting input on the assumption that it comes from a trusted source.

This is harmful for your security because:

  • An attacker may invoke a function directly;
  • A function may be added to a new flow that doesn’t sanitize the input; and
  • An attacker who compromises one function gains easy access to other poorly defended functions.

On top of this, given the newness of serverless architecture, the attack surface can also be complex to understand. Many developers are yet to build enough experience with the security risks and appropriate protections required to secure serverless applications.

To avoid being as strong as your weakest function, Podjarny recommends making sure you treat every function as an independent entity with a secured perimeter.
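One way to make that perimeter systematic is to re-validate the event in every function, even when it “should” arrive pre-sanitized from upstream. A sketch using a simple validation decorator (the schema, field names, and handler below are hypothetical):

```python
import functools

def validated(schema):
    """Decorator: each function re-validates its own event
    instead of trusting whatever invoked it."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(event, context):
            for field, expected_type in schema.items():
                if not isinstance(event.get(field), expected_type):
                    return {
                        "statusCode": 400,
                        "body": f"invalid or missing field: {field}",
                    }
            return handler(event, context)
        return wrapper
    return decorator

# Hypothetical downstream function: it never assumes an upstream
# function already sanitized `order_id` and `quantity`.
@validated({"order_id": str, "quantity": int})
def process_order(event, context):
    return {"statusCode": 200, "body": f"processed {event['order_id']}"}
```

Because the check lives in a shared decorator, adding a function to a new flow (or exposing it to direct invocation) doesn’t silently drop the validation.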

5. Cheap and Easy Deployment Leads to Explosion of Functions

As I mentioned earlier, one of the benefits of serverless technology is that you don’t pay for functions when they’re not being used. On top of this, deploying them is fairly easy and automated.

With such low thresholds, developers tend to take a fairly casual approach to deploying new functions, even if they’re not absolutely needed. The problem is, these dormant functions still exist as an attack surface and actually become more of a problem because they’re less likely to be updated and patched.

Over time, as engineer and AWS Lambda enthusiast Yan Cui writes, these unused functions can become a hotbed for components with known vulnerabilities that attackers can exploit.

Deployed functions can be hard to remove unless you track them well, since it’s difficult to know who may be relying on their existence. Combined with excessive permissions that are equally hard to pare back, the result is a sprawl of functions that are tough to remove and that contribute to an ever-growing attack surface.

This isn’t a difficult one to combat—just keep in mind that each new function you deploy represents another security risk rather than a low-cost addition to your application. To avoid future problems, make sure you monitor all functions for security risk and track their deployment.
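As a sketch of such tracking, the function below flags deployed functions with zero invocations over a retention window. It uses the real AWS APIs (`list_functions` and CloudWatch’s `GetMetricStatistics` on the `AWS/Lambda` `Invocations` metric), but the clients are injected so the logic stays testable; in practice they would be `boto3.client("lambda")` and `boto3.client("cloudwatch")`:

```python
import datetime

def stale_functions(lambda_client, cloudwatch_client, days=90):
    """Return names of functions with zero invocations in the last `days` days."""
    now = datetime.datetime.now(datetime.timezone.utc)
    stale = []
    for page in lambda_client.get_paginator("list_functions").paginate():
        for fn in page["Functions"]:
            stats = cloudwatch_client.get_metric_statistics(
                Namespace="AWS/Lambda",
                MetricName="Invocations",
                Dimensions=[{"Name": "FunctionName", "Value": fn["FunctionName"]}],
                StartTime=now - datetime.timedelta(days=days),
                EndTime=now,
                Period=86400,  # one datapoint per day
                Statistics=["Sum"],
            )
            total = sum(point["Sum"] for point in stats["Datapoints"])
            if total == 0:
                stale.append(fn["FunctionName"])
    return stale
```

Reviewing (and ideally deleting) whatever this reports on a schedule keeps dormant functions from quietly accumulating.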

6. Stateless Functions Demand Better Data Security

Due to the ephemeral nature of serverless functions, all your functions are likely to be stateless. This means state is stored in external systems (e.g. Amazon ElastiCache), and you need to secure it both at rest and in transit.

Storing sensitive data outside the server has significant security implications. For one thing, the data is at risk when transferred. It’s also likely to persist longer and be accessible to more machines. Plus, if the data store is compromised, more users will be impacted.

While serverless isn’t the only technology that relies on external storage, it certainly increases the frequency and importance of securing such data.

To combat this, Podjarny recommends encrypting data stored in session stores, using short-lived caches, carefully managing who has access to these repositories, and encrypting data in transit.
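The short-lived cache recommendation can be sketched as a TTL-enforcing wrapper around the session store. This version keeps data in memory purely for illustration; a real store like ElastiCache enforces TTLs natively, and values should additionally be encrypted before being written:

```python
import time

class ShortLivedCache:
    """In-memory stand-in for an external session store, illustrating
    aggressive TTLs so sensitive state doesn't persist longer than needed."""

    def __init__(self, ttl_seconds):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (value, expiry_time)

    def put(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            # Expired: evict rather than serve stale sensitive data.
            del self._store[key]
            return None
        return value
```

The shorter the TTL, the smaller the window in which a compromised store exposes live session data.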

And though it’s not always front of mind, keep tabs on opportunities for leaked credentials, compromised developer accounts, or any other means of database compromise that could lead to problems for your application and users.

Conclusion

Serverless technology offers enormous benefits for enterprises, as we’ve explored above, including extreme elasticity, scalability, and relatively low cost, with the biggest advantage being that it allows you to take your mind off infrastructure issues and focus on your business goals.

Despite its significant adoption over the past four years and constantly growing interest, it’s important to recognize serverless technology is still in its infancy. With new security challenges and changing traditional threats, stringent adherence to best practices will boost your security posture.

While we’ve covered some negative aspects in this article, don’t let them scare you off serverless technology. Ultimately, its security advantages and disadvantages are comparable to other approaches, especially given the early stage of relevant security tools. It’s important to understand the risks and challenges before making the switch.
