Cloud Security and AWS: Part 1

As DevOps culture and the public cloud have become widely adopted, engineering teams in many companies have become faster and more agile than ever before. However, the very autonomy and agility that teams now enjoy also demand that they take a bigger role in operational security. As such, DevSecOps and Cloud Native Security are essential topics that need to be addressed to ensure smooth sailing on every project and a proactive stance towards security.

In this article, we will cover some industry best practices in cloud security and give some tips and tricks that you can apply in your day-to-day operations. While these can generally be applied to any public cloud vendor, we’ll use AWS as a reference due to its large market share.

Overview of Cloud Security

Previously, security was a shared responsibility between Development and Operations (Infra) teams. Today, the responsibility is split between an entire company (via its DevOps team) and its public cloud provider. AWS illustrates this topic very well in its AWS Shared Responsibility Model: AWS is responsible for the Security “of” the Cloud, while its customers are responsible for the Security “in” the Cloud. In other words, AWS is responsible for protecting the underlying infrastructure (e.g., hardware, software, network, facilities) and managed services, while the customer is responsible for managing and configuring the services they build on top of AWS.

You can divide AWS services into three groups: Infrastructure, Container, and Abstracted.

  • Infrastructure: You have a bigger operational responsibility with these services, such as VPC or EC2. You are responsible for applying security updates to your own instances and for making sure that your VPC is properly configured.
  • Container: These are services where AWS offers you a “semi-managed” experience, such as RDS and EMR. Here, you still have operational responsibility for the underlying resources that those services provision, including the EC2 instances associated with them.
  • Abstracted: These services are typically associated with serverless models, such as S3, SQS, and SES. With these, you have the least operational responsibility because you don’t have underlying resources running in your account. However, you do still have to make sure that these resources are properly configured, as shown in the sketch after this list.
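
As a concrete illustration of that configuration responsibility, here is a minimal sketch, assuming Python with the boto3 SDK and a hypothetical bucket name, that hardens an S3 bucket by blocking all public access and enabling default server-side encryption:

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-data-bucket"  # hypothetical bucket name

    # Block all forms of public access to the bucket
    s3.put_public_access_block(
        Bucket=bucket,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

    # Encrypt all new objects at rest by default (SSE-S3)
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )

Even for abstracted services, a misconfiguration such as a publicly readable bucket remains the customer’s responsibility.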

Equally important to the responsibility split is understanding how to comply with standards and regulations in a cloud world. The major public cloud vendors adhere to multiple compliance programs in order to make it easier for their customers to achieve that status themselves.

Using AWS as an example, customers can navigate its compliance portal to see the many programs and certifications that AWS has attained, such as PCI DSS, HIPAA, FIPS, and ISO 27001. 

However, just because a cloud provider has achieved this status does not automatically qualify its customers to claim the same level of certification/status. Any customer that wishes to comply with such programs and regulations for their services running on the public cloud is required to pursue them on their own. 

The good news is that when the underlying infrastructure and platform of a given cloud provider are already certified, the process becomes much easier and more lightweight for you.

An Introduction to AWS Services

First-time cloud users and companies transitioning from an on-premises data center to the public cloud might find the process quite daunting and intimidating, even for IT professionals with 20+ years of experience. This is because there is a paradigm shift between traditional IT infrastructure and the public cloud. This section provides an overview of the key differences and similarities between both worlds.

At its core, the public cloud has the same type of resources that can be found in a traditional on-premises data center. These foundational elements include compute, storage, and network resources.

Foundational Elements

Compute resources: These are similar to the virtual machines you have on-premises. AWS provides virtual instances (AWS EC2), but EC2 is only one of multiple compute services that AWS has to offer. Others include AWS Lambda (for serverless FaaS computing), AWS EKS (for managed Kubernetes), and AWS ECS/Fargate (for serverless containers as a service). These are services worth exploring when designing a cloud-native application because they reduce the operational overhead and truly allow you to take advantage of public cloud managed services.
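
All of these compute services are managed through the same kind of API. As a minimal sketch, assuming Python with boto3 and default credentials, the following lists the EC2 instances and Lambda functions in the current region (pagination omitted for brevity):

    import boto3

    # Virtual instances (IaaS-style compute)
    ec2 = boto3.client("ec2")
    for reservation in ec2.describe_instances()["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])

    # Serverless functions (FaaS-style compute)
    lambda_client = boto3.client("lambda")
    for function in lambda_client.list_functions()["Functions"]:
        print(function["FunctionName"], function.get("Runtime", "container-image"))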

Storage resources: Two AWS services are equivalent to the storage found in a traditional on-premises data center. AWS EBS provides block storage to be used by services such as AWS EC2 (virtual instances), similar to a traditional SAN that provides block storage to virtual machines in a data center. AWS EFS provides network file storage, similar to a traditional NAS. A third storage service available in AWS, and not often found in a traditional on-premises data center, is AWS S3 (object storage). Object storage offers unlimited storage for objects (i.e., binary files) via a REST API. It is a key concept used to provide persistent data storage (i.e., statefulness) to cloud-native applications.
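
To make the object storage model concrete, here is a minimal sketch, assuming Python with boto3 and a hypothetical bucket that already exists, that writes and reads back an object through the S3 API:

    import boto3

    s3 = boto3.client("s3")
    bucket = "example-app-state"  # hypothetical bucket, assumed to already exist

    # Store a binary object under a key (boto3 wraps the S3 REST API)
    s3.put_object(Bucket=bucket, Key="sessions/user-42.json", Body=b'{"cart": []}')

    # Read the same object back
    response = s3.get_object(Bucket=bucket, Key="sessions/user-42.json")
    print(response["Body"].read())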

Network resources: These resource types in the public cloud are similar in theory, yet very different in practice, to the ones found in an on-premises data center. Networks in the public cloud are SDNs (Software Defined Networks), so all network resources can be fully created, managed, and terminated via API requests.
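
For example, an entire virtual network can be created with a single API call. Here is a minimal sketch, assuming Python with boto3; the CIDR range and tag value are arbitrary illustrative choices:

    import boto3

    ec2 = boto3.client("ec2")

    # Create a new virtual network purely through the API
    vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]
    ec2.create_tags(
        Resources=[vpc["VpcId"]],
        Tags=[{"Key": "Name", "Value": "example-vpc"}],
    )
    print("Created", vpc["VpcId"])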

 

Availability Zones

There are multiple AWS Regions around the globe, each with two or more Availability Zones. One can roughly compare an Availability Zone to an independent data center, meaning that one would expect to find at least two data centers in each AWS Region. A network subnet created in AWS is assigned to a specific Availability Zone. Multiple network subnets that logically belong together can be grouped into a VPC (Virtual Private Cloud). A VPC is assigned to a specific AWS Region and provides an isolated network perimeter. This enables customers to have highly available networks spanning multiple Availability Zones, creating ideal scenarios for failover, disaster recovery, and high availability.
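
Continuing that idea, subnets are pinned to a specific Availability Zone at creation time. The sketch below assumes Python with boto3, a hypothetical existing VPC ID, and example zone names from the eu-west-1 Region:

    import boto3

    ec2 = boto3.client("ec2")
    vpc_id = "vpc-0123456789abcdef0"  # hypothetical; e.g., the VPC created above

    # One subnet per Availability Zone enables failover and high availability
    for cidr, zone in [("10.0.1.0/24", "eu-west-1a"), ("10.0.2.0/24", "eu-west-1b")]:
        subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=zone)["Subnet"]
        print(subnet["SubnetId"], "in", zone)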

Fig 1: Example AWS network layout with on-premises connectivity

An example of a typical network topology in AWS, using the native services and including connectivity to an on-premises data center, can be found in Figure 1 above.

The security considerations and more in-depth details for both the network and storage aspects of AWS will be the main theme of the second article in this series. For now, we will move on to the aspects that need to be taken into account when considering a migration to the public cloud and planning the transition.

Things To Consider Before Migrating

First and foremost, you should understand the business drivers behind your cloud migration. Cost usually comes to mind, but it shouldn’t be the only, or even the main, consideration. It is true that the public cloud enables you to save on costs over the medium to long term by shifting from CapEx (Capital Expenditures) to OpEx (Operational Expenses) and benefiting from the economies of scale that public cloud providers offer. However, you should keep in mind that over the short term (during the transition), the organization will typically incur higher costs.

From a technical point of view, there are different strategies to consider when moving to the public cloud. The most popular are the “Lift-and-Shift” approach, which migrates on-premises workloads as-is, and the “Re-Architecting” approach, which reworks on-premises applications to become cloud-native. While both have their merits, both also depend heavily on the organization’s available timeframe and technical talent, and thus need to be carefully considered and planned.

For further guidance on migration, see Securely Migrating to AWS. Also, the AWS Cloud Migration portal and the AWS Cloud Adoption Framework provide a rich and in-depth analysis of what should be taken into account before migrating.

Anatomy of AWS Accounts

An AWS account is a key element in your Cloud Security strategy. It can be seen as a logical way to group resources and provide some checks and balances, since each account comes with separate service usage limits and billing. However, this does not mean that an account’s default configuration is adequate from a security standpoint. Because of this, three key actions must be implemented for any new account:

  • Enable Multi-Factor Authentication for your root username/password.
  • Enable CloudTrail to allow logging and auditing of all API requests to AWS services.
  • Create IAM users and roles for every future action (i.e., don’t use the root user/password in your day-to-day activities). A short sketch of the last two actions follows this list.
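
Enabling MFA for the root user is done interactively in the console, but the other two actions can be scripted. Below is a minimal sketch, assuming Python with boto3, hypothetical trail, bucket, and user names, and an S3 bucket that already exists with a CloudTrail bucket policy attached:

    import boto3

    # Turn on auditing of API activity across all regions
    cloudtrail = boto3.client("cloudtrail")
    cloudtrail.create_trail(
        Name="org-audit-trail",
        S3BucketName="example-cloudtrail-logs",  # must already exist with a CloudTrail bucket policy
        IsMultiRegionTrail=True,
    )
    cloudtrail.start_logging(Name="org-audit-trail")

    # Create an IAM user for day-to-day work instead of using root
    iam = boto3.client("iam")
    iam.create_user(UserName="alice")
    iam.attach_user_policy(
        UserName="alice",
        PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",  # start narrow and widen as needed
    )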

Depending on the size and complexity of their use cases, some companies/teams opt for a multi-account strategy (see Fig. 2). This is a very effective way to limit the “blast radius” in case of a security incident due to the built-in isolation between AWS Accounts. Having multiple accounts per project (e.g., CI/DEV, Staging, and Production) does bring an additional level of complexity from a management point of view. However, the AWS Organizations service can simplify the process by allowing you to manage all accounts from a single view and apply security policies and restrictions from the same place.

Fig 2: Example of a multi-account AWS strategy
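
With AWS Organizations, such organization-wide restrictions are typically expressed as Service Control Policies (SCPs). Below is a minimal sketch, assuming Python with boto3, a hypothetical OU ID, and an organization with SCPs already enabled, that prevents member accounts from disabling CloudTrail:

    import boto3, json

    org = boto3.client("organizations")

    # An SCP that denies turning off audit logging in member accounts
    scp = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Deny",
            "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
            "Resource": "*",
        }],
    }

    policy = org.create_policy(
        Name="deny-cloudtrail-tampering",
        Description="Keep audit logging enabled in all member accounts",
        Content=json.dumps(scp),
        Type="SERVICE_CONTROL_POLICY",
    )["Policy"]["PolicySummary"]

    # Attach the policy to an organizational unit (hypothetical OU ID)
    org.attach_policy(PolicyId=policy["Id"], TargetId="ou-ab12-example0")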

On the other hand, a single account strategy is also rather popular (especially in smaller projects with limited resources) since management is far simpler. Still, it’s important to take into consideration that this strategy often translates into more complex IAM roles to ensure that you follow the principle of least privilege per service and environment.
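
For instance, in a single-account setup, each environment’s access is often scoped down with narrowly tailored IAM policies. Here is a minimal sketch, assuming Python with boto3 and hypothetical policy, bucket, and prefix names, of a least-privilege policy that only allows reads from one environment’s objects:

    import boto3, json

    iam = boto3.client("iam")

    # Allow read-only access to a single environment's objects, nothing else
    policy_document = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-app-data/staging/*",
        }],
    }

    iam.create_policy(
        PolicyName="staging-s3-read-only",
        PolicyDocument=json.dumps(policy_document),
    )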

You can also opt for a mixed approach: multiple accounts across the organization, paired with either a single account or multiple accounts per service/use case, depending on the complexity of each.

Conclusion and Next Step

This article provided an introduction to the theme of security in the public cloud (AWS, Microsoft Azure, and Google Cloud) along with some real-world examples in AWS. The main topics introduced were the different strategies that can be used for AWS accounts, as well as the main building blocks that can be found in a public cloud provider such as AWS.

We also covered the foundational elements that can be found both in a traditional on-premises data center and in the public cloud (compute, network, and storage). Understanding these is an important first step towards having a solid security strategy in the public cloud. In part two of this series, we will venture deeper into two of these (network and storage) and cover their main security concerns and strategies worth exploring in AWS.

 

Source: Reblaze/AWS