26 AWS Security Best Practices to Adopt in Production – Part 2

One of the most important pillars of a well-architected framework is security. Thus, it is important to follow these AWS security best practices to prevent avoidable security incidents.

There are many things you must set up if you want your solution to be operative, secure, reliable, performant, and cost effective. And, first things first, the best time to do that is now – right from the beginning, before you start to design and engineer. Continuing from Part 1, here are the next AWS security best practices to adopt in production.

Amazon S3

Amazon Simple Storage Service (Amazon S3) is an object storage service offering industry-leading scalability, data availability, security, and performance. There are a few AWS security best practices to adopt when it comes to S3.

9.- Enable S3 Block Public Access setting 🟨🟨

Amazon S3 public access block is designed to provide controls across an entire AWS account or at the individual S3 bucket level to ensure that objects never have public access. Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, or both.


Unless you intend for your S3 buckets to be publicly accessible, you should configure the account-level Amazon S3 Block Public Access feature.

Get the names of all S3 buckets available in your AWS account:

aws s3api list-buckets --query 'Buckets[*].Name'

For each bucket returned, get its S3 Block Public Access feature configuration:

aws s3api get-public-access-block --bucket BUCKET_NAME

The output for the previous command should be like this:

{
    "PublicAccessBlockConfiguration": {
        "BlockPublicAcls": false,
        "IgnorePublicAcls": false,
        "BlockPublicPolicy": false,
        "RestrictPublicBuckets": false
    }
}

If any of these values is false, public access to the bucket is not fully blocked. Use this short command to remediate it:

aws s3api put-public-access-block \
  --region REGION \
  --bucket BUCKET_NAME \
  --public-access-block-configuration BlockPublicAcls=true,IgnorePublicAcls=true,BlockPublicPolicy=true,RestrictPublicBuckets=true
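To audit every bucket at once, the two commands above can be glued together with a short script. The following Python sketch (the helper name and sample input are ours, not part of any AWS tooling) parses the JSON printed by get-public-access-block and lists the settings that still need to be enabled:

```python
import json

# The four settings that must all be true for public access to be fully blocked.
REQUIRED_SETTINGS = (
    "BlockPublicAcls",
    "IgnorePublicAcls",
    "BlockPublicPolicy",
    "RestrictPublicBuckets",
)

def missing_public_access_blocks(cli_output: str) -> list:
    """Return the settings that are absent or false in the CLI JSON output."""
    config = json.loads(cli_output)["PublicAccessBlockConfiguration"]
    return [name for name in REQUIRED_SETTINGS if not config.get(name, False)]

# Sample output, matching the non-compliant configuration shown above.
example = """{"PublicAccessBlockConfiguration": {
    "BlockPublicAcls": false,
    "IgnorePublicAcls": false,
    "BlockPublicPolicy": false,
    "RestrictPublicBuckets": false}}"""
print(missing_public_access_blocks(example))
```

Any bucket for which the function returns a non-empty list is a candidate for the put-public-access-block remediation command.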

10.- Enable server-side encryption on S3 buckets 🟥🟥🟥

For an added layer of security for your sensitive data in S3 buckets, you should configure your buckets with server-side encryption to protect your data at rest. Amazon S3 encrypts each object with a unique key. As an additional safeguard, Amazon S3 encrypts the key itself with a root key that it rotates regularly. Amazon S3 server-side encryption uses one of the strongest block ciphers available to encrypt your data, 256-bit Advanced Encryption Standard (AES-256).


List all existing S3 buckets available in your AWS account:

aws s3api list-buckets --query 'Buckets[*].Name'

Now, use the names of the S3 buckets returned at the previous step as identifiers to retrieve their Default Encryption feature status:

aws s3api get-bucket-encryption --bucket BUCKET_NAME

The command output should return the requested feature configuration details. If the get-bucket-encryption command output returns an error message, the default encryption is not currently enabled, and therefore the selected S3 bucket does not automatically encrypt all objects when stored in Amazon S3.

Repeat this procedure for all your S3 buckets.
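Because get-bucket-encryption reports an unencrypted bucket through an error rather than through normal output, an audit script has to treat the error text itself as the signal. The Python sketch below (function name and sample strings are ours) classifies a bucket from either form of the command's output:

```python
import json

def encryption_status(cli_output: str) -> str:
    """Classify a bucket from `aws s3api get-bucket-encryption` output.

    The CLI raises ServerSideEncryptionConfigurationNotFoundError when the
    bucket has no default-encryption configuration at all.
    """
    if "ServerSideEncryptionConfigurationNotFoundError" in cli_output:
        return "NOT ENCRYPTED"
    rules = json.loads(cli_output)["ServerSideEncryptionConfiguration"]["Rules"]
    algorithm = rules[0]["ApplyServerSideEncryptionByDefault"]["SSEAlgorithm"]
    return "ENCRYPTED (" + algorithm + ")"

# Sample successful output for a bucket with SSE-S3 default encryption.
ok = ('{"ServerSideEncryptionConfiguration": {"Rules": '
      '[{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]}}')
print(encryption_status(ok))  # ENCRYPTED (AES256)
```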

11.- Enable S3 Block Public Access setting at the bucket level 🟨🟨

As noted above, the S3 public access block can be applied across an entire AWS account or at the individual S3 bucket level. Public access is granted to buckets and objects through access control lists (ACLs), bucket policies, or both.


Unless you intend for your S3 buckets to be publicly accessible, which you probably shouldn’t, you should also enable the Amazon S3 Block Public Access feature at the individual bucket level.

You can use this Cloud Custodian rule to detect S3 buckets that are publicly accessible:

- name: buckets-public-access-block
  description: Amazon S3 provides Block public access (bucket settings) and Block public access (account settings) to help you manage public access to Amazon S3 resources. By default, S3 buckets and objects are created with public access disabled. However, an IAM principal with sufficient S3 permissions can enable public access at the bucket and/or object level. While enabled, Block public access (bucket settings) prevents an individual bucket, and its contained objects, from becoming publicly accessible. Similarly, Block public access (account settings) prevents all buckets, and contained objects, from becoming publicly accessible across the entire account.
  resource: s3
  filters:
    - or:
      - type: check-public-block
        BlockPublicAcls: false
      - type: check-public-block
        BlockPublicPolicy: false
      - type: check-public-block
        IgnorePublicAcls: false
      - type: check-public-block
        RestrictPublicBuckets: false

AWS CloudTrail

AWS CloudTrail is an AWS service that helps you enable governance, compliance, and operational and risk auditing of your AWS account. Actions taken by a user, role, or an AWS service are recorded as events in CloudTrail. Events include actions taken in the AWS Management Console, AWS Command Line Interface, and AWS SDKs and APIs.

The following section will help you configure CloudTrail to monitor your infrastructure across all your regions.

12.- Enable and configure CloudTrail with at least one multi-Region trail 🟥🟥🟥

CloudTrail provides a history of AWS API calls for an account, including API calls made from the AWS Management Console, AWS SDKs, and command line tools. The history also includes API calls from higher-level AWS services, such as AWS CloudFormation.


The AWS API call history produced by CloudTrail enables security analysis, resource change tracking, and compliance auditing. Multi-Region trails also provide the following benefits.

  • A multi-Region trail helps to detect unexpected activity occurring in otherwise unused Regions.
  • A multi-Region trail ensures that global service event logging is enabled for a trail by default. Global service event logging records events generated by AWS global services.
  • For a multi-Region trail, management events for all read and write operations ensure that CloudTrail records management operations on all of an AWS account’s resources.

By default, CloudTrail trails that are created using the AWS Management Console are multi-Region trails.

List all trails available in the selected AWS region:

aws cloudtrail describe-trails

The output lists each AWS CloudTrail trail along with its configuration details. If the IsMultiRegionTrail parameter value is false, the selected trail is not currently enabled for all AWS regions:

{
    "trailList": [
        {
            "IncludeGlobalServiceEvents": true,
            "Name": "ExampleTrail",
            "TrailARN": "arn:aws:cloudtrail:us-east-1:123456789012:trail/ExampleTrail",
            "LogFileValidationEnabled": false,
            "IsMultiRegionTrail": false,
            "S3BucketName": "ExampleLogging",
            "HomeRegion": "us-east-1"
        }
    ]
}

Review all of your trails and make sure at least one is multi-Region.
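This check is easy to automate. The following Python sketch (the helper name is ours) confirms that at least one trail in the describe-trails output is multi-Region:

```python
import json

def has_multi_region_trail(cli_output: str) -> bool:
    """Return True if at least one trail in `describe-trails` output is multi-Region."""
    trails = json.loads(cli_output).get("trailList", [])
    return any(trail.get("IsMultiRegionTrail") for trail in trails)

# Sample matching the non-compliant output shown above.
example = '{"trailList": [{"Name": "ExampleTrail", "IsMultiRegionTrail": false}]}'
print(has_multi_region_trail(example))  # False
```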

13.- Enable encryption at rest with CloudTrail 🟨🟨

Check whether CloudTrail is configured to use server-side encryption (SSE) with an AWS Key Management Service customer master key (CMK).

The check passes if the KmsKeyId is defined. For an added layer of security for your sensitive CloudTrail log files, you should use server-side encryption with AWS KMS–managed keys (SSE-KMS) for your CloudTrail log files for encryption at rest. Note that by default, the log files delivered by CloudTrail to your buckets are encrypted by Amazon server-side encryption with Amazon S3-managed encryption keys (SSE-S3).

You can check that the logs are encrypted with the following Cloud Custodian rule:

- name: cloudtrail-logs-encrypted-at-rest
  description: AWS CloudTrail is a web service that records AWS API calls for an account and makes those logs available to users and resources in accordance with IAM policies. AWS Key Management Service (KMS) is a managed service that helps create and control the encryption keys used to encrypt account data, and uses Hardware Security Modules (HSMs) to protect the security of encryption keys. CloudTrail logs can be configured to leverage server side encryption (SSE) and KMS customer created master keys (CMK) to further protect CloudTrail logs. It is recommended that CloudTrail be configured to use SSE-KMS.
  resource: cloudtrail
  filters:
    - type: value
      key: KmsKeyId
      value: absent

You can remediate it using the AWS Console like this:

  1. Sign in to the AWS Management Console at https://console.aws.amazon.com/cloudtrail/.
  2. In the left navigation panel, select Trails.
  3. Under the Name column, select the trail name that you need to update.
  4. Click the pencil icon next to the S3 section to edit the trail bucket configuration.
  5. Under S3 bucket* click Advanced.
  6. Select Yes next to Encrypt log files to encrypt your log files with SSE-KMS using a Customer Master Key (CMK).
  7. Select Yes next to Create a new KMS key to create a new CMK and enter a name for it, or otherwise select No to use an existing CMK encryption key available in the region.
  8. Click Save to enable SSE-KMS encryption.

14.- Enable CloudTrail log file validation 🟨🟨

CloudTrail log file validation creates a digitally signed digest file that contains a hash of each log that CloudTrail writes to Amazon S3. You can use these digest files to determine whether a log file was changed, deleted, or unchanged after CloudTrail delivered the log.

It is recommended that you enable file validation on all trails. Log file validation provides additional integrity checks of CloudTrail logs.

To check this in the AWS Console proceed as follows:

  1. Sign in to the AWS Management Console at https://console.aws.amazon.com/cloudtrail/.
  2. In the left navigation panel, select Trails.
  3. Under the Name column, select the trail name that you need to examine.
  4. Under the S3 section, check the Enable log file validation status. If the feature is set to No, the selected trail does not have log file integrity validation enabled. If this is the case, fix it:
    1. Click the pencil icon next to the S3 section to edit the trail bucket configuration.
    2. Under S3 bucket* click Advanced and search for the Enable log file validation configuration status.
    3. Select Yes to enable log file validation, and then click Save.
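The same property is visible in the describe-trails output as LogFileValidationEnabled, so this check can also be scripted. A minimal Python sketch (the helper name is ours):

```python
import json

def trails_without_validation(cli_output: str) -> list:
    """List trails in `describe-trails` output lacking log file validation."""
    trails = json.loads(cli_output).get("trailList", [])
    return [t["Name"] for t in trails if not t.get("LogFileValidationEnabled")]

example = '{"trailList": [{"Name": "ExampleTrail", "LogFileValidationEnabled": false}]}'
print(trails_without_validation(example))  # ['ExampleTrail']
```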

Learn more about security best practices in AWS CloudTrail.


AWS Config

AWS Config provides a detailed view of the resources associated with your AWS account, including how they are configured, how they are related to one another, and how the configurations and their relationships have changed over time.

15.- Verify AWS Config is enabled 🟥🟥🟥

The AWS Config service performs configuration management of supported AWS resources in your account and delivers log files to you. The recorded information includes the configuration item (AWS resource), relationships between configuration items, and any configuration changes between resources.

aws security best practices config 768x397 1

It is recommended that you enable AWS Config in all Regions. The AWS configuration item history that AWS Config captures enables security analysis, resource change tracking, and compliance auditing.

Get the status of all configuration recorders and delivery channels created by the Config service in the selected region:

aws configservice get-status --region REGION

The output from the previous command shows the status of all AWS Config delivery channels and configuration recorders available. If AWS Config is not enabled, the lists for both configuration recorders and delivery channels are empty:

Configuration Recorders:

Delivery Channels:

Or, if the service was previously enabled but is now disabled, the status should be set to OFF:

Configuration Recorders:

name: default
recorder: OFF

Delivery Channels:

name: default
last stream delivery status: NOT_APPLICABLE
last history delivery status: SUCCESS
last snapshot delivery status: SUCCESS

To remediate this, after you enable AWS Config, configure it to record all resources.

  1. Open the AWS Config console at https://console.aws.amazon.com/config/.
  2. Select the Region to configure AWS Config in.
  3. If you haven’t used AWS Config before, see Getting Started in the AWS Config Developer Guide.
  4. Navigate to the Settings page from the menu, and do the following:
    1. Choose Edit.
    2. Under Resource types to record, select Record all resources supported in this region and Include global resources (e.g., AWS IAM resources).
    3. Under Data retention period, choose the default retention period for AWS Config data, or specify a custom retention period.
    4. Under AWS Config role, either choose Create AWS Config service-linked role or choose Choose a role from your account and then select the role to use.
    5. Under Amazon S3 bucket, specify the bucket to use or create a bucket and optionally include a prefix.
    6. Under Amazon SNS topic, select an Amazon SNS topic from your account or create one. For more information about Amazon SNS, see the Amazon Simple Notification Service Getting Started Guide.
  5. Choose Save.
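If you prefer JSON over the plain-text get-status report, the describe-configuration-recorder-status subcommand returns the same information in machine-readable form. The sketch below (the helper name is ours) evaluates that JSON for one region:

```python
import json

def config_recording_enabled(cli_output: str) -> bool:
    """Return True if at least one configuration recorder is actively recording.

    Expects the JSON printed by:
    aws configservice describe-configuration-recorder-status --region REGION
    """
    statuses = json.loads(cli_output).get("ConfigurationRecordersStatus", [])
    return any(status.get("recording") for status in statuses)

# Sample output for a region where the recorder exists but was turned off.
disabled = '{"ConfigurationRecordersStatus": [{"name": "default", "recording": false}]}'
print(config_recording_enabled(disabled))  # False
```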

To go deeper, follow the AWS security best practices for AWS Config.

Amazon EC2

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable computing capacity that you use to build and host your software systems. Therefore, EC2 is one of the core services of AWS and it is necessary to know the best security practices and how to secure EC2.

16.- Ensure attached EBS volumes are encrypted at rest 🟥🟥🟥

This check determines whether EBS volumes that are in an attached state are encrypted. To pass this check, EBS volumes must be in use and encrypted. If an EBS volume is not attached, it is not subject to this check.


For an added layer of security to your sensitive data in EBS volumes, you should enable EBS encryption at rest. Amazon EBS encryption offers a straightforward encryption solution for your EBS resources that doesn’t require you to build, maintain, and secure your own key management infrastructure. It uses KMS keys when creating encrypted volumes and snapshots.

Run the describe-volumes command to determine if your EC2 Elastic Block Store volume is encrypted:

aws ec2 describe-volumes \
  --filters Name=attachment.instance-id,Values=INSTANCE_ID

The command output should reveal the instance EBS volume encryption status (true for enabled, false for disabled).
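At scale it is easier to scan the describe-volumes output with a script than to eyeball it. A minimal Python sketch (the helper name and sample IDs are ours) that lists attached volumes that are still unencrypted:

```python
import json

def unencrypted_attached_volumes(cli_output: str) -> list:
    """List volume IDs from `describe-volumes` that are attached but not encrypted."""
    volumes = json.loads(cli_output).get("Volumes", [])
    return [
        volume["VolumeId"]
        for volume in volumes
        if volume.get("Attachments") and not volume.get("Encrypted", False)
    ]

example = """{"Volumes": [
  {"VolumeId": "vol-0aaa", "Encrypted": false,
   "Attachments": [{"InstanceId": "i-0aaa", "State": "attached"}]},
  {"VolumeId": "vol-0bbb", "Encrypted": true,
   "Attachments": [{"InstanceId": "i-0aaa", "State": "attached"}]}
]}"""
print(unencrypted_attached_volumes(example))  # ['vol-0aaa']
```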

There is no direct way to encrypt an existing unencrypted volume or snapshot. You can only encrypt a new volume or snapshot when you create it.

If you enable encryption by default, Amazon EBS encrypts the resulting new volume or snapshot by using your default key for Amazon EBS encryption. Even if you have not enabled encryption by default, you can enable encryption when you create an individual volume or snapshot. In both cases, you can override the default key for Amazon EBS encryption and choose a symmetric customer managed key.

17.- Enable VPC flow logging in all VPCs 🟩

With the VPC Flow Logs feature, you can capture information about the IP address traffic going to and from network interfaces in your VPC. After you create a flow log, you can view and retrieve its data in CloudWatch Logs. To reduce cost, you can also send your flow logs to Amazon S3.

It is recommended that you enable flow logging for packet rejects for VPCs. Flow logs provide visibility into network traffic that traverses the VPC and can detect anomalous traffic or provide insight during security workflows. By default, the record includes values for the different components of the IP address flow, including the source, destination, and protocol.

- name: flow-logs-enabled
  description: VPC Flow Logs is a feature that enables you to capture information about the IP traffic going to and from network interfaces in your VPC. After you've created a flow log, you can view and retrieve its data in Amazon CloudWatch Logs. It is recommended that VPC Flow Logs be enabled for packet 'Rejects' for VPCs.
  resource: vpc
  filters:
    - not:
        - type: flow-logs
          enabled: true
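If you are not using Cloud Custodian, a similar check can be scripted by cross-referencing the outputs of aws ec2 describe-vpcs and aws ec2 describe-flow-logs. A minimal Python sketch (the helper name is ours); note it only detects VPCs with no flow log at all, not whether rejected packets are being captured:

```python
import json

def vpcs_without_flow_logs(vpcs_json: str, flow_logs_json: str) -> list:
    """Return VPC IDs that have no flow log of any kind attached."""
    vpc_ids = {vpc["VpcId"] for vpc in json.loads(vpcs_json).get("Vpcs", [])}
    covered = {fl["ResourceId"] for fl in json.loads(flow_logs_json).get("FlowLogs", [])}
    return sorted(vpc_ids - covered)

vpcs = '{"Vpcs": [{"VpcId": "vpc-0aaa"}, {"VpcId": "vpc-0bbb"}]}'
logs = '{"FlowLogs": [{"ResourceId": "vpc-0bbb"}]}'
print(vpcs_without_flow_logs(vpcs, logs))  # ['vpc-0aaa']
```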

18.- Confirm the VPC default security group does not allow inbound and outbound traffic 🟩

The rules for the default security group allow all outbound and inbound traffic from network interfaces (and their associated instances) that are assigned to the same security group.

We do not recommend using the default security group. Because the default security group cannot be deleted, you should change the default security group rules setting to restrict inbound and outbound traffic. This prevents unintended traffic if the default security group is accidentally configured for resources, such as EC2 instances.


Get the description of the default security group within the selected region:

aws ec2 describe-security-groups \
  --region REGION \
  --filters Name=group-name,Values='default' \
  --output table \
  --query 'SecurityGroups[*].IpPermissions[*].IpRanges'

If this command does not return any output, then the default security group does not allow public inbound traffic. Otherwise, it should return the inbound traffic source IPs defined, as in the following example:

------------------------
|DescribeSecurityGroups|
+----------------------+
|        CidrIp        |
+----------------------+
|  0.0.0.0/0           |
|  ::/0                |
|  1.2.3.4/32          |
|  1.2.3.5/32          |
+----------------------+

If the IPs returned are 0.0.0.0/0 or ::/0, then the selected default security group is allowing public inbound traffic. We’ve explained previously what the real threats are when securing SSH on EC2.

To remediate this issue, create new security groups and assign those security groups to your resources. To prevent the default security groups from being used, remove their inbound and outbound rules.
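To script the detection step, the JSON form of the describe-security-groups output can be inspected directly. A minimal Python sketch (the helper name is ours) that flags a default security group open to the world:

```python
import json

OPEN_CIDRS = {"0.0.0.0/0", "::/0"}

def default_sg_open_to_world(cli_output: str) -> bool:
    """Return True if a default security group allows inbound traffic from anywhere."""
    for group in json.loads(cli_output).get("SecurityGroups", []):
        if group.get("GroupName") != "default":
            continue
        for permission in group.get("IpPermissions", []):
            ranges = [r.get("CidrIp") for r in permission.get("IpRanges", [])]
            ranges += [r.get("CidrIpv6") for r in permission.get("Ipv6Ranges", [])]
            if OPEN_CIDRS.intersection(ranges):
                return True
    return False

example = ('{"SecurityGroups": [{"GroupName": "default", "IpPermissions": '
           '[{"IpRanges": [{"CidrIp": "0.0.0.0/0"}], "Ipv6Ranges": []}]}]}')
print(default_sg_open_to_world(example))  # True
```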

19.- Enable EBS default encryption 🟥🟥🟥

When encryption is enabled for your account, Amazon EBS volumes and snapshot copies are encrypted at rest. This adds an additional layer of protection for your data. For more information, see Encryption by default in the Amazon EC2 User Guide for Linux Instances.

Note that the following instance types do not support encryption: R1, C1, and M1.

Run the get-ebs-encryption-by-default command to know whether EBS encryption by default is enabled for your AWS cloud account in the selected region:

aws ec2 get-ebs-encryption-by-default \
  --region REGION \
  --query 'EbsEncryptionByDefault'

If the command returns false, the encryption of data at rest by default for new EBS volumes is not enabled in the selected AWS region. Fix it with the following command:

aws ec2 enable-ebs-encryption-by-default \
  --region REGION
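When you operate in many regions, it helps to collect the per-region answers and list the regions that still need fixing. A minimal Python sketch (the helper and the region-to-output mapping are our own convention); it assumes the check was run with --query 'EbsEncryptionByDefault', which prints the bare literal true or false:

```python
def regions_needing_default_encryption(results: dict) -> list:
    """Given {region: raw CLI output}, list regions where the command printed
    anything other than the literal `true`."""
    return sorted(
        region for region, output in results.items() if output.strip() != "true"
    )

results = {"us-east-1": "true", "eu-west-1": "false"}
print(regions_needing_default_encryption(results))  # ['eu-west-1']
```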

Conclusion

Going all cloud opens a new world of possibilities, but it also opens a wide door to attack vectors. Each new AWS service you leverage has its own set of potential dangers you need to be aware of and well prepared for.

Luckily, cloud security partners like Renova Cloud, an AWS Consulting Partner with a focus on security, can guide you through these best practices and help you meet your compliance requirements.

Learn more about security best practices in AWS part 1 here