26 AWS Security Best Practices to Adopt in Production – Part 3

One of the most important pillars of a well-architected framework is security. Thus, it is important to follow these AWS security best practices to prevent avoidable security incidents.

There are many things you must set up if you want your solution to be operative, secure, reliable, performant, and cost effective. And, first things first, the best time to do that is now – right from the beginning, as you design and engineer your solution. Continuing from Part 1 and Part 2, here are some more AWS security best practices to adopt in Production.

AWS Database Migration Service (DMS)

AWS Database Migration Service (AWS DMS) is a cloud service that makes it easy to migrate relational databases, data warehouses, NoSQL databases, and other types of data stores. You can use AWS DMS to migrate your data into the AWS Cloud or between combinations of cloud and on-premises setups.

20.- Verify AWS Database Migration Service replication instances are not public 🟥🟥🟥

Ensure that your AWS Database Migration Service (AWS DMS) replication instances are not publicly accessible from the Internet in order to avoid exposing private data and minimize security risks. A DMS replication instance should have a private IP address and the Publicly Accessible feature disabled when both the source and the target databases are in the same network that is connected to the instance's VPC through a VPN, VPC peering connection, or an AWS Direct Connect dedicated connection.

  1. Sign in to the AWS Management Console at https://console.aws.amazon.com/dms/.
  2. In the left navigation panel, choose Replication instances.
  3. Select the DMS replication instance that you want to examine to open the panel with the resource configuration details.
  4. Select the Overview tab from the dashboard bottom panel and check the Publicly accessible configuration attribute value. If the attribute value is set to Yes, the selected AWS DMS replication instance is accessible outside the Virtual Private Cloud (VPC) and can be exposed to security risks (a CLI check is sketched after this list). To fix it, do the following:
    1. Click the Create replication instance button from the dashboard top menu to initiate the launch process.
    2. On Create replication instance page, perform the following:
      1. Uncheck Publicly accessible checkbox to disable the public access to the new replication instance. If this setting is disabled, Amazon DMS will not assign a public IP address to the instance at creation and you will not be able to connect to the source/target databases outside the VPC.
      2. Provide a unique name for the new replication instance within the Name box, then configure the rest of the instance settings to match the configuration of the old instance.
      3. Click Create replication instance to launch your new Amazon DMS instance.
    3. Update your database migration plan by developing a new migration task to include the newly created AWS DMS replication instance.
    4. To stop adding charges for the old replication instance:
      1. Select the old DMS instance, then click the Delete button from the dashboard top menu.
      2. Within the Delete replication instance dialog box, review the instance details then click Delete to terminate the selected DMS resource.
  5. Repeat step Nos. 3 and 4 for each AWS DMS replication instance provisioned in the selected region.
  6. Change the region from the console navigation bar and repeat the process for all the other regions.
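If you prefer auditing from the command line, here is a minimal sketch (assuming the AWS CLI is installed and configured) that lists each replication instance in a region along with its public accessibility flag:

aws dms describe-replication-instances \
  --region REGION \
  --output table \
  --query 'ReplicationInstances[*].[ReplicationInstanceIdentifier,PubliclyAccessible]'

Any instance that reports True for PubliclyAccessible should be recreated as described above.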

Learn more about AWS security best practices for AWS Database Migration Service.

Amazon Elastic Block Store (EBS)

Amazon Elastic Block Store (Amazon EBS) provides block level storage volumes for use with EC2 instances. EBS volumes behave like raw, unformatted block devices. You can mount these volumes as devices on your instances. EBS volumes that are attached to an instance are exposed as storage volumes that persist independently from the life of the instance. You can create a file system on top of these volumes, or use them in any way you would use a block device (such as a hard drive).

You can dynamically change the configuration of a volume attached to an instance.

21.- Ensure Amazon EBS snapshots are not public or restorable by anyone 🟥🟥🟥

EBS snapshots are used to back up the data on your EBS volumes to Amazon S3 at a specific point in time. You can use the snapshots to restore previous states of EBS volumes. It is rarely acceptable to share a snapshot with the public. Typically, the decision to share a snapshot publicly was made in error or without a complete understanding of the implications. This check helps ensure that all such sharing was fully planned and intentional.

Get the list of all EBS volume snapshots:

aws ec2 describe-snapshots \
  --region REGION \
  --owner-ids ACCOUNT_ID \
  --filters Name=status,Values=completed \
  --output table \
  --query 'Snapshots[*].SnapshotId'

For each snapshot, check its createVolumePermission attribute:

aws ec2 describe-snapshot-attribute \
  --region REGION \
  --snapshot-id SNAPSHOT_ID \
  --attribute createVolumePermission \
  --query 'CreateVolumePermissions[]'

The output from the previous command returns information about the permissions for creating EBS volumes from the selected snapshot:

{
    "Group": "all"
}

If the command output shows "Group": "all", the snapshot is accessible to all AWS accounts and users. If that is the case, remove the public permission by running:

aws ec2 modify-snapshot-attribute \
  --region REGION \
  --snapshot-id SNAPSHOT_ID \
  --attribute createVolumePermission \
  --operation-type remove \
  --group-names all
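To verify the change, re-run the describe-snapshot-attribute command from above. Once the public permission has been removed, it should return an empty list ([]).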

Amazon OpenSearch Service

Amazon OpenSearch Service is a managed service that makes it easy to deploy, operate, and scale OpenSearch clusters in the AWS Cloud. Amazon OpenSearch Service is the successor to Amazon Elasticsearch Service and supports OpenSearch and legacy Elasticsearch OSS (up to 7.10, the final open source version of the software). When you create a cluster, you have the option of which search engine to use.

22.- Ensure Elasticsearch domains have encryption at rest enabled 🟥🟥🟥

For an added layer of security for your sensitive data in OpenSearch, you should configure your OpenSearch domains to encrypt data at rest. The feature uses AWS KMS to store and manage your encryption keys, and performs the encryption with the Advanced Encryption Standard algorithm with 256-bit keys (AES-256).

List all Amazon OpenSearch domains currently available:

aws es list-domain-names --region REGION

Now determine whether the data-at-rest encryption feature is enabled with:

aws es describe-elasticsearch-domain \
  --region REGION \
  --domain-name DOMAIN_NAME \
  --query 'DomainStatus.EncryptionAtRestOptions'

If the Enabled flag is false, data-at-rest encryption is not enabled for the selected Amazon Elasticsearch domain. Fix it by creating a new domain with encryption at rest enabled, to which you will then migrate your data:

aws es create-elasticsearch-domain \
  --region REGION \
  --domain-name DOMAIN_NAME \
  --elasticsearch-version 5.5 \
  --elasticsearch-cluster-config InstanceType=m4.large.elasticsearch,InstanceCount=2 \
  --ebs-options EBSEnabled=true,VolumeType=standard,VolumeSize=200 \
  --access-policies file://source-domain-access-policy.json \
  --vpc-options SubnetIds=SUBNET_ID,SecurityGroupIds=SECURITY_GROUP_ID \
  --encryption-at-rest-options Enabled=true,KmsKeyId=KMS_KEY_ID

Once the new cluster is provisioned, upload the existing data (exported from the original cluster) to the newly created cluster.
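How you move the data depends on your setup and the Elasticsearch versions involved. One possible approach is a remote reindex into the new domain. The sketch below is illustrative only: the endpoint placeholders and index name are hypothetical, and the new domain must have network access to the old one and be allowed by its access policy:

curl -XPOST "https://NEW_DOMAIN_ENDPOINT/_reindex" \
  -H 'Content-Type: application/json' \
  -d '{
    "source": {
      "remote": { "host": "https://OLD_DOMAIN_ENDPOINT:443" },
      "index": "INDEX_NAME"
    },
    "dest": { "index": "INDEX_NAME" }
  }'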

After all the data is uploaded, it is safe to remove the unencrypted OpenSearch domain to stop incurring charges for the resource:

aws es delete-elasticsearch-domain \
  --region REGION \
  --domain-name DOMAIN_NAME

Amazon SageMaker

Amazon SageMaker is a fully-managed machine learning service. With Amazon SageMaker, data scientists and developers can quickly build and train machine learning models, and then deploy them into a production-ready hosted environment.

23.- Verify SageMaker notebook instances do not have direct internet access 🟨🟨

If you configure your SageMaker notebook instance without a VPC, then, by default, direct internet access is enabled on your instance. You should configure your instance with a VPC and change the default setting to Disable — Access the internet through a VPC.

To train or host models from a notebook, you need internet access. To enable internet access, make sure that your VPC has a NAT gateway and your security group allows outbound connections. To learn more about how to connect a notebook instance to resources in a VPC, see “Connect a notebook instance to resources in a VPC” in the Amazon SageMaker Developer Guide.

You should also ensure that access to your SageMaker configuration is limited to only authorized users. Restrict users’ IAM permissions to modify SageMaker settings and resources.

  1. Sign in to the AWS Management Console at https://console.aws.amazon.com/sagemaker/.
  2. In the navigation panel, under Notebook, choose Notebook instances.
  3. Select the SageMaker notebook instance that you want to examine and click on the instance name (link).
  4. On the selected instance configuration page, in the Network section, check for any VPC subnet IDs and security group IDs. If these network details are missing and the status "No custom VPC settings applied." is displayed instead, the notebook instance is not running inside a VPC and should be redeployed within one. If the instance is running inside a VPC, check the Direct internet access configuration attribute. If the attribute value is set to Enabled, the selected Amazon SageMaker notebook instance is publicly accessible.
  5. If the notebook has direct internet access enabled, fix it by recreating it with this CLI command:

aws sagemaker create-notebook-instance \
  --region REGION \
  --notebook-instance-name NOTEBOOK_INSTANCE_NAME \
  --instance-type INSTANCE_TYPE \
  --role-arn ROLE_ARN \
  --kms-key-id KMS_KEY_ID \
  --subnet-id SUBNET_ID \
  --security-group-ids SECURITY_GROUP_ID \
  --direct-internet-access Disabled
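Once the new notebook instance is up and your work has been migrated to it, you can stop incurring charges for the old, internet-facing instance. As a sketch (the instance name is a placeholder, and a notebook instance must be stopped before it can be deleted):

aws sagemaker stop-notebook-instance \
  --region REGION \
  --notebook-instance-name OLD_NOTEBOOK_INSTANCE_NAME

aws sagemaker delete-notebook-instance \
  --region REGION \
  --notebook-instance-name OLD_NOTEBOOK_INSTANCE_NAME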

AWS Lambda

With AWS Lambda, you can run code without provisioning or managing servers. You pay only for the compute time that you consume — there’s no charge when your code isn’t running. You can run code for virtually any type of application or backend service — all with zero administration.

Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.

It is also important to secure and audit the code you execute in your Lambda functions, as an insecure function could become the initial access vector for attackers.

24.- Use supported runtimes for Lambda functions 🟨🟨

This AWS security best practice recommends checking that the Lambda function settings for runtimes match the expected values set for the supported runtimes for each language. This control checks function settings for the following runtimes: nodejs16.x, nodejs14.x, nodejs12.x, python3.9, python3.8, python3.7, ruby2.7, java11, java8, java8.al2, go1.x, dotnetcore3.1, and dotnet6.

The underlying AWS Config rule ignores functions that have a package type of Image.

Lambda runtimes are built around a combination of operating system, programming language, and software libraries that are subject to maintenance and security updates. When a runtime component is no longer supported for security updates, Lambda deprecates the runtime. Even though you cannot create functions that use the deprecated runtime, the function is still available to process invocation events. Make sure that your Lambda functions are current and do not use out-of-date runtime environments.

Get the names of all Amazon Lambda functions available in the selected AWS cloud region:

aws lambda list-functions \
  --region REGION \
  --output table \
  --query 'Functions[*].FunctionName'

Now examine the runtime information available for each function:

aws lambda get-function-configuration \
  --region REGION \
  --function-name FUNCTION_NAME \
  --query 'Runtime'

Compare the value returned against the current list of Lambda runtimes supported by AWS, as well as the end-of-support schedule listed in the AWS documentation.
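To scan every function in one pass instead of checking them one at a time, a small shell loop along these lines can help (a sketch, assuming a bash-compatible shell and default CLI credentials):

for fn in $(aws lambda list-functions --region REGION \
    --query 'Functions[*].FunctionName' --output text); do
  # Print each function name next to its runtime for manual review
  echo "$fn: $(aws lambda get-function-configuration --region REGION \
    --function-name "$fn" --query 'Runtime' --output text)"
done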

If the runtime is deprecated or approaching its end of support, update the function to a supported runtime version. For example:

aws lambda update-function-configuration \
  --region REGION \
  --function-name FUNCTION_NAME \
  --runtime "nodejs16.x"

AWS Key Management Service (AWS KMS)

AWS Key Management Service (AWS KMS) is an encryption and key management service scaled for the cloud. AWS KMS keys and functionality are used by other AWS services, and you can use them to protect data in your own applications that use AWS.

25.- Do not unintentionally delete AWS KMS keys 🟨🟨

KMS keys cannot be recovered once deleted. Data encrypted under a KMS key is also permanently unrecoverable if the KMS key is deleted. If meaningful data has been encrypted under a KMS key scheduled for deletion, consider decrypting the data or re-encrypting the data under a new KMS key unless you are intentionally performing a cryptographic erasure.

When a KMS key is scheduled for deletion, a mandatory waiting period is enforced to allow time to reverse the deletion if it was scheduled in error. The default waiting period is 30 days, but it can be reduced to as short as seven days when the KMS key is scheduled for deletion. During the waiting period, the scheduled deletion can be canceled and the KMS key will not be deleted.

List all Customer Master Keys (CMKs) available in the selected AWS region:

aws kms list-keys --region REGION

Run the describe-key command for each CMK to identify any keys scheduled for deletion:

aws kms describe-key --key-id KEY_ID

The output of this command shows the selected key's metadata. If the KeyState value is set to PendingDeletion, the key is scheduled for deletion. If that is not what you actually want (the most common case), unschedule the deletion with:

aws kms cancel-key-deletion --key-id KEY_ID
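Note that canceling the deletion leaves the key in the Disabled state, so re-enable it before using it again:

aws kms enable-key --key-id KEY_ID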

Amazon GuardDuty

Amazon GuardDuty is a continuous security monitoring service. Amazon GuardDuty can help to identify unexpected and potentially unauthorized or malicious activity in your AWS environment.

26.- Enable GuardDuty 🟨🟨

It is highly recommended that you enable GuardDuty in all supported AWS Regions. Doing so allows GuardDuty to generate findings about unauthorized or unusual activity, even in Regions that you do not actively use. This also allows GuardDuty to monitor CloudTrail events for global AWS services, such as IAM.

List the IDs of all the existing Amazon GuardDuty detectors. A detector is an object that represents the Amazon GuardDuty service; one must be created in order for GuardDuty to become operational:

aws guardduty list-detectors \
  --region REGION \
  --query 'DetectorIds'

If the list-detectors command output returns an empty array, there are no GuardDuty detectors available, and the Amazon GuardDuty service is not enabled within your AWS account. In that case, create a detector with the following command:

aws guardduty create-detector \
  --region REGION \
  --enable

Once the detector is enabled, it will start to pull and analyze independent streams of data from AWS CloudTrail, VPC flow logs, and DNS logs in order to generate findings.
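Once findings start arriving, you can also review them from the CLI. A quick sketch, using the detector ID returned by the earlier list-detectors command:

aws guardduty list-findings \
  --region REGION \
  --detector-id DETECTOR_ID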

AWS compliance standards & benchmarks

Setting up and maintaining your AWS infrastructure to keep it secure is a never-ending effort that will require a lot of time.

For this, you will be better off following the compliance standard(s) relevant to your industry, since they provide all the requirements needed to effectively secure your cloud environment.

Because of the ongoing nature of securing your environment and complying with a security standard, you might also want to recurrently run policies, such as CIS Amazon Web Services Foundations Benchmark, which will audit your system and report any non-conformity it finds.
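One way to automate this on AWS is through AWS Security Hub, which runs the CIS checks on a recurring schedule. As a sketch (assuming Security Hub is already enabled in the region; the standard ARN below refers to CIS AWS Foundations Benchmark v1.2.0):

aws securityhub batch-enable-standards \
  --region REGION \
  --standards-subscription-requests \
    'StandardsArn=arn:aws:securityhub:::ruleset/cis-aws-foundations-benchmark/v/1.2.0'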

Conclusion

Going all cloud opens a new world of possibilities, but it also opens a wide door to attack vectors. Each new AWS service you leverage has its own set of potential dangers you need to be aware of and well prepared for.

Luckily, cloud native security tools like Renova Cloud, an AWS Consulting Partner with a focus on Security, can guide you through these best practices, and help you meet your compliance requirements.

Learn more about AWS security best practices in Part 1 and Part 2.