5 Crucial Best Practices for Security in AWS
Jason Deck, Vice President, Product Management, Hyperscale Cloud
Monday, June 05, 2017

There’s a common misconception in the IT world that while the public cloud wins on flexibility, scalability, and cost, it falls short on security. But organizations that fail to achieve security in the cloud do so not for lack of resources, but for lack of a change in approach.

To create and manage an effective security infrastructure in AWS or any public cloud, you must introduce security early in the development process – not retroactively. Today we are going to talk about AWS. Where do you start? What is there to know? Quite a lot. But the best place to start, as with anything, is with best practices.

Here are a few of ours:
 

  1. Take full advantage of native cloud security resources
    In AWS, there are a host of native security tools to help you achieve a secure environment. And with standard compliance frameworks, such as the ISO/IEC 27000 series, and Amazon Machine Images (AMIs) preconfigured with varying compliance elements built in, there is a lot of front-end work already done for you.

But that is only one small piece of the puzzle. You still must learn how to properly leverage the tools:

  • Trusted Advisor is a resource in AWS that performs checks and provides detailed reports of your environments regarding security, cost optimization, performance, and fault tolerance. For security, you can set up notifications for things like unrotated access keys, rules that allow unrestricted access, and other potential weak points. You can then use this information to identify lapses in permissions and any policy-related vulnerabilities.
  • CloudWatch is a monitoring service for your AWS resources and applications that allows you to collect and monitor CloudTrail log files, set alarms, and automate responses to specific activity within your environment. It’s crucial you enable individual CloudWatch monitoring for all your resources so you can address security issues quickly and at the source.
  • Identity and Access Management (IAM) is a tool for managing permissions and credentials within your AWS infrastructure. Using groups, users, and roles, you can set policies and controls based on departmental, individual, and programmatic needs. By using features like roles, which allow you to conditionally delegate read or write permissions without sharing credentials, and Access Advisor, which gives holistic breakdowns of who has access to what, you can gain full visibility and control of your AWS security infrastructure.
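
As a concrete illustration of the CloudWatch/CloudTrail pairing, alarms are typically driven by metric filters over CloudTrail logs. The sketch below builds such a filter and alarm as plain data; the log group, namespace, and SNS topic names are illustrative assumptions, and in practice you would pass these dictionaries to the corresponding `boto3` calls (`put_metric_filter`, `put_metric_alarm`).

```python
import json

# Hypothetical metric filter: count CloudTrail events whose error code
# indicates an unauthorized or denied API call.
filter_pattern = (
    '{ ($.errorCode = "*UnauthorizedOperation") || '
    '($.errorCode = "AccessDenied*") }'
)

metric_filter = {
    "logGroupName": "CloudTrail/DefaultLogGroup",   # assumed log group
    "filterName": "UnauthorizedAPICalls",
    "filterPattern": filter_pattern,
    "metricTransformations": [{
        "metricName": "UnauthorizedAPICallCount",
        "metricNamespace": "CloudTrailMetrics",     # assumed namespace
        "metricValue": "1",
    }],
}

# The alarm that would watch the metric and notify a security topic.
alarm = {
    "AlarmName": "unauthorized-api-calls",
    "MetricName": "UnauthorizedAPICallCount",
    "Namespace": "CloudTrailMetrics",
    "Statistic": "Sum",
    "Period": 300,                 # evaluate in 5-minute windows
    "EvaluationPeriods": 1,
    "Threshold": 1,                # a single denied call triggers it
    "ComparisonOperator": "GreaterThanOrEqualToThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:123456789012:security-alerts"],
}

print(json.dumps(metric_filter, indent=2))
```

The point is that the alerting rule itself is just data, so it can be reviewed and version-controlled like any other part of your environment.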

 

  2. Manage access credentials around “least privilege” ruthlessly
    Creating strong access policies and controls should be job one when it comes to constructing a sound security framework in AWS. And if you do not enforce them strictly around least privilege, your chances of achieving a secure environment go way down.

With AWS, using IAM, you can create policies that restrict access to AWS resources based on factors like time of day, originating IP address, SSL requirements, and functions. The best way to leverage this capability is by applying these conditional policies to roles.

Unlike users or groups, roles can be assumed by any entity (user, group, application, or AWS resource) based on the conditions you set, via secure calls through the API or SDK. When a user, group, or application’s credentials align with a role’s stipulated conditions, access is programmatically granted. If conditions are not met, the entity does not gain access.
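A conditional policy of this kind is just a JSON document. The sketch below builds one as a minimal example of least privilege: read-only S3 access, granted only over SSL and only from a given IP range. The bucket name and CIDR are illustrative assumptions.

```python
import json

# Hypothetical least-privilege policy for attachment to a role:
# read-only access to one bucket, conditional on SSL transport and
# a corporate source IP range (both values are assumptions).
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-reports",
            "arn:aws:s3:::example-reports/*",
        ],
        "Condition": {
            "Bool": {"aws:SecureTransport": "true"},
            "IpAddress": {"aws:SourceIp": "203.0.113.0/24"},
        },
    }],
}

print(json.dumps(policy, indent=2))
```

If either condition fails, the statement simply does not apply and access is denied by default, which is exactly the programmatic grant-or-deny behavior described above.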

It is critical that you avoid long-term credentials when creating users and groups, and favor roles instead. Other native tools like Access Advisor, the IAM policy simulator, and AWS managed policies – which you can attach and manage via the AWS Management Console, AWS CLI, or the IAM API – give you the flexibility to further optimize your policies around least privilege.

 

  3. Implement unique policies at the resource, OS, and network levels
    Under the AWS shared responsibility model, AWS is responsible for the security of its global infrastructure and physical hardware, while you are responsible for ensuring the security of your cloud environments. To do that, you must exploit the granularity of AWS security, implementing unique controls at each major level of your infrastructure.

At the resource level, IAM allows you to apply unique policies to each individual resource in AWS. Roles can be used by applications running on Elastic Compute Cloud (EC2) instances (virtual servers) to gain access programmatically; developers can scope them on an instance-by-instance basis, with temporary security credentials for each individual resource (S3, DynamoDB, RDS, etc.), or on an account basis (dev, test, or prod environments).
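An instance role is built from two such documents: a trust policy saying who may assume the role, and a permissions policy saying what it may touch. The sketch below shows both as plain data; the table name and account id are illustrative assumptions.

```python
import json

# Hypothetical trust policy for an EC2 instance role: the EC2 service
# may assume the role, so applications on the instance receive
# temporary credentials instead of long-term access keys.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Matching permissions policy, scoped to a single DynamoDB table
# (table name and account id are assumptions for the example).
permissions = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/orders",
    }],
}

print(json.dumps(trust_policy, indent=2))
```

Because the role grants only two read actions on one table, an application that assumes it can do nothing else, which is the per-resource granularity described above.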

In AWS, you will need a different set of credentials to access the operating systems of your EC2 instances. A centralized directory service is required so administrators can control user or group access to EC2 operating systems in alignment with your organizational security requirements. You can use AWS Directory Service for this or your own self-managed directory, like Microsoft Active Directory, which can be synced to AWS via AD Connector.

At the network level, security measures depend on encryption, so keys and their safe storage are mission-critical. With server-side encryption, AWS CloudHSM allows you to manage the keys and API access of your entire cryptographic domain, to which AWS itself has no access. Using SafeNet HSMs, CloudHSM supports core functions like database encryption (EBS, S3, Glacier, and RDS), PKI, and Digital Rights Management (DRM), as well as authentication, authorization, transaction processing, and document signing. With it, you can configure EC2 instances to restrict applications to only the API calls you allow. You can also use AWS security groups (virtual firewalls) to strengthen enforcement of network level controls.
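Security group rules are also easy to reason about as data. A minimal sketch, assuming hypothetical CIDRs: a group that admits HTTPS from anywhere and SSH only from a bastion subnet, plus a small helper to check what a rule set actually allows.

```python
# Hypothetical security group ingress rules, shaped like the structures
# AWS APIs use: HTTPS open to the world, SSH limited to a bastion
# subnet (both CIDRs are illustrative assumptions).
ingress_rules = [
    {"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
     "IpRanges": [{"CidrIp": "10.0.1.0/24"}]},
]

def allows(rules, port, cidr):
    """Return True if any rule admits the given port from the given CIDR."""
    return any(
        r["FromPort"] <= port <= r["ToPort"]
        and any(ip["CidrIp"] == cidr for ip in r["IpRanges"])
        for r in rules
    )

print(allows(ingress_rules, 443, "0.0.0.0/0"))  # → True
print(allows(ingress_rules, 22, "0.0.0.0/0"))   # → False
```

Because security groups deny by default, everything not listed here is blocked, which is what makes them effective as a network-level enforcement layer.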

 

  4. Leverage AMIs to standardize policies and mitigate risk
    Amazon Machine Images (AMIs) are virtual appliances with all the information required to instantiate a given server (launch an instance). AMIs can be configured to launch instances for any kind of workload (web, mobile, DBs, dev/test/prod environments, etc.), and can be provisioned with individual, custom security measures.

You can select hardened, public AMIs from the AWS Management Console – which address common security needs – or, you can create your own custom AMIs, architecting your security controls into them as code. Doing this will allow you to begin automating policy enforcement within your infrastructure.

You can also use native monitoring tools like CloudWatch, or third-party tools like Datadog, which provide robust alerting and auditing of your cloud environment, and apply their findings to see which controls in your framework need refining, continuing to standardize policies accordingly.

As you refine your policies and controls in AWS, you can either “bake” them into your own private AMIs, or find public AMIs that align with your policies and instantiate them into your environment, mitigating risk and greatly reducing the need for maintenance, patching or manual provisioning. The more you harden the security controls of your AMIs, the less human involvement is needed to maintain a secure environment.
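One simple way to express “baked-in” controls as code is to generate the provisioning script used when building a hardened image. The sketch below is an illustrative assumption, not a complete hardening baseline; the specific commands (and the presence of the CloudWatch agent) are examples only.

```python
# Hypothetical sketch of baking controls into an image as code: emit the
# cloud-init user-data run once when the base instance for a custom AMI
# is built. The steps shown are illustrative, not a full baseline.
def hardening_user_data() -> str:
    steps = [
        "#!/bin/bash",
        "set -euo pipefail",
        # Disable password-based SSH logins; keys only.
        "sed -i 's/^#\\?PasswordAuthentication.*/PasswordAuthentication no/'"
        " /etc/ssh/sshd_config",
        # Apply pending security updates before the image is captured.
        "yum update -y --security",
        # Ship OS logs to CloudWatch via the agent (assumed preinstalled).
        "systemctl enable amazon-cloudwatch-agent",
    ]
    return "\n".join(steps) + "\n"

print(hardening_user_data())
```

Because the script lives in source control, every instance launched from the resulting AMI carries the same controls with no manual provisioning.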

 

  5. Constantly move toward adopting Security by Design
    As you make use of native AWS security tools to manage access, standardize policies and controls, and begin baking them into your AMIs, you will have taken the first key step toward Security by Design.

Security by Design (SbD), or Security Automation, is an approach to cloud security predicated on automating security controls and policies by formalizing infrastructure design. This is not an entirely new approach, but the public cloud has made it exponentially easier to implement. Since your entire cloud environment is software, you can “design”, or write, your security infrastructure as code, without having to retroactively provision or configure individual stacks or servers.

To achieve SbD, you must first standardize your build and update processes. Using AWS CloudFormation or third-party tools, you can orchestrate your infrastructure to automate provisioning, coding OS configurations into templates with a configuration management tool like Ansible or Chef. You can then put these templates and config manifests into a source code management tool like Bitbucket for version control.
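A minimal sketch of such a template, built here as plain data so the structure is visible (VPC id and CIDR are illustrative assumptions): one security group whose rules live in version control alongside the rest of the stack.

```python
import json

# Hypothetical minimal CloudFormation template expressed as a Python
# dict: a single security group admitting HTTPS only. The VPC id is an
# assumption; in practice this would be serialized to JSON or YAML and
# deployed with CloudFormation.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "WebSecurityGroup": {
            "Type": "AWS::EC2::SecurityGroup",
            "Properties": {
                "GroupDescription": "HTTPS only",
                "VpcId": "vpc-0abc1234",
                "SecurityGroupIngress": [{
                    "IpProtocol": "tcp",
                    "FromPort": 443,
                    "ToPort": 443,
                    "CidrIp": "0.0.0.0/0",
                }],
            },
        }
    },
}

print(json.dumps(template, indent=2))
```

Every change to the firewall rule is now a reviewable commit rather than a console click, which is the essence of writing security infrastructure as code.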

Begin integrating code testing as a part of your development pipeline, using tools like AWS CodeDeploy or Ansible to manage configuration updates. Be sure to implement scripts that run against your environments to further enforce configurations in production. As you refine configurations in your templatized infrastructure to better address failures like missed health checks or penetration tests, you can execute changes by simply tweaking code.
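An enforcement script of the kind mentioned above can be very small. A hedged sketch, using stubbed sample data in place of a live describe call: scan security-group rules and flag anything that exposes a port other than 443 to the whole internet.

```python
# Hypothetical enforcement check: scan security-group rules (here
# stubbed with sample data shaped like API output) and flag any group
# exposing a non-allow-listed port to 0.0.0.0/0.
sample_rules = [
    {"GroupId": "sg-web", "FromPort": 443, "ToPort": 443,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
    {"GroupId": "sg-db", "FromPort": 3306, "ToPort": 3306,
     "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},
]

def open_to_world(rules, allowed_ports=(443,)):
    """Return ids of groups with world-open ports outside the allow-list."""
    flagged = []
    for r in rules:
        world = any(ip["CidrIp"] == "0.0.0.0/0" for ip in r["IpRanges"])
        if world and r["FromPort"] not in allowed_ports:
            flagged.append(r["GroupId"])
    return flagged

print(open_to_world(sample_rules))  # → ['sg-db']
```

Run on a schedule against real environments, a check like this catches configuration drift between deployments, complementing the template-driven provisioning above.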

 

Follow best practices to achieve a secure environment

The more standardized and automated your infrastructure is, the more consistently you can enforce your security and compliance standards throughout your environment without admin involvement. And the more you abide by best practices, the easier achieving and maintaining a secure environment in the public cloud will become.

 

About the Author

Jason has more than 15 years of experience and leadership in the IT and telecommunications industries. His expertise in assisting clients in their transition from legacy systems to the cloud makes Jason an integral piece of Ensono’s growth in the IT managed services space. Prior to joining Ensono, Jason was the senior vice president of strategy at Logicworks, an enterprise cloud automation and managed service provider, where he was responsible for defining brand strategy and position in the market. Jason played a critical role in Logicworks’ significant growth from its Managed AWS business, and the company’s sale to Pamplona Capital Management in 2016. Before his time at Logicworks, he held sales, business development, marketing and alliance roles for Cisco Systems, Brocade Communications and Equinix. Jason holds a Bachelor of Arts degree from Columbia University in New York, in addition to an M.B.A. from the Kellogg School of Management at Northwestern University.