Amazon Redshift is a data warehouse product released by Amazon Web Services (AWS) in 2012, and it is one of the most popular products from this cloud computing market leader.
Amazon Redshift is a fully managed, petabyte-scale data warehouse service in the cloud, and it is available in two options:
- Amazon Redshift Cluster: we deploy a provisioned cluster consisting of compute nodes and a leader node. We build a cluster with node types that meet our cost and performance specifications (the option covered in this story).
- Amazon Redshift Serverless: we deploy a namespace (a collection of database objects and users) and a workgroup (a collection of compute resources). Amazon Redshift Serverless automatically provisions and manages capacity for us. We can specify a base data warehouse capacity to select the right price/performance balance for our workloads, and set a maximum in RPU hours as a cost control to keep costs predictable. If you want to deploy Amazon Redshift Serverless in AWS using Terraform, please read this story.
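As a preview of the provisioned option, a minimal cluster definition might look like the sketch below. All names and values here are placeholders, and a real deployment also needs the networking, IAM, and security group pieces we build in the following sections:

```hcl
# Minimal provisioned Redshift cluster sketch (placeholder values).
resource "aws_redshift_cluster" "example" {
  cluster_identifier = "redshift-demo-cluster"      # placeholder identifier
  database_name      = "dev"
  master_username    = "awsuser"
  master_password    = var.redshift_master_password # supply securely, never hardcode
  node_type          = "dc2.large"                  # smallest provisioned node type
  cluster_type       = "multi-node"
  number_of_nodes    = 2
}
```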
To deploy an Amazon Redshift Cluster, we will need the following:
- AWS Credentials
- Define AWS and Terraform Providers
- Create a VPC, Subnets, and other network components
- Create a VPC Default Security Group
- Create an AWS IAM Role and IAM Policy
- Create the Amazon Redshift Cluster
Note: For clarity, we prefer to define separate files. However, you can put together the code using the traditional main.tf, variables.tf, and output.tf layout.
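For example, a split-by-purpose layout (these file names are just a suggestion; Terraform loads every `.tf` file in the directory regardless of name) could look like:

```
.
├── provider.tf        # AWS and Terraform provider definitions
├── vpc.tf             # VPC, subnets, and other network components
├── security-group.tf  # VPC default security group
├── iam.tf             # IAM role and policy for Redshift
├── redshift.tf        # the Redshift cluster itself
├── variables.tf       # input variables
└── output.tf          # outputs
```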
2. AWS Credentials
Before creating our Redshift Cluster, we will need AWS Credentials to execute our Terraform code.
The AWS provider offers a few options for providing credentials for authentication:
- Static credentials
- Environment variables
- Shared credentials/configuration file
For this story, we will use static credentials. Please refer to the “How to create an IAM account and configure…
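With static credentials, the keys are passed directly in the provider block. A sketch (the region and key values below are placeholders; avoid committing real keys to version control, and prefer environment variables or a shared credentials file for anything beyond experimentation):

```hcl
provider "aws" {
  region     = "us-east-1"     # placeholder region
  access_key = "my-access-key" # placeholder; never commit real keys
  secret_key = "my-secret-key" # placeholder
}
```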