Amazon Redshift offers an attractive feature that can help organizations manage their hosting bill. It’s called concurrency scaling, and according to Amazon, it “automatically and elastically scales query processing power to provide consistently fast performance for hundreds of concurrent queries.”
Introduction to Amazon Redshift Concurrency Scaling
Before concurrency scaling, Redshift users faced a familiar dilemma – dealing with peak demand. There were two options:
- Optimize for the typical workload, which means analytics and BI queries may run slower at peak times.
- Overprovision to meet peak demand, which wastes resources at off-peak times.
Concurrency scaling adds resources to your Redshift cluster on an on-demand basis, adding processing power during peak time and withdrawing it in quieter moments.
In terms of pricing, concurrency scaling works on a credit system that should make it free for most users. Amazon allows you to earn one free hour of scaling for every 24 hours of main Redshift cluster usage, and these credits accrue over time; for example, a cluster that runs around the clock accrues roughly 30 free hours of scaling per month. Any usage beyond your credits gets billed on a per-second basis according to your Redshift agreement.
Concurrency scaling makes financial sense, but can it offer consistent service? Let’s find out.
Cluster Requirements
There are three eligibility requirements for concurrency scaling. Your Redshift cluster must be:
- On the EC2-VPC platform
- Using the dc2.8xlarge, ds2.8xlarge, dc2.large, ds2.xlarge, ra3.4xlarge, or ra3.16xlarge node type
- Sized between 2 and 32 nodes
This means that single-node clusters are not eligible. Also, note that the cluster must have had 32 or fewer nodes at creation. If your cluster originally had 50 nodes and you later scale down to 32, you’re still not eligible for concurrency scaling.
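As a quick sanity check on the node-count requirement, you can count your cluster’s nodes from inside the database. Here is a minimal sketch using the STV_SLICES system table, which maps each slice to the node it lives on:

```sql
-- Count the compute nodes in the current cluster.
-- Concurrency scaling requires between 2 and 32 nodes.
SELECT COUNT(DISTINCT node) AS node_count
FROM stv_slices;
```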
Eligible Query Types
Concurrency scaling does not work on all query types. For the first release, it handles queries that meet three conditions:
- The query is a read-only SELECT query (although more query types are planned).
- The query does not reference a table with an INTERLEAVED sort style (the query below this list shows how to find such tables).
- The query does not use Amazon Redshift Spectrum to reference external tables.
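Since tables with interleaved sort keys disqualify any query that touches them, it helps to know which of your tables use that style. A minimal sketch using the SVV_TABLE_INFO system view, whose sortkey1 column reports INTERLEAVED for interleaved-sorted tables:

```sql
-- Find tables whose sort style makes queries against them
-- ineligible for concurrency scaling.
SELECT "schema", "table", sortkey1
FROM svv_table_info
WHERE sortkey1 = 'INTERLEAVED'
ORDER BY "schema", "table";
```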
To be routed to a concurrency scaling cluster, a query must first encounter queueing. Also, queries eligible for the SQA (Short Query Acceleration) queue will not run on concurrency scaling clusters.
Queueing and SQA are a function of a properly set-up Redshift workload management (WLM) configuration. We recommend optimizing your WLM first, because it will reduce the need for concurrency scaling. And that matters because, while AWS claims that concurrency scaling will be free for 97% of customers, you could face an additional usage charge if you exceed your credits.
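To gauge how much queueing your cluster already experiences before enabling the feature, you can summarize wait times per WLM queue. A minimal sketch against the STL_WLM_QUERY system table, which records queue and execution times in microseconds:

```sql
-- Summarize queueing per WLM queue (service class).
-- total_queue_time is in microseconds.
SELECT service_class,
       COUNT(*) AS queries,
       AVG(total_queue_time) / 1000000.0 AS avg_queue_seconds,
       MAX(total_queue_time) / 1000000.0 AS max_queue_seconds
FROM stl_wlm_query
WHERE total_queue_time > 0
GROUP BY service_class
ORDER BY service_class;
```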
We’ve also tested enabling Redshift’s automatic WLM and captured our experience with it in this blog post, “Should I Enable Amazon Redshift’s Automatic WLM?”
Enabling Concurrency Scaling
Concurrency scaling is enabled on a per-WLM queue basis. Go to the AWS Redshift Console and click on “Workload Management” from the left-side navigation menu. Select your cluster’s WLM parameter group from the subsequent pull-down menu.
You should see a new column called “Concurrency Scaling Mode” next to each queue. The default is ‘off’. Click ‘Edit’ and you’ll be able to modify the settings for each queue.
How We Configured Redshift Concurrency Scaling
Concurrency scaling works by routing eligible queries to new, dedicated clusters. The new clusters have the same size (node type and number) as the main cluster.
The number of clusters used for concurrency scaling defaults to one (1) and is controlled by the max_concurrency_scaling_clusters parameter, which can be set as high as ten (10) total clusters. Increasing the value of this parameter provisions additional standby clusters.
Monitoring our Concurrency Scaling Test
The AWS Redshift console gains a few additional charts. One, called “Max Configured Concurrency Scaling Clusters”, plots the value of max_concurrency_scaling_clusters over time.
The number of active scaling clusters is also shown in the UI under “Concurrency Scaling Activity”. The Queries tab in the UI also has a column showing whether each query ran on the main cluster or on a concurrency scaling cluster.
Whether a particular query ran on the main cluster or on a concurrency scaling cluster is stored in the concurrency_scaling_status column of STL_QUERY. A value of 1 means the query ran on a concurrency scaling cluster; other values mean it ran on the main cluster.
Example:

```sql
redshiftcluster_2=# select distinct
concurrency_scaling_status, count(*) from stl_query where endtime <
'2019-03-29 15:00:00' group by concurrency_scaling_status;
 concurrency_scaling_status | count
----------------------------+--------
                          2 |     21
                          0 | 310790
                          4 |  19818
                          6 |  69082
                         11 |      7
                          3 | 853546
                          8 | 228977
(7 rows)
```
Concurrency scaling info is also stored in some other tables and views, such as SVCS_CONCURRENCY_SCALING_USAGE.
The following SVCS views contain the same information as the corresponding STL or SVL views:
- SVCS_ALERT_EVENT_LOG
- SVCS_COMPILE
- SVCS_EXPLAIN
- SVCS_PLAN_INFO
- SVCS_QUERY_SUMMARY
- SVCS_STREAM_SEGS
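For keeping an eye on your free credit, the usage view is the most directly useful. A minimal sketch that rolls up concurrency scaling usage by day, assuming the documented end_time and usage_in_seconds columns of SVCS_CONCURRENCY_SCALING_USAGE:

```sql
-- Daily concurrency scaling usage in hours, to compare against the
-- roughly one free credit hour earned per 24 hours of main cluster uptime.
SELECT DATE_TRUNC('day', end_time) AS usage_day,
       SUM(usage_in_seconds) / 3600.0 AS usage_hours
FROM svcs_concurrency_scaling_usage
GROUP BY 1
ORDER BY 1;
```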
Results of our Concurrency Scaling Tests
We enabled concurrency scaling for a single queue on an internal cluster at approximately 2019-03-29 18:30:00 GMT. We changed the max_concurrency_scaling_clusters parameter to 3 at approximately 2019-03-29 20:30:00.
To simulate query queueing, we lowered the number of slots for the queue from 15 to 5.
Below is a chart from the Integrate.io dashboard, showing the running versus queuing queries for this queue, after cranking down the number of slots.
We observed that the queueing time for queries went up, peaking at over 5 minutes.
Here’s the corresponding summary in the AWS console of what happened during that time:
Redshift spun up three (3) concurrency scaling clusters as requested. It appears that these clusters were not fully utilized, even though our cluster had many queries that were queuing.
The usage chart correlates closely with the scaling activity chart:
After a few hours, we checked, and it looked like six queries had run with concurrency scaling. We also spot-checked two of them against the UI. We haven’t checked how this value behaves when multiple concurrency scaling clusters are active.
```sql
redshiftcluster_2=# select distinct
concurrency_scaling_status, count(*) from stl_query where endtime >
'2019-03-29 18:30:00' group by concurrency_scaling_status;
 concurrency_scaling_status | count
----------------------------+-------
                          4 |   108
                          6 |   333
                          1 |     6
                          0 |   913
                          3 |  4495
                          8 |   304
(6 rows)
```
Conclusion: Is Redshift Concurrency Scaling Worth it?
Concurrency scaling may mitigate queue times during bursts of queries.
From this basic test, it appears that a portion of our query load improved as a result. However, simply enabling concurrency scaling didn’t fix all of our concurrency problems. The limited impact is likely due to the limitations on the types of queries that can use concurrency scaling. For example, we have a lot of tables with interleaved sort keys, and much of our workload is writes.
While concurrency scaling doesn’t appear to be a silver bullet solution for WLM tuning in all cases, the feature is transparent and easy to use. You can start with a single concurrency scaling cluster, then monitor the peak load via the console to determine whether the new clusters are being fully utilized.
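Beyond the console, you can spot-check whether the feature is paying off in SQL. This is a sketch only, under the assumption that queries routed to scaling clusters are still recorded in STL_WLM_QUERY; verify that on your own cluster before relying on it:

```sql
-- Compare average queue time for queries that ran on a concurrency
-- scaling cluster (status = 1) versus the main cluster (other values).
SELECT q.concurrency_scaling_status,
       COUNT(*) AS queries,
       AVG(w.total_queue_time) / 1000000.0 AS avg_queue_seconds
FROM stl_query q
JOIN stl_wlm_query w ON q.query = w.query
GROUP BY q.concurrency_scaling_status
ORDER BY q.concurrency_scaling_status;
```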
Though it may not have lived up to the automatic solution advertised, concurrency scaling will become more and more effective over time as AWS adds features and support. We strongly recommend enabling the feature on your WLM queues.