QUESTION
At the end of the month, data analyst teams run end-of-month reporting and ad hoc analysis consisting of long, complex queries, creating a spike in read usage. Queries are running slowly. Automatic Workload Management (WLM) and Short Query Acceleration (SQA) are in place but have not fixed the problem.
Which of the following are the most cost-effective and least disruptive means of scaling to meet demand?
(Select Two)
Note: we are scaling for READ, not WRITE, so both elastic and classic resize, which scale both read and write capacity, are unnecessary.
A) Incorrect – We’re scaling for READ, not WRITE; elastic resize scales both read and write capacity, and there will be some service interruption.
B) Correct – “With the Concurrency Scaling feature, you can support virtually unlimited concurrent users and concurrent queries, with consistently fast query performance. When concurrency scaling is enabled, Amazon Redshift automatically adds additional cluster capacity when you need it to process an increase in concurrent read queries. Write operations continue as normal on your main cluster.” – https://docs.aws.amazon.com/redshift/latest/dg/concurrency-scaling.html
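For manual WLM, concurrency scaling is enabled per queue by setting the queue's `concurrency_scaling` property to `auto` in the cluster parameter group's `wlm_json_configuration` parameter. A minimal sketch of building that JSON value follows; the query group name and concurrency values are illustrative assumptions, not part of the question.

```python
import json

def wlm_config_with_concurrency_scaling():
    """Build a wlm_json_configuration value whose first queue has
    concurrency scaling enabled (hypothetical queue layout)."""
    queues = [
        {
            # Hypothetical query group for the month-end reporting queries
            "query_group": ["monthly_reporting"],
            "query_concurrency": 5,
            # Route eligible read queries to transient scaling clusters
            "concurrency_scaling": "auto",
        },
        # Default queue: writes and everything else stay on the main cluster
        {"query_concurrency": 5},
    ]
    return json.dumps(queues)

if __name__ == "__main__":
    print(wlm_config_with_concurrency_scaling())
```

The resulting string would be supplied as the value of the `wlm_json_configuration` parameter (for example via `aws redshift modify-cluster-parameter-group`); no resize or downtime is involved, which is why this option is the least disruptive.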
C) Incorrect – Classic resize is for scaling to meet an ongoing increase in read and write capacity. While the snapshot approach reduces downtime, the new cluster won’t be available for hours to days.
D) Correct – “With Amazon Redshift, you can already scale quickly in three ways. First, you can query data in your Amazon S3 data lakes in place using Amazon Redshift Spectrum, without needing to load it into the cluster. This flexibility lets you analyze growing data volumes without waiting for extract, transform, and load (ETL) jobs or adding more storage capacity.” – https://aws.amazon.com/blogs/big-data/scale-your-amazon-redshift-clusters-up-and-down-in-minutes-to-get-the-performance-you-need-when-you-need-it/
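Spectrum works by registering S3 data as an external schema and table, after which it can be queried with ordinary SQL. A minimal sketch of the DDL follows, wrapped in a small Python helper; the schema, database, IAM role ARN, bucket, and table names are all hypothetical placeholders.

```python
def spectrum_setup_sql():
    """Return illustrative DDL that exposes S3 data to Redshift
    in place, without loading it into the cluster."""
    return """
CREATE EXTERNAL SCHEMA IF NOT EXISTS spectrum
FROM DATA CATALOG
DATABASE 'reporting_lake'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-spectrum-role';

CREATE EXTERNAL TABLE spectrum.monthly_sales (
    sale_id BIGINT,
    amount  DECIMAL(10,2),
    sold_at TIMESTAMP
)
STORED AS PARQUET
LOCATION 's3://example-bucket/monthly-sales/';
""".strip()

if __name__ == "__main__":
    print(spectrum_setup_sql())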