AWS SAA-C03 #204

An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.

The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead.

Which solution will meet these requirements?

A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.

B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.

C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.

D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.


Here's why the other options are not as suitable:

A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.

This option would not minimize operational overhead. Rewriting the ingestion path so that all purchase data is written directly to Amazon RDS would be a significant migration effort, RDS is not well suited as an analytics store at this scale, and RDS access controls (database users and grants) do not provide a centralized, fine-grained permission model across multiple teams.

B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.

While this solution could work, it does not manage fine-grained permissions as effectively as AWS Lake Formation. S3 bucket policies grant access at the bucket, prefix, or object level; they cannot express database-, table-, or column-level permissions. The scheduled Lambda copy from Amazon RDS to Amazon S3 is also custom code that must be maintained, which adds operational overhead.
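To make the granularity point concrete, here is a minimal boto3 sketch (not part of the original question) of what an S3 bucket policy can express; the bucket name, account ID, and role name are hypothetical placeholders. Note that the policy can only scope access by prefix, not by table or column:

```python
import json
import boto3

s3 = boto3.client("s3")

# An S3 bucket policy can only scope access to a bucket, prefix, or object.
# It cannot express "this team may read only these columns of the orders table",
# which is the kind of fine-grained permission the question asks for.
prefix_level_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowAnalyticsTeamReadOrdersPrefix",
            "Effect": "Allow",
            "Principal": {"AWS": "arn:aws:iam::111122223333:role/AnalyticsTeamRole"},
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::retail-purchase-data/orders/*",
        }
    ],
}

s3.put_bucket_policy(
    Bucket="retail-purchase-data",
    Policy=json.dumps(prefix_level_policy),
)
```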

D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.

This solution could also work, but it does not minimize operational overhead: the company would have to manage an Amazon Redshift cluster and maintain Lambda functions that periodically load data from Amazon S3 and Amazon RDS. Provisioning a data warehouse and building custom load pipelines is more infrastructure than the use case requires, when the goal is simply to make the existing data queryable by multiple teams with fine-grained permissions.

The solution that will meet these requirements is:

C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.

This solution makes all the data available to the various teams for analytics, provides fine-grained permissions, and minimizes operational overhead. AWS Lake Formation simplifies setting up, securing, and managing a data lake: AWS Glue connects to Amazon RDS through a JDBC connection and catalogs the data, the Amazon S3 bucket is registered with Lake Formation as a data location, and Lake Formation access controls then manage permissions centrally, down to the database, table, and column level.
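As a rough illustration (not part of the original question), the boto3 sketch below walks through the moving parts of option C: a Glue JDBC connection to RDS, a crawler that catalogs the RDS tables, registration of the S3 bucket with Lake Formation, and a column-level grant. All names, ARNs, endpoints, and columns are hypothetical placeholders, and real setups would also need networking details (subnet, security groups) on the connection and secrets management for credentials:

```python
import boto3

glue = boto3.client("glue")
lakeformation = boto3.client("lakeformation")

# 1. A Glue JDBC connection so crawlers and ETL jobs can reach the customer data in RDS.
glue.create_connection(
    ConnectionInput={
        "Name": "rds-customer-db",
        "ConnectionType": "JDBC",
        "ConnectionProperties": {
            "JDBC_CONNECTION_URL": "jdbc:mysql://retail-rds.example.internal:3306/customers",
            "USERNAME": "glue_reader",
            "PASSWORD": "replace-with-secret",
        },
    }
)

# 2. A crawler that catalogs the RDS tables into the Glue Data Catalog.
glue.create_crawler(
    Name="rds-customer-crawler",
    Role="arn:aws:iam::111122223333:role/GlueCrawlerRole",
    DatabaseName="retail",
    Targets={"JdbcTargets": [{"ConnectionName": "rds-customer-db", "Path": "customers/%"}]},
)

# 3. Register the S3 bucket holding the purchase data as a Lake Formation data location.
lakeformation.register_resource(
    ResourceArn="arn:aws:s3:::retail-purchase-data",
    UseServiceLinkedRole=True,
)

# 4. Grant one team column-level SELECT on a catalog table -- the fine-grained
#    permissions that S3 bucket policies and per-database grants cannot express centrally.
lakeformation.grant_permissions(
    Principal={"DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AnalyticsTeamRole"},
    Resource={
        "TableWithColumns": {
            "DatabaseName": "retail",
            "Name": "orders",
            "ColumnNames": ["order_id", "order_date", "total_amount"],
        }
    },
    Permissions=["SELECT"],
)
```

Teams can then query the cataloged tables with Amazon Athena or other Lake Formation-integrated services, and permission changes are made in one place rather than in bucket policies and database grants.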
