AWS SAA-C03 #204

An online retail company has more than 50 million active customers and receives more than 25,000 orders each day. The company collects purchase data for customers and stores this data in Amazon S3. Additional customer data is stored in Amazon RDS.

The company wants to make all the data available to various teams so that the teams can perform analytics. The solution must provide the ability to manage fine-grained permissions for the data and must minimize operational overhead.

Which solution will meet these requirements?

A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.

B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.

C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.

D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.


Here is why the other options are less suitable:

A. Migrate the purchase data to write directly to Amazon RDS. Use RDS access controls to limit access.

This option does not minimize operational overhead. Rewriting the ingestion path so purchase data is written directly to Amazon RDS would be a significant migration effort, a relational database is not well suited to serving analytics workloads for many teams at this scale, and managing per-team access controls inside RDS is complex and time-consuming.

B. Schedule an AWS Lambda function to periodically copy data from Amazon RDS to Amazon S3. Create an AWS Glue crawler. Use Amazon Athena to query the data. Use S3 policies to limit access.

While this solution could work, it does not provide fine-grained permission management. S3 bucket policies operate at the bucket, prefix, and object level; they cannot express table-, row-, or column-level permissions on the data that teams query. The scheduled Lambda copy jobs and crawler also add ongoing operational overhead.
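For contrast, here is a minimal sketch of what an S3 bucket policy can actually express. The bucket name and team role ARN are hypothetical placeholders; the point is that access can only be scoped to buckets, prefixes, or objects, never to the tables and columns the analytics teams care about.

```python
import json
import boto3

s3 = boto3.client("s3")

# Hypothetical bucket and IAM role names, for illustration only.
BUCKET = "retail-purchase-data"
TEAM_ROLE_ARN = "arn:aws:iam::111122223333:role/AnalyticsTeamA"

# An S3 bucket policy can only scope access to buckets, prefixes, or objects.
# There is no notion of databases, tables, rows, or columns here, so any team
# that can read an object sees everything inside it.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowTeamAReadOnOrdersPrefix",
            "Effect": "Allow",
            "Principal": {"AWS": TEAM_ROLE_ARN},
            "Action": ["s3:GetObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/orders/*",
        }
    ],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))
```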

D. Create an Amazon Redshift cluster. Schedule an AWS Lambda function to periodically copy data from Amazon S3 and Amazon RDS to Amazon Redshift. Use Amazon Redshift access controls to limit access.

This solution could also work, but it does not minimize operational overhead: the company would have to provision and manage a Redshift cluster and maintain Lambda-based copy jobs for both data sources. Redshift is a data warehouse and is more than is needed when the goal is simply to run analytics over data that already lives in Amazon S3 and Amazon RDS.

The solution that will meet these requirements is:

C. Create a data lake by using AWS Lake Formation. Create an AWS Glue JDBC connection to Amazon RDS. Register the S3 bucket in Lake Formation. Use Lake Formation access controls to limit access.

This solution makes all the data available to the various teams for analytics, provides fine-grained permission management, and minimizes operational overhead. AWS Lake Formation simplifies setting up, securing, and managing a data lake: AWS Glue connects to Amazon RDS through a JDBC connection, the S3 bucket is registered with Lake Formation as a data location, and Lake Formation then lets you grant database-, table-, and even column-level permissions centrally, which integrated services such as Athena enforce at query time.
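As a rough illustration, here is a minimal boto3 sketch of the three building blocks option C describes: a Glue JDBC connection to RDS, registering the S3 bucket with Lake Formation, and a column-level grant. All names, ARNs, and credentials below are placeholders, and in practice the JDBC password would come from AWS Secrets Manager rather than being passed inline.

```python
import boto3

glue = boto3.client("glue")
lakeformation = boto3.client("lakeformation")

# --- 1. Glue JDBC connection to the Amazon RDS database (placeholder values) ---
glue.create_connection(
    ConnectionInput={
        "Name": "rds-customer-db",
        "ConnectionType": "JDBC",
        "ConnectionProperties": {
            "JDBC_CONNECTION_URL": "jdbc:mysql://customers.example.us-east-1.rds.amazonaws.com:3306/customers",
            "USERNAME": "glue_reader",
            "PASSWORD": "use-secrets-manager-in-practice",
        },
    }
)

# --- 2. Register the S3 bucket that holds purchase data with Lake Formation ---
lakeformation.register_resource(
    ResourceArn="arn:aws:s3:::retail-purchase-data",
    UseServiceLinkedRole=True,
)

# --- 3. Fine-grained, column-level grant to one analytics team's IAM role ---
lakeformation.grant_permissions(
    Principal={
        "DataLakePrincipalIdentifier": "arn:aws:iam::111122223333:role/AnalyticsTeamA"
    },
    Resource={
        "TableWithColumns": {
            "DatabaseName": "retail",
            "Name": "purchases",
            "ColumnNames": ["order_id", "order_date", "total_amount"],
        }
    },
    Permissions=["SELECT"],
)
```

A Glue crawler (or Glue job) would populate the Data Catalog with the `retail` database and `purchases` table referenced in the grant, and teams then query through Athena or other integrated services while Lake Formation enforces the column-level filter.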
