AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams can ingest millions of transactions and deliver them to multiple consumers in near-real time, which is exactly what this scenario demands. AWS Lambda integrates natively with Kinesis Data Streams, so a Lambda function can process each transaction, remove the sensitive data, and store the sanitized record in Amazon DynamoDB, whose single-digit-millisecond reads satisfy the low-latency retrieval requirement for a document database. Because a Kinesis data stream fans out to multiple consumers, the other internal applications can read the transaction data directly off the stream as it arrives.
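
As a rough illustration of the consumer side, the sketch below shows a Lambda function attached to the stream through an event source mapping. The table name `Transactions` and the field names in `SENSITIVE_FIELDS` are hypothetical; the real values depend on the application's schema.

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Transactions")  # hypothetical table name

# Hypothetical set of sensitive fields; the real list depends on the schema.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}


def handler(event, context):
    """Invoked by the Kinesis event source mapping with a batch of records."""
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        # Drop sensitive attributes before persisting.
        sanitized = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
        table.put_item(Item=sanitized)
```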

Options A, B, and D have certain limitations:

  • Option A: DynamoDB has no built-in rule mechanism that removes sensitive data from items upon write, so this option is not feasible as described.
  • Option B: Kinesis Data Firehose cannot deliver data directly to Amazon DynamoDB, and having other applications consume the data from Amazon S3 does not give them a near-real-time feed.
  • Option D: Writing batched files to S3 and processing them with Lambda is a batch workflow, so it cannot meet the near-real-time requirement.

Therefore, option C is the most suitable solution for this scenario.
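
On the producer side, the web application would push each transaction onto the stream with the Kinesis PutRecord API. A minimal sketch, assuming a stream named `transactions` and a `transaction_id` field (both hypothetical):

```python
import json

import boto3

kinesis = boto3.client("kinesis")


def publish_transaction(transaction: dict) -> None:
    """Send one transaction to the stream."""
    kinesis.put_record(
        StreamName="transactions",  # hypothetical stream name
        Data=json.dumps(transaction).encode("utf-8"),
        # Hypothetical field; a high-cardinality key spreads load across shards.
        PartitionKey=transaction["transaction_id"],
    )
```

Using a high-cardinality value such as the transaction ID as the partition key distributes records evenly across shards, which is what lets the stream absorb peak-hour traffic by scaling out the shard count.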
