AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Kinesis Data Streams can ingest millions of transactions and make them available to multiple consumers in near real time, which is exactly what this scenario requires. An AWS Lambda function integrated with the stream processes each transaction, removes the sensitive data, and stores the sanitized record in Amazon DynamoDB, which provides the low-latency retrieval expected of the document database. Because the records remain on the Kinesis data stream, the other internal applications can consume them directly as additional consumers, so every application sees the latest transactions.
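As a concrete illustration, here is a minimal Python sketch of the Lambda processing step. The `Transactions` table name and the sensitive field names (`card_number`, `cvv`, `card_holder_ssn`) are assumptions for illustration only; the real schema would come from the application.

```python
import base64
import json
from decimal import Decimal

import boto3

# Hypothetical names used for illustration only.
TABLE_NAME = "Transactions"
SENSITIVE_FIELDS = {"card_number", "cvv", "card_holder_ssn"}

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    """Invoked by a Kinesis Data Streams event source mapping.

    Each record payload is assumed to be a JSON transaction; sensitive
    attributes are stripped before the item is written to DynamoDB.
    """
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded in the Lambda event.
        payload = base64.b64decode(record["kinesis"]["data"])
        # parse_float=Decimal keeps numeric amounts compatible with DynamoDB.
        transaction = json.loads(payload, parse_float=Decimal)

        # Remove sensitive attributes before persisting.
        sanitized = {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

        # DynamoDB serves the sanitized transactions with low-latency reads.
        table.put_item(Item=sanitized)
```

Because the Lambda function is only a consumer of the stream, the sanitization step does not interfere with the other applications, which read the same records independently.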

Options A, B, and D have certain limitations:

  • Option A: DynamoDB has no built-in rule or trigger that removes sensitive data from an item upon write; sanitization would have to happen before the write, which this option does not provide.
  • Option B: Kinesis Data Firehose cannot deliver directly to DynamoDB, and having the other applications read objects from S3 provides neither near-real-time sharing nor low-latency retrieval.
  • Option D: Batching transactions into S3 files and processing them with Lambda is a batch workflow, so it does not meet the near-real-time requirement.

Therefore, option C is the most suitable solution for this scenario.
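For the sharing requirement, the following is a hedged sketch of how one of the other internal applications could read transactions directly off the stream. The stream name `marketplace-transactions` is an assumption, and a production consumer would normally use the Kinesis Client Library or a Lambda event source mapping rather than this single-shard polling loop.

```python
import json
import time

import boto3

STREAM_NAME = "marketplace-transactions"  # assumed stream name

kinesis = boto3.client("kinesis")

# Read a single shard for illustration; real consumers iterate every shard.
shard_id = kinesis.list_shards(StreamName=STREAM_NAME)["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM_NAME,
    ShardId=shard_id,
    ShardIteratorType="LATEST",
)["ShardIterator"]

while iterator:
    response = kinesis.get_records(ShardIterator=iterator, Limit=100)
    for record in response["Records"]:
        # boto3 returns record data as raw bytes, already base64-decoded.
        transaction = json.loads(record["Data"])
        print("received transaction", transaction.get("transaction_id"))
    iterator = response.get("NextShardIterator")
    time.sleep(1)  # stay within the per-shard read throughput
```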
