AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams can ingest millions of transactions and make them available in near real time, which is exactly what this scenario requires. A Lambda function integrated with the stream can remove the sensitive data from each transaction before storing it in Amazon DynamoDB, which provides the low-latency retrieval the scenario asks for. Because the data is shared through the stream itself, the other internal applications can each attach their own consumers and read the same transactions independently, so every application sees the latest data.

Options A, B, and D have certain limitations:

  • Option A: DynamoDB has no built-in rule feature that removes sensitive data from an item upon write.
  • Option B: Kinesis Data Firehose does not support Amazon DynamoDB as a delivery destination, and having other applications consume the data from S3 would not meet the near-real-time sharing requirement.
  • Option D: Batching transactions into S3 files before Lambda processes them introduces delay, so this design cannot provide near-real-time processing.

Therefore, option C is the most suitable solution for this scenario.
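The redaction step in option C can be sketched as a Kinesis-triggered Lambda handler. This is a minimal illustration, not a production implementation: the field names are hypothetical, and the DynamoDB write is left as a stub comment so the sketch stays self-contained (a real function would use boto3's `put_item` or a `batch_writer`).

```python
# Sketch of the Lambda redaction step for option C.
# Field names (card_number, cvv, ssn) are hypothetical examples of
# sensitive data; the DynamoDB write is stubbed out as a comment.
import base64
import json

SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}  # hypothetical

def redact(transaction: dict) -> dict:
    """Return a copy of the transaction without sensitive fields."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

def handler(event, context):
    """Kinesis-triggered Lambda: decode each record, redact, then store."""
    items = []
    for record in event["Records"]:
        # Kinesis delivers the payload base64-encoded in the event envelope.
        payload = base64.b64decode(record["kinesis"]["data"])
        items.append(redact(json.loads(payload)))
    # In production, write each item to DynamoDB here, e.g.:
    #   boto3.resource("dynamodb").Table("transactions").put_item(Item=item)
    return items

# Example event mimicking the Kinesis record envelope:
raw = json.dumps({"txn_id": "t-1", "amount": "42.50", "card_number": "4111..."})
event = {"Records": [{"kinesis": {"data": base64.b64encode(raw.encode()).decode()}}]}
print(handler(event, None))
```

Note that the Lambda only handles redaction and storage; the other internal applications do not depend on it, since they consume the unprocessed records directly from the stream with their own consumers.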
