AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transaction data into Amazon Kinesis Data Streams.

Kinesis Data Streams can ingest millions of transactions and make them available to multiple consumers in near real time, which is exactly what this scenario requires. A Lambda function integrated with the stream processes each transaction, removes the sensitive data, and writes the cleaned record to Amazon DynamoDB, whose single-digit-millisecond reads satisfy the low-latency retrieval requirement. The other internal applications consume the transactions directly from the Kinesis data stream, so every application sees the data as it arrives.
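The processing step in option C can be sketched as a Lambda handler attached to the Kinesis stream via an event source mapping. This is a minimal illustration, not a production implementation: the field names in `SENSITIVE_FIELDS` are hypothetical, and the DynamoDB `put_item` call you would add with boto3 is omitted so the sketch stays self-contained.

```python
import base64
import json

# Hypothetical field names assumed to be sensitive in a transaction record.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def sanitize(transaction: dict) -> dict:
    """Return a copy of the transaction with sensitive fields removed."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

def handler(event, context):
    """Lambda handler for a Kinesis Data Streams event source mapping.

    Kinesis delivers each record payload base64-encoded. Each payload is
    decoded, parsed as JSON, and sanitized; in a real deployment the
    cleaned record would then be written to DynamoDB (e.g. with boto3's
    table.put_item, omitted here).
    """
    cleaned = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        cleaned.append(sanitize(payload))
    return cleaned
```

Note that the other internal applications are separate consumers of the same stream (each with its own shard iterator or enhanced fan-out consumer), so this sanitizing Lambda does not block them from reading the raw transactions in parallel.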

Options A, B, and D each fall short:

  • Option A: DynamoDB has no built-in rule mechanism to strip sensitive data on write, and DynamoDB Streams would share records only after the unsanitized data had already been stored.
  • Option B: Kinesis Data Firehose does not support DynamoDB as a delivery destination, and having other applications read objects from S3 is not a near-real-time sharing mechanism.
  • Option D: Batching transactions into S3 files and processing each file with Lambda introduces batch delay, so the data would not be shared or processed in near real time.

Therefore, option C is the most suitable solution for this scenario.
