AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams can ingest the high volume of transactions and make records available to consumers in near real time, which this scenario requires. An AWS Lambda function subscribed to the stream can remove sensitive data from each transaction before storing it in Amazon DynamoDB, whose single-digit-millisecond reads satisfy the low-latency retrieval requirement. Because a Kinesis data stream supports multiple consumers, the other internal applications can read the transaction details directly off the stream.

Options A, B, and D have certain limitations:

  • Option A: DynamoDB has no built-in rule mechanism to remove sensitive data on write; transformation logic has to run outside the table, for example in a Lambda function.
  • Option B: Kinesis Data Firehose delivers data in buffered batches, so having the other applications consume from Amazon S3 is neither near-real-time nor low-latency.
  • Option D: Batching transactions into S3 files and processing them with Lambda is inherently batch-oriented, so it would not provide near-real-time processing or sharing.

Therefore, option C is the most suitable solution for this scenario.
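The option C pipeline can be sketched as a Lambda consumer of the Kinesis stream. This is a minimal illustration, not the exam's reference implementation: the field names in `SENSITIVE_FIELDS` and the `Transactions` table name are hypothetical, and the DynamoDB write is shown only as a comment so the sketch stays self-contained.

```python
import base64
import json

# Hypothetical list of sensitive fields; a real system would define its own.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def sanitize(transaction: dict) -> dict:
    """Return a copy of the transaction with sensitive fields removed."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

def handler(event, context):
    """Lambda handler for a Kinesis Data Streams event source mapping.

    Kinesis delivers each record's payload base64-encoded. Each record is
    decoded, sanitized, and would then be written to DynamoDB (the boto3
    call is shown as a comment to keep this sketch dependency-free).
    """
    cleaned = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        item = sanitize(payload)
        cleaned.append(item)
        # In a real deployment (table name is an assumption):
        #   boto3.resource("dynamodb").Table("Transactions").put_item(Item=item)
    return cleaned
```

Note that the stream itself remains the sharing mechanism: the other internal applications attach as additional consumers of the same stream, while this function only handles the sanitize-and-store path.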
