AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams is built to ingest millions of records and make them available to consumers within seconds, which satisfies the scalable, near-real-time requirement. A Lambda function attached to the stream as an event source can remove the sensitive fields from each transaction and then write the cleaned record to Amazon DynamoDB, which provides the low-latency retrieval the question calls for. Because a Kinesis data stream supports multiple consumers, the other internal applications can each read the same transaction records directly off the stream.
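A minimal sketch of the processing Lambda, assuming a Kinesis event source mapping; the `transactions` table name and the `SENSITIVE_FIELDS` list are illustrative assumptions, not values from the question:

```python
import base64
import json

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")  # hypothetical table name

# Hypothetical field names to scrub before storage.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}


def handler(event, context):
    # A Kinesis event source mapping delivers a batch of records,
    # each with a base64-encoded payload.
    with table.batch_writer() as batch:
        for record in event["Records"]:
            payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
            # Drop the sensitive fields before the transaction is persisted.
            cleaned = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
            batch.put_item(Item=cleaned)
```

In a real deployment, the event source mapping's batch size, parallelization factor, and failure handling (for example, an on-failure destination) would be tuned to the stream's volume.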

Options A, B, and D each have a disqualifying limitation:

  • Option A: DynamoDB has no "rule" mechanism to transform or remove data on write, and DynamoDB Streams emits records only after the item, sensitive data included, has already been stored.
  • Option B: Kinesis Data Firehose cannot deliver to DynamoDB; its supported destinations include Amazon S3, Amazon Redshift, and Amazon OpenSearch Service. Serving other applications from objects in S3 also fails the low-latency retrieval requirement.
  • Option D: Batching transactions into S3 files and reprocessing them with Lambda is a batch workflow, so it cannot meet the near-real-time requirement.

Therefore, option C is the most suitable solution for this scenario.
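To illustrate the fan-out side of option C, here is a minimal boto3 polling consumer that one of the internal applications could run; the stream name is a hypothetical placeholder, and production consumers would more typically use Lambda event source mappings, the Kinesis Client Library, or enhanced fan-out:

```python
import json
import time

import boto3

kinesis = boto3.client("kinesis")
STREAM_NAME = "transactions-stream"  # hypothetical stream name


def consume(shard_id):
    # Start reading new records from the tip of the shard.
    iterator = kinesis.get_shard_iterator(
        StreamName=STREAM_NAME,
        ShardId=shard_id,
        ShardIteratorType="LATEST",
    )["ShardIterator"]

    while True:
        response = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in response["Records"]:
            transaction = json.loads(record["Data"])
            print(transaction)  # hand off to this application's own logic
        iterator = response["NextShardIterator"]
        time.sleep(1)  # respect the per-shard read throughput limits
```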
