AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Kinesis Data Streams can ingest millions of transactions and make them available to multiple consumers in near real time, which is exactly what the internal applications require. A Lambda function consuming the stream can remove the sensitive fields from each transaction before writing it to Amazon DynamoDB, and DynamoDB's low-latency reads satisfy the retrieval requirement for the document database. The other applications consume the transactions data directly off the Kinesis data stream, so each one always sees the latest transactions without depending on the Lambda pipeline.
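The processing step in option C can be sketched as a Lambda function subscribed to the stream via an event source mapping. This is a minimal illustration, not a production implementation: the field names in SENSITIVE_FIELDS are hypothetical, and the DynamoDB write is left as a comment so the sketch stays self-contained.

```python
import base64
import json

# Hypothetical sensitive fields; the real names depend on the
# marketplace's transaction schema.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def sanitize(transaction):
    """Return a copy of the transaction with sensitive fields removed."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

def handler(event, context):
    """Handle a batch of records from a Kinesis Data Streams event source.

    Kinesis delivers each record's payload base64-encoded. The handler
    decodes it, strips the sensitive fields, and returns the sanitized
    items. In a real deployment, each sanitized item would be written to
    DynamoDB (e.g. boto3 table.put_item), omitted here for brevity.
    """
    sanitized = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        sanitized.append(sanitize(payload))
        # table.put_item(Item=sanitized[-1])  # DynamoDB write would go here
    return sanitized
```

Note that the other internal applications are separate consumers of the same stream; they read the raw records independently and are unaffected by this function.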

Options A, B, and D each fall short:

  • Option A: DynamoDB has no built-in rule mechanism that removes sensitive data on write; the transformation logic must run outside the table.
  • Option B: Kinesis Data Firehose does not support DynamoDB as a delivery destination, and having other applications read objects from S3 does not meet the near-real-time sharing requirement.
  • Option D: Batching transactions into S3 files and processing each file with Lambda introduces batch latency, so it is not near real time.

Therefore, option C is the most suitable solution for this scenario.
