AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transaction data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams can ingest millions of transactions and make them available for near-real-time processing, which is exactly what this scenario requires. A Lambda function consuming the stream can remove sensitive data from each transaction before storing the sanitized record in Amazon DynamoDB, whose single-digit-millisecond reads satisfy the low-latency retrieval requirement. Because a Kinesis data stream supports multiple consumers, the other internal applications can read the same transaction data directly off the stream as it arrives.
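To make the processing step concrete, here is a minimal sketch of what the Lambda consumer in option C could look like. The table name, partition key, and list of sensitive fields are assumptions for illustration only; they are not part of the original question.

```python
import base64
import json
from decimal import Decimal

import boto3

# Hypothetical names -- the table and the sensitive field list are
# assumptions for this sketch, not part of the original question.
TABLE_NAME = "transactions"
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)


def handler(event, context):
    """Entry point for a Kinesis Data Streams event source mapping."""
    for record in event["Records"]:
        # Kinesis delivers each record's payload base64-encoded.
        payload = base64.b64decode(record["kinesis"]["data"])
        # Parse floats as Decimal because DynamoDB does not accept floats.
        transaction = json.loads(payload, parse_float=Decimal)

        # Drop sensitive attributes before the item is persisted.
        sanitized = {k: v for k, v in transaction.items()
                     if k not in SENSITIVE_FIELDS}

        table.put_item(Item=sanitized)
```

In practice you would also tune the event source mapping (batch size, retries, an on-failure destination), but those details are beyond the scope of the question.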

Options A, B, and D have certain limitations:

  • Option A: DynamoDB has no built-in rule mechanism to strip sensitive data from an item on write; a separate processing step would still be required.
  • Option B: Kinesis Data Firehose cannot deliver records directly to DynamoDB, and having the other applications read the data from Amazon S3 does not provide near-real-time sharing or low-latency retrieval.
  • Option D: Batching transactions into S3 files and processing them with Lambda is a batch workflow, not near-real-time processing.

Therefore, option C is the most suitable solution for this scenario.
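For completeness, here is a minimal sketch of how one of the other internal applications could read the shared transactions directly off the stream. The stream name is a hypothetical assumption, and a production consumer would more likely use the Kinesis Client Library, a Lambda trigger, or enhanced fan-out rather than polling a single shard by hand.

```python
import json
import time

import boto3

# Hypothetical stream name -- an assumption for this sketch.
STREAM_NAME = "transaction-stream"

kinesis = boto3.client("kinesis")


def consume_one_shard():
    """Poll the first shard and print each transaction as it arrives."""
    shard_id = kinesis.describe_stream(StreamName=STREAM_NAME)[
        "StreamDescription"
    ]["Shards"][0]["ShardId"]
    shard_iterator = kinesis.get_shard_iterator(
        StreamName=STREAM_NAME,
        ShardId=shard_id,
        ShardIteratorType="LATEST",
    )["ShardIterator"]

    while True:
        response = kinesis.get_records(ShardIterator=shard_iterator, Limit=100)
        for record in response["Records"]:
            # boto3 returns the payload as raw bytes; parse the JSON transaction.
            transaction = json.loads(record["Data"])
            print(transaction)
        shard_iterator = response["NextShardIterator"]
        time.sleep(1)  # stay within the per-shard read throughput limits
```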
