AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams can ingest millions of transactions and deliver them with near-real-time latency, and multiple consumers can read the same stream independently, which is exactly what "share with several other internal applications" requires. An AWS Lambda function attached to the stream as an event source can remove sensitive data from each transaction before writing it to Amazon DynamoDB, a document-style NoSQL database that provides the low-latency retrieval the scenario asks for. The other applications consume the transactions directly off the Kinesis data stream, so every consumer sees the latest data without an intermediate store.
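As a minimal sketch of the consumer side, the Lambda function below handles a Kinesis event source mapping: each record carries base64-encoded JSON, the function strips sensitive fields, then writes the cleaned item to DynamoDB. The field names and the table name `transactions` are assumptions for illustration, not from the question.

```python
import base64
import json

# Hypothetical sensitive field names; a real application would define its own list.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def scrub(transaction: dict) -> dict:
    """Return a copy of the transaction with sensitive fields removed."""
    return {k: v for k, v in transaction.items() if k not in SENSITIVE_FIELDS}

def handler(event, context):
    """Lambda handler invoked by a Kinesis Data Streams event source mapping.

    Kinesis record data arrives base64-encoded; after scrubbing, each
    transaction is stored in a DynamoDB table (table name is an assumption).
    """
    # boto3 is imported here so the scrubbing logic stays testable
    # without the AWS SDK installed.
    import boto3
    table = boto3.resource("dynamodb").Table("transactions")
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        table.put_item(Item=scrub(payload))
```

Keeping `scrub` as a pure function separates the compliance logic from the AWS plumbing, which makes it easy to unit-test.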

Options A, B, and D have the following limitations:

  • Option A: DynamoDB has no built-in rule engine that transforms or removes data upon write.
  • Option B: Kinesis Data Firehose does not support DynamoDB as a delivery destination, and having other applications read the data from S3 does not provide near-real-time sharing or low-latency retrieval.
  • Option D: Batch-processing files in S3 with Lambda is not near-real-time.

Therefore, option C is the most suitable solution for this scenario.
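On the producer side, the marketplace application would publish each transaction to the stream with a `PutRecord` call. The helper below is a sketch under assumptions: the stream name `transactions` and the `account_id` partition key are illustrative, and the Kinesis client is injected so the function can be exercised without AWS credentials (in production it would be `boto3.client("kinesis")`).

```python
import json
import uuid

def publish_transaction(kinesis_client, tx: dict, stream_name: str = "transactions") -> None:
    """Publish one transaction to a Kinesis data stream.

    The partition key controls shard assignment; keying on the account ID
    (an assumed field) keeps each account's transactions ordered within a shard.
    """
    kinesis_client.put_record(
        StreamName=stream_name,
        Data=json.dumps(tx).encode("utf-8"),
        # Fall back to a random key so records without an account still distribute.
        PartitionKey=tx.get("account_id", str(uuid.uuid4())),
    )
```

Usage would look like `publish_transaction(boto3.client("kinesis"), {"id": "t-1", "account_id": "a-9", "amount": 42})`.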
