AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The correct answer is C: stream the transactions data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams can ingest millions of transactions and make them available to consumers in near real time, which is exactly what this scenario requires. The Lambda integration processes each transaction to remove sensitive data before storing it in Amazon DynamoDB, whose single-digit-millisecond reads satisfy the low-latency retrieval requirement. Because a Kinesis data stream supports multiple independent consumers, the other internal applications can read the same transactions directly off the stream.
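To make the data flow concrete, here is a minimal sketch of the Lambda function from option C. The table name (`transactions`) and the sensitive field names (`card_number`, `cvv`) are hypothetical; the real attribute names would come from the application's schema.

```python
import base64
import json
from decimal import Decimal

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("transactions")     # assumed table name

SENSITIVE_FIELDS = {"card_number", "cvv"}  # assumed field names


def handler(event, context):
    # The Kinesis event source mapping invokes this function with a
    # batch of records read from the stream.
    for record in event["Records"]:
        # Kinesis record payloads arrive base64-encoded.
        payload = json.loads(
            base64.b64decode(record["kinesis"]["data"]),
            parse_float=Decimal,  # boto3's DynamoDB resource rejects floats
        )

        # Drop sensitive attributes before the transaction is persisted.
        sanitized = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}

        # Store the sanitized transaction for low-latency retrieval.
        table.put_item(Item=sanitized)
```

Note that the event source mapping, not the function code, controls batching and the stream position, so the sanitization logic itself stays simple.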

Options A, B, and D each have a disqualifying flaw:

  • Option A: DynamoDB has no built-in rule mechanism to remove sensitive data from items upon write.
  • Option B: Kinesis Data Firehose cannot deliver directly to DynamoDB, and applications reading batched objects from S3 would not get near-real-time, low-latency access.
  • Option D: Batching transactions into S3 files and processing them with Lambda is a batch workflow, not near-real-time processing.

Therefore, option C is the most suitable solution for this scenario.
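For completeness, the producer side is equally small: the marketplace application publishes each transaction to the stream, and every consumer (the Lambda function above plus the other internal applications) reads the same records independently. The stream name and the partition-key field below are assumptions.

```python
import json

import boto3

kinesis = boto3.client("kinesis")


def publish_transaction(transaction: dict) -> None:
    kinesis.put_record(
        StreamName="transactions-stream",            # assumed stream name
        Data=json.dumps(transaction).encode(),       # payload must be bytes
        PartitionKey=transaction["transaction_id"],  # assumed key; spreads load across shards
    )
```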
