AWS SAA-C03 #33

A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database for low-latency retrieval.

What should a solutions architect recommend to meet these requirements?

A. Store the transactions data into Amazon DynamoDB. Set up a rule in DynamoDB to remove sensitive data from every transaction upon write. Use DynamoDB Streams to share the transactions data with other applications.

B. Stream the transactions data into Amazon Kinesis Data Firehose to store data in Amazon DynamoDB and Amazon S3. Use AWS Lambda integration with Kinesis Data Firehose to remove sensitive data. Other applications can consume the data stored in Amazon S3.

C. Stream the transactions data into Amazon Kinesis Data Streams. Use AWS Lambda integration to remove sensitive data from every transaction and then store the transactions data in Amazon DynamoDB. Other applications can consume the transactions data off the Kinesis data stream.

D. Store the batched transactions data in Amazon S3 as files. Use AWS Lambda to process every file and remove sensitive data before updating the files in Amazon S3. The Lambda function then stores the data in Amazon DynamoDB. Other applications can consume transaction files stored in Amazon S3.


The best option is C: stream the transactions data into Amazon Kinesis Data Streams.

Amazon Kinesis Data Streams can ingest millions of records and make them available to consumers in near real time, which is exactly what this scenario demands. A single stream supports multiple consumers, so the several internal applications can each read the same transaction data directly off the stream. AWS Lambda can be configured as one of those consumers via an event source mapping: it processes each batch of records, removes the sensitive fields, and writes the sanitized transactions to Amazon DynamoDB, which satisfies the requirement for low-latency retrieval from a document database.
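The Lambda step in option C can be sketched as a handler for a Kinesis event source mapping. This is a minimal illustration, not a production implementation: the field names in `SENSITIVE_FIELDS` are assumptions, and the DynamoDB write (normally a boto3 `put_item` call) is noted in a comment but omitted so the sketch stays self-contained.

```python
import base64
import json

# Illustrative assumption: which fields count as sensitive is
# application-specific; a real system would define its own policy.
SENSITIVE_FIELDS = {"card_number", "cvv", "ssn"}

def handler(event, context):
    """Handler for a Kinesis Data Streams event source mapping.

    Each Kinesis record arrives base64-encoded. The handler decodes it,
    strips the sensitive fields, and returns the sanitized transactions.
    In a real deployment each sanitized item would then be written to
    DynamoDB (e.g. boto3 Table.put_item); that call is omitted here.
    """
    sanitized = []
    for record in event["Records"]:
        payload = json.loads(base64.b64decode(record["kinesis"]["data"]))
        clean = {k: v for k, v in payload.items() if k not in SENSITIVE_FIELDS}
        sanitized.append(clean)
    return sanitized
```

Because the other internal applications read from the stream itself rather than from DynamoDB, the redaction step only affects what is persisted, not what downstream consumers receive.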

Options A, B, and D each fall short:

  • Option A: DynamoDB has no built-in rule mechanism that transforms or removes data on write. DynamoDB Streams also emits changes only after the item is written, so the sensitive data would already be stored.
  • Option B: Kinesis Data Firehose does not support DynamoDB as a delivery destination, and having other applications consume objects from S3 is batch-oriented rather than near real time because of Firehose's buffering.
  • Option D: Batching transactions into S3 files and processing them with Lambda is inherently batch processing and does not meet the near-real-time requirement.

Therefore, option C is the most suitable solution for this scenario.
