Optimizing Event Processing in AWS: A Case Study on Reducing Latency and Cost

Introduction

Dishank Oza
3 min read · Jul 24, 2024

In today’s fast-paced digital environment, system efficiency and cost management are crucial. In this post, I’ll share my experience optimizing an AWS-based event processing system that was growing increasingly slow and expensive due to a rapidly expanding user base. This case study highlights the problem, the solution, and the lessons learned along the way.

The Challenge

Our system was generating approximately 30 AWS Step Functions events per user per day based on specific actions. AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so that you can build and update applications quickly. However, this volume caused our event bucket to grow extremely fast, leading to significant latency in event processing.

As our user base expanded, editing events became sluggish, adversely affecting the overall user experience. Upon analyzing the system, we discovered that most events were scheduled to execute within the next 30 days, with only 1% of them being executed on the current date.
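For context, a per-action, future-dated execution like the ones described above might look roughly like the sketch below. The state machine ARN, the payload shape, and the assumption that the state machine begins with a Wait state keyed on the timestamp are all illustrative, not the project’s actual code:

```python
import json
from datetime import datetime, timedelta, timezone

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:user-event"


def schedule_event(user_id: str, action: str, days_from_now: int) -> None:
    """Start one execution per user action, scheduled up to 30 days out."""
    execute_at = (datetime.now(timezone.utc) + timedelta(days=days_from_now)).strftime(
        "%Y-%m-%dT%H:%M:%SZ"
    )
    sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps({
            "userId": user_id,
            "action": action,
            # Assumed to feed a Wait state ("TimestampPath": "$.executeAt"),
            # so most executions sit idle until their scheduled day.
            "executeAt": execute_at,
        }),
    )
```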

The Solution

To tackle this, I implemented a scheduled job that runs once a day at a fixed time, preferably between 3 AM and 4 AM. It reads the captured user activity and generates only the events needed for that day. This approach cut the event count by a factor of roughly 30 (about 97%), resulting in significant cost savings and improved performance.
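Below is a minimal sketch of what such a daily job could look like as a Lambda function on an EventBridge schedule. The state machine ARN is a placeholder and fetch_activity_due_on is a hypothetical stand-in for the real activity query:

```python
import json
from datetime import datetime, timezone

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:user-event"


def fetch_activity_due_on(day: str) -> list[dict]:
    """Hypothetical helper: query the application database (MySQL in this
    system) for user activity whose events are due on the given day."""
    raise NotImplementedError


def handler(event, context):
    """Runs once a day, e.g. on an EventBridge schedule between 3 and 4 AM."""
    today = datetime.now(timezone.utc).date().isoformat()

    for record in fetch_activity_due_on(today):
        # Create only the Step Functions events that must run today,
        # instead of pre-creating up to a month's worth per user.
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"userId": record["userId"], "action": record["action"]}),
        )
```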

However, there was a caveat. Since the job ran early in the morning, it did not account for user activity that occurred after the execution time. To address this, I used AWS Database Migration Service (DMS) and Amazon Kinesis.

Implementing AWS DMS and Kinesis

AWS DMS: AWS Database Migration Service helps you migrate databases to AWS quickly and securely. The source database remains fully operational during the migration, minimizing downtime to applications that rely on the database.

Amazon Kinesis: Amazon Kinesis makes it easy to collect, process, and analyze real-time, streaming data. By using Kinesis, you can ingest real-time data such as video, audio, application logs, website clickstreams, and IoT telemetry data for machine learning, analytics, and other applications.
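To show how the two pieces were wired together, here is a hedged sketch of creating a DMS target endpoint that writes into a Kinesis stream and a CDC-only replication task scoped to a single table. Every ARN, identifier, and the schema/table name are placeholders, not the project’s actual resources:

```python
import json

import boto3

dms = boto3.client("dms")

# Placeholder ARNs for illustration only.
KINESIS_STREAM_ARN = "arn:aws:kinesis:us-east-1:123456789012:stream/user-activity-cdc"
DMS_ACCESS_ROLE_ARN = "arn:aws:iam::123456789012:role/dms-kinesis-access"

# Target endpoint: DMS writes the captured change events into Kinesis.
target = dms.create_endpoint(
    EndpointIdentifier="user-activity-kinesis-target",
    EndpointType="target",
    EngineName="kinesis",
    KinesisSettings={
        "StreamArn": KINESIS_STREAM_ARN,
        "MessageFormat": "json",
        "ServiceAccessRoleArn": DMS_ACCESS_ROLE_ARN,
    },
)

# CDC-only task limited to the one MySQL table whose changes we care about.
dms.create_replication_task(
    ReplicationTaskIdentifier="user-activity-cdc",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:source-mysql",
    TargetEndpointArn=target["Endpoint"]["EndpointArn"],
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:replication-instance",
    MigrationType="cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "user-activity-only",
            "object-locator": {"schema-name": "app", "table-name": "user_activity"},
            "rule-action": "include",
        }]
    }),
)
```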

System Flow

1. AWS DMS: Listens to the change logs of specific MySQL tables to detect data changes.
2. DMS Destination Endpoint: Configured to send these change events to Amazon Kinesis.
3. Amazon Kinesis: Streams the events to Amazon SQS (Simple Queue Service).
4. Amazon SQS: Queues the events and triggers AWS Lambda functions.
5. AWS Lambda: Processes the events and creates the necessary AWS Step Functions events.

This setup allowed us to create a highly scalable and loosely coupled system that could handle user activity data in near real-time.
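As a rough illustration of steps 4 and 5 in the flow above, here is a minimal sketch of an SQS-triggered Lambda that turns late-arriving change events into Step Functions executions. The change-record fields assume DMS’s JSON message format, and the state machine ARN and column names are illustrative assumptions:

```python
import json

import boto3

sfn = boto3.client("stepfunctions")

# Placeholder ARN for illustration only.
STATE_MACHINE_ARN = "arn:aws:states:us-east-1:123456789012:stateMachine:user-event"


def handler(event, context):
    """SQS-triggered Lambda: creates the Step Functions events that the
    early-morning batch run could not know about."""
    for message in event["Records"]:            # standard SQS event shape
        change = json.loads(message["body"])    # DMS change record forwarded via Kinesis

        # Assumes DMS's JSON message format:
        # {"data": {...row columns...}, "metadata": {"operation": "insert", ...}}
        operation = change.get("metadata", {}).get("operation")
        if operation not in ("insert", "update"):
            continue

        row = change["data"]
        sfn.start_execution(
            stateMachineArn=STATE_MACHINE_ARN,
            input=json.dumps({"userId": row["user_id"], "action": row["action"]}),
        )
```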

Lessons Learned

1. Efficiency: By generating only the necessary events, we significantly reduced computational overhead and costs.
2. Scalability: Leveraging AWS services like DMS and Kinesis, we built a system capable of scaling with our growing user base.
3. Real-time Processing: Integrating DMS and Kinesis ensured that user activity was processed in near real-time, even after the scheduled system run.

Conclusion

This experience not only deepened my understanding of AWS services like DMS and Kinesis, along with DMS components such as replication instances, tasks, and endpoints, but also underscored the importance of designing efficient, scalable, and cost-effective systems. By addressing the latency and cost issues, we improved our system’s performance and provided a better user experience.

Call to Action

If you found this case study helpful or have any questions, feel free to leave a comment or reach out to me directly. Let’s connect and share our experiences to build more efficient systems together!
