As part of a long-term collaboration with Otto, one of Germany’s largest e-commerce platforms, I was responsible for building and scaling backend services that process vast amounts of product and search-related data. The goal was to support internal teams with actionable insights, automated classification, and keyword analytics for millions of items. I worked closely with other backend developers and data engineers to improve performance and modernize the underlying architecture.
In the context of large-scale e-commerce, timely and accurate data processing is critical — from crawling third-party sources to analyzing trends and keeping listings up to date. The challenge was twofold: first, legacy batch-based processes could no longer meet performance demands; second, relevant data (like search volume, trends, and classifications) needed to be enriched and delivered across different internal services reliably and at scale.
We developed a collection of high-performance backend services in Python, each focused on a specific task: crawling third-party sources, classifying products, and computing keyword and search-volume analytics.
Initially designed as batch-processing jobs, all critical services were eventually refactored into a Kafka-based architecture, allowing for real-time data streaming, easier scaling, and better fault tolerance.
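A minimal sketch of the streaming pattern we moved towards, here using kafka-python; the topic names, broker address, and enrichment step are placeholders rather than the actual Otto setup:

```python
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "product-events",                      # illustrative topic name
    bootstrap_servers="localhost:9092",    # illustrative broker address
    group_id="enrichment-service",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)

for message in consumer:
    product = message.value
    # Enrich the incoming event; the real logic (classification, keyword data) is not shown here
    product["enriched"] = True
    producer.send("product-events.enriched", value=product)
```

Because each service only consumes and produces topics, new consumers can be added or scaled out without touching the upstream producers.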
All services were written in Python, with a strong focus on performance (e.g., asynchronous tasks, optimized data pipelines, multiprocessing where needed).
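As a rough illustration of that pattern (the classify function and product fields below are hypothetical), CPU-bound steps can be pushed to worker processes while the asyncio event loop remains free for I/O:

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

def classify(product: dict) -> dict:
    # Stand-in for a CPU-bound classification step; the real model is not shown here
    title = product["title"].lower()
    return {**product, "category": "toys" if "lego" in title else "other"}

async def classify_all(products: list[dict]) -> list[dict]:
    loop = asyncio.get_running_loop()
    with ProcessPoolExecutor() as pool:
        # Offload CPU-heavy work to separate processes so the event loop keeps handling I/O
        futures = [loop.run_in_executor(pool, classify, p) for p in products]
        return await asyncio.gather(*futures)

if __name__ == "__main__":
    items = [{"title": "LEGO Star Wars Set"}, {"title": "Garden Hose"}]
    print(asyncio.run(classify_all(items)))
```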
The original system relied on scheduled batch jobs using Celery and cron-like orchestration. As demand grew, we transitioned to a Kafka-based microservice architecture, replacing scheduled full reprocessing runs with services that consume and produce event streams continuously.
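For contrast, the legacy batch style looked roughly like the following Celery sketch; the broker URL, task body, and schedule are illustrative only:

```python
from celery import Celery
from celery.schedules import crontab

app = Celery("enrichment", broker="redis://localhost:6379/0")  # broker URL is illustrative

@app.task(name="enrichment.refresh_search_volumes")
def refresh_search_volumes():
    # Hypothetical nightly batch job: re-crawl keyword search volumes in one large pass
    ...

# Cron-like orchestration via Celery beat: the whole dataset is reprocessed on a schedule,
# rather than reacting to individual events as in the Kafka setup above.
app.conf.beat_schedule = {
    "refresh-search-volumes-nightly": {
        "task": "enrichment.refresh_search_volumes",
        "schedule": crontab(hour=2, minute=0),
    },
}
```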
The result was a highly modular, scalable backend landscape that enabled Otto to make faster decisions and deliver richer product data across the platform.