Job Description
This role is for one of Weekday's clients.
Salary range: Rs 20,00,000 – Rs 37,00,000 (i.e., INR 20–37 LPA)
Minimum Experience: 3 years
Location: Bangalore
Job Type: Full-time
We are seeking a highly skilled and motivated Software Engineer with a strong background in Kafka Streams to join our dynamic engineering team. The ideal candidate will have hands-on experience in designing, developing, and maintaining high-throughput, low-latency stream processing systems using Apache Kafka and Kafka Streams. You will be responsible for building real-time data pipelines and stream processing applications that power mission-critical systems.
Requirements
Key Responsibilities:
- Design and develop robust, scalable, and high-performance stream processing applications using Kafka Streams.
- Implement real-time data pipelines, integrating multiple data sources with a focus on high availability and fault tolerance.
- Collaborate with data engineers, backend developers, and architects to design system architectures and data models that align with business goals.
- Optimize stream processing jobs for performance, scalability, and reliability.
- Develop monitoring and alerting mechanisms to ensure smooth operations of Kafka-based systems.
- Write clean, maintainable, and well-documented code that adheres to engineering best practices.
- Participate in code reviews, design discussions, and contribute to technical documentation.
- Troubleshoot and resolve issues related to stream processing and data integration.
- Stay up to date with the latest advancements in the Kafka ecosystem and incorporate best practices into the development workflow.
Required Skills and Qualifications:
- Minimum 3 years of hands-on experience as a Software Engineer, with at least 2 years working on Kafka Streams.
- Strong programming skills in Java or Scala (Java preferred).
- In-depth understanding of stream processing paradigms and event-driven architectures.
- Experience with the Kafka ecosystem including Kafka Connect, Kafka Streams, and Schema Registry.
- Solid understanding of distributed systems concepts and experience working with scalable microservices.
- Familiarity with serialization formats like Avro, Protobuf, or JSON.
- Experience with RESTful APIs and integration of external services/data sources.
- Knowledge of CI/CD pipelines and automated testing practices.
- Proficiency with tools such as Git, Jenkins, and Docker, and with monitoring tools like Prometheus and Grafana.
Preferred Qualifications:
- Experience with Confluent Kafka or enterprise-grade Kafka deployments.
- Exposure to cloud environments like AWS, GCP, or Azure.
- Knowledge of other stream processing frameworks like Apache Flink or Spark Streaming is a plus.
- Familiarity with container orchestration tools like Kubernetes is a bonus.
Key Skills Required
- Apache Flink
- Apache Kafka
- Apache Spark (Spark Streaming)
- Avro
- AWS
- Azure
- CI/CD
- Data Integration
- Data Pipelines
- Docker
- Fault Tolerance
- GCP
- Git
- Grafana
- High Availability
- Java
- Jenkins
- JSON
- Kafka Connect
- Kafka Streams
- Kubernetes
- Microservices
- Prometheus
- Protobuf
- RESTful APIs
- Scala
- Scalability
- Schema Registry
- Stream Processing
- Technical Documentation