
Hybrid • Full-Time
Hyderabad, Telangana, India
Skills
Amazon Web Services (AWS)
Kubernetes
Docker
Amazon S3
Apache Spark
Scala
Messaging
About the Role
Must-Have Skills:
Experience with Apache Spark for big data processing (a brief sketch follows this list)
Proficiency in Java (Spring Boot) and Scala
Hands-on experience with Docker and Kubernetes
Strong knowledge of data stores such as PostgreSQL, and of AWS services (e.g., S3, SQS, Aurora/RDS)
Expertise in Snowflake
Apache Airflow
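
Purely as an illustration of the Spark-and-Scala work listed above (not part of the original posting), a minimal batch-job sketch; the bucket paths, object name, and column names are hypothetical:

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal Spark batch job in Scala: read Parquet from S3, aggregate, write results back to S3.
// Bucket paths and column names below are placeholders, not taken from the role description.
object DailyOrderStats {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("daily-order-stats")
      .getOrCreate()

    // Hypothetical input: one row per order with created_at and amount columns.
    val orders = spark.read.parquet("s3a://example-bucket/orders/")

    val dailyTotals = orders
      .groupBy(to_date(col("created_at")).as("day"))
      .agg(sum(col("amount")).as("total_amount"), count(lit(1)).as("order_count"))

    dailyTotals.write
      .mode("overwrite")
      .parquet("s3a://example-bucket/derived/daily_order_totals/")

    spark.stop()
  }
}
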
Good-to-Have Skills:
Python
Proficiency in SQL and SparkSQL to query and process large datasets
Familiarity with CI/CD pipelines
Hands-on experience with monitoring & alerting tools (Splunk, Sentry, Prometheus, Grafana)
Exposure to functional programming concepts (Cats/Cats IO)
Experience with messaging & streaming using Apache Kafka (essential for streaming projects; see the streaming sketch after this list)
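
As a sketch of the SparkSQL and Kafka streaming items above (again illustrative only, and assuming the spark-sql-kafka connector is on the classpath; the broker address, topic, and view name are hypothetical):

import org.apache.spark.sql.SparkSession

// Minimal Spark Structured Streaming sketch: read a Kafka topic and aggregate it with SparkSQL.
// Broker address, topic name, and columns are placeholders, not taken from the role description.
object ClickCounts {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("click-counts")
      .getOrCreate()

    val raw = spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092") // hypothetical broker
      .option("subscribe", "clicks")                    // hypothetical topic
      .load()

    // Kafka delivers the payload as bytes; expose it (plus the record timestamp) as a SQL view.
    raw.selectExpr("CAST(value AS STRING) AS payload", "timestamp")
      .createOrReplaceTempView("clicks")

    val perMinute = spark.sql(
      """SELECT window(timestamp, '1 minute') AS win, count(*) AS events
        |FROM clicks
        |GROUP BY window(timestamp, '1 minute')""".stripMargin)

    perMinute.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}
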