NAZZTEC

Data Engineer

On-site · Contract · Hyderabad, Telangana, India

Skills

Java · Amazon Web Services (AWS) · Python · Kubernetes · SQL · PySpark · Extract, Transform, Load (ETL) · Apache Kafka · Databricks

About the Role

Job Title: Senior Databricks Data Engineer (AWS Platform)
Location: Hyderabad
Shift: B Shift (12 PM – 10 PM IST)
Experience Required: 5+ Years

Job Summary:
We are seeking a Senior Databricks Data Engineer with strong experience in building and optimizing data pipelines and architectures on AWS using Databricks and PySpark. The ideal candidate will also have hands-on experience in Big Data technologies, real-time streaming, and CI/CD pipelines. This role demands client-facing experience and the ability to operate in a fast-paced environment while delivering high-quality data engineering solutions.

Key Responsibilities:
Design and develop scalable data engineering solutions using Databricks on AWS with PySpark and Databricks SQL.
Build and maintain data pipelines using Delta Lake for batch and streaming data.
Design and implement real-time data streaming applications using Kafka or Kinesis (see the sketch after this list).
Develop and maintain ETL workflows and data warehouse architectures aligned with business goals.
Collaborate with data scientists, analysts, and stakeholders to ensure data quality and integrity.
Use Airflow to orchestrate complex data workflows.
Develop and maintain CI/CD pipelines using Git, Jenkins, Docker, Kubernetes, and Terraform.
Write efficient and reusable code in Python, Java, or Scala.
Engage with clients regularly to gather requirements, provide updates, and build trusted relationships.
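
For illustration, a minimal sketch of the kind of pipeline the streaming responsibility above describes: reading a Kafka topic with PySpark Structured Streaming and appending it to a Delta table. This is not from the posting itself; the broker address, topic name, event schema, and S3 paths are all hypothetical placeholders.

```python
# Minimal sketch: Kafka -> Delta Lake with PySpark Structured Streaming.
# All names (broker, topic, schema, S3 paths) are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-to-delta").getOrCreate()

# Hypothetical event schema; a real pipeline would use the client's data contract.
schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

# Subscribe to the Kafka topic and parse each JSON payload into columns.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "events")                     # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append into a Delta table; the checkpoint makes the sink restartable.
(
    events.writeStream
    .format("delta")
    .option("checkpointLocation", "s3://bucket/checkpoints/events")  # placeholder
    .outputMode("append")
    .start("s3://bucket/delta/events")  # placeholder table path
)
```

On Databricks the same writeStream could target a managed table via .toTable(...) instead of a raw S3 path; the structure of the pipeline is unchanged.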

Required Skills & Experience:
5+ years of experience in Databricks engineering on AWS using PySpark, Databricks SQL, and Delta Lake.
5+ years of experience in ETL, Big Data/Hadoop, and data warehouse design/delivery.
2+ years of hands-on experience with Kafka or Kinesis for real-time data streaming.
4+ years of experience in programming languages such as Python, Java, or Scala.
Experience using Apache Airflow in at least one project for data pipeline orchestration (see the sketch after this list).
At least 1 year of experience developing CI/CD pipelines with Git, Jenkins, Docker, Kubernetes, shell scripting, and Terraform.
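
As a companion to the Airflow requirement above, a minimal sketch of a daily orchestration DAG, assuming Airflow 2.4+ (for the schedule argument). The DAG id, task ids, and the run_step callable are illustrative placeholders, not details from this role.

```python
# Minimal Airflow DAG sketch (assumes Airflow 2.4+ for the `schedule` argument).
# The DAG id, task ids, and run_step body are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def run_step(**context):
    # Placeholder: in practice this might trigger a Databricks job or a
    # PySpark batch step for the logical date being processed.
    print(f"Running step for {context['ds']}")


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=run_step)
    transform = PythonOperator(task_id="transform", python_callable=run_step)
    load = PythonOperator(task_id="load", python_callable=run_step)

    # Simple linear dependency chain: extract, then transform, then load.
    extract >> transform >> load
```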

Professional Attributes:
Willingness to work in B Shift (12 PM – 10 PM IST).
Strong client-facing skills with the ability to build trusted relationships.
Excellent problem-solving and critical-thinking abilities.
Strong communication and collaboration skills in cross-functional teams.

Preferred Qualifications:
AWS certifications or relevant cloud training.
Familiarity with DataOps or MLOps practices.
Experience in Agile or Scrum development methodologies.

