
On-Site
Full-Time
Bengaluru, Karnataka, India
About the Role
Role: Senior Engineer – Data Platform
Experience: 3 to 7 years
Location: Bengaluru, on-site
Reports To: VP of Engineering
About Astuto
We aim to build an AI-first Cloud Efficiency Platform that helps businesses maximize the ROI on public cloud investments.
As cloud infrastructure and services grow in complexity and cost, this space presents an enormous opportunity: over $600 billion is being spent on the public cloud, and an estimated $180 billion of that is wasted. In addition, the lack of expertise to keep up with that complexity and scale is a major challenge for organizations.
Astuto is growing its engineering team in Bengaluru and is looking for exceptional engineers to drive this vision.
Job Overview:
We are seeking an experienced Data Engineer with a passion for building scalable, modular ELT/ETL systems. You will be central to developing and enhancing a platform designed to handle vast data volumes with fast, reliable ingestion and processing. If you are a problem-solver and a team player with expertise in building and optimizing data pipelines, we would like to meet you.
Key Responsibilities:
Own and operate the end-to-end data pipeline platform, ensuring high availability and resilience.
Design, develop, and deploy new ELT/ETL pipelines to meet evolving product requirements (see the illustrative sketch after this list).
Lead efforts to enhance the platform's architecture, targeting a 10x scalability increase.
Proactively monitor, troubleshoot, and optimize pipeline performance, focusing on ingestion and processing efficiency.
Collaborate effectively within a cross-functional team environment.
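To give candidates a concrete feel for the kind of pipeline work described above, here is a minimal, illustrative PySpark batch ELT sketch. The bucket paths, column names, and aggregation are hypothetical examples chosen for this posting, not Astuto's actual pipelines.

    # Illustrative only: a minimal PySpark batch ELT job.
    # Paths, schema, and table names are hypothetical, not Astuto's actual pipelines.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-elt-job").getOrCreate()

    # Extract: read raw usage records landed as JSON (hypothetical path).
    raw = spark.read.json("s3a://example-bucket/raw/usage/")

    # Transform: derive a usage date and aggregate daily cost per account.
    daily_cost = (
        raw.withColumn("usage_date", F.to_date("timestamp"))
           .groupBy("account_id", "usage_date")
           .agg(F.sum("cost").alias("total_cost"))
    )

    # Load: write the curated table partitioned by date (hypothetical warehouse path).
    daily_cost.write.mode("overwrite").partitionBy("usage_date").parquet(
        "s3a://example-bucket/curated/daily_cost/"
    )

    spark.stop()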
Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field.
3-5 years of experience focused on building and managing data pipelines and ELT systems.
Strong Python programming skills applied to data engineering tasks.
Mandatory hands-on experience with PySpark.
Solid working knowledge of streaming technologies (Kafka/Pulsar) and caching (Redis); a streaming sketch follows this list.
Familiarity with monitoring tools like Grafana and Prometheus.
Practical experience with Apache Flink is a plus.
Proven ability to work independently and deliver results.
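The streaming side of the stack can be illustrated with a minimal PySpark Structured Streaming sketch that reads from Kafka. The broker address, topic name, and sink paths are hypothetical; a production pipeline would add schema enforcement, error handling, and metrics exposed to Grafana/Prometheus.

    # Illustrative only: a minimal PySpark Structured Streaming job reading from Kafka.
    # Broker, topic, and paths are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("example-streaming-ingest").getOrCreate()

    # Read events from a Kafka topic (requires the spark-sql-kafka package on the classpath).
    events = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
        .option("subscribe", "billing-events")              # hypothetical topic
        .load()
    )

    # Kafka delivers key/value as binary; cast the value to string for downstream parsing.
    parsed = events.select(F.col("value").cast("string").alias("payload"))

    # Write micro-batches to a durable sink, with checkpointing for fault-tolerant recovery.
    query = (
        parsed.writeStream.format("parquet")
        .option("path", "s3a://example-bucket/stream/billing/")             # hypothetical sink
        .option("checkpointLocation", "s3a://example-bucket/checkpoints/billing/")
        .start()
    )

    query.awaitTermination()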