
On-Site
Full-Time
Chennai, Tamil Nadu
India
About the Role
Hello Techies,
We are conducting F2F interviews on 12th July 2025 for urgent hiring of ETL Data Engineers.
Please find the job description below. Interested candidates who can meet us on 12th July at our Bangalore or Chennai Movate office can apply, and we will call you back to share further details.
Mandatory skills (8+ years of experience in ETL development, with 4+ years in AWS PySpark scripting):
1. Experience deploying and running AWS-based data solutions using services such as S3, Lambda, SNS, and Step Functions.
2. Strong PySpark skills (see the PySpark sketch after this list).
3. Hands-on working knowledge of Python packages such as NumPy and Pandas.
4. Sound knowledge of AWS services is a must.
5. Ability to work as an individual contributor.
6. Good to have: familiarity with metadata management, data lineage, and data governance principles.
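As a rough illustration of the AWS PySpark work described above, here is a minimal sketch of an S3-to-S3 transformation job. The bucket paths, column names, and schema are all hypothetical, not part of the actual role or codebase:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read semi-structured JSON landed in S3 (bucket and prefix are hypothetical).
orders = spark.read.json("s3://example-raw-bucket/orders/2025/07/")

# Basic cleanup: drop malformed rows and derive a revenue column.
cleaned = (
    orders
    .filter(F.col("order_id").isNotNull())
    .withColumn("revenue", F.col("quantity") * F.col("unit_price"))
)

# Write partitioned Parquet back to a curated zone.
(cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/"))
```

In practice a job like this would be submitted via EMR, Glue, or a Lambda/Step Functions trigger, which is where the AWS services listed above come in.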
Good to have:
1. Experience processing large-scale data transformations over both semi-structured and structured data.
2. Experience building data lakes and configuring Delta tables.
3. Good experience with compute and cost optimization.
4. Ability to understand the environment and use case, and to build holistic data integration frameworks.
5. Good experience with MWAA (Airflow orchestration); see the DAG sketch after this list.
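For the MWAA point, here is a minimal Airflow DAG sketch, assuming an Airflow 2.4+ environment (as provisioned by recent MWAA versions). The DAG id, schedule, and task callable are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def transform_orders():
    # Hypothetical placeholder: in practice this step might submit a
    # PySpark job (e.g. via an EMR or Glue operator) rather than run
    # the transformation in-process.
    print("running daily orders transformation")


with DAG(
    dag_id="daily_orders_etl",   # hypothetical DAG name
    start_date=datetime(2025, 7, 1),
    schedule="@daily",           # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(
        task_id="transform_orders",
        python_callable=transform_orders,
    )
```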
Soft skills:
1. Good communication skills for interacting with IT stakeholders and the business.
2. Ability to understand pain points from requirements through to delivery.