
Hybrid
Contract
Bengaluru, Karnataka
India
About the Role
What You’ll Do:
Develop data pipelines, data frameworks, applications, and APIs using industry best practices, and adapt to new methodologies that give the business increased flexibility and agility (a minimal pipeline sketch follows this list).
Build data systems and data quality monitoring processes, and ensure the health of the big data platform.
Influence and educate the team with your experience, ideas, and learnings.
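To illustrate the pipeline-development and data quality duties above, here is a minimal PySpark sketch. The bucket paths, column names, and the 1% null-rate threshold are hypothetical placeholders, not details from this posting.

```python
# Minimal PySpark pipeline sketch: ingest, cleanse, quality-gate, write.
# All paths, columns, and thresholds below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Ingest raw events from GCS (placeholder path).
raw = spark.read.parquet("gs://example-bucket/raw/orders/")

# Basic cleansing and typing.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .filter(F.col("order_id").isNotNull())
)

# Simple data quality gate: fail the run if too many rows lack a customer_id.
total = clean.count()
missing = clean.filter(F.col("customer_id").isNull()).count()
if total and missing / total > 0.01:
    raise ValueError(f"customer_id null rate {missing / total:.2%} exceeds 1% threshold")

# Write the curated table, partitioned by date for downstream consumers.
(clean.withColumn("dt", F.to_date("order_ts"))
      .write.mode("overwrite")
      .partitionBy("dt")
      .parquet("gs://example-bucket/curated/orders/"))
```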
What You'll Bring:
Bachelor’s degree in Computer Science or related technical field.
8+ years of relevant experience with Scala/Python (PySpark), distributed databases, and Kafka, with solid hands-on experience in multi-threading, functional programming, etc.
A good understanding of CS fundamentals: data structures, algorithms, and problem solving.
Professional hands-on experience in SQL and query optimization.
Experience building frameworks for data ingestion and consumption patterns.
Experience with orchestration tools such as Airflow (preferred), Automic, or AutoSys (see the DAG sketch after this list).
Hands-on experience in data processing and data manipulation.
Expertise with GCP and its data processing tools, platforms, and technologies, such as GCS, Dataproc, DPaaS, BigQuery, and Hive.
Exposure to the Lambda architecture.
Exposure to visualization tools for data reporting, such as Tableau, Power BI, and Looker.
Excellent communication skills for collaborating with teams.
Appreciate productivity and care deeply about helping others work more effectively and efficiently.
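As a companion to the orchestration requirement above, here is a minimal Airflow DAG sketch, assuming Airflow 2.4+ for the `schedule` parameter. The DAG id, task names, and spark-submit commands are hypothetical placeholders.

```python
# Minimal Airflow 2.4+ DAG sketch: run ingestion, then data quality checks.
# DAG id, schedule, and commands below are illustrative placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash import BashOperator

default_args = {
    "owner": "data-eng",
    "retries": 2,
    "retry_delay": timedelta(minutes=10),
}

with DAG(
    dag_id="orders_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    # Submit the Spark ingestion job for the logical date (placeholder command).
    ingest = BashOperator(
        task_id="ingest_orders",
        bash_command="spark-submit jobs/ingest_orders.py --date {{ ds }}",
    )

    # Run the data quality checks only after ingestion completes.
    quality_check = BashOperator(
        task_id="quality_check",
        bash_command="spark-submit jobs/quality_check.py --date {{ ds }}",
    )

    ingest >> quality_check
```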