
On-Site
Contract
Bengaluru, Karnataka, India
Skills
Python (Programming Language)
Data Warehousing
Extract, Transform, Load (ETL)
Data Modeling
Data Pipelines
Modeling Tools
Cloud Security
Cross-functional Collaboration
About the Role
ONLY APPLY IF:
You are available to join in Bangalore and work from the office.
You have prior experience at startup and product-based companies.
You are available to join within 15-20 days.
You hold a B.Tech from a Tier-I/II institute.
Basic Qualifications
Bachelor’s Degree in Computer Science, Engineering, or a related technical field from a Tier-I/II institution.
Proven experience as a Data Engineer (3+ years) with expertise in ETL techniques is a must.
Strong programming skills in languages such as Python, Java, or Scala.
Ability to scrape and transform data from publicly available web sources.
Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
Proficiency in SQL and experience working with relational and non-relational databases.
Knowledge of data warehousing concepts and architectures.
Familiarity with big data technologies such as Hadoop, Spark, and Kafka.
Experience with data modeling tools and techniques.
Excellent problem-solving and analytical skills.
Strong communication and collaboration skills.
Preferred Qualifications
Master's degree or equivalent in Computer Science or Data Science.
Knowledge of data streaming and real-time processing.
Familiarity with data governance and security best practices.
Key Responsibilities:
ETL Development:
Design, develop, and maintain efficient ETL processes for data sets of varying scale (see the sketch below).
Implement and optimize data transformation and validation processes to ensure data accuracy and consistency.
Collaborate with cross-functional teams to understand data requirements and business logic.
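As an illustration of the kind of ETL step this role involves, here is a minimal Python sketch that extracts a CSV, applies basic cleaning and a validation gate, and loads the result into a warehouse table. The connection string, the orders table, and the order_id column are hypothetical placeholders, not part of a prescribed stack.

import pandas as pd
from sqlalchemy import create_engine

# Hypothetical warehouse connection, for illustration only.
engine = create_engine("postgresql://user:password@warehouse-host:5432/analytics")

def run_etl(source_csv: str) -> None:
    # Extract: read the raw source file.
    df = pd.read_csv(source_csv)

    # Transform: normalize column names and drop exact duplicates.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df = df.drop_duplicates()

    # Validate: a basic data-quality gate (assumes an order_id column exists).
    if df["order_id"].isna().any():
        raise ValueError("order_id must not be null")

    # Load: append the validated rows into the warehouse table.
    df.to_sql("orders", engine, if_exists="append", index=False)

In practice a step like this would run inside an orchestrator rather than as a standalone script, but the extract-transform-validate-load shape is the same.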
Data Pipeline Architecture:
Architect, build, and maintain scalable, high-performance data pipelines.
Evaluate and implement new technologies to enhance data pipeline efficiency and reliability.
Build pipelines that extract data through scraping for ad-hoc, sector-specific datasets (see the sketch below).
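One way such a scraping extraction might look, as a minimal Python sketch: the URL, the assumption of a single HTML table on the page, and the requests/BeautifulSoup choice are all illustrative, not a mandated toolchain.

import requests
from bs4 import BeautifulSoup

# Placeholder URL; real targets depend on the sector-specific dataset.
URL = "https://example.com/sector-report"

def scrape_table(url: str) -> list[dict]:
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")

    table = soup.find("table")
    if table is None:  # page layout changed or no table present
        return []

    headers = [th.get_text(strip=True) for th in table.find_all("th")]
    rows = []
    for tr in table.find_all("tr")[1:]:  # skip the header row
        cells = [td.get_text(strip=True) for td in tr.find_all("td")]
        if len(cells) == len(headers):
            rows.append(dict(zip(headers, cells)))
    return rows

print(scrape_table(URL))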
Data Modelling:
Develop and implement data models to support analytics and reporting needs (see the sketch below).
Optimize database structures for performance and scalability.
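A hypothetical star-schema fragment, expressed with SQLAlchemy Core, to illustrate the kind of modelling involved; the dim_customer and fact_orders tables and their columns are invented for the example.

from sqlalchemy import (
    MetaData, Table, Column, Integer, String, Date, Numeric, ForeignKey
)

metadata = MetaData()

# Dimension table: one row per customer (hypothetical schema).
dim_customer = Table(
    "dim_customer", metadata,
    Column("customer_key", Integer, primary_key=True),
    Column("customer_name", String(200)),
    Column("sector", String(100)),
)

# Fact table: one row per order, keyed to the dimension for fast joins.
fact_orders = Table(
    "fact_orders", metadata,
    Column("order_id", Integer, primary_key=True),
    Column("customer_key", Integer, ForeignKey("dim_customer.customer_key")),
    Column("order_date", Date, index=True),  # indexed for time-range queries
    Column("amount", Numeric(12, 2)),
)

# metadata.create_all(engine) would emit the DDL against a target database.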
Data Quality and Governance:
Implement data quality checks and governance processes to ensure data integrity (see the sketch below).
Collaborate with stakeholders to define and enforce data quality standards.
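A minimal sketch of automated quality checks in Python with pandas; the rules and the 1% null-rate threshold are illustrative assumptions, since the actual standards would be agreed with stakeholders.

import pandas as pd

def check_quality(df: pd.DataFrame) -> list[str]:
    failures = []
    if df["order_id"].duplicated().any():
        failures.append("duplicate order_id values")
    if (df["amount"] < 0).any():
        failures.append("negative amounts")
    null_rate = df["customer_key"].isna().mean()
    if null_rate > 0.01:  # tolerate at most 1% missing keys (assumed threshold)
        failures.append(f"customer_key null rate {null_rate:.1%} exceeds 1%")
    return failures

df = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [10.0, -5.0, 3.0],
    "customer_key": [1, None, 2],
})
print(check_quality(df))  # all three rules fail on this toy frame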
Documentation and Communication:
Document ETL processes, data models, and other relevant information.
Communicate complex technical concepts to non-technical stakeholders effectively.
Cross-functional collaboration:
Collaborate internally with the Quant team and developers to build and optimize the data pipelines, and externally with stakeholders to understand the business requirements for enriching the cloud database.