
Remote
Contract
India
Skills
Python (Programming Language)
Kubernetes
Continuous Integration and Continuous Delivery (CI/CD)
Infrastructure as code (IaC)
Amazon S3
Apache Spark
Airflow
Terraform
Git / Bash
Job Opportunity: Sr. DevOps Engineer
Location: Remote (India - IST hours)
Duration: 6-month contract with high potential for extensions
About the Role
Be part of a cutting-edge MLOps team, blending DevOps, software, and platform engineering to support enterprise-level data science applications from an infrastructure standpoint.
Responsibilities:
Deploy services to the cloud
Write code in Terraform, Bash, and Python (some exposure to Scala preferred)
Understand and manage policies, permissions, and security
Deploy new infrastructure to AWS (e.g., Airflow, MLflow)
Create IAM roles, S3 buckets, Glue tables, and databases
Build and deploy CI/CD pipelines to move jobs into production faster
Provide cross-functional access to accounts
Support ad hoc requests and address infrastructure needs
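As a flavor of the Terraform-plus-AWS work described above, the snippet below is a minimal, hypothetical sketch of provisioning an S3 bucket and an IAM role together; all resource and bucket names are illustrative, not taken from this posting.

```terraform
# Hypothetical sketch only: an S3 bucket plus an IAM role that a service
# (e.g., Airflow) could assume. Names are illustrative placeholders.

resource "aws_s3_bucket" "data_lake" {
  bucket = "example-mlops-data-lake" # hypothetical bucket name
}

data "aws_iam_policy_document" "assume_airflow" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["airflow.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "airflow_task" {
  name               = "example-airflow-task-role"
  assume_role_policy = data.aws_iam_policy_document.assume_airflow.json
}
```

In practice, changes like this would go through `terraform plan` and a review step before `terraform apply`, which is where the CI/CD responsibilities above come in.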
Required Qualifications:
7-10+ years of experience in relevant roles
Proficiency in Python and familiarity with command-line tools
Strong knowledge of AWS cloud architecture, including S3, EMR, Glue, and IAM
Ability to understand and modify Terraform scripts
Experience using Git
Solid understanding of CI/CD pipelines (their purpose, use cases, and implementation)
Key Focus: Strong expertise in Automation (Python), Infrastructure as Code (Terraform), and deep AWS knowledge
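To give a flavor of the Python-based automation this role emphasizes, here is a small hedged sketch that programmatically builds a read-only S3 IAM policy document; the helper name and bucket name are purely illustrative assumptions, not part of the posting.

```python
import json


def s3_read_only_policy(bucket: str) -> str:
    """Return an IAM policy JSON granting read-only access to one bucket.

    Illustrative helper only; a real policy would be reviewed for
    least-privilege before being attached to any role.
    """
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["s3:GetObject", "s3:ListBucket"],
                # Both the bucket ARN (for ListBucket) and the object ARN
                # pattern (for GetObject) are needed.
                "Resource": [
                    f"arn:aws:s3:::{bucket}",
                    f"arn:aws:s3:::{bucket}/*",
                ],
            }
        ],
    }
    return json.dumps(policy, indent=2)


print(s3_read_only_policy("example-data-lake"))
```

Generating policy documents in code like this keeps them reviewable and testable, which pairs naturally with the Terraform and CI/CD skills listed above.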
Bonus Skills:
Networking expertise
Kubernetes (EKS) experience
Ability to develop CI/CD pipelines from scratch
Knowledge of Spark and distributed computing
Data engineering expertise
Familiarity with Scala
Why Join Us?
You'll work alongside top-tier talent on impactful projects in a fast-paced environment while building cutting-edge solutions that enhance connectivity and data science capabilities.
Does this sound like your next challenge? Apply now!