
Algonomy

Senior Backend/Data Engineer - Python FastAPI | Databricks | Azure Data Engineering

Full-Time, On-Site | Bangalore Urban, Karnataka, India

About the Role

Location: Bangalore
Experience Level: 5+ Years

Job Summary:
We are seeking a highly skilled Backend/Data Engineer with expertise in Python FastAPI, Databricks, and Azure Data Engineering tools. The ideal candidate will have a strong grasp of data structures and algorithms, as well as experience building scalable backend services and data pipelines in a cloud environment. Proficiency with the Medallion Architecture, ADLS Gen2, and ADF is essential.

Key Responsibilities:
Design and implement RESTful APIs and microservices using Python FastAPI for high-performance, data-driven applications (a minimal sketch follows this list).
Architect and develop scalable and robust data pipelines using Azure Databricks and PySpark.
Implement and optimize data workflows following the Medallion Architecture (Bronze, Silver, Gold) for structured data processing (see the PySpark sketch after this list).
Integrate and orchestrate data movement using Azure Data Factory (ADF) and automate ETL/ELT workflows.
Work with Azure Data Lake Storage Gen2 (ADLS Gen2) for efficient storage and access of large-scale data assets.
Ensure high performance and scalability, drawing on strong foundations in data structures, algorithms, design patterns, and distributed systems.
Collaborate with cross-functional teams including data scientists, analysts, and DevOps engineers.
Maintain code quality through unit testing, peer code reviews, and best practices in CI/CD.
Ensure data security, privacy, and compliance across all services and pipelines.
Monitor, debug, and optimize applications and data jobs in production environments.
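
To illustrate the API responsibility above, here is a minimal FastAPI sketch of a data-serving endpoint. The route, model, and in-memory store are hypothetical placeholders, not part of this role's actual codebase:

```python
# Minimal FastAPI sketch; all names and data here are illustrative.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Metric(BaseModel):
    name: str
    value: float

# In-memory stand-in for a real data source (e.g. a curated Gold table).
_metrics: dict[str, float] = {"daily_orders": 1240.0}

@app.get("/metrics/{name}", response_model=Metric)
def read_metric(name: str) -> Metric:
    # Return the named metric, or 404 if it is unknown.
    if name not in _metrics:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=_metrics[name])
```

Run locally with `uvicorn app:app --reload`, assuming the file is saved as app.py.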
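
Likewise, the Medallion responsibilities above might look like this Bronze-to-Silver step on Databricks: raw JSON events are landed as-is in Bronze, then typed, deduplicated, and written to a Silver Delta table. The paths and column names are hypothetical:

```python
# Illustrative Bronze -> Silver refinement step; paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events landed in ADLS Gen2 without transformation.
bronze = spark.read.json("abfss://lake@account.dfs.core.windows.net/bronze/events/")

# Silver: enforce types, drop duplicates and records missing a key.
silver = (
    bronze
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropDuplicates(["event_id"])
    .filter(F.col("event_id").isNotNull())
)

(
    silver.write.format("delta")
    .mode("overwrite")
    .save("abfss://lake@account.dfs.core.windows.net/silver/events/")
)
```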

Required Skills and Qualifications:
Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field.
5+ years of experience in backend development with Python, including production-grade APIs using FastAPI or Flask.
Strong understanding of data structures, algorithms, and problem-solving.
Hands-on experience with Databricks and PySpark for big data processing.
Solid understanding of Medallion Architecture and experience designing layered data models (Bronze/Silver/Gold).
Expertise in Azure Data Factory (ADF), including pipeline orchestration, parameterization, and triggers.
Proficiency with Azure Data Lake Storage Gen2 (ADLS Gen2) and data partitioning strategies.
Familiarity with Delta Lake, versioning, and ACID transaction handling in a distributed context (sketched briefly after this list).
Experience with SQL, performance tuning, and working with structured and semi-structured data.
Knowledge of CI/CD, Git, Docker, and infrastructure-as-code practices.
Solid grasp of cloud-native principles and Azure ecosystem (Databricks, Synapse, Key Vault, etc.).
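
For the Delta Lake item above, a short sketch of its ACID and versioning behavior: a MERGE upsert commits as a single transaction, and earlier versions remain queryable via time travel. The table path and schema are hypothetical, and the target table is assumed to already exist:

```python
# Delta Lake upsert + time travel sketch; path and schema are placeholders.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "/mnt/silver/customers"  # assumed existing Delta table

# MERGE commits atomically: readers see either the old or the new state.
updates = spark.createDataFrame([(1, "alice@new.example")], ["id", "email"])
(
    DeltaTable.forPath(spark, path).alias("t")
    .merge(updates.alias("u"), "t.id = u.id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Time travel: read the table as it was at an earlier committed version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
```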

Preferred Qualifications:
Experience with streaming data frameworks such as Kafka and Spark Structured Streaming (a brief sketch follows this list).
Knowledge of monitoring/logging tools such as Azure Monitor, DataDog, or Prometheus.
Exposure to DevOps tools (Terraform, Azure DevOps, GitHub Actions).
Understanding of MLOps and model deployment using MLflow or similar tools.
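
For the streaming item above, a brief sketch of a Kafka-to-Bronze ingest with Spark Structured Streaming; the broker address, topic, and paths are placeholders:

```python
# Structured Streaming ingest sketch; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the raw topic; key/value arrive as bytes, so cast them to strings.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "clickstream")
    .load()
    .selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value", "timestamp")
)

# Append continuously to a Bronze Delta table, with a checkpoint for recovery.
(
    stream.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/bronze/_checkpoints/clickstream")
    .outputMode("append")
    .start("/mnt/bronze/clickstream")
)
```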

About the Company:
Algonomy helps consumer businesses maximize customer value by automating decisioning across their retail business lifecycle with AI-enabled solutions for eCommerce, Marketing, Merchandising, and Supply Chain. Algonomy is a trusted partner to more than 400 leading brands, with a global presence spanning over 20 countries. Our innovations have garnered recognition from top industry analysts such as Gartner and Forrester. More at www.algonomy.com.
