Location: Bangalore, Gurgaon, Pune, Mumbai, Delhi, Chennai, Hyderabad, Kolkata, Noida, Ahmedabad
Experience: 3 – 6 Years
Notice Period: Immediate to 15 Days
Job Type: Full-Time
Industry: Technology / Data & Analytics / Cloud Solutions
Overview
We are looking for a Data Engineering Lead who thrives on solving complex data challenges and building scalable, high-performance data solutions. This role is ideal for someone with deep expertise in Databricks, PySpark, SQL, and cloud data platforms such as AWS or Azure. You will lead a team of skilled data engineers to design and implement robust, cloud-native data pipelines and drive a data architecture strategy that supports both real-time and long-term business decisions.
If you are passionate about big data, cloud technologies, and mentoring engineers while building impactful solutions—we want you on our team!
Key Responsibilities
- Lead the design and implementation of scalable data pipelines using Databricks and PySpark
- Build efficient ETL/ELT processes for seamless data ingestion and transformation across systems
- Partner with product and analytics teams to gather business data requirements and translate them into technical solutions
- Architect and optimize cloud-based data lakes and warehouses on AWS or Azure
- Guide and mentor junior data engineers, conduct code reviews, and enforce coding best practices
- Continuously evaluate and integrate new tools, technologies, and frameworks to enhance data workflows
- Ensure data integrity, security, and compliance across all solutions
- Collaborate cross-functionally with data scientists, analysts, and DevOps teams
- Drive performance tuning and operational excellence in deployed pipelines and data infrastructure
- Contribute to defining and executing the long-term data strategy for the organization
Required Skills & Experience
- 4+ years of hands-on experience in Databricks, PySpark, and SQL on AWS or Azure
- Strong programming skills in Python and a solid understanding of software engineering principles
- Solid knowledge of data warehousing concepts, structured/unstructured data, and big data ecosystems
- Proven experience with cloud data lakes and data pipeline development in a production environment
- Expertise in building ETL/ELT workflows, including orchestration and automation
- Experience designing data models that support reporting, analytics, and machine learning
- Familiarity with DevOps practices, CI/CD pipelines, and version control systems (e.g., Git)
- Excellent problem-solving, analytical thinking, and communication skills
- Proven ability to lead and mentor technical teams in a collaborative environment
Preferred Qualifications
- Experience with Apache Airflow, Delta Lake, Snowflake, or Azure Synapse
- Understanding of data governance, lineage, and compliance practices
- Familiarity with Agile/Scrum methodologies
- Exposure to real-time streaming technologies such as Kafka and Spark Streaming
Why Join Us?
- Impactful Work: Be part of a high-impact team shaping data strategy at scale
- Learning Culture: Access to training, certifications, and mentorship programs
- Collaborative Environment: Work with passionate, skilled professionals on innovative data projects
- Cutting-edge Tech Stack: Opportunity to work with the latest in data and cloud technology
- Growth Opportunities: Transparent career path with performance-based growth
Ready to Lead the Future of Data Engineering?
Apply now and bring your vision to life in a data-first organization!