
Lead Data Engineer

Remote in India
Data Software Engineering
Sorry, this position is no longer available

We are seeking a dynamic remote Lead Data Engineer to join our Corporate Data Engineering team. In this role, you will spearhead the development of cutting-edge data solutions and applications that empower crucial business decisions organization-wide. If you are a forward-thinking, structured engineer with a passion for building scalable systems, we invite you to apply and be a key player in shaping our data engineering initiatives.

Responsibilities
  • Lead the design, development, and implementation of world-class data solutions and applications, ensuring scalability and high performance
  • Collaborate closely with cross-functional teams to understand business requirements and translate them into innovative data engineering strategies
  • Provide mentorship and guidance to the data engineering team, fostering professional growth and knowledge sharing
  • Oversee end-to-end data pipeline development, from data ingestion to transformation and storage
  • Establish and enforce best practices for data engineering, ensuring data quality, security, and compliance
  • Contribute to the continuous improvement of data engineering processes by adopting Agile methodologies and driving CI/CD
Requirements
  • Minimum of 5 years of relevant experience in Data Engineering or a similar role, showcasing expertise in building and optimizing data solutions
  • At least 1 year of direct leadership and team management experience, demonstrating the ability to lead and inspire a technical team
  • Proficiency in Python for the development of efficient and scalable data solutions
  • Strong familiarity with AWS, utilizing cloud resources to optimize data processing and storage
  • Experience with data warehousing solutions like Amazon Redshift or Snowflake
  • Expertise in Databricks, contributing to efficient data processing and analysis
  • Proficiency in either Apache Spark or Hive for large-scale data processing and analysis
  • Knowledge of streaming data processing technologies like Apache Kafka or Apache Flink
  • Familiarity with CI/CD practices, promoting efficient and reliable software development processes
  • Proficiency in SQL for effective data querying and manipulation
  • Experience with Terraform, enhancing the ability to manage and automate infrastructure
  • Strong understanding of Agile methodologies, enabling collaborative work within dynamic teams
  • English communication skills at a B2 level or higher, facilitating effective collaboration
Nice to have
  • Familiarity with containerization and orchestration tools such as Docker and Kubernetes
  • Understanding of machine learning concepts and their integration into data engineering pipelines
Benefits
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn