Lead Big Data Engineer with Databricks
Data Software Engineering, Amazon Web Services, Apache Airflow, Apache Spark, Databricks, Python, Scala
We are looking for a skilled remote Lead Big Data Engineer with Databricks experience to join our team.
Your primary responsibility will be to lead the development of our ETL pipelines on Databricks using Spark/Scala and AWS. Strong experience with Databricks, AWS, Python, and Elastic is required.
Responsibilities
- Lead the development of ETL pipelines on Databricks using Spark/Scala and AWS
- Work closely with cross-functional teams to define data requirements
- Develop and maintain scalable data infrastructure
- Ensure the reliability, efficiency, and scalability of data pipelines
- Coach and mentor other members of the team
Requirements
- At least 5 years of experience in Big Data Engineering
- 1+ years of relevant leadership experience
- Expertise in Databricks, AWS, Python, and Elastic
- Deep understanding of Spark/Scala and data architecture principles
- Knowledge of data models, ETL design, implementation, and maintenance
- Strong communication skills and ability to collaborate with cross-functional teams
- B2+ English level
Nice to have
- Experience with Apache Airflow
- Familiarity with Agile methodologies and SDLC practices
Benefits
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling, and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn