
Senior Data Engineer (Scala)

Remote in Ukraine
Data Software Engineering & 8 others
Sorry, this position is no longer available

We are seeking a highly skilled remote Senior Data Engineer to join our team and work on cutting-edge data software engineering projects.

As a leading provider of data-driven insights, we are looking for a candidate with expertise in Databricks, Microsoft Azure, SQL, data modeling, Apache Spark, and Scala (both the core language and data-oriented programming). In this role, you will design, develop, and maintain our data infrastructure, ensuring the reliability and scalability of our data solutions. If you are passionate about big data and enjoy tackling complex data engineering challenges, we invite you to apply.

Responsibilities
  • Design, develop, and maintain our data infrastructure, ensuring the reliability and scalability of our data solutions
  • Create and maintain data pipelines, data lakes, and data warehouses using ETL/ELT tools and techniques
  • Develop and implement data models, database schemas, and data access layers
  • Collaborate with cross-functional teams to identify and analyze business requirements, translating them into technical solutions
  • Optimize data processing and storage, ensuring high performance and scalability
  • Develop and maintain automated tests, ensuring the quality and accuracy of data solutions
  • Provide technical guidance and mentorship to junior team members
Requirements
  • A minimum of 3 years of experience in Data Software Engineering, demonstrating your proficiency in designing and implementing complex data solutions
  • Expertise in Databricks, Microsoft Azure, SQL, data modeling, Apache Spark, and Scala (both the core language and data-oriented programming)
  • Experience in building and maintaining data pipelines, data lakes, and data warehouses, utilizing ETL/ELT tools and techniques
  • Proficiency in data modeling and database design, including schema design, optimization, and performance tuning
  • Strong understanding of distributed computing principles and experience with distributed systems such as Hadoop, Spark, or Cassandra
  • Excellent communication skills and ability to work collaboratively with cross-functional teams
  • Ability to work independently and manage projects effectively, delivering high-quality work on time
  • Spoken and written English at an Upper-Intermediate (B2) level or higher
Nice to have
  • Familiarity with machine learning and data analytics frameworks, such as TensorFlow or PyTorch
  • Knowledge of data governance and security best practices
  • Experience in working with NoSQL databases, such as MongoDB or Cassandra
Benefits
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn
