
Senior Data Engineer (Scala)

Remote in Ukraine
Data Software Engineering
Sorry, this position is no longer available

We are seeking a highly skilled remote Senior Data Engineer to join our team and work on a cutting-edge data software engineering project.

In this position, you will design and implement large-scale data processing systems, working with a diverse range of technologies and tools. If you are passionate about data engineering and have experience with Databricks, Scala, Microsoft Azure, Apache Kafka, and Apache Spark, we invite you to apply for this exciting opportunity.

Responsibilities
  • Design and implement large-scale data processing systems using Databricks, Scala, Microsoft Azure, Apache Kafka, and Apache Spark
  • Develop and maintain data pipelines and data storage solutions, ensuring data integrity and reliability
  • Collaborate with cross-functional teams to understand business requirements and design data solutions that meet those requirements
  • Optimize data processing and storage solutions for performance, scalability, and cost-effectiveness
  • Monitor and troubleshoot data processing systems, identifying and resolving issues as they arise
  • Develop and maintain documentation for data processing systems and data storage solutions
  • Stay up to date with emerging trends and technologies in data engineering and recommend new tools and technologies to improve data processing and storage solutions
Requirements
  • A minimum of 3 years of experience in Data Software Engineering, with a strong background in data processing systems and distributed computing
  • Expertise in Databricks, Scala, Microsoft Azure, Apache Kafka, and Apache Spark, with a track record of designing and implementing large-scale data processing systems
  • Strong knowledge of data modeling, data architecture, and data warehousing principles, with the ability to design and implement data pipelines
  • Experience with data streaming and real-time data processing using technologies such as Apache Kafka and Spark Streaming
  • Proficiency in SQL and NoSQL databases, with the ability to design and implement data storage solutions
  • Strong analytical and problem-solving skills, with the ability to troubleshoot complex issues and provide effective solutions
  • Excellent communication skills and the ability to work collaboratively in a team environment
  • Fluent spoken and written English at an Upper-Intermediate level or higher
Nice to have
  • Experience with other big data technologies such as Hadoop, Hive, or Pig
  • Experience with machine learning and data analytics technologies and tools
  • Experience with data visualization tools such as Tableau or Power BI
Benefits
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn
