Senior Big Data Engineer (Scala)
Remote in Ukraine
Data Software Engineering

We are seeking a highly skilled Senior Big Data Engineer to join our remote team and work on a cutting-edge data software engineering project.
In this position, you will design, build, and maintain large-scale data processing systems, drawing on your expertise in Databricks, Scala, Microsoft Azure, Apache Spark, Java, Azure Data Factory, Oracle Cloud, and related technologies. You will play a vital role in shaping the project's architecture and design, collaborating with cross-functional teams to ensure its success.
Responsibilities
- Design, build, and maintain large-scale data processing systems, ensuring high performance, scalability, and reliability
- Collaborate with cross-functional teams to identify business requirements and design solutions that meet project objectives
- Develop and maintain complex data pipelines, ensuring data accuracy, completeness, and timeliness
- Create and maintain data models and schemas, optimizing for performance and scalability
- Design and implement data security and privacy policies, ensuring compliance with regulatory requirements
- Provide technical guidance and mentorship to junior team members, sharing your expertise and best practices
- Stay up to date with emerging trends and technologies in Big Data engineering, sharing your insights and knowledge with the team
Requirements
- A minimum of 3 years of experience in Data Software Engineering, with a focus on Big Data
- Extensive knowledge of Big Data technologies such as Databricks, Apache Spark, Azure Data Factory, and Oracle Cloud
- Experience in designing and building large-scale data processing systems, including data ingestion, storage, and analysis
- Proficiency in programming languages such as Scala, Java, or Python for developing and maintaining complex data pipelines
- Strong experience in data modeling and schema design, with a focus on performance and scalability
- Excellent problem-solving skills and the ability to analyze complex data sets to identify patterns and insights
- Strong focus on teamwork and the ability to collaborate effectively with cross-functional teams
- Ability to work independently and manage stress effectively, maintaining a high level of performance even under pressure
- Fluent spoken and written English at an Upper-Intermediate level or higher (B2+)
Nice to have
- Experience with other Big Data technologies such as Hadoop, Hive, or Pig
- Experience with NoSQL databases such as MongoDB or Cassandra
- Experience with data visualization tools such as Tableau or Power BI
- Experience with machine learning frameworks such as TensorFlow or PyTorch
- Experience with containerization technologies such as Docker or Kubernetes
Benefits
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn