Join our Corporate Data Engineering team as a Senior Data Engineer and help build cutting-edge data solutions and applications that drive essential business decisions across the organization. We are seeking a forward-thinking, detail-oriented engineer who thrives on building scalable systems.
Responsibilities
- Collaborate with cross-functional teams to design, develop, and implement data solutions and applications that align with business needs
- Build and maintain scalable data pipelines for efficient data collection, processing, and storage
- Perform data modeling and transformation to facilitate data analysis and reporting
- Implement best practices for data quality, security, and compliance
- Contribute to the enhancement of data engineering processes through continuous improvement and adoption of CI/CD practices
- Collaborate within an Agile environment, participating in sprint planning, daily stand-ups, and other Agile ceremonies
Requirements
- A minimum of 3 years of experience in Data Engineering or a similar role, with a track record of developing and optimizing data solutions
- Proficiency in Python for building robust, scalable data pipelines
- Strong familiarity with AWS services for data processing and storage
- Expertise in Databricks for efficient data processing and analysis
- Proficiency in Apache Spark or Hive for large-scale data processing
- Knowledge of CI/CD practices for efficient, reliable software delivery
- Proficiency in SQL for querying and manipulating data
- Experience with Terraform for managing and automating infrastructure
- Strong understanding of Agile methodologies and experience working in dynamic, collaborative teams
- Fluent English communication skills (B2+ level)
Nice to have
- Familiarity with containerization and orchestration tools such as Docker and Kubernetes
- Knowledge of streaming data processing technologies like Apache Kafka or Apache Flink
- Experience with data warehousing solutions like Amazon Redshift or Snowflake
- Understanding of machine learning concepts and their integration into data engineering pipelines
Benefits
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling, and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn