We are looking for an exceptionally talented Senior Data Software Engineer to join our remote team.
Your primary role will be to support our client's Data Science teams by building data marts and handling ad-hoc requests as they arise.
As a seasoned Senior Data Software Engineer, you will design and maintain cutting-edge data pipelines, ETL processes, and REST APIs alongside a team of skilled professionals. You will be responsible for ensuring the scalability, efficiency, and reliability of our data solutions, and you will provide on-call support to keep them running smoothly.
Responsibilities
- Collaborate with Data Science teams, build data marts, and address ad-hoc requests
- Build and maintain data pipelines, ETL processes, and REST APIs to optimize data processing and delivery
- Ensure the scalability, efficiency, and reliability of our data solutions
- Provide on-call support for uninterrupted operation of data solutions
- Work with cross-functional teams to deliver high-quality data solutions aligned with project objectives and timelines
- Continuously assess industry trends and implement cutting-edge data engineering strategies
- Provide guidance and mentorship to junior team members, fostering a culture of growth and continuous learning
- Engage directly with clients to understand their requirements and deliver tailored, efficient solutions
- Work closely with stakeholders, showcasing exceptional communication and leadership skills
Requirements
- At least 3 years of hands-on experience in Data Software Engineering, specializing in large-scale data projects and complex data infrastructures
- Proven track record in constructing and maintaining data pipelines, ETL processes, and REST APIs
- Expertise in Amazon Web Services, particularly in data-centric services such as Redshift, S3, and Glue
- Solid proficiency in Apache Airflow and Apache Spark, utilizing them for data processing and pipeline automation
- Fluency in Python and SQL for data processing purposes
- Experience with Databricks and PySpark for pipeline automation
- Familiarity with CI/CD tools to facilitate the efficient delivery of data solutions
- Robust analytical skills, enabling effective problem-solving and decision-making in complex data environments
- Capability to communicate technical concepts effectively to a non-technical audience
- English proficiency at Upper-Intermediate level or higher, enabling clear written and spoken communication in team meetings and discussions with stakeholders
Nice to have
- Hands-on experience with Redshift for data warehousing and management
Benefits
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn