We are looking for an exceptionally talented Senior Data Software Engineer to join our remote team and support our client's Data Science teams in building datamarts and handling ad-hoc requests.
As a Senior Data Software Engineer, you will work with a group of skilled professionals to build and maintain data pipelines, ETL processes, and REST APIs, ensuring the scalability, efficiency, and reliability of our data solutions. You will also provide on-call support to keep these solutions running smoothly.
Responsibilities
- Collaborate with Data Science teams to construct datamarts and fulfill ad-hoc requests as necessary
- Develop and maintain data pipelines, ETL processes, and REST APIs that enable efficient data processing and delivery (see the sketch after this list)
- Ensure the scalability, efficiency, and reliability of our data solutions
- Provide on-call support to keep our data solutions running smoothly
- Work with cross-functional teams to deliver high-quality data solutions aligned with project objectives and timelines
- Stay current with industry trends and best practices, refining and adopting modern data engineering approaches
- Offer guidance and mentorship to junior team members, nurturing a culture of continuous learning and growth within the team
- Engage directly with clients to understand their requirements and deliver well-suited, efficient solutions
- Collaborate with stakeholders, showcasing outstanding communication and leadership skills
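For illustration, here is a minimal sketch of the extract-transform-load pattern behind this kind of datamart work. It assumes a hypothetical orders CSV and uses SQLite as a stand-in for a real warehouse; all file, table, and column names are invented for the example.

```python
# Minimal ETL sketch: extract a raw CSV, clean it, and load a datamart table.
# Paths, tables, and columns are hypothetical placeholders.
import sqlite3  # stand-in for a real warehouse connection

import pandas as pd

def extract(path: str) -> pd.DataFrame:
    # Read the raw source extract.
    return pd.read_csv(path, parse_dates=["order_date"])

def transform(raw: pd.DataFrame) -> pd.DataFrame:
    # Deduplicate on the business key, then aggregate to a daily revenue mart.
    deduped = raw.drop_duplicates(subset=["order_id"])
    return (
        deduped.groupby(deduped["order_date"].dt.date)["amount"]
        .sum()
        .reset_index(name="daily_revenue")
    )

def load(mart: pd.DataFrame, conn: sqlite3.Connection) -> None:
    # Replace the mart table wholesale so reruns stay idempotent.
    mart.to_sql("daily_revenue_mart", conn, if_exists="replace", index=False)

if __name__ == "__main__":
    with sqlite3.connect("warehouse.db") as conn:
        load(transform(extract("orders.csv")), conn)
```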
Requirements
- A minimum of 3 years of hands-on experience in Data Software Engineering, contributing to large-scale data projects and complex data infrastructures
- Demonstrated expertise in building and maintaining data pipelines, ETL processes, and REST APIs
- Proficiency in Amazon Web Services, with a focus on data services such as Redshift, S3, and Glue
- Strong experience with Apache Airflow and Apache Spark for data processing and pipeline automation (a minimal DAG sketch follows this list)
- Proficiency in Python and SQL for data processing
- Experience with Databricks and PySpark for efficient pipeline automation
- Familiarity with CI/CD tools to ensure the streamlined delivery of data solutions
- Strong analytical skills for effective troubleshooting and decision-making in complex data environments
- Ability to convey technical concepts clearly to a non-technical audience
- Upper-Intermediate English proficiency (B2) or higher, enabling effective written and verbal collaboration in team meetings and discussions with stakeholders
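As a hedged illustration of the Airflow experience above, the sketch below shows a minimal TaskFlow-style DAG wiring extract, transform, and load steps. It assumes Airflow 2.4+ (for the `schedule` argument); the dag_id, schedule, paths, and task bodies are placeholders rather than a real pipeline.

```python
# Minimal Airflow 2.x DAG sketch: a daily extract -> transform -> load chain.
# The dag_id, schedule, and task bodies are illustrative placeholders.
from datetime import datetime

from airflow.decorators import dag, task

@dag(
    dag_id="daily_orders_datamart",
    schedule="@daily",  # requires Airflow 2.4+; use schedule_interval on older versions
    start_date=datetime(2024, 1, 1),
    catchup=False,
)
def daily_orders_datamart():
    @task
    def extract() -> str:
        # In practice this step might land raw files in S3.
        return "s3://example-bucket/raw/orders/"

    @task
    def transform(raw_path: str) -> str:
        # A Spark or Databricks job would typically run here.
        return raw_path.replace("raw", "curated")

    @task
    def load(curated_path: str) -> None:
        # e.g. trigger a COPY of the curated data into the warehouse.
        print(f"loading {curated_path}")

    load(transform(extract()))

daily_orders_datamart()
```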
Nice to have
- Experience with Redshift for data warehousing and management (sketched below)
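For context, a brief sketch of what such Redshift loading often looks like: the standard COPY command pulling curated Parquet files from S3, run here through psycopg2. The cluster endpoint, credentials, table, bucket, and IAM role are all placeholders.

```python
# Hedged sketch: load curated S3 data into Redshift with COPY.
# Endpoint, credentials, table, bucket, and IAM role are placeholders.
import psycopg2

COPY_SQL = """
COPY analytics.daily_revenue_mart
FROM 's3://example-bucket/curated/orders/'
IAM_ROLE 'arn:aws:iam::123456789012:role/example-redshift-load'
FORMAT AS PARQUET;
"""

def load_to_redshift() -> None:
    with psycopg2.connect(
        host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
        port=5439,
        dbname="analytics",
        user="loader",
        password="...",  # fetch from a secrets manager in practice
    ) as conn:
        with conn.cursor() as cur:
            cur.execute(COPY_SQL)  # the context manager commits on success

if __name__ == "__main__":
    load_to_redshift()
```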
Benefits
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn