
Senior Data Software Engineer

Data Software Engineering, Amazon Web Services, Apache Airflow, Apache Spark, CI/CD, Python, SQL

We are seeking a highly skilled Senior Data Software Engineer to join our team and provide critical support to our remote Data Science teams.

As a Senior Data Software Engineer, you will build data marts and provide ad hoc support in a fast-paced, dynamic environment. You will work closely with Data Scientists and other stakeholders to understand their needs and develop solutions that meet their requirements.

Responsibilities
  • Build data marts and data pipelines to support the Data Science teams
  • Provide ad hoc support to Data Scientists and other stakeholders, ensuring the seamless operation of data pipelines and processes
  • Collaborate with cross-functional teams to translate their needs into effective data solutions
  • Design, develop, and maintain efficient and scalable ETL processes
  • Optimize complex SQL queries and database operations
  • Implement and maintain CI/CD pipelines for data engineering tasks
  • Develop and maintain REST APIs for data processing and consumption
  • Collaborate with stakeholders to define project requirements and timelines
  • Provide technical guidance and mentorship to junior team members
Requirements
  • Minimum of 4 years of experience as a Data Software Engineer, working with large datasets and complex data pipelines
  • Expertise in Amazon Web Services, specifically with services such as S3 and EC2
  • Advanced experience with Apache Airflow and Apache Spark for data processing and workflow orchestration
  • Proficiency in Python, with experience writing efficient and scalable code
  • Strong understanding of SQL and relational databases, with experience in designing and optimizing complex queries
  • Experience building CI/CD pipelines with tools such as Jenkins or GitLab
  • Experience with PySpark and REST APIs for data engineering tasks
  • Strong understanding of ETL processes and data modeling
  • Excellent communication skills, with the ability to effectively collaborate with technical and non-technical stakeholders
  • Upper-intermediate English language proficiency, enabling clear communication and collaboration with the team and stakeholders
Nice to have
  • Experience with Redshift and Databricks for data processing and analysis
Benefits
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn
