Senior Data Software Engineer
Remote in India
Data Software Engineering

We are seeking a highly skilled Senior Data Software Engineer to join our remote team and support our client's Data Science teams in building datamarts and fulfilling ad-hoc data requests as needed.
As a Senior Data Software Engineer, you will work with a team of talented professionals to develop and maintain data pipelines, ETL processes, and REST APIs. You will ensure the scalability, efficiency, and reliability of these data solutions and provide on-call support to keep them operating seamlessly.
Responsibilities
- Work with the Data Science teams to build datamarts and fulfill ad-hoc data requests as needed
- Develop and maintain data pipelines, ETL processes, and REST APIs for efficient data processing and delivery
- Ensure the scalability, efficiency, and reliability of the data solutions
- Provide on-call support to ensure the seamless operation of the data solutions
- Collaborate with cross-functional teams to deliver high-quality data solutions in line with project goals and timelines
- Continuously evaluate industry trends and best practices to refine and implement effective data engineering strategies
- Guide and mentor junior team members, fostering a culture of growth and continuous learning within the group
- Work directly with clients to understand their needs and deliver effective, tailored solutions
- Collaborate with stakeholders, demonstrating excellent communication and leadership skills
Requirements
- Minimum of 3 years of experience in Data Software Engineering, working on large-scale data projects and complex data infrastructures
- Proven experience in building and maintaining data pipelines, ETL processes, and REST APIs
- Expertise in Amazon Web Services, specifically with data-related services such as Redshift, S3, and Glue
- Solid experience with Apache Airflow and Apache Spark, using them for data processing and pipeline automation
- Proficiency in Python and SQL for data processing purposes
- Experience with Databricks and PySpark for pipeline automation
- Experience with CI/CD tools for efficient delivery of data solutions
- Strong analytical skills, enabling effective problem-solving and decision-making in complex data environments
- Ability to effectively communicate technical ideas to a non-technical audience
- English language skills at an Upper-Intermediate level or higher, enabling effective written and spoken collaboration in meetings and discussions with the team and stakeholders
Nice to have
- Experience with Redshift for data warehousing and management
Benefits
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn