Senior Data DevOps Engineer

Remote in Colombia and 6 other locations
Data DevOps and 7 other categories

We are seeking a Senior Data DevOps Engineer to join our remote team on a cutting-edge project: developing and maintaining large-scale big data infrastructure.

In this role, you will be central to ensuring the reliability, scalability, and performance of our big data infrastructure. You will work closely with cross-functional teams, including data scientists, data engineers, and software developers, to deploy, operate, monitor, optimize, and troubleshoot it.

Responsibilities
  • Develop and maintain infrastructure as code for big data components, using tools such as Terraform, Kubernetes, and Helm (a minimal sketch of this kind of work follows this list)
  • Deploy, operate, monitor, optimize, and troubleshoot large-scale big data infrastructure, ensuring high availability, reliability, and performance
  • Collaborate with cross-functional teams to design, implement, and maintain data pipelines and processing workflows
  • Automate data processing tasks, using shell scripts, Python, and other programming languages
  • Ensure compliance with security and data privacy policies and regulations
  • Participate in on-call rotation to provide 24/7 support for critical production systems
  • Continuously improve the performance, scalability, and reliability of our big data infrastructure, using monitoring and alerting tools
  • Provide technical guidance and mentorship to junior team members
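
As an illustration of the infrastructure-as-code work in the first bullet above, here is a minimal sketch using the AWS CDK for Python, one of the tools named in the requirements. The stack name, bucket name, and settings are hypothetical placeholders, not part of any actual project codebase.

    from aws_cdk import App, Stack, RemovalPolicy, aws_s3 as s3
    from constructs import Construct

    class DataLakeStack(Stack):
        """Hypothetical stack provisioning a raw-data landing bucket."""

        def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
            super().__init__(scope, construct_id, **kwargs)

            # Versioned, encrypted bucket for incoming raw data.
            s3.Bucket(
                self,
                "RawDataBucket",
                versioned=True,
                encryption=s3.BucketEncryption.S3_MANAGED,
                removal_policy=RemovalPolicy.RETAIN,  # keep data if the stack is torn down
            )

    app = App()
    DataLakeStack(app, "DataLakeStack")
    app.synth()

The same pattern applies with Terraform or Helm for non-AWS or Kubernetes-hosted components: infrastructure is declared in version-controlled code and rolled out through automated pipelines rather than changed by hand.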
Requirements
  • At least 3 years of experience in DevOps, with a focus on data infrastructure and operations
  • Hands-on experience with big data components (for example, distributed storage, processing engines, and streaming systems)
  • Expertise in deploying, operating, monitoring, optimizing, and troubleshooting large-scale big data infrastructure
  • Proficient in shell scripting, Python, and SQL, with a deep understanding of data pipeline and data processing concepts (see the automation sketch after this list)
  • Experience with cloud computing platforms such as AWS, Azure, or GCP, with a focus on cloud operations and the development and maintenance of infrastructure as code
  • Proficiency in infrastructure automation tools, such as Terraform, CloudFormation, CDK, and Kubernetes
  • Strong knowledge of Unix-based operating systems and networking concepts
  • Spoken and written English at an upper-intermediate level or higher
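
To make the scripting requirement concrete, below is a small, stdlib-only Python sketch of typical automation glue: running a batch data job with retries, backoff, and logging. The spark-submit command and the retry policy are illustrative assumptions, not a prescribed setup.

    import logging
    import subprocess
    import sys
    import time

    logging.basicConfig(level=logging.INFO,
                        format="%(asctime)s %(levelname)s %(message)s")
    log = logging.getLogger("data-job-runner")

    # Hypothetical batch job; in a real pipeline this might be a
    # spark-submit, dbt, or plain shell invocation.
    JOB_CMD = ["spark-submit", "--deploy-mode", "cluster", "etl_job.py"]

    def run_with_retries(cmd: list[str], attempts: int = 3,
                         backoff_s: float = 30.0) -> None:
        """Run cmd, retrying with linear backoff on a non-zero exit code."""
        for attempt in range(1, attempts + 1):
            log.info("attempt %d/%d: %s", attempt, attempts, " ".join(cmd))
            result = subprocess.run(cmd)
            if result.returncode == 0:
                log.info("job succeeded")
                return
            log.warning("job failed with exit code %d", result.returncode)
            if attempt < attempts:
                time.sleep(backoff_s * attempt)
        log.error("job failed after %d attempts", attempts)
        sys.exit(1)

    if __name__ == "__main__":
        run_with_retries(JOB_CMD)

In production this logic would usually live in an orchestrator such as Airflow, but the same concerns (idempotent retries, clear logs, and non-zero exit codes that feed alerting) carry over.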
Nice to have
  • Experience with machine learning frameworks and tools, such as TensorFlow, PyTorch, and scikit-learn
  • Familiarity with data visualization tools, such as Tableau and Power BI
  • Knowledge of container orchestration platforms, such as Docker Swarm and Amazon ECS
  • Experience with CI/CD pipelines, using tools such as Jenkins and GitLab
Benefits
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling, and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek, and LinkedIn