
Lead AI Platform Engineer

Hybrid in Mexico
Cloud Native Development

We are looking for an experienced and driven Lead AI Platform Engineer to join our innovative team.

In this role, you will be instrumental in designing and implementing large-scale AI/ML infrastructure to address transformative challenges in drug discovery and healthcare. This position provides a unique opportunity to build advanced platforms and systems that empower data scientists, enabling impactful healthcare solutions on a global scale.

Responsibilities
  • Design and maintain infrastructure and platforms to support the deployment and monitoring of machine learning solutions in production environments
  • Optimize systems to ensure high performance and scalability for large-scale operations
  • Collaborate with data science teams to create state-of-the-art AI/ML workflows and environments on AWS
  • Work with R&D data scientists to operationalize machine learning pipelines, models, and algorithms
  • Oversee all aspects of software engineering, including architecture, development, testing, and maintenance
  • Lead technology initiatives from conceptualization to successful project delivery
  • Continuously improve the technology stack by integrating the latest advancements in artificial intelligence and data processing
  • Manage an enterprise-level platform and service, addressing customer needs and feature requests effectively
  • Implement DevOps best practices and modern toolchains to enhance automation and efficiency
  • Scale machine learning operations (MLOps) environments to meet production standards
  • Ensure compliance with GxP standards when applicable
Requirements
  • A minimum of 5 years of experience working in AWS cloud environments, with expertise in services like SageMaker, Athena, S3, EC2, RDS, Glue, Lambda, Step Functions, EKS, and ECS
  • Proficiency in infrastructure-as-code tools such as Terraform, Ansible, or CloudFormation
  • Strong programming expertise, primarily in Python; exceptional skills in other languages will also be considered
  • Experience with containers, microservices architectures, and Kubernetes or Docker-based systems
  • Advanced knowledge of Continuous Integration and Continuous Delivery pipelines, including tools like CodePipeline, CodeBuild, or CodeDeploy
  • Experience managing large-scale enterprise platforms and responding to end-user feature requests
  • Hands-on experience with DevOps practices and tools, including Docker and Git
  • Familiarity with GxP compliance standards
  • Strong analytical, communication, and problem-solving skills
Nice to have
  • Experience building large-scale data processing pipelines using Hadoop, Spark, or SQL
  • Proficiency with data science modeling tools and platforms such as R, Python, or Jupyter Notebooks
  • Understanding of multi-cloud environments, including AWS, Azure, and GCP
  • Background in mentoring and supporting colleagues or clients in a professional setting
  • Experience working within SAFe Agile frameworks and methodologies
We offer
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn