
Chief AI Platform Engineer

Hybrid in Mexico
Cloud Native Development

We are seeking a dynamic and experienced Chief AI Platform Engineer to join our team.

In this role, you will be responsible for building and deploying large-scale AI/ML infrastructure to tackle complex challenges in healthcare and drug discovery. This position offers an exciting opportunity to develop advanced platforms that empower data scientists to deliver transformative solutions, driving progress in global healthcare innovation.

Responsibilities
  • Develop and oversee infrastructure and platforms to support the deployment and monitoring of machine learning solutions in production environments
  • Improve system scalability and optimize performance to meet the demands of large-scale operations
  • Collaborate with data science teams to design and deploy advanced AI/ML workflows and environments on AWS
  • Work closely with R&D data scientists to operationalize machine learning models, pipelines, and algorithms
  • Manage all aspects of the software engineering lifecycle, including architecture, development, testing, and maintenance
  • Lead technology projects from initial concept through successful delivery and implementation
  • Modernize the technology stack by integrating advancements in artificial intelligence and data processing
  • Oversee enterprise-level platforms and services, effectively addressing customer needs and feature requests
  • Adopt DevOps practices and utilize modern toolchains to streamline workflows and enhance automation
  • Scale MLOps environments to production-level standards and requirements
  • Ensure systems comply with GxP standards when necessary
Requirements
  • At least 7 years of hands-on experience with AWS cloud services, including SageMaker, Athena, S3, EC2, RDS, Glue, Lambda, Step Functions, EKS, and ECS
  • A minimum of 1 year of experience managing and leading development teams
  • Proficiency in infrastructure-as-code tools such as Terraform, Ansible, or CloudFormation
  • Strong programming skills in Python, with the ability to adapt to other programming languages as needed
  • Experience with containerization and microservices architectures using technologies such as Docker and Kubernetes
  • Extensive knowledge of Continuous Integration and Continuous Delivery (CI/CD) pipelines, including tools such as CodePipeline, CodeBuild, or CodeDeploy
  • Proven ability to manage large-scale enterprise platforms and address feature requests from end users
  • Hands-on experience with DevOps practices and tools, including Docker and Git
  • Familiarity with GxP compliance standards
  • Strong problem-solving, analytical, and communication skills
Nice to have
  • Experience creating large-scale data processing pipelines with technologies like Hadoop, Spark, or SQL
  • Proficiency in data science modeling tools and platforms, such as R, Python, or Jupyter Notebooks
  • Knowledge of multi-cloud environments, including AWS, Azure, and GCP
  • Experience mentoring and guiding team members or clients in a professional setting
  • Familiarity with SAFe Agile methodologies and frameworks
We offer
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn