
Lead AI Platform Engineer

Hybrid in Mexico
Cloud Native Development

We are seeking a motivated and experienced Lead AI Platform Engineer to join our forward-thinking team.

In this role, you will play a key part in designing and deploying large-scale AI/ML infrastructure to tackle transformative challenges in healthcare and drug discovery. This is an exciting opportunity to build cutting-edge platforms and systems that empower data scientists, driving impactful healthcare solutions on a global scale.

Responsibilities
  • Develop and manage infrastructure and platforms to enable the deployment and monitoring of machine learning solutions in production
  • Enhance system performance and scalability for large-scale operations
  • Collaborate with data science teams to design advanced AI/ML workflows and environments on AWS
  • Work with R&D data scientists to operationalize machine learning pipelines, models, and algorithms
  • Take ownership of software engineering processes, including architecture, development, testing, and maintenance
  • Lead the development of technology initiatives from concept to successful delivery
  • Continuously upgrade the technology stack by incorporating advancements in artificial intelligence and data processing
  • Oversee an enterprise-level platform and service, addressing customer needs and feature requests effectively
  • Introduce and implement DevOps best practices and modern toolchains to improve automation and efficiency
  • Scale machine learning operations (MLOps) environments to production-grade standards
  • Ensure adherence to GxP compliance standards when required
Requirements
  • At least 5 years of experience working in AWS cloud environments, with expertise in services such as SageMaker, Athena, S3, EC2, RDS, Glue, Lambda, Step Functions, EKS, and ECS
  • Proficiency in infrastructure-as-code tools, including Terraform, Ansible, or CloudFormation
  • Strong programming skills, particularly in Python; experience with other languages is also valued
  • Experience with containers and microservices architectures, including tools like Docker and Kubernetes
  • Advanced understanding of Continuous Integration and Continuous Delivery pipelines, including tools like CodePipeline, CodeBuild, or CodeDeploy
  • Proven experience managing large-scale enterprise platforms and addressing end-user feature requests
  • Hands-on experience with DevOps practices and tools, including Docker and Git
  • Familiarity with GxP compliance standards
  • Strong analytical, problem-solving, and communication skills
Nice to have
  • Experience developing large-scale data processing pipelines with technologies like Hadoop, Spark, or SQL
  • Proficiency in data science modeling tools and platforms, such as R, Python, or Jupyter Notebooks
  • Knowledge of multi-cloud environments, including AWS, Azure, and GCP
  • Experience mentoring and supporting team members or clients in a professional setting
  • Familiarity with SAFe Agile frameworks and methodologies
We offer
  • International projects with top brands
  • Work with global teams of highly skilled, diverse peers
  • Healthcare benefits
  • Employee financial programs
  • Paid time off and sick leave
  • Upskilling, reskilling and certification courses
  • Unlimited access to the LinkedIn Learning library and 22,000+ courses
  • Global career opportunities
  • Volunteer and community involvement opportunities
  • EPAM Employee Groups
  • Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn