Chief AI Platform Engineer
Hybrid in Mexico
Cloud Native Development
We are looking for a highly skilled and innovative Chief AI Platform Engineer to join our team.
In this role, you will design and implement large-scale AI/ML infrastructure to address complex challenges in healthcare and drug discovery. This position provides a unique opportunity to create advanced platforms that empower data scientists to deliver impactful solutions, driving global advancements in healthcare technology.
Responsibilities
- Design and manage infrastructure and platforms to support the deployment and monitoring of machine learning solutions in production settings
- Enhance system scalability and optimize performance to handle the demands of large-scale operations
- Collaborate with data science teams to develop and implement sophisticated AI/ML workflows and environments on AWS
- Work closely with R&D data scientists to operationalize machine learning models, pipelines, and algorithms
- Oversee the full software engineering lifecycle, including architecture, development, testing, and maintenance
- Lead the execution of technology projects from concept development through successful delivery
- Modernize the technology stack by integrating the latest advancements in artificial intelligence and data processing
- Manage enterprise-level platforms and services, addressing customer requirements and feature requests efficiently
- Implement DevOps practices and utilize modern tools to streamline workflows and enhance automation processes
- Scale MLOps environments to production-level standards and operational requirements
- Ensure compliance with GxP standards when applicable
Requirements
- At least 7 years of hands-on experience working with AWS cloud services, including expertise in SageMaker, Athena, S3, EC2, RDS, Glue, Lambda, Step Functions, EKS, and ECS
- A minimum of 1 year of experience leading and managing development teams
- Proficiency in infrastructure-as-code tools such as Terraform, Ansible, or CloudFormation
- Strong programming skills in Python, with the ability to adapt to other programming languages as required
- Experience with containerization and microservices architectures, utilizing platforms like Kubernetes or Docker
- Comprehensive knowledge of Continuous Integration and Continuous Delivery pipelines, including tools like CodePipeline, CodeBuild, or CodeDeploy
- Proven ability to manage large-scale enterprise platforms and address feature requests from end users
- Hands-on experience with DevOps methodologies and tools, including Docker and Git
- Familiarity with GxP compliance standards
- Strong problem-solving, analytical, and communication skills
Nice to have
- Experience designing large-scale data processing systems using technologies like Hadoop, Spark, or SQL
- Proficiency with data science modeling tools and platforms, including R, Python, or Jupyter Notebooks
- Understanding of multi-cloud environments, such as AWS, Azure, and GCP
- Experience mentoring and supporting team members or clients in a professional capacity
- Familiarity with SAFe Agile methodologies and practices
We offer
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn