Chief AI Platform Engineer
Hybrid in Mexico
Cloud Native Development
We are seeking a visionary and experienced Chief AI Platform Engineer to join our team.
In this role, you will design and deploy AI/ML infrastructure at scale, addressing critical challenges in healthcare and drug discovery. This is a unique opportunity to build advanced platforms that empower data scientists to create transformative solutions, accelerating innovation in global healthcare.
Responsibilities
- Develop and oversee infrastructure and platforms to support the deployment and monitoring of machine learning solutions in production environments
- Optimize system performance and scalability to meet the needs of large-scale operations
- Collaborate with data science teams to create and implement sophisticated AI/ML workflows and environments on AWS
- Work closely with R&D data scientists to operationalize machine learning pipelines, algorithms, and models
- Manage the complete software engineering lifecycle, from architecture and development to testing and maintenance
- Lead technology projects from concept to successful execution and delivery
- Upgrade the existing technology stack by integrating the latest advancements in artificial intelligence and data processing
- Administer an enterprise-level platform and service, addressing customer requirements and feature requests efficiently
- Adopt DevOps methodologies and implement modern tools to streamline workflows and enhance automation
- Scale MLOps environments to meet production-grade standards and operational requirements
- Ensure compliance with GxP standards when applicable
Requirements
- A minimum of 7 years of experience working with AWS cloud services, including expertise in SageMaker, Athena, S3, EC2, RDS, Glue, Lambda, Step Functions, EKS, and ECS
- At least 1 year of experience managing and leading development teams
- Proficiency in infrastructure-as-code tools, such as Terraform, Ansible, or CloudFormation
- Expertise in Python programming, with the ability to work with additional programming languages as needed
- Experience with containerization and microservices architectures, utilizing platforms such as Kubernetes or Docker
- Comprehensive knowledge of Continuous Integration and Continuous Delivery pipelines, including tools like CodePipeline, CodeBuild, or CodeDeploy
- Proven ability to manage large-scale enterprise platforms and address end-user feature needs
- Hands-on experience with DevOps tools and practices, including Docker and Git
- Understanding of GxP compliance standards
- Strong analytical, communication, and problem-solving skills
Nice to have
- Experience developing large-scale data processing pipelines with technologies such as Hadoop, Spark, or SQL
- Proficiency with data science modeling tools and environments, including R, Python, or Jupyter Notebooks
- Knowledge of multi-cloud platforms, including AWS, Azure, and GCP
- Experience mentoring and supporting team members or clients in a professional setting
- Familiarity with SAFe Agile methodologies and processes
We offer
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn