Chief AI Platform Engineer
Hybrid in Mexico
Cloud Native Development
We are looking for an experienced and innovative Chief AI Platform Engineer to join our team.
In this role, you will design and implement large-scale AI/ML infrastructure to tackle significant challenges in healthcare and drug discovery. This position offers an exceptional opportunity to build advanced platforms that enable data scientists to deliver impactful solutions, driving advancements in global healthcare technology.
Responsibilities
- Design and maintain infrastructure and platforms to support the deployment and monitoring of machine learning solutions in production environments
- Enhance system scalability and optimize performance to meet the demands of large-scale operations
- Collaborate with data science teams to design and deploy advanced AI/ML workflows and environments on AWS
- Work with R&D data scientists to operationalize machine learning models, pipelines, and algorithms
- Oversee the entire software engineering lifecycle, including architecture, development, testing, and ongoing maintenance
- Lead the execution of technology initiatives from concept development to successful implementation
- Modernize the technology stack by integrating the latest advancements in artificial intelligence and data processing
- Manage an enterprise platform and service, addressing customer requirements and feature requests effectively
- Implement DevOps practices and modern tools to improve automation and streamline workflows
- Scale MLOps environments to meet production-level standards and requirements
- Ensure systems comply with GxP standards when necessary
Requirements
- At least 7 years of experience working with AWS cloud services, including expertise in SageMaker, Athena, S3, EC2, RDS, Glue, Lambda, Step Functions, EKS, and ECS
- At least 1 year of experience leading and managing development teams
- Proficiency in infrastructure-as-code tools such as Terraform, Ansible, or CloudFormation
- Strong programming expertise in Python, with the ability to adapt to other programming languages
- Experience with containerization and microservices architectures, using platforms like Kubernetes or Docker
- Extensive knowledge of Continuous Integration and Continuous Delivery pipelines, including tools like CodePipeline, CodeBuild, or CodeDeploy
- Demonstrated success in managing large-scale enterprise platforms and addressing feature requirements from end users
- Hands-on experience with DevOps practices and tools, including Docker and Git
- Awareness of GxP compliance standards
- Strong analytical, problem-solving, and communication skills
Nice to have
- Experience designing large-scale data processing pipelines using technologies like Hadoop, Spark, or SQL
- Proficiency with data science modeling tools and platforms, such as R, Python, or Jupyter Notebooks
- Knowledge of multi-cloud environments, including AWS, Azure, and GCP
- Experience mentoring and supporting team members or clients in a professional setting
- Familiarity with SAFe Agile frameworks and methodologies
We offer
- International projects with top brands
- Work with global teams of highly skilled, diverse peers
- Healthcare benefits
- Employee financial programs
- Paid time off and sick leave
- Upskilling, reskilling and certification courses
- Unlimited access to the LinkedIn Learning library and 22,000+ courses
- Global career opportunities
- Volunteer and community involvement opportunities
- EPAM Employee Groups
- Award-winning culture recognized by Glassdoor, Newsweek and LinkedIn