We are looking for a Data Software Engineer (MLOps / MLE) to join our team and help enable multi-country operations across the LATAM region.
The ideal candidate has a strong foundation in Python, Spark, and PySpark, along with hands-on experience with AWS and Databricks.
Responsibilities
- Design data pipelines that scale efficiently using Python, Spark, and PySpark
- Construct AWS infrastructure and manage deployments through Databricks
- Establish CI/CD workflows for machine learning models with Jenkins and similar platforms
- Use Amazon SageMaker to build, train, and deploy machine learning models
- Track model performance and drive improvements in efficiency
- Uphold standards for data quality and integrity at every stage
- Collaborate with cross-functional teams to develop innovative use cases
- Develop unit tests covering all phases of machine learning processes
- Adapt data science pipelines to meet project requirements
- Communicate clearly to resolve questions and technical issues
Requirements
- Knowledge of Python, Spark, and PySpark
- Background in AWS and Databricks
- Expertise in MLOps methodologies and tools such as Jenkins
- Skills in using Amazon SageMaker for machine learning lifecycle management
- Experience with CI/CD platforms and Terraform for infrastructure automation
- Adaptability to pick up new use cases quickly
- Understanding of principles behind production-ready data science pipelines
- Background in preparing data and building models
- Competency in designing unit tests for each step of machine learning workflows
- Ability to communicate ideas and technical details effectively
Nice to have
- Familiarity with Apache Airflow
- Knowledge of practices that ensure high data quality
- Certification in Databricks
- Certification in Jenkins