Senior Data Engineer (AWS)
Office in Mexico: Mexico City
Data Integration
We are seeking a skilled and innovative Senior Data Engineer to join our team and drive the evolution of the Client's data architecture and strategy.
In this role, you will address critical challenges such as integrating diverse data sources and leading agile cloud migrations that enable the decommissioning of legacy cross-platform BI systems. Your work will help advance the Client's Data Mesh architecture, democratize data products, and enhance analytics capabilities across BI, Machine Learning, AI, Deep Learning, and IoT.
Responsibilities
- Build and implement data warehousing solutions and develop robust ETL/ELT processes to ensure data integrity and availability
- Develop and optimize data pipelines using AWS tools, including Glue, Redshift, Athena, DynamoDB, and Amazon RDS
- Create and maintain automation scripts for data pipelines, such as Glue jobs that generate parameterized Parquet files from JSON source data, with automated delivery to cloud storage (a minimal sketch follows this list)
- Lead the migration of data sources and processes to the cloud to enable the decommissioning of legacy cross-platform BI systems
- Ensure seamless data integration across various systems, sources, and formats to support the transition to a Data Mesh architecture
- Collaborate with software developers, data analysts, and system administrators to understand business needs and deliver effective data solutions
- Partner with business teams to refine data requirements, estimate development efforts, and build pipelines for Data Lake ingestion
- Support the design and development of Redshift data models for indicators to be visualized in Tableau Cloud/Server (an example also follows the list)
- Enhance the current BI layer while supporting the growth of advanced ML, AI, and Deep Learning capabilities
- Document database designs, ETL/ELT workflows, security schemas, and architecture diagrams for clarity and reproducibility
- Maintain high standards of data governance to ensure data accuracy, security, and accessibility
- Build Glue data pipelines to integrate data from APIs, transform it, and ingest it into cloud storage
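For candidates unfamiliar with this workflow, here is a minimal sketch of the kind of Glue job described above: it reads JSON landed from an API, applies a simple transformation, and writes parameterized, partitioned Parquet back to cloud storage. The parameter names (source_path, target_path, run_date), path layout, and the id column are all hypothetical, not part of the Client's actual pipelines.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Hypothetical job parameters; real names depend on the pipeline
args = getResolvedOptions(
    sys.argv, ["JOB_NAME", "source_path", "target_path", "run_date"]
)

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

# Read JSON records landed from an upstream API (path layout is an assumption)
raw = spark.read.json(f"{args['source_path']}/{args['run_date']}/")

# Minimal transformation: drop records without an id, stamp the run date
clean = (
    raw.filter(F.col("id").isNotNull())
       .withColumn("run_date", F.lit(args["run_date"]))
)

# Write parameterized, partitioned Parquet to the target bucket
(clean.write
      .mode("overwrite")
      .partitionBy("run_date")
      .parquet(args["target_path"]))
```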
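And a correspondingly minimal sketch of the Redshift modeling duty: creating a table for a Tableau-facing indicator through the boto3 Redshift Data API. The cluster, database, user, schema, and table names are placeholders, and the schema itself is illustrative only.

```python
import boto3

# Hypothetical region and identifiers; real values will differ
client = boto3.client("redshift-data", region_name="us-east-1")

# A simple fact table for a BI indicator, with distribution and sort
# keys chosen to suit dashboard-style queries (schema is illustrative)
DDL = """
CREATE TABLE IF NOT EXISTS analytics.fact_sales_indicator (
    indicator_date  DATE NOT NULL,
    region          VARCHAR(64),
    indicator_value DOUBLE PRECISION
)
DISTSTYLE KEY DISTKEY (region)
SORTKEY (indicator_date);
"""

resp = client.execute_statement(
    ClusterIdentifier="example-cluster",  # assumption
    Database="analytics",                 # assumption
    DbUser="etl_user",                    # assumption
    Sql=DDL,
)
print(resp["Id"])  # statement id, usable with describe_statement for polling
```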
Requirements
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related technical field
- Minimum 3 years of relevant experience in data engineering
- Proven experience with cloud-based data solutions, specifically AWS services such as S3, Glue, Redshift, Lambda, DynamoDB, Athena, and RDS
- Strong understanding of ETL/ELT development, data pipeline architecture, and modern data warehousing concepts
- Proficiency in programming languages such as Python, SQL, or Scala
- Experience designing, implementing, and optimizing data models in Redshift or similar platforms
- Strong problem-solving skills and attention to detail
- Excellent communication skills and the ability to collaborate effectively with both technical and non-technical teams
- Fluent written and spoken English (B2+ level or higher)
Nice to have
- Familiarity with data visualization tools, especially Tableau Server or Tableau Cloud
- Experience working with Data Mesh architecture
- Knowledge of machine learning, deep learning, AI, and IoT concepts
- Experience with tools such as Apache Spark and Hadoop
- Familiarity with JSON schema design and cloud storage management
- Experience with AWS Lambda and Amazon EC2