We are currently looking for an experienced Big Data Engineer.
Our client's engineering teams are migrating data warehouses from a legacy MS SQL ecosystem (currently moving to AWS RDS) to a Data Lake/Data Warehouse built on native AWS services. Experience with the full range of AWS data and analytics services is a plus.
Responsibilities
- Develop data models for optimal storage and retrieval in the cloud, as well as to meet critical product and business requirements
- Build scalable and high-performance distributed data processing systems during cloud migration
- Work closely with our business partners to define requirements and ensure they are met
- Establish and develop data standards and best practices
- Adhere to and implement software development best practices
Requirements
- Bachelor's degree in Computer Science or a related technical field, or relevant professional experience
- 5+ years of Python programming experience
- 2+ years of experience with enterprise cloud solutions (AWS is preferred)
- 2+ years of experience implementing data processing pipelines in the cloud (batch and/or streaming)
- In-depth understanding of SQL, relational and NoSQL databases
- Experience with various data access models, streaming technologies, data quality, data modeling, data performance, and cost optimization
- Written and spoken English at Upper-Intermediate level or above (B2+)
Nice to have
- AWS Glue
- AWS Lake Formation
- Amazon QuickSight
- Amazon Redshift
- Data Warehousing
- NoSQL in Big Data