We are seeking a motivated Data Engineer with experience in modern data engineering and cloud-based data platforms, preferably on Azure. The role focuses on building, maintaining and optimizing data pipelines while working closely with senior engineers, architects and cross-functional teams.
responsibilities
Develop and maintain data pipelines for ingestion, transformation and analytics
Implement data processing solutions using Python, PySpark and SparkSQL
Work with Azure Fabric components and contribute to Fabric-based data solutions
Support data storage and access using OneLake (Delta / OpenLake)
Assist in working with Cosmos DB (NoSQL API) under guidance
Follow established CI/CD practices and contribute to deployment pipelines
Support integration with Power BI and downstream analytics use cases
Collaborate with senior engineers, data scientists and product teams
Ensure data quality, performance and reliability of data workflows
Participate in Agile ceremonies and sprint delivery
requirements
2+ years of experience in Data Engineering
Python (hands-on experience)
PySpark and SparkSQL (working knowledge)
Experience or exposure to Azure Fabric or similar Azure data services
Familiarity with OneLake / Delta Lake concepts
Basic experience with Cosmos DB (NoSQL API) or other NoSQL databases
Understanding of Dataflow Gen2 (DF Gen2) and M-code (basic to intermediate)
Exposure to CI/CD pipelines and version control (Git)
Basic knowledge of Azure services
Familiarity with Power BI integration
Strong problem-solving and analytical skills
Willingness to learn and adapt to new technologies
Good communication and collaboration skills
Ability to work effectively in a team-oriented Agile environment
Upper-intermediate proficiency in English (B2+)
nice to have
Exposure to AI-assisted or automated code generation
Experience with Big Data concepts and distributed processing
Basic understanding of Data Science workflows
Familiarity with LLMs (e.g., GPT, Claude) or AI-enabled data use cases
Experience in financial services or data-intensive domains
Knowledge of additional Cosmos DB APIs
Apache Kafka, Azure Blob Storage, CI/CD, Data Lakehouse, ETL/ELT Solutions, MS SQL DB Development, SQL
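For illustration only, below is a minimal PySpark sketch of the kind of ingestion-and-transformation step this role describes. The source path, column names and target table are hypothetical; in a Fabric notebook the `spark` session and Delta support are provided, so the local session here only keeps the snippet self-contained.

```python
from pyspark.sql import SparkSession, functions as F

# In a Fabric notebook a `spark` session (with Delta support) already exists;
# this local session only makes the sketch self-contained.
spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

# Hypothetical raw files landed in the lake.
raw = spark.read.option("header", True).csv("Files/landing/orders/*.csv")

# Basic typing, deduplication and filtering before analytics.
orders = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

# Persist as a Delta table for downstream SparkSQL and Power BI consumption.
orders.write.format("delta").mode("overwrite").saveAsTable("orders_clean")
```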
We are seeking a dynamic and experienced Lead Data Engineer to architect, develop, and maintain cutting-edge data solutions within a fast-paced and collaborative environment. This role requires expertise in modern data platforms, business intelligence tools, and process optimization, together with significant experience in agile methodologies, ensuring scalable and efficient data management for actionable business insights.
responsibilities
Lead the design and productionization of ETL/ELT processes to ensure efficient data ingestion and transformation
Architect and implement Data Platforms/Data Warehousing solutions on technologies such as Microsoft SQL Server, Azure Data Warehouse, or Azure Synapse
Manage and optimize traditional relational Data Warehouse platforms (e.g., SQL Server, Oracle, Teradata) with expertise in structured data management
Craft reporting data models using multidimensional/Kimball design principles for high-performance analytics
Develop ML algorithms or frameworks with Python to support advanced analytics initiatives
Oversee fine-tuning of T-SQL queries and performance optimization for existing solutions
Define and enhance workflows leveraging tools like Power BI, Tableau, and QlikView for business intelligence needs
Create conceptual and logical solution designs through ERD diagramming and Project Start Architecture documentation
Apply ITIL practices to manage change, incident, problem resolution, release, and service request processes
Collaborate in agile settings to translate complex challenges into actionable business solutions
Implement Master Data Management strategies (or Reference Data Management) and design interfaces such as APIs for seamless data exchange
Ensure scalable and efficient pipelines using platforms like Azure Data Factory, Data Lake, and Databricks
requirements
Minimum 6 years of experience designing and developing data ingestion processes within Data Warehouse and Data Lake environments
Proficiency in Microsoft SQL Server, Azure Data Warehouse, or Azure Synapse; expertise implementing relational Data Warehouse platforms (SQL Server, Oracle, Teradata)
Advanced competency in multidimensional/Kimball modeling and structured data management
Strong skills in T-SQL, including advanced query optimization techniques and execution plans
Proficiency in BI tools such as Power BI, Tableau, or QlikView
Knowledge of Python for ML algorithm development; flexibility to design or work with ML frameworks
Capability to define architectural standards and create Project Start Architecture documentation
Familiarity with ITIL processes related to change, incident, and problem management
Experience in Azure Data Lake, Data Factory, and Azure DevOps or equivalent AWS/GCP tools
Background in agile environments with a focus on collaborative, cross-functional problem-solving
nice to have
Agile/SAFe certifications
Azure Data Engineer or equivalent AWS/GCP certifications
Azure Administrator or Solutions Architect certifications or their AWS/GCP equivalents
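As a purely illustrative example of the "ML algorithms with Python" responsibility above, here is a minimal scikit-learn sketch; the synthetic dataset and model choice are assumptions, not a prescribed approach.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic feature matrix and target standing in for real analytics inputs.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
y = X @ np.array([1.5, -2.0, 0.5, 3.0]) + rng.normal(scale=0.3, size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# Simple, reproducible pipeline: scaling plus a regularized linear model.
model = Pipeline([("scale", StandardScaler()), ("ridge", Ridge(alpha=1.0))])
model.fit(X_train, y_train)

print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```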
We are looking for a passionate and experienced Senior Data Engineer to join our team. You will play a pivotal role in designing, building, and maintaining scalable data solutions that empower our teams with actionable insights and drive innovation across the organization.
responsibilities
Monitor storage and compute resources on Google Cloud Platform (GCP) to ensure optimal capacity and usage
Support approximately 20 applications by addressing data needs and maintaining strong performance
Use Enterprise Analytics Platform (EAP) and data warehouse solutions for data management and reporting
Consolidate large, complex datasets from various sources, transforming them into accessible formats
Develop and maintain data extraction, transformation, and pipeline architectures
Identify opportunities to enhance data reliability, quality, and accessibility
Address challenges related to organic data growth and storage scaling, improving monitoring capabilities
Work closely with application teams to understand data requirements and share best practices
requirements
A minimum of 3 years of experience in data engineering or a related field
Proficiency in ELT, data modeling, and data integration, ingestion, manipulation, and processing
Experience with GCP monitoring, EAP or data warehouse solutions, and tracking capacity vs usage
Hands-on expertise with GitHub, GitHub Actions, Azure DevOps, and tools such as SQL DB, Synapse, Databricks, and Data Factory
Familiarity with Glue, Airflow, Stream Analytics, Redshift, Kinesis, SonarQube, and PyTest
Exceptional analytical and problem-solving skills with a proactive mindset
Competency in managing large volumes, velocity, and variety of data
Strong communication and collaboration skills to effectively work with cross-functional teams
nice to have
Knowledge of external technical ecosystems
Familiarity with automation tools for monitoring and reporting
Background in data modeling concepts
Experience with dashboarding and visualization tools such as Tableau, Power BI, or Looker
Previous exposure to financial services or enterprise environments
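To make the pipeline and orchestration items above concrete, here is a minimal, illustrative Airflow DAG sketch; the DAG id, schedule and the extract/load callables are hypothetical placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull a batch from a source system.
    print("extracting source batch")


def load():
    # Placeholder: write the transformed batch to the warehouse.
    print("loading into warehouse")


with DAG(
    dag_id="daily_ingest",            # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task
```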
We are seeking an experienced and innovative Lead Data Engineer to play a critical role in our Data Transformation program. You will be at the forefront of designing and implementing scalable, cutting-edge data solutions, leveraging advanced technologies to optimize our cloud-based analytics platform. This role offers a unique opportunity to contribute to the transformation of our data platform and develop expertise in emerging technologies.
responsibilities
Design and develop automated data pipelines and data structures for modern data solutions
Deliver business tenancies as part of the data platform strategy
Build and optimize cloud data platforms leveraging AWS and Snowflake
Collaborate with product teams to ensure alignment with business goals and objectives
Migrate data from legacy platforms to cloud-based solutions
Design and operate event-driven or streaming systems on Kafka, with a focus on delivery semantics and throughput tuning
Implement robust CI/CD pipelines using GitLab to support development processes
Create and maintain automated testing frameworks for data pipelines
Adapt ITIL processes into a 2nd/3rd line support environment to ensure system reliability
Drive innovation by staying updated with technology standards and emerging trends in software and data engineering
Contribute to Agile delivery teams utilizing methodologies such as Scrum and Kanban
requirements
5+ years of Python engineering experience with a focus on performance optimization, object-oriented design, and production-grade reliability
2+ years of hands-on expertise in Snowflake, SQL, and data engineering tools like dbt and Airflow
2+ years designing and managing Kafka systems with an understanding of message delivery guarantees and dead-letter queues (DLQs)
1+ years implementing CI/CD pipelines using GitLab in production environments
Familiarity with Agile methodologies, including Scrum or Kanban, and JIRA for workflow management
Knowledge of automated testing frameworks for validating data pipelines
Expertise in modern cloud computing technologies, with a focus on AWS and Snowflake
Demonstrated proficiency in ITIL processes within a support setting
Strong written and verbal English communication skills (B2+)
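A minimal sketch of the Kafka delivery-semantics points listed above, using the confluent-kafka Python client. The broker address, topic, consumer group and processing function are hypothetical; the intent is to show durable produce settings (acks=all) and manual offset commits for at-least-once consumption.

```python
from confluent_kafka import Consumer, Producer

BROKERS = "localhost:9092"   # hypothetical
TOPIC = "trades"             # hypothetical


def delivery_report(err, msg):
    # Surface failed deliveries instead of fire-and-forget publishing.
    if err is not None:
        print(f"delivery failed: {err}")


def process(value: bytes) -> None:
    # Hypothetical business logic; replace with the real transformation.
    print("processed", value)


# Producer tuned for durability: wait for all in-sync replicas, allow retries.
producer = Producer({"bootstrap.servers": BROKERS, "acks": "all", "retries": 5})
producer.produce(TOPIC, value=b'{"id": 1}', callback=delivery_report)
producer.flush()

# Consumer with manual commits gives at-least-once processing semantics.
consumer = Consumer({
    "bootstrap.servers": BROKERS,
    "group.id": "pipeline-v1",       # hypothetical
    "enable.auto.commit": False,
    "auto.offset.reset": "earliest",
})
consumer.subscribe([TOPIC])

msg = consumer.poll(timeout=5.0)
if msg is not None and msg.error() is None:
    process(msg.value())
    consumer.commit(message=msg)     # commit only after successful processing
consumer.close()
```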
We are seeking a Data Engineer with deep expertise in database support and cloud-based data platforms, focusing on Cosmos DB and Azure Fabric environments. This position is ideal for an experienced engineer who excels in production-grade data environments, ensuring operational stability and driving enhancements to data platform solutions.
responsibilities
Provide end-to-end support for Cosmos DB–based databases, including monitoring, troubleshooting and performance tuning
Work extensively with Cosmos DB (NoSQL API) and other Cosmos DB variants (Core SQL, Mongo API, Cassandra API, Table API)
Support and maintain data platform components within Azure Fabric
Diagnose and resolve production issues related to data access, latency, throughput and availability
Implement best practices for scalability, security, backup and disaster recovery
Collaborate with application, data engineering and platform teams to support data-driven solutions
Contribute to automation, scripting and operational improvements
Participate in on-call or production support rotations as required
Document operational procedures and support knowledge
requirements
3+ years of experience as a Dev, Data or Platform Engineer
Strong hands-on experience with Cosmos DB (NoSQL API)
Expertise in other Cosmos DB variants
Experience working with Azure Service Fabric or Azure-based data platforms
Solid understanding of NoSQL data modeling, partitioning and performance tuning
Experience supporting production databases in cloud environments
Familiarity with Azure services and monitoring tools
Strong troubleshooting and analytical skills
Calm, methodical approach to production support and incident management
Good communication skills, able to work with both technical and non-technical stakeholders
Ability to prioritize effectively in high-availability environments
Collaborative mindset and ownership mentality
Excellent command of written and spoken English (B2+ level)
nice to have
Experience with code generation, including non-AI and AI-assisted approaches
Exposure to Data Science workflows
Experience with Big Data platforms and distributed systems
Knowledge of financial instruments and financial services data
Hands-on experience with industry-standard LLMs (including GPT, Claude or similar)
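To give a flavor of the Cosmos DB (NoSQL API) work described above, here is a short sketch using the azure-cosmos Python SDK. The endpoint, key, database, container and partition-key value are hypothetical, and reading the request charge from `last_response_headers` is an assumption worth verifying against the SDK version in use.

```python
from azure.cosmos import CosmosClient

# Hypothetical endpoint and key; prefer managed identity / Azure AD in production.
client = CosmosClient("https://my-account.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("ops").get_container_client("orders")

# Scope the query to a single logical partition where possible to keep RU cost low.
items = container.query_items(
    query="SELECT c.id, c.status FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "customer-123"}],
    partition_key="customer-123",
)

for item in items:
    print(item)

# Request charge of the last operation, useful when tuning hot queries.
ru = container.client_connection.last_response_headers.get("x-ms-request-charge")
print("RU charge:", ru)
```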
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and strong experience in Azure Data Factory/Synapse. The ideal candidate will have a proven ability to design, develop, and optimize scalable data solutions, build robust data pipelines, and apply modern DevOps practices in a cloud environment.
responsibilities
Design, develop, and optimize large-scale data processing solutions using PySpark
Implement and maintain advanced data pipelines and workflows in Azure Data Factory and Azure Synapse
Automate and monitor development pipelines for efficient, resilient data engineering solutions
Architect scalable data solutions while collaborating with cross-functional engineering teams
Apply best practices for code optimization, version control (Git), and infrastructure automation
Integrate Azure Functions for data orchestration or transformation tasks
Contribute to infrastructure setup and maintain automation using Terraform
Produce technical documentation and mentor junior team members on best practices
requirements
2+ years of professional experience in data engineering roles
Extensive hands-on experience with PySpark for writing optimized, scalable code
Strong background in Azure Data Factory and/or Azure Synapse for data integration and orchestration
Proficiency in leveraging Azure Functions for data orchestration and transformation workflows
Competency in DevOps practices, including CI/CD toolchains, automation, and version control
Demonstrated Git expertise for collaborative code development
Familiarity with Terraform for managing infrastructure as code
Solid skills in designing build and development pipelines within cloud-based environments
Understanding of Azure DevOps for end-to-end CI/CD management
Exceptional problem-solving abilities and documentation skills
Excellent written and verbal communication skills in English (B2+ level)
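Because the role involves Azure Functions for orchestration and transformation tasks, here is a minimal sketch using the Python v2 programming model; the route name and the trivial transformation are hypothetical.

```python
import json

import azure.functions as func

app = func.FunctionApp()


@app.route(route="normalize", auth_level=func.AuthLevel.FUNCTION)
def normalize(req: func.HttpRequest) -> func.HttpResponse:
    """Hypothetical lightweight transformation step invoked from a pipeline."""
    payload = req.get_json()
    # Trivial example transformation: lower-case keys before the downstream load.
    cleaned = {k.lower(): v for k, v in payload.items()}
    return func.HttpResponse(json.dumps(cleaned), mimetype="application/json")
```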
We are seeking a detail-oriented and experienced Senior Data Quality Engineer to join our team, specializing in testing software solutions for Capital Markets Equities Products. As a key member of our Scrum team, you will leverage your expertise in software testing lifecycles, ensure the delivery of high-quality software solutions, and contribute to team success through mentorship and innovation.
responsibilities
Study requirement specifications and clarify ambiguities with business analysts and customers
Document processes and share knowledge across the team
Build, maintain, and update test scenarios and test cases based on specifications
Report results of manual and automated tests while troubleshooting script issues
Identify, track, and document system issues and anomalies in issue tracking systems
Communicate unforeseen obstacles affecting work progress to leads in a timely manner
Provide daily and weekly status updates to leads and managers
Develop robust test plans, estimates, and identify opportunities for test process improvements
Review QA project artifacts, including test scenarios, test scripts, defect reports, and status updates
Research and recommend new QA tools, methodologies, and innovations
Mentor and train QA team members, ensuring knowledge dissemination
Ensure efficient testing for applications hosted in cloud environments like AWS
requirements
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent professional certification
6+ years of experience in the software development lifecycle, with a focus on testing databases, ETL processes, and data migrations
At least 3 years’ experience in testing backend applications built on REST APIs
Proficiency in writing complex SQL queries with at least 6 years of experience
Background in creating comprehensive test plans and test case documentation
Knowledge of software QA methodologies, tools, and processes
Competency in cloud application testing, specifically in AWS
Expertise in tools like Jira and Confluence
Skills in practicing Agile principles within Scrum teams
Attention to detail with strong communication skills and the ability to thrive under pressure
Flexibility to mentor junior QA analysts and adapt to evolving priorities
Understanding of the Equities, FX, and Derivatives trading space
Excellent command of written and spoken English (B2+ level)
nice to have
Capability to design, build, and maintain automated test scripts
Experience within the Capital Markets domain
Experience using FIX protocol
Proficiency in coding test automation scripts in Java, Python, or JavaScript
Familiarity with Snowflake, AWS, or any cloud experience
Experience using Matillion
Real-time system experience
Background in integration testing with upstream and downstream systems
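For the REST API and SQL testing skills listed above, here is a minimal pytest sketch; the endpoint, ODBC DSN, table names and data-quality rule are hypothetical examples rather than a prescribed framework.

```python
import pyodbc
import pytest
import requests

BASE_URL = "https://api.example.internal"   # hypothetical
CONN_STR = "DSN=equities_dw"                # hypothetical ODBC DSN


def test_trade_endpoint_returns_expected_fields():
    # Contract check against a backend REST API (hypothetical endpoint).
    resp = requests.get(f"{BASE_URL}/trades/12345", timeout=10)
    assert resp.status_code == 200
    assert {"tradeId", "symbol", "quantity"} <= resp.json().keys()


@pytest.mark.parametrize("table", ["stg_trades", "dw_trades"])
def test_no_null_trade_ids(table):
    # Simple data-quality rule applied to staging and warehouse tables.
    with pyodbc.connect(CONN_STR) as conn:
        nulls = conn.execute(
            f"SELECT COUNT(*) FROM {table} WHERE trade_id IS NULL"
        ).fetchval()
    assert nulls == 0
```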
We are seeking a Senior Data Engineer with deep expertise in Azure Fabric, PySpark and AI-driven data platforms. This role focuses on designing, building and optimizing scalable data pipelines and analytics solutions, collaborating with architects, engineers and business stakeholders to deliver modern, AI-integrated data solutions.
responsibilities
Design, develop and maintain scalable data pipelines using Azure Fabric
Implement data processing and transformation with Python, PySpark and SparkSQL
Utilize OneLake (Delta / OpenLake) for efficient data storage and analytics
Develop and support solutions leveraging Cosmos DB (NoSQL API)
Contribute to Fabric workloads such as Data Engineering, Data Factory Gen2 and Lakehouse
Implement and maintain CI/CD pipelines following DevOps best practices
Integrate data solutions with Power BI for reporting and analytics
Collaborate with AI, data science and product teams to support AI-driven use cases
Ensure data quality, performance, security and reliability
Participate in Agile ceremonies and contribute to sprint delivery
Support production issues and drive continuous improvements
requirements
5+ years of experience in Data Engineering or related engineering roles
Strong hands-on experience with Azure Fabric
Proficiency in Python, PySpark and SparkSQL
Experience with Cosmos DB (NoSQL API) and OneLake / Delta Lake (OpenLake concepts)
Knowledge of Dataflow Gen2 (DF Gen2) and M-code
Experience with CI/CD pipelines using Azure DevOps or equivalent
Good understanding of Azure services and Power BI integration
Strong problem-solving and analytical skills
Ability to work independently on complex tasks
Clear communication and collaboration skills
Ownership mindset with attention to quality and performance
Experience working in Agile or Scrum environments
Upper-Intermediate English language proficiency (B2)
nice to have
Experience with code generation, including non-AI and AI-assisted approaches
Expertise with other Cosmos DB variants such as Mongo, Cassandra or Table APIs
Exposure to Azure AI Foundry and Data Science workflows
Strong background in Big Data and Spark ecosystems
Knowledge of financial instruments and financial services data
Hands-on experience with industry-standard LLMs such as GPT, Claude or similar
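As an illustrative sketch of the SparkSQL and Delta Lake work this role describes, here is an incremental upsert into a Lakehouse table via MERGE; the table names are hypothetical, and in a Fabric notebook the `spark` session (with Delta support) is already provided.

```python
from pyspark.sql import SparkSession

# Created here only for self-containment; Fabric notebooks supply `spark`.
spark = SparkSession.builder.appName("orders-merge").getOrCreate()

# Upsert the latest staging snapshot into the target Lakehouse table.
spark.sql("""
    MERGE INTO orders AS target
    USING orders_staging AS source
    ON target.order_id = source.order_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")
```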
We are seeking a skilled Data Science Consultant to join our team and contribute to the delivery of AI and Data Cloud Solutions. As a member of our team, you will collaborate with data scientists, engineers, and product owners to advance our AI delivery framework. Your expertise in data structures, databases, and ETL tools will help optimize AI processes while ensuring compliance with security and ethical standards. This role provides the opportunity to drive AI innovation in a collaborative and dynamic environment.
responsibilities
Drive and implement continuous improvements in the client's delivery framework
Apply Agile (SAFe) operational standards and practices to optimize efficiency and returns on AI investment
Act as a subject matter expert in Data Science or Data Engineering
Collaborate with Product Owners to gather, address, and align their needs and requirements
Ensure our client's solutions comply with security, AI ethics, DPP, legal, and works council standards
Facilitate quarterly planning activities for the client
Contribute to operational KPI reporting and drive improvements in these KPIs
Contribute to the development of reusable and enablement assets relevant to the client's context
Educate customers and stakeholders on AI and machine learning concepts
requirements
Bachelor’s or master’s degree in machine learning, computer science, engineering, or related technical fields
Background in working with teams of AI Scientists, MLOps Engineers, and Product Owners
Demonstrated stakeholder communication skills and the ability to understand SAP business processes
Understanding of agile team practices and methods including Scrum, Kanban, and SAFe
Proficiency in AI and ML concepts, including MLOps, and in related technologies and cloud frameworks such as Jupyter, Docker, Kubernetes, GitHub, SAP BTP, OCR, NLP, and CV
Expertise in Python libraries and machine learning frameworks such as NumPy, Pandas, Keras, scikit-learn, TensorFlow, PyTorch, and Gensim
Interest in business process engineering and modeling
Qualifications in ITIL and ITSM practices
Capability to take ownership of tasks and demonstrate collaborative teamwork skills
Ability to communicate effectively in both written and spoken English (B2 level or higher)
We're seeking a Data Technology Consultant to join the Data Practice team and help our clients unlock their data's full potential. In this role, you'll contribute to projects centered around digital transformation, data platforms & science, business analytics, intelligent automation, and cloud solutions.
responsibilities
Work with European technical and business data practices, assisting clients in their Data Analytics strategy and delivery programs
Maximize the value of clients' Data & Analytics initiatives by recognizing appropriate solutions and services
Act as Data Technology Consultant and/or Data Solution Architect, working with the delivery team in complex programs
Maintain understanding of technical solutions, architecture design trends and best practices
Drive Data Analytics initiatives and technology consulting engagements
Collaborate with internal, client and third-party teams to execute transformations
Understand the intersection between technology, customers and business
Stay updated on emerging trends and challenges in clients' markets and geographies and how they affect clients' business and initiatives
Work closely with project/program management to ensure successful delivery through an integrated delivery model
Deliver clear and consistent communications within projects with relevant stakeholders
Establish and cultivate strong relationships with clients
requirements
Strong experience as a Data Technical Consultant and Data Solution Architect
Hands-on technology experience in the areas of Data Analytics
Skills in one of the following: Big Data, BI, Data Warehousing, Data Science, Data Management, Data Storage, Data Visualization
Good knowledge in at least one of the Cloud providers (AWS, Azure, GCP)
Background in continuous delivery tools and technologies
Ability to work with relevant delivery teams
Skill in effectively communicating technology pros & cons and presenting rational options to clients
Confidence in expressing viewpoints, making recommendations and presenting analysis when needed
English language proficiency at an Upper-Intermediate level (B2) or higher