We are seeking a motivated Data Engineer with experience in modern data engineering and cloud-based data platforms, preferably on Azure. The role focuses on building, maintaining and optimizing data pipelines while working closely with senior engineers, architects and cross-functional teams.
responsibilities
Develop and maintain data pipelines for ingestion, transformation and analytics
Implement data processing solutions using Python, PySpark and SparkSQL
Work with Microsoft Fabric components and contribute to Fabric-based data solutions
Support data storage and access using OneLake (Delta Lake format)
Assist in working with Cosmos DB (NoSQL API) under guidance
Follow established CI/CD practices and contribute to deployment pipelines
Support integration with Power BI and downstream analytics use cases
Collaborate with senior engineers, data scientists and product teams
Ensure data quality, performance and reliability of data workflows
Participate in Agile ceremonies and sprint delivery
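The transformation work described in the bullets above (Python, PySpark, SparkSQL) typically amounts to grouped aggregations over ingested data. As a rough, portable sketch, shown here with Python's built-in sqlite3 standing in for SparkSQL, with invented table and column names, it might look like:

```python
import sqlite3

# Invented example data standing in for raw ingested events.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_events (user_id TEXT, event_type TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO raw_events VALUES (?, ?, ?)",
    [("u1", "purchase", 10.0), ("u1", "purchase", 5.0), ("u2", "refund", -3.0)],
)

# The same shape of aggregation would be written in SparkSQL
# against a DataFrame registered as a temp view.
rows = conn.execute(
    """
    SELECT user_id, SUM(amount) AS total
    FROM raw_events
    GROUP BY user_id
    ORDER BY user_id
    """
).fetchall()
print(rows)  # [('u1', 15.0), ('u2', -3.0)]
```

In a Fabric notebook the same query would run via `spark.sql(...)` against a Lakehouse table rather than an in-memory SQLite database.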
requirements
2+ years of experience in Data Engineering
Python (hands-on experience)
PySpark and SparkSQL (working knowledge)
Experience with or exposure to Microsoft Fabric or similar Azure data services
Familiarity with OneLake / Delta Lake concepts
Basic experience with Cosmos DB (NoSQL API) or other NoSQL databases
Understanding of Dataflow Gen2 and Power Query M code (basic to intermediate)
Exposure to CI/CD pipelines and version control (Git)
Basic knowledge of Azure services
Familiarity with Power BI integration
Strong problem-solving and analytical skills
Willingness to learn and adapt to new technologies
Good communication and collaboration skills
Ability to work effectively in a team-oriented Agile environment
Upper-intermediate proficiency in English (B2+)
nice to have
Exposure to AI-assisted or automated code generation
Experience with Big Data concepts and distributed processing
Basic understanding of Data Science workflows
Familiarity with LLMs (e.g., GPT, Claude) or AI-enabled data use cases
Experience in financial services or data-intensive domains
Knowledge of additional Cosmos DB APIs
Key technologies: Apache Kafka, Azure Blob Storage, CI/CD, Data Lakehouse, ETL/ELT solutions, MS SQL DB development, SQL
We are seeking a dynamic and experienced Lead Data Engineer to architect, develop, and maintain cutting-edge data solutions within a fast-paced and collaborative environment. This role requires expertise in modern data platforms, business intelligence tools, process optimization, and significant experience in agile methodologies, ensuring scalable and efficient data management for actionable business insights.
responsibilities
Lead the design and productionization of ETL/ELT processes to ensure efficient data ingestion and transformation
Architect and implement Data Platform/Data Warehousing solutions on technologies such as Microsoft SQL Server or Azure Synapse (formerly Azure SQL Data Warehouse)
Manage and optimize traditional relational Data Warehouse platforms (e.g., SQL Server, Oracle, Teradata) with expertise in structured data management
Craft reporting data models using multidimensional/Kimball design principles for high-performance analytics
Develop ML algorithms or frameworks with Python to support advanced analytics initiatives
Oversee fine-tuning of T-SQL queries and performance optimization for existing solutions
Define and enhance workflows leveraging tools like Power BI, Tableau, and QlikView for business intelligence needs
Create conceptual and logical solution designs through ERD diagramming and Project Start Architecture documentation
Apply ITIL practices to manage change, incident, problem resolution, release, and service request processes
Collaborate in agile settings to translate complex challenges into actionable business solutions
Implement Master Data Management strategies (or Reference Data Management) and design interfaces such as APIs for seamless data exchange
Ensure scalable and efficient pipelines using platforms like Azure Data Factory, Data Lake, and Databricks
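The Kimball-style reporting models mentioned above organize data as a star schema: a central fact table joined to dimension tables. A minimal sketch, using sqlite3 for portability and invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# A tiny star schema: one fact table keyed to two dimensions
# (all names are invented for illustration).
conn.executescript("""
CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
CREATE TABLE fact_sales  (date_key INTEGER, product_key INTEGER, revenue REAL);
""")
conn.execute("INSERT INTO dim_date VALUES (20240101, 2024, 1)")
conn.execute("INSERT INTO dim_product VALUES (1, 'books'), (2, 'games')")
conn.executemany("INSERT INTO fact_sales VALUES (?, ?, ?)",
                 [(20240101, 1, 100.0), (20240101, 2, 40.0), (20240101, 1, 60.0)])

# Typical analytic query: aggregate the fact table by dimension attributes.
rows = conn.execute("""
    SELECT d.year, p.category, SUM(f.revenue)
    FROM fact_sales f
    JOIN dim_date d    ON d.date_key = f.date_key
    JOIN dim_product p ON p.product_key = f.product_key
    GROUP BY d.year, p.category
    ORDER BY p.category
""").fetchall()
print(rows)  # [(2024, 'books', 160.0), (2024, 'games', 40.0)]
```

The design choice is that facts carry only keys and measures, while descriptive attributes live in dimensions, which keeps analytic joins fast and the grain of each table explicit.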
requirements
Minimum 6 years of experience designing and developing data ingestion processes within Data Warehouse and Data Lake environments
Proficiency in Microsoft SQL Server or Azure Synapse (formerly Azure SQL Data Warehouse); expertise implementing relational Data Warehouse platforms (SQL Server, Oracle, Teradata)
Advanced competency in multidimensional/Kimball modeling and structured data management
Strong skills in T-SQL, including advanced query optimization techniques and execution plans
Proficiency in BI tools such as Power BI, Tableau, or QlikView
Knowledge of Python for ML algorithm development; flexibility to design or work with ML frameworks
Capability to define architectural standards and create Project Start Architecture documentation
Familiarity with ITIL processes related to change, incident, and problem management
Experience in Azure Data Lake, Data Factory, and Azure DevOps or equivalent AWS/GCP tools
Background in agile environments with a focus on collaborative, cross-functional problem-solving
nice to have
Agile/SAFe certifications
Azure Data Engineer or equivalent AWS/GCP certifications
Azure Administrator or Solutions Architect certifications or their AWS/GCP equivalents
We are seeking a Data Engineer with deep expertise in database support and cloud-based data platforms, focusing on Cosmos DB and Microsoft Fabric environments. This position is ideal for an experienced engineer who excels in production-grade data environments, ensuring operational stability and driving enhancements to data platform solutions.
responsibilities
Provide end-to-end support for Cosmos DB–based databases, including monitoring, troubleshooting and performance tuning
Work extensively with Cosmos DB (NoSQL API, formerly Core SQL) and other Cosmos DB APIs (MongoDB, Cassandra, Table)
Support and maintain data platform components within Microsoft Fabric
Diagnose and resolve production issues related to data access, latency, throughput and availability
Implement best practices for scalability, security, backup and disaster recovery
Collaborate with application, data engineering and platform teams to support data-driven solutions
Contribute to automation, scripting and operational improvements
Participate in on-call or production support rotations as required
Document operational procedures and support knowledge
requirements
3+ years of experience as a Software, Data or Platform Engineer
Strong hands-on experience with Cosmos DB (NoSQL API)
Expertise in other Cosmos DB variants
Experience working with Microsoft Fabric or other Azure-based data platforms
Solid understanding of NoSQL data modeling, partitioning and performance tuning
Experience supporting production databases in cloud environments
Familiarity with Azure services and monitoring tools
Strong troubleshooting and analytical skills
Calm, methodical approach to production support and incident management
Good communication skills, able to work with both technical and non-technical stakeholders
Ability to prioritize effectively in high-availability environments
Collaborative mindset and ownership mentality
Excellent command of written and spoken English (B2+ level)
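The partitioning and performance-tuning skills listed above often come down to choosing a partition key that spreads load evenly. A conceptual sketch in plain Python (this is not the azure-cosmos SDK; Cosmos DB performs its own hashing internally, and the key names are invented):

```python
import hashlib
from collections import Counter

def logical_partition(key: str, partitions: int = 4) -> int:
    """Map a partition-key value to a physical partition (conceptual;
    Cosmos DB does its own hash partitioning internally)."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % partitions

# A skewed key (e.g. a tenant id with one dominant tenant)
# concentrates writes on one partition, creating a hot partition...
skewed = ["tenant-1"] * 90 + ["tenant-2"] * 10
skewed_load = Counter(logical_partition(k) for k in skewed)

# ...while a synthetic key (tenant id plus a suffix) typically
# spreads the same traffic across partitions.
synthetic = [f"tenant-1#{i % 8}" for i in range(90)] + ["tenant-2#0"] * 10
synthetic_load = Counter(logical_partition(k) for k in synthetic)

print(max(skewed_load.values()), max(synthetic_load.values()))
```

The trade-off with synthetic keys is that point reads must reconstruct the suffix, so they suit write-heavy, fan-out-read workloads.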
nice to have
Experience with code generation, including non-AI and AI-assisted approaches
Exposure to Data Science workflows
Experience with Big Data platforms and distributed systems
Knowledge of financial instruments and financial services data
Hands-on experience with industry-standard LLMs (including GPT, Claude or similar)
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and strong experience in Azure Data Factory/Synapse. The ideal candidate will have a proven ability to design, develop, and optimize scalable data solutions, build robust data pipelines, and apply modern DevOps practices in a cloud environment.
responsibilities
Design, develop, and optimize large-scale data processing solutions using PySpark
Implement and maintain advanced data pipelines and workflows in Azure Data Factory and Azure Synapse
Automate and monitor development pipelines for efficient, resilient data engineering solutions
Architect scalable data solutions while collaborating with cross-functional engineering teams
Apply best practices for code optimization, version control (Git), and infrastructure automation
Integrate Azure Functions for data orchestration or transformation tasks
Contribute to infrastructure setup and maintain automation using Terraform
Produce technical documentation and mentor junior team members on best practices
requirements
2+ years of professional experience in data engineering roles
Extensive hands-on experience with PySpark for writing optimized, scalable code
Strong background in Azure Data Factory and/or Azure Synapse for data integration and orchestration
Proficiency in leveraging Azure Functions for enhancing workflows
Competency in DevOps practices, including CI/CD toolchains, automation, and version control
Demonstrated Git expertise for collaborative code development
Familiarity with Terraform for managing infrastructure as code
Solid skills in designing build and deployment pipelines within cloud-based environments
Understanding of Azure DevOps for end-to-end CI/CD management
Exceptional problem-solving abilities and documentation skills
Excellent written and verbal communication skills in English (B2+ level)
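The CI/CD and Azure DevOps requirements above are usually expressed as a pipeline definition. A minimal `azure-pipelines.yml` sketch (the step names and file paths are invented for illustration):

```yaml
# Minimal Azure DevOps pipeline sketch: build and test on every push to main.
trigger:
  branches:
    include: [main]

pool:
  vmImage: ubuntu-latest

steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.11'
  - script: pip install -r requirements.txt
    displayName: Install dependencies
  - script: pytest tests/
    displayName: Run tests
```

A real pipeline for this role would add stages for linting, packaging, and Terraform plan/apply, gated by environment approvals.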
We are seeking a detail-oriented and experienced Senior Data Quality Engineer to join our team, specializing in testing software solutions for Capital Markets Equities Products. As a key member of our Scrum team, you will leverage your expertise in software testing lifecycles, ensure the delivery of high-quality software solutions, and contribute to team success through mentorship and innovation.
responsibilities
Study requirement specifications and clarify ambiguities with business analysts and customers
Document processes and share knowledge across the team
Build, maintain, and update test scenarios and test cases based on specifications
Report results of manual and automated tests while troubleshooting script issues
Identify, track, and document system issues and anomalies in issue tracking systems
Communicate unforeseen obstacles affecting work progress to leads in a timely manner
Provide daily and weekly status updates to leads and managers
Develop robust test plans, estimates, and identify opportunities for test process improvements
Review QA project artifacts, including test scenarios, test scripts, defect reports, and status updates
Research and recommend new QA tools, methodologies, and innovations
Mentor and train QA team members, ensuring knowledge dissemination
Ensure efficient testing for applications hosted in cloud environments like AWS
requirements
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent professional certification
6+ years of experience in the software development lifecycle, with a focus on database, ETL, and migration testing
At least 3 years’ experience in testing backend applications built on REST APIs
At least 6 years of experience writing complex SQL queries
Background in creating comprehensive test plans and test case documentation
Knowledge of software QA methodologies, tools, and processes
Competency in cloud application testing, specifically in AWS
Expertise in tools like Jira and Confluence
Skills in practicing Agile principles within Scrum teams
Attention to detail with strong communication skills and the ability to thrive under pressure
Flexibility to mentor junior QA analysts and adapt to evolving priorities
Understanding of the Equities, FX, and Derivatives trading space
Excellent command of written and spoken English (B2+ level)
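Much of the ETL and migration testing described above reduces to reconciliation queries between source and target. A minimal sketch, using sqlite3 for portability with invented table names:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE source_trades (trade_id INTEGER, qty INTEGER);
CREATE TABLE target_trades (trade_id INTEGER, qty INTEGER);
INSERT INTO source_trades VALUES (1, 100), (2, 250), (3, 75);
INSERT INTO target_trades VALUES (1, 100), (2, 999);  -- trade 2 corrupted, trade 3 dropped
""")

# Reconciliation query: find rows that went missing or changed during migration.
discrepancies = conn.execute("""
    SELECT s.trade_id,
           CASE WHEN t.trade_id IS NULL THEN 'missing'
                ELSE 'mismatch' END AS issue
    FROM source_trades s
    LEFT JOIN target_trades t ON t.trade_id = s.trade_id
    WHERE t.trade_id IS NULL OR t.qty != s.qty
    ORDER BY s.trade_id
""").fetchall()
print(discrepancies)  # [(2, 'mismatch'), (3, 'missing')]
```

In practice the same LEFT JOIN pattern is run in both directions to also catch rows that appear only in the target.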
nice to have
Capability to design, build, and maintain automated test scripts
Experience within the Capital Markets domain
Experience using FIX protocol
Proficiency in coding test automation scripts in Java, Python, or JavaScript
Familiarity with Snowflake, AWS, or any cloud experience
Experience using Matillion
Real-time system experience
Background in integration testing with upstream and downstream systems
We are seeking a Senior Data Engineer with deep expertise in Microsoft Fabric, PySpark and AI-driven data platforms. This role focuses on designing, building and optimizing scalable data pipelines and analytics solutions, collaborating with architects, engineers and business stakeholders to deliver modern, AI-integrated data solutions.
responsibilities
Design, develop and maintain scalable data pipelines using Microsoft Fabric
Implement data processing and transformation with Python, PySpark and SparkSQL
Utilize OneLake (Delta Lake format) for efficient data storage and analytics
Develop and support solutions leveraging Cosmos DB (NoSQL API)
Contribute to Fabric workloads such as Data Engineering, Data Factory (Dataflow Gen2) and Lakehouse
Implement and maintain CI/CD pipelines following DevOps best practices
Integrate data solutions with Power BI for reporting and analytics
Collaborate with AI, data science and product teams to support AI-driven use cases
Ensure data quality, performance, security and reliability
Participate in Agile ceremonies and contribute to sprint delivery
Support production issues and drive continuous improvements
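The data-quality responsibility above is often implemented as a validation gate that rejects bad records before they enter the pipeline. A minimal sketch in plain Python (the field names and rules are invented for the example):

```python
# Minimal data-quality gate: validate records before they enter a pipeline
# (field names and rules are invented for illustration).
REQUIRED_FIELDS = {"id", "amount", "currency"}

def validate_row(row: dict) -> list[str]:
    """Return a list of rule violations for one record (empty means valid)."""
    errors = []
    missing = REQUIRED_FIELDS - row.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    if "amount" in row and not isinstance(row["amount"], (int, float)):
        errors.append("amount is not numeric")
    if row.get("currency") not in (None, "USD", "EUR", "GBP"):
        errors.append(f"unknown currency: {row.get('currency')}")
    return errors

good = {"id": 1, "amount": 9.5, "currency": "USD"}
bad = {"id": 2, "amount": "ten"}
print(validate_row(good), validate_row(bad))
```

In a Fabric or Spark pipeline the same rules would typically be expressed as DataFrame filters, with failing rows routed to a quarantine table rather than dropped silently.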
requirements
5+ years of experience in Data Engineering or related engineering roles
Strong hands-on experience with Microsoft Fabric
Proficiency in Python, PySpark and SparkSQL
Experience with Cosmos DB (NoSQL API) and OneLake / Delta Lake concepts
Knowledge of Dataflow Gen2 and Power Query M code
Experience with CI/CD pipelines using Azure DevOps or equivalent
Good understanding of Azure services and Power BI integration
Strong problem-solving and analytical skills
Ability to work independently on complex tasks
Clear communication and collaboration skills
Ownership mindset with attention to quality and performance
Experience working in Agile or Scrum environments
Upper-Intermediate English language proficiency (B2)
nice to have
Experience with code generation, including non-AI and AI-assisted approaches
Expertise with other Cosmos DB variants such as Mongo, Cassandra or Table APIs
Exposure to Azure AI Foundry and Data Science workflows
Strong background in Big Data and Spark ecosystems
Knowledge of financial instruments and financial services data
Hands-on experience with industry-standard LLMs such as GPT, Claude or similar
We are seeking a skilled Data Science Consultant to join our team and contribute to the delivery of AI and Data Cloud Solutions. As a member of our team, you will collaborate with data scientists, engineers, and product owners to advance our AI delivery framework. Your expertise in data structures, databases, and ETL tools will help optimize AI processes while ensuring compliance with security and ethical standards. This role provides the opportunity to drive AI innovation in a collaborative and dynamic environment.
responsibilities
Drive and implement continuous improvements in the client's delivery framework
Apply Agile (SAFe) operational standards and practices to optimize efficiency and AI investment returns
Act as a subject matter expert in Data Science or Data Engineering
Collaborate with Product Owners to gather, address, and align their needs and requirements
Ensure our client's solutions comply with security, AI ethics, DPP, legal, and works council standards
Facilitate quarterly planning activities for the client
Contribute to operational KPIs reporting and drive improvements in these KPIs
Contribute to the development of reusable and enablement assets relevant to the client's context
Educate customers and stakeholders on AI and machine learning concepts
requirements
Bachelor’s or master’s degree in machine learning, computer science, engineering, or related technical fields
Background in working with teams of AI Scientists, MLOps Engineers, and Product Owners
Strong stakeholder communication skills, with the ability to understand SAP business processes
Understanding of agile team practices and methods including Scrum, Kanban, and SAFe
Proficiency in AI and ML concepts and technologies, including MLOps tooling and cloud frameworks (Jupyter, Docker, Kubernetes, GitHub, SAP BTP) and applied areas such as OCR, NLP, and CV
Expertise in Python libraries and machine learning frameworks such as NumPy, Pandas, Keras, scikit-learn, TensorFlow, PyTorch, and Gensim
Interest in business process engineering and modeling
Qualifications in ITIL and ITSM practices
Capability to take ownership of tasks and demonstrate collaborative teamwork skills
Ability to communicate effectively in both written and spoken English (B2 level or higher)
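The ML frameworks listed above (NumPy, scikit-learn, and the rest) automate fits like the one below. As a dependency-free illustration of what the simplest such model does, here is ordinary least squares on a toy dataset (the data values are invented):

```python
# Ordinary least squares for y = slope*x + intercept, in plain Python
# (in practice this is a one-liner with scikit-learn or NumPy).
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.1, 4.9, 7.2, 8.8]  # roughly y = 2x + 1, with noise

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
# slope = covariance(x, y) / variance(x)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
        sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x
print(round(slope, 2), round(intercept, 2))  # 1.94 1.15
```

With scikit-learn the equivalent is `LinearRegression().fit(X, y)`; the value a consultant adds is knowing when such a model is appropriate and how to validate it.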
We are excited to announce an opportunity for a Manager, Data Analytics Consulting at EPAM, a role essential for assisting our customers in creating information strategies and supporting the data-driven transformation of their enterprises. Our ideal candidate will bring a robust Life Sciences background and be prepared to lead and manage data analytics initiatives. If you are ready to make significant impacts and thrive in a vibrant, engaging environment, we welcome you to join our team!
responsibilities
Engage with customer business stakeholders to identify Data & Analytics opportunities
Lead and challenge customers and team members to develop innovative strategies and solutions
Map customer visions and requirements to specific products and components
Advocate for data and work to eliminate business, technical, and political impediments
Mentor and coach clients and team members on data & analytics best practices
Oversee consulting engagements, ensuring quality delivery and output
Guarantee strategies are actionable and well documented for smooth handover to implementation teams
Assemble EPAM analysts & engineers to deliver MVPs
Collaborate with account managers to nurture and manage customer relationships
Design and lead workshops & interviews to define and prioritize client needs
Collaborate with Solution Architects and Technical Leads to design solutions that drive business value
Assist the business development team with proposals and pitches for new business
Respond to RFx requests (RFIs, RFPs), create proposals, and present to potential clients
Lead Data & Analytics presale activities in the Healthcare segment
requirements
7+ years of total experience in the data and analytics domain
3+ years in roles focused on management, delivery, consulting, or presales
5+ years in the Life Science industry or with Life Science clients
Knowledge of one or more Life Science segments such as clinical trials, R&D, Commercial, Regulatory & Compliance
Proficiency in facilitating and driving strategy discussions
Competency to engage in and influence solution design discussions
Solid knowledge of the data and analytics landscape and recent trends
Experience in comprehensive data and analytics project delivery
Excellent communication skills and dynamic presentation abilities
Skills in managing customer expectations and presenting project deliverables to senior stakeholders
Capability to shape a presales opportunity into a customer engagement
Primary career focus on areas such as Cloud Data Platforms, BI, Data Warehousing, Data Lakes, Data Science & Predictive Analytics
Experience leading medium to large-scale software/technology projects
Understanding of Cloud, Data, Analytics, and Data Science technologies and trends
Flexibility to learn & expand knowledge in new technology trends within Data & Analytics
English level of minimum B2 (Upper-Intermediate) for effective communication
We're seeking a Data Technology Consultant to join the Data Practice team and help our clients unlock their data's full potential. In this role, you'll contribute to projects centered around digital transformation, data platforms & science, business analytics, intelligent automation, and cloud solutions.
responsibilities
Work with European technical and business data practices, assisting clients in their Data Analytics strategy and delivery programs
Maximize the value of clients' Data & Analytics initiatives by recognizing appropriate solutions and services
Act as Data Technology Consultant and/or Data Solution Architect, working with the delivery team in complex programs
Maintain understanding of technical solutions, architecture design trends and best practices
Drive Data Analytics initiatives and technology consulting engagements
Collaborate with internal, client and third-party teams to execute transformations
Understand the intersection between technology, customers and business
Stay updated on emerging trends and challenges in clients' markets and geographies and how they affect clients' businesses and initiatives
Work closely with project/program management to ensure successful delivery through an integrated delivery model
Deliver clear and consistent communications within projects with relevant stakeholders
Establish and cultivate strong relationships with clients
requirements
Strong experience as a Data Technical Consultant and Data Solution Architect
Hands-on technology experience in the areas of Data Analytics
Skills in one of the following: Big Data, BI, Data Warehousing, Data Science, Data Management, Data Storage, Data Visualization
Good knowledge in at least one of the Cloud providers (AWS, Azure, GCP)
Background in continuous delivery tools and technologies
Ability to work with relevant delivery teams
Skill in effectively communicating technology pros & cons and presenting rational options to clients
Confidence in expressing viewpoints, making recommendations and presenting analysis when needed
English language proficiency at an Upper-Intermediate level (B2) or higher
We are in search of a Senior Data Solution Architect to spearhead data-driven projects that leverage scalable platforms, cutting-edge technologies, and machine-learning algorithms. By designing robust solutions, you will play a crucial part in generating value from data, all while driving our organization's digital transformation through data enablement and democratization.
responsibilities
Address organizational business goals and strategies using best practices in data, solution, enterprise, and business architecture, along with software engineering expertise
Conduct architectural activities including business problem analysis, technology landscape identification, significant requirements identification, solution design, and artifact creation
Perform software engineering tasks such as coding and data model creation in primary technology stacks, comprehend code and data models for non-primary technology stacks, and apply software design patterns and practices
Build and manage data management environments on premise and in the cloud, addressing security, compliance, and regulatory concerns with automation, monitoring, orchestration, and data ingestion
Guide the implementation team in integrating and harmonizing data from various sources, building analytical products on the data platform, and managing data product life cycles efficiently
Provide data technology consulting services to help clients shape solution visions and make informed decisions under uncertainty
requirements
10+ years of IT experience
5+ years in leadership roles
Extensive experience in requirements engineering, solution architecture, systems development, deployment, and maintenance
Knowledge in architecture, design patterns, and the technological landscape across at least 3 technology domains including Data Platforms, IoT, and ML
Profound knowledge of technology internals for at least 1 technology domain
Solid understanding of core concepts in data and analytics platform architectures, data warehousing, business intelligence, data management, integration, security, and operations
Experience in designing, implementing, deploying, troubleshooting, and re-platforming distributed systems both on-premises and in the Cloud
Structured knowledge of the entire architecture design process including technology selection, estimation, proposal verification, and documentation
Background in all phases of the software development life cycle using various development methodologies and best practices
High level of organization and attention to detail
Competency in effective communication
Fluency in English (B2 level or higher)
nice to have
Familiarity with CI/CD and infrastructure automation
Flexibility to adapt to evolving technologies and methodologies within data management
Capability to drive operations focusing on scaling and cost-efficiency
Qualifications in advanced data security and privacy practices
Understanding of metadata management and semantic layering within data platforms