We are seeking an experienced and driven Data Engineer to join our expanding Data Engineering team. In this role, you will be a key member of one of our established Platform Teams, designing, developing, and scaling data pipelines in Databricks with PySpark on Microsoft Azure for the AI Factory team. This is an exciting opportunity to work at the intersection of big data and cloud engineering, delivering reliable, scalable, and high-performance data platforms that drive innovation across our organization.
responsibilities
Design cloud-native analytical solutions using Big Data and NoSQL technologies
Build and optimize scalable data pipelines in Databricks with PySpark on Azure
Develop and maintain data lakes and data warehouses to ensure reliability and performance
Design and implement ETL/ELT workflows to collect, clean, and structure data
Implement data quality, lineage, and monitoring frameworks
Collaborate with ML and analytics teams to deliver clean, production-ready datasets
Conduct code reviews to uphold technical standards and best practices
Mentor junior engineers and foster a high-performance, collaborative culture
Integrate CI/CD methodologies into data engineering workflows using tools such as Jenkins or GitLab CI/CD
Support requirements gathering and deliver solutions in alignment with architects, technical leads, and cross-functional teams
Engage with stakeholders to understand business processes, model input data, and ensure deliverables meet requirements
requirements
2+ years of experience in Data Engineering or a related field
Proficiency in Python and PySpark
Hands-on experience with Databricks and Microsoft Azure cloud services
Familiarity with software version control tools (e.g., GitHub, Git)
Experience with CI/CD frameworks such as Jenkins, Concourse, or GitLab CI/CD
Proven ability to build scalable, robust, and highly available data solutions
Strong problem-solving, analytical, and stakeholder engagement skills
English level of minimum B2 (Upper-Intermediate) for effective communication
nice to have
Experience with additional programming languages such as Java, SQL, or Scala
Knowledge of SAP BTP or similar enterprise data platforms
We are seeking an experienced and driven Senior Data Engineer to join our expanding Data Engineering team. In this role, you will be a key member of one of our established Platform Teams, designing, developing, and scaling data pipelines in Databricks with PySpark on Microsoft Azure for the AI Factory team. This is an exciting opportunity to work at the intersection of big data and cloud engineering, delivering reliable, scalable, and high-performance data platforms that drive innovation across our organization.
responsibilities
Design cloud-native analytical solutions using Big Data and NoSQL technologies
Build and optimize scalable data pipelines in Databricks with PySpark on Azure
Develop and maintain data lakes and data warehouses to ensure reliability and performance
Design and implement ETL/ELT workflows to collect, clean, and structure data
Implement data quality, lineage, and monitoring frameworks
Collaborate with ML and analytics teams to deliver clean, production-ready datasets
Conduct code reviews to uphold technical standards and best practices
Mentor junior engineers and foster a high-performance, collaborative culture
Integrate CI/CD methodologies into data engineering workflows using tools such as Jenkins or GitLab CI/CD
Support requirements gathering and deliver solutions in alignment with architects, technical leads, and cross-functional teams
Engage with stakeholders to understand business processes, model input data, and ensure deliverables meet requirements
requirements
4+ years of experience in Data Engineering or a related field
Proficiency in Python and PySpark
Hands-on experience with Databricks and Microsoft Azure cloud services
Familiarity with software version control tools (e.g., GitHub, Git)
Experience with CI/CD frameworks such as Jenkins, Concourse, or GitLab CI/CD
Proven ability to build scalable, robust, and highly available data solutions
Strong problem-solving, analytical, and stakeholder engagement skills
nice to have
Experience with additional programming languages such as Java, SQL, or Scala
Knowledge of SAP BTP or similar enterprise data platforms
We are seeking a visionary and experienced Lead Data Engineer to join our Data Engineering team. In this role, you will lead a Platform Team, driving the design, development, and scaling of data pipelines in Databricks with PySpark on Microsoft Azure for our AI Factory team. This is a unique opportunity to shape the future of our data platforms, influence technical strategy, and mentor a talented team, all while working at the forefront of big data and cloud engineering.
responsibilities
Lead the design and architecture of cloud-native analytical solutions using Big Data and NoSQL technologies
Oversee the development and optimization of scalable data pipelines in Databricks with PySpark on Azure
Own the strategy for building and maintaining data lakes and warehouses, ensuring reliability, performance, and scalability
Define and implement ETL/ELT workflows and best practices for data collection, cleaning, and structuring
Establish and enforce data quality, lineage, and monitoring frameworks across the team
Collaborate closely with ML, analytics, and business teams to deliver production-ready datasets and solutions
Conduct and lead code reviews, setting technical standards and fostering best practices
Mentor and coach data engineers, cultivating a high-performance and collaborative team culture
Champion CI/CD methodologies in data engineering workflows using tools like Jenkins or GitLab CI/CD
Drive requirements gathering and solution alignment with architects, technical leads, and cross-functional teams
Engage with stakeholders at all levels to understand business processes, model input data, and ensure deliverable alignment with strategic goals
requirements
6+ years of experience in Data Engineering or a related field, with at least 2 years in a technical leadership role
Deep proficiency in Python and PySpark
Extensive hands-on experience with Databricks and Microsoft Azure cloud services
Strong background in software version control tools (e.g., GitHub, Git)
Proven track record with CI/CD frameworks such as Jenkins, Concourse, or GitLab CI/CD
Demonstrated expertise in architecting and building scalable, robust, and highly available data solutions
Excellent problem-solving, analytical, and stakeholder management skills
Experience in mentoring and leading engineering teams
nice to have
Experience with additional programming languages such as Java, SQL, or Scala
Knowledge of SAP BTP or similar enterprise data platforms
Familiarity with agile development methodologies
Experience in strategic planning and technical roadmap development
We are looking for a motivated Data Engineer to join our growing Data Engineering team. You will work as part of one of our Platform Teams, developing and maintaining data pipelines in Databricks with PySpark on Microsoft Azure for the AI Factory team. This role offers the opportunity to work with modern cloud and big data technologies, contributing to reliable and scalable data platforms that support innovation across the organization.
responsibilities
Develop and maintain data pipelines in Databricks with PySpark on Azure
Support the design and implementation of cloud-based analytical solutions using Big Data and NoSQL technologies
Assist in building and maintaining data lakes and warehouses to ensure reliability and performance
Participate in the development of ETL/ELT workflows to collect, clean, and structure data
Help implement data quality, lineage, and monitoring frameworks
Collaborate with ML and analytics teams to deliver clean, production-ready datasets
Participate in code reviews and follow technical standards and best practices
Work with CI/CD methodologies in data engineering workflows using tools like Jenkins or GitLab CI/CD
Collaborate with architects, technical leads, and cross-functional teams to deliver solutions aligned with requirements
Engage with stakeholders to understand processes and ensure deliverable alignment
requirements
2+ years of experience in Data Engineering or a related field
Proficiency in Python and PySpark
Experience with Databricks and Microsoft Azure cloud services
Familiarity with software version control tools (e.g., GitHub, Git)
Exposure to CI/CD frameworks such as Jenkins, Concourse, or GitLab CI/CD
Ability to build reliable and scalable data solutions
Strong problem-solving and analytical skills, with effective communication and teamwork abilities
nice to have
Experience with additional programming languages such as Java, SQL, or Scala
Knowledge of SAP BTP or similar enterprise data platforms
We are seeking an experienced and highly motivated Data Engineer to join our expanding Data Engineering practice. In this role, you will strengthen one of our established Platform Teams, designing, building, and scaling data pipelines using Databricks with PySpark on Microsoft Azure, enabling our AI Factory teams to create and deploy advanced machine learning solutions. This position offers a unique opportunity to work at the intersection of big data, cloud engineering, and MLOps, delivering reliable, scalable, and high-performance data platforms that drive innovation throughout the organization.
responsibilities
Design cloud-native analytical solutions using Big Data and NoSQL technologies
Build scalable data pipelines with Databricks and PySpark on Azure
Collaborate with MLOps and ML Engineering teams to deliver robust data platforms for AI model development and deployment
Work alongside architects, technical leads, and cross-functional teams to align solutions with business objectives
Assist the SAP Platform Team in utilizing SAP BTP and hyperscaler services for enterprise-grade data platforms
Conduct and participate in code reviews to ensure adherence to technical standards and best practices
Mentor junior engineers to cultivate an innovative and high-performance engineering culture
Implement CI/CD practices into data workflows using tools like Jenkins, GitLab CI/CD, and Concourse
Facilitate communication with stakeholders to understand business processes, model input data, and deliver aligned solutions
requirements
Minimum of 2 years' experience in Software Engineering with a solid background in Data Engineering, Machine Learning Operations, or Machine Learning
Expertise in Python and PySpark
Familiarity with Databricks and Microsoft Azure cloud services
Strong knowledge of software version control tools like GitHub or Git
Background in CI/CD tools such as Jenkins, GitLab CI/CD, or Concourse
Proven capability to deliver scalable, robust, and highly available data solutions
Strong problem-solving, analytical, and stakeholder management skills
nice to have
Skills in another programming language such as Java, SQL, or Scala
Understanding of SAP BTP or comparable enterprise data platforms
We are seeking an experienced and driven Senior Data Engineer to join our expanding Data Engineering team. This position involves working as part of one of our established Platform Teams, where you will design, develop, and scale data pipelines in Databricks with PySpark on Microsoft Azure, empowering our AI Factory teams to create and deploy advanced machine learning solutions. This role offers the chance to contribute at the nexus of big data, cloud engineering, and MLOps, delivering reliable, scalable, and high-performance data platforms that support innovation across the organization.
responsibilities
Design cloud-native analytical solutions using Big Data and NoSQL technologies
Build scalable data pipelines in Databricks with PySpark on Azure
Collaborate with MLOps and ML Engineering teams to deliver data platforms for AI development and deployment
Support requirements gathering and deliver aligned solutions with architects, technical leads, and cross-functional teams
Assist the SAP Platform Team in utilizing SAP BTP and hyperscaler offerings for enterprise-grade data solutions
Conduct code reviews to ensure technical standards and practices are maintained
Provide mentorship to junior engineers to cultivate a high-performance culture
Incorporate CI/CD methodologies into data engineering workflows using tools like Jenkins or GitLab CI/CD
Engage with stakeholders to understand processes, model input data, and ensure deliverable alignment
requirements
4+ years of experience in Software Engineering with a focus on Data Engineering, Machine Learning, or MLOps
Proficiency in Python and PySpark
Background in Databricks and Microsoft Azure cloud services
Knowledge of software version control tools, including GitHub or Git
Ability to work with CI/CD frameworks such as Jenkins, Concourse, or GitLab CI/CD
Expertise in building scalable, robust, and highly available data solutions
Strong problem-solving and analytical skills, along with effective stakeholder engagement
nice to have
Skills in an additional programming language like Java, SQL, or Scala
Understanding of SAP BTP or similar enterprise data platforms
We are seeking an experienced and driven Databricks Engineer to join our expanding Data Engineering team. In this role, you will be a key member of one of our established Platform Teams, designing, developing, and scaling data pipelines in Databricks with PySpark on Microsoft Azure for the AI Factory team. This is an exciting opportunity to work at the intersection of big data and cloud engineering, delivering reliable, scalable, and high-performance data platforms that drive innovation across our organization.
responsibilities
Design cloud-native analytical solutions using Big Data and NoSQL technologies
Build and optimize scalable data pipelines in Databricks with PySpark on Azure
Develop and maintain data lakes and data warehouses to ensure reliability and performance
Design and implement ETL/ELT workflows to collect, clean, and structure data
Implement data quality, lineage, and monitoring frameworks
Collaborate with ML and analytics teams to deliver clean, production-ready datasets
Conduct code reviews to uphold technical standards and best practices
Mentor junior engineers and foster a high-performance, collaborative culture
Integrate CI/CD methodologies into data engineering workflows using tools such as Jenkins or GitLab CI/CD
Support requirements gathering and deliver solutions in alignment with architects, technical leads, and cross-functional teams
Engage with stakeholders to understand business processes, model input data, and ensure deliverables meet requirements
requirements
2+ years of experience in Data Engineering or a related field
Proficiency in Python and PySpark
Hands-on experience with Databricks and Microsoft Azure cloud services
Familiarity with software version control tools (e.g., GitHub, Git)
Experience with CI/CD frameworks such as Jenkins, Concourse, or GitLab CI/CD
Proven ability to build scalable, robust, and highly available data solutions
Strong problem-solving, analytical, and stakeholder engagement skills
English level of minimum B2 (Upper-Intermediate) for effective communication
nice to have
Experience with additional programming languages such as Java, SQL, or Scala
Knowledge of SAP BTP or similar enterprise data platforms
We are seeking a dedicated and skilled Presales Solution Consultant to become a key member of our growing Data & AI practice. This position blends technology with business strategy, empowering clients to leverage data for transformative outcomes. You will be instrumental in designing cutting-edge data solutions, advancing sales efforts, and delivering exceptional strategic insights through your consulting and technical expertise. Our ideal candidate possesses strong technical acumen combined with outstanding consulting abilities, excels in dynamic settings, and ensures alignment between client needs and modern data and analytics capabilities. This opportunity allows you to engage with groundbreaking technologies and contribute to impactful results across global industries.
responsibilities
Collaborate with sales, delivery, and expert teams to align client objectives with tailored data-driven solutions, ensuring our offerings address business challenges
Manage all facets of the presales process, including qualification of opportunities, solution design, proposal creation, demonstrations, and client presentations
Facilitate workshops and meetings to understand client goals and technical needs, ensuring our solutions align with their business objectives
Develop comprehensive data solutions featuring cloud-native platforms, data integration, advanced analytics, and AI/ML models for actionable insights
Craft compelling proposals, pricing models, and value propositions to successfully secure opportunities and convert engagements
Monitor updates and advancements in Data & Analytics, Cloud, and AI spaces while providing strategic recommendations to clients and driving innovation within the practice
Enhance operational efficiency by contributing to reusable solution frameworks, accelerators, and methodologies
Foster team development through mentorship and knowledge-sharing initiatives
requirements
A degree in Data Science, Computer Science, Business Analytics, or equivalent professional experience
Extensive background (5+ years) in presales, solution architecture, or consulting, emphasizing Data/AI technologies and their application in business contexts
Deep knowledge of data platforms (e.g., Databricks, Snowflake, AWS/Azure/GCP Data Services), ETL processes, data modeling, BI tools, and frameworks (e.g., Power BI, Tableau)
Familiarity with data science methodologies combined with proficiency in identifying solutions that meet client needs
Exceptional communication skills with proficiency in conveying complex technical concepts through clear, engaging storytelling
Demonstrated experience engaging senior stakeholders and delivering impactful pitches and presentations
Competency in creating proposals, RFP responses, and pricing/engagement models for data-related projects or services
Understanding of strategic consulting principles and capability to design future-state roadmaps and solutions for client innovation
Strong organizational and leadership qualities paired with flexibility to manage intricate tasks within fast-paced settings
Proficiency in English communication (B2 level or higher), ensuring technical concepts are clearly understood by diverse audiences
We are seeking a seasoned Senior SAP FSM Engineer to design, build, and enhance scalable solutions across the SAP FSM platform. In this role, you will combine hands-on development with close collaboration across technical and functional teams to deliver seamless field service processes. Join our team to contribute to high-impact projects and help drive innovation in service operations. Apply today to propel your career forward.
responsibilities
Design and implement SAP FSM features for planning, dispatching, reporting, master data, and business rules
Collaborate with functional and technical teams to define and clarify requirements
Ensure smooth operational performance through root cause analysis and system optimization
Develop and extend SAP BTP solutions using CAP, custom OData services, Shell UI applications, Event Mesh, and Webhooks
Create and enhance SAPUI5 front-end features integrated with FSM Web UI or BTP Launchpad
Implement mobile app enhancements including offline synchronization, hardware integration, and workflow improvements
Optimize mobile and web FSM user experience through customized views, controllers, and routing logic
Integrate FSM with APIs, synchronization services, and middleware for cross-system functionality
Enhance security protocols in FSM solutions using secure coding standards
Document technical processes and solutions for development and maintenance purposes
requirements
3+ years of experience in SAP Field Service Management development
Expertise in SAP CAP-based Node.js development
Proven leadership in cross-functional technical collaboration
Track record of delivering complex FSM projects on SAP BTP
Strong problem-solving skills for field service process optimization
Effective communication and teamwork abilities
Upper-Intermediate (B2) English proficiency
nice to have
Hands-on experience with Jaspersoft reporting for FSM
Integration expertise with REST/OData APIs and CPI/iFlows
Knowledge of OAuth2 and JWT secure coding practices
Proficiency in HANA SQL, Git, CI/CD, unit testing, and technical documentation
Experience with multi-tenant BTP applications, C4C development, SAP MDK, and FSM data replication
We are looking for a DevOps Engineer who enjoys solving technical challenges and contributing to innovative infrastructure solutions. In this role, you will support the enhancement of infrastructure, automation, and operational excellence, helping teams deliver high-quality software faster and more efficiently. You will work closely with senior engineers and stakeholders while continuing to grow your technical expertise.
responsibilities
Configure and maintain continuous integration and deployment pipelines
Support configuration management across the infrastructure
Contribute to building and maintaining Infrastructure as Code solutions
Provide operational support for infrastructure and automation tools
Monitor system performance and assist in troubleshooting incidents
Collaborate with stakeholders and team members to ensure alignment and successful delivery
requirements
2+ years of experience in a DevOps, System Administration, or similar role
Good knowledge of public or private clouds such as AWS, GCP, Azure, or OpenStack
Solid system administration skills in Linux or Windows environments
Practical experience with containers such as Docker or Kubernetes
Experience with CI/CD tools such as Jenkins, TeamCity, GoCD, or CodePipeline
Experience with scripting for automation (Bash, Python, PowerShell, or similar)
Strong analytical and problem-solving skills
Motivation to learn and adapt to evolving technologies
Good command of written and spoken English (B2 level or higher)
nice to have
Familiarity with configuration management tools such as Ansible, Puppet, or Chef
Experience with IaC technologies like Terraform or CloudFormation
Basic understanding of virtualization technologies (KVM/libvirt or Hyper-V)
Experience with version control systems such as Git or Subversion
Basic knowledge of TCP networking and monitoring tools