We are seeking a Senior Data DevOps Engineer to enhance our data infrastructure and streamline our development processes. In this role, you will leverage your expertise in Data DevOps to ensure efficient data management and deployment practices. If you are passionate about optimizing data workflows and enjoy working in a collaborative environment, we encourage you to apply.
responsibilities
Develop and maintain data pipelines for efficient data flow
Implement automation tools for deployment processes
Collaborate with data scientists to optimize data usage
Monitor and troubleshoot data systems for performance issues
Create documentation for data processes and workflows
Ensure data security and compliance with industry standards
Conduct regular audits of data systems for quality assurance
Participate in team meetings to discuss project progress
Assist in training team members on data tools and practices
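The pipeline responsibilities above can be pictured as composable transformation stages. The following is a minimal, illustrative sketch in plain Python — the stage names, fields, and sample records are invented for illustration, not taken from this posting:

```python
from typing import Callable, Iterable, Iterator

# A pipeline stage transforms one stream of records into another.
Stage = Callable[[Iterable[dict]], Iterator[dict]]

def parse(records: Iterable[dict]) -> Iterator[dict]:
    """Coerce raw string fields into typed values."""
    for r in records:
        yield {**r, "amount": float(r["amount"])}

def validate(records: Iterable[dict]) -> Iterator[dict]:
    """Drop records that fail basic quality checks."""
    for r in records:
        if r["amount"] >= 0 and r.get("id"):
            yield r

def run_pipeline(source: Iterable[dict], stages: list[Stage]) -> list[dict]:
    """Thread the record stream through each stage in order."""
    stream: Iterable[dict] = source
    for stage in stages:
        stream = stage(stream)
    return list(stream)

raw = [
    {"id": "a1", "amount": "19.99"},
    {"id": "",   "amount": "5.00"},   # rejected: missing id
    {"id": "b2", "amount": "-3.00"},  # rejected: negative amount
]
clean = run_pipeline(raw, [parse, validate])
print(clean)  # [{'id': 'a1', 'amount': 19.99}]
```

Real pipelines built on tools like Apache Kafka add durability, partitioning, and backpressure on top of this basic stage-composition idea.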
requirements
3+ years of experience in Data DevOps
Proficiency in data pipeline tools such as Apache Kafka
Knowledge of containerization technologies like Docker
Familiarity with cloud services like Amazon Web Services
Experience in cross-functional project participation
Excellent problem-solving skills in technical environments
Strong communication skills for effective collaboration
Advanced English proficiency (B2 level or higher) for technical documentation
nice to have
Experience with monitoring tools like Prometheus
Familiarity with data visualization tools such as Tableau
Knowledge of Agile methodologies for project management
Ability to work under pressure in fast-paced environments
Strong interpersonal skills to foster team collaboration
We are looking for a Senior/Lead Data DevOps engineer to join EPAM and contribute to a project for a large customer. As a Senior/Lead Data DevOps engineer on the Data Platform, you will focus on maintaining the data transformation architecture, the backbone of the Customer's analytical data platform, and on implementing new features for it. As a key figure in our team, you'll implement and deliver high-performance data processing solutions that are efficient and reliable at scale.
responsibilities
Design, build and maintain highly available production systems utilizing Azure data solutions including Data Lake Storage, Databricks, ADF, and Synapse Analytics
Design and implement build, deployment, and configuration management systems, and drive CI/CD improvements, based on Terraform and Azure DevOps pipeline solutions across multiple subscriptions and environments
Improve the user experience of the Databricks platform by applying best practices for Databricks cluster management, cost-effective setups, data security models, etc.
Design, implement, and improve the monitoring and alerting system
Collaborate with Architecture teams to ensure platform architecture and design standards align with support model requirements
Identify opportunities to optimize platform activities and processes, implement automation mechanisms to streamline operations
requirements
4+ years of professional experience
2+ years of hands-on experience with a variety of Azure services
Proficiency in Azure data solutions including Data Lake Storage, Databricks, ADF, and Synapse Analytics
Solid Linux/Unix systems administration background
Advanced skills in configuring, managing and maintaining networking on Azure cloud
Solid experience in managing production infrastructure with Terraform
Hands-on experience with at least one of Azure DevOps, GitLab CI, or GitHub Actions pipelines for infrastructure management and automation
Hands-on experience with Databricks platform
Practical knowledge of Python combined with SQL knowledge
Hands-on experience with at least one scripting language: Bash, Perl, or Groovy
Advanced skills in Kubernetes/Docker
Good knowledge of Security Best Practices
Good knowledge of Monitoring Best Practices
Good organizational, analytical and problem solving skills
Ability to present and communicate the architecture in a visual form
English proficiency at B2 level or higher, including the ability to communicate directly with the customer
We are seeking a highly skilled and experienced Senior DevOps Engineer with Azure to join our team. In this role, you will manage and optimize Azure cloud environments, ensuring the high availability, scalability, and security of our applications. You will collaborate closely with development teams to streamline deployment processes and enhance infrastructure through automation.
responsibilities
Work extensively with container automation tooling, including Kubernetes and Azure Kubernetes Service (AKS)
Provide Tier 2 support for diverse compute platforms and their containerized solutions
Architect, deploy, and maintain a secure and scalable Azure-based compute platform
Advocate for Site Reliability Engineering (SRE) methodologies by implementing effective monitoring and defining SLOs and SLAs
Identify and act on opportunities to streamline and optimize existing Azure systems using automation tools
Collaborate across teams to conduct post-mortem analyses on service disruptions or degradations
Demonstrate strong verbal and written communication to support technical teams and stakeholders
Design and build automation suites to simplify operational support in Azure environments
Work with CNCF tools like ArgoCD, Crossplane, and Kyverno tailored for Azure environments
Participate in the on-call rotation to provide support for production services built on Azure
requirements
3+ years of relevant experience in DevOps or a similar role
Hands-on experience with containerized applications and orchestration platforms such as Kubernetes or AKS
Solid knowledge and expertise in Azure cloud architecture and services
Strong understanding of observability fundamentals (logging, metrics, tracing) within cloud environments
Excellent organizational and technical skills essential for providing exceptional support
Ability to learn quickly, master existing Azure systems, and identify opportunities for improvement
A creative mindset and strong problem-solving skills with a "test-and-learn" approach to challenges
Proficiency in Helm, scripting with any programming language, Terraform/Terragrunt, and networking knowledge, including Azure Service Mesh (Istio)
Fluency in English (both written and spoken) at a minimum B2 level
nice to have
Experience with observability tools like Datadog or Prometheus
Familiarity with GitHub Actions for CI/CD processes
Knowledge of tools such as Kyverno, OPA, or Gatekeeper for Azure policy enforcement
Understanding of SRE principles and practices
Experience with deployment and management tools like ArgoCD
We are seeking an experienced Site Reliability Engineer (SRE) to help build, harden, and scale our CDN and Web Application Firewall (WAF) product. You will be responsible for implementing new features, optimizing performance, and improving security controls across our edge stack. This role requires deep hands-on expertise in Nginx/OpenResty, Lua, C/FFI module development, eBPF, Linux networking and infrastructure-as-code tooling.
responsibilities
Design, implement and ship features and improvements for our CDN and WAF edge stack
Develop and maintain Lua code running in Nginx/OpenResty and build high-performance C modules or FFI bindings where needed
Implement packet- and kernel-level observability or filtering using eBPF (including XDP/eBPF tracing for telemetry and enforcement)
Tune and troubleshoot high-volume Nginx deployments for latency, throughput and memory usage
Define, author and maintain WAF rule logic, request/response inspection and mitigation workflows
Build automation for deployment and configuration using Infrastructure-as-Code (Ansible, Puppet, Terraform, or similar)
Work with networking protocols and operational requirements of a CDN: BGP, anycast, TCP/IP stack, load balancing, connection handling
Create and run performance/load tests, fuzzing and security tests; profile and optimize hotspots
Produce clear design documentation, runbooks, and hand over completed work to operations. Participate in code reviews and mentor engineers
Collaborate with product, security and SRE teams to align feature work with product goals and SLAs
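The WAF rule logic mentioned above comes down to matching request attributes against compiled signatures and deciding on mitigation. Below is a toy sketch of that idea in Python — the rules, rule ids, and request shape are invented for illustration; the production stack described in this role (Lua running inside Nginx/OpenResty) is far more involved:

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    rule_id: str
    pattern: re.Pattern   # compiled attack signature
    target: str           # which request field to inspect

# Toy signatures loosely modeled on common OWASP-style checks.
RULES = [
    Rule("sqli-001", re.compile(r"(?i)\bunion\b.+\bselect\b"), "query"),
    Rule("xss-001",  re.compile(r"(?i)<script\b"), "body"),
    Rule("trav-001", re.compile(r"\.\./"), "path"),
]

def inspect(request: dict) -> list[str]:
    """Return the ids of all rules the request triggers."""
    hits = []
    for rule in RULES:
        value = request.get(rule.target, "")
        if rule.pattern.search(value):
            hits.append(rule.rule_id)
    return hits

req = {"path": "/search", "query": "q=1 UNION SELECT password", "body": ""}
print(inspect(req))  # ['sqli-001']
```

In an OpenResty deployment the same inspect-and-mitigate step would run per request phase in Lua, with performance-critical matching pushed down into C modules or eBPF where needed.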
requirements
5+ years of production experience in Linux systems engineering, networked services or edge infrastructure
Strong hands-on experience with Nginx and Lua (ngx_lua/OpenResty), including writing Lua modules for request processing
Experience building native C modules or FFI bindings used by Nginx/Lua; comfortable with libc, POSIX APIs, and building/packaging C extensions
Practical experience with eBPF (tools, BCC/libbpf, XDP) for telemetry, filtering or tracing
Deep knowledge of networking and TCP/IP internals, load balancing, and CDN operational patterns. Familiarity with BGP and anycast is a plus
Experience with web application firewalls (appliance, service, or software) and their capabilities
Solid Linux kernel and userland troubleshooting skills: perf, tcpdump/Wireshark, strace, systemtap
Experience with Infrastructure-as-Code and configuration management (Ansible, Puppet, Chef, Terraform or similar)
Experience deploying and maintaining WAF rulesets and policies; understanding of OWASP top risks and typical web attack patterns
Experience with testing and benchmarking tools (wrk, ab, locust, etc.) and CI/CD pipelines
Excellent communication skills; able to work independently and collaborate effectively with distributed teams
English level B1+ for effective communication
nice to have
Prior experience building or operating CDNs or edge platforms
Familiarity with web security tooling, such as ModSecurity, or other WAF platforms
Experience with container workflows and edge deployment (e.g., Docker, HashiCorp Nomad)
Exposure to cloud providers’ networking (AWS/GCP/Azure) and hybrid edge deployments
Familiarity with observability stacks: Prometheus, Grafana, ELK/EFK
Experience in cross-compiling or packaging modules for multiple Linux distributions
Experience programming in Lua and C, with familiarity using LuaJIT and the Lua FFI for native, high-performance integrations
We are seeking a remote Senior Data Integration Engineer to join our team and take a leading role in building and optimizing data integration solutions. This position offers the chance to work on challenging data projects, collaborating with diverse teams to deliver impactful results. If you are enthusiastic about data systems and enjoy tackling technical problems, this role provides an excellent platform for professional growth and meaningful contributions.
responsibilities
Develop, implement, and manage efficient data integration pipelines and workflows
Work closely with cross-functional teams to ensure seamless data transfer and alignment with business needs
Enhance existing data integration processes to improve scalability and performance
Apply data transformation and validation techniques to maintain high data quality standards
Contribute to the design and execution of end-to-end data integration strategies in line with project objectives
Diagnose and resolve technical challenges related to data integration systems
Prepare detailed documentation for data integration workflows, processes, and best practices
Adhere to security and data governance policies throughout integration activities
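The validation and data-quality responsibilities above are often implemented as a set of named checks that partition incoming records into accepted rows and annotated rejects. A minimal sketch in Python — the check names and field names are hypothetical; real integration schemas come from the project:

```python
from typing import Iterable

# Validation checks: name -> predicate over a record.
# Field names are invented for illustration.
CHECKS = {
    "has_customer_id": lambda r: bool(r.get("customer_id")),
    "valid_country":   lambda r: r.get("country") in {"US", "DE", "PL"},
    "non_negative":    lambda r: r.get("total", 0) >= 0,
}

def partition(records: Iterable[dict]) -> tuple[list[dict], list[dict]]:
    """Split records into accepted rows and rejects annotated with failed checks."""
    accepted, rejected = [], []
    for rec in records:
        failed = [name for name, check in CHECKS.items() if not check(rec)]
        if failed:
            rejected.append({**rec, "_failed_checks": failed})
        else:
            accepted.append(rec)
    return accepted, rejected

rows = [
    {"customer_id": "c-1", "country": "US", "total": 40.0},
    {"customer_id": "",    "country": "FR", "total": -1.0},
]
ok, bad = partition(rows)
print(len(ok), bad[0]["_failed_checks"])
# 1 ['has_customer_id', 'valid_country', 'non_negative']
```

Keeping the failed-check names on each rejected record makes data-quality reporting and downstream triage straightforward.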
requirements
Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field
At least 3 years of hands-on experience in data integration, working with complex data systems
Strong knowledge of SDLC methodologies and their application in data projects
Proficiency in Agile methodologies to deliver effective data integration solutions
Advanced skills in SQL for managing and querying relational databases
Experience working with NoSQL databases for semi-structured and unstructured data
Familiarity with CI/CD pipelines to automate data integration workflows
Fluency in English, both written and verbal, at a B2 level or higher
nice to have
Experience with cloud-based platforms and tools for data integration
Understanding of big data technologies and frameworks for processing large-scale datasets
Are you an experienced engineer with skills in Azure and Databricks? Our client – the world's leading data, insights and consulting company – is looking for a highly qualified Senior or Lead Big Data DevOps to support enterprise data platform deployment on Azure environments. We operate on a global scale, understanding more about people's thought processes, feelings, shopping preferences, sharing habits, voting behaviors and perspectives than anyone else. As part of our team, you'll utilize your expertise to define brands and audiences, disrupt and renew offers, connect with audiences and win over consumers and customers.
responsibilities
Set up Azure Big Data service environment
Lead Infrastructure & CI/CD implementation
Support DEV and QA teams throughout the SDLC
Configure infrastructure and CI/CD automation frameworks, and set up monitoring, alerting, and environments via automation
Design and automate quality gates for Dev-QA-Prod deployments
Implement Infrastructure as Code (IaC) with Azure DevOps CI/CD
requirements
Expertise in Azure DevOps
Strong knowledge of Azure Big Data services (ADF, Databricks)
Extensive production experience in CI/CD automation and Azure DevOps
Production background in Azure Data Factory, Databricks, Azure Monitor, ADLS, Event Grid, Azure Functions, Azure Purview, Azure cloud services, and Terraform
nice to have
Expertise in IaC with Azure DevOps CI/CD
Knowledge of Jenkins and Ansible
Experience with Continuous Integration and Continuous Delivery processes
Familiarity with Azure Infrastructure as a Service (IaaS) and Platform as a Service (PaaS)
We are looking for an SAP Basis / System Solution Architect to join our tight-knit EPAM team and engage in technology-driven activities. In this role, you will have an opportunity to work at a leading software engineering and IT consulting company. If you want to deepen professional experience and enrich your knowledge – welcome on board.
responsibilities
Manage the technical design and deployment of integrated end-to-end solutions
Provide input to architecture, security and data guidelines, ensuring compliance with these standards
Work with non-functional requirements and quality attributes
Advise on SAP CIO Guides (On-premises / Cloud / Hybrid)
Run SAP systems governance, including SAP technical change management, troubleshooting, and root cause analysis
Estimate work efforts relating to SAP Basis, DevOps activities and technical architecture components
requirements
5+ years of experience in SAP Basis and technical architectures
Knowledge of SAP Basis administration technologies and tools
Understanding of SAP on-premises and cloud software products’ technical architectures
Skills in database architectures (SAP HANA, SAP ASE, SAP MaxDB, Oracle, MS SQL Server, IBM DB2)
Background in SAP BTP technical architecture
Expertise in the capabilities, functionality, and technologies of several SAP products (SAP S/4HANA, SAP BW/4HANA, SAP C/4HANA, SAP SCM, SAP GRC, SAP SuccessFactors, SAP Ariba)
nice to have
Understanding of ABAP dictionary and data model for one or more SAP Business Suite products
Ability to identify solution gaps and develop gap closure options
Capability to present SAP features and innovations filtered and focused on potential client needs
Leadership and interpersonal skills with client interaction background
We are seeking a Senior Platform Engineer to join our global Cloud Tooling team, enabling online services used worldwide. You will develop and maintain secure, scalable cloud infrastructure tools, working closely with clients to enhance developer experience. Join us to help build, operate, and scale world-class cloud services.
responsibilities
Develop high-quality, stable code for automation and internal APIs
Administer and manage HashiCorp Vault services
Support cloud platform provisioning and lifecycle management
Write and publish high-level designs and developer documentation
Provide client support for cloud engineering services as part of a support rota
Design for privacy, security, compliance, high availability, performance, and resilience
Collaborate with global teams to plan releases, workloads, and priorities
Troubleshoot and resolve issues across the technology stack
Work closely with clients to ensure fast turnaround of feature requests and bug fixes
Maintain certificate monitoring and management services
requirements
3+ years of experience in DevOps or Site Reliability Engineering
Proven experience in Go language development
Practical knowledge of Kubernetes, including EKS and core system components
Hands-on administration of HashiCorp Vault and secrets management
Familiarity with AWS services such as provisioning, IAM roles, and account management
Proficiency with infrastructure as code tools like Terraform and Git
Understanding of certificate management, cryptography, and encryption
Solid networking skills, especially within AWS environments
Strong problem-solving abilities
Ability to produce clear technical documentation and high-level designs
Experience in a client-focused engineering environment
English proficiency at B2 (Upper-Intermediate) level or higher
nice to have
Knowledge of authentication and authorization concepts
Experience with cloud platform provisioning and lifecycle management
Participation in on-call support rotations
Familiarity with security and compliance standards relevant to cloud infrastructure
Advance your career as a Senior Data DevOps Engineer at EPAM! In this important role, you'll deploy and maintain sophisticated Terraform and Azure DevOps pipeline solutions spanning multiple environments. If you have practical experience with Terraform and Azure DevOps, plus a talent for communication, optimization, and automation, we are eager to meet you.
responsibilities
Deploy and maintain sophisticated Terraform and Azure DevOps pipeline solutions across various subscriptions and environments
Communicate effectively with diverse stakeholders to collect requirements and provide updates on platform activities
Apply Azure data solutions including Data Lake Storage, Databricks, ADF, and Synapse Analytics
Work alongside Architecture teams to ensure the platform's architecture and design standards are in line with support model needs
Discover avenues for optimizing platform activities and processes, and initiate automation methods to streamline operations
requirements
3+ years of practical experience with Terraform and Azure DevOps pipelines for infrastructure management and automation
Proficiency in Azure data solutions including Data Lake Storage, Databricks, ADF, and Synapse Analytics
Effective communication skills and the capability to interact with stakeholders at various levels
Experience in collaborating with Architecture teams to align with design standards
Skills in troubleshooting and mitigating issues related to data workloads and platform infrastructure
We are seeking a Senior Azure DevOps Engineer to provide expert operational support and drive the reliability, security and performance of our Azure Kubernetes Service (AKS) environments. You will play a key role in managing Kubernetes workloads, optimizing CI/CD pipelines and collaborating with cross-functional teams to ensure robust cloud infrastructure.
responsibilities
Provide operational support for AKS clusters, including monitoring, incident management and performance optimization
Manage and troubleshoot Kubernetes workloads such as Pods, Deployments, Services, ConfigMaps and Secrets
Implement and support CI/CD pipelines for containerized applications
Maintain and enhance cluster observability using Azure Monitor, Prometheus and Grafana
Perform cluster upgrades, patching and scaling in coordination with Azure cloud services
Ensure security best practices including RBAC, Azure AD integration, Network Policies and Secrets management
Support troubleshooting of networking issues including ingress, load balancers, DNS and service mesh
Collaborate with development and DevOps teams to improve deployment reliability and automation
Participate in on-call rotation to respond to incidents and outages
Document runbooks, troubleshooting steps and best practices for AKS operations
requirements
5+ years of working experience in DevOps or cloud engineering roles
Expertise in Kubernetes with a focus on AKS in Azure
Hands-on knowledge of Azure cloud services including VNETs, NSGs, Load Balancers, Azure AD and Azure Monitor
Proficiency in infrastructure-as-code using Terraform
Skills in CI/CD tools such as GitHub Actions
Knowledge of observability tools including Grafana
Strong problem-solving skills and capability to troubleshoot complex issues
Excellent communication skills for working with cross-functional teams
English proficiency at B2 level or higher