We are looking for a Senior/Lead Data DevOps to join EPAM and contribute to a project for a large customer. As a Senior/Lead Data DevOps on the Data Platform, you will focus on maintaining the data transformation architecture, the backbone of the customer's analytical data platform, and implementing new features for it. As a key figure in our team, you'll implement and deliver high-performance data processing solutions that are efficient and reliable at scale.
What you'll do in this role
Design, build and maintain highly available production systems utilizing Azure data solutions including Data Lake Storage, Databricks, ADF, and Synapse Analytics
Design and implement build, deployment, and configuration management systems, and improve CI/CD processes, based on Terraform and Azure DevOps pipeline solutions across multiple subscriptions and environments
Improve the user experience of the Databricks platform by applying best practices for Databricks cluster management, cost-effective setups, data security models, and more
Design, implement, and improve monitoring and alerting systems
Collaborate with Architecture teams to ensure platform architecture and design standards align with support model requirements
Identify opportunities to optimize platform activities and processes, implement automation mechanisms to streamline operations
Skills
4+ years of professional experience
2+ years of hands-on experience with a variety of Azure services
Proficiency in Azure data solutions including Data Lake Storage, Databricks, ADF, and Synapse Analytics
Solid Linux/Unix systems administration background
Advanced skills in configuring, managing and maintaining networking on Azure cloud
Solid experience in managing production infrastructure with Terraform
Hands-on experience with at least one of Azure DevOps, GitLab CI, or GitHub Actions pipelines for infrastructure management and automation
Hands-on experience with Databricks platform
Practical knowledge of Python combined with SQL knowledge
Hands-on experience in at least one scripting language: Bash, Perl, or Groovy
Advanced skills in Kubernetes/Docker
Good knowledge of Security Best Practices
Good knowledge of Monitoring Best Practices
Good organizational, analytical and problem solving skills
Ability to present and communicate the architecture in a visual form
English language proficiency – ability to communicate directly with a customer (B2 level or higher)
We are looking for a skilled and dedicated Senior Python Engineer with LLM experience to enhance our team's efforts in developing multiple artificial intelligence proofs of concept (AI PoCs). The successful candidate will take charge of development initiatives, improve our systems, and play a key role in crafting innovative solutions utilizing large language models (LLMs) and associated technologies.
What you'll do in this role
Lead the development and configuration of custom GPTs on the ChatGPT platform
Design and implement robust solutions using LangChain, LlamaIndex, and other LLM frameworks
Collaborate with cross-functional teams to integrate AI solutions into existing systems
Oversee the build of retrieval-augmented generation (RAG) solutions
Utilize Docker for application deployment and management
Maintain and update AI applications on Microsoft Azure
Mentor junior developers and provide technical guidance
Monitor system performance and troubleshoot issues as they arise
Stay current with industry trends and advancements in AI and machine learning
Contribute to the expansion of AI capabilities within the company
Skills
3+ years of experience in Python programming
Skills in building and configuring custom GPTs for the ChatGPT platform
Background in working with open-source LLM models such as Llama and Mixtral
Experience in designing solutions using LLM frameworks like LangChain and LlamaIndex
Expertise in Docker and familiarity with cloud platforms such as Azure
Experience in building RAG solutions and using function-calling techniques
Capability to lead and mentor a team of developers
Excellent problem-solving and analytical skills
Strong communication and collaboration abilities
Nice to have
Previous involvement with AI PoCs
Published works or contributions to AI or machine learning communities
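The retrieval-augmented generation (RAG) work this role centers on can be previewed with a toy sketch. The example below is purely illustrative and not the project's implementation: it scores document chunks against a question by word overlap (a real system would use embedding vectors) and assembles an augmented prompt for an LLM; all names and sample documents are invented.

```python
# Toy RAG sketch: rank document chunks by word overlap with the question,
# then build an augmented prompt. A real system would use embeddings and
# then send the prompt to an LLM; here we just print it.

def tokenize(text):
    return set(text.lower().split())

def retrieve(question, chunks, k=2):
    """Return the k chunks sharing the most words with the question."""
    q = tokenize(question)
    return sorted(chunks, key=lambda c: len(q & tokenize(c)), reverse=True)[:k]

def build_prompt(question, chunks):
    context = "\n".join(f"- {c}" for c in retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

chunks = [
    "The invoice API returns totals in cents.",
    "Deployment runs nightly at 02:00 UTC.",
    "Refunds are processed by the invoice API within 5 days.",
]
prompt = build_prompt("How does the invoice API handle refunds?", chunks)
print(prompt)
```

The same shape survives in production systems: only the retriever (keyword overlap here) and the final generation step change.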
We are actively seeking an experienced Senior Data Integration Engineer to join our team. In this role, you will play a crucial part in designing, developing, testing, deploying, maintaining, and enhancing data integration pipelines for our cutting-edge projects. If you possess a wealth of experience in data integration and pipeline development and thrive in a dynamic, collaborative environment, we welcome you to be a key player in shaping the future of our data-driven initiatives.
What you'll do in this role
Design, develop, test, deploy, maintain, and improve data integration pipelines
Utilize Python and common libraries for efficient data integration and pipeline development
Employ advanced SQL for analytical purposes and to optimize complex queries
Collaborate with cross-functional teams to ensure the success of data integration projects
Implement and maintain data integration and pipeline developments in an AWS environment
Optimize and debug user-defined functions, views, and indexes for efficient data integration
Utilize GitLab for source control and Jenkins for build and continuous integration
Contribute to the enhancement and maintenance of data integration pipelines
Provide insights and recommendations for improving data integration processes
Collaborate with the wider engineering team to drive standards of excellence across all data integration projects
Skills
3+ years of hands-on experience in data integration and pipeline development
Strong expertise in Amazon Web Services (AWS) Cloud for data integration with Apache Spark, Glue, Kinesis, and S3
Proficiency in Python for data integration, with real-life experience in Python development
Solid understanding of advanced SQL for analytical purposes and complex query optimization
Experience with Databricks in an AWS environment for effective data integration
Knowledge of source control systems such as GitLab and continuous integration tools like Jenkins
Analytical experience with databases, including writing complex queries, query optimization, and debugging
Strong communication and collaboration skills, essential for effective teamwork
English language skills at an Upper-Intermediate level or higher
Nice to have
Familiarity with other data integration tools and technologies
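The extract-transform-load shape of the pipelines described above can be sketched with the standard library alone. In this minimal, assumption-laden example, an in-memory sqlite3 database stands in for a real warehouse, and the table and column names are invented for illustration.

```python
# Minimal ETL sketch: sqlite3 stands in for a warehouse; table and column
# names are illustrative only, not from any real project.
import sqlite3

def run_pipeline(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, amount_cents INTEGER)")
    # Transform: drop malformed rows, normalize names and amounts.
    clean = [
        (r["customer"].strip().lower(), int(round(float(r["amount"]) * 100)))
        for r in rows
        if r.get("customer", "").strip() and r.get("amount") is not None
    ]
    conn.executemany("INSERT INTO orders VALUES (?, ?)", clean)
    # Downstream analytical query: aggregate per customer.
    cur = conn.execute(
        "SELECT customer, SUM(amount_cents) FROM orders "
        "GROUP BY customer ORDER BY customer"
    )
    return cur.fetchall()

raw = [
    {"customer": " Acme ", "amount": "19.99"},
    {"customer": "acme", "amount": "0.01"},
    {"customer": "", "amount": "5.00"},   # dropped: no customer
    {"customer": "Globex", "amount": "100"},
]
print(run_pipeline(raw))  # [('acme', 2000), ('globex', 10000)]
```

Real pipelines swap the in-memory database for services such as Glue, S3, or Databricks, but the extract/transform/load separation stays the same.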
We are searching for a Senior Python/GenAI Engineer to join our team, working on a project to develop a secure, scalable, and customizable enterprise-grade AI ecosystem. The role involves implementing real-life client AI use cases. You will be responsible for developing both production-ready and proof-of-concept (PoC) solutions that demonstrate the potential of AI to businesses.
What you'll do in this role
Develop GenAI-based applications
Work with the latest GenAI/LLM technologies
Collaborate with a team that has extensive experience in GenAI
Skills
3+ years of experience in Python
Knowledge of working with LLM APIs or a strong interest in GenAI and developing in this field
Experience with FastAPI
Strong experience with AsyncIO
Deployment skills, with hands-on experience in at least one cloud platform (AWS, Azure, or GCP)
Understanding and application of best practices and design patterns
Strong problem-solving and analytical skills
Effective communication and collaboration abilities
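The AsyncIO requirement above typically shows up as fanning out concurrent calls to a model API. The sketch below is a stand-in, not a real client: `fake_llm_call` simulates latency with `asyncio.sleep`, and `asyncio.gather` runs the calls concurrently while preserving input order.

```python
# Minimal asyncio sketch: run several mock "LLM" calls concurrently.
# fake_llm_call is an invented stand-in for a real async API client.
import asyncio

async def fake_llm_call(prompt, delay=0.01):
    await asyncio.sleep(delay)  # simulates network latency
    return f"echo: {prompt}"

async def answer_all(prompts):
    # gather() awaits all coroutines concurrently, preserving input order.
    return await asyncio.gather(*(fake_llm_call(p) for p in prompts))

results = asyncio.run(answer_all(["hi", "bye"]))
print(results)  # ['echo: hi', 'echo: bye']
```

With a real client, the total latency approaches that of the slowest single call rather than the sum, which is why this pattern matters for GenAI backends.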
We are seeking a highly skilled Lead Python Developer to spearhead the implementation of a sophisticated chatbot using LangChain or LlamaIndex, with a strong emphasis on leveraging Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG). The ideal candidate will have extensive experience in Python and a deep understanding of tools, embedding strategies, semantic search, vector search, and vector databases. Additionally, the role involves working on various spikes, POCs, and research-oriented tasks.
What you'll do in this role
Lead the development of chatbot solutions using LangChain or LlamaIndex with a focus on LLMs and RAG
Research and implement embedding strategies, semantic search, and vector search to enhance system performance
Design, develop, and maintain robust Python applications, ensuring adherence to best coding practices
Collaborate across teams to conduct spikes and build proof-of-concepts (PoCs) for innovative AI functionalities
Develop and integrate vector database solutions such as Pinecone, Faiss, or similar
Optimize application performance, ensuring scalability and efficiency in AI-driven systems
Implement error handling, logging mechanisms, and monitoring using OpenTelemetry
Maintain and support the .NET components of the application, including MS Teams bot integrations and other services
Conduct architectural reviews and provide technical leadership on AI-related initiatives
Ensure compliance with software lifecycle processes, assisting with code reviews and testing frameworks
Skills
5+ years of experience in Python programming, with a focus on developing complex applications
1+ years of leadership experience in relevant roles
Deep understanding and hands-on experience with Large Language Models and Retrieval-Augmented Generation techniques
Proven experience with LangChain or LlamaIndex for chatbot development
Strong knowledge of embedding strategies, semantic search, and vector search methodologies
Proficiency in working with vector databases such as Pinecone, Faiss, or similar
Experience in implementing tracing and monitoring using OpenTelemetry
Expertise in designing robust error-handling and logging mechanisms
Commitment to writing clean, maintainable, and scalable code, with a strong understanding of software development best practices
Residence in Ukraine (remote work is eligible only for candidates based in Ukraine)
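The vector-search skills listed above reduce, at their core, to ranking stored embedding vectors by similarity to a query vector. The toy sketch below uses tiny hand-made vectors in place of real model embeddings, and the document ids are invented; production systems delegate this to a vector database such as Pinecone or Faiss.

```python
# Toy vector-search sketch: rank stored vectors by cosine similarity to a
# query vector. Vectors and ids are invented stand-ins for real embeddings.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def top_k(query, index, k=1):
    """index maps doc id -> vector; return the k ids most similar to query."""
    return sorted(index, key=lambda i: cosine(query, index[i]), reverse=True)[:k]

index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "deploy-guide":  [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.1], index))  # ['refund-policy']
```

A vector database replaces the linear scan with an approximate nearest-neighbor index, but the similarity metric and top-k interface are the same.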
Are you ready to elevate your Python engineering skills and transition into the exciting field of Big Data? EPAM is offering a unique opportunity where you can secure a position after a single technical interview and gain Big Data expertise without affecting your title or compensation. This 8-week retraining program is designed to transition Python engineers into the role of Big Data engineer. The curriculum is divided into three key phases: theoretical coursework led by production specialists, hands-on projects or practical tasks, and comprehensive knowledge assessment with feedback. Covering essential topics such as data management, distributed systems, Spark, Kafka, NoSQL databases, and cloud-native services, the program offers a robust foundation in data processing platforms. Training is fully online, allowing participants to engage remotely.
What you'll do in this role
Explore Big Data and Hadoop: Gain a solid understanding of Big Data concepts, delve into Hadoop’s infrastructure and real-world applications, and learn about data characteristics and deployment trends
Understand DevOps Practices: Familiarize yourself with the basics of DevOps, including continuous integration and continuous deployment (CI/CD), and how these practices streamline software development and operations
Master Data Modeling and Architectures: Learn the essential techniques and levels of data modeling, crucial for Data Engineers and Architects, to effectively manage and interpret complex data structures
Dive into Apache Spark: Deepen your knowledge of Spark with detailed explorations of its architecture, components, and various functionalities including Spark SQL, Spark ML, and Spark Streaming
Harness the Power of Kafka: Understand the fundamentals of Kafka, and explore Kafka Connect and Kafka Streams to manage real-time data feeds and perform stream processing
Leverage Elastic Stack and NoSQL: Get hands-on experience with Elastic Stack for searching, analyzing, and visualizing data in real time, and explore NoSQL databases to manage large volumes of structured, semi-structured, and unstructured data
Implement Data Flow and Pipelining: Learn about data movement essentials and tools like NiFi and StreamSets for effective data collection, flow, and processing
Navigate Orchestration and Scheduling: Understand the role of orchestration in managing complex workflows using tools like Airflow and Jenkins
Skills
4+ years of production experience in IT
Proficiency in Python, SQL, and cloud platforms (AWS, GCP, Azure)
Experience with tools like Databricks, Spark, Docker, and Kubernetes is a plus
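The Spark material in the program builds on the map/reduce model, which can be previewed in plain Python before touching a cluster. The word count below is a stand-in for a PySpark job, not PySpark itself: the `chain.from_iterable` step plays the role of `flatMap`, and the `Counter` loop plays the role of `reduceByKey`.

```python
# Plain-Python preview of the map/reduce model that Spark generalizes:
# word count as map (word -> (word, 1)) followed by reduce-by-key.
from collections import Counter
from itertools import chain

lines = ["big data big systems", "data pipelines"]

# "flatMap"/"map": split each line into (word, 1) pairs.
pairs = chain.from_iterable(((w, 1) for w in line.split()) for line in lines)

# "reduceByKey": sum the counts per word.
counts = Counter()
for word, n in pairs:
    counts[word] += n

print(dict(counts))  # {'big': 2, 'data': 2, 'systems': 1, 'pipelines': 1}
```

Spark distributes exactly these two phases across machines; the transformation vocabulary stays the same.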
We are seeking a dedicated Senior Big Data Engineer to join our team at a leading financial market analytical research company. The candidate will work within one of the top three most influential international rating agencies and contribute to sophisticated big data solutions.
What you'll do in this role
Develop an analytical platform interfacing with multiple data sources for data collection and processing via NLP pipelines
Manage efficient data searching within AWS OpenSearch
Facilitate the availability of data for internal clients and support ML flow integration
Apply expertise in Python for scripting and automation tasks
Employ knowledge of Elasticsearch and Enterprise Search Platforms to enhance search capabilities
Utilize proficiency in Semantic Web and Text Analytics in project requirements
Skills
3+ years of working experience as a Big Data Engineer
Knowledge of Amazon Web Services, Elasticsearch and Enterprise Search Platforms
Background in OpenSearch
Skills in Python
Understanding of Semantic Web and Text Analytics
Proficiency in written and spoken English at Upper-Intermediate level (B2+)
Come join a project at an industry-leading financial intelligence and analytics powerhouse! Our customer is a renowned financial information and analytics company that provides essential intelligence to businesses, governments, and individuals worldwide. The client offers a wide range of services, including credit ratings, market intelligence, data analysis, and investment research. The company is best known for its credit ratings division, which assesses the creditworthiness of companies and governments. Join the project and be at the forefront of shaping the future of financial information and analytics!
What you'll do in this role
Drive the development of the data platform
Oversee a team of developers and QAs
Take part in architecture decisions and implementation
Design and develop solutions utilizing Integration Best Practices
Manage complex data workflows
Provide day-to-day support and technical expertise to other engineers
Work across the full software development cycle
Help improve end-to-end development process, from data collection tools to deployment in a production environment
Handle direct communication with the customer
Skills
Data Architecture Modeling experience
Strong Python development skills
Expertise in SQL
Strong understanding of Data Pipeline and Data Storage tooling (AWS S3, AWS Glue, Apache Airflow, Apache Kafka, Apache Spark/PySpark, CloudFormation/Terraform, Delta Lake)
Understanding of RESTful APIs and web technologies
Familiarity with microservice architecture
Experience with the full development lifecycle
Experience with AWS cloud platform
Solid understanding of containers (Docker), orchestration tools, and CI/CD
English level of minimum B2 (Upper-Intermediate) for effective communication
EPAM is seeking an AI Solution Architect with strong expertise in software engineering and a passion for Generative AI technologies. In this role, you will tackle complex technical challenges and drive the development of our AI capabilities, making a significant impact on both client projects and our internal products. Ready to innovate? Join us now.
What you'll do in this role
Design and implement advanced solutions for complex challenges in Generative AI
Develop, refine and optimize AI systems within our in-house products to enhance the software development life cycle
Engage in client-specific projects to integrate AI solutions and address unique business challenges
Resolve unpredictable challenges across technical and infrastructural domains
Adopt new technologies and methodologies to maintain innovative development practices
Skills
Proven expertise in software development in an Architect role
Proficiency in Python with a focus on Python frameworks in Generative AI
Skills in programming languages such as Java, .NET, C/C++ or JavaScript
Experience in production environments within Generative AI, including prompt engineering and integration
Keen interest in Generative AI for continuous professional growth
Motivation for hands-on coding and implementation beyond architectural design
Enthusiasm for both in-house product development and external projects with AI integration
We are looking for a Senior Developer specialized in Microsoft Power Apps to join our software and system engineering team. You will work on developing innovative solutions using the Microsoft Power Platform to meet our clients' needs. Your expertise will contribute to building efficient and scalable applications. Join us to advance your career and make an impact through technology.
What you'll do in this role
Design and develop custom applications
Integrate Power Apps with various data sources
Automate business processes using Power Automate and integrate them with Power Apps; build RPA flows using Power Automate Desktop
Collaborate with business users and stakeholders to gather requirements and translate them into functional applications
Troubleshoot and resolve issues related to Power Apps applications
Develop TypeScript to customize Model-Driven App logic, and develop plug-ins and Custom APIs using C#
Skills
3+ years of proven experience in designing and developing applications using Microsoft Power Platform
Experience with C#/JavaScript/TypeScript
Strong understanding of data integration with different data sources such as SharePoint, SQL Server, and other third-party systems
Strong analytical and problem-solving skills
Excellent communication skills for interacting with business users and stakeholders
Strong attention to detail and a commitment to delivering high-quality solutions
Nice to have
Proven experience using Power Automate and Power Automate Desktop
Experience with Power Apps Component Framework (PCF)
Didn't find your vacancy?
Share your CV and we'll suggest a role that matches your skills and experience.