Senior Python Software Engineer for Big Data Retraining Program

Hybrid in Lviv, Ukraine

Are you prepared to take your Python engineering expertise to the next level and transition into the dynamic world of Big Data? EPAM offers a unique opportunity: secure a place on the program after a single technical interview and acquire Big Data skills with no change to your title or compensation.

This focused 8-week program is tailored for Python engineers shifting into Big Data Engineering. The curriculum spans three phases: theoretical coursework delivered by industry professionals, practical hands-on project work, and a final knowledge assessment with feedback. Key topics include data management, distributed systems, Spark, Kafka, NoSQL databases, and cloud-native services, providing a solid grounding in modern data platforms. The training itself is fully remote, so participants can learn from the comfort of their own space.

Responsibilities
  • Develop a strong foundation in Big Data by exploring core concepts, Hadoop infrastructure, real-world applications, data characteristics, and deployment trends
  • Build familiarity with DevOps practices, including continuous integration and continuous deployment (CI/CD), to enhance software development and operational workflows
  • Acquire essential data modeling techniques critical for managing and interpreting complex data structures in engineering and architectural roles
  • Expand knowledge of Apache Spark by understanding its architecture, components, and functionalities such as Spark SQL, Spark ML, and Spark Streaming (see the PySpark sketch after this list)
  • Gain expertise in Kafka fundamentals and tools like Kafka Connect and Kafka Streams for managing real-time data feeds and performing stream processing
  • Explore Elastic Stack for real-time data analysis and visualization while understanding NoSQL databases to manage diverse data types
  • Deepen understanding of data flow and pipelining with tools like NiFi and StreamSets to improve data collection, flow, and processing capabilities
  • Learn orchestration and workflow management with tools such as Airflow and Jenkins to coordinate complex processes effectively (an Airflow sketch follows below)
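
To give a flavor of the Spark material, here is a minimal PySpark sketch of querying a DataFrame with Spark SQL. It assumes a local PySpark installation; the file name, schema, and column names are hypothetical.

    from pyspark.sql import SparkSession

    # Start a local Spark session (the app name is arbitrary).
    spark = SparkSession.builder.appName("retraining-demo").getOrCreate()

    # Load raw events into a DataFrame; header handling and schema
    # inference keep the example short (events.csv is hypothetical).
    events = spark.read.csv("events.csv", header=True, inferSchema=True)

    # Expose the DataFrame to Spark SQL as a temporary view.
    events.createOrReplaceTempView("events")

    # Aggregate with plain SQL: daily event counts per type.
    daily_counts = spark.sql("""
        SELECT event_date, event_type, COUNT(*) AS cnt
        FROM events
        GROUP BY event_date, event_type
        ORDER BY event_date
    """)

    daily_counts.show()
    spark.stop()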
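Similarly, a minimal Airflow sketch, assuming Airflow 2.x: a two-task DAG in which a hypothetical load step runs only after a hypothetical extract step succeeds.

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder callables standing in for real extract/load logic.
    def extract():
        print("extracting data")

    def load():
        print("loading data")

    with DAG(
        dag_id="retraining_demo",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_task = PythonOperator(task_id="extract", python_callable=extract)
        load_task = PythonOperator(task_id="load", python_callable=load)

        # ">>" declares the dependency: load runs after extract succeeds.
        extract_task >> load_task
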
Requirements
  • 4+ years of production experience in IT
  • Proficiency in Python, SQL, and cloud platforms (AWS, GCP, Azure)
  • Background in tools such as Databricks, Spark, Docker, or Kubernetes is a plus
  • Familiarity with AI and LLMs
  • English proficiency at B2+ or higher