Senior Data Engineer

Hyderabad, TG, IN, 500081

About our company

Within just a few years, Fortuna has become an established brand among customers. From its first betting shop, it has grown into Fortuna Entertainment Group, a proud international family of companies.

We want to go further and be known for having the best tech department, offering our employees the chance to work with modern technologies and be part of many exciting projects. Our new home is the remarkable Churchill II building, which has a view of Prague.

Every detail underlines the company's corporate culture and represents our values. The workplace layout is 100% ecological, providing ideal conditions for everyday work. We all work as one team and treat each other with respect, openness, a sense of fair play, and regard for individual and cultural differences.

POSITION TITLE: Senior Data Engineer


Key Purpose Statement – Core mission

The Senior Data Engineer will play a key role in designing, building, and optimizing our data infrastructure and pipelines. This individual will leverage deep expertise in Databricks, Azure Synapse, DevOps practices, cloud platforms, and Python programming to deliver high-quality data solutions.


RESPONSIBILITIES

Data Infrastructure and Pipeline Development:
   - Develop and maintain complex ETL/ELT pipelines using Databricks and Azure Synapse.
   - Optimize data pipelines for performance, scalability, and cost-efficiency.
   - Implement best practices for data governance, quality, and security.
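To give candidates a flavor of the pipeline work described above, here is a minimal, illustrative ETL-style transform in plain Python. In practice this logic would run as a Databricks/PySpark job; all names and fields here are hypothetical examples, not part of our actual codebase.

```python
# Illustrative sketch only: a tiny extract-transform step of the kind this
# role involves. Field names ("bet_id", "stake", "status") are invented.

def transform_bets(rows):
    """Keep settled bets and normalize stake amounts to whole cents."""
    cleaned = []
    for row in rows:
        if row.get("status") != "settled":
            continue  # drop unsettled bets from the output
        cleaned.append({
            "bet_id": row["bet_id"],
            "stake_cents": round(float(row["stake"]) * 100),
        })
    return cleaned

raw = [
    {"bet_id": "b1", "status": "settled", "stake": "12.50"},
    {"bet_id": "b2", "status": "open", "stake": "3.00"},
]
print(transform_bets(raw))  # only the settled bet survives
```

In a production pipeline the same filter-and-normalize step would be expressed as PySpark DataFrame operations and scheduled, monitored, and cost-optimized as part of the Databricks workflow.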

Cloud Platform Management:

   - Design and manage cloud-based data infrastructure on platforms such as Azure.
   - Utilize cloud-native tools and services to enhance data processing and storage capabilities.
   - Ensure high availability and disaster recovery for critical data systems.

DevOps and Automation:

   - Implement and manage CI/CD pipelines for data engineering projects.
   - Automate infrastructure provisioning and deployment using tools like Terraform or ARM templates.
Python Programming:

   - Develop and maintain high-quality, reusable Python code for data processing and automation.
   - Collaborate with data scientists and analysts to integrate Python-based solutions into data workflows.
   - Conduct code reviews and mentor junior engineers in Python best practices.


REQUIREMENTS - KNOWLEDGE, SKILLS AND EXPERIENCE

Education, English language proficiency, and experience required:

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • Over 7 years of experience in data engineering or a related field.
  • At least 4 years of hands-on experience with Databricks, PySpark, Azure Synapse, DevOps practices, and cloud platforms.
  • Proven expertise in Python programming and its application in data engineering.
  • Strong quantitative and analytical ability to solve complex problems and think with a vision.


Qualifications, specific technologies, and hard and soft skills required:

  •  Strong proficiency in Databricks and Azure Synapse for large-scale data processing.
  •  Extensive experience with cloud platforms (AWS, Azure, GCP) and their data services.
  •  In-depth knowledge of DevOps tools and practices, including CI/CD, automation, and infrastructure as code.
  •  Advanced SQL skills and familiarity with database technologies (e.g., MySQL, PostgreSQL, NoSQL).
  •  Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
