Senior Data Engineer

Hyderabad, TG, IN, 500081

Let's play together


About our company

Fortuna became an established brand among customers within just a few years. From our first betting shop, we have grown into a proud international family of companies under Fortuna Entertainment Group.

We want to go further and be known for having the best tech department, offering our employees modern technologies and the chance to be part of many exciting projects. Our new home is the remarkable Churchill II building, which has a view of Prague.

Every detail underlines the company's corporate culture and represents our values. The workplace layout is 100% ecological, providing ideal conditions for everyday work. We all work as one team and treat each other with respect and openness, with a sense of honor and regard for individual and cultural differences.

POSITION TITLE: Sr. Data Engineer

 

Key Purpose Statement – Core mission

The Senior Data Engineer plays a key role in designing, building, and optimizing our data infrastructure and pipelines. This individual will leverage deep expertise in Azure Synapse, Databricks, cloud platforms, and Python programming to deliver high-quality data solutions.

 

RESPONSIBILITIES

Data Infrastructure and Pipeline Development:
   - Develop and maintain complex ETL/ELT pipelines using Databricks and Azure Synapse.
   - Optimize data pipelines for performance, scalability, and cost-efficiency.
   - Implement best practices for data governance, quality, and security.

Cloud Platform Management:
   - Design and manage cloud-based data infrastructure on platforms such as Azure.
   - Utilize cloud-native tools and services to enhance data processing and storage capabilities.
   - Understand and design CI/CD pipelines for data engineering projects.

Programming:
   - Develop and maintain high-quality, reusable code in Databricks and Synapse environments for data processing and automation.
   - Collaborate with data scientists and analysts to integrate solutions into data workflows.
   - Conduct code reviews and mentor junior engineers on best practices for Python, PySpark, and SQL.

 

REQUIREMENTS - KNOWLEDGE, SKILLS AND EXPERIENCE

  • Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
  • 5 to 8 years of experience in data engineering or a related field.
  • At least 2 years of hands-on experience with Databricks, PySpark, Azure Synapse, and cloud platforms.
  • Proven expertise in Python programming and its application in data engineering.
  • Strong quantitative and cognitive ability to solve complex problems and think with vision.

 

Qualifications, specific technologies, and hard and soft skills required:

  •  Strong proficiency in Databricks and Azure Synapse for large-scale data processing.
  •  Extensive experience with cloud platforms (Azure preferred; AWS/GCP) and their data services.
  •  In-depth knowledge of coding practices, including CI/CD and automation.
  •  Advanced SQL skills and familiarity with database technologies (e.g., MS SQL, MySQL, PostgreSQL, NoSQL).
  •  Experience with dbt (data build tool) is an added advantage.
