Senior Data Engineer
Hyderabad, TG, IN, 500081
Let's play together

About our company
Fortuna has become an established brand among customers within just a few years. From the first betting shop, we have grown into a proud international family of companies under Fortuna Entertainment Group.
We want to go further and be known for having the best tech department, offering our employees the chance to work with modern technologies and be part of many exciting projects. Our new home is the remarkable Churchill II building, with a view of Prague.
Every detail underlines the company's corporate culture and represents our values. The workplace layout is 100% ecological, providing ideal conditions for everyday work. We all work as one team and treat each other with respect, openness, a sense of honour, and regard for individual and cultural differences.
Hey there!
We're Fortuna Entertainment Group, and we’re excited to share why we’re a team worth joining.
Who We Are
Founded in 1990, FEG is a top player in the betting and gaming industry. We proudly serve millions of customers across five European countries – Czech Republic, Slovakia, Poland, Romania, and Croatia – with our Business Intelligence operations based in India.
Why Join Us?
At FEG India, you’ll be part of a team that’s powering the digital future of one of Central and Eastern Europe’s leading betting and gaming operators. We’re a growing tech hub delivering high-quality solutions in Data, BI, AI/ML, Development, and IT Services. Your work here directly supports global operations — and we make sure our people grow with us.
Current Opportunity
Right now, we're seeking a Senior Data Engineer to manage and oversee the integration and ingestion of batch and real-time data sources into our data lakehouse and warehouse built on Azure cloud resources. The successful candidate will be responsible for the administration, optimization, and monitoring of all Azure cloud resources, with a specific focus on Databricks. The role requires expert-level knowledge and proven experience in handling large-scale data environments, ensuring high performance and efficiency.
What You’ll Be Doing
Your daily activities will include, but are not limited to:
Databricks:
- Design, develop, and maintain scalable data pipelines and ETL processes on Azure.
- Implement and optimize Spark jobs, data transformations, and data processing workflows in Databricks (a minimal sketch follows this list).
- Leverage Azure DevOps and CI/CD best practices to automate the deployment and management of data pipelines and infrastructure.
- Ensure data quality and integrity across various data sources.
- Troubleshoot existing data pipelines for data integrity and performance issues.
- Document data pipeline processes and technical specifications.
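To give a flavour of this work, here is a minimal sketch of a Databricks batch pipeline of the kind described above. The table, column, and application names are hypothetical placeholders, not FEG code.

```python
# Minimal PySpark ETL sketch for Databricks.
# The raw.events / curated.events tables and columns are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-etl").getOrCreate()

# Extract: read raw events already landed in the lakehouse.
raw = spark.read.table("raw.events")

# Transform: deduplicate, enforce types, and apply a simple quality rule.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Load: write a curated Delta table, partitioned by event date.
(
    clean.withColumn("event_date", F.to_date("event_ts"))
         .write.format("delta")
         .mode("overwrite")
         .partitionBy("event_date")
         .saveAsTable("curated.events")
)
```

In practice a job like this would typically run as a scheduled Databricks Workflows task, with the Delta format providing ACID guarantees for the curated layer.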
Azure Synapse:
- Implement, troubleshoot, and optimize Azure Synapse pipelines.
- Wrangle heterogeneous data from varied sources.
- Develop modern data warehouse solutions using the Azure stack (Azure Data Lake, Azure Data Factory, Azure Databricks); see the sketch below.
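One possible sketch of the Databricks-to-Synapse leg of such a warehouse uses the Databricks Synapse connector; the server, storage account, and table names here are placeholders, and `spark` is assumed to be the session predefined in a Databricks notebook.

```python
# Sketch: load curated lakehouse data into an Azure Synapse dedicated
# SQL pool via the Databricks Synapse connector. All names are placeholders.
df = spark.read.table("curated.events")

(
    df.write.format("com.databricks.spark.sqldw")
      .option("url", "jdbc:sqlserver://<server>.sql.azuresynapse.net:1433;database=dw")
      # Staging area in ADLS Gen2 used by the connector for bulk loading.
      .option("tempDir", "abfss://staging@<storage-account>.dfs.core.windows.net/synapse")
      .option("forwardSparkAzureStorageCredentials", "true")
      .option("dbTable", "dbo.fact_events")
      .mode("append")
      .save()
)
```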
Cloud Platform Management:
- Design and manage cloud-based data infrastructure on platforms such as Azure.
- Utilize cloud-native tools and services to enhance data processing and storage capabilities.
- Ensure high availability and disaster recovery for critical data systems.
- Implement and manage CI/CD pipelines for data engineering projects.
- Automate infrastructure provisioning and deployment using tools like Terraform or ARM templates (a Python-based sketch follows).
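Terraform definitions live in HCL, but as a comparable Python-based sketch, the Azure SDK can drive an ARM template deployment. This assumes the azure-identity and azure-mgmt-resource packages; the subscription ID, resource group, and template file are placeholders.

```python
# Sketch: automate an ARM template deployment from Python.
# Assumes azure-identity and azure-mgmt-resource are installed;
# the subscription ID, resource group, and template are placeholders.
import json
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

with open("lakehouse.json") as f:  # exported ARM template
    template = json.load(f)

# Incremental mode only adds or updates the resources the template declares.
poller = client.deployments.begin_create_or_update(
    "rg-data-platform",
    "lakehouse-deploy",
    {"properties": {"mode": "Incremental", "template": template}},
)
print(poller.result().properties.provisioning_state)
```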
What We’re Looking For
- Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or a related field.
- 5+ years of experience in data engineering.
- At least 5 years of hands-on experience with Python, Databricks, Azure Synapse, SQL Server, CI/CD practices, and cloud platforms.
- At least 3 years of hands-on experience working with real-time data sources such as Kafka or RabbitMQ.
- Proven track record of successfully designing and deploying large-scale, complex cloud solutions.
You Should Have Experience In
- Deep understanding of modern data warehouse solutions using the Azure stack (Azure Data Lake, Azure Data Factory, Azure Databricks).
- Demonstrated analytical and problem-solving skills, particularly those that apply to a big data environment.
- Ability to write complex and advanced scripts in Python and SQL.
- Ability to write optimized PySpark code and a good understanding of Structured Streaming in PySpark (see the sketch after this list).
- Good understanding of Kafka concepts.
- Knowledge of data warehousing concepts and ETL processes.
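As an illustration of the streaming side, here is a minimal PySpark Structured Streaming sketch that consumes a Kafka topic into a Delta table. Broker addresses, topic, checkpoint path, and table names are hypothetical; on Databricks the Kafka source is built in, while elsewhere it requires the spark-sql-kafka package.

```python
# Sketch: Structured Streaming ingestion from Kafka into Delta.
# Brokers, topic, checkpoint path, and table name are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

stream = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")
         .option("subscribe", "bets")            # hypothetical topic
         .option("startingOffsets", "latest")
         .load()
)

# Kafka delivers key/value as binary; cast to strings before parsing.
events = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Checkpointing lets the query recover and keeps the Delta sink idempotent.
query = (
    events.writeStream.format("delta")
          .option("checkpointLocation", "/chk/bets")
          .outputMode("append")
          .toTable("raw.bets")
)
query.awaitTermination()
```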
Why You’ll Love It Here
- We are the biggest omni-channel betting and gaming operator in Central and Eastern Europe and a B2C high-tech entertainment company
- Hybrid Working Arrangements (work from home)
- Flexible working hours
- Interesting innovative projects
- Cooperation across 5 markets and departments, international teams
- Variety of Tasks where Problem-Solving and Creativity are needed
- Advancement, Promotions and Career opportunities for talents
- Skill Development & Learning options – both individual and team, with development goals met through individualised development plans
- Welcoming Atmosphere, open and informal culture and dress code, friendly colleagues, strong eNPS scores
If this sounds like your kind of place, let us know by applying! We can’t wait to hear from you.