Stimulating iGaming Careers
With gaming talent recruited from leading providers across the industry, our teams around the world are passionate about shaping the gaming experience of the future.
Our Core Values
We do what we say, on time, every time. We work closely with our clients to provide robust and effective plans of action, working strategically to deliver the optimal iGaming solutions for each client's specific business objectives and audience needs.
We’re dedicated to progression
We employ industry experts around the world with specialist knowledge in emerging markets and localisation, who keep one step ahead of market trends to ensure that our customers stay ahead of the game.
We invest in the best iGaming developers, design teams and infrastructure around the world to deliver progressive, constantly evolving technological solutions that help to give our operator clients the competitive edge.
Key Responsibilities
- Develop and maintain data pipelines to ensure the quality and accuracy of our product analytics, and build datasets for reports and visualisations for internal use and external customers
- Explore and implement data and technology best practices, guidelines, and repeatable processes in building pipelines, logical data models and pre-aggregated analytics data sets
- Build the infrastructure required for optimal extraction, transformation, and loading of large volumes of data from a wide variety of data sources – both internal and external
- Proactively identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Develop ETL monitoring and testing in order to troubleshoot and resolve data-related issues, ensuring data quality and availability
- Provide technical guidance and mentorship to junior members of the data engineering team
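The pipeline and data-quality responsibilities above can be sketched as a minimal transform-and-validate step. This is an illustrative example only — the record fields and function names are hypothetical, not part of any actual production schema:

```python
# Hypothetical raw event records, as they might arrive from an upstream source.
RAW_EVENTS = [
    {"user_id": "u1", "game": "roulette", "stake": "2.50"},
    {"user_id": "u2", "game": "slots", "stake": "not-a-number"},  # bad stake
    {"user_id": None, "game": "poker", "stake": "10.00"},         # missing user
]

def transform(row):
    """Normalise one raw event into the target schema; return None if invalid."""
    if not row.get("user_id"):
        return None
    try:
        stake = float(row["stake"])
    except (TypeError, ValueError):
        return None
    return {"user_id": row["user_id"], "game": row["game"], "stake": stake}

def run_pipeline(raw_rows):
    """Transform all rows, separating clean records from rejects for monitoring."""
    clean, rejected = [], []
    for row in raw_rows:
        out = transform(row)
        if out is not None:
            clean.append(out)
        else:
            rejected.append(row)
    return clean, rejected

clean, rejected = run_pipeline(RAW_EVENTS)
```

Routing invalid rows into a separate reject set, rather than silently dropping them, is what makes the ETL monitoring and troubleshooting described above possible.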
Qualifications and Skills
- At least 3 years of experience in a data engineering role
- Experience with feature engineering and data pipelines using Python, SQL, Scala, or similar programming languages
- Strong knowledge of distributed computing frameworks such as Apache Spark, Hadoop, and Hadoop YARN for managing and processing large data sets
- Experience across the data lifecycle: acquisition (API calls/FTP downloads), ETL and transformation/normalisation (from raw data to database table schemas), storage (raw files, database servers), and distribution and access (user entitlements, building APIs and access points for data)
- Solid knowledge of data warehousing platforms such as Snowflake, Redshift, BigQuery, and Oracle, including data transformation (dbt, Talend), data model design, and query optimisation strategies
- Experience with disciplined data management and engineering practices – clean, commented code, version control, documentation, automated testing and deployment
- Familiarity with the design (dimensional modelling and schema design) and optimisation of databases and data warehouses, including error handling and logging, and system monitoring
- Experience with event-driven and streaming technologies such as RabbitMQ and/or Kafka
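The dimensional modelling and pre-aggregation skills listed above can be sketched with a minimal star schema. The table and column names here are illustrative assumptions, not an actual production schema, and SQLite stands in for a real warehouse such as Snowflake or Redshift:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# A minimal star schema: one fact table referencing one dimension table.
cur.executescript("""
CREATE TABLE dim_game (game_id INTEGER PRIMARY KEY, game_name TEXT);
CREATE TABLE fact_bet (
    bet_id  INTEGER PRIMARY KEY,
    game_id INTEGER REFERENCES dim_game(game_id),
    stake   REAL
);
INSERT INTO dim_game VALUES (1, 'roulette'), (2, 'slots');
INSERT INTO fact_bet VALUES (1, 1, 2.5), (2, 1, 4.0), (3, 2, 1.0);
""")

# A typical pre-aggregated analytics query: total stake per game.
cur.execute("""
SELECT g.game_name, SUM(f.stake) AS total_stake
FROM fact_bet f
JOIN dim_game g ON g.game_id = f.game_id
GROUP BY g.game_name
ORDER BY total_stake DESC
""")
totals = cur.fetchall()
# totals -> [('roulette', 6.5), ('slots', 1.0)]
```

Separating descriptive attributes (the dimension) from measurable events (the fact table) keeps aggregation queries like this one simple and fast to optimise.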
Tools you will use on a daily basis:
- Databases – PostgreSQL and Oracle
- Programming languages – SQL / Python / Scala / Java
- Computing frameworks – Apache Spark / Kafka / Hadoop and Hadoop YARN
Where you fit in:
You will join the data team, which is responsible for maintaining and developing end-to-end data solutions. The team is made up of highly skilled professionals dedicated to ensuring that our organisation's data is accurate, reliable, and easily accessible. It works closely with other departments to understand their data needs and provide the information they require to make informed decisions.