General Info
Our Company
EFG International is a global private banking group, offering private banking and asset management services. We serve clients in over 40 locations worldwide. EFG International offers a stimulating and dynamic work environment and strives to be an employer of choice.
EFG is committed to providing an equitable and inclusive working environment that is founded on the principle of mutual respect. Joining our team means experiencing a supportive environment, where your contributions are valued and recognised. We strongly believe that the diversity of our teams gives us a competitive advantage by fostering better decision-making and greater innovation.
Our Purpose and Mission
Empowering entrepreneurial minds to create value – today and for the future.
We are a private bank, offering personalised solutions on a global scale to private and institutional clients. Our sustainable success is based on our talents and on how we partner with our clients and communities to create lasting value.
Job Description
Introduction to the Team:
• The DataOps team is responsible for the seamless integration, release, and monitoring of mission-critical data pipelines and workflows.
• Currently based in Geneva, we are expanding our global footprint by establishing a virtual team presence in Singapore.
• Reporting to the local IT Head of Applications, the role will be a key pillar in our follow-the-sun support model, ensuring continuous operational excellence across Asian and European time zones.
• Our data platform is evolving into a more converged stack, unifying batch and streaming data. We are expanding into a cloud-based lakehouse and a microservice-based data streaming integration model supporting business-critical integrations.
• Python, SQL, and CI/CD are essential skills for supporting team activities. This role requires a rigorous approach to documentation and procedure development.
• The team is responsible for building, testing, deploying, and monitoring the operational part of the platform.
Tasks and Responsibilities
• Take operational ownership of specific data services (change management, quality assurance, SLA, SOP)
• Track and ensure operational continuity of critical data pipelines across the group during Asian hours, adopting a follow-the-sun support model.
• Design, implement, and optimize data pipelines in batch and streaming mode.
• Follow and apply CI/CD practices to automate and streamline deployments and integrations.
• Create, maintain, and improve documentation and operational procedures as part of operational ownership.
• Collaborate with cross-functional teams to review governance and improve integration operations.
• Maintain high standards of data quality, reliability, and performance.
• Stay current with advancements in data engineering and the DataOps tech stack to support continuous improvement.
Requirements and Qualifications
• Experience in data engineering and data operations roles.
• Proven hands-on experience with SQL, Python, PySpark, and Databricks.
• Knowledge of micro-services architecture, API management and observability is highly desirable.
• Solid understanding of CI/CD platforms and processes (e.g., Azure DevOps, GitLab CI, GitHub Actions).
• Experience with cloud data platforms (Azure preferred).
• Demonstrated ability to produce rigorous, clear, and comprehensive documentation and procedures.
• Familiarity with database design.
• Excellent problem-solving skills and attention to detail.
• Effective communication and collaboration skills.
Personal Attributes
• Responsible and reliable, especially during critical operational periods
• Rigorous and methodical in documentation and process creation
• Motivated and goal-oriented
• Curious, enthusiastic, creative, and proactive
• Passionate about analytics and technology, with a strong desire to learn and grow
• Fluent in spoken and written English
Application
Please attach a cover letter to your CV when submitting your application.