Team Lead, Data Engineer

Location: Greater London
Job Type: Full Time

Who we're looking for


A resourceful team lead with a data engineering background and demonstrable team leadership experience. You'll have experience with Python, particularly in data engineering, an interest in machine learning, and familiarity with data virtualisation (Denodo, Tibco, AtScale, Dremio or others). Experience with adjacent technologies is also useful, especially if you can demonstrate a track record of adapting to new tools and approaches.


You will have recent hands-on coding experience and be able to quickly pick up and run with new technologies to meet the varying needs of the team; the role will be a mixture of hands-on work and team leadership. As a leader, you'll know how to spot struggling or underperforming team-mates and have a variety of tactics to help them improve their skills. You will have a solid practical understanding of how to apply data theory, and ideally come from a mathematical, computing or applied science background, although relevant industry experience is equally valuable and will be taken into consideration.


About Schroders


We're a global investment manager. We help institutions, intermediaries and individuals around the world invest money to meet their goals, fulfil their ambitions, and prepare for the future.


We have around 4,000 people on six continents. And we've been around for over 200 years, but keep adapting as society and technology change. What doesn't change is our commitment to helping our clients, and society, prosper.


The base


We moved into our new HQ in the City of London in 2018. We’re close to our clients, in the heart of the UK's financial centre. We offer flexible working appropriate for a client solutions focussed role.


The team


Data Operations is a team of data engineers embedded in the business. Our current focus is on helping Schroders embed ESG (Environmental, Social & Governance) data throughout our investment process, to meet the objectives of our Global Sustainability initiative. We use a modern data and technology stack, including Python, Docker, data virtualisation and cloud-based platforms. We are hands-on, partnering closely with both business subject-matter experts and technology counterparts to deliver for this incredibly important area.


What you'll do


You'll work with the team as a technical lead, mentoring more junior members to help deliver data sources and data enablement capabilities (that is: tools, data sources and products that enable business teams to participate more directly than traditional IT processes allow) across a wide range of technologies and stakeholders. This might involve coding Python feeds one day, or setting up virtual data sources another. It's not a purely hands-on role, but may require hands-on work from time to time. You may not have worked with every technology in our stack, which will be an opportunity for you to learn something new and ground-breaking. You'll be responsible for contributing to the analysis, implementation and testing of working data pipelines required by the Capability [Product] Owner, and will be accountable for ensuring the team delivers the stories agreed in the regular sprint planning sessions.


You'll be familiar with agile methodology (Scrum or Kanban) and have worked in teams that use this approach. As a team, we support each other in our personal development, knowing that each of us has our own strengths, and work to share those throughout the team. We hope you'll have something to offer and something to gain.


The knowledge, experience and qualifications you need


  • Hands-on experience with Python or a similar data engineering language in a commercial setting, ideally on data-related projects.
  • Working knowledge of agile methodology, and capable of following the framework: contributing to team success through participation in ceremonies, occasionally assisting with scrum-master duties, owning retro actions and maintaining artefacts.
  • Experience working with data, and a basic understanding of the typical factors that affect data quality.
  • Working knowledge of data modelling.
  • Experience applying devops/automation principles to data pipelines, and good awareness of the trajectory of global data engineering practices.
  • Familiarity with source control systems (any) and workflows that involve working on a shared code-base. You'll have participated in code reviews and other quality management practices, and be able to take a lead in setting the standard for the team.
  • Experience working with business stakeholders to understand requirements, translating these into technical requirements, implementing them, and then closing the loop with the stakeholders.

The knowledge, experience and qualifications that will help


  • Educated to undergraduate or higher level in a numerate or applied science subject; relevant industry experience will be considered equally valuable.
  • Any basic professional qualifications related to asset management (such as IMC) are preferred.

What you’ll be like


  • A friendly, approachable and collaborative team player who enjoys working with people across the business.
  • Able to mentor more junior employees, getting the best out of them and encouraging continuous learning and improvement.
  • A continuous-improvement mindset: always thoughtful about the status quo, and making sure that standard approaches continue to make practical sense.
  • A problem solver, comfortable analysing, breaking down and ultimately resolving complex and sometimes ambiguous requirements.