
Data Engineer

Attractive Salary Package

OPPORTUNITY DETAILS

SUMMARY

Our client is hiring a Data Engineer to join the team.

A suitable candidate should be a motivated and multi-skilled individual who meets the requirements of the role.

DUTIES TO INCLUDE

  • Produce advanced modelling of structured and unstructured extra-financial data (environmental, social, governance KPIs) to support the production of our ratings and investment decisions.
  • Re-engineer data flows to enable scaling of our services.
  • Build tools and processes to produce accessible data for analysis.
  • Communicate complex processes effectively, at all levels, as easy-to-understand solutions.
  • Design and write code, from prototypes to production-ready solutions.
  • Maintain a variety of metadata repositories to enable the client to understand their data assets.
  • Coach and mentor ESG analysts in the use of advanced tools to increase the efficiency, effectiveness and robustness of their modelling and analysis tasks.
  • Produce high-level documentation of all designed models and automation tools.

KEY REQUIREMENTS

ESSENTIAL

  • Experience in a similar role.
  • Good understanding of coding best practices, data versioning, dependency management, code quality, error handling, logging, monitoring and validation.
  • Experience in Python.
  • Experience building CI/CD pipelines.
  • Ability to work under pressure with a can-do attitude.
  • Strong analytical ability, initiative, independent thinking, and high numeracy.
  • Ability to work to deadlines and produce consistent results, in line with team objectives.
  • Excellent organizational and time management skills.
  • Experience in writing complex queries against relational and non-relational data stores.
  • Experience in parallel computing - the ability to process large datasets and optimize computationally intensive tasks.