
CareerBoard

Contact us at +44 (0)1621 817335


Key Privacy Information

When you apply for a job, CareerBoard will collect the information you provide in the application and disclose it to the advertiser of the job.

If the advertiser wishes to contact you, they have agreed to use your information in accordance with data protection law.

CareerBoard will keep a copy of the application for 90 days.

More information about our Privacy Policy.

 

Job Details

 

Senior Data Engineer - Python/Hadoop/Spark (Contract)

Location: London
Country: UK
Rate: £800 - £900 per day
 

Senior Data Engineer - Python/Hadoop/Spark - sought by a leading investment bank based in London - Hybrid - Contract

*inside IR35 - umbrella*

Key Responsibilities:

  • Design and implement scalable data pipelines that extract, transform and load data from various sources into the data lakehouse.
  • Help teams push the boundaries of analytical insights, creating new product features using data.
  • Develop and automate large-scale, high-performance data processing systems (batch and real time) to drive growth and improve product experience.
  • Develop and maintain infrastructure tooling for our data systems.
  • Collaborate with software teams and business analysts to understand their data requirements and deliver quality, fit-for-purpose data solutions.
  • Ensure data quality and accuracy by implementing data quality checks, data contracts and data governance processes.
  • Contribute to the ongoing development of our data architecture and data governance capabilities.
  • Develop and maintain data models and data dictionaries.
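As a rough illustration of the pipeline and data-quality responsibilities above, here is a minimal sketch in plain Python (the record shape, field names and quality rules are entirely hypothetical; a pipeline at this scale would typically run on Spark against the lakehouse):

```python
# Minimal ETL sketch: extract -> transform -> quality-check -> load.
# All record shapes and rules here are illustrative, not from the job spec.

def extract():
    # Stand-in for reading from an upstream source (database, files, API).
    return [
        {"trade_id": "T1", "amount": "100.50", "ccy": "gbp"},
        {"trade_id": "T2", "amount": "250.00", "ccy": "GBP"},
        {"trade_id": None, "amount": "9.99", "ccy": "usd"},  # violates contract
    ]

def transform(records):
    # Normalise types and casing before loading.
    return [
        {"trade_id": r["trade_id"],
         "amount": float(r["amount"]),
         "ccy": (r["ccy"] or "").upper()}
        for r in records
    ]

def quality_check(records):
    # Simple data-quality gate enforcing the contract (non-null key,
    # positive amount); returns clean rows plus a rejected-row count.
    good = [r for r in records if r["trade_id"] and r["amount"] > 0]
    return good, len(records) - len(good)

def load(records, target):
    # Stand-in for an append to the target table.
    target.extend(records)
    return len(records)

target_table = []
rows = transform(extract())
clean, rejected = quality_check(rows)
loaded = load(clean, target_table)
```

The same gate-then-load shape carries over to a real implementation: the quality check becomes a set of column-level expectations evaluated before any write reaches the lakehouse.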

Skills & Qualifications:

  • Significant experience with data modelling, ETL processes and data warehousing.
  • Significant exposure to, and hands-on experience with, at least two of the following programming languages: Python, Java, Scala, Go.
  • Significant experience with Hadoop, Spark and other distributed processing platforms and frameworks.
  • Experience working with open table/storage formats such as Delta Lake, Apache Iceberg or Apache Hudi.
  • Experience developing and managing real-time data streaming pipelines using change data capture (CDC), Kafka and Apache Spark.
  • Experience with SQL and database management systems such as Oracle, MySQL or PostgreSQL.
  • Strong understanding of data governance, data quality, data contracts, and data security best practices.
  • Exposure to data governance, catalogue, lineage and associated tools.
  • Experience setting up SLAs and contracts with interfacing teams.
  • Experience working with and configuring data visualisation tools such as Tableau.
  • Ability to work independently and as part of a team in a fast-paced environment.
  • Experience working in a DevOps culture and a willingness to drive it. You are comfortable working with CI/CD tools (ideally IBM UrbanCode Deploy, TeamCity or Jenkins), monitoring tools and log aggregation tools. Ideally, you will have worked with VMs and/or Docker and orchestration systems such as Kubernetes/OpenShift.
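The CDC requirement above can be pictured with a toy change-event applier (the event schema and keys are hypothetical; in practice such events would arrive via Kafka, e.g. in Debezium format, and be applied with Spark Structured Streaming):

```python
# Toy change-data-capture (CDC) applier: replays insert/update/delete
# events against an in-memory "table" keyed by primary key.
# The event shape is illustrative only.

def apply_cdc(table, events):
    for ev in events:
        op, key = ev["op"], ev["key"]
        if op in ("insert", "update"):
            table[key] = ev["row"]   # upsert semantics
        elif op == "delete":
            table.pop(key, None)     # idempotent delete
    return table

state = {}
events = [
    {"op": "insert", "key": "T1", "row": {"amount": 100.0}},
    {"op": "insert", "key": "T2", "row": {"amount": 250.0}},
    {"op": "update", "key": "T1", "row": {"amount": 150.0}},
    {"op": "delete", "key": "T2"},
]
apply_cdc(state, events)
```

Replaying events in order like this is what keeps a downstream table consistent with the source system; the production equivalent is a streaming MERGE into an open table format such as Delta Lake or Iceberg.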

Please apply within for further details - Matt Holmes - Harvey Nash


Posted Date: 15 May 2024 Reference: JS-BBBH106283 Employment Business: Harvey Nash IT Recruitment UK Contact: Matthew Holmes