Job Description

Outsource UK are currently seeking a Hadoop Developer for a 6-month contract based in Bristol (working remotely until spring 2021).

This role is inside IR35; the candidate will be required to work through an umbrella company.

Key Responsibilities:

  • The project uses a Hadoop big data lake that processes insurance data to produce reports for the new regulatory standard.
  • Workstream #1 is responsible for ingesting, curating and consuming data. Ingestion is completed using Kafka streaming into HBase; transformed data is stored in Hive.
  • The developer will be responsible for writing Spark/Scala code to ingest and process/transform insurance data.
  • Workstream #2: data is ingested into the lake by a separate team using batch processes. This workstream is responsible for curating and consuming data in Hive.
  • The developer will be responsible for writing Spark/Scala code to process/transform insurance data, building data marts that summarise the data, and reporting from those data marts.
  • Estimating work packages
  • Design/documentation/software development
  • Writing and maintaining unit and acceptance tests (in TDD/BDD styles)
  • Deploying code to production
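As a rough illustration of the curate-and-summarise work described above, here is a minimal Scala sketch. The schema (policyId, region, premium) and the filtering rule are illustrative assumptions, not the project's actual data model, and plain Scala collections stand in for Spark DataFrames so the snippet is self-contained:

```scala
// Hypothetical sketch of the curate/summarise steps described above.
// Field names are assumptions; plain collections stand in for Spark.
case class Policy(policyId: String, region: String, premium: BigDecimal)
case class RegionSummary(region: String, policyCount: Int, totalPremium: BigDecimal)

object MartBuilder {
  // Curate: drop malformed records (here, non-positive premiums).
  def curate(raw: Seq[Policy]): Seq[Policy] =
    raw.filter(_.premium > 0)

  // Summarise into per-region "data mart" rows, functional style.
  def summarise(policies: Seq[Policy]): Seq[RegionSummary] =
    policies
      .groupBy(_.region)
      .map { case (region, ps) =>
        RegionSummary(region, ps.size, ps.map(_.premium).sum)
      }
      .toSeq
      .sortBy(_.region)
}
```

In the actual role the same shape of logic would typically be expressed with Spark DataFrame operations (`groupBy`/`agg`) and the result written to a Hive-backed mart table.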

Essential Skills Required:

  • Strong Scala experience, preferably in functional programming style
  • Strong Spark experience, preferably with Data Frames
  • Worked in an agile team
  • Good documentation skills (Word, design patterns, HLD, LLD, Confluence, support handover documents, etc.)
  • Writing and maintaining unit tests / acceptance tests (TDD / BDD)
  • Experience with Git, GitHub (or GitLab) and pull request contributions
  • Familiarity with Continuous Integration and DevOps tools (Jenkins, Nexus, UrbanCode, etc.)
  • JIRA/Confluence/Agile-Scrum-Kanban methodology
  • Experience with Linux shell scripting
  • Previously worked in the Finance industry

Desirable Skills:

  • Experience with Scala libraries such as Cats, Shapeless or Frameless.
  • Experience in tuning and optimising the performance of Spark applications
  • Experience parsing structured data using Spark SQL
  • Experience with Maven or other build tools (SBT, Gradle, etc)
  • Knowledge and experience of pure functional programming, in Scala or other programming languages (Haskell, Elm, etc.)
  • Experience with Python and PySpark
  • Experience in the Hadoop ecosystem: HDFS command line tool, Apache Hive, Apache HBase, Apache Oozie, etc.

If you would like to be considered for this position, please click 'Apply' or send your CV to Tewsdae Phillips -

Ready to Start?

Apply now
Outsource - taking care of everything