Data Engineer

Ref: 326 | Posted: 11th Jun 2018

This vacancy is now closed
We are looking for an experienced Data Engineer for our global shipping client in Glasgow.

Requirements:
  • Graduate degree in Computer Science, Information Systems or equivalent quantitative field and 5+ years of experience in a similar Data Engineer role.
  • Experience working with and extracting value from large, disconnected and/or unstructured datasets. Strong data modelling skills (relational, dimensional and flattened).
  • Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency and workload management.
  • Strong interpersonal skills and the ability to manage projects and work with cross-functional, global teams.
  • Advanced SQL knowledge, including query authoring and experience with relational databases, plus working familiarity with a variety of database technologies.
  • Experience building and optimizing ‘big data’ pipelines, architectures and datasets for machine learning and analytics.
  • Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
  • Experience with the following tools and technologies:
    • Big Data Technologies: Azure Data Lake (ADL Analytics, ADL Store), Hadoop (HDFS, HDInsight)
    • Relational SQL and NoSQL databases
    • Data pipeline/workflow management tools: Azkaban and Airflow
    • Pipeline and Stream-processing systems: Azure Event Hub, Kafka, and Spark-Streaming
    • Object-oriented or functional scripting languages such as Python, Java, C# or C++
    • Nice to have: reporting and BI tools such as SSAS, SSRS, SSIS, Tableau and Power BI
  • Experience in an operational role supporting software platforms.
  • A plus: a thorough understanding of machine learning algorithms and of supervised and unsupervised modelling techniques.
Responsibilities:
  • Work closely with other data and analytics team members to optimize the company’s data systems and pipeline architecture.
  • Design and build infrastructure for data extraction, preparation and loading from a variety of sources, using technologies such as SQL and AWS.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build data and analytics tools that will offer deeper insight into the pipeline, allowing for critical discoveries surrounding key performance metrics.
  • Continually seek greater efficiency across global data systems.
  • Develop and support reports and analytics solutions.
Competitive Salary and benefits package.