Big Data Engineer @ Reloading

Description

Reloading has been in the market since 2014 and specializes in Training and Consulting services, from Demand Management to Service Management, bringing together a set of functional and technological skills to help its clients.

Certified by:

  • E.E.P. (Endorsed Education Provider™) from IIBA® (International Institute of Business Analysis) – Provider ID: 141915
  • R.E.P. (Registered Education Provider) from PMI® (Project Management Institute®) – Provider ID: 3988
  • Agile methodologies, with extensive experience preparing professionals through Scrum.org
  • Accredited Training Organization (ATO) by Axelos for ITIL materials – in process

Job brief

We are looking for a skilled Big Data Engineer to join our analytics team. The ideal candidate has an eye for building and optimizing data systems and will work closely with our systems architects, data scientists, and analysts to help direct the flow of data within the pipeline and ensure consistency of data delivery and utilization across multiple projects.

Responsibilities:

  • Work closely with other data and analytics team members to optimize the company’s data systems and pipeline architecture
  • Design and build the infrastructure for data extraction, preparation, and loading of data from a variety of sources using technology such as SQL and AWS
  • Build data and analytics tools that will offer deeper insight into the pipeline, allowing for critical discoveries surrounding key performance indicators and customer activity
  • Continually pursue greater efficiency across all of the company’s data systems

Requirements:

  • Graduate degree in Computer Science, Information Systems, or an equivalent quantitative field, and 3+ years of experience in a similar Data Engineer role
  • Experience working with and extracting value from large, disconnected, and/or unstructured datasets
  • Demonstrated ability to build processes that support data transformation, data structures, metadata, dependency, and workload management
  • Strong interpersonal skills and ability to project manage and work with cross-functional teams
  • Advanced SQL knowledge, experience with relational databases and query authoring, and working familiarity with a variety of databases
  • Experience with the following tools and technologies:
    • Hadoop, Spark, Kafka
    • Relational SQL and NoSQL databases
    • Data pipeline/workflow management tools such as Azkaban and Airflow
    • AWS cloud services such as EC2, EMR, RDS and Redshift
    • Stream-processing systems such as Storm and Spark-Streaming
    • Object-oriented/functional scripting languages such as Python, Java, C++, etc.

Interested in this opportunity? Please send your CV to [email protected]