Senior Data Engineer

84.51° - Cincinnati, OH

Employment Type: Full-Time

Senior Data Engineer (G2) Job Description

About Us
We are a full-stack data science company and a wholly owned subsidiary of The Kroger Company. We own 10 petabytes of data and collect 35+ terabytes of new data each week, sourced from 62 million households. As a member of our engineering team, you will use various cutting-edge technologies to develop applications that turn our data into actionable insights used to personalize the customer experience for shoppers at Kroger. We use an agile development methodology, bringing everyone into the planning process to build scalable enterprise applications.

What you'll do
As a senior data engineer, you will build solutions that ingest, store, and distribute our big data for consumption by data scientists and our products. Our data engineers use Python, Hadoop, PySpark, Hive, and other data engineering technologies, working alongside our application developers to deliver data capabilities and services to our scientists, products, and tools.
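To give a concrete flavor of this kind of work, here is a minimal PySpark sketch of an ingest-and-aggregate job; the paths, column names, and output table are hypothetical illustrations, not 84.51°'s actual pipeline or schema.

    # Minimal ingest/aggregate sketch. All paths, columns, and table names
    # below are hypothetical, not 84.51°'s actual schema.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("household-spend-rollup")  # hypothetical job name
        .enableHiveSupport()                # allows saving results as a Hive table
        .getOrCreate()
    )

    # Ingest: read raw transaction files (assumed Parquet) from HDFS.
    transactions = spark.read.parquet("hdfs:///data/raw/transactions")

    # Transform: roll up weekly spend per household (assumed columns).
    weekly_spend = (
        transactions
        .withColumn("week", F.date_trunc("week", F.col("txn_date")))
        .groupBy("household_id", "week")
        .agg(F.sum("amount").alias("total_spend"))
    )

    # Distribute: persist as a partitioned Hive table for downstream consumers.
    (
        weekly_spend.write
        .mode("overwrite")
        .partitionBy("week")
        .saveAsTable("analytics.household_weekly_spend")
    )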

Responsibilities
Take ownership of features and drive them to completion through all phases of the 84.51° SDLC. This includes internal- and external-facing applications as well as process improvement activities:

  • Participate in the design and development of cloud- and Hadoop-based solutions
  • Perform unit and integration testing
  • Collaborate with architecture and lead engineers to ensure consistent development practices
  • Provide mentoring to junior engineers
  • Participate in retrospective reviews
  • Participate in the estimation process for new work and releases
  • Collaborate with other engineers to solve and bring new perspectives to complex problems
  • Drive improvements in people, practices, and procedures
  • Embrace new technologies and an ever-changing environment

Requirements:
Bachelor's degree, typically in Computer Science, Management Information Systems, Mathematics, Business Analytics, or another STEM field.

  • 5+ years of professional data development experience
  • 3+ years of development experience with Hadoop/HDFS
  • 3+ years of development experience with Java or Python
  • 3+ years of experience with Spark/PySpark
  • Solid understanding of ETL concepts
  • Exposure to version control systems (Git, SVN)
  • Strong understanding of agile principles (Scrum)

Preferred Skills – Experience with the following

  • Exposure to NoSQL databases (MongoDB, Cassandra)
  • Exposure to Service Oriented Architecture
  • Exposure to cloud platforms (Azure/GCP/AWS)
  • Proficiency with relational data modeling
  • Continuous Integration/Continuous Delivery
