Data Engineer

DuoPeak - Menlo Park, CA

Employment Type: Full-Time

At DuoPeak we’re a team of passionate, hard-working individuals with a real love for mobile games. We’re fascinated by what makes a game successful, and through our combined experience we’ve found that it comes down to three things: Product, Marketing, and Operations.


We’re looking for a Data Engineer who will thrive in a hands-on environment and is always looking for ways to improve and grow our business through big data and AI. This role offers significant room for growth.


Key Responsibilities:


  • Create and maintain optimal data pipeline architecture.

  • Identify, design, and implement internal data process improvements for greater security, accuracy, stability, and scalability.

  • Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of sources using SQL and ‘big data’ technologies on GCP and AWS.

  • Build analytics tools that utilize the data pipelines to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics.

  • Create data tools and deploy ML models for the analytics and data science teams, helping them build and optimize our product into an innovative industry leader.

  • Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs.


What We're Looking For:


  • Highly analytical and data-driven

  • Detail-oriented and organized

  • Advanced SQL skills, including query authoring, experience with relational databases, and working familiarity with a variety of database systems.

  • Experience with data processing frameworks (such as Hadoop, Spark, Pig, and Hive), the MapReduce paradigm, data ingestion tools such as Flume, and data pipeline/workflow management tools such as Azkaban, Luigi, or Airflow.

  • Experience with object-oriented and functional scripting languages: Python, Java, C++, Scala, etc.

  • Experience building and optimizing ‘big data’ data pipelines, architectures, and data sets.

  • Experience building processes supporting data transformation, data structures, metadata, dependency and workload management.

  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ data stores.


A Plus:


  • Strong project management and organizational skills

  • Experience supporting and working with cross-functional teams in a dynamic environment.

  • A graduate degree in Computer Science, Statistics, Informatics, Information Systems, or another quantitative field.

  • Experience with Google Cloud or AWS cloud services.

  • Experience with stream-processing systems: Storm, Kafka, Spark Streaming, etc.


Benefits:


  • Fully Covered Health Insurance

  • Unlimited PTO

  • 401(k)

  • Snacks and Drinks

  • Cell Phone Reimbursement
