Data Engineer - Analytics Platform Section, Analytics Data Engineering Department (ADED)

Rakuten, Tokyo, Japan
29 days ago
Job Description

Business Overview

The AI & Data Division provides innovative solutions leveraging AI and data for products across various industries, including e-commerce, finance, and telecommunications.

We focus on powerful customer-centric analytics, AI and data-driven search technologies, advertising and marketing strategies, and the development of cutting-edge analytics platforms. By effectively utilizing Rakuten Group's vast data assets and transforming data into valuable insights, we accelerate innovation and strongly support business growth. We aim to cultivate highly specialized and creative talent and become a world-leading AI & Data team. We strive to deliver innovative data analytics solutions that benefit people and societies around the world.

Department Overview

As part of the AI & Data Division, we are dedicated to developing "Rakuten Analytics," a powerful analytics platform that supports data-driven decision-making. We contribute to business growth by securely integrating diverse data such as user behavior, purchase history, and location information with Rakuten Group's rich statistical data assets, enabling advanced analysis. Furthermore, as a product development department, we are responsible for the integrated development of everything from data collection to data pipeline construction and the analytics UI.

Position: Why We Hire

We are seeking a data engineer who can contribute to business growth by strengthening Rakuten Analytics' data analysis pipeline and enhancing data processing through the utilization of new technologies, including AI. We welcome individuals who can leverage their data engineering knowledge and experience to proactively enhance our data analytics infrastructure, and who are also interested in utilizing AI technologies in the process.

Position Details

As a Data Engineer, you will build and maintain high-throughput data pipelines that process terabytes of data per hour, enabling rapid data access for users. You will contribute to the entire pipeline lifecycle, from development and deployment to incident response and data quality. Collaborating with the SRE team, you will improve the performance and reliability of our pipeline, leveraging tools like Prometheus and Grafana. You will also work with customer service and product teams to deliver data access solutions that meet diverse customer needs.

Tech Stack

  • Streaming data processing: Spark, Flink, Dataproc
  • Batch data processing: Spark, Dataproc
  • Monitoring and alerting: Elasticsearch / OpenSearch, Prometheus, Grafana
  • Data investigations: Python, PySpark, SQL, BigQuery, Hive
  • CI/CD and automation: Jenkins, Ansible
  • Programming languages: Scala, Java
  • Pipeline workflow: Airflow, Composer
  • Cloud service: Google Cloud Platform

Work Environment

We in the Data Engineering Group value collaboration and innovation, and we foster an environment where team members can inspire each other as we build and maintain robust data pipelines. We strive to leverage cutting-edge technologies to address challenges related to data processing, storage, and accessibility, ensuring that reliable and scalable data is available for critical business decisions. In our fast-paced environment, continuous learning and the pursuit of new challenges are encouraged, and you will directly contribute to the data-driven success of the entire organization.

Mandatory Qualifications:

Technical Skills

  • 5+ years of hands-on Data Engineering experience in enterprise products (e.g., e-commerce, financial services, telecommunications).
  • Proven experience in designing, developing, and deploying streaming and/or batch applications using big data technologies such as Spark, Flink, Kafka, Cloud Pub/Sub, Dataproc, SQL, BigQuery, Cloud Dataflow, Cloud Storage, and Cloud Composer.
  • Proficiency in developing Spark applications from the ground up and deploying them to production.
  • Ability to understand and modify existing Spark applications to implement new features and resolve bugs.
  • Experience in troubleshooting and optimizing Spark application performance through configuration adjustments, JVM tuning, and code refactoring.
  • Ability to perform data quality assurance (QA) and investigate data discrepancies using SQL or Spark (a minimal sketch follows this list).
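
As a concrete illustration of the last point above, here is a minimal sketch of a row-count reconciliation check in Spark, written in Scala (the programming language listed in the tech stack). The table names (raw_orders, curated_orders) and the 1% discrepancy threshold are hypothetical examples for illustration only, not taken from this posting.

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.{abs, col, count, lit}

    object OrderCountQa {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("order-count-qa").getOrCreate()

        // Daily row counts from a hypothetical source (raw) table.
        val rawCounts = spark.table("raw_orders")
          .groupBy(col("order_date"))
          .agg(count(lit(1)).as("raw_count"))

        // Daily row counts from the corresponding curated table.
        val curatedCounts = spark.table("curated_orders")
          .groupBy(col("order_date"))
          .agg(count(lit(1)).as("curated_count"))

        // Full outer join on the partition key; dates missing on either side count as zero rows.
        // Flag dates where the curated count diverges from the raw count by more than 1%.
        val discrepancies = rawCounts
          .join(curatedCounts, Seq("order_date"), "full_outer")
          .na.fill(0L, Seq("raw_count", "curated_count"))
          .withColumn("diff", abs(col("raw_count") - col("curated_count")))
          .filter(col("diff") > col("raw_count") * lit(0.01))

        discrepancies.orderBy(col("order_date")).show(truncate = false)
        spark.stop()
      }
    }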

Soft Skills / Competencies

  • Strong ownership and a proactive approach to improving our product.
  • Logical problem-solving skills and a systematic approach to execution.
  • Proven ability to drive initiatives from concept to completion.
  • Excellent collaboration and communication skills.
  • Ability to work productively and reliably in an asynchronous environment.

Desired Qualifications:

  • Experience developing and optimizing applications that regularly process data sets of at least 1 TB.
  • Experience with cloud big data technologies such as BigQuery, Dataproc, Pub/Sub, etc.
  • Working knowledge of DevOps tooling such as Jenkins, Ansible, etc.
