Hiring Company: Multinational Insurance Company
Work Location: Tokyo
Japanese: Fluent (both spoken and written)
English: Advanced (both spoken and written)
We're looking for people who are passionate about data, with an emphasis on quality programming and
building the best possible solutions. If you are an innovative and adaptable data expert with a
strong desire to succeed, you might be a good fit for this role.
You'll have demonstrated experience working in a high-performing business intelligence or data
warehouse environment, excellent communication skills, and a passion for problem-solving and
learning new technologies.
You'll be exposed to a large variety of tasks, tools, and programming languages, so the desire and
ability to constantly learn new skills are essential.
Data engineers working in the data lake team carry out a wide variety of business intelligence
tasks in a largely AWS-based cloud computing environment, including:
1. Building high-quality, sustainable data pipelines and ETL processes to extract data from
a variety of APIs and ingest it into cloud-based services.
2. Efficiently developing complex SQL queries to aggregate and transform data for analytics
teams and general users.
3. Maintaining accurate and error-free databases and data lake structures.
4. Conducting quality assessment and integrity checks on both new and existing queries.
5. Monitoring existing solutions and working proactively to rapidly resolve errors and identify
future problems before they occur.
6. Using data visualization tools such as Power BI, SSRS, Tableau, and Looker to develop
high-quality dashboards and reports.
7. Consulting with a variety of stakeholders to gather new project requirements and transform
these into well-defined tasks and targets.
English Level: Fluent (approx. 75% English usage)

You'll bring the following skills and experience:
1. 3-5 years of practical experience in data or analytics, with at least 1 year working in an
engineering or BI role.
2. At least 1 year of practical experience working on data pipelines or analytics projects with
languages such as Python, Scala, or Node.js.
3. At least 2 years of practical experience working on data pipelines or analytics projects with
SQL/NoSQL databases (ideally in a Hadoop-based environment).
4. Strong knowledge and practical experience working with at least four of the following AWS
services: S3, EMR, ECS/EC2, Lambda, Glue, Athena, Kinesis/Spark Streaming, Step
Functions, CloudWatch, DynamoDB.
5. Strong experience working with data processing and ETL systems such as Oozie, Airflow,
Azkaban, Luigi, and SSIS.
6. Experience developing solutions inside a Hadoop stack using tools such as Hive, Spark,
Storm, Kafka, Ambari, Hue, etc.
7. Ability to work with large volumes of both raw and processed data in a variety of formats,
including JSON, ORC, Parquet, and CSV.
8. Ability to work in a Linux/Unix environment (predominantly via EMR and the AWS CLI/Hadoop).
9. Experience with DevOps solutions such as Jenkins, GitHub, Ansible, Docker, and Kubernetes.
10. Minimum undergraduate-level qualifications in a technical discipline such as Computer
Science, Data Science, Analytics, Machine Learning, or Statistics. Postgraduate
qualifications will be highly regarded.
11. Demonstrated experience and expertise in setting up and maintaining cloud data solutions
and AWS infrastructure will be highly regarded.
12. Strong knowledge of cloud-based data security, encryption and protection methods will also
be highly regarded.
Company Type: Large Company (over 300 employees) - Foreign-Owned Company