As part of the Data Lake department, you will work on Big Data projects that process large volumes of data (terabytes), either in batch or in real time, in the cloud.
The focus is on highly available microservices, so any knowledge of distributed systems is a plus.
The successful candidate will be an analytical person with solid computer science knowledge, at least one year of experience in the Java ecosystem (or similar), and basic knowledge of relational databases.
You will have the chance to work with modern application architectures and technologies such as AWS Cloud, Docker containers, Kubernetes, and GitLab CI/CD systems.
Understand the current implementation; learn and apply industry best practices.
Maintain the set of core applications.
Research & develop improved ways of storing and processing data.
Understand the CI/CD application lifecycle with the help of the DevOps team.
Pursue continuous professional growth using the available resources.
Write technical specifications, designs, unit tests, and documentation.
Familiarity with Java SE and Spring Boot.
Knowledge of SQL and relational databases, e.g. MySQL or PostgreSQL.
Knowledge of OOP and design patterns.
Fluent in English.
Big pluses:
Clean code approaches.
Knowledge of distributed computing systems.
Experience with AWS, especially Big Data services such as S3, Kinesis, and EMR.
Familiarity with key NoSQL concepts: CAP theorem, BASE vs. ACID, indexes.
Basic knowledge of Linux systems (Debian, Ubuntu, Linux Mint).
Experience with the microservices approach (Spring Cloud ecosystem).
Experience with Big Data technologies such as Hadoop, Spark.
Messaging systems such as Artemis, Kafka, and Kinesis.
Good team player and communication skills.
Fast learner, willing to improve.
Self-motivated, resourceful person.
We offer:
Possibility to work in a global product company with talented people
Competitive salary according to the qualifications
21 days of paid vacation, 5 days of paid sick leave
Free English courses
Regular corporate events and team buildings