Cloud Native Data Engineer (f/m/d)

Job description

For our client we are looking for a Cloud Native Data Engineer (f/m/d).

Role:
- In this role, you will be responsible, together with the architect(s) in your team, for designing, developing, and maintaining scalable data architectures capable of handling mass data within defined processing times.
- Moreover, the application modules will be developed and deployed on the client's internal hybrid cloud platform, which offers services to support an end-to-end software development life cycle.
- You will collaborate with cross-functional teams to develop, migrate, and deploy various modules, leveraging your expertise in Cloud Native technologies, DevOps practices, and observability tools.

Key Responsibilities:
- Data Architecture
- Data Modeling
- Data Integration
- Data Quality and Governance
- Application Migration
- Documentation

Helpful experience:
- A minimum of 5 years' experience as a Cloud Native application engineer.
- Experience re-architecting existing monolithic architectures into microservices-based Cloud Native architectures.
- Strong understanding of Cloud Native architectures (loosely coupled services, containers, horizontal scalability, application resilience patterns).
- Proficiency in at least one programming language: Java or Scala.
- Knowledge of and experience with at least some of the following (Big) Data technologies/frameworks:
  - Workflow orchestration (Airflow, Oozie, etc.)
  - Data integration/ingestion (NiFi, Flume, etc.)
  - Messaging/data streaming (Kafka, RabbitMQ, etc.)
  - Data processing (Spark, Flink, etc.)
  - RDBMS (PostgreSQL, MySQL, etc.)
  - NoSQL storage (MongoDB, Cassandra, Neo4j, etc.)
  - Time series (InfluxDB, OpenTSDB, TimescaleDB, Prometheus, etc.)
  - and/or their cloud-provided counterparts, i.e., cloud data/analytics services (GCP, Azure, AWS)
- Familiarity with reference Big Data architectures (Warehouse, Data Lake, Data Lakehouse) and their implementation.
- Proficiency in the following tech stack:
  - Deployment & containerization: Docker/JIB, Kubernetes, Helm, OpenShift.
  - CI/CD & DevOps tools: Azure DevOps, GitHub Actions, GitOps, GitLab, Bash/shell scripting, Linux.
- Familiarity with agile development methodologies and tools (e.g., Scrum, SAFe, JIRA, Confluence).

Must have skills
Cloud solutions, Microservices, Big data analytics, CI/CD, ETL

Nice to have skills
Java, Scala, Airflow, Apache NiFi, Apache Kafka, RabbitMQ, Apache Spark, PostgreSQL, MongoDB, InfluxDB, Docker, Kubernetes, Data Lake Storage

Start date
Earliest: asap
Latest: Nov 1, 2024

Length
4-24 months
with option to extend

Engagement
Fulltime

Remote
Partly possible
75% remote (3 weeks remote / 1 week in Berlin)

Language requirements
German, English
Very good German and English skills

Budget
€90-130 per hour
When applying, please state one hourly rate for on-site work and one for remote work, if available.

Onsite locations
Berlin