📣 "IPAs & APIs" Freelancer Meetup on 12. September in Hamburg 📣 Details

Cloud Native Data Engineer (f/m/d)

Status: Deactivated
Category: Tech » DevOps/Cloud
Client: Large IT Service Provider
Uplink commission: none
Job type: Recruiter

For our client, we are looking for a Cloud Native Data Engineer (f/m/d).

Role:
- In this role, you will be responsible, together with the architect(s) on your team, for designing, developing, and maintaining scalable data architectures capable of handling mass data within defined processing times.
- Moreover, the application modules will be developed and deployed on the client's internal hybrid cloud platform, which offers services that facilitate an end-to-end software development life cycle.
- You will collaborate with cross-functional teams to develop, migrate, and deploy various modules, leveraging your expertise in Cloud Native technologies, DevOps practices, and observability tools.

Key Responsibilities:
- Data Architecture
- Data Modeling
- Data Integration
- Data Quality and Governance
- Application Migration
- Documentation

Helpful Experience:
- At least 5 years of experience as a Cloud Native application engineer.
- Experience with re-architecting existing monolithic architectures into microservices-based Cloud Native architectures.
- Strong understanding of Cloud Native architectures (loosely coupled services, containers, horizontal scalability, application resilience patterns).
- Proficiency in at least one programming language: Java or Scala
- Knowledge of and experience with at least some of the following (Big) Data technologies/frameworks:
  - Workflow orchestration (Airflow, Oozie, etc.)
  - Data integration/ingestion (NiFi, Flume, etc.)
  - Messaging/data streaming (Kafka, RabbitMQ, etc.)
  - Data processing (Spark, Flink, etc.)
  - RDBMS (PostgreSQL, MySQL, etc.)
  - NoSQL stores (MongoDB, Cassandra, Neo4j, etc.)
  - Time series databases (InfluxDB, OpenTSDB, TimescaleDB, Prometheus, etc.)
  - and/or their cloud-provided counterparts, i.e., cloud data/analytics services (GCP, Azure, AWS)
- Familiarity with reference Big Data architectures (Data Warehouse, Data Lake, Data Lakehouse) and their implementation.
- Proficiency in the following tech stack:
  - Deployment & containerization: Docker/Jib, Kubernetes, Helm, OpenShift
  - CI/CD & DevOps tools: Azure DevOps, GitHub Actions, GitOps, GitLab, Bash/shell scripting, Linux
- Familiarity with agile development methodologies and tools (e.g., Scrum, SAFe, JIRA, Confluence).

Must-have skills
Cloud solutions, Microservices, Big Data analytics, CI/CD, ETL

Nice-to-have skills
Java, Scala, Apache Airflow, Apache NiFi, Apache Kafka, RabbitMQ, Apache Spark, PostgreSQL, MongoDB, InfluxDB, Docker, Kubernetes, Data Lake Storage

Start date
Earliest: 1 Oct 2024
Latest: 1 Nov 2024

Duration
4-24 months
with an option to extend

Workload
Full-time

Remote
Partially possible
75% remote (3 weeks remote / 1 week in Berlin)

Required language skills
German, English
Very good German and English skills

Budget
90-130 € per hour
When applying, please state one hourly rate for on-site work and one for remote work, where applicable.

Work location
Berlin
