




Client: Avantti Tecnologia
Location: Remote
Contract: PJ or CLT (please indicate salary expectations)
Language: Intermediate/Advanced English (preferred)

**Responsibilities**

* Develop and maintain scalable, high-performance data pipelines.
* Work with large volumes of structured and unstructured data, ensuring quality, integrity, and availability.
* Implement integrations via APIs (REST and/or messaging), including data transformation and workflow automation.
* Develop distributed processing solutions and optimize queries.
* Automate deployments and resource provisioning using Infrastructure-as-Code principles.
* Orchestrate data workflows, ensuring monitoring, versioning, and observability.
* Collaborate with technical and business teams to translate requirements into efficient data engineering solutions.

**Requirements**

* Solid experience in Python for data manipulation, automation, and integrations.
* Solid experience in advanced SQL, including query optimization and data modeling.
* Knowledge of distributed and parallel processing, applying cluster computing concepts.
* Experience with cloud-based data architecture (preferably AWS).
* Data integration via APIs, including authentication, error handling, and integration patterns.
* Code versioning and automation practices (Git and DevOps/DataOps).
* Familiarity with Infrastructure-as-Code and automated provisioning.
* Knowledge of pipeline orchestration and execution governance.
* Familiarity with distributed query engines and large-scale processing optimization.
* Experience orchestrating data workflows with tools such as Airflow (desirable, not mandatory).

**Desirable Knowledge (Tools)**

* Version control and CI/CD platforms (e.g., GitLab).
* Data processing and cluster tools (Databricks, Spark).
* Data lake platforms and distributed SQL engines (Trino, Dremio).
* Pipeline orchestration tools (Airflow).
* Cloud solutions (AWS) and Infrastructure-as-Code (Terraform).
* API integration and consumption.


