Dataservices - Data Engineering | Python + PL/SQL + Databricks (Temporary)
Indeed
Full-time
Onsite
No experience requirement
No degree requirement
Praça do Patriarca, 62 - Centro Histórico de São Paulo, São Paulo - SP, 01002-010, Brazil
Description

**Job Summary**

We are seeking a Data Engineer to develop, evolve, and optimize large-scale data pipelines, with a focus on Databricks and Spark.

**Key Highlights**

1. Develop and optimize ETL/ELT pipelines on Databricks and Spark
2. Work with Python for data engineering and Oracle / PL/SQL
3. Improve pipeline performance and provide advanced technical support

**About the Role**

The Data Engineer will be responsible for implementing robust data solutions, ensuring performance and quality in data transformations, optimizing processes, documenting technical standards, and supporting the operations team in advanced analyses. Proficiency in Databricks is mandatory, as is solid experience with Oracle and PL/SQL.

**Responsibilities**

* Develop, evolve, and optimize ETL/ELT pipelines on **Databricks**, ensuring robustness and efficiency.
* Implement transformations in **Spark** (PySpark / SQL), optimizing jobs and data flows.
* Apply **Python** to data processing, integrations, and automation.
* Work with **Oracle / PL/SQL** (procedures, functions, tuning, validations, and views).
* Build resilient routines with **validations, idempotency, and controlled reprocessing**.
* Improve pipeline performance: partitioning, caching, shuffle tuning, Delta optimization.
* Document pipelines, data flows, technical rules, and operational standards.
* Provide **N2/N3 technical support** to the operations team, assisting with failures and advanced analyses.

**Required Qualifications**

* Proven hands-on experience with **Databricks** (Jobs, Workflows, clusters, logs).
* Strong expertise in **Apache Spark** and optimization of distributed processes.
* Experience with **Python** applied to data engineering (ETL/ELT).
* Solid experience with **databases**, preferably **Oracle**.
* In-depth knowledge of **PL/SQL** (procedures, functions, performance tuning, explain plan).
* Ability to design, structure, and maintain complex data pipelines.

**Preferred Qualifications**

* Knowledge of **Delta Lake** (MERGE/UPSERT, OPTIMIZE/ZORDER, schema evolution).
* Experience with **cloud storage** (S3 or OCI Object Storage).
* Experience with **CI/CD for data** (Git + pipelines).
* Experience with DataOps, observability, and monitoring (metrics, alerts, dashboards).
* Relevant certifications (valued):
  * Databricks Data Engineer Associate/Professional
  * AWS Solutions Architect / Data Analytics

**Job Information**

* Hybrid model
* PJ contract – 6 months with possibility of extension

| Location | Contract Model | Work Model |
| --- | --- | --- |
| São Paulo, SP, BR | PJ Freelancer | Remote |
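The Delta Lake items under Preferred Qualifications (MERGE/UPSERT, OPTIMIZE/ZORDER) pair naturally in practice: upsert a staged batch into the target, then compact and co-locate files. The fragment below is a generic Databricks SQL sketch; the table names (`silver.orders`, `staging.orders_batch`) and the `order_date` clustering column are assumptions for illustration, not from the posting.

```sql
-- Upsert staged rows into a Delta target table (MERGE/UPSERT).
MERGE INTO silver.orders AS t
USING staging.orders_batch AS s
  ON t.order_id = s.order_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;

-- Compact small files and co-locate rows on a common filter column,
-- which speeds up selective reads after many incremental merges.
OPTIMIZE silver.orders ZORDER BY (order_date);
```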
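The "validations, idempotency, and controlled reprocessing" responsibility can be sketched outside Spark. The sketch below is illustrative only, assuming a hypothetical checkpoint store (here just a `set`) keyed by a deterministic batch ID; a real pipeline would persist these keys (e.g. in a Delta table) so that rerunning a failed job never double-processes a batch.

```python
import hashlib

def batch_key(source: str, partition_date: str) -> str:
    """Deterministic ID for one (source, partition_date) batch."""
    return hashlib.sha256(f"{source}:{partition_date}".encode()).hexdigest()

def process_batches(batches, checkpoint: set):
    """Process only batches whose key is not yet checkpointed.

    Re-running this function with the same inputs is a no-op for
    batches already recorded, which is the idempotency property the
    posting asks for; removing a key from the checkpoint store gives
    controlled reprocessing of just that batch.
    """
    processed = []
    for source, date in batches:
        key = batch_key(source, date)
        if key in checkpoint:
            continue  # already done: safe to re-run the whole job
        # ... the real Spark transformation would run here ...
        checkpoint.add(key)
        processed.append((source, date))
    return processed
```

Running the job twice over the same batches processes them exactly once; the second run returns an empty list because every key is already checkpointed.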

Source: Indeed
João Silva
Indeed · HR

Company

Indeed
© 2025 Servanan International Pte. Ltd.