Senior/Principal Data Engineer
Negotiable Salary
Indeed
Full-time
Onsite
No experience requirement
No degree requirement
R. Guilherme Pereira da Silva, 59, Carmópolis - SE, 49740-000, Brazil
Description

**Job Description:** We are seeking a **Data Engineer** to perform **data source mapping and documentation** with internal teams and **external vendors** (e.g., **SGA, PROSIS, ServiceNow**) while defining **ingestion specifications**, **connector standardization**, and **ETL/ELT mechanisms**. The role will be responsible for **designing, building, and operating pipelines**, with emphasis on **security, performance, reliability, and compliance** (governance/LGPD), ensuring **data availability** for consumption layers and the corporate data catalog.

**Responsibilities**

* Gather technical requirements from internal departments and partners (e.g., SGA, PROSIS, ServiceNow) to **map, classify, and document** data sources (structured, semi-structured, and APIs).
* Define **ingestion specifications** (batch/streaming), **connector standards**, and **data contracts** (schemas, SLAs/SLOs, versioning); a minimal contract is sketched after this section.
* Design and implement resilient and observable **ETL/ELT pipelines** (reprocessing, idempotency, alerts), including **data quality monitoring** (DQ checks) and **end-to-end lineage**; see the orchestration sketch after this section.
* Optimize **performance and cost** (partitioning, clustering, compression, parallelism), applying **FinOps** principles where applicable.
* Publish datasets in **Bronze/Silver/Gold layers** and the **corporate data catalog** (metadata, access policies, classification).
* Ensure **security and compliance**: IAM, encryption, masking, anonymization/pseudonymization, retention, and auditing, aligned with **LGPD** and governance policies.
* Operate and evolve **orchestration** (job execution/retries, SLAs), conduct **tuning/troubleshooting**, and support analytics/BI teams in data consumption.
* Collaborate with Product/Business teams on **business rule definition** and **handoff to consumption layers** (APIs, views, marts).

**Requirements**

* Proven experience in **data engineering**, including development and operation of **ETL/ELT pipelines** (batch and/or streaming).
* Strong proficiency in **SQL** and **Python**; hands-on experience with **Spark** and/or **dbt** is a plus.
* Practical experience with **cloud data platforms**, preferably **GCP**: **BigQuery, Dataflow/Dataproc, Pub/Sub, Cloud Storage, Cloud Composer, Dataplex**, or equivalent services on AWS/Azure.
* Knowledge of **data catalog and lineage tools** (e.g., **Dataplex/Data Catalog/Atlas**), **data quality**, and **analytics-oriented modeling** (Medallion architecture, marts).
* Experience with **orchestration tools** (Airflow/Composer), **Git/CI-CD**, and **observability** (logs, metrics, alerts).
* Familiarity with **security and privacy practices** (IAM, encryption, LGPD), **API-based integration**, and connectors (REST, JDBC/ODBC).
* Ability to produce technical documentation and **communicate effectively** with business units and vendors.
* **Bachelor’s degree** in IT, Engineering, Computer Science, Information Systems, or related fields; certifications (e.g., **GCP Data Engineer**, **dbt**, **ITIL/DAMA**) are advantageous.

### **Employment Type:** CLT

### **Benefits:** Meal Voucher, Transportation Voucher, Culture Voucher, Health Insurance, Life Insurance

### **Department:** Corporate
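To illustrate the data-contract deliverable listed under Responsibilities, below is a minimal Python sketch of what such a contract could capture (schema, SLA/SLO targets, versioning). The feed name, field list, owner, and SLA values are hypothetical examples for illustration, not details from this posting.

```python
from dataclasses import dataclass
from typing import List

# Minimal sketch of a data contract for one ingested feed.
# All names, types, and SLA targets below are illustrative assumptions.

@dataclass(frozen=True)
class FieldSpec:
    name: str
    dtype: str           # e.g. "STRING", "TIMESTAMP", "INT64"
    nullable: bool = True
    pii: bool = False    # drives masking/anonymization downstream (LGPD)

@dataclass(frozen=True)
class DataContract:
    dataset: str                 # logical feed name
    version: str                 # semantic version of the schema
    owner: str                   # accountable team or vendor contact
    ingestion_mode: str          # "batch" or "streaming"
    schema: List[FieldSpec]
    freshness_slo_minutes: int   # max acceptable data delay
    availability_sla: float      # e.g. 0.995 = 99.5% successful loads

# Hypothetical contract for a ServiceNow incidents extract (illustrative only).
incidents_v1 = DataContract(
    dataset="servicenow_incidents",
    version="1.0.0",
    owner="vendor-servicenow-integration",
    ingestion_mode="batch",
    schema=[
        FieldSpec("incident_id", "STRING", nullable=False),
        FieldSpec("opened_at", "TIMESTAMP", nullable=False),
        FieldSpec("assignee_email", "STRING", pii=True),
        FieldSpec("priority", "INT64"),
    ],
    freshness_slo_minutes=60,
    availability_sla=0.995,
)
```

In practice a contract like this would typically be versioned in Git and validated in CI before a connector change is deployed.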
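Likewise, the orchestration, retry, and data-quality responsibilities might translate into a pipeline along the lines of the Airflow sketch below (Airflow/Composer is the orchestration tooling named in the Requirements). The DAG id, table names, schedule, and the row-count check are assumptions; a real pipeline would replace the stubs with BigQuery/Dataflow calls and a fuller DQ suite.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_raw_to_bronze(**context):
    """Idempotent load stub: (re)writes the partition for the execution date."""
    # A real implementation would call the BigQuery/Dataflow APIs here;
    # overwriting a single date partition keeps reprocessing idempotent.
    print(f"Loading partition {context['ds']} into bronze.servicenow_incidents")


def check_row_count(**context):
    """Minimal DQ gate: fail the run (triggering retries/alerts) if the partition is empty."""
    row_count = 1  # stub value; a real check would query the Bronze partition
    if row_count < 1:
        raise ValueError(f"DQ check failed: no rows loaded for {context['ds']}")


with DAG(
    dag_id="servicenow_incidents_bronze",       # hypothetical feed name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={
        "retries": 3,                           # automatic retries on transient failures
        "retry_delay": timedelta(minutes=10),
        "sla": timedelta(hours=2),              # surfaced via SLA-miss alerts
    },
) as dag:
    load = PythonOperator(task_id="load_raw_to_bronze", python_callable=load_raw_to_bronze)
    dq_check = PythonOperator(task_id="check_row_count", python_callable=check_row_count)

    load >> dq_check
```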

Source: Indeed
João Silva
Indeed · HR
