




We are specialists in **technological transformation**, combining human expertise with AI to build scalable tech solutions. With more than 8,000 CI&Ters worldwide, we have partnered with over 1,000 clients throughout our 30-year history. Artificial Intelligence is our reality.

**Important**: If you reside in the Campinas Metropolitan Region, in-person attendance at our city offices is mandatory, per our current attendance policy.

We are seeking a senior, hands-on leader to join and lead our CI&T Data team in building a modern, product- and platform-oriented data ecosystem. You will work directly with mid- and senior-level management, translating business needs into reusable architectures and capabilities, integrating governance-as-a-platform, Policy-as-Code, Lakehouse/event-driven approaches, and GenAI to accelerate Data Discovery, Data Quality, Compliance, and adoption.

**Responsibilities:**

* Lead and develop the team, promoting engineering best practices and a Data-as-a-Product culture (domain ownership, SLOs/SLIs).
* Design and implement complex data pipelines (batch and streaming), with automation, orchestration, and ETL/ELT optimization.
* Develop transformations in Python/PySpark and SQL, with automated testing (PyTest) and engineering standards.
* Ensure end-to-end Data Quality: tests-as-code, dataset SLOs/SLIs, proactive alerting, and automated remediation.
* Design the platform blueprint: metadata-first, lineage, Policy-as-Code, Data Catalog/Discovery, semantic layer, and observability.
* Integrate GenAI into workflows: copilots for catalog and documentation, sensitive-data (PII) classification, assisted policy and test generation, and Change Intelligence (impact summaries).
* Collaborate with cross-functional teams (data, product, security, compliance, business), translating business requirements into scalable, measurable technical solutions.
* Drive architecture decisions (Lakehouse, Delta/Parquet, real-time with Pub/Sub/Kafka, microservices) with a focus on scalability, cost, and security.
* Operate “always-on” compliance mechanisms (LGPD/PII): classification, masking, contextual access control, and end-to-end traceability.
* Integrate the platform with the corporate ecosystem (APIs, events, legacy/SaaS systems), ensuring performance and reliability.
* Implement versioning, CI/CD, and IaC (Git/Azure DevOps) for reproducibility and reduced time-to-data.
* Interact with senior management; present roadmaps, risks, and results; define and track OKRs/metrics (adoption, lead time, incidents, residual risk).
* Evangelize and manage change to increase adoption and business value.

**Required Experience:**

* Solid experience building data pipelines with Python, PyTest, PySpark, and SQL.
* Solid experience on Google Cloud Platform (BigQuery, Dataproc, Dataflow, Pub/Sub, Composer/Airflow, IAM).
* Solid experience with orchestration tools (Airflow/Composer, Dagster, Prefect).
* Experience with Databricks in complex pipelines (Delta Lake, Jobs/Workflows, Unity Catalog).
* Hands-on experience with relational and non-relational databases, including schema design and query optimization.
* Experience with microservices architecture and enterprise integrations (REST/GraphQL, event-driven).
* Proficiency with Git/Azure DevOps for versioning, CI/CD, and collaboration.
* Practical experience in operational Data Governance: metadata, lineage, Data Catalog/Discovery, Data Quality, and security & access.
* Experience leading technical teams and engaging with executives, with strong communication and influence skills.
* Solid understanding of security, privacy, and compliance (LGPD) applied to data.
* Ability to make architectural decisions focused on cost efficiency (FinOps), scalability, and reliability.

**Desirable:**

* Bachelor’s degree in Computer Science, Software Engineering, Information Systems, or a related field.
* Prior senior experience in data engineering and GCP-based projects.
* Certifications: Google Professional Data Engineer/Cloud Architect; Databricks Data Engineer Professional; Airflow/Dagster; security.
* Data Quality/observability tools (Great Expectations, Soda, Monte Carlo) and metadata/catalog tools (DataHub, Collibra, Atlan).
* Experience with Policy-as-Code and governance (OPA, Ranger, BigQuery policies, DLP, masking/tokenization).
* Knowledge of semantic layers and metrics (Looker semantic model, dbt metrics).
* Experience with GenAI/ML for data (Vertex AI, embeddings/RAG for Discovery, internal assistants).
* Experience with Data Mesh and data-as-a-product strategies in large organizations.

**Our benefits:**

* Health and dental insurance;
* Meal and food allowance;
* Childcare assistance;
* Extended parental leave;
* Partnerships with gyms and health & wellness professionals via Wellhub (Gympass, TotalPass);
* Profit Sharing Program (PLR);
* Life insurance;
* Continuous learning platform (CI&T University);
* Discount club;
* Free online platform dedicated to physical, mental, and overall well-being;
* Pregnancy and responsible parenting course;
* Partnerships with online learning platforms;
* Language learning platform;
* And many more.

More details about our benefits here: https://ciandt.com/br/en-us/careers

At CI&T, inclusion starts from the first contact. If you are a person with a disability, it is important to **submit your medical report during the selection process.** *Check which information must be included in the report.* This way, we can ensure the support and accommodations you deserve. **If you don’t yet have the official characterization report, don’t worry; we can support you in obtaining it.** We have a dedicated Health & Wellness team, inclusion specialists, and affinity groups that will accompany you at every stage. Count on us to walk this journey side by side.


