




**Job Summary**

A professional responsible for defining and evolving the enterprise data architecture, ensuring technical consistency, quality, security, and efficiency for analytics and AI.

**Key Highlights**

1. Lead the definition of target architectures for data and integrations.
2. Serve as a technical reference and mentor to data engineers.
3. Design and optimize robust, scalable data models and pipelines.

The Data Architecture team is responsible for defining the vision, principles, standards, and solution designs that keep the organization's data trustworthy, secure, well-cataloged, and interoperable, enabling scalable, efficient analytics and AI. It acts as a transformation agent: building a resilient platform, promoting democratization with control, and consolidating a data- and governance-driven culture.

**Job Mission**

Define and evolve the corporate data architecture (platform, integration, modeling, metadata, and governance), ensuring technical consistency, quality, security, compliance, and cost efficiency (FinOps). Guide architectural decisions with stakeholders, establish standards, and support teams in implementing solutions that sustain data and AI at scale.
**Responsibilities and Duties**

* Define target architectures for Data Lake, Data Warehouse, Lakehouse, streaming, and integrations (batch/real-time), aligned with business strategy;
* Establish standards, blueprints, and guidelines (naming conventions, partitioning, versioning, layers, data contracts, SLAs/SLOs);
* Lead architectural reviews, ensuring adherence to standards, security, and governance;
* Design, build, and maintain robust, scalable data pipelines for ingestion, processing, and transformation of large-scale data volumes;
* Develop and optimize data models for analytical systems;
* Ensure data integrity, quality, and governance by applying best practices in management, security, and FinOps;
* Collaborate with cross-functional teams to understand business requirements and deliver data-driven solutions;
* Monitor and optimize the performance of pipelines, databases, and integration solutions;
* Lead data architecture modernization initiatives using cutting-edge technologies and modern frameworks;
* Serve as a technical reference and mentor to junior and mid-level data engineers;
* Design and operationalize governance: domains, ownership (data owner/steward), access policies, data classification, LGPD, retention, and audit trails;
* Implement and evolve metadata management and cataloging, lineage, the business glossary, and the technical dictionary;
* Define and monitor quality controls (rules, thresholds, reconciliations), including incident management and metrics;
* Guide cloud service selection and design, ensuring security by design (IAM, KMS/Secret Manager, encryption, segregation);
* Define data observability: monitoring, alerts, tracing, dashboards, pipeline governance, and availability.
**Requirements and Qualifications**

**Education and Experience**

* Bachelor's degree in Computer Science, Software Engineering, Information Systems, or related fields;
* Proven experience with data visualization platforms (Tableau, Power BI, and/or Looker);
* Track record of leadership or active participation in complex integration and cloud migration projects.

**Technical Knowledge**

* Cloud & Data Platform: experience with GCP and its services (BigQuery, Cloud Composer/Apache Airflow, Dataflow, Pub/Sub, Cloud Run/Cloud Functions, KMS/Secret Manager, Datastream, Vertex AI, GitHub) and/or Azure (ADF, Synapse, ADLS);
* Integration architecture: batch, streaming, CDC, APIs; design of layers and ingestion/transformation/consumption patterns;
* Data modeling: dimensional, relational, data vault (where applicable), modeling for analytics/BI and the semantic layer;
* Governance & Security: classification and access control, LGPD, encryption, policies and auditing, catalog/lineage/metadata;
* DataOps/DevOps: CI/CD, data testing, versioning, automation, and operational best practices;
* Performance & Cost: query optimization, partitioning, clustering, tuning, and FinOps practices;
* Programming Languages: proficiency in Python and PySpark (for scripting and automation) and SQL (for data manipulation and querying);
* ETL/ELT: experience building ETL/ELT pipelines for data ingestion and transformation;
* Security and Governance: understanding of cloud security practices, data encryption, and implementation of governance policies;
* Data Orchestration: use of tools such as Azure Data Factory, Synapse, and/or Airflow/Composer for integration and data movement;
* Performance and Optimization: experience applying techniques to improve data pipeline and query performance in distributed and non-distributed environments;
* APIs: knowledge of data extraction via RESTful APIs.

**Differentiators**

* Certifications in data/cloud (e.g., GCP Professional Data Engineer, Azure Data Engineer, or equivalent);
* Experience with Azure and its services, such as Azure Data Factory, Azure Synapse Analytics, and Azure Data Lake Storage;
* Hands-on experience with catalog/lineage/governance tools (e.g., Dataplex/Dataproc Metastore, Purview, Collibra, Alation);
* Experience with AI/ML data (feature store, data versioning, model governance, and traceability).

**Technical Competencies**

* Ability to design scalable, secure, and governable architectures;
* Systems thinking: end-to-end vision (source → integration → consumption → auditing);
* Architectural decision-making based on trade-offs (timeline, risk, cost, value);
* Strong ability to solve complex data problems efficiently and creatively.

**Behavioral Competencies**

* Influence to drive alignment across teams and uphold standards;
* Ownership mindset, pragmatism, and results focus with applied governance (not "paper governance");
* Analytical and results-oriented thinking;
* Excellent communication skills to translate between technical and business needs;
* Proactivity and a sense of urgency in addressing critical demands;
* Teamwork and collaborative leadership capabilities;
* Commitment to continuous learning and keeping technologically up to date.
**Additional Information**

**And there's more: check out our benefits package:**

* Health and Dental Plan – Bradesco – extendable to dependents
* PAE – Financial assistance for dependent children and/or stepchildren with intellectual disabilities
* Pharmacy Program – Discounts of up to 85%
* Supplementary Pension – FlexPrev Plan – ranging from 1% to 11%, depending on salary
* Life Insurance – coverage for all employees starting on their admission date, at no cost
* Extended Leave – Maternity (total of 180 days) and Paternity (total of 20 days)
* Meal Allowance and/or Food Voucher – Caju Benefits
* Educational Assistance – For dependents up to high school level
* TotalPass
* Solar Subscription – Special discounts on electricity bills through enrollment in a distributed generation plan
* Smiles Corporate Club – Up to 50% discount on travel plans for you, your spouse, and children aged 18+
* Service Length Bonus – Additional salary component (based on tenure), paid during vacation time

**And for our teams' development...**

* Learning Platform: Values-based learning paths and renowned curation, with over 200 courses available anytime
* Internal Recruitment: Open posting of vacancies across Brazil, encouraging internal mobility

**Who We Are**

We are Vibra, a young company born at scale, **one of the five largest companies in Brazil**, with a diversified and global investor base. **With over 50 years of experience** and a competent, committed team, we serve more than **30 million people** who visit our **8,000+ service stations** nationwide through our distributors, and **26,000 corporate clients.**

Here, we operate with **pace** in delivering **results** and **respect** in relationships. We **listen** with empathy and **deliver** with agility. We act with **ownership** and **collaborate** to make things happen. We speak with **transparency**, act with **integrity**, and **honor our commitments.**


