




Job Summary:
You will be responsible for designing, developing, and maintaining high-performance data pipelines, ensuring data quality, governance, and scalability for advanced analytics and strategic decision-making.

Key Highlights:
1. Design and maintain data ingestion, transformation, and integration pipelines
2. Implement scalable architectures for Data Lakes and Data Warehouses
3. Collaborate with BI, data science, and engineering teams

Description:
You will be responsible for designing, developing, and maintaining high-performance data pipelines, ensuring data quality, governance, and scalability. You will contribute to building and evolving modern data architectures, integrating multiple data sources, and enabling advanced analytics, artificial intelligence, and strategic decision-making.

Responsibilities and Duties
* Design and maintain data ingestion, transformation, and integration pipelines (ETL/ELT).
* Implement scalable architectures for Data Lakes, Data Warehouses, and analytical systems.
* Optimize queries, data models, and storage for large-scale datasets.
* Ensure data quality, integrity, and security in compliance with LGPD.
* Automate data collection, processing, and loading workflows.
* Monitor data flows and resolve any failures.
* Collaborate with BI, data science, and engineering teams to integrate solutions.

Requirements and Qualifications
* Bachelor's degree in Computer Science, Software Engineering, Information Systems, or related fields.
* Advanced English proficiency.
* Solid experience with Python and SQL.
* Knowledge of relational databases (MySQL, PostgreSQL, SQL Server) and non-relational databases (MongoDB, OpenSearch).
* Experience with ETL/ELT tools (Apache Airflow, dbt, Talend) and Data Lake / Data Warehouse architectures.
* Experience with the AWS ecosystem, including ElastiCache (Redis) and RabbitMQ.
* Knowledge of CI/CD for data pipelines.
* Experience with distributed processing frameworks (Spark, Flink, Kafka).
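To give candidates a concrete sense of the ETL/ELT pipeline work listed above, here is a minimal sketch of an Apache Airflow DAG using the TaskFlow API (Airflow 2.x). It is illustrative only: the DAG name, schedule, and sample payload are hypothetical and not taken from this posting.

```python
# Illustrative sketch only; names and data are hypothetical.
from datetime import datetime

from airflow.decorators import dag, task


@dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def sample_etl():
    @task
    def extract() -> list[dict]:
        # A real task would pull from a source such as PostgreSQL or an
        # API; a static payload keeps this sketch self-contained.
        return [{"order_id": 1, "amount": "19.90"},
                {"order_id": 2, "amount": "5.00"}]

    @task
    def transform(rows: list[dict]) -> list[dict]:
        # Normalize types so downstream consumers receive clean data.
        return [{"order_id": r["order_id"], "amount": float(r["amount"])}
                for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        # A real task would write to the warehouse; printing stands in
        # for the load step here.
        print(f"loaded {len(rows)} rows")

    # Wire the tasks into an extract -> transform -> load dependency chain.
    load(transform(extract()))


sample_etl()
```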


