




**Job Summary:** Design, build, and maintain the corporate data architecture, ensuring the integration, quality, availability, and governance of information for strategic decision-making.

**Key Highlights:**

1. Development of data pipelines (batch and streaming)
2. Data modeling for the corporate Data Warehouse and Data Lake
3. Collaborative work with BI, Analytics, and business area teams

#### **About the Data Engineer Role**

**Join a vibrant ecosystem where the future of business is created and experienced every day. Be part of this transformation!**

At LUZA Group, passion, perseverance, and the drive to surpass limits define our path to success. Founded in 2006, we are a Portuguese multinational company with over 1,200 talented professionals and a significant volume of business. With a presence in strategic markets including Portugal, Spain, Morocco, Brazil, Mexico, the United States, and China, we deliver innovative solutions in engineering, IT, design, consulting, Industry 4.0, training, and recruitment. Everything we do is powered by the talent of our people.

**This is a moment of growth and opportunity. The future belongs to visionary minds. Join us.**

You will design, build, and maintain the corporate data architecture, ensuring the integration, quality, availability, and governance of data originating from industrial, ERP, logistics, commercial, and financial systems, supporting BI, analytics, and strategic decision-making initiatives.

**Main Responsibilities**
--------------------------------

* Design and implement data pipelines (batch and streaming).
* Develop data ingestion, transformation, and delivery processes (ETL/ELT).
* Model data for the corporate Data Warehouse and Data Lake.
* Integrate data from:
  + Corporate ERP
  + Industrial systems (shop floor)
  + Logistics and commercial systems
* Ensure data quality, traceability, and governance.
* Optimize the performance of queries and analytical structures.
* Collaborate closely with BI, Analytics, and business area teams.
* Implement best practices for versioning, observability, and documentation.
* Support data initiatives for industrial KPIs (productivity, yield, losses, logistics SLA).

**Technology Stack (Current Corporate Environment)**
--------------------------------------------------

* **Cloud:** AWS, Azure, or GCP
* **Databases:** SQL Server / PostgreSQL / Oracle
* **Data Warehouse:** Snowflake / BigQuery / Redshift
* **Processing:** Python, PySpark
* **Orchestration:** Airflow / Data Factory
* **Transformation:** dbt
* **Integration:** REST APIs / Kafka
* **BI:** Power BI
* **Versioning:** Git

**Technical Requirements**
-----------------------

* Solid experience with dimensional modeling (Star Schema / Snowflake Schema).
* Strong knowledge of advanced SQL.
* Experience with Python for data manipulation and processing.
* Hands-on experience with cloud-based data architectures.
* Experience with pipeline orchestration tools.
* Knowledge of data governance and data quality practices.
* Experience integrating data from ERP and legacy systems.

**Preferred Qualifications**
----------------

* Experience in industrial or manufacturing environments.
* Knowledge of industrial KPIs (OEE, losses, productivity).
* Experience with logistics and cold-chain data.
* Experience in digital transformation projects.
* Knowledge of Lakehouse architecture.

**Behavioral Competencies**
--------------------------------

* Systemic vision and results orientation.
* Strong analytical skills and the ability to solve complex problems.
* Clear communication with technical and business teams.
* Collaborative profile focused on continuous improvement.
* Proactivity and an ownership mindset.

**Location:** Remote


