




**Job Summary:** We are seeking a Senior Data Engineer to design data architectures, implement and maintain data pipelines, refactor ingestion architectures, and ensure data quality.

**Key Highlights:**

1. Remote work (*anywhere office*) on national and international projects.
2. Continuous learning and development in a relaxed and dynamic environment.
3. Focus on technology, creativity, and challenges.

We are passionate about technology, creativity, and challenges. If you enjoy challenges and constant learning, and value personal connections, join us!

We value diversity and believe it is essential for innovation and for delivering value to our clients. All our positions are open to everyone, with or without disabilities, regardless of age, gender, sexual orientation, ethnicity, religion, or any other characteristic. If you identify with this role, come join our team!

**WHAT ARE WE LOOKING FOR?**

We are looking for a **Data Engineer** at the **Senior** career level who wants to work with us in a relaxed and dynamic environment, with continuous learning, while developing large-scale projects alongside major national and international clients. We have offices in Maringá, São Paulo, and Chicago (USA), but our operations are fully remote (we prefer to call it *anywhere office*).

**WHAT WILL THIS PROFESSIONAL DO?**

* Support the definition and evolution of the **data architecture** (medallion, batch, CDC, streaming).
* Directly implement and maintain **data pipelines** on Databricks using Python, SQL, and Spark (an illustrative sketch appears at the end of this posting).
* Participate in **refactoring the ingestion architecture**, replacing current processes with direct ingestion from MongoDB.
* Develop and adjust notebooks for data ingestion and transformation (e.g., fix tables, add columns, review logic).
* Ensure data quality, consistency, and governance, supporting the team's technical decisions.
* Collaborate with technical stakeholders to propose scalable, secure, and efficient solutions.

**WHAT IS REQUIRED FOR THIS POSITION?**

* Solid experience in **Python** and **SQL**.
* Knowledge of **Spark** and **data architecture** (batch and streaming, CDC, medallion).
* Experience with **Databricks**; knowledge of MongoDB and Kafka is desirable.
* Ability to design ingestion and data processing architectures for batch and/or streaming scenarios.
* Hands-on experience with **Azure**, especially **Cosmos DB** and **Data Factory**.
* A proactive mindset and the ability to collaborate closely with a **tech lead**, with project- and architecture-level vision.

**WHAT WOULD BE A PLUS?**

* Knowledge of **shell scripting**.
* Prior experience with complex data ingestion scenarios (e.g., streaming -> batch, MongoDB integrations).
* Familiarity with best practices in data governance and cloud pipeline optimization.

**HIRING PROCESS STEPS:**

* Application submission
* Cultural fit interview
* Technical interview
* Client interview
* Hiring

***Note: It is essential that there are no conflicts of interest or affiliations that could compromise confidentiality or impartiality. This criterion will be assessed from the beginning of the selection process.***
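For candidates curious about the day-to-day work, here is a minimal sketch of the kind of pipeline mentioned above: a batch ingestion from MongoDB into a bronze Delta table on Databricks using PySpark. It assumes the MongoDB Spark connector (10.x) is installed on the cluster; the connection URI, database, collection, and table names are hypothetical placeholders, not details of the actual project.

```python
# Illustrative sketch only: batch ingestion from MongoDB into a bronze Delta table
# on Databricks (medallion architecture). All names and URIs are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mongo_bronze_ingestion").getOrCreate()

# Read a MongoDB collection with the MongoDB Spark connector (assumed installed on the cluster).
raw_df = (
    spark.read.format("mongodb")
    .option("connection.uri", "mongodb://<host>:27017")  # placeholder connection string
    .option("database", "sales")                          # hypothetical database
    .option("collection", "orders")                       # hypothetical collection
    .load()
)

# Add ingestion metadata and land the data as-is in the bronze layer.
bronze_df = raw_df.withColumn("_ingested_at", F.current_timestamp())

(
    bronze_df.write.format("delta")
    .mode("append")
    .saveAsTable("bronze.orders")  # hypothetical target table
)
```

In a real project, the same pattern would typically be parameterized per collection and orchestrated (for example, via Databricks Jobs or Azure Data Factory), with streaming or CDC variants where the architecture calls for them.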


