




Job Summary:
We are seeking an IT professional to develop and maintain data pipelines, collaborate on automating data processes, and build scalable solutions.

Key Highlights:
1. Develop and maintain data pipelines using Databricks and Apache Spark
2. Write clean and efficient Python code for data manipulation
3. Collaborate on automating data processes and continuous integration

Requirements:
* Bachelor's degree in IT or a related field (completed or in progress)
* Solid knowledge of data modeling and data engineering best practices
* Solid knowledge of SQL
* Basic/intermediate knowledge of Databricks and Apache Spark
* Basic/intermediate knowledge of Python for data applications (PySpark, etc.)

Preferred Qualifications:
* Experience with cloud environments, especially Azure

Responsibilities:
* Develop and maintain data pipelines using Databricks and Apache Spark
* Write clean and efficient Python code for data manipulation and transformation
* Collaborate on automating data processes and continuous integration using Azure DevOps
* Work with code versioning using Git
* Support the development of scalable solutions focused on data quality
* Participate in code reviews, daily stand-ups, and technical meetings with the team


