




**Job Summary:** We are seeking a professional to develop and optimize data processing and modeling solutions, collaborating with multidisciplinary teams on technological transformation.

**Key Highlights:**
1. Technological transformation with human expertise and AI
2. Development and optimization of data processing solutions
3. Team collaboration for continuous improvement

We are specialists in **technological transformation**, combining human expertise with AI to create scalable tech solutions. With over 8,000 CI&Ters worldwide, we have built partnerships with more than 1,000 clients throughout our 30-year history. Artificial Intelligence is our reality.

**Important:** If you reside in the Metropolitan Region of Campinas, physical presence at our city offices is mandatory, per our current attendance policy.

**Responsibilities:**
* Develop data processing routines and solutions using Python on Azure Databricks.
* Create, maintain, and optimize ETL/ELT pipelines, ensuring data quality and reliability.
* Work with relational and non-relational databases, ensuring efficient data storage and retrieval.
* Perform data modeling (dimensional models, star schema, and snowflake schema).
* Support data preparation, cleansing, and validation for analytics.
* Identify and resolve issues in data ingestion and transformation processes.
* Actively participate in data quality and integrity control routines.
* Collaborate with multidisciplinary teams, contributing to data-driven solutions and continuous improvement.

**You must have experience with:**
* Proven experience in data development and modeling.
* Practical experience with Azure Data Factory and Databricks for pipeline development.
* Experience in data modeling (Entity-Relationship, logical, and physical models).
* Proficiency in Python for data processing and manipulation.
* Experience with SQL (queries, views, stored procedures, and optimization).
* Knowledge of MongoDB or another non-relational database.
* Familiarity with version control tools such as Git, GitHub, or Bitbucket.
* Ability to handle moderate to large volumes of data.

**Nice-to-have:**
* Experience with Databricks Delta Lake.
* Knowledge of CI/CD processes (Azure DevOps or similar).
* Experience with Apache Kafka for data ingestion and streaming.

#LI-RR2

**Our Benefits:**
* Health and dental insurance;
* Meal and food allowance;
* Childcare assistance;
* Extended parental leave;
* Gym and health/wellness professional partnerships via Wellhub (Gympass, TotalPass);
* Profit and Results Sharing Program (PLR);
* Life insurance;
* Continuous learning platform (CI&T University);
* Discount club;
* Free online platform dedicated to promoting physical health, mental health, and well-being;
* Pregnancy and responsible parenting course;
* Partnerships with online course platforms;
* Language learning platform;
* And many more.

More details about our benefits here: https://ciandt.com/br/pt-br/carreiras

At CI&T, inclusion begins at first contact. If you are a person with a disability, it is important to **submit your medical report during the selection process.** *Check which information must be included in the report by clicking here.* This way, we can ensure the support and accommodations you deserve. **If you do not yet have the official medical report, don't worry: we can support you in obtaining it.** We have a dedicated Health and Well-being team, inclusion specialists, and affinity groups to support you through every step. Count on us to walk this journey alongside you.


