




**Responsibilities:**

* **Design, develop, and maintain** machine learning pipelines (training, validation, deployment, and monitoring);
* **Implement predictive and classification models** using statistical techniques and supervised and unsupervised algorithms;
* **Ensure versioning and traceability** of data, models, and experiments;
* **Build and maintain** APIs, services, and automations that support production models;
* **Monitor** data drift and model performance in production, proposing continuous improvements;
* **Collaborate** with Data Engineering, Product, and Business teams to ensure efficient deliveries aligned with business needs;
* **Implement** MLOps best practices (CI/CD for models, automations, containers, scheduled jobs);
* **Work with orchestration and MLOps tools** (e.g., Kubeflow, MLflow, Airflow) to ensure robust workflows;
* **Define and apply Infrastructure-as-Code (IaC)** for cloud provisioning (Terraform, CloudFormation, Pulumi, etc.);
* **Manage and optimize** model deployment solutions, in both serverless environments and containerized clusters (e.g., AWS SageMaker, Kubernetes).

**Requirements:**

**We’re looking for someone with:**

* Proven experience with **Python** and libraries such as **Pandas** and **NumPy**;
* Knowledge of **modeling techniques** (*regression, classification, clustering, ensembles, etc.*);
* Solid experience with **ML libraries** (*scikit-learn, TensorFlow, PyTorch, XGBoost, CatBoost, LightGBM*);
* Strong expertise in the **ML lifecycle**, **modeling**, **hyperparameter tuning**, and **performance evaluation**;
* Practical experience deploying models to production on cloud platforms (*AWS SageMaker, GCP Vertex AI, or Azure ML*);
* Proficiency in **CI/CD applied to ML pipelines**, including automated testing and continuous integration;
* Experience with **Infrastructure-as-Code (IaC)** — *Terraform, CloudFormation, or equivalents*;
* Experience with **workflow orchestration or MLOps tools** (*Kubeflow, Airflow, MLflow*);
* Experience deploying models via **APIs** (*FastAPI, Flask, etc.*);
* Adherence to best practices for **version control (Git)** and **documentation**;
* **Familiarity with cloud environments** (*AWS, GCP, or Azure*).

**Nice-to-have:**

* Knowledge of **distributed systems** and **large-scale data processing** (*Spark, Beam*);
* Solid experience with **AWS tools** (*SageMaker, Lambda, S3, Glue, CloudFormation, CodeBuild*);
* Experience with **model monitoring** (*EvidentlyAI, WhyLabs*);
* Knowledge of **containers and orchestration** (*Docker and Kubernetes*) for scalable model serving;
* Experience with **SQL and/or NoSQL databases**;
* Understanding of **data engineering and data pipelines**;
* Experience with **microservices architecture**;
* Participation in **business-oriented data science projects**.

**Benefits:**

**What we offer:**

* Bradesco **health and dental insurance**;
* **Wellhub** (formerly Gympass);
* **Conexa Saúde & Psicologia Viva**;
* Corporate partnership with **Open English** (discounts on English and Spanish courses);
* **Caju**: remote-work allowance;
* **22 business days of paid vacation per year**;
* **Day off** on your birthday;
* **PJ (individual contractor) hiring model.**


