




**Job Summary**

We are seeking an MLOps Engineer to develop and orchestrate Machine Learning pipelines, version models and datasets, automate model training and production deployment, and monitor performance in a dynamic, continuously learning environment.

**Key Highlights**

1. Remote work (anywhere office) with continuous learning
2. Development and orchestration of Machine Learning pipelines
3. Collaboration with major national and international clients

We are passionate about technology, creativity, and challenges. If you enjoy constant learning and value personal connections, join us!

We value diversity and believe it is fundamental to innovation and delivering value to our clients. All our positions are open to everyone, with or without disabilities, regardless of age, gender, sexual orientation, ethnicity, religion, or any other characteristic. If you identify with this role, come join our team!

**WHAT ARE WE LOOKING FOR?**

We seek **MLOps Engineers** who want to work with us in a relaxed and dynamic environment, with continuous learning, while developing large-scale projects alongside major national and international clients. We have offices in Maringá, São Paulo, and Chicago (USA), but our operations are fully remote; we prefer to call it *anywhere office*.

**WHAT WILL THIS PROFESSIONAL DO?**

* **Develop and orchestrate Machine Learning pipelines** using Vertex AI Pipelines, Kubeflow, Airflow, Prefect, or similar tools.
* **Version models and datasets**, ensuring reproducibility and traceability of experiments (MLflow, DVC, Vertex AI Model Registry).
* **Automate model training, validation, and production deployment**, covering both batch and online scenarios.
* **Monitor models in production**, detecting drift, performance degradation, and latency issues.
* Implement and manage **CI/CD for pipelines and models**, integrating Cloud Build, GitHub Actions, or GitLab CI.
* **Prepare and transform data (feature engineering)** to feed ML models.
* **Apply statistical modeling and ML algorithms**, both supervised and unsupervised, as appropriate to the problem.
* **Evaluate models** using appropriate metrics and propose improvements.
* Develop and maintain scalable **data pipelines** using Dataflow, Apache Beam, or Spark.
* Work with **Google Cloud Platform services**, especially Vertex AI and Dataflow, to train, serve, and monitor models.

**WHAT IS REQUIRED FOR THIS POSITION?**

* Experience with **Machine Learning pipeline orchestration** (Vertex AI Pipelines, Kubeflow, Airflow, Prefect, or similar).
* **Model and dataset versioning** (MLflow, Vertex AI Model Registry, DVC).
* **Automation** of model training, validation, and **deployment**.
* **Production model monitoring** (drift, performance, latency).
* Experience with **CI/CD tools** (Cloud Build, GitHub Actions, GitLab CI).
* Knowledge of **feature engineering**.
* Understanding of **statistical modeling** and supervised/unsupervised ML.
* Knowledge of **model evaluation metrics**.
* Experience deploying models in **batch and online** environments.
* Experience building **data pipelines** using Dataflow, Apache Beam, or Apache Spark.
* Hands-on experience with **Google Cloud Platform**, especially Vertex AI and Dataflow.

**WHAT WOULD BE A PLUS?**

* Experience with **Kubernetes** and **Docker** for model deployment.
* Knowledge of **monitoring and observability** (Prometheus, Grafana).
* **Google Cloud certifications** (ML Engineer or Data Engineer).
* Experience with **Infrastructure as Code (Terraform)**.
* Experience with **Generative AI** (LLMs, RAG).

**HIRING PROCESS STEPS**

* Application submission
* Cultural fit interview
* Technical interview
* Client interview
* Offer and hiring


