




Description:

So, what do you need to have?
* Experience with ML/MLOps pipelines, CI/CD (GitHub Actions), and tools such as Kubeflow, MLflow, Airflow, and AWS SageMaker
* Knowledge of containers (Docker), Kubernetes, APIs, and Python
* Experience in cloud environments (AWS, GCP, or Azure)
* Strong understanding of Machine Learning models

Advantages:
* Experience with large-scale data modeling and manipulation
* Experience with real-time/NRT model processing and deployment

If you enjoy learning and want to be part of this challenge, this opportunity is for you.

In this Data Science Specialist I role, you will join the Data Science team, whose goal is to build scalable data solutions that enhance user experience and business metrics.

On a day-to-day basis, you will:
* Build and maintain end-to-end ML model training, versioning, testing, and deployment pipelines
* Implement MLOps practices for automation, monitoring, and maintenance of production models
* Manage environments, containers, GPUs, scalability, and resource orchestration
* Collaborate with AI Scientists to transform experiments into production-ready solutions
* Ensure model observability by monitoring drift, performance, cost, logs, and metrics
* Create experimentation infrastructure and standardize AI development workflows
* Support security, governance, and best practices in model lifecycle management


