




**Positions for those who want to transform businesses and their careers!**

---

### **What you will find**

As a contracted PJ (independent contractor), you'll interact with both companies but will work even more closely with the client. At UDS, you'll also have access to:

• Manager support;
• Training;
• Flexible hours;
• Work equipment;
• Access to the corporate Udemy account;
• Access to AWS certification.

### **Desirable**

• Previous experience in startups or small teams, with the autonomy to lead end-to-end projects.
• Familiarity with modern AI frameworks (LangChain, LlamaIndex, etc.).
• Experience with predictive analytics and modeling applied to marketing or product data.
• Experience integrating with SaaS platforms or video data.
• Experience with data visualization tools (Metabase, Looker, Power BI).

### **Job Requirements**

• Solid experience with Python, SQL, and data manipulation (Pandas, NumPy, etc.).
• Practical experience with ETL/ELT pipelines and orchestration tools (Airflow, Prefect, Dagster, or similar).
• Knowledge of data modeling and analytical databases (BigQuery, Snowflake, Postgres).
• Experience with LLM APIs (OpenAI, Anthropic, etc.), embeddings, and basic NLP techniques.
• Understanding of traditional machine learning concepts (scikit-learn, regression, clustering).
• Experience with code versioning and engineering best practices (Git, Docker, basic CI/CD).
• The project involves building infrastructure on AWS, including DMS for migration, EMR with Spark for distributed processing, S3 as the data lake, and in-depth IAM (roles/policies) and networking (VPC/subnets) configuration, among other AWS ecosystem tools.
• Proficiency in PySpark for job development and mastery of Iceberg tables are essential.

### **Your responsibilities**

This role is extremely hands-on and calls for someone who goes beyond execution: we're looking for someone capable of building the company's data foundation and applying artificial intelligence in practical, creative ways that impact the product.

• Design and implement automated data pipelines (ETL/ELT) integrating multiple sources (player, transcripts, conversion metrics, etc.).
• Structure and maintain a clean, well-documented data warehouse (e.g., BigQuery, Snowflake, or Redshift).
• Create datasets and APIs that can be consumed by LLMs and internal models.
• Prototype AI applications using LLMs (OpenAI, Anthropic, Mistral) to generate headlines, CTAs, mini-hooks, and autoplays.
• Explore and apply lightweight predictive models (regressions, clustering, embeddings) to understand and predict conversion patterns.
• Monitor pipeline performance and data quality.
• Work closely with the CTO and PM to define priorities, measure impact, and ensure tangible results.

### **It's important to remember that...**

At UDS, we hire competent people who are eager to **transform using their knowledge.** **This is independent** of your location, age, ethnicity or race, religion, gender identity, or sexual orientation.

Are your skills aligned with the position? **That's all** that matters. Do your profile and values match ours? **Come create transformations with us.**


