




Job Summary:
A professional to develop and maintain data transformation pipelines, build dimensional models and integration processes, and assist in data governance.

Key Highlights:
1. Development of pipelines for raw data transformation
2. Construction of dimensional models for accessible and consistent data
3. Active participation in data governance and observability

Requirements:
* Bachelor's degree (completed or in progress) in Information Technology or a related field.
* Advanced SQL knowledge (procedures and functions; window functions).
* Knowledge of relational and NoSQL databases (Redshift, Oracle, PostgreSQL, DynamoDB, MongoDB).
* Knowledge of data distribution.
* Experience with relational modeling and dimensional modeling (facts, dimensions, SCD).
* Experience with ETL orchestration using Airflow.
* Knowledge of code versioning with Git and GitHub.
* Familiarity with Apache Kafka.
* Knowledge of OGG (Oracle GoldenGate) and CDC (Change Data Capture).
* Knowledge of AWS architecture (Redshift, S3, Lambda, IAM).
* Familiarity with data observability.

Responsibilities:
* Develop and maintain pipelines that transform raw data into valuable information for business areas.
* Actively participate in building dimensional models that provide accessible, consistent, and high-performance data for complex queries.
* Develop data integration processes into and out of the analytical environment via Apache Kafka.
* Assist in data governance processes, actively contributing to building data monitors and documenting tables.
* Critically analyze business requirements to build dimensional models that express metrics of real importance to the company.


