




Description:

* Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.
* Experience with SQL and query tuning.
* Experience with Oracle (OCI), Jenkins, and Git.
* Experience developing in Python, with a focus on data manipulation.
* Experience with PySpark.
* Knowledge of KPIs, reporting, and dashboards.
* Experience building and optimizing ETL processes.
* Knowledge of dimensional data modeling.
* Familiarity with data monitoring.
* Knowledge of Airflow.
* Familiarity with AWS services (S3, Redshift).
* Design and maintain high-performance data pipeline architectures.
* Build scalable data structures that meet both functional and non-functional business requirements.
* Collaborate with executive stakeholders from Product and other departments to deliver a data infrastructure that supports their data and information needs.
* Document departmental data procedures and data flows.
* Implement systems and routines to monitor and ensure the quality and consistency of our databases.
* Serve as Quality Assurance for data infrastructure deliverables developed by other team members.


