




Job Summary:
Develop and maintain software solutions and data pipelines in a cloud environment, supporting the development team and collaborating with business stakeholders.

Key Highlights:
1. Software Development and Cloud Computing on AWS
2. Creation and Maintenance of Data Pipelines
3. Data Requirements Analysis and Elicitation

Required Skills and Technologies:
* Software development
* Cloud computing on the AWS platform
* ETL concepts
* SQL
* Spark
* Python and/or PySpark programming
* Open-source projects
* SRE (concepts)
* Amazon Redshift
* Amazon S3
* AWS Glue (including crawlers)
* Kubernetes
* Airflow

Responsibilities:
* Create, maintain, and optimize data pipelines across the layers of the data lake
* Prepare and model databases for dashboards, reports, algorithms, and/or refined datasets
* Ingest databases into the data lake
* Analyze and elicit data requirements from end users (business units), identifying the databases and variables needed
* Support the development team with daily tasks and new requests
* Document daily activity updates in JIRA
* Document metadata and business rules using the internal standard document
* Conduct data quality analysis and ensure analytics standards are applied
* Keep developed code under version control
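As a minimal illustration of the pipeline work described above, the sketch below shows a raw-to-refined transformation step of the kind a data-lake layer might apply. It is a plain-Python assumption for readability; in the role this would typically be written with Spark/PySpark. The field names (order_id, amount, ts) are hypothetical, not taken from the posting.

```python
from datetime import datetime

def refine(raw_records):
    """Promote raw records to a refined layer: deduplicate on order_id,
    cast amount to float, and parse ISO-8601 timestamps."""
    seen = set()
    refined = []
    for rec in raw_records:
        key = rec.get("order_id")
        if key is None or key in seen:
            continue  # drop keyless records and duplicates
        seen.add(key)
        refined.append({
            "order_id": key,
            "amount": float(rec["amount"]),
            "ts": datetime.fromisoformat(rec["ts"]),
        })
    return refined

if __name__ == "__main__":
    raw = [
        {"order_id": 1, "amount": "9.50", "ts": "2024-01-02T10:00:00"},
        {"order_id": 1, "amount": "9.50", "ts": "2024-01-02T10:00:00"},  # duplicate
        {"order_id": 2, "amount": "12.00", "ts": "2024-01-03T11:30:00"},
    ]
    print(len(refine(raw)))  # prints 2
```

The same dedupe/cast logic maps directly onto PySpark (`dropDuplicates`, `withColumn` with a cast) when the data volume requires distributed processing.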


