




Description:

* Prior experience as a Data Engineer.
* Advanced Python programming skills.
* In-depth knowledge of SQL and experience with a range of relational and non-relational database systems.
* Knowledge of ETL/ELT tools and concepts, especially Azure Data Factory and Databricks.
* Familiarity with Data Warehouse, Data Lake, or Data Lakehouse concepts.
* Cloud computing knowledge (AWS, Azure, or Google Cloud) is a plus.
* Strong analytical skills and the ability to solve complex problems.
* Completed undergraduate degree.

What are the differentiators?

* Knowledge of pipeline orchestration tools (ADF, Airflow, etc.).
* Knowledge of agile methodologies (Scrum, Kanban).

Responsibilities:

* Design and maintain data pipelines to collect, transform, and load information from diverse sources using tools such as Azure Data Factory and Databricks (a minimal sketch of this kind of ETL step follows this list).
* Manage relational and non-relational databases, applying efficient data modeling to ensure reliable storage and availability.
* Optimize the performance of data solutions, continuously improving speed and efficiency.
* Implement and monitor routines that ensure data quality, integrity, and security.
* Collaborate with analysts, data scientists, and other engineers to deliver robust, business-aligned solutions.
* Monitor data platform performance, identify issues, and act promptly to resolve them.
* Automate repetitive processes by developing scripts and tools.
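Purely as an illustration of the pipeline work described above, the following is a minimal PySpark sketch of an extract-transform-load step of the kind that could run on Databricks. The file paths, column names, and application name are hypothetical, not part of this role's actual codebase.

```python
# Minimal ETL sketch, assuming PySpark is available (as on Databricks).
# Source/sink paths and column names below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl_sketch").getOrCreate()

# Extract: read raw CSV data from a hypothetical landing path.
raw = spark.read.option("header", True).csv("/data/raw/orders.csv")

# Transform: cast types, drop incomplete rows, and derive daily totals.
orders = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "amount"])
)
daily_totals = (
    orders.groupBy("order_date")
          .agg(F.sum("amount").alias("total_amount"))
)

# Load: write the result as Parquet for downstream consumers.
daily_totals.write.mode("overwrite").parquet("/data/curated/daily_order_totals")

spark.stop()
```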


