




Job Summary:
You will develop and maintain cloud-based data pipelines, work on data transformation and data quality, support data modeling, and ensure best practices for performance and security.

Key Highlights:
1. Develop and maintain data pipelines in cloud and on-premises environments.
2. Work with data transformation and data quality tools.
3. Collaborate with analysts, engineers, and architects to deliver solutions.

Description:

What You Need
* Experience with cloud data services (AWS, GCP, Azure, etc.);
* Experience with data integration and transformation tools (e.g., Qlik Replicate, Qlik Compose, Talend, IICS, IDMC);
* Intermediate SQL knowledge and Python skills oriented toward data engineering;
* Experience with platforms such as Databricks or Snowflake;
* Familiarity with version control (Git), CI/CD, and automation;
* Conceptual understanding of data architecture and data governance, applying best practices in your projects.

Nice-to-Have (not required; you're still very welcome without them)
* Certifications in AWS, GCP, Azure, Qlik, Informatica, Databricks, or Snowflake;
* Experience with pipeline observability and monitoring;
* Basic knowledge of data security and compliance.

Your Day-to-Day Will Include
* Developing and maintaining data pipelines in cloud and/or on-premises environments;
* Working with data transformation and data quality tools;
* Supporting data modeling for the Data Lake and Data Warehouse;
* Ensuring best practices for data performance, security, and versioning;
* Collaborating with data analysts, engineers, and architects to deliver end-to-end solutions.


