
**Responsibilities and Assignments**

- Collect, process, and manage large datasets from diverse sources, ensuring their quality and integrity.
- Design secure, reliable, available, and scalable data architecture solutions.
- Plan and implement data repository models, such as data lakes, data warehouses, and relational and non-relational databases, to support the company's transactional and analytical needs.
- Build and maintain efficient, scalable, and secure data pipelines using Big Data tools and technologies such as Hadoop, Spark, and Hive.
- Collaborate with data analysis and software development teams to deliver accurate, up-to-date, and reliable data.
- Develop and maintain technical documentation, including architecture and data flow diagrams.
- Structure and manage cloud-based data environments.
- Coordinate tickets, incidents, and problems, and report on performance indicators.

**Requirements and Qualifications**

- Bachelor's degree in Computer Science, Computer Engineering, Information Systems, or a related field.
- Postgraduate studies in a related field, in progress.
- 3 to 5 years of experience developing Big Data solutions, including data storage, processing, and analysis.
- Knowledge of SQL Server, Analysis Services, Reporting Services, and Integration Services is desirable.
- At least 3 years of experience with the Azure data stack (Data Factory, Databricks, Synapse, Purview) and Microsoft Power BI.
- Experience with programming languages such as Python, Java, or Scala.
- Knowledge of relational and non-relational databases, such as MySQL, PostgreSQL, and MongoDB.
- Knowledge of data streaming platforms.
- Advanced English proficiency is desirable.
