




**Position:** Mid-Level Data Engineer
**Location:** Remote or Hybrid (Solo Network offices)
**Employment model:** Cooperative
**Area:** Data, AI and Modernization

**About Solo Network**

Solo Network is one of the leading Microsoft integrators in Latin America. We are accelerating the expansion of our Data and AI practice and are seeking technical talent to lead this journey with us. We work on high-impact projects, always focused on value creation, technical excellence, and close client relationships.

**About the Position**

We are looking for Mid-Level and Senior Data Engineers to work on consulting projects, focusing on building scalable data platforms and pipelines, designing data ingestion and processing architectures, and directly supporting data democratization for strategic clients. We seek individuals with strong technical skills, hands-on experience with modern tools, and the ability to work autonomously in complex environments.

**Responsibilities**

* Develop data pipelines for ingestion, transformation, and delivery at scale;
* Work with modern architectures such as Lakehouse, Data Mesh, and Medallion;
* Ensure data quality, traceability, versioning, and secure reprocessing;
* Support the development of monitorable environments with DataOps practices;
* Implement solutions focused on performance, governance, and security;
* Collaborate with architects, data scientists, and analysts to enable efficient integrations;
* Participate in technical discussions with clients and recommend best practices.

**Required Qualifications**

* Develop and maintain ETL/ELT data pipelines for collecting, transforming, and loading data across systems and platforms;
* Implement and manage data architectures in cloud and on-premises environments, with a focus on Azure and Google Cloud;
* Monitor data pipeline performance and optimize processes for improved efficiency;
* Ensure data quality and integrity through validation and monitoring strategies;
* Implement data storage solutions (data lakes and data warehouses) to facilitate data analysis;
* Collaborate with data scientists and analysts to ensure data needs align with business requirements;
* Implement data governance and support security policy implementation;
* Automate data processes using scripts in Python, SQL, and other automation tools;
* Produce detailed documentation of developed data solutions to facilitate maintenance and scalability.

**Technical Skills**

* Advanced SQL and Python for data manipulation and analysis;
* Relational and non-relational databases (SQL Server, PostgreSQL, MongoDB, etc.);
* Data modeling and ETL/ELT;
* Data pipelines (Azure Data Factory, Synapse, Databricks, Fabric, etc.);
* Data Lake and Data Warehouse;
* Code versioning (Git);
* Data process monitoring and automation tools;
* DataOps and Data Governance concepts.

**Desirable**

* Experience with tools such as dbt, Airflow, Dagster;
* Experience with data platforms such as Databricks, Snowflake, BigQuery;
* Microsoft certifications (DP-203, DP-700, etc.);
* Experience with AI, MLOps, or advanced Analytics projects.

**What We Offer**

* Challenging projects with major companies across different industries;
* Experienced, collaborative technical team with the freedom to propose solutions;
* Opportunity to work with leading Microsoft tools and emerging technologies;
* An environment that values continuous learning and technical autonomy;
* Access to certification programs, training, and technical communities.

If you enjoy hands-on work, are passionate about data, and want to join a team that values excellence, come talk to us.


