




**Role in the company:**

The Senior Data Engineer at Dataside has the autonomy to deliver high-complexity solutions in data engineering projects and acts as the technical owner for contracts, always supported by a Technical Leader, Senior Engineer, or Data Architect. The role involves guiding clients on decisions about pipelines, services, and cost and performance optimization, as well as identifying opportunities for improvement in existing architectures and communicating them to more senior technical leads. It also includes actively contributing to the development of less experienced colleagues by performing technical reviews and mentoring Junior engineers or team members in growth phases.

The professional is expected to focus on delivering real business value, working with empathy, active listening, and strategic vision. It is essential to delight clients through well-structured deliveries, clear communication, and a collaborative mindset, working side by side to resolve the project's key challenges with a focus on concrete results.

**Responsibilities:**

* Design, build, and maintain data pipelines using PySpark, SQL, and AWS services.
* Manage data ingestion and transformation efficiently, emphasizing reusability and best practices.
* Work on projects involving Glue, EMR, Redshift, Athena, Lambda, and Step Functions.
* Optimize the data lifecycle in S3 and ensure query performance.
* Generate value through data organization and governance.
* Work in critical, high-volume environments.
* Lead client meetings, explain technical decisions, and propose scalable solutions.
* Anticipate technical risks and suggest solutions.
* Mentor Junior professionals and actively contribute to disseminating best practices across the team.

**Requirements:**

* Practical experience with medium- and high-complexity data projects.
* Experience building pipelines with Spark/PySpark and advanced SQL.
* Proficiency in the AWS analytics stack: S3, Glue, EMR, Athena, Redshift, Lambda, Step Functions.
* Experience with CI/CD, Git version control, and automated deployment.
* Strong communication skills with both technical and non-technical stakeholders.
* Clear understanding of data flow and cloud architecture.
* Autonomy in execution and accountability for deliverables.

**Hard Skills:**

* Advanced PySpark.
* Advanced SQL.
* AWS analytics stack: S3, Glue, EMR, Athena, Redshift, Lambda, Step Functions.
* Git + CI/CD.
* Amazon Redshift: table creation and maintenance, distribution styles (KEY, EVEN, ALL), sort keys, VACUUM and statistics maintenance, materialized views, COPY/UNLOAD, WLM.

**Soft Skills:**

* Consultative mindset.
* Organization and autonomy.
* Clarity and structure in communication.
* Ability to anticipate problems and risks.
* Team spirit and results orientation.
* Ability to guide and influence less experienced colleagues.
* Intermediate English (preferred).

**Desired Certifications:**

* AWS Data Engineer Associate.
* AWS Solutions Architect Associate.

**Differentiators:**

* Knowledge of or experience with Terraform, DBT, DuckDB, Docker.
* Databricks Certified Data Engineer Associate.

**Our benefits:**

* **Health insurance allowance**: monthly financial support to help cover your health plan.
* **Wellhub**, to keep your body and mind active, your way.
* **Fully company-paid online therapy**, because mental health matters.
* **Online nutrition counseling**, up to two sessions per month to support your dietary health.
* **Life insurance**, with a policy value of R$125,000, ensuring greater security for you and your family.
* **Birthday day off**, because your special day deserves to be celebrated.
* **Paid time off**, so you can recharge your energy.
* **Internal gamification**, turning achievements into rewards and recognition.
* **Educational partnerships with universities**, including FIAP, Anhanguera, and Instituto Infnet, to support your growth and learning.
* **Technical certification bonus**, recognizing and rewarding your effort to learn.
***We value every voice and every person, because we know diversity makes us more innovative and stronger.***


