SENAI/PE – CÓD. 525 – DEVELOPER II - (AREA: DATA ENGINEERING) - (RMR)
R$7,295/month
Indeed
Full-time
Hybrid
Experience: minimum 6 months in the field
Education: completed bachelor's degree (exact sciences)
Av. Cruz Cabugá, 767 - Santo Amaro, Recife - PE, 50040-000, Brazil
Description

**SENAI/PE – CÓD. 525 – DEVELOPER II (AREA: DATA ENGINEERING) (RMR)**

**Before registering, carefully read the information available in the Selection Process Rules – SENAI.**

* **Registration period:** 10/27/2025 to 11/05/2025

**Position:** Developer II

**Job Code:** 525

**Work Location:** Senai Santo Amaro – Hybrid Format

**Opportunity Details**

* **Number of vacancies:** 1 (immediate hiring, indefinite term).
* **Talent pool:** Valid for 12 months, extendable by another 12 months at the institution's discretion. During the talent pool validity period, candidates may be called for positions in other units within the same region, with possible changes to benefits, schedules, or working hours, as needed.

**What do we expect from you?**

**Desired Profile:**

* Excellent oral and written communication skills;
* Familiarity with and interest in collaborative work within multidisciplinary teams;
* Strong problem-solving ability and an analytical mindset;
* Strong ability to manage multiple concurrent priorities, taking responsibility and assuming risks.

**Technical Skills:**

* Advanced knowledge of programming languages such as Python;
* Familiarity with software development tools and technologies such as Git, Docker, and AWS;
* Knowledge of agile software development methodologies such as Scrum or Kanban;
* Experience in data analysis and statistical modeling;
* Familiarity with emerging technologies such as artificial intelligence and machine learning.

**Mandatory academic qualification:** Completed bachelor's degree in the field of exact sciences.

**Mandatory requirements:**

1. Data Engineering
   * Expertise in data modeling for DW, Data Lake, Star Schema, Iceberg, etc.;
   * Experience with data observability (quality);
   * Ability to define, clarify, and size the architectural characteristics of solutions.
2. Development
   * Experience with CI/CD for data pipelines;
   * Proficiency in data manipulation with Python and PySpark (a brief illustrative sketch follows the requirements below);
   * Familiarity with code optimization;
   * Experience with infrastructure as code (Terraform) and GitOps;
   * Container orchestration with Kubernetes;
   * Experience with unit testing.
3. Cloud and Data Applications
   * Strong familiarity with cloud computing concepts and AWS services;
   * Familiarity with AWS services for data ingestion, processing, governance, and analytics;
   * Proficiency in building and optimizing scalable data pipelines with AWS technologies;
   * Solid understanding of Apache Iceberg.

**Desirable requirements:**

1. At least one specialization or certification in AWS cloud, such as AWS Certified Cloud Practitioner.
2. At least one specialization or certification in the field of data engineering.
3. Proficiency in Microsoft Power BI.
4. Knowledge and experience with gRPC APIs and a strong understanding of object-oriented programming.
5. Experience developing solutions with RAG and using MCP.
6. Integration and end-to-end (E2E) testing.
7. Knowledge and experience with Totvs and Microsoft Dynamics CRM.
8. Relational databases: Microsoft SQL Server, MySQL, and Postgres.
9. Proficiency in Microsoft SQL Server Management Studio.
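The sketch below is a minimal illustration of the kind of PySpark data manipulation and optimization named in the mandatory requirements (a broadcast join plus a partitioned Parquet write). It is not part of the official posting: the bucket paths, table names, and column names are all invented for illustration.

```python
# Hypothetical PySpark sketch: join raw orders against a small store
# dimension and write daily revenue to a processed zone. All paths and
# column names below are invented.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("daily-revenue-example").getOrCreate()

orders = spark.read.parquet("s3://example-bucket/raw/orders/")
stores = spark.read.parquet("s3://example-bucket/raw/stores/")

daily_revenue = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    # Broadcasting the small dimension table avoids shuffling the large side.
    .join(broadcast(stores), "store_id")
    .groupBy("order_date", "store_name")
    .agg(F.sum("amount").alias("revenue"))
)

# Repartition by the partition column so output files line up with the
# processed zone's partitioning scheme.
(
    daily_revenue.repartition("order_date")
    .write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-bucket/processed/daily_revenue/")
)
```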
**Main Responsibilities:**

* Perform activities related to the prospecting, acquisition, management, development, and dissemination of strategic and operational projects;
* Participate in requirements analysis meetings;
* Execute strategic and operational marketing actions such as visits, presentations, and participation in events, in addition to writing articles;
* Identify user needs in order to develop applications according to demand;
* Monitor and guide the implementation of new software and programs, ensuring user satisfaction within application standards, cost, and timeline;
* Prepare user manuals for developed applications to facilitate their use;
* Research and study software and new tools to maintain and improve the quality of the services the department provides;
* Perform maintenance and updates on implementations, identifying technical and operational issues;
* Develop and deliver training sessions for users on accessing and using computing resources;
* Keep management informed and up to date by preparing reports on identified problems and the solutions adopted;
* Solve problems during new software implementations with a strategic view of the organization;
* Select and define an executable, logical standard for application development to ensure technical consistency;
* Understand the interactions and dependencies between the components involved in software development, identifying improvement opportunities;
* Document and update developed materials/artifacts, following development best practices;
* Provide excellent service to internal and external clients.

**Mandatory Experience:** Minimum of 6 months of proven experience in the field.

**Working Hours:** 08:00 AM to 5:00 PM (with a 1-hour break)

**Flexibility and availability:** Required for travel.

**What we offer:**

* **Compensation:** BRL 7,295.03
* **Benefits:** Medical and dental insurance; meal allowance or food voucher; Day Off; childcare allowance; Corporate University; payroll loan; private pension plan; career development plan.

**Selection Process: Eliminatory Stages**

All communication will be conducted exclusively via e-mail. Please monitor your inbox closely and do not miss any deadlines!

**1st Stage – Registration:** 10/27/2025 to 11/05/2025

**2nd Stage – Objective Assessment**

Program Content:

1. Data Fundamentals and Architecture
   * ETL, ELT, Data Lake, Data Warehouse, and Data Lakehouse;
   * Data modeling for analytical environments;
   * Architectural characteristics: ACID, schema enforcement, partitioning, columnar formats (Parquet, ORC);
   * Strategies for optimizing analytical queries (Athena, S3) and organizing data for ML and BI.
2. AWS Cloud Services
   * Amazon S3 (data layers/zones: RAW, Processed, Analytics, etc.);
   * AWS Glue: ETL jobs, Data Catalog, Data Quality, Detect PII, crawlers, VPC endpoints, integration with Apache Iceberg;
   * AWS Lake Formation: governance, granular access control, LF-Tags, resource-based policies;
   * Amazon Redshift and Redshift Spectrum;
   * Amazon Athena;
   * AWS DMS (Database Migration Service – CDC and Full Load);
   * AWS Step Functions and MWAA (Apache Airflow);
   * Amazon SageMaker Unified Studio and AI;
   * Amazon QuickSight;
   * Services for semantic and conversational search: Amazon Kendra, Amazon Lex, Bedrock, Lambda, Iceberg.
3. Data Engineering, Development, and Integration Processes
   * Development of Lambda functions and AWS Glue jobs in Python;
   * Use of the AWS SDK;
   * Frameworks such as boto3, pandas, and pyarrow;
   * Advanced SQL: moving averages, window functions, partitioning;
   * Creation of REST APIs using frameworks such as Flask or FastAPI;
   * Kappa and Lambda architectures;
   * Data integration via CDC and REST APIs; ingestion of structured and unstructured data;
   * Application containerization and orchestration: Docker and Kubernetes;
   * Automated testing (unit tests);
   * Critical pipelines: SLA, RTO, RPO, cross-region replication, automated failover;
   * Optimization of PySpark scripts: broadcast, repartitioning, Catalyst, Tungsten;
   * Code versioning with Git;
   * Technical documentation (use and creation), including API docs built with Swagger;
   * Definition of architectural characteristics and the creation/evaluation of solutions that comply with them;
   * Orchestration of ETL/ELT pipelines;
   * Analytical consumption (transactionality, schema evolution, partition spec, compaction/merge, integration);
   * DataOps.
4. Governance and Security
   * Management of technical and business metadata;
   * Granular access control with Lake Formation;
   * Data lineage and data quality;
   * Use of VPN and VPC endpoints for secure communication.
5. Use Cases and Specific Technologies
   * CI/CD;
   * Infrastructure as Code.
6. Requirements Analysis and Coordination with Leadership and Teams
   * Work in multidisciplinary teams;
   * Self-management;
   * Proactive communication.

**3rd Stage – Resume Review:** For the resume review stage, a form will be sent to the candidate for completion. Only the information entered directly into the candidate's resume-deepening form will be considered. Responses in questionnaires, resumes attached during registration, and resumes sent by e-mail or through any other communication channel will not be considered.

**4th Stage – Behavioral Interview**

**5th Stage – Technical Interview**

**6th Stage – Document Verification**

**7th Stage – Finalization and Result Announcement**

**Tie-breaking criteria (in case of a tie):**

1. Candidates with disabilities (PwD);
2. Higher score in the manager interview;
3. Longer work experience in the field;
4. Better performance across the selection stages.

**Why work with us?**

We believe in building and strengthening industry in Pernambuco, working for the advancement of basic and professional education, the promotion of worker health, and progress in innovation and technology. Here, you will be welcomed into a collaborative, dynamic environment that supports your professional growth. We also value inclusion and diversity: all our positions are open to people with disabilities (PwD) and to people rehabilitated by INSS.

**Want to join our team? Register now and take the next step in your career!**
