About the Role
Responsibilities:
• Lead the design, development, and deployment of robust and scalable data pipelines across
raw, curated, and consumer layers.
• Collaborate with cross-functional teams to gather data requirements and translate them into
technical solutions.
• Leverage Databricks (Apache Spark) and PySpark for large-scale data processing and
real-time analytics.
• Implement solutions using Microsoft Fabric, ensuring seamless integration, performance
optimization, and centralized governance.
• Design and manage ETL/ELT processes using Azure Data Factory (ADF), Synapse Analytics,
and Delta Lake on Azure Data Lake Storage (ADLS).
• Drive implementation of data quality checks, error handling, and monitoring for data
pipelines.
• Work with SQL-based and NoSQL-based systems to support diverse data ingestion and
transformation needs.
• Guide junior engineers through code reviews, mentoring, and enforcing development best
practices.
• Support data governance and compliance efforts, ensuring high data quality, security, and
lineage tracking.
• Create and maintain detailed technical documentation, data flow diagrams, and reusable
frameworks.
• Stay current with emerging data engineering tools and trends to continuously improve
infrastructure and processes.
Requirements:
• 8–10 years of experience in Data Engineering, with a focus on Azure Cloud, Databricks, and
Microsoft Fabric.
• Proficiency in PySpark, Spark SQL, and ADF for building enterprise-grade data solutions.
• Strong hands-on experience with SQL and with managing data in Delta Lake (Parquet)
format.
• Expertise in Power BI for developing insightful dashboards and supporting self-service
analytics.
• Solid understanding of data modelling, data warehousing, and ETL/ELT frameworks.
• Experience working with Azure Synapse Analytics, MS SQL Server, and other cloud-native
services.
• Familiarity with data governance, data lineage, and security best practices in the cloud.
• Demonstrated ability to lead engineering efforts, mentor team members, and drive delivery
in Agile environments.
• Relevant certifications such as DP-203, DP-600, or DP-700 are a strong plus.
• Strong problem-solving abilities, excellent communication skills, and a passion for building
high-quality data products.
About the Company
Cigres Technologies Private Limited is a technology consulting and services company that helps clients solve significant digital problems and enables radical digital transformation using multiple technologies, on premises or in the cloud. The company was founded with the goal of leveraging cutting-edge technology to deliver innovative solutions to clients across various industries.