Job Description
Senior Data Engineer
Job Summary
Seeking an experienced Senior Data Engineer to support enterprise-level data initiatives within the insurance and financial services sector. This role focuses on designing and maintaining scalable data pipelines using Azure Data Factory (ADF) and SQL Server Integration Services (SSIS), while ensuring data accuracy, performance, and reliability across hybrid environments. Experience with AWS and Databricks is highly desirable, as the organization continues to expand into multi-cloud and modern analytics platforms.
Key Responsibilities
- Design, build, and maintain end-to-end data pipelines and ETL processes using Azure Data Factory and SSIS.
- Develop and optimize complex SQL Server stored procedures, queries, and database schema designs for large-scale data processing.
- Integrate data from multiple sources (cloud and on-premises) to support reporting, analytics, and operational systems.
- Collaborate with analysts, product teams, and business stakeholders to define data requirements and deliver accurate, efficient solutions.
- Implement and monitor data quality, validation, and performance-tuning processes.
- Troubleshoot and resolve production data issues, ensuring reliability and minimal downtime.
- Support modernization initiatives by migrating or refactoring legacy workflows into cloud-based architectures.
- Maintain detailed documentation of data flows, pipelines, and integrations.
- Stay current on emerging data technologies and recommend enhancements to improve existing systems.
Qualifications / Requirements
- Bachelor’s degree in Computer Science, Information Systems, or a related field (or equivalent experience).
- Minimum of 7 years of hands-on experience in data engineering or related disciplines.
- Proven expertise with Azure Data Factory and SSIS for ETL/ELT pipeline development.
- Strong proficiency in SQL Server, including stored procedures, schema design, and performance tuning.
- Solid understanding of data modeling, relational databases, and integration best practices.
- Experience in highly regulated industries such as insurance, financial services, or healthcare is a plus.
- Excellent analytical, troubleshooting, and communication skills.
Preferred Qualifications
- Experience with AWS data services (e.g., Redshift, Glue, S3) and/or Databricks.
- Familiarity with Python for scripting and pipeline automation.
- Knowledge of Power BI or Tableau for reporting and visualization.
- Exposure to version control (Git) and CI/CD pipelines for data operations.
Disclaimer: This job description may not cover all duties, responsibilities, or aspects of the role, and it is subject to modification at the employer's discretion.