Data Engineer developing and maintaining data pipelines and applications at EvidenceCare. Collaborating across teams to generate actionable insights from healthcare data for better decision-making.
Responsibilities
Design and build data pipelines that are reliable, performant, and maintainable
Develop Python applications for data calculation, transformation, and automated report generation
Maintain and enhance existing legacy Python applications while we modernize our platform—this includes bug fixes, performance improvements, and incremental refactoring
Work with SQL databases (PostgreSQL, Snowflake) to optimize queries, design schemas, and ensure data integrity
Contribute to the design and maintenance of our data warehouse architecture
Build and maintain APIs and services that expose data products to internal and external consumers
Contribute to data quality frameworks, monitoring, and alerting
Participate in architecture decisions and help evolve our data platform
Collaborate with Product and Clinical teams to translate business requirements into technical solutions
Requirements
3-5 years of experience in data engineering or a similar role
Strong proficiency in Python, including experience building production applications (not just scripts)
Comfortable working with legacy codebases—you can navigate unfamiliar code, understand its intent, and improve it incrementally without breaking things
Solid SQL skills with experience in PostgreSQL, data warehouse platforms, query optimization, and schema design
Experience building and maintaining ETL pipelines that process millions of rows reliably
Experience with cloud data platforms, preferably AWS and Snowflake
Familiarity with data orchestration tools (Airflow, Dagster, or similar)
Understanding of software engineering best practices: version control, testing, code review, CI/CD
Ability to communicate technical concepts clearly to both technical and non-technical audiences
Nice to Have
Experience in healthcare or another compliance-sensitive industry
Familiarity with reporting frameworks or BI tools
Exposure to containerization (Docker, Kubernetes)
Experience with event-driven architectures or message queues
Benefits
Competitive salary + stock option opportunities
Unlimited PTO
Company-provided laptop
Medical, Dental, Vision, & Life Insurance Benefit Plans
Company 401k plan
Frequent company and team outings to celebrate wins and spend time together
Professional development opportunities through conferences and online courses