Hybrid Data Engineer Intern

Posted last week


About the role

  • Design and implement a modular ETL pipeline on Databricks and enable parameterized, YAML-driven deployments using Databricks Bundles.
  • Implement Spark performance optimizations and CI/CD to promote pipelines across environments.
  • Build a programmatic deployment and management layer for Databricks using the Databricks REST API to create/configure clusters, jobs, and notebooks dynamically and securely.
  • Architect and implement a secure, scalable file-ingestion API that provides validation, auto-renaming, manifest generation, and reliable transfer to cloud storage (with full traceability).
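To give a flavor of the first bullet, here is a minimal `databricks.yml` sketch of a parameterized, YAML-driven deployment with Databricks Asset Bundles. All names, paths, and the workspace URL are placeholders, not part of the posting:

```yaml
# Hypothetical Databricks Asset Bundle config; every name/path/URL below
# is an illustrative placeholder.
bundle:
  name: etl_pipeline

variables:
  catalog:
    description: Target catalog for the pipeline's output tables
    default: dev_catalog

resources:
  jobs:
    etl_job:
      name: etl-job-${bundle.target}
      tasks:
        - task_key: ingest
          notebook_task:
            notebook_path: ../src/ingest

targets:
  dev:
    default: true
  prod:
    workspace:
      host: https://example.cloud.databricks.com
```

Promoting the pipeline across environments then reduces to `databricks bundle deploy -t dev` or `-t prod`, which is the kind of CI/CD flow the second bullet alludes to.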
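The "programmatic deployment layer" bullet maps to the Databricks REST API (Jobs API 2.1). Below is a minimal sketch of creating a job with a new cluster; the host, token, node type, and Spark version are assumptions chosen for illustration:

```python
# Hypothetical sketch: create a Databricks job via the Jobs API 2.1.
# Host, token, node type, and Spark version are placeholder assumptions.
import json
import urllib.request


def build_job_payload(name, notebook_path, node_type="Standard_DS3_v2"):
    """Assemble a Jobs API 2.1 create-job payload with a fresh job cluster."""
    return {
        "name": name,
        "tasks": [{
            "task_key": "main",
            "notebook_task": {"notebook_path": notebook_path},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",
                "node_type_id": node_type,
                "num_workers": 2,
            },
        }],
    }


def create_job(host, token, payload):
    """POST the payload to /api/2.1/jobs/create and return the new job id."""
    req = urllib.request.Request(
        f"{host}/api/2.1/jobs/create",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["job_id"]
```

Keeping payload construction separate from the HTTP call makes the layer easy to unit-test without touching a live workspace.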
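The file-ingestion bullet (validation, auto-renaming, manifest generation) can be sketched as below. The allowed extensions, renaming scheme, and manifest fields are illustrative assumptions, not the employer's actual design:

```python
# Hypothetical ingestion helpers: validate a file, derive a collision-free
# target name, and emit a manifest entry for traceability. The naming
# scheme and manifest fields are assumptions for illustration only.
import hashlib
import json
import os
import time

ALLOWED_EXTENSIONS = {".csv", ".parquet", ".json"}


def validate(path):
    """Reject unknown extensions and empty files before transfer."""
    _, ext = os.path.splitext(path)
    if ext.lower() not in ALLOWED_EXTENSIONS:
        raise ValueError(f"unsupported file type: {ext}")
    if os.path.getsize(path) == 0:
        raise ValueError("empty file")


def manifest_entry(path):
    """Hash the content, auto-rename with a hash suffix, record provenance."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    base, ext = os.path.splitext(os.path.basename(path))
    return {
        "source": path,
        "target": f"{base}_{digest[:12]}{ext}",  # hash suffix avoids collisions
        "sha256": digest,
        "ingested_at": int(time.time()),
    }
```

Writing the manifest entries to cloud storage alongside the renamed files is one simple way to get the "full traceability" the role calls for.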

Requirements

  • Excellent academic record in Computer Science, Engineering, or a related field.
  • Problem-solving is your jam, and you're all about critical thinking.
  • You're not afraid to roll up your sleeves and get stuff done, even when working independently with minimal supervision.
  • You can juggle multiple projects like a pro.
  • Challenges don't scare you; in fact, you love diving into them.
  • You can communicate like a champ, whether it's writing reports or presenting in a room full of people.
  • You're curious, and you love picking up new skills & technologies.
  • You're a team player, always up for sharing your ideas and best practices.

Benefits

  • Great company culture.
  • "Learn and Share" sessions.
  • You'll get support from your mentors.
  • Social events and after-work get-togethers.
  • A flexible and fun work environment.
  • Casual dress code.
  • You'll work with a cool team!
  • We respect your ideas, and we're all about trying new things.
  • Work/life balance.

Job title

Data Engineer Intern

Job type

Internship

Experience level

Entry level

Salary

Not specified

Degree requirement

Bachelor's Degree

Tech skills

Databricks, Spark, YAML, CI/CD, REST APIs

Location requirements

Hybrid
