Senior Data Engineer – Netflix Careers

Netflix is re-imagining entertainment in 190 countries and on millions of devices, and our goal is to use data to optimize the customer experience. The Growth Data Engineering team is responsible for acquisition-related data that is essential to optimizing the signup experience and driving engagement. Our work empowers product managers and business leads to make decisions across areas such as payments, partnerships, signup flows, and messaging.

In this role, you’ll partner closely with software engineers and data scientists to power analytical data products, experimentation, and machine learning models. The ideal candidate will have a strong engineering background and will be able to tie initiatives to business impact.

What will you do?
  • Partner with engineering teams and internal data consumers on new projects or enhancements
  • Build highly scalable data pipelines and clean datasets around key business metrics
  • Enhance our data architecture to balance scale and performance
  • Build and improve internal tools, and collaborate with the larger data teams on ideas

Here are some examples of our work:

  • Analytic Data Products – Engineer storage layers using Druid, Hive, and Redshift to power interactive custom visualization applications
  • Data Pipelines – Create new pipelines or rewrite existing pipelines using Spark (Scala)
  • Data Quality and Anomaly Detection – Improve existing tools to measure data quality through metrics and automatic alerting
  • Data Modeling – Partner with data consumers to improve existing data models and model different facets of the business for analytic use cases
  • Machine Learning – In addition to feature engineering, build feedback loops in a Pub/Sub model between ML models and production applications
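As a rough illustration of the data quality and anomaly detection work described above, here is a minimal sketch of metric-based alerting using a z-score over historical values. This is a hypothetical example, not Netflix's actual tooling; the function name and the signup numbers are invented for illustration.

```python
from statistics import mean, stdev

def is_anomalous(history, today, threshold=3.0):
    """Flag today's metric if it deviates more than `threshold`
    standard deviations from the historical mean."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return today != mu
    return abs(today - mu) / sigma > threshold

# Daily signup counts (hypothetical data).
signups = [1020, 980, 1005, 995, 1010, 990, 1000]
print(is_anomalous(signups, 1003))  # False: within normal range
print(is_anomalous(signups, 1500))  # True: sudden spike triggers an alert
```

In practice a check like this would run per metric per day, with alerts routed to the owning team when the flag fires.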

Who are you?

  • A software engineering mindset: the ability to write elegant, maintainable code and follow engineering best practices
  • An analytical mindset to understand business needs and translate them into engineering solutions
  • Experience balancing complexity and simplicity in terms of schema design
  • Expertise building data pipelines (either real-time or batch) on large, complex datasets using Spark or other open-source frameworks
  • Expertise in one or more programming languages (ideally Scala or Python)
  • Strong SQL (Presto, Spark SQL, Hive) skills
  • Excellent communication skills to collaborate with cross-functional partners and independently drive projects and decisions
  • Knowledge of and familiarity with other distributed data stores (e.g., Elasticsearch, Druid)
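The SQL skills listed above largely transfer across engines. As a small illustration (using SQLite as a stand-in for Presto/Spark SQL/Hive, with a hypothetical signups table), here is the kind of aggregation query this role works with daily:

```python
import sqlite3

# In-memory database as a stand-in for a distributed SQL engine;
# the table and column names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE signups (country TEXT, plan TEXT, signup_count INTEGER)"
)
conn.executemany(
    "INSERT INTO signups VALUES (?, ?, ?)",
    [("US", "basic", 120), ("US", "premium", 80), ("BR", "basic", 60)],
)

# Aggregate signups per country, largest first.
rows = conn.execute(
    "SELECT country, SUM(signup_count) AS total "
    "FROM signups GROUP BY country ORDER BY total DESC"
).fetchall()
print(rows)  # [('US', 200), ('BR', 60)]
```

The same GROUP BY / ORDER BY pattern applies unchanged in Presto, Spark SQL, and Hive.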


Source: Netflix