Data Platform Engineer

  • Tech and Engineering
  • Gurgaon, India

Job description

Zomato as an organisation has grown and scaled multifold over the last few years. We are committed to bringing the best food to everyone, no matter who they are or what they can afford.

To know more about what’s cooking at Zomato, here is our Annual Report FY’19, and this is what life at Zomato looks like. Creating and re-inventing has been a key practice at Zomato, and at this point we need people who can help us keep pace with the dynamic ecosystem we are all a part of. Check out our blog for all the latest updates.
At Zomato, data is central to decision making across Product Development, Engineering and Business. We capture a huge amount of data from across our products, ranging from ad performance and application performance to user journeys and beyond. We cover the widest possible gamut of restaurant and food services in the world, from Search & Discovery, Online Ordering and Table Reservations on the consumer side to a multitude of B2B products. All of this brings in a unique set of data which feeds back into our current and future products.

The Data Infrastructure team builds distributed components, systems and tools that enable capturing, processing and analysing this data to distil it into usable insights. We work with open-source technologies like Apache Kafka, Hadoop, Presto and Spark, and also write some of our own.

Here's what you'll do day-to-day:

  • Build data architectures and data models in support of data warehouse, big data and analytics capabilities, and business requirements

  • Evolve the data pipeline and architecture to allow (near) real-time access to data (see the sketch after this list)

  • Build tools and systems to make data access easier and friendlier to everyone within the organisation

  • Improve system efficiency to bring down cost per unit of data stored

  • Work with various teams to understand their data needs and provide solutions that scale

  • Improve data consistency and quality by defining and enforcing better guidelines 
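
For a flavour of the pipeline work described above, here is a minimal sketch, assuming a PySpark Structured Streaming job that reads a hypothetical "orders" topic from Kafka and writes date-partitioned Parquet for downstream engines like Presto or Hive. The broker address, topic, schema and paths are all illustrative assumptions, not a description of Zomato's actual stack.

    # Minimal sketch (assumed names throughout); requires the spark-sql-kafka connector on the classpath.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("orders-near-realtime").getOrCreate()

    # Hypothetical schema for the JSON payload on the topic.
    order_schema = StructType([
        StructField("order_id", StringType()),
        StructField("city", StringType()),
        StructField("amount", DoubleType()),
        StructField("ordered_at", TimestampType()),
    ])

    # Read the raw stream from Kafka (broker and topic are assumptions).
    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker:9092")
        .option("subscribe", "orders")
        .load()
    )

    # Kafka delivers bytes; parse the JSON value into typed columns and add a date column for partitioning.
    orders = (
        raw.select(F.from_json(F.col("value").cast("string"), order_schema).alias("o"))
        .select("o.*")
        .withColumn("dt", F.to_date("ordered_at"))
    )

    # Write date-partitioned Parquet with a checkpoint, triggering roughly every minute for near real-time access.
    query = (
        orders.writeStream.format("parquet")
        .option("path", "s3://example-bucket/warehouse/orders/")
        .option("checkpointLocation", "s3://example-bucket/checkpoints/orders/")
        .partitionBy("dt")
        .trigger(processingTime="1 minute")
        .start()
    )

    query.awaitTermination()
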

Requirements


You must have: 
  • 3+ years of professional experience working with Big Data technologies
  • Bachelor’s degree or higher in Computer Science or a related field, or equivalent experience

Who fits the bill?

  • Proficiency with technologies like Kafka, ZooKeeper, Hadoop, Hive, YARN, Presto, Spark, Flink, Parquet and ORC

  • Experience with at least one of these messaging systems (Kafka, Kinesis) and serialization formats (JSON, Avro, Protobuf, etc.)

  • Experience with AWS or other cloud technologies

  • Experience with designing, building & deploying self-service high-volume data pipelines

  • Experience with designing and building dimensional data models to improve accessibility, efficiency, and quality of data

  • Good understanding of SQL engines and the ability to perform advanced performance tuning

  • Good knowledge of and programming experience with scripting languages such as Perl or Unix shell

  • Experience with large operational data stores, data warehouses and business intelligence databases

  • You enjoy working in a fast, agile and nimble environment with frequent changes

  • You have excellent problem-solving and critical-thinking skills