Data Engineer

Location: Piscataway
Discipline: Data Engineering
Salary: $110,000 to $140,000
Contact email: rgarcia@brightmetro.com
Job ref: 438914
Published: 13 days ago
About the Company:
Our client is a global Fortune 500 company in the consumer goods sector with over 30,000 employees. 

About the Role:
Initially 100% remote, eventually transitioning to three days per week in the Piscataway office.

Data Engineers focus on expanding and optimizing our data, data pipeline architecture, data flow, and data collection for cross-functional teams. The Data Engineer will be an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. You will support our software developers, data analysts, and data scientists on data initiatives and will ensure that the data delivery architecture is consistent and efficient across ongoing projects.

Responsibilities:
  • Build and maintain optimal data pipeline architecture.
  • Assemble large, sophisticated data sets that meet functional and non-functional business requirements.
  • Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Build the infrastructure required for efficient extraction, transformation, and loading of data from a wide variety of data sources.
  • Assist internal teams with data-related technical issues and support their data infrastructure needs.
  • Build data tools for analytics and data science team members that help them build and optimize our product into an innovative industry leader.
Requirements: 

  • Bachelor’s or higher degree in Computer Science, Statistics, Informatics, Information Systems, or a related field, or equivalent work experience.
  • 5+ years of experience in a Data Engineer role.
  • Strong SQL knowledge, including query authoring and experience working with relational databases, as well as working familiarity with a variety of databases.
  • Experience building and optimizing ‘big data’ pipelines, architectures, and data sets.
Candidates should also have experience using the following software and tools:
  • Experience with relational SQL databases and NoSQL databases such as MongoDB and Neo4j.
  • Experience with cloud services: GCP, AWS, etc.
  • Experience with object-oriented or functional scripting languages: Python, Java, etc.
  • Experience with big data tools: Hadoop, Spark, Kafka, etc.
  • Experience with data flow, data pipeline, and workflow management tools: Cloud Composer, Airflow, Luigi, etc. (a brief illustrative sketch follows below).
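
For a concrete sense of the workflow-management tools named in the last bullet, below is a minimal, illustrative sketch of a daily ETL job written as an Airflow DAG. It assumes Airflow 2.4+ (for the "schedule" argument), and the extract/transform/load helpers are hypothetical placeholders, not part of any actual codebase for this role.

    # Minimal sketch of a daily ETL pipeline as an Airflow DAG.
    # Assumes Airflow 2.4+; the extract, transform, and load steps
    # below are hypothetical placeholders for illustration only.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract():
        # Placeholder: pull raw records from a source system.
        return [{"id": 1, "value": 10}]

    def transform(records):
        # Placeholder: apply a simple transformation to each record.
        return [{**r, "value": r["value"] * 2} for r in records]

    def load(records):
        # Placeholder: write transformed records to a target store.
        print(f"Loading {len(records)} records")

    def run_etl():
        # Chain the three steps inside a single task for brevity.
        load(transform(extract()))

    with DAG(
        dag_id="example_daily_etl",  # hypothetical DAG name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",
        catchup=False,
    ) as dag:
        PythonOperator(task_id="run_etl", python_callable=run_etl)

In practice, each step would typically be its own task (or use operators specific to the source and warehouse) so that failures can be retried independently.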