New York City, NY, USA | Deloitte Consulting
Functions: IT / Information Technology
Consulting - IT
Job Description
Are you a talented software engineer with outstanding Python skills? Passionate about creating innovative data pipelines from scratch to solve real-world problems? Deloitte Digital is seeking creative minds and persistent problem solvers to build and support a cutting-edge marketing AI Platform that helps our clients deliver meaningful, personalized journeys that delight their customers at every touch point.
At Deloitte Digital, we build a state-of-the-art marketing Platform that gives our clients a single view of their customers’ brand interactions across all channels (in-store, online, call centers) and enables them to deliver tailored marketing offers that maximize customer happiness and value.
We’re growing fast and need brilliant Senior Data Engineers like you to fuel our continuing innovation and growth. Along the way, you’ll find exceptional growth opportunities limited only by your hunger for learning and applying new technologies in our exciting, start-up-like environment.
Work you’ll do
As a Senior Data Engineer, you’ll design, implement and maintain a full suite of real-time and batch jobs that fuel our cutting-edge AI, providing real-time marketing intelligence to our existing clients.
You’ll develop, test and deliver production-grade code to help our clients solve their toughest marketing challenges using cutting-edge big data tools. You’ll also ensure data integrity, resolve production issues, and assist in the support and maintenance of our overall Platform.
As you grow your capabilities and learn how to build a platform that can ingest, load and process billions of data points, you’ll enjoy new challenges and opportunities to showcase your development skills by joining project teams that build innovative platforms for new clients and execute high-visibility, high-value strategic development projects.
Your responsibilities will include:
• Design, construct, install, test and maintain highly scalable data pipelines with state-of-the-art monitoring and logging practices.
• Bring together large, complex and sparse data sets to meet functional and non-functional business requirements.
• Design and implement data tools for analytics and data science team members to help them build, optimize and tune our product.
• Integrate new data management technologies and software engineering tools into existing structures.
• Help in building high-performance algorithms, prototypes, predictive models and proof of concepts.
• Use a variety of languages, tools and frameworks to marry data and systems together.
• Recommend ways to improve data reliability, efficiency and quality.
• Collaborate with Data Scientists, DevOps and Project Managers on meeting project goals.
• Tackle challenges and solve complex problems on a daily basis.
You’ll join a team of passionate, talented Data Engineers who collaborate to design, build and maintain cutting-edge AI solutions that arm our clients with real-time customer insights delivering tremendous value. If you’re intellectually curious, hardworking and solution-oriented, you’ll fit right into our fast-paced, collaborative environment.
You’ll work daily with our Data Science and DevOps teams to deliver production-grade pipelines that run unattended for weeks and months. You’ll also work closely with our Project Management Team to understand our clients’ needs and any change requests that drive our development efforts.
Qualifications
• Proven track record of 8+ years of experience in software development, a substantial part of which was gained in a high-throughput, decision-automation related environment.
• 4+ years of experience in working with big data using technologies like Spark, Kafka, Flink, Hadoop, and NoSQL datastores.
• 3+ years of experience with distributed, high-throughput and low-latency architectures.
• 1+ years of experience deploying or managing data pipelines for supporting data-science-driven decisioning at scale.
• A successful track record of manipulating, processing and extracting value from large, disconnected datasets.
• Produces high-quality code in Python.
• Passionate about testing; with extensive experience on Agile teams using Scrum, you consider automated builds and tests to be the norm.
• Proven ability to communicate both verbally and in writing in a high-performance, collaborative environment.
• Follows data development best practices, and enjoys helping others learn to do the same.
• An independent thinker who considers the operating context of what they are developing.
• Believes that the best data pipelines run unattended for weeks and months on end.
• Familiar with version control, you believe that code reviews help catch bugs, improve the code base and spread knowledge.
• Ability to travel 5-10% of the time.
Helpful, but not required:
• Experience with large consumer data sets used in performance marketing is a major advantage.
• Familiarity with machine learning libraries is a plus.
• Well-versed in (or contributes to) data-centric open source projects.
• Reads Hacker News, blogs, or otherwise stays on top of emerging tools.
• Experience with data visualization.
• Experience with industry-specific marketing data.
Technologies of Interest:
• Languages/Libraries – Python, Java, Scala, Spark, Kafka, Hadoop, HDFS, Parquet.
• Cloud – AWS, Azure, Google Cloud.