Short Description:
The Data Engineer II role at Netomi AI in Gurugram seeks a detail-oriented professional to analyze trends, enhance the customer experience, and drive improvements through in-depth data analysis. Responsibilities include building data pipelines, writing code following test-driven development (TDD) principles, and implementing modern data architecture strategies. Requirements include 4+ years of experience with a start-up mindset, proficiency in Java or Python, SQL, data modeling, and ETL, and familiarity with data engineering tools and platforms such as Kafka, Spark, and Hadoop in a cloud environment. Netomi values diversity and equal opportunity.
Position: Data Engineer II
Location: Gurugram
Department: Product Engineering – Product Development
Employment Type: Full-Time (Remote/Hybrid)
About Netomi AI:
At Netomi AI, we have embarked on a mission to create artificial intelligence that fosters customer loyalty for the world's largest global brands. Some of the biggest brands already trust Netomi AI's platform to address mission-critical issues, providing you with the opportunity to work with high-profile clients at a senior level and expand your professional network.
Backed by leading investors such as Y Combinator, Index Ventures, Jeffrey Katzenberg (co-founder of DreamWorks), and Greg Brockman (co-founder & President of OpenAI/ChatGPT), you will join an exclusive group of visionaries who are shaping the future of AI in customer experience. We are building a dynamic, rapidly growing team that values innovation, creativity, and hard work. In this environment, you'll have the chance to make a significant impact on the company's success while advancing your career in the field of AI.
If you want to be a key player in the Generative AI revolution, we should definitely have a conversation.
Position Overview:
Netomi is seeking a highly analytical and detail-oriented candidate to join the Analytics team in Gurugram. As a member of this team, you will collaborate with product, engineering, and customer success teams on in-depth data and trend analyses, proposing improvements that enhance the overall customer experience. Your role will also encompass benchmarking and measuring the performance of various product operations projects, building and publishing comprehensive scorecards and reports, and identifying and driving new opportunities based on customer and business data.
We are searching for a Data Engineer who is passionate about using data to uncover and solve real-world problems. You will work with extensive datasets and modern business intelligence technology, and see your insights drive the development of features for our customers. You will also have the opportunity to help shape the policies, processes, and tools that address product quality challenges, in collaboration with other teams.
Key Responsibilities:
- Collaborate with colleagues to build complex data processing pipelines that address our clients' most challenging problems.
- Write clean, iterative code following Test-Driven Development (TDD) principles.
- Utilize various continuous delivery practices to deploy, support, and operate data pipelines.
- Guide and educate clients on the use of different distributed storage and computing technologies.
- Develop and manage modern data architecture strategies to meet essential business objectives and deliver end-to-end data solutions.
- Create data models and provide insights into the trade-offs associated with different modeling approaches.
- Seamlessly integrate data quality considerations into your daily tasks and the delivery process.
Requirements:
- A minimum of 4 years of work experience with a start-up mindset and a strong willingness to learn.
- Proficiency in Java or Python, coupled with a solid understanding of a web framework (e.g., Spring, Django, etc.) for writing maintainable, scalable, unit-tested code.
- Expertise in SQL and a strong grasp of both relational databases (e.g., MySQL, PostgreSQL) and NoSQL databases.
- A solid grasp of data modeling.
- Prior experience as an ETL developer is preferred.
- Familiarity with data engineering tools and platforms such as Kafka, Druid, AWS Kinesis, Spark, and Hadoop.
- Experience building large-scale data pipelines and data-centric applications in production, using distributed storage platforms such as HDFS, S3, and NoSQL databases (HBase, Cassandra, etc.), together with distributed processing and orchestration platforms such as Hadoop, Spark, Hive, and Airflow.
- Adept at employing data-driven approaches and data security strategies to solve business problems.
- Genuine enthusiasm for data infrastructure and operations, with experience working in cloud environments.
- A passion for working with data: capable of building and operating data pipelines and maintaining data storage within distributed systems.
- Effective collaboration skills, fostering open communication between Netomi and business client teams while advocating for shared outcomes.
- Experience in writing data quality unit and functional tests.
Netomi is an equal opportunity employer committed to fostering diversity in the workplace. We evaluate qualified applicants without regard to race, color, religion, sex, sexual orientation, disability, veteran status, or other protected characteristics.