Short Description:
The Senior Data Engineer role at Grab's Digibank in Bangalore involves shaping a state-of-the-art data lifecycle platform, collaborating with diverse teams to tailor financial solutions, and adopting open-source big data technologies. Candidates need 5+ years of experience building scalable, fault-tolerant big data platforms; proficiency in Linux, AWS, Kubernetes, and Python, Scala, or Java; and expertise in technologies such as Spark, Airflow, and Kafka. Grab is committed to an inclusive workplace where diverse teams can perform at their best.
Job Title: Senior Data Engineer (Digibank)
Locations
- Bangalore (Salarpuria Aura)
Employment Type
- Full time
Posted On
- 9 Days Ago
Job Requisition ID
- R-2023-6-0028
Job Overview:
At Grab, we are guided by The Grab Way, which defines our mission, approach to achieving it, and our core principles: Heart, Hunger, Honour, and Humility. These principles steer our decision-making as we strive to create economic empowerment for the people of Southeast Asia.
Get to know the Team:
In these dynamic times, technology is reshaping our lives, and we aim to revolutionize how financial services are delivered. Singtel, Asia's leading communications group, connects millions of consumers and enterprises to vital digital services. Together, we are working to unlock big aspirations, and financial inclusion in our region is one of them. Our goal is to establish a digital bank with a strong foundation built on data, technology, and trust to solve problems and serve our customers. If you have what it takes to help us build this new Digibank, join us.
Get to know the Role:
As a Data Engineer within the Data Technology team, you will engage with all aspects of data: platform and infrastructure development, pipeline engineering, and the creation of tools and services that enhance the core platform. Your role involves building and maintaining a state-of-the-art data lifecycle management platform covering data acquisition, storage, processing, and consumption channels. The team collaborates closely with data scientists, product managers, Finance, Legal, Compliance, and business stakeholders across Southeast Asia to tailor offerings to their specific requirements. As a member of the Data Tech team, you will be at the forefront of adopting and contributing to open-source big data technologies, exploring the latest patterns and designs in software and data engineering.
The day-to-day activities:
- Build and manage data assets using scalable and resilient open-source big data technologies, including Airflow, Spark, Snowflake, Kafka, Kubernetes, ElasticSearch, Superset, and more, on cloud infrastructure (a minimal orchestration sketch follows this list).
- Design and deliver the next-generation data lifecycle management suite of tools and frameworks, supporting real-time, API-based, serverless, and batch processing use cases as needed.
- Create and expose a metadata catalog for the Data Lake, facilitating exploration, profiling, and lineage requirements.
- Empower Data Science teams to test and productionize ML models, including propensity, risk, and fraud models, to better understand, serve, and protect customers.
- Lead technical discussions across the organization, including running RFC and architecture review sessions, tech talks on new technologies, and retrospectives.
- Apply core software engineering and design concepts to create operational and strategic technical roadmaps for business problems that may be vague or not fully understood.
- Prioritize security by ensuring that all components, from platforms and frameworks to applications, are fully secure and compliant with the group's infosec policies.
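To make the orchestration work concrete, below is a minimal sketch of the kind of daily batch pipeline an Airflow deployment like this might run. It assumes Airflow 2.4+; the DAG id, task names, and bucket path are hypothetical illustrations, not Grab's actual pipelines.

# Minimal daily batch pipeline sketch (assumes Airflow 2.4+).
# DAG id, task names, and paths are hypothetical.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_transactions(**context):
    # Pull one logical day's records from an upstream source (stubbed).
    print(f"extracting transactions for {context['ds']}")

def load_to_lake(**context):
    # Append the extracted batch to the data lake (stubbed).
    print("loading batch to s3://example-data-lake/transactions/")

with DAG(
    dag_id="transactions_daily",
    start_date=datetime(2023, 6, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract", python_callable=extract_transactions)
    load = PythonOperator(task_id="load", python_callable=load_to_lake)
    extract >> load  # linear dependency: extract, then load

In practice, a retry policy plus idempotent, partition-scoped tasks keyed on the logical date are what make pipelines like this fault-tolerant and safe to backfill.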
The Must-Haves:
- 5+ years of relevant experience developing scalable, secure, distributed, fault-tolerant, resilient, and mission-critical big data platforms.
- Ability to maintain and monitor the ecosystem with 99.99% availability.
- Candidates will be aligned appropriately within the organization depending on experience and depth of knowledge.
- Strong hands-on knowledge of Linux fundamentals and experience building a big data stack on AWS using Kubernetes.
- Proficiency in at least one of Python, Scala, or Java.
- Strong understanding of big data and related technologies such as Spark, Airflow, and Kafka (see the streaming sketch after this list).
- Experience with NoSQL databases (key-value, document, and graph).
- Ability to drive DevOps best practices, such as CI/CD, containerization, blue-green deployments, 12-factor apps, and secrets management, in the data ecosystem.
- A good understanding of machine learning models and how to support them efficiently is a plus.
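As one illustration of the Spark-and-Kafka experience above, here is a minimal sketch of a Spark Structured Streaming job that reads events from Kafka and appends them to a lake path. It assumes the spark-sql-kafka connector is on the classpath at submit time; the broker address, topic, and S3 paths are hypothetical.

# Minimal Spark Structured Streaming sketch: Kafka -> data lake.
# Broker, topic, and S3 paths are hypothetical illustrations.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("events-to-lake").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "customer-events")
    .load()
    .select(col("key").cast("string"), col("value").cast("string"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-data-lake/raw/customer-events/")
    .option("checkpointLocation", "s3a://example-data-lake/checkpoints/customer-events/")
    .outputMode("append")
    .start()
)

query.awaitTermination()

The checkpoint location is what lets the file sink recover exactly-once semantics across restarts, which is the kind of resiliency the role calls for.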
Our Commitment:
We are committed to building diverse teams and creating an inclusive workplace that enables all Grabbers to perform at their best, regardless of nationality, ethnicity, religion, age, gender identity, sexual orientation, and other attributes that make each Grabber unique.