Short Description:
Tredence, a leading analytics and data science company, is hiring a Senior Data Engineer with 3-6 years of experience for roles in Bangalore, Chennai, Delhi, Pune, and Kolkata. The candidate will develop advanced Data Warehouse solutions using Databricks and the AWS/Azure stack, collaborate with DW/BI leads on new ETL pipeline development requirements, and work with the business on reporting-layer needs. Strong hands-on skills in SQL, Python, and Spark (PySpark) are essential, along with expertise in data modeling, the Databricks Data & AI platform, and managing structured and unstructured data.
Job Title: Senior Databricks Engineer
Organization: Tredence
Position: Senior Data Engineering Professional (3-6 years experience)
Work Locations: Bangalore, Chennai, Delhi, Pune, Kolkata
About Tredence:
Tredence is dedicated to delivering impactful insights that drive profitable actions by integrating business analytics, data science, and software engineering. We collaborate with major companies across various sectors, deploying predictive and optimization solutions at scale. We are headquartered in the San Francisco Bay Area, with clients across the US, Canada, Europe, and Southeast Asia. We are in search of an accomplished data engineer who not only possesses the required technical expertise but also exhibits natural curiosity and a creative mindset to explore, connect, and unveil hidden opportunities, ultimately unlocking the full potential of data.
Primary Roles and Responsibilities:
- Develop advanced Data Warehouse solutions utilizing Databricks and AWS/Azure Stack.
- Offer forward-thinking solutions in the realm of data engineering and analytics.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Identify gaps in existing pipelines and rectify issues efficiently.
- Work closely with the business to understand reporting layer needs and develop data models accordingly.
- Assist team members in resolving issues and technical challenges.
- Lead technical discussions with client architects and team members.
- Orchestrate and schedule data pipelines using Airflow.
Skills and Qualifications:
- Bachelor's and/or Master's degree in Computer Science or equivalent experience.
- Minimum of 3 years of IT experience, with 3+ years of hands-on experience in data warehouse/ETL projects.
- Strong understanding of Star and Snowflake schema dimensional modeling.
- Robust knowledge of Data Management principles.
- Thorough understanding of Databricks Data & AI platform and Databricks Delta Lake Architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience in AWS/Azure stack is mandatory.
- Desirable: experience with ETL for both batch and streaming data (e.g., Kinesis).
- Proficiency in building ETL/data warehouse transformation processes.
- Familiarity with Apache Kafka for streaming data/event-based data.
- Exposure to open-source big data products, including the Hadoop ecosystem (Hive, Pig, Impala).
- Familiarity with open-source non-relational/NoSQL data repositories (e.g., MongoDB, Cassandra, Neo4j).
- Experience with structured and unstructured data, including imaging & geospatial data.
- Proficient in working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
- Databricks Certified Data Engineer Associate/Professional Certification (Desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with a high attention to detail.
Mandatory Skills: Python/PySpark/Spark with Azure/AWS Databricks
Please click here to apply.