Short Description:
Log9, a leading force in sustainable energy solutions, seeks an Intern-Data Engineer in Bangalore, India, for a full-time, one-year internship. Responsibilities include building and managing data pipelines, collaborating with stakeholders, and ensuring data integrity. Required skills include Python, PySpark, SQL, and cloud computing, along with familiarity with data orchestration tools. Candidates should hold a Bachelor's degree in Computer Science, Electrical Engineering, or Information Technology.
Position: Intern-Data Engineer
Type: Internship
Company: Log9
Location: Bangalore, India
Employment Type: Full-time (1 year)
About Log9:
- Log9 is a pioneering force in combating climate change, driving India's transition towards a clean and sustainable mobility and energy storage ecosystem.
- Our innovative batteries offer 9x faster charging, 9x longer lifespan, and 9x greater performance and safety, revolutionizing the industry.
- As a deep-tech startup, Log9 is committed to pioneering responsible energy solutions, with expertise spanning electrode materials, cell fabrication, and battery packs.
- With over 80 industry-defining patents, Log9 fosters a culture of transparency, inclusiveness, empowerment, courage, and humility, enabling employees to excel.
- Our 300+ member tribe is dedicated to pushing technological boundaries for a sustainable and liveable world.
Job Responsibilities:
- Build, maintain, and manage data pipelines to ensure efficient data flow across various systems (see the sketch after this list).
- Collaborate with stakeholders to design and oversee customized data pipelines tailored to organizational needs.
- Evaluate and test ETL (Extract, Transform, Load) tools for data ingestion and processing efficiency.
- Assist in scaling the data infrastructure to accommodate the organization's growing data requirements.
- Monitor data pipeline performance and address any data-related issues promptly.
- Document pipeline architectures and workflows for future reference and scalability.
- Assess data formats, sources, and transformation techniques to optimize data processing.
- Collaborate closely with data scientists to ensure the availability and reliability of data for analytics purposes.
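For context, a pipeline of the kind described above might look roughly like the following PySpark sketch. This is a minimal illustration only; the bucket paths and column names (event_id, event_ts) are hypothetical placeholders, not Log9's actual systems or schema.

```python
# Minimal ETL sketch in PySpark; all paths and columns are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV files from a (hypothetical) landing zone.
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/events/")

# Transform: deduplicate, parse timestamps, and drop incomplete rows.
clean = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .filter(F.col("event_id").isNotNull())
)

# Load: write partitioned Parquet, a common data-lake layout.
clean.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-bucket/clean/events/"
)

spark.stop()
```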
Required Skills/Experience:
- Proficiency in Python and PySpark, plus an understanding of big-data concepts such as data lakes and data warehouses.
- Strong background in SQL for effective data manipulation and querying.
- Familiarity with cloud computing platforms such as AWS, Azure, or Google Cloud.
- Basic understanding of containerization technologies like Docker for efficient deployment.
- Exposure to data orchestration tools like Apache Airflow or Luigi for workflow management.
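As a rough illustration of the orchestration tools mentioned above, a minimal Apache Airflow DAG (Airflow 2.4+ syntax) might look like the sketch below. The dag_id, task names, and callables are illustrative placeholders, not part of any actual Log9 workflow.

```python
# Minimal, hypothetical Airflow DAG: run an extract step, then a load step, daily.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull raw data from source systems")


def load():
    print("write transformed data to the warehouse")


with DAG(
    dag_id="daily_events_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Declare task ordering: extract must finish before load starts.
    extract_task >> load_task
```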
Educational Background:
- Bachelor's degree in Computer Science, Electrical Engineering, or Information Technology.
Please click here to apply.