With us, you can achieve an exceptional career.
Big Data Engineer IRC245004
Job | IRC245004
Location | India - Bangalore
Designation | Senior Software Engineer
Experience | 5-10 years
Function | Engineering
Skills | Apache Spark / PySpark, Big Data engineering, CI/CD Pipeline Management, Elasticsearch, Hadoop, Monitoring, Performance testing/tuning, Pipeline Execution/ETL, Python, Scala, SQL
Work Model | Hybrid
Description
We are looking for a Senior Big Data Engineer with 6+ years of experience building large-scale data pipelines. The role centers on Apache Spark application development in Scala or Python, ETL pipeline design, Elasticsearch integration, and advanced SQL, with a strong emphasis on performance tuning and production reliability. Detailed requirements and responsibilities follow.
Requirements
- 6+ years of experience in big data engineering or related fields.
- Strong expertise in Apache Spark for data processing, with hands-on experience developing Spark applications in Scala or Python.
- Experience in ETL pipeline development, including data ingestion, transformation, and processing of large-scale datasets (a minimal PySpark sketch follows this list).
- Proficiency in Elasticsearch: experience querying and indexing large datasets, with an understanding of its architecture and performance optimization.
- Advanced SQL skills: Strong ability to write complex SQL queries for data extraction, aggregation, and analysis.
- Knowledge of distributed data processing and of managing large datasets in cloud or on-premises environments.
- Familiarity with the Hadoop ecosystem, HDFS, and other big data technologies.
- Experience with version control (e.g., Git) and CI/CD pipelines.
- Strong problem-solving skills, attention to detail, and a mindset for performance tuning and optimization.
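To give candidates a concrete feel for the Spark and SQL work described above, here is a minimal, hypothetical PySpark ETL sketch: it ingests raw JSON, cleans it, and computes a daily aggregate with Spark SQL. The paths, column names, and the `events` view are illustrative assumptions for this posting, not artifacts of any actual project.

```python
# Minimal PySpark ETL sketch: ingest -> transform -> aggregate -> load.
# All paths and column names are hypothetical illustrations.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("etl-sketch")
    .getOrCreate()
)

# Ingest: read raw JSON events from a (hypothetical) landing zone.
raw = spark.read.json("/data/landing/events/*.json")

# Transform: normalize timestamps and drop malformed rows.
events = (
    raw
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .dropna(subset=["user_id", "event_ts"])
)

# Analyze: daily active users via Spark SQL.
events.createOrReplaceTempView("events")
daily_active = spark.sql("""
    SELECT date(event_ts) AS day,
           COUNT(DISTINCT user_id) AS active_users
    FROM events
    GROUP BY date(event_ts)
    ORDER BY day
""")

# Load: write the aggregate back out as Parquet.
daily_active.write.mode("overwrite").parquet("/data/curated/daily_active")
```

In real pipelines the same pattern is typically parameterized, scheduled, and wired into CI/CD and monitoring, which is where the other requirements on this list come in.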
Job Responsibilities
- Design and implement ETL pipelines: Develop, optimize, and maintain ETL processes for the extraction, transformation, and loading of large datasets using Spark (Scala/Python).
- Data modeling and processing: Build scalable data processing applications that handle high-volume and complex data from various sources.
- Spark Application Development: Write optimized, scalable Spark applications in Scala or Python to process and analyze large datasets.
- Elasticsearch Integration: Utilize Elasticsearch for indexing, searching, and querying big data; ensure the data in Elasticsearch is well-structured and optimized for performance (illustrated in the first sketch after this list).
- SQL Expertise: Develop complex SQL queries and scripts to perform data analysis and transformations and to ensure data integrity across the pipeline.
- Data Architecture: Collaborate with architects to design data solutions that are resilient, efficient, and easy to scale.
- Performance Tuning: Optimize Spark jobs and queries for performance, including tuning parameters, resource management, and troubleshooting issues related to data quality and performance (illustrated in the second sketch after this list).
- Collaboration: Work with cross-functional teams, including data scientists, analysts, and software engineers, to deliver end-to-end data solutions.
- Monitoring and Maintenance: Set up monitoring tools and frameworks to ensure the stability and reliability of data pipelines in production environments.
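The Elasticsearch integration work mentioned above can be illustrated with a small sketch using the official elasticsearch-py 8.x client. The host, index name, mapping, and sample document are assumptions made up for the example, not a prescribed setup.

```python
# Minimal Elasticsearch sketch (elasticsearch-py 8.x assumed): create an
# index with an explicit mapping, index a document, and run a query.
# Host, index name, and fields are hypothetical.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumed local cluster

index = "events-demo"

# Explicit mapping keeps field types predictable for query performance.
es.indices.create(
    index=index,
    mappings={
        "properties": {
            "user_id": {"type": "keyword"},
            "event_ts": {"type": "date"},
            "message": {"type": "text"},
        }
    },
)

# Index one document, then refresh so it is immediately searchable.
es.index(index=index, document={
    "user_id": "u-123",
    "event_ts": "2024-01-01T00:00:00Z",
    "message": "checkout completed",
})
es.indices.refresh(index=index)

# Query: full-text match on `message`, filtered by exact user_id.
resp = es.search(
    index=index,
    query={
        "bool": {
            "must": [{"match": {"message": "checkout"}}],
            "filter": [{"term": {"user_id": "u-123"}}],
        }
    },
)
print(resp["hits"]["total"])
```

Using `keyword` for identifiers and `text` for searchable content is the usual starting point; the role involves making exactly these kinds of mapping and query-shape decisions at scale.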
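Likewise, Spark performance tuning in this role often comes down to shuffle sizing, adaptive execution, and join strategy. The second sketch below shows those levers in PySpark; the configuration values and data paths are illustrative assumptions, not recommendations for any particular workload.

```python
# Sketch of common Spark tuning levers; values are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("tuning-sketch")
    # Size shuffle parallelism to the cluster rather than the 200 default.
    .config("spark.sql.shuffle.partitions", "400")
    # Let adaptive query execution coalesce small shuffle partitions.
    .config("spark.sql.adaptive.enabled", "true")
    # Raise the broadcast-join threshold for modest dimension tables.
    .config("spark.sql.autoBroadcastJoinThreshold", "64MB")
    .getOrCreate()
)

facts = spark.read.parquet("/data/curated/facts")  # hypothetical path
dims = spark.read.parquet("/data/curated/dims")    # hypothetical path

# Hint a broadcast join to avoid shuffling the large fact table.
joined = facts.join(dims.hint("broadcast"), "dim_id")

# Inspect the physical plan to confirm the broadcast join took effect.
joined.explain(mode="formatted")
```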
We Offer
Exciting Projects: We focus on industries like high-tech, communications, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities!
Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill training.
Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates and corporate parties, and provide discounts at popular stores and restaurants. Our vibrant offices also include dedicated GL Zones, rooftop decks, and the GL Club, where you can enjoy coffee or tea with your colleagues over a game of table tennis!