Job Search

We can help you build an exceptional career.

1,039+ open positions worldwide

Data Scientist IRC241434

Job: IRC241434
Location: India - Hyderabad
Designation: Consultant
Experience: 5-10 years
Function: Engineering
Skills: Data Science, Docker Compose, pandas, xlwings, Python, RabbitMQ, TensorFlow
Work Model: Hybrid

Description:

Position Overview: We are looking for a highly skilled and versatile Data Scientist/Data Engineer to join our team. The ideal candidate will have a strong technical background, be proficient in Python, and have experience managing data pipelines and using technologies like Docker, RabbitMQ, SQLite, and more.

Key Responsibilities:
Data Science and Engineering:
Develop and implement advanced data science models.
Design and optimize data pipelines for various AI features across our product suite.
Utilize Python and its major libraries (Pandas, Scikit-learn, NumPy, etc.) to analyze and process large datasets.
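By way of illustration, here is a minimal sketch of this kind of Pandas/Scikit-learn workflow; the input file and column names below are placeholder assumptions, not part of the role description.

# Minimal sketch: load a dataset with pandas, prepare features, and fit a simple
# scikit-learn model. "events.csv", the "feature_*" columns, and "label" are
# hypothetical placeholders.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("events.csv").dropna()  # hypothetical input file, basic cleaning
X = df[[c for c in df.columns if c.startswith("feature_")]]
y = df["label"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
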
Product Mastery:
Gain deep knowledge of our various AI or generative AI features across our products.
Work closely with the product development team to integrate advanced data science methodologies into our products.

Pipeline Management:
Design, build, and maintain scalable data pipelines that ensure smooth operation across various products.
Optimize data processing workflows using tools like Docker, RabbitMQ, and SQLite (see the illustrative sketch after this list).
Monitor and troubleshoot data pipelines, ensuring data integrity and performance.
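To make the stack above concrete, here is a minimal, hedged sketch of one such pipeline stage: it consumes JSON messages from RabbitMQ (via pika) and persists them to SQLite. The broker address, queue name, and message schema are assumptions for illustration only.

# Illustrative pipeline stage: RabbitMQ consumer that writes events into SQLite.
# Assumes a broker on localhost, a durable "events" queue, and messages shaped
# like {"id": "...", ...} -- all hypothetical.
import json
import sqlite3

import pika

conn = sqlite3.connect("pipeline.db")
conn.execute("CREATE TABLE IF NOT EXISTS events (id TEXT PRIMARY KEY, payload TEXT)")

def handle(channel, method, properties, body):
    event = json.loads(body)
    conn.execute(
        "INSERT OR REPLACE INTO events (id, payload) VALUES (?, ?)",
        (event["id"], json.dumps(event)),
    )
    conn.commit()
    channel.basic_ack(delivery_tag=method.delivery_tag)  # ack only after the write succeeds

connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()
channel.queue_declare(queue="events", durable=True)
channel.basic_consume(queue="events", on_message_callback=handle)
channel.start_consuming()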

Collaboration and Implementation:
Collaborate with cross-functional teams, including software developers, data analysts, and product managers, to deliver high-quality solutions.
Implement and deploy machine learning models and data processing algorithms across various products.

Continuous Improvement:
Continuously seek ways to improve the accuracy and efficiency of data processing across our various products.

Stakeholder Communication:
Communicate complex data insights to non-technical stakeholders in a clear and concise manner.

Research:
Stay up-to-date with the latest technology trends and techniques in data science, and implement new methodologies as appropriate.

Required Qualifications:
Bachelor’s or Master’s degree in Data Science, Computer Science, Engineering, or a related field.
3+ years of experience in data science and data engineering roles.
Strong proficiency in Python and major libraries such as Pandas, Scikit-learn, NumPy, TensorFlow, etc.
Proven experience in building and managing data pipelines using Docker, RabbitMQ, and SQLite.
Familiarity with SQL and database management.
Strong problem-solving skills and the ability to work both independently and collaboratively.

Preferred Qualifications:
Experience with generative AI technologies.
Familiarity with containerization and orchestration tools like Kubernetes.
Experience with cloud platforms like AWS, Azure, or Google Cloud.


Job Responsibilities:

• Work within an agile software engineering framework (analysis, architecture, technical design, task planning, coding, PR reviews, maintenance, etc.)
• Build innovative backend services
• Code, test, document, and deliver features with high performance and availability in mind
• Enhance existing features and create new ones for the AI-based Process Discovery Product Suite
• Collaborate with program managers, designers, and software engineers in test to build and enhance the backend


We Offer

Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.

Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!

Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.

Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toast Master), stress management programs, professional certifications, and technical and soft skill trainings.

Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.

Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can enjoy coffee or tea with your colleagues over a game of table tennis. We also offer discounts at popular stores and restaurants!

About GlobalLogic

GlobalLogic is a leader in digital engineering. We help brands across the globe design and build innovative products, platforms, and digital experiences for the modern world. By integrating experience design, complex engineering, and data expertise—we help our clients imagine what’s possible, and accelerate their transition into tomorrow’s digital businesses. Headquartered in Silicon Valley, GlobalLogic operates design studios and engineering centers around the world, extending our deep expertise to customers in the automotive, communications, financial services, healthcare and life sciences, manufacturing, media and entertainment, semiconductor, and technology industries. GlobalLogic is a Hitachi Group Company operating under Hitachi, Ltd. (TSE: 6501) which contributes to a sustainable society with a higher quality of life by driving innovation through data and technology as the Social Innovation Business.

Apply Now
