Archives

Robotics is one of the fastest-growing technologies and is evolving continuously to meet human needs. With such a rapid pace of development, many security issues arise. The future of robotics will be determined in large part by how secure robotic platforms are.

How can developers keep robotic platforms safe from security threats? Learn about robot types and generations, robotic platforms, common security problems in these platforms, types of attacks on robots, and cyber-attack mitigation strategies.

Edge computing aims to take computation resources out of traditional data centers and bring them as close as possible to the location where they are needed. Hand-held devices, appliances, and other physical units, typically within or at the boundary (edge) of access networks, provide low-latency, high-bandwidth, and more secure computing and storage.

How can businesses in the telecom, automotive, manufacturing, and IoT industries protect themselves from cyber risks if they employ edge computing? Learn about edge computing in detail, its meaning to different sectors, and security threats such as DDoS, side-channel, malware injection, and authentication and authorization attacks. Each threat description includes possible defense mechanism solutions.

Data in the insurance industry is increasing exponentially, with a 90% increase in the last two years. This big data cannot be managed efficiently using old technologies but requires next-generation solutions.

How can the insurance industry work with big data to price policies, settle claims, analyze customer behavior, detect fraud, and map threats? Learn about big data’s 3 V’s, big data analysis and its benefits, and how to work with unstructured data. Then, examine some big data use cases in the insurance sector.

As cybersecurity challenges grow in complexity and scope, companies can no longer rely solely on manual effort. While security operations teams have long used Security Information and Event Management (SIEM) tools to identify and process threats, these tools are not proactive enough to deal with today’s security issues.

How can businesses use Security Orchestration, Automation, and Response (SOAR) platforms to empower their security analysts for a new era? Learn about the SIEM process, then see how we can define SOAR, explore its four engines, and discover some of its use cases for malware containment, phishing emails, and alert enrichment.

Medical and biomedical research has advanced over the last two decades thanks to continuous increases in computer processing. As we enter an age of personalized healthcare dependent on genomics, individual physiology, and pharmacokinetics, the need to handle large amounts of data and process it in a format for clinical use will become more urgent.

How can the health industry utilize quantum computing to improve patient outcomes? Learn how quantum computing can be used to personalize medicine, accelerate diagnoses, predict probable health conditions, and recommend treatments to health practitioners. Then, discover how quantum fundamentals like superposition, entanglement, algorithms, and circuits can be applied to patient diagnosis.

I have long given thought to why some Enterprise Agile transformations are successful while others seem to go on for an exceedingly long time.  It’s one of my favorite topics to discuss, and I’ve had the benefit of working with some amazing fellow Agile coaches who have given me their opinion, as well.  I base this blog on my own experiences at several Fortune 500 clients, as well as feedback I’ve received from numerous other Agile enthusiasts.

1. Support from Executive Management

I guess I got lucky in my first role as an Agile coach at a Fortune 500 company.  The 100+ year-old company began its Agile transformation journey the same week I started.  It had a “burning platform” problem, so there clearly was an incentive.  But most importantly, it had support from the very top – clear, unambiguous support.  This support was instrumental in removing obstacles and establishing a common goal that was clearly communicated.

I have been at client engagements where the Agile transformation was being driven by mid-management.  This created its own obstacles, as it forced us Agile coaches to go shopping for support among the various senior management teams.  It became more of an Agile transformation of subsets of the company instead of being a transformation of the enterprise itself.  It’s well known that optimization of the subsystems does not equate to optimization of the system. That is, when you optimize only a smaller business unit, that business unit (along with other business units) will have to change once again when you focus on a company-wide adoption of Agile.  This includes revisiting the processes, tools, and more to better support this level of transformation.

2. Stay Above Politics within the Client

One of the beautiful things about being brought in from the outside as a consultant is that we can honestly say we’re not a part of some political faction.  Most Agile coaches know to stay out of the politics and remain neutral.  Our success (and the client’s) depends on this.

However, this isn’t to say we’re not impacted by politics.  I’ve been told by more than one client not to talk to a group — for example, the PMO — or even another set of Agile coaches working at the same client location.  While we seek success where we can, these sorts of instructions can be quite disheartening, and they inhibit the transformation effort.

3. Agile Coaches Need to Report to Very Senior Management

I alluded to this in the sections above, but I want to state it clearly.  As Agile coaches, we propose changes.  I believe these changes should be discussed with the impacted parties and within the guardrails to which we agree.  However, I can occasionally be hampered, if not outright blocked, by politically powerful people.  If an Agile coach doesn't report to an individual high enough in the organizational structure, then we have to come up with alternatives — which is not the best use of an Agile coach’s time and results in less-than-optimal progress in the transformation.

4. Recognize We’re Changing the Company — This Isn’t a Checklist Activity

From my own and other Agile coaches’ experience, as well as my many discussions with potential clients, I’ve discovered that a fair number of companies treat the Agile transformation process more as a “checklist.”  For example, if you have a project, you bring in project managers.  Once you bring them in, you can check that activity off your list of to-dos and move on to something else.

An Agile transformation is a transformation.  It’s a transformation of how you do things and how you talk about and address problems, as well as a significant change in the corporate culture.  Many companies bring in an Agile coach, check the checkbox that they completed this activity, then forget about it — except for an occasional follow-up on progress.  This approach lessens the effectiveness of the Agile coaches and — unless addressed — will stall the transformation.

5. Know Why the Client Is Doing an Agile Transformation

This is my favorite question for any potential client – why do you want to go through an Agile transformation?  While I have my opinions, I’ve been amazed at the wide variety of responses.

In short, the reasons don’t matter too much, as long as an Agile transformation can address them.  I’ve seen responses that essentially ask the Agile transformation to address other issues, such as changing how employee performance is measured.  While Agile (and SAFe, for example) does discuss how to measure an employee’s progress within an Agile environment, this shouldn’t be the main driver.

6. Use Metrics — But Keep Them Productive

Following the old adage, “People respond to how they’re measured,” you should use metrics that reflect the goals of the Agile transformation.  I’ve seen a lot of metrics that are very development-focused, such as cycle time, defects, velocity, etc.  Even though many companies want to undergo an Agile transformation to respond to changes in the market, I have never seen this discussed or measured, which I find quite interesting.

Just as important as which metrics you use is not overdoing them.  I’ve seen people use metrics as inflexible guardrails.  For example, if a team has a single P1 (most severe) defect in production, it is automatically assumed to be a poor team.  This sort of approach creates a culture of fear, which is the opposite of what we want to achieve, and it will impact productivity quite severely.

7. Advocate Agile, But Remain Practical

I’m an Agile coach.  I make my bread and butter by helping companies become Agile.  However, I wouldn’t be doing anyone any favors if I said that Agile is always the solution.  Sometimes it’s not.  One of my favorite discussions is to raise this opinion with other Agile enthusiasts.  A few will take the question as sacrilege, but to me it’s simply being practical.

I would suggest a highly customized approach to Agile in certain situations.  The first that comes to mind is hearing that the PMP (a project management certification) structure was developed from the processes used to build nuclear submarines.  I would imagine building something like that requires significant portions of the requirements to be identified and fleshed out upfront.

More recently, I was in discussion about companies in highly regulated fields.  One example was software for the medical field, where both the federal government and each state have a say about what it can, should, must, and mustn’t do.  In this situation, you can’t just go into a room with a high-level concept and walk out with ready-for-development stories, which one Agile framework suggests you do.

8. An Agile Transformation Should Be Incrementally Introduced

It’s advisable to introduce your Agile transformation incrementally.  While the goal of an Agile transformation is generally the ability to react to changes in the market (which requires active business engagement), it’s not uncommon (and is usually a good idea) for your company to initiate the Agile transformation with the technology teams first.  Get that piece humming, then bring in other parts of the company.

I’ve been on one Agile transformation where they decided the entire business unit would “go Agile” overnight.  When I stepped into this effort after two years of “going Agile,” they were still unable to release software reliably, which was an indication of multiple other issues that had never been addressed.  They were following a framework without realizing that the framework made some assumptions that simply weren’t true for them.

In Conclusion

As you consider an Agile transformation for your client, it's important to understand some of the elements to improve the chances of success, as well as issues to look out for. If you have more questions about hiring an Agile coach for your own enterprise, you can reach out to GlobalLogic at info@globallogic.com or by filling out the "Let's Work Together" form at the bottom of this page.

Clients interact with customer care representatives from the insurance industry every day. These recorded phone conversations are a valuable source of information for the kinds of questions consumers ask and what their preferences are when purchasing an insurance plan.

How can the insurance sector employ cluster analysis with its call data in order to better serve its clients? Learn how customer phone calls can be converted to a transcript, cleaned of unnecessary words, converted to a vectorized form, and modelled using KMeans and Word2Vec.
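
Below is a minimal sketch of that flow, assuming transcripts have already been produced by a speech-to-text step. The library choices (gensim for Word2Vec, scikit-learn for KMeans) and the sample data are illustrative, not taken from the original article.

```python
import numpy as np
from gensim.models import Word2Vec
from sklearn.cluster import KMeans

# Hypothetical transcripts; in practice these come from recorded calls.
transcripts = [
    "i want to renew my car insurance policy",
    "how do i file a claim for storm damage",
    "what is the premium for a term life insurance plan",
]

# 1. Clean: lowercase, tokenize, and drop unnecessary (stop) words.
stopwords = {"i", "my", "a", "the", "do", "to", "for", "is", "how", "what"}
tokenized = [[w for w in t.lower().split() if w not in stopwords] for t in transcripts]

# 2. Vectorize: train Word2Vec, then average the word vectors per call.
w2v = Word2Vec(sentences=tokenized, vector_size=50, min_count=1, seed=42)
call_vectors = np.array(
    [np.mean([w2v.wv[w] for w in tokens], axis=0) for tokens in tokenized]
)

# 3. Model: group similar calls into clusters with KMeans.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=42).fit(call_vectors)
print(kmeans.labels_)  # cluster assignment for each call
```

In a real engagement, the stop-word list, vector size, and cluster count would all be tuned against the actual call corpus.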

What Is a Secure Development Lifecycle?

The secure development lifecycle (SDL) is a set of best practices to support software security and compliance requirements. It covers every aspect of software development to implement security.

Why Is the Secure Development Lifecycle Important?

Why do we need the SDL in the world of Cloud Native, DevOps, and DevSecOps, where everything moves fast and there is no time to follow a heavyweight process like the SDL? Just implement, integrate, and automate SAST, DAST, SCA, and pen testing, and we should be good on the security side. Right?

It’s not that simple.

It is important to build security in from the ground up in today’s complex and changing threat landscape. SDL provides a framework and best practices to implement security and privacy controls and considerations throughout all phases of development.

Modern development processes and methodologies have changed the way we build software. However, we must still follow each development step: formally thinking through the requirements, architecture, and design; testing the software; writing test cases and scripts; and so on. These processes are critical, but implementing them without a foundational process or framework cannot ensure secure software. Instead, it overwhelms a development team that is probably already struggling to meet release timelines while fixing security bugs. So why skip the foundational step of building secure software?

Microsoft Secure Development Lifecycle

Of the many SDL models, the Microsoft Security Development Lifecycle (MS SDL) is the most widely used. It has five capability areas with four levels of maturity. Microsoft published the SDL in 2008 and regularly updates it based on its growing experience and new industry trends.

(Figure: Microsoft Security Development Lifecycle. Source: Microsoft)

Lessons Learned

At GlobalLogic, we’ve successfully implemented MS SDL in multiple projects, improving the overall security of the related software. Our point of view and takeaways based on the experience we gained from these projects include the following:

It’s Challenging To Get Started

Starting with the SDL can be complex and overwhelming. Teams struggle to implement the basics of the SDL mainly because the outcome is difficult to measure at the beginning. Our recommendation is not to fall into the trap of chasing immediate value-add, but to keep following the best practices and be patient.

Plan incremental goals rather than trying to achieve too many things at once. It helps to assess the baseline of the current state so that improvements can be measured.

Avoid Focusing Only on Security Hygiene

Most teams focus only on security hygiene (such as the OWASP Top 10) and consider software secure if it takes care of these items. But software security goes beyond this. A team must focus on security functionality and ask how each piece of functionality will impact security; for example, the account lock functionality triggered by three or more incorrect login attempts (sketched below). Some of these requirements can be standard or generic, while others require analysis from the hacker’s perspective.
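
As a rough illustration of such security functionality, here is a minimal sketch of an account-lock rule in Python; the names (MAX_FAILED_ATTEMPTS, LoginGuard) are hypothetical, not from a specific product.

```python
from dataclasses import dataclass, field

MAX_FAILED_ATTEMPTS = 3  # lock after three or more incorrect attempts

@dataclass
class LoginGuard:
    failures: dict[str, int] = field(default_factory=dict)
    locked: set[str] = field(default_factory=set)

    def is_locked(self, user: str) -> bool:
        return user in self.locked

    def record_failure(self, user: str) -> None:
        self.failures[user] = self.failures.get(user, 0) + 1
        if self.failures[user] >= MAX_FAILED_ATTEMPTS:
            self.locked.add(user)  # unlocking would need an admin action or timeout

    def record_success(self, user: str) -> None:
        self.failures.pop(user, None)  # reset the counter on a good login

guard = LoginGuard()
for _ in range(3):
    guard.record_failure("alice")
assert guard.is_locked("alice")
```

The point is that this behaviour is a security feature to be designed and tested deliberately, not something a hygiene checklist alone would surface.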

Consistency is Key

Security is a continuous process and the team must be consistent in whatever they implement. Do not fall into the trap of one-time achievement mode, as it will create a false sense of security.

Consider the Organization’s Risk Appetite

Not all security risks are equal, and not all applications carry the same risks. The team must understand the organization’s risk appetite and address risks accordingly.

Defined Roles and Responsibilities Help

When a team is focused on processes and tools for security, it can miss the importance of roles and responsibilities. It is essential to define clear security roles and responsibilities: who is the security champion, who is responsible for fixing security bugs, and so on.

Skills Training is Ongoing

Make sure that your team understands security and has the required skills. If not, have a strong training plan to bridge the gap. We have seen many teams fail in implementing SDL because of a lack of skills.

Conclusion

These are some of the key learnings and takeaways based on our experience. We will be publishing detailed implementation steps in future papers to help you start and mature your Secure Development Lifecycle.

Introduction

“True happiness comes only by making others happy.” - David O. McKay

Taking a lead from the quote above, a data platform can be truly happy if it can make others happy. “Others” in this context would be the actors/teams with whom the data platform interacts. Below are the key actors that typically have interactions with the data platform:

  • Data Engineers
  • Data Consumers
    • Data Analysts
    • Data Scientists/Machine Learning Engineers
    • External Data Consumers like partners & data buyers
  • DataOps Engineers
  • Data Stewards & Admins (for Data Governance)

This blog identifies the common expectations that Data Engineers and Data Consumers have for a data platform, and it demonstrates how to meet these expectations. DataOps and Data Governance are also extremely important aspects of a comprehensive, end-to-end data platform, so we will cover the perspectives of DataOps Engineers and Data Stewards & Admins in Part II of this blog series.


Great (User) Expectations

Happy Data Engineers

Data Engineers typically expect the following from a data platform:

  • If something has already been done, I should not waste my time recreating it.
  • I should have the means to discover and re-use existing data platform components (e.g., extractors, transformers, loaders, connectors, etc.) and data assets (e.g., ingested data sets).
  • I should have access to a framework that allows me to stitch together modular components, reusing the ones already available (a minimal sketch of such a framework follows this list).
  • I don’t expect all data pipeline scenarios to be covered by the existing components alone. I know there might be a need to extend existing components or create new ones. I want the framework to allow me to extend and create components and stitch them together when building new pipelines.
  • For me to do that effectively, I should know exactly how components must be created so that pipelines can be stitched together cleanly.
  • I want to be able to create components in a manner that allows my work to be utilized not only by me, but also by the larger data engineering community within the organization.
  • I want high-level CI/CD integration, including easy access to the resources and services needed to start the job, as well as the ability to move data pipelines across different environments.
  • I would like to have the ability to store versions of the pipeline in a smooth, integrated manner.
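
A minimal sketch of what such a registry-based framework could look like follows; all names here (component, registry, build_pipeline) are illustrative assumptions, not part of a specific product.

```python
import csv
from typing import Callable, Iterable, Iterator

# Discoverable registry of reusable pipeline components.
registry: dict[str, Callable] = {}

def component(name: str):
    """Register a component so other engineers can discover and reuse it."""
    def wrap(fn: Callable) -> Callable:
        registry[name] = fn
        return fn
    return wrap

@component("csv_extractor")
def extract_csv(path: str) -> Iterator[dict]:
    with open(path, newline="") as f:
        yield from csv.DictReader(f)

@component("uppercase_transformer")
def to_upper(rows: Iterable[dict]) -> Iterator[dict]:
    for row in rows:
        yield {k: str(v).upper() for k, v in row.items()}

def build_pipeline(source: str, steps: list[str]):
    """Stitch registered components together from a simple config."""
    data = registry[steps[0]](source)
    for name in steps[1:]:
        data = registry[name](data)
    return data

# Usage: reuse existing components, or register new ones, then stitch.
# rows = build_pipeline("calls.csv", ["csv_extractor", "uppercase_transformer"])
```

New components plug in by decorating a function with @component, which covers both the reuse and the extensibility expectations above.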

Happy Data Consumers

Data consumers typically expect the following from a data platform:

  • I should know exactly which Golden Records/Versions of processed data already exist.
  • I should be able to verify that the data is trustworthy and fit for purpose. (E.g., I should be able to check the lineage to confirm that I am looking at the appropriate data for the requirement.)
  • I should know the exact process required to access the data sets.
  • Based on the exact needs of the use case, I should be able to leverage different access patterns such as streaming, bulk export/copy, queries, and APIs.
  • If I need a new data set, I should be able to get it serviced quickly.
  • I should be able to share datasets and collaborate with other users.
  • I should be able to add custom metadata such as tags and comments (see the catalog sketch after this list).
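
A minimal sketch of a catalog that could back these expectations is below; the structures (DatasetEntry, catalog) are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    name: str
    golden_version: str            # which Golden Record/Version this is
    lineage: list[str]             # upstream sources, to verify fitness for purpose
    tags: set[str] = field(default_factory=set)       # custom metadata
    comments: list[str] = field(default_factory=list)

catalog: dict[str, DatasetEntry] = {}

def register(entry: DatasetEntry) -> None:
    catalog[entry.name] = entry

def discover(tag: str) -> list[str]:
    """Let consumers find out which processed data sets already exist."""
    return [e.name for e in catalog.values() if tag in e.tags]

# Usage: a consumer checks lineage before trusting a data set.
register(DatasetEntry("claims_gold", "v3", ["claims_raw", "policy_raw"], {"insurance"}))
entry = catalog["claims_gold"]
print(entry.golden_version, entry.lineage)  # v3 ['claims_raw', 'policy_raw']
```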


Building a Happy Data Platform 

“Efforts and courage are not enough without purpose and direction.” – John F. Kennedy

Approach 1: Build for Platform Feature

Below is a traditional, technology-driven approach:

  1. Ingest all the data from multiple different systems.
  2. Build tightly coupled pipelines for each use case, from ingestion to data processing and storage.
  3. Some approaches work towards generating all possible components for extracting, processing, loading, and exposing data — including batch and stream processing.

However, there are a few issues with this approach:

  • It doesn’t take long for a user to accumulate a lot of data in the lake without knowing what to do with it. The data lake turns into a data swamp, and it becomes increasingly difficult to derive value from the data.
  • Tightly coupled pipelines offer limited reuse.
  • Return on investment and time to value might be a big challenge.

Approach 2: Build for Purpose

Below is an approach driven by business needs:

  • Create a framework that allows extensibility, modularity, and flexibility by using configurations, templates, etc.
  • Explore and discover already existing data and data platform component assets that can be reused.
  • Implement specific, prioritized, business-driven use cases by leveraging the framework — creating reusable data platform component assets.
  • Get the needed data for the specific use case.
  • Platform components should be created based on the framework to allow reuse.
  • Build Data Apps like Data Validator, Schema Mapper, etc.

While a DataOps mindset is a complete topic unto itself, it is worth mentioning on a high level that it is important to bring a DevOps and Agile approach to a data project. DataOps encompasses all aspects, including infrastructure management, services setup and management, environment setup, access management of data and components, quality, security and compliance, deployments, version control, and monitoring.

Paying attention to the high-level team setup also enables you to clearly separate team concerns:

  • The Core Platform team works on architecting, designing, and creating technical components for the data platform (e.g., data extractors, loaders, processors and transformers, CI/CD, Infrastructure-as-Code, etc.).
  • The Use Case Implementation team stitches together a pipeline using the components created by the Core Platform team; configures/extends it as needed; and writes the domain/business logic specific to the use case (see the sketch after this list).
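
As a rough illustration of this split, and building on the registry sketch above, the Use Case Implementation team’s contribution could be as small as a config plus the use-case-specific logic; the names below are again hypothetical.

```python
# Written by the Use Case Implementation team: configuration that reuses
# components shipped by the Core Platform team, plus domain logic.
use_case_config = {
    "source": "claims.csv",
    "steps": ["csv_extractor", "uppercase_transformer"],
}

def flag_high_value_claims(rows):
    """Business logic specific to this use case."""
    for row in rows:
        row["HIGH_VALUE"] = float(row.get("AMOUNT", 0)) > 10_000
        yield row
```

Everything generic stays in the platform; only the config and the domain function vary per use case.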


Accelerate Your Own Data Journey

The objective of a data platform is ultimately to enable purposeful, actionable insights that lead to business outcomes. If the data platform also puts the right emphasis on the journey and process (i.e., how it can make the job easier for its key actors while delivering the prioritized projects), then it will deliver an ecosystem that is fit for purpose, minimizes waste, and enables a “reuse” mindset.

At GlobalLogic, we are continuously improving our Data Platform Accelerator, which is based on a similar approach. This digital accelerator enables enterprises to immediately manifest a solution that can gather, transform, and enrich data from across their organization. We are excited to work with our clients to accelerate their data journeys, and we would be happy to discuss your own needs through the below contact form.

Introduction

The truth is, when COVID hit, reliance on the cloud to solve an enormous number of business problems spanned all industries.

The pandemic not only validated cloud’s value proposition; as Sid Nag, research vice president at Gartner, puts it, “the ability to use on-demand, scalable cloud models to achieve cost efficiency and business continuity is providing the impetus for organisations to rapidly accelerate their digital business transformation plans.” Nag also notes that “the increased use of public cloud services has reinforced cloud adoption to be the ‘new normal,’ now more than ever.”

According to Gartner, the aftermath of the COVID crisis will spark an acceleration of IT spending in the cloud – with cloud predicted to make up 14.2% of the total global enterprise IT spending market by 2024, up from 9.1% in 2020.

Of course, cloud is not the only solution available.

In fact, there remains a huge appetite to use a combination of a public cloud and a private environment, an approach known as hybrid cloud. Adopting multiple deployment models is popular, with more than 90% of global enterprises expected to rely on hybrid cloud by 2022.

The appetite for cloud is there. The problem lies not in whether cloud is used, but in how it is used.

When you consider the pace of IT and BCP (business continuity planning) decision-making since March 2020, you begin to question the longevity of the solutions put in place. Are the chosen models resilient, capable of supporting changing business-user and customer demands, and affordable? Or do they resemble a sticking plaster, solving one problem in the moment?

If you’re unsure, or you fall into bucket two, keep reading.

Well-Architected Review designed for the cloud era

From experience, it’s cheaper and easier to remedy problems if you can answer the above questions sooner rather than later. This is why GlobalLogic has partnered with AWS to provide a free Well-Architected Review.

GlobalLogic has a wide range of experience helping large enterprise customers adopt Public Cloud in a safe, reliable and scalable manner. We have often found that customers aren’t always aware of the specific gaps between their environment and Cloud best practices. Where gaps are discovered, it can also take time for adjustments to be implemented because customers are sometimes nervous about making changes or don’t give them enough priority.

The Well-Architected Review provides a quick and targeted analysis based on the AWS Well-Architected Framework to help highlight specific gaps between the current state and AWS best practices.

GlobalLogic, in partnership with AWS, is offering to conduct Well-Architected Reviews for free for strategic customers. In addition, AWS is offering up to $5,000 in service credits per workload addressed, as described below.

Why carry out a Well-Architected Review?

The AWS Well-Architected Review offering from GlobalLogic benefits customers, AWS, and GlobalLogic alike.

What exactly is the Well-Architected Framework and Review?

The AWS Well-Architected Framework is a set of standards defined by AWS for benchmarking workloads and their environments against the following pillars:

  • Operational Excellence – The ability to run and monitor systems to deliver business value and to continually improve supporting processes and procedures.
  • Security – The ability to protect information, systems, and assets while delivering business value through risk assessments and mitigation strategies.
  • Reliability – The ability of a system to recover from infrastructure or service disruptions, dynamically acquire computing resources to meet demand, and mitigate disruptions such as misconfigurations or transient network issues.
  • Performance Efficiency – The ability to use computing resources efficiently to meet system requirements, and to maintain that efficiency as demand changes and technologies evolve.
  • Cost Optimisation – The ability to run systems to deliver business value at the lowest price point.

More information about the framework can be found on the AWS website.

The Well-Architected Review is best targeted at a production workload, which could be running in AWS, on-prem, or on another cloud provider. The review assesses the workload’s environment, operations, and deployment mechanisms against those five pillars to provide detailed insight into exactly where gaps, if any, exist.

GlobalLogic uses its experience and expertise to suggest practical ways to close these gaps in a timely and cost-effective manner.

Is the Well-Architected Review for me?

If you’re using cloud, yes.

Having launched the Well-Architected Review offering in late 2020, GlobalLogic has successfully carried out pilots in-house as well as a number of reviews for customers. These reviews provided customers with assurance in areas where they were already meeting best practices. They also highlighted areas of improvement with suggested remediation actions.

In one case, a Tier 1 Financial Services company was initially sceptical of how useful the AWS Well-Architected tool could be for them. However, once the review got going, they quickly understood the extra value GlobalLogic were bringing to the process. With our broad experience in financial services and cloud, the client was able to tap into our expertise during the review, and were particularly impressed with the backlog of remediation tasks that were generated as a result.

This backlog helped them secure quick wins and prioritise actions to ensure the most effective cloud strategy for their business, user and customer needs. They also took comfort that best practice guidelines were being met and operations were being undertaken in a secure and reliable environment.

In some cases, customers have chosen GlobalLogic to help with implementing the remediation activities which also made them eligible to receive the $5000 service credit from AWS per workload reviewed.

Next Steps

GlobalLogic is looking to scale out this free offering in 2021 and would like to invite customers to take advantage of the offer. If you have a production workload you would like reviewed, book an introductory call with a member of the team by filling out the "Let's Work Together" form below. You can also download our AWS Well-Architected Fact Sheet to learn more about how the offer works.
