Archives

Analytic Process Automation (APA) is data analytics software that optimizes insurance industry business processes. APA's three main aspects are democratizing data and analytics, automating processes, and upskilling people. It helps make your data work for you and shifts employees' focus away from repetitive tasks, creating time for upskilling. 

Additionally, APA can automate time-consuming processes like claim management and underwriting. Incorporating APA into your business operations can help your company overcome main challenges in insurance, such as mismanaged resources, operational blockades, and data crunches. Learn about the critical components of APA and how to incorporate them into your company effectively.

Industry 4.0 is streamlining the incorporation of automation and technology to improve smart machine capabilities. Artificial intelligence, machine learning, and data analysis are the foundation of smart machines, which help create smart spaces in factories. In addition, these resources enhance the efficiency of data flow to management and help keep their workforce safe.

Low-power wide-area networks, 5G Networks, Edge Computing, and AI are improving the functionality and application of smart machine technology to put the control of the factory and its output in the factory leader’s hands. Read about the technological improvements these smart machines can bring to your company and the use cases where Industry 4.0 technology can improve your factories.

Introduction

Over the last few decades, huge amounts of data have been generated from many different types of sources. Enterprises increasingly want to utilize new-age data paradigms to drive better decisions and actions, giving them an opportunity to increase efficiencies, enable newer ways of doing business, and optimize spending.

However, a lot of companies are struggling with data issues because of the advanced technological stacks involved and the complex data pipelines that keep changing due to newer business goals. It has become imperative to leverage best practices for implementing data quality and validation techniques to ensure that data remains usable for further analytics to derive insights.

In this blog, we look at the data quality requirements and the core design for a solution that can help enterprises perform data quality and validation in a flexible, modular, and scalable way.

Data Quality Requirements

A data platform integrates data from a variety of sources to provide processed and cleansed datasets that comply with quality and regulatory needs to analytical systems so that insights can be generated from them. The data being moved from the data sources to the storage layers need to be validated, either as part of the data integration pipeline itself, or independently compared between the source and the sink.

Below are some of the requirements that a data quality and validation solution needs to address:

  • Check Data Completeness: Validate the results between the source and target data sources, such as:
    • Compare row counts
    • Compare the output of column value aggregations
    • Compare a subset of data without hashing, or the full dataset with SHA256 hashing of all columns
    • Compare profiling statistics like min, max, mean, and quantiles


  • Check Schema/Metadata: Validate results across the source and target, or between the source and an expected value.
    • Check column names, data type, ordering or positions of columns, data length


  • Check Data Transformations: Validate intermediate actual values against expected values.
    • Check custom data transformation rules
    • Check data quality, such as whether data is within a range, present in a reference lookup, matches a domain value, or whether the row count matches a particular value
    • Check data integrity constraints such as not null, uniqueness, and no negative values


  • Data Security Validation: Validate different aspects of security, such as:
    • Verify that data complies with applicable regulations and policies
    • Identify security vulnerabilities in the underlying infrastructure, tools leveraged, or code that can impact data
    • Identify issues at the access, authorization, and authentication levels
    • Conduct threat modeling and test data at rest and in transit


  • Data Pipeline Validation: Verify pipeline-related aspects, such as whether:
    • The expected source data is picked up
    • The requisite operations in the pipeline (e.g., aggregation, transformations, cleansing) are performed as per requirements
    • The data is being delivered to the target


  • Code & Pipelines Deployment Validation: Validate that the pipelines and code have been deployed correctly in the requisite environment.

In addition to these checks, the solution itself needs to:

    • Scale seamlessly for large data volumes
    • Support orchestration and scheduling of validation jobs
    • Provide a low-code approach to define data sources and configure validation rules
    • Generate a report that provides details about the validation results across datasets for the configured rules
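To make a few of the checks above concrete, here is a minimal, framework-free Python sketch of row-count, aggregate, hashed-row, schema, and column-integrity comparisons. In a production solution these checks would run on a distributed engine such as Spark; all function and field names here are illustrative, not part of any real product.

```python
import hashlib

def check_completeness(source, target, columns, agg_column):
    """Completeness checks between source and target datasets
    (each a list of dicts): row count, column aggregate, row hashes."""
    def hashes(rows):
        # SHA256 of each row over the chosen columns, compared as a set
        return {hashlib.sha256(
                    "|".join(str(r[c]) for c in columns).encode()
                ).hexdigest() for r in rows}
    return {
        "row_count": len(source) == len(target),
        "aggregate": sum(r[agg_column] for r in source)
                     == sum(r[agg_column] for r in target),
        "row_hashes": hashes(source) == hashes(target),
    }

def check_schema(actual, expected):
    """Schema check: column names, ordering, and data types,
    given ordered lists of (name, type) pairs."""
    return actual == expected

def check_integrity(rows, column, *, unique=False, value_range=None):
    """Integrity checks on one column: not null, plus optional
    uniqueness and range. Returns a list of violations (empty = pass)."""
    violations = []
    values = [r.get(column) for r in rows]
    if any(v is None for v in values):
        violations.append(f"{column}: null value present")
    if unique and len(set(values)) != len(values):
        violations.append(f"{column}: duplicate values present")
    if value_range:
        lo, hi = value_range
        if any(v is not None and not lo <= v <= hi for v in values):
            violations.append(f"{column}: value outside [{lo}, {hi}]")
    return violations
```

For example, comparing a two-row source against the same rows in a different order passes all three completeness checks, while a target with a missing row fails the row-count check.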


High-Level Overview of the Solution

Below is a high-level design for a data quality and validation solution that addresses the above-mentioned requirements.

  • Component Library: Generalize the commonly used validation rules as stand-alone components that can be provided out of the box through a pre-defined Component Library.


  • Custom Components: For advanced users or for certain scenarios, custom validation rules might be required. These can be supported through an extensible framework that allows new components to be added to the existing library.


  • Job Configuration: A typical QA tester prefers a low-code way of configuring the validation jobs without having to write code. A JSON or YAML-based configuration can be used to define the data sources and configure the different validation rules.


  • Data Processing Engine: The solution needs to be able to scale to handle large volumes of data. A big data processing framework such as Apache Spark can be used to build the base framework. This will enable the job to be deployed and executed in any data processing environment that supports Spark.


  • Job Templates: Pre-defined job templates and customizable job templates can provide a standardized way of defining validation jobs.


  • Validation Output: The output of the job should be a consistent validation report that provides a summary of the validation rules output across the data sources configured.
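As an illustration of the low-code job configuration and consistent validation report described above, here is a hypothetical Python sketch. The JSON fields, rule names, and report shape are all invented for illustration; a real implementation would define its own schema.

```python
import json

# Hypothetical low-code job configuration: data sources and validation
# rules declared as data rather than code. Field names are illustrative.
JOB_CONFIG = json.loads("""
{
  "source": {"type": "jdbc", "table": "orders"},
  "target": {"type": "s3", "path": "s3://bucket/orders/"},
  "rules": [
    {"check": "row_count"},
    {"check": "not_null", "column": "order_id"},
    {"check": "range", "column": "amount", "min": 0, "max": 100000}
  ]
}
""")

def summarize(results):
    """Collapse per-rule pass/fail results into a summary validation report."""
    passed = sum(1 for r in results if r["passed"])
    return {
        "total_rules": len(results),
        "passed": passed,
        "failed": len(results) - passed,
        "status": "PASS" if passed == len(results) else "FAIL",
    }
```

A QA tester would only edit the JSON; the engine interprets each rule and feeds the per-rule outcomes into `summarize` to produce the final report.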


Accelerate Your Own Data Quality Journey

At GlobalLogic, we are working on a similar approach as part of our GlobalLogic Data Platform. The platform includes a Data Quality and Validation Accelerator that provides a modular and scalable framework that can be deployed on cloud serverless Spark environments to validate a variety of sources.

We regularly work with our clients to help them with their data journeys. Tell us about your needs through the contact form below, and we would be happy to talk to you about next steps.


I had an opportunity recently to play with test cases and asked my colleague, “What do I need to test?”

He said, “Mate, this is a unit test, and you need to decide the test cases according to the request and response, which should cover all the scenarios.”

This presented a dilemma for me, so I decided to write this complete guide for test cases. Let’s begin with my first question.

What is a Test Case?

In their simplest form, test cases are the set of conditions under which a tester determines whether the software satisfies requirements and functions properly. In layman’s terms, these are predefined conditions to check that the output is correct.

What Do I Need to Test?

There is usually a simple answer to this question: use a coverage package, which measures code coverage during test execution. You can learn more about this in its official documentation. Unfortunately, this was not the case in my situation.

The second approach is fairly straightforward. Typically, test cases are written by the developer of the code – and if you are the developer of the code, you are well aware of the flow of the code. In this situation, you need to write your test cases around the request and expected response of the code.

For example, if you are writing test cases for the division of a number, you must think about the code's expected input and expected output.

Test-driven Development Definition: "Test-driven development (TDD) is a software development process relying on software requirements being converted to test cases before software is fully developed, and tracking all software development by repeatedly testing the software against all test cases."
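To ground that definition, here is a minimal sketch of the "test first" idea using Python's standard unittest module. The `divide` function and test names are illustrative: in TDD you would write `DivideTestCase` first, watch it fail, then write just enough of `divide` to make it pass.

```python
import unittest

# Written second in a TDD cycle, with only enough logic
# to satisfy the tests below.
def divide(a, b):
    if b == 0:
        raise ValueError("cannot divide by zero")
    return a / b

# Written first: the expected behavior, captured as test cases.
class DivideTestCase(unittest.TestCase):
    def test_exact_division(self):
        self.assertEqual(divide(10, 2), 5)

    def test_zero_divisor_raises(self):
        with self.assertRaises(ValueError):
            divide(1, 0)
```

Running the module with `python -m unittest` executes both tests; the red-to-green transition happens the moment `divide` handles the zero divisor.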

Django’s unit tests use a Python standard library module called unittest. The examples below show tests using a class-based approach.

How to Start Writing Test Cases

Here is an example of a test class and its methods, which we can use as a starting point for test cases.

class ProfileTestCase(TestCase):

    def setUp(self):
        pass

    def test_my_test_1(self):
        self.assertTrue(False)

    def test_my_test_2(self):
        self.assertTrue(False)

    def tearDown(self):
        pass

The above is a general test template for writing test cases in Django (Python).

In this template, TestCase is one of the most important classes provided by the unittest module, and it provides the foundation for testing our functions.

setUp is the first method run for each test. It sets up the standard objects required by each test method, which we can use throughout the testing class.

The tearDown method always runs last, after each test. It can delete objects or tables created while testing and cleans the testing environment after a test completes.

Now, let’s write out the test case:

class CourierServices(TestCase):

    def setUp(self):
        self.courier_data = CourrierModel.objects.all()
        self.url = '/courier/service/'  # the URL we are going to hit for the response

    def test_route(self):
        response = self.client.get(self.url)
        self.assertEqual(response.status_code, 200)  # check for status 200

    def test_zipcode(self):
        zip_code = "110001"
        query_params = {'zip_code': zip_code}
        # hit the URL (self.url) with the query parameters (data=query_params)
        # and collect the response
        response = self.client.get(self.url, data=query_params)
        # compare the status code we get from the URL with 200
        self.assertEqual(200, response.status_code)
        response_json = response.json()
        results = response_json.get('results', [])
        self.assertIsInstance(results, list)
        self.assertEqual(results[0]['zip_code'], zip_code)

Here is another valuable example: testing one of the most common pieces of code, the login function:

class LoginTest(TestCase):

    def setUp(self):
        self.user = get_user_model().objects.create_user(
            username='test', password='test123',
            email='test@test.com', mobile_no=1234567890)
        self.user.save()

    def test_correct_user_pass(self):
        user = authenticate(username='test', password='test123')
        self.assertTrue(user is not None and user.is_authenticated)

    def test_wrong_username(self):
        user = authenticate(username='fakeuser', password='test123')
        self.assertFalse(user is not None and user.is_authenticated)

    def test_wrong_password(self):
        user = authenticate(username='test', password='fakepassword')
        self.assertFalse(user is not None and user.is_authenticated)

    def tearDown(self):
        self.user.delete()

Note: A test method passes only if every assertion in the method passes. Now, you may be wondering, What do these assertions mean, and how do you know which ones are available? I will try to answer these questions as thoroughly as possible.

Here are some commonly used assertion methods:


Method Meaning
assertEqual(a, b) a==b
assertNotEqual(a, b) a != b
assertTrue(x) bool(x) is True
assertFalse(x) bool(x) is False
assertIs(a, b) a is b
assertIsNot(a, b) a is not b
assertIsNone(x) x is None
assertIsNotNone(x) x is not None
assertIn(a, b) a in b
assertNotIn(a, b) a not in b
assertIsInstance(a, b) isinstance(a, b)
assertNotIsInstance(a, b) not isinstance(a, b)

These methods are powerful, but sometimes an exact match isn’t required.

For example, how do I test that x - y is almost zero? This is where the following assertion methods can help. I see them as “lifesaver” methods.


Method Meaning
assertAlmostEqual(a, b) round(a-b,7)==0
assertNotAlmostEqual(a,b) round(a-b,7)!=0
assertGreater(a, b) a>b
assertGreaterEqual(a,b) a>=b
assertLess(a, b) a<b
assertLessEqual(a, b) a<=b
assertRegex(s, r) r.search(s)
assertNotRegex(s, r) not r.search(s)
assertCountEqual(a, b) a and b have the same elements in the same number, regardless of their order.
assertListEqual(a, b) Compares two lists
assertTupleEqual(a, b) Compares two tuples
assertSetEqual(a, b) Compares two sets
assertDictEqual(a, b) Compares two dictionaries
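A few of these assertion methods in action, as a small self-contained unittest example (the test class name is illustrative):

```python
import unittest

class AssertionExamples(unittest.TestCase):
    def test_almost_equal(self):
        # 0.1 + 0.2 is not exactly 0.3 in binary floating point...
        self.assertNotEqual(0.1 + 0.2, 0.3)
        # ...but round((0.1 + 0.2) - 0.3, 7) == 0, so this passes.
        self.assertAlmostEqual(0.1 + 0.2, 0.3)

    def test_count_equal_ignores_order(self):
        # Same elements with the same multiplicities, order ignored.
        self.assertCountEqual([1, 2, 2, 3], [3, 2, 1, 2])

    def test_dict_equal(self):
        # Dicts compare by key/value pairs, not insertion order.
        self.assertDictEqual({"a": 1, "b": 2}, {"b": 2, "a": 1})
```

All three tests pass when run with `python -m unittest`.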

Now that we know how to write the test cases, let me show you how to run them. Running test cases is easy in Django (Python).

Write your test cases in a module, then go to the terminal and run this command:

python -m unittest my_test_module_1 my_test_module_2

If you want to run a test class, then use:

python -m unittest my_test_module_1.TestClass

If you want to run a single test method, run this:

python -m unittest my_test_module_1.TestClass.my_test_method

You can also run a test case by file path:

python -m unittest tests/my_test_testcase.py

Sometimes, we want to run the test cases via docker. For that, you can use the following method.

  1. First, go inside your web container using exec:

docker exec -it my-own-services_web_1 /bin/bash

  2. Then you will get a command prompt like this:

 runuser@123456789:/opt/project123$

Note: You need to check your docker-compose.yaml and see the volume path. It will look something like this - .:/opt/app and it may change in your case.

 python3 manage.py test test_folder.sub_folder.test_views.YourTestCases --settings=docker.test_settings

I hope this blog inspires you to start coding with the TDD approach, which will help make your code bug-free and robust too.

Remember the Golden Rules of TDD

  • Write production code only to pass a failing unit test.
  • Write no more of a unit test than is sufficient to fail (compilation failures are failures).
  • Write no more production code than is necessary to pass the one failing unit test.

The next blog will cover this in more detail.

User-centric design is essential to the dealer, workshop, and automotive brand’s ability to provide the best possible driving experience for the customer. It is vital to consider the sounds, smells, and feelings of the different interactions drivers may experience – such as the acceleration potential, for example – to provide the perfect driving experience.


As developers incorporate more software into the design model, there is greater potential to control the driver's experience. Software developers can help create a positive experience for drivers with the vehicle’s technology, such as the display screen, touch panel, sound equipment, and the comfort of the vehicle through temperature monitoring and ventilation.

Software developers can also provide active sound insulation, suspension steering, battery usage optimization, online services, and artificial intelligence to assist the driver. Once the vehicle is purchased, software can still assist the driver with remote updates such as new features or content updates, active safety setups, and updates for the communication technology that helps assess the driver's surroundings.

As the importance and quantity of a vehicle’s software increases, we must focus on what that would mean for the requirements, quality, and management of the vehicle’s software through its entire life cycle to ensure it maintains an efficient user-centric design. Therefore, to build a final plan for the necessary software components, the following software production phases need to be included:

  • Architectural Design
  • User Experience (UX) and User Interface (UI) Design
  • Define Requirements and Engineering
  • Software Development
  • Testing and Maturation
  • Integration, Scaling, and Maturation
  • Support and Maintenance

Automotive companies constantly search for competitive advantages, innovative capabilities, and safer and more efficient ways to produce vehicles. As a result, automotive brands gain value through software-defined products and by creating additional R&D efforts to improve customer satisfaction and increase revenue and shares.

In recent years, GlobalLogic has begun incorporating a great deal of Human-Machine Interface (HMI) technology into automotive applications as it is the best way to assess the user's experience. Furthermore, the quantity of information displayed during a user's experience is constantly growing, increasing the complexity of UIs and making HMI a critical component of a car.

We work on the comprehensive process of HMI design and development, as well as complete advanced testing to ensure that the technology is user-friendly. We focus on memory consumption, startup time, frames speed and rendering acceleration, animations, speech recognition, voice control, face recognition, and eye-tracking or gesture recognition to optimize the HMI technology. We frequently equip display devices with augmented and virtual reality, as well.

All phases within a specific software domain require skilled engineering resources, specific equipment, labs, and sufficient safety and security. In addition, to satisfy the user's expectations and achieve software quality and reliability, the implementation process must be conducted under Automotive Software Performance Improvement and Capability determination (A-SPICE) standards, providing traceability and compliance with the industry's numerous regulations and trends.

Behind the visible pieces of software, we use multiple hardware platforms with software architectures designed for high scalability, along with algorithms modeled in MATLAB or semi-automated code generated with AUTOSAR tooling and MATLAB. Typically, for this type of software, we use continuous development and integration techniques.

Finally, software developers validate all software components by hardware, software, or model-in-the-loop testing before testing the software on the road.

In conclusion, it is crucial to monitor and control software production throughout, to create the best possible user-centric experience and amplify the customer's experience with the automotive brand.

The transportation industry faces multiple challenges in offering safe and efficient products with low downtime, increased utilization, and lower carbon emissions. Businesses must also ensure that they comply with environmental regulations and numerous other transport regulations. They must develop their products in a balanced way, with social and environmental responsibility in mind. All of these challenges create a demand for specific technology solutions that are software-driven.


At GlobalLogic, we work on many projects for the transportation industry, with an extraordinary focus on road transport. Our clients are vehicle manufacturers, parts suppliers, and fleet transportation companies. Based on our experience, we highlight here the technologies that are currently impacting the transportation industry.

Driver Safety Systems

Whether for drivers, passengers, or pedestrians, personal safety is central to the vehicle design process. Vehicle manufacturers focus a significant amount of their resources on minimizing human injuries or fatalities and providing drivers with safe, comfortable driving experiences through Advanced Driver Assistance Systems (ADAS).

The ADAS leverages various technologies and applications to help drivers safely drive and park, improving road safety. Let’s explore a hypothetical scenario that uses ADAS technology.

  1. The intelligent vehicle uses computer vision technologies such as Light Detection and Ranging (LIDAR), Radio Detection and Ranging (RADAR), and cameras to view its surroundings.
  2. Vehicle-to-everything (V2X) technology enables the vehicle to instantly communicate with surrounding objects, infrastructure, and other vehicles. It can exchange data about dangers on the road, incidents, or blocked highways, obtain data on optimal driving routes, and minimize collisions by identifying nearby objects.
  3. These exchanges are made lightning-fast through high-speed communication links – either via ethernet inside the vehicle or 5G cellular networks outside the vehicle – as well as high computing power from internal computers, and edge processing in the network or delivered on-demand from the cloud.
  4. The vehicle can learn through Machine Learning (ML) and act through Artificial Intelligence (AI) thanks to advanced decision algorithms. These capabilities help the vehicle make decisions based on the driver’s route, adjust speed as needed, and act quickly in dangerous situations.

Not only do computer vision and ADAS technologies help protect drivers from their external surroundings, but they can also protect drivers from themselves. At GlobalLogic, we have worked on multiple projects that leverage internal cameras to create driver monitoring systems.

These systems can detect a driver’s alertness and ability to respond to a road’s conditions appropriately, whether drowsy or even just distracted from eating, smoking, or talking on the phone. Some vehicles can even recognize their assigned driver and check their vital signs. Through computer vision, ADAS, AI, and ML, we are getting closer to achieving a genuinely autonomous ride for drivers and automated loading, unloading, and docking for fleet operators.

Transportation Regulations

Businesses across the transportation industry hold themselves to high compliance standards with safety regulations because of the far-reaching impacts of vehicles and infrastructure. For example, all new vehicles introduced to the market must provide proof of their safety on the road and their low pollution emissions, which we will discuss further in the next section.

Fleet operators must comply with mobility and transport requirements. One example is Regulation (EU) No 165/2014 on tachographs in road transport, which states that fleet operators must install a tachograph device in vehicles to accurately record driving times and rest periods, avoiding driver fatigue and ensuring road safety. To accomplish this, fleet operators require both embedded and cloud software in compliance with their electronic tachographs and data analytics platforms.

Even tolling systems are subject to specific regulations such as ISO 17573-1:2019 Electronic fee collection, which requires transportation businesses to ensure that their digital vehicle identification systems and contactless payment systems are in compliance.

All these critical regulations mean that transportation software developers like GlobalLogic must ensure that software meets automotive standards like Automotive Software Performance Improvement and Capability determination (ASPICE) standards and principles of functional safety regulations like ISO26262/ASIL, as well as strict verification and validation procedures.

Vehicle Electrification

More and more automotive manufacturers are embracing electric vehicles for both the consumer and commercial marketplaces. Generally speaking, an electric powertrain minimizes carbon dioxide emissions by utilizing energy from renewable sources and storing it efficiently.

Electric vehicles rely on specific software and applications to better utilize power and manage energy storage. For example, GlobalLogic frequently works on battery management systems that help expand an electric vehicle’s range, achieve the best possible performance, and even recover energy during braking. On the other end, intelligent charging systems can communicate with a vehicle to optimize charging and provide drivers with complete transparency.

Manufacturers are also starting to use fuel cells powered by hydrogen instead of just drawing electricity from a battery. So, it is only a matter of time before commercial vehicles like buses, trains, and trucks become equipped with electric powertrains that use hydrogen fuel cells.

Software-Defined Vehicles

A vehicle as a source of data or software-defined vehicle offers many possibilities to optimize usage, assure durability, shorten downtimes, and increase productivity.  For example, Tesla can continuously, remotely deploy software updates and introduce new features and user interfaces. Among our clients, we’ve found high demand for software that enables over-the-air update campaigns.

Software-defined vehicles also enable manufacturers to monitor a vehicle’s data to quickly respond to bugs or proactively avoid them. Processing this data is now even easier, thanks to automotive players adopting the cloud. With almost unlimited storage space and computing power, the cloud enables vehicles to analyze enormous amounts of information. As a result, we’ve seen cloud adoption, big data processing, and AI help streamline telematics applications and other transportation services among our clients.

For fleet operators, software platforms can help them manage entire fleets, process log driving, dispatch information, and even provide predictive maintenance support. AI is a powerful tool that can help fleet operators:

  • Analyze fleet utilization
  • Automate road freight loads dispatching
  • Analyze drivers’ behaviors and driving styles
  • Monitor petrol usage and vehicle maintenance costs
  • Optimize business risks
  • Manage insurance, tolling, or dispatching policies

In addition, artificial intelligence helps streamline problems with under-utilized trucks by creating semi-automated dispatching plans or solving inefficient routing. GlobalLogic works on numerous projects located in the private and public clouds where advanced analytics, machine learning, and intelligence algorithms are in backend applications. For instance, we delivered a ridesharing platform for road freight for an innovative startup company.

We have developed fleet management and telematics platforms for vehicle data logging and remote diagnostics for other clients, as well. GlobalLogic is engaged in numerous marketplace portal projects where companies can search for trucks, parts, or services – and where intelligent algorithms support their choices. Regardless of the type of service, modern cloud-based telematics platforms delivered as software as a service (SaaS) offer high scalability, enable real-time processing, and deliver interactive analytics with a high level of security and reliability, aided by AI and ML.

Digital Transformation

Businesses across the transportation ecosystem, such as manufacturers, dealers, repair shops, transportation service providers, and aftermarket players, are leveraging digital accelerators like the Internet of Things (IoT) solutions and Digital Twins to streamline their daily operations. For example, Digital Twins creates a digital version of a physical item that users can virtually manipulate and analyze in various scenarios to generate data to see how it would act in real life. Gaining such insights without investing in a physical setup provides significant cost and time savings.

We have created solution accelerators to help dealerships and repair shops quickly adopt digital transformation through contactless payments, appointment booking, remanufactured parts workflows, automated warehousing, and distribution solutions. Additionally, we’ve developed a vehicle cockpit accelerator that enables vehicle manufacturers to rapidly and cost-effectively create a customizable in-vehicle infotainment (IVI) system. As mentioned earlier, vehicles are becoming more reliant on software, and powerful, engaging IVI systems are no longer a nice-to-have feature.

Another trend we’re seeing is the use of AI in knowledge transfer and vehicle maintenance. For example, we’ve worked with vehicle manufacturers to create virtual technical training tools that enable technicians to use virtual reality or augmented reality to get remote support. We’ve also built AI tools that can identify damaged parts based on their images.

In Conclusion

Automotive software has become the driving force behind the transportation industry, and artificial intelligence is at the forefront of this movement. From automotive original equipment manufacturers (OEM) to fleet operators, they all utilize AI and supporting technologies such as machine learning, IoT, big data, and analytics. These technological capabilities help create driving and transportation solutions that are just as engaging for drivers as they are safe and reliable.

Digital currencies are gaining more importance over time. However, digital currencies such as Bitcoin can be complicated to understand, and some may question the security of the transaction process, making it difficult to utilize effectively.

What would someone need to know to use Bitcoin, and how is it secure? Enhance your knowledge of Bitcoin concepts, such as blockchain and the different transaction types. You’ll also learn how Bitcoin’s security rests on its encryption and its detailed validation process.

Learning a new programming language is not easy, but the process is more manageable with proper guidelines and step-by-step instructions.


Go is a powerful and easy-to-learn language that any programmer can adopt without difficulty. It’s as simple as its name, Go (or Golang).

Like most programming languages, Go requires some initial setup. In this tutorial we’ll use Go with Gin, a framework that supports various coding tasks involved in building web applications, including web services. We’ll first use Gin to handle requests, get request details, and marshal JSON for responses.

Next, we’ll build a REST API server with two endpoints. The project in this example will be a repository of data for a customized t-shirt store.

This article includes the following sections:

  • Prerequisites
  • Design API endpoints
  • Create a folder structure for your code
  • Create the test data
  • Write handler to return all t-shirts
  • Write handler to add a new t-shirt
  • Write handler to return a specific t-shirt
  • Advantages
  • Disadvantages
  • Users for Go


Prerequisites

  • Installation of Go 1.17 or later. Refer to Installing Go.
  • A tool to edit code. Any text editor will work; for example, Visual Studio Code.
  • A command terminal. Any terminal on Linux or Mac will work, as will PowerShell and CMD on Windows. Visual Studio Code has a terminal option built in.
  • The curl tool. This tool ships with Linux and Mac systems, and there is no need to install it on Windows 10 Insider Build 17063 and later builds. For earlier Windows versions, be sure to install the curl tool.

Design API Endpoints

In this example, we’ll build an API that provides access to a store selling customized t-shirts on “Test Amazon.” We’ll need to build endpoints where a client can retrieve and add t-shirts for users.

When building an API, we generally start by designing the endpoints. Our API’s end-users will have a better experience if the endpoints are easy to use and understand.

The following are the endpoints that we’ll develop in this tutorial:

/tshirts

  • GET – Get a list of all t-shirts, returned as JSON
  • POST – Add a new t-shirt from request data sent as JSON

/tshirts/:id

  • GET – Get a t-shirt by its ID. This will return the t-shirt data as JSON.
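To make the contract concrete, here is a small Python sketch of the URLs and request payloads a client would use against these endpoints. The base URL assumes the tutorial server runs locally on Gin's common default port 8080, and the JSON field names mirror the tshirt struct defined later in this tutorial; all of this is illustrative, not a fixed API.

```python
import json

BASE_URL = "http://localhost:8080"  # assumed local Gin server

def tshirt_url(tshirt_id=None):
    """URL for GET/POST /tshirts, or GET /tshirts/:id when an ID is given."""
    url = f"{BASE_URL}/tshirts"
    return url if tshirt_id is None else f"{url}/{tshirt_id}"

def new_tshirt_payload(id_, color, ceremony, price):
    """JSON body for POST /tshirts, matching the struct's JSON field tags."""
    return json.dumps({"id": id_, "color": color,
                       "ceremony": ceremony, "price": price})
```

A client library would send `new_tshirt_payload(...)` to `tshirt_url()` with POST, and fetch a single record with GET on `tshirt_url("2")`.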

Next, we’ll create a folder structure for our code.

Create a folder for our code

To create the folder, follow these steps:

  1. Open a command prompt and change to your home directory.
    On Linux or Mac:
    $ cd
    On Windows:
    C:\> cd %HOMEPATH%
  2. From the command prompt, create a directory for our code called “gin-web-service” and move into it.
    $ mkdir gin-web-service
    $ cd gin-web-service
  3. Create a module in which we can manage dependencies.
    $ go mod init example/gin-web-service
    go: creating new go.mod: module example/gin-web-service

 

Create the data

To avoid complexity, we'll store the test data in memory instead of a database. This means the set of t-shirts won't persist when we stop the server, and we'll need to recreate it when we restart.

  1. Create a file called “main.go” in the gin-web-service directory. This is where you’ll write your Go code.
  2. In main.go, paste the following package declaration. A standalone program is always in package main.
    package main
  3. Below the package declaration, add the following declaration of a “tshirt” struct, which we’ll use to store t-shirt data in memory.

    // tshirt collection.
    type tshirt struct {
        ID       string  `json:"id"`
        Color    string  `json:"color"`
        Ceremony string  `json:"ceremony"`
        Price    float64 `json:"price"`
    }
  4. Now create test data based on the struct we just defined.

    // tshirts slice to seed test data.
    var tshirts = []tshirt{
        {ID: "1", Color: "Blue", Ceremony: "ChildBirthday", Price: 56.99},
        {ID: "2", Color: "Red", Ceremony: "Anniversary", Price: 17.99},
        {ID: "3", Color: "White", Ceremony: "Christmas", Price: 39.99},
    }

 

Write a handler to return all t-shirts

  1. The “getTshirts” function creates JSON from the slice of “tshirt” structs and writes the JSON into the response.

    // getTshirts responds with the list of all tshirts as JSON.
    func getTshirts(c *gin.Context) {
        c.IndentedJSON(http.StatusOK, tshirts)
    }
  2. Assign the function to an endpoint path.

    func main() {
        router := gin.Default()
        router.GET("/tshirts", getTshirts)
        router.Run("localhost:8080")
    }
  3. The code above requires you to import the following packages:

    import (
        "net/http"

        "github.com/gin-gonic/gin"
    )
  4. Save main.go. Then, from the gin-web-service directory, download the Gin dependency and run the code:

    $ go get .                          // add the Gin dependency to the module
    $ go run .                          // run the single main package
  5. From a different command prompt, request the full list of t-shirts:

    $ curl http://localhost:8080/tshirts

    And the output would be:

    [
        {
            "id": "1",
            "color": "Blue",
            "ceremony": "ChildBirthday",
            "price": 56.99
        },
        {
            "id": "2",
            "color": "Red",
            "ceremony": "Anniversary",
            "price": 17.99
        },
        {
            "id": "3",
            "color": "White",
            "ceremony": "Christmas",
            "price": 39.99
        }
    ]

 

Write a handler to add a new t-shirt

When the client makes a POST request at “/tshirts,” you want to add the t-shirt described in the request body to the existing t-shirt data.

  1. Add the following code to append the new t-shirt to the list.

    // postTshirts adds a tshirt from JSON received in the request body.
    func postTshirts(c *gin.Context) {
        var newTshirt tshirt

        // Call BindJSON to bind the received JSON to newTshirt.
        if err := c.BindJSON(&newTshirt); err != nil {
            return
        }

        // Add the new tshirt to the slice.
        tshirts = append(tshirts, newTshirt)
        c.IndentedJSON(http.StatusCreated, newTshirt)
    }
  2. Change the main function so that it includes the router.POST function:

    func main() {
        router := gin.Default()
        router.GET("/tshirts", getTshirts)
        router.POST("/tshirts", postTshirts)
        router.Run("localhost:8080")
    }
  3. If the server is still running from the last section, stop it. Then run the code:

    $ go run .
  4. From a different command prompt, use curl to make a request to your running web service.

    $ curl http://localhost:8080/tshirts \
    --include \
    --header "Content-Type: application/json" \
    --request "POST" \
    --data '{"id": "4","color": "Yellow","ceremony": "Baby Born","price": 49.99}'
  5. To confirm that you added the new t-shirt, run the following code:

    $ curl http://localhost:8080/tshirts \
    --header "Content-Type: application/json" \
    --request "GET"

And the output would be:

[
    {
        "id": "1",
        "color": "Blue",
        "ceremony": "ChildBirthday",
        "price": 56.99
    },
    {
        "id": "2",
        "color": "Red",
        "ceremony": "Anniversary",
        "price": 17.99
    },
    {
        "id": "3",
        "color": "White",
        "ceremony": "Christmas",
        "price": 39.99
    },
    {
        "id": "4",
        "color": "Yellow",
        "ceremony": "Baby Born",
        "price": 49.99
    }
]

Write a handler to return a specific t-shirt

When the client makes a GET request to “/tshirts/[id],” you want to return the t-shirt whose ID matches the id path parameter.

  1. The “getTshirtByID” function extracts the ID from the request path, then locates a matching t-shirt.

    // getTshirtByID locates the tshirt whose ID value matches the id
    // parameter sent by the client, then returns that tshirt as a response.
    func getTshirtByID(c *gin.Context) {
        id := c.Param("id")

        // Loop over the tshirts list, looking for
        // a tshirt whose ID value matches the parameter.
        for _, a := range tshirts {
            if a.ID == id {
                c.IndentedJSON(http.StatusOK, a)
                return
            }
        }
        c.IndentedJSON(http.StatusNotFound, gin.H{"message": "tshirt not found"})
    }
  2. Change your main function to include a new call to router.GET.

    func main() {
        router := gin.Default()
        router.GET("/tshirts", getTshirts)
        router.GET("/tshirts/:id", getTshirtByID)
        router.POST("/tshirts", postTshirts)
        router.Run("localhost:8080")
    }
  3. If the server is still running from the last section, stop it. Then run the code:

    $ go run .
  4. From a different command prompt window, use the following curl command to make a request to your running web service.

    $ curl http://localhost:8080/tshirts/2
  5. The command displays JSON for the t-shirt with the ID you requested. If the t-shirt isn’t found, you’ll get an error message in the JSON response.

    {
        "id": "2",
        "color": "Red",
        "ceremony": "Anniversary",
        "price": 17.99
    }

 

Advantages

The key advantages of Go include:

  1. Speed.
  2. Easy to learn.
  3. Scalability.
  4. Comprehensive programming tools.
  5. Enforced coding style.
  6. Strong typing.
  7. Garbage collection.
  8. Simple concurrency primitives.
  9. Native binaries.
  10. Explicit, consistent error handling.

 

Disadvantages

The disadvantages of Go are:

  1. It can be verbose; simple tasks may take more code than in higher-level languages.
  2. It did not support generic functions until generics arrived in Go 1.18.
  3. It is a relatively young language with a smaller ecosystem than more established languages.

 

Tips for Go Users

The following are insights to consider for those who would like to use this language for their project:

  1. If your business is still validating its concept, Go is not the right fit for quickly crafting a demo for investors.
  2. It is the ideal option for backend developments in cases where servers deal with heavy loads or requests because it supports concurrency functions and has a small memory footprint.
  3. Golang is suitable to solve software issues of scalability.
  4. Because Go focuses on simplicity and speed of execution, implementing some features takes more effort than in scripting languages like Python.
  5. Those who need built-in testing, benchmarking facilities, and a straightforward build process should utilize Go.

 

Conclusion

Now you will be able to create RESTful web services using Go and Gin, and to work with other packages based on your needs, such as io/ioutil, validator.v9, math, strconv, and fmt.

All languages can have disadvantages, so it is important to carefully choose which language to use for your project with these potential drawbacks in mind.

HAPPY LEARNING!

Background

In a previous paper entitled Secure Development Lifecycle: Importance & Learning, I covered the importance of the secure development lifecycle (SDL) and the lessons teams learned when implementing SDL. In this paper, I share my experience in building a security training program for the development team.

To build a secure application or platform, one must ensure that the development team understands security and incorporates it both during the design phases and while writing the first lines of code. The following are key steps and considerations for a successful security training program for the development team.

Security Skill Assessment

It is essential to assess the security skill level of the development team to efficiently implement a security training program. One can accomplish this through a simple survey or readily available assessment tool.

If a team is larger than fifty members, I recommend using a predesigned assessment tool, such as a SaaS-based security training platform. Reserve this evaluation as a baseline for future reassessments. Generally, the team should take an assessment annually to evaluate the progress and value of the security training program.

Security Learning Path

Once you assess the team’s security skills, the next step is to create a learning path. A well-defined learning path will help team members understand their security skills and areas for improvement. The learning path should be role-based (see next section) and consider team members’ current security skills and project workload.

In addition, the learning path should be feasible and must have the commitment of the respective team member; otherwise, it will be challenging to keep team members engaged in the later stages of the learning path.

Be sure to revisit the learning path with team members every six months. Reviewing the learning path will help gauge its effectiveness based on the progress and feedback from the team members. Some team members may be ahead and others behind, so revisiting the learning path will help set realistic expectations and goals.

Role-Based Security Training

The one-size-fits-all approach does not work for training the development team. Each role in the development team has unique responsibilities and skills; hence a role-based security training program is critical. Some topics are common, such as threat modeling, attack surface, and defense in-depth, so the entire development team must learn these topics.

Other subjects are specific to certain roles in the development team, like language-based secure coding, security testing, and compliance. Only the individuals in those roles need training on those subjects. Therefore, all members of a development team must receive the following training based on their roles.

Basic Software Security Training

Application Security Fundamentals

Regardless of their role, everyone on the development team must learn the following concepts to establish a solid foundation for proper security training:

• Threat modeling basics
• Introduction to attack surface
• Defense in depth
• Principle of least privilege
• Secure by default
• Open design
• Privacy and data protection
• Fail securely
• Trust no inputs
• Secure error handling
• Secure logging
• Reuse of existing security controls

Secure Coding

Every developer must go through secure code training. Some topics can be language-independent, covering basic principles of secure coding, while others are language-specific. The following topics are essential to secure coding.

Secure Coding Fundamentals - These principles are the core of secure coding practices, which team members must adhere to and be aware of at all times:

• Buffer overflow and remote code execution
• Avoid hardcoded credentials and configuration
• Software composition analysis
• Security misconfiguration
• Storing sensitive data in plain text
• Insecure cryptographic storage
• Insecure communication
• Improper error handling and logging
• Functional vulnerability

Web Application Security - These are the top web application security issues:

• Injection flaws
• Broken authentication and session management
• Sensitive data storage
• XML external entities
• Broken access control
• Insecure deserialization
• Cross-site scripting
• Cross-site request forgery
• Denial of service

Mobile Application Security - These are the top mobile application security issues:

• Improper platform usage
• Unintended data leakage
• Insecure communication
• Application code quality
• Insecure authentication and authorization
• Code tampering
• Reverse engineering
• Non-functional requirements

Security Testing

Every quality assurance (QA) team member must understand security fundamentals. QA must also be able to conceptualize and perform the following procedures:

• Risk assessment
• Functional testing vs security testing
• Dynamic application security testing (DAST)
• Vulnerability scanning
• Penetration testing
• Attack surface review
• Fuzz testing (for more advanced testing)

Advanced Security Concepts

Every senior team member and team lead must understand these concepts thoroughly:

• Secure coding best practices - proactive controls
• Secure development environment
• Secure code repository
• Secure deployment
• Secure code reviews - static analysis tools and manual
• Advanced threat modeling and mitigation

Security Tournaments

Security tournaments are valuable since they spread security awareness and increase engagement within a team.

There are many ways to host a security tournament. One of the most common is to present a series of secure-coding challenges and missions and ask team members to compete against one another to identify, locate, and fix vulnerabilities. Most SaaS-based security training platforms provide the ability to host and run tournaments and offer templates to get started. Tournaments can be held with team members competing remotely, in person, or as a mix of the two.

Completed tournaments help raise security awareness and team involvement through gamification. Depending on the team’s size and workload, hold a tournament every quarter, or at least every six months.

Internal Security Bug Bounty

Internal bug bounty programs help make team members think like hackers, which is critical for a successful security program. The ability to see things from a hacker’s point of view allows teams to write secure applications and helps when responding to a security attack. In addition, it helps develop a security culture within the team and brings constructive viewpoints to the application.

Summary

It is important to remember that understanding each team member’s security skill level and requirements is essential for establishing a successful security training program. A carefully designed security training program is one of the critical steps to improving your development team’s capabilities and can significantly improve the security posture of an application or platform.

In today’s digitized world, it’s never been more important to understand the different software and platforms that are newly available. While the public cloud is all around us, businesses are finding an increasing need to adopt and integrate new cloud strategies into their business models. However, the problem is training their developers with the right skills without negatively impacting the business.

Software developers are fast recognizing the importance of virtual sandboxes. These useful training labs are increasing in popularity due to having the flexibility of allowing developers to practice using different software in a virtual environment, thus minimizing the risk factor. This paper will delve deeper into the intricacies of how developers can maximize their cloud sandbox training, so they can manage and use the cloud platforms more efficiently.
