Archives

What is an Architect?

For a software company, or a traditional enterprise going through digital transformation, there is no more important hire than your chief product software architect. People mean different things when they call themselves an "architect". This is not, generally, an attempt to frustrate or confuse you. Instead it's caused by increasing specialization, and also by the gradual convergence of the "IT", "system integration" and "product development" realms in the context of digital businesses. However, it will generally be disastrous if you put the wrong type of "architect" in the wrong role; the skills and mindsets are not transferable. Generally speaking, a:

  • "Systems architect" or "network architect" is someone who knows how to set up hardware or cloud systems for maximum performance, reliability and security. This is often called the "physical architecture" or sometimes the "IT architecture" since it concerns itself primarily with what software capabilities are deployed where, and how they connect.  This type of activity requires some coding, but primarily in the area of deployment automation, system monitoring and other physical deployment configuration work. Companies that are evolving from an IT-focus to a more "software product" focus generally have this type of architect already on staff.
  • A "solutions architect" generally comes from the systems integration world. He or she tends to be expert at selecting off-the-shelf packaged software, at configuring these packages, and at implementing interfaces between systems. He or she will generally not be focused on creating new software systems "from scratch", but will rather create them by combining pre-existing purchased systems with some home-built systems. Frequently this person will have coding skills, often in scripting languages, used to tie existing systems together, transform and manage data, and do similar tasks.

In this blog we focus on yet another type of architect, the "software product" architect. This is the person who can listen to your requirements, and drive the creation of your system "from scratch"-- that is, to develop your unique IP through writing software code in Java, C#, .Net and many other languages, rather than buying and configuring commercial off-the-shelf systems.

Software product architect

Software product architects are frequently rather eccentric people.  For example, one of the best architects I ever worked with drove a dirty, beat-up 20-year-old car despite making a multiple six-figure (USD) annual salary. He was constantly getting lost, even when he had been to the same location multiple times. Another architect was constantly late for meetings. Even if his previous meeting ended on time and was literally in the same building in the room next door, he'd still somehow manage to be 20 minutes late. Yet a third excellent architect was so paranoid that I honestly would not have been surprised to see him come to work in a tin-foil hat to protect his brain from alien radio signals.

Eccentricity is not a job requirement, and some excellent software product architects actually seem quite normal. But don't let eccentricity dissuade you from hiring someone who is otherwise great. Eccentricity does more-or-less come with the territory.

Architects also tend to be highly independent thinkers. They like to figure things out for themselves, and they aren't bothered a bit if no one agrees with them. Some like to argue, others prefer to be quietly confident, but all good architects tend to be very sure of their work. The best ones can explain their perspective so well that they end up convincing you and everyone else that their approach is the right one--and it is.

How Do You Interview for an Architect?

So how do you hire a great software product architect when, almost by definition, you are looking for someone at least as smart or maybe even smarter than you and your very best people; a person who knows things that you and your team don't already know? Obviously, you'll want to check your candidate's technical credentials as well as you possibly can--ideally by getting people you trust to do "skills interviews" on technical areas to weed any "con artists" out of the system. But technical skills alone do not assure you of a good hire as an architect. Here's an approach that has worked well for me:

  • When I'm interviewing an architect candidate, as soon as it is polite I ask him or her to describe a system they designed. For a junior candidate, I ask about one they were involved in, or a commercial system whose architecture they know about and admire. I might ask a senior architect to describe the system they are proudest of, the one they most recently worked on (if they can do that while still respecting confidentiality), or just a system that strikes me as interesting, based on their past CV. If they start to answer verbally, I ask them to go to the whiteboard and sketch it out for me.
  • A good architect's natural habitat is the whiteboard. Good architects are never more at home than they are drawing technical diagrams on whiteboards, and explaining, discussing and debating those diagrams with others. Architects tend to be introverts and sometimes even loners--but I've yet to work with a good one who doesn't come alive at the whiteboard in a lively discussion. Notice their body language in addition to their explanations--does it look like they are totally relaxed and in their element at the whiteboard explaining things to someone? Are they animated and excited about the system they are describing? Is their excitement about the system contagious--that is, are you starting to feel excited about it too, at least a little bit? If so, they may be a good architect; if not, they probably aren't. This is a crucial test.
  • Listen to the candidate's description of their previous system. Even if you know nothing whatsoever about the business domain or the technology they are describing, if the candidate is good, both will start to become clear to you as they describe it. Good architects make things understandable--and often interesting. They will be able to explain things--even very complex technical things--in a way that makes them comprehensible, while giving you new insights. This is regardless of your own level of understanding, high or low. A good architect should be able to clearly describe a system to an 80+ year old grandparent, to a business-savvy but non-technical CEO, or to the CTO, in ways they can all understand. A good architect adjusts his or her explanation to the audience--an essential skill.
  • The best architects make complex subjects sound simple. On the other hand, mediocre and bad architects survive in their careers by making simple subjects sound complex. When someone is throwing obscure TLAs ("Three-Letter Acronyms") at you, it's easy to be intimidated and back off. Don't. Push until you either understand the system they are describing, or realize they are just throwing acronyms at you to confuse you.


  • Don't confuse confidence with arrogance. All good architects (and many bad ones) are self-confident about their work. They may or may not be confident about themselves personally, but they are rock-solid when it comes to their belief in the value of their work. Good architects are not indecisive. I often test this by asking challenging questions, like "Why did you do it this way?", or "I've seen other people do it that way--did you consider that? If you did, why did you reject it?" A good architect will answer these questions easily, without getting upset. He or she will have thought through all the variations you are likely to come up with and come to an answer that they genuinely think is the best. If they appear uncertain when challenged, unless they are very junior, that is not a good sign. A good senior architect may (or may not) be diplomatic, but they will tell you why their way is better--unless you really do come up with an idea they didn't think of. In that case, the best architects will generally consider your idea thoughtfully and very quickly be able to give you an alternative that incorporates it, right there at the whiteboard. You'll know a good potential partner when you see this happen.
  • I find arrogance and defensiveness often go hand-in-hand. When you question or challenge some aspect of their architecture (which I suggest you do--even if it's just to ask for more detail) and your candidate gets emotional or defensive, that's a bad sign. If, subtly or more overtly, he or she tries to make you feel like it's your problem that you don't understand why their approach is better, then that's a very big danger sign. And if they condescend to you or are in any way insulting, I wouldn't hire them.
  • You and your team will depend on this person in a vital way if you bring them on-board. Note your own feelings when interacting with them--and don't discount those feelings. You as a human being are a finely tuned instrument for picking up non-verbal cues. Our responses to these cues manifest themselves, often, as emotions. If you catch yourself feeling dumb, insulted, intimidated / "put down" or otherwise negatively affected during the interview, it could well be that this person has developed an interaction strategy to keep people from questioning him or her by making them feel negative feelings ("I'm dumb") if they do. If a person cannot accept constructive input and questions, or function as a respectful peer or subordinate, this does not lead to successful outcomes no matter how brilliant that person may or may not be. Don't hire them. That being said, examine yourself to see if your reaction is your own issue. If you frequently feel these emotions, I would not necessarily attribute them to this candidate. I would also suggest it doesn't really matter if you "like" this person or not, provided you can work with them. I tend to like the architects I work with, but they are an eccentric group and you may not! Don't let that stop you. In sum, use your feelings as a barometer: they give you valuable feedback about the candidate's communication style, and also let you know if this is someone you feel good about working with.
  • I always ask an architecture candidate how much code they still write. The best architects generally still do some degree of coding from time-to-time; and all of the best architects know how to code and can do it extremely well when they need to. My perspective is that unless an architect can code, they are not a good architect--because in that case everything they do is all theory. On the other hand, it is almost always the case that good architects don't currently code a lot. That's because they are generally one of the highest-paid technical employees in the group--probably the highest--so their time is too valuable to have them spend it coding very often. Good architects do spend time coding with developers when they get stuck, developing examples, doing code reviews and giving inputs or updates. Some may end up writing tricky algorithms--or coding for fun in their spare time. The key thing is that they should know how to write code and code hands-on at least some of the time.

What are the Traits of a Good Architect?

To be a good architect, you really do have to be very smart. This means that an architect often is, in fact, the "biggest brain in the room". However, if he or she manifests that by making everyone around them feel stupid, the relationship is not going to work. The best architects are humble in a genuine way, and respect what the people around them bring to the table.

Do such paragons exist? Yes, they really do. But the combination of high intelligence, technical knowledge and soft skills that goes into making a first-rate architect is rare. Here are some things I would trade off to get the positive traits mentioned above.

  • Provided he or she had the technical skills and the soft skills, I would not insist that a software product architect be an expert in my business domain. For example, if I worked in retail, I would not insist that my software product architect be an expert in retail when I hired him or her. This may seem surprising. If I were hiring a "solutions" or "system integration" architect, then I would insist they know my domain because their primary job is domain-dependent package selection and integration. Not so for a software product architect.
  • A software product architect should be an expert in designing and developing software products--that is, SaaS systems, platforms, applications and so on. A software product architect is the kind of architect who creates packaged software in the first place. The ability to create a packaged solution "from scratch" is a very different skill set than the ability to select the right one for your business. A software product architect needs to understand first and foremost how to build first-rate software systems. Perhaps surprisingly, these design skills tend to be transferable from business domain to domain. If you pick the right person, they will have the brains to pick up what they need to know about your domain in a very short period of time.


  • I would not insist that an architect candidate be completely current on coding, languages and technologies. Some architects code a lot. Others, however, work primarily in a technical design and technical supervision role. I do think it's essential that architects know how to code, and that they spend at least some of their time--even if it's just a few hours every week or so--actually doing it. However, ironically, your junior engineers may be more up to speed on the latest UI packages or some specific technologies than your architect. If your most junior engineer finds that your architecture candidate has a surprising gap in his or her knowledge of the details of a particular open-source package, that is not in itself a problem. Architects operate at a completely different level of abstraction than coders. An architect's basic job is to take your business requirements and determine how to embody those in software. Like a physical home or office building architect, software product architects worry about things like flow, cohesion, maintainability, structure and other big-picture issues. Even the best software architects may not be up-to-the-minute on the details of a particular package or technology, other than to know when it is useful. The lack of detailed technology knowledge in a particular area is not a negative for an architect.
  • Good architects tend to be confident (but not arrogant), and to prefer to figure things out by themselves rather than to be told what to do. This leads at times to some very interesting dynamics that can make even great architects challenging and frustrating to manage. Still, when you check references and talk to their former bosses, I'd look to see whether those bosses believe that, on the whole, your candidate's merits far outweighed their eccentricities. I would not be put off by the fact that these frustrations and eccentricities exist--provided they are not too extreme. A great architect will create so much value for you and your company that I think it is very much worth the effort to get around their foibles and eccentricities.

Unless you get incredibly lucky--which happens, but who likes to count on that--the success or failure of your software business or "digitally transforming" company will be heavily impacted by your software product architecture. A good software architecture is what allows your company to rapidly pivot and evolve; to minimize maintenance and operations cost; to scale robustly and smoothly to millions of users; to deliver value to your end users, and to bring revenue to your business. Your software product architect is the key hire to get those benefits. It is very much worth the time and trouble to find the right one--and to work effectively with him or her once they are on-board.

Originally published on DOU.ua on June 14, 2016  

At GlobalLogic's Kharkiv office, we have created numerous technological pet projects (in other words, PoCs) in recent years. We often think about our ideas developed 2 or 3 years ago, but sometimes we can't find anything except a presentation or a demo video. That is why we felt the need to set up a system for arranging and storing our technical solutions. Moreover, an ideal system would not just store the groundwork, but also help us develop projects so that we can gain experience and expertise in various technological domains.

Thus, the initiative called BrainMade arose. Launched in October 2015, it soon transformed into an elaborate project with its own strategy, roadmap, tasks, terms, and a big team.

Trying to Square the Circle

BrainMade is a project designed to create new PoCs and develop already existing PoCs.

The main goal of its members is to preserve the solutions invented earlier, as well as to gain new experience in different technologies and industrial domains. Since our Kharkiv engineers have developed concepts in a variety of areas (e.g., embedded, Internet of Things, big data, e-commerce, etc.), we needed to think of a core project that could embrace all these and any other fields.

We came up with an idea of a railway. A toy railway, of course.


First, almost everybody has liked toy railways since childhood. Second, a railway is itself a wonderful ecosystem capable of combining multiple technological areas. Third, even the most complex ideas can be demonstrated easily with a model railway. Just imagine: you drive up to the station, and the system notifies you about the location of available parking spaces. You buy a ticket at an online terminal, independently check in on the train, and board. The train control, in turn, is based on processing and analyzing large amounts of data.

A Room for Growth

The biggest charm of our railway ecosystem is that it is alive. BrainMade is constantly supplemented with new ideas and even separate domains. Each domain has its own leader team. Anyone can join the team (anyone from GlobalLogic so far). Now the following domains are being actively developed:

The Big Data area comprises everything related to big data analysis, from defining the architecture to choosing and setting up the system for working with Big Data that is used in all BrainMade projects. Here we have a system to analyze freight transport and passenger traffic data, store and process telemetry data obtained from a moving train, etc.

Industrial PoC is a team that works on software and hardware to operate a railway. Earlier, this team assembled a model railway, developed and implemented control interfaces, and coded a web application module to operate trains.

IoT Control is a project devoted to creating various sensors for train telemetry systems. It includes a sensor prototype for optical positioning and a solution for tracking a train’s location through a built-in digital speed indicator and an optical sensor. The team has also developed firmware for sensors and software for managing data obtained from sensor devices.

IoT Parking is a project involved in building a system for the optical recognition of available parking spaces. The team developed a flexible and self-learning algorithm for image recognition and implemented it into a system for parking space optical monitoring. The solution is now ready to be used in real-life environments, such as the parking lots of business centers, shops, airports, and railway stations.

The Retail project unites a team that develops a solution for e-commerce and online sales. Within the railway system, this solution allows users to reserve and buy tickets online, independently check in on the train via a QR-code, as well as check in multiple passengers and analyze passenger traffic.

Moreover, a completely new domain called Augmented Reality has emerged recently. This technology is becoming more and more popular in the market and looks interesting for our customers, hence it is attractive for GlobalLogic as well. Within the railway system, augmented reality can be used for navigating a railway station and helping a train driver operate the train. However, once improved, the technology can be used in hundreds of other ways in different commercial projects.


The teams often come up with new ideas and implement them. This is great, because the main point of the BrainMade project is to be open to innovations. While creating PoCs is a common practice for GlobalLogic, BrainMade has brought new momentum to the tradition of constant development. We can be sure that newly launched PoC projects will not get lost and will be developed and used in the future. This truly motivates people, and they spend a lot of their free time driving these projects.

Despite the fact that BrainMade participants are volunteers, the work of each team is subject to fine-tuned processes. All teams use the Agile framework with scheduled sprints and goals, as well as a strict division of roles. The roles in the project can be different from those the specialists have within their main projects. And this is great, because it helps everyone gain new and diverse experience.

Achievements and Plans

More than 50 people are currently taking part in BrainMade. Everybody has their own incentives, such as technical skills development, access to new technologies, learning and knowledge-sharing, product ownership and, of course, communication and friendship. BrainMade also helps experts try different roles by allowing them to set their goals, define their strategy for particular domains, plan their work, and be responsible for the result. These activities help develop their skills dramatically. By trying new roles, technologies, and tasks, our experts grow professionally and personally.

With seven basic activity domains, BrainMade combines many PoCs that have been created over the past several years. Some of the PoCs became an integral part of BrainMade, while others served as the basis for ideas and inspiration for current projects. Thanks to BrainMade, thousands of lines of code have gotten a second life.


Along with meeting the basic goals set at the launch of the project, BrainMade has brought many other positive outcomes. The project offers multiple teambuilding activities supported by GlobalLogic. Moreover, the initiative helped solve some infrastructure problems: for example, a VLAN was built specifically for its needs. It now enables us to quickly launch new PoCs together with people from different projects and even from different locations.

Moreover, GlobalLogic provides the project with equipment, such as soldering sets or a 3D printer, and it also pays for training and conference participation for BrainMade members. However, the most valuable asset of this project is the new knowledge, experience, and positive emotions triggered by the tangible project results and peer approval.

Our subsequent steps are as follows: to finalize the first script of our demo, to start developing new domains, and of course to share our experience with other GlobalLogic locations. It's not easy to foresee the future of the project, but we are convinced that BrainMade is already a good tool for serious business tasks. It helps develop expertise much more efficiently by unifying miscellaneous PoCs within a single complex solution. And the visualization of such a solution motivates the participants and increases the value of their work.


This blog was originally posted by GlobalLogic’s experience design arm, Method.


Who Pays?

Cities are the original sharing economy — a sharing ecosystem in practice founded on a cost-orientated logic where services and infrastructure are collectively funded through taxes and fees. Participants, whether individuals or organizations, contribute relative to their means, leaving the collective whole better off than their individual contribution. Utilitarianism in action, with all the soft benefits adding a general sense of belonging, pride of place, and a perch on the world’s stage.

According to the UN, there are 1,692 cities with a population of 300,000 or more people. If we designed a new city from scratch today, with all levels of participants in mind (citizens, companies, government entities), would we operate in the same “business model?” What if there’s a mission greater than simply cost sharing and provision of services? What if today’s mindset, methodologies, and capabilities help us organize value for an exponentially higher purpose and collective benefit? What if the economy of a city was centered around value creation rather than basic cost sharing?

These questions aren’t merely a theoretical exercise — in reality they’re the potential new model for the future that forward-leaning cities are already walking into and adapting to. It’s a journey we are actively shaping in our Strategic Design work with city administrations and infrastructure providers through emergent models and mindsets for growth and innovation, such as the EVRY Strategic Design Lab.

Why Now?

In cities large and small, a common set of pressure points is bearing down ever more intensively. Disposable income, industry footprints, tax bases, corporate investment, and shipped innovation are in decline. Meanwhile, population, cost of services, income inequality, and social isolation are on the rise. Consequently, the revenue required for provision of services isn’t adding up by traditional measures. Sanitation collection frequency diminishes, parks go unattended, classrooms close, and community services shut down. Each setback acts as a standalone line item of the economy, counting pennies to balance budgets. The resulting culture of cuts comes at the greater expense of growth and opportunity.

In this landscape, how can a city shift its approach from service provision, rule enforcement, and taxation to one of opportunity and value creation? In other words, how can a city actively evolve from being viewed as a simple balance sheet to an enabler of multiple interconnected forces that harness the holistic power of its cultural and economic potential?

Who Creates Value?

Cities are places of opportunity. Long-term global urbanisation demonstrates the mobility of people towards greater possibilities. Cities necessarily thrive and decline based on the opportunities (or lack thereof) created by each generation.

Today, we see a shift in mindset from the city as a single economy to one of multiple inter-connected economies focused on the next generation of opportunity, thereby orientating each participant towards the holistic value and impact created with other forces in their own and adjacent economies. By thinking of cities as a holistic platform, structured around the intertwined economies and contributing forces, we can tap the full potential and spectrum of opportunities through the five economies of a City of Opportunity:

1. Education Economy

2. Logistics Economy

3. Cultural Economy

4. Knowledge Economy

5. Environmental Economy

Within an economy-centric model, the creation and growth of value is inherently collaborative for citizens, organizations, and governmental agencies. Most importantly, it allows cities to think about roles for citizens, not only for the city itself but also in one or more of its economies. As citizens of an economy, they gain a sense of place that goes beyond the physicality of a neighborhood or one’s favorite sports team, and orientate around the value they generate.

By approaching the business model of a city as a meta-mode economy, each of these interlinked economies can work towards a greater vision with a clear sense of purpose. Every city has its own unique makeup, born from its distinctive geographic, historic, cultural, and human composition from which to draft a vision. A vision that captures a civic aspiration that holistically and collaboratively creates value for its many contributors.

We go beyond campaign slogans, logos, and five-year plans, to a North Star that allows inclusive collaboration in a participant’s economy. Collaboration that enables participants to adapt and grow in stride with emergent local and world events more quickly and relevantly than any central administration could.

How is Value Captured?

Whether as individuals, organizations, or the government, we each have a role in making our cities thrive. While the value creation moment is important, so too is value capture, in line with the goals defined in our city’s vision.

A true City of Opportunity is a portfolio of small, medium, and large participants collaborating and contributing their unique capabilities to civic life. Through clarity of vision and visibility towards long-term outcomes within their economy, autonomous participants create and collaborate to contribute their unique value to the city. The economy-centric model allows them to see their role and realize its full holistic potential. Every city will have its own vision, transforming the role of a town hall beyond just operating the services of today, to create the value of tomorrow.

As a city goes from being the funder/benefactor of yesterday’s policies to being the facilitator of the platform of economies for tomorrow’s opportunities, a town hall can do what a large government does best: use its scale to create opportunity for the many, openly and inclusively. The intent of scale is not to control centrally, but to facilitate collectively, with the aim of establishing a positive outcomes-oriented vision. A vision that’s openly and objectively tracked through high-level quantitative and qualitative indicators — a balanced scorecard of the economy, weighed by financial, environmental, and social progress.

At a company level, we’ve seen the kind of functional impact on society that cities can make: step-change improvements in development with a contemporary digital mindset founded on user value, continuous improvements, and holistic outcomes. And as cities are as much emotional as functional, we need a more complete, representative, and balanced way of holistically thinking about the model of where we live.

Who is Leading the Way?

In our research and work with progressive cities and government agencies, we see the emergence of a new generation of urban leaders. Such leaders are growing their impact born from the logic of citizen-centric models and their resulting value creation. This new mindset enables leaders to break free from the traditional struggles of simultaneously optimizing their current model while trying to envision and implement emergent opportunities for each city. Optimistically, we see real momentum, pragmatic progress, and a new muscle growing for how we design the operational model of cities.

Here are some leaders we admire and learn from. Please share your favorites.


  • Vancouver for the Environmental Economy
  • Paris for the Knowledge Economy
  • Helsinki for the Logistics Economy
  • Peru for the Education Economy
  • Estonia for e-citizenship, creating a wider economy of actors

This blog was originally posted by GlobalLogic’s experience design arm, Method.

The way we acquire knowledge and skills is going to fundamentally change due to the introduction of new technologies supporting flexible and personalized learning models. Artificial intelligence and machine learning are playing a transformational role in the world of education as well as in many other industries. Their adoption promises to offer the efficacy of one-to-one tutoring at an unprecedented scale and in the context of an open, collaborative, lifelong learning experience.

Top Skills in 2020. Source: World Economic Forum

With the rise of advanced automation causing radical socio-economic changes, pedagogy will likely focus on the indispensable cognitive, creative, and social skills of the 21st century workplace. Recently the World Economic Forum estimated that 65% of children entering primary school today will ultimately end up working in jobs that currently don’t exist. We can assume those jobs will be cognitively demanding and require higher-order skills such as the ability to comprehend the context and not just the content, as well as to creatively define problems rather than reflexively solve for them. In a hyperconnected society where the nature of work is increasingly collaborative, social and argumentative capabilities will be even more important.

Educational system reform, the design of scalable and nonisolated applications of AI in education, and the growth of an entrepreneurial mindset toward lifelong learning are just some of the intertwined challenges yet to be addressed adequately. Although academic research has been investigating scenarios and possibilities for decades, not much has significantly changed in the tools and resources available to teachers and learners, despite their increased digitization.

There is an important distinction between the digitization of traditional learning models and their actual transformation using digital technologies.

For example, the broad availability of online courses taught by top instructors from the best educational institutions offers high-quality instructional content but still falls far short of providing a high-impact, personalized, and adaptive learning experience. Education startups around the world are growing exponentially and pioneering new learning systems (see AltSchool for instance), while incumbents are waking up to the reality of rethinking their business to rapidly plan for complex, massive shifts.

Image: Blooma

Personalized education as a design challenge

The prevailing educational system was designed for the needs of the industrial age and is quickly losing its edge as we move into the information age. My exposure to the pioneering work of DARPA, the ingenuity of edtech startups such as Kidaptive, and more recently Method’s collaboration with Pearson, has given me the opportunity to explore the design implications of new learning approaches.

Absorbing factual knowledge within a rigid scaffolding is obsolete. Personalized methods instead promote the understanding of key concepts and frameworks, as well as how they can best be applied.

Therefore, we need to design a new generation of tools capable of successfully supporting nonlinear, adaptive, and engaging educational experiences. Their efficacy will depend on the “intelligence” of the learning systems to continually adapt according to at least five types of awareness:

  1. The recognition of the individual’s physical and emotional state (as it affects their learning ability)
  2. The understanding of the purpose and context of learning (e.g. reasons for acquiring new skills, the learning environment, physical and virtual, culture, market, flow of contextual data from IoT and wearables)
  3. The awareness of the individual as a learner (e.g. previous achievements, engagement level, mastery grade, etc.)
  4. The knowledge of the subject or domain (e.g. genetics, critical thinking, cultural awareness, etc.)
  5. The pedagogical approach (e.g. conceptual learning, assessment model, proactive guidance, productive failure, etc.)

The learning system will need to combine this ever-changing aggregate information with the algorithms designed to process it.
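To make that combination more concrete, here is a minimal, purely illustrative Python sketch of how a learning engine might fold the five kinds of awareness above into the choice of the next interaction and its modality. The class names, signals, and scoring rules are invented for illustration; they are not Pearson's, Kidaptive's, or any real product's logic.

    from dataclasses import dataclass

    @dataclass
    class LearnerState:
        fatigue: float   # 1. physical/emotional state, 0 (fresh) .. 1 (exhausted)
        context: str     # 2. purpose/context of learning, e.g. "commute", "classroom"
        mastery: dict    # 3. learner model: concept -> mastery level 0..1
        domain: str      # 4. subject or domain being studied
        pedagogy: str    # 5. pedagogical approach, e.g. "conceptual", "practice"

    @dataclass
    class Activity:
        concept: str
        difficulty: float   # 0..1
        modality: str       # "video", "quiz", "simulation", ...
        effort: float       # attention demanded, 0..1

    def score(activity, state):
        """Higher is better: target weak concepts, match difficulty to mastery,
        and avoid high-effort modalities when the learner is tired or on the move."""
        mastery = state.mastery.get(activity.concept, 0.0)
        need = 1.0 - mastery
        fit = 1.0 - abs(activity.difficulty - mastery)
        effort_penalty = activity.effort * (state.fatigue + (0.5 if state.context == "commute" else 0.0))
        return need + fit - effort_penalty

    def next_interaction(activities, state):
        return max(activities, key=lambda a: score(a, state))

    state = LearnerState(fatigue=0.7, context="commute",
                         mastery={"fractions": 0.3, "decimals": 0.8},
                         domain="math", pedagogy="conceptual")
    activities = [
        Activity("fractions", 0.4, "video", effort=0.3),
        Activity("fractions", 0.7, "simulation", effort=0.9),
        Activity("decimals", 0.9, "quiz", effort=0.5),
    ]
    chosen = next_interaction(activities, state)
    print(f"Next: {chosen.modality} on {chosen.concept}")

In a real system the hand-written score function would be replaced by models trained on learner data, but the shape of the decision stays the same: awareness signals in, next interaction and modality out.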

In a more aware educational system, the primary and most valuable role of AI is to determine the sequence and modality of the interactions between the learning tool and the learner. As a result, the user experience is largely the product of artificial, data-driven, real-time design. It’s critical to define clear responsibility for decisions affecting the user experience that are unpredictable and potentially undesirable or flawed. Who should be held accountable? How can we mitigate those risks? How are these kinds of smart technologies ultimately shaping the role of instructors rather than the other way around? Analogous issues are present in the AI discourse across industries, from automotive to healthcare.

These questions will persist long after primary schooling, as we will have to interact frequently with a rich educational marketplace. It will be crucial to find effective ways to inform decisions about the most relevant capabilities to acquire. Lifelong education is likely to assume an entrepreneurial, strategic, growth-based mindset in dealing with knowledge. How can such an approach be taught? Will it perhaps be the specialty of human/artificial coaches or mentors, independent from specific learning platforms?

Finally, another key aspect of the design challenge is related to the creation of virtual environments where practice can enhance knowledge in an artificial but realistic context with the support of real or synthetic learning companions.

LEFT: 1910 — Students in the 21st century would receive all their education via headsets attached to a converted wood chipper fed with textbooks. Image: French National Library. RIGHT: 2015 — Collaborative virtual environment. Image: Microsoft HoloLens.

Design will have to expand to new areas, such as pedagogical design and algorithmic design, and it will also need to bring an even broader systemic approach into the creation of products and services. The quality of education will be predicated on the quality of its ecosystem as a whole. The strategic ability of design is to connect human and cultural insights with business rationale and technology innovations. This ability is going to play a crucial role in developing new tools such as AI to meaningfully advance the learning experience.

This blog was originally posted by GlobalLogic's experience design arm, Method.

Contents

  • Mixed Reality is a new, fundamentally different medium.
  • The current, default interaction model is quite simple and limits what we can do with MR.
  • To improve it, we need to look into natural ways of interacting with the world and do four things:
    • Place interface elements on real-world surfaces for tactile feedback.
    • Allow for direct manipulation of virtual objects.
    • Use spatial anchors to expand interfaces beyond the desktop.
    • Utilize 3D sound to enhance the experience with directional cues.

This will allow us to do two amazing things: make us super humans and let us collaborate around virtual objects by adding the human element back into tech.

Intro

Sensorama “Experience Theatre” from 1962

Although Virtual Reality headsets, devices, and applications have been around for a few decades now, Mixed Reality — a subset of Augmented Reality — has only recently matured beyond experimental prototypes. At Method, we’ve looked at natural ways of interacting with it by building prototypes and explored its potential uses outside the usual fun and games.

What is this Mixed reality you speak of?

When we talk about Mixed Reality, we most often mention Microsoft’s Hololens, the headset that, unlike VR counterparts, lets you see the world around you as if you were looking through a pair of sunglasses. Hololens overlays virtual objects, or holograms, onto that view. Since there is no need to record the “real” world with a camera and play it back on a screen (like AR applications on your phone do), there’s no lag or latency in your perception — you see what you would normally see, with added objects from the virtual dimension. Apart from 3D scanning the space around you in real time and housing a whole bunch of accelerometers, compasses, and other sensors, it also includes a pair of 3D speakers that induce the perception of true spatial sound.

What’s wrong with the current interaction model?


The current interaction model is based on an old VR paradigm of using your gaze to move a reticle in the centre of the screen to an object and then using a “pinch” gesture to interact with it. This doesn’t require much implementation work, but it is unnatural and rather unintuitive — how often do we stand still, move our heads, and raise our arms midway in the air? It’s also disappointing that a device that scans your gestures and the environment, and knows exactly where in the room you are in real time, ends up making the user feel handicapped instead of empowered.
Because MR is fundamentally a new medium augmenting the real world rather than transporting us into a virtual one, we have to look for interaction clues in our perception of the world around us.

Humans of today are more or less the same as humans from 50,000 years ago. We perceive things through a combination of senses, with the most importance usually given to vision, followed by sound, smell, touch and taste. We interact with the world mostly by touch — we press buttons, touch surfaces, rotate knobs, and move objects physically. Touch is closely followed by voice, when we state our intents and engage in conversations. More subtly, we use vision to look at something and, sometimes on an unconscious level, people around us turn to see it too.

In our workplaces and homes, we constantly move through functional areas?—?from desks, to workbenches, to pinned walls and kitchen tops.

How do we improve it?

At Method we think there are four ingredients that would make interacting with the virtual dimension a better, more intuitive experience:

  • Place interface elements on real-world surfaces for tactile feedback
  • Allow for direct manipulation of virtual objects
  • Use spatial anchors to expand interfaces beyond the desktop
  • Utilize 3D sound to enhance the experience with directional cues

Place interface elements on real-world surfaces for tactile feedback


Placing interface elements on surfaces and real-world objects allows users to virtually interact the way they would with real things. Not only can they see them, but they also feel when they’ve touched them, by using actual physical textures already there. The placement increases intuitive precision and removes the ergonomic stress of having to hold your arm out in mid air (which is still a problem with ultrasound haptic interfaces).

Allow for direct manipulation of virtual objects


Let users manipulate objects directly instead of having to use a separate interface element to move or rotate something. This reduces cognitive load, allowing them to forget about the interface itself and focus on the task they’re performing. If objects are floating in mid air, the use of a simple guide metaphor can be useful for interaction, again relating back to having the actual sensation of touch during the interaction.

Use spatial anchors to expand interfaces beyond the desktop


With MR headsets like Hololens, we have the ability to position objects and interface elements in the space around us, not just on a table right in front of us. This expands our useful interface area beyond the desktop. It provides us with the ability to create focused interface areas that we can physically move through, much like how we do things in a real-world workshop. As a by-product, this can also simplify individual interfaces and help with some of the detection resolution issues.
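To illustrate the spatial-anchor idea in the simplest possible terms, here is a toy Python sketch (not the Hololens anchor API, and not our prototype code) of an application keeping a registry of named functional areas pinned to room coordinates and activating whichever one the wearer has walked up to. The names, coordinates, and radius are invented for illustration.

    import math

    # Hypothetical anchors: named functional areas pinned to (x, y, z) positions
    # in room coordinates, e.g. a planning wall, a workbench, an entertainment corner.
    anchors = {
        "planning wall": (0.0, 1.5, 3.0),
        "workbench": (2.5, 1.0, 0.0),
        "entertainment": (-3.0, 1.2, 1.0),
    }

    def active_area(head_position, radius=1.5):
        """Return the anchored interface area the wearer is standing near, if any."""
        best, best_dist = None, float("inf")
        for name, pos in anchors.items():
            d = math.dist(head_position, pos)
            if d < radius and d < best_dist:
                best, best_dist = name, d
        return best

    # Only the nearby interface is rendered in full detail, which keeps each
    # individual interface simple and eases gesture-detection resolution.
    print(active_area((2.2, 1.6, 0.3)))    # -> workbench
    print(active_area((10.0, 1.6, 10.0)))  # -> None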

Utilize 3D sound to enhance the experience with directional cues


Spatial sound can provide us with subtle cues about status and the position of objects around us — from clicks that provide audio feedback when we hit interface buttons to gentle notifications from far-away objects. Sound works on a more unconscious level than vision. While we consciously direct our vision towards objects, sound emanates from its emitter towards us. This means that we can not only notify a user of an object’s status (f.ex. a train platform or an oven left on), but also signal its location and direct them towards it. Together with the rest of the sensors and voice control in the headset, the combination allows for some very advanced but naturally intuitive scenarios: f.ex. a 3D notification sound comes from the direction of an object, the user turns towards it (narrowing the context of interest), they issue a voice command now specific to that object, the AI’s load is decreased, and the machine responds accordingly.
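Here is a deliberately simplified Python sketch of that last scenario: gaze direction narrows the set of objects a voice command can apply to, so the speech layer resolves intent over a much smaller context. It is illustrative pseudocode rather than Hololens or Unity API calls, and the object names, commands, and field-of-view threshold are all made up.

    import math
    from dataclasses import dataclass

    @dataclass
    class SmartObject:
        name: str
        position: tuple   # (x, y, z) in room coordinates
        commands: set     # voice intents this object understands

    def angle_between(gaze_dir, to_object):
        """Angle (radians) between the gaze direction and the direction to an object."""
        dot = sum(g * o for g, o in zip(gaze_dir, to_object))
        norms = math.sqrt(sum(g * g for g in gaze_dir)) * math.sqrt(sum(o * o for o in to_object))
        return math.acos(max(-1.0, min(1.0, dot / norms)))

    def objects_in_focus(objects, head_pos, gaze_dir, cone=0.5):
        """Narrow the context of interest to objects roughly inside the gaze cone."""
        in_focus = []
        for obj in objects:
            to_obj = tuple(p - h for p, h in zip(obj.position, head_pos))
            if angle_between(gaze_dir, to_obj) < cone:
                in_focus.append(obj)
        return in_focus

    def handle_voice_command(command, objects, head_pos, gaze_dir):
        """Route a voice command only to objects the user has turned towards."""
        for obj in objects_in_focus(objects, head_pos, gaze_dir):
            if command in obj.commands:
                return f"{obj.name}: executing '{command}'"
        return "No focused object understands that command."

    # A notification sound plays from the oven's direction; the user turns
    # towards it and says "turn off".
    oven = SmartObject("oven", (2.0, 0.0, 1.0), {"turn off", "status"})
    tv = SmartObject("tv", (-3.0, 0.0, 0.5), {"pause", "play"})
    print(handle_voice_command("turn off", [oven, tv],
                               head_pos=(0.0, 0.0, 0.0), gaze_dir=(1.0, 0.0, 0.5)))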

Examples created along the way

While prototyping, we’ve built a set of examples that touch upon the various ingredients above and illustrate them in practice. We’ll be publishing the code as open source later this month on Github so you can play with the examples in action.


Rotating Buddha demonstrating how rotation mapped to a tea-cup makes for a more tangible interface


Robot arm showcasing direct manipulation with a virtual object in play


Morning briefing illustrating how different functional spaces, such as entertainment and planning interconnect


Finding your keys (right) showing 3D sound combined with visual cues in use to help you navigate space and find what you’ve lost

Amazing things to come

Mixed Reality shows great potential in applications that mix the real with the virtual — not complete immersion, like in VR simulations or games, but rather augmentation or enhancement.

The ability to augment real-world objects with contextual information unlocks a wide range of opportunities that will make us feel like super humans — from enabling us to become expert navigators to upscaling manual workforces to augmented remote assistance in the field. Combined with the ability to create interactive three-dimensional virtual prototypes that we can collaborate around in real time without losing the human connection, the new medium of MR could very well be the next big interaction paradigm.

This blog was originally posted by GlobalLogic’s experience design arm, Method.


Are we witnessing the last throes of retail as we know it in the 21st century?

“I don’t shop in stores much these days — they don’t carry the same level of choice you get online. I only go to stores to try things when I am in the mood to browse but the internet is only a swipe away, so that’s less and less now.” Sound familiar?

Across the Western World, we are seeing headlines calling out the accelerated demise of traditional retail. Some of the world’s most iconic store lights are blinking out at an unprecedented rate. In the UK, Debenhams recently announced the closure of 10 stores. In the US, there have been nine retail bankruptcies in 2017 alone — as many as in the whole of 2016. J.C. Penney, RadioShack, Macy’s, and Sears have all announced more than 100 store closures.

Why is this happening?

The usual drivers of progress and extinction are to blame. A new type of organization with a superior grasp of technology is able to meet changing consumer needs and behaviors better, faster and cheaper than the other guy. From the start, this new breed has had access to better data and algorithms that help them know you better than you know yourself. They also have a legacy-free organizational structure geared towards a more unified approach to buying, product development, and marketing that’s powered by customer data rather than fragmented departments driven by their own research, priorities, and KPIs.

Amazon and the other tech giants have a lot to do with the doom casting of the retail sector, but there are also more fundamental human factors at play. There has been a systemic shift in how and why we consume. People are eschewing the overtly branded consumerism of the previous decades for a more subtle form. In the West we are shifting our spending from buying “stuff” to experiences, such as dinner out with friends. Travel and leisure is thriving and restaurants are booming. We give no quarter to retailers who can’t deliver an engaging, Instagram-worthy retail experience that gives us the social currency we crave and all the dopamine and good vibes that result; we just expect it to happen regardless of channel or even sector.

When it launched, we all thought Amazon was a retailer, but we understand now that first and foremost it’s a technology company addressing the needs of people rather than focusing on how many books it can sell. To succeed, retailers and brands must now think about how to enable your life by creating shareable experiences that validate “you” rather than just clothing or feeding you. This is not an easy thing to do. It means navigating an ever-increasing level of complexity, and it requires a systematic, human-centered approach, driven by data insights and the application of technology to create new brand experiences.

Once a market has hit saturated commodity status like Amazon did with books, there’s really only one way to go from there: experience is the next frontier after convenience and rock-bottom prices. Amazon knows this, and is now actively investing in new, physical Amazon bookstores to close the experience loop. The same is true for groceries, spectacles, fashion, mobile phones, supermarkets, and so on. It’s not enough to deliver more stuff, faster and cheaper than the other guy anymore — it’s about why you do what you do and equally importantly how you do it.

So how do retailers adapt?

Think about connected experiences, not digital and physical.

The quicker we stop separating bricks and clicks and start taking a holistic view, the better. The customer doesn’t differentiate, and neither should retailers. For example, when starting as head of retail at Apple a few years ago, Angela Ahrendts famously displayed a commitment to this concept by uniting Apple’s digital and physical retail teams under the same roof, where previously they had been in separate buildings.

To design the right touchpoint you must know how it fits into the wider context of the customer experience. You need to know the customer, where they have come from, and what they want. This may well be informed by technology and user data, but the best solution may be digital or analog or a seamless combination of the two. For instance, a relevant store environment with a well-trained consultant on hand who’s supported by technology to enhance the customer experience.

Understand the function of each channel

The Holy Grail lies in understanding how to leverage the advantages of digital retail (i.e. customization, speed, data tracking) and physical retail (multisensory experience, human engagement) and deploy them at the right points in the customer journey.

As we’ve seen, digital retailers are opening physical stores, changing the focus of why these physical touchpoints exist, and fixing the legacy inefficiencies of the previous incarnation. For example, rather than stocking a huge number of books, which can be hard for a customer to handle physically, Amazon Books uses data-driven design to increase the likelihood that you’ll pick up a book that you didn’t know you wanted to read. The store is geared towards its inherent strengths, like browsing and discovery, rather than specific product-based shopper missions, which is a need better serviced through digital touchpoints in store.

Embrace the advantages of access to new data and processing power

By far the biggest advantage digital retailers have is their access to superior customer data. It allows companies to experiment and optimize (even if that means failing repeatedly) until they get the formula just right. But access to this kind of data is now becoming democratized. New technologies are available to all, creating a more level playing field with new APIs, surfacing AI processing power that makes sense of even the most unstructured data. Startups like Rubikloud can collect a variety of data from retailers and associated platforms, then leverage AI and machine learning to help the business synthesize customer behaviors and preferences, enabling easy pricing prediction, as well as stocking and campaign generation based on the data findings. Understanding your customers at an individual level no longer sits solely in the realm of the digitally native retailers. The quicker retailers embrace these new technologies, the quicker they’ll be able to understand the true needs of an individual customer and iterate on their offer to meet these needs.
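As a crude illustration of the kind of analysis this democratization puts within reach (a toy Python sketch, not Rubikloud's platform or any retailer's real model), a merchandising team could fit a basic model on past transactions to estimate how price and promotions affect weekly demand, and use the estimate to inform pricing, stocking, and campaign decisions. The data, features, and numbers below are invented.

    # Toy demand model: real retail systems use far richer features
    # (seasonality, store traffic, competitor prices, promotion history, etc.).
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Historical rows of (price, promotion flag) -> units sold per week
    X = np.array([[9.99, 0], [9.99, 1], [8.49, 0], [8.49, 1],
                  [7.99, 0], [7.99, 1], [6.99, 0], [6.99, 1]])
    y = np.array([120, 180, 150, 230, 170, 260, 210, 320])

    model = LinearRegression().fit(X, y)

    # Predict demand at a candidate price point, with and without a campaign,
    # to feed stocking levels and promotion planning.
    for price, promo in [(7.49, 0), (7.49, 1)]:
        units = model.predict([[price, promo]])[0]
        print(f"price={price:.2f} promo={promo} -> expected weekly units ~ {units:.0f}")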

Enable your people to think differently

Retailers now need to think like designers, software engineers, behavioral psychologists, content creators, and brand strategists all in one day and be comfortable with the methodology and technology that comes with it. To create the best possible retail experience, understanding all of the contexts is vital (brand, customer need, business/organizational structure, technology, customer journey, partners, innovation, social media, product design, service design, etc.).

Structure your business to act differently

Digital transformation is as much about people as it is about technology. It’s futile to think people can change overnight (or at all); it takes time and commitment. There will always be natural attrition as companies embrace new ways of working. Previously disconnected departments will need to work together. IT, engineering, logistics, product, marketing, and brand teams (both digital and physical) will need to work closely with each other to be able to deliver a holistic experience.

We are regularly asked to help companies navigate this change, often to pick up the pieces when companies have jumped in feet first with scrum teams and agile delivery aspirations only to realize they’re failing. Every company is (or should be) different. Some will embrace change more readily than others, but all must take it seriously and invest in the technology, tools, skillsets and culture to escape extinction.

What we are witnessing is the inevitable rationalization of the market, a natural evolutionary process in which dinosaurs must die. But this is never going to be as simple as clicks beating bricks. Physical retail isn’t dead, but stores will never be the same again, and neither will the companies that drive them. It’s a textbook case of convergence and evolution.

Remember, not all of the dinosaurs died — some of them grew feathers and learned to fly.

This blog was originally posted by GlobalLogic’s experience design arm, Method.


Illustration by Joshua Leigh

“Sorry, I’m not sure about that,” answered Alexa defeatedly. There’s no doubt that AI is a hot topic right now. Whether it’s financial services or healthcare, most industries are looking to capitalize on the technology for their benefit.

Looking at the hotel industry specifically, is it another example of technology looking for a problem to solve? To answer the question, we need to consider where AI is currently in use, examine the heady design considerations around the technology, and articulate what it could mean moving forward.

In the first place, AI is not new to the travel and hospitality industry. At the front end of the guest journey, companies such as Expedia and Kayak have been investing in the technology. Meanwhile at the back end, it’s already in use by review sites like TripAdvisor. Our focus, however, is on the in-hotel moment of the guest journey as this is the area that has the biggest impact on their overall experience as well as where brand experience and design are most closely entwined.

A number of technologies and solutions fall into one of three categories: portable, embedded, and autonomous.

Portable taps into what most guests carry with them all the time: their phone. Recently, Edwardian Hotels launched Edward, a virtual host running on SMS that lets even guests without smartphones use the service. Similarly, Starwood have created a hybrid web chat and chatbot experience that runs on WhatsApp, BBM, and iMessage.

In the embedded category, we’re starting to see services integrated with existing ecosystems such as Siri and Alexa. The Wynn Hotel in Las Vegas is installing Amazon Echo in all of its 3,500-plus rooms. Competition is heating up for tech companies too. At its Aloft Boston Seaport site, Marriott is pitting Apple and Amazon capabilities against each other in a pilot study.

In terms of autonomous technologies, one example that’s attracted considerable publicity is the somewhat gimmicky Henn-na Hotel in Japan (Figure 1), where floor staff have been replaced entirely by robots. How effective the robotic update is remains to be seen. At the same time, Hilton is collaborating with IBM to test Connie, a robot concierge powered by Watson. Outside the hotel sector but along the same lines, two hospitals in Belgium are trialling Pepper robots to respond to basic requests and guide visitors to their destination. Pepper is 140 cm tall, can converse in 20 languages, and can detect whether it is talking to a man, woman, or child.

Fig. 1 Henn-na Hotel, Japan

Despite the varied activity in this space, AI is still in an exploratory phase and largely an untested customer value proposition. As people start to feel more at ease with the technology, designers need to pay greater attention to how AI fits into the wider guest experience. There are key design considerations we expect to surface as hotel brands start to explore opportunities associated with this emerging technology.

Context of Use

Decisions about how, where, and in what way to apply the technology will depend on each hotel’s ambition and operations. Some hotels may opt to just provide the technology in guest rooms, while others may seek to offer it threaded throughout various touchpoints, such as reception, bar, business rooms, elevators, gym, etc. In room, there are a variety of possibilities, ranging from tablets and TVs to smart speakers and custom builds, running on platforms such as Amazon’s Alexa Voice Service (AVS). Elsewhere, it may be preferable to make more use of guests’ mobile devices, in combination with beacons and sensor networks.

Interaction Paradigm

Taking into account the interaction paradigm is paramount — should it be screen based, manifest as voice or gesture, or a hybrid approach? Smartphones and tablets are now commonplace, so delivering services via a chat function on an interface is familiar and natural. This route also opens up scope for “UI on demand,” which will make it easier to handle variables and options. Providing services through voice is more challenging, as people have less established mental models of how to interact with the technology in this way, especially outside the home. In its current state, AI struggles to deal with complex requests and variables. It’s likely that in the interim, voice solutions with blended interfaces will be the predominant model, where the conversation is augmented via a screen such as a tablet, TV, or the recently launched Amazon Echo Show (see Figure 2).

Fig. 2 Amazon Echo Show

From a brand perspective, the wake word will be another significant consideration. Hotels may choose to use generic wake words such as Alexa and Siri, or create their own branded versions (Figure 3), built on platforms like IBM Watson, Amazon AVS, and Microsoft Cognitive Services.

Fig. 3 Generic vs branded ‘wake word’

User Authentication

With digital services, authentication is an important dimension, especially when payment is involved. The “portable” route previously mentioned covers this to some extent, as smartphones typically have authentication technology embedded and are already tied to their owner’s accounts. For the “embedded” and “autonomous” routes, however, authentication remains essential, especially when minors, other family members, and visitors are involved.

Data Ownership

Clearly there will be questions about who has access to any data generated by guests during their stay, and how it’s used. Let’s not forget, hotels are places where guests appreciate their privacy — what happens in the hotel stays in the hotel.

So hotels will need to think about how providing guests with a facility to manage their data fits into the wider brand experience. For large hotel chains, it’s imperative to take into account the impact of regional data privacy laws vis-a-vis global guest experience ambitions. On the flip side, guest data has the potential to provide hotels with a competitive advantage by enabling them to deliver a more uniquely tailored experience.

Back of House Integration

Last but by no means least, there are questions around quality and performance. There are the obvious challenges around the interface itself and how it copes with complexity and nuance in guest requests, but also important aspects such as speed, accuracy, and choice. Many of these aspects will rely on a human back of house, whether it’s picking up a guest conversation from a chatbot or loading a delivery robot (Magic is a great example outside the hospitality sector). All this is likely to require additional training, new job descriptions, revised procedures, and changes in shifts, to name just a few.

So What Does This All Mean for Hotels?

With technology integrating ever more seamlessly into most aspects of our lives, it’s just a matter of time before we see wider adoption of AI in the hospitality business. While we have alluded to some exemplar use cases, the question remains what the end value is to both the customer and the business. With this in mind, there are two main ways we think AI could help to improve the hotel experience.

An Enhanced Guest Experience

As a temporary (or, for some, less temporary) substitute for our homes, hotels are personal and intimate places that need to accommodate a number of guest mind states. Whether at the lower or upper end of the price scale, they are tactile, human, and service-oriented environments where we eat, drink, work, sleep, and socialize. The hotel industry is facing a number of disruptions right now, Airbnb being just one.

Online travel agents (OTAs) and review sites are tightening their grip on both the nose and tail end of the guest journey and commoditizing the in-hotel guest experience in the process. It’s up to hotels to seize this opportunity and delight guests during the part of the journey they own — the actual stay. AI has the potential to empower hotels to deliver services at both ends of the price spectrum. At the lower end, it can contribute towards more customized self-service experiences, while at the upper end it can support staff to deliver deeper and more personalized guest experiences.

A More Efficient and Effective Back of the House

Hotels operate on ever tighter margins. Yield, occupancy rates, and waste all impact operations, and these pressures only escalate for the large hotel groups. If AI can streamline operations in hotels and enhance services, the net result is greater value for owners and guests alike.

AI offers the possibility of addressing the minutiae of small but often important requests that are tedious for both guest and staff to deal with: finding out when a restaurant closes, calling for an iron, or asking for more soda water. Crowne Plaza has made use of the technology at its San Jose-Silicon Valley site with the Savioke Relay delivery robot. But AI also offers the opportunity to address more complex tasks, such as splitting a meal bill with a colleague, placing a request in a language not spoken by hotel staff, or getting pizza delivered.

Savioke Relay delivery robot

Where it will all end up remains to be seen, but it seems there are undeniable benefits in our midst. The bigger question lies around who will own the experience. There’s considerable value in guest data, but who ultimately benefits from it will depend on some of the considerations and approaches we outlined. In the way that Airbnb owns the end-to-end journey of its customers, can hotel chains sustain the onslaught from OTAs such as Expedia and Kayak, review organizations like TripAdvisor, and tech companies like Google, Amazon, and Facebook? This is one question I know Alexa won’t be able to answer, but she’ll invariably be a part of it.

Introduction

This year, the Mobile World Congress (MWC) took place in Barcelona, Spain. More than 108,000 visitors from 208 countries and territories attended this annual event, where the most prominent companies in the industry showcased the newest technologies and most innovative products for the mobile industry.

The most popular demos at MWC17 revolved around Virtual Reality (VR), Augmented Reality (AR), chatbots, and personal digital assistants (powered by AI). Most importantly, nearly every top vendor and operator had something to say about 5G and IoT. Let’s take a closer look at these two important drivers of technological disruption.


5G

Opportunities for Operators
5G provides operators with an opportunity to move beyond connectivity (i.e., remove “tele” from “telecommunications”) and to collaborate across sectors (e.g., finance, transportation, energy, manufacturing, education, agriculture, retail, health) to deliver new services and find new revenue streams. 5G is the key to making mobile networks a general-purpose technology like electricity. It’s a future-generation platform for the next decade and beyond.

Opportunities for Vendors
Revenues for infrastructure vendors like Ericsson, ZTE, and Nokia are being squeezed. As of January 2017, a total of 581 commercial LTE networks had been launched globally, and all indicators point to a year of LTE decline as a result of diminishing rollouts worldwide. This leaves a gap between lagging 4G equipment sales and the start of 5G equipment sales. Another worry for telecom equipment makers is the growing momentum of programmable, software-centric networks, driven by the adoption of Network Function Virtualization (NFV) and Software-Defined Networking (SDN). This evolutionary leap has led to the introduction of a wide range of innovative pure-software players (e.g., Big Switch Networks, Affirmed), primarily because networks are being increasingly commoditized and intelligence is moving to the software layer.

This is one reason why most vendors and operators are pushing a 5G rollout — it brings new markets and revenue streams.


5G Use Cases
I saw several interesting demos around autonomous vehicles, such as the first live pre-5G over-the-air wireless interoperability between the third generation of the 5G Intel Mobile Trial Platform (i.e., the UE) and the Ericsson 5G Radio Prototype system. This accomplishment was demonstrated live over-the-air at MWC via two use cases: virtual reality and autonomous driving. For the automotive use case, there was a demonstration of 5G 28 GHz over-the-air connectivity between an Ericsson base station and an Intel GO Automotive 5G Platform located in the trunk of a BMW 740i (see Figure 1). These two tech giants conducted by far the most public 5G demos at the event.

Figure 1. Intel GO Automotive 5G Platform in the trunk of a BMW 740i

Other interesting 5G demos included a 3.6 Gbps 5G Connected Car (Ericsson and BMW), the first 5G remote driving concept, shown in Figure 2 (Telefonica and Ericsson), and the world’s first intercontinental 5G trial network (SK Telecom and Ericsson).

Figure 2. 5G remote driving concept

The other prominent event around 5G was the Global 5G Test Summit. Supported by industry organizations like 3GPP, ITU, NGMN, GTI, and GSMA, this event brought together global operators, vendors, and telecom organizations from across the mobile industry. During the Summit, 25 mobile operators announced that they are lab-testing 5G — twelve of whom reported having progressed to field testing and four of whom announced plans for 5G trials (according to a report by Viavi Solutions). The event concluded with the release of a “Global 5G Test/Trial” declaration, which aims to promote a unified standard and accelerate the maturity of the 5G industry towards commercial deployment by strengthening cooperation between vendors, telecoms operators, and vertical industry partners.

IoT

Operator Opportunities
The 3GPP-approved LPWA standards in the latest 3GPP Release (June 2016) for the licensed spectrum provide operators with new IoT opportunities while allowing them to continue using their existing LTE networks. These standards cover LTE Machine-Type Communication (LTE-M) and Narrowband IoT (NB-IoT), and both run on LTE networks. NB-IoT commercial deployment will be fast, as carriers’ existing networks can support NB-IoT with just a software update. In 2017 alone, over 25 NB-IoT networks (e.g., Vodafone, DT, Telefonica, China Mobile, KT) will be deployed in more than 20 countries across the world. Such fast adoption is being driven by the benefits this technology brings to the market, namely: better indoor coverage, multi-year battery life (more than 10 years on two AA batteries), reduced device costs (less than 5 US dollars per module), and significant coverage extension over existing cellular technologies.

LPWA Opportunities
Various LPWA solutions were on display across the conference area, highlighting industry support for use cases such as smart manholes (Ericsson, Intel, Telit), smart buildings (Ericsson, Intel, Gemalto), connected factories (Ericsson, China Mobile, Intel, Fibcom), smart gas meters (Nokia, Telit), smart buildings (DT, Ista), safety jackets (KT, Kolon Industries), smart parking (Huawei and Vodafone), and a lost & found tracker (Telefonica). Most of these deployment-ready concepts are end-to-end “sensor-to-cloud” solutions and demonstrate the business case for NB-IoT, leveraging features like low data rates, low battery power, extended coverage, and pure data-only applications with a fixed installation.

IoT Use Cases
This year, the main focus of IoT use cases was on enterprise and industrial applications, as vendors seek to promote IoT’s role in industrial transformation. General Electric (GE), Qualcomm, and Nokia showcased a private LTE network in an unlicensed spectrum for Industrial IoT (IIoT) use. Moreover, the companies announced plans to lead live field trials this year based on this demonstration, which are designed to promote the digital transformation of industrial processes. In another interesting demo, Korea Telecom partnered with clothing manufacturer Kolon to create a connected jacket (Figure 3) that is designed for use in remote locations. It can sense the unusual movements of a person in danger or distress and then automatically send signals to rescue teams.

Figure 3. IoT Safety Jacket

IoT brings efficiency for everyone, and pets are no exception. Telefonica demonstrated an IoT dog health and activity tracker (Figure 4) that tracks and analyzes the animal’s activity. It shows how active the animal was throughout the day and whether its activity goal has been achieved. Based on the collected data, the smart device provides individual recommendations on how an owner can exercise and feed the dog to keep it healthy.

Figure 4. IoT Smart Pet Tracking (right) and Dog’s Profile (left)

Nevertheless, many challenges remain around the lack of interoperability across multiple standards and platforms, fragmentation, security, and the identification of viable business models.

Accelerators for 5G and IoT

Here are some key takeaways from MWC17 regarding how software development can accelerate the mass rollout of 5G and IoT solutions:

  • Agile operation processes like DevOps have inspired continuous integration, allowing the whole industry to change rapidly.
  • Businesses need to adopt new approaches to building software for NFV/SDN (e.g., an evolution from VM-based implementation to microservices-based architecture with the help of container technology such as Docker) because the use of 5G and IoT will connect millions of devices and result in enormous numbers of sessions within networks. By utilizing a microservices architecture, operators can improve their speed in setting up these sessions, more efficiently manage the number of sessions per server, and lower the end-to-end costs of processing all these transactions.
  • The benefits of a microservices architecture and Agile development and DevOps processes require true business transformation, encompassing processes, engineering, operations, IT organization, and culture.
  • Businesses need to balance Edge Computing and Cloud Computing. Edge Computing refers to placing data processing power at the edge of a network instead of in a cloud or central data warehouse. The main reason for deploying Edge Computing is to significantly reduce network latency, boosting performance for time-critical applications like autonomous vehicles, industrial IoT, remote surgery, robotics, and all the IoT objects that will be created over the next decade. In addition, it’s the logical next step for the computer industry: just as centralized mainframes led to decentralized client-server architecture, which then swung back towards centralized Cloud Computing, it’s highly likely that in the future we will observe a paradigm shift towards decentralized Edge Computing. We can already observe several alliances in the market, such as the OpenFog Consortium, the ETSI Mobile Edge Computing (MEC) group, and OpenEdge Computing. At MWC, Huawei also launched its Edge-Computing-IoT (EC-IoT) solution.
  • To improve their competitive position, a number of forward-looking telecom service providers and network equipment makers have embarked on exploring the advantages of open source in projects such as OCP (Open Compute Project), OSM (Open Source MANO), CNCF (Cloud Native Computing Foundation), OCI (Open Container Initiative), CORD (Central Office Re-architected as a Datacenter), and ONAP (Open Network Automation Platform, a merger of the Open Source ECOMP and OPEN-O projects).

Conclusion

Vendors and operators are making eager progress towards 5G commercial rollout and massive IoT availability across the globe, from both technology and commercial perspectives. MWC 2017 demonstrated the significant work being done by industry players across multiple areas to bring these technologies to the mass market, with clear use cases being developed for both businesses and end users.

Another key point is that the move to software-centric networks will lead to a wave of innovation and a growing number of new providers offering new services; the network itself becomes an API (Application Programming Interface). Future generations of product development services for the communications industry will be one of the drivers that will speed up transformation in this area.

Creating Content That Counts

About 300 hours of video are uploaded to YouTube every minute, but how much of it has informational or aesthetic value? I would suggest not much. The majority of users do not bother to shoot and edit their videos properly, especially when it comes to mobile video. Although there are many video editors available in the Android and iOS app stores, ranging from simple to quite advanced, most content creators just want one button: "make it pretty." And the very place you can find this magic button is the Magisto app.

How does it work? After shooting photos and videos with your smartphone, you simply select the style you want for your video (e.g., romantic, travel, extreme sports) and the theme music (from the app or personal library). Then the application optimizes the video and sends it to the cloud, where all the magic happens. Magisto analyzes content, selects significant objects and the most interesting moments, assembles them into a single video, adds special effects and background music to the beat of the events, and exports a finished clip. It takes just a few minutes and requires minimal user interaction.

Today, the service has about 80 million users worldwide. In addition to the web version, Magisto has both iOS and Android apps. The latter has received the Best Android Apps Awards twice (in 2013 and 2015), as well as the title of CES App of the Year 2015. Also, Magisto for Android is included in both the Google Play Editors' Choice and Google Play Top Developer lists.

Few people know that Ukrainian engineers from GlobalLogic developed the Android version from start to finish — and continue to work on it now!

The Beginning of Our Collaboration

Our collaboration with Magisto began in early 2012, when the company was a small Israeli startup. At that time, no one could imagine that a 5-week proof-of-concept would turn into something serious. The challenge was determining whether it was possible to create a fast and reliable video transcoding app that would be suitable for any one of a zillion Android devices.

In the days of Android 2.1, Google did not provide an API for working with video at a low level. Existing solutions took at least 3-6 minutes to convert 1 minute of original video, which was unacceptably long. So in 5 weeks, we developed an application for processing and compressing videos. The first public version of Magisto for Android was introduced in just 2.5 months.


Since the customer did not have experience working with Android, GlobalLogic took complete ownership of developing the Android version of Magisto while the customer developed the iOS version. The workflow at a startup is very quick, so we had to perform a specific amount of work in a specific period of time. This was not always easy, as the time to implement the same feature in iOS and in Android could vary significantly.

Just imagine: we make a minor release every two weeks and a major release every four weeks. This is a very fast pace, and each member of the team bears high responsibility for the result. We have been collaborating with Magisto for almost 5 years in such a way, and it is the longest-running Android project in our Lviv office.

For the first 2-3 years, we were developing only basic functionality, as the project grew very quickly. Initially, we were drafting six-month product development roadmaps with the product owner. But every month or so, such plans became irrelevant, so we changed to monthly planning.

What Do We Do Now?

Now that the basic functionality of the product is ready, we have shifted our focus a little. Currently, we make changes that are hardly visible to the user but that significantly influence their level of satisfaction.

For example, when you change the size of the window during video playback (i.e., when you go to full-screen mode) or rotate the screen on your smartphone, the standard Android API tools do not always work correctly. The picture twitches because a few frames are dropped, and the user gets annoyed. Most users probably never noticed that this problem was solved, but, according to our statistics, their level of satisfaction has increased. Interestingly, we corrected this bug before even the YouTube app or Instagram did! At the same time, we continue to develop the main product and add new features to it.


Here's another story: when a low-level video editing API first appeared in Android 4.1, each smartphone manufacturer began to use it in its own way. As a result, video quality varied across devices. Moreover, a lot of bugs appeared in the course of video processing. We had to come up with a specific solution that took into account the peculiarities of videos made with particular devices.

Eventually, we identified the devices with common video problems and prepared a set of ready-made "patches" to solve them. Furthermore, our team introduced a system that automatically checked the quality of incoming videos and of the pictures produced by Magisto. Depending on the problems with the clip (e.g., missed frames, distortions or artifacts, wrong colors), the system applies specific algorithms to remove the bugs on a certain device.

If the selected solution delivered good results across a certain number of cases, the kit of "patches" was included in the software build for that particular device. Thus, we did not have to reinvent the wheel each time we processed video, and we were able to implement hardware video acceleration for a large number of devices.

Looking for New Opportunities

At the moment, the market is heavily glutted with consumer video editors, so Magisto is seeking new niches. That’s how “Magisto for Business” appeared. Generally speaking, this is enhanced Magisto functionality that enables business users to quickly and cheaply create interesting, high-quality videos.


Potential customers of this service include fitness clubs and trainers, small retailers, goods manufacturers, real estate agents, etc., who need to create promotional videos or presentations online. Basically, this service is for anyone who would like to create an attractive advertisement for their products or services but does not have the time to do it themselves or the resources to hire a professional production company.

The main task here in terms of development is to create advanced functionality for business users (e.g., adding voiceovers, logos, or text to the video) while maintaining a user-friendly interface.

Project Features

Since we perform full-cycle development for the Android version of Magisto, our team consists of specialists who are able to work on every aspect of the application. We decide how to design, test, and release new features in the product within a specified period. As with any startup, each team member carries significant responsibilities. However, unlike projects that follow a traditional pattern of developing a feature, testing it, and returning it for bug fixing, we have to move very quickly in order to release new features every few weeks.


When someone submits a feature, it means that it has already been tested and is fully functional. Thus, the programmer has to think about the tests and the analytics along with the code. Not all processes are clearly defined, meaning the programmer is completely responsible for deciding how to accomplish a particular task. Constant resource limitations demand that those resources be used intelligently: CI automation, automated tests, and so on. There is no other way. QA specialists are always 100% loaded, so a developer should clearly understand the scope of things to be checked.

Besides, the team must be able to communicate properly with the manager and the product owner. A person who is not deeply into Android may sometimes find it difficult to believe that while developing a certain feature for iOS takes only a couple of days, developing it for Android takes about a week.

Although our project is very dynamic, it does not mean rush jobs or overtime. As far as I recall, we never had to work on weekends. Usually we spend 8.5 hours in the office, keeping a healthy work-life balance. This, of course, is not the only advantage of our project.

What is much more important about this project is that it has helped our people grow in the technical field very quickly. Team members understand that a single mistake today could prevent 50 million users from buying something tomorrow. Although the occasional mistake is inevitable, we help our team members learn from their mistakes and stay motivated to avoid them in the future. This project puts a major emphasis on personal responsibility, which is a valuable quality to have in both a startup and one’s own personal life.


A True Partnership

Although it is usually assumed that big companies such as GlobalLogic hinder rather than help startups, I can personally say that our collaboration with Magisto proves the opposite to be true. Our relationship with Magisto is based on a true partnership that is beneficial to everyone. We help Magisto develop its product dynamically, while Magisto helps us build our technical expertise, gain valuable experience, and attract new customers.

Oleksandr Odukha is a Senior Project Manager at GlobalLogic’s Lviv Product Engineering Solutions center.

Swift is quickly growing into one of the top programming languages. It has overtaken Objective-C and become the 14th most popular language in the TIOBE Index. Some of the reasons for this popularity are safe memory management, strong typing, and generics. Swift is cleaner and more readable than Objective-C, modules eliminate class prefixes, a project has half as many files, and the closure syntax is understandable — the list of benefits goes on. Overall, with Swift things have improved dramatically, and code has become simpler and more stable.

When transitioning from Objective-C to Swift, it's logical to map concepts you know in Objective-C onto Swift. You know how to create classes in Objective-C, and now you know the equivalent in Swift. However, every programming language has specific features that are core to its design, that make it different from other languages, and that the designers of the language deliberately put into it to make things easier for programmers. You should always think from the perspective of these language-specific features when designing the structure of your program in that language. Swift does more than provide a better syntax for your app; it gives you the opportunity to change the way you tackle problems and write code.

In this article, we will look at some design and coding guidelines that will help you apply the benefits of these Swift features to your programs and make your code more Swift-like.


Swift is designed to be safe, so make use of Swift language features to make your code safe and robust.

For example, an “array index out of bounds” error is a common exception condition in other programming languages. In Swift, you can prevent it almost entirely, because the array APIs are designed so that you rarely need to use an index at all. There is an extensive set of collection access and iteration operations and syntax you can use, with internal bounds checking that makes them reliable. For example, the first property of Array is equivalent to "isEmpty ? nil : self[0]". Another sign that Swift wants to discourage you from doing index math is the removal of traditional C-style for loops from the language in Swift 3.

Another example of a common exception condition is the null pointer exception. This can also be avoided in Swift with the help of Optional, which gives you a strict compile-time check on nullable variables.
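To make this concrete, here is a minimal sketch (the array, values, and names are illustrative, not from any real project) showing how first, iteration, and Optional let you avoid both index math and nil crashes:

    let scores = [72, 88, 95]

    // `first` returns an optional instead of risking an out-of-bounds subscript.
    if let topScore = scores.first {
        print("First score is \(topScore)")
    }

    // Iteration needs no index arithmetic at all.
    for score in scores where score > 80 {
        print("High score: \(score)")
    }

    // Optional forces the nil case to be handled at compile time.
    let nickname: String? = nil
    print(nickname ?? "No nickname set")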

Default to structs unless you actually need a class-only feature or reference semantics.

Structs are preferable if the entity you want to create is relatively small and copiable because copying is much safer than having multiple references to the same instance, as happens with classes. This becomes more important when you are passing around a variable to many classes and/or in a multithreaded environment. If you can always send a copy of your variable to other places, you never have to worry about that other place changing the value of your variable. Moreover, with structs, there is much less to worry about in regards to memory leaks or multiple threads racing to access/modify a single instance of a variable.
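A small sketch of the difference (hypothetical types, chosen only to illustrate value vs. reference semantics):

    struct Point {              // value type: each assignment makes an independent copy
        var x = 0.0
        var y = 0.0
    }

    let a = Point(x: 1, y: 1)
    var b = a                   // b is a copy of a
    b.x = 99
    print(a.x)                  // 1.0 — the original is untouched

    final class Counter {       // reference type: assignments share one instance
        var value = 0
    }

    let c1 = Counter()
    let c2 = c1                 // c2 refers to the same object
    c2.value = 99
    print(c1.value)             // 99 — both references see the change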

Mark classes as final unless you have explicitly designed them to be inheritable.

Inheritance is a very useful tool, but it's also very overused. It should be used when the classes form a strict hierarchy, where a subclass is its parent class in every sense of the word. Too often, inheritance is used merely as a convenient means of code reuse, and this is a big part of why it gets a bad reputation. For such cases, use composition or extensions instead. Also, if you want to use inheritance internally but not allow subclassing by external clients, mark the class public but not open. Always start by marking the class final, unless you have a very good reason for not doing so.
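As a rough sketch (the class names are invented for illustration), this is what final and "public but not open" look like in practice:

    // Marked final: no one can subclass it, and the compiler can devirtualize its methods.
    final class SessionLogger {
        func log(_ message: String) {
            print("[session] \(message)")
        }
    }

    // In a framework, `public` (but not `open`) exposes the class to other modules
    // while reserving subclassing for code inside the defining module.
    public class ThemeProvider {
        public init() {}
        public func color(for key: String) -> String {
            return "default"
        }
    }

    SessionLogger().log("started")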

Use guard to exit functions early.

The basic idea behind “guard” is to bail out as soon as possible. It is used to check a set of requirements that must be met before the rest of the method's body is executed. This can also be done using conditionals, but conditionals are often the very cause of complexity. Nested conditionals and multiple conditions can make it difficult to find bugs, make the code hard to understand, and make it easy to overlook edge cases.

The guard statement is ideal for getting rid of deeply nested conditionals whose sole purpose is validating a set of requirements. It makes the code more understandable because its syntax is more explicit about the requirement than a regular “if” statement. A guard statement is just as powerful as an if statement: you can use optional bindings, and “where” clauses are also permitted.
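Here is a minimal, illustrative example (the Reservation type is hypothetical) of guard replacing nested conditionals so the happy path stays unindented:

    struct Reservation {
        let roomNumber: Int?
        let nights: Int
    }

    func confirm(_ reservation: Reservation?) {
        // Bail out early if the requirements are not met.
        guard let reservation = reservation,
              let room = reservation.roomNumber,
              reservation.nights > 0 else {
            print("Invalid reservation")
            return
        }
        print("Confirmed room \(room) for \(reservation.nights) night(s)")
    }

    confirm(Reservation(roomNumber: 204, nights: 2))   // Confirmed room 204 for 2 night(s)
    confirm(nil)                                        // Invalid reservation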

Use extensions over inheritance for flexibility.

As we saw in the section above, inheritance is a good tool, but it has been overused and has its own evils. Swift's designers came up with a good alternative called “extensions,” which lets you extend the functionality of a class, struct, enum, or protocol with ease. You can use extensions when you want to share common methods among related types, or when you want to add functionality to library classes. The Swift standard library itself uses extensions heavily, and it is a good point of reference for learning how to use extensions to improve your code design.
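For instance, a protocol extension can share behavior across unrelated value types without forcing them into a class hierarchy (the types below are made up for the example):

    protocol Summarizable {
        var title: String { get }
    }

    // Shared behavior lives in an extension instead of a common base class.
    extension Summarizable {
        func summary() -> String {
            return "Summary: \(title)"
        }
    }

    struct Article: Summarizable {
        let title: String
    }

    struct Podcast: Summarizable {
        let title: String
    }

    print(Article(title: "MWC17 Recap").summary())
    print(Podcast(title: "Designing for Voice").summary())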

Default to immutable variables (let) unless you know you need mutation.

When you read a declaration like "let some = ...", you know that the value of some will never change; it's enforced by the compiler. This helps greatly when reading through the code. Thus, always default to immutable variables (let) unless you know you need mutation. But don't force it if mutation makes the code clearer or more efficient. Note, however, that this is only true for types with value semantics. A let variable holding a class instance (i.e., a reference type) only guarantees that the reference will never change (i.e., you can't assign another object to that variable); the object the reference points to can still change.
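A short sketch of that caveat (illustrative names only): let freezes a value type completely, but for a class instance it only freezes the reference.

    let greeting = "Hello"        // value type bound with let: fully immutable
    // greeting += ", world"      // compile-time error
    print(greeting)

    final class Basket {          // reference type
        var items: [String] = []
    }

    let basket = Basket()         // `let` only fixes the reference...
    basket.items.append("soap")   // ...the object's contents can still change
    // basket = Basket()          // error: cannot assign to a `let` constant
    print(basket.items)           // ["soap"]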

Avoid using force unwraps and implicitly unwrapped optionals.

You define an optional as implicitly unwrapped when you define its type like "let x: String!". This technique allows you to tell the compiler to automatically unwrap that value as if it wasn't optional at all.

In force unwrapping, you add a "!" after an optional value to automatically unwrap it, without having to check whether it is nil. Unlike implicit unwrapping, this technique is used on existing values (e.g., "let firstLength: Int = strings.first!.count").

Both of these approaches are dangerous. Unwrapping an optional value without taking its nullability into account can crash your app. There are some cases when implicitly or force unwrapping an optional makes sense, such as outlets. It is good practice never to use "!" apart from outlets; rather, use "if let" and "guard let" in your code to avoid accessing a nil value and crashing the app.
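As an illustrative sketch (the dictionary and function are hypothetical), here is the same lookup written with a force unwrap, an "if let", and a "guard let":

    let prices = ["latte": 3.50, "espresso": 2.75]

    // Risky: this would crash at runtime if the key were missing.
    // let cost = prices["flat white"]!

    // Safe: handle the nil case explicitly with optional binding.
    if let cost = prices["espresso"] {
        print("Espresso costs \(cost)")
    }

    func charge(for drink: String) {
        guard let cost = prices[drink] else {
            print("\(drink) is not on the menu")
            return
        }
        print("Charging \(cost)")
    }

    charge(for: "flat white")   // flat white is not on the menu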

Using higher-order functions like “map,” “filter,” and “reduce” makes the code more readable. But don't force it; if a simple “for loop” does the job, then use it.

Swift is influenced by functional programming. One of its key functional contributions is polished support for higher-order functions. (A function is "higher-order" if it has one or more parameters that are functions and/or if it returns a function.) In Swift, passing a function really means passing a closure. Some of the higher-order functions available in Swift include:

  • Map: Loops over a collection and applies the same operation to each element in the collection
  • Filter: Loops over a collection and returns an array that contains elements that meet a condition
  • Reduce: Combines all items in a collection to create a single value
  • FlatMap: When implemented on sequences, flattens a collection of collections

Using higher-order functions makes code easier to understand and less cluttered.
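A brief, illustrative sketch of the four functions above (the data is invented):

    let ratings = [4, 9, 7, 10, 3]

    let highRatings = ratings.filter { $0 >= 7 }                      // [9, 7, 10]
    let stars = highRatings.map { String(repeating: "*", count: $0) } // ["*********", ...]
    let total = ratings.reduce(0, +)                                  // 33

    // flatMap flattens a collection of collections into a single array.
    let weeklyRatings = [[4, 9], [7], [10, 3]]
    let flattened = weeklyRatings.flatMap { $0 }                      // [4, 9, 7, 10, 3]

    print(stars, total, flattened)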

Write extensions on existing types and protocols, instead of free functions.

Swift extensions are powerful, as they enable you to add behavior to any class, struct, or enumeration — even if you don't have access to the original source code. This means you can add behavior to even primitive types like Int and Double using extensions.

Extensions encourage code reuse by encapsulating behavior that will be used more than once in your project in a single location. Additionally, they promote good code organization, leading to cleaner and more readable code when used to add behavior that's closely related to the type being extended. Apple's Cocoa Touch frameworks, such as Foundation, are good references for how extensions can be used to organize code, as they use extensions to extend the behavior of most of their classes, structs, and enumeration types.
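As a minimal sketch (assuming Swift 4 or later; the names are hypothetical), compare a free function with the same behavior expressed as an extension on String:

    // A free function works, but the behavior is not discoverable on the type itself.
    func wordCount(of text: String) -> Int {
        return text.split(separator: " ").count
    }

    // An extension keeps the behavior where callers will look for it.
    extension String {
        var wordCount: Int {
            return split(separator: " ").count
        }
    }

    print(wordCount(of: "make it pretty"))   // 3
    print("make it pretty".wordCount)        // 3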
