
Monthly Archives: April 2021

Digital Experience Platform Tech Trends To Look Out For In 2021

Technology is evolving at such a pace that staying at the top of our game has become a core responsibility. What was considered cutting-edge technology 10 years ago seems dated now.

The year 2020 turned out to be a leap year for technology. The pandemic changed the way we think, work and use technology. The silver lining is digitalization: its advancements have enabled enterprises to grow and become more intuitive and innovative, which in turn ensures a satisfying customer experience and helps enterprises stay in the market longer.

But the hard fact is that the headless CMS (Content Management System) and DXP (Digital Experience Platform) industry was shaken up by the sudden rise of new technologies, software architectures and other interruptions to established workflows, underlining the fact that traditional CMSs aren't ready for today's tech market. It is time to turn towards DXP.

Have you checked out our detailed blog on DXP and its importance yet?

No one can predict the future, but we can all make educated guesses based on knowledge and experience. With that in mind, we have listed the tech trends for 2021 and their impact on the future of DXP and CMS.

Data Protection is and Will be the Mantra.

Many companies today are aware of the significance of data management and protection. DXPs (Digital Experience Platform) are deployed to protect customer data and prevent any breach of trust.

In 2021, the value of data is increasing by leaps and bounds, thanks to omnichannel marketing and behavioral advertising. It has become the responsibility of DXPs to provide exceptional privacy, data control and data protection policies to backend users and to the customers who interact with enterprises at every touchpoint.

AI-based Personalization Gets a Thumbs Up

Personalization has forged ahead, and how! Rule-based personalization is an age-old approach that works well even now, but ever since people identified the potential of AI, there has been no looking back.

AI has been successful in delivering customized content and product-based suggestions and it is phenomenal at directing users to digital experiences based on their actions.

By the end of 2021, personalization will have become less taxing, more mainstream and more widespread, as online retail becomes the default for customers. DXPs have already started benefiting from AI by giving customers personalized offers, such as 'shop the look' options based on user data collected from the platform.

Companies Swear by Omnichannel Customer Experiences

Omnichannel marketing has become the touchstone of the digital world today. Marketers can now look deeper into their campaigns, check how campaigns convert into sales, and analyze which channels are productive and why. An omnichannel-ready DXP comes with analytical tools that report on customer journeys across devices.

Multi-Cloud Environments Will be in Demand

In a multi-cloud strategy, cloud assets, software tools and applications are distributed across multiple cloud environments. This limits the impact of failures and reduces downtime. With a multi-cloud approach, companies can make the best use of the features and tools of each cloud environment; they can overcome vendor lock-in and gain autonomy over their businesses.

From a DXP perspective, multi-cloud improves the platform's flexibility and its ability to handle a crisis.

Growth in Microservices Architecture

Traditional monolithic architecture will be slowly and steadily replaced by microservices architecture in both small and large enterprises. Although the overall architecture is more complex, the individual microservices are easier to design, build, deploy and maintain. Microservices enable seamless workflows and near-instant software updates, and developers get the option to build modular software with interchangeable parts.

Breakthrough in Edge Computing

Edge computing is not a new thing. The technology has existed for the past few years and is mostly used in CDNs (Content Delivery Networks). With the rise of EVs, connected vehicles, IoT-powered devices and 5G networks, there will surely be a surge in edge computing in 2021.

Edge computing complements the low-latency promise of 5G networks. It helps DXPs respond to growing data consumption and brings flexibility and content-delivery speed by taking advantage of the distributed computing capabilities of an edge network. Edge computing is clearly gaining momentum.

React Keeps Booming

Front-end architectures and libraries like React help resolve development issues when building manageable and modular applications. React simplifies the design of interactive user interfaces: developers can build UI components and complete user interfaces, including all the visual elements and the logic that controls them. React is one of the most loved and fastest-growing libraries, and its adoption will keep expanding as a large number of developers favor it.

Kubernetes FTW In the Containerization Space

Ever since Kubernetes adopted the CRI (Container Runtime Interface), there has been no looking back. Over time, Kubernetes has established itself as the leader in container orchestration and management. It rules the roost in both public and private cloud landscapes, and it is not surprising that every major cloud provider now offers managed Kubernetes services alongside their containerization services. We predict wider acceptance of Kubernetes in hybrid and multi-cloud environments, in data platforms, and perhaps in AI and ML (Machine Learning) enterprises.

Jamstack Becomes the Ninja

Jamstack is no longer a newcomer used only for fringe development work. It is quickly becoming the norm and is being used to develop enterprise software architectures.

Jamstack is an architecture designed to make web pages faster, more secure and more scalable. It is built on a range of tools and workflows that maximize productivity. Its fundamental concepts are pre-rendering and decoupling, which help websites and applications carry out tasks seamlessly and efficiently.

SPAs and PWAs are Sought-after

Although relatively new to the industry, Jamstack had established itself as a solid architecture by 2021. It has paved the way for Single-Page Applications (SPAs) and advanced web apps.

An SPA is a web application or website that interacts with the user dynamically by rewriting the current web page with new data from the web server, instead of the browser's default behaviour of loading entire new pages. SPAs are easy to use and support rapid content flows and improved website speeds. Search engines also favor SPAs, encouraging enterprises to invest more in adopting them.

Likewise, PWAs (Progressive Web Apps) are also gaining popularity. PWAs are created with the latest APIs to deliver better functionality, robustness and deployment capabilities, reaching customers across devices (desktop or mobile) from a single codebase.

PWAs provide an app-like experience without adding extra tools to the tech stack.

Hello GraphQL; Move Over, REST

GraphQL and REST are the most popular API design standards these days. Although REST is great and has been around for quite some time now, GraphQL is relatively new, and there are plenty of reasons for enterprises to step away from REST and adopt GraphQL.

GraphQL works well with large, complex systems and microservices-based architectures, and it is built to leverage the full capabilities of the latest devices and technologies. GraphQL not only enables fetching data from multiple domains in a single request, but also lets developers reuse the same domain across many different queries.
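
To make the "multiple domains in one request" point concrete, here is a minimal, hypothetical sketch in Python: a single GraphQL query fetches a product and its latest reviews in one round trip. The endpoint URL, the field names and the use of the requests library are illustrative assumptions, not part of any specific DXP.

```python
import requests

# One GraphQL query that spans two domains (product catalogue and reviews);
# with REST this would typically take at least two separate calls.
QUERY = """
query ProductPage($id: ID!) {
  product(id: $id) {
    name
    price
    reviews(last: 3) {
      rating
      comment
    }
  }
}
"""

def fetch_product_page(product_id: str) -> dict:
    # Hypothetical endpoint; a real DXP would expose its own GraphQL URL.
    response = requests.post(
        "https://example.com/graphql",
        json={"query": QUERY, "variables": {"id": product_id}},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["data"]

if __name__ == "__main__":
    print(fetch_product_page("42"))
```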

dotCMS – Futuristic DXP

dotCMS is a hybrid DXP that gives users access to built-in features like drag-and-drop functionality, layout design and personalization. The NoCode (no coding needed; accessible to any end user) and LowCode (some coding needed, by developers working within the platform's protocols) features of dotCMS make it a relatively easy tool even for non-technical users.

Edit Mode Anywhere is an SPA (Single Page App) editor. It gives marketing teams a seamless editing experience, so they can create, edit and publish content without depending on the IT team.

For companies to thrive in the digital ecosystem of the future, they should go for a platform like dotCMS, which gives developers and marketers the best of both traditional and headless CMS.

NoCode and LowCode Development

With the expansion of digital transformation, demand for NoCode and LowCode tools has grown. They are gaining prominence because they let non-technical users build digital experiences effortlessly, with little or no knowledge of programming languages. LowCode deployment shortens the weeks of planning and development and leaves more testing time for DXP and CMS users.

Collaboration Will be at its Peak

Collaboration and co-existence are terms that are here to stay. DXPs and CMSs (Content Management Systems) have already adapted to this. Software vendors are coming up with improved and innovative features for cross-team collaboration, and newer platforms are prominently used as employee experience hubs, with intranets and social networking tools that help remote and distributed teams connect better.

Food For Thought…

It has clearly been a tough year. Businesses must perform wherever their customers are. The year 2020 saw businesses move online with no other option, but 2021 is going to be much better for both customers and businesses. Companies have positioned themselves much better to deliver customer service with innovative and personalized experiences.

One thing we can vouch for is that businesses today are much closer to their customers than they were last year. Technical and operational hindrances have shrunk as technology has advanced. This is an incredible time for brands to reach out to customers and perform better, and it won't be wrong to say that DXPs will dominate the eCommerce world.


Key Principles of CX – Customer Experience Design

It has become obvious that engaging and retaining customers and fulfilling their demands is the number one priority of companies, but isn’t it challenging?

An abundance of choice has led customers to demand superfast, seamless service. Companies must plan and execute competently in order to deliver exceptional user experiences.

As the saying goes, the first impression is the best impression. Companies must recognize the importance of customer experience at every touchpoint and should not miss any opportunity that can help build a strong brand identity.

You surely need a certain level of specialization to constantly deliver an exceptional customer experience in digital commerce. This is where Customer Experience (CX) Design comes to the rescue.

In layman's terms, CX is a business function built to enhance the experience of every customer at every moment they engage with your business.

Business Opportunity Analysis

The graph above shows customer preferences while using applications. Enterprises can look at it as a business opportunity analysis and plan and execute along these lines.

What is CX Design?

We have all heard about UX, but what is CX? A new version of UX? One might have a lot of questions about this.

In simple terms, UX design is associated with the product and CX design is associated with the brand.

UX designers focus on creating an easy user experience on the mobile devices, websites or other software that customers use, whereas CX designers are responsible for coordinating business objectives with the entire journey and experience of a customer during their interactions with the brand.

Let us begin by understanding CX and then CX design before we dive into CX design principles.

Customer Experience (CX) refers to the customer interactions and experiences with your business throughout the journey, right from initial contact to becoming a happy and loyal customer.

CX is a fundamental part of CRM (Customer Relationship Management), because a positive experience is more likely to turn a customer into a loyal one. Investing in CX is the best move to retain customers.

CX design is the practice of creating clear and efficient interaction between the company and its customers. Customer experience can be divided into three parts: single interaction, customer journey, and lifetime relationship.

Key Principles of CX Design

The following principles are significant in CX design. Businesses should adopt them to successfully deliver exceptional digital experiences:

  • Goal-oriented customer experience design to enhance business performance

  • The objective of any business is to deliver the design that they conceptualized for their services. The services should meet customer demands and may even exceed expectations.

    Delivering excellent customer service and experience does not come naturally to everyone on the design or development team. Hence it is the responsibility of the business to piece together the touchpoints that should be developed efficiently. This is the best way to deliver a great customer experience consistently.

  • Give human touch to CX

  • When was the last time you had a great customer experience? What made you feel good? It could have been an incident that made you feel acknowledged, personal recognition that made you feel special, attention to detail that made your day or a certain issue that was resolved without any hiccups. Chances are that you remember the “feeling”, and that made the customer experience an exceptional one.

    Adding this human touch to the services that businesses offer will go a long way in creating a great customer experience and converting customers into loyal ones. Interactions form the base of a successful customer experience design.

    This humane approach applies to the employees of businesses too, which leads us to our suggested method for CX design: co-creation. When a company brings together different groups of people for knowledge sharing and innovation, or sometimes brings in a third-party individual as an adviser, it is termed co-creation.

  • Involve the organization in the CX design process

  • Once companies have clear knowledge about their CX and customers, it is best advised to co-create the experiences that they think should be delivered at every touchpoint.

    Why co-create, you ask? From our experience with CX, we have established that no one knows your customers better than the business itself and the employees working to get its services and products in front of the customer. From bridging customer experience gaps and spotting opportunities to creating success stories and rebuilding from failures, the front-line team members employed by the business know it best.

    The senior management and leadership teams may not have a handle on the process in the same way as the team that interacts with customers day in and day out.

  • Plan and create an ideal customer experience

  • When a business plans a co-creation workshop, they must make sure to include the current state of the customer journey, with all its stages and touchpoints, along with their insights on customer milestones and issues.

    They should then divide the teams into smaller groups and make them articulate the following questions:

    1. What should an excellent customer experience at each touchpoint look like?
    2. How would each touchpoint make their customers feel?
    3. What changes should be made to achieve that CX, and why? They could involve words, processes, actions, systems, tools, training or collaborations.


    By the end of the workshop, the business should have a road map of its ideal customer journey. They should be confident about planning, implementing and executing it within the time frame set for the task. They should also record the experience enhancements and note the business case for every change associated with the process.

  • Documentation of customer experience

  • After designing the customer experience, businesses should use Innovation Blueprint (a universal approach to embed a durable innovation facility into the organization) and Experience Guide (a complete guide to CX) to implement the collaborative ideas from the workshops.

    They should then share the Innovation Blueprint, Experience Guide and other documents with their teams. Continuous feedback from the team will address critical questions like 'why', 'what', 'how' and 'when', adding value to the process.

    Businesses should remember that motivation and responsibility are the key factors for employee engagement strategy.

Before We Sign Off….

In case you are seeking a partner to help you chalk out a winning customer experience design, we have got your back. We at Embitel create customer-centric CX (Customer Experience Management) strategies across all touchpoints, aligned with the branding goals of global businesses. We offer solutions such as personalization, digital analytics, user journey mapping, UI/UX design, solution architecture and custom solutions development. Check out our page to know more.


10 Key Attributes of Cloud Native Applications

Cloud-native application – Are you wondering if it’s just another buzzword from the IT industry jargon? Well, not really. It is actually a quantum leap that organizations were looking for, in terms of innovation.

Within a short span of time, cloud-native applications have boomed in the software industry. They offer a fresh approach to building large and complex systems. Changes in design, implementation, deployment and operations can be made efficiently with the help of cloud native's modern software development technologies, practices and cloud infrastructure.

Cloud Native and Cloud-Native Application

In a broader sense, cloud-native is an approach to bring together teams, culture, and technology to employ automation and architectures to manage complexity and unlock velocity.

Cloud native can best be described as container-based environments. Containers are standard software units that encase code and all its dependencies, enabling applications to run quickly and reliably across computing environments. Cloud-native applications are developed as services packaged in containers, deployed as microservices and managed on scalable infrastructure through agile DevOps processes and continuous delivery workflows.

Cloud-native development is well suited to both public and private cloud environments. What actually matters is how efficiently applications are created and deployed, not where.

In the following section, we highlight the top 10 attributes of a typical cloud-native application, an understanding of which can aid you while designing cloud-native applications.

10 Key Attributes of Cloud-Native Applications: A Summary

  1. Containers: Containers are the backbone of cloud-native architecture. A cloud-native application is a stack of independent services packaged as lightweight containers. Containers scale well: since scaling in and scaling out is easy, infrastructure utilization is optimized. There are ample opportunities for innovation too.
  2. Languages and Frameworks: Cloud-native applications happily mix several languages. Each service is custom-developed using the language and framework best suited to its functionality, which makes cloud-native applications polyglot: the services use various languages, runtimes and frameworks. For instance, developers can build a real-time streaming service on WebSockets in Node.js and use Python and Flask to expose the API (Application Programming Interface); a minimal Flask sketch follows this list. This technique of developing microservices gives teams the option to choose the best language and framework for each particular job.
  3. Microservices: Microservices can be independently deployed, upgraded and scaled. Services of the same application communicate through HTTP APIs at runtime. The resilient infrastructure and application design of such loosely coupled services brings efficiency and high-quality performance to the business. Decoupling lets developers concentrate on each service's core functions, which leads to productive lifecycle management of the application.
  4. APIs: One of the main challenges with a microservices application architecture is achieving consistent communication among the different services. It is important for the front-end, client-facing microservice to handle customer requests coming from mobile phones, browsers or any other device. So, cloud-native services should use APIs based on protocols like REST (Representational State Transfer), gRPC (Google's Remote Procedure Call) or NATS (a message-oriented middleware/messaging system).

    REST APIs provide consistent communication in microservices-based apps. gRPC is used to connect services with support for load balancing, performance, tracing and authentication. NATS is used to augment or replace traditional messaging systems in microservices.

  5. Architecture and Platform: The USP of cloud-native delivery is its speed. The core of the architecture is divided into stateful and stateless services which, as mentioned above, are independent of each other. Services are designed to be persistent and durable, which brings higher availability and resilience to the architecture.

    Cloud-native architecture lets developers use cloud platforms and avoid infrastructure dependencies. Teams can focus on the software rather than on configuration and operating-system maintenance. A few recommended platforms for running cloud-based infrastructure are Amazon Web Services (AWS), Microsoft Azure and Google Cloud Platform (GCP).

  6. Operating System: Cloud-native applications are isolated from server and operating-system dependencies and operate at a higher level of abstraction. The only exception is when microservices need specific hardware such as SSDs (solid-state drives) or GPUs (graphics processing units), which are then made available by a dedicated pool of machines.
  7. Infrastructure: Cloud-native apps are deployed on virtual, shared and elastic infrastructure. To run apps effectively, cloud-native infrastructure comprises operating systems, data centers, deployment pipelines, configuration management, and the other software and hardware essentials that back the apps.
  8. Agile DevOps Processes: The independent services in cloud-native apps are managed through agile DevOps processes. Multiple CI/CD (Continuous Integration/Continuous Delivery) pipelines work in parallel to deliver and operate the application efficiently.
  9. Automated Capabilities: Automation is the key factor that makes cloud-native applications a reality; it is essential for running, scaling and managing large, complex applications. A high degree of automation is achievable because cloud-native apps are provisioned using the concept of infrastructure as code.
  10. Resource Allocation: Cloud-native applications follow a governance model defined through a set of policies. They comply with policies that allocate resources to services, such as central processing unit (CPU) and storage quotas, along with network policies. Central IT can allocate resources to every department, and the developers and DevOps teams in each department get complete access to, and ownership of, their share of resources.
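
As a concrete illustration of point 2 above (a service exposing its API through Python and Flask), here is a minimal sketch of one such containerizable microservice. The route, data and port are hypothetical assumptions for the example; a real service would sit behind an API gateway and talk to a datastore or to other services.

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Illustrative in-memory data; a real microservice would query a database
# or call another service over its HTTP API.
PRODUCTS = {"42": {"name": "Example product", "price": 19.99}}

@app.route("/products/<product_id>")
def get_product(product_id: str):
    product = PRODUCTS.get(product_id)
    if product is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(product)

if __name__ == "__main__":
    # Each microservice runs as its own lightweight process or container.
    app.run(host="0.0.0.0", port=8080)
```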

Conclusion

Every new approach comes with its own set of challenges. But when you look at the bigger picture, you will be able to find solutions for these relatively small hiccups without hindering the overall functioning of the business. Having said that, this is undoubtedly the best time to realize the full potential of the cloud by rebuilding or re-architecting your applications as cloud-native.

Wide-ranging access, flexibility and data persistence are significant features of a successful cloud-native application. Make sure you ace them by implementing these attributes in your business.

Next time when you hear people talk about ‘cloud’; think ‘cloud-native.’


[Webinar Part-2] Why SOME/IP is at the Forefront of Future Automotive On-board Networks

What makes SOME/IP such a special middleware solution for enabling advanced automotive solutions such as ADAS, connected vehicles and electric vehicle technology? Higher speed and bandwidth are some of the usual answers you get. Our SOME/IP expert delves a little deeper to find out how SOME/IP is driving upcoming network architectures such as zonal computing.

In part-1 of the webinar, we focussed on the upcoming trends in automotive solutions and on-board networks and on how SOME/IP fits into the scheme of things. In the second part, we take the discussion further into the features and advantages of SOME/IP.

SOME/IP is the perfect partner to Automotive Ethernet, a major driver of zonal computing. With the benefits of service-oriented architecture, SOME/IP becomes all the more suitable for catering to modern automotive solutions.

What’s in store for you in this SOME/IP webinar?

  1. Why SOME/IP?
  2. How SOME/IP supports a higher number of ECUs without increasing latency
  3. How SOME/IP
  4. Benefits of service-oriented architecture

For more queries and demos, please contact us at sales@embitel.com.

Tutorial Host and Mentor

Ajish Alfred

Technical Lead & Subject Matter Expert (Vehicle Network System)
Embitel Technologies

On-demand Webinar

Release Date: Wednesday, April 21st, 2021

Duration: 25 mins

Migration from Python 2 to 3: Strategies, Tools and More

Category: Embedded Blog

In part-1 of this blog series, we discussed what makes Python migration from version 2 to 3 so important for embedded applications. We also touched upon the kind of changes to expect when migrating from Python 2 to 3. We highly recommend that you go through the first part of the Python migration blog before reading its sequel.

In part-2 of the blog series, our focus is on the migration strategy and the various tools that aid a smooth transition from the older version of Python to version 3.

Manual or Automated: Which Way to Go for Python Migration?

Migrating to the latest version of Python requires choosing between two strategies: manual and automated. One of the major drivers of this choice is a clear understanding of the project's status in terms of size, complexity, type of application and so on.

For instance, if you plan to migrate an application to Python 3, it is mainly the top-level scripts that need to be reworked. Since very few other modules depend on an application, little else needs to change.

On the contrary, if a framework is to be migrated, a number of plug-ins and applications depend on it. A small technical change in the framework will impact several applications and modules. Therefore, based on whether your embedded application is a provider of libraries or a consumer, your migration strategy would differ.

Another major factor to be taken into account while choosing the migration strategy is your user-base and the revenue model of your Python-based software. If the software is purely commercial, you may want to speed up the migration process or prioritize certain modules for migration. In case your application is an open-source project used within a team under an enterprise, the migration strategy could be more relaxed.

Once you are clear about the nature of your applications, you have two options to choose from. One is the complete re-write of the code from scratch, and the other is using an automated tool for the purpose.

At times, complete automation of Python migration is not feasible, and some manual code re-writing becomes necessary in order for the application to perform the assigned task correctly. Also, if the code base is not too large, a manual migration can be done to save the tool cost.

Steps Involved in Migration from Python 2 to 3

Whether you are going the manual or automated way of Python migration, there are certain toolboxes and frameworks you will require. The most obvious ones are the older Python version 2.X and newer version to which you wish to migrate, i.e. Python 3.X. A few frameworks will also need to be readied based on the user community you are catering to.

Some basic tasks to be performed for Python migration from 2 to 3:

  • Libraries/frameworks to support the older versions of Python 3
  • Compatibility specifications to be spelled out in the project’s readme file
  • Virtual environment manager to be installed to manage the multiple Python versions installed
  • A Python code analysis tool such as Pylint must be kept handy as it helps identify the syntax changes required for the migration to be done the right way
  • A Python code testing framework called Pytest is required for code coverage post migration (a minimal test sketch follows this list)
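
As an illustration of the Pytest item above, here is a minimal, hypothetical test module. The function under test is invented for the example; the idea is simply that the same tests can be run under both interpreters (and with the pytest-cov plugin for coverage) to compare behaviour before and after migration.

```python
# test_division.py -- run with `pytest` (or `pytest --cov` if pytest-cov is installed)

def safe_divide(a, b):
    """Hypothetical function under test: returns a / b, or None when b is zero."""
    return a / b if b else None

def test_true_division():
    # Under Python 3, `/` is true division; under Python 2 the same expression
    # on integers would floor to 1, so this test flags the behavioural change.
    assert safe_divide(3, 2) == 1.5

def test_divide_by_zero_returns_none():
    assert safe_divide(3, 0) is None
```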

Once the toolboxes are ready, we can move forward with the first strategy: manual code re-writing.

Manual Method for Python 2 to 3 Migration

The manual code re-write strategy is all about creating a new version of the software that conforms to Python 3. You bid goodbye to the older version and its legacy code; however, the old code still acts as a reference and as a source of test cases for unit testing.

The first step in writing the new code is to migrate the old unit test cases to the Python 3 environment. When these tests are run, new code is written based on which tests pass or fail. Code that fails the unit tests needs to be re-written; code that passes can be preserved.

Any latent bugs found in the old code should be fixed, because they might not manifest as benignly in the Python 3 environment as they did in the older version.

At times, some unit tests might also need to be re-written for Python 3. This is because the newer Python version has a built-in mock library that simplifies the unit tests, and the tests must be in sync with the new software architecture. However, simpler unit tests that do not rely on Python 3-specific features can be left unchanged.
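
For instance, here is a minimal sketch of a unit test using unittest.mock, which ships with the Python 3 standard library (under Python 2, the equivalent third-party "mock" package had to be installed separately). The sensor-reading function and register address are invented for the example.

```python
from unittest import mock

def read_temperature(bus):
    """Hypothetical embedded-style helper: reads a raw value from a bus and scales it."""
    raw = bus.read(0x48)
    return raw * 0.5

def test_read_temperature_scales_raw_value():
    fake_bus = mock.Mock()            # no real hardware needed
    fake_bus.read.return_value = 20
    assert read_temperature(fake_bus) == 10.0
    fake_bus.read.assert_called_once_with(0x48)
```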


Now that the unit tests have run, it is time to manually write the code based on the unit test reports. Because the new code follows the test results, developers refer to this approach as test-driven development. The code is subject to iterative testing: the new code is tested against the test cases, failures are analysed, and changes are made to the code. Ideally, every iteration of unit testing reduces the number of failures and brings the code nearer to perfection.

Migration from Python 2 to 3, the Automated Way

When you have a huge code base, it is practically impossible to re-write each line of code and test it in an iterative manner. The time, cost and effort involved would be enormous. For such projects, automated migration is recommended.

And when there is automation, there are tools that make this possible. Likewise, for automated migration to Python 3, we have a tool called 2to3, a pretty straightforward name.
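
As a small illustration of what the tool does, the comment below shows a fragment of hypothetical Python 2 code and the Python 3 form that a run of `2to3 -w` typically produces (print statements become function calls, and dict.iteritems() becomes dict.items()). The variable names are invented for the example.

```python
# Python 2 original (before running `2to3 -w legacy_module.py`):
#
#     print "Reading %d settings" % len(config)
#     for key, value in config.iteritems():
#         print key, value
#
# Converted Python 3 output:

config = {"baud_rate": 115200, "timeout_s": 5}   # illustrative data

print("Reading %d settings" % len(config))
for key, value in config.items():   # iteritems() was removed in Python 3
    print(key, value)
```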

Automated migration from Python 2 to 3 is a six-step process. Let's examine each step in some detail:

  1. Creating Unit test cases: This is one of the most crucial steps in the process of migration as this is paramount in creating the code that works. However, this can get challenging at times. There are instances where the legacy code does not have unit tests designed or they do not cover certain migration issues like data type conversions. Even worse, they could have been written using syntax that has now changed completely. In such circumstances, tools like 2to3 and Six are quite useful.
  2. Resolving issues due to syntax changes: This is where the real migration starts. The issues arising from syntax changes are addressed in this step. The Pylint tool helps in the process by highlighting the parts of the code that will be a problem under Python 3.
  3. Running the test cases in both Python 2 and 3 environments: Unit tests are run on the legacy code. Usually, in the first iteration, all unit tests fail. The trick is to have a system for repeating the unit testing until you get it right.
  4. Executing the migration using the 2to3 tool: Based on the failure report from the unit tests, the 2to3 tool performs the migration of the available code. The migrated code should work fine; however, in some instances, syntax adjustments and reworking of the unit test cases might be required. If there are failures, re-running the unit tests and repeating the migration are required until the code fulfils its intended purpose.
  5. Fix the bugs and re-test: Even after the code is fully migrated to Python 3, the work is only half done! The code is now tested in the Python 3 environment. You might need to adjust the Python 2 code and re-run the migration, or you may have moved past that stage and adjustments to the Python 3 code will suffice. These minor adjustments continue until the migration is complete and the code is fully working and bug-free.
  6. Optimization after the migration: An automated conversion sometimes introduces unwanted elements into the code, anything from an unnecessary float() call to a redundant list() wrapper. These elements need to be cleaned up, and to be extra careful, tests should be re-run after every instance of cleanup/optimization.

Final Thoughts

Manual or automated, you need to get your Python code migrated from version 2 to 3. For embedded system applications, whether automotive or otherwise, the migration is all the more important. Embitel has been working on such migration projects, especially since support for Python 2 ended. If you have queries related to Python migration for embedded applications, we are there for you!


Advantages Of Building Serverless Applications On AWS

As organizations grow, their technologies and ecosystems are bound to expand, and managing the architecture becomes a challenge. A lot of time is spent managing the platform when, ideally, time and resources should be spent on applications and development. The best alternative is the adoption of serverless architecture.

But the million-dollar question is: is it suitable for your company? Should you go the serverless way?

Companies prefer serverless applications because they can be deployed quickly, frequently and more efficiently. Amazon's AWS Serverless Application Model is a revolutionary offering in this space. Developers find it easy to create, access and deploy applications, owing to simple code samples and templates.

In this blog, let us figure out why companies prefer to go serverless, the technology involved and the benefits of AWS Serverless Application Model.

What is AWS SAM?

AWS SAM, or Amazon Web Services Serverless Application Model, is an open-source framework for building serverless applications on AWS. It comes with a template specification to define your serverless application and a CLI (Command Line Interface) tool.

A typical serverless application architecture uses AWS Lambda, Amazon DynamoDB, AWS Amplify Console, Amazon Cognito and Amazon API Gateway.

Amplify Console handles continuous deployment and hosting of static web resources like JavaScript, HTML, CSS and image files, which are loaded in the user's browser. The JavaScript executed in the browser sends and receives data through a backend API built with Lambda and API Gateway. To secure the public backend API, Amazon Cognito provides authentication and user management. Finally, DynamoDB stores the data that the Lambda functions read and write.
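
To make the Lambda/DynamoDB part of this architecture concrete, here is a minimal, hypothetical Python handler of the kind API Gateway would invoke: it stores the request body in a DynamoDB table via boto3 and echoes it back. The table name, environment variable and field names are assumptions for the example, not part of the SAM specification.

```python
import json
import os

import boto3

# Table name would normally be injected by the SAM template via an environment variable.
TABLE_NAME = os.environ.get("TABLE_NAME", "ExampleTable")
dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table(TABLE_NAME)

def handler(event, context):
    """Invoked by API Gateway: persists the JSON request body and returns it."""
    body = json.loads(event.get("body") or "{}")
    item = {"id": body.get("id", "unknown"), "payload": body}
    table.put_item(Item=item)
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps(item),
    }
```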

Does Serverless Mean "No Server"?

A few years ago, companies had chaotic environments with too many people and servers involved. Later, they discovered the possibility of creating serverless applications, especially on AWS.

Serverless doesn't mean there is no server. It simply means that applications are hosted by a third-party service, removing the need for server hardware and software management on the consumer's end.

Why Go Serverless?

In the past, all business processes (even very basic and infrequently used ones) needed a server or container running 24/7 to listen for requests. Many tech users are familiar with serverless computing, but they are not fully aware of its advantages over traditional implementations.

With the AWS Serverless Application Model, you don't have to worry about the configuration of services; AWS handles the infrastructure that runs your application code. You avoid architecture-management issues and get ample opportunity to provide better customer service.

Building a Serverless Application

Serverless application development relies on a cloud service provider that fully manages the server's details and functionality. Maintenance has proven easier and more efficient with the cloud.

When coding serverless applications, make sure processes are broken down into small, separate functions that are then wired together. This can lead to more code changes, which means more updates to functions and more file uploads throughout the project lifecycle. Wiring these functions together by hand means extra effort and troubleshooting in the console, which is both time-consuming and error-prone.

A framework can help with such tedious tasks. The AWS Serverless Application Model (SAM) is an open-source tool that makes it easier to streamline existing services like AWS Lambda, API Gateway and DynamoDB. Since the CLI tool creates the repository scaffolding and pipeline for you, you do not need a developer to do that by hand.

You can get the code running in the real world as quickly as possible, at lower cost. Whoa!

AWS SAM works well for both large and small projects.

Once a project is initialized in AWS SAM, a folder structure is generated for the user. This structure contains the Lambda functions, code tests and a template file (based on AWS CloudFormation). The code and configuration are kept simple in CloudFormation. Resources like DynamoDB tables, S3 buckets, Lambda functions, all associated IAM permissions, and SNS, SQS or API Gateway triggers are created from the template, making it simple and seamless to track the workflow and metrics.

An AWS SAM application can be built and deployed directly from the CLI tool with a single command, and functions can be tested locally. Any cloud user can get used to AWS services and technologies with simple CLI installation instructions.

Serverless applications can be built by putting together various AWS services. Post configuration, the application's code can be deployed to a serverless computing service like AWS Lambda.

Is Going Serverless Worth It?

Developing serverless applications can feel shaky, given the shifts in development flow and the time it takes to get comfortable with AWS services. It is definitely not easy to make an entire application serverless in one go.

But the best part is that AWS services can be adopted independently. Start by transitioning just one API, or set up Amazon Cognito and explore the many features offered by AWS serverless services.

Nearly all AWS services use an elastic cost model where you pay only for what you use, and you can scale services up or down to meet varying resource demands.

What Does AWS Serverless Platform Provide?

The AWS platform manages back-end tasks like storage, compute, databases, processing and more, giving users scope to concentrate on their programs and innovation.

Apart from the above-mentioned components, AWS Serverless Computing handles the following:

  • Amazon S3 Storage
  • Amazon Kinesis Analytics
  • Amazon SNS for Application Integration
  • AWS Step Functions for Orchestration
  • AWS Identity and Access Management for Security and Access Control
  • AWS Developer for tools and services

Benefits of AWS Serverless Application Model

  • Single-Deployment Configuration – With AWS SAM it is easy to operate as a single stack, because related resources and components are coordinated together. AWS SAM lets you share configuration such as memory and timeouts among resources and deploy all related resources as a single unit.
  • AWS CloudFormation Extension – SAM builds on the reliable deployment capabilities of AWS CloudFormation, so you can define and use the full suite of resources, intrinsic functions and other template features available there.
  • In-Built Best Practices – AWS SAM can be used to define and deploy your infrastructure as configuration, which makes it possible to enforce practices such as code reviews. Safe deployments can be rolled out through AWS CodeDeploy, and tracing can be enabled using AWS X-Ray.
  • Local Debugging and Testing – Applications defined by AWS SAM templates can be built, tested and debugged locally with the AWS SAM CLI, which provides a Lambda-like execution environment on your machine. To understand and debug the code, you can use AWS toolkits such as the AWS Toolkit for JetBrains, AWS Toolkit for PyCharm, AWS Toolkit for IntelliJ and AWS Toolkit for Visual Studio Code. This tightens your feedback loop by letting you detect and troubleshoot issues early, before the code runs in the cloud.
  • Integration with Development Tools – New applications can be discovered and reused from the AWS Serverless Application Repository. The AWS Cloud9 IDE can be used for authoring, testing and debugging applications. CodeBuild, CodeDeploy and CodePipeline can be used to build a deployment pipeline for serverless applications, and AWS CodeStar can automatically configure a code repository, project structure and CI/CD (Continuous Integration/Continuous Delivery) pipeline. A Jenkins plugin is available for deployments, and the Stackery.io toolkit can be used to build production-ready applications.

Other advantages of using AWS Serverless Application Model are:

  • With AWS SAM there is no need for software installation, runtime management or server maintenance.
  • Applications scale automatically by adjusting units of consumption such as memory and throughput.
  • Serverless applications come with built-in availability and fault tolerance.
  • You are charged only for what you use, not for when the code isn't running; idle capacity never needs to be paid for.

Bottom Line

The opportunities that AWS serverless applications provide are endless, and they can eliminate much of the overhead of provisioning, managing and maintaining servers. Since every application is different, it boils down to whether the pricing and the time saved by letting go of server management are beneficial for your business use case.


Best Practices for Successful Enterprise Cloud Migration

Migrating data, applications and other resources to the cloud has been on the must-do list of many companies for some time now. The consequences of the COVID-19 pandemic, with workforces forced to go remote, have made companies expedite their cloud migration efforts.

Without clear-cut planning, migrating to the cloud leads to problems such as ad hoc adoption issues, chaotic management of resources and process flows, security vulnerabilities, surging costs and an unsatisfactory customer experience. It also has a higher chance of failing altogether.

Companies and enterprises must adapt and adopt the changes of the “new normal” situation and work towards creating a seamless work environment. So, it is important to have a deeper understanding of cloud and migration.

What is Cloud Migration?

The process of transferring digital enterprise operations to the cloud is called cloud migration. It could mean moving IT processes, databases, applications and so on into a cloud, or from one cloud to another.

Here “Cloud” means the servers (along with databases and software) that are accessible through internet. With Cloud Migration, enterprises need not manage or maintain servers and software applications on local machines.

Enterprise Cloud Migration is not limited to just hosting on a cloud-based environment. Enterprises should have profound knowledge of cloud products and services and make sure that they overcome the challenges that are associated with cloud without much ado.

Content delivery networks, serverless hosting environments, cloud storage, load balancers and auto-scaling features all play critical roles in making an application future-ready.

Every new application comes with its own set of challenges. There is no foolproof method for cloud nirvana. Hence, it is advised to go for a multi-cloud strategy.

It is up to the enterprises and their objectives to decide whether to go for Public Cloud, Private Cloud, Hybrid Cloud, IaaS (Infrastructure as a Service), PaaS (Platform as a Service), or even to build a customized cloud infrastructure.

Why Migrate to Clouds?

Migration to the cloud has become a necessity today. There is a plethora of benefits to migrating, and ever since companies realized this, there has been no turning back.

Benefits of Cloud Migration:

  • To help in business stability and disaster recovery in case of IT infrastructure collapse
  • To bring in flexibility and facilitate dynamic data requirements
  • Better IT resource management
  • Network security
  • Quick progress, iterations and minimized provisioning time
  • Unified IT analysis
  • Less carbon footprint
  • Economical

Key factors to consider during cloud migration:

  • Cloud service provider and cloud type
  • Compliance and regulations management
  • Disaster recovery and support
  • Payment model based on the cloud structure – On-premises VS Cloud VS Hybrid
  • Alignment with existing Service-Level Agreements (SLAs)
  • Resources that fit accurately to create and sustain the cloud environment
  • Reliability with data and information

Below we have listed top strategies or steps for successfully migrating to the cloud. Take a look.

Best Strategies for Successful Cloud Migration

  1. Understand the Architecture of Your Application

     While you prepare for the cloud migration, define the role of a Migration Architect to leverage the benefits of the cloud to the fullest. The Migration Architect is responsible for defining the architecture of the cloud application and for planning and completing all facets of the migration. The core duties include designing strategies for data migration, determining cloud-solution requirements, planning production switchover operations and all other features necessary to make the migration successful.

  2. Determine the Cloud Integration Level

     There are two ways to migrate applications from an on-premises data center to the cloud: Shallow Cloud Integration and Deep Cloud Integration.

     In Shallow Cloud Integration, also called the "Lift-and-Shift" method, you move the on-premises application to the cloud with little or no change to the servers instantiated in the cloud. There is no need for any cloud-unique services; small application changes are enough to get it running in the new environment. The method is called "Lift-and-Shift" because you literally lift the application and shift it from one place to another without too many changes.

    In Deep Cloud Integration, applications are modified during the migration process to utilize the benefits of key cloud capabilities. It might be as simple as using auto-scaling and dynamic load balancing, or as advanced as applying serverless computing capabilities like AWS Lambda or using cloud-specific data store like Amazon S3 or DynamoDB.

  3. Decide Whether to Go for Single Cloud or Multi-Cloud

     Before the migration, you need to be sure about one thing: the provider(s). Decide whether you want to go with a single cloud provider, in which case the migrated applications are optimized to run in that one environment, or with multiple cloud providers, where applications run on the clouds of different providers.

    Going with a single provider is very straightforward; your development teams will have only one set of cloud APIs to learn. Your application can benefit from everything that your cloud provider offers.

     The key drawback of this approach is vendor lock-in. Once the application is tailored to a particular provider, moving to another provider becomes problematic; it is as good as starting the cloud migration from scratch. Also, having just one provider weakens your ability to negotiate terms such as SLAs and pricing.

    There are various models for using multiple cloud providers:

    • One set of applications in one cloud; another set in a different cloud. This is probably the simplest multi-cloud approach. It gives you business leverage with multiple providers, along with flexibility on where to place applications in the future, and each application can be fully optimized for the provider on which it runs.
    • Splitting the application across multiple cloud providers. Some companies prefer to run certain parts of an application with one cloud provider and other parts with another. The pro of this approach is that it lets you use the best capabilities each provider offers: one provider might have better AI functionality, another better database speeds, and so on. The con is that your application depends on the performance of both providers; any issue with either provider will impact your application's performance and the customer experience.
    • Building cloud-agnostic applications. Some companies build their applications to run on any cloud provider, either running simultaneously on multiple providers or splitting the application load across them. This method provides the best flexibility in vendor negotiations, as you can easily shift load from one cloud provider to another. The drawback is that you might not be able to use the key capabilities of each cloud provider, and it can complicate application development and validation.
  4. Define Cloud KPIs

     KPIs (Key Performance Indicators) are metrics you collect about your application or service to measure its performance against your expectations. You might already have defined some KPIs for your applications and services, but you need to check whether they are sufficient or whether more parameters should be set once everything is in the cloud.

     The KPIs mainly show how an in-progress migration is performing and surface prominent issues as well as issues that are not directly visible. They also help determine whether the migration is complete and successful.

     Below are a few important categories of cloud migration KPIs, with sample KPIs for each.

     • Application Performance: Throughput, Availability, Error Rate, Application Performance Index (APDEX)
     • Business Engagement: Engagement Rate, Conversion Rate, Conversion Percentage, Cart Adds
     • User Experience: Response Time, Page Load Time, Session Duration, Lag
     • Infrastructure: Memory Usage, CPU Usage Percentage, Network Throughput, Disk Performance
  5. Baselining and Performance Guidelines

     Baselining is a method in which the current, pre-migration performance of an application or service is measured so it can be compared with its post-migration performance. This process helps companies confirm improvements and check for errors.

     For each KPI, set a baseline metric and decide the time period for data collection. If you choose a short baseline period (e.g. a day), you get the data faster, but you might not collect a representative performance sample. If you choose a longer baseline period (e.g. a month), it takes more time, but you get detailed, representative performance data.

    Different industries require different kinds of data. Just be clear while defining your type and time frame before migration.

  6. Map Out the Migration Components

     First and foremost, identify the connections between your services and determine how they depend on each other.

     Go for an Application Performance Monitoring tool that uses service maps to create dependency diagrams for large, complex applications. Using this dependency diagram, you can deduce which components should be migrated, how, and in which order.
  7. Check for Restructuring

     At times, users want to verify certain aspects of an application's services both on-premises and in the cloud before migration. Restructuring the application before migration is highly recommended and will help a great deal, because:

    • There is scope for dynamic scaling, as cloud components and servers work seamlessly with the numerous applications running simultaneously at any given time. It also saves cloud service costs.
    • Dynamic allocation of resources is far better than static allocation; it saves users time and helps the system function efficiently.
    • Shifting to a service-oriented architecture before migration is a good move; it will help you move individual services to the cloud easily.
  8. Have a Data Migration Plan

     Migrating data is probably one of the most delicate, yet substantial, parts of cloud migration. Data location plays a significant role in the performance of applications.

    Common Data Migration practices are:

    • Applying a bi-directional syncing mechanism between the on-premises and cloud databases. The on-premises database is removed once the entire application has been moved to the cloud.
    • One-way synchronization from the on-premises database to a cloud-based database, with users connecting only to the on-premises version. When the cloud database is ready, make it the main one and disable the on-premises database.
    • Using third-party data-migration services, such as AWS (Amazon Web Services) migration services, for optimum results.
  9. Switch Over Production

     Depending on the architecture and complexity of the application and on the layout of its data and datastores, enterprises can decide when and how to switch the production system from the legacy on-premises setup to the cloud environment.

     The two prevailing approaches are as follows:

    • Move the entire service or application to the cloud, confirm that it is working fine, and then switch traffic from the on-premises stack to the cloud.
    • Move a few applications first, check whether they are working fine, then move a few more. Continue this process until all the applications are moved to the cloud and tested.
  10. Review Application Resource Allocation

     Resource optimization is one of the most crucial practices when migrating to the cloud. Make sure resource allocation is optimized by distributing resources to applications as needed: when an application in the cloud needs more resources, they can be obtained virtually and easily from the vendor. With support for dynamic scalability, it becomes easy to meet customer demand quickly.

Cloud Migration Roll Out

Beta Environment Set-up

  • It is important to create a beta environment that mirrors the current setup
  • After loading data, test the flexibility of the framework
  • Test data and applications continuously

Migration

  • Create a new environment for production
  • Implement DevOps operations
  • Shift data (both production and storage) and set up disaster recovery and fault tolerance
  • Update Domain Name System (DNS) records and all other configurations

Continuous Monitoring

  • Monitoring tools should be rolled out
  • Keep track of all the necessary metrics

Setting Up a Migration Team

For a successful cloud migration, you need a competent team that will bring the best results. The cloud migration team should consist of:

Manager

  • A proficient project manager with a thorough understanding of the existing network, database management technologies and applications, who can strategize the workflow based on the company's goals.

Cloud Developer

  • He/she should have expertise in IaaS and PaaS platforms.
  • He/she should be in charge of cloud platforms development and deployment.

Cloud Security Specialist

  • He/she should oversee the configuration and management of security baselines
  • Should be able to design and maintain a reliable cloud environment
  • Must have proven expertise in cloud security management

Architect

  • The professional should be adept at designing cloud infrastructure, servers, storage, platforms, and content and network delivery

 

You might want to know about various cloud deployment models – read here

The 5Rs of Cloud Migration Strategy

Cloud migration strategies are generally built around five practices, chosen according to the objectives and current situation of the organization.

  1. Re-hosting – Popularly known as the lift-and-shift method, this literally means lifting applications from on-premises and shifting them to the cloud without any modification. This method is economical and efficient as fewer architectural changes are involved. With this method, companies can be assured of minimal risk and the long-term advantages of cloud operations.
  2. Refactoring – Also known as re-architecting. In this process, non-cloud applications are transformed into cloud-native applications; a complete rebuild of the application is needed to make it cloud-native. Companies looking to shift from a monolithic architecture to serverless can go for this method, as it brings efficiency and enhanced productivity.
  3. Re-platforming – It is commonly known as lift-tinker-shift method. In this method, only specific elements of an application are changed or upgraded. We can say that re-platforming is a mix of rehosting and refactoring. Before migrating to the cloud, some components should be optimized. This method is very effective in terms of flexibility, security and productivity.
  4. Rebuilding – In this method, part or all of an application is rewritten from scratch while keeping its features and specifications intact, or the application is replaced with SaaS services. This is commonly known as “cloud-native” development: the application is built anew with current components and with the help of cloud-based approaches such as serverless.
  5. Revising – There are 2 steps involved in this process. In the first step, the existing code undergoes certain changes in order to offer support to the legacy modernization process. The next step is to rehost or refactor and shift applications to the cloud. This helps to leverage the best of cloud options.

Other Points to Remember for Cloud Migration

We have now covered all the major points related to cloud migration. But there are a few other factors that enterprises should look out for. One such factor is security and compliance. Thankfully, the majority of cloud providers offer substantial tooling and resources to support, build and maintain a secure system.

Cloud can be cheaper or more expensive than on-premises infrastructure, depending on the objectives of the enterprise.

Lastly, we suggest that anyone opting for cloud migration gets accustomed to building modern applications with services and microservices (such as twelve-factor applications) and to applying DevOps practices, which are among the best ways to build and run cloud services and applications. And do not forget to optimize the customer experience once applications are fully migrated to the cloud.

End of line…

Enterprise data migration planning should not be undervalued. Careful attention to this complex and important practice can save companies from challenges such as performance issues, bandwidth costs, user re-training, rewriting application architecture and so on. When we pay attention to the smaller details, cloud migration can meet expectations and run successfully.

We at Embitel have a strong and competent team who have expertise in cloud architecture, multi-cloud security and tools, dynamic and flexible frameworks, all types of cloud migrations, and proficiency in practical risk and compliance management. Connect with us to explore how you can transform your enterprise operations through cloud migration.



What is Telematics?

Category : iot-insights

 
Telematics is a disruptive automotive technology that utilizes IT and communication protocols to send, receive and store information pertaining to remote vehicles. The data is transmitted over a wireless network through secure means and an in-vehicle electronic device or smartphone is employed for establishing remote connectivity.

In this article, we explore the various facets of telematics and the key points to consider while developing a telematics system.

Here is an overview of the topics we are covering in this article:

Table of Contents

How Does Telematics Work?

Telematics Control Unit & IoT Cloud Connectivity

Types of Telematics Systems

Telematics Use Cases

Benefits of Telematics

Telematics for After Sales Revenues

Do All Cars have Telematics?

Types of Vehicles in Which Telematics Can Be Used

Telematics for Driverless Cars / Autonomous Vehicles

Telematics Control Unit Architecture

Telematics Software Components

Telematics System Development Considerations

Telematics and OBD

How GPS Tracking Differs from Telematics

AIS 140 Compliance

Telematics Implementation and Challenges

The Future of Telematics


How Does Telematics Work?

When we say that a vehicle is integrated with telematics, it essentially means that it is fitted with a crash-resistant black box with a complex electronic control unit inside. This black-box, also referred to as the T-Box in automotive engineering parlance, is a telematics control unit.

How Does Telematics Work

As indicated in the image above, the telematics device collects data from within the vehicle and relays it back to the IoT cloud through the communication channel. This information is then pushed to the telematics applications/back-office systems where it is analyzed, and business intelligence decisions are made.

Likewise, the back-end applications send data to the telematics control unit from IoT cloud through the same communication channel.
 

Telematics Control Unit & IoT Cloud Connectivity

An automotive telematics solution fundamentally has four building blocks:

  • Vehicle ECU Network – Inside the vehicle, there is an interconnected network of automotive ECUs, which are essentially small embedded computers. These ECUs help the Telematics Control Unit collect vehicle data such as engine temperature, vehicle speed, diagnostics information, etc.
  • Telematics Control Unit (TCU) – This control unit is the heart of the telematics device in the vehicle. It has communication interfaces with the vehicle’s CAN bus and the IoT cloud server. The telematics control unit collects vehicle data such as diagnostics information, vehicle speed and real-time location and transmits this information to the IoT cloud. The communication with the cloud server is established through a cellular, LTE or GPRS network. This information is stored in the IoT cloud and can be accessed by connected mobile or web apps in the IoT ecosystem.


    The TCU also manages the memory and battery of the telematics device. Additionally, it streamlines the data that is shared with the driver through the Human Machine Interface (HMI) device or dashboard.

  • IoT Cloud Server – The information that is collected by the telematics control unit is shared with the cloud-based telematics server through a highly secure GPRS or cellular network. These data packets are also configured as MQTT messages before they are transmitted to the IoT cloud (a minimal publishing sketch follows this list).


    On the IoT cloud platform, the data is extracted and stored in databases for processing.

  • Telematics Applications – The data from the cloud-based telematics server can be accessed by authorized personnel through a web, desktop or mobile application connected to the IoT ecosystem. This data can also be fed into a business intelligence system for further analysis and reporting.
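As a rough illustration of the TCU-to-cloud leg described above, here is a minimal Python sketch that publishes one telemetry sample to an MQTT broker using the paho-mqtt library. The broker address, topic and payload fields are hypothetical, and a production TCU would add TLS, authentication and buffering for offline periods.

```python
# Minimal sketch: publish one vehicle telemetry sample to an IoT cloud broker over MQTT.
# Broker host, topic and values are placeholders; real data would come from the CAN bus and GPS.
import json
import time

import paho.mqtt.client as mqtt

BROKER_HOST = "iot.example.com"         # placeholder cloud MQTT endpoint
TOPIC = "fleet/vehicle-42/telemetry"    # placeholder topic

payload = json.dumps({
    "timestamp": int(time.time()),
    "speed_kmph": 62,
    "engine_temp_c": 88,
    "lat": 12.9716,
    "lon": 77.5946,
})

client = mqtt.Client()        # paho-mqtt 1.x style constructor; 2.x also takes a callback-API version argument
client.connect(BROKER_HOST, 1883)
client.loop_start()
client.publish(TOPIC, payload, qos=1)
client.loop_stop()
client.disconnect()
```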

The following video explains this concept further.

Types of Telematics Systems

A vehicle’s telematics system can have a Telematics Control Unit or a Telematics Gateway Unit (TGU) based on the functionalities that the system is expected to perform.

  • Telematics Control Unit – A Telematics Control Unit is designed on a Microcontroller Hardware Platform. It is a low-power solution with low memory footprint. It also offers low data throughput. It can store offline data only for a small period of time.


    A TCU can communicate over CAN and dual CAN. The hardware circuitry of a TCU is less complex, and it is hence used as a low-cost/entry-level telematics solution.


    A TCU facilitates vehicle tracking and management, and remote vehicle diagnosis.

  • Telematics Gateway Unit – A Telematics Gateway Unit has a high-performing Application Processor Hardware Platform at its core. It offers various advantages when compared to a TCU. This includes higher data throughput and capacity to store offline data for a longer time.


    The TGU design, however, has a higher memory and power footprint.


    The complex hardware circuitry in a TGU enables it to communicate with multiple CAN networks, and it also includes an audio/video interface. It can also support vehicle ECU reprogramming; hence, it is important that the TGU is compliant with the ISO 26262 functional safety standard and has safety mechanisms in place.

Here is a handy guide that explains the differences between Telematics Control Unit and Telematics Gateway Unit – https://www.embitel.com/wp-content/uploads/TCU-and-TGU-Handbook-1.pdf

Telematics Use Cases

Telematics can be effectively used in various industries such as agriculture & forestry, construction, manufacturing, freight & delivery, retail, finance/insurance, mining, etc.

Some of the use cases of telematics in the automotive industry include the following:

  • Real-time tracking of your vehicle or fleet
  • Verification of driver using mobile apps
  • Driver monitoring and transfer of real-time data regarding over-speeding, theft, breakdown, accidents, etc.
  • Predictive maintenance of vehicle parts which include inspection and repair workflows

Let us classify these use cases and segregate them into the following broad categories:

  1. Telematics and Remote Vehicle Diagnostics
  2. Remote vehicle diagnostics basically includes monitoring remote vehicle status, collecting data and exchanging information in real-time.

    Remote vehicle diagnostics provides the following benefits:

    • Accident Notification and Roadside Assistance – The call center or vehicle monitoring authority gets notified of accidents, while the exact location of the vehicle is also shared. This enables the authority to take immediate action in the event of an unforeseen incident.
    • Non-crash Related Emergency Assistance – Remote diagnostics data also transfers non-crash related emergency signals to the monitoring authority.
    • Turn-by-turn (TBT) Navigation Assistance – The driver receives guidance through this technology while traversing unfamiliar territories. Traffic information on a pre-defined route is also easily available at the fingertips of the driver, courtesy of telematics technology.
    • Vehicle Health Report – The diagnostics data provides actionable intelligence on potential vehicle problems, root cause of automotive failures and how these issues can be fixed. This concept of monitoring vehicle parameters, comparing it with historical data and determining the chances/timelines for failure is commonly known as predictive or preventive maintenance. This enables the vehicle owner to pre-empt vehicle issues and take corrective measures in a timely manner.
  3. Telematics and Fleet Management
  4. Telematics is a necessity for effective fleet management. Crucial fleet information is gathered by the telematics system using sensors, GPS and engine diagnostics and this data is transmitted to the cloud. This enables the fleet manager to get information regarding the vehicle’s location, speed, and direction of movement. It also provides driver monitoring assistance and detects activities such as sharp braking, dangerous cornering, etc.

    In a nutshell, the advantages of using telematics technology for fleet management are as follows:

    • Telematics enables fleet managers to send/receive data to/from vehicles in real-time and eases the burden of managing large fleets.
    • It facilitates a magnified level of transparency in communication between the fleet manager, driver, and customer.
    • Vehicle health updates from the telematics device help in preventive maintenance, which in turn reduces operational costs.
    • The fleet efficiency improves tremendously as drivers have access to optimized routes.
    • Real-time location tracking and driver/vehicle monitoring activities boost the safety of the fleet and crew. Telematics devices are often installed with SOS buttons that enable the occupants of the vehicle to send emergency alerts and receive timely assistance. Remote locking/unlocking of the vehicle is another feature that prevents vehicle theft.
  5. Telematics in the Insurance Industry
  6. Telematics car insurance is a form of vehicle insurance that has been steadily gaining popularity in recent years. One of the greatest advantages of opting for telematics insurance is that the insurance premium is based on the usage of the vehicle and driver behavior – and this data is collected by the telematics device. So, if you are a safe driver, chances are that you will pay much less than what you would normally pay for your car insurance!

    Often referred to as usage-based insurance, smart box insurance or black box insurance, telematics car insurance works on data collected by an app installed on the driver’s smartphone or by a telematics device fitted in the vehicle.

    • Information regarding the roads traversed, time of the day, adherence to speed limits, smooth braking and acceleration, etc. are collected by the device and transmitted to the IoT cloud.
    • This data is stored in the cloud database and used by insurers to derive intelligence on the driver’s behavior and driving history.
    • This translates into the insurance premiums that are offered to them at the time of policy renewal (a simplified scoring sketch follows this list).
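To illustrate how such data might feed a premium decision, here is a deliberately simplified Python sketch. The event names, weights and pricing rule are invented for illustration and do not represent any actual insurer's model.

```python
# Invented, illustrative driver-scoring model: fewer harsh events per 100 km -> higher score.
HARSH_EVENT_WEIGHTS = {"hard_braking": 2.0, "over_speeding": 3.0, "sharp_cornering": 1.5}

def driver_score(events: dict, distance_km: float) -> float:
    """Return a 0-100 score based on weighted harsh events per 100 km driven."""
    penalty = sum(HARSH_EVENT_WEIGHTS.get(name, 1.0) * count for name, count in events.items())
    penalty_per_100km = penalty / max(distance_km / 100.0, 1e-6)
    return max(0.0, 100.0 - penalty_per_100km * 10.0)

def premium_multiplier(score: float) -> float:
    """Map the score to a discount/surcharge multiplier on the base premium."""
    if score >= 80:
        return 0.85   # safe-driver discount
    if score >= 50:
        return 1.00
    return 1.20       # surcharge for risky driving

events = {"hard_braking": 4, "over_speeding": 1, "sharp_cornering": 2}
score = driver_score(events, distance_km=1200)
print(round(score, 1), premium_multiplier(score))
```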

    Another benefit of using the telematics device is that the insurer will be alerted of accidents involving the vehicle so that they can record crucial and accurate data for the claims process. This also averts any fraudulent claims by the policyholders.

Benefits of Telematics

The concept of telematics is not a recent introduction in the automotive industry. It has been around since 1996, but remained an untapped technology at that time due to the high investment cost for infrastructure setup and the lack of consumer demand. However, the rise in popularity of vehicle connectivity has given telematics a new lease of life!

Some of the key benefits offered by the implementation of telematics are:

  1. Navigation – Telematics provides turn-by-turn navigation assistance to guide drivers easily to their locations. When drivers are able to access shortest routes to destinations, they are also able to save on fuel costs.
  2. Safety – Telematics devices collect safety-related information such as call for assistance during a crisis, emergency requests, stolen vehicle tracking, etc. and provide timely help to the vehicle occupants. Telematics also collects driving behavior data such as sharp braking, acceleration, etc. This information can be used to educate drivers so that they stay safe on the roads.
  3. Vehicle Performance – Users receive important vehicle health reports through the telematics system. This information can be very useful for fleet managers, as they can then schedule vehicle maintenance accordingly.
  4. Vehicle Visibility – Telematics empowers organizations so that they can track the location of their vehicles. Fleet managers can use the vehicle location data to make timely route adjustments while responding to traffic congestion, weather conditions, etc. This way, they can switch resources around and ensure that there is no delay in deliveries.
  5. Connectivity to Internet – The driver and passengers in the vehicle can utilize live weather forecasts, news bulletins and even information from social networking apps.
  6. Reduced Administrative Costs – Administration and compliance is simplified as telematics devices can be integrated with third-party apps that generate various types of reports.

Telematics for After Sales Revenues

The competition in the automotive industry is perennially growing. And OEMs need to find innovative ways to deliver value to customers and stay relevant. This has resulted in the usage of telematics for after sales monitoring and update of vehicles.

  • Business-to-product value add – The integration of telematics gateways in vehicles enables OEMs to wirelessly collect a large amount of data related to vehicle usage. This helps in generating insightful reports on vehicle maintenance and upgrade requirements. This information is shared with the customers and it enables them to keep the vehicle in great shape – a win-win situation for both parties.
  • Business-to-consumer value add – Telematics technology also enables OEMs and their partners to deliver useful content to the vehicle owners. This includes information such as traffic condition updates, maps, weather forecasts, stock updates, entertainment, etc.
  • Business-to-business value add – The data that is collected from the vehicles by the OEMs can be utilized by third-party businesses such as insurance companies, web portals that stream audio and video content into vehicles, fleet management companies, EV charging companies, etc.

There are various styles of delivery of telematics technology by OEMs:

  1. One approach is where the OEM provisions for the complete unit in the vehicle itself. The OEM also earns incremental revenue from the subscriptions taken by consumers. Support for third-party software and services will be available at an additional cost.
  2. In the second approach, along with the telematics device, smartphones also play a crucial role. In-auto systems need not have all the desired features, as these can be augmented through smartphone apps.

Do All Cars Have Telematics?

With the advent of connected and autonomous vehicle technology, more and more vehicles will be equipped with telematics technology in the future.

A report from Berg Insight clearly indicates how the aftermarket car telematics space is set to see phenomenal growth in the coming years. The report estimates that the total number of installed aftermarket car telematics systems worldwide was 58.7 million in 2018, and it will grow to 150 million in 2023 – an annual growth rate of 20.6 percent!

Telematics Annual Growth

Image Source – Berg Insight
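As a quick sanity check of the quoted growth rate, the compound annual growth implied by those two data points can be computed directly:

```python
# CAGR implied by the Berg Insight figures quoted above (58.7M in 2018 -> 150M in 2023).
start, end, years = 58.7, 150.0, 5
cagr = (end / start) ** (1 / years) - 1
print(f"Compound annual growth rate: {cagr:.1%}")   # ~20.6%
```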

Types of Vehicles in Which Telematics Can Be Used

Telematics devices can be fitted in all types of vehicles – Cars for personal use, fleet of trucks, buses, trailers, personal and cargo boats, tow trucks, etc.

Today, telematics control unit hardware is commonly found in commercial vehicles like buses and trucks. The telematics device helps in tracking these vehicles while they are at remote locations and also streamlines fleet management requirements.

Telematics in Trucks

OEMs across the globe are integrating high-end telematics systems in their trucks, as it is now a necessity for such heavy vehicles to stay connected.

The IoT engineering team at Embitel has recently worked on the development of a Telematics Gateway Unit (TGU) for electric trucks. The primary purpose of the TGU was to facilitate Over the Air (OTA) updates by connecting with the vehicle manufacturer’s cloud infrastructure.

  • In case there is a need to update the software in the truck ECUs, the TGU receives this data from the cloud and installs the updates. Hence, the telematics gateway unit acts as a gateway/master device to the rest of the components in the truck.
  • The TGU also collects all the diagnostics data from the truck and transfers this information to the cloud. This crucial information is then used by the vehicle OEM and third-parties (such as fleet management companies) for optimizing the performance of the vehicle.

Based on project requirements, it is possible to configure the TGU and other vehicle dashboard components (such as digital instrument cluster) on the same hardware platform.

Telematics for Driverless Cars / Autonomous Vehicles

Although autonomous vehicle technology is still in its nascent stages, a large global market is watching the latest self-driving vehicle trends. There has also been a huge amount of investment in developing technologies that fuel autonomous or partially autonomous vehicles.

One of the biggest aspirations in the industry is that self-driving vehicles will be able to reduce accidents and make the roads exponentially safer. For autonomous vehicles to be able to achieve this feat, it is crucial that the underlying telematics systems are empowered to be able to collect vehicle and location data seamlessly and utilize it for boosting the driving performance.

Modern-day telematics systems collate a large amount of information such as insights on fuel usage, vehicle speed, real-time location of the car, etc. All this information will be relevant even when autonomous vehicles become mainstream. In fact, it is estimated that the dependency on telematics to gather all this crucial information will increase when self-driving vehicles enter the roads.

  • Telematics devices help in determining when the autonomous vehicles are due for maintenance procedures through predictive maintenance.
  • In the event of an emergency, telematics devices can also send distress signals to the cloud from where it is redirected to emergency response teams for immediate action.
  • There will be increased demand for technology such as route planning and route optimization, as these are essential for self-driving vehicles.

All in all, the usage of telematics in the autonomous vehicle economy will be more pronounced than it is now.

Telematics Control Unit Architecture

As indicated above, the telematics control unit is a central part of a vehicle’s telematics system. It manages a host of functionalities such as:

  • Collection of vehicle data from the CAN Bus port
  • Managing the data collected over multiple communication interfaces
  • Battery and memory management of the telematics system
  • Streamlining two-way communication with the cloud server
  • Managing the communication with the HMI device

We will now take a look at the hardware architecture of the telematics control unit:

Telematics Control Unit Architecture

The various components of the TCU hardware are as follows:

  • Global Positioning System (GPS) – This module tracks the location of the vehicle (latitude and longitude).
  • Central Processing Unit – This module has data processing and memory management capabilities. If there is a requirement to have an advanced display-based telematics product, then Linux OS is preferred for the processor. For basic telematics products, Android OS is deployed on the processor.
  • CAN Bus Module – This unit manages all communication with the vehicle ECUs. The Telematics control unit exchanges information with the vehicle ECUs through the CAN Bus. It may also use K/Line Bus for specific functions such as theft alerts, remote locking of vehicle, etc.
  • Memory Unit – The memory unit stores information when there are disruptions in the network. It also stores vehicle information for future use. Advanced functionalities such as speech recognition are all managed by this unit as well.
  • Communication Interfaces – These interfaces support multiple communication channels such as Wi-Fi, cellular, LTE, etc.
  • GPRS Module – This module facilitates data connectivity and voice-based communication with remote devices. It often has an ordinary SIM card, e-SIM or plastic SIM card along with the GPRS modem.
  • Battery Module – The in-built battery module facilitates power management. It is a cost-effective source of backup for Real-Time Clocks when the automobile’s engine is off. It also helps in locating and recovering stolen vehicles by tracking telematics data even while the engine is switched off.
  • Bluetooth Module – This module enables connectivity to nearby devices like the user’s mobile phone.
  • Audio Interface – A microphone with audio interface facilitates hands-free calls and voice-based commands. It also helps in playing media files from the vehicle’s audio system.
  • General Purpose Input/Output Interface (GPIO) – This unit consists of I/O type interfaces for connecting lights and buttons.
  • HDMI port for HMI – The HMI is the place where information such as maps, fuel usage, vehicle speed, etc. are displayed to the driver. The HMI is connected to the TCU through an HDMI port.

Telematics Software Components

The telematics system in vehicles usually has the following software components:

  • Bootloader software stack for booting
  • Real Time Operating System (RTOS) and BSP modules
  • Global Navigation Satellite Systems (GNSS) software that assist in vehicle tracking in real-time
  • Multimedia device driver software
  • Automotive Framework Classes that enable applications to access telematics functionalities
  • Software that helps in data analytics which alerts the driver about fuel usage, vehicle servicing, etc.
  • Over the Air (OTA) update software
  • Security software that ensures multi-level security is maintained through data encryption, user authentication, device verification, etc.

Telematics System Development Considerations

During the design phase of a telematics system, it is important for the engineering team to lay out all the basic considerations and requirements. This could include security features to be implemented, flexibility of the design so that various communication protocols are supported, optimization of power consumption, reinforcing the system performance, etc.

The engineering team also has to consider the cost restrictions that outline the scope of the project. In other words, the engineers should be able to find the most suitable hardware components and software development methodology that optimizes system features, while staying close to the estimated budget.

It is also important to consider the regulatory compliance or certification requirements for the telematics product. Apart from this, memory and power footprint optimization should be given serious thought.

Another important aspect to consider is the design of the telematics cloud server. The cloud database design, web server and application server design, and user role definitions and management are crucial aspects to consider during the telematics product design phase.

Telematics and IoT Security

The key to the development of a secure telematics system is planning during the design phase. We have an elaborate three-part IoT security series on how to develop IoT systems/applications using holistic security principles. Some of these design principles can eliminate common design flaws and give you a secure IoT product.

Listed below are some essential practices that will ensure that you build a secure telematics product:

  • At the beginning of the design phase, it is important to come up with a security architecture. This architecture should be the baseline when defining key interfaces and data flows.
  • Threat analysis and risk assessment is essential throughout the telematics system development journey. The functionalities that will reduce risks should be clearly highlighted.
  • During the software development phase, standard security best practices should be followed. Code reviews and testing should also consider the security aspects so that all hidden vulnerabilities are unearthed early on, in the project life cycle.
  • Include activities that address known security vulnerabilities in third-party components/libraries.
  • Do not allow additional privileges for system components and limit CAN Bus access, as much as possible.
  • Pay close attention to the security of communication channels.

 

Telematics and OBD

On-Board Diagnostics (OBD) is a mode of communication between the ECUs in a vehicle. OBD II is an international communication standard written and regulated by the International Organization for Standardization (ISO) and the Society of Automotive Engineers (SAE).

All modern cars support OBD II protocol. With an OBD port that is fitted in a vehicle and an OBD connector, a technician can access the critical vehicle parameters in the form of Diagnostic Trouble Codes (DTC).

Initially, OBD II was predominantly used for vehicle engine diagnostics, but today, it is useful for various other purposes:

  • OBD II helps in analyzing vehicle speed and RPM
  • Through OBD II, it is possible to identify the fuel level in the vehicle
  • It also provides data on the time since the engine started
  • Throttle position is another OBD II parameter that may be useful to the end user
  • Idling time, engine health, distance covered, hard braking, over-acceleration, speeding and fuel efficiency are other parameters that can be recorded via OBD II (a minimal query sketch follows this list).
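As an illustration, several of these parameters can be read over a standard OBD-II adapter with a few lines of Python. The sketch below assumes the python-obd library and an ELM327-style adapter; supported commands and units vary by vehicle.

```python
# Minimal sketch: query a few OBD-II parameters via the python-obd library.
# Assumes an ELM327-style adapter; available commands depend on the vehicle.
import obd

connection = obd.OBD()   # auto-detects the OBD-II adapter (USB/Bluetooth)

for command in (obd.commands.SPEED, obd.commands.RPM, obd.commands.FUEL_LEVEL):
    response = connection.query(command)
    if not response.is_null():
        print(f"{command.name}: {response.value}")

connection.close()
```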

In the beginning, most vehicle tracking devices were based on GPS. These systems transmitted data related to vehicle location so that companies could track their fleet. The introduction of OBD in vehicle tracking opened up a world of new opportunities to fleet managers.

Companies now had access to driver behavior information such as vehicle idling time, over speeding, sudden braking, etc. This enables them to discontinue unsafe driving practices.

Vehicle tracking devices that use OBD data are also able to notify the company when there is a problem with the engine.

To summarize, OBD-enhanced telematics provides much more information to fleet managers than GPS alone. It helps them stay up to date with the location of the fleet and also maintain a grip on vehicle condition and driver behavior. Additionally, they can use the information received from each vehicle to lower fuel costs and improve overall fleet efficiency.

A telematics solution can be connected to the OBD II port of a vehicle quite easily. An adapter can also be used in case the vehicle does not have an OBD II port. The installation is quick, yet the data collected is vast and extremely useful for fleet management.

How GPS Tracking Differs from Telematics

GPS is the use of satellite technology to track and trace the location of a vehicle or a device. It is useful for drivers who are seeking the way to a particular destination.

On the other hand, telematics is more than just GPS.

GPS is essentially a part of a telematics system. As explained above, telematics devices transmit data related to the location of the vehicle and also various other details such as driver behavior information, vehicle status, etc. The telematics device transmits all this information to the cloud in real-time.

AIS 140 Compliance

Automotive Industry Standard (AIS) 140 is a set of regulations published by the Automotive Research Association of India (ARAI) for all commercial vehicles. It aims to build an intelligent transportation system in the country.

As per AIS 140, it is mandatory for all commercial and public transport vehicles to be equipped with vehicle tracking systems. These telematics systems are also required to have emergency buttons and camera surveillance for the safety of the vehicle’s occupants.

Some of the advantages of complying with this standard are given below:

  • Through AIS 140 compliance and the integration of tracking devices in commercial vehicles, the Indian public transportation sector is at the cusp of a technological transformation. This may be the cornerstone to more advanced automotive technologies such as ADAS, in the future.
  • If a vehicle meets with an accident or any other emergency situation, the transport authority will be able to locate it and send timely assistance.
  • In case some passengers in a commercial vehicle are in distress, they can easily send an SOS signal to the transport authorities.
  • AIS 140 compliance also enables authorities to monitor driver behavior – rash driving, sudden brakes, vehicle mishandling, etc. This helps in ensuring that safe driving practices are followed by public transport personnel.

Since the AIS 140 standard has been mandated by the Indian government, it becomes all the more important for the associated IoT architecture to be up to date. The transport authorities also need to have stringent surveillance and management mechanisms for the emergency requests from vehicles.

Telematics Implementation and Challenges

Some of the challenges faced by fleet management companies after the implementation of a telematics system are as follows:

  • Data Aggregation – Some fleet management companies deploy different telematics systems for each of their vehicle lines. For instance, a specific ready-to-deploy solution may be implemented for a line of trailers, whereas the trucks are fitted with another telematics solution. The biggest challenge that the company would face here is consolidating all the different data from the telematics devices. The act of generating cohesive reports after consolidation of data can itself become quite cumbersome. Companies can either use aggregation models to solve this issue or implement a custom-designed telematics solution that is integrated with a dedicated IoT cloud and end-user fleet management software. Although the investment on this front may seem a little higher initially, such an end-to-end solution will facilitate future scalability and provide better ROI in the long run.
  • Data Sharing – Ready-to-deploy telematics solution providers may have various clauses associated with the data sharing policies. It is best to clearly understand who owns the data and its security, before investing in such a solution.
  • User Acceptance – Drivers do not seem to favor the use of telematics, as it encroaches on their privacy. This has been a persistent roadblock in the adoption of telematics for fleet management.


    Insurance companies are also facing similar challenges when convincing customers to opt for telematics-based insurance. Many insurers had to sacrifice some margin while introducing telematics, as they were offering discounts for customer acquisition. However, this is expected to even out over time, as customers become more aware of their driving behavior and opt for safe driving practices. Gradually, accidents and claims will reduce, and insurer margins are likely to improve.

The Future of Telematics

Telematics has been on the path of exponential growth in the automotive industry. These days, companies manufacturing heavy vehicles and luxury passenger cars opt for high-end telematics systems, complete with a telematics gateway unit.

The insurance industry is also embracing telematics in order to differentiate themselves from competition in the market.

Fleet management companies have recognized the need to integrate telematics technology in their operations to boost accountability, control costs and be compliant to government regulations. Fleet managers have also benefited immensely from the technology as it easily integrates with other software related to ERP, workforce management and business management.

Telematics can cater to a large list of use cases that we could have never imagined before. By leveraging the vast amount of data transmitted to the IoT cloud by telematics systems, it is possible to determine actionable insights for a variety of business scenarios.

For instance, the telematics cloud data can be used for urban analytics for smart cities, fleet performance benchmarking, predictive maintenance and suggestions for vehicle spares, to name a few.

All in all, telematics is poised to become an integral part of all future automobiles and we are certain that this will bring about a paradigm shift in the automotive industry. Exciting times ahead!



The Rise and Rise of Adobe Experience Manager (AEM) – An Overview

Category : ecommerce-insights

 
If there has been any technology or tool that has risen to fame in a short span of time, it is undoubtedly Adobe Experience Manager. AEM has everything going for it – from technology to architecture, upgrade features and more. Customer testimonials are validation that AEM is a tool that will last for years to come. So, what’s the big deal about AEM? Let’s check it out.

What is AEM?

In layman’s terms, AEM – Adobe Experience Manager is a CMS (Content Management Solution) created to solve the troubles of digital marketing professionals and developers. It is all about delivering the best user experience through a website.

AEM (Adobe Experience Manager) is a combination of digital asset management and content management system to give best customer experiences across all platforms like mobile, web, email and social media. It simplifies the flow of management and delivery of a website’s content.

AEM comes with five modules – sites, assets, mobile, forms and community – which together make a great CMS platform and deliver high-traffic websites and mobile applications.

Who Benefits From Adobe Experience Manager?

From retail, manufacturing and financial services to media and entertainment, almost all industries use AEM for data centralization, easier workflows and to scale customization. Navigating the tool is simple even for non-technical marketers.

The interface is user-friendly with in-built features, and the drag-and-drop functionality is the icing on the cake!

Typically, Adobe Experience Manager is used to solve issues like:

  • Inconsistency due to data silos and stumbling customer experiences.
  • Delay in optimization and updates because of inefficient workflows
  • Outdated technology stacks that lack the scale and flexibility of a cloud-based solution.
  • Absence of a DXP (Digital Experience Platform) or D2C (Direct-to-consumer) channel to collect customer data and drive customer engagement.

AEM with other Adobe Products

Adobe Experience Manager is part of Adobe Experience Cloud, Adobe’s digital experience solution. It can work along with Adobe Analytics, Adobe Target, Adobe Audience Manager, Adobe Commerce Cloud, and Marketo Engage.

Integration between these solutions enables you to do data analysis from various sources, build customer segments, create collective customer profiles, and provide custom-made experiences throughout different channels.
 

Adobe Experience Manager Technology

AEM is based on Apache Sling framework concepts. It is a Java application built on the OSGi (Open Services Gateway Initiative) framework using the Apache Felix engine (a community effort to implement the OSGi framework under the Apache license). This makes the Adobe AEM CMS one of the most powerful components of the Adobe Marketing Cloud. Apache Sling uses a JCR (Java Content Repository, built with Apache Jackrabbit) object database to store the required information.

Apache Sling ships with its own HTTP server and can also be deployed as a web application in a servlet container such as the Jetty web server (which provides basic server functionality with a servlet framework). Adobe has expanded the features of Sling to produce its own enhanced version of the content repository, Adobe CRX (Content Repository eXtreme).

At the time of its conception, CQ 5 (the older version of AEM) had much of its functionality moved into Granite (Adobe’s UI framework). CRX and Granite manage most of the low-level functionality, such as data persistence, event management and user management.

The Adobe digital asset management and content management features are provided by WCM (production-ready core components) / CQ (the older version of AEM) on top of the Granite / CRX core. Ever since CQ was upgraded to AEM, there has been no looking back.

AEM (Adobe Experience Manager) serves as a hybrid CMS. There is something for everyone.

For designers, it gives strong and user-friendly options to build front-end applications. For marketers, it provides management and optimization of content for their core channels without external dependencies. For developers, it grants the power to create, access, and reuse content elements for seamless customer interfaces across projects.

AEM Architecture

AEM Architecture

 

The diagram above shows the basic architecture of Adobe Experience Manager and its interdependencies. These dependencies can be satisfied by AEM’s own internal components or by third-party counterparts.

Servlet Engine

The servlet engine is the server within which each AEM instance runs as a web application. It can be any servlet engine that supports Servlet API 2.4 or higher. CQ WCM (Web Content Management) does not need an additional application server, but it does need a servlet engine; to cater to this requirement, CQ WCM ships with CQSE (CQ Servlet Engine). And voila, it is free to use.

Java Content Repository (JCR)

JCR, the Java Content Repository, is a content store that is independent of the actual implementation. JCR combines a web application (which exposes a JSR-170 compliant API plus temporary data storage in the form of a session) with a Persistence Manager (which provides persistent data storage such as a database or file system).

CQ 5

The infrastructure of CQ5 (the former name of Adobe Experience Manager) allows interoperability and seamless integration with other CQ applications. This applies to applications that are an integral part of CQ5 as well as custom applications developed for the platform.

Applications like Web Content Management and the Workflow Engine were developed to support CQ5. Adobe Digital Asset Management and Social Collaboration are a few of the best features available, along with various other product features. Apache Sling and OSGi (Apache Felix) are the technologies used predominantly in AEM.

Apache Sling is a web application framework for content-centric applications; it uses a Java Content Repository such as Apache Jackrabbit or CRX to store and retrieve content. Apache Sling is embedded within AEM. Some important points about Sling are:

  • Apache Sling is utilized to process HTTP requests to store and interpret data.
  • REST (Representational State Transfer, a software architectural style for APIs) principles are the basis for Sling, and this makes the development of content-based application life cycles easier (a minimal content-fetch sketch follows this list).
  • Sling maps Content objects to their specific components that reproduce and process the incoming data.
  • It is equipped with server-side and AJAX-based scripting support. It can be used with scripting languages such as JSP (JavaServer Pages), Ruby and many more.
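Because Sling exposes content over REST, any HTTP client can read repository content. The sketch below uses Python's requests library against a hypothetical local AEM author instance and relies on Sling's default JSON rendering (the numeric selector limits the depth of the returned tree); the host, path and credentials are placeholders.

```python
# Fetch a page's content as JSON from a (hypothetical) local AEM author instance
# via Sling's default JSON rendering. Host, path and credentials are placeholders.
import requests

AEM_HOST = "http://localhost:4502"          # hypothetical local author instance
CONTENT_PATH = "/content/my-site/en/home"   # hypothetical page path

response = requests.get(
    f"{AEM_HOST}{CONTENT_PATH}.1.json",     # ".1" limits the rendering to one level deep
    auth=("admin", "admin"),                # placeholder credentials
    timeout=10,
)
response.raise_for_status()

page = response.json()
print(page.get("jcr:primaryType"), page.get("jcr:content", {}).get("jcr:title"))
```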

AEM (Adobe Experience Manager) is built on OSGi technology, which is a dynamic module system for Java. CQ WCM (Web Content Management) is the tool responsible for generating and publishing pages to the website in real time.

The CQ Workflow Engine is an easy-to-use and powerful process engine running on the CQ5 platform. A Java API and a RESTful HTTP interface provide the required access for applications outside the platform. Within this framework, all requests for generating or publishing content are managed, including approvals and sign-offs.

CQ Components, which are effectively sets of widgets, support the logic required to define the actual content. They include components and templates such as Text, Image, Column control, etc.

CQ Widgets are entities that work like building blocks and perform specific user functions such as content editing; they include radio boxes, buttons, dialogs, etc.

Adobe Experience Manager Capabilities

  • DAM (Digital Asset Management) – AEM DAM is a tool that helps editors to store content and manage the lifecycle of assets (videos, documents, images) throughout the websites under a clear folder structure. DAM allows the editors to access the project files from different locations. The drag and drop features make the whole process of editing and publishing data really smooth and easy.
  • Creative Cloud Integration – Continuous monitoring of user data, analytics, creating campaigns, targeting definite users or groups is what is required of a marketing tool. Adobe Experience Manager which is a part of AMC (Adobe Marketing Cloud) makes it easy to be integrated with Adobe Analytics, Adobe Target, Adobe Campaign, and other Adobe features. It seamlessly integrates with third-party tools, thereby providing scale and flexibility for the users.
  • Better Search – Adobe Experience Manager facilitates in minimizing search time while looking for the right media. It allows you to add and access tags and metadata to files that are uploaded in the cloud. It improves overall team performance.
  • Task Management – AEM keeps the dashboard chaos free by providing specific workspaces for individual projects. Continuous feedback, comments, and analysis support improved workflows within the teams.
  • Video Management – Adobe Experience Manager helps you to use diverse types of videos on multiple screens. With the help of user insights and analytics, you know your customer behavior and can manage content accordingly. You can boost customer experiences, increase brand awareness, and retain loyal customers.
  • Visual Media Conversion – With this tool you can instantly convert any files into varied formats and engage customers across various channels and platforms.
  • Personalized Content – Customization is a quintessential aspect of a good brand or company. Everyone appreciates custom-made content and experience. AEM platform helps you deliver personalized content through a single user interface with all the necessary tools. It helps in quick service and reaching the right customer at the right time.
  • Project Dashboard – Adobe Experience Manager’s Project Dashboard supports project management in a centralized environment. Projects are linked together through logical grouping of resources. Users can add several types of information into projects. It could be tasks, project information, assets, websites, external links, or team information.

How Much Power Can AEM Give You?

The power you have with Adobe Experience Manager deployment is limitless. AEM’s user-friendly interface makes it simple for the team to create, design and manage content which is interactive and gives responsive digital experiences.

AEM gives you the ability to develop unlimited variations of the website – in delivery type, format, styles, etc. – without really working on multiple sets of assets. AEM helps creative and marketing teams work together seamlessly by integrating with Adobe Creative Cloud.

Customized and targeted experiences can be provided through data retrieved from Adobe Analytics. This, in turn, helps to know the customers better, analyze their user behavior and make any improvements on the website for good.
 

Adobe Experience Manager Developer Role

An AEM developer role is the most crucial one in the product life cycle. He/she should be well versed in technologies like Sling, JCR and OSGi, apart from Enterprise CMS.

Let us have a closer look at an AEM developer’s role and responsibilities:

UX Design (User Experience)

AEM development begins at the wire-framing/structural stage. Including architects and AEM developers during your planning stages will give you an edge during implementation. This helps AEM developers get insights into user experience and user interaction, along with some early information about the architecture.

Front-end Development

An AEM developer should know the front-end code thoroughly. AEM developers get exposure to task runners and tooling like Gulp/Grunt, NPM and Node.js, and then get started with the actual front-end development using CSS, HTML, JavaScript and jQuery.

AEM Component Development

Most AEM component development happens using the HTML Template Language (HTL, formerly called Sightly). This stage combines dialog building (in XML) and client library development (specific to AEM development). These tools let you add content dynamically to the components via information from the dialog box. Simple logic can be expressed in HTL, while more complex logic is implemented in Java code.

AEM OSGi and Servlets Development

Any AEM developer must have good knowledge of the OSGi framework, OSGi services, annotations and the OSGi component life cycle. A thorough understanding of Java development guides developers through OSGi and servlet development.

AEM DevOps and Production Support

An AEM developer will still be responsible and involved even after the actual development is completed. With CI/CD (Continuous Integration/Continuous Delivery) systems like Jenkins, the code can easily move across environments such as Dev, Staging, Pre-Prod, and UAT.

Production deployments still have to happen manually to ensure that all the processes are completed according to a checklist.

Conclusion

Marketing is beyond providing a web customer experience. One should be able to create efficient campaigns, monitor analytics and reach targeted audience constantly.

Adobe Experience Manager has also gone over and above plain website management. It is a wholesome provider of solutions to mobile sites, mobile applications development, ecommerce, campaign management and overall content marketing. Along with other AMC (Adobe Marketing Cloud) solutions, AEM is a pathfinder in digital marketing.

At this juncture of objectives and growth, companies should go for a service provider that is trusted and aligns with their company goals.

We at Embitel help you design, build and deliver seamless digital experiences and drive customer engagement. With our Adobe Experience Manager services, get the best of a fully integrated CMS, DAM (Digital Asset Management), digital enrolment and AEM cloud integration for your websites, for a cohesive digital experience.



How Important is Dependent Failure Analysis in Achieving ‘Freedom from Interference’ as per ISO 26262?

Category : Embedded Blog

An automotive system consists of multiple software components that interact with each other.

For instance, a light outage detection ECU communicates with UDS-based diagnostics to report an outage in the LED light system. Any fault in the light outage detection ECU might lead to incorrect diagnostic reporting.

In the context of ISO 26262 compliant automotive software development, the scenarios get more complicated and nuanced. The standard mandates that a lower ASIL component can access a function of a higher ASIL component only when there is no interference and dependence. In other words, a fault in a lower ASIL component must not affect the functioning of a higher ASIL component.

We can take the example of ADAS to understand the extent of this interference. An ASIL B communication module might feed data to a cruise control module so that the module can take the right decisions in terms of braking and speed control. An instance where the communication module develops a glitch and is not able to feed the right data can be catastrophic. So how do we avoid such situations? Do we assign a higher ASIL to every vehicle component? That would escalate the cost by a significant amount, which is definitely not recommended.

This is where Dependent Failure Analysis (DFA) can prove to be very effective.

In the subsequent sections, we will try to correlate dependent failure analysis and Freedom From Interference (FFI). We will examine what goes into dependent failure analysis and how it helps in achieving freedom from interference.

But first, let us understand the dependencies among the components and the types of faults to watch out for.

Understanding Independence, Interference and Freedom from Interference

We mentioned earlier that a fault occurring in one component might have a bearing on other components as well. Failures due to these faults are called dependent failures.

When we dig deeper into these failures, we are able to establish their cause and effects.

  • When a failure is due to a single specific event or root cause that causes the failure of multiple elements, it is called a common cause failure.
  • Another kind of dependent failure is cascading failure. In this scenario, failure of one element leads to failure of another. It appears like a cascade of failures, hence the name.

Automotive Functional Safety consultants use the term independence only when the dependent failures (cascading and common cause failures) do not lead to any safety goal violation.

Independence can be ascertained by performing a dependent failure analysis (DFA), which we will discuss later in the blog.

Another term that we must understand before we explain dependent failure analysis is interference. We can understand interference as the partial opposite of independence. It is the presence of a cascading failure from a non-ASIL or lower ASIL component to a higher ASIL component that leads to one or many safety goal violations.

Finally, freedom from interference implies absence of cascading failure between elements that leads to safety goal violation. Remember that it does not include common cause failure.

Dependent Failure Analysis, Freedom from Interference and Independence: How Are These Related?

Now that the terms related to dependent failures are clear, we can move to the analysis that help achieve freedom from interference and also independence (hope you are able to identify the difference between the two 😊).

Dependent Failure Analysis focuses on finding the single causes/events that invalidate independence and freedom from interference. Every element that might cause such failures is taken into consideration while performing this analysis. Part 6, Part 7, and Part 9 of the ISO 26262 standard serve as the reference for performing dependent failure analysis.

Dependent Failure Analysis

Some points to remember about dependent failure analysis:

  • It validates Freedom from Interference between the elements by identifying the cascading failures
  • It validates independence between the elements by identifying both cascading and common cause failures
  • Dependent Failure Analysis helps in putting in place appropriate safety mechanisms to contain the faults within the element and prevent it from cascading
  • Dependent Failure Analysis can be performed at system, software, and hardware level
  • The analysis brings forth the points that are susceptible to failures
  • It can be performed with both deductive and inductive approaches

What Goes into Dependent Failure Analysis? Stepwise Explainer

The identified causes of failure for all safety-critical elements are recorded in a worksheet. ISO 26262 consultants use dedicated tools or regular Excel sheets to create the template. Some of the tools that are widely used for dependent failure analysis are Vector PREEvision, ANSYS medini analyze, ENCO SOX and the LDRA tool suite.

Irrespective of whether you are using a tool for the analysis or an MS Excel sheet, the worksheet has two tabs: one for Cascading Failures (CF) and one for Common Cause Failures (CCF).

Here’s what a typical dependent failure analysis worksheet looks like:

DFA worksheet
 
CCF ISO 26262 DFA

 

Let’s delve a bit deeper into both CCF and CF analysis.

Common cause failure analysis begins with identifying the elements for which common causes of failure have to be identified. The reason for choosing each pair of elements must be explained by the engineers. Typically, these activities are performed by automotive experts who are able to identify the elements based on the architecture and their extensive experience in the automotive domain.

Some factors that influence the selection of the elements are:

  • One software implementing different functions
  • Partitioning of software
  • Redundant elements
  • Same external resource controlling two elements

Following are the aspects based on which the analysis is performed:

  • Root Cause of the Failure: The root cause of failure is the common cause (a single specific event) that affects both elements chosen for analysis. For instance, a signal processing fault might affect two elements of an LED ECU.
  • Failure Mode: These are the project-specific failure modes of the elements. Failure modes can be anything from loss of function to degraded function and unintended activation or deactivation of the element’s function.
  • Impact of the Failure: The impact of each failure is documented in this section. The impact at both the local and system levels is analyzed. In the context of an LED control ECU, inconsistent switching on and off of an LED can mislead the operator or the driver.
  • Safety Measure: The existing safety measures for handling such failures are described in this section. Details about preventing the root cause and controlling its effects are also an integral part of the analysis and are mentioned here. For instance, if adherence to automotive standards such as CISPR or ISO 11452 can prevent the cause, it must be included in this section.
  • Risk Analysis: The risk analysis needs to be performed and documented. If the risk is found to be within the acceptable limits as per the project requirements, then no action is needed, i.e., the action item column in the template need not be filled. If the risk is found to be high and the project decides to change the design, then the action items shall be filled with the change request or the planned changes to mitigate or handle the risk.
  • Action Item: The design changes resulting from the common cause failure analysis constitute the action items. These are changes required to the design in order to keep the impact of the failure as localized as possible.
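To make these columns more concrete, here is a minimal sketch of how one CCF row could be captured as a data record, filled in with the LED control ECU example used above. The struct and field names are purely illustrative assumptions (no DFA tool or ISO 26262 clause mandates this layout); C is used simply because it is common in automotive software.

```c
#include <stdio.h>

/* Illustrative record mirroring the CCF tab columns discussed above.
 * Field names are hypothetical; real DFA tools define their own schemas. */
typedef struct {
    const char *element_pair;    /* elements sharing a potential common cause  */
    const char *root_cause;      /* single event affecting both elements       */
    const char *failure_mode;    /* project-specific failure mode              */
    const char *impact;          /* effect at local and system level           */
    const char *safety_measure;  /* existing prevention / control measures     */
    const char *risk;            /* result of the risk analysis                */
    const char *action_item;     /* design change, if the risk is not accepted */
} ccf_row_t;

int main(void)
{
    /* Example row based on the LED control ECU discussed in the text. */
    ccf_row_t row = {
        .element_pair   = "LED driver element A / LED driver element B",
        .root_cause     = "Signal processing fault affecting both elements",
        .failure_mode   = "Unintended activation/deactivation of the LED function",
        .impact         = "Inconsistent LED switching may mislead the driver",
        .safety_measure = "Adherence to EMC standards (e.g., CISPR, ISO 11452)",
        .risk           = "Acceptable per project requirements",
        .action_item    = "None (risk accepted)",
    };

    printf("Root cause: %s\nImpact: %s\n", row.root_cause, row.impact);
    return 0;
}
```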

Cascading failures are analyzed between the source element (the origin of the fault) and the destination element, where the final failure perceived by the vehicle driver manifests. These elements are identified during the software architecture design activity. The data exchange between the source and destination elements is analyzed in order to identify the signals transmitted by the source element that caused the cascading failure.

The ASILs assigned to the source and destination elements between which the data exchange takes place are also analyzed. It is equally important to consider the operating modes and situations that are relevant to the cascading failures; these must be listed as well.

Apart from these factors, failure modes, their impact, and ways to control them are analyzed and documented. These analyses go a long way in achieving freedom from interference and identifying the design changes required to reduce the risk of dependencies.
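As a simple illustration of the kind of measure such an analysis can lead to, the sketch below shows a plausibility (range) check applied by a higher-ASIL destination element to a signal received from a lower-ASIL source element, so that an implausible value cannot cascade further. The signal, the limits, and the function names are hypothetical assumptions made only for this example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical limits for a PWM duty-cycle signal (0-100 %) sent by a
 * lower-ASIL source element to a higher-ASIL destination element. */
#define DUTY_CYCLE_MIN  0u
#define DUTY_CYCLE_MAX  100u
#define DUTY_CYCLE_SAFE 0u   /* safe substitute value if the check fails */

/* Plausibility check at the destination element: an out-of-range value
 * from the source is replaced by a safe default so the fault does not
 * cascade into the higher-ASIL function. */
uint8_t accept_duty_cycle(uint8_t received, bool *signal_valid)
{
    if (received <= DUTY_CYCLE_MAX) {   /* uint8_t is always >= DUTY_CYCLE_MIN */
        *signal_valid = true;
        return received;
    }
    *signal_valid = false;              /* report the fault to error handling */
    return DUTY_CYCLE_SAFE;
}
```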

The following possible failures must be looked out for during cascading failure analysis:

Timing and Execution: Timely execution of processes is paramount in automotive software. Failures such as blocking of execution, execution deadlocks, processes going into infinite loops, incorrect time allocation for execution, and issues with synchronization among elements are some of the time- and execution-related failures that must be analyzed.
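One widely used way to detect such failures is alive-counter (heartbeat) supervision. The sketch below assumes a periodic monitored task and a separate supervisor; the names and the reaction are illustrative only, not a prescribed mechanism.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical alive-counter supervision: the monitored task increments a
 * counter every cycle; a supervisor checks that the counter has changed
 * within the expected time window. A stuck counter indicates a blocked
 * task, a deadlock, or an infinite loop. */

static volatile uint32_t task_alive_counter = 0u;

/* Called at the end of every execution cycle of the monitored task. */
void monitored_task_heartbeat(void)
{
    task_alive_counter++;
}

/* Called periodically by the supervisor (e.g., from a watchdog handler). */
bool supervisor_check_alive(void)
{
    static uint32_t last_seen = 0u;

    bool alive = (task_alive_counter != last_seen);
    last_seen = task_alive_counter;

    /* If not alive, a real system would trigger a reaction such as a reset
     * or a transition to a safe state. */
    return alive;
}
```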

Memory Corruption: If an element corrupts the memory of another element or accesses memory allocated to a different element, it can lead to cascading failures. Such interactions must be identified and documented.
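In practice this is usually enforced by a hardware MPU or by OS-level partitioning. Purely as an illustration of the principle, the sketch below shows a guarded write that rejects any access outside a memory region assumed to be shared with another element; the region and function names are hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical RAM region belonging to element B. Real ECUs typically use a
 * hardware MPU or OS partitioning rather than hand-written checks. */
static uint8_t element_b_region[256];

/* Guarded write used by element A: any attempt to write outside element B's
 * shared buffer is rejected, so a faulty pointer in element A cannot corrupt
 * memory belonging to another element. */
bool guarded_write(uint8_t *dest, size_t len, const uint8_t *src)
{
    uintptr_t start = (uintptr_t)element_b_region;
    uintptr_t end   = start + sizeof(element_b_region);
    uintptr_t d     = (uintptr_t)dest;

    if (d < start || d + len > end || d + len < d) {
        return false;               /* out of bounds: report, do not write */
    }
    for (size_t i = 0; i < len; i++) {
        dest[i] = src[i];
    }
    return true;
}
```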

Information Exchange: The information exchange between elements must be accurate, and there must not be any loss, repetition, corruption, or delay of information. Other factors such as incorrect addressing and incorrect information sequence also need to be analyzed.
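The sketch below illustrates how such checks are commonly realized: the sender adds a sequence counter and a CRC to each message, and the receiver uses them to detect corruption, repetition, and loss (a reception timeout, not shown, would cover delay). This is a simplified, hypothetical scheme loosely inspired by end-to-end protection concepts, not a specific AUTOSAR E2E profile.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical protected message: payload plus a sequence counter and a CRC. */
typedef struct {
    uint8_t payload[4];
    uint8_t seq;        /* incremented by the sender for every message */
    uint8_t crc;        /* covers payload and seq, detects corruption  */
} protected_msg_t;

/* Simple CRC-8 (polynomial 0x1D) over a byte buffer. */
static uint8_t crc8(const uint8_t *data, size_t len)
{
    uint8_t crc = 0xFF;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc & 0x80u) ? (uint8_t)((crc << 1) ^ 0x1D) : (uint8_t)(crc << 1);
        }
    }
    return crc;
}

/* Receiver-side check: detects corruption via the CRC and repetition or loss
 * via the sequence counter. */
bool check_message(const protected_msg_t *msg, uint8_t *last_seq)
{
    if (crc8((const uint8_t *)msg, offsetof(protected_msg_t, crc)) != msg->crc) {
        return false;                       /* corrupted                 */
    }
    if (msg->seq == *last_seq) {
        return false;                       /* repeated message          */
    }
    if ((uint8_t)(msg->seq - *last_seq) != 1u) {
        return false;                       /* one or more messages lost */
    }
    *last_seq = msg->seq;
    return true;
}
```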

Conclusion

The ISO 26262 standard makes various analyses a very important part of the safety lifecycle. Dependent failure analysis is one such analysis that helps achieve freedom from interference and independence. It demonstrates that the requirements to reduce dependencies between elements have been met and are in sync with the technical safety requirements and functional safety requirements. At the end of the analysis, the engineers have clear insights into the common cause and cascading failures, which helps them reinforce the safety measures.