
Monthly Archives: July 2020


Is Adaptive AUTOSAR Here to Disrupt the Way Automotive Software is Developed?

Category : Embedded Blog

In our previous blog, we compared Classic AUTOSAR with its Adaptive counterpart. The idea was to highlight the need for the adaptive platform and to bring out the key differences between the two.

In this second blog on the Adaptive AUTOSAR series, we take the discussion forward by delving deeper into the Adaptive AUTOSAR platform.

Reports suggest that by 2025, more than 65% of vehicle components will be electronic, and software forms a major chunk of them. The software is growing not only in volume but also in complexity. Additionally, there are functional safety requirements to be fulfilled for the safety-relevant components.

The Adaptive AUTOSAR platform has been designed with these requirements in mind. So, you get more computing prowess, highly flexible software configurations and communication managers. That’s not all! The transition from multi-core processors to their advanced successors, from CAN to Ethernet/SOME/IP, and from Embedded C to C++ are some of the disruptions the adaptive platform is likely to bring about.

Curious? Let’s cut to the chase!

What is the AUTOSAR Adaptive Platform?

In the simplest terms, the AUTOSAR Adaptive Platform is a set of mechanisms that implements the runtime for Adaptive Applications. What we call software components in the Classic AUTOSAR environment are termed Adaptive Applications here.

The adaptive platform offers a flexible environment where applications can be added or modified, with the ECU acting as a central server. Updating the software has become all the more important as it interacts with an ever-changing environment. Case in point: ADAS, which takes data from the surrounding infrastructure and processes it to help the driver drive better and more safely.

AUTOSAR Adaptive is based on a POSIX operating system, which is one of the prerequisites for using complex processors, and it offers developers the necessary building blocks to create high-performance automotive applications.

What makes this new platform ‘adaptive’ is the AUTOSAR runtime. This runtime environment essentially provides the interfaces required to integrate different applications in the ECU. These applications need not all be integrated into the ECU at the same time. As the name suggests, adaptive applications can be integrated at runtime. This implies that different software can be developed and distributed for an ECU completely independently of each other.

At the heart of the AUTOSAR Adaptive Platform there is a POSIX operating system, and each adaptive application is implemented as a process in this OS.

We will discuss the POSIX OS and the intricacies of the adaptive platform in the subsequent sections!

The Emergence of Intelligent ECUs

One of the major factors that has triggered the development of the Adaptive AUTOSAR platform is the emergence of automotive ECUs that communicate not only within the in-vehicle network but also beyond it.

Popularly called Intelligent ECUs, these control units are different from conventional ECUs in many aspects, including time constraints, safety requirements and frequency of updates.

For instance, in a V2X implementation, a vehicle ECU communicates with the outside infrastructure which is constantly being modified/upgraded. The application inside the ECU also needs to be updated in order to avoid any incompatibility issues.

Moreover, such implementations also require Machine Learning, dynamic loading of software components, use of existing libraries for data analysis and so on.

In addition to collecting data, some of this information also needs to be displayed to the users on an infotainment system or a head-up display. After all, these are the features that make an intelligent ECU.

Developing such intelligent ECUs and integrating them in a vehicle network influences the network architecture of the vehicle as well as the other conventional ECUs that communicate with them.

  • The first requisite of a vehicle network to support such ECUs is a high data throughput rate with minimum latency.
  • Secondly, the development environment should be such that it allows software functions to be added to an ECU even after the production has started.
  • Moreover, the environment must support software from third-party technology providers.

Considering all of this, it is easy to deduce that future vehicles will require a dynamic software integration environment.

So how does AUTOSAR Adaptive Platform (AUTOSAR AP) achieve this? Let’s find out.

Modern Problems Require Modern Solutions

Essentially, AUTOSAR AP provides an environment that is capable of supporting high-performance computing prowess and offers flexible configuration for dynamic integration of software.

AUTOSAR AP achieves this feat through two major technology accelerators: Ethernet and multicore processors. Complex ECU operations in applications such as ADAS and Over-the-Air (OTA) updates require higher bandwidth; something the conventional CAN protocol cannot provide. Ethernet solves this issue by offering higher bandwidth, which enables accurate transfer of large messages and point-to-point communication, among other things.

Higher bandwidth needs a match in computing prowess, which comes from multicore processors. The Classic AUTOSAR platform is not designed for such high-performance multicore processors, but the Adaptive Platform is. The magic lies in the concepts of parallel processing and heterogeneous computing.

Parallel processing is a self-explanatory term that refers to the execution of several processes in parallel.

Heterogeneous computing combines several computing resources such as multicore CPUs, GPUs (Graphics Processing Units), etc.

We will talk about these concepts in detail in our next section.

Salient Features of Adaptive AUTOSAR that Make it Indispensable in Modern Times

There are certain features that make AUTOSAR Adaptive Platform what it is. Let’s explore them!

  1. Service-Oriented Architecture: Unlike conventional ECUs, where inter-ECU communications are defined at the configuration stage, Adaptive Applications take a service-oriented architecture (SOA) approach. In the SOA paradigm, adaptive applications make their functionalities available as services. Other ECUs in the network can request a particular service over the Ethernet network. The ECU requesting the service becomes the client and the one that fulfills the request assumes the role of a server. It is the SOA that allows dynamic integration of new software at run-time. And that’s what makes AUTOSAR AP so flexible.
  2. POSIX (Portable Operating System Interface) Based Operating System: As mentioned earlier, a POSIX-based operating system is quite crucial to the adaptive platform. In order to implement the service-oriented architecture, a POSIX OS or a similar operating system is a prerequisite. It provides a standard interface for communication between the application and the operating system. From memory allocation to scheduling, a POSIX-based OS makes sure that the services are distributed to the ECU network in the intended manner.
  3. Parallel Processing: In an SOA environment, each running application becomes a separate service that needs to be executed under a strict time constraint. When there are multicore processors running in a heterogeneous computing environment, it is the capability of parallel processing that keeps the applications running.
  4. Ethernet and SOME/IP: The service-oriented architecture implemented in AUTOSAR AP is brought to life by Ethernet and SOME/IP. While Ethernet is the physical medium used for communication, SOME/IP is the network protocol that acts as the middle layer. It defines the way in which applications communicate with each other. SOME/IP also takes care of the serialization of the data sent over the Ethernet network.
  5. C++ as the coding language: The transition from Embedded C to object-oriented C++ as the coding language is an interesting and welcome change. C++ brings with it the virtues of object-oriented programming, dynamic memory management, and an extensive set of standard libraries. All these attributes make C++ a language of choice for performance-intensive algorithm development.

Explaining the Most Important Use Case of Adaptive AUTOSAR Platform: Highly Automated Driving

We have been reiterating in this blog that the Adaptive platform supports the much-needed computing prowess. Automated driving is one business use-case whose requirements can be fulfilled most effectively by an adaptive platform. Why so? Because an automated car has to:

  • Collect data from hundreds of sensors
  • Communicate with other vehicles, traffic lights and, essentially, the environment around it
  • Process these humongous data sets to make accurate driving decisions
  • Update its software frequently at run-time, as there is no time to take a trip to the service center for each update

At the ECU level, these functions translate to requirements beyond the capabilities of Classic AUTOSAR which is mostly hard-wired. The AUTOSAR Adaptive Platform fulfills these requirements by steering away from hard-wired communication and adopting a service-oriented architecture. The AUTOSAR runtime is completely independent of the application. The applications make use of the brokering service provided by the platform without worrying about the underlying communication protocol. As a result, software can be added or replaced at runtime.

Designed to support multicore processors, the adaptive platform utilizes heterogeneous computing to crunch numbers extremely fast. The rest of the job is done by Ethernet and SOME/IP, which provide the bandwidth for faster communication.

Conclusion

Adaptive AUTOSAR is not a replacement for its Classic counterpart. In fact, the two complement each other at many levels. The recent shift of the automotive industry towards automation and communication beyond the vehicle has put the Adaptive Platform on center stage.

Vehicles will evolve to necessitate a higher degree of processing power and flexibility, and it looks like the AUTOSAR Adaptive Platform is all set to disrupt the industry in the future.



Embitel Partners with ScandiPWA to Empower Its Global Magento 2 Customers with PWA

Category : Press

22nd July, 2020

Bengaluru, India

Embitel has recently entered the ScandiPWA partner network, a recognition of our success in delivering headless Progressive Web Apps on Magento 2 for our global ecommerce customer base.

ScandiPWA is characterized by lightning-fast page load times that garner priority on search engine result pages. This, in turn, leads to higher conversions with a uniform and seamless experience across all devices.

ScandiPWA also includes several other highlights such as Open-Source PWA Theme, Add to Homescreen, Native App features, Offline Mode, and much more. It eliminates the need to incorporate additional middleware or databases between the Magento storefront and backend to improve efficiency. In addition, ScandiPWA is developer-friendly and ensures that back-end operations, integrations and reporting are all in sync.

"We are excited to partner with ScandiPWA, the first open-source PWA solution for Magento. We believe this will be a key ingredient for the successful rollout of headless ecommerce solutions for our customers." – Arun Kumar, Head of Digital Commerce Business Unit, Embitel

"ScandiPWA is happy to see Embitel, a Magento partner agency, joining the ScandiPWA community to offer Embitel customers the unprecedented user experience and convenience provided by PWA." – Aleksandrs Hodakovskis, Marketing and Communications Lead, ScandiPWA

 

This strategic partnership augments the goals of both companies in delivering delightful digital commerce experiences to customers.
 



How to Use REST Assured for Testing Application Programming Interfaces?

Author: Jashmin Bhuiya

Most of the modern-day digital applications utilise Application Programming Interfaces (APIs) for interaction with other software systems or modules.

An Application Programming Interface is comparable to an invisible bridge between web applications. APIs also ensure interoperability between the front-end and back-end systems of web applications.

Automated testing of APIs has become an indispensable part of the software development life cycle of ecommerce applications. In this regard, an open-source tool, REST Assured, has gained immense popularity in recent times.

In this blog, we explore various aspects of API testing – the need for such testing, and how you can test an API using REST Assured.

Why Perform API Testing When We Have Elaborate GUI Testing Procedures?

For years, software testers have relied on good old User Interface (UI) based testing and used tools like Selenium. However, GUI (Graphical User Interface) testing is more suitable for evaluating webpage elements. An example is testing how quickly and accurately a pop-up message appears on loading a webpage.

API level testing offers a host of benefits when compared to GUI testing, as indicated below:

  1. Testing at the level of the Application Programming Interface is estimated to save a considerable amount of time as compared to a conventional UI testing routine. Here is a comparison of the time taken for API testing vs. GUI testing when executed in parallel (source: qasource.com):

    3,000 API tests performed in 50 minutes

    3,000 GUI tests performed in 30 hours

  2. API level testing is critical to validate the build strength of the web application.
  3. API level testing helps in unearthing critical bugs or underlying vulnerabilities in the application, early on, rather than discovering them right before the release (as is the case with GUI testing). Thus, API testing can help you fix bugs before they delay your software release.
  4. Another advantage of testing at the Application Programming Interface level is that it is highly cost- and resource-effective. API testing entails less code for test automation as compared to other forms of testing.

Let us not go further into the details of API testing and its benefits, and instead focus on our main topic of discussion.

In the next section, we will delve into the details of how API testing can be done. We will focus on using the REST Assured tool for performing tests at the API level.

What is REST Assured?

REST Assured is a Java library that provides users with Domain Specific Language (DSL) to create powerful tests for REST APIs in an effective manner.

There are numerous reasons to choose this tool; we are listing a few here:

  • Open source, and hence, it is free of any license cost
  • Based on Java
  • Offers support for all kinds of HTTP methods like POST, GET, PUT, DELETE, OPTIONS, PATCH and HEAD
  • Supports the BDD/Gherkin style of writing, which supports clean coding & enhances readability
  • Enables easy integration with classic testing & reporting frameworks such as TestNG, JUnit runner and Allure Report
  • One can test an application written in any language like Java, .Net, Python, Ruby, etc.
  • JSON schema validation can be performed
  • Uses GPath to traverse through response, which is an excellent fit for both XML and JSON response traversal
  • Supports both path and query parameters
  • Makes it easy to test and validate REST services in the Java domain, which is usually tough

Now let us look at how we can perform API testing with the help of REST Assured.

How to Execute API Testing Using REST Assured

  1. Setup

    Once the Maven project is created, to be able to use REST Assured in tests, we need to include the REST Assured dependency in the Project Object Model (POM) file:


    A best practice is to use static imports in class files, so that the code becomes more readable:

    import static com.jas.common.ConfigFileReaderUtils.getValueFromEnvironmentFile;
    import static io.restassured.RestAssured.given;

    Config files should be created for the tests, to enhance re-usability, as all tests will access these config files. Here we can instruct REST Assured to access values from environment files, access some values from email server, fetch the client ID/project name, etc.

  2. Plain Old Java Object usage in REST Assured

    Create Plain Old Java Objects (POJOs) that represent the API request. POJOs can be used for both serialization and de-serialization.

    We can use Lombok to create POJOs (we can utilise @Getter/@Setter annotations to avoid writing boilerplate code).

    Gson

    Gson is a good option as a Java serialization/deserialization library to convert Java Objects into JSON and back.

    REST Assured supports creating objects as request content which can easily be serialized to JSON/XML and sent as request. Hence, instead of passing JSON/XML files directly, we can build objects using POJOs.

    Builders are classes that contain static methods to create a particular object. Create a builder class where we can build the payload and serialize it using Gson. We can then pass it to the request files to generate the Response object.

  3. Response verification
    • Verifying the status code

      One of the most basic and important validations one can do with a REST Assured Response is to validate the status code, which can be done using the statusCode() method on a Response object. A quick example to get the status code and assert against an expected value is:

      int statusCode = rawRes.statusCode();
      Assert.assertEquals(statusCode, expectedStatusCode,
          String.format("Invalid status code: %d", statusCode));

    • Print response body

      There is an interface, ResponseBody, which has a method called prettyPrint() that converts a ResponseBody into a readable String representation. Consider this example where we get the response for an API call that takes some parameters:

      Response rawRes =
          CartsHelper.createCart(
              emailId,
              currencyCode,
              countryCode,
              clientId,
              "Reserve");

      To print the response body, we should use rawRes.prettyPrint(), which renders the response body in a readable format. If we use rawRes.toString(), it prints the string representation of the Response object instead.

      {
        "id": " ",
        "version": 1,
        "lastMessageSequenceNumber": 1,
        ...
      }

    • Verifying the content of the response

      We should create helper files to verify the content of the response, as this will make the tests cleaner and easier to comprehend. One way is to convert the response to a string and pass the string content to the JsonPath object constructor, as shown below:

      String responseString = rawRes.asString();
      return new JsonPath(responseString);

      Consider the response below:

      {
        "id": " ",
        "version": 1,
        "lastMessageSequenceNumber": 1,
        "createdAt": "2020-06-11T18:20:26.461Z",
        "lastModifiedAt": "2020-06-11T18:20:26.461Z",
        "lastModifiedBy": {
          "clientId": " ",
          "isPlatformClient": false
        },
        "createdBy": {
          "clientId": " ",
          "isPlatformClient": false
        },
        "amountPlanned": {
          "type": "centPrecision",
          "currencyCode": "USD",
          "centAmount": 1000,
          "fractionDigits": 2
        }
      }

      You have to extract the value of centAmount, which can be easily fetched as (assuming the "convertResToJson" method gives a JsonPath object):

      convertResToJson(rawRes).get("amountPlanned.centAmount")
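      To make the traversal concrete, here is a dependency-free sketch of what a dotted lookup such as get("amountPlanned.centAmount") does conceptually: it walks the parsed JSON one segment at a time. The DotPath class is purely illustrative, not REST Assured's actual GPath implementation.

```java
import java.util.Map;

public class DotPath {
    // Walks nested maps (parsed JSON objects) one dotted segment at a time,
    // mimicking a JsonPath lookup like get("amountPlanned.centAmount").
    @SuppressWarnings("unchecked")
    public static Object get(Map<String, Object> json, String path) {
        Object current = json;
        for (String key : path.split("\\.")) {
            current = ((Map<String, Object>) current).get(key);
        }
        return current;
    }

    public static void main(String[] args) {
        Map<String, Object> response = Map.of(
            "version", 1,
            "amountPlanned", Map.of("currencyCode", "USD", "centAmount", 1000)
        );
        System.out.println(DotPath.get(response, "amountPlanned.centAmount")); // 1000
    }
}
```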

  4. Set header parameters (single header/multi header) in request

    Sometimes web services need headers to be passed as input to the call, without which the call will not succeed. A single header can be passed as shown below:

    Response rawRes =
        given()
            .baseUri(host)
            .header("XYZ", clientId)
            .auth()
            ...
            .extract()
            .response();

    Multiple headers (which get merged by default) or headers containing multiple values can also be passed:

    given().header("header1", "value1", "value2")...

    given().header("abc", "1").header("xyz", "2")...

  5. Set query parameters

    The query parameter in a URL comes after the symbol “?”, as shown below:

    https://www.google.com/search?q=spice&rlz=1C1GCEU_enIN823IN823&oq=spice&aqs=chrome..69i57j46l2j0l3j46.1191j1j9&sourceid=chrome&ie=UTF-8

    A query parameter, which is a key-value pair, helps in retrieving the accurate data based on the inputs passed. If multiple parameters are passed, they appear in the URL separated by the "&" sign. A request with query parameters may look like this:

    Response rawRes =
        given()
            .baseUri(host)
            .header("XYZ", clientId)
            .queryParam("country", country)
            .queryParam("currency", currency)
            .auth()
            ...
            .extract()
            .response();

    The corresponding endpoint will appear as:

    {host}/cart?country={country}&currency={currency}

    We can also set multiple values in query parameters using an array list.
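    To illustrate how the queryParam() calls end up in the final URL, here is a small stdlib-only sketch. QueryString is a hypothetical helper, not part of REST Assured, and real code would also URL-encode the values.

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.stream.Collectors;

public class QueryString {
    // Joins key=value pairs with '&' after the '?', the way query
    // parameters appear in the final endpoint URL.
    public static String build(String endpoint, Map<String, String> params) {
        if (params.isEmpty()) {
            return endpoint;
        }
        String qs = params.entrySet().stream()
                .map(e -> e.getKey() + "=" + e.getValue())
                .collect(Collectors.joining("&"));
        return endpoint + "?" + qs;
    }

    public static void main(String[] args) {
        Map<String, String> params = new LinkedHashMap<>();
        params.put("country", "US");
        params.put("currency", "USD");
        System.out.println(build("{host}/cart", params));
        // prints {host}/cart?country=US&currency=USD
    }
}
```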

  6. Set path parameters

    A path parameter is a part of the URL itself that is used in the call to the web service. One of the ways to use a path parameter is through a variable, as in the following example; paymentId can contain any dynamic value that is passed in the POST HTTP call:

    Response rawRes =
        given()
            .baseUri(host)
            .header("XYZ", clientId)
            .auth()
            .contentType(ContentType.JSON)
            .body(requestBody)
            .and()
            .when()
            .post("https://myblog.io/payments/{paymentId}", paymentId)
            ...
            .extract()
            .response();
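    The substitution that a call like post("https://myblog.io/payments/{paymentId}", paymentId) performs can be sketched with the stdlib alone. PathTemplate is a hypothetical helper shown only to clarify the mechanism:

```java
public class PathTemplate {
    // Replaces a named {placeholder} in the path template with its runtime
    // value, mimicking how a path parameter is filled in the request URL.
    public static String expand(String template, String name, String value) {
        return template.replace("{" + name + "}", value);
    }

    public static void main(String[] args) {
        String url = expand("https://myblog.io/payments/{paymentId}", "paymentId", "pay-123");
        System.out.println(url); // https://myblog.io/payments/pay-123
    }
}
```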

  7. Set content type of request

    The data that is sent to the server in a POST request, or the data that is received from the server in response to a GET request, usually has a content type. The type of the body, be it XML, JSON, TEXT or some other format, is defined by the Content-Type header.

    If a POST request contains JSON data then the Content-Type header will have a value of application/json or can be specified as enum ContentType.JSON. Similarly, for a POST request containing XML, the Content-Type header value will be application/xml or can be specified as enum ContentType.XML.
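    The enum-to-header mapping described above can be illustrated with a minimal stand-in. The enum here is a simplified sketch; REST Assured's real ContentType enum also handles several alias values per type.

```java
public class ContentTypeDemo {
    // Simplified stand-in for a ContentType enum: each constant maps to
    // the Content-Type header value sent on the wire.
    enum ContentType {
        JSON("application/json"),
        XML("application/xml"),
        TEXT("text/plain");

        final String headerValue;
        ContentType(String headerValue) { this.headerValue = headerValue; }
    }

    public static void main(String[] args) {
        System.out.println("Content-Type: " + ContentType.JSON.headerValue);
        System.out.println("Content-Type: " + ContentType.XML.headerValue);
    }
}
```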

Conclusion

REST Assured eases the testing of REST APIs and integrates seamlessly with JUnit and TestNG. In this blog, we have just provided an overview of how it can be used for testing. We shall explore the REST Assured tool in detail in our upcoming blogs.

Author Bio:

Jashmine is our QA lead with an impressive record in Software Quality Engineering. For over a decade, Jashmine has been at the helm of various quality control projects for leading software organizations. She graduated in Information Technology from MVJ College of Engineering. When not coding, Jashmine loves to nurture her passion for sports and cooking.



[Vlog] How to Perform HIL Testing for Automotive Software Development

Category : Embedded Blog

Hardware-in-loop (HIL) testing is performed at the validation testing stage. It occurs in the last stages of the V-Cycle, after the integration testing of the application has been performed.

At this juncture, the application is validated against the requirements based on which it has been developed. Ideally, the application to be tested is put inside an automotive ECU called the Device Under Test (DUT). During validation, the DUT must be tested in an environment where it communicates with the signals of other ECUs in the vehicle.

However, testing an ECU inside a vehicle is not a viable option as it escalates the cost enormously. Moreover, the entire target vehicle system may not be available at the time of testing this application.

HIL testing is the answer to these challenges!

Major Highlights of the HIL Testing Vlog

  • Understanding the need for HIL Testing
  • What is Hardware-in-loop testing?
  • How is HIL Testing performed?
  • Understanding a HIL Test System
  • Widely used HIL Test systems in automotive domain

Our series of vlogs is created to drive home complicated concepts in the simplest and most interesting way possible. These vlogs are helpful not just for automotive engineers but also for business managers who wish to learn about automotive concepts to make better decisions in their line of work.

Happy viewing!



Integration of DoIP over Linux: An Intro to the Process and Mitigation of Challenges

Category : Embedded Blog

Diagnostics over Internet Protocol, commonly referred to as DoIP, has opened avenues for remote vehicle diagnostics, ECU re-programming and more.

On the ground, it translates into safer vehicles on road, less hassle for customers who have to drive down to the service center for every issue, and better customer experience provided by the OEMs.

Traditionally, DoIP is integrated into the Vehicle ECU (Server) and external tester device (Client). Both server and client are based on Embedded systems where DoIP is integrated into the Transport Layer.

In the recent past, technology vendors have been exploring the idea of integrating DoIP on Linux platform instead of Embedded systems. This is obviously not possible inside the vehicle system as it is largely dominated by embedded systems. However, the external testing device where DoIP is integrated as the client, can be based on a Linux platform.

Integration of DoIP directly to a Linux platform where other applications also reside, can make for an interesting use-case. But there are several challenges too!

So, what are the challenges involved? Despite the challenges, is it still worth migrating to Linux platform? How do we mitigate these challenges? We will try to answer these questions in our blog.

But first let’s understand the basic architecture of the Linux platform.

Understanding the Linux Architecture for DoIP Integration

In a Linux environment, there are two realms where the different modules reside: User Space and Kernel Space. If you compare this with an embedded system, there are some visible differences.

In contrast to the Linux environment, which is divided into two layers, an embedded system will have at least three layers: the Application Layer, the Base Software Module and the Low-Level Drivers. In addition to these modules, there is a scheduler/RTOS that manages the scheduling of tasks performed by the applications.

Now let’s understand the User Space and Kernel Space of a Linux Platform:

  1. User Space: This is the set of locations where the user processes run. The User Space will have the applications, the DoIP and other automotive stacks and the sockets required for stacks to communicate with the physical medium. As a matter of fact, the applications residing in the User Space have limited access to the memory. It is the Kernel that manages the applications running in the User Space.
  2. Kernel Space: Kernel is the core of the Linux platform. It has access to all of the memory. In addition to providing the APIs to the applications in the User Space, it has a very important role to play in the context of scheduling the applications. It is the Kernel that decides which application will be given priority for execution and the ones that will be queued.

As already mentioned, DoIP and other stacks are part of the User Space. Why are the stacks kept in the User Space and not in the Kernel Space, you may ask? Theoretically, keeping them in the Kernel Space is feasible. However, one of the many reasons it should be avoided is that any run-time error there can destabilize the kernel and result in a system crash.

The diagrams below show a clear distinction between the architecture of an embedded system and that of a Linux OS.

DoIP integration as per the OSI architecture (Embedded System)
DoIP integration in Linux Environment

While we are talking about the integration of DoIP with Linux, an obvious question comes to mind: why integrate DoIP with Linux when we already have a time-tested architecture based on embedded systems/RTOS? Let’s try to answer that!

Why Integrate DoIP with Linux?

In a classic off-board diagnostics scenario, a Tester device connects to the vehicle ECU and retrieves the Diagnostic Trouble Codes (DTC). Based on the DTCs, the service center professionals understand the issue and rectify it. When DoIP is involved, this diagnostic can also be remotely performed over Internet Protocols.

The tester device is limited to its role of retrieving the DTCs from the vehicle ECU. If you wish to perform other tasks related to the vehicle ECU, you would always require a gateway device that can connect to the ECU peripherals.

Having a Linux-based tester device with DoIP integrated into it allows you to keep different applications within the tester device. Such a setup enhances the role of the tester device: it can re-program ECUs remotely, retrieve data other than the diagnostic details, and open avenues for many such use-cases. Obviously, the security aspect also has to be taken care of, but that will be the subject of another blog later on.

Embedded system and Linux platform are completely different in the way the applications and the underlying hardware interact. The scheduling part that takes care of how execution time is allocated to each application, is also quite distinct.

In the next section, we will look at the scheduling challenges that come to the fore when integrating DoIP over Linux.

Scheduling Challenges Involved in DoIP Integration with Linux Platform

When it comes to scheduling, the Kernel Space has complete control. As the kernel scheduler is not a real-time scheduler, we cannot expect it to fulfill the timing constraints of a DoIP stack, which usually runs on 1-millisecond ticks. Within this 1-millisecond stipulation, every task has to be called, failing which the entire operation would be stalled.

In a Linux environment, where the Kernel scheduler controls the scheduling of every application, the timing constraints of DoIP stack cannot be fulfilled. However, there are certain modifications that we can bring about in both the Kernel and the User Space to make them fulfill the timing constraints.

Option 1- Converting the Linux Kernel into a Non-Preemptive Real Time Scheduler:

A non-preemptive real-time scheduler is one that does not interrupt a process till it is finished or the stipulated time has expired. Once the CPU allocates a resource to a process, the process can hold the resource till it terminates.

An RT patch is added to the Kernel Space to enable a non-preemptive real-time scheduling mechanism. Although this looks simple, it is not the preferred choice among automotive engineers, as it has some downsides. A non-preemptive scheduler is rigid, and automotive applications may require a certain amount of flexibility at times.

Option 2- Using a High Resolution Timer and Scheduler:

This is a much cleaner process as we do not modify the Kernel Scheduler. Instead, we get our own Scheduler in the User Space and control it with a High Resolution Timer (HR Timer), which will be a part of the Kernel Space.

The HR timer is tuned for 1-millisecond ticks, which are transmitted to the scheduler (in the User Space) with the help of SysFS (a pseudo file system) and SIGRT (a real-time signal).

A scheduler specially fine-tuned for the Linux platform is required for this process; any scheduler can be used with minimal modification. As seen in the diagram, SIGRT has a dedicated connection with the scheduler through the IOCTL protocol. It does not affect any other module of the User Space and is only meant to meet the timing constraints.

Additional Activities Involved in Integration of DoIP on Linux

DoIP uses TCP and UDP over IP for communication; hence, these protocols are integrated within the Linux Kernel Space, where they interface with the network physical layer to send the data packets.

Additionally, we need to integrate a socket communication module that interfaces between the DoIP stack and the TCP/IP and UDP stacks in the Kernel Space. Its role is to process the data provided by the DoIP stack before transmitting it over TCP/IP.
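Concretely, the socket module wraps each DoIP message in the ISO 13400-2 generic header (protocol version 0x02, its inverse 0xFD, a 16-bit payload type and a 32-bit payload length, all big-endian) before writing it to a TCP socket, conventionally on DoIP port 13400. A minimal framing sketch, with illustrative helper names of our own:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

#define DOIP_HDR_LEN        8u
#define DOIP_PROTO_VERSION  0x02u  /* ISO 13400-2 protocol version */

/* Write the 8-byte DoIP generic header into buf.
 * Layout: version, inverse version, payload type (16-bit BE),
 * payload length (32-bit BE). Returns the number of bytes written. */
size_t doip_build_header(uint8_t *buf, uint16_t payload_type,
                         uint32_t payload_len)
{
    buf[0] = DOIP_PROTO_VERSION;
    buf[1] = (uint8_t)~DOIP_PROTO_VERSION;   /* 0xFD */
    buf[2] = (uint8_t)(payload_type >> 8);
    buf[3] = (uint8_t)(payload_type & 0xFFu);
    buf[4] = (uint8_t)(payload_len >> 24);
    buf[5] = (uint8_t)(payload_len >> 16);
    buf[6] = (uint8_t)(payload_len >> 8);
    buf[7] = (uint8_t)(payload_len & 0xFFu);
    return DOIP_HDR_LEN;
}

/* Frame a full DoIP message: header followed by payload.
 * Returns the total number of bytes placed in out. */
size_t doip_frame(uint8_t *out, uint16_t payload_type,
                  const uint8_t *payload, uint32_t payload_len)
{
    size_t n = doip_build_header(out, payload_type, payload_len);
    memcpy(out + n, payload, payload_len);
    return n + payload_len;
}
```

The framed buffer is then handed to an ordinary `send()` on a connected TCP socket; the Kernel's TCP/IP stack takes care of the rest.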

Conclusion

Diagnostics over IP has already solved several problems related to remote diagnosis of vehicle ECUs. When combined with a Linux platform, its range of applications will only widen. Hopefully, in the future, we will witness a whole lot of new use-cases for this setup. Watch this space!



IoT Mobile App Development to Transform Business Processes

Category : Embedded Blog

Over the years, smartphones have transcended the status of mere gadgets and have established a special place in our day-to-day activities. Innovations in Internet of Things (IoT) have empowered mobile applications so that they can be configured to link to an IoT ecosystem and monitor connected devices.

An example of this is a recent project successfully implemented by our IoT mobility team.

The project involved the development of a Bluetooth-based mobile app to control the seating in a vehicle, ensuring supreme comfort for the driver. When a user who has configured a profile on the mobile app enters the vehicle, the seating is automatically adjusted to their preferred settings for seat position, heating, etc.

Such mobile apps specifically crafted for automotive applications have been disrupting the mobility industry of late.

Let us take a deep dive into the world of IoT mobile apps and the infinite possibilities they present.

Smartphone as Primary User Interface in IoT Architecture

In an IoT ecosystem, a smartphone fulfills the responsibility of an intuitive user interface that connects to the network and controls IoT devices. It facilitates a variety of use cases for which the IoT infrastructure was conceived. Here are a few of its applications:

  • Monitoring connected consumer applications in a house
  • Regulating HVAC system in automobiles
  • Management of industrial devices in a manufacturing plant
  • Monitoring patient vitals in the healthcare industry

IoT mobile/web/desktop applications can serve several purposes in a connected ecosystem:

  • Location tracking (Navigation, Telematics, etc. in Automotive industry)
  • Monitoring parameters (e.g. heart rate and temperature in the healthcare domain; battery/device monitoring in Industry 4.0)
  • Motion sensing (Screen rotation, mobile responsiveness)

An active cellular network or Wi-Fi connection enables smartphones with in-built sensors to connect to the IoT network, while the device retains its interactive properties.

There has been a significant improvement in UI technology that enables simple touch panels for the execution of complex tasks.

How IoT Mobile App Development Differs from Regular Mobile App Development

When a group of developers set out to design a mobile ecommerce application, there is a standard set of rules to adhere to.

For instance, they would have to abide by a standard flow for the development. This could entail the development of Home Page/Product Listing Page/Product Detail Page first and then the checkout and cart pages. There may be additional considerations with respect to the business for which the mobile app is being developed.

On the connectivity side, API interactions need to be configured for the users to connect to servers and fetch data/send requests.

On the other hand, there may not be a standard flow for the development of an IoT mobile app. This is specifically based on the use case for which it is being developed. Such an app also connects to the cloud through APIs or web socket interactions.

Another major difference is the fact that IoT mobile apps are real-time applications, wherein the user sees immediate results for the actions they perform.

Additionally, while developing IoT apps, the interactive points may vary between apps. This is specific to the use case as well.

For example, when using an IoT mobile app to unlock a car, the request is first transmitted to the server from where it is forwarded to the IVI unit that responds with the result. This data is then transmitted to the mobile application through the server. This may not be the path of data flow for another connected mobile application in the network.

Effectively, IoT mobile apps and regular mobile apps differ based on the features they offer. As far as the development activities are concerned, there are no significant variations.

Benefits Offered by IoT Mobile Applications

Integrating a mobile application or a similar user interface in an IoT infrastructure offers several benefits. Some of these are explained below:

  • Facilitates Data Collection and Analysis – In the healthcare domain, when businesses implement an IoT ecosystem, they can track the vitals of patients through continuous data collection. The information collected over a period of time can then be analysed to effectively predict the onset of diseases and take preventive measures. Hence, IoT is capable of delivering improved medical care to patients.


    Likewise, in an industrial setup, the data collected by sensors connected to industrial equipment can collate information on performance. Personnel monitoring the data will be alerted of the probability of equipment failure in the future. Hence, this aids in predictive maintenance of industrial equipment.

    In the retail sector, data collected by IoT user interfaces can be utilised to provide personalized services to customers and design new products.

  • Enables Remote Working Infrastructure – An Internet of Things enterprise infrastructure enables employees to easily accomplish remote working. When IoT is integrated with wireless technology, employees are able to connect to servers from remote locations through their IoT mobile/web/desktop apps. This improves the overall efficiency of the enterprise.
  • Reduces Human Effort – IoT-enabled apps can automate the monitoring and management of connected devices in an industrial setup. An example of this is a recent project in which our IoT engineers were able to successfully develop an IoT platform for a solar tracking system. This improved the efficiency of the plant’s open field deployment of solar panels.


    The SCADA solution developed in the next phase of the project reduced the overall manpower investment and streamlined monitoring activities. This also simplified asset handling. The solution proved to be extremely cost-effective and was deployed across various branches of the solar plant.

Examples Highlighting How IoT Apps Work

In order to understand the value IoT mobile apps bring to the table, we need to explore how these work in combination with other connected devices in the ecosystem.

Let us consider the example of our Driver Behaviour App, an intuitive IoT-enabled mobile application designed and developed within our IoT Innovation Lab. The app uses the phone's in-built sensors to detect road conditions and predict humps and potholes in the path of a vehicle. When used by a driver, it can warn them of adverse road conditions and also provide interesting insights on driver behaviour at the end of the journey. The app can also be deployed by various agencies to analyse driving style, whether gentle or rash.

Another application is the battery monitoring system found in industries. IoT sensors can detect the rate of drainage of battery charge and report it via IoT apps. In the absence of such a system, battery failures would have a direct impact on the longevity and performance of the industrial UPS network. An IoT-enabled industrial automation solution for battery monitoring and management would provide the benefits of predictive maintenance. And the IoT user interface/app would bear a significant role in notifying the industry managers of possible issues.

Points to Consider During the Design Phase of an IoT Mobile App

While developing an IoT app, it is imperative that designers and business innovators evaluate the various aspects of app usage and the IoT ecosystem.

  • Assessing the requirements and suitability of the app is the first point to be considered.
  • It is also necessary to review the format of data that is transmitted throughout the network.
  • The data from the app can be used to gain valuable insights; so, data analytics and end-user experience must be given a serious thought.
  • Another point to consider is the hardware compatibility of the mobile device with the app.
  • As technology evolves in the future, there may be a need to extend the capabilities of the IoT infrastructure. In order to meet such requirements, the scalability aspect of the mobile app should not be ignored in the initial design phase.

Challenges Faced While Developing Mobile Apps for an IoT Ecosystem

A serious threat faced by an IoT solution is security breach via hacking or malware infection. As more devices are integrated with the network, the vulnerability of the ecosystem to security threats increases. Hence, it is crucial for all the IoT devices and apps in the network to be highly secure by design.

So, how can IoT solutions be secured?

Data violations can be mitigated through the following measures:

MQTT implementation is usually adopted in IoT solutions to ensure security while connecting to the cloud infrastructure and for enabling asynchronous communication.

App developers should also ensure that frequent security updates are in place to protect the system. Since most IoT mobile apps are distributed only to customers, a Firmware-Over-The-Air (FOTA) update feature can be integrated to keep these apps up to date.

Tools and Technology for IoT Mobile App Development

  • UI/UX Design and Development – Tools used are AVOCODE, Illustrator, POP, etc. Technologies such as Angular JS and HTML5 are also used for UI development.
  • Authentication – OAUTH tool is used for authentication purposes.
  • Database Management – SQLite (for Android) and Realm platform (for iOS or Cross-Platform implementation)

Expertise in XCODE, Android Studio, Java, Swift, JNI, Objective-C, jQuery, HTML5, etc. is also required for IoT mobile app development.

How to Choose the Right Partner for IoT Mobile App Development

Collaborating with the right partner for IoT app development makes a world of difference. An engineering partner with immense experience across industries in the domain would be the right choice.

It is important to bear in mind that emphasis should be laid on relevance and quality of work, rather than price points. This will ensure that the IoT app solution weathers all challenges and increases the efficiency of the entire IoT ecosystem.



Base Software Module (BSW) and Application Development for a Telematics Project

 

About Customer

Our customer is a leading manufacturer of connected car devices and telematics solutions. We have entered into technology partnerships for multiple projects in the past as well.

What makes us the perfect partner is the shared passion and belief in the power of innovation. With our expertise in the automotive domain and the customer's crystal-clear vision of the product, we had a winner on our hands.

Business Challenge

An OBD II adaptor platform is what the customer intended to build. This telematics dongle reads the diagnostics data from the vehicle and sends it to the cloud. The device is intended to be configurable for both OBD and SAE J1939, as per the use-case.

As is clear from the introduction above, the device has two modules: one that interfaces with the vehicle and the other that interfaces with the cloud.

The design and development team of our customer is adept at handling the cloud interface part but is limited in its expertise in the vehicle interface module.

Integration of the OBD II, CAN bus and SAE J1939 stacks requires extensive expertise in the automotive embedded domain, and our customer approached us for this very aspect of the project. Moreover, developing the automotive protocol software in-house would have required a considerable amount of time.

In a nutshell, our technology assistance was required to mitigate the following challenges:

  • Configuration and integration of automotive stacks like OBD II, J1939, CAN, etc.
  • Application development for the vehicle interface part of the Telematics solution
  • Low-level device drivers for the telematics device to interface with the vehicle
BSW Interface

Embitel Solution

We had two teams working on the project: the Automotive Stacks team, responsible for configuration and integration of the protocol software and base software modules, and the Development team, which built the required applications.

  1. ECU Communication and Vehicle Diagnostics Software Integration

    As the Telematics Device was expected to monitor vehicle diagnostics data, the following protocol stacks were integrated:

    SAE J1939: Used mostly for diagnostics and ECU communication in commercial vehicles.

    OBD II: On-board diagnostics of the vehicle; we configured OBD II for both CAN and K-Line physical media. As per the requirement, we also configured the required PIDs to fetch the vehicle parameters, such as RPM, engine speed and more. The customer provided the CAN Matrix/DBC files, and we generated the configuration files from them.

  2. Development of the Base Software Module
    • Configuration of CAN IF and CAN NM
    • Integration of the CAN stack as per the CAN DBC files provided by the customer
    • Configuration and integration of ISO 9141 to access the K-Line
    • Configuration of low-level drivers as per the schematics
    • UART configuration
  3. Application Layer Support
    • Development of an external battery monitoring application
    • Internal battery management over I2C
    • Ignition detection algorithm based on the customer's design
    • Diagnostics reports over UART, to verify that the MCU is working fine
    • Wake-up and sleep handler as per the customer's design
    • Secure communication (encryption and decryption algorithms)
  4. Testing and Validation Support
    • Integration and functional testing reports for the automotive stacks and the applications
    • MISRA C compliance reports
    • High-level design document
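As an illustration of the PID configuration mentioned above: a mode 0x01 request for engine RPM (PID 0x0C) is a single CAN frame sent to the functional request identifier 0x7DF, and the two data bytes in the response decode as (256·A + B)/4. The sketch below follows the SAE J1979 frame layout; the helper names and the 0x55 padding value are our own illustrative choices, not the customer's implementation:

```c
#include <stdint.h>

#define OBD_FUNC_REQ_ID   0x7DFu  /* functional request CAN identifier */
#define OBD_MODE_CURRENT  0x01u   /* mode 01: current powertrain data */
#define OBD_PID_RPM       0x0Cu

/* Fill an 8-byte CAN payload with a mode-01 PID request.
 * Byte 0 = number of meaningful bytes that follow, then mode, then PID. */
void obd_build_pid_request(uint8_t data[8], uint8_t pid)
{
    data[0] = 0x02;              /* two meaningful bytes follow */
    data[1] = OBD_MODE_CURRENT;
    data[2] = pid;
    for (int i = 3; i < 8; i++)
        data[i] = 0x55;          /* illustrative padding value */
}

/* Decode engine RPM from the two data bytes (A, B) of a PID 0x0C
 * response: RPM = (256*A + B) / 4. */
uint16_t obd_decode_rpm(uint8_t a, uint8_t b)
{
    return (uint16_t)(((256u * a) + b) / 4u);
}
```

The same request/decode pattern repeats for each configured PID; only the PID value and the scaling formula change.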

 

Embitel Impact

The customer could save approximately 6 months of man-hours, courtesy of our library of ready-to-deploy automotive protocol software.

Separate teams working on the project reduced the time-to-market and the cost further. As our automotive stacks are offered on a one-time license fee model, the customer can use them in multiple series production of automotive components.

The stack software is reconfigurable too, so using it for different use-cases will not be a problem. The customer can choose to configure it themselves or get in touch with us.
 

Tools and Technologies

  • NXP Microcontroller: an MCU from the NXP family was used for the application
  • PE Micro Debugger: used for development and debugging purposes
  • S32 Design Studio IDE: the integrated development environment used to develop device drivers for the microcontroller
  • FreeRTOS: a free real-time operating system used widely in automotive application development
  • PCAN: used for functional testing of the automotive stacks