
Monthly Archives: May 2015


Standardisation as a key to growth for Embedded systems

Category : Embedded Blog

Source: automotiveIT international, Author: Mr. Arjen Bongard

The market for embedded systems in the car will grow, according to a German survey of decision makers in the auto and aerospace industries. Automakers and suppliers see standardization as key to that growth. They also believe that hardware and software can in future come from different companies, and they see relatively good potential for service providers to play an important role as embedded systems grow.

The survey was carried out by the forsa agency for the market researchers at the F.A.Z. Institute and automotive electronics service provider ESG. The researchers polled 100 decision makers in the German auto and aerospace industries about their views on the growth of embedded systems.

“The view that software and hardware can be separated was mildly surprising,” said Jacqueline Preusser, the research analyst who coordinated the project at the F.A.Z. Institute. “Those two sides are still connected at the moment.” According to the survey, 86 percent of automotive executives polled felt that infotainment hardware and software could come from different providers in future. For visualization components, 77 percent felt this way, while the figure stood at 74 percent for driver assistance systems. Auto executives were less convinced of this trend for embedded systems supporting powertrain and engine electronics.

Growth seen

Most executives expected the market for embedded systems to grow moderately or strongly. Embedded systems manage a specific function in the car through a combination of hardware and software that includes mechanics, sensors and various electronic components.

Hans Georg Frischkorn, the head of ESG’s automotive operations, said the poll underscored how important embedded systems are for future innovations in the car. In a press briefing at ESG headquarters, he particularly cited the growth in networked functions. “It will be very exciting to see the smart car, the smart grid and the smart home all networked together,” he said.

Almost all executives interviewed said networked functions would grow most in importance when it came to R&D trends for the next five years. Safety and security and miniaturization were seen as the second and third most important, respectively. Frischkorn said the connected car will mean increasing complexity in electronic systems. “Standardization is necessary to manage that complexity,” he said. In the auto industry, 36 percent of executives polled considered standardization the biggest challenge for the next five years. User friendliness came second with 14 percent, and compatibility of hardware and software was third with 12 percent.

System integration

With embedded systems increasing in cars, 95 of the 100 executives polled agreed that system integration is becoming more important. In the auto industry, most – 56 percent – agreed that automakers will continue to manage this challenge. But 42 percent also saw a growing role for engineering service providers. ESG’s Frischkorn said that, given the multitude of tasks a carmaker has to manage, engineering service providers will play a role in system integration. Said the ESG executive: “It’s not an either-or question.”



Embitel Partnership with CMD to offer Engine Control Unit Solutions

Category : Embedded Blog

Embitel has partnered with CMD (COSTRUZIONI MOTORI DIESEL S.R.L.) to provide services in the area of electronic control of engines. CMD’s strong system know-how, control strategy expertise and calibration expertise complement Embitel’s expertise in the area of automotive embedded systems. Together we offer customized solutions for electronic control of diesel and gasoline engines.

Statement: Embitel and CMD are offering solutions in the area of Electronic Control Units. The ECU for diesel engines is already in series production, and is calibrated and used in various applications in the marine, automotive and avionics domains. The ECU supports common rail direct injection systems and can be calibrated for engines with 1 to 6 cylinders. The gasoline ECU is being jointly developed by CMD and Embitel. The advantages we offer our clients include lower development and customization costs, faster time to market, and greater participation and know-how sharing with customers.

About CMD: CMD develops internal combustion engines, both diesel and gasoline, and the related control systems (e.g. ECUs). CMD Mechatronics was born as a spin-off of the R&D group of CMD Electronics SpA (www.cmdengine.com), a company in the field of advanced design of internal combustion engines and a hi-tech manufacturer of FNM marine engines (www.fnm-marine.com).

CMD Mechatronics designs customized electronic systems for scientific and industrial fields and implements them as prototypes or for series production. Since its inception in 2000, CMD Mechatronics has focused primarily on the design and development of electronics and software specifically for diesel engines.

Today CMD can cover every phase of the engineering design of a diesel or gasoline engine control system, offering a wide range of services from PCB design to the re-engineering and migration of complete systems according to customer specifications.

About Embitel: Embitel (www.embitel.com) was established in 2006 and currently employs about 130 people in Embedded Systems.

Embitel is a young entrepreneurial company, ISO 9001:2008 certified by TUV, that offers specialized services in embedded automotive and industrial automation. Embitel is headquartered in Bangalore, with operations in London and Stuttgart.



“Do’s and Don’ts” when considering an FPGA to structured ASIC design methodology

Category : Embedded Blog

More and more engineers are considering structured ASICs when they are designing advanced systems, because these components offer low unit cost, low power, and high performance along with fast turn-around.

In a structured ASIC, the functional resources – such as logic, memory, I/O buffers – are embedded in a pre-engineered and pre-verified base layer. The device is then customized with the top few metal layers, requiring far less engineering effort to create a low cost ASIC (Fig 1). This reduces not only the time and development costs, but also the risk of design errors, since the ASIC vendor only needs to generate metallization layers. With 90-nm process technologies, structured ASICs offer the density and performance required to meet a wide range of advanced applications.

Fig 1: Standard cell ASIC (top) versus structured ASIC (bottom).

However, there is still risk involved when it comes to developing a structured ASIC. Errors in the logic design can still exist, so one way to avoid time-consuming and costly silicon re-spins is to use FPGA prototyping and to then convert the design from an FPGA to some form of ASIC. FPGA prototyping is more successful for structured ASICs compared to standard cell ASICs when the structured ASIC mirrors the resources available on the FPGA. The closer the match between the I/O and memory of the FPGA and the structured ASIC, the lower the risk when the design is converted to an ASIC.

Some “Do’s and Don’ts” to take into account when considering a structured ASIC design methodology are as follows:

Do

Establish a design methodology you can use for a wide range of applications. Make sure your design teams are trained on the tools and the FPGA and ASIC architectures to create the best possible design.

Use a software development environment that reduces the risk of design problems, such as functional logic errors. Logic verification and simulation, along with prototyping the design in an FPGA, is a proven method to ensure the design will work in the system.

Prototype your design with an FPGA using the FPGA features that give you the best performance and functionality. Also, generate the prototype with the IP you need for the application, which may require a soft processor, hard multipliers, and memory. In addition, use high-speed LVDS or other I/O to ensure you are building in the signal integrity needed to have a reliable system.

Test your design in-system as much as possible to verify the design works according to requirements. Make sure the system is tested with the FPGA prototype across the entire voltage and temperature range that the system will experience. That will reduce the risk that when the design is converted to an ASIC it will only operate over a limited temperature range and at nominal voltage.

Design the system to use either an FPGA or the structured ASIC. This gives you several advantages. First, you can go into production with the FPGA and then change to the ASIC once it is available, which gets you to market faster and helps establish a market position. Secondly, if there is an unexpected increase in demand and ASIC supplies are insufficient, some systems can be manufactured with an FPGA, thus keeping the production lines running. Finally, using the FPGA at the system’s end-of-life will save you from having to order more ASIC devices than are needed to fulfill manufacturing requirements.

An example is the Altera HardCopy II structured ASIC. Generate your prototype with a Stratix II FPGA, then go into production with the Stratix II FPGA while the Altera HardCopy Design Center migrates the design to a pin-compatible HardCopy II device. Once the HardCopy II device is approved and production units are available, the system can be produced using the lower cost HardCopy II device. The combination of Altera Stratix II FPGAs and HardCopy II structured ASICs also gives you unique manufacturing flexibility, since you can use either in production. For example, you can use the HardCopy device for low cost, but if you have a sudden increase in demand and need more devices immediately, you can use off-the-shelf Stratix II FPGAs as a substitute. You can also go back to using Stratix II FPGAs exclusively if you need to update the design to fix an error or make a change for a specific customer.

Don’t

Use an FPGA to prototype only logic and low-level I/O (such as LVTTL or LVCMOS). That will limit your design to low-end gate arrays that won’t provide the performance edge needed. Too often, only the logic is prototyped in the FPGA, leading to a misconception of how well the design really works in the system. Many designs also require high-speed memory interfaces, and the best design practice is prototyping to ensure the interface performs as required, particularly across voltage and temperature variations.

Choose an ASIC methodology based only on unit cost. That may save some Bill-of-Material costs but make the system uncompetitive. Include factors such as realistic development time and costs along with total engineering effort. In the long run, an FPGA along with a structured ASIC can provide lower development costs and faster development turn-around time.

Consider only standard cell ASIC technology for ASSP designs. Sometimes structured ASICs or even FPGAs are right for the annual volumes and the need for fast time to market. Nor should you choose a structured ASIC before you look at the market needs for the design: trying to shoehorn a design into a structured ASIC that is too small or feature-limited results in a system that is DOA in the market.

Consider only single-chip solutions. Sometimes the best way to architect a system can be using two devices rather than one large ASIC. Partitioning the design can reduce overall development time and simplify the design process. You can also reduce the risk of having to re-spin a large ASIC design.

Author: Rob Schreck, Altera. Source: www.design-reuse.com



Embedded Application and Product Engineering using ARM Processors

Category : Embedded Blog

Reduced Instruction Set Computing (RISC) is a processor design philosophy associated with high performance and high energy efficiency. One of the forerunners in the design and licensing of RISC embedded microprocessors is ARM Holdings. ARM Holdings’ catalog of processors is characterized by strong performance and high energy efficiency, a deciding factor when it comes to digital products. Delivering high performance at low cost for the current market of advanced digital applications is now a reality because of the advancements in embedded product engineering using ARM processors. Expert ARM architects work constantly in research and development to further improve the already advanced ARM architecture, which has now become a staple architecture for embedded product design and engineering. ARM Holdings is also the world’s leading semiconductor Intellectual Property (IP) supplier and is at the core of the development of digital electronic products.

According to industry experts, in excess of two million ARM-based processors are being used in the production of various kinds of machinery and equipment. The worldwide ARM developer community builds top-notch processor designs for the benefit of product design companies and designers around the globe, and there is a long list of experienced, well-regarded ARM developers available for embedded product engineering with ARM processors at highly competitive pricing. Development of advanced ARM processors, implementation of DSP algorithms, exception and interrupt handling, cache technology and memory management are some of the tasks that ARM developers manage for companies. Embitel is a member of the ARM Connected Community and Partner Network, a global network of companies aligned to provide complete solutions, from design to manufacture, for embedded application and product engineering based on the ARM architecture. Embitel’s partnership with ARM includes digital multi-channel solutions for e-commerce and embedded technology development.



FPGAs vs. ASICs

Category : Embedded Blog

Deciding between ASICs and FPGAs requires designers to answer tough questions concerning costs, tool availability and effectiveness, as well as how best to present the information to management to guarantee support throughout the design process.

The first step is to make a block diagram of what you want to integrate. Sometimes it helps to get some help from an experienced field applications engineer. Remember that time is money. Your next move is to come up with some idea of production volume. Next, make a list of design objectives in order of importance. These could include cost (including nonrecurring engineering charges), die size, time-to-market, tools, performance and intellectual property requirements. You should also take into account your own design skills, what you have time to do and what you should farm out. Remember that it must make sense financially or you are doomed from the start.

Time-to-market is often at the top of the list. Some large ASICs can take a year or more to design. A good way to shorten development time is to make prototypes using FPGAs and then switch to an ASIC. But the most common mistake that designers make when they decide to build an ASIC is that they never formally pitch their idea to management. Then, after working on it for a week, the project is shot down for time-to-market or cost reasons. Designers should never overlook the important step of making their case to their managers.

Before starting on an ASIC, ask yourself or your management team if it is wise to spend $250,000 or more on NRE charges. If the answer is yes and you get the green light, then go. If the answer is no, then you’ll need to gather more information before taking the ASIC route. Understand that most bean counters do not see any value in handing someone $250,000 for a one-time charge; they prefer to add cost to the production. Say your project has an NRE of $300,000, a volume of 5,000, and it replaces circuitry that costs $80 with an ASIC that costs $40. You do some math and determine the break-even point is three years. If you amortize the same design over five years, this could save your company $400,000 even after the NRE has been absorbed. Another option is to do a “rapid ASIC” using preformed ASIC blocks, which saves time and lowers NRE costs. It could also make sense to convert an FPGA to an ASIC directly, which lowers NRE a small amount compared with the rapid type.

Now let’s say your company will not fund an ASIC effort. That means it’s time to consider FPGAs. First, be aware that while the tools are free on the Web for the smaller FPGAs, you’ll have to pay for a license file for the ones with high gate counts. The good news is that there are no NRE charges. Modern FPGAs are packed with features that were not previously available. Today’s FPGAs usually come with phase-locked loops, low-voltage differential signaling, clock data recovery, more internal routing, high speed (most tools measure timing in picoseconds), hardware multipliers for DSP, memory, programmable I/O, IP cores and microprocessor cores. You can integrate all your digital functions into one part and really have a system on a chip.

When you look at all these features, it can be tough to argue for an ASIC. Moreover, an FPGA can be reprogrammed in a snap, while an ASIC can take $50,000 and six weeks to make the same changes. FPGA costs range from a couple of dollars to several hundred or more, depending on the features listed above.
So before you get moving, make sure to enlist some help, get the managers to support you, come up with a meaningful cost estimate, choose the right weapon — be it ASIC or FPGA — and then move into production.
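
To make the cost side of that decision concrete, here is a minimal sketch of the NRE break-even arithmetic, written in Python. The function names and the input figures are hypothetical illustrations rather than numbers taken from this article; substitute your own NRE, per-unit costs and annual volume.

```python
def break_even_years(nre, replaced_unit_cost, asic_unit_cost, annual_volume):
    """Years of production needed before the ASIC's one-time NRE is paid back."""
    per_unit_saving = replaced_unit_cost - asic_unit_cost
    if per_unit_saving <= 0:
        raise ValueError("the ASIC must be cheaper per unit than what it replaces")
    return nre / (per_unit_saving * annual_volume)


def lifetime_saving(nre, replaced_unit_cost, asic_unit_cost, annual_volume, years):
    """Net saving over the production lifetime, after the NRE has been absorbed."""
    return (replaced_unit_cost - asic_unit_cost) * annual_volume * years - nre


# Hypothetical example: $250k NRE, a $60 board replaced by a $35 ASIC, 4,000 units/year.
print(break_even_years(250_000, 60, 35, 4_000))    # -> 2.5 years to break even
print(lifetime_saving(250_000, 60, 35, 4_000, 5))  # -> 250000 saved over five years
```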

Author: Jeff Kriegbaum. Source: http://www.design-reuse.com/articles/9010/fpga-s-vs-asic-s.html



Is DDR4 a bridge too far?

Category : Embedded Blog

We’ve gone through two decades where the PC market made the rules for technology. The industry faces a question now: Can a new technology go mainstream without the PC?

By now, you’ve certainly read the news from Cadence on their DDR4 IP for TSMC 28nm. They are claiming a PHY implementation that exceeds the data rates specified for DDR4-2400, which means things are blazing fast. What’s not talked about much is how the point-to-point interconnect needed for large memory spaces is going to be handled.

Barring some earth-shattering announcement at IDF, Intel is still a long way from DDR4. The nearest thing on their roadmap is Haswell-EX, a server platform for 2014. (Writing this when IDF is just getting underway is tempting fate, kind of like washing my car and then having it immediately rain.) AMD has been conspicuously silent on the subject of DDR4 fitting into their processor roadmap.

Meanwhile, both Samsung and Micron are ramping up 30nm production of DDR4, and Samsung is publicly urging Intel to get moving. Both memory suppliers are slightly ahead of the curve, since the DDR4 spec isn’t official just yet. However, JEDEC has scheduled the promised DDR4 workshop for October 30, something they said would approximately coincide with the formal release of the specification. (In other words, it’s ready.)

We also have to factor in that LPDDR3 just hit the ground as a released specification this May, and memory chips implementing it won’t reach the pricing sweet spot for another year. Most phone manufacturers are still using LPDDR2 for that reason. (Again, iPhone 5 announcement this week, rain on my post forecasted.) Tablet types are just starting to pick up LPDDR3, amid talk the first implementations already need more bandwidth.

So, why the push for DDR4, especially in TSMC 28nm? DDR4 is obviously the answer to much higher memory bandwidth for cloud computing and the like. I’m sure there are other drivers out there, but that one was easy to find.

Interest in DDR4 has to be coming from somewhere in the ARM server camp, otherwise Cadence and TSMC wouldn’t be spending time on it. In spite of the power advances, DDR4 is nowhere near low-power enough to show up in a phone, and there’s no sign of an LPDDR4 specification yet. ARM 64-bit server implementations are just getting rolling, and Applied Micro’s X-Gene has sampled – with DDR3.

The volume driver for DDR4 – if it’s not PCs – is in question. The natural progression of speed that the PC markets have pushed for looks like it is about to run smack into the economics of affordable implementations, and that in turn could make life interesting for the memory manufacturers. (In a related side note, Elpida’s bondholders have come in saying the Micron bid is way too low.) Or, Intel and AMD could jump in and force the issue, betting on adoption farther down their PC supply chains.

DDR4, and the IP supporting it in the ARM server space, could prove to be a turning point for technology investment: an inflection point in the way things have been done and a change from the PC driving the agenda. Or, it could end up being a bridge too far, merely paving the way for another specification better suited to mobile devices.

What are your thoughts on the outlook for DDR4, LPDDR3, an ARM server market, and the overall dynamics of PCs, servers, tablets and phones versus memory technology?

Author: Don Dingee. Source: http://www.semiwiki.com



The Unknown in Your Design Can be Dangerous

Category : Embedded Blog

The SystemVerilog standard defines an X as an “unknown” value, used to represent cases where simulation cannot definitively resolve a signal to a “1”, a “0”, or a “Z”. Synthesis, on the other hand, defines an X as a “don’t care”, enabling greater flexibility and optimization. Unfortunately, Verilog RTL simulation semantics often mask the propagation of an unknown value by converting the unknown to a known, while gate-level simulations show additional Xs that will not exist in real hardware. The result is that bugs get masked in RTL simulation, and while they show up at the gate level, time-consuming iterations between simulation and synthesis are required to debug and resolve them. Resolving differences between gate and RTL simulation results is painful because synthesized logic is less familiar to the user, and Xs make correlation between the two harder. Unwarranted X-propagation thus proves costly, causes painful debug, and sometimes allows functional bugs to slip through to silicon.
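
To see why RTL X-optimism can hide a bug that gate-level simulation exposes, here is a small Python model of the two interpretations. It is a conceptual sketch only; the value encoding and function names are invented for illustration and are not the actual SystemVerilog semantics.

```python
X = "x"  # stand-in for the unknown simulation value

def rtl_if(cond, then_val, else_val):
    # Verilog RTL semantics: an X condition is not "true", so the else branch is
    # taken and the unknown silently becomes a known value (X-optimism).
    return then_val if cond == 1 else else_val

def gate_and(a, b):
    # Gate-level semantics: an unknown input keeps the output unknown unless the
    # other input forces the result (0 AND anything is 0).
    if a == 0 or b == 0:
        return 0
    if a == X or b == X:
        return X
    return 1

never_reset_flop = X                     # e.g. a register that was never reset
print(rtl_if(never_reset_flop, 1, 0))    # -> 0   : RTL simulation masks the unknown
print(gate_and(never_reset_flop, 1))     # -> 'x' : gate-level simulation propagates it
```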

Continued increases in SoC integration and the interaction of blocks in various states of power management are exacerbating the X problem. In simulation, the X value is assigned to all memory elements by default. While hardware resets can be used to initialize registers to known values, resetting every flop or latch is not practical because of routing overhead. For synchronous resets, synthesis tools typically combine the reset with data-path signals, thereby losing the distinction between X-free logic and X-prone logic. This in turn causes unwarranted X-propagation during the reset simulation phase. State-of-the-art low-power designs have additional sources of Xs, with the added complexity that they manifest dynamically rather than only during chip power-up.

Lisa Piper, from Real Intent, presented on this topic at DVCon 2012, and in her paper she described a flow that mitigates X-issues.

She describes a solution to the X-propagation problem that is part technology and part methodology. The flow brings together structural analysis, formal analysis, and simulation in a way that addresses all of these problems and can be scaled. It defines a use model for both the design engineer and the verification engineer: the solution is centered on static analysis for the design engineer and is primarily simulation-based for the verification engineer. The designer-centric flow is preventative in nature, while the verification flow is intended to identify and debug issues.

Author: Graham Bell. Source: www.semiwiki.com



Industrial Ethernet – The basics

Category : Embedded Blog

When you talk about office and home networking, typically you’re talking about Ethernet-based networks—computers, printers and other devices that contain Ethernet interfaces connected together via Ethernet hubs, switches and routers. In the industrial area the networking picture is more complex. Now, Ethernet is becoming a bigger part of that picture. This article is an introduction to the basics of Ethernet, with a bit of added detail on how it fits into the industrial networking picture.

Ethernet’s roots

Although Xerox’s Bob Metcalfe sketched the original Ethernet concept on a napkin in 1973, its inspiration came even earlier. ALOHAnet, a wireless data network, was created to connect several widely separated computer systems on Hawaiian college campuses located on different islands. The challenge was to enable several independent data radio nodes to communicate on a peer-to-peer basis without interfering with each other. ALOHAnet’s solution was a forerunner of the carrier sense multiple access with collision detection (CSMA/CD) concept. Metcalfe based his Ph.D. work on finding improvements to ALOHAnet, which led to his work on Ethernet. Ethernet, which later became the basis for the IEEE 802.3 network standard, specifies the physical and data link layers of network functionality. The physical layer specifies the types of electrical signals, signaling speeds, media and connector types and network topologies. The data link layer specifies how communications occur over the media – using the CSMA/CD technique mentioned above – as well as the frame structure of the messages transmitted and received.

Ethernet Physical Layer

In the early days Ethernet options were more limited than they are today. Two common options were the 10Base2 and 10Base5 configurations. Both operated at 10 Mbps and used coaxial cable, with nodes connected to the cable via Tee connectors or through ‘attachment unit interfaces’ (AUI) in a multi-drop bus configuration. 10Base2 networks allowed segment lengths of up to 185 meters using RG-58 coaxial cable (also called Thin Ethernet). 10Base5 offered greater distances between nodes, but the thick coaxial cable and ‘vampire tap’ connections were bulky and difficult to work with. Later, another solution in this speed category was 10Base-FL, which uses fiber optic media and provides distances of up to 2000 meters. Another early 10 Mbps physical layer option – 10Base-T – quickly gained popularity because it was easier to install and used inexpensive unshielded twisted pair (UTP) Category 3 cable. Nodes (typically computers with network interface cards, or NICs) were connected in a star topology to a hub, which in turn was connected to other network segments. Each computer had to be within 100 meters of the hub. Standard RJ-45 connectors were used. In the mid-1990s 100 Mbps Ethernet equipment became available, increasing the data transfer rate significantly. NICs that would automatically adjust to operate at 10 Mbps or 100 Mbps made migration to the faster standard simple. Today, virtually all computer network interface cards implement 100Base-TX. Category 5e UTP cable is the standard cable used with 100Base-TX, and cable lengths are the same as for 10Base-T networks. Coaxial-based networks are increasingly being replaced with fiber optic media, especially for point-to-point links. For example, 100Base-FX uses two optical fibers and allows full-duplex point-to-point communications up to 2000 meters. Gigabit Ethernet (1000 Mbps) options are also available using twisted pair and fiber optic media.

Data Link Layer

Ethernet’s data link layer defines its media access method. Half-duplex links, such as those connected in bus or star topologies (10/100Base-T, 10Base2, 10Base5, etc.), use carrier sense multiple access with collision detection (CSMA/CD). This method allows multiple nodes to have equal access to the network, similar to early party-line telephone systems in which users listened for ongoing conversations and waited until the line was free before accessing it. All nodes on an Ethernet network continuously monitor for transmissions on the media. If a node needs to transmit, it waits until the network is idle, then begins transmission. While transmitting, each node monitors its own transmission and compares what it ‘hears’ with what it is trying to send. If two nodes begin transmitting at the same time, the signals will overlap, corrupting the originals. Both nodes will see a signal different from the one they are trying to send; this is recognized as a ‘collision’. If there is a collision, each node stops transmitting and only attempts to re-transmit after a back-off delay, which is different for each node. This method of media access makes it simple to add nodes to or remove them from a network: simply connect another node and it begins to listen and transmit when the network is available. However, as the number of nodes grows and the volume of traffic from each node increases, opportunities to gain access to the network decrease. As utilization increases, the number of collisions increases exponentially and the probability of getting access within a given length of time decreases dramatically. This characteristic makes Ethernet a probabilistic network, as opposed to a deterministic network, in which access time can be reliably predicted. (Master/slave and token-passing network schemes are deterministic.) On full-duplex point-to-point Ethernet links (10Base-FL, 100Base-FX, etc.), collisions are not an issue, since only two nodes are present and separate send and receive channels are available. Another advantage is that data can be sent in both directions simultaneously, effectively doubling the data transfer rate.
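
As a rough sketch of how that per-node delay is chosen, the snippet below implements the truncated binary exponential backoff used by half-duplex 802.3 CSMA/CD: the delay is picked at random from a window that doubles with each consecutive collision, which is why colliding nodes almost always end up waiting different amounts of time. Treat it as an illustration rather than a normative implementation.

```python
import random

MAX_BACKOFF_EXPONENT = 10   # the contention window stops growing after 10 collisions
MAX_ATTEMPTS = 16           # after 16 consecutive collisions the frame is abandoned

def backoff_slot_times(collision_count):
    """Random delay, in slot times, chosen after the nth consecutive collision."""
    if collision_count > MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    window = 2 ** min(collision_count, MAX_BACKOFF_EXPONENT)
    return random.randrange(window)   # wait 0 .. window-1 slot times

# Example: delays picked after the 1st, 3rd and 10th collision in a row.
for n in (1, 3, 10):
    print(f"collision {n}: wait {backoff_slot_times(n)} slot times")
```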

The Ethernet Frame

The Ethernet data link layer also defines the format of data messages sent on a network. The data message format, or frame, contains several fields of information in addition to the data to be transferred across the network. Obviously, at the heart of the message is the actual data that is to be sent. This is called the ‘data unit’. Ethernet data units can contain between 46 and 1500 eight-bit bytes of binary information. The actual length of the data unit is determined and included in the message as a field, to tell the receiver how to determine which part of the message is data. Each message must include source and destination addresses so that other nodes can determine where the message is coming from and going to. These six-byte binary numbers are called MAC addresses. Every Ethernet node has a unique MAC address permanently stored in its hardware memory. The Ethernet frame also contains a four-byte ‘frame check sequence’ (FCS) field, which is a binary number generated by the sending node that allows high-reliability cyclic redundancy check (CRC) error checking to be done by the receiving node.
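
The frame layout just described can be sketched in a few lines of code. This is a toy illustration only: it uses Python's zlib CRC-32 as a stand-in for the FCS and ignores the preamble and the bit-ordering details that a real network controller handles in hardware.

```python
import struct
import zlib

def ethernet_frame(dst_mac: bytes, src_mac: bytes, data_unit: bytes) -> bytes:
    """Toy IEEE 802.3 frame: dst(6) + src(6) + length(2) + data(46..1500) + FCS(4)."""
    if len(data_unit) > 1500:
        raise ValueError("data unit may not exceed 1500 bytes")
    payload = data_unit.ljust(46, b"\x00")            # pad to the 46-byte minimum
    header = dst_mac + src_mac + struct.pack("!H", len(data_unit))
    fcs = zlib.crc32(header + payload)                # frame check sequence from the sender
    return header + payload + struct.pack("!I", fcs)

frame = ethernet_frame(bytes.fromhex("ffffffffffff"),   # broadcast destination MAC
                       bytes.fromhex("020000000001"),   # locally administered source MAC
                       b"hello, industrial Ethernet")
print(len(frame), "bytes on the wire (excluding preamble)")   # -> 64 bytes
```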

Hubs and Switches

Ethernet hubs are simple physical layer devices used with 10/100Base-T(X) networks to repeat and split Ethernet signals. Nodes connect to ports on the hub as branches to create a physical star topology. Hubs receive data from the connected nodes, regenerate it and send it out on all other ports. By regenerating the data, the maximum segment distance can be extended. All transmissions go to all the connected nodes, the same as on a bus network. Nodes respond to transmissions based on the destination address contained in the message frame. Hubs allow all wiring to connect to a central location, making it easy to isolate problem nodes and make changes to the network. Switches are similar to hubs except that they divide the network into segments. An internal table is maintained of the addresses of the nodes connected to the switch. When an Ethernet packet is received at one of the switch’s ports, the destination address in the packet is read, a connection is made to the appropriate port and the packet is sent to that node. This isolates the message traffic from the other nodes, decreasing the utilization on the overall network. Ethernet switches can be managed or unmanaged. Unmanaged switches operate as described above. Managed switches allow advanced control of the network; they include software to configure the network and diagnostic ports to monitor network traffic.
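
A switch's forwarding table can be modelled with a plain dictionary. The sketch below is a simplified illustration (the class and method names are invented); it also floods frames whose destination has not been learned yet, a detail real switches share with hubs even though the paragraph above does not spell it out.

```python
class ToySwitch:
    """Minimal model of an unmanaged Ethernet switch's learn-and-forward behaviour."""

    def __init__(self, num_ports):
        self.num_ports = num_ports
        self.mac_table = {}                  # MAC address -> port it was last seen on

    def handle_frame(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port    # learn which port the sender lives on
        if dst_mac in self.mac_table:        # known destination: forward to one port only
            return [self.mac_table[dst_mac]]
        # Unknown destination: flood to every other port, as a hub would.
        return [p for p in range(self.num_ports) if p != in_port]

sw = ToySwitch(4)
print(sw.handle_frame(0, "aa:aa:aa:aa:aa:01", "aa:aa:aa:aa:aa:02"))  # flood -> [1, 2, 3]
print(sw.handle_frame(1, "aa:aa:aa:aa:aa:02", "aa:aa:aa:aa:aa:01"))  # learned -> [0]
```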

Higher Level Network Functions

To facilitate reliable communications across multiple, and in some cases dissimilar, networks, other higher-level protocols are used on top of Ethernet’s data link layer. The most common of these today, especially when connecting an Ethernet network to the Internet, is TCP/IP. IP, or Internet Protocol, ensures packets are moved across the network based on their IP address. TCP, or Transmission Control Protocol, makes sure data is delivered completely and error-free. Two or more Ethernet networks may be connected together via a router, a device that maintains a list of the IP addresses on each network connected to it. The router monitors the IP addresses on packets received at its ports and routes them to the port connected to the appropriate network.
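
In the same spirit, a router's forwarding decision reduces to finding which attached network contains the packet's destination IP address. The sketch below uses Python's ipaddress module with invented names and networks; real routers add default routes, longest-prefix matching and much more.

```python
import ipaddress

class ToyRouter:
    """Minimal model: each attached network is reachable through exactly one port."""

    def __init__(self):
        self.routes = []                                  # list of (network, port) pairs

    def attach(self, cidr, port):
        self.routes.append((ipaddress.ip_network(cidr), port))

    def route(self, dst_ip):
        dst = ipaddress.ip_address(dst_ip)
        for network, port in self.routes:
            if dst in network:
                return port                               # forward out of this port
        return None                                       # no attached network: drop it

r = ToyRouter()
r.attach("192.168.1.0/24", 1)                             # office LAN on port 1
r.attach("10.10.0.0/16", 2)                               # plant-floor network on port 2
print(r.route("10.10.4.7"))      # -> 2
print(r.route("192.168.1.50"))   # -> 1
```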

Ethernet and Industrial Systems

Ethernet’s simple and effective design has made it the most popular networking solution at the physical and data link levels, especially for general-purpose office networks. With high-speed options and a variety of media types to choose from, Ethernet is efficient and flexible. Using inexpensive UTP cable, star topologies and CSMA/CD media access, Ethernet networks are easily designed and built. Nodes can be added or removed simply, and troubleshooting is relatively easy to do. As Ethernet and related technologies have become prevalent in the general networking arena, a large base of trained personnel has become available. These factors, and the low cost of Ethernet hardware, have made Ethernet an attractive option for industrial networking applications. Also, the opportunity to use open protocols such as TCP/IP over Ethernet networks offers the possibility of a level of standardization and interoperability that has until now remained elusive in the industrial field. However, the probabilistic nature of Ethernet is one characteristic that is a drawback for some industrial network applications. Historically, time-critical networking applications have been handled using deterministic networks (using master/slave or token-passing schemes). Utilization levels on industrial Ethernet networks must be carefully controlled, as levels greater than 10% often result in inadequate performance. Still, as the overall cost/benefits of Ethernet have increased, industrial users have found ways to enhance Ethernet’s data transfer performance. One method is to segment networks using switches and routers to minimize unwanted network traffic and reduce utilization. Another is to use newer, higher-level protocols that incorporate prioritization, synchronization and other techniques to ensure timely delivery of messages. The result has been an ongoing shift toward the use of Ethernet for industrial control and automation applications. Ethernet is increasingly replacing proprietary communications at the plant floor level and in some cases moving downward into the cell and field levels. Most major control system manufacturers now incorporate versions of Ethernet networks and higher-level Ethernet-related protocols into their product offerings. Often, several manufacturers and/or industry stakeholders have entered into cooperative efforts to develop Ethernet-related standards and products. Several such standards now exist, though interoperability between them continues to be elusive.

Ethernet for Control Automation Technology (EtherCAT) is an open real-time Ethernet network developed by Beckhoff. It provides real-time performance, features twisted pair and fiber optic media, and supports various topologies. It is supported by the EtherCAT Technology Group, which has 168 member companies.

Ethernet Powerlink is a real-time Ethernet protocol that combines the CANopen concept with Ethernet technology. The Ethernet Powerlink Standardization Group (EPSG) is an open association of industry vendors, research institutes and end-users in the field of deterministic real-time Ethernet.

EtherNet/IP is an industrial networking standard that takes advantage of commercial off-the-shelf Ethernet communications chips and physical media. The IP stands for ‘industrial protocol’. ControlNet International (CI), the Industrial Ethernet Association (IEA) and the Open DeviceNet Vendor Association (ODVA) support it.

Modbus-TCP, supported by Schneider Automation, allows the well-proven Modbus protocol to be carried over standard Ethernet networks on TCP/IP.

PROFINET is Profibus’ Ethernet-based communication system, currently under development by Siemens and the Profibus User Organization (PNO).

The ongoing level of interest, activity and new product introductions of Ethernet-based equipment suggests industrial use of Ethernet will continue to grow for the foreseeable future.

Author: Mike Fahrion. Source: www.eetimes.com



Magento Used on 20% of Ecommerce Websites

A survey of ecommerce sites in the Alexa top million websites has found that, as of the first quarter of 2012, Magento is by far the most popular ecommerce solution.

Alexa Internet Inc. tracks the web use of millions of internet users and gathers information about the sites they visit. A recent analysis of their database has revealed that Magento is used on 20 percent of ecommerce sites internationally. Magento usage has skyrocketed over the last 4 months. The number of sites deploying Magento increased by 21 percent over the previous quarter, dwarfing the growth rate of its nearest competitor, Zen Cart, which grew by only seven percent.

Of the 33,632 ecommerce sites in Alexa’s top million, almost 7,000 were using Magento, with just under 4,000 each using Zen Cart and VirtueMart. Open-source ecommerce platforms dominate the list; more than half of all ecommerce websites in the top million are released under an open-source license.

The survey also examined the relative popularity of ecommerce platforms in the top 100,000 websites. In this segment of the market, enterprise and bespoke ecommerce solutions become more popular. IBM’s WebSphere and Oracle’s ATG make a strong showing, with 11 percent and 7 percent of sites respectively. However, Magento once again outpaces the competition: it has nearly double the number of users of WebSphere, its nearest rival, with 345 sites out of the total of 1,655 compared to WebSphere’s 185.

Magento’s open source development model, partner network and active community have allowed it to mature into the world’s most flexible and reliable ecommerce web development platform. It is now the platform of choice for vendors ranging from multinational corporations to mom-and-pop shops across the world.

Thousands of companies, including Samsung, InterFlora and Olympus have been richly rewarded for choosing Magento as the platform around which to build their online retail presence. Magento allows companies to avoid the vendor lock-in associated with proprietary ecommerce software, giving them a powerful foundation that can be adapted to suit the needs of their business.

With Magento’s upcoming 1.7 release, which packs a host of new features, including backup and rollback functionality, improved navigation, and a redesigned mobile theme, we expect to see it go from strength to strength in the coming months, cementing its position as the leading international ecommerce solution.



Magento is the most popular ecommerce platform. Here’s Why!

Whether Magento is the most popular ecommerce platform might still be debated, even though the numbers show that around 50% of new online sellers are choosing it as their preferred platform. In my view, some of the areas where Magento outdoes the competition are:

  • Talent pool: one of the largest pools of talented resources across popular ecommerce platforms.
  • Contemporary features: one of the most mature and feature-rich platforms, continuously supported and enriched by a large pool of talented developers and partner companies.
  • Magento’s ecosystem: Magento has developed a vast ecosystem of solution partners, industry partners, payment partners and hosting partners. Chances are that for any business requirement you have, something has already been built by more than one of their partners. This saves not only a lot of time and effort in building something from scratch, but also your planning time.
  • Extensions & themes: a large number of extensions have already been developed.
  • Magento Go, Community & Enterprise: Magento has a solution for every customer segment.
  • Technology vendors: there are multiple options for technology vendors, across geographies and at different levels of expertise, to undertake Magento projects. There are freelancers, small tech startups and some established tech companies, as well as companies regarded as expert Magento solution partners. Even these solution partner companies have various levels of partnership with Magento.

There can be more reasons, and our readers can add other points I might have missed here. It will also be interesting to hear from some buyers about their reasons for choosing Magento.