REST and Industrial Applications - An alternative to OPC


Writing software for industrial applications is tricky and risky: a lot of different software platforms and hardware devices must be integrated into a common environment that has often evolved over tens of years.

In this scenario, one of the most important standards is OPC, which defines a set of specifications describing how to produce and consume the data, alarms, and events typically generated and processed in an industrial system.

The following picture is a sample scenario showing the concept behind OPC:


A sample scenario

On the bottom area there are three sample classes of industrial devices:

  • PLC-A is a sample PLC which exposes a set of memory data (called tags).
    A tag is a calculated variable or the value of a physical signal. Tags are usually accessed using a proprietary protocol, like Siemens SINEC H1, depending on the PLC used.
    In the sample, the tags are named A01, A02, ..., A0n for PLC-A (and likewise Z01, Z02, ..., Z0n for Legacy-Z).
  • PLC-B is another sample PLC, working in the same way as PLC-A but using the GE EGD protocol to expose its tags to the other systems.
  • Legacy-Z is a sample system implementing a complex mathematical model and exposing data to the upper layer using a custom UDP protocol.
    This kind of system usually gets data from the PLCs, processes it with a feed-back model, and generates setup data packets that are sent back to the PLCs. A lot of heterogeneous operating systems and programming languages have been used for delivering these applications (real-time OSes, Unix, Fortran, PML, C, etc.).
    In this sample we assume that some of the calculated values are exposed with the same tag pattern (name, value) using a custom-developed protocol.

Before OPC

As you can imagine, before the OPC era, developing the upper software layers (like databases with trend analysis, or HMI modules with the user interfaces) required creating 1:1 connections with each integrated device, implementing its custom protocol (SINEC H1, EGD, custom UDP, etc.) in every client application and wasting a lot of effort on plumbing.

For example, if your HMI needs to integrate PLC-A, PLC-B, and Legacy-Z, you must have in your code the SDKs or components for SINEC H1, EGD, and the custom UDP protocol. The same goes for the trend-analysis database.

What is OPC

Referring to the depicted scenario:

OPC is a standard communication protocol that mediates and exposes the underlying protocols to the upper software layers through a single, standardized access model.

As you can see from the sample picture, the Trend server and the HMI server are directly connected to the OPC server using just a single protocol (the OPC protocol).

To implement an OPC server you usually get one from the market. There are a lot of different products (Kepware, OPC Power Server, Matrikon, etc.), and you choose one based on the availability of the protocol drivers you need.

In the sample scenario there is a logical mapping between the OPC-exposed tag values and the underlying tags (e.g. the OPC tag 00-01 is mapped to the physical tag A01, the OPC tag FF-02 corresponds to the physical tag Z02, and so on).

One OPC server integrates different heterogeneous devices using one common logical tag table.

The translation from the industrial protocols (Siemens SINEC H1, EGD, etc.) is handled by the OPC drivers (there are a lot of different drivers on the market). If you need to translate a custom protocol (in the sample scenario, the custom UDP protocol), you can write your own driver with an existing OPC SDK.

Architectural Pattern

From an architectural perspective you can consider OPC a common layer that maps and provides access to a network of underlying devices and resources using a name/value addressing pattern.
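To make the name/value addressing pattern concrete, here is a minimal Python sketch of the logical tag table idea; the device names, tag names, and values are invented for illustration:

```python
# Sketch of the OPC "logical tag table": one name/value map that hides
# which device a tag physically comes from. All names/values are invented.

# Per-device tag tables, each kept up to date by a protocol-specific driver
plc_a = {"A01": 12.5, "A02": 7.0}       # e.g. refreshed via SINEC H1
legacy_z = {"Z01": 3.1, "Z02": 880.0}   # e.g. refreshed via custom UDP

# The server maps logical OPC tag names onto (device table, physical tag)
tag_table = {
    "00-01": (plc_a, "A01"),
    "FF-02": (legacy_z, "Z02"),
}

def read_tag(name):
    """Clients read any tag through one uniform name/value interface."""
    device, physical = tag_table[name]
    return device[physical]

print(read_tag("00-01"))  # reads PLC-A's A01 -> 12.5
print(read_tag("FF-02"))  # reads Legacy-Z's Z02 -> 880.0
```

A real OPC server does much more (browsing, subscriptions, quality codes), but the core addressing model is this single logical map.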

When you implement complex mathematical models or process control software in a modern environment, you'd like to leverage the OPC pattern to implement your common memory areas, providing access to your computed variables from different algorithms and models.

So in a complex mill there are different applications running on different hardware systems that need to share data in just the way OPC was built for.

The problem is that OPC isn't fast enough to let process control software do its job, and the code you have to write to provide OPC access is tricky and "fat" due to the SDKs and components involved.

So the question is:

How can we leverage the OPC pattern to implement common data areas using modern technologies, providing access to shared variables?

My idea is to develop a REST service.


Representational State Transfer (REST) is an architectural style for exposing a set of connected resources (and their basic operations), usually leveraging the HTTP protocol suite.

It's a different thing from Web Services and SOA. I guess the best approach to understanding REST is a sample:

  • A multimedia content system could expose its catalog through REST, providing basic services for updating it;
  • A complex system could expose its configuration and metadata, and client browsers could leverage REST to connect and update those data;

The pillars of REST are:

  • URIs to address connected resources (for example: http://mycontenctapplication/myCatalog/Author="Rocking Corrado");
  • HTTP verbs to specify an operation on the connected resource:
    • GET to fetch or read resource values;
    • PUT to update or insert resource values;
    • DELETE to delete resources;
    • POST to append resources;
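As a toy illustration (not tied to any real framework; the resource names and values are invented), the four verbs map naturally onto operations over a keyed store:

```python
# Minimal in-memory illustration of the four REST verbs dispatched
# against a resource store keyed by URI. Names/values are invented.
store = {"myCatalog/1": {"Author": "Rocking Corrado"}}

def handle(verb, uri, body=None):
    """Dispatch an HTTP-style verb against the resource at uri."""
    if verb == "GET":        # fetch/read the resource representation
        return store.get(uri)
    if verb == "PUT":        # update or insert at a known URI
        store[uri] = body
        return body
    if verb == "DELETE":     # delete the resource
        return store.pop(uri, None)
    if verb == "POST":       # append a new resource under a collection
        new_uri = f"{uri}/{len(store) + 1}"
        store[new_uri] = body
        return new_uri
    raise ValueError(f"unsupported verb {verb!r}")

print(handle("GET", "myCatalog/1"))                  # {'Author': 'Rocking Corrado'}
print(handle("POST", "myCatalog", {"Author": "X"}))  # myCatalog/2
```

In a real service the store would sit behind an HTTP endpoint, but the verb semantics stay the same.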

Now the problem is: how can we leverage REST for industrial applications?

REST and Industrial Software

If your software has to manage a complex site (like a Hot Strip Mill), you have to deal with different applications and process controls (Furnace Control, Roughing Mill, Finishing Mill, Cooling Section) that have to exchange a lot of data and messages.

Those applications usually use custom protocols to exchange messages (pushing data over 1:1 synchronous interfaces) and common shared areas for in-process communication.

The following picture shows a Hot Strip Mill process control system built using a REST architecture.


The REST application will be developed with the following features:

  • A data structure will be implemented to collect information from the existing process control software;
  • The data structure will be updated through standard interfaces with the existing software, for example TCP/IP sockets;
  • The data structure will be exposed through a RESTful interface;

If you use REST to expose the common area, I guess you will have a "closed" set of tags, so that only the GET and PUT verbs make sense (consumers of the REST service will never add or delete tags).
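A minimal sketch of such a "closed" tag area (the tag names and values are invented): GET and PUT work on existing tags, while creating a new tag is rejected:

```python
# "Closed" set of tags: consumers can read and update existing tags,
# but never add or remove them. Tag names and values are invented.
tags = {"FM-SPEED": 10.4, "FM-GAP": 2.31}   # e.g. Finishing-Mill setup data

def get_tag(name):
    return tags[name]                 # GET: read an existing tag

def put_tag(name, value):
    if name not in tags:              # closed set: no new tags allowed
        raise KeyError(f"unknown tag {name!r}")
    tags[name] = value                # PUT: update an existing tag

put_tag("FM-SPEED", 11.0)
print(get_tag("FM-SPEED"))  # 11.0
```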

What are the main benefits of adopting REST in your process control software?

  • A standard protocol to expose data (HTTP);
  • Client applications can access REST information in an easy way (XML and JSON);
  • REST is very well suited:
    • to provide access to and update configuration data;
    • to provide and update runtime information (like the Finishing Mill parameters).
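For example, consuming a JSON representation of a tag takes only a couple of lines in most client stacks; the payload shape and field names below are invented for illustration:

```python
import json

# Hypothetical JSON body a RESTful tag service might return for
# GET /tags/FM-SPEED; the field names here are invented.
payload = '{"tagId": "FM-SPEED", "value": 10.4, "quality": "good"}'

tag = json.loads(payload)
print(tag["tagId"], tag["value"])  # FM-SPEED 10.4
```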

How can we write REST applications?

The WCF REST Starter Kit is here (developed by the Microsoft WCF team)!

It has been published on CodePlex and it could be included in .NET 4.0.

A set of templates is available for Visual Studio 2008:

  • REST Collection/Singleton services;
  • Atom Feed/Atom Publishing Protocol services;
  • HTTP/POX services.

And what about the C# code that we should write? Easy: a read operation boils down to a single annotated method (the UriTemplate here is my guess at a typical shape):

[WebGet(UriTemplate = "tags?id={tagId}")]
ProcessControlData GetTag(int tagId);

The WCF REST Starter Kit also provides:

  • the WebProtocolException class, to implement exception management in REST services;
  • the RequestInterceptor class, to manage the processing pipeline of the service.


PDC 2008 - Day 4 - Sessions


TL35 WCF: Developing RESTful Services by Steve Maine


Learn the latest features in Windows Communication Foundation (WCF) for building Web 2.0-style services that use URIs, HTTP GET, and other data formats beyond XML. See how these features can be applied to AJAX web sites, "REST" applications, and data feeds.

What I carried out...

Great, great, great presentation. If you don't know the meaning of REST, this is a must!

Talking about REST properly would be too long a discussion for this incipit.

I just want to remember:

  • The REST Starter Kit is a set of libraries and templates which makes it easier to develop REST solutions. I think a big value there is the exception management, which is usually very tricky when implementing this kind of architecture.
  • The pillars of REST programming in .NET 3.5 are:
    • [WebGet] + [WebInvoke]
    • UriTemplate
    • WebHttpBinding

BB12 .NET Services: Messaging Services - Protocols, Protection, and How We Scale by Clemens Vasters


Look under the hood of the Microsoft .NET Services service bus, the protocols we use, and how to use the services from non-Microsoft platforms and languages. Learn which part of the messages and requests the Building Block service inspects, which parts are not inspected, and how you can verify this. Also, learn how to work through NAT and Firewall limitations. Last, hear about the architecture on the Data Center side that enables "Internet scale."

What I carried out...


I didn't go to the previous presentation about messaging and the event bus, so it was very hard to understand some topics in this session.

  • There are a lot of different bindings to connect to the message bus. You have to evaluate and test the best one according to your architecture.
  • The event bus is a "queue" but it's not guaranteed to be fully reliable. You should consider it a buffer. This was a big surprise for me!

TL31 "Oslo": Building Textual DSLs by Chris Anderson, Giovanni Della-Libera


The "Oslo" modeling language can define schemas and transformations over arbitrary text formats. This session shows you how to build your own Domain Specific Language using the "Oslo" SDK and how to apply your DSL to create an interactive text editing experience.

What I carried out...

How to implement a DSL defining a new grammar.

How to generate assemblies implementing that grammar that can be loaded from .NET.

There is huge value in creating old-style compilers. I'll blog later about it.

BB27 .NET Services: Orchestrating Services and Business Processes Using Cloud-Based Workflow by Moustafa Ahmed


See how simple it is to use cloud-based workflow services to run business processes in the cloud as well as perform orchestration across on-premises and cloud services while running workflows in an environment that scales automatically.

What I carried out...

When you create a WF application you need a host. Now you can choose:

  • Your own implemented host;
  • Dublin (an application server);
  • .NET Workflow Services on the Azure platform

Why choose Workflow on Azure?

  • It's scalable, and if you need more performance you can buy it!
  • Reliable and available: it's hosted in MS datacenters
  • Accessible from anywhere: it's in the cloud, and you can leverage it to connect your services over the Internet


  • It supports .NET 3.5
  • There are new activities to interact with the Service Bus
  • You can use the existing designer

My personal feeling is that you need to evaluate the worst case in a project to be sure that you can implement what you need. I'm scared of hitting a wall in the environment at a later stage of the project.


PDC 2008 - Day 3 - Other sessions



The Keynote by Rick Rashid was very impressive. I already knew about Microsoft Research, but I didn't know it was so wide and that so many people work there. Google does a lot of marketing on their Google Labs, but to be honest I'm more impressed by MS Research (which works in the shadows), and I hope MS will not build a marketing strategy on it!

The video is here.

For people from Italy, take a look at this.

TL06 WCF 4.0: Building WCF Services with WF in Microsoft .NET 4.0 by Ed Pinto


Eliminate the tradeoff between ease of service authoring and performant, scalable services. Hear about significant enhancements in Windows Communication Foundation (WCF) 4.0 and Windows Workflow Foundation (WF) 4.0 to deal with the ever increasing complexity of communication. Learn how to use WCF to correlate messages to service instances using transport, context, and application payloads. See how the new WF messaging activities enable the modeling of rich protocols. Learn how WCF provides a default host for workflows exposing features such as distributed compensation and discovery. See how service definition in XAML completes the union of WF and WCF with a unified authoring experience that simplifies configuration and is fully integrated with IIS activation and deployment.

What I carried out...

One of the best presentations I've seen at PDC this year.

I will talk about Dublin as a hosting process in another blog post. What I really appreciated in this session was the leveraging of Workflow Foundation (WF) for the management of asynchronous messaging in complex interacting-systems scenarios.

Why do I have this feeling about asynchronous messaging? Because it has HUGE RELEVANCE for my job (industrial and tracking applications), so I strongly recommend that my colleagues (Eros, Lucone, Gallo, Valerio and Luca) take a look at the video (download it and play it on an airplane trip!)

Also, it's very impressive how the WF editor is improving. It's getting very close to the Orchestration editor of BizTalk 2006 (which was born for the orchestration of business processes...)

PC22 Windows 7: Design Principles for Windows 7 by Samuel Moreau


Together, we can increase customer enthusiasm, satisfaction and loyalty by designing user experiences that are both desirable and harmonious. In this session we introduce the Windows User Experience Principles approach to shipping software. Along the way we share stories and lessons learned along the journey of designing the user model and experience for Windows 7, and leave you with a set of principles that you can apply as you build your applications for Windows.

What I carried out...

A skipped lunch....

TL24 Improving .NET Application Performance and Scalability by Steve Carroll, Ed Glas


Performance must be considered in each step of the development lifecycle. See how to integrate performance in design, development, testing, tuning, and production. Work with tools and technologies like: static analysis, managed memory profiling, data population, load testing, and performance reports. Learn best practices to avoid the performance pitfalls of poor CPU utilization, memory allocation bugs, and improper data sizing.

What I carried out...

This session was of interest for my colleague AlessandroF because it was based on VSTF 2010 and the new tools for performance testing.

The basic idea is that there are different tools to accomplish different performance-analysis requirements during the steps of a project's lifetime.

  • During "Requirements Gathering" you would use a tool to set the performance goals for different scenarios;
  • During "Design" you would run end-to-end tests to evaluate your architecture;
  • During "Development" you would run tests evaluating how your changes affect the previous release (the following picture shows an out-of-the-box report):


There is a strong interaction between the performance analysis tools and the ALM features of VSTF 2010, so that you can evaluate your progress over time.

Also there are a lot of improvements in the tools themselves:

  • Now you can profile JavaScript!
  • There is a memory profiler tool and a Contention Profiler (this is very important for multi-core development: you can look at the lock and jump to the code that is causing it!)
  • Tools work remotely and under virtualization.

BB18 "Dublin": Hosting and Managing Workflows and Services in Windows Application Server by Dan Eshner


Hear about extensions being made to Windows Server to provide a feature-rich middle-tier execution and deployment environment for Windows Workflow Foundation (WF) and Windows Communication Foundation (WCF) applications. Learn about the architecture of this new extension, how it works, how to take advantage of it, and the features it provides that simplify deployment, management, and troubleshooting of workflows and services.

What I carried out...

Dublin is one of my favourite technologies from PDC. It's an application server to host workflow instances.

It's also important for my job (in the industrial world, but also for business process management and SharePoint), so I'll blog separately about it.

PC56 Windows Embedded "Quebec": Developing for Devices by Shabnam Erfani


Do you need to understand how to extend your applications and services to embedded devices using Windows 7 technologies? See the new Windows Embedded roadmap and hear plans for our next-generation offering built on Windows 7 technologies.

What I carried out...

That's for my colleagues Eros and Martino, who work with Windows XP Embedded. Please read this!

  • Quebec is the new release of XP Embedded based on Windows 7 (yes, Vista has been skipped!)
  • No, it's not for Real-Time; if you need Real-Time you need a 3rd-party extension
  • Language independent (XP was based on English, here you can bind different language images)
  • Sensor SDK for development and integration of external sensors (but I cannot find any more references to this).
  • 64-bit support
  • Minimum image size of 512MB (to fit on a Flash)
  • To create an image the following tools (Quebec image build tools) are available:
    • Image Builder Wizard (IBW)
      Lets you install Quebec interactively or unattended
    • Image Configuration Editor (ICE)
      GUI tool to create image configurations and distribution shares for image configuration
    • Deployment Image Servicing and Management (DISM)
      Installs feature sets to an offline or online Quebec image
    • Windows PE 2.1
      Windows operating system with limited services, used for initial image installation
    • Sysprep
      Removes system-specific data from an embedded Windows image; supports application plug-ins
    • Windows Deployment Services (WDS)
      Used for remote installation of images on devices
    • Additional tools for managing language packs, drivers, and servicing

PDC 2008 - Day 2 - Other sessions


OK, I took hundreds of photos of the slides only to discover that everything gets published one day after the presentation here: https://sessions.microsoftpdc.com/public/timeline.aspx ! Happy about that :-D

It's hard to say whether it's a good thing, because a lot of people have paid to join the conference and to sell back the new skills.

From my perspective, PDC is a huge opportunity for a full immersion in a lot of new technologies, getting a direct feel for them. So I appreciate the immediate availability of the slides and videos.

BB36 FAST: Building Search-Driven Portals with Microsoft Office SharePoint Server 2007 and Microsoft Silverlight by Stein Danielsen, Jan Helge Sageflåt


The combination of FAST ESP and Microsoft Office SharePoint Server (MOSS) 2007 allows for the development of powerful search-driven portals. Learn about the architecture and functionality of FAST ESP, and see how FAST ESP can complement and extend existing search features in MOSS 2007. Watch a demonstration that shows how to create search user interfaces by configuring and extending the FAST ESP Search Web Parts, including the use of Silverlight to deliver unique search experiences.

What I carried out...

At the moment Endeca is better than FAST (from my perspective), but there is a strong commitment from MS to improving the platform.

The first result is the availability of a set of webparts on CodePlex to integrate the FAST backend: http://www.codeplex.com/espwebparts

In the medium term FAST should strategically be the best solution due to its strong integration with the platform.


BB26 SQL Server 2008: Business Intelligence and Data Visualization by Stella Chan


Learn how to create an entity data model and bind it to data visualization and ReportViewer controls. Dive into new Reporting Services features like: Tablix, new Data Visualization controls, and the new Report Creation experience. Also, preview the future AJAX ReportViewer control and the new RDLC designer.

What I carried out...

My expectation was for a session going deep into BI topics, discussing mining, but a lot of time went to graphics controls.

The interesting stuff was:

  • Take a look at the Microsoft Chart Controls for .NET Framework 3.5. They're very powerful. In the past I always used OWC or 3rd parties.
  • Report Builder 2.0 is the new report designer shipped with SQL Server 2008, tailored to power users. Reports designed with it can now be hosted by the Visual Studio 2008 Reporting Controls, so you can embed a report inside an application without having a full Report Server.

TL27 "Oslo": The Language by Don Box, David Langworthy


The "Oslo" language, at the heart of the Oslo modeling platform, allows developers to quickly and efficiently express domain models that power declarative systems, such as Windows Workflow Foundation and "Dublin." In this session, we'll get you started writing models for your own domains by introducing you to key features of the language, including its type system, instance construction, and query. You'll learn to author content for the Oslo repository and understand how to programmatically construct and process the content to target your own specific runtime environment.

What I carried out...

Oslo was one of the top topics of the PDC, and I'll blog about it to reorder my understanding.

In this (short) session David Langworthy was a little bit restless (maybe because Don Box was there), so the presentation wasn't great.

We've seen M (the textual DSL language) and how to persist the modelled DSL to the DB.

The idea is that you can model a world with a text-based language (for example defining entities such as PowerSwitch, PowerLine, PowerConsumer, ...); using the language you also define the "plumbing", instances, and attributes for each entity, persisting everything to a SQL Server database.

Using M you can also query the DBs!

More or less you can think of M as a query and definition language for a DSL, in the same way LINQ is a query language for SQL...

ES02 Windows Azure: Architecting & Managing Cloud Services by Yousef Khalidi


From design to deployment, building a scalable, highly available service is different from building other kinds of applications. This session discusses the impact that designing for the cloud has on all stages of the service lifecycle, and how the Microsoft cloud platform works for you to meet the scaling and availability goals of your service. This session will show how automation is used to free the developer from dealing with many hardware and networking issues. Also learn how the cloud services platform is architected to enable a pay-for-use dynamic model.

What I carried out...


You have to think about an Azure solution in a totally different way. Too early to say something more. I need to try writing some code to understand how it works, the walls, and the real-world applications you can write.


TIQ-Industrial - New white papers released


On the TIQ-Industrial site, you can find some new white papers:

  • Industrial sites vehicles tracking with GPS-DGPS-GPRS technologies
  • Data-Warehouse And Mining Tools For Steel Production Control
  • Sunsetting: A solution framework to revamp and integrate the Level-2 process control software
  • An Integrated Production Site

If you're interested in those topics, I hope you will enjoy them!

PS: I know, I know, the aesthetics and look & feel of the site are bad! Give us more time, we're working on it...