Enterprise: nozom – based on OSIsoft PI – has been developed for real-time and near-real-time industrial system integration. As the hunger for data grows by the day and seamless integration between the control, MES and corporate IT layers becomes mandatory, nozom responds with a middleware architecture designed with unlimited scalability in mind.
Architecture
The architecture of a software system is more than interfaces, functions and data movement. It expresses the vision and direction of its developers and ultimately defines the usability and the value proposition for the client. system: nozom’s architecture is built around the vision of doing more with less (implementation effort and cost) and leveraging existing investments (servers, endpoints, interfaces), while retaining the ability to respond to technological change and remaining as future-proof as possible (complete standards support, 64-bit services, schema-less, open-format repositories).
The time of a single application defining connectivity capabilities (and limitations) is over. Applications should share a common data infrastructure, an MES repository so to speak. Such a system cannot be assembled from a disparate selection of individual tools scattered across a network; it must share central intelligence about the data it processes, and it must be centrally manageable and accessible, putting the IT department firmly in the driver’s seat.
We also think that meeting the basic need for complete and flexible data historization should not cost as much as a Learjet. Classic SQL databases no longer qualify as the best storage solution for ever-changing and ever-growing data structures. We strongly believe that technologies which have evolved greatly in recent years have a lot to offer industrial data management systems.
We invite you to join in.
Browse the Features
A systematic approach for industries and organizations to manage their data infrastructure in an ever-evolving landscape of data sources and sinks – system: nozom
The nozom approach decouples the multitude of data sources from the array of data-consuming applications. We supply a data middleware system capable of scaling the enterprise data endpoint infrastructure so that corporations and technology-driven organizations can better organize, manage and ensure the delivery of real-time information. First, the infrastructure is built, managed and monitored. Next, the information models are classified and organized. Finally, distinct access points are created which supply standard interfaces for data-driven applications. Our approach relieves individual, functionality-driven applications of the need to each maintain a data infrastructure as a side task. It also supplies an alternative to the creeping number of individual tools required to run a secure, redundant and firewall-aware data infrastructure. system: nozom is a system of embedded functionality aiming for one thing: better management of enterprise-wide data connectivity.
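To make the decoupling idea concrete, here is a minimal Python sketch in which data sources and data-consuming applications meet only at a shared access point; the class and method names (AccessPoint, publish, subscribe) are hypothetical and merely mirror the three steps described above, not an actual nozom interface.

```python
from collections import defaultdict

class AccessPoint:
    """Hypothetical access point: sources publish named items, consumers subscribe to them."""

    def __init__(self):
        self._latest = {}                       # item name -> last known value
        self._subscribers = defaultdict(list)   # item name -> consumer callbacks

    def publish(self, item, value):
        # Sources deliver values into one organized, centrally named model.
        self._latest[item] = value
        for callback in self._subscribers[item]:
            callback(item, value)

    def subscribe(self, item, callback):
        # Consumers use this one standard interface and never touch the source protocol.
        self._subscribers[item].append(callback)

broker = AccessPoint()
broker.subscribe("Site1/Unit3/Temperature", lambda item, value: print(item, "=", value))
broker.publish("Site1/Unit3/Temperature", 78.4)   # the subscriber is notified immediately
```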
Deliver any data from any source of any scope at any resolution to the enterprise repository. Simple to state, hard to realize – not to mention the price tag usually attached. Not so with system: nozom.
Protecting existing investments, supporting common legacy technology and preparing the ground for the future, system: nozom ships with a variety of built-in protocol stacks. The connector:service component can be used to integrate COM-based OPC DA, A&E and HDA servers, OPC UA servers as well as database systems, spreadsheets, files, and more. The emphasis is on real-time streams (OPC) and on helping to turn existing non-OPC endpoints and information containers into OPC-compliant formats. Independent of the specific endpoint protocol in the local subnet, e.g. a solitary production facility or remote measurement modules, the individual connector:service requires a single TCP port for communication with the central core:service installation, usually located at a global or regional data center. There is no real limit to the number of endpoints that can be managed by a single connector:service, nor to the total number of connector:services that can be managed by a core:service installation.
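To illustrate the idea of many local endpoints funneled through one outbound connection, the sketch below describes a hypothetical connector:service setup as plain data; the structure, field names, ProgIDs, addresses and port number are assumptions made for this example, not the actual nozom configuration format.

```python
import json

# Hypothetical configuration for one connector:service instance: several local
# endpoints of different protocols, all forwarded through a single TCP port
# to the central core:service.
connector_config = {
    "connector": "plant-07",
    "core_service": {"host": "datacenter.example.com", "port": 4840},  # single outbound TCP port (placeholder)
    "endpoints": [
        {"type": "opc-da",  "progid": "Vendor.OPCServer.1"},
        {"type": "opc-ua",  "url": "opc.tcp://10.0.0.5:48010"},
        {"type": "opc-hda", "progid": "Vendor.HDAServer.1"},
        {"type": "database", "dsn": "plant07-lims"},
        {"type": "file",     "path": "/var/exports/quality.csv"},
    ],
}

print(json.dumps(connector_config, indent=2))
```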
system: nozom uses the leading in-memory database system, OSIsoft PI, as its central repository. How does this product support the storage of process data so well?
Process data comes in many distinct formats. OPC values can be anything, from a simple Boolean to an array of strings. PI copes easily with variant data formats (Integer, Float, Real, ...).
Historizing time-series data requires high insert performance. This is why so many products have used their own proprietary storage formats in the past. With PI, system: nozom can run 100,000 inserts per second against a single data store. Even better, you can work with the data as it is created – it is an open format.
Industrial alarm and event data comes in different formats, defined by the vendor, the system and the interface in use. nozom deals easily with variant record structures – or better, documents.
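To show what these variant structures look like side by side, the sketch below builds a float value, a string-array value and an alarm record as documents; the field names are illustrative, not a nozom schema.

```python
from datetime import datetime, timezone

now = datetime.now(timezone.utc).isoformat()

documents = [
    # Simple analog value
    {"tag": "Unit3/TI-1001", "ts": now, "value": 78.4, "quality": "Good"},
    # OPC value that happens to be an array of strings
    {"tag": "Unit3/RecipeSteps", "ts": now, "value": ["Charge", "Heat", "Hold"], "quality": "Good"},
    # Alarm & Event record with vendor-specific fields of its own
    {"source": "Unit3/PSH-2001", "ts": now, "event": "HighPressure",
     "severity": 800, "ack_required": True, "operator": "station-12"},
]

# A schema-less repository accepts all three shapes in the same collection;
# with a driver, this list would go straight into a bulk insert.
for doc in documents:
    print(doc)
```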
nozom is ultimately scalable. Thanks to the simplicity of creating replica sets and sharded data clusters, industrial clients save considerable time and effort building the right storage foundation for data-hungry applications.
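Purely as an illustration of what replica-set-aware access can look like from a client, here is a sketch that assumes the repository is reachable through a MongoDB-compatible Python driver (pymongo); the hostnames, replica set name and database/collection names are placeholders, not nozom specifics.

```python
from pymongo import MongoClient

# Placeholder hosts and replica set name; adjust to the actual deployment.
client = MongoClient(
    "mongodb://repo1.example.com:27017,repo2.example.com:27017,repo3.example.com:27017/"
    "?replicaSet=rs0&readPreference=secondaryPreferred"
)

history = client["nozom"]["history"]            # database and collection names are illustrative
history.insert_one({"tag": "Unit3/TI-1001", "value": 78.4})
print(history.count_documents({"tag": "Unit3/TI-1001"}))
```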
nozom does not require Windows. It runs on nearly any modern OS platform. You may create a low-cost data storage foundation based on Ubuntu or Debian machines, supplying dozens of terabytes with high reliability for enterprise-wide process data management.
nozom uses JSON (JavaScript Object Notation) to interface with any client application, and supplies the more efficient BSON (Binary JSON) through drivers available for major languages such as C++, C# (.NET), Java, Perl, Python and many more. This environment will be embraced by your IT department. The times of closed or semi-open shops for process data are over.
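The sketch below shows the same reading once as JSON text and once as BSON, assuming the bson package that ships with the pymongo driver; the field names are illustrative.

```python
import json
from datetime import datetime, timezone

import bson  # ships with the pymongo driver; used here only to show BSON encoding

reading = {
    "tag": "Unit3/TI-1001",
    "ts": datetime.now(timezone.utc),
    "value": 78.4,
    "quality": "Good",
}

as_json = json.dumps(reading, default=str)   # human-readable text for any client
as_bson = bson.encode(reading)               # compact binary form used by the drivers

print(as_json)
print(len(as_json.encode()), "bytes as JSON vs", len(as_bson), "bytes as BSON")
```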
Process data today is no longer in the hands of only a select few. Business and consumer interaction with data is now spread widely across the organization. Managing access rights efficiently with system: nozom opens up new horizons for secure real-time interaction.
In common real-time data source integration scenarios, security is treated as a point solution. This contradicts the aim of controlling all information security aspects centrally, managing the access rights of individuals (persons or groups, internal or external to the organization) from a single repository, and integrating with the corporate directories. As system: nozom consolidates all endpoints, data groups and items into a single enterprise model, access and read/write permissions can be managed in close conjunction with generic IT security. Policy sets can be shared across a scope selection, such as all OPC DA servers with a given ProgID, or data items across several OPC servers that share a certain naming convention in the source systems.
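The following sketch suggests how such scoped policy sets could be expressed and evaluated; the rule structure, ProgID, group names and tag pattern are invented for the example and do not describe the actual nozom security model.

```python
import re

# Hypothetical policy set: each rule selects a scope (by ProgID or by item
# naming convention) and grants permissions to a principal from the directory.
policies = [
    {"scope": {"progid": "Vendor.OPCServer.1"},           # all endpoints with this ProgID
     "principal": "DOMAIN\\ProcessEngineers", "permissions": {"read"}},
    {"scope": {"item_pattern": r".*/PI-\d{4}$"},           # items following a naming convention
     "principal": "DOMAIN\\Maintenance", "permissions": {"read", "write"}},
]

def allowed(principal, progid, item, action):
    """Return True if any policy grants `action` on this endpoint/item."""
    for rule in policies:
        if rule["principal"] != principal or action not in rule["permissions"]:
            continue
        scope = rule["scope"]
        if "progid" in scope and scope["progid"] == progid:
            return True
        if "item_pattern" in scope and re.match(scope["item_pattern"], item):
            return True
    return False

print(allowed("DOMAIN\\Maintenance", "Other.Server.1", "Plant1/Unit3/PI-2043", "write"))       # True
print(allowed("DOMAIN\\ProcessEngineers", "Vendor.OPCServer.1", "Plant1/Unit3/TI-1001", "write"))  # False
```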
Independent of network and communication characteristics, this middleware copes with any number and scope of connected data namespaces. The corporate real-time infrastructure is an ever-growing beast – one that can be tamed by system: nozom.
Scalability works in two directions. Quantitative scalability works in system: nozom as each connector:service connects to dozens, hundreds or even thousands of data source endpoints. Depending on the type and namespace scope of the sources, in conjunction with the desired data refresh rates, implementation-specific design decisions are made. As the number of connector:service machines is also not limited, system: nozom allows you to build a future-proof real-time data infrastructure bringing every required piece of detail information to the heart of the corporation. Qualitative scalability helps to improve the availability and reliability of the integrated data streams. system: nozom supports redundancy strategies for all data sources in question, including multiple backup streams for a distinct connection.
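As a toy illustration of the qualitative side, the sketch below walks an ordered set of redundant source endpoints and switches to the first healthy one; the endpoint addresses and the health flag stand in for whatever health checking a real deployment would use.

```python
# Hypothetical redundancy set for one logical connection: primary first,
# then backup streams in priority order.
redundant_sources = [
    {"name": "opc.tcp://primary-dcs:48010",    "healthy": False},   # e.g. primary lost
    {"name": "opc.tcp://backup-dcs:48010",     "healthy": True},
    {"name": "opc.tcp://remote-gateway:48010", "healthy": True},
]

def select_source(sources):
    """Return the first healthy endpoint, or None if the whole set is down."""
    for source in sources:
        if source["healthy"]:
            return source["name"]
    return None

active = select_source(redundant_sources)
print("switching data stream to:", active)   # -> the backup stream
```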
In a multi-vendor, multi-system, multi-protocol world, integrative systems tend to grow out of a tool set. At a certain scale, such systems simply become unmanageable – but not system: nozom.
As system: nozom tightly integrates with the leading schema-less database system, all configuration data, along with time series, event archives, logs and metadata, is stored in a central repository and managed centrally. When a new remote:connector is set up, only the service process needs to be installed and the remote setup is complete. The remote:connector configuration happens through Data Management Studio, which itself only needs to be connected to core:services. No need for remote desktop sessions, no more remote log file investigations, and no more remote applications to configure in order to extend or change the data infrastructure. remote:connectors are technically license-free. Time is saved, fewer headaches are created and the entire system stays simple – just as it should be, right?
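To give an idea of what centrally held, versioned configuration can look like, the sketch below treats a remote:connector's settings as a document that the connector merely adopts when the version advances; the document layout and field names are assumptions for illustration.

```python
import json

# Hypothetical configuration document held in the central repository.
# The remote:connector only installs its service process and then picks up
# this document; all editing happens centrally.
remote_connector_doc = {
    "_id": "remote-connector/windpark-north",
    "version": 14,                          # incremented centrally on every change
    "core_service": "datacenter.example.com:4840",
    "endpoints": [
        {"type": "opc-ua", "url": "opc.tcp://turbine-011:4840"},
        {"type": "opc-ua", "url": "opc.tcp://turbine-012:4840"},
    ],
    "log_level": "info",
}

def apply_if_newer(current_version, doc):
    """Connector-side sketch: adopt the central document when its version advances."""
    if doc["version"] > current_version:
        print("applying configuration version", doc["version"])
        return doc["version"]
    return current_version

running_version = apply_if_newer(13, remote_connector_doc)
print(json.dumps(remote_connector_doc, indent=2))
```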
Vendors and engineering companies often have their own naming standards, even as enterprise designation systems evolve. Efficient asset management requires a single language – provided by system: nozom.
Namespaces work well. Namespaces drive the world. Interpreting information in the context intuitively taken from its name is a proven concept of the modern world. Industrial companies and technical organizations face the challenge of hundreds of thousands or even millions of information items carrying a name context. Organizations with a fully organized namespace from device I/O label to ERP asset management are few and far between – simply because of different vendor standards, different historical designation standards from when systems came to life, different domains (yes, again the engineer vs. the accountant) and all the other sources of inconsistency, you name it. A decoupled middleware acting as an information broker between the data (and naming) sources and the consuming systems, applications and third parties helps to enforce up-to-date data labeling and context assignment. system: nozom supports companies in harmonizing the individual namespaces born in the various systems and assets towards one unified standard that follows asset hierarchies according to ISA-95 and is as complete and expressive as RDS-PP, the currently accepted designation standard for European energy producers, spanning from the top level down to the single I/O.
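The sketch below hints at the kind of mapping such a broker maintains: a vendor tag is translated into one harmonized, ISA-95-style hierarchy path plus a designation string; the mapping table, tag names and designation values are invented and do not reproduce RDS-PP syntax.

```python
# Hypothetical mapping table maintained in the middleware: vendor/source tag
# on the left, harmonized enterprise designation on the right.
tag_map = {
    "DCS01.FIC1203.PV": {
        "hierarchy": "AcmeEnergy/PlantNorth/Unit3/FeedSection/FIC-1203",  # ISA-95-style path
        "designation": "=N03 FEED FIC1203",                               # illustrative designation only
    },
    "SCADA-WT7.GEN.SPEED": {
        "hierarchy": "AcmeEnergy/WindparkEast/Turbine07/Generator/Speed",
        "designation": "=WE07 GEN SPEED",
    },
}

def harmonize(source_tag):
    """Translate a source tag into the unified namespace, or flag it as unmapped."""
    entry = tag_map.get(source_tag)
    return entry["hierarchy"] if entry else f"UNMAPPED/{source_tag}"

print(harmonize("DCS01.FIC1203.PV"))
print(harmonize("PLC9.TEMP_44"))      # not yet classified -> flagged for the modeler
```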
Control devices and systems, together with their intermediate applications, vary hugely in scope and performance; most of them hold crucial data which can impact the entire corporate process, but do not expose it – be in the know with system: nozom.
It would be of tremendous aid to companies if all their technical specialists had the tools at hand to drill down into the real-time data warehouse and find the correlations and patterns needed to identify issues and de-bottleneck processes. Whether a remote wind turbine has a yawing problem, a sub-surface oil pump in the middle of nowhere lacks performance or a particular issue is hitting the production bottom line, enabling the precious asset of technical specialists to work in real time with data from potentially widespread locations not only increases profits but is also appreciated by the workforce. Nobody likes to work with old or incorrect data. With all the ingredients at hand (sensors, devices and control systems following data interface standards, secure and adequate data communication available even to the most remote location), the time is right to equip your company with complete and current information supplied by every asset. But the time is also right to rethink the general solution approach. Instead of piling up a plethora of tools and dealing with each system integration topic individually, it is recommended to separate data sources from data-consuming applications. Three tiers typically work best.
Data source interface products are – unfortunately – not all of the same quality. Strolling down the chain of causes and effects can be a tedious task, unless a system can put it all in context and deliver the necessary detail of information exactly when required – just like system: nozom.
Many applications in the field supply connectivity and only that. Fine for the moment, but of little help when judging quality of service or tracking down root causes once lost data is detected further down the consumer chain. Inspecting remote log files is a tedious task, and in most cases the log files either do not supply the required information or are not switched to the right level of detail. system: nozom continuously logs all connection-related details, redundancy switch-overs and fallbacks, as well as any critical change in the overall infrastructure. This log information is stored in the central repository, which by its flexible and adaptive nature is perfectly suited for searching, filtering and pattern analysis.
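The query below sketches what such searching and filtering against the central log store could look like, assuming it is reachable through a MongoDB-compatible Python driver (pymongo); the collection name, field names and connector name are illustrative.

```python
from datetime import datetime, timedelta, timezone

from pymongo import MongoClient

client = MongoClient("mongodb://repo1.example.com:27017/")   # placeholder address
logs = client["nozom"]["infrastructure_log"]                 # illustrative collection name

since = datetime.now(timezone.utc) - timedelta(hours=24)

# All redundancy switch-overs reported by one connector in the last 24 hours,
# newest first.
cursor = logs.find(
    {"connector": "plant-07", "event": "redundancy_switchover", "ts": {"$gte": since}}
).sort("ts", -1)

for entry in cursor:
    print(entry["ts"], entry.get("detail", ""))
```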
Driven by a technical vision evolved over two decades of professional industrial system integration and embracing new platform and endpoint capabilities, a new system has been developed to do one thing right.
nozom was founded early in 2012 by a group of industry specialists who have worked on all aspects of industrial systems integration over the past two decades. While some of system: nozom‘s design aspects have evolved over a longer period of time, e.g. inherited from large software deployment projects in the refining and chemical industries, other parts have been taken further to benefit from the latest software design patterns, libraries and tools. The backend services are written entirely in highly parallel C++ to satisfy the performance requirements, while front-end processes and web services are built on top of WCF/WPF. Our aim for the years to come is to supply the leading middleware system for companies that do not want to compromise on their path towards the real-time enterprise. System-independent, integrative, scalable and robust.