<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Stories by Solace on Medium]]></title>
        <description><![CDATA[Stories by Solace on Medium]]></description>
        <link>https://medium.com/@solacedotcom?source=rss-9eda3d135eff------2</link>
        <image>
            <url>https://cdn-images-1.medium.com/fit/c/150/150/1*3HLo3XAZ8bohULPYqlVY5A.png</url>
            <title>Stories by Solace on Medium</title>
            <link>https://medium.com/@solacedotcom?source=rss-9eda3d135eff------2</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 17:51:00 GMT</lastBuildDate>
        <atom:link href="https://medium.com/@solacedotcom/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Governance in the World of Event-Driven APIs]]></title>
            <link>https://medium.com/event-driven-times/governance-in-the-world-of-event-driven-apis-2680bd52aa29?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/2680bd52aa29</guid>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[restful-api]]></category>
            <category><![CDATA[event-driven]]></category>
            <category><![CDATA[api-management]]></category>
            <category><![CDATA[api]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Mon, 15 May 2023 19:25:38 GMT</pubDate>
            <atom:updated>2023-08-24T15:01:21.543Z</atom:updated>
<content:encoded><![CDATA[<p>by Bruno Baloi</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*FgFsOjLyMDFKluEuY8gCGw.png" /></figure><h3>Key Considerations and Requirements when Applying the Principles of API Management to Event-Driven Architecture</h3><p>Application programming interfaces (APIs) have been around for a long time, but API management (APIM) per se only arrived on the scene in the last two decades. APIM aimed to give organizations control of development processes that often saw the uncontrolled proliferation of APIs. For example, developers would often build APIs without following enterprise standards or specifications, write redundant logic, and deploy APIs without security considerations in mind.</p><p>This became a liability: the more unsecured APIs you have, the more internal data and services you’re putting at risk. APIM introduced a design-first approach to building APIs, offered security policies one could apply to their APIs, and gave developers better control over where APIs were deployed. In short, APIM enabled the effective governance of enterprise assets and processes, and added a layer of security to the enterprise’s digital façade.</p><p>APIM also gave people better visibility into existing assets. This increased the reuse of APIs, which reduced duplication of work and accelerated time to value.</p><p>Although APIM has evolved, it has done so primarily in the realm of synchronous communications, benefiting mostly SOAP and more recently RESTful microservices and APIs. The realm of asynchronous communications, however, has greatly lagged in the adoption of an API management strategy.
As such, even though event-driven APIs have garnered a lot of interest in the market in recent years, not much has been done to bring them the same type of governance that exists in the synchronous space.</p><p>In the following sections I will explore why governance is critical for the event-driven world, and what solutions need to evolve to fill that gap.</p><h3>Challenges Governing Event-Driven APIs</h3><p>Before we explore what APIM and governance could look like for the realm of event-driven APIs, it is important to understand the differences between RESTful APIs and their event-driven counterparts.</p><h3>Semantic Differences Between RESTful and Event-Driven APIs</h3><ul><li><strong>RESTful APIs</strong> utilize a variety of actions/verbs called methods: GET, POST, PUT, and DELETE.</li><li><strong>Event-driven APIs</strong> only use two verbs: publish and subscribe.</li></ul><h3>Structural Differences Between RESTful and Event-Driven APIs</h3><ul><li><strong>RESTful APIs</strong> have resources that map to one endpoint, and only have to support one protocol: HTTP.</li><li><strong>Event-driven APIs</strong> have channels (topics or queues) with two endpoints: a producer and a consumer. They need to support a variety of transport protocol bindings, e.g. AMQP, Apache Kafka, JMS, and MQTT.</li></ul><p>It’s these structural differences that make the governance of event-driven APIs more difficult than that of RESTful APIs. In the world of RESTful APIs you need to apply security policies to one endpoint, but for event-driven interactions you need to apply policies at the producer and consumer ends of the channel. And since there is only one protocol for REST (HTTP), there are no binding or implementation implications. For event-driven interactions, different transport protocols/brokers have different capabilities, which impacts the implementation and enforcement of security policies.
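</p><p>To make the structural difference concrete, here is a minimal sketch (all names are illustrative, not from any product) of why an event-driven channel has two enforcement points while a RESTful API has one:</p>

```python
from dataclasses import dataclass, field

# Illustrative model only: a RESTful API exposes one endpoint to police,
# while an event-driven channel has a producer end and a consumer end,
# each of which needs its own policies.

@dataclass
class RestApi:
    path: str
    policies: list = field(default_factory=list)      # one enforcement point

@dataclass
class Channel:
    name: str
    protocol: str                                     # e.g. "MQTT", "AMQP", "JMS"
    producer_policies: list = field(default_factory=list)
    consumer_policies: list = field(default_factory=list)

    def enforcement_points(self):
        # Governance tooling must police BOTH ends of the channel.
        return {"producer": self.producer_policies,
                "consumer": self.consumer_policies}

orders = Channel("orders/created", "MQTT",
                 producer_policies=["authn", "schema-validation"],
                 consumer_policies=["authn", "authz"])
print(len(orders.enforcement_points()))  # 2, vs. 1 endpoint for a RestApi
```

<p>Protocol capabilities compound this two-endpoint problem.</p><p>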
For instance, Kafka only supports topics, whereas JMS supports topics and queues. Similarly, they offer varying degrees of support for authentication and authorization that need to be considered. Also, only some protocols/brokers support throttling and rate limiting.</p><p>It’s clear that governing event-driven APIs requires a new approach and toolset.</p><p>Here is a high-level rendition of the components and flow:</p><figure><img alt="High-level architecture of governance of event-driven applications, APIs, and architecture." src="https://cdn-images-1.medium.com/max/1024/0*tqGyk5BicL2Ctf2r.png" /></figure><h3>Potential Solution for Governing Event-Driven APIs</h3><p>An effective governance blueprint for RESTful APIs requires four things:</p><ol><li><strong>API Manager</strong>: responsible for facilitating the definition of policies and their assignment to APIs.</li><li><strong>Gateways</strong>: responsible for the enforcement of policies on the API endpoints.</li><li><strong>Clients</strong>: the systems that actually invoke the APIs.</li><li><strong>Auditing Tool</strong>: responsible for ensuring that the artifacts designed are the same as those deployed.</li></ol><p>The governance of event-driven APIs requires the same participants, but they play different roles. The API manager and the gateway have similar capabilities, but differ in the types of policies available and their application (i.e. two endpoints vs.
one), and clients can be producers or consumers.</p><p>There are a couple of ways to approach the challenge of governing event-driven APIs:</p><ul><li><strong>Intermediated/Indirect</strong>: Follow the RESTful APIM model of having an API manager and a gateway.</li><li><strong>Dis-intermediated/Direct</strong>: Still have an API manager, but not a gateway, and enforce policies directly at the producer and consumer endpoints.</li></ul><p>In the following sections I will analyze and explain both options.</p><h3>Intermediated Governance of Event-Driven APIs (Gateway based)</h3><p>Intermediated governance refers to an architecture that has a component that intercepts the traffic between a client and an API, and in the process applies governance rules (aka policies). That component is an API gateway. Essentially the gateway hosts and manages API proxies and enforces policies on those proxies. That is, before routing the traffic from the client to the required API, it will apply the necessary security and traffic shaping constraints.</p><p>As previously indicated, this process is rather simple when you’re talking about synchronous RESTful APIs. For the asynchronous event-driven domain, however, things are more involved. There will still be a gateway in the middle, but instead of just a client and an API, where access needs to be restricted only for the client, you now have a producer and a consumer, and you need to restrict access to both. Another consideration for an event-driven API gateway is the need to support multiple protocols. A given producer and consumer will leverage the same transport protocol (Solace/Kafka/JMS/MQTT/AMQP etc.), but since protocols differ in their capabilities, the gateway needs to know which policies apply to which protocol.</p><p>This diagram shows some differences between gateways in RESTful and event-driven environments:</p><figure><img alt="Differences between gateways in RESTful and event-driven architecture (EDA) environments."
src="https://cdn-images-1.medium.com/max/1024/0*Rj0eMgwqO4g6peIf.png" /></figure><h3>Pros and Cons of Intermediated Governance</h3><p>The key benefit of a gateway is that it adds a layer of security between the consumer and the resource; a layer which can be scaled independently from the rest of the infrastructure. Its capabilities can also be evolved without impacting either producer or consumer code. There is also a separation of concerns in terms of enabling security architects to design both the governance model and the gateways that enforce it.</p><p>There are two disadvantages of using gateways in event-driven APIM:</p><ul><li>They add a layer of operational management, and additional cost.</li><li>They increase latency — even though it’s not always noticeable — due to the interception process.</li></ul><p>A gateway-based architecture is ideal for scenarios in which cost and latency aren’t key considerations, and gateway-based systems are easier to manage since they do not require additional development costs.</p><h3>Dis-Intermediated Governance of Event-Driven APIs (non-Gateway based)</h3><p>Alternatively, you can implement governance of event-driven APIs without a gateway. You still need a “manager” to handle the definition of policies and their assignment to various channels, but may not need a gateway if producers and consumers can:</p><ul><li>Access the API manager to download the policies associated with the channels they are connecting to.</li><li>Have the ability to enforce those policies locally.</li></ul><p>This diagram shows a disintermediated governance model:</p><figure><img alt="Disintermediated (brokerless) governance of event-driven APIs and architecture." src="https://cdn-images-1.medium.com/max/1024/0*OF3O783c5K6M_a6H.png" /></figure><p>In this instance, you are essentially delegating policy enforcement to producers and consumers.
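</p><p>Delegating enforcement this way could look roughly like the following sketch. The <code>GovernanceAgent</code> class and its interfaces are hypothetical, since no standard governance SDK exists today; the policy lookup is injected as a plain callable:</p>

```python
# Hypothetical sketch of a governance agent embedded in a producer: it pulls
# the policies for a channel from an API manager and enforces them locally
# before each publish.

class PolicyViolation(Exception):
    pass

class GovernanceAgent:
    def __init__(self, fetch_policies):
        self.fetch_policies = fetch_policies   # callable: channel -> [policy, ...]
        self.cache = {}                        # policies cached per channel

    def policies_for(self, channel):
        if channel not in self.cache:
            self.cache[channel] = self.fetch_policies(channel)
        return self.cache[channel]

    def check_publish(self, channel, event):
        # Each policy is a callable returning True if the event is allowed.
        for policy in self.policies_for(channel):
            if not policy(event):
                raise PolicyViolation(f"publish to {channel!r} blocked")
        return True

# Example policy: a simple validation check requiring an order_id field.
require_order_id = lambda event: "order_id" in event

agent = GovernanceAgent(lambda channel: [require_order_id])
agent.check_publish("orders/created", {"order_id": 42})   # allowed
```

<p>In a real SDK, the lookup would be an authenticated call to the API manager, and the agent would also report metrics back to it.</p><p>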
For instance, when a developer generates the code scaffolding for a given AsyncAPI, they would have at their disposal a “Governance SDK” or library they could incorporate into their implementation. The SDK would take the form of an embedded agent that could then perform the tasks of:</p><ul><li>Interacting with the API manager to get the policies that the producer/consumer needs to apply to the channels they interact with.</li><li>Emitting stats/metrics to the API manager to visualize the behavior/performance of the producer/consumer.</li></ul><p>In essence, the governance agents would enable a seamless interaction with the event-driven infrastructure without really impacting the development process.</p><h3>Pros and Cons of Disintermediated Governance</h3><p>Similar to the intermediated governance model, this achieves the same type of separation of concerns, but enforcement will be done by the clients (producers/consumers), not a gateway. The advantage is that there’s less operational impact on the infrastructure, as there are no gateways to deploy and provision, so less infrastructure overhead and cost, and that there’s no latency impact on the data flow.</p><p>The <em>downside</em> of the direct governance model is that it puts the onus on the developers to incorporate governance into their applications and microservices. Depending on how the governance SDK is implemented (i.e. if it is configuration-driven), one option could be to incorporate the addition of the governance agent into the DevOps/CICD pipeline by leveraging an aspect-oriented programming (AOP) framework. This would remove the need for the developers to manually add the governance logic to their codebase.</p><h3>Policy Types &amp; Implementation Considerations</h3><p>The types of policies applicable to event-driven APIs are similar to their counterparts in the RESTful arena.
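</p><p>As one concrete example, a volumetric (rate-limiting) policy can be enforced with a token bucket. This is a generic sketch, not tied to any vendor’s implementation:</p>

```python
import time

class TokenBucket:
    """Allow at most `rate` events per second on a channel, with bursts of
    up to `capacity` events (a volumetric/throttling policy)."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity = rate, capacity
        self.clock = clock
        self.tokens = capacity
        self.last = clock()

    def allow(self):
        now = self.clock()
        # Refill tokens for the time elapsed since the last call, capped
        # at the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=100, capacity=5)
results = [bucket.allow() for _ in range(6)]
# In a fast loop the first 5 events pass and the 6th is throttled.
print(results)
```

<p>A circuit-breaker policy has the same shape: instead of refilling tokens, count SLA violations and block the channel once a threshold is crossed.</p><p>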
Here are some policies that could be applied to the governance of event-driven APIs:</p><ul><li><strong>Authentication:</strong> Ensure a client is who they say they are when trying to access a broker or event mesh.</li><li><strong>Authorization:</strong> Ensure that authenticated parties have been granted access to specific resources like channels and events. E.g. a producer may only have access to a certain set of channels, and may only be able to publish/consume a certain subset of events.</li><li><strong>Volumetric (Rate Limiting/Throttling):</strong> Limit the number of events per second a given channel allows.</li><li><strong>Circuit Breaker:</strong> Block access to a channel if a client violates an SLA a certain number of times.</li><li><strong>Segmentation:</strong> Limit access to a subset of events on a given channel.</li><li><strong>Content Filtering:</strong> Only allow events with certain payload patterns on a channel.</li><li><strong>Validation:</strong> Ensure that events on a channel follow a certain structure/schema.</li><li><strong>White/Black Listing:</strong> Control which clients can connect to a given broker/node in the event mesh.</li></ul><h3>The Importance and Advantages of a Virtual Security Layer</h3><p>A major challenge with policy enforcement for event-driven APIs is that it needs to be applied over a wide variety of transport protocols/brokers that don’t have the same capabilities. For instance, most brokers have some authentication and authorization mechanism via an access control list (ACL), but most don’t support validation, volumetric policies, or indeed <em>most</em> of the other policy types outlined above, at least not to the same degree.</p><p>To apply consistent governance across all transports/brokers, you almost <em>have</em> to have a virtual security layer that can translate abstract policies into the native capabilities of the brokers.
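</p><p>One way to picture such a layer is a capability table plus a planner that decides, per policy, whether enforcement is native to the broker or falls back to a governance agent. The capability sets below are purely illustrative, not an authoritative feature matrix for any real broker:</p>

```python
# Sketch of a "virtual security layer": abstract policies are mapped to a
# broker's native mechanism where one exists, with agent-side enforcement
# as the fallback. The capability data here is made up for illustration.

NATIVE_CAPABILITIES = {
    "broker_a": {"authz", "rate_limit"},
    "broker_b": {"authz"},
    "broker_c": {"authz", "rate_limit", "validation"},
}

def plan_enforcement(broker, policies):
    """Decide, per policy, whether enforcement is native or agent-side."""
    native = NATIVE_CAPABILITIES.get(broker, set())
    return {p: ("native" if p in native else "agent") for p in policies}

plan = plan_enforcement("broker_b", ["authz", "rate_limit", "validation"])
print(plan)  # {'authz': 'native', 'rate_limit': 'agent', 'validation': 'agent'}
```

<p>Architects then design against the abstract policy vocabulary, and the planner (plus agents) absorbs the broker differences.</p><p>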
There are two ways to implement a “standardized” approach to governing event-driven APIs:</p><ol><li>Only implement policies that a broker can natively sustain, e.g. a throttling policy for Solace, Kafka, etc.<br> This is easy, but means architects and administrators need to know the capabilities of most brokers.</li><li>Implement all policies as a virtual model where, in cases where there are no native broker capabilities, the policy agent provides the necessary enforcement. This takes more work up front, but will make the lives of your security architects easier as they’ll be able to think in terms of security constraints and patterns, without needing intimate knowledge of individual broker capabilities.</li></ol><h3>Event-Driven API Management Capabilities</h3><p>In the RESTful world, APIM is usually part of a suite of products, i.e. an API portal that contains:</p><ul><li><strong>Design Tool</strong> that enables users to create the API specification, e.g. OAS, RAML etc.</li><li><strong>Catalog</strong> where users can publish and discover API specifications.</li><li><strong>Manager</strong> that allows users to secure the APIs via a wide array of policies.</li></ul><p>In terms of API lifecycle management, users design an API, publish it to the catalog, and then once it’s deployed, use the API manager to secure the API endpoint.</p><p>Managing event-driven APIs is similar. The difference is in the type of assets that are built, published and managed. Borrowing from the <a href="https://www.asyncapi.com/docs/specifications/v2.0.0">AsyncAPI specification</a>, the key assets to be managed in terms of security are channels, publishers, and subscribers. In addition, there will be the various policy types that the manager will allow to be associated with the various channels, publishers and subscribers.
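</p><p>Schematically, that association might look like the structure below. The field names are illustrative only, not part of the AsyncAPI specification:</p>

```python
# Toy asset model for an event-driven API manager: policies are attached
# to each side (publish/subscribe) of a channel, along with its protocol
# bindings. All field names are invented for illustration.

manager_assets = {
    "channels": {
        "orders/created": {
            "bindings": ["amqp"],                       # transport protocol(s)
            "publish":   {"policies": ["authn", "validation"]},
            "subscribe": {"policies": ["authn", "authz", "rate_limit"]},
        }
    }
}

def policies_for(assets, channel, operation):
    """Look up the policies associated with one side of a channel."""
    return assets["channels"][channel][operation]["policies"]

print(policies_for(manager_assets, "orders/created", "subscribe"))
# ['authn', 'authz', 'rate_limit']
```

<p>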
The manager may also deal with certificate management to ensure that the connections to the brokers are TLS compliant wherever possible.</p><h3>Governance Ecosystem Integration Patterns</h3><p>A blueprint for governance of event-driven APIs can be built from the following “Lego blocks”:</p><ul><li>Catalog</li><li>Manager</li><li>Gateway</li><li>Governance agent</li><li>Auditor</li></ul><p>Essentially, the first aspect of any governance solution will be the need for a repository of events and APIs, with the APIs describing the flow of events over a given set of channels that are bound to a given set of brokers and protocols. The API manager would enable the design of a security scheme that gives control over the assets defined in the catalog. Lastly, enforcement of the security scheme comes into play, with the option of taking an intermediated or disintermediated approach. Once the infrastructure is in place, you need to continuously monitor the run-time artifacts against the design-time artifacts.</p><p>Most enterprises have a well-defined CI/CD process that lets them control asset development and provisioning/deployment. Sometimes errors do occur, however, or somebody sidesteps the CI/CD process, resulting in assets being deployed and used that are not under management. For example, deploying consumers/producers without governance agents, or manually creating topics/queues. These “un-managed” assets could pose a security risk. That’s why you need an “auditor” component that monitors traffic and makes sure the production system matches the design/architecture, and alerts administrators about anomalies.</p><p>This diagram shows a sample <em>governance blueprint</em>:</p><figure><img alt="Blueprint for governance of event-driven APIs and architecture."
src="https://cdn-images-1.medium.com/max/1024/0*MsAEKaDAf6CsZ8pL.png" /></figure><p>Today there are no technology providers that offer all the components indicated above, so to assemble a governance solution that meets all of your needs, you need to use a combination of off-the-shelf products, custom code and/or open source components.</p><p>An open ecosystem for the governance of event-driven APIs would allow for a great degree of flexibility and interoperability. For instance, there are vendors that offer catalogs for event-driven APIs, and API managers, and gateways that support some degree of enforcement across event-driven APIs.</p><p>In an open ecosystem for the governance of event-driven APIs:</p><ul><li><strong>Catalogs</strong> should support the ability to import/ingest different kinds of document sets (e.g. <a href="https://solace.com/blog/asyncapi-cloudevents-opentelemetry-event-driven-specs-devops/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=governance-api&amp;utm_campaign=medium_eda">AsyncAPI and CloudEvents</a>) from other catalogs or repositories. They should also have discovery modules that can scan and import event and infrastructure definitions from various brokers.</li><li><strong>Managers</strong> should allow the importation of externally defined policies, i.e. offer an SDK developers can use to create and design their own policies. This is especially valuable when paired with governance agents, where developers can define policies and build their own enforcement in the governance agent.</li><li><strong>Gateways</strong> should have APIs that allow them to be managed and provisioned by any API manager.
Depending on the gateway’s openness, users can follow the governance agent model, where external parties can provide enforcement code for new externally defined policies.</li><li><strong>Auditors</strong> should be able to monitor various brokers for inconsistencies in configurations and event flows.</li></ul><p>In short, governing event-driven APIs is most successful when it’s based on an open ecosystem where vendors and the developer community collaborate to expand the ecosystem’s capabilities.</p><h3>Coexistence of Governance Solutions for RESTful and Event-Driven APIs</h3><p>Throughout this article I have highlighted the differences between the management of RESTful and event-driven APIs. Despite the differences, you need to do both. Most enterprises would much prefer to have a unified governance solution that lets them effectively govern and model their RESTful and event-driven APIs with a “single pane of glass”.</p><p>Very few vendors offer the tools you need to manage both kinds of APIs, but over time that number will increase. The reality is that not many vendors have the knowledge and the ability to fully support both paradigms, and it will be challenging to provide a high degree of unification. The most likely scenario (and this goes back to interoperability and integration) is that APIM solutions for RESTful and event-driven APIs will offer ways to extend their capabilities and to include each other in their user experience. Alternatively, there can be a higher-order interface (i.e. a portal) that presents those APIM capabilities in one place.</p><p>It is difficult to predict how the market for <em>Unified/Universal API Management</em> will evolve, but in the short term enterprises will most likely have to entertain two separate technology stacks.
Eventually, mature vendors will emerge that natively support both paradigms; however, even then there will likely be complementary technologies that augment existing capabilities.</p><h3>Auditing</h3><p>Auditing is often an afterthought in designing governance solutions, but it’s a critical feedback mechanism that ensures that the entire ecosystem is operating in a balanced and controlled way.</p><p>Auditing is not trivial to set up as it requires:</p><ul><li>The ability to interact with multiple brokers to get access to the inventory of infrastructure components defined (queues, topics, bridges etc.)</li><li>The ability to act as a “sniffer” or an event sink to look at the event types on the wire and see if they are consistent with their respective event definitions</li><li>The ability to interact with EDA-centric API managers and event catalogs to compare the live configurations/event data to their intended architecture.</li></ul><p>As such, an auditor needs to be able to “speak” multiple protocols, perform context-based matching, and potentially even look at event patterns. Although there are auditing solutions today, they are not designed to cover such a range of capabilities; as such, an auditing solution may involve multiple products/components.</p><h3>Summary</h3><p>Event-driven architecture has been around for decades, and its architectural patterns are well known and implemented in many products, but governance has always taken a back seat. In the last few years, event-driven architecture has emerged as the de facto standard way of building distributed applications. We are at an inflection point, where we will see technologies that support it soar in demand. This proliferation of event-driven architecture solutions will introduce governance requirements that will need to be addressed very soon.</p><p>I’ve introduced here the requirements and pros and cons of various approaches.
Achieving comprehensive governance across RESTful and event-driven APIs is not a trivial task, and requires careful consideration and architecting. Although API specifications in the event-driven space have begun to mature (e.g. AsyncAPI and CloudEvents), standards like OAuth and OIDC that are well defined and implemented in the RESTful arena are still coming together for event-driven APIs. Although the two paradigms are different, functional policies are in fact similar (although implementation and enforcement will vary by broker).</p><p>In short, we are at the beginning of the governance journey for event-driven architecture, but it is a journey that will accelerate over the next few years. It is important that enterprises consider soon how they will govern event-driven APIs and start exploring solutions now, even though those solutions will evolve over time. Ultimately, assembling a solution for the governance of event-driven APIs will be based on the requirements at hand and the needs of the enterprise. Lastly, specific vendors will emerge as leaders in the space, but it is important to remember that adoption will be driven by ease of use and embracing interoperability and openness.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*TQBVt3A5JxMY-TRX.jpeg" /><figcaption>Over 25 years of experience in IT, in a wide variety of roles (developer, architect, product manager, director of IT/Architecture), always looking to find ways to stimulate the creative process. A tireless innovator and a seasoned technology management professional. As an innovator, I often take unorthodox routes in order to arrive at the optimal solution/design.
By bringing together diverse domain knowledge and expertise from different disciplines I always try to look at things from multiple angles and follow a philosophy of making design a way of life.</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/governance-event-driven-apis/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=governance-api&amp;utm_campaign=medium_eda"><em>https://solace.com</em></a><em> on May 15, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2680bd52aa29" width="1" height="1" alt=""><hr><p><a href="https://medium.com/event-driven-times/governance-in-the-world-of-event-driven-apis-2680bd52aa29">Governance in the World of Event-Driven APIs</a> was originally published in <a href="https://medium.com/event-driven-times">Event-Driven Times</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How Does PubSub+ Cloud Help you Secure Your Data in the Cloud?]]></title>
            <link>https://medium.com/pubsubplus/how-does-pubsub-cloud-help-you-secure-your-data-in-the-cloud-436628d47848?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/436628d47848</guid>
            <category><![CDATA[data-security]]></category>
            <category><![CDATA[cloud]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[deployment]]></category>
            <category><![CDATA[cloud-security]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Thu, 20 Apr 2023 13:11:47 GMT</pubDate>
            <atom:updated>2023-08-22T13:02:27.850Z</atom:updated>
<content:encoded><![CDATA[<p>by Preena Patel</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*t7nWqlNX_4itx2stSFSK7w.png" /></figure><p>Modern businesses are rapidly adopting cloud-based computing platforms such as IaaS, PaaS, and SaaS. The dynamic nature of infrastructure management, especially as applications and services multiply, can cause several problems even when organizations adequately resource their departments. These as-a-service models allow organizations to outsource many of the time-consuming IT-related tasks.</p><p>As businesses migrate to the cloud, it has become crucial to understand the security requirements for protecting data. Although the management of this infrastructure may be transferred to a third-party cloud computing service provider, the company is ultimately accountable for the security of the data assets.</p><p>Cloud service providers strive to ensure the integrity of their servers by implementing policies, procedures and tools in the areas of authentication, authorization and encryption. Enterprises, meanwhile, must consider their particular circumstances when securing data, applications, and workloads that are housed in the cloud.</p><p>Security problems have grown as the digital environment continues to expand. These hazards often target cloud computing providers, and an organization’s lack of visibility into data access and movement makes them difficult to manage. Regardless of where client information is stored, organizations may experience major governance and compliance difficulties if they don’t take proactive steps to improve their cloud security.</p><p>That’s why cloud security needs to be a major talking point for companies of all sizes. Almost every aspect of contemporary computing is supported by cloud infrastructure, across all verticals and all sectors. However, putting in place sufficient defenses against contemporary cyberattacks is essential for successful cloud adoption.
Cloud security solutions and best practices are essential for maintaining business continuity regardless of whether a firm uses a public, private, or hybrid cloud environment.</p><h3>What Exactly is Cloud Security?</h3><p>Cloud security is a set of practices and tools designed to protect data stored in a cloud computing environment from theft, deletion and leakage. Cloud security is required as organizations implement their digital transformation strategy and integrate cloud-based tools and services into their infrastructure. Protective methods include:</p><ul><li>Access control</li><li>Firewalls</li><li>Penetration testing</li><li>Obfuscation</li><li>Virtual private networks (VPNs)</li><li>Not using public internet connections</li><li>Tokenization</li></ul><p>The terms “digital transformation” and “cloud migration” have become commonplace in business contexts in recent years. Both expressions are driven by a desire for change, even if their meanings differ depending on the organization.</p><p>New challenges in balancing security and productivity levels emerge as companies implement these ideas and work to improve their operational strategy. While moving to a cloud-based environment can have various consequences if done incorrectly, more modern technologies allow firms to develop capabilities outside the confines of on-premises infrastructure. To strike the right balance, it is important to comprehend how modern enterprises may profit from the utilization of connected cloud technology while putting the best cloud security practices into place.</p><h3>PubSub+ Cloud: What is it?</h3><p>As I started using <a href="https://solace.com/products/platform/cloud/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pubsub-cloud-security&amp;utm_campaign=medium_pubsub">PubSub+ Cloud</a>, I could see how well it combines best practices, knowledge, and technology for event-driven architecture (EDA) on a single platform.
It is a full event streaming, event management, and monitoring platform. As a software-as-a-service (SaaS) offering, the platform:</p><ul><li>Provided me with everything I needed to hasten the implementation of EDA in my company and to meet the requirements of contemporary use cases that call for real-time, intelligent event streaming.</li><li>Enabled me to build, install, manage, monitor, and regulate my event streaming infrastructure (including the events that flow over it) in the most secure way possible using a simple, unified interface.</li></ul><p>I find PubSub+ Cloud to be quite secure. Every level of the platform is developed with enterprise-grade security to keep the event-driven architecture and its data secure.</p><p>While Solace completed some deployment tasks for me, most of them I completed myself. In addition, I have come across several deployment options that reference Solace-owned or customer-owned resources and infrastructure.</p><h3>PubSub+ Cloud Security Capabilities</h3><h3>Cloud Security Architecture</h3><p>With PubSub+ Cloud, I got the option of deploying our services to public regions, dedicated regions, or customer-controlled regions. With public regions and dedicated regions, customer-specific (non-shared) event broker services were deployed in a shared region. I used the public Internet to connect applications.</p><figure><img alt="A diagram showing PubSub+ securing the network." src="https://cdn-images-1.medium.com/max/624/0*BQNiNs6teZFTr3wZ.png" /></figure><p>With dedicated regions, event broker services are deployed in a region dedicated to each customer, which was ideal for my use case as it required isolated infrastructure and applications that connect via a private network rather than the public Internet.</p><p>A customer-controlled region enabled me to deploy to a Kubernetes cluster in a region that I manage.
I tuned and started managing the infrastructure and Kubernetes cluster.</p><p>The deployment option chosen determines whether Solace or the customer manages certain security aspects. My security responsibilities grew as I progressed from public regions to dedicated regions to customer-controlled regions.</p><h3>Data Security</h3><p>PubSub+ Cloud distributes several kinds of data, including management, monitoring, and messaging data. I find it crucial to understand that the paths taken by the management, monitoring, and messaging data are distinct and clearly defined.</p><p>Logically, PubSub+ Cloud is made up of two data planes: the control plane (for management and monitoring data) and the messaging plane (for actual communications and interactions, i.e. messaging data). This helped me more easily handle both kinds of data. Whether at rest or in transit, all my data, including management and messaging data, is secure and encrypted.</p><figure><img alt="A diagram showing how the Planes logically look in a Typical Deployment." src="https://cdn-images-1.medium.com/max/624/0*ghUlnLErF68SWH3z.png" /></figure><h3>Isolation of VPCs and VNets</h3><p>I chose to deploy the event brokers into a dedicated and isolated VPC. This isolated VPC/VNet provided added security that met my requirements:</p><ul><li>The event broker must not be accessible via the public internet.</li><li>The event broker needs to run in an isolated environment (i.e.
I didn’t want the event brokers to run in a multi-tenant environment or on shared public infrastructure).</li><li>I had data localization requirements (I needed a VPC/VNet kept in a specific region of the world).</li></ul><h3>Authentication and Authorization of Client Applications</h3><p>To me, it’s amazing to have precise, granular control over how client applications authenticate and are authorized to access event broker services and manage them.</p><p>Client applications fall into two categories: messaging apps that connect to event broker services to exchange data and events (e.g., publish/subscribe) and applications that manage and monitor event broker services. These applications helped me automate the management of event broker services (configuration, monitoring, and so on) as is common in continuous integration and delivery (CI/CD) workflows.</p><h3>Authorization and Authentication of Users</h3><p>To construct event broker services, monitor event broker services, and develop an event-driven architecture, users must be authenticated and authorized. I find this to be a great security measure.</p><p>Now, I can effectively manage user accounts and grant permissions so that users can access the various service categories in the PubSub+ Cloud Console using the account and user administration system for PubSub+ Cloud.</p><p>PubSub+ Cloud can be integrated with an OpenID Connect central identity management system for simpler user management and single sign-on (SSO). Azure Active Directory, Okta, PingOne, and Auth0 are all supported. I’m surprised at how simple yet secure it is!</p><h3>Protecting Customer Data</h3><p>In PubSub+ Cloud, customer data is always safeguarded.</p><p>The data is logically divided into a control plane and a message plane by the PubSub+ Cloud architecture.
The messaging plane carries the messaging data between the event broker services and client applications, while the control plane carries data for management and monitoring.</p><p>On both planes, data is encrypted at rest using AES-256 and in transit using TLS 1.2. Keeping the two kinds of data on separate planes is crucial to the security architecture for the following reasons:</p><ul><li>It enabled me to have more control over the data since I could, for example, store all messaging data in a separate VPC or VNet for customer-controlled environments.</li><li>Greater security and reliability (if one plane is compromised, the other is unaffected).</li></ul><h3>System Logs and Audit Logs</h3><p>Full logs and system notifications are accessible using PubSub+ Cloud, and they include:</p><ul><li>Security-related access audit records for the PubSub+ Cloud Console.</li><li>Complete logs for the event broker services (I access these by setting up SysLog Forwarding).</li><li>The portion of event broker service logs sent to the central monitoring server, which is accessible from PubSub+ Insights and helps to generate additional notifications and alerts.</li></ul><p>Keep in mind that logs and any other data gathered to check on the status of the system and the event broker services do not include any personally identifying information. And this has made me feel immensely secure!</p><h3>Industry Standards Adherence</h3><p>Cloud Security Alliance Consensus Assessments Initiative Questionnaire (CAIQ) v3.1, Service Organization Control (SOC) 2 Type 2, and ISO/IEC 27001:2013 accreditation are only a few of the significant industry standards for cloud and SaaS that PubSub+ Cloud complies with.</p><h3>Reinforced Operational and Developmental Processes</h3><p>Security seems to be the priority in the design of the PubSub+ Cloud platform.
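The TLS 1.2 floor described above has a client-side counterpart: applications connecting to the messaging plane should refuse older protocol versions. A minimal sketch using only Python's standard ssl module (the broker endpoint named in the comment is hypothetical, and this is not a Solace API):

```python
import ssl

def make_tls_context() -> ssl.SSLContext:
    """Build a client-side TLS context that refuses anything below TLS 1.2."""
    ctx = ssl.create_default_context()  # verifies the server certificate by default
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # reject TLS 1.0 and 1.1
    return ctx

# A messaging client would pass this context when opening its socket, e.g. when
# wrapping a connection to a hypothetical endpoint like
# "myservice.messaging.solace.cloud:55443".
ctx = make_tls_context()
```

Certificate verification stays on by default here, which matches the platform's secure-by-default posture.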
I find that the security of the PubSub+ Cloud platform is maintained through exceptionally strong development and operational procedures. Key areas include:</p><ul><li><strong>Operational practices:</strong> Guaranteeing the security of PubSub+ Cloud production settings and tracking and resolving operational/security incidents with a detailed root-cause investigation.</li><li><strong>Ongoing inspection:</strong> Development and production processes make sure that every change is regularly examined with increasing thoroughness. The agile method’s primary focus on security involves threat-modelling analyses and targeted responses to any potential security issues that might be brought up.</li><li><strong>Access Restrictions:</strong> With a clear hierarchical access structure and established line of command, I could see that Solace has numerous regulations and stringent internal access restrictions. At every stage of the development and production pipelines, internal audits and testing are routinely conducted to look for vulnerabilities. Ongoing security efforts aim to maintain a safe and dependable environment that continuously improves to fulfill ongoing security requirements.</li><li><strong>HA/DR:</strong> The Solace Home Cloud and PubSub+ Cloud Console have 99.95% availability and disaster-recovery plans in place. Numerous safeguards are in place to prevent the loss of important data and to cut down on downtime and recovery time.</li><li><strong>Best Cloud Providers:</strong> Physical and environmental security relies on top cloud service providers to defend against threats. Vendors are chosen based on necessary controls (such as power/electrical controls, physical-access safeguards, and fire detection/suppression systems).</li></ul><h3>Additional Security Considerations</h3><p>The event broker services were installed with a secure configuration, and PubSub+ Cloud is secure by default.
I configured additional settings and applied security updates as required to further strengthen security.</p><p>The initial integration of PubSub+ Cloud’s default settings strikes a balance between security and the ease of development and production requirements. When I needed more security, Solace offered extra recommendations for my environment to further harden deployments in the infrastructure I control.</p><h3>Conclusion</h3><p>Security is crucial for maintaining the integrity of the services as the event broker services convey the messaging data. Initially, I was a little reluctant. But with understanding and support from the team, I quickly became comfortable. As I started using it, I found Solace PubSub+ Cloud to be a perfect combination of software components and physical sites. My data stays secure and private.</p><p>As Solace PubSub+ Cloud adheres to the General Data Protection Regulation (GDPR), which requires Solace to safeguard the personal information and privacy of EU residents, specifically for customers based in the EU, I could trust it completely with my data. Without the express consent of the consumer, personal data will not be used for reasons unrelated to those for which it was obtained, which is such a relief! PubSub+ Cloud is built to enable mission-critical applications, and Solace has incorporated enterprise-grade security into every level of the platform to keep our message data safe. Their cutting-edge security safeguards our data so that we can concentrate on creating world-class apps.</p><p>I hope this article has helped you understand PubSub+ Cloud’s security capabilities so you can more effectively keep your own deployments safe and secure!</p><h3>About the Author</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*et5UMOuar1Z70lHS.jpg" /><figcaption>Preena is a Technical Writer at Maropost.
She is an experienced technical writing professional who has worked with clients and enterprises in the information technology and services industry, including Microsoft. She is skilled in WordPress, Technical Documentation, Markdown, Confluence, Jira, Visual Studio Code, Communication, Editing, and Academic Writing.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*VBNpf1g1G54-tsKT.png" /><figcaption>The <a href="https://solace.com/scholars/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pubsub-cloud-security&amp;utm_campaign=medium_pubsub">Solace Scholars</a> Program encourages writers from our community to create technical and original content that describes what our technology and/or third-party integrations are being used for and exciting projects that are made possible by event-driven architecture. Solace Scholars are great at solving challenges and explaining complex ideas. If you’re interested in writing for us and learning about what you can earn, check out the website and submit an idea to us!</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/pubsub-cloud-secure-data/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pubsub-cloud-security&amp;utm_campaign=medium_pubsub"><em>https://solace.com</em></a><em> on April 20, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=436628d47848" width="1" height="1" alt=""><hr><p><a href="https://medium.com/pubsubplus/how-does-pubsub-cloud-help-you-secure-your-data-in-the-cloud-436628d47848">How Does PubSub+ Cloud Help you Secure Your Data in the Cloud?</a> was originally published in <a href="https://medium.com/pubsubplus">Solace PubSub+</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Leveraging Datadog and Solace PubSub+ for Improved Visibility in Event-Driven Systems]]></title>
            <link>https://medium.com/pubsubplus/leveraging-datadog-and-solace-pubsub-for-improved-visibility-in-event-driven-systems-8d309dbdc116?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/8d309dbdc116</guid>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[distributed-tracing]]></category>
            <category><![CDATA[datadog]]></category>
            <category><![CDATA[opentelemetry]]></category>
            <category><![CDATA[event-driven-systems]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Tue, 04 Apr 2023 20:06:08 GMT</pubDate>
            <atom:updated>2023-08-08T13:01:56.924Z</atom:updated>
<content:encoded><![CDATA[<p>by Tamimi Ahmad</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*HmCQGei35bWN-1MIR915rw.png" /></figure><p>To manage complex distributed systems, you need to be able to observe and understand what’s happening to all of the components that make up the system, including the flow of information between them. Observability, however, hinges on the assumption that every component can generate information about what’s happening with it, and in an event-driven system that can be quite complicated.</p><p>For example, if you have an application that executes activities A, B, and C, and publishes a message to an event broker, which then goes to a queue, you would want to know what happened from start to finish: from the publishing application, to and within the broker, all the way to the receiving application, for every event.</p><p>In this blog post I’ll address two important questions about observability in the context of <a href="https://solace.com/what-is-event-driven-architecture/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=datadog-visibility&amp;utm_campaign=medium_pubsub">event-driven architecture (EDA)</a>:</p><ul><li>How can event brokers generate information about what’s happening inside the broker and between microservices?</li><li>How can we act on the information generated about a complex distributed system’s behaviour when multiple event brokers are in the mix?</li></ul><p>To answer these two questions, we will look into two technologies:</p><ul><li><a href="https://solace.com/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=datadog-visibility&amp;utm_campaign=medium_pubsub">Solace PubSub+ Event Broker</a>: an event broker that enables real-time data distribution in an event-driven system.</li><li><a href="https://www.datadoghq.com/">Datadog</a>: a cloud-based observability backend that lets you collect, process, and visualize metrics, logs, and traces from applications and
systems.</li></ul><h3>Introduction to Distributed Tracing</h3><p>Before diving deep into the distributed tracing of event-driven systems, I’d like to step back and cover some core concepts.</p><p><a href="https://solace.com/pubsub-platform-features/#dt?utm_source=medium&amp;utm_medium=referral&amp;utm_content=datadog-visibility&amp;utm_campaign=medium_pubsub">Distributed tracing (DT)</a> is designed to let you observe and understand the journey of information through a distributed system by generating and collecting information about what happens as a piece of information flows through the system. DT falls under the umbrella of tracing, which is in turn one of the three pillars of observability. The goal of observability is to understand what is happening in the system so you can tell what went wrong when something does, or identify bottlenecks and figure out how to fix them.</p><figure><img alt="A graphic illustration of the three pillars of observability: logs, tracing, and metrics." src="https://cdn-images-1.medium.com/max/803/0*S6CSHbXnIoLugxdW.png" /><figcaption>Figure 1: The three pillars of Observability</figcaption></figure><p>A big part of the increasing popularity and importance of observability was the advent of an open-standard, vendor-neutral way of tracking transactional information in a distributed system: <a href="https://opentelemetry.io/">OpenTelemetry</a>.
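To make the standard concrete: the piece of OpenTelemetry that ties a distributed journey together is the W3C Trace Context "traceparent" header, which every hop forwards while adding its own span ID. Here is a pure-Python sketch of that header format (an illustration of the convention, not the OpenTelemetry SDK):

```python
import secrets

def make_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> str:
    """Format a W3C Trace Context 'traceparent' header: version-traceid-spanid-flags."""
    flags = "01" if sampled else "00"
    return f"00-{trace_id}-{span_id}-{flags}"

def parse_traceparent(header: str):
    """Split a traceparent header back into its four fields."""
    version, trace_id, span_id, flags = header.split("-")
    return version, trace_id, span_id, flags

# The trace ID is shared across the whole journey; each hop mints a new span ID.
# A backend like Datadog uses the shared trace ID to stitch the hops into one trace.
trace_id = secrets.token_hex(16)  # 32 hex characters
span_id = secrets.token_hex(8)    # 16 hex characters
header = make_traceparent(trace_id, span_id)
```

In practice the OpenTelemetry client libraries generate and propagate this header for you; the sketch just shows what travels between services.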
I recently created a set of 1–2 minute videos that quickly <a href="https://www.youtube.com/embed/jt5HLptVvbM">introduce the standard</a> and explain <a href="https://youtu.be/YwyfYfgjG0w">how it works</a>.</p><p>An asynchronous system with an event broker at its core — commonly called an <a href="https://solace.com/what-is-an-event-mesh/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=datadog-visibility&amp;utm_campaign=medium_pubsub">event mesh</a> — <a href="https://solace.com/blog/why-your-event-broker-needs-opentelemetry/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=datadog-visibility&amp;utm_campaign=medium_pubsub">needs just such a standard protocol to solve mysteries</a> about the flow of transactional events across the system.</p><p>There is a direct correlation between the degree of distribution in the system and the complexity of system observability. Advanced observability tools like Datadog enhance the tracing management of such complex systems by letting you monitor, optimize, and investigate all the different components in the system.</p><p>By stitching together tracing data from across the system, Datadog’s dashboards give a bird’s-eye view of what’s going on.
While Datadog leads the observability domain, there are still some gaps in the industry when it comes to collecting metrics from event brokers in event-driven systems.</p><h3>Distributed Tracing Meets Event-Driven Architecture</h3><p>There are three levels at which traces can be collected in an event-driven system:</p><ul><li>Application level; during business logic execution.</li><li>API level; during communication between other components and services.</li><li>Event broker level; at every hop inside the event mesh.</li></ul><p>The advent of OpenTelemetry has led to lots of tools that generate and collect trace information at the application and API levels, but it’s been hard to trace events as they transit event-driven systems because event brokers haven’t historically supported OpenTelemetry.</p><p>I’ll give you an example: imagine an e-commerce site that offers its customers a variety of payment services. To support that, they run microservices on different cloud providers, and events flow from one service to the other. A single action, like a user clicking to pay for their order, will trigger a series of events such as checking inventory, running fraud detection, updating their customer profile, and actually charging them.</p><figure><img alt="A diagram showing how an event broker routes events from one application to other applications using the publish subscribe model." src="https://cdn-images-1.medium.com/max/1024/0*wia9yFQxqzswfmnn.png" /></figure><p>Now consider their distributed tracing strategy. Assume that events are published and subscribed to between all the backend microservices over a message broker. As a system architect or a developer, when a failure happens you might ask several questions such as:</p><ul><li>Why did the fraud detection microservice never receive the message it subscribed to? Is it due to a queue reaching quota capacity?
Is it due to subscription permissions?</li><li>What happened to the event in the event mesh if there are multiple message brokers involved?</li><li>Did my message make it to the event broker?</li><li>How can I track the journey the message took, from the customer hitting the purchase button all the way to the fraud detection microservice?</li></ul><p>We can clearly see an observability gap in an event-driven system. With distributed tracing in the event broker and <a href="https://www.datadoghq.com/blog/datadog-supports-opentelemetry-open-source/">Datadog’s commitment to contributing to OpenTelemetry</a>, we can now bridge the observability gap in event-driven architecture.</p><p>If you want a little more context, check out these 1–2 minute videos about the <a href="https://youtu.be/uxT032OxVOA">basics of distributed tracing in event-driven systems</a>, and some of the specific <a href="https://youtu.be/u9oBD5pqDig">challenges you’ll face</a>.</p><h3>Closer Look into the Architecture</h3><p>As I said before, complete observability is achieved when all the components of the distributed system generate information about their actions. This includes message brokers.</p><p>As seen in the diagram below, applications can generate their own OpenTelemetry trace messages directly from the application logic, or from the API using OpenTelemetry client libraries. As applications start publishing guaranteed messages to the event broker and subscribing to these messages, the broker generates spans that reflect every hop inside the broker. Activities such as enqueuing from publishing, dequeuing from consuming, and acknowledgment will generate spans that are consumed by the OpenTelemetry collector.</p><figure><img alt="A diagram illustrating how PubSub+ interacts with a Datadog observability tool."
src="https://cdn-images-1.medium.com/max/1024/0*Zh3xU1k5e0rNVQrx.png" /></figure><p>Thanks to the standardization of trace messages using the OpenTelemetry Protocol (OTLP), after the spans are received by the Solace Receiver on the OpenTelemetry collector, they are processed into standardized OpenTelemetry trace messages and passed to exporters. The exporter is a component in the collector that supports sending data to the back-end observability system of choice.</p><p>In this example, I’ve used the Datadog exporter to export the trace messages to Datadog, where they are stitched together and correlated based on several properties and traceIDs so they can be further examined and analyzed using different dashboards and tooling.</p><p>I walked through this scenario in a video about <a href="https://youtu.be/q4035-O4bww">how DT works with EDA</a>, and think it makes the concept a little clearer.</p><h3>Final Thoughts</h3><p>Solace’s new distributed tracing capability means traces can be generated at every hop in the event mesh to reflect the event’s entire journey, every step of the way.
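As a rough illustration of the pipeline just described, an OpenTelemetry Collector configuration that wires a Solace receiver to the Datadog exporter might look like the sketch below. Treat the broker address, queue name, and credentials as placeholders, and check the collector-contrib documentation for the exact field names:

```yaml
receivers:
  solace:
    broker: [broker.example.com:5671]      # placeholder broker endpoint
    queue: queue://#telemetry-trace-queue  # placeholder telemetry queue
    auth:
      sasl_plain:
        username: otel-user
        password: ${env:SOLACE_PASSWORD}

exporters:
  datadog:
    api:
      key: ${env:DD_API_KEY}

service:
  pipelines:
    traces:
      receivers: [solace]
      exporters: [datadog]
```

The traces pipeline is the glue: spans drained from the broker's telemetry queue come in through the receiver and leave through the Datadog exporter.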
Using advanced observability backends like Datadog, all those spans and traces can be correlated, giving you a better understanding of your system.</p><p>Solace is committed to making its distributed tracing support in Solace PubSub+ Event Broker richer and more sophisticated over time, so keep an eye on our releases and collaborations for more cool projects!</p><p>If you haven’t been clicking through to watch the videos I created about distributed tracing and EDA, you can check out this video series here:</p><iframe src="https://cdn.embedly.com/widgets/media.html?src=https%3A%2F%2Fwww.youtube.com%2Fembed%2Fjt5HLptVvbM%3Flist%3DPLY1Ks8JEfJR7jWm3aafht9cou2oleB_Ef&amp;display_name=YouTube&amp;url=https%3A%2F%2Fwww.youtube.com%2Fwatch%3Fv%3Djt5HLptVvbM&amp;image=https%3A%2F%2Fi.ytimg.com%2Fvi%2Fjt5HLptVvbM%2Fhqdefault.jpg&amp;key=a19fcc184b9711e1b4764040d3dc5c07&amp;type=text%2Fhtml&amp;schema=youtube" width="854" height="480" frameborder="0" scrolling="no"><a href="https://medium.com/media/4ab68adc20b643bcac0920513dc1e021/href">https://medium.com/media/4ab68adc20b643bcac0920513dc1e021/href</a></iframe><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*4ysj8hetwlgGfOWw.jpg" /><figcaption>Tamimi enjoys educating people about and exploring innovative ways of integrating Solace technologies with emerging tools, technologies and techniques.
With this focus in mind he’s helped Solace’s developer relations team run scores of virtual events for individual developers and partners alike, frequently presenting or facilitating tutorials and hands-on workshops.</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/datadog-solace-observability-event-driven/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=datadog-visibility&amp;utm_campaign=medium_pubsub"><em>https://solace.com</em></a><em> on April 4, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=8d309dbdc116" width="1" height="1" alt=""><hr><p><a href="https://medium.com/pubsubplus/leveraging-datadog-and-solace-pubsub-for-improved-visibility-in-event-driven-systems-8d309dbdc116">Leveraging Datadog and Solace PubSub+ for Improved Visibility in Event-Driven Systems</a> was originally published in <a href="https://medium.com/pubsubplus">Solace PubSub+</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Monitoring a Plant Using PubSub+, Raspberry Pi and Flutter]]></title>
            <link>https://medium.com/pubsubplus/monitoring-a-plant-using-pubsub-raspberry-pi-and-flutter-1110af39561?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/1110af39561</guid>
            <category><![CDATA[pub-sub]]></category>
            <category><![CDATA[raspberry-pi]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[mqtt]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Fri, 10 Mar 2023 18:55:58 GMT</pubDate>
            <atom:updated>2023-08-04T14:58:47.419Z</atom:updated>
            <content:encoded><![CDATA[<p><a href="https://solace.com/blog/monitoring-a-plant-using-pubsub-raspberry-pi-and-flutter/#authorbio?utm_source=medium&amp;utm_medium=referral&amp;utm_content=moniter-plant-pi&amp;utm_campaign=medium_pubsub"><em>Khajan Singh</em></a><em> is a middle school student and the youngest in the </em><a href="https://solace.com/scholars/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=moniter-plant-pi&amp;utm_campaign=medium_pubsub"><em>Solace Scholar</em></a><em> program. He enjoys using hardware and data cloud to solve real-life problems.</em></p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Tk5l5nGUFsMRuwnF6fkIMg.png" /></figure><p>I have a plant that I’ve been taking care of for a while, but I recently started to neglect it. As a techie, I decided the best way to make sure this plant gets the care it needs was to make a system that uses sensors to read things like the moisture of the soil and temperature around the plant, and lets me know when I need to water it.</p><p>As far as hardware goes, that’s a natural job for Raspberry Pi, and I decided to create an app using Flutter to notify me whenever I need to water the plant using data from the sensors. The Raspberry Pi will send the data to a database, the app will be able to read the data from there, and let me know when it notices a situation that needs my attention. I also wanted to make it possible to manually refresh the app, and had to figure out a way to actually do it.</p><p>Flutter is a widely used app-making framework to create cross-platform apps which can be run on multiple operating systems. 
Using Solace’s MQTT publish/subscribe functionality, I can have the Pi and the app communicate, using the broker to manage the messages.</p><p>In this article, I’ll explain how I used MQTT to communicate with the Raspberry Pi using a Flutter app, and I’ll present it in the form of a tutorial in case you want to do something similar yourself.</p><h3>Step 1: Prerequisites</h3><p>To get started on the app side of things, you’ll need to install Flutter, and you can find a guide on how to do that <a href="https://docs.flutter.dev/get-started/install">here</a>. You should also have a somewhat decent knowledge of Flutter to figure things out. I also assume that you have a Solace broker set up so that you can connect to it; you can create your own using Docker or set it up in the cloud using PubSub+ Cloud. I made a guide to setting one up with Docker, which you can find <a href="https://solace.com/blog/sensor-data-solace-mqtt-raspberry-pi-motion-sensor/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=moniter-plant-pi&amp;utm_campaign=medium_pubsub">here</a>.</p><p>For the Raspberry Pi, you’ll obviously need to have one along with an operating system for it to run. I’d recommend the official Raspberry Pi OS. I will be using Python to receive the data, and Pi OS has that pre-installed.</p><h3>Step 2: Set up the Flutter App</h3><p>First off, start a Flutter project. To add MQTT support, we need to run “flutter pub add mqtt_client”, which adds the MQTT client package to the app. Information on this package can be found <a href="https://pub.dev/packages/mqtt_client">here</a>.</p><p>Let’s create a file called MQTTManager.dart to contain the connecting, disconnecting, subscribing, and publishing functions that will run in the app.</p><pre>import &#39;package:mqtt_client/mqtt_client.dart&#39;;
import &#39;package:mqtt_client/mqtt_server_client.dart&#39;;

class MQTTManager {
  MqttServerClient?
_client;
  final String _identifier;
  final String _host;
  final String _topic;

  MQTTManager({
    required String host,
    required String topic,
    required String identifier,
  })  : _identifier = identifier,
        _host = host,
        _topic = topic;

  void initializeMQTTClient() {
    _client = MqttServerClient(_host, _identifier);
    _client!.port = 1883;
    _client!.keepAlivePeriod = 20;
    _client!.onDisconnected = onDisconnected;
    _client!.secure = false;
    _client!.logging(on: true);
    _client!.onConnected = onConnected;
    _client!.onSubscribed = onSubscribed;

    final MqttConnectMessage connMess = MqttConnectMessage()
        .startClean() // Non persistent session for testing
        .withWillQos(MqttQos.atLeastOnce);
    print(&#39;Client connecting...&#39;);
    _client!.connectionMessage = connMess;
  }

  void connect() async {
    assert(_client != null);
    try {
      print(&#39;Starting to connect...&#39;);
      await _client!.connect();
    } on Exception catch (e) {
      print(&#39;Client exception - $e&#39;);
      disconnect();
    }
  }

  void disconnect() {
    print(&#39;Disconnected&#39;);
    _client!.disconnect();
  }

  void publish(String message) {
    final MqttClientPayloadBuilder builder = MqttClientPayloadBuilder();
    builder.addString(message);
    _client!.publishMessage(_topic, MqttQos.exactlyOnce, builder.payload!);
  }

  void onSubscribed(String topic) {
    print(&#39;Subscribed to topic $topic&#39;);
  }

  void onDisconnected() {
    print(&#39;Client disconnected&#39;);
  }

  void onConnected() {
    print(&#39;Connected to client&#39;);
    _client!.subscribe(_topic, MqttQos.atLeastOnce);
    _client!.updates!.listen((List&lt;MqttReceivedMessage&lt;MqttMessage?&gt;&gt;?
c) {
      final MqttPublishMessage recMess = c![0].payload as MqttPublishMessage;
      final String pt =
          MqttPublishPayload.bytesToStringAsString(recMess.payload.message);
      print(&#39;Topic is &lt;${c[0].topic}&gt;, payload is &lt;-- $pt --&gt;&#39;);
      print(&#39;&#39;);
    });
    print(&#39;Connection was successful&#39;);
  }
}</pre><h3>Step 3: Implement the MQTT Interface</h3><p>To use MQTT in the app, we must create a user interface for interacting with the broker and sending messages. You can use the functions however best fits your application.</p><p>For demonstration, I created a simple example app that lets you connect to the MQTT broker and send messages to a topic by pressing a button. We’ll have to modify the MyHomePage class in the main.dart file.</p><p>Replace the MyHomePage class with this code:</p><pre>class MyHomePage extends StatefulWidget {
  const MyHomePage({Key? key}) : super(key: key);

  @override
  State&lt;MyHomePage&gt; createState() =&gt; _MyHomePageState();
}

class _MyHomePageState extends State&lt;MyHomePage&gt; {
  late MQTTManager manager;

  void _configureAndConnect() {
    String osPrefix = &#39;Flutter_iOS&#39;;
    if (Platform.isAndroid) {
      osPrefix = &#39;Flutter_Android&#39;;
    }
    manager = MQTTManager(
      host: &quot;127.0.0.1&quot;,
      topic: &quot;app/test&quot;,
      identifier: osPrefix,
    );
    manager.initializeMQTTClient();
    manager.connect();
  }

  void _disconnect() {
    manager.disconnect();
  }

  void _publishMessage(String text) {
    String osPrefix = &quot;mobile_client&quot;;
    final String message = osPrefix + &#39; says: &#39; + text;
    manager.publish(message);
  }

  @override
  void initState() {
    _configureAndConnect();
    super.initState();
  }

  @override
  Widget build(BuildContext context) {
    return Scaffold(
      appBar: AppBar(
        title: const Text(
          &#39;Flutter Demo&#39;,
        ),
      ),
      body: Stack(
        children: [
          Column(
            children: [
              Center(
                child: ElevatedButton(
                  style: ElevatedButton.styleFrom(
                    fixedSize: const Size(240, 50),
                  ),
                  onPressed: () {
                    try {
                      _publishMessage(&quot;Hi&quot;);
                    } on ConnectionException catch (e) {
                      print(e);
                      final snackBar = SnackBar(
                        content: const Text(&#39;Connecting...&#39;),
                        backgroundColor: Colors.black,
                        duration: const Duration(seconds: 1),
                        action: SnackBarAction(
                          label: &#39;Dismiss&#39;,
                          onPressed: () {},
                        ),
                      );
                      ScaffoldMessenger.of(context).showSnackBar(snackBar);
                    }
                  },
                  child: const Text(
                    &quot;Refresh&quot;,
                    style: TextStyle(
                      fontFamily: &#39;Open Sans&#39;,
                      fontSize: 17.5,
                    ),
                  ),
                ),
              ),
            ],
          ),
        ],
      ),
    );
  }

  @override
  void deactivate() {
    _disconnect();
    super.deactivate();
  }
}</pre><h3>Step 4: Raspberry Pi Setup</h3><p>For my use case, I want to receive data from a Raspberry Pi. To implement this, you will need a script where the Pi subscribes over MQTT and listens to the Solace PubSub+ server.</p><p>I have a humidity/temperature sensor to record data around my plant. To replicate this, wire the sensor according to your needs and install the Adafruit package by running this command in the terminal:</p><pre>pip3 install adafruit-circuitpython-dht</pre><p>Now, you’ll want to create a new Python file on the Pi which will hold the code for listening and responding whenever a refresh call is sent.
In the text editor of your choice, edit the file to have this code in it:</p><pre>import adafruit_dht
import board
import random
import time

from paho.mqtt import client as mqtt_client

broker = &#39;192.168.1.73&#39;
port = 1883
# These topics must match the topic the app publishes to (&quot;app/test&quot; above)
topic = &quot;app/test&quot;       # commands from the app
info_topic = &quot;app/test&quot;  # where the sensor reading is published back
client_id = f&quot;python-mqtt-{random.randint(0, 1000)}&quot;

dht_device = adafruit_dht.DHT11(board.D27, use_pulseio=False)


def connect_mqtt():
    def on_connect(client, userdata, flags, rc):
        if rc == 0:
            print(&quot;Connected to MQTT Broker!&quot;)
        else:
            print(&quot;Failed to connect, return code %d&quot; % rc)

    client = mqtt_client.Client(client_id)
    client.on_connect = on_connect
    client.connect(broker, port)
    return client


def refreshCall(client):
    def on_message(client, userdata, msg):
        def tempHumidity(client):
            temperature = dht_device.temperature
            humidity = dht_device.humidity
            answer = f&quot;The temperature is {temperature} Celsius and the humidity is {humidity}%&quot;
            result = client.publish(info_topic, answer)
            status = result[0]
            if status == 0:
                print(f&quot;Sent `{answer}` to topic `{info_topic}`&quot;)
            else:
                print(f&quot;Failed to send message to topic {info_topic}&quot;)
            time.sleep(2)

        receivedMsg = msg.payload.decode()
        # The app prepends an identifier, so look for the command anywhere
        if &quot;Refresh&quot; in receivedMsg:
            tempHumidity(client)
        else:
            print(f&quot;Received `{receivedMsg}` from `{msg.topic}` topic&quot;)

    client.subscribe(topic)
    client.on_message = on_message


def run():
    client = connect_mqtt()
    refreshCall(client)
    client.loop_forever()


if __name__ == &#39;__main__&#39;:
    run()</pre><p>Now you have a Raspberry Pi that listens for MQTT commands sent from an app and executes them! That&#8217;s great for my use case: I can send a refresh command from the Flutter app, use Solace&#8217;s MQTT platform to route it, have the Pi read temperature and humidity, and let the app see that refreshed data. 
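</p><p>To see the command-handling logic in isolation (without a broker or sensor attached), here is a pure-function sketch of the refresh check. The function name and the sensor stub are mine for illustration; the check matches the command anywhere in the payload, since the app prepends an identifier before the message text.</p>

```python
def handle_payload(payload, read_sensor):
    """Decide how the Pi should react to an incoming MQTT payload.

    Returns the reply string to publish, or None if the message
    is not a refresh command.
    """
    received = payload.decode()
    # The app sends e.g. "mobile_client says: Refresh", so search the payload
    if "Refresh" in received:
        temperature, humidity = read_sensor()
        return (f"The temperature is {temperature} Celsius "
                f"and the humidity is {humidity}%")
    return None


# Stub standing in for a real DHT11 reading
fake_sensor = lambda: (23, 41)

print(handle_payload(b"mobile_client says: Refresh", fake_sensor))
# -> The temperature is 23 Celsius and the humidity is 41%
print(handle_payload(b"hello", fake_sensor))
# -> None
```

<p>Wiring this into the script is just a matter of calling it from the message callback and publishing any non-None reply.</p><p>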
If you’d like, there is also a GitHub repository linked <a href="https://github.com/Khajan-Singh/MQTTApp">here</a> with the code for the app referenced in it.</p><h3>Future Applications</h3><p>Hopefully, this showed you how you can use MQTT to solve problems. The sky’s the limit: just as I used my Raspberry Pi to receive data from an app, you can use this guide to communicate using Solace from an app, from a Raspberry Pi, or both if you’d like! This opens up many different use cases and scenarios in which you can communicate between devices. I hope you enjoyed this guide and found it helpful and interesting!</p><h3>About the Author</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/300/0*oREofL6iJ1cUbPK1.png" /><figcaption>Khajan is a middle schooler who lives in Texas, USA. He likes to mess around with technology and learn new things about it. He also enjoys using hardware and data cloud to solve real-life problems. In his free time, he likes to garden and read whatever books he can find. Check out more of his projects <a href="https://protect-us.mimecast.com/s/xTcYC9rB5Ysk8G67uotzbJ?domain=instructables.com/">here</a>.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*SnsO6UhD9_1OYttI.png" /><figcaption>The <a href="https://solace.com/scholars/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=moniter-plant-pi&amp;utm_campaign=medium_pubsub">Solace Scholars</a> Program encourages writers from our community to create technical and original content that describes what our technology and/or third-party integrations are being used for and exciting projects that are made possible by event-driven architecture. Solace Scholars are great at solving challenges and explaining complex ideas. 
If you’re interested in writing for us and learning about what you can earn, check out the website and submit an idea to us!</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/monitoring-a-plant-using-pubsub-raspberry-pi-and-flutter/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=moniter-plant-pi&amp;utm_campaign=medium_pubsub"><em>https://solace.com</em></a><em> on March 10, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1110af39561" width="1" height="1" alt=""><hr><p><a href="https://medium.com/pubsubplus/monitoring-a-plant-using-pubsub-raspberry-pi-and-flutter-1110af39561">Monitoring a Plant Using PubSub+, Raspberry Pi and Flutter</a> was originally published in <a href="https://medium.com/pubsubplus">Solace PubSub+</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Advantages of Building an Event-Driven Application with an Event Portal]]></title>
            <link>https://medium.com/event-driven-times/the-advantages-of-building-an-event-driven-application-with-an-event-portal-2e7397d2c9e8?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/2e7397d2c9e8</guid>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[event-portal]]></category>
            <category><![CDATA[application]]></category>
            <category><![CDATA[rest]]></category>
            <category><![CDATA[event-driven]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Thu, 09 Mar 2023 15:52:08 GMT</pubDate>
            <atom:updated>2023-08-10T15:02:25.225Z</atom:updated>
<content:encoded><![CDATA[<p>by Stephen Tsoi</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*w46XK49xg4y1r_k4WbiKzA.png" /></figure><p>All digital transformation projects have one thing in common: they produce lots of events, i.e. changes of state that can be recognized, transmitted, processed, and reacted to by applications across an enterprise.</p><p>In my previous article “<a href="https://solace.com/blog/designing-and-naming-topics-for-event-driven-architecture-with-pubsub/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=advantages-event-portal&amp;utm_campaign=medium_eda">Designing and Naming Topics for Event-Driven Architecture with PubSub+</a>”, I used a payment system as an example to explain how to build an event flow using the design principles of <a href="https://solace.com/what-is-event-driven-architecture/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=advantages-event-portal&amp;utm_campaign=medium_eda">event-driven architecture</a> (EDA). In this post, I will explain how to build an application that can support up to 200,000 concurrently connected customers with an <a href="https://solace.com/what-is-an-event-portal/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=advantages-event-portal&amp;utm_campaign=medium_eda">event portal</a> (like an API portal for events) that enables the management of thousands of events, event flows, schemas, etc.</p><h3>Challenges with REST</h3><p>Many organizations develop numerous products riding on a REST interface to deliver a richer customer experience. In this blog post, I’ll use the example of providing sports betting across a variety of devices.</p><p>Depending on the channel application, UI design constraints such as screen size, hardware, and device type will vary. For example, a Web application needs to provide rich information, while a mobile client must offer a compact user experience and easy interaction. 
An Automatic Teller Machine (ATM), on the other hand, may display less information, but provide more shortcuts to different services such as ePayment, purchasing, and account operation. These three different types of applications will have different UI requirements but share over 90% of their data.</p><figure><img alt="A graphic comparing the different UI Designs for a website, mobile app and OCB terminal." src="https://cdn-images-1.medium.com/max/1024/0*8-L4M2Jn3-X0oR4c.png" /></figure><p>Traditional REST design, which revolves around a <a href="https://en.wikipedia.org/wiki/Model%E2%80%93view%E2%80%93controller">model-view-controller pattern</a>, is tightly coupled with backend systems exposing services to provide data endpoints to various but specific channel systems. To cater to different UI components or screen requirements, developers need to separate the program logic into interconnected layers, and present the data fields for these UI components or screens. This results in the same data field overlapping in multiple service calls, and these service calls are hard to reuse. Any requirement change would affect both the backend and the UI events, even if it’s something as simple as adding or removing a single data field.</p><p>If the affected service is shared by multiple channels, or data is used in multiple services, the change must also be implemented on all affected channels. This also results in significant effort for regression, functional, and non-functional testing on all affected channels.</p><p>Additionally, code implemented on client applications usually requires complex logic in order to handle various operations; this means that the same screen may need to call multiple backend services behind the scenes. 
Applications need constant enhancements for new services based on UI requirements, and as a result, an application might need to maintain hundreds of channel services with longer latency.</p><figure><img alt="A visual showing how different components on mobile apps, websites, and OCB terminals interact with channel side service catalogs." src="https://cdn-images-1.medium.com/max/895/0*H47cg0aPIvDJ9EgW.png" /></figure><h3>Architecture Design</h3><p>With EDA, you can eliminate any dependencies between channels and the backend systems. EDA uses asynchronous request/reply and publish/subscribe instead of REST’s synchronous request-reply, so data can be pushed to the channel as it is updated or whenever it becomes available. This reduces latency, improves the freshness of data, and increases efficiency since clients do not have to constantly poll even if nothing is changing. Using choreography, the channel can also publish their request (ePayment, purchasing, account operation etc.) and get back the result from the corresponding reply topic.</p><figure><img alt="A visual showing how websites, mobile apps, and OCB terminals interact with the event mesh and data source." src="https://cdn-images-1.medium.com/max/553/0*2wncOvGsHtpNGOx3.png" /></figure><p>All that means channel developers can focus on UI presentation, as applications can simply subscribe to the data topic from specific events they are interested in and display them. Depending on the requirement, backend services can also subscribe to multiple channel requests via separated event queues, without the need for a middle layer service. 
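</p><p>The decoupling described here can be pictured with a toy in-memory publish/subscribe bus. This is a stand-in for a real event broker such as PubSub+, not its API; topic and channel names are illustrative.</p>

```python
from collections import defaultdict

class ToyEventBus:
    """Minimal in-memory stand-in for an event broker."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Fan out to every subscriber; the publisher knows nothing about them
        for handler in self.subscribers[topic]:
            handler(event)

bus = ToyEventBus()
seen = []

# Two channel apps subscribe only to the data they care about
bus.subscribe("odds/update", lambda e: seen.append(("web", e)))
bus.subscribe("odds/update", lambda e: seen.append(("mobile", e)))

# The backend publishes once; both channels receive it, no middle layer
bus.publish("odds/update", {"market": "match-42", "odds": 1.85})
print(seen)
```

<p>Adding a third channel is one more subscribe call; nothing on the publishing side changes, which is exactly why no middle-layer service is needed.</p><p>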
The overall design becomes simpler, and time (and cost) is saved from having to maintain an extra service layer.</p><p>Lastly, channel behavior becomes more consistent, especially when you have multiple customer touchpoints in omni or ‘phygital’ use cases that combine physical and digital interactions.</p><figure><img alt="A visual showing how an event catalog interacts with the service catalog and simplifies the overall design." src="https://cdn-images-1.medium.com/max/852/0*Hjfpy3dEL5vgYGq8.png" /></figure><h3>Infrastructure Design</h3><p>With a traditional REST-based “polling” design, you need to set up a huge number of Web servers and application servers to serve 200,000 concurrent user connections on each channel. Even if the design uses a CDN to cache traffic and decrease the capacity requirement on infrastructure, you still need to set up dozens of servers: with a 5% cache miss rate, 10,000 connections’ worth of traffic still reaches your servers. The CDN also introduces extra latency, which depends on the page TTL (time-to-live) value and the information update frequency. For example, if the backend updates information every 5 seconds and the CDN TTL is 2 seconds, the added latency will be up to 2 seconds. You can’t just reduce the cache value to 1 second, because that lowers the cache hit rate, which in turn increases the capacity requirement on the Web servers. Finally, you need to build the same number of Web servers and application servers for each channel solution if they need to cater to the same amount of peak traffic.</p><p>The EDA-based “pushing” solution allows the channel to connect directly to the event bus via the open MQTT protocol, and send a request or receive information updates as they become available. The data source can directly expose the service endpoint to the channel, avoiding the extra latency of the cache layer. The backend microservice design can scale up as needed. 
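</p><p>The capacity and latency arithmetic above works out as follows, as a back-of-the-envelope sketch using the figures in the text. The connections-per-server figure is an illustrative assumption, not from the article.</p>

```python
concurrent_users = 200_000
cache_miss_rate = 0.05         # 5% of requests miss the CDN cache
connections_per_server = 500   # illustrative capacity assumption

# Traffic that still reaches the origin despite the CDN
origin_connections = int(concurrent_users * cache_miss_rate)
servers_needed = origin_connections // connections_per_server

backend_update_interval_s = 5  # backend refreshes data every 5 s
cdn_ttl_s = 2                  # CDN caches each page for 2 s

# Worst case, a client sees data that is one full TTL stale
worst_case_added_latency_s = cdn_ttl_s

print(origin_connections)          # 10000
print(servers_needed)              # 20
print(worst_case_added_latency_s)  # 2
```

<p>Pushing updates over the event bus removes both the TTL-induced staleness and the per-channel server farms from this calculation.</p><p>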
This infrastructure will be simpler, more flexible, and more cost-effective than the REST polling design. Referring to the following infrastructure diagram, the EDA solution can decrease latency from 14 seconds to 2 seconds (and lower), and the cost will be at least 20% lower when using the event bus to replace the Web server, application server and load balancer. This infrastructure can also cater to multiple channels.</p><figure><img alt="A diagram that shows the difference in total latency between REST Polling (Total Latency 14s) and EDA Pushing (Total Latency 2s)." src="https://cdn-images-1.medium.com/max/1024/1*_Ffrj2GUvA2e0KqYMe5-rA.png" /></figure><h3>Application Data Flow Design</h3><p>As mentioned before, the EDA-based solution uses <a href="https://solace.com/blog/publish-subscribe-messaging-pattern/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=advantages-event-portal&amp;utm_campaign=medium_eda">publish/subscribe</a> to implement information fan-out and asynchronous request-reply. My previous article explained how to define topic and queue naming to implement an orchestration flow. However, the EDA orchestration flow is for distribution control. You also need to put in place an event portal, which is similar to a REST API management portal, to help design the EDA flow and allow the flexibility to reuse these events for other projects.</p><p>To help developers leverage the many assets that make up an event-driven system — APIs, applications, schemas, etc. — you need to implement a self-service “event portal”. All applications, events, and schemas, and their corresponding mappings, are created and cataloged in the event portal during the design stage. Any new enhancements can easily be mapped to corresponding events from the event portal to ensure downstream consumers of data can detect enhanced events. Service providers will know all subscribers and any respective impact from updates made on their service or interface. 
The event portal can be built in-house or by a 3rd party, although there are limited options in the market. In the following paragraphs, I will use an event portal to show how to build an ePayment system to support payment transactions from different channels or banks.</p><p>The ePayment system is used to support EFT (Electronic Funds Transfer), PPS (Positive Payment System), FPS (Fast Payment Service) and bank fund transfers, which are built on channels or banks with different UI presentation layers. The payment service publishes the initial event to its associated message brokers, which are connected via an <a href="https://solace.com/what-is-an-event-mesh/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=advantages-event-portal&amp;utm_campaign=medium_eda">event mesh</a>, and then waits for the reply asynchronously.</p><p>As you can see in the following diagram, the methods’ triggering points and subsequent processes are decoupled and handled via individual services. Eventually, these services need to update the corresponding account records in the database via a backend account service.</p><figure><img alt="This diagram shows the methods’ triggering points and how subsequent processes are decoupled and handled via individual services." src="https://cdn-images-1.medium.com/max/1024/0*q6l63AW_qpotMuEW.png" /></figure><p>With an event portal, we first need to create corresponding applications and events based on the above diagram, and then import the AsyncAPI interface for each event as a schema. We can view the following diagram after deploying it to the event mesh via Runtime Event Manager, and the event catalog will show all applications/events/schemas in list view. You can right-click the corresponding icon on the diagram to view detailed information about an application or event.</p><figure><img alt="A screenshot of how the previous diagram looks when using an event portal." 
src="https://cdn-images-1.medium.com/max/1024/0*ulGpQ22kvnEQhSWY.png" /></figure><h3>Benefits of Event-Driven Architecture</h3><p>In the modern world, organizations need to quickly respond to the ever-changing market to maintain competitiveness and maximize efficiency/profitability. Both business and IT should work hand in hand to accomplish this.</p><h3>Business Benefits of EDA</h3><ul><li>Faster time to market</li><li>Optimized cost of change</li><li>Lower, more consistent latency</li><li>Reusability of UI components</li><li>Lower risk</li></ul><h3>Technical Benefits of EDA</h3><ul><li>Shared services and events</li><li>Standardized interface</li><li>High throughput</li><li>Real-time event reaction</li><li>Reuse of design patterns</li></ul><p>EDA can work with other new and emergent technologies and methodologies such as Agile, DDA, and DevOps to meet the requirements above. Product delivery cycles can be decreased from 8–12 months down to 1–2 weeks depending on the technology/methodology combinations used.</p><p>EDA models help decouple systems, while the Event Portal provides a platform to create, maintain and govern thousands of applications, events, and schemas. Riding on agile development we can split the development lifecycle into smaller sprints to allow more flexibility and easily accommodate any required changes. With this, we now follow a 5-step process to implement changes<strong>: Discover, Design, Develop, Test, </strong>and<strong> Deploy.</strong> This process can be repeated weekly or even daily.</p><figure><img alt="This diagram shows how agile development can decrease product delivery cycles from 8–12 months to 1–2 weeks, depending on the technology/methodology combinations used." src="https://cdn-images-1.medium.com/max/815/0*uzGkfHIQ1tPZB5bX.png" /></figure><h3>EDA Extends the Lifetime of Events</h3><p>To recap, all digital transformations have one thing in common: they have the capacity to produce massive amounts of events. 
Getting these events ‘in motion’ across the enterprise, out of siloed architectures, opens up possibilities of leveraging them for as-yet-unimagined use cases.</p><p>Under EDA design, the events brought forward are a superset of the original events; extra services can be built to collect and analyze activities about client applications and customer behavior in order to enhance user experience and interactions.</p><figure><img alt="This diagram shows that under the EDA design, the Operational Data Plane and Analytical Data Plane work together to achieve business needs and collect and analyze activities about client applications and customer behaviour." src="https://cdn-images-1.medium.com/max/1024/0*MbeZQQQWo3bm_SFb.png" /></figure><p>In the following transformation diagram, the red circle represents the main operation with multiple event flows. First, we define the product to sell, and then the product triggers subsequent initial odds. The customer logs into their account to place a wager (tickets) on specific products (markets) via various channels. The turnover of specific products drives subsequent odds changes. This circular flow then repeats. All these events combined provide feedback and insights for the business through big data analytics. For example, data scientists can analyze millions of events with the same pattern to design specific campaigns to promote bestselling products, remove unwelcome products or pages, fine-tune system capacity based on timeslot usage statistics, or provide tailor-made promotions based on customer activities.</p><p>This is possible with an event-driven, publish-subscribe communication pattern: adding additional subscribers/consumers of data does not impact any existing applications. 
These functions do not affect existing event flows or application performance, while providing insights to focus on specific target groups, create better marketing campaigns, and drive better customer engagement.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*wDb9RiRtjC1IJfHt.png" /></figure><h3>Summary</h3><p>With EDA, service components for business processes are decoupled, and initiation and downstream services in the flow are associated by real-time events and handled asynchronously. It is clear that EDA can help solve REST polling’s issues of longer latency, lower throughput, and higher cost to address business requirements.</p><p>EDA is not just an architecture design or data flow change; it also requires a change in people’s mindset, often referred to as culture change. That is the most difficult part.</p><p>Management support is key, and training is necessary, but it needs buy-in at all levels to really deliver material benefits.</p><p>Training should be general, and not dependent on an individual project. Last year at our company, we trained more than 300 colleagues to earn EDA-related certifications. Certified colleagues range from users and testers to developers, architects, PMs, and management. So now when we talk about EDA on individual projects, they already know what it is, and that saves us a lot of time explaining or debating the concept.</p><h3>About the Author</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/176/1*v52gPJ_1nBzr12-es6LiBw.png" /><figcaption>Stephen has more than 20 years of IT experience in solution and architecture design, including the most challenging areas, such as low-latency risk management, ultra-high-speed GPU calculation, and scalable voice recognition systems. 
He has been leading an architect team to formulate IT technical solutions, establish and develop architecture framework, technology policies, principles and standards for the Enterprise Integration portfolio.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*SwdXpc2SK2yOSe6B.png" /><figcaption>The <a href="https://solace.com/scholars/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=advantages-event-portal&amp;utm_campaign=medium_eda">Solace Scholars</a> Program encourages writers from our community to create technical and original content that describes what our technology and/or third-party integrations are being used for and exciting projects that are made possible by event-driven architecture. Solace Scholars are great at solving challenges and explaining complex ideas. If you’re interested in writing for us and learning about what you can earn, check out the website and submit an idea to us!</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/building-event-driven-application-event-portal/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=advantages-event-portal&amp;utm_campaign=medium_eda"><em>https://solace.com</em></a><em> on March 9, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=2e7397d2c9e8" width="1" height="1" alt=""><hr><p><a href="https://medium.com/event-driven-times/the-advantages-of-building-an-event-driven-application-with-an-event-portal-2e7397d2c9e8">The Advantages of Building an Event-Driven Application with an Event Portal</a> was originally published in <a href="https://medium.com/event-driven-times">Event-Driven Times</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[The Business Case for EDA in Retail]]></title>
            <link>https://medium.com/event-driven-times/the-business-case-for-eda-in-retail-672dfd867863?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/672dfd867863</guid>
            <category><![CDATA[retail]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[real-time-analytics]]></category>
            <category><![CDATA[business-case]]></category>
            <category><![CDATA[omni-channel-retailing]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Fri, 27 Jan 2023 19:14:11 GMT</pubDate>
            <atom:updated>2023-08-14T13:01:58.635Z</atom:updated>
<content:encoded><![CDATA[<p>by Alecia O’Brien</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*_AtlUPhLd13jKO3C.jpg" /></figure><p>The retail industry has transformed over the past few years, as global events and forces have pressured retailers of all kinds to accommodate new consumer buying patterns, strained supply chains, and increased competition.</p><p>Over the past 5 years, the number of digital shoppers has increased by 40%, and the frequency with which people shop online continues to climb too, leading to 218% growth in total sales volume over that time span. In America in 2023, 57% of B2C e-commerce sales flow through marketplaces (<a href="https://antavo.com/blog/customer-loyalty-statistics/">Forrester</a>), and many of those digital shoppers are Gen Xers with significant spending power today.</p><p>This growth means retailers must be focused on creating a user-friendly, engaging and efficient online experience, for it directly relates to customer satisfaction, retention and NPS scores. As Shopify, another Ottawa-based tech company, predicted in 2021 in their <a href="https://cdn.shopify.com/static/future-of-commerce/Shopify%20Future%20of%20Commerce%202021.pdf">Future of Commerce study</a>, businesses need to be prepared for independent retailers and the next generation of consumers to change commerce as we know it forever.</p><p>Brands need to strengthen their omnichannel strategies and interactions with consumers as they shift to online spending. They’re well aware that customer data is the lifeblood of business today, and are actively rethinking their data strategy and <a href="https://antavo.com/blog/first-party-data-with-loyalty-programs/">investing in first-party data capture</a>. 
Most retailers offer a loyalty program to build a stronger connection with their customers, but <a href="https://www.bcg.com/publications/2021/the-value-of-first-party-data">only 30% are creating a single customer view across channels</a>, and only 1–2% are using such data to deliver a full cross-channel experience for their customers!</p><p>The opportunity for retailers to capitalize on all that rich engagement with customers is here now, and it is short-lived, so it’s imperative that they have systems in place which can act and react to information and events in real-time.</p><p>Thankfully, EDA (event-driven architecture) is here to save the day by giving retailers a way to collect, share and leverage real-time customer data across their organization to perfect their personalization strategy.</p><h3>What is Event-Driven Architecture?</h3><p>EDA is a way of building IT systems that routes data from one application or device to other applications or devices, no matter where they’re all deployed, in real-time. 
In the case of retail, this helps retailers not only capture data points and interactions they previously couldn’t, but also share them with their other systems quickly to deliver a personalized, sophisticated response to the customer (and keep their loyalty).</p><p>EDA relies on the use of <a href="https://solace.com/what-is-an-event-broker/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">event brokers</a> that handle the routing of this data, which can be connected to form an “<a href="https://solace.com/what-is-an-event-mesh/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">event mesh</a>” — a fast, reliable, self-healing real-time data highway that spans the retail enterprise across operating environments, geographies, and lines of business.</p><p>The core value of EDA is that it enables real-time communication between systems — something there are many use cases for in retail ecosystems.</p><h3>The Business Benefits of EDA for Retailers</h3><p>I spoke to three of my colleagues who have helped leading retailers around the world embrace and benefit from EDA (Floyd Davis, Jason Abram and <a href="https://solace.com/blog/author/vidyadhar-kothekar/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">Vidyadhar Kothekar</a>) about their personal experience, and in this blog post I will introduce three areas in which they believe the business case for EDA in retail is clear and compelling:</p><ol><li>Hyper Personalization &amp; Next Best Offer</li><li>Omni-Channel Synchronization</li><li>After-Purchase Experience</li></ol><h3>Hyper Personalization &amp; Next Best Offer</h3><p>Customer loyalty programs have gained traction in recent years, originating in the airline industry with concepts like frequent flyer points. 
This approach to customer loyalty and retention draws a connection with the customer that traditional sales channels could not. Purchases made using a loyalty card/program give the retailer invaluable information about the customer’s purchasing behavior.</p><p>As more customers use loyalty programs, retailers can build a profile of spending habits, timing, locations, and patterns — all of which can be leveraged, analysed and used to calculate and produce special offers tailored to each customer, or customers with similar profiles.</p><p>With these special offers, the timing of the ‘next best offer’ is key. Consider, for example, a customer at a hardware store buying a barbeque. As the customer gets to the checkout, they scan the barcode, scan the loyalty card, click checkout and make the payment. In an event-driven world, each of these steps is an event that can be published for downstream processing.</p><p>The payment, for example, is a logical end to a customer interaction. If a loyalty card has been used as part of the transaction, the retailer has a combination of <strong>customer, items purchased, location and timing</strong> data to work with. This information can be encapsulated and published as an event, then consumed asynchronously and in real-time by several systems:</p><ul><li><strong>CRM </strong>to enable a deeper understanding of a customer’s interests, which can generate insight into their preferences and early emotional drivers.</li><li><strong>Loyalty </strong>to trigger outbound communications and the delivery of a personalized ‘next best offer’ based on the 360 view of the customer.</li><li><strong>Analytics </strong>that let the retailer analyze how the program is affecting their business in real-time, instead of based on potentially stale data.</li><li><strong>Warehouse </strong>systems so new stock levels can trigger dispatch or distribution of stock if they fall below a threshold.</li></ul><p>There are often many more actors in the mix, of course. 
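</p><p>One way to picture this fan-out is with hierarchical topics and wildcard subscriptions, in the style of an MQTT or PubSub+ broker. The matcher below is deliberately simplified (a single-level <code>*</code> and a trailing multi-level <code>&gt;</code>), and the topic names are illustrative, not from any real system.</p>

```python
def matches(subscription, topic):
    """Simplified topic matching: '*' matches exactly one level,
    a trailing '>' matches all remaining levels."""
    sub, top = subscription.split("/"), topic.split("/")
    for i, part in enumerate(sub):
        if part == ">":
            return True
        if i >= len(top):
            return False
        if part != "*" and part != top[i]:
            return False
    return len(sub) == len(top)

# A checkout payment event published once on a hierarchical topic
topic = "retail/store/ottawa/checkout/payment"

subscriptions = {
    "crm":       "retail/store/*/checkout/>",
    "loyalty":   "retail/store/*/checkout/payment",
    "warehouse": "retail/warehouse/>",
}

receivers = [name for name, sub in subscriptions.items() if matches(sub, topic)]
print(receivers)  # ['crm', 'loyalty']
```

<p>The publisher emits the event once; which systems receive it is decided entirely by their subscriptions, so adding an analytics consumer later requires no change to the checkout flow.</p><p>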
EDA lets retailers simultaneously send information about such events to any number of subscribers.</p><p>Another element of loyalty is new customer sign-up. The best chance to get a customer to sign up is when they are ready to check out. Retailers want to entice customers by offering an immediate discount on their purchase if they sign up. Simple as it may seem, several applications need to be woven together to make that happen. For example, the Loyalty application needs to create a new master record for the newly signed-up customer, which then needs to be sent to CRM, ERP (for accounting and invoicing) and retail store management applications such as Magento.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1000/0*rbN8VBo_y2U2PeGR.jpg" /></figure><h3>Omni-Channel Synchronization</h3><p>Gone are the days when the only option consumers had was to get in the car, drive to the store and physically pick up the item(s), or to order it online and wait for the shipment.</p><p>The purchasing and fulfilment pattern called “buy online, pick up in store” or BOPIS, is an omnichannel strategy that lets consumers purchase on their computer or mobile device and pick up from a designated customer service booth, curb-side, or a locker.</p><p>BOPIS blurs the lines between digital and physical shopping, helping retailers offer a more seamless shopping experience. BOPIS can be a profitable online channel for retailers, with <a href="https://www.shopify.com/retail/bopis">59% of consumers interested in BOPIS</a>-type shopping options.</p><p>To succeed with BOPIS, applications for payments, inventory, POS and warehousing must all be synchronized, and communicate with each other in real-time. If a customer orders an item online, specifying their local store, it is important that the store has an accurate quantity of that item in the inventory system. 
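</p><p>That synchronization requirement can be sketched as inventory events keeping every channel’s view of stock consistent. This is a toy model under assumed names (SKU, event shape); in a real deployment the event stream would flow through brokers on an event mesh rather than a Python list.</p>

```python
class InventoryView:
    """A channel's local picture of stock, kept fresh by events."""
    def __init__(self):
        self.stock = {}

    def apply(self, event):
        sku, delta = event["sku"], event["delta"]
        self.stock[sku] = self.stock.get(sku, 0) + delta

# E-commerce and store each maintain a view fed by the same event stream
ecommerce, store = InventoryView(), InventoryView()

events = [
    {"sku": "bbq-900", "delta": +5},  # stock received at the store
    {"sku": "bbq-900", "delta": -1},  # in-store POS sale
    {"sku": "bbq-900", "delta": -1},  # online BOPIS order
]
for event in events:       # every view consumes every inventory event
    ecommerce.apply(event)
    store.apply(event)

# Both channels agree, so the website never shows stock the store lacks
print(ecommerce.stock["bbq-900"], store.stock["bbq-900"])  # 3 3
```

<p>Because both views are driven by the same events as they happen, there is no polling interval during which the website and the store can disagree.</p><p>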
If there are any delays in relaying stock levels between systems — or worse, there is no communication between them — the retailer risks misinforming the customer, wasting their time, and losing them and their loyalty.</p><p>Customers want an optimal experience across channels, and want the transition between the digital and brick-and-mortar channels to be seamless. For this to be achieved, events need to flow securely, reliably and in real-time between the systems that support both.</p><p>When a customer interacts with an online channel, they have a few simple expectations:</p><ul><li>The website or mobile app works</li><li>The stock range/availability is accurate</li><li>There are payment options (e.g. credit card, gift card, store credit, PayPal, etc.)</li><li>There are collection options (e.g. delivery and BOPIS)</li></ul><p>EDA enables all of this by streaming and synchronizing information across all of a retailer’s channels, systems and applications, spanning headquarters, stores and warehouses:</p><ul><li><strong>Headquarters: </strong>HQ is the home of enterprise applications like CRM, e-Commerce, inventory and logistics. The first touchpoint for customers is e-commerce. Stock levels from inventory need to be propagated to the e-commerce platform in real-time to ensure they are accurate and customers aren’t seeing items as “in stock” when they aren’t. In this use case, an event broker at HQ provides the framework for the e-commerce platform to subscribe to Inventory Update events published by the HQ Inventory platform.</li><li><strong>Stores:</strong> If the customer chooses to take advantage of BOPIS, their order needs to be routed from the e-commerce platform to the store of their choice. 
Additionally, the inventory system needs to be synced to ensure the items are in stock for the customer to pick up.</li><li><strong>Warehouses:</strong> If the customer chooses delivery, the warehouse may be the most appropriate and efficient location to source the stock from. A Warehouse event broker connected to the HQ and Store event mesh enables complete inventory synchronization between the HQ, store and warehouse. Further, once the stock has been picked and packed for delivery, an “order ready” event can be published from the warehouse onto the mesh and subscribed to by Logistics at HQ.</li></ul><p>EDA ensures the following:</p><ol><li><strong>Accurate stock data: </strong>The e-commerce platform presents the customer with real-time stock level information at the store of their choice (click-and-collect)</li><li><strong>Accurate inventory data: </strong>Inventory information at HQ is synchronized with inventory information at each store</li><li><strong>Real-time inventory and stock updates: </strong>Any POS transactions in the store update store inventory in real-time, and propagate stock levels to HQ, also in real-time</li></ol><h3>After-Purchase Experience</h3><p>The world of retail is getting smarter, as are the products being sold. A primary example of this is products enabled with RFID and other sensor technology. More and more “smart devices” are hitting the market every year, enabling valuable usage-related information to be pushed to the internet and — potentially — back to the manufacturer for detailed analysis of how their products are used. 
This is the world of Retail IoT.</p><p>Many retail brands have already started turning to IoT, a market expected to grow to $94.44 billion through 2025 (source: <a href="https://www.digiteum.com/internet-of-things-retail-industry/">Digiteum</a>), for a variety of benefits:</p><ul><li>Personalized CX (microtargeting, cost-efficient advertising, etc.)</li><li>GPS and RFID technologies to track and optimize product movement through the supply chain</li><li>Sensor data from wearable devices can be used to capture end-of-warranty and end-of-life information, or as some hoteliers are doing, to identify loyal clients and provide extra services</li><li>Efficient use of in-store staff and optimized product placement (Amazon Go is the most famous example of this implementation)</li><li>Smart shelves (introduced first by <a href="https://www.cincinnati.com/story/money/2015/10/02/next-shelves-giving-cues-kroger/73218252/">Kroger</a> in 2016) with RFID tags can provide customer insight and stock data.</li><li>Improved store management efficiency (drones for inventory monitoring, predictive equipment maintenance, inventory management, automated packing services, SKU accounting)</li></ul><p>With EDA acting as the digital backbone for a retail organization’s dissemination of real-time data, sensor-based data can be shared and leveraged by back-end systems such as CRM and marketing tech. EDA ensures there is a high-performance framework to push product usage events from smart devices/products in real-time when in range of mobile networks, and a resilient framework to buffer events on the broker when mobile connectivity is unavailable, delivering them when connectivity is restored.</p><p>If we broaden our thinking beyond customer-to-back-end event propagation, the connected store IoT use case is also prevalent. 
For example, if a customer product publishes an end-of-life notification, the event can be published to Marketing Tech to identify same or similar products the customer may like as replacements. The same event can be published to brokers at the stores to identify whether the item is in stock. If the event carries GPS information, the closest store to the customer can also be identified. Once marketing tech and store stock levels have been aggregated, this data can be published to CRM, Loyalty and/or e-commerce systems to push a notification/offer out to the customer.</p><h3>Conclusion</h3><p>These are just a few of the ways EDA can offer business value to retailers. Note we didn’t touch on some other sweeping subjects like connected stores or supply chain optimization, both of which are huge, and limited our discussion of omni-channel enablement to BOPIS, which is obviously just one slice of that pie.</p><p>You can learn how EDA can help you integrate the remote edges of retail operations for connected store purposes <a href="https://solace.com/blog/integrating-remote-edges-retail-operations/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">here</a>, how a top CPG company has achieved real-time supply chain visibility with EDA <a href="https://solace.com/blog/cpg-real-time-supply-chain-visibility/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">here</a>, and get a general intro to the idea of “real-time retail” <a href="https://solace.com/blog/real-time-retail-operations-event-driven-architecture/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">here</a>.</p><p>To learn more about EDA in retail, we recommend you check out our comprehensive <a 
href="https://solace.com/resources/retail/the-architects-guide-to-real-time-retail?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">Architect’s Guide to Real-Time Retail</a>.</p><p>Solace has lots of experience helping retailers of all sizes implement EDA within their organization. With our EDA platform, event brokers, and event management tools, retailers can deliver hyper-personalized omni-channel customer experiences and optimize their supply chains by establishing a real-time flow of information across their enterprise.</p><p>To get to know Solace better, <a href="https://solace.com/resources/retail?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">take a look at our work</a>. To discuss your project, <a href="https://solace.com/contact/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda">contact our team</a> — we’ll reach out in no time.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*LQngbuIL1ZxWEuMj.jpg" /><figcaption>With over two decades of digital and product marketing leadership, Alecia currently oversees Solace’s global marketing and enablement campaigns, and vertical product marketing initiatives.</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/business-case-eda-retail/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=business-case-retail&amp;utm_campaign=medium_eda"><em>https://solace.com</em></a><em> on January 27, 2023.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=672dfd867863" width="1" height="1" alt=""><hr><p><a href="https://medium.com/event-driven-times/the-business-case-for-eda-in-retail-672dfd867863">The Business Case for EDA in Retail</a> was originally published in <a href="https://medium.com/event-driven-times">Event-Driven Times</a> on Medium, 
where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Liberating Mainframe Data with Solace PubSub+ and tcVISION]]></title>
            <link>https://medium.com/pubsubplus/liberating-mainframe-data-with-solace-pubsub-and-tcvision-e37c0d71273?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/e37c0d71273</guid>
            <category><![CDATA[pub-sub]]></category>
            <category><![CDATA[mainframe-modernization]]></category>
            <category><![CDATA[real-time-communication]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[event-mesh]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Wed, 21 Dec 2022 15:04:42 GMT</pubDate>
            <atom:updated>2023-08-18T15:02:34.064Z</atom:updated>
            <content:encoded><![CDATA[<p>by Mathew Hobbis</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wApZ54dQroHOYlJOC4FsZg.png" /></figure><p>Organizations are looking to improve business agility and customer experience by embracing modern application integration and development practices, new architectural patterns such as event-driven architecture (EDA), and new deployment locations like the cloud. Many organizations, however, still have a lot of data assets tied to critical applications that run on a mainframe.</p><p>Data that is stored on the mainframe has historically been difficult to integrate with newer platforms due to proprietary formats, data encoding, and access methods. This makes it hard to liberate the data from the mainframe so it can be used by new applications, potentially making use of artificial intelligence, machine learning, and analytics from cloud service providers.</p><p>Together, Solace and <a href="https://www.bossoftware.com/">B.O.S. Software</a> offer organizations a low-risk way to enhance and extend the reach of their mainframe data by liberating data and converting it into events that can be consumed and enriched by new, modern application components across the organization. 
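As a loose illustration of "converting changes into events" (this is not tcVISION's actual record format; every field name and topic level here is a hypothetical stand-in), a captured row change might be mapped to a topic-addressed event like so:

```python
import json

def cdc_to_event(change: dict) -> tuple[str, str]:
    """Map a change-data-capture record to a (topic, payload) event.

    The topic encodes source schema, table, operation and key so that
    downstream subscribers can filter with wildcards. All field names
    are illustrative, not a real CDC format.
    """
    topic = f"mainframe/{change['schema']}/{change['table']}/{change['op']}/{change['key']}"
    payload = json.dumps({"before": change.get("before"), "after": change.get("after")})
    return topic, payload

change = {
    "schema": "BANKDB", "table": "ACCOUNT", "op": "update", "key": "AC-1001",
    "before": {"balance": 250.0}, "after": {"balance": 175.0},
}
topic, payload = cdc_to_event(change)
print(topic)  # → mainframe/BANKDB/ACCOUNT/update/AC-1001
```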
The combination of Solace <a href="https://solace.com/products/platform/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pubsub-tcvision&amp;utm_campaign=medium_pubsub">PubSub+ Platform</a> and <a href="https://www.bossoftware.com/index.php/en/products/tcvision-real-time-mainframe-data-integration">tcVISION</a> enables this without the risk associated with altering the critical and stable mainframe applications that often support the core business.</p><h3>The Solution</h3><p>The solution is made up of two elements:</p><ul><li>tcVISION is a change data capture (CDC) service for application databases on the mainframe, and packages the changes into ‘events’ that are published to an event mesh powered by Solace PubSub+.</li><li>Solace PubSub+ Platform is a full-featured event streaming platform that enables the real-time distribution of information across organizations, including hybrid and multi-cloud systems.</li></ul><p>The interaction between the two components is shown in the figure below. In this example, events are flowing from the mainframe through an event mesh powered by PubSub+ Platform to a set of modern applications providing a mobile front end for a legacy application that is running on the mainframe.</p><p>An event mesh is a set of interconnected event brokers that learn subscriptions from the subscribers, and then exchange the subscription routing information with the rest of the brokers in the event mesh. This means that the event mesh learns dynamically as subscriptions are injected and withdrawn. It also means that an event can be injected anywhere in the mesh, and it will flow to all interested subscribers connected to the mesh.</p><figure><img alt="A diagram showing how events are fed into an event mesh." 
src="https://cdn-images-1.medium.com/max/1000/0*QVuqBuEcHdLv0T-s.jpg" /><figcaption>Figure 1: tcVISION feeding mainframe-generated events to the event mesh</figcaption></figure><p>Such an approach could be used, for example, for:</p><ul><li>Provisioning a new mobile consumer channel; perhaps a new mobile banking application front end for a mainframe core banking application.</li><li>Provisioning a new fraud dashboard that works across multiple legacy payment systems and utilizes cloud-based AI/ML to detect and flag any suspicious activity.</li><li>Provisioning a real-time recall mechanism; imagine an ingredient in a recipe has a recall notice and an organization needs to alert all the factories producing affected products and the distributors of those products.</li></ul><p>The figure shows tcVISION’s capture agent retrieving CDC events and sending them to the tcVISION Replication Server, which provides a flexible transformation service and transforms the mainframe data into something suitable for the wider world.</p><p>tcVISION Replication Server then publishes the event to an event mesh powered by PubSub+ Platform, annotating the published message with a topic by which the event can be routed to all interested, and entitled, subscribers.</p><p>Unlike other streaming platforms, Solace PubSub+ Platform supports hierarchical topics and wildcard subscriptions. This means that the topic is much more than a simple destination label: it should be thought of as a piece of metadata that describes the event. Subscribers can determine which parts of the metadata are important and describe the events that they wish to receive. Parts of the metadata that are not important can simply be replaced by a wildcard. 
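To see why hierarchical topics make filtering powerful, here is a deliberately simplified sketch of wildcard matching, in which `*` matches exactly one topic level and a trailing `>` matches one or more remaining levels. Actual Solace wildcard semantics are richer than this, so treat it purely as an illustration (the topic strings are made up):

```python
def topic_matches(subscription: str, topic: str) -> bool:
    """Simplified hierarchical-topic matching.

    Levels are separated by '/'. In this simplified model, '*' matches
    exactly one level and a trailing '>' matches one or more remaining
    levels. Real Solace wildcard rules are broader than this sketch.
    """
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, s in enumerate(sub_levels):
        if s == ">":
            return len(top_levels) > i  # '>' requires at least one more level
        if i >= len(top_levels):
            return False
        if s != "*" and s != top_levels[i]:
            return False
    return len(sub_levels) == len(top_levels)

# Coarse subscription: every ACCOUNT change from any schema.
print(topic_matches("mainframe/*/ACCOUNT/>", "mainframe/BANKDB/ACCOUNT/update/AC-1001"))  # → True
# Fine subscription: only updates, so an insert does not match.
print(topic_matches("mainframe/BANKDB/ACCOUNT/update/*", "mainframe/BANKDB/ACCOUNT/insert/AC-2002"))  # → False
```

The same subscriber can hold several such subscriptions at once, mixing coarse and fine filters as needed.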
This means that subscribers can have very broad, coarse-grained subscriptions or very specific, fine-grained ones, giving each subscriber control over which event streams it receives.</p><p>Solace PubSub+ Platform also guarantees that messages will be delivered in the order that the publisher sent them. This means that the tcVISION Replication Server can publish events across several topics; perhaps the topic is based on fields within the event and changes for each event. It does not matter — any subscriber subscribing to some or all of the events from the tcVISION publisher will see the events in published order. Unlike some other streaming event brokers, Solace does not force the user to choose between ‘filtering’ and ‘order’.</p><h3>Conclusion</h3><p>Liberating data sets on the mainframe is a vital component of business and business application modernization. Being able to turn changes to these data sets into events enables organizations to distribute them and take real and meaningful business actions based on them. As events are emitted as data changes and propagated in near real-time to interested systems, organizations can act on changes more quickly.</p><p>Solace PubSub+ and tcVISION allow organizations to liberate and event-enable mainframe data to realize business benefits via improved efficiencies and customer experience.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/150/0*Y9RQL2CIlpaerGsu.jpg" /><figcaption>Mat joined Solace in 2005 and currently runs the technical operations within EMEA. He has extensive expertise within the messaging space and has worked to evangelize the appliance form factor for messaging within the region. Following successful sales, Mat has worked closely with all of the major accounts and partners within the region, providing guidance on messaging and application architecture. Prior to joining Solace, Mat worked for Alcatel through the acquisition of Newbridge Networks. 
At Alcatel/Newbridge, Mat held a Systems Architect role and was responsible for the design and successful implementation of many large networks encompassing diverse technologies such as optical transmission, ATM, IP, Ethernet, NGN and Mobile for many large clients. Before making the shift into the vendor space, Mat held numerous positions at NTL and BT (UK carriers), building and operating large networks and services for business and residential customers.</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/liberating-mainframe-data-with-solace-pubsub-and-tcvision/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pubsub-tcvision&amp;utm_campaign=medium_pubsub"><em>https://solace.com</em></a><em> on December 21, 2022.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=e37c0d71273" width="1" height="1" alt=""><hr><p><a href="https://medium.com/pubsubplus/liberating-mainframe-data-with-solace-pubsub-and-tcvision-e37c0d71273">Liberating Mainframe Data with Solace PubSub+ and tcVISION</a> was originally published in <a href="https://medium.com/pubsubplus">Solace PubSub+</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A Guide to Event-Driven Architecture Pros and Cons]]></title>
            <link>https://medium.com/event-driven-times/a-guide-to-event-driven-architecture-pros-and-cons-6071394c4549?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/6071394c4549</guid>
            <category><![CDATA[event-driven-systems]]></category>
            <category><![CDATA[enterprise-technology]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[enterprise-architecture]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Mon, 31 Oct 2022 13:10:31 GMT</pubDate>
            <atom:updated>2023-03-24T13:17:16.643Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Meshvi Patel</em></p><figure><img alt="Abstract graphic with lines connecting various tech icons including laptops, mail, lightbulbs and clouds." src="https://cdn-images-1.medium.com/max/1024/1*DsCWF0v6StLl78dEzWHqHA.png" /></figure><p><a href="https://solace.com/what-is-event-driven-architecture/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">Event-driven architecture</a> is a hot topic amongst enterprise architects and software developers these days. But when it comes to the pros and cons of event-driven architecture, not everyone has made up their mind on what they think.</p><p>Event-driven architecture is defined as an architectural style where decoupled applications, microservices, and IoT devices can asynchronously exchange events as they occur via an event broker. It requires building IT systems such that they can sense/detect business events as they occur (e.g., an order placed, a payment received, inventory updated, an order shipped, etc.) and distribute them in real-time to all interested parties/systems to help make time-critical informed decisions.</p><p><a href="https://solace.com/event-driven-architecture-statistics/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">A global survey about event-driven architecture</a> from 2021 showed that <strong>72% of respondents believe that the benefits of event-driven architecture outweigh the costs</strong>, or at least equal them. Read on to see if you agree.</p><figure><img alt="A bar graph depicting the state of event-driven architecture implementation in businesses globally." 
src="https://cdn-images-1.medium.com/max/1004/1*5wMMhheX9kMipTSSHGmGbw.png" /><figcaption>Source: <a href="https://solace.com/event-driven-architecture-statistics/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">Event-Driven Architecture Statistics (2021)</a></figcaption></figure><p>For more on what event-driven architecture is and the answers to some frequently asked questions, visit this page dedicated to <a href="https://solace.com/what-is-event-driven-architecture/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">understanding event-driven architecture and the related tools and concepts.</a></p><h3>What are the advantages of event-driven architecture?</h3><p>Agility, reliability, availability, and scalability are core attributes of any high-performance enterprise architecture. Common advantages of event-driven architecture include:</p><ul><li>Loose coupling / decoupling of producers and consumers</li><li>Superior fault tolerance</li><li>Fan out and less technical debt</li></ul><p>Below is an analysis of the three advantages of event-driven architecture and how each relates to the essential traits of performant enterprise architectural design.</p><h4>Loose Coupling / Decoupling of Producers and Consumers</h4><p>In event-driven architecture, producers and consumers of events are decoupled. That is, the producer of the event does not need to know who the consumer is, and the consumer need not know who the producer is. Since events are processed asynchronously as they occur and there is no dependency on other services, responses are much faster. There is no blocking on waiting for responses as with synchronous calls.</p><p>The independent and autonomous nature of decoupled producers and consumers also reduces the risk of changing one without necessitating a change to others. 
This gives <strong>greater flexibility and agility</strong> to bring new functionality to market faster. It also makes the architecture <strong>highly scalable and extensible</strong>.</p><h4>Fault Tolerance</h4><p>Decoupling services also means that if one service fails, it does not cause others to fail. The event broker, a key component of event-driven architecture, is a stateful intermediary that acts as a buffer, storing events and delivering them when the service comes back online. Because of this, service instances can be quickly added to scale without downtime for the whole system — thus, <strong>availability and scalability are improved</strong>.</p><h4>Fan Out and Less Technical Debt</h4><p>Event-driven architecture is push-based, so if multiple downstream systems need to act based on the occurrence of the event, then the event can be fanned out to these systems in parallel without the need for custom code. This also saves the downstream systems from continuously polling to check for an event occurrence, resulting in <strong>less resource utilization</strong> in terms of network bandwidth, CPU, etc.</p><h3>What are the disadvantages of event-driven architecture?</h3><p>Building applications with event-driven architecture can be a great way to tie internal features together and make them more responsive. However, not everyone has embarked on their event-driven journey due to some perceived challenges. Three commonly quoted disadvantages of event-driven architecture are:</p><ul><li>Increased complexity</li><li>Debugging and troubleshooting challenges</li><li>Difficulties with monitoring</li></ul><h4>Added Complexity</h4><p>Developers perceive event-driven architecture to be inherently complex. Managing many events, producers and consumers across different business processes and workflows can be daunting. How do you manage the lifecycle of your events? How do you discover existing events? 
How do you generate code without an “event” contract?</p><h4>Debugging and Troubleshooting Challenges</h4><p>With the distributed and decoupled nature of event-driven applications, it can be hard to trace an event from source to destination. This can result in testing and debugging challenges and an increase in resources and time for root-cause analysis.</p><h4>Difficulties with Monitoring</h4><p>Monitoring distributed, highly decoupled applications and systems can be trickier. Since the services are independent of each other, you need a proper design to understand how they interact with each other, and a proper alerting mechanism to understand the knock-on effect should a service fail.</p><h3>Overcoming the Challenges</h3><p>Fortunately, with the right tools on hand, the challenges discussed in the previous section can be overcome. Here are some tools that you can look into to make event-driven architecture an easier sell within your organization:</p><ul><li><a href="https://solace.com/what-is-an-event-portal/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">Event portal</a></li><li><a href="https://opentelemetry.io/">OpenTelemetry</a></li><li><a href="https://www.asyncapi.com/">AsyncAPI specification &amp; code generation</a></li><li><a href="https://solace.com/blog/what-is-distributed-tracing-and-how-does-opentelemetry-work/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">Distributed tracing</a></li></ul><p>The other challenge that comes with event-driven architecture is on the business end. Almost 40% of businesses believe that educating the rest of the company (especially those in less technical roles) on the benefits of event-driven architecture is a major hurdle that prevents them from moving forward. 
While survey respondents also cited the lack of adequate talent as an issue, they agreed that the gap between tech and leadership buy-in, as well as cost, are the least worrisome challenges when it comes to implementing event-driven architecture. <a href="https://solace.com/event-driven-architecture-statistics/#challenges?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">Read stats regarding other business challenges</a>.</p><figure><img alt="A bar graph showing results for whether organizational challenges are or are not a barrier to event-driven architecture implementation." src="https://cdn-images-1.medium.com/max/871/1*CuxdUw_EylWtW6HPiML80Q.png" /><figcaption>Source: <a href="https://solace.com/event-driven-architecture-statistics/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">Event-Driven Architecture Statistics (2021)</a></figcaption></figure><h3>When should you use event-driven architecture?</h3><p>Event-driven architecture is not a new concept; it has been around for decades, but has been gaining popularity, with many modern use cases and IT leaders around the globe touting its effectiveness. 
Different organizations have evaluated the pros and cons of event-driven architecture and <a href="https://solace.com/what-is-event-driven-architecture/#eda-use-cases?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">use it to solve modern business problems</a>, achieving better results in critical decision making, operational efficiency, integration of environments, and innovation.</p><ul><li><strong>Critical Decision Making: </strong>Real-time situational awareness and responsiveness are important for better decision-making in critical situations, like stopping a faulty production line in manufacturing or reducing time to treatment to improve patient outcomes in healthcare systems.</li><li><strong>Operational Efficiency: </strong>Parallel processing where multiple processes need to execute asynchronously off a triggered event.</li><li><strong>Integration of heterogeneous environments</strong>: A network of event brokers (event mesh) can be used to dynamically route events from a variety of locations (on premises, cloud, IoT) across connected applications and devices.</li><li><strong>Innovation</strong>: Agility to add new functionality without impacting existing interfaces gives the flexibility to innovate rapidly.</li></ul><p>Event-driven architecture can make it easier to build applications that run across different types of platforms. And it is a simple and powerful way to employ decoupling, which can help with scaling and building resilient systems. 
The real-time insights enable businesses to be innovators and leaders, like these <a href="https://solace.com/what-is-event-driven-architecture/#who-uses-eda?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">examples of businesses that use event-driven architecture</a>.</p><h3>Conclusion</h3><p>As you have read, there are a few pros and cons to consider when it comes to event-driven architecture, but it is a powerful way to build applications that run across different types of platforms. It’s not a silver bullet or a one-size-fits-all solution, but the many successful implementations (along with 71% of survey respondents) point to the <a href="https://solace.com/event-driven-architecture-statistics/#cost-benefits?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda">advantages outweighing the disadvantages</a>.</p><figure><img alt="Meshvi Patel" src="https://cdn-images-1.medium.com/max/150/0*bUedwGaOk1aI3EWA.jpg" /><figcaption>As one of Solace’s most trusted solutions architects, Meshvi leverages her in-depth knowledge of enterprise application integration, enterprise messaging, iPaaS and other middleware technologies to help customers achieve the benefits of event-driven architecture in the areas of business process management, supply chain management and more. Prior to her work with Solace, Meshvi served as a middleware consultant with Credit Suisse and held technical leadership positions with Barclays Capital and Infosys. In addition to achieving her Solace Solutions Consultant and EDA Practitioner certifications, she is also a Sun Certified Java Professional. 
She holds a bachelor of science (electronics) from Gujarat University, along with a master’s in information technology and a diploma in business management.</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/event-driven-architecture-pros-and-cons/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=pros_cons&amp;utm_campaign=medium_eda"><em>https://solace.com</em></a><em> on October 31, 2022.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6071394c4549" width="1" height="1" alt=""><hr><p><a href="https://medium.com/event-driven-times/a-guide-to-event-driven-architecture-pros-and-cons-6071394c4549">A Guide to Event-Driven Architecture Pros and Cons</a> was originally published in <a href="https://medium.com/event-driven-times">Event-Driven Times</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Add Logging to your Event-Driven System with PubSub+ and ZincSearch]]></title>
            <link>https://medium.com/pubsubplus/how-to-add-logging-to-your-event-driven-system-with-pubsub-and-zincsearch-775d48a6eb4f?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/775d48a6eb4f</guid>
            <category><![CDATA[solace]]></category>
            <category><![CDATA[pub-sub]]></category>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[zincsearch]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Wed, 26 Oct 2022 13:10:38 GMT</pubDate>
            <atom:updated>2023-02-17T19:30:58.217Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Thomas Kunnumpurath</em></p><figure><img alt="ZincSearch and Solace logos on an abstarct tech background." src="https://cdn-images-1.medium.com/max/1024/1*SlyafxGRGBppb1KKop2VmA.png" /></figure><p>In this post, I will demonstrate how you can easily and natively integrate logging into your event-driven system using an up-and-coming Elastic Search alternative called <a href="https://zincsearch.com/"><strong>ZincSearch</strong></a> and <a href="https://solace.com/try-it-now/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=zincsearch&amp;utm_campaign=medium_pubsub"><strong>Solace PubSub+ Event Broker</strong></a>. Before I begin, I’ll introduce the two pieces of that puzzle.</p><h4>ZincSearch</h4><p>ZincSearch is a search engine that can perform full text search on documents. It’s open source, written in <a href="https://go.dev/">Golang</a>, and uses an open source indexing library for Go called <a href="https://blugelabs.com/bluge/">Bluge</a>. It’s viewed as a lightweight replacement for Elasticsearch which is the dominant player in the document search category. It also aims to be a drop-in replacement for Elasticsearch by having Elasticsearch-compatible APIs for applications that want to migrate from Elasticsearch to Zinc. You can find a good write up of a lot of the motivations for building out ZincSearch from the author’s <a href="https://prabhatsharma.in/blog/in-search-of-a-search-engine-beyond-elasticsearch-introducing-zinc/"><strong>blog</strong></a>.</p><p>ZincSearch is currently the fast growing project on GitHub — showing that there is a significant demand/appetite for a simple, lightweight and easy to use alternative to Elasticsearch.</p><h4>Solace PubSub+ Event Broker</h4><p>Solace PubSub+ Event Broker is an event broker that comes in a variety of form factors: hardware, software and SaaS. 
A core differentiating factor of PubSub+ Event Broker is multi-protocol support — including REST — as shown below.</p><figure><img alt="An image depicting Solace multi-protocol, multi-language, and common API support." src="https://cdn-images-1.medium.com/max/1024/1*oNZTzIkq1HGy2hrl7K_OwQ.jpeg" /></figure><p>This means you could perform a RESTful operation directly against the event broker and consume the output as an event stream, or push an event into the event broker and have it do a webhook out to a RESTful endpoint.</p><h3>Installing ZincSearch</h3><p>The first step is to <a href="https://docs.zincsearch.com/installation/">install ZincSearch</a>. I’ll use Docker for this post, but feel free to try any of the other installation methods (make sure to modify /full/path/of/data to match where you created the data directory).</p><pre>mkdir data<br>docker run -v /full/path/of/data:/data -e ZINC_DATA_PATH=&quot;/data&quot; -p 4080:4080 \<br>-e ZINC_FIRST_ADMIN_USER=admin -e ZINC_FIRST_ADMIN_PASSWORD=Complexpass#123 \<br>--name zinc public.ecr.aws/zinclabs/zinc:latest</pre><p>Once you’ve installed ZincSearch, you should be able to access the web interface on http://localhost:4080. I’m also going to assume that you are going to be running this locally for the purpose of this exercise (if not, replace localhost with whatever hostname you are running on).</p><h3>Installing Solace PubSub+ Event Broker</h3><p>There are multiple ways to install the Solace broker. 
You could sign up for a <a href="https://console.solace.cloud/login/new-account?product=event-streaming&amp;utm_source=medium&amp;utm_medium=referral&amp;utm_content=zincsearch&amp;utm_campaign=medium_pubsub">free trial of Solace PubSub+ Cloud</a>, which is a one-click deployment in the cloud of your choosing. But for the sake of simplicity, I suggest you use the <a href="https://solace.com/products/event-broker/software/getting-started/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=zincsearch&amp;utm_campaign=medium_pubsub">Docker Install option</a> once again.</p><pre>docker run -d -p 8080:8080 -p 55554:55555 -p 8008:8008 -p 1883:1883 -p 8000:8000 -p 5672:5672 -p 9000:9000 -p 2222:2222 --shm-size=2g --env username_admin_globalaccesslevel=admin --env username_admin_password=admin --name=solace solace/solace-pubsub-standard</pre><p>(Note: if you see any port conflicts with applications already running, change the port mappings above.)</p><p>You should be able to access the web interface for the broker by hitting <a href="http://localhost:8080">http://localhost:8080</a>.</p><h3>Setting up an Event-Driven Logging Architecture</h3><p>With PubSub+ Event Broker, you publish events using dynamic topics. Topics are simply metadata on an event and do not consume resources on a broker. A representation of a message in Solace is shown below:</p><figure><img alt="Solace message structure with topic and payload." src="https://cdn-images-1.medium.com/max/300/1*Zo8rDrWRnddYEdBALuGTZg.png" /></figure><h4>Publishing and Subscribing to Messages with Solace</h4><p>You can use a <a href="https://www.solace.dev/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=zincsearch&amp;utm_campaign=medium_pubsub">variety of APIs</a> to interact with the Solace broker, but I suggest you use the nifty built-in ‘Try Me’ tab to publish and subscribe to Solace messages. 
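If you'd rather script the publish step than click through a UI, the broker also accepts plain HTTP publishes on its REST messaging port (9000 in the docker command above), where a POST to /TOPIC/&lt;topic-string&gt; publishes the body on that topic. A minimal Python sketch using only the standard library — building the request needs no running broker, and the actual send assumes the default localhost setup from this post:

```python
import json
import urllib.request

def build_publish_request(host, port, topic, payload):
    # Solace REST messaging: POST the message body to /TOPIC/<topic-string>.
    url = f"http://{host}:{port}/TOPIC/{topic}"
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url,
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_publish_request(
    "localhost", 9000, "us/acme/new/ord12345",
    {"region": "us", "orderNumber": "ord12345", "status": "new",
     "item": "widget1", "quantity": 1, "price": 0.99},
)
print(req.full_url)  # http://localhost:9000/TOPIC/us/acme/new/ord12345
# With the broker running, this line would actually publish the event:
# urllib.request.urlopen(req)
```

This is a sketch, not a production client; a real deployment would also configure client authentication and handle HTTP errors.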
To access this ‘Try Me’ tab, log in to the broker’s web interface at <a href="http://localhost:8080/">http://localhost:8080</a>.</p><p>Assuming you used the default credentials, log in with admin/admin. Once inside, navigate to the default VPN and click on the ‘Try Me’ tab as shown below.</p><figure><img alt="Solace WebUI, navigating to the default VPN and the “try me” tab is highlighted." src="https://cdn-images-1.medium.com/max/1024/1*24YOT2oNNVIT3LhMkVooyg.png" /></figure><p>Once you are in the ‘Try Me’ tab, click <strong>Connect</strong> for both the Publisher and the Subscriber; this connects to the broker over a WebSocket connection so you can test publishing and subscribing to messages.</p><p>Now it’s time to publish your first event onto Solace. A key aspect of Solace implementations is a topic taxonomy. Say you are building out microservices for a retail conglomerate called ACME Stores. Your first order of business is to build out a “new order” microservice. 
A suitable topic taxonomy might look something like this:</p><p>[region]/[corporation]/[status]/[ordernumber]</p><p>Solace can transmit any payload, so assume a payload with the following schema:</p><pre>{<br>&quot;region&quot;: string,<br>&quot;orderNumber&quot;: string,<br>&quot;status&quot;: string,<br>&quot;item&quot;: string,<br>&quot;quantity&quot;: number,<br>&quot;price&quot;: number<br>}</pre><p>With the topic taxonomy and schema decided upon, you can publish an event or two using the ‘Try Me’ tab with the following topic and payload:</p><pre>Topic: us/acme/new/ord12345<br>Payload:<br>{<br>&quot;region&quot;: &quot;us&quot;,<br>&quot;orderNumber&quot;: &quot;ord12345&quot;,<br>&quot;status&quot;: &quot;new&quot;,<br>&quot;item&quot;: &quot;widget1&quot;,<br>&quot;quantity&quot;: 1,<br>&quot;price&quot;: 0.99<br>}</pre><p>Your ‘Try Me’ screen should be populated like the image below; click ‘Publish’ and you should see the publisher’s message counter increment.</p><figure><img alt="Solace Web UI for publishing a topic and displaying the message content and count." src="https://cdn-images-1.medium.com/max/624/0*8kQCyuGugPGQHdJH.png" /></figure><p>Now that you have published an event into PubSub+, you need to consume it. The typical pattern for consuming events from Solace is via a construct called a queue. So, let’s create a ‘new order queue’ and attach a new order topic subscription to it.</p><p>Navigate to the Queues tab, click the ‘+ Queue’ button, follow the steps shown below, and click Apply:</p><figure><img alt="Solace Web UI Queue Setup." src="https://cdn-images-1.medium.com/max/1024/1*0gD-pZJkqFWQBkatLnJuVw.png" /></figure><figure><img alt="Solace WebUI displaying the edit queue settings." src="https://cdn-images-1.medium.com/max/1024/1*PKl_JpPpSP4w_pInu7cJ3g.png" /></figure><p>With a queue created, you need to attach a topic subscription to it in order to capture new orders. 
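A nice property of the taxonomy above is that the topic string can be derived directly from the order payload, so topic and body never drift apart. A minimal Python sketch of that idea (the helper name and the hard-coded corporation value "acme" are illustrative, not part of any Solace API):

```python
# Topic taxonomy from the post: [region]/[corporation]/[status]/[ordernumber]
ORDER_TOPIC_TAXONOMY = "{region}/{corporation}/{status}/{order_number}"

def order_topic(order, corporation="acme"):
    # Derive the publish topic from payload fields following the schema above,
    # so every publisher builds topics the same way.
    return ORDER_TOPIC_TAXONOMY.format(
        region=order["region"],
        corporation=corporation,
        status=order["status"],
        order_number=order["orderNumber"],
    )

order = {"region": "us", "orderNumber": "ord12345", "status": "new",
         "item": "widget1", "quantity": 1, "price": 0.99}
print(order_topic(order))  # us/acme/new/ord12345
```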
Since enumerating every possible order number is not feasible, we’ll use topic wildcards, which pattern-match published events onto the queue. The topic pattern to match all new US orders is: us/acme/new/*</p><p>To attach this subscription to the queue, go back to the Queues tab, click on ‘New Order Queue’ and then the Subscriptions tab, click ‘+ Subscription’, enter the topic subscription, and press the Create button:</p><figure><img alt="An image showing the queues listed in order and their details — in this case, just one." src="https://cdn-images-1.medium.com/max/1024/1*fmB0qGghIQSMCgMw_JeW5Q.png" /></figure><figure><img alt="Queue setup with the subscriptions tab highlighted." src="https://cdn-images-1.medium.com/max/1024/1*hFKcq_jtUP9V63C14F-O4A.png" /></figure><figure><img alt="A window to create a subscription." src="https://cdn-images-1.medium.com/max/1024/1*S4yNDvLVa9EpnodkIs1Fug.png" /></figure><p>You now have a queue that is subscribing to all of the US-based new order events that your New Order Microservice can connect to. Test the end-to-end flow by navigating back to the Try Me tab.</p><p>On the right-hand side of the Try Me screen, click ‘Bind to an endpoint to receive guaranteed messages’, enter ‘new-order-queue’, and click Start Consume. Publish the message on the us/acme/new/ord12345 topic again and you will see the consumer receive a message on the right-hand side of the screen:</p><figure><img alt="Publisher window with the topic details and the subscriber window with the queue details." src="https://cdn-images-1.medium.com/max/1024/1*i9rmIdHpq93D09synsS3Qw.png" /></figure><p>Congratulations, you’ve set up a basic event-driven system, which looks like this!</p><figure><img alt="Topic to queue architecture with PubSub+ event broker in the middle, showing the queue and topic details." 
src="https://cdn-images-1.medium.com/max/681/1*vXnI7k4xZ3Z29w6UmzJAZg.png" /></figure><h3>Logging New Orders to ZincSearch</h3><p>Now that you have a basic publish/subscribe system in place, the next requirement is to send all new orders to ZincSearch for searching and indexing. There are many ways you could do this: perhaps have your consumer above also log the event to ZincSearch, or write a separate microservice to do so.</p><p>However, one of the differentiating factors of PubSub+ compared to other event brokers in the market is first-class support for REST, which means you could trigger a webhook out from the Solace broker when an event hits a queue! This way you won’t have to change the publisher or subscriber you implemented above. ZincSearch just becomes another consumer, transparent to the publisher and existing subscribers. The end-to-end architecture is illustrated below.</p><figure><img alt="End-to-end architecture with the producer at the left, PubSub+ event broker in the middle, a consumer and Zinc Search API Endpoint on the right." src="https://cdn-images-1.medium.com/max/749/1*jzeeOb8KyZGHDbuwByyDnA.png" /></figure><p>As shown here, the publisher publishes to a topic us/acme/new/ord12345 and it gets attracted to the <strong>new-order-queue </strong>and also the <strong>zinc-queue.</strong> The key thing here is that the publisher only publishes it once; the Solace broker distributes it to the two queues. In addition, the broker takes on the responsibility of pushing the event out to ZincSearch&#39;s RESTful endpoint for indexing.</p><h4>Connecting Solace PubSub+ and ZincSearch</h4><p>Now that you understand the architecture, here’s how to configure the connectivity to ZincSearch.</p><p>The first step is to create the zinc-queue by navigating to the Queues tab and clicking the +Queue button, with the following steps:</p><figure><img alt="Zinc Search queue setup window for creating and naming a queue." 
src="https://cdn-images-1.medium.com/max/935/1*EalYbdMGetf62dyK-vgNGw.png" /></figure><figure><img alt="A window for editing the zinc search queue details like access type, messages quota, consumer count, etc." src="https://cdn-images-1.medium.com/max/1024/1*3XsyqSSXiIegHMEz7OCMfw.png" /></figure><figure><img alt="Create subscription window with details and a create button." src="https://cdn-images-1.medium.com/max/1024/1*UMNBgFexCAzZXsOfYNsFzA.png" /></figure><p>Now that you’ve set up that queue, set up a connector to the ZincSearch endpoint. To do this, navigate to the ‘Clients’ tab and then to the ‘REST’ tab as shown below:</p><figure><img alt="REST delivery point settings window with the enabled option toggled on." src="https://cdn-images-1.medium.com/max/1024/1*_xhX0riYpUovFcN-n0KJLg.png" /></figure><p>The next step is to create a REST Delivery Point by clicking the ‘+Rest Delivery Point’ button and creating one called ‘zinc-rdp’ as shown below.</p><figure><img alt="A window to create a REST Delivery Point and option for naming." src="https://cdn-images-1.medium.com/max/624/1*6VUf4CVCvZ6Lq_x2SPHqBA.png" /></figure><p>On the next screen, toggle the Enabled button, change the vendor to Zinc Labs as shown below, and click Apply.</p><figure><img alt="REST delivery point settings window with the enabled option toggled on." src="https://cdn-images-1.medium.com/max/624/1*lOpHV0mxhhmVAohdIAL9Gw.png" /></figure><p>Once created, you will be taken back to the REST screen, and you will see<strong> zinc-rdp </strong>as an entry in the table, as shown below:</p><figure><img alt="Original REST screen window open to show the new ZincSearch RDP." 
src="https://cdn-images-1.medium.com/max/1024/1*dRPY_BFB3Yqr3xJ6aji5iA.png" /></figure><p>Click into the zinc-rdp entry, then into REST Consumers, and click the +REST Consumer button, naming your REST consumer ‘zinc-rdp-rest-consumer’ as shown below:</p><figure><img alt="A window with REST consumers tab highlighted and showing the option to add a REST consumer with a button." src="https://cdn-images-1.medium.com/max/1024/1*GKIFEUUfQF3fqnM41tOMzw.png" /></figure><figure><img alt="Naming the REST Consumer." src="https://cdn-images-1.medium.com/max/624/1*gfOmYmzcnkHAwnf7lz86oA.png" /></figure><p>Once you click the Create button, you will be greeted with a screen to configure the REST consumer. Assuming you deployed ZincSearch on the same host as the Solace broker, you will need to change the following settings (and click the Enabled toggle as well):</p><p>Host: host.docker.internal<br> Port: 4080<br> Authentication Scheme: Basic Authentication<br> Username: admin<br> Password: Complexpass#123</p><p>Your screen should look something like this:</p><figure><img alt="A window showing the details of the REST consumer settings." src="https://cdn-images-1.medium.com/max/1024/1*1lAU3ZS-3JTtDN3yZ8bElw.png" /></figure><p>If you did everything correctly, you will see a screen with the REST consumer showing a status of Up, as below:</p><figure><img alt="Window with the REST Consumers tab highlighted and showing the RDP consumer status as “up”." src="https://cdn-images-1.medium.com/max/1024/1*wYYzWlo4FdatX9IQQf3L8w.png" /></figure><p>The very last thing to do is to configure a queue binding: navigate to the Queue Bindings tab and click ‘+Queue Binding’ to create a binding to the previously created ‘zinc-queue’ as shown below:</p><figure><img alt="Window with queue bindings tab highlighted and an option to add a new one with a button." 
src="https://cdn-images-1.medium.com/max/1024/1*tuR5DBYUUDdyaJdSIcjasA.png" /></figure><figure><img alt="Naming the queue binding." src="https://cdn-images-1.medium.com/max/624/1*uSlUEntC9M-1bxuz5mIIbA.png" /></figure><p>In the next screen, you will want to set the target to ZincSearch’s API to upload a doc to the orders index:</p><figure><img alt="Queue binding parameters window with index and document details." src="https://cdn-images-1.medium.com/max/1024/1*w8o4u7pVSg2jGFCM3TdrEg.png" /></figure><p>You do this by inputting /api/orders/_doc as the POST Request Target as shown below:</p><figure><img alt="Adding a parameter to the post request target in the edit queue binding settings window." src="https://cdn-images-1.medium.com/max/1024/1*fLBmCUWBZy8ZenT8ou07fQ.png" /></figure><p>Once again, if everything was done correctly, you will be greeted with a queue binding screen and an operational status of ‘Up’ as shown below:</p><figure><img alt="Queue binding screen showing the status of the new binding as “up”." src="https://cdn-images-1.medium.com/max/1024/1*hkyeUSJlfIuNv57qHOX6Yw.png" /></figure><h3>Testing it all out</h3><p>If you’ve made it this far, you’ve wired everything up successfully; the last thing to do is to test the end-to-end flow. 
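Before publishing, it can help to sanity-check that your topic will actually match the queue's wildcard subscription. This simplified Python sketch models Solace-style matching where '*' matches exactly one whole topic level and a trailing '&gt;' matches one or more remaining levels (Solace additionally allows '*' as a prefix within a level, which this sketch omits):

```python
def topic_matches(subscription, topic):
    # Simplified Solace-style matcher: levels are '/'-separated,
    # '*' matches any single level, a trailing '>' matches the rest.
    sub_levels = subscription.split("/")
    top_levels = topic.split("/")
    for i, s in enumerate(sub_levels):
        if s == ">" and i == len(sub_levels) - 1:
            return len(top_levels) > i  # at least one more level present
        if i >= len(top_levels):
            return False  # topic ran out of levels
        if s != "*" and s != top_levels[i]:
            return False  # literal level mismatch
    return len(sub_levels) == len(top_levels)

# The new-order topic has four levels, so a four-level subscription
# ending in '*' catches every order number:
print(topic_matches("us/acme/new/*", "us/acme/new/ord12345"))  # True
print(topic_matches("us/acme/>", "us/acme/new/ord12345"))      # True
```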
Go back to the ‘Try Me’ tab and publish a new order once again with the following topic/payload:</p><pre>Topic: us/acme/new/ord12345<br>Payload:<br>{<br>&quot;region&quot;: &quot;us&quot;,<br>&quot;orderNumber&quot;: &quot;ord12345&quot;,<br>&quot;status&quot;: &quot;new&quot;,<br>&quot;item&quot;: &quot;widget1&quot;,<br>&quot;quantity&quot;: 1,<br>&quot;price&quot;: 0.99<br>}</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/624/1*FsLEip-WLUiYL7kUr3lUBA.png" /></figure><p>If everything worked, you should see an “orders” index created containing order ord12345 in your ZincSearch UI (which can be accessed by hitting <a href="http://localhost:4080">http://localhost:4080</a>)</p><figure><img alt="ZincSearch UI window showing an index order created." src="https://cdn-images-1.medium.com/max/1024/1*chkKywv7sJT2CaeRbEHORQ.png" /></figure><h3>Conclusion</h3><p>In this post, I demonstrated how easy it is to extend your event-driven architecture over Solace PubSub+ with ZincSearch. By implementing this natively within the event broker, you avoid modifying an existing microservice or deploying an entirely new one to handle logging to ZincSearch. You can further extend this pattern to do more sophisticated logging into ZincSearch by updating existing indexes or logging multiple stages of a workflow, with no interruption to your existing microservices, as all the steps I described above are in-service activities.</p><figure><img alt="Thomas Kunnumpurath" src="https://cdn-images-1.medium.com/max/150/0*_YVBAMmvwgfpzhXs.jpg" /><figcaption><strong>Thomas Kunnumpurath</strong> is the Vice President of Systems Engineering for Americas at Solace, where he leads a field team across the Americas delivering solutions built on the Solace PubSub+ Platform across a wide variety of industry verticals such as Finance, Retail, IoT and Manufacturing. 
Prior to joining Solace, Thomas spent over a decade of his career leading engineering teams responsible for building out large-scale, globally distributed real-time trading and credit card systems at various banks. Thomas enjoys coding, blogging about tech, speaking at conferences and being invited to talk on podcasts. You can follow him on Twitter at @TKTheTechie, on GitHub at @TKTheTechie, and on his blog at TKTheTechie.io</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/event-driven-logging-pubsub-zincsearch/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=zincsearch&amp;utm_campaign=medium_pubsub"><em>https://solace.com</em></a><em> on October 26, 2022.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=775d48a6eb4f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/pubsubplus/how-to-add-logging-to-your-event-driven-system-with-pubsub-and-zincsearch-775d48a6eb4f">How to Add Logging to your Event-Driven System with PubSub+ and ZincSearch</a> was originally published in <a href="https://medium.com/pubsubplus">Solace PubSub+</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[PubSub+ Event Portal vs Confluent Stream Governance]]></title>
            <link>https://medium.com/pubsubplus/pubsub-event-portal-vs-confluent-stream-governance-468a276c6aa3?source=rss-9eda3d135eff------2</link>
            <guid isPermaLink="false">https://medium.com/p/468a276c6aa3</guid>
            <category><![CDATA[event-driven-architecture]]></category>
            <category><![CDATA[event-streaming]]></category>
            <category><![CDATA[confluent]]></category>
            <category><![CDATA[solace]]></category>
            <category><![CDATA[enterprise-architecture]]></category>
            <dc:creator><![CDATA[Solace]]></dc:creator>
            <pubDate>Fri, 30 Sep 2022 13:16:15 GMT</pubDate>
            <atom:updated>2023-03-22T16:34:06.014Z</atom:updated>
            <content:encoded><![CDATA[<p><em>By Sandra Thomson</em></p><figure><img alt="Abstract tech graphic resembling a 3D blue gridded structure." src="https://cdn-images-1.medium.com/max/1024/1*WiOJWRyU2SJPbI5J5pM-fw.png" /></figure><p>If your company has started to embrace <a href="https://solace.com/what-is-event-driven-architecture/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">event-driven architecture</a> (EDA), you’ve probably been considering or struggling with how to manage your “events” with some semblance of governance, reuse and visibility. If that’s you, then read on!</p><p>In 2020, Solace introduced <a href="https://solace.com/products/portal/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">PubSub+ Event Portal</a> to make it easier for enterprises to efficiently visualize, model, govern and share events across their organization. Why?</p><p>While organizations are embracing EDA, many are finding it hard to realize the intended benefits of better customer experiences, more efficient operations, and greater agility due to a lack of tools that help them manage the lifecycle of events, integrate with software development tools, and expose event flows for reuse.</p><p>Forrester advises that organizations should “implement governance and lifecycle management for event streams. As they grow in importance, you can include enterprise business events in the portfolio management you should already have in place for APIs. Establish an event portal where users of event streams can discover events, understand their functionality, and subscribe.”(1)</p><p>If you agree that there is a need for a toolset that helps you manage events, the next question is — what should you look for in such a toolset? 
And which one should you use?</p><p>I caught up with <a href="https://solace.com/blog/author/shawnmcallister/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">Shawn McAllister</a>, Solace’s chief technology and product officer, to get his take on what enterprises should look for in a tool to manage their events, and how our own offering — PubSub+ Event Portal — meets those needs and compares to alternatives like Confluent’s Stream Governance product.</p><h4>Sandra: Let’s start by setting the stage: what does “event management” mean?</h4><p>Shawn: First of all, it’s not about hosting a conference or a wedding. Event management is like API management for your event-driven system. Many companies are building event-driven microservices and/or integrating real-time data. To achieve either of these significant initiatives, there are certain things architects and developers need to be able to do. One is to have an event broker to route your events where they need to go as they stream across your enterprise in real time. That need has been met by event brokers like Apache Kafka and our own PubSub+ Event Broker, but there’s been a lack of tools to <em>manage</em> your events like you do your RESTful APIs.</p><p>Without such tooling, there’s no way for your architects and developers to collaborate, making it hard to find events to reuse, to know what events are available where, to keep documentation up to date, and to make complex model changes without impacting upstream and/or downstream applications. There’s no way to manage the lifecycle of events and other assets that are part of an event-driven system. 
Having a solution to this problem was a critical factor in making the API economy so successful; filling the gap that existed in 2020 for the event-driven world is what we set out to do, and we call it event management.</p><p><a href="https://solace.com/products/portal/kafka/">PubSub+ Event Portal for Apache Kafka - Solace</a></p><h4>Can you describe the key functions an event management tool should perform?</h4><p>We believe that an event management tool must accelerate and simplify the entire event lifecycle, from design and development right through operation and retirement. It’s innovative, disruptive, and customer-centric, and it’s the right thing to do to ensure EDA can become mainstream!</p><p>There are some core functions we think every event management tool should do. Get ready, it’s a long list:</p><ol><li><strong>Event Design</strong> — architects &amp; developers need to be able to define their events, schemas and applications…describe them, annotate them, enforce governance, and foster best practices and consistency in their design before they’re deployed. Event streams as reusable assets and interfaces need to be thought through and well described just like RESTful APIs — or it makes reuse really hard.</li><li><strong>Runtime Discovery</strong> — Many enterprises have event streams now, but in my experience most don’t have a holistic understanding of what all their streams are, which apps are consuming them, the business moment they represent — and they certainly don’t know all of that across various pre-prod and prod environments. So, you need event management to “learn” this from your broker cluster and show you what’s really there vs. what you think is there.</li><li><strong>Runtime Audit</strong> — once you design or discover the flows in your system, you need an ongoing audit to compare your design intent (in event management) to the deployed runtime (in your brokers) and flag any discrepancies for resolution. 
Just because something is deployed doesn’t mean it’s right.</li><li><strong>Lifecycle Management</strong> — you need to be able to manage different versions of different events across different environments (pre-prod, prod) from creation to release to new versions to retirement of the event. You rarely build something and don’t evolve it — handling “day 2” concerns is just as important.</li><li><strong>Catalog &amp; Search</strong> — now that you have an inventory of your event streams across environments, you need to ensure they can be found and reused by consumer apps in a self-service manner since this is the key to agility for new application creation. This is where a rich search capability across your many clusters in many environments is critical — otherwise, you don’t get reuse.</li><li><strong>Digital Products</strong> — for use cases like data mesh and other real-time integration use cases, you need to define, curate and manage your event streams as digital products, often managed by a product manager. You need to be able to bundle associated events, maybe from different producers, into digital products that we call event API products for sharing, either within your organization or outside.</li><li><strong>Graphical Visualization</strong> — for event-driven microservices, you want to see which microservice is producing which events and who is consuming them in a visual manner that helps you understand the choreography and architecture of your distributed application or data pipeline and the events used as interfaces between them.</li><li><strong>Tooling Integration</strong> — no new tool can exist in a silo. 
Event management needs to be API-first and integrate well with existing tooling such as Git, Confluence, IntelliJ, and Slack to support your development and GitOps processes.</li><li><strong>Support for Several Event Brokers </strong>— most enterprises have several message or event brokers, but they do not want multiple tools to manage the information flowing through them. So having one event management tool that handles many of them, and more importantly can be extended by you to support more, is key.</li></ol><p>It was with those capabilities in mind that we created PubSub+ Event Portal, which we launched in 2020.</p><h4>Can PubSub+ Event Portal do <em>all</em> of these things?</h4><p>Event Portal can do most of them today and will be able to do all of them by early next year. It’s always been our vision to deliver a product that provides all this functionality, and since it’s been on the market for two years we’ve learned a lot from our customers on how it can be improved and what else they need it to do.</p><h4>Since you brought PubSub+ Event Portal to the market, Confluent introduced Stream Governance. How do they compare?</h4><p>While I am not an expert on Stream Governance, I’d say that while at first glance they seem to serve a similar purpose, in reality they are quite different. Stream Governance only works with Confluent Cloud, and really only adds value in the area of understanding your live operational environment. 
Its catalog interrogates a particular Confluent Cloud environment to show you what’s happening in its clusters at that time — which is <em>after</em> an application has been designed, developed and deployed — at a stream and stream processing level.</p><p>This is useful for operational reasons once an application or a stream is deployed, but it doesn’t help developers create or modify applications or streams, guide best practices, or apply governance at design time, and it does not provide a “design vs. actual” comparison to ensure what you have deployed is what was intended. This is all because it provides visibility of what is deployed — not what you are designing. It is tightly integrated with Confluent Cloud and combines other functionality to provide runtime lineage and data quality capabilities — again, both are operationally focused.</p><p>Event Portal, on the other hand, helps architects and developers across all phases of the software development lifecycle, not just after deployment, and is designed to support many types of brokers so you don’t need to have a different tool for each event broker you use. We view Event Portal as the central point that helps you design, discover, catalog, manage, govern, visualize and share your event-driven assets. It has specific functionality to help you build event-driven microservices applications and to define event-driven interfaces/APIs for real-time data integration, data meshes and analytics.</p><p>Event Portal works not only with Solace brokers, but <a href="https://solace.com/products/portal/kafka/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">with Apache Kafka</a>, including Confluent and MSK brokers. It discovers and imports topics, schemas and consumer groups from these brokers. 
The open-source Event Management Agent, a component of Event Portal, has a pluggable architecture so you can add your own plugin to perform discovery from other brokers too.</p><p>So I say use the right tool for the right job: these tools serve complementary purposes. Operational visibility into your Confluent Cloud cluster stream architecture is best done with Stream Governance. But if you want a tool to support cradle-to-grave, design-time management of your event-driven assets across environments and with different types of event brokers, then Event Portal is what you want. If you are a Confluent Cloud user, then maybe you want both.</p><h4>Can you elaborate on design vs operational concerns?</h4><p>In terms of design concerns, here are a few examples of what people want to do at design time in event-driven systems that they need tools for:</p><ul><li><strong>Design and Share Events and Digital Products</strong> — Developers want to produce events that others can consume. We see more and more organizations where product managers create digital products for internal or external consumption. To do this, you need to collaborate to define the specifics of the streams for your domain. Say for a <a href="https://solace.com/blog/what-is-data-mesh-architecture-faq/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">data mesh</a>, you want to describe these streams, annotate them, and have them in a catalog with custom fields, descriptions, owners, and state so others can find and consume them. You want to version them as they change, and deprecate them at end of life while doing change impact assessments. You want to see which topics use which schemas, and which applications are consuming which topics.
You want to bundle events into an event API to describe, say, all order management events or all fulfillment events, and make them available to consumers as a managed digital asset.</li><li><strong>Consume Events</strong> — Other developers want to find the stream or streams they need to consume to build their application, and they’d ideally like to do so using a rich Google-like search of a catalog that spans all environments and clusters and contains all the metadata that owners have added to annotate topics, schemas, and applications. They want to know which cluster they can consume a stream from, whether in pre-prod or prod, which version is where, whether security policies allow them to consume it, whether it is being deprecated, and who to talk to if they have questions.</li><li><strong>Define Applications and Generate Code</strong> — You want to define event-driven applications and describe what they do and who owns them, link out to Confluence pages, or import this data into Confluence. When a developer has defined their microservices and the events that each one produces and consumes, you want to visualize the interactions at design time and afterwards to better understand your system, and being able to generate code stubs using AsyncAPI code generators is a big productivity boost.</li><li><strong>Govern Event-Driven Applications and Information</strong> — Enterprise architects want to decompose their enterprise for democratized data ownership; create and enforce a topic structure, associated with domains, that makes ownership and governance clear; enforce role-based access controls (RBAC) uniformly across teams at design time, not just at runtime; and define and enforce conventions and best practices.
And an increasing number of enterprise architects I’ve talked to want to integrate their API management solution or developer portal with their event management services so they can offer a “one-stop shop” developer experience for synchronous and asynchronous APIs.</li></ul><p>Here are some of the areas in which Stream Governance does not address design-time concerns:</p><ul><li>Stream Governance provides some of this information, but only for streams currently available in a particular Confluent Cloud environment. This means that if you are designing or updating a stream but have not deployed it yet, you can’t see it.</li><li>Its role-based access controls do not let you control information sharing in as fine-grained a way, and can’t be managed by data owners.</li><li>Stream Governance supports tags on schemas, records, and fields, but no other metadata on other objects (e.g. user-defined attributes, owners, etc.).</li><li>Since it is not a design tool, promotion of a given artefact between environments and AsyncAPI import/export are not supported. You are simply shown what is in your environment.</li></ul><p>However, from an operational point of view, Stream Governance is better suited for managing schema registries, validating schemas at runtime, determining runtime data lineage, and resolving operational issues involving the flow of streams in Confluent Cloud.</p><h4>Can you tell me how the lifecycle management capabilities of the two solutions differ?</h4><p>Lifecycle management is defined as the people, tools, and processes that oversee the life of a product/application/event from conception to end of life.</p><p>Most enterprise software development teams follow a software development lifecycle management process in delivering high-value applications to their organization.
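</p><p>To make the governance point concrete: a design-time tool can mechanically enforce a topic naming convention. The sketch below checks candidate topic names against a hypothetical <code>domain/object/verb/version</code> convention — the convention itself is made up for illustration, not one mandated by Event Portal or any particular broker:</p>

```python
import re

# Hypothetical topic convention: domain/object/verb/v<major>,
# e.g. "orders/order/created/v1". Purely illustrative.
TOPIC_PATTERN = re.compile(
    r"^[a-z][a-z0-9]*"      # domain
    r"/[a-z][a-z0-9]*"      # object
    r"/[a-z][a-z0-9]*"      # verb (past tense, e.g. created)
    r"/v[0-9]+$"            # major version
)

def is_valid_topic(topic: str) -> bool:
    """Return True if the topic follows the convention above."""
    return TOPIC_PATTERN.fullmatch(topic) is not None

print(is_valid_topic("orders/order/created/v1"))  # True
print(is_valid_topic("Orders/OrderCreated"))      # False
```

<p>A check like this can run in CI or inside the catalog itself, rejecting non-conforming topic names at design time, before anything is deployed.</p><p>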
We believe that event management <a href="https://solace.com/blog/software-development-lifecycle-with-event-portal/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">must integrate with these development processes and tools</a> to provide added governance and visibility into EDA application development and deployment. As a result, new application and event versions can be created efficiently and without negative impacts to downstream applications.</p><p><a href="https://solace.com/blog/software-development-lifecycle-with-event-portal/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">Supercharge your Software Development Lifecycle with PubSub+ Event Portal - Solace</a></p><p>This graphic shows the typical stages of development of an event-driven application or API, and Event Portal plays a role in each of them. It supports everything from the definition of streams and microservices and the visualization of their interactions during the design phase, to versioning as you evolve artefacts on day 2, to environments and promotion between them, to auditing of design intent vs. what is actually deployed. We have created open-source integrations that connect Event Portal with tools like Confluence for design, IntelliJ for development, and Git for GitOps, and you can use Event Portal’s API to connect its catalog and repository with your own favorite developer tools, using our open-source integrations as examples.</p><figure><img alt="An image of a wheel with the typical stages of development of an event-driven application/API and inside each section is how PubSub+ Event Portal plays a role."
src="https://cdn-images-1.medium.com/max/692/0*UfKhSuKCC-lGei5V.jpg" /></figure><p>In comparison, Stream Governance provides visibility into what is deployed in Confluent Cloud clusters — and it does give you great visibility of that — but not across the entire development lifecycle and not beyond Confluent Cloud deployments.</p><h3>Conclusion</h3><p>Event Portal and Stream Governance solve different problems in overlapping situations. Event Portal focuses on cradle-to-grave design, discovery, reuse, audit, and lifecycle management of your event streams, schemas, and applications across Solace brokers and Kafka brokers, including Confluent Platform, Confluent Cloud, Apache Kafka, and MSK, plus others that can be added via the open-source Event Management Agent. Stream Governance focuses on visibility of your runtime streams in Confluent Cloud deployments. Which tool is best for you depends on the problem you are trying to solve.</p><p>I hope this Q&amp;A with Shawn has helped you understand the difference between PubSub+ Event Portal and Stream Governance.
You can learn more about Event Portal at <a href="https://solace.com/products/portal/kafka/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub">https://solace.com/products/portal/kafka/</a></p><ol><li>Source: Forrester Research, “Use Event-Driven Architecture in Your Quest for Modern Applications”, David Mooter, April 9, 2021</li></ol><figure><img alt="Sandra Thomson" src="https://cdn-images-1.medium.com/max/200/1*NSyJ1gFWd0s9Eq7pUSWLPQ.png" /><figcaption>Sandra Thomson was the Director of Product Marketing at Solace until 2023.</figcaption></figure><p><em>Originally published at </em><a href="https://solace.com/blog/pubsub-event-portal-vs-confluent-stream-governance/?utm_source=medium&amp;utm_medium=referral&amp;utm_content=ep_vs_stream&amp;utm_campaign=medium_pubsub"><em>https://solace.com</em></a><em> on September 30, 2022.</em></p><hr><p><a href="https://medium.com/pubsubplus/pubsub-event-portal-vs-confluent-stream-governance-468a276c6aa3">PubSub+ Event Portal vs Confluent Stream Governance</a> was originally published in <a href="https://medium.com/pubsubplus">Solace PubSub+</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>