<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Object Computing - Medium]]></title>
        <description><![CDATA[Women-owned tech consulting firm with deep expertise in designing, modernizing, &amp; connecting mission-critical platforms &amp; systems. Working in collaboration with a global tech ecosystem, we partner with our clients to build innovative, sustainable, &amp; impactful systems &amp; software. - Medium]]></description>
        <link>https://medium.com/object-computing?source=rss----849f5535ced0---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Object Computing - Medium</title>
            <link>https://medium.com/object-computing?source=rss----849f5535ced0---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 21:39:27 GMT</lastBuildDate>
        <atom:link href="https://medium.com/feed/object-computing" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Level Up with AI: Insights on Gaming from a Data Strategy Veteran]]></title>
            <link>https://medium.com/object-computing/level-up-with-ai-insights-on-gaming-from-a-data-strategy-veteran-b08e7d8ba4f0?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/b08e7d8ba4f0</guid>
            <category><![CDATA[gaming]]></category>
            <category><![CDATA[data-strategy]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Thu, 09 Jan 2025 15:26:49 GMT</pubDate>
            <atom:updated>2025-01-09T15:26:37.965Z</atom:updated>
            <content:encoded><![CDATA[<p>Q&amp;A with Andrew Montgomery</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*l9suv4v5UUOHF18-cqPN3w.png" /></figure><p>Over the last several decades, the unprecedented surge in the video game industry has led to its rise as <strong>THE</strong> titan in the global entertainment and media sector.</p><blockquote>The surge is not expected to stop anytime soon with a projected revenue growth from $262 billion in 2023 to $312 billion in 2027.* AI is the catalyst for this growth, enabling game developers to create more immersive, accessible, and profitable games.</blockquote><p>In this Q&amp;A, our VP of Strategy Andrew Montgomery, a 20-year veteran of data strategy, will discuss the future of gaming and AI. We’ll delve into his insights on the most exciting area of gaming, the biggest AI challenge for 2025, and practical advice for teams integrating AI into their development life cycle.</p><p><strong>What area of gaming excites you the most?</strong></p><p>Player analytics because I have been in data science for most of my career. I see so many interesting things happening in player behavior analysis. AI is used to gather and analyze vast amounts of player data, providing insights into player behavior, preferences, and engagement patterns. This helps in making informed decisions for game updates and new features.</p><p>With predictive analytics, AI models predict player churn, in-game spending, and other key metrics, enabling better strategic planning and targeted engagement efforts.</p><p>This is where smart organizations can lean into new avenues of growth. With rich profiles and player insights, brands can bring forward cross-title offerings and mashups that excite and draw new customers to their brand. 
In the digital worlds that are informed and shaped by AI, anything is possible.</p><p><strong>When you think about AI in gaming, which challenge should organizations focus on in 2025?</strong></p><p>I think organizations in the gaming space are rightly focused on scaling their titles and improving player retention, and system interoperability combined with advancements in AI are key to achieving those business outcomes. Additionally, the notion of what constitutes a “gaming platform” has continued to evolve. It is no longer just a console or a PC but includes everything from handheld devices to mobile phones, tablets, and wearables. The diversity of platforms in conjunction with the legacy infrastructure many game brands are built on creates a significant barrier to AI adoption.</p><p>Thankfully, with the rise of modern computing from system virtualization and cloud computing, models and languages can find new life in build-once-deploy-anywhere ecosystems. This provides some reprieve but ultimately architecting and designing gaming frameworks is an essential step in creating a game platform that can scale.</p><p><strong>As a data strategist for many years, what is your advice for teams who want to integrate AI into their development life cycle?</strong></p><p>Quickly understanding data gaps is pivotal for integrating AI into the dev cycle for many reasons including building and securing confidence with stakeholders.</p><p>It’s important to realize what looks good with simulated data may not be achievable in practice for many reasons. Commonly identified issues include system interoperability, data quality, and data governance. It is critical to understand gaps and mitigation strategies early while evaluating and prioritizing use cases to develop. Stakeholder support can be lost if you experience too many false starts.</p><p>As we look ahead to 2025, I’m excited to see how AI continues to shape the future of gaming. 
For any organization looking to level up with impactful AI capabilities, I encourage you to reach out. <a href="https://objectcomputing.com/services/contact-us">Let’s discuss how we can partner to unlock the full potential of AI.</a></p><p>Sources:</p><ul><li>Jon Wakelin and Alex Baker, “Top 5 developments driving growth for video games,” pwc (blog), 16 January, 2024, <a href="https://www.pwc.com/us/en/tech-effect/emerging-tech/emerging-technology-trends-in-the-gaming-industry.html">https://www.pwc.com/us/en/tech-effect/emerging-tech/emerging-technology-trends-in-the-gaming-industry</a></li></ul><p><em>Andrew Montgomery, vice president of strategy, is an experienced technology executive and data strategist with 20+ years of experience with Fortune 500 companies. Andy’s focus is helping customers unlock their data to simplify business complexities and reshape business outcomes.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b08e7d8ba4f0" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/level-up-with-ai-insights-on-gaming-from-a-data-strategy-veteran-b08e7d8ba4f0">Level Up with AI: Insights on Gaming from a Data Strategy Veteran</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Flutter Fundamentals: Building a Mobile App with APIs and Databases]]></title>
            <link>https://medium.com/object-computing/flutter-fundamentals-building-a-mobile-app-with-apis-and-databases-143e6d628678?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/143e6d628678</guid>
            <category><![CDATA[mobile-apps]]></category>
            <category><![CDATA[flutter]]></category>
            <category><![CDATA[mobile-app-development]]></category>
            <category><![CDATA[app-development]]></category>
            <category><![CDATA[dart]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Wed, 23 Oct 2024 18:48:57 GMT</pubDate>
            <atom:updated>2024-10-23T18:53:42.370Z</atom:updated>
            <content:encoded><![CDATA[<p>By Chad Elliott</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Av9xu3WTJboTVl-AhC53dg.png" /></figure><p>Most mobile apps rely on external APIs and database storage to function effectively. Flutter, a powerful cross-platform framework that uses the Dart programming language, provides a streamlined way to integrate these components into your app development process. In this article, we’ll guide you through the essential steps of building a Flutter app that interacts with APIs and databases, demonstrating the simplicity and efficiency of this approach.</p><h3>Getting Started</h3><p>First things first, let’s tell Flutter that we want to create a new application called <em>music_collector</em>. We will assume that you already have Flutter installed on your development machine. If you have Android Studio installed, you can create a new application through the File menu system. Or, if you prefer, you can do it on the command line.</p><pre>flutter create --org com.your_organization music_collector</pre><p>Once the application has been created, we need to indicate which packages we plan on using. 
We can do this by either editing the pubspec.yaml directly or by using flutter on the command line.</p><pre>flutter pub add elite_orm http numberpicker path path_provider sqflite xml</pre><p>This will add the following lines to your pubspec.yaml under the dependencies section:</p><pre>  elite_orm: ^1.0.9<br>  http: ^1.2.0<br>  numberpicker: ^2.1.2<br>  path: ^1.8.3<br>  path_provider: ^2.0.13<br>  sqflite: ^2.3.2<br>  xml: ^6.1.0</pre><p>We’ll be using the <a href="https://pub.dev/packages/elite_orm"><em>elite_orm</em></a> and <em>sqflite</em> packages to persist our music albums and the <em>http</em> and <em>xml</em> packages to access an online music database API.</p><p>After we get the project created and configured, we’re ready to start writing our app by creating the model.</p><h3>Defining the Model</h3><p>By using <em>elite_orm</em>, we can create a class that will serve as our data model and, in doing so, we will have essentially written all of the code needed to persist instances of our model in the database. This package greatly simplifies the effort required to create and read persistent data. First, let’s import the package.</p><pre>import &#39;package:elite_orm/elite_orm.dart&#39;;</pre><p>After that, it’s just a matter of extending the Entity class and adding our data members to describe what our model will contain. We define our data members and indicate a composite primary key to ensure that each album by the same artist is unique within the database.</p><pre>class Album extends Entity&lt;Album&gt; {<br> Album([artist = &quot;&quot;, name = &quot;&quot;, DateTime? release, Duration? length])<br>     : super(Album.new) {<br>   // The composite primary key is artist and name.<br>   members.add(DBMember&lt;String&gt;(&quot;artist&quot;, artist, true));<br>   members.add(DBMember&lt;String&gt;(&quot;name&quot;, name, true));<br>   members.add(DateTimeDBMember(&quot;release&quot;, release ??
DateTime.now()));<br>   members.add(DurationDBMember(&quot;length&quot;, length ?? const Duration()));<br> }<br><br> // Accessors.<br> String get artist =&gt; members[0].value;<br> String get name =&gt; members[1].value;<br> DateTime get release =&gt; members[2].value;<br> Duration get length =&gt; members[3].value;<br>}</pre><p>That’s it! We can now <em>Create</em>, <em>Read</em>, <em>Update</em>, and <em>Delete</em> Album objects within our database using the Bloc class provided by <em>elite_orm</em>.</p><h3>Accessing an Online API</h3><p>There are a multitude of Dart packages that make a Flutter developer’s life easy. There’s a package for many of the basic tasks required to develop a complex application. Making HTTP requests and parsing XML are no exceptions.</p><p>We’ll be using the open music encyclopedia MusicBrainz. Our use of the online database will be fairly narrow and simple. We’re going to provide a list of albums released by an artist by performing two searches. The first will be an artist search based on input from the user and the second will be a search of the releases made by the artist based on the artist identifier obtained from the first search.</p><p>First, we need to import our packages and files.</p><pre>import &#39;package:http/http.dart&#39; as http;<br>import &#39;package:xml/xml.dart&#39;;<br>import &#39;../model/album.dart&#39;;</pre><p>For brevity, we’re not going to show the whole class. If you want to see all of the private helper methods, please see the <a href="https://github.com/ocielliottc/music_collector">GitHub repository</a>.</p><p>The entry point for consumers of this class is the <em>getAlbums</em> method. It takes a string that represents all or part of an artist’s or group’s name. It makes a call to get the artist identifier and, after receiving it, the method makes another call to get the releases associated with that artist identifier.
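The private helpers mostly just walk the XML returned by MusicBrainz. Purely as a hypothetical illustration (not the repository’s actual code), a helper like _getArtistInfo might use the xml package along these lines, assuming the ws/2 artist search response nests artist elements that carry an id attribute and a name child element:

```dart
import 'package:xml/xml.dart';

// Hypothetical sketch of one omitted helper: find the first artist
// element whose name matches the query (case-insensitively) and return
// its MusicBrainz id and canonical name.  See the repository for the
// real implementation.
List<String> _getArtistInfo(String artist, String body) {
  final document = XmlDocument.parse(body);
  final query = artist.toLowerCase();
  for (final element in document.findAllElements('artist')) {
    final name = element.getElement('name')?.innerText ?? '';
    if (name.toLowerCase().contains(query)) {
      return [element.getAttribute('id') ?? '', name];
    }
  }
  // Mirror getAlbums' expectation: an empty id means "not found".
  return ['', ''];
}
```

The two-element list shape matches how getAlbums consumes the result (artistInfo[0] as the id, artistInfo[1] as the name).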
All in all, the processing of the XML is fairly simple and, frankly, not that interesting.</p><p>As you may already know, the leading underscore for members and methods indicates that it is private. We define a private API URL and a couple of private helper methods to get different URLs to help us get the data we need.</p><pre>class MusicBrainz {<br> static const String _apiURL = &quot;https://musicbrainz.org/ws/2&quot;;<br><br> String _getArtistURL(String artist) =&gt; &quot;$_apiURL/artist?query=$artist&quot;;<br> String _getReleasesURL(String artistId) =&gt;<br>     &quot;$_apiURL/artist/$artistId?inc=release-groups&quot;;<br><br> Future&lt;List&lt;Album&gt;&gt; getAlbums(String artist) async {<br>   // Request a list of artists that partially match &quot;artist&quot;.<br>   http.Response response =<br>       await http.get(Uri.parse(_getArtistURL(artist)));<br>   if (response.statusCode == 200) {<br>     // Get the artist id from the response, if possible.<br>     final List&lt;String&gt; artistInfo = _getArtistInfo(artist, response.body);<br>     // Now request albums from the artist (if we found one).<br>     if (artistInfo.first.isNotEmpty) {<br>       final String artistId = artistInfo[0], artistName = artistInfo[1];<br>       response = await http.get(Uri.parse(_getReleasesURL(artistId)));<br>       if (response.statusCode == 200) {<br>         return _getAlbums(artistName, response.body);<br>       }<br>     }<br>   }<br>   return [];<br> }<br>}</pre><h3>Writing the User Interface</h3><p>Our UI is going to be straightforward and utilitarian. We’ll have just two screens, one for adding or editing albums and another for listing the albums in our database. We’re going to start with the more complicated of the two, the add/edit album screen.</p><h3>Editing Interface</h3><p>The majority of the layout is the same for both adding and editing an album. If the EditAlbum object is constructed with an Album, we know we are editing an album. 
Without an Album, we’ll be adding a new one and we’ll need a couple of additional widgets to help the user fill in the information from the MusicBrainz site. As we can see, the EditAlbum widget doesn’t do much. The bulk of the functionality will be in the EditAlbumState class.</p><p>We will be using a <a href="https://www.educative.io/answers/what-is-flutter-bloc">BLoC</a> to access the data in our database. A BLoC (Business Logic Component) helps separate logic from the user interface while maintaining the flutter reactive model of redrawing the UI when a state or stream changes.</p><pre>import &#39;package:flutter/material.dart&#39;;<br>import &#39;package:numberpicker/numberpicker.dart&#39;;<br>import &#39;package:elite_orm/elite_orm.dart&#39;;<br><br>import &#39;../model/album.dart&#39;;<br>import &#39;../database/database.dart&#39;;<br>import &#39;../utility/musicbrainz.dart&#39;;<br>import &#39;../utility/error_dialog.dart&#39;;<br>import &#39;../style/style.dart&#39;;<br><br>// There is only one Bloc object that we will use on both this screen and the<br>// home screen.<br>final bloc = Bloc(Album(), DatabaseProvider.database);<br><br>class EditAlbum extends StatefulWidget {<br> final Album? album;<br> const EditAlbum({super.key, this.album});<br><br> @override<br> State&lt;EditAlbum&gt; createState() =&gt; EditAlbumState();<br>}<br><br>class EditAlbumState extends State&lt;EditAlbum&gt; {<br> @override<br> Widget build(BuildContext context) =&gt; PopScope(<br>     canPop: false,<br>     onPopInvoked: _onWillPop,<br>     child: Scaffold(<br>       appBar: AppBar(<br>           title: Text(widget.album == null ? 
&quot;Add Album&quot; : &quot;Edit Album&quot;)),<br>       body: SafeArea(child: _renderContent()),<br>       bottomNavigationBar: BottomAppBar(<br>         child: Container(<br>           padding: Style.bottomBarPadding,<br>           decoration: Style.bottomBarDecoration(context),<br>           child: Row(<br>             mainAxisAlignment: MainAxisAlignment.end,<br>             children: _bottomIcons(),<br>           ),<br>         ),<br>       ),<br>     ),<br>   );</pre><p>As you can see, we override the build method with a typical Scaffold widget, which contains an app bar area, a body, and a bottom navigation bar. The whole thing is wrapped by a PopScope widget. This allows us to ask the user to save or discard their changes before leaving the screen. Our _onWillPop method will only leave the screen if there are no modifications or the user chooses to discard the modifications.</p><p>The SafeArea is what will hold the bulk of our UI. It provides a dynamic level of padding to avoid the operating system interface of the phone or tablet on which your app will run. It has only one required parameter which, in our case, is a widget created by the _renderContent method that contains the UI for this screen.</p><p>The last part of the UI is the bottom navigation bar. It contains a row of buttons that will allow the user to save new and existing albums and to delete existing albums. 
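The _onWillPop method wired into the PopScope above is not shown in the article. As a hypothetical sketch of one way it could behave (not necessarily the repository’s implementation):

```dart
// Hypothetical sketch of the _onWillPop handler.  With canPop: false,
// Flutter calls this instead of popping, so we pop manually unless
// there are unsaved changes the user decides to keep.
void _onWillPop(bool didPop) async {
  if (didPop) return; // Nothing to do; the pop already happened.
  if (!_modified) {
    Navigator.pop(context);
    return;
  }
  final discard = await showDialog<bool>(
    context: context,
    builder: (context) => AlertDialog(
      title: const Text("Discard unsaved changes?"),
      actions: [
        TextButton(
          onPressed: () => Navigator.pop(context, false),
          child: const Text("Cancel"),
        ),
        TextButton(
          onPressed: () => Navigator.pop(context, true),
          child: const Text("Discard"),
        ),
      ],
    ),
  );
  if (discard == true && mounted) {
    Navigator.pop(context);
  }
}
```

Because canPop is false, the screen only ever leaves through the explicit Navigator.pop calls above.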
Later, we will take a closer look into what goes into making an app functional.</p><p>Next, we’ll take a look at the data members and initialization of the State object.</p><pre> // Content editing<br> bool _modified = false;<br> int _minutes = 0;<br> int _seconds = 0;<br> final _artistController = TextEditingController();<br> final _titleController = TextEditingController();<br> final _dateController = TextEditingController();<br><br> // Keep track of searching and results.<br> bool _searching = false;<br> final List&lt;Widget&gt; _possible = [];<br><br> // These are static so that we can cache the previous search automatically.<br> static final List&lt;Album&gt; _albums = [];<br> static final _searchController = TextEditingController();<br><br> @override<br> void initState() {<br>   super.initState();<br><br>   // Fill in the widgets with data.<br>   _fillSearchList();<br>   if (widget.album != null) {<br>     _fromAlbum(widget.album!);<br>   }<br><br>   // Set up listeners so that we can notify the user if there is unsaved data<br>   // when they leave this screen.<br>   _artistController.addListener(() =&gt; _modified = true);<br>   _titleController.addListener(() =&gt; _modified = true);<br>   _dateController.addListener(() =&gt; _modified = true);<br> }</pre><p>Our UI will have a field for editing the artist, title, date, and duration. It will also have, when adding a new album, an artist field for searching and a list of albums as a search result. In our initState method, we first call a method to fill in our search related widgets and then, if this EditAlbumState object was constructed with an Album object, we will fill in the album editing fields with the data from the Album object. Next, we set up some listeners on the text editing controllers so that, when the user modifies them, we set our flag to keep track of modifications.</p><p>Now let’s take a look at the code to build the UI. 
It’s a fairly large method, but not complex at all.</p><pre>Widget _renderContent() {<br>   List&lt;Widget&gt; content = [];<br>   if (widget.album == null) {<br>     content.addAll([<br>       const Padding(<br>         padding: Style.columnPadding,<br>         child: Text(&quot;Search by Artist&quot;, style: Style.titleText),<br>       ),<br>       Padding(<br>         padding: Style.columnPadding,<br>         child: Row(<br>           children: [<br>             Expanded(<br>               child: TextField(<br>                 controller: _searchController,<br>                 decoration: Style.inputDecoration,<br>                 textInputAction: TextInputAction.search,<br>                 onSubmitted: (s) =&gt; _searchArtist(),<br>               ),<br>             ),<br>             IconButton(<br>               icon: Icon(<br>                 Icons.search,<br>                 color: Theme.of(context).colorScheme.primary,<br>               ),<br>               onPressed: _searchArtist,<br>             ),<br>           ],<br>         ),<br>       ),<br>       Container(<br>         height: 100,<br>         margin: Style.columnPadding,<br>         padding: Style.columnPadding,<br>         decoration: Style.containerOutline(context),<br>         child: _searching<br>             ? const Column(<br>                 mainAxisAlignment: MainAxisAlignment.center,<br>                 children: &lt;Widget&gt;[<br>                   CircularProgressIndicator(),<br>                   Text(&quot;Searching...&quot;, style: Style.titleText)<br>                 ],<br>               )<br>             : ListView(children: _possible),<br>       ),<br>     ]);<br>   }</pre><p>This first section adds an interface for searching for albums released by a particular artist. But, it is only added when we are creating a new Album, i.e., widget.album == null. 
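The _searchArtist handler invoked by the search field and button above lives in the repository; a plausible sketch (hypothetical, relying on the MusicBrainz class from earlier and the _fillSearchList helper mentioned in initState) could be:

```dart
// Hypothetical sketch of _searchArtist: toggle the progress indicator,
// query MusicBrainz for the artist's releases, and rebuild the list of
// tappable results shown in the outlined container.
void _searchArtist() async {
  setState(() => _searching = true);
  try {
    _albums.clear();
    _albums.addAll(await MusicBrainz().getAlbums(_searchController.text));
  } finally {
    setState(() {
      _fillSearchList(); // Rebuilds _possible from _albums.
      _searching = false;
    });
  }
}
```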
When the search is in progress, the _searching boolean is true which will cause the UI to display a CircularProgressIndicator until _searching is set to false. When the search is finished, we will display the list of possible albums in a ListView.</p><pre> content.addAll([<br>     const Padding(<br>       padding: Style.columnPadding,<br>       child: Text(&quot;Artist&quot;, style: Style.titleText),<br>     ),<br>     Padding(<br>       padding: Style.textPadding,<br>       child: TextField(<br>         controller: _artistController,<br>         textCapitalization: TextCapitalization.words,<br>         decoration: Style.inputDecoration,<br>       ),<br>     ),<br>     const Padding(<br>       padding: Style.columnPadding,<br>       child: Text(&quot;Title&quot;, style: Style.titleText),<br>     ),<br>     Padding(<br>       padding: Style.textPadding,<br>       child: TextField(<br>         controller: _titleController,<br>         textCapitalization: TextCapitalization.words,<br>         decoration: Style.inputDecoration,<br>       ),<br>     ),</pre><p>This next bit simply adds text editing fields for the artist and album title. It’s all pretty straightforward.</p><pre>const Padding(<br>       padding: Style.columnPadding,<br>       child: Text(&quot;Release Date&quot;, style: Style.titleText),<br>     ),<br>     Padding(<br>       padding: Style.columnPadding,<br>       child: Row(<br>         children: [<br>           Expanded(<br>             child: TextField(<br>               readOnly: true,<br>               controller: _dateController,<br>               decoration: Style.inputDecoration,<br>             ),<br>           ),<br>           IconButton(<br>             icon: Icon(<br>               Icons.calendar_month,<br>               color: Theme.of(context).colorScheme.primary,<br>             ),<br>             onPressed: _pickDate,<br>           ),<br>         ],<br>       ),<br>     ),</pre><p>The <em>Release Date</em> field is also a text field. 
But, we’re not going to leave the date formatting up to the user. To make things easy, we make use of the flutter function showDatePicker which we call within the _pickDate method. The showDatePicker function is a modal dialog that provides a calendar from which the user can pick a date. Once the user picks a date, we update the <em>Release Date</em> text field to reflect the value that the user chose.</p><pre>const Padding(<br>       padding: Style.columnPadding,<br>       child: Text(&quot;Duration&quot;, style: Style.titleText),<br>     ),<br>     Container(<br>       margin: const EdgeInsets.all(8),<br>       padding: const EdgeInsets.all(3),<br>       decoration: Style.containerOutline(context),<br>       child: Column(<br>         children: [<br>           Padding(<br>             padding: Style.columnPadding,<br>             child: Row(<br>               children: [<br>                 Expanded(<br>                   child: Column(<br>                     children: [<br>                       const Text(&quot;Minutes&quot;),<br>                       NumberPicker(<br>                         value: _minutes,<br>                         axis: Axis.horizontal,<br>                         minValue: 0,<br>                         maxValue: 999,<br>                         itemWidth: 50,<br>                         onChanged: (value) =&gt; setState(() {<br>                           _minutes = value;<br>                           _modified = true;<br>                         }),<br>                       ),<br>                     ],<br>                   ),<br>                 ),                 <br>                  Expanded(<br>                   child: Column(<br>                     children: [<br>                       const Text(&quot;Seconds&quot;),<br>                       NumberPicker(<br>                         value: _seconds,<br>                         axis: Axis.horizontal,<br>                         minValue: 0,<br>                         
maxValue: 59,<br>                         itemWidth: 50,<br>                         onChanged: (value) =&gt; setState(() {<br>                           _seconds = value;<br>                           _modified = true;<br>                         }),<br>                       ),<br>                     ],<br>                   ),<br>                 ),<br>               ],<br>             ),<br>           ),<br>         ],<br>       ),<br>     ),<br>   ]);</pre><p>For the duration of the album, we are going to use a NumberPicker widget for the minutes and another for the seconds. This widget provides a scrolling interface to select numbers within a specified range. If you recall, it’s one of the packages we installed in the beginning.</p><pre>   return ListView(children: content);<br> }</pre><p>Once we have built up the set of widgets that make up our UI, we wrap them all in a ListView so that we can easily scroll the contents of the UI up and down, as it may be too long to display it all on the phone screen at the same time.</p><p>The last bit that we’re going to look at in the EditAlbumState class is how we save all of the information provided by the user to the database.
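One helper referenced earlier but not shown is the _pickDate method behind the calendar button. A hypothetical sketch using Flutter’s showDatePicker (details assumed, not the repository’s exact code):

```dart
// Hypothetical sketch of _pickDate: show the built-in calendar dialog
// and write the chosen date back into the read-only text field.
void _pickDate() async {
  final DateTime? picked = await showDatePicker(
    context: context,
    initialDate: DateTime.now(),
    firstDate: DateTime(1900),
    lastDate: DateTime.now(),
  );
  if (picked != null) {
    // Keep the ISO-8601 date portion so DateTime.parse can read it
    // back later; the controller's listener sets _modified for us.
    _dateController.text = picked.toIso8601String().substring(0, 10);
  }
}
```

Storing the text in ISO-8601 form matters because the save path parses the field with DateTime.parse.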
When the user presses the save button, the _saveAlbum method is invoked.</p><pre>Album _toAlbum() {<br>   final DateTime releaseDate = DateTime.parse(_dateController.text);<br>   final Duration duration = Duration(minutes: _minutes, seconds: _seconds);<br><br>   return Album(<br>       _artistController.text, _titleController.text, releaseDate, duration);<br> }<br><br> void _saveAlbum() async {<br>   try {<br>     final Album album = _toAlbum();<br>     String message;<br>     if (album.artist.isNotEmpty &amp;&amp; album.name.isNotEmpty) {<br>       if (widget.album == null) {<br>         await bloc.create(album);<br>         message = &quot;Album Saved&quot;;<br>       } else {<br>         if (widget.album!.artist != album.artist ||<br>             widget.album!.name != album.name) {<br>           // Changing the name of the artist or album is the same as creating<br>           // a new album.  Because the artist and album make up the primary<br>           // key, we have to create the new album and delete the old one.<br>           // There&#39;s no way to just &quot;rename&quot; an entry in the database.<br>           await bloc.create(album);<br>           await bloc.delete(widget.album!);<br>         } else {<br>           // If the artist and name have not changed, then we can update.<br>           await bloc.update(album);<br>         }<br>         message = &quot;Album Updated&quot;;<br>       }<br>       _modified = false;<br><br>       // Because we&#39;re using the build context after an await, we need to<br>       // ensure that this widget is still mounted before using it.<br>       if (mounted) {<br>         Navigator.pop(context);<br>       }<br>     } else {<br>       message = &quot;Invalid Album&quot;;<br>     }<br><br>     // Same here.<br>     if (mounted) {<br>       ScaffoldMessenger.of(context).showSnackBar(<br>         SnackBar(content: Text(message)),<br>       );<br>     }<br>   } catch (err) {<br>     if (mounted) {<br>
ErrorDialog.show(context, err.toString());<br>     }<br>   }<br> }</pre><p>As we can see above, once we have the Album created we can then give that to the bloc to have it store it in the database. If this is a new album, we simply tell the bloc to create it. If it is an existing album, we have the bloc update it.</p><p>Below is a screenshot of what our UI will look like when adding a new album.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/540/0*3Y65TaQWmgg7C0Qr" /></figure><p>There’s more to this screen. If you’re interested in seeing more of the inner workings of this particular UI, please see the <a href="https://github.com/ocielliottc/music_collector">git repository</a>. Now, we’re going to move on to the home screen.</p><h3>Home Interface</h3><p>The home screen is a much simpler UI. All it does is display a scrolling list of the albums that the user has saved in the database.</p><pre>import &#39;package:flutter/material.dart&#39;;<br><br>import &#39;../model/album.dart&#39;;<br>import &#39;../screens/edit_album.dart&#39;;<br>import &#39;../style/style.dart&#39;;<br><br>class ListAlbums extends StatefulWidget {<br> const ListAlbums({super.key});<br><br> @override<br> State&lt;ListAlbums&gt; createState() =&gt; _ListAlbumsState();<br>}<br><br>class _ListAlbumsState extends State&lt;ListAlbums&gt; {<br> Widget _renderAlbum(Album album) {<br>   return GestureDetector(<br>     child: Card(<br>       shape: Style.cardShape(context),<br>       child: ListTile(<br>         subtitle: Text(album.artist),<br>         title: Text(album.name, style: Style.cardTitleText),<br>       ),<br>     ),<br>     onTap: () {<br>       Navigator.push(<br>         context,<br>         MaterialPageRoute(builder: (context) =&gt; EditAlbum(album: album)),<br>       );<br>     },<br>   );<br> }</pre><p>The _renderAlbum is called for each album in the database, but only as they are visible on the screen due to the nature of the ListView.builder constructor. 
We use a Card wrapped in a GestureDetector so that when the user taps on the card, it will bring the user to the album editing screen through the Navigator using a MaterialPageRoute.</p><pre>Widget _renderAlbums(AsyncSnapshot&lt;List&lt;Album&gt;&gt; snapshot) {<br>   if (snapshot.hasData) {<br>     // Sort the list by artist first and then the release date.<br>     snapshot.data!.sort((a, b) {<br>       final int cmp = a.artist.compareTo(b.artist);<br>       return cmp == 0 ? a.release.compareTo(b.release) : cmp;<br>     });<br>     return ListView.builder(<br>       itemCount: snapshot.data!.length,<br>       itemBuilder: (context, index) {<br>         return _renderAlbum(snapshot.data![index]);<br>       },<br>     );<br>   } else {<br>     return Center(<br>       child: const Column(<br>         mainAxisAlignment: MainAxisAlignment.center,<br>         children: &lt;Widget&gt;[<br>           CircularProgressIndicator(),<br>           Text(&quot;Loading...&quot;, style: Style.titleText)<br>         ],<br>       ),<br>     );<br>   }<br> }<br><br> Widget _renderAlbumsWidget() {<br>   return StreamBuilder(<br>     stream: bloc.all,<br>     builder: (context, snapshot) =&gt; _renderAlbums(snapshot),<br>   );<br> }</pre><p>The _renderAlbumsWidget method uses a StreamBuilder to create the scrolling list of albums. Whenever the bloc.all stream is updated, this widget will automatically recreate the list of albums. 
So, as albums are added, deleted, or updated, it will be reflected in our list automatically.</p><pre> List&lt;Widget&gt; _bottomIcons() {<br>   return [<br>     IconButton(<br>       icon: Icon(Icons.add, color: Theme.of(context).colorScheme.primary),<br>       iconSize: Style.iconSize,<br>       onPressed: () {<br>         Navigator.push(<br>           context,<br>           MaterialPageRoute(builder: (context) =&gt; const EditAlbum()),<br>         );<br>       },<br>     ),<br>   ];<br> }</pre><p>Our bottom row of buttons only contains a single icon for adding new albums. When the user presses the icon, it will take the user to our album adding screen.</p><pre>@override<br> void initState() {<br>   super.initState();<br><br>   // Ensure that the bloc stream is filled.<br>   bloc.get();<br> }</pre><p>We override the initState method so that we can fill the bloc stream when the screen is created. This causes the UI to initially show the list of existing albums from the database.</p><pre> @override<br> Widget build(BuildContext context) {<br>   return Scaffold(<br>     appBar: AppBar(title: const Text(&quot;Music Collector&quot;)),<br>     body: SafeArea(child: _renderAlbumsWidget()),<br>     bottomNavigationBar: BottomAppBar(<br>       child: Container(<br>         decoration: Style.bottomBarDecoration(context),<br>         child: Row(children: _bottomIcons()),<br>       ),<br>     ),<br>   );<br> }<br>}</pre><p>As we did for our editing UI, we override the build method and use the Scaffold widget to contain our UI. The functionality, again, is delegated to other methods. As you can see, this screen is much simpler than the editing screen.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/270/0*y5KX6DqhbESTM7iA" /></figure><h3>Main</h3><p>The final part of the application that we’re going to look at is the code that actually starts the UI. This is main.dart and is the entry point into the application. 
We create a class that extends StatelessWidget and creates a MaterialApp that runs our home screen, i.e., ListAlbums.</p><pre>import &#39;package:flutter/material.dart&#39;;<br>import &#39;screens/list_albums.dart&#39;;<br><br>void main() {<br> runApp(const MusicCollector());<br>}<br><br>class MusicCollector extends StatelessWidget {<br> const MusicCollector({super.key});<br><br> @override<br> Widget build(BuildContext context) {<br>   return MaterialApp(<br>     darkTheme: ThemeData(<br>       brightness: Brightness.dark,<br>       colorSchemeSeed: Colors.yellow.shade600,<br>     ),<br>     theme: ThemeData(<br>       brightness: Brightness.light,<br>       colorSchemeSeed: Colors.red.shade800,<br>     ),<br>     home: const ListAlbums(),<br>   );<br> }<br>}</pre><h3>Conclusion</h3><p>Flutter is a powerful mobile platform that can help you build complex applications that run on Android, iOS, and other systems. There are many mobile development platforms available, but Flutter makes it quick and easy to get started writing mobile apps. This example application shows the basics and can be a good starting point for your own applications.</p><p>Be sure to follow and subscribe to be notified of future articles. Please visit <a href="https://objectcomputing.com/how-we-serve/capabilities/application-development">Object Computing’s website</a> to learn more about our application development services.</p><p><em>Chad Elliott is a Principal Software Engineer at Object Computing, with over 30 years of experience in software development ranging from embedded software to server-side applications to mobile applications. 
The majority of his free time is spent developing mobile apps in Flutter.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=143e6d628678" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/flutter-fundamentals-building-a-mobile-app-with-apis-and-databases-143e6d628678">Flutter Fundamentals: Building a Mobile App with APIs and Databases</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Accidental to Intentional: Your Roadmap to Architectural Excellence]]></title>
            <link>https://medium.com/object-computing/from-accidental-to-intentional-your-roadmap-to-architectural-excellence-003444591309?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/003444591309</guid>
            <category><![CDATA[application-architecture]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[app-development]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Thu, 08 Aug 2024 20:16:52 GMT</pubDate>
            <atom:updated>2024-08-08T20:16:37.533Z</atom:updated>
            <content:encoded><![CDATA[<p>By Garey Hoffman and Mike Pleimann</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/680/1*jKaOtpELVoQGBUHHsZ87Vw.png" /></figure><p>The first article of this two-part series discussed how to identify <a href="https://medium.com/object-computing/business-leaders-how-do-you-know-if-your-enterprise-has-accidental-architecture-1866c904cb5d?source=friends_link&amp;sk=6c8c7b6137a12f032d8754c44e21c5d2">Accidental Architecture</a> and the challenges it can cause an organization. We defined Intentional Architecture as a deliberate system architecture that is created from a set of goals, trade-offs, team structure, and constraints under which it’s built and maintained.</p><p>If you’re stuck in the muck of Accidental Architecture, taking action is certainly the right course. That said, the wrong series of actions can have consequences that are just as dire as no action. Our roadmap to Intentional Architecture is paved with insights gleaned from countless organizations that have navigated through the challenges of Accidental Architecture.</p><p><strong>Preparing for the Journey</strong></p><p>The mechanisms to move a system from Accidental to Intentional Architecture are simple to describe, but they may be difficult to implement. It requires commitment from business and technical leaders and a proper understanding of its importance. In short, Intentional Architecture requires alignment between technology and business goals.</p><blockquote>There is a journey ahead for any organization that is committed to achieving an Intentional Architecture. And, ensuring your organization is ready for the journey is critical. Without real support, the odds of completing the journey — or even making meaningful progress — are very low.</blockquote><p>This is the stopping point of this journey if your organization is not able to do the work of aligning business and technology goals. 
We invite you to resume this journey when your organization is ready to create the conditions where that alignment can happen.</p><h3>The Roadmap to Intentional Architecture</h3><p>After working with many clients, we have found that the following guide to advancing Intentional Architecture succeeds within organizations that are ready for this journey. Here’s a roadmap to get you started:</p><h4><strong>Mission Possible: Defining Your IT Purpose</strong></h4><p>One place to start is by identifying the primary purpose of IT in the organization. We’ve experienced all sorts of responses when we propose this as a starting point. Reactions ranging from quizzical looks to downright shock are common. Yet, when organizations really take a look at their own internal beliefs, there is a wide discrepancy in the view of purpose.</p><p>For example, a mission statement for an IT organization could read something like this:</p><p><strong><em>Empower teams to build reliable, efficient, and user-friendly software that delivers</em> <em>competitive features with reduced cost, risk, and time to delivery.</em></strong></p><p>A clear and inspiring mission statement is crucial to prepare your organization for the technological journey ahead. Your statement should be more specific to your organization than our example. It should act as a mandate, empowering your tech team and guiding their work. Think of visionary leadership — a bold leader who can clearly articulate the future. Or in some organizations, this can be found in a grassroots campaign where leadership is gained through a track record of success. An inspiring leader, whether internal or external, can ignite a movement within your organization to embrace Intentional Architecture. It may even be an outside evangelist who inspires and directs an internal groundswell.</p><h4><strong>Agreeing on the Path</strong></h4><p>Hopefully, alignment doesn’t begin with a traumatic incident like a major system crash or security breach. 
(Sadly, these are real events that take place.) The goal is to establish alignment between business decision-makers and technical leadership on the purpose of your application or system that suffers from Accidental Architecture. An assessment of responsibilities or a joint workshop may be necessary to uncover all relevant information and create a coalition.</p><p>A common discussion with our clients includes a session to understand the pain points that they experience. Importantly, we work to learn <em>who</em> experiences the pain. We find that decision-makers commonly feel disproportionately less pain under the current architectural state. Understanding <em>why</em> decision-makers feel less pain (and may not be aware of the depth of pain felt by other stakeholders) is a critical part of these discussions.</p><p>Your technical team likely faces challenges meeting deadlines and budgets that are beyond their control, often accumulating technical debt. Technical debt is often the result of working around architectural deficiencies to meet these deadlines and remain within budget.</p><p><em>The accumulation of substantial technical debt is the single largest symptom of Accidental Architecture. However, we cannot stress enough the importance of understanding the types of technical debt your organization may hold.</em></p><h4><strong>Driving a Culture of Improvement</strong></h4><p>To build a culture of improvement, start by providing the space and resources your team needs to tackle these challenges head-on. Continue by facilitating open dialogue between teams; seek outside consultation when needed. This is an investment in your organization’s future success. 
If this is counter to your corporate culture, we recommend collaborating with change management specialists.</p><p>Examples of tackling this task with our clients include:</p><ul><li>Conveying the full risk profile they bear under large technical debt loads.</li><li>Illustrating how to classify technical debt as benign, moderate, risky, or dangerous.</li><li>Creating efficient and effective workflows and processes that stop (or slow down) the accumulation of more technical debt.</li><li>Collaborating on how to create realistic roadmaps for emergence from Accidental Architecture, building hope and trust.</li><li>Helping them create the business case that ties improvements in technical debt to new initiatives or new feature development.</li></ul><h4><strong>Prioritizing Your “Ilities”: What Matters Most?</strong></h4><p>While all aspects of quality in an Intentional Architecture are important, after some thought you’ll certainly find that some aspects are more important than others. Choose 3 to 5 “ilities” — essential <a href="https://en.wikipedia.org/wiki/List_of_system_quality_attributes">quality attributes</a> — that are most important to your business and systems.</p><p>Here are some of the most popular attributes to consider:</p><ul><li>Reliability</li><li>Scalability</li><li>Efficiency</li><li>Security</li><li>Familiarity</li><li>Safety</li><li>Maintainability</li><li>Adaptability</li><li>Portability</li><li>Resilience</li><li>Responsiveness</li></ul><p>Many of these overlap in concept. Often depth in one must be traded for another. Ranking your choices according to their relative importance to your business is crucial. While highly subjective, the ranking will communicate your priorities so that stakeholders can make reasonable trade-offs.</p><h4><strong>Establishing a Baseline (and Stop Making It Worse)</strong></h4><p>For each component or subsystem, gather the stakeholders who know where the skeletons are hiding. 
Document a baseline score indicating how close the component is to achieving each of the attributes selected from your “ilities” list. This again is subjective, but that’s OK. Be honest and base the evaluation on the experience of all stakeholders, including the technical teams.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/759/1*JqhwMC4eupWekEdi9aN4uQ.png" /></figure><h4><strong>Mapping Out a Plan: Defining Intentional Architecture with Quality Targets</strong></h4><p>Collaborate with technical leaders and engineers to define specific goals, known as fitness functions, for each component or subsystem. Fitness is the objective ability of a system or component to meet one or more of the quality objectives. <em>Is X fit for purpose?</em> What is “acceptable” as an objective target will depend on the particular requirements of your system, your tolerance for variation, and any immovable obstacles such as compliance requirements.</p><p>Here’s an example list of (simplified) technical target statements with their corresponding business targets:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*e_NjAnW2biPFXNaBnMmQsg.png" /></figure><p>Note: Meeting these targets is not free. The more stringent the target, the more it will cost to meet, and the more trade-offs will need to be made to satisfy it. Ensure that your targets are reasonable and in line with the requirements of your business and risk tolerance. 
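To make the idea concrete, a fitness function can be as little as an automated check that compares a measured quality attribute against its agreed target. The sketch below is a minimal illustration only; the attribute names and thresholds are invented for the example, not taken from any real system:

```java
import java.util.List;

public class FitnessFunctions {
    /**
     * One quality target: an "ility", its agreed limit, and the latest
     * measurement. All names and numbers used here are illustrative.
     */
    public record Target(String ility, double limit, double measured) {
        boolean fit() { return measured <= limit; }
    }

    /** Counts targets that are currently out of tolerance. */
    public static long countUnfit(List<Target> targets) {
        return targets.stream().filter(t -> !t.fit()).count();
    }

    public static void main(String[] args) {
        List<Target> targets = List.of(
            new Target("responsiveness: p99 latency (ms)", 250.0, 180.0),
            new Target("reliability: monthly error rate (%)", 0.1, 0.4),
            new Target("efficiency: cost per 1k requests ($)", 0.05, 0.03));
        targets.forEach(t ->
            System.out.println((t.fit() ? "PASS " : "FAIL ") + t.ility()));
        System.out.println(countUnfit(targets) + " target(s) out of tolerance");
    }
}
```

Run regularly (for example, in a build pipeline), checks like these keep each chosen quality visible, but every additional target adds measurement and maintenance work. 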
This is the primary motivation for choosing no more than five qualities that are most important to your business.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*zxHEsgv6GvCCWlHU" /><figcaption><em>Visualizing the Path to Intentional Architecture: Swimlanes illustrate incremental progress toward defined quality targets, aligning technical goals with broader business objectives.</em></figcaption></figure><h4><strong>Incrementally Improving While Continuously Validating Against Targets</strong></h4><p>Intentional Architecture is not a one-time activity — it becomes woven into the fabric of an organization. It is the adoption of a different mindset and a different way of thinking. Both business and technical teams should be connecting throughout the software development lifecycle and discussing design choices that positively and negatively impact quality targets.</p><p>You may be asking “are we there yet” at this point, and the answer is yes and no. This journey of crafting intentional software and systems architecture is a continuous loop of evaluation, adaptation, and refinement. The team must constantly assess the established guidelines, adjusting their course as conditions and needs evolve. New guidelines may emerge as they explore the landscape while existing ones are reaffirmed or retired based on their continued relevance.</p><p>Just like a team of seasoned explorers navigating uncharted territory, achieving this requires a deep understanding of the system’s inner workings and the constraints of the environment it operates within. Skill, training, and experience are essential for traversing these complexities, ultimately leading to a successful team “conquering” the challenge of Accidental Architecture.</p><p>Did the steps we’ve outlined provide clear guidance on transitioning from Accidental to Intentional Architecture? What additional challenges or questions do you have about implementing Intentional Architecture? 
Please comment to let us know!</p><p>If our team of experienced consultants, strategists, and architects can help you navigate this journey, <a href="https://objectcomputing.com/services/contact-us">contact us</a> to schedule a consultation and discover how we can help you achieve your business goals.</p><p><em>Garey Hoffman is a Partner and Vice President of Engineering of Object Computing. Hoffman plays a pivotal role in overseeing and driving the technical aspects of the organization. He is involved in managing a team of skilled engineers, architects, and technical professionals to deliver quality solutions and services to clients. He is also responsible for aligning technical strategies with business objectives, ensuring the successful execution of projects, and maintaining a strong focus on innovation and emerging technologies. Outside of the office, Garey is a lifelong builder and can be found enjoying the outdoors with his family.</em></p><p><em>Mike Pleimann has 15 years of experience as an Application Architect and almost 25 years in software engineering. Currently leading the Application Architecture team, he combines technical proficiency with managerial skills to guide teams toward success in large-scale software architecture and engineering programs. As a seasoned leader in the field, he brings a wealth of knowledge and strategic vision to his projects in telecom, collections, defense, and gaming industries. He is dedicated to crafting robust, scalable solutions that are fit for purpose and has consistently earned accolades from clients and peers. 
Mike holds a BS in Computer Science from Missouri S&amp;T.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=003444591309" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/from-accidental-to-intentional-your-roadmap-to-architectural-excellence-003444591309">From Accidental to Intentional: Your Roadmap to Architectural Excellence</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Maximizing Performance with Netty and Reactive Programming in Java]]></title>
            <link>https://medium.com/object-computing/maximizing-performance-with-netty-and-reactive-programming-in-java-dc984a4316eb?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/dc984a4316eb</guid>
            <category><![CDATA[netty]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[app-development]]></category>
            <category><![CDATA[reactive-programming]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Thu, 27 Jun 2024 21:01:14 GMT</pubDate>
            <atom:updated>2024-06-27T21:00:09.011Z</atom:updated>
            <content:encoded><![CDATA[<p>By Matthew Perry</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*MfFhlkYvyClkCgw1V6wQcg.png" /></figure><p>In the modern world of software development, building responsive and scalable applications is essential. This is especially true for cloud-based deployments, where keeping costs down while maintaining performance is crucial. Reactive programming, a paradigm centered on efficiently processing asynchronous events and data streams, can be an excellent choice for achieving this. In this article, we will:</p><ul><li>Explore what reactive programming is and why you might use it.</li><li>Take a brief look at Netty</li><li>Compare traditional blocking I/O to reactive non-blocking I/O</li><li>Recommend some reactive libraries and supporting frameworks to get you started.</li></ul><h4>What is Reactive Programming?</h4><p>Reactive programming is a declarative programming paradigm primarily focused on processing asynchronous events and data streams. When used appropriately, it enables applications to be more resilient, concurrent, and responsive by managing data flows and reacting to changes in real time. This approach is especially beneficial for systems handling a large number of concurrent requests, such as web servers or real-time data processing applications.</p><p>Reactive programming’s declarative and functional approach abstracts away the “how” of asynchronous programming, allowing developers to focus on the desired outcome. This stands in contrast to the more imperative programming style familiar to many Java developers, where one explicitly defines step-by-step instructions for executing a task. For many Java developers, Reactive’s declarative approach presents a learning curve. 
However, once overcome, it enables them to write more concise, readable code, ultimately leading to more robust and maintainable software.</p><h4>Why Utilize Reactive Programming?</h4><p>With reactive programming gaining traction, it is crucial to stay ahead of the curve and understand why it may or may not suit your needs.</p><ul><li><strong>Ecosystem and Community Support:</strong> Reactive programming is by no means a new concept, and has a wide range of support throughout the Java ecosystem with various libraries and resources to accelerate development efforts.</li><li><strong>Scalability and Performance:</strong> Allows for efficiently handling large numbers of concurrent events to ensure a responsive and performant system under heavy load.</li><li><strong>Real-time Data Processing:</strong> Provides ways to handle the continuous ingestion and processing of large streams of data making it ideal for real-time data processing applications.</li><li><strong>Cost Optimization:</strong> The efficient use of resources can significantly reduce cloud costs for options such as serverless deployments where you are paying only for the resources consumed.</li><li><strong>Responsive User Interfaces:</strong> Enhances user experience through asynchronous handling of user input, network requests/responses, and database updates.</li><li><strong>Future Proofing:</strong> Embracing reactive programming within your application early can save costly refactors in the future assuming there is an anticipated need for high concurrency.</li><li><strong>Learning Curve:</strong> Reactive programming is a paradigm shift away from what the typical Java developer may be comfortable with, and will require some ramp-up time for anyone new to it. 
Any organization that anticipates a need for it will find it invaluable to have developers who are well versed in its application.</li></ul><h4>Introducing Netty</h4><p>One framework that capitalizes on the reactive paradigm is the Netty Client/Server framework (<a href="https://netty.io">https://netty.io</a>), which efficiently handles asynchronous I/O operations by leveraging reactive principles. Netty enjoys widespread adoption across the industry, and depending on your use case, you may interact with it directly or through other Java frameworks that rely on it as their underlying server model, particularly when working with reactive applications. Next, we will delve into the traditional model for blocking I/O and its drawbacks, before introducing the reactive non-blocking I/O model, which lies at the core of the Netty framework.</p><h4>Traditional Blocking I/O — The Thread Per Request Model</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*cfnp2IE4vQUhpBvW" /></figure><p>With traditional blocking HTTP, an incoming request is assigned a dedicated thread until the request is completed. This is referred to as the Thread Per Request Model. This model offers simplicity and performs well within its limits. However, its downside lies in its limited ability to handle concurrent requests, as it depends on the number of threads available in its pool. Furthermore, if any external blocking calls are required during request processing (such as a database query), the corresponding thread must wait until a response is received, preventing it from serving new incoming requests.</p><p>While this approach may suffice for applications with low load and fast request processing times, it may encounter resource inefficiencies and scalability issues in high-load scenarios or when dealing with slower request processing times, such as external databases or service calls. 
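The bottleneck just described can be demonstrated with a small, self-contained simulation (not a real HTTP server; the 50&nbsp;ms sleep stands in for a blocking call such as a database query):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadPerRequestDemo {
    // Stand-in for request handling that blocks on external I/O.
    static String handle(int requestId) throws InterruptedException {
        Thread.sleep(50); // the pool thread is parked here, serving no one else
        return "response-" + requestId;
    }

    /** Serves the given number of simulated requests on a fixed pool; returns elapsed ms. */
    public static long serve(int requests, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        try {
            long start = System.nanoTime();
            List<Future<String>> futures = new ArrayList<>();
            for (int i = 0; i < requests; i++) {
                final int id = i;
                futures.add(pool.submit(() -> handle(id)));
            }
            for (Future<String> f : futures) f.get(); // wait for every response
            return (System.nanoTime() - start) / 1_000_000;
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // 8 concurrent requests but only 4 threads: at least two 50 ms "waves".
        System.out.println("elapsed: " + serve(8, 4) + " ms");
    }
}
```

With four threads and eight 50&nbsp;ms requests, the elapsed time cannot drop below roughly 100&nbsp;ms no matter how idle the CPU is; throughput is capped by pool size rather than by available work. 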
In such cases, the application’s capacity to process concurrent requests is directly tied to the number of threads in its pool.</p><p>Of course, scaling the application itself, either vertically or horizontally, could address this issue but may incur significant costs. A more efficient solution would be to adopt a better threading model.</p><h4>Reactive Non-Blocking I/O — The Event Loop Model</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/917/0*ZD0fuUqPnsNLaVIz" /></figure><p>Netty achieves reactive, non-blocking I/O through its Event Loop Model. Incoming events, such as HTTP requests, are queued for processing. The event loop, operating on a single thread, periodically checks the queue, handling events sequentially. If a blocking operation occurs, Netty registers a callback to resume processing the event after the operation completes, freeing the event loop to handle other events. For any CPU-intensive tasks or blocking I/O, the event loop should delegate execution to a separate worker thread, while non-blocking I/O should be executed on the event loop thread itself. Generally, Netty employs 1 to 2 event loops per CPU core to leverage the parallelism of multi-core systems.</p><p>In summary, Netty and its Event Loop Model facilitate concurrent request processing with minimal thread proliferation, optimizing resource utilization and scalability for high-performance web applications.</p><h4>Considerations Before Adopting the Reactive Approach</h4><ul><li><strong>Use Non-Blocking I/O Libraries:</strong> To get the most benefit from the event loop model discussed above, it is crucial that the majority of I/O used by your application is non-blocking. For instance, if you are currently using JDBC for database interactions, you can switch to R2DBC. 
If there are no non-blocking alternatives for a specific library, many reactive frameworks provide options for handing off the execution of these calls to a separate thread pool, which is crucial to avoid blocking the event loop. However, be warned that this should only be done when absolutely necessary. Using a separate thread pool for the majority of your I/O will essentially revert your application to the thread-per-request model, and you will likely see no benefit from going reactive. In fact, you may experience worse performance due to the general overhead of using a reactive framework.</li><li><strong>Virtual Threads as an Alternative: </strong>Virtual Threads were added in JDK 21 and are designed to handle blocking I/O in a non-blocking manner. This means you can write your code in a typical imperative style and continue to use blocking I/O libraries like JDBC. Although Virtual Threads are fairly new, many frameworks, such as Micronaut and Spring Boot, already support them. If you require a fairly simple application, for instance, an API that takes a request, reads/writes to a database, and sends a response, then Virtual Threads might be a great option for you. You may find Virtual Threads have a smaller learning curve and could save you from having to refactor your application to use non-blocking reactive libraries like R2DBC. Note that Virtual Threads and reactive programming are both tools that can be used to solve similar problems, and in the future, they will likely each find their respective niches. You should weigh your options carefully when choosing which will best suit your needs.</li></ul><h4>Reactive Libraries Utilizing Netty</h4><p>To harness Netty’s reactive non-blocking I/O for responsive, resilient, and scalable software, numerous libraries are available. One of the most popular and widely supported reactive libraries for Java is Project Reactor. 
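Before surveying those libraries, the Virtual Threads alternative described above deserves a quick sketch. This is illustrative code using only JDK 21 APIs; the blocking fetch call is a stand-in for something like a JDBC query:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class VirtualThreadDemo {
    // Ordinary blocking call, written in the familiar imperative style.
    static int fetch(int id) throws InterruptedException {
        Thread.sleep(10); // parks only this cheap virtual thread, not an OS thread
        return id * 2;
    }

    /** Handles the given number of blocking calls, one virtual thread per task. */
    public static int run(int requests) {
        AtomicInteger sum = new AtomicInteger();
        // try-with-resources: close() waits for all submitted tasks to finish.
        try (ExecutorService vt = Executors.newVirtualThreadPerTaskExecutor()) {
            for (int i = 0; i < requests; i++) {
                final int id = i;
                vt.submit(() -> sum.addAndGet(fetch(id)));
            }
        }
        return sum.get();
    }

    public static void main(String[] args) {
        // 1,000 concurrent blocking calls without ever sizing a thread pool.
        System.out.println("sum = " + run(1000));
    }
}
```

Each task reads like ordinary blocking code, yet thousands can run concurrently. Reactive libraries such as Project Reactor attack the same problem differently, by modeling the data flow explicitly. 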
Project Reactor seamlessly integrates with Netty and offers a reactive programming model based on the Reactive Streams specification, facilitating the development of reactive servers and clients with Netty as the transport layer. Other notable reactive libraries supporting Netty include Vert.x, RxNetty, and Akka Streams.</p><h4>Java Frameworks Supporting Reactive Programming and Netty</h4><p>Several Java frameworks can efficiently develop reactive applications running on Netty. These frameworks typically support various reactive libraries, allowing you to choose the one that best suits your use case. Below are some of the most widely used frameworks for your reference.</p><ul><li>Micronaut — <a href="https://docs.micronaut.io/4.4.10/guide/#introduction">Micronaut Documentation</a></li><li>Spring Boot via Spring Webflux — <a href="https://docs.spring.io/spring-framework/reference/web/webflux.html">Webflux Documentation</a></li><li>Quarkus via Mutiny — <a href="https://quarkus.io/extensions/io.quarkus/quarkus-mutiny/">Mutiny Documentation</a></li></ul><h4>Conclusion</h4><p>Reactive programming offers a powerful solution to the challenges of modern application development, enabling developers to create highly responsive and scalable systems. By abstracting away the complexities of asynchronous programming and providing a clear focus on desired outcomes through its declarative programming style, the reactive programming paradigm opens up a plethora of new possibilities. 
With frameworks like Netty and libraries such as Project Reactor, developers have powerful tools at their disposal to harness the full potential of reactive programming and deliver high-performance web applications that meet the demands of software development in the modern world.</p><p>Further Reading:</p><p><a href="https://www.reactivemanifesto.org/">https://www.reactivemanifesto.org/</a></p><p><a href="https://netty.io/">https://netty.io/</a></p><p><a href="https://projectreactor.io/">https://projectreactor.io/</a></p><p><a href="https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html">https://docs.oracle.com/en/java/javase/21/core/virtual-threads.html</a></p><p><em>Matthew J. Perry is a Senior Software Engineer at Object Computing, specializing in building and deploying high-performance cloud native applications for our clients. With over 8 years of experience, Matthew has a proven track record in leveraging Java and cloud-native frameworks such as Micronaut and Spring Boot to deliver scalable, robust solutions for our clients. He excels in driving complex projects from concept to completion, optimizing application performance, and implementing best practices in software development.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=dc984a4316eb" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/maximizing-performance-with-netty-and-reactive-programming-in-java-dc984a4316eb">Maximizing Performance with Netty and Reactive Programming in Java</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Unlocking Manufacturing’s Hidden Profits: How AI Revolutionizes Design, Efficiency & Sustainability]]></title>
            <link>https://medium.com/object-computing/unlocking-manufacturings-hidden-profits-how-ai-revolutionizes-design-efficiency-sustainability-060bd5149eff?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/060bd5149eff</guid>
            <category><![CDATA[manufacturing-industry]]></category>
            <category><![CDATA[generative-ai-consulting]]></category>
            <category><![CDATA[manufacturing]]></category>
            <category><![CDATA[gen-ai-for-business]]></category>
            <category><![CDATA[ai]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Fri, 21 Jun 2024 13:38:07 GMT</pubDate>
            <atom:updated>2024-06-21T13:37:09.487Z</atom:updated>
            <content:encoded><![CDATA[<p>By Andrew Montgomery</p><figure><img alt="AI Essentials for Manufacturing" src="https://cdn-images-1.medium.com/max/1024/1*aEfiYNLXFY1hHyzYNd1xQg.png" /></figure><h4>Introduction</h4><p>Few technological advances have generated as much excitement as artificial intelligence (AI). None more so than generative AI. Manufacturers view AI as integral to the creation of hyper-automated and intelligent factories. AI is seen as a powerful tool for product and process innovation. It reduces cycle time, minimizes waste, improves maintenance practices, and enhances demand, inventory, and forecasting capabilities — all while contributing to achieving sustainability goals.</p><p>In an industry study by MIT Technology Review Insights, researchers interviewed 300 manufacturers that have begun working with AI. Most of these (64%) are still in the early phases, currently researching or experimenting with AI. Meanwhile, 35% of executives indicated that they have begun to put AI use cases into production.</p><p>To thrive in today’s dynamic market, manufacturers must leverage AI by developing and scaling use cases. To facilitate this, manufacturers must also address challenges with talent, skills, and data.</p><h4>We will explore these essential areas of opportunity, critical obstacles for information technology (IT) and operational technology (OT), and techniques to scale AI in manufacturing.</h4><figure><img alt="Areas of Opportunity" src="https://cdn-images-1.medium.com/max/1024/1*qYc9I6hGPyg9IxxfYsT4og.png" /></figure><h4><strong>AI’s Triple Threat: Boosting Design, Efficiency, and Sustainability</strong></h4><p>Companies that successfully implement AI-powered solutions have already seen significant reductions in downtime and improvements in labor productivity. A McKinsey study shows that manufacturers that have adopted AI practices have reported cost decreases of as much as 55% and revenue increases of 66%. 
Improvements were realized in these three areas of opportunity: product design and development, manufacturing productivity, and sustainability.</p><p><strong>Product Design and Development:</strong> Manufacturers can revolutionize product development by accelerating design processes, enhancing innovation, and improving overall efficiency.</p><ul><li><em>Design Twinning:</em> Digital twins of in-development products can be evaluated, tested, and rapidly iterated long before the first prototypes are constructed.</li><li><em>Customization:</em> AI allows for greater customization in production, enabling manufacturers to meet specific customer needs more precisely and efficiently.</li><li><em>Solution Development:</em> AI solutions can be integrated into the product, unlocking new capabilities that interpret and respond to complex signals that traditional software rules or logic cannot handle (e.g., object recognition).</li></ul><p><strong>Manufacturing Productivity:</strong> Manufacturers have a substantial opportunity to transform operations as AI becomes more accessible — from optimizing resources to eliminating waste and improving throughput. Here are several key areas where AI can drive significant improvements:</p><ul><li><em>Predictive Maintenance:</em> Using historical data from maintenance logs, you can predict how a machine will behave under a future workload and anticipate whether, when, why, and how it will need to be repaired — based on what fixed similar problems in the past. 
This can reduce downtime significantly.</li><li><em>Predictive Quality:</em> Predicting and reducing failures can yield significant cost savings.</li><li><em>Waste Reduction:</em> Using metrics to predict behavior across product specifications and processes can minimize scrap and maximize product quality.</li><li><em>Demand/Inventory Forecasting:</em> With a thorough understanding of plant operations and the data behind production, it’s possible to forecast the demand and movement of critical parts, resulting in significant inventory savings.</li></ul><p><strong>Sustainability:</strong> Manufacturers have a responsibility to balance economic growth with minimizing their impact on the environment and society. AI can help manufacturers improve supply chain transparency, enable them to design and produce more sustainable products, and minimize environmental impacts.</p><ul><li><em>Energy Utilization:</em> Using metrics and historical trends, we can predict energy demands and optimize factory operations to use more environmentally friendly energy and reduce waste.</li><li><em>CO2 Emissions:</em> Real-time monitoring of emissions throughout the manufacturing process can identify and address emission hotspots promptly.</li></ul><figure><img alt="Critical Obstacles" src="https://cdn-images-1.medium.com/max/1024/1*A1-a6IPL2DnpiOuWREsuDg.png" /></figure><h4>Bridging the Gaps: Overcoming Challenges to AI Adoption in Manufacturing</h4><p>Manufacturing is undergoing a revolution, with traditional boundaries between operational and technological teams dissolving. AI and its associated advancements are a catalyst, pushing collaboration to the forefront. Let’s unpack five critical challenges that IT and OT must tackle together to harness the full potential of this transformative era.</p><ol><li><strong>Data, Data, Data:</strong> Poor data quality, weak integration, and immature data governance are the most commonly cited reasons for project failure and use case abandonment. 
In the MIT research, 57% of executives cited data quality as a factor hampering use case development.</li><li><strong>Talent and Skills:</strong> A lack of talent and skills is the toughest challenge in scaling AI use cases. The closer use cases get to production, the larger the impact of this gap. Additionally, challenges with data quality and governance, as well as insufficient access to cloud resources, are magnified by workforce gaps.</li><li><strong>Stakeholder Alignment:</strong> Adopting AI in manufacturing involves significant changes in processes, technologies, and mindsets. It is crucial to ensure that these changes are successfully implemented in a sustainable way.</li><li><strong>Fragmentation:</strong> Most manufacturers find that some modernization of data architecture, infrastructure, and processes is needed to support AI, along with other technology and business priorities. Modernizing data systems to improve interoperability between engineering, design, the factory floor, OT, and IT is a critical priority.</li><li><strong>Use Case Definition:</strong> Identifying the right use case is essential to AI adoption. Manufacturing systems are holistic, and one metric has implications for multiple downstream systems. Additionally, it is easy to fall into the trap of never-ending analysis. Selecting the right use case and clearly identifying its dependencies is critical.</li></ol><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*8IKrPhtlP-czHxQx6SF1aw.png" /></figure><h4>Building a Sustainable AI Foundation: Strategies for Success in Manufacturing</h4><p>There can be a lot of skepticism about introducing AI solutions in manufacturing and whether the investment is justified. 
To overcome hesitancy and create trust with AI-powered solutions, you have to: 1) be intentional about the use case that is developed, 2) assemble the right team (including leadership, operations, IT/tech, digital transformation, and finance people), and 3) adopt a data-first agile delivery process that accounts for data accessibility and feasibility alongside model development.</p><p>Below are five techniques we have found successful in supporting these needs and ensuring we can unlock the power of AI in manufacturing:</p><ol><li><strong>Getting the right people in the room: </strong>AI is not just an IT or data scientist’s problem. AI is an integral part of solving complex business objectives and as such requires a multidisciplinary team to unlock it. It requires OT leaders, IT leaders, finance leaders, and SMEs to work together to design and plan solutions. This approach ensures there is both top-down and bottom-up support, that the appropriate budget is allocated relative to the target ROI, and that the right skills are available to execute.</li><li><strong>Starting small and achievable: </strong>Many things can become hurdles in implementing AI in manufacturing, from overcoming incompatibilities in systems to securely connecting shop floor systems and networks to the appropriate cloud systems needed to build and run AI models. Prioritizing a well-defined, value-adding AI use case within a single facility can cultivate internal advocates for AI. This initial success can then be leveraged to build a collaborative foundation for future AI initiatives.</li><li><strong>Gaining consensus on how to track progress: </strong>Aligning the organization and team with what “done” means and the mechanics that track progress is critical. Often, organizations struggle to align the IT, OT, and business stakeholders. 
That is why it is critical, given the dynamics of delivering AI solutions into always-on manufacturing environments, that an end-to-end lifecycle be represented in milestones and iterative success criteria. This allows the delivery, deployment, and operational teams to understand their target metrics, and stakeholders to understand the progress through the lifecycle of the project.</li><li><strong>Quickly identifying and understanding data gaps: </strong>It is important to realize that what looks good with simulated data may not be achievable in practice for many reasons. Commonly identified issues include system interoperability, data quality, and data governance. It is critical to understand gaps and mitigation strategies early while evaluating and prioritizing use cases to develop. Stakeholder support can be lost if you experience too many false starts.</li><li><strong>Clearly articulating foundational and use case investment: </strong>Whether this is the first use case being developed or the use cases require integration with older OT systems, there is likely a need for some incremental investment in foundational systems and architecture.</li></ol><p>It is important to clearly understand this investment independent of the use case(s) being developed, because the funding and ROI may match different time horizons. With this in mind, programs should manage these deliverables as separate work streams with different ROI horizons.</p><p>To realize the value of AI within manufacturing, organizations must understand use case feasibility, address gaps early, and adopt a delivery methodology that can align stakeholders and deliver.</p><blockquote>If you’re looking to leverage AI for your organization, our team can partner with you to unlock value and mitigate pitfalls. 
Learn about our AI and data insights <a href="https://objectcomputing.com/expertise/ai">expertise</a> and <a href="https://objectcomputing.com/services/quick-start-workshops">workshops</a>, and be sure to follow us on Medium for upcoming content.</blockquote><p>Sources:</p><p><a href="https://www.technologyreview.com/2024/04/09/1090880/taking-ai-to-the-next-level-in-manufacturing/">Taking AI to the Next Level in Manufacturing</a>, MIT Technology Review</p><p><a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai-in-2023-generative-ais-breakout-year">State of AI in 2023</a>, McKinsey</p><p><em>Andrew Montgomery, vice president of strategy, is an experienced technology executive and data strategist with 20+ years of experience with Fortune 500 companies. Andy’s focus is helping customers unlock their data to simplify business complexities and reshape business outcomes.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=060bd5149eff" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/unlocking-manufacturings-hidden-profits-how-ai-revolutionizes-design-efficiency-sustainability-060bd5149eff">Unlocking Manufacturing’s Hidden Profits: How AI Revolutionizes Design, Efficiency &amp; Sustainability</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Business Leaders: How Do You Know If Your Enterprise Has Accidental Architecture?]]></title>
            <link>https://medium.com/object-computing/business-leaders-how-do-you-know-if-your-enterprise-has-accidental-architecture-1866c904cb5d?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/1866c904cb5d</guid>
            <category><![CDATA[application-architecture]]></category>
            <category><![CDATA[software-architecture]]></category>
            <category><![CDATA[app-development]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[devops]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Thu, 06 Jun 2024 13:48:21 GMT</pubDate>
            <atom:updated>2024-06-06T14:14:31.018Z</atom:updated>
            <content:encoded><![CDATA[<p>By Mike Pleimann and Garey Hoffman</p><figure><img alt="Man on laptop in front of shadow monster" src="https://cdn-images-1.medium.com/max/680/1*43ccScC7wkBxR8ivp2dwNA.png" /></figure><p>A well-known website crashed a few weeks ago, and it made the news. We won’t disclose the company name, but it’s a very popular site, and the crash affected millions of people. Fortunately, due to the heroic efforts of the support teams, it was operational again after about 12 hours. Unfortunately, it lost a mountain of revenue.</p><p>We know the cause, and we know how deeply this problem is entrenched within the organization. We have also seen how organizations like this arrive at a place where they see symptoms like this system crash. It was likely a result of years of “architecture by accident.” In our observation, this architectural anti-pattern is common in organizations of all sizes, from small startups to large enterprises. Most experienced technologists understand that a system’s architecture is shaped by a set of goals, trade-offs, team structure, and the constraints under which it was built and is maintained. Without a deliberate architectural strategy, these everyday pressures result in Accidental Architecture.</p><h3><strong>Agile Methodology Over Time</strong></h3><p>Failure to plan system architecture — and neglecting the execution of those plans — plays a substantial role in the growth of accidental architecture. While Agile methodology encourages adaptability, collaboration, and continuous improvement — which are excellent means of responding to change — <strong>this philosophy isn’t meant to replace real architectural planning</strong>.</p><p>The natural tendency of systems to become more disordered and complex over time advances more quickly in software than in physical systems. The flexibility that gives software its advantage also allows more chaos and more unpredictable future states. 
Think of this as gaining one pound a year, and in 10 years you realize you’re no longer able to fit into your favorite jeans. This is the impact of accidental architecture: critical systems can become unstable and difficult to maintain in a way that goes unnoticed for years.</p><h3><strong>Recognizing the Symptoms</strong></h3><p>Major system crashes that impact millions of users aren’t the only symptom of accidental architecture. You may recognize signs of your own business and architecture in the symptoms listed here:</p><ol><li><strong>Perpetual First Aid<br></strong>The organization must constantly pay for the rising costs of non-stop bug fixes, customer churn from software defects, and cyberattacks.</li><li><strong>Hero Technical Team<br></strong>The technical team must routinely rise to frequent (and risky) technical challenges just to sustain normal business operations.</li><li><strong>Innovation Gridlock</strong><br>Implementing new business features takes longer than the market will bear and brings new, unexpected system failures upon release — resulting in more reputational damage.</li><li><strong>Application Monster</strong><br>The application user interface is unnecessarily complicated or just doesn’t make sense to users. New users are frequently overwhelmed, lost, or confused. Typical symptoms can include randomly logging out users, data inconsistencies in different parts of the system, difficult navigation in commonly used areas, time-consuming training, and user circumvention of the system to do their jobs a different way.</li><li><strong>Molehills into Mountains</strong><br>Technical changes or system updates that should be straightforward are monumentally complex. 
Also, the risk of failure is so high that making changes is actively avoided or simply not approved by leadership.</li><li><strong>Vendors Threaten to Not Support Key Dependencies (or Worse, They Follow Through!)</strong><br>Third-party tools, libraries, services, or service providers threaten to end support for dependencies. Or, they charge huge premiums to continue to support obsolete versions. Or, they simply stop offering support at any price.</li></ol><p>These technical symptoms are frequent root causes of the painful business symptoms above:</p><ol><li><strong>Big Ball of Mud</strong><br>A modification in one area of the system requires substantial change in one (or more) additional areas of the application. Relationships between system areas are not well understood.</li><li><strong>God Objects</strong><br>Software code (especially classes or services) that tries to do too much — or must do too much — just to make the application function. Or, there are attempts to make code “reusable” in multiple parts of the overall system.</li><li><strong>Circular Dependencies</strong><br>The system consists of chains of components depending on each other in a circular and sometimes non-trivial fashion (A ➞ B ➞ C ➞ A).</li><li><strong>Code Duplication<br></strong>A severe example of Code Duplication occurs when functions (or classes) are duplicated, modified slightly, and then re-inserted into the application. Generally, this occurs due to a poor understanding of how a modification to the original code would impact existing use cases (e.g., no regression testing is available).</li><li><strong>Scary Migrations</strong><br>Migrating to a new version of system dependencies takes years and brings inordinate risk. A common example is moving to a new database version that is several revisions ahead of the current installation.</li><li><strong>Cockroach Defects</strong><br>Bugs that were supposed to be fixed reappear. 
They are seemingly impossible to eliminate. This is typically caused by automated tests that are ignored or skipped, dramatically increasing the risk of system rollbacks after deployment.</li><li><strong>Software Bloat</strong><br>Each release of the system demands ever-increasing resource allocation (memory, processing, storage, etc.), driving up the cost of operations. Related symptoms include reduced overall system performance, increased system resource requirements, and longer deployment times.</li></ol><p>If these symptoms sound familiar to you, it’s likely your organization faces the challenges of Accidental Architecture. There is hope. The next article in the series will serve as a guide for business and technology leaders to restructure their architecture practices and refine their approach to achieving Intentional Architecture.</p><p>Be sure to follow and subscribe to be notified of the next installment. Please visit <a href="http://objectcomputing.com">Object Computing’s website</a> to learn more about our services.</p><p><em>Mike Pleimann has 15 years of experience as an Application Architect and almost 25 years in software engineering. Currently leading the Application Architecture team, he combines technical proficiency with managerial skills to guide teams toward success in large-scale software architecture and engineering programs. As a seasoned leader in the field, he brings a wealth of knowledge and strategic vision to his projects in the telecom, collections, defense, and gaming industries. He is dedicated to crafting robust, scalable solutions that are fit for purpose and has consistently earned accolades from clients and peers. Mike holds a BS in Computer Science from Missouri S&amp;T.</em></p><p><em>Garey Hoffman is a Partner and Vice President of Engineering at Object Computing. Hoffman plays a pivotal role in overseeing and driving the technical aspects of the organization. 
He is involved in managing a team of skilled engineers, architects, and technical professionals to deliver quality solutions and services to clients. He is also responsible for aligning technical strategies with business objectives, ensuring the successful execution of projects, and maintaining a strong focus on innovation and emerging technologies.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=1866c904cb5d" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/business-leaders-how-do-you-know-if-your-enterprise-has-accidental-architecture-1866c904cb5d">Business Leaders: How Do You Know If Your Enterprise Has Accidental Architecture?</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[4 Keys to Successfully Implementing Security by Design]]></title>
            <link>https://medium.com/object-computing/4-keys-to-successfully-implementing-security-by-design-ef3742e2838f?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/ef3742e2838f</guid>
            <category><![CDATA[devsecops]]></category>
            <category><![CDATA[cybersecurity]]></category>
            <category><![CDATA[security-by-design]]></category>
            <category><![CDATA[software-development]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Thu, 16 May 2024 20:56:50 GMT</pubDate>
            <atom:updated>2024-05-16T20:56:28.968Z</atom:updated>
            <content:encoded><![CDATA[<p>By Brandon Lynch</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*BVVhYKEq14FqVn2CTaYYww.png" /></figure><p>Imagine a solution that not only enhances the security of your products but also saves time and reduces costs. This is precisely what Secure by Design (SbD) principles offer. Unlike traditional reactive security measures, which often lead to last-minute setbacks and avoidable changes, SbD takes a proactive approach. By integrating security from the beginning, we not only save valuable time, money, and resources but also elevate the overall project experience for everyone involved.</p><p><strong>The Foundations of Implementing Secure by Design</strong></p><p>In <a href="https://medium.com/object-computing/from-rework-to-results-how-we-achieved-cost-efficiency-through-embedded-security-9261b0a66f13">How We Achieved Cost-Efficiency through Embedded Security</a>, we talked about how trust and collaboration are paramount to the successful implementation of an SbD program. By integrating a dedicated security engineer into our team, we can engage in effective collaboration to identify risks, threats, and appropriate mitigation steps. Let’s explore how we can seamlessly integrate agile security into the software development lifecycle.</p><p><strong>1. Security Planning and Assessments</strong></p><p>During the planning phase of a project, it’s crucial to develop our security architecture. This involves analyzing applicable privacy and security laws, frameworks, or policies, such as GDPR, HIPAA, and SOC 2.</p><p>In addition, threat modeling is vital to the completeness of our security architecture. It helps us identify realistic privacy and security concerns along with methods to mitigate their risks. Subsequently, we can translate identified security controls or mitigation steps from the architecture into user stories or update the acceptance criteria of existing stories. 
This approach allows us to effectively plan, track, and implement security measures into the product from the beginning.</p><p><strong>2. Continuous Security Support</strong></p><p>Throughout the lifecycle of a project, features and scope often change. Therefore, it’s essential to make security as agile as the rest of the development team. One way we accomplish this is by creating abuser stories. An abuser story is essentially the ‘evil’ version of a user story that describes what a threat actor can do. Each abuser story contains threat scenarios, which capture how a threat actor can accomplish the abuser story. We can brainstorm this information by asking the team to think about features from an attacker’s perspective.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*YPnYlc-msC0R0didY0mXug.jpeg" /><figcaption>Example of a story-driven threat model</figcaption></figure><p>This approach allows us to examine the privacy and security aspects of specific features in greater depth than would otherwise be possible. Moreover, it helps us stay current with any new or evolving features or requirements. This process is led by the security team and can be integrated into sprint planning or refinement sessions to involve the entire team while minimizing the time commitment.</p><p><strong>3. DevSecOps</strong></p><p>DevSecOps represents an enhanced approach to DevOps that integrates security practices into the pipeline, reducing the need for manual security checks and accelerating vulnerability remediation. This is my favorite aspect of SbD, as it allows us to incorporate a broad range of security functionality in an agile manner, enabling security teams to concentrate on more strategic initiatives. 
Additionally, developers receive immediate feedback on potential security issues, allowing them to write secure code from the start.</p><p>Below is a list of essential security functions that should be integrated into your DevOps pipeline:</p><ul><li>Static Application Security Testing (SAST)</li><li>Dynamic Application Security Testing (DAST)</li><li>Secret Detection</li><li>Software Composition Analysis (SCA)</li><li>Software Bill of Materials (SBOM) Generation</li><li>Continuous SBOM Analysis</li><li>Infrastructure Misconfiguration Detection</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*g50EZVai_XjEunUmIHP-AA.jpeg" /><figcaption>DevSecOps approach to software development integrates security practices in the pipeline</figcaption></figure><p>Your security tools should be configured to proactively block security vulnerabilities from being introduced into your application or environment. Furthermore, it’s crucial to collaborate with your team to fine-tune these tools to minimize noise and avoid unnecessary work. By adopting this approach, we establish a seamless, automated method for ensuring the security of our applications while keeping up with rapid development.</p><p><strong>4. Assessments and Validation</strong></p><p>When following SbD principles, overall product quality and security are significantly improved. However, it’s still crucial to verify that your application is free from security flaws. This can be achieved through various methods, such as:</p><ul><li>Application security testing</li><li>Penetration testing</li><li>Cloud security assessments</li><li>Vulnerability assessments</li><li>Code security reviews</li></ul><p>In many cases, conducting smaller tests throughout the project lifecycle rather than one large assessment at the end can be more effective. 
This approach enables us to identify and address issues early, minimizing the need for significant re-work and creating shorter feedback loops for our team.</p><p><strong>Conclusion</strong></p><p>Implementing SbD has not only enhanced the overall quality and security of our work but has also delivered tangible benefits to our clients. By integrating security considerations into every phase of our development process, we ensure that the products and solutions we deliver are inherently robust and resilient against potential threats. This proactive approach not only minimizes risks for our clients but also translates into lower costs and faster time-to-market. Ultimately, our clients benefit from increased confidence in the security and reliability of our offerings, leading to enhanced trust and satisfaction with our services.</p><p><em>Brandon Lynch is a Security Engineer with expertise in software development lifecycle security, encompassing infrastructure and software assessments, as well as comprehensive reporting. He’s skilled in defense-in-depth threat identification, remediation planning, and offering strategic recommendations, as well as designing and implementing DevSecOps practices, creating automated CI/CD pipelines through the integration of security scanning tools and the application of branch protection rules, utilizing both commercial and open-source solutions. 
He holds certifications such as a Certified Ethical Hacker (CEH) Master and Google Cloud Certified Professional Cloud Security Engineer.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=ef3742e2838f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/4-keys-to-successfully-implementing-security-by-design-ef3742e2838f">4 Keys to Successfully Implementing Security by Design</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[How to Chat with Your Data]]></title>
            <link>https://medium.com/object-computing/how-to-chat-with-your-data-446d636717a1?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/446d636717a1</guid>
            <category><![CDATA[gen-ai-tools]]></category>
            <category><![CDATA[gen-ai-for-business]]></category>
            <category><![CDATA[large-language-models]]></category>
            <category><![CDATA[langchain]]></category>
            <category><![CDATA[ai-for-business]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Tue, 30 Apr 2024 19:35:33 GMT</pubDate>
            <atom:updated>2024-04-30T19:33:13.977Z</atom:updated>
            <content:encoded><![CDATA[<p>By Madison Koehler</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*wnihz2FPchy9KGJZADohlQ.png" /></figure><p>Imagine having a conversation with your dataset as if you were talking to a colleague, gaining insights by simply asking questions. With generative AI, this is entirely possible. In the era of conversational chatbots, businesses are beginning to think about how these can be leveraged to harness the full potential of their data. Traditional methods for extracting insights from data often require special skills and are time-consuming. Using Gen AI models to “chat” directly with our data makes these data insights more accessible and actionable, revolutionizing the way data-driven decisions are made.</p><p>While many people tend to think of Gen AI models like ChatGPT as a sort of ‘search engine’, tech leaders recognize the potential to tap into their company’s own proprietary data. Large language models (LLMs) in isolation only know what they have been trained on. Proprietary data and documents published after the model’s training are unknown to out-of-the-box LLMs.</p><p>This article discusses the concept of “chatting with your data.” It is an approach to integrate proprietary, sensitive data with LLMs such that users securely engage in natural-language conversations with their datasets and unlock the data’s full potential with less manual effort. It specifically focuses on chatting with “messy” unstructured data such as PDF documents, raw text, CSV, or JSON files. Furthermore, it includes an overview of how LangChain, a popular open-source framework, easily jumpstarts your chats.</p><p><strong>Why Chat with Your Data?</strong></p><p>“Chatting with your data” presents valuable opportunities to automate workflows, democratize data access, and accelerate the monetization of data insights. 
Leveraging Gen AI in this way allows for Q&amp;A sessions with a dataset as well as the automation of mundane tasks like summarizing or translating documents. There are numerous benefits associated with the ability to understand and transform data simply by having a natural language conversation.</p><ol><li><strong>Automate and accelerate workflows:</strong> Chatting with data is an effective way to accelerate workflows and automate some of the more mundane processes that humans spend time doing manually. The focus is on acceleration with human oversight rather than replacing humans altogether. By taking away the boring and repetitive tasks, time is freed up for data professionals to focus on the more creative, innovative, and decision-making aspects of their roles. This is comparable to the role of spellcheck in word processors — it didn’t replace the need for humans to write the content, but it saved time and improved quality.</li><li><strong>Empower more people to interact with the data:</strong> Through conversational interfaces, this approach empowers a broader spectrum of users, including non-technical stakeholders, to interact with and derive insights from the data in real time. Making complex datasets more accessible and understandable will foster more collaboration across teams.</li><li><strong>Quick insights to unlock the value of data assets:</strong> Harnessing the time savings provided by this automation allows organizations to extract valuable data insights quickly, translating them into actionable strategies at an accelerated pace.</li></ol><p><strong>Unlock Conversational Data Interactions with RAG</strong></p><p>Chatting with data has many benefits, but how can a model perform tasks and analysis on data it hasn’t seen during training? Enter Retrieval Augmented Generation (RAG), a technique for enhancing a model’s abilities by referencing an external knowledge base beyond the information it was trained on. 
This technique enables users to provide an LLM with a proprietary dataset and make queries about the data or assign tasks to the LLM, receiving informed responses in real time.</p><p>To most effectively utilize RAG, it is good practice to split unstructured data like text documents into semantically meaningful chunks. This allows the system to retrieve the chunks most relevant to the user’s query and pass only that information to the LLM to generate a helpful response. Splitting a document involves steps of tokenization and vector storage that will be discussed later in the article.</p><p>Once a user makes a query, the RAG system retrieves the relevant information from the vector store, and the LLM uses its natural language abilities to generate a helpful response. This is the power of RAG, bridging the gap between data exploration and response generation to facilitate more nuanced and insightful conversations between a user and their data.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*MljEvFPkehm9V2Vb" /><figcaption><em>Steps for Retrieval Augmented Generation (RAG) with LangChain</em></figcaption></figure><p><strong>Preparing Your Data to Chat</strong></p><p>While generative AI models require less polished and meticulously cleaned data than traditional machine learning models, effective data preparation is still an essential step to maximize the potential of RAG. To get the most out of RAG, there are some standard steps to prepare unstructured data such as text documents.</p><ol><li><strong>Meaningful tokenization (splitting):</strong> LLMs have associated token limits. This is the number of tokens (typically sub-word pieces, though a token can also be a single character or an entire word) that can fit in the model’s context window. For this reason, it is often infeasible to pass an entire large document into the LLM along with our prompt (question or task). 
To avoid hitting token limits and receiving an error following a query, a common practice is to split documents into smaller sub-documents, reducing the number of tokens per “document.” It is good practice to split documents into chunks that are semantically meaningful, keeping groups of relevant tokens (words, sentences, paragraphs, etc) together. This helps the RAG process return the data that is most relevant to the query at hand.</li><li><strong>Embedding:</strong> Embedding text data for RAG is the process of transforming the natural language text that humans use into numeric representations that the LLM can understand. These are high-dimensional vector representations of text that capture semantic meaning and contextual relationships between tokens. This enables similarity computations and the retrieval of the most relevant information by comparing the data’s embeddings to those of the user’s query.</li><li><strong>Vector store:</strong> Storing embeddings in a vector store facilitates the retrieval of documents or document chunks that are the most semantically similar to the user’s query. This ensures that only the most relevant information is considered when the LLM generates its response.</li></ol><p>These are the essential steps to preparing data for RAG systems, leading to the most accurate responses and most accessible insights.</p><p><strong>Crafting Data Dialogues: The Art of Effective Prompt Design with RAG</strong></p><p>A prompt is a request sent to an LLM to receive a response back, and prompt design is the process of creating prompts that elicit the desired response. Effective prompt design is essential to ensuring helpful, accurate, and well-structured responses from language models. It is important to make sure the model understands what is being requested of it and has clear guidelines to respond in a way that is most helpful to the user and meets their expectations. 
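<p>The prompt-design principles above can be made concrete with a small sketch. The template below is plain Python, not LangChain's PromptTemplate class; the instruction wording and the <code>context</code>/<code>question</code> placeholder names are illustrative assumptions.</p>

```python
# A minimal RAG-style prompt template (illustrative only; not LangChain's actual API).
RAG_PROMPT = (
    "Use ONLY the following context to answer the question. "
    "If the answer is not in the context, say you don't know.\n\n"
    "Context:\n{context}\n\n"
    "Question: {question}\n"
    "Answer in at most three sentences."
)

def build_prompt(context: str, question: str) -> str:
    # Fill the placeholders with retrieved context and the user's query.
    return RAG_PROMPT.format(context=context, question=question)

prompt = build_prompt(
    context="Gradient descent updates parameters in the direction that reduces loss.",
    question="How does gradient descent update parameters?",
)
print(prompt)
```

<p>Note the two placeholders: the retrieved context constrains the model to the designated data store, while the trailing instruction pins down the desired response format.</p>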
Beyond the short query that the user submits to the model (like in interactions with ChatGPT), PromptTemplates are often used to provide the model with the most information possible about the expectations for the generated response.</p><p>PromptTemplates integrate nicely with RAG because they can be used to instruct the model to only make use of the designated data store when generating answers. While prompts can contain questions, they can also be constructed to instruct the model to take on an assignment. Examples of such tasks include summarizing, editing, switching text to a different tense or tone, or generating new text. Prompt templates are excellent ways to designate between question-answer and task-based instructions for the model.</p><p>The following shows the creation of a PromptTemplate for a question/answer interaction with a dataset consisting of transcripts from lectures in a machine learning course. Note how the template provides placeholders for not only the query, but the useful context to inform the response.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*xIvEDCnlvRIcB-ge" /><figcaption><em>Demonstration of a PromptTemplate being used in a RetrievalQAChain to answer queries.</em></figcaption></figure><p>Prompt design is an essential part of creating a RAG system. The following are some tips for ensuring more effective prompt design:</p><ol><li><strong>Give clear and specific instructions to the model</strong>. Define the task and any constraints, or even the format the response should be given in (e.g. a bulleted list, a five-sentence paragraph, or a JSON file)</li><li><strong>Include few-shot examples</strong>. Give the model a few examples of similar prompts and expected responses, so it can learn how to respond in the desired format. 
“Few-shot” prompting is a common technique that provides the model with the expectations for how similar queries should be responded to.</li><li><strong>Break down complex prompts into individual steps</strong>. A prompt could contain a list of steps for the model to take in a particular order to best complete the request.</li><li><strong>Experiment with wording and tone</strong>. Altering prompts to guide the model toward the expected behavior can be helpful.</li></ol><p><strong>Putting it All Together with LangChain</strong></p><p>LangChain is an open-source framework for developing applications using LLMs. LangChain comes equipped with modules and end-to-end templates that make it easy to quickly implement each step of the “chat with your data” approach.</p><p>To make data preparation simple with unstructured data, LangChain provides a variety of document loaders, configurable document splitters, embedding strategies (including those assisted by an LLM), and supported vector stores.</p><p>For data retrieval during the RAG process, LangChain provides several Retrievers — a class of objects that can be used to configure a RAG strategy depending on the use case. For example, if hitting token limits is an issue, the ContextualCompressionRetriever can be used to retrieve only the most relevant portions of a document chunk to further reduce the number of context tokens being passed into the LLM.</p><p>Chains are a valuable feature of LangChain that encapsulate sequences of interconnected components that execute in a specific order. There are several use-case-specific options for chains to use, and custom chains can be created as well. For example, the RetrievalQAChain is designed to facilitate RAG for question/answer-based exchanges with an LLM. Chains can easily be integrated with a preferred retriever, LLM, and vector database. 
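<p>The retrieve-then-generate sequence that a chain encapsulates can be illustrated without any framework. The sketch below is a deliberately tiny, dependency-free approximation: a sentence-level splitter, a bag-of-words stand-in for embeddings, a plain list as the vector store, and cosine similarity for retrieval. It mirrors the idea behind a retrieval chain, not the LangChain API itself, and the example document and query are invented.</p>

```python
import math
from collections import Counter

def split_into_chunks(doc: str) -> list[str]:
    # Naive "semantic" splitting: one sentence per chunk.
    return [s.strip() + "." for s in doc.split(".") if s.strip()]

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words counts (real systems use learned dense vectors).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

document = (
    "Regularization penalizes large weights to reduce overfitting. "
    "Gradient descent iteratively updates parameters to minimize loss. "
    "Office hours are held on Tuesdays by the teaching assistants."
)
chunks = split_into_chunks(document)
store = [(chunk, embed(chunk)) for chunk in chunks]  # the "vector store"

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank stored chunks by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(store, key=lambda item: cosine(q, item[1]), reverse=True)
    return [chunk for chunk, _ in ranked[:k]]

question = "How does gradient descent minimize the loss?"
context = retrieve(question)[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
# `prompt` would then be passed to the LLM of your choice.
print(context)
```

<p>A production chain swaps each toy piece for a real component (a document splitter, an embedding model, a vector database, and an LLM) while keeping this same execution order.</p>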
Chains can also be joined together in sequences for more complex workflows, and Router Chains can be leveraged as a tool to decide which of multiple chains is best suited to handle a given query. Ultimately, chains provide a framework to implement a chat-with-your-data workflow with minimal lines of code.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*0PsWYoHPHLPKbeoe" /><figcaption><em>Steps executed in LangChain’s RetrievalQAChain Class</em></figcaption></figure><p>Ultimately, LangChain’s modularization of useful components that can be strung together into chains makes the framework an effective tool for implementing RAG systems to chat with a dataset.</p><p><strong>Going from Q/A to ChatBot</strong></p><p>Asking questions or making requests to the LLM in a one-at-a-time manner is sufficient for several use cases. When a more conversational structure would be beneficial, such as for a customer-service chatbot that needs to remember conversation history, LangChain provides tools for these use cases as well. This structure looks similar to the chains explored previously but will involve the addition of a “memory” of sorts for the LLM to reference context from previous chats in future responses. This allows for follow-up questions or requesting refinements or edits to a previously completed task.</p><p>The following example shows an application of this when having a conversation with lecture transcripts from a machine learning course. The ChatBot uses these transcripts to answer a question, then uses the transcripts plus the conversation history to answer a follow-up question. 
Note that in the second question, TAs are not mentioned, but the ChatBot uses the context of the previous question to understand that those are the people whose majors are being referred to.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/685/0*DD_AI5yZ8OYPaV6c" /><figcaption><em>Example of ChatBot using conversational history to answer a follow-up question</em></figcaption></figure><p>LangChain provides a variety of options for conversation memory. One example is “conversation buffer memory,” which simply keeps a list of chat messages in history and passes those along with the question to the chatbot each time. Using this, a new chain (i.e. ConversationalRetrievalChain) can be created from building upon the initial RetrievalQAChain with the addition of the memory component.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*OqOdGm2ldPRV98j_" /><figcaption><em>LangChain’s implementation of conversational memory</em></figcaption></figure><p><strong>Data Security Considerations</strong></p><p>RAG is a powerful technique, but it is important to ensure that applications that utilize RAG are used safely, without compromising data integrity. The following are key best practices to consider when implementing a RAG system to chat with data.</p><ul><li><strong>Encryption of data in transit and at rest:</strong> Utilizing industry-standard data encryption protocols is a good way to ensure sensitive information is protected both in transmission and while being stored.</li><li><strong>Access controls and authentication:</strong> Implementing different levels and scope of access to the RAG system is a good way to ensure that only authorized users can interact with the system and access sensitive data.</li><li><strong>Data anonymization:</strong> Employing anonymization is an effective way to protect sensitive information within the data. 
Anonymization includes actions like removing personally identifiable information (PII) from the data and using anonymized identifiers to prevent the identification of individuals. Masking data in this way also helps protect sensitive information from being stored by a third-party LLM API provider, like OpenAI.</li></ul><p><strong>Takeaways &amp; Next Steps for the Future</strong></p><p>Turning proprietary, unstructured data into a conversational agent empowers users to accelerate workflows and makes quick, actionable data insights more accessible.</p><p>Some considerations for future improvements include:</p><ul><li><strong>Fine-tuning</strong>: Continuously fine-tuning the parameters of the RAG model can help improve response quality and relevance. Fine-tuning can be applied to the data preparation strategy, the prompt design, or other LLM parameters.</li><li><strong>User interface</strong>: Integrating this process with a user interface can make data interactions more convenient, making RAG systems accessible to more people who want to chat with their datasets.</li><li><strong>Experiment with more retrieval techniques</strong>: Experimenting with additional retrieval techniques such as re-ranking could help the model construct more helpful responses that make the most effective use of the provided data.</li></ul><p>If you’re intrigued by the concept of ‘chatting with your data’, our team can help you turn it into reality. 
Learn about our <a href="https://objectcomputing.com/expertise/ai">AI and data insights expertise</a>, check out our recent <a href="https://objectcomputing.com/resources/webinars">webinars</a>, and be sure to follow us on Medium for upcoming content.</p><p>Additional Resources:</p><ul><li><a href="https://python.langchain.com/docs/get_started/introduction">LangChain</a></li><li><a href="https://js.langchain.com/docs/modules/data_connection/document_transformers/">Document splitters</a></li><li><a href="https://js.langchain.com/docs/modules/data_connection/retrievers/">Retrievers</a></li><li><a href="https://python.langchain.com/docs/use_cases/chatbots/">Chatbots</a></li><li><a href="https://learn.deeplearning.ai/courses/langchain-chat-with-your-data/lesson/1/introduction">Deeplearning.ai course</a></li></ul><p><em>Madison Koehler, Data Scientist at Object Computing, obtained her MS in Artificial Intelligence in 2022 and has spent the last two years kicking off her career as a data scientist. She utilizes a strong background in mathematics, statistics, and computer science to create data-driven insights in the machine learning and deep learning space, and has utilized cloud service providers such as Amazon Web Services (AWS) to build and optimize large-scale pipelines spanning the entire machine learning lifecycle.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=446d636717a1" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/how-to-chat-with-your-data-446d636717a1">How to Chat with Your Data</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[LiDAR + ML + Geospatial Data: A Powerful Engine for Smarter Railway Management]]></title>
            <link>https://medium.com/object-computing/lidar-ml-geospatial-data-a-powerful-engine-for-smarter-railway-management-04fafa4f685f?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/04fafa4f685f</guid>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Mon, 15 Apr 2024 17:11:53 GMT</pubDate>
            <atom:updated>2024-04-16T17:17:00.687Z</atom:updated>
            <content:encoded><![CDATA[<p>By Samuel Vanfossan and Allan Trapp</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*B8mEyrJwscgv66V9xjNuIg.png" /></figure><p>As an integral part of modern critical infrastructure and pivotal to the transportation of goods in many developed countries, rail carriers assume responsibility for thousands of miles of track that must be monitored and maintained. Performing these duties adequately is paramount to regulatory compliance and safety. However, the processes to conduct these operations are generally slow, manual, and resource-intensive. These inefficiencies can lead to backlogs and compromise safety.</p><p>The solution lies in building the right tech stack. By harnessing the power of cloud computing and machine learning (ML), we can extract rich insights from LiDAR (Light Detection and Ranging) data. Let’s dig deeper into the problems and how our solutions can provide railroads with vital and timely information for maintenance and intervention.</p><h4><strong>Challenges in Vegetation Management</strong></h4><p>One critical area requiring improvement is vegetation management at railway crossings. For safety reasons, the railway responsible for a crossing must ensure the area around it is free of vegetation that could obstruct visibility. When this vegetation management is done poorly, vehicle passenger and train operator sightlines can become obscured, leading to collisions and other accidents.</p><p>The manual and time-consuming process to complete this critical responsibility requires dispatching crews or contractors to take measurements and make assessments. Plus, the costs associated with conducting and validating manual assessments can be prohibitive, making it impossible to do with enough frequency to adequately manage vegetation encroachment. 
A means to expedite this process could serve to greatly enhance rail crossing safety, ensuring that visibility-obscured crossings are identified and remedied in a more timely manner.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/447/0*ERaU8Rkot_uCKd_7" /><figcaption>Figure 1 — Depiction of a typical train track with its surrounding vegetation. Plants pose safety risks such as impaired visibility and train derailment.</figcaption></figure><h4><strong>LiDAR Technology and Its Potential</strong></h4><p>Fortunately, advancements in technology offer promising solutions. LiDAR is a popular tool for gathering data on trackside objects, including vegetation. However, the sheer volume of LiDAR data can be overwhelming, often remaining unused because there’s no easy way to interpret and analyze it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*EKnEj6ogESsbU-_y" /><figcaption>Figure 2 — A rail-enabled truck carrying a LiDAR scanning payload. A common means by which major rail carriers gather LiDAR data describing the railway they maintain.</figcaption></figure><h4><strong>Unlocking the Value of LiDAR Data</strong></h4><p>While LiDAR data holds undeniable value for railroads, it’s a tough nut to crack. It contains a massive amount of detail about trackside objects, making analysis challenging. However, if we can properly analyze this data, it can significantly speed up many tasks for rail carriers. 
This translates to both improved track safety and reduced resource needs for maintenance.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*lhmV-pCKizdDyAaq" /><figcaption>Figure 3 — Basic architecture leveraging LiDAR Data, Cloud Computation, and Machine Learning to yield powerful insights and visualizations from rich LiDAR data.</figcaption></figure><h4><strong>A Practical Example: Streamlining Vegetation Management</strong></h4><p>A specific application of this tech stack is a tool we designed to manage railroad-crossing vegetation. This tool leverages:</p><ul><li>LiDAR data collected from vehicles already using the railway.</li><li>Cloud computing for secure data transfer and storage facilities, along with the processing power and scalability required to perform complex calculations and assessments systematically. The computational power and integrability of cloud services also enable the rendering and display of meaningful visualizations and reports based on the data analysis.</li><li>Machine learning algorithms for automated insight generation, eliminating the need for manual inspection. This enables rapid assessment of large datasets, pinpointing critical issues and recommending targeted actions.</li><li>Asterisms, a vendor-neutral platform for interactive data visualization.</li></ul><p>The tool analyzes LiDAR data in conjunction with open-source geospatial data to:</p><ul><li>Detect and classify vegetation within designated compliance zones.</li><li>Identify potential compliance violations related to vegetation growth.</li><li>Quantify the extent and type of vegetation occluding the view.</li></ul><h4><strong>LiDAR in Production: Road-Rail Intersection Sightline Compliance</strong></h4><p>By combining the gathered LiDAR and open-source geospatial data with powerful ML techniques, the tool detects and classifies vegetation within specified compliance zones around railway crossings. 
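<p>To make the quantification step concrete, the toy sketch below computes an occlusion measure from an already-classified point cloud. The point format, class labels, height bands, and the 1-meter sightline threshold are simplifying assumptions for illustration, not the production tool's implementation.</p>

```python
# Toy occlusion calculation over an ML-classified point cloud for one
# compliance zone. Each point: (x, y, height_m, label); all values invented.
SIGHTLINE_MIN_HEIGHT_M = 1.0  # assumed: vegetation below this does not block sightlines

def occlusion_report(points):
    # Fraction of zone points that are sightline-blocking vegetation,
    # plus a breakdown of blocking vegetation by height band.
    blocking = [p for p in points
                if p[3] == "vegetation" and p[2] >= SIGHTLINE_MIN_HEIGHT_M]
    bands = {"1-2 m": 0, "over 2 m": 0}
    for _, _, height, _ in blocking:
        bands["1-2 m" if height < 2.0 else "over 2 m"] += 1
    return {
        "occlusion_proportion": len(blocking) / len(points),
        "blocking_by_height": bands,
    }

zone_points = [
    (1.0, 2.0, 0.3, "vegetation"),  # low brush: not sightline-blocking
    (1.5, 2.2, 1.8, "vegetation"),  # shrub: blocks sightlines
    (2.0, 2.5, 3.4, "vegetation"),  # tree: blocks sightlines
    (2.2, 2.6, 0.1, "roadway"),
    (3.0, 1.0, 2.0, "structure"),
]
report = occlusion_report(zone_points)
print(report)  # occlusion_proportion: 0.4 (2 of 5 points)
```

<p>Aggregating a report like this for each crossing zone yields the kind of sortable occlusion-status summary that can drive remediation prioritization.</p>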
The analysis culminates in identifying compliance violations and details the magnitude and variety of the vegetation occlusion. These insights, both qualitative and quantitative, provide a meaningful evaluation of the crossing’s sightline compliance and can be used directly to identify crossings needing vegetation management.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*VkCT74tq_IbGsA1f" /><figcaption>Figure 4 — Tool screenshot with paginated table describing calculated occlusion statuses for five crossings. The filterable and sortable table is overlaid upon a geospatial map depicting the crossings analyzed by the tool. Crossing icons are color-coded to align with calculated occlusion status.</figcaption></figure><p>This tool goes beyond simply identifying vegetation by providing:</p><ul><li><strong>Detailed breakdowns:</strong> The tool calculates an “occlusion proportion” for each zone, revealing the overall blockage percentage and breakdown by vegetation height.</li><li><strong>Interactive 3D visualizations:</strong> Users can virtually explore 3D maps of crossings, complete with surrounding vegetation and objects.</li><li><strong>Prioritization for resource allocation: </strong>Capable of analyzing thousands of crossings in a matter of minutes, the tool allows users to quickly identify and prioritize the most critical compliance violations and determine the necessary remediation resources.</li><li><strong>Remote inspection: </strong>Through the supplied 3D visualizations, users can remotely search and inspect crossings of interest to verify reported statuses.</li></ul><blockquote><strong>All of this occurs in the browser, providing an automated and centralized toolkit instead of a formerly manual and fragmented practice.</strong></blockquote><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*vxyh21psYELvTBlt" /><figcaption>Figure 5–2-Dimensional projection of vegetation detected within the compliance area about a 
road-rail crossing. Vegetation is color-coded with red representing the tallest vegetation.</figcaption></figure><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*jjoSPDtsKawjkAeS" /><figcaption>Figure 6 — A 3-Dimensional view showing an ML-classified point cloud overlaid on a geospatial image of a road-rail crossing. Here, vegetation is green, non-vegetation objects are blue, and roadway points are rendered red.</figcaption></figure><h4><strong>The Utility of Using Your LiDAR Data</strong></h4><p>The successful utilization of LiDAR data means reduced manual work for rail carriers while pursuing maintenance standards and compliance. Not only is the manual workload (and associated resource cost) of data collection mitigated, but the effort required to complete assessments of this data is reduced. Nesting LiDAR data within the tech stack detailed above allows targeted insights to be generated automatically and much more expeditiously.</p><p>Accordingly, the <em>time to insights</em> is greatly improved; the delta between data collection and the realization of actions to be taken becomes much smaller. This throughput enhancement then allows operators to dispatch remediation resources in a more timely manner. The overall reduction in procedural effort and time lets assessments be conducted more frequently, keeping a closer eye on railway conditions.</p><p>The real power of this technology lies in its speed. As multiple assessments can be completed quickly, action item prioritization is also enabled. From a large number of identified tasks, the most pressing or impactful can be selected to receive correction efforts first. This not only helps railways comply with regulations, but more importantly, it keeps everyone safe.</p><h4><strong>Looking Ahead</strong></h4><p>This technology represents a significant step forward. 
We are already developing ways to use the same basic architecture with LiDAR data for many other applications, both on and off the rails. Early tests show it can be used to assess ballast quality, monitor the health of the tracks and ties, identify objects encroaching on the tracks, and even evaluate tunnel quality.</p><p>The integration of LiDAR, cloud computing, and machine learning holds immense potential for a wide range of applications, not just in railways but across various industries.</p><blockquote>To see a demo of this tool and learn more about our LiDAR expertise, visit our <a href="https://objectcomputing.com/expertise/lidar">website</a>.</blockquote><p><strong><em>Samuel Vanfossan, PhD, </em></strong><em>is a Data Scientist at Object Computing, responsible for the design and implementation of machine learning and operations research solutions. His interests include the application of artificial intelligence to geospatial data and the intersection of machine learning and optimization.</em></p><p><strong><em>Allan Trapp, PhD, </em></strong><em>is the Managing Director of Data Science at Object Computing. He leads a team of data scientists and engineers who craft ML business solutions across diverse industries including agriculture, transportation, finance, and life sciences. 
Data-driven actions are his passion and motivate his research on agricultural management zones and the carbon-offset marketplace.</em></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=04fafa4f685f" width="1" height="1" alt=""><hr><p><a href="https://medium.com/object-computing/lidar-ml-geospatial-data-a-powerful-engine-for-smarter-railway-management-04fafa4f685f">LiDAR + ML + Geospatial Data: A Powerful Engine for Smarter Railway Management</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[From Rework to Results: How We Achieved Cost-Efficiency through Embedded Security]]></title>
            <link>https://medium.com/object-computing/from-rework-to-results-how-we-achieved-cost-efficiency-through-embedded-security-9261b0a66f13?source=rss----849f5535ced0---4</link>
            <guid isPermaLink="false">https://medium.com/p/9261b0a66f13</guid>
            <category><![CDATA[secure-by-design]]></category>
            <category><![CDATA[cybersecurity]]></category>
            <category><![CDATA[software-security]]></category>
            <category><![CDATA[embedded-security]]></category>
            <category><![CDATA[cost-efficiency]]></category>
            <dc:creator><![CDATA[Object Computing, Inc.]]></dc:creator>
            <pubDate>Thu, 14 Mar 2024 19:16:26 GMT</pubDate>
            <atom:updated>2024-03-14T19:16:16.561Z</atom:updated>
            <content:encoded><![CDATA[<p>By Janelle Morris</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*O_AXu3MJnrlw8Myrg3nRIA.png" /></figure><h3>The Escalating Risks: Why Secure Software Development is Critical Now</h3><p>The financial impact of cybercrime is undeniable. A 2021 report by Cybersecurity Ventures estimates global losses reached $6 trillion. Additionally, Gartner predicts a significant rise in software supply chain attacks, potentially leading to increased data breaches and operational disruptions for businesses. Furthermore, the Identity Theft Resource Center reported a concerning 78% year-over-year increase in data breaches, highlighting the crucial need for robust cybersecurity measures to safeguard sensitive information and mitigate financial risks.</p><p>These statistics underscore the need for robust security embedded throughout the software development lifecycle. Traditionally, as I’ve experienced throughout my career — a pattern that persists in many organizations — security reviews were a reactive measure. Once the development team had a good bit of the system developed, they would pull in the security team to do a review. Security would give feedback and the engineers would have to do rework.</p><p>On rare occasions, security would be pulled in early to look at the design and make suggestions, but often when security was brought back to review near the end, much of the original design had changed during iterative development. This approach often increased timelines and costs due to rework and created a culture of divisiveness between security and product development.</p><p>At Object Computing, we knew there had to be a better way. 
We set out to proactively address growing security demands while fostering collaboration within our company and ensuring the success of our clients’ projects.</p><h3>Introducing Secure by Design: A Collaborative Approach</h3><p>We embraced the concept of Security by Design (SbD), which advocates that software be designed, developed, and delivered securely by default. Initially, there were concerns that integrating security from the very beginning of the development process would increase time and cost for clients, but we started small and internally, focusing on:</p><ul><li><strong>Threat Modeling: </strong>By analyzing the design and creating a threat model, we had focus areas from a security perspective as the application was built. This proactive approach helped us identify potential security vulnerabilities early on. It involves identifying assets, understanding attackers, creating attack scenarios, evaluating risks, and implementing countermeasures.</li><li><strong>Open-Source Security Scanning:</strong> We initially looked at commercial tools — and there are great ones in the market — but the costs for these tools can be prohibitive. For clients with a tighter budget, we found a robust, trustworthy open-source community with tools that do the same types of scans at only the cost of implementation.</li></ul><p>We then began incorporating:</p><ul><li><strong>Strategic Touchpoints:</strong> Security is now embedded in the project team, participating in project planning, kick-off, standup meetings as needed, and end-of-sprint reviews. This close communication prevents the need for rework and fosters collaboration while limiting the number of touchpoints to only what is needed to reduce overhead.</li><li><strong>Automatic Scans: </strong>By using open-source scanning tools, we have continuous dependency and vulnerability checking throughout the software development lifecycle with each code check-in. 
This ensures that no new issues are introduced in the code during the project.</li><li><strong>Risk Assessments:</strong> Both the security and development teams continually review our threat models to identify the highest areas of risk and give them extra scrutiny right from the start.</li></ul><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*KAcXC8p_rWzVQPotojdbyQ.png" /></figure><h3>The Results: Efficiency Gains and Improved Security</h3><p>Under the old siloed approach, the project team would walk the security team through the architecture, which could be a one- or two-day event involving multiple people. The security team members would then analyze the application in full, including scans and manual walkthroughs, and generate their reports. The application teams were then charged with implementing the findings toward the end of a project. This rework could easily take two to three weeks, along with considerable frustration.</p><p>Over the past three years of developing our SbD practice, we have adjusted our processes to meet the cost expectations of our clients while still delivering a product that is secure by design. We’ve seen significant improvements in:</p><ul><li><strong>Time and Cost Reduction:</strong> By strategically incorporating security team members throughout the process, we have eliminated the need for lengthy reviews and rework, reducing our security costs in both people and time. Automating scans and integrating security throughout the process saves our clients further time and money. <strong>This slashes security review times from weeks to 2–3 hours per week.</strong> There is no rework creating project delays, just a smooth, efficient process that keeps timelines on track and budgets in check.</li><li><strong>Early Risk Detection: </strong>Threat modeling and continuous scanning help identify and address security issues early, preventing costly late-stage fixes. 
We participate at the end of sprint planning, update our threat model, and add abuser stories aligned with relevant features. These abuser stories are created in partnership with developers, who enjoy thinking about potential ways to hack the systems.</li><li><strong>Stronger Collaboration: </strong>Strategic touchpoints between developers and security professionals throughout the development process foster a more positive and productive work environment. We collaborate instead of dictate and work as partners instead of adversaries.</li></ul><h3>Embracing the Future of Secure Development</h3><p>In April 2023, with an update in October, the Cybersecurity and Infrastructure Security Agency (CISA), National Security Agency (NSA), Federal Bureau of Investigation (FBI), and 13 international partners published recommendations on how software manufacturers should ensure the security of their products. <a href="https://www.cisa.gov/sites/default/files/2023-10/SecureByDesign_1025_508c.pdf">Shifting the Balance of Cybersecurity Risk: Principles and Approaches for Secure by Design Software</a> raises the visibility of these practices and shifts responsibility for security to software manufacturers. The United States <a href="https://www.whitehouse.gov/wp-content/uploads/2023/03/National-Cybersecurity-Strategy-2023.pdf">National Cybersecurity Strategy 2023</a>, in strategic objective 3.3, discusses holding software manufacturers accountable for insecure software development practices.</p><p>These types of initiatives will only grow in strength in the years to come. Companies that adopt SbD practices now will be well-positioned for success in the evolving security landscape, gaining a competitive advantage and building trust with their customers.</p><p><em>Janelle Morris, Senior Director of Information Security at Object Computing, is accomplished at creating quality technology platforms, integrating acquisitions, managing vendors, growing key talent, and exceeding customer expectations. 
She leads with expertise in solving customer problems, driving change, and delivering exceptional results.</em></p><hr><p><a href="https://medium.com/object-computing/from-rework-to-results-how-we-achieved-cost-efficiency-through-embedded-security-9261b0a66f13">From Rework to Results: How We Achieved Cost-Efficiency through Embedded Security</a> was originally published in <a href="https://medium.com/object-computing">Object Computing</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>